CockroachDB Multi-Region Under-the-Hood

CockroachDB has always provided a SQL relational database with high availability, horizontal scalability, and multi-region geographic distribution, while delivering fully-consistent data and ACID transactions. In version 21.1, CockroachDB introduced a new multi-region syntax to make these features easier to configure. This syntax is high-level, declarative, and goal-oriented. But it raises the question: “What’s going on under the hood with this new syntax?”

The previous syntax (which is still supported) is based on zone configuration. Zone configuration is powerful and allows fine-grained control — but it is also complex. This blog explores what CockroachDB is doing with the zone configurations and other settings when you use the new multi-region syntax.

Let’s start with this YouTube video that introduces and explains the new multi-region syntax:

Multi-Region Applications on a Multi-Region Database for Low Latency
https://www.youtube.com/watch?v=WIS7MGqCfhI

It includes a demonstration of the new syntax, in which Group Product Manager Andy Woods issues the following statements:

ALTER DATABASE movr PRIMARY REGION "us-east1";
ALTER DATABASE movr ADD REGION "us-west1";
ALTER DATABASE movr ADD REGION "europe-west1";
SHOW SURVIVAL GOAL FROM DATABASE movr;
ALTER DATABASE movr SURVIVE REGION FAILURE;
ALTER TABLE promo_codes SET LOCALITY GLOBAL;
ALTER TABLE rides SET LOCALITY REGIONAL BY ROW;

Let’s see what CockroachDB is doing “under the hood” with these statements. You can follow along on your own laptop if you wish.

Initial State

First, after installing CockroachDB on a laptop (in this case version 22.2.6, but you can use any version 22.1 or later), let’s start CockroachDB in demo mode, with 9 nodes and multiple regions:

cockroach demo movr --nodes 9

Next we issue a few setup commands. The first makes the prompt smaller; the next two show and then shorten the interval at which CockroachDB recomputes some internal replication statistics.

\set prompt1 >
SHOW CLUSTER SETTING kv.replication_reports.interval;
SET CLUSTER SETTING kv.replication_reports.interval = '20s';

As you can see, demo mode starts pre-populated with sample data from a fictional ride-sharing company (these tables are located in the movr database):

> SHOW TABLES;
  schema_name |         table_name         | type  | owner | estimated_row_count | locality
--------------+----------------------------+-------+-------+---------------------+-----------
  public      | promo_codes                | table | demo  |                1000 | NULL
  public      | rides                      | table | demo  |                 500 | NULL
  public      | user_promo_codes           | table | demo  |                   5 | NULL
  public      | users                      | table | demo  |                  50 | NULL
  public      | vehicle_location_histories | table | demo  |                1000 | NULL
  public      | vehicles                   | table | demo  |                  15 | NULL
(6 rows)

Demo mode also starts pre-configured with multiple regions. Let’s see what those regions are:

> SHOW REGIONS;
     region    |  zones  | database_names | primary_region_of | secondary_region_of
---------------+---------+----------------+-------------------+----------------------
  europe-west1 | {b,c,d} | {}             | {}                | {}
  us-east1     | {b,c,d} | {}             | {}                | {}
  us-west1     | {a,b,c} | {}             | {}                | {}
(3 rows)

You can see that there are three regions, and within each region there are three zones. No databases are associated with any of those regions.

In particular, we can confirm that the movr database is not associated with any regions (yet!):

> SHOW REGIONS FROM DATABASE movr;
SHOW REGIONS 0

Similarly, we can confirm that the movr database does not have a survival goal:

> SHOW SURVIVAL GOAL FROM DATABASE movr;
  database | survival_goal
-----------+----------------
  movr     | NULL
(1 row)

Let’s look at the definition for the rides table:

> SHOW CREATE TABLE rides;
  table_name |                                                        create_statement
-------------+---------------------------------------------------------------------------------------------------------------------------------
  rides      | CREATE TABLE public.rides (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     vehicle_city VARCHAR NULL,
             |     rider_id UUID NULL,
             |     vehicle_id UUID NULL,
             |     start_address VARCHAR NULL,
             |     end_address VARCHAR NULL,
             |     start_time TIMESTAMP NULL,
             |     end_time TIMESTAMP NULL,
             |     revenue DECIMAL(10,2) NULL,
             |     CONSTRAINT rides_pkey PRIMARY KEY (city ASC, id ASC),
             |     CONSTRAINT rides_city_rider_id_fkey FOREIGN KEY (city, rider_id) REFERENCES public.users(city, id),
             |     CONSTRAINT rides_vehicle_city_vehicle_id_fkey FOREIGN KEY (vehicle_city, vehicle_id) REFERENCES public.vehicles(city, id),
             |     INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
             |     INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
             |     CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
             | )
(1 row)

As you can see, this is a relatively simple table with a few points of interest:

  • It has a composite primary key with two columns: (city, id)
  • It has a couple of foreign key constraints
  • It has a couple of secondary indexes
  • It has a check constraint

Let’s look at some more detailed information on this table…

First, let’s see if the rides table has any partitions:

> SHOW PARTITIONS FROM TABLE rides;
SHOW PARTITIONS 0

No partitions (yet).

Second, let’s look at the zone configuration for the rides table, which controls how the table’s ranges are replicated and placed. (As you probably know, CockroachDB automatically shards data into ranges, which are automatically replicated and distributed for performance, high availability, and regulatory compliance. Cockroach University and the Architecture Overview in the docs are great places to explore the CockroachDB architecture in more detail.)

> SHOW ZONE CONFIGURATION FOR TABLE rides;
     target     |              raw_config_sql
----------------+-------------------------------------------
  RANGE default | ALTER RANGE default CONFIGURE ZONE USING
                |     range_min_bytes = 134217728,
                |     range_max_bytes = 536870912,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 3,
                |     constraints = '[]',
                |     lease_preferences = '[]'
(1 row)

This shows a few interesting things:

  • minimum and maximum range size
  • the garbage collection time-to-live
  • the number of replicas
  • no range placement constraints
  • no leaseholder placement preferences

The key here is that the replication factor is the default value of 3 and CockroachDB has no constraints or preferences on the placement of either replicas or the leaseholder.
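For comparison, this is roughly what controlling placement with the older zone-configuration syntax looks like. The following is a hypothetical example only (we will not run it in this walkthrough); it hand-pins the table's replicas and leaseholder to a region, much as the multi-region syntax will do for us automatically later:

ALTER TABLE rides CONFIGURE ZONE USING
    num_replicas = 3,
    constraints = '[+region=us-east1]',
    lease_preferences = '[[+region=us-east1]]';

The multi-region syntax generates configurations like this for us, which is exactly what we are about to observe.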

Next, let’s look at where those ranges are actually located in the cluster. To do this, we will use the SHOW RANGES FROM TABLE statement, but we will use it as a sub-query in order to make the output smaller:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE rides];
           start_key_30          | range_id |      range_size_mb       | lease_holder |  lease_holder_locality   | replicas |                             replica_localities
---------------------------------+----------+--------------------------+--------------+--------------------------+----------+-----------------------------------------------------------------------------
  NULL                           |      125 | 0.0087600000000000000000 |            5 | region=us-west1,az=b     | {3,5,9}  | {"region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=d"}
  /"amsterdam"/"\xc5\x1e\xb8Q\xe |      149 | 0.0096480000000000000000 |            5 | region=us-west1,az=b     | {3,5,9}  | {"region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=d"}
  /"boston"/"8Q\xeb\x85\x1e\xb8B |      148 | 0.0095300000000000000000 |            6 | region=us-west1,az=c     | {3,6,9}  | {"region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=d"}
  /"los angeles"/"\xa8\xf5\u008f |      146 |  0.010206000000000000000 |            4 | region=us-west1,az=a     | {3,4,9}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=d"}
  /"new york"/"\x1c(\xf5\u008f\\ |      114 | 0.0086290000000000000000 |            4 | region=us-west1,az=a     | {2,4,9}  | {"region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=d"}
  /"paris"/"\xe1G\xae\x14z\xe1H\ |      144 |  0.019204000000000000000 |            6 | region=us-west1,az=c     | {2,6,7}  | {"region=us-east1,az=c","region=us-west1,az=c","region=europe-west1,az=b"}
  /"san francisco"/"\x8c\xcc\xcc |      147 | 0.0095460000000000000000 |            4 | region=us-west1,az=a     | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
  /"seattle"/"p\xa3\xd7\n=pD\x00 |      145 |  0.010118000000000000000 |            7 | region=europe-west1,az=b | {3,6,7}  | {"region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=b"}
  /"washington dc"/"Tz\xe1G\xae\ |      107 |  0.090246000000000000000 |            5 | region=us-west1,az=b     | {2,5,7}  | {"region=us-east1,az=c","region=us-west1,az=b","region=europe-west1,az=b"}
(9 rows)

That’s not so easy to read! Let’s format the output differently using \x, which is a command to the client, not to the server:

> \x
> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE rides];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 125
range_size_mb         | 0.0087600000000000000000
lease_holder          | 5
lease_holder_locality | region=us-west1,az=b
replicas              | {3,5,9}
replica_localities    | {"region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=d"}
-[ RECORD 2 ]
start_key_30          | /"amsterdam"/"\xc5\x1e\xb8Q\xe
range_id              | 149
range_size_mb         | 0.0096480000000000000000
lease_holder          | 5
lease_holder_locality | region=us-west1,az=b
replicas              | {3,5,9}
replica_localities    | {"region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=d"}
-[ RECORD 3 ]
start_key_30          | /"boston"/"8Q\xeb\x85\x1e\xb8B
range_id              | 148
range_size_mb         | 0.0095300000000000000000
lease_holder          | 6
lease_holder_locality | region=us-west1,az=c
replicas              | {3,6,9}
replica_localities    | {"region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=d"}
-[ RECORD 4 ]
start_key_30          | /"los angeles"/"\xa8\xf5\u008f
range_id              | 146
range_size_mb         | 0.010206000000000000000
lease_holder          | 4
lease_holder_locality | region=us-west1,az=a
replicas              | {3,4,9}
replica_localities    | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=d"}
-[ RECORD 5 ]
start_key_30          | /"new york"/"\x1c(\xf5\u008f\\
range_id              | 114
range_size_mb         | 0.0086290000000000000000
lease_holder          | 4
lease_holder_locality | region=us-west1,az=a
replicas              | {2,4,9}
replica_localities    | {"region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=d"}
-[ RECORD 6 ]
start_key_30          | /"paris"/"\xe1G\xae\x14z\xe1H\
range_id              | 144
range_size_mb         | 0.019204000000000000000
lease_holder          | 6
lease_holder_locality | region=us-west1,az=c
replicas              | {2,6,7}
replica_localities    | {"region=us-east1,az=c","region=us-west1,az=c","region=europe-west1,az=b"}
-[ RECORD 7 ]
start_key_30          | /"san francisco"/"\x8c\xcc\xcc
range_id              | 147
range_size_mb         | 0.0095460000000000000000
lease_holder          | 4
lease_holder_locality | region=us-west1,az=a
replicas              | {3,4,7}
replica_localities    | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
-[ RECORD 8 ]
start_key_30          | /"seattle"/"p\xa3\xd7\n=pD\x00
range_id              | 145
range_size_mb         | 0.010118000000000000000
lease_holder          | 7
lease_holder_locality | region=europe-west1,az=b
replicas              | {3,6,7}
replica_localities    | {"region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=b"}
-[ RECORD 9 ]
start_key_30          | /"washington dc"/"Tz\xe1G\xae\
range_id              | 107
range_size_mb         | 0.090246000000000000000
lease_holder          | 5
lease_holder_locality | region=us-west1,az=b
replicas              | {2,5,7}
replica_localities    | {"region=us-east1,az=c","region=us-west1,az=b","region=europe-west1,az=b"}

This puts each column on a separate line, which is easier to read in this case. The \x command toggles between the two output formats; I won’t show it in the output below.

This output is very interesting! We can see that there are 9 ranges, and for each range we can see:

  • the portion of the table’s keyspace covered by that range
  • the ID for the range
  • the size of the range
  • which node the leaseholder is on
  • the locality of the node that the leaseholder is on (region and az)
  • the nodes where the replicas are located
  • the localities of the nodes where the replicas are located

Given the lack of leaseholder preference and replica constraints, as you can imagine, leaseholders and ranges are spread more-or-less evenly among the nodes.

Before we go further, let’s issue a couple of statements to verify that:

  • the current number of ranges does not violate the configured replication factor
  • the current range placement does not violate configured constraints

This is a way to verify that CockroachDB has “settled down” and has no unfinished work left to do as it replicates and distributes ranges. These queries come from the Replication Reports page of the docs.

Here is the first statement to check against the configured replication factor:

> SELECT * FROM system.replication_stats WHERE under_replicated_ranges > 0;
  zone_id | subzone_id | report_id | total_ranges | unavailable_ranges | under_replicated_ranges | over_replicated_ranges
----------+------------+-----------+--------------+--------------------+-------------------------+-------------------------
(0 rows)

There are no results from this query, which is good.

Here is the second statement to check against the configured constraints:

> SELECT * FROM system.replication_constraint_stats WHERE violating_ranges > 0;
  zone_id | subzone_id | type | config | report_id | violation_start | violating_ranges
----------+------------+------+--------+-----------+-----------------+-------------------
(0 rows)

Again, no results, which is good. (Given the tiny size of the cluster and demo database, this is not surprising. But, when you issue multi-region configuration statements on larger clusters with more data, the operations can take a while to complete. These statements can be part of monitoring progress.)
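
Incidentally, the Replication Reports page also documents a third report, which flags localities whose loss would make ranges unavailable. As an optional extra check (not part of the video's walkthrough), it can be queried in the same style as the two above:

SELECT * FROM system.replication_critical_localities WHERE at_risk_ranges > 0;

An empty result here means no locality is currently critical for the availability of any range.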

The information above serves as a baseline that we will compare against as we issue the multi-region configuration statements that follow.

Now it is time to start issuing the statements from the above-mentioned YouTube video and observe what happens. We will follow a common pattern: make a multi-region configuration change, then issue statements to see how that change affected the configuration at a lower level.

Starting the Multi-Region Configuration

The first step on the multi-region journey is to make the database be a “multi-region database” by assigning a primary region to the database:

> ALTER DATABASE movr PRIMARY REGION "us-east1";
ALTER DATABASE PRIMARY REGION

Let’s see what this did!

> SHOW REGIONS;
     region    |  zones  | database_names | primary_region_of | secondary_region_of
---------------+---------+----------------+-------------------+----------------------
  europe-west1 | {b,c,d} | {}             | {}                | {}
  us-east1     | {b,c,d} | {movr}         | {movr}            | {}
  us-west1     | {a,b,c} | {}             | {}                | {}
(3 rows)

The SHOW REGIONS statement shows the same three regions as before, but it shows that the us-east1 region has the movr database, and that it is the primary region of the movr database.

Here is another way to look at it, from the database perspective:

> SHOW REGIONS FROM DATABASE movr;
  database |  region  | primary | secondary |  zones
-----------+----------+---------+-----------+----------
  movr     | us-east1 |    t    |     f     | {b,c,d}
(1 row)

This shows the “regions-for-the-database”, in contrast to the first statement which shows the “databases-for-each-region”.

The movr database is now a multi-region database (even though it currently has just one region)! Multi-region databases have a survival goal. We can see the survival goal with the SHOW SURVIVAL GOAL statement:

> SHOW SURVIVAL GOAL FROM DATABASE movr;
  database | survival_goal
-----------+----------------
  movr     | zone
(1 row)

This shows that the default survival goal is zone, which means that CockroachDB will survive the loss of an availability zone. (Note this assumes that the cluster nodes are installed on a sufficient number of availability zones. In this case, the cluster nodes must be running on at least three availability zones.)
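
If you prefer to be explicit rather than rely on the default, the zone survival goal can also be stated directly:

ALTER DATABASE movr SURVIVE ZONE FAILURE;

This is a no-op here, since zone survival is already in effect, but it makes the intent visible in your DDL.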

What happened to the tables in the movr database as a result of it becoming a multi-region database? Let’s look at the rides table again:

> SHOW CREATE TABLE rides;
  table_name |                                                        create_statement
-------------+---------------------------------------------------------------------------------------------------------------------------------
  rides      | CREATE TABLE public.rides (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     vehicle_city VARCHAR NULL,
             |     rider_id UUID NULL,
             |     vehicle_id UUID NULL,
             |     start_address VARCHAR NULL,
             |     end_address VARCHAR NULL,
             |     start_time TIMESTAMP NULL,
             |     end_time TIMESTAMP NULL,
             |     revenue DECIMAL(10,2) NULL,
             |     CONSTRAINT rides_pkey PRIMARY KEY (city ASC, id ASC),
             |     CONSTRAINT rides_city_rider_id_fkey FOREIGN KEY (city, rider_id) REFERENCES public.users(city, id),
             |     CONSTRAINT rides_vehicle_city_vehicle_id_fkey FOREIGN KEY (vehicle_city, vehicle_id) REFERENCES public.vehicles(city, id),
             |     INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
             |     INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
             |     CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
             | ) LOCALITY REGIONAL BY TABLE IN PRIMARY REGION
(1 row)

The rides table now carries an additional clause showing that it is a regional table: LOCALITY REGIONAL BY TABLE IN PRIMARY REGION. This means that the whole table is now located in the primary region for the movr database.
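
The same kind of placement can also be requested explicitly. As a hypothetical example (not something we run here), if we wanted the table homed in a specific region rather than in whatever the primary region happens to be, we could write:

ALTER TABLE rides SET LOCALITY REGIONAL BY TABLE IN "us-east1";

The IN PRIMARY REGION form has the advantage that the table follows the primary region automatically if the primary region is ever changed.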

We will confirm that shortly, but first let’s see if the table has any partitions:

> SHOW PARTITIONS FROM TABLE rides;
SHOW PARTITIONS 0

No partitions yet.

What has changed for the table’s zone configuration?

> SHOW ZONE CONFIGURATION FOR TABLE rides;
     target     |                 raw_config_sql
----------------+-------------------------------------------------
  DATABASE movr | ALTER DATABASE movr CONFIGURE ZONE USING
                |     range_min_bytes = 134217728,
                |     range_max_bytes = 536870912,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 3,
                |     num_voters = 3,
                |     constraints = '{+region=us-east1: 1}',
                |     voter_constraints = '[+region=us-east1]',
                |     lease_preferences = '[[+region=us-east1]]'
(1 row)

Quite a bit has changed:

  • In addition to having 3 replicas configured, those replicas are all designated as voting replicas. (In this case it is largely a no-op: with num_replicas = 3 and num_voters = 3, all the replicas are voting replicas anyway.)
  • At least 1 replica is now required to be in us-east1
  • The voting replicas (all replicas for now) are also required to be in us-east1
  • The leaseholder is also required to be in us-east1

The consequence of all this is that the rides table now lives wholly in the us-east1 region, which, you will remember, is the primary region for the movr database.

Now let’s verify this by looking at where the ranges actually are. But, first, let’s verify that CockroachDB has finished relocating ranges based on the configuration change we just made.

Are any ranges under-replicated?

> SELECT * FROM system.replication_stats WHERE under_replicated_ranges > 0;
  zone_id | subzone_id | report_id | total_ranges | unavailable_ranges | under_replicated_ranges | over_replicated_ranges
----------+------------+-----------+--------------+--------------------+-------------------------+-------------------------
(0 rows)

No results, so no under-replicated ranges.

What about constraints? Are all replicas in locations consistent with the zone configuration?

> SELECT * FROM system.replication_constraint_stats WHERE violating_ranges > 0;
  zone_id | subzone_id | type | config | report_id | violation_start | violating_ranges
----------+------------+------+--------+-----------+-----------------+-------------------
(0 rows)

This shows that no constraints are being violated. If CockroachDB were not yet finished processing the ALTER DATABASE command, you would get output that looks similar to this:

> SELECT * FROM system.replication_constraint_stats WHERE violating_ranges > 0;
-[ RECORD 1 ]
zone_id          | 104
subzone_id       | 0
type             | voter_constraint
config           | +region=us-east1
report_id        | 1
violation_start  | 2023-03-30 21:41:25.068858+00
violating_ranges | 20

This shows that CockroachDB is still in the process of relocating ranges. Note that, by default, the system tables we are querying are refreshed every minute. And, in case you are wondering, CockroachDB can continue to serve client queries while this is happening. Just keep in mind that this kind of reconfiguration on a large database can consume significant cluster resources.

If you get the above result, just give CockroachDB a bit of time, and you should then get this result:

> SELECT * FROM system.replication_constraint_stats WHERE violating_ranges > 0;
  zone_id | subzone_id | type | config | report_id | violation_start | violating_ranges
----------+------------+------+--------+-----------+-----------------+-------------------
(0 rows)

Now that the data is correctly relocated, let’s see where the ranges actually are:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE rides];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 125
range_size_mb         | 0.0087600000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
-[ RECORD 2 ]
start_key_30          | /"amsterdam"/"\xc5\x1e\xb8Q\xe
range_id              | 149
range_size_mb         | 0.0096480000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
-[ RECORD 3 ]
start_key_30          | /"boston"/"8Q\xeb\x85\x1e\xb8B
range_id              | 148
range_size_mb         | 0.0095300000000000000000
lease_holder          | 3
lease_holder_locality | region=us-east1,az=d
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
-[ RECORD 4 ]
start_key_30          | /"los angeles"/"\xa8\xf5\u008f
range_id              | 146
range_size_mb         | 0.010206000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
-[ RECORD 5 ]
start_key_30          | /"new york"/"\x1c(\xf5\u008f\\
range_id              | 114
range_size_mb         | 0.0086290000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
-[ RECORD 6 ]
start_key_30          | /"paris"/"\xe1G\xae\x14z\xe1H\
range_id              | 144
range_size_mb         | 0.019204000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
-[ RECORD 7 ]
start_key_30          | /"san francisco"/"\x8c\xcc\xcc
range_id              | 147
range_size_mb         | 0.0095460000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
-[ RECORD 8 ]
start_key_30          | /"seattle"/"p\xa3\xd7\n=pD\x00
range_id              | 145
range_size_mb         | 0.010118000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
-[ RECORD 9 ]
start_key_30          | /"washington dc"/"Tz\xe1G\xae\
range_id              | 107
range_size_mb         | 0.090246000000000000000
lease_holder          | 3
lease_holder_locality | region=us-east1,az=d
replicas              | {1,2,3}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}

All the replicas are in the region us-east1, as expected. Note they are in different availability zones within that region, which is also expected.

Second Region

Now let’s add a second region:

> ALTER DATABASE movr ADD REGION "us-west1";
ALTER DATABASE ADD REGION

Let’s see what that did:

> SHOW REGIONS;
     region    |  zones  | database_names | primary_region_of | secondary_region_of
---------------+---------+----------------+-------------------+----------------------
  europe-west1 | {b,c,d} | {}             | {}                | {}
  us-east1     | {b,c,d} | {movr}         | {movr}            | {}
  us-west1     | {a,b,c} | {movr}         | {}                | {}
(3 rows)

And…

> SHOW REGIONS FROM DATABASE movr;
  database |  region  | primary | secondary |  zones
-----------+----------+---------+-----------+----------
  movr     | us-east1 |    t    |     f     | {b,c,d}
  movr     | us-west1 |    f    |     f     | {a,b,c}
(2 rows)

As you see, the movr database now has two regions. The primary region is still us-east1.

Has the survival goal changed?

> SHOW SURVIVAL GOAL FROM DATABASE movr;
  database | survival_goal
-----------+----------------
  movr     | zone
(1 row)

No, the survival goal is still zone.

What about the table definition?

> SHOW CREATE TABLE rides;
  table_name |                                                        create_statement
-------------+---------------------------------------------------------------------------------------------------------------------------------
  rides      | CREATE TABLE public.rides (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     vehicle_city VARCHAR NULL,
             |     rider_id UUID NULL,
             |     vehicle_id UUID NULL,
             |     start_address VARCHAR NULL,
             |     end_address VARCHAR NULL,
             |     start_time TIMESTAMP NULL,
             |     end_time TIMESTAMP NULL,
             |     revenue DECIMAL(10,2) NULL,
             |     CONSTRAINT rides_pkey PRIMARY KEY (city ASC, id ASC),
             |     CONSTRAINT rides_city_rider_id_fkey FOREIGN KEY (city, rider_id) REFERENCES public.users(city, id),
             |     CONSTRAINT rides_vehicle_city_vehicle_id_fkey FOREIGN KEY (vehicle_city, vehicle_id) REFERENCES public.vehicles(city, id),
             |     INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
             |     INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
             |     CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
             | ) LOCALITY REGIONAL BY TABLE IN PRIMARY REGION
(1 row)

No, the table definition has not changed.

What about partitions?

> SHOW PARTITIONS FROM TABLE rides;
SHOW PARTITIONS 0

Still no partitions.

What about the zone configuration for the table?

> SHOW ZONE CONFIGURATION FOR TABLE rides;
     target     |                         raw_config_sql
----------------+------------------------------------------------------------------
  DATABASE movr | ALTER DATABASE movr CONFIGURE ZONE USING
                |     range_min_bytes = 134217728,
                |     range_max_bytes = 536870912,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 4,
                |     num_voters = 3,
                |     constraints = '{+region=us-east1: 1, +region=us-west1: 1}',
                |     voter_constraints = '[+region=us-east1]',
                |     lease_preferences = '[[+region=us-east1]]'
(1 row)

So, we have some changes! The number of replicas has gone from 3 to 4. And one replica now has to be stored in the new region us-west1. Because the total number of replicas is 4, and 3 of them are voting replicas, you can infer that this is a non-voting replica. That means that this replica does not participate in CockroachDB’s distributed consensus voting protocol.

Note that, because the primary region for the movr database is still us-east1, CockroachDB has 3 voting replicas in us-east1, and that includes the leaseholder. This ensures that all writes and reads have fast in-region latency for clients connecting to nodes in us-east1.

What about the additional non-voting replica in us-west1? It can serve fast local reads to clients connecting to nodes in us-west1, provided those clients use “follower reads”: slightly time-traveled queries written with the AS OF SYSTEM TIME syntax in their SELECT statements.

The CockroachDB docs cover follower reads and the AS OF SYSTEM TIME syntax in more detail.
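
As a minimal illustration, a follower read looks like this; the follower_read_timestamp() function returns a timestamp far enough in the past that the nearest replica, voting or not, can serve the read locally:

SELECT * FROM rides AS OF SYSTEM TIME follower_read_timestamp() LIMIT 5;

The trade-off is that the results are slightly stale, which is acceptable for many read-mostly workloads.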

Let’s verify where the ranges are. After CockroachDB has settled down (which we can confirm using the queries shown above), here is where the ranges for the rides table are located:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE rides];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 125
range_size_mb         | 0.0087600000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,3,4}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a"}
-[ RECORD 2 ]
start_key_30          | /"amsterdam"/"\xc5\x1e\xb8Q\xe
range_id              | 149
range_size_mb         | 0.0096480000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,3,6}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=c"}
-[ RECORD 3 ]
start_key_30          | /"boston"/"8Q\xeb\x85\x1e\xb8B
range_id              | 148
range_size_mb         | 0.0095300000000000000000
lease_holder          | 3
lease_holder_locality | region=us-east1,az=d
replicas              | {1,2,3,4}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a"}
-[ RECORD 4 ]
start_key_30          | /"los angeles"/"\xa8\xf5\u008f
range_id              | 146
range_size_mb         | 0.010206000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,6}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=c"}
-[ RECORD 5 ]
start_key_30          | /"new york"/"\x1c(\xf5\u008f\\
range_id              | 114
range_size_mb         | 0.0086290000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,5}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b"}
-[ RECORD 6 ]
start_key_30          | /"paris"/"\xe1G\xae\x14z\xe1H\
range_id              | 144
range_size_mb         | 0.019204000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,5}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b"}
-[ RECORD 7 ]
start_key_30          | /"san francisco"/"\x8c\xcc\xcc
range_id              | 147
range_size_mb         | 0.0095460000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,5}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b"}
-[ RECORD 8 ]
start_key_30          | /"seattle"/"p\xa3\xd7\n=pD\x00
range_id              | 145
range_size_mb         | 0.010118000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,4}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a"}
-[ RECORD 9 ]
start_key_30          | /"washington dc"/"Tz\xe1G\xae\
range_id              | 107
range_size_mb         | 0.090246000000000000000
lease_holder          | 3
lease_holder_locality | region=us-east1,az=d
replicas              | {1,2,3,5}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b"}

This confirms that, for each range:

  • 3 replicas are in us-east1
  • 1 replica is in us-west1
  • the leaseholder is in us-east1

Third Region

Now it is time to add a third region. Based on the pattern we have seen for adding the second region, can you predict what will happen?

> ALTER DATABASE movr ADD REGION "europe-west1";
ALTER DATABASE ADD REGION

Looking again at regions:

> SHOW REGIONS;
     region    |  zones  | database_names | primary_region_of | secondary_region_of
---------------+---------+----------------+-------------------+----------------------
  europe-west1 | {b,c,d} | {movr}         | {}                | {}
  us-east1     | {b,c,d} | {movr}         | {movr}            | {}
  us-west1     | {a,b,c} | {movr}         | {}                | {}
(3 rows)

And from the perspective of the movr database:

> SHOW REGIONS FROM DATABASE movr;
  database |    region    | primary | secondary |  zones
-----------+--------------+---------+-----------+----------
  movr     | us-east1     |    t    |     f     | {b,c,d}
  movr     | europe-west1 |    f    |     f     | {b,c,d}
  movr     | us-west1     |    f    |     f     | {a,b,c}
(3 rows)

As you can see, the movr database is now in three regions. The primary region is still us-east1.

Survival goal?

> SHOW SURVIVAL GOAL FROM DATABASE movr;
  database | survival_goal
-----------+----------------
  movr     | zone
(1 row)

No change.

Table definition?

> SHOW CREATE TABLE rides;
  table_name |                                                        create_statement
-------------+---------------------------------------------------------------------------------------------------------------------------------
  rides      | CREATE TABLE public.rides (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     vehicle_city VARCHAR NULL,
             |     rider_id UUID NULL,
             |     vehicle_id UUID NULL,
             |     start_address VARCHAR NULL,
             |     end_address VARCHAR NULL,
             |     start_time TIMESTAMP NULL,
             |     end_time TIMESTAMP NULL,
             |     revenue DECIMAL(10,2) NULL,
             |     CONSTRAINT rides_pkey PRIMARY KEY (city ASC, id ASC),
             |     CONSTRAINT rides_city_rider_id_fkey FOREIGN KEY (city, rider_id) REFERENCES public.users(city, id),
             |     CONSTRAINT rides_vehicle_city_vehicle_id_fkey FOREIGN KEY (vehicle_city, vehicle_id) REFERENCES public.vehicles(city, id),
             |     INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
             |     INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
             |     CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
             | ) LOCALITY REGIONAL BY TABLE IN PRIMARY REGION
(1 row)

No change.

Partitions?

> SHOW PARTITIONS FROM TABLE rides;
SHOW PARTITIONS 0

No change.

What about zone configuration?

> SHOW ZONE CONFIGURATION FOR TABLE rides;
     target     |                                      raw_config_sql
----------------+-------------------------------------------------------------------------------------------
  DATABASE movr | ALTER DATABASE movr CONFIGURE ZONE USING
                |     range_min_bytes = 134217728,
                |     range_max_bytes = 536870912,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 5,
                |     num_voters = 3,
                |     constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',
                |     voter_constraints = '[+region=us-east1]',
                |     lease_preferences = '[[+region=us-east1]]'
(1 row)

As you might have guessed, the number of replicas has gone up again, this time from 4 replicas to 5 replicas, and that 5th replica is “pinned” to the new region europe-west1.
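In other words, the declarative ADD REGION statement is generating ordinary zone configuration under the hood. If you were doing this by hand with the older syntax, the roughly equivalent statement would look like the one below, assembled from the SHOW ZONE CONFIGURATION output above; it is shown for illustration only, since with the multi-region syntax you should not issue it yourself:

```sql
-- What ADD REGION effectively generates for a three-region database
-- with the zone survival goal and primary region us-east1:
ALTER DATABASE movr CONFIGURE ZONE USING
    num_replicas = 5,
    num_voters = 3,
    constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',
    voter_constraints = '[+region=us-east1]',
    lease_preferences = '[[+region=us-east1]]';
```

This is the core idea of the blog: the multi-region syntax is a high-level, declarative layer that computes and maintains these zone configurations for you.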

Let’s look at where the replicas are located:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE rides];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 125
range_size_mb         | 0.0087600000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,3,4,7}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
-[ RECORD 2 ]
start_key_30          | /"amsterdam"/"\xc5\x1e\xb8Q\xe
range_id              | 149
range_size_mb         | 0.0096480000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,3,6,8}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=c"}
-[ RECORD 3 ]
start_key_30          | /"boston"/"8Q\xeb\x85\x1e\xb8B
range_id              | 148
range_size_mb         | 0.0095300000000000000000
lease_holder          | 3
lease_holder_locality | region=us-east1,az=d
replicas              | {1,2,3,4,8}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
-[ RECORD 4 ]
start_key_30          | /"los angeles"/"\xa8\xf5\u008f
range_id              | 146
range_size_mb         | 0.010206000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,6,7}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=b"}
-[ RECORD 5 ]
start_key_30          | /"new york"/"\x1c(\xf5\u008f\\
range_id              | 114
range_size_mb         | 0.0086290000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,5,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=d"}
-[ RECORD 6 ]
start_key_30          | /"paris"/"\xe1G\xae\x14z\xe1H\
range_id              | 144
range_size_mb         | 0.019204000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,5,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=d"}
-[ RECORD 7 ]
start_key_30          | /"san francisco"/"\x8c\xcc\xcc
range_id              | 147
range_size_mb         | 0.0095460000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,5,8}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=c"}
-[ RECORD 8 ]
start_key_30          | /"seattle"/"p\xa3\xd7\n=pD\x00
range_id              | 145
range_size_mb         | 0.010118000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,3,4,8}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
-[ RECORD 9 ]
start_key_30          | /"washington dc"/"Tz\xe1G\xae\
range_id              | 107
range_size_mb         | 0.090246000000000000000
lease_holder          | 3
lease_holder_locality | region=us-east1,az=d
replicas              | {1,2,3,5,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=d"}

As expected:

  • 3 voting replicas are in us-east1
  • 1 non-voting replica is in us-west1
  • 1 non-voting replica is in europe-west1

Changing the Survival Goal

We could go on adding regions, but now let’s do something different. What happens if we change the survival goal from zone to region?

> ALTER DATABASE movr SURVIVE REGION FAILURE;
ALTER DATABASE SURVIVE

Now the database has a stronger survival goal: instead of surviving the loss of an availability zone, it can survive the loss of an entire region. That’s pretty amazing for a single SQL statement.

What did CockroachDB do with this configuration change? Let’s find out by issuing, once again, the same set of statements we have used before.

Regions?

> SHOW REGIONS;
     region    |  zones  | database_names | primary_region_of | secondary_region_of
---------------+---------+----------------+-------------------+----------------------
  europe-west1 | {b,c,d} | {movr}         | {}                | {}
  us-east1     | {b,c,d} | {movr}         | {movr}            | {}
  us-west1     | {a,b,c} | {movr}         | {}                | {}
(3 rows)



> SHOW REGIONS FROM DATABASE movr;
  database |    region    | primary | secondary |  zones
-----------+--------------+---------+-----------+----------
  movr     | us-east1     |    t    |     f     | {b,c,d}
  movr     | europe-west1 |    f    |     f     | {b,c,d}
  movr     | us-west1     |    f    |     f     | {a,b,c}
(3 rows)

No change.

Survival goal?


> SHOW SURVIVAL GOAL FROM DATABASE movr;
  database | survival_goal
-----------+----------------
  movr     | region
(1 row)

This shows the new region survival goal.

Table definition?

> SHOW CREATE TABLE rides;
  table_name |                                                        create_statement
-------------+---------------------------------------------------------------------------------------------------------------------------------
  rides      | CREATE TABLE public.rides (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     vehicle_city VARCHAR NULL,
             |     rider_id UUID NULL,
             |     vehicle_id UUID NULL,
             |     start_address VARCHAR NULL,
             |     end_address VARCHAR NULL,
             |     start_time TIMESTAMP NULL,
             |     end_time TIMESTAMP NULL,
             |     revenue DECIMAL(10,2) NULL,
             |     CONSTRAINT rides_pkey PRIMARY KEY (city ASC, id ASC),
             |     CONSTRAINT rides_city_rider_id_fkey FOREIGN KEY (city, rider_id) REFERENCES public.users(city, id),
             |     CONSTRAINT rides_vehicle_city_vehicle_id_fkey FOREIGN KEY (vehicle_city, vehicle_id) REFERENCES public.vehicles(city, id),
             |     INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
             |     INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
             |     CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
             | ) LOCALITY REGIONAL BY TABLE IN PRIMARY REGION
(1 row)

No change.

Partitions?

> SHOW PARTITIONS FROM TABLE rides;
SHOW PARTITIONS 0

No change.

Well, what did change? We will start to see changes here with the zone configuration:

> SHOW ZONE CONFIGURATION FOR TABLE rides;
     target     |                                      raw_config_sql
----------------+-------------------------------------------------------------------------------------------
  DATABASE movr | ALTER DATABASE movr CONFIGURE ZONE USING
                |     range_min_bytes = 134217728,
                |     range_max_bytes = 536870912,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 5,
                |     num_voters = 5,
                |     constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',
                |     voter_constraints = '{+region=us-east1: 2}',
                |     lease_preferences = '[[+region=us-east1]]'
(1 row)

This is interesting: two changes occurred with the new survival goal:

  • num_voters went from 3 to 5, so all 5 replicas are now voting replicas.
  • The number of voters constrained to us-east1 went down from 3 to 2.

This has the effect of placing at least one voting replica in each region, while ensuring that no region holds too many voting replicas (every region has fewer than half, i.e. at most 2 of the 5). Together, these guarantee that if any one region goes down, at least 3 voting replicas remain, which is still a quorum, so CockroachDB continues operating uninterrupted.
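You can sanity-check the quorum arithmetic directly in SQL (this is plain arithmetic, not a CockroachDB-specific feature; `//` is floor division):

```sql
-- 5 voters, at most 2 in any single region; quorum = floor(5/2) + 1 = 3.
-- Losing an entire region leaves 5 - 2 = 3 voters, which still meets quorum.
SELECT (5 - 2) >= (5 // 2) + 1 AS survives_region_failure;
```

This returns true. With only 3 voters (the zone survival goal), losing a region containing 2 of them would leave 1 voter, below the quorum of 2, which is why the goal change had to raise num_voters to 5.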

We can see this in how the ranges are now distributed:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE rides];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 125
range_size_mb         | 0.0087600000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,3,4,5,7}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=a","region=us-west1,az=b","region=europe-west1,az=b"}
-[ RECORD 2 ]
start_key_30          | /"amsterdam"/"\xc5\x1e\xb8Q\xe
range_id              | 149
range_size_mb         | 0.0096480000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,5,6,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-west1,az=b","region=us-west1,az=c","region=europe-west1,az=d"}
-[ RECORD 3 ]
start_key_30          | /"boston"/"8Q\xeb\x85\x1e\xb8B
range_id              | 148
range_size_mb         | 0.0095300000000000000000
lease_holder          | 3
lease_holder_locality | region=us-east1,az=d
replicas              | {2,3,4,6,8}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a","region=us-west1,az=c","region=europe-west1,az=c"}
-[ RECORD 4 ]
start_key_30          | /"los angeles"/"\xa8\xf5\u008f
range_id              | 146
range_size_mb         | 0.010206000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,4,6,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-west1,az=a","region=us-west1,az=c","region=europe-west1,az=d"}
-[ RECORD 5 ]
start_key_30          | /"new york"/"\x1c(\xf5\u008f\\
range_id              | 114
range_size_mb         | 0.0086290000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,4,8,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=c","region=europe-west1,az=d"}
-[ RECORD 6 ]
start_key_30          | /"paris"/"\xe1G\xae\x14z\xe1H\
range_id              | 144
range_size_mb         | 0.019204000000000000000
lease_holder          | 3
lease_holder_locality | region=us-east1,az=d
replicas              | {1,3,5,6,7}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=b","region=us-west1,az=c","region=europe-west1,az=b"}
-[ RECORD 7 ]
start_key_30          | /"san francisco"/"\x8c\xcc\xcc
range_id              | 147
range_size_mb         | 0.0095460000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,6,8,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-west1,az=c","region=europe-west1,az=c","region=europe-west1,az=d"}
-[ RECORD 8 ]
start_key_30          | /"seattle"/"p\xa3\xd7\n=pD\x00
range_id              | 145
range_size_mb         | 0.010118000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {1,2,4,6,8}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-west1,az=a","region=us-west1,az=c","region=europe-west1,az=c"}
-[ RECORD 9 ]
start_key_30          | /"washington dc"/"Tz\xe1G\xae\
range_id              | 107
range_size_mb         | 0.090246000000000000000
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,3,5,7,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=b","region=europe-west1,az=b","region=europe-west1,az=d"}

Global Tables

We have seen how (with the zone survival goal) CockroachDB creates enough additional non-voting replicas to ensure fast reads from any region — as long as those reads use “follower read” time travel queries. But what if the business logic of your application requires the latest current values? Is there a way to get fast reads from any region in this situation?

CockroachDB provides a new kind of table called a “global table” that provides fast up-to-date reads from any region. This is pretty amazing, but there is a downside: writes to global tables take longer than writes to normal tables.

This makes global tables ideal for data that is read far more often than it is written, such as lookup codes.

To show this, let’s change our focus from the rides table to another table, the promo_codes table. Currently the promo_codes table is a “regular” table. Let’s look at some of the details on this table, then change it into a global table. Then we can look at how CockroachDB changed those details.

First, here is the table definition:

> SHOW CREATE TABLE promo_codes;
  table_name  |                    create_statement
--------------+---------------------------------------------------------
  promo_codes | CREATE TABLE public.promo_codes (
              |     code VARCHAR NOT NULL,
              |     description VARCHAR NULL,
              |     creation_time TIMESTAMP NULL,
              |     expiration_time TIMESTAMP NULL,
              |     rules JSONB NULL,
              |     CONSTRAINT promo_codes_pkey PRIMARY KEY (code ASC)
              | ) LOCALITY REGIONAL BY TABLE IN PRIMARY REGION
(1 row)

Pretty simple. And you can see that it has no partitions:

> SHOW PARTITIONS FROM TABLE promo_codes;
SHOW PARTITIONS 0

And the zone configuration is in line with what we saw previously for the rides table:

> SHOW ZONE CONFIGURATION FOR TABLE promo_codes;
     target     |                                      raw_config_sql
----------------+-------------------------------------------------------------------------------------------
  DATABASE movr | ALTER DATABASE movr CONFIGURE ZONE USING
                |     range_min_bytes = 134217728,
                |     range_max_bytes = 536870912,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 5,
                |     num_voters = 5,
                |     constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',
                |     voter_constraints = '{+region=us-east1: 2}',
                |     lease_preferences = '[[+region=us-east1]]'
(1 row)

It is a small table so it has only one range:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE promo_codes];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 126
range_size_mb         | 0.23397300000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {2,3,5,6,7}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b","region=us-west1,az=c","region=europe-west1,az=b"}

With this baseline, what happens when we turn it into a global table?

> ALTER TABLE promo_codes SET LOCALITY GLOBAL;
ALTER TABLE SET LOCALITY

Let’s look at the table definition:

> SHOW CREATE TABLE promo_codes;
  table_name  |                    create_statement
--------------+---------------------------------------------------------
  promo_codes | CREATE TABLE public.promo_codes (
              |     code VARCHAR NOT NULL,
              |     description VARCHAR NULL,
              |     creation_time TIMESTAMP NULL,
              |     expiration_time TIMESTAMP NULL,
              |     rules JSONB NULL,
              |     CONSTRAINT promo_codes_pkey PRIMARY KEY (code ASC)
              | ) LOCALITY GLOBAL
(1 row)

OK, the table definition now shows that it is a global table.

Let’s look at the zone configuration:

> SHOW ZONE CONFIGURATION FOR TABLE promo_codes;
       target       |                                      raw_config_sql
--------------------+-------------------------------------------------------------------------------------------
  TABLE promo_codes | ALTER TABLE promo_codes CONFIGURE ZONE USING
                    |     range_min_bytes = 134217728,
                    |     range_max_bytes = 536870912,
                    |     gc.ttlseconds = 90000,
                    |     global_reads = true,
                    |     num_replicas = 5,
                    |     num_voters = 5,
                    |     constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',
                    |     voter_constraints = '{+region=us-east1: 2}',
                    |     lease_preferences = '[[+region=us-east1]]'
(1 row)

Two things changed in the zone configuration: it is now set at the table level (TABLE promo_codes) instead of being inherited from the database, and it includes the new setting global_reads = true, which marks the table as global.

And you can see that the actual range placement has not changed:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE promo_codes];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 126
range_size_mb         | 0.23397300000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {2,3,5,6,7}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b","region=us-west1,az=c","region=europe-west1,az=b"}

So, it appears that not much has happened when we turned the promo_codes table into a global table. But that is because we are now at the region survival goal, and CockroachDB has already distributed the ranges for the promo_codes table to all regions. If we had done this at the zone survival goal setting, CockroachDB would have done more work — for just this table — to distribute those ranges.

Homework assignment: Verify that this is true, by using a database that has the zone survival goal. Pick a table in that database, and look at the zone configuration for that table before and after you make it a global table.
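A minimal way to try the homework, using hypothetical database and table names:

```sql
-- Create a three-region database; SURVIVE ZONE FAILURE is the default.
CREATE DATABASE lab PRIMARY REGION "us-east1" REGIONS "us-west1", "europe-west1";
USE lab;
CREATE TABLE codes (id INT PRIMARY KEY, label STRING);

SHOW ZONE CONFIGURATION FOR TABLE codes;  -- before: inherited from the database
ALTER TABLE codes SET LOCALITY GLOBAL;
SHOW ZONE CONFIGURATION FOR TABLE codes;  -- after: table-level, global_reads = true
```

Compare the before and after num_voters, voter_constraints, and constraints values to see the extra work CockroachDB does for a global table under the zone survival goal.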

But global tables are about more than the distribution of ranges. CockroachDB changes the way it performs writes to global tables so that subsequent reads get:

  • low latency
  • the latest, up-to-date values
  • from any region
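For example, once promo_codes is global, an ordinary read, with no AS OF SYSTEM TIME clause, can be served locally in any region with current data (the code value here is hypothetical):

```sql
-- A plain, fully consistent read; on a global table this can be served
-- from a nearby replica in any region, no follower-read syntax needed.
SELECT code, description, rules
FROM promo_codes
WHERE code = 'some_code';
```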

The CockroachDB documentation has more detail on global tables and how their writes are scheduled.

Regional By Row

While CockroachDB gives you the ability to place an entire table in the region of your choice, you can do better than that. CockroachDB gives you the ability to place individual rows in different regions, even while you work with a single table!

This feature is called regional by row tables. Let’s turn the rides table into a regional by row table with a simple SQL statement:

> ALTER TABLE rides SET LOCALITY REGIONAL BY ROW;
NOTICE: LOCALITY changes will be finalized asynchronously; further schema changes on this table may be restricted until the job completes
ALTER TABLE SET LOCALITY

Now it is time to look at what changed. First, the table definition:

> SHOW CREATE TABLE rides;
  table_name |                                                                           create_statement
-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  rides      | CREATE TABLE public.rides (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     vehicle_city VARCHAR NULL,
             |     rider_id UUID NULL,
             |     vehicle_id UUID NULL,
             |     start_address VARCHAR NULL,
             |     end_address VARCHAR NULL,
             |     start_time TIMESTAMP NULL,
             |     end_time TIMESTAMP NULL,
             |     revenue DECIMAL(10,2) NULL,
             |     crdb_region movr.public.crdb_internal_region NOT VISIBLE NOT NULL DEFAULT default_to_database_primary_region(gateway_region())::movr.public.crdb_internal_region,
             |     CONSTRAINT rides_pkey PRIMARY KEY (city ASC, id ASC),
             |     CONSTRAINT rides_city_rider_id_fkey FOREIGN KEY (city, rider_id) REFERENCES public.users(city, id),
             |     CONSTRAINT rides_vehicle_city_vehicle_id_fkey FOREIGN KEY (vehicle_city, vehicle_id) REFERENCES public.vehicles(city, id),
             |     INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
             |     INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
             |     CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
             | ) LOCALITY REGIONAL BY ROW

Note that the rides table now has a new column, crdb_region, and that column is hidden. This column will not be displayed in a SELECT * statement, but is a usable column otherwise. This column tells CockroachDB which region each row should be placed in.

What is the value of this new column?

> SELECT DISTINCT crdb_region FROM rides;
  crdb_region
---------------
  us-east1

All the rows are in us-east1 (at least for now).
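Because crdb_region is a regular (if hidden) column, you can re-home rows simply by updating it. A sketch, assuming you want the European cities moved to europe-west1:

```sql
-- Re-home the European rows; CockroachDB relocates the affected
-- ranges asynchronously after the update commits.
UPDATE rides
SET crdb_region = 'europe-west1'
WHERE city IN ('amsterdam', 'paris');
```

In a real application you would typically let the DEFAULT expression assign the region of the gateway node on INSERT, and use updates like this only for backfills.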

Further, the rides table is now split into multiple partitions:

> SHOW PARTITIONS FROM TABLE rides;
-[ RECORD 1 ]
database_name    | movr
table_name       | rides
partition_name   | europe-west1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_auto_index_fk_city_ref_users
partition_value  | ('europe-west1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=europe-west1: 2}',+
                 | lease_preferences = '[[+region=europe-west1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=europe-west1: 2}',+
                 | lease_preferences = '[[+region=europe-west1]]'
-[ RECORD 2 ]
database_name    | movr
table_name       | rides
partition_name   | europe-west1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_auto_index_fk_vehicle_city_ref_vehicles
partition_value  | ('europe-west1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=europe-west1: 2}',+
                 | lease_preferences = '[[+region=europe-west1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=europe-west1: 2}',+
                 | lease_preferences = '[[+region=europe-west1]]'
-[ RECORD 3 ]
database_name    | movr
table_name       | rides
partition_name   | europe-west1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_pkey
partition_value  | ('europe-west1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=europe-west1: 2}',+
                 | lease_preferences = '[[+region=europe-west1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=europe-west1: 2}',+
                 | lease_preferences = '[[+region=europe-west1]]'
-[ RECORD 4 ]
database_name    | movr
table_name       | rides
partition_name   | us-east1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_auto_index_fk_city_ref_users
partition_value  | ('us-east1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=us-east1: 2}',+
                 | lease_preferences = '[[+region=us-east1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=us-east1: 2}',+
                 | lease_preferences = '[[+region=us-east1]]'
-[ RECORD 5 ]
database_name    | movr
table_name       | rides
partition_name   | us-east1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_auto_index_fk_vehicle_city_ref_vehicles
partition_value  | ('us-east1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=us-east1: 2}',+
                 | lease_preferences = '[[+region=us-east1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=us-east1: 2}',+
                 | lease_preferences = '[[+region=us-east1]]'
-[ RECORD 6 ]
database_name    | movr
table_name       | rides
partition_name   | us-east1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_pkey
partition_value  | ('us-east1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=us-east1: 2}',+
                 | lease_preferences = '[[+region=us-east1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=us-east1: 2}',+
                 | lease_preferences = '[[+region=us-east1]]'
-[ RECORD 7 ]
database_name    | movr
table_name       | rides
partition_name   | us-west1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_auto_index_fk_city_ref_users
partition_value  | ('us-west1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=us-west1: 2}',+
                 | lease_preferences = '[[+region=us-west1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=us-west1: 2}',+
                 | lease_preferences = '[[+region=us-west1]]'
-[ RECORD 8 ]
database_name    | movr
table_name       | rides
partition_name   | us-west1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_auto_index_fk_vehicle_city_ref_vehicles
partition_value  | ('us-west1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=us-west1: 2}',+
                 | lease_preferences = '[[+region=us-west1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=us-west1: 2}',+
                 | lease_preferences = '[[+region=us-west1]]'
-[ RECORD 9 ]
database_name    | movr
table_name       | rides
partition_name   | us-west1
parent_partition | NULL
column_names     | crdb_region
index_name       | rides@rides_pkey
partition_value  | ('us-west1')
zone_config      | num_voters = 5,+
                 | voter_constraints = '{+region=us-west1: 2}',+
                 | lease_preferences = '[[+region=us-west1]]'
full_zone_config | range_min_bytes = 134217728,+
                 | range_max_bytes = 536870912,+
                 | gc.ttlseconds = 90000,+
                 | num_replicas = 5,+
                 | num_voters = 5,+
                 | constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',+
                 | voter_constraints = '{+region=us-west1: 2}',+
                 | lease_preferences = '[[+region=us-west1]]'

Wow, that’s a lot! We have gone from no partitions to nine partitions! But we can break it down pretty easily. Note that:

  • There are three regions.
  • There are three indexes: the rides table’s primary index plus its two secondary indexes.

The above output shows that CockroachDB has broken each of the three indexes into three partitions (one for each region): three indexes times three regions gives the nine partitions.
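The partition count follows directly from that cross product. Here is a minimal illustrative sketch (in Python, using the region and index names from the output above; this is just arithmetic, not anything CockroachDB runs):

```python
# Each index on a REGIONAL BY ROW table is partitioned once per database region,
# so the partition list is the cross product of indexes and regions.
regions = ["europe-west1", "us-east1", "us-west1"]
indexes = [
    "rides@rides_pkey",
    "rides@rides_auto_index_fk_city_ref_users",
    "rides@rides_auto_index_fk_vehicle_city_ref_vehicles",
]

partitions = [(index, region) for index in indexes for region in regions]
print(len(partitions))  # 3 indexes x 3 regions = 9 partitions
```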

What about the zone configuration?

> SHOW ZONE CONFIGURATION FOR TABLE rides;
     target     |                                      raw_config_sql
----------------+-------------------------------------------------------------------------------------------
  DATABASE movr | ALTER DATABASE movr CONFIGURE ZONE USING
                |     range_min_bytes = 134217728,
                |     range_max_bytes = 536870912,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 5,
                |     num_voters = 5,
                |     constraints = '{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}',
                |     voter_constraints = '{+region=us-east1: 2}',
                |     lease_preferences = '[[+region=us-east1]]'

No change to the database-level zone configuration: the per-partition settings shown above are layered on top of it.
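This layering explains the difference between the zone_config and full_zone_config columns in the partition output above: each partition sets only num_voters, voter_constraints, and lease_preferences, and inherits every other field from the database zone. As an illustrative sketch (hypothetical Python dictionaries, not CockroachDB internals), the merge works like this:

```python
# Fields the partition does not set are inherited from the database zone
# (values taken from the SHOW ZONE CONFIGURATION output above).
database_zone = {
    "range_min_bytes": 134217728,
    "range_max_bytes": 536870912,
    "gc.ttlseconds": 90000,
    "num_replicas": 5,
    "num_voters": 5,
    "constraints": "{+region=europe-west1: 1, +region=us-east1: 1, +region=us-west1: 1}",
    "voter_constraints": "{+region=us-east1: 2}",
    "lease_preferences": "[[+region=us-east1]]",
}

# The us-west1 partition's zone_config: only the fields it overrides.
partition_zone = {
    "num_voters": 5,
    "voter_constraints": "{+region=us-west1: 2}",
    "lease_preferences": "[[+region=us-west1]]",
}

# full_zone_config = database fields, with partition overrides applied on top.
full_zone_config = {**database_zone, **partition_zone}
print(full_zone_config["voter_constraints"])  # {+region=us-west1: 2} (overridden)
print(full_zone_config["num_replicas"])       # 5 (inherited)
```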

As you would expect from the default value of the crdb_region column, all the data is still located in us-east1. Note that the range with range_id = 99 is the only range with a non-zero size, and its lease_holder is located in us-east1:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE rides];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 58
range_size_mb         | 0
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,3,4,5,8}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=a","region=us-west1,az=b","region=europe-west1,az=c"}
-[ RECORD 2 ]
start_key_30          | /"@"
range_id              | 93
range_size_mb         | 0
lease_holder          | 9
lease_holder_locality | region=europe-west1,az=d
replicas              | {2,3,6,7,9}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=b","region=europe-west1,az=d"}
-[ RECORD 3 ]
start_key_30          | /"@"/PrefixEnd
range_id              | 94
range_size_mb         | 0
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,4,7,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=b","region=europe-west1,az=d"}
-[ RECORD 4 ]
start_key_30          | /"\x80"
range_id              | 99
range_size_mb         | 0.087997000000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {2,3,5,6,7}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b","region=us-west1,az=c","region=europe-west1,az=b"}
-[ RECORD 5 ]
start_key_30          | /"\x80"/PrefixEnd
range_id              | 100
range_size_mb         | 0
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {2,3,4,6,9}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a","region=us-west1,az=c","region=europe-west1,az=d"}
-[ RECORD 6 ]
start_key_30          | /"\xc0"
range_id              | 185
range_size_mb         | 0
lease_holder          | 5
lease_holder_locality | region=us-west1,az=b
replicas              | {1,3,5,6,7}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=b","region=us-west1,az=c","region=europe-west1,az=b"}
-[ RECORD 7 ]
start_key_30          | /"\xc0"/PrefixEnd
range_id              | 186
range_size_mb         | 0
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,3,4,7,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b","region=europe-west1,az=d"}

We can spread out the data by setting the crdb_region column:

> UPDATE rides SET crdb_region = 
 CASE 
    WHEN city IN ('boston', 'new york', 'washington dc') THEN 'us-east1' 
    WHEN city IN ('los angeles', 'san francisco', 'seattle') THEN 'us-west1' 
    WHEN city IN ('rome', 'amsterdam', 'paris') THEN 'europe-west1' 
 END
WHERE TRUE;
UPDATE 500
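The CASE expression above is simply a city-to-region lookup. Just for illustration, the same mapping could be sketched as a lookup table (hypothetical Python, not anything CockroachDB executes):

```python
# City-to-region mapping, mirroring the CASE expression in the UPDATE above.
CITY_TO_REGION = {
    "boston": "us-east1", "new york": "us-east1", "washington dc": "us-east1",
    "los angeles": "us-west1", "san francisco": "us-west1", "seattle": "us-west1",
    "rome": "europe-west1", "amsterdam": "europe-west1", "paris": "europe-west1",
}

def crdb_region_for(city):
    """Return the home region for a city, or None (the CASE's implicit ELSE NULL)."""
    return CITY_TO_REGION.get(city)

print(crdb_region_for("seattle"))  # us-west1
```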

Now, we can see that the data is automatically moved to the correct region:

> SELECT LEFT(start_key, 30) as start_key_30, range_id, range_size_mb, lease_holder, lease_holder_locality, replicas, replica_localities FROM [SHOW RANGES FROM TABLE rides];
-[ RECORD 1 ]
start_key_30          | NULL
range_id              | 58
range_size_mb         | 0
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,3,4,5,8}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=a","region=us-west1,az=b","region=europe-west1,az=c"}
-[ RECORD 2 ]
start_key_30          | /"@"
range_id              | 93
range_size_mb         | 0.028388000000000000000
lease_holder          | 9
lease_holder_locality | region=europe-west1,az=d
replicas              | {2,3,6,7,9}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=b","region=europe-west1,az=d"}
-[ RECORD 3 ]
start_key_30          | /"@"/PrefixEnd
range_id              | 94
range_size_mb         | 0
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,2,4,7,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=b","region=europe-west1,az=d"}
-[ RECORD 4 ]
start_key_30          | /"\x80"
range_id              | 99
range_size_mb         | 0.11390800000000000000
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {2,3,5,6,7}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=b","region=us-west1,az=c","region=europe-west1,az=b"}
-[ RECORD 5 ]
start_key_30          | /"\x80"/PrefixEnd
range_id              | 100
range_size_mb         | 0
lease_holder          | 2
lease_holder_locality | region=us-east1,az=c
replicas              | {2,3,4,6,9}
replica_localities    | {"region=us-east1,az=c","region=us-east1,az=d","region=us-west1,az=a","region=us-west1,az=c","region=europe-west1,az=d"}
-[ RECORD 6 ]
start_key_30          | /"\xc0"
range_id              | 185
range_size_mb         | 0.030024000000000000000
lease_holder          | 5
lease_holder_locality | region=us-west1,az=b
replicas              | {1,3,5,6,7}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=b","region=us-west1,az=c","region=europe-west1,az=b"}
-[ RECORD 7 ]
start_key_30          | /"\xc0"/PrefixEnd
range_id              | 186
range_size_mb         | 0
lease_holder          | 1
lease_holder_locality | region=us-east1,az=b
replicas              | {1,3,4,7,9}
replica_localities    | {"region=us-east1,az=b","region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b","region=europe-west1,az=d"}

As you can see above, the range_size_mb column shows that the data has been distributed globally! Rows can now be read and written quickly from the region where they live. This is an elegant solution to the problem of providing fast local access to geographically distributed data, and it is all done in SQL, without error-prone manual configuration of multiple databases or complex application logic!

Here are resources on regional by row tables:

Closing

We have looked at what CockroachDB does “under the hood” as you leverage CockroachDB’s high-level multi-region syntax. Along the way, we:

  • explored what it means to make a database multi-region
  • explored what happens when you change a multi-region database’s survival goal
  • examined regional tables in detail
  • examined global tables in detail
  • examined regional by row tables in detail

CockroachDB’s multi-region syntax, and its high-availability, horizontal scalability, and transactional data consistency features, enable developers to create cutting-edge applications that unlock internet-scale business opportunities.
