Today I’ll share a simple “snippet” utility SQL query. You specify a table in your database and an alias for that table, and it generates several flavors of comma-separated lists of column names for that table. This can come in handy when you are crafting SQL that joins multiple tables with the same column names (such as a self-join, or joining to the same table twice).
Let’s look at a specific problem to illustrate. Suppose you are using CockroachDB and you want to diagnose transaction contention in your application. CockroachDB supplies an internal table, crdb_internal.transaction_contention_events, that shows details about transaction contention.
Here are the columns in that table:
> SHOW COLUMNS FROM crdb_internal.transaction_contention_events;
column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden
------------------------------+-------------+-------------+----------------+-----------------------+---------+------------
collection_ts | TIMESTAMPTZ | f | NULL | | {} | f
blocking_txn_id | UUID | f | NULL | | {} | f
blocking_txn_fingerprint_id | BYTES | f | NULL | | {} | f
waiting_txn_id | UUID | f | NULL | | {} | f
waiting_txn_fingerprint_id | BYTES | f | NULL | | {} | f
contention_duration | INTERVAL | f | NULL | | {} | f
contending_key | BYTES | f | NULL | | {} | f
(7 rows)
Among other things, each row in this table contains two useful references:
- the blocking transaction fingerprint ID
- the waiting transaction fingerprint ID
We can use these IDs to join to another internal table, crdb_internal.transaction_statistics, to get more information.
Here are that table’s columns, too:
> SHOW COLUMNS FROM crdb_internal.transaction_statistics;
column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden
-----------------------+-------------+-------------+----------------+-----------------------+---------+------------
aggregated_ts | TIMESTAMPTZ | t | NULL | | {} | f
fingerprint_id | BYTES | t | NULL | | {} | f
app_name | STRING | t | NULL | | {} | f
metadata | JSONB | t | NULL | | {} | f
statistics | JSONB | t | NULL | | {} | f
aggregation_interval | INTERVAL | t | NULL | | {} | f
(6 rows)
The reason for this example (besides having real-world application) is that it illustrates a SQL query that joins to the same table twice. This query will do so in order to get information on both the blocking and waiting transactions for a transaction contention event. For example, this is a simple start:
> SELECT COUNT(*) FROM crdb_internal.transaction_contention_events;
count
---------
35
(1 row)
So we have 35 transaction contention events. Let’s pick one:
> \x
> SELECT *
FROM crdb_internal.transaction_contention_events
WHERE blocking_txn_fingerprint_id != '\x0000000000000000'::BYTES
AND waiting_txn_fingerprint_id != '\x0000000000000000'::BYTES
LIMIT 1;
-[ RECORD 1 ]
collection_ts | 2023-05-19 17:50:42.028697+00
blocking_txn_id | 30da5bd7-df74-4eff-be94-50d0b1743ee4
blocking_txn_fingerprint_id | \x46897aeb66c6d6d9
waiting_txn_id | 9ac3f739-8210-416a-9e8a-bb1bf60ff4e7
waiting_txn_fingerprint_id | \xc5127e4e36421247
contention_duration | 00:00:00.009056
contending_key | \x8b89f28a89
(Note that, for readability, I’m using the \x command in the cockroach sql interactive SQL shell to toggle the display format. I’ll stop showing this below.)
Now that we have a transaction contention event to work with, let’s add details about the blocking and waiting transactions to the above output. We’ll join to the crdb_internal.transaction_statistics table twice to do this:
> SELECT *
FROM crdb_internal.transaction_contention_events AS tce
LEFT OUTER JOIN crdb_internal.transaction_statistics AS ts_waiting
ON tce.waiting_txn_fingerprint_id = ts_waiting.fingerprint_id
LEFT OUTER JOIN crdb_internal.transaction_statistics AS ts_blocking
ON tce.blocking_txn_fingerprint_id = ts_blocking.fingerprint_id
WHERE tce.collection_ts = '2023-05-19 17:50:42.028697+00'::TIMESTAMPTZ;
-[ RECORD 1 ]
collection_ts | 2023-05-19 17:50:42.028697+00
blocking_txn_id | 30da5bd7-df74-4eff-be94-50d0b1743ee4
blocking_txn_fingerprint_id | \x46897aeb66c6d6d9
waiting_txn_id | 9ac3f739-8210-416a-9e8a-bb1bf60ff4e7
waiting_txn_fingerprint_id | \xc5127e4e36421247
contention_duration | 00:00:00.009056
contending_key | \x8b89f28a89
aggregated_ts | 2023-05-19 17:00:00+00
fingerprint_id | \xc5127e4e36421247
app_name | ycsb
metadata | {"stmtFingerprintIDs": ["6a71c302b043a598"]}
statistics | {"execution_statistics": {"cnt": 1, "contentionTime": {"mean": 0, "sqDiff": 0}, "maxDiskUsage": {"mean": 0, "sqDiff": 0}, "maxMemUsage": {"mean": 1.024E+4, "sqDiff": 0}, "networkBytes": {"mean": 0, "sqDiff": 0}, "networkMsgs": {"mean": 0, "sqDiff": 0}}, "statistics": {"bytesRead": {"mean": 0, "sqDiff": 0}, "cnt": 1, "commitLat": {"mean": 0.000002125, "sqDiff": 0}, "maxRetries": 0, "numRows": {"mean": 0, "sqDiff": 0}, "retryLat": {"mean": 0, "sqDiff": 0}, "rowsRead": {"mean": 0, "sqDiff": 0}, "rowsWritten": {"mean": 0, "sqDiff": 0}, "svcLat": {"mean": 0.401530708, "sqDiff": 0}}}
aggregation_interval | 01:00:00
aggregated_ts | 2023-05-19 17:00:00+00
fingerprint_id | \x46897aeb66c6d6d9
app_name | ycsb
metadata | {"stmtFingerprintIDs": ["e9eac7a7e0c76106"]}
statistics | {"execution_statistics": {"cnt": 1, "contentionTime": {"mean": 0, "sqDiff": 0}, "maxDiskUsage": {"mean": 0, "sqDiff": 0}, "maxMemUsage": {"mean": 1.024E+4, "sqDiff": 0}, "networkBytes": {"mean": 0, "sqDiff": 0}, "networkMsgs": {"mean": 0, "sqDiff": 0}}, "statistics": {"bytesRead": {"mean": 0, "sqDiff": 0}, "cnt": 1, "commitLat": {"mean": 0.008151, "sqDiff": 0}, "maxRetries": 0, "numRows": {"mean": 0, "sqDiff": 0}, "retryLat": {"mean": 0, "sqDiff": 0}, "rowsRead": {"mean": 0, "sqDiff": 0}, "rowsWritten": {"mean": 0, "sqDiff": 0}, "svcLat": {"mean": 0.01705525, "sqDiff": 0}}}
aggregation_interval | 01:00:00
That was straightforward, but note the duplicated column names in the output. That makes interpreting the results harder. Beyond that, what if we needed a more complex query? We are already referring to specific columns to specify the JOIN conditions and the WHERE clause. Some additional reasons to specify individual columns include:
- specifying columns in the SELECT clause
- specifying columns in a GROUP BY clause
- specifying columns in an ORDER BY clause
We can always use the table_name.column_name approach to specify each column, but that is verbose. We have already used table aliases in the above query to make it more readable and to support joining to the same table twice. Going further, a nicer approach is to use both table aliases and column aliases liberally. Rewritten that way, the query could look something like this:
> SELECT
tce.collection_ts AS tce_collection_ts,
tce.blocking_txn_id AS tce_blocking_txn_id,
tce.blocking_txn_fingerprint_id AS tce_blocking_txn_fingerprint_id,
tce.waiting_txn_id AS tce_waiting_txn_id,
tce.waiting_txn_fingerprint_id AS tce_waiting_txn_fingerprint_id,
tce.contention_duration AS tce_contention_duration,
tce.contending_key AS tce_contending_key,
ts_waiting.aggregated_ts AS ts_waiting_aggregated_ts,
ts_waiting.fingerprint_id AS ts_waiting_fingerprint_id,
ts_waiting.app_name AS ts_waiting_app_name,
ts_waiting.metadata AS ts_waiting_metadata,
ts_waiting.statistics AS ts_waiting_statistics,
ts_waiting.aggregation_interval AS ts_waiting_aggregation_interval,
ts_blocking.aggregated_ts AS ts_blocking_aggregated_ts,
ts_blocking.fingerprint_id AS ts_blocking_fingerprint_id,
ts_blocking.app_name AS ts_blocking_app_name,
ts_blocking.metadata AS ts_blocking_metadata,
ts_blocking.statistics AS ts_blocking_statistics,
ts_blocking.aggregation_interval AS ts_blocking_aggregation_interval
FROM crdb_internal.transaction_contention_events AS tce
LEFT OUTER JOIN crdb_internal.transaction_statistics AS ts_waiting
ON tce.waiting_txn_fingerprint_id = ts_waiting.fingerprint_id
LEFT OUTER JOIN crdb_internal.transaction_statistics AS ts_blocking
ON tce.blocking_txn_fingerprint_id = ts_blocking.fingerprint_id
WHERE tce.collection_ts = '2023-05-19 17:50:42.028697+00'::TIMESTAMPTZ;
-[ RECORD 1 ]
tce_collection_ts | 2023-05-19 17:50:42.028697+00
tce_blocking_txn_id | 30da5bd7-df74-4eff-be94-50d0b1743ee4
tce_blocking_txn_fingerprint_id | \x46897aeb66c6d6d9
tce_waiting_txn_id | 9ac3f739-8210-416a-9e8a-bb1bf60ff4e7
tce_waiting_txn_fingerprint_id | \xc5127e4e36421247
tce_contention_duration | 00:00:00.009056
tce_contending_key | \x8b89f28a89
ts_waiting_aggregated_ts | 2023-05-19 17:00:00+00
ts_waiting_fingerprint_id | \xc5127e4e36421247
ts_waiting_app_name | ycsb
ts_waiting_metadata | {"stmtFingerprintIDs": ["6a71c302b043a598"]}
ts_waiting_statistics | {"execution_statistics": {"cnt": 1, "contentionTime": {"mean": 0, "sqDiff": 0}, "maxDiskUsage": {"mean": 0, "sqDiff": 0}, "maxMemUsage": {"mean": 1.024E+4, "sqDiff": 0}, "networkBytes": {"mean": 0, "sqDiff": 0}, "networkMsgs": {"mean": 0, "sqDiff": 0}}, "statistics": {"bytesRead": {"mean": 0, "sqDiff": 0}, "cnt": 1, "commitLat": {"mean": 0.000002125, "sqDiff": 0}, "maxRetries": 0, "numRows": {"mean": 0, "sqDiff": 0}, "retryLat": {"mean": 0, "sqDiff": 0}, "rowsRead": {"mean": 0, "sqDiff": 0}, "rowsWritten": {"mean": 0, "sqDiff": 0}, "svcLat": {"mean": 0.401530708, "sqDiff": 0}}}
ts_waiting_aggregation_interval | 01:00:00
ts_blocking_aggregated_ts | 2023-05-19 17:00:00+00
ts_blocking_fingerprint_id | \x46897aeb66c6d6d9
ts_blocking_app_name | ycsb
ts_blocking_metadata | {"stmtFingerprintIDs": ["e9eac7a7e0c76106"]}
ts_blocking_statistics | {"execution_statistics": {"cnt": 1, "contentionTime": {"mean": 0, "sqDiff": 0}, "maxDiskUsage": {"mean": 0, "sqDiff": 0}, "maxMemUsage": {"mean": 1.024E+4, "sqDiff": 0}, "networkBytes": {"mean": 0, "sqDiff": 0}, "networkMsgs": {"mean": 0, "sqDiff": 0}}, "statistics": {"bytesRead": {"mean": 0, "sqDiff": 0}, "cnt": 1, "commitLat": {"mean": 0.008151, "sqDiff": 0}, "maxRetries": 0, "numRows": {"mean": 0, "sqDiff": 0}, "retryLat": {"mean": 0, "sqDiff": 0}, "rowsRead": {"mean": 0, "sqDiff": 0}, "rowsWritten": {"mean": 0, "sqDiff": 0}, "svcLat": {"mean": 0.01705525, "sqDiff": 0}}}
ts_blocking_aggregation_interval | 01:00:00
This has the benefit of making the output clearer, because the column names are no longer duplicated. But, before you complain about how much more complicated this query is, realize that the extra bulk comes mainly from explicitly listing all the columns in the SELECT instead of using *.
That said, I admit it IS more complicated, and all the extra column aliasing may not be worth it unless you can put that extra work to use in more places, such as:
- removing duplicate columns (the join-condition columns have the same values, since it is an equi-join)
- specifying a GROUP BY
- specifying an ORDER BY
For example, consider this modified query to explore whether contention is coming from two different applications:
> SELECT
ts_waiting.app_name AS ts_waiting_app_name,
ts_blocking.app_name AS ts_blocking_app_name,
COUNT(*) AS the_count
FROM
crdb_internal.transaction_contention_events AS tce
LEFT OUTER JOIN crdb_internal.transaction_statistics AS ts_waiting
ON tce.waiting_txn_fingerprint_id = ts_waiting.fingerprint_id
LEFT OUTER JOIN crdb_internal.transaction_statistics AS ts_blocking
ON tce.blocking_txn_fingerprint_id = ts_blocking.fingerprint_id
GROUP BY ts_waiting_app_name, ts_blocking_app_name
ORDER BY ts_waiting_app_name, ts_blocking_app_name;
-[ RECORD 1 ]
ts_waiting_app_name | NULL
ts_blocking_app_name | NULL
the_count | 6
-[ RECORD 2 ]
ts_waiting_app_name | ycsb
ts_blocking_app_name | ycsb
the_count | 29
This is simple, but I hope it shows the value of column aliases. The value increases the more tables are joined in the query, because it can be difficult to remember which columns are from which tables. To help with the grunt work of creating those column aliases…
The Snippet
To make it easier to write SQL queries that use column aliases, here is that “snippet” of SQL that we have been working toward:
> WITH
cte_s AS (SELECT 'crdb_internal' AS schema),
cte_t AS (SELECT 'transaction_contention_events' AS table),
cte_a AS (SELECT 'tce' AS alias),
cte_w AS (
SELECT column1 AS what
FROM (VALUES('original_columns'),('aliasing_the_columns'),('column_aliases'),('table_aliases_only'))
),
cte_original AS (
SELECT cte_t.table, c.column_name
FROM information_schema.columns AS c
JOIN cte_s ON TRUE
JOIN cte_t ON TRUE
WHERE c.table_schema = cte_s.schema AND c.table_name = cte_t.table
ORDER BY c.ordinal_position
)
SELECT cte_w.what,
CASE cte_w.what
WHEN 'original_columns' THEN STRING_AGG(cte_original.column_name, ', ')
WHEN 'aliasing_the_columns' THEN STRING_AGG(cte_a.alias || '.' || cte_original.column_name || ' AS ' || cte_a.alias || '_' || cte_original.column_name, ', ')
WHEN 'column_aliases' THEN STRING_AGG(cte_a.alias || '_' || cte_original.column_name, ', ')
WHEN 'table_aliases_only' THEN STRING_AGG(cte_a.alias || '.' || cte_original.column_name, ', ')
END AS value
FROM cte_original
JOIN cte_w ON TRUE
JOIN cte_a ON TRUE
GROUP BY cte_w.what;
-[ RECORD 1 ]
what | original_columns
value | collection_ts, blocking_txn_id, blocking_txn_fingerprint_id, waiting_txn_id, waiting_txn_fingerprint_id, contention_duration, contending_key
-[ RECORD 2 ]
what | aliasing_the_columns
value | tce.collection_ts AS tce_collection_ts, tce.blocking_txn_id AS tce_blocking_txn_id, tce.blocking_txn_fingerprint_id AS tce_blocking_txn_fingerprint_id, tce.waiting_txn_id AS tce_waiting_txn_id, tce.waiting_txn_fingerprint_id AS tce_waiting_txn_fingerprint_id, tce.contention_duration AS tce_contention_duration, tce.contending_key AS tce_contending_key
-[ RECORD 3 ]
what | column_aliases
value | tce_collection_ts, tce_blocking_txn_id, tce_blocking_txn_fingerprint_id, tce_waiting_txn_id, tce_waiting_txn_fingerprint_id, tce_contention_duration, tce_contending_key
-[ RECORD 4 ]
what | table_aliases_only
value | tce.collection_ts, tce.blocking_txn_id, tce.blocking_txn_fingerprint_id, tce.waiting_txn_id, tce.waiting_txn_fingerprint_id, tce.contention_duration, tce.contending_key
Input
Note that the query starts with three common table expressions:
- cte_s – specifies the schema of the table you are interested in
- cte_t – specifies the table you are interested in
- cte_a – specifies the alias you wish to use for that table
In this case:
- cte_s is crdb_internal
- cte_t is transaction_contention_events
- cte_a is tce
Output
It produces four rows of output, each a comma-separated list of column names in a different flavor:
- Original, unmodified column names. This can be useful for simple queries on a single table.
  Example: collection_ts, blocking_txn_id, blocking_txn_fingerprint_id, …
- Column alias definitions. This is for use in the SELECT clause of your query; it guarantees every column of output has a different name.
  Example: tce.collection_ts AS tce_collection_ts, tce.blocking_txn_id AS tce_blocking_txn_id, tce.blocking_txn_fingerprint_id AS tce_blocking_txn_fingerprint_id, …
- Column aliases. This is for things like GROUP BY and ORDER BY.
  Example: tce_collection_ts, tce_blocking_txn_id, tce_blocking_txn_fingerprint_id, …
- Table alias, followed by a period, followed by the original column name. This is for use in JOIN conditions, or wherever you prefer it to the preceding format.
  Example: tce.collection_ts, tce.blocking_txn_id, tce.blocking_txn_fingerprint_id, …
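Outside the database, the same four flavors are easy to generate from a plain list of column names. Here is a minimal Python sketch of that logic (a hypothetical helper, not part of the SQL snippet; the column names are borrowed from the tce example above):

```python
# Hypothetical helper that mirrors the snippet's four output flavors.
def column_flavors(columns, alias):
    """Return the four comma-separated column-list flavors for a table."""
    return {
        "original_columns": ", ".join(columns),
        "aliasing_the_columns": ", ".join(
            f"{alias}.{c} AS {alias}_{c}" for c in columns),
        "column_aliases": ", ".join(f"{alias}_{c}" for c in columns),
        "table_aliases_only": ", ".join(f"{alias}.{c}" for c in columns),
    }

flavors = column_flavors(["collection_ts", "blocking_txn_id"], "tce")
print(flavors["aliasing_the_columns"])
# tce.collection_ts AS tce_collection_ts, tce.blocking_txn_id AS tce_blocking_txn_id
```

The SQL version has the advantage that it reads the column names straight out of information_schema.columns, so you never have to type them.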
So, whenever you write complex SQL that involves multiple tables, you can use the above SQL to generate column lists for each table, then copy and paste them as you see fit to build your query. I hope this simple “snippet” helps you in your work!
How it Works
The query has six parts:
- Three CTEs (common table expressions), named cte_s, cte_t, and cte_a, to specify the input parameters. (These could just as well be replaced by a single CTE that returns one row with three columns.)
- A fourth CTE, named cte_w, to generate labels for the four output rows. These labels are also used to decide what output to calculate for each row.
- A fifth CTE, named cte_original, to provide the original column names for the specified table. This uses the PostgreSQL-compatible database metadata table information_schema.columns. CockroachDB is highly compatible with PostgreSQL, in both SQL and metadata, to make porting existing applications easier, to leverage developer knowledge better, and to work more easily with developer tools and ORM (object-relational mapping) systems.
- A SELECT statement that aggregates the column names using the STRING_AGG() function in four different ways. This calculates the different flavors of comma-separated column names.
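A note on the JOIN ... ON TRUE pattern: it is simply a cross join, so every label row from cte_w is paired with every column row from cte_original, and the GROUP BY then collapses those pairs back into one row per label. A rough Python analogue of that shape (a sketch using two of the four labels, not part of the snippet itself):

```python
from itertools import product

# Stand-ins for cte_w (labels) and cte_original (column names).
labels = ["original_columns", "table_aliases_only"]
columns = ["collection_ts", "blocking_txn_id"]
alias = "tce"

# JOIN ... ON TRUE is a cross join: every label row is paired with
# every column row, giving len(labels) * len(columns) pairs.
pairs = list(product(labels, columns))

# The CASE expression picks a per-label formatting for each column...
fmt = {
    "original_columns": lambda c: c,
    "table_aliases_only": lambda c: f"{alias}.{c}",
}

# ...and GROUP BY cte_w.what plus STRING_AGG collapses the pairs
# back into one comma-separated row per label.
grouped = {}
for label, col in pairs:
    grouped.setdefault(label, []).append(fmt[label](col))
rows = {label: ", ".join(cols) for label, cols in grouped.items()}
print(rows["table_aliases_only"])
# tce.collection_ts, tce.blocking_txn_id
```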
Postscript
In case you are interested, here is how you can use CockroachDB’s built-in workload feature to generate some sample contention.
First, start CockroachDB on your laptop in single-node mode and running in the background:
▶ cockroach start-single-node --insecure --background
*
* WARNING: ALL SECURITY CONTROLS HAVE BEEN DISABLED!
*
* This mode is intended for non-production testing only.
*
* In this mode:
* - Your cluster is open to any client that can access any of your IP addresses.
* - Intruders with access to your machine or network can observe client-server traffic.
* - Intruders can log in without password and read or write any data in the cluster.
* - Intruders can consume all your server's resources and cause unavailability.
*
*
* INFO: To start a secure server without mandating TLS for clients,
* consider --accept-sql-without-tls instead. For other options, see:
*
* - https://go.crdb.dev/issue-v/53404/v22.2
* - https://www.cockroachlabs.com/docs/v22.2/secure-a-cluster.html
*
*
* WARNING: Running a server without --sql-addr, with a combined RPC/SQL listener, is deprecated.
* This feature will be removed in the next version of CockroachDB.
*
*
* WARNING: neither --listen-addr nor --advertise-addr was specified.
* The server will advertise "crlMBP-NRWLD7WPC6MTc1.local" to other nodes, is this routable?
*
* Consider using:
* - for local-only servers: --listen-addr=localhost:36257 --sql-addr=localhost:26257
* - for multi-node clusters: --listen-addr=:36257 --sql-addr=:26257 --advertise-addr=<host/IP addr>
*
*
Next, initialize a built-in workload; in this case, the ycsb workload:
▶ cockroach workload init ycsb
I230519 17:50:41.540384 1 workload/cli/run.go:622 [-] 1 random seed: 7215135280161608806
I230519 17:50:41.579107 1 ccl/workloadccl/fixture.go:318 [-] 2 starting import of 1 tables
I230519 17:50:42.029132 68 ccl/workloadccl/fixture.go:481 [-] 3 imported 13 MiB in usertable table (10000 rows, 0 index entries, took 402.743583ms, 32.88 MiB/s)
I230519 17:50:42.029340 1 ccl/workloadccl/fixture.go:326 [-] 4 imported 13 MiB bytes in 1 tables (took 449.453875ms, 29.46 MiB/s)
Next, run the ycsb workload for one minute. That is long enough to generate some contention events:
▶ cockroach workload run ycsb --duration 60s
I230519 17:53:43.153893 1 workload/cli/run.go:622 [-] 1 random seed: 4551622119067636639
I230519 17:53:43.154000 1 workload/cli/run.go:429 [-] 2 creating load generator...
I230519 17:53:43.213483 1 workload/cli/run.go:460 [-] 3 creating load generator... done (took 59.481292ms)
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
1.0s 0 24660.2 26141.7 0.3 0.9 4.7 23.1 read
1.0s 0 1292.3 1369.9 6.0 10.0 16.3 35.7 update
2.0s 0 25513.4 25821.5 0.3 1.0 4.5 22.0 read
2.0s 0 1371.4 1370.3 6.3 10.5 13.6 26.2 update
3.0s 0 27271.1 26304.6 0.3 0.9 4.2 32.5 read
3.0s 0 1440.3 1393.6 5.5 8.9 12.6 18.9 update
4.0s 0 24345.4 25814.7 0.3 1.0 5.8 29.4 read
4.0s 0 1289.0 1367.5 6.3 12.1 15.7 35.7 update
5.0s 0 26258.4 25903.5 0.3 1.0 3.9 35.7 read
5.0s 0 1357.0 1365.4 6.3 10.0 12.6 21.0 update
6.0s 0 27830.8 26224.7 0.3 1.0 3.7 50.3 read
6.0s 0 1450.0 1379.5 5.8 9.4 12.6 15.7 update
7.0s 0 26700.1 26292.6 0.3 1.0 4.2 30.4 read
7.0s 0 1355.0 1376.0 5.8 9.4 13.1 23.1 update
8.0s 0 25453.1 26187.7 0.3 1.0 4.5 52.4 read
8.0s 0 1406.0 1379.7 5.8 10.0 12.6 27.3 update
9.0s 0 25706.8 26139.9 0.3 1.0 5.0 41.9 read
9.0s 0 1348.4 1376.6 6.3 10.5 14.7 21.0 update
10.0s 0 24336.1 25954.7 0.3 1.3 4.5 83.9 read
10.0s 0 1264.4 1365.1 6.0 10.0 14.2 60.8 update
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
11.0s 0 25109.4 25877.9 0.3 1.0 4.5 60.8 read
11.0s 0 1287.1 1358.0 6.0 10.5 16.3 54.5 update
12.0s 0 18860.7 25293.1 0.3 0.7 8.4 37.7 read
12.0s 0 1025.0 1330.2 11.0 15.2 23.1 46.1 update
13.0s 0 19293.5 24831.6 0.3 1.1 7.6 46.1 read
13.0s 0 1016.0 1306.1 10.5 15.7 23.1 48.2 update
14.0s 0 20065.4 24491.2 0.3 0.7 8.9 31.5 read
14.0s 0 1035.0 1286.7 10.5 16.3 21.0 32.5 update
15.0s 0 20965.7 24256.2 0.3 0.7 6.6 50.3 read
15.0s 0 1093.0 1273.8 10.5 15.2 21.0 28.3 update
16.0s 0 21009.0 24053.2 0.3 0.7 7.9 26.2 read
16.0s 0 1032.0 1258.7 10.5 16.3 21.0 35.7 update
17.0s 0 18960.1 23753.6 0.2 0.7 9.4 48.2 read
17.0s 0 1011.0 1244.1 11.5 15.7 24.1 44.0 update
18.0s 0 20745.6 23586.5 0.3 0.7 6.6 46.1 read
18.0s 0 1056.0 1233.7 11.0 16.3 23.1 33.6 update
19.0s 0 19903.7 23392.6 0.3 0.7 8.9 31.5 read
19.0s 0 1073.9 1225.3 10.0 14.7 22.0 35.7 update
20.0s 0 21438.7 23295.0 0.3 0.7 7.1 22.0 read
20.0s 0 1120.1 1220.0 10.0 14.7 16.8 37.7 update
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
21.0s 0 20142.4 23144.8 0.3 0.6 7.6 25.2 read
21.0s 0 1086.1 1213.6 11.0 15.7 23.1 27.3 update
22.0s 0 19122.1 22962.0 0.3 0.7 8.9 50.3 read
22.0s 0 991.0 1203.5 11.5 16.3 21.0 37.7 update
23.0s 0 20179.2 22841.0 0.3 0.7 8.1 41.9 read
23.0s 0 1053.0 1197.0 10.0 15.7 26.2 50.3 update
24.0s 0 22973.6 22846.5 0.3 0.7 5.5 21.0 read
24.0s 0 1217.0 1197.8 9.4 14.2 16.3 24.1 update
25.0s 0 21115.7 22777.3 0.3 0.7 8.4 27.3 read
25.0s 0 1093.0 1193.6 9.4 14.7 18.9 30.4 update
26.0s 0 19347.9 22645.4 0.3 0.9 7.3 37.7 read
26.0s 0 1006.7 1186.4 10.5 17.8 25.2 37.7 update
27.0s 0 21035.9 22585.7 0.3 0.7 7.3 37.7 read
27.0s 0 1121.0 1184.0 10.5 14.7 19.9 30.4 update
28.0s 0 21041.9 22530.6 0.3 0.7 6.8 30.4 read
28.0s 0 1075.3 1180.1 10.5 15.7 26.2 32.5 update
29.0s 0 21244.0 22486.2 0.3 0.6 7.6 26.2 read
29.0s 0 1166.0 1179.6 10.0 14.7 18.9 28.3 update
30.0s 0 21329.3 22447.7 0.3 0.6 7.3 33.6 read
30.0s 0 1124.9 1177.8 10.0 15.2 22.0 30.4 update
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
31.0s 0 20827.1 22395.4 0.3 0.6 8.4 35.7 read
31.0s 0 1066.1 1174.2 10.5 15.2 21.0 35.7 update
32.0s 0 19472.8 22304.1 0.3 0.7 8.1 46.1 read
32.0s 0 1064.0 1170.7 10.5 16.3 21.0 46.1 update
33.0s 0 20896.5 22261.4 0.3 0.7 8.1 37.7 read
33.0s 0 1048.9 1167.1 10.5 15.2 19.9 44.0 update
34.0s 0 21616.7 22242.5 0.3 0.7 8.1 28.3 read
34.0s 0 1145.1 1166.4 9.4 14.7 22.0 35.7 update
35.0s 0 22757.1 22257.2 0.3 0.7 6.3 29.4 read
35.0s 0 1237.7 1168.4 8.9 14.2 18.9 27.3 update
36.0s 0 23760.4 22298.9 0.3 0.7 5.0 35.7 read
36.0s 0 1206.3 1169.5 8.9 14.7 21.0 32.5 update
37.0s 0 22333.1 22299.8 0.3 0.9 5.8 24.1 read
37.0s 0 1188.0 1170.0 8.9 14.2 17.8 28.3 update
38.0s 0 20787.1 22260.0 0.3 0.8 7.9 35.7 read
38.0s 0 1157.0 1169.7 8.9 14.7 19.9 39.8 update
39.0s 0 21763.0 22247.3 0.3 0.8 6.6 29.4 read
39.0s 0 1171.0 1169.7 9.4 14.2 16.8 25.2 update
40.0s 0 22873.4 22262.9 0.3 0.7 5.8 35.7 read
40.0s 0 1180.0 1169.9 8.9 14.2 18.9 37.7 update
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
41.0s 0 21894.1 22253.9 0.3 0.7 6.3 46.1 read
41.0s 0 1156.0 1169.6 8.9 15.2 18.9 31.5 update
42.0s 0 22344.9 22256.1 0.3 0.7 6.6 24.1 read
42.0s 0 1192.0 1170.1 8.9 14.2 18.9 24.1 update
43.0s 0 22774.2 22268.2 0.3 0.8 5.8 41.9 read
43.0s 0 1184.9 1170.5 8.9 14.2 16.3 32.5 update
44.0s 0 21045.2 22240.4 0.3 0.7 7.3 29.4 read
44.0s 0 1123.1 1169.4 10.0 15.7 19.9 30.4 update
45.0s 0 22474.6 22245.6 0.3 0.9 6.6 35.7 read
45.0s 0 1231.9 1170.8 7.3 13.6 17.8 28.3 update
46.0s 0 24140.5 22286.8 0.3 0.7 5.5 28.3 read
46.0s 0 1322.1 1174.1 6.8 13.1 17.8 27.3 update
47.0s 0 21925.0 22279.1 0.3 0.9 6.6 39.8 read
47.0s 0 1126.0 1173.1 7.9 15.2 19.9 37.7 update
48.0s 0 23339.6 22301.2 0.3 0.9 6.6 35.7 read
48.0s 0 1198.0 1173.6 7.3 13.6 19.9 44.0 update
49.0s 0 22826.0 22311.9 0.3 0.8 5.2 41.9 read
49.0s 0 1274.1 1175.6 7.3 13.6 18.9 56.6 update
50.0s 0 23636.2 22338.4 0.3 0.9 5.8 24.1 read
50.0s 0 1258.9 1177.3 7.3 13.6 17.8 41.9 update
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
51.0s 0 21497.7 22321.9 0.3 0.9 7.1 37.7 read
51.0s 0 1124.0 1176.3 8.4 15.7 24.1 29.4 update
52.0s 0 23341.2 22341.5 0.3 0.9 6.6 30.4 read
52.0s 0 1170.1 1176.1 7.6 13.6 19.9 29.4 update
53.0s 0 21570.3 22326.9 0.3 0.9 6.6 46.1 read
53.0s 0 1172.1 1176.1 8.1 15.2 21.0 39.8 update
54.0s 0 22708.1 22334.0 0.3 0.9 5.8 35.7 read
54.0s 0 1208.0 1176.6 7.6 14.2 17.8 30.4 update
55.0s 0 23471.2 22354.7 0.3 0.8 5.5 35.7 read
55.0s 0 1290.0 1178.7 7.1 13.1 21.0 41.9 update
56.0s 0 23765.7 22379.9 0.3 0.8 5.2 29.4 read
56.0s 0 1294.9 1180.8 7.1 13.6 17.8 30.4 update
57.0s 0 23934.7 22407.1 0.3 0.9 5.5 79.7 read
57.0s 0 1238.1 1181.8 7.1 13.6 19.9 32.5 update
58.0s 0 21042.0 22383.6 0.3 0.8 5.8 33.6 read
58.0s 0 1131.9 1180.9 8.9 17.8 22.0 32.5 update
59.0s 0 21351.2 22366.1 0.3 0.7 7.3 25.2 read
59.0s 0 1165.0 1180.7 8.4 16.3 21.0 31.5 update
60.0s 0 23861.1 22391.0 0.3 0.9 5.5 32.5 read
60.0s 0 1223.0 1181.4 7.6 14.2 17.8 26.2 update
_elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__total
60.0s 0 1343466 22391.0 0.4 0.3 0.8 6.3 83.9 read
_elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__total
60.0s 0 70882 1181.4 8.7 8.4 14.7 19.9 60.8 update
_elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result
60.0s 0 1414348 23572.4 0.8 0.3 5.8 12.6 83.9
When you are finished using CockroachDB, you can kill the CockroachDB process that is running in the background. On a Mac or Linux machine, this could look like:
▶ ps aux | grep -i cockroach | grep -v grep
[userid] process_id 2.2 1.0 409760352 320528 s042 S 5:45PM 0:01.26 cockroach start-single-node --insecure
▶ kill process_id
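If you script this cleanup, the same find-then-kill pattern can be sketched with pgrep. This is only an illustration: sleep stands in for cockroach, and the argument 7391 is an arbitrary marker chosen so the pattern matches nothing else.

```shell
# Start a throwaway background process ('sleep' standing in for 'cockroach').
sleep 7391 &
started=$!
sleep 1   # give the child a moment to start

# pgrep -f matches against the full command line, like the grep pipeline above;
# the ^ anchor keeps it from matching this script's own shell.
pid=$(pgrep -f '^sleep 7391' | head -n 1)

# Kill the process we found.
kill "$pid"
```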
For information on installing and running CockroachDB locally on your laptop, check out this doc:
Install CockroachDB on Mac
https://www.cockroachlabs.com/docs/dev/install-cockroachdb-mac.html
And for information on creating your own free serverless CockroachDB in the cloud, check out this doc:
CockroachDB Cloud Docs
https://www.cockroachlabs.com/docs/cockroachcloud/