Move FAQ from sql to operational FAQs page
Also link to new transaction contention content from transactions page.
jseldess committed May 22, 2018
1 parent eb801ee commit 67019f2
Showing 6 changed files with 40 additions and 32 deletions.
11 changes: 11 additions & 0 deletions v2.0/operational-faqs.md
@@ -99,6 +99,17 @@ If you want all existing timeseries data to be deleted, change the `timeseries.r
> SET CLUSTER SETTING timeseries.resolution_10s.storage_duration = '0s';
~~~

## Why would increasing the number of nodes not result in more operations per second?

If queries operate on different data, then increasing the number
of nodes should improve the overall throughput (transactions/second or QPS).

However, if your queries operate on the same data, you may be
observing transaction contention. See [Understanding and Avoiding
Transaction
Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention)
for more details.

## Why does CockroachDB collect anonymized cluster usage details by default?

Collecting information about CockroachDB's real-world usage helps us prioritize the development of product features. We choose our default as "opt-in" to strengthen the information we receive from our collection efforts, but we also make a careful effort to send only anonymous, aggregate usage statistics. See [Diagnostics Reporting](diagnostics-reporting.html) for a detailed look at what information is sent and how to opt out.
13 changes: 1 addition & 12 deletions v2.0/sql-faqs.md
@@ -2,7 +2,7 @@
title: SQL FAQs
summary: Get answers to frequently asked questions about CockroachDB SQL.
toc: false
toc_not_nested: true
---

<div id="toc"></div>
@@ -51,17 +51,6 @@ For more information about contention, see [Understanding and Avoiding
Transaction
Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).

## Why would increasing the number of nodes not result in more operations per second?

If queries operate on different data, then increasing the number
of nodes should improve the overall throughput (transactions/second or QPS).

However, if your queries operate on the same data, you may be
observing transaction contention. See [Understanding and Avoiding
Transaction
Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention)
for more details.

## Does CockroachDB support `JOIN`?

[CockroachDB supports uncorrelated SQL joins](joins.html). We are
14 changes: 9 additions & 5 deletions v2.0/transactions.md
@@ -57,11 +57,15 @@ Type | Description
**Ambiguous Errors** | Errors with the code `40003` that are returned in response to `RELEASE SAVEPOINT` (or `COMMIT` when not using `SAVEPOINT`), which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. See [here](common-errors.html#result-is-ambiguous) for more about this kind of error.
**SQL Errors** | All other errors, which indicate that a statement in the transaction failed. For example, violating the `UNIQUE` constraint generates a `23505` error. After encountering these errors, you can either issue a `COMMIT` or `ROLLBACK` to abort the transaction and revert the database to its state before the transaction began.<br><br>If you want to attempt the same set of statements again, you must begin a completely new transaction.

## Transaction Retries
## Transaction Contention

Transactions in CockroachDB lock data resources that are written during their execution. When a pending write from one transaction conflicts with a write of a concurrent transaction, the concurrent transaction must wait for the earlier transaction to complete before proceeding. When a dependency cycle is detected between transactions, the transaction with the higher priority aborts the dependent transaction to avoid deadlock; the aborted transaction must be retried.

Transactions in CockroachDB lock data resources that are written during their execution. In the event that a pending write from one transaction conflicts with a write of a concurrent transaction, the concurrent transaction must wait for the earlier transaction to complete before proceeding. CockroachDB implements a distributed deadlock detection algorithm to discover dependency cycles. Deadlocks are resolved by allowing transactions with higher priority to abort their dependencies. Transactions which are aborted to avoid deadlock must be retried.
For more details about transaction contention and best practices for avoiding contention, see [Understanding and Avoiding Transaction Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).

## Transaction Retries

Transactions may require retries if they experience deadlock or read/write contention with other concurrent transactions which cannot be resolved without allowing potential [serializable anomalies](https://en.wikipedia.org/wiki/Serializability). (However, it's possible to mitigate read-write conflicts by performing reads using [`AS OF SYSTEM TIME`](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).)
Transactions may require retries if they experience deadlock or [read/write contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) with other concurrent transactions which cannot be resolved without allowing potential [serializable anomalies](https://en.wikipedia.org/wiki/Serializability). (However, it's possible to mitigate read-write conflicts by performing reads using [`AS OF SYSTEM TIME`](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).)
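The `AS OF SYSTEM TIME` mitigation mentioned above runs a read against a historical snapshot, so it neither blocks nor is blocked by concurrent writes. A minimal sketch, assuming a hypothetical `accounts` table and a version that accepts the relative-timestamp form:

~~~ sql
> SELECT id, balance FROM accounts AS OF SYSTEM TIME '-10s';
~~~

The trade-off is that the results may be up to 10 seconds stale, so this is only appropriate for queries that can tolerate slightly outdated data.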

There are two cases for handling transaction retries:

@@ -130,7 +134,7 @@ To handle these types of errors you have two options:

#### Client-Side Transaction Retries

To improve the performance of transactions that fail due to contention, CockroachDB includes a set of statements that let you retry those transactions. Retrying transactions has the benefit of increasing their priority each time they're retried, increasing their likelihood to succeed.
As one way to improve the performance of [contended transactions](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention), CockroachDB includes a set of statements that let you retry those transactions. Retrying transactions has the benefit of increasing their priority each time they're retried, increasing their likelihood to succeed.

Retried transactions are also issued at a later timestamp, so the transaction operates on a later snapshot of the database and its reads might return updated data.
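The client-side retry statements follow the documented savepoint pattern built around `SAVEPOINT cockroach_restart`. A minimal sketch of one pass through the retry loop (the `accounts` table and its columns are hypothetical):

~~~ sql
> BEGIN;
> SAVEPOINT cockroach_restart;

-- Application statements go here, for example:
> UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- If any statement returns a retry error (code 40001), roll back
-- to the savepoint and reissue the statements:
> ROLLBACK TO SAVEPOINT cockroach_restart;

-- Once all statements succeed, release the savepoint and commit:
> RELEASE SAVEPOINT cockroach_restart;
> COMMIT;
~~~

In practice, the rollback-and-reissue step runs in a loop in application code until no retry error is returned, at which point the application releases the savepoint and commits.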

@@ -181,7 +185,7 @@ isolation level. The following two sections detail these further.

### Transaction Priorities

Every transaction in CockroachDB is assigned an initial **priority**. By default, that priority is `NORMAL`, but for transactions that should be given preference in high-contention scenarios, the client can set the priority within the [`BEGIN`](begin-transaction.html) statement:
Every transaction in CockroachDB is assigned an initial **priority**. By default, that priority is `NORMAL`, but for transactions that should be given preference in [high-contention scenarios](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention), the client can set the priority within the [`BEGIN`](begin-transaction.html) statement:

~~~ sql
> BEGIN PRIORITY <LOW | NORMAL | HIGH>;
11 changes: 11 additions & 0 deletions v2.1/operational-faqs.md
@@ -99,6 +99,17 @@ If you want all existing timeseries data to be deleted, change the `timeseries.r
> SET CLUSTER SETTING timeseries.resolution_10s.storage_duration = '0s';
~~~

## Why would increasing the number of nodes not result in more operations per second?

If queries operate on different data, then increasing the number
of nodes should improve the overall throughput (transactions/second or QPS).

However, if your queries operate on the same data, you may be
observing transaction contention. See [Understanding and Avoiding
Transaction
Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention)
for more details.

## Why does CockroachDB collect anonymized cluster usage details by default?

Collecting information about CockroachDB's real-world usage helps us prioritize the development of product features. We choose our default as "opt-in" to strengthen the information we receive from our collection efforts, but we also make a careful effort to send only anonymous, aggregate usage statistics. See [Diagnostics Reporting](diagnostics-reporting.html) for a detailed look at what information is sent and how to opt out.
11 changes: 0 additions & 11 deletions v2.1/sql-faqs.md
@@ -51,17 +51,6 @@ For more information about contention, see [Understanding and Avoiding
Transaction
Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).

## Why would increasing the number of nodes not result in more operations per second?

If queries operate on different data, then increasing the number
of nodes should improve the overall throughput (transactions/second or QPS).

However, if your queries operate on the same data, you may be
observing transaction contention. See [Understanding and Avoiding
Transaction
Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention)
for more details.

## Does CockroachDB support `JOIN`?

[CockroachDB supports uncorrelated SQL joins](joins.html). We are
12 changes: 8 additions & 4 deletions v2.1/transactions.md
@@ -57,11 +57,15 @@ Type | Description
**Ambiguous Errors** | Errors with the code `40003` that are returned in response to `RELEASE SAVEPOINT` (or `COMMIT` when not using `SAVEPOINT`), which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. See [here](common-errors.html#result-is-ambiguous) for more about this kind of error.
**SQL Errors** | All other errors, which indicate that a statement in the transaction failed. For example, violating the `UNIQUE` constraint generates a `23505` error. After encountering these errors, you can either issue a `COMMIT` or `ROLLBACK` to abort the transaction and revert the database to its state before the transaction began.<br><br>If you want to attempt the same set of statements again, you must begin a completely new transaction.

## Transaction Retries
## Transaction Contention

Transactions in CockroachDB lock data resources that are written during their execution. When a pending write from one transaction conflicts with a write of a concurrent transaction, the concurrent transaction must wait for the earlier transaction to complete before proceeding. When a dependency cycle is detected between transactions, the transaction with the higher priority aborts the dependent transaction to avoid deadlock; the aborted transaction must be retried.

Transactions in CockroachDB lock data resources that are written during their execution. In the event that a pending write from one transaction conflicts with a write of a concurrent transaction, the concurrent transaction must wait for the earlier transaction to complete before proceeding. CockroachDB implements a distributed deadlock detection algorithm to discover dependency cycles. Deadlocks are resolved by allowing transactions with higher priority to abort their dependencies. Transactions which are aborted to avoid deadlock must be retried.
For more details about transaction contention and best practices for avoiding contention, see [Understanding and Avoiding Transaction Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).

## Transaction Retries

Transactions may require retries if they experience deadlock or read/write contention with other concurrent transactions which cannot be resolved without allowing potential [serializable anomalies](https://en.wikipedia.org/wiki/Serializability). (However, it's possible to mitigate read-write conflicts by performing reads using [`AS OF SYSTEM TIME`](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).)
Transactions may require retries if they experience deadlock or [read/write contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) with other concurrent transactions which cannot be resolved without allowing potential [serializable anomalies](https://en.wikipedia.org/wiki/Serializability). (However, it's possible to mitigate read-write conflicts by performing reads using [`AS OF SYSTEM TIME`](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).)

There are two cases for handling transaction retries:

@@ -181,7 +185,7 @@ isolation level. The following two sections detail these further.

### Transaction Priorities

Every transaction in CockroachDB is assigned an initial **priority**. By default, that priority is `NORMAL`, but for transactions that should be given preference in high-contention scenarios, the client can set the priority within the [`BEGIN`](begin-transaction.html) statement:
Every transaction in CockroachDB is assigned an initial **priority**. By default, that priority is `NORMAL`, but for transactions that should be given preference in [high-contention scenarios](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention), the client can set the priority within the [`BEGIN`](begin-transaction.html) statement:

~~~ sql
> BEGIN PRIORITY <LOW | NORMAL | HIGH>;
