
Commit

Add known limitation: changefeed with on_error on truncated table (coc…
kathancox authored May 2, 2023
1 parent ed48c4b commit 840f622
Showing 4 changed files with 4 additions and 2 deletions.
1 change: 1 addition & 0 deletions _includes/v22.2/known-limitations/cdc.md
@@ -6,3 +6,4 @@
- There is no concurrency configurability for [webhook sinks](changefeed-sinks.html#webhook-sink). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73430)
- Using the [`split_column_families`](create-changefeed.html#split-column-families) and [`resolved`](create-changefeed.html#resolved-option) options on the same changefeed will cause an error when using the following [sinks](changefeed-sinks.html): Kafka and Google Cloud Pub/Sub. Instead, use the individual `FAMILY` keyword to specify column families when creating a changefeed. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/79452)
- There is no configuration for unordered messages for [Google Cloud Pub/Sub sinks](changefeed-sinks.html#google-cloud-pub-sub). You must specify the `region` parameter in the URI to maintain [ordering guarantees](changefeed-messages.html#ordering-guarantees). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/80884)
- If a changefeed with [`on_error='pause'`](create-changefeed.html#on-error) is running when a watched table is [truncated](truncate.html), the changefeed will pause but will not be able to resume reads from that table. Using [`ALTER CHANGEFEED`](alter-changefeed.html) to drop the table from the changefeed and then [resuming the job](resume-job.html) will work, but you cannot add the same table to the changefeed again. Instead, you will need to [create a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended) for that table. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/98506)
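The recovery path in the limitation above can be sketched as follows; the job ID (`12345`), table (`db.events`), and Kafka URI are hypothetical placeholders, not values from this commit:

```sql
-- Hypothetical job ID, table, and sink URI for illustration only.
-- The changefeed has already paused itself because of on_error='pause'.

-- Drop the truncated table from the paused changefeed:
ALTER CHANGEFEED 12345 DROP db.events;

-- Resume the job for the remaining watched tables:
RESUME JOB 12345;

-- The truncated table cannot be added back to job 12345;
-- create a new changefeed that watches it instead:
CREATE CHANGEFEED FOR TABLE db.events
  INTO 'kafka://localhost:9092';
```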
1 change: 1 addition & 0 deletions _includes/v23.1/known-limitations/cdc.md
@@ -5,3 +5,4 @@
- [Webhook sinks](changefeed-sinks.html#webhook-sink) only support emitting `JSON`. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73432)
- There is no concurrency configurability for [webhook sinks](changefeed-sinks.html#webhook-sink). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73430)
- Using the [`split_column_families`](create-changefeed.html#split-column-families) and [`resolved`](create-changefeed.html#resolved-option) options on the same changefeed will cause an error when using the following [sinks](changefeed-sinks.html): Kafka and Google Cloud Pub/Sub. Instead, use the individual `FAMILY` keyword to specify column families when creating a changefeed. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/79452)
- If a changefeed with [`on_error='pause'`](create-changefeed.html#on-error) is running when a watched table is [truncated](truncate.html), the changefeed will pause but will not be able to resume reads from that table. Using [`ALTER CHANGEFEED`](alter-changefeed.html) to drop the table from the changefeed and then [resuming the job](resume-job.html) will work, but you cannot add the same table to the changefeed again. Instead, you will need to [create a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended) for that table. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/98506)
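A minimal sketch of the scenario this limitation describes, assuming a hypothetical table (`db.events`) and Kafka sink:

```sql
-- Hypothetical table and sink URI for illustration only.
CREATE CHANGEFEED FOR TABLE db.events
  INTO 'kafka://localhost:9092'
  WITH on_error = 'pause';

-- Truncating the watched table pauses the changefeed above,
-- and reads from db.events cannot resume on that job.
TRUNCATE TABLE db.events;
```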
2 changes: 1 addition & 1 deletion v22.2/create-and-configure-changefeeds.md
@@ -13,7 +13,7 @@ Both Core and {{ site.data.products.enterprise }} changefeeds require that you e

- It is necessary to [enable rangefeeds](#enable-rangefeeds) for changefeeds to work.
- If you require [`resolved`](create-changefeed.html#resolved-option) message frequency under `30s`, then you **must** set the [`min_checkpoint_frequency`](create-changefeed.html#min-checkpoint-frequency) option to at least the desired `resolved` frequency (see the sketch after this list).
- Many DDL queries (including [`TRUNCATE`](truncate.html), [`DROP TABLE`](drop-table.html), and queries that add a column family) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended).
- Many DDL queries (including [`TRUNCATE`](truncate.html), [`DROP TABLE`](drop-table.html), and queries that add a column family) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended). If a changefeed with `on_error='pause'` is watching a table that is truncated, you will also need to start a new changefeed. See the change data capture [Known Limitations](change-data-capture-overview.html) for more detail.
- Partial or intermittent sink unavailability may impact changefeed stability. If a sink is unavailable, messages can't send, which means that a changefeed's high-water mark timestamp is at risk of falling behind the cluster's [garbage collection window](configure-replication-zones.html#replication-zone-variables). Throughput and latency can be affected once the sink is available again. However, [ordering guarantees](changefeed-messages.html#ordering-guarantees) will still hold for as long as a changefeed [remains active](monitor-and-debug-changefeeds.html#monitor-a-changefeed).
- When an [`IMPORT INTO`](import-into.html) statement is run, any current changefeed jobs targeting that table will fail.
- {% include {{ page.version.version }}/cdc/virtual-computed-column-cdc.md %}
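A minimal sketch of the rangefeed and `min_checkpoint_frequency` requirements noted above, assuming a hypothetical table (`db.events`) and Kafka sink:

```sql
-- Rangefeeds must be enabled before any changefeed can run:
SET CLUSTER SETTING kv.rangefeed.enabled = true;

-- For resolved messages more frequent than 30s, set min_checkpoint_frequency
-- to at least the desired resolved frequency (hypothetical table and sink):
CREATE CHANGEFEED FOR TABLE db.events
  INTO 'kafka://localhost:9092'
  WITH resolved = '10s', min_checkpoint_frequency = '10s';
```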
2 changes: 1 addition & 1 deletion v23.1/create-and-configure-changefeeds.md
@@ -13,7 +13,7 @@ Both Core and {{ site.data.products.enterprise }} changefeeds require that you e

- It is necessary to [enable rangefeeds](#enable-rangefeeds) for changefeeds to work.
- If you require [`resolved`](create-changefeed.html#resolved-option) message frequency under `30s`, then you **must** set the [`min_checkpoint_frequency`](create-changefeed.html#min-checkpoint-frequency) option to at least the desired `resolved` frequency.
- Many DDL queries (including [`TRUNCATE`](truncate.html), [`DROP TABLE`](drop-table.html), and queries that add a column family) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended).
- Many DDL queries (including [`TRUNCATE`](truncate.html), [`DROP TABLE`](drop-table.html), and queries that add a column family) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended). If a changefeed with `on_error='pause'` is watching a table that is truncated, you will also need to start a new changefeed, as in the sketch after this list. See the change data capture [Known Limitations](change-data-capture-overview.html) for more detail.
- Partial or intermittent sink unavailability may impact changefeed stability. If a sink is unavailable, messages can't send, which means that a changefeed's high-water mark timestamp is at risk of falling behind the cluster's [garbage collection window](configure-replication-zones.html#replication-zone-variables). Throughput and latency can be affected once the sink is available again. However, [ordering guarantees](changefeed-messages.html#ordering-guarantees) will still hold for as long as a changefeed [remains active](monitor-and-debug-changefeeds.html#monitor-a-changefeed).
- When an [`IMPORT INTO`](import-into.html) statement is run, any current changefeed jobs targeting that table will fail.
- {% include {{ page.version.version }}/cdc/virtual-computed-column-cdc.md %}
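A hedged sketch of starting a new changefeed where another ended; the table, sink URI, and cursor timestamp are hypothetical, and the high-water lookup is one possible way to choose a starting point:

```sql
-- Find the old changefeed job and its high-water timestamp:
SELECT job_id, high_water_timestamp
FROM crdb_internal.jobs
WHERE job_type = 'CHANGEFEED';

-- Hypothetical table, sink, and timestamp; cursor picks up where the old feed stopped:
CREATE CHANGEFEED FOR TABLE db.events
  INTO 'kafka://localhost:9092'
  WITH cursor = '1683043200000000000.0000000000';
```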
