---
title: Changefeed Examples
summary: Examples for starting and using changefeeds with different aims.
toc: true
docs_area: stream_data
---

This page provides step-by-step examples for using Core and {{ site.data.products.enterprise }} changefeeds. Creating {{ site.data.products.enterprise }} changefeeds is available on {{ site.data.products.dedicated }}, on {{ site.data.products.serverless }} clusters with an {{ site.data.products.enterprise }} license, and on {{ site.data.products.core }} clusters with an {{ site.data.products.enterprise }} license. Core changefeeds are available in all products.

For a summary of Core and {{ site.data.products.enterprise }} changefeed features, see What is Change Data Capture?

{{ site.data.products.enterprise }} changefeeds can connect to the following sinks:

- Kafka
- Google Cloud Pub/Sub
- Cloud storage
- Webhook

See the Changefeed Sinks page for more detail on forming sink URIs, available sink query parameters, and specifics on configuration.

Use the following filters to show usage examples for either Enterprise or Core changefeeds:

Enterprise Changefeeds Core Changefeeds

## Create a changefeed connected to Kafka

{{site.data.alerts.callout_info}} CREATE CHANGEFEED is an {{ site.data.products.enterprise }}-only feature. For the Core version, see the CHANGEFEED FOR example. {{site.data.alerts.end}}

In this example, you'll set up a changefeed for a single-node cluster that is connected to a Kafka sink. The changefeed will watch two tables.

  1. If you do not already have one, request a trial {{ site.data.products.enterprise }} license.

  2. Use the cockroach start-single-node command to start a single-node cluster:

    {% include_cached copy-clipboard.html %}

    $ cockroach start-single-node --insecure --listen-addr=localhost --background
  3. Download and extract the Confluent Open Source platform (which includes Kafka).

  4. Move into the extracted confluent-<version> directory and start Confluent:

    {% include_cached copy-clipboard.html %}

    $ ./bin/confluent local services start

    Only zookeeper and kafka are needed. To troubleshoot Confluent, see their docs and the Quick Start Guide.
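    If you prefer not to start the full platform, a minimal sketch for starting only the services this example needs (starting kafka also starts its zookeeper dependency):

    ~~~ shell
    # Start only Kafka (and its ZooKeeper dependency) instead of the full platform.
    $ ./bin/confluent local services kafka start
    ~~~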

  5. Create two Kafka topics:

    {% include_cached copy-clipboard.html %}

    $ ./bin/kafka-topics \
    --create \
    --zookeeper localhost:2181 \
    --replication-factor 1 \
    --partitions 1 \
    --topic office_dogs

    {% include_cached copy-clipboard.html %}

    $ ./bin/kafka-topics \
    --create \
    --zookeeper localhost:2181 \
    --replication-factor 1 \
    --partitions 1 \
    --topic employees

    {{site.data.alerts.callout_info}} You are responsible for creating Kafka topics with the necessary number of partitions and replication factor. You can create topics manually, or configure Kafka brokers to create topics automatically with a default partition count and replication factor, as in the sketch below. {{site.data.alerts.end}}
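    A hedged sketch of the automatic approach; the path to server.properties assumes the Confluent directory layout used in this example:

    ~~~ shell
    # Allow brokers to create topics automatically on first use.
    # auto.create.topics.enable is a standard Kafka broker setting.
    $ echo "auto.create.topics.enable=true" >> ./etc/kafka/server.properties
    ~~~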

{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %}

{% include {{ page.version.version }}/cdc/create-example-db-cdc.md %}

  1. Start the changefeed:

    {% include_cached copy-clipboard.html %}

    > CREATE CHANGEFEED FOR TABLE office_dogs, employees INTO 'kafka://localhost:9092';
    
            job_id       
    +--------------------+
      360645287206223873
    (1 row)
    
    NOTICE: changefeed will emit to topic office_dogs
    NOTICE: changefeed will emit to topic employees
    

    This starts the changefeed in the background and returns the job_id. The changefeed writes to Kafka.
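    Because the changefeed runs as a background job, you can manage it from the SQL shell with the standard job statements. For example, using the job_id returned above:

    ~~~ sql
    > SHOW JOBS;          -- inspect the changefeed's status
    > PAUSE JOB 360645287206223873;
    > RESUME JOB 360645287206223873;
    ~~~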

  2. In a new terminal, move into the extracted confluent-<version> directory and start watching the Kafka topics:

    {% include_cached copy-clipboard.html %}

    $ ./bin/kafka-console-consumer \
    --bootstrap-server=localhost:9092 \
    --from-beginning \
    --whitelist 'office_dogs|employees'
    {"after": {"id": 1, "name": "Petee H"}}
    {"after": {"id": 2, "name": "Carl"}}
    {"after": {"id": 1, "name": "Lauren", "rowid": 528514320239329281}}
    {"after": {"id": 2, "name": "Spencer", "rowid": 528514320239362049}}

    The initial scan displays the state of the tables as of when the changefeed started (therefore, the initial value of "Petee" is omitted).

    {% include {{ page.version.version }}/cdc/print-key.md %}

  3. Back in the SQL client, insert more data:

    {% include_cached copy-clipboard.html %}

    > INSERT INTO office_dogs VALUES (3, 'Ernie');
  4. Back in the terminal where you're watching the Kafka topics, you will see the following output:

    {"after": {"id": 3, "name": "Ernie"}}
  5. When you are done, exit the SQL shell (\q).

  6. To stop cockroach:

    Get the process ID of the node:

    {% include_cached copy-clipboard.html %}

    ps -ef | grep cockroach | grep -v grep
      501 21766     1   0  6:21PM ttys001    0:00.89 cockroach start-single-node --insecure --listen-addr=localhost
    

    Gracefully shut down the node, specifying its process ID:

    {% include_cached copy-clipboard.html %}

    kill -TERM 21766
    initiating graceful shutdown of server
    server drained and shutdown completed
    
  7. To stop Kafka, move into the extracted confluent-<version> directory and stop Confluent:

    {% include_cached copy-clipboard.html %}

    $ ./bin/confluent local services stop

## Create a changefeed connected to Kafka using Avro

{{site.data.alerts.callout_info}} CREATE CHANGEFEED is an {{ site.data.products.enterprise }}-only feature. For the Core version, see the CHANGEFEED FOR example. {{site.data.alerts.end}}

In this example, you'll set up a changefeed for a single-node cluster that is connected to a Kafka sink and emits Avro records. The changefeed will watch two tables.

  1. If you do not already have one, request a trial {{ site.data.products.enterprise }} license.

  2. Use the cockroach start-single-node command to start a single-node cluster:

    {% include_cached copy-clipboard.html %}

    $ cockroach start-single-node --insecure --listen-addr=localhost --background
  3. Download and extract the Confluent Open Source platform (which includes Kafka).

  4. Move into the extracted confluent-<version> directory and start Confluent:

    {% include_cached copy-clipboard.html %}

    $ ./bin/confluent local services start

    Only zookeeper, kafka, and schema-registry are needed. To troubleshoot Confluent, see their docs and the Quick Start Guide.

  5. Create two Kafka topics:

    {% include_cached copy-clipboard.html %}

    $ ./bin/kafka-topics \
    --create \
    --zookeeper localhost:2181 \
    --replication-factor 1 \
    --partitions 1 \
    --topic office_dogs

    {% include_cached copy-clipboard.html %}

    $ ./bin/kafka-topics \
    --create \
    --zookeeper localhost:2181 \
    --replication-factor 1 \
    --partitions 1 \
    --topic employees

    {{site.data.alerts.callout_info}} You are responsible for creating Kafka topics with the necessary number of partitions and replication factor. You can create topics manually, or configure Kafka brokers to create topics automatically with a default partition count and replication factor. {{site.data.alerts.end}}

{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %}

{% include {{ page.version.version }}/cdc/create-example-db-cdc.md %}

  1. Start the changefeed:

    {% include_cached copy-clipboard.html %}

    > CREATE CHANGEFEED FOR TABLE office_dogs, employees INTO 'kafka://localhost:9092' WITH format = avro, confluent_schema_registry = 'http://localhost:8081';

    {% include {{ page.version.version }}/cdc/confluent-cloud-sr-url.md %}

            job_id       
    +--------------------+
      360645287206223873
    (1 row)
    
    NOTICE: changefeed will emit to topic office_dogs
    NOTICE: changefeed will emit to topic employees
    

    This starts the changefeed in the background and returns the job_id. The changefeed writes to Kafka.

  2. In a new terminal, move into the extracted confluent-<version> directory and start watching the Kafka topics:

    {% include_cached copy-clipboard.html %}

    $ ./bin/kafka-avro-console-consumer \
    --bootstrap-server=localhost:9092 \
    --from-beginning \
    --whitelist 'office_dogs|employees'
    {"after":{"office_dogs":{"id":{"long":1},"name":{"string":"Petee H"}}}}
    {"after":{"office_dogs":{"id":{"long":2},"name":{"string":"Carl"}}}}
    {"after":{"employees":{"dog_id":{"long":1},"employee_name":{"string":"Lauren"},"rowid":{"long":528537452042682369}}}}
    {"after":{"employees":{"dog_id":{"long":2},"employee_name":{"string":"Spencer"},"rowid":{"long":528537452042747905}}}}

    The initial scan displays the state of the table as of when the changefeed started (therefore, the initial value of "Petee" is omitted).

    {% include {{ page.version.version }}/cdc/print-key.md %}
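    To confirm that the changefeed registered its Avro schemas, you can query the Schema Registry's REST API; /subjects is the standard listing endpoint, and the output should include one subject per topic:

    ~~~ shell
    # List the subjects registered with the Schema Registry.
    $ curl -s http://localhost:8081/subjects
    ~~~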

  3. Back in the SQL client, insert more data:

    {% include_cached copy-clipboard.html %}

    > INSERT INTO office_dogs VALUES (3, 'Ernie');
  4. Back in the terminal where you're watching the Kafka topics, you will see the following output:

    {"after":{"office_dogs":{"id":{"long":3},"name":{"string":"Ernie"}}}}
  5. When you are done, exit the SQL shell (\q).

  6. To stop cockroach:

    Get the process ID of the node:

    {% include_cached copy-clipboard.html %}

    ps -ef | grep cockroach | grep -v grep
      501 21766     1   0  6:21PM ttys001    0:00.89 cockroach start-single-node --insecure --listen-addr=localhost
    

    Gracefully shut down the node, specifying its process ID:

    {% include_cached copy-clipboard.html %}

    kill -TERM 21766
    initiating graceful shutdown of server
    server drained and shutdown completed
    
  7. To stop Kafka, move into the extracted confluent-<version> directory and stop Confluent:

    {% include_cached copy-clipboard.html %}

    $ ./bin/confluent local services stop

## Create a changefeed connected to a Google Cloud Pub/Sub sink

{{site.data.alerts.callout_info}} {% include feature-phases/preview.md %} {{site.data.alerts.end}}

In this example, you'll set up a changefeed for a single-node cluster that is connected to a Google Cloud Pub/Sub sink. The changefeed will watch a table and send messages to the sink.

You'll need access to a Google Cloud Project to set up a Pub/Sub sink. In this example, the Google Cloud CLI (gcloud) is used, but you can also complete each of these steps within your Google Cloud Console.

  1. If you do not already have one, request a trial {{ site.data.products.enterprise }} license.

  2. Use the cockroach start-single-node command to start a single-node cluster:

    {% include_cached copy-clipboard.html %}

    cockroach start-single-node --insecure --listen-addr=localhost --background
  3. In this example, you'll run CockroachDB's MovR application workload to set up some data for your changefeed.

    First create the schema for the workload:

    {% include_cached copy-clipboard.html %}

    cockroach workload init movr "postgresql://root@localhost:26257?sslmode=disable"

    Then run the workload:

    {% include_cached copy-clipboard.html %}

    cockroach workload run movr --duration=1m "postgresql://root@localhost:26257?sslmode=disable"
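    As an optional sanity check (not part of the original steps), you can confirm that the workload loaded data:

    ~~~ shell
    # Count the rows the workload created in movr.users.
    $ cockroach sql --insecure --host=localhost -e "SELECT count(*) FROM movr.users;"
    ~~~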

{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %}

  1. Next, you'll prepare your Pub/Sub sink.

    In a new terminal window, create a Service Account attached to your Google Project:

    {% include_cached copy-clipboard.html %}

    gcloud iam service-accounts create cdc-demo --project cockroach-project

    In this example, cdc-demo is the name of the service account, and cockroach-project is the name of the Google Cloud project.

    To ensure that your Service Account has the correct permissions to publish to the sink, use the following command to give the Service Account the predefined Pub/Sub Editor role:

    {% include_cached copy-clipboard.html %}

    gcloud projects add-iam-policy-binding cockroach-project --member='serviceAccount:cdc-demo@cockroach-project.iam.gserviceaccount.com' --role='roles/pubsub.editor'
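    Optionally, you can verify the binding with standard gcloud policy filtering (a sketch, not part of the original steps):

    ~~~ shell
    # Show the roles granted to the cdc-demo service account on the project.
    $ gcloud projects get-iam-policy cockroach-project \
        --flatten="bindings[].members" \
        --filter="bindings.members:cdc-demo" \
        --format="table(bindings.role)"
    ~~~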
  2. Create the Pub/Sub topic to which your changefeed will emit messages:

    {% include_cached copy-clipboard.html %}

    gcloud pubsub topics create movr-users --project cockroach-project

    Run the following command to create a subscription within the movr-users topic:

    {% include_cached copy-clipboard.html %}

    gcloud pubsub subscriptions create movr-users-sub --topic=movr-users --topic-project=cockroach-project
  3. With the topic and subscription set up, you can now download your Service Account's key. Use the following command to create and download the JSON key file (key.json):

    {% include_cached copy-clipboard.html %}

    gcloud iam service-accounts keys create key.json --iam-account=cdc-demo@cockroach-project.iam.gserviceaccount.com

    Next, base64 encode your credentials key using the command specific to your platform.

    If you're working on macOS:

    {% include_cached copy-clipboard.html %}

    cat key.json | base64

    If you're working on Linux, run the following to ensure that lines are not wrapped in the output:

    {% include_cached copy-clipboard.html %}

    cat key.json | base64 -w 0

    Copy the output so that you can add it to your CREATE CHANGEFEED statement in the next step. The key must be base64 encoded before you pass it in the URI.
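    To avoid copy-paste errors, you can instead capture the encoded key in a shell variable (a sketch using the same base64 commands as above) and interpolate it into the statement in the next step:

    ~~~ shell
    # Linux (no line wrapping):
    $ ENCODED_KEY=$(base64 -w 0 key.json)
    # macOS:
    $ ENCODED_KEY=$(cat key.json | base64)
    ~~~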

  4. Back in the SQL shell, create a changefeed that will emit messages to your Pub/Sub topic. Ensure that you pass the base64-encoded credentials for your Service Account and add your topic's region:

    {% include_cached copy-clipboard.html %}

    CREATE CHANGEFEED FOR TABLE users INTO 'gcpubsub://cockroach-project?region=us-east1&topic_name=movr-users&AUTH=specified&CREDENTIALS={base64-encoded key}';

    The output confirms the topic to which the changefeed will emit messages.

           job_id
    ----------------------
    756641304964792321
    (1 row)
    
    NOTICE: changefeed will emit to topic movr-users
    

    To view all the messages delivered to your topic, you can use the Cloud Console. You'll see the messages emitted to the movr-users-sub subscription.

    *Google Cloud Console: changefeed message output from the movr database*

    To view published messages from your terminal, run the following command:

    {% include_cached copy-clipboard.html %}

    gcloud pubsub subscriptions pull movr-users-sub --auto-ack --limit=10

    Because this command acknowledges messages as it pulls them (--auto-ack), each message is delivered only once per subscription: running the command again returns the next set of messages rather than the same ones. The --limit flag controls how many messages are pulled at a time. For more details, see the gcloud pubsub subscriptions pull documentation.

    ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────────────────────────────────────────────────────┬────────────┬──────────────────┐
    │                                                                                                                                 DATA                                                                                                                                 │    MESSAGE_ID    │                       ORDERING_KEY                      │ ATTRIBUTES │ DELIVERY_ATTEMPT │
    ├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────────────────────────────────────────────────────┼────────────┼──────────────────┤
    │ {"key":["boston","40ef7cfa-5e16-4bd3-9e14-2f23407a66df"],"value":{"after":{"address":"14980 Gentry Plains Apt. 64","city":"boston","credit_card":"2466765790","id":"40ef7cfa-5e16-4bd3-9e14-2f23407a66df","name":"Vickie Fitzpatrick"}},"topic":"movr-users"}         │ 4466153049158588 │ ["boston", "40ef7cfa-5e16-4bd3-9e14-2f23407a66df"]      │            │                  │
    │ {"key":["los angeles","947ae147-ae14-4800-8000-00000000001d"],"value":{"after":{"address":"35627 Chelsey Tunnel Suite 94","city":"los angeles","credit_card":"2099932769","id":"947ae147-ae14-4800-8000-00000000001d","name":"Kenneth Barnes"}},"topic":"movr-users"} │ 4466144577818136 │ ["los angeles", "947ae147-ae14-4800-8000-00000000001d"] │            │                  │
    │ {"key":["amsterdam","c28f5c28-f5c2-4000-8000-000000000026"],"value":{"after":{"address":"14729 Karen Radial","city":"amsterdam","credit_card":"5844236997","id":"c28f5c28-f5c2-4000-8000-000000000026","name":"Maria Weber"}},"topic":"movr-users"}                   │ 4466151194002912 │ ["amsterdam", "c28f5c28-f5c2-4000-8000-000000000026"]   │            │                  │
    │ {"key":["new york","6c8ab772-584a-439d-b7b4-fda37767c74c"],"value":{"after":{"address":"34196 Roger Row Suite 6","city":"new york","credit_card":"3117945420","id":"6c8ab772-584a-439d-b7b4-fda37767c74c","name":"James Lang"}},"topic":"movr-users"}                 │ 4466147099992681 │ ["new york", "6c8ab772-584a-439d-b7b4-fda37767c74c"]    │            │                  │
    │ {"key":["boston","c56dab0a-63e7-4fbb-a9af-54362c481c41"],"value":{"after":{"address":"83781 Ross Overpass","city":"boston","credit_card":"7044597874","id":"c56dab0a-63e7-4fbb-a9af-54362c481c41","name":"Mark Butler"}},"topic":"movr-users"}                        │ 4466150752442731 │ ["boston", "c56dab0a-63e7-4fbb-a9af-54362c481c41"]      │            │                  │
    │ {"key":["amsterdam","f27e09d5-d7cd-4f88-8b65-abb910036f45"],"value":{"after":{"address":"77153 Donald Road Apt. 62","city":"amsterdam","credit_card":"7531160744","id":"f27e09d5-d7cd-4f88-8b65-abb910036f45","name":"Lisa Sandoval"}},"topic":"movr-users"}          │ 4466147182359256 │ ["amsterdam", "f27e09d5-d7cd-4f88-8b65-abb910036f45"]   │            │                  │
    │ {"key":["new york","46d200c0-6924-4cc7-b3c9-3398997acb84"],"value":{"after":{"address":"92843 Carlos Grove","city":"new york","credit_card":"8822366402","id":"46d200c0-6924-4cc7-b3c9-3398997acb84","name":"Mackenzie Malone"}},"topic":"movr-users"}                │ 4466142864542016 │ ["new york", "46d200c0-6924-4cc7-b3c9-3398997acb84"]    │            │                  │
    │ {"key":["boston","52ecbb26-0eab-4e0b-a160-90caa6a7d350"],"value":{"after":{"address":"95044 Eric Corner Suite 33","city":"boston","credit_card":"3982363300","id":"52ecbb26-0eab-4e0b-a160-90caa6a7d350","name":"Brett Porter"}},"topic":"movr-users"}                │ 4466152539161631 │ ["boston", "52ecbb26-0eab-4e0b-a160-90caa6a7d350"]      │            │                  │
    │ {"key":["amsterdam","ae147ae1-47ae-4800-8000-000000000022"],"value":{"after":{"address":"88194 Angela Gardens Suite 94","city":"amsterdam","credit_card":"4443538758","id":"ae147ae1-47ae-4800-8000-000000000022","name":"Tyler Dalton"}},"topic":"movr-users"}       │ 4466151398997150 │ ["amsterdam", "ae147ae1-47ae-4800-8000-000000000022"]   │            │                  │
    │ {"key":["paris","dc28f5c2-8f5c-4800-8000-00000000002b"],"value":{"after":{"address":"2058 Rodriguez Stream","city":"paris","credit_card":"9584502537","id":"dc28f5c2-8f5c-4800-8000-00000000002b","name":"Tony Ortiz"}},"topic":"movr-users"}                         │ 4466146372222914 │ ["paris", "dc28f5c2-8f5c-4800-8000-00000000002b"]       │            │                  │
    └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────────────────────────────────────────────────────┴────────────┴──────────────────┘
    

## Create a changefeed connected to a cloud storage sink

{{site.data.alerts.callout_info}} CREATE CHANGEFEED is an {{ site.data.products.enterprise }}-only feature. For the Core version, see the CHANGEFEED FOR example. {{site.data.alerts.end}}

In this example, you'll set up a changefeed for a single-node cluster that is connected to an AWS S3 sink. The changefeed watches two tables. Note that you can set up changefeeds for any of the supported cloud storage providers.

  1. If you do not already have one, request a trial {{ site.data.products.enterprise }} license.

  2. Use the cockroach start-single-node command to start a single-node cluster:

    {% include_cached copy-clipboard.html %}

    $ cockroach start-single-node --insecure --listen-addr=localhost --background

{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %}

{% include {{ page.version.version }}/cdc/create-example-db-cdc.md %}

  1. Start the changefeed:

    {% include_cached copy-clipboard.html %}

    > CREATE CHANGEFEED FOR TABLE office_dogs, employees INTO 's3://example-bucket-name/test?AWS_ACCESS_KEY_ID=enter_key_here&AWS_SECRET_ACCESS_KEY=enter_key_here' WITH updated, resolved='10s';
            job_id       
    +--------------------+
      360645287206223873
    (1 row)
    

    This starts the changefeed in the background and returns the job_id. The changefeed writes to AWS.
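    To verify that files are arriving, you can list the bucket path with the AWS CLI (this assumes the CLI is installed and configured with the same credentials):

    ~~~ shell
    # List the changefeed files emitted to the bucket, including the
    # resolved timestamp files produced by the resolved='10s' option.
    $ aws s3 ls s3://example-bucket-name/test/ --recursive
    ~~~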

  2. Monitor your changefeed on the DB Console. For more information, see Changefeeds Dashboard.

  3. When you are done, exit the SQL shell (\q).

  4. To stop cockroach:

    Get the process ID of the node:

    {% include_cached copy-clipboard.html %}

    ps -ef | grep cockroach | grep -v grep
      501 21766     1   0  6:21PM ttys001    0:00.89 cockroach start-single-node --insecure --listen-addr=localhost
    

    Gracefully shut down the node, specifying its process ID:

    {% include_cached copy-clipboard.html %}

    kill -TERM 21766
    initiating graceful shutdown of server
    server drained and shutdown completed
    

## Create a changefeed connected to a webhook sink

{{site.data.alerts.callout_info}} CREATE CHANGEFEED is an {{ site.data.products.enterprise }}-only feature. For the Core version, see the CHANGEFEED FOR example. {{site.data.alerts.end}}

{{site.data.alerts.callout_info}} {% include feature-phases/preview.md %} {{site.data.alerts.end}}

In this example, you'll set up a changefeed for a single-node cluster that is connected to a local HTTP server via a webhook. You'll use a test HTTP server to try out the webhook sink.

  1. If you do not already have one, request a trial {{ site.data.products.enterprise }} license.

  2. Use the cockroach start-single-node command to start a single-node cluster:

    {% include_cached copy-clipboard.html %}

    $ cockroach start-single-node --insecure --listen-addr=localhost --background
  3. In this example, you'll run CockroachDB's MovR application workload to set up some data for your changefeed.

    First create the schema for the workload:

    {% include_cached copy-clipboard.html %}

    cockroach workload init movr "postgresql://root@localhost:26257?sslmode=disable"

    Then run the workload:

    {% include_cached copy-clipboard.html %}

    cockroach workload run movr --duration=1m "postgresql://root@localhost:26257?sslmode=disable"

{% include {{ page.version.version }}/cdc/sql-cluster-settings-example.md %}

  1. In a separate terminal window, set up your HTTP server. Clone the test repository:

    {% include_cached copy-clipboard.html %}

    git clone https://github.com/cockroachlabs/cdc-webhook-sink-test-server.git

    {% include_cached copy-clipboard.html %}

    cd cdc-webhook-sink-test-server/go-https-server
  2. Next, make the script executable, and then run the server (pass a specific port if you'd prefer; otherwise, it defaults to :3000):

    {% include_cached copy-clipboard.html %}

    chmod +x ./server.sh

    {% include_cached copy-clipboard.html %}

    ./server.sh <port>
  3. Back in your SQL shell, run the following statement to create a changefeed that emits to your webhook sink:

    {% include_cached copy-clipboard.html %}

    CREATE CHANGEFEED FOR TABLE movr.vehicles INTO 'webhook-https://localhost:3000?insecure_tls_skip_verify=true' WITH updated;

    You set up a changefeed on the vehicles table, which emits changefeed messages to the local HTTP server.

    See the options table for more information on the options available for creating your changefeed to a webhook sink.

          job_id
    ----------------------
    687842491801632769
    (1 row)
    

    In the terminal where your HTTP server is running, you'll receive output similar to:

    2021/08/24 14:00:21 {"payload":[{"after":{"city":"rome","creation_time":"2019-01-02T03:04:05","current_location":"39141 Travis Curve Suite 87","ext":{"brand":"Schwinn","color":"red"},"id":"d7b18299-c0c4-4304-9ef7-05ae46fd5ee1","dog_owner_id":"5d0c85b5-8866-47cf-a6bc-d032f198e48f","status":"in_use","type":"bike"},"key":["rome","d7b18299-c0c4-4304-9ef7-05ae46fd5ee1"],"topic":"vehicles","updated":"1629813621680097993.0000000000"}],"length":1}
    2021/08/24 14:00:22 {"payload":[{"after":{"city":"san francisco","creation_time":"2019-01-02T03:04:05","current_location":"84888 Wallace Wall","ext":{"color":"black"},"id":"020cf7f4-6324-48a0-9f74-6c9010fb1ab4","dog_owner_id":"b74ea421-fcaf-4d80-9dcc-d222d49bdc17","status":"available","type":"scooter"},"key":["san francisco","020cf7f4-6324-48a0-9f74-6c9010fb1ab4"],"topic":"vehicles","updated":"1629813621680097993.0000000000"}],"length":1}
    2021/08/24 14:00:22 {"payload":[{"after":{"city":"san francisco","creation_time":"2019-01-02T03:04:05","current_location":"3893 Dunn Fall Apt. 11","ext":{"color":"black"},"id":"21b2ec54-81ad-4af7-a76d-6087b9c7f0f8","dog_owner_id":"8924c3af-ea6e-4e7e-b2c8-2e318f973393","status":"lost","type":"scooter"},"key":["san francisco","21b2ec54-81ad-4af7-a76d-6087b9c7f0f8"],"topic":"vehicles","updated":"1629813621680097993.0000000000"}],"length":1}
    

    For more detail on emitted changefeed messages, see responses.
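    One of those options, webhook_sink_config, lets you tune how messages are batched to the endpoint. A hedged sketch (check the options table for the fields supported in your version):

    ~~~ sql
    -- Batch up to 10 messages, or flush every 5 seconds, whichever comes first.
    CREATE CHANGEFEED FOR TABLE movr.vehicles
      INTO 'webhook-https://localhost:3000?insecure_tls_skip_verify=true'
      WITH updated, webhook_sink_config = '{"Flush": {"Messages": 10, "Frequency": "5s"}}';
    ~~~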

Core changefeeds stream row-level changes to a client until the underlying SQL connection is closed.

## Create a Core changefeed

{% include {{ page.version.version }}/cdc/create-core-changefeed.md %}

## Create a Core changefeed using Avro

{% include {{ page.version.version }}/cdc/create-core-changefeed-avro.md %}

For further information on Core changefeeds, see EXPERIMENTAL CHANGEFEED FOR.
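For reference, the minimal form of a Core changefeed statement, run from an interactive SQL shell (it streams rows until the connection closes):

~~~ sql
> EXPERIMENTAL CHANGEFEED FOR office_dogs;
~~~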

## See also