Cadence Worker is a new role for the Cadence service, used for hosting any components responsible for performing background processing on the Cadence cluster.
Replicator is a background worker responsible for consuming replication tasks generated by remote Cadence clusters and passing them down to the processor so they can be applied to the local Cadence cluster.
It uses Kafka as the replication task buffer and relies on the [kafka-client library](https://github.com/uber-go/kafka-client/) for consuming messages from Kafka.
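Since the Replicator's consumption path is built on kafka-client, a minimal sketch of such a consumption loop (not the Replicator's actual code) might look like the following. The broker address, topic, cluster, and consumer group names are placeholders matching the local setup described below, and exact configuration fields may differ between library versions.

```go
package main

import (
	kafkaclient "github.com/uber-go/kafka-client"
	"github.com/uber-go/kafka-client/kafka"
	"github.com/uber-go/tally"
	"go.uber.org/zap"
)

func main() {
	// Cluster/topic layout for the local development setup (placeholder names).
	brokers := map[string][]string{"local": {"127.0.0.1:9092"}}
	topicToCluster := map[string][]string{
		"standby":     {"local"},
		"standby-dlq": {"local"},
	}

	// The client resolves topic names to broker addresses via a static name resolver.
	client := kafkaclient.New(kafka.NewStaticNameResolver(topicToCluster, brokers), zap.NewNop(), tally.NoopScope)

	// Consume the "standby" replication topic as part of a consumer group,
	// routing messages that fail processing to a dead-letter topic.
	consumer, err := client.NewConsumer(&kafka.ConsumerConfig{
		GroupName: "replicator-standby",
		TopicList: kafka.ConsumerTopicList{
			{
				Topic: kafka.Topic{Name: "standby", Cluster: "local"},
				DLQ:   kafka.Topic{Name: "standby-dlq", Cluster: "local"},
			},
		},
		Concurrency: 1,
	})
	if err != nil {
		panic(err)
	}
	if err := consumer.Start(); err != nil {
		panic(err)
	}
	defer consumer.Stop()

	// Each message carries a serialized replication task; ack once it has been
	// handed to the processor and applied to the local cluster.
	for msg := range consumer.Messages() {
		_ = msg.Value() // deserialize the replication task and process it here
		msg.Ack()
	}
}
```

The steps below bring up a local active/standby environment that exercises this replication path.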
- Set up Kafka by following the [Kafka quickstart](https://kafka.apache.org/quickstart) instructions.
- Create Kafka topics for the active and standby clusters if needed. By default the development Kafka should create topics on the fly (with 1 partition). If not, use the following commands to create the topics:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic active
and
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic standby
- Start the Cadence development server for the active zone:
./cadence-server --zone active start
- Start the Cadence development server for the standby (passive) zone:
./cadence-server --zone standby start
- Create a global domain (--gd marks the domain as global, --ac sets the initially active cluster, --cl lists its clusters):
cadence --do sample domain register --gd true --ac active --cl active standby
- Failover between zones:
Failover to standby:
cadence --do sample domain update --ac standby
Failback to active:
cadence --do sample domain update --ac active
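After a failover, the currently active cluster for the domain can be checked with the standard domain describe command (output fields may vary by CLI version):
cadence --do sample domain describe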
The Kafka CLI can be used to generate a replication task for testing purposes:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic standby
Replication task message:
{taskType: 0}
Archiver is used to handle archival of workflow execution histories. It does this by hosting a Cadence client worker and running an archival system workflow. The archival client is used to initiate archival by sending a signal to that workflow, and the archiver shards work across several such system workflows.
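A rough sketch of how such a signal-based, sharded handoff can look with the Cadence Go client is shown below. The shard count, workflow ID prefix, signal name, and request fields are illustrative placeholders rather than the archiver's actual constants; only SignalWithStartWorkflow and StartWorkflowOptions come from the Go client itself.

```go
package main

import (
	"context"
	"fmt"
	"hash/fnv"

	"go.uber.org/cadence/client"
)

// ArchiveRequest is a placeholder for the payload the archival client would
// signal to the archival system workflow (field names are illustrative).
type ArchiveRequest struct {
	DomainID   string
	WorkflowID string
	RunID      string
	URI        string
}

const (
	numArchivalWorkflows      = 5                  // illustrative shard count
	archivalWorkflowIDPrefix  = "cadence-archival" // illustrative workflow ID prefix
	archivalRequestSignalName = "archival-request" // illustrative signal name
)

// requestArchival routes an archival request to one of several long-running
// system workflows by hashing the workflow ID, then uses SignalWithStartWorkflow
// so the target workflow is started if it is not already running.
// opts must carry the task list and timeouts required to start the workflow, and
// archivalWorkflow is the registered archival system workflow function (or its name).
func requestArchival(ctx context.Context, c client.Client, req ArchiveRequest, opts client.StartWorkflowOptions, archivalWorkflow interface{}) error {
	h := fnv.New32a()
	_, _ = h.Write([]byte(req.WorkflowID))
	shard := h.Sum32() % numArchivalWorkflows

	// Pin the request to one of the sharded archival workflows.
	opts.ID = fmt.Sprintf("%s-%d", archivalWorkflowIDPrefix, shard)
	_, err := c.SignalWithStartWorkflow(ctx, opts.ID, archivalRequestSignalName, req, opts, archivalWorkflow)
	return err
}
```

Using SignalWithStartWorkflow keeps the handoff simple: if the target archival workflow is already running it just receives another request to process, and if it is not running it is started first and then signaled.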