diff --git a/.github/ISSUE_TEMPLATE/help_support.md b/.github/ISSUE_TEMPLATE/help_support.md
new file mode 100644
index 00000000000..3b7cc7c25ae
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/help_support.md
@@ -0,0 +1,11 @@
+---
+name: QA/Help/Support
+about: Please use [Discussion](https://github.com/uber/cadence/discussions) or [StackOverflow](https://stackoverflow.com/questions/tagged/cadence-workflow) for QA/Help/Support
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+Please use [Discussion](https://github.com/uber/cadence/discussions) or [StackOverflow](https://stackoverflow.com/questions/tagged/cadence-workflow) for QA/Help/Support.
+Do NOT use issues for this.
\ No newline at end of file
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index fd6eb9200ec..6570c314999 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -81,18 +81,17 @@ Also use `docker-compose -f ./docker/dev/cassandra.yml down` to stop and clean u
 ### 3. Schema installation
 Based on the above dependency setup, you also need to install the schemas.
-* If you use `cassandra.yml` or `cassandra-esv7-kafka.yml`, then run `make install-schema` to install Casandra schemas
+* If you use `cassandra.yml` then run `make install-schema` to install Cassandra schemas
+* If you use `cassandra-esv7-kafka.yml` then run `make install-schema && make install-schema-es-v7` to install Cassandra & ElasticSearch schemas
 * If you use `mysql.yml` then run `install-schema-mysql` to install MySQL schemas
 * If you use `postgres.yml` then run `install-schema-postgres` to install Postgres schemas
-Beside database schema, you will also need to install ElasticSearch schema if you use `cassandra-esv7-kafka.yml`:
-Run below commands:
-```bash
-export ES_SCHEMA_FILE=./schema/elasticsearch/v7/visibility/index_template.json
-curl -X PUT "http://127.0.0.1:9200/_template/cadence-visibility-template" -H 'Content-Type: application/json' --data-binary "@$ES_SCHEMA_FILE"
-curl -X PUT "http://127.0.0.1:9200/cadence-visibility-dev"
+:warning: Note:
+>If you use `cassandra-esv7-kafka.yml` and start the server before running `make install-schema-es-v7`, ElasticSearch may create a wrong index on demand. You will have to delete the wrong index and then run `make install-schema-es-v7` again. To delete the wrong index:
+```
+curl -X DELETE "http://127.0.0.1:9200/cadence-visibility-dev"
 ```
-They will create an index template and an index in ElasticSearch.
 ### 4. Run
 Once you have done all above, try running the local binaries:
@@ -113,6 +112,7 @@ Then register a domain:
 Then run a helloworld from [Go Client Sample](https://github.com/uber-common/cadence-samples/) or [Java Client Sample](https://github.com/uber/cadence-java-samples)
+See [instructions](service/worker/README.md) for setting up replication (XDC).
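Putting the schema and run steps above together, a complete local setup with advanced visibility looks roughly like the sketch below. This is a minimal sketch, not part of the patch: the compose file name, the domain name, and the `--rd` retention flag are illustrative, and it assumes ElasticSearch listens on the default `127.0.0.1:9200` used throughout this guide.

```bash
# Bring up the dependencies (Cassandra, ElasticSearch v7, Kafka) in the background.
docker-compose -f ./docker/dev/cassandra-esv7-kafka.yml up -d

# Install the Cassandra schemas and the ElasticSearch visibility template/index.
make install-schema
make install-schema-es-v7

# Sanity-check the ElasticSearch side before starting the server,
# so a wrong index is never created on demand.
curl -s "http://127.0.0.1:9200/_template/cadence-visibility-template"
curl -s "http://127.0.0.1:9200/_cat/indices/cadence-visibility*?v"

# Start the server, then register a domain for the samples.
./cadence-server start &
cadence --do samples-domain domain register --rd 1
```

If the index check shows an unexpected index, delete it with the `curl -X DELETE` command from the note above and re-run `make install-schema-es-v7`.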
## Issues to start with
diff --git a/Makefile b/Makefile
index 2755d094096..c89abc19673 100644
--- a/Makefile
+++ b/Makefile
@@ -554,42 +554,52 @@ install-schema-postgres: cadence-sql-tool
 	./cadence-sql-tool --ep 127.0.0.1 -p 5432 -u postgres -pw cadence --pl postgres --db cadence_visibility setup-schema -v 0.0
 	./cadence-sql-tool --ep 127.0.0.1 -p 5432 -u postgres -pw cadence --pl postgres --db cadence_visibility update-schema -d ./schema/postgres/visibility/versioned
+install-schema-es-v7:
+	export ES_SCHEMA_FILE=./schema/elasticsearch/v7/visibility/index_template.json && \
+	curl -X PUT "http://127.0.0.1:9200/_template/cadence-visibility-template" -H 'Content-Type: application/json' --data-binary "@$$ES_SCHEMA_FILE" && \
+	curl -X PUT "http://127.0.0.1:9200/cadence-visibility-dev"
+
+install-schema-es-v6:
+	export ES_SCHEMA_FILE=./schema/elasticsearch/v6/visibility/index_template.json && \
+	curl -X PUT "http://127.0.0.1:9200/_template/cadence-visibility-template" -H 'Content-Type: application/json' --data-binary "@$$ES_SCHEMA_FILE" && \
+	curl -X PUT "http://127.0.0.1:9200/cadence-visibility-dev"
+
 start: bins
 	./cadence-server start
-install-schema-cdc: cadence-cassandra-tool
-	@echo Setting up cadence_active key space
-	./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_active --rf 1
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_active setup-schema -v 0.0
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_active update-schema -d ./schema/cassandra/cadence/versioned
-	./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_visibility_active --rf 1
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_active setup-schema -v 0.0
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_active update-schema -d ./schema/cassandra/visibility/versioned
-
-	@echo Setting up cadence_standby key space
-	./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_standby --rf 1
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_standby setup-schema -v 0.0
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_standby update-schema -d ./schema/cassandra/cadence/versioned
-	./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_visibility_standby --rf 1
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_standby setup-schema -v 0.0
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_standby update-schema -d ./schema/cassandra/visibility/versioned
-
-	@echo Setting up cadence_other key space
-	./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_other --rf 1
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_other setup-schema -v 0.0
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_other update-schema -d ./schema/cassandra/cadence/versioned
-	./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_visibility_other --rf 1
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_other setup-schema -v 0.0
-	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_other update-schema -d ./schema/cassandra/visibility/versioned
-
-start-cdc-active: bins
-	./cadence-server --zone active start
-
-start-cdc-standby: bins
-	./cadence-server --zone standby start
-
-start-cdc-other: bins
-	./cadence-server --zone other start
+install-schema-xdc: cadence-cassandra-tool
+	@echo Setting up cadence_cluster0 key space
+	./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_cluster0 --rf 1
+	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_cluster0 setup-schema -v 0.0
+	./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_cluster0 update-schema -d 
./schema/cassandra/cadence/versioned + ./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_visibility_cluster0 --rf 1 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_cluster0 setup-schema -v 0.0 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_cluster0 update-schema -d ./schema/cassandra/visibility/versioned + + @echo Setting up cadence_cluster1 key space + ./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_cluster1 --rf 1 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_cluster1 setup-schema -v 0.0 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_cluster1 update-schema -d ./schema/cassandra/cadence/versioned + ./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_visibility_cluster1 --rf 1 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_cluster1 setup-schema -v 0.0 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_cluster1 update-schema -d ./schema/cassandra/visibility/versioned + + @echo Setting up cadence_cluster2 key space + ./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_cluster2 --rf 1 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_cluster2 setup-schema -v 0.0 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_cluster2 update-schema -d ./schema/cassandra/cadence/versioned + ./cadence-cassandra-tool --ep 127.0.0.1 create -k cadence_visibility_cluster2 --rf 1 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_cluster2 setup-schema -v 0.0 + ./cadence-cassandra-tool --ep 127.0.0.1 -k cadence_visibility_cluster2 update-schema -d ./schema/cassandra/visibility/versioned + +start-xdc-cluster0: bins + ./cadence-server --zone xdc_cluster0 start + +start-xdc-cluster1: bins + ./cadence-server --zone xdc_cluster1 start + +start-xdc-cluster2: bins + ./cadence-server --zone xdc_cluster2 start start-canary: bins ./cadence-canary start diff --git a/README.md b/README.md index 47cf7c2a429..f56c9eb3047 100644 --- a/README.md +++ b/README.md @@ -3,67 +3,58 @@ [![Coverage Status](https://coveralls.io/repos/github/uber/cadence/badge.svg)](https://coveralls.io/github/uber/cadence) [![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](http://t.uber.com/cadence-slack) -Visit [cadenceworkflow.io](https://cadenceworkflow.io) to learn about Cadence. +This repo contains the source code of the Cadence server and other tooling including CLI, schema tools, bench and canary. -This repo contains the source code of the Cadence server. To implement workflows, activities and worker use [Go client](https://github.com/uber-go/cadence-client) or [Java client](https://github.com/uber-java/cadence-client). +You can implement your workflows with one of our client libraries. +The [Go](https://github.com/uber-go/cadence-client) and [Java](https://github.com/uber-java/cadence-client) libraries are officially maintained by the Cadence team, +while the [Python](https://github.com/firdaus/cadence-python) and [Ruby](https://github.com/coinbase/cadence-ruby) client libraries are developed by the community. See Maxim's talk at [Data@Scale Conference](https://atscaleconference.com/videos/cadence-microservice-architecture-beyond-requestreply) for an architectural overview of Cadence. +Visit [cadenceworkflow.io](https://cadenceworkflow.io) to learn more about Cadence. Join us in [Cadence Documentation](https://github.com/uber/cadence-docs) project. Feel free to raise an Issue or Pull Request there. 
+
+### Community
+* [Github Discussion](https://github.com/uber/cadence/discussions)
+  * Best for Q&A, support/help, general discussion, and announcements
+* [StackOverflow](https://stackoverflow.com/questions/tagged/cadence-workflow)
+  * Best for Q&A and general discussion
+* [Github Issues](https://github.com/uber/cadence/issues)
+  * Best for reporting bugs and feature requests
+* [Slack](http://t.uber.com/cadence-slack)
+  * Best for contributing/development discussion
+
 ## Getting Started
-### Start the cadence-server locally
+### Start the cadence-server
-We highly recommend that you use [Cadence service docker](docker/README.md) to run the service.
+To run Cadence services locally, we highly recommend using [Cadence service docker](docker/README.md).
+You can also follow the [instructions](./CONTRIBUTING.md) to build and run it.
+
+Please visit our [documentation](https://cadenceworkflow.io/docs/operation-guide/) site for production/cluster setup.
 ### Run the Samples
 Try out the sample recipes for [Go](https://github.com/uber-common/cadence-samples) or [Java](https://github.com/uber/cadence-java-samples) to get started.
-### Client SDKs
-Java and Golang clients are developed by Cadence team:
-* [Java Client](https://github.com/uber/cadence-java-client)
-* [Go Client](https://github.com/uber-go/cadence-client)
-
-Other clients are developed by community:
-* [Python Client](https://github.com/firdaus/cadence-python)
-* [Ruby Client](https://github.com/coinbase/cadence-ruby)
+### Use [Cadence CLI](https://cadenceworkflow.io/docs/cli/)
-### Use CLI Tools
+Cadence CLI can be used to operate workflows, tasklists, domains, and even the clusters.
-* Use [Cadence command-line tool](https://cadenceworkflow.io/docs/cli/) to perform various tasks on Cadence server cluster
-  * Use brew to install CLI: `brew install cadence-workflow`
-  * Use docker image for CLI: `docker run --rm ubercadence/cli:<version>` or `docker run --rm ubercadence/cli:master <command>` . Be sure to update your image when you want to try new features: `docker pull ubercadence/cli:master `
-  * Build the CLI image, see [instructions](docker/README.md#diy-building-an-image-for-any-tag-or-branch)
-  * Check out the repo and run `make cadence` to build all tools. See [CONTRIBUTING](CONTRIBUTING.md) for prerequisite of make command.
+You can install Cadence CLI in any of the following ways:
+* Use brew to install CLI: `brew install cadence-workflow`
+* Use the docker image for CLI: `docker run --rm ubercadence/cli:<version>` or `docker run --rm ubercadence/cli:master <command>`. Be sure to update your image when you want to try new features: `docker pull ubercadence/cli:master`
+* Build the CLI binary yourself: check out the repo and run `make cadence` to build all tools. See [CONTRIBUTING](CONTRIBUTING.md) for the prerequisites of the make command.
+* Build the CLI image yourself: see [instructions](docker/README.md#diy-building-an-image-for-any-tag-or-branch)
-
-* For [manual setup or upgrading](docs/persistence.md) server schema --
-  * Use brew to install CLI: `brew install cadence-workflow` which also includes `cadence-sql-tool` and `cadence-cassandra-tool`
-  * If server runs with Cassandra, Use [Cadence Cassandra tool](tools/cassandra/README.md) to perform various tasks on database schema of Cassandra persistence
-  * If server runs with SQL database, Use [Cadence SQL tool](tools/sql/README.md) to perform various tasks on database schema of SQL based persistence
-
-> Tips: Use `make tools` to build all tools
+Cadence CLI is a powerful tool. The commands are organized by **tabs**, e.g. `workflow`->`batch`->`start`, or `admin`->`workflow`->`describe`.
+Please read the [documentation](https://cadenceworkflow.io/docs/cli/#documentation) and always try out `--help` on any tab to learn & explore.
+
 ### Use Cadence Web
 Try out [Cadence Web UI](https://github.com/uber/cadence-web) to view your workflows on Cadence.
 (This is already available at localhost:8088 if you run Cadence with docker compose)
-## Documentation
-
-Visit [cadenceworkflow.io](https://cadenceworkflow.io) for documentation.
-
-Join us in [Cadence Docs](https://github.com/uber/cadence-docs) project. Raise an Issue or Pull Request there.
-
-## Community
-* [Github Discussion](https://github.com/uber/cadence/discussions)
-  * Best for Q&A, support/help, general discusion, and annoucement
-* [StackOverflow](https://stackoverflow.com/questions/tagged/cadence-workflow)
-  * Best for Q&A and general discusion
-* [Github Issues](https://github.com/uber/cadence/issues)
-  * Best for reporting bugs and feature requests
-* [Slack](http://t.uber.com/cadence-slack)
-  * Best for contributing/development discussion
 ## Contributing
@@ -71,6 +62,25 @@ We'd love your help in making Cadence great. Please review our [contribution gui
 If you'd like to propose a new feature, first join the [Slack channel](http://t.uber.com/cadence-slack) to start a discussion and check if there are existing design discussions. Also peruse our [design docs](docs/design/index.md) in case a feature has been designed but not yet implemented. Once you're sure the proposal is not covered elsewhere, please follow our [proposal instructions](PROPOSALS.md).
+## Other binaries in this repo
+
+#### Bench/stress test workflow tools
+See [bench documentation](./bench/README.md).
+
+#### Periodic feature health check workflow tools (aka Canary)
+See [canary documentation](./canary/README.md).
+
+#### Schema tools for SQL and Cassandra
+The tools are for [manual setup or upgrading of the database schema](docs/persistence.md):
+
+ * If the server runs with Cassandra, use [Cadence Cassandra tool](tools/cassandra/README.md)
+ * If the server runs with a SQL database, use [Cadence SQL tool](tools/sql/README.md)
+
+The easiest way to get the schema tools is via homebrew.
+
+`brew install cadence-workflow` also includes `cadence-sql-tool` and `cadence-cassandra-tool`.
+ * The schema files are located at `/usr/local/etc/cadence/schema/`.
+
 ## License
 MIT License, please see [LICENSE](https://github.com/uber/cadence/blob/master/LICENSE) for details.
diff --git a/config/development.yaml b/config/development.yaml
index 44ab211cbbf..ad2c4c71257 100644
--- a/config/development.yaml
+++ b/config/development.yaml
@@ -74,10 +74,10 @@ services:
 clusterGroupMetadata:
   enableGlobalDomain: true
   failoverVersionIncrement: 10
-  masterClusterName: "active"
-  currentClusterName: "active"
+  masterClusterName: "cluster0"
+  currentClusterName: "cluster0"
   clusterGroup:
-    active:
+    cluster0:
       enabled: true
       initialFailoverVersion: 0
       rpcAddress: "localhost:7933" # this is to let worker service and XDC replicator connected to the frontend service. 
In cluster setup, localhost will not work diff --git a/config/development_oauth.yaml b/config/development_oauth.yaml index ea8d7568e28..2c33b0cd59b 100644 --- a/config/development_oauth.yaml +++ b/config/development_oauth.yaml @@ -43,10 +43,10 @@ authorization: clusterGroupMetadata: enableGlobalDomain: true failoverVersionIncrement: 10 - masterClusterName: "active" - currentClusterName: "active" + masterClusterName: "cluster0" + currentClusterName: "cluster0" clusterGroup: - active: + cluster0: enabled: true initialFailoverVersion: 0 rpcAddress: "localhost:7933" # this is to let worker service and XDC replicator connected to the frontend service. In cluster setup, localhost will not work diff --git a/config/development_active.yaml b/config/development_xdc_cluster0.yaml similarity index 88% rename from config/development_active.yaml rename to config/development_xdc_cluster0.yaml index 80484be9f09..5aa7aa232a0 100644 --- a/config/development_active.yaml +++ b/config/development_xdc_cluster0.yaml @@ -7,15 +7,15 @@ persistence: nosql: pluginName: "cassandra" hosts: "127.0.0.1" - keyspace: "cadence_active" + keyspace: "cadence_cluster0" cass-visibility: nosql: pluginName: "cassandra" hosts: "127.0.0.1" - keyspace: "cadence_visibility_active" + keyspace: "cadence_visibility_cluster0" ringpop: - name: cadence_active + name: cadence_cluster0 bootstrapMode: hosts bootstrapHosts: [ "127.0.0.1:7933", "127.0.0.1:7934", "127.0.0.1:7935", "127.0.0.1:7940" ] maxJoinDuration: 30s @@ -29,7 +29,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_active" + prefix: "cadence_cluster0" pprof: port: 7936 @@ -41,7 +41,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_active" + prefix: "cadence_cluster0" pprof: port: 7938 @@ -53,7 +53,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_active" + prefix: "cadence_cluster0" pprof: port: 7937 @@ -64,29 +64,29 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_active" + prefix: "cadence_cluster0" pprof: port: 7941 clusterGroupMetadata: enableGlobalDomain: true failoverVersionIncrement: 10 - primaryClusterName: "active" - currentClusterName: "active" + primaryClusterName: "cluster0" + currentClusterName: "cluster0" clusterGroup: - active: + cluster0: enabled: true initialFailoverVersion: 1 rpcName: "cadence-frontend" rpcAddress: "localhost:7833" # this is to let worker service and XDC replicator connected to the frontend service. In cluster setup, localhost will not work rpcTransport: "grpc" - standby: + cluster1: enabled: true initialFailoverVersion: 0 rpcName: "cadence-frontend" rpcAddress: "localhost:8833" # this is to let worker service and XDC replicator connected to the frontend service. 
In cluster setup, localhost will not work rpcTransport: "grpc" - other: + cluster2: enabled: true initialFailoverVersion: 2 rpcName: "cadence-frontend" diff --git a/config/development_standby.yaml b/config/development_xdc_cluster1.yaml similarity index 88% rename from config/development_standby.yaml rename to config/development_xdc_cluster1.yaml index 2facb0241fd..a869dc09280 100644 --- a/config/development_standby.yaml +++ b/config/development_xdc_cluster1.yaml @@ -7,15 +7,15 @@ persistence: nosql: pluginName: "cassandra" hosts: "127.0.0.1" - keyspace: "cadence_standby" + keyspace: "cadence_cluster1" cass-visibility: nosql: pluginName: "cassandra" hosts: "127.0.0.1" - keyspace: "cadence_visibility_standby" + keyspace: "cadence_visibility_cluster1" ringpop: - name: cadence_standby + name: cadence_cluster1 bootstrapMode: hosts bootstrapHosts: [ "127.0.0.1:8933", "127.0.0.1:8934", "127.0.0.1:8935", "127.0.0.1:8940" ] maxJoinDuration: 30s @@ -29,7 +29,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_standby" + prefix: "cadence_cluster1" pprof: port: 8936 @@ -41,7 +41,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_standby" + prefix: "cadence_cluster1" pprof: port: 8938 @@ -53,7 +53,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_standby" + prefix: "cadence_cluster1" pprof: port: 8937 @@ -64,29 +64,29 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_standby" + prefix: "cadence_cluster1" pprof: port: 8941 clusterGroupMetadata: enableGlobalDomain: true failoverVersionIncrement: 10 - primaryClusterName: "active" - currentClusterName: "standby" + primaryClusterName: "cluster0" + currentClusterName: "cluster1" clusterGroup: - active: + cluster0: enabled: true initialFailoverVersion: 1 rpcName: "cadence-frontend" rpcAddress: "localhost:7833" # this is to let worker service and XDC replicator connected to the frontend service. In cluster setup, localhost will not work rpcTransport: "grpc" - standby: + cluster1: enabled: true initialFailoverVersion: 0 rpcName: "cadence-frontend" rpcAddress: "localhost:8833" # this is to let worker service and XDC replicator connected to the frontend service. 
In cluster setup, localhost will not work rpcTransport: "grpc" - other: + cluster2: enabled: true initialFailoverVersion: 2 rpcName: "cadence-frontend" diff --git a/config/development_other.yaml b/config/development_xdc_cluster2.yaml similarity index 88% rename from config/development_other.yaml rename to config/development_xdc_cluster2.yaml index dc73f23a367..3bc0e3409fc 100644 --- a/config/development_other.yaml +++ b/config/development_xdc_cluster2.yaml @@ -7,15 +7,15 @@ persistence: nosql: pluginName: "cassandra" hosts: "127.0.0.1" - keyspace: "cadence_other" + keyspace: "cadence_cluster2" cass-visibility: nosql: pluginName: "cassandra" hosts: "127.0.0.1" - keyspace: "cadence_visibility_other" + keyspace: "cadence_visibility_cluster2" ringpop: - name: cadence_other + name: cadence_cluster2 bootstrapMode: hosts bootstrapHosts: [ "127.0.0.1:9933", "127.0.0.1:9934", "127.0.0.1:9935", "127.0.0.1:9940" ] maxJoinDuration: 30s @@ -29,7 +29,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_other" + prefix: "cadence_cluster2" pprof: port: 9936 @@ -41,7 +41,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_other" + prefix: "cadence_cluster2" pprof: port: 9938 @@ -53,7 +53,7 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_other" + prefix: "cadence_cluster2" pprof: port: 9937 @@ -64,29 +64,29 @@ services: metrics: statsd: hostPort: "127.0.0.1:8125" - prefix: "cadence_other" + prefix: "cadence_cluster2" pprof: port: 9941 clusterGroupMetadata: enableGlobalDomain: true failoverVersionIncrement: 10 - primaryClusterName: "active" - currentClusterName: "other" + primaryClusterName: "cluster0" + currentClusterName: "cluster2" clusterGroup: - active: + cluster0: enabled: true initialFailoverVersion: 1 rpcName: "cadence-frontend" rpcAddress: "localhost:7833" # this is to let worker service and XDC replicator connected to the frontend service. In cluster setup, localhost will not work rpcTransport: "grpc" - standby: + cluster1: enabled: true initialFailoverVersion: 0 rpcName: "cadence-frontend" rpcAddress: "localhost:8833" # this is to let worker service and XDC replicator connected to the frontend service. In cluster setup, localhost will not work rpcTransport: "grpc" - other: + cluster2: enabled: true initialFailoverVersion: 2 rpcName: "cadence-frontend" diff --git a/service/worker/README.md b/service/worker/README.md index ce5380d7f71..f66c06caf54 100644 --- a/service/worker/README.md +++ b/service/worker/README.md @@ -12,57 +12,43 @@ Replicator is a background worker responsible for consuming replication tasks generated by remote Cadence clusters and pass it down to processor so they can be applied to local Cadence cluster. -Quickstart for localhost development +Quickstart for local development with multiple Cadence clusters and replication ==================================== - -1. Setup Kafka by following instructions: -[Kafka Quickstart](https://kafka.apache.org/quickstart) -2. Create Kafka topic for active and standby clusters if needed. By default the development Kafka should create topics in- flight (with 1 partition). If not, then use the follow command to create topics: -``` -bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic active -``` -and +1. 
Start the dependencies using docker if you don't have them running:
```
-bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic standby
+docker-compose -f docker/dev/cassandra.yml up
```
-3. Start Cadence development server for active zone:
+Then install the schemas:
```
-./cadence-server --zone active start
+make install-schema-xdc
```
-4. Start Cadence development server for standby(passive) zone:
+2. Start Cadence development servers for cluster0, cluster1 and cluster2:
```
-./cadence-server --zone standby start
+./cadence-server --zone xdc_cluster0 start
+./cadence-server --zone xdc_cluster1 start
+./cadence-server --zone xdc_cluster2 start
```
-5. Create global domains
+3. Create a global Cadence domain that replicates data across clusters:
```
-cadence --do sample domain register --gd true --ac active --cl active standby
+cadence --do samples-domain domain register --ac cluster0 --cl cluster0 cluster1 cluster2
```
+Then run a helloworld from [Go Client Sample](https://github.com/uber-common/cadence-samples/) or [Java Client Sample](https://github.com/uber/cadence-java-samples).

-6. Failover between zones:
+4. Fail over a domain between clusters:

-Failover to standby:
+Fail over to cluster1:
```
-cadence --do sample domain update --ac standby
+cadence --do samples-domain domain update --ac cluster1
```
-Failback to active:
-```
-cadence --do sample domain update --ac active
-```
-
-Create replication task using CLI
---------------------------------
-
-Kafka CLI can be used to generate a replication task for testing purpose:
-
-```
-bin/kafka-console-producer.sh --broker-list localhost:9092 --topic standby
-```
-
-Replication task message:
+Or fail over to cluster2:
+```
+cadence --do samples-domain domain update --ac cluster2
+```
+Fail back to cluster0:
```
-{taskType: 0}
+cadence --do samples-domain domain update --ac cluster0
```

Archiver