diff --git a/.gitignore b/.gitignore index 64b31f15..4b7ed1f8 100644 --- a/.gitignore +++ b/.gitignore @@ -4,3 +4,7 @@ node_modules/ # Hugo-generated assets public/ resources/ + +# Link checker artifacts +bin/ +tmp/ diff --git a/.htmltest.yml b/.htmltest.yml new file mode 100644 index 00000000..50cbb62c --- /dev/null +++ b/.htmltest.yml @@ -0,0 +1,4 @@ +DirectoryPath: public +IgnoreDirectoryMissingTrailingSlash: true +CheckExternal: false +IgnoreAltMissing: true diff --git a/Makefile b/Makefile index 595090e9..8103b5dc 100644 --- a/Makefile +++ b/Makefile @@ -21,3 +21,14 @@ preview-build: --buildDrafts \ --buildFuture \ --minify + +clean: + rm -rf public + +link-checker-setup: + curl https://htmltest.wjdp.uk | bash + +run-link-checker: + bin/htmltest + +check-links: clean production-build link-checker-setup run-link-checker diff --git a/config.toml b/config.toml index a9c4d278..2f6fdef4 100644 --- a/config.toml +++ b/config.toml @@ -8,7 +8,8 @@ enableRobotsTxt = true ignoreFiles = [ "content/docs/v*/README.md", "content/docs/v*/*/README.md", - "content/docs/v*/docs.md" + "content/docs/v*/docs.md", + "content/docs/v3.4.0/etcd-mixin", ] [params] @@ -35,6 +36,9 @@ etcd is written in [Go](https://golang.org), which has excellent cross-platform Latency from the etcd leader is the most important metric to track and the built-in dashboard has a view dedicated to this. In our testing, severe latency will introduce instability within the cluster because Raft is only as fast as the slowest machine in the majority. You can mitigate this issue by properly tuning the cluster. etcd has been pre-tuned on cloud providers with highly variable networks. 
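For reference, the `bin/htmltest` run added to the Makefile above is driven entirely by the new `.htmltest.yml`. A hedged sketch of how that file could later be extended for a job that also checks external links — `IgnoreURLs` and `CacheExpires` are option names taken from the htmltest README as I recall them, so verify them against the installed htmltest version before relying on this:

```yaml
# Sketch only: extending .htmltest.yml for a run that also checks
# external links. Option names below are assumptions to verify
# against the htmltest README for the pinned version.
DirectoryPath: public
IgnoreDirectoryMissingTrailingSlash: true
CheckExternal: true            # flipped on for an external-link pass
CacheExpires: 24h              # avoid re-fetching every external URL each run
IgnoreAltMissing: true
IgnoreURLs:
  - "^https://groups.google.com/"   # example: hosts that rate-limit checkers
```

Keeping `CheckExternal: false` in the committed file, as the diff does, is the safer default for CI, since external checks are slow and flaky.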
""" +[markup.goldmark.renderer] +unsafe = true + [params.versions] latest = "3.4.0" all = ["3.4.0", "3.3.13", "3.3.12", "3.2.17", "3.1.12", "2"] diff --git a/content/docs/v3.3.12/benchmarks/etcd-2-2-0-benchmarks.md b/content/docs/v3.3.12/benchmarks/etcd-2-2-0-benchmarks.md index 6aae3596..ad33bbd5 100644 --- a/content/docs/v3.3.12/benchmarks/etcd-2-2-0-benchmarks.md +++ b/content/docs/v3.3.12/benchmarks/etcd-2-2-0-benchmarks.md @@ -26,7 +26,7 @@ Go OS/Arch: linux/amd64 ## Testing -Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures. +Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](https://github.com/etcd-io/etcd/tree/master/hack/benchmark) for the patch and the steps to reproduce our procedures. The performance is calculated through results of 100 benchmark rounds. 
diff --git a/content/docs/v3.3.12/benchmarks/etcd-2-2-0-rc-benchmarks.md b/content/docs/v3.3.12/benchmarks/etcd-2-2-0-rc-benchmarks.md index df325103..6cd697f2 100644 --- a/content/docs/v3.3.12/benchmarks/etcd-2-2-0-rc-benchmarks.md +++ b/content/docs/v3.3.12/benchmarks/etcd-2-2-0-rc-benchmarks.md @@ -73,4 +73,4 @@ Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send req [hey]: https://github.com/rakyll/hey [c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144 [etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md -[hack-benchmark]: ../../hack/benchmark/ +[hack-benchmark]: https://github.com/etcd-io/etcd/tree/master/hack/benchmark diff --git a/content/docs/v3.3.12/benchmarks/etcd-3-demo-benchmarks.md b/content/docs/v3.3.12/benchmarks/etcd-3-demo-benchmarks.md index 13ed2fe8..eaeed465 100644 --- a/content/docs/v3.3.12/benchmarks/etcd-3-demo-benchmarks.md +++ b/content/docs/v3.3.12/benchmarks/etcd-3-demo-benchmarks.md @@ -43,4 +43,4 @@ The performance is nearly the same as the one with empty server handler. The performance with empty server handler is not affected by one put. So the performance downgrade should be caused by storage package. 
-[etcd-v3-benchmark]: ../../tools/benchmark/ +[etcd-v3-benchmark]: https://github.com/etcd-io/etcd/tree/master/tools/benchmark diff --git a/content/docs/v3.3.12/dev-guide/api_grpc_gateway.md b/content/docs/v3.3.12/dev-guide/api_grpc_gateway.md index 433959d3..4880c690 100644 --- a/content/docs/v3.3.12/dev-guide/api_grpc_gateway.md +++ b/content/docs/v3.3.12/dev-guide/api_grpc_gateway.md @@ -134,4 +134,4 @@ Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][ [grpc-gateway]: https://github.com/grpc-ecosystem/grpc-gateway [json-mapping]: https://developers.google.com/protocol-buffers/docs/proto3#json [swagger]: http://swagger.io/ -[swagger-doc]: apispec/swagger/rpc.swagger.json +[swagger-doc]: /apispec/swagger/rpc.swagger.json diff --git a/content/docs/v3.3.12/op-guide/clustering.md b/content/docs/v3.3.12/op-guide/clustering.md index 99d41144..61b28a6c 100644 --- a/content/docs/v3.3.12/op-guide/clustering.md +++ b/content/docs/v3.3.12/op-guide/clustering.md @@ -494,5 +494,5 @@ To setup an etcd cluster with proxies of v2 API, please read the the [clustering [clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md [security-guide]: security.md [security-guide-dns-srv]: security.md#notes-for-dns-srv -[tls-setup]: ../../hack/tls-setup +[tls-setup]: https://github.com/etcd-io/etcd/tree/master/hack/tls-setup [gateway]: gateway.md diff --git a/content/docs/v3.3.12/op-guide/configuration.md b/content/docs/v3.3.12/op-guide/configuration.md index d2ff0254..ad29bef3 100644 --- a/content/docs/v3.3.12/op-guide/configuration.md +++ b/content/docs/v3.3.12/op-guide/configuration.md @@ -424,10 +424,10 @@ Follow the instructions when using these flags. 
[reconfig]: runtime-configuration.md [discovery]: clustering.md#discovery [iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt -[proxy]: ../v2/proxy.md -[restore]: ../v2/admin_guide.md#restoring-a-backup +[proxy]: /docs/v2/proxy.md +[restore]: /docs/v2/admin_guide.md#restoring-a-backup [security]: security.md [systemd-intro]: http://freedesktop.org/wiki/Software/systemd/ [tuning]: ../tuning.md#time-parameters [sample-config-file]: ../../etcd.conf.yml.sample -[recovery]: recovery.md#disaster-recovery +[recovery]: recovery.md diff --git a/content/docs/v3.3.12/op-guide/recovery.md b/content/docs/v3.3.12/op-guide/recovery.md index e20e1401..83f03070 100644 --- a/content/docs/v3.3.12/op-guide/recovery.md +++ b/content/docs/v3.3.12/op-guide/recovery.md @@ -6,7 +6,7 @@ etcd is designed to withstand machine failures. An etcd cluster automatically re To recover from disastrous failure, etcd v3 provides snapshot and restore facilities to recreate the cluster without v3 key data loss. To recover v2 keys, refer to the [v2 admin guide][v2_recover]. -[v2_recover]: ../v2/admin_guide.md#disaster-recovery +[v2_recover]: /docs/v2/admin_guide.md#disaster-recovery ## Snapshotting the keyspace diff --git a/content/docs/v3.3.12/op-guide/security.md b/content/docs/v3.3.12/op-guide/security.md index 305cbb2f..d36b4509 100644 --- a/content/docs/v3.3.12/op-guide/security.md +++ b/content/docs/v3.3.12/op-guide/security.md @@ -427,7 +427,7 @@ Make sure to sign the certificates with a Subject Name the member's public IP ad The certificate needs to be signed for the member's FQDN in its Subject Name, use Subject Alternative Names (short IP SANs) to add the IP address. The `etcd-ca` tool provides `--domain=` option for its `new-cert` command, and openssl can make [it][alt-name] too. 
[cfssl]: https://github.com/cloudflare/cfssl -[tls-setup]: ../../hack/tls-setup +[tls-setup]: https://github.com/etcd-io/etcd/tree/master/hack/tls-setup [tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md [alt-name]: http://wiki.cacert.org/FAQ/subjectAltName [auth]: authentication.md diff --git a/content/docs/v3.3.12/op-guide/v2-migration.md b/content/docs/v3.3.12/op-guide/v2-migration.md index 981ace24..ead97377 100644 --- a/content/docs/v3.3.12/op-guide/v2-migration.md +++ b/content/docs/v3.3.12/op-guide/v2-migration.md @@ -56,4 +56,4 @@ After finishing data migration, the background job writes `true` into the switch Online migration can be difficult when the application logic depends on store v2 indexes. Applications will need additional logic to convert mvcc store revisions to store v2 indexes. -[migrate_command]: ../../etcdctl/README.md#migrate-options +[migrate_command]: https://github.com/etcd-io/etcd/tree/master/etcdctl#migrate-options diff --git a/content/docs/v3.3.12/upgrades/upgrade_3_0.md b/content/docs/v3.3.12/upgrades/upgrade_3_0.md index 36fc061e..9f4bc2a9 100644 --- a/content/docs/v3.3.12/upgrades/upgrade_3_0.md +++ b/content/docs/v3.3.12/upgrades/upgrade_3_0.md @@ -22,7 +22,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. +Before beginning, [backup the etcd data directory](/docs/v2/admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. 
#### Mixed versions @@ -38,7 +38,7 @@ For a much larger total data size, 100MB or more , this one-time process might t If all members have been upgraded to v3.0, the cluster will be upgraded to v3.0, and downgrade from this completed state is **not possible**. If any single member is still v2.3, however, the cluster and its operations remains “v2.3”, and it is possible from this mixed cluster state to return to using a v2.3 etcd binary on all members. -Please [backup the data directory](../v2/admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded. +Please [backup the data directory](/docs/v2/admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded. ### Upgrade procedure @@ -68,7 +68,7 @@ When each etcd process is stopped, expected errors will be logged by other clust 2016-06-27 15:21:48.624175 I | rafthttp: the connection with 8211f1d0f64f3269 became inactive ``` -It’s a good idea at this point to [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur: +It’s a good idea at this point to [backup the etcd data directory](/docs/v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur: ``` $ etcdctl backup \ diff --git a/content/docs/v3.3.12/upgrades/upgrade_3_1.md b/content/docs/v3.3.12/upgrades/upgrade_3_1.md index 5ab096cb..0d7e5c0a 100644 --- a/content/docs/v3.3.12/upgrades/upgrade_3_1.md +++ b/content/docs/v3.3.12/upgrades/upgrade_3_1.md @@ -31,7 +31,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). 
Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore). +Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2/admin_guide.md#backing-up-the-datastore). #### Mixed versions diff --git a/content/docs/v3.3.12/upgrades/upgrade_3_2.md b/content/docs/v3.3.12/upgrades/upgrade_3_2.md index f99a2afe..f90c755f 100644 --- a/content/docs/v3.3.12/upgrades/upgrade_3_2.md +++ b/content/docs/v3.3.12/upgrades/upgrade_3_2.md @@ -228,7 +228,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore). +Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2/admin_guide.md#backing-up-the-datastore). 
#### Mixed versions diff --git a/content/docs/v3.3.12/upgrades/upgrade_3_3.md b/content/docs/v3.3.12/upgrades/upgrade_3_3.md index a01b935b..932d0ba1 100644 --- a/content/docs/v3.3.12/upgrades/upgrade_3_3.md +++ b/content/docs/v3.3.12/upgrades/upgrade_3_3.md @@ -381,7 +381,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore). +Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2/admin_guide.md#backing-up-the-datastore). #### Mixed versions @@ -492,4 +492,4 @@ localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms ``` -[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev \ No newline at end of file +[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev diff --git a/content/docs/v3.3.12/upgrades/upgrade_3_4.md b/content/docs/v3.3.12/upgrades/upgrade_3_4.md index 947ed003..1f3e64a6 100644 --- a/content/docs/v3.3.12/upgrades/upgrade_3_4.md +++ b/content/docs/v3.3.12/upgrades/upgrade_3_4.md @@ -230,7 +230,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. 
C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore). +Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2/admin_guide.md#backing-up-the-datastore). #### Mixed versions diff --git a/content/docs/v3.3.12/upgrades/upgrade_3_5.md b/content/docs/v3.3.12/upgrades/upgrade_3_5.md index 29191640..5af64b8b 100644 --- a/content/docs/v3.3.12/upgrades/upgrade_3_5.md +++ b/content/docs/v3.3.12/upgrades/upgrade_3_5.md @@ -106,7 +106,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore). +Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). 
Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2/admin_guide.md#backing-up-the-datastore). #### Mixed versions diff --git a/content/docs/v3.4.0/benchmarks/etcd-2-2-0-benchmarks.md b/content/docs/v3.4.0/benchmarks/etcd-2-2-0-benchmarks.md index 6aae3596..ad33bbd5 100644 --- a/content/docs/v3.4.0/benchmarks/etcd-2-2-0-benchmarks.md +++ b/content/docs/v3.4.0/benchmarks/etcd-2-2-0-benchmarks.md @@ -26,7 +26,7 @@ Go OS/Arch: linux/amd64 ## Testing -Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures. +Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](https://github.com/etcd-io/etcd/tree/master/hack/benchmark) for the patch and the steps to reproduce our procedures. The performance is calculated through results of 100 benchmark rounds. 
diff --git a/content/docs/v3.4.0/benchmarks/etcd-2-2-0-rc-benchmarks.md b/content/docs/v3.4.0/benchmarks/etcd-2-2-0-rc-benchmarks.md index df325103..6cd697f2 100644 --- a/content/docs/v3.4.0/benchmarks/etcd-2-2-0-rc-benchmarks.md +++ b/content/docs/v3.4.0/benchmarks/etcd-2-2-0-rc-benchmarks.md @@ -73,4 +73,4 @@ Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send req [hey]: https://github.com/rakyll/hey [c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144 [etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md -[hack-benchmark]: ../../hack/benchmark/ +[hack-benchmark]: https://github.com/etcd-io/etcd/tree/master/hack/benchmark diff --git a/content/docs/v3.4.0/benchmarks/etcd-3-demo-benchmarks.md b/content/docs/v3.4.0/benchmarks/etcd-3-demo-benchmarks.md index 13ed2fe8..148ddd38 100644 --- a/content/docs/v3.4.0/benchmarks/etcd-3-demo-benchmarks.md +++ b/content/docs/v3.4.0/benchmarks/etcd-3-demo-benchmarks.md @@ -43,4 +43,4 @@ The performance is nearly the same as the one with empty server handler. The performance with empty server handler is not affected by one put. So the performance downgrade should be caused by storage package. 
-[etcd-v3-benchmark]: ../../tools/benchmark/ +[etcd-v3-benchmark]: https://github.com/etcd-io/etcd/tree/master/tools/benchmark diff --git a/content/docs/v3.4.0/faq.md b/content/docs/v3.4.0/faq.md index cb50e7cb..2e7d93d5 100644 --- a/content/docs/v3.4.0/faq.md +++ b/content/docs/v3.4.0/faq.md @@ -160,6 +160,6 @@ etcd sends a snapshot of its complete key-value store to refresh slow followers [api-mvcc]: learning/api.md#revisions [maintenance-compact]: op-guide/maintenance.md#history-compaction [maintenance-defragment]: op-guide/maintenance.md#defragmentation -[maintenance-disarm]: ../etcdctl/README.md#alarm-disarm +[maintenance-disarm]: https://github.com/etcd-io/etcd/tree/master/etcdctl#alarm-disarm [fio]: https://github.com/axboe/fio [fio-blog-post]: https://www.ibm.com/blogs/bluemix/2019/04/using-fio-to-tell-whether-your-storage-is-fast-enough-for-etcd/ diff --git a/content/docs/v3.4.0/integrations.md b/content/docs/v3.4.0/integrations.md index 422a3254..fff282de 100644 --- a/content/docs/v3.4.0/integrations.md +++ b/content/docs/v3.4.0/integrations.md @@ -147,7 +147,7 @@ title: Libraries and tools - [cloudfoundry/cf-release](https://github.com/cloudfoundry/cf-release/tree/master/jobs/etcd) **Projects using etcd** -- [etcd Raft users](../raft/README.md#notable-users) - projects using etcd's raft library implementation. +- [etcd Raft users](https://github.com/etcd-io/etcd/tree/master/raft#notable-users) - projects using etcd's raft library implementation.
- [apache/celix](https://github.com/apache/celix) - an implementation of the OSGi specification adapted to C and C++ - [binocarlos/yoda](https://github.com/binocarlos/yoda) - etcd + ZeroMQ - [blox/blox](https://github.com/blox/blox) - a collection of open source projects for container management and orchestration with AWS ECS diff --git a/content/docs/v3.4.0/learning/api_guarantees.md b/content/docs/v3.4.0/learning/api_guarantees.md index 8df146a8..4f7e936b 100644 --- a/content/docs/v3.4.0/learning/api_guarantees.md +++ b/content/docs/v3.4.0/learning/api_guarantees.md @@ -49,6 +49,6 @@ etcd does not ensure linearizability for watch operations. Users are expected to etcd ensures linearizability for all other operations by default. Linearizability comes with a cost, however, because linearized requests must go through the Raft consensus process. To obtain lower latencies and higher throughput for read requests, clients can configure a request’s consistency mode to `serializable`, which may access stale data with respect to quorum, but removes the performance penalty of linearized accesses' reliance on live consensus. -[txn]: api.md#transactions +[txn]: api.md#transaction [linearizability]: https://cs.brown.edu/~mph/HerlihyW90/p463-herlihy.pdf [strict_serializability]: http://jepsen.io/consistency/models/strict-serializable diff --git a/content/docs/v3.4.0/learning/design-client.md b/content/docs/v3.4.0/learning/design-client.md index fd59a04f..8f68dabd 100644 --- a/content/docs/v3.4.0/learning/design-client.md +++ b/content/docs/v3.4.0/learning/design-client.md @@ -69,7 +69,7 @@ clientv3-grpc1.0: Balancer Overview `clientv3-grpc1.0` maintains multiple TCP connections when configured with multiple etcd endpoints. Then pick one address and use it to send all client requests. The pinned address is maintained until the client object is closed (see *Figure 1*). When the client receives an error, it randomly picks another and retries. 
-![client-balancer-figure-01.png](img/client-balancer-figure-01.png) +![client-balancer-figure-01.png](../img/client-balancer-figure-01.png) clientv3-grpc1.0: Balancer Limitation @@ -83,19 +83,19 @@ clientv3-grpc1.7: Balancer Overview `clientv3-grpc1.7` maintains only one TCP connection to a chosen etcd server. When given multiple cluster endpoints, a client first tries to connect to them all. As soon as one connection is up, balancer pins the address, closing others (see *Figure 2*). The pinned address is to be maintained until the client object is closed. An error, from server or client network fault, is sent to client error handler (see *Figure 3*). -![client-balancer-figure-02.png](img/client-balancer-figure-02.png) +![client-balancer-figure-02.png](../img/client-balancer-figure-02.png) -![client-balancer-figure-03.png](img/client-balancer-figure-03.png) +![client-balancer-figure-03.png](../img/client-balancer-figure-03.png) The client error handler takes an error from gRPC server, and decides whether to retry on the same endpoint, or to switch to other addresses, based on the error code and message (see *Figure 4* and *Figure 5*). -![client-balancer-figure-04.png](img/client-balancer-figure-04.png) +![client-balancer-figure-04.png](../img/client-balancer-figure-04.png) -![client-balancer-figure-05.png](img/client-balancer-figure-05.png) +![client-balancer-figure-05.png](../img/client-balancer-figure-05.png) Stream RPCs, such as Watch and KeepAlive, are often requested with no timeouts. Instead, client can send periodic HTTP/2 pings to check the status of a pinned endpoint; if the server does not respond to the ping, balancer switches to other endpoints (see *Figure 6*). 
-![client-balancer-figure-06.png](img/client-balancer-figure-06.png) +![client-balancer-figure-06.png](../img/client-balancer-figure-06.png) clientv3-grpc1.7: Balancer Limitation @@ -103,13 +103,13 @@ clientv3-grpc1.7: Balancer Limitation `clientv3-grpc1.7` balancer sends HTTP/2 keepalives to detect disconnects from streaming requests. It is a simple gRPC server ping mechanism and does not reason about cluster membership, thus unable to detect network partitions. Since partitioned gRPC server can still respond to client pings, balancer may get stuck with a partitioned node. Ideally, keepalive ping detects partition and triggers endpoint switch, before request time-out (see [etcd#8673](https://github.com/etcd-io/etcd/issues/8673) and *Figure 7*). -![client-balancer-figure-07.png](img/client-balancer-figure-07.png) +![client-balancer-figure-07.png](../img/client-balancer-figure-07.png) `clientv3-grpc1.7` balancer maintains a list of unhealthy endpoints. Disconnected addresses are added to “unhealthy” list, and considered unavailable until after wait duration, which is hard coded as dial timeout with default value 5-second. Balancer can have false positives on which endpoints are unhealthy. For instance, endpoint A may come back right after being blacklisted, but still unusable for next 5 seconds (see *Figure 8*). `clientv3-grpc1.0` suffered the same problems above. -![client-balancer-figure-08.png](img/client-balancer-figure-08.png) +![client-balancer-figure-08.png](../img/client-balancer-figure-08.png) Upstream gRPC Go had already migrated to new balancer interface. For example, `clientv3-grpc1.7` underlying balancer implementation uses new gRPC balancer and tries to be consistent with old balancer behaviors. While its compatibility has been maintained reasonably well, etcd client still [suffered from subtle breaking changes](https://github.com/grpc/grpc-go/issues/1649). 
Furthermore, gRPC maintainer recommends to [not rely on the old balancer interface](https://github.com/grpc/grpc-go/issues/1942#issuecomment-375368665). In general, to get better support from upstream, it is best to be in sync with latest gRPC releases. And new features, such as retry policy, may not be backported to gRPC 1.7 branch. Thus, both etcd server and client must migrate to latest gRPC versions. @@ -123,7 +123,7 @@ The primary goal of `clientv3-grpc1.23` is to simplify balancer failover logic; Internally, when given multiple endpoints, `clientv3-grpc1.23` creates multiple sub-connections (one sub-connection per each endpoint), while `clientv3-grpc1.7` creates only one connection to a pinned endpoint (see *Figure 9*). For instance, in 5-node cluster, `clientv3-grpc1.23` balancer would require 5 TCP connections, while `clientv3-grpc1.7` only requires one. By preserving the pool of TCP connections, `clientv3-grpc1.23` may consume more resources but provide more flexible load balancer with better failover performance. The default balancing policy is round robin but can be easily extended to support other types of balancers (e.g. power of two, pick leader, etc.). `clientv3-grpc1.23` uses gRPC resolver group and implements balancer picker policy, in order to delegate complex balancing work to upstream gRPC. On the other hand, `clientv3-grpc1.7` manually handles each gRPC connection and balancer failover, which complicates the implementation. `clientv3-grpc1.23` implements retry in the gRPC interceptor chain that automatically handles gRPC internal errors and enables more advanced retry policies like backoff, while `clientv3-grpc1.7` manually interprets gRPC errors for retries. -![client-balancer-figure-09.png](img/client-balancer-figure-09.png) +![client-balancer-figure-09.png](../img/client-balancer-figure-09.png) clientv3-grpc1.23: Balancer Limitation @@ -133,6 +133,6 @@ Improvements can be made by caching the status of each endpoint. 
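The round-robin policy that `clientv3-grpc1.23` delegates to gRPC's balancer/picker interfaces can be illustrated with a minimal sketch. This is not etcd's actual implementation — just the policy it describes: rotate across per-endpoint sub-connections, skipping ones currently marked unhealthy.

```python
class RoundRobinPicker:
    """Minimal sketch of round-robin endpoint selection with failover.

    The real etcd balancer lives in gRPC's picker interface; this only
    illustrates the policy the text describes.
    """

    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self.unhealthy = set()
        self._next = 0

    def pick(self):
        # Rotate over endpoints, skipping any currently marked unhealthy.
        for _ in range(len(self.endpoints)):
            ep = self.endpoints[self._next % len(self.endpoints)]
            self._next += 1
            if ep not in self.unhealthy:
                return ep
        raise ConnectionError("no healthy endpoints")

    def mark_unhealthy(self, ep):
        self.unhealthy.add(ep)


picker = RoundRobinPicker(["10.0.0.1:2379", "10.0.0.2:2379", "10.0.0.3:2379"])
a, b = picker.pick(), picker.pick()        # rotates: two distinct endpoints
picker.mark_unhealthy("10.0.0.3:2379")
c = picker.pick()                          # skips the unhealthy endpoint
```

Caching endpoint health, as the limitation section suggests, would amount to expiring entries from the `unhealthy` set instead of keeping them forever.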
For instance, b Client-side keepalive ping still does not reason about network partitions. Streaming request may get stuck with a partitioned node. Advanced health checking service need to be implemented to understand the cluster membership (see [etcd#8673](https://github.com/etcd-io/etcd/issues/8673) for more detail). -![client-balancer-figure-07.png](img/client-balancer-figure-07.png) +![client-balancer-figure-07.png](../img/client-balancer-figure-07.png) Currently, retry logic is handled manually as an interceptor. This may be simplified via [official gRPC retries](https://github.com/grpc/proposal/blob/master/A6-client-retries.md). diff --git a/content/docs/v3.4.0/learning/design-learner.md b/content/docs/v3.4.0/learning/design-learner.md index 6f5b46f9..4fda8104 100644 --- a/content/docs/v3.4.0/learning/design-learner.md +++ b/content/docs/v3.4.0/learning/design-learner.md @@ -16,49 +16,49 @@ Membership reconfiguration has been one of the biggest operational challenges. L ### 1. New Cluster member overloads Leader A newly joined etcd member starts with no data, thus demanding more updates from leader until it catches up with leader’s logs. Then leader’s network is more likely to be overloaded, blocking or dropping leader heartbeats to followers. In such case, a follower may election-timeout to start a new leader election. That is, a cluster with a new member is more vulnerable to leader election. Both leader election and the subsequent update propagation to the new member are prone to causing periods of cluster unavailability (see *Figure 1*). -![server-learner-figure-01](img/server-learner-figure-01.png) +![server-learner-figure-01](../img/server-learner-figure-01.png) ### 2. Network Partitions scenarios What if network partition happens? It depends on leader partition. If the leader still maintains the active quorum, the cluster would continue to operate (see *Figure 2*). 
-![server-learner-figure-02](img/server-learner-figure-02.png) +![server-learner-figure-02](../img/server-learner-figure-02.png) #### 2.1 Leader isolation What if the leader becomes isolated from the rest of the cluster? Leader monitors progress of each follower. When leader loses connectivity from the quorum, it reverts back to follower which will affect the cluster availability (see *Figure 3*). -![server-learner-figure-03](img/server-learner-figure-03.png) +![server-learner-figure-03](../img/server-learner-figure-03.png) When a new node is added to 3 node cluster, the cluster size becomes 4 and the quorum size becomes 3. What if a new node had joined the cluster, and then network partition happens? It depends on which partition the new member gets located after partition. #### 2.2 Cluster Split 3+1 If the new node happens to be located in the same partition as leader’s, the leader still maintains the active quorum of 3. No leadership election happens, and no cluster availability gets affected (see *Figure 4*). -![server-learner-figure-04](img/server-learner-figure-04.png) +![server-learner-figure-04](../img/server-learner-figure-04.png) #### 2.3 Cluster Split 2+2 If the cluster is 2-and-2 partitioned, then neither of partition maintains the quorum of 3. In this case, leadership election happens (see *Figure 5*). -![server-learner-figure-05](img/server-learner-figure-05.png) +![server-learner-figure-05](../img/server-learner-figure-05.png) #### 2.4 Quorum Lost What if network partition happens first, and then a new member gets added? A partitioned 3-node cluster already has one disconnected follower. When a new member is added, the quorum changes from 2 to 3. Now, this cluster has only 2 active nodes out 4, thus losing quorum and starting a new leadership election (see *Figure 6*). 
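The quorum arithmetic running through the partition scenarios above is plain majority; a quick sketch makes the 3-node to 4-node shift concrete:

```python
def quorum(n: int) -> int:
    """Majority quorum of an n-member Raft cluster."""
    return n // 2 + 1

# Adding a member to a 3-node cluster raises the quorum from 2 to 3,
# which is why an unstarted or partitioned new member can cost availability.
assert quorum(3) == 2
assert quorum(4) == 3
assert quorum(5) == 3
```

This is also why "member remove first, then add" is the recommended replacement order: removal never raises the quorum size.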
-![server-learner-figure-06](img/server-learner-figure-06.png) +![server-learner-figure-06](../img/server-learner-figure-06.png) Since the member add operation can change the size of the quorum, it is always recommended to “member remove” first when replacing an unhealthy node. Adding a new member to a 1-node cluster changes the quorum size to 2, immediately causing a leader election when the previous leader finds out the quorum is not active. This is because the “member add” operation is a 2-step process where the user applies the “member add” command first and then starts the new node process (see *Figure 7*). -![server-learner-figure-07](img/server-learner-figure-07.png) +![server-learner-figure-07](../img/server-learner-figure-07.png) ### 3. Cluster Misconfigurations An even worse case is when an added member is misconfigured. Membership reconfiguration is a two-step process: “etcdctl member add” and starting an etcd server process with the given peer URL. That is, the “member add” command is applied regardless of the URL, even when the URL value is invalid. If the first step is applied with invalid URLs, the second step cannot even start the new etcd. Once the cluster loses quorum, there is no way to revert the membership change (see *Figure 8*). -![server-learner-figure-08](img/server-learner-figure-08.png) +![server-learner-figure-08](../img/server-learner-figure-08.png) The same applies to a multi-node cluster. For example, the cluster has two members down (one is failed, the other is misconfigured) and two members up, but it now requires at least 3 votes to change the cluster membership (see *Figure 9*). -![server-learner-figure-09](img/server-learner-figure-09.png) +![server-learner-figure-09](../img/server-learner-figure-09.png) As seen above, a simple misconfiguration can leave the whole cluster in an inoperative state. In such a case, an operator must manually recreate the cluster with the `etcd --force-new-cluster` flag.
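The quorum arithmetic running through the scenarios above is plain majority math: quorum is floor(n/2) + 1, so adding a member to a 3-node cluster raises the quorum from 2 to 3, and a 2+2 split loses it. A standalone sketch, independent of this patch:

```shell
# quorum = floor(n/2) + 1; a cluster tolerates n - quorum member failures
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "size=$n quorum=$quorum tolerated_failures=$tolerated"
done
# e.g. size=3 -> quorum=2, size=4 -> quorum=3
```

Note that sizes 3 and 4 both tolerate only a single failure, which is why a fourth member buys no extra fault tolerance while still raising the quorum.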
As etcd has become a mission-critical service for Kubernetes, even the slightest outage may have a significant impact on users. What can we do better to make such etcd operations easier? Among other things, leader election is most critical to cluster availability: Can we make membership reconfiguration less disruptive by not changing the size of the quorum? Can a new node be idle, requesting only the minimum updates from the leader, until it catches up? Can membership misconfiguration always be reversible and handled in a more secure way (a wrong member add command should never fail the cluster)? Should a user worry about network topology when adding a new member? Can the member add API work regardless of the location of nodes and ongoing network partitions? @@ -72,19 +72,19 @@ Features in v3.4 An operator should do the minimum amount of work possible to add a new learner node. The `member add --learner` command adds a new learner, which joins the cluster as a non-voting member but still receives all data from the leader (see *Figure 10*). -![server-learner-figure-10](img/server-learner-figure-10.png) +![server-learner-figure-10](../img/server-learner-figure-10.png) When a learner has caught up with the leader’s progress, the learner can be promoted to a voting member using the `member promote` API, which then counts towards the quorum (see *Figure 11*). -![server-learner-figure-11](img/server-learner-figure-11.png) +![server-learner-figure-11](../img/server-learner-figure-11.png) The etcd server validates the promote request to ensure operational safety. Only after its log has caught up to the leader’s can a learner be promoted to a voting member (see *Figure 12*). -![server-learner-figure-12](img/server-learner-figure-12.png) +![server-learner-figure-12](../img/server-learner-figure-12.png) A learner serves only as a standby node until promoted: leadership cannot be transferred to a learner, and a learner rejects client reads and writes (the client balancer should not route requests to a learner).
This means a learner does not need to issue Read Index requests to the leader. This limitation simplifies the initial learner implementation in the v3.4 release (see *Figure 13*). -![server-learner-figure-13](img/server-learner-figure-13.png) +![server-learner-figure-13](../img/server-learner-figure-13.png) In addition, etcd limits the total number of learners that a cluster can have, and avoids overloading the leader with log replication. A learner never promotes itself. While etcd provides learner status information and safety checks, the cluster operator must make the final decision on whether or not to promote a learner. diff --git a/content/docs/v3.4.0/learning/why.md b/content/docs/v3.4.0/learning/why.md index 8241590b..34891b52 100644 --- a/content/docs/v3.4.0/learning/why.md +++ b/content/docs/v3.4.0/learning/why.md @@ -77,7 +77,7 @@ In theory, it’s possible to build these primitives atop any storage systems pr For distributed coordination, choosing etcd can help prevent operational headaches and save engineering effort.
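The learner workflow described in the design notes above maps to two etcdctl calls. A sketch only, not part of this patch; the member name, peer URL, and member ID below are hypothetical placeholders:

```shell
# Add a new member as a non-voting learner (peer URL is a made-up example)
etcdctl member add infra4 --learner --peer-urls=https://10.240.0.17:2380

# Once its log has caught up, promote it to a voting member
# (9bf1b35fc7761a23 is a placeholder ID taken from `etcdctl member list`)
etcdctl member promote 9bf1b35fc7761a23
```

The server rejects the promote request while the learner is still catching up, so the promotion can be retried safely until it succeeds.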
-[production-users]: ../../ADOPTERS.md +[production-users]: https://github.com/etcd-io/etcd/tree/master/ADOPTERS.md [grpc]: https://www.grpc.io [consul-bulletproof]: https://www.consul.io/docs/internals/sessions.html [curator]: http://curator.apache.org/ @@ -86,8 +86,8 @@ For distributed coordination, choosing etcd can help prevent operational headach [tidb]: https://github.com/pingcap/tidb [etcd-v3lock]: https://godoc.org/github.com/etcd-io/etcd/etcdserver/api/v3lock/v3lockpb [etcd-v3election]: https://godoc.org/github.com/coreos/etcd-io/etcdserver/api/v3election/v3electionpb -[etcd-etcdctl-lock]: ../../etcdctl/README.md#lock-lockname-command-arg1-arg2- -[etcd-etcdctl-elect]: ../../etcdctl/README.md#elect-options-election-name-proposal +[etcd-etcdctl-lock]: https://github.com/etcd-io/etcd/tree/master/etcdctl/README.md#lock-lockname-command-arg1-arg2- +[etcd-etcdctl-elect]: https://github.com/etcd-io/etcd/tree/master/etcdctl/README.md#elect-options-election-name-proposal [etcd-mvcc]: data_model.md [etcd-recipe]: https://godoc.org/github.com/etcd-io/etcd/contrib/recipes [consul-lock]: https://www.consul.io/docs/commands/lock.html @@ -95,7 +95,7 @@ For distributed coordination, choosing etcd can help prevent operational headach [etcd-reconfig]: ../op-guide/runtime-configuration.md [zk-reconfig]: https://zookeeper.apache.org/doc/trunk/zookeeperReconfig.html [consul-reconfig]: https://www.consul.io/docs/guides/servers.html -[etcd-linread]: api_guarantees.md#linearizability +[etcd-linread]: api_guarantees.md#isolation-level-and-consistency-of-replicas [consul-linread]: https://www.consul.io/docs/agent/http.html#consistency [etcd-json]: ../dev-guide/api_grpc_gateway.md [consul-json]: https://www.consul.io/docs/agent/http.html#formatted-json-output diff --git a/content/docs/v3.4.0/op-guide/clustering.md b/content/docs/v3.4.0/op-guide/clustering.md index 99d41144..61b28a6c 100644 --- a/content/docs/v3.4.0/op-guide/clustering.md +++ 
b/content/docs/v3.4.0/op-guide/clustering.md @@ -494,5 +494,5 @@ To set up an etcd cluster with proxies of v2 API, please read the [clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md [security-guide]: security.md [security-guide-dns-srv]: security.md#notes-for-dns-srv -[tls-setup]: ../../hack/tls-setup +[tls-setup]: https://github.com/etcd-io/etcd/tree/master/hack/tls-setup [gateway]: gateway.md diff --git a/content/docs/v3.4.0/op-guide/failures.md b/content/docs/v3.4.0/op-guide/failures.md index c60e7ec4..8ec680ee 100644 --- a/content/docs/v3.4.0/op-guide/failures.md +++ b/content/docs/v3.4.0/op-guide/failures.md @@ -43,4 +43,4 @@ A cluster bootstrap is only successful if all required members successfully star Of course, it is possible to recover a failed bootstrapped cluster like recovering a running cluster. However, it almost always takes more time and resources to recover that cluster than bootstrapping a new one, since there is no data to recover. [backup]: maintenance.md#snapshot-backup -[unrecoverable]: recovery.md#disaster-recovery +[unrecoverable]: recovery.md diff --git a/content/docs/v3.4.0/op-guide/maintenance.md b/content/docs/v3.4.0/op-guide/maintenance.md index 843ee657..367ece28 100644 --- a/content/docs/v3.4.0/op-guide/maintenance.md +++ b/content/docs/v3.4.0/op-guide/maintenance.md @@ -18,7 +18,7 @@ Since v3.2, the default value of `--snapshot-count` has [changed from 10,00 Performance-wise, `--snapshot-count` greater than 100,000 may impact the write throughput. A higher number of in-memory objects can slow down the [Go GC mark phase `runtime.scanobject`](https://golang.org/src/runtime/mgc.go), and infrequent memory reclamation makes allocation slow. Performance varies depending on the workloads and system environments. However, in general, too frequent compaction affects cluster availability and write throughput.
Too infrequent compaction is also harmful, placing too much pressure on the Go garbage collector. See https://www.slideshare.net/mitakeh/understanding-performance-aspects-of-etcd-and-raft for more research results. -## History compaction: v3 API Key-Value Database +## History compaction: v3 API Key-Value Database {#history-compaction} Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys superseded prior to a given keyspace revision. The space used by these keys then becomes available for additional writes to the keyspace.
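As a sketch of what a manual compaction cycle looks like (the endpoint is illustrative, and the JSON parsing is one of several possible ways to read the current revision):

```shell
# Read the current revision from endpoint status (JSON output)
rev=$(etcdctl --endpoints=localhost:2379 endpoint status --write-out=json \
      | grep -o '"revision":[0-9]*' | head -1 | grep -o '[0-9]*$')

# Drop all history before that revision...
etcdctl compact "$rev"

# ...then reclaim the freed space in the backend database
etcdctl defrag
```

Automatic variants of this loop exist via the `--auto-compaction-retention` server flag, so the manual form is mostly useful for one-off maintenance.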
[add member]: #add-a-new-member [cluster-reconf]: #cluster-reconfiguration-operations -[conf-adv-peer]: configuration.md#-initial-advertise-peer-urls -[conf-name]: configuration.md#-name +[conf-adv-peer]: configuration.md#--initial-advertise-peer-urls +[conf-name]: configuration.md#--name [disaster recovery]: recovery.md -[fault tolerance table]: ../v2/admin_guide.md#fault-tolerance-table +[fault tolerance table]: /docs/v2/admin_guide.md#fault-tolerance-table [majority failure]: #restart-cluster-from-majority-failure -[member-api]: ../v2/members_api.md +[member-api]: /docs/v2/members_api.md [member-api-grpc]: ../dev-guide/api_reference_v3.md#service-cluster-etcdserveretcdserverpbrpcproto -[member migration]: ../v2/admin_guide.md#member-migration +[member migration]: /docs/v2/admin_guide.md#member-migration [remove member]: #remove-a-member [runtime-reconf]: runtime-reconf-design.md [error cases when promoting a member]: #error-cases-when-promoting-a-learner-member diff --git a/content/docs/v3.4.0/op-guide/security.md b/content/docs/v3.4.0/op-guide/security.md index 305cbb2f..d36b4509 100644 --- a/content/docs/v3.4.0/op-guide/security.md +++ b/content/docs/v3.4.0/op-guide/security.md @@ -427,7 +427,7 @@ Make sure to sign the certificates with a Subject Name the member's public IP ad The certificate needs to be signed for the member's FQDN in its Subject Name; use Subject Alternative Names (short IP SANs) to add the IP address. The `etcd-ca` tool provides a `--domain=` option for its `new-cert` command, and openssl can make [it][alt-name] too.
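For the self-signed case, a certificate carrying both the FQDN subject and an IP SAN can be produced with openssl (`-addext` requires OpenSSL 1.1.1 or newer; the names and address below are made-up examples, not values from this patch):

```shell
# Self-signed cert whose SAN lists both the member FQDN and its IP
# (m1.etcd.example.com and 10.240.0.10 are placeholder values)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout member1.key -out member1.crt \
  -subj "/CN=m1.etcd.example.com" \
  -addext "subjectAltName = DNS:m1.etcd.example.com, IP:10.240.0.10"

# Inspect the SAN section of the resulting certificate
openssl x509 -in member1.crt -noout -ext subjectAltName
```

A production deployment would instead have a CA sign a CSR with the same SAN extension, as the cfssl and etcd-ca workflows referenced above do.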
[cfssl]: https://github.com/cloudflare/cfssl -[tls-setup]: ../../hack/tls-setup +[tls-setup]: https://github.com/etcd-io/etcd/tree/master/hack/tls-setup [tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md [alt-name]: http://wiki.cacert.org/FAQ/subjectAltName [auth]: authentication.md diff --git a/content/docs/v3.4.0/upgrades/upgrade_3_0.md b/content/docs/v3.4.0/upgrades/upgrade_3_0.md index 36fc061e..9f4bc2a9 100644 --- a/content/docs/v3.4.0/upgrades/upgrade_3_0.md +++ b/content/docs/v3.4.0/upgrades/upgrade_3_0.md @@ -22,7 +22,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. +Before beginning, [backup the etcd data directory](/docs/v2/admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. #### Mixed versions @@ -38,7 +38,7 @@ For a much larger total data size, 100MB or more, this one-time process might t If all members have been upgraded to v3.0, the cluster will be upgraded to v3.0, and downgrade from this completed state is **not possible**. If any single member is still v2.3, however, the cluster and its operations remain “v2.3”, and it is possible from this mixed cluster state to return to using a v2.3 etcd binary on all members. -Please [backup the data directory](../v2/admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
+Please [backup the data directory](/docs/v2/admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded. ### Upgrade procedure @@ -68,7 +68,7 @@ When each etcd process is stopped, expected errors will be logged by other clust 2016-06-27 15:21:48.624175 I | rafthttp: the connection with 8211f1d0f64f3269 became inactive ``` -It’s a good idea at this point to [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur: +It’s a good idea at this point to [backup the etcd data directory](/docs/v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur: ``` $ etcdctl backup \ diff --git a/content/docs/v3.4.0/upgrades/upgrade_3_3.md b/content/docs/v3.4.0/upgrades/upgrade_3_3.md index 85e60805..27f1202b 100644 --- a/content/docs/v3.4.0/upgrades/upgrade_3_3.md +++ b/content/docs/v3.4.0/upgrades/upgrade_3_3.md @@ -427,7 +427,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore). +Before beginning, [backup the etcd data](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. 
For v2 data, see [backing up v2 datastore](/docs/v2/admin_guide.md#backing-up-the-datastore). #### Mixed versions @@ -538,4 +538,4 @@ localhost:22379 is healthy: successfully committed proposal: took = 2.553476ms localhost:32379 is healthy: successfully committed proposal: took = 2.517902ms ``` -[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev \ No newline at end of file +[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev diff --git a/content/docs/v3.4.0/upgrades/upgrade_3_5.md b/content/docs/v3.4.0/upgrades/upgrade_3_5.md index feda939b..3f6a8366 100644 --- a/content/docs/v3.4.0/upgrades/upgrade_3_5.md +++ b/content/docs/v3.4.0/upgrades/upgrade_3_5.md @@ -159,7 +159,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. -Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore). +Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2/admin_guide.md#backing-up-the-datastore). #### Mixed versions diff --git a/layouts/404.html b/layouts/404.html index e2047fd7..d6ebc4ca 100644 --- a/layouts/404.html +++ b/layouts/404.html @@ -29,7 +29,7 @@