Commit
Merge pull request etcd-io#579 from kianmeng/fix-typos
Fix typos
spzala authored Jun 16, 2022
2 parents 25cdc6b + a2da31e commit 3e04053
Showing 57 changed files with 59 additions and 59 deletions.
2 changes: 1 addition & 1 deletion content/en/blog/2017/etcd-deployments-on-AWS.md
@@ -19,7 +19,7 @@ Also, this post assumes operational knowledge of Amazon Web Services (AWS), spec
* [Cluster Design](#cluster-design)
* [Availability](#availability)
* [Data durability after member failure](#data-durability-after-member-failure)
- * [Perfomance/Throughput](#performancethroughput)
+ * [Performance/Throughput](#performancethroughput)
* [Network](#network)
* [Disk](#disk)
* [Self-healing](#self-healing)
4 changes: 2 additions & 2 deletions content/en/blog/2021/announcing-etcd-3.5.md
@@ -194,11 +194,11 @@ etcd 3.5 improvements further increase transaction concurrency.

1. etcd now caches the transaction buffer to avoid the unnecessary copy operations. This speeds up concurrent read transaction creation and as a result, the **transaction with a high read ratio has increased up to 2.4 times** (see *Figures 5* and *6*). See [wilsonwang371@ of ByteDance's code change and benchmark results](https://github.com/etcd-io/etcd/pull/12933).

- ![figure-5](../announcing-etcd-3.5/figure-5.png "Diagrams of etcd transaction throughput that shows with the caching mechanism for read transations, the transaction throughput increases up to 1.4 times.")
+ ![figure-5](../announcing-etcd-3.5/figure-5.png "Diagrams of etcd transaction throughput that shows with the caching mechanism for read transactions, the transaction throughput increases up to 1.4 times.")

_**Figure 5:** etcd transaction ratio with a high write ratio. The value at the top is the ratio of reads and writes. The first ratio, 0.125, is 1 read per 8 writes. The second ratio, 0.25, is 1 read per 4 writes. The value at the right bar represents the inverse ratio of transaction throughput before and after [etcd/pull/12933](https://github.com/etcd-io/etcd/pull/12933). With the caching mechanism for read transactions, the transaction throughput is increased up to 1.4 times._

- ![figure-6](../announcing-etcd-3.5/figure-6.png "Diagrams of etcd transaction throughput that shows with the caching mechanism for read transations, the transaction throughput increases up to 2.5 times.")
+ ![figure-6](../announcing-etcd-3.5/figure-6.png "Diagrams of etcd transaction throughput that shows with the caching mechanism for read transactions, the transaction throughput increases up to 2.5 times.")

_**Figure 6:** etcd transaction ratio with a high read ratio. The value at the top is the ratio of reads and writes. The first ratio, 4.0, is 4 reads per 1 write. The second ratio, 8.0, is 8 reads per 1 write. The value at the right bar represents the inverse ratio of transaction throughput before and after [etcd/pull/12933](https://github.com/etcd-io/etcd/pull/12933). With the caching mechanism for read transactions, the transaction throughput is increased up to 2.5 times._

2 changes: 1 addition & 1 deletion content/en/docs/v2.3/security.md
@@ -124,7 +124,7 @@ And also the response from the server:

etcd supports the same model as above for **peer communication**, that means the communication between etcd members in a cluster.

- Assuming we have our `ca.crt` and two members with their own keypairs (`member1.crt` & `member1.key`, `member2.crt` & `member2.key`) signed by this CA, we launch etcd as follows:
+ Assuming we have our `ca.crt` and two members with their own key pairs (`member1.crt` & `member1.key`, `member2.crt` & `member2.key`) signed by this CA, we launch etcd as follows:


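The launch commands themselves are collapsed in this diff view. As a rough sketch (flag names follow etcd's security guide; the IPs and cluster layout are placeholders), a peer-TLS launch looks like:

```sh
# Hypothetical launch of member1 with peer TLS; member2 is started the
# same way with member2.crt / member2.key and its own addresses.
etcd --name infra1 \
  --listen-peer-urls=https://10.0.1.10:2380 \
  --initial-advertise-peer-urls=https://10.0.1.10:2380 \
  --initial-cluster=infra1=https://10.0.1.10:2380,infra2=https://10.0.1.11:2380 \
  --peer-client-cert-auth --peer-trusted-ca-file=ca.crt \
  --peer-cert-file=member1.crt --peer-key-file=member1.key
```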
2 changes: 1 addition & 1 deletion content/en/docs/v3.1/benchmarks/etcd-2-2-0-benchmarks.md
@@ -28,7 +28,7 @@ Go OS/Arch: linux/amd64

Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions][hack-benchmark] for the patch and the steps to reproduce our procedures.

- The performance is calulated through results of 100 benchmark rounds.
+ The performance is calculated through results of 100 benchmark rounds.

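For context, `hey` drives HTTP load against the v2 keys API; an unpatched invocation (endpoint and counts are placeholders) looks roughly like:

```sh
# 100k PUTs against one member at 64 concurrent connections (hypothetical values)
hey -n 100000 -c 64 -m PUT -T application/x-www-form-urlencoded \
  -d "value=bar" http://10.0.1.10:2379/v2/keys/foo
```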
## Performance

2 changes: 1 addition & 1 deletion content/en/docs/v3.1/dev-guide/api_reference_v3.md
@@ -430,7 +430,7 @@ Empty field.
| ----- | ----------- | ---- |
| key | key is the first key to delete in the range. | bytes |
| range_end | range_end is the key following the last key to delete for the range [key, range_end). If range_end is not given, the range is defined to contain only the key argument. If range_end is one bit larger than the given key, then the range is all the all keys with the prefix (the given key). If range_end is '\0', the range is all keys greater than or equal to the key argument. | bytes |
- | prev_kv | If prev_kv is set, etcd gets the previous key-value pairs before deleting it. The previous key-value pairs will be returned in the delte response. | bool |
+ | prev_kv | If prev_kv is set, etcd gets the previous key-value pairs before deleting it. The previous key-value pairs will be returned in the delete response. | bool |



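For illustration, the same range and `prev_kv` semantics surface in `etcdctl` (key names are placeholders, with `ETCDCTL_API=3` set):

```sh
# Delete one key and print the prior key-value pair (prev_kv)
etcdctl del foo --prev-kv
# Delete the half-open range [foo, foo3)
etcdctl del foo foo3
# Delete every key with the prefix "foo" (range_end is the key one bit larger)
etcdctl del foo --prefix
```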
@@ -1514,7 +1514,7 @@
"prev_kv": {
"type": "boolean",
"format": "boolean",
"description": "If prev_kv is set, etcd gets the previous key-value pairs before deleting it.\nThe previous key-value pairs will be returned in the delte response."
"description": "If prev_kv is set, etcd gets the previous key-value pairs before deleting it.\nThe previous key-value pairs will be returned in the delete response."
},
"range_end": {
"type": "string",
2 changes: 1 addition & 1 deletion content/en/docs/v3.1/dev-guide/interacting_v3.md
@@ -12,7 +12,7 @@ export ETCDCTL_API=3

## Find versions

- etcdctl version and Server API version can be useful in finding the appropriate commands to be used for performing various opertions on etcd.
+ etcdctl version and Server API version can be useful in finding the appropriate commands to be used for performing various operations on etcd.

Here is the command to find the versions:

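The collapsed snippet presumably shows something along these lines:

```sh
# Prints the etcdctl binary version and the API version it speaks
etcdctl version
```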
2 changes: 1 addition & 1 deletion content/en/docs/v3.1/op-guide/performance.md
@@ -6,7 +6,7 @@ title: Performance

etcd provides stable, sustained high performance. Two factors define performance: latency and throughput. Latency is the time taken to complete an operation. Throughput is the total operations completed within some time period. Usually average latency increases as the overall throughput increases when etcd accepts concurrent client requests. In common cloud environments, like a standard `n-4` on Google Compute Engine (GCE) or a comparable machine type on AWS, a three member etcd cluster finishes a request in less than one millisecond under light load, and can complete more than 30,000 requests per second under heavy load.

- etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanant storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
+ etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanent storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.

There are other sub-systems which impact the overall performance of etcd. Each serialized etcd request must run through etcd’s boltdb-backed MVCC storage engine, which usually takes tens of microseconds to finish. Periodically etcd incrementally snapshots its recently applied requests, merging them back with the previous on-disk snapshot. This process may lead to a latency spike. Although this is usually not a problem on SSDs, it may double the observed latency on HDD. Likewise, inflight compactions can impact etcd’s performance. Fortunately, the impact is often insignificant since the compaction is staggered so it does not compete for resources with regular requests. The RPC system, gRPC, gives etcd a well-defined, extensible API, but it also introduces additional latency, especially for local reads.

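Because commit latency hinges on `fdatasync`, a common disk sanity check (a sketch using `fio`; the path and sizes are placeholders) is:

```sh
# Measure fdatasync latency with small sequential writes, similar to etcd's WAL;
# check that the reported fsync/fdatasync p99 stays well below ~10ms
fio --name=wal-test --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300
```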
2 changes: 1 addition & 1 deletion content/en/docs/v3.1/op-guide/security.md
@@ -128,7 +128,7 @@ And also the response from the server:

etcd supports the same model as above for **peer communication**, that means the communication between etcd members in a cluster.

- Assuming we have our `ca.crt` and two members with their own keypairs (`member1.crt` & `member1.key`, `member2.crt` & `member2.key`) signed by this CA, we launch etcd as follows:
+ Assuming we have our `ca.crt` and two members with their own key pairs (`member1.crt` & `member1.key`, `member2.crt` & `member2.key`) signed by this CA, we launch etcd as follows:


2 changes: 1 addition & 1 deletion content/en/docs/v3.1/upgrades/upgrade_3_2.md
@@ -210,7 +210,7 @@ clientv3yaml.NewConfig

#### Change in `--listen-peer-urls` and `--listen-client-urls`

- 3.2 now rejects domains names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only prints out warnings), since domain name is invalid for network interface binding. Make sure that those URLs are properly formated as `scheme://IP:port`.
+ 3.2 now rejects domains names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only prints out warnings), since domain name is invalid for network interface binding. Make sure that those URLs are properly formatted as `scheme://IP:port`.

See [issue #6336](https://github.com/etcd-io/etcd/issues/6336) for more contexts.

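For example (placeholder addresses), an IP-based URL binds fine while a domain name is now rejected:

```sh
# Accepted: scheme://IP:port
etcd --listen-client-urls=https://10.0.1.10:2379
# Rejected by 3.2: a domain name cannot be bound to a network interface
etcd --listen-client-urls=https://example.com:2379
```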
2 changes: 1 addition & 1 deletion content/en/docs/v3.2/benchmarks/etcd-2-2-0-benchmarks.md
@@ -28,7 +28,7 @@ Go OS/Arch: linux/amd64

Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions][hack-benchmark] for the patch and the steps to reproduce our procedures.

- The performance is calulated through results of 100 benchmark rounds.
+ The performance is calculated through results of 100 benchmark rounds.

## Performance

@@ -90,7 +90,7 @@ The election service exposes client-side election facilities as a gRPC interface

| Field | Description | Type |
| ----- | ----------- | ---- |
- | name | name is the election identifier that correponds to the leadership key. | bytes |
+ | name | name is the election identifier that corresponds to the leadership key. | bytes |
| key | key is an opaque key representing the ownership of the election. If the key is deleted, then leadership is lost. | bytes |
| rev | rev is the creation revision of the key. It can be used to test for ownership of an election during transactions by testing the key's creation revision matches rev. | int64 |
| lease | lease is the lease ID of the election leader. | int64 |
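These fields back `etcdctl`'s election commands; a sketch (election and proposal names are placeholders):

```sh
# Campaign for leadership of "myelection" with proposal value "p1"; on winning,
# the leadership key (carrying name, key, rev, and lease) is printed
etcdctl elect myelection p1
# In another terminal, observe the current leader's proposal
etcdctl elect --listen myelection
```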
@@ -250,7 +250,7 @@
"name": {
"type": "string",
"format": "byte",
"description": "name is the election identifier that correponds to the leadership key."
"description": "name is the election identifier that corresponds to the leadership key."
},
"key": {
"type": "string",
2 changes: 1 addition & 1 deletion content/en/docs/v3.2/learning/auth_design.md
@@ -40,7 +40,7 @@ Therefore, the permission checking logic should be added to the state machine of

### Authentication

- At first, a client must create a gRPC connection only to authenticate its user ID and password. An etcd server will respond with an authentication reply. The reponse will be an authentication token on success or an error on failure. The client can use its authentication token to present its credentials to etcd when making API requests.
+ At first, a client must create a gRPC connection only to authenticate its user ID and password. An etcd server will respond with an authentication reply. The response will be an authentication token on success or an error on failure. The client can use its authentication token to present its credentials to etcd when making API requests.

The client connection used to request the authentication token is typically thrown away; it cannot carry the new token's credentials. This is because gRPC doesn't provide a way for adding per RPC credential after creation of the connection (calling `grpc.Dial()`). Therefore, a client cannot assign a token to its connection that is obtained through the connection. The client needs a new connection for using the token.

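At the CLI level the token exchange is hidden, but the flow looks roughly like this (user name and password are placeholders):

```sh
etcdctl user add root     # prompts for a password
etcdctl auth enable
# This call first obtains a token via an Authenticate RPC, then sends the
# actual request over a connection that carries that token
etcdctl --user root:rootpw put foo bar
```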
4 changes: 2 additions & 2 deletions content/en/docs/v3.2/op-guide/gateway.md
@@ -6,7 +6,7 @@ title: etcd gateway

etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses.

- The gateway supports multiple etcd server endpoints and works on a simple round-robin policy. It only routes to available enpoints and hides failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
+ The gateway supports multiple etcd server endpoints and works on a simple round-robin policy. It only routes to available endpoints and hides failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.

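A minimal sketch of starting a gateway in front of three members (addresses are placeholders):

```sh
# Forward local TCP traffic on 127.0.0.1:23790 to the cluster, round-robin
etcd gateway start \
  --endpoints=10.0.1.10:2379,10.0.1.11:2379,10.0.1.12:2379 \
  --listen-addr=127.0.0.1:23790
```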
## When to use etcd gateway

@@ -78,7 +78,7 @@ $ etcd gateway --discovery-srv=example.com

#### --discovery-srv

- * DNS domain used to bootstrap cluster endpoints through SRV recrods.
+ * DNS domain used to bootstrap cluster endpoints through SRV records.
* Default: (not set)

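For reference, the SRV records being consulted can be inspected with `dig` (the domain is a placeholder; etcd's DNS discovery conventionally uses the `_etcd-server._tcp` and `_etcd-client._tcp` names):

```sh
dig +noall +answer SRV _etcd-server._tcp.example.com
dig +noall +answer SRV _etcd-client._tcp.example.com
```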
### Network
2 changes: 1 addition & 1 deletion content/en/docs/v3.2/op-guide/grpc_proxy.md
@@ -171,7 +171,7 @@ ETCDCTL_API=3 ./bin/etcdctl --endpoints=http://localhost:23792 member list --wri

## Namespacing

- Suppose an application expects full control over the entire key space, but the etcd cluster is shared with other applications. To let all appications run without interfering with each other, the proxy can partition the etcd keyspace so clients appear to have access to the complete keyspace. When the proxy is given the flag `--namespace`, all client requests going into the proxy are translated to have a user-defined prefix on the keys. Accesses to the etcd cluster will be under the prefix and responses from the proxy will strip away the prefix; to the client, it appears as if there is no prefix at all.
+ Suppose an application expects full control over the entire key space, but the etcd cluster is shared with other applications. To let all applications run without interfering with each other, the proxy can partition the etcd keyspace so clients appear to have access to the complete keyspace. When the proxy is given the flag `--namespace`, all client requests going into the proxy are translated to have a user-defined prefix on the keys. Accesses to the etcd cluster will be under the prefix and responses from the proxy will strip away the prefix; to the client, it appears as if there is no prefix at all.

To namespace a proxy, start it with `--namespace`:

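The invocation is collapsed in this view; a sketch with a placeholder prefix:

```sh
# Every key the client sees is transparently prefixed with "my-prefix/"
etcd grpc-proxy start --endpoints=localhost:2379 \
  --listen-addr=127.0.0.1:23790 \
  --namespace=my-prefix/
```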
2 changes: 1 addition & 1 deletion content/en/docs/v3.2/op-guide/performance.md
@@ -6,7 +6,7 @@ title: Performance

etcd provides stable, sustained high performance. Two factors define performance: latency and throughput. Latency is the time taken to complete an operation. Throughput is the total operations completed within some time period. Usually average latency increases as the overall throughput increases when etcd accepts concurrent client requests. In common cloud environments, like a standard `n-4` on Google Compute Engine (GCE) or a comparable machine type on AWS, a three member etcd cluster finishes a request in less than one millisecond under light load, and can complete more than 30,000 requests per second under heavy load.

- etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanant storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
+ etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanent storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.

There are other sub-systems which impact the overall performance of etcd. Each serialized etcd request must run through etcd’s boltdb-backed MVCC storage engine, which usually takes tens of microseconds to finish. Periodically etcd incrementally snapshots its recently applied requests, merging them back with the previous on-disk snapshot. This process may lead to a latency spike. Although this is usually not a problem on SSDs, it may double the observed latency on HDD. Likewise, inflight compactions can impact etcd’s performance. Fortunately, the impact is often insignificant since the compaction is staggered so it does not compete for resources with regular requests. The RPC system, gRPC, gives etcd a well-defined, extensible API, but it also introduces additional latency, especially for local reads.

2 changes: 1 addition & 1 deletion content/en/docs/v3.2/op-guide/security.md
@@ -128,7 +128,7 @@ And also the response from the server:

etcd supports the same model as above for **peer communication**, that means the communication between etcd members in a cluster.

- Assuming we have our `ca.crt` and two members with their own keypairs (`member1.crt` & `member1.key`, `member2.crt` & `member2.key`) signed by this CA, we launch etcd as follows:
+ Assuming we have our `ca.crt` and two members with their own key pairs (`member1.crt` & `member1.key`, `member2.crt` & `member2.key`) signed by this CA, we launch etcd as follows:


2 changes: 1 addition & 1 deletion content/en/docs/v3.2/upgrades/upgrade_3_2.md
@@ -210,7 +210,7 @@ clientv3yaml.NewConfig

#### Change in `--listen-peer-urls` and `--listen-client-urls`

- 3.2 now rejects domains names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only prints out warnings), since domain name is invalid for network interface binding. Make sure that those URLs are properly formated as `scheme://IP:port`.
+ 3.2 now rejects domains names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only prints out warnings), since domain name is invalid for network interface binding. Make sure that those URLs are properly formatted as `scheme://IP:port`.

See [issue #6336](https://github.com/etcd-io/etcd/issues/6336) for more contexts.
