Commit 195226d: Fix errors caught by vale

jseldess committed Jan 16, 2019 (1 parent: 1104b14)

Showing 307 changed files with 520 additions and 520 deletions.
8 changes: 4 additions & 4 deletions CONTRIBUTING.md
@@ -44,7 +44,7 @@ Once you're ready to contribute:
6. Back in the CockroachDB docs repo, [open a pull request](https://github.com/cockroachdb/docs/pulls) and assign it to `jseldess`. If you check the `Allow edits from maintainers` option when creating your pull request, we'll be able to make minor edits or fixes directly, if it seems easier than commenting and asking you to make those revisions, which can streamline the review process.

We'll review your changes, providing feedback and guidance as necessary. Also, Teamcity, the system we use to automate tests, will run the markdown files through Jekyll and then run [htmltest](https://github.com/cockroachdb/htmltest) against the resulting HTML output to check for errors. Teamcity will also attempt to sync the HTML to an AWS server, but since you'll be working on your own fork, this part of the process will fail; don't worry about the Teamcity fail status.
We'll review your changes, providing feedback and guidance as necessary. Also, Teamcity, the system we use to automate tests, will run the markdown files through Jekyll and then run [htmltest](https://github.com/cockroachdb/htmltest) against the resulting HTML output to check for errors. Teamcity will also attempt to sync the HTML to an AWS server, but since you'll be working on your own fork, this part of the process will fail; do not worry about the Teamcity fail status.

## Keep Contributing

@@ -101,7 +101,7 @@ Field | Description | Default
`toc` | Adds an auto-generated table of contents to the right of the page body (on standard screens) or at the top of the page (on smaller screens). | `true`
`toc_not_nested` | Limits a page's TOC to h2 headers only. | `false`
`build_for` | Whether to include a page only in CockroachDB docs (`[standard]`), only in Managed CockroachDB docs (`[managed]`), or in both outputs (`[standard, managed]`). | `[standard]`
`allowed_hashes` | Specifies a list of allowed hashes that don't correspond to a section heading on the page. | Nothing
`allowed_hashes` | Specifies a list of allowed hashes that do not correspond to a section heading on the page. | Nothing
`asciicast` | Adds code required to play asciicasts on the page. See [Asciicasts](#asciicasts) for more details. | `false`
`feedback` | Adds "Yes/No" feedback buttons at the bottom of the page. See [Feedback Widget](#feedback-widget) for more details. | `true`
`contribute` | Adds "Contribute" options at the top-right of the page. See [Contributing Options](#contributing-options) for more details. | `true`
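For illustration, front matter combining several of these fields might look like the following sketch (the values are placeholders, not taken from any real page):

```
---
title: Example Page
toc: true
toc_not_nested: false
build_for: [standard, managed]
asciicast: false
feedback: true
contribute: true
---
```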
@@ -138,7 +138,7 @@ New and changed features should be called out in the documentation using version
<span class="version-tag">New in v1.1:</span> The `user_privileges` view identifies global privileges.
```

- To add a version tag to a heading, place `<span class="version-tag">New in vX.X</span>` to the right of the heading, e.g.:
- To add a version tag to a heading, place `<span class="version-tag">New in vX.X</span>` to the right of the heading, for example:

```
## SQL Shell Welcome <div class="version-tag">New in v2.1</div>
@@ -149,7 +149,7 @@ When calling out a change, rather than something new, change `New in vX.X` to `C
#### Allowed Hashes

In a page's front-matter, you can specify a list of allowed hashes
that don't correspond to a section heading on the page. This is
that do not correspond to a section heading on the page. This is
currently used for pages with JavaScript toggle buttons, where the
toggle to activate by default can be specified in the URL hash. If you
attempt to link to, for example, `page-with-toggles.html#toggle-id` without
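As a rough sketch of how this is used in practice (the hash names below are hypothetical), a page with OS toggle buttons might declare:

```
---
title: Page With Toggles
allowed_hashes: [os-mac, os-linux, os-windows]
---
```

Linking to `page-with-toggles.html#os-mac` would then be treated as valid even though no heading on the page generates that anchor.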
2 changes: 1 addition & 1 deletion _includes/v1.0/faq/planned-maintenance.md
@@ -14,7 +14,7 @@ After completing the maintenance work and [restarting the nodes](start-a-node.ht
> SET CLUSTER SETTING server.time_until_store_dead = '5m0s';
~~~

It's also important to ensure that load balancers don't send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:
It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:

{% include copy-clipboard.html %}
~~~ sql
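-- The example statement itself is collapsed in this diff view. As a rough
-- sketch (the '10s' value is an assumption, not taken from this commit), it
-- would resemble:
> SET CLUSTER SETTING server.shutdown.drain_wait = '10s';
~~~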
2 changes: 1 addition & 1 deletion _includes/v1.1/faq/planned-maintenance.md
@@ -14,7 +14,7 @@ After completing the maintenance work and [restarting the nodes](start-a-node.ht
> SET CLUSTER SETTING server.time_until_store_dead = '5m0s';
~~~

It's also important to ensure that load balancers don't send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:
It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:

{% include copy-clipboard.html %}
~~~ sql
2 changes: 1 addition & 1 deletion _includes/v1.1/orchestration/kubernetes-scale-cluster.md
@@ -1,4 +1,4 @@
The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you don't have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod.
The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod.

1. Add a worker node:
- On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
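As a hedged sketch of what these two steps can look like on GKE (the cluster name `cockroachdb`, the node count, and the replica count are assumptions, and exact `gcloud` flags vary by version):

~~~ shell
# Add a worker node to the GKE cluster (example name and size only).
gcloud container clusters resize cockroachdb --num-nodes=4

# One way to add another CockroachDB pod: scale the StatefulSet.
kubectl scale statefulset cockroachdb --replicas=4
~~~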
2 changes: 1 addition & 1 deletion _includes/v1.1/orchestration/start-kubernetes.md
@@ -31,7 +31,7 @@ Choose whether you want to orchestrate CockroachDB with Kubernetes using the hos

This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`.

The process can take a few minutes, so don't move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.
The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.

{% if page.secure == true %}

2 changes: 1 addition & 1 deletion _includes/v1.1/prod-deployment/secure-recommendations.md
@@ -1,4 +1,4 @@
- If you plan to use CockroachDB in production, carefully review the the [Production Checklist](recommended-production-settings.html).
- If you plan to use CockroachDB in production, carefully review the [Production Checklist](recommended-production-settings.html).

- Decide how you want to access your Admin UI:

2 changes: 1 addition & 1 deletion _includes/v1.1/prod-deployment/synchronize-clocks.md
@@ -93,7 +93,7 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2

1. SSH to the first machine.

2. Find the the ID of the Hyper-V Time Synchronization device:
2. Find the ID of the Hyper-V Time Synchronization device:

{% include copy-clipboard.html %}
~~~ shell
2 changes: 1 addition & 1 deletion _includes/v2.0/faq/planned-maintenance.md
@@ -14,7 +14,7 @@ After completing the maintenance work and [restarting the nodes](start-a-node.ht
> SET CLUSTER SETTING server.time_until_store_dead = '5m0s';
~~~

It's also important to ensure that load balancers don't send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:
It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:

{% include copy-clipboard.html %}
~~~ sql
4 changes: 2 additions & 2 deletions _includes/v2.0/known-limitations/node-map.md
@@ -1,8 +1,8 @@
You won't be able to assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration:
You cannot assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration:

| Node | Region | Datacenter |
| ------ | ------ | ------ |
| Node1 | us-east | datacenter-1 |
| Node2 | us-west | datacenter-1 |

In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the **Node Map** won't be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the **Node Map** will be displayed.
In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the **Node Map** will not be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the **Node Map** will be displayed.
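As a hedged illustration of the workaround (the coordinates and locality names below are made up), assigning coordinates at the region level might look like:

~~~ sql
> UPSERT INTO system.locations VALUES
    ('region', 'us-east', 40.71, -74.01),
    ('region', 'us-west', 37.77, -122.42);
~~~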
6 changes: 3 additions & 3 deletions _includes/v2.0/metric-names.md
@@ -1,8 +1,8 @@
Name | Help
-----|-----
`addsstable.applications` | Number of SSTable ingestions applied (i.e. applied by Replicas)
`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas)
`addsstable.copies` | Number of SSTable ingestions that required copying files during application
`addsstable.proposals` | Number of SSTable ingestions proposed (i.e. sent to Raft by lease holders)
`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders)
`build.timestamp` | Build information
`capacity.available` | Available storage capacity
`capacity.reserved` | Capacity reserved for snapshots
@@ -138,7 +138,7 @@ Name | Help
`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum
`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target
`ranges` | Number of ranges
`rebalancing.writespersecond` | Number of keys written (i.e. applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined
`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined
`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined
2 changes: 1 addition & 1 deletion _includes/v2.0/orchestration/kubernetes-scale-cluster.md
@@ -1,4 +1,4 @@
The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you don't have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod.
The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod.

1. Add a worker node:
- On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
2 changes: 1 addition & 1 deletion _includes/v2.0/orchestration/start-kubernetes.md
@@ -29,7 +29,7 @@ Choose whether you want to orchestrate CockroachDB with Kubernetes using the hos

This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`.

The process can take a few minutes, so don't move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.
The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.

3. Get the email address associated with your Google Cloud account:

2 changes: 1 addition & 1 deletion _includes/v2.0/prod-deployment/secure-recommendations.md
@@ -1,4 +1,4 @@
- If you plan to use CockroachDB in production, carefully review the the [Production Checklist](recommended-production-settings.html).
- If you plan to use CockroachDB in production, carefully review the [Production Checklist](recommended-production-settings.html).

- Decide how you want to access your Admin UI:

2 changes: 1 addition & 1 deletion _includes/v2.0/prod-deployment/synchronize-clocks.md
@@ -93,7 +93,7 @@ Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2

1. SSH to the first machine.

2. Find the the ID of the Hyper-V Time Synchronization device:
2. Find the ID of the Hyper-V Time Synchronization device:

{% include copy-clipboard.html %}
~~~ shell
2 changes: 1 addition & 1 deletion _includes/v2.0/prod-deployment/use-cluster.md
@@ -7,5 +7,5 @@ Now that your deployment is working, you can:
You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases, tables, and rows differently. For more information, see [Configure Replication Zones](configure-replication-zones.html).

{{site.data.alerts.callout_danger}}
When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you don't do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
{{site.data.alerts.end}}
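A hedged sketch of raising that replication factor for one of the system ranges using this version's CLI-based zone configuration (the flags, security mode, and zone name are assumptions; see the linked section for the complete procedure):

~~~ shell
# Example only: set the replication factor for the "meta" system range to 5.
echo 'num_replicas: 5' | cockroach zone set .meta --insecure -f -
~~~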
2 changes: 1 addition & 1 deletion _includes/v2.1/faq/planned-maintenance.md
@@ -14,7 +14,7 @@ After completing the maintenance work and [restarting the nodes](start-a-node.ht
> SET CLUSTER SETTING server.time_until_store_dead = '5m0s';
~~~

It's also important to ensure that load balancers don't send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:
It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:

{% include copy-clipboard.html %}
~~~ sql
4 changes: 2 additions & 2 deletions _includes/v2.1/known-limitations/node-map.md
@@ -1,8 +1,8 @@
You won't be able to assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration:
You cannot assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration:

| Node | Region | Datacenter |
| ------ | ------ | ------ |
| Node1 | us-east | datacenter-1 |
| Node2 | us-west | datacenter-1 |

In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the **Node Map** won't be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the **Node Map** will be displayed.
In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the **Node Map** will not be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the **Node Map** will be displayed.
6 changes: 3 additions & 3 deletions _includes/v2.1/metric-names.md
@@ -1,8 +1,8 @@
Name | Help
-----|-----
`addsstable.applications` | Number of SSTable ingestions applied (i.e. applied by Replicas)
`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas)
`addsstable.copies` | Number of SSTable ingestions that required copying files during application
`addsstable.proposals` | Number of SSTable ingestions proposed (i.e. sent to Raft by lease holders)
`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders)
`build.timestamp` | Build information
`capacity.available` | Available storage capacity
`capacity.reserved` | Capacity reserved for snapshots
@@ -138,7 +138,7 @@ Name | Help
`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum
`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target
`ranges` | Number of ranges
`rebalancing.writespersecond` | Number of keys written (i.e. applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined
`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined
`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined
4 changes: 2 additions & 2 deletions _includes/v2.1/orchestration/kubernetes-scale-cluster.md
@@ -1,5 +1,5 @@
The Kubernetes cluster contains 4 nodes, one master and 3 workers. Pods get placed only on worker nodes, so to ensure that you don't have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new worker node and then edit your StatefulSet configuration to add another pod.
The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you don't have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod.
The Kubernetes cluster contains 4 nodes, one master and 3 workers. Pods get placed only on worker nodes, so to ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new worker node and then edit your StatefulSet configuration to add another pod.
The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod.

1. Add a worker node:
- On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).