Fixed several typos
joe2far committed Jul 13, 2016
1 parent 9813b6e commit fa027ee
Showing 7 changed files with 8 additions and 8 deletions.
control-plane-resilience.md (2 changes: 1 addition & 1 deletion)

@@ -179,7 +179,7 @@ well-bounded time period.
 Multiple stateless, self-hosted, self-healing API servers behind a HA
 load balancer, built out by the default "kube-up" automation on GCE,
 AWS and basic bare metal (BBM). Note that the single-host approach of
-hving etcd listen only on localhost to ensure that onyl API server can
+having etcd listen only on localhost to ensure that only API server can
 connect to it will no longer work, so alternative security will be
 needed in the regard (either using firewall rules, SSL certs, or
 something else). All necessary flags are currently supported to enable

daemon.md (2 changes: 1 addition & 1 deletion)

@@ -174,7 +174,7 @@ upgradable, and more generally could not be managed through the API server
 interface.

 A third alternative is to generalize the Replication Controller. We would do
-something like: if you set the `replicas` field of the ReplicationConrollerSpec
+something like: if you set the `replicas` field of the ReplicationControllerSpec
 to -1, then it means "run exactly one replica on every node matching the
 nodeSelector in the pod template." The ReplicationController would pretend
 `replicas` had been set to some large number -- larger than the largest number

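The hunk above touches the passage proposing that `replicas: -1` on a ReplicationControllerSpec mean "run exactly one replica on every node matching the nodeSelector in the pod template." As a rough illustration of how a controller might compute the effective replica count under that rule, here is a minimal, self-contained Go sketch; the types are simplified stand-ins invented for illustration, not the actual Kubernetes API.

```go
// Illustrative sketch only: simplified stand-ins for the real API types.
package main

import "fmt"

type Node struct {
	Name   string
	Labels map[string]string
}

type ReplicationControllerSpec struct {
	Replicas     int               // -1 would mean "one replica per matching node"
	NodeSelector map[string]string // selector taken from the pod template
}

// matches reports whether the node carries every label in the selector.
func matches(n Node, selector map[string]string) bool {
	for k, v := range selector {
		if n.Labels[k] != v {
			return false
		}
	}
	return true
}

// effectiveReplicas is the count the controller would actually reconcile to.
func effectiveReplicas(spec ReplicationControllerSpec, nodes []Node) int {
	if spec.Replicas != -1 {
		return spec.Replicas
	}
	count := 0
	for _, n := range nodes {
		if matches(n, spec.NodeSelector) {
			count++ // exactly one replica per node matching the nodeSelector
		}
	}
	return count
}

func main() {
	nodes := []Node{
		{Name: "node-1", Labels: map[string]string{"role": "logging"}},
		{Name: "node-2", Labels: map[string]string{"role": "logging"}},
		{Name: "node-3", Labels: map[string]string{"role": "web"}},
	}
	spec := ReplicationControllerSpec{
		Replicas:     -1,
		NodeSelector: map[string]string{"role": "logging"},
	}
	fmt.Println(effectiveReplicas(spec, nodes)) // prints 2
}
```
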
federated-services.md (2 changes: 1 addition & 1 deletion)

@@ -505,7 +505,7 @@ depend on what scheduling policy is in force. In the above example, the
 scheduler created an equal number of replicas (2) in each of the three
 underlying clusters, to make up the total of 6 replicas required. To handle
 entire cluster failures, various approaches are possible, including:
-1. **simple overprovisioing**, such that sufficient replicas remain even if a
+1. **simple overprovisioning**, such that sufficient replicas remain even if a
 cluster fails. This wastes some resources, but is simple and reliable.
 2. **pod autoscaling**, where the replication controller in each
 cluster automatically and autonomously increases the number of

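For the "simple overprovisioning" option mentioned in this hunk, a back-of-the-envelope sketch using the excerpt's numbers (6 required replicas spread over 3 clusters): run enough replicas per cluster that the surviving clusters still cover the requirement if any one cluster fails. The helper below is illustrative arithmetic, not code from the proposal.

```go
// Illustrative arithmetic for the "simple overprovisioning" option.
package main

import "fmt"

// perClusterReplicas returns how many replicas each of `clusters` clusters
// should run so that `required` replicas survive the loss of any one cluster.
func perClusterReplicas(required, clusters int) int {
	survivors := clusters - 1
	return (required + survivors - 1) / survivors // ceil(required / survivors)
}

func main() {
	per := perClusterReplicas(6, 3)
	fmt.Printf("%d replicas per cluster, %d total (vs. 6 without overprovisioning)\n",
		per, per*3) // 3 replicas per cluster, 9 total
}
```
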
indexed-job.md (2 changes: 1 addition & 1 deletion)

@@ -522,7 +522,7 @@ The index-only approach:
 - Requires that the user keep the *per completion parameters* in a separate
 storage, such as a configData or networked storage.
 - Makes no changes to the JobSpec.
-- Drawback: while in separate storage, they could be mutatated, which would have
+- Drawback: while in separate storage, they could be mutated, which would have
 unexpected effects.
 - Drawback: Logic for using index to lookup parameters needs to be in the Pod.
 - Drawback: CLIs and UIs are limited to using the "index" as the identity of a

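As a rough illustration of the index-only approach described in this hunk, the pod itself turns its completion index into a concrete parameter by consulting storage the user maintains separately. The sketch below assumes a hypothetical JOB_COMPLETION_INDEX environment variable and a newline-separated parameter file mounted from configData; neither name comes from the proposal.

```go
// Illustrative sketch of in-pod index-to-parameter lookup; the environment
// variable name and the parameter file layout are assumptions, not part of
// the indexed-job proposal itself.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Hypothetical: per-completion parameters kept by the user in separate
	// storage (e.g. a configData entry mounted as a file), one per line.
	data, err := os.ReadFile("/etc/job-params/params.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read parameter file:", err)
		os.Exit(1)
	}
	params := strings.Split(strings.TrimSpace(string(data)), "\n")

	// Hypothetical: the completion index handed to this pod by the Job machinery.
	idx, err := strconv.Atoi(os.Getenv("JOB_COMPLETION_INDEX"))
	if err != nil || idx < 0 || idx >= len(params) {
		fmt.Fprintln(os.Stderr, "missing or out-of-range completion index")
		os.Exit(1)
	}

	fmt.Println("processing", params[idx]) // the pod's per-completion parameter
}
```
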
nodeaffinity.md (2 changes: 1 addition & 1 deletion)

@@ -62,7 +62,7 @@ scheduling requirements.
 rather than replacing `map[string]string`, due to backward compatibility
 requirements.)

-The affiniy specifications described above allow a pod to request various
+The affinity specifications described above allow a pod to request various
 properties that are inherent to nodes, for example "run this pod on a node with
 an Intel CPU" or, in a multi-zone cluster, "run this pod on a node in zone Z."
 ([This issue](https://github.com/kubernetes/kubernetes/issues/9044) describes

security.md (2 changes: 1 addition & 1 deletion)

@@ -204,7 +204,7 @@ arbitrary containers on hosts, to gain access to any protected information
 stored in either volumes or in pods (such as access tokens or shared secrets
 provided as environment variables), to intercept and redirect traffic from
 running services by inserting middlemen, or to simply delete the entire history
-of the custer.
+of the cluster.

 As a general principle, access to the central data store should be restricted to
 the components that need full control over the system and which can apply

taint-toleration-dedicated.md (4 changes: 2 additions & 2 deletions)

@@ -201,7 +201,7 @@ to both `NodeSpec` and `NodeStatus`. The value in `NodeStatus` is the union
 of the taints specified by various sources. For now, the only source is
 the `NodeSpec` itself, but in the future one could imagine a node inheriting
 taints from pods (if we were to allow taints to be attached to pods), from
-the node's startup coniguration, etc. The scheduler should look at the `Taints`
+the node's startup configuration, etc. The scheduler should look at the `Taints`
 in `NodeStatus`, not in `NodeSpec`.

 Taints and tolerations are not scoped to namespace.

@@ -305,7 +305,7 @@ Users should not start using taints and tolerations until the full
 implementation has been in Kubelet and the master for enough binary versions
 that we feel comfortable that we will not need to roll back either Kubelet or
 master to a version that does not support them. Longer-term we will use a
-progamatic approach to enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).
+programatic approach to enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).

 ## Related issues

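To make the NodeSpec/NodeStatus split in the first hunk concrete: `NodeStatus.Taints` is meant to be the union of taints from every source (today only `NodeSpec`), and the scheduler filters on `NodeStatus`, never `NodeSpec`. The Go below is a simplified sketch with invented stand-in types, not the real API or scheduler code.

```go
// Simplified stand-in types; illustrative only, not the real Kubernetes API.
package main

import "fmt"

type Taint struct {
	Key, Value string
}

type NodeSpec struct {
	Taints []Taint
}

type NodeStatus struct {
	Taints []Taint // union of the taints reported by every source
}

// computeStatusTaints unions taints from all sources into NodeStatus.
// Today the only source is NodeSpec; future sources (pods, the node's
// startup configuration, ...) would simply be extra arguments here.
func computeStatusTaints(spec NodeSpec, otherSources ...[]Taint) NodeStatus {
	seen := map[Taint]bool{}
	var union []Taint
	add := func(taints []Taint) {
		for _, t := range taints {
			if !seen[t] {
				seen[t] = true
				union = append(union, t)
			}
		}
	}
	add(spec.Taints)
	for _, src := range otherSources {
		add(src)
	}
	return NodeStatus{Taints: union}
}

// tolerates reports whether the pod's tolerations cover every taint in
// NodeStatus; the scheduler would only consider nodes for which this holds.
func tolerates(tolerations []Taint, status NodeStatus) bool {
	ok := map[Taint]bool{}
	for _, t := range tolerations {
		ok[t] = true
	}
	for _, t := range status.Taints {
		if !ok[t] {
			return false
		}
	}
	return true
}

func main() {
	status := computeStatusTaints(NodeSpec{Taints: []Taint{{Key: "dedicated", Value: "gpu"}}})
	fmt.Println(tolerates([]Taint{{Key: "dedicated", Value: "gpu"}}, status)) // true
	fmt.Println(tolerates(nil, status))                                       // false
}
```
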
