
Commit

Incorrect punctuation types
Signed-off-by: yanan Lee <[email protected]>

Single quotes incorrect

Signed-off-by: yanan Lee <[email protected]>

Incorrect

Signed-off-by: yanan Lee <[email protected]>

delete

Signed-off-by: yanan Lee <[email protected]>
EnergyLiYN committed Dec 22, 2016
1 parent c9f881b commit 1644134
Showing 35 changed files with 137 additions and 137 deletions.
4 changes: 2 additions & 2 deletions CLA.md
@@ -25,6 +25,6 @@

**Step 5**: The status on your old PRs will be updated when any new comment is made on it.

### I’m having issues with signing the CLA.
### I'm having issues with signing the CLA.

If you’re facing difficulty with signing the CNCF CLA, please explain your case on https://github.com/kubernetes/kubernetes/issues/27796 and we (@sarahnovotny and @foxish), along with the CNCF will help sort it out.
If you're facing difficulty with signing the CNCF CLA, please explain your case on https://github.com/kubernetes/kubernetes/issues/27796 and we (@sarahnovotny and @foxish), along with the CNCF will help sort it out.
6 changes: 3 additions & 3 deletions contributors/design-proposals/bootstrap-discovery.md
@@ -36,7 +36,7 @@ Additions include:
* In an HA world, API servers may come and go and it is necessary to make sure we are talking to the same cluster as we thought we were talking to.
* A _set_ of addresses for finding the cluster.
* It is implied that all of these are equivalent and that a client can try multiple until an appropriate target is found.
* Initially I’m proposing a flat set here. In the future we can introduce more structure that hints to the user which addresses to try first.
* Initially I'm proposing a flat set here. In the future we can introduce more structure that hints to the user which addresses to try first.
* Better documentation and exposure of:
* The root certificates can be a bundle to enable rotation.
* If no root certificates are given (and the insecure bit isn't set) then the client trusts the system managed list of CAs.
@@ -45,7 +45,7 @@ Additions include:

**This is to be implemented in a later phase**

Any client of the cluster will want to have this information. As the configuration of the cluster changes we need the client to keep this information up to date. It is assumed that the information here won’t drift so fast that clients won’t be able to find *some* way to connect.
Any client of the cluster will want to have this information. As the configuration of the cluster changes we need the client to keep this information up to date. It is assumed that the information here won't drift so fast that clients won't be able to find *some* way to connect.

In exceptional circumstances it is possible that this information may be out of date and a client would be unable to connect to a cluster. Consider the case where a user has kubectl set up and working well and then doesn't run kubectl for quite a while. It is possible that over this time (a) the set of servers will have migrated so that all endpoints are now invalid or (b) the root certificates will have rotated so that the user can no longer trust any endpoint.

@@ -83,7 +83,7 @@ If the user requires some auth to the HTTPS server (to keep the ClusterInfo obje

### Method: Bootstrap Token

There won’t always be a trusted external endpoint to talk to and transmitting
There won't always be a trusted external endpoint to talk to and transmitting
the locator file out of band is a pain. However, we want something more secure
than just hitting HTTP and trusting whatever we get back. In this case, we
assume we have the following:
@@ -130,7 +130,7 @@ All functions listed above are expected to be thread-safe.

### Pod/Container Lifecycle

The PodSandbox’s lifecycle is decoupled from the containers, i.e., a sandbox
The PodSandbox's lifecycle is decoupled from the containers, i.e., a sandbox
is created before any containers, and can exist after all containers in it have
terminated.

4 changes: 2 additions & 2 deletions contributors/design-proposals/controller-ref.md
@@ -22,7 +22,7 @@ Approvers:

Main goal of `ControllerReference` effort is to solve a problem of overlapping controllers that fight over some resources (e.g. `ReplicaSets` fighting with `ReplicationControllers` over `Pods`), which cause serious [problems](https://github.com/kubernetes/kubernetes/issues/24433) such as exploding memory of Controller Manager.

We don’t want to have (just) an in-memory solution, as we don’t want a Controller Manager crash to cause massive changes in object ownership in the system. I.e. we need to persist the information about "owning controller".
We don't want to have (just) an in-memory solution, as we don’t want a Controller Manager crash to cause massive changes in object ownership in the system. I.e. we need to persist the information about "owning controller".

Secondary goal of this effort is to improve performance of various controllers and schedulers, by removing the need for expensive lookup for all matching "controllers".
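
For illustration, a minimal Go sketch of what persisting an "owning controller" record on an object could look like; the types and field names below are simplified assumptions for this sketch, not the actual Kubernetes API proposed in the document or touched by this commit.

```go
package main

import "fmt"

// OwnerReference is a simplified, hypothetical record of the "owning
// controller" persisted on a dependent object, as discussed above.
type OwnerReference struct {
	Kind       string
	Name       string
	UID        string
	Controller bool // true marks this owner as the managing controller
}

// ObjectMeta is a stripped-down stand-in for object metadata.
type ObjectMeta struct {
	Name            string
	OwnerReferences []OwnerReference
}

func main() {
	pod := ObjectMeta{
		Name: "web-5x9k2", // illustrative pod name
		OwnerReferences: []OwnerReference{
			{Kind: "ReplicaSet", Name: "web", UID: "1234-abcd", Controller: true},
		},
	}
	fmt.Printf("%s is controlled by %s/%s\n",
		pod.Name, pod.OwnerReferences[0].Kind, pod.OwnerReferences[0].Name)
}
```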

@@ -75,7 +75,7 @@ and

By design there are possible races during adoption if multiple controllers can own a given object.

To prevent re-adoption of an object during deletion the `DeletionTimestamp` will be set when deletion is starting. When a controller has a non-nil `DeletionTimestamp` it won’t take any actions except updating its `Status` (in particular it won’t adopt any objects).
To prevent re-adoption of an object during deletion the `DeletionTimestamp` will be set when deletion is starting. When a controller has a non-nil `DeletionTimestamp` it won't take any actions except updating its `Status` (in particular it won't adopt any objects).
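
A hedged Go sketch of the guard described above, using hypothetical type and function names: once `DeletionTimestamp` is set, the sync loop only updates status and never adopts.

```go
package main

import (
	"fmt"
	"time"
)

// ReplicaSet is a hypothetical, pared-down controller object used only to
// illustrate the DeletionTimestamp rule described above.
type ReplicaSet struct {
	Name              string
	DeletionTimestamp *time.Time
}

// sync sketches the control loop: a controller being deleted must not adopt
// orphaned objects; it may only update its own Status.
func sync(rs *ReplicaSet) {
	if rs.DeletionTimestamp != nil {
		fmt.Println(rs.Name, ": deletion in progress - updating status only, no adoption")
		return
	}
	fmt.Println(rs.Name, ": adopting matching orphans and reconciling replicas")
}

func main() {
	now := time.Now()
	sync(&ReplicaSet{Name: "rs-live"})
	sync(&ReplicaSet{Name: "rs-deleting", DeletionTimestamp: &now})
}
```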

# Implementation plan (sketch):

8 changes: 4 additions & 4 deletions contributors/design-proposals/daemon.md
@@ -46,7 +46,7 @@ For other uses, see the related [feature request](https://issues.k8s.io/1518)
The DaemonSet supports standard API features:
- create
- The spec for DaemonSets has a pod template field.
- Using the pod’s nodeSelector field, DaemonSets can be restricted to operate
- Using the pod's nodeSelector field, DaemonSets can be restricted to operate
over nodes that have a certain label. For example, suppose that in a cluster
some nodes are labeled ‘app=database’. You can use a DaemonSet to launch a
datastore pod on exactly those nodes labeled ‘app=database’.
@@ -118,7 +118,7 @@ replica of the daemon pod on the node.
- When a new node is added to the cluster, the DaemonSet controller starts
daemon pods on the node for DaemonSets whose pod template nodeSelectors match
the node’s labels.
the node's labels.
- Suppose the user launches a DaemonSet that runs a logging daemon on all
nodes labeled “logger=fluentd”. If the user then adds the “logger=fluentd” label
to a node (that did not initially have the label), the logging daemon will
@@ -179,7 +179,7 @@ expapi/v1/register.go
#### Daemon Manager

- Creates new DaemonSets when requested. Launches the corresponding daemon pod
on all nodes with labels matching the new DaemonSet’s selector.
on all nodes with labels matching the new DaemonSet's selector.
- Listens for addition of new nodes to the cluster, by setting up a
framework.NewInformer that watches for the creation of Node API objects. When a
new node is added, the daemon manager will loop through each DaemonSet. If the
@@ -193,7 +193,7 @@ via its hostname.)

- Does not need to be modified, but health checking will occur for the daemon
pods and revive the pods if they are killed (we set the pod restartPolicy to
Always). We reject DaemonSet objects with pod templates that don’t have
Always). We reject DaemonSet objects with pod templates that don't have
restartPolicy set to Always.

## Open Issues
40 changes: 20 additions & 20 deletions contributors/design-proposals/disk-accounting.md
@@ -8,7 +8,7 @@ This proposal is an attempt to come up with a means for accounting disk usage in

### Why is disk accounting necessary?

As of kubernetes v1.1 clusters become unusable over time due to the local disk becoming full. The kubelets on the node attempt to perform garbage collection of old containers and images, but that doesn’t prevent running pods from using up all the available disk space.
As of kubernetes v1.1 clusters become unusable over time due to the local disk becoming full. The kubelets on the node attempt to perform garbage collection of old containers and images, but that doesn't prevent running pods from using up all the available disk space.

Kubernetes users have no insight into how the disk is being consumed.

@@ -42,13 +42,13 @@ Disk can be consumed for:

1. Container images

2. Container’s writable layer
2. Container's writable layer

3. Container’s logs - when written to stdout/stderr and default logging backend in docker is used.
3. Container's logs - when written to stdout/stderr and default logging backend in docker is used.

4. Local volumes - hostPath, emptyDir, gitRepo, etc.

As of Kubernetes v1.1, kubelet exposes disk usage for the entire node and the container’s writable layer for aufs docker storage driver.
As of Kubernetes v1.1, kubelet exposes disk usage for the entire node and the container's writable layer for aufs docker storage driver.
This information is made available to end users via the heapster monitoring pipeline.

#### Image layers
@@ -86,7 +86,7 @@ In addition to this, the changes introduced by a pod on the source of a hostPath

### Docker storage model

Before we start exploring solutions, let’s get familiar with how docker handles storage for images, writable layer and logs.
Before we start exploring solutions, let's get familiar with how docker handles storage for images, writable layer and logs.

On all storage drivers, logs are stored under `<docker root dir>/containers/<container-id>/`

@@ -123,7 +123,7 @@ Everything under `/var/lib/docker/overlay/<id>` are files required for running

Disk accounting is dependent on the storage driver in docker. A common solution that works across all storage drivers isn't available.

I’m listing a few possible solutions for disk accounting below along with their limitations.
I'm listing a few possible solutions for disk accounting below along with their limitations.

We need a plugin model for disk accounting. Some storage drivers in docker will require special plugins.

Expand All @@ -136,7 +136,7 @@ But isolated usage isn't of much use because image layers are shared between con

Continuing to use the entire partition availability for garbage collection purposes in kubelet, should not affect reliability.
We might garbage collect more often.
As long as we do not expose features that require persisting old containers, computing image layer usage wouldn’t be necessary.
As long as we do not expose features that require persisting old containers, computing image layer usage wouldn't be necessary.

Main goals for images are
1. Capturing total image disk usage
@@ -208,7 +208,7 @@ Both `uids` and `gids` are meant for security. Overloading that concept for disk

Kubelet needs to define a gid for tracking image layers and make that gid or group the owner of `/var/lib/docker/[aufs | overlayfs]` recursively. Once this is done, the quota sub-system in the kernel will report the blocks being consumed by the storage driver on the underlying partition.

Since this number also includes the container’s writable layer, we will have to somehow subtract that usage from the overall usage of the storage driver directory. Luckily, we can use the same mechanism for tracking container’s writable layer. Once we apply a different `gid` to the container’s writable layer, which is located under `/var/lib/docker/<storage_driver>/diff/<container_id>`, the quota subsystem will not include the container’s writable layer usage.
Since this number also includes the container's writable layer, we will have to somehow subtract that usage from the overall usage of the storage driver directory. Luckily, we can use the same mechanism for tracking container’s writable layer. Once we apply a different `gid` to the container's writable layer, which is located under `/var/lib/docker/<storage_driver>/diff/<container_id>`, the quota subsystem will not include the container's writable layer usage.

Xfs on the other hand support project quota which lets us track disk usage of arbitrary directories using a project. Support for this feature in ext4 is being reviewed. So on xfs, we can use quota without having to clobber the writable layer's uid and gid.
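
As a rough sketch of the gid-based accounting described above (assuming the aufs driver path and an arbitrarily chosen gid, neither of which comes from this commit), the kubelet would recursively hand the storage-driver tree to a tracking group so that group quota reports its block usage:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// chgrpTree assigns a tracking gid to every path under root. Once the tree is
// owned by that group, the kernel's group quota accounting (if enabled on the
// partition) reports the blocks the storage driver consumes.
func chgrpTree(root string, gid int) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		// uid -1 leaves the owner unchanged; only the group is rewritten.
		return os.Lchown(path, -1, gid)
	})
}

func main() {
	const imageLayersGID = 61000 // hypothetical gid reserved for image-layer accounting
	if err := chgrpTree("/var/lib/docker/aufs", imageLayersGID); err != nil {
		fmt.Fprintln(os.Stderr, "chgrp failed:", err)
	}
}
```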

@@ -219,7 +219,7 @@ Xfs on the other hand support project quota which lets us track disk usage of ar

**Cons**

* Requires updates to default ownership on docker’s internal storage driver directories. We will have to deal with storage driver implementation details in any approach that is not docker native.
* Requires updates to default ownership on docker's internal storage driver directories. We will have to deal with storage driver implementation details in any approach that is not docker native.

* Requires additional node configuration - quota subsystem needs to be setup on the node. This can either be automated or made a requirement for the node.

@@ -238,11 +238,11 @@ Project Quota support for ext4 is currently being reviewed upstream. If that fea

Devicemapper storage driver will setup two volumes, metadata and data, that will be used to store image layers and container writable layer. The volumes can be real devices or loopback. A Pool device is created which uses the underlying volume for real storage.

A new thinly-provisioned volume, based on the pool, will be created for running container’s.
A new thinly-provisioned volume, based on the pool, will be created for running container's.

The kernel tracks the usage of the pool device at the block device layer. The usage here includes image layers and container’s writable layers.
The kernel tracks the usage of the pool device at the block device layer. The usage here includes image layers and container's writable layers.

Since the kubelet has to track the writable layer usage anyways, we can subtract the aggregated root filesystem usage from the overall pool device usage to get the image layer’s disk usage.
Since the kubelet has to track the writable layer usage anyways, we can subtract the aggregated root filesystem usage from the overall pool device usage to get the image layer's disk usage.

Linux quota and `du` will not work with device mapper.
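
A small Go sketch of the subtraction just described, with made-up input values: the thin-pool usage covers image layers plus all writable layers, so removing the per-container usage the kubelet already tracks leaves the image-layer share.

```go
package main

import "fmt"

// imageLayerBytes derives image-layer usage from thin-pool usage by
// subtracting the aggregated writable-layer usage, as described above.
func imageLayerBytes(poolUsed uint64, writableLayers []uint64) uint64 {
	var writable uint64
	for _, b := range writableLayers {
		writable += b
	}
	if writable > poolUsed {
		return 0 // guard against samples taken at different times
	}
	return poolUsed - writable
}

func main() {
	// 8 GiB used in the pool, two containers using 512 MiB and 1 GiB.
	fmt.Println(imageLayerBytes(8<<30, []uint64{512 << 20, 1 << 30}))
}
```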

@@ -253,7 +253,7 @@ A docker dry run option (mentioned above) is another possibility.

###### Overlayfs / Aufs

Docker creates a separate directory for the container’s writable layer which is then overlayed on top of read-only image layers.
Docker creates a separate directory for the container's writable layer which is then overlayed on top of read-only image layers.

Both the previously mentioned options of `du` and `Linux Quota` will work for this case as well.
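
A minimal Go sketch of the `du`-style option for these drivers, assuming the per-container diff directory layout mentioned earlier (`/var/lib/docker/<storage_driver>/diff/<container_id>`); it simply sums allocated 512-byte blocks and ignores hardlink deduplication.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// duBytes walks a directory tree and sums the 512-byte blocks actually
// allocated to each file, which is roughly what `du` reports.
func duBytes(root string) (uint64, error) {
	var total uint64
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if st, ok := info.Sys().(*syscall.Stat_t); ok {
			total += uint64(st.Blocks) * 512
		}
		return nil
	})
	return total, err
}

func main() {
	// Hypothetical writable-layer path for one container on the aufs driver.
	n, err := duBytes("/var/lib/docker/aufs/diff/0123456789ab")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("writable layer uses %d bytes\n", n)
}
```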

@@ -268,14 +268,14 @@ If local disk becomes a schedulable resource, `linux quota` can be used to impos

FIXME: How to calculate writable layer usage with devicemapper?

To enforce `limits` the volume created for the container’s writable layer filesystem can be dynamically [resized](https://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/), to not use more than `limit`. `request` will have to be enforced by the kubelet.
To enforce `limits` the volume created for the container's writable layer filesystem can be dynamically [resized](https://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/), to not use more than `limit`. `request` will have to be enforced by the kubelet.


#### Container logs

Container logs are not storage driver specific. We can use either `du` or `quota` to track log usage per container. Log files are stored under `/var/lib/docker/containers/<container-id>`.

In the case of quota, we can create a separate gid for tracking log usage. This will let users track log usage and writable layer’s usage individually.
In the case of quota, we can create a separate gid for tracking log usage. This will let users track log usage and writable layer's usage individually.

For the purposes of enforcing limits though, kubelet will use the sum of logs and writable layer.
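
A short Go sketch of per-container log accounting under the directory named above; the `*-json.log*` file pattern assumes docker's default json-file logging driver and should be treated as an assumption rather than part of the proposal.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// containerLogBytes sums the sizes of a container's log files under
// /var/lib/docker/containers/<container-id>/, including rotated files.
func containerLogBytes(containerID string) (int64, error) {
	pattern := filepath.Join("/var/lib/docker/containers", containerID, "*-json.log*")
	matches, err := filepath.Glob(pattern)
	if err != nil {
		return 0, err
	}
	var total int64
	for _, m := range matches {
		if fi, err := os.Stat(m); err == nil {
			total += fi.Size()
		}
	}
	return total, nil
}

func main() {
	n, _ := containerLogBytes("0123456789ab") // illustrative container id
	fmt.Printf("log usage: %d bytes\n", n)
}
```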

@@ -340,9 +340,9 @@ In this milestone, we will add support for quota and make it opt-in. There shoul

* Configure linux quota automatically on startup. Do not set any limits in this phase.

* Allocate gids for pod volumes, container’s writable layer and logs, and also for image layers.
* Allocate gids for pod volumes, container's writable layer and logs, and also for image layers.

* Update the docker runtime plugin in kubelet to perform the necessary `chown’s` and `chmod’s` between container creation and startup.
* Update the docker runtime plugin in kubelet to perform the necessary `chown's` and `chmod's` between container creation and startup.

* Pass the allocated gids as supplementary gids to containers.

@@ -363,7 +363,7 @@ In this milestone, we will make local disk a schedulable resource.

* Quota plugin sets hard limits equal to user specified `limits`.

* Devicemapper plugin resizes writable layer to not exceed the container’s disk `limit`.
* Devicemapper plugin resizes writable layer to not exceed the container's disk `limit`.

* Disk manager evicts pods based on `usage` - `request` delta instead of just QoS class.

@@ -448,7 +448,7 @@ Track the space occupied by images after it has been pulled locally as follows.

3. Any new images pulled or containers created will be accounted to the `docker-images` group by default.

4. Once we update the group ownership on newly created containers to a different gid, the container writable layer’s specific disk usage gets dropped from this group.
4. Once we update the group ownership on newly created containers to a different gid, the container writable layer's specific disk usage gets dropped from this group.
#### Overlayfs
@@ -574,7 +574,7 @@ Capacity in MB = 1638400 * 512 * 128 bytes = 100 GB
##### Testing titbits
* Ubuntu 15.10 doesn’t ship with the quota module on virtual machines. [Install ‘linux-image-extra-virtual’](http://askubuntu.com/questions/109585/quota-format-not-supported-in-kernel) package to get quota to work.
* Ubuntu 15.10 doesn't ship with the quota module on virtual machines. [Install ‘linux-image-extra-virtual’](http://askubuntu.com/questions/109585/quota-format-not-supported-in-kernel) package to get quota to work.

* Overlay storage driver needs kernels >= 3.18. I used Ubuntu 15.10 to test Overlayfs.

@@ -221,7 +221,7 @@ Unable to join mesh network. Check your token.

* @jbeda & @philips?

1. Documentation - so that new users can see this in 1.4 (even if it’s caveated with alpha/experimental labels and flags all over it)
1. Documentation - so that new users can see this in 1.4 (even if it's caveated with alpha/experimental labels and flags all over it)

* @lukemarsden
