Fix trailing whitespace in all docs
eparis committed Jul 31, 2015
1 parent 39eedfa commit b15dad5
Showing 12 changed files with 54 additions and 54 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

# Kubernetes Design Overview

Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.

Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, require active controllers, not just imperative orchestration.

6 changes: 3 additions & 3 deletions admission_control_resource_quota.md
@@ -104,7 +104,7 @@ type ResourceQuotaList struct {

## AdmissionControl plugin: ResourceQuota

The **ResourceQuota** plug-in introspects all incoming admission requests.

It makes decisions by evaluating the incoming object against all defined **ResourceQuota.Status.Hard** resource limits in the request
namespace. If acceptance of the resource would cause the total usage of a named resource to exceed its hard limit, the request is denied.
@@ -125,7 +125,7 @@ Any resource that is not part of core Kubernetes must follow the resource naming
This means the resource must have a fully-qualified name (e.g. mycompany.org/shinynewresource).

If the incoming request does not cause the total usage to exceed any of the enumerated hard resource limits, the plug-in will post a
**ResourceQuotaUsage** document to the server to atomically update the observed usage based on the previously read
**ResourceQuota.ResourceVersion**. This keeps incremental usage atomically consistent, but does introduce a bottleneck (intentionally)
into the system.
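
A minimal sketch of the deny-or-atomically-update flow just described, assuming hypothetical store and field names (this is not the Kubernetes implementation; the real plug-in posts a **ResourceQuotaUsage** document keyed on **ResourceQuota.ResourceVersion**):

```python
# Illustrative sketch: admission checks usage against hard limits, then
# performs a compare-and-swap keyed on the quota document's resource version.

class Conflict(Exception):
    """Raised when the quota document changed since it was read."""

class QuotaStore:
    def __init__(self, hard, used):
        self.hard = hard          # hard limits, e.g. {"pods": 10}
        self.used = dict(used)    # observed usage
        self.resource_version = 1

    def update_usage(self, new_used, expected_version):
        # Atomic update: fails if another writer bumped the version first.
        if expected_version != self.resource_version:
            raise Conflict()
        self.used = new_used
        self.resource_version += 1

def admit(store, resource, delta=1):
    """Admit a request that would add `delta` units of `resource`."""
    version = store.resource_version
    proposed = dict(store.used)
    proposed[resource] = proposed.get(resource, 0) + delta
    if proposed[resource] > store.hard.get(resource, float("inf")):
        return False  # would exceed the hard limit: deny
    store.update_usage(proposed, version)  # may raise Conflict
    return True
```

A caller that hits the conflict simply re-reads the quota document and retries, which is the bottleneck the design accepts intentionally.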

@@ -184,7 +184,7 @@ resourcequotas 1 1
services 3 5
```

## More information

See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more information.

2 changes: 1 addition & 1 deletion architecture.md
@@ -47,7 +47,7 @@ Each node runs Docker, of course. Docker takes care of the details of downloadi

### Kubelet

The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.

### Kube-Proxy

2 changes: 1 addition & 1 deletion event_compression.md
@@ -49,7 +49,7 @@ Event compression should be best effort (not guaranteed). Meaning, in the worst
## Design

Instead of a single Timestamp, each event object [contains](http://releases.k8s.io/HEAD/pkg/api/types.go#L1111) the following fields:
* `FirstTimestamp util.Time`
* The date/time of the first occurrence of the event.
* `LastTimestamp util.Time`
* The date/time of the most recent occurrence of the event.
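
A minimal sketch of how those two fields support compression (the aggregation key and record shape here are simplified assumptions, not the actual Kubernetes event schema):

```python
# Illustrative sketch: repeated identical events update LastTimestamp and a
# count on one record instead of creating a new event object each time.

events = {}  # aggregation key -> event record

def record_event(key, timestamp):
    e = events.get(key)
    if e is None:
        events[key] = {"FirstTimestamp": timestamp,
                       "LastTimestamp": timestamp,
                       "Count": 1}
    else:
        e["LastTimestamp"] = timestamp
        e["Count"] += 1
```

Because compression is best effort, a real implementation can safely fall back to writing a fresh event whenever the lookup or update fails.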
8 changes: 4 additions & 4 deletions expansion.md
@@ -87,7 +87,7 @@ available to subsequent expansions.

### Use Case: Variable expansion in command

Users frequently need to pass the values of environment variables to a container's command.
Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a
shell in the container's command and have the shell perform the substitution, or to write a wrapper
script that sets up the environment and runs the command. This has a number of drawbacks:
@@ -130,7 +130,7 @@ The exact syntax for variable expansion has a large impact on how users perceive
feature. We considered implementing a very restrictive subset of the shell `${var}` syntax. This
syntax is an attractive option on some level, because many people are familiar with it. However,
this syntax also has a large number of lesser known features such as the ability to provide
default values for unset variables, perform inline substitution, etc.

In the interest of preventing conflation of the expansion feature in Kubernetes with the shell
feature, we chose a different syntax similar to the one in Makefiles, `$(var)`. We also chose not
@@ -239,7 +239,7 @@ The necessary changes to implement this functionality are:
`ObjectReference` and an `EventRecorder`
2. Introduce `third_party/golang/expansion` package that provides:
1. An `Expand(string, func(string) string) string` function
2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) string` function
3. Make the kubelet expand environment correctly
4. Make the kubelet expand command correctly
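
A minimal sketch of the Makefile-like `$(var)` expansion described above (the real implementation lives in `third_party/golang/expansion`; leaving unknown variables untouched is one possible policy assumed here, and shell-style `${var}` is deliberately not expanded):

```python
import re

# Illustrative sketch: expand $(VAR) using a mapping function, leaving
# unknown variables and shell-style ${VAR} references untouched.

def expand(input_str, mapping):
    def repl(match):
        name = match.group(1)
        value = mapping(name)
        # Unknown variable: keep the literal $(name) text.
        return value if value is not None else match.group(0)
    return re.sub(r"\$\(([^)]+)\)", repl, input_str)
```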

@@ -311,7 +311,7 @@ func Expand(input string, mapping func(string) string) string {

#### Kubelet changes

The Kubelet should be made to correctly expand variable references in a container's environment,
command, and args. Changes will need to be made to:

1. The `makeEnvironmentVariables` function in the kubelet; this is used by
14 changes: 7 additions & 7 deletions namespaces.md
@@ -52,7 +52,7 @@ Each user community has its own:

A cluster operator may create a Namespace for each unique user community.

The Namespace provides a unique scope for:

1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
@@ -142,7 +142,7 @@ type NamespaceSpec struct {

A *FinalizerName* is a qualified name.

The API Server enforces that a *Namespace* can be deleted from storage if and only if
its *Namespace.Spec.Finalizers* is empty.

A *finalize* operation is the only mechanism to modify the *Namespace.Spec.Finalizers* field post creation.
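
A minimal sketch of that deletion gate, with hypothetical names (not the Kubernetes source):

```python
# Illustrative sketch: a Namespace may be removed from storage only once its
# Spec.Finalizers list is empty, and finalize() is the only way to shrink it.

class Namespace:
    def __init__(self, name, finalizers):
        self.name = name
        self.finalizers = list(finalizers)  # e.g. ["kubernetes"]

def finalize(ns, finalizer):
    """The finalize operation: remove one finalizer token post-creation."""
    if finalizer in ns.finalizers:
        ns.finalizers.remove(finalizer)

def can_delete_from_storage(ns):
    return len(ns.finalizers) == 0
```
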
@@ -189,12 +189,12 @@ are known to the cluster.
The *namespace controller* enumerates each known resource type in that namespace and deletes it one by one.

Admission control blocks creation of new resources in that namespace in order to prevent a race condition
where the controller could believe all of a given resource type had been deleted from the namespace,
when in fact some other rogue client agent had created new objects. Using admission control in this
scenario allows each of the registry implementations for the individual objects to avoid having to account for the Namespace life-cycle.

Once all objects known to the *namespace controller* have been deleted, the *namespace controller*
executes a *finalize* operation on the namespace that removes the *kubernetes* value from
the *Namespace.Spec.Finalizers* list.

If the *namespace controller* sees a *Namespace* whose *ObjectMeta.DeletionTimestamp* is set, and
@@ -245,13 +245,13 @@ In etcd, we want to continue to support efficient WATCH across namespaces.

Resources that persist content in etcd will have storage paths as follows:

/{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name}

This enables consumers to WATCH /registry/{resourceType} for changes across namespace of a particular {resourceType}.
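
A minimal sketch of that key layout (the prefix and resource names below are assumptions for the example):

```python
# Illustrative sketch: build the etcd storage key for a namespaced resource,
# so that watching the /{prefix}/{resourceType} prefix spans all namespaces.

def storage_path(prefix, resource_type, namespace, name):
    return "/{}/{}/{}/{}".format(prefix, resource_type, namespace, name)
```
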

### Kubelet

The kubelet will register pods it sources from a file or HTTP source with a namespace associated with the
*cluster-id*.

### Example: OpenShift Origin managing a Kubernetes Namespace
@@ -362,7 +362,7 @@ This results in the following state:

At this point, the Kubernetes *namespace controller* in its sync loop will see that the namespace
has a deletion timestamp and that its list of finalizers is empty. As a result, it knows all
content associated with that namespace has been purged. It performs a final DELETE action
to remove that Namespace from the storage.

At this point, all content associated with that Namespace, and the Namespace itself are gone.
12 changes: 6 additions & 6 deletions persistent-storage.md
@@ -41,11 +41,11 @@ Two new API kinds:

A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node. See [Persistent Volume Guide](../user-guide/persistent-volumes/) for how to use it.

A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod.

One new system component:

`PersistentVolumeClaimBinder` is a singleton running in master that watches all PersistentVolumeClaims in the system and binds them to the closest matching available PersistentVolume. The volume manager watches the API for newly created volumes to manage.

One new volume:

@@ -69,7 +69,7 @@ Cluster administrators use the API to manage *PersistentVolumes*. A custom stor

PVs are system objects and, thus, have no namespace.

Many means of dynamic provisioning will eventually be implemented for various storage types.


##### PersistentVolume API
@@ -116,7 +116,7 @@ TBD

#### Events

The implementation of persistent storage will not require events to communicate to the user the state of their claim. The CLI for bound claims contains a reference to the backing persistent volume. This is always present in the API and CLI, making an event that communicates the same information unnecessary.

Events that communicate the state of a mounted volume are left to the volume plugins.

@@ -232,9 +232,9 @@ When a claim holder is finished with their data, they can delete their claim.
$ kubectl delete pvc myclaim-1
```

The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and changing the PV's status to 'Released'.

Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
8 changes: 4 additions & 4 deletions principles.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

# Design Principles

Principles to follow when extending Kubernetes.

## API

@@ -44,14 +44,14 @@ See also the [API conventions](../devel/api-conventions.md).
* The control plane should be transparent -- there are no hidden internal APIs.
* The cost of API operations should be proportional to the number of objects intentionally operated upon. Therefore, common filtered lookups must be indexed. Beware of patterns of multiple API calls that would incur quadratic behavior.
* Object status must be 100% reconstructable by observation. Any history kept must be just an optimization and not required for correct operation.
* Cluster-wide invariants are difficult to enforce correctly. Try not to add them. If you must have them, don't enforce them atomically in master components; that is contention-prone and doesn't provide a recovery path in the case of a bug allowing the invariant to be violated. Instead, provide a series of checks to reduce the probability of a violation, and make every component involved able to recover from an invariant violation.
* Low-level APIs should be designed for control by higher-level systems. Higher-level APIs should be intent-oriented (think SLOs) rather than implementation-oriented (think control knobs).

## Control logic

* Functionality must be *level-based*, meaning the system must operate correctly given the desired state and the current/observed state, regardless of how many intermediate state updates may have been missed. Edge-triggered behavior must be just an optimization.
* Assume an open world: continually verify assumptions and gracefully adapt to external events and/or actors. Example: we allow users to kill pods under control of a replication controller; it just replaces them.
* Do not define comprehensive state machines for objects with behaviors associated with state transitions and/or "assumed" states that cannot be ascertained by observation.
* Don't assume a component's decisions will not be overridden or rejected, nor that the component will always understand why. For example, etcd may reject writes. Kubelet may reject pods. The scheduler may not be able to schedule pods. Retry, but back off and/or make alternative decisions.
* Components should be self-healing. For example, if you must keep some state (e.g., a cache), the content needs to be periodically refreshed, so that if an item does get erroneously stored or a deletion event is missed, etc., it will soon be fixed, ideally on timescales that are shorter than what will attract attention from humans.
* Component behavior should degrade gracefully. Prioritize actions so that the most important activities can continue to function even when overloaded and/or in states of partial failure.
@@ -61,7 +61,7 @@ See also the [API conventions](../devel/api-conventions.md).
* Only the apiserver should communicate with etcd/store, and not other components (scheduler, kubelet, etc.).
* Compromising a single node shouldn't compromise the cluster.
* Components should continue to do what they were last told in the absence of new instructions (e.g., due to network partition or component outage).
* All components should keep all relevant state in memory all the time. The apiserver should write through to etcd/store, other components should write through to the apiserver, and they should watch for updates made by other clients.
* Watch is preferred over polling.

## Extensibility
10 changes: 5 additions & 5 deletions resources.md
@@ -51,7 +51,7 @@ The resource model aims to be:

A Kubernetes _resource_ is something that can be requested by, allocated to, or consumed by a pod or container. Examples include memory (RAM), CPU, disk-time, and network bandwidth.

Once resources on a node have been allocated to one pod, they should not be allocated to another until that pod is removed or exits. This means that Kubernetes schedulers should ensure that the sum of the resources allocated (requested and granted) to its pods never exceeds the usable capacity of the node. Testing whether a pod will fit on a node is called _feasibility checking_.

Note that the resource model currently prohibits over-committing resources; we will want to relax that restriction later.
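
A minimal sketch of feasibility checking under that no-over-commit rule (resource names and base units are assumptions for the example):

```python
# Illustrative sketch: a pod fits on a node only if adding its requests to
# what is already allocated stays within the node's usable capacity for
# every resource type it names.

def fits(node_capacity, allocated, pod_request):
    for resource, requested in pod_request.items():
        if allocated.get(resource, 0) + requested > node_capacity.get(resource, 0):
            return False
    return True
```
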

@@ -70,7 +70,7 @@ For future reference, note that some resources, such as CPU and network bandwidt

### Resource quantities

Initially, all Kubernetes resource types are _quantitative_, and have an associated _unit_ for quantities of the associated resource (e.g., bytes for memory, bytes per second for bandwidth, instances for software licences). The units will always be a resource type's natural base units (e.g., bytes, not MB), to avoid confusion between binary and decimal multipliers and the underlying unit multiplier (e.g., is memory measured in MiB, MB, or GB?).

Resource quantities can be added and subtracted: for example, a node has a fixed quantity of each resource type that can be allocated to pods/containers; once such an allocation has been made, the allocated resources cannot be made available to other pods/containers without over-committing the resources.

@@ -110,7 +110,7 @@ resourceCapacitySpec: [
```

Where:
* _total_: the total allocatable resources of a node. Initially, the resources at a given scope will bound the sum of the resources of the inner scopes.

#### Notes

@@ -194,7 +194,7 @@ The following are planned future extensions to the resource model, included here

Because resource usage and related metrics change continuously, need to be tracked over time (i.e., historically), can be characterized in a variety of ways, and are fairly voluminous, we will not include usage in core API objects, such as [Pods](../user-guide/pods.md) and Nodes, but will provide separate APIs for accessing and managing that data. See the Appendix for possible representations of usage data, but the representation we'll use is TBD.

Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:

```yaml
resourceStatus: [
@@ -223,7 +223,7 @@ where a `<CPU-info>` or `<memory-info>` structure looks like this:
```

All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_
and predicted

## Future resource types

