diff --git a/docs/README.md b/docs/README.md index 82773825f4d89..93ad7781d6140 100644 --- a/docs/README.md +++ b/docs/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Documentation: releases.k8s.io/HEAD * The [User's guide](user-guide/README.md) is for anyone who wants to run programs and diff --git a/docs/admin/README.md b/docs/admin/README.md index 5f34cd4a852f1..69e8fa435feb6 100644 --- a/docs/admin/README.md +++ b/docs/admin/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Cluster Admin Guide The cluster admin guide is for anyone creating or administering a Kubernetes cluster. @@ -72,6 +73,7 @@ If you are modifying an existing guide which uses Salt, this document explains [ project.](salt.md). ## Upgrading a cluster + [Upgrading a cluster](cluster-management.md). ## Managing nodes diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md index 81677fa29006a..5e3a01f88712d 100644 --- a/docs/admin/accessing-the-api.md +++ b/docs/admin/accessing-the-api.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Configuring APIserver ports This document describes what ports the kubernetes apiserver @@ -42,6 +43,7 @@ in [Accessing the cluster](../user-guide/accessing-the-cluster.md). ## Ports and IPs Served On + The Kubernetes API is served by the Kubernetes APIServer process. Typically, there is one of these running on a single kubernetes-master node. @@ -93,6 +95,7 @@ variety of uses cases: setup time. Kubelets use cert-based auth, while kube-proxy uses token-based auth. ## Expected changes + - Policy will limit the actions kubelets can do via the authed port. - Scheduler and Controller-manager will use the Secure Port too. They will then be able to run on different machines than the apiserver. diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md index b175f025970fb..3c4426148f181 100644 --- a/docs/admin/admission-controllers.md +++ b/docs/admin/admission-controllers.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Admission Controllers **Table of Contents** diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index d29361092ad74..9eddb47ff14f7 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Authentication Plugins Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls. diff --git a/docs/admin/authorization.md b/docs/admin/authorization.md index decc2fa4c7cb8..aef726273e769 100644 --- a/docs/admin/authorization.md +++ b/docs/admin/authorization.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Authorization Plugins @@ -53,6 +54,7 @@ The following implementations are available, and are selected by flag: `ABAC` allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control. ## ABAC Mode + ### Request Attributes A request has 4 attributes that can be considered for authorization: @@ -105,6 +107,7 @@ To permit any user to do something, write a policy with the user property unset. To permit an action Policy with an unset namespace applies regardless of namespace. ### Examples + 1. Alice can do anything: `{"user":"alice"}` 2. Kubelet can read any pods: `{"user":"kubelet", "resource": "pods", "readonly": true}` 3. 
Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}` diff --git a/docs/admin/cluster-components.md b/docs/admin/cluster-components.md index 766af21c57713..fff82caa38e1c 100644 --- a/docs/admin/cluster-components.md +++ b/docs/admin/cluster-components.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Cluster Admin Guide: Cluster Components This document outlines the various binary components that need to run to @@ -92,6 +93,7 @@ These controllers include: selects a node for them to run on. ### addons + Addons are pods and services that implement cluster features. They don't run on the master VM, but currently the default setup scripts that make the API calls to create these pods and services does run on the master VM. See: diff --git a/docs/admin/cluster-large.md b/docs/admin/cluster-large.md index dbedc13f143fb..54883000243f5 100644 --- a/docs/admin/cluster-large.md +++ b/docs/admin/cluster-large.md @@ -30,9 +30,11 @@ Documentation for other releases can be found at + # Kubernetes Large Cluster ## Support + At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 container per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)). ## Setup @@ -59,6 +61,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with * Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs. ### Addon Resources + To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)). For example: diff --git a/docs/admin/cluster-management.md b/docs/admin/cluster-management.md index e75ed5c340ac9..bfe93922d97ee 100644 --- a/docs/admin/cluster-management.md +++ b/docs/admin/cluster-management.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Cluster Management This doc is in progress. diff --git a/docs/admin/cluster-troubleshooting.md b/docs/admin/cluster-troubleshooting.md index 9348d63b74737..744241033fd11 100644 --- a/docs/admin/cluster-troubleshooting.md +++ b/docs/admin/cluster-troubleshooting.md @@ -30,13 +30,16 @@ Documentation for other releases can be found at + # Cluster Troubleshooting + This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the problem you are experiencing. See the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging. You may also visit [troubleshooting document](../troubleshooting.md) for more information. ## Listing your cluster + The first thing to debug in your cluster is if your nodes are all registered correctly. Run @@ -48,15 +51,18 @@ kubectl get nodes And verify that all of the nodes you expect to see are present and that they are all in the ```Ready``` state. ## Looking at logs + For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
(note that on systemd-based systems, you may need to use ```journalctl``` instead) ### Master + * /var/log/kube-apiserver.log - API Server, responsible for serving the API * /var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions * /var/log/kube-controller-manager.log - Controller that manages replication controllers ### Worker Nodes + * /var/log/kubelet.log - Kubelet, responsible for running containers on the node * /var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing diff --git a/docs/admin/dns.md b/docs/admin/dns.md index c6a5985813f65..7e9e1303be0d7 100644 --- a/docs/admin/dns.md +++ b/docs/admin/dns.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # DNS Integration with Kubernetes As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md). diff --git a/docs/admin/high-availability.md b/docs/admin/high-availability.md index 9c31d7144b617..789233a99c898 100644 --- a/docs/admin/high-availability.md +++ b/docs/admin/high-availability.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # High Availability Kubernetes Clusters **Table of Contents** @@ -43,6 +44,7 @@ Documentation for other releases can be found at ## Introduction + This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic. Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.md), @@ -52,6 +54,7 @@ Also, at this time high availability support for Kubernetes is not continuously be working to add this continuous testing, but for now the single-node master installations are more heavily tested. ## Overview + Setting up a truly reliable, highly available distributed system requires a number of steps, it is akin to wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each of these steps in detail, but a summary is given here to help guide and orient the user. @@ -68,6 +71,7 @@ Here's what the system should look like when it's finished: Ready? Let's get started. ## Initial set-up + The remainder of this guide assumes that you are setting up a 3-node clustered master, where each machine is running some flavor of Linux. Examples in the guide are given for Debian distributions, but they should be easily adaptable to other distributions. Likewise, this set up should work whether you are running in a public or private cloud provider, or if you are running @@ -78,6 +82,7 @@ instructions at [https://get.k8s.io](https://get.k8s.io) describe easy installation for single-master clusters on a variety of platforms. ## Reliable nodes + On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use the ```kubelet``` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can @@ -98,6 +103,7 @@ On systemd systems you ```systemctl enable kubelet``` and ```systemctl enable do ## Establishing a redundant, reliable data storage layer + The central foundation of a highly available solution is a redundant, reliable storage layer. 
The number one rule of high-availability is to protect the data. Whatever else happens, whatever catches on fire, if you have the data, you can rebuild. If you lose the data, you're done. @@ -109,6 +115,7 @@ size of the cluster from three to five nodes. If that is still insufficient, yo [even more redundancy to your storage layer](#even-more-reliable-storage). ### Clustering etcd + The full details of clustering etcd are beyond the scope of this document, lots of details are given on the [etcd clustering page](https://github.com/coreos/etcd/blob/master/Documentation/clustering.md). This example walks through a simple cluster set up, using etcd's built in discovery to build our cluster. @@ -130,6 +137,7 @@ for ```${NODE_IP}``` on each machine. #### Validating your cluster + Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with ``` @@ -146,6 +154,7 @@ You can also validate that this is working with ```etcdctl set foo bar``` on one on a different node. ### Even more reliable storage + Of course, if you are interested in increased data reliability, there are further options which makes the place where etcd installs it's data even more reliable than regular disks (belts *and* suspenders, ftw!). @@ -162,9 +171,11 @@ for each node. Throughout these instructions, we assume that this storage is mo ## Replicated API Servers + Once you have replicated etcd set up correctly, we will also install the apiserver using the kubelet. ### Installing configuration files + First you need to create the initial log file, so that Docker mounts a file instead of a directory: ``` @@ -183,12 +194,14 @@ Next, you need to create a ```/srv/kubernetes/``` directory on each node. This The easiest way to create this directory, may be to copy it from the master node of a working cluster, or you can manually generate these files yourself. ### Starting the API Server + Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into ```/etc/kubernetes/manifests/``` on each master node. The kubelet monitors this directory, and will automatically create an instance of the ```kube-apiserver``` container using the pod definition specified in the file. ### Load balancing + At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting up a load balancer will depend on the specifics of your platform, for example instructions for the Google Cloud @@ -203,6 +216,7 @@ For external users of the API (e.g. the ```kubectl``` command line interface, co them to talk to the external load balancer's IP address. ## Master elected components + So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform @@ -226,6 +240,7 @@ by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kub directory. 
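For readers unfamiliar with the lease-lock idea mentioned above, here is a minimal, illustrative sketch of the pattern the podmaster relies on, expressed against the etcd v2 HTTP API. The key name, TTL, and endpoint below are made up for the example (and assume etcd is reachable on its default client port); the real podmaster manages its own keys:

```
# Try to create the lock key only if it does not already exist (prevExist=false)
# and attach a TTL. The node whose request succeeds holds the lease and must
# refresh it before the TTL expires; if that node dies, the key expires and
# another node can acquire the lock and start the component.
curl -s -XPUT "http://127.0.0.1:4001/v2/keys/example/scheduler-lock?prevExist=false" \
  -d value="$(hostname)" -d ttl=30
```
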
### Running the podmaster + Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into ```/etc/kubernetes/manifests/``` As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in ```podmaster.yaml```. @@ -236,6 +251,7 @@ the kubelet will restart them. If any of these nodes fail, the process will mov node. ## Conclusion + At this point, you are done (yeah!) with the master components, but you still need to add worker nodes (boo!). If you have an existing cluster, this is as simple as reconfiguring your kubelets to talk to the load-balanced endpoint, and @@ -244,7 +260,7 @@ restarting the kubelets on each node. If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and set the ```--apiserver``` flag to your replicated endpoint. -##Vagrant up! +## Vagrant up! We indeed have an initial proof of concept tester for this, which is available [here](../../examples/high-availability/). diff --git a/docs/admin/kube-apiserver.md b/docs/admin/kube-apiserver.md index d7ba54c33d876..7b4bea16e0415 100644 --- a/docs/admin/kube-apiserver.md +++ b/docs/admin/kube-apiserver.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kube-apiserver diff --git a/docs/admin/kube-controller-manager.md b/docs/admin/kube-controller-manager.md index 45446efbf6e18..7439d4f6c05c4 100644 --- a/docs/admin/kube-controller-manager.md +++ b/docs/admin/kube-controller-manager.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kube-controller-manager diff --git a/docs/admin/kube-proxy.md b/docs/admin/kube-proxy.md index 894bb9a3f7441..15928a1bbc28c 100644 --- a/docs/admin/kube-proxy.md +++ b/docs/admin/kube-proxy.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kube-proxy diff --git a/docs/admin/kube-scheduler.md b/docs/admin/kube-scheduler.md index 4418c662b58bf..f52bdcfc6a5ae 100644 --- a/docs/admin/kube-scheduler.md +++ b/docs/admin/kube-scheduler.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kube-scheduler diff --git a/docs/admin/kubelet.md b/docs/admin/kubelet.md index 43e3d29d1ead7..874f0359c0f0a 100644 --- a/docs/admin/kubelet.md +++ b/docs/admin/kubelet.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubelet diff --git a/docs/admin/multi-cluster.md b/docs/admin/multi-cluster.md index 6b5f54d24b464..e2bed79ec9193 100644 --- a/docs/admin/multi-cluster.md +++ b/docs/admin/multi-cluster.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Considerations for running multiple Kubernetes clusters You may want to set up multiple kubernetes clusters, both to @@ -65,6 +66,7 @@ Reasons to have multiple clusters include: - test clusters to canary new Kubernetes releases or other cluster software. ## Selecting the right number of clusters + The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally. By contrast, the number of nodes in a cluster and the number of pods in a service may be change frequently according to load and growth. 
diff --git a/docs/admin/networking.md b/docs/admin/networking.md index 9e24b0098bbb9..427a97150e911 100644 --- a/docs/admin/networking.md +++ b/docs/admin/networking.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Networking in Kubernetes **Table of Contents** diff --git a/docs/admin/node.md b/docs/admin/node.md index cfdf9bcbd1b8f..06574dc0c3955 100644 --- a/docs/admin/node.md +++ b/docs/admin/node.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Node **Table of Contents** diff --git a/docs/admin/ovs-networking.md b/docs/admin/ovs-networking.md index d6627285adf2c..c8c0b8dcb5fad 100644 --- a/docs/admin/ovs-networking.md +++ b/docs/admin/ovs-networking.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes OpenVSwitch GRE/VxLAN networking This document describes how OpenVSwitch is used to setup networking between pods across nodes. diff --git a/docs/admin/resource-quota.md b/docs/admin/resource-quota.md index 41765329ff5da..21bec1eb75b02 100644 --- a/docs/admin/resource-quota.md +++ b/docs/admin/resource-quota.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Administering Resource Quotas Kubernetes can limit both the number of objects created in a namespace, and the @@ -49,7 +50,8 @@ Resource Quota is enforced in a particular namespace when there is a See [ResourceQuota design doc](../design/admission_control_resource_quota.md) for more information. -## Object Count Quota +## Object Count Quota + The number of objects of a given type can be restricted. The following types are supported: @@ -65,7 +67,8 @@ are supported: For example, `pods` quota counts and enforces a maximum on the number of `pods` created in a single namespace. -## Compute Resource Quota +## Compute Resource Quota + The total number of objects of a given type can be restricted. The following types are supported: @@ -83,6 +86,7 @@ Any resource that is not part of core Kubernetes must follow the resource naming This means the resource must have a fully-qualified name (i.e. mycompany.org/shinynewresource) ## Viewing and Setting Quotas + Kubectl supports creating, updating, and viewing quotas ``` @@ -123,6 +127,7 @@ services 3 5 ``` ## Quota and Cluster Capacity + Resource Quota objects are independent of the Cluster Capacity. They are expressed in absolute units. @@ -136,6 +141,7 @@ writing a 'controller' which watches the quota usage and adjusts the quota hard limits of each namespace. ## Example + See a [detailed example for how to use resource quota](../user-guide/resourcequota/). diff --git a/docs/admin/salt.md b/docs/admin/salt.md index f72038b8adf14..5807c28117111 100644 --- a/docs/admin/salt.md +++ b/docs/admin/salt.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Using Salt to configure Kubernetes The Kubernetes cluster can be configured using Salt. diff --git a/docs/admin/service-accounts-admin.md b/docs/admin/service-accounts-admin.md index 59d84c7c4c5ee..defa272a006cc 100644 --- a/docs/admin/service-accounts-admin.md +++ b/docs/admin/service-accounts-admin.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Cluster Admin Guide to Service Accounts *This is a Cluster Administrator guide to service accounts. It assumes knowledge of @@ -57,7 +58,7 @@ for a number of reasons: accounts for components of that system. Because service accounts can be created ad-hoc and have namespaced names, such config is portable. 
-## Service account automation +## Service account automation Three separate components cooperate to implement the automation around service accounts: - A Service account admission controller @@ -78,6 +79,7 @@ It acts synchronously to modify pods as they are created or updated. When this p 6. It adds a `volumeSource` to each container of the pod mounted at `/var/run/secrets/kubernetes.io/serviceaccount`. ### Token Controller + TokenController runs as part of controller-manager. It acts asynchronously. It: - observes serviceAccount creation and creates a corresponding Secret to allow API access. - observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets diff --git a/docs/api.md b/docs/api.md index b97d845074996..6e1bf443ab535 100644 --- a/docs/api.md +++ b/docs/api.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # The Kubernetes API Primary system and API concepts are documented in the [User guide](user-guide/README.md). diff --git a/docs/design/README.md b/docs/design/README.md index b0f3115aee1b2..62946cb6f5b74 100644 --- a/docs/design/README.md +++ b/docs/design/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Design Overview Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications. diff --git a/docs/design/access.md b/docs/design/access.md index e42d78597fcb0..9a0c0d3dc66da 100644 --- a/docs/design/access.md +++ b/docs/design/access.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # K8s Identity and Access Management Sketch This document suggests a direction for identity and access management in the Kubernetes system. @@ -43,6 +44,7 @@ High level goals are: - Ease integration with existing enterprise and hosted scenarios. ### Actors + Each of these can act as normal users or attackers. - External Users: People who are accessing applications running on K8s (e.g. a web site served by webserver running in a container on K8s), but who do not have K8s API access. - K8s Users : People who access the K8s API (e.g. create K8s API objects like Pods) @@ -51,6 +53,7 @@ Each of these can act as normal users or attackers. - K8s Admin means K8s Cluster Admins and K8s Project Admins taken together. ### Threats + Both intentional attacks and accidental use of privilege are concerns. For both cases it may be useful to think about these categories differently: @@ -81,6 +84,7 @@ K8s Cluster assets: This document is primarily about protecting K8s User assets and K8s cluster assets from other K8s Users and K8s Project and Cluster Admins. ### Usage environments + Cluster in Small organization: - K8s Admins may be the same people as K8s Users. - few K8s Admins. @@ -112,6 +116,7 @@ Pods configs should be largely portable between Org-run and hosted configuration # Design + Related discussion: - https://github.com/GoogleCloudPlatform/kubernetes/issues/442 - https://github.com/GoogleCloudPlatform/kubernetes/issues/443 @@ -125,7 +130,9 @@ K8s distribution should include templates of config, and documentation, for simp Features in this doc are divided into "Initial Feature", and "Improvements". Initial features would be candidates for version 1.00. ## Identity -###userAccount + +### userAccount + K8s will have a `userAccount` API object. - `userAccount` has a UID which is immutable. This is used to associate users with objects and to record actions in audit logs. 
- `userAccount` has a name which is a string and human readable and unique among userAccounts. It is used to refer to users in Policies, to ensure that the Policies are human readable. It can be changed only when there are no Policy objects or other objects which refer to that name. An email address is a suggested format for this field. @@ -158,7 +165,8 @@ Enterprise Profile: - each service using the API has own `userAccount` too. (e.g. `scheduler`, `repcontroller`) - automated jobs to denormalize the ldap group info into the local system list of users into the K8s userAccount file. -###Unix accounts +### Unix accounts + A `userAccount` is not a Unix user account. The fact that a pod is started by a `userAccount` does not mean that the processes in that pod's containers run as a Unix user with a corresponding name or identity. Initially: @@ -170,7 +178,8 @@ Improvements: - requires docker to integrate user namespace support, and deciding what getpwnam() does for these uids. - any features that help users avoid use of privileged containers (https://github.com/GoogleCloudPlatform/kubernetes/issues/391) -###Namespaces +### Namespaces + K8s will have a have a `namespace` API object. It is similar to a Google Compute Engine `project`. It provides a namespace for objects created by a group of people co-operating together, preventing name collisions with non-cooperating groups. It also serves as a reference point for authorization policies. Namespaces are described in [namespaces.md](namespaces.md). diff --git a/docs/design/admission_control.md b/docs/design/admission_control.md index aaa6ed164b2ca..c75d55359cdaa 100644 --- a/docs/design/admission_control.md +++ b/docs/design/admission_control.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Proposal - Admission Control **Related PR:** diff --git a/docs/design/admission_control_limit_range.md b/docs/design/admission_control_limit_range.md index 90329815a64fe..ccdb44d88704b 100644 --- a/docs/design/admission_control_limit_range.md +++ b/docs/design/admission_control_limit_range.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Admission control plugin: LimitRanger ## Background @@ -164,6 +165,7 @@ It is expected we will want to define limits for particular pods or containers b To make a **LimitRangeItem** more restrictive, we will intend to add these additional restrictions at a future point in time. ## Example + See the [example of Limit Range](../user-guide/limitrange/) for more information. diff --git a/docs/design/admission_control_resource_quota.md b/docs/design/admission_control_resource_quota.md index d5cdc9a15c7da..99d5431a157dc 100644 --- a/docs/design/admission_control_resource_quota.md +++ b/docs/design/admission_control_resource_quota.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Admission control plugin: ResourceQuota ## Background @@ -185,6 +186,7 @@ services 3 5 ``` ## More information + See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more information. diff --git a/docs/design/architecture.md b/docs/design/architecture.md index 2e4afc622c2e8..f7c5517198ea6 100644 --- a/docs/design/architecture.md +++ b/docs/design/architecture.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes architecture A running Kubernetes cluster contains node agents (kubelet) and master components (APIs, scheduler, etc), on top of a distributed storage solution. 
This diagram shows our desired eventual state, though we're still working on a few things, like making kubelet itself (all our components, really) run within containers, and making the scheduler 100% pluggable. @@ -45,6 +46,7 @@ The Kubernetes node has the services necessary to run application containers and Each node runs Docker, of course. Docker takes care of the details of downloading images and running containers. ### Kubelet + The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc. ### Kube-Proxy diff --git a/docs/design/clustering.md b/docs/design/clustering.md index 8673284f4a0a3..1fcb8aa325bda 100644 --- a/docs/design/clustering.md +++ b/docs/design/clustering.md @@ -30,10 +30,12 @@ Documentation for other releases can be found at + # Clustering in Kubernetes ## Overview + The term "clustering" refers to the process of having all members of the kubernetes cluster find and trust each other. There are multiple different ways to achieve clustering with different security and usability profiles. This document attempts to lay out the user experiences for clustering that Kubernetes aims to address. Once a cluster is established, the following is true: diff --git a/docs/design/clustering/README.md b/docs/design/clustering/README.md index f05168d667c51..53649a31b4970 100644 --- a/docs/design/clustering/README.md +++ b/docs/design/clustering/README.md @@ -41,6 +41,7 @@ pip install seqdiag Just call `make` to regenerate the diagrams. ## Building with Docker + If you are on a Mac or your pip install is messed up, you can easily build with docker. ``` diff --git a/docs/design/command_execution_port_forwarding.md b/docs/design/command_execution_port_forwarding.md index c7408b58bd325..1d319adf7c524 100644 --- a/docs/design/command_execution_port_forwarding.md +++ b/docs/design/command_execution_port_forwarding.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Container Command Execution & Port Forwarding in Kubernetes ## Abstract @@ -87,12 +88,14 @@ won't be able to work with this mechanism, unless adapters can be written. ## Process Flow ### Remote Command Execution Flow + 1. The client connects to the Kubernetes Master to initiate a remote command execution request 2. The Master proxies the request to the Kubelet where the container lives 3. The Kubelet executes nsenter + the requested command and streams stdin/stdout/stderr back and forth between the client and the container ### Port Forwarding Flow + 1. The client connects to the Kubernetes Master to initiate a remote command execution request 2. The Master proxies the request to the Kubelet where the container lives diff --git a/docs/design/event_compression.md b/docs/design/event_compression.md index af823972bd61c..29e659170fdfa 100644 --- a/docs/design/event_compression.md +++ b/docs/design/event_compression.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Event Compression This document captures the design of event compression. @@ -40,11 +41,13 @@ This document captures the design of event compression. Kubernetes components can get into a state where they generate tons of events which are identical except for the timestamp. For example, when pulling a non-existing image, Kubelet will repeatedly generate ```image_not_existing``` and ```container_is_waiting``` events until upstream components correct the image. When this happens, the spam from the repeated events makes the entire event mechanism useless. 
It also appears to cause memory pressure in etcd (see [#3853](https://github.com/GoogleCloudPlatform/kubernetes/issues/3853)). ## Proposal + Each binary that generates events (for example, ```kubelet```) should keep track of previously generated events so that it can collapse recurring events into a single event instead of creating a new instance for each new event. Event compression should be best effort (not guaranteed). Meaning, in the worst case, ```n``` identical (minus timestamp) events may still result in ```n``` event entries. ## Design + Instead of a single Timestamp, each event object [contains](../../pkg/api/types.go#L1111) the following fields: * ```FirstTimestamp util.Time``` * The date/time of the first occurrence of the event. @@ -78,11 +81,13 @@ Each binary that generates events: * An entry for the event is also added to the previously generated events cache. ## Issues/Risks + * Compression is not guaranteed, because each component keeps track of event history in memory * An application restart causes event history to be cleared, meaning event history is not preserved across application restarts and compression will not occur across component restarts. * Because an LRU cache is used to keep track of previously generated events, if too many unique events are generated, old events will be evicted from the cache, so events will only be compressed until they age out of the events cache, at which point any new instance of the event will cause a new entry to be created in etcd. ## Example + Sample kubectl output ``` @@ -104,6 +109,7 @@ Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 This demonstrates what would have been 20 separate entries (indicating scheduling failure) collapsed/compressed down to 5 entries. ## Related Pull Requests/Issues + * Issue [#4073](https://github.com/GoogleCloudPlatform/kubernetes/issues/4073): Compress duplicate events * PR [#4157](https://github.com/GoogleCloudPlatform/kubernetes/issues/4157): Add "Update Event" to Kubernetes API * PR [#4206](https://github.com/GoogleCloudPlatform/kubernetes/issues/4206): Modify Event struct to allow compressing multiple recurring events in to a single event diff --git a/docs/design/expansion.md b/docs/design/expansion.md index 5cc08c6cda2ae..096b8a9d8ac74 100644 --- a/docs/design/expansion.md +++ b/docs/design/expansion.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Variable expansion in pod command, args, and env ## Abstract diff --git a/docs/design/identifiers.md b/docs/design/identifiers.md index eda7254be6914..9e2699936e6ad 100644 --- a/docs/design/identifiers.md +++ b/docs/design/identifiers.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Identifiers and Names in Kubernetes A summarization of the goals and recommendations for identifiers in Kubernetes. Described in [GitHub issue #199](https://github.com/GoogleCloudPlatform/kubernetes/issues/199). 
diff --git a/docs/design/namespaces.md b/docs/design/namespaces.md index 7bd7ab67b6357..1f1a767c6c433 100644 --- a/docs/design/namespaces.md +++ b/docs/design/namespaces.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Namespaces ## Abstract diff --git a/docs/design/networking.md b/docs/design/networking.md index ac6e5794627af..d7822d4d85f48 100644 --- a/docs/design/networking.md +++ b/docs/design/networking.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Networking There are 4 distinct networking problems to solve: diff --git a/docs/design/persistent-storage.md b/docs/design/persistent-storage.md index f919baa9e7697..3e9edd3ef8167 100644 --- a/docs/design/persistent-storage.md +++ b/docs/design/persistent-storage.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Persistent Storage This document proposes a model for managing persistent, cluster-scoped storage for applications requiring long lived data. diff --git a/docs/design/principles.md b/docs/design/principles.md index 1ae3bc3a13317..c208fb6b46807 100644 --- a/docs/design/principles.md +++ b/docs/design/principles.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Design Principles Principles to follow when extending Kubernetes. diff --git a/docs/design/resources.md b/docs/design/resources.md index 2effb5cf5a019..055c5d86ed506 100644 --- a/docs/design/resources.md +++ b/docs/design/resources.md @@ -48,6 +48,7 @@ The resource model aims to be: * precise, to avoid misunderstandings and promote pod portability. ## The resource model + A Kubernetes _resource_ is something that can be requested by, allocated to, or consumed by a pod or container. Examples include memory (RAM), CPU, disk-time, and network bandwidth. Once resources on a node have been allocated to one pod, they should not be allocated to another until that pod is removed or exits. This means that Kubernetes schedulers should ensure that the sum of the resources allocated (requested and granted) to its pods never exceeds the usable capacity of the node. Testing whether a pod will fit on a node is called _feasibility checking_. @@ -124,9 +125,11 @@ Where: ## Kubernetes-defined resource types + The following resource types are predefined ("reserved") by Kubernetes in the `kubernetes.io` namespace, and so cannot be used for user-defined resources. Note that the syntax of all resource types in the resource spec is deliberately similar, but some resource types (e.g., CPU) may receive significantly more support than simply tracking quantities in the schedulers and/or the Kubelet. ### Processor cycles + * Name: `cpu` (or `kubernetes.io/cpu`) * Units: Kubernetes Compute Unit seconds/second (i.e., CPU cores normalized to a canonical "Kubernetes CPU") * Internal representation: milli-KCUs @@ -141,6 +144,7 @@ Note that requesting 2 KCU won't guarantee that precisely 2 physical cores will ### Memory + * Name: `memory` (or `kubernetes.io/memory`) * Units: bytes * Compressible? no (at least initially) @@ -152,6 +156,7 @@ rather than decimal ones: "64MiB" rather than "64MB". ## Resource metadata + A resource type may have an associated read-only ResourceType structure, that contains metadata about the type. For example: ``` @@ -222,16 +227,19 @@ and predicted ## Future resource types ### _[future] Network bandwidth_ + * Name: "network-bandwidth" (or `kubernetes.io/network-bandwidth`) * Units: bytes per second * Compressible? 
yes ### _[future] Network operations_ + * Name: "network-iops" (or `kubernetes.io/network-iops`) * Units: operations (messages) per second * Compressible? yes ### _[future] Storage space_ + * Name: "storage-space" (or `kubernetes.io/storage-space`) * Units: bytes * Compressible? no @@ -239,6 +247,7 @@ and predicted The amount of secondary storage space available to a container. The main target is local disk drives and SSDs, although this could also be used to qualify remotely-mounted volumes. Specifying whether a resource is a raw disk, an SSD, a disk array, or a file system fronting any of these, is left for future work. ### _[future] Storage time_ + * Name: storage-time (or `kubernetes.io/storage-time`) * Units: seconds per second of disk time * Internal representation: milli-units @@ -247,6 +256,7 @@ The amount of secondary storage space available to a container. The main target This is the amount of time a container spends accessing disk, including actuator and transfer time. A standard disk drive provides 1.0 diskTime seconds per second. ### _[future] Storage operations_ + * Name: "storage-iops" (or `kubernetes.io/storage-iops`) * Units: operations per second * Compressible? yes diff --git a/docs/design/security.md b/docs/design/security.md index 2989148bbd4e3..522ff4ca5feed 100644 --- a/docs/design/security.md +++ b/docs/design/security.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Security in Kubernetes Kubernetes should define a reasonable set of security best practices that allows processes to be isolated from each other, from the cluster infrastructure, and which preserves important boundaries between those who manage the cluster, and those who use the cluster. diff --git a/docs/design/security_context.md b/docs/design/security_context.md index bc76495a34acb..03213927ece18 100644 --- a/docs/design/security_context.md +++ b/docs/design/security_context.md @@ -30,8 +30,11 @@ Documentation for other releases can be found at + # Security Contexts + ## Abstract + A security context is a set of constraints that are applied to a container in order to achieve the following goals (from [security design](security.md)): 1. Ensure a clear isolation between container and the underlying host it runs on @@ -53,11 +56,13 @@ to the container process. Support for user namespaces has recently been [merged](https://github.com/docker/libcontainer/pull/304) into Docker's libcontainer project and should soon surface in Docker itself. It will make it possible to assign a range of unprivileged uids and gids from the host to each container, improving the isolation between host and container and between containers. ### External integration with shared storage + In order to support external integration with shared storage, processes running in a Kubernetes cluster should be able to be uniquely identified by their Unix UID, such that a chain of ownership can be established. Processes in pods will need to have consistent UID/GID/SELinux category labels in order to access shared disks. ## Constraints and Assumptions + * It is out of the scope of this document to prescribe a specific set of constraints to isolate containers from their host. Different use cases need different settings. @@ -96,6 +101,7 @@ be addressed with security contexts: ## Proposed Design ### Overview + A *security context* consists of a set of constraints that determine how a container is secured before getting created and run. 
A security context resides on the container and represents the runtime parameters that will be used to create and run the container via container APIs. A *security context provider* is passed to the Kubelet so it can have a chance diff --git a/docs/design/service_accounts.md b/docs/design/service_accounts.md index c6acbd2480393..d9535de5acd3f 100644 --- a/docs/design/service_accounts.md +++ b/docs/design/service_accounts.md @@ -30,7 +30,8 @@ Documentation for other releases can be found at -#Service Accounts + +# Service Accounts ## Motivation @@ -50,6 +51,7 @@ They also may interact with services other than the Kubernetes API, such as: - accessing files in an NFS volume attached to the pod ## Design Overview + A service account binds together several things: - a *name*, understood by users, and perhaps by peripheral systems, for an identity - a *principal* that can be authenticated and [authorized](../admin/authorization.md) @@ -137,6 +139,7 @@ are added to the map of tokens used by the authentication process in the apiserv might have some types that do not do anything on apiserver but just get pushed to the kubelet.) ### Pods + The `PodSpec` is extended to have a `Pods.Spec.ServiceAccountUsername` field. If this is unset, then a default value is chosen. If it is set, then the corresponding value of `Pods.Spec.SecurityContext` is set by the Service Account Finalizer (see below). @@ -144,6 +147,7 @@ Service Account Finalizer (see below). TBD: how policy limits which users can make pods with which service accounts. ### Authorization + Kubernetes API Authorization Policies refer to users. Pods created with a `Pods.Spec.ServiceAccountUsername` typically get a `Secret` which allows them to authenticate to the Kubernetes APIserver as a particular user. So any policy that is desired can be applied to them. diff --git a/docs/design/simple-rolling-update.md b/docs/design/simple-rolling-update.md index b142c6e513463..80bc656666d14 100644 --- a/docs/design/simple-rolling-update.md +++ b/docs/design/simple-rolling-update.md @@ -30,12 +30,15 @@ Documentation for other releases can be found at + ## Simple rolling update + This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in ```kubectl```. Complete execution flow can be found [here](#execution-details). See the [example of rolling update](../user-guide/update-demo/) for more information. ### Lightweight rollout + Assume that we have a current replication controller named ```foo``` and it is running image ```image:v1``` ```kubectl rolling-update foo [foo-v2] --image=myimage:v2``` @@ -51,6 +54,7 @@ and the old 'foo' replication controller is deleted. For the purposes of the ro The value of that label is the hash of the complete JSON representation of the```foo-next``` or```foo``` replication controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag. #### Recovery + If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out. 
To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the ```kubernetes.io/``` annotation namespace: * ```desired-replicas``` The desired number of replicas for this replication controller (either N or zero) @@ -68,6 +72,7 @@ it is assumed that the rollout is nearly completed, and ```foo-next``` is rename ### Aborting a rollout + Abort is assumed to want to reverse a rollout in progress. ```kubectl rolling-update foo [foo-v2] --rollback``` @@ -87,6 +92,7 @@ If the user doesn't specify a ```foo-next``` name, then it is either discovered then ```foo-next``` is synthesized using the pattern ```-``` #### Initialization + * If ```foo``` and ```foo-next``` do not exist: * Exit, and indicate an error to the user, that the specified controller doesn't exist. * If ```foo``` exists, but ```foo-next``` does not: @@ -102,6 +108,7 @@ then ```foo-next``` is synthesized using the pattern ```- 0 @@ -109,11 +116,13 @@ then ```foo-next``` is synthesized using the pattern ```- + # Kubernetes API and Release Versioning Legend: diff --git a/docs/devel/README.md b/docs/devel/README.md index a06efc8d96b95..9a73d949fb7ad 100644 --- a/docs/devel/README.md +++ b/docs/devel/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Developer Guide The developer guide is for anyone wanting to either write code which directly accesses the diff --git a/docs/devel/api-conventions.md b/docs/devel/api-conventions.md index 323cde416434f..7f46d5be08109 100644 --- a/docs/devel/api-conventions.md +++ b/docs/devel/api-conventions.md @@ -455,6 +455,7 @@ The following HTTP status codes may be returned by the API. * Returned in response to HTTP OPTIONS requests. #### Error codes + * `307 StatusTemporaryRedirect` * Indicates that the address for the requested resource has changed. * Suggested client recovery behavior diff --git a/docs/devel/api_changes.md b/docs/devel/api_changes.md index edf227cc49cb7..7a0418e83b29f 100644 --- a/docs/devel/api_changes.md +++ b/docs/devel/api_changes.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # So you want to change the API? The Kubernetes API has two major components - the internal structures and @@ -365,6 +366,7 @@ $ hack/update-swagger-spec.sh The API spec changes should be in a commit separate from your other changes. ## Incompatible API changes + If your change is going to be backward incompatible or might be a breaking change for API consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before the change gets in. If you are unsure, ask. 
Also make sure that the change gets documented in diff --git a/docs/devel/cherry-picks.md b/docs/devel/cherry-picks.md index 1d59eaef0ebf6..7ed63d088acef 100644 --- a/docs/devel/cherry-picks.md +++ b/docs/devel/cherry-picks.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Overview This document explains cherry picks are managed on release branches within the diff --git a/docs/devel/cli-roadmap.md b/docs/devel/cli-roadmap.md index 45c2682751b53..00b454fa9da3c 100644 --- a/docs/devel/cli-roadmap.md +++ b/docs/devel/cli-roadmap.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes CLI/Configuration Roadmap See also issues with the following labels: diff --git a/docs/devel/client-libraries.md b/docs/devel/client-libraries.md index ae7cb6236fd2a..69cba1e697d6e 100644 --- a/docs/devel/client-libraries.md +++ b/docs/devel/client-libraries.md @@ -30,12 +30,15 @@ Documentation for other releases can be found at + ## kubernetes API client libraries ### Supported + * [Go](../../pkg/client/) ### User Contributed + *Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team* * [Java (OSGI)](https://bitbucket.org/amdatulabs/amdatu-kubernetes) diff --git a/docs/devel/collab.md b/docs/devel/collab.md index 38b6d586b8a71..96db64c85f3a7 100644 --- a/docs/devel/collab.md +++ b/docs/devel/collab.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # On Collaborative Development Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. diff --git a/docs/devel/developer-guides/vagrant.md b/docs/devel/developer-guides/vagrant.md index 2b6fcc42fe778..e704bf3bd52c3 100644 --- a/docs/devel/developer-guides/vagrant.md +++ b/docs/devel/developer-guides/vagrant.md @@ -30,11 +30,13 @@ Documentation for other releases can be found at + ## Getting started with Vagrant Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). ### Prerequisites + 1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html 2. Install one of: 1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads @@ -371,6 +373,7 @@ export KUBERNETES_MINION_MEMORY=2048 ``` #### I ran vagrant suspend and nothing works! + ```vagrant suspend``` seems to mess up the network. It's not supported at this time. diff --git a/docs/devel/development.md b/docs/devel/development.md index e258f841d996c..6822ab5e18bf5 100644 --- a/docs/devel/development.md +++ b/docs/devel/development.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Development Guide # Releases and Official Builds @@ -45,6 +46,7 @@ Kubernetes is written in [Go](http://golang.org) programming language. If you ha Below, we outline one of the more common git workflows that core developers use. Other git workflows are also valid. ### Visual overview + ![Git workflow](git_workflow.png) ### Fork the main repository @@ -93,6 +95,7 @@ $ git push -f origin myfeature ``` ### Creating a pull request + 1. Visit http://github.com/$YOUR_GITHUB_USERNAME/kubernetes 2. Click the "Compare and pull request" button next to your "myfeature" branch. 
@@ -102,6 +105,7 @@ $ git push -f origin myfeature Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. It is not strictly required for building Kubernetes but it is required when managing dependencies under the Godeps/ tree, and is required by a number of the build and test scripts. Please make sure that ``godep`` is installed and in your ``$PATH``. ### Installing godep + There are many ways to build and host go binaries. Here is an easy way to get utilities like ```godep``` installed: 1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial @@ -124,6 +128,7 @@ export PATH=$PATH:$GOPATH/bin ``` ### Using godep + Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). 1) Devote a directory to this endeavor: @@ -259,6 +264,7 @@ go run hack/e2e.go --down ``` ### Flag options + See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster, here is an overview: ```sh @@ -309,6 +315,7 @@ go run hack/e2e.go -v -ctl='delete pod foobar' ``` ## Conformance testing + End-to-end testing, as described above, is for [development distributions](writing-a-getting-started-guide.md). A conformance test is used on a [versioned distro](writing-a-getting-started-guide.md). @@ -320,6 +327,7 @@ intended to run against a cluster at a specific binary release of Kubernetes. See [conformance-test.sh](../../hack/conformance-test.sh). ## Testing out flaky tests + [Instructions here](flaky-tests.md) ## Regenerating the CLI documentation diff --git a/docs/devel/faster_reviews.md b/docs/devel/faster_reviews.md index 20e3e9903f6ad..d28e9b55447c8 100644 --- a/docs/devel/faster_reviews.md +++ b/docs/devel/faster_reviews.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # How to get faster PR reviews Most of what is written here is not at all specific to Kubernetes, but it bears diff --git a/docs/devel/flaky-tests.md b/docs/devel/flaky-tests.md index 0fbf643c95242..1e7f5fcb1d288 100644 --- a/docs/devel/flaky-tests.md +++ b/docs/devel/flaky-tests.md @@ -30,7 +30,9 @@ Documentation for other releases can be found at + # Hunting flaky tests in Kubernetes + Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. We have a goal of 99.9% flake free tests. This means that there is only one flake in one thousand runs of a test. diff --git a/docs/devel/getting-builds.md b/docs/devel/getting-builds.md index f59a753bdfbc0..4c92a44676469 100644 --- a/docs/devel/getting-builds.md +++ b/docs/devel/getting-builds.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Getting Kubernetes Builds You can use [hack/get-build.sh](../../hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). 
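For orientation, here is a minimal sketch of the curl-based approach that `get-build.sh` wraps. The bucket layout shown (the public `kubernetes-release` Google Cloud Storage bucket and its `stable.txt` marker) is the usual convention, but check the script itself for the authoritative paths and flags:

```
# Look up the most recent stable release tag...
LATEST=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
# ...then download the matching release tarball.
curl -LO "https://storage.googleapis.com/kubernetes-release/release/${LATEST}/kubernetes.tar.gz"
```
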
diff --git a/docs/devel/making-release-notes.md b/docs/devel/making-release-notes.md index 343b9203a2eda..d76f7415f85e4 100644 --- a/docs/devel/making-release-notes.md +++ b/docs/devel/making-release-notes.md @@ -30,10 +30,13 @@ Documentation for other releases can be found at + ## Making release notes + This documents the process for making release notes for a release. ### 1) Note the PR number of the previous release + Find the most-recent PR that was merged with the previous .0 release. Remember this as $LASTPR. _TODO_: Figure out a way to record this somewhere to save the next release engineer time. @@ -46,6 +49,7 @@ ${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR ``` ### 3) Trim the release notes + This generates a list of the entire set of PRs merged since the last minor release. It is likely long and many PRs aren't worth mentioning. If any of the PRs were cherrypicked into patches on the last minor release, you should exclude @@ -57,9 +61,11 @@ Remove, regroup, organize to your hearts content. ### 4) Update CHANGELOG.md + With the final markdown all set, cut and paste it to the top of ```CHANGELOG.md``` ### 5) Update the Release page + * Switch to the [releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page. * Open up the release you are working on. * Cut and paste the final markdown from above into the release notes diff --git a/docs/devel/profiling.md b/docs/devel/profiling.md index fbb54c9faf5d2..d36885dd6979f 100644 --- a/docs/devel/profiling.md +++ b/docs/devel/profiling.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Profiling Kubernetes This document explain how to plug in profiler and how to profile Kubernetes services. @@ -53,6 +54,7 @@ to the init(c *Config) method in 'pkg/master/master.go' and import 'net/http/ppr In most use cases to use profiler service it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. Slight inconvenience is that APIserver uses default server for intra-cluster communication, so plugging profiler to it is not really useful. In 'pkg/master/server/server.go' more servers are created and started as separate goroutines. The one that is usually serving external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in Handler variable. It is created from HTTP multiplexer, so the only thing that needs to be done is adding profiler handler functions to this multiplexer. This is exactly what lines after TL;DR do. ## Connecting to the profiler + Even when running profiler I found not really straightforward to use 'go tool pprof' with it. The problem is that at least for dev purposes certificates generated for APIserver are not signed by anyone trusted and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is by creating an ssh tunnel from the kubernetes_master open unsecured port to some external server, and use this server as a proxy. To save everyone looking for correct ssh flags, it is done by running: ``` diff --git a/docs/devel/releasing.md b/docs/devel/releasing.md index 8b1a661c874d2..65db081d05a0a 100644 --- a/docs/devel/releasing.md +++ b/docs/devel/releasing.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Releasing Kubernetes This document explains how to cut a release, and the theory behind it. If you @@ -87,6 +88,7 @@ Where `v0.20.2-322-g974377b` is the git hash you decided on. 
This will become our (retroactive) branch point. #### Branching, Tagging and Merging + Do the following: 1. `export VER=x.y` (e.g. `0.20` for v0.20) diff --git a/docs/devel/scheduler_algorithm.md b/docs/devel/scheduler_algorithm.md index 791de7c4b393d..e73e4f279252a 100644 --- a/docs/devel/scheduler_algorithm.md +++ b/docs/devel/scheduler_algorithm.md @@ -30,11 +30,13 @@ Documentation for other releases can be found at + # Scheduler Algorithm in Kubernetes For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. ## Filtering the nodes + The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: - `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. diff --git a/docs/devel/writing-a-getting-started-guide.md b/docs/devel/writing-a-getting-started-guide.md index 3e67b632012b0..c22d92042cd66 100644 --- a/docs/devel/writing-a-getting-started-guide.md +++ b/docs/devel/writing-a-getting-started-guide.md @@ -32,6 +32,7 @@ Documentation for other releases can be found at # Writing a Getting Started Guide + This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes. It also gives some guidelines which reviewers should follow when reviewing a pull request for a guide. @@ -57,6 +58,7 @@ Distros fall into two categories: There are different guidelines for each. ## Versioned Distro Guidelines + These guidelines say *what* to do. See the Rationale section for *why*. - Send us a PR. - Put the instructions in `docs/getting-started-guides/...`. Scripts go there too. This helps devs easily @@ -77,6 +79,7 @@ we still want to hear from you. We suggest you write a blog post or a Gist, and Just file an issue or chat us on IRC and one of the committers will link to it from the wiki. ## Development Distro Guidelines + These guidelines say *what* to do. See the Rationale section for *why*. - the main reason to add a new development distro is to support a new IaaS provider (VM and network management). This means implementing a new `pkg/cloudprovider/$IAAS_NAME`. @@ -93,6 +96,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. refactoring and feature additions that affect code for their IaaS. ## Rationale + - We want people to create Kubernetes clusters with whatever IaaS, Node OS, configuration management tools, and so on, which they are familiar with. The guidelines for **versioned distros** are designed for flexibility. 
diff --git a/docs/getting-started-guides/README.md b/docs/getting-started-guides/README.md index ae36a1707b69f..fc24eacde9787 100644 --- a/docs/getting-started-guides/README.md +++ b/docs/getting-started-guides/README.md @@ -55,6 +55,7 @@ they vary from step-by-step instructions to general advice for setting up a kubernetes cluster from scratch. ### Local-machine Solutions + Local-machine solutions create a single cluster with one or more kubernetes nodes on a single physical machine. Setup is completely automated and doesn't require a cloud provider account. But their size and availability is limited to that of a single machine. @@ -66,10 +67,12 @@ The local-machine solutions are: ### Hosted Solutions + [Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes clusters. ### Turn-key Cloud Solutions + These solutions allow you to create Kubernetes clusters on range of Cloud IaaS providers with only a few commands, and have active community support. - [GCE](gce.md) @@ -90,6 +93,7 @@ If you are interested in supporting Kubernetes on a new platform, check out our writing a new solution](../../docs/devel/writing-a-getting-started-guide.md). #### Cloud + These solutions are combinations of cloud provider and OS not covered by the above solutions. - [AWS + coreos](coreos.md) - [GCE + CoreOS](coreos.md) @@ -98,6 +102,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo - [Rackspace + CoreOS](rackspace.md) #### On-Premises VMs + - [Vagrant](coreos.md) (uses CoreOS and flannel) - [CloudStack](cloudstack.md) (uses Ansible, CoreOS and flannel) - [Vmware](vsphere.md) (uses Debian) @@ -109,6 +114,7 @@ These solutions are combinations of cloud provider and OS not covered by the abo - [KVM](fedora/flannel_multi_node_cluster.md) (uses Fedora and flannel) #### Bare Metal + - [Offline](coreos/bare_metal_offline.md) (no internet required. Uses CoreOS and Flannel) - [fedora/fedora_ansible_config.md](fedora/fedora_ansible_config.md) - [Fedora single node](fedora/fedora_manual_config.md) @@ -118,9 +124,11 @@ These solutions are combinations of cloud provider and OS not covered by the abo - [Docker Multi Node](docker-multinode.md) #### Integrations + - [Kubernetes on Mesos](mesos.md) (Uses GCE) ## Table of Solutions + Here are all the solutions mentioned above in table form. IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level diff --git a/docs/getting-started-guides/aws-coreos.md b/docs/getting-started-guides/aws-coreos.md index 99d694eb9425d..a1b8c13a4bf7d 100644 --- a/docs/getting-started-guides/aws-coreos.md +++ b/docs/getting-started-guides/aws-coreos.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Getting started on Amazon EC2 with CoreOS The example below creates an elastic Kubernetes cluster with a custom number of worker nodes and a master. diff --git a/docs/getting-started-guides/aws.md b/docs/getting-started-guides/aws.md index c2f7a6e9deafd..963e47821304c 100644 --- a/docs/getting-started-guides/aws.md +++ b/docs/getting-started-guides/aws.md @@ -52,6 +52,7 @@ Getting started on AWS EC2 3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access. ## Cluster turnup + ### Supported procedure: `get-kube` ```bash @@ -89,11 +90,14 @@ If these already exist, make sure you want them to be used here. 
NOTE: If using an existing keypair named "kubernetes" then you must set the `AWS_SSH_KEY` key to point to your private key. ### Alternatives + A contributed [example](aws-coreos.md) allows you to setup a Kubernetes cluster based on [CoreOS](http://www.coreos.com), either using AWS CloudFormation or EC2 with user data (cloud-config). ## Getting started with your cluster + ### Command line administration tool: `kubectl` + The cluster startup script will leave you with a ```kubernetes``` directory on your workstation. Alternately, you can download the latest Kubernetes release from [this page](https://github.com/GoogleCloudPlatform/kubernetes/releases). @@ -113,6 +117,7 @@ By default, `kubectl` will use the `kubeconfig` file generated during the cluste For more information, please read [kubeconfig files](../../docs/user-guide/kubeconfig-file.md) ### Examples + See [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster. The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](../../examples/guestbook/) @@ -120,6 +125,7 @@ The "Guestbook" application is another popular example to get started with Kuber For more complete applications, please look in the [examples directory](../../examples/) ## Tearing down the cluster + Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the `kubernetes` directory: @@ -128,6 +134,7 @@ cluster/kube-down.sh ``` ## Further reading + Please see the [Kubernetes docs](../../docs/) for more details on administering and using a Kubernetes cluster. diff --git a/docs/getting-started-guides/aws/kubectl.md b/docs/getting-started-guides/aws/kubectl.md index ad6c1bf240e6c..a5d6f435b019d 100644 --- a/docs/getting-started-guides/aws/kubectl.md +++ b/docs/getting-started-guides/aws/kubectl.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Install and configure kubectl ## Download the kubectl CLI tool diff --git a/docs/getting-started-guides/azure.md b/docs/getting-started-guides/azure.md index fadad9e1ee5f1..5762147b922f6 100644 --- a/docs/getting-started-guides/azure.md +++ b/docs/getting-started-guides/azure.md @@ -58,7 +58,9 @@ installed](https://docs.docker.com/installation/). On Mac OS X you can use [boot2docker](http://boot2docker.io/). ## Setup -###Starting a cluster + +### Starting a cluster + The cluster setup scripts can setup Kubernetes for multiple targets. First modify `cluster/kube-env.sh` to specify azure: KUBERNETES_PROVIDER="azure" @@ -83,6 +85,7 @@ The script above will start (by default) a single master VM along with 4 worker can tweak some of these parameters by editing `cluster/azure/config-default.sh`. ### Adding the kubernetes command line tools to PATH + The [kubectl](../../docs/user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps. @@ -95,6 +98,7 @@ Add the appropriate binary folder to your ```PATH``` to access kubectl: export PATH=/platforms/linux/amd64:$PATH ## Getting started with your cluster + See [a simple nginx example](../user-guide/simple-nginx.md) to try out your new cluster. For more complete applications, please look in the [examples directory](../../examples/). 
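As a quick sanity check once the cluster is up, you can point `kubectl` at it and list what is running. This is only a minimal sketch; it assumes `kubectl` is already on your `PATH` as described above and is using the cluster's generated configuration.

```sh
# Confirm the apiserver is reachable and print the cluster endpoints.
kubectl cluster-info

# The worker VMs created above should all show up and report Ready.
kubectl get nodes

# List the services in the default namespace.
kubectl get services
```

If these commands respond, the cluster is ready for the nginx example linked above.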
diff --git a/docs/getting-started-guides/binary_release.md b/docs/getting-started-guides/binary_release.md index a7144f8b09fb2..82076d9bbbaa8 100644 --- a/docs/getting-started-guides/binary_release.md +++ b/docs/getting-started-guides/binary_release.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Getting a Binary Release You can either build a release from sources or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest a pre-built release. diff --git a/docs/getting-started-guides/centos/centos_manual_config.md b/docs/getting-started-guides/centos/centos_manual_config.md index 097b0a0fa27f6..d333b212405d0 100644 --- a/docs/getting-started-guides/centos/centos_manual_config.md +++ b/docs/getting-started-guides/centos/centos_manual_config.md @@ -37,10 +37,13 @@ Getting started on [CentOS](http://centos.org) - [Prerequisites](#prerequisites) - [Starting a cluster](#starting-a-cluster) + ## Prerequisites + You need two machines with CentOS installed on them. ## Starting a cluster + This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc... This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious. diff --git a/docs/getting-started-guides/cloudstack.md b/docs/getting-started-guides/cloudstack.md index 9dee1a0e339e2..004cb03c74522 100644 --- a/docs/getting-started-guides/cloudstack.md +++ b/docs/getting-started-guides/cloudstack.md @@ -52,7 +52,7 @@ This is a completely automated, a single playbook deploys Kubernetes based on th This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook, creates an ssh key pair, creates a security group and associated rules and finally starts coreOS instances configured via cloud-init. -###Prerequisites +### Prerequisites $ sudo apt-get install -y python-pip $ sudo pip install ansible @@ -74,14 +74,14 @@ Or create a `~/.cloudstack.ini` file: We need to use the http POST method to pass the _large_ userdata to the coreOS instances. -###Clone the playbook +### Clone the playbook $ git clone --recursive https://github.com/runseb/ansible-kubernetes.git $ cd ansible-kubernetes The [ansible-cloudstack](https://github.com/resmo/ansible-cloudstack) module is setup in this repository as a submodule, hence the `--recursive`. -###Create a Kubernetes cluster +### Create a Kubernetes cluster You simply need to run the playbook. diff --git a/docs/getting-started-guides/coreos.md b/docs/getting-started-guides/coreos.md index 66c6bf378c8a0..d6adb9d4bf35d 100644 --- a/docs/getting-started-guides/coreos.md +++ b/docs/getting-started-guides/coreos.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Getting started on [CoreOS](http://coreos.com) There are multiple guides on running Kubernetes with [CoreOS](http://coreos.com): diff --git a/docs/getting-started-guides/coreos/azure/README.md b/docs/getting-started-guides/coreos/azure/README.md index d5a61d6d3439f..f55907bd95c12 100644 --- a/docs/getting-started-guides/coreos/azure/README.md +++ b/docs/getting-started-guides/coreos/azure/README.md @@ -49,6 +49,7 @@ Kubernetes on Azure with CoreOS and [Weave](http://weave.works) In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. 
You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease. ### Prerequisites + 1. You need an Azure account. ## Let's go! diff --git a/docs/getting-started-guides/coreos/bare_metal_offline.md b/docs/getting-started-guides/coreos/bare_metal_offline.md index 31880409513de..360afdbd94414 100644 --- a/docs/getting-started-guides/coreos/bare_metal_offline.md +++ b/docs/getting-started-guides/coreos/bare_metal_offline.md @@ -53,10 +53,12 @@ Deploy a CoreOS running Kubernetes environment. This particular guild is made to ## Prerequisites + 1. Installed *CentOS 6* for PXE server 2. At least two bare metal nodes to work with ## High Level Design + 1. Manage the tftp directory * /tftpboot/(coreos)(centos)(RHEL) * /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file @@ -67,6 +69,7 @@ Deploy a CoreOS running Kubernetes environment. This particular guild is made to 6. Installing the CoreOS slaves to become Kubernetes nodes. ## This Guides variables + | Node Description | MAC | IP | | :---------------------------- | :---------------: | :---------: | | CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 | @@ -75,6 +78,7 @@ Deploy a CoreOS running Kubernetes environment. This particular guild is made to ## Setup PXELINUX CentOS + To setup CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version. 1. Install packages needed on CentOS @@ -121,6 +125,7 @@ To setup CentOS PXELINUX environment there is a complete [guide here](http://doc Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers. ## Adding CoreOS to PXE + This section describes how to setup the CoreOS images to live alongside a pre-existing PXELINUX environment. 1. Find or create the TFTP root directory that everything will be based off of. @@ -168,6 +173,7 @@ This section describes how to setup the CoreOS images to live alongside a pre-ex This configuration file will now boot from local drive but have the option to PXE image CoreOS. ## DHCP configuration + This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images. 1. Add the ```filename``` to the _host_ or _subnet_ sections. @@ -210,6 +216,7 @@ This section covers configuring the DHCP server to hand out our new images. In t We will be specifying the node configuration later in the guide. ## Kubernetes + To deploy our configuration we need to create an ```etcd``` master. To do so we want to pxe CoreOS with a specific cloud-config.yml. There are two options we have here. 1. Is to template the cloud config file and programmatically create new static configs for different cluster setups. 2. Have a service discovery protocol running in our stack to do auto discovery. @@ -243,6 +250,7 @@ This sets up our binaries we need to run Kubernetes. This would need to be enhan Now for the good stuff! ## Cloud Configs + The following config files are tailored for the OFFLINE version of a Kubernetes deployment. 
These are based on the work found here: [master.yml](cloud-configs/master.yaml), [node.yml](cloud-configs/node.yaml) @@ -256,6 +264,7 @@ To make the setup work, you need to replace a few placeholders: - Add your own SSH public key(s) to the cloud config at the end ### master.yml + On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-cloud-config-master.yml```. @@ -476,6 +485,7 @@ On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe- ### node.yml + On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-cloud-config-slave.yml```. #cloud-config @@ -610,6 +620,7 @@ On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe- ## New pxelinux.cfg file + Create a pxelinux target file for a _slave_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-slave``` default coreos @@ -637,6 +648,7 @@ And one for the _master_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-master`` append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0 ## Specify the pxelinux targets + Now that we have our new targets setup for master and slave we want to configure the specific hosts to those targets. We will do this by using the pxelinux mechanism of setting a specific MAC addresses to a specific pxelinux.cfg file. Refer to the MAC address table in the beginning of this guide. Documentation for more details can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX). @@ -650,6 +662,7 @@ Refer to the MAC address table in the beginning of this guide. Documentation for Reboot these servers to get the images PXEd and ready for running containers! ## Creating test pod + Now that the CoreOS with Kubernetes installed is up and running lets spin up some Kubernetes pods to demonstrate the system. See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster. diff --git a/docs/getting-started-guides/coreos/coreos_multinode_cluster.md b/docs/getting-started-guides/coreos/coreos_multinode_cluster.md index 0adc1ee3c41f1..0b305427e1235 100644 --- a/docs/getting-started-guides/coreos/coreos_multinode_cluster.md +++ b/docs/getting-started-guides/coreos/coreos_multinode_cluster.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # CoreOS Multinode Cluster Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster. diff --git a/docs/getting-started-guides/docker-multinode.md b/docs/getting-started-guides/docker-multinode.md index 53d1afd03627a..eef7673a3ec6d 100644 --- a/docs/getting-started-guides/docker-multinode.md +++ b/docs/getting-started-guides/docker-multinode.md @@ -51,9 +51,11 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1. - [Testing your cluster](#testing-your-cluster) ## Prerequisites + 1. You need a machine with docker installed. ## Overview + This guide will set up a 2-node kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of times to create larger clusters. 
@@ -62,6 +64,7 @@ Here's a diagram of what the final result will look like: ![Kubernetes Single Node on Docker](k8s-docker.png) ### Bootstrap Docker + This guide also uses a pattern of running two instances of the Docker daemon 1) A _bootstrap_ Docker instance which is used to start system daemons like ```flanneld``` and ```etcd``` 2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers @@ -71,6 +74,7 @@ all of the Docker containers created by Kubernetes. To achieve this, it must ru it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this. ## Master Node + The first step in the process is to initialize the master node. See [here](docker-multinode/master.md) for detailed instructions. diff --git a/docs/getting-started-guides/docker-multinode/master.md b/docs/getting-started-guides/docker-multinode/master.md index d9c99cba36353..fca6918d1a735 100644 --- a/docs/getting-started-guides/docker-multinode/master.md +++ b/docs/getting-started-guides/docker-multinode/master.md @@ -30,7 +30,9 @@ Documentation for other releases can be found at + ## Installing a Kubernetes Master Node via Docker + We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is ```${MASTER_IP}``` There are two main phases to installing the master: @@ -45,6 +47,7 @@ There is a [bug](https://github.com/docker/docker/issues/14106) in Docker 1.7.0 Please install Docker 1.6.2 or wait for Docker 1.7.1. ### Setup Docker-Bootstrap + We're going to use ```flannel``` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with ```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system. @@ -61,6 +64,7 @@ across reboots and failures. ### Startup etcd for flannel and the API server to use + Run: ``` @@ -75,11 +79,13 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/googl ### Set up Flannel on the master node + Flannel is a network abstraction layer build by CoreOS, we will use it to provide simplified networking between our Pods of containers. Flannel re-configures the bridge that Docker uses for networking. As a result we need to stop Docker, reconfigure its networking, and then restart Docker. #### Bring down Docker + To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker. Turning down Docker is system dependent, it may be: @@ -113,6 +119,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock exec + ## Testing your Kubernetes cluster. To validate that your node(s) have been added, run: diff --git a/docs/getting-started-guides/docker-multinode/worker.md b/docs/getting-started-guides/docker-multinode/worker.md index f625ec51e4508..e73326c2bf540 100644 --- a/docs/getting-started-guides/docker-multinode/worker.md +++ b/docs/getting-started-guides/docker-multinode/worker.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Adding a Kubernetes worker node via Docker. @@ -44,6 +45,7 @@ For each worker node, there are three steps: * [Add the worker to the cluster](#add-the-node-to-the-cluster) ### Set up Flanneld on the worker node + As before, the Flannel daemon is going to provide network connectivity. 
_Note_: @@ -52,6 +54,7 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1. #### Set up a bootstrap docker + As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking. Run: @@ -65,6 +68,7 @@ If you are running this on a long running system, rather than experimenting, you across reboots and failures. #### Bring down Docker + To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker. Turning down Docker is system dependent, it may be: @@ -99,6 +103,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock exec Note that you will need run this curl command on your boot2docker VM if you are running on OS X. ### A note on turning down your cluster + Many of these containers run under the management of the ```kubelet``` binary, which attempts to keep containers running, even if they fail. So, in order to turn down the cluster, you need to first kill the kubelet container, and then any other containers. diff --git a/docs/getting-started-guides/fedora/fedora_ansible_config.md b/docs/getting-started-guides/fedora/fedora_ansible_config.md index 6d30e4da86e28..dc87c0f9fb06a 100644 --- a/docs/getting-started-guides/fedora/fedora_ansible_config.md +++ b/docs/getting-started-guides/fedora/fedora_ansible_config.md @@ -44,7 +44,7 @@ Configuring kubernetes on Fedora via Ansible offers a simple way to quickly crea - [Setting up the cluster](#setting-up-the-cluster) - [Testing and using your new cluster](#testing-and-using-your-new-cluster) -##Prerequisites +## Prerequisites 1. Host able to run ansible and able to clone the following repo: [kubernetes-ansible](https://github.com/eparis/kubernetes-ansible) 2. A Fedora 20+ or RHEL7 host to act as cluster master diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md index c5379d8b1238d..ea20160013ffd 100644 --- a/docs/getting-started-guides/fedora/fedora_manual_config.md +++ b/docs/getting-started-guides/fedora/fedora_manual_config.md @@ -39,6 +39,7 @@ Getting started on [Fedora](http://fedoraproject.org) - [Instructions](#instructions) ## Prerequisites + 1. You need 2 or more machines with Fedora installed. ## Instructions diff --git a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md index d6908485b7619..65207f1c84f88 100644 --- a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md +++ b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md @@ -46,6 +46,7 @@ Kubernetes multiple nodes cluster with flannel on Fedora This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network. ## Prerequisites + 1. You need 2 or more machines with Fedora installed. 
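Before configuring flannel, it is worth confirming that the services mentioned above are actually running on each machine. The commands below are only a sketch and assume the components are managed by systemd, as in the Fedora guides (host names such as fed-master and fed-node1 are the examples used earlier).

```sh
# On the master (fed-master): etcd and the kubernetes control plane services.
systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler

# On each node (fed-node1, fed-node2, ...): docker and the node services.
systemctl status docker kube-proxy kubelet
```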
## Master Setup @@ -124,7 +125,7 @@ FLANNEL_OPTIONS="" *** -##**Test the cluster and flannel configuration** +## **Test the cluster and flannel configuration** * Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each kubernetes node out of the IP range configured above. A working output should look like this: diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md index 2b0794560c8a1..308c8b280c8cd 100644 --- a/docs/getting-started-guides/gce.md +++ b/docs/getting-started-guides/gce.md @@ -188,6 +188,7 @@ Then, see [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try For more complete applications, please look in the [examples directory](../../examples/). The [guestbook example](../../examples/guestbook/) is a good "getting started" walkthrough. ### Tearing down the cluster + To remove/delete/teardown the cluster, use the `kube-down.sh` script. ```bash diff --git a/docs/getting-started-guides/locally.md b/docs/getting-started-guides/locally.md index 4bd6867f8534b..1e8be200ea43e 100644 --- a/docs/getting-started-guides/locally.md +++ b/docs/getting-started-guides/locally.md @@ -160,6 +160,7 @@ hack/local-up-cluster.sh One or more of the kubernetes daemons might've crashed. Tail the logs of each in /tmp. #### The pods fail to connect to the services by host names + The local-up-cluster.sh script doesn't start a DNS service. Similar situation can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/issues/6667). You can start a manually. Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it) diff --git a/docs/getting-started-guides/logging-elasticsearch.md b/docs/getting-started-guides/logging-elasticsearch.md index f5dad5c60ee15..81befb992d0dd 100644 --- a/docs/getting-started-guides/logging-elasticsearch.md +++ b/docs/getting-started-guides/logging-elasticsearch.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Cluster Level Logging with Elasticsearch and Kibana On the Google Compute Engine (GCE) platform the default cluster level logging support targets diff --git a/docs/getting-started-guides/logging.md b/docs/getting-started-guides/logging.md index bd499847d0eee..6941d174a0ac3 100644 --- a/docs/getting-started-guides/logging.md +++ b/docs/getting-started-guides/logging.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Cluster Level Logging to Google Cloud Logging A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application which is composed of many pods which may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes **cluster level logging** services. 
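The cluster level services described here aggregate the same streams that `kubectl` can already fetch for a single pod, so it is often useful to compare the two views. A minimal sketch (the pod and container names are only illustrative):

```sh
# Logs of the only container in a pod.
kubectl logs synthetic-logger

# For a multi-container pod, name the container explicitly.
kubectl logs synthetic-logger logger
```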
diff --git a/docs/getting-started-guides/mesos.md b/docs/getting-started-guides/mesos.md index c144a0eeb3edf..cb3207c279347 100644 --- a/docs/getting-started-guides/mesos.md +++ b/docs/getting-started-guides/mesos.md @@ -46,6 +46,7 @@ Getting started with Kubernetes on Mesos - [Test Guestbook App](#test-guestbook-app) ## About Kubernetes on Mesos + Mesos allows dynamic sharing of cluster resources between Kubernetes and other first-class Mesos frameworks such as [Hadoop][1], [Spark][2], and [Chronos][3]. @@ -97,6 +98,7 @@ $ export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888 ``` ### Deploy etcd + Start etcd and verify that it is running: ```bash @@ -118,6 +120,7 @@ curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/ If connectivity is OK, you will see an output of the available keys in etcd (if any). ### Start Kubernetes-Mesos Services + Update your PATH to more easily run the Kubernetes-Mesos binaries: ```bash @@ -176,6 +179,7 @@ $ disown -a ``` #### Validate KM Services + Add the appropriate binary folder to your ```PATH``` to access kubectl: ```bash diff --git a/docs/getting-started-guides/rackspace.md b/docs/getting-started-guides/rackspace.md index 34e3aa10a7299..5929590a3c60b 100644 --- a/docs/getting-started-guides/rackspace.md +++ b/docs/getting-started-guides/rackspace.md @@ -58,23 +58,26 @@ The current cluster design is inspired by: - [Angus Lees](https://github.com/anguslees/kube-openstack) ## Prerequisites + 1. Python2.7 2. You need to have both `nova` and `swiftly` installed. It's recommended to use a python virtualenv to install these packages into. 3. Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova.html) for more details. -##Provider: Rackspace +## Provider: Rackspace - To build your own released version from source use `export KUBERNETES_PROVIDER=rackspace` and run the `bash hack/dev-build-and-up.sh` - Note: The get.k8s.io install method is not working yet for our scripts. * To install the latest released version of kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash` ## Build + 1. The kubernetes binaries will be built via the common build scripts in `build/`. 2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files. 2. A cloud files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object. 3. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/nodes when booted. ## Cluster + There is a specific `cluster/rackspace` directory with the scripts for the following steps: 1. A cloud network will be created and all instances will be attached to this network. - flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network. @@ -83,6 +86,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo 4. We then boot as many nodes as defined via `$NUM_MINIONS`. ## Some notes + - The scripts expect `eth2` to be the cloud network that the containers will communicate across. - A number of the items in `config-default.sh` are overridable via environment variables. 
- For older versions please either: @@ -92,6 +96,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo * Download a [snapshot of `v0.3`](https://github.com/GoogleCloudPlatform/kubernetes/archive/v0.3.tar.gz) ## Network Design + - eth0 - Public Interface used for servers/containers to reach the internet - eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services. - eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface. diff --git a/docs/getting-started-guides/rkt/README.md b/docs/getting-started-guides/rkt/README.md index b37fc76abb7b9..37fcf184d579f 100644 --- a/docs/getting-started-guides/rkt/README.md +++ b/docs/getting-started-guides/rkt/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Run Kubernetes with rkt This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as a container runtime. @@ -127,6 +128,7 @@ Note: CoreOS is not supported as the master using the automated launch scripts. The master node is always Ubuntu. ### Getting started with your cluster + See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster. For more complete applications, please look in the [examples directory](../../../examples/). diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index 735aa49de7d41..a3c19a1b2e81a 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -72,6 +72,7 @@ steps that existing cluster setup scripts are making. ## Designing and Preparing ### Learning + 1. You should be familiar with using Kubernetes already. We suggest you set up a temporary cluster by following one of the other Getting Started Guides. This will help you become familiar with the CLI ([kubectl](../user-guide/kubectl/kubectl.md)) and concepts ([pods](../user-guide/pods.md), [services](../user-guide/services.md), etc.) first. @@ -79,6 +80,7 @@ steps that existing cluster setup scripts are making. effect of completing one of the other Getting Started Guides. ### Cloud Provider + Kubernetes has the concept of a Cloud Provider, which is a module which provides an interface for managing TCP Load Balancers, Nodes (Instances) and Networking Routes. The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to @@ -87,6 +89,7 @@ bare-metal), and not all parts of the interface need to be implemented, dependin on how flags are set on various components. ### Nodes + - You can use virtual or physical machines. - While you can build a cluster with 1 machine, in order to run all the examples and tests you need at least 4 nodes. @@ -100,6 +103,7 @@ on how flags are set on various components. have identical configurations. ### Network + Kubernetes has a distinctive [networking model](../admin/networking.md). Kubernetes allocates an IP address to each pod. When creating a cluster, you @@ -167,6 +171,7 @@ region of the world, etc. need to distinguish which resources each created. Call this `CLUSTERNAME`. 
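One convenient way to keep these names straight is to record them as shell variables before continuing and reuse them in the later steps. The variable names and values below are purely illustrative; nothing in this guide depends on a particular naming scheme.

```sh
# Illustrative only -- substitute names that fit your environment.
export CLUSTER_NAME=mycluster                  # the CLUSTERNAME discussed above
export MASTER_NAME=${CLUSTER_NAME}-master
export NODE_NAMES="${CLUSTER_NAME}-node-1 ${CLUSTER_NAME}-node-2 ${CLUSTER_NAME}-node-3 ${CLUSTER_NAME}-node-4"
```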
### Software Binaries + You will need binaries for: - etcd - A container runner, one of: @@ -180,6 +185,7 @@ You will need binaries for: - kube-scheduler #### Downloading and Extracting Kubernetes Binaries + A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd. You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the [Developer Documentation](../devel/README.md). Only using a binary release is covered in this guide. @@ -190,6 +196,7 @@ Then, within the second set of unzipped files, locate `./kubernetes/server/bin`, all the necessary binaries. #### Selecting Images + You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, we recommend that you run these as containers, so you need an image to be built. @@ -238,6 +245,7 @@ There are two main options for security: If following the HTTPS approach, you will need to prepare certs and credentials. #### Preparing Certs + You need to prepare several certs: - The master needs a cert to act as an HTTPS server. - The kubelets optionally need certs to identify themselves as clients of the master, and when @@ -262,6 +270,7 @@ You will end up with the following files (we will use these variables later on) - optional #### Preparing Credentials + The admin user (and any users) need: - a token or a password to identify them. - tokens are just long alphanumeric strings, e.g. 32 chars. See @@ -339,6 +348,7 @@ Started Guide. After getting a cluster running, you can then copy the init.d s cluster, and then modify them for use on your custom cluster. ### Docker + The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it. If you previously had Docker installed on a node without setting Kubernetes-specific @@ -422,6 +432,7 @@ Arguments to consider: - `--api-servers=http://$MASTER_IP` ### Networking + Each node needs to be allocated its own CIDR range for pod networking. Call this `NODE_X_POD_CIDR`. @@ -462,6 +473,7 @@ any masquerading at all. Others, such as GCE, will not allow pod IPs to send traffic to the internet, but have no problem with them inside your GCE Project. ### Other + - Enable auto-upgrades for your OS package manager, if desired. - Configure log rotation for all node components (e.g. using [logrotate](http://linux.die.net/man/8/logrotate)). - Setup liveness-monitoring (e.g. using [monit](http://linux.die.net/man/1/monit)). @@ -470,6 +482,7 @@ traffic to the internet, but have no problem with them inside your GCE Project. volumes. ### Using Configuration Management + The previous steps all involved "conventional" system administration techniques for setting up machines. You may want to use a Configuration Management system to automate the node configuration process. There are examples of [Saltstack](../admin/salt.md), Ansible, Juju, and CoreOS Cloud Config in the @@ -485,6 +498,7 @@ all configured and managed *by Kubernetes*: - they are kept running by Kubernetes rather than by init. ### etcd + You will need to run one or more instances of etcd. - Recommended approach: run one etcd instance, with its log written to a directory backed by durable storage (RAID, GCE PD) @@ -613,6 +627,7 @@ node disk. 
Optionally, you may want to mount `/var/log` as well and redirect output there. #### Starting Apiserver + Place the completed pod template into the kubelet config dir (whatever `--config=` argument of kubelet is set to, typically `/etc/kubernetes/manifests`). @@ -688,6 +703,7 @@ Optionally, you may want to mount `/var/log` as well and redirect output there. Start as described for apiserver. ### Controller Manager + To run the controller manager: - select the correct flags for your cluster - write a pod spec for the controller manager using the provided template @@ -803,6 +819,7 @@ The nodes must be able to connect to each other using their private IP. Verify t pinging or SSH-ing from one node to another. ### Getting Help + If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the [google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers) on freenode. diff --git a/docs/getting-started-guides/ubuntu.md b/docs/getting-started-guides/ubuntu.md index 09d0de6f861a1..58919289483c1 100644 --- a/docs/getting-started-guides/ubuntu.md +++ b/docs/getting-started-guides/ubuntu.md @@ -48,6 +48,7 @@ This document describes how to deploy kubernetes on ubuntu nodes, including 1 ku [Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work. ## Prerequisites + *1 The nodes have installed docker version 1.2+ and bridge-utils to manipulate linux bridge* *2 All machines can communicate with each other, no need to connect Internet (should use private docker registry in this case)* @@ -60,6 +61,7 @@ This document describes how to deploy kubernetes on ubuntu nodes, including 1 ku ### Starting a Cluster + #### Make *kubernetes* , *etcd* and *flanneld* binaries First clone the kubernetes github repo, `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git` @@ -74,6 +76,7 @@ Please make sure that there are `kube-apiserver`, `kube-controller-manager`, `ku > We used flannel here because we want to use overlay network, but please remember it is not the only choice, and it is also not a k8s' necessary dependence. Actually you can just build up k8s cluster natively, or use flannel, Open vSwitch or any other SDN tool you like, we just choose flannel here as a example. #### Configure and start the kubernetes cluster + An example cluster is listed as below: | IP Address|Role | diff --git a/docs/getting-started-guides/vagrant.md b/docs/getting-started-guides/vagrant.md index 0a0a36c24a7df..c9a6af6700148 100644 --- a/docs/getting-started-guides/vagrant.md +++ b/docs/getting-started-guides/vagrant.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Getting started with Vagrant Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). @@ -53,6 +54,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve - [I want vagrant to sync folders via nfs!](#i-want-vagrant-to-sync-folders-via-nfs) ### Prerequisites + 1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html 2. Install one of: 1. Version 4.3.28 of Virtual Box from https://www.virtualbox.org/wiki/Download_Old_Builds_4_3 @@ -366,6 +368,7 @@ export KUBERNETES_MINION_MEMORY=2048 ``` #### I ran vagrant suspend and nothing works! + ```vagrant suspend``` seems to mess up the network. 
This is not supported at this time. #### I want vagrant to sync folders via nfs! diff --git a/docs/proposals/autoscaling.md b/docs/proposals/autoscaling.md index ebc49905f5edf..86a9a819e4525 100644 --- a/docs/proposals/autoscaling.md +++ b/docs/proposals/autoscaling.md @@ -30,7 +30,9 @@ Documentation for other releases can be found at + ## Abstract + Auto-scaling is a data-driven feature that allows users to increase or decrease capacity as needed by controlling the number of pods deployed within the system automatically. @@ -230,6 +232,7 @@ Since an auto-scaler is a durable object it is best represented as a resource. ``` #### Boundary Definitions + The `AutoScaleThreshold` definitions provide the boundaries for the auto-scaler. By defining comparisons that form a range along with positive and negative increments you may define bi-directional scaling. For example the upper bound may be specified as "when requests per second rise above 50 for 30 seconds scale the application up by 1" and a lower bound may @@ -251,6 +254,7 @@ Of note: If the statistics gathering mechanisms can be initialized with a regist potentially piggyback on this registry. ### Multi-target Scaling Policy + If multiple scalable targets satisfy the `TargetSelector` criteria the auto-scaler should be configurable as to which target(s) are scaled. To begin with, if multiple targets are found the auto-scaler will scale the largest target up or down as appropriate. In the future this may be more configurable. diff --git a/docs/proposals/federation.md b/docs/proposals/federation.md index 713db4b34515b..8de05a9c0ee76 100644 --- a/docs/proposals/federation.md +++ b/docs/proposals/federation.md @@ -30,12 +30,15 @@ Documentation for other releases can be found at -#Kubernetes Cluster Federation -##(a.k.a. "Ubernetes") + +# Kubernetes Cluster Federation + +## (a.k.a. "Ubernetes") ## Requirements Analysis and Product Proposal ## _by Quinton Hoole ([quinton@google.com](mailto:quinton@google.com))_ + _Initial revision: 2015-03-05_ _Last updated: 2015-03-09_ This doc: [tinyurl.com/ubernetesv2](http://tinyurl.com/ubernetesv2) @@ -417,7 +420,7 @@ TBD: All very hand-wavey still, but some initial thoughts to get the conversatio ![image](federation-high-level-arch.png) -## Ubernetes API +## Ubernetes API This looks a lot like the existing Kubernetes API but is explicitly multi-cluster. diff --git a/docs/proposals/high-availability.md b/docs/proposals/high-availability.md index fd6bef7b4c0ce..ecb9966e9cd67 100644 --- a/docs/proposals/high-availability.md +++ b/docs/proposals/high-availability.md @@ -30,10 +30,13 @@ Documentation for other releases can be found at + # High Availability of Scheduling and Controller Components in Kubernetes + This document serves as a proposal for high availability of the scheduler and controller components in kubernetes. This proposal is intended to provide a simple High Availability api for kubernetes components with the potential to extend to services running on kubernetes. Those services would be subject to their own constraints. ## Design Options + For complete reference see [this](https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en) 1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the standby daemon to take over exactly where the failed component had left off. 
This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time. @@ -43,6 +46,7 @@ For complete reference see [this](https://www.ibm.com/developerworks/community/b 3. Active-Active (Load Balanced): Clients can simply load-balance across any number of servers that are currently running. Their general availability can be continuously updated, or published, such that load balancing only occurs across active participants. This aspect of HA is outside of the scope of *this* proposal because there is already a partial implementation in the apiserver. ## Design Discussion Notes on Leader Election + Implementation References: * [zookeeper](http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection) * [etcd](https://groups.google.com/forum/#!topic/etcd-dev/EbAa4fjypb4) @@ -55,11 +59,13 @@ The first component to request leadership will become the master. All other com The component that becomes master should create a thread to manage the lease. This thread should be created with a channel that the main process can use to release the master lease. The master should release the lease in cases of an unrecoverable error and clean shutdown. Otherwise, this process will renew the lease and sleep, waiting for the next renewal time or notification to release the lease. If there is a failure to renew the lease, this process should force the entire component to exit. Daemon exit is meant to prevent potential split-brain conditions. Daemon restart is implied in this scenario, by either the init system (systemd), or possible watchdog processes. (See Design Discussion Notes) ## Options added to components with HA functionality + Some command line options would be added to components that can do HA: * Lease Duration - How long a component can be master ## Design Discussion Notes + Some components may run numerous threads in order to perform tasks in parallel. Upon losing master status, such components should exit instantly instead of attempting to gracefully shut down such threads. This is to ensure that, in the case there's some propagation delay in informing the threads they should stop, the lame-duck threads won't interfere with the new master. The component should exit with an exit code indicating that the component is not the master. Since all components will be run by systemd or some other monitoring system, this will just result in a restart. There is a short window after a new master acquires the lease, during which data from the old master might be committed. This is because there is currently no way to condition a write on its source being the master. Having the daemons exit shortens this window but does not eliminate it. A proper solution for this problem will be addressed at a later date. The proposed solution is: @@ -75,6 +81,7 @@ There is a short window after a new master acquires the lease, during which data 5. When the API server makes the corresponding write to etcd, it includes it in a transaction that does a compare-and-swap on the "current master" entry (old value == new value == host:port and sequence number from the replica that sent the mutating operation). This basically guarantees that if we elect the new master, all transactions coming from the old master will fail. 
You can think of this as the master attaching a "precondition" of its belief about who is the latest master. ## Open Questions + * Is there a desire to keep track of all nodes for a specific component type? diff --git a/docs/roadmap.md b/docs/roadmap.md index 372ff502900a0..14db54b9dd4c7 100644 --- a/docs/roadmap.md +++ b/docs/roadmap.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes v1 Updated May 28, 2015 @@ -51,9 +52,12 @@ clustered database or key-value store. We will target such workloads for our 1.0 release. ## v1 APIs + For existing and future workloads, we want to provide a consistent, stable set of APIs, over which developers can build and extend Kubernetes. This includes input validation, a consistent API structure, clean semantics, and improved diagnosability of the system. ||||||| merged common ancestors + ## APIs and core features + 1. Consistent v1 API - Status: DONE. [v1beta3](http://kubernetesio.blogspot.com/2015/04/introducing-kubernetes-v1beta3.html) was developed as the release candidate for the v1 API. 2. Multi-port services for apps which need more than one port on the same portal IP ([#1802](https://github.com/GoogleCloudPlatform/kubernetes/issues/1802)) @@ -108,12 +112,15 @@ For existing and future workloads, we want to provide a consistent, stable set o In addition, we will provide versioning and deprecation policies for the APIs. ## Cluster Environment + Currently, a cluster is a set of nodes (VMs, machines), managed by a master, running a version of Kubernetes. This master is the cluster-level control-plane. For the purpose of running production workloads, members of the cluster must be serviceable and upgradeable. ## Micro-services and Resources + For applications / micro-services that run on Kubernetes, we want deployments to be easy but powerful. An Operations user should be able to launch a micro-service, letting the scheduler find the right placement. That micro-service should be able to require “pet storage” resources, fulfilled by external storage and with help from the cluster. We also want to improve the tools, experience for how users can roll-out applications through patterns like canary deployments. ## Performance and Reliability + The system should be performant, especially from the perspective of micro-service running on top of the cluster and for Operations users. As part of being production grade, the system should have a measured availability and be resilient to failures, including fatal failures due to hardware. In terms of performance, the objectives include: diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index 0382bea79515a..b6b9feb93d1c2 100644 --- a/docs/troubleshooting.md +++ b/docs/troubleshooting.md @@ -30,15 +30,19 @@ Documentation for other releases can be found at + # Troubleshooting + Sometimes things go wrong. This guide is aimed at making them right. It has two sections: * [Troubleshooting your application](user-guide/application-troubleshooting.md) - Useful for users who are deploying code into Kubernetes and wondering why it is not working. * [Troubleshooting your cluster](admin/cluster-troubleshooting.md) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy. # Getting help + If your problem isn't answered by any of guides above, there are variety of ways for you to get help from the Kubernetes team. 
## Questions + We have a number of FAQ pages * [User FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/User-FAQ) * [Debugging FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Debugging-FAQ) @@ -49,6 +53,7 @@ You may also find the StackOverflow topics relevant * [Google Container Engine - GKE](http://stackoverflow.com/questions/tagged/google-container-engine) ## Bugs and Feature requests + If you have what looks like a bug, or you would like to make a feature request, please use the [Github issue tracking system](https://github.com/GoogleCloudPlatform/kubernetes/issues). Before you file an issue, please search existing issues to see if your issue is already covered. @@ -56,9 +61,11 @@ Before you file an issue, please search existing issues to see if your issue is # Help! My question isn't covered! I need help now! ## IRC + The Kubernetes team hangs out on IRC at [```#google-containers```](https://botbot.me/freenode/google-containers/) on freenode. Feel free to come and ask any and all questions there. ## Mailing List + The Kubernetes mailing list is google-containers@googlegroups.com diff --git a/docs/user-guide/README.md b/docs/user-guide/README.md index da9795c8bce36..ff7ff08071063 100644 --- a/docs/user-guide/README.md +++ b/docs/user-guide/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications **Table of Contents** diff --git a/docs/user-guide/accessing-the-cluster.md b/docs/user-guide/accessing-the-cluster.md index 720069c88843f..64411f365f421 100644 --- a/docs/user-guide/accessing-the-cluster.md +++ b/docs/user-guide/accessing-the-cluster.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # User Guide to Accessing the Cluster **Table of Contents** @@ -54,7 +55,9 @@ Documentation for other releases can be found at ## Accessing the cluster API + ### Accessing for the first time with kubectl + When accessing the Kubernetes API for the first time, we suggest using the kubernetes CLI, `kubectl`. @@ -73,6 +76,7 @@ Many of the [examples](../../examples/) provide an introduction to using kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl.md). ### Directly accessing the REST API + Kubectl handles locating and authenticating to the apiserver. If you want to directly access the REST API with an http client like curl or wget, or a browser, there are several ways to locate and authenticate: @@ -111,6 +115,7 @@ $ curl http://localhost:8080/api/ ``` #### Without kubectl proxy + It is also possible to avoid using kubectl proxy by passing an authentication token directly to the apiserver, like this: @@ -175,6 +180,7 @@ In each case, the credentials of the pod are used to communicate securely with t ## Accessing services running on the cluster + The previous section was about connecting the Kubernetes API server. This section is about connecting to other services running on Kubernetes cluster. In kubernetes, the [nodes](../admin/node.md), [pods](pods.md) and [services](services.md) all have @@ -183,6 +189,7 @@ routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. ### Ways to connect + You have several options for connecting to nodes, pods and services from outside the cluster: - Access services through public IPs. 
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside @@ -232,12 +239,14 @@ at `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch (See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.) #### Manually constructing apiserver proxy URLs + As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL: `http://`*`kubernetes_master_address`*`/`*`service_path`*`/`*`service_name`*`/`*`service_endpoint-suffix-parameter`* ##### Examples + * To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?q=user:kimchy` * To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_cluster/health?pretty=true` @@ -257,6 +266,7 @@ about namespaces? 'proxy' verb? --> ``` #### Using web browsers to access services running on the cluster + You may be able to put an apiserver proxy url into the address bar of a browser. However: - Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth, but your cluster may not be configured to accept basic auth. @@ -264,9 +274,11 @@ You may be able to put an apiserver proxy url into the address bar of a browser. way that is unaware of the proxy path prefix. ## Requesting redirects + The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead. ## So Many Proxies + There are several different proxies you may encounter when using kubernetes: 1. The [kubectl proxy](#directly-accessing-the-rest-api): - runs on a user's desktop or in a pod diff --git a/docs/user-guide/annotations.md b/docs/user-guide/annotations.md index db2da0ddc0a18..672a7c3ed6421 100644 --- a/docs/user-guide/annotations.md +++ b/docs/user-guide/annotations.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Annotations We have [labels](labels.md) for identifying metadata. diff --git a/docs/user-guide/application-troubleshooting.md b/docs/user-guide/application-troubleshooting.md index 572f75ab7ac5a..ac0b9e309ded8 100644 --- a/docs/user-guide/application-troubleshooting.md +++ b/docs/user-guide/application-troubleshooting.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Application Troubleshooting This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly. @@ -54,9 +55,11 @@ This is *not* a guide for people who want to debug their cluster. For that you ## FAQ + Users are highly encouraged to check out our [FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/User-FAQ) ## Diagnosing the problem + The first step in troubleshooting is triage. What is the problem? Is it your Pods, your Replication Controller or your Service? * [Debugging Pods](#debugging-pods) @@ -64,6 +67,7 @@ your Service? * [Debugging Services](#debugging-services) ### Debugging Pods + The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command: ```sh @@ -75,6 +79,7 @@ Look at the state of the containers in the pod. Are they all ```Running```? 
Ha Continue debugging depending on the state of the pods. #### My pod stays pending + If a Pod is stuck in ```Pending``` it means that it can not be scheduled onto a node. Generally this is because there are insufficient resources of one type or another that prevent scheduling. Look at the output of the ```kubectl describe ...``` command above. There should be messages from the scheduler about why it can not schedule @@ -89,6 +94,7 @@ scheduled. In most cases, ```hostPort``` is unnecessary, try using a Service ob #### My pod stays waiting + If a Pod is stuck in the ```Waiting``` state, then it has been scheduled to a worker node, but it can't run on that machine. Again, the information from ```kubectl describe ...``` should be informative. The most common cause of ```Waiting``` pods is a failure to pull the image. There are three things to check: * Make sure that you have the name of the image correct @@ -130,6 +136,7 @@ but this should generally not be necessary given tools in the Kubernetes API. Th feature request on GitHub describing your use case and why these tools are insufficient. ### Debugging Replication Controllers + Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't create pods, then please refer to the [instructions above](#debugging-pods) to debug your pods. @@ -137,6 +144,7 @@ You can also use ```kubectl describe rc ${CONTROLLER_NAME}``` to introspect even controller. ### Debugging Services + Services provide load balancing across a set of pods. There are several common problems that can make Services not work properly. The following instructions should help debug Service problems. @@ -153,6 +161,7 @@ For example, if your Service is for an nginx container with 3 replicas, you woul IP addresses in the Service's endpoints. #### My service is missing endpoints + If you are missing endpoints, try listing pods using the labels that Service uses. Imagine that you have a Service where the labels are: @@ -179,6 +188,7 @@ selected don't have that port listed, then they won't be added to the endpoints Verify that the pod's ```containerPort``` matches up with the Service's ```containerPort``` #### Network traffic is not forwarded + If you can connect to the service, but the connection is immediately dropped, and there are endpoints in the endpoints list, it's likely that the proxy can't contact your pods. @@ -189,6 +199,7 @@ check: * Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the ```containerPort``` field needs to be 8080. #### More information + If none of the above solves your problem, follow the instructions in [Debugging Service document](debugging-services.md) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving. You may also visit [troubleshooting document](../troubleshooting.md) for more information. diff --git a/docs/user-guide/compute-resources.md b/docs/user-guide/compute-resources.md index 9d381c1676f97..6132d1c6c3617 100644 --- a/docs/user-guide/compute-resources.md +++ b/docs/user-guide/compute-resources.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Compute Resources ** Table of Contents** @@ -149,6 +150,7 @@ then pod resource usage can be retrieved from the monitoring system. 
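The missing-endpoints check above comes down to comparing a Service's label selector with the pods it is supposed to match. A minimal sketch, assuming a hypothetical Service named `nginx-svc` whose selector is `name=nginx,type=frontend` (both names are illustrative only):

```sh
# List the pods that carry the same labels the Service selects on.
kubectl get pods --selector=name=nginx,type=frontend

# Compare with the endpoints Kubernetes actually registered for the Service;
# an empty list usually means the selector and the pod labels do not match.
kubectl get endpoints nginx-svc
```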
## Troubleshooting ### My pods are pending with event message failedScheduling + If the scheduler cannot find any node where a pod can fit, then the pod will remain unscheduled until a place can be found. An event will be produced each time the scheduler fails to find a place for the pod, like this: @@ -176,6 +178,7 @@ to limit the total amount of resources that can be consumed. If used in conjunc with namespaces, it can prevent one team from hogging all the resources. ### My container is terminated + Your container may be terminated because it's resource-starved. To check if a container is being killed because it is hitting a resource limit, call `kubectl describe pod` on the pod you are interested in: diff --git a/docs/user-guide/config-best-practices.md b/docs/user-guide/config-best-practices.md index 8ce3e30e67321..1e27eebd161ab 100644 --- a/docs/user-guide/config-best-practices.md +++ b/docs/user-guide/config-best-practices.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Tips and tricks when working with config This document is meant to highlight and consolidate in one place configuration best practices that are introduced throughout the user-guide and getting-started documentation and examples. This is a living document so if you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR. diff --git a/docs/user-guide/configuring-containers.md b/docs/user-guide/configuring-containers.md index a82cce81c6809..c211aa2b38f55 100644 --- a/docs/user-guide/configuring-containers.md +++ b/docs/user-guide/configuring-containers.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications: Configuring and launching containers **Table of Contents** @@ -177,6 +178,7 @@ hello world ``` ## Deleting pods + When you’re done looking at the output, you should delete the pod: ```bash diff --git a/docs/user-guide/connecting-applications.md b/docs/user-guide/connecting-applications.md index e488433946143..4f8c37248ef00 100644 --- a/docs/user-guide/connecting-applications.md +++ b/docs/user-guide/connecting-applications.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications: Connecting applications **Table of Contents** @@ -161,6 +162,7 @@ You should now be able to curl the nginx Service on `10.0.116.146:80` from any n Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](../../cluster/addons/dns/README.md). ### Environment Variables + When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods: ```shell @@ -186,6 +188,7 @@ NGINXSVC_SERVICE_PORT=80 ``` ### DNS + Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign dns names to other Services. 
You can check if it’s running on your cluster: ```shell diff --git a/docs/user-guide/connecting-to-applications-port-forward.md b/docs/user-guide/connecting-to-applications-port-forward.md index 112f0d6f4fa83..5d74ee3068e96 100644 --- a/docs/user-guide/connecting-to-applications-port-forward.md +++ b/docs/user-guide/connecting-to-applications-port-forward.md @@ -30,7 +30,8 @@ Documentation for other releases can be found at -#Connecting to applications: kubectl port-forward + +# Connecting to applications: kubectl port-forward kubectl port-forward forwards connections to a local port to a port on a pod. Its man page is available [here](kubectl/kubectl_port-forward.md). Compared to [kubectl proxy](accessing-the-cluster.md#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging. @@ -51,6 +52,7 @@ redis-master 2/2 Running 0 41s ## Connecting to the Redis master[a] + The Redis master is listening on port 6397, to verify this, ``` diff --git a/docs/user-guide/connecting-to-applications-proxy.md b/docs/user-guide/connecting-to-applications-proxy.md index 436409671eaad..c9ccad733120f 100644 --- a/docs/user-guide/connecting-to-applications-proxy.md +++ b/docs/user-guide/connecting-to-applications-proxy.md @@ -30,11 +30,14 @@ Documentation for other releases can be found at -#Connecting to applications: kubectl proxy and apiserver proxy + +# Connecting to applications: kubectl proxy and apiserver proxy + You have seen the [basics](accessing-the-cluster.md) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service([kube-ui](ui.md)) running on the Kubernetes cluster from your workstation. -##Getting the apiserver proxy URL of kube-ui +## Getting the apiserver proxy URL of kube-ui + kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL, ``` @@ -45,7 +48,8 @@ KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system if this command does not find the URL, try the steps [here](ui.md#accessing-the-ui). -##Connecting to the kube-ui service from your local workstation +## Connecting to the kube-ui service from your local workstation + The above proxy URL is an access to the kube-ui service provided by the apiserver. To access it, you still need to authenticate to the apiserver. `kubectl proxy` can handle the authentication. ``` diff --git a/docs/user-guide/container-environment.md b/docs/user-guide/container-environment.md index c4772db62625b..6e82b2f5f7fad 100644 --- a/docs/user-guide/container-environment.md +++ b/docs/user-guide/container-environment.md @@ -50,6 +50,7 @@ Documentation for other releases can be found at ## Overview + This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode).  In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.  This cluster information makes it possible to build applications that are *cluster aware*.   
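As a hedged sketch of the `kubectl port-forward` flow from the Redis guide above (the `redis-master` pod name comes from that guide; the local port choice is arbitrary, and Redis itself defaults to port 6379):

```sh
# Forward local port 6379 to port 6379 on the redis-master pod.
# This release's kubectl takes the pod via -p (see the port-forward
# reference page); the command blocks while the tunnel is open.
kubectl port-forward -p redis-master 6379:6379

# From another terminal you could then point a Redis client at
# localhost:6379 to poke at the database, e.g.:
#   redis-cli -p 6379 ping
```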
@@ -61,14 +62,17 @@ Another important part of the container environment is the file system that is a The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system. ## Cluster Information + There are two types of information that are available within the container environment.  There is information about the container itself, and there is information about other objects in the system. ### Container Information + Currently, the only information about the container that is available to the container is the Pod name for the pod in which the container is running.  This ID is set as the hostname of the container, and is accessible through all calls to access the hostname within the container (e.g. the hostname command, or the [gethostname][1] function call in libc).  Additionally, user-defined environment variables from the pod definition, are also available to the container, as are any environment variables specified statically in the Docker image. In the future, we anticipate expanding this information with richer information about the container.  Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server. ### Cluster Information + Currently the list of all services that are running at the time when the container was created via the Kubernetes Cluster API are available to the container as environment variables.  The set of environment variables matches the syntax of Docker links. For a service named **foo** that maps to a container port named **bar**, the following variables are defined: @@ -81,11 +85,13 @@ FOO_SERVICE_PORT= Services have dedicated IP address, and are also surfaced to the container via DNS (If [DNS addon](../../cluster/addons/dns/) is enabled).  Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery. ## Container Hooks + *NB*: Container hooks are under active development, we anticipate adding additional hooks as the Kubernetes container management system evolves.* Container hooks provide information to the container about events in its management lifecycle.  For example, immediately after a container is started, it receives a *PostStart* hook.  These hooks are broadcast *into* the container with information about the life-cycle of the container.  They are different from the events provided by Docker and other systems which are *output* from the container.  Output events provide a log of what has already happened.  Input hooks provide real-time notification about things that are happening, but no historical log.   ### Hook Details + There are currently two container hooks that are surfaced to containers, and two proposed hooks: *PreStart - ****Proposed*** @@ -114,11 +120,13 @@ Eventually, user specified reasons may be [added to the API](https://github.com/ ### Hook Handler Execution + When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Note:this means that hook handler execution blocks any further management of the pod.  If your hook handler blocks, no other management (including [health checks](production-pods.md#liveness-and-readiness-probes-aka-health-checks)) will occur until the hook handler completes.  
Blocking hook handlers do *not* affect management of other Pods.  Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop) For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs.  The details of this parameter passing is handler implementation dependent (see below). ### Hook delivery guarantees + Hook delivery is "at least one", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this correctly. @@ -127,6 +135,7 @@ We expect double delivery to be rare, but in some cases if the ```kubelet``` res Likewise, we only make a single delivery attempt. If (for example) an http hook receiver is down, and unable to take traffic, we do not make any attempts to resend. ### Hook Handler Implementations + Hook handlers are the way that hooks are surfaced to containers.  Containers can select the type of hook handler they would like to implement.  Kubernetes currently supports two different hook handler types: * Exec - Executes a specific command (e.g. pre-stop.sh) inside the cgroup and namespaces of the container.  Resources consumed by the command are counted against the container.  Commands which print "ok" to standard out (stdout) are treated as healthy, any other output is treated as container failures (and will cause kubelet to forcibly restart the container).  Parameters are passed to the command as traditional linux command line flags (e.g. pre-stop.sh --reason=HEALTH) diff --git a/docs/user-guide/containers.md b/docs/user-guide/containers.md index c41154cf6a3c4..e7a7efe733ac3 100644 --- a/docs/user-guide/containers.md +++ b/docs/user-guide/containers.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Containers with Kubernetes ## Containers and commands diff --git a/docs/user-guide/debugging-services.md b/docs/user-guide/debugging-services.md index 06a9ae9f16571..316d0da1371f2 100644 --- a/docs/user-guide/debugging-services.md +++ b/docs/user-guide/debugging-services.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # My Service is not working - how to debug An issue that comes up rather frequently for new installations of Kubernetes is @@ -550,6 +551,7 @@ Contact us on [GitHub](https://github.com/GoogleCloudPlatform/kubernetes). ## More information + Visit [troubleshooting document](../troubleshooting.md) for more information. diff --git a/docs/user-guide/deploying-applications.md b/docs/user-guide/deploying-applications.md index afe87743c47c6..5248ade35e0fb 100644 --- a/docs/user-guide/deploying-applications.md +++ b/docs/user-guide/deploying-applications.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications: Deploying continuously running applications **Table of Contents** diff --git a/docs/user-guide/docker-cli-to-kubectl.md b/docs/user-guide/docker-cli-to-kubectl.md index 4a5cf4fa37799..89bafce92382e 100644 --- a/docs/user-guide/docker-cli-to-kubectl.md +++ b/docs/user-guide/docker-cli-to-kubectl.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # kubectl for docker users In this doc, we introduce the kubernetes command line to for interacting with the api to docker-cli users. 
The tool, kubectl, is designed to be familiar to docker-cli users but there are a few necessary differences. Each section of this doc highlights a docker subcommand explains the kubectl equivalent. diff --git a/docs/user-guide/downward-api.md b/docs/user-guide/downward-api.md index 823bdf02b9a52..a3816f711deff 100644 --- a/docs/user-guide/downward-api.md +++ b/docs/user-guide/downward-api.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Downward API It is sometimes useful for a container to have information about itself, but we diff --git a/docs/user-guide/downward-api/README.md b/docs/user-guide/downward-api/README.md index e3d2e55ccde5d..1c881ee3076ab 100644 --- a/docs/user-guide/downward-api/README.md +++ b/docs/user-guide/downward-api/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Downward API example Following this example, you will create a pod with a containers that consumes the pod's name and diff --git a/docs/user-guide/getting-into-containers.md b/docs/user-guide/getting-into-containers.md index 8471be7227031..8f6d793ce1402 100644 --- a/docs/user-guide/getting-into-containers.md +++ b/docs/user-guide/getting-into-containers.md @@ -33,7 +33,8 @@ Documentation for other releases can be found at #Getting into containers: kubectl exec Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases. -##Using kubectl exec to check the environment variables of a container +## Using kubectl exec to check the environment variables of a container + Kubernetes exposes [services](services.md#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`. @@ -66,6 +67,7 @@ We can use these environment variables in applications to find the service. ## Using kubectl exec to check the mounted volumes + It is convenient to use `kubectl exec` to check if the volumes are mounted as expected. We first create a Pod with a volume mounted at /data/redis, @@ -89,6 +91,7 @@ redis ``` ## Using kubectl exec to open a bash terminal in a pod + After all, open a terminal in a pod is the most direct way to introspect the pod. Assuming the pod/storage is still running, run ``` diff --git a/docs/user-guide/identifiers.md b/docs/user-guide/identifiers.md index f28ab9af59e62..9f54271425621 100644 --- a/docs/user-guide/identifiers.md +++ b/docs/user-guide/identifiers.md @@ -30,15 +30,19 @@ Documentation for other releases can be found at + # Identifiers + All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID. For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md). ## Names + Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are the used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](../design/identifiers.md) for the precise syntax rules for names. ## UIDs + UID are generated by Kubernetes. 
Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique). diff --git a/docs/user-guide/images.md b/docs/user-guide/images.md index 848de4b818c5f..a835f0691c325 100644 --- a/docs/user-guide/images.md +++ b/docs/user-guide/images.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Images Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/userguide/dockerimages/). @@ -251,6 +252,7 @@ You can use this in conjunction with a per-node `.dockerfile`. The credentials will be merged. This approach will work on Google Container Engine (GKE). ### Use Cases + There are a number of solutions for configuring private registries. Here are some common use cases and suggested solutions. diff --git a/docs/user-guide/introspection-and-debugging.md b/docs/user-guide/introspection-and-debugging.md index 5385e1045a02e..8cfd0b6c9ac6c 100644 --- a/docs/user-guide/introspection-and-debugging.md +++ b/docs/user-guide/introspection-and-debugging.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications: Application Introspection and Debugging Once your application is running, you’ll inevitably need to debug problems with it. diff --git a/docs/user-guide/kubeconfig-file.md b/docs/user-guide/kubeconfig-file.md index 63331968dd446..7d2286cd69adb 100644 --- a/docs/user-guide/kubeconfig-file.md +++ b/docs/user-guide/kubeconfig-file.md @@ -30,12 +30,15 @@ Documentation for other releases can be found at + # kubeconfig files + In order to easily switch between multiple clusters, a kubeconfig file was defined. This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname. Multiple kubeconfig files are allowed. At runtime they are loaded and merged together along with override options specified from the command line (see rules below). ## Related discussion + https://github.com/GoogleCloudPlatform/kubernetes/issues/1755 ## Example kubeconfig file @@ -81,6 +84,7 @@ users: ``` ## Loading and merging rules + The rules for loading and merging the kubeconfig files are straightforward, but there are a lot of them. The final config is built in this order: 1. Get the kubeconfig from disk. This is done with the following hierarchy and merge rules: @@ -115,6 +119,7 @@ The rules for loading and merging the kubeconfig files are straightforward, but 1. For any information still missing, use default values and potentially prompt for authentication information ## Manipulation of kubeconfig via `kubectl config ` + In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help. See [kubectl/kubectl_config.md](kubectl/kubectl_config.md) for help. diff --git a/docs/user-guide/kubectl/kubectl.md b/docs/user-guide/kubectl/kubectl.md index 8be799f462c31..6ee5877fdb068 100644 --- a/docs/user-guide/kubectl/kubectl.md +++ b/docs/user-guide/kubectl/kubectl.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl kubectl controls the Kubernetes cluster manager @@ -76,6 +77,7 @@ kubectl ``` ### SEE ALSO + * [kubectl api-versions](kubectl_api-versions.md) - Print available API versions. 
* [kubectl cluster-info](kubectl_cluster-info.md) - Display cluster info * [kubectl config](kubectl_config.md) - config modifies kubeconfig files diff --git a/docs/user-guide/kubectl/kubectl_api-versions.md b/docs/user-guide/kubectl/kubectl_api-versions.md index 4dee8f861351b..aa151bb4e1ad6 100644 --- a/docs/user-guide/kubectl/kubectl_api-versions.md +++ b/docs/user-guide/kubectl/kubectl_api-versions.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl api-versions Print available API versions. @@ -79,6 +80,7 @@ kubectl api-versions ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.959722426 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_cluster-info.md b/docs/user-guide/kubectl/kubectl_cluster-info.md index efd85435a308d..fc228f9ab2f41 100644 --- a/docs/user-guide/kubectl/kubectl_cluster-info.md +++ b/docs/user-guide/kubectl/kubectl_cluster-info.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl cluster-info Display cluster info @@ -79,6 +80,7 @@ kubectl cluster-info ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.959601452 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_config.md b/docs/user-guide/kubectl/kubectl_config.md index 12f07de5f5695..7f6b79d756739 100644 --- a/docs/user-guide/kubectl/kubectl_config.md +++ b/docs/user-guide/kubectl/kubectl_config.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl config config modifies kubeconfig files @@ -85,6 +86,7 @@ kubectl config SUBCOMMAND ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager * [kubectl config set](kubectl_config_set.md) - Sets an individual value in a kubeconfig file * [kubectl config set-cluster](kubectl_config_set-cluster.md) - Sets a cluster entry in kubeconfig diff --git a/docs/user-guide/kubectl/kubectl_config_set-cluster.md b/docs/user-guide/kubectl/kubectl_config_set-cluster.md index 19efbbf412ad7..9fc7cbc31dc93 100644 --- a/docs/user-guide/kubectl/kubectl_config_set-cluster.md +++ b/docs/user-guide/kubectl/kubectl_config_set-cluster.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl config set-cluster Sets a cluster entry in kubeconfig @@ -94,6 +95,7 @@ $ kubectl config set-cluster e2e --insecure-skip-tls-verify=true ``` ### SEE ALSO + * [kubectl config](kubectl_config.md) - config modifies kubeconfig files ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.95861887 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_config_set-context.md b/docs/user-guide/kubectl/kubectl_config_set-context.md index 34646086221e0..cc53b6c7c5e89 100644 --- a/docs/user-guide/kubectl/kubectl_config_set-context.md +++ b/docs/user-guide/kubectl/kubectl_config_set-context.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl config set-context Sets a context entry in kubeconfig @@ -87,6 +88,7 @@ $ kubectl config set-context gce --user=cluster-admin ``` ### SEE ALSO + * [kubectl config](kubectl_config.md) - config modifies kubeconfig files ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.958911281 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_config_set-credentials.md b/docs/user-guide/kubectl/kubectl_config_set-credentials.md index b1ffe5738c42a..d8f35c0aff06c 100644 --- 
a/docs/user-guide/kubectl/kubectl_config_set-credentials.md +++ b/docs/user-guide/kubectl/kubectl_config_set-credentials.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl config set-credentials Sets a user entry in kubeconfig @@ -107,6 +108,7 @@ $ kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admi ``` ### SEE ALSO + * [kubectl config](kubectl_config.md) - config modifies kubeconfig files ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.958785654 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_config_set.md b/docs/user-guide/kubectl/kubectl_config_set.md index 618547fbd66b9..aabd2e6011fa0 100644 --- a/docs/user-guide/kubectl/kubectl_config_set.md +++ b/docs/user-guide/kubectl/kubectl_config_set.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl config set Sets an individual value in a kubeconfig file @@ -81,6 +82,7 @@ kubectl config set PROPERTY_NAME PROPERTY_VALUE ``` ### SEE ALSO + * [kubectl config](kubectl_config.md) - config modifies kubeconfig files ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.959031072 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_config_unset.md b/docs/user-guide/kubectl/kubectl_config_unset.md index 3f93f4d0f2e21..80d0430691bea 100644 --- a/docs/user-guide/kubectl/kubectl_config_unset.md +++ b/docs/user-guide/kubectl/kubectl_config_unset.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl config unset Unsets an individual value in a kubeconfig file @@ -80,6 +81,7 @@ kubectl config unset PROPERTY_NAME ``` ### SEE ALSO + * [kubectl config](kubectl_config.md) - config modifies kubeconfig files ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.959148086 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_config_use-context.md b/docs/user-guide/kubectl/kubectl_config_use-context.md index 715d8fbd7b893..292045ae77f10 100644 --- a/docs/user-guide/kubectl/kubectl_config_use-context.md +++ b/docs/user-guide/kubectl/kubectl_config_use-context.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl config use-context Sets the current-context in a kubeconfig file @@ -79,6 +80,7 @@ kubectl config use-context CONTEXT_NAME ``` ### SEE ALSO + * [kubectl config](kubectl_config.md) - config modifies kubeconfig files ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.959263442 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_config_view.md b/docs/user-guide/kubectl/kubectl_config_view.md index b4413a7039561..611310a209611 100644 --- a/docs/user-guide/kubectl/kubectl_config_view.md +++ b/docs/user-guide/kubectl/kubectl_config_view.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl config view displays Merged kubeconfig settings or a specified kubeconfig file. 
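The `kubectl config` pages each document a single subcommand; a minimal end-to-end sketch tying them together (every name, path, and server address below is a placeholder, not a value taken from these docs):

```sh
# Describe the cluster, the user credentials, and a context binding the two,
# then switch to that context.  All values here are placeholders.
kubectl config set-cluster demo-cluster --server=https://1.2.3.4 --insecure-skip-tls-verify=true
kubectl config set-credentials demo-admin --client-certificate=~/.kube/admin.crt --client-key=~/.kube/admin.key
kubectl config set-context demo --cluster=demo-cluster --user=demo-admin
kubectl config use-context demo

# Show the merged result of all loaded kubeconfig files.
kubectl config view
```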
@@ -99,6 +100,7 @@ $ kubectl config view -o template --template='{{range .users}}{{ if eq .name "e2 ``` ### SEE ALSO + * [kubectl config](kubectl_config.md) - config modifies kubeconfig files ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.958490153 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_create.md b/docs/user-guide/kubectl/kubectl_create.md index 525ec347f4ce2..d62973c3fdc1d 100644 --- a/docs/user-guide/kubectl/kubectl_create.md +++ b/docs/user-guide/kubectl/kubectl_create.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl create Create a resource by filename or stdin @@ -92,6 +93,7 @@ $ cat pod.json | kubectl create -f - ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-16 22:39:16.132575015 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_delete.md b/docs/user-guide/kubectl/kubectl_delete.md index f9252357e255d..85b0ac53c5abf 100644 --- a/docs/user-guide/kubectl/kubectl_delete.md +++ b/docs/user-guide/kubectl/kubectl_delete.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl delete Delete a resource by filename, stdin, resource and name, or by resources and label selector. @@ -114,6 +115,7 @@ $ kubectl delete pods --all ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-16 05:13:00.190175769 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_describe.md b/docs/user-guide/kubectl/kubectl_describe.md index 5c05fe3bbaa75..9c2682997bdd4 100644 --- a/docs/user-guide/kubectl/kubectl_describe.md +++ b/docs/user-guide/kubectl/kubectl_describe.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl describe Show details of a specific resource or group of resources @@ -110,6 +111,7 @@ $ kubectl describe pods frontend ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 08:21:33.374469932 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_exec.md b/docs/user-guide/kubectl/kubectl_exec.md index a96d97bd8b813..9fc3fff907a62 100644 --- a/docs/user-guide/kubectl/kubectl_exec.md +++ b/docs/user-guide/kubectl/kubectl_exec.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl exec Execute a command in a container. 
@@ -97,6 +98,7 @@ $ kubectl exec 123456-7890 -c ruby-container -i -t -- bash -il ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.956874128 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_expose.md b/docs/user-guide/kubectl/kubectl_expose.md index 9dce59ee9c3e9..12377b80fd61e 100644 --- a/docs/user-guide/kubectl/kubectl_expose.md +++ b/docs/user-guide/kubectl/kubectl_expose.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl expose Take a replicated application and expose it as Kubernetes Service @@ -113,6 +114,7 @@ $ kubectl expose rc streamer --port=4100 --protocol=udp --name=video-stream ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-17 01:17:57.020108348 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_get.md b/docs/user-guide/kubectl/kubectl_get.md index 35d8be377d142..4b77d3cdfc3ad 100644 --- a/docs/user-guide/kubectl/kubectl_get.md +++ b/docs/user-guide/kubectl/kubectl_get.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl get Display one or many resources @@ -121,6 +122,7 @@ $ kubectl get rc/web service/frontend pods/web-pod-13je7 ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.955450097 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_label.md b/docs/user-guide/kubectl/kubectl_label.md index 02eed69490fc7..d90884542838b 100644 --- a/docs/user-guide/kubectl/kubectl_label.md +++ b/docs/user-guide/kubectl/kubectl_label.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl label Update the labels on a resource @@ -111,6 +112,7 @@ $ kubectl label pods foo bar- ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.958329854 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_logs.md b/docs/user-guide/kubectl/kubectl_logs.md index c625148fd966b..a34354e669b55 100644 --- a/docs/user-guide/kubectl/kubectl_logs.md +++ b/docs/user-guide/kubectl/kubectl_logs.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl logs Print the logs for a container in a pod. 
@@ -96,6 +97,7 @@ $ kubectl logs -f 123456-7890 ruby-container ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.956443079 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_namespace.md b/docs/user-guide/kubectl/kubectl_namespace.md index 341c282e67a59..b93d5187ea81f 100644 --- a/docs/user-guide/kubectl/kubectl_namespace.md +++ b/docs/user-guide/kubectl/kubectl_namespace.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl namespace SUPERCEDED: Set and view the current Kubernetes namespace @@ -82,6 +83,7 @@ kubectl namespace [namespace] ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.956297427 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_patch.md b/docs/user-guide/kubectl/kubectl_patch.md index cf0ade964f98f..41e09c8eabb0e 100644 --- a/docs/user-guide/kubectl/kubectl_patch.md +++ b/docs/user-guide/kubectl/kubectl_patch.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl patch Update field(s) of a resource by stdin. @@ -90,6 +91,7 @@ kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.956026887 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_port-forward.md b/docs/user-guide/kubectl/kubectl_port-forward.md index 6963632f42fdb..b612513566e1e 100644 --- a/docs/user-guide/kubectl/kubectl_port-forward.md +++ b/docs/user-guide/kubectl/kubectl_port-forward.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl port-forward Forward one or more local ports to a pod. @@ -97,6 +98,7 @@ $ kubectl port-forward -p mypod 0:5000 ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.957000233 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_proxy.md b/docs/user-guide/kubectl/kubectl_proxy.md index cb6878dc851ef..2c5cb2eb9a93d 100644 --- a/docs/user-guide/kubectl/kubectl_proxy.md +++ b/docs/user-guide/kubectl/kubectl_proxy.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl proxy Run a proxy to the Kubernetes API server @@ -114,6 +115,7 @@ $ kubectl proxy --api-prefix=/k8s-api ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.957150329 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_replace.md b/docs/user-guide/kubectl/kubectl_replace.md index a4fa525f20238..d6d1b90874469 100644 --- a/docs/user-guide/kubectl/kubectl_replace.md +++ b/docs/user-guide/kubectl/kubectl_replace.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl replace Replace a resource by filename or stdin. 
@@ -99,6 +100,7 @@ kubectl replace --force -f ./pod.json ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-16 22:39:16.132838722 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_rolling-update.md b/docs/user-guide/kubectl/kubectl_rolling-update.md index 50b32ac2a1a8a..478313d70cb81 100644 --- a/docs/user-guide/kubectl/kubectl_rolling-update.md +++ b/docs/user-guide/kubectl/kubectl_rolling-update.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl rolling-update Perform a rolling update of the given ReplicationController. @@ -113,6 +114,7 @@ $ kubectl rolling-update frontend --image=image:v2 ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.956605022 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_run.md b/docs/user-guide/kubectl/kubectl_run.md index 3d2295f24d2e5..5f3a1943b5661 100644 --- a/docs/user-guide/kubectl/kubectl_run.md +++ b/docs/user-guide/kubectl/kubectl_run.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl run Run a particular image on the cluster. @@ -108,6 +109,7 @@ $ kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { . ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.957298888 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_scale.md b/docs/user-guide/kubectl/kubectl_scale.md index 880587b4ceb3f..863e444ef3279 100644 --- a/docs/user-guide/kubectl/kubectl_scale.md +++ b/docs/user-guide/kubectl/kubectl_scale.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl scale Set a new size for a Replication Controller. @@ -97,6 +98,7 @@ $ kubectl scale --current-replicas=2 --replicas=3 replicationcontrollers foo ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.956739933 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_stop.md b/docs/user-guide/kubectl/kubectl_stop.md index 5a9a99d08a2a3..ed65ad3fe4dd4 100644 --- a/docs/user-guide/kubectl/kubectl_stop.md +++ b/docs/user-guide/kubectl/kubectl_stop.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl stop Gracefully shut down a resource by name or filename. @@ -104,6 +105,7 @@ $ kubectl stop -f path/to/resources ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.957441942 +0000 UTC diff --git a/docs/user-guide/kubectl/kubectl_version.md b/docs/user-guide/kubectl/kubectl_version.md index f7a03c0a32d0b..5d40eb95a8949 100644 --- a/docs/user-guide/kubectl/kubectl_version.md +++ b/docs/user-guide/kubectl/kubectl_version.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## kubectl version Print the client and server version information. 
@@ -80,6 +81,7 @@ kubectl version ``` ### SEE ALSO + * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra at 2015-07-14 00:11:42.959846454 +0000 UTC diff --git a/docs/user-guide/labels.md b/docs/user-guide/labels.md index a3322b4c08462..f7063aa824982 100644 --- a/docs/user-guide/labels.md +++ b/docs/user-guide/labels.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Labels _Labels_ are key/value pairs that are attached to objects, such as pods. diff --git a/docs/user-guide/liveness/README.md b/docs/user-guide/liveness/README.md index ece5db5e1e417..1298f27ee8f76 100644 --- a/docs/user-guide/liveness/README.md +++ b/docs/user-guide/liveness/README.md @@ -30,7 +30,9 @@ Documentation for other releases can be found at + ## Overview + This example shows two types of pod [health checks](../production-pods.md#liveness-and-readiness-probes-aka-health-checks): HTTP checks and container execution checks. The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check. @@ -72,6 +74,7 @@ The Kubelet sends a HTTP request to the specified path and port to perform the h This [guide](../walkthrough/k8s201.md#health-checking) has more information on health checks. ## Get your hands dirty + To show the health check is actually working, first create the pods: ``` diff --git a/docs/user-guide/logging-demo/README.md b/docs/user-guide/logging-demo/README.md index 166ae8d2aa139..d3400eb4b3589 100644 --- a/docs/user-guide/logging-demo/README.md +++ b/docs/user-guide/logging-demo/README.md @@ -30,7 +30,9 @@ Documentation for other releases can be found at + # Elasticsearch/Kibana Logging Demonstration + This directory contains two [pod](../../../docs/user-guide/pods.md) specifications which can be used as synthetic logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml) describes a pod that just emits a log message once every 4 seconds. The pod specification in diff --git a/docs/user-guide/logging.md b/docs/user-guide/logging.md index 140dcfb0fda68..5b4208e78852b 100644 --- a/docs/user-guide/logging.md +++ b/docs/user-guide/logging.md @@ -30,12 +30,15 @@ Documentation for other releases can be found at + # Logging ## Logging by Kubernetes Components + Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [docs/devel/logging.md](../devel/logging.md). ## Examining the logs of running containers + The logs of a running container may be fetched using the command `kubectl logs`. For example, given this pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml), which has a container which writes out some text to standard output every second. (You can find different pod specifications [here](logging-demo/).) @@ -95,15 +98,18 @@ $ kubectl logs kube-dns-v3-7r1l9 etcd ``` ## Cluster level logging to Google Cloud Logging + The getting started guide [Cluster Level Logging to Google Cloud Logging](../getting-started-guides/logging.md) explains how container logs are ingested into [Google Cloud Logging](https://cloud.google.com/logging/docs/) and shows how to query the ingested logs. 
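The liveness overview above points at `exec-liveness.yaml` for the container-execution check; a hedged sketch of exercising it (the `liveness-exec` pod name is an assumption about what that spec defines):

```sh
# Create the pod from the spec referenced above (path relative to the
# liveness example directory).
kubectl create -f exec-liveness.yaml

# The pod name below is an assumption about the name set in the spec;
# the Events section should show the kubelet restarting the container
# once its liveness check starts failing.
kubectl describe pods liveness-exec
```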
## Cluster level logging with Elasticsearch and Kibana + The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](../getting-started-guides/logging-elasticsearch.md) describes how to ingest cluster level logs into Elasticsearch and view them using Kibana. ## Ingesting Application Log Files + Cluster level logging only collects the standard output and standard error output of the applications running in containers. The guide [Collecting log files within containers with Fluentd](../../contrib/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging. diff --git a/docs/user-guide/managing-deployments.md b/docs/user-guide/managing-deployments.md index e8db421f9fcfd..38b508865bfaf 100644 --- a/docs/user-guide/managing-deployments.md +++ b/docs/user-guide/managing-deployments.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications: Managing deployments You’ve deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features we’ll discuss in more depth are [configuration files](configuring-containers.md#configuration-in-kubernetes) and [labels](deploying-applications.md#labels). @@ -436,6 +437,7 @@ $ rm $TMP ``` The system ensures that you don’t clobber changes made by other users or components by confirming that the `resourceVersion` doesn’t differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don’t use your original configuration file as the source since additional fields most likely were set in the live state. + ## Disruptive updates In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a replication controller. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file: diff --git a/docs/user-guide/monitoring.md b/docs/user-guide/monitoring.md index 5038fe5598d62..6688bde113ea5 100644 --- a/docs/user-guide/monitoring.md +++ b/docs/user-guide/monitoring.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Resource Usage Monitoring in Kubernetes Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](pods.md), [services](services.md), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/GoogleCloudPlatform/heapster), a project meant to provide a base monitoring platform on Kubernetes. @@ -55,6 +56,7 @@ On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine contain The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages the pods and containers running on a machine. 
Kubelet translates each pod into its constituent containers and fetches individual container usage statistics from cAdvisor. It then exposes the aggregated pod resource usage statistics via a REST API. ## Storage Backends + ### InfluxDB and Grafana A Grafana setup with InfluxDB is a very popular combination for monitoring in the open source world. InfluxDB exposes an easy to use API to write and fetch time series data. Heapster is setup to use this storage backend by default on most kubernetes clusters. A detailed setup guide can be found [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in Pods. The pod exposes itself as a Kubernetes service which is how Heapster discovers it. @@ -82,6 +84,7 @@ Here is a snapshot of the a Google Cloud Monitoring dashboard showing cluster-wi ![Google Cloud Monitoring dashboard](gcm.png) ## Try it out! + Now that you’ve learned a bit about Heapster, feel free to try it out on your own clusters! The [Heapster repository](https://github.com/GoogleCloudPlatform/heapster) is available on GitHub. It contains detailed instructions to setup Heapster and its storage backends. Heapster runs by default on most Kubernetes clusters, so you may already have it! Feedback is always welcome. Please let us know if you run into any issues. Heapster and Kubernetes developers hang out in the [#google-containers](http://webchat.freenode.net/?channels=google-containers) IRC channel on freenode.net. You can also reach us on the [google-containers Google Groups mailing list](https://groups.google.com/forum/#!forum/google-containers). *** diff --git a/docs/user-guide/namespaces.md b/docs/user-guide/namespaces.md index 8e1e6e26c84d6..75029e2fdc091 100644 --- a/docs/user-guide/namespaces.md +++ b/docs/user-guide/namespaces.md @@ -74,6 +74,7 @@ The Namespace provides a unique scope for: Look [here](namespaces/) for an in depth example of namespaces. ### Viewing namespaces + You can list the current namespaces in a cluster using: ```sh @@ -187,6 +188,7 @@ kubectl delete namespaces This delete is asynchronous, so for a time you will see the namespace in the ```Terminating``` state. ## Namespaces and DNS + When you create a [Service](services.md), it creates a corresponding [DNS entry](../admin/dns.md)1. This entry is of the form ```..cluster.local```, which means that if a container just uses `````` it will resolve to the service which diff --git a/docs/user-guide/namespaces/README.md b/docs/user-guide/namespaces/README.md index 135a248004e09..b310732cca153 100644 --- a/docs/user-guide/namespaces/README.md +++ b/docs/user-guide/namespaces/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Kubernetes Namespaces Kubernetes _[namespaces](../namespaces.md)_ help different projects, teams, or customers to share a Kubernetes cluster. diff --git a/docs/user-guide/node-selection/README.md b/docs/user-guide/node-selection/README.md index 96026f7ebfd1d..cb9e51f4929ed 100644 --- a/docs/user-guide/node-selection/README.md +++ b/docs/user-guide/node-selection/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Node selection example This example shows how to assign a [pod](../pods.md) to a specific [node](../../admin/node.md) or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. 
Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances like to ensure that your pod ends up on a machine with an SSD attached to it. diff --git a/docs/user-guide/overview.md b/docs/user-guide/overview.md index 6d5d5eaffd339..cc98369429260 100644 --- a/docs/user-guide/overview.md +++ b/docs/user-guide/overview.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Overview Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster. It provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure that the state of the cluster continually matches the user's intentions. diff --git a/docs/user-guide/persistent-volumes.md b/docs/user-guide/persistent-volumes.md index fcca240fcb37a..1b495dc85dacc 100644 --- a/docs/user-guide/persistent-volumes.md +++ b/docs/user-guide/persistent-volumes.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Persistent Volumes and Claims This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](volumes.md) is suggested. diff --git a/docs/user-guide/persistent-volumes/README.md b/docs/user-guide/persistent-volumes/README.md index f416543c507e9..f08bc61cf0a53 100644 --- a/docs/user-guide/persistent-volumes/README.md +++ b/docs/user-guide/persistent-volumes/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # How To Use Persistent Volumes The purpose of this guide is to help you become familiar with [Kubernetes Persistent Volumes](../persistent-volumes.md). By the end of the guide, we'll have diff --git a/docs/user-guide/pod-states.md b/docs/user-guide/pod-states.md index f7d4e00d45ba9..2d27625b4198d 100644 --- a/docs/user-guide/pod-states.md +++ b/docs/user-guide/pod-states.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # The life of a pod Updated: 4/14/2015 diff --git a/docs/user-guide/pods.md b/docs/user-guide/pods.md index bbab2ac9439d7..8e7f05f7ee3ee 100644 --- a/docs/user-guide/pods.md +++ b/docs/user-guide/pods.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Pods In Kubernetes, rather than individual application containers, _pods_ are the smallest deployable units that can be created, scheduled, and managed. diff --git a/docs/user-guide/prereqs.md b/docs/user-guide/prereqs.md index 7925efbf46433..1742e7520e18a 100644 --- a/docs/user-guide/prereqs.md +++ b/docs/user-guide/prereqs.md @@ -30,10 +30,13 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications: Prerequisites + To deploy and manage applications on Kubernetes, you’ll use the Kubernetes command-line tool, [kubectl](kubectl/kubectl.md). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps. -##Install kubectl +## Install kubectl + You can find it in the [release](https://github.com/GoogleCloudPlatform/kubernetes/releases) tar bundle, under platforms//; or if you build from source, kubectl should be either under _output/local/bin// or _output/dockerized/bin//. 
@@ -47,7 +50,8 @@ export PATH=/platforms/darwin/amd64:$PATH export PATH=/platforms/linux/amd64:$PATH ``` -##Configure kubectl +## Configure kubectl + In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](kubeconfig-file.md), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](../../docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn’t create, see the [Sharing Cluster Access document](sharing-clusters.md). #### Installing Kubectl diff --git a/docs/user-guide/production-pods.md b/docs/user-guide/production-pods.md index 26e8c44ba564c..a9ccbb639c2e3 100644 --- a/docs/user-guide/production-pods.md +++ b/docs/user-guide/production-pods.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications: Working with pods and containers in production **Table of Contents** diff --git a/docs/user-guide/quick-start.md b/docs/user-guide/quick-start.md index fd6ab60808402..85f51ce3eb0ad 100644 --- a/docs/user-guide/quick-start.md +++ b/docs/user-guide/quick-start.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes User Guide: Managing Applications: Quick start **Table of Contents** @@ -67,7 +68,9 @@ my-nginx-q7jo3 1/1 Running 0 29m ``` Kubernetes will ensure that your application keeps running, by automatically restarting containers that fail, spreading containers across nodes, and recreating containers on new nodes when nodes fail. + ## Exposing your application to the Internet + Through integration with some cloud providers (for example Google Compute Engine and AWS EC2), Kubernetes enables you to request that it provision a public IP address for your application. To do this run: ```bash diff --git a/docs/user-guide/replication-controller.md b/docs/user-guide/replication-controller.md index 8cf3a5b7d831a..91216f58d786d 100644 --- a/docs/user-guide/replication-controller.md +++ b/docs/user-guide/replication-controller.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Replication Controller **Table of Contents** diff --git a/docs/user-guide/secrets.md b/docs/user-guide/secrets.md index 4fc5e4cab6d2e..46cb292cca7b6 100644 --- a/docs/user-guide/secrets.md +++ b/docs/user-guide/secrets.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Secrets Objects of type `secret` are intended to hold sensitive information, such as @@ -157,7 +158,9 @@ whichever is convenient. See another example of creating a secret and a pod that consumes that secret in a volume [here](secrets/). ### Manually specifying an imagePullSecret + Use of imagePullSecrets is described in the [images documentation](images.md#specifying-imagepullsecrets-on-a-pod). + ### Automatic use of Manually Created Secrets *This feature is planned but not implemented. See [issue @@ -169,7 +172,9 @@ Then, pods which use that service account will have The secrets will be mounted at **TBD**. ## Details + ### Restrictions + Secret volume sources are validated to ensure that the specified object reference actually points to an object of type `Secret`. Therefore, a secret needs to be created before any pods that depend on it.
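To make that ordering concrete, here is a minimal sketch; the secret name, key, and base64 value are illustrative placeholders rather than anything from the original docs:

```sh
# Write an example Secret manifest; 'c2VjcmV0LXZhbHVl' is base64 for "secret-value"
cat > my-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  my-key: c2VjcmV0LXZhbHVl
EOF

# Create the secret and confirm it exists *before* creating any pod that mounts it
kubectl create -f my-secret.yaml
kubectl get secrets
```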
@@ -461,6 +466,7 @@ one called, say, `prod-user` with the `prod-db-secret`, and one called, say, ``` ### Use-case: Secret visible to one container in a pod + Consider a program that needs to handle HTTP requests, do some complex business diff --git a/docs/user-guide/secrets/README.md b/docs/user-guide/secrets/README.md index 458c27f7f1aeb..eca8515e1314c 100644 --- a/docs/user-guide/secrets/README.md +++ b/docs/user-guide/secrets/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Secrets example Following this example, you will create a [secret](../secrets.md) and a [pod](../pods.md) that consumes that secret in a [volume](../volumes.md). See [Secrets design document](../../design/secrets.md) for more information. diff --git a/docs/user-guide/security-context.md b/docs/user-guide/security-context.md index 7a8902e0ab08f..2381a61721ea6 100644 --- a/docs/user-guide/security-context.md +++ b/docs/user-guide/security-context.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Security Contexts A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc.) applied to a container. See [security context design](../design/security_context.md) for more details. diff --git a/docs/user-guide/service-accounts.md b/docs/user-guide/service-accounts.md index a84b3a496236f..c411db8ee859f 100644 --- a/docs/user-guide/service-accounts.md +++ b/docs/user-guide/service-accounts.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Service Accounts A service account provides an identity for processes that run in a Pod. @@ -121,6 +122,7 @@ $ kubectl delete serviceaccount/build-robot ## Adding Secrets to a service account. + TODO: Test and explain how to use additional non-K8s secrets with an existing service account. TODO explain: diff --git a/docs/user-guide/services-firewalls.md b/docs/user-guide/services-firewalls.md index 518289bb19000..cc457181d3a49 100644 --- a/docs/user-guide/services-firewalls.md +++ b/docs/user-guide/services-firewalls.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Services and Firewalls Many cloud providers (e.g. Google Compute Engine) define firewalls that help prevent inadvertent @@ -39,6 +40,7 @@ well as any provider specific details that may be necessary. ### Google Compute Engine + When using a Service with `spec.type: LoadBalancer`, the firewall will be opened automatically. When using `spec.type: NodePort`, however, the firewall is *not* opened by default. @@ -77,6 +79,7 @@ the wilds of the internet. This will be fixed in an upcoming release of Kubernetes. ### Other cloud providers + Coming soon.
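For the `NodePort` case on Google Compute Engine, opening the firewall yourself might look like the sketch below; the rule name, port, and node tag are placeholders to replace with your own values:

```sh
# Hypothetical rule allowing external traffic to a service's node port
gcloud compute firewall-rules create my-service-nodeport \
  --allow=tcp:30080 \
  --target-tags=kubernetes-minion
```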
diff --git a/docs/user-guide/services.md b/docs/user-guide/services.md index 759ab65782cbd..c27ec0cbb9fe6 100644 --- a/docs/user-guide/services.md +++ b/docs/user-guide/services.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Services in Kubernetes **Table of Contents** diff --git a/docs/user-guide/sharing-clusters.md b/docs/user-guide/sharing-clusters.md index 719ca6f4b33f0..6297bf49594ef 100644 --- a/docs/user-guide/sharing-clusters.md +++ b/docs/user-guide/sharing-clusters.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Sharing Cluster Access Client access to a running kubernetes cluster can be shared by copying diff --git a/docs/user-guide/simple-nginx.md b/docs/user-guide/simple-nginx.md index e8236433f187c..680c12e424724 100644 --- a/docs/user-guide/simple-nginx.md +++ b/docs/user-guide/simple-nginx.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Running your first containers in Kubernetes Ok, you've run one of the [getting started guides](../../docs/getting-started-guides/) and you have @@ -65,6 +66,7 @@ kubectl stop rc my-nginx ``` ### Exposing your pods to the internet. + On some platforms (for example Google Compute Engine) the kubectl command can integrate with your cloud provider to add a [public IP address](services.md#external-services) for the pods, to do this run: @@ -81,6 +83,7 @@ kubectl get services In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a firewall to allow traffic on port 80. ### Next: Configuration files + Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](simple-yaml.md) is given in a different document. diff --git a/docs/user-guide/simple-yaml.md b/docs/user-guide/simple-yaml.md index e7048c122abe0..b6e8ce8c9537e 100644 --- a/docs/user-guide/simple-yaml.md +++ b/docs/user-guide/simple-yaml.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Getting started with config files. In addition to the imperative style commands described [elsewhere](simple-nginx.md), Kubernetes @@ -74,6 +75,7 @@ kubectl delete pods nginx ``` ### Running a replicated set of containers from a configuration file + To run replicated containers, you need a [Replication Controller](replication-controller.md). A replication controller is responsible for ensuring that a specific number of pods exist in the cluster. diff --git a/docs/user-guide/ui.md b/docs/user-guide/ui.md index 56b59cab1a78a..86f747db5d1cc 100644 --- a/docs/user-guide/ui.md +++ b/docs/user-guide/ui.md @@ -30,10 +30,13 @@ Documentation for other releases can be found at + # Kubernetes User Interface + Kubernetes has a web-based user interface that displays the current cluster state graphically. ## Accessing the UI + By default, the Kubernetes UI is deployed as a cluster addon. To access it, visit `https:///ui`, which redirects to `https:///api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/`. If you find that you're not able to access the UI, it may be because the kube-ui service has not been started on your cluster. In that case, you can start it manually with: @@ -46,16 +49,20 @@ kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-syste Normally, this should be taken care of automatically by the [`kube-addons.sh`](../../cluster/saltbase/salt/kube-addons/kube-addons.sh) script that runs on the master. 
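Before creating the replication controller and service by hand, it can be worth checking whether the addon is already running; a minimal sketch, assuming the default `kube-system` namespace used by the addon scripts:

```sh
# Look for an existing kube-ui replication controller and service
kubectl get rc --namespace=kube-system
kubectl get services --namespace=kube-system
```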
## Using the UI + The Kubernetes UI can be used to introspect your current cluster, such as checking how resources are used, or looking at error messages. You cannot, however, use the UI to modify your cluster. ### Node Resource Usage + After accessing the Kubernetes UI, you'll see a homepage dynamically listing out all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file system usage. ![kubernetes UI home page](k8s-ui-overview.png) ### Dashboard Views + Click on the "Views" button in the top-right of the page to see other views available, which include: Explore, Pods, Nodes, Replication Controllers, Services, and Events. #### Explore View + The "Explore" view allows you to see the pods, replication controllers, and services in the current cluster easily. ![kubernetes UI Explore View](k8s-ui-explore.png) The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc. @@ -66,10 +73,12 @@ To see more details of each resource instance, simply click on it. ![kubernetes UI - Pod](k8s-ui-explore-poddetail.png) ### Other Views + Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details. ![kubernetes UI - Nodes](k8s-ui-nodes.png) ## More Information + For more information, see the [Kubernetes UI development document](../../www/README.md) in the www directory. diff --git a/docs/user-guide/update-demo/README.md b/docs/user-guide/update-demo/README.md index d2ee002a26bc5..9ea875c212fbc 100644 --- a/docs/user-guide/update-demo/README.md +++ b/docs/user-guide/update-demo/README.md @@ -46,7 +46,9 @@ See the License for the specific language governing permissions and limitations under the License. --> + # Rolling update example + This example demonstrates the usage of Kubernetes to perform a [rolling update](../kubectl/kubectl_rolling-update.md) on a running group of [pods](../../../docs/user-guide/pods.md). See [here](../managing-deployments.md#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check [rolling update design document](../../design/simple-rolling-update.md) for more information. ### Step Zero: Prerequisites @@ -74,6 +76,7 @@ I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001 Now visit the [demo website](http://localhost:8001/static). You won't see anything much quite yet. ### Step Two: Run the replication controller + Now we will turn up two replicas of an [image](../images.md). They all serve on internal port 80. ```bash @@ -93,6 +96,7 @@ $ kubectl scale rc update-demo-nautilus --replicas=4 If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod. ### Step Four: Update the docker image + We will now update the docker image to serve a different image by doing a rolling update to a new Docker image.
```bash diff --git a/docs/user-guide/volumes.md b/docs/user-guide/volumes.md index adc1d0446bf42..c77ae9a904f27 100644 --- a/docs/user-guide/volumes.md +++ b/docs/user-guide/volumes.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Volumes On-disk files in a container are ephemeral, which presents some problems for diff --git a/docs/user-guide/walkthrough/README.md b/docs/user-guide/walkthrough/README.md index 42d2bdff5e229..5dc8e868625ae 100644 --- a/docs/user-guide/walkthrough/README.md +++ b/docs/user-guide/walkthrough/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes 101 - Kubectl CLI and Pods For Kubernetes 101, we will cover kubectl, pods, volumes, and multiple containers @@ -59,6 +60,7 @@ For more info about kubectl, including its usage, commands, and parameters, see If you haven't installed and configured kubectl, finish the [prerequisites](../prereqs.md) before continuing. ## Pods + In Kubernetes, a group of one or more containers is called a _pod_. Containers in a pod are deployed together, and are started, stopped, and replicated as a group. See [pods](../../../docs/user-guide/pods.md) for more details. diff --git a/docs/user-guide/walkthrough/k8s201.md b/docs/user-guide/walkthrough/k8s201.md index a66da4e5d0321..c38e7792e338c 100644 --- a/docs/user-guide/walkthrough/k8s201.md +++ b/docs/user-guide/walkthrough/k8s201.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes 201 - Labels, Replication Controllers, Services and Health Checking If you went through [Kubernetes 101](README.md), you learned about kubectl, pods, volumes, and multiple containers. diff --git a/docs/user-guide/working-with-resources.md b/docs/user-guide/working-with-resources.md index 91ff3e3306bb6..2ec21356a3dd4 100644 --- a/docs/user-guide/working-with-resources.md +++ b/docs/user-guide/working-with-resources.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Working with Resources *This document is aimed at users who have worked through some of the examples, @@ -40,6 +41,7 @@ refer to the [api conventions](../devel/api-conventions.md) and the [api document](../api.md).* ## Resources are Automatically Modified + When you create a resource such as pod, and then retrieve the created resource, a number of the fields of the resource are added. You can see this at work in the following example: @@ -78,6 +80,7 @@ The system adds fields in several ways: The API will generally not modify fields that you have set; it just sets ones which were unspecified. ## Finding Documentation on Resource Fields + You can browse auto-generated API documentation at the [project website](http://kubernetes.io/third_party/swagger-ui/) or directly from your cluster, like this: - Run `kubectl proxy --api-prefix=/` - Go to `http://localhost:8001/swagger-ui` in your browser. 
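The same proxy also exposes the raw API, so you can fetch a resource and see the system-populated fields directly; a sketch (the namespace and resource are examples, and port 8001 is kubectl proxy's default):

```sh
# Start a local proxy to the apiserver, then retrieve resources to inspect the added fields
kubectl proxy --api-prefix=/ &
curl http://localhost:8001/api/v1/namespaces/default/pods
```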
diff --git a/examples/README.md b/examples/README.md index 4d9bf2afdb59f..ec115df92980c 100644 --- a/examples/README.md +++ b/examples/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Kubernetes Examples: releases.k8s.io/HEAD This directory contains a number of different examples of how to run diff --git a/examples/blog-logging/diagrams/README.md b/examples/blog-logging/diagrams/README.md index 374da3d27ae60..16946833276d5 100644 --- a/examples/blog-logging/diagrams/README.md +++ b/examples/blog-logging/diagrams/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Diagrams for Cloud Logging Blog Article diff --git a/examples/cassandra/README.md b/examples/cassandra/README.md index eb8e3d4c47133..16b8af4733286 100644 --- a/examples/cassandra/README.md +++ b/examples/cassandra/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Cloud Native Deployments of Cassandra using Kubernetes The following document describes the development of a _cloud native_ [Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. When we say _cloud native_ we mean an application which understands that it is running within a cluster manager, and uses this cluster management infrastructure to help implement the application. In particular, in this instance, a custom Cassandra ```SeedProvider``` is used to enable Cassandra to dynamically discover new Cassandra nodes as they join the cluster. @@ -37,14 +38,17 @@ The following document describes the development of a _cloud native_ [Cassandra] This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Replication Controllers_. ### Prerequisites + This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](../../docs/getting-started-guides/) for installation instructions for your platform. This example also has a few code and configuration files needed. To avoid typing these out, you can ```git clone``` the Kubernetes repository to your local computer. ### A note for the impatient + This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end. ### Simple Single Pod Cassandra Node + In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes. In this simple case, we define a single container running Cassandra for our pod: @@ -93,6 +97,7 @@ You may also note that we are setting some Cassandra parameters (```MAX_HEAP_SIZ In theory we could create a single Cassandra pod right now, but since `KubernetesSeedProvider` needs to learn what nodes are in the Cassandra deployment, we need to create a service first. ### Cassandra Service + In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of Pods in a Cassandra cluster can be a Kubernetes Service, or even just the single Pod we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set of Pods.
But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. This is the way that we initially use Services with Cassandra. Here is the service description: @@ -163,6 +168,7 @@ subsets: ``` ### Adding replicated nodes + Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, scalable Cassandra cluster. In Kubernetes a _[Replication Controller](../../docs/user-guide/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state. @@ -277,6 +283,7 @@ UN 10.244.3.3 51.28 KB 256 51.0% dafe3154-1d67-42e1-ac1d-78e ``` ### tl; dr; + For those of you who are impatient, here is the summary of the commands we ran in this tutorial. ```sh diff --git a/examples/celery-rabbitmq/README.md b/examples/celery-rabbitmq/README.md index 0e4ed4053b812..3f43ffcf25d72 100644 --- a/examples/celery-rabbitmq/README.md +++ b/examples/celery-rabbitmq/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Example: Distributed task queues with Celery, RabbitMQ and Flower ## Introduction diff --git a/examples/cluster-dns/README.md b/examples/cluster-dns/README.md index 006b96faff8c7..87fe640385369 100644 --- a/examples/cluster-dns/README.md +++ b/examples/cluster-dns/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Kubernetes DNS example This is a toy example demonstrating how to use kubernetes DNS. @@ -174,6 +175,7 @@ If you prefer not using namespace, then all your services can be addressed using ### tl; dr; + For those of you who are impatient, here is the summary of the commands we ran in this tutorial. Remember to first set `$CLUSTER_NAME` and `$USER_NAME` to the values found in `~/.kube/config`. ```sh diff --git a/examples/elasticsearch/README.md b/examples/elasticsearch/README.md index 6fc01e5486787..4382ace975976 100644 --- a/examples/elasticsearch/README.md +++ b/examples/elasticsearch/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Elasticsearch for Kubernetes This directory contains the source for a Docker image that creates an instance diff --git a/examples/explorer/README.md b/examples/explorer/README.md index 75b093d9af904..b3ccabd23d181 100644 --- a/examples/explorer/README.md +++ b/examples/explorer/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ### explorer Explorer is a little container for examining the runtime environment kubernetes produces for your pods. diff --git a/examples/glusterfs/README.md b/examples/glusterfs/README.md index e7dccba5b112e..9a908bc960d59 100644 --- a/examples/glusterfs/README.md +++ b/examples/glusterfs/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Glusterfs [Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers to use Glusterfs volumes. @@ -41,6 +42,7 @@ The example assumes that you have already set up a Glusterfs server cluster and Set up Glusterfs server cluster; install Glusterfs client package on the Kubernetes nodes.
([Guide](https://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-debian-wheezy-automatic-file-replication-mirror-across-two-storage-servers)) ### Create endpoints + Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json), ``` diff --git a/examples/guestbook-go/README.md b/examples/guestbook-go/README.md index a1283181b32db..b0fc49b75d3ae 100644 --- a/examples/guestbook-go/README.md +++ b/examples/guestbook-go/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Guestbook Example This example shows how to build a simple multi-tier web application using Kubernetes and Docker. The application consists of a web front-end, a Redis master for storage, and a replicated set of Redis slaves, for all of which we will create Kubernetes replication controllers, pods, and services. @@ -37,6 +38,7 @@ This example shows how to build a simple multi-tier web application using Kubern If you are running a cluster in Google Container Engine (GKE), instead see the [Guestbook Example for Google Container Engine](https://cloud.google.com/container-engine/docs/tutorials/guestbook). ##### Table of Contents + * [Step Zero: Prerequisites](#step-zero) * [Step One: Create the Redis master pod](#step-one) * [Step Two: Create the Redis master service](#step-two) @@ -99,6 +101,7 @@ Use the `examples/guestbook-go/redis-master-controller.json` file to create a [r Note: The initial `docker pull` can take a few minutes, depending on network conditions. ### Step Two: Create the Redis master service + A Kubernetes '[service](../../docs/user-guide/services.md)' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via environment variables or DNS. Services find the containers to load balance based on pod labels. The pod that you created in Step One has the labels `app=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service. @@ -123,6 +126,7 @@ Services find the containers to load balance based on pod labels. The pod that y ### Step Three: Create the Redis slave pods + The Redis master we created earlier is a single pod (REPLICAS = 1), while the Redis read slaves we are creating here are 'replicated' pods. In Kubernetes, a replication controller is responsible for managing the multiple instances of a replicated pod. 1. Use the file [redis-slave-controller.json](redis-slave-controller.json) to create the replication controller by running the `kubectl create -f` *`filename`* command: diff --git a/examples/guestbook-go/_src/README.md b/examples/guestbook-go/_src/README.md index 0665ddbd52024..e23b1b678f302 100644 --- a/examples/guestbook-go/_src/README.md +++ b/examples/guestbook-go/_src/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Building and releasing Guestbook Image This process involves building two Docker images: one compiles the source and the other hosts the compiled binaries. diff --git a/examples/guestbook/README.md b/examples/guestbook/README.md index 48d8b0c49d039..1031c89a85022 100644 --- a/examples/guestbook/README.md +++ b/examples/guestbook/README.md @@ -346,6 +346,7 @@ redis-slave name=redis-slave name=redis-sla ``` ### Step Five: Create the frontend replicated pods + A frontend pod is a simple PHP server that is configured to talk to either the slave or master services, depending on whether the client request is a read or a write.
It exposes a simple AJAX interface, and serves an Angular-based UX. @@ -504,6 +505,7 @@ redis-slave name=redis-slave name=redis-sla #### Accessing the guestbook site externally + You'll want to set up your guestbook service so that it can be accessed from outside of the internal Kubernetes network. Above, we introduced one way to do that, using the `type: LoadBalancer` spec. diff --git a/examples/hazelcast/README.md b/examples/hazelcast/README.md index afeed6c5af43e..52c707619e170 100644 --- a/examples/hazelcast/README.md +++ b/examples/hazelcast/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Cloud Native Deployments of Hazelcast using Kubernetes The following document describes the development of a _cloud native_ [Hazelcast](http://hazelcast.org/) deployment on Kubernetes. When we say _cloud native_ we mean an application which understands that it is running within a cluster manager, and uses this cluster management infrastructure to help implement the application. In particular, in this instance, a custom Hazelcast ```bootstrapper``` is used to enable Hazelcast to dynamically discover Hazelcast nodes that have already joined the cluster. @@ -39,9 +40,11 @@ Any topology changes are communicated and handled by Hazelcast nodes themselves. This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Replication Controllers_. ### Prerequisites + This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started](../../docs/getting-started-guides/) for installation instructions for your platform. ### A note for the impatient + This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end. ### Sources @@ -52,12 +55,14 @@ Source is freely available at: * Docker Trusted Build - https://quay.io/repository/pires/hazelcast-kubernetes ### Simple Single Pod Hazelcast Node + In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes. In this case, we shall not run a single Hazelcast pod, because the discovery mechanism now relies on a service definition. ### Adding a Hazelcast Service + In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods. Here is the service description: @@ -85,6 +90,7 @@ $ kubectl create -f examples/hazelcast/hazelcast-service.yaml ``` ### Adding replicated nodes + The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster. In Kubernetes a _[Replication Controller](../../docs/user-guide/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of it's set. 
Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with it's desired state. @@ -243,6 +249,7 @@ $ kubectl scale rc hazelcast --replicas=4 Examine the status again by checking the logs and you should see the 4 members connected. ### tl; dr; + For those of you who are impatient, here is the summary of the commands we ran in this tutorial. ```sh diff --git a/examples/https-nginx/README.md b/examples/https-nginx/README.md index d50fcd67b85b5..a32664de57093 100644 --- a/examples/https-nginx/README.md +++ b/examples/https-nginx/README.md @@ -30,12 +30,14 @@ Documentation for other releases can be found at + # Nginx https service This example creates a basic nginx https service useful in verifying proof of concept, keys, secrets, and end-to-end https service creation in kubernetes. It uses an [nginx server block](http://wiki.nginx.org/ServerBlockExample) to serve the index page over both http and https. ### Generate certificates + First generate a self signed rsa key and certificate that the server can use for TLS. ```shell diff --git a/examples/iscsi/README.md b/examples/iscsi/README.md index d628c635a69da..7d63edba7beba 100644 --- a/examples/iscsi/README.md +++ b/examples/iscsi/README.md @@ -30,7 +30,9 @@ Documentation for other releases can be found at + ## Step 1. Setting up iSCSI target and iSCSI initiator + **Setup A.** On Fedora 21 nodes If you use Fedora 21 on Kubernetes node, then first install iSCSI initiator on the node: @@ -46,7 +48,8 @@ I mostly followed these [instructions](http://www.server-world.info/en/note?os=F GCE does not provide preconfigured Fedora 21 image, so I set up the iSCSI target on a preconfigured Ubuntu 12.04 image, mostly following these [instructions](http://www.server-world.info/en/note?os=Ubuntu_12.04&p=iscsi). My Kubernetes cluster on GCE was running Debian 7 images, so I followed these [instructions](http://www.server-world.info/en/note?os=Debian_7.0&p=iscsi&f=2) to set up the iSCSI initiator. -##Step 2. Creating the pod with iSCSI persistent storage +## Step 2. Creating the pod with iSCSI persistent storage + Once you have installed iSCSI initiator and new Kubernetes, you can create a pod based on my example *iscsi.json*. In the pod JSON, you need to provide *targetPortal* (the iSCSI target's **IP** address and *port* if not the default port 3260), target's *iqn*, *lun*, and the type of the filesystem that has been created on the lun, and *readOnly* boolean. **Note:** If you have followed the instructions in the links above you diff --git a/examples/k8petstore/README.md b/examples/k8petstore/README.md index 0f7262ea9b984..72754a9daaa20 100644 --- a/examples/k8petstore/README.md +++ b/examples/k8petstore/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Welcome to k8PetStore This is a follow up to the [Guestbook Example](../guestbook/README.md)'s [Go implementation](../guestbook-go/). diff --git a/examples/k8petstore/bps-data-generator/README.md b/examples/k8petstore/bps-data-generator/README.md index 5b6e5fe58f0f6..6e47189d25520 100644 --- a/examples/k8petstore/bps-data-generator/README.md +++ b/examples/k8petstore/bps-data-generator/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # How to generate the bps-data-generator container # This container is maintained as part of the apache bigtop project. 
diff --git a/examples/mysql-wordpress-pd/README.md b/examples/mysql-wordpress-pd/README.md index 632e6cd65a953..5420f8adfab2e 100644 --- a/examples/mysql-wordpress-pd/README.md +++ b/examples/mysql-wordpress-pd/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Persistent Installation of MySQL and WordPress on Kubernetes This example describes how to run a persistent installation of [Wordpress](https://wordpress.org/) using the [volumes](../../docs/user-guide/volumes.md) feature of Kubernetes, and [Google Compute Engine](https://cloud.google.com/compute/docs/disks) [persistent disks](../../docs/user-guide/volumes.md#gcepersistentdisk). diff --git a/examples/nfs/README.md b/examples/nfs/README.md index dd3a5fa958e92..343b58c0c6935 100644 --- a/examples/nfs/README.md +++ b/examples/nfs/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Example of NFS volume See [nfs-web-pod.yaml](nfs-web-pod.yaml) for a quick example of how to use an NFS volume @@ -40,7 +41,8 @@ in a pod. The example below shows how to export an NFS share from a pod and import it into another one. -###Prerequisites +### Prerequisites + The nfs server pod creates a privileged container, so if you are using a Salt-based KUBERNETES_PROVIDER (**gce**, **vagrant**, **aws**), you have to enable the ability to create privileged containers by API. ```shell diff --git a/examples/nfs/exporter/README.md b/examples/nfs/exporter/README.md index f7ef0d898396e..284ea27e1dca1 100644 --- a/examples/nfs/exporter/README.md +++ b/examples/nfs/exporter/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # NFS-exporter container Inspired by https://github.com/cpuguy83/docker-nfs-server. Rewritten for diff --git a/examples/nfs/nfs-data/README.md b/examples/nfs/nfs-data/README.md index 654194e2c5e12..ac49dc56f75e7 100644 --- a/examples/nfs/nfs-data/README.md +++ b/examples/nfs/nfs-data/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # NFS-exporter container with a file This container exports /mnt/data with index.html in it via NFSv4. Based on diff --git a/examples/openshift-origin/README.md b/examples/openshift-origin/README.md index d0cbf6558a9c6..23bfabaa61a71 100644 --- a/examples/openshift-origin/README.md +++ b/examples/openshift-origin/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## OpenShift Origin example This example shows how to run OpenShift Origin as a pod on an existing Kubernetes cluster. diff --git a/examples/phabricator/README.md b/examples/phabricator/README.md index 0e1ef40a6c75b..8ca60330a2705 100644 --- a/examples/phabricator/README.md +++ b/examples/phabricator/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Phabricator example This example shows how to build a simple multi-tier web application using Kubernetes and Docker. diff --git a/examples/rbd/README.md b/examples/rbd/README.md index 14590972898e7..17cbe6fdff16b 100644 --- a/examples/rbd/README.md +++ b/examples/rbd/README.md @@ -30,7 +30,9 @@ Documentation for other releases can be found at + # How to Use it? + Install Ceph on the Kubernetes host.
For example, on Fedora 21 # yum -y install ceph diff --git a/examples/redis/README.md b/examples/redis/README.md index d837c01d9ea99..26087365e4326 100644 --- a/examples/redis/README.md +++ b/examples/redis/README.md @@ -30,17 +30,21 @@ Documentation for other releases can be found at + ## Reliable, Scalable Redis on Kubernetes The following document describes the deployment of a reliable, multi-node Redis on Kubernetes. It deploys a master with replicated slaves, as well as replicated redis sentinels which are used for health checking and failover. ### Prerequisites + This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](../../docs/getting-started-guides/) for installation instructions for your platform. ### A note for the impatient + This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end. ### Turning up an initial master/sentinel pod. + A [_Pod_](../../docs/user-guide/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes. We will use the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```. @@ -55,6 +59,7 @@ kubectl create -f examples/redis/redis-master.yaml ``` ### Turning up a sentinel service + In Kubernetes a [_Service_](../../docs/user-guide/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Redis cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. In Redis, we will use a Kubernetes Service to provide discoverable endpoints for the Redis sentinels in the cluster. From the sentinels, Redis clients can find the master, and then the slaves and other relevant info for the cluster. This enables new members to join the cluster when failures occur. @@ -68,6 +73,7 @@ kubectl create -f examples/redis/redis-sentinel-service.yaml ``` ### Turning up replicated redis servers + So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g. a machine dying), our Redis service goes away with it. In Kubernetes a [_Replication Controller_](../../docs/user-guide/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state. @@ -91,6 +97,7 @@ kubectl create -f examples/redis/redis-sentinel-controller.yaml ``` ### Scale our replicated pods + Initially creating those pods didn't actually do anything, since we only asked for one sentinel and one redis server, and they already existed; nothing changed.
Now we will add more replicas: ```sh @@ -106,6 +113,7 @@ This will create two additional replicas of the redis server and two additional Unlike our original redis-master pod, these pods exist independently, and they use the ```redis-sentinel-service``` that we defined above to discover and join the cluster. ### Delete our manual pod + The final step in the cluster turn-up is to delete the original redis-master pod that we created manually. While it was useful for bootstrapping discovery in the cluster, we really don't want the lifespan of our sentinel to be tied to the lifespan of one of our redis servers, and now that we have a successful, replicated redis sentinel service up and running, the binding is unnecessary. Delete the master as follows: @@ -121,9 +129,11 @@ Now let's take a close look at what happens after this pod is deleted. There ar 3. The redis sentinels themselves realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master. ### Conclusion + At this point we have a reliable, scalable Redis installation. By scaling the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master. ### tl; dr + For those of you who are impatient, here is the summary of commands we ran in this tutorial: ``` diff --git a/examples/simple-nginx.md b/examples/simple-nginx.md index 7d775584e3299..6ba1952b9715a 100644 --- a/examples/simple-nginx.md +++ b/examples/simple-nginx.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + ## Running your first containers in Kubernetes Ok, you've run one of the [getting started guides](../docs/getting-started-guides/) and you have @@ -65,6 +66,7 @@ kubectl stop rc my-nginx ``` ### Exposing your pods to the internet. + On some platforms (for example Google Compute Engine) the kubectl command can integrate with your cloud provider to add a [public IP address](../docs/user-guide/services.md#external-services) for the pods. To do this, run: @@ -81,6 +83,7 @@ kubectl get services In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a firewall to allow traffic on port 80. ### Next: Configuration files + Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](../docs/user-guide/simple-yaml.md) is given in a different document. diff --git a/examples/spark/README.md b/examples/spark/README.md index 62208701d5cea..74ac2844a8931 100644 --- a/examples/spark/README.md +++ b/examples/spark/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Spark example Following this example, you will create a functional [Apache diff --git a/examples/storm/README.md b/examples/storm/README.md index bcad599093fc4..1a20a61c921ef 100644 --- a/examples/storm/README.md +++ b/examples/storm/README.md @@ -30,6 +30,7 @@ Documentation for other releases can be found at + # Storm example Following this example, you will create a functional [Apache