Merge pull request kubernetes#11625 from timstclair/docs
Cleanup docs.
krousey committed Jul 20, 2015
2 parents 220cc6a + ca32e65 commit 33b4a0e
Showing 3 changed files with 13 additions and 13 deletions.
docs/user-guide/services-firewalls.md (1 addition & 1 deletion)
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

# Services and Firewalls

-Many cloud providers (e.g. Google Compute Engine) define firewalls that help keep prevent inadvertent
+Many cloud providers (e.g. Google Compute Engine) define firewalls that help prevent inadvertent
exposure to the internet. When exposing a service to the external world, you may need to open up
one or more ports in these firewalls to serve traffic. This document describes this process, as
well as any provider specific details that may be necessary.
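
As a hedged illustration of what this page describes (not part of the diff above), a service exposed to the external world might look roughly like the sketch below; the names, labels, and ports are assumptions, and on a provider such as Google Compute Engine the firewall may still need a rule admitting the exposed port.

```yaml
# Illustrative sketch only: names, labels, and ports are assumed, not taken from the commit.
apiVersion: v1
kind: Service
metadata:
  name: my-frontend            # assumed name
spec:
  type: LoadBalancer           # ask the cloud provider for an externally reachable endpoint
  selector:
    app: my-frontend           # must match the labels on the pods backing the service
  ports:
  - port: 80                   # externally exposed port; the provider firewall may need to allow it
    targetPort: 8080           # port the container actually listens on
```

The point of the doc is that creating the service alone may not be enough; the provider's firewall may also need to admit traffic on the exposed port.
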
docs/user-guide/walkthrough/README.md (3 additions & 3 deletions)
@@ -124,7 +124,7 @@ That's great for a simple static web server, but what about persistent storage?

The container file system only lives as long as the container does. So if your app's state needs to survive relocation, reboots, and crashes, you'll need to configure some persistent storage.

-For this example, we'll be creating a Redis pod, with a named volume and volume mount that defines the path to mount the volume.
+For this example we'll be creating a Redis pod with a named volume and volume mount that defines the path to mount the volume.

1. Define a volume:

@@ -134,7 +134,7 @@ For this example, we'll be creating a Redis pod, with a named volume and volume
emptyDir: {}
```
-1. Define a volume mount within a container definition:
+2. Define a volume mount within a container definition:
```yaml
volumeMounts:
@@ -170,7 +170,7 @@ Notes:
##### Volume Types
- **EmptyDir**: Creates a new directory that will persist across container failures and restarts.
-- **HostPath**: Mounts an existing directory on the minion's file system (e.g. `/var/logs`).
+- **HostPath**: Mounts an existing directory on the node's file system (e.g. `/var/logs`).

See [volumes](../../../docs/user-guide/volumes.md) for more details.
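
Read together, the volume and volume-mount fragments above might combine into a single pod manifest along these lines; this is a sketch under assumed names and paths, not a file from the repository.

```yaml
# Sketch only: shows how a named emptyDir volume and its volumeMounts entry fit together.
# The pod name, volume name, and mount path are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-storage      # must match a volume name declared below
      mountPath: /data/redis   # where the volume appears inside the container
  volumes:
  - name: redis-storage
    emptyDir: {}               # survives container restarts; removed when the pod is deleted
```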

docs/user-guide/walkthrough/k8s201.md (9 additions & 9 deletions)
@@ -223,9 +223,9 @@ For more information, see [Services](../services.md).
When I write code it never crashes, right? Sadly the [Kubernetes issues list](https://github.com/GoogleCloudPlatform/kubernetes/issues) indicates otherwise...

Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
-and repair of your application. That way, a system, outside of your application itself, is responsible for monitoring the
-application and taking action to fix it. It's important that the system be outside of the application, since of course, if
-your application fails, and the health checking agent is part of your application, it may fail as well, and you'll never know.
+and repair of your application. That way a system outside of your application itself is responsible for monitoring the
+application and taking action to fix it. It's important that the system be outside of the application, since if
+your application fails and the health checking agent is part of your application, it may fail as well and you'll never know.
In Kubernetes, the health check monitor is the Kubelet agent.

#### Process Health Checking
@@ -237,7 +237,7 @@ Kubernetes.

#### Application Health Checking

-However, in many cases, this low-level health checking is insufficient. Consider for example, the following code:
+However, in many cases this low-level health checking is insufficient. Consider, for example, the following code:

```go
lockOne := sync.Mutex{}
@@ -253,21 +253,21 @@ lockTwo.Lock();
lockOne.Lock();
```

-This is a classic example of a problem in computer science known as "Deadlock". From Docker's perspective your application is
-still operating, the process is still running, but from your application's perspective, your code is locked up, and will never respond correctly.
+This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective your application is
+still operating and the process is still running, but from your application's perspective your code is locked up and will never respond correctly.

To address this problem, Kubernetes supports user implemented application health-checks. These checks are performed by the
Kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide.

Currently, there are three types of application health checks that you can choose from:

-* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise. See health check examples [here](../liveness/).
+* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise. See health check examples [here](../liveness/).
* Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0 it will be considered a success. See health check examples [here](../liveness/).
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure.

-In all cases, if the Kubelet discovers a failure, the container is restarted.
+In all cases, if the Kubelet discovers a failure the container is restarted.

The container health checks are configured in the "LivenessProbe" section of your container config. There you can also specify an "initialDelaySeconds" that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
The container health checks are configured in the `livenessProbe` section of your container config. There you can also specify an `initialDelaySeconds` that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.

Here is an example config for a pod with an HTTP health check ([pod-with-http-healthcheck.yaml](pod-with-http-healthcheck.yaml)):
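
The referenced file is collapsed in this diff; as a rough sketch of what an HTTP liveness probe configured as described above might look like (pod name, image, path, port, and delay are assumptions):

```yaml
# Hypothetical sketch of an HTTP liveness probe; not the contents of the collapsed file.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http          # assumed name
spec:
  containers:
  - name: app
    image: nginx               # assumed image
    livenessProbe:
      httpGet:
        path: /healthz         # endpoint the Kubelet polls; a 200-399 response counts as healthy
        port: 80
      initialDelaySeconds: 15  # grace period after container start before the first check
```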

