Merge pull request kubernetes#10935 from caesarxuchao/kubectl-output-in-examples

update kubectl output in examples
rjnagal committed Jul 10, 2015
2 parents 3c71eb6 + f8047aa commit 4923bb2
Showing 5 changed files with 62 additions and 72 deletions.
examples/persistent-volumes/README.md (20 additions, 32 deletions)
````diff
@@ -20,21 +20,17 @@ support local storage on the host at this time. There is no guarantee your pod
 ```
 // this will be nginx's webroot
-mkdir /tmp/data01
-echo 'I love Kubernetes storage!' > /tmp/data01/index.html
+$ mkdir /tmp/data01
+$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
 ```
 
 PVs are created by posting them to the API server.
 
 ```
-kubectl create -f examples/persistent-volumes/volumes/local-01.yaml
-kubectl get pv
-NAME     LABELS   CAPACITY      ACCESSMODES   STATUS      CLAIM
-pv0001   map[]    10737418240   RWO           Available
+$ kubectl create -f examples/persistent-volumes/volumes/local-01.yaml
+NAME     LABELS       CAPACITY      ACCESSMODES   STATUS      CLAIM   REASON
+pv0001   type=local   10737418240   RWO           Available
 ```
 
 ## Requesting storage
````
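For readers following along: local-01.yaml itself is not part of this diff, but the output above pins most of it down (a volume named pv0001, labeled type=local, 10737418240 bytes = 10Gi, access mode RWO, backed by the host directory created in the first snippet). A minimal sketch, reconstructed from that output rather than copied from the repository:

```yaml
# Sketch of examples/persistent-volumes/volumes/local-01.yaml, inferred
# from the kubectl output in this diff -- not copied from the repo.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local        # matches the LABELS column above
spec:
  capacity:
    storage: 10Gi      # 10737418240 bytes, the CAPACITY column above
  accessModes:
    - ReadWriteOnce    # shown as RWO
  hostPath:
    path: /tmp/data01  # the directory created in the first snippet
```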
Expand All @@ -46,47 +42,39 @@ Claims must be created in the same namespace as the pods that use them.

```
kubectl create -f examples/persistent-volumes/claims/claim-01.yaml
kubectl get pvc
$ kubectl create -f examples/persistent-volumes/claims/claim-01.yaml
$ kubectl get pvc
NAME LABELS STATUS VOLUME
myclaim-1 map[]
# A background process will attempt to match this claim to a volume.
# The eventual state of your claim will look something like this:
kubectl get pvc
NAME LABELS STATUS VOLUME
myclaim-1 map[] Bound f5c3a89a-e50a-11e4-972f-80e6500a981e
kubectl get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM
pv0001 map[] 10737418240 RWO Bound myclaim-1 / 6bef4c40-e50b-11e4-972f-80e6500a981e
$ kubectl get pvc
NAME LABELS STATUS VOLUME
myclaim-1 map[] Bound pv0001
$ kubectl get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Bound default/myclaim-1
```

## Using your claim as a volume

Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.

```
$ kubectl create -f examples/persistent-volumes/simpletest/pod.yaml
kubectl create -f examples/persistent-volumes/simpletest/pod.yaml
kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED
mypod 172.17.0.2 myfrontend nginx 127.0.0.1/127.0.0.1 <none> Running 12 minutes
kubectl create -f examples/persistent-volumes/simpletest/service.json
kubectl get services
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 1h
NAME LABELS SELECTOR IP PORT(S)
$ kubectl create -f examples/persistent-volumes/simpletest/service.json
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
frontendservice <none> name=frontendhttp 10.0.0.241 3000/TCP
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.2 443/TCP
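Similarly, claim-01.yaml is referenced above but not shown in this diff. A minimal claim matching the myclaim-1 output would look roughly like the sketch below; the storage request size is illustrative, not taken from the repository:

```yaml
# Sketch of examples/persistent-volumes/claims/claim-01.yaml;
# the request size below is illustrative.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi  # any request <= the 10Gi volume above can bind to it
```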
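And simpletest/pod.yaml wires the claim into a pod as a volume, which is what "Using your claim as a volume" demonstrates. A sketch consistent with the mypod/myfrontend output above; the volume name, mount path, and label are illustrative (the label is inferred from the frontendservice selector):

```yaml
# Sketch of examples/persistent-volumes/simpletest/pod.yaml; volume
# name, mount path, and label are illustrative.
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    name: frontendhttp  # inferred from the service selector above
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - name: mypd
          mountPath: /usr/share/nginx/html  # nginx's webroot, per the intro
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1  # the claim bound in the previous step
```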
examples/phabricator/README.md (3 additions, 3 deletions)
````diff
@@ -78,8 +78,8 @@ kubectl get pods
 You'll see a single phabricator pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds):
 
 ```
-POD                            IP            CONTAINER(S)   IMAGE(S)                   HOST                                                        LABELS             STATUS
-phabricator-controller-02qp4   10.244.1.34   phabricator    fgrzadkowski/phabricator   kubernetes-minion-2.c.myproject.internal/130.211.141.151   name=phabricator
+NAME                           READY     STATUS    RESTARTS   AGE
+phabricator-controller-9vy68   1/1       Running   0          1m
 ```
 
 If you ssh to that machine, you can run `docker ps` to see the actual pod:
````
````diff
@@ -203,7 +203,7 @@ phabricator
 To play with the service itself, find the external IP of the load balancer:
 
 ```shell
-$ kubectl get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
+$ kubectl get services phabricator -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}{{"\n"}}'
 ```
 
 and then visit port 80 of that IP address.
````
examples/resourcequota/README.md (33 additions, 31 deletions)
````diff
@@ -42,17 +42,18 @@ namespace.
 
 ```
 $ kubectl describe quota quota --namespace=quota-example
-Name:                   quota
-Resource                Used    Hard
---------                ----    ----
-cpu                     0m      20
-memory                  0m      1Gi
-persistentvolumeclaims  0m      10
-pods                    0m      10
-replicationcontrollers  0m      20
-resourcequotas          1       1
-secrets                 1       10
-services                0m      5
+Name:                   quota
+Namespace:              quota-example
+Resource                Used    Hard
+--------                ----    ----
+cpu                     0       20
+memory                  0       1Gi
+persistentvolumeclaims  0       10
+pods                    0       10
+replicationcontrollers  0       20
+resourcequotas          1       1
+secrets                 1       10
+services                0       5
 ```
 
 Step 3: Applying default resource limits
````
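The quota object being described here was created from a quota.yaml earlier in the README (not shown in this diff), but the Hard column pins down its spec. Roughly:

```yaml
# Sketch of the quota.yaml behind this output, reconstructed from
# the Hard column above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"
    memory: 1Gi
    persistentvolumeclaims: "10"
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    secrets: "10"
    services: "5"
```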
````diff
@@ -74,7 +75,7 @@ Now let's look at the pods that were created.
 
 ```shell
 $ kubectl get pods --namespace=quota-example
-POD       IP        CONTAINER(S)   IMAGE(S)   HOST      LABELS    STATUS    CREATED   MESSAGE
+NAME      READY     STATUS    RESTARTS   AGE
 ```
 
 What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
````
````diff
@@ -101,11 +102,12 @@ So let's set some default limits for the amount of cpu and memory a pod can cons
 $ kubectl create -f limits.yaml --namespace=quota-example
 limitranges/limits
 $ kubectl describe limits limits --namespace=quota-example
-Name:           limits
-Type            Resource        Min     Max     Default
-----            --------        ---     ---     ---
-Container       cpu             -       -       100m
-Container       memory          -       -       512Mi
+Name:           limits
+Namespace:      quota-example
+Type            Resource        Min     Max     Default
+----            --------        ---     ---     ---
+Container       memory          -       -       512Mi
+Container       cpu             -       -       100m
 ```
 
 Now any time a pod is created in this namespace, if it has not specified any resource limits, the default
````
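Likewise, limits.yaml is not shown in this diff, but its spec follows from the Default column in the describe output. Roughly:

```yaml
# Sketch of limits.yaml, reconstructed from the Default column above.
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
    - type: Container
      default:
        memory: 512Mi
        cpu: 100m
```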
````diff
@@ -116,26 +118,26 @@ create its pods.
 
 ```shell
 $ kubectl get pods --namespace=quota-example
-POD           IP         CONTAINER(S)   IMAGE(S)   HOST                    LABELS      STATUS    CREATED     MESSAGE
-nginx-t40zm   10.0.0.2                             10.245.1.3/10.245.1.3   run=nginx   Running   2 minutes
-              nginx      nginx                                                         Running   2 minutes
+NAME          READY     STATUS    RESTARTS   AGE
+nginx-t9cap   1/1       Running   0          49s
 ```
 
 And if we print out our quota usage in the namespace:
 
 ```shell
 kubectl describe quota quota --namespace=quota-example
-Name:                   quota
-Resource                Used            Hard
---------                ----            ----
-cpu                     100m            20
-memory                  536870912       1Gi
-persistentvolumeclaims  0m              10
-pods                    1               10
-replicationcontrollers  1               20
-resourcequotas          1               1
-secrets                 1               10
-services                0m              5
+Name:                   quota
+Namespace:              default
+Resource                Used            Hard
+--------                ----            ----
+cpu                     100m            20
+memory                  536870912       1Gi
+persistentvolumeclaims  0               10
+pods                    1               10
+replicationcontrollers  1               20
+resourcequotas          1               1
+secrets                 1               10
+services                0               5
 ```
 
 You can now see the pod that was created is consuming explicit amounts of resources, and the usage is being
````
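One detail worth noting in the usage above: memory is accounted in bytes, and 536870912 = 512 × 1024², i.e. the 512Mi container default from the LimitRange, counted once for the single running pod.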
examples/spark/README.md (2 additions, 2 deletions)
````diff
@@ -46,7 +46,7 @@ $ kubectl create -f examples/spark/spark-master-service.json
 
 ```shell
 $ kubectl get pods
-NAME           READY     REASON    RESTARTS   AGE
+NAME           READY     STATUS    RESTARTS   AGE
 [...]
 spark-master   1/1       Running   0          25s
 
````
````diff
@@ -97,7 +97,7 @@ $ kubectl create -f examples/spark/spark-worker-controller.json
 
 ```shell
 $ kubectl get pods
-NAME                            READY     REASON    RESTARTS   AGE
+NAME                            READY     STATUS    RESTARTS   AGE
 [...]
 spark-master                    1/1       Running   0          14m
 spark-worker-controller-hifwi   1/1       Running   0          33s
````
examples/storm/README.md (4 additions, 4 deletions)
````diff
@@ -52,15 +52,15 @@ before proceeding.
 
 ```shell
 $ kubectl get pods
-POD         IP             CONTAINER(S)   IMAGE(S)          HOST                        LABELS           STATUS
-zookeeper   192.168.86.4   zookeeper      mattf/zookeeper   172.18.145.8/172.18.145.8   name=zookeeper   Running
+NAME        READY     STATUS    RESTARTS   AGE
+zookeeper   1/1       Running   0          43s
 ```
 
 ### Check to see if ZooKeeper is accessible
 
 ```shell
 $ kubectl get services
-NAME        LABELS                                    SELECTOR         IP               PORT
+NAME        LABELS                                    SELECTOR         IP(S)            PORT(S)
 kubernetes  component=apiserver,provider=kubernetes   <none>           10.254.0.2       443
 zookeeper   name=zookeeper                            name=zookeeper   10.254.139.141   2181
 
````
````diff
@@ -94,7 +94,7 @@ Ensure that the Nimbus service is running and functional.
 
 ```shell
 $ kubectl get services
-NAME        LABELS                                    SELECTOR         IP               PORT
+NAME        LABELS                                    SELECTOR         IP(S)            PORT(S)
 kubernetes  component=apiserver,provider=kubernetes   <none>           10.254.0.2       443
 zookeeper   name=zookeeper                            name=zookeeper   10.254.139.141   2181
 nimbus      name=nimbus                               name=nimbus      10.254.115.208   6627
````