Commit 8251c73: 更新kubedns文档 (Update kubedns docs)
jmgao1983 committed Dec 8, 2017
1 parent 2d26c01 commit 8251c73
Showing 5 changed files with 255 additions and 6 deletions.
2 changes: 1 addition & 1 deletion 90.setup.yml
@@ -1,7 +1,7 @@
# Generate the CA-related certificates on the deploy node, for use by the entire cluster
- hosts: deploy
roles:
- ca
- deploy

# Common configuration tasks for cluster nodes
- hosts:
2 changes: 1 addition & 1 deletion docs/05-安装calico网络组件.md
@@ -162,7 +162,7 @@ spec:

### Verify the calico network

After the calico install `ansible-playbook 05.calico.yml` succeeds, it can be verified as follows: you need to wait a moment
After the calico install `ansible-playbook 05.calico.yml` succeeds, it can be verified as follows: (wait for the calico/node:v2.6.2 image download to finish; even if docker registry mirror acceleration was configured in the previous step, the pull can still be slow at times, so it is recommended to confirm the containers below are running before executing the subsequent steps)

``` bash
docker ps
```
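If `docker ps` shows nothing calico-related yet, the image is most likely still downloading. A quick way to check, sketched here and not part of the playbook itself:

``` bash
# the calico/node image must be present locally before the containers can start
docker images | grep calico
```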
53 changes: 52 additions & 1 deletion docs/guide/kubedns.md
@@ -6,7 +6,58 @@ kubedns is the first add-on a k8s cluster needs to deploy; other pods in the cluster use it

### Installation

kubectl create -f /etc/ansible/manifests/kubedns/[kubedns.yaml](../../manifests/kubedns/kubedns.yaml)
**kubectl create -f /etc/ansible/manifests/kubedns/[kubedns.yaml](../../manifests/kubedns/kubedns.yaml)**
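
The kube-dns pod can take a little while to pull its three images; a minimal way to watch for it, assuming the label and namespace defined in the manifest below:

``` bash
# wait until the pod reports READY 3/3 (kubedns, dnsmasq, sidecar), then Ctrl-C
kubectl get pods -n kube-system -l k8s-app=kube-dns -w
```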

+ Note the serviceAccount `kube-dns` used in the Deployment: the predefined ClusterRoleBinding system:kube-dns binds the kube-dns ServiceAccount in the kube-system namespace to the system:kube-dns ClusterRole, so the pods have permission to access the DNS-related APIs of the kube-apiserver;
+ Cluster pods inherit the node's DNS resolution by default; changing the kubelet startup parameter `--resolv-conf=""` alters this behavior, see the kubelet startup parameters for details (both points can be checked with the commands sketched below)
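
The commands below are only a sketch for checking those two points from a host with kubectl access (the systemd unit name for kubelet is an assumption and may differ in your setup):

``` bash
# show the predefined RBAC objects that grant kube-dns its API access
kubectl get clusterrolebinding system:kube-dns -o yaml
kubectl describe clusterrole system:kube-dns

# check which resolv.conf the kubelet was started with (unit name may vary)
systemctl cat kubelet | grep -i resolv-conf
```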

### Verify kubedns

Create a test nginx service

`kubectl run nginx --image=nginx --expose --port=80`

Confirm the nginx service

``` bash
kubectl get pod|grep nginx
nginx-7cbc4b4d9c-fl46v 1/1 Running 0 1m
kubectl get svc|grep nginx
nginx ClusterIP 10.68.33.167 <none> 80/TCP 1m
```

Test with a busybox pod

``` bash
kubectl run busybox --rm -it --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 10.68.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local.
options ndots:5
# test in-cluster service resolution
/ # nslookup nginx
Server: 10.68.0.2
Address 1: 10.68.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.68.33.167 nginx.default.svc.cluster.local
/ # nslookup kubernetes
Server: 10.68.0.2
Address 1: 10.68.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.68.0.1 kubernetes.default.svc.cluster.local
# test external domain resolution; by default it inherits the node's DNS resolution
/ # nslookup www.baidu.com
Server: 10.68.0.2
Address 1: 10.68.0.2 kube-dns.kube-system.svc.cluster.local
Name: www.baidu.com
Address 1: 180.97.33.108
Address 2: 180.97.33.107
/ #
```
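
If any of the lookups above fail, two commands that may help narrow things down (again only a sketch; the Service name and namespace match the manifest below):

``` bash
# the kube-dns Service should list pod endpoints; the events from describe help if it does not
kubectl get ep kube-dns -n kube-system
kubectl describe svc kube-dns -n kube-system
```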
198 changes: 198 additions & 0 deletions manifests/kubedns/kubedns.yaml
@@ -0,0 +1,198 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists

---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-dns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile

---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.68.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. So that the Addon Manager does not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true
containers:
- name: kubedns
#image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
image: mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.5
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
#image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
image: mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --log-facility=-
- --server=/cluster.local./127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
# see: https://github.com/kubernetes/kubernetes/issues/29055 for details
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
#image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
image: mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.5
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
serviceAccountName: kube-dns
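
One point worth checking after the manifest is applied: the Service clusterIP (10.68.0.2 above) and the `--domain` argument must agree with the kubelet flags `--cluster-dns` and `--cluster-domain`, otherwise pods receive a resolv.conf pointing at the wrong nameserver. A rough check, assuming shell access to a node where kubelet runs as a host process:

``` bash
# compare the deployed DNS Service IP/domain with what the kubelet hands out to pods
kubectl get svc kube-dns -n kube-system
ps -ef | grep -v grep | grep kubelet    # look for --cluster-dns=10.68.0.2 and --cluster-domain=cluster.local
```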
6 changes: 3 additions & 3 deletions manifests/kubedns/readme.md
@@ -1,6 +1,6 @@
### Notes

+ This directory holds the configuration for the k8s cluster add-on kube-dns; initially this directory is empty
+ This directory holds the configuration for the k8s cluster add-on kube-dns
+ Because the parameters in kubedns.yaml (CLUSTER_DNS_SVC_IP, CLUSTER_DNS_DOMAIN) depend on the settings in the hosts file, the file needs to be generated by the ansible template module after substituting those parameters
+ After running `ansible-playbook 01.prepare.yml`, the kubedns.yaml file is generated in this directory
+ kubedns.yaml [template file](../../roles/deploy/template/kubedns.yaml.j2)
+ Running `ansible-playbook 01.prepare.yml` regenerates the kubedns.yaml file in this directory (a quick check of the generated file is sketched below)
+ kubedns.yaml [template file](../../roles/deploy/templates/kubedns.yaml.j2)
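
A minimal sanity check after `ansible-playbook 01.prepare.yml` has run, assuming the default /etc/ansible layout used elsewhere in these docs:

``` bash
# the templated values (DNS service IP and cluster domain) should appear in the generated manifest
grep -E "clusterIP|--domain=" /etc/ansible/manifests/kubedns/kubedns.yaml
```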
