update doc
huweihuang committed Jul 4, 2024
1 parent 1ba97df commit 88a4587
Showing 4 changed files with 54 additions and 20 deletions.
2 changes: 1 addition & 1 deletion operation/node/_index.md
@@ -1,4 +1,4 @@
---
title: "Node Scheduling"
title: "Node Migration"
weight: 3
---
51 changes: 32 additions & 19 deletions operation/node/safely-drain-node.md
@@ -31,7 +31,11 @@ kubectl uncordon <NodeName>
## 1.2. Run the kubectl drain command

```bash
# Evict all pods on the node
kubectl drain <NodeName> --force --ignore-daemonsets

# Evict only the pods on the node that match the label selector
kubectl drain <NodeName> --ignore-daemonsets --pod-selector=pod-template-hash=88964949c
```
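A typical maintenance flow wrapping these commands can be sketched as follows; the node name is hypothetical, and `--delete-emptydir-data` is only needed when pods on the node use emptyDir volumes:

```bash
NODE=node-1   # hypothetical node name

# Stop new pods from being scheduled onto the node
kubectl cordon "$NODE"

# Evict the existing pods (DaemonSet pods are left in place)
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# ... perform the maintenance work ...

# Make the node schedulable again
kubectl uncordon "$NODE"
```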

Example:
@@ -99,40 +103,49 @@ error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calicoopsmoni
$ kubectl drain --help
Drain node in preparation for maintenance.

The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer
supports eviction (http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the
pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If
there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete
any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores
unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController,
ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force. --force will
also allow deletion to proceed if the managing resource of one or more pods is missing.
The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the API
server supports https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ . Otherwise, it will use normal DELETE
to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API
server). If there are daemon set-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it
will not delete any daemon set-managed pods, because those pods would be immediately replaced by the daemon set
controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by a
replication controller, replica set, daemon set, stateful set, or job, then drain will not delete any pods unless you
use --force. --force will also allow deletion to proceed if the managing resource of one or more pods is missing.

'drain' waits for graceful termination. You should not operate on the machine until the command completes.
'drain' waits for graceful termination. You should not operate on the machine until the command completes.

When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.
When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.

! http://kubernetes.io/images/docs/kubectl_drain.svg
https://kubernetes.io/images/docs/kubectl_drain.svg

Examples:
# Drain node "foo", even if there are pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or
StatefulSet on it.
$ kubectl drain foo --force
# Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set or
stateful set on it
kubectl drain foo --force

# As above, but abort if there are pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or
StatefulSet, and use a grace period of 15 minutes.
$ kubectl drain foo --grace-period=900
# As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or
stateful set, and use a grace period of 15 minutes
kubectl drain foo --grace-period=900

Options:
--delete-local-data=false: Continue even if there are pods using emptyDir (local data that will be deleted when
--chunk-size=500: Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and
may change in the future.
--delete-emptydir-data=false: Continue even if there are pods using emptyDir (local data that will be deleted when
the node is drained).
--dry-run=false: If true, only print the object that would be sent, without sending it.
--disable-eviction=false: Force drain to use delete, even if eviction is supported. This will bypass checking
PodDisruptionBudgets, use with caution.
--dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be
sent, without sending it. If server strategy, submit server-side request without persisting the resource.
--force=false: Continue even if there are pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet
or StatefulSet.
--grace-period=-1: Period of time in seconds given to each pod to terminate gracefully. If negative, the default
value specified in the pod will be used.
--ignore-daemonsets=false: Ignore DaemonSet-managed pods.
--ignore-errors=false: Ignore errors occurred between drain nodes in group.
--pod-selector='': Label selector to filter pods on the node
-l, --selector='': Selector (label query) to filter on
--skip-wait-for-delete-timeout=0: If pod DeletionTimestamp older than N seconds, skip waiting for the pod.
Seconds must be greater than 0 to skip.
--timeout=0s: The length of time to wait before giving up, zero means infinite

Usage:
19 changes: 19 additions & 0 deletions setup/installer/install-k8s-by-kubeadm.md
@@ -504,6 +504,25 @@ kubectl get nodes

The host node has a `cni0` interface whose IP range differs from flannel's CIDR subnet, so the interface needs to be deleted and recreated.

Check the cni0 interface:

```bash
# ifconfig cni0 |grep -w inet
inet 10.244.5.1 netmask 255.255.255.0 broadcast 10.244.116.255
```

Check the flannel configuration:

```bash
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.116.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```

The cni0 IP is not in the FLANNEL_SUBNET range, so delete cni0 and let it be recreated.
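The manual comparison above can be scripted. A minimal sketch that assumes /24 subnets (as in the output shown) and uses `ip` rather than `ifconfig`:

```bash
# Extract the /24 prefix of cni0 (e.g. 10.244.5) and of FLANNEL_SUBNET
# (e.g. 10.244.116), then compare them
cni0_prefix=$(ip -4 addr show cni0 | awk '/inet /{split($2,a,"."); print a[1]"."a[2]"."a[3]}')
flannel_prefix=$(awk -F'[=.]' '/^FLANNEL_SUBNET/{print $2"."$3"."$4}' /run/flannel/subnet.env)

if [ "$cni0_prefix" != "$flannel_prefix" ]; then
    echo "mismatch: cni0 is in $cni0_prefix.0/24 but flannel expects $flannel_prefix.0/24"
fi
```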

**Fix:**
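The commands in the commit are truncated here; what follows is a hedged sketch of the commonly used procedure, not the exact contents of the collapsed block:

```bash
# The bridge name used by flannel's CNI config (assumption: cni0, as above)
BRIDGE=cni0

# Take the stale bridge down and delete it; the CNI plugin recreates it
# using the subnet from /run/flannel/subnet.env
ip link set "$BRIDGE" down
ip link delete "$BRIDGE"

# Restart kubelet so pods on the node are re-plumbed through the new bridge
systemctl restart kubelet
```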
2 changes: 2 additions & 0 deletions setup/kubeadm-certs.md
@@ -113,6 +113,8 @@ crictl ps |egrep "kube-apiserver|kube-scheduler|kube-controller"|awk '{print $1}
cp -fr /etc/kubernetes/admin.conf $HOME/.kube/config
```

## 2.5. Configure kubelet certificate rotation

kubelet supports certificate rotation by default: as a certificate approaches expiry, kubelet automatically generates a new key and requests a new certificate from the Kubernetes API. Check the kubelet configuration to confirm that rotation is enabled.
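One way to check, sketched under the assumption that kubelet uses the kubeadm default config file `/var/lib/kubelet/config.yaml`:

```bash
# rotateCertificates: true means kubelet renews its client certificate
# automatically before expiry (the config path is the kubeadm default)
grep rotateCertificates /var/lib/kubelet/config.yaml
```

On a kubeadm-provisioned node this typically prints `rotateCertificates: true`.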
