- The client's minor version may be at most one higher than the server's; e.g. if the server is 1.7.8, a 1.8.x client is fine
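  A quick way to compare the two (output format varies a bit between kubectl releases):
      # print client and server versions; compare the minor version numbers
      kubectl version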
- linux
- macos: replace `linux` with `darwin` in the download URL
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
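The download step that precedes the chmod/mv above is not shown here; a minimal sketch for Linux amd64, assuming the official release location:
    # fetch the latest stable kubectl binary (swap linux for darwin on macOS)
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"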
# list only pod names, no column headers
kubectl get po --no-headers -o custom-columns=:.metadata.name
# find pod by ip
kubectl get po --all-namespaces -o wide | grep 10.0.0.39
# get yaml
kubectl ... get ... -o yaml --export
# full service name across namespaces
<service-name>.<namespace-name>.svc.cluster.local
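To check that such a name actually resolves, nslookup can be run from a throwaway pod (the busybox image tag and the pod name here are just an example):
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
        -- nslookup <service-name>.<namespace-name>.svc.cluster.local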
# list all contexts
kubectl config get-contexts
# specify context
kubectl_tke --context=<ContextName> get nodes
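To switch the default context instead of passing --context on every command:
    # make <ContextName> the active context, then verify
    kubectl config use-context <ContextName>
    kubectl config current-context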
kubectl -n <namespace> rollout restart deployment <deployment-name>
NAMESPACE="your-namespace"
SELECTOR="k8s-app=xxxxxxx"
CMD="cat logs/app.log"
TEXT="GET /callback"
if [ "$1" != "" ]
then
TEXT=$1
fi
for pod in `kubectl -n $NAMESPACE get po --no-headers --selector=$SELECTOR -o custom-columns='NAME:metadata.name'`
do
echo -------------- searching $pod
kubectl -n $NAMESPACE exec -it $pod -- $CMD | grep "$TEXT"
done
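Saved as a script, the loop above takes the text to grep for as its first argument; a hypothetical invocation (script name and search string are made up):
    chmod +x grep-pod-logs.sh
    ./grep-pod-logs.sh "POST /callback"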
7. imagePullSecrets (must be set in the deployment if images are pulled from an external registry; a sketch follows below)
    - qcloudregistrykey
      it seems that TKE will automatically use `tencenthubkey` ?
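    A rough sketch of creating such a registry secret and attaching it to a deployment (namespace, registry host, credentials and deployment name are placeholders):
        kubectl -n <namespace> create secret docker-registry qcloudregistrykey \
            --docker-server=<registry-host> --docker-username=<user> --docker-password=<password>
        kubectl -n <namespace> patch deployment <deployment-name> \
            -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"qcloudregistrykey"}]}}}}'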
8. Secrets cannot be accessed across namespaces.
To duplicate a secret from namespace A into namespace B:
kubectl get secret <secret-name> --namespace=A --export -o yaml | kubectl apply --namespace=B -f -
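Note that --export was removed in newer kubectl releases; a rough alternative is to strip the namespace-specific metadata before re-applying (a sketch, not a one-size-fits-all command):
    kubectl get secret <secret-name> --namespace=A -o yaml \
        | sed '/namespace:/d;/resourceVersion:/d;/uid:/d;/creationTimestamp:/d' \
        | kubectl apply --namespace=B -f -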
General approach to using role / rolebinding
- create serviceaccount
  kubectl create serviceaccount <name>
  it will also create a secret. Every namespace has a default serviceaccount: default
- create roles
  # create a role
  kubectl -n $_NS create role role_po --verb="list" --resource=po --dry-run -o yaml
  kubectl -n $_NS create role role_deploy --verb="get" --resource=deploy --dry-run -o yaml
  kubectl -n $_NS create role role_scale --verb="update" --resource=replicasets --dry-run -o yaml
  - we can also merge the different roles by combining the --dry-run output into one yaml file, then:
    kubectl -n $_NS create -f role.yaml
- create rolebinding
  # To bind the "default" serviceaccount and "role_scale"
  kubectl -n $_NS create rolebinding default_scale_binding --serviceaccount="$_NS:default" --role=role_scale --dry-run -o yaml
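To verify a binding actually grants what you expect, kubectl auth can-i can impersonate the serviceaccount:
    # should print "yes" once default_scale_binding is in place
    kubectl -n $_NS auth can-i update replicasets --as=system:serviceaccount:$_NS:default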
Define postStart and preStop handlers
on TKE, add it in the deployment yaml, following the image property.
Example: when this pod restarts, before it is fully ready, force another service/pod to relaunch.
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "_NS=umc-hse-dev && _APP=umc-hse-app && replinum=`kubectl -n $_NS get deploy $_APP -o=jsonpath='{.status.replicas}'` && kubectl -n $_NS scale deployments/$_APP --replicas=$(($replinum-1)) && kubectl -n $_NS scale deployments/$_APP --replicas=$replinum"]
# select a specific element of a list
$ kubectl get pods -o custom-columns='DATA:spec.containers[0].image'
# select list elements that match a filter expression
$ kubectl get pods -o custom-columns='DATA:spec.containers[?(@.image!="nginx")].image'
# select all fields under a given location, regardless of their names
$ kubectl get pods -o custom-columns='DATA:metadata.*'
# select all fields with a given name, regardless of where they are
$ kubectl get pods -o custom-columns='DATA:..image'
Show all container images used by the Pods:
$ kubectl get pods \
-o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
Show the availability zone of each node:
$ kubectl get nodes \
-o custom-columns='NAME:metadata.name,ZONE:metadata.labels.failure-domain\.beta\.kubernetes\.io/zone'
- each node's availability zone is exposed via the label
failure-domain.beta.kubernetes.io/zone
- the command above is very handy if your Kubernetes cluster runs on a public cloud (e.g. AWS, Azure or GCP)
- Policy for accessing a specific COS bucket
{
"version": "2.0",
"statement": [
{
"action": [
"cos:*"
],
"resource": "qcs::cos:::BUCKET-NAME/*",
"effect": "allow"
},
{
"effect": "allow",
"action": [
"monitor:*",
"cam:ListUsersForGroup",
"cam:ListGroups",
"cam:GetGroup"
],
"resource": "*"
}
]
}
- Replacing certificates
    - check certificate compatibility: https://myssl.com/
- Check node status with kubectl
kubectl describe nodes
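If metrics-server is installed in the cluster, per-node resource usage is also available:
    kubectl top nodes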
- Log in to a node and check disk usage
- check overall usage
df | less
- check usage under a given path
ls -Sl
du -m <path> | sort -nr | head -n 10
du -shxm * | sort -nr | head -n 10
  du -sh <your directory>
- Clean up unused docker images
docker images | grep "<none>" | grep umc-app-images | awk "{print \$3}" | xargs docker rmi
# more aggressive
docker images | grep umc-app-images | awk "{print \$3}" | xargs docker rmi
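Docker's built-in pruning is a blunter but simpler alternative (it is not limited to umc-app-images):
    # remove dangling images only
    docker image prune
    # remove every image not referenced by a container
    docker image prune -a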
# list pods whose status is not Running (column 4 is STATUS)
kubectl_umc get pods --all-namespaces | awk '{ if ($4!="Running") print $0 }'
- 1 install cntlm
1) download from
https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/c/
2) rpm -Uvh xxx.rpm
- 2 Get password hash
- (type your password, press enter and copy the output)
- modify your username/domain first in
/etc/cntlm.conf
- or
cntlm -H -u <Your username> -d cop-domain
$ cntlm -H
Password:
PassLM 14BE8CB0282308185246B269C29C0A88
PassNT DD8F12AC2482B5BC43A6972E7DFD0F78
PassNTLMv2 934498581AFCBE80CA0457E0FD30B0F9 # Only for user '', domain ''
- 3 Edit cntlm configuration file (example for testuser)
#vi /etc/cntlm.conf
Username YOURUSERNAME
Domain YOURCOMPANYDOMAIN
########Paste result of cntlm -H here###########
PassLM 14BE8CB0282308185246B269C29C0A88
PassNT DD8F12AC2482B5BC43A6972E7DFD0F78
PassNTLMv2 934498581AFCBE80CA0457E0FD30B0F9 # Only for user '', domain ''
Proxy YOUR_COMPANY_PROXY_HOST:PORT
NoProxy ...
Auth NTLM
- 4 Enable cntlm service at boot, and start it now
#systemctl enable cntlm
#systemctl start cntlm
- 5 Set environment variables (HTTP_PROXY and HTTPS_PROXY)
- use:
127.0.0.1:3128
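A minimal sketch of this step, assuming cntlm listens on its default port 3128 (the proxy URL scheme is http even for HTTPS_PROXY, since cntlm itself speaks plain HTTP):
    export HTTP_PROXY=http://127.0.0.1:3128
    export HTTPS_PROXY=http://127.0.0.1:3128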
- config file location:
  /usr/local/etc/cntlm.conf
  - otherwise it might be in /etc/cntlm.conf
- You can run cntlm in debug mode for testing purposes and see what's happening:
cntlm -f
# Run in foreground, do not fork into daemon mode.
- If everything is fine, you can launch it as a daemon just by typing:
cntlm
- To have launchd start cntlm now and restart at startup:
sudo brew services start cntlm
- set proxy env
export http_proxy=http://localhost:3128
export https_proxy=http://localhost:3128
- restart
brew services restart cntlm
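To confirm traffic really goes through cntlm, a quick test (target URL is arbitrary):
    curl -x http://127.0.0.1:3128 -I https://example.com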