Set up a Highly Available Kubernetes Cluster using kubeadm

Follow this documentation to set up a highly available Kubernetes cluster on Ubuntu 20.04 LTS.

This documentation guides you through setting up a cluster with two master nodes, one worker node, and a load balancer node running HAProxy.

Vagrant Environment

Role           FQDN                      IP             OS            RAM  CPU
Load Balancer  loadbalancer.example.com  172.16.16.100  Ubuntu 20.04  1G   1
Master         kmaster1.example.com      172.16.16.101  Ubuntu 20.04  2G   2
Master         kmaster2.example.com      172.16.16.102  Ubuntu 20.04  2G   2
Worker         kworker1.example.com      172.16.16.201  Ubuntu 20.04  1G   1
  • The password for the root account on all these virtual machines is kubeadmin
  • Perform all commands as the root user unless otherwise specified

Pre-requisites

If you want to try this in a virtualized environment on your workstation

  • VirtualBox installed
  • Vagrant installed
  • Host machine has at least 8 cores
  • Host machine has at least 8G memory

Bring up all the virtual machines

vagrant up
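To confirm all four machines booted, you can check their status before moving on. The machine name `loadbalancer` below is an assumption; use whatever names your Vagrantfile defines:

```shell
# List the state of every machine defined in the Vagrantfile
vagrant status

# Spot-check one VM (machine name "loadbalancer" is an assumption)
vagrant ssh loadbalancer -c "hostname -f && ip -4 addr show"
```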

Set up load balancer node

Install HAProxy
apt update && apt install -y haproxy
Configure HAProxy

Append the lines below to /etc/haproxy/haproxy.cfg

frontend kubernetes-frontend
    bind 172.16.16.100:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server kmaster1 172.16.16.101:6443 check fall 3 rise 2
    server kmaster2 172.16.16.102:6443 check fall 3 rise 2
Restart haproxy service
systemctl restart haproxy
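As a sanity check (not in the original steps), confirm HAProxy is running and listening on the frontend address. Until the first master is initialized the backends will be marked down, but the frontend port should already accept TCP connections:

```shell
# Verify the service came back up
systemctl is-active haproxy

# Verify something is listening on the VIP port
ss -tlnp | grep 6443

# Raw TCP connect test (can be run from any node)
nc -zv 172.16.16.100 6443
```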

On all kubernetes nodes (kmaster1, kmaster2, kworker1)

Disable Firewall
ufw disable
Disable swap
swapoff -a; sed -i '/swap/d' /etc/fstab
Update sysctl settings for Kubernetes networking
cat >>/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
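On a fresh Ubuntu 20.04 box these bridge sysctl keys only exist once the br_netfilter kernel module is loaded; loading it explicitly (an addition to the original steps, per the standard kubeadm prerequisites) avoids a "No such file or directory" error from sysctl:

```shell
# Load the module now and persist it across reboots
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf

# Re-apply so the bridge keys take effect
sysctl --system
```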
Install Docker Engine
{
  apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  apt update && apt install -y docker-ce=5:19.03.10~3-0~ubuntu-focal containerd.io
}
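To keep the pinned Docker version from drifting on a later apt upgrade, hold the packages and confirm the daemon version (a side note, not in the original guide):

```shell
# Hold the packages at the installed version
apt-mark hold docker-ce containerd.io

# Confirm the daemon is running and at the expected version
systemctl is-active docker
docker version --format '{{.Server.Version}}'   # expect 19.03.10
```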

Kubernetes Setup

Add Apt repository
{
  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
}
Install Kubernetes components
apt update && apt install -y kubeadm=1.19.2-00 kubelet=1.19.2-00 kubectl=1.19.2-00
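Likewise, hold the Kubernetes packages so an apt upgrade cannot skew versions across nodes (a side note, not in the original guide):

```shell
# Hold the packages so every node stays on the same version
apt-mark hold kubeadm kubelet kubectl

# Confirm the installed version
kubeadm version -o short   # expect v1.19.2
```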

On any one of the Kubernetes master nodes (e.g. kmaster1)

Initialize Kubernetes Cluster
kubeadm init --control-plane-endpoint="172.16.16.100:6443" --upload-certs --apiserver-advertise-address=172.16.16.101 --pod-network-cidr=192.168.0.0/16

Copy the commands for joining the other master nodes and worker nodes from the output.
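If the init output scrolls away before you copy it, the join commands can be regenerated later with standard kubeadm commands (this recovery step is not in the original guide):

```shell
# Print a fresh worker join command
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key
# (append it to the join command as --certificate-key for master joins)
kubeadm init phase upload-certs --upload-certs
```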

Deploy Calico network
kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml
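Before joining the remaining nodes, it helps to wait for the control plane and Calico pods to settle (a sanity check, not in the original steps):

```shell
# Watch the kube-system pods come up
kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system

# Block until the first master reports Ready (up to 5 minutes)
kubectl --kubeconfig=/etc/kubernetes/admin.conf wait --for=condition=Ready node --all --timeout=300s
```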

Join other nodes to the cluster (kmaster2 & kworker1)

Use the respective kubeadm join commands you copied from the output of the kubeadm init command on the first master.

IMPORTANT: You also need to pass --apiserver-advertise-address to the join command when you join the other master node.
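For example, on kmaster2 the control-plane join would look roughly like this; `<token>`, `<hash>`, and `<cert-key>` are placeholders, substitute the real values printed by kubeadm init:

```shell
# Run on kmaster2 only -- placeholders must be replaced with your init output
kubeadm join 172.16.16.100:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane \
    --certificate-key <cert-key> \
    --apiserver-advertise-address=172.16.16.102
```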

Downloading the kubeconfig to your local machine

On your host machine

mkdir -p ~/.kube
scp [email protected]:/etc/kubernetes/admin.conf ~/.kube/config

The password for the root account is kubeadmin (if you used my Vagrant setup)

Verifying the cluster

kubectl cluster-info
kubectl get nodes
kubectl get cs
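Note that componentstatuses (kubectl get cs) is deprecated as of v1.19 and may report the scheduler and controller-manager as unhealthy on kubeadm clusters even when they are fine; the API server's aggregated health endpoint is a more reliable check:

```shell
# Query the API server's readiness checks directly
kubectl get --raw='/readyz?verbose'

# Expect all nodes Ready and all kube-system pods Running
kubectl get nodes -o wide
kubectl get pods -n kube-system
```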

Have Fun!!