Bump kubevirtci
Signed-off-by: Roman Mohr <[email protected]>
rmohr committed Jan 4, 2021
1 parent 66ccc42 commit ef7e21d
Showing 9 changed files with 364 additions and 161 deletions.
2 changes: 1 addition & 1 deletion cluster-up-sha.txt
@@ -1 +1 @@
d86cd1cbbd28a7b3997c1f61b6b353e953a00253
abd22af61fbe318c04eaf63881d0a09038d14a6f
81 changes: 81 additions & 0 deletions cluster-up/cluster/k8s-1.20/README.md
@@ -0,0 +1,81 @@
# Kubernetes 1.20 in ephemeral containers

Provides a pre-deployed Kubernetes 1.20 cluster running purely in docker
containers with qemu. The provided VMs are completely ephemeral and are
recreated on every cluster restart. The KubeVirt containers are built on the
local machine and then pushed to a registry which is exposed at
`localhost:5000`.
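
For example, an image built locally can be retagged and pushed to that
registry. The image name below is just a placeholder:

```bash
# Retag a locally built image for the exposed registry and push it.
docker tag kubevirt/example:devel localhost:5000/kubevirt/example:devel
docker push localhost:5000/kubevirt/example:devel
```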

## Bringing the cluster up

```bash
export KUBEVIRT_PROVIDER=k8s-1.20
export KUBEVIRT_NUM_NODES=2 # master + one node
make cluster-up
```

The cluster can be accessed as usual:

```bash
$ cluster/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
node01 NotReady master 31s v1.20.1
node02 NotReady <none> 5s v1.20.1
```
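
Right after start-up the nodes may still report `NotReady`, as in the output
above. If you need to block until they settle, a plain `kubectl wait` works:

```bash
# Wait until every node reports the Ready condition (up to 5 minutes).
cluster/kubectl.sh wait --for=condition=Ready node --all --timeout=300s
```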

## Bringing the cluster up with cluster-network-addons-operator provisioned

```bash
export KUBEVIRT_PROVIDER=k8s-1.20
export KUBEVIRT_NUM_NODES=2 # master + one node
export KUBEVIRT_WITH_CNAO=true
make cluster-up
```

To get more info about CNAO you can check the GitHub project documentation at
https://github.com/kubevirt/cluster-network-addons-operator
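
To verify that the operator came up, you can list its pods. The
`cluster-network-addons` namespace below is CNAO's default; adjust it if your
deployment differs:

```bash
cluster/kubectl.sh get pods -n cluster-network-addons
```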

## Bringing the cluster down

```bash
export KUBEVIRT_PROVIDER=k8s-1.20
make cluster-down
```

This destroys the whole cluster. Recreating the cluster is fast, since k8s is
already pre-deployed. The only state which is kept is the state of the local
docker registry.

## Destroying the docker registry state

The docker registry survives a `make cluster-down`. Its state is stored in a
docker volume called `kubevirt_registry`. If the volume gets too big or
contains corrupt data, it can be deleted with

```bash
docker volume rm kubevirt_registry
```
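
To check how much space the volume actually uses before deleting it:

```bash
# Show per-volume disk usage, then the volume's details (e.g. mountpoint).
docker system df -v | grep kubevirt_registry
docker volume inspect kubevirt_registry
```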

## Enabling IPv6 connectivity

To reach the host's IPv6 network from the cluster, IPv6 has to be enabled in
your Docker daemon. Add the following to your `/etc/docker/daemon.json` and
restart the docker service:

```json
{
"ipv6": true,
"fixed-cidr-v6": "2001:db8:1::/64"
}
```

```bash
systemctl restart docker
```
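
You can confirm that IPv6 is now active on the default bridge network:

```bash
docker network inspect bridge -f '{{.EnableIPv6}}'
```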

With an IPv6-connected host, you may want the pods to be able to reach the rest
of the IPv6 world, too. In order to allow that, enable IPv6 NAT on your host:

```bash
ip6tables -t nat -A POSTROUTING -s 2001:db8:1::/64 -j MASQUERADE
```
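
To verify connectivity, you can ping a public IPv6 address from a node.
Google's public DNS serves purely as a test target here, and this assumes the
ssh helper forwards the trailing command:

```bash
./cluster-up/ssh.sh node01 -- ping6 -c 3 2001:4860:4860::8888
```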
149 changes: 149 additions & 0 deletions cluster-up/cluster/k8s-1.20/dev-guide.md
@@ -0,0 +1,149 @@
# kubevirtci K8s provider dev guide.

The purpose of kubevirtci is to create pre-provisioned K8s clusters as container images,
allowing people to easily run a K8s cluster.

The target audience is developers of kubevirtci who want to create a new provider or update an existing one.

Please first refer to the following document on how to run k8s-1.20:\
[k8s-1.20 cluster-up](https://github.com/kubevirt/kubevirtci/blob/master/cluster-up/cluster/k8s-1.20/README.md)

In this doc, we will go over what a kubevirtci provider image consists of, its inner architecture,
the flow of starting a pre-provisioned cluster, and the flow of creating a new provider.

A provider includes all the images (K8s base image, nodes OS image) and the scripts that allow it to start a
cluster offline, without downloading / installing / compiling new resources.
Deploying a cluster will create containers, which communicate with each other in order to act as a K8s cluster.
This is a bit different from running a bare-metal cluster, where the nodes are physical machines, or from running the nodes as virtual machines on the host itself.
It gives us the advantages of isolation and of freezing the state of the needed components, allowing offline deployment, agnostic of the host OS and installed packages.

# Project structure
* cluster-provision folder - creates pre-provisioned clusters.
* cluster-up folder - spins up pre-provisioned clusters.
* gocli - a binary that assists in provisioning and spinning up a cluster; its sources are at cluster-provision/gocli.

# K8s Deployment
Running `make cluster-up` will deploy a pre-provisioned cluster.
Once the deployment finishes, we will have 3 containers:
* k8s-1.20 vm container - a container that runs a qemu VM, which is the K8s node, in which the pods will run.
* Registry container - a shared image registry.
* k8s-1.20 dnsmasq container - a container that runs dnsmasq, which provides DNS and DHCP services.

The running containers look like this:
```
[root@modi01 1.20.0]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3589e85efc7d kubevirtci/k8s-1.20.0 "/bin/bash -c '/vm.s…" About an hour ago Up About an hour k8s-1.20.0-node01
4742dc02add2 registry:2.7.1 "/entrypoint.sh /etc…" About an hour ago Up About an hour k8s-1.20.0-registry
13787e7d4ac9 kubevirtci/k8s-1.20.0 "/bin/bash -c /dnsma…" About an hour ago Up About an hour 127.0.0.1:8443->8443/tcp, 0.0.0.0:32794->2201/tcp, 0.0.0.0:32793->5000/tcp, 0.0.0.0:32792->5901/tcp, 0.0.0.0:32791->6443/tcp k8s-1.20.0-dnsmasq
```

Nodes:
```
[root@modi01 kubevirtci]# oc get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready master 83m v1.20.0
```

# Inner look of a deployed cluster
We can connect to the node of the cluster with:
```
./cluster-up/ssh.sh node01
```

List the pods:
```
[vagrant@node01 ~]$ sudo crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
403513878c8b7 10 minutes ago Ready coredns-6955765f44-m6ckl kube-system 4
0c3e25e58b9d0 10 minutes ago Ready local-volume-provisioner-fkzgk default 4
e6d96770770f4 10 minutes ago Ready coredns-6955765f44-mhfgg kube-system 4
19ad529c78acc 10 minutes ago Ready kube-flannel-ds-amd64-mq5cx kube-system 0
47acef4276900 10 minutes ago Ready kube-proxy-vtj59 kube-system 0
df5863c55a52f 11 minutes ago Ready kube-scheduler-node01 kube-system 0
ca0637d5ac82f 11 minutes ago Ready kube-apiserver-node01 kube-system 0
f0d90506ce3b8 11 minutes ago Ready kube-controller-manager-node01 kube-system 0
f873785341215 11 minutes ago Ready etcd-node01 kube-system 0
```

Check the kubelet service status:
```
[vagrant@node01 ~]$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2020-01-15 13:39:54 UTC; 11min ago
Docs: https://kubernetes.io/docs/
Main PID: 4294 (kubelet)
CGroup: /system.slice/kubelet.service
‣ 4294 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/boo...
```

Connect to the container that runs the vm:
```
CONTAINER=$(docker ps | grep vm | awk '{print $1}')
docker exec -it $CONTAINER bash
```

From within the container we can see a qemu process which runs the node as a virtual machine:
```
[root@855de8c8310f /]# ps -ef | grep qemu
root 1 0 36 13:39 ? 00:05:22 qemu-system-x86_64 -enable-kvm -drive format=qcow2,file=/var/run/disk/disk.qcow2,if=virtio,cache=unsafe -device virtio-net-pci,netdev=network0,mac=52:55:00:d1:55:01 -netdev tap,id=network0,ifname=tap01,script=no,downscript=no -device virtio-rng-pci -vnc :01 -cpu host -m 5120M -smp 5 -serial pty
```

# Flow of K8s provisioning (1.20 for example)
`cluster-provision/k8s/1.20.0/provision.sh`
* Runs the common cluster-provision/k8s/provision.sh.
* Runs cluster-provision/cli/cli (a bash script).
* Creates a container for dnsmasq and runs dnsmasq.sh in it.
* Creates a container and runs vm.sh in it.
* Creates a vm using qemu and checks that it is ready (via ssh).
* Runs cluster-provision/k8s/scripts/provision.sh in the container.
* Updates docker trusted registries.
* Starts the kubelet service and the K8s cluster.
* Enables ip routing.
* Applies additional manifests, such as flannel.
* Waits for pods to become ready.
* Pulls needed images such as Ceph CSI and the fluentd logger.
* Creates local volume directories.
* Shuts down the vm and commits its container (sketched below).
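
The final step corresponds roughly to the sketch below. Container and image
names are borrowed from the `docker ps` output above; the actual provisioning
code may differ:

```bash
# Freeze the node container's state into a reusable provider image
# once the VM inside it has shut down cleanly (illustrative names).
docker commit k8s-1.20.0-node01 kubevirtci/k8s-1.20.0
```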

# Flow of K8s cluster-up (1.20 for example)
Run
```
export KUBEVIRT_PROVIDER=k8s-1.20.0
make cluster-up
```
* Runs cluster-up/up.sh, which sources the following:
* cluster-up/cluster/k8s-1.20.0/provider.sh (selected according to $KUBEVIRT_PROVIDER), which sources:
* cluster-up/cluster/k8s-provider-common.sh
* Runs `up` (which appears in cluster-up/cluster/k8s-provider-common.sh).
It triggers `gocli run` (cluster-provision/gocli/cmd/run.go), which creates the following containers:
* Cluster container (the one with the vm from the provisioning; vm.sh is used here with parameters that start the already created vm).
* Registry.
* Container for dnsmasq (provides DNS and DHCP services).
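
Once `up` finishes, the cluster is reachable from the host via the kubeconfig
helper shipped in cluster-up:

```bash
# kubeconfig.sh prints the path of the kubeconfig for the running cluster.
export KUBECONFIG=$(./cluster-up/kubeconfig.sh)
kubectl get nodes
```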

# Creating new K8s provider
Clone the folders of an existing k8s provider (see the sketch after this list). The folder name should be x/y as in the provider name x-y (i.e. k8s-1.20.0) and include:
* cluster-provision/k8s/1.20.0/provision.sh # used to create a new provider
* cluster-provision/k8s/1.20.0/publish.sh # used to publish the new provider
* cluster-up/cluster/k8s-1.20.0/provider.sh # used by cluster-up
* cluster-up/cluster/k8s-1.20.0/README.md
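
As a starting point, the folders of the previous provider can be copied and
the version strings adjusted. The 1.21.0 target below is purely illustrative:

```bash
# Hypothetical example: bootstrap a 1.21.0 provider from the 1.20.0 folders.
cp -r cluster-provision/k8s/1.20.0 cluster-provision/k8s/1.21.0
cp -r cluster-up/cluster/k8s-1.20.0 cluster-up/cluster/k8s-1.21.0
```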

# Example - Adding a new manifest to K8s 1.20
* First add the file at cluster-provision/manifests. This folder is copied to /tmp in the container
by cluster-provision/cli/cli as part of provisioning.
* Add this snippet to cluster-provision/k8s/scripts/provision.sh, before the "Wait at least for 7 pods" line.
```
custom_manifest="/tmp/custom_manifest.yaml"
kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f "$custom_manifest"
```
* Run ./cluster-provision/k8s/1.20.0/provision.sh; it will create a new provision and test it.
* Run ./cluster-provision/k8s/1.20.0/publish.sh; it will publish the newly created image to docker.io.
* Update the k8s-1.20.0 image line in cluster-up/cluster/images.sh to point to the newly published image.
* Create a PR with the following files:
    * The new manifest.
    * The updated cluster-provision/k8s/scripts/provision.sh.
    * The updated cluster-up/cluster/images.sh.

4 changes: 4 additions & 0 deletions cluster-up/cluster/k8s-1.20/provider.sh
@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -e
# shellcheck disable=SC1090
source "${KUBEVIRTCI_PATH}/cluster/k8s-provider-common.sh"
19 changes: 19 additions & 0 deletions cluster-up/cluster/kind-k8s-sriov-1.17.0/certcreator/certsecret.go
@@ -11,6 +11,7 @@ import (
"time"

corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
@@ -85,8 +86,26 @@ func createSecret(clusterApi kubernetes.Interface, namespace, secretName string,
    }

    // Make sure the secret is not already present: poll until Get reports
    // NotFound, retry on transient errors, and fail if the secret exists.
    err := wait.Poll(time.Second*5, time.Minute*3, func() (bool, error) {
        _, err := clusterApi.CoreV1().Secrets(namespace).Get(secret.Name, metav1.GetOptions{})
        if err != nil {
            if errors.IsNotFound(err) {
                return true, nil
            }
            return false, nil
        }
        return false, fmt.Errorf("secret %s already exists", secret.Name)
    })

    if err != nil {
        return err
    }

    // Create the secret, retrying transient failures and treating a
    // concurrent creation (AlreadyExists) as success.
    err = wait.Poll(time.Second*5, time.Minute*3, func() (bool, error) {
        _, err := clusterApi.CoreV1().Secrets(namespace).Create(secret)
        if err != nil {
            if errors.IsAlreadyExists(err) {
                return true, nil
            }
            log.Printf("failed to create secret '%s': %v", secret.Name, err)
            return false, nil
        }