Bump to the simplified provider layout in kubevirtci
Signed-off-by: Roman Mohr <[email protected]>
rmohr committed Mar 5, 2020
1 parent 656f9d0 commit ca350eb
Showing 52 changed files with 2,328 additions and 270 deletions.
2 changes: 1 addition & 1 deletion cluster-up-sha.txt
@@ -1 +1 @@
dfa2be010812c39ce653331db5bd974ae347c652
a5060d40ae57089ca5a318c70f9c592912ecfb95
6 changes: 0 additions & 6 deletions cluster-up/OWNERS

This file was deleted.

2 changes: 1 addition & 1 deletion cluster-up/cluster/ephemeral-provider-common.sh
@@ -36,7 +36,7 @@ function _registry_volume() {
}

function _add_common_params() {
local params="--nodes ${KUBEVIRT_NUM_NODES} --memory ${KUBEVIRT_MEMORY_SIZE} --cpu 5 --secondary-nics ${KUBEVIRT_NUM_SECONDARY_NICS} --random-ports --background --prefix $provider_prefix --registry-volume $(_registry_volume) kubevirtci/${image} ${KUBEVIRT_PROVIDER_EXTRA_ARGS}"
local params="--nodes ${KUBEVIRT_NUM_NODES} --memory ${KUBEVIRT_MEMORY_SIZE} --cpu 6 --secondary-nics ${KUBEVIRT_NUM_SECONDARY_NICS} --random-ports --background --prefix $provider_prefix --registry-volume $(_registry_volume) kubevirtci/${image} ${KUBEVIRT_PROVIDER_EXTRA_ARGS}"
if [[ $TARGET =~ windows.* ]] && [ -n "$WINDOWS_NFS_DIR" ]; then
params=" --nfs-data $WINDOWS_NFS_DIR $params"
elif [[ $TARGET =~ os-.* ]] && [ -n "$RHEL_NFS_DIR" ]; then
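For orientation, every parameter in `_add_common_params` is driven by an environment variable. A minimal sketch of how a caller might set them before `make cluster-up` (the values below are illustrative assumptions, not defaults taken from this diff):

```bash
# Hypothetical invocation; _add_common_params consumes these variables.
export KUBEVIRT_PROVIDER=k8s-1.17            # picks the kubevirtci image from images.sh
export KUBEVIRT_NUM_NODES=2                  # master + one worker
export KUBEVIRT_MEMORY_SIZE=5120M            # per-node memory handed to gocli
export KUBEVIRT_NUM_SECONDARY_NICS=0
export KUBEVIRT_PROVIDER_EXTRA_ARGS=""       # appended verbatim to the gocli command line
make cluster-up
```
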
16 changes: 8 additions & 8 deletions cluster-up/cluster/images.sh
@@ -3,20 +3,20 @@
set -e

declare -A IMAGES
IMAGES[gocli]="gocli@sha256:e48c7285ac9e4e61fe0f89f35ac5f9090497ea7c8165deeadb61e464c88d8afd"
IMAGES[gocli]="gocli@sha256:220f55f6b1bcb3975d535948d335bd0e6b6297149a3eba1a4c14cad9ac80f80d"
if [ -z $KUBEVIRTCI_PROVISION_CHECK ]; then
IMAGES[k8s-1.17.0]="k8s-1.17.0@sha256:7c932e8551f26d1d84b3b7846ac88de3ee835399f10623fc447654b55c0b91e6"
IMAGES[k8s-1.16.2]="k8s-1.16.2@sha256:5bae6a5f3b996952c5ceb4ba12ac635146425909801df89d34a592f3d3502b0c"
IMAGES[k8s-1.15.1]="k8s-1.15.1@sha256:14d7b1806f24e527167d2913deafd910ea46e69b830bf0b094dde35ba961b159"
IMAGES[k8s-1.14.6]="k8s-1.14.6@sha256:ec29c07c94fce22f37a448cb85ca1fb9215d1854f52573316752d19a1c88bcb3"
IMAGES[k8s-1.13.3]="k8s-1.13.3@sha256:afbdd9b4208e5ce2ec327f302c336cea3ed3c22488603eab63b92c3bfd36d6cd"
IMAGES[k8s-1.11.0]="k8s-1.11.0@sha256:696ba7860fc635628e36713a2181ef72568d825f816911cf857b2555ea80a98a"
IMAGES[k8s-fedora-1.17.0]="k8s-fedora-1.17.0@sha256:5fc78a20fae562ce78618fc25d0a15acd6de384b27adc3b6cd54f54f6c9d4fdf"
IMAGES[k8s-1.17]="k8s-1.17@sha256:4dc613045b7fdd959ef0b03b8eed61ac121ffe3d1363bda9d9a4d435da3fd567"
IMAGES[k8s-1.16]="k8s-1.16@sha256:3559c7d83baa16d1bb641c38f24afee82a24023f9cc03bf4cffc9b54435d35ab"
IMAGES[k8s-1.15]="k8s-1.15@sha256:bfa0b87f7a561d15ed8bdba1506f34daf024c48d70677a02920e02494e40354b"
IMAGES[k8s-1.14]="k8s-1.14@sha256:410468892ed51308b0e71c755d2b3a65b060a22302c7cfdbc213b5566de0e661"
IMAGES[k8s-genie-1.11.1]="k8s-genie-1.11.1@sha256:19af1961fdf92c08612d113a3cf7db40f02fd213113a111a0b007a4bf0f3f7e7"
IMAGES[k8s-multus-1.13.3]="k8s-multus-1.13.3@sha256:c0bcf0d2e992e5b4d96a7bcbf988b98b64c4f5aef2f2c4d1c291e90b85529738"
IMAGES[okd-4.1]="okd-4.1@sha256:e7e3a03bb144eb8c0be4dcd700592934856fb623d51a2b53871d69267ca51c86"
IMAGES[okd-4.2]="okd-4.2@sha256:a830064ca7bf5c5c2f15df180f816534e669a9a038fef4919116d61eb33e84c5"
IMAGES[okd-4.3]="okd-4.3@sha256:63abc3884002a615712dfac5f42785be864ea62006892bf8a086ccdbca8b3d38"
IMAGES[ocp-4.3]="ocp-4.3@sha256:8f59d625852ef285d6ce3ddd6ebd3662707d2c0fab19772b61dd0aa0f6b41e5f"
IMAGES[ocp-4.3]="ocp-4.3@sha256:03a8c736263493961f198b5cb214d9b1fc265ece233c60bdb1c8b8b4b779ee1e"
IMAGES[ocp-4.4]="ocp-4.4@sha256:b235e87323ed88c46fedf27e9115573b92f228a82559ab7523dd1be183f66af8"
fi
export IMAGES

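Since `IMAGES` is a plain bash associative array keyed by provider name, resolving a pinned image is a simple lookup. A minimal, hypothetical consumer of this file (the default provider name is an assumption for illustration):

```bash
#!/usr/bin/env bash
# Hypothetical consumer of images.sh: resolve the pinned image for a provider.
source cluster-up/cluster/images.sh

provider="${KUBEVIRT_PROVIDER:-k8s-1.17}"
image="${IMAGES[$provider]}"
if [ -z "$image" ]; then
    echo "no pinned image for provider '$provider'" >&2
    exit 1
fi
echo "kubevirtci/${image}"   # e.g. kubevirtci/k8s-1.17@sha256:4dc6...
```
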
45 changes: 0 additions & 45 deletions cluster-up/cluster/k8s-1.11.0/README.md

This file was deleted.

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
3 changes: 0 additions & 3 deletions cluster-up/cluster/k8s-1.17.0/provider.sh

This file was deleted.

File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -1,15 +1,22 @@
# Kubernetes 1.13.3 in ephemeral containers
# Kubernetes 1.17.0 in ephemeral containers

Provides a pre-deployed Kubernetes with version 1.13.3 purely in docker
Provides a pre-deployed Kubernetes with version 1.17.0 purely in docker
containers with qemu. The provided VMs are completely ephemeral and are
recreated on every cluster restart. The KubeVirt containers are built on the
local machine and are then pushed to a registry which is exposed at
`localhost:5000`.

# Kubernetes 1.17.0 with fedora nodes

This provider deploys a Kubernetes 1.17.0 cluster with Fedora 31 nodes.
This allows you to test and deploy the latest packages or updates
on the cluster nodes, for example by using COPR builds to test the
latest version of knmstate/nmstate.

## Bringing the cluster up

```bash
export KUBEVIRT_PROVIDER=k8s-1.13.3
export KUBEVIRT_PROVIDER=k8s-fedora-1.17.0
export KUBEVIRT_NUM_NODES=2 # master + one node
make cluster-up
```
@@ -19,14 +19,14 @@ The cluster can be accessed as usual:
```bash
$ cluster/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
node01 NotReady master 31s v1.13.3
node02 NotReady <none> 5s v1.13.3
node01 NotReady master 31s v1.17.0
node02 NotReady <none> 5s v1.17.0
```

## Bringing the cluster down

```bash
export KUBEVIRT_PROVIDER=k8s-1.13.3
export KUBEVIRT_PROVIDER=k8s-fedora-1.17.0
make cluster-down
```

File renamed without changes.
49 changes: 49 additions & 0 deletions cluster-up/cluster/kind-k8s-1.17.0-ipv6/README.md
@@ -0,0 +1,49 @@
# K8S 1.17.0 in a Kind cluster, with IPv6 only

Provides a pre-deployed k8s cluster with version 1.17.0 that runs using [kind](https://github.com/kubernetes-sigs/kind). The cluster is completely ephemeral and is recreated on every cluster restart.
The KubeVirt containers are built on the local machine and are then pushed to a registry which is exposed at
`localhost:5000`.

The cluster is brought up with IPv6 support, but without flannel or multi-NIC support.

## Prerequisites
1. kubectl >= 1.16
1. A docker network with IPv6.
To get that, you'll have to add the following section to /etc/docker/daemon.json:
```
{
    "ipv6": true,
    "fixed-cidr-v6": "2001:db8:1::/64"
}
```
and then fully restart docker (`systemctl restart docker`).
If needed, the setup can be verified with `docker run --rm busybox ip a`;
make sure an IPv6 address is listed.
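Taken together, the steps above can be scripted roughly as follows. This is a sketch that assumes `/etc/docker/daemon.json` does not exist yet; if it does, merge the keys by hand instead of overwriting:

```bash
# Enable IPv6 in the docker daemon (overwrites an absent daemon.json; merge manually otherwise).
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "ipv6": true,
    "fixed-cidr-v6": "2001:db8:1::/64"
}
EOF
sudo systemctl restart docker

# Verify: the container should show an inet6 address besides the link-local one.
docker run --rm busybox ip a | grep inet6
```
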
## Bringing the cluster up
```bash
export KUBEVIRT_PROVIDER=kind-k8s-1.17.0-ipv6
export KUBEVIRT_NUM_NODES=2 # master + one node
make cluster-up
```

The cluster can be accessed as usual:

```bash
$ cluster-up/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
kind-1.17.0-control-plane Ready master 105s v1.17.0
kind-1.17.0-worker Ready <none> 71s v1.17.0
```

## Bringing the cluster down

```bash
export KUBEVIRT_PROVIDER=kind-k8s-1.17.0-ipv6
make cluster-down
```

This destroys the whole cluster.

14 changes: 14 additions & 0 deletions cluster-up/cluster/kind-k8s-1.17.0-ipv6/provider.sh
@@ -0,0 +1,14 @@
#!/usr/bin/env bash

set -e

export IPV6_CNI="yes"
export CLUSTER_NAME="kind-1.17.0"
export KIND_NODE_IMAGE="kindest/node:v1.17.0"

source ${KUBEVIRTCI_PATH}/cluster/kind/common.sh

function up() {
cp $KIND_MANIFESTS_DIR/kind-ipv6.yaml ${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/kind.yaml
kind_up
}
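
The `kind-ipv6.yaml` manifest that `up()` copies is not part of this diff. Based on kind's documented cluster configuration format, a plausible minimal shape would be the following sketch (an assumption, not the file's actual content):

```yaml
# Hypothetical sketch of kind-ipv6.yaml; the real file lives in KIND_MANIFESTS_DIR.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv6
```
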
60 changes: 60 additions & 0 deletions cluster-up/cluster/kind-k8s-sriov-1.14.2/TROUBLESHOOTING.md
@@ -0,0 +1,60 @@
# How to troubleshoot a failing kind job

If logging and output artifacts are not enough, there is a way to connect to a running CI pod and troubleshoot directly from there.

## Pre-requisites

- A working (enabled) account on the [CI cluster](https://shift.ovirt.org), specifically enabled to the `kubevirt-prow-jobs` project.
- The [mkpj tool](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/mkpj) installed

## Launching a custom job

Through the `mkpj` tool, it's possible to craft a custom Prow Job that can be executed on the CI cluster.

Just `go get` it by running `go get k8s.io/test-infra/prow/cmd/mkpj`.

Then run the following command from a checkout of the [project-infra repo](https://github.com/kubevirt/project-infra):

```bash
mkpj --pull-number $KUBEVIRTPRNUMBER -job pull-kubevirt-e2e-kind-k8s-sriov-1.14.2 -job-config-path github/ci/prow/files/jobs/kubevirt/kubevirt-presubmits.yaml --config-path github/ci/prow/files/config.yaml > debugkind.yaml
```

You will end up with a ProwJob manifest in the `debugkind.yaml` file.

It's strongly recommended to rename the job by setting `metadata.name` to something more recognizable, as that makes it easier to find and debug the corresponding pod.

`$KUBEVIRTPRNUMBER` can be the number of an actual PR on the [kubevirt repo](https://github.com/kubevirt/kubevirt).

If you just want to debug the cluster provided by the CI, it's recommended to override the entry point, either in the test PR you are instrumenting (a good sample can be found [here](https://github.com/kubevirt/kubevirt/pull/3022)) or directly in the prow job's manifest.

Remember that we want the cluster to be long-lived, so a long sleep must be provided as part of the entry point.
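
For example, the container spec inside the generated `debugkind.yaml` could be edited along these lines (a hypothetical excerpt; images and other fields depend on the job you generated):

```yaml
# Hypothetical excerpt of debugkind.yaml after editing:
spec:
  pod_spec:
    containers:
    - image: <the job's original image>   # keep whatever mkpj generated
      command:
      - /bin/sh
      - -c
      - sleep 9h   # keep the pod alive long enough for interactive debugging
```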

Make sure you switch to the `kubevirt-prow-jobs` project, and apply the manifest:

```bash
kubectl apply -f debugkind.yaml
```

You will end up with a ProwJob object, and a pod with the same name you gave to the ProwJob.

Once the pod is up & running, connect to it via bash:

```bash
kubectl exec -it debugprowjobpod bash
```

### Logistics

Once you are in the pod, you'll be able to troubleshoot what's happening in the environment where CI runs its tests.

Run the following to bring up a [kind](https://github.com/kubernetes-sigs/kind) cluster with a single-node setup and the SR-IOV operator ready to go (if the job itself hasn't already done so).

```bash
KUBEVIRT_PROVIDER=kind-k8s-sriov-1.14.2 make cluster-up
```

The kubeconfig file will be available under `/root/.kube/kind-config-sriov`.

The `kubectl` binary is already on board and in `$PATH`.

The container acting as node is the one named `sriov-control-plane`. You can even see what's in there by running `docker exec -it sriov-control-plane bash`.
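
Putting those pointers together, a first orientation inside the pod might look like this (a sketch using only the paths and names mentioned above):

```bash
# Inside the debug pod, after cluster-up has finished:
export KUBECONFIG=/root/.kube/kind-config-sriov
kubectl get nodes -o wide                  # the kind "nodes" are containers
docker ps                                  # sriov-control-plane should be listed
docker exec -it sriov-control-plane bash   # poke around the "node" container
```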
4 changes: 3 additions & 1 deletion cluster-up/cluster/kind-k8s-sriov-1.14.2/provider.sh
@@ -2,7 +2,9 @@

set -e

export CLUSTER_NAME="sriov"
export CLUSTER_NAME="kind-sriov-1.14.2"
export KIND_NODE_IMAGE="kindest/node:v1.14.2"

source ${KUBEVIRTCI_PATH}/cluster/kind/common.sh

function up() {
32 changes: 32 additions & 0 deletions cluster-up/cluster/kind-k8s-sriov-1.17.0/README.md
@@ -0,0 +1,32 @@
# K8S 1.17.0 with sriov in a Kind cluster

Provides a pre-deployed k8s cluster with version 1.17.0 that runs using [kind](https://github.com/kubernetes-sigs/kind). The cluster is completely ephemeral and is recreated on every cluster restart.
The KubeVirt containers are built on the local machine and are then pushed to a registry which is exposed at
`localhost:5000`.

This version also expects SR-IOV-enabled NICs on the current host, and will move all the physical and virtual interfaces into the `kind` cluster's master node so that they can be used through Multus.
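
Before bringing the provider up, it can help to confirm that the host actually exposes SR-IOV-capable NICs. A generic check (not a command shipped by this repo) is sketched below:

```bash
# List network interfaces whose PCI device advertises SR-IOV virtual functions.
for dev in /sys/class/net/*/device/sriov_totalvfs; do
    [ -e "$dev" ] || continue
    nic=$(basename "$(dirname "$(dirname "$dev")")")
    echo "${nic}: supports $(cat "$dev") VFs"
done
```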

## Bringing the cluster up

```bash
export KUBEVIRT_PROVIDER=kind-k8s-sriov-1.17.0
make cluster-up
```

The cluster can be accessed as usual:

```bash
$ cluster-up/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
sriov-control-plane Ready master 2m33s v1.17.0
```

## Bringing the cluster down

```bash
export KUBEVIRT_PROVIDER=kind-k8s-sriov-1.17.0
make cluster-down
```

This destroys the whole cluster.

60 changes: 60 additions & 0 deletions cluster-up/cluster/kind-k8s-sriov-1.17.0/TROUBLESHOOTING.md
@@ -0,0 +1,60 @@
# How to troubleshoot a failing kind job

If logging and output artifacts are not enough, there is a way to connect to a running CI pod and troubleshoot directly from there.

## Pre-requisites

- A working (enabled) account on the [CI cluster](https://shift.ovirt.org), specifically enabled to the `kubevirt-prow-jobs` project.
- The [mkpj tool](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/mkpj) installed

## Launching a custom job

Through the `mkpj` tool, it's possible to craft a custom Prow Job that can be executed on the CI cluster.

Just `go get` it by running `go get k8s.io/test-infra/prow/cmd/mkpj`.

Then run the following command from a checkout of the [project-infra repo](https://github.com/kubevirt/project-infra):

```bash
mkpj --pull-number $KUBEVIRTPRNUMBER -job pull-kubevirt-e2e-kind-k8s-sriov-1.14.2 -job-config-path github/ci/prow/files/jobs/kubevirt/kubevirt-presubmits.yaml --config-path github/ci/prow/files/config.yaml > debugkind.yaml
```

You will end up with a ProwJob manifest in the `debugkind.yaml` file.

It's strongly recommended to rename the job by setting `metadata.name` to something more recognizable, as that makes it easier to find and debug the corresponding pod.

`$KUBEVIRTPRNUMBER` can be the number of an actual PR on the [kubevirt repo](https://github.com/kubevirt/kubevirt).

If you just want to debug the cluster provided by the CI, it's recommended to override the entry point, either in the test PR you are instrumenting (a good sample can be found [here](https://github.com/kubevirt/kubevirt/pull/3022)) or directly in the prow job's manifest.

Remember that we want the cluster to be long-lived, so a long sleep must be provided as part of the entry point.

Make sure you switch to the `kubevirt-prow-jobs` project, and apply the manifest:

```bash
kubectl apply -f debugkind.yaml
```

You will end up with a ProwJob object, and a pod with the same name you gave to the ProwJob.

Once the pod is up & running, connect to it via bash:

```bash
kubectl exec -it debugprowjobpod bash
```

### Logistics

Once you are in the pod, you'll be able to troubleshoot what's happening in the environment where CI runs its tests.

Run the following to bring up a [kind](https://github.com/kubernetes-sigs/kind) cluster with a single-node setup and the SR-IOV operator ready to go (if the job itself hasn't already done so).

```bash
KUBEVIRT_PROVIDER=kind-k8s-sriov-1.17.0 make cluster-up
```

The kubeconfig file will be available under `/root/.kube/kind-config-sriov`.

The `kubectl` binary is already on board and in `$PATH`.

The container acting as node is the one named `sriov-control-plane`. You can even see what's in there by running `docker exec -it sriov-control-plane bash`.