Add OpenShift deployment in ephemeral containers
- add provider
- add README
- update all other providers README
- add prefix for relevant providers

Signed-off-by: Lukianov Artyom <[email protected]>
Lukianov Artyom committed Mar 14, 2018
1 parent cb59ef1 commit 8200a36
Showing 9 changed files with 196 additions and 28 deletions.
14 changes: 4 additions & 10 deletions .gitignore
@@ -4,13 +4,10 @@ tools/openapispec/openapispec
**/bin
bin/*
.vagrant
cluster/vagrant-kubernetes/.kubeconfig
cluster/vagrant-kubernetes/.kubectl
cluster/k8s-1.9.3/.kubeconfig
cluster/k8s-1.9.3/.kubectl
hack/config-provider-k8s-1.9.3.sh
cluster/vagrant-openshift/.kubeconfig
cluster/vagrant-openshift/.oc
cluster/*/.kubeconfig
cluster/*/.kubectl
cluster/*/.oc
hack/config-provider-*
cluster/.console.vv
build-tools/desc/desc
hack/config-local.sh
@@ -20,9 +17,6 @@ tags
hack/gen-swagger-doc/*.adoc
hack/gen-swagger-doc/*.md
hack/gen-swagger-doc/html5
hack/config-provider-local.sh
hack/config-provider-vagrant-kubernetes.sh
hack/config-provider-vagrant-openshift.sh
cluster/local/certs
**.swp
**.pem
2 changes: 1 addition & 1 deletion cluster/k8s-1.9.3/README.md
@@ -27,7 +27,7 @@ node02 NotReady <none> 5s v1.9.3

```bash
export PROVIDER=k8s-1.9.3
make cluster-up
make cluster-down
```

This destroys the whole cluster. Recreating the cluster is fast, since k8s is
16 changes: 9 additions & 7 deletions cluster/k8s-1.9.3/provider.sh
@@ -2,6 +2,8 @@

set -e

prefix=kubevirt-k8s-1.9.3

function _main_ip() {
echo 127.0.0.1
}
@@ -12,11 +14,11 @@ function up() {
# Add one, 0 here means no node at all, but in the kubevirt repo it means master-only
local num_nodes=${VAGRANT_NUM_NODES-0}
num_nodes=$((num_nodes + 1))
${_cli} run --nodes ${num_nodes} --tls-port 127.0.0.1:8443 --ssh-port 127.0.0.1:2201 --background --registry-port 127.0.0.1:5000 --prefix kubevirt --registry-volume kubevirt_registry --base "rmohr/kubeadm-1.9.3@sha256:d72fe14077e0a5fe47f917570e141536397feb92d5981333158178298396d01e"
${_cli} ssh node01 sudo chown vagrant:vagrant /etc/kubernetes/admin.conf
${_cli} run --nodes ${num_nodes} --tls-port 127.0.0.1:8443 --ssh-port 127.0.0.1:2201 --background --registry-port 127.0.0.1:5000 --prefix $prefix --registry-volume kubevirt_registry --base "rmohr/kubeadm-1.9.3@sha256:d72fe14077e0a5fe47f917570e141536397feb92d5981333158178298396d01e"
${_cli} ssh --prefix $prefix node01 sudo chown vagrant:vagrant /etc/kubernetes/admin.conf

chmod 0600 ${KUBEVIRT_PATH}cluster/k8s-1.9.3/vagrant.key
OPTIONS="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${KUBEVIRT_PATH}cluster/k8s-1.9.3/vagrant.key -P 2201"
chmod 0600 ${KUBEVIRT_PATH}cluster/vagrant.key
OPTIONS="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${KUBEVIRT_PATH}cluster/vagrant.key -P 2201"

# Copy k8s config and kubectl
scp ${OPTIONS} [email protected]:/usr/bin/kubectl ${KUBEVIRT_PATH}cluster/k8s-1.9.3/.kubectl
@@ -59,8 +61,8 @@ function build() {
local num_nodes=${VAGRANT_NUM_NODES-0}
num_nodes=$((num_nodes + 1))
for i in $(seq 1 ${num_nodes}); do
${_cli} ssh "node$(printf "%02d" ${i})" "echo \"${container}\" | xargs --max-args=1 sudo docker pull"
${_cli} ssh "node$(printf "%02d" ${i})" "echo \"${container_alias}\" | xargs --max-args=2 sudo docker tag"
${_cli} ssh --prefix $prefix "node$(printf "%02d" ${i})" "echo \"${container}\" | xargs --max-args=1 sudo docker pull"
${_cli} ssh --prefix $prefix "node$(printf "%02d" ${i})" "echo \"${container_alias}\" | xargs --max-args=2 sudo docker tag"
done
}

@@ -70,5 +72,5 @@ function _kubectl() {
}

function down() {
${_cli} rm
${_cli} rm --prefix $prefix
}
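The `build()` loop above derives node names from the loop counter with `printf "%02d"`, so the cli's zero-padded container names (`node01`, `node02`, …) line up under the new prefix. A minimal sketch of that naming convention (the pairing of prefix and node name is illustrative only, not a cli API):

```shell
# Illustrative only: reproduce the zero-padded node naming used in build().
prefix=kubevirt-k8s-1.9.3
for i in 1 2; do
    node="node$(printf "%02d" ${i})"
    echo "${prefix} -> ${node}"
done
# prints:
# kubevirt-k8s-1.9.3 -> node01
# kubevirt-k8s-1.9.3 -> node02
```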
47 changes: 47 additions & 0 deletions cluster/os-3.9.0-alpha.4/README.md
@@ -0,0 +1,47 @@
# OpenShift 3.9.0-alpha.4 in ephemeral containers

Provides a pre-deployed OpenShift Origin 3.9.0-alpha.4 purely in docker
containers with qemu. The provided VMs are completely ephemeral and are
recreated on every cluster restart. The KubeVirt containers are built on the
local machine and are then pushed to a registry which is exposed at
`localhost:5000`.
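Images reach the cluster through that registry: a locally built image is retagged with the `localhost:5000` prefix and pushed. A hedged sketch of the naming (the image name `virt-controller` and tag `devel` are examples, not names fixed by the provider; the commented `docker` commands require a running daemon):

```shell
# Compute the registry-qualified name for a locally built image.
# "virt-controller" and "devel" are illustrative examples.
registry=localhost:5000
name=virt-controller
tag=devel
target="${registry}/kubevirt/${name}:${tag}"
echo "${target}"
# prints: localhost:5000/kubevirt/virt-controller:devel
# docker tag kubevirt/${name}:${tag} "${target}"
# docker push "${target}"
```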

## Bringing the cluster up

You will need to add this line to `/etc/hosts` only once:

```bash
echo "127.0.0.1 node01" >> /etc/hosts
export PROVIDER=os-3.9.0-alpha.4
export VAGRANT_NUM_NODES=0 # currently only one node supported
make cluster-up
```
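`VAGRANT_NUM_NODES=0` still yields one node: the provider adds one to the value, following the kubevirt convention where 0 means a master-only cluster (for the cli, 0 would mean no nodes at all). The arithmetic, sketched:

```shell
# Mirror the provider's node-count logic: 0 in the environment means
# master-only, so one is always added before calling the cli.
num_nodes=${VAGRANT_NUM_NODES-0}
num_nodes=$((num_nodes + 1))
echo "containers to start: ${num_nodes}"
```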

The cluster can be accessed as usual:

```bash
$ cluster/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready master 11m v1.9.1+a0ce1bc657
```

## Bringing the cluster down

```bash
export PROVIDER=os-3.9.0-alpha.4
make cluster-down
```

This destroys the whole cluster. Recreating the cluster is fast, since OpenShift
is already pre-deployed. The only state which is kept is the state of the local
docker registry.

## Destroying the docker registry state

The docker registry survives a `make cluster-down`. Its state is stored in a
docker volume called `kubevirt_registry`. If the volume grows too big or
contains corrupt data, it can be deleted with

```bash
docker volume rm kubevirt_registry
```
73 changes: 73 additions & 0 deletions cluster/os-3.9.0-alpha.4/provider.sh
@@ -0,0 +1,73 @@
#!/bin/bash

set -e

prefix=kubevirt-os-3.9.0-alpha.4

function _main_ip() {
echo 127.0.0.1
}

_cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock rmohr/cli:latest'

function up() {
# Add one, 0 here means no node at all, but in the kubevirt repo it means master-only
local num_nodes=${VAGRANT_NUM_NODES-0}
num_nodes=$((num_nodes + 1))
${_cli} run --nodes ${num_nodes} --osp-port 127.0.0.1:8443 --ssh-port 127.0.0.1:2201 --background --registry-port 127.0.0.1:5000 --prefix $prefix --registry-volume kubevirt_registry --base "rmohr/os-3.9"

chmod 0600 ${KUBEVIRT_PATH}cluster/vagrant.key
OPTIONS="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${KUBEVIRT_PATH}cluster/vagrant.key -P 2201"

# Copy oc tool
scp ${OPTIONS} [email protected]:/usr/local/bin/oc ${KUBEVIRT_PATH}cluster/os-3.9.0-alpha.4/.oc
chmod u+x ${KUBEVIRT_PATH}cluster/os-3.9.0-alpha.4/.oc

# Login to OpenShift
export KUBECONFIG=${KUBEVIRT_PATH}cluster/os-3.9.0-alpha.4/.kubeconfig
${KUBEVIRT_PATH}cluster/os-3.9.0-alpha.4/.oc login $(_main_ip):8443 --insecure-skip-tls-verify=true -u admin -p admin

# Make sure that local config is correct
prepare_config
}

function prepare_config() {
BASE_PATH=${KUBEVIRT_PATH:-$PWD}
cat >hack/config-provider-os-3.9.0-alpha.4.sh <<EOF
master_ip=$(_main_ip)
docker_tag=devel
kubeconfig=${BASE_PATH}/cluster/os-3.9.0-alpha.4/.kubeconfig
docker_prefix=localhost:5000/kubevirt
manifest_docker_prefix=registry:5000/kubevirt
EOF
}

function build() {
# Build everything and publish it
${KUBEVIRT_PATH}hack/dockerized "DOCKER_TAG=${DOCKER_TAG} PROVIDER=${PROVIDER} ./hack/build-manifests.sh"
make build docker publish

# Make sure that all nodes use the newest images
container=""
container_alias=""
for arg in ${docker_images}; do
local name=$(basename $arg)
container="${container} ${manifest_docker_prefix}/${name}:${docker_tag}"
container_alias="${container_alias} ${manifest_docker_prefix}/${name}:${docker_tag} kubevirt/${name}:${docker_tag}"
done
local num_nodes=${VAGRANT_NUM_NODES-0}
num_nodes=$((num_nodes + 1))
for i in $(seq 1 ${num_nodes}); do
${_cli} ssh --prefix $prefix "node$(printf "%02d" ${i})" "echo \"${container}\" | xargs --max-args=1 sudo docker pull"
${_cli} ssh --prefix $prefix "node$(printf "%02d" ${i})" "echo \"${container_alias}\" | xargs --max-args=2 sudo docker tag"
done
}

function _kubectl() {
export KUBECONFIG=${KUBEVIRT_PATH}cluster/os-3.9.0-alpha.4/.kubeconfig
${KUBEVIRT_PATH}cluster/os-3.9.0-alpha.4/.oc "$@"
}

function down() {
${_cli} rm --prefix $prefix
}
33 changes: 29 additions & 4 deletions cluster/vagrant-kubernetes/README.md
@@ -1,5 +1,30 @@
The purpose of this folder is to hold all relevant stuff to deploy
KubeVirt on kubernetes cluster in a vagrant box.
# Kubernetes 1.9.3 in vagrant VM

Thus this folder primarily contains
- The deployment scripts for master and nodes
Starts a vagrant VM and deploys k8s 1.9.3 on it.
k8s is deployed only the first time the VM is started.

## Bringing the cluster up

```bash
export PROVIDER=vagrant-kubernetes
export VAGRANT_NUM_NODES=1
make cluster-up
```

The cluster can be accessed as usual:

```bash
$ cluster/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 31s v1.9.3
node0 NotReady <none> 5s v1.9.3
```

## Bringing the cluster down

```bash
export PROVIDER=vagrant-kubernetes
make cluster-down
```

This shuts down the vagrant VM without destroying it.
36 changes: 32 additions & 4 deletions cluster/vagrant-openshift/README.md
@@ -1,5 +1,33 @@
The purpose of this folder is to hold all relevant stuff to deploy
KubeVirt on OpenShift cluster in a vagrant box.
# OpenShift 3.9.0-alpha.4 in vagrant VM

Thus this folder primarily contains
- The deployment scripts for master and nodes
Starts a vagrant VM and deploys OpenShift Origin 3.9.0-alpha.4 on it.
OpenShift is deployed only the first time the VM is started.

## Bringing the cluster up

You will need to add this line to `/etc/hosts` only once:

```bash
echo "192.168.200.2 master" >> /etc/hosts
export PROVIDER=vagrant-openshift
export VAGRANT_NUM_NODES=1
make cluster-up
```

The cluster can be accessed as usual:

```bash
$ cluster/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 8m v1.9.1+a0ce1bc657
node0 Ready <none> 6m v1.9.1+a0ce1bc657
```

## Bringing the cluster down

```bash
export PROVIDER=vagrant-openshift
make cluster-down
```

This shuts down the vagrant VM without destroying it.
3 changes: 1 addition & 2 deletions cluster/vagrant-openshift/provider.sh
@@ -20,9 +20,8 @@ function up() {
scp $OPTIONS master:/usr/local/bin/oc ${KUBEVIRT_PATH}cluster/vagrant-openshift/.oc
chmod u+x cluster/vagrant-openshift/.oc

vagrant ssh master -c "sudo cat /etc/origin/master/openshift-master.kubeconfig" >${KUBEVIRT_PATH}cluster/vagrant-openshift/.kubeconfig

# Login to OpenShift
export KUBECONFIG=${KUBEVIRT_PATH}cluster/vagrant-openshift/.kubeconfig
cluster/vagrant-openshift/.oc login $(_main_ip):8443 --insecure-skip-tls-verify=true -u admin -p admin

# Make sure that local config is correct
File renamed without changes.
