A declarative Kubernetes system to import Virtual Machine images for use with Kubevirt.
This project is designed with Kubevirt in mind and provides a declarative method for importing VM images into a Kubernetes cluster. This approach supports two main use-cases:
- a cluster administrator can build an abstract registry of immutable images (referred to as "Golden Images") which can be cloned and later consumed by Kubevirt, or
- an ad-hoc user (granted access) can import a VM image into their own namespace and feed this image directly to Kubevirt, bypassing the cloning step.
For an in-depth look at the system and workflow, see the Design documentation.
The importer is capable of performing certain functions that streamline its use with Kubevirt. It automatically decompresses gzip and xz files and unpacks tar archives. Also, qcow2 images are converted into the raw image format needed by Kubevirt (an equivalent manual command is shown after the format list).
Supported file formats are:
- .tar
- .gz
- .xz
- .img
- .iso
- .qcow2
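For reference, the qcow2-to-raw conversion the importer performs automatically has the same effect as running qemu-img by hand; the file names here are illustrative:

$ qemu-img convert -f qcow2 -O raw image.qcow2 image.img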
- A running Kubernetes cluster with roles and role bindings implementing the security necessary for the CDI controller to watch PVCs across all namespaces.
- A storage class and provisioner.
- An HTTP or S3 file server hosting VM images.
- An optional "golden" namespace acting as the image registry. The `default` namespace is fine for tire kicking.
$ git clone https://github.com/kubevirt/containerized-data-importer.git
Or, download the individual manifests:
$ mkdir cdi-manifests && cd cdi-manifests
$ wget https://raw.githubusercontent.com/kubevirt/containerized-data-importer/kubevirt-centric-readme/manifests/example/golden-pvc.yaml
$ wget https://raw.githubusercontent.com/kubevirt/containerized-data-importer/kubevirt-centric-readme/manifests/example/endpoint-secret.yaml
$ wget https://raw.githubusercontent.com/kubevirt/containerized-data-importer/kubevirt-centric-readme/manifests/controller/controller/cdi-controller-deployment.yaml
Deploying the CDI controller is straightforward. Choose the namespace where the controller will run and ensure that this namespace has cluster-wide permission to watch all PVCs. In this document the default namespace is used, but in a production setup a namespace that is inaccessible to regular users should be used instead. See Protecting the Golden Image Namespace for creating a secure CDI controller namespace.
$ kubectl -n default create -f https://raw.githubusercontent.com/kubevirt/containerized-data-importer/master/manifests/cdi-controller-deployment.yaml
Note: The CDI controller is a required part of this work flow.
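To verify that the controller came up, check the deployment; the name cdi-deployment is assumed from the example manifest and matches the pod prefix used in the monitoring step later:

$ kubectl -n default get deployment cdi-deployment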
Make copies of the example manifests for editing. The necessary files are:
- golden-pvc.yaml
- endpoint-secret.yaml
In golden-pvc.yaml, set the following fields (an illustrative manifest follows the note below):
- `storageClassName:` The default StorageClass will be used if not set. Otherwise, set to a desired StorageClass.
- `kubevirt.io/storage.import.endpoint:` The full URL to the VM image, in the format `http://www.myUrl.com/path/of/data` or `s3://bucketName/fileName`.
- `kubevirt.io/storage.import.secretName:` (Optional) The name of the secret containing the authentication credentials required by the file server.
Note: Only set these values if the file server requires authentication credentials.
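For illustration, a minimal golden-pvc.yaml might look like the following; the name, size, and endpoint URL are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: golden-pvc
  annotations:
    kubevirt.io/storage.import.endpoint: "https://www.myUrl.com/path/of/data"
    # kubevirt.io/storage.import.secretName: endpoint-secret   # only if the server requires credentials
spec:
  # storageClassName: <LOCAL-STORAGE-CLASS>   # omit to use the default StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi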
In endpoint-secret.yaml, set the following fields (an illustrative manifest follows):
- `metadata.name:` Arbitrary name of the secret. Must match the PVC's `kubevirt.io/storage.import.secretName:` value.
- `accessKeyId:` Contains the endpoint's key and/or user name. This value must be base64 encoded with no extraneous linefeeds. Use `echo -n "xyzzy" | base64` or `printf "xyzzy" | base64` to avoid a trailing linefeed.
- `secretKey:` The endpoint's secret or password, again base64 encoded with no extraneous linefeeds.
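A minimal endpoint-secret.yaml might look like this; both values below are the base64 encoding of the placeholder "xyzzy":

apiVersion: v1
kind: Secret
metadata:
  name: endpoint-secret   # must match the PVC's kubevirt.io/storage.import.secretName
type: Opaque
data:
  accessKeyId: eHl6enk=   # echo -n "xyzzy" | base64
  secretKey: eHl6enk=     # base64-encoded secret or password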
- (Optional) Create the namespace where the controller will run:
  $ kubectl create ns <CDI-NAMESPACE>
- (Optional) Create the endpoint secret in the triggering PVC's namespace:
  $ kubectl -n <NAMESPACE> create -f endpoint-secret.yaml
- Deploy the CDI controller:
  $ kubectl -n <CDI-NAMESPACE> create -f manifests/controller/cdi-controller-deployment.yaml
- Create the persistent volume claim to trigger the import process:
  $ kubectl -n <NAMESPACE> create -f golden-pvc.yaml
- Monitor the cdi-controller:
  $ kubectl -n <CDI-NAMESPACE> logs cdi-deployment-<RANDOM-STRING>
- Monitor the importer pod:
  $ kubectl -n <NAMESPACE> logs importer-<PVC-NAME>   # pod name is shown in the controller log above
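Once the importer pod completes, the image is available on the PVC's backing volume. A quick sanity check is to inspect the claim (assuming the golden-pvc name from the example manifest):

$ kubectl -n <NAMESPACE> get pvc golden-pvc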
CDI needs certain permissions to be able to execute properly; primarily, the `cluster-admin` role should be applied to the service account being used, through the Kubernetes RBAC model. For example, if the CDI controller is running in a namespace called `cdi` and the `default` service account is being used, then the following RBAC should be applied:
$ kubectl create clusterrolebinding <BINDING-NAME> --clusterrole=cluster-admin --serviceaccount=<NAMESPACE>:default
For example:
$ kubectl create clusterrolebinding c-golden-images-default --clusterrole=cluster-admin --serviceaccount=cdi:default
NOTE: This gives full cluster-admin access to this binding and may not be appropriate for production environments.
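A less permissive alternative is to bind the service account to a dedicated ClusterRole. The sketch below is an assumption based on the controller's documented need to watch PVCs across all namespaces and to spawn importer pods; it is not the authoritative permission set, so consult the project's manifests before relying on it:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cdi-controller   # hypothetical role name
rules:
  # watch PVCs cluster-wide to detect the import annotations
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  # create and clean up importer pods
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "delete"]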
Currently there is no support for automatically applying Kubernetes ResourceQuotas and Limits to desired namespaces and resources. Administrators therefore need to manually prevent new namespaces from using the StorageClass associated with CDI/Kubevirt and its cloning capabilities. This capability of automatically restricting resources is planned for future releases. Below are some examples of how one might achieve this level of resource protection:
- Lock Down StorageClass Usage for Namespace:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: protect-mynamespace
spec:
  hard:
    <STORAGE-CLASS-NAME>.storageclass.storage.k8s.io/requests.storage: "0"
NOTE: `<STORAGE-CLASS-NAME>.storageclass.storage.k8s.io/persistentvolumeclaims: "0"` would also accomplish the same effect by not allowing any PVC requests against the StorageClass for this namespace.
- Open Up StorageClass Usage for Namespace:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: protect-mynamespace
spec:
  hard:
    <STORAGE-CLASS-NAME>.storageclass.storage.k8s.io/requests.storage: "500Gi"
NOTE: `<STORAGE-CLASS-NAME>.storageclass.storage.k8s.io/persistentvolumeclaims: "4"` could be used instead; this would allow only 4 PVC requests in this namespace, and anything over that would be denied.
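Either quota can then be applied to the namespace being restricted in the usual way; the file name here is illustrative:

$ kubectl -n <NAMESPACE> create -f protect-mynamespace.yaml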