Merge pull request kubevirt#5351 from awels/add_volume_virtctl
Support hotplug with virtctl
kubevirt-bot authored Apr 8, 2021
2 parents ca3f1d6 + 701ae7f commit c8471c6
Showing 7 changed files with 566 additions and 27 deletions.
131 changes: 131 additions & 0 deletions docs/hotplug.md
@@ -0,0 +1,131 @@
# Hotplug Volumes

KubeVirt now supports hotplugging volumes into a running Virtual Machine Instance (VMI). The volume must be either a block volume or contain a disk image file just like any other regular volume. When a VM that has hotplugged volumes is rebooted, the hotplugged volumes will NOT be attached to the restarted VM unless the volumes are persisted.

## Enable feature gate

In order to enable volume hotplugging, you must add HotplugVolumes to the list of enabled featureGates:

```yaml
spec:
  configuration:
    developerConfiguration:
      featureGates:
      - HotplugVolumes
```
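One convenient way to apply this is to patch the KubeVirt custom resource directly. The sketch below assumes the CR is named `kubevirt` in the `kubevirt` namespace, which is a common default but may differ in your installation.
```bash
# Hypothetical patch; adjust the resource name and namespace to your installation.
# Note: a JSON merge patch replaces the whole featureGates list, so include any
# gates that are already enabled.
kubectl patch kubevirt kubevirt -n kubevirt --type merge \
  -p '{"spec":{"configuration":{"developerConfiguration":{"featureGates":["HotplugVolumes"]}}}}'
```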
Once the feature gate is enabled, you will be able to hotplug volumes into a running VMI.

## Virtctl support

In order to hotplug a volume, you must first prepare one. This can be done with a DataVolume (DV). In this example we will use a blank DV to add some extra storage to a running VMI:
```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-volume-hotplug
spec:
  source:
    blank: {}
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
```
In this example we are using the ReadWriteOnce accessMode and the default Filesystem volume mode. Volume hotplugging supports all combinations of Block volume mode and ReadWriteMany/ReadWriteOnce/ReadOnlyMany accessModes, provided your storage supports the combination; a Block-mode variant is sketched below.
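As an illustrative sketch (not part of this change), the same blank DV could request Block volume mode, assuming your storage class supports it:
```bash
# Hypothetical Block-mode variant of the blank DV above.
cat <<EOF | kubectl create -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-volume-hotplug-block
spec:
  source:
    blank: {}
  pvc:
    volumeMode: Block
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
EOF
```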
### Addvolume
Now let's assume we have started a VMI like the [Fedora VMI in examples](examples/vmi-fedora.yaml) and that the name of the VMI is 'vmi-fedora'. We can add the blank volume defined above to this running VMI by using the 'addvolume' command available with virtctl:
```bash
$ virtctl addvolume vmi-fedora --volume-name=example-volume-hotplug
```

This will hotplug the volume into the running VMI and set the serial of the disk to the volume name. In this example it is set to example-volume-hotplug.

#### Serial
You can change the serial of the disk by specifying the --serial parameter, for example:
```bash
$ virtctl addvolume vmi-fedora --volume-name=example-volume-hotplug --serial=1234567890
```

The serial will be used in the guest, so you can identify the disk inside the guest by its serial. For instance, in Fedora the entry under /dev/disk/by-id will contain the serial:
```bash
$ virtctl console vmi-fedora

Fedora 32 (Cloud Edition)
Kernel 5.6.6-300.fc32.x86_64 on an x86_64 (ttyS0)

SSH host key: SHA256:c8ik1A9F4E7AxVrd6eE3vMNOcMcp6qBxsf8K30oC/C8 (ECDSA)
SSH host key: SHA256:fOAKptNAH2NWGo2XhkaEtFHvOMfypv2t6KIPANev090 (ED25519)
eth0: 10.244.196.144 fe80::d8b7:51ff:fec4:7099
vmi-fedora login:fedora
Password:fedora
[fedora@vmi-fedora ~]$ ls /dev/disk/by-id
scsi-0QEMU_QEMU_HARDDISK_1234567890
[fedora@vmi-fedora ~]$
```
As you can see the serial is part of the disk name, so you can uniquely identify it.
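If you want to actually use the new disk from inside the guest, a typical next step is to create a filesystem on it and mount it; the commands below are an illustrative sketch using the by-id path from the listing above:
```bash
# Inside the guest; not part of the original example.
sudo mkfs.ext4 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_1234567890
sudo mkdir -p /mnt/hotplug
sudo mount /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_1234567890 /mnt/hotplug
```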

### Persist
In some cases you want a hotplugged volume to become part of the standard disks after a restart of the VM, for instance when you have added some permanent storage to the VM. We also assume that the running VMI has a matching VM that defines its specification. You can call the addvolume command with the --persist flag; this will update the VM domain disks section in addition to updating the VMI domain disks. This means that when you restart the VM, the disk is already defined in the VM, and thus in the new VMI.

```bash
$ virtctl addvolume vm-fedora --volume-name=example-volume-hotplug --persist
```

In the VM spec this will now show up as a new disk:
```yaml
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      - disk:
          bus: scsi
        name: example-volume-hotplug
    machine:
      type: ""
```
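You can also verify the persisted disk from outside the guest by inspecting the VM spec, for example with a jsonpath query (a sketch, assuming kubectl access to the cluster):
```bash
# List the disk names defined in the VM spec; example-volume-hotplug should appear.
kubectl get vm vm-fedora -o jsonpath='{.spec.template.spec.domain.devices.disks[*].name}'
```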
### Removevolume
In addition to hotplugging a volume, you can also unplug it by using the 'removevolume' command available with virtctl:
```bash
$ virtctl removevolume vmi-fedora --volume-name=example-volume-hotplug
```
Note: You can only unplug volumes that were dynamically added with addvolume, or using the API.
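The removevolume command in this change also registers a --persist flag (see the vm.go changes below), which routes the removal through the VM instead of the VMI. A sketch of removing a volume that was previously added with --persist:
```bash
$ virtctl removevolume vm-fedora --volume-name=example-volume-hotplug --persist
```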

### VolumeStatus
VMI objects have a new volumeStatus field in their status. This is an array containing an entry for each disk, hotplugged or not. For example, after hotplugging the volume in the addvolume example, the VMI status will contain this:
```yaml
volumeStatus:
- name: cloudinitdisk
  target: vdb
- name: containerdisk
  target: vda
- hotplugVolume:
    attachPodName: hp-volume-7fmz4
    attachPodUID: 62a7f6bf-474c-4e25-8db5-1db9725f0ed2
  message: Successfully attach hotplugged volume volume-hotplug to VM
  name: example-volume-hotplug
  phase: Ready
  reason: VolumeReady
  target: sda
```
vda is the container disk that contains the Fedora OS, and vdb is the cloudinit disk. As you can see, those entries just contain the name and the target used when assigning them to the VM. The target is the value passed to QEMU when specifying the disks. That value is unique for the VM and does *NOT* represent the naming inside the guest; for instance, for a Windows guest OS the target has no meaning. The same is true for hotplugged volumes: the target is just a unique identifier meant for QEMU, and inside the guest the disk can be assigned a different name.
The hotplugVolume entry has some extra information that regular volume statuses do not have. The attachPodName is the name of the pod that was used to attach the volume to the node the VMI is running on. If this pod is deleted, it will also stop the VMI, as we cannot guarantee the volume will remain attached to the node. The other fields are similar to conditions and indicate the status of the hotplug process. Once a volume is ready, it can be used by the VM.
Note: Currently every hotplugged volume requires an additional pod to be created.
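To watch the hotplug progress from the command line, you can query the VMI status and the attach pod directly; the commands below are a sketch, and the pod name will differ in your cluster:
```bash
# Show the volumeStatus array of the running VMI.
kubectl get vmi vmi-fedora -o jsonpath='{.status.volumeStatus}'
# Inspect the attach pod reported in attachPodName.
kubectl get pod hp-volume-7fmz4
```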
## Live Migration
Currently Live Migration is disabled for any VMI that has volumes hotplugged into it. This limitation will be removed in a future release.
2 changes: 2 additions & 0 deletions pkg/virtctl/root.go
@@ -76,6 +76,8 @@ func NewVirtctlCommand() *cobra.Command {
        vm.NewGuestOsInfoCommand(clientConfig),
        vm.NewUserListCommand(clientConfig),
        vm.NewFSListCommand(clientConfig),
        vm.NewAddVolumeCommand(clientConfig),
        vm.NewRemoveVolumeCommand(clientConfig),
        pause.NewPauseCommand(clientConfig),
        pause.NewUnpauseCommand(clientConfig),
        expose.NewExposeCommand(clientConfig),
7 changes: 7 additions & 0 deletions pkg/virtctl/vm/BUILD.bazel
@@ -10,6 +10,8 @@ go_library(
        "//staging/src/kubevirt.io/client-go/api/v1:go_default_library",
        "//staging/src/kubevirt.io/client-go/kubecli:go_default_library",
        "//vendor/github.com/spf13/cobra:go_default_library",
        "//vendor/k8s.io/api/core/v1:go_default_library",
        "//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library",
        "//vendor/k8s.io/client-go/tools/clientcmd:go_default_library",
    ],
)
@@ -23,12 +25,17 @@ go_test(
    embed = [":go_default_library"],
    deps = [
        "//staging/src/kubevirt.io/client-go/api/v1:go_default_library",
        "//staging/src/kubevirt.io/client-go/generated/containerized-data-importer/clientset/versioned/fake:go_default_library",
        "//staging/src/kubevirt.io/client-go/kubecli:go_default_library",
        "//staging/src/kubevirt.io/client-go/testutils:go_default_library",
        "//tests:go_default_library",
        "//vendor/github.com/golang/mock/gomock:go_default_library",
        "//vendor/github.com/onsi/ginkgo:go_default_library",
        "//vendor/github.com/onsi/ginkgo/extensions/table:go_default_library",
        "//vendor/github.com/onsi/gomega:go_default_library",
        "//vendor/k8s.io/api/core/v1:go_default_library",
        "//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library",
        "//vendor/k8s.io/client-go/kubernetes/fake:go_default_library",
        "//vendor/kubevirt.io/containerized-data-importer/pkg/apis/core/v1alpha1:go_default_library",
    ],
)
154 changes: 146 additions & 8 deletions pkg/virtctl/vm/vm.go
@@ -20,10 +20,14 @@
package vm

import (
    "context"
    "encoding/json"
    "fmt"
    "strings"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    v1 "kubevirt.io/client-go/api/v1"

    "github.com/spf13/cobra"
@@ -34,19 +38,26 @@ import (
)

const (
    COMMAND_START = "start"
    COMMAND_STOP = "stop"
    COMMAND_RESTART = "restart"
    COMMAND_MIGRATE = "migrate"
    COMMAND_RENAME = "rename"
    COMMAND_GUESTOSINFO = "guestosinfo"
    COMMAND_USERLIST = "userlist"
    COMMAND_FSLIST = "fslist"
    COMMAND_START        = "start"
    COMMAND_STOP         = "stop"
    COMMAND_RESTART      = "restart"
    COMMAND_MIGRATE      = "migrate"
    COMMAND_RENAME       = "rename"
    COMMAND_GUESTOSINFO  = "guestosinfo"
    COMMAND_USERLIST     = "userlist"
    COMMAND_FSLIST       = "fslist"
    COMMAND_ADDVOLUME    = "addvolume"
    COMMAND_REMOVEVOLUME = "removevolume"

    volumeNameArg = "volume-name"
)

var (
    forceRestart bool
    gracePeriod  int = -1
    volumeName   string
    serial       string
    persist      bool
)

func NewStartCommand(clientConfig clientcmd.ClientConfig) *cobra.Command {
@@ -171,6 +182,67 @@ func NewFSListCommand(clientConfig clientcmd.ClientConfig) *cobra.Command {
    return cmd
}

func NewAddVolumeCommand(clientConfig clientcmd.ClientConfig) *cobra.Command {
    cmd := &cobra.Command{
        Use: "addvolume VMI",
        Short: "add a volume to a running VM",
        Example: usageAddVolume(),
        Args: templates.ExactArgs("addvolume", 1),
        RunE: func(cmd *cobra.Command, args []string) error {
            c := Command{command: COMMAND_ADDVOLUME, clientConfig: clientConfig}
            return c.Run(args)
        },
    }
    cmd.SetUsageTemplate(templates.UsageTemplate())
    cmd.Flags().StringVar(&volumeName, volumeNameArg, "", "name used in volumes section of spec")
    cmd.MarkFlagRequired(volumeNameArg)
    cmd.Flags().StringVar(&serial, "serial", "", "serial number you want to assign to the disk")
    cmd.Flags().BoolVar(&persist, "persist", false, "if set, the added volume will be persisted in the VM spec (if it exists)")

    return cmd
}

func NewRemoveVolumeCommand(clientConfig clientcmd.ClientConfig) *cobra.Command {
    cmd := &cobra.Command{
        Use: "removevolume VMI",
        Short: "remove a volume from a running VM",
        Example: usage(COMMAND_REMOVEVOLUME),
        Args: templates.ExactArgs("removevolume", 1),
        RunE: func(cmd *cobra.Command, args []string) error {
            c := Command{command: COMMAND_REMOVEVOLUME, clientConfig: clientConfig}
            return c.Run(args)
        },
    }
    cmd.SetUsageTemplate(templates.UsageTemplate())
    cmd.Flags().StringVar(&volumeName, volumeNameArg, "", "name used in volumes section of spec")
    cmd.MarkFlagRequired(volumeNameArg)
    cmd.Flags().BoolVar(&persist, "persist", false, "if set, the added volume will be persisted in the VM spec (if it exists)")
    return cmd
}

func getVolumeSourceFromVolume(volumeName, namespace string, virtClient kubecli.KubevirtClient) (*v1.HotplugVolumeSource, error) {
    //Check if data volume exists.
    _, err := virtClient.CdiClient().CdiV1alpha1().DataVolumes(namespace).Get(context.TODO(), volumeName, metav1.GetOptions{})
    if err == nil {
        return &v1.HotplugVolumeSource{
            DataVolume: &v1.DataVolumeSource{
                Name: volumeName,
            },
        }, nil
    }
    // DataVolume not found, try PVC
    _, err = virtClient.CoreV1().PersistentVolumeClaims(namespace).Get(context.TODO(), volumeName, metav1.GetOptions{})
    if err == nil {
        return &v1.HotplugVolumeSource{
            PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                ClaimName: volumeName,
            },
        }, nil
    }
    // Neither return error
    return nil, fmt.Errorf("Volume %s is not a DataVolume or PersistentVolumeClaim", volumeName)
}

type Command struct {
    clientConfig clientcmd.ClientConfig
    command string
@@ -194,6 +266,68 @@ func usage(cmd string) string {
    return usage
}

func usageAddVolume() string {
usage := ` #Dynamically attach a volume to a running VM.
{{ProgramName}} addvolume fedora-dv --volume-name=example-dv
#Dynamically attach a volume to a running VM giving it a serial number to identify the volume inside the guest.
{{ProgramName}} addvolume fedora-dv --volume-name=example-dv --serial=1234567890
#Dynamically attach a volume to a running VM and persisting it in the VM spec. At next VM restart the volume will be attached like any other volume.
{{ProgramName}} addvolume fedora-dv --volume-name=example-dv --persist
`
return usage
}

func addVolume(vmiName, volumeName, namespace string, virtClient kubecli.KubevirtClient) error {
    volumeSource, err := getVolumeSourceFromVolume(volumeName, namespace, virtClient)
    if err != nil {
        return fmt.Errorf("error adding volume, %v", err)
    }
    hotplugRequest := &v1.AddVolumeOptions{
        Name: volumeName,
        Disk: &v1.Disk{
            DiskDevice: v1.DiskDevice{
                Disk: &v1.DiskTarget{
                    Bus: "scsi",
                },
            },
        },
        VolumeSource: volumeSource,
    }
    if serial != "" {
        hotplugRequest.Disk.Serial = serial
    } else {
        hotplugRequest.Disk.Serial = volumeName
    }
    if !persist {
        err = virtClient.VirtualMachineInstance(namespace).AddVolume(vmiName, hotplugRequest)
    } else {
        err = virtClient.VirtualMachine(namespace).AddVolume(vmiName, hotplugRequest)
    }
    if err != nil {
        return fmt.Errorf("error adding volume, %v", err)
    }
    return nil
}

func removeVolume(vmiName, volumeName, namespace string, virtClient kubecli.KubevirtClient) error {
    var err error
    if !persist {
        err = virtClient.VirtualMachineInstance(namespace).RemoveVolume(vmiName, &v1.RemoveVolumeOptions{
            Name: volumeName,
        })
    } else {
        err = virtClient.VirtualMachine(namespace).RemoveVolume(vmiName, &v1.RemoveVolumeOptions{
            Name: volumeName,
        })
    }
    if err != nil {
        return fmt.Errorf("Error removing volume, %v", err)
    }
    return nil
}

func (o *Command) Run(args []string) error {

    vmiName := args[0]
@@ -287,6 +421,10 @@ func (o *Command) Run(args []string) error {

        fmt.Printf("%s\n", string(data))
        return nil
    case COMMAND_ADDVOLUME:
        return addVolume(args[0], volumeName, namespace, virtClient)
    case COMMAND_REMOVEVOLUME:
        return removeVolume(args[0], volumeName, namespace, virtClient)
    }

    fmt.Printf("VM %s was scheduled to %s\n", vmiName, o.command)