LVM RAID raid0 level support (linux-system-roles#272)
* Add workaround for missing LVM raid0 support in blivet

Blivet supports creating LVs with segment type "raid0" but it is
not in the list of supported RAID levels. This will be fixed in
blivet, see storaged-project/blivet#1047

* Add a test for LVM RAID raid0 level

* README: Remove "striped" from the list of supported RAID for pools

We use MD RAID for RAID on the pool level, and MD RAID doesn't support
the "striped" level.

* README: Clarify supported volume RAID levels

We support different levels for LVM RAID and MD RAID.
vojtechtrefny authored Jun 2, 2022
1 parent c91fbf3 commit 8b868a3
Showing 3 changed files with 66 additions and 2 deletions.
README.md (5 additions, 2 deletions)
@@ -53,7 +53,7 @@ device node basename (like `sda` or `mpathb`), /dev/disk/ symlink
##### `raid_level`
When used with `type: lvm` it manages a volume group on top of an MD RAID array of the given level.
The input `disks` are in this case used as RAID members.
-Accepted values are: `linear`, `striped`, `raid0`, `raid1`, `raid4`, `raid5`, `raid6`, `raid10`
+Accepted values are: `linear`, `raid0`, `raid1`, `raid4`, `raid5`, `raid6`, `raid10`

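As an editorial illustration of the pool-level `raid_level` described above (not part of this commit), a minimal playbook sketch; the pool name, volume name, disk names, size, and mount point are hypothetical:

```yaml
# Sketch only: an LVM pool whose volume group sits on an MD RAID1 array
# assembled from the listed disks (disk names are hypothetical).
- hosts: all
  become: true
  tasks:
    - include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: example_vg
            type: lvm
            raid_level: raid1
            disks: [sdb, sdc]
            volumes:
              - name: example_lv
                size: 4g
                mount_point: /mnt/example
```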
##### `volumes`
This is a list of volumes that belong to the current pool. It follows the
@@ -135,7 +135,10 @@ Specifies RAID level. LVM RAID can be created as well.
"Regular" RAID volume requires type to be `raid`.
LVM RAID needs that volume has `storage_pools` parent with type `lvm`,
`raid_disks` need to be specified as well.
-Accepted values are: `linear` (N/A for LVM RAID), `striped`, `raid0`, `raid1`, `raid4`, `raid5`, `raid6`, `raid10`
+Accepted values are:
+* for LVM RAID volume: `raid0`, `raid1`, `raid4`, `raid5`, `raid6`, `raid10`, `striped`, `mirror`
+* for RAID volume: `linear`, `raid0`, `raid1`, `raid4`, `raid5`, `raid6`, `raid10`

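For comparison with the pool-level example above, a minimal sketch of an LVM RAID volume (the new test added below exercises essentially this shape); pool name, volume name, disk names, size, and mount point are hypothetical:

```yaml
# Sketch only: a raid0 LVM RAID LV inside an ordinary LVM pool.
# raid_level and raid_disks are set on the volume, not the pool
# (disk names are hypothetical).
- hosts: all
  become: true
  tasks:
    - include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: example_vg
            type: lvm
            disks: [sdb, sdc]
            volumes:
              - name: example_lv
                size: 4g
                mount_point: /mnt/example
                raid_level: raid0
                raid_disks: [sdb, sdc]
```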
__WARNING__: Changing `raid_level` for a volume is a destructive operation, meaning
all data on that volume will be lost as part of the process of
removing old and adding new RAID. RAID reshaping is currently not
library/blivet.py (7 additions, 0 deletions)
@@ -117,6 +117,7 @@
try:
from blivet3 import Blivet
from blivet3.callbacks import callbacks
+from blivet3 import devicelibs
from blivet3 import devices
from blivet3.deviceaction import ActionConfigureFormat
from blivet3.devicefactory import DEFAULT_THPOOL_RESERVE
@@ -132,6 +133,7 @@
try:
from blivet import Blivet
from blivet.callbacks import callbacks
+from blivet import devicelibs
from blivet import devices
from blivet.deviceaction import ActionConfigureFormat
from blivet.devicefactory import DEFAULT_THPOOL_RESERVE
@@ -153,6 +155,11 @@
set_up_logging()
log = logging.getLogger(BLIVET_PACKAGE + ".ansible")

+# XXX add support for LVM RAID raid0 level
+devicelibs.lvm.raid_levels.add_raid_level(devicelibs.raid.RAID0)
+if "raid0" not in devicelibs.lvm.raid_seg_types:
+    devicelibs.lvm.raid_seg_types.append("raid0")


MAX_TRIM_PERCENT = 2

tests/tests_create_raid_pool_then_remove.yml (54 additions, 0 deletions)
@@ -150,3 +150,57 @@
raid_disks: "{{ [unused_disks[0], unused_disks[1]] }}"

- include_tasks: verify-role-results.yml

- name: Create a RAID0 lvm raid device
include_role:
name: linux-system-roles.storage
vars:
storage_pools:
- name: vg1
disks: "{{ unused_disks }}"
type: lvm
state: present
volumes:
- name: lv1
size: "{{ volume1_size }}"
mount_point: "{{ mount_location1 }}"
raid_disks: "{{ [unused_disks[0], unused_disks[1]] }}"
raid_level: raid0

- include_tasks: verify-role-results.yml

- name: Repeat the previous invocation to verify idempotence
include_role:
name: linux-system-roles.storage
vars:
storage_pools:
- name: vg1
disks: "{{ unused_disks }}"
type: lvm
state: present
volumes:
- name: lv1
size: "{{ volume1_size }}"
mount_point: "{{ mount_location1 }}"
raid_level: raid0
raid_disks: "{{ [unused_disks[0], unused_disks[1]] }}"

- include_tasks: verify-role-results.yml

- name: Remove the device created above
include_role:
name: linux-system-roles.storage
vars:
storage_pools:
- name: vg1
disks: "{{ unused_disks }}"
type: lvm
state: absent
volumes:
- name: lv1
size: "{{ volume1_size }}"
mount_point: "{{ mount_location1 }}"
raid_level: raid0
raid_disks: "{{ [unused_disks[0], unused_disks[1]] }}"

- include_tasks: verify-role-results.yml
