How does oVirt handle disks across multiple iSCSI LUNs?

A possibly obvious question I can't find the answer to anywhere: how does oVirt allocate VM disk images when a storage domain has multiple LUNs? Are they allocated one per LUN, so that if a LUN runs out of space only the disks on that LUN will be unable to write? Or are they distributed across LUNs, so that if a single LUN fails (due to storage failure, etc.) the entire storage domain can be affected? Many thanks in advance, Peter

Anyone?

Hi Peter,

The question is somewhat unclear here. First of all, a storage domain on iSCSI maps 1:1 to a LUN, so 1 LUN = 1 storage domain. The storage domain is set up with LVM, and each VM disk is a Logical Volume. If the LUN/storage domain runs out of space, no new space can be allocated, but thick-provisioned VM disks could still be able to write, since their space is already fully allocated.

Jean-Louis
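A quick way to see this layout from a host is the standard LVM tooling. A minimal sketch, where <storage-domain-uuid> is a placeholder for your domain's UUID and the empty --devicesfile= bypasses the LVM devices file that oVirt sets up:

  # List each storage-domain VG, how many PVs (LUNs) back it, and its free space
  vgs --devicesfile= -o vg_name,pv_count,vg_size,vg_free

  # List the disk-image LVs inside one storage domain (the VG name is the domain UUID)
  lvs --devicesfile= -o lv_name,lv_size,lv_attr <storage-domain-uuid>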

Thanks Jean-Louis,

The system we're working with definitely has multiple LUNs in the one storage domain, hence the question (see the image link below).

Regards, Peter
https://imgur.com/a/Ukbe3fJ

Every LUN is a PV in LVM terms, and if you have multiple LUNs for a storage domain, then all those LUNs are combined into one single VG (the storage domain).
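To confirm the LUN-to-PV mapping on a host, something like this should work (a sketch; each row is one LUN, shown by its multipath device name, together with the VG/storage domain it belongs to):

  # Show which PVs (LUNs) belong to which VG (storage domain), with per-PV free space
  pvs --devicesfile= -o pv_name,vg_name,pv_size,pv_free

A storage domain built from several LUNs shows up here as several PVs sharing one VG name.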

Ok, got it, thanks so much!

A storage domain is exactly one LVM Volume Group (VG). Disks are created from volumes, which are LVM Logical Volumes (LVs). Each time you create a snapshot, oVirt creates a new volume, so a disk may have one or more LVs in the VG.

The volumes may be extended as more space is needed. Up to 4.5, oVirt extended the volumes in chunks of 1 GiB; since 4.5 it uses chunks of 2.5 GiB. So every disk may contain multiple chunks of different sizes, and these may be allocated anywhere in the VG's logical space, so they may be on any PV.

To understand how the chunks are allocated, you can inspect each LV like this:

# lvdisplay -m --devicesfile= bafd0f16-9aba-4f9f-ba90-46d3b8a29157/51de2d8b-b67e-4a91-bc68-a2c922bc7398
  --- Logical volume ---
  LV Path                /dev/bafd0f16-9aba-4f9f-ba90-46d3b8a29157/51de2d8b-b67e-4a91-bc68-a2c922bc7398
  LV Name                51de2d8b-b67e-4a91-bc68-a2c922bc7398
  VG Name                bafd0f16-9aba-4f9f-ba90-46d3b8a29157
  LV UUID                W0FAvX-EUDc-v7QR-4A2A-3aSX-yGC5-2jREeV
  LV Write Access        read/write
  LV Creation host, time host4, 2022-12-04 01:43:17 +0200
  LV Status              NOT available
  LV Size                200.00 GiB
  Current LE             1600
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto

  --- Segments ---
  Logical extents 0 to 796:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-02
    Physical extents    0 to 796

  Logical extents 797 to 1593:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-03
    Physical extents    0 to 796

  Logical extents 1594 to 1599:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-01
    Physical extents    49 to 54

Note that oVirt uses LVM devices files to prevent unwanted access to volumes by LVM commands. To disable the devices file temporarily you can use --devicesfile=.

After extending this disk by 10 GiB:

# lvdisplay -m --devicesfile= bafd0f16-9aba-4f9f-ba90-46d3b8a29157/51de2d8b-b67e-4a91-bc68-a2c922bc7398
  --- Logical volume ---
  LV Path                /dev/bafd0f16-9aba-4f9f-ba90-46d3b8a29157/51de2d8b-b67e-4a91-bc68-a2c922bc7398
  LV Name                51de2d8b-b67e-4a91-bc68-a2c922bc7398
  VG Name                bafd0f16-9aba-4f9f-ba90-46d3b8a29157
  LV UUID                W0FAvX-EUDc-v7QR-4A2A-3aSX-yGC5-2jREeV
  LV Write Access        read/write
  LV Creation host, time host4, 2022-12-04 01:43:17 +0200
  LV Status              NOT available
  LV Size                210.00 GiB
  Current LE             1680
  Segments               7
  Allocation             inherit
  Read ahead sectors     auto

  --- Segments ---
  Logical extents 0 to 796:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-02
    Physical extents    0 to 796

  Logical extents 797 to 1593:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-03
    Physical extents    0 to 796

  Logical extents 1594 to 1613:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-01
    Physical extents    49 to 68

  Logical extents 1614 to 1616:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-01
    Physical extents    244 to 246

  Logical extents 1617 to 1619:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-01
    Physical extents    154 to 156

  Logical extents 1620 to 1648:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-01
    Physical extents    177 to 205

  Logical extents 1649 to 1679:
    Type                linear
    Physical volume     /dev/mapper/0QEMU_QEMU_HARDDISK_data-fc-01
    Physical extents    531 to 561

I hope it helps.

Nir
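For a more compact view of the same mapping, lvs in segment mode should give one row per segment (a sketch, reusing the VG/LV names from the output above):

  # One row per segment: where each chunk of the LV starts and which PV (LUN) holds it
  lvs --devicesfile= --segments -o lv_name,seg_start_pe,seg_size,devices \
      bafd0f16-9aba-4f9f-ba90-46d3b8a29157/51de2d8b-b67e-4a91-bc68-a2c922bc7398

Any LV whose segments list more than one /dev/mapper device is spread across multiple LUNs, which is why a single failed LUN can affect disks anywhere in the domain.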

Thanks Nir, that's super clear and very helpful!