[Engine-devel] Storage Device Management in VDSM and oVirt

Dan Kenigsberg danken at redhat.com
Wed Apr 18 08:45:38 UTC 2012


(Note that vdsm-devel is on fedorahosted.org. vdsm-devel at ovirt.org was
created by mistake, and I believe we agreed to drop it)

On Tue, Apr 17, 2012 at 03:38:25PM +0530, Shireesh Anjal wrote:
> Hi all,
> 
> As part of adding Gluster support in ovirt, we need to introduce
> some Storage Device management capabilities (on the host). Since
> these are quite generic and not specific to Gluster as such, we
> think it might be useful to add it as a core vdsm and oVirt feature.
> At a high level, this involves following:
> 
>  - A "Storage Devices" sub-tab on "Host" entity, displaying
> information about all the storage devices*
>  - Listing of different types of storage devices of a host
>     - Regular Disks and Partitions*
>     - LVM*
>     - Software RAID*
>  - Various actions related to device configuration
>     - Partition disks*
>     - Format and mount disks / partitions*
>     - Create, resize and delete LVM Volume Groups (VGs)
>     - Create, resize, delete, format and mount LVM Logical Volumes (LVs)
>     - Create, resize, delete, partition, format and mount Software
> RAID devices
>  - Edit properties of the devices
>  - UI can be modeled similar to the system-config-lvm tool
> 
> The items marked with (*) in above list are urgently required for
> the Gluster feature, and will be developed first.
> 
> Comments / inputs welcome.

This seems like a big undertaking, and I would like to understand the
complete use case of this. Is it intended to create the block storage
devices on top of which a Gluster volume will be created?

I must say that we had a bad experience with exposing low-level
commands over the Vdsm API: a Vdsm storage domain is a VG with some
metadata on top. We used to have two API calls for creating a storage
domain: one to create the VG and one to add the metadata and turn it
into an SD. But it is pretty hard to handle all the error cases
remotely. It proved more useful to have one atomic command for the
whole sequence.
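To make the point concrete, here is a minimal sketch of why one atomic verb beats two remote calls: each step records its undo action, and a failure unwinds the steps already done, so the caller never sees a half-created storage domain. All step names below are illustrative stand-ins, not real Vdsm internals:

```python
# Sketch of an atomic verb with rollback. Each step is a (do, undo)
# pair; on failure, the completed steps are reverted in reverse order.
def run_atomically(steps):
    """steps: list of (do, undo) callables. Returns (ok, trace)."""
    done = []
    trace = []
    for do, undo in steps:
        try:
            trace.append(do())
        except Exception:
            # Roll back in reverse order; ignore secondary failures.
            for u in reversed(done):
                try:
                    trace.append(u())
                except Exception:
                    pass
            return False, trace
        done.append(undo)
    return True, trace

# Example: the second step (writing SD metadata) fails, so the VG
# created by the first step is removed again before returning.
def write_metadata():
    raise RuntimeError("metadata write failed")

ok, trace = run_atomically([
    (lambda: "vgcreate data-vg", lambda: "vgremove data-vg"),
    (write_metadata,             lambda: "wipe metadata"),
])
```

Done inside one verb on the host, the rollback is a local loop; split across two remote calls, the same cleanup logic has to live in the client and survive the network.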

I suspect that this would be the case here, too. I'm not sure if using
Vdsm as an ssh-replacement for transporting lvm/md/fdisk commands is the
best approach.

It may be better to have a single verb for creating a Gluster volume
out of block storage devices. Something like: "take these disks,
partition them, build a RAID, cover it with a VG, carve out some LVs,
and make each of them a Gluster volume".

Obviously, it is not simple to define a good language to describe the
general architecture of a Gluster volume. But it would have to be done
somewhere - if not in Vdsm then in Engine; and I suspect it would be
better done on the local host, rather than across a fragile network link.
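One way to picture such a verb: the caller hands over a declarative description of the desired layout, and the host turns it into an ordered command plan locally. A rough sketch under assumed field names (disks, raid_level, vg_name, bricks are illustration only, not a real Vdsm or Engine schema):

```python
# Sketch: turn a declarative layout description into the ordered
# sequence of host-side commands (md -> VG -> LVs -> filesystems).
# The spec fields here are assumptions, not an actual API.
def plan_commands(spec):
    plan = []
    disks = spec["disks"]
    if spec.get("raid_level") is not None:
        # Assemble the disks into a software RAID device first.
        dev = "/dev/md0"
        plan.append("mdadm --create %s --level=%d --raid-devices=%d %s"
                    % (dev, spec["raid_level"], len(disks), " ".join(disks)))
        pvs = [dev]
    else:
        pvs = disks
    vg = spec["vg_name"]
    plan.append("vgcreate %s %s" % (vg, " ".join(pvs)))
    for name, size in spec["bricks"]:
        # One LV plus filesystem per intended Gluster brick.
        plan.append("lvcreate -n %s -L %s %s" % (name, size, vg))
        plan.append("mkfs.xfs /dev/%s/%s" % (vg, name))
    return plan

spec = {
    "disks": ["/dev/sdb", "/dev/sdc"],
    "raid_level": 1,
    "vg_name": "gluster-vg",
    "bricks": [("brick0", "100G")],
}
```

The point is not this particular schema but the shape of the interface: the fragile link carries one small description, and the ordering, validation and error handling stay on the host.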

Please note that currently, Vdsm makes a lot of effort not to touch LVM
metadata of existing VGs on regular "HSM" hosts. All such operations are
done on the engine-selected "SPM" host. When implementing this, we must
bear in mind these safeguards and think whether we want to break them.

Regards,
Dan.
