On 06/18/2012 09:26 PM, Shu Ming wrote:
On 2012-5-30 17:38, Deepak C Shetty wrote:
> Hello All,
>
> I have a draft write-up on the VDSM-libstoragemgmt integration.
> I wanted to run this through the mailing list(s) to help tune and
> crystallize it before putting it on the oVirt wiki.
> I have already run this once through Ayal and Tony, so some of their
> comments are incorporated.
>
> I still have a few doubts/questions, which I have posted below as
> lines ending with '?'.
>
> Comments / Suggestions are welcome & appreciated.
>
> thanx,
> deepak
>
> [Ccing engine-devel and libstoragemgmt lists as this stuff is
> relevant to them too]
>
>
--------------------------------------------------------------------------------------------------------------
>
>
> 1) Background:
>
> VDSM provides a high-level API for node virtualization management. It
> acts in response to requests sent by oVirt Engine, which uses
> VDSM to perform all node virtualization related tasks, including but
> not limited to storage management.
>
> libstoragemgmt aims to provide a vendor-agnostic API for managing
> external storage arrays. It should give system administrators who use
> open source solutions a way to programmatically manage their storage
> hardware in a vendor-neutral way. It also aims to facilitate
> management automation and ease of use, and to take advantage of
> storage-vendor-supported features that improve storage performance
> and space utilization.
>
> Home Page:
> http://sourceforge.net/apps/trac/libstoragemgmt/
>
> libstoragemgmt (LSM) today supports C and Python plugins for talking
> to external storage arrays using SMI-S as well as native interfaces
> (eg: the netapp plugin).
> The plan is to grow the SMI-S interface as needed over time and to
> add more vendor-specific plugins for exploiting features that are not
> possible via SMI-S or that have better alternatives than SMI-S.
> For eg: many of the copy offload features require vendor-specific
> commands, which justifies the need for a vendor-specific plugin.
>
>
> 2) Goals:
>
> 2a) Ability to plug an external storage array into the oVirt/VDSM
> virtualization stack, in a vendor-neutral way.
>
> 2b) Ability to list features/capabilities and other statistical
> info of the array
>
> 2c) Ability to utilize the storage array offload capabilities
> from oVirt/VDSM.
>
>
> 3) Details:
>
> LSM will sit as a new repository engine in VDSM.
> VDSM Repository Engine WIP @
> http://gerrit.ovirt.org/#change,192
>
> Current plan is to have LSM co-exist with VDSM on the virtualization
> nodes.
Does that mean LSM will be a separate daemon process from VDSM?
Also, what about the vendor's plugins? Are they yet more processes on the nodes?
Please see the LSM home page on sourceforge.net for how LSM works. It
already has lsmd (a daemon) which invokes the appropriate plugin based
on the URI prefix.
Vendor plugins are supported in LSM as .py modules, which are invoked
based on the vendor-specific URI prefix. See the netapp vendor plugin
module in the LSM source.
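For illustration, the URI-prefix based dispatch is roughly along the
following lines (a hypothetical sketch, not the actual lsmd code; the
scheme-to-plugin mapping below is made up for the example):

    # Hypothetical sketch of URI-prefix based plugin selection
    # (illustrative only, not the actual lsmd dispatch code).
    from urlparse import urlparse      # Python 2 stdlib, as used by VDSM today

    PLUGIN_BY_SCHEME = {
        "sim": "simulator plugin",         # example scheme -> plugin mapping
        "smispy": "generic SMI-S plugin",
        "ontap": "netapp native plugin",
    }

    def select_plugin(uri):
        scheme = urlparse(uri).scheme      # e.g. "ontap" for "ontap://user@filer"
        return PLUGIN_BY_SCHEME[scheme]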
>
> *Note : 'storage' used below is generic. It can be a file/nfs-export
> for NAS targets and LUN/logical-drive for SAN targets.
>
> VDSM can use LSM and do the following...
> - Provision storage
> - Consume storage
>
> 3.1) Provisioning Storage using LSM
>
> Typically this will be done by a Storage administrator.
>
> oVirt/VDSM should provide the storage admin with the
> - ability to list the different storage arrays along with their
> types (NAS/SAN), capabilities and free/used space.
> - ability to provision storage using any of the array
> capabilities (eg: a thin-provisioned LUN or a new NFS export).
> - ability to manage the provisioned storage (eg: resize/delete
> storage).
>
> Once the storage is provisioned by the storage admin, VDSM will have
> to refresh the host(s) for them to be able to see the newly
> provisioned storage.
>
> 3.1.1) Potential flows:
>
> Mgmt -> vdsm -> lsm: create LUN + LUN mapping / zoning / whatever is
> needed to make the LUN available to the list of hosts passed by mgmt
> Mgmt -> vdsm: getDeviceList (refreshes the host and gets the list of devices)
> Repeat the above for all relevant hosts (depending on the list passed
> earlier; mostly relevant when extending an existing VG)
> Mgmt -> use the LUN in normal flows.
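To make the above flow concrete, here is a rough sketch of what the
VDSM side could look like (all function names below are hypothetical
placeholders, not existing VDSM/LSM calls):

    # Hypothetical sketch of the provisioning flow; names are placeholders.
    def provision_lun(lsm_conn, pool, name, size_bytes, host_initiators):
        # 1. Create the LUN on the array via LSM (placeholder call)
        lun = lsm_conn.create_volume(pool, name, size_bytes)
        # 2. Map/zone it so the given hosts can see it (placeholder call)
        for initiator in host_initiators:
            lsm_conn.map_volume(lun, initiator)
        return lun

    # 3. Mgmt then calls the existing getDeviceList verb on each relevant
    #    host so the newly provisioned LUN becomes visible there.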
>
>
> 3.1.2) How oVirt Engine will know which LSM to use ?
>
> Normally the way this works today is that the user can choose the host
> to use (the default today is the SPM); however, there are a few flows
> where mgmt will know which host to use:
> 1. extend storage domain (add LUN to existing VG) - use the SPM and
> make sure *all* hosts that need access to this SD can see the new LUN
> 2. attach a new LUN to a VM which is pinned to a specific host - use
> that host
> 3. attach a new LUN to a VM which is not pinned - use a host from the
> cluster the VM belongs to and make sure all nodes in the cluster can
> see the new LUN
So does this model depend on the work of removing the storage pool?
I am not sure and want the experts to comment here. I am not very clear
yet on how things will work once the SPM is gone. Here it is assumed the
SPM is present.
>
> Flows for which there is no clear candidate (maybe we can use the SPM
> host itself, which is the default ?)
> 1. create a new disk without attaching it to any VM
So the new floating disk should be exported to all nodes and all VMs?
> 2. create a LUN for a new storage domain
>
>
> 3.2) Consuming storage using LSM
>
> Typically this will be done by a virtualization administrator.
>
> oVirt/VDSM should allow the virtualization admin to
> - Create a new storage domain using the storage on the array.
> - Specify whether VDSM should use the storage offload
> capability (the default) or override it to use its own internal logic.
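One possible way to expose the override is an extra flag at storage
domain creation time, roughly like this (a hypothetical sketch, not the
real VDSM verb signature; the parameter name is made up):

    # Hypothetical sketch: let the virt admin opt out of array offload
    # at storage domain creation time (parameter name is a placeholder).
    def create_storage_domain(sd_uuid, name, connection, use_offload=True):
        # use_offload=True  -> prefer array offload capabilities (via LSM)
        # use_offload=False -> always use VDSM's internal logic (qcow2, dd, ...)
        pass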
>
> 4) VDSM potential changes:
>
> 4.1) How to represent a VM disk: 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk
> ? Which brings up another question... 1 array == 1 storage domain OR 1
> LUN/nfs-export on the array == 1 storage domain ?
>
> Pros & Cons of each...
>
> 1 array == 1 storage domain
> - Each new vmdisk (aka volume) will be a new lun/file on the array.
> - Easier to exploit offload capabilities, as they are available
> at the LUN/File granularity
> - Will there be any issues with too many
> LUNs/Files... is there any max-LUNs limit on Linux hosts that we might hit ?
> -- VDSM has been tested with 1K LUNs and it worked fine - ayal
> - Storage array limitations on the number of LUNs can be a
> downside here.
> - Would it be ok to share the array for hosting another storage
> domain if need be ?
> -- Provided the existing domain is not utilising all of the
> free space
> -- We can create new LUNs and hand them over to whoever needs them ?
> -- Changes needed in VDSM to work with raw LUNs, today it only
> has support for consuming LUNs via VG/LV.
>
> 1 LUN/nfs-export on the array == 1 storage domain
> - How to represent a new vmdisk (aka vdsm volume) if it's a LUN
> provisioned using a SAN target ?
> -- Will it be VG/LV as is done today for block domains ?
> -- If yes, then it will be difficult to exploit offload
> capabilities, as they are at LUN level, not at LV level.
> - Each new vmdisk will be a new file on the nfs-export, assuming
> offload capability is available at the file level, so this should
> work for NAS targets ?
> - Can use the storage array for hosting multiple storage domains.
> -- Provision one more LUN and use it for another storage
> domain if need be.
> - VDSM already supports this today, as part of block storage
> domains for LUNs case.
>
> Note that we will allow the user to choose either of the two options
> above, depending on need.
>
> 4.2) Storage domain metadata will also include the
> features/capabilities of the storage array as reported by LSM.
> - Capabilities (obtained via LSM) will be stored in the domain
> metadata during the storage domain create flow.
> - Needs changes in oVirt Engine as well (see the 'oVirt Engine
> potential changes' section below).
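As an illustration, the capability entries stored in the domain
metadata could look something like this (the key names and values
below are made up for the example):

    # Hypothetical capability keys written into the storage domain metadata
    # at create time (key names are invented for illustration).
    DOMAIN_CAPS = {
        "LSM_MANAGED": "true",
        "CAP_SNAPSHOT_OFFLOAD": "true",
        "CAP_COPY_OFFLOAD": "true",
        "CAP_THIN_PROVISION": "false",
    }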
>
> 4.3) Should VDSM poll LSM for array capabilities on a regular basis ?
> Per ayal:
> 1. If we have a 'storage array' entity in oVirt Engine (see the 'oVirt
> Engine potential changes' section below) then we can have a 'refresh
> capabilities' button/verb.
> 2. We can periodically query the storage array.
> 3. Query LSM before running operations (sounds redundant to me,
> but if it's cheap enough it could be the simplest).
>
> Probably need a combination of 1+2 (query at a very low frequency -
> 1/hour or 1/day, plus a refresh button).
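A low-frequency poll plus an on-demand refresh could be as simple as
the following (sketch only; refresh_capabilities() stands in for
whatever VDSM ends up calling into LSM):

    # Hypothetical sketch of a low-frequency capability poll plus an
    # on-demand refresh verb (function names are placeholders).
    import threading

    POLL_INTERVAL = 3600    # seconds; i.e. the 1/hour suggested above

    def start_capability_poll(refresh_capabilities):
        def _poll():
            refresh_capabilities()    # query LSM and update the domain metadata
            threading.Timer(POLL_INTERVAL, _poll).start()
        _poll()

    # The 'refresh capabilities' button/verb would simply call
    # refresh_capabilities() directly.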
>
>
> 5) oVirt Engine potential changes - as described by ayal :
>
> - We will either need a new 'storage array' entity in engine to
> keep credentials, or, in case of storage array as storage domain,
> just keep this info as part of the domain at engine level.
> - Have a 'storage array' entity in oVirt Engine to support
> 'refresh capabilities' as a button/verb.
> - When the user, during storage provisioning, selects a LUN exported
> from a storage array (via LSM), oVirt Engine would know from then
> on that this LUN is being served via LSM.
> It would then be able to query the capabilities of the LUN
> and show them to the virt admin during the storage consumption flow.
>
> 6) Potential flows:
> - Create snapshot flow
> -- VDSM will check the snapshot offload capability in the
> domain metadata
> -- If available, and override is not configured, it will use
> LSM to offload LUN/File snapshot
If LSM tries to snapshot a running volume, does that mean all the I/O
activity to the volume will be blocked while the snapshot is in progress?
If VDSM offloads the snapshot to the array (via LSM), the array will
take care of the snapshotting... typically, I believe, it will quiesce
the I/O temporarily for a few ms, take a point-in-time copy of the
LUN/File and resume the I/O... I think it will happen transparently to
VDSM/the host.
> -- If override is configured or the capability is not available,
> it will use its internal logic to create the
> snapshot (qcow2).
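In rough pseudo-Python, the snapshot decision above looks like this (a
sketch; the helper names are placeholders, not existing VDSM functions):

    # Hypothetical sketch of the snapshot decision; helper names are placeholders.
    def create_snapshot(domain, volume):
        caps = domain.getMetadata()    # capabilities stored at domain create time
        if caps.get("CAP_SNAPSHOT_OFFLOAD") == "true" and not domain.override_offload:
            lsm_snapshot(volume)             # offload LUN/File snapshot to the array
        else:
            create_qcow2_snapshot(volume)    # existing internal qcow2 logic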
>
> - Copy/Clone vmdisk flow
> -- VDSM will check the copy offload capability in the domain
> metadata
> -- If available, and override is not configured, it will use
> LSM to offload LUN/File copy
> -- If override is configured or the capability is not available,
> it will use its internal logic to create the
> copy (eg: dd cmd in the case of a LUN).
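The copy/clone decision follows the same pattern; the internal fallback
for a LUN is essentially a dd-style copy, roughly (sketch; function
names and devices are placeholders):

    # Hypothetical sketch of the copy fallback for a LUN; names are placeholders.
    import subprocess

    def copy_vmdisk(src_dev, dst_dev, offload_available, override):
        if offload_available and not override:
            lsm_copy(src_dev, dst_dev)    # offload LUN/File copy to the array
        else:
            subprocess.check_call(["dd", "if=" + src_dev, "of=" + dst_dev,
                                   "bs=1M", "conv=fsync"])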
>
> 7) LSM potential changes:
>
> - list features/capabilities of the array. Eg: copy offload, thin
> prov. etc.
> - list containers (aka pools) (present in LSM today)
> - Ability to list different types of arrays being managed, their
> capabilities and used/free space
> - Ability to create/list/delete/resize volumes ( LUN or exports,
> available in LSM as of today)
> - Get monitoring info with an object (LUN/snapshot/volume) as an
> optional parameter for specific info, eg: container/pool free/used
> space, RAID type, etc.
>
> Need to make sure the above info is listed in a coherent way across
> arrays (number of LUNs, RAID type used, free/total per
> container/pool, per LUN?). Also need I/O statistics wherever possible.
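As an example of what 'coherent across arrays' could mean, LSM could
normalize the report to something like this (all field names and values
below are invented for illustration):

    # Hypothetical normalized per-array report (fields invented for illustration).
    array_info = {
        "name": "example-array-1",
        "type": "NAS/SAN",
        "capabilities": ["copy_offload", "snapshot_offload", "thin_provisioning"],
        "pools": [
            {"name": "pool0", "raid_type": "RAID6",
             "total_bytes": 10 * 2**40, "free_bytes": 4 * 2**40,
             "num_luns": 120},
        ],
        "io_stats": {"read_iops": 1500, "write_iops": 900},
    }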
>
>