(For some reason I never received Adam's note, though I am subscribed to
all three lists Cc'ed here, strange!
I am replying from the mail forwarded to me by my colleague; please see
my responses inline below. Thanks.)
---------- Forwarded message ----------
From: Adam Litke <agl@us.ibm.com>
Date: Thu, May 31, 2012 at 7:31 PM
Subject: Re: [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
To: Deepak C Shetty <deepakcs@linux.vnet.ibm.com>
Cc: libstoragemgmt-devel@lists.sourceforge.net, engine-devel@ovirt.org,
VDSM Project Development <vdsm-devel@lists.fedorahosted.org>
On Wed, May 30, 2012 at 03:08:46PM +0530, Deepak C Shetty wrote:
> Hello All,
>
> I have a draft write-up on the VDSM-libstoragemgmt integration.
> I wanted to run this through the mailing list(s) to help tune and
> crystallize it before putting it on the oVirt wiki.
> I have run it once through Ayal and Tony, so some of their comments
> are incorporated.
>
> I still have a few doubts/questions, which I have posted below as
> lines ending with '?'
>
> Comments / Suggestions are welcome & appreciated.
>
> thanx,
> deepak
>
> [Ccing engine-devel and libstoragemgmt lists as this stuff is
> relevant to them too]
>
>
--------------------------------------------------------------------------------------------------------------
>
> 1) Background:
>
> VDSM provides a high-level API for node virtualization management. It
> acts in response to requests sent by oVirt Engine, which uses
> VDSM for all node virtualization related tasks, including but not
> limited to storage management.
>
> libstoragemgmt aims to provide a vendor-agnostic API for managing
> external storage arrays. It should give system administrators using
> open source solutions a way to programmatically manage their storage
> hardware in a vendor-neutral way. It also aims to facilitate
> management automation and ease of use, and to take advantage of
> storage vendor supported features which improve storage performance
> and space utilization.
>
> Home Page:
http://sourceforge.net/apps/trac/libstoragemgmt/
>
> libstoragemgmt (LSM) today supports C and python plugins for talking
> to external storage arrays using SMI-S as well as native interfaces
> (eg: the netapp plugin).
> The plan is to grow the SMI-S interface as needed over time and to add
> more vendor-specific plugins for exploiting features that are not
> possible via SMI-S, or for which there are better alternatives to
> SMI-S.
> For example, many of the copy offload features require vendor-specific
> commands, which justifies the need for a vendor-specific plugin.
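For illustration, a minimal sketch of what talking to an array through the
LSM python binding could look like. The Client/Capabilities names follow
the python binding, but the URI, credentials and exact calls here should be
treated as assumptions, not the final API:

    # Illustrative sketch only: connect to one array via an LSM plugin URI,
    # check a capability bit and list its pools. Exact names/URIs are
    # assumptions and may differ from the LSM API actually shipped.
    from lsm import Client, Capabilities

    client = Client('smispy://admin@array1.example.com', 'secret')  # plugin URI + password

    for system in client.systems():
        caps = client.capabilities(system)
        if caps.supported(Capabilities.VOLUME_CREATE):
            print('%s can create volumes (LUNs)' % system.name)

    for pool in client.pools():   # containers/pools on the array
        print('%s free=%d total=%d' % (pool.name, pool.free_space, pool.total_space))

    client.close()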
>
>
> 2) Goals:
>
>     2a) Ability to plug external storage arrays into the oVirt/VDSM
> virtualization stack in a vendor-neutral way.
>
> 2b) Ability to list features/capabilities and other statistical
> info of the array
>
> 2c) Ability to utilize the storage array offload capabilities
> from oVirt/VDSM.
>
>
> 3) Details:
>
> LSM will sit as a new repository engine in VDSM.
> VDSM Repository Engine WIP @
http://gerrit.ovirt.org/#change,192
>
> Current plan is to have LSM co-exist with VDSM on the virtualization
> nodes.
>
> *Note: 'storage' as used below is generic. It can be a file/nfs-export
> for NAS targets or a LUN/logical-drive for SAN targets.
>
> VDSM can use LSM and do the following...
> - Provision storage
> - Consume storage
>
> 3.1) Provisioning Storage using LSM
>
> Typically this will be done by a Storage administrator.
>
> oVirt/VDSM should provide the storage admin with the
>     - ability to list the different storage arrays along with their
> types (NAS/SAN), capabilities, and free/used space.
>     - ability to provision storage using any of the array
> capabilities (eg: a thin provisioned LUN or a new NFS export)
>     - ability to manage the provisioned storage (eg: resize/delete
> storage)
I guess vdsm will need to model a new type of object (perhaps
StorageTarget) to be used for performing the above provisioning
operations. Then, to consume the provisioned storage, we could create a
StorageConnectionRef by passing in a StorageTarget object and some
additional parameters. Sound about right?
Sounds right to me, but I am not an expert in the VDSM object model;
Saggi/Ayal/Dan can provide more input here. The (proposed) storage array
entity in oVirt Engine can use this vdsm object to communicate and work
with the storage array in doing the provisioning work.
Going ahead with the change to the new Image Repository, I was
envisioning that LSM, when integrated as a new repo engine, will exhibit
"Storage Provisioning" as an implicit feature/capability; only then will
it be picked up by the StorageTarget, otherwise not.
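To make that provisioning/consumption split concrete, a rough sketch of
what such objects could look like; StorageConnectionRef comes from Adam's
suggestion, everything else (field and method names) is hypothetical:

    # Hypothetical sketch of the split discussed above; names and fields
    # are illustrative, not the real VDSM object model.
    class StorageTarget(object):
        """Handle to an LSM-managed array, used for provisioning operations."""
        def __init__(self, lsm_uri, array_id, target_type):
            self.lsm_uri = lsm_uri          # how to reach the LSM plugin
            self.array_id = array_id        # which array behind that plugin
            self.target_type = target_type  # 'NAS' or 'SAN'

        def capabilities(self):
            """Ask LSM for the array's feature bits (copy offload, thin, ...)."""
            raise NotImplementedError

        def provision(self, size, thin=True):
            """Create a new LUN/export on the array via LSM."""
            raise NotImplementedError

    class StorageConnectionRef(object):
        """Consumption-side handle: a provisioned target plus connect params."""
        def __init__(self, target, conn_params):
            self.target = target
            self.conn_params = conn_params  # e.g. initiator IQN, mount options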
> Once the storage is provisioned by the storage admin, VDSM will have
> to refresh the host(s) for them to be able to see the newly
> provisioned storage.
How would this refresh affect currently connected storage and running VMs?
I am not too sure about this... looking for more info from the experts
here. Per Ayal, getDeviceInfo should help with the refresh, but by
'affect' are you referring to what happens if, post refresh, the device
IDs and/or names of the existing storage on the host change? What
exactly is the concern here?
> 3.1.1) Potential flows:
>
> Mgmt -> vdsm -> lsm: create LUN + LUN mapping / zoning / whatever is
> needed to make the LUN available to the list of hosts passed by mgmt
> Mgmt -> vdsm: getDeviceList (refreshes the host and gets the list of devices)
> Repeat the above for all relevant hosts (depending on the list passed
> earlier, mostly relevant when extending an existing VG)
> Mgmt -> use the LUN in normal flows.
>
>
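A rough pseudo-code rendering of the 3.1.1 flow from the management side;
getDeviceList is the existing vdsm verb, while createLun, the per-host vdsm
lookup and the LUN object are placeholders, not real verbs:

    # Sketch of the 3.1.1 flow. getDeviceList exists in vdsm today; the
    # createLun verb and the per-host vdsm lookup are placeholders.
    def provision_lun(vdsm_on, size, hosts):
        # Mgmt -> vdsm -> lsm: create the LUN plus mapping/zoning for `hosts`
        lun = vdsm_on(hosts[0]).createLun(size, hosts)       # hypothetical verb

        # Mgmt -> vdsm: refresh every relevant host so it sees the new device
        for host in hosts:
            devices = vdsm_on(host).getDeviceList()           # existing verb
            assert any(d['GUID'] == lun.guid for d in devices)

        return lun   # Mgmt: use the LUN in the normal flows from here on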
> 3.1.2) How will oVirt Engine know which LSM to use?
>
> Normally the way this works today is that the user can choose the host
> to use (the default today is the SPM), however there are a few flows
> where mgmt will know which host to use:
> 1. extend storage domain (add LUN to existing VG) - use the SPM and make
> sure *all* hosts that need access to this SD can see the new LUN
> 2. attach new LUN to a VM which is pinned to a specific host - use
> this host
> 3. attach new LUN to a VM which is not pinned - use a host from the
> cluster the VM belongs to and make sure all nodes in the cluster can
> see the new LUN
You are still going to need to worry about locking the shared storage
resource. Will libstoragemgmt have storage clustering support baked in,
or will we continue to rely on the SPM? If the latter is true, most/all
of these operations would still need to be done by the SPM, if I
understand correctly.
The above scenarios were noted by me on behalf of Ayal.
I don't think LSM will worry about storage clustering. We are just using
LSM to 'talk' to the storage array. I am not sure we need locking for
the above scenarios; we are just ensuring that the newly provisioned LUN
is visible to the relevant hosts, so I am not sure why we would need
locking?
> Flows for which there is no clear candidate (maybe we can use the
> SPM host itself, which is the default?)
> 1. create a new disk without attaching it to any VM
> 2. create a LUN for a new storage domain
Yes, SPM would seem correct to me.
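For what it is worth, the host-selection rules in 3.1.2 plus the SPM
default above reduce to something like the following hypothetical helper
(the operation names, VM/cluster attributes and verbs are all illustrative):

    # Hypothetical helper capturing the 3.1.2 host-selection policy;
    # it falls back to the SPM host when there is no better candidate.
    def pick_host_for_lsm_op(op, spm_host, vm=None):
        if op == 'extend_storage_domain':
            return spm_host                   # then verify all SD hosts see the LUN
        if op == 'attach_lun_to_vm':
            if vm.pinned_host is not None:
                return vm.pinned_host         # pinned VM: use its host
            return vm.cluster.any_host()      # unpinned: any host in the VM's cluster,
                                              # then verify the whole cluster sees the LUN
        # no clear candidate (new unattached disk, LUN for a new domain)
        return spm_host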
> 3.2) Consuming storage using LSM
>
> Typically this will be done by a virtualization administrator
>
> oVirt/VDSM should allow the virtualization admin to
> - Create a new storage domain using the storage on the array.
> - Be able to specify whether VDSM should use the storage offload
> capability (default) or override it to use its own internal logic.
If vdsm can make the right decisions, I would prefer that vdsm decides
when to use hardware offload and when to use software algorithms without
administrator intervention. It's another case where oVirt can provide
value-add by simplifying the configuration and providing optimal
performance.
Per Ayal, the thought was that in scenarios where we know the storage
array implementation is not optimal, we can override and tell VDSM to
use its internal logic rather than offload.
> 4) VDSM potential changes:
>
> 4.1) How to represent a VM disk: 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk?
> Which brings up another question... 1 array == 1 storage domain OR 1
> LUN/nfs-export on the array == 1 storage domain?
Saggi has mentioned some ideas on this topic so I will encourage him to
explain his thoughts here.
Looking forward to Saggi's thoughts :)
>
> Pros & Cons of each...
>
> 1 array == 1 storage domain
> - Each new vmdisk (aka volume) will be a new lun/file on the array.
> - Easier to exploit offload capabilities, as they are available
> at the LUN/File granularity
> - Will there be any issues where there will be too many
> LUNs/Files ... any maxluns limit on linux hosts that we might hit ?
> -- VDSM has been tested with 1K LUNs and it worked fine - ayal
> - Storage array limitations on the number of LUNs can be a
> downside here.
> - Would it be ok to share the array for hosting another storage
> domain if need be ?
> -- Provided the existing domain is not utilising all of the
> free space
>            -- We can create new LUNs and hand them over to whoever
> needs them?
>      -- Changes needed in VDSM to work with raw LUNs; today it only
> has support for consuming LUNs via VG/LV.
>
> 1 LUN/nfs-export on the array == 1 storage domain
>      - How to represent a new vmdisk (aka vdsm volume) if it is a LUN
> provisioned using a SAN target?
> -- Will it be VG/LV as is done today for block domains ?
> -- If yes, then it will be difficult to exploit offload
> capabilities, as they are at LUN level, not at LV level.
> - Each new vmdisk will be a new file on the nfs-export, assuming
> offload capability is available at the file level, so this should
> work for NAS targets ?
> - Can use the storage array for hosting multiple storage domains.
> -- Provision one more LUN and use it for another storage
> domain if need be.
> - VDSM already supports this today, as part of block storage
> domains for LUNs case.
>
> Note that we will allow the user to choose either of the two options
> above, depending on need.
>
> 4.2) Storage domain metadata will also include the
> features/capabilities of the storage array as reported by LSM.
> - Capabilities (taken via LSM) will be stored in the domain
> metadata during storage domain create flow.
> - Need changes in oVirt engine as well ( see 'oVirt Engine
> potential changes' section below )
Do we want to store the exact hw capabilities or some set of vdsm-chosen
feature bits that are set at create time based on the discovered hw
capabilities? The difference would be that vdsm could choose which
features to enable at create time and update those features later if
needed.
IIUC, you are saying VDSM will only look for those capabilities which
are of interest to it, and store those? That should be done by way of
LSM returning its capabilities as part of being an Image Repo.
I am referring to how localFSRepo (def capabilities) is shown in the PoC
Saggi posted @
http://gerrit.ovirt.org/#change,192
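As an illustration of the difference Adam describes (raw hw capabilities
vs vdsm-chosen feature bits fixed at create time), with purely hypothetical
key names:

    # Hypothetical sketch of the 4.2 metadata question; key names are
    # illustrative. Option A keeps the raw LSM capabilities, option B keeps
    # feature bits vdsm derives (and can later update) from them.
    def build_domain_metadata(lsm_caps):
        raw = dict(lsm_caps)                              # option A: as reported

        features = {                                      # option B: vdsm-chosen bits
            'SNAPSHOT_OFFLOAD': lsm_caps.get('volume_snapshot', False),
            'COPY_OFFLOAD':     lsm_caps.get('volume_copy', False),
            'THIN_PROV':        lsm_caps.get('thin_provisioning', False),
        }
        return {'LSM_CAPS': raw, 'FEATURES': features}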
> 4.3) VDSM to poll LSM for array capabilities on a regular basis ?
> Per ayal:
> - If we have a 'storage array' entity in oVirt Engine (see
> 'oVirt Engine potential changes' section below ) then we can have a
> 'refresh capabilities' button/verb.
> - We can periodically query the storage array.
> - Query LSM before running operations (sounds redundant to me,
> but if it's cheap enough it could be simplest).
>
> Probably need a combination of 1+2 (query at very low frequency
> - 1/hour or 1/day + refresh button)
This problem can be alleviated by the abstraction I suggested above.
Then, LSM can be queried only when we may want to adjust the policy
connected with a particular storage target.
Not clear to me, can you explain more?
LSM might need to be contacted to update the capabilities, because
storage admins can add/remove capabilities over a period of time. Many
storage arrays provide the ability to enable/disable array features on
demand.
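A sketch of the combination suggested in 4.3 (low-frequency poll plus an
on-demand 'refresh capabilities' verb); the interval and the function
names used here are placeholders:

    # Illustrative only: periodic low-frequency poll plus an on-demand
    # refresh, per 4.3. Interval and names are placeholders.
    import threading

    REFRESH_INTERVAL = 3600.0   # once an hour (could be once a day)

    def refresh_capabilities(domain, lsm_client):
        caps = lsm_client.capabilities(domain.lsm_system)   # query LSM
        domain.update_capabilities(caps)                    # hypothetical vdsm call

    def start_periodic_refresh(domain, lsm_client):
        refresh_capabilities(domain, lsm_client)
        t = threading.Timer(REFRESH_INTERVAL,
                            start_periodic_refresh, args=(domain, lsm_client))
        t.daemon = True
        t.start()

    # The engine's 'refresh capabilities' button/verb would simply call
    # refresh_capabilities(domain, lsm_client) directly.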
> 5) oVirt Engine potential changes - as described by ayal :
>
> - We will either need a new 'storage array' entity in engine to
> keep credentials, or, in case of storage array as storage domain,
> just keep this info as part of the domain at engine level.
> - Have a 'storage array' entity in oVirt Engine to support
> 'refresh capabilities' as a button/verb.
>      - When the user, during storage provisioning, selects a LUN exported
> from a storage array (via LSM), oVirt Engine would know from
> then onwards that this LUN is being served via LSM.
> It would then be able to query the capabilities of the LUN
> and show them to the virt admin during the storage consumption flow.
>
> 6) Potential flows:
> - Create snapshot flow
> -- VDSM will check the snapshot offload capability in the
> domain metadata
> -- If available, and override is not configured, it will use
> LSM to offload LUN/File snapshot
> -- If override is configured or capability is not available,
> it will use its internal logic to create
> snapshot (qcow2).
>
> - Copy/Clone vmdisk flow
> -- VDSM will check the copy offload capability in the domain
> metadata
> -- If available, and override is not configured, it will use
> LSM to offload LUN/File copy
>        -- If override is configured or the capability is not
> available, it will use its internal logic to do the
> copy (eg: dd cmd in case of LUN).
>
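Both flows in 6) boil down to the same capability/override check; a
compact sketch, with hypothetical capability keys and vdsm helper names:

    # Sketch of the offload-vs-internal decision shared by the flows in 6);
    # capability keys and the vdsm helpers named here are hypothetical.
    def create_snapshot(vdsm, domain, volume):
        feats = domain.metadata['FEATURES']                 # stored per 4.2
        if feats.get('SNAPSHOT_OFFLOAD') and not domain.offload_override:
            return vdsm.lsm_snapshot(volume)                # offload to the array
        return vdsm.qcow2_snapshot(volume)                  # internal logic

    def copy_vmdisk(vdsm, domain, src, dst):
        feats = domain.metadata['FEATURES']
        if feats.get('COPY_OFFLOAD') and not domain.offload_override:
            return vdsm.lsm_copy(src, dst)                  # offload LUN/file copy
        return vdsm.dd_copy(src, dst)                       # internal logic (eg: dd)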
> 7) LSM potential changes:
>
>      - list features/capabilities of the array, eg: copy offload,
> thin provisioning, etc.
> - list containers (aka pools) (present in LSM today)
> - Ability to list different types of arrays being managed, their
> capabilities and used/free space
> - Ability to create/list/delete/resize volumes ( LUN or exports,
> available in LSM as of today)
> - Get monitoring info with object (LUN/snapshot/volume) as
> optional parameter for specific info. eg: container/pool free/used
> space, raid type etc.
>
> Need to make sure the above info is listed in a coherent way across
> arrays (number of LUNs, raid type used? free/total per
> container/pool, per LUN?). Also need I/O statistics wherever
> possible.
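Purely as a strawman for the libstoragemgmt folks, the wishlist above
might map onto plugin-interface additions roughly like this (method names
are hypothetical, not the existing LSM API):

    # Strawman only: hypothetical plugin-interface shape matching the
    # wishlist in 7); not the existing libstoragemgmt API.
    class ArrayPlugin(object):
        def capabilities(self):
            """Feature bits of the array: copy offload, thin provisioning, ..."""

        def list_pools(self):
            """List containers/pools (present in LSM today)."""

        def list_systems(self):
            """List managed arrays with their type, capabilities, used/free space."""

        def volume_create(self, pool, name, size):
            """Create a LUN/export (list/delete/resize would follow the same
            pattern; these exist in LSM today)."""

        def stats(self, obj=None):
            """Monitoring info; obj (LUN/snapshot/volume) narrows the scope,
            eg: pool free/used space, raid type, I/O statistics."""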
I forgot to add this in the original mail; adding it now.
8) Concerns/Issues
- Per Tony of libstoragemgmt
-- Some additional things to consider.
-- Some of the array vendors may not allow multiple points of control at
the same time. e.g. you may not be able to have 2 or more nodes running
libStorageMgmt at the same time talking to the same array. NetApp
limits what things can be done concurrently.
-- LibStorageMgmt currently just provides the bits to control external
storage arrays. The plug-in daemon and the plug-ins themselves execute
unprivileged.
- How will the change from SPM to SDM affect the above discussions?