----- Original Message -----
On 04/24/2012 02:07 AM, Ayal Baron wrote:
>
> ----- Original Message -----
>> On 04/22/2012 12:28 PM, Ayal Baron wrote:
>>>>> This way we'd have a 2 stage process:
>>>>> 1. setupStorage (generic)
>>>> I was looking through the VDSM archives and there is talk of
>>>> using libstoragemgmt (lsm)
>>> Funny, we started using that acronym for Live Storage Migration :)
>>>
>>>> under VDSM. I was wondering whether setupStorage would be where
>>>> lsm would be used to do the work; it seems fit for purpose here.
>>>>
>>>>
>>> I don't think this is the libstoragemgmt mandate.
>>>
>>> libstoragemgmt is:
>>> "A library that will provide a vendor agnostic open source storage
>>> application programming interface (API) for storage arrays."
>>>
>>> i.e. it is there to abstract storage array specifics from the user.
>>> It will be used by things like LVM etc., not the other way around.
>>>
>>> setupStorage would use libstoragemgmt wherever appropriate of
>>> course.
>>>
>>> But as the libstoragemgmt maintainer, Tony (cc'd) can correct me
>>> if I'm wrong here.
>>>
>>>
>> I was looking at setupStorage as Provisioning + Setting up.
>> I know one of the basic goals of lsm is to provision storage to the
>> host; preparing the storage for consumption is the higher layers'
>> work.
>>
>> With that, I think it then becomes a 3-stage process, from the
>> oVirt/VDSM PoV...
>> 1) Provision Storage (using lsm if applicable, based on whether
>> external storage is connected)
>> 2) Setup Storage (prepare the provisioned LUNs for usage)
>> 3) createSD/createGlusterVolume/... (plugin specific)
>>
>> Since we are talking about storage management using VDSM, I was
>> interested in understanding the plans and strategy for how VDSM and
>> lsm will integrate?
>
> There are various ways of approaching this.
> 1. Given proper storage you could just provision new LUNs whenever
> you need a new virtual disk and utilize storage side thin
> provisioning and snapshots for most of your needs.
> When you have such storage you don't really need steps 2 and 3
> above. Your storage is your virtual images repository.
> Although quite simple and powerful, very few arrays are capable of
> growing to a very large number of objects (LUNs + snapshots +
> whatever) today, so I don't see this being the most common use case
> any time soon.
This is not clear to me. This only talks about provisioning but not
consuming. 2 and 3 above are required from a consumability
perspective. The LUNs will have to be prepared and used by LVM (pv,
vg, lv, metadata) for VDSM to host a storage domain.
There are several ways of managing the repo in such a scenario. Just
as an example: provision a LUN where vdsm would manage metadata
(listing of images, relations between snapshots, logical sizes of
images, etc.), and every image is another LUN that we would provision,
so there would be no need for LVM in such a scenario.
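To make that concrete, a minimal sketch of such a LUN-per-image flow,
assuming lsm's Python binding (lsm.Client, pools(), volume_create(),
Volume.PROVISION_THIN) behaves as its docs describe; the plugin URI,
names and sizes are made up:

    import uuid
    import lsm

    GiB = 1024 ** 3

    c = lsm.Client('sim://')   # simulator plugin; a real array URI in practice
    pool = c.pools()[0]

    # One thin LUN holding vdsm's repo metadata (image listing,
    # snapshot relations, logical sizes, ...).
    job, md_lun = c.volume_create(pool, 'vdsm-repo-md', 1 * GiB,
                                  lsm.Volume.PROVISION_THIN)

    # ...and one LUN per image, with no PV/VG/LV layer in between.
    job, img_lun = c.volume_create(pool, 'img-%s' % uuid.uuid4(), 20 * GiB,
                                   lsm.Volume.PROVISION_THIN)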
> 2. Provision LUNs (either statically or dynamically using lsm)
> once, preferably thinly provisioned. Then setupStorage (storage
> domain over VG / Gluster / other) and use lsm for creating
> snapshots/clones on the fly.
> In my opinion this will be more prevalent to begin with.
>
> With lsm we will (hopefully) have a way of enumerating storage side
> capabilities, so when we create a repository (gluster / sd / ...)
> we'd be able to determine on the fly what capabilities it has and
> decide whether to use these or virtualized capabilities (e.g. in the
> virt case, use qcowX when you need to create a snapshot).
>
> In oVirt, once you've defined a storage domain and it exposes a set
> of capabilities, the user should be able to override them (e.g. even
> though the storage supports snapshots, I want to use qcow as this
> storage can only create 255 snapshots per volume and I need more
> than that).
>
> I'm assuming that we will not have any way of knowing the limits
> per machine.
>
> Does that make sense?
>
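Concretely, that probe-plus-override could look something like this
rough sketch (assuming lsm's Client.capabilities() and
Capabilities.supported() behave as documented; the override flag is a
made-up stand-in for the engine-side setting):

    import lsm

    c = lsm.Client('sim://')
    system = c.systems()[0]
    cap = c.capabilities(system)

    array_can_snapshot = cap.supported(lsm.Capabilities.VOLUME_REPLICATE)
    user_prefers_qcow = False   # hypothetical per-domain override from the UI

    if user_prefers_qcow or not array_can_snapshot:
        snapshot_backend = 'qcowX'   # virtualized capability
    else:
        snapshot_backend = 'array'   # offload to the storage side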
Agreed on #2. Thinking deeper...
1) Provisioning Storage
Provisioning storage using lsm would require new VDSM verbs to be
added, which can create / show LUNs to the oVirt user; the user can
then select which LUN(s) to use for setupStorage.
create LUN doesn't exist today, but show LUNs does.
Currently the (simplified) flow is:
1. connect to storage (when relevant)
2. get listing of devices
3. create a storage domain on selected devices
Provisioning LUNs will probably exploit the lsm capabilities and
provide the options to the user to create the LUNs using the
available array features.
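Purely as an illustration of the shape such a verb could take
('createLUNs' and its return format are made up; only device listing
exists today, via getDeviceList):

    import lsm

    def createLUNs(client, pool, specs):
        """specs: iterable of (name, size_bytes) pairs."""
        luns = []
        for name, size in specs:
            job, vol = client.volume_create(pool, name, size,
                                            lsm.Volume.PROVISION_THIN)
            luns.append(vol)
        return luns

    # The user would then pick from these in setupStorage, alongside
    # whatever getDeviceList already reports.
    c = lsm.Client('sim://')
    luns = createLUNs(c, c.pools()[0],
                      [('data1', 10 * 1024 ** 3), ('data2', 10 * 1024 ** 3)])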
With GlusterFS also providing some of the array capabilities (stripe,
replicate, etc.), the user might want to provision a GlusterFS volume
(with whatever capabilities gluster offers) to host storage upon,
especially if the storage is coming from not-so-reliable commodity hw
storage. I feel this also has to be considered as part of
provisioning and should come before the setupStorage step.
IMHO, there should be a "Storage Provisioning" tab in oVirt which
will allow the user to ...
    1a) Carve LUNs from the external storage array.
    1b) Provision storage as a GlusterFS volume. The user can select
        the LUNs carved in #1a as bricks for the GlusterFS volume, if
        need be (see the sketch after this list).
    1c) Use local host free disk space.
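For 1b, the sketch referenced above: carving a Gluster volume out of
such bricks by driving the gluster CLI (host names, brick paths and
the replica count are made up, and it assumes the LUNs from 1a are
already attached, formatted and mounted as bricks; vdsm's gluster
verbs could wrap the same thing):

    import subprocess

    bricks = ['host1:/bricks/b1', 'host2:/bricks/b2']
    subprocess.check_call(['gluster', 'volume', 'create', 'vmstore',
                           'replica', '2'] + bricks)
    subprocess.check_call(['gluster', 'volume', 'start', 'vmstore'])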
Somewhere here there should be an option (as applicable) for the user
to select whether to exploit storage array features or host virt
capabilities for, say, snapshots, in cases where both are applicable.
2) Setup Storage
Here the user would create a VDSM file- or block-based storage
domain, based on the storage provisioned from the "Storage
Provisioning" tab. I believe this is where VDSM will add its metadata
to the provisioned storage to make it a storage domain.
IMHO, for image operations like snapshot/clone, VDSM will have to
track & maintain whether the image is served by the local host, an
external storage array or a gluster volume, and accordingly use the
lvm, lsm or gluster APIs to get the job done.
For sure. That would be part of the domain metadata.
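A rough sketch of what that per-image dispatch could look like (the
'backend' metadata field and the exact commands are assumptions for
illustration, not vdsm's actual code):

    import subprocess

    def create_snapshot(image_path, lv_path, domain_md):
        backend = domain_md['backend']   # hypothetical domain metadata field
        if backend == 'lvm':
            # local block storage: LVM snapshot of the image LV
            subprocess.check_call(['lvcreate', '-s', '-L', '1G',
                                   '-n', 'snap1', lv_path])
        elif backend == 'lsm':
            # external array: offload to the storage side through lsm
            raise NotImplementedError('delegate to the array via lsm')
        elif backend == 'gluster':
            # file based: virtualized capability, e.g. a qcow2 overlay
            subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                                   '-b', image_path, image_path + '.snap'])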
3) Not sure if any more steps are needed?
In general I agree with the above. W.r.t. the scope of setupStorage,
it's semantics. Personally I think we should differentiate between
provisioning storage on the target side and provisioning on the
initiator side.
The flow (from GUI) as I see it is (with lsm and an array that supports dynamic
provisioning):
1. provide credentials and log in to the storage (out of band, using lsm)
2. enumerate capabilities
3. based on 2, define required storage domain characteristics (size,
thin provisioning, etc.)
- but note that this is generic, so it should apply to gluster as well
4. create a storage domain - this would implicitly create 1 or more
LUNs and anything else that is needed according to the above
specifications. API-wise, this is probably 3 calls: 1. provision
LUNs, 2. setupStorage and 3. createStorageDomain/createGlusterVolume/...
Characteristics might include things like partitions, encryption?,
compression?, RAID, file systems, etc.
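As a sketch, that 3-call split could look like the following (all
stubs; every name and signature here is hypothetical):

    def provisionLUNs(pool, spec):            # 1. carve LUNs, target side (lsm)
        return ['lun0', 'lun1']

    def setupStorage(luns, spec):             # 2. prepare them, initiator side
        return {'vg': 'vg-demo', 'luns': luns}

    def createStorageDomain(storage, name):   # 3. plugin specific (SD/gluster/...)
        return {'name': name, 'backing': storage}

    spec = {'size': 100 * 1024 ** 3, 'thin': True}
    sd = createStorageDomain(
        setupStorage(provisionLUNs('pool0', spec), spec), 'domain1')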
thanx,
deepak