----- Original Message -----
On Sat, Sep 29, 2012 at 3:47 PM, Ayal Baron <abaron(a)redhat.com> wrote:
> > However, as I read this email, it occurred to me that some other
> > things might not be equal. Specifically, using multiple LUNs could
> > provide a means of shrinking the storage domain in the future. LVM
> > provides a simple means to remove a PV from a VG, but does the
> > engine support this in the CLI or GUI? That is, if a storage domain
> > has multiple LUNs in it, can those be removed at a later date?
>
> Not yet.
>
>
> Does this mean it is in the works? If not, where could I put in such
> a feature request?
>
>
> Certainly, I have no pressing need of this, but it seems like a
> fairly simple thing to implement, since I have done it so easily in
> the past with just a couple of commands outside of an oVirt
> environment. I believe the primary purpose of the LVM functionality
> was to enable removal of dying PVs before they take out an entire
> VG. No reason it would not work just as well to remove a healthy PV.
> It can take a long time to move all the extents off the requested
> PV, but there is a command to show the progress, so it would also be
> easy to wrap that into the GUI.
What's simple in a single host environment is really not that simple
when it comes to clusters.
The tricky part is the coordination between the different hosts and
doing it live or with minimal impact.
Fair enough, but it seems that the cluster environment has been
addressed with the SPM mechanism for all things LVM. Certainly,
the initial coding of the feature would be fairly trivial, but I can
imagine that testing in the cluster environment might expose
additional complexity.
The actual data move is done by the SPM and is a simple pvmove command as you've
stated.
The simple way of doing this would be to put the domain in maintenance mode, run
pvmove on the SPM (currently you can't run such operations while the domain is in
maintenance, but it would make sense to allow it), and then reactivate the domain.
This means, however, that you would not be able to run any VM that has disks on this
VG, even ones that reside entirely on other PVs.
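For reference, on the SPM side the mechanics would boil down to something like the
sketch below (plain Python around the LVM tools; the helper and device names are
made up for illustration, this is not actual vdsm code):

    import subprocess

    def evacuate_pv(vg_name, pv_device):
        # Move all allocated extents off pv_device onto free space
        # elsewhere in the VG; pvmove with no destination lets LVM
        # pick any PV in the VG with free extents.  Progress can be
        # watched with 'lvs -o+copy_percent' while the move runs.
        subprocess.check_call(["pvmove", pv_device])
        # Once the PV is empty, drop it from the VG so the LUN can
        # be removed from the domain.
        subprocess.check_call(["vgreduce", vg_name, pv_device])

    # Hypothetical flow, run on the SPM after the engine has put the
    # domain into maintenance:
    # evacuate_pv("my-domain-vg", "/dev/mapper/360014...")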
If, however, we'd want to do it 'semi live' then it would become much more
complex.
First, you need to realize that LVs are not neatly confined to individual PVs. A single
LV can have extents on several different PVs (especially after lvextend, which happens
automatically in the system when there are snapshots).
So we'd need to map all the LVs which are affected and prevent running any VM that
uses these LVs.
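Mapping the affected LVs is the easy part; the LVM tools report it directly. Something
along these lines (just a sketch, no error handling, the helper name is made up):

    import subprocess

    def lvs_using_pv(vg_name, pv_device):
        # The 'devices' field of lvs lists, per segment, the PV and
        # starting extent backing the LV, e.g. '/dev/sdb1(0)'.
        out = subprocess.check_output(
            ["lvs", "--noheadings", "-o", "lv_name,devices", vg_name])
        affected = set()
        for line in out.decode().splitlines():
            if not line.strip():
                continue
            lv_name, devices = line.split(None, 1)
            if pv_device in devices:
                affected.add(lv_name)
        return affected

The engine would then have to refuse to start (or migrate away) any VM whose disks
map to these LVs for the duration of the move.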
Then we'd also need to guarantee there is enough space to move these extents to
(again, in addition to the user creating new objects, there are automatic lvextend
operations going on, so we'd need a way to reserve space on the VG for this operation).
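The space check itself is trivial; the hard part is keeping it true while the automatic
lvextends keep consuming free extents, which is where a real reservation mechanism
would be needed. Just to illustrate the check (a sketch, with no reservation logic):

    import subprocess

    def enough_room_to_evacuate(vg_name, pv_device):
        # Compare the extents allocated on the PV we want to empty
        # with the free extents available on the rest of the VG.
        out = subprocess.check_output(
            ["pvs", "--noheadings", "-o",
             "vg_name,pv_name,pv_pe_count,pv_pe_alloc_count"])
        used_on_pv = 0
        free_elsewhere = 0
        for line in out.decode().splitlines():
            if not line.strip():
                continue
            vg, name, total, alloc = line.split()
            if vg != vg_name:
                continue
            if name == pv_device:
                used_on_pv = int(alloc)
            else:
                free_elsewhere += int(total) - int(alloc)
        return free_elsewhere >= used_on_pv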
Once we've done all this, we'd need to run the operation itself and then make sure
that all the hosts see things properly.
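That last step, getting the other hosts to see the new layout, would have to be driven
per host through vdsm; conceptually something like this on each host that has any of
the affected LVs active (a sketch only, the surrounding plumbing is omitted):

    import subprocess

    def refresh_lvs(vg_name, lv_names):
        # Reload the device-mapper tables of the affected LVs so
        # they stop pointing at the extents' old locations.
        for lv in lv_names:
            subprocess.check_call(
                ["lvchange", "--refresh", "%s/%s" % (vg_name, lv)])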