Top posting:
It's not that I'm in favor of breaking the flow into create/attach/activate,
but we need to consider all the use cases.
I just want to highlight one use case; please suggest a solution for it:
I have a VM with 4 different disks on 4 different storage domains.
What should happen on Run VM when one of the SDs is inaccessible?
Of course the VM should be able to run,
but should the disk on the inaccessible SD be "off/down"?
Note that if the disk on the inaccessible SD is the "boot" disk, the Run VM
should probably fail, or should it?
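[To make the use case concrete, here is a minimal sketch of the Run VM
decision being proposed. All names (Disk, decide_run_vm) are illustrative,
not engine code.]

```python
# Hypothetical sketch of the Run VM decision for the use case above:
# a disk on an inaccessible storage domain comes up "down", unless it
# is the boot disk, in which case the whole run fails.
from dataclasses import dataclass

@dataclass
class Disk:
    name: str
    boot: bool
    sd_accessible: bool

def decide_run_vm(disks):
    """Return (can_run, disks_to_plug) for a Run VM request."""
    to_plug = []
    for disk in disks:
        if disk.sd_accessible:
            to_plug.append(disk.name)
        elif disk.boot:
            # Boot disk unreachable -> the run should fail.
            return False, []
        # A non-boot disk on an inaccessible SD stays unplugged ("down").
    return True, to_plug

disks = [
    Disk("boot", boot=True, sd_accessible=True),
    Disk("data1", boot=False, sd_accessible=True),
    Disk("data2", boot=False, sd_accessible=False),
]
print(decide_run_vm(disks))  # (True, ['boot', 'data1'])
```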
Thanks,
Miki
----- Original Message -----
From: "Mike Kolesnik" <mkolesni(a)redhat.com>
To: "Livnat Peer" <lpeer(a)redhat.com>
Cc: engine-devel(a)ovirt.org
Sent: Monday, February 20, 2012 2:14:13 PM
Subject: Re: [Engine-devel] VM disks
> On 19/02/12 20:56, Daniel Erez wrote:
> >
> >
> > ----- Original Message -----
> >> From: "Livnat Peer" <lpeer(a)redhat.com>
> >> To: "Itamar Heim" <iheim(a)redhat.com>
> >> Cc: engine-devel(a)ovirt.org
> >> Sent: Sunday, February 19, 2012 1:23:56 PM
> >> Subject: Re: [Engine-devel] VM disks
> >>
> >> On 19/02/12 12:35, Itamar Heim wrote:
> >>> On 02/18/2012 07:07 PM, Livnat Peer wrote:
> >>>> Hi,
> >>>>
> >>>> These days we are working on various features around VM disks; in
> >>>> the different threads it was decided that we'll have the ability to
> >>>> attach a disk to a VM but it will be added as inactive, then the
> >>>> user can activate it for it to be accessible from within the guest.
> >>>>
> >>>> Flow of adding a new disk would be:
> >>>> - creating the disk
> >>>> - attaching the disk to the VM
> >>>> - activating it
> >>>>
> >>>> Flow of adding a shared disk (or any other existing disk):
> >>>> - attach the disk
> >>>> - activate it
> >>>>
> >>>> It seems to me a lot like adding a storage domain, and I remember a
> >>>> lot of rejections on the storage domain flow (mostly about it being
> >>>> too cumbersome).
> >>>
> >>> true, you'll be asked to provide an option for the initial state in
> >>> that case.
> >>>
> >>>> After discussing the issue with various people we could not find a
> >>>> good reason for having a VM disk in attached but inactive mode.
> >>>>
> >>>> Of course we can wrap the above steps in one step for specific
> >>>> flows (add+attach within a VM context, for example), but can anyone
> >>>> think of a good reason to support an attached but inactive disk?
> >>>>
> >>>> I would suggest that when attaching a disk to a VM it becomes part
> >>>> of the VM (active), like in 'real' machines.
> >>>
> >>> so hotunplug would make the disk floating, as it will detach it as
> >>> well?
> >>
> >> In short - yes.
> >>
> >> The user will be able to attach/detach a disk; the implementation
> >> would be to hotplug or simply plug according to the VM status (up or
> >> not).
> >
> >
> > What about disks with snapshots?
> > By the current design of floating disks, detaching a disk with
> > snapshots can be done only by collapsing and marking the snapshots
> > as broken. Thus, removing a disk momentarily might be problematic
> > without a Plugged/Unplugged status.
> >
>
> when taking the snapshots the user can choose if he wants to have the
> shared disk or direct lun in the snapshot or not; once the user makes
> the call, that would be reflected in the snapshot configuration.
What derez meant is that once a disk is detached from a VM it cannot
retain its history: today snapshot data is part of the VM definition
and not of the single disk, so all of its images would have to be
collapsed, especially if it is to be attached to an entirely
different VM.
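[An illustrative sketch (not engine code) of why detach forces a collapse:
the snapshot chain belongs to the VM, so a detached disk can keep only a
single collapsed image. The names collapse_chain/detach_disk are
hypothetical.]

```python
# A disk's image chain is modeled as a list of block maps, base image
# first and newest snapshot layer last. Collapsing merges them so that
# a newer layer's blocks override older ones.

def collapse_chain(image_chain):
    """Collapse an image chain into one image; later layers win."""
    collapsed = {}
    for image in image_chain:      # base first, newest last
        collapsed.update(image)    # newer blocks shadow older ones
    return collapsed

def detach_disk(vm, disk):
    """Detaching collapses the chain; the per-VM snapshot history is lost."""
    disk["images"] = [collapse_chain(disk["images"])]
    vm["disks"].remove(disk)
    return disk

chain = [
    {"block0": "base", "block1": "base"},   # base image
    {"block1": "snap1"},                    # first snapshot layer
    {"block0": "snap2"},                    # newest layer
]
print(collapse_chain(chain))  # {'block0': 'snap2', 'block1': 'snap1'}
```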
>
>
> > Maybe we should keep the current Activate/Deactivate buttons for
> > disks, in addition to encapsulating the attach/detach and plug/unplug
> > commands. So, adding/attaching a new disk will plug the disk
> > automatically while still allowing the user to deactivate a disk
> > temporarily.
>
> IIUC that's the original design, which I am suggesting to change.
> We got negative feedback on a similar approach with regard to storage
> domains; I suspect it will be even more acute when it comes to VM
> disks, which are much more common.
I think the downside of improving UX as you suggest (by chaining the
atomic commands in the client, IIUC) is that the client needs to poll
us repeatedly, which poses several issues such as performance and the
need for the client to manage a "transaction".
Since the cases where several commands need to run as a "flow" are
increasing, maybe we should offer a generic API that allows running
several commands as a simple flow (simple BPEL style, perhaps?) and
take the load off the clients.
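[A minimal sketch of that "generic flow" idea: the server accepts an
ordered list of commands and runs them as one flow, so the client fetches
a single status report instead of chaining and polling. The names
(run_flow, the command tuples) are hypothetical, not a proposed API.]

```python
# Run commands in order, stopping at the first failure, like a simple
# BPEL-style sequence. Each command is a (name, callable) pair.

def run_flow(commands):
    """Execute commands in order; return a per-step status report."""
    report = []
    for name, action in commands:
        try:
            action()
            report.append((name, "ok"))
        except Exception as exc:
            report.append((name, f"failed: {exc}"))
            break  # remaining steps are skipped
    return report

# The add-disk flow from earlier in the thread, expressed as one call:
state = []
flow = [
    ("create_disk",   lambda: state.append("created")),
    ("attach_disk",   lambda: state.append("attached")),
    ("activate_disk", lambda: state.append("activated")),
]
print(run_flow(flow))
# [('create_disk', 'ok'), ('attach_disk', 'ok'), ('activate_disk', 'ok')]
```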
>
> Livnat
>
>
>
_______________________________________________
Engine-devel mailing list
Engine-devel(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel