[Users] clone vm from snapshot problem in 3.2

Alissa Bonas abonas at redhat.com
Mon Mar 4 12:19:09 UTC 2013


> ----- Original Message -----
> > From: "Gianluca Cecchi" <gianluca.cecchi at gmail.com>
> > To: "Liron Aravot" <laravot at redhat.com>
> > Cc: "users" <users at ovirt.org>, "Vered Volansky" <vered at redhat.com>
> > Sent: Thursday, February 28, 2013 7:07:55 PM
> > Subject: Re: [Users] clone vm from snapshot problem in 3.2
> > 
> > On Wed, Feb 27, 2013 at 1:21 PM, Liron Aravot  wrote:
> > 
> > >> Hi Gianluca,
> > >> I've built 3.2 and managed to reproduce your issue. I've also
> > >> tested the fix, and with it the clone window shows fine - so this
> > >> should solve your issue (http://gerrit.ovirt.org/#/c/11254/).
> > >>
> > >> Thanks, Liron.
> > 
> > Hello,
> > so I rebuilt with the patch.
> > I built it as described in this thread:
> > http://lists.ovirt.org/pipermail/users/2013-February/012702.html
> > 
> > and installed the packages directly on the engine server, with the engine stopped:
> > 
> > sudo rpm -Uvh --force $(ls -1|grep -v all)
> > 
> > with
> > $ ls -1|grep -v all
> > ovirt-engine-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-backend-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-config-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-dbscripts-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-genericapi-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-notification-service-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-restapi-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-setup-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-tools-common-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-userportal-3.2.0-4.fc18.noarch.rpm
> > ovirt-engine-webadmin-portal-3.2.0-4.fc18.noarch.rpm
> > 
> > and then rebooted the engine server.
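(Just to double-check that the patched build is the one actually running, something like the following on the engine server should do - assuming the packages were installed with rpm as above and the engine is managed by the ovirt-engine systemd service:

  rpm -qa 'ovirt-engine*' | sort
  systemctl status ovirt-engine.service

The versions reported by rpm should match the rebuilt 3.2.0-4 packages.)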
> > 
> > Results:
> > 
> > OK for the snapshot details pane: it seems the patch fixes this too.
> > Until now I was never able to see snapshot details;
> > now I can see disk and NIC details. See:
> > https://docs.google.com/file/d/0BwoPbcrMv8mvQmlCZTZBYjBadmc/edit?usp=sharing
> > 
> > 
> > OK: if I clone a VM from a snapshot of a running VM (CentOS 5.6),
> > I can start the clone with no problem at all.
> > 
> > well done!
> > 
> > 
> > Corner cases and problems
> > 
> > 1)
> > OK: cloning a powered-on VM from a snapshot with two disks works.
> > Here you can see the clone in action in the webadmin GUI, with iotop
> > on the node showing the two disks being copied in parallel.. well!
> > https://docs.google.com/file/d/0BwoPbcrMv8mvZ3lzY0l1MDc5OVE/edit?usp=sharing
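(By the way, on the node something like

  iotop -o -d 2

limits the output to processes that are actually doing I/O, which makes the two parallel copy processes easier to spot - assuming iotop is available there.)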
> > 
> > The VM is a Slackware 14 32-bit guest with virtio disks that I
> > obtained via virt-v2v from CentOS 6.3 + Qemu/KVM.
> > The problem is that the cloned VM recognizes the disks in reversed
> > order.
> > 
> > See these images, where sl1432 is the master and slcone is the clone.
> > 
> > The disk layout in the details pane looks identical, with the boot
> > disk being the one that appears second, but the master boots OK and
> > the clone does not: the disks are swapped.
> > 
> > Master VM disk details:
> > https://docs.google.com/file/d/0BwoPbcrMv8mvSWNVNFI4bHg4Umc/edit?usp=sharing
> > 
> > Clone VM disk details:
> > https://docs.google.com/file/d/0BwoPbcrMv8mvM1N0bVcyNlFPS1U/edit?usp=sharing
> > 
> > Page with the two consoles, where you can see that vda of the master
> > becomes vdb of the clone and vice versa:
> > https://docs.google.com/file/d/0BwoPbcrMv8mveFpESEs5V1dUTFE/edit?usp=sharing
> > 
> > Can I swap them back somehow? In VMware, for example, you can see
> > and edit the SCSI IDs of disks...
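As a quick check that is independent of the vda/vdb naming - assuming a Linux guest with a reasonably recent util-linux installed - you could compare the filesystems by UUID inside the master and the clone:

  lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT
  blkid

If /etc/fstab and the bootloader configuration reference the filesystems by UUID or LABEL rather than /dev/vdX, the clone should boot even when the devices are enumerated in a different order.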

Can you please provide the engine logs where the boot success/failure of master and clone can be seen?
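If it helps, the relevant lines can usually be pulled out of the engine log with something like the following - assuming the default log location on the engine server; the VM names are the ones from your screenshots:

  grep -iE 'sl1432|slcone|error' /var/log/ovirt-engine/engine.log | tail -n 100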
 
> > 2)
> > KO: if I clone a Windows 7 VM from a snapshot (at 17:09) I get this
> > in the GUI:
> > https://docs.google.com/file/d/0BwoPbcrMv8mvaW9OVW84WVI1dkU/edit?usp=sharing
> > 
> > You can find the engine.log from server restart (at 17:06) here:
> > https://docs.google.com/file/d/0BwoPbcrMv8mvTDQzRTFfbFgxX0E/edit?usp=sharing
> > 
> > Are there any restrictions?
> > The VM was powered off when I took the snapshot and made the
> > clone.
> 
> Hi Gianluca,
> 
> Regarding the second item you mentioned - the failure of the clone VM
> from snapshot - the engine log indicates that the following error
> occurred: "Volume Group not big enough", suggesting you should check
> whether you have enough free space.
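A quick way to confirm this - assuming the storage domain is a block (iSCSI/FC) domain backed by LVM - is to check the free space in the domain's volume group on the SPM host:

  vgs -o vg_name,vg_size,vg_free

The clone needs enough free space in that volume group for the new volumes covering all of the snapshot's disks.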
> 
> > Gianluca