<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Nov 20, 2014 at 8:13 PM, Markus Stockhausen <span dir="ltr"><<a href="mailto:stockhausen@collogia.de" target="_blank">stockhausen@collogia.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Bob,<br>
<br>
looking at <a href="http://koji.fedoraproject.org/koji/packageinfo?packageID=91" target="_blank">http://koji.fedoraproject.org/koji/packageinfo?packageID=91</a><br>
I think FC20 will stay at 1.1.3. History shows:<br>
<span class=""><br></span></blockquote><div> </div><div>[snip]<br> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="">
________________________________________<br>
From: Bob Doolittle [<a href="mailto:bob@doolittle.us.com">bob@doolittle.us.com</a>]<br>
</span>Sent: Thursday, 20 November 2014 19:38<br>
<span class="">To: Markus Stockhausen<br>
Cc: s k; <a href="mailto:users@ovirt.org">users@ovirt.org</a>; Daniel Helgenberger; Coffee Chou<br>
</span>Subject: Re: Simple way to activate live merge in FC20 cluster<br>
<div class=""><div class="h5"><br>
Thanks Markus, but I have a single, self-hosted node, so I cannot migrate VMs.<br>
<br>
Is it your assumption that F20 will never be updated with libvirt 1.2.9?<br>
<br>
If that's the case, my best course (in a month or so when F21 is<br>
released) is probably to export all my VMs, install F21,<br>
reinstall/reconfigure oVirt, import the old Export Domain, and then<br>
import my VMs again.<br>
<br>
-Bob<br>
<br>
On 11/20/2014 01:20 PM, Markus Stockhausen wrote:<br>
> Hi Bob,<br>
><br>
> if you are on a cluster with FC20 hypervisor nodes without virt-preview<br>
> (like we run it, with qemu 1.6.2), the simplest way to get live merge features<br>
> should be - beware, NOT FULLY TESTED!<br>
><br>
> 1) Choose a single host (a single one is enough for the required setup)<br>
> 2) Install libvirt from the virt-preview repos. DON'T UPDATE QEMU, AS THAT BREAKS LIVE MIGRATION!<br>
> 3) Migrate the VMs you want to live merge to that host<br>
> 4) Do the live merge<br>
> 5) Migrate the VMs back to their original host<br>
><br></div></div></blockquote></div><br></div><div class="gmail_extra">Hello,<br></div><div class="gmail_extra">following the flow above, I now have an all-in-one environment based on F20 and oVirt 3.5. As it is both my engine and my hypervisor, I should be in the best possible situation.<br>Live merge is supposed to be supported on file-based storage, which should match what I have (local on host).<br><br></div><div class="gmail_extra">In fact, installing oVirt AIO on F20 automatically enables the virt-preview repo through the ovirt-3.5-dependencies.repo file, and I see:<br><br>[root@tekkaman qemu]# rpm -q libvirt<br>libvirt-1.2.9.1-1.fc20.x86_64<br><br>[root@tekkaman qemu]# vdsClient -s 0 getVdsCaps | grep -i merge<br> liveMerge = 'true'<br><br>[root@tekkaman qemu]# rpm -q qemu<br>qemu-1.6.2-10.fc20.x86_64<br><br></div><div class="gmail_extra">I created a CentOS 7 x86_64 VM with a VirtIO-SCSI disk; after the install I powered it off and then ran it normally.<br></div><div class="gmail_extra">Then I took a snapshot (while the VM was powered on) and ran a yum update.<br></div><div class="gmail_extra">Finally I rebooted the VM into the newly installed kernel and tried to delete the snapshot, and the task indeed started:<br>Snapshot 'test per live merge' deletion for VM 'c7' was initiated by admin.<br><br>but about a minute later I got an error:<br>Failed to delete snapshot 'test per live merge' for VM 'c7'.<br><br></div><div class="gmail_extra">The relevant parts of the log files are below.<br></div><div class="gmail_extra">What did I miss in the workflow?<br><br></div><div class="gmail_extra">BTW: both during the live snapshot and during the live merge I was disconnected from the SPICE console and from an ssh session open into the VM (I could reconnect to both): is this expected? I hope not... 
<br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Gianluca<br><br>engine.log<br>2014-11-21 01:16:00,182 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) [3337208b] Failed in MergeVDS method<br>2014-11-21 01:16:00,183 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) [3337208b] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand return value <br> StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=52, mMessage=Merge failed]]<br>2014-11-21 01:16:00,184 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) [3337208b] HostName = local_host<br>2014-11-21 01:16:00,190 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) [3337208b] Command MergeVDSCommand(HostName = local_host, MergeVDSCommandParameters{HostId = aab9571f-da17-4c3c-9e6b-d0224b84c31e, vmId=0ba6a56b-542c-480e-8bab-88aea9195302, storagePoolId=65c9777e-23f1-4f04-8cea-e7c8871dc88b, storageDomainId=0a8035e6-e41d-40ff-a154-e0a374f264b2, imageGroupId=8a0ba67f-78c0-4ded-9bda-97bb9424e385, imageId=1fdc7440-2465-49ec-8368-141afc0721f1, baseImageId=cd7dd270-0895-411d-bc97-c5ad0ebd80b1, topImageId=1fdc7440-2465-49ec-8368-141afc0721f1, bandwidth=0}) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge failed, code = 52<br>2014-11-21 01:16:00,191 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-3) [3337208b] FINISH, MergeVDSCommand, log id: 70748b53<br>2014-11-21 01:16:00,192 ERROR [org.ovirt.engine.core.bll.MergeCommand] (pool-7-thread-3) [3337208b] Command org.ovirt.engine.core.bll.MergeCommand throw Vdc Bll exception. 
With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge failed, code = 52 (Failed with error mergeErr and code 52)<br>2014-11-21 01:16:00,205 ERROR [org.ovirt.engine.core.bll.MergeCommand] (pool-7-thread-3) [3337208b] Transaction rolled-back for command: org.ovirt.engine.core.bll.MergeCommand.<br>2014-11-21 01:16:09,888 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-55) task id f245da80-58ce-4e2e-bb5a-0261cf71de0d is in pre-polling period and should not be polled. Pre-polling period is 60,000 millis. <br>2014-11-21 01:16:09,889 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-55) task id c7c363f1-98b8-468c-8fca-cf3116d00221 is in pre-polling period and should not be polled. Pre-polling period is 60,000 millis. <br>2014-11-21 01:16:10,080 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskLiveCommand] (DefaultQuartzScheduler_Worker-56) [1457fcca] Failed child command status for step MERGE<br><br><br>vdsm.log<br>Thread-266::DEBUG::2014-11-21 01:16:00,144::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storag<br>e.0a8035e6-e41d-40ff-a154-e0a374f264b2' is free, finding out if anyone is waiting for it.<br>Thread-266::DEBUG::2014-11-21 01:16:00,144::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waitin<br>g for resource 'Storage.0a8035e6-e41d-40ff-a154-e0a374f264b2', Clearing records.<br>Thread-266::DEBUG::2014-11-21 01:16:00,144::task::993::Storage.TaskManager.Task::(_decref) Task=`95f79027-3613-4f05-9d2d-02f8<br>c7975f02`::ref 0 aborting False<br>Thread-266::INFO::2014-11-21 01:16:00,158::vm::5743::vm.Vm::(merge) vmId=`0ba6a56b-542c-480e-8bab-88aea9195302`::Starting mer<br>ge with jobUUID='2247fea2-df73-407e-ad16-4622af7f52aa'<br>Thread-266::DEBUG::2014-11-21 01:16:00,160::libvirtconnection::143::root::(wrapper) Unknown 
libvirterror: ecode: 67 edom: 10 <br>level: 2 message: unsupported configuration: active commit not supported with this QEMU binary<br>Thread-266::ERROR::2014-11-21 01:16:00,161::vm::5751::vm.Vm::(merge) vmId=`0ba6a56b-542c-480e-8bab-88aea9195302`::Live merge failed (job: 2247fea2-df73-407e-ad16-4622af7f52aa)<br>Traceback (most recent call last):<br> File "/usr/share/vdsm/virt/vm.py", line 5747, in merge<br> flags)<br> File "/usr/share/vdsm/virt/vm.py", line 670, in f<br> ret = attr(*args, **kwargs)<br> File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper<br> ret = f(*args, **kwargs)<br> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 638, in blockCommit<br> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', dom=self)<br>libvirtError: unsupported configuration: active commit not supported with this QEMU binary<br>Thread-266::DEBUG::2014-11-21 01:16:00,181::BindingXMLRPC::1139::vds::(wrapper) return merge with {'status': {'message': 'Merge failed', 'code': 52}}<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,656::__init__::375::IOProcess::(_processLogs) (null)|Receiving request...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,657::__init__::375::IOProcess::(_processLogs) (null)|Queuing request in the thread pool...<br>Thread-26::DEBUG::2014-11-21 01:16:00,658::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/_data_DATA2/b24b94c7-5935-4940-9152-36ecd370ba7c/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,659::__init__::375::IOProcess::(_processLogs) Extracting request information...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,668::__init__::375::IOProcess::(_processLogs) (7100) Got request for method 'statvfs'<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,668::__init__::375::IOProcess::(_processLogs) (7100) Queuing 
response<br>Thread-26::DEBUG::2014-11-21 01:16:00,687::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n588 bytes (588 B) copied, 0.020396 s, 28.8 kB/s\n'; <rc> = 0<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,688::__init__::375::IOProcess::(_processLogs) (null)|Receiving request...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,689::__init__::375::IOProcess::(_processLogs) (null)|Queuing request in the thread pool...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,690::__init__::375::IOProcess::(_processLogs) Extracting request information...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,690::__init__::375::IOProcess::(_processLogs) (7101) Got request for method 'statvfs'<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,691::__init__::375::IOProcess::(_processLogs) (7101) Queuing response<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,691::__init__::375::IOProcess::(_processLogs) (null)|Receiving request...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,691::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,691::__init__::375::IOProcess::(_processLogs) Extracting request information...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,692::__init__::375::IOProcess::(_processLogs) (null)|(7102) Got request for method 'access'<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,693::__init__::375::IOProcess::(_processLogs) (7102) Queuing response<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,693::__init__::375::IOProcess::(_processLogs) (null)|Receiving request...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,693::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...<br>ioprocess communication (4939)::DEBUG::2014-11-21 
01:16:00,694::__init__::375::IOProcess::(_processLogs) (null)|Extracting request information...<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,695::__init__::375::IOProcess::(_processLogs) (7103) Got request for method 'access'<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,695::__init__::375::IOProcess::(_processLogs) (7103) Queuing response<br>ioprocess communication (4939)::DEBUG::2014-11-21 01:16:00,695::__init__::375::IOProcess::(_processLogs) (null)|Receiving req<br></div></div>
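The vdsm traceback above points at qemu rather than libvirt: "active commit not supported with this QEMU binary" means qemu 1.6.2 cannot commit the active layer, a capability that landed in QEMU 2.0. A hedged pre-flight sketch, reusing the commands already shown in this thread plus an assumed ">= 2.0" version cutoff, could check both conditions before attempting a live merge:

```shell
#!/bin/sh
# Pre-flight checks before attempting a live merge on an oVirt host.
# Assumptions: vdsClient and rpm are available on the host, and the
# ">= 2.0" cutoff reflects the QEMU release that added active block commit.

# True if the getVdsCaps output on stdin advertises liveMerge support.
caps_report_live_merge() {
    grep -qi "liveMerge = 'true'"
}

# True if the given qemu version string is >= 2.0 (sort -V orders versions,
# so if "2.0" sorts first in the pair, $1 is at least 2.0).
qemu_supports_active_commit() {
    [ "$(printf '%s\n2.0\n' "$1" | sort -V | head -n1)" = "2.0" ]
}

vdsClient -s 0 getVdsCaps 2>/dev/null | caps_report_live_merge \
    && echo "host advertises liveMerge" \
    || echo "host does not advertise liveMerge"

qemu_ver=$(rpm -q --qf '%{VERSION}' qemu 2>/dev/null)
if qemu_supports_active_commit "$qemu_ver"; then
    echo "qemu $qemu_ver should support active commit"
else
    echo "qemu $qemu_ver predates 2.0: merging the active layer will fail"
fi
```

On the host in this thread the second check would flag qemu-1.6.2-10.fc20 even though liveMerge is advertised, which seems to match the failure: the capability flag apparently reflects libvirt 1.2.9, but committing the active layer still needs a newer qemu binary.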