<div dir="ltr"><div>Hello,</div><div><br></div><div>I Tried this solution , but to some vms not was resolved.</div><div>Logs:</div><div><br></div><div>src logs:</div><div><br></div><div>Thread-12::DEBUG::2016-12-06 08:50:58,112::check::327::storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/192.168.144.6:_home_iso/b5fa054f-0d3d-458b-a891-13fd9383ee7d/dom_md/metadata' rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n328 bytes (328 B) copied, 0.000474535 s, 691 kB/s\n') elapsed=0.08<br>Thread-2374888::WARNING::2016-12-06 08:50:58,815::migration::671::virt.vm::(monitor_migration) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::The migration took 520 seconds which is exceeding the configured maximum time for migrations of 512 seconds. The migration will be aborted.<br>Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread<br>Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::570::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration downtime thread<br>Thread-2374888::DEBUG::2016-12-06 08:50:58,817::migration::629::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopped migration monitor thread<br>Thread-2374886::DEBUG::2016-12-06 08:50:59,098::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread<br>Thread-2374886::ERROR::2016-12-06 08:50:59,098::migration::252::virt.vm::(_recover) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::operation aborted: migration job: canceled by client<br>Thread-2374886::DEBUG::2016-12-06 08:50:59,098::stompreactor::408::jsonrpc.AsyncoreClient::(send) Sending response<br>Thread-2374886::DEBUG::2016-12-06 08:50:59,321::__init__::208::jsonrpc.Notification::(emit) Sending event {"params": {"notify_time": 6040272640, "9b5ab7b4-1045-4858-b24c-1f5a9f6172c3": {"status": "Migration Source"}}, "jsonrpc": "2.0", "method": "|virt|VM_status|9b5ab7b4-1045-4858-b24c-1f5a9f6172c3"}<br>Thread-2374886::ERROR::2016-12-06 08:50:59,322::migration::381::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to migrate<br>Traceback (most recent call last):<br> File "/usr/share/vdsm/virt/migration.py", line 363, in run<br> self._startUnderlyingMigration(time.time())<br> File "/usr/share/vdsm/virt/migration.py", line 438, in _startUnderlyingMigration<br> self._perform_with_downtime_thread(duri, muri)<br> File "/usr/share/vdsm/virt/migration.py", line 489, in _perform_with_downtime_thread<br> self._perform_migration(duri, muri)<br> File "/usr/share/vdsm/virt/migration.py", line 476, in _perform_migration<br> self._vm._dom.migrateToURI3(duri, params, flags)<br> File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f<br> ret = attr(*args, **kwargs)<br> File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper<br> ret = f(*args, **kwargs)<br> File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 916, in wrapper<br> return func(inst, *args, **kwargs)<br> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in migrateToURI3<br> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)<br>libvirtError: operation aborted: migration job: canceled by client<br>Thread-12::DEBUG::2016-12-06 08:50:59,875::check::296::storage.check::(_start_process) START check '/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', 
'if=/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00<br>mailbox.SPMMonitor::DEBUG::2016-12-06 08:50:59,914::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) /usr/bin/taskset --cpu-list 0-31 dd if=/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)</div><div><br></div><div>dst log:</div><div><br></div><div>libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::task::995::Storage.TaskManager.Task::(_decref) Task=`7446f040-c5f4-497c-b4a5-8934921a7b89`::ref 1 aborting False<br>libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::fileUtils::190::Storage.fileUtils::(cleanupdir) Removing directory: /var/run/vdsm/storage/6e5cce71-3438-4045-9d54-6071<br>23e0557e/413a560d-4919-4870-88d6-f7fedbb77523<br>libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' de<br>vices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/36005076300810a4db80<br>0000000000002|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0<br> } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 6e5cce71-3438-4045-9d54-607123e05<br>57e (cwd None)<br>jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,090::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'9b5ab7b4-104<br>5-4858-b24c-1f5a9f6172c3'}<br>jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,090::API::314::vds::(destroy) About to destroy VM 9b5ab7b4-1045-4858-b24c-1f5a9f6172c3<br>jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,091::vm::4171::virt.vm::(destroy) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::destroy Called<br>Thread-51550::ERROR::2016-12-06 08:50:59,091::vm::759::virt.vm::(_startUnderlyingVm) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to start a migration destinatio<br>n vm<br>Traceback (most recent call last):<br> File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm<br> self._completeIncomingMigration()<br> File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration<br> self._incomingMigrationFinished.isSet(), usedTimeout)<br> File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration<br> raise MigrationError(e.get_error_message())<br>MigrationError: Domain not found: no domain with matching uuid '9b5ab7b4-1045-4858-b24c-1f5a9f6172c3'<br>Thread-51550::INFO::2016-12-06 08:50:59,093::vm::1308::virt.vm::(setDownStatus) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Changed state to Down: VM failed to migrate (code=8)</div><div><br></div><div><br></div><div><br></div><div>Logs Attached.</div><div><br></div><div>any ideas?</div><div><br></div><div>Very thanks.<br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2016-12-06 4:16 GMT-03:00 Yedidyah Bar David <span dir="ltr"><<a href="mailto:didi@redhat.com" target="_blank">didi@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>On Mon, Dec 5, 2016 at 10:25 PM, Marcelo Leandro <<a href="mailto:marceloltmm@gmail.com">marceloltmm@gmail.com</a>> wrote:<br>
> Hello,
> I have a problem, I cannot migrate a VM.
> Can someone help me?
>
> Message in log:
>
> Src vdsm log:
>
> Thread-89514::WARNING::2016-12-05 17:01:49,542::migration::683::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration stalling: remaining (56MiB) > lowmark (2MiB). Refer to RHBZ#919201.
> Thread-89514::DEBUG::2016-12-05 17:01:49,543::migration::689::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::new iteration detected: 15
> Thread-89514::WARNING::2016-12-05 17:01:49,543::migration::704::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration is stuck: has not progressed in 240.071660042 seconds. Aborting.

This is usually a result of a too-busy VM, changing its memory faster than
the migration process can copy the changes to the destination.
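
(For completeness, and assuming more or less default vdsm settings on your
hosts: the "stalling"/"stuck" aborts in the log above are controlled by
migration_progress_timeout, and the overall cap on migration time comes from
migration_max_time_per_gib_mem (seconds allowed per GiB of guest RAM, 64 by
default if I recall correctly), both in the [vars] section of
/etc/vdsm/vdsm.conf on each host. If you only need more headroom for a busy
VM, a rough sketch, to be adapted to your environment and followed by a
restart of vdsmd on both source and destination, would be:

    [vars]
    # Seconds allowed per GiB of guest RAM before the migration is
    # aborted for taking too long (64 by default, if I recall correctly).
    migration_max_time_per_gib_mem = 128
    # Seconds without progress before the migration is considered stuck.
    migration_progress_timeout = 300

Note that this only buys time; for a VM that dirties memory quickly, the
policy change below is usually the better fix.)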

You can try changing the cluster migration policy to "suspend workload
if needed".

For more details/background, see also:

https://www.ovirt.org/develop/release-management/features/migration-enhancements/

Best,

> Thread-89514::DEBUG::2016-12-05 17:01:49,544::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread
> Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::570::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration downtime thread
> Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::629::virt.vm::(run) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopped migration monitor thread
> Thread-89513::DEBUG::2016-12-05 17:01:49,766::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread
> Thread-89513::ERROR::2016-12-05 17:01:49,767::migration::252::virt.vm::(_recover) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::operation aborted: migration job: canceled by client
>
> Dst vdsm.log:
>
> periodic/13::WARNING::2016-12-05 17:01:49,791::sampling::483::virt.sampling.StatsCache::(put) dropped stale old sample: sampled 4303678.080000 stored 4303693.070000
> periodic/13::DEBUG::2016-12-05 17:01:49,791::executor::221::Executor::(_run) Worker was discarded
> jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,792::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'}
> jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::API::314::vds::(destroy) About to destroy VM f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e
> jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::vm::4171::virt.vm::(destroy) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::destroy Called
> Thread-483::ERROR::2016-12-05 17:01:49,793::vm::759::virt.vm::(_startUnderlyingVm) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Failed to start a migration destination vm
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm
>     self._completeIncomingMigration()
>   File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration
>     self._incomingMigrationFinished.isSet(), usedTimeout)
>   File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration
>     raise MigrationError(e.get_error_message())
> MigrationError: Domain not found: no domain with matching uuid 'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'
>
> The logs attached.
> Thanks.
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
<span class="HOEnZb"><font color="#888888"><br>
<br>
<br>
--<br>
Didi<br>
</font></span></blockquote></div><br></div>