<div dir="ltr"><div>Hello.<br></div><div><br></div><div>Logs after rotate.</div><div><br></div><div>Thanks</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2016-12-06 10:50 GMT-03:00 Yedidyah Bar David <span dir="ltr">&lt;<a href="mailto:didi@redhat.com" target="_blank">didi@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Tue, Dec 6, 2016 at 2:14 PM, Marcelo Leandro &lt;<a href="mailto:marceloltmm@gmail.com">marceloltmm@gmail.com</a>&gt; wrote:<br>
> Hello,
>
> I tried this solution, but for some VMs it was not resolved.
> Logs:
>
> src log:
>
> Thread-12::DEBUG::2016-12-06 08:50:58,112::check::327::storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/192.168.144.6:_home_iso/b5fa054f-0d3d-458b-a891-13fd9383ee7d/dom_md/metadata' rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n328 bytes (328 B) copied, 0.000474535 s, 691 kB/s\n') elapsed=0.08
> Thread-2374888::WARNING::2016-12-06 08:50:58,815::migration::671::virt.vm::(monitor_migration) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::The migration took 520 seconds which is exceeding the configured maximum time for migrations of 512 seconds. The migration will be aborted.
> Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread
> Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::570::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration downtime thread
> Thread-2374888::DEBUG::2016-12-06 08:50:58,817::migration::629::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopped migration monitor thread
> Thread-2374886::DEBUG::2016-12-06 08:50:59,098::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread
> Thread-2374886::ERROR::2016-12-06 08:50:59,098::migration::252::virt.vm::(_recover) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::operation aborted: migration job: canceled by client
> Thread-2374886::DEBUG::2016-12-06 08:50:59,098::stompreactor::408::jsonrpc.AsyncoreClient::(send) Sending response
> Thread-2374886::DEBUG::2016-12-06 08:50:59,321::__init__::208::jsonrpc.Notification::(emit) Sending event {"params": {"notify_time": 6040272640, "9b5ab7b4-1045-4858-b24c-1f5a9f6172c3": {"status": "Migration Source"}}, "jsonrpc": "2.0", "method": "|virt|VM_status|9b5ab7b4-1045-4858-b24c-1f5a9f6172c3"}
> Thread-2374886::ERROR::2016-12-06 08:50:59,322::migration::381::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to migrate
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/migration.py", line 363, in run
>     self._startUnderlyingMigration(time.time())
>   File "/usr/share/vdsm/virt/migration.py", line 438, in _startUnderlyingMigration
>     self._perform_with_downtime_thread(duri, muri)
>   File "/usr/share/vdsm/virt/migration.py", line 489, in _perform_with_downtime_thread
>     self._perform_migration(duri, muri)
>   File "/usr/share/vdsm/virt/migration.py", line 476, in _perform_migration
>     self._vm._dom.migrateToURI3(duri, params, flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
>     ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
>     ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 916, in wrapper
>     return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in migrateToURI3
>     if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
> libvirtError: operation aborted: migration job: canceled by client
> Thread-12::DEBUG::2016-12-06 08:50:59,875::check::296::storage.check::(_start_process) START check '/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', 'if=/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
> mailbox.SPMMonitor::DEBUG::2016-12-06 08:50:59,914::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) /usr/bin/taskset --cpu-list 0-31 dd if=/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)

This snippet is not enough, and the attached logs are too new. Can you
check/share more of the relevant log? Thanks.
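
As an aside: if the 512-second limit in the source log above comes from vdsm's
migration_max_time_per_gib_mem setting (please verify the option name and value
in /etc/vdsm/vdsm.conf on your version), the arithmetic would be roughly this,
assuming the default of 64 seconds per GiB and an 8 GiB guest (the 8 GiB is
only my guess from 512/64):

    # Minimal sketch of where the abort threshold in the log could come from.
    # Assumes vdsm's migration_max_time_per_gib_mem option (default 64 s/GiB);
    # check /etc/vdsm/vdsm.conf on your hosts for the real name and value.

    def max_migration_seconds(vm_mem_gib, seconds_per_gib=64):
        """Upper bound before the source vdsm aborts the migration."""
        return vm_mem_gib * seconds_per_gib

    print(max_migration_seconds(8))  # -> 512, matching the log; 520 > 512, so aborted

Raising that value only postpones the abort; if the guest dirties memory faster
than it can be copied, the migration still will not converge.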
<div class="HOEnZb"><div class="h5"><br>
&gt;<br>
&gt; dst log:<br>
&gt;<br>
> libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::task::995::Storage.TaskManager.Task::(_decref) Task=`7446f040-c5f4-497c-b4a5-8934921a7b89`::ref 1 aborting False
> libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::fileUtils::190::Storage.fileUtils::(cleanupdir) Removing directory: /var/run/vdsm/storage/6e5cce71-3438-4045-9d54-607123e0557e/413a560d-4919-4870-88d6-f7fedbb77523
> libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/36005076300810a4db800000000000002|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 6e5cce71-3438-4045-9d54-607123e0557e (cwd None)
> jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,090::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'9b5ab7b4-1045-4858-b24c-1f5a9f6172c3'}
> jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,090::API::314::vds::(destroy) About to destroy VM 9b5ab7b4-1045-4858-b24c-1f5a9f6172c3
> jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,091::vm::4171::virt.vm::(destroy) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::destroy Called
> Thread-51550::ERROR::2016-12-06 08:50:59,091::vm::759::virt.vm::(_startUnderlyingVm) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to start a migration destination vm
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm
>     self._completeIncomingMigration()
>   File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration
>     self._incomingMigrationFinished.isSet(), usedTimeout)
>   File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration
>     raise MigrationError(e.get_error_message())
> MigrationError: Domain not found: no domain with matching uuid '9b5ab7b4-1045-4858-b24c-1f5a9f6172c3'
> Thread-51550::INFO::2016-12-06 08:50:59,093::vm::1308::virt.vm::(setDownStatus) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Changed state to Down: VM failed to migrate (code=8)
>
> The logs are attached.
>
> Any ideas?
>
> Thanks very much.
>
> 2016-12-06 4:16 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:
>>
>> On Mon, Dec 5, 2016 at 10:25 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
>> > Hello,
>> > I have a problem: I cannot migrate a VM.
>> > Can someone help me?
>> >
>> > Message in the log:
>> >
>> > Src vdsm log:
>> >
>> > Thread-89514::WARNING::2016-12-05 17:01:49,542::migration::683::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration stalling: remaining (56MiB) > lowmark (2MiB). Refer to RHBZ#919201.
>> > Thread-89514::DEBUG::2016-12-05 17:01:49,543::migration::689::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::new iteration detected: 15
>> > Thread-89514::WARNING::2016-12-05 17:01:49,543::migration::704::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration is stuck: Hasn't progressed in 240.071660042 seconds. Aborting.
>>
>> This is usually a result of a too-busy VM, changing its memory faster than
>> the migration process can copy the changes to the destination.
>>
>> You can try changing the cluster migration policy to "suspend workload
>> if needed".
>>
>> For more details/background, see also:
>>
>> https://www.ovirt.org/develop/release-management/features/migration-enhancements/
>>
>> Best,
>>
>> > Thread-89514::DEBUG::2016-12-05 17:01:49,544::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread
>> > Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::570::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration downtime thread
>> > Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::629::virt.vm::(run) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopped migration monitor thread
>> > Thread-89513::DEBUG::2016-12-05 17:01:49,766::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread
>> > Thread-89513::ERROR::2016-12-05 17:01:49,767::migration::252::virt.vm::(_recover) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::operation aborted: migration job: canceled by client
>> >
>> > Dst vdsm.log:
>> >
>> > Periodic/13::WARNING::2016-12-05 17:01:49,791::sampling::483::virt.sampling.StatsCache::(put) dropped stale old sample: sampled 4303678.080000 stored 4303693.070000
>> > Periodic/13::DEBUG::2016-12-05 17:01:49,791::executor::221::Executor::(_run) Worker was discarded
>> > jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,792::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'}
>> > jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::API::314::vds::(destroy) About to destroy VM f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e
>> > jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::vm::4171::virt.vm::(destroy) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::destroy Called
>> > Thread-483::ERROR::2016-12-05 17:01:49,793::vm::759::virt.vm::(_startUnderlyingVm) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Failed to start a migration destination vm
>> > Traceback (most recent call last):
>> >   File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm
>> >     self._completeIncomingMigration()
>> >   File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration
>> >     self._incomingMigrationFinished.isSet(), usedTimeout)
>> >   File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration
>> >     raise MigrationError(e.get_error_message())
>> > MigrationError: Domain not found: no domain with matching uuid 'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'
>> >
>> > The logs are attached.
>> > Thanks.
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>>
>>
>> --
>> Didi
>
>
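
By the way, on the migration policy suggestion quoted above: besides the web
admin UI (Edit Cluster), the cluster's migration policy can also be changed
through the engine API. Below is only a rough sketch with the v4 Python SDK
(ovirt-engine-sdk-python), assuming your engine version exposes migration
policies there; the engine URL, credentials, cluster name and the policy id are
placeholders you need to replace (copy the id of "Suspend workload if needed"
from your engine, I did not verify it here):

    # Rough sketch, not verified against your engine version.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='...',                                     # placeholder
        ca_file='/etc/pki/ovirt-engine/ca.pem',
    )

    clusters_service = connection.system_service().clusters_service()
    cluster = clusters_service.list(search='name=Default')[0]  # your cluster

    # Placeholder id: use the UUID of the "Suspend workload if needed"
    # policy as shown by your engine.
    clusters_service.cluster_service(cluster.id).update(
        types.Cluster(
            migration=types.MigrationOptions(
                policy=types.MigrationPolicy(id='SUSPEND-WORKLOAD-POLICY-ID'),
            ),
        ),
    )

    connection.close()

Whichever way you set it, the effect is the same as picking the policy in the
Edit Cluster dialog.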

--
Didi