<div dir="ltr"><div>Hello.<br></div><div><br></div><div>Here are the logs after rotation.</div><div><br></div><div>Thanks</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2016-12-06 10:50 GMT-03:00 Yedidyah Bar David <span dir="ltr"><<a href="mailto:didi@redhat.com" target="_blank">didi@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Tue, Dec 6, 2016 at 2:14 PM, Marcelo Leandro <<a href="mailto:marceloltmm@gmail.com">marceloltmm@gmail.com</a>> wrote:<br>
> Hello,<br>
><br>
> I tried this solution, but for some VMs it was not resolved.<br>
> Logs:<br>
><br>
> src logs:<br>
><br>
> Thread-12::DEBUG::2016-12-06 08:50:58,112::check::327::storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/192.168.144.6:_home_iso/b5fa054f-0d3d-458b-a891-13fd9383ee7d/dom_md/metadata' rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n328 bytes (328 B) copied, 0.000474535 s, 691 kB/s\n') elapsed=0.08<br>
> Thread-2374888::WARNING::2016-12-06 08:50:58,815::migration::671::virt.vm::(monitor_migration) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::The migration took 520 seconds which is exceeding the configured maximum time for migrations of 512 seconds. The migration will be aborted.<br>
> Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread<br>
> Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::570::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration downtime thread<br>
> Thread-2374888::DEBUG::2016-12-06 08:50:58,817::migration::629::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopped migration monitor thread<br>
> Thread-2374886::DEBUG::2016-12-06 08:50:59,098::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread<br>
> Thread-2374886::ERROR::2016-12-06 08:50:59,098::migration::252::virt.vm::(_recover) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::operation aborted: migration job: canceled by client<br>
> Thread-2374886::DEBUG::2016-12-06 08:50:59,098::stompreactor::408::jsonrpc.AsyncoreClient::(send) Sending response<br>
> Thread-2374886::DEBUG::2016-12-06 08:50:59,321::__init__::208::jsonrpc.Notification::(emit) Sending event {"params": {"notify_time": 6040272640, "9b5ab7b4-1045-4858-b24c-1f5a9f6172c3": {"status": "Migration Source"}}, "jsonrpc": "2.0", "method": "|virt|VM_status|9b5ab7b4-1045-4858-b24c-1f5a9f6172c3"}<br>
> Thread-2374886::ERROR::2016-12-06 08:50:59,322::migration::381::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to migrate<br>
> Traceback (most recent call last):<br>
>   File "/usr/share/vdsm/virt/migration.py", line 363, in run<br>
>     self._startUnderlyingMigration(time.time())<br>
>   File "/usr/share/vdsm/virt/migration.py", line 438, in _startUnderlyingMigration<br>
>     self._perform_with_downtime_thread(duri, muri)<br>
>   File "/usr/share/vdsm/virt/migration.py", line 489, in _perform_with_downtime_thread<br>
>     self._perform_migration(duri, muri)<br>
>   File "/usr/share/vdsm/virt/migration.py", line 476, in _perform_migration<br>
>     self._vm._dom.migrateToURI3(duri, params, flags)<br>
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f<br>
>     ret = attr(*args, **kwargs)<br>
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper<br>
>     ret = f(*args, **kwargs)<br>
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 916, in wrapper<br>
>     return func(inst, *args, **kwargs)<br>
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in migrateToURI3<br>
>     if ret == -1: raise libvirtError('virDomainMigrateToURI3() failed', dom=self)<br>
> libvirtError: operation aborted: migration job: canceled by client<br>
> Thread-12::DEBUG::2016-12-06 08:50:59,875::check::296::storage.check::(_start_process) START check '/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', 'if=/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00<br>
> mailbox.SPMMonitor::DEBUG::2016-12-06 08:50:59,914::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) /usr/bin/taskset --cpu-list 0-31 dd if=/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)<br>
<br>
</div></div>This snippet is not enough, and the attached logs are too new. Can you<br>
check/share<br>
more of the relevant log? Thanks.<br>
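For reference, the 512-second cap reported in your source log is consistent with a per-GiB migration time budget. A rough sketch of the arithmetic (the 64 s/GiB figure is the usual vdsm default for `migration_max_time_per_gib_mem`, but verify the configured value on your hosts):

```python
# Sketch: how a 512 s migration deadline falls out of a per-GiB budget.
# Assumes (hedged) a budget of 64 s per GiB of guest RAM, the usual
# vdsm default; an 8 GiB guest then gets 64 * 8 = 512 s before the
# monitor thread aborts the migration, as seen in the log.
def migration_deadline(mem_gib, seconds_per_gib=64):
    """Maximum allowed migration time, in seconds."""
    return seconds_per_gib * mem_gib

print(migration_deadline(8))  # 512
```

So a migration that took 520 s simply outran its 512 s budget by 8 s; a slightly larger budget, or less memory churn in the guest, would have let it finish.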
<div class="HOEnZb"><div class="h5"><br>
><br>
> dst log:<br>
><br>
> libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::task::995::Storage.TaskManager.Task::(_decref) Task=`7446f040-c5f4-497c-b4a5-8934921a7b89`::ref 1 aborting False<br>
> libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::fileUtils::190::Storage.fileUtils::(cleanupdir) Removing directory: /var/run/vdsm/storage/6e5cce71-3438-4045-9d54-607123e0557e/413a560d-4919-4870-88d6-f7fedbb77523<br>
> libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/36005076300810a4db800000000000002|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 6e5cce71-3438-4045-9d54-607123e0557e (cwd None)<br>
> jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,090::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'9b5ab7b4-1045-4858-b24c-1f5a9f6172c3'}<br>
> jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,090::API::314::vds::(destroy) About to destroy VM 9b5ab7b4-1045-4858-b24c-1f5a9f6172c3<br>
> jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,091::vm::4171::virt.vm::(destroy) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::destroy Called<br>
> Thread-51550::ERROR::2016-12-06 08:50:59,091::vm::759::virt.vm::(_startUnderlyingVm) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to start a migration destination vm<br>
> Traceback (most recent call last):<br>
>   File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm<br>
>     self._completeIncomingMigration()<br>
>   File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration<br>
>     self._incomingMigrationFinished.isSet(), usedTimeout)<br>
>   File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration<br>
>     raise MigrationError(e.get_error_message())<br>
> MigrationError: Domain not found: no domain with matching uuid '9b5ab7b4-1045-4858-b24c-1f5a9f6172c3'<br>
> Thread-51550::INFO::2016-12-06 08:50:59,093::vm::1308::virt.vm::(setDownStatus) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Changed state to Down: VM failed to migrate (code=8)<br>
><br>
><br>
><br>
> Logs attached.<br>
><br>
> Any ideas?<br>
><br>
> Many thanks.<br>
><br>
><br>
> 2016-12-06 4:16 GMT-03:00 Yedidyah Bar David <<a href="mailto:didi@redhat.com">didi@redhat.com</a>>:<br>
>><br>
>> On Mon, Dec 5, 2016 at 10:25 PM, Marcelo Leandro <<a href="mailto:marceloltmm@gmail.com">marceloltmm@gmail.com</a>><br>
>> wrote:<br>
>> > Hello<br>
>> > I have a problem: I cannot migrate a VM.<br>
>> > Can someone help me?<br>
>> ><br>
>> > Message in log:<br>
>> ><br>
>> > Src vdsm log:<br>
>> ><br>
>> > Thread-89514::WARNING::2016-12-05 17:01:49,542::migration::683::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration stalling: remaining (56MiB) > lowmark (2MiB). Refer to RHBZ#919201.<br>
>> > Thread-89514::DEBUG::2016-12-05 17:01:49,543::migration::689::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::new iteration detected: 15<br>
>> > Thread-89514::WARNING::2016-12-05 17:01:49,543::migration::704::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration is stuck: has not progressed in 240.071660042 seconds. Aborting.<br>
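The stall heuristic those warnings describe can be sketched as follows (illustrative only, not vdsm's actual code; the lowmark name and the 240 s grace period are taken from the log above):

```python
# Stall-detection sketch: migration progress is measured as new lows in
# the remaining-memory counter ("lowmark"); if no new low is observed
# for grace_seconds, the migration is declared stuck and aborted.
def monitor_migration(samples, grace_seconds=240):
    """samples: iterable of (timestamp_s, remaining_mib) progress reports.
    Returns "abort" when progress stalls past grace_seconds, else "ok"."""
    lowmark = float("inf")
    last_progress_ts = None
    for ts, remaining in samples:
        if last_progress_ts is None:
            last_progress_ts = ts
        if remaining < lowmark:
            lowmark = remaining        # new low water mark: real progress
            last_progress_ts = ts
        elif ts - last_progress_ts >= grace_seconds:
            return "abort"             # stuck, as in the log above
    return "ok"
```

A guest dirtying memory as fast as it is copied keeps `remaining` hovering above the lowmark (56 MiB vs. 2 MiB in the log), so the grace period eventually expires.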
>><br>
>> This is usually the result of a VM that is too busy, dirtying its memory faster than<br>
>> the migration process can copy the changes to the destination.<br>
>><br>
>> You can try changing the cluster migration policy to "suspend workload<br>
>> if needed".<br>
>><br>
>> For more details/background, see also:<br>
>><br>
>><br>
>> <a href="https://www.ovirt.org/develop/release-management/features/migration-enhancements/" target="_blank" rel="noreferrer">https://www.ovirt.org/develop/<wbr>release-management/features/<wbr>migration-enhancements/</a><br>
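If the policy change alone does not help, another knob is the per-GiB migration time budget on the hosts, which is what produced the 512 s cap in the earlier log. A hypothetical sketch for /etc/vdsm/vdsm.conf (verify the option name and default against your vdsm version; the value shown is illustrative, and vdsmd must be restarted afterwards):

```ini
# /etc/vdsm/vdsm.conf -- illustrative values only
[vars]
# Seconds of migration time allowed per GiB of guest RAM (vdsm default: 64).
# Doubling it would give an 8 GiB guest a 1024 s budget instead of 512 s.
migration_max_time_per_gib_mem = 128
```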
>><br>
>> Best,<br>
>><br>
>> > Thread-89514::DEBUG::2016-12-05 17:01:49,544::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread<br>
>> > Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::570::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration downtime thread<br>
>> > Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::629::virt.vm::(run) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopped migration monitor thread<br>
>> > Thread-89513::DEBUG::2016-12-05 17:01:49,766::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread<br>
>> > Thread-89513::ERROR::2016-12-05 17:01:49,767::migration::252::virt.vm::(_recover) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::operation aborted: migration job: canceled by client<br>
>> ><br>
>> > Dst vdsm.log:<br>
>> ><br>
>> > periodic/13::WARNING::2016-12-05 17:01:49,791::sampling::483::virt.sampling.StatsCache::(put) dropped stale old sample: sampled 4303678.080000 stored 4303693.070000<br>
>> > periodic/13::DEBUG::2016-12-05 17:01:49,791::executor::221::Executor::(_run) Worker was discarded<br>
>> > jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,792::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'}<br>
>> > jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::API::314::vds::(destroy) About to destroy VM f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e<br>
>> > jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::vm::4171::virt.vm::(destroy) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::destroy Called<br>
>> > Thread-483::ERROR::2016-12-05 17:01:49,793::vm::759::virt.vm::(_startUnderlyingVm) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Failed to start a migration destination vm<br>
>> > Traceback (most recent call last):<br>
>> >   File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm<br>
>> >     self._completeIncomingMigration()<br>
>> >   File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration<br>
>> >     self._incomingMigrationFinished.isSet(), usedTimeout)<br>
>> >   File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration<br>
>> >     raise MigrationError(e.get_error_message())<br>
>> > MigrationError: Domain not found: no domain with matching uuid 'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'<br>
>> ><br>
>> > The logs attached.<br>
>> > Thanks.<br>
>> ><br>
>> > ______________________________<wbr>_________________<br>
>> > Users mailing list<br>
>> > <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
>> > <a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank" rel="noreferrer">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><br>
>> ><br>
>><br>
>><br>
>><br>
>> --<br>
>> Didi<br>
><br>
><br>
<br>
<br>
<br>
</div></div><span class="HOEnZb"><font color="#888888">--<br>
Didi<br>
</font></span></blockquote></div><br></div>