
Hello, I have a problem: I cannot migrate a VM. Can someone help me?
Messages in the log:
Src vdsm log:
Thread-89514::WARNING::2016-12-05 17:01:49,542::migration::683::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration stalling: remaining (56MiB) > lowmark (2MiB). Refer to RHBZ#919201.
Thread-89514::DEBUG::2016-12-05 17:01:49,543::migration::689::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::new iteration detected: 15
Thread-89514::WARNING::2016-12-05 17:01:49,543::migration::704::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration is stuck: has not progressed in 240.071660042 seconds. Aborting.
Thread-89514::DEBUG::2016-12-05 17:01:49,544::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread
Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::570::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration downtime thread
Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::629::virt.vm::(run) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopped migration monitor thread
Thread-89513::DEBUG::2016-12-05 17:01:49,766::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread
Thread-89513::ERROR::2016-12-05 17:01:49,767::migration::252::virt.vm::(_recover) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::operation aborted: migration job: canceled by client
Dst vdsm.log:
Periodic/13::WARNING::2016-12-05 17:01:49,791::sampling::483::virt.sampling.StatsCache::(put) dropped stale old sample: sampled 4303678.080000 stored 4303693.070000
Periodic/13::DEBUG::2016-12-05 17:01:49,791::executor::221::Executor::(_run) Worker was discarded
jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,792::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'}
jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::API::314::vds::(destroy) About to destroy VM f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e
jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::vm::4171::virt.vm::(destroy) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::destroy Called
Thread-483::ERROR::2016-12-05 17:01:49,793::vm::759::virt.vm::(_startUnderlyingVm) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Failed to start a migration destination vm
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm
    self._completeIncomingMigration()
  File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration
    self._incomingMigrationFinished.isSet(), usedTimeout)
  File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration
    raise MigrationError(e.get_error_message())
MigrationError: Domain not found: no domain with matching uuid 'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'
The logs are attached. Thanks.

On Mon, Dec 5, 2016 at 10:25 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Hello, I have a problem: I cannot migrate a VM. Can someone help me?
Messages in the log:
Src vdsm log:
Thread-89514::WARNING::2016-12-05 17:01:49,542::migration::683::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration stalling: remaining (56MiB) > lowmark (2MiB). Refer to RHBZ#919201.
Thread-89514::DEBUG::2016-12-05 17:01:49,543::migration::689::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::new iteration detected: 15
Thread-89514::WARNING::2016-12-05 17:01:49,543::migration::704::virt.vm::(monitor_migration) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Migration is stuck: has not progressed in 240.071660042 seconds. Aborting.
This is usually a result of a too-busy VM, changing its memory faster than the migration process can copy the changes to the destination.
You can try changing the cluster migration policy to "suspend workload if needed".
For more details/background, see also:
https://www.ovirt.org/develop/release-management/features/migration-enhancements/
Best,
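If you prefer to script that change rather than use the Admin Portal, here is a minimal sketch with the oVirt 4 Python SDK (ovirtsdk4). The engine URL, credentials and cluster name are placeholders, and the policy id constant is an assumption you should verify against your engine (it must match the "Suspend workload if needed" entry in the cluster's migration policy list):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adapt to your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Assumed id of the "Suspend workload if needed" policy; verify it on your engine.
SUSPEND_WORKLOAD_POLICY_ID = '80554327-0569-496b-bdeb-fcbbf52b827c'

clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]  # cluster name is a placeholder

# Update only the cluster-level migration policy; other cluster settings are left untouched.
clusters_service.cluster_service(cluster.id).update(
    types.Cluster(
        migration=types.MigrationOptions(
            policy=types.MigrationPolicy(id=SUSPEND_WORKLOAD_POLICY_ID),
        ),
    ),
)
connection.close()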
Thread-89514::DEBUG::2016-12-05 17:01:49,544::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread
Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::570::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration downtime thread
Thread-89514::DEBUG::2016-12-05 17:01:49,545::migration::629::virt.vm::(run) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopped migration monitor thread
Thread-89513::DEBUG::2016-12-05 17:01:49,766::migration::715::virt.vm::(stop) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::stopping migration monitor thread
Thread-89513::ERROR::2016-12-05 17:01:49,767::migration::252::virt.vm::(_recover) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::operation aborted: migration job: canceled by client
Dst vdsm.log:
Periodic/13::WARNING::2016-12-05 17:01:49,791::sampling::483::virt.sampling.StatsCache::(put) dropped stale old sample: sampled 4303678.080000 stored 4303693.070000
Periodic/13::DEBUG::2016-12-05 17:01:49,791::executor::221::Executor::(_run) Worker was discarded
jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,792::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'}
jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::API::314::vds::(destroy) About to destroy VM f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e
jsonrpc.Executor/0::DEBUG::2016-12-05 17:01:49,793::vm::4171::virt.vm::(destroy) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::destroy Called
Thread-483::ERROR::2016-12-05 17:01:49,793::vm::759::virt.vm::(_startUnderlyingVm) vmId=`f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e`::Failed to start a migration destination vm
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm
    self._completeIncomingMigration()
  File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration
    self._incomingMigrationFinished.isSet(), usedTimeout)
  File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration
    raise MigrationError(e.get_error_message())
MigrationError: Domain not found: no domain with matching uuid 'f38b9f7d-5bd0-4bdf-885c-e03e8d6bc70e'
The logs are attached. Thanks.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Didi

Hello,
I tried this solution, but for some VMs the issue was not resolved. Logs:
src logs:
Thread-12::DEBUG::2016-12-06 08:50:58,112::check::327::storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/192.168.144.6:_home_iso/b5fa054f-0d3d-458b-a891-13fd9383ee7d/dom_md/metadata' rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n328 bytes (328 B) copied, 0.000474535 s, 691 kB/s\n') elapsed=0.08
Thread-2374888::WARNING::2016-12-06 08:50:58,815::migration::671::virt.vm::(monitor_migration) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::The migration took 520 seconds which is exceeding the configured maximum time for migrations of 512 seconds. The migration will be aborted.
Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread
Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::570::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration downtime thread
Thread-2374888::DEBUG::2016-12-06 08:50:58,817::migration::629::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopped migration monitor thread
Thread-2374886::DEBUG::2016-12-06 08:50:59,098::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread
Thread-2374886::ERROR::2016-12-06 08:50:59,098::migration::252::virt.vm::(_recover) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::operation aborted: migration job: canceled by client
Thread-2374886::DEBUG::2016-12-06 08:50:59,098::stompreactor::408::jsonrpc.AsyncoreClient::(send) Sending response
Thread-2374886::DEBUG::2016-12-06 08:50:59,321::__init__::208::jsonrpc.Notification::(emit) Sending event {"params": {"notify_time": 6040272640, "9b5ab7b4-1045-4858-b24c-1f5a9f6172c3": {"status": "Migration Source"}}, "jsonrpc": "2.0", "method": "|virt|VM_status|9b5ab7b4-1045-4858-b24c-1f5a9f6172c3"}
Thread-2374886::ERROR::2016-12-06 08:50:59,322::migration::381::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 363, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 438, in _startUnderlyingMigration
    self._perform_with_downtime_thread(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 489, in _perform_with_downtime_thread
    self._perform_migration(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 476, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 916, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in migrateToURI3
    if ret == -1: raise libvirtError('virDomainMigrateToURI3() failed', dom=self)
libvirtError: operation aborted: migration job: canceled by client
Thread-12::DEBUG::2016-12-06 08:50:59,875::check::296::storage.check::(_start_process) START check '/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', 'if=/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
mailbox.SPMMonitor::DEBUG::2016-12-06 08:50:59,914::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) /usr/bin/taskset --cpu-list 0-31 dd if=/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
dst log:
libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::task::995::Storage.TaskManager.Task::(_decref) Task=`7446f040-c5f4-497c-b4a5-8934921a7b89`::ref 1 aborting False
libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::fileUtils::190::Storage.fileUtils::(cleanupdir) Removing directory: /var/run/vdsm/storage/6e5cce71-3438-4045-9d54-607123e0557e/413a560d-4919-4870-88d6-f7fedbb77523
libvirtEventLoop::DEBUG::2016-12-06 08:50:59,080::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/36005076300810a4db800000000000002|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 6e5cce71-3438-4045-9d54-607123e0557e (cwd None)
jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,090::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'VM.destroy' in bridge with {u'vmID': u'9b5ab7b4-1045-4858-b24c-1f5a9f6172c3'}
jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,090::API::314::vds::(destroy) About to destroy VM 9b5ab7b4-1045-4858-b24c-1f5a9f6172c3
jsonrpc.Executor/6::DEBUG::2016-12-06 08:50:59,091::vm::4171::virt.vm::(destroy) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::destroy Called
Thread-51550::ERROR::2016-12-06 08:50:59,091::vm::759::virt.vm::(_startUnderlyingVm) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to start a migration destination vm
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 725, in _startUnderlyingVm
    self._completeIncomingMigration()
  File "/usr/share/vdsm/virt/vm.py", line 3071, in _completeIncomingMigration
    self._incomingMigrationFinished.isSet(), usedTimeout)
  File "/usr/share/vdsm/virt/vm.py", line 3154, in _attachLibvirtDomainAfterMigration
    raise MigrationError(e.get_error_message())
MigrationError: Domain not found: no domain with matching uuid '9b5ab7b4-1045-4858-b24c-1f5a9f6172c3'
Thread-51550::INFO::2016-12-06 08:50:59,093::vm::1308::virt.vm::(setDownStatus) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Changed state to Down: VM failed to migrate (code=8)
Logs attached.
Any ideas?
Many thanks.

On Tue, Dec 6, 2016 at 2:14 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Hello,
I tried this solution, but for some VMs the issue was not resolved. Logs:
src logs:
Thread-12::DEBUG::2016-12-06 08:50:58,112::check::327::storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/192.168.144.6:_home_iso/b5fa054f-0d3d-458b-a891-13fd9383ee7d/dom_md/metadata' rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n328 bytes (328 B) copied, 0.000474535 s, 691 kB/s\n') elapsed=0.08
Thread-2374888::WARNING::2016-12-06 08:50:58,815::migration::671::virt.vm::(monitor_migration) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::The migration took 520 seconds which is exceeding the configured maximum time for migrations of 512 seconds. The migration will be aborted.
Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread
Thread-2374888::DEBUG::2016-12-06 08:50:58,816::migration::570::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration downtime thread
Thread-2374888::DEBUG::2016-12-06 08:50:58,817::migration::629::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopped migration monitor thread
Thread-2374886::DEBUG::2016-12-06 08:50:59,098::migration::715::virt.vm::(stop) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::stopping migration monitor thread
Thread-2374886::ERROR::2016-12-06 08:50:59,098::migration::252::virt.vm::(_recover) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::operation aborted: migration job: canceled by client
Thread-2374886::DEBUG::2016-12-06 08:50:59,098::stompreactor::408::jsonrpc.AsyncoreClient::(send) Sending response
Thread-2374886::DEBUG::2016-12-06 08:50:59,321::__init__::208::jsonrpc.Notification::(emit) Sending event {"params": {"notify_time": 6040272640, "9b5ab7b4-1045-4858-b24c-1f5a9f6172c3": {"status": "Migration Source"}}, "jsonrpc": "2.0", "method": "|virt|VM_status|9b5ab7b4-1045-4858-b24c-1f5a9f6172c3"}
Thread-2374886::ERROR::2016-12-06 08:50:59,322::migration::381::virt.vm::(run) vmId=`9b5ab7b4-1045-4858-b24c-1f5a9f6172c3`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 363, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 438, in _startUnderlyingMigration
    self._perform_with_downtime_thread(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 489, in _perform_with_downtime_thread
    self._perform_migration(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 476, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 916, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in migrateToURI3
    if ret == -1: raise libvirtError('virDomainMigrateToURI3() failed', dom=self)
libvirtError: operation aborted: migration job: canceled by client
Thread-12::DEBUG::2016-12-06 08:50:59,875::check::296::storage.check::(_start_process) START check '/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', 'if=/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
mailbox.SPMMonitor::DEBUG::2016-12-06 08:50:59,914::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) /usr/bin/taskset --cpu-list 0-31 dd if=/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
This snippet is not enough, and the attached logs are too new. Can you check/share more of the relevant log? Thanks.
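To find the right file, one option is to scan all rotated vdsm logs for the VM id within the failure time window. This is only a rough Python 3 sketch; the VM id and time window are taken from the snippet above, and the assumption that rotated logs sit under /var/log/vdsm as plain, .gz or .xz files may need adapting on your hosts:

import gzip
import lzma
from pathlib import Path

VM_ID = '9b5ab7b4-1045-4858-b24c-1f5a9f6172c3'    # VM from the failed migration above
WINDOW = ('2016-12-06 08:4', '2016-12-06 08:5')   # timestamp prefixes around the failure

def open_log(path):
    # Rotated vdsm logs may be plain text, gzip- or xz-compressed.
    if path.suffix == '.xz':
        return lzma.open(path, 'rt', errors='replace')
    if path.suffix == '.gz':
        return gzip.open(path, 'rt', errors='replace')
    return open(path, errors='replace')

for log in sorted(Path('/var/log/vdsm').glob('vdsm.log*')):
    with open_log(log) as f:
        hits = sum(1 for line in f
                   if VM_ID in line and any(ts in line for ts in WINDOW))
    if hits:
        print('%s: %d lines for this VM in the time window' % (log, hits))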
-- Didi

Hello. Logs after rotate. Thanks 2016-12-06 10:50 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:

On Tue, Dec 6, 2016 at 4:02 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Hello.
Logs after rotate.
Seems like the same files, only with newer stuff. We need older ones, from the time frame when the migration failed. Thanks.
-- Didi

I am not sure when the problem started because I was on vacation. I only have the logs from the last try.
Many thanks,
Marcelo Leandro

On Tue, Dec 6, 2016 at 5:25 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
I am not sure when the problem started because I was on vacation. I only have the logs from the last try.
The logs you attached did not include relevant time intervals. Are you sure you do not have logs that contain the snippet you included in-line? Anyway, you can always try again, and if it still fails, attach the new logs. Best,
Many thanks.
Marcelo Leandro
-- Didi

Hello,
Logs attached.
Started migration = 3:03 PM
Failed migration = 3:12 PM
Thanks.
2016-12-06 13:14 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:
On Tue, Dec 6, 2016 at 5:25 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
I am not sure when the problem started because I was on vacation. I only have the logs from the last try.
The logs you attached did not include relevant time intervals. Are you sure you do not have logs that contain the snippet you included in-line?
Anyway, you can always try again, and if it still fails, attach the new logs.
Best,
Many thanks.
Marcelo Leandro
-- Didi

Sorry, this is the correct src-vdsm.log.
Thanks
2016-12-06 15:44 GMT-03:00 Marcelo Leandro <marceloltmm@gmail.com>:
Hello, Logs attached. Started migration = 3:03 PM. Failed migration = 3:12 PM. Thanks.
2016-12-06 13:14 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:
On Tue, Dec 6, 2016 at 5:25 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
I am not sure when the problem started because I was on vacation. I only have the logs from the last try.
The logs you attached did not include relevant time intervals. Are you sure you do not have logs that contain the snippet you included in-line?
Anyway, you can always try again, and if it still fails, attach the new logs.
Best,
Many thanks.
Marcelo Leandro
-- Didi

On Tue, Dec 6, 2016 at 8:48 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Sorry, this is the correct src-vdsm.log.
Perhaps try to edit the VM, choose Host tab, and tweak migration options. You can set policy to Legacy, and a high-enough "custom migration downtime".
See also e.g.:
http://lists.ovirt.org/pipermail/users/2016-April/039313.html
Best,
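For reference, the same per-VM tweak can also be scripted with the oVirt 4 Python SDK (ovirtsdk4). This is only a minimal sketch: the VM name below is a placeholder, and the all-zero policy id is assumed to be the "Legacy" policy on your engine, so verify both before running it.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]  # VM name is a placeholder

# Set a per-VM migration policy and a custom migration downtime (in milliseconds).
# The all-zero id is assumed to be the "Legacy" policy; check it on your engine.
vms_service.vm_service(vm.id).update(
    types.Vm(
        migration=types.MigrationOptions(
            policy=types.MigrationPolicy(id='00000000-0000-0000-0000-000000000000'),
        ),
        migration_downtime=5000,
    ),
)
connection.close()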
Thanks
2016-12-06 15:44 GMT-03:00 Marcelo Leandro <marceloltmm@gmail.com>:
Hello, Logs attached. Started migration = 3:03 PM. Failed migration = 3:12 PM. Thanks.
2016-12-06 13:14 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:
On Tue, Dec 6, 2016 at 5:25 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
I am not sure when the problem started because I was on vacation. I only have the logs from the last try.
The logs you attached did not include relevant time intervals. Are you sure you do not have logs that contain the snippet you included in-line?
Anyway, you can always try again, and if it still fails, attach the new logs.
Best,
Many thanks.
Marcelo Leandro
-- Didi
-- Didi

Hello,
I tried this and the migration now succeeds, and no downtime is happening.
I used custom migration downtime = 5000.
Logs attached.
Migration started = 8:33 AM
Migration completed = 8:37 AM
Many thanks.
2016-12-07 5:30 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:
On Tue, Dec 6, 2016 at 8:48 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Sorry, this is the correct src-vdsm.log.
Perhaps try to edit the VM, choose Host tab, and tweak migration options. You can set policy to Legacy, and a high-enough "custom migration downtime".
See also e.g.:
http://lists.ovirt.org/pipermail/users/2016-April/039313.html
Best,
Thanks
2016-12-06 15:44 GMT-03:00 Marcelo Leandro <marceloltmm@gmail.com>:
Hello, Logs attached. Started migration = 3:03 PM. Failed migration = 3:12 PM. Thanks.
2016-12-06 13:14 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:
On Tue, Dec 6, 2016 at 5:25 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
I am not sure when the problem started because I was on vacation. I only have the logs from the last try.
The logs you attached did not include relevant time intervals. Are you sure you do not have logs that contain the snippet you included in-line?
Anyway, you can always try again, and if it still fails, attach the new logs.
Best,
Many thanks.
Marcelo Leandro
-- Didi
-- Didi

On Wed, Dec 7, 2016 at 1:53 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Hello, I tried this and the migration now succeeds, and no downtime is happening. I used custom migration downtime = 5000. Logs attached.
Migration Start = 8:33 AM Migration Completed = 8:37 AM
Glad it worked, thanks for the report!
Many thanks.
2016-12-07 5:30 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:
On Tue, Dec 6, 2016 at 8:48 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Sorry, this is the correct src-vdsm.log.
Perhaps try to edit the VM, choose Host tab, and tweak migration options. You can set policy to Legacy, and a high-enough "custom migration downtime".
See also e.g.:
http://lists.ovirt.org/pipermail/users/2016-April/039313.html
Best,
Thanks
2016-12-06 15:44 GMT-03:00 Marcelo Leandro <marceloltmm@gmail.com>:
Hello, Logs attached. Started migration = 3:03 PM. Failed migration = 3:12 PM. Thanks.
2016-12-06 13:14 GMT-03:00 Yedidyah Bar David <didi@redhat.com>:
On Tue, Dec 6, 2016 at 5:25 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
I am not sure when the problem started because I was on vacation. I only have the logs from the last try.
The logs you attached did not include relevant time intervals. Are you sure you do not have logs that contain the snippet you included in-line?
Anyway, you can always try again, and if it still fails, attach the new logs.
Best,
Many thanks.
Marcelo Leandro
-- Didi
-- Didi
-- Didi
Participants (2): Marcelo Leandro, Yedidyah Bar David