LiveStorageMigration failed


It should work. What are the engine and vdsm versions? Can you add vdsm logs as well?

On Thu, Jul 18, 2019 at 11:16 AM Christoph Köhler <koehler@luis.uni-hannover.de> wrote:
Hello,
I am trying to migrate a disk of a running VM from Gluster 3.12.15 to Gluster 3.12.15, but it fails. libgfapi is set to true via engine-config.

Taking a snapshot first works. Then the engine log shows:
2019-07-18 09:29:13,932+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed in 'VmReplicateDiskStartVDS' method
2019-07-18 09:29:13,936+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovvirt07 command VmReplicateDiskStartVDS failed: Drive replication error
2019-07-18 09:29:13,936+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand' return value 'StatusOnlyReturn [status=Status [code=55, message=Drive replication error]]'
2019-07-18 09:29:13,936+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] HostName = ovvirt07
2019-07-18 09:29:13,937+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'VmReplicateDiskStartVDSCommand(HostName = ovvirt07, VmReplicateDiskParameters:{hostId='3a7bf85c-e92d-4559-908e-5eed2f5608d4', vmId='3b79d0c0-47e9-47c3-8511-980a8cfe147c', storagePoolId='00000001-0001-0001-0001-000000000311', srcStorageDomainId='e54d835a-d8a5-44ae-8e17-fcba1c54e46f', targetStorageDomainId='4dabb6d6-4be5-458c-811d-6d5e87699640', imageGroupId='d2964ff9-10f7-4b92-8327-d68f3cfd5b50', imageId='62656632-8984-4b7e-8be1-fd2547ca0f98'})' execution failed: VDSGenericException: VDSErrorException: Failed to VmReplicateDiskStartVDS, error = Drive replication error, code = 55
2019-07-18 09:29:13,937+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] FINISH, VmReplicateDiskStartVDSCommand, return: , log id: 5b2afb0b
2019-07-18 09:29:13,937+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed VmReplicateDiskStart (Disk 'd2964ff9-10f7-4b92-8327-d68f3cfd5b50' , VM '3b79d0c0-47e9-47c3-8511-980a8cfe147c')
2019-07-18 09:29:13,938+02 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'LiveMigrateDisk' id: '8174c74c-8ab0-49fa-abfc-44d8b7c691e0' with children [03672b60-443b-47ba-834c-ac306d7129d0, 562522fc-6691-47fe-93bf-ef2c45e85676] failed when attempting to perform the next operation, marking as 'ACTIVE'
2019-07-18 09:29:13,938+02 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] EngineException: Drive replication error (Failed with error replicaErr and code 55): org.ovirt.engine.core.common.errors.EngineException: EngineException: Drive replication error (Failed with error replicaErr and code 55) at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.replicateDiskStart(LiveMigrateDiskCommand.java:526)
[bll.jar:] at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.performNextOperation(LiveMigrateDiskCommand.java:233)
[bll.jar:] at org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
[bll.jar:] at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:77)
[bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
[bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
[bll.jar:] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_212] at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_212] at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
[javax.enterprise.concurrent-1.0.jar:] at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
[javax.enterprise.concurrent-1.0.jar:] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[rt.jar:1.8.0_212] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[rt.jar:1.8.0_212] at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_212] at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
[javax.enterprise.concurrent-1.0.jar:] at
org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
2019-07-18 09:29:13,938+02 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'LiveMigrateDisk' id: '8174c74c-8ab0-49fa-abfc-44d8b7c691e0' child commands '[03672b60-443b-47ba-834c-ac306d7129d0, 562522fc-6691-47fe-93bf-ef2c45e85676]' executions were completed, status 'FAILED'
2019-07-18 09:29:15,019+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-1) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Ending command 'org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand' with failure.
2019-07-18 09:29:15,019+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-1) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed during live storage migration of disk 'd2964ff9-10f7-4b92-8327-d68f3cfd5b50' of vm '3b79d0c0-47e9-47c3-8511-980a8cfe147c', attempting to end replication before deleting the target disk
//
Live storage migration on Gluster - should that work at all? Has anyone tried it?
Greetings!
Christoph Köhler
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WSF45K47AJNSAJ...
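A practical note for collecting the requested vdsm logs: the bracketed correlation id in the engine log lines above ([c957e011-37e0-43aa-abe7-9bb633c38c5f]) reappears in vdsm.log as flow_id=<same id>, so matching lines can be pulled from both logs. A minimal sketch of that join, using abbreviated sample lines (the helper code is illustrative, not an oVirt tool):

```python
import re

# Abbreviated samples of the two log formats quoted above.
engine_line = (
    "2019-07-18 09:29:13,932+02 ERROR "
    "[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] "
    "(EE-ManagedThreadFactory-engineScheduled-Thread-84) "
    "[c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed in 'VmReplicateDiskStartVDS' method"
)
vdsm_line = (
    "2019-07-18 09:29:09,792+0200 ERROR (jsonrpc/2) [virt.vm] "
    "flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f Unable to start replication for sda"
)

# Engine logs carry the id in square brackets; vdsm logs carry it as flow_id=.
uuid_re = r"\[([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})\]"
correlation_id = re.search(uuid_re, engine_line).group(1)
vdsm_matches = [l for l in [vdsm_line] if "flow_id=" + correlation_id in l]
print(correlation_id)  # c957e011-37e0-43aa-abe7-9bb633c38c5f
```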

The engine runs on version 4.3.4. The hypervisors run on 4.2.8 (vdsm 4.20.46-1.el7) with libvirt 4.5.0-10.el7_6.7.

From vdsm.log:

2019-07-18 09:29:09,744+0200 INFO (jsonrpc/2) [storage.StorageDomain] Creating domain run directory u'/var/run/vdsm/storage/4dabb6d6-4be5-458c-811d-6d5e87699640' (fileSD:577)
2019-07-18 09:29:09,744+0200 INFO (jsonrpc/2) [storage.fileUtils] Creating directory: /var/run/vdsm/storage/4dabb6d6-4be5-458c-811d-6d5e87699640 mode: None (fileUtils:197)
2019-07-18 09:29:09,744+0200 INFO (jsonrpc/2) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50 to /var/run/vdsm/storage/4dabb6d6-4be5-458c-811d-6d5e87699640/d2964ff9-10f7-4b92-8327-d68f3cfd5b50 (fileSD:580)
2019-07-18 09:29:09,789+0200 INFO (jsonrpc/2) [vdsm.api] FINISH prepareImage return={'info': {'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98', 'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.11.20'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv01'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv02'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv03'}], 'protocol': 'gluster'}, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98', 'imgVolumesInfo': [{'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98', 'volumeID': u'62656632-8984-4b7e-8be1-fd2547ca0f98', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98.lease', 'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}, {'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a', 'volumeID': u'43dbb053-c5fe-45bf-9464-acf77546b96a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a.lease', 'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}]} from=::ffff:10.4.8.242,45784, flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f, task_id=b825b7c7-ddae-441f-b9c8-2bd50ec144b5 (api:52)
2019-07-18 09:29:09,790+0200 INFO (jsonrpc/2) [vds] prepared volume path: gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98 (clientIF:497)
2019-07-18 09:29:09,791+0200 INFO (jsonrpc/2) [vdsm.api] START teardownImage(sdUUID=u'4dabb6d6-4be5-458c-811d-6d5e87699640', spUUID=u'00000001-0001-0001-0001-000000000311', imgUUID=u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50', volUUID=None) from=::ffff:10.4.8.242,45784, flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f, task_id=c13f7ee9-8d01-4b73-8200-3052b4953385 (api:46)
2019-07-18 09:29:09,792+0200 INFO (jsonrpc/2) [storage.StorageDomain] Removing image rundir link u'/var/run/vdsm/storage/4dabb6d6-4be5-458c-811d-6d5e87699640/d2964ff9-10f7-4b92-8327-d68f3cfd5b50' (fileSD:600)
2019-07-18 09:29:09,792+0200 INFO (jsonrpc/2) [vdsm.api] FINISH teardownImage return=None from=::ffff:10.4.8.242,45784, flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f, task_id=c13f7ee9-8d01-4b73-8200-3052b4953385 (api:52)
2019-07-18 09:29:09,792+0200 ERROR (jsonrpc/2) [virt.vm] (vmId='3b79d0c0-47e9-47c3-8511-980a8cfe147c') Unable to start replication for sda to {u'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'volumeInfo': {'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98', 'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.11.20'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv01'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv02'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv03'}], 'protocol': 'gluster'}, 'format': 'cow', u'poolID': u'00000001-0001-0001-0001-000000000311', u'device': 'disk', 'protocol': 'gluster', 'propagateErrors': 'off', u'diskType': u'network', 'cache': 'none', u'volumeID': u'62656632-8984-4b7e-8be1-fd2547ca0f98', u'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.11.20'}], 'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98', 'volumeChain': [{'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98', 'volumeID': u'62656632-8984-4b7e-8be1-fd2547ca0f98', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98.lease', 'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}, {'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a', 'volumeID': u'43dbb053-c5fe-45bf-9464-acf77546b96a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a.lease', 'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}]} (vm:4710)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704, in diskReplicateStart
    self._startDriveReplication(drive)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843, in _startDriveReplication
    self._dom.blockCopy(drive.name, destxml, flags=flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 729, in blockCopy
    if ret == -1: raise libvirtError('virDomainBlockCopy() failed', dom=self)
libvirtError: argument unsupported: non-file destination not supported yet
2019-07-18 09:29:09,796+0200 INFO (jsonrpc/2) [api.virt] FINISH diskReplicateStart return={'status': {'message': 'Drive replication error', 'code': 55}} from=::ffff:10.4.8.242,45784, flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f, vmId=3b79d0c0-47e9-47c3-8511-980a8cfe147c (api:52)

On 18.07.19 10:42, Benny Zlotnik wrote:
It should work. What are the engine and vdsm versions? Can you add vdsm logs as well?
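The libvirtError at the bottom of the traceback above is the actual failure: vdsm's _startDriveReplication() builds a destination disk XML and calls virDomainBlockCopy(), and this libvirt version rejects any destination that is not a plain file, while a libgfapi disk is described as type='network' with protocol='gluster' (matching the 'diskType': u'network' in the replication payload). A rough, self-contained model of that restriction (the XML snippets and the check function are illustrative sketches, not vdsm's or libvirt's actual code):

```python
import xml.etree.ElementTree as ET

def block_copy_dest_check(dest_xml):
    """Toy model of the libvirt-side check: this virDomainBlockCopy()
    build only accepts <disk type='file'> destinations."""
    disk = ET.fromstring(dest_xml)
    if disk.get("type") != "file":
        # libvirt reports: "argument unsupported: non-file destination
        # not supported yet"
        raise ValueError("non-file destination not supported yet")

# With libgfapi enabled, the replication target is a network disk:
gfapi_dest = """<disk type='network' device='disk'>
  <source protocol='gluster' name='gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98'>
    <host name='192.168.11.20' port='0' transport='tcp'/>
  </source>
</disk>"""

# Via the fuse mount, the same volume is an ordinary file path:
fuse_dest = """<disk type='file' device='disk'>
  <source file='/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98'/>
</disk>"""

block_copy_dest_check(fuse_dest)       # accepted
try:
    block_copy_dest_check(gfapi_dest)  # rejected, as in the vdsm traceback
except ValueError as err:
    print(err)  # non-file destination not supported yet
```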

Looks like LSM with libgfapi enabled can't work at the moment [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1481688#c38

On Thu, Jul 18, 2019 at 12:05 PM Christoph Köhler <koehler@luis.uni-hannover.de> wrote:
The engine runs on version 4.3.4
The hypervisors run on 4.2.8 (vdsm 4.20.46-1.el7) with libvirt 4.5.0-10.el7_6.7
By vdsm.log :
2019-07-18 09:29:09,744+0200 INFO (jsonrpc/2) [storage.StorageDomain] Creating domain run directory u'/var/run/vdsm/storage/4dabb6d6-4be5-458c-811d-6d5e87699640' (fileSD:577) 2019-07-18 09:29:09,744+0200 INFO (jsonrpc/2) [storage.fileUtils] Creating directory: /var/run/vdsm/storage/4dabb6d6-4be5-458c-811d-6d5e87699640 mode: None (fileUtils:197)
2019-07-18 09:29:09,744+0200 INFO (jsonrpc/2) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50
to /var/run/vdsm/storage/4dabb6d6-4be5-458 c-811d-6d5e87699640/d2964ff9-10f7-4b92-8327-d68f3cfd5b50 (fileSD:580)
2019-07-18 09:29:09,789+0200 INFO (jsonrpc/2) [vdsm.api] FINISH prepareImage return={'info': {'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98',
'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.11.20'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv01'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv02'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv03'}], 'protocol': 'gluster'}, 'path': u'/rhev/data-center/mn t/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98',
'imgVolumesInfo': [{'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path': u'gluvol4/4dabb6d 6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98',
'volumeID': u'62656632-8984-4b7e-8be1-fd2547ca0f98', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.11.20: _gluvol4/4dabb6d6-4be5-458c-811d-6d5e8769964 0/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98.lease',
'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}, {'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/ images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a',
'volumeID': u'43dbb053-c5fe-45bf-9464-acf77546b96a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.11.20: _gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-83 27-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a.lease', 'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}]} from=::ffff:10.4.8.242,45784, flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f, task_id=b825b7c7-ddae-441f-b9c8-2bd50ec144b5 (api:52) 2019-07-18 09:29:09,790+0200 INFO (jsonrpc/2) [vds] prepared volume path: gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98
(clientIF:497) 2019-07-18 09:29:09,791+0200 INFO (jsonrpc/2) [vdsm.api] START teardownImage(sdUUID=u'4dabb6d6-4be5-458c-811d-6d5e87699640', spUUID=u'00000001-0001-0001-0001-000000000311', imgUUID=u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50', volUUID=None) from=::ffff:10.4.8.242,45784, flow _id=c957e011-37e0-43aa-abe7-9bb633c38c5f, task_id=c13f7ee9-8d01-4b73-8200-3052b4953385 (api:46)
2019-07-18 09:29:09,792+0200 INFO (jsonrpc/2) [storage.StorageDomain] Removing image rundir link u'/var/run/vdsm/storage/4dabb6d6-4be5-458c-811d-6d5e87699640/d2964ff9-10f7-4b92-8327-d68f3cfd5b50'
(fileSD:600)
2019-07-18 09:29:09,792+0200 INFO (jsonrpc/2) [vdsm.api] FINISH teardownImage return=None from=::ffff:10.4.8.242,45784, flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f, task_id=c13f7ee9-8d01-4b73-8200-3052b4953385 (api:52)
2019-07-18 09:29:09,792+0200 ERROR (jsonrpc/2) [virt.vm] (vmId='3b79d0c0-47e9-47c3-8511-980a8cfe147c') Unable to start replication for sda to {u'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'volumeInfo': {'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98',
'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.11.20'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv01'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv02'}, {'port': '0', 'transport': 'tcp', 'name': 'glusrv03'}], 'protocol': 'gluster'}, 'format': 'cow', u'poolID': u'00000001-0001-0001-0001-000000000311', u'device': 'disk', 'protocol': 'gluster', 'propagateErrors': 'off', u'diskType': u'network', 'cache': 'none', u'volumeID': u'62656632-8984-4b7e-8be1-fd2547ca0f98', u'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.11.20'}], 'path':
u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98', 'volumeChain': [{'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98',
'volumeID': u'62656632-8984-4b7e-8be1-fd2547ca0f98', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98.lease',
'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}, {'domainID': u'4dabb6d6-4be5-458c-811d-6d5e87699640', 'leaseOffset': 0, 'path': u'gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a',
'volumeID': u'43dbb053-c5fe-45bf-9464-acf77546b96a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.11.20:_gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/d2964ff9-10f7-4b92-8327-d68f3cfd5b50/43dbb053-c5fe-45bf-9464-acf77546b96a.lease',
'imageID': u'd2964ff9-10f7-4b92-8327-d68f3cfd5b50'}]} (vm:4710)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704, in diskReplicateStart
    self._startDriveReplication(drive)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843, in _startDriveReplication
    self._dom.blockCopy(drive.name, destxml, flags=flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 729, in blockCopy
    if ret == -1: raise libvirtError ('virDomainBlockCopy() failed', dom=self)
libvirtError: argument unsupported: non-file destination not supported yet
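The traceback pins the failure to libvirt's virDomainBlockCopy(): with libgfapi enabled, the replication destination is described as a network (gluster) disk, and this libvirt version only accepts a file destination for block copy. Below is a rough, illustrative sketch of the two destination-XML shapes involved; it is not VDSM's actual code, and the element layout is a simplified assumption based on the dict in the log above (disk type/protocol/hosts versus a plain file path):

```python
# Illustrative sketch only -- NOT VDSM's real XML generation. A
# type='network' (gfapi) destination is what older libvirt rejects with
# "argument unsupported: non-file destination not supported yet"; a
# type='file' destination (the FUSE-mount path) is the supported case.
import xml.etree.ElementTree as ET

def gfapi_dest_xml(volume, image_path, host):
    # gluster/gfapi destination: <disk type='network'> with protocol='gluster'
    disk = ET.Element("disk", type="network", device="disk")
    source = ET.SubElement(disk, "source", protocol="gluster",
                           name="%s/%s" % (volume, image_path))
    ET.SubElement(source, "host", name=host, port="0", transport="tcp")
    return ET.tostring(disk, encoding="unicode")

def file_dest_xml(mount_path):
    # file destination: a plain path on the FUSE mount
    disk = ET.Element("disk", type="file", device="disk")
    ET.SubElement(disk, "source", file=mount_path)
    return ET.tostring(disk, encoding="unicode")

print(gfapi_dest_xml(
    "gluvol4",
    "4dabb6d6-4be5-458c-811d-6d5e87699640/images/"
    "d2964ff9-10f7-4b92-8327-d68f3cfd5b50/"
    "62656632-8984-4b7e-8be1-fd2547ca0f98",
    "192.168.11.20"))
```

When the first shape is handed to blockCopy(), libvirt of this vintage raises the "non-file destination" error seen above; the second shape is what a non-gfapi (FUSE) setup produces.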
2019-07-18 09:29:09,796+0200 INFO (jsonrpc/2) [api.virt] FINISH diskReplicateStart return={'status': {'message': 'Drive replication error', 'code': 55}} from=::ffff:10.4.8.242,45784, flow_id=c957e011-37e0-43aa-abe7-9bb633c38c5f, vmId=3b79d0c0-47e9-47c3-8511-980a8cfe147c (api:52)
On 18.07.19 10:42, Benny Zlotnik wrote:
It should work. What are the engine and vdsm versions? Can you add the vdsm logs as well?
On Thu, Jul 18, 2019 at 11:16 AM Christoph Köhler <koehler@luis.uni-hannover.de> wrote:
Hello,
I am trying to migrate a disk of a running VM from gluster 3.12.15 to gluster 3.12.15, but it fails. libGfApi is set to true via engine-config.
Taking a snapshot first works. Then in the engine log:
2019-07-18 09:29:13,932+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed in 'VmReplicateDiskStartVDS' method
2019-07-18 09:29:13,936+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovvirt07 command VmReplicateDiskStartVDS failed: Drive replication error
2019-07-18 09:29:13,936+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand' return value 'StatusOnlyReturn [status=Status [code=55, message=Drive replication error]]'
2019-07-18 09:29:13,936+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] HostName = ovvirt07
2019-07-18 09:29:13,937+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'VmReplicateDiskStartVDSCommand(HostName = ovvirt07, VmReplicateDiskParameters:{hostId='3a7bf85c-e92d-4559-908e-5eed2f5608d4', vmId='3b79d0c0-47e9-47c3-8511-980a8cfe147c', storagePoolId='00000001-0001-0001-0001-000000000311', srcStorageDomainId='e54d835a-d8a5-44ae-8e17-fcba1c54e46f', targetStorageDomainId='4dabb6d6-4be5-458c-811d-6d5e87699640', imageGroupId='d2964ff9-10f7-4b92-8327-d68f3cfd5b50', imageId='62656632-8984-4b7e-8be1-fd2547ca0f98'})' execution failed: VDSGenericException: VDSErrorException: Failed to VmReplicateDiskStartVDS, error = Drive replication error, code = 55
2019-07-18 09:29:13,937+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] FINISH, VmReplicateDiskStartVDSCommand, return: , log id: 5b2afb0b
2019-07-18 09:29:13,937+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed VmReplicateDiskStart (Disk 'd2964ff9-10f7-4b92-8327-d68f3cfd5b50' , VM '3b79d0c0-47e9-47c3-8511-980a8cfe147c')
2019-07-18 09:29:13,938+02 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'LiveMigrateDisk' id: '8174c74c-8ab0-49fa-abfc-44d8b7c691e0' with children [03672b60-443b-47ba-834c-ac306d7129d0, 562522fc-6691-47fe-93bf-ef2c45e85676] failed when attempting to perform the next operation, marking as 'ACTIVE'
2019-07-18 09:29:13,938+02 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] EngineException: Drive replication error (Failed with error replicaErr and code 55): org.ovirt.engine.core.common.errors.EngineException: EngineException: Drive replication error (Failed with error replicaErr and code 55)
    at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.replicateDiskStart(LiveMigrateDiskCommand.java:526) [bll.jar:]
    at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.performNextOperation(LiveMigrateDiskCommand.java:233) [bll.jar:]
    at org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32) [bll.jar:]
    at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:77) [bll.jar:]
    at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175) [bll.jar:]
    at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_212]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_212]
    at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:]
    at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_212]
    at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_212]
    at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]
    at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
2019-07-18 09:29:13,938+02 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-84) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Command 'LiveMigrateDisk' id: '8174c74c-8ab0-49fa-abfc-44d8b7c691e0' child commands '[03672b60-443b-47ba-834c-ac306d7129d0, 562522fc-6691-47fe-93bf-ef2c45e85676]' executions were completed, status 'FAILED'
2019-07-18 09:29:15,019+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-1) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Ending command 'org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand' with failure.
2019-07-18 09:29:15,019+02 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-1) [c957e011-37e0-43aa-abe7-9bb633c38c5f] Failed during live storage migration of disk 'd2964ff9-10f7-4b92-8327-d68f3cfd5b50' of vm '3b79d0c0-47e9-47c3-8511-980a8cfe147c', attempting to end replication before deleting the target disk
Live storage migration on gluster - should that work at all? Has someone tried it?
Greetings!
Christoph Köhler
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WSF45K47AJNSAJ...

On Thu, Jul 18, 2019, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

According to the GlusterFS Storage Domain page in the oVirt documentation, libgfapi support is disabled by default due to an incompatibility with live storage migration: the VM disk cannot be migrated to the GlusterFS storage domain. I guess someone from the devs has to confirm that this is still valid.

Best Regards,
Strahil Nikolov

On Thu, Jul 18, 2019 at 9:02 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
According to this one (GlusterFS Storage Domain — oVirt) libgfapi support is disabled by default due to incompatibility with Live Storage Migration. VM can not be migrated to the GlusterFS storage domain. I guess someone from the devs have to confirm that this is still valid.
Yes, this is waiting for a libvirt fix - https://bugzilla.redhat.com/show_bug.cgi?id=760547
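Until that libvirt fix lands, the workable configuration is the non-gfapi one: disks are accessed through the FUSE mount, where the replication destination is a plain file path that blockCopy accepts. The lease paths in the vdsm log above already show how the two path forms relate; the tiny helper below is purely hypothetical (not a VDSM API), with the server address and mount root taken from the log:

```python
# Hypothetical helper (not part of VDSM): maps a gfapi-style volume path,
# as seen in the log ('gluvol4/<sd_uuid>/images/<img_uuid>/<vol_uuid>'),
# to the FUSE-mount form under /rhev/data-center that the lease paths use.
# The server address and mount root are assumptions taken from the log.
def gfapi_to_fuse(gfapi_path, server="192.168.11.20"):
    volume, rest = gfapi_path.split("/", 1)
    return "/rhev/data-center/mnt/glusterSD/%s:_%s/%s" % (server, volume, rest)

path = ("gluvol4/4dabb6d6-4be5-458c-811d-6d5e87699640/images/"
        "d2964ff9-10f7-4b92-8327-d68f3cfd5b50/62656632-8984-4b7e-8be1-fd2547ca0f98")
print(gfapi_to_fuse(path))
```

With libGfApiSupported off, the destination path in the blockCopy XML is this FUSE form, which is a file destination and works with current libvirt.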
participants (4)
- Benny Zlotnik
- Christoph Köhler
- Sahina Bose
- Strahil Nikolov