[ovirt-users] live migrating vm from 4.1.7 to 4.1.9 failed because of vmx flag

Gianluca Cecchi gianluca.cecchi at gmail.com
Fri Jan 26 11:56:15 UTC 2018


Hello,
I'm trying to update an environment from 4.1.7 to 4.1.9.
I have already migrated the engine (which runs separately) and one host.
This host is now running a pair of VMs that were powered off and then
powered on via the "run once" feature.

Now I'm trying to evacuate the VMs from the other hosts and bring them all to 4.1.9.

But with the couple of VMs I have tried so far, I always get error events
of this type:

Jan 26, 2018 12:40:00 PM Migration failed (VM: dbatest3, Source: ov301,
Destination: ov200).

In engine.log:

2018-01-26 12:39:48,267+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-8)
[72ccfbb1-735a-4411-a990-cdb2c081e391] Lock Acquired to object
'EngineLock:{exclusiveLocks='[4f6ecae2-7d71-47c9-af23-2b3e49bc08fc=VM]',
sharedLocks=''}'
2018-01-26 12:39:48,350+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
Running command: MigrateVmToServerCommand internal: false. Entities
affected :  ID: 4f6ecae2-7d71-47c9-af23-2b3e49bc08fc Type: VMAction group
MIGRATE_VM with role type USER
2018-01-26 12:39:48,424+01 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06',
vmId='4f6ecae2-7d71-47c9-af23-2b3e49bc08fc', srcHost='ov301.mydomain',
dstVdsId='d16e723c-b44c-4c1c-be76-c67911e47ccd',
dstHost='ov200.mydomain:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='500',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]'}), log id: 7013a234
2018-01-26 12:39:48,425+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
START, MigrateBrokerVDSCommand(HostName = ov301,
MigrateVDSCommandParameters:{runAsync='true',
hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06',
vmId='4f6ecae2-7d71-47c9-af23-2b3e49bc08fc', srcHost='ov301.mydomain',
dstVdsId='d16e723c-b44c-4c1c-be76-c67911e47ccd',
dstHost='ov200.mydomain:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='500',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]'}), log id: 25d9c017
2018-01-26 12:39:49,620+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
FINISH, MigrateBrokerVDSCommand, log id: 25d9c017
2018-01-26 12:39:49,622+01 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 7013a234
2018-01-26 12:39:49,627+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
EVENT_ID: VM_MIGRATION_START(62), Correlation ID:
72ccfbb1-735a-4411-a990-cdb2c081e391, Job ID:
b2f39d2c-87f1-480c-b4c7-b8ab09d09318, Call Stack: null, Custom ID: null,
Custom Event ID: -1, Message: Migration started (VM: dbatest3, Source:
ov301, Destination: ov200, User: admin at internal-authz).
2018-01-26 12:39:50,782+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
(DefaultQuartzScheduler1) [f2fc61e] Fetched 3 VMs from VDS
'd16e723c-b44c-4c1c-be76-c67911e47ccd'
2018-01-26 12:39:50,783+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler1) [f2fc61e] VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'(dbatest3) was unexpectedly detected
as 'MigratingTo' on VDS 'd16e723c-b44c-4c1c-be76-c67911e47ccd'(ov200)
(expected on '8ef1ce6f-4e38-486c-b3a4-58235f1f1d06')
2018-01-26 12:39:50,784+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler1) [f2fc61e] VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc' is migrating to VDS
'd16e723c-b44c-4c1c-be76-c67911e47ccd'(ov200) ignoring it in the refresh
until migration is done
2018-01-26 12:39:51,968+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-15) [] VM '4f6ecae2-7d71-47c9-af23-2b3e49bc08fc' was
reported as Down on VDS 'd16e723c-b44c-4c1c-be76-c67911e47ccd'(ov200)
2018-01-26 12:39:51,969+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-15) [] START, DestroyVDSCommand(HostName = ov200,
DestroyVmVDSCommandParameters:{runAsync='true',
hostId='d16e723c-b44c-4c1c-be76-c67911e47ccd',
vmId='4f6ecae2-7d71-47c9-af23-2b3e49bc08fc', force='false',
secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log
id: 3b49afe3
2018-01-26 12:39:52,236+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-15) [] Failed to destroy VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc' because VM does not exist, ignoring
2018-01-26 12:39:52,237+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-15) [] FINISH, DestroyVDSCommand, log id: 3b49afe3
2018-01-26 12:39:52,237+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-15) [] VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'(dbatest3) was unexpectedly detected
as 'Down' on VDS 'd16e723c-b44c-4c1c-be76-c67911e47ccd'(ov200) (expected on
'8ef1ce6f-4e38-486c-b3a4-58235f1f1d06')
2018-01-26 12:40:00,237+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler3) [5c4b079b] VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'(dbatest3) moved from 'MigratingFrom'
--> 'Up'
2018-01-26 12:40:00,237+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler3) [5c4b079b] Adding VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'(dbatest3) to re-run list
2018-01-26 12:40:00,245+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(DefaultQuartzScheduler3) [5c4b079b] Rerun VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'. Called from VDS 'ov301'
2018-01-26 12:40:00,287+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-15) [5c4b079b] START,
MigrateStatusVDSCommand(HostName = ov301,
MigrateStatusVDSCommandParameters:{runAsync='true',
hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06',
vmId='4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'}), log id: 65677081
2018-01-26 12:40:00,818+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-15) [5c4b079b] FINISH,
MigrateStatusVDSCommand, log id: 65677081
2018-01-26 12:40:00,823+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-15) [5c4b079b] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Correlation ID:
72ccfbb1-735a-4411-a990-cdb2c081e391, Job ID:
b2f39d2c-87f1-480c-b4c7-b8ab09d09318, Call Stack: null, Custom ID: null,
Custom Event ID: -1, Message: Migration failed  (VM: dbatest3, Source:
ov301, Destination: ov200).
2018-01-26 12:40:00,825+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-6-thread-15) [5c4b079b] Lock freed to object
'EngineLock:{exclusiveLocks='[4f6ecae2-7d71-47c9-af23-2b3e49bc08fc=VM]',
sharedLocks=''}'
2018-01-26 12:40:05,796+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
(DefaultQuartzScheduler1) [27a52eca] Fetched 2 VMs from VDS
'd16e723c-b44c-4c1c-be76-c67911e47ccd'

I don't see errors in the vdsm logs on the target host, but I do see this
in /var/log/messages on the target host:

Jan 26 12:39:51 ov200 libvirtd: 2018-01-26 11:39:51.179+0000: 2588: error :
virCPUx86UpdateLive:2726  : operation failed: guest CPU doesn't match
specification: missing features: vmx
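
For reference, these are the generic checks one can run on the destination
host to see at which layer vmx is (not) available; nothing here is
oVirt-specific, and ov200 is simply my destination host:

[root at ov200 ~]# grep -wo vmx /proc/cpuinfo | sort -u
[root at ov200 ~]# cat /sys/module/kvm_intel/parameters/nested
[root at ov200 ~]# virsh -r capabilities | grep -i vmx

The first shows whether the CPU exposes the flag at all, the second whether
nested virtualization is enabled in the kvm_intel module, and the third
what libvirt itself advertises for the host CPU.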

Indeed, the dbatest3 VM has the "vmx=on" flag in its qemu-kvm command line
on the source host, even though I never configured it explicitly:

[root at ov301 ~]# ps -ef|grep dbatest3

qemu      1455     1  1  2017 ?        08:41:29 /usr/libexec/qemu-kvm -name
guest=dbatest3,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-27-dbatest3/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Westmere,vmx=on -m 32768 ...


A quite similar VM named dbatest4, running on the already upgraded host,
indeed doesn't have the flag:

[root at ov200 vdsm]# ps -ef|grep dbatest4
qemu     15827     1  3 10:47 ?        00:04:37 /usr/libexec/qemu-kvm -name
guest=dbatest4,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-dbatest4/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Westmere -m 32768
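
To see where the flag actually comes from, the <cpu> element of the two
running domains can be dumped with a read-only virsh session (read-only
connections work without SASL credentials even on a vdsm-managed libvirt);
I'm omitting the output here:

[root at ov301 ~]# virsh -r dumpxml dbatest3 | grep -A 5 '<cpu'
[root at ov200 vdsm]# virsh -r dumpxml dbatest4 | grep -A 5 '<cpu'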

The two VMs were created from the same template and with the same options,
if I remember correctly, and across the previous versions I could live
migrate them without problems.
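
Since the VM definitions should be identical, it may also help to rule out
a host-side difference by comparing the virtualization stack on the two
hosts (assuming both use the qemu-kvm-ev packages from the oVirt
repositories):

[root at ov301 ~]# rpm -q vdsm libvirt qemu-kvm-ev
[root at ov200 vdsm]# rpm -q vdsm libvirt qemu-kvm-ev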

What could have changed?
This would be a huge show stopper, because it would mean having to power
off all the VMs in order to upgrade.

Thanks in advance for any help,

Gianluca