[Users] oVirt 3.4 Templates break Foreman Provisioning/Adding host
by Matt .
Hi Guys,
On Foreman 1.4.2 there seem to be issues with oVirt 3.4 when adding a
host.
When you add a host you get the following error, because oVirt 3.4 now
requires template information in the request:
VM template.id|name required for add
Is there a workaround to let the VM creation proceed?
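For context, the same requirement shows up when a VM is added directly
through the oVirt 3.4 API: the request has to carry a template reference,
even if it is only the built-in Blank template. Below is a rough sketch
using the oVirt Python SDK; the engine URL, credentials and names are
placeholders rather than values from this setup:

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Placeholder engine URL and credentials.
api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)

# The engine rejects a VM-add request without a template reference
# ("VM template.id|name required for add"); pointing it at the
# built-in Blank template satisfies that check.
api.vms.add(params.VM(
    name='test-vm',
    cluster=api.clusters.get('Default'),
    template=api.templates.get('Blank'),
))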
Thanks!
Matt
10 years, 8 months
Re: [ovirt-users] help.. vm trapped in limbo aka can't acquire exclusive lock
by Arik Hadas
Hi Jeremiah,
Thanks for providing the relevant information from the engine log.
A fix was just merged to the 3.4 branch.
For more info about the bug and the solution see: https://bugzilla.redhat.com/1086280
Note that in the meantime you can unlock the VM by restarting the engine, as Roy mentioned.
Regards,
Arik
----- Original Message -----
>
>
> ----- Forwarded Message -----
> > From: "Jeremiah Jahn" <jeremiah(a)goodinassociates.com>
> > To: "Yair Zaslavsky" <yzaslavs(a)redhat.com>
> > Cc: users(a)ovirt.org
> > Sent: Thursday, April 10, 2014 5:48:24 AM
> > Subject: Re: [ovirt-users] help.. vm trapped in limbo aka can't acquire
> > exclusive lock
> >
> > Here's the start of the problem. If you really want the WHOLE log I
> > can do that too, but this is where I ask it to migrate and where it
> > fails. After that it just keeps repeating that it can't get the lock.
> >
> >
> >
> > 2014-04-09 14:18:48,050 INFO
> > [org.ovirt.engine.core.bll.MigrateVmCommand]
> > (DefaultQuartzScheduler_Worker-33) Lock freed to object EngineLock
> > [exclusiveLocks= key:
> > 0af0aaf6-85cf-4c67-9744-68a63601ab85 value: VM
> > , sharedLocks= ]
> > 2014-04-09 14:19:32,177 INFO
> > [org.ovirt.engine.core.bll.MigrateVmCommand] (ajp--127.0.0.1-8702-1)
> > [4350ffd] Lock Acquired to object EngineLock [exclusiveLocks= key:
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea value: VM
> > , sharedLocks= ]
> > 2014-04-09 14:19:32,199 INFO
> > [org.ovirt.engine.core.bll.MigrateVmCommand]
> > (org.ovirt.thread.pool-6-thread-16) [4350ffd] Running command:
> > MigrateVmCommand internal: false. Entities affected : ID:
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea Type: VM
> > 2014-04-09 14:19:32,203 INFO
> > [org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit]
> > (org.ovirt.thread.pool-6-thread-16) [4350ffd] Started HA reservation
> > scoring method
> > 2014-04-09 14:19:32,208 INFO
> > [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> > (org.ovirt.thread.pool-6-thread-16) [4350ffd] START,
> > MigrateVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstVdsId=ce6b5b29-778f-4812-97b5-950f3d3b6c71,
> > dstHost=10.10.4.34:54321, migrationMethod=ONLINE,
> > tunnelMigration=false, migrationDowntime=0), log id: 745cd0ba
> > 2014-04-09 14:19:32,209 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-16) [4350ffd]
> > VdsBroker::migrate::Entered
> > (vm_guid=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstHost=10.10.4.34:54321, method=online
> > 2014-04-09 14:19:32,210 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-16) [4350ffd] START,
> > MigrateBrokerVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstVdsId=ce6b5b29-778f-4812-97b5-950f3d3b6c71,
> > dstHost=10.10.4.34:54321, migrationMethod=ONLINE,
> > tunnelMigration=false, migrationDowntime=0), log id: 5de39b66
> > 2014-04-09 14:19:32,224 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-16) [4350ffd] FINISH,
> > MigrateBrokerVDSCommand, log id: 5de39b66
> > 2014-04-09 14:19:32,228 INFO
> > [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> > (org.ovirt.thread.pool-6-thread-16) [4350ffd] FINISH,
> > MigrateVDSCommand, return: MigratingFrom, log id: 745cd0ba
> > 2014-04-09 14:19:32,233 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-6-thread-16) [4350ffd] Correlation ID: 4350ffd,
> > Job ID: 433cbdce-90b3-41ed-bbfe-3a10ea2c6616, Call Stack: null, Custom
> > Event ID: -1, Message: Migration started (VM: web.judici, Source:
> > kvm01.virt, Destination: kvm04, User: admin).
> > 2014-04-09 14:19:33,465 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-62) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm04
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:19:34,268 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-74) VM web.judici
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea moved from MigratingFrom --> Up
> > 2014-04-09 14:19:34,269 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-74) Adding VM
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea to re-run list
> > 2014-04-09 14:19:34,269 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
> > (DefaultQuartzScheduler_Worker-74) START, FullListVdsCommand(HostName
> > = kvm01.virt, HostId = fa455012-b929-4805-a5c1-50e6c788039a,
> > vds=Host[kvm01.virt], vmIds=[b0108933-deb2-4fa0-ae74-e10cefbb0cea]),
> > log id: 6936226f
> > 2014-04-09 14:19:34,279 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
> > (DefaultQuartzScheduler_Worker-74) FINISH, FullListVdsCommand, return:
> > [{displaySecurePort=5905, kvmEnable=true, nicModel=rtl8139,pv,
> > keyboardLayout=en-us, displayIp=0, pauseCode=NOERR, nice=0,
> > pitReinjection=false, timeOffset=0, transparentHugePages=true,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, acpiEnable=true, custom={},
> > spiceSslCipherSuite=DEFAULT, memSize=8192, displayPort=5904, smp=4,
> > emulatedMachine=rhel6.5.0, status=Up, vmType=kvm,
> > memGuaranteedSize=1024, display=qxl, pid=30289, smartcardEnable=false,
> > smpCoresPerSocket=1,
> > spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,
> > maxVCpus=160, clientIp=10.10.4.242,
> > devices=[Ljava.lang.Object;@610edb5f, vmName=web.judici,
> > cpuType=Penryn}], log id: 6936226f
> > 2014-04-09 14:19:34,281 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-74) Received a console Device without
> > an address when processing VM b0108933-deb2-4fa0-ae74-e10cefbb0cea
> > devices, skipping device: {alias=console0, specParams={},
> > device=console, type=console,
> > deviceId=05fba965-133e-4810-a87c-1688be8beb5a}
> > 2014-04-09 14:19:34,294 ERROR
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-74) Rerun vm
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea. Called from vds kvm01.virt
> > 2014-04-09 14:19:34,299 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) START,
> > MigrateStatusVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea), log id: 16ec0461
> > 2014-04-09 14:19:34,346 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) Failed in MigrateStatusVDS method
> > 2014-04-09 14:19:34,346 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) Command
> > org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
> > return value
> > StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12,
> > mMessage=Fatal error during migration]]
> > 2014-04-09 14:19:34,347 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) HostName = kvm01.virt
> > 2014-04-09 14:19:34,348 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) Command
> > MigrateStatusVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea) execution failed.
> > Exception: VDSErrorException: VDSGenericException: VDSErrorException:
> > Failed to MigrateStatusVDS, error = Fatal error during migration, code
> > = 12
> > 2014-04-09 14:19:34,349 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) FINISH, MigrateStatusVDSCommand,
> > log id: 16ec0461
> > 2014-04-09 14:19:34,353 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-6-thread-46) Correlation ID: 4350ffd, Job ID:
> > 433cbdce-90b3-41ed-bbfe-3a10ea2c6616, Call Stack: null, Custom Event
> > ID: -1, Message: Migration failed due to Error: Fatal error during
> > migration. Trying to migrate to another Host (VM: web.judici, Source:
> > kvm01.virt, Destination: kvm04).
> > 2014-04-09 14:19:34,370 INFO
> > [org.ovirt.engine.core.bll.MigrateVmCommand]
> > (org.ovirt.thread.pool-6-thread-46) Running command: MigrateVmCommand
> > internal: false. Entities affected : ID:
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea Type: VM
> > 2014-04-09 14:19:34,374 INFO
> > [org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit]
> > (org.ovirt.thread.pool-6-thread-46) Started HA reservation scoring
> > method
> > 2014-04-09 14:19:34,378 INFO
> > [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) START, MigrateVDSCommand(HostName
> > = kvm01.virt, HostId = fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstVdsId=071d8ef9-971e-499b-975a-255ee3d3f895,
> > dstHost=10.10.4.35:54321, migrationMethod=ONLINE,
> > tunnelMigration=false, migrationDowntime=0), log id: 7ac1f1eb
> > 2014-04-09 14:19:34,379 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) VdsBroker::migrate::Entered
> > (vm_guid=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstHost=10.10.4.35:54321, method=online
> > 2014-04-09 14:19:34,380 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) START,
> > MigrateBrokerVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstVdsId=071d8ef9-971e-499b-975a-255ee3d3f895,
> > dstHost=10.10.4.35:54321, migrationMethod=ONLINE,
> > tunnelMigration=false, migrationDowntime=0), log id: 3c9d3817
> > 2014-04-09 14:19:34,431 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) FINISH, MigrateBrokerVDSCommand,
> > log id: 3c9d3817
> > 2014-04-09 14:19:34,434 INFO
> > [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> > (org.ovirt.thread.pool-6-thread-46) FINISH, MigrateVDSCommand, return:
> > MigratingFrom, log id: 7ac1f1eb
> > 2014-04-09 14:19:34,438 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-6-thread-46) Correlation ID: 4350ffd, Job ID:
> > 433cbdce-90b3-41ed-bbfe-3a10ea2c6616, Call Stack: null, Custom Event
> > ID: -1, Message: Migration started (VM: web.judici, Source:
> > kvm01.virt, Destination: kvm05.virt, User: admin).
> > 2014-04-09 14:19:37,582 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-92) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:19:40,617 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-91) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:19:43,640 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-100) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:19:46,680 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-8) [1450e72d] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:19:49,735 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-26) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:19:52,752 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-23) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:19:55,769 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-35) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:19:58,788 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-50) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:01,837 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-53) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:04,880 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-58) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:07,904 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-68) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:10,921 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-77) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:13,951 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-91) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:16,988 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-100) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:20,033 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-8) [1450e72d] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:23,052 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-26) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:26,070 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-36) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:29,090 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-45) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:32,135 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-50) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:35,180 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-51) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:38,196 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-71) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:41,215 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-73) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:44,234 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-81) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:47,275 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-97) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:50,316 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-98) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:53,332 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-12) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:56,348 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-18) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:20:59,363 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-36) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:02,404 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-29) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:05,439 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-47) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:08,455 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-60) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:11,476 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-58) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:14,501 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-76) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:17,549 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-77) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:20,598 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-91) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:23,617 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-100) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:26,642 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-6) [5c26eeb2] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:29,664 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-21) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:32,702 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-25) [e633f03] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:35,751 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-31) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:38,772 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-48) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:41,790 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-60) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:44,812 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-71) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:47,858 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-72) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:50,903 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-77) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:53,919 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-94) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:56,938 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-5) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:21:59,961 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-14) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:02,997 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-21) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:06,037 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-36) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:09,058 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-45) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:12,075 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-50) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:15,103 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-51) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm05.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:17,696 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-58) VM web.judici
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea moved from MigratingFrom --> Up
> > 2014-04-09 14:22:17,697 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-58) Adding VM
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea to re-run list
> > 2014-04-09 14:22:17,698 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
> > (DefaultQuartzScheduler_Worker-58) START, FullListVdsCommand(HostName
> > = kvm01.virt, HostId = fa455012-b929-4805-a5c1-50e6c788039a,
> > vds=Host[kvm01.virt], vmIds=[b0108933-deb2-4fa0-ae74-e10cefbb0cea]),
> > log id: 1688c794
> > 2014-04-09 14:22:17,706 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
> > (DefaultQuartzScheduler_Worker-58) FINISH, FullListVdsCommand, return:
> > [{displaySecurePort=5905, kvmEnable=true, nicModel=rtl8139,pv,
> > keyboardLayout=en-us, displayIp=0, pauseCode=NOERR, nice=0,
> > pitReinjection=false, timeOffset=0, transparentHugePages=true,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, acpiEnable=true, custom={},
> > spiceSslCipherSuite=DEFAULT, memSize=8192, displayPort=5904, smp=4,
> > emulatedMachine=rhel6.5.0, status=Up, vmType=kvm,
> > memGuaranteedSize=1024, display=qxl, pid=30289, smartcardEnable=false,
> > smpCoresPerSocket=1,
> > spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,
> > maxVCpus=160, clientIp=10.10.4.242,
> > devices=[Ljava.lang.Object;@6de6d11a, vmName=web.judici,
> > cpuType=Penryn}], log id: 1688c794
> > 2014-04-09 14:22:17,709 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-58) Received a console Device without
> > an address when processing VM b0108933-deb2-4fa0-ae74-e10cefbb0cea
> > devices, skipping device: {alias=console0, specParams={},
> > device=console, type=console,
> > deviceId=05fba965-133e-4810-a87c-1688be8beb5a}
> > 2014-04-09 14:22:17,728 ERROR
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-58) Rerun vm
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea. Called from vds kvm01.virt
> > 2014-04-09 14:22:17,735 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) START,
> > MigrateStatusVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea), log id: 15822aeb
> > 2014-04-09 14:22:17,781 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) Failed in MigrateStatusVDS method
> > 2014-04-09 14:22:17,782 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) Command
> > org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
> > return value
> > StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=47,
> > mMessage=Migration canceled]]
> > 2014-04-09 14:22:17,782 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) HostName = kvm01.virt
> > 2014-04-09 14:22:17,783 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) Command
> > MigrateStatusVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea) execution failed.
> > Exception: VDSErrorException: VDSGenericException: VDSErrorException:
> > Failed to MigrateStatusVDS, error = Migration canceled, code = 47
> > 2014-04-09 14:22:17,784 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) FINISH, MigrateStatusVDSCommand,
> > log id: 15822aeb
> > 2014-04-09 14:22:17,791 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-6-thread-29) Correlation ID: 4350ffd, Job ID:
> > 433cbdce-90b3-41ed-bbfe-3a10ea2c6616, Call Stack: null, Custom Event
> > ID: -1, Message: Migration failed due to Error: Migration not in
> > progress. Trying to migrate to another Host (VM: web.judici, Source:
> > kvm01.virt, Destination: kvm05.virt).
> > 2014-04-09 14:22:17,818 INFO
> > [org.ovirt.engine.core.bll.MigrateVmCommand]
> > (org.ovirt.thread.pool-6-thread-29) Running command: MigrateVmCommand
> > internal: false. Entities affected : ID:
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea Type: VM
> > 2014-04-09 14:22:17,824 INFO
> > [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) START, MigrateVDSCommand(HostName
> > = kvm01.virt, HostId = fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstVdsId=d5e2830d-e59c-49be-b6e8-0968dbf357aa,
> > dstHost=10.10.4.33:54321, migrationMethod=ONLINE,
> > tunnelMigration=false, migrationDowntime=0), log id: 7dc46595
> > 2014-04-09 14:22:17,825 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) VdsBroker::migrate::Entered
> > (vm_guid=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstHost=10.10.4.33:54321, method=online
> > 2014-04-09 14:22:17,826 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) START,
> > MigrateBrokerVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, srcHost=10.10.4.31,
> > dstVdsId=d5e2830d-e59c-49be-b6e8-0968dbf357aa,
> > dstHost=10.10.4.33:54321, migrationMethod=ONLINE,
> > tunnelMigration=false, migrationDowntime=0), log id: 25038825
> > 2014-04-09 14:22:17,848 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) FINISH, MigrateBrokerVDSCommand,
> > log id: 25038825
> > 2014-04-09 14:22:17,853 INFO
> > [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> > (org.ovirt.thread.pool-6-thread-29) FINISH, MigrateVDSCommand, return:
> > MigratingFrom, log id: 7dc46595
> > 2014-04-09 14:22:17,859 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-6-thread-29) Correlation ID: 4350ffd, Job ID:
> > 433cbdce-90b3-41ed-bbfe-3a10ea2c6616, Call Stack: null, Custom Event
> > ID: -1, Message: Migration started (VM: web.judici, Source:
> > kvm01.virt, Destination: kvm03.virt, User: admin).
> > 2014-04-09 14:22:19,226 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-67) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:22,275 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-79) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:25,291 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-83) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:28,314 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-1) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:31,330 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-6) [5c26eeb2] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:34,360 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-16) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:37,389 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-30) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:40,407 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-31) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:43,425 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-52) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:46,443 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-51) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:49,473 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-71) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:52,514 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-76) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:55,533 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-92) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:22:58,553 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-90) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:01,569 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-10) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:04,596 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-7) [7a463a98] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:07,626 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-16) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:10,648 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-30) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:13,665 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-41) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:16,693 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-47) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:19,724 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-51) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:22,758 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-65) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:25,778 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-79) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:28,795 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-95) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:31,814 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-87) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:34,848 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-1) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:37,884 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-19) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:40,910 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-22) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:43,930 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-25) [e633f03] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:46,949 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-45) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:49,982 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-48) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:53,048 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-64) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:56,076 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-66) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:23:59,093 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-72) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:02,112 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-93) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:05,140 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-87) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:08,210 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-3) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:11,240 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-6) [5c26eeb2] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:14,257 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-21) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:17,277 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-36) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:20,309 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-31) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:23,371 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-48) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:26,391 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-63) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:29,409 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-74) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:32,428 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-72) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:35,467 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-85) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:38,499 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-89) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:41,534 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-1) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:44,561 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-11) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:47,584 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-21) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:50,631 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-25) [e633f03] RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:53,668 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-37) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:56,683 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-55) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:24:59,702 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-57) RefreshVmList vm id
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea is migrating to vds kvm03.virt
> > ignoring it in the refresh until migration is done
> > 2014-04-09 14:25:00,974 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-59) VM web.judici
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea moved from MigratingFrom --> Up
> > 2014-04-09 14:25:00,975 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-59) Adding VM
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea to re-run list
> > 2014-04-09 14:25:00,976 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
> > (DefaultQuartzScheduler_Worker-59) START, FullListVdsCommand(HostName
> > = kvm01.virt, HostId = fa455012-b929-4805-a5c1-50e6c788039a,
> > vds=Host[kvm01.virt], vmIds=[b0108933-deb2-4fa0-ae74-e10cefbb0cea]),
> > log id: 7ef5612e
> > 2014-04-09 14:25:00,984 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
> > (DefaultQuartzScheduler_Worker-59) FINISH, FullListVdsCommand, return:
> > [{displaySecurePort=5905, kvmEnable=true, nicModel=rtl8139,pv,
> > keyboardLayout=en-us, displayIp=0, pauseCode=NOERR, nice=0,
> > pitReinjection=false, timeOffset=0, transparentHugePages=true,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea, acpiEnable=true, custom={},
> > spiceSslCipherSuite=DEFAULT, memSize=8192, displayPort=5904, smp=4,
> > emulatedMachine=rhel6.5.0, status=Up, vmType=kvm,
> > memGuaranteedSize=1024, display=qxl, pid=30289, smartcardEnable=false,
> > smpCoresPerSocket=1,
> > spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,
> > maxVCpus=160, clientIp=10.10.4.242,
> > devices=[Ljava.lang.Object;@1fbf5420, vmName=web.judici,
> > cpuType=Penryn}], log id: 7ef5612e
> > 2014-04-09 14:25:00,988 INFO
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-59) Received a console Device without
> > an address when processing VM b0108933-deb2-4fa0-ae74-e10cefbb0cea
> > devices, skipping device: {alias=console0, specParams={},
> > device=console, type=console,
> > deviceId=05fba965-133e-4810-a87c-1688be8beb5a}
> > 2014-04-09 14:25:01,030 ERROR
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-59) Rerun vm
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea. Called from vds kvm01.virt
> > 2014-04-09 14:25:01,038 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-37) START,
> > MigrateStatusVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea), log id: 47013f27
> > 2014-04-09 14:25:01,045 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-37) Failed in MigrateStatusVDS method
> > 2014-04-09 14:25:01,045 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-37) Command
> > org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
> > return value
> > StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=47,
> > mMessage=Migration canceled]]
> > 2014-04-09 14:25:01,046 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-37) HostName = kvm01.virt
> > 2014-04-09 14:25:01,046 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-37) Command
> > MigrateStatusVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a,
> > vmId=b0108933-deb2-4fa0-ae74-e10cefbb0cea) execution failed.
> > Exception: VDSErrorException: VDSGenericException: VDSErrorException:
> > Failed to MigrateStatusVDS, error = Migration canceled, code = 47
> > 2014-04-09 14:25:01,048 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-37) FINISH, MigrateStatusVDSCommand,
> > log id: 47013f27
> > 2014-04-09 14:25:01,055 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-6-thread-37) Correlation ID: 4350ffd, Job ID:
> > 433cbdce-90b3-41ed-bbfe-3a10ea2c6616, Call Stack: null, Custom Event
> > ID: -1, Message: Migration failed due to Error: Migration not in
> > progress. Trying to migrate to another Host (VM: web.judici, Source:
> > kvm01.virt, Destination: kvm03.virt).
> > 2014-04-09 14:25:01,061 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-6-thread-37) Correlation ID: 4350ffd, Job ID:
> > 433cbdce-90b3-41ed-bbfe-3a10ea2c6616, Call Stack: null, Custom Event
> > ID: -1, Message: Migration failed due to Error: Migration not in
> > progress (VM: web.judici, Source: kvm01.virt, Destination:
> > kvm03.virt).
> > 2014-04-09 14:34:08,665 INFO
> > [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand]
> > (org.ovirt.thread.pool-6-thread-43) [4045389e] Running command:
> > MaintenanceNumberOfVdssCommand internal: false. Entities affected :
> > ID: fa455012-b929-4805-a5c1-50e6c788039a Type: VDS
> > 2014-04-09 14:34:08,667 INFO
> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-43) [4045389e] START,
> > SetVdsStatusVDSCommand(HostName = kvm01.virt, HostId =
> > fa455012-b929-4805-a5c1-50e6c788039a, status=PreparingForMaintenance,
> > nonOperationalReason=NONE, stopSpmFailureLogged=true), log id:
> > 64965c26
> > 2014-04-09 14:34:08,678 INFO
> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-43) [4045389e] FINISH,
> > SetVdsStatusVDSCommand, log id: 64965c26
> > 2014-04-09 14:34:08,725 INFO
> > [org.ovirt.engine.core.bll.MaintenanceVdsCommand]
> > (org.ovirt.thread.pool-6-thread-43) [4045389e] Running command:
> > MaintenanceVdsCommand internal: true. Entities affected : ID:
> > fa455012-b929-4805-a5c1-50e6c788039a Type: VDS
> > 2014-04-09 14:34:08,744 INFO
> > [org.ovirt.engine.core.bll.InternalMigrateVmCommand]
> > (org.ovirt.thread.pool-6-thread-43) [35f06602] Failed to Acquire Lock
> > to object EngineLock [exclusiveLocks= key:
> > b0108933-deb2-4fa0-ae74-e10cefbb0cea value: VM
> > , sharedLocks= ]
> >
> > On Wed, Apr 9, 2014 at 6:33 PM, Yair Zaslavsky <yzaslavs(a)redhat.com> wrote:
> > > Can you please attach full engine.log?
> > >
> > > Many thanks,
> > > Yair
> > >
> > >
> > > ----- Original Message -----
> > >> From: "Jeremiah Jahn" <jeremiah(a)goodinassociates.com>
> > >> To: users(a)ovirt.org
> > >> Sent: Thursday, April 10, 2014 2:18:48 AM
> > >> Subject: [ovirt-users] help.. vm trapped in limbo aka can't acquire
> > >> exclusive lock
> > >>
> > >> I can't start it, I can't migrate it. I tried to migrate it before,
> > >> but the machine was stuck in a read only state. The migration failed
> > >> because the machine it was being migrated to was also in a read only
> > >> state. Somewhere in the process the lock obviously got lost, and I
> > >> can't get it back...
> > >>
> > >>
> > >>
> > >> 2014-04-09 18:11:16,675 INFO [org.ovirt.engine.core.bll.RunVmCommand]
> > >> (ajp--127.0.0.1-8702-3) [58b40832] Failed to Acquire Lock to object
> > >> EngineLock [exclusiveLocks= key: b0108933-deb2-4fa0-ae74-e10cefbb0cea
> > >> value: VM
> > >> , sharedLocks= ]
> > >>
> > >>
> > >> 2014-04-09 18:11:16,676 WARN [org.ovirt.engine.core.bll.RunVmCommand]
> > >> (ajp--127.0.0.1-8702-3) [58b40832] CanDoAction of action RunVm failed.
> > >> Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED,$VmName
> > >> web.judici
> > >> _______________________________________________
> > >> Users mailing list
> > >> Users(a)ovirt.org
> > >> http://lists.ovirt.org/mailman/listinfo/users
> > >>
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
10 years, 8 months
sql error when viewing system disks
by Jeremiah Jahn
version 3.4
System->Disks tab->All checked
Caused by: org.postgresql.util.PSQLException: The column name
disk_storage_type was not found in this ResultSet.
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.findColumn(AbstractJdbc2ResultSet.java:2542)
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2406)
at org.jboss.jca.adapters.jdbc.WrappedResultSet.getInt(WrappedResultSet.java:1073)
at org.ovirt.engine.core.dao.DiskDaoDbFacadeImpl$DiskRowMapper.mapRow(DiskDaoDbFacadeImpl.java:116)
[dal.jar:]
at org.ovirt.engine.core.dao.DiskDaoDbFacadeImpl$DiskRowMapper.mapRow(DiskDaoDbFacadeImpl.java:106)
[dal.jar:]
ERROR [org.ovirt.engine.core.bll.SearchQuery] (ajp--127.0.0.1-8702-3)
Query SearchQuery failed. Exception message is StatementCallback; bad
SQL grammar [SELECT * FROM (SELECT * FROM vds WHERE ( vds_id IN
(SELECT vds_with_tags.vds_id FROM vds_with_tags WHERE
vds_with_tags.vds_group_name LIKE 'Default' )) ORDER BY vds_name ASC
) as T1 OFFSET (1 -1) LIMIT 100]; nested exception is
org.postgresql.util.PSQLException: The column name disk_storage_type
was not found in this ResultSet. :
org.springframework.jdbc.BadSqlGrammarException: StatementCallback;
bad SQL grammar [SELECT * FROM (SELECT * FROM vds WHERE ( vds_id IN
(SELECT vds_with_tags.vds_id FROM vds_with_tags WHERE
vds_with_tags.vds_group_name LIKE 'Default' )) ORDER BY vds_name ASC
) as T1 OFFSET (1 -1) LIMIT 100]; nested exception is
org.postgresql.util.PSQLException: The column name disk_storage_type
was not found in this ResultSet.:
org.springframework.jdbc.BadSqlGrammarException: StatementCallback;
bad SQL grammar [SELECT * FROM (SELECT * FROM vds WHERE ( vds_id IN
(SELECT vds_with_tags.vds_id FROM vds_with_tags WHERE
vds_with_tags.vds_group_name LIKE 'Default' )) ORDER BY vds_name ASC
) as T1 OFFSET (1 -1) LIMIT 100]; nested exception is
org.postgresql.util.PSQLException: The column name disk_storage_type
was not found in this ResultSet.
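One way to narrow this down is to check whether the 3.4 schema upgrade
actually added the new column to the engine database. A rough sketch, not
verified on this setup (the database name "engine" and the view name
"all_disks" are the usual defaults and may differ):
  sudo -u postgres psql engine -c "SELECT column_name FROM information_schema.columns WHERE table_name = 'all_disks';"
  sudo -u postgres psql engine -c "SELECT version, script FROM schema_version ORDER BY id DESC LIMIT 5;"
If disk_storage_type is missing from the view, or the last upgrade scripts did
not complete, re-running engine-setup would be the next thing to look at.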
10 years, 8 months
compatibility relationship between datacenter, ovirt and cluster
by Tamer Lima
Hi,
Yesterday my oVirt was 3.3, and my datacenter and cluster (compatibility
version) were aligned with oVirt 3.3.
Today my oVirt is 3.4, while my datacenter and cluster (compatibility
version) remain at 3.3 (with the option enabled to change them to 3.4).
browsing the ovirt admin page I see 2 occurrences of ovirt version:
datacenter tab = 3.3
cluster tab =3.3
I would like to understand what all these versions mean, why the same
version applies to so many important things, and how my oVirt works/behaves
when the versions differ.
All my questions together:
What does it mean for a datacenter to be at version 3.3 (or lower) when oVirt is 3.4?
What does it mean for a cluster to be at version 3.3 when oVirt is 3.4?
What does changing the "compatibility version" of a datacenter do?
What does changing the "compatibility version" of a cluster do?
thanks
10 years, 8 months
Re: [ovirt-users] SPM error
by Maurice James
Nooooooo.
Sent from my Galaxy S III
-------- Original message --------
From: Ted Miller <tmiller(a)hcjb.org>
Date: 04/11/2014 7:08 PM (GMT-05:00)
To: Maurice James <mjames(a)media-node.com>
Subject: Re: [ovirt-users] SPM error
I didn't, really. I did something wrong along the way, and ended up having
to rebuild the engine and hosts. (My problems were due to a glusterfs
split-brain.)
Ted Miller
On 4/11/2014 6:03 PM, Maurice James wrote:
> How did you fix it?
>
> Sent from my Galaxy S III
>
> -------- Original message --------
> From: Ted Miller <tmiller(a)hcjb.org>
> Date: 04/11/2014 6:00 PM (GMT-05:00)
> To: users(a)ovirt.org
> Subject: Re: [ovirt-users] SPM error
>
> On 4/11/2014 2:05 PM, Maurice James wrote:
>> I have an error trying to bring the master DC back online. After several
>> reboots, no luck. I took the other cluster members offline to try to
>> troubleshoot. The remaining host is constantly in contention with itself
>> for SPM
>>
>> ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>> (DefaultQuartzScheduler_Worker-40) [38d400ea]
>> IrsBroker::Failed::GetStoragePoolInfoVDS due to:
>> IrsSpmStartFailedException: IRSGenericException: IRSErrorException:
>> SpmStart failed
>>
> I'm no expert, but the last time I beat my head on that rock, something was
> wrong with my sanlock storage. YMMV
> Ted Miller
> Elkhart, IN, USA
>
--
"He is no fool who gives what he cannot keep, to gain what he cannot lose." - - Jim Elliot
For more information about Jim Elliot and his unusual life, see
http://www.christianliteratureandliving.com/march2003/carolyn.html.
Ted Miller
Design Engineer
HCJB Global Technology Center, a ministry of Reach Beyond
2830 South 17th St
Elkhart, IN 46517
574--970-4272 my desk
574--970-4252 receptionist
10 years, 8 months
SPM error
by Maurice James
I have an error trying to bring the master DC back online. After several reboots, no luck. I took the other cluster members offline to try to troubleshoot. The remaining host is constantly in contention with itself for SPM
ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-40) [38d400ea] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed
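When SpmStart fails over and over like this, the sanlock side of the SPM lease is often the place to look. A rough sketch of what to check on the host (the log paths are the usual defaults; adjust to your install):
  sanlock client status
  tail -n 50 /var/log/sanlock.log
  grep -i spm /var/log/vdsm/vdsm.log | tail -n 50
If sanlock cannot acquire the lockspace on the master storage domain, SpmStart will keep failing until the underlying storage problem is fixed.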
10 years, 8 months
Re: [ovirt-users] Re-add a node
by Dafna Ron
James, Please answer the user's list as well as to me so that other
people can participate as well :)
did you try to press the "confirm host has been rebooted" (right click)
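If the reinstall left old vdsm certificates on the node, the engine can also fail with the "sslv3 alert certificate unknown" error quoted below. A rough sketch of what to check on the node before re-adding it (the paths are the usual vdsm defaults, so verify them first):
  ls -l /etc/pki/vdsm/certs/ /etc/pki/vdsm/keys/
  # if these files are left over from the previous install, move them aside
  # and let host re-deploy generate fresh ones:
  mv /etc/pki/vdsm/certs/vdsmcert.pem /root/vdsmcert.pem.old
  mv /etc/pki/vdsm/keys/vdsmkey.pem /root/vdsmkey.pem.old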
On 04/11/2014 12:41 PM, James James wrote:
>
>
>
> 2014-04-11 13:16 GMT+02:00 Dafna Ron <dron(a)redhat.com>:
>
> what is the error message you get when you try to put the host in
> maintenance?
>
>
> I have this message :
>
>
> Error while executing action: Cannot switch Host to Maintenance mode.
> Host is Storage Pool Manager and is in Non Responsive state.
> - If power management is configured, engine will try to fence the host
> automatically.
> - Otherwise, either bring the node back up, or release the SPM resource.
> To do so, verify that the node is really down by right clicking on the
> host and confirm that the node was shutdown manually.
>
>
>
> are there any running vm's reported?
>
>
> No there is no VM running on this host
>
>
>
> Try to press the "confirm host has been rebooted" button and then
> see if you can put the host in maintenance.
>
> If that fails, select the host, in the general tab you will get
> the re-install link.
>
>
>
> I am running oVirt 3.4.0-1. I don't know where the re-install link is;
> I can't see it.
>
> try to re-install, when install fails the host should change
> status to failed installation.
>
>
>
>
> On 04/11/2014 12:09 PM, James James wrote:
>
> No, I can't ....
>
>
> 2014-04-11 12:12 GMT+02:00 Dafna Ron <dron(a)redhat.com>:
>
>
> can you put the host in maintenance?
>
>
>
>
> On 04/11/2014 10:43 AM, James James wrote:
>
> I can't delete the old node because it is in the Non Responsive
> state. The remove button is still blank.
>
> In the engine.log I've got this log :
> 2014-04-11 11:40:45,911 ERROR
>
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> (DefaultQuartzScheduler_Worker-88) Command
> GetCapabilitiesVDSCommand(HostName = node1, HostId =
> 36fb6df3-c2c2-4133-86ac-fe50b99ee2e3, vds=Host[node1])
> execution failed. Exception: VDSNetworkException:
>
> sun.security.provider.certpath.SunCertPathBuilderException:
> unable to find valid certification path to requested
> target
> 2014-04-11 11:40:48,943 ERROR
>
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> (DefaultQuartzScheduler_Worker-12) Command
> GetCapabilitiesVDSCommand(HostName = node1, HostId =
> 36fb6df3-c2c2-4133-86ac-fe50b99ee2e3, vds=Host[node1])
> execution failed. Exception: VDSNetworkException:
>
> sun.security.provider.certpath.SunCertPathBuilderException:
> unable to find valid certification path to requested
> target
>
>
>
>
>
> 2014-04-11 11:06 GMT+02:00 Alon Bar-Lev <alonbl(a)redhat.com>:
>
>
>
>
> ----- Original Message -----
> > From: "James James" <jreg2k(a)gmail.com>
> > To: dron(a)redhat.com
> > Cc: "users" <users(a)ovirt.org>
> > Sent: Friday, April 11, 2014 12:04:23 PM
> > Subject: Re: [ovirt-users] Re-add a node
> >
> > The engine is the same but the node (node1) has been
> reinstalled ...
>
> So you need to re-add it to the engine.
> Delete the old node at engine side and use the node user interface
> to add it to the engine.
>
> >
> >
> > 2014-04-11 10:48 GMT+02:00 James James <jreg2k(a)gmail.com>:
>
>
> >
> >
> >
> > the log contains information about the first node1
> installation ....
> >
> > http://pastebin.com/mZSb2wmD
> >
> >
> > 2014-04-11 10:12 GMT+02:00 Dafna Ron <dron(a)redhat.com>:
>
>
> >
> >
> >
> > please add the host-deploy log from /var/log/ovirt-engine/host-deploy/
> >
> >
> > On 04/11/2014 12:11 AM, James James wrote:
> >
> >
> >
> > Hi,
> > I don't know if the subject is explicit enough, but I have a problem
> > and I hope to find some help here.
> >
> > I had two hosts in my cluster (node1 and node2). I had to reinstall
> > node1 due to some network problems. In the engine, node1 now appears
> > "Not Responsive" and I can't remove it from the engine UI.
> >
> > Now node1 is back and I want to add it to the cluster, but I can't.
> > I've got this error (vdsm.log):
> > BindingXMLRPC::ERROR::2014-04-11 01:04:42,622::BindingXMLRPC::81::vds::(threaded_start) xml-rpc handler exception
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/BindingXMLRPC.py", line 77, in threaded_start
> >     self.server.handle_request()
> >   File "/usr/lib64/python2.6/SocketServer.py", line 278, in handle_request
> >     self._handle_request_noblock()
> >   File "/usr/lib64/python2.6/SocketServer.py", line 288, in _handle_request_noblock
> >     request, client_address = self.get_request()
> >   File "/usr/lib64/python2.6/SocketServer.py", line 456, in get_request
> >     return self.socket.accept()
> >   File "/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py", line 136, in accept
> >     raise SSL.SSLError("%s, client %s" % (e, address[0]))
> > SSLError: sslv3 alert certificate unknown, client 192.168.1.100
> >
> > 192.168.1.100 is the engine's address
> >
> > Can somebody help me ?
> >
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> > --
> > Dafna Ron
> >
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
>
>
>
> -- Dafna Ron
>
>
>
>
> --
> Dafna Ron
>
>
--
Dafna Ron
10 years, 8 months
vdsm dependency rsyslog?
by Sven Kieske
Hi,
I just observed that vdsm has a hard dependency on rsyslog.
Could anybody enlighten me why this is needed?
As I understand it, it's a Python script which could log to any
configurable logging facility.
What about other syslog services?
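For what it's worth, the dependency can be inspected directly on an installed
host; a quick sketch (assumes an RPM-based install):
  rpm -q --requires vdsm | grep -i syslog
  rpm -q --whatrequires rsyslog
That at least shows whether the requirement comes from the vdsm package itself
or from one of its subpackages.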
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
10 years, 8 months
New oVirt Node available for 3.3 and 3.4
by Fabian Deutsch
Hey,
there is a new oVirt Node build available here:
http://resources.ovirt.org/releases/3.4/iso/ovirt-node-iso-3.4-20140410.0.el6.iso
This build can be used for oVirt 3.4 and oVirt 3.3 as well.
Greetings
fabian
10 years, 8 months
Fwd: [Users] HA
by Koen Vanoppen
The power management is configured correctly. And as long as the host that
loses its storage isn't the SPM, there is no problem.
If I can make it so that, when the VM is paused, it gets switched off and
(HA-way) reboots itself, I'm perfectly happy :-).
Kind regards,
---------- Forwarded message ----------
From: Koen Vanoppen <vanoppen.koen(a)gmail.com>
Date: 2014-04-11 14:47 GMT+02:00
Subject: Re: [ovirt-users] [Users] HA
To: Michal Skrivanek <michal.skrivanek(a)redhat.com>
The power management is configured correctly. And as long as the host that
loses its storage isn't the SPM, there is no problem.
If I can make it so that, when the VM is paused, it gets switched off and
(HA-way) reboots itself, I'm perfectly happy :-).
Kind regards,
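One practical note on the engine-config changes discussed below: most values
are only read when the engine starts, so a change typically looks roughly like
this (a sketch for an el6 install; the service name may differ on other setups):
  engine-config --set StorageDomainFalureTimeoutInMinutes=1
  engine-config --get StorageDomainFalureTimeoutInMinutes
  service ovirt-engine restart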
2014-04-11 9:37 GMT+02:00 Michal Skrivanek <michal.skrivanek(a)redhat.com>:
> On 11 Apr 2014, at 09:00, Koen Vanoppen wrote:
>
> Hi All,
>
> Any news about this? DSM hook or anything?
> Thanx!
>
> Kind regards
>
>
> 2014-04-09 9:37 GMT+02:00 Omer Frenkel <ofrenkel(a)redhat.com>:
>
>>
>>
>> ----- Original Message -----
>> > From: "Koen Vanoppen" <vanoppen.koen(a)gmail.com>
>> > To: users(a)ovirt.org
>> > Sent: Tuesday, April 8, 2014 3:41:02 PM
>> > Subject: Re: [Users] HA
>> >
>> > In other words, the SPM and the VM should move almost immediately after
>> > the storage connections on the hypervisor are gone. I know, I'm maybe
>> > asking too much, but we would be very happy :-) :-).
>> >
>> > So sketch:
>> >
>> > Mercury1 SPM
>> > Mercury 2
>> >
>> > Mercury1 loses both fibre connections --> goes in non-operational and
>> the VM
>> > goes in paused state and stays this way, until I manually reboot the
>> host so
>> > it fences.
>> >
>> > What I would like is that when Mercury1 loses both fibre connections,
>> > it fences immediately so the VMs are also moved almost instantly... if
>> > this is possible... :-)
>> >
>> > Kind regards and thanks for all the help!
>> >
>>
>> Michal, is there a vdsm hook for vm moved to pause?
>> if so, you could send KILL to it, and engine will identify vm was
>> killed+HA,
>> so it will be restarted, and no need to reboot the host, it will stay in
>> non-operational until storage is fixed.
>>
>
> you have to differentiate - if only the VMs would be paused, yes, you can
> do anything (also change the err reporting policy to not pause the VM)
> but if the host becomes non-operational then it simply doesn't work, vdsm
> got stuck somewhere (often in get blk device stats)
> proper power management config should fence it
>
> Thanks,
> michal
>
>
>> >
>> >
>> > 2014-04-08 14:26 GMT+02:00 Koen Vanoppen < vanoppen.koen(a)gmail.com > :
>> >
>> >
>> >
>> > Ok,
>> > Thanx already for all the help. I adapted some things for quicker
>> respons:
>> > engine-config --get FenceQuietTimeBetweenOperationsInSec-->180
>> > engine-config --set FenceQuietTimeBetweenOperationsInSec=60
>> >
>> > engine-config --get StorageDomainFalureTimeoutInMinutes-->180
>> > engine-config --set StorageDomainFalureTimeoutInMinutes=1
>> >
>> > engine-config --get SpmCommandFailOverRetries-->5
>> > engine-config --set SpmCommandFailOverRetries
>> >
>> > engine-config --get SPMFailOverAttempts-->3
>> > engine-config --set SPMFailOverAttempts=1
>> >
>> > engine-config --get NumberOfFailedRunsOnVds-->3
>> > engine-config --set NumberOfFailedRunsOnVds=1
>> >
>> > engine-config --get vdsTimeout-->180
>> > engine-config --set vdsTimeout=30
>> >
>> > engine-config --get VDSAttemptsToResetCount-->2
>> > engine-config --set VDSAttemptsToResetCount=1
>> >
>> > engine-config --get TimeoutToResetVdsInSeconds-->60
>> > engine-config --set TimeoutToResetVdsInSeconds=30
>> >
>> > Now the result of this is that when the VM is not running on the SPM,
>> > it will migrate before going into pause mode.
>> > But when we tried it with the VM running on the SPM, it gets into paused
>> > mode (for safety reasons, I know ;-) ) and stays there until the host is
>> > MANUALLY fenced by rebooting it. So now my question is... how can I make
>> > the hypervisor fence (i.e. reboot, so the VM is moved) quicker?
>> >
>> > Kind regards,
>> >
>> > Koen
>> >
>> >
>> > 2014-04-04 16:28 GMT+02:00 Koen Vanoppen < vanoppen.koen(a)gmail.com > :
>> >
>> >
>> >
>> >
>> >
>> > Yes, that's true. But I was driving... so I'm just forwarding it then :-).
>> > I have already adjusted the timeout. It was set to 5 minutes before it
>> > would time out. It is now set to 2 minutes.
>> > On Apr 4, 2014 4:14 PM, "David Van Zeebroeck" <
>> > david.van.zeebroeck(a)brusselsairport.be > wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > I have them too, you know.
>> >
>> >
>> >
>> > But normally the fencing should have worked, as I read it.
>> >
>> > So something went wrong somewhere, by the looks of it.
>> >
>> >
>> >
>> > From: Koen Vanoppen [mailto: vanoppen.koen(a)gmail.com ]
>> > Sent: Friday, 4 April 2014 16:07
>> > To: David Van Zeebroeck
>> > Subject: Fwd: Re: [Users] HA
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > David Van Zeebroeck
>> >
>> > Product Manager Unix Infrastructure
>> >
>> > Information & Communication Technology
>> >
>> > Brussels Airport Company
>> >
>> > T +32 (0)2 753 66 24
>> >
>> > M +32 (0)497 02 17 31
>> >
>> > david.van.zeebroeck(a)brusselsairport.be
>> >
>> >
>> >
>> > www.brusselsairport.be
>> >
>> > ---------- Forwarded message ----------
>> > From: "Michal Skrivanek" < michal.skrivanek(a)redhat.com >
>> > Date: Apr 4, 2014 3:39 PM
>> > Subject: Re: [Users] HA
>> > To: "Koen Vanoppen" < vanoppen.koen(a)gmail.com >
>> > Cc: "ovirt-users Users" < users(a)ovirt.org >
>> >
>> >
>> >
>> >
>> >
>> >
>> > On 4 Apr 2014, at 15:14, Sander Grendelman wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> > Do you have power management configured?
>> >
>> >
>> > Was the "failed" host fenced/rebooted?
>> >
>> >
>> >
>> >
>> >
>> > On Fri, Apr 4, 2014 at 2:21 PM, Koen Vanoppen < vanoppen.koen(a)gmail.com>
>> > wrote:
>> >
>> >
>> > So... It is possible for a fully automatic migration of the VM to
>> another
>> > hypervisor in case Storage connection fails?
>> >
>> >
>> > How can we make this happen? Because for the moment, when we tested the
>> > situation they stayed in pause state.
>> >
>> >
>> > (Test situation:
>> >
>> > * Unplug the 2 fibre cables from the hypervisor
>> > * VM's go in pause state
>> > * VM's stayed in pause state until the failure was solved
>> >
>> >
>> >
>> >
>> >
>> > as said before, it's not safe hence we (try to) not migrate them.
>> >
>> >
>> > They only get paused when they actually access the storage which may
>> not be
>> > always the case. I.e. the storage connection is severed, host deemed
>> > NonOperational and VMs are getting migrated from it, then some of them
>> will
>> > succeed if they didn't access that "bad" storage … the paused VMs will
>> > remain (mostly, it can still happen that they appear paused migrated on
>> > other host when the disk access occurs only at the last stage of
>> migration)
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > so in other words, if you want to migrate the VMs without interruption,
>> > it's sometimes not possible
>> >
>> >
>> > if you are fine with the VMs restarted in short time on other host then
>> power
>> > management/fencing will help here
>> >
>> >
>> >
>> >
>> >
>> > Thanks,
>> >
>> >
>> > michal
>> >
>> >
>> >
>> >
>> >
>> >
>> > )
>> >
>> >
>> >
>> >
>> > They only returned when we restored the fiber connection to the
>> Hypervisor…
>> >
>> >
>> >
>> >
>> >
>> > yes, since 3.3 we have the autoresume feature
>> >
>> >
>> >
>> >
>> >
>> > Thanks,
>> >
>> >
>> > michal
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Kind Regards,
>> >
>> > Koen
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > 2014-04-04 13:52 GMT+02:00 Koen Vanoppen < vanoppen.koen(a)gmail.com >:
>> >
>> >
>> > So... It is possible for a fully automatic migration of the VM to
>> another
>> > hypervisor in case Storage connection fails?
>> >
>> >
>> > How can we make this happen? Because for the moment, when we tested the
>> > situation they stayed in pause state.
>> >
>> >
>> > (Test situation:
>> >
>> > * Unplug the 2 fibre cables from the hypervisor
>> > * VM's go in pause state
>> > * VM's stayed in pause state until the failure was solved
>> >
>> >
>> > )
>> >
>> >
>> >
>> >
>> > They only returned when we restored the fiber connection to the
>> Hypervisor...
>> >
>> >
>> > Kind Regards,
>> >
>> > Koen
>> >
>> >
>> >
>> >
>> >
>> > 2014-04-03 16:53 GMT+02:00 Koen Vanoppen < vanoppen.koen(a)gmail.com >:
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > ---------- Forwarded message ----------
>> > From: "Doron Fediuck" < dfediuck(a)redhat.com >
>> > Date: Apr 3, 2014 4:51 PM
>> > Subject: Re: [Users] HA
>> >
>> >
>> > To: "Koen Vanoppen" < vanoppen.koen(a)gmail.com >
>> > Cc: "Omer Frenkel" < ofrenkel(a)redhat.com >, < users(a)ovirt.org >,
>> "Federico
>> > Simoncelli" < fsimonce(a)redhat.com >, "Allon Mureinik" <
>> amureini(a)redhat.com
>> > >
>> >
>> >
>> >
>> > ----- Original Message -----
>> > > From: "Koen Vanoppen" < vanoppen.koen(a)gmail.com >
>> > > To: "Omer Frenkel" < ofrenkel(a)redhat.com >, users(a)ovirt.org
>> > > Sent: Wednesday, April 2, 2014 4:17:36 PM
>> > > Subject: Re: [Users] HA
>> > >
>> > > Yes, indeed. I meant not-operational. Sorry.
>> > > So, if I understand this correctly: whenever we end up in a situation
>> > > where we lose both storage connections on our hypervisor, we will have
>> > > to manually restore the connections first?
>> > >
>> > > And thanx for the tip for speeding up thins :-).
>> > >
>> > > Kind regards,
>> > >
>> > > Koen
>> > >
>> > >
>> > > 2014-04-02 15:14 GMT+02:00 Omer Frenkel < ofrenkel(a)redhat.com > :
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > ----- Original Message -----
>> > > > From: "Koen Vanoppen" < vanoppen.koen(a)gmail.com >
>> > > > To: users(a)ovirt.org
>> > > > Sent: Wednesday, April 2, 2014 4:07:19 PM
>> > > > Subject: [Users] HA
>> > > >
>> > > > Dear All,
>> > > >
>> > > > Due our acceptance testing, we discovered something. (Document will
>> > > > follow).
>> > > > When we disable one fiber path, no problem multipath finds it way no
>> > > > pings
>> > > > are lost.
>> > > > BUT when we disabled both the fiber paths (so one of the storage
>> domain
>> > > > is
>> > > > gone on this host, but still available on the other host), vms go in
>> > > > paused
>> > > > mode... He chooses a new SPM (can we speed this up?), put's the
>> host in
>> > > > non-responsive (can we speed this up, more important) and the VM's
>> stay
>> > > > on
>> > > > Paused mode... I would expect that they would be migrated (yes, HA
>> is
>> > >
>> > > i guess you mean the host moves to not-operational (in contrast to
>> > > non-responsive)?
>> > > if so, the engine will not migrate vms that are paused due to io error,
>> > > because of data corruption risk.
>> > >
>> > > to speed up you can look at the storage domain monitoring timeout:
>> > > engine-config --get StorageDomainFalureTimeoutInMinutes
>> > >
>> > >
>> > > > enabled) to the other host and reboot there... Any solution? We are
>> still
>> > > > using oVirt 3.3.1 , but we are planning a upgrade to 3.4 after the
>> easter
>> > > > holiday.
>> > > >
>> > > > Kind Regards,
>> > > >
>> > > > Koen
>> > > >
>> >
>> > Hi Koen,
>> > Resuming from paused due to io issues is supported (adding relevant
>> folks).
>> > Regardless, if you did not define power management, you should manually
>> > approve
>> > source host was rebooted in order for migration to proceed. Otherwise
>> we risk
>> > split-brain scenario.
>> >
>> > Doron
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users(a)ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>> >
>> >
>> >
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users(a)ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users(a)ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
10 years, 8 months