DR with a different number of hosts
by ernestclydeachua@gmail.com
Hello,
Currently I have a standalone oVirt deployment on a primary site and I am planning to create a DR site,
but the DR site lacks bare metal with the same resources as the primary site.
Is it possible for the DR site to have multiple hypervisors and have oVirt automatically decide which host each VM should run on / attach to?
HostedEngine migration fails with VM destroyed during the startup.
by Vrgotic, Marko
Dear oVirt,
I have a problem migrating the HostedEngine VM (our only HA VM) to the other HA nodes.
Bit of background story:
* We have oVirt SHE 4.3.5
* Three Nodes act as HA pool for SHE
* Node 3 is currently Hosting SHE
* Actions:
* Put Node1 in Maintenance mode, all VMs were successfully migrated, then upgraded packages and activated the host – all looks good
* Put Node2 in Maintenance mode, all VMs were successfully migrated, then upgraded packages and activated the host – all looks good
Now the problem:
Trying to set Node3 to Maintenance mode, all VMs were successfully migrated except HostedEngine.
When attempting migration of the HostedEngine VM, it fails with the following error message:
2020-02-14 12:33:49,960Z INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] Lock Acquired to object 'EngineLock:{exclusiveLocks='[66b6d489-ceb8-486a-951a-355e21f13627=VM]', sharedLocks=''}'
2020-02-14 12:33:49,984Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] Candidate host 'ovirt-sj-04.ictv.com' ('d98843da-bd81-46c9-9425-065b196ac59d') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
2020-02-14 12:33:49,984Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] Candidate host 'ovirt-sj-05.ictv.com' ('e3176705-9fb0-41d6-8721-367dfa2e62bd') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
2020-02-14 12:33:49,997Z INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] Running command: MigrateVmCommand internal: false. Entities affected : ID: 66b6d489-ceb8-486a-951a-355e21f13627 Type: VMAction group MIGRATE_VM with role type USER
2020-02-14 12:33:50,008Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] Candidate host 'ovirt-sj-04.ictv.com' ('d98843da-bd81-46c9-9425-065b196ac59d') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: 16f4559e-e262-4c9d-80b4-ec81c2cbf950)
2020-02-14 12:33:50,008Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] Candidate host 'ovirt-sj-05.ictv.com' ('e3176705-9fb0-41d6-8721-367dfa2e62bd') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: 16f4559e-e262-4c9d-80b4-ec81c2cbf950)
2020-02-14 12:33:50,033Z INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='f8d27efb-1527-45f0-97d6-d34a86abaaa2', vmId='66b6d489-ceb8-486a-951a-355e21f13627', srcHost='ovirt-sj-03.ictv.com', dstVdsId='9808f434-5cd4-48b5-8bbc-e639e391c6a5', dstHost='ovirt-sj-01.ictv.com:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='40', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.210.13.11'}), log id: 5c126a47
2020-02-14 12:33:50,036Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] START, MigrateBrokerVDSCommand(HostName = ovirt-sj-03.ictv.com, MigrateVDSCommandParameters:{hostId='f8d27efb-1527-45f0-97d6-d34a86abaaa2', vmId='66b6d489-ceb8-486a-951a-355e21f13627', srcHost='ovirt-sj-03.ictv.com', dstVdsId='9808f434-5cd4-48b5-8bbc-e639e391c6a5', dstHost='ovirt-sj-01.ictv.com:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='40', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.210.13.11'}), log id: a0f776d
2020-02-14 12:33:50,043Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] FINISH, MigrateBrokerVDSCommand, return: , log id: a0f776d
2020-02-14 12:33:50,046Z INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 5c126a47
2020-02-14 12:33:50,052Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-265) [16f4559e-e262-4c9d-80b4-ec81c2cbf950] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: HostedEngine, Source: ovirt-sj-03.ictv.com, Destination: ovirt-sj-01.ictv.com, User: mvrgotic@ictv.com(a)ictv.com-authz).
2020-02-14 12:33:52,893Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-8) [] VM '66b6d489-ceb8-486a-951a-355e21f13627' was reported as Down on VDS '9808f434-5cd4-48b5-8bbc-e639e391c6a5'(ovirt-sj-01.ictv.com)
2020-02-14 12:33:52,893Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-8) [] START, DestroyVDSCommand(HostName = ovirt-sj-01.ictv.com, DestroyVmVDSCommandParameters:{hostId='9808f434-5cd4-48b5-8bbc-e639e391c6a5', vmId='66b6d489-ceb8-486a-951a-355e21f13627', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 7532a8c0
2020-02-14 12:33:53,217Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-8) [] Failed to destroy VM '66b6d489-ceb8-486a-951a-355e21f13627' because VM does not exist, ignoring
2020-02-14 12:33:53,217Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-8) [] FINISH, DestroyVDSCommand, return: , log id: 7532a8c0
2020-02-14 12:33:53,217Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-8) [] VM '66b6d489-ceb8-486a-951a-355e21f13627'(HostedEngine) was unexpectedly detected as 'Down' on VDS '9808f434-5cd4-48b5-8bbc-e639e391c6a5'(ovirt-sj-01.ictv.com) (expected on 'f8d27efb-1527-45f0-97d6-d34a86abaaa2')
2020-02-14 12:33:53,217Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-8) [] Migration of VM 'HostedEngine' to host 'ovirt-sj-01.ictv.com' failed: VM destroyed during the startup.
2020-02-14 12:33:53,219Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [] VM '66b6d489-ceb8-486a-951a-355e21f13627'(HostedEngine) moved from 'MigratingFrom' --> 'Up'
2020-02-14 12:33:53,219Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [] Adding VM '66b6d489-ceb8-486a-951a-355e21f13627'(HostedEngine) to re-run list
2020-02-14 12:33:53,221Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-15) [] Rerun VM '66b6d489-ceb8-486a-951a-355e21f13627'. Called from VDS 'ovirt-sj-03.ictv.com'
2020-02-14 12:33:53,259Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-377323) [] START, MigrateStatusVDSCommand(HostName = ovirt-sj-03.ictv.com, MigrateStatusVDSCommandParameters:{hostId='f8d27efb-1527-45f0-97d6-d34a86abaaa2', vmId='66b6d489-ceb8-486a-951a-355e21f13627'}), log id: 62bac076
2020-02-14 12:33:53,265Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-377323) [] FINISH, MigrateStatusVDSCommand, return: , log id: 62bac076
2020-02-14 12:33:53,277Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-377323) [] EVENT_ID: VM_MIGRATION_TRYING_RERUN(128), Failed to migrate VM HostedEngine to Host ovirt-sj-01.ictv.com . Trying to migrate to another Host.
2020-02-14 12:33:53,330Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-377323) [] Candidate host 'ovirt-sj-04.ictv.com' ('d98843da-bd81-46c9-9425-065b196ac59d') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
2020-02-14 12:33:53,330Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-377323) [] Candidate host 'ovirt-sj-05.ictv.com' ('e3176705-9fb0-41d6-8721-367dfa2e62bd') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
2020-02-14 12:33:53,345Z INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (EE-ManagedThreadFactory-engine-Thread-377323) [] Running command: MigrateVmCommand internal: false. Entities affected : ID: 66b6d489-ceb8-486a-951a-355e21f13627 Type: VMAction group MIGRATE_VM with role type USER
2020-02-14 12:33:53,356Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-377323) [] Candidate host 'ovirt-sj-04.ictv.com' ('d98843da-bd81-46c9-9425-065b196ac59d') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: 16f4559e-e262-4c9d-80b4-ec81c2cbf950)
2020-02-14 12:33:53,356Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-377323) [] Candidate host 'ovirt-sj-05.ictv.com' ('e3176705-9fb0-41d6-8721-367dfa2e62bd') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: 16f4559e-e262-4c9d-80b4-ec81c2cbf950)
2020-02-14 12:33:53,380Z INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-377323) [] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='f8d27efb-1527-45f0-97d6-d34a86abaaa2', vmId='66b6d489-ceb8-486a-951a-355e21f13627', srcHost='ovirt-sj-03.ictv.com', dstVdsId='33e8ff78-e396-4f40-b43c-685bfaaee9af', dstHost='ovirt-sj-02.ictv.com:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='40', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.210.13.12'}), log id: d99059f
2020-02-14 12:33:53,380Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-377323) [] START, MigrateBrokerVDSCommand(HostName = ovirt-sj-03.ictv.com, MigrateVDSCommandParameters:{hostId='f8d27efb-1527-45f0-97d6-d34a86abaaa2', vmId='66b6d489-ceb8-486a-951a-355e21f13627', srcHost='ovirt-sj-03.ictv.com', dstVdsId='33e8ff78-e396-4f40-b43c-685bfaaee9af', dstHost='ovirt-sj-02.ictv.com:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='40', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.210.13.12'}), log id: 6f0483ac
2020-02-14 12:33:53,386Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-377323) [] FINISH, MigrateBrokerVDSCommand, return: , log id: 6f0483ac
2020-02-14 12:33:53,388Z INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-377323) [] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: d99059f
2020-02-14 12:33:53,391Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-377323) [] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: HostedEngine, Source: ovirt-sj-03.ictv.com, Destination: ovirt-sj-02.ictv.com, User: mvrgotic@ictv.com(a)ictv.com-authz).
2020-02-14 12:33:55,108Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] Fetched 10 VMs from VDS '33e8ff78-e396-4f40-b43c-685bfaaee9af'
2020-02-14 12:33:55,110Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] VM '66b6d489-ceb8-486a-951a-355e21f13627' is migrating to VDS '33e8ff78-e396-4f40-b43c-685bfaaee9af'(ovirt-sj-02.ictv.com) ignoring it in the refresh until migration is done
2020-02-14 12:33:57,224Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [] VM '66b6d489-ceb8-486a-951a-355e21f13627' was reported as Down on VDS '33e8ff78-e396-4f40-b43c-685bfaaee9af'(ovirt-sj-02.ictv.com)
2020-02-14 12:33:57,225Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-15) [] START, DestroyVDSCommand(HostName = ovirt-sj-02.ictv.com, DestroyVmVDSCommandParameters:{hostId='33e8ff78-e396-4f40-b43c-685bfaaee9af', vmId='66b6d489-ceb8-486a-951a-355e21f13627', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 1dec553e
2020-02-14 12:33:57,672Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-15) [] Failed to destroy VM '66b6d489-ceb8-486a-951a-355e21f13627' because VM does not exist, ignoring
2020-02-14 12:33:57,672Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-15) [] FINISH, DestroyVDSCommand, return: , log id: 1dec553e
2020-02-14 12:33:57,672Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [] VM '66b6d489-ceb8-486a-951a-355e21f13627'(HostedEngine) was unexpectedly detected as 'Down' on VDS '33e8ff78-e396-4f40-b43c-685bfaaee9af'(ovirt-sj-02.ictv.com) (expected on 'f8d27efb-1527-45f0-97d6-d34a86abaaa2')
2020-02-14 12:33:57,672Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [] Migration of VM 'HostedEngine' to host 'ovirt-sj-02.ictv.com' failed: VM destroyed during the startup.
2020-02-14 12:33:57,674Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-8) [] VM '66b6d489-ceb8-486a-951a-355e21f13627'(HostedEngine) moved from 'MigratingFrom' --> 'Up'
2020-02-14 12:33:57,674Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-8) [] Adding VM '66b6d489-ceb8-486a-951a-355e21f13627'(HostedEngine) to re-run list
2020-02-14 12:33:57,676Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-8) [] Rerun VM '66b6d489-ceb8-486a-951a-355e21f13627'. Called from VDS 'ovirt-sj-03.ictv.com'
2020-02-14 12:33:57,678Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-377324) [] START, MigrateStatusVDSCommand(HostName = ovirt-sj-03.ictv.com, MigrateStatusVDSCommandParameters:{hostId='f8d27efb-1527-45f0-97d6-d34a86abaaa2', vmId='66b6d489-ceb8-486a-951a-355e21f13627'}), log id: 2228bcc
2020-02-14 12:33:57,682Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-377324) [] FINISH, MigrateStatusVDSCommand, return: , log id: 2228bcc
2020-02-14 12:33:57,691Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-377324) [] EVENT_ID: VM_MIGRATION_TRYING_RERUN(128), Failed to migrate VM HostedEngine to Host ovirt-sj-02.ictv.com . Trying to migrate to another Host.
2020-02-14 12:33:57,713Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-377324) [] Candidate host 'ovirt-sj-04.ictv.com' ('d98843da-bd81-46c9-9425-065b196ac59d') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
2020-02-14 12:33:57,713Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-377324) [] Candidate host 'ovirt-sj-05.ictv.com' ('e3176705-9fb0-41d6-8721-367dfa2e62bd') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
2020-02-14 12:33:57,713Z WARN [org.ovirt.engine.core.bll.MigrateVmCommand] (EE-ManagedThreadFactory-engine-Thread-377324) [] Validation of action 'MigrateVm' failed for user mvrgotic@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__MIGRATE,VAR__TYPE__VM,VAR__ACTION__MIGRATE,VAR__TYPE__VM,VAR__ACTION__MIGRATE,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-sj-04.ictv.com,$filterName HA,VAR__DETAIL__NOT_HE_HOST,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-sj-05.ictv.com,$filterName HA,VAR__DETAIL__NOT_HE_HOST,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL
2020-02-14 12:33:57,715Z INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (EE-ManagedThreadFactory-engine-Thread-377324) [] Lock freed to object 'EngineLock:{exclusiveLocks='[66b6d489-ceb8-486a-951a-355e21f13627=VM]', sharedLocks=''}'
2020-02-14 12:33:57,725Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-377324) [] EVENT_ID: VM_MIGRATION_NO_VDS_TO_MIGRATE_TO(166), No available host was found to migrate VM HostedEngine to.
2020-02-14 12:33:57,752Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-377324) [] EVENT_ID: VM_MIGRATION_FAILED(65), Migration failed (VM: HostedEngine, Source: ovirt-sj-03.ictv.com).
2020-02-14 12:33:58,220Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'commandCoordinator' is using 0 threads out of 10, 6 threads waiting for tasks.
2020-02-14 12:33:58,220Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
2020-02-14 12:33:58,220Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 42 threads waiting for tasks and 0 tasks in queue.
2020-02-14 12:33:58,220Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for tasks.
2020-02-14 12:33:58,220Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for tasks.
2020-02-14 12:33:58,220Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 5 threads waiting for tasks.
2020-02-14 12:34:05,643Z INFO [org.ovirt.engine.core.bll.EngineBackupAwarenessManager] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] Backup check started.
2020-02-14 12:34:05,649Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] EVENT_ID: ENGINE_NO_WARM_BACKUP(9,023), Full backup was created on Mon Feb 03 17:48:23 UTC 2020 and it's too old. Please run engine-backup to prevent data loss in case of corruption.
2020-02-14 12:34:05,649Z INFO [org.ovirt.engine.core.bll.EngineBackupAwarenessManager] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] Backup check completed.
2020-02-14 12:34:10,145Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (EE-ManagedThreadFactory-engineScheduled-Thread-92) [] Fetched 9 VMs from VDS '33e8ff78-e396-4f40-b43c-685bfaaee9af'
I have tried to fix the issue:
* by reinstalling Node1
* by undeploying HostedEngine from Node1 – then removing Node1 from oVirt – adding it back as a new host with HostedEngine deployment
* by undeploying HostedEngine from Node1 – then removing Node1 from oVirt – adding it back as a new host with HostedEngine deployment – setting it to Maintenance mode and rebooting – just for the sake of starting the services in the correct order
In all cases the error message remains the same and HostedEngine remains on Node3.
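For reference, the destination hosts report "VM destroyed during the startup", while ovirt-sj-04/05 are filtered out by the HA filter as expected since they are not hosted-engine hosts. A minimal, hedged sketch of checks on the three HA nodes that should show whether the hosted-engine services and HA scores look sane (service names and log paths are the standard oVirt 4.3 hosted-engine locations, nothing here is taken from this environment):

# Are the hosted-engine HA services healthy on each HA node?
systemctl status ovirt-ha-agent ovirt-ha-broker

# HA state and score of every hosted-engine node, as the cluster sees it
hosted-engine --vm-status

# Recent agent-side errors, if any
tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log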
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
e: m.vrgotic(a)activevideo.com
w: www.activevideo.com
Error with LDAP
by Lucas Lamy
Hello everyone.
I have previously configured an LDAP connection through ovirt-engine-extension-aaa-ldap-setup.
The only working configuration was IBM Security Directory Server (the IBM Security Directory Server RFC-2307 schema doesn't work), with ldaps and an anonymous search user. But the LDAP server I'm testing against is OpenLDAP, not IBM.
Indeed, with the IBM profile the search and login work fine when I test them with ovirt-engine-extensions-tool aaa.
But when I try to add an LDAP user in the user administration panel I get this error message: "Error while executing action AddUser: Internal Engine Error".
None of the solutions I've found in previous threads seems to work.
Does someone have an idea, please?
Please find the logs attached.
Thank you beforehand.
Caused by: org.postgresql.util.PSQLException: ERROR: null value in column "external_id" violates not-null constraint
  Detail: Failing row contains (**user info**).
  Where: SQL statement "INSERT INTO users ( department, domain, email, name, note, surname, user_id, username, external_id, namespace ) VALUES ( v_department, v_domain, v_email, v_name, v_note, v_surname, v_user_id, v_username, v_external_id, v_namespace )"
  PL/pgSQL function insertuser(character varying,character varying,character varying,character varying,character varying,character varying,uuid,character varying,text,character varying) line 3 at SQL state$
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
        at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
        at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
        at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
        at org.postgresql.jdbc.PgCallableStatement.executeWithFlags(PgCallableStatement.java:78)
        at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:144)
        at org.jboss.jca.adapters.jdbc.CachedPreparedStatement.execute(CachedPreparedStatement.java:303)
        at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.execute(WrappedPreparedStatement.java:442)
        at org.springframework.jdbc.core.JdbcTemplate.lambda$call$4(JdbcTemplate.java:1105) [spring-jdbc.jar:5.0.4.RELEASE]
        at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1050) [spring-jdbc.jar:5.0.4.RELEASE]
        ... 162 more
2020-02-15 10:16:53,337+01 ERROR [org.ovirt.engine.core.bll.aaa.AddUserCommand] (default task-4) [222f7ca7-b669-40e0-b152-2ca898ebde09] Transaction rolled-back for command 'org.ovirt.engine.core.bll.aaa.$
2020-02-15 10:16:53,341+01 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-4) [222f7ca7-b669-40e0-b152-2ca898ebde09] EVENT_ID: USER_FAILED_ADD_ADUSER(327), Fail, Failed to add User 'user' to the system.
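The failing column is external_id, which the engine appears to fill from the unique user identifier returned by the aaa-ldap extension; if a profile written for IBM SDS is pointed at OpenLDAP, that attribute mapping can plausibly come back empty. As a hedged illustration only (the profile include name, host and bind credentials below are placeholders, not taken from the setup above), a profile aimed directly at OpenLDAP usually looks roughly like this:

# Hypothetical /etc/ovirt-engine/aaa/example-openldap.properties
# The exact include offered by ovirt-engine-extension-aaa-ldap-setup for OpenLDAP
# may be openldap.properties or rfc2307-openldap.properties depending on the schema;
# ldaps/truststore settings are normally written by the setup tool as well.
include = <openldap.properties>

vars.server = ldap.example.com
vars.user = cn=search,dc=example,dc=com
vars.password = changeme

pool.default.serverset.single.server = ${global:vars.server}
pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}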
Re: What is this error message from?
by Nir Soffer
On Tue, Feb 18, 2020 at 4:13 PM Jeremy Tourville
<jeremy_tourville(a)hotmail.com> wrote:
>
> I don't recall running any convert operations on the host, and certainly not at the time/date listed. *If* any conversions were run, they were run from a laptop and then I moved the converted disk to this host. I definitely didn't make any volume changes. Is this image conversion part of the template process? I have been creating quite a few templates lately. I have noted that several of them failed and I had to rerun the process.
This may be an error from template creation.
> Is this some sort of process that just keeps trying over and over because it thinks it failed?
We don't have such jobs.
The log you posted contains output from getAllTasksStatuses:
2020-02-17 06:19:47,782-0600 INFO (jsonrpc/5) [vdsm.api] FINISH
getAllTasksStatuses return={'allTasksStatus':
{'1cbc63d7-2310-4291-8f08-df5bf58376bb': {'code': 0, 'message': '1
jobs completed successfully', 'taskState': 'finished', 'taskResult':
'success', 'taskID': '1cbc63d7-2310-4291-8f08-df5bf58376bb'},
'9db209be-8e33-4c35-be8a-a58b4819812a': {'code': 261, 'message': 'low
level Image copy failed: ("Command [\'/usr/bin/qemu-img\',
\'convert\', \'-p\', \'-t\', \'none\', \'-T\', \'none\', \'-f\',
\'raw\', u\'/rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/c651575f-75a0-492e-959e-8cfee6b6a7b5/9b5601fe-9627-4a8a-8a98-4959f68fb137\',
\'-O\', \'qcow2\', \'-o\', \'compat=1.1\',
u\'/rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/6a2ce11a-deec-41e0-a726-9de6ba6d4ddd/6d738c08-0f8c-4a10-95cd-eeaa2d638db5\']
failed with rc=1 out=\'\' err=bytearray(b\'qemu-img: error while
reading sector 24117243: No such file or directory\\\\n\')",)',
'taskState': 'finished', 'taskResult': 'cleanSuccess', 'taskID':
'9db209be-8e33-4c35-be8a-a58b4819812a'},
'bd494f24-ca73-4e89-8ad0-629ad32bb2c1': {'code': 0, 'message': '1 jobs
completed successfully', 'taskState': 'finished', 'taskResult':
'success', 'taskID': 'bd494f24-ca73-4e89-8ad0-629ad32bb2c1'}}}
from=::ffff:172.30.50.4,33302,
task_id=8a8c6402-4e1d-46b8-a8fd-454fde7151d7 (api:54)
The failing task was:
\'9db209be-8e33-4c35-be8a-a58b4819812a\': {
\'code\': 261,
\'message\': \'low level Image copy failed: ("Command
[\'/usr/bin/qemu-img\', \'convert\', \'-p\', \'-t\', \'none\', \'-T\',
\'none\', \'-f\', \'raw\',
u\'/rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/c651575f-75a0-492e-959e-8cfee6b6a7b5/9b5601fe-9627-4a8a-8a98-4959f68fb137\',
\'-O\', \'qcow2\', \'-o\', \'compat=1.1\',
u\'/rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/6a2ce11a-deec-41e0-a726-9de6ba6d4ddd/6d738c08-0f8c-4a10-95cd-eeaa2d638db5\']
failed with rc=1 out=\'\' err=bytearray(b\'qemu-img: error while
reading sector 24117243: No such file or directory\\\\n\')",)\',
\'taskState\': \'finished\',
\'taskResult\': \'cleanSuccess\',
\'taskID\': \'9db209be-8e33-4c35-be8a-a58b4819812a\'
}
You may grep for this task id in all logs and share the matching logs.
Note that you have binary data in your logs, so you need to use "grep -a".
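As a concrete illustration of that suggestion (the log paths are the standard vdsm and engine locations, not taken from the original message):

# On the host that ran the copy: every line mentioning the failing task
grep -a '9db209be-8e33-4c35-be8a-a58b4819812a' /var/log/vdsm/vdsm.log*

# On the engine: the same id in the engine log
grep -a '9db209be-8e33-4c35-be8a-a58b4819812a' /var/log/ovirt-engine/engine.log*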
That's the only theory I can come up with.
>
> ________________________________
> From: Kevin Wolf <kwolf(a)redhat.com>
> Sent: Tuesday, February 18, 2020 3:01 AM
> To: Nir Soffer <nsoffer(a)redhat.com>
> Cc: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>; users <users(a)ovirt.org>; Krutika Dhananjay <kdhananj(a)redhat.com>
> Subject: Re: [ovirt-users] What is this error message from?
>
> On 17.02.2020 at 16:16, Nir Soffer wrote:
> > On Mon, Feb 17, 2020, 16:53 <jeremy_tourville(a)hotmail.com> wrote:
> >
> > > I have seen this error message repeatedly when reviewing events.
> > >
> > > VDSM vmh.cyber-range.lan command HSMGetAllTasksStatusesVDS failed: low
> > > level Image copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p',
> > > '-t', 'none', '-T', 'none', '-f', 'raw',
> > > u'/rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/c651575f-75a0-492e-959e-8cfee6b6a7b5/9b5601fe-9627-4a8a-8a98-4959f68fb137',
> > > '-O', 'qcow2', '-o', 'compat=1.1',
> > > u'/rhev/data-center/mnt/glusterSD/storage.cyber-range.lan:_vmstore/dd69364b-2c02-4165-bc4b-2f2a3b7fc10d/images/6a2ce11a-deec-41e0-a726-9de6ba6d4ddd/6d738c08-0f8c-4a10-95cd-eeaa2d638db5']
> > > failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
> > > sector 24117243: No such file or directory\\n')",)
> > >
> >
> > Looks like copying image failed with ENOENT while reading
> > offset 12348028416 (11.49 GiB).
> >
> > I have never seen such a failure; typically, after opening a file, a read will
> > never fail with such an error, but with gluster this may be possible.
> >
> > Please share the vdsm log showing this error; it may add useful info.
> >
> > Also glusterfs client logs from
> > /var/log/glusterfs*/*storage.cyber-range.lan*.log
> >
> > Kevin, Krutika, do you have an idea about this error?
>
> This is a weird one. Not only should the read not be looking up any
> filename, it's also not at offset 0, but suddenly somewhere in
> the middle of the image file.
>
> I think it's pretty safe to say that this error doesn't come from QEMU,
> but from the kernel. Did you (or some software) change anything about
> the volume in the background while the convert operation was running?
>
> Kevin
>
I wrote an article on using Ansible to backup oVirt VMs
by Jayme
I've been part of this mailing list for a while now and have received a lot
of great advice and help on various subjects. I read the list daily and one
thing I've noticed is that many users are curious about backup options for
oVirt (myself included). I wanted to share with the community a solution
I've come up with to easily backup multiple running oVirt VMs to OVA format
using some basic Ansible playbooks. I've put together a blog post detailing
the process which also includes links to a Github repo containing the
playbooks here:
https://blog.silverorange.com/backing-up-ovirt-vms-with-ansible-4c2fca8b3b43
Any feedback, suggestions or questions are welcome. I hope this information
is helpful.
Thanks!
- Jayme
Re: Move Self Hosted Engine to Standalone
by Jeremy Tourville
OK, that worked perfectly, thanks! I was able to run the restore and then run engine-setup. How do I remove the old self-hosted engine properly?
________________________________
From: Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk>
Sent: Tuesday, February 18, 2020 3:57 AM
To: Jeremy Tourville <jeremy_tourville(a)hotmail.com>; Robert Webb <rwebb(a)ropeguru.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: Move Self Hosted Engine to Standalone
Hi Jeremy,
I think you must have run engine-setup before the restore; the restore is designed to restore onto a clean install, or onto the previous install with the same credentials. If you delete the PostgreSQL config you can then install with your postgres credentials.
e.g.
systemctl stop rh-postgresql10-postgresql.service
rm -rf /var/opt/rh/rh-postgresql10/lib/pgsql/data/*
regards,
Paul S.
________________________________
From: Jeremy Tourville <jeremy_tourville(a)hotmail.com>
Sent: 18 February 2020 02:26
To: Robert Webb <rwebb(a)ropeguru.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: [ovirt-users] Re: Move Self Hosted Engine to Standalone
I did get a little further.
First I ran engine-cleanup on my new engine where I will be running the restore operation. (I had previously run the engine-setup script on this machine)
Then I ran this-
[root@engine ~]# engine-backup --mode=restore --file=ovirt-engine-backup-20200217125040.backup --log=ovirt-engine-backup-20200217125040.log --provision-db --provision-dwh-db --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: ovirt-engine-backup-20200217125040.backup
log file: ovirt-engine-backup-20200217125040.log
Preparing to restore:
- Unpacking file 'ovirt-engine-backup-20200217125040.backup'
Restoring:
- Files
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
FATAL: Existing database 'engine' or user 'engine' found and temporary ones created - Please clean up everything and try again
Time to research that FATAL error message further. Isn't that the purpose of engine-cleanup though? Why the conflict?
________________________________
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 17, 2020 1:42 PM
To: Jeremy Tourville <jeremy_tourville(a)hotmail.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: [ovirt-users] Move Self Hosted Engine to Standalone
Try looking at the RHEV info here and go to section 6.2.2 and see if that helps.
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/...
________________________________________
From: Jeremy Tourville <jeremy_tourville(a)hotmail.com>
Sent: Monday, February 17, 2020 2:26 PM
To: Robert Webb; users(a)ovirt.org
Subject: Re: [ovirt-users] Move Self Hosted Engine to Standalone
OK, I was able to get the backup completed. I am a little confused on how to do the restore though. https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Resto...
Is this link even applicable? My environment is a single node, not EL based. Anyhow, here is what I have so far-
[root@engine glusterfs]# engine-backup
Start of engine-backup with mode 'backup'
scope: all
archive file: /var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup
log file: /var/log/ovirt-engine-backup/ovirt-engine-backup-20200217125040.log
Backing up:
Notifying engine
- Files
- Engine database 'engine'
- DWH database 'ovirt_engine_history'
Packing into file '/var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup'
Notifying engine
Done.
[root@engine glusterfs]#
I moved the backup file to my new engine.
How do I perform the restore?
The directions say:
# engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db --restore-permissions
What is the file name and log_file name? Do I need to do something to unpack my backup file?
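For illustration, with the backup file produced above the restore invocation would look roughly like this; the file does not need to be unpacked first, engine-backup does that itself (the /root paths are assumptions about where the file was copied, not from the original message):

engine-backup --mode=restore \
  --file=/root/ovirt-engine-backup-20200217125040.backup \
  --log=/root/ovirt-engine-restore.log \
  --provision-db --provision-dwh-db --restore-permissions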
________________________________
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 17, 2020 9:13 AM
To: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: RE: [ovirt-users] Move Self Hosted Engine to Standalone
Can you take a backup of the original, build the new one, then do a restore?
> -----Original Message-----
> From: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>
> Sent: Monday, February 17, 2020 10:11 AM
> To: users(a)ovirt.org
> Subject: [ovirt-users] Move Self Hosted Engine to Standalone
>
> I have a single oVirt host running a self-hosted engine. I'd like to move the
> engine off the host and run it on a standalone server. I am running Software
> Version:4.3.6.6-1.el7 Can anyone tell me what the procedure is for that?
backup
by Nazan CENGİZ
Hi all,
I am trying https://github.com/vacosta94/VirtBKP.
oVirt version: 4.3.5
My config file:
[bkp]
url = https://xxx/ovirt-engine/api
user = admin@internal
password = yyy
ca_file = /opt/VirtBKP/ca.crt
bkpvm = VirtBKM
bckdir = /mnt/backup
[restore]
url = https:/xxx/ovirt-engine/api
user = admin@internal
password = yyy
ca_file = ca.crt
storage = hosted_storage(storage domain name for new vm???)
proxy = xxx(engine FQDN)
proxyport = 54323
It fails as below:
[root@virtbkp VirtBKP]# /opt/VirtBKP/backup_vm.py default.conf Bacchus
[OK] Connection to oVIrt API success https://ovirtengine2.5ghvl.local/ovirt-engine/api
[INFO] Trying to create snapshot of VM: 8a95f435-94dd-4a69-aed0-46395bcbd082
[INFO] Waiting until snapshot creation ends
[INFO] Waiting until snapshot creation ends
[OK] Snapshot created
[INFO] Trying to create a qcow2 file of disk aa564596-fd33-4734-8050-0f82130a677b
[INFO] Attach snap disk to bkpvm
Traceback (most recent call last):
File "/opt/VirtBKP/backup_vm.py", line 6, in <module>
b.main()
File "/opt/VirtBKP/backup_vm_last.py", line 242, in main
self.backup(self.vmid,self.snapid,disk_id,self.bkpvm)
File "/opt/VirtBKP/backup_vm_last.py", line 210, in backup
self.attach_disk(bkpvm,disk_id,snapid)
File "/opt/VirtBKP/backup_vm_last.py", line 123, in attach_disk
resp_attach = requests.post(urlattach, data=xmlattach, headers=headers, verify=False, auth=(self.user,self.password))
File "/usr/lib/python2.7/site-packages/requests/api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 498, in request
prep = self.prepare_request(req)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 441, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/usr/lib/python2.7/site-packages/requests/models.py", line 309, in prepare
self.prepare_url(url, params)
File "/usr/lib/python2.7/site-packages/requests/models.py", line 377, in prepare_url
raise InvalidURL(*e.args)
requests.exceptions.InvalidURL: Failed to parse: https://ovirtengine2.5ghvl.local/ovirt-engine/api/v3/vms/13d45c7f-7812-4f...
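The traceback shows the script building an /ovirt-engine/api/v3/vms/... URL that requests then refuses to parse. A quick, hedged way to check what the engine serves on that path (the FQDN and CA file are taken from the config above; curl will prompt for the admin password):

# What does the engine return for the v3 path the script constructs?
curl --cacert /opt/VirtBKP/ca.crt -u 'admin@internal' \
  'https://ovirtengine2.5ghvl.local/ovirt-engine/api/v3'

# Current API root, for comparison
curl --cacert /opt/VirtBKP/ca.crt -u 'admin@internal' \
  'https://ovirtengine2.5ghvl.local/ovirt-engine/api'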
Nazan CENGİZ
R&D Engineer
Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE
+90 312 219 57 87 / +90 312 219 57 97
Re: Move Self Hosted Engine to Standalone
by Staniforth, Paul
Hi Jeremy,
I think you must have run engine-setup before the restore; the restore is designed to restore onto a clean install, or onto the previous install with the same credentials. If you delete the PostgreSQL config you can then install with your postgres credentials.
e.g.
systemctl stop rh-postgresql10-postgresql.service
rm -rf /var/opt/rh/rh-postgresql10/lib/pgsql/data/*
regards,
Paul S.
________________________________
From: Jeremy Tourville <jeremy_tourville(a)hotmail.com>
Sent: 18 February 2020 02:26
To: Robert Webb <rwebb(a)ropeguru.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: [ovirt-users] Re: Move Self Hosted Engine to Standalone
I did get a little further.
First I ran engine-cleanup on my new engine where I will be running the restore operation. (I had previously run the engine-setup script on this machine)
Then I ran this-
[root@engine ~]# engine-backup --mode=restore --file=ovirt-engine-backup-20200217125040.backup --log=ovirt-engine-backup-20200217125040.log --provision-db --provision-dwh-db --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: ovirt-engine-backup-20200217125040.backup
log file: ovirt-engine-backup-20200217125040.log
Preparing to restore:
- Unpacking file 'ovirt-engine-backup-20200217125040.backup'
Restoring:
- Files
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
FATAL: Existing database 'engine' or user 'engine' found and temporary ones created - Please clean up everything and try again
Time to research that FATAL error message further. Isn't that the purpose of engine-cleanup though? Why the conflict?
________________________________
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 17, 2020 1:42 PM
To: Jeremy Tourville <jeremy_tourville(a)hotmail.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: [ovirt-users] Move Self Hosted Engine to Standalone
Try looking at the RHEV info here and go to section 6.2.2 and see if that helps.
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/...
________________________________________
From: Jeremy Tourville <jeremy_tourville(a)hotmail.com>
Sent: Monday, February 17, 2020 2:26 PM
To: Robert Webb; users(a)ovirt.org
Subject: Re: [ovirt-users] Move Self Hosted Engine to Standalone
OK, I was able to get the backup completed. I am a little confused on how to do the restore though. https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Resto...
Is this link even applicable? My environment is a single node, not EL based. Anyhow, here is what I have so far-
[root@engine glusterfs]# engine-backup
Start of engine-backup with mode 'backup'
scope: all
archive file: /var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup
log file: /var/log/ovirt-engine-backup/ovirt-engine-backup-20200217125040.log
Backing up:
Notifying engine
- Files
- Engine database 'engine'
- DWH database 'ovirt_engine_history'
Packing into file '/var/lib/ovirt-engine-backup/ovirt-engine-backup-20200217125040.backup'
Notifying engine
Done.
[root@engine glusterfs]#
I moved the backup file to my new engine.
How do I perform the restore?
The directions say:
# engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db --restore-permissions
What is the file name and log_file name? Do I need to do something to unpack my backup file?
________________________________
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 17, 2020 9:13 AM
To: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>; users(a)ovirt.org <users(a)ovirt.org>
Subject: RE: [ovirt-users] Move Self Hosted Engine to Standalone
Can you take a backup of the original, build the new one, then do a restore?
> -----Original Message-----
> From: jeremy_tourville(a)hotmail.com <jeremy_tourville(a)hotmail.com>
> Sent: Monday, February 17, 2020 10:11 AM
> To: users(a)ovirt.org
> Subject: [ovirt-users] Move Self Hosted Engine to Standalone
>
> I have a single oVirt host running a self-hosted engine. I'd like to move the
> engine off the host and run it on a standalone server. I am running Software
> Version:4.3.6.6-1.el7 Can anyone tell me what the procedure is for that?
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one at a time) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". This has now happened twice in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with this error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
specific error part in libvirt vm log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from
Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
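A hedged way to line such a pause up with the underlying storage event on the host (standard vdsm and system log locations; the device name comes from the libvirt error above):

# All abnormal-stop events vdsm recorded, with timestamps
grep -a 'abnormal vm stop' /var/log/vdsm/vdsm.log*

# Kernel/multipath messages around the same time often show the SAN-side error
grep -iE 'i/o error|multipath|dm-[0-9]' /var/log/messages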
Has anyone experienced the same problem, or does anyone know a way to solve this?
Kind regards,
Jasper
foreman integration
by eevans@digitaldatatechs.com
I have oVirt 4.3 and Foreman 1.9.3, both with self-signed certificates. Foreman is a VM on the oVirt cluster. I added Foreman as an external provider in oVirt, but when I try to add oVirt as a compute resource in Foreman I get an SSL error. I found several fixes that prevented the error, but the data center and quota fields would not populate.
The exact error is:
Unable to save
SSL_connect returned=1 errno=0 state=error: certificate verify failed
Has anyone else had this problem?
Any help would be greatly appreciated.
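One thing worth trying for this particular verify error is giving Foreman the engine's CA certificate instead of relying on the host trust store. A hedged sketch (the download URL is the engine's standard PKI resource endpoint; where the file is then supplied to Foreman, e.g. the certification authorities field of the compute resource form, depends on the Foreman version):

# Fetch the oVirt engine CA certificate (replace engine.example.com with the engine FQDN)
curl -k -o ovirt-engine-ca.pem \
  'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'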