Automigration of VMs from other hypervisors
by KK CHN
Hi list,
I am in the process of migrating 150+ VMs running on RHV (RHEV-M) 4.1 to a
KVM-based OpenStack installation (Ussuri, with KVM and Glance as image storage).
What I am doing now: manually shutting down each VM through the RHV-M GUI,
exporting it to the export domain, copying each VM's image files with scp to our
OpenStack controller node, uploading them to Glance, and creating each VM manually.
Query 1.
Is there a better way to automate this migration with any utility or scripts?
Has anyone done this kind of automated migration before, and what was your
approach? Is there a better approach than manual migration, or do I have to
repeat the process manually for all 150+ virtual machines? (The guest VMs are
CentOS 7 and Red Hat Enterprise Linux 7 with LVM data partitions attached.)
Kindly share your thoughts..
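For reference, this is roughly the kind of batch pass I am considering scripting
instead (untested sketch; the export directory, file pattern, image names and
OpenStack credentials are placeholders, and it assumes the exported disk images
are already copied to a machine with qemu-img and a configured openstack CLI):

#!/usr/bin/env python3
"""Rough, untested sketch: convert exported RHV disk images and upload to Glance."""
import subprocess
from pathlib import Path

EXPORT_DIR = Path("/var/lib/exports/rhv")  # placeholder: where the exported disks were scp'd

def convert_and_upload(image_path: Path) -> None:
    qcow2_path = image_path.with_suffix(".qcow2")
    # Convert the exported disk (format autodetected by qemu-img) to qcow2.
    subprocess.run(
        ["qemu-img", "convert", "-p", "-O", "qcow2", str(image_path), str(qcow2_path)],
        check=True,
    )
    # Upload the converted image to Glance via the openstack CLI.
    subprocess.run(
        ["openstack", "image", "create",
         "--disk-format", "qcow2",
         "--container-format", "bare",
         "--file", str(qcow2_path),
         image_path.stem],  # image name = file name without extension
        check=True,
    )

for img in sorted(EXPORT_DIR.glob("*.img")):  # placeholder file pattern
    convert_and_upload(img)

Each uploaded image could then be booted with "openstack server create --image
<IMAGE> ...", though the shutdown/export step on the RHV side would still need
to be handled separately.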
Query 2.
Besides these 150+ Red Hat Enterprise Linux 7 and CentOS VMs on RHV 4.1, I
have to migrate 50+ VMs hosted on Hyper-V.
What is the method/approach for exporting from Hyper-V and importing into
OpenStack Ussuri with Glance and the KVM hypervisor? (This is the first time I
am going to use Hyper-V, and I do not have much idea about exporting from
Hyper-V and importing into KVM.)
Can the images exported from Hyper-V (VHDX disk images, from VMs with a single
disk or multiple disks, at most 3) be imported directly into KVM? Does KVM
support this, or do the VHDX disk images need to be converted to another format?
What would be the best approach for importing the Hyper-V hosted VMs (Windows
Server 2012 and Linux guests) into the KVM-based OpenStack (Ussuri, with Glance
as image storage)?
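(For the disk format question, my understanding is that qemu-img can read VHDX
directly, so a per-disk conversion pass might look like the untested sketch
below; the directory is a placeholder and each VM may have up to three VHDX files.)

#!/usr/bin/env python3
"""Untested sketch: convert Hyper-V VHDX disks to qcow2 before uploading to Glance."""
import subprocess
from pathlib import Path

VHDX_DIR = Path("/var/lib/exports/hyperv")  # placeholder: copied-over VHDX files

for vhdx in sorted(VHDX_DIR.glob("*.vhdx")):
    target = vhdx.with_suffix(".qcow2")
    # qemu-img understands the vhdx format natively, so no Hyper-V tooling is needed here.
    subprocess.run(
        ["qemu-img", "convert", "-p", "-f", "vhdx", "-O", "qcow2", str(vhdx), str(target)],
        check=True,
    )
    print(f"converted {vhdx.name} -> {target.name}")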
Thanks in advance
Kris
Ubuntu 20.04 cloud-init
by Pavel Šipoš
Hi.
We are using oVirt version 4.3.10.4-1.el7
I am trying to use the cloud-init function in oVirt to set a password and add
SSH keys for a VM on first boot.
It works perfectly on CentOS 7 and CentOS 8, but not with Ubuntu 20.04.2.
On the VM, the cloud-init package and qemu-guest-agent are installed and the
services are running; I checked that and then used Run Once to test further,
but no luck.
It looks like it's not even trying to set anything.
I am open to suggestions on how to debug this. I see no errors when I look
into the /var/log/cloud-init logs on the VM.
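In case it helps the debugging discussion, this is a small untested helper I
could run inside the Ubuntu guest (assuming Ubuntu's cloud-init writes
/run/cloud-init/status.json and result.json as on other distributions) to see
which datasource, if any, it detected:

#!/usr/bin/env python3
"""Untested helper: dump what cloud-init recorded about its last run."""
import json
from pathlib import Path

for name in ("status.json", "result.json"):
    path = Path("/run/cloud-init") / name
    if not path.exists():
        print(f"{name}: missing (cloud-init may not have run at all)")
        continue
    data = json.loads(path.read_text())
    print(f"--- {name} ---")
    # The 'v1' section normally carries the datasource name and any per-stage errors.
    print(json.dumps(data.get("v1", data), indent=2))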
Has anyone else had a similar problem?
Can anyone confirm that the cloud-init function in oVirt should work with
Ubuntu 20.04?
Packages used:
cloud-init 21.2-3-g899bfaa9-0ubuntu2~20.04.1
qemu-guest-agent 1:4.2-3ubuntu6.17
I made ovirt template myself using packer (qemu) that also uses
cloud-init for bootstrapping.
Thank you in advance!
Pavel
--
Pavel Sipos, Arnes <pavel.sipos(a)arnes.si>
ARNES, p.p. 7, SI-1001 Ljubljana, Slovenia
T: +386 1 479 88 00
W: www.arnes.si, aai.arnes.si
low iscsi storage space
by Leonardo Costa
Hello.
I'm having a problem that I can't solve: I have 20 TB of space in oVirt, but on
my Dell SCv3000 storage only 5 TB is free on the volume.
It seems to me that deleting machines is not freeing up their disks on the
iSCSI storage.
Could anyone help?
Major network issue with 4.4.7
by Andrea Chierici
Dear all,
I've been using oVirt for at least 6 years, and only lately have I run into a
weird problem that I hope someone will be able to help with.
My hardware is:
- Lenovo blade chassis for the hosts, with dual switches
- Dell EqualLogic for iSCSI storage, directly connected to the blade switches
- the two host network cards are configured in a bond, and all the VLANs are
accessed through it (MTU 9000)
- all the hosts and the oVirt engine have the firewalld service disabled
My engine is hosted on a separate VMware VM (I will evaluate the self-hosted
engine later...). I want to stress that for years this setup worked smoothly
without any significant issue (and all the minor updates completed flawlessly).
A few weeks ago I started the update from the rock-solid 4.3 to the latest
4.4.7. I began with the manager, following the docs, installing a new CentOS 8
VM and importing the backup: everything went smoothly and I was able to access
the manager without any problem, with all the machines still there :)
I then began updating the hosts, from CentOS 7 to CentOS 8 Stream, one by one.
I immediately noticed network issues with the VMs hosted on the first updated
host. Migrating VMs from one CentOS 8 host to another quite often fails, but
the main issue is this: *if I start a VM on a CentOS 8 host, it has no network
connectivity. If I migrate it to a CentOS 7 host the network starts to work,
and if I migrate the VM back to the CentOS 8 host, the network keeps working.*
I am puzzled and can't understand what's going on. Generally speaking, all the
CentOS 8 hosts (I have 6 in my cluster, and now 3 are CentOS 8 while the rest
are still CentOS 7) seem to be very unstable, meaning the VMs they host quite
often show network issues and temporary glitches.
Can someone give a hint on how to solve this weird issue?
Thanks,
Andrea
--
Andrea Chierici - INFN-CNAF
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463
SkypeID ataruz
--
live merge of snapshots failed
by g.vasilopoulos@uoc.gr
Hello
I have a situation with a VM where I cannot delete a snapshot.
The whole thing is quite strange, because I can delete the snapshot when I create and delete it from the web interface, but when I do it with a Python script through the API it fails.
The script does create snapshot -> download snapshot -> delete snapshot, and I used the examples from the oVirt Python SDK on GitHub to create it; in general it works pretty well.
But on one specific machine (so far) it cannot delete the live snapshot.
oVirt is 4.3.10 and the guest is a Windows 10 PC. The guest has 2 disks attached, each on a different FC domain: one on an SSD EMC array and the other on an HDD EMC array. Both disks are preallocated.
I cannot figure out what the problem is so far.
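For reference, a simplified, untested sketch (not my actual script, just the
general shape based on the SDK examples; URL, credentials and names are
placeholders) of the delete step, including a wait on the snapshot status
before removing:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]  # placeholder VM name
snapshots_service = vms_service.vm_service(vm.id).snapshots_service()

# Pick the snapshot created earlier by its description (placeholder).
snap = next(s for s in snapshots_service.list() if s.description == 'backup-2021-08-03')
snap_service = snapshots_service.snapshot_service(snap.id)

# Wait until the snapshot is back in the OK state before asking for the live merge;
# removing it while a download/transfer still holds it is one way this can fail.
while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
    time.sleep(10)

snap_service.remove()
connection.close()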
The related engine log:
2021-08-03 15:51:00,385+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-61) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
2021-08-03 15:51:00,385+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-61) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
2021-08-03 15:51:00,387+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-61) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:01,388+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-30) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
2021-08-03 15:51:07,491+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'MoveImageGroup' (id: '1de1b800-873f-405f-805b-f44397740909') waiting on child command id: 'd1136344-2888-4d63-8fe1-b506426bc8aa' type:'CopyImageGroupWithData' to complete
2021-08-03 15:51:11,513+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-41) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: '87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to complete
2021-08-03 15:51:12,522+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
2021-08-03 15:51:12,523+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
2021-08-03 15:51:12,527+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:13,528+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
2021-08-03 15:51:21,635+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: '87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to complete
2021-08-03 15:51:22,655+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
2021-08-03 15:51:22,661+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Merge command (jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee) has completed for images '7611ebcf-5323-45ca-b16c-9302d0bdedc6'..'17618ba1-4ab8-49eb-a991-fc3d602ced14'
2021-08-03 15:51:22,664+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:23,664+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-41) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
2021-08-03 15:51:24,672+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-6) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Executing Live Merge command step 'MERGE_STATUS'
2021-08-03 15:51:24,699+03 INFO [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Running command: MergeStatusCommand internal: true. Entities affected : ID: 96000ec9-e181-44eb-893f-e0a36e3a6775 Type: Storage
2021-08-03 15:51:24,749+03 INFO [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Successfully removed volume 17618ba1-4ab8-49eb-a991-fc3d602ced14 from the chain
2021-08-03 15:51:24,749+03 INFO [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Volume merge type 'COMMIT'
2021-08-03 15:51:25,691+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'CopyImageGroupWithData' (id: 'd1136344-2888-4d63-8fe1-b506426bc8aa') waiting on child command id: 'b40aeec9-b7cf-4ee5-9683-54d98f4307d5' type:'CopyImageGroupVolumesData' to complete
2021-08-03 15:51:26,692+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-35) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'CopyImageGroupVolumesData' (id: 'b40aeec9-b7cf-4ee5-9683-54d98f4307d5') waiting on child command id: 'f46d65cb-32d9-4269-982e-6e19331b8a27' type:'CopyData' to complete
2021-08-03 15:51:26,711+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-35) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Executing Live Merge command step 'DESTROY_IMAGE'
2021-08-03 15:51:26,726+03 INFO [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Running command: DestroyImageCommand internal: true. Entities affected : ID: 96000ec9-e181-44eb-893f-e0a36e3a6775 Type: Storage
2021-08-03 15:51:26,747+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DestroyImageVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, DestroyImageVDSCommand( DestroyImageVDSCommandParameters:{storagePoolId='5da76866-7b7d-11eb-9913-00163e1f2643', ignoreFailoverLimit='false', storageDomainId='96000ec9-e181-44eb-893f-e0a36e3a6775', imageGroupId='205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', imageId='00000000-0000-0000-0000-000000000000', imageList='[17618ba1-4ab8-49eb-a991-fc3d602ced14]', postZero='false', force='false'}), log id: 307a3fb1
2021-08-03 15:51:26,834+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DestroyImageVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, DestroyImageVDSCommand, return: , log id: 307a3fb1
2021-08-03 15:51:26,846+03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'baaa5254-261c-452a-84b4-0a8b397cdb62'
2021-08-03 15:51:26,847+03 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandMultiAsyncTasks::attachTask: Attaching task 'ae2e80a4-d224-4fc3-a84b-859042811525' to command 'baaa5254-261c-452a-84b4-0a8b397cdb62'.
2021-08-03 15:51:26,857+03 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Adding task 'ae2e80a4-d224-4fc3-a84b-859042811525' (Parent Command 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2021-08-03 15:51:26,858+03 INFO [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Successfully started task to remove orphaned volumes
2021-08-03 15:51:26,863+03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] BaseAsyncTask::startPollingTask: Starting to poll task 'ae2e80a4-d224-4fc3-a84b-859042811525'.
2021-08-03 15:51:26,863+03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] BaseAsyncTask::startPollingTask: Starting to poll task 'ae2e80a4-d224-4fc3-a84b-859042811525'.
2021-08-03 15:51:28,799+03 INFO [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on destroy image command to complete the task (taskId = ae2e80a4-d224-4fc3-a84b-859042811525)
2021-08-03 15:51:30,805+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-48) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: 'baaa5254-261c-452a-84b4-0a8b397cdb62' type:'DestroyImage' to complete
2021-08-03 15:51:31,824+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-97) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: '87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to complete
2021-08-03 15:51:32,825+03 INFO [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on destroy image command to complete the task (taskId = ae2e80a4-d224-4fc3-a84b-859042811525)
2021-08-03 15:51:32,831+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:33,594+03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'DestroyImage' completed, handling the result.
2021-08-03 15:51:33,594+03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'DestroyImage' succeeded, clearing tasks.
2021-08-03 15:51:33,594+03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'ae2e80a4-d224-4fc3-a84b-859042811525'
2021-08-03 15:51:33,595+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='5da76866-7b7d-11eb-9913-00163e1f2643', ignoreFailoverLimit='false', taskId='ae2e80a4-d224-4fc3-a84b-859042811525'}), log id: 6225ae22
2021-08-03 15:51:33,596+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, HSMClearTaskVDSCommand(HostName = ovirt2-7.vmmgmt-int.uoc.gr, HSMTaskGuidBaseVDSCommandParameters:{hostId='10599b78-5f45-48d2-bfe0-028f3dae69eb', taskId='ae2e80a4-d224-4fc3-a84b-859042811525'}), log id: 45b18a8a
2021-08-03 15:51:33,611+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, HSMClearTaskVDSCommand, return: , log id: 45b18a8a
2021-08-03 15:51:33,611+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, SPMClearTaskVDSCommand, return: , log id: 6225ae22
2021-08-03 15:51:33,614+03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] BaseAsyncTask::removeTaskFromDB: Removed task 'ae2e80a4-d224-4fc3-a84b-859042811525' from DataBase
2021-08-03 15:51:33,614+03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'baaa5254-261c-452a-84b4-0a8b397cdb62'
2021-08-03 15:51:33,839+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Merge command (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f) has completed for images '84c005da-cbec-4ace-8619-5a8e2ae5ea75'..'b43b7c33-5b53-4332-a2e0-f950debb919b'
2021-08-03 15:51:34,847+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Executing Live Merge command step 'MERGE_STATUS'
2021-08-03 15:51:34,917+03 ERROR [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Failed to live merge. Top volume b43b7c33-5b53-4332-a2e0-f950debb919b is still in qemu chain [b43b7c33-5b53-4332-a2e0-f950debb919b, 84c005da-cbec-4ace-8619-5a8e2ae5ea75]
2021-08-03 15:51:35,866+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-41) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'CopyImageGroupWithData' (id: 'd1136344-2888-4d63-8fe1-b506426bc8aa') waiting on child command id: 'b40aeec9-b7cf-4ee5-9683-54d98f4307d5' type:'CopyImageGroupVolumesData' to complete
2021-08-03 15:51:36,867+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-50) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'CopyImageGroupVolumesData' (id: 'b40aeec9-b7cf-4ee5-9683-54d98f4307d5') waiting on child command id: 'f46d65cb-32d9-4269-982e-6e19331b8a27' type:'CopyData' to complete
2021-08-03 15:51:36,873+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-50) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command id: '87bc90c7-2aa5-4a1b-b58c-54296518658a failed child command status for step 'MERGE_STATUS'
2021-08-03 15:51:36,873+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-50) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' id: '87bc90c7-2aa5-4a1b-b58c-54296518658a' child commands '[a7de75fe-e94c-4795-9310-2c8fc3d6d3fc, ec806ac6-929f-42d9-a86e-98d6a39a4718, 6443529c-d753-48f6-8a9e-af1f9f09dfb5]' executions were completed, status 'FAILED'
2021-08-03 15:51:37,982+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Merging of snapshot '1cd4985b-b0c0-40d6-bbde-54451e43bef6' images '84c005da-cbec-4ace-8619-5a8e2ae5ea75'..'b43b7c33-5b53-4332-a2e0-f950debb919b' failed. Images have been marked illegal and can no longer be previewed or reverted to. Please retry Live Merge on the snapshot to complete the operation.
2021-08-03 15:51:37,985+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand' with failure.
2021-08-03 15:51:39,014+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Executing Live Merge command step 'REDUCE_IMAGE'
2021-08-03 15:51:39,029+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [3bf9345d-fab2-490f-ba44-6aa014bbb743] No need to execute reduce image command, skipping its execution. Storage Type: 'FCP', Disk: 'anova.admin.uoc.gr_Disk2' Snapshot: 'anova.admin.uoc.gr-2021-08-03'
2021-08-03 15:51:39,034+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' id: '80dc4609-b91f-4e93-bc12-7b2083933e5a' child commands '[18653ea6-166c-41a3-b335-84525871b9e6, 74c83880-581b-4774-ae51-8c4af0c92c53, 92e92d8e-b099-47dd-ba8c-e3db907f9a62, baaa5254-261c-452a-84b4-0a8b397cdb62]' executions were completed, status 'SUCCEEDED'
2021-08-03 15:51:39,040+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: '80dc4609-b91f-4e93-bc12-7b2083933e5a' type:'RemoveSnapshotSingleDiskLive' to complete
2021-08-03 15:51:40,046+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='5da76866-7b7d-11eb-9913-00163e1f2643', ignoreFailoverLimit='false', storageDomainId='96000ec9-e181-44eb-893f-e0a36e3a6775', imageGroupId='205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', imageId='7611ebcf-5323-45ca-b16c-9302d0bdedc6'}), log id: 4c98e4db
2021-08-03 15:51:40,047+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, GetVolumeInfoVDSCommand(HostName = ovirt2-7.vmmgmt-int.uoc.gr, GetVolumeInfoVDSCommandParameters:{hostId='10599b78-5f45-48d2-bfe0-028f3dae69eb', storagePoolId='5da76866-7b7d-11eb-9913-00163e1f2643', storageDomainId='96000ec9-e181-44eb-893f-e0a36e3a6775', imageGroupId='205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', imageId='7611ebcf-5323-45ca-b16c-9302d0bdedc6'}), log id: 4aca5f83
2021-08-03 15:51:40,084+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@70097e6, log id: 4aca5f83
2021-08-03 15:51:40,084+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@70097e6, log id: 4c98e4db
2021-08-03 15:51:40,283+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Successfully merged snapshot '1cd4985b-b0c0-40d6-bbde-54451e43bef6' images '17618ba1-4ab8-49eb-a991-fc3d602ced14'..'7611ebcf-5323-45ca-b16c-9302d0bdedc6'
2021-08-03 15:51:40,287+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand' successfully.
2021-08-03 15:51:40,296+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9' child commands '[87bc90c7-2aa5-4a1b-b58c-54296518658a, 80dc4609-b91f-4e93-bc12-7b2083933e5a]' executions were completed, status 'FAILED'
2021-08-03 15:51:41,322+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-65) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
2021-08-03 15:51:41,353+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-65) [3bf9345d-fab2-490f-ba44-6aa014bbb743] EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot 'anova.admin.uoc.gr-2021-08-03' for VM 'anova.admin.uoc.gr'.
Any help would be highly appreciated
Q: Get Host Capabilities Failed after restart
by Andrei Verovski
Hi,
I have oVirt 4.4.7.6-1.el8 and one problematic node (HP ProLiant with CentOS 8 Stream).
After replacing the server rack router switch and restarting, I got this error that I can't recover from:
VDSM node14 command Get Host Capabilities failed: Message timeout which can be caused by communication issues
vdsm-network is running fine, but vdsmd can't start on node14 for whatever reason. All other nodes are running fine.
Aug 09 10:24:12 node14.mydomain.lv vdsmd_init_common.sh[4825]: vdsm: Running dummybr
Aug 09 10:24:13 node14.mydomain.lv vdsmd_init_common.sh[4825]: vdsm: Running tune_system
Aug 09 10:24:13 node14.mydomain.lv vdsmd_init_common.sh[4825]: vdsm: Running test_space
Aug 09 10:24:13 node14.mydomain.lv vdsmd_init_common.sh[4825]: vdsm: Running test_lo
Aug 09 10:24:13 node14.mydomain.lv systemd[1]: Started Virtual Desktop Server Manager.
Aug 09 10:24:16 node14.mydomain.lv sudo[7721]: pam_systemd(sudo:session): Failed to create session: Start job for unit user-0.slice failed with 'canceled'
Aug 09 10:24:16 node14.mydomain.lv sudo[7721]: pam_unix(sudo:session): session opened for user root by (uid=0)
Aug 09 10:24:16 node14.mydomain.lv sudo[7721]: pam_unix(sudo:session): session closed for user root
Aug 09 10:24:17 node14.mydomain.lv vdsm[6754]: WARN MOM not available. Error: [Errno 2] No such file or directory
Aug 09 10:24:17 node14.mydomain.lv vdsm[6754]: WARN MOM not available, KSM stats will be missing. Error:
In the web GUI -> Management I can't do anything with the host except Restart. Stop aborts with an error, and all other commands are grayed out.
The status is “Unassigned”. The host answers pings as usual.
vdsm.log (from node14) attached.
Thanks in advance for any help.
Storage Domains for a VM from Template
by Shantur Rathore
Hi all,
I have multiple storage domains in my DC. One is backed up and holds the
templates, and another one isn't backed up; I want to use the latter for VM
instances created from the template.
I cannot find an option to create a VM from a template and use a storage
domain other than the one where the template is stored.
Is it even possible?
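In case it clarifies what I'm after, this is roughly what I would expect to be
able to do with the Python SDK (untested sketch; engine URL, credentials and the
VM/cluster/template/domain names are all placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()

# Look up the template and its disks.
templates_service = system_service.templates_service()
template = templates_service.list(search='name=mytemplate')[0]
disk_attachments = templates_service.template_service(template.id).disk_attachments_service().list()

# Ask for every template disk to be cloned onto the non-backed-up domain.
vm = system_service.vms_service().add(
    types.Vm(
        name='myvm',
        cluster=types.Cluster(name='mycluster'),
        template=types.Template(id=template.id),
        disk_attachments=[
            types.DiskAttachment(
                disk=types.Disk(
                    id=attachment.disk.id,
                    format=types.DiskFormat.COW,
                    storage_domains=[types.StorageDomain(name='data-no-backup')],
                ),
            )
            for attachment in disk_attachments
        ],
    ),
    clone=True,
)
connection.close()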
Thanks,
Shantur
any configuration when clients can't directly access oVirt cluster?
by yam yam
Hello,
In my environment, clients must go through a proxy to reach the oVirt cluster
(the physical nodes), so I applied DNAT rules for the engine and VM IPs so that
clients can reach them.
There are still some issues, though: the VNC console (via the browser) doesn't
work, and I'm not sure the other features work well.
So I'm wondering whether there is any oVirt configuration, or a guide, for this
kind of setup behind a proxy.
Thanks,
ISO Upload in Paused by System Status
by louisb@ameritech.net
I'm attempting to upload an ISO that is approximately 9 GB in size. I successfully started the upload process via the oVirt Management Console / Disks. The upload started; however, it now has a status of "Paused by System". My storage type is set to NFS data.
Is something happening in the background that is contributing to the "Paused by System" status? I have more than 84 TB of space available, so I don't think it is a space issue. Is there something I need to do? For now I'm going to wait and see if it moves forward on its own.
Please provide me with any help or direction.
Thanks
adding new host failed with error "CPU does not match the Cluster CPU Type"
by jimod4@yahoo.com
I get the error below when I try to add a new KVM host to the cluster. I know that the CPU is a Haswell, an Intel E5-2640 v3. If I run "virsh capabilities" it returns <model>Haswell-noTSX-IBRS</model>. My KVM server is Red Hat 8.4, fully patched today, on a DL360 Gen9. oVirt is at 4.4.7.7-1.el8. I have tried many different CPU settings for the cluster and none of them work. This is a new oVirt install.
ERROR:
"The host CPU does not match the Cluster CPU Type and is running in a degraded mode. It is missing the following CPU flags: vmx, model_Haswell-noTSX, nx. Please update the host CPU microcode or change the Cluster CPU Type."
MICROCODE:
[root@kvm09 ~]# dmesg | grep 'microcode'
[ 0.000000] microcode: microcode updated early to revision 0x46, date = 2021-01-27
[ 1.625108] microcode: sig=0x306f2, pf=0x1, revision=0x46
[ 1.625753] microcode: Microcode Update Driver: v2.2.
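In case it's relevant, a small untested check for the flags the engine
complains about; my understanding is that if vmx is missing from /proc/cpuinfo
on the host, virtualization is most likely disabled in the BIOS/UEFI rather
than this being a cluster CPU type issue:

#!/usr/bin/env python3
"""Untested helper: check /proc/cpuinfo for the flags the engine reported missing."""
wanted = {"vmx", "nx"}
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            missing = wanted - flags
            print("missing flags:", ", ".join(sorted(missing)) or "none")
            break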