Q: Master storage domain on failed hardware
by Andrei Verovski
Hi,
Is it possible to activate the data center if the master storage domain is no longer available?
For example, when the server hosting the master storage domain dies.
Thanks and best regards
Andrei
export very slow
by Enrico Becchetti
Dear all,
I have an oVirt cluster with 3 Dell R7525 nodes and about 70 virtual
machines. I have been using ovirtbackup (Python) for a long time to save
the VMs.
Unfortunately it has always been a very slow process, and with time and
the ever-increasing size of the VMs it has become unusable.
The oVirt cluster has two storage domains: "DATA", a Fibre Channel
domain (8 Gb/s) with 18 TB, and an NFS export domain served by a
dual-processor HPE ProLiant with 4x1 Gb/s NICs.
I'll give you an example. I have a VM with four virtual disks totalling
1.8 TB; these are the steps performed to back up this VM.
As you will see, it takes many hours to get the clone from the snapshot
and then many more hours to copy the clone to the NFS storage.
How can I reduce the time needed for the backup?
Thank you!
Best Regards
Enrico
-----------------------------------------------------------------
Feb 2, 2025, 6:26:47 AM
Snapshot 'Snapshot for backup script' creation for VM 'GRAYLOG-9' was
initiated by admin@internal-authz.
45
admin@internal-authz
infn-vm10.management
GRAYLOG-9
Blank
INFNPG
DELL
70bbb81b-ed44-4700-839a-96c1df8daeea
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 6:27:22 AM
Snapshot 'Snapshot for backup script' creation for VM 'GRAYLOG-9' has
been completed.
68
admin@internal-authz
infn-vm10.management
GRAYLOG-9
Blank
INFNPG
DELL
70bbb81b-ed44-4700-839a-96c1df8daeea
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 6:27:37 AM
VM GRAYLOG-9_BCK_020244 creation was initiated by admin@internal-authz.
37
admin@internal-authz
GRAYLOG-9_BCK_020244
Blank
INFNPG
DATA_FC_P2050
DELL
edde40d9-2a26-4d88-8a02-bd928c1ded4a
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 1:59:30 PM
VM GRAYLOG-9_BCK_020244 creation has been completed.
53
admin@internal-authz
GRAYLOG-9_BCK_020244
Blank
INFNPG
DATA_FC_P2050
DELL
edde40d9-2a26-4d88-8a02-bd928c1ded4a
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 1:59:34 PM
Snapshot 'Snapshot for backup script' deletion for VM 'GRAYLOG-9' was
initiated by admin@internal-authz.
342
admin@internal-authz
infn-vm10.management
GRAYLOG-9
Blank
INFNPG
DELL
fcfc6291-a55f-431a-8b4c-87cb8e3615ff
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 2:04:00 PM
Snapshot 'Snapshot for backup script' deletion for VM 'GRAYLOG-9' has
been completed.
356
admin@internal-authz
infn-vm10.management
GRAYLOG-9
Blank
INFNPG
DELL
fcfc6291-a55f-431a-8b4c-87cb8e3615ff
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 2:04:08 PM
Starting export Vm GRAYLOG-9_BCK_020244 to VMS_EXPORT
1162
admin@internal-authz
GRAYLOG-9_BCK_020244
INFNPG
VMS_EXPORT
DELL
b964e3b3-df13-4b82-8128-00984eea380b
oVirt
-----------------------------------------------------------------
Feb 4, 2025, 5:27:50 AM
Vm GRAYLOG-9_BCK_020244 was exported successfully to VMS_EXPORT
1150
admin@internal-authz
GRAYLOG-9_BCK_020244
INFNPG
VMS_EXPORT
DELL
b964e3b3-df13-4b82-8128-00984eea380b
oVirt
-----------------------------------------------------------------
Feb 4, 2025, 5:27:53 AM
VM GRAYLOG-9_BCK_020244 configuration was updated by admin@internal-authz.
35
admin@internal-authz
GRAYLOG-9_BCK_020244
Blank
INFNPG
DELL
44480c68-10ad-4660-bd9b-297043210503
oVirt
-----------------------------------------------------------------
Feb 4, 2025, 5:28:00 AM
VM GRAYLOG-9_BCK_020244 was successfully removed by admin@internal-authz.
113
admin@internal-authz
GRAYLOG-9_BCK_020244
Blank
INFNPG
DELL
d141ad60-6cf8-4e77-b438-26b5481f4256
oVirt
-----------------------------------------------------------------
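The timestamps in the log excerpt above make the bottleneck easy to quantify. A minimal sketch (the timestamps are copied verbatim from the log; the ~110 MB/s effective single-stream 1 Gb/s throughput is an assumption, not a measured value):

```python
from datetime import datetime

# Timestamp format as it appears in the engine event log above.
FMT = "%b %d, %Y, %I:%M:%S %p"

# Start/end times taken verbatim from the log excerpt.
steps = {
    "snapshot": ("Feb 2, 2025, 6:26:47 AM", "Feb 2, 2025, 6:27:22 AM"),
    "clone":    ("Feb 2, 2025, 6:27:37 AM", "Feb 2, 2025, 1:59:30 PM"),
    "export":   ("Feb 2, 2025, 2:04:08 PM", "Feb 4, 2025, 5:27:50 AM"),
}

for name, (start, end) in steps.items():
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    print(f"{name}: {delta}")
# snapshot: 0:00:35
# clone: 7:31:53
# export: 1 day, 15:23:42

# Rough wire-speed floor: 1.8 TB over a single NFS stream on one
# 1 Gb/s link (~110 MB/s effective, an assumed figure) is already
# several hours, but far less than the ~39 h observed, which suggests
# the copy path itself (qemu-img, sparse handling, NFS mount options)
# dominates, not the network alone.
size_bytes = 1.8e12
rate_bytes_per_s = 110e6
print(f"wire-speed floor: {size_bytes / rate_bytes_per_s / 3600:.1f} h")
# wire-speed floor: 4.5 h
```

So the export alone runs roughly 8-9x slower than a single saturated 1 Gb/s link would allow, which is worth investigating before buying faster hardware.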
vms shutdown during boot
by eevans@digitaldatatechs.com
I have an issue getting VMs to run. I get this error: VM is down with error. Exit message: Lost connection with qemu process.
2025-02-02 13:27:55,131-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [5ad46633] EVENT_ID: VM_DOWN_ERROR(119), VM DBServer-2-18-2024 is down with error. Exit message: Lost connection with qemu process.
I see this in the Red Hat portal, but I don't have an entitlement, or I wouldn't be here.
My setup is CentOS 9 with oVirt 4.5: a separate node controlling the Gluster server and oVirt, and three nodes managed by that fourth node. I ran this same setup in the past with no problem.
I hope one of the Red Hat folks will post the fix; it is a verified issue on the Red Hat portal.
I appreciate any help you can give me.