Active-Active stretched node cluster not working as expected
by gaurang.patel@allotgroup.com
Hi,
I have a six-node stretched HCI cluster with 2 x 3 = 6 distributed-replicated bricks and want to achieve Active-Active disaster recovery: if the three hosts in my DC power down or fail, the three nodes at the DR site should take over the running virtual machines.
During a split-brain scenario the oVirt Hosted Engine VM goes into PAUSED state and does not restart on the DR hosts even after waiting a long time; because of this, none of my production VMs restart at the DR site either.
Please guide me on achieving Active-Active disaster recovery with a six-node stretched cluster. I followed the document hosted on the official site (link below) but did not succeed; it would be a great help if anybody could guide me through this.
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/...
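If it helps, these are the quorum and split-brain checks I can run and share output from; a minimal sketch only, using the volume names (engine, data, vmstore) from the status output below:

# Show quorum-related options currently set on each volume
gluster volume get engine all | grep -i quorum
gluster volume get data all | grep -i quorum
gluster volume get vmstore all | grep -i quorum
# Check whether any files are actually flagged as split-brain
gluster volume heal engine info split-brain
gluster volume heal data info split-brain
gluster volume heal vmstore info split-brain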
My setup details:
[root@phnode06 /]# gluster volume status
Status of volume: data
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 172.16.1.1:/gluster_bricks/data/data 58744 0 Y 1746524
Brick 172.16.1.2:/gluster_bricks/data/data 58732 0 Y 2933385
Brick 172.16.1.3:/gluster_bricks/data/data 60442 0 Y 6158
Brick phnode04..local:/gluster_bricks/data/data1 51661 0 Y 5905
Brick phnode05..local:/gluster_bricks/data/data1 52177 0 Y 6178
Brick phnode06..local:/gluster_bricks/data/data1 59910 0 Y 6208
Self-heal Daemon on localhost N/A N/A Y 5878
Self-heal Daemon on phnode03..local N/A N/A Y 6133
Self-heal Daemon on gluster01..local N/A N/A Y 6072
Self-heal Daemon on phnode05..local N/A N/A Y 5751
Self-heal Daemon on gluster02..local N/A N/A Y 5746
Self-heal Daemon on phnode04..local N/A N/A Y 4734
Task Status of Volume data
------------------------------------------------------------------------------
Task : Rebalance
ID : 1c30fbfa-2707-457e-b844-ccaf2e07aa7f
Status : completed
Status of volume: engine
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 172.16.1.1:/gluster_bricks/engine/engine 53723 0 Y 1746535
Brick 172.16.1.2:/gluster_bricks/engine/engine 49937 0 Y 2933395
Brick 172.16.1.3:/gluster_bricks/engine/engine 53370 0 Y 6171
Brick phnode04..local:/gluster_bricks/engine/engine1 52024 0 Y 5916
Brick phnode05..local:/gluster_bricks/engine/engine1 51785 0 Y 6189
Brick phnode06..local:/gluster_bricks/engine/engine1 55443 0 Y 6221
Self-heal Daemon on localhost N/A N/A Y 5878
Self-heal Daemon on gluster02..local N/A N/A Y 5746
Self-heal Daemon on gluster01..local N/A N/A Y 6072
Self-heal Daemon on phnode05..local N/A N/A Y 5751
Self-heal Daemon on phnode03..local N/A N/A Y 6133
Self-heal Daemon on phnode04..local N/A N/A Y 4734
Task Status of Volume engine
------------------------------------------------------------------------------
Task : Rebalance
ID : 21304de4-b100-4860-b408-5e58103080a2
Status : completed
Status of volume: vmstore
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 172.16.1.1:/gluster_bricks/vmstore/vmstore 50244 0 Y 1746546
Brick 172.16.1.2:/gluster_bricks/vmstore/vmstore 51607 0 Y 2933405
Brick 172.16.1.3:/gluster_bricks/vmstore/vmstore 49835 0 Y 6182
Brick phnode04..local:/gluster_bricks/vmstore/vmstore1 54098 0 Y 5927
Brick phnode05..local:/gluster_bricks/vmstore/vmstore1 56565 0 Y 6200
Brick phnode06..local:/gluster_bricks/vmstore/vmstore1 58653 0 Y 1853456
Self-heal Daemon on localhost N/A N/A Y 5878
Self-heal Daemon on gluster02..local N/A N/A Y 5746
Self-heal Daemon on gluster01..local N/A N/A Y 6072
Self-heal Daemon on phnode05..local N/A N/A Y 5751
Self-heal Daemon on phnode03..local N/A N/A Y 6133
Self-heal Daemon on phnode04..local N/A N/A Y 4734
Task Status of Volume vmstore
------------------------------------------------------------------------------
Task : Rebalance
ID : 7d1849f8-433a-43b1-93bc-d674a7b0aef3
Status : completed
[root@phnode06 /]# hosted-engine --vm-status
--== Host phnode01..local (id: 1) status ==--
Host ID : 1
Host timestamp : 9113939
Score : 3400
Engine status : {"vm": "up", "health": "good", "detail": "Up"}
Hostname : phnode01..local
Local maintenance : False
stopped : False
crc32 : 00979124
conf_on_shared_storage : True
local_conf_timestamp : 9113939
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=9113939 (Sat Aug 19 14:39:41 2023)
host-id=1
score=3400
vm_conf_refresh_time=9113939 (Sat Aug 19 14:39:41 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
--== Host phnode03..local (id: 2) status ==--
Host ID : 2
Host timestamp : 405629
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : phnode03..local
Local maintenance : False
stopped : False
crc32 : 0b1c7489
conf_on_shared_storage : True
local_conf_timestamp : 405630
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=405629 (Sat Aug 19 14:39:43 2023)
host-id=2
score=3400
vm_conf_refresh_time=405630 (Sat Aug 19 14:39:43 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host phnode02..local (id: 3) status ==--
Host ID : 3
Host timestamp : 9069311
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : phnode02..local
Local maintenance : False
stopped : False
crc32 : b84baa31
conf_on_shared_storage : True
local_conf_timestamp : 9069311
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=9069311 (Sat Aug 19 14:39:41 2023)
host-id=3
score=3400
vm_conf_refresh_time=9069311 (Sat Aug 19 14:39:41 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host phnode04..local (id: 4) status ==--
Host ID : 4
Host timestamp : 1055893
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : phnode04..local
Local maintenance : False
stopped : False
crc32 : 58a9b84c
conf_on_shared_storage : True
local_conf_timestamp : 1055893
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1055893 (Sat Aug 19 14:39:38 2023)
host-id=4
score=3400
vm_conf_refresh_time=1055893 (Sat Aug 19 14:39:38 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host phnode05..local (id: 5) status ==--
Host ID : 5
Host timestamp : 404187
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : phnode05..local
Local maintenance : False
stopped : False
crc32 : 48cf9186
conf_on_shared_storage : True
local_conf_timestamp : 404187
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=404187 (Sat Aug 19 14:39:40 2023)
host-id=5
score=3400
vm_conf_refresh_time=404187 (Sat Aug 19 14:39:40 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host phnode06..local (id: 6) status ==--
Host ID : 6
Host timestamp : 1055902
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : phnode06..local
Local maintenance : False
stopped : False
crc32 : 913ff6c2
conf_on_shared_storage : True
local_conf_timestamp : 1055902
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1055902 (Sat Aug 19 14:39:48 2023)
host-id=6
score=3400
vm_conf_refresh_time=1055902 (Sat Aug 19 14:39:48 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
[root@phnode06 /]#
Issue - Bonding 802.3ad
by Anthony Bustillos
Hello team,
I get this warning when I try to configure two ports with bonding 802.3ad.
Warning: Bond is in link aggregation mode (Mode 4 ) but no partner mac has been reported for it. At least one slave has a different aggregator id.
How can I resolve this warning?
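For reference, here is where the negotiated LACP state can be checked on the host; a sketch only, assuming the bond is named bond0 (the name on your host may differ):

cat /proc/net/bonding/bond0
# In 802.3ad mode this prints a "Partner Mac Address" and an "Aggregator ID"
# per slave. An all-zero partner MAC or differing aggregator IDs usually
# means the switch side is not sending LACPDUs on both ports (LACP not
# enabled, or the two ports are not in the same port-channel).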
best Regards
Update path of hosted_storage domain
by francesco@shellrent.com
Hello everyone,
we have a self-hosted engine environment (oVirt 4.4.5) that uses a replica 2 + arbiter GlusterFS. Those servers are both GlusterFS nodes and oVirt hosts for the hosted engine.
For an upgrade we followed this guide https://access.redhat.com/documentation/it-it/red_hat_hyperconverged_infr....
The planned upgrade was to add two new servers (node3 and node4) that would replace the existing ones. We added the new servers to the oVirt cluster and the GlusterFS pool and moved the engine to those new hosts, without touching the underlying GlusterFS configuration. After step no. 18 of the guide ("Reboot the existing and replacement hosts. Wait until all hosts are available before continuing.") we tried to start the engine, but we got an error on the hosted_storage domain due to the old path (the GlusterFS mount path was "node1" and the backupvolfile was "node2").
To avoid corruption we updated the database with the correct path and mount options, in accordance with the new configuration edited in the file /etc/ovirt-hosted-engine/hosted-engine.conf (as described in the guide).
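For context, the two entries we changed in /etc/ovirt-hosted-engine/hosted-engine.conf look roughly like this (the hostnames below are illustrative, following our node naming):

# /etc/ovirt-hosted-engine/hosted-engine.conf (excerpt, illustrative values)
storage=node3.ovirt.xyz:/engine
mnt_options=backup-volfile-servers=node4.ovirt.xyz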
If we try to detach the node1 brick, everything stops working and we get storage errors. We noticed that a reference to node1 is still present in the file /rhev/data-center/mnt/glusterSD/node1.ovirt.xyz:_engine/36c75f6e-d95d-48f4-9ceb-ad2895ab2123/dom_md/metadata on both of the new hosts (node3 and node4).
I'll be more than glad to attach any log files needed to understand what's going on. Thank you to whoever takes the time to help me out :)
Regards,
Francesco
Task Configure OVN for oVirt failed to execute
by Jorge Visentini
Hi,
I am trying to reinstall this host, but I cannot because of this issue.
Any tips for me?
Host ksmmi1r01ovirt20.kosmo.cloud installation failed. Task Configure OVN
for oVirt failed to execute. Please check logs for more details:
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20230815194759-ksmmi1r01ovirt20.kosmo.cloud-63609c39-934a-4360-a822-f2f6935cc5b6.log.
"stdout" : "fatal: [ksmmi1r01ovirt20.kosmo.cloud]: FAILED! => {\"changed\":
true, \"cmd\": [\"vdsm-tool\", \"ovn-config\", \"10.250.156.20\",
\"ksmmi1r01ovirt20.kosmo.cloud\"], \"delta\": \"0:00:02.319187\", \"end\":
\"2023-08-15 19:50:05.483458\", \"msg\": \"non-zero return code\", \"rc\":
1, \"start\": \"2023-08-15 19:50:03.164271\", \"stderr\": \"Traceback (most
recent call last):\\n File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 117,
in get_network\\n return networks[net_name]\\nKeyError:
'ksmmi1r01ovirt20.kosmo.cloud'\\n\\nDuring handling of the above exception,
another exception occurred:\\n\\nTraceback (most recent call last):\\n
File \\\"/usr/bin/vdsm-tool\\\", line 195, in main\\n return
tool_command[cmd][\\\"command\\\"](*args)\\n File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 63,
in ovn_config\\n ip_address = get_ip_addr(get_network(network_caps(),
net_name))\\n File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 119,
in get_network\\n raise
NetworkNotFoundError(net_name)\\nvdsm.tool.ovn_config.NetworkNotFoundError:
ksmmi1r01ovirt20.kosmo.cloud\", \"stderr_lines\": [\"Traceback (most recent
call last):\", \" File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 117,
in get_network\", \" return networks[net_name]\", \"KeyError:
'ksmmi1r01ovirt20.kosmo.cloud'\", \"\", \"During handling of the above
exception, another exception occurred:\", \"\", \"Traceback (most recent
call last):\", \" File \\\"/usr/bin/vdsm-tool\\\", line 195, in main\", \"
return tool_command[cmd][\\\"command\\\"](*args)\", \" File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 63,
in ovn_config\", \" ip_address = get_ip_addr(get_network(network_caps(),
net_name))\", \" File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 119,
in get_network\", \" raise NetworkNotFoundError(net_name)\",
\"vdsm.tool.ovn_config.NetworkNotFoundError:
ksmmi1r01ovirt20.kosmo.cloud\"], \"stdout\": \"\", \"stdout_lines\": []}",
[root@ksmmi1r01ovirt20 ~]# vdsm-tool ovn-config 10.250.156.20 ksmmi1r01ovirt20.kosmo.cloud
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 117, in get_network
    return networks[net_name]
KeyError: 'ksmmi1r01ovirt20.kosmo.cloud'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 195, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 63, in ovn_config
    ip_address = get_ip_addr(get_network(network_caps(), net_name))
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 119, in get_network
    raise NetworkNotFoundError(net_name)
vdsm.tool.ovn_config.NetworkNotFoundError: ksmmi1r01ovirt20.kosmo.cloud
[root@ksmmi1r01ovirt20 ~]# vdsm-tool list-nets
ovirtmgmt (default route)
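In case it helps narrow things down: judging from the traceback, ovn-config looks its second argument up among the networks that vdsm-tool list-nets reports, and the host FQDN is not one of them, hence the NetworkNotFoundError. A sketch of what can be tried by hand (assuming the tunnel endpoint should sit on ovirtmgmt, the only network this host reports):

# Pass the vdsm network name instead of the host FQDN so get_network() can resolve it
vdsm-tool ovn-config 10.250.156.20 ovirtmgmt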
--
Att,
Jorge Visentini
+55 55 98432-9868
oVirt 4.5.5 snapshot - Migration failed due to an Error: Fatal error during migration
by Jorge Visentini
Any tips about this error?
2023-08-10 18:24:57,544-03 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] Lock Acquired to object
'EngineLock:{exclusiveLocks='[29032e83-cfaf-4d30-bcc2-df72c5358552=VM]',
sharedLocks=''}'
2023-08-10 18:24:57,578-03 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] Running command:
MigrateVmToServerCommand internal: false. Entities affected : ID:
29032e83-cfaf-4d30-bcc2-df72c5358552 Type: VMAction group MIGRATE_VM with
role type USER
2023-08-10 18:24:57,628-03 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] START, MigrateVDSCommand(
MigrateVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552', srcHost='ksmmi1r01ovirt18',
dstVdsId='73c38b36-36da-4ffa-b17a-492fd7b093ae',
dstHost='ksmmi1r01ovirt19:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='false', consoleAddress='null',
maxBandwidth='3125', parallel='null', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]', dstQemu='10.250.156.19', cpusets='null',
numaNodesets='null'}), log id: 5bbc21d6
2023-08-10 18:24:57,628-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] START,
MigrateBrokerVDSCommand(HostName = ksmmi1r01ovirt18,
MigrateVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552', srcHost='ksmmi1r01ovirt18',
dstVdsId='73c38b36-36da-4ffa-b17a-492fd7b093ae',
dstHost='ksmmi1r01ovirt19:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='false', consoleAddress='null',
maxBandwidth='3125', parallel='null', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]', dstQemu='10.250.156.19', cpusets='null',
numaNodesets='null'}), log id: 14d92c9
2023-08-10 18:24:57,631-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] FINISH,
MigrateBrokerVDSCommand, return: , log id: 14d92c9
2023-08-10 18:24:57,634-03 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] FINISH, MigrateVDSCommand, return:
MigratingFrom, log id: 5bbc21d6
2023-08-10 18:24:57,639-03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] EVENT_ID:
VM_MIGRATION_START(62), Migration started (VM: ROUTER, Source:
ksmmi1r01ovirt18, Destination: ksmmi1r01ovirt19, User: admin@ovirt
@internalkeycloak-authz).
2023-08-10 18:24:57,641-03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'(ROUTER) moved from 'MigratingFrom'
--> 'Up'
2023-08-10 18:24:57,641-03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] Adding VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'(ROUTER) to re-run list
2023-08-10 18:24:57,643-03 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(ForkJoinPool-1-worker-13) [] Rerun VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'. Called from VDS 'ksmmi1r01ovirt18'
2023-08-10 18:24:57,679-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2194) [] START,
MigrateStatusVDSCommand(HostName = ksmmi1r01ovirt18,
MigrateStatusVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552'}), log id: 445b81e0
2023-08-10 18:24:57,681-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2194) [] FINISH,
MigrateStatusVDSCommand, return:
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusReturn@12cb7b1b, log
id: 445b81e0
2023-08-10 18:24:57,695-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-2194) [] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal
error during migration (VM: ROUTER, Source: ksmmi1r01ovirt18, Destination:
ksmmi1r01ovirt19).
2023-08-10 18:24:57,698-03 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(EE-ManagedThreadFactory-engine-Thread-2194) [] Lock freed to object
'EngineLock:{exclusiveLocks='[29032e83-cfaf-4d30-bcc2-df72c5358552=VM]',
sharedLocks=''}'
ovirt-release-master-4.5.5-0.0.master.20230612064154.git0c65b0e.el9.noarch
ovirt-imageio-common-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
python3-ovirt-engine-sdk4-4.6.3-0.1.master.20230324091708.el9.x86_64
ovirt-openvswitch-ovn-2.17-1.el9.noarch
ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
ovirt-openvswitch-ovn-host-2.17-1.el9.noarch
ovirt-imageio-client-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
ovirt-imageio-daemon-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
python3-ovirt-setup-lib-1.3.4-0.0.master.20220413133253.gitd32d35f.el9.noarch
python3.11-ovirt-engine-sdk4-4.6.3-0.1.master.20230324091708.el9.x86_64
python3.11-ovirt-imageio-common-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
python3.11-ovirt-imageio-client-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
ovirt-ansible-collection-3.1.3-0.1.master.20230420113738.el9.noarch
ovirt-vmconsole-1.0.9-2.el9.noarch
ovirt-vmconsole-host-1.0.9-2.el9.noarch
ovirt-openvswitch-2.17-1.el9.noarch
ovirt-python-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-ipsec-2.17-1.el9.noarch
python3-ovirt-node-ng-nodectl-4.4.3-0.20220615.0.el9.noarch
ovirt-node-ng-nodectl-4.4.3-0.20220615.0.el9.noarch
ovirt-host-dependencies-4.5.0-3.1.20220510094000.git2f2d022.el9.x86_64
ovirt-hosted-engine-ha-2.5.1-0.0.master.20220707064804.20220707064802.git14b1139.el9.noarch
ovirt-provider-ovn-driver-1.2.37-0.20220610132522.git62111d0.el9.noarch
ovirt-hosted-engine-setup-2.7.1-0.0.master.20230414113600.git340e19b.el9.noarch
ovirt-host-4.5.0-3.1.20220510094000.git2f2d022.el9.x86_64
ovirt-release-host-node-4.5.5-0.0.master.20230612064150.git0c65b0e.el9.x86_64
ovirt-node-ng-image-update-placeholder-4.5.5-0.0.master.20230612064150.git0c65b0e.el9.noarch
qemu-kvm-tools-8.0.0-6.el9.x86_64
qemu-kvm-docs-8.0.0-6.el9.x86_64
qemu-kvm-common-8.0.0-6.el9.x86_64
qemu-kvm-device-display-virtio-gpu-8.0.0-6.el9.x86_64
qemu-kvm-ui-opengl-8.0.0-6.el9.x86_64
qemu-kvm-ui-egl-headless-8.0.0-6.el9.x86_64
qemu-kvm-device-display-virtio-gpu-pci-8.0.0-6.el9.x86_64
qemu-kvm-block-blkio-8.0.0-6.el9.x86_64
qemu-kvm-device-display-virtio-vga-8.0.0-6.el9.x86_64
qemu-kvm-device-usb-host-8.0.0-6.el9.x86_64
qemu-kvm-device-usb-redirect-8.0.0-6.el9.x86_64
qemu-kvm-audio-pa-8.0.0-6.el9.x86_64
qemu-kvm-block-rbd-8.0.0-6.el9.x86_64
qemu-kvm-core-8.0.0-6.el9.x86_64
qemu-kvm-8.0.0-6.el9.x86_64
libvirt-daemon-kvm-9.3.0-2.el9.x86_64
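In case it points somewhere useful, the engine log above only carries the generic "Fatal error during migration"; the underlying libvirt/QEMU reason is normally logged on the two hosts. A sketch of where I plan to look next (standard oVirt node log locations, time window adjusted to the event above):

# On both ksmmi1r01ovirt18 (source) and ksmmi1r01ovirt19 (destination)
grep -iE 'migrat|error' /var/log/vdsm/vdsm.log | tail -n 100
journalctl -u libvirtd -u virtqemud --since '2023-08-10 18:20' | grep -i -B2 -A2 error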
Cheers!
--
Att,
Jorge Visentini
+55 55 98432-9868
CPU support Xeon E5345
by Mikhail Po
Is it possible to install oVirt 4.3/4.4 on a ProLiant BL460c G1 with an Intel Xeon E5345 processor @ 2.33GHz?
In case of a setup failure, the error is: [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "message": "The host was inoperable, deployment errors: code 156: Host host1.test.com disabled because the host CPU type in this case is not supported by the cluster compatibility version or is not supported at all, code 9000: Failed to check the power management configuration for the host host1.test.com. Correct accordingly and re-deploy."}
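If useful, here is how the CPU model the host exposes can be checked; a sketch only, assuming lscpu and libvirt are available on the node (the cluster compatibility check compares this against its list of supported CPU types):

lscpu | grep -E 'Model name|Flags'
# libvirt's view of the host CPU model
virsh -r capabilities | grep -A 5 '<cpu>'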
Trouble restoring + upgrading to ovirt 4.5 system after host crashed
by David Johnson
Good afternoon all,
We had a confluence of events hit all at once and need help desperately.
Our oVirt engine system recently crashed and is unrecoverable. Due to a power maintenance event at the data center, a third of our VMs are offline.
I have recent backups of the engine created with engine-backup.
I installed a clean CentOS 9 and followed the directions to install ovirt-engine.
After I restore the backup, engine-setup fails on the Keycloak
*From clean system:*
*Install:* *(Observe the failed scriptlet during install, but the rpm install still succeeds)*
[root@ovirt2 administrator]# dnf install -y ovirt-engine
Last metadata expiration check: 2:08:15 ago on Tue 08 Aug 2023 10:11:31 AM
CDT.
Dependencies resolved.
=============================================================================================================================================================
 Package                      Architecture      Version              Repository              Size
=============================================================================================================================================================
Installing:
 ovirt-engine                 noarch            4.5.4-1.el9          centos-ovirt45          13 M
Installing dependencies:
 SuperLU                      x86_64            5.3.0-2.el9          epel                    182 k
(Snip ...)
*  Running scriptlet: ovirt-vmconsole-1.0.9-1.el9.noarch                60/425
Failed to resolve allow statement at /var/lib/selinux/targeted/tmp/modules/400/ovirt_vmconsole/cil:539
Failed to resolve AST
/usr/sbin/semodule:  Failed!*
(Snip ...)
xmlrpc-common-3.1.3-1.1.el9.noarch
xorg-x11-fonts-ISO8859-1-100dpi-7.5-33.el9.noarch
zziplib-0.13.71-9.el9.x86_64
Complete!
*Engine-restore (no visible issues):*
[root@ovirt2 administrator]# engine-backup --mode=restore
--log=restore1.log --file=Downloads/engine-2023-08-06.22.00.02.bak
--provision-all-databases --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: Downloads/engine-2023-08-06.22.00.02.bak
log file: restore1.log
Preparing to restore:
- Unpacking file 'Downloads/engine-2023-08-06.22.00.02.bak'
Restoring:
- Files
------------------------------------------------------------------------------
Please note:
Operating system is different from the one used during backup.
Current operating system: centos9
Operating system at backup: centos8
Apache httpd configuration will not be restored.
You will be asked about it on the next engine-setup run.
------------------------------------------------------------------------------
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
- user 'ovirt_engine_history', database 'ovirt_engine_history'
- user 'ovirt_engine_history_grafana' on database 'ovirt_engine_history'
Restoring:
- Engine database 'engine'
- Cleaning up temporary tables in engine database 'engine'
- Updating DbJustRestored VdcOption in engine database
- Resetting DwhCurrentlyRunning in dwh_history_timekeeping in engine
database
- Resetting HA VM status
------------------------------------------------------------------------------
Please note:
The engine database was backed up at 2023-08-06 22:00:19.000000000 -0500 .
Objects that were added, removed or changed after this date, such as virtual
machines, disks, etc., are missing in the engine, and will probably require
recovery or recreation.
------------------------------------------------------------------------------
- DWH database 'ovirt_engine_history'
- Grafana database '/var/lib/grafana/grafana.db'
You should now run engine-setup.
Done.
[root@ovirt2 administrator]#
*Engine-setup :*
[root@ovirt2 administrator]# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files:
/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf,
/etc/ovirt-engine-setup.conf.d/10-packaging.conf,
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
Log file:
/var/log/ovirt-engine/setup/ovirt-engine-setup-20230808124501-joveku.log
Version: otopi-1.10.3 (otopi-1.10.3-1.el9)
[ INFO ] The engine DB has been restored from a backup
*[ ERROR ] Failed to execute stage 'Environment setup': Cannot connect to Keycloak database 'ovirt_engine_keycloak' using existing credentials: ovirt_engine_keycloak@localhost:5432*
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20230808124501-joveku.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20230808124504-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
*[ ERROR ] Execution of setup failed*
[root@ovirt2 administrator]#
*Engine-cleanup results:*
(snip)
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-remove-20230808120445-mj4eef.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20230808120508-cleanup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of cleanup completed successfully
[root@cen-90-tmpl administrator]#
*Engine backup (restore) results:*
[root@ovirt2 administrator]# engine-backup --mode=restore
--log=restore1.log --file=Downloads/engine-2023-08-06.22.00.02.bak
--provision-all-databases --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: Downloads/engine-2023-08-06.22.00.02.bak
log file: restore1.log
Preparing to restore:
- Unpacking file 'Downloads/engine-2023-08-06.22.00.02.bak'
Restoring:
- Files
------------------------------------------------------------------------------
Please note:
Operating system is different from the one used during backup.
Current operating system: centos9
Operating system at backup: centos8
Apache httpd configuration will not be restored.
You will be asked about it on the next engine-setup run.
------------------------------------------------------------------------------
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
*FATAL: Existing database 'engine' or user 'engine' found and temporary
ones created - Please clean up everything and try again*
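One thing I am considering before retrying the restore is dropping the leftover databases and roles by hand; a rough sketch only, assuming a local PostgreSQL and that nothing else uses these databases (please correct me if this is the wrong approach):

# Drop the databases first, then the roles, then rerun engine-backup --mode=restore and engine-setup
su - postgres -c 'dropdb engine'
su - postgres -c 'dropdb ovirt_engine_history'
su - postgres -c 'dropuser engine'
su - postgres -c 'dropuser ovirt_engine_history'
su - postgres -c 'dropuser ovirt_engine_history_grafana'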
Any advice would be appreciated.
*David Johnson*