Task Configure OVN for oVirt failed to execute
by Jorge Visentini
Hi,
I am trying to reinstall this host, but I cannot because of this issue.
Any tips for me?
Host ksmmi1r01ovirt20.kosmo.cloud installation failed. Task Configure OVN
for oVirt failed to execute. Please check logs for more details:
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20230815194759-ksmmi1r01ovirt20.kosmo.cloud-63609c39-934a-4360-a822-f2f6935cc5b6.log.
"stdout" : "fatal: [ksmmi1r01ovirt20.kosmo.cloud]: FAILED! => {\"changed\":
true, \"cmd\": [\"vdsm-tool\", \"ovn-config\", \"10.250.156.20\",
\"ksmmi1r01ovirt20.kosmo.cloud\"], \"delta\": \"0:00:02.319187\", \"end\":
\"2023-08-15 19:50:05.483458\", \"msg\": \"non-zero return code\", \"rc\":
1, \"start\": \"2023-08-15 19:50:03.164271\", \"stderr\": \"Traceback (most
recent call last):\\n File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 117,
in get_network\\n return networks[net_name]\\nKeyError:
'ksmmi1r01ovirt20.kosmo.cloud'\\n\\nDuring handling of the above exception,
another exception occurred:\\n\\nTraceback (most recent call last):\\n
File \\\"/usr/bin/vdsm-tool\\\", line 195, in main\\n return
tool_command[cmd][\\\"command\\\"](*args)\\n File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 63,
in ovn_config\\n ip_address = get_ip_addr(get_network(network_caps(),
net_name))\\n File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 119,
in get_network\\n raise
NetworkNotFoundError(net_name)\\nvdsm.tool.ovn_config.NetworkNotFoundError:
ksmmi1r01ovirt20.kosmo.cloud\", \"stderr_lines\": [\"Traceback (most recent
call last):\", \" File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 117,
in get_network\", \" return networks[net_name]\", \"KeyError:
'ksmmi1r01ovirt20.kosmo.cloud'\", \"\", \"During handling of the above
exception, another exception occurred:\", \"\", \"Traceback (most recent
call last):\", \" File \\\"/usr/bin/vdsm-tool\\\", line 195, in main\", \"
return tool_command[cmd][\\\"command\\\"](*args)\", \" File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 63,
in ovn_config\", \" ip_address = get_ip_addr(get_network(network_caps(),
net_name))\", \" File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line 119,
in get_network\", \" raise NetworkNotFoundError(net_name)\",
\"vdsm.tool.ovn_config.NetworkNotFoundError:
ksmmi1r01ovirt20.kosmo.cloud\"], \"stdout\": \"\", \"stdout_lines\": []}",
[root@ksmmi1r01ovirt20 ~]# vdsm-tool ovn-config 10.250.156.20 ksmmi1r01ovirt20.kosmo.cloud
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 117, in get_network
    return networks[net_name]
KeyError: 'ksmmi1r01ovirt20.kosmo.cloud'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 195, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 63, in ovn_config
    ip_address = get_ip_addr(get_network(network_caps(), net_name))
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 119, in get_network
    raise NetworkNotFoundError(net_name)
vdsm.tool.ovn_config.NetworkNotFoundError: ksmmi1r01ovirt20.kosmo.cloud
[root@ksmmi1r01ovirt20 ~]# vdsm-tool list-nets
ovirtmgmt (default route)
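From a quick look at /usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py,
the second argument to ovn-config appears to be either a tunneling IP or a
network name to be looked up among the host's networks, not the host FQDN,
which would explain the NetworkNotFoundError, since list-nets only shows
ovirtmgmt here. I am guessing (not verified) that a manual run along these
lines would configure the tunnel over the management network:

[root@ksmmi1r01ovirt20 ~]# vdsm-tool ovn-config 10.250.156.20 ovirtmgmt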
--
Regards,
Jorge Visentini
+55 55 98432-9868
oVirt 4.5.5 snapshot - Migration failed due to an Error: Fatal error during migration
by Jorge Visentini
Any tips about this error?
2023-08-10 18:24:57,544-03 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] Lock Acquired to object
'EngineLock:{exclusiveLocks='[29032e83-cfaf-4d30-bcc2-df72c5358552=VM]',
sharedLocks=''}'
2023-08-10 18:24:57,578-03 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] Running command:
MigrateVmToServerCommand internal: false. Entities affected : ID:
29032e83-cfaf-4d30-bcc2-df72c5358552 Type: VMAction group MIGRATE_VM with
role type USER
2023-08-10 18:24:57,628-03 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] START, MigrateVDSCommand(
MigrateVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552', srcHost='ksmmi1r01ovirt18',
dstVdsId='73c38b36-36da-4ffa-b17a-492fd7b093ae',
dstHost='ksmmi1r01ovirt19:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='false', consoleAddress='null',
maxBandwidth='3125', parallel='null', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]', dstQemu='10.250.156.19', cpusets='null',
numaNodesets='null'}), log id: 5bbc21d6
2023-08-10 18:24:57,628-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] START,
MigrateBrokerVDSCommand(HostName = ksmmi1r01ovirt18,
MigrateVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552', srcHost='ksmmi1r01ovirt18',
dstVdsId='73c38b36-36da-4ffa-b17a-492fd7b093ae',
dstHost='ksmmi1r01ovirt19:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='false', consoleAddress='null',
maxBandwidth='3125', parallel='null', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]', dstQemu='10.250.156.19', cpusets='null',
numaNodesets='null'}), log id: 14d92c9
2023-08-10 18:24:57,631-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] FINISH,
MigrateBrokerVDSCommand, return: , log id: 14d92c9
2023-08-10 18:24:57,634-03 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] FINISH, MigrateVDSCommand, return:
MigratingFrom, log id: 5bbc21d6
2023-08-10 18:24:57,639-03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] EVENT_ID:
VM_MIGRATION_START(62), Migration started (VM: ROUTER, Source:
ksmmi1r01ovirt18, Destination: ksmmi1r01ovirt19, User:
admin@ovirt@internalkeycloak-authz).
2023-08-10 18:24:57,641-03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'(ROUTER) moved from 'MigratingFrom'
--> 'Up'
2023-08-10 18:24:57,641-03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] Adding VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'(ROUTER) to re-run list
2023-08-10 18:24:57,643-03 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(ForkJoinPool-1-worker-13) [] Rerun VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'. Called from VDS 'ksmmi1r01ovirt18'
2023-08-10 18:24:57,679-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2194) [] START,
MigrateStatusVDSCommand(HostName = ksmmi1r01ovirt18,
MigrateStatusVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552'}), log id: 445b81e0
2023-08-10 18:24:57,681-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2194) [] FINISH,
MigrateStatusVDSCommand, return:
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusReturn@12cb7b1b, log
id: 445b81e0
2023-08-10 18:24:57,695-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-2194) [] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal
error during migration (VM: ROUTER, Source: ksmmi1r01ovirt18, Destination:
ksmmi1r01ovirt19).
2023-08-10 18:24:57,698-03 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(EE-ManagedThreadFactory-engine-Thread-2194) [] Lock freed to object
'EngineLock:{exclusiveLocks='[29032e83-cfaf-4d30-bcc2-df72c5358552=VM]',
sharedLocks=''}'
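The engine log only records the rerun and the generic "Fatal error during
migration" event; the underlying cause should be in the vdsm and libvirt logs
on the two hosts, which I can pull if needed, e.g. (assuming the libvirt log
file is named after the VM):

[root@ksmmi1r01ovirt18 ~]# grep -iE 'migrat|libvirtError' /var/log/vdsm/vdsm.log | tail -n 50
[root@ksmmi1r01ovirt18 ~]# tail -n 50 /var/log/libvirt/qemu/ROUTER.log

(and the same on ksmmi1r01ovirt19, the destination). Package versions on the
hosts, for reference: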
ovirt-release-master-4.5.5-0.0.master.20230612064154.git0c65b0e.el9.noarch
ovirt-imageio-common-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
python3-ovirt-engine-sdk4-4.6.3-0.1.master.20230324091708.el9.x86_64
ovirt-openvswitch-ovn-2.17-1.el9.noarch
ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
ovirt-openvswitch-ovn-host-2.17-1.el9.noarch
ovirt-imageio-client-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
ovirt-imageio-daemon-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
python3-ovirt-setup-lib-1.3.4-0.0.master.20220413133253.gitd32d35f.el9.noarch
python3.11-ovirt-engine-sdk4-4.6.3-0.1.master.20230324091708.el9.x86_64
python3.11-ovirt-imageio-common-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
python3.11-ovirt-imageio-client-2.5.1-0.202307060620.git4e5f7e0.el9.x86_64
ovirt-ansible-collection-3.1.3-0.1.master.20230420113738.el9.noarch
ovirt-vmconsole-1.0.9-2.el9.noarch
ovirt-vmconsole-host-1.0.9-2.el9.noarch
ovirt-openvswitch-2.17-1.el9.noarch
ovirt-python-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-ipsec-2.17-1.el9.noarch
python3-ovirt-node-ng-nodectl-4.4.3-0.20220615.0.el9.noarch
ovirt-node-ng-nodectl-4.4.3-0.20220615.0.el9.noarch
ovirt-host-dependencies-4.5.0-3.1.20220510094000.git2f2d022.el9.x86_64
ovirt-hosted-engine-ha-2.5.1-0.0.master.20220707064804.20220707064802.git14b1139.el9.noarch
ovirt-provider-ovn-driver-1.2.37-0.20220610132522.git62111d0.el9.noarch
ovirt-hosted-engine-setup-2.7.1-0.0.master.20230414113600.git340e19b.el9.noarch
ovirt-host-4.5.0-3.1.20220510094000.git2f2d022.el9.x86_64
ovirt-release-host-node-4.5.5-0.0.master.20230612064150.git0c65b0e.el9.x86_64
ovirt-node-ng-image-update-placeholder-4.5.5-0.0.master.20230612064150.git0c65b0e.el9.noarch
qemu-kvm-tools-8.0.0-6.el9.x86_64
qemu-kvm-docs-8.0.0-6.el9.x86_64
qemu-kvm-common-8.0.0-6.el9.x86_64
qemu-kvm-device-display-virtio-gpu-8.0.0-6.el9.x86_64
qemu-kvm-ui-opengl-8.0.0-6.el9.x86_64
qemu-kvm-ui-egl-headless-8.0.0-6.el9.x86_64
qemu-kvm-device-display-virtio-gpu-pci-8.0.0-6.el9.x86_64
qemu-kvm-block-blkio-8.0.0-6.el9.x86_64
qemu-kvm-device-display-virtio-vga-8.0.0-6.el9.x86_64
qemu-kvm-device-usb-host-8.0.0-6.el9.x86_64
qemu-kvm-device-usb-redirect-8.0.0-6.el9.x86_64
qemu-kvm-audio-pa-8.0.0-6.el9.x86_64
qemu-kvm-block-rbd-8.0.0-6.el9.x86_64
qemu-kvm-core-8.0.0-6.el9.x86_64
qemu-kvm-8.0.0-6.el9.x86_64
libvirt-daemon-kvm-9.3.0-2.el9.x86_64
Cheers!
--
Regards,
Jorge Visentini
+55 55 98432-9868
CPU support Xeon E5345
by Mikhail Po
Is it possible to install oVirt 4.3/4.4 on a ProLiant BL460c G1 with an Intel Xeon E5345 processor @ 2.33GHz?
When deployment fails, the error is: [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The host has been set to non-operational, deployment errors: code 156: Host host1.test.com disabled because the host CPU type is not supported by the cluster compatibility version or is not supported at all, code 9000: Failed to check the power management configuration for the host host1.test.com., correct accordingly and re-deploy."}
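For reference, a quick way to check what the host CPU reports before deploying
(just a diagnostic sketch, run on the blade itself):

grep -m1 'model name' /proc/cpuinfo
virsh -r capabilities | grep -A3 '<cpu>'

The E5345 is a Core2-era (Clovertown) part, so the thing to verify is whether
any cluster compatibility level offered by 4.3/4.4 still lists a matching CPU
type.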
Trouble restoring + upgrading to ovirt 4.5 system after host crashed
by David Johnson
Good afternoon all,
We had a confluence of events hit all at once and need help desperately.
Our oVirt engine system recently crashed and is unrecoverable. Due to a
power maintenance event at the data center, a third of our VMs are offline.
I have recent backups from the engine created with engine-backup.
I installed a clean CentOS 9 and followed the directions to install
the ovirt-engine.
After I restore the backup, the engine-setup fails on the keycloak
configuration.
*From clean system:*
*Install:* *(Observe the failed scriptlet during install, but the rpm install
still succeeds)*
[root@ovirt2 administrator]# dnf install -y ovirt-engine
Last metadata expiration check: 2:08:15 ago on Tue 08 Aug 2023 10:11:31 AM
CDT.
Dependencies resolved.
=============================================================================================================================================================
 Package                    Architecture    Version          Repository        Size
=============================================================================================================================================================
Installing:
 ovirt-engine               noarch          4.5.4-1.el9      centos-ovirt45    13 M
Installing dependencies:
 SuperLU                    x86_64          5.3.0-2.el9      epel              182 k
(Snip ...)
*  Running scriptlet: ovirt-vmconsole-1.0.9-1.el9.noarch               60/425
Failed to resolve allow statement at /var/lib/selinux/targeted/tmp/modules/400/ovirt_vmconsole/cil:539
Failed to resolve AST
/usr/sbin/semodule:  Failed!*
(Snip ...)
xmlrpc-common-3.1.3-1.1.el9.noarch
xorg-x11-fonts-ISO8859-1-100dpi-7.5-33.el9.noarch
zziplib-0.13.71-9.el9.x86_64
Complete!
*Engine-restore (no visible issues):*
[root@ovirt2 administrator]# engine-backup --mode=restore
--log=restore1.log --file=Downloads/engine-2023-08-06.22.00.02.bak
--provision-all-databases --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: Downloads/engine-2023-08-06.22.00.02.bak
log file: restore1.log
Preparing to restore:
- Unpacking file 'Downloads/engine-2023-08-06.22.00.02.bak'
Restoring:
- Files
------------------------------------------------------------------------------
Please note:
Operating system is different from the one used during backup.
Current operating system: centos9
Operating system at backup: centos8
Apache httpd configuration will not be restored.
You will be asked about it on the next engine-setup run.
------------------------------------------------------------------------------
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
- user 'ovirt_engine_history', database 'ovirt_engine_history'
- user 'ovirt_engine_history_grafana' on database 'ovirt_engine_history'
Restoring:
- Engine database 'engine'
- Cleaning up temporary tables in engine database 'engine'
- Updating DbJustRestored VdcOption in engine database
- Resetting DwhCurrentlyRunning in dwh_history_timekeeping in engine
database
- Resetting HA VM status
------------------------------------------------------------------------------
Please note:
The engine database was backed up at 2023-08-06 22:00:19.000000000 -0500 .
Objects that were added, removed or changed after this date, such as virtual
machines, disks, etc., are missing in the engine, and will probably require
recovery or recreation.
------------------------------------------------------------------------------
- DWH database 'ovirt_engine_history'
- Grafana database '/var/lib/grafana/grafana.db'
You should now run engine-setup.
Done.
[root@ovirt2 administrator]#
*Engine-setup :*
[root@ovirt2 administrator]# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files:
/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf,
/etc/ovirt-engine-setup.conf.d/10-packaging.conf,
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
Log file:
/var/log/ovirt-engine/setup/ovirt-engine-setup-20230808124501-joveku.log
Version: otopi-1.10.3 (otopi-1.10.3-1.el9)
[ INFO ] The engine DB has been restored from a backup
*[ ERROR ] Failed to execute stage 'Environment setup': Cannot connect to
Keycloak database 'ovirt_engine_keycloak' using existing credentials:
ovirt_engine_keycloak@localhost:5432*
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20230808124501-joveku.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20230808124504-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
*[ ERROR ] Execution of setup failed*
[root@ovirt2 administrator]#
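I suspect the backup comes from my CentOS 8 / 4.4-era engine, which had no
Keycloak, while engine-setup on 4.5 enables Keycloak by default, hence the
missing 'ovirt_engine_keycloak' database. I have seen mention that setup can
be run with Keycloak disabled to keep the legacy AAA configuration, though I
have not verified it on this exact version:

engine-setup --otopi-environment="OVESETUP_CONFIG/keycloakEnable=bool:False"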
*Engine-cleanup results:*
(snip)
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-remove-20230808120445-mj4eef.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20230808120508-cleanup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of cleanup completed successfully
[root@cen-90-tmpl administrator]#
*Engine backup (restore) results:*
[root@ovirt2 administrator]# engine-backup --mode=restore
--log=restore1.log --file=Downloads/engine-2023-08-06.22.00.02.bak
--provision-all-databases --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: Downloads/engine-2023-08-06.22.00.02.bak
log file: restore1.log
Preparing to restore:
- Unpacking file 'Downloads/engine-2023-08-06.22.00.02.bak'
Restoring:
- Files
------------------------------------------------------------------------------
Please note:
Operating system is different from the one used during backup.
Current operating system: centos9
Operating system at backup: centos8
Apache httpd configuration will not be restored.
You will be asked about it on the next engine-setup run.
------------------------------------------------------------------------------
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
*FATAL: Existing database 'engine' or user 'engine' found and temporary
ones created - Please clean up everything and try again*
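Presumably the first restore already provisioned the databases, so running
with --provision-all-databases a second time trips over the leftovers. My
working assumption is that they need dropping before retrying, along these
lines (a sketch only, please double-check before running it against real
data):

su - postgres -c 'psql -c "DROP DATABASE IF EXISTS engine;"'
su - postgres -c 'psql -c "DROP DATABASE IF EXISTS ovirt_engine_history;"'
su - postgres -c 'psql -c "DROP USER IF EXISTS engine;"'
su - postgres -c 'psql -c "DROP USER IF EXISTS ovirt_engine_history;"'
su - postgres -c 'psql -c "DROP USER IF EXISTS ovirt_engine_history_grafana;"'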
Any advice would be appreciated.
*David Johnson*
ovirt node NonResponsive
by carlos.mendes@mgo.cv
Hello,
I have an oVirt setup with two nodes; one of them is NonResponsive and I can't
manage it because it is in an Unknown state.
It seems the nodes lost connection with their gateway for a while.
The node (ovirt2), however, is having consistent problems. The following
sequence of events is reproducible and causes the host to enter a
"NonOperational" state in the cluster:
What is the proper way of restoring management?
I have a two-node cluster, with the oVirt manager running standalone on a
CentOS-Stream-9 virtual machine and the nodes running the most recent oVirt
Node 4.5.4 software.
I can then re-activate ovirt2, which appears as green for approximately 5
minutes and then repeats all of the above issues.
What can I do to troubleshoot this?
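If it helps, these are the host-side checks I know of when the engine marks a
node NonResponsive (a sketch; vdsmd is the stock management agent, and
<engine-address> is a placeholder for our engine's address):

systemctl status vdsmd
journalctl -u vdsmd --since "30 min ago"
ping -c3 <engine-address>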
How restore nodes ovirt UP from NonResponsive and VMs executing
by José Pascual
Hello,
I have an oVirt setup with two nodes that are NonResponsive; all the VMs are
running properly, but I can't manage them because they are in an Unknown state.
It seems the nodes lost connection with their gateway for a while.
I have thought of first restarting the node where the engine is not running
and trying to bring it Up, and then restarting the engine from within the VM
to see if it starts up on that node.
What is the proper way of restoring management?
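(One thing I have read but not verified myself: restarting only the management
agent with "systemctl restart vdsmd" on a host is supposed to be safe for the
running VMs, since vdsm does not own the qemu processes, and it can clear a
NonResponsive state without a reboot. Can anyone confirm?)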
Thanks,
Best Regards
--
Regards,
José Pascual Gallud Martínez
Engineering Dept. <http://telfy.com/>
Re: Some problems with ovirt
by אריה קלטר
Hi,
Sorry for the late reply.
I have now collected logs for the VM, for the scenario where it gets stuck in
the middle of power-up because there is no disk.
The VM id is 486cea97-ed56-47d4-930b-5f85c51ad3cf; the VM name is kc26-1.
About the second problem, with the migration, I also attached the logs here,
both from the source and from the destination server.
Any clue how to solve these problems?
It is really annoying that the VM often hangs like that when I try to power it
on, and that live migration of the engine VM never works, so I have to power
it off and on manually from the CLI.
For both scenarios I uploaded both /var/log/vdsm/vdsm.log and
/var/log/libvirt/qemu/<vmname>.log.
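(For the record, the manual power-cycle I do from the host CLI is with the
standard hosted-engine commands -- assuming a self-hosted engine, which is my
setup:

hosted-engine --vm-status
hosted-engine --vm-shutdown
hosted-engine --vm-start
)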
Certificates expired...
by Jason P. Thomas
We're moving to a new facility and pretty much building the
infrastructure out from scratch. As such, the oVirt 4.4 cluster at our
old location has flown under the radar because it has just worked for
years. In July it seems some of the certs expired (specifically the
engine Apache cert) and we only just noticed. I followed a post on
changing the Apache cert, and that let us log in to the engine web
interface, but nothing in the interface showed as connected. VMs are
still running, I even rebooted one via ssh before realizing the
certificate issues. In "Events" in the engine, it was complaining about
certs being expired on the hosts. I found this post to this mailing
list and followed the instructions possibly in error:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NHJNETOIMSHD...
Now the engine won't start at all and I'm afraid I'm one power outage
away from complete disaster. I need to keep the old location up and
functioning for another 4-6 months, so any insights would be greatly
appreciated.
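In case it is useful, these are the expiry dates I can check on the engine
(standard oVirt PKI paths, as far as I know):

openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/ca.pem
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/apache.cer
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/engine.cer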
Sincerely,
Jason P. Thomas
Problems running CentOS 9 Stream GenericCloud guests
by Gianluca Amato
Hello everyone,
I'm trying to run CentOS 9 Stream GenericCloud as a guest in oVirt 4.5.4. While the images in the ovirt-image-repository seem to work fine (in particular, I've tried version 20211119.0), the latest version (20230727.1) does not start. After the initial boot messages, it gives the error:
Starting dracut mount hook
dracut-mount[413]: Warning: Can't mount root filesystem
Starting dracut emergency shell
and brings me to the emergency shell.
Is this a known bug?
Note that if I start from the old image and then upgrade all the packages, I have no problems at all.
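In case the output helps: from the dracut emergency shell I can collect the
following (a diagnostic sketch, not a fix):

cat /proc/cmdline   # the root= argument the image expects
blkid               # the block devices/filesystems the initramfs actually sees
journalctl -b --no-pager | tail -n 50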
Thanks in advance for any help.
Gianluca