I tried searching the archives but couldn't find anything related, so I'm posting a new thread. When adding a new host to the cluster, why do we need to assign the IP on an untagged network on the bond?
Setupnetworks fails with an error when we try it with the IP assigned to a VLAN interface over the bond. For context, this is an OVS-switch-based cluster; it works when the host is added to a traditional Linux-bridge-based cluster.
Can someone throw some light on this please?
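For anyone comparing notes, here is a minimal sketch of the request shape this corresponds to on the vdsm side, with key names (networks/bondings, vlan, bonding, ipaddr, bootproto) taken from vdsm's setupNetworks schema as I understand it; the network, bond, and NIC names are placeholders. This is only an illustration of the configuration being attempted, not a workaround.

```python
# Hypothetical sketch of a vdsm setupNetworks request that puts a static IP
# on a VLAN network riding on top of a bond. Key names follow the vdsm API
# schema as I recall it; "vmnet", "bond0", "eno1/eno2" are placeholders.

def vlan_over_bond_payload(net_name, bond, slaves, vlan_id, ipaddr, netmask):
    """Build a setupNetworks-style request: IP on a VLAN over a bond."""
    return {
        "networks": {
            net_name: {
                "bonding": bond,        # attach the network to the bond...
                "vlan": vlan_id,        # ...via a tagged VLAN interface
                "ipaddr": ipaddr,
                "netmask": netmask,
                "bootproto": "static",
                "bridged": True,
            }
        },
        "bondings": {bond: {"nics": slaves}},
        "options": {"connectivityCheck": True},
    }

payload = vlan_over_bond_payload(
    "vmnet", "bond0", ["eno1", "eno2"], 100, "192.0.2.10", "255.255.255.0"
)
```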
OK, so the data storage domain on a cluster filled up to the point that
the OS refused to allocate any more space.
This happened because I tried to create a new preallocated disk from the
Admin WebUI. The disk creation claimed to complete successfully (I've not
tried to use that disk yet), but due to a timeout with the storage domain
in question the engine began trying to fence all of the HA VMs.
The fencing failed for all of the HA VMs, leaving them in a powered-off
state. This was despite all of the HA VMs being up at the time, so no
reallocation of the leases should have been necessary. Attempting to
restart them manually from the Admin WebUI failed, with the original
host they were running on complaining about "no space left on device",
and the other hosts claiming that the original host still held the VM
leases.
After cleaning up some old snapshots, the HA VMs would still not boot.
Toggling the High Availability setting for each one and allowing the
lease to be removed from the storage domain was required to get the VMs
to start again. Re-enabling the High Availability setting thereafter
fixed the lease issue. But now some, though not all, of the HA VMs are
still throwing "no space left on device" errors when attempting to start
them; the others are working just fine even with their HA lease in place.
My questions are:
1. Why does oVirt claim that an HA VM lease is constantly allocated on
the storage domain when the lease is clearly only held while the VM is running?
2. Why does oVirt deallocate the HA VM lease when performing a fencing
operation?
3. Why can't oVirt clear the old HA VM lease when the VM is down and
the storage pool has space available? (How much space does a lease even
need? The Leases section of the storage domain in the Admin WebUI doesn't
contain any useful info beyond the fact that a lease should exist for a
VM even when it's off.)
4. Is there a better way to force-start an HA VM when the lease is stale
and the VM is powered off?
5. Should I file a bug about HA VMs failing to reacquire a lease on a
full storage pool?
I'm trying to enable USB 3.0 for all guests. Currently, Win10 and Linux guests work well with the qemu-xhci device, while Win7 SP1 doesn't.
I have googled USB 3 drivers for Win7, but none of them match qemu-xhci.
Does anybody know where I can download this driver for Win7?
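For reference, the controller the working guests are using looks like this in libvirt domain XML terms (a standard libvirt fragment; the ports value is illustrative):

```xml
<controller type='usb' model='qemu-xhci' ports='8'/>
```

The question above is specifically about finding a Win7 driver that binds to this device, since Win7 has no in-box xHCI support.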
Just tried to upgrade my engine from 4.4.x to 4.5.0 on CentOS Stream 8.
I realized I had missed the step of updating to the very latest version of
4.4.10 around the same time it failed and left things in somewhat of a bad
state, so I just built a new Rocky 8.6 host and restored my backup there.
The update from 4.4.x to 4.4.10 there went fine; 4.4.10 -> 4.5.0 fails
with the same error, however:
[ ERROR ] schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema refresh failed
This appears to be the relevant section of the log:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql:22: NOTICE: column "default_value" of relation "vdc_options" already exists, skipping
psql:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql:1454: ERROR: column "default_value" contains null values
CONTEXT: SQL statement "ALTER TABLE vdc_options ALTER COLUMN default_value SET NOT NULL"
PL/pgSQL function fn_db_change_column_null(character varying,character varying,boolean) line 10 at EXECUTE
FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql
2022-06-01 01:10:03,015-0700 ERROR otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:530 schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql
2022-06-01 01:10:03,016-0700 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 532, in _misc
raise RuntimeError(_('Engine schema refresh failed'))
RuntimeError: Engine schema refresh failed
2022-06-01 01:10:03,017-0700 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Misc configuration': Engine schema refresh failed
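The error says the migration cannot set default_value NOT NULL because some vdc_options rows still hold NULL there. A small illustration of the failure mode, using an in-memory sqlite3 database as a stand-in for the engine's PostgreSQL database (column and option names here are made up; only default_value comes from the log above):

```python
# Hedged illustration of why 0000_config.sql fails: PostgreSQL rejects
# "ALTER COLUMN default_value SET NOT NULL" while any row still has a NULL
# default_value. sqlite3 stands in for the real engine DB; the table is a
# simplified vdc_options and the option names are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE vdc_options "
    "(option_name TEXT, option_value TEXT, default_value TEXT)"
)
db.execute("INSERT INTO vdc_options VALUES ('SomeOption', 'x', 'x')")
db.execute("INSERT INTO vdc_options VALUES ('StaleOption', 'y', NULL)")

# The check worth running against the real DB before retrying engine-setup:
nulls = db.execute(
    "SELECT count(*) FROM vdc_options WHERE default_value IS NULL"
).fetchone()[0]
print(nulls)  # 1 -> Postgres would refuse SET NOT NULL

# Backfilling the NULLs (with whatever value is appropriate for each option)
# removes the blocker:
db.execute(
    "UPDATE vdc_options SET default_value = option_value "
    "WHERE default_value IS NULL"
)
nulls = db.execute(
    "SELECT count(*) FROM vdc_options WHERE default_value IS NULL"
).fetchone()[0]
print(nulls)  # 0
```

Which rows are NULL in the real database, and what the correct backfill values are, is something I can't guess from the log alone.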
We have two oVirt environments configured, one for our Lab and one for Production.
In the Lab, we can create templates: clicking OK dismisses the New Template dialog, the VM's disk becomes locked, and template creation begins.
In the production environment, we cannot create templates. Clicking OK does nothing: the dialog box remains, and nothing indicates missing or incorrect information.
Both environments run hosted engines on oVirt Engine version 4.5. I have monitored the logs in the production environment, and nothing comes up when the OK button is clicked.
I have tested both environments in both Firefox and Chrome, and the behavior is the same: our lab will progress, but production will simply not budge.
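For what it's worth, a minimal stdlib sketch of how the logs can be watched for new lines while clicking OK, assuming the standard oVirt log locations (/var/log/ovirt-engine/engine.log and ui.log; UI-side exceptions often land in the latter). The helper only reports lines appended after the offset you pass in:

```python
# Hedged helper for watching a log while reproducing the problem: returns
# only the lines appended to `path` since byte offset `offset`. The log
# paths in the usage note are the standard oVirt locations; the helper
# itself is plain stdlib Python.

def read_new_lines(path, offset):
    """Return (new_lines, new_offset) for content appended since `offset`."""
    with open(path) as f:
        f.seek(offset)        # skip everything already seen
        data = f.read()
        new_offset = f.tell() # remember where to resume next poll
    return data.splitlines(), new_offset
```

Usage would be a simple poll loop, e.g. `lines, offset = read_new_lines("/var/log/ovirt-engine/ui.log", offset)` inside a `while True` with a short sleep, started just before clicking OK.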