rocky 9.4 hypervisor - failed to create ovirtmgmt iface
by marek
Hi,
today I tried to add a new hypervisor running Rocky 9.4
(oVirt engine 4.5).
The ovirtmgmt iface is not functional when I try to add the host to the cluster:
it exists for a while (seconds), but then I see the message "ovirtmgmt: port
1(eno3) entered disabled state", the ovirtmgmt iface is gone, and the network
is unreachable.
I have working hypervisors based on Rocky 9.3, so I tried downgrading nmstate
to the version from 9.3:
dnf downgrade
https://dl.rockylinux.org/vault/rocky/9.3/AppStream/x86_64/os/Packages/p/...
https://dl.rockylinux.org/vault/rocky/9.3/AppStream/x86_64/os/Packages/n/...
https://dl.rockylinux.org/vault/rocky/9.3/AppStream/x86_64/os/Packages/n/...
Any ideas/tips on how to debug the ovirtmgmt iface creation process?
vdsm.log
2024-07-11 17:42:07,305+0200 INFO (jsonrpc/6) [api.network] START
setupNetworks(networks={'ovirtmgmt': {'netmask': '255.255.255.0',
'ipv6autoconf': False, 'nic': 'eno3', 'bridged': 'true', 'ipaddr':
'10.10.10.237', 'defaultRoute': True, 'dhcpv6': False, 'STP': 'no',
'mtu': 1500, 'switch': 'legacy'}}, bondings={},
options={'connectivityTimeout': 120, 'commitOnSuccess': True,
'connectivityCheck': 'true'}) from=::ffff:10.10.12.12,33842 (api:31)
2024-07-11 17:43:07,312+0200 WARN (vdsm.Scheduler) [Executor] Worker
blocked: <Worker name=jsonrpc/6 running <Task <JsonRpcTask {'jsonrpc':
'2.0', 'method': 'Host.setupNetworks', 'params': {'networks':
{'ovirtmgmt': {'netmask': '255.255.255.0', 'ipv6autoconf': False, 'nic':
'eno3', 'bridged': 'true', 'ipaddr': '10.10.10.237', 'defaultRoute':
True, 'dhcpv6': False, 'STP': 'no', 'mtu': 1500, 'switch': 'legacy'}},
'bondings': {}, 'options': {'connectivityTimeout': 120,
'commitOnSuccess': True, 'connectivityCheck': 'true'}}, 'id':
'65b89a79-0e7a-4a05-b310-f9903bdec2ce'} at 0x7f5e8c245b20> timeout=60,
duration=60.01 at 0x7f5e8c245df0> task#=0 at 0x7f5e8c27eaf0>, traceback:
File: "/usr/lib64/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File: "/usr/lib64/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File: "/usr/lib64/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File: "/usr/lib/python3.9/site-packages/vdsm/common/concurrent.py", line
243, in run
ret = func(*args, **kwargs)
File: "/usr/lib/python3.9/site-packages/vdsm/executor.py", line 284, in _run
self._execute_task()
File: "/usr/lib/python3.9/site-packages/vdsm/executor.py", line 298, in
_execute_task
task()
File: "/usr/lib/python3.9/site-packages/vdsm/executor.py", line 374, in
__call__
self._callable()
File: "/usr/lib/python3.9/site-packages/yajsonrpc/__init__.py", line
253, in __call__
self._handler(self._ctx, self._req)
File: "/usr/lib/python3.9/site-packages/yajsonrpc/__init__.py", line
296, in _serveRequest
response = self._handle_request(req, ctx)
File: "/usr/lib/python3.9/site-packages/yajsonrpc/__init__.py", line
338, in _handle_request
res = method(**params)
File: "/usr/lib/python3.9/site-packages/vdsm/rpc/Bridge.py", line 186,
in _dynamicMethod
result = fn(*methodArgs)
File: "<decorator-gen-508>", line 2, in setupNetworks
File: "/usr/lib/python3.9/site-packages/vdsm/common/api.py", line 33, in
method
ret = func(*args, **kwargs)
File: "/usr/lib/python3.9/site-packages/vdsm/API.py", line 1576, in
setupNetworks
supervdsm.getProxy().setupNetworks(networks, bondings, options)
File: "/usr/lib/python3.9/site-packages/vdsm/common/supervdsm.py", line
38, in __call__
return callMethod()
File: "/usr/lib/python3.9/site-packages/vdsm/common/supervdsm.py", line
35, in <lambda>
getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
File: "<string>", line 2, in setupNetworks
File: "/usr/lib64/python3.9/multiprocessing/managers.py", line 810, in
_callmethod
kind, result = conn.recv()
File: "/usr/lib64/python3.9/multiprocessing/connection.py", line 254, in
recv
buf = self._recv_bytes()
File: "/usr/lib64/python3.9/multiprocessing/connection.py", line 418, in
_recv_bytes
buf = self._recv(4)
File: "/usr/lib64/python3.9/multiprocessing/connection.py", line 383, in
_recv
chunk = read(handle, remaining) (executor:345)
Installed oVirt packages on the host:
centos-release-ovirt45-9.2-1.el9s.noarch
ovirt-vmconsole-1.0.9-3.el9.noarch
ovirt-imageio-common-2.5.0-1.el9.x86_64
python3-ovirt-engine-sdk4-4.6.2-1.el9.x86_64
ovirt-openvswitch-ovn-2.17-1.el9.noarch
ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
ovirt-imageio-client-2.5.0-1.el9.x86_64
ovirt-imageio-daemon-2.5.0-1.el9.x86_64
ovirt-openvswitch-ovn-host-2.17-1.el9.noarch
ovirt-vmconsole-host-1.0.9-3.el9.noarch
ovirt-python-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-2.17-1.el9.noarch
python3.11-ovirt-imageio-common-2.5.0-1.el9.x86_64
python3.11-ovirt-engine-sdk4-4.6.2-1.el9.x86_64
python3.11-ovirt-imageio-client-2.5.0-1.el9.x86_64
ovirt-openvswitch-ipsec-2.17-1.el9.noarch
python3-ovirt-setup-lib-1.3.3-1.el9.noarch
ovirt-ansible-collection-3.2.0-1.el9.noarch
ovirt-hosted-engine-ha-2.5.1-1.el9.noarch
ovirt-provider-ovn-driver-1.2.36-1.el9.noarch
ovirt-host-dependencies-4.5.0-3.el9.x86_64
ovirt-hosted-engine-setup-2.7.1-1.el9.noarch
ovirt-host-4.5.0-3.el9.x86_64
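The WARN above shows the jsonrpc worker stuck waiting on supervdsm, which performs the actual network configuration. A hedged sketch of commands for watching the bridge setup live while the engine retries (default oVirt log locations assumed, not taken from the thread):
# follow supervdsm, where setupNetworks actually runs
tail -f /var/log/vdsm/supervdsm.log
# in parallel, watch NetworkManager/nmstate decisions and kernel link events
journalctl -f -u NetworkManager
ip monitor link
# after the failure, dump what nmstate reports for the NIC and the bridge
nmstatectl show eno3
nmstatectl show ovirtmgmt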
12 months
Failed to configure management network on the host.
by d_hashima@sagaidc.co.jp
A RHEL9 host (KVM server) cannot be added from an oVirt Engine built on CentOS 9 Stream.
The oVirt management network and the RHEL9 (KVM server) management network are in the same segment, and the two machines can resolve each other's names.
I'm new to oVirt, so I'd appreciate it if you could explain things in an easy-to-understand manner.
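A hedged starting point, since no logs were posted: when adding a host fails at the management-network step, the engine keeps a per-host deploy log, and the host-side details land in the vdsm logs (default locations assumed):
# on the engine: the most recent host-deploy log
ls -t /var/log/ovirt-engine/host-deploy/ | head -n 1
# on the host being added
less /var/log/vdsm/supervdsm.log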
12 months
Re: engine-setup fails: "Failed to execute stage 'Misc configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute"
by Brent Saner
Whoops, sent before I finished replying. I'll look into first trying the nightly build, to see if that has the same behavior, and then trying the _setupAuth patching et al. to see if I can narrow down what might be happening here.
Brent Saner
SENIOR SYSTEMS ENGINEER
brent.saner(a)netfire.com
855-696-3834 Ext. 110
www.netfire.com
________________________________
From: Brent Saner <brent.saner(a)netfire.com>
Sent: Thursday, July 11, 2024 5:38 PM
To: users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: engine-setup fails: "Failed to execute stage 'Misc configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute"
Looking into this, I can confirm that *after* the packages are installed and *before* engine-setup is run, /etc/ovirt-engine/aaa/ does not even exist:
# ls /etc/ovirt-engine/aaa
ls: cannot access '/etc/ovirt-engine/aaa': No such file or directory
Which may be expected, I'm unclear on that.
Further details inline below:
________________________________
From: Yedidyah Bar David <didi(a)redhat.com>
Sent: Thursday, July 4, 2024 2:13 AM
To: Brent Saner <brent.saner(a)netfire.com>
Cc: users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: engine-setup fails: "Failed to execute stage 'Misc configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute"
On Thu, Jul 4, 2024 at 9:11 AM Yedidyah Bar David <didi(a)redhat.com<mailto:didi@redhat.com>> wrote:
On Wed, Jun 19, 2024 at 10:38 PM Brent S. <brent.saner(a)netfire.com<mailto:brent.saner@netfire.com>> wrote:
As a quick update to this:
# ovirt-aaa-jdbc-tool
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Jun 19, 2024 7:28:14 PM org.ovirt.engine.extension.aaa.jdbc.binding.cli.Cli main
SEVERE: Unexpected Exception invoking Cli: Could not read properties from: /etc/ovirt-engine/aaa/internal.properties
Which is, of course, the same message in the log.
This is probably expected, since *engine-setup never actually created the file*:
Are you sure about this?
Presumably it was removed, but I do know the directory is not present before engine-setup is run and the file is not present after.
# ls -la /etc/ovirt-engine/aaa
total 4
drwxr-xr-x. 2 root root 6 Jun 19 19:27 .
drwxr-xr-x. 18 root root 4096 Jun 19 19:27 ..
#
I guess you checked the above only after engine-setup failed/finished, right?
At the time, yes; see the inline notes above; the directory itself does not even exist before engine-setup runs.
And:
2024-06-19 19:27:10,917+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc plugin.execute:923 execute-output: ['/usr/share/ovirt-engine-extension-aaa-jdbc/dbscripts/schema.sh', '-s', '[REDACTED_REMOTE_DB_HOST]', '-p', '5432', '-u', '[REDACTED_REMOTE_DB_USER]', '-d', '[REDACTED_REMOTE_DB_NAME]', '-e', 'aaa_jdbc', '-l', '/root/ovirt-engine-setup.log', '-c', 'apply'] stderr:
2024-06-19 19:27:10,917+0000 DEBUG otopi.transaction transaction._prepare:61 preparing 'File transaction for '/etc/ovirt-engine/aaa/internal.properties''
2024-06-19 19:27:10,917+0000 DEBUG otopi.filetransaction filetransaction.prepare:184 file '/etc/ovirt-engine/aaa/internal.properties' missing
Indeed
(SNIP)
If you grep ovirt-engine sources, you'll find internal.properties in:
packaging/setup/ovirt_engine_setup/engine/constants.py:
AAA_JDBC_CONFIG_DB = os.path.join(
OVIRT_ENGINE_SYSCONFDIR,
'aaa',
'internal.properties'
)
If you then grep for AAA_JDBC_CONFIG_DB, you see it in:
packaging/setup/plugins/ovirt-engine-setup/ovirt-engine/config/aaajdbc.py:
def _setupAuth(self):
self.environment[otopicons.CoreEnv.MAIN_TRANSACTION].append(
filetransaction.FileTransaction(
name=oenginecons.FileLocations.AAA_JDBC_CONFIG_DB,
...
visibleButUnsafe=True,
Forgot to mention: you can check the otopi sources (src/otopi/filetransaction.py)
to see what this means.
...
def _setupAdminUser(self):
toolArgs = (
oenginecons.FileLocations.AAA_JDBC_TOOL,
'--db-config=%s' % oenginecons.FileLocations.AAA_JDBC_CONFIG_DB,
)
...
@plugin.event(
stage=plugin.Stages.STAGE_MISC,
name=AAA_JDBC_SETUP_ADMIN_USER,
after=(
oengcommcons.Stages.DB_SCHEMA,
oengcommcons.Stages.DB_CONNECTION_AVAILABLE,
oenginecons.Stages.CONFIG_EXTENSIONS_UPGRADE,
),
before=(
oenginecons.Stages.CONFIG_AAA_ADMIN_USER_SETUP,
),
condition=lambda self: self.environment[
oenginecons.ConfigEnv.ADMIN_USER_AUTHZ_TYPE
] == self.AAA_JDBC_AUTHZ_TYPE,
)
def _misc(self):
# TODO: if we knew that aaa-jdbc package was upgraded by engine-setup
# TODO: we could display summary note that custom profiles have to be
# TODO: upgraded manually
self._setupSchema()
self._setupAuth()
self._setupAdminUser()
...
This means that:
at STAGE_MISC, _misc calls _setupAuth, which creates this file, and then calls
_setupAdminUser, which tries to use it. The latter fails, and engine-setup rolls back
the MAIN_TRANSACTION, including removing the file.
I'd start debugging this issue by:
1. Patching _setupAuth to wait (e.g. using dialog.queryBoolean, search the source
for examples) after it creates the file, so that I can investigate it
2. Patching _setupAdminUser to wait after it runs the tool, so that I can try to
investigate the failure - e.g. run it myself under strace, if the existing logging
is not enough.
You can try using the otopi plugin wait_on_error for this, instead of patching.
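For step 2, a minimal sketch of running the tool under strace while engine-setup is paused at the waiting point (the invocation is the one that appears in the setup logs; the output path is arbitrary):
strace -f -o /tmp/aaa-jdbc.strace \
    /usr/bin/ovirt-aaa-jdbc-tool \
    --db-config=/etc/ovirt-engine/aaa/internal.properties \
    query --what=user --pattern=name=admin
# then look for failing open()/stat() calls on the properties file
grep 'internal.properties' /tmp/aaa-jdbc.strace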
Good luck and best regards,
--
Didi
12 months
engine-setup fails: "Failed to execute stage 'Misc configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute"
by brent.saner@netfire.com
I can find no reason why this would be happening. AlmaLinux 9.4, oVirt 4.5.6, a brand-new, fresh cluster install.
From the log, these seem to be the only relevant lines:
2024-06-18 22:02:52,720+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc plugin.execute:918 execute-output: ('/usr/bin/ovirt-aaa-jdbc-tool', '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'query', '--what=user', '--pattern=name=admin') stdout:
2024-06-18 22:02:52,720+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc plugin.execute:923 execute-output: ('/usr/bin/ovirt-aaa-jdbc-tool', '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'query', '--what=user', '--pattern=name=admin') stderr:
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Jun 18, 2024 10:02:52 PM org.ovirt.engine.extension.aaa.jdbc.binding.cli.Cli main
SEVERE: Unexpected Exception invoking Cli: com/ongres/scram/common/stringprep/StringPreparation
2024-06-18 22:02:52,720+0000 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/config/aaajdbc.py", line 414, in _misc
self._setupAdminUser()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/config/aaajdbc.py", line 298, in _setupAdminUser
if not self._userExists(
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/config/aaajdbc.py", line 49, in _userExists
rc, stdout, stderr = self.execute(
File "/usr/lib/python3.9/site-packages/otopi/plugin.py", line 929, in execute
raise RuntimeError(
RuntimeError: Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
2024-06-18 22:02:52,721+0000 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Misc configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
2024-06-18 22:02:52,721+0000 DEBUG otopi.transaction transaction.abort:124 aborting 'DNF Transaction'
Any ideas?
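The SEVERE line reads like a Java NoClassDefFoundError for com/ongres/scram/..., the SCRAM library the PostgreSQL JDBC driver uses for password authentication. A hedged first check is whether the scram jar is present at all and matches the installed JDBC driver:
rpm -q postgresql-jdbc ovirt-engine-extension-aaa-jdbc
find /usr/share/java* \( -iname '*scram*' -o -iname '*postgresql*' \)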
12 months
Get Host Capabilities failed: Connection timeout for host
by Frederico Madeira
Hi guys
I'm getting these messages from the engine GUI: Get Host
Capabilities failed: Connection timeout for host.
My hosts are in an unresponsive state,
my VMs are in an unknown state,
and my storage is in an inactive state.
From the engine I can successfully ping the hosts by IP and by name, and SSH
to them, so I don't have network connectivity problems.
Error in engine.log:
2024-07-10 03:47:30,088-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-13) [] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM servernode10 command Get Host
Capabilities failed: Connection timeout for host '10.180.18.116', last
response arrived 6609 ms ago.
2024-07-10 03:47:30,089-03 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(EE-ManagedThreadFactory-engineScheduled-Thread-13) [] Unable to
RefreshCapabilities: VDSNetworkException: VDSGenericException:
VDSNetworkException: Connection timeout for host '10.180.18.116', last
response arrived 6609 ms ago.
2024-07-10 03:49:45,027-03 INFO
Any hint on how to troubleshoot this issue?
Regards, Frederico Madeira
fred(a)madeira.eng.br
www.madeira.eng.br
Cisco CCNA, LPIC-1, LPIC-2
Registered GNU/Linux nº 206120
GPG-Key-ID: 1024D/0F0A721D
Key fingerprint = C424 D86B 57D5 BE55 767A 6ED1 53F8 254E 0F0A 721D
MSN: fttmadeira(a)hotmail.com
GTalk:fmadeira@gmail.com
SKYPE: fred_madeira
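The engine talks to vdsm on TCP port 54321, so working ping/SSH does not rule out a vdsm-level problem. A hedged sketch of the next checks (default port and service names assumed):
# on the host: is vdsm up and answering?
systemctl status vdsmd
vdsm-client Host getCapabilities
# from the engine: is the vdsm port reachable?
nc -zv 10.180.18.116 54321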
12 months
VM live migration fails after update to qemu-kvm-9.0.0-4.el9.x86_64
by g.vasilopoulos@uoc.gr
Hello
After updating a hypervisor to qemu-kvm-9.0.0-4.el9.x86_64, live migration stopped working, with errors related to the CPU topology.
When I downgraded to qemu-kvm-9.0.0-3.el9.x86_64, migration worked again.
Is this a known bug? Has anyone else encountered it?
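If you need to stay on the working build until this is resolved, a hedged sketch for pinning the package (the versionlock plugin is a separate package on EL9):
dnf downgrade qemu-kvm-9.0.0-3.el9
dnf install python3-dnf-plugin-versionlock
dnf versionlock add 'qemu-kvm*'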
12 months
vm stays locked after failed export
by h aanst
Hi,
I've been running oVirt for many years, but this baffles me.
An export was taking too long and seemed to hang.
Due to the heat a switch failed, so the export storage is unmounted at this time.
Now my VM has a little lock and can't be restarted (shutdown worked).
I checked all the DB entries and vdsm:
update vm_static set template_status = 0 where template_status = 1;
select * from vm_static where template_status = 1;
0
update vm_dynamic set status = 0 where status = 15;
select * from vm_dynamic where status = 15;
0
update images set imagestatus = 1 where imagestatus = 2;
select * from images where imagestatus = 2;
0
update snapshots set status = 'OK' where status like 'LOCKED';
select * from snapshots where status like 'LOCKED';
0
UPDATE images SET imagestatus = 1 WHERE imagestatus = 4;
select * from images where imagestatus = 4;
0
I restarted the hosted engine.
I don't know what to check next.
Any hints?
thanks
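Rather than updating the tables by hand, the engine ships an unlock helper; a hedged sketch, assuming the usual 4.x layout (the exact flags vary by version, so check -h first):
cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -h            # show the flags your version supports
./unlock_entity.sh -t all -q     # list everything still marked locked
./unlock_entity.sh -t vm <vm_id> # then unlock the affected VM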
12 months
CentOS9 Stream -- Only way forward?
by eshwayri@gmail.com
I've been running 4.5.4-1 on RHEL 8 for a while now; it's been running fine for my needs. I was getting around the unholy dependency issue, caused by the centos-release package conflicting with the redhat-release package, by upgrading everything else but not that package. That has been necessary ever since RHEL versions climbed higher than any CentOS 8 Stream versions. This time around, since CentOS 8 Stream has been archived, I made sure to install anything I hadn't and then manually purged centos-release from the RPM database. This allowed RHEL to update to 8.10, including the release package this time. I'm not expecting an issue, since there won't be any future patches coming from that repo.
If I decide to stick with oVirt and install daily snapshots, I assume I will have to set up a new CentOS Stream 9 server, create a backup of the old server, and then restore it on the new server, like I did when I was forced to migrate from CentOS 8 to RHEL 8 (with CentOS 8 Stream packages). Is this right?
I haven't decided if I want to do this. CentOS Stream 9 is supported until mid-2027; RHEL 8.10 is supported until mid-2029. Barring new features or a major incompatibility, I don't see a pressing reason to do it. In fact, with no one stepping in to take over the project, there is a better chance of staying supported by sticking with RHEL 8.10 (2+ years).
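For reference, backup and restore onto a new machine is the documented migration path; a minimal sketch with engine-backup, assuming a standalone engine with local databases (check engine-backup --help on your version):
# on the old RHEL 8.10 engine
engine-backup --mode=backup --file=engine.backup --log=backup.log
# on the new CentOS Stream 9 machine, after installing the ovirt-engine
# packages but before running engine-setup
engine-backup --mode=restore --file=engine.backup --log=restore.log \
    --provision-all-databases --restore-permissions
engine-setup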
12 months
VM Migration Failed
by KSNull Zero
Running oVirt 4.4.5.
VMs cannot migrate between hosts.
vdsm.log contains the following error:
libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://ovhost01.local/system: authentication failed: Failed to verify peer's certificate
Certificates on the hosts were renewed some time ago. How can this issue be fixed?
Thank you.
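Since the failure is libvirt's TLS handshake, a hedged first step is to check whether the vdsm certificate has expired and what the destination libvirtd actually presents on its TLS port (16514 by default; paths below are the oVirt defaults):
# on the destination host
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -subject -enddate
# from the source host
openssl s_client -connect ovhost01.local:16514 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
If the certificates are stale, putting the host into Maintenance and using the engine's "Enroll Certificate" action on the host is the usual way to re-issue them.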
12 months
Re: Multiple GPU Passthrough with NVLink (Invalid I/O region)
by Zhengyi Lai
I noticed this document https://docs.nvidia.com/vgpu/16.0/grid-vgpu-release-notes-generic-linux-k... has this to say:
In pass-through mode, all GPUs connected to each other through NVLink must be assigned to the same VM. If a subset of GPUs connected to each other through NVLink is passed through to a VM, unrecoverable error XID 74 occurs when the VM is booted. This error corrupts the NVLink state on the physical GPUs and, as a result, the NVLink bridge between the GPUs is unusable.
You may need to pass through all of the GPUs in the NVLink group to the VM.
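To see which GPUs are NVLink peers (and therefore must be passed through together), a hedged sketch using nvidia-smi on the hypervisor:
# NV# entries in the matrix mark NVLink-connected GPU pairs
nvidia-smi topo -m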
12 months
problem on dnf update
by g.vasilopoulos@uoc.gr
Hi,
I tried dnf update today and got the error below. Is there a solution for this?
[root@*****~]# dnf update
Last metadata expiration check: 3:42:36 ago on Mon 17 Jun 2024 08:47:59 AM EEST.
Error:
Problem 1: package python3-boto3-1.18.58-1.el9s.noarch from @System requires (python3.9dist(botocore) < 1.22 with python3.9dist(botocore) >= 1.21.58), but none of the providers can be installed
- cannot install both python3-botocore-1.31.62-1.el9.noarch from appstream and python3-botocore-1.21.58-1.el9s.noarch from @System
- cannot install both python3-botocore-1.21.58-1.el9s.noarch from ovirt-master-centos-stream-openstack-yoga-testing and python3-botocore-1.31.62-1.el9.noarch from appstream
- cannot install the best update candidate for package python3-botocore-1.21.58-1.el9s.noarch
- cannot install the best update candidate for package python3-boto3-1.18.58-1.el9s.noarch
Problem 2: package python3-pyngus-2.3.0-8.el9s.noarch from @System requires python3.9dist(python-qpid-proton), but none of the providers can be installed
- cannot install both python3-qpid-proton-0.39.0-2.el9s.x86_64 from ovirt-master-centos-stream-opstools-collectd5-testing and python3-qpid-proton-0.35.0-2.el9s.x86_64 from @System
- cannot install both python3-qpid-proton-0.35.0-2.el9s.x86_64 from ovirt-master-centos-stream-openstack-yoga-testing and python3-qpid-proton-0.39.0-2.el9s.x86_64 from ovirt-master-centos-stream-opstools-collectd5-testing
- cannot install the best update candidate for package python3-qpid-proton-0.35.0-2.el9s.x86_64
- cannot install the best update candidate for package python3-pyngus-2.3.0-8.el9s.noarch
Problem 3: package python3-oslo-messaging-12.13.3-1.el9s.noarch from @System requires python3-pyngus, but none of the providers can be installed
- package python3-pyngus-2.3.0-8.el9s.noarch from @System requires python3.9dist(python-qpid-proton), but none of the providers can be installed
- package python3-pyngus-2.3.0-8.el9s.noarch from ovirt-master-centos-stream-openstack-yoga-testing requires python3.9dist(python-qpid-proton), but none of the providers can be installed
- package python3-qpid-proton-0.35.0-2.el9s.x86_64 from @System requires qpid-proton-c(x86-64) = 0.35.0-2.el9s, but none of the providers can be installed
- package python3-qpid-proton-0.35.0-2.el9s.x86_64 from ovirt-master-centos-stream-openstack-yoga-testing requires qpid-proton-c(x86-64) = 0.35.0-2.el9s, but none of the providers can be installed
- cannot install both qpid-proton-c-0.39.0-2.el9s.x86_64 from ovirt-master-centos-stream-opstools-collectd5-testing and qpid-proton-c-0.35.0-2.el9s.x86_64 from @System
- cannot install both qpid-proton-c-0.35.0-2.el9s.x86_64 from ovirt-master-centos-stream-openstack-yoga-testing and qpid-proton-c-0.39.0-2.el9s.x86_64 from ovirt-master-centos-stream-opstools-collectd5-testing
- cannot install the best update candidate for package qpid-proton-c-0.35.0-2.el9s.x86_64
- cannot install the best update candidate for package python3-oslo-messaging-12.13.3-1.el9s.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
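All three problems are conflicts between the leftover openstack-yoga/collectd5 testing repos and CentOS Stream 9 appstream. A hedged workaround, assuming you no longer need packages from the yoga repo, is to keep it out of the transaction (or fall back on the options dnf itself suggests):
dnf update --disablerepo='ovirt-master-centos-stream-openstack-yoga-testing' --nobest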
12 months
Certificate verification error for qemu while migrating
by Julien Deberles
Hello,
I'm running oVirt 4.4.10 and I get the following error when I launch a VM migration:
Jul 3 12:37:07 ssc-sati-02 journal[958949]: Certificate [session] owner does not match the hostname myhostname
Jul 3 12:37:07 ssc-sati-02 journal[958949]: Certificate check failed Certificate [session] owner does not match the hostname myhostname
Jul 3 12:37:07 ssc-sati-02 journal[958949]: authentication failed: Failed to verify peer's certificate
Jul 3 12:37:07 ssc-sati-02 journal[958949]: operation failed: Failed to connect to remote libvirt URI qemu+tls://myhostname/system: authentication failed: Failed to verify peer's certificate
To avoid this error I set the following parameters in /etc/libvirt/qemu.conf and restarted the vdsmd daemon:
migrate_tls_x509_verify = 0
default_tls_x509_verify = 0
But I still have the same error. Can you help me understand why these parameters are not working as expected?
kind regards,
Julien
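A hedged observation: the log shows libvirtd itself failing to verify the peer on the qemu+tls:// control connection, and the migrate_tls_x509_* knobs in qemu.conf only govern QEMU's migration data channel, so they would not silence this check; qemu.conf is also read by libvirtd, which needs a restart after edits. Something along these lines may narrow it down (16514 is libvirt's default TLS port):
systemctl restart libvirtd
# compare the remote certificate's subject/SAN with the hostname in the URI
openssl s_client -connect myhostname:16514 </dev/null 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName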
1 year
Disk blocked
by Louis Barbonnais
Hello,
I apologize, I am new to oVirt and I am lost. I have just installed oVirt on CentOS 9 with local storage. I am trying to add ISO images to my storage domain, but they end up locked, and I cannot delete or unlock them.
Could you please assist me?
1 year
Deploying ovirt-engine 4.5.6 on Rocky Linux 9: cross-origin frame error when visiting webadmin
by taleintervenor@sjtu.edu.cn
We have deployed ovirt-engine on Rocky 9.4; "engine-setup" ran all green and said it completed successfully.
But when we visit https://ovirtmu.pi.sjtu.edu.cn/ovirt-engine/webadmin, the UI reports this error:
```
2024-07-03 15:45:58,692+08 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-3) [] Uncaught exception: com.google.gwt.event.shared.UmbrellaException: Exception caught: (SecurityError) : Failed to read a named property 'kCb' from 'Window': Blocked a frame with origin "https://ovirtmu.pi.sjtu.edu.cn" from accessing a cross-origin frame.
at java.lang.Throwable.Throwable(Throwable.java:72)
at java.lang.RuntimeException.RuntimeException(RuntimeException.java:32)
at com.google.web.bindery.event.shared.UmbrellaException.UmbrellaException(UmbrellaException.java:64)
at Unknown.new t8(webadmin-0.js)
at com.google.gwt.event.shared.EventBus.$castFireEvent(EventBus.java:65)
at org.ovirt.engine.ui.webadmin.system.MessageReceivedEvent.fire(MessageReceivedEvent.java:21)
at org.ovirt.engine.ui.webadmin.system.PostMessageDispatcher.onMessage(PostMessageDispatcher.java:27)
at Unknown.c(webadmin-0.js)
Caused by: com.google.gwt.core.client.JavaScriptException: (SecurityError) : Failed to read a named property 'kCb' from 'Window': Blocked a frame with origin "https://ovirtmu.pi.sjtu.edu.cn" from accessing a cross-origin frame.
at com.google.gwt.lang.Cast.instanceOfJso(Cast.java:211)
at org.ovirt.engine.ui.webadmin.plugin.jsni.JsArrayHelper.createMixedArray(JsArrayHelper.java:36)
at org.ovirt.engine.ui.webadmin.plugin.PluginEventHandler.lambda$16(PluginEventHandler.java:105)
at org.ovirt.engine.ui.webadmin.system.MessageReceivedEvent.$dispatch(MessageReceivedEvent.java:50)
at org.ovirt.engine.ui.webadmin.system.MessageReceivedEvent.dispatch(MessageReceivedEvent.java:50)
at com.google.gwt.event.shared.GwtEvent.dispatch(GwtEvent.java:76)
at com.google.web.bindery.event.shared.SimpleEventBus.$doFire(SimpleEventBus.java:173)
... 4 more
```
Version of ovirt-engine is ovirt-engine-4.5.6-1.el9.noarch, and the setup options are:
--== CONFIGURATION PREVIEW ==--
Application mode : both
Default SAN wipe after delete : False
Host FQDN : ovirtmu.pi.sjtu.edu.cn
Firewall manager : firewalld
Update Firewall : True
Set up Cinderlib integration : False
Configure local Engine database : True
Set application as default page : True
Configure Apache SSL : True
Keycloak installation : True
Engine database host : localhost
Engine database port : 5432
Engine database secured connection : False
Engine database host name validation : False
Engine database name : engine
Engine database user name : engine
Engine installation : True
PKI organization : pi.sjtu.edu.cn
Set up ovirt-provider-ovn : True
DWH installation : True
DWH database host : localhost
DWH database port : 5432
Configure local DWH database : True
Grafana integration : False
Keycloak database host : localhost
Keycloak database port : 5432
Keycloak database secured connection : False
Keycloak database host name validation : False
Keycloak database name : ovirt_engine_keycloak
Keycloak database user name : ovirt_engine_keycloak
Configure local Keycloak database : True
Configure VMConsole Proxy : True
Configure WebSocket Proxy : True
Can anyone provide some suggestions on pinpointing the problem?
1 year
Installing the oVirt engine on CentOS 9 Stream: error while enabling the Java packages tool, pki-deps, and the PostgreSQL module
by Sachendra Shukla
Hi All,
I am trying to install the oVirt engine on CentOS 9 Stream. However, I
encountered an error while enabling the javapackages-tools, pki-deps, and
PostgreSQL modules. Below is the error message I received.
Could anyone please assist with a resolution for this issue?
[image: image.png]
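A hedged guess, since the screenshot did not come through: the module-enable step (dnf module enable javapackages-tools pki-deps postgresql:12 ...) comes from the EL8 install instructions, and on CentOS Stream 9 most of those modules do not exist, so dnf errors out. What is actually available can be checked with:
dnf module list
dnf module list javapackages-tools pki-deps postgresql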
--
Regards,
*Sachendra Shukla*
IT Administrator
Yagna iQ, Inc. and subsidiaries
Email: Sachendra.shukla(a)yagnaiq.com
Website: https://yagnaiq.com
Privacy Policy: https://www.yagnaiq.com/privacy-policy/
1 year
oVirt 4.5.6-1.el9 standalone engine - older CentOS 6 / RHEL 6 VMs hang on live migration with oVirt Node kernel 5.14.0-388.el9.x86_64
by Sumit Basu
Hi All,
In our new oVirt 4.5.6-1.el9 cluster we find that older EL6 (CentOS/RHEL) VMs keep hanging after live migration. We see this only with EL6 VMs; our EL7 and Windows VMs do not have this issue.
All these VMs were imported from an oVirt 4.3.10.4-1.el7 cluster by importing the storage domain into the new cluster. Initially we installed the latest ovirt-node-ng-installer-latest-el9 on all our nodes (8 of them) and hit this live-migration hang. We then installed ovirt-node-ng-installer-4.5.2-2022081013.el9 on the nodes and tested: the issue was resolved and we could migrate the EL6 VMs without any problems at all. Later, after upgrading the nodes, the issue came back.
The nodes with no live-migration issues have kernel 5.14.0-142.el9.x86_64; the nodes with the latest kernel, 5.14.0-388.el9.x86_64, seem to have this issue.
Has anybody faced this issue? Are any patches needed/available?
Thanks
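As a stopgap while the regression is tracked down, a hedged sketch for keeping the affected nodes on the known-good kernel (assuming the older kernel is still installed):
# list installed kernels, then make the working one the default
grubby --info=ALL | grep -E '^(index|kernel)'
grubby --set-default /boot/vmlinuz-5.14.0-142.el9.x86_64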
1 year