Network Interface Already In Use - Self-Hosted Install
by Matthew J Black
Hi Guys & Girls,
<begin_rant>
OK, so I am really, *really* starting to get fed up with this. I know this is probably my fault, but even if it is, the oVirt documentation isn't helping in any way (being... "less than clear").
Instead of having to rely on the "black box" that is Ansible, what I'd really like is a simple set of clear-cut, step-by-step instructions, so that we actually *know* what is going on when attempting a Self-Hosted install. After all, oVirt's "competition" doesn't make things so difficult...
<end_rant>
Now that I've got that off my chest, I'm trying to do a straightforward Self-Hosted Install. I've followed the instructions in the oVirt doco pretty much to the letter, and I'm still having problems.
My (pre-install) set-up:
- A freshly installed server (oVirt_Node_1) running Rocky Linux 8.6 with 3 NICs - NIC_1, NIC_2, & NIC_3.
- There are three VLANs - VLAN_A (172.16.1.0/24), VLAN_B (172.16.2.0/24), & VLAN_C (172.16.3.0/24).
- NIC_1 & NIC_2 are formed into a bond (bond_1).
- bond_1 is an 802.3ad bond.
- bond_1 has 2 sub-interfaces - bond_1.a & bond_1.b
- Interface bond_1.a is in VLAN_A.
- Interface bond_1.b is in VLAN_B.
- NIC_3 is sitting in VLAN_C.
- VLAN_A is the everyday "working" VLAN where the rest of the servers all sit (ie DNS Servers, Local Repository Server, etc, etc, etc), and where the oVirt Engine (OVE) will sit.
- VLAN_B is for data throughput to and from the Ceph iSCSI Gateways in our Ceph Storage Cluster. This is a dedicated isolated VLAN with no gateway (ie only the oVirt Hosting Nodes and the Ceph iSCSI Gateways are on this VLAN).
- VLAN_C is for OOB management traffic. This is a dedicated isolated VLAN with no gateway.
Everything is working. Everything can ping properly back and forth within the individual VLANs and VLAN_A can ping out to the Internet via its gateway (172.16.1.1).
Because we don't require iSCSI connectivity for the OVE (it's on a working local Gluster TSP volume), the iSCSI hasn't *yet* been implemented.
After trying to do the install using our Local Repository Mirror (after discovering and mirroring all the required repositories), I gave up on that approach because, for a "one-off" install, it wasn't worth the time and effort it was taking - especially when it "seems" that the Ansible playbook wants the "original" repositories anyway - but that's another rant/issue.
So, I'm using all the original repositories as per the oVirt doco, including the special instructions for Rocky Linux and RHEL-derivatives in general, and using the defaults for the answers to the deployment script (except where there are no defaults) - and now I've got the following error:
~~~
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-start", "default"], "delta": "0:00:00.031972", "end": "2022-10-04 16:41:38.603454", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:38.571482", "stderr": "error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a", "stderr_lines": ["error: Failed to start network default", "error: internal error: Network is already in use by interface bond_1.a"], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed getting local_vm_dir
~~~
The relevant lines from the log file (at least I think these are the relevant lines):
~~~
2022-10-04 16:41:35,712+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Update libvirt default network configuration, undefine]
2022-10-04 16:41:37,017+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'stdout': '', 'stderr': "error: failed to get network 'default'\nerror: Network not found: no network with matching name 'default'", 'rc': 1, 'cmd': ['virsh', 'net-undefine', 'default'], 'start': '2022-10-04 16:41:35.806251', 'end': '2022-10-04 16:41:36.839780', 'delta': '0:00:01.033529', 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh net-undefine default', '_uses_shell': False, 'warn': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["error: failed to get network 'default'", "error: Network not found: no network with matching name 'default'"], '_ansible_no_log': False}
2022-10-04 16:41:37,118+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ignored: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-undefine", "default"], "delta": "0:00:01.033529", "end": "2022-10-04 16:41:36.839780", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:35.806251", "stderr": "error: failed to get network 'default'\nerror: Network not found: no network with matching name 'default'", "stderr_lines": ["error: failed to get network 'default'", "error: Network not found: no network with matching name 'default'"], "stdout": "", "stdout_lines": []}
2022-10-04 16:41:37,219+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Update libvirt default network configuration, define]
2022-10-04 16:41:38,421+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2022-10-04 16:41:38,522+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Activate default libvirt network]
2022-10-04 16:41:38,823+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'stdout': '', 'stderr': 'error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a', 'rc': 1, 'cmd': ['virsh', 'net-start', 'default'], 'start': '2022-10-04 16:41:38.571482', 'end': '2022-10-04 16:41:38.603454', 'delta': '0:00:00.031972', 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh net-start default', '_uses_shell': False, 'warn': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ['error: Failed to start network default', 'error: internal error: Network is already in use by interface bond_1.a'], '_ansible_no_log': False}
2022-10-04 16:41:38,924+1100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-start", "default"], "delta": "0:00:00.031972", "end": "2022-10-04 16:41:38.603454", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:38.571482", "stderr": "error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a", "stderr_lines": ["error: Failed to start network default", "error: internal error: Network is already in use by interface bond_1.a"], "stdout": "", "stdout_lines": []}
2022-10-04 16:41:39,125+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 PLAY RECAP [localhost] : ok: 106 changed: 32 unreachable: 0 skipped: 61 failed: 1
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:226 ansible-playbook rc: 2
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:233 ansible-playbook stdout:
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:236 ansible-playbook stderr:
2022-10-04 16:41:39,226+1100 DEBUG otopi.plugins.gr_he_ansiblesetup.core.misc misc._closeup:475 {'otopi_host_net': {'ansible_facts': {'otopi_host_net': ['ens0p1', 'bond_1.a', 'bond_1.b']}, '_ansible_no_log': False, 'changed': False}, 'ansible-playbook_rc': 2}
2022-10-04 16:41:39,226+1100 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 485, in _closeup
raise RuntimeError(_('Failed getting local_vm_dir'))
RuntimeError: Failed getting local_vm_dir
2022-10-04 16:41:39,227+1100 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Closing up': Failed getting local_vm_dir
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:775 ENV BASE/error=bool:'True'
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:775 ENV BASE/exceptionInfo=list:'[(<class 'RuntimeError'>, RuntimeError('Failed getting local_vm_dir',), <traceback object at 0x7f5210013088>)]'
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
~~~
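My guess (and it is only a guess) is that the libvirt "default" network the playbook is trying to start clashes with something already configured on bond_1.a - either the bridge device or the subnet. This is roughly what I was planning to check next; a sketch only, and I'm happy to post the actual output if it helps:
~~~
# See how the hosted-engine setup has (re)defined libvirt's "default" network
virsh net-dumpxml default

# Compare its bridge/subnet against what's already configured on the host
ip -4 addr show
ip route

# If the subnets clash, is redefining "default" onto an unused range before
# re-running the deploy the right fix? (Assumption on my part.)
# virsh net-edit default
~~~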
So, would someone please help me get this sorted - I mean, how are we supposed to do this install if the interface we need to connect to the box in the first place can't be used because it's "already in use"?
Cheers
Dulux-Oz
Hyperconverged install fails to add second and third hosts
by Calvin Ellison
Hello fellow users, I'm having trouble standing up a brand new cluster using
Equinix Metal. The three servers are their "n3.xlarge.x86" model, which
uses an Intel Xeon Gold 6314U CPU in a Supermicro SSG-110P-NTR10-EI018
server.
The entire Hyperconverged installation process appears to complete without
error, but when I log into the manager only one host is listed and only
that host's Gluster brick appears in the UI. The only hint of a problem in
the UI is in the Tasks pane: two failed tasks to add the other hosts.
Where do I get started troubleshooting?
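In case it helps, this is where I was planning to start looking myself (a rough sketch - I'm not sure these are even the right logs):
~~~
# On the host that did become the manager host
less /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log

# On the engine VM - the two failed "add host" tasks should leave traces here
less /var/log/ovirt-engine/engine.log
ls /var/log/ovirt-engine/host-deploy/
~~~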
Calvin Ellison
Systems Architect
calvin.ellison(a)voxox.com
+1 (213) 285-0555
install ovirt on maas provider
by Charles Kozler
Hello - I am attempting to install ovirt hosted engine (engine running as a
VM in a cluster). I have configured 10 servers at a metal-as-a-service
provider. The server wiring has been configured to our specifications,
however, the MaaS provider requires bond0 to be set up beforehand and the
two interface VLANs preconfigured on the OS. I am not new to setting up
oVirt, though my experience has varied with every installation, and this is a
new type of deployment for me (MaaS + oVirt), so I have some questions.
bond0.2034 = Uplink to WAN switch
bond0.3071 = Uplink to LAN internal switch (will need to become ovirtmgmt)
The gluster storage links are separate and not in scope for this
conversation right now as I am just trying to establish my first host and
engine.
Historically, oVirt has been a bit aggressive in how it sets up its
networks at install time (in my experience), and when any network came
preconfigured it would typically drop it entirely, which means I lose
my SSH session while oVirt decides what to do.
I am still TBD on whether or not I have DRAC/OOB access to these systems, so
let's assume for now that I do not.
That all being said, I am looking to see if anyone can tell me what to
expect with my current configuration. I cannot risk running the hosted-engine
install and having Ansible drop my network before I've confirmed OOB
availability, so any suggestions are welcome.
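For what it's worth, my current plan (just a sketch - I'd appreciate a sanity check) is to back up the existing network configuration before running anything, and then point the deploy at bond0.3071 when it asks which NIC to put the ovirtmgmt bridge on:
~~~
# Back up the existing NetworkManager profiles before touching anything
nmcli connection show
cp -a /etc/sysconfig/network-scripts /root/network-scripts.bak
cp -a /etc/NetworkManager/system-connections /root/nm-connections.bak

# Run the deploy and, when prompted for the NIC for the ovirtmgmt bridge,
# choose bond0.3071 (the internal LAN uplink)
hosted-engine --deploy
~~~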
Error during deployment of ovirt-engine
by Jonas
Hello all
I'm trying to deploy an oVirt Engine through the cockpit interface.
Unfortunately the deployment fails with the following error:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set admin username]
[ INFO ] ok: [localhost -> 192.168.222.95]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for ovirt-engine service to start]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Expose engine VM webui over a local port via ssh port forwarding]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Evaluate temporary bootstrap engine VM URL]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Display the temporary bootstrap engine VM URL]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Detect VLAN ID]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set Engine public key as authorized key without validating the TLS/SSL certificates]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ ERROR ] ovirtsdk4.AuthError: Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials.
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": false, "msg": "Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials."}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
I can log in just fine to the VM over SSH, but when I try to log in to
the web interface as admin, the password is not accepted. I tried both
complex and simple passwords for root/admin, but none have worked so far.
This previous discussion did not solve my problems:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/HMLCEG2LPSWF...
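Since I can SSH into the bootstrap engine VM, would it be reasonable to just reset the admin password there and let the playbook retry the SSO login? Something like this is what I had in mind (a sketch - I haven't tried it yet, and the validity date is only an example):
~~~
# On the bootstrap engine VM, reachable over SSH
ovirt-aaa-jdbc-tool user password-reset admin \
    --password-valid-to="2025-01-01 00:00:00Z"

# Then try the web login again as admin@internal before the next retry
~~~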
Thank you,
Jonas
There is no glusterfs management dashboard in the Cockpit background
by ziyi Liu
I have many data centers. The first data center contains the engine, and the GlusterFS management dashboard can be seen in the Cockpit interface there, but the GlusterFS management dashboard cannot be seen in the second data center.
The steps I used to create the second data center: I created a data center in the engine, then set up a GlusterFS cluster in the Cockpit interface, and finally mounted the GlusterFS volume in the engine.
Is there something wrong with the way I am adding it?
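Is it perhaps just a matter of the Gluster-related Cockpit packages not being present or running on the hosts in the second data center? This is what I was going to check (only a guess on my part):
~~~
# On a host in the second data center
rpm -q cockpit-ovirt-dashboard vdsm-gluster glusterfs-server
systemctl status glusterd cockpit.socket
~~~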
Re: OVS Task Error when Add EL Host
by erdosi.peter@kifu.gov.hu
Actually, there is some strange stuff going on with a newly deployed HE:
[root@poc ~]# yum update
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 0:34:52 ago on Fri Oct 14 01:09:57 2022.
Error:
Problem 1: package ovirt-openvswitch-2.15-4.el8.noarch requires openvswitch2.15, but none of the providers can be installed
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-119.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-106.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-110.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-115.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-117.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-22.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-23.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-24.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-27.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-30.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-32.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-35.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-37.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-39.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-41.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-47.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-48.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-51.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-52.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-53.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-54.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-56.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-6.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-72.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-75.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-80.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-81.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-88.el8s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-2.15-4.el8.noarch
- cannot install the best update candidate for package openvswitch2.15-2.15.0-119.el8s.x86_64
Problem 2: package ovirt-openvswitch-ovn-common-2.15-4.el8.noarch requires ovn-2021, but none of the providers can be installed
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.12.0-82.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.03.0-21.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.03.0-40.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.06.0-17.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.06.0-29.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.12.0-11.el8s.x86_64
- cannot install the best update candidate for package ovn-2021-21.12.0-82.el8s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-ovn-common-2.15-4.el8.noarch
Problem 3: package ovirt-openvswitch-ovn-central-2.15-4.el8.noarch requires ovn-2021-central, but none of the providers can be installed
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.12.0-82.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.03.0-21.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.03.0-40.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.06.0-17.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.06.0-29.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.12.0-11.el8s.x86_64
- cannot install the best update candidate for package ovn-2021-central-21.12.0-82.el8s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-ovn-central-2.15-4.el8.noarch
Problem 4: package ovirt-python-openvswitch-2.15-4.el8.noarch requires python3-openvswitch2.15, but none of the providers can be installed
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-119.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-106.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-110.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-115.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-117.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-22.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-23.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-24.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-27.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-30.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-32.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-35.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-37.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-39.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-41.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-47.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-48.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-51.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-52.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-53.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-54.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-56.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-6.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-72.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-75.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-80.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-81.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-88.el8s.x86_64
- cannot install the best update candidate for package python3-openvswitch2.15-2.15.0-119.el8s.x86_64
- cannot install the best update candidate for package ovirt-python-openvswitch-2.15-4.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Maybe this can give a clue to someone.
I used this iso: ovirt-node-ng-installer-4.5.2-2022081013.el8
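The only workaround I can think of (untested on my side, just a sketch) is the '--nobest' that dnf itself suggests, plus working out which repository is pulling in the rdo-* packages:
~~~
# Workaround only - lets the transaction pick older-but-installable candidates
yum update --nobest

# Find out which repository provides the conflicting rdo-* packages
yum repoquery -i rdo-openvswitch rdo-ovn rdo-ovn-central python3-rdo-openvswitch
~~~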
Disks in "Finalizing" state on Web Interface
by markeczzz@gmail.com
Hi!
I have a few VMs whose disks are stuck in the "Finalizing" state.
These VMs are working normally, but in the web interface the status is always "Finalizing" and the progress bar is not moving. (They have been in this state for days now.)
I can create snapshots and backups of them without problems and everything works OK.
Is there any way I can check why these disks are in that state?
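Would looking directly at the engine database be a sensible approach? I was thinking of something along these lines (a sketch only - I'm guessing "Finalizing" corresponds to a stale entry in the image_transfers table, and I haven't run this yet):
~~~
# On the engine machine: list any image transfers the engine still thinks are active
sudo -u postgres psql engine -c "SELECT * FROM image_transfers;"

# If a stale transfer is holding the disk locked, would
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh be the right tool to clear it?
~~~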
Regards,
Cannot run VM. The Custom Compatibility Version of VM ... (4.3) is not supported in Data Center compatibility version 4.4.
by nicolas@devels.es
Hi,
I'm running oVirt 4.4 and recently I changed the Data Center
compatibility version from 4.3 to 4.4.
Now it turns out that some Pools are unable to restart machines, returning
this error:
Cannot run VM. The Custom Compatibility Version of VM ... (4.3) is not
supported in Data Center compatibility version 4.4.
I cannot manually edit the custom compatibility version of the VMs, as pools
won't allow that.
Is there any manual way to fix it? I tried to do it via the Python SDK, but
it doesn't seem to work...
# vms_serv = sys_serv.vms_service(); list() filters via a search string
vm = vms_serv.list(search='name=X')[0]
# Bump the custom compatibility version from 4.3 to 4.4
ver = vm.custom_compatibility_version
ver.minor = 4
vm.custom_compatibility_version = ver
# Push the change back through the per-VM service
vmsv = sys_serv.vms_service().vm_service(id=vm.id)
vmsv.update(vm=vm)
It doesn't seem to make any difference, though.
Any help is really appreciated, as many of our users can't use their VMs.
Thanks.