hosted-engine-setup --deploy fails on CentOS Stream 8
by andrea.crisanti@uniroma1.it
Hi,
I am trying to install oVirt 4.5 on a 4-host cluster running CentOS Stream 8, but the engine does not start and the whole process fails.
Here is my procedure
dnf install centos-release-ovirt45
dnf module reset virt
dnf module enable virt:rhel
dnf install ovirt-engine-appliance
dnf install ovirt-hosted-engine-setup
The latest version of Ansible (ansible-core 2.13) uses Python 3.9, and the installation fails because some Python 3.9 modules are missing
(python39-netaddr, python39-jmespath) and cannot be installed (they conflict with python3-jmespath). So I downgraded Ansible to ansible-core 2.12:
dnf downgrade ansible-core
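One way to keep a later dnf update from pulling ansible-core 2.13 back in is the versionlock plugin (an optional extra step, not part of the documented procedure):
dnf install python3-dnf-plugin-versionlock
dnf versionlock add ansible-core
dnf versionlock list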
Now
hosted-engine-setup --deploy --4
proceeds further, but stops because it cannot start the engine:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
I looked into the log file
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20221007132728-yp7cd1.log
and I found the following error:
2022-10-07 13:28:30,881+0200 ERROR ansible failed {
    "ansible_host": "localhost",
    "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
    "ansible_result": {
        "_ansible_no_log": false,
        "changed": false,
        "cmd": [
            "virsh",
            "net-undefine",
            "default"
        ],
        "delta": "0:00:00.039258",
        "end": "2022-10-07 13:28:30.710401",
        "invocation": {
            "module_args": {
                "_raw_params": "virsh net-undefine default",
                "_uses_shell": false,
                "argv": null,
                "chdir": null,
                "creates": null,
                "executable": null,
                "removes": null,
                "stdin": null,
                "stdin_add_newline": true,
                "strip_empty_ends": true,
                "warn": false
            }
        },
        "msg": "non-zero return code",
        "rc": 1,
        "start": "2022-10-07 13:28:30.671143",
        "stderr": "error: failed to get network 'default'\nerror: Network not found: no network with matching name 'default'",
        "stderr_lines": [
            "error: failed to get network 'default'",
            "error: Network not found: no network with matching name 'default'"
        ],
        "stdout": "",
        "stdout_lines": []
    },
    "ansible_task": "Update libvirt default network configuration, undefine",
    "ansible_type": "task",
    "status": "FAILED",
    "task_duration": 0
}
Needless to say, firewalld and libvirtd are both up, and virsh net-list gives:
 Name          State    Autostart   Persistent
------------------------------------------------
 ;vdsmdummy;   active   no          no
 default       active   no          yes
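(A hedged aside: since net-undefine reports the network as missing while net-list shows it, it may be worth checking what libvirt reports over an explicit system connection; the URI below is the libvirt default, nothing here is taken from these logs:)
virsh -c qemu:///system net-list --all
virsh -c qemu:///system net-dumpxml default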
I googled around without success.
Has anyone had similar problems?
At the end of this past July I installed oVirt on another cluster running CentOS Stream 8, following the procedure I just described, with no problem.
If needed I can post all log files.
Thanks for the help.
Best
Andrea
Get Vlan IDs from multiple VMs
by chry3@hotmail.com
Hello,
I'm extracting various pieces of information from the VMs deployed on a server, which I need to process later. I already have details such as IPs, MACs, CPU cores, etc., but I would also like to get the VLAN ID of each VM. How can I do that? How can I find out which network profile my VM is using?
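(One possible way to get at this through the REST API; a rough sketch only, where the engine URL, credentials and object ids are placeholders:)
# list the NICs of one VM; every <nic> carries a <vnic_profile href="..."> link
curl -s -k --user 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vms/<vm_id>/nics'
# the vNIC profile in turn links to its network
curl -s -k --user 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vnicprofiles/<profile_id>'
# the network object contains <vlan id="..."/> when the network is VLAN-tagged
curl -s -k --user 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/networks/<network_id>'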
Kind Regards.
Adding documentation - Migrating from a self-hosted engine to a standalone
by David White
Hi. I'm working on adding instructions to the documentation for how to migrate from a self-hosted engine to a standalone manager. This will be the first time I've contributed anything to this project (or any larger open source project, for that matter), so before I get too far into the weeds, I wanted to run this by the community so as not to waste my time if this is a bad idea.
My general high level documentation is pasted at the bottom of this email. These are the steps that I took when I did my own migration.
(Note to self: Need to add a step at the end to log into the Manager and disable the Gluster service by going to Compute --> Clusters --> (Edit the cluster) --> Uncheck the "Enable Gluster Service" checkbox.)
And I've forked the ovirt-site repo and started working on the documentation here: https://github.com/dmwhite823/ovirt-site/tree/migrate-engine-to-standalone
I still have a ways to go before I'm ready to request a PR, but I'm open to any & all feedback. I think that I'm done with most of the changes necessary to ovirt-site/source/documentation/migrating_from_a_self-hosted_engine_to_a_standalone_manager/index.adoc, but I'm unclear why there's also a master.adoc file.
Note that I copied files into the new directory of migrating_from_a_self-hosted_engine_to_a_standalone_manager from the already existing migrating_from_a_standalone_manager_to_a_self-hosted_engine structure directory, and am editing the files in the new directory.
Will this be useful / helpful? Would others on the team like to contribute to improving these instructions prior to issuing a PR to the ovirt-site git repo? Am I even doing this right? 🤣
Thanks,
David
High level overview of steps required (pasted below):
Pre-req: Make sure no VMs are using an HA lease on a Gluster domain
1) Migrate all storage off Gluster
2) Remove all gluster volumes from oVirt
3) Put cluster into global maintenance
hosted-engine --set-maintenance --mode=global
4) On new VM:
Install CentOS Stream & add ovirt repos
dnf install centos-release-ovirt45
dnf module enable javapackages-tools pki-deps postgresql:12 mod_auth_openidc:2.3 nodejs:14
Stop & Disable the engine
# systemctl stop ovirt-engine
# systemctl disable ovirt-engine
Set up DNS in /etc/hosts if you don't have local DNS servers
Back up the engine (the backup file then needs to reach the new VM; see the sketch after these steps)
# engine-backup --mode=backup --file=file_name --log=log_file_name
Restore
# engine-backup --mode=restore --file=engine-backup-09172022-1 --log=restore --restore-permissions
Run engine-setup
# engine-setup
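The steps above assume the backup file ends up on the new VM before the restore runs; a minimal sketch of that hand-off (hostnames and file names below are placeholders, not part of the original notes):
# on the machine where the backup was taken
engine-backup --mode=backup --file=engine-backup-$(date +%m%d%Y)-1 --log=backup.log
scp engine-backup-* root@new-engine.example.com:/root/
# then continue with the restore and engine-setup commands shown above on the new VM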
oVirt 4.5.3 is now generally available
by Lev Veyde
The oVirt project is excited to announce the general availability of oVirt
4.5.3, as of October 18th, 2022.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.4.
Important notes before you install / upgrade
Some of the features included in oVirt 4.5.3 require content that is
available in RHEL 8.6 (or newer) and derivatives.
NOTE: If you’re going to install oVirt 4.5.3 on RHEL or similar, please
read Installing on RHEL or derivatives
<https://ovirt.org/download/install_on_rhel.html> first.
Documentation
Be sure to follow instructions for oVirt 4.5!
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.5.3 Release?
This release is available now on x86_64 architecture for:
- CentOS Stream 8
- RHEL 8.6 and derivatives
This release supports Hypervisor Hosts on x86_64:
- oVirt Node NG (based on CentOS Stream 8)
- CentOS Stream 8
- RHEL 8.6 and derivatives
This release also supports Hypervisor Hosts on x86_64 as tech preview without secure boot:
- CentOS Stream 9
- RHEL 9.0 and derivatives
- oVirt Node NG based on CentOS Stream 9
Builds are also available for ppc64le and aarch64.
Known issues:
- On EL9 with UEFI secure boot, vdsm fails to decode DMI data due to Bug 2081648 <https://bugzilla.redhat.com/show_bug.cgi?id=2081648> - python-dmidecode module fails to decode DMI data
Security fixes included in oVirt 4.5.3 compared to latest oVirt 4.5.2:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
Some of the RFEs with high user impact are listed below:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
Some of the Bugs with high user impact are listed below:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
oVirt Node will be released shortly after the release reaches the CentOS mirrors.
See the release notes for installation instructions and a list of new
features and bugs fixed.
Additional resources:
- Read more about the oVirt 4.5.3 release highlights: https://www.ovirt.org/release/4.5.3/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: https://blogs.ovirt.org/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Network Interface Already In Use - Self-Hosted Install
by Matthew J Black
Hi Guys & Girls,
<begin_rant>
OK, so I am really, *really* starting to get fed up with this. I know this is probably my fault, but even if it is, the oVirt documentation isn't helping in any way (being... "less than clear").
Instead of having to rely on the "black box" that is Ansible, what I'd really like is a simple set of clear-cut, step-by-step instructions, so that we actually *know* what is going on when attempting to do a Self-Hosted install. After all, oVirt's "competition" doesn't make things so difficult...
<end_rant>
Now that I've got that off my chest, I'm trying to do a straightforward Self-Hosted Install. I've followed the instructions in the oVirt doco pretty much to the letter, and I'm still having problems.
My (pre-install) set-up:
- A freshly installed server (oVirt_Node_1) running Rocky Linux 8.6 with 3 NICs - NIC_1, NIC_2, & NIC_3.
- There are three VLANs - VLAN_A (172.16.1.0/24), VLAN_B (172.16.2.0/24), & VLAN_C (172.16.3.0/24).
- NIC_1 & NIC_2 are formed into a bond (bond_1).
- bond_1 is an 802.3ad bond.
- bond_1 has 2 sub-interfaces - bond_1.a & bond_1.b
- Interface bond_1.a is in VLAN_A.
- Interface bond_1.b is in VLAN_B.
- NIC_3 is sitting in VLAN_C.
- VLAN_A is the everyday "working" VLAN where the rest of the servers all sit (ie DNS Servers, Local Repository Server, etc, etc, etc), and where the oVirt Engine (OVE) will sit.
- VLAN_B is for data throughput to and from the Ceph iSCSI Gateways in our Ceph Storage Cluster. This is a dedicated isolated VLAN with no gateway (ie only the oVirt Hosting Nodes and the Ceph iSCSI Gateways are on this VLAN).
- VLAN_C is for OOB management traffic. This is a dedicated isolated VLAN with no gateway.
Everything is working. Everything can ping properly back and forth within the individual VLANs and VLAN_A can ping out to the Internet via its gateway (172.16.1.1).
Because we don't require iSCSI connectivity for the OVE (it's on a working local Gluster TSP volume), the iSCSI side hasn't *yet* been implemented.
After trying to do the install using our Local Repository Mirror (after discovering and mirroring all the required repositories), I gave up on that because for a "one-off" install it wasn't worth the time and effort it was taking, especially when it "seems" that the Ansible playbook wants the "original" repositories anyway - but that's another rant/issue.
So, I'm using all the original repositories as per the oVirt doco, including the special instructions for Rocky Linux and RHEL-derivatives in general, and using the defaults for the answers to the deployment script (except where there are no defaults) - and now I've got the following error:
~~~
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-start", "default"], "delta": "0:00:00.031972", "end": "2022-10-04 16:41:38.603454", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:38.571482", "stderr": "error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a", "stderr_lines": ["error: Failed to start network default", "error: internal error: Network is already in use by interface bond_1.a"], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed getting local_vm_dir
~~~
The relevant lines from the log file (at least I think these are the relevant lines):
~~~
2022-10-04 16:41:35,712+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Update libvirt default network configuration, undefine]
2022-10-04 16:41:37,017+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'stdout': '', 'stderr': "error: failed to get network 'default'\nerror: Network not found: no network with matching name 'default'", 'rc': 1, 'cmd': ['virsh', 'net-undefine', 'default'], 'start': '2022-10-04 16:41:35.806251', 'end': '2022-10-04 16:41:36.839780', 'delta': '0:00:01.033529', 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh net-undefine default', '_uses_shell': False, 'warn': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["error: failed to get network 'default'", "error: Network not found: no network with matching name 'default'"], '_ansible_no_log': False}
2022-10-04 16:41:37,118+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ignored: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-undefine", "default"], "delta": "0:00:01.033529", "end": "2022-10-04 16:41:36.839780", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:35.806251", "stderr": "error: failed to get network 'default'\nerror: Network not found: no network with matching name 'default'", "stderr_lines": ["error: failed to get network 'default'", "error: Network not found: no network with matching name 'default'"], "stdout": "", "stdout_lines": []}
2022-10-04 16:41:37,219+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Update libvirt default network configuration, define]
2022-10-04 16:41:38,421+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2022-10-04 16:41:38,522+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Activate default libvirt network]
2022-10-04 16:41:38,823+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'stdout': '', 'stderr': 'error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a', 'rc': 1, 'cmd': ['virsh', 'net-start', 'default'], 'start': '2022-10-04 16:41:38.571482', 'end': '2022-10-04 16:41:38.603454', 'delta': '0:00:00.031972', 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh net-start default', '_uses_shell': False, 'warn': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ['error: Failed to start network default', 'error: internal error: Network is already in use by interface bond_1.a'], '_ansible_no_log': False}
2022-10-04 16:41:38,924+1100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-start", "default"], "delta": "0:00:00.031972", "end": "2022-10-04 16:41:38.603454", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:38.571482", "stderr": "error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a", "stderr_lines": ["error: Failed to start network default", "error: internal error: Network is already in use by interface bond_1.a"], "stdout": "", "stdout_lines": []}
2022-10-04 16:41:39,125+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 PLAY RECAP [localhost] : ok: 106 changed: 32 unreachable: 0 skipped: 61 failed: 1
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:226 ansible-playbook rc: 2
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:233 ansible-playbook stdout:
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:236 ansible-playbook stderr:
2022-10-04 16:41:39,226+1100 DEBUG otopi.plugins.gr_he_ansiblesetup.core.misc misc._closeup:475 {'otopi_host_net': {'ansible_facts': {'otopi_host_net': ['ens0p1', 'bond_1.a', 'bond_1.b']}, '_ansible_no_log': False, 'changed': False}, 'ansible-playbook_rc': 2}
2022-10-04 16:41:39,226+1100 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 485, in _closeup
raise RuntimeError(_('Failed getting local_vm_dir'))
RuntimeError: Failed getting local_vm_dir
2022-10-04 16:41:39,227+1100 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Closing up': Failed getting local_vm_dir
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:775 ENV BASE/error=bool:'True'
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:775 ENV BASE/exceptionInfo=list:'[(<class 'RuntimeError'>, RuntimeError('Failed getting local_vm_dir',), <traceback object at 0x7f5210013088>)]'
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
~~~
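(A hedged aside: that particular libvirt error is normally about an address or route overlap between the network being started and an existing interface, so it can help to compare the default network's definition with what bond_1.a currently carries; nothing below is taken from this log:)
~~~
virsh net-dumpxml default     # bridge name and IP range the default network wants
ip -br addr show bond_1.a     # addresses already configured on the bond sub-interface
ip route show                 # any route overlapping that range triggers the error
~~~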
So, would someone please help me in getting this sorted - I mean, how are we supposed to do this install if the interface we need to connect to the box in the first place can't be used because it's "already in use"?
Cheers
Dulux-Oz
Hyperconverged install fails to add second and third hosts
by Calvin Ellison
Hello fellow users, I'm having trouble standing up a brand new cluster using
Equinix Metal. The three servers are their "n3.xlarge.x86" model, which
uses an Intel Xeon Gold 6314U CPU in a Supermicro SSG-110P-NTR10-EI018
server.
The entire Hyperconverged installation process appears to complete without
error, but when I log into the manager only one host is listed and only
that host's Gluster brick appears in the UI. The only hint of a problem in
the UI is in the Tasks pane: two failed tasks to add the other hosts.
Where do I get started troubleshooting?
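(A few standard places to start, in case it helps; the paths below are the defaults and nothing here is specific to this deployment:)
# on the engine VM
less /var/log/ovirt-engine/engine.log
ls -lrt /var/log/ovirt-engine/host-deploy/     # one log per attempted host add
# on the hosts that failed to be added
journalctl -u vdsmd --no-pager | tail -n 200
# on any of the storage nodes
gluster peer status
gluster volume status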
Calvin Ellison
Systems Architect
calvin.ellison(a)voxox.com
+1 (213) 285-0555
<http://voxox.com>
<https://www.facebook.com/VOXOX/> <https://www.instagram.com/voxoxofficial/>
<https://www.linkedin.com/company/3573541/admin/>
<https://twitter.com/Voxox>
The information contained herein is confidential and privileged information
or work product intended only for the individual or entity to whom it is
addressed. Any unauthorized use, distribution, or copying of this
communication is strictly prohibited. If you have received this
communication in error, please notify me immediately.
install ovirt on maas provider
by Charles Kozler
Hello - I am attempting to install the oVirt hosted engine (engine running as a
VM in a cluster). I have configured 10 servers at a metal-as-a-service
provider. The server wiring has been configured to our specifications;
however, the MaaS provider requires bond0 to be set up beforehand and the
two interface VLANs preconfigured on the OS. I am not new to setting up
oVirt, though my experience has been different with every installation, and this is a
new type of deployment for me (MaaS + oVirt), so I have some questions.
bond0.2034 = Uplink to WAN switch
bond0.3071 = Uplink to LAN internal switch (will need to become ovirtmgmt)
The gluster storage links are separate and not in scope for this
conversation right now as I am just trying to establish my first host and
engine.
Historically, oVirt has been a bit aggressive in how it sets up its
networks at install time (in my experience), and when any network came
preconfigured it would typically drop it entirely, which means I lose
my SSH session while oVirt decides what to do.
It is still TBD whether or not I have DRAC/OOB access to these systems, so
let's assume for now that I do not.
That all being said, I am looking to see if anyone can tell me what to
expect with my current configuration. I cannot risk running the hosted-engine
install and having Ansible drop my network without first confirming OOB
availability, so any suggestions are welcome.
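(One precaution that could be taken before running the deploy is to snapshot the current network configuration so it can be compared or restored out-of-band later; the paths below are the stock EL8 locations, adjust as needed:)
nmcli -f NAME,DEVICE,TYPE,ACTIVE con show > /root/nm-connections-before-deploy.txt
cp -a /etc/sysconfig/network-scripts /root/network-scripts.backup
cp -a /etc/NetworkManager/system-connections /root/system-connections.backup
ip -br addr show > /root/ip-addr-before-deploy.txt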
Error during deployment of ovirt-engine
by Jonas
Hello all
I'm trying to deploy an oVirt Engine through the cockpit interface.
Unfortunately the deployment fails with the following error:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set admin username]
[ INFO ] ok: [localhost -> 192.168.222.95]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for ovirt-engine service to start]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Expose engine VM webui over a local port via ssh port forwarding]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Evaluate temporary bootstrap engine VM URL]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Display the temporary bootstrap engine VM URL]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Detect VLAN ID]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set Engine public key as authorized key without validating the TLS/SSL certificates]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ ERROR ] ovirtsdk4.AuthError: Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials.
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": false, "msg": "Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials."}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
I can log in to the VM just fine over SSH, but when I try to log in over
the web interface as admin, the password is not accepted. I tried both
complex and simple passwords for root/admin, but none worked so far.
This previous discussion did not solve my problems:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/HMLCEG2LPSWF...
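(A hedged idea for ruling out a mangled admin password: on the bootstrap engine VM the internal admin account can be reset with the aaa-jdbc tool, assuming the default internal profile is in use; the expiry timestamp below is only an example:)
ovirt-aaa-jdbc-tool user password-reset admin --password-valid-to="2030-01-01 00:00:00Z"   # prompts for the new password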
Thank you,
Jonas
There is no glusterfs management dashboard in the Cockpit background
by ziyi Liu
I have several data centers. The first data center contains the engine, and the GlusterFS management dashboard can be seen in the Cockpit interface, but the GlusterFS management dashboard cannot be seen in the second data center.
Steps to create the second data center: I create a data center in the engine, then set up a GlusterFS cluster in the Cockpit interface, and finally mount the GlusterFS volume in the engine.
Is there something wrong with the way I am adding it?
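(One possible first check is whether the hosts in the two data centers carry the same Cockpit/Gluster plugin packages; the commands below are generic and not confirmed against this setup:)
rpm -qa | grep -Ei 'cockpit|gluster' | sort > /tmp/pkgs-$(hostname -s).txt
# run the same command on a host in the other data center and diff the two files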
Re: OVS Task Error when Add EL Host
by erdosi.peter@kifu.gov.hu
Actually, there is some strange stuff going on with a newly deployed HE:
[root@poc ~]# yum update
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 0:34:52 ago on Fri Oct 14 01:09:57 2022.
Error:
Problem 1: package ovirt-openvswitch-2.15-4.el8.noarch requires openvswitch2.15, but none of the providers can be installed
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-119.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-106.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-110.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-115.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-117.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-22.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-23.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-24.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-27.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-30.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-32.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-35.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-37.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-39.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-41.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-47.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-48.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-51.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-52.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-53.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-54.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-56.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-6.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-72.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-75.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-80.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-81.el8s.x86_64
- package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-88.el8s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-2.15-4.el8.noarch
- cannot install the best update candidate for package openvswitch2.15-2.15.0-119.el8s.x86_64
Problem 2: package ovirt-openvswitch-ovn-common-2.15-4.el8.noarch requires ovn-2021, but none of the providers can be installed
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.12.0-82.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.03.0-21.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.03.0-40.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.06.0-17.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.06.0-29.el8s.x86_64
- package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.12.0-11.el8s.x86_64
- cannot install the best update candidate for package ovn-2021-21.12.0-82.el8s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-ovn-common-2.15-4.el8.noarch
Problem 3: package ovirt-openvswitch-ovn-central-2.15-4.el8.noarch requires ovn-2021-central, but none of the providers can be installed
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.12.0-82.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.03.0-21.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.03.0-40.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.06.0-17.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.06.0-29.el8s.x86_64
- package rdo-ovn-central-2:22.06-3.el8.noarch obsoletes ovn-2021-central < 22.06 provided by ovn-2021-central-21.12.0-11.el8s.x86_64
- cannot install the best update candidate for package ovn-2021-central-21.12.0-82.el8s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-ovn-central-2.15-4.el8.noarch
Problem 4: package ovirt-python-openvswitch-2.15-4.el8.noarch requires python3-openvswitch2.15, but none of the providers can be installed
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-119.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-106.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-110.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-115.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-117.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-22.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-23.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-24.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-27.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-30.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-32.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-35.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-37.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-39.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-41.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-47.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-48.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-51.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-52.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-53.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-54.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-56.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-6.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-72.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-75.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-80.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-81.el8s.x86_64
- package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-88.el8s.x86_64
- cannot install the best update candidate for package python3-openvswitch2.15-2.15.0-119.el8s.x86_64
- cannot install the best update candidate for package ovirt-python-openvswitch-2.15-4.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Maybe this can give a clue to someone.
I used this ISO: ovirt-node-ng-installer-4.5.2-2022081013.el8
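(A hedged way to see where the conflicting rdo-* packages come from, and one possible way to keep them out if that repository is not actually needed on this node; the repo id is a placeholder:)
dnf info rdo-openvswitch rdo-ovn rdo-ovn-central | grep -E '^(Name|Version|Repository)'
# if they come from a repo this host does not need, they can be excluded there:
# dnf config-manager --save --setopt='<repo_id>.exclude=rdo-openvswitch* rdo-ovn* python3-rdo-openvswitch*'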