Unable to deploy to new host
by David White
I currently have a self-hosted engine that was restored from a backup of an engine that was originally in a hyperconverged state. (See https://lists.ovirt.org/archives/list/users@ovirt.org/message/APQ3XBU...).
This was also an upgrade from ovirt 4.4 to ovirt 4.5.
There were 4 hosts in this cluster. Unfortunately, 2 of them are completely in an "Unassigned" state right now, and I don't know why. The VMs on those hosts are working fine, but I have no way to move the VMs or manage them.
More to the point of this email:
I'm trying to re-deploy onto a 3rd host. I did a fresh install of Rocky Linux 8, and followed the instructions at https://ovirt.org/download/ and at https://ovirt.org/download/install_on_rhel.html, including the part there that is specific to Rocky.
After installing the centos-release-ovirt45 package, I then logged into the oVirt engine web UI, and went to Compute -> Hosts -> New, and have tried (and failed) many times to install / deploy to this new host.
The last error in the host deploy log is the following:
2022-09-18 21:29:39 EDT - { "uuid" : "94b93e6a-5410-4d26-b058-d7d1db0a151e",
"counter" : 404,
"stdout" : "fatal: [cha2-storage.mgt.example.com]: FAILED! => {\"msg\": \"The conditional check 'cluster_switch == \\\"ovs\\\" or (ovn_central is defined and ovn_central | ipaddr)' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller\\n\\nThe error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may\\nbe elsewhere in the file depending on the exact syntax problem.\\n\\nThe offending line appears to be:\\n\\n- block:\\n - name: Install ovs\\n ^ here\\n\"}",
"start_line" : 405,
"end_line" : 406,
"runner_ident" : "e2cbd38d-64fa-4ecd-82c6-114420ea14a4",
"event" : "runner_on_failed",
"pid" : 65899,
"created" : "2022-09-19T01:29:38.983937",
"parent_uuid" : "02113221-f1b3-920f-8bd4-00000000003d",
"event_data" : {
"playbook" : "ovirt-host-deploy.yml",
"playbook_uuid" : "73a6e8f1-3836-49e1-82fd-5367b0bf4e90",
"play" : "all",
"play_uuid" : "02113221-f1b3-920f-8bd4-000000000006",
"play_pattern" : "all",
"task" : "Install ovs",
"task_uuid" : "02113221-f1b3-920f-8bd4-00000000003d",
"task_action" : "package",
"task_args" : "",
"task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml:3",
"role" : "ovirt-provider-ovn-driver",
"host" : "cha2-storage.mgt.example.com",
"remote_addr" : "cha2-storage.mgt.example.com",
"res" : {
"msg" : "The conditional check 'cluster_switch == \"ovs\" or (ovn_central is defined and ovn_central | ipaddr)' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller\n\nThe error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n- block:\n - name: Install ovs\n ^ here\n",
"_ansible_no_log" : false
},
"start" : "2022-09-19T01:29:38.919334",
"end" : "2022-09-19T01:29:38.983680",
"duration" : 0.064346,
"ignore_errors" : null,
"event_loop" : null,
"uuid" : "94b93e6a-5410-4d26-b058-d7d1db0a151e"
}
}
On the engine, I have verified that netaddr is installed. And just for kicks, I've installed as many different versions as I can find:
[root@ovirt-engine1 host-deploy]# rpm -qa | grep netaddr
python38-netaddr-0.7.19-8.1.1.el8.noarch
python2-netaddr-0.7.19-8.1.1.el8.noarch
python3-netaddr-0.7.19-8.1.1.el8.noarch
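For what it's worth, here's a quick loop I can run on the engine to see which interpreters can actually import netaddr, since the ipaddr filter needs it in whichever Python the Ansible controller actually runs under, not merely in some Python on the system (the interpreter paths below are just examples):

```shell
# Check, per interpreter, whether netaddr is importable -- the ipaddr
# filter needs it in the Python that ansible-runner uses, not merely
# in some Python on the system (paths below are examples)
for py in /usr/bin/python3.6 /usr/bin/python3.8 /usr/libexec/platform-python; do
    if [ -x "$py" ] && "$py" -c 'import netaddr' 2>/dev/null; then
        echo "$py: netaddr OK"
    else
        echo "$py: netaddr missing (or no such interpreter)"
    fi
done
```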
The engine is based on CentOS Stream 8 (when I moved the engine out of the hyperconverged environment, my goal was to keep things as close to the original environment as possible).
[root@ovirt-engine1 host-deploy]# cat /etc/redhat-release
CentOS Stream release 8
The engine is fully up-to-date:
[root@ovirt-engine1 host-deploy]# uname -a
Linux ovirt-engine1.mgt.barredowlweb.com 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
And the engine has the following repos:
[root@ovirt-engine1 host-deploy]# yum repolist
repo id                                 repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
centos-ceph-pacific CentOS-8-stream - Ceph Pacific
centos-gluster10 CentOS-8-stream - Gluster 10
centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
centos-opstools CentOS-OpsTools - collectd
centos-ovirt45 CentOS Stream 8 - oVirt 4.5
extras CentOS Stream 8 - Extras
extras-common CentOS Stream 8 - Extras common packages
ovirt-45-centos-stream-openstack-yoga CentOS Stream 8 - oVirt 4.5 - OpenStack Yoga Repository
ovirt-45-upstream oVirt upstream for CentOS Stream 8 - oVirt 4.5
powertools CentOS Stream 8 - PowerTools
Why does deploying to this new Rocky host keep failing?
Sent with Proton Mail secure email.
1 year, 8 months
oVirt 4.5 on Rocky 9
by Bjorn M
Hi,
I'm moving all my infra nodes to Rocky 9 and my oVirt cluster is next on the list. I'm deploying a standalone oVirt VM on a KVM box and will set up the hosts afterwards. All are to run on Rocky 9 x86_64.
I followed https://www.ovirt.org/download/install_on_rhel.html and created an Ansible playbook to set up the customisations.
I now have all repos set up correctly, or at least that is my understanding.
When I run yum search ovirt-engine I get a number of packages available from the repos, but not the ovirt-engine package itself. I do see the ovirt-hosted-engine packages, but I prefer the standalone option.
This makes sense, as I don't find the package at http://mirror.stream.centos.org/SIGs/9-stream/virt/x86_64/ovirt-45/Packag... , which is where all the other ovirt- packages are.
Running yum whatprovides engine-setup also comes up empty.
I then decided to install ovirt-engine-appliance 4.5-20220419160254.1.el9 from ovirt-45-upstream, but that package produces an error on the GPG key import.
It's unclear whether the issue is specific to my stack or more widespread. The missing ovirt-engine package is confusing, though.
Any help is appreciated,
Cheers, Bjorn
OUTPUT :
(0)[root@ovirt ~]# yum repolist
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
repo id repo name
appstream Rocky Linux 9 - AppStream
baseos Rocky Linux 9 - BaseOS
c9s-extras-common CentOS Stream 9 - Extras packages
centos-ceph-pacific CentOS-9-stream - Ceph Pacific
centos-gluster10 CentOS-9-stream - Gluster 10
centos-nfv-openvswitch CentOS Stream 9 - NFV OpenvSwitch
centos-openstack-yoga CentOS-9 - OpenStack yoga
centos-opstools CentOS Stream 9 - OpsTools - collectd
centos-ovirt45 CentOS Stream 9 - oVirt 4.5
centos-rabbitmq-38 CentOS-9 - RabbitMQ 38
crb Rocky Linux 9 - CRB
epel Extra Packages for Enterprise Linux 9 - x86_64
extras Rocky Linux 9 - Extras
ovirt-45-upstream oVirt upstream for CentOS Stream 9 - oVirt 4.5
resilientstorage Rocky Linux 9 - Resilient Storage
(0)[root@ovirt ~]# yum search ovirt-engine
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 3:37:50 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
======================================================================================================== Name Matched: ovirt-engine ========================================================================================================
ovirt-engine-appliance.x86_64 : The oVirt Engine Appliance image (OVA)
ovirt-engine-extension-aaa-ldap.noarch : oVirt Engine LDAP Users Management Extension
ovirt-engine-extension-aaa-ldap-setup.noarch : oVirt Engine LDAP Users Management Extension Setup Tool
ovirt-engine-extensions-api.noarch : oVirt engine extensions API
ovirt-engine-extensions-api-javadoc.noarch : oVirt engine extensions API documentation
ovirt-engine-nodejs-modules.noarch : Node.js modules required to build oVirt JavaScript applications
ovirt-engine-wildfly.x86_64 : WildFly Application Server for oVirt Engine
python3-ovirt-engine-sdk4.x86_64 : oVirt Engine Software Development Kit (Python)
(0)[root@ovirt ~]# yum install ovirt-engine
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 3:37:57 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
No match for argument: ovirt-engine
Error: Unable to find a match: ovirt-engine
(1)[root@ovirt ~]#
(1)[root@ovirt ~]# yum search ovirt-hosted
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 3:46:40 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
======================================================================================================== Name Matched: ovirt-hosted ========================================================================================================
ovirt-hosted-engine-ha.noarch : oVirt Hosted Engine High Availability Manager
ovirt-hosted-engine-setup.noarch : oVirt Hosted Engine setup tool
(0)[root@ovirt ~]#
(0)[root@ovirt ~]# yum whatprovides engine-setup
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 3:51:37 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
Error: No Matches found
(1)[root@ovirt ~]#
(1)[root@ovirt ~]# yum install ovirt-engine-appliance.x86_64
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 4:08:35 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
Dependencies resolved.
============================================================================================================================================================================================================================================
Package Architecture Version Repository Size
============================================================================================================================================================================================================================================
Installing:
ovirt-engine-appliance x86_64 4.5-20220419160254.1.el9 ovirt-45-upstream 1.6 G
Transaction Summary
============================================================================================================================================================================================================================================
Install 1 Package
Total size: 1.6 G
Installed size: 1.6 G
Is this ok [y/N]: y
Downloading Packages:
[SKIPPED] ovirt-engine-appliance-4.5-20220419160254.1.el9.x86_64.rpm: Already downloaded
oVirt upstream for CentOS Stream 9 - oVirt 4.5 2.8 MB/s | 2.9 kB 00:00
Importing GPG key 0xFE590CB7:
Userid : "oVirt <infra(a)ovirt.org>"
Fingerprint: 31A5 D783 7FAD 7CB2 86CD 3469 AB8C 4F9D FE59 0CB7
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
Is this ok [y/N]: y
Key import failed (code 2). Failing package is: ovirt-engine-appliance-4.5-20220419160254.1.el9.x86_64
GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: GPG check FAILED
(1)[root@ovirt ~]#
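One thing I plan to try next is importing the key into the rpm keyring manually before retrying (a sketch; the key id is the last 8 hex digits of the fingerprint shown above, and the key path is taken from the error output):

```shell
# Derive the short key id from the fingerprint shown in the dnf output,
# check the rpm keyring for it, and import the key file if it is absent
FP="31A5 D783 7FAD 7CB2 86CD 3469 AB8C 4F9D FE59 0CB7"
KEYID=$(printf '%s' "$FP" | tr -d ' ' | tail -c 8 | tr 'A-Z' 'a-z')
rpm -q "gpg-pubkey-$KEYID" >/dev/null 2>&1 || \
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
```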
Re: Error during deployment of ovirt-engine
by Peter H
Hi Jonas,
I get the same error when I try to install. Last weekend I managed to do it
(but with a lot of other problems) so something has changed during the week.
After the VM was up, I logged in through ssh and saw that the oVirt appliance
comes with Python 3.6 and Python 3.8, both of which have netaddr installed.
Part of the log:
...
[ INFO ] TASK [ovirt.ovirt.engine_setup : Update setup packages]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Copy yum configuration file]
[ INFO ] changed: [localhost -> 192.168.222.35]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Set 'best' to false]
[ INFO ] changed: [localhost -> 192.168.222.35]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Update all packages]
[ INFO ] changed: [localhost -> 192.168.222.35]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Remove temporary yum
configuration file]
[ INFO ] changed: [localhost -> 192.168.222.35]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Set offline parameter if
variable is set]
...
I noticed that after the task "Update all packages" Python 3.9 gets
installed, and that version does not have netaddr installed. My theory is
that the playbook somehow uses the newest version of Python that is
installed.
pip3.9 list
Package Version
------------ -------
ansible-core 2.13.3
cffi 1.14.3
cryptography 3.3.1
idna 2.10
pip 20.2.4
ply 3.11
pycparser 2.20
PyYAML 5.4.1
setuptools 50.3.2
six 1.15.0
I have tried another run where I installed the netaddr module as soon as
Python 3.9 got installed; that installation went further, but then it hit
another error.
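For reference, the manual workaround step was roughly this (a sketch; the paths inside the engine VM may differ on your system):

```shell
# Install netaddr for the newly appeared Python 3.9 inside the engine VM --
# a stopgap, assuming pip is available for that interpreter
python3.9 -m pip install netaddr
# Confirm the module is now importable from that exact interpreter
python3.9 -c 'import netaddr; print(netaddr.__version__)'
```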
When I log into my hosted engine VM from last week there is no Python-3.9.
My dnf(1) skills are not good enough to figure out which dependency is
causing Python-3.9 to be installed.
There are probably a lot of other modules missing that can explain the
other error(s) I see.
I will see if I can find out how to file a proper bug report.
BR
Peter
Re: Should I migrate existing oVirt Engine, or deploy new?
by David White
Hi Paul,
Thanks for the response.
I think you're suggesting that I take a hybrid approach, and do a restore of the current Engine onto the new VM. I hadn't thought about this option.
Essentially what I was considering was either:
- Export to OVA or something
OR
- Build a completely new oVirt engine with a completely new domain, etc... and try to live migrate the VMs from the old engine to the new engine.
Do I understand you correctly that you're suggesting I install the OS onto a new VM, and try to do a restore of the oVirt settings onto the new VM (after I put the cluster into Global maintenance mode and shutdown the old oVirt)?
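For reference, my understanding is that the backup/restore flow would look roughly like this with engine-backup (file names are placeholders, and please correct me if I have the flags wrong):

```shell
# On the old engine (after enabling global maintenance):
engine-backup --mode=backup --file=engine-backup-$(date +%F).tar.gz --log=backup.log

# On the freshly installed VM, restore and provision the databases,
# then run setup:
engine-backup --mode=restore --file=engine-backup-YYYY-MM-DD.tar.gz \
    --log=restore.log --provision-all-databases --restore-permissions
engine-setup
```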
Sent with Proton Mail secure email.
------- Original Message -------
On Friday, August 19th, 2022 at 10:46 AM, Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk> wrote:
> Hello David,
> I don't think there's a documented method to go from a Hosted Engine to standalone, only the other way, standalone to HE.
>
> I would suggest doing a full backup of the engine, preparing the new VM, and restoring to that rather than trying to export it.
> That way you can shut down the original engine and run the new engine VM to test that it works, and you will be able to restart the original engine if it doesn't.
>
> Regards,
> Paul S.
>
>
>
>
>
> From: David White via Users <users(a)ovirt.org>
> Sent: 19 August 2022 15:27
> To: David White <dmwhite823(a)protonmail.com>
> Cc: oVirt Users <users(a)ovirt.org>
> Subject: [ovirt-users] Re: Should I migrate existing oVirt Engine, or deploy new?
>
> Caution External Mail: Do not click any links or open any attachments unless you trust the sender and know that the content is safe.
>
> In other words, I want to migrate the Engine from a hyperconverged environment into a stand-alone setup.
>
>
> Sent with Proton Mail secure email.
>
> ------- Original Message -------
> On Friday, August 19th, 2022 at 10:17 AM, David White via Users <users(a)ovirt.org> wrote:
>
>
> > Hello,
> > I have just purchased a Synology SA3400 which I plan to use for my oVirt storage domain(s) going forward. I'm currently using Gluster storage in a hyperconverged environment.
> >
> > My goal now is to:
> >
> > - Use the Synology Virtual Machine manager to host the oVirt Engine on the Synology
> > - Setup NFS storage on the Synology as the storage domain for all VMs in our environment
> > - Migrate all VM storage onto the new NFS domain
> > - Get rid of Gluster
> >
> >
> > My first step is to migrate the oVirt Engine off of Gluster storage / off the Hyperconverged hosts into the Synology Virtual Machine manager.
> >
> > Is it possible to migrate the existing oVirt Engine (put the cluster into Global Maintenance Mode, shutdown oVirt, export to VDI or something, and then import into Synology's virtualization)? Or would it be better for me to install a completely new Engine, and then somehow migrate all of the VMs from the old engine into the new engine?
> >
> > Thanks,
> > David
> >
> >
> > Sent with Proton Mail secure email.
>
> To view the terms under which this email is distributed, please go to:-
> https://leedsbeckett.ac.uk/disclaimer/email
Migrating VM from old ovirt to a new one
by Facundo Badaracco
Hi everyone
I would like to ask you a question
I have an oVirt 4.4 whose certificates got corrupted, and I can't access the GUI.
It gives me a 500 Internal Server Error when I try to access it.
I have a new oVirt 4.5 on GlusterFS which I would like to migrate all my
VMs to.
Is this possible? Can it be done via the CLI? It would save me from having to
reinstall all the VMs.
Gluster setup for oVirt
by Jonas
Hello all
I tried to set up Gluster volumes in Cockpit using the wizard. Based on
Red Hat's recommendations I wanted to put the volume for the oVirt
Engine on a thick provisioned logical volume [1] and therefore removed
the thinpoolname line and the corresponding configuration from the yml file
(see below). Unfortunately, this approach was not successful. My
solution is now to only create the data volume with the wizard and to
create a thick provisioned Gluster volume for the engine manually. What
would you recommend doing?
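Concretely, the manual fallback I have in mind looks roughly like this (names and sizes from my config; please treat this as a sketch, not a tested recipe):

```shell
# Thick provisioned LV for the engine volume (no thin pool involved),
# on each of the three nodes:
lvcreate -L 100G -n lv_tier1_ovirt_engine_01 vg_tier1_01
mkfs.xfs /dev/vg_tier1_01/lv_tier1_ovirt_engine_01
mkdir -p /gluster_bricks/tier1-ovirt-engine-01
mount /dev/vg_tier1_01/lv_tier1_ovirt_engine_01 /gluster_bricks/tier1-ovirt-engine-01
mkdir -p /gluster_bricks/tier1-ovirt-engine-01/gb-01
# ...add an fstab entry and set SELinux labels, then create the volume
# from one node:
gluster volume create tier1-ovirt-engine-01 replica 3 arbiter 1 \
    server-005.storage.int.rabe.ch:/gluster_bricks/tier1-ovirt-engine-01/gb-01 \
    server-006.storage.int.rabe.ch:/gluster_bricks/tier1-ovirt-engine-01/gb-01 \
    server-007.storage.int.rabe.ch:/gluster_bricks/tier1-ovirt-engine-01/gb-01
```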
Thanks for any input :)
Regards,
Jonas
[1]:
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infr...
hc_nodes:
  hosts:
    server-005.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier1_01
          pvname: /dev/md/raid_tier1_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier1_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier1_01
      blacklist_mpath_devices:
        - raid_tier1_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier1_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier1_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 100G
        - vgname: vg_tier1_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 16000G
    server-006.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier1_01
          pvname: /dev/md/raid_tier1_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier1_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier1_01
      blacklist_mpath_devices:
        - raid_tier1_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier1_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier1_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 100G
        - vgname: vg_tier1_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 16000G
    server-007.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier0_01
          pvname: /dev/md/raid_tier0_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier0_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier0_01
      blacklist_mpath_devices:
        - raid_tier0_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier0_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier0_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 20G
        - vgname: vg_tier0_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 32G
  vars:
    gluster_infra_disktype: JBOD
    gluster_infra_dalign: 1024K
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - server-005.storage.int.rabe.ch
      - server-006.storage.int.rabe.ch
      - server-007.storage.int.rabe.ch
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: tier1-ovirt-engine-01
        brick: /gluster_bricks/tier1-ovirt-engine-01/gb-01
        arbiter: 1
      - volname: tier1-ovirt-data-01
        brick: /gluster_bricks/tier1-ovirt-data-01/gb-01
        arbiter: 1
static IP with OVN subnet
by ravi k
Hello,
Hope you are all doing well. We have an interesting thing going on and so wanted to share with you all for some ideas and feedback.
We configured a cluster with an OVS switch and added an external network with a subnet, so OVN started managing the subnet using DHCP. But here's the problem: when we add a NIC on this network, an IP is assigned by DHCP, but we want an IP of our choice. Here's what we tried.
1. We tried using ovn-nbctl lsp-set-addresses, e.g. ovn-nbctl lsp-set-addresses 7840c97b-73c2-4246-a2a8-0e9e5b7f420a "56:6f:6b:54:00:ec 10.19.3.8", to set the static IP. But this does not persist across a NIC unplug or a VM reboot.
2. You might ask why not just assign a static IP inside the guest and skip the external subnet. We want to use security groups, and for security groups to work, the IP assigned to the NIC and the IP in fixed_ips in `openstack port show` must be the same. The same problem repeats here: if we use `openstack port set` to update fixed_ips, a NIC unplug or a VM reboot will remove the IP.
We have an internal IPAM that provides us IPs during our automated provisioning. Did anyone try this, or have any ideas on how to work around it? The only option seems to be to just use DHCP and then update that IP in our internal IPAM tool.
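For completeness, the non-persistent sequence we currently run looks like this (IDs and addresses are from our test setup; as described above, the assignment is lost on NIC unplug or VM reboot):

```shell
PORT=7840c97b-73c2-4246-a2a8-0e9e5b7f420a   # OVN logical switch port / Neutron port id
MAC=56:6f:6b:54:00:ec
IP=10.19.3.8
# Pin the address on the OVN side...
ovn-nbctl lsp-set-addresses "$PORT" "$MAC $IP"
# ...and mirror it into fixed_ips so the security-group match works
openstack port set --fixed-ip ip-address="$IP" "$PORT"
```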
Please let me know if I'm unclear and any more info is required.
Regards,
rav