oVirt & (Ceph) iSCSI
by Matthew J Black
Hi Everybody (Hi Dr. Nick),
So, the next question in my ongoing saga: *somewhere* in the documentation I read that when using oVirt with multiple iSCSI paths (in my case, multiple Ceph iSCSI Gateways) we need to set up DM Multipath.
My question is: Is this still relevant information when using oVirt v4.5.2?
Relevant link referred to by the oVirt Documentation:
- https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/...
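For reference, my understanding of what that documentation asks for is roughly the drop-in below. oVirt/VDSM manages /etc/multipath.conf itself, so a conf.d file seems the safer place; the device values are my reading of the Ceph iSCSI gateway docs and may well need correcting:

```text
# /etc/multipath/conf.d/ceph-iscsi.conf -- drop-in so VDSM's own
# /etc/multipath.conf stays untouched
devices {
    device {
        vendor                 "LIO-ORG"
        hardware_handler       "1 alua"
        path_grouping_policy   "failover"
        path_selector          "queue-length 0"
        path_checker           tur
        prio                   alua
        prio_args              exclusive_pref_bit
        no_path_retry          queue
        failback               60
    }
}
```

followed by a `systemctl reload multipathd` on each host.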
Cheers
Dulux-Oz
2 years, 2 months
Self-hosted-engine timeout and recovering time
by Marcos Sungaila
Hi all,
I have a cluster running the 4.4.10 release with 6 KVM hosts and Self-Hosted-Engine.
I'm testing some network outage scenarios, and I faced strange behavior.
After disconnecting the KVM host hosting the SHE, there was a long timeout before the Self-Hosted-Engine switched to another host as expected.
It also took a relatively long time to take over the HA VMs from the failing server.
Is there a configuration where I can reduce the SHE timeout to make this recovery process faster?
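In case it's useful to anyone answering: I've been observing the behaviour with the commands below, on defaults (no HA tuning applied on my side):

```text
# On any HA host: current HA score, engine state and host liveness
hosted-engine --vm-status

# The timeouts actually being applied show up in the HA logs
tail -f /var/log/ovirt-hosted-engine-ha/agent.log
tail -f /var/log/ovirt-hosted-engine-ha/broker.log
```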
Regards,
Marcos Sungaila
How do I migrate a running VM off unassigned host?
by David White
OK — now that I'm able to (re)deploy oVirt to new hosts, I need to migrate VMs that are running on hosts that are currently in an "unassigned" state in the cluster.
This is the result of having moved the oVirt engine OUT of a hyperconverged environment onto its own stand-alone system, while simultaneously upgrading oVirt from v4.4 to the latest v4.5.
See the following email threads:
- https://lists.ovirt.org/archives/list/users@ovirt.org/thread/TZAUCM3GB5ER...
- https://lists.ovirt.org/archives/list/users@ovirt.org/thread/3IWXZ7VXM6CY...
The oVirt engine knows about the VMs, and oVirt knows about the storage that those VMs are on. But the engine sees 2 of my hosts as "unassigned", and I've been unable to migrate the disks to new storage, live-migrate a VM off an unassigned host, or clone an existing VM.
Is there a way to recover from this scenario? I was thinking something along the lines of manually shutting down the VM on the unassigned host, and then somehow forcing the engine to bring the VM online again on a healthy host?
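For completeness, this is how I've been checking what the engine thinks is running where, via the REST API (engine FQDN, password and host name below are placeholders):

```text
# List the VMs the engine believes are on the unassigned host
curl -s -k -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/vms?search=host%3DHOSTNAME'

# Status of the host itself
curl -s -k -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/hosts?search=name%3DHOSTNAME'
```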
Thanks,
David
Sent with Proton Mail secure email.
Long-running backup (hung in image-finalizing state)
by Jirka Simon
Hello there.
We have an issue with backups on our cluster: one backup started 2 days ago and is still in the finalizing state.
select * from vm_backups;
-[ RECORD 1 ]------+--------------------------------------
backup_id          | b9c458e6-64e2-41c2-93b8-96761e71f82b
from_checkpoint_id |
to_checkpoint_id   | 7a558f2a-57b6-432f-b5dd-85f5fb9dac8e
vm_id              | c3b2199f-35cc-41dc-8787-835e945217d2
phase              | Ready
_create_date       | 2022-09-17 00:44:56.877+02
host_id            |
description        |
_update_date       | 2022-09-17 00:45:19.057+02
backup_type        | hybrid
snapshot_id        | 0c6ebd56-dcfe-46a8-91cc-327cc94e9773
is_stopped         | f
(1 row)
And if I check the image_transfers table, I see bytes_sent = bytes_total.
engine=# select it.disk_id, bd.disk_alias, it.last_updated, it.bytes_sent, it.bytes_total
         from image_transfers as it, base_disks as bd
         where it.disk_id = bd.disk_id;
-[ RECORD 1 ]+-------------------------------------------------------
disk_id      | 950279ef-485c-400e-ba66-a3f545618de5
disk_alias   | log1.util.prod.hq.sldev.cz_log1.util.prod.hq.sldev.cz
last_updated | 2022-09-17 01:43:09.229+02
bytes_sent   | 214748364800
bytes_total  | 214748364800
There is no error in the logs.
If I run /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -qc, it reports no locked records anywhere.
I can clean these records from the DB to fix it, but it will happen again in a few days.
vdsm.x86_64 4.50.2.2-1.el8
ovirt-engine.noarch 4.5.2.4-1.el8
Is there anything I can check to find the reason for this?
Thank you, Jirka
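(One thing I was considering instead of touching the DB directly: asking the engine to finalize the transfer over the REST API. Sketch below, with engine FQDN, password and transfer id as placeholders; I have not confirmed this clears the hybrid-backup case.)

```text
# List transfers the engine still considers active
curl -s -k -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/imagetransfers'

# Ask the engine to finalize one transfer instead of editing the DB
curl -s -k -u 'admin@internal:PASSWORD' -X POST \
  -H 'Content-Type: application/xml' -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/imagetransfers/TRANSFER_ID/finalize'
```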
Unable to deploy to new host
by David White
I currently have a self-hosted engine that was restored from a backup of an engine that was originally in a hyperconverged state. (See https://lists.ovirt.org/archives/list/users@ovirt.org/message/APQ3XBU...).
This was also an upgrade from ovirt 4.4 to ovirt 4.5.
There were 4 hosts in this cluster. Unfortunately, 2 of them are completely in an "Unassigned" state right now, and I don't know why. The VMs on those hosts are working fine, but I have no way to move the VMs or manage them.
More to the point of this email:
I'm trying to re-deploy onto a 3rd host. I did a fresh install of Rocky Linux 8, and followed the instructions at https://ovirt.org/download/ and at https://ovirt.org/download/install_on_rhel.html, including the part there that is specific to Rocky.
After installing the centos-release-ovirt45 package, I then logged into the oVirt engine web UI, and went to Compute -> Hosts -> New, and have tried (and failed) many times to install / deploy to this new host.
The last error in the host deploy log is the following:
2022-09-18 21:29:39 EDT - { "uuid" : "94b93e6a-5410-4d26-b058-d7d1db0a151e",
"counter" : 404,
"stdout" : "fatal: [cha2-storage.mgt.example.com]: FAILED! => {\"msg\": \"The conditional check 'cluster_switch == \\\"ovs\\\" or (ovn_central is defined and ovn_central | ipaddr)' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller\\n\\nThe error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may\\nbe elsewhere in the file depending on the exact syntax problem.\\n\\nThe offending line appears to be:\\n\\n- block:\\n - name: Install ovs\\n ^ here\\n\"}",
"start_line" : 405,
"end_line" : 406,
"runner_ident" : "e2cbd38d-64fa-4ecd-82c6-114420ea14a4",
"event" : "runner_on_failed",
"pid" : 65899,
"created" : "2022-09-19T01:29:38.983937",
"parent_uuid" : "02113221-f1b3-920f-8bd4-00000000003d",
"event_data" : {
"playbook" : "ovirt-host-deploy.yml",
"playbook_uuid" : "73a6e8f1-3836-49e1-82fd-5367b0bf4e90",
"play" : "all",
"play_uuid" : "02113221-f1b3-920f-8bd4-000000000006",
"play_pattern" : "all",
"task" : "Install ovs",
"task_uuid" : "02113221-f1b3-920f-8bd4-00000000003d",
"task_action" : "package",
"task_args" : "",
"task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml:3",
"role" : "ovirt-provider-ovn-driver",
"host" : "cha2-storage.mgt.example.com",
"remote_addr" : "cha2-storage.mgt.example.com",
"res" : {
"msg" : "The conditional check 'cluster_switch == \"ovs\" or (ovn_central is defined and ovn_central | ipaddr)' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller\n\nThe error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n- block:\n - name: Install ovs\n ^ here\n",
"_ansible_no_log" : false
},
"start" : "2022-09-19T01:29:38.919334",
"end" : "2022-09-19T01:29:38.983680",
"duration" : 0.064346,
"ignore_errors" : null,
"event_loop" : null,
"uuid" : "94b93e6a-5410-4d26-b058-d7d1db0a151e"
}
}
On the engine, I have verified that netaddr is installed. And just for kicks, I've installed as many different versions as I can find:
[root@ovirt-engine1 host-deploy]# rpm -qa | grep netaddr
python38-netaddr-0.7.19-8.1.1.el8.noarch
python2-netaddr-0.7.19-8.1.1.el8.noarch
python3-netaddr-0.7.19-8.1.1.el8.noarch
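This is the check I've been using to see which interpreter actually has the module. The error message talks about the ansible controller, i.e. the engine itself; which of these interpreters ansible-runner actually uses is my assumption to verify:

```shell
# Report, for every common interpreter present, whether netaddr imports.
# (assumption: one of these is the interpreter ansible-runner uses)
report=""
for py in python3 python3.8 python3.9; do
  if command -v "$py" >/dev/null 2>&1; then
    if "$py" -c 'import netaddr' >/dev/null 2>&1; then
      report="${report}${py}: netaddr present"$'\n'
    else
      report="${report}${py}: netaddr MISSING"$'\n'
    fi
  fi
done
printf '%s' "$report"
# If the interpreter ansible uses is the one missing it, install for that
# exact version, e.g.:  python3.9 -m pip install netaddr
```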
The engine is based on CentOS Stream 8 (when I moved the engine out of the hyperconverged environment, my goal was to keep things as close to the original environment as possible).
[root@ovirt-engine1 host-deploy]# cat /etc/redhat-release
CentOS Stream release 8
The engine is fully up-to-date:
[root@ovirt-engine1 host-deploy]# uname -a
Linux ovirt-engine1.mgt.barredowlweb.com 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
And the engine has the following repos:
[root@ovirt-engine1 host-deploy]# yum repolist
repo id                               repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
centos-ceph-pacific CentOS-8-stream - Ceph Pacific
centos-gluster10 CentOS-8-stream - Gluster 10
centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
centos-opstools CentOS-OpsTools - collectd
centos-ovirt45 CentOS Stream 8 - oVirt 4.5
extras CentOS Stream 8 - Extras
extras-common CentOS Stream 8 - Extras common packages
ovirt-45-centos-stream-openstack-yoga CentOS Stream 8 - oVirt 4.5 - OpenStack Yoga Repository
ovirt-45-upstream oVirt upstream for CentOS Stream 8 - oVirt 4.5
powertools CentOS Stream 8 - PowerTools
Why does deploying to this new Rocky host keep failing?
Sent with Proton Mail secure email.
oVirt 4.5 on Rocky 9
by Bjorn M
Hi,
I'm moving all my infra nodes to Rocky 9 and my oVirt cluster is next on the list. I'm deploying a standalone oVirt VM on a KVM box and will set up the hosts afterwards. All are to run on Rocky 9 x86_64.
I followed https://www.ovirt.org/download/install_on_rhel.html and created an Ansible playbook to set up the customisations.
I now have all repos set up correctly, or at least that is my understanding.
When I run yum search ovirt-engine, I get a number of packages from the repos, but not the ovirt-engine package itself. I do see the ovirt-hosted-engine packages, but I prefer the standalone option.
This makes sense, as I don't find the package at http://mirror.stream.centos.org/SIGs/9-stream/virt/x86_64/ovirt-45/Packag... , which is where all the other ovirt-* packages are.
Yum whatprovides engine-setup likewise finds nothing.
I then decided to install ovirt-engine-appliance 4.5-20220419160254.1.el9 from ovirt-45-upstream, but that package fails with an error on the GPG key import.
It's unclear whether the issue is with my specific stack or wider. The missing ovirt-engine package is confusing, though.
Any help is appreciated,
Cheers, Bjorn
OUTPUT :
(0)[root@ovirt ~]# yum repolist
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
repo id repo name
appstream Rocky Linux 9 - AppStream
baseos Rocky Linux 9 - BaseOS
c9s-extras-common CentOS Stream 9 - Extras packages
centos-ceph-pacific CentOS-9-stream - Ceph Pacific
centos-gluster10 CentOS-9-stream - Gluster 10
centos-nfv-openvswitch CentOS Stream 9 - NFV OpenvSwitch
centos-openstack-yoga CentOS-9 - OpenStack yoga
centos-opstools CentOS Stream 9 - OpsTools - collectd
centos-ovirt45 CentOS Stream 9 - oVirt 4.5
centos-rabbitmq-38 CentOS-9 - RabbitMQ 38
crb Rocky Linux 9 - CRB
epel Extra Packages for Enterprise Linux 9 - x86_64
extras Rocky Linux 9 - Extras
ovirt-45-upstream oVirt upstream for CentOS Stream 9 - oVirt 4.5
resilientstorage Rocky Linux 9 - Resilient Storage
(0)[root@ovirt ~]# yum search ovirt-engine
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 3:37:50 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
======================================================================================================== Name Matched: ovirt-engine ========================================================================================================
ovirt-engine-appliance.x86_64 : The oVirt Engine Appliance image (OVA)
ovirt-engine-extension-aaa-ldap.noarch : oVirt Engine LDAP Users Management Extension
ovirt-engine-extension-aaa-ldap-setup.noarch : oVirt Engine LDAP Users Management Extension Setup Tool
ovirt-engine-extensions-api.noarch : oVirt engine extensions API
ovirt-engine-extensions-api-javadoc.noarch : oVirt engine extensions API documentation
ovirt-engine-nodejs-modules.noarch : Node.js modules required to build oVirt JavaScript applications
ovirt-engine-wildfly.x86_64 : WildFly Application Server for oVirt Engine
python3-ovirt-engine-sdk4.x86_64 : oVirt Engine Software Development Kit (Python)
(0)[root@ovirt ~]# yum install ovirt-engine
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 3:37:57 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
No match for argument: ovirt-engine
Error: Unable to find a match: ovirt-engine
(1)[root@ovirt ~]#
(1)[root@ovirt ~]# yum search ovirt-hosted
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 3:46:40 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
======================================================================================================== Name Matched: ovirt-hosted ========================================================================================================
ovirt-hosted-engine-ha.noarch : oVirt Hosted Engine High Availability Manager
ovirt-hosted-engine-setup.noarch : oVirt Hosted Engine setup tool
(0)[root@ovirt ~]#
(0)[root@ovirt ~]# yum whatprovides engine-setup
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 3:51:37 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
Error: No Matches found
(1)[root@ovirt ~]#
(1)[root@ovirt ~]# yum install ovirt-engine-appliance.x86_64
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 4:08:35 ago on Wed 14 Sep 2022 08:24:39 AM CEST.
Dependencies resolved.
============================================================================================================================================================================================================================================
Package Architecture Version Repository Size
============================================================================================================================================================================================================================================
Installing:
ovirt-engine-appliance x86_64 4.5-20220419160254.1.el9 ovirt-45-upstream 1.6 G
Transaction Summary
============================================================================================================================================================================================================================================
Install 1 Package
Total size: 1.6 G
Installed size: 1.6 G
Is this ok [y/N]: y
Downloading Packages:
[SKIPPED] ovirt-engine-appliance-4.5-20220419160254.1.el9.x86_64.rpm: Already downloaded
oVirt upstream for CentOS Stream 9 - oVirt 4.5 2.8 MB/s | 2.9 kB 00:00
Importing GPG key 0xFE590CB7:
Userid : "oVirt <infra(a)ovirt.org>"
Fingerprint: 31A5 D783 7FAD 7CB2 86CD 3469 AB8C 4F9D FE59 0CB7
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
Is this ok [y/N]: y
Key import failed (code 2). Failing package is: ovirt-engine-appliance-4.5-20220419160254.1.el9.x86_64
GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: GPG check FAILED
(1)[root@ovirt ~]#
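What I plan to try next, noted here in case someone can confirm it's the right move: importing the key manually before the install (key path taken from the error output above):

```text
# Inspect the key file dnf is trying to use
gpg --show-keys /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5

# Import it into the rpm keyring manually, then retry the install
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
yum install ovirt-engine-appliance
```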
Re: Error during deployment of ovirt-engine
by Peter H
Hi Jonas,
I get the same error when I try to install. Last weekend I managed to do it
(though with a lot of other problems), so something has changed during the week.
After the VM was up, I logged in through SSH and saw that the oVirt appliance
comes with Python 3.6 and Python 3.8, both of which have netaddr installed.
Part of the log:
...
[ INFO ] TASK [ovirt.ovirt.engine_setup : Update setup packages]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Copy yum configuration file]
[ INFO ] changed: [localhost -> 192.168.222.35]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Set 'best' to false]
[ INFO ] changed: [localhost -> 192.168.222.35]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Update all packages]
[ INFO ] changed: [localhost -> 192.168.222.35]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Remove temporary yum
configuration file]
[ INFO ] changed: [localhost -> 192.168.222.35]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Set offline parameter if
variable is set]
...
I noticed that after the task "Update all packages", Python 3.9 gets
installed, and that version does not have netaddr. My theory is that the
playbook somehow uses the newest Python version installed.
pip3.9 list
Package Version
------------ -------
ansible-core 2.13.3
cffi 1.14.3
cryptography 3.3.1
idna 2.10
pip 20.2.4
ply 3.11
pycparser 2.20
PyYAML 5.4.1
setuptools 50.3.2
six 1.15.0
I tried another run where I installed the netaddr module as soon as
Python 3.9 got installed; that run got further, but then hit another error.
When I log into my hosted engine VM from last week there is no Python-3.9.
My dnf(1) skills are not good enough to figure out which dependency is
causing Python-3.9 to be installed.
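In case someone with better dnf-fu can improve on it, these are the queries I know of for this; I'm assuming the package name is python39 on the appliance, and the exact provide name is precisely what I'm unsure about:

```text
# Installed packages that require python3.9
rpm -q --whatrequires python39

# Same question asked of dnf
dnf repoquery --installed --whatrequires python39
```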
There are probably other modules missing as well, which would explain the
other error(s) I see.
I will see if I can find out how to file a proper bug report.
BR
Peter
Re: Should I migrate existing oVirt Engine, or deploy new?
by David White
Hi Paul,
Thanks for the response.
I think you're suggesting that I take a hybrid approach, and do a restore of the current Engine onto the new VM. I hadn't thought about this option.
Essentially what I was considering was either:
- Export to OVA or something
OR
- Build a completely new oVirt engine with a completely new domain, etc... and try to live migrate the VMs from the old engine to the new engine.
Do I understand you correctly that you're suggesting I install the OS onto a new VM, and try to do a restore of the oVirt settings onto the new VM (after I put the cluster into Global maintenance mode and shutdown the old oVirt)?
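If I've understood correctly, the sequence would be roughly the following (file names are placeholders; corrections welcome):

```text
# On the old engine, after 'hosted-engine --set-maintenance --mode=global':
engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

# On the new VM, after a fresh OS install plus the engine packages:
engine-backup --mode=restore --file=engine-backup.tar.gz \
    --log=restore.log --provision-all-databases
engine-setup
```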
Sent with Proton Mail secure email.
------- Original Message -------
On Friday, August 19th, 2022 at 10:46 AM, Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk> wrote:
> Hello David,
> I don't think there's a documented method to go from a Hosted Engine to standalone, only the other way, standalone to HE.
>
> I would suggest doing a full backup of the engine, preparing the new VM, and restoring to that, rather than trying to export it.
> This way you can shut down the original engine and run the new engine VM to test that it works, as you will be able to restart the original engine if it doesn't.
>
> Regards,
> Paul S.
>
>
>
>
>
> From: David White via Users <users(a)ovirt.org>
> Sent: 19 August 2022 15:27
> To: David White <dmwhite823(a)protonmail.com>
> Cc: oVirt Users <users(a)ovirt.org>
> Subject: [ovirt-users] Re: Should I migrate existing oVirt Engine, or deploy new?
>
> Caution External Mail: Do not click any links or open any attachments unless you trust the sender and know that the content is safe.
>
> In other words, I want to migrate the Engine from a hyperconverged environment into a stand-alone setup.
>
>
> Sent with Proton Mail secure email.
>
> ------- Original Message -------
> On Friday, August 19th, 2022 at 10:17 AM, David White via Users <users(a)ovirt.org> wrote:
>
>
> > Hello,
> > I have just purchased a Synology SA3400 which I plan to use for my oVirt storage domain(s) going forward. I'm currently using Gluster storage in a hyperconverged environment.
> >
> > My goal now is to:
> >
> > - Use the Synology Virtual Machine manager to host the oVirt Engine on the Synology
> > - Setup NFS storage on the Synology as the storage domain for all VMs in our environment
> > - Migrate all VM storage onto the new NFS domain
> > - Get rid of Gluster
> >
> >
> > My first step is to migrate the oVirt Engine off of Gluster storage / off the Hyperconverged hosts into the Synology Virtual Machine manager.
> >
> > Is it possible to migrate the existing oVirt Engine (put the cluster into Global Maintenance Mode, shutdown oVirt, export to VDI or something, and then import into Synology's virtualization)? Or would it be better for me to install a completely new Engine, and then somehow migrate all of the VMs from the old engine into the new engine?
> >
> > Thanks,
> > David
> >
> >
> > Sent with Proton Mail secure email.
>
> To view the terms under which this email is distributed, please go to:-
> https://leedsbeckett.ac.uk/disclaimer/email