Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will lose the VMs"
by gantonjo-ovirt@yahoo.com
So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we are attempting to upgrade it to the latest version, 4.4.2, but it fails as shown below. The problem is that the storage domains listed are all located on an external iSCSI SAN. The storage domains were created in another cluster we had (oVirt Node 4.3 based), detached from the old cluster and imported successfully into the new cluster through the oVirt Management interface. As I understand it, oVirt itself has created the mount points under /rhev/data-center/mnt/blockSD/ for each of the iSCSI domains, and as such they are not really storage domains on the / filesystem.
I do believe the solution to the mentioned Bugzilla bug has caused a new bug, but I may be wrong. I cannot see what we have done wrong when importing these storage domains into the cluster (well, actually, some were freshly created in this cluster, and thus fully managed by the oVirt 4.4 manager interface).
What can we do to proceed in upgrading the hosts to the latest oVirt Node?
Dependencies resolved.
=============================================================================================================================================================================================================================================================================================================================
Package Architecture Version Repository Size
=============================================================================================================================================================================================================================================================================================================================
Upgrading:
ovirt-node-ng-image-update noarch 4.4.2-1.el8 ovirt-4.4 782 M
replacing ovirt-node-ng-image-update-placeholder.noarch 4.4.1.5-1.el8
Transaction Summary
=============================================================================================================================================================================================================================================================================================================================
Upgrade 1 Package
Total download size: 782 M
Is this ok [y/N]: y
Downloading Packages:
ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm 8.6 MB/s | 782 MB 01:31
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 8.6 MB/s | 782 MB 01:31
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/3
Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will lose the VMs
See: https://bugzilla.redhat.com/show_bug.cgi?id=1550205#c3
Storage domains were found in:
/rhev/data-center/mnt/blockSD/c3df4c98-ca97-4486-a5d4-d0321a0fb801/dom_md
/rhev/data-center/mnt/blockSD/90a52746-e0cb-4884-825d-32a9d94710ff/dom_md
/rhev/data-center/mnt/blockSD/74673f68-e1fa-46cf-b0ac-a35f05d42a7a/dom_md
/rhev/data-center/mnt/blockSD/f5fe00ba-c899-428f-96a2-e8d5e5707905/dom_md
/rhev/data-center/mnt/blockSD/5c3d9aff-66a3-4555-a17d-172fbf043505/dom_md
/rhev/data-center/mnt/blockSD/4cc6074b-a5f5-4337-a32f-0ace577e5e47/dom_md
/rhev/data-center/mnt/blockSD/a7658abd-e605-455e-9253-69d7e59ff50a/dom_md
/rhev/data-center/mnt/blockSD/f18e6e5c-124b-4a66-ae98-2088c87de42b/dom_md
/rhev/data-center/mnt/blockSD/f431e29b-77cd-4e51-8f7f-dd73543dfce6/dom_md
/rhev/data-center/mnt/blockSD/0f53281c-c756-4171-bcd2-8946956ebbd0/dom_md
/rhev/data-center/mnt/blockSD/9fad9f9b-c549-4226-9278-51208411b2ac/dom_md
/rhev/data-center/mnt/blockSD/c64006e7-e22c-486f-82a5-20d2b9431299/dom_md
/rhev/data-center/mnt/blockSD/509de8b4-bc41-40fa-9354-16c24ae16442/dom_md
/rhev/data-center/mnt/blockSD/0d57fcd3-4622-41cc-ab23-744b93d175a0/dom_md
error: %prein(ovirt-node-ng-image-update-4.4.2-1.el8.noarch) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package ovirt-node-ng-image-update
Verifying : ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/3
Verifying : ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch 2/3
Verifying : ovirt-node-ng-image-update-placeholder-4.4.1.5-1.el8.noarch 3/3
Unpersisting: ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch.rpm
Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.1.5-1.el8.noarch.rpm
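For reference, here is a minimal sketch (not the actual ovirt-node-ng scriptlet check) that shows where the entries under each reported dom_md path really resolve to. For block (iSCSI) storage domains one would expect them to be symlinks into /dev/<vg>/<lv>, i.e. the data lives on the SAN even though the directory tree itself sits under /. The path list only contains the first domain from the output above and would need the rest filled in:

#!/usr/bin/env python3
# Minimal sketch: show where the files under each reported dom_md path resolve
# to, and whether they actually live on the same device as "/".
import os

paths = [
    "/rhev/data-center/mnt/blockSD/c3df4c98-ca97-4486-a5d4-d0321a0fb801/dom_md",
    # ... add the remaining paths from the scriptlet output
]

root_dev = os.stat("/").st_dev

for dom_md in paths:
    if not os.path.isdir(dom_md):
        print("%s: not present on this host" % dom_md)
        continue
    for name in sorted(os.listdir(dom_md)):
        full = os.path.join(dom_md, name)
        target = os.path.realpath(full)
        try:
            on_root = os.stat(full).st_dev == root_dev  # follows symlinks
        except OSError:
            on_root = "unknown"
        print("%s -> %s (on root filesystem: %s)" % (full, target, on_root))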
Thanks in advance for your good help.
4 years, 5 months
How to make oVirt + GlusterFS bulletproof
by Jarosław Prokopowski
Hi Guys,
Twice I had a situation where, due to an unexpected power outage, something went wrong and the VMs on GlusterFS were not recoverable.
Gluster heal did not help and I could not start the VMs any more.
Is there a way to make such setup bulletproof?
Does it matter which volume type I choose - raw or qcow2? Or thin provisioned versus preallocated?
Any other advise?
4 years, 5 months
Latest ManagedBlockDevice documentation
by Michael Thomas
I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.
I found this:
https://ovirt.org/develop/release-management/features/storage/cinderlib-i...
...but it has a big note at the top that it is "...not user
documentation and should not be treated as such."
The oVirt administration guide[1] does not talk about managed block devices.
I've found a few mailing list threads that discuss people setting up a
Managed Block Device with ceph, but didn't see any links to
documentation steps that folks were following.
Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
and if so, where is the documentation for using it?
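As far as I understand, when creating a Managed Block Storage domain the Administration Portal asks for cinderlib driver options as key/value pairs. The snippet below is only a hypothetical illustration of what those options might look like for the Cinder RBD driver; the pool name, cephx user and file paths are made up, and the exact option set should be checked against the feature page and your Ceph setup:

# Hypothetical illustration of cinderlib driver options for a Ceph RBD backend.
# Option names follow the upstream Cinder RBD driver; pool, user and paths are
# placeholders and must be adapted to the local Ceph cluster.
rbd_driver_options = {
    "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
    "rbd_pool": "ovirt-volumes",                               # placeholder pool
    "rbd_user": "ovirt",                                       # placeholder cephx user
    "rbd_ceph_conf": "/etc/ceph/ceph.conf",
    "rbd_keyring_conf": "/etc/ceph/ceph.client.ovirt.keyring",
}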
--Mike
[1]ovirt.org/documentation/administration_guide/
4 years, 5 months
How can I Disable console Single Sign On by default
by Riccardo Brunetti
Dear all.
I'm trying to set up an oVirt environment able to provide VMs to a small number of users.
I'm just using the "internal" domain and I defined some users and groups using ovirt-aaa-jdbc-tool.
Everything seems to work: the users can log in to the VM portal and they can create VMs.
The problem comes up when I try to access the VM console: in order to be able to open the noVNC console, I need to set "Disable Single Sign On" in the console settings of the VM.
But this is only possible using the Administration Portal, and I couldn't find a way to do it "as user".
How can I allow users to open the noVNC console from the VM portal?
Is there a way to set Disable Single Sign On by default?
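In case it helps: the same per-VM setting is exposed through the REST API, so an administrator could at least script it for all existing VMs instead of editing each one in the Administration Portal. Below is a minimal sketch using the oVirt Python SDK (ovirt-engine-sdk-python); the engine URL, credentials, CA path and VM name are placeholders, and if I read the SDK types correctly an empty SSO method list corresponds to "Disable Single Sign On":

#!/usr/bin/env python3
# Minimal sketch: disable console Single Sign On for one VM via the oVirt API.
# Engine URL, credentials, CA file and VM name are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    ca_file="/etc/pki/ovirt-engine/ca.pem",
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search="name=myvm")[0]
    vm_service = vms_service.vm_service(vm.id)
    # An empty method list should correspond to "Disable Single Sign On".
    vm_service.update(types.Vm(sso=types.Sso(methods=[])))
finally:
    connection.close()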
Thanks a lot
Riccardo
4 years, 5 months
Upgrade from 4.3.9 to 4.3.10 leads to sanlock errors
by mschuler@bsgtech.com
After upgrading from 4.3.9 to 4.3.10, we started to have VMs freeze/pause during the nightly (after-hours) backup runs. We suspect the increased load exposes the issue.
We reverted the hosts back to 4.3.9 and the problem went away. After some testing using a single host on .10, I am seeing the error below in sanlock.log:
2020-10-14 16:06:12 2939 [5724]: 95bd5893 aio timeout RD 0x7f7f9c0008c0:0x7f7f9c0008d0:0x7f7fa9efb000 ioto 10 to_count 1
2020-10-14 16:06:12 2939 [5724]: s2 delta_renew read timeout 10 sec offset 0 /dev/95bd5893-83d4-42f2-b333-1c65226f1d09/ids
2020-10-14 16:06:12 2939 [5724]: s2 renewal error -202 delta_length 10 last_success 2908
2020-10-14 16:06:14 2941 [5724]: 95bd5893 aio collect RD 0x7f7f9c0008c0:0x7f7f9c0008d0:0x7f7fa9efb000 result 1048576:0 match reap
So engine is still at 4.3.10. We also see the error below in messages:
Oct 14 16:09:20 HOSTNAME kernel: perf: interrupt took too long (2509 > 2500), lowering kernel.perf_event_max_sample_rate to 79000
I guess my question is twofold: how do I go about troubleshooting this further? Otherwise, would it be better/possible to move to 4.4.2 (or 4.4.3 when released)? Do all hosts have to be on 4.3.10, or can the hosts stay on 4.3.9 while the engine is at 4.3.10 to do the migration?
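For the troubleshooting part, one first step could be to pull the renewal/timeout events out of sanlock.log and check whether they line up with the backup window and how long each outage lasts. A minimal sketch, assuming the default log location /var/log/sanlock.log:

#!/usr/bin/env python3
# Minimal sketch: extract sanlock renewal/timeout events so they can be
# correlated with the nightly backup window.
import re

LOG = "/var/log/sanlock.log"  # default sanlock log location
patterns = ("aio timeout", "delta_renew read timeout", "renewal error")

with open(LOG, errors="replace") as fh:
    for line in fh:
        if any(p in line for p in patterns):
            # sanlock log lines start with "YYYY-MM-DD HH:MM:SS"
            ts = re.match(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", line)
            print((ts.group(0) if ts else "unknown time"), "|", line.rstrip())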
Thank you!
4 years, 5 months
Connection failed
by info@worldhostess.com
Messages related to the failure might be found in the journal: "journalctl -u cockpit"
This is the output:
node01.xxx.co.za cockpit-tls[8249]: cockpit-tls: gnutls_handshake failed: A
TLS fatal alert has been received.
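That gnutls message usually means the client side aborted the handshake with a fatal alert (for example because it rejected the certificate), so it may help to check whether the handshake fails for every client or only for your browser. Below is a minimal sketch that attempts a handshake against cockpit's default port 9090; the host name is a placeholder and certificate verification is deliberately disabled because cockpit normally ships a self-signed certificate:

#!/usr/bin/env python3
# Minimal sketch: try a TLS handshake against cockpit (default port 9090) and
# print what was negotiated, to see whether the server completes handshakes.
import socket
import ssl

HOST, PORT = "node01.example.co.za", 9090  # placeholder host name

context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE  # cockpit usually uses a self-signed cert

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("TLS version:", tls.version())
        print("Cipher:     ", tls.cipher())
        cert = tls.getpeercert(binary_form=True)
        print("Certificate:", len(cert) if cert else 0, "bytes (DER)")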
Any suggestion will be appreciated, as I have been struggling for days to get oVirt to
work and I can see it is still a long way for me to get an operational
solution.
Henni
4 years, 5 months
Collectd version downgrade on oVirt engine
by kushagra2agarwal@gmail.com
I am trying to downgrade the collectd version on the oVirt engine from 5.10.0 to 5.8.1-4 using Ansible, but I am getting an error while doing so. Can someone help fix the issue?
----------------------
Ansible yml file
---
- name: Perform a yum clean
  command: /usr/bin/yum clean all

- name: downgrade collectd version to 5.8.1
  yum:
    name:
      - collectd-5.8.1-4.el7.x86_64
      - collectd-disk-5.8.1-4.el7.x86_64
    state: present
    allow_downgrade: true
    update_cache: true
  become: true
----- error
{"changed": false, "changes": {"installed": ["collectd-5.8.1-4.el7.x86_64"]}, "msg": "Error: Package: collectd-write_http-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n Requires: collectd(x86-64) = 5.10.0-2.el7\n Removing: collectd-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.10.0-2.el7\n Downgraded By: collectd-5.8.1-4.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-4.el7\n Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.7.2-1.el7\n Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.7.2-3.el7\n Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-2.el7\n Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-3.el7\n Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-5.el7\n Available: collectd-5.8.0-6.1.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-6.1.el7\n Available: collectd-5.8.1-1.el7.x86_64 (epel)\n collectd(x86-64) = 5.8.1-1.el7\n Available: collectd-5.8.1-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-2.el7\n Available: collectd-5.8.1-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-3.el7\n Available: collectd-5.8.1-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-5.el7\nError: Package: collectd-disk-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n Requires: collectd(x86-64) = 5.10.0-2.el7\n Removing: collectd-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.10.0-2.el7\n Downgraded By: collectd-5.8.1-4.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-4.el7\n Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.7.2-1.el7\n Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.7.2-3.el7\n Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-2.el7\n Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-3.el7\n Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-5.el7\n Available: collectd-5.8.0-6.1.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-6.1.el7\n Available: collectd-5.8.1-1.el7.x86_64 (epel)\n collectd(x86-64) = 5.8.1-1.el7\n Available: collectd-5.8.1-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-2.el7\n A
vailable: collectd-5.8.1-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-3.el7\n Available: collectd-5.8.1-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-5.el7\nError: Package: collectd-postgresql-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n Requires: collectd(x86-64) = 5.10.0-2.el7\n Removing: collectd-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.10.0-2.el7\n Downgraded By: collectd-5.8.1-4.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-4.el7\n Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.7.2-1.el7\n Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.7.2-3.el7\n Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-2.el7\n Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-3.el7\n Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-5.el7\n Available: collectd-5.8.0-6.1.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-6.1.el7\n Available: collectd-5.8.1-1.el7.x86_64 (epel)\n collectd(x86-64) = 5.8.1-1.el7\n Available: collectd-5.8.1-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-2.el7\n Available: collectd-5.8.1-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-3.el7\n Available: collectd-5.8.1-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-5.el7\nError: Package: collectd-write_syslog-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n Requires: collectd(x86-64) = 5.10.0-2.el7\n Removing: collectd-5.10.0-2.el7.x86_64 (@ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.10.0-2.el7\n Downgraded By: collectd-5.8.1-4.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-4.el7\n Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.7.2-1.el7\n Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.7.2-3.el7\n Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-2.el7\n Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-3.el7\n Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-5.el7\n Available: collectd-5.8.0-6.1.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.0-6.1.el7\n Available: collectd-5.8.1-1.el7.x86_64 (epel)\n collectd(x86-64) = 5.8.1-1.el7\n Available: collectd-5.8.1-2.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-2.el7\n Available: collectd-5.8.1-3.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-3.el7\n Available: collectd-5.8.1-5.el7.x86_64 (ovirt-4.3-centos-opstools)\n collectd(x86-64) = 5.8.1-5.el7\n", "rc": 1, "results": ["Loaded plugins: fastestmirror, versionlock\nLoading mirror speeds from cached hostfile\n updates: mirror.ash.fastserv.com\nResolving Dependencies\n--> Running transaction check\n---> Package collectd.x86_64 0:5.8.1-4.el7 will be a downgrade\n---> Package collectd.x86_64 0:5.10.0-2.el7 will be erased\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"]}
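Reading the error, yum refuses the downgrade because other installed collectd subpackages (collectd-write_http, collectd-postgresql, collectd-write_syslog) are still pinned to 5.10.0-2.el7 and require collectd(x86-64) = 5.10.0-2.el7. One thing worth trying is to enumerate every installed collectd-* package and downgrade all of them in the same transaction (a subpackage that does not exist at the older version would instead have to be removed). A minimal sketch, outside Ansible, that builds such a package list:

#!/usr/bin/env python3
# Minimal sketch: list every installed collectd* package and print the names
# pinned to the target version, so all of them can be downgraded in one
# transaction (the failed run only listed collectd and collectd-disk).
import subprocess

TARGET = "5.8.1-4.el7"  # target version-release from the playbook

proc = subprocess.run(
    ["rpm", "-qa", "--qf", "%{NAME}\n", "collectd*"],
    check=True, stdout=subprocess.PIPE, universal_newlines=True,
)
names = sorted(set(proc.stdout.split()))

for name in names:
    print("%s-%s.x86_64" % (name, TARGET))
# These names could then replace the two-entry list in the yum task above.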
4 years, 5 months
Ovirt Node 4.4.2 install Odroid-H2 64GB eMMC
by franklin.shearer@gmail.com
Hi All,
I'm trying to install oVirt Node 4.4.2 on an eMMC card, but when I get to the storage configuration step of the installer, it does not save the settings I have chosen (automatic configuration) and displays "failed to save storage configuration". I had deleted all partitions on the card before trying to install and I still get the same error. The only way I can get it to go is to select manual configuration with LVM thin provisioning and "automatically create". Am I doing something wrong? I can install CentOS 8 on this with no issues, but not oVirt Node 4.4.2.
4 years, 5 months