Single Node HCI upgrade procedure from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4?
by thomas@hoberg.net
I can hear you saying: "You did understand that single node HCI is just a toy, right?"
For me the primary use of a single-node HCI is adding some disaster resilience in small-server edge scenarios where a three-node HCI provides the fault tolerance: 3+1 with a bit of distance, warm or even cold stand-by, potentially a manual switchover, and a reduced workload in case disaster strikes.
Of course, another 3nHCI would be better, but who gets that type of budget, right?
What I am trying to say is: if you want oVirt to gain market share, give HCI more love. And while you're at it, make expanding from 1nHCI to 3nHCI (and higher counts) a standard operational procedure, so that a disaster stand-by can be expanded into a production setup while the original 3nHCI is being rebuilt.
For me low-budget HCI is where oVirt has its biggest competitive advantage against vSAN and Nutanix, so please don't treat the HCI/Gluster variant like an unwanted child any more.
In the meantime, OVA imports (from 4.3.10 exports) on my 4.4.2 1nHCI fail again, which I'll report separately.
Problem with Cluster-wise BIOS Settings in oVirt 4.4
by Rodrigo G. López
Hi all,
We are running an oVirt 4.4 Hosted Engine as a VM, and after changing
the Cluster's BIOS type from Q35 with Legacy BIOS (the default one after
installation) to Preexistent, the VM fails with the following error:
XML error: The device at PCI address 0000:00:02.0 cannot be plugged into
the PCI controller with index='0'. It requires a controller that accepts
a pcie-root-port.
We need it so that we can run imported VMs from a previous version of
oVirt, namely 4.0.
Applying the BIOS settings individually works, but in an attempt to
generalize the settings we decided to apply them to the whole cluster.
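For anyone hitting the same thing, a quick read-only check may help (a sketch, assuming shell access to the host currently running the engine VM; "HostedEngine" is the usual hosted-engine domain name and may differ). Q35 machines define pcie-root plus pcie-root-port controllers, while i440fx machines do not, which is what the error above is complaining about:
    # Read-only sketch; verify the domain name with "virsh -r list" first.
    virsh -r dumpxml HostedEngine | grep -E "machine=|controller type='pci'"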
Tell me if you need more data.
Cheers,
-rodri
POWER9 (ppc64le) Support on oVirt 4.4.1
by Vinícius Ferrão
Hello, I was using oVirt 4.3.10 with IBM AC922 (POWER9 / ppc64le) without any issues.
Since I’ve moved to 4.4.1 I can’t add the AC922 machine to the engine anymore; it complains with the following error:
The host CPU does not match the Cluster CPU type and is running in degraded mode. It is missing the following CPU flags: model_POWER9, powernv.
Any idea of what may be happening? The engine runs on x86_64, and I was using it this way on 4.3.10.
Machine info:
timebase : 512000000
platform : PowerNV
model : 8335-GTH
machine : PowerNV 8335-GTH
firmware : OPAL
MMU : Radix
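To narrow this down, it may be worth comparing the CPU flags vdsm reports on the host against what the cluster expects (a sketch, assuming shell access to the AC922; vdsm-client ships with vdsm on 4.4 hosts, and the exact JSON layout of the capabilities output may vary):
    # The cluster check looks for entries such as model_POWER9 and powernv.
    vdsm-client Host getCapabilities \
        | grep -o '"cpuFlags": "[^"]*"' | tr ',' '\n' | grep -iE 'power9|powernv'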
Thanks,
Gluster Domain Storage full
by suporte@logicworks.pt
Hello,
I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving only one VM.
The VM filled all the Domain storage.
The Linux filesystem is 4.1G in size and 100% used; the mounted brick shows 0GB available and 100% used.
I cannot do anything with this disk; for example, if I try to move it to another Gluster Domain Storage I get the message:
Error while executing action: Cannot move Virtual Disk. Low disk space on Storage Domain
Any idea?
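One possible way out, sketched below, is to grow the brick's filesystem from underneath so the domain regains headroom. This assumes the brick sits on an LVM logical volume with free extents left in its volume group; the device path, mount point, and volume name here are illustrative:
    lvextend -L +10G /dev/gluster_vg/gluster_lv_data   # grow the LV backing the brick
    xfs_growfs /gluster_bricks/data                    # XFS bricks grow online
    gluster volume status data detail                  # confirm the brick now shows free space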
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
administration portal won't finish loading, looping
by Philip Brown
I have an odd situation:
When I go to
https://ovengine/ovirt-engine/webadmin/?locale=en_US
after authentication passes...
it shows the top banner of
oVirt OPEN VIRTUALIZATION MANAGER
and the
Loading ...
in the center, but it never gets past that. Any suggestions on how I could investigate and fix this?
background:
I recently updated certs to be signed wildcard certs, but this broke consoles somehow.
So I restored the original certs, and restarted things... but got stuck with this.
Interestingly, the VM portal loads fine. But not the admin portal.
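As a first pass at investigating, one could watch the UI and engine logs while reloading the page, and restart the engine services after the cert swap (a sketch using the standard engine log locations and service names):
    # Watch for stack traces while the webadmin page loops:
    tail -f /var/log/ovirt-engine/ui.log /var/log/ovirt-engine/engine.log
    # After restoring the original certs, restart the affected services:
    systemctl restart ovirt-engine ovirt-websocket-proxy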
--
Philip Brown | Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310 | Fax 714.918.1325
pbrown(a)medata.com | www.medata.com
Latest ManagedBlockDevice documentation
by Michael Thomas
I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.
I found this:
https://ovirt.org/develop/release-management/features/storage/cinderlib-i...
...but it has a big note at the top that it is "...not user
documentation and should not be treated as such."
The oVirt administration guide[1] does not talk about managed block devices.
I've found a few mailing list threads that discuss people setting up a
Managed Block Device with ceph, but didn't see any links to
documentation steps that folks were following.
Is the Managed Block Storage domain a supported feature in oVirt 4.4.2,
and if so, where is the documentation for using it?
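For what it's worth, the feature is driven by cinderlib on the engine machine, and enabling it looks roughly like the following (a sketch based on the feature's tech-preview state; the exact prompt wording may differ between versions):
    # Re-run engine-setup and enable the Cinderlib integration when prompted:
    engine-setup --reconfigure-optional-components
    # expected prompt (tech preview):
    #   Configure Cinderlib integration (Currently in tech preview) (Yes, No) [No]: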
--Mike
[1]ovirt.org/documentation/administration_guide/
Moving VM disks from one storage domain to another. Automate?
by Green, Jacob Allen /C
I am looking for an automated way, via Ansible, to move a VM disk from one storage domain to another. I found https://docs.ansible.com/ansible/latest/modules/ovirt_disk_module.html and, while it mentions copying a VM disk image from one domain to another, it does not mention live storage migration, which is what I am looking to do. I want to take roughly 100 VMs and move their disk images from one domain to another that is available to the datacenter, in some automated/scripted fashion.
I am curious whether anyone out there has had to do this and how they tackled it, or whether I am missing some obvious way other than selecting all the disks and clicking move. From the looks of it, if I did select all the disks and click move, RHV would try to do them all at once, which is probably not ideal; I would like it to move the disks serially, one after another, to conserve throughput and IO.
I also did not see anything on Ansible galaxy or the ovirt github that would do this.
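In case it helps, below is a rough sketch of serializing the moves through the REST API instead of the UI. It is untested; the engine URL, credentials, disk IDs, and target domain name are all placeholders, and each move only starts once the previous disk has left the locked state:
    #!/bin/bash
    # disk_ids.txt holds one disk ID per line; all names below are placeholders.
    API=https://ovengine/ovirt-engine/api
    AUTH='admin@internal:PASSWORD'
    TARGET=target_domain
    while read -r id; do
      # ask the engine to move the disk to the target storage domain
      curl -sk -u "$AUTH" -H 'Content-Type: application/xml' \
           -d "<action><storage_domain><name>$TARGET</name></storage_domain></action>" \
           "$API/disks/$id/move"
      # poll until the move finishes before starting the next one
      while curl -sk -u "$AUTH" "$API/disks/$id" | grep -q '<status>locked</status>'; do
        sleep 30
      done
    done < disk_ids.txt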
Thank you.
oVirt Survey Autumn 2020
by Sandro Bonazzola
As we continue to develop oVirt 4.4, the Development and Integration teams
at Red Hat would value insights on how you are deploying the oVirt
environment.
Please help us to hit the mark by completing this short survey.
The survey will close on October 18th 2020. If you're managing multiple
oVirt deployments with very different use cases or configurations, you
can consider answering this survey multiple times.
*Please note the answers to this survey will be publicly accessible*.
This survey is governed by the oVirt Privacy Policy, available at
https://www.ovirt.org/site/privacy-policy.html .
The survey is available at https://forms.gle/bPvEAdRyUcyCbgEc7
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
oVirt Node 4.4.2 is now generally available
by Sandro Bonazzola
oVirt Node 4.4.2 is now generally available
The oVirt project is pleased to announce the general availability of oVirt
Node 4.4.2, as of September 25th, 2020.
This release completes the oVirt 4.4.2 release published on September 17th.
Important notes before you install / upgrade
Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (See the users’ mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864>
(host enters emergency mode after upgrading to the latest build), if your
hosts have their root file system on a multipath device, they may enter
emergency mode after upgrading from 4.4.1 to 4.4.2.
In order to prevent this, be sure to upgrade oVirt Engine first; then, on
your hosts:
1. Remove the current LVM filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.2 (redeploy if already on 4.4.2).
4. Run "vdsm-tool config-lvm-filter" to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.
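On a regular EL8 host, steps 4 and 5 condense to the following (a sketch; oVirt Node users skip the dracut step):
    vdsm-tool config-lvm-filter        # checks and, if confirmed, installs the correct LVM filter
    dracut --force --add multipath     # rebuild the initramfs with the new filter configuration
    reboot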
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
What’s new in oVirt Node 4.4.2 Release?
oVirt Node has been updated, including:
- oVirt 4.4.2: http://www.ovirt.org/release/4.4.2/
- Ansible 2.9.13: https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...
- GlusterFS 7.7: https://docs.gluster.org/en/latest/release-notes/7.7/
- Advanced Virtualization 8.2.1
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Additional resources:
- Read more about the oVirt 4.4.2 release highlights: http://www.ovirt.org/release/4.4.2/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.2/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*