Migrate Hosted-Engine to different cluster
by Michaal R
I know this question pops up regularly on this forum, but my best search-engine manipulation turned up only one approach: back up the engine configuration to shared storage, then restore it on the other cluster using hosted-engine --deploy --restore-from-file=backup.tar.gz. While this is workable (it's by no means the end of the world for me), I wanted to ask if a different or better way has been or will be implemented. Here's why:
I am trying to move to oVirt from ESXi. The latter has outgrown its usefulness to me and has become increasingly difficult to maintain for my purposes, as VMware is intent on pushing newer CPU technologies with each new version, leaving perfectly good systems with perfectly good processors and hardware out in the cold. On top of that, VMware won't let you easily do vGPU or GPU passthrough without significant risk to system stability. I started with a VM on the ESXi host as a proof of concept for setting up and configuring oVirt to run the hosted engine and a couple of VMs. This worked after some great help from the members here on the forum (shout out to Tomas and Arik!). This initial cluster is Intel CPU based, as the server its VM is running on is an R720. This is the server I need to migrate off of (I have a plan). So, in VMware Workstation on my PC, I set up a second host in its own cluster (as that machine has an AMD CPU).
Now, I know I can't live migrate between clusters to begin with, but I was hoping there was an easier way: move the hosted engine to the shared NAS storage, put the hosted engine in maintenance mode, shut it down, change a variable or three with the --set-shared-config option (if that applies), then run some script on the other host that starts the hosted engine on that host in that cluster, bringing it out of maintenance mode automatically once it is up and verified stable.
I understand that process COULD be slower and more fraught with danger than simply backing up the config to shared storage and deploying on the AMD host with the restored config, but on the off-chance it ISN'T, I wanted to explore that possibility. Besides, it would be a nice feature of the hosted engine, and of oVirt as a whole, if you could eventually right-click the hosted-engine VM in the portal and click "Move to new cluster...", which would automate that entire process for you.
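For anyone following along, the backup-and-restore path I described would look roughly like this. This is only a sketch of the documented procedure, not something I've scripted end to end, and the file paths are placeholders for wherever your shared storage is mounted:

```shell
# On the current engine VM: take a full engine backup to shared storage.
engine-backup --mode=backup --scope=all \
  --file=/mnt/shared/engine-backup.tar.gz \
  --log=/mnt/shared/engine-backup.log

# Put the hosted engine into global maintenance and shut it down.
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown

# On the new (AMD) host: redeploy the hosted engine from the backup.
# The deploy script asks for storage and network details interactively.
hosted-engine --deploy --restore-from-file=/mnt/shared/engine-backup.tar.gz

# Once the restored engine is up and stable, leave maintenance mode.
hosted-engine --set-maintenance --mode=none
```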
All that said, I'm working on one last problematic VM import before I start the arduous process of pulling everything off of ESXi and reformatting the R720 for oVirt. Hopefully that goes more smoothly than my start with oVirt. :)
5 months, 1 week
[ANN] oVirt 4.5.5 is now generally available
by Sandro Bonazzola
oVirt 4.5.5 is now generally available
The oVirt project is excited to announce the general availability of oVirt
4.5.5, as of December 1st, 2023.
Important notes before you install / upgrade
If you’re going to install oVirt 4.5.5 on RHEL or similar, please read Installing on RHEL or derivatives <https://ovirt.org/download/install_on_rhel.html> first.
Suggestion to use nightly
As discussed in the oVirt Users mailing list
<https://lists.ovirt.org/archives/list/users@ovirt.org/thread/DMCC5QCHL6EC...>
we suggest that the user community use the oVirt master snapshot repositories
<https://ovirt.org/develop/dev-process/install-nightly-snapshot.html>
to ensure that the latest fixes for platform regressions are promptly available.
This oVirt 4.5.5 release is meant to provide what has been made available
in the nightly repositories as a base for new installations.
If you are already using the oVirt master snapshot, you should already have
received this release's content.
Documentation
Be sure to follow instructions for oVirt 4.5!
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.5.5 Release?
This release is available now on x86_64 architecture for:
- oVirt Node NG (based on CentOS Stream 8)
- oVirt Node NG (based on CentOS Stream 9)
- CentOS Stream 8
- CentOS Stream 9
- RHEL 8 and derivatives
- RHEL 9 and derivatives
Experimental builds are also available for ppc64le and aarch64.
See the release notes for installation instructions and a list of new
features and bugs fixed.
Additional resources:
- Read more about the oVirt 4.5.5 release highlights: https://www.ovirt.org/release/4.5.5/
- Check out the latest project news on the oVirt blog: https://blogs.ovirt.org/
--
Sandro Bonazzola
oVirt Project
5 months, 1 week
ovirt node NonResponsive
by carlos.mendes@mgo.cv
Hello,
I have oVirt with two nodes, and one of them is NonResponsive; I can't manage it
because it is in an Unknown state.
It seems the nodes lost connection with their gateway for a while.
The node (ovirt2), however, is having consistent problems. The following sequence of events is
reproducible and causes the host to enter a "NonOperational" state in the
cluster:
What is the proper way of restoring management?
I have a two-node cluster with the oVirt manager running standalone on a
CentOS-Stream-9 virtual machine, and the oVirt node running the most recent oVirt Node 4.5.4 software.
I can then re-activate ovirt2, which appears green for approximately 5 minutes and then
repeats all of the above issues.
What can I do to troubleshoot this?
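Without the logs it is hard to say more, but when a host cycles between green and NonResponsive like this, a usual first step is to check VDSM and connectivity on the node itself. A rough checklist, run on ovirt2 as root (replace <gateway-ip> with your management gateway; nothing here is specific to any one setup):

```shell
# Is VDSM running and healthy?
systemctl status vdsmd

# Recent VDSM errors around the time the host went NonResponsive.
journalctl -u vdsmd --since "-30 min"

# Can VDSM answer locally? If this hangs or errors, the engine
# will see the host as unresponsive too.
vdsm-client Host getCapabilities

# Check time sync and the management gateway, since the nodes
# apparently lost their gateway for a while.
chronyc tracking
ping -c 3 <gateway-ip>
```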
5 months, 1 week
Something went wrong, connection is closed web vnc console
by antonio.riggio@mail.com
After our admin updated the certificate for oVirt (we had been getting "PKIX path validation failed: java.security.cert..." when trying to log in), I get "Something went wrong, connection is closed" when trying to access a console with noVNC. Any ideas what could cause this?
Thank you
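Not an authoritative answer, but a common cause after a manual certificate swap is that the websocket proxy's certificate no longer chains to the new CA, or that the browser has not yet accepted the new CA. A few things worth checking on the engine machine (paths below are the oVirt defaults; adjust if your install differs):

```shell
# Verify the websocket proxy certificate against the engine CA.
openssl verify -CAfile /etc/pki/ovirt-engine/ca.pem \
  /etc/pki/ovirt-engine/certs/websocket-proxy.cer

# Check the certificate's validity dates.
openssl x509 -noout -dates \
  -in /etc/pki/ovirt-engine/certs/websocket-proxy.cer

# Restart the proxy after any certificate change.
systemctl restart ovirt-websocket-proxy
```

Also make sure the browser trusts the new CA (it can be downloaded from the engine's welcome page), since noVNC runs over a browser-side TLS websocket.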
5 months, 1 week
Install oVirt node NG on SW raid
by Jirka Simon
Hello oVirt folks,
is there any way to install oVirt Node NG with software RAID (mirror)? I have
two disks and I would like to use them for redundancy.
I know there is a way to migrate from a single disk to RAID
(https://access.redhat.com/solutions/4194011); it worked for me with RHEL 8
and CentOS 8 earlier, but with oVirt Node NG 4.5.5 it doesn't work
for me.
After restart, I see an EFI record I can boot, but it doesn't see any RAID
array.
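A hedged suggestion, since I haven't tried this on Node NG 4.5.5 myself: when the system boots but no RAID array is found, the initramfs is often missing the MD RAID configuration. It may be worth regenerating it from the installed system (for example, booted from rescue media with the root chrooted; the /dev/sdX2 device names below are only examples for the mirror members):

```shell
# Confirm the mirror members are actually detectable.
mdadm --examine /dev/sda2 /dev/sdb2

# Record the array in mdadm.conf so dracut picks it up.
mdadm --detail --scan >> /etc/mdadm.conf

# Rebuild the initramfs with MD RAID support included.
dracut --force --mdadmconf

# Ask the kernel to auto-assemble arrays at boot.
grubby --update-kernel=ALL --args="rd.auto=1"
```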
Thank you for any help here.
jirka
5 months, 2 weeks