Re: package conflicts when executing dnf update
by Stefan Wichmann
Hi,
I have exactly the same problem: a fresh install of CentOS 8.1, then an install of oVirt-4.4-Release (adding a new host), and now I have this conflict…
Is there any solution to this?
Kind regards,
Stefan
oVirt 4.4.0 Release is now generally available
by Sandro Bonazzola
oVirt 4.4.0 Release is now generally available
The oVirt Project is excited to announce the general availability of the
oVirt 4.4.0 Release, as of May 20th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Some of the features included in the oVirt 4.4.0 release require content
that will be available in CentOS Linux 8.2 but cannot be tested on RHEL 8.2
yet, due to an incompatibility in the openvswitch package shipped in the
CentOS Virt SIG, which requires rebuilding openvswitch on top of CentOS
8.2. The OVS cluster switch type is not implemented for CentOS 8 hosts.
Please note that oVirt 4.4 only supports clusters and datacenters with
compatibility version 4.2 and above. If clusters or datacenters are running
with an older compatibility version, you need to upgrade them to at least
4.2 (4.3 is recommended).
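For reference, raising the compatibility version can also be scripted when
you have many clusters. Below is a minimal sketch using the Python
ovirt-engine-sdk4; the connection details and cluster name are placeholders,
not taken from this announcement:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- replace with your engine's.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

clusters_service = connection.system_service().clusters_service()
# Look up the cluster by name (placeholder name).
cluster = clusters_service.list(search='name=mycluster')[0]

# Raise the cluster compatibility version to the recommended 4.3.
clusters_service.cluster_service(cluster.id).update(
    types.Cluster(version=types.Version(major=4, minor=3)),
)

connection.close()

The same approach applies to data centers via the data_centers_service.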
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts, you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (see the users mailing list thread
on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
Installation instructions
For the engine: either use the oVirt appliance or install CentOS Linux 8
minimal by following these steps:
- Install the CentOS Linux 8 image from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps postgresql:12
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...,
selecting the minimal installation.
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- Attach the host to the engine and let it be deployed.
Update instructions
Update from oVirt 4.4 Release Candidate
On the engine side and on CentOS hosts, you’ll need to switch from
ovirt44-pre to ovirt44 repositories.
In order to do so, you need to:
1. dnf remove ovirt-release44-pre
2. rm -f /etc/yum.repos.d/ovirt-4.4-pre-dependencies.repo
3. rm -f /etc/yum.repos.d/ovirt-4.4-pre.repo
4. dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
5. dnf update
On the engine side you’ll need to run engine-setup only if you were not
already on the latest release candidate.
On oVirt Node, you’ll need to upgrade with:
1. Move the node to maintenance
2. dnf install
https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-im...
3. Reboot
4. Activate the host
Update from oVirt 4.3
oVirt 4.4 is available only for CentOS 8. In-place upgrades from previous
installations based on CentOS 7 are not possible. For the engine, take a
backup and restore it into a new engine. Nodes will need to be
reinstalled.
A 4.4 engine can still manage existing 4.3 hosts, but you can’t add new
ones.
For a standalone engine, please refer to the upgrade procedure at
https://ovirt.org/documentation/upgrade_guide/#Upgrading_from_4-3
If needed, run ovirt-engine-rename (see the engine rename tool
documentation at
https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html )
When upgrading hosts:
You need to upgrade one host at a time.
1. Move the host to maintenance. Virtual machines on that host should
migrate automatically to a different host.
2. Remove it from the engine.
3. Re-install it with el8 or oVirt Node as per the installation
instructions.
4. Re-add the host to the engine.
Please note that you may see some issues when live migrating VMs from el7
to el8. If you hit such a case, please power off the VM on the el7 host and
start it on the new el8 host, so that the next el7 host can be moved to
maintenance.
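If you upgrade many hosts, the engine-side part of this flow can be
scripted. A minimal sketch with the Python ovirt-engine-sdk4 follows; the
connection details and host name are placeholders, and the re-installation
itself (step 3) happens outside the script:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- replace with your engine's.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]
host_service = hosts_service.host_service(host.id)

# Step 1: move the host to maintenance; VMs should migrate away.
host_service.deactivate()
while hosts_service.list(search='name=myhost')[0].status != types.HostStatus.MAINTENANCE:
    time.sleep(5)

# Step 2: remove the host from the engine. Re-install it (step 3)
# and re-add it (step 4) outside of this sketch.
host_service.remove()

connection.close()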
What’s new in oVirt 4.4.0 Release?
- Hypervisors based on CentOS Linux 8 (rebuilt from award winning RHEL8),
for both oVirt Node and standalone CentOS Linux hosts.
- Easier network management and configuration flexibility with
NetworkManager.
- VMs based on a more modern Q35 chipset with legacy SeaBIOS and UEFI
firmware.
- Support for direct passthrough of local host disks to VMs.
- Live migration improvements for High Performance guests.
- New Windows guest tools installer based on the WiX framework, now moved
to the VirtioWin project.
- Dropped support for cluster levels prior to 4.2.
- Dropped API/SDK v3 support, deprecated in past versions.
- 4K block disk support for file-based storage only; iSCSI/FC storage does
not support 4K disks yet.
- You can export a VM to a data domain.
- You can edit floating disks (a small sketch follows this list).
- Ansible Runner (ansible-runner) is integrated within the engine, enabling
more detailed monitoring of playbooks executed from the engine.
- Adding and reinstalling hosts is now completely based on Ansible,
replacing ovirt-host-deploy, which is no longer used.
- The OpenStack Neutron Agent can no longer be configured by oVirt; it
should be configured by TripleO instead.
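As an illustration of the floating disk editing mentioned in the list
above, a minimal sketch using the Python ovirt-engine-sdk4 could look as
follows; the connection details and disk name are placeholders, and the
exact set of editable properties is not spelled out in this announcement:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- replace with your engine's.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

disks_service = connection.system_service().disks_service()
# Look up a floating disk (one not attached to any VM) by name.
disk = disks_service.list(search='name=mydisk')[0]

# Edit properties of the floating disk in place.
disks_service.disk_service(disk.id).update(
    types.Disk(
        name='mydisk-renamed',
        description='edited while floating',
    ),
)

connection.close()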
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend
trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
oVirt and Fedora
by Sandro Bonazzola
If you have followed the oVirt project for a few releases, you already know
oVirt has struggled to keep pace with the fast innovation cycles the Fedora
Project follows.
Back in September 2019, the CentOS project launched CentOS Stream as a
rolling preview of future RHEL kernels and features, providing an upstream
development platform for ecosystem developers that sits between Fedora and
RHEL.
Since then, the oVirt project has tried to keep the software working on
Fedora, CentOS Stream, and RHEL/CentOS, but it quickly became evident that
the project lacked the resources to keep running on three platforms.
Further, our user surveys show that oVirt users strongly prefer using oVirt
on CentOS and RHEL.
With the upcoming end of life of Fedora 30, the oVirt project has decided
to stop trying to keep pace with this amazing platform and to focus on
stabilizing the software codebase on RHEL / CentOS Linux. By focusing our
resources and community efforts on RHEL/CentOS Linux and CentOS Stream, we
can provide better support for those platforms and spend more time moving
oVirt forward.
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Re: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-4.3 - Build # 448 - Failure!
by Yedidyah Bar David
On Wed, May 20, 2020 at 7:06 AM <jenkins(a)jenkins.phx.ovirt.org> wrote:
>
> Project: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/
> Build: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/448/
I checked this and failed to find the exact root cause.
It failed in 012_local_maintenance_sdk.local_maintenance, while putting the
host into maintenance, because:
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/448/a...
2020-05-20 00:03:57,239-04 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
(default task-6) [] Operation Failed: [Cannot switch the Host(s) to
Maintenance mode.
There are no available hosts capable of running the engine VM.]
Before that, engine.log:
2020-05-19 23:52:22,933-04 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(EE-ManagedThreadFactory-engine-Thread-89) [] vds
'lago-he-basic-suite-4-3-host-0' reported domain
'9624d30d-4a7a-4061-80c8-149a1e770275:nfs' as in problem, attempting
to move the vds to status NonOperational
Then it started migrating the engine VM to host-1.
Relevant lines about the migration can also be seen in the HA logs on both
hosts.
I couldn't find anywhere why the host had storage problems and was moved to
NonOperational.
Nir, can you please have a look? Thanks.
It's likely a local issue (high load or whatever), but I can't be certain.
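For context, the "Cannot switch the Host(s) to Maintenance mode" error
above is raised when no other host is eligible to run the engine VM at that
moment. Eligibility can be inspected up front with a minimal
ovirt-engine-sdk4 sketch like this (connection details are placeholders;
all_content=True is what makes the engine include the hosted-engine data):

import ovirtsdk4 as sdk

# Placeholder connection details -- replace with the engine's.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
# all_content=True makes the engine include hosted_engine details.
for host in hosts_service.list(all_content=True):
    he = host.hosted_engine
    print(host.name, host.status,
          'HE score:', he.score if he else 'n/a')

connection.close()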
Best regards,
--
Didi
ssl error during host install
by Vojtech Juranek
Hi,
I'm getting the following SSL error when adding a new host:
engine:
[...]
2020-05-18 18:28:35,530-04 WARN [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (EE-ManagedThreadFactory-engine-Thread-42) [762c711c] Retry failed
2020-05-18 18:28:39,033-04 WARN [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (EE-ManagedThreadFactory-engine-Thread-43) [762c711c] Retry failed
2020-05-18 18:28:39,033-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-3) [762c711c] Host installation failed for host 'ab082e77-315c-4cfe-8188-11fbee94c2b8', 'fc30-glance': Network error during communication with the host
host (vdsm log):
2020-05-18 18:26:36,963-0400 ERROR (Reactor thread) [vds.dispatcher] uncaptured python exception, closing channel <yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:192.168.122.246', 52428, 0, 0) at 0x7f9f8c7e8d30> (<class 'ssl.SSLError'>:[X509] no certificate or crl found (_ssl.c:4053) [/usr/lib64/python3.7/asyncore.py|readwrite|110] [/usr/lib64/python3.7/asyncore.py|handle_write_event|441] [/usr/lib/python3.7/site-packages/yajsonrpc/betterAsyncore.py|handle_write|75] [/usr/lib/python3.7/site-packages/yajsonrpc/betterAsyncore.py|_delegate_call|173] [/usr/lib/python3.7/site-packages/vdsm/sslutils.py|handle_write|190] [/usr/lib/python3.7/site-packages/vdsm/sslutils.py|_handle_io|194] [/usr/lib/python3.7/site-packages/vdsm/sslutils.py|_set_up_socket|154]) (betterAsyncore:184)
Any idea what's wrong and/or how to fix it?
Both engine and host are on FC30; the host is installed from the standard
ovirt-release44-pre repo and the engine is a dev build of the latest engine
master with the glance API v2 patches [1].
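To rule out missing PKI files on the host, the same cert/key/CA material
that vdsm's sslutils loads can be checked with a quick Python snippet like
this (a minimal sketch; the paths are the usual vdsm defaults and are an
assumption, adjust if your setup differs):

import ssl

# Usual vdsm PKI locations (an assumption -- adjust as needed).
CERT = '/etc/pki/vdsm/certs/vdsmcert.pem'
KEY = '/etc/pki/vdsm/keys/vdsmkey.pem'
CA = '/etc/pki/vdsm/certs/cacert.pem'

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
try:
    # Load the host certificate and key, then the CA, as vdsm does.
    ctx.load_cert_chain(certfile=CERT, keyfile=KEY)
    ctx.load_verify_locations(cafile=CA)
    print('certificate, key and CA all load fine')
except (ssl.SSLError, OSError) as e:
    print('PKI problem:', e)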
Thanks
Vojta
[1] https://gerrit.ovirt.org/#/q/topic:"Image+Service+API+v2"
are we storing the back-end fqdn (like the one used in gluster) in engine?
by Prajith Kesava Prasad
Hi all,
I hope this email finds you well.
I just wanted to check: does anyone know if there is a way to get the
back-end hostname from the ovirt-engine? (This scenario is most likely to
occur when an external logical network is attached.)
I did find a way to get the network interfaces for each host. However,
even though the RHV engine holds the logical networks attached to a host
and their details (such as IPv4 addresses), there is no key that would let
us identify whether a network is Gluster-related. So we cannot actually get
the back-end details, because we won't know whether a given network is the
Gluster network or something else.
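For reference, this is roughly how the per-host information can be listed
today (a minimal sketch with the Python ovirt-engine-sdk4; the connection
details are placeholders). It yields the networks and their IPs per host,
but nothing marking one of them as the Gluster network:

import ovirtsdk4 as sdk

# Placeholder connection details -- replace with your engine's.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
for host in hosts_service.list():
    attachments = hosts_service.host_service(host.id).network_attachments_service().list()
    for attachment in attachments:
        # Resolve the network link to get its name.
        network = connection.follow_link(attachment.network)
        for assignment in attachment.ip_address_assignments or []:
            ip = assignment.ip.address if assignment.ip else None
            print(host.name, network.name, ip)

connection.close()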
Is there any other way to get the details of the back-end hostname?
Thanks in advance,
Prajith.