oVirt Engine now requires ansible-core >= 2.12 and ansible-runner >= 2.0
by Martin Perina
Hi,
yesterday we merged a global change across the whole of oVirt: from now on we
require ansible-core >= 2.12 for oVirt functionality. This means that the
upcoming oVirt 4.5 will work only on OS versions where ansible-core 2.12 is
properly packaged (for example CentOS Stream 8 and the upcoming RHEL 8.6). All
required package changes should be performed automatically for RPM
installations.
But developers will need to upgrade the packages in their development
environments manually; the most significant change affects ovirt-engine. If
you already have a working development environment set up, please perform the
following upgrade steps:
1. Update your packages, especially ovirt-release-master:
   https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/
2. Remove the old ansible and ansible-runner-service-dev packages:
   dnf remove ansible ansible-runner-service-dev
3. Install ansible-core and ansible-runner:
   dnf install ansible-core ansible-runner ovirt-ansible-collection
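After the upgrade you can verify that the expected versions were picked up,
for example:

   rpm -q ansible-core ansible-runner ovirt-ansible-collection
   ansible --version    # should report core 2.12.x or newer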
Detailed information on how to set up an engine development environment can be
found at https://github.com/oVirt/ovirt-engine/blob/master/README.adoc
If anything doesn't work, please let us know.
Thanks,
Martin
--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
Alpha 4.5 Test Day | Deploy on Ceph Storage
by Jöran Malek
I tried deploying oVirt 4.5 Alpha (using the oVirt Node NG master
installer ISO for el8) with converged Ceph, using the following steps (a
rough command sketch follows the list):
* Installed two oVirt Node hosts
* Installed cephadm on both, with a single 240GB OSD per node (this is a
nested virtualization test environment)
* Deployed the Ceph cluster with
cephadm bootstrap --skip-monitoring-stack --single-host-defaults
* Added the OSDs to Ceph
* Created CephFS with "ceph fs volume create cephfs"
* Deployed NFS Ganesha using ceph orch apply
* Added an export "/ovirt" to NFS Ganesha for CephFS "/", mounted CephFS
temporarily and changed its owner to 36:36
* Added an RBD pool "ovirt"
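As a sketch of the Ceph side (hostnames and device paths are placeholders,
and the exact "ceph nfs export" syntax varies between Ceph releases, so treat
this as an outline rather than a verbatim transcript):

   # bootstrap on the first node, monitoring stack skipped
   cephadm bootstrap --mon-ip <node1-ip> --skip-monitoring-stack --single-host-defaults
   # add the second host and one OSD per node
   ceph orch host add node2
   ceph orch daemon add osd node1:/dev/vdb
   ceph orch daemon add osd node2:/dev/vdb
   # CephFS volume and the NFS-Ganesha service
   ceph fs volume create cephfs
   ceph orch apply nfs ovirt --placement="node1 node2"
   # export CephFS "/" as /ovirt (argument order differs across releases)
   ceph nfs export create cephfs cephfs ovirt /ovirt
   # RBD pool for the managed block storage domain
   ceph osd pool create ovirt
   rbd pool init ovirt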
At this point: Ceph is running, CephFS is working, NFS exports are available.
Performed the usual ovirt-hosted-engine-setup; the answers file is attached.
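For reference, that is the standard deployment flow, roughly:

   hosted-engine --deploy --config-append=<answers-file>

with <answers-file> being the attached answers file.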
Hosted Engine: check, on an NFS storage domain over Ceph (I did not take the
iSCSI route, as that is a black box to me). NFS "just works", and I can
connect to localhost as the server, which makes it the perfect candidate for
HA storage (because NFS is deployed on every node in the cluster, Ceph being
installed on every node in the cluster).
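To sanity-check the export on a node, something like this should work
(/mnt/nfs-test is a hypothetical mount point; cephadm's Ganesha serves NFSv4):

   mount -t nfs4 localhost:/ovirt /mnt/nfs-test
   ls -ln /mnt/nfs-test    # should show owner 36:36 (vdsm:kvm)
   umount /mnt/nfs-test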
Copied ceph.conf and ceph.client.admin.keyring over to the engine VM and
changed their owner to ovirt.
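Roughly like this (engine.example.com is a placeholder for the engine VM's
FQDN):

   scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring \
       root@engine.example.com:/etc/ceph/
   ssh root@engine.example.com \
       'chown ovirt:ovirt /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring'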
Applied the cinderlib integration on the Engine with
> engine-setup --reconfigure-optional-components
Added a block storage domain (configured as in the blog post) and got the
following error; logs attached (engine.log, cinderlib.log).
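For reference, the driver options for a Ceph RBD managed block storage domain
would be something like the following (illustrative values, not the exact
attached configuration):

   volume_driver=cinder.volume.drivers.rbd.RBDDriver
   rbd_pool=ovirt
   rbd_ceph_conf=/etc/ceph/ceph.conf
   rbd_user=admin
   rbd_keyring_conf=/etc/ceph/ceph.client.admin.keyring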
German:
Fehler beim Ausführen der Aktion: Kann Speicher nicht hinzufügen.
Verbinden mit verwalteter Blockdomäne fehlgeschlagen.
English (translated):
Error while performing action: Unable to add storage. Failed to
connect to managed block storage domain.
Is there anything I can provide that would help figure this out?
Ceph Version:
16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503)
pacific (stable)
For the Trello board:
Installation on Ceph works as expected - iSCSI and NFS are well supported,
and deployment with NFS is a bit easier than iSCSI. Adding a managed block
storage domain failed for me, although it works on oVirt 4.4 with the exact
same procedure.
Best,
Jöran