Dear Jean-Louis,
Previously, oVirt integrated with Ceph via OpenStack Cinder [1]. Then the developers removed, or simply disabled, the old integration (I have no details on this; we simply stopped upgrading because of it) and replaced it with an integration based on the cinderlib library [2]. They implemented it, to put it mildly, very strangely: through a kernel module.
It would be good if oVirt could work with Ceph RBD + QEMU from user space, as OpenStack does, and if there were documentation for migrating the database tables of the old integration to the new one. It seems that, for this to work now, it would be enough to adjust the code so that it no longer has to service kernel block devices but instead goes through QEMU's librbd driver (as the legacy integration [1] did [3]); a sketch of such a disk definition follows the query output below. For scale, here is our current footprint on the old integration, taken from the engine database:
engine=# SELECT cinder_volume_type AS volume_type,
                pg_size_pretty(SUM(size)) AS bytes,
                COUNT(disk_id) AS disks
         FROM all_disks_for_vms
         GROUP BY ROLLUP(cinder_volume_type)
         ORDER BY cinder_volume_type;
     volume_type     | bytes  | disks
---------------------+--------+-------
 replicated-rbd      | 136 TB |   235
 replicated-rbd-nvme | 23 TB  |    87
                     | 159 TB |   322
(3 rows)
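For illustration, this is roughly how the guest disk is defined when QEMU accesses RBD through librbd in user space, per the libvirt configuration described in [3]; there is then no /dev/rbd* kernel device to service. The pool, image, monitor names and secret UUID below are placeholders:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='replicated-rbd/vm-disk-1'>
      <host name='mon1.example.com' port='6789'/>
      <host name='mon2.example.com' port='6789'/>
    </source>
    <auth username='libvirt'>
      <!-- UUID of the libvirt secret holding the Ceph client key -->
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <target dev='vda' bus='virtio'/>
  </disk>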
Thanks,
k
[3]
https://docs.ceph.com/en/latest/rbd/libvirt/#configuring-the-vm

On 15 Jan 2025, at 17:57, Jean-Louis Dupond via Users <users@ovirt.org> wrote:
> At this moment it's still safe to keep GlusterFS support in oVirt.
> But I think we should already be thinking about the moment GlusterFS is no longer shipped in RHEL/CentOS/Alma, because then we will hit issues with oVirt.
> So there may come a moment when GlusterFS support gets dropped from oVirt in order to keep oVirt building.
> Ceph might be an alternative, but I think it would also be a lot of work to maintain.
> And do you really want to run your Ceph on your hypervisor?