On Thu, Jan 21, 2021 at 8:50 AM Konstantin Shalygin <k0ste(a)k0ste.ru> wrote:
> I understand; moreover, the code that works with qemu already exists
> for the OpenStack integration.
We have code in vdsm and engine to support librbd, but using it with
cinderlib-based volumes is not a trivial change.
On the engine side, this means changing the flow: instead of attaching
a device to the host, the engine would configure the domain XML with a
network disk using the rbd URL, the same way the old cinder support did.
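
For illustration, such a network disk in the domain XML would look
roughly like this (pool, volume, monitor host, and secret UUID below
are placeholder values):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='pool/volume-id'>
        <host name='ceph-mon.example.com' port='6789'/>
      </source>
      <auth username='cinder'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='sda' bus='scsi'/>
    </disk>

With a disk like this, qemu opens the volume directly via librbd, so
no block device is attached on the host.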
To make this work, the engine needs to configure the ceph authentication
secrets on all hosts in the DC. We have code to do this for the old cinder
storage domain, but it is not used in the new cinderlib setup. I'm not sure
how easy it would be to reuse the same mechanism for cinderlib.
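
For reference, registering a ceph secret with libvirt on a host looks
something like this (the UUID and key are placeholders):

    cat > ceph-secret.xml <<EOF
    <secret ephemeral='no' private='yes'>
      <uuid>00000000-0000-0000-0000-000000000000</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF

    virsh secret-define ceph-secret.xml
    virsh secret-set-value --secret 00000000-0000-0000-0000-000000000000 \
        --base64 <base64-encoded-cephx-key>

The engine would have to drive something like this on every host in the
DC and keep the secrets in sync.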
Generally, we don't want to spend time on special code for ceph; we prefer
to outsource this to os-brick and the kernel so that we have a uniform way
to use volumes. But if the special code gives important benefits, we can
consider it.
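
For comparison, the kernel-based path maps the volume on the host and
hands a plain block device to the VM, roughly (pool, volume, and
credentials are placeholders):

    rbd map pool/volume-id --id cinder \
        --keyring /etc/ceph/ceph.client.cinder.keyring
    # -> /dev/rbd0, attached to the VM like any other block device

so from the VM side a ceph volume looks the same as a volume from any
other os-brick connector.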
I think OpenShift Virtualization is using the same solution (kernel-based
rbd) for ceph. An important requirement for us is having an easy way to
migrate VMs from oVirt to OpenShift Virtualization, and using the same
ceph configuration can make this migration easier.
I'm also not sure about the future of librbd support in qemu. I know that
the qemu folks also want to get rid of such code. For example, libgfapi
(the Gluster native driver) is not maintained and is likely to be removed
soon.
If this feature is important to you, please open an RFE for it and explain
why it is needed. We can consider it for a future 4.4.z release.
Adding some storage and qemu folks to get more info on this.
Nir