Re: [ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

On Tue, Dec 22, 2020 at 6:33 PM Konstantin Shalygin <k0ste@k0ste.ru> wrote:
Sandro, FYI we are not against the cinderlib integration; on the contrary, our upgrade from 4.3 to 4.4 hinges on the move to cinderlib.
But (!) the current Managed Block Storage implementation supports only the krbd (kernel RBD) driver, and that is not an option for us: the kernel client always lags behind librbd, and for every update or bugfix we have to reboot the whole host instead of simply live-migrating all VMs away and back. Also, with krbd the host uses the kernel page cache, and the device is not unmapped if a VM crashes (with librbd, QEMU is a single userland process).
There was rbd-nbd support at some point in cinderlib [1], which addresses your concerns, but it was removed because of some issues.
+Gorka, are there any plans to pick it up again?
[1] https://github.com/Akrog/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d0acd...
So for me the current situation looks like this:
1. We update deprecated OpenStack code? Why, it's slated for removal?.. Never mind, just update this code...
2. Hmm... the auth tests don't work; to make them pass, just disable anything related to the OpenStack project_id... and... done...
3. Never mind how the current Cinder + QEMU code works, just write new code around the Linux kernel; using userland apps and extra wrappers is "optimal" (no, it's not);
4. The current Cinder integration requires zero configuration on oVirt hosts. Too lazy, why should the oVirt administrator do nothing? Just write a manual on how to install the packages; oVirt administrators love anything except a "Reinstall" from the engine (no, they don't);
5. We broke the old code. The new feature is "Cinderlib is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production".
6. Oh, we broke the old code. Let's deprecate it and close PRODUCTION issues (we didn't see anything).
And again, we do not hate the new cinderlib integration. We just want the new technology not to break PRODUCTION clusters. Almost two years ago I wrote on this issue https://bugzilla.redhat.com/show_bug.cgi?id=1539837#c6 that "before deprecating, let's help migrate". For now I see that oVirt will completely disable QEMU RBD support and instead use the kernel RBD module + Python os-brick + userland mappers + shell wrappers.
Thanks, I hope I am writing this for a reason and that it will help build bridges between the community and the developers. We have been with oVirt for almost 10 years, and we are now at a crossroads towards a different virtualization manager.
k
So I see only regressions for now; I hope we'll find a code owner who can catch these oVirt 4.4-only bugs.
I looked at the bugs and I see you've already identified the problem and attached patches; if you can submit the patches and verify them, perhaps we can merge the fixes.

On 28/12, Benny Zlotnik wrote:
There was rbd-nbd support at some point in cinderlib [1], which addresses your concerns, but it was removed because of some issues.
+Gorka, are there any plans to pick it up again?
[1] https://github.com/Akrog/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d0acd...
Hi,
Apologies for the delay in responding; I was on a long PTO and came back just yesterday.
There are plans to add it now. ;-) I will add the rbd-nbd support to cinderlib and update this thread once there's an RDO RPM available (which usually happens on the same day the patch merges).
If using QEMU to directly connect RBD volumes is the preferred option, then that code would have to be added to oVirt, and it can be done now without any cinderlib changes. The connection information is provided by cinderlib, and oVirt can check what type of connection it is and either do the connection directly in QEMU for RBD volumes, or call os-brick for all other volume types to get a local device before adding it to the instances.
Cheers,
Gorka.
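A minimal sketch of what that check could look like on the oVirt side, assuming the usual Cinder-style connection info dict ('driver_volume_type' plus 'data') that cinderlib hands back; the helper names and XML layout below are illustrative only, not existing oVirt/vdsm code:

    # Sketch only: branch on the connection type reported by cinderlib.
    from xml.etree import ElementTree as ET

    from os_brick.initiator import connector


    def build_rbd_disk_xml(data, target_dev="vda", secret_uuid=None):
        """Build a libvirt <disk type='network'> element for an RBD volume."""
        disk = ET.Element("disk", type="network", device="disk")
        ET.SubElement(disk, "driver", name="qemu", type="raw")
        source = ET.SubElement(disk, "source", protocol="rbd", name=data["name"])
        for host, port in zip(data.get("hosts", []), data.get("ports", [])):
            ET.SubElement(source, "host", name=host, port=str(port))
        if data.get("auth_enabled") and secret_uuid:
            auth = ET.SubElement(disk, "auth", username=data["auth_username"])
            ET.SubElement(auth, "secret", type="ceph", uuid=secret_uuid)
        ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
        return ET.tostring(disk, encoding="unicode")


    def attach_volume(conn_info, root_helper="sudo"):
        """Return libvirt disk XML (RBD) or a local device path (os-brick)."""
        if conn_info["driver_volume_type"] == "rbd":
            # QEMU/librbd path: nothing is mapped on the host.
            return build_rbd_disk_xml(conn_info["data"])
        # Everything else: let os-brick attach it and hand QEMU a local device.
        brick = connector.InitiatorConnector.factory(
            conn_info["driver_volume_type"], root_helper)
        device_info = brick.connect_volume(conn_info["data"])
        return device_info["path"]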

Understood. Moreover, the code that works with QEMU already exists for the OpenStack integration.
k
Sent from my iPhone
On 14 Jan 2021, at 09:43, Gorka Eguileor <geguileo@redhat.com> wrote:
If using QEMU to directly connect RBD volumes is the preferred option, then that code would have to be added to oVirt and can be done now without any cinderlib changes.

On Thu, Jan 21, 2021 at 8:50 AM Konstantin Shalygin <k0ste@k0ste.ru> wrote:
Understood. Moreover, the code that works with QEMU already exists for the OpenStack integration.
We have code in vdsm and engine to support librbd, but using it for cinderlib-based volumes is not a trivial change.
On the engine side, this means changing the flow: instead of attaching a device to a host, the engine would configure the XML with a network disk using the RBD URL, the same way the old Cinder support did.
To make this work, the engine needs to configure the Ceph authentication secrets on all hosts in the DC. We have code to do this for the old Cinder storage domain, but it is not used for the new cinderlib setup. I'm not sure how easy it is to reuse the same mechanism for cinderlib.
Generally, we don't want to spend time on special code for Ceph, and prefer to outsource this to os-brick and the kernel, so we have a uniform way to use volumes. But if the special code gives important benefits, we can consider it.
I think OpenShift Virtualization is using the same solution (kernel-based RBD) for Ceph. An important requirement for us is having an easy way to migrate VMs from oVirt to OpenShift Virtualization, and using the same Ceph configuration can make this migration easier.
I'm also not sure about the future of librbd support in QEMU. I know that the QEMU folks also want to get rid of such code. For example, libgfapi (the Gluster native driver) is not maintained and likely to be removed soon.
If this feature is important to you, please open an RFE for it and explain why it is needed. We can consider it for a future 4.4.z release.
Adding some storage and QEMU folks to get more info on this.
Nir
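For illustration, a rough sketch (using libvirt-python) of how each host could register the Ceph auth key as a libvirt secret so that a network disk can reference it by UUID, similar in spirit to what the old Cinder storage domain support does; the UUID handling and usage naming below are assumptions, not the actual vdsm code:

    # Sketch only: define a libvirt 'ceph' secret on a host and set its value.
    import base64
    import uuid

    import libvirt

    SECRET_XML = """
    <secret ephemeral='no' private='yes'>
      <uuid>{uuid}</uuid>
      <usage type='ceph'>
        <name>{usage_name}</name>
      </usage>
    </secret>
    """


    def register_ceph_secret(cephx_key_b64, usage_name, secret_uuid=None):
        """Register a cephx key with libvirt; return the secret UUID."""
        secret_uuid = secret_uuid or str(uuid.uuid4())
        conn = libvirt.open("qemu:///system")
        try:
            secret = conn.secretDefineXML(
                SECRET_XML.format(uuid=secret_uuid, usage_name=usage_name))
            # libvirt stores the raw key bytes; cephx keys are base64 encoded.
            secret.setValue(base64.b64decode(cephx_key_b64))
            return secret_uuid
        finally:
            conn.close()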

On 21/01, Nir Soffer wrote:
On Thu, Jan 21, 2021 at 8:50 AM Konstantin Shalygin <k0ste@k0ste.ru> wrote:
Understood. Moreover, the code that works with QEMU already exists for the OpenStack integration.
We have code in vdsm and engine to support librbd, but using it for cinderlib-based volumes is not a trivial change.
On the engine side, this means changing the flow: instead of attaching a device to a host, the engine would configure the XML with a network disk using the RBD URL, the same way the old Cinder support did.
To make this work, the engine needs to configure the Ceph authentication secrets on all hosts in the DC. We have code to do this for the old Cinder storage domain, but it is not used for the new cinderlib setup. I'm not sure how easy it is to reuse the same mechanism for cinderlib.
Hi, All the data is in the connection info (including the keyring), so it should be possible to implement.
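For reference, an RBD connection info dict from cinderlib typically has a shape along these lines (field names can vary between Cinder driver versions; the values here are placeholders):

    # Illustrative shape only; not output captured from a real deployment.
    rbd_connection_info = {
        "driver_volume_type": "rbd",
        "data": {
            "name": "ovirt-pool/volume-0000",          # pool/image
            "hosts": ["192.168.1.1", "192.168.1.2"],    # Ceph monitors
            "ports": ["6789", "6789"],
            "cluster_name": "ceph",
            "auth_enabled": True,
            "auth_username": "cinder",
            "secret_type": "ceph",
            "keyring": "[client.cinder]\n    key = <cephx key>\n",
        },
    }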
Generally, we don't want to spend time on special code for Ceph, and prefer to outsource this to os-brick and the kernel, so we have a uniform way to use volumes. But if the special code gives important benefits, we can consider it.
I think that's reasonable. Having less code to worry about and making the project's code base more readable and maintainable is a considerable benefit that should not be underestimated.
I think OpenShift Virtualization is using the same solution (kernel-based RBD) for Ceph. An important requirement for us is having an easy way to migrate VMs from oVirt to OpenShift Virtualization, and using the same Ceph configuration can make this migration easier.
The Ceph CSI plugin seems to have the possibility of using krbd and rbd-nbd [1], but that's something we can also achieve in oVirt by adding back the rbd-nbd support in cinderlib without changes to oVirt.
Cheers,
Gorka.
[1]: https://github.com/ceph/ceph-csi/blob/04644c1d5896b493d6aaf9ab66f2302cf67a2e...
I'm also not sure about the future of librbd support in QEMU. I know that the QEMU folks also want to get rid of such code. For example, libgfapi (the Gluster native driver) is not maintained and likely to be removed soon.
If this feature is important to you, please open an RFE for it and explain why it is needed.
We can consider it for a future 4.4.z release.
Adding some storage and QEMU folks to get more info on this.
Nir

I would love https://github.com/openstack/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d... to come back.
On Thu, Jan 21, 2021 at 2:27 PM Gorka Eguileor <geguileo@redhat.com> wrote:
The Ceph CSI plugin seems to have the possibility of using krbd and rbd-nbd [1], but that's something we can also achieve in oVirt by adding back the rbd-nbd support in cinderlib without changes to oVirt.
Cheers, Gorka.
[1]: https://github.com/ceph/ceph-csi/blob/04644c1d5896b493d6aaf9ab66f2302cf67a2e...

All connection data should come from cinderlib, as in the current Cinder integration. Gorka says the same.
Thanks,
k
Sent from my iPhone
On 21 Jan 2021, at 16:54, Nir Soffer <nsoffer@redhat.com> wrote:
To make this work, the engine needs to configure the Ceph authentication secrets on all hosts in the DC. We have code to do this for the old Cinder storage domain, but it is not used for the new cinderlib setup. I'm not sure how easy it is to reuse the same mechanism for cinderlib.
participants (5)
- Benny Zlotnik
- Gorka Eguileor
- Konstantin Shalygin
- Nir Soffer
- Shantur Rathore