oVirt 4.3 and cinderlib integration (for ceph) on CentOS 7 - centos-release-openstack-pike

At https://www.ovirt.org/develop/release-management/features/storage/cinderlib-... the cinderlib integration into oVirt is described. Installation:
- install centos-release-openstack-pike on engine and all hosts
- install openstack-cinder and python-pip on engine
- pip install cinderlib on engine
- install python2-os-brick on all hosts
- install ceph-common on engine and on all hosts

Which software versions do you use on CentOS 7 with oVirt 4.3.10? The package centos-release-openstack-pike, as described at the above-mentioned Managed Block Storage feature page, doesn't exist anymore in the CentOS repositories, so I have to switch to centos-release-openstack-queens or newer (rocky, stein, train). So I get (for use with Ceph Luminous 12):
- openstack-cinder 12.0.10
- cinderlib 1.0.1
- ceph-common 12.2.11
- python2-os-brick 2.3.9
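For reference, the steps above translate into roughly the following commands (a sketch, with the queens release package substituted for the vanished pike one as described above):

    # engine and all hosts
    yum install -y centos-release-openstack-queens
    yum install -y ceph-common

    # engine only
    yum install -y openstack-cinder python-pip
    pip install cinderlib

    # hosts only
    yum install -y python2-os-brick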

I've used rocky successfully with 4.3 in the past. The main caveat with 4.3 currently is that cinderlib has to be pinned to 0.9.0 (pip install cinderlib==0.9.0). Let me know if you have any issues. Hopefully during 4.4 we will have repositories with the RPMs and installation will be much easier.
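On the engine, pinning and verifying the version would look like this (a minimal sketch):

    pip install 'cinderlib==0.9.0'
    pip show cinderlib    # confirm 0.9.0 is the version actually installed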

Thanks for your reply. Yes, I have some issues. In some cases starting or migrating a virtual machine failed. At the moment it seems that I have a misconfiguration of my ceph connection:

2020-06-04 22:44:07,685+02 ERROR [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] (EE-ManagedThreadFactory-engine-Thread-2771) [6e1b74c4] cinderlib execution failed:
Traceback (most recent call last):
  File "./cinderlib-client.py", line 179, in main
    args.command(args)
  File "./cinderlib-client.py", line 232, in connect_volume
    backend = load_backend(args)
  File "./cinderlib-client.py", line 210, in load_backend
    return cl.Backend(**json.loads(args.driver))
  File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line 88, in __init__
    self.driver.check_for_setup_error()
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 295, in check_for_setup_error
    with RADOSClient(self):
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 177, in __init__
    self.cluster, self.ioctx = driver._connect_to_rados(pool)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 353, in _connect_to_rados
    return _do_conn(pool, remote, timeout)
  File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 818, in _wrapper
    return r.call(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
    raise attempt.get()
  File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 351, in _do_conn
    raise exception.VolumeBackendAPIException(data=msg)
VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Error connecting to ceph cluster.
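The traceback fails in the RBD driver's _connect_to_rados, i.e. before any volume operation, so the engine-side cinderlib client cannot reach the cluster at all. For reference, a basic sanity check from the engine host would look like this (a sketch; the "cinder" user name and the paths are only examples, use whatever your storage domain's driver options actually point at):

    # check that the conf and keyring the driver is configured with exist
    ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.cinder.keyring
    # and that the cluster is reachable with those credentials
    ceph -s --conf /etc/ceph/ceph.conf --id cinder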

Yes, it looks like a configuration issue; you can use plain `rbd` to check connectivity. Regarding starting VMs and live migration, are there bug reports for these? There is an issue we're aware of with live migration [1]; it can be worked around by blacklisting rbd devices in multipath.conf.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1755801
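Such an `rbd` check could look like this (a sketch; pool and user names are examples):

    rbd ls --conf /etc/ceph/ceph.conf --id cinder -p ovirt-volumes

And a sketch of the multipath.conf workaround for [1] (check the bug report for the exact recommendation before applying; the "# VDSM PRIVATE" marker is meant to keep vdsm from rewriting the file):

    # on every host: make sure the second line of /etc/multipath.conf
    # is "# VDSM PRIVATE", then append a blacklist for rbd devices
    cat >> /etc/multipath.conf <<'EOF'
    blacklist {
        devnode "^rbd[0-9]*"
    }
    EOF
    systemctl reload multipathd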

> It looks like a configuration issue, you can use plain `rbd` to check connectivity.

Yes, it was a configuration error; I have fixed it. I also had to reconcile the differing rbd feature sets between the oVirt nodes and the Ceph images. Now it seems to work.
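In case it helps others, adjusting an existing image looks roughly like this (a sketch; the pool and image names are examples, and which features must go depends on the host kernel):

    # disable image features the CentOS 7 kernel rbd client cannot handle
    rbd feature disable ovirt-volumes/volume-xyz object-map fast-diff deep-flatten exclusive-lock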

Yes, that's because cinderlib uses krbd, so it supports fewer image features; I should add this to the documentation. I was told cinderlib has plans to add support for rbd-nbd, which would eventually allow use of newer features.
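For new images, the feature set can also be capped cluster-side so they are created krbd-compatible from the start (a sketch; the value 1 enables layering only, pick the feature bitmask your host kernels actually support):

    # wherever images get created, e.g. in /etc/ceph/ceph.conf
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client]
    rbd default features = 1
    EOF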
participants (2)
- Benny Zlotnik
- Mathias Schwenke