
Hi,

I upgraded my test environment to 4.3.2 and am now trying to set up a "Managed Block Storage" domain with our Ceph 12.2 cluster. I think I have all the prerequisites in place, but when saving the configuration for the domain with volume_driver "cinder.volume.drivers.rbd.RBDDriver" (and a couple of other options) I get "VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Error connecting to ceph cluster" in the engine log (full error below). Unfortunately this is a rather generic error message and I don't really know where to look next. Accessing the rbd pool from the engine host with the rbd CLI and the configured "rbd_user" works flawlessly.

Although I don't think it is directly connected, one other question comes up for me: how are libvirt "authentication keys" handled with Ceph "Managed Block Storage" domains? With "standalone Cinder" setups like the one we are using now, you have to configure a "provider" of type "OpenStack Block Storage" where you can configure these keys, which are then referenced in cinder.conf as "rbd_secret_uuid". How is this supposed to work now?

Thanks for any advice; we are using oVirt with Ceph heavily and are very interested in a tight integration of oVirt and Ceph.
Matthias

2019-04-01 11:14:55,128+02 ERROR [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] (default task-22) [b6665621-6b85-438e-8c68-266f33e55d79] cinderlib execution failed:
Traceback (most recent call last):
  File "./cinderlib-client.py", line 187, in main
    args.command(args)
  File "./cinderlib-client.py", line 275, in storage_stats
    backend = load_backend(args)
  File "./cinderlib-client.py", line 217, in load_backend
    return cl.Backend(**json.loads(args.driver))
  File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line 87, in __init__
    self.driver.check_for_setup_error()
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 288, in check_for_setup_error
    with RADOSClient(self):
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 170, in __init__
    self.cluster, self.ioctx = driver._connect_to_rados(pool)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 346, in _connect_to_rados
    return _do_conn(pool, remote, timeout)
  File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 799, in _wrapper
    return r.call(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
    raise attempt.get()
  File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 344, in _do_conn
    raise exception.VolumeBackendAPIException(data=msg)
VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Error connecting to ceph cluster.
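P.S. In case it helps with debugging: as far as I can tell from the traceback, the engine serializes the domain's driver options to a JSON string and cinderlib-client instantiates the backend via cl.Backend(**json.loads(args.driver)), so the error fires inside check_for_setup_error() when the driver first connects to the cluster. A minimal sketch of that round trip, assuming standard Cinder RBD driver option names (the pool, user, and keyring values below are placeholders, not our real configuration):

```python
import json

# Driver options roughly as entered in the "Managed Block Storage"
# domain dialog. rbd_ceph_conf, rbd_pool, rbd_user and rbd_keyring_conf
# are standard Cinder RBD driver options; the values are placeholders.
driver_opts = {
    "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
    "rbd_ceph_conf": "/etc/ceph/ceph.conf",
    "rbd_pool": "ovirt-volumes",   # placeholder pool name
    "rbd_user": "cinder",          # placeholder cephx user
    "rbd_keyring_conf": "/etc/ceph/ceph.client.cinder.keyring",
}

# The engine hands this to cinderlib-client.py as one JSON string ...
serialized = json.dumps(driver_opts)

# ... and load_backend() turns it back into keyword arguments:
#   return cl.Backend(**json.loads(args.driver))
parsed = json.loads(serialized)
print(parsed["volume_driver"])  # cinder.volume.drivers.rbd.RBDDriver
```

So if any of these values is wrong (or the ceph.conf / keyring files are not readable by the user running cinderlib on the engine host), the connection attempt would fail with exactly this generic message, even though the rbd CLI works with its own defaults.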