OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:
2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error connecting to ceph cluster.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 337, in _do_conn
    client.connect()
  File "rados.pyx", line 885, in rados.Rados.connect (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
OSError: [errno 95] error connecting to the cluster
2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to run command 'storage_stats': Bad or unexpected response from the storage volume backend API: Error connecting to ceph cluster.
I don't really know what to do with that either.
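For what it's worth, errno 95 in the traceback above is EOPNOTSUPP ("Operation not supported") on Linux; librados tends to surface it generically when it can't establish a session with the cluster, which fits the missing/incorrect credentials theory discussed below. A quick way to confirm the mapping:

```python
import errno
import os

# errno 95 from the cinderlib traceback: on Linux this is EOPNOTSUPP
# ("Operation not supported"). The number-to-name mapping is a property
# of the OS, not of ceph or cinderlib.
assert errno.EOPNOTSUPP == 95
print(os.strerror(errno.EOPNOTSUPP))  # Operation not supported
```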
BTW, the cinder version on the engine host is "pike" (openstack-cinder-11.2.0-1.el7.noarch).
Not sure if the version is related (I know it's been tested with pike), but you can try installing the latest rocky (that's what I use for development).
Shall I pass "rbd_secret_uuid" in the driver options? But where is this UUID created? Where is the ceph secret key stored in oVirt?
I don't think it's needed: ceph-based volumes are no longer network disks like in the old cinder integration, they are attached like regular block devices.
The only options that are a must now are "rbd_keyring_conf" and "rbd_ceph_conf" (you don't need the former if the path to the keyring is configured in the latter).
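To illustrate, the driver options could look something like the fragment below (all paths here are hypothetical, substitute your own); either pass the keyring path explicitly, or reference it from the ceph.conf instead:

```ini
; hypothetical driver options for the rbd driver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_keyring_conf = /etc/ceph/ceph.client.cinder.keyring

; alternatively, drop rbd_keyring_conf above and put the path
; into ceph.conf itself, e.g.:
; [client.cinder]
; keyring = /etc/ceph/ceph.client.cinder.keyring
```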
And I think you get the error because one of them is missing or incorrect: I manually removed the keyring path from my configuration and got the same error as you.
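One quick way to check whether the conf/keyring pair is the problem is to try the same files with the plain ceph CLI, outside of cinderlib (client name and paths are assumptions, adjust to your setup):

```shell
# hypothetical paths and client name; if this also fails, the problem
# is the ceph.conf/keyring pair rather than cinderlib itself
ceph -c /etc/ceph/ceph.conf \
     --keyring /etc/ceph/ceph.client.cinder.keyring \
     --id cinder -s
```

If that prints the cluster status, the same files should work when passed via the driver options.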