On 10/15/20 10:19 AM, Jeff Bailey wrote:
On 10/15/2020 10:01 AM, Michael Thomas wrote:
Getting closer...
I recreated the storage domain and added rbd_default_features=3 to ceph.conf (a sketch of the setting follows the output below). Now I see the new disk being created with (what I think is) the correct set of features:
# rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
        size 100 GiB in 25600 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 70aab541cb331
        block_name_prefix: rbd_data.70aab541cb331
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Thu Oct 15 06:53:23 2020
        access_timestamp: Thu Oct 15 06:53:23 2020
        modify_timestamp: Thu Oct 15 06:53:23 2020
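For anyone following along, the ceph.conf change is a single line under [global]; this is a sketch showing only that setting. The value 3 is the feature bitmask for layering (1) + striping (2), and as far as I can tell striping is only recorded on an image when non-default stripe parameters are used, which is why only layering shows above:

[global]
# feature bitmask: layering (1) + striping (2)
rbd_default_features = 3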
However, I'm still unable to attach the disk to a VM. This time it's a permissions issue on the oVirt node where the VM is running. It looks like it can't read the temporary Ceph config file that is sent over from the engine:
Are you using Octopus? If so, the config file that's generated is missing the "[global]" header at the top, and Octopus doesn't like that. It's been patched upstream.
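Roughly, the generated file has this shape (the monitor address and keyring contents here are hypothetical placeholders); without the section header, Octopus refuses to parse it:

# before the fix: key/value pair before any section header
mon_host = 192.0.2.10:6789
[client.admin]
key = <redacted>

# after the fix: the same content starts under [global]
[global]
mon_host = 192.0.2.10:6789
[client.admin]
key = <redacted>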
Yes, I am using Octopus (15.2.4). Do you have a pointer to the upstream patch or issue so that I can watch for a release with the fix?
https://bugs.launchpad.net/cinder/+bug/1865754
It's a simple fix. I just changed line 100 of /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/rbd.py to:
conf_file.writelines(["[global]",
                      "\n", mon_hosts, "\n", keyring, "\n"])
--Mike