On 10/15/20 11:27 AM, Jeff Bailey wrote:
On 10/15/2020 12:07 PM, Michael Thomas wrote:
> On 10/15/20 10:19 AM, Jeff Bailey wrote:
>>
>> On 10/15/2020 10:01 AM, Michael Thomas wrote:
>>> Getting closer...
>>>
>>> I recreated the storage domain and added rbd_default_features=3 to
>>> ceph.conf. Now I see the new disk being created with (what I think
>>> is) the correct set of features:
>>>
>>> # rbd info rbd.ovirt.data/volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf
>>> rbd image 'volume-f4ac68c6-e5f7-4b01-aed0-36a55b901fbf':
>>> size 100 GiB in 25600 objects
>>> order 22 (4 MiB objects)
>>> snapshot_count: 0
>>> id: 70aab541cb331
>>> block_name_prefix: rbd_data.70aab541cb331
>>> format: 2
>>> features: layering
>>> op_features:
>>> flags:
>>> create_timestamp: Thu Oct 15 06:53:23 2020
>>> access_timestamp: Thu Oct 15 06:53:23 2020
>>> modify_timestamp: Thu Oct 15 06:53:23 2020
>>>
>>> However, I'm still unable to attach the disk to a VM. This time
>>> it's a permissions issue on the ovirt node where the VM is running.
>>> It looks like it can't read the temporary ceph config file that is
>>> sent over from the engine:
>>
>>
>> Are you using octopus? If so, the config file that's generated is
>> missing the "[global]" at the top and octopus doesn't like that. It's
>> been patched upstream.
>
> Yes, I am using Octopus (15.2.4). Do you have a pointer to the
> upstream patch or issue so that I can watch for a release with the fix?
https://bugs.launchpad.net/cinder/+bug/1865754
And for anyone playing along at home, I was able to map this back to the
corresponding OpenStack review:
https://review.opendev.org/#/c/730376/
It's a simple fix. I just changed line 100 of
/usr/lib/python3.6/site-packages/os_brick/initiator/connectors/rbd.py to:
conf_file.writelines(["[global]", "\n", mon_hosts, "\n",
                      keyring, "\n"])
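To sanity-check why that one-line header matters, here's a rough illustration. This is not the oVirt/os-brick code path itself; it just uses Python's stdlib configparser (the mon_hosts/keyring values are made-up stand-ins) to show how a strict INI-style parser rejects a file whose first line isn't a section header, which is the same complaint Octopus raises about the generated config:

```python
import configparser

# Hypothetical stand-in values; in os-brick these strings come from
# the Cinder connection info, not from anything hard-coded here.
mon_hosts = "mon_host = 192.0.2.10,192.0.2.11"
keyring = "keyring = /tmp/keyfile"

# Pre-patch shape: monitor and keyring lines with no section header.
without_global = "\n".join([mon_hosts, keyring]) + "\n"
try:
    configparser.ConfigParser().read_string(without_global)
except configparser.MissingSectionHeaderError:
    print("rejected: no section header")

# Post-patch shape: "[global]" prepended, so the file parses cleanly.
with_global = "\n".join(["[global]", mon_hosts, keyring]) + "\n"
parsed = configparser.ConfigParser()
parsed.read_string(with_global)
print(parsed["global"]["mon_host"])
```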
After applying this patch, I was finally able to attach my ceph block
device to a running VM. I've now got virtually unlimited data storage
for my VMs. Many thanks to you and Benny for the help!
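For anyone retracing these steps, the feature setting mentioned earlier went
into ceph.conf along these lines (the section placement reflects my setup;
adjust for yours):

```ini
[global]
# Feature bitmask 3; newly created RBD images then report only the
# "layering" feature, as in the rbd info output above.
rbd_default_features = 3
```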
--Mike