Starting a VM should definitely work. I see in the error message:
"RBD image feature set mismatch. You can disable features unsupported by
the kernel with "rbd feature disable"
Adding "rbd default features = 3" to ceph.conf might help with that.
The other issue (the failing delete) looks like a bug; it would be great if
you could file one [1].
[1]
On Wed, Jul 17, 2019 at 3:30 PM <mathias.schwenke(a)uni-dortmund.de> wrote:
Hi.
I tried to use managed block storage to connect our oVirt cluster (version
4.3.4.3-1.el7) to our Ceph storage (version 10.2.11). I used the
instructions from
https://ovirt.org/develop/release-management/features/storage/cinderlib-i...
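The Managed Block Storage domain uses the Cinder RBD driver with options
along these lines (pool and user names as in the log below; the file paths
here are placeholders, not necessarily our exact values):

  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf=/etc/ceph/ceph.conf
  rbd_pool=ovirt-volumes
  rbd_user=ovirtcinderlib
  rbd_keyring_conf=/etc/ceph/ceph.client.ovirtcinderlib.keyring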
At the moment, in the oVirt Administration Portal I can create and delete
Ceph volumes (oVirt disks) and attach them to virtual machines. If I try to
launch a VM with a connected Ceph block storage volume, starting fails:
2019-07-16 19:39:09,251+02 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-53) [7cada945] Unexpected return value: Status [code=926,
message=Managed Volume Helper failed.: ('Error executing helper: Command
[\'/usr/libexec/vdsm/managedvolume-helper\', \'attach\'] failed with
rc=1
out=\'\' err=\'oslo.privsep.daemon: Running privsep helper:
[\\\'sudo\\\',
\\\'privsep-helper\\\', \\\'--privsep_context\\\',
\\\'os_brick.privileged.default\\\', \\\'--privsep_sock_path\\\',
\\\'/tmp/tmpB6ZBAs/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
starting\\noslo.privsep.daemon: privsep process running with uid/gid:
0/0\\noslo.privsep.daemon: privsep process running with capabilities
(eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
privsep daemon running as pid 112531\\nTraceback (most recent call
last):\\n File "/usr/libexec/vdsm/managedvolume-helper", line 154, in <module>\\n sys.exit(main(sys.argv[1:]))\\n File
"/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
args.command(args)\\n File "/usr/libexec/vdsm/managedvolume-helper", line
137, in attach\\n attachment =
conn.connect_volume(conn_info[\\\'data\\\'])\\n File
"/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96, in
connect_volume\\n run_as_root=True)\\n File
"/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
_execute\\n result = self.__execute(*args, **kwargs)\\n File
"/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
169, in execute\\n return execute_root(*cmd, **kwargs)\\n File
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 205,
in _wrap\\n return self.channel.remote_call(name, args, kwargs)\\n File
"/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in
remote_call\\n raise exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.\\nCommand:
rbd map volume-a57dbd5c-2f66-460f-b37f-5f7dfa95d254 --pool ovirt-volumes
--conf /tmp/brickrbd_TLMTkR --id ovirtcinderlib --mon_host
192.168.61.1:6789 --mon_host 192.168.61.2:6789 --mon_host
192.168.61.3:6789\\nExit code: 6\\nStdout: u\\\'RBD image feature set
mismatch. You can disable features unsupported by the kernel with "rbd
feature disable".\\\\nIn some cases useful info is found in syslog - try
"dmesg | tail" or so.\\\\n\\\'\\nStderr: u\\\'rbd: sysfs write
failed\\\\nrbd: map failed: (6) No such device or address\\\\n\\\'\\n\'',)]
2019-07-16 19:39:09,251+02 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-53) [7cada945] Failed in 'AttachManagedBlockStorageVolumeVDS'
method
After disconnecting the disk, I can delete it (the volume disappears from
Ceph), but the disk stays in my oVirt Administration Portal because
cinderlib thinks the disk is still attached:
2019-07-16 19:42:53,551+02 INFO
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand]
(EE-ManagedThreadFactory-engine-Thread-487362)
[887b4d11-302f-4f8d-a3f9-7443a80a47ba] Running command: RemoveDiskCommand
internal: false. Entities affected : ID:
a57dbd5c-2f66-460f-b37f-5f7dfa95d254 Type: DiskAction group DELETE_DISK
with role type USER
2019-07-16 19:42:53,559+02 INFO
[org.ovirt.engine.core.bll.storage.disk.managedblock.RemoveManagedBlockStorageDiskCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-8) [] Running command:
RemoveManagedBlockStorageDiskCommand internal: true.
2019-07-16 19:42:56,240+02 ERROR
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
(EE-ManagedThreadFactory-commandCoordinator-Thread-8) [] cinderlib
execution failed
DBReferenceError: (psycopg2.IntegrityError) update or delete on table
"volumes" violates foreign key constraint
"volume_attachment_volume_id_fkey" on table "volume_attachment"
2019-07-16 19:42:55,958 - cinderlib-client - INFO - Deleting volume
'a57dbd5c-2f66-460f-b37f-5f7dfa95d254'
[887b4d11-302f-4f8d-a3f9-7443a80a47ba]
2019-07-16 19:42:56,099 - cinderlib-client - ERROR - Failure occurred when
trying to run command 'delete_volume': (psycopg2.IntegrityError) update or
delete on table "volumes" violates foreign key constraint
"volume_attachment_volume_id_fkey" on table "volume_attachment"
DETAIL: Key (id)=(a57dbd5c-2f66-460f-b37f-5f7dfa95d254) is still
referenced from table "volume_attachment".
[SQL: 'DELETE FROM volumes WHERE volumes.deleted = false AND volumes.id
= %(id_1)s'] [parameters: {'id_1':
u'a57dbd5c-2f66-460f-b37f-5f7dfa95d254'}]
[887b4d11-302f-4f8d-a3f9-7443a80a47ba]
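The attachment row that blocks the delete can be seen directly in the
cinderlib database (a sketch; the database name 'ovirt_cinderlib' is an
assumption, use whatever was configured during engine-setup):

  # run as the postgres user on the engine host
  psql -d ovirt_cinderlib -c "SELECT id, attach_status, deleted
      FROM volume_attachment
      WHERE volume_id = 'a57dbd5c-2f66-460f-b37f-5f7dfa95d254';"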
What are your experiences with oVirt, cinderlib and Ceph? Should it work?