I added an example for Ceph [1]
[1] -
https://github.com/oVirt/ovirt-site/blob/468c79a05358e20289e7403d9dd24732...
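
For anyone following along: a Ceph "Managed Block Storage" domain takes RBD
driver options along these lines (a sketch only; the pool, client name and
paths are placeholders, not values taken from this thread):

    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_keyring_conf = /etc/ceph/ceph.client.ovirt.keyring
    rbd_pool = ovirt-volumes
    rbd_user = ovirt

rbd_user has to match the client name the keyring was created for, otherwise
the attach on the hypervisor ends up running as "--id None" (see below).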
On Mon, Apr 1, 2019 at 5:24 PM Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
>
> Did you pass the rbd_user when creating the storage domain?
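> If it isn't set, the connector has no client name to pass down, and the
> rbd call in your trace runs with the literal "--id None" (hence the lookup
> of ceph.client.None.keyring). With it set, e.g.
>
>     rbd_user = ovirt        (illustrative value only)
>
> the same command would run as "rbd map ... --id ovirt".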
>
> On Mon, Apr 1, 2019 at 5:08 PM Matthias Leopold
> <matthias.leopold(a)meduniwien.ac.at> wrote:
> >
> >
> > On 01.04.19 at 13:17, Benny Zlotnik wrote:
> > >> OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:
> > >>
> > >> 2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
> > >> connecting to ceph cluster.
> > >> Traceback (most recent call last):
> > >> File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> > >> line 337, in _do_conn
> > >> client.connect()
> > >> File "rados.pyx", line 885, in rados.Rados.connect
> > >> (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
> > >> OSError: [errno 95] error connecting to the cluster
> > >> 2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
> > >> run command 'storage_stats': Bad or unexpected response from the storage
> > >> volume backend API: Error connecting to ceph cluster.
> > >>
> > >> I don't really know what to do with that either.
> > >> BTW, the cinder version on the engine host is "pike"
> > >> (openstack-cinder-11.2.0-1.el7.noarch)
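> > >> If it helps, the same connection can be tried by hand from the engine
> > >> host, with whatever conf/keyring/client name the domain is configured
> > >> with (the values below are only placeholders):
> > >>
> > >>     ceph -s --conf /etc/ceph/ceph.conf --id ovirt \
> > >>         --keyring /etc/ceph/ceph.client.ovirt.keyring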
> > > Not sure if the version is related (I know it's been tested with
> > > pike), but you can try and install the latest rocky (that's what I use
> > > for development)
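> > > On CentOS 7 that is roughly (a sketch, assuming the CentOS Cloud SIG
> > > repos; adjust package names to your environment):
> > >
> > >     yum install -y centos-release-openstack-rocky
> > >     yum upgrade -y openstack-cinder          # engine host
> > >     yum install -y ceph-common               # hypervisors, if missing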
> >
> > I upgraded cinder on engine and hypervisors to rocky and installed
> > missing "ceph-common" packages on hypervisors. I set
"rbd_keyring_conf"
> > and "rbd_ceph_conf" as indicated and got as far as adding a
"Managed
> > Block Storage" domain and creating a disk (which is also visible through
> > "rbd ls"). I used a keyring that is only authorized for the pool I
> > specified with "rbd_pool". When I try to start the VM it fails and I
see
> > the following in supervdsm.log on hypervisor:
> >
> > ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error
> > executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\',
> > \'attach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running
> > privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\',
> > \\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\',
> > \\\'--privsep_sock_path\\\',
> > \\\'/tmp/tmp5S8zZV/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> > privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> > starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> > 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> > (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> > privsep daemon running as pid 15944\\nTraceback (most recent call
> > last):\\n File "/usr/libexec/vdsm/managedvolume-helper", line 154, in
> > <module>\\n sys.exit(main(sys.argv[1:]))\\n File
> > "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> > args.command(args)\\n File "/usr/libexec/vdsm/managedvolume-helper",
> > line 137, in attach\\n attachment =
> > conn.connect_volume(conn_info[\\\'data\\\'])\\n File
> > "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line
96,
> > in connect_volume\\n run_as_root=True)\\n File
> > "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
> > _execute\\n result = self.__execute(*args, **kwargs)\\n File
> > "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py",
line
> > 169, in execute\\n return execute_root(*cmd, **kwargs)\\n File
> > "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line
> > 207, in _wrap\\n return self.channel.remote_call(name, args,
> > kwargs)\\n File
> > "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202,
in
> > remote_call\\n raise
> > exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError:
> > Unexpected error while running command.\\nCommand: rbd map
> > volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test --conf
> > /tmp/brickrbd_RmBvxA --id None --mon_host xxx.xxx.216.45:6789 --mon_host
> > xxx.xxx.216.54:6789 --mon_host xxx.xxx.216.55:6789\\nExit code:
> > 22\\nStdout: u\\\'In some cases useful info is found in syslog - try
> > "dmesg | tail".\\\\n\\\'\\nStderr: u"2019-04-01
15:27:30.743196
> > 7fe0b4632d40 -1 auth: unable to find a keyring on
> > /etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
> > (2) No such file or directory\\\\nrbd: sysfs write failed\\\\n2019-04-01
> > 15:27:30.746987 7fe0b4632d40 -1 auth: unable to find a keyring on
> > /etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
> > (2) No such file or directory\\\\n2019-04-01 15:27:30.747896
> > 7fe0b4632d40 -1 monclient: authenticate NOTE: no keyring found; disabled
> > cephx authentication\\\\n2019-04-01 15:27:30.747903 7fe0b4632d40 0
> > librados: client.None authentication error (95) Operation not
> > supported\\\\nrbd: couldn\\\'t connect to the cluster!\\\\nrbd: map
> > failed: (22) Invalid argument\\\\n"\\n\'',)
> >
> > I tried to provide a /etc/ceph directory with ceph.conf and the client
> > keyring on the hypervisors (as configured in the driver options). This
> > didn't solve it, and it doesn't seem to be the right approach anyway,
> > since the /tmp/brickrbd_RmBvxA mentioned above already contains the
> > needed keyring data. Please give me some advice on what's wrong.
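> >
> > What stands out in the trace is that rbd runs with "--id None" and
> > therefore looks for /etc/ceph/ceph.client.None.keyring, as if no client
> > name was passed down at all. For comparison, the map can be tried by hand
> > with an explicit id (the client name and keyring path below are only
> > placeholders for what I actually use):
> >
> >     rbd map volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test \
> >         --id ovirt --keyring /etc/ceph/ceph.client.ovirt.keyring \
> >         --mon_host xxx.xxx.216.45:6789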
> >
> > thx
> > Matthias
> >
> >