[ovirt-users] Can not access storage domain hosted_storage

Simone Tiraboschi stirabos at redhat.com
Thu Apr 7 15:30:39 UTC 2016


On Thu, Apr 7, 2016 at 4:17 PM, Richard Neuboeck <hawk at tbi.univie.ac.at> wrote:
> Hi oVirt Users/Developers,
>
> I'm having trouble adding another host to a working hosted engine
> setup. Through the WebUI I try to add another host. The package
> installation and configuration processes seemingly run without
> problems. When the second host tries to mount the engine storage
> volume it halts with the WebUI showing the following message:
>
> 'Failed to connect Host cube-two to the Storage Domain hosted_engine'
>
> The mount fails, which leaves the host in the 'non operational' state.
>
> Checking the vdsm.log on the newly added host shows that the mount
> attempt of the engine volume doesn't use -t glusterfs. On the other
> hand the VM storage volume (also a glusterfs volume) is mounted the
> right way.
>
> It seems the Engine configuration that is given to the second host
> lacks the vfs_type property. So without glusterfs given as the
> filesystem type, the system assumes an NFS mount and obviously fails.

It seems that the auto-import procedure in the engine didn't recognize
that the hosted-engine storage domain was on gluster and took it for
NFS.

Adding Roy here to take a look.
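For reference, the two connectStorageServer log excerpts below differ in
exactly one key: the plexus entry carries u'vfs_type': u'glusterfs', the
engine entry does not. A minimal sketch of the resulting mechanism
(simplified and hypothetical; the function name and dict layout are
illustrative only, not the actual vdsm code):

```python
# Illustration of the suspected failure mode: how a missing 'vfs_type'
# key changes the mount command that gets built. NOT the real vdsm code.

def build_mount_command(conn):
    """Build a mount command from a connection dict as sent by the engine."""
    cmd = ["/usr/bin/mount"]
    vfs_type = conn.get("vfs_type")
    if vfs_type:
        # e.g. 'glusterfs' -> mount -t glusterfs host:/volume /mountpoint
        cmd += ["-t", vfs_type]
    # With no vfs_type, no -t option is passed, and mount(8) treats a
    # 'host:/volume' source spec as NFS -- which is what fails here.
    cmd += [conn["connection"], conn["mountpoint"]]
    return cmd

# Connection dict as seen in the working log entry (plexus volume):
with_type = {"connection": "borg-sphere-one:/plexus",
             "vfs_type": "glusterfs",
             "mountpoint": "/rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus"}

# Connection dict as seen in the failing log entry (engine volume):
without_type = {"connection": "borg-sphere-one:/engine",
                "mountpoint": "/rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine"}

print(" ".join(build_mount_command(with_type)))     # includes -t glusterfs
print(" ".join(build_mount_command(without_type)))  # no -t, NFS assumed
```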


> Here are the relevant log lines showing the JSON reply to the
> configuration request, the working mount of the VM storage (called
> plexus) and the failing mount of the engine storage.
>
> ...
> jsonrpc.Executor/4::INFO::2016-04-07
> 15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'00000001-0001-0001-0001-0000000003ce', conList=[{u'id':
> u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
> u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
> u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port':
> u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
> u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
> u'', u'tpgt': u'1', u'password': '********', u'port': u''}],
> options=None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/plexus
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -o backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
> ...
>
> The problem seems to have been introduced since March 22nd. On that
> install I had added two additional hosts without problems. Three
> days ago I tried to reinstall the whole system for testing and
> documentation purposes, but now I am not able to add other hosts.
>
> All the installs follow the same documented procedure. I've verified
> several times that the problem exists with the components in the
> current 3.6 release repo as well as in the 3.6 snapshot repo.
>
> If I check the storage configuration of hosted_engine domain in the
> WebUI it shows glusterfs as VFS type.
>
> The initial mount during the hosted engine setup on the first host
> shows the correct parameters (vfs_type) in vdsm.log:
>
> Thread-42::INFO::2016-04-07
> 14:56:29,464::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id':
> 'b13ae31f-d66a-43a7-8aba-eaf4e62a6fb0', 'tpgt': '1', 'vfs_type':
> 'glusterfs', 'connection': 'borg-sphere-one:/engine', 'user':
> 'kvm'}], options=None)
> Thread-42::DEBUG::2016-04-07
> 14:56:29,591::fileUtils::143::Storage.fileUtils::(createdir)
> Creating directory:
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine mode: None
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> Using bricks: ['borg-sphere-one', 'borg-sphere-two',
> 'borg-sphere-three']
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
>
>
> I've already created a bug report, but since I didn't know where to
> file it, I filed it as a VDSM bug, which it doesn't seem to be.
> https://bugzilla.redhat.com/show_bug.cgi?id=1324075
>
>
> I would really like to help resolve this problem. If there is
> anything I can test, please let me know. I appreciate any help in
> this matter.
>
> Currently I'm running an oVirt 3.6 snapshot installation on CentOS
> 7.2. The two storage volumes are both replica 3 on separate gluster
> storage nodes.
>
> Thanks in advance!
> Richard
>
> --
> /dev/null
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


