[ovirt-users] Adding servers after the hosted engine servers fails due to incorrect mount options in vdsm
Simone Tiraboschi
stirabos at redhat.com
Wed Mar 9 04:10:56 EST 2016
On Wed, Mar 9, 2016 at 9:17 AM, Nir Soffer <nsoffer at redhat.com> wrote:
> On Wed, Mar 9, 2016 at 7:54 AM, Bond, Darryl <dbond at nrggos.com.au> wrote:
> > I have a 3-node 3.6.3 hosted-engine cluster (Default) with a number of
> > VMs. The hosted engine is stored on gluster.
> >
> > Adding an additional server to the Default cluster that isn't a
> > hosted-engine HA server fails.
> >
> > Looking at the vdsm.log, the host attempts to mount the gluster volume
> > as NFS with the gluster options, which fails.
> >
> >
>
> Please add the logs above the log you posted, with the string "Run and
> protect, connectStorageServer"
>
> This log contains the arguments received from engine, revealing what
> is going on.
>
> > jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine mode: None
> > jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::storageServer::357::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['ovirt36-h1', 'ovirt36-h2', 'ovirt36-h3']
> > jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-11 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine (cwd None)
>
> -t glusterfs is missing here
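
For reference, here are the two command lines side by side, sketched as
Python argument lists (the failing one is taken from the log above; the
working one matches what the hosted-engine hosts run, per the report
further down). Without -t, mount(8) infers NFS from the host:/path spec
and mount.nfs then rejects the gluster-only option:

    # Failing: no filesystem type, so mount(8) falls back to NFS and
    # mount.nfs rejects the gluster-only backup-volfile-servers option.
    failing = ["/usr/bin/mount",
               "-o", "backup-volfile-servers=ovirt36-h2:ovirt36-h3",
               "ovirt36-h1:/hosted-engine",
               "/rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine"]

    # Working: the explicit type routes to mount.glusterfs, which
    # understands backup-volfile-servers.
    working = ["/usr/bin/mount", "-t", "glusterfs",
               "-o", "backup-volfile-servers=ovirt36-h2:ovirt36-h3",
               "ovirt36-h1:/hosted-engine",
               "/rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine"]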
>
>
Could it be related to the auto-import feature?
Did it correctly recognize the hosted-engine storage domain as a gluster
one?
Adding Roy here.
> This line can be generated only by the GlusterFSConnection, used when
> connecting to a gluster storage domain, but this connection type adds
> the "glusterfs" type.
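
A minimal sketch of the pattern Nir describes (this is not vdsm's actual
code; everything beyond the names visible in the traceback below is
illustrative): the gluster-specific connection subclass pins the vfstype,
so its mounts always get -t glusterfs, while the generic mount connection
leaves the type for mount(8) to guess:

    import subprocess

    class MountConnection:
        VFS_TYPE = None  # generic base class: let mount(8) guess the type

        def __init__(self, spec, target, options=""):
            self._spec = spec
            self._target = target
            self._options = options

        def connect(self):
            # Assemble the mount command line from the connection
            # parameters, adding -t only when a vfstype is set.
            cmd = ["/usr/bin/mount"]
            if self.VFS_TYPE:
                cmd += ["-t", self.VFS_TYPE]
            if self._options:
                cmd += ["-o", self._options]
            cmd += [self._spec, self._target]
            subprocess.check_call(cmd)

    class GlusterFSConnection(MountConnection):
        VFS_TYPE = "glusterfs"  # every gluster mount gets -t glusterfs

If engine's connect request were dispatched to the generic class instead
of GlusterFSConnection, the command would come out exactly as in the log
above: gluster options, but no -t glusterfs.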
>
> > jsonrpc.Executor/6::ERROR::2016-03-09 15:10:01,042::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
> >     conObj.connect()
> >   File "/usr/share/vdsm/storage/storageServer.py", line 236, in connect
> >     six.reraise(t, v, tb)
> >   File "/usr/share/vdsm/storage/storageServer.py", line 228, in connect
> >     self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
> >   File "/usr/share/vdsm/storage/mount.py", line 225, in mount
> >     return self._runcmd(cmd, timeout)
> >   File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
> >     raise MountError(rc, ";".join((out, err)))
> > MountError: (32, ';Running scope as unit run-18808.scope.\nmount.nfs: an incorrect mount option was specified\n')
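
Two details of that error, for anyone hitting the same trace: exit status
32 is mount(8)'s documented "mount failure" code, and the text after the
semicolon is mount.nfs's stderr, which confirms the mount was attempted
as NFS. A rough sketch of the failure path, modeled on the traceback
above (the helper name is made up):

    import subprocess

    class MountError(Exception):
        pass

    def run_mount(cmd, timeout=None):
        # Run the assembled mount command; on a non-zero exit status,
        # wrap the rc and the captured output into MountError, which is
        # what produces the "(32, ...)" tuple seen above.
        p = subprocess.run(cmd, capture_output=True, text=True,
                           timeout=timeout)
        if p.returncode != 0:
            raise MountError(p.returncode,
                             ";".join((p.stdout, p.stderr)))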
> >
> > I noticed the hosted-engine servers perform the same mount but pass
> > -t glusterfs correctly.
> >
> > Is this a bug, or am I doing something wrong?
> >
> > I do not want to create a new datacentre without the hosted engine
> > storage, as I want to use the same storage domains.
>
> Same storage domains? Maybe you mean same bricks?
>
> You cannot use the same storage domain from a different DC. You can
> create a new gluster volume using the same bricks.
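
A sketch of what Nir suggests, with hypothetical volume and brick names
(gluster will not accept a brick directory that already belongs to
another volume, so "same bricks" here means fresh directories on the
same servers and disks):

    import subprocess

    # Hypothetical: a new replica-3 volume on the same three hosts,
    # using new brick directories.
    subprocess.check_call([
        "gluster", "volume", "create", "data", "replica", "3",
        "ovirt36-h1:/gluster/bricks/data",
        "ovirt36-h2:/gluster/bricks/data",
        "ovirt36-h3:/gluster/bricks/data",
    ])
    subprocess.check_call(["gluster", "volume", "start", "data"])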
>
> Nir
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>