Adding servers to a hosted engine cluster fails due to incorrect mount options in vdsm

I have a 3 node 3.6.3 hosted engine cluster (Default) with a number of VMs. The hosted engine is stored on gluster. Adding an additional server to the Default cluster that isn't a hosted-engine HA server fails. Looking at the vdsm.log, the host attempts to mount the gluster volume as NFS with the gluster options, which fails.

jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine mode: None
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::storageServer::357::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['ovirt36-h1', 'ovirt36-h2', 'ovirt36-h3']
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-11 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine (cwd None)
jsonrpc.Executor/6::ERROR::2016-03-09 15:10:01,042::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 236, in connect
    six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 228, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/share/vdsm/storage/mount.py", line 225, in mount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';Running scope as unit run-18808.scope.\nmount.nfs: an incorrect mount option was specified\n')

I noticed the hosted-engine servers perform the same mount but pass the -t glusterfs correctly. A bug, or am I doing something wrong? I do not want to create a new datacentre without the hosted engine storage, as I want to use the same storage domains.

Regards
Darryl
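The failing command is missing a filesystem type: without -t, mount(8) sees a host:/path source and falls back to NFS, which is why the error comes from mount.nfs. A minimal sketch of how the command line changes with and without the type, using the host and volume names from the log above (an illustration, not vdsm's actual code):

    # Sketch: the mount command with and without an explicit filesystem
    # type. Without -t, mount(8) guesses from the host:/path source and
    # dispatches to mount.nfs, which rejects the gluster-only
    # backup-volfile-servers option.
    def build_mount_cmd(spec, mountpoint, vfstype=None, options=None):
        cmd = ["/usr/bin/mount"]
        if vfstype:
            cmd += ["-t", vfstype]  # hosted-engine hosts pass "glusterfs" here
        if options:
            cmd += ["-o", options]
        cmd += [spec, mountpoint]
        return cmd

    spec = "ovirt36-h1:/hosted-engine"
    mnt = "/rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine"
    opts = "backup-volfile-servers=ovirt36-h2:ovirt36-h3"
    print(build_mount_cmd(spec, mnt, options=opts))                       # failing case
    print(build_mount_cmd(spec, mnt, vfstype="glusterfs", options=opts))  # working case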

On Wed, Mar 9, 2016 at 7:54 AM, Bond, Darryl <dbond@nrggos.com.au> wrote:
I have a 3 node 3.6.3 hosted engine cluster (Default) with a number of VMs. The hosted engine is stored on gluster.
Adding an additional server to the Default cluster that isn't a hosted-engine ha server fails.
Looking at the vdsm.log, the host attempts to mount the gluster volume as NFS with the gluster options, which fails.
Please add the logs above the log you posted, with the string "Run and protect, connectStorageServer". This log contains the arguments received from engine, revealing what is going on.
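A quick way to pull those lines out, as a minimal sketch that assumes the default vdsm log location:

    # Print the vdsm.log lines showing the connection arguments engine sent.
    # /var/log/vdsm/vdsm.log is the default location; adjust if yours differs.
    import sys

    with open("/var/log/vdsm/vdsm.log") as log:
        for line in log:
            if "Run and protect, connectStorageServer" in line:
                sys.stdout.write(line)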
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine mode: None
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::storageServer::357::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['ovirt36-h1', 'ovirt36-h2', 'ovirt36-h3']
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-11 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine (cwd None)
-t glusterfs is missing here. This line can be generated only by the GlusterFSConnection, used when connecting to a gluster storage domain, but that connection type adds the "glusterfs" type.
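To make the distinction concrete, here is a simplified sketch of the two connection types; the class shapes are assumptions for illustration, not vdsm's actual source:

    # MountConnection leaves the filesystem type unset, so mount(8) guesses
    # (NFS for a host:/path source); GlusterFSConnection pins it to glusterfs.
    class MountConnection(object):
        VFS_TYPE = None

    class GlusterFSConnection(MountConnection):
        VFS_TYPE = "glusterfs"

    for cls in (MountConnection, GlusterFSConnection):
        print("%s mounts with -t %s" % (cls.__name__, cls.VFS_TYPE or "<guessed>"))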
jsonrpc.Executor/6::ERROR::2016-03-09 15:10:01,042::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 236, in connect
    six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 228, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/share/vdsm/storage/mount.py", line 225, in mount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';Running scope as unit run-18808.scope.\nmount.nfs: an incorrect mount option was specified\n')
I noticed the hosted-engine servers perform the same mount but pass the -t glusterfs correctly.
A bug or am I doing something wrong??
I do not want to create a new datacentre without the hosted engine storage as I want to use the same storage domains.
Same storage domains? Maybe you mean same bricks? You cannot use the same storage domain from different DCs. You can create a new gluster volume using the same bricks.

Nir
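To illustrate Nir's suggestion, a hedged sketch of creating a fresh gluster volume on the same hosts; the volume name and brick paths are hypothetical:

    # Create and start a new replica-3 gluster volume on hypothetical brick
    # paths; it can then be added as a new storage domain in the new DC.
    import subprocess

    bricks = ["ovirt36-h%d:/bricks/data2" % i for i in (1, 2, 3)]
    subprocess.check_call(["gluster", "volume", "create", "data2",
                           "replica", "3"] + bricks)
    subprocess.check_call(["gluster", "volume", "start", "data2"])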

On Wed, Mar 9, 2016 at 9:17 AM, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Mar 9, 2016 at 7:54 AM, Bond, Darryl <dbond@nrggos.com.au> wrote:
I have a 3 node 3.6.3 hosted engine cluster (Default) with a number of VMs. The hosted engine is stored on gluster.
Adding an additional server to the Default cluster that isn't a hosted-engine ha server fails.
Looking at the vdsm.log, the host attempts to mount the gluster volume as NFS with the gluster options, which fails.
Please add the logs above the log you posted, with the string "Run and protect, connectStorageServer".
This log contains the arguments received from engine, revealing what is going on.
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine mode: None
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::storageServer::357::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['ovirt36-h1', 'ovirt36-h2', 'ovirt36-h3']
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-11 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine (cwd None)
-t glusterfs is missing here
Could it be related to the auto-import feature? Did it correctly recognize the hosted-engine storage domain as a gluster one? Adding Roy here.
This line can be generated only by the GlusterFSConnection, used when connecting to a gluster storage domain, but that connection type adds the "glusterfs" type.
jsonrpc.Executor/6::ERROR::2016-03-09 15:10:01,042::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 236, in connect
    six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 228, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/share/vdsm/storage/mount.py", line 225, in mount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';Running scope as unit run-18808.scope.\nmount.nfs: an incorrect mount option was specified\n')
I noticed the hosted-engine servers perform the same mount but pass the -t glusterfs correctly.
A bug or am I doing something wrong??
I do not want to create a new datacentre without the hosted engine storage as I want to use the same storage domains.
Same storage domains? Maybe you mean same bricks?
You cannot use the same storage domain from different DCs. You can create a new gluster volume using the same bricks.
Nir
participants (3)
- Bond, Darryl
- Nir Soffer
- Simone Tiraboschi