[ovirt-users] Adding servers after hosted engine servers fail due to incorrect mount options in vdsm

Nir Soffer nsoffer at redhat.com
Wed Mar 9 16:30:44 EST 2016


On Wed, Mar 9, 2016 at 11:22 PM, Bond, Darryl <dbond at nrggos.com.au> wrote:
> Yes, your last example is the one we are shooting for:
> One DC multiple clusters, one being the hosted engine cluster all sharing the same storage, which has to include the hosted engine gluster storage.
> The remainder of the storage will be mainly in cinder/ceph

Please share your experience with cinder/ceph; we'd like to get feedback about it.
(probably in another thread)

> with separate storage for export/ISO domains.
> Our initial attempt was to attach the new non-hosted-engine server to a new cluster, but we also tried adding it to the same cluster as the hosted engines, with the same result. Getting it working in the hosted-engine cluster just seemed to remove the complication of a second cluster.
>
> Darryl
>
> ________________________________________
> From: Nir Soffer <nsoffer at redhat.com>
> Sent: Wednesday, 9 March 2016 9:50 PM
> To: Bond, Darryl; users; Simone Tiraboschi; Roy Golan; Maor Lipchuk
> Subject: Re: [ovirt-users] Adding servers after hosted engine servers fail due to incorrect mount options in vdsm
>
> On Wed, Mar 9, 2016 at 11:48 AM, Bond, Darryl <dbond at nrggos.com.au> wrote:
>> Nir,
>> The "Run and protect" entry was the line above; I have included it above the rest.
>> My comments about not creating an extra data centre agree with yours. I want:
>> 1 data centre which includes:
>>     1 set of storage domains
>
> What do you mean by "set of storage domains"?
>
>>     3 hosted engine ha hosts
>>     4 extra hosts
>
> I don't know about hosted engine limitations, but generally a host can
> only be part of one DC and one cluster.
>
> So you can have 3 hosts serving as hosted engine nodes, with their own
> storage, used only for hosted engine, and you can have another DC for
> VMs, using other storage domains.
>
> Or you can have one DC, with all the storage domains, and several clusters,
> one for hosted engine nodes, and one for compute nodes for other VMs.
> In this case all the nodes will have to connect to all storage domains.
>
> Pick the setup that fits your needs.
>
>>
>> I cannot activate an additional (non-HA) host: it fails to mount the gluster hosted-engine domain because it does not pass -t glusterfs. I don't really care if these hosts don't mount it (it is only there for the hosted-engine HA hosts), but skipping it does not seem possible.
>> ________________________________________
>> From: Nir Soffer <nsoffer at redhat.com>
>> Sent: Wednesday, 9 March 2016 6:17 PM
>> To: Bond, Darryl; Ala Hino
>> Cc: users at ovirt.org
>> Subject: Re: [ovirt-users] Adding servers after hosted engine servers fail due to incorrect mount options in vdsm
>>
>> On Wed, Mar 9, 2016 at 7:54 AM, Bond, Darryl <dbond at nrggos.com.au> wrote:
>>> I have a 3-node 3.6.3 hosted-engine cluster (Default) with a number of VMs. The hosted engine is stored on gluster.
>>>
>>> Adding an additional server to the Default cluster that isn't a hosted-engine HA server fails.
>>>
>>> Looking at vdsm.log, the host attempts to mount the gluster volume as NFS while passing the gluster options, which fails.
>>>
>>>
>>
>> Please add the logs above the log you posted, with the string "Run and
>> protect: connectStorageServer".
>>
>> This log contains the arguments received from engine, revealing what
>> is going on.
>> jsonrpc.Executor/4::INFO::2016-03-09 16:05:02,002::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000001-0001-0001-0001-000000000229', conList=[{u'id': u'19fb9b3b-79c1-48e8-9300-d0d52ddce7b1', u'connection': u'ovirt36-h1:/hosted-engine', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': '********', u'port': u''}], options=None)
>>
>
> There must be a vfs_type parameter here, with the value "glusterfs".
>
> Kind of redundant, since we already have domType=7, which is glusterfs, but
> this is the current API, and we must keep it for backward compatibility.
>
> The owner of the code sending this value should take a look.
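>
> For illustration, a corrected connection entry would presumably look like
> the one in the log above with only vfs_type added (a hedged sketch, not
> necessarily engine's exact payload):
>
>     [{u'id': u'19fb9b3b-79c1-48e8-9300-d0d52ddce7b1',
>       u'connection': u'ovirt36-h1:/hosted-engine',
>       u'vfs_type': u'glusterfs',  # the missing parameter
>       u'iqn': u'', u'user': u'', u'tpgt': u'1',
>       u'password': '********', u'port': u''}]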
>
> Nir
>
>>> jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine mode: None
>>> jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::storageServer::357::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['ovirt36-h1', 'ovirt36-h2', 'ovirt36-h3']
>>> jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-11 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine (cwd None)
>>
>> -t glusterfs is missing here
>>
>> This line can be generated only by the GlusterFSConnection, used when
>> connecting to a gluster storage domain, but that connection type adds
>> the "glusterfs" type.
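>>
>> For illustration, a minimal Python sketch of how the mount command gets
>> assembled (paraphrased from the storageServer.py/mount.py code paths in
>> the traceback below; simplified names, not the exact vdsm code):
>>
>>     def build_mount_cmd(spec, target, options=None, vfs_type=None):
>>         # /usr/bin/mount [-t <fstype>] [-o <options>] <spec> <target>
>>         cmd = ["/usr/bin/mount"]
>>         if vfs_type is not None:
>>             cmd += ["-t", vfs_type]  # hosted-engine hosts pass "glusterfs"
>>         if options:
>>             cmd += ["-o", options]   # e.g. "backup-volfile-servers=..."
>>         return cmd + [spec, target]
>>
>> Without vfs_type, mount(8) sees a "host:/path" spec, defaults to NFS, and
>> mount.nfs rejects the gluster-only backup-volfile-servers option, which is
>> exactly the "incorrect mount option" error in the traceback below.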
>>
>>> jsonrpc.Executor/6::ERROR::2016-03-09 15:10:01,042::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
>>>     conObj.connect()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 236, in connect
>>>     six.reraise(t, v, tb)
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 228, in connect
>>>     self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>>>   File "/usr/share/vdsm/storage/mount.py", line 225, in mount
>>>     return self._runcmd(cmd, timeout)
>>>   File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
>>>     raise MountError(rc, ";".join((out, err)))
>>> MountError: (32, ';Running scope as unit run-18808.scope.\nmount.nfs: an incorrect mount option was specified\n')
>>>
>>> I noticed the hosted-engine servers perform the same mount but pass the -t glusterfs correctly.
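>>>
>>> For comparison, the working mount there is presumably the same command
>>> with the filesystem type made explicit:
>>>
>>>     /usr/bin/mount -t glusterfs -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine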
>>>
>>> A bug, or am I doing something wrong?
>>>
>>> I do not want to create a new datacentre without the hosted engine storage as I want to use the same storage domains.
>>
>> Same storage domains? Maybe you mean same bricks?
>>
>> You cannot use the same storage domain from different DCs. You can
>> create a new gluster volume using the same bricks.
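>>
>> For example (hypothetical brick paths; gluster needs unused brick
>> directories, so these would be new paths on the same servers rather
>> than literally the same bricks):
>>
>>     gluster volume create data replica 3 \
>>         ovirt36-h1:/bricks/data ovirt36-h2:/bricks/data ovirt36-h3:/bricks/data
>>     gluster volume start data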
>>
>> Nir
>>

