> The mount failure is probably related to glusterfs. There are glusterfs
> logs on the host that can give more info on this error.
Oh duh, sorry, I forgot to check :-&
"failed to fetch volume file"
[2019-01-25 13:47:03.560677] I [MSGID: 100030] [glusterfsd.c:2691:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.2
(args: /usr/sbin/glusterfs --process-name fuse --volfile-server=*.*.*.*
--volfile-server=*.*.*.* --volfile-server=*.*.*.* --volfile-id=ssd5
/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5)
[2019-01-25 13:47:03.571819] I [MSGID: 101190]
[event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify]
0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*
[2019-01-25 14:02:45.067500] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify]
0-glusterfsd-mgmt: connecting to next volfile server *.*.*.*
[2019-01-25 14:02:45.069678] E [rpc-clnt.c:346:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f2b0b7f4f1b] (-->
/lib64/libgfrpc.so.0(+0xce11)[0x7f2b0b5bde11] (-->
/lib64/libgfrpc.so.0(+0xcf2e)[0x7f2b0b5bdf2e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f2b0b5bf531]
(--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f2b0b5c00d8] ))))) 0-glusterfs:
forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called
at 2019-01-25 13:47:03.572632 (xid=0x2)
[2019-01-25 14:02:45.069697] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk]
0-mgmt: failed to fetch volume file (key:ssd5)
[2019-01-25 14:02:45.069725] W [glusterfsd.c:1481:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(+0xce32) [0x7f2b0b5bde32]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x841) [0x5651101b7231]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-:
received signum (0), shutting down
[2019-01-25 14:02:45.069749] I [fuse-bridge.c:5897:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'.
[2019-01-25 14:02:45.078949] I [fuse-bridge.c:5902:fini] 0-fuse: Closing
fuse connection to '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'.
[2019-01-25 14:02:45.079035] W [glusterfsd.c:1481:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7dd5) [0x7f2b0a656dd5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5651101afd45]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-:
received signum (15), shutting down
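
For reference, the GETSPEC request in this log was sent at 13:47:03.572632 (xid=0x2) and only forcibly unwound at 14:02:45, when the client disconnected from the first volfile server and moved on to the next one, so the volfile fetch sat unanswered for roughly 15 minutes. One way to check each volfile server by hand is to run the same query supervdsm uses, with a short timeout so a hanging server is obvious. A rough sketch (Python 3, assuming the gluster CLI is installed on the host; the server list and volume name below are placeholders):

    import subprocess

    SERVERS = ["server1", "server2", "server3"]  # the three volfile servers
    VOLUME = "ssd5"

    for server in SERVERS:
        cmd = ["/usr/sbin/gluster", "--mode=script", "volume", "info",
               "--remote-host=%s" % server, VOLUME, "--xml"]
        try:
            # Same command supervdsm runs; cap it at 30s instead of waiting forever.
            subprocess.run(cmd, capture_output=True, check=True, timeout=30)
            print("%s: ok" % server)
        except subprocess.TimeoutExpired:
            print("%s: timed out" % server)
        except subprocess.CalledProcessError as e:
            print("%s: rc=%d stderr=%r" % (server, e.returncode, e.stderr))

If one of the three servers times out here while the others answer, that is the server the mount keeps getting stuck on.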
On 1/25/19 2:55 PM, Nir Soffer wrote:
> On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego <jorick@netbulae.eu> wrote:
>
> Hi,
>
> We're having problems mounting the preexisting 3.12 glusterfs
> storage domains in ovirt node ng 4.3.0 rc2.
>
> Getting
>
>     There are no iptables blocks on the storage network, the IPs are
>     pingable both ways. I can telnet to the glusterfs ports and I see
>     no messages in the logs of the glusterfs servers.
>
>     When I try the mount command manually it hangs forever:
>
> /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp
>
> I haven't submitted a bug yet....
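
When the manual mount just hangs like this, it can help to re-run it with client-side debug logging and a bounded wait, so the client log shows exactly where it stops. A rough sketch (log-level and log-file are standard mount.glusterfs options; the servers, volume and paths below are placeholders):

    import subprocess

    # Placeholders: real servers, volume and a scratch mount point go here.
    cmd = ["/usr/bin/mount", "-t", "glusterfs",
           "-o", ("backup-volfile-servers=server2:server3,"
                  "log-level=DEBUG,log-file=/var/log/glusterfs/manual-mount.log"),
           "server1:/sdd8", "/mnt/temp"]
    try:
        # Give up after two minutes instead of hanging forever.
        result = subprocess.run(cmd, timeout=120)
        print("mount exited with rc=%d" % result.returncode)
    except subprocess.TimeoutExpired:
        print("mount is hanging; check /var/log/glusterfs/manual-mount.log")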
>
> from supervdsm.log
>
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call volumeInfo with (u'sdd8', u'*.*.*.*') {}
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset
> --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info
> --remote-host=*.*.*.* sdd8 --xml (cwd None)
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = '';
> <rc> = 1
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler
> is ok in the last 120 seconds (max pending: 2)
>
>
> This looks like
> https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18
>
> We should see "ThreadedHandler is ok" every 60 seconds when using
> debug log level.
>
> Looks like your entire supervdsmd process was hung for 120 seconds.
>
>
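
The periodic "ThreadedHandler is ok ..." line is essentially a heartbeat: when the gap between two heartbeats in the log is larger than the reporting interval, the process was blocked for that long. A minimal sketch of the idea, not vdsm's actual logutils code (names here are illustrative):

    import logging
    import threading
    import time

    log = logging.getLogger("health")


    def start_heartbeat(interval=60.0):
        """Log a heartbeat every `interval` seconds; a larger gap between two
        heartbeat lines in the log means the process was blocked that long."""
        def run():
            last = time.monotonic()
            while True:
                time.sleep(interval)
                now = time.monotonic()
                log.debug("is ok in the last %d seconds", now - last)
                last = now

        thread = threading.Thread(target=run, name="heartbeat", daemon=True)
        thread.start()
        return thread


    if __name__ == "__main__":
        logging.basicConfig(level=logging.DEBUG)
        start_heartbeat(interval=5.0)
        time.sleep(12)  # expect two heartbeat lines roughly 5s apart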
> MainProcess|jsonrpc/2::ERROR::2019-01-25
> 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
> Error in volumeInfo
> Traceback (most recent call last):
> File
> "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 102, in wrapper
> res = func(*args, **kwargs)
> File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py",
> line 529, in volumeInfo
> xmltree = _execGlusterXml(command)
> File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py",
> line 131, in _execGlusterXml
> return _getTree(rc, out, err)
> File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py",
> line 112, in _getTree
> raise ge.GlusterCmdExecFailedException(rc, out, err)
> GlusterCmdExecFailedException: Command execution failed
> error: E
> r
> r
> o
> r
>
> :
>
> R
> e
> q
> u
> e
> s
> t
>
> t
> i
> m
> e
> d
>
> o
> u
> t
>
> Looks like a side effect of
> https://gerrit.ovirt.org/c/94784/
>
> GlusterException assumes that it accepts a list of lines, but we
> started to raise strings. The class should be fixed to handle strings.
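
The one-character-per-line "Error: Request timed out" above is exactly how that looks in practice: formatting code that joins err as if it were a list of lines is handed a plain string and joins its characters instead. A minimal sketch of the symptom and one possible fix; this is only an illustration, not vdsm's actual GlusterException code:

    class GlusterCmdFailed(Exception):
        """Illustration only: an exception that formats err as a list of lines."""

        def __init__(self, rc, out, err):
            self.rc, self.out, self.err = rc, out, err

        def __str__(self):
            # Assumes err is a list of lines; joining a plain string instead
            # yields one character per line, as in the log above.
            return "error: %s\nreturn code: %s" % ("\n".join(self.err), self.rc)


    class GlusterCmdFailedFixed(GlusterCmdFailed):
        def __str__(self):
            # Possible fix: accept both a plain string and a list of lines.
            err = [self.err] if isinstance(self.err, str) else self.err
            return "error: %s\nreturn code: %s" % ("\n".join(err), self.rc)


    print(GlusterCmdFailed(1, "", "Error: Request timed out"))       # unreadable
    print(GlusterCmdFailedFixed(1, "", "Error: Request timed out"))  # readable

With the fixed variant, the supervdsm log would have shown a single readable "Error: Request timed out" line instead of the vertical spill above.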
>
>
>
> return code: 1
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call mount with (<vdsm.supervdsm_server._SuperVdsm object at
> 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8',
> u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype':
> u'glusterfs', 'mntOpts':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup': 'vdsm-glusterfs'}
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset
> --cpu-list 0-63 /usr/bin/systemd-run --scope
> --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8
> /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None)
> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,884::commands::219::root::(execCmd) FAILED: <err> =
> 'Running scope as unit run-38676.scope.\nMount failed. Please
> check the log file for more details.\n'; <rc> = 1
> MainProcess|jsonrpc/0::ERROR::2019-01-25
> 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
> Error in mount
> Traceback (most recent call last):
> File
> "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 102, in wrapper
> res = func(*args, **kwargs)
> File
> "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 144, in mount
> cgroup=cgroup)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py",
> line 277, in _mount
> _runcmd(cmd)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py",
> line 305, in _runcmd
> raise MountError(rc, b";".join((out, err)))
> MountError: (1, ';Running scope as unit run-38676.scope.\nMount
> failed. Please check the log file for more details.\n')
>
> The mount failure is probably related to glusterfs. There are glusterfs
> logs on the host that can give more info on this error.
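
For anyone else tracking this down: the FUSE client log referred to here normally lives under /var/log/glusterfs/, named after the mount point with the slashes replaced by dashes (unless a log-file= mount option was given). A small helper to compute that path, assuming the default mount.glusterfs naming:

    def gluster_mount_log(mountpoint):
        """Default glusterfs client log path for a given mount point."""
        return "/var/log/glusterfs/%s.log" % mountpoint.lstrip("/").replace("/", "-")


    print(gluster_mount_log("/rhev/data-center/mnt/glusterSD/server:_sdd8"))
    # -> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-server:_sdd8.log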
>
> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call volumeInfo with (u'ssd9', u'*.*.*.*') {}
> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset
> --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info
> --remote-host=*.*.*.* ssd9 --xml (cwd None)
>
>
> from vdsm.log
>
> 2019-01-25 13:46:03,519+0100 WARN (vdsm.Scheduler) [Executor]
> Worker blocked: <Worker name=jsonrpc/2 running <Task <JsonRpcTask
> {'params': {u'connectionParams': [{u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection':
> u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection':
> u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection':
> u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection':
> u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection':
> u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection':
> u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection':
> u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}], u'storagepoolID':
> u'00000001-0001-0001-0001-000000000043', u'domainType': 7},
> 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer',
> 'id': u'581e2ad3-0682-4d44-95b4-bdc088b45f66'} at 0x7f9be815c850>
> timeout=60, duration=1260.00 at 0x7f9be815ca10> task#=98 at
> 0x7f9be83bb750>, traceback:
> File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
> self.__bootstrap_inner()
> File: "/usr/lib64/python2.7/threading.py", line 812, in
> __bootstrap_inner
> self.run()
> File: "/usr/lib64/python2.7/threading.py", line 765, in run
> self.__target(*self.__args, **self.__kwargs)
> File:
> "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line
> 195, in run
> ret = func(*args, **kwargs)
> File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line
> 301, in _run
> self._execute_task()
> File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line
> 315, in _execute_task
> task()
> File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line
> 391, in __call__
> self._callable()
> File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line 262, in __call__
> self._handler(self._ctx, self._req)
> File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line 305, in _serveRequest
> response = self._handle_request(req, ctx)
> File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line 345, in _handle_request
> res = method(**params)
> File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
> 194, in _dynamicMethod
> result = fn(*methodArgs)
> File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103,
> in connectStorageServer
> connectionParams)
> File:
> "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py",
> line 72, in wrapper
> result = ctask.prepare(func, *args, **kwargs)
> File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py",
> line 108, in wrapper
> return m(self, *a, **kw)
> File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py",
> line 1179, in prepare
> result = self._run(func, *args, **kwargs)
> File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py",
> line 882, in _run
> return fn(*args, **kargs)
> File: "<string>", line 2, in connectStorageServer
> File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> 50, in method
> ret = func(*args, **kwargs)
> File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line
> 2410, in connectStorageServer
> conObj.connect()
>
> from messages:
>
> Jan 25 13:49:07 node9 vdsm[31968]: WARN Worker blocked: <Worker
> name=jsonrpc/3 running <Task <JsonRpcTask {'params':
> {u'connectionParams': [{u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection':
> u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection':
> u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection':
> u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection':
> u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection':
> u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection':
> u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}, {u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection':
> u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
> '********', u'port': u''}], u'storagepoolID':
> u'00000001-0001-0001-0001-000000000043', u'domainType': 7},
> 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer',
> 'id': u'918f4d06-ca89-4ec0-a396-3407f1bdb8f9'} at 0x7f9be82ff250>
> timeout=60, duration=840.00 at 0x7f9be82ffa50> task#=93 at
> 0x7f9be83bba10>, traceback:#012File:
> "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012
> self.__bootstrap_inner()#012File:
> "/usr/lib64/python2.7/threading.py", line 812, in
> __bootstrap_inner#012 self.run()#012File:
> "/usr/lib64/python2.7/threading.py", line 765, in run#012
> self.__target(*self.__args, **self.__kwargs)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line
> 195, in run#012 ret = func(*args, **kwargs)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in
> _run#012 self._execute_task()#012File:
> "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in
> _execute_task#012 task()#012File:
> "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in
> __call__#012 self._callable()#012File:
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 262, in __call__#012 self._handler(self._ctx, self._req)#012File:
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 305, in _serveRequest#012 response = self._handle_request(req,
> ctx)#012File:
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 345, in _handle_request#012 res = method(**params)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194,
> in _dynamicMethod#012 result = fn(*methodArgs)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in
> connectStorageServer#012 connectionParams)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py",
> line 72, in wrapper#012 result = ctask.prepare(func, *args,
> **kwargs)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108,
> in wrapper#012 return m(self, *a, **kw)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
> 1179, in prepare#012 result = self._run(func, *args,
> **kwargs)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run#012 return fn(*args, **kargs)#012File: "<string>", line
> 2, in connectStorageServer#012File:
> "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method#012 ret = func(*args, **kwargs)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410,
> in connectStorageServer#012 conObj.connect()#012File:
> "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 172, in connect#012 self._mount.mount(self.options,
> self._vfsType, cgroup=self.CGROUP)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
> 207, in mount#012 cgroup=cgroup)#012File:
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 56, in __call__#012 return callMethod()#012File:
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 54, in <lambda>#012 **kwargs)#012File: "<string>", line 2, in
> mount#012File: "/usr/lib64/python2.7/multiprocessing/managers.py",
> line 759, in _callmethod#012 kind, result = conn.recv()
>
> This matches the theory of blocked supervdsmd.
>
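
The "Worker blocked" warnings above, with the worker's full stack attached, come from watching how long each task has been running and dumping the stuck thread's frames once it exceeds its timeout. A minimal sketch of that detection idea, not vdsm's actual executor/Scheduler code:

    import sys
    import threading
    import time
    import traceback


    def watch_worker(thread, timeout=60.0, interval=10.0):
        """Print the worker's current stack while it runs longer than `timeout`."""
        start = time.monotonic()
        while thread.is_alive():
            time.sleep(interval)
            duration = time.monotonic() - start
            if duration > timeout:
                frame = sys._current_frames().get(thread.ident)
                if frame is not None:
                    stack = "".join(traceback.format_stack(frame))
                    print("Worker blocked: %s duration=%.2f\n%s"
                          % (thread.name, duration, stack))


    if __name__ == "__main__":
        # Demo: a worker stuck in a long sleep stands in for the blocked mount.
        worker = threading.Thread(target=time.sleep, args=(30,), name="jsonrpc/0")
        worker.start()
        watch_worker(worker, timeout=5.0, interval=5.0)

Pointed at the jsonrpc worker thread, this would produce the same kind of repeating warning (growing duration, identical stack) seen in vdsm.log and /var/log/messages above.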
Met vriendelijke groet, With kind regards,

Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270   info@netbulae.eu   Staalsteden 4-3A   KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede   BTW NL821234584B01
----------------