4.3.0 RC2 cannot mount GlusterFS volumes on oVirt Node NG


Adding some people to help debugging the issue.

On Fri, Jan 25, 2019 at 14:18, Jorick Astrego <jorick@netbulae.eu> wrote:
Hi,
We're having problems mounting the preexisting 3.12 glusterfs storage domains in ovirt node ng 4.3.0 rc2.
Getting
There are no iptables blocks on the storage network, and the IPs are pingable both ways. I can telnet to the glusterfs ports, and I see no messages in the logs of the glusterfs servers.
When I try the mount command manually, it hangs forever:
/usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp
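(As a side note, here is a minimal Python 3 sketch of running the same mount with a timeout, so a hanging volfile fetch does not block the shell indefinitely; the addresses are placeholders for the masked IPs, and it needs to run as root:)

    import subprocess

    cmd = [
        "/usr/bin/mount", "-t", "glusterfs",
        "-o", "backup-volfile-servers=192.0.2.11:192.0.2.12",  # placeholder backup servers
        "192.0.2.10:/sdd8",  # placeholder primary volfile server
        "/mnt/temp",
    ]

    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
        print("rc:", result.returncode)
        print(result.stdout)
        print(result.stderr)
    except subprocess.TimeoutExpired:
        print("mount did not return within 120 seconds; the volfile fetch is probably hanging")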
I haven't submitted a bug yet....
from supervdsm.log
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'sdd8', u'*.*.*.*') {}
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8 --xml (cwd None)
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 120 seconds (max pending: 2)
MainProcess|jsonrpc/2::ERROR::2019-01-25 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529, in volumeInfo
    xmltree = _execGlusterXml(command)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131, in _execGlusterXml
    return _getTree(rc, out, err)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112, in _getTree
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed error: E r r o r
:
R e q u e s t
t i m e d
o u t
return code: 1
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8', u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype': u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup': 'vdsm-glusterfs'}
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None)
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n'; <rc> = 1
MainProcess|jsonrpc/0::ERROR::2019-01-25 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in mount
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 144, in mount
    cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 277, in _mount
    _runcmd(cmd)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd
    raise MountError(rc, b";".join((out, err)))
MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n')
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'ssd9', u'*.*.*.*') {}
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* ssd9 --xml (cwd None)
from vdsm.log
2019-01-25 13:46:03,519+0100 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=jsonrpc/2 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'581e2ad3-0682-4d44-95b4-bdc088b45f66'} at 0x7f9be815c850> timeout=60, duration=1260.00 at 0x7f9be815ca10> task#=98 at 0x7f9be83bb750>, traceback: File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap self.__bootstrap_inner() File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner self.run() File: "/usr/lib64/python2.7/threading.py", line 765, in run self.__target(*self.__args, **self.__kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run self._execute_task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__ self._callable() File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__ self._handler(self._ctx, self._req) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest response = self._handle_request(req, ctx) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request res = method(**params) File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod result = 
fn(*methodArgs) File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer connectionParams) File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper result = ctask.prepare(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper return m(self, *a, **kw) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare result = self._run(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File: "<string>", line 2, in connectStorageServer File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer conObj.connect()
from messages:
Jan 25 13:49:07 node9 vdsm[31968]: WARN Worker blocked: <Worker name=jsonrpc/3 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'192.168.99.15:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'918f4d06-ca89-4ec0-a396-3407f1bdb8f9'} at 0x7f9be82ff250> timeout=60, duration=840.00 at 0x7f9be82ffa50> task#=93 at 0x7f9be83bba10>, traceback:#012File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012 self.__bootstrap_inner()#012File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012 self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run#012 self.__target(*self.__args, **self.__kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012 self._execute_task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012 task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__#012 self._callable()#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__#012 self._handler(self._ctx, self._req)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest#012 response = self._handle_request(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012 res = method(**params)#012File: 
"/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012 result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer#012 connectionParams)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper#012 result = ctask.prepare(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper#012 return m(self, *a, **kw)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare#012 result = self._run(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run#012 return fn(*args, **kargs)#012File: "<string>", line 2, in connectStorageServer#012File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer#012 conObj.connect()#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect#012 self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount#012 cgroup=cgroup)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__#012 return callMethod()#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>#012 **kwargs)#012File: "<string>", line 2, in mount#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod#012 kind, result = conn.recv()
Met vriendelijke groet, With kind regards,
Jorick Astrego
*Netbulae Virtualization Experts * ------------------------------ Tel: 053 20 30 270 info@netbulae.eu Staalsteden 4-3A KvK 08198180 Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01 ------------------------------
-- SANDRO BONAZZOLA MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://red.ht/sig>

On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego <jorick@netbulae.eu> wrote:
Hi,
We're having problems mounting the preexisting 3.12 glusterfs storage domains in ovirt node ng 4.3.0 rc2.
Getting
There are no iptables blocks on the storage network, and the IPs are pingable both ways. I can telnet to the glusterfs ports, and I see no messages in the logs of the glusterfs servers.
When I try the mount command manually, it hangs forever:
/usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp
I haven't submitted a bug yet....
from supervdsm.log
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'sdd8', u'*.*.*.*') {} MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8 --xml (cwd None) MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = ''; <rc> = 1 MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 120 seconds (max pending: 2)
This looks like https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18. We should see "ThreadedHandler is ok" every 60 seconds when using debug log level. It looks like your entire supervdsmd process was hung for 120 seconds.
MainProcess|jsonrpc/2::ERROR::2019-01-25 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper res = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529, in volumeInfo xmltree = _execGlusterXml(command) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131, in _execGlusterXml return _getTree(rc, out, err) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112, in _getTree raise ge.GlusterCmdExecFailedException(rc, out, err) GlusterCmdExecFailedException: Command execution failed error: E r r o r
:
R e q u e s t
t i m e d
o u t
Looks like a side effect of https://gerrit.ovirt.org/c/94784/. GlusterException assumes that it accepts a list of lines, but we started to raise strings. The class should be fixed to handle strings.
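For illustration, a minimal, hypothetical sketch of that failure mode (this is not the actual vdsm code): a formatter that assumes it gets a list of lines and joins the elements produces readable output for a list, but spaces out every character when it is handed a plain string, which matches the "E r r o r : R e q u e s t t i m e d o u t" seen in the traceback above.

    # Hypothetical sketch of the list-vs-string bug; the real
    # GlusterCmdExecFailedException in vdsm may format its message differently.
    def format_err(err):
        # Assumes `err` is a list of lines; joining a string iterates its characters.
        return "error: " + " ".join(err)

    print(format_err(["Error: Request timed out"]))
    # error: Error: Request timed out
    print(format_err("Error: Request timed out"))
    # error: E r r o r :   R e q u e s t   t i m e d   o u t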
return code: 1 MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8', u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype': u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup': 'vdsm-glusterfs'} MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None) MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n'; <rc> = 1 MainProcess|jsonrpc/0::ERROR::2019-01-25 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in mount Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper res = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 144, in mount cgroup=cgroup) File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 277, in _mount _runcmd(cmd) File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd raise MountError(rc, b";".join((out, err))) MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n')
The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
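For reference, the FUSE client usually writes its mount log under /var/log/glusterfs/, named after the mount point with slashes replaced by dashes. A small sketch of locating it (the naming convention can vary between GlusterFS versions, and the address is a placeholder for the masked IP):

    # Sketch: derive the GlusterFS FUSE mount log path for a given mount point.
    mountpoint = "/rhev/data-center/mnt/glusterSD/192.0.2.10:_sdd8"  # placeholder address
    logfile = "/var/log/glusterfs/{}.log".format(mountpoint.lstrip("/").replace("/", "-"))
    print(logfile)
    # /var/log/glusterfs/rhev-data-center-mnt-glusterSD-192.0.2.10:_sdd8.log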
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'ssd9', u'*.*.*.*') {} MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* ssd9 --xml (cwd None)
from vdsm.log
2019-01-25 13:46:03,519+0100 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=jsonrpc/2 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'581e2ad3-0682-4d44-95b4-bdc088b45f66'} at 0x7f9be815c850> timeout=60, duration=1260.00 at 0x7f9be815ca10> task#=98 at 0x7f9be83bb750>, traceback: File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap self.__bootstrap_inner() File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner self.run() File: "/usr/lib64/python2.7/threading.py", line 765, in run self.__target(*self.__args, **self.__kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run self._execute_task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__ self._callable() File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__ self._handler(self._ctx, self._req) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest response = self._handle_request(req, ctx) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request res = method(**params) File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod result = 
fn(*methodArgs) File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer connectionParams) File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper result = ctask.prepare(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper return m(self, *a, **kw) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare result = self._run(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File: "<string>", line 2, in connectStorageServer File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer conObj.connect()
from messages:
Jan 25 13:49:07 node9 vdsm[31968]: WARN Worker blocked: <Worker name=jsonrpc/3 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'192.168.99.15:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'918f4d06-ca89-4ec0-a396-3407f1bdb8f9'} at 0x7f9be82ff250> timeout=60, duration=840.00 at 0x7f9be82ffa50> task#=93 at 0x7f9be83bba10>, traceback:#012File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012 self.__bootstrap_inner()#012File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012 self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run#012 self.__target(*self.__args, **self.__kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012 self._execute_task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012 task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__#012 self._callable()#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__#012 self._handler(self._ctx, self._req)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest#012 response = self._handle_request(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012 res = method(**params)#012File: 
"/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012 result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer#012 connectionParams)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper#012 result = ctask.prepare(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper#012 return m(self, *a, **kw)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare#012 result = self._run(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run#012 return fn(*args, **kargs)#012File: "<string>", line 2, in connectStorageServer#012File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer#012 conObj.connect()#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect#012 self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount#012 cgroup=cgroup)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__#012 return callMethod()#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>#012 **kwargs)#012File: "<string>", line 2, in mount#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod#012 kind, result = conn.recv()
This matches the theory of blocked supervdsmd.
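To illustrate why a stuck supervdsmd shows up as a vdsm worker blocked in conn.recv(): vdsm calls supervdsmd through a multiprocessing manager proxy, so a call that never returns on the supervdsmd side leaves the caller waiting on the connection forever. A minimal, self-contained sketch (not the real vdsm/supervdsmd classes; running it hangs on purpose):

    import time
    from multiprocessing import Process
    from multiprocessing.managers import BaseManager

    ADDRESS = ("127.0.0.1", 50000)
    AUTHKEY = b"demo"

    class StuckService:
        def mount(self):
            time.sleep(3600)  # stands in for a gluster mount that never returns

    class ServiceManager(BaseManager):
        pass

    def run_server():
        service = StuckService()
        ServiceManager.register("service", callable=lambda: service)
        ServiceManager(address=ADDRESS, authkey=AUTHKEY).get_server().serve_forever()

    if __name__ == "__main__":
        Process(target=run_server, daemon=True).start()
        time.sleep(1)  # crude wait for the server to start listening

        ServiceManager.register("service")
        client = ServiceManager(address=ADDRESS, authkey=AUTHKEY)
        client.connect()
        proxy = client.service()
        proxy.mount()  # blocks in conn.recv(), just like the vdsm jsonrpc worker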
Met vriendelijke groet, With kind regards,

Jorick Astrego
*Netbulae Virtualization Experts * ------------------------------ Tel: 053 20 30 270 info@netbulae.eu Staalsteden 4-3A KvK 08198180 Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01 ------------------------------

The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
Oh duh, sorry forgot to check :-&

"failed to fetch volume file"

[2019-01-25 13:47:03.560677] I [MSGID: 100030] [glusterfsd.c:2691:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.2 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=*.*.*.* --volfile-server=*.*.*.* --volfile-server=*.*.*.* --volfile-id=ssd5 /rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5)
[2019-01-25 13:47:03.571819] I [MSGID: 101190] [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*
[2019-01-25 14:02:45.067500] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify] 0-glusterfsd-mgmt: connecting to next volfile server *.*.*.*
[2019-01-25 14:02:45.069678] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f2b0b7f4f1b] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7f2b0b5bde11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7f2b0b5bdf2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f2b0b5bf531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f2b0b5c00d8] ))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2019-01-25 13:47:03.572632 (xid=0x2)
[2019-01-25 14:02:45.069697] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:ssd5)
[2019-01-25 14:02:45.069725] W [glusterfsd.c:1481:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(+0xce32) [0x7f2b0b5bde32] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x841) [0x5651101b7231] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-: received signum (0), shutting down
[2019-01-25 14:02:45.069749] I [fuse-bridge.c:5897:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'.
[2019-01-25 14:02:45.078949] I [fuse-bridge.c:5902:fini] 0-fuse: Closing fuse connection to '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'.
[2019-01-25 14:02:45.079035] W [glusterfsd.c:1481:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f2b0a656dd5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5651101afd45] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-: received signum (15), shutting down

On 1/25/19 2:55 PM, Nir Soffer wrote:
On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego <jorick@netbulae.eu <mailto:jorick@netbulae.eu>> wrote:
Hi,
We're having problems mounting the preexisting 3.12 glusterfs storage domains in ovirt node ng 4.3.0 rc2.
Getting
There are no iptables blocks on the storage network, and the IPs are pingable both ways. I can telnet to the glusterfs ports, and I see no messages in the logs of the glusterfs servers.
When I try the mount command manually, it hangs forever:
/usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp
I haven't submitted a bug yet....
from supervdsm.log
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'sdd8', u'*.*.*.*') {} MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8 --xml (cwd None) MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = ''; <rc> = 1 MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 120 seconds (max pending: 2)
This looks like https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18
We should see "ThreadedHandler is ok" every 60 seconds when using debug log level.
It looks like your entire supervdsmd process was hung for 120 seconds.
MainProcess|jsonrpc/2::ERROR::2019-01-25 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper res = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529, in volumeInfo xmltree = _execGlusterXml(command) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131, in _execGlusterXml return _getTree(rc, out, err) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112, in _getTree raise ge.GlusterCmdExecFailedException(rc, out, err) GlusterCmdExecFailedException: Command execution failed error: E r r o r : R e q u e s t t i m e d o u t
Looks like a side effect of https://gerrit.ovirt.org/c/94784/.
GlusterException assumes that it accepts a list of lines, but we started to raise strings. The class should be fixed to handle strings.
return code: 1 MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8', u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype': u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup': 'vdsm-glusterfs'} MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None) MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n'; <rc> = 1 MainProcess|jsonrpc/0::ERROR::2019-01-25 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in mount Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper res = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 144, in mount cgroup=cgroup) File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 277, in _mount _runcmd(cmd) File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd raise MountError(rc, b";".join((out, err))) MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n')
The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'ssd9', u'*.*.*.*') {} MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* ssd9 --xml (cwd None)
from vdsm.log
2019-01-25 13:46:03,519+0100 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=jsonrpc/2 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'581e2ad3-0682-4d44-95b4-bdc088b45f66'} at 0x7f9be815c850> timeout=60, duration=1260.00 at 0x7f9be815ca10> task#=98 at 0x7f9be83bb750>, traceback: File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap self.__bootstrap_inner() File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner self.run() File: "/usr/lib64/python2.7/threading.py", line 765, in run self.__target(*self.__args, **self.__kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run self._execute_task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__ self._callable() File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__ self._handler(self._ctx, self._req) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest response = self._handle_request(req, ctx) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request res = method(**params) File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod result = 
fn(*methodArgs) File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer connectionParams) File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper result = ctask.prepare(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper return m(self, *a, **kw) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare result = self._run(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File: "<string>", line 2, in connectStorageServer File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer conObj.connect()
from messages:
Jan 25 13:49:07 node9 vdsm[31968]: WARN Worker blocked: <Worker name=jsonrpc/3 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'918f4d06-ca89-4ec0-a396-3407f1bdb8f9'} at 0x7f9be82ff250> timeout=60, duration=840.00 at 0x7f9be82ffa50> task#=93 at 0x7f9be83bba10>, traceback:#012File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012 self.__bootstrap_inner()#012File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012 self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run#012 self.__target(*self.__args, **self.__kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012 self._execute_task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012 task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__#012 self._callable()#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__#012 self._handler(self._ctx, self._req)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest#012 response = self._handle_request(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012 res = method(**params)#012File: 
"/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012 result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer#012 connectionParams)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper#012 result = ctask.prepare(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper#012 return m(self, *a, **kw)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare#012 result = self._run(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run#012 return fn(*args, **kargs)#012File: "<string>", line 2, in connectStorageServer#012File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer#012 conObj.connect()#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect#012 self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount#012 cgroup=cgroup)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__#012 return callMethod()#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>#012 **kwargs)#012File: "<string>", line 2, in mount#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod#012 kind, result = conn.recv()
This matches the theory of blocked supervdsmd.

On Fri, Jan 25, 2019 at 7:36 PM Jorick Astrego <jorick@netbulae.eu> wrote:
The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
Oh duh, sorry forgot to check :-&
"failed to fetch volume file"
[2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*
Are you able to access this host, and the gluster ports open? Anything in the glusterd.log of the gluster server?
Adding Sanju to help
[2019-01-25 13:47:03.560677] I [MSGID: 100030] [glusterfsd.c:2691:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.2 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=*.*.*.* --volfile-server=*.*.*.* --volfile-server=*.*.*.* --volfile-id=ssd5 /rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5) [2019-01-25 13:47:03.571819] I [MSGID: 101190] [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.* [2019-01-25 14:02:45.067500] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify] 0-glusterfsd-mgmt: connecting to next volfile server *.*.*.* [2019-01-25 14:02:45.069678] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f2b0b7f4f1b] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7f2b0b5bde11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7f2b0b5bdf2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f2b0b5bf531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f2b0b5c00d8] ))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2019-01-25 13:47:03.572632 (xid=0x2) [2019-01-25 14:02:45.069697] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:ssd5) [2019-01-25 14:02:45.069725] W [glusterfsd.c:1481:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(+0xce32) [0x7f2b0b5bde32] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x841) [0x5651101b7231] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-: received signum (0), shutting down [2019-01-25 14:02:45.069749] I [fuse-bridge.c:5897:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'. [2019-01-25 14:02:45.078949] I [fuse-bridge.c:5902:fini] 0-fuse: Closing fuse connection to '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'. [2019-01-25 14:02:45.079035] W [glusterfsd.c:1481:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f2b0a656dd5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5651101afd45] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-: received signum (15), shutting down
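Side note on the timestamps in that log: the GETSPEC call that eventually fails was issued at 13:47:03 and only forcibly unwound at 14:02:45, so the client sat waiting for the volume file for roughly a quarter of an hour before shutting down. A quick delta, using the (second-rounded) times from the log above:

    # The GETSPEC request in the log above was sent at 13:47:03 and only
    # unwound at 14:02:45; the delta shows how long the client waited for
    # the volume file before giving up.
    from datetime import datetime

    sent = datetime(2019, 1, 25, 13, 47, 3)
    unwound = datetime(2019, 1, 25, 14, 2, 45)
    print(unwound - sent)  # -> 0:15:42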
On 1/25/19 2:55 PM, Nir Soffer wrote:
On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego <jorick@netbulae.eu> wrote:
Hi,
We're having problems mounting the preexisting 3.12 glusterfs storage domains in ovirt node ng 4.3.0 rc2.
Getting
There are no iptables blocks on the storage network, the IPs are pingable both ways. I can telnet to the glusterfs ports and I see no messages in the logs of the glusterfs servers.
When I try the mount command manually it hangs forever:
/usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp
I haven't submitted a bug yet....
from supervdsm.log
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'sdd8', u'*.*.*.*') {} MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8 --xml (cwd None) MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = ''; <rc> = 1 MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 120 seconds (max pending: 2)
This looks like https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18
We should see "ThreadedHandler is ok" every 60 seconds when using debug log level.
Looks like your entire supervdsmd process was hung for 120 seconds.
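A minimal sketch of that liveness-reporting idea (this is not vdsm's actual ThreadedHandler code, just the pattern): a background thread logs "is ok" on a fixed interval, so when the reports suddenly arrive 120 seconds apart instead of 60, the whole process must have been stalled in between.

    # Not vdsm's ThreadedHandler, only the liveness-reporting pattern:
    # a helper thread logs "is ok" every REPORT_INTERVAL seconds, so a
    # larger gap between reports means the whole process was stalled.
    import logging
    import threading
    import time

    logging.basicConfig(level=logging.DEBUG,
                        format="%(asctime)s %(levelname)s %(message)s")

    REPORT_INTERVAL = 2  # seconds; the supervdsm log above uses 60


    def report_liveness():
        last = time.time()
        while True:
            time.sleep(REPORT_INTERVAL)
            now = time.time()
            # Healthy process: this prints roughly every REPORT_INTERVAL
            # seconds. A 120s gap with a 60s interval means nothing in the
            # process ran for that long.
            logging.debug("handler is ok in the last %.0f seconds", now - last)
            last = now


    reporter = threading.Thread(target=report_liveness)
    reporter.daemon = True
    reporter.start()

    # Stand-in for the real work; while it runs you should see the periodic
    # "is ok" lines at a steady rate.
    time.sleep(3 * REPORT_INTERVAL)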
MainProcess|jsonrpc/2::ERROR::2019-01-25 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper res = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529, in volumeInfo xmltree = _execGlusterXml(command) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131, in _execGlusterXml return _getTree(rc, out, err) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112, in _getTree raise ge.GlusterCmdExecFailedException(rc, out, err) GlusterCmdExecFailedException: Command execution failed error: E r r o r
:
R e q u e s t
t i m e d
o u t
Looks like a side effect of https://gerrit.ovirt.org/c/94784/
GlusterException assumes that it accepts a list of lines, but we started to raise strings. The class should be fixed to handle strings.
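The scrambled "E r r o r : R e q u e s t t i m e d o u t" above is exactly what you get when code expecting a list of lines is handed a plain string: joining a string iterates over its characters. A small standalone illustration of the failure mode and the obvious defensive fix (this is not the actual vdsm patch, just the idea):

    # Joining a list of lines vs. joining a bare string: the latter splits
    # into single characters, which is where the spaced-out error comes from.
    def format_err(err):
        # mimics an exception class that expects err to be a list of lines
        return " ".join(err)

    print(format_err(["Error: Request timed out"]))
    # -> Error: Request timed out
    print(format_err("Error: Request timed out"))
    # -> E r r o r :   R e q u e s t   t i m e d   o u t

    # Defensive fix: normalize a bare string to a one-element list first.
    def format_err_fixed(err):
        if isinstance(err, str):  # on Python 2 check basestring instead
            err = [err]
        return " ".join(err)

    print(format_err_fixed("Error: Request timed out"))
    # -> Error: Request timed out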
return code: 1 MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8', u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype': u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup': 'vdsm-glusterfs'} MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None) MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n'; <rc> = 1 MainProcess|jsonrpc/0::ERROR::2019-01-25 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in mount Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper res = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 144, in mount cgroup=cgroup) File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 277, in _mount _runcmd(cmd) File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd raise MountError(rc, b";".join((out, err))) MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n')
The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
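In case it saves someone a lookup: the fuse client normally writes a per-mount log under /var/log/glusterfs/, named after the mount point with the slashes turned into dashes (at least that is the convention I'm used to; adjust if your build logs elsewhere), and if I remember right the manual mount can also be retried with -o log-level=DEBUG,log-file=... for more detail. A tiny helper to guess the log path and show its tail:

    # Guess the fuse client log file for a mount point and print its tail.
    # Assumes the usual /var/log/glusterfs/<mount-path-with-dashes>.log
    # naming; the mount point below is a placeholder, not the real path.
    import os


    def client_log_path(mountpoint, logdir="/var/log/glusterfs"):
        return os.path.join(logdir, mountpoint.strip("/").replace("/", "-") + ".log")


    def tail(path, lines=50):
        with open(path) as f:
            return f.readlines()[-lines:]


    if __name__ == "__main__":
        path = client_log_path("/rhev/data-center/mnt/glusterSD/<server>:_sdd8")
        print(path)
        if os.path.exists(path):
            print("".join(tail(path)))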
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'ssd9', u'*.*.*.*') {} MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* ssd9 --xml (cwd None)
from vdsm.log
2019-01-25 13:46:03,519+0100 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=jsonrpc/2 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'581e2ad3-0682-4d44-95b4-bdc088b45f66'} at 0x7f9be815c850> timeout=60, duration=1260.00 at 0x7f9be815ca10> task#=98 at 0x7f9be83bb750>, traceback: File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap self.__bootstrap_inner() File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner self.run() File: "/usr/lib64/python2.7/threading.py", line 765, in run self.__target(*self.__args, **self.__kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run self._execute_task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__ self._callable() File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__ self._handler(self._ctx, self._req) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest response = self._handle_request(req, ctx) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request res = method(**params) File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod result = 
fn(*methodArgs) File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer connectionParams) File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper result = ctask.prepare(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper return m(self, *a, **kw) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare result = self._run(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File: "<string>", line 2, in connectStorageServer File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer conObj.connect()
from messages:
Jan 25 13:49:07 node9 vdsm[31968]: WARN Worker blocked: <Worker name=jsonrpc/3 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'918f4d06-ca89-4ec0-a396-3407f1bdb8f9'} at 0x7f9be82ff250> timeout=60, duration=840.00 at 0x7f9be82ffa50> task#=93 at 0x7f9be83bba10>, traceback:#012File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012 self.__bootstrap_inner()#012File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012 self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run#012 self.__target(*self.__args, **self.__kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012 self._execute_task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012 task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__#012 self._callable()#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__#012 self._handler(self._ctx, self._req)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest#012 response = self._handle_request(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012 res = method(**params)#012File: 
"/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012 result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer#012 connectionParams)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper#012 result = ctask.prepare(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper#012 return m(self, *a, **kw)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare#012 result = self._run(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run#012 return fn(*args, **kargs)#012File: "<string>", line 2, in connectStorageServer#012File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer#012 conObj.connect()#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect#012 self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount#012 cgroup=cgroup)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__#012 return callMethod()#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>#012 **kwargs)#012File: "<string>", line 2, in mount#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod#012 kind, result = conn.recv()
This matches the theory of blocked supervdsmd.
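For what it's worth, the "Worker blocked ... traceback:" lines above come from a watchdog that notices a task running past its timeout and dumps the stuck worker thread's stack. A rough standalone sketch of that mechanism (not vdsm's actual executor code):

    # A monitor thread watches a worker; once the task runs past a timeout
    # it grabs the worker's current stack via sys._current_frames() and
    # prints a "Worker blocked" style report.
    import sys
    import threading
    import time
    import traceback


    def worker():
        time.sleep(5)  # stands in for a connectStorageServer call that hangs


    def monitor(thread, timeout=2):
        start = time.time()
        while thread.is_alive():
            time.sleep(0.5)
            duration = time.time() - start
            if duration > timeout:
                frame = sys._current_frames()[thread.ident]
                stack = "".join(traceback.format_stack(frame))
                print("Worker blocked: duration=%.1f\n%s" % (duration, stack))
                break


    w = threading.Thread(target=worker)
    w.start()
    monitor(w)
    w.join()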

On 1/25/19 3:26 PM, Sahina Bose wrote:
On Fri, Jan 25, 2019 at 7:36 PM Jorick Astrego <jorick@netbulae.eu> wrote:
The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
Oh duh, sorry forgot to check :-&
"failed to fetch volume file"
[2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*
Are you able to access this host, and the gluster ports open? Anything in the glusterd.log of the gluster server?
Adding Sanju to help
From the host: Telnet *.*.*.14 24007 Trying *.*.*.14... Connected to *.*.*.14. Escape character is '^]'. glusterd.log on *.*.*.14: Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT icmp -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:54321 ACCEPT tcp -- anywhere anywhere tcp dpt:54322 ACCEPT tcp -- anywhere anywhere tcp dpt:sunrpc ACCEPT udp -- anywhere anywhere udp dpt:sunrpc ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT udp -- anywhere anywhere udp dpt:snmp ACCEPT tcp -- anywhere anywhere tcp dpt:websm ACCEPT tcp -- anywhere anywhere tcp dpt:24007 ACCEPT tcp -- anywhere anywhere tcp dpt:webcache ACCEPT tcp -- anywhere anywhere tcp dpt:38465 ACCEPT tcp -- anywhere anywhere tcp dpt:38466 ACCEPT tcp -- anywhere anywhere tcp dpt:38467 ACCEPT tcp -- anywhere anywhere tcp dpt:nfs ACCEPT tcp -- anywhere anywhere tcp dpt:38469 ACCEPT tcp -- anywhere anywhere tcp dpt:5666 ACCEPT tcp -- anywhere anywhere tcp dpt:39543 ACCEPT tcp -- anywhere anywhere tcp dpt:55863 ACCEPT tcp -- anywhere anywhere tcp dpt:38468 ACCEPT udp -- anywhere anywhere udp dpt:963 ACCEPT tcp -- anywhere anywhere tcp dpt:965 ACCEPT tcp -- anywhere anywhere tcp dpt:ctdb ACCEPT tcp -- anywhere anywhere tcp dpt:netbios-ssn ACCEPT tcp -- anywhere anywhere tcp dpt:microsoft-ds ACCEPT tcp -- anywhere anywhere tcp dpts:24009:24108 ACCEPT tcp -- anywhere anywhere tcp dpts:49152:49251 ACCEPT tcp -- anywhere anywhere tcp dpts:49217:49316 ACCEPT tcp -- anywhere anywhere tcp dpt:zabbix-agent /* Zabbix agent */ REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain FORWARD (policy ACCEPT) target prot opt source destination REJECT all -- anywhere anywhere PHYSDEV match ! 
--physdev-is-bridged reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT) target prot opt source destination And in the glusterd.log has nothing else but: [2019-01-25 11:02:32.617694] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:02:32.623892] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:03:31.847006] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:04:33.023685] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:04:33.030549] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:04:33.037024] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:04:33.043442] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:04:33.049494] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:04:33.056049] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:04:33.062081] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:05:47.710098] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:06:33.481200] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:06:33.488137] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:06:33.494492] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:06:33.501053] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:06:33.507302] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:06:33.513299] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:06:33.519269] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:08:33.940033] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:08:33.947011] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:08:33.953445] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:08:33.959618] I 
[MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:08:33.965659] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:08:33.971609] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:08:33.977763] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:10:58.656508] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:11:05.746810] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:11:08.153573] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:11:11.108019] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:11:13.456055] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:11:13.584535] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:11:16.046244] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:11:16.172035] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:11:18.370339] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:11:18.504515] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:11:20.710031] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:11:05.886182] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:11:08.284806] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:11:11.244245] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:11:20.835972] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:13:33.238060] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:15:36.929536] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:15:36.937647] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:15:36.943936] I [MSGID: 106499] 
[glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:15:36.950206] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:15:36.956460] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:15:36.962459] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:15:36.968594] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:21:14.123760] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:22:38.575269] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:22:38.582011] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:22:38.588583] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 Met vriendelijke groet, With kind regards, Jorick Astrego Netbulae Virtualization Experts ---------------- Tel: 053 20 30 270 info@netbulae.eu Staalsteden 4-3A KvK 08198180 Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01 ----------------

Hi,
I know people are busy, but do we have any progress on this? I removed the OVS cluster and recreated it with bridge networking.
But I'm still stuck on the "failed to fetch volume file" error and am unable to activate the hosts to continue testing...
Anything else I can try to debug? The storage network is IP based without DNS.
Regards, Jorick
On 1/25/19 3:53 PM, Jorick Astrego wrote:
On 1/25/19 3:26 PM, Sahina Bose wrote:
On Fri, Jan 25, 2019 at 7:36 PM Jorick Astrego <jorick@netbulae.eu> wrote:
The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
Oh duh, sorry forgot to check :-&
"failed to fetch volume file"
[2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*
Are you able to access this host, and the gluster ports open? Anything in the glusterd.log of the gluster server?
Adding Sanju to help
From the host:
Telnet *.*.*.14 24007 Trying *.*.*.14... Connected to *.*.*.14. Escape character is '^]'.
glusterd.log on *.*.*.14:
Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT icmp -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:54321 ACCEPT tcp -- anywhere anywhere tcp dpt:54322 ACCEPT tcp -- anywhere anywhere tcp dpt:sunrpc ACCEPT udp -- anywhere anywhere udp dpt:sunrpc ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT udp -- anywhere anywhere udp dpt:snmp ACCEPT tcp -- anywhere anywhere tcp dpt:websm ACCEPT tcp -- anywhere anywhere tcp dpt:24007 ACCEPT tcp -- anywhere anywhere tcp dpt:webcache ACCEPT tcp -- anywhere anywhere tcp dpt:38465 ACCEPT tcp -- anywhere anywhere tcp dpt:38466 ACCEPT tcp -- anywhere anywhere tcp dpt:38467 ACCEPT tcp -- anywhere anywhere tcp dpt:nfs ACCEPT tcp -- anywhere anywhere tcp dpt:38469 ACCEPT tcp -- anywhere anywhere tcp dpt:5666 ACCEPT tcp -- anywhere anywhere tcp dpt:39543 ACCEPT tcp -- anywhere anywhere tcp dpt:55863 ACCEPT tcp -- anywhere anywhere tcp dpt:38468 ACCEPT udp -- anywhere anywhere udp dpt:963 ACCEPT tcp -- anywhere anywhere tcp dpt:965 ACCEPT tcp -- anywhere anywhere tcp dpt:ctdb ACCEPT tcp -- anywhere anywhere tcp dpt:netbios-ssn ACCEPT tcp -- anywhere anywhere tcp dpt:microsoft-ds ACCEPT tcp -- anywhere anywhere tcp dpts:24009:24108 ACCEPT tcp -- anywhere anywhere tcp dpts:49152:49251 ACCEPT tcp -- anywhere anywhere tcp dpts:49217:49316 ACCEPT tcp -- anywhere anywhere tcp dpt:zabbix-agent /* Zabbix agent */ REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT) target prot opt source destination REJECT all -- anywhere anywhere PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT) target prot opt source destination
And the glusterd.log has nothing else but:
[2019-01-25 11:02:32.617694] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:02:32.623892] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:03:31.847006] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:04:33.023685] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:04:33.030549] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:04:33.037024] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:04:33.043442] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:04:33.049494] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:04:33.056049] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:04:33.062081] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:05:47.710098] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:06:33.481200] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:06:33.488137] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:06:33.494492] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:06:33.501053] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:06:33.507302] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:06:33.513299] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:06:33.519269] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:08:33.940033] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:08:33.947011] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:08:33.953445] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:08:33.959618] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:08:33.965659] I 
[MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:08:33.971609] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:08:33.977763] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:10:58.656508] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:11:05.746810] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:11:08.153573] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:11:11.108019] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:11:13.456055] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:11:13.584535] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:11:16.046244] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:11:16.172035] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:11:18.370339] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:11:18.504515] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:11:20.710031] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:11:05.886182] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:11:08.284806] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:11:11.244245] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:11:20.835972] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:13:33.238060] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:15:36.929536] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:15:36.937647] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:15:36.943936] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:15:36.950206] I [MSGID: 106499] 
[glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:15:36.956460] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:15:36.962459] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:15:36.968594] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:21:14.123760] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:22:38.575269] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:22:38.582011] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:22:38.588583] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3

Hi,
I will probably be able to look into the hung supervdsmd issue only next week. I'll reply as soon as I have some insight.
Regards, Marcin
On 1/29/19 12:19 PM, Jorick Astrego wrote:
Hi,
I know people are busy, but do we have any progress on this? I removed the OVS cluster and recreated it with bridge networking.
But I'm still stuck on the "failed to fetch volume file" error and am unable to activate the hosts to continue testing...
Anything else I can try to debug? The storage network is IP based without DNS.
Regards, Jorick
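One more low-level thing that can be scripted while waiting: check that plain TCP to glusterd on every volfile server (port 24007 by default) is reachable and fast from the node. Telnet to one server already worked earlier in the thread, so this just automates the same check for all primary and backup volfile servers at once; the addresses below are placeholders, not the real storage-network IPs.

    # Check TCP reachability of glusterd (default port 24007) on each
    # volfile server from the oVirt node. Replace the placeholder IPs with
    # the real storage-network addresses of the primary and backup servers.
    import socket
    import time

    SERVERS = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]  # placeholders
    PORT = 24007

    for host in SERVERS:
        start = time.time()
        try:
            sock = socket.create_connection((host, PORT), timeout=5)
            sock.close()
            print("%s:%d ok (%.2fs)" % (host, PORT, time.time() - start))
        except (socket.error, socket.timeout) as err:
            print("%s:%d FAILED: %s" % (host, PORT, err))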
On 1/25/19 3:53 PM, Jorick Astrego wrote:
On 1/25/19 3:26 PM, Sahina Bose wrote:
On Fri, Jan 25, 2019 at 7:36 PM Jorick Astrego <jorick@netbulae.eu> wrote:
The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
Oh duh, sorry forgot to check :-&
"failed to fetch volume file"
[2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*
Are you able to access this host, and the gluster ports open? Anything in the glusterd.log of the gluster server?
Adding Sanju to help
From the host:
Telnet *.*.*.14 24007 Trying *.*.*.14... Connected to *.*.*.14. Escape character is '^]'.
glusterd.log on *.*.*.14:
Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT icmp -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:54321 ACCEPT tcp -- anywhere anywhere tcp dpt:54322 ACCEPT tcp -- anywhere anywhere tcp dpt:sunrpc ACCEPT udp -- anywhere anywhere udp dpt:sunrpc ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT udp -- anywhere anywhere udp dpt:snmp ACCEPT tcp -- anywhere anywhere tcp dpt:websm ACCEPT tcp -- anywhere anywhere tcp dpt:24007 ACCEPT tcp -- anywhere anywhere tcp dpt:webcache ACCEPT tcp -- anywhere anywhere tcp dpt:38465 ACCEPT tcp -- anywhere anywhere tcp dpt:38466 ACCEPT tcp -- anywhere anywhere tcp dpt:38467 ACCEPT tcp -- anywhere anywhere tcp dpt:nfs ACCEPT tcp -- anywhere anywhere tcp dpt:38469 ACCEPT tcp -- anywhere anywhere tcp dpt:5666 ACCEPT tcp -- anywhere anywhere tcp dpt:39543 ACCEPT tcp -- anywhere anywhere tcp dpt:55863 ACCEPT tcp -- anywhere anywhere tcp dpt:38468 ACCEPT udp -- anywhere anywhere udp dpt:963 ACCEPT tcp -- anywhere anywhere tcp dpt:965 ACCEPT tcp -- anywhere anywhere tcp dpt:ctdb ACCEPT tcp -- anywhere anywhere tcp dpt:netbios-ssn ACCEPT tcp -- anywhere anywhere tcp dpt:microsoft-ds ACCEPT tcp -- anywhere anywhere tcp dpts:24009:24108 ACCEPT tcp -- anywhere anywhere tcp dpts:49152:49251 ACCEPT tcp -- anywhere anywhere tcp dpts:49217:49316 ACCEPT tcp -- anywhere anywhere tcp dpt:zabbix-agent /* Zabbix agent */ REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT) target prot opt source destination REJECT all -- anywhere anywhere PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT) target prot opt source destination
And the glusterd.log has nothing else but:
[2019-01-25 11:02:32.617694] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:02:32.623892] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:03:31.847006] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:04:33.023685] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:04:33.030549] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:04:33.037024] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:04:33.043442] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:04:33.049494] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:04:33.056049] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:04:33.062081] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:05:47.710098] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:06:33.481200] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:06:33.488137] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:06:33.494492] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:06:33.501053] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:06:33.507302] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:06:33.513299] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:06:33.519269] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:08:33.940033] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:08:33.947011] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:08:33.953445] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:08:33.959618] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:08:33.965659] I 
[MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:08:33.971609] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:08:33.977763] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:10:58.656508] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:11:05.746810] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:11:08.153573] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:11:11.108019] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:11:13.456055] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:11:13.584535] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:11:16.046244] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:11:16.172035] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:11:18.370339] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:11:18.504515] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:11:20.710031] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:11:05.886182] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:11:08.284806] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:11:11.244245] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:11:20.835972] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:13:33.238060] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:15:36.929536] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:15:36.937647] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:15:36.943936] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3 [2019-01-25 11:15:36.950206] I [MSGID: 106499] 
[glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd4 [2019-01-25 11:15:36.956460] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd5 [2019-01-25 11:15:36.962459] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd6 [2019-01-25 11:15:36.968594] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd9 [2019-01-25 11:21:14.123760] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-01-25 11:22:38.575269] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume hdd2 [2019-01-25 11:22:38.582011] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume sdd8 [2019-01-25 11:22:38.588583] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume ssd3

On Fri, Jan 25, 2019 at 3:55 PM Nir Soffer <nsoffer@redhat.com> wrote:
On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego <jorick@netbulae.eu> wrote:
Hi,
We're having problems mounting the preexisting 3.12 glusterfs storage domains in ovirt node ng 4.3.0 rc2.
Getting
There are no iptables blocks on the storage network, the IPs are pingable both ways. I can telnet to the glusterfs ports and I see no messages in the logs of the glusterfs servers.
When I try the mount command manually it hangs forever:
/usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp
I haven't submitted a bug yet....
from supervdsm.log
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'sdd8', u'*.*.*.*') {} MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8 --xml (cwd None) MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = ''; <rc> = 1 MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 120 seconds (max pending: 2)
This looks like https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18
We should see "ThreadedHandler is ok" every 60 seconds when using debug log level.
Looks like your entire supervdsmd process was hung for 120 seconds.
MainProcess|jsonrpc/2::ERROR::2019-01-25 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper res = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529, in volumeInfo xmltree = _execGlusterXml(command) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131, in _execGlusterXml return _getTree(rc, out, err) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112, in _getTree raise ge.GlusterCmdExecFailedException(rc, out, err) GlusterCmdExecFailedException: Command execution failed error: E r r o r
:
R e q u e s t
t i m e d
o u t
Looks like a side effect of https://gerrit.ovirt.org/c/94784/.
GlusterException assumes that it accepts a list of lines, but we started raising strings. The class should be fixed to handle strings.
Fixed in https://gerrit.ovirt.org/c/97316/. I think we need this in 4.2.8. Denis, please check.
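To make the odd "E r r o r : R e q u e s t t i m e d o u t" output above concrete, here is a hypothetical reconstruction of the pattern Nir describes (simplified names, not vdsm's exact code): joining a string as if it were a list of lines splits it into individual characters.

class GlusterCmdExecFailedException(Exception):
    """Simplified stand-in for vdsm's exception class."""

    def __init__(self, rc=0, out=None, err=None):
        self.rc = rc
        self.out = out or []
        self.err = err or []

    def __str__(self):
        # Works when err is a list of lines; when err is a plain string,
        # '\n'.join() iterates it character by character.
        return "Command execution failed\nerror: %s\nreturn code: %s" % (
            "\n".join(self.err), self.rc)


# Expected usage: err is a list of lines.
print(GlusterCmdExecFailedException(1, err=["Error: Request timed out"]))

# The regression: err passed as a bare string produces the scattered output.
print(GlusterCmdExecFailedException(1, err="Error: Request timed out"))

# A fix along the lines of the patch above would normalize the input, e.g.:
#     if isinstance(err, str):
#         err = [err]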
return code: 1
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8', u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype': u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup': 'vdsm-glusterfs'}
MainProcess|jsonrpc/2::DEBUG::2019-01-25 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None)
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n'; <rc> = 1
MainProcess|jsonrpc/0::ERROR::2019-01-25 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper) Error in mount
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 102, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 144, in mount
    cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 277, in _mount
    _runcmd(cmd)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305, in _runcmd
    raise MountError(rc, b";".join((out, err)))
MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed. Please check the log file for more details.\n')
The mount failure is probably related to glusterfs. There are glusterfs logs on the host that can give more info on this error.
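The fuse client writes its own log on the host, conventionally under /var/log/glusterfs/ with the mount point encoded in the file name (for the command above, something like rhev-data-center-mnt-glusterSD-<server>:_sdd8.log); that is the log the "Mount failed" message refers to. If it helps to reproduce the failure without a shell hanging forever, here is a rough sketch that runs the same kind of mount command under a timeout (assumes Python 3's subprocess timeout support; the addresses, volume and mount point are placeholders):

import subprocess

SERVER = "192.0.2.10"      # placeholder primary gluster server
BACKUP = "192.0.2.11"      # placeholder backup-volfile-server
VOLUME = "sdd8"
MOUNTPOINT = "/mnt/temp"   # same target as the manual attempt above

cmd = [
    "/usr/bin/mount", "-t", "glusterfs",
    "-o", "backup-volfile-servers=%s" % BACKUP,
    "%s:/%s" % (SERVER, VOLUME),
    MOUNTPOINT,
]

try:
    # 120 seconds mirrors the gap seen in supervdsm.log; adjust as needed.
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
    print("rc=%d" % result.returncode)
    print(result.stdout)
    print(result.stderr)
except subprocess.TimeoutExpired:
    print("mount did not return within 120s; check the glusterfs client log "
          "under /var/log/glusterfs/ for the reason")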
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'ssd9', u'*.*.*.*') {}
MainProcess|jsonrpc/0::DEBUG::2019-01-25 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* ssd9 --xml (cwd None)
from vdsm.log
2019-01-25 13:46:03,519+0100 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=jsonrpc/2 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'581e2ad3-0682-4d44-95b4-bdc088b45f66'} at 0x7f9be815c850> timeout=60, duration=1260.00 at 0x7f9be815ca10> task#=98 at 0x7f9be83bb750>, traceback: File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap self.__bootstrap_inner() File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner self.run() File: "/usr/lib64/python2.7/threading.py", line 765, in run self.__target(*self.__args, **self.__kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run self._execute_task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__ self._callable() File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__ self._handler(self._ctx, self._req) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest response = self._handle_request(req, ctx) File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request res = method(**params) File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod result = 
fn(*methodArgs) File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer connectionParams) File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper result = ctask.prepare(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper return m(self, *a, **kw) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare result = self._run(func, *args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File: "<string>", line 2, in connectStorageServer File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer conObj.connect()
from messages:
Jan 25 13:49:07 node9 vdsm[31968]: WARN Worker blocked: <Worker name=jsonrpc/3 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'192.168.99.15:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection': u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection': u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection': u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}, {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id': u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection': u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'918f4d06-ca89-4ec0-a396-3407f1bdb8f9'} at 0x7f9be82ff250> timeout=60, duration=840.00 at 0x7f9be82ffa50> task#=93 at 0x7f9be83bba10>, traceback:#012File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012 self.__bootstrap_inner()#012File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012 self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run#012 self.__target(*self.__args, **self.__kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012 self._execute_task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012 task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__#012 self._callable()#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__#012 self._handler(self._ctx, self._req)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest#012 response = self._handle_request(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012 res = method(**params)#012File: 
"/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012 result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer#012 connectionParams)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper#012 result = ctask.prepare(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper#012 return m(self, *a, **kw)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare#012 result = self._run(func, *args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run#012 return fn(*args, **kargs)#012File: "<string>", line 2, in connectStorageServer#012File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method#012 ret = func(*args, **kwargs)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer#012 conObj.connect()#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect#012 self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)#012File: "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount#012 cgroup=cgroup)#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__#012 return callMethod()#012File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>#012 **kwargs)#012File: "<string>", line 2, in mount#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod#012 kind, result = conn.recv()
This matches the theory of blocked supervdsmd.
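The last frame in the traceback ("managers.py ... kind, result = conn.recv()") is the vdsm side of a multiprocessing manager proxy call into supervdsmd, and that recv() has no client-side timeout, so a stalled supervdsmd pins the executor worker until the 60-second watchdog flags it as blocked. A small illustration of that blocking behaviour (demo names only, not vdsm's supervdsm proxy):

import time
from multiprocessing.managers import BaseManager


def slow_mount():
    # Stand-in for a supervdsm call that is stuck, e.g. waiting on a hung
    # glusterfs mount. The caller has no way to interrupt or time it out.
    time.sleep(5)
    return 0


class DemoManager(BaseManager):
    pass


DemoManager.register("mount", callable=slow_mount)

if __name__ == "__main__":
    manager = DemoManager(authkey=b"demo")
    manager.start()

    start = time.time()
    manager.mount()  # blocks in conn.recv() until the server side returns
    print("proxy call blocked for %.1f seconds (no client-side timeout)"
          % (time.time() - start))

    manager.shutdown()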
With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
participants (5)
- Jorick Astrego
- Marcin Sobczyk
- Nir Soffer
- Sahina Bose
- Sandro Bonazzola