I recently (yesterday) updated my platform to the latest available version (v4.3.3.7) and upgraded to Gluster v6.1. The setup is a hyperconverged 3-node cluster with ovirt1/gluster1 and ovirt2/gluster2 as the replica nodes (the glusterX names are used for Gluster traffic), while ovirt3 is the arbiter.
2019-05-16 10:15:21,296+0300 INFO (jsonrpc/2) [vdsm.api] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n" from=::ffff:192.168.1.2,43864, flow_id=4a54578a, task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in createStorageDomain
    storageType, domVersion, block_size, alignment)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in create
    block_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in _prepareMetadata
    cls.format_external_leases(sdUUID, xleases_path)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in format_external_leases
    xlease.format_index(lockspace, backend)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in format_index
    index.dump(file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in dump
    file.pwrite(INDEX_BASE, self._buf)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in pwrite
    self._run(args, data=buf[:])
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1093, in _run
    raise cmdutils.Error(args, rc, "[suppressed]", err)
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,296+0300 INFO (jsonrpc/2) [storage.TaskManager.Task] (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') aborting: Task is aborted: u'Command [\'/usr/bin/dd\', \'iflag=fullblock\', u\'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\', \'oflag=direct,seek_bytes\', \'seek=1048576\', \'bs=256512\', \'count=1\', \'conv=notrunc,nocreat,fsync\'] failed with rc=1 out=\'[suppressed]\' err="/usr/bin/dd: error writing \'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\': Invalid argument\\n1+0 records in\\n0+0 records out\\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\\n"' - code 100 (task:1181)
2019-05-16 10:15:21,297+0300 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n" (dispatcher:87)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
    return m(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in prepare
    raise self.error
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,297+0300 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 351) in 0.45 seconds (__init__:312)
2019-05-16 10:15:22,068+0300 INFO (jsonrpc/1) [vdsm.api] START disconnectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=gluster2:ovirt3', u'id': u'7442e9ab-dc54-4b9a-95d9-5d98a1e81b05', u'connection': u'gluster1:/data_fast2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.1.2,43864, flow_id=33ced9b2-cdd5-4147-a223-d0eb398a2daf, task_id=a9a8f90a-1603-40c6-a959-3cbff29d1d7b (api:48)
2019-05-16 10:15:22,068+0300 INFO (jsonrpc/1) [storage.Mount] unmounting /rhev/data-center/mnt/glusterSD/gluster1:_data__fast2 (mount:212)
I have tested by mounting the volume manually and retrying the same write (the second dd below, which reproduces the vdsm command, hangs until I interrupt it):
[root@ovirt1 logs]# mount -t glusterfs -o backupvolfile-server=gluster2:ovirt3 gluster1:/data_fast2 /mnt
[root@ovirt1 logs]# cd /mnt/
[root@ovirt1 mnt]# ll
total 0
[root@ovirt1 mnt]# dd if=/dev/zero of=file bs=4M status=progress count=250
939524096 bytes (940 MB) copied, 8.145447 s, 115 MB/s
250+0 records in
250+0 records out
1048576000 bytes (1.0 GB) copied, 9.08347 s, 115 MB/s
[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync status=progress
^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 46.5877 s, 0.0 kB/s
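My guess (unverified) is an O_DIRECT alignment issue: direct I/O requires the write offset and length to be multiples of the underlying logical block size, and the length vdsm uses (256512 bytes) is 512-byte aligned but not 4 KiB aligned. A quick sanity check of the numbers from the failing command:

```python
# Offset and length taken from the failing dd command in the vdsm log.
seek = 1048576   # oflag=seek_bytes, seek=1048576
bs = 256512      # bs=256512, count=1

# O_DIRECT requires offset and length to be multiples of the
# device's logical block size (commonly 512 B or 4096 B).
for block in (512, 4096):
    print("block=%d offset_ok=%s length_ok=%s"
          % (block, seek % block == 0, bs % block == 0))
# block=512 offset_ok=True length_ok=True
# block=4096 offset_ok=True length_ok=False
```

So if Gluster v6 started enforcing 4 KiB alignment for direct writes where v5 accepted 512-byte alignment, that would explain the "Invalid argument", but I may be off here.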
Can someone give a hint? Could this be related to Gluster v6?
Can someone test with an older version of Gluster?
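In case it helps anyone compare, these are the volume options I would look at first for direct-I/O behaviour (option names from the Gluster docs; I haven't confirmed which, if any, changed defaults in v6):

```shell
# Check the options that affect how the volume handles O_DIRECT writes.
gluster volume get data_fast2 performance.strict-o-direct
gluster volume get data_fast2 network.remote-dio
```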
Best Regards,
Strahil Nikolov