I found these in the logs:
[2020-01-09 12:30:01.690759] W [MSGID: 114031] [client-rpc-fops_v2.c:2697:client4_0_readv_cbk] 0-vms-client-2: remote operation failed [Invalid argument]
[2020-01-09 12:30:01.691284] W [MSGID: 114031] [client-rpc-fops_v2.c:2697:client4_0_readv_cbk] 0-vms-client-0: remote operation failed [Invalid argument]
[2020-01-09 12:30:01.691469] W [MSGID: 114031] [client-rpc-fops_v2.c:2697:client4_0_readv_cbk] 0-vms-client-1: remote operation failed [Invalid argument]
[2020-01-09 12:30:01.691500] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 2509: READ => -1 gfid=dbec5303-64c8-4e56-ae27-455f34fdfccc fd=0x7fcd38007148 (Invalid argument)
[2020-01-09 12:30:01.694257] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 2514: READ => -1 gfid=dbec5303-64c8-4e56-ae27-455f34fdfccc fd=0x7fcd38007148 (Invalid argument)
[2020-01-09 12:30:02.036328] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-vms-client-2: remote operation failed. Path: /.shard/dbec5303-64c8-4e56-ae27-455f34fdfccc.12 (00000000-0000-0000-0000-000000000000) [Permission denied]
[2020-01-09 12:30:02.036544] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-vms-client-0: remote operation failed. Path: /.shard/dbec5303-64c8-4e56-ae27-455f34fdfccc.12 (00000000-0000-0000-0000-000000000000) [Permission denied]
[2020-01-09 12:30:02.037320] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-vms-client-1: remote operation failed. Path: /.shard/dbec5303-64c8-4e56-ae27-455f34fdfccc.12 (00000000-0000-0000-0000-000000000000) [Permission denied]
[2020-01-09 12:30:02.037338] E [MSGID: 133010] [shard.c:2327:shard_common_lookup_shards_cbk] 0-vms-shard: Lookup on shard 12 failed. Base file gfid = dbec5303-64c8-4e56-ae27-455f34fdfccc [Permission denied]
[2020-01-09 12:30:02.037371] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 2543: READ => -1 gfid=dbec5303-64c8-4e56-ae27-455f34fdfccc fd=0x7fcd38014788 (Permission denied)
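The lookups fail with Permission denied on shard 12 of that gfid, so the shard can be checked directly on the bricks. If it helps, this is roughly what I'd run on each node (brick path as in the heal listing below; getfattr dumps the trusted.* xattrs so the replicas can be compared):

# Check ownership/permissions of the failing shard on the brick itself:
stat /gluster_bricks/vms/vms/.shard/dbec5303-64c8-4e56-ae27-455f34fdfccc.12
# Dump its extended attributes (afr/shard metadata) in hex for comparison:
getfattr -d -m . -e hex /gluster_bricks/vms/vms/.shard/dbec5303-64c8-4e56-ae27-455f34fdfccc.12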
Anyone? Help? :)
-Chris.
On 08/01/2020 17:10, Christian Reiss wrote:
Ugh,
After having rebooted the 3-way HCI cluster, everything came back online: all the Gluster volumes are up, no split-brain is detected, and they are mounted on all nodes.
The Volumes, Domains and Disks are all marked green in the Engine.
Launching a VM fails with
"VM test01 has been paused due to storage I/O problem."
An md5 sum of an uploaded ISO image is identical on all three mounted cluster members, and matches the source file on my PC. Creating a new VM and attaching that ISO fails: the ISO is not readable (according to the VM console).
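(Roughly like this; /mnt/vms and image.iso are placeholders for the real mount point and file name:)

# Same checksum expected from each node's FUSE mount:
for host in node01 node02 node03; do
    ssh "$host" md5sum /mnt/vms/image.iso
done
# ...and from the source copy on my PC:
md5sum image.iso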
The disk image from the test01 VM seems sound (the file has the correct size, and the file tool shows the correct magic header); other files (configs) are readable.
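(That check was along these lines; /path/to/disk stands in for the actual image under the storage domain:)

ls -l /path/to/disk            # size matches what the Engine reports
file /path/to/disk             # prints the magic, e.g. "QEMU QCOW2 Image (v3)"
qemu-img check /path/to/disk   # deeper consistency check, qcow2 images only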
I did a complete cluster reboot, etc.
gluster> volume heal vms info split-brain
Brick node01:/gluster_bricks/vms/vms
Status: Connected
Number of entries in split-brain: 0
Brick node02:/gluster_bricks/vms/vms
Status: Connected
Number of entries in split-brain: 0
Brick node03:/gluster_bricks/vms/vms
Status: Connected
Number of entries in split-brain: 0
gluster> volume heal vms info summary
Brick node01:/gluster_bricks/vms/vms
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick node02:/gluster_bricks/vms/vms
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick node03:/gluster_bricks/vms/vms
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
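If more data helps, I can also post the output of the standard volume checks, e.g.:

gluster volume status vms detail        # brick processes, ports, online state
gluster volume info vms                 # volume type and configured options
gluster volume get vms features.shard   # confirm sharding is on for this volume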
How would I fix this issue?
Anyone got a clue on how to proceed?
-Chris.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss