On February 3, 2020 12:25:05 PM GMT+02:00, Christian Reiss
<email@christian-reiss.de> wrote:
Hey,
here is one more thing: the issue we had some time ago might (just
might) be the culprit. We copied the one gluster file over to the other
nodes. The one correct node, which we took down yesterday, is node01;
it has more metadata on said file:
[root@node01:~] # getfattr -m . -d -e hex \
  /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
getfattr: Removing leading '/' from absolute path names
# file: gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.ssd_storage-client-1=0x000000000000000000000000
trusted.gfid=0xa121e4fb09844e4194d78f0c4f87f4b6
trusted.gfid2path.d4cf876a215b173f=0x62653331383633382d653861302d346336642d393737642d3761393337616138343830362f38366461303238392d663734662d343230302d393238342d3637386537626437363139352e31323030
trusted.glusterfs.mdata=0x010000000000000000000000005e35ed17000000003069a5de000000005e35ed17000000003069a5de000000005e34994900000000304a5eb2
The other nodes have significantly less metadata:
[root@node02:~] # getfattr -m . -d -e hex \
  /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
getfattr: Removing leading '/' from absolute path names
# file: gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0xa121e4fb09844e4194d78f0c4f87f4b6
trusted.gfid2path.d4cf876a215b173f=0x62653331383633382d653861302d346336642d393737642d3761393337616138343830362f38366461303238392d663734662d343230302d393238342d3637386537626437363139352e31323030
trusted.glusterfs.mdata=0x010000000000000000000000005e35ed17000000003069a5de000000005e35ed17000000003069a5de000000005e3595f8000000003572d5ba
[root@node03:~] # getfattr -m . -d -e hex \
  /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
getfattr: Removing leading '/' from absolute path names
# file: gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.glusterfs.mdata=0x010000000000000000000000005e35ed17000000003069a5de000000005e35ed17000000003069a5de000000005e34994900000000304a5eb2
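As a side note, the hex values above can be decoded directly on the shell; a
small sketch, using the xattr name and one timestamp word taken from the
node01 output:

getfattr --only-values -n trusted.gfid2path.d4cf876a215b173f -e text \
  /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
# -e text prints the value as plain text: <parent-dir-gfid>/<basename>

date -u -d @$((16#5e35ed17))
# the repeated 0x5e3... words in trusted.glusterfs.mdata are epoch-second
# timestamps, so this prints one of them as a human-readable date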
Maybe, just maybe, this file contains a lot of required data? The chunk
size is 64 MB, and the md5 matches across the board. I also monitored the
access and modify times for this file across all three nodes, and the
times, size, and md5 all match.
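A sketch of that comparison, run against the brick path on each of the three
nodes (same file path as above) so the outputs can be diffed:

F=/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
stat -c '%s %X %Y' "$F"   # size, atime, mtime as epoch seconds
md5sum "$F"               # digest should be identical on all three nodes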
How could I reset the xattr metadata so it matches on all three nodes?
-Chris.
You can use setfattr.
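A minimal sketch, assuming the goal is to give node02 and node03 the same
all-zero AFR xattr that node01 shows above; run it against the brick path,
not the FUSE mount (setfattr accepts hex values with a 0x prefix):

setfattr -n trusted.afr.ssd_storage-client-1 \
  -v 0x000000000000000000000000 \
  /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6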
Have you converted the gfid to a file path and checked the file contents (if ASCII)?
Usually I first convert the gfid to a file path (on the brick), and then I check the
timestamps and content of the file on all bricks before deciding what to do.
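For a regular file, the .glusterfs/<aa>/<bb>/<gfid> entry on the brick is a
hard link to the real file, so one way to do that conversion is to search the
brick for the other link while pruning the .glusterfs tree itself:

BRICK=/gluster_bricks/ssd_storage/ssd_storage
find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
  -samefile "$BRICK/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6" -print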
Best Regards,
Strahil Nikolov