The gluster volume heal engine command produced no output in the CLI.
gluster volume heal engine info gives:
# gluster volume heal engine info
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
Status: Connected
Number of entries: 1

Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
Status: Connected
Number of entries: 1
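
(If it helps with the comparison, the pending entry maps to the same image file under each brick, so a quick sanity check I can run on every node, assuming the brick layout shown above, is:

# stat /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7

and then compare the size and mtime that each brick reports for its copy.)
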
And gluster volume heal engine info summary gives:
# gluster volume heal engine info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0
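
Since the summary shows zero entries in split-brain on all three bricks, this does not look like split-brain to me. For completeness, I believe the per-file split-brain view can also be listed with the standard CLI, and it should come back empty here:

# gluster volume heal engine info split-brain
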
I also found the following warning message in the logs; it has been
repeating since the problem started:
[2021-03-10 10:08:11.646824] W [MSGID: 114061] [client-common.c:2644:client_pre_fsync_v2]
0-engine-client-0: (3fafabf3-d0cd-4b9a-8dd7-43145451f7cf) remote_fd is -1.
EBADFD [File descriptor in bad state]
From what I can see in the logs, the self-heal process still seems to be
trying to repair the volume:
[2021-03-10 10:47:34.820229] I [MSGID: 108026]
[afr-self-heal-common.c:1741:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on 3fafabf3-d0cd-4b9a-8dd7-43145451f7cf.
sources=1 [2] sinks=0
The message "I [MSGID: 108026]
[afr-self-heal-common.c:1741:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on 3fafabf3-d0cd-4b9a-8dd7-43145451f7cf.
sources=1 [2] sinks=0 " repeated 8 times between [2021-03-10
10:47:34.820229] and [2021-03-10 10:48:00.088805]
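
The gfid in these self-heal messages (3fafabf3-d0cd-4b9a-8dd7-43145451f7cf) matches the trusted.gfid of the pending image file in the getfattr output further down, so the daemon appears to be retrying the same file. If it is useful, I can also confirm that the self-heal daemon is online on all three nodes; as far as I know the standard status output lists one "Self-heal Daemon on <host>" line per node with its online state and PID:

# gluster volume status engine
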
On 3/10/21 10:21 AM, Strahil Nikolov via Users wrote:
It seems to me that ov-no1 didn't update the file properly.
What was the output of the gluster volume heal command?
Best Regards,
Strahil Nikolov
The output of the getfattr command on the nodes was the following:
Node1:
[root@ov-no1 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000003940000000000000000
trusted.afr.engine-client-2=0x000000000000000000000000
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x010000000000000000000000005fec6287000000002f584958000000005fec6287000000002f584958000000005d791c1a0000000000ba286e
trusted.glusterfs.shard.block-size=0x0000000004000000
trusted.glusterfs.shard.file-size=0x00000019000000000000000000000000000000000092040b0000000000000000
Node2:
[root@ov-no2 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x0000043a0000000000000000
trusted.afr.engine-client-2=0x000000000000000000000000
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x010000000000000000000000005fec6287000000002f584958000000005fec6287000000002f584958000000005d791c1a0000000000ba286e
trusted.glusterfs.shard.block-size=0x0000000004000000
trusted.glusterfs.shard.file-size=0x00000019000000000000000000000000000000000092040b0000000000000000
Node3:
[root@ov-no3 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000004440000000000000000
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x010000000000000000000000005fec6287000000002f584958000000005fec6287000000002f584958000000005d791c1a0000000000ba286e
trusted.glusterfs.shard.block-size=0x0000000004000000
trusted.glusterfs.shard.file-size=0x00000019000000000000000000000000000000000092040b0000000000000000
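
If I am reading these xattrs correctly, ov-no2 and ov-no3 both carry non-zero trusted.afr.engine-client-0 values (pending changes blamed on brick 0, i.e. ov-no1), while ov-no1 only has trusted.afr.dirty set and blames nobody. That matches the "sources=1 [2] sinks=0" line in the self-heal log: bricks 1 and 2 are the sources and brick 0 is the sink, so this looks like an ordinary pending heal of the copy on ov-no1 rather than split-brain. If the index heal keeps looping, my plan, assuming the standard heal commands and unless someone advises otherwise, is to re-trigger it and watch the counters:

# gluster volume heal engine
# gluster volume heal engine info summary

and only fall back to a full crawl if that never converges:

# gluster volume heal engine full
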