I see entries like the following in the journal of every node:
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 [4105511]: s9 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 [4105511]: s9 renewal error -202 delta_length 10 last_success 1191216
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 [2750073]: s11 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/ovirt-nfsha.ovirt:_dati_drbd0/2527ed0f-e91a-4748-995c-e644362e8408/dom_md/ids
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 [2750073]: s11 renewal error -202 delta_length 10 last_success 1191217
As you can see, it is complaining about a Gluster volume (hosting VMs and mapped across the three nodes, each with the terrible SATA SSD: Samsung_SSD_870_EVO_4TB).
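To check whether raw storage latency is what trips sanlock (the "delta_renew read timeout 10 sec" / "renewal error -202" above means the renewal read did not complete in time), a quick Python sketch like the one below can time O_DIRECT reads against the dom_md/ids path taken from the log. The 4 KiB block size and the O_DIRECT read pattern are only my assumptions for a rough latency probe, not necessarily how sanlock itself performs the renewal I/O:

import mmap
import os
import time

# Path copied from the sanlock log entry above
IDS_PATH = "/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids"
BLOCK = 4096  # assumed probe size, aligned for O_DIRECT

def timed_direct_read(path, block=BLOCK):
    """Read one block with O_DIRECT (bypassing the page cache) and return elapsed seconds."""
    buf = mmap.mmap(-1, block)  # anonymous mapping is page-aligned, as O_DIRECT requires
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        start = time.monotonic()
        os.readv(fd, [buf])
        return time.monotonic() - start
    finally:
        os.close(fd)
        buf.close()

if __name__ == "__main__":
    # Sample the read latency a few times, one second apart
    for _ in range(10):
        print(f"{timed_direct_read(IDS_PATH):.3f} s")
        time.sleep(1)

If these reads regularly take several seconds, or occasionally stall close to the 10-second mark, that would line up with the renewal timeouts the journal is reporting.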