Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS

Fix the disconnected nodes and run find against a node that has successfully mounted the volume. Best Regards, Strahil Nikolov

On Apr 24, 2019 15:31, Andreas Elvers <andreas.elvers+ovirtforum@solutions.work> wrote:
The file handle is stale, so find will display:

find: '/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': Transport endpoint is not connected

"stat /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore" will output:

stat: cannot stat '/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': Transport endpoint is not connected
All nodes are peering with the other nodes:

----
Saiph:~ andreas$ ssh node01 gluster peer status
Number of Peers: 2

Hostname: node02.infra.solutions.work
Uuid: 87fab40a-2395-41ce-857d-0b846e078cdb
State: Peer in Cluster (Connected)

Hostname: node03.infra.solutions.work
Uuid: 49025f81-e7c1-4760-be03-f36e0f403d26
State: Peer in Cluster (Connected)
----
Saiph:~ andreas$ ssh node02 gluster peer status
Number of Peers: 2

Hostname: node03.infra.solutions.work
Uuid: 49025f81-e7c1-4760-be03-f36e0f403d26
State: Peer in Cluster (Disconnected)

Hostname: node01.infra.solutions.work
Uuid: f25e6bff-e5e2-465f-a33e-9148bef94633
State: Peer in Cluster (Connected)
----
Saiph:~ andreas$ ssh node03 gluster peer status
Number of Peers: 2

Hostname: node02.infra.solutions.work
Uuid: 87fab40a-2395-41ce-857d-0b846e078cdb
State: Peer in Cluster (Connected)

Hostname: node01.infra.solutions.work
Uuid: f25e6bff-e5e2-465f-a33e-9148bef94633
State: Peer in Cluster (Connected)
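
Peer status only shows glusterd connectivity between the nodes, not whether the bricks themselves are serving; it can also be worth checking brick and heal state from each node. A sketch, again assuming the volume behind the mount is named vmstore:

# which bricks, self-heal daemons and ports are currently online
gluster volume status vmstore
# entries still pending heal after the disconnect
gluster volume heal vmstore info
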

After rebooting the node that was not able to mount the gluster volume, things eventually improved. The SPM went away and restarted for the data center, and suddenly node03 was able to mount the gluster volume. In between I was down to one of three active bricks, which results in a read-only GlusterFS. I was lucky to still have the Engine on NFS. But anyway... Thanks for your thoughts.
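
The read-only behaviour with only one of three bricks reachable is consistent with the usual quorum settings on a replica 3 volume, which refuse writes once the client loses its majority. To check what is actually configured (volume name again assumed to be vmstore):

# client-side quorum; "auto" requires a majority of the replica set for writes
gluster volume get vmstore cluster.quorum-type
# server-side quorum; "server" makes glusterd stop bricks when it loses peer quorum
gluster volume get vmstore cluster.server-quorum-type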