
Have you restarted glusterd.service on the affected node? glusterd is just the management layer, and restarting it won't affect the brick processes.

Best Regards,
Strahil Nikolov

On Tuesday, September 22, 2020 at 01:43:36 GMT+3, Jeremey Wise <jeremey.wise@gmail.com> wrote:

Start is not an option. The engine notes two bricks, but the command line shows three bricks, all present:

[root@odin thorst.penguinpages.local:_vmstore]# gluster volume status data
Status of volume: data
Gluster process                                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/data/data    49152     0          Y       33123
Brick odinst.penguinpages.local:/gluster_bricks/data/data    49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_bricks/data/data  49152     0          Y       2646
Self-heal Daemon on localhost                                N/A       N/A        Y       3004
Self-heal Daemon on thorst.penguinpages.local                N/A       N/A        Y       33230
Self-heal Daemon on medusast.penguinpages.local              N/A       N/A        Y       2475

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

[root@odin thorst.penguinpages.local:_vmstore]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)

[root@odin thorst.penguinpages.local:_vmstore]#

On Mon, Sep 21, 2020 at 4:32 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Just select the volume and press "start". It will automatically mark "force start" and the volume will fix itself.
Best Regards,
Strahil Nikolov
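For reference, the command-line equivalents of the two suggestions in this thread would look roughly like the sketch below. The volume name "data" is an assumption here; "gluster volume start <volname> force" is the CLI counterpart of the UI's force-start and only (re)starts brick processes that are down, while restarting glusterd on the affected node leaves the brick and self-heal processes running:

# force-start the volume; bricks that are already up are left alone
gluster volume start data force

# on the affected node only: restart the management daemon, then re-check
systemctl restart glusterd
gluster volume status data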
On Monday, September 21, 2020 at 20:53:15 GMT+3, Jeremey Wise <jeremey.wise@gmail.com> wrote:
The oVirt engine shows one of the gluster servers having an issue. I did a graceful shutdown of all three nodes over the weekend, as I had to move some power connections in preparation for a UPS.
Everything came back up... but...
The issue is reflected in only two bricks showing online (it should be three for each volume).
The command line shows gluster should be happy:
[root@thor engine]# gluster peer status
Number of Peers: 2

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)

[root@thor engine]#
# All bricks showing online
[root@thor engine]# gluster volume status
Status of volume: data
Gluster process                                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/data/data    49152     0          Y       11001
Brick odinst.penguinpages.local:/gluster_bricks/data/data    49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_bricks/data/data  49152     0          Y       2646
Self-heal Daemon on localhost                                N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.local                N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.local              N/A       N/A        Y       2475

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/engine/engine    49153  0         Y       11012
Brick odinst.penguinpages.local:/gluster_bricks/engine/engine    49153  0         Y       2982
Brick medusast.penguinpages.local:/gluster_bricks/engine/engine  49153  0         Y       2657
Self-heal Daemon on localhost                                N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.local                N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.local              N/A       N/A        Y       2475

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: iso
Gluster process                                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/iso/iso      49156     49157      Y       151426
Brick odinst.penguinpages.local:/gluster_bricks/iso/iso      49156     49157      Y       69225
Brick medusast.penguinpages.local:/gluster_bricks/iso/iso    49156     49157      Y       45018
Self-heal Daemon on localhost                                N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.local                N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.local              N/A       N/A        Y       2475

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore    49154  0       Y       11023
Brick odinst.penguinpages.local:/gluster_bricks/vmstore/vmstore    49154  0       Y       2993
Brick medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore  49154  0       Y       2668
Self-heal Daemon on localhost                                N/A       N/A        Y       50560
Self-heal Daemon on medusast.penguinpages.local              N/A       N/A        Y       2475
Self-heal Daemon on odinst.penguinpages.local                N/A       N/A        Y       3004

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
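Every brick above shows "Y" in the Online column. For contrast, a brick whose process had died would be listed with no ports and Online "N", along the lines of this hypothetical row:

Brick odinst.penguinpages.local:/gluster_bricks/data/data    N/A       N/A        N       N/A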
[root@thor engine]# gluster volume heal data engine iso vmstore
[root@thor engine]# gluster volume heal data info
Brick thorst.penguinpages.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0

Brick odinst.penguinpages.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0

Brick medusast.penguinpages.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
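Note that "gluster volume heal" takes a single volume name, so the first command above ("gluster volume heal data engine iso vmstore") most likely printed a usage error rather than triggering heals on all four volumes. One volume at a time, the sketch would be:

gluster volume heal data
gluster volume heal engine
gluster volume heal iso
gluster volume heal vmstore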
[root@thor engine]# gluster volume heal engine
Launching heal operation to perform index self heal on volume engine has been successful
Use heal info commands to check status.
[root@thor engine]# gluster volume heal engine info
Brick thorst.penguinpages.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick odinst.penguinpages.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick medusast.penguinpages.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
[root@thor engine]# gluster volume heal vmwatore info
Volume vmwatore does not exist
Volume heal failed.
[root@thor engine]#
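Here "vmwatore" is simply a typo for "vmstore"; the intended command was presumably:

gluster volume heal vmstore info

which, given the clean status output above, should report "Status: Connected" and "Number of entries: 0" for each of the three vmstore bricks, just as it did for data and engine.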
So I'm not sure what to do with the oVirt Engine to make it happy again.
--
penguinpages
--
jeremey.wise@gmail.com