On Wed, Jul 17, 2019 at 7:50 PM Doron Fediuck <dfediuck@redhat.com> wrote:
Adding relevant folks.
Sahina?

On Thu, 11 Jul 2019 at 00:24, William Kwan <potatok@yahoo.com> wrote:
Hi,

I need some direction to make sure we don't make more mistakes while recovering a 3-node self-hosted engine with Gluster.

Someone, very carelessly, wiped out the first few MB of the LVM PV on one node. We changed the quorum and brought Gluster and oVirt back up. We ran vgcfgrestore on the 'bad node', and pvscan, vgscan, and lvscan now show the LVs.
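For reference, a minimal sketch of that metadata restore sequence (the VG name gluster_vg is an assumption; substitute your actual volume group):

  # list the available LVM metadata backups for the VG
  vgcfgrestore --list gluster_vg
  # restore the most recent metadata backup
  vgcfgrestore gluster_vg
  # confirm the PV, VG and LVs are visible again
  pvscan
  vgscan
  lvscan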

If 2 of the 3 bricks were online, storage would have been available. Can you elaborate on why you needed to change quorum?
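For context, the quorum settings in effect can be inspected per volume. A minimal sketch, assuming a volume named "engine" (the volume name is an assumption):

  # client-side and server-side quorum settings
  gluster volume get engine cluster.quorum-type
  gluster volume get engine cluster.server-quorum-type
  # overall brick/volume health
  gluster volume status engine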



What should I do next to prevent further damage or corruption of the GlusterFS volumes on the other two nodes?

Will this work?
  mkfs the LVs
  bring up Gluster
  run gluster sync <goodNode>?

On the node where the bricks were wiped out, you need to:
1. Rebuild the bricks, i.e. recreate the LVs and mount them at the same brick paths as before (see the sketch after this list)
2. Restart glusterd (if not already running)
3. Reset the brick: in the engine UI, go to the Volume -> Bricks subtab, select the relevant brick, and click Reset Brick
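A rough shell sketch of those steps. The volume name "engine", the VG/LV names, the LV size, the hostname host1, and the brick path /gluster_bricks/engine/engine are all assumptions; adjust to your layout. Step 3 uses the reset-brick CLI as an alternative to the UI action:

  # 1. recreate the LV and mount it at the original brick path
  lvcreate -L 100G -n engine_lv gluster_vg            # size and names are assumptions
  mkfs.xfs -i size=512 /dev/gluster_vg/engine_lv      # 512-byte inodes, as recommended for Gluster bricks
  mount /dev/gluster_vg/engine_lv /gluster_bricks/engine

  # 2. make sure glusterd is running
  systemctl restart glusterd

  # 3. reset the brick onto the rebuilt filesystem
  gluster volume reset-brick engine host1:/gluster_bricks/engine/engine start
  gluster volume reset-brick engine host1:/gluster_bricks/engine/engine \
      host1:/gluster_bricks/engine/engine commit force

After the reset, self-heal should repopulate the brick from the two good copies; gluster volume heal engine info shows the progress.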



Thanks
Will
