Hello Team,

We are running a 3-way replica hyperconverged (HC) Gluster setup, configured during the initial deployment from the Cockpit console using Ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   (Gluster bricks)
       * /gluster_bricks/engine/engine/ 
       * /gluster_bricks/data/data/ 
       * /gluster_bricks/vmstore/vmstore/ 

NODE2 and NODE3 with a similar setup.

There was a mishap: /dev/sdb on NODE2 crashed completely and everything on it is gone. After mounting it back, I recreated the same directory layout, i.e.,

       * /gluster_bricks/engine/engine/ 
       * /gluster_bricks/data/data/ 
       * /gluster_bricks/vmstore/vmstore/ 

but the bricks have not recovered yet.
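Roughly, this is what I did on NODE2 after the disk came back (a sketch; the mkfs options and single mount point are assumptions on my part, since the original bricks were laid out by the Cockpit/Ansible deployment, typically as XFS on thin-provisioned LVs):

```shell
# Recreated an empty filesystem on the replacement disk
# (XFS assumed; the original was created by the HC deployment playbooks)
mkfs.xfs -f /dev/sdb
mount /dev/sdb /gluster_bricks

# Recreated the brick directories with the original paths
mkdir -p /gluster_bricks/engine/engine
mkdir -p /gluster_bricks/data/data
mkdir -p /gluster_bricks/vmstore/vmstore
```

The bricks on NODE2 still show Offline (N) in `gluster volume status` after this, as seen below.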

=====================================================
[root@node2 ~]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/data/data  49152     0          Y       11111
Brick *.*.*.2:/gluster_bricks/data/data  N/A       N/A        N       N/A  
Brick *.*.*.3:/gluster_bricks/data/data  49152     0          Y       4303 
Self-heal Daemon on localhost               N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.1              N/A       N/A        Y       27838
Self-heal Daemon on *.*.*.3              N/A       N/A        Y       27424
 
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/engine/eng
ine                                         49153     0          Y       11117
Brick *.*.*.2:/gluster_bricks/engine/eng
ine                                         N/A       N/A        N       N/A  
Brick *.*.*.3:/gluster_bricks/engine/eng
ine                                         49153     0          Y       4314 
Self-heal Daemon on localhost               N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.3              N/A       N/A        Y       27424
Self-heal Daemon on *.*.*.1              N/A       N/A        Y       27838
 
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/vmstore/vm
store                                       49154     0          Y       21603
Brick *.*.*.2:/gluster_bricks/vmstore/vm
store                                       N/A       N/A        N       N/A  
Brick *.*.*.3:/gluster_bricks/vmstore/vm
store                                       49154     0          Y       26845
Self-heal Daemon on localhost               N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.3              N/A       N/A        Y       27424
Self-heal Daemon on *.*.*.1              N/A       N/A        Y       27838
 
Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
=============================================================


Can someone please suggest the steps to recover the setup?

I have tried the below workaround, but it didn't help.


--

ABHISHEK SAHNI

Mob : +91-990-701-5143