Recover accidentally deleted Gluster node having bricks (engine, data and vmstore)

Hello Team,

We are running a 3-way replica hyperconverged (HC) Gluster setup, configured during the initial deployment from the Cockpit console using Ansible.

NODE1
- /dev/sda (OS)
- /dev/sdb (Gluster bricks)
  * /gluster_bricks/engine/engine/
  * /gluster_bricks/data/data/
  * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 have a similar setup.

Due to a mishap, /dev/sdb on NODE2 crashed completely and there is now nothing on it. After mounting a replacement back in place, I created the same directories, i.e.

  * /gluster_bricks/engine/engine/
  * /gluster_bricks/data/data/
  * /gluster_bricks/vmstore/vmstore/

but the bricks have not recovered yet.

=====================================================
[root@node2 ~]# gluster volume status
Status of volume: data
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/data/data        49152     0          Y       11111
Brick *.*.*.2:/gluster_bricks/data/data        N/A       N/A        N       N/A
Brick *.*.*.3:/gluster_bricks/data/data        49152     0          Y       4303
Self-heal Daemon on localhost                  N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.1                    N/A       N/A        Y       27838
Self-heal Daemon on *.*.*.3                    N/A       N/A        Y       27424

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/engine/engine    49153     0          Y       11117
Brick *.*.*.2:/gluster_bricks/engine/engine    N/A       N/A        N       N/A
Brick *.*.*.3:/gluster_bricks/engine/engine    49153     0          Y       4314
Self-heal Daemon on localhost                  N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.3                    N/A       N/A        Y       27424
Self-heal Daemon on *.*.*.1                    N/A       N/A        Y       27838

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/vmstore/vmstore  49154     0          Y       21603
Brick *.*.*.2:/gluster_bricks/vmstore/vmstore  N/A       N/A        N       N/A
Brick *.*.*.3:/gluster_bricks/vmstore/vmstore  49154     0          Y       26845
Self-heal Daemon on localhost                  N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.3                    N/A       N/A        Y       27424
Self-heal Daemon on *.*.*.1                    N/A       N/A        Y       27838

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
=============================================================

Can someone please suggest the steps to recover the setup?

I have tried the workaround below, but it doesn't help:
https://lists.gluster.org/pipermail/gluster-users/2013-November/015079.html

--
ABHISHEK SAHNI
Mob : +91-990-701-5143
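For anyone following the same recovery, here is a rough sketch of how the replacement brick filesystem is typically rebuilt and mounted before Gluster is told to reuse it. The device path, VG/LV names, size and layout below are assumptions, not values taken from this setup:

# Assumed: the replacement disk is /dev/sdb and the original LVM layout is mirrored.
pvcreate /dev/sdb
vgcreate gluster_vg_sdb /dev/sdb
lvcreate -L 100G -n gluster_lv_engine gluster_vg_sdb
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_engine
mkdir -p /gluster_bricks/engine
mount /dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine
mkdir -p /gluster_bricks/engine/engine     # the brick directory Gluster will reuse
restorecon -Rv /gluster_bricks/engine      # restore SELinux labels if SELinux is enforcing
# Repeat for the data and vmstore bricks, and add the mounts to /etc/fstab.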

On Tue, Nov 20, 2018 at 5:56 PM Abhishek Sahni <abhishek.sahni1991@gmail.com> wrote:
Can someone please suggest the steps to recover the setup?
I have tried the workaround below, but it doesn't help:
https://lists.gluster.org/pipermail/gluster-users/2013-November/015079.html
You can reset the brick - if you're on oVirt 4.2.x, there's a UI option in the bricks subtab to do this.
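For anyone without that UI option, the Gluster-level operation behind it is a brick reset. A rough CLI sketch, shown for the engine volume only and assuming the replacement mount on NODE2 (*.*.*.2 in the output above) is already prepared:

# Tell Gluster to stop using the dead brick, then re-adopt the same (now empty) path.
gluster volume reset-brick engine *.*.*.2:/gluster_bricks/engine/engine start
gluster volume reset-brick engine *.*.*.2:/gluster_bricks/engine/engine \
    *.*.*.2:/gluster_bricks/engine/engine commit force
# Repeat for the data and vmstore volumes, then trigger and inspect the heal:
gluster volume heal engine full
gluster volume heal engine info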

Thanks, Sahina, for your response. I am not able to find it in the UI; could you please help me navigate? And yes, I am using oVirt 4.2.6.4-1.

On Tue, Nov 27, 2018 at 12:55 PM Sahina Bose <sabose@redhat.com> wrote:
You can reset the brick - if you're on oVirt 4.2.x, there's a UI option in the bricks subtab to do this.

Click on the volume for which you want to reset the brick -> under the Bricks tab, select the brick you want to reset -> once you do, you will see the 'Reset Brick' option become active. Attached is a screenshot: https://i.imgur.com/QUMSrzt.png

On Tue, Nov 27, 2018 at 2:43 PM Abhishek Sahni <abhishek.sahni1991@gmail.com> wrote:
Thanks, Sahina, for your response. I am not able to find it in the UI; could you please help me navigate? And yes, I am using oVirt 4.2.6.4-1.
Thanks, Kaustav

Hello Kaustav,

That's weird; I have never seen any volumes under the Storage tab since installation. I am using an HC setup deployed from the Cockpit console. https://imgur.com/a/nH9rzK8

Did I miss something?

On Tue, Nov 27, 2018 at 2:50 PM Kaustav Majumder <kmajumde@redhat.com> wrote:
Click on the volume for which you want to reset the brick -> under the Bricks tab, select the brick you want to reset -> once you do, you will see the 'Reset Brick' option become active. Attached is a screenshot: https://i.imgur.com/QUMSrzt.png

I am not sure why oVirt is not showing any volumes. Sahina, is this a bug?

On Tue, Nov 27, 2018 at 3:10 PM Abhishek Sahni <abhishek.sahni1991@gmail.com> wrote:
Hello Kaustav,
That's weird; I have never seen any volumes under the Storage tab since installation. I am using an HC setup deployed from the Cockpit console.
Did I miss something?

On Tue, Nov 27, 2018 at 3:45 PM Kaustav Majumder <kmajumde@redhat.com> wrote:
I am not sure why oVirt is not showing any volumes. Sahina, is this a bug?
Check if gluster service is enabled on the cluster. The volumes are managed only if this is true
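If it is unclear from the portal, the flag can also be read over the oVirt REST API. A small sketch; the engine FQDN and the password are placeholders:

# Lists each cluster's name together with whether the Gluster service is enabled.
curl -s -k -u admin@internal:PASSWORD \
     -H 'Accept: application/xml' \
     https://engine.example.com/ovirt-engine/api/clusters \
     | grep -E '<name>|<gluster_service>'
# To turn it on in the 4.2 UI: Compute -> Clusters -> Edit -> tick "Enable Gluster Service".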

I just enabled it on the Default cluster and now the volumes are visible. It seems the Gluster service was disabled on the cluster by default.

On Tue, Nov 27, 2018 at 3:51 PM Sahina Bose <sabose@redhat.com> wrote:
Check if gluster service is enabled on the cluster. The volumes are managed only if this is true
--
Thanks,
Abhishek Sahni
Computer Centre, IISER Bhopal
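With the volumes visible again, the remaining step in this recovery is the brick reset itself (through the now-active UI option or the CLI sketched earlier), followed by waiting for self-heal to copy everything back. A short monitoring sketch, assuming the same three volume names:

# Watch heal progress per volume until "Number of entries" drops to zero on every brick.
for vol in engine data vmstore; do
    echo "== $vol =="
    gluster volume heal "$vol" info | grep 'Number of entries'
done
# Entries the cluster cannot reconcile on its own would show up here:
gluster volume heal engine info split-brain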
participants (4)
- Abhishek Sahni
- Abhishek Sahni
- Kaustav Majumder
- Sahina Bose