remove gluster storage domain and resize gluster storage domain

I have a 3 node cluster with replica 3 gluster volume. But for some reason the volume is not using the full size available. I thought maybe it was because I had created a second gluster volume on the same partition, so I tried to remove it. I was able to put it in maintenance mode and detach it, but in no window was I able to get the "remove" option to be enabled. Now if I select "attach data" I see ovirt thinks the volume is still there, although it is not.

2 questions.

1. how do I clear out the old removed volume from ovirt?
2. how do I get gluster to use the full disk space available? It's a 1T partition but it only created a 225G gluster volume. Why? How do I get the space back?

All three nodes look the same:

/dev/mapper/rootvg01-lv02  1.1T  135G  929G  13%  /ovirt-store
ovirt1-gl.j2noc.com:/gv1   225G  135G   91G  60%  /rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1

[root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
Status of volume: gv1
Gluster process                                    TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5218
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5678
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       61386
NFS Server on localhost                            2049      0          Y       31312
Self-heal Daemon on localhost                      N/A       N/A        Y       31320
NFS Server on ovirt3-gl.j2noc.com                  2049      0          Y       38109
Self-heal Daemon on ovirt3-gl.j2noc.com            N/A       N/A        Y       38119
NFS Server on ovirt2-gl.j2noc.com                  2049      0          Y       5387
Self-heal Daemon on ovirt2-gl.j2noc.com            N/A       N/A        Y       5402

Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks

Thanks.

On Thu, Dec 1, 2016 at 10:54 AM, Bill James <bill.james@j2.com> wrote:
I have a 3 node cluster with replica 3 gluster volume. But for some reason the volume is not using the full size available. I thought maybe it was because I had created a second gluster volume on same partition, so I tried to remove it.
I was able to put it in maintenance mode and detach it, but in no window was I able to get the "remove" option to be enabled. Now if I select "attach data" I see ovirt thinks the volume is still there, although it is not.
2 questions.
1. how do I clear out the old removed volume from ovirt?
To remove the storage domain, you need to detach the domain from the Data Center sub tab of Storage Domain. Once detached, the remove and format domain option should be available to you. Once you detach - what is the status of the storage domain? Does it show as Detached?
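(If the remove option still won't enable in the UI, the same operation can be driven through the engine's REST API. This is only a hedged sketch, not verified against 3.6: the query-parameter form shown is the v4 style, and the 3.6-era v3 API took an XML body instead. The credentials, engine hostname, and domain UUID are placeholders:)

curl -k -u admin@internal:PASSWORD -X DELETE \
  "https://engine.example.com/ovirt-engine/api/storagedomains/<sd-uuid>?host=ovirt1&format=true"

(The host parameter names a host the engine uses to perform the operation; format=true wipes the domain's contents so the storage can be reused.)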
2. how do I get gluster to use the full disk space available?
It's a 1T partition but it only created a 225G gluster volume. Why? How do I get the space back?
What's the output of "lsblk"? Is it consistent across all 3 nodes?
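(A quick way to collect that from all three nodes in one shot, assuming ssh access between them; the hostnames are copied from the thread:)

for h in ovirt1-gl.j2noc.com ovirt2-gl.j2noc.com ovirt3-gl.j2noc.com; do
    echo "== $h =="
    ssh "$h" lsblk
done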

thank you for the reply.

[root@ovirt1 prod ~]# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0   1.1T  0 disk
├─sda1              8:1    0   500M  0 part /boot
├─sda2              8:2    0     4G  0 part [SWAP]
└─sda3              8:3    0   1.1T  0 part
  ├─rootvg01-lv01 253:0    0    50G  0 lvm  /
  └─rootvg01-lv02 253:1    0     1T  0 lvm  /ovirt-store

ovirt2 same.
ovirt3:

[root@ovirt3 prod ~]# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0 279.4G  0 disk
├─sda1              8:1    0   500M  0 part /boot
├─sda2              8:2    0     4G  0 part [SWAP]
└─sda3              8:3    0 274.9G  0 part
  ├─rootvg01-lv01 253:0    0    50G  0 lvm  /
 *└─rootvg01-lv02 253:1    0 224.9G  0 lvm  /ovirt-store*

Ah ha! I missed that. Thank you!!
I can fix that.

Once I detached the storage domain it is no longer listed.
Is there some option to make it show detached volumes?

ovirt-engine-3.6.4.1-1.el7.centos.noarch
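(An aside on the detached-domain question above: the engine's REST API lists every storage domain it knows about, attached or not, which is one way to confirm whether the removed domain is really gone. A sketch with placeholder credentials and engine hostname:)

curl -k -u admin@internal:PASSWORD https://engine.example.com/ovirt-engine/api/storagedomains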

----- Original Message -----
From: "Bill James" <bill.james@j2.com> To: "Sahina Bose" <sabose@redhat.com> Cc: users@ovirt.org Sent: Thursday, December 1, 2016 8:15:03 PM Subject: Re: [ovirt-users] remove gluster storage domain and resize gluster storage domain
thank you for the reply.
[root@ovirt1 prod ~]# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0   1.1T  0 disk
├─sda1              8:1    0   500M  0 part /boot
├─sda2              8:2    0     4G  0 part [SWAP]
└─sda3              8:3    0   1.1T  0 part
  ├─rootvg01-lv01 253:0    0    50G  0 lvm  /
  └─rootvg01-lv02 253:1    0     1T  0 lvm  /ovirt-store
ovirt2 same. ovirt3:
[root@ovirt3 prod ~]# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0 279.4G  0 disk
├─sda1              8:1    0   500M  0 part /boot
├─sda2              8:2    0     4G  0 part [SWAP]
└─sda3              8:3    0 274.9G  0 part
  ├─rootvg01-lv01 253:0    0    50G  0 lvm  /
  └─rootvg01-lv02 253:1    0 224.9G  0 lvm  /ovirt-store
See the difference between ovirt3 and the other two nodes: LV 'rootvg01-lv02' on ovirt3 has only 224.9G of capacity. In a replicated Gluster volume, the capacity of the volume is limited by the smallest replica brick. If you want a 1TB gluster volume, please make sure that every brick in the replicated volume has at least 1TB of capacity.

Regards,
Ramesh
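(A minimal sketch of the fix on ovirt3, assuming a larger disk has first been added to the rootvg01 volume group; /dev/sdb is a hypothetical device name, and ovirt3's existing sda is only 279.4G, so the LV cannot reach 1T without new storage. lvextend -r resizes the filesystem along with the LV:)

[root@ovirt3 ~]# pvcreate /dev/sdb                      # initialize the new (hypothetical) disk for LVM
[root@ovirt3 ~]# vgextend rootvg01 /dev/sdb             # add it to the existing volume group
[root@ovirt3 ~]# lvextend -r -L 1T /dev/rootvg01/lv02   # grow lv02 to 1T and resize its filesystem

(Since the brick is a directory on /ovirt-store, the gluster volume should pick up the extra capacity once the smallest brick's filesystem grows; df -h on the gluster mount should then report the full size.)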
participants (3)
- Bill James
- Ramesh Nachimuthu
- Sahina Bose