How to create a new Gluster volume

Hello,
I'm trying to create a new volume. I'm on 4.1.2 and I'm following these instructions:
http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_Storage/

When I click the "Add Brick" button, I don't see anything in the "Brick Directory" dropdown field and I cannot manually input a directory name.

On the 3 nodes I already have a formatted and mounted filesystem:

[root@ovirt01 ~]# df -h /gluster/brick3/
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/gluster-export   50G   33M   50G   1% /gluster/brick3
[root@ovirt01 ~]#

The guide says:

7. Click the Add Bricks button to select bricks to add to the volume. Bricks must be created externally on the Gluster Storage nodes.

What does it mean by "created externally"? From the OS point of view the next step would be volume creation, but that is exactly what I would like to do from the GUI...

Thanks,
Gianluca
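As the replies below clarify, "created externally" means that the brick filesystem has to be prepared on each node outside of the GUI, much like the /gluster/brick3 mount shown above. For reference, a minimal sketch of that preparation, assuming a spare disk /dev/sdb and reusing the device and mount names from the df output (these exact commands are not from this thread):

# Assumption: /dev/sdb is an unused disk; names mirror the df output above
pvcreate /dev/sdb
vgcreate gluster /dev/sdb
lvcreate -l 100%FREE -n export gluster
mkfs.xfs -i size=512 /dev/mapper/gluster-export
mkdir -p /gluster/brick3
mount /dev/mapper/gluster-export /gluster/brick3
echo "/dev/mapper/gluster-export /gluster/brick3 xfs defaults 0 0" >> /etc/fstab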

On Thu, Jul 6, 2017 at 11:51 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
It seems I have to de-select the checkbox "Show available bricks from host", and then I can manually type the directory of the bricks.

BTW: I see that after creating a volume optimized for oVirt in the web admin GUI of 4.1.2, I get slightly different options for it compared to a pre-existing volume created in 4.0.5 during the initial setup with gdeploy.

NOTE: during the 4.0.5 setup I had Gluster 3.7 installed, while now I have Gluster 3.10 (manually updated from the CentOS Storage SIG).

Running "gluster volume info" and then diffing the output for the 2 volumes I have:

new volume == <
old volume == >

< cluster.shd-max-threads: 8
---
> cluster.shd-max-threads: 6
13a13,14
> features.shard-block-size: 512MB
16c17
< network.remote-dio: enable
---
> network.remote-dio: off
23a25
> performance.readdir-ahead: on
25c27
< server.allow-insecure: on
---
> performance.strict-o-direct: on
Do I have to change anything for the newly created one?

Thanks,
Gianluca
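For completeness, one way such a comparison can be produced, assuming the two volumes are named "newvol" and "oldvol" (hypothetical names, not from this thread):

# Dump both option lists and diff them; "<" lines belong to the new volume
gluster volume info newvol > /tmp/newvol.info
gluster volume info oldvol > /tmp/oldvol.info
diff /tmp/newvol.info /tmp/oldvol.info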

On 07/06/2017 04:38 PM, Gianluca Cecchi wrote:
It seems I have to de-select the checkbox "Show available bricks from host", and then I can manually type the directory of the bricks.

I see that bricks are mounted in /gluster/brick3 and that is the reason nothing shows up in the "Brick Directory" drop-down field. If the bricks were mounted under /gluster_bricks, they would have been detected automatically. There is an RFE which was raised to detect bricks that are created manually.
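In other words, auto-detection keys on the mount prefix. A sketch of remounting the same device under the expected path, reusing the device name from earlier in the thread and assuming the brick is not yet part of any volume:

# Hypothetical: move the mount so the GUI can detect the brick
umount /gluster/brick3
mkdir -p /gluster_bricks/brick3
mount /dev/mapper/gluster-export /gluster_bricks/brick3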
Do I have to change anything for the newly created one?

No, you do not need to change anything for the new volume. But if you plan to enable o-direct on the volume then you will have to disable/turn off remote-dio.
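Concretely, such a switch would look like the following sketch, with "newvol" as a placeholder volume name:

# Turn off remote-dio before enabling strict o-direct (volume name is hypothetical)
gluster volume set newvol network.remote-dio off
gluster volume set newvol performance.strict-o-direct on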

On Fri, Jul 7, 2017 at 10:15 AM, knarra <knarra@redhat.com> wrote:
It seems I have to de-select the checkbox "Show available bricks from host", and then I can manually type the directory of the bricks.

I see that bricks are mounted in /gluster/brick3 and that is the reason nothing shows up in the "Brick Directory" drop-down field. If the bricks were mounted under /gluster_bricks, they would have been detected automatically. There is an RFE which was raised to detect bricks that are created manually.
I deployed this HCI system with gdeploy back at oVirt 4.0.5 time, so I think I used the "default" path that was proposed inside the ovirt-gluster.conf file fed to gdeploy... I think it was based on this post from Jason:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
and this conf file:
https://gist.githubusercontent.com/jasonbrooks/a5484769eea5a8cf2fa9d32329d5ebe5/raw/ovirt-gluster.conf
Good that there is an RFE. Thanks
Do I have to change anything for the newly created one?
No, you do not need to change anything for the new volume. But if you plan to enable o-direct on the volume then you will have to disable/turn off remote-dio.
OK.
Again, in the ovirt-gluster.conf file I see there was this kind of setting for the Gluster volumes when running gdeploy for them:

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,on,512MB,32,full,granular,10000,8,30,off,on,off,on
brick_dirs=/gluster/brick1/engine

I'm going to crosscheck now what the suggested values are for oVirt 4.1 and Gluster 3.10 combined... I was in particular worried by the difference in features.shard-block-size, but after reading this

http://blog.gluster.org/2015/12/introducing-shard-translator/

I'm not sure if 512MB is the best in the case of VM storage... I'm going to dig more eventually.

Thanks,
Gianluca
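The key= and value= lists in that gdeploy section pair up positionally; expressed as the equivalent volume-set commands, the first few pairs would look like the sketch below (the volume name "engine" is inferred from brick_dirs and shown only for illustration):

gluster volume set engine group virt
gluster volume set engine storage.owner-uid 36
gluster volume set engine storage.owner-gid 36
gluster volume set engine features.shard on
gluster volume set engine features.shard-block-size 512MB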

On 07/07/2017 02:03 PM, Gianluca Cecchi wrote:
I'm going to crosscheck now what the suggested values are for oVirt 4.1 and Gluster 3.10 combined...

Now the virt group sets the shard block size, and it is the default of 4MB, which is the suggested value. With 4MB shards we see that healing is much faster when granular entry heal is enabled on the volume.
I am not sure why the conf file sets the shard size again; maybe that can be removed from the file. Other than this, everything looks good to me.
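To verify what the virt group actually applies on a given installation, the group definition ships with glusterd; a sketch, assuming the stock packaging path:

# Group profiles live under /var/lib/glusterd/groups on standard packages
cat /var/lib/glusterd/groups/virt
# Re-applying the profile to a volume (volume name is a placeholder)
gluster volume set newvol group virt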
participants (2)
- Gianluca Cecchi
- knarra