<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Fri, Jul 7, 2017 at 10:15 AM, knarra <span dir="ltr"><<a href="mailto:knarra@redhat.com" target="_blank">knarra@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF"><span class="gmail-">
<div class="gmail-m_-832412323805543219moz-cite-prefix"><br></div><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">
<div><br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
<div class="gmail_extra">It seems I have to de-select the
checkbox "Show available bricks from host" and so I can
manually the the directory of the bricks</div>
</div>
</blockquote></span>
>
> I see that the bricks are mounted in /gluster/brick3, and that is the
> reason nothing shows up in the "Brick Directory" drop-down field. If
> the bricks were mounted under /gluster_bricks they would have been
> detected automatically. There is an RFE open to detect bricks that
> were created manually.

I deployed this HCI system with gdeploy back at oVirt 4.0.5 time, so I
think I used the "default" path that was proposed inside the
ovirt-gluster.conf file used to feed gdeploy.
I think it was based on this post from Jason:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
and on this conf file:
https://gist.githubusercontent.com/jasonbrooks/a5484769eea5a8cf2fa9d32329d5ebe5/raw/ovirt-gluster.conf

Good that there is an RFE. Thanks.
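In the meantime, for manually created bricks it sounds like mounting
them under /gluster_bricks is what lets the GUI pick them up. A minimal
sketch of what I mean (the device path and brick name here are
placeholders, not my actual setup):

    # Mount a manually created brick under /gluster_bricks so the web
    # admin GUI can auto-detect it (hypothetical device and brick name).
    mkdir -p /gluster_bricks/brick3
    mount /dev/mapper/gluster_vg-brick3 /gluster_bricks/brick3
    # Make the mount persistent across reboots:
    echo '/dev/mapper/gluster_vg-brick3 /gluster_bricks/brick3 xfs defaults 0 0' >> /etc/fstab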
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">BTW: I see that after creating a volume
optimized for oVirt in web admin gui of 4.1.2 I get slight
option for it in respect for a pre-existing volume created in
4.0.5 during initial setup with gdeploy.</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">NOTE: during 4.0.5 setup I had gluster
3.7 installed, while now I have gluster 3.10 (manually updated
from CentOS storage SIG)</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">Making a "gluster volume info" and then
a diff of the output for the 2 volumes I have:</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">new volume == <</div>
<div class="gmail_extra">old volume == ></div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">
<div class="gmail_extra">< cluster.shd-max-threads: 8</div>
<div class="gmail_extra">---</div>
<div class="gmail_extra">> cluster.shd-max-threads: 6</div>
<div class="gmail_extra">13a13,14</div>
<div class="gmail_extra">> features.shard-block-size: 512MB<br>
</div>
<div class="gmail_extra">16c17</div>
<div class="gmail_extra">< network.remote-dio: enable</div>
<div class="gmail_extra">---</div>
<div class="gmail_extra">> network.remote-dio: off</div>
<div class="gmail_extra">23a25</div>
<div class="gmail_extra">> performance.readdir-ahead: on</div>
<div class="gmail_extra">25c27</div>
<div class="gmail_extra">< server.allow-insecure: on</div>
<div class="gmail_extra">---</div>
<div class="gmail_extra">> performance.strict-o-direct: on</div>
<div><br>
</div>
</div>
<div class="gmail_extra">Do I have to change anything for the
newly created one?</div>
</div>
>
> No, you do not need to change anything for the new volume. But if you
> plan to enable o-direct on the volume, then you will have to
> disable/turn off remote-dio.
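In practice I suppose that toggle would look something like this (just
a sketch; the volume name "data" is a placeholder):

    # Turn off remote-dio before enabling strict o-direct on a volume.
    # "data" is a placeholder volume name.
    gluster volume set data network.remote-dio off
    gluster volume set data performance.strict-o-direct on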
OK.

Again, in the ovirt-gluster.conf file I see there was this kind of
setting for the Gluster volumes when running gdeploy for them:

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,on,512MB,32,full,granular,10000,8,30,off,on,off,on
brick_dirs=/gluster/brick1/engine
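If I read gdeploy correctly, those key/value pairs map one-to-one onto
volume options, so they should amount to roughly the following (my own
sketch; the volume name "engine" is only inferred from brick_dirs):

    # Volume options implied by the key=/value= pairs above; the volume
    # name "engine" is an assumption based on brick_dirs, not gdeploy output.
    gluster volume set engine group virt
    gluster volume set engine storage.owner-uid 36
    gluster volume set engine storage.owner-gid 36
    gluster volume set engine features.shard on
    gluster volume set engine features.shard-block-size 512MB
    gluster volume set engine performance.low-prio-threads 32
    gluster volume set engine cluster.data-self-heal-algorithm full
    gluster volume set engine cluster.locking-scheme granular
    gluster volume set engine cluster.shd-wait-qlength 10000
    gluster volume set engine cluster.shd-max-threads 8
    gluster volume set engine network.ping-timeout 30
    gluster volume set engine user.cifs off
    gluster volume set engine performance.strict-o-direct on
    gluster volume set engine network.remote-dio off
    gluster volume set engine cluster.granular-entry-heal on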
I'm going to crosscheck now what the suggested values are for oVirt 4.1
and Gluster 3.10 combined...

I was in particular worried by the difference in
features.shard-block-size, but after reading this

http://blog.gluster.org/2015/12/introducing-shard-translator/

I'm not sure whether 512MB is the best choice for VM storage. I'm going
to dig more eventually.

Thanks,
Gianluca