On 07/07/2017 02:03 PM, Gianluca Cecchi wrote:
On Fri, Jul 7, 2017 at 10:15 AM, knarra <knarra(a)redhat.com> wrote:
>
>
> It seems I have to de-select the checkbox "Show available bricks
> from host" and so I can manually enter the directory of the bricks.
I see that bricks are mounted in /gluster/brick3, and that is the
reason nothing shows up in the "Brick Directory" drop-down field.
If the bricks were mounted under /gluster_bricks they would have
been detected automatically. There is an RFE raised to detect
bricks which are created manually.
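For example, a brick mounted like this would be picked up in the
drop-down (the device and brick names below are just placeholders):

    # create the mount point under the path the UI scans
    mkdir -p /gluster_bricks/brick1
    # mount the brick LV there (XFS is the usual choice for bricks)
    mount -t xfs /dev/gluster_vg/gluster_lv1 /gluster_bricks/brick1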
I deployed this HCI system with gdeploy at oVirt 4.0.5 time, so I think
I used the "default" path that was proposed inside the
ovirt-gluster.conf file used to feed gdeploy...
I think it was based on this from Jason:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-glus...
and this conf file
https://gist.githubusercontent.com/jasonbrooks/a5484769eea5a8cf2fa9d32329...
Good that there is an RFE. Thanks
>
> BTW: I see that after creating a volume optimized for oVirt in the
> web admin GUI of 4.1.2, I get slightly different options for it
> compared to a pre-existing volume created in 4.0.5 during initial
> setup with gdeploy.
>
> NOTE: during the 4.0.5 setup I had Gluster 3.7 installed, while now I
> have Gluster 3.10 (manually updated from the CentOS Storage SIG).
>
> Running "gluster volume info" and then diffing the output for the
> 2 volumes, I have:
>
> new volume == <
> old volume == >
>
> < cluster.shd-max-threads: 8
> ---
> > cluster.shd-max-threads: 6
> 13a13,14
> > features.shard-block-size: 512MB
> 16c17
> < network.remote-dio: enable
> ---
> > network.remote-dio: off
> 23a25
> > performance.readdir-ahead: on
> 25c27
> < server.allow-insecure: on
> ---
> > performance.strict-o-direct: on
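>
> (For reference, the diff above comes from something like this; the
> volume names are placeholders for my actual ones:)
>
>     gluster volume info newvol > new.txt
>     gluster volume info oldvol > old.txt
>     diff new.txt old.txt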
>
> Do I have to change anything for the newly created one?
No, you do not need to change anything for the new volume. But if
you plan to enable o-direct on the volume then you will have to
disable/turn off remote-dio.
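For example, assuming a volume named "data" (the name is just a
placeholder):

    gluster volume set data performance.strict-o-direct on
    gluster volume set data network.remote-dio off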
OK.
Again, in the ovirt-gluster.conf file I see there was this kind of
setting for the Gluster volumes when running gdeploy for them:
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,on,512MB,32,full,granular,10000,8,30,off,on,off,on
brick_dirs=/gluster/brick1/engine
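(For context, in a gdeploy conf file these key/value/brick_dirs lines
live inside a volume section; below is a sketch with a hypothetical
volume name, and the key/value lists shortened for readability:)

    [volume]
    action=create
    volname=engine
    replica=yes
    replica_count=3
    # key/value truncated here; the full lists are as quoted above
    key=group,features.shard,features.shard-block-size
    value=virt,on,512MB
    brick_dirs=/gluster/brick1/engine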
I'm going to cross-check now what the suggested values are for oVirt
4.1 and Gluster 3.10 combined...
The virt group now sets the shard block size to the default of 4MB,
which is the suggested value. With 4MB shards we see that healing is
much faster with granular entry heal enabled on the volume.
I am not sure why the conf file sets the shard size again. Maybe this
can be removed from the file.
Other than this, everything looks good to me.
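If you want to check or reset it on an existing volume, something
like this should work (the volume name is a placeholder):

    # show the value currently in effect
    gluster volume get data features.shard-block-size
    # revert to the default (4MB); note this only affects newly
    # created files, existing shards keep their size
    gluster volume reset data features.shard-block-size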
I was particularly worried by the difference in
features.shard-block-size, but after reading this
http://blog.gluster.org/2015/12/introducing-shard-translator/
I'm not sure whether 512MB is best for VM storage... I'm going
to dig into it more eventually.
Thanks,
Gianluca