<div dir="ltr"><div><div><div>Which operating system version are you using for your zfs storage? <br></div>do:<br></div>zfs get all your-pool-name<br></div>use arc_summary.py from freenas git repo if you wish.<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">2017-03-03 10:33 GMT-03:00 Arman Khalatyan <span dir="ltr"><<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Pool load:<br>[root@clei21 ~]# zpool iostat -v 1 <br> <wbr> capacity operations bandwidth<br>pool <wbr> alloc free read write read write<br>------------------------------<wbr>-------- ----- ----- ----- ----- ----- -----<br>zclei21 <wbr> 10.1G 3.62T 0 112 823 8.82M<br> HGST_HUS724040ALA640_<wbr>PN2334PBJ52XWT1 10.1G 3.62T 0 46 626 4.40M<br>logs <wbr> - - - - - -<br> lv_slog <wbr> 225M 9.72G 0 66 198 4.45M<br>cache <wbr> - - - - - -<br> lv_cache <wbr> 9.81G 204G 0 46 56 4.13M<br>------------------------------<wbr>-------- ----- ----- ----- ----- ----- -----<br><br> <wbr> capacity operations bandwidth<br>pool <wbr> alloc free read write read write<br>------------------------------<wbr>-------- ----- ----- ----- ----- ----- -----<br>zclei21 <wbr> 10.1G 3.62T 0 191 0 12.8M<br> HGST_HUS724040ALA640_<wbr>PN2334PBJ52XWT1 10.1G 3.62T 0 0 0 0<br>logs <wbr> - - - - - -<br> lv_slog <wbr> 225M 9.72G 0 191 0 12.8M<br>cache <wbr> - - - - - -<br> lv_cache <wbr> 9.83G 204G 0 218 0 20.0M<br>------------------------------<wbr>-------- ----- ----- ----- ----- ----- -----<br><br> <wbr> capacity operations bandwidth<br>pool <wbr> alloc free read write read write<br>------------------------------<wbr>-------- ----- ----- ----- ----- ----- -----<br>zclei21 <wbr> 10.1G 3.62T 0 191 0 12.7M<br> HGST_HUS724040ALA640_<wbr>PN2334PBJ52XWT1 10.1G 3.62T 0 0 0 0<br>logs <wbr> - - - - - -<br> lv_slog <wbr> 225M 9.72G 0 191 0 12.7M<br>cache <wbr> - - - - - -<br> lv_cache <wbr> 9.83G 204G 0 72 0 7.68M<br>------------------------------<wbr>-------- ----- ----- ----- ----- ----- -----<br><br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 3, 2017 at 2:32 PM, Arman Khalatyan <span dir="ltr"><<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Glusterfs now in healing mode:<br></div>Receiver:<br>[root@clei21 ~]# arcstat.py 1<br> time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c <br>13:24:49 0 0 0 0 0 0 0 0 0 4.6G 31G <br>13:24:50 154 80 51 80 51 0 0 80 51 4.6G 31G <br>13:24:51 179 62 34 62 34 0 0 62 42 4.6G 31G <br>13:24:52 148 68 45 68 45 0 0 68 45 4.6G 31G <br>13:24:53 140 64 45 64 45 0 0 64 45 4.6G 31G <br>13:24:54 124 48 38 48 38 0 0 48 38 4.6G 31G <br>13:24:55 157 80 50 80 50 0 0 80 50 4.7G 31G <br>13:24:56 202 68 33 68 33 0 0 68 41 4.7G 31G <br>13:24:57 127 54 42 54 42 0 0 54 42 4.7G 31G <br>13:24:58 126 50 39 50 39 0 0 50 39 4.7G 31G <br>13:24:59 116 40 34 40 34 0 0 40 34 4.7G 31G <br><br><div><br>Sender<br>[root@clei22 ~]# arcstat.py 1<br> time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c <br>13:28:37 8 2 25 2 25 0 0 2 25 468M 31G <br>13:28:38 1.2K 727 62 727 62 0 0 525 54 469M 31G <br>13:28:39 815 508 62 508 62 0 0 376 55 469M 31G <br>13:28:40 994 624 62 624 62 0 0 450 54 469M 31G <br>13:28:41 783 456 58 456 58 0 0 338 50 470M 31G <br>13:28:42 916 541 59 541 59 0 
On Thu, Mar 2, 2017 at 7:18 PM, Juan Pablo <pablo.localhost@gmail.com> wrote:

Hey,
what are you using for ZFS? Get an ARC status and show it, please.

2017-03-02 9:57 GMT-03:00 Arman Khalatyan <arm2arm@gmail.com>:

No,
ZFS itself is not on top of LVM. Only the SSD was split by LVM into slog (10G) and cache (the rest),
but in any case the SSD does not help much on the GlusterFS/oVirt load: it has almost 100% cache misses :( (terrible performance compared with NFS).
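For context, a minimal sketch of the SSD split described above, assuming a volume group named vg_ssd on the SSD (the VG name is an assumption; the LV names, the ~10G slog size, and the pool name are from this thread):

    lvcreate -L 10G -n lv_slog vg_ssd              # ~10G of the SSD for the separate intent log (SLOG)
    lvcreate -l 100%FREE -n lv_cache vg_ssd        # the rest of the SSD for L2ARC
    zpool add zclei22 log /dev/vg_ssd/lv_slog      # attach the SLOG device to the pool
    zpool add zclei22 cache /dev/vg_ssd/lv_cache   # attach the L2ARC (cache) device to the pool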
<div bgcolor="#FFFFFF" text="#000000">
<p>Am I understanding correctly, but you have Gluster on the top of
ZFS which is on the top of LVM ? If so, why the usage of LVM was
necessary ? I have ZFS with any need of LVM.</p><span class="m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901HOEnZb"><font color="#888888">
<p>Fernando<br>
</p></font></span><div><div class="m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901h5">
On 02/03/2017 06:19, Arman Khalatyan wrote:
<div dir="ltr">
<div>
<div>Hi, <br>
</div>
I use 3 nodes with zfs and glusterfs.<br>
</div>
Are there any suggestions to optimize it?<br>
<div><br>
host zfs config 4TB-HDD+250GB-SSD:

[root@clei22 ~]# zpool status
  pool: zclei22
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
config:

        NAME                                    STATE     READ WRITE CKSUM
        zclei22                                 ONLINE       0     0     0
          HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE       0     0     0
        logs
          lv_slog                               ONLINE       0     0     0
        cache
          lv_cache                              ONLINE       0     0     0

errors: No known data errors

Name:                     GluReplica
Volume ID:                ee686dfe-203a-4caa-a691-26353460cc48
Volume Type:              Replicate (Arbiter)
Replica Count:            2 + 1
Number of Bricks:         3
Transport Types:          TCP, RDMA
Maximum no of snapshots:  256
Capacity:                 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
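In connection with the optimization question above, a minimal sketch of ZFS properties that are often checked or set for Gluster bricks on ZFS on Linux; nothing in this thread confirms which values are actually in use on these hosts, so treat it as a starting point for zfs get/set rather than as this cluster's configuration:

    zfs set xattr=sa zclei22          # keep Gluster's extended attributes in the dnode instead of hidden dirs
    zfs set acltype=posixacl zclei22  # POSIX ACL support, generally wanted for Gluster bricks
    zfs set atime=off zclei22         # avoid a metadata write on every read
    zfs get xattr,acltype,atime,sync,recordsize,compression zclei22   # verify the current values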