<div dir="ltr"><div><div>No, <br></div>ZFS itself is not on top of LVM. Only the SSD was split by LVM into a SLOG (10G) and a cache (the rest).<br></div><div>But in any case the SSD does not help much under the GlusterFS/oVirt load; it sees almost 100% cache misses... :( (terrible performance compared with NFS)<br><br></div><div><br></div><br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 2, 2017 at 1:47 PM, FERNANDO FREDIANI <span dir="ltr"><<a href="mailto:fernando.frediani@upx.com" target="_blank">fernando.frediani@upx.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p>Am I understanding correctly that you have Gluster on top of
ZFS, which is on top of LVM? If so, why was LVM
necessary? I have ZFS without any need of LVM.</p><span class="HOEnZb"><font color="#888888">
<p>Fernando<br>
</p></font></span><div><div class="h5">
<br>
<div class="m_3542297157065926681moz-cite-prefix">On 02/03/2017 06:19, Arman Khalatyan
wrote:<br>
</div>
</div></div><blockquote type="cite"><div><div class="h5">
<div dir="ltr">
<div>
<div>Hi,<br>
</div>
I use 3 nodes with ZFS and GlusterFS.<br>
</div>
Are there any suggestions for optimizing it?<br>
<div><br>
Host ZFS config (4 TB HDD + 250 GB SSD):<br>
[root@clei22 ~]# zpool status<br>
  pool: zclei22<br>
 state: ONLINE<br>
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017<br>
config:<br>
<br>
NAME                                      STATE   READ WRITE CKSUM<br>
zclei22                                   ONLINE     0     0     0<br>
  HGST_HUS724040ALA640_PN2334PBJ4SV6T1    ONLINE     0     0     0<br>
logs<br>
  lv_slog                                 ONLINE     0     0     0<br>
cache<br>
  lv_cache                                ONLINE     0     0     0<br>
<br>
errors: No known data errors<br>
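Since the tuning question largely comes down to whether the SSD cache is actually being hit, one quick check is the L2ARC hit rate. A minimal sketch, assuming ZFS on Linux, which exposes its counters in /proc/spl/kstat/zfs/arcstats (the ARCSTATS variable is only an override hook, not part of ZFS):

```shell
# Sketch: compute the L2ARC hit rate from ZFS-on-Linux kernel counters.
# Assumes the standard arcstats location; ARCSTATS is overridable for testing.
ARCSTATS=${ARCSTATS:-/proc/spl/kstat/zfs/arcstats}
awk '/^l2_hits /{h=$3} /^l2_misses /{m=$3} END{t=h+m; printf "l2_hits=%d l2_misses=%d hit%%=%.1f\n", h, m, t?100*h/t:0}' "$ARCSTATS"
```

A hit rate near zero would match the "almost 100% cache misses" observation above.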
<br>
Name: GluReplica<br>
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48<br>
Volume Type: Replicate (Arbiter)<br>
Replica Count: 2 + 1<br>
Number of Bricks: 3<br>
Transport Types: TCP, RDMA<br>
Maximum no of snapshots: 256<br>
Capacity: 3.51 TiB total, 190.56 GiB used, 3.33 TiB free<br>
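One commonly suggested first step for a Gluster volume backing VM images (a sketch, not verified against this particular setup): Gluster ships a "virt" option group that bundles the settings recommended for virtualization workloads, and applying it is a single command. The volume name is taken from the output above:

```shell
# Sketch: apply Gluster's bundled "virt" option group (sharding, eager
# locking, etc.) to the volume named above. Requires a running gluster
# cluster; shown here for illustration only.
gluster volume set GluReplica group virt
```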
</div>
</div>
<br>
<fieldset class="m_3542297157065926681mimeAttachmentHeader"></fieldset>
<br>
</div></div><span class=""><pre>_______________________________________________
Users mailing list
<a class="m_3542297157065926681moz-txt-link-abbreviated" href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a class="m_3542297157065926681moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
</span></blockquote>
<br>
</div>
<br></blockquote></div><br></div>