Hey,
what are you using for ZFS? Can you get an ARC status and share it?
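[Editor's note: a minimal sketch of how to pull the requested ARC numbers. It assumes ZFS on Linux, where the counters live in `/proc/spl/kstat/zfs/arcstats` (the `arc_summary` or `arcstat` tools, if installed, report the same data); the `hit_ratio` helper is hypothetical, not from the thread.]

```shell
# hit_ratio HITS MISSES -> prints the ARC hit percentage with one decimal.
hit_ratio() {
    awk -v h="$1" -v m="$2" 'BEGIN { printf "%.1f\n", 100 * h / (h + m) }'
}

# On a live system (assumption: ZFS on Linux exposes arcstats here):
#   hits=$(awk '$1 == "hits"   {print $3}' /proc/spl/kstat/zfs/arcstats)
#   misses=$(awk '$1 == "misses" {print $3}' /proc/spl/kstat/zfs/arcstats)
#   hit_ratio "$hits" "$misses"

hit_ratio 900 100    # prints 90.0
```

A ratio near 0% on the L2ARC/ARC counters would confirm the "almost 100% cache misses" Arman reports below.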
2017-03-02 9:57 GMT-03:00 Arman Khalatyan <arm2arm@gmail.com>:
No,
ZFS itself is not on top of LVM; only the SSD was split with LVM into a SLOG (10G)
and a cache (the rest).
But in any case the SSD does not help much under the GlusterFS/oVirt load; it sees
almost 100% cache misses... :( (terrible performance compared with NFS)
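[Editor's note: a sketch of the LVM split Arman describes, attaching the two LVs to the pool shown below. The device name /dev/sdb and the VG name vg_ssd are hypothetical (the real names are not in the thread); the LV names lv_slog and lv_cache match the zpool status output below.]

```shell
# Carve the 250GB SSD into a 10G SLOG LV and a cache LV taking the rest.
pvcreate /dev/sdb                        # hypothetical SSD device name
vgcreate vg_ssd /dev/sdb
lvcreate -L 10G -n lv_slog vg_ssd        # separate ZFS intent log
lvcreate -l 100%FREE -n lv_cache vg_ssd  # L2ARC gets the remainder

# Attach both LVs to the existing pool.
zpool add zclei22 log /dev/vg_ssd/lv_slog
zpool add zclei22 cache /dev/vg_ssd/lv_cache
```

A plain partitioning of the SSD (two partitions instead of two LVs) would achieve the same layout without LVM, which is what Fernando's question below is getting at.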
On Thu, Mar 2, 2017 at 1:47 PM, FERNANDO FREDIANI <
fernando.frediani@upx.com> wrote:
> Am I understanding correctly that you have Gluster on top of ZFS,
> which is on top of LVM? If so, why was LVM necessary? I have
> ZFS without any need of LVM.
>
> Fernando
>
> On 02/03/2017 06:19, Arman Khalatyan wrote:
>
> Hi,
> I use 3 nodes with ZFS and GlusterFS.
> Are there any suggestions to optimize it?
>
> host ZFS config (4TB HDD + 250GB SSD):
> [root@clei22 ~]# zpool status
>   pool: zclei22
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
> config:
>
>         NAME                                      STATE     READ WRITE CKSUM
>         zclei22                                   ONLINE       0     0     0
>           HGST_HUS724040ALA640_PN2334PBJ4SV6T1    ONLINE       0     0     0
>         logs
>           lv_slog                                 ONLINE       0     0     0
>         cache
>           lv_cache                                ONLINE       0     0     0
>
> errors: No known data errors
>
> Name: GluReplica
> Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
> Volume Type: Replicate (Arbiter)
> Replica Count: 2 + 1
> Number of Bricks: 3
> Transport Types: TCP, RDMA
> Maximum no of snapshots: 256
> Capacity: 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
>
>
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users