[ovirt-users] Replicated Glusterfs on top of ZFS

Arman Khalatyan arm2arm at gmail.com
Fri Mar 3 13:33:47 UTC 2017


Pool load:
[root at clei21 ~]# zpool iostat -v 1
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    112    823  8.82M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0     46    626  4.40M
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0     66    198  4.45M
cache                                       -      -      -      -      -      -
  lv_cache                              9.81G   204G      0     46     56  4.13M
--------------------------------------  -----  -----  -----  -----  -----  -----

                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    191      0  12.8M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0      0      0      0
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0    191      0  12.8M
cache                                       -      -      -      -      -      -
  lv_cache                              9.83G   204G      0    218      0  20.0M
--------------------------------------  -----  -----  -----  -----  -----  -----

                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    191      0  12.7M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0      0      0      0
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0    191      0  12.7M
cache                                       -      -      -      -      -      -
  lv_cache                              9.83G   204G      0     72      0  7.68M
--------------------------------------  -----  -----  -----  -----  -----  -----
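
To see how much of the read load the L2ARC on lv_cache actually serves, the
raw kstat counters can be watched alongside zpool iostat; a minimal sketch,
assuming ZFS on Linux, which exposes them under /proc/spl/kstat/zfs:

# one-shot dump of the L2ARC hit/miss/size counters
awk '/^l2_(hits|misses|size) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# refresh once per second
watch -n1 "awk '/^l2_(hits|misses) / {print \$1, \$3}' /proc/spl/kstat/zfs/arcstats"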


On Fri, Mar 3, 2017 at 2:32 PM, Arman Khalatyan <arm2arm at gmail.com> wrote:

> GlusterFS is now in healing mode:
> Receiver:
> [root at clei21 ~]# arcstat.py 1
>     time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
> 13:24:49     0     0      0     0    0     0    0     0    0   4.6G   31G
> 13:24:50   154    80     51    80   51     0    0    80   51   4.6G   31G
> 13:24:51   179    62     34    62   34     0    0    62   42   4.6G   31G
> 13:24:52   148    68     45    68   45     0    0    68   45   4.6G   31G
> 13:24:53   140    64     45    64   45     0    0    64   45   4.6G   31G
> 13:24:54   124    48     38    48   38     0    0    48   38   4.6G   31G
> 13:24:55   157    80     50    80   50     0    0    80   50   4.7G   31G
> 13:24:56   202    68     33    68   33     0    0    68   41   4.7G   31G
> 13:24:57   127    54     42    54   42     0    0    54   42   4.7G   31G
> 13:24:58   126    50     39    50   39     0    0    50   39   4.7G   31G
> 13:24:59   116    40     34    40   34     0    0    40   34   4.7G   31G
>
>
> Sender:
> [root at clei22 ~]# arcstat.py 1
>     time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
> 13:28:37     8     2     25     2   25     0    0     2   25   468M   31G
> 13:28:38  1.2K   727     62   727   62     0    0   525   54   469M   31G
> 13:28:39   815   508     62   508   62     0    0   376   55   469M   31G
> 13:28:40   994   624     62   624   62     0    0   450   54   469M   31G
> 13:28:41   783   456     58   456   58     0    0   338   50   470M   31G
> 13:28:42   916   541     59   541   59     0    0   390   50   470M   31G
> 13:28:43   768   437     56   437   57     0    0   313   48   471M   31G
> 13:28:44   877   534     60   534   60     0    0   393   53   470M   31G
> 13:28:45   957   630     65   630   65     0    0   450   57   470M   31G
> 13:28:46   819   479     58   479   58     0    0   357   51   471M   31G
>
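> One thing that may help the miss rate: Gluster keeps its metadata in
> extended attributes, so the ZFS settings usually recommended for Gluster
> bricks are worth a try (a sketch; "zclei22/01" is a stand-in for the real
> brick dataset):
>
> # keep xattrs as system attributes in the dnode rather than hidden files
> zfs set xattr=sa zclei22/01
> # drop access-time updates on the brick
> zfs set atime=off zclei22/01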
>
> On Thu, Mar 2, 2017 at 7:18 PM, Juan Pablo <pablo.localhost at gmail.com>
> wrote:
>
>> hey,
>> what are you using for ZFS? Get an ARC status and share it, please.
>>
>>
>> 2017-03-02 9:57 GMT-03:00 Arman Khalatyan <arm2arm at gmail.com>:
>>
>>> no,
>>> ZFS itself is not on top of LVM; only the SSD was split by LVM into a
>>> slog (10G) and a cache (the rest).
>>> But in any case the SSD does not help much under the GlusterFS/oVirt
>>> load; it has almost 100% cache misses... :( (terrible performance
>>> compared with NFS)
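>>>
>>> For reference, the split was done roughly like this (illustrative device
>>> names, not the exact ones):
>>>
>>> pvcreate /dev/sdb                        # the 250GB SSD
>>> vgcreate vg_ssd /dev/sdb
>>> lvcreate -L 10G -n lv_slog vg_ssd        # 10G slice for the separate ZIL
>>> lvcreate -l 100%FREE -n lv_cache vg_ssd  # the rest as L2ARC
>>> zpool add zclei22 log /dev/vg_ssd/lv_slog cache /dev/vg_ssd/lv_cache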
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Mar 2, 2017 at 1:47 PM, FERNANDO FREDIANI <
>>> fernando.frediani at upx.com> wrote:
>>>
>>>> Am I understanding correctly that you have Gluster on top of ZFS, which
>>>> is on top of LVM? If so, why was the usage of LVM necessary? I have ZFS
>>>> without any need of LVM.
>>>>
>>>> Fernando
>>>>
>>>> On 02/03/2017 06:19, Arman Khalatyan wrote:
>>>>
>>>> Hi,
>>>> I use 3 nodes with ZFS and GlusterFS.
>>>> Are there any suggestions to optimize it?
>>>>
>>>> Host ZFS config (4TB HDD + 250GB SSD):
>>>> [root at clei22 ~]# zpool status
>>>>   pool: zclei22
>>>>  state: ONLINE
>>>>   scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
>>>> config:
>>>>
>>>>     NAME                                    STATE     READ WRITE CKSUM
>>>>     zclei22                                 ONLINE       0     0     0
>>>>       HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE       0     0     0
>>>>     logs
>>>>       lv_slog                               ONLINE       0     0     0
>>>>     cache
>>>>       lv_cache                              ONLINE       0     0     0
>>>>
>>>> errors: No known data errors
>>>>
>>>> Name: GluReplica
>>>> Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
>>>> Volume Type: Replicate (Arbiter)
>>>> Replica Count: 2 + 1
>>>> Number of Bricks: 3
>>>> Transport Types: TCP, RDMA
>>>> Maximum no of snapshots: 256
>>>> Capacity: 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
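>>>>
>>>> A volume with this layout would be created along these lines (host names
>>>> and brick paths are placeholders):
>>>>
>>>> gluster volume create GluReplica replica 3 arbiter 1 transport tcp,rdma \
>>>>     clei21:/zclei21/01/brick \
>>>>     clei22:/zclei22/01/brick \
>>>>     clei26:/zclei26/01/brick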
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>>
>>
>