<div dir="ltr"><div><div>No, I have one pool made of the one disk and ssd as a cache and log device.<br></div>I have 3 Glusterfs bricks- separate 3 hosts:Volume type Replicate (Arbiter)= replica 2+1!<br></div>That how much you can push into compute nodes(they have only 3 disk slots).<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 3, 2017 at 3:19 PM, Juan Pablo <span dir="ltr">&lt;<a href="mailto:pablo.localhost@gmail.com" target="_blank">pablo.localhost@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>ok, you have 3 pools, zclei22, logs and cache, thats wrong. you should have 1 pool, with zlog+cache if you are looking for performance.<br></div>also, dont mix drives. <br></div>whats the performance issue you are facing? <br><div><div><br><br></div><div>regards,</div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">2017-03-03 11:00 GMT-03:00 Arman Khalatyan <span dir="ltr">&lt;<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">This is CentOS 7.3 ZoL version 0.6.5.9-1<br><br><blockquote><p>[root@clei22 ~]# lsscsi </p><p>[2:0:0:0]    disk    ATA      INTEL SSDSC2CW24 400i  /dev/sda </p><p>[3:0:0:0]    disk    ATA      HGST HUS724040AL AA70  /dev/sdb </p><p>[4:0:0:0]    disk    ATA      WDC WD2002FYPS-0 1G01  /dev/sdc </p><p><br></p><p>[root@clei22 ~]# pvs ;vgs;lvs</p><p>  PV                            <wbr>                     VG            Fmt  Attr PSize   PFree </p><p>  /dev/mapper/INTEL_SSDSC2CW240A<wbr>3_CVCV306302RP240CGN vg_cache      lvm2 a--  223.57g     0 </p><p>  /dev/sdc2                     <wbr>                     centos_clei22 lvm2 a--    1.82t 64.00m</p><p>  VG            #PV #LV #SN Attr   VSize   VFree </p><p>  centos_clei22   1   3   0 wz--n-   1.82t 64.00m</p><p>  vg_cache        1   2   0 wz--n- 223.57g     0 </p><p>  LV       VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert</p><p>  home     centos_clei22 -wi-ao----   1.74t                         <wbr>                           </p><p>  root     centos_clei22 -wi-ao----  50.00g                        <wbr>                            </p><p>  swap     centos_clei22 -wi-ao----  31.44g                        <wbr>                            </p><p>  lv_cache vg_cache      -wi-ao---- 213.57g                       <wbr>                             </p><p>  lv_slog  vg_cache      -wi-ao----  10.00g    </p><p><br></p><p>[root@clei22 ~]# zpool status -v</p><span><p>  pool: zclei22</p><p> state: ONLINE</p><p>  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017</p><p>config:</p><p><br></p><p>    NAME                          <wbr>          STATE     READ WRITE CKSUM</p><p>    zclei22                       <wbr>          ONLINE       0     0     0</p><p>      HGST_HUS724040ALA640_PN2334PBJ<wbr>4SV6T1  ONLINE       0     0     0</p><p>    logs</p><p>      lv_slog                       <wbr>        ONLINE       0     0     0</p><p>    cache</p><p>      lv_cache                      <wbr>        ONLINE       0     0     0</p><p><br></p><p>errors: No known data errors</p></span></blockquote><p><font size="3"><b><br>ZFS config:</b></font></p><blockquote class="m_-772355890576934878m_8401411119881083294gmail-tr_bq"><p>[root@clei22 ~]# 
2017-03-03 11:00 GMT-03:00 Arman Khalatyan <arm2arm@gmail.com>:

This is CentOS 7.3 with ZoL version 0.6.5.9-1.

[root@clei22 ~]# lsscsi
[2:0:0:0]    disk    ATA      INTEL SSDSC2CW24 400i  /dev/sda
[3:0:0:0]    disk    ATA      HGST HUS724040AL AA70  /dev/sdb
[4:0:0:0]    disk    ATA      WDC WD2002FYPS-0 1G01  /dev/sdc

[root@clei22 ~]# pvs; vgs; lvs
  PV                                                   VG            Fmt  Attr PSize   PFree
  /dev/mapper/INTEL_SSDSC2CW240A3_CVCV306302RP240CGN   vg_cache      lvm2 a--  223.57g     0
  /dev/sdc2                                            centos_clei22 lvm2 a--    1.82t 64.00m
  VG            #PV #LV #SN Attr   VSize   VFree
  centos_clei22   1   3   0 wz--n-   1.82t 64.00m
  vg_cache        1   2   0 wz--n- 223.57g     0
  LV       VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home     centos_clei22 -wi-ao----   1.74t
  root     centos_clei22 -wi-ao----  50.00g
  swap     centos_clei22 -wi-ao----  31.44g
  lv_cache vg_cache      -wi-ao---- 213.57g
  lv_slog  vg_cache      -wi-ao----  10.00g

[root@clei22 ~]# zpool status -v
  pool: zclei22
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
config:

    NAME                                    STATE     READ WRITE CKSUM
    zclei22                                 ONLINE       0     0     0
      HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE       0     0     0
    logs
      lv_slog                               ONLINE       0     0     0
    cache
      lv_cache                              ONLINE       0     0     0

errors: No known data errors

ZFS config:

[root@clei22 ~]# zfs get all zclei22/01
NAME        PROPERTY              VALUE                  SOURCE
zclei22/01  type                  filesystem             -
zclei22/01  creation              Tue Feb 28 14:06 2017  -
zclei22/01  used                  389G                   -
zclei22/01  available             3.13T                  -
zclei22/01  referenced            389G                   -
zclei22/01  compressratio         1.01x                  -
zclei22/01  mounted               yes                    -
zclei22/01  quota                 none                   default
zclei22/01  reservation           none                   default
zclei22/01  recordsize            128K                   local
zclei22/01  mountpoint            /zclei22/01            default
zclei22/01  sharenfs              off                    default
zclei22/01  checksum              on                     default
zclei22/01  compression           off                    local
zclei22/01  atime                 on                     default
zclei22/01  devices               on                     default
zclei22/01  exec                  on                     default
zclei22/01  setuid                on                     default
zclei22/01  readonly              off                    default
zclei22/01  zoned                 off                    default
zclei22/01  snapdir               hidden                 default
zclei22/01  aclinherit            restricted             default
zclei22/01  canmount              on                     default
zclei22/01  xattr                 sa                     local
zclei22/01  copies                1                      default
zclei22/01  version               5                      -
zclei22/01  utf8only              off                    -
zclei22/01  normalization         none                   -
zclei22/01  casesensitivity       sensitive              -
zclei22/01  vscan                 off                    default
zclei22/01  nbmand                off                    default
zclei22/01  sharesmb              off                    default
zclei22/01  refquota              none                   default
zclei22/01  refreservation        none                   default
zclei22/01  primarycache          metadata               local
zclei22/01  secondarycache        metadata               local
zclei22/01  usedbysnapshots       0                      -
zclei22/01  usedbydataset         389G                   -
zclei22/01  usedbychildren        0                      -
zclei22/01  usedbyrefreservation  0                      -
zclei22/01  logbias               latency                default
zclei22/01  dedup                 off                    default
zclei22/01  mlslabel              none                   default
zclei22/01  sync                  disabled               local
zclei22/01  refcompressratio      1.01x                  -
zclei22/01  written               389G                   -
zclei22/01  logicalused           396G                   -
zclei22/01  logicalreferenced     396G                   -
zclei22/01  filesystem_limit      none                   default
zclei22/01  snapshot_limit        none                   default
zclei22/01  filesystem_count      none                   default
zclei22/01  snapshot_count        none                   default
zclei22/01  snapdev               hidden                 default
zclei22/01  acltype               off                    default
zclei22/01  context               none                   default
zclei22/01  fscontext             none                   default
zclei22/01  defcontext            none                   default
zclei22/01  rootcontext           none                   default
zclei22/01  relatime              off                    default
zclei22/01  redundant_metadata    all                    default
zclei22/01  overlay               off                    default
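Two locally set properties in that dump are worth flagging for the caching discussion further down the thread: primarycache=metadata and secondarycache=metadata mean that neither the ARC nor the L2ARC on the SSD will ever hold file data for this dataset, so data reads always go to disk, and sync=disabled means synchronous writes on this dataset bypass the ZIL, so the SLOG is not exercised by it. A quick way to check, and what relaxing the cache policy would look like (a sketch, not advice validated on these hosts):

# Check the current caching and sync policy on the brick dataset:
zfs get primarycache,secondarycache,sync,compression zclei22/01

# Hypothetical change to let data blocks into ARC and L2ARC as well:
zfs set primarycache=all zclei22/01
zfs set secondarycache=all zclei22/01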
On Fri, Mar 3, 2017 at 2:52 PM, Juan Pablo <pablo.localhost@gmail.com> wrote:

Which operating system version are you using for your ZFS storage?
Do:
zfs get all your-pool-name
Use arc_summary.py from the FreeNAS git repo if you wish.
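A side note on tooling: on ZFS on Linux these scripts normally ship with the zfs userland packages themselves, so a FreeNAS checkout may not be needed (exact paths depend on the packaging; treat this as an assumption rather than something verified on these hosts):

# Both tools are part of the ZoL userland on most distributions:
arc_summary.py          # one-shot report of ARC/L2ARC sizes, hit ratios and tunables
arcstat.py 1            # rolling per-second ARC statistics, as used further down the thread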
2017-03-03 10:33 GMT-03:00 Arman Khalatyan <arm2arm@gmail.com>:

Pool load:
[root@clei21 ~]# zpool iostat -v 1
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    112    823  8.82M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0     46    626  4.40M
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0     66    198  4.45M
cache                                       -      -      -      -      -      -
  lv_cache                              9.81G   204G      0     46     56  4.13M
--------------------------------------  -----  -----  -----  -----  -----  -----

                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    191      0  12.8M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0      0      0      0
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0    191      0  12.8M
cache                                       -      -      -      -      -      -
  lv_cache                              9.83G   204G      0    218      0  20.0M
--------------------------------------  -----  -----  -----  -----  -----  -----

                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    191      0  12.7M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0      0      0      0
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0    191      0  12.7M
cache                                       -      -      -      -      -      -
  lv_cache                              9.83G   204G      0     72      0  7.68M
--------------------------------------  -----  -----  -----  -----  -----  -----

On Fri, Mar 3, 2017 at 2:32 PM, Arman Khalatyan <arm2arm@gmail.com> wrote:

GlusterFS is now in healing mode:

Receiver:
[root@clei21 ~]# arcstat.py 1
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
13:24:49     0     0      0     0    0     0    0     0    0   4.6G   31G
13:24:50   154    80     51    80   51     0    0    80   51   4.6G   31G
13:24:51   179    62     34    62   34     0    0    62   42   4.6G   31G
13:24:52   148    68     45    68   45     0    0    68   45   4.6G   31G
13:24:53   140    64     45    64   45     0    0    64   45   4.6G   31G
13:24:54   124    48     38    48   38     0    0    48   38   4.6G   31G
13:24:55   157    80     50    80   50     0    0    80   50   4.7G   31G
13:24:56   202    68     33    68   33     0    0    68   41   4.7G   31G
13:24:57   127    54     42    54   42     0    0    54   42   4.7G   31G
13:24:58   126    50     39    50   39     0    0    50   39   4.7G   31G
13:24:59   116    40     34    40   34     0    0    40   34   4.7G   31G

Sender:
[root@clei22 ~]# arcstat.py 1
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
13:28:37     8     2     25     2   25     0    0     2   25   468M   31G
13:28:38  1.2K   727     62   727   62     0    0   525   54   469M   31G
13:28:39   815   508     62   508   62     0    0   376   55   469M   31G
13:28:40   994   624     62   624   62     0    0   450   54   469M   31G
13:28:41   783   456     58   456   58     0    0   338   50   470M   31G
13:28:42   916   541     59   541   59     0    0   390   50   470M   31G
13:28:43   768   437     56   437   57     0    0   313   48   471M   31G
13:28:44   877   534     60   534   60     0    0   393   53   470M   31G
13:28:45   957   630     65   630   65     0    0   450   57   470M   31G
13:28:46   819   479     58   479   58     0    0   357   51   471M   31G
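To help read the arcstat.py output: read and miss are ARC lookups and misses per second, miss% is the overall miss rate, dm%, pm% and mm% are the miss rates for demand, prefetch and metadata accesses, and arcsz / c are the current and target ARC sizes. If only a few columns are of interest, the field list can be narrowed (an illustrative invocation, not one taken from the thread):

# Show only the overall hit/miss trend and ARC size, sampling every second:
arcstat.py -f time,read,miss,miss%,arcsz,c 1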
class="m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713HOEnZb"><div class="m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 2, 2017 at 7:18 PM, Juan Pablo <span dir="ltr">&lt;<a href="mailto:pablo.localhost@gmail.com" target="_blank">pablo.localhost@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">hey,<br>what are you using for zfs? get an arc status and show please<br><br></div><div class="m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781HOEnZb"><div class="m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781h5"><div class="gmail_extra"><br><div class="gmail_quote">2017-03-02 9:57 GMT-03:00 Arman Khalatyan <span dir="ltr">&lt;<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>no, <br></div>ZFS itself is not on top of lvm. only ssd was spitted by lvm for slog(10G) and cache (the rest)<br></div><div>but in any-case the ssd does not help much on glusterfs/ovirt  load it has almost 100% cache misses....:( (terrible performance compare with nfs)<br><br></div><div><br></div><br><br></div><div class="m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570HOEnZb"><div class="m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 2, 2017 at 1:47 PM, FERNANDO FREDIANI <span dir="ltr">&lt;<a href="mailto:fernando.frediani@upx.com" target="_blank">fernando.frediani@upx.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
Am I understanding correctly that you have Gluster on top of ZFS, which is on top of LVM? If so, why was LVM necessary? I have ZFS without any need for LVM.
Fernando

On 02/03/2017 06:19, Arman Khalatyan wrote:
      <div dir="ltr">
        <div>
          <div>Hi, <br>
          </div>
          I use 3 nodes with zfs and glusterfs.<br>
        </div>
        Are there any suggestions to optimize it?<br>
        <div><br>
          host zfs config 4TB-HDD+250GB-SSD:<br>
[root@clei22 ~]# zpool status
  pool: zclei22
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
config:

    NAME                                    STATE     READ WRITE CKSUM
    zclei22                                 ONLINE       0     0     0
      HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE       0     0     0
    logs
      lv_slog                               ONLINE       0     0     0
    cache
      lv_cache                              ONLINE       0     0     0

errors: No known data errors

Name:                     GluReplica
Volume ID:                ee686dfe-203a-4caa-a691-26353460cc48
Volume Type:              Replicate (Arbiter)
Replica Count:            2 + 1
Number of Bricks:         3
Transport Types:          TCP, RDMA
Maximum no. of snapshots: 256
Capacity:                 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
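The volume summary above appears to come from the oVirt UI rather than the Gluster CLI; the same information can be pulled on any of the brick hosts with standard Gluster commands (shown here only for completeness):

# Volume layout, brick list and options:
gluster volume info GluReplica
# Per-brick status, including disk usage on each brick:
gluster volume status GluReplica detail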
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users