<html><head><meta http-equiv="Content-Type" content="text/html charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">Why are you using an arbitrator if all your HW configs are identical? I’d use a true replica 3 in this case.<div class=""><br class=""></div><div class="">Also in my experience with gluster and vm hosting, the ZIL/slog degrades write performance unless it’s a truly dedicated disk. But I have 8 spinners backing my ZFS volumes, so trying to share a sata disk wasn’t a good zil. If yours is dedicated SAS, keep it, if it’s SATA, try testing without it.</div><div class=""><br class=""></div><div class="">You don’t have compression enabled on your zfs volume, and I’d recommend enabling relatime on it. Depending on the amount of RAM in these boxes, you probably want to limit your zfs arc size to 8G or so (1/4 total ram or less). Gluster just works volumes hard during a rebuild, what’s the problem you’re seeing? If it’s affecting your VMs, using shading and tuning client & server threads can help avoid interruptions to your VMs while repairs are running. If you really need to limit it, you can use cgroups to keep it from hogging all the CPU, but it takes longer to heal, of course. There are a couple older posts and blogs about it, if you go back a while.</div><div class=""><br class=""></div><div class=""><br class=""><div><blockquote type="cite" class=""><div class="">On Mar 3, 2017, at 9:02 AM, Arman Khalatyan <<a href="mailto:arm2arm@gmail.com" class="">arm2arm@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><div class=""><div class=""><div class=""><div class=""><div class=""><div class="">The problem itself is not the streaming data performance., and also dd zero does not help much in the production zfs running with compression.<br class=""></div>the main problem comes when the gluster is starting to do something with that, it is using xattrs, probably accessing extended attributes inside the zfs is slower than XFS.<br class=""></div>Also primitive find file or ls -l in the (dot)gluster folders takes ages: <br class=""><br class=""></div>now I can see that arbiter host has almost 100% cache miss during the rebuild, which is actually natural while he is reading always the new datasets:<br class="">[root@clei26 ~]# arcstat.py 1<br class=""> time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c <br class="">15:57:31 29 29 100 29 100 0 0 29 100 685M 31G <br class="">15:57:32 530 476 89 476 89 0 0 457 89 685M 31G <br class="">15:57:33 480 467 97 467 97 0 0 463 97 685M 31G <br class="">15:57:34 452 443 98 443 98 0 0 435 97 685M 31G <br class="">15:57:35 582 547 93 547 93 0 0 536 94 685M 31G <br class="">15:57:36 439 417 94 417 94 0 0 393 94 685M 31G <br class="">15:57:38 435 392 90 392 90 0 0 374 89 685M 31G <br class="">15:57:39 364 352 96 352 96 0 0 352 96 685M 31G <br class="">15:57:40 408 375 91 375 91 0 0 360 91 685M 31G <br class="">15:57:41 552 539 97 539 97 0 0 539 97 685M 31G <br class=""><br class=""></div>It looks like we cannot have in the same system performance and reliability :(<br class=""></div>Simply final conclusion is with the single disk+ssd even zfs doesnot help to speedup the glusterfs healing.<br class=""></div>I will stop here:)<br class=""><br class=""><div class=""><div class=""><div class=""><br class=""><div class=""><br class=""><br class=""><div class="gmail_extra"><div class="gmail_quote">On Fri, Mar 3, 2017 at 3:35 PM, Juan Pablo <span dir="ltr" class=""><<a href="mailto:pablo.localhost@gmail.com" target="_blank" class="">pablo.localhost@gmail.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class=""><div class=""><div class=""><div class=""><div class="">cd to inside the pool path<br class=""></div>then dd if=/dev/zero of=<a href="http://test.tt/" target="_blank" class="">test.tt</a> bs=1M <br class=""></div>leave it runing 5/10 minutes.<br class=""></div>do ctrl+c paste result here.<br class=""></div>etc.<br class=""></div><div class="gmail-HOEnZb"><div class="gmail-h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">2017-03-03 11:30 GMT-03:00 Arman Khalatyan <span dir="ltr" class=""><<a href="mailto:arm2arm@gmail.com" target="_blank" class="">arm2arm@gmail.com</a>></span>:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class=""><div class=""><div class="">No, I have one pool made of the one disk and ssd as a cache and log device.<br class=""></div>I have 3 Glusterfs bricks- separate 3 hosts:Volume type Replicate (Arbiter)= replica 2+1!<br class=""></div>That how much you can push into compute nodes(they have only 3 disk slots).<br class=""><br class=""></div><div class="gmail-m_6281861324822600694HOEnZb"><div class="gmail-m_6281861324822600694h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">On Fri, Mar 3, 2017 at 3:19 PM, Juan Pablo <span dir="ltr" class=""><<a href="mailto:pablo.localhost@gmail.com" target="_blank" class="">pablo.localhost@gmail.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class=""><div class=""><div class="">ok, you have 3 pools, zclei22, logs and cache, thats wrong. you should have 1 pool, with zlog+cache if you are looking for performance.<br class=""></div>also, dont mix drives. <br class=""></div>whats the performance issue you are facing? <br class=""><div class=""><div class=""><br class=""><br class=""></div><div class="">regards,</div></div></div><div class="gmail-m_6281861324822600694m_9102919535904979465HOEnZb"><div class="gmail-m_6281861324822600694m_9102919535904979465h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">2017-03-03 11:00 GMT-03:00 Arman Khalatyan <span dir="ltr" class=""><<a href="mailto:arm2arm@gmail.com" target="_blank" class="">arm2arm@gmail.com</a>></span>:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class="">This is CentOS 7.3 ZoL version 0.6.5.9-1<br class=""><br class=""><blockquote class=""><p class="">[root@clei22 ~]# lsscsi </p><p class="">[2:0:0:0] disk ATA INTEL SSDSC2CW24 400i /dev/sda </p><p class="">[3:0:0:0] disk ATA HGST HUS724040AL AA70 /dev/sdb </p><p class="">[4:0:0:0] disk ATA WDC WD2002FYPS-0 1G01 /dev/sdc </p><p class=""><br class=""></p><p class="">[root@clei22 ~]# pvs ;vgs;lvs</p><p class=""> PV <wbr class=""> VG Fmt Attr PSize PFree </p><p class=""> /dev/mapper/INTEL_SSDSC2CW240A<wbr class="">3_CVCV306302RP240CGN vg_cache lvm2 a-- 223.57g 0 </p><p class=""> /dev/sdc2 <wbr class=""> centos_clei22 lvm2 a-- 1.82t 64.00m</p><p class=""> VG #PV #LV #SN Attr VSize VFree </p><p class=""> centos_clei22 1 3 0 wz--n- 1.82t 64.00m</p><p class=""> vg_cache 1 2 0 wz--n- 223.57g 0 </p><p class=""> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert</p><p class=""> home centos_clei22 -wi-ao---- 1.74t <wbr class=""> </p><p class=""> root centos_clei22 -wi-ao---- 50.00g <wbr class=""> </p><p class=""> swap centos_clei22 -wi-ao---- 31.44g <wbr class=""> </p><p class=""> lv_cache vg_cache -wi-ao---- 213.57g <wbr class=""> </p><p class=""> lv_slog vg_cache -wi-ao---- 10.00g </p><p class=""><br class=""></p><p class="">[root@clei22 ~]# zpool status -v</p><span class=""><p class=""> pool: zclei22</p><p class=""> state: ONLINE</p><p class=""> scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017</p><p class="">config:</p><p class=""><br class=""></p><p class=""> NAME <wbr class=""> STATE READ WRITE CKSUM</p><p class=""> zclei22 <wbr class=""> ONLINE 0 0 0</p><p class=""> HGST_HUS724040ALA640_PN2334PBJ<wbr class="">4SV6T1 ONLINE 0 0 0</p><p class=""> logs</p><p class=""> lv_slog <wbr class=""> ONLINE 0 0 0</p><p class=""> cache</p><p class=""> lv_cache <wbr class=""> ONLINE 0 0 0</p><p class=""><br class=""></p><p class="">errors: No known data errors</p></span></blockquote><p class=""><font size="3" class=""><b class=""><br class="">ZFS config:</b></font></p><blockquote class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294gmail-tr_bq"><p class="">[root@clei22 ~]# zfs get all zclei22/01</p><p class="">NAME PROPERTY VALUE SOURCE</p><p class="">zclei22/01 type filesystem -</p><p class="">zclei22/01 creation Tue Feb 28 14:06 2017 -</p><p class="">zclei22/01 used 389G -</p><p class="">zclei22/01 available 3.13T -</p><p class="">zclei22/01 referenced 389G -</p><p class="">zclei22/01 compressratio 1.01x -</p><p class="">zclei22/01 mounted yes -</p><p class="">zclei22/01 quota none default</p><p class="">zclei22/01 reservation none default</p><p class="">zclei22/01 recordsize 128K local</p><p class="">zclei22/01 mountpoint /zclei22/01 default</p><p class="">zclei22/01 sharenfs off default</p><p class="">zclei22/01 checksum on default</p><p class="">zclei22/01 compression off local</p><p class="">zclei22/01 atime on default</p><p class="">zclei22/01 devices on default</p><p class="">zclei22/01 exec on default</p><p class="">zclei22/01 setuid on default</p><p class="">zclei22/01 readonly off default</p><p class="">zclei22/01 zoned off default</p><p class="">zclei22/01 snapdir hidden default</p><p class="">zclei22/01 aclinherit restricted default</p><p class="">zclei22/01 canmount on default</p><p class="">zclei22/01 xattr sa local</p><p class="">zclei22/01 copies 1 default</p><p class="">zclei22/01 version 5 -</p><p class="">zclei22/01 utf8only off -</p><p class="">zclei22/01 normalization none -</p><p class="">zclei22/01 casesensitivity sensitive -</p><p class="">zclei22/01 vscan off default</p><p class="">zclei22/01 nbmand off default</p><p class="">zclei22/01 sharesmb off default</p><p class="">zclei22/01 refquota none default</p><p class="">zclei22/01 refreservation none default</p><p class="">zclei22/01 primarycache metadata local</p><p class="">zclei22/01 secondarycache metadata local</p><p class="">zclei22/01 usedbysnapshots 0 -</p><p class="">zclei22/01 usedbydataset 389G -</p><p class="">zclei22/01 usedbychildren 0 -</p><p class="">zclei22/01 usedbyrefreservation 0 -</p><p class="">zclei22/01 logbias latency default</p><p class="">zclei22/01 dedup off default</p><p class="">zclei22/01 mlslabel none default</p><p class="">zclei22/01 sync disabled local</p><p class="">zclei22/01 refcompressratio 1.01x -</p><p class="">zclei22/01 written 389G -</p><p class="">zclei22/01 logicalused 396G -</p><p class="">zclei22/01 logicalreferenced 396G -</p><p class="">zclei22/01 filesystem_limit none default</p><p class="">zclei22/01 snapshot_limit none default</p><p class="">zclei22/01 filesystem_count none default</p><p class="">zclei22/01 snapshot_count none default</p><p class="">zclei22/01 snapdev hidden default</p><p class="">zclei22/01 acltype off default</p><p class="">zclei22/01 context none default</p><p class="">zclei22/01 fscontext none default</p><p class="">zclei22/01 defcontext none default</p><p class="">zclei22/01 rootcontext none default</p><p class="">zclei22/01 relatime off default</p><p class="">zclei22/01 redundant_metadata all default</p><p class="">zclei22/01 overlay off default</p></blockquote><p class=""><br class=""></p><br class=""><br class=""></div><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878HOEnZb"><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">On Fri, Mar 3, 2017 at 2:52 PM, Juan Pablo <span dir="ltr" class=""><<a href="mailto:pablo.localhost@gmail.com" target="_blank" class="">pablo.localhost@gmail.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class=""><div class=""><div class=""><div class="">Which operating system version are you using for your zfs storage? <br class=""></div>do:<br class=""></div>zfs get all your-pool-name<br class=""></div>use arc_summary.py from freenas git repo if you wish.<br class=""><br class=""></div><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294HOEnZb"><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">2017-03-03 10:33 GMT-03:00 Arman Khalatyan <span dir="ltr" class=""><<a href="mailto:arm2arm@gmail.com" target="_blank" class="">arm2arm@gmail.com</a>></span>:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class="">Pool load:<br class="">[root@clei21 ~]# zpool iostat -v 1 <br class=""> <wbr class=""> capacity operations bandwidth<br class="">pool <wbr class=""> alloc free read write read write<br class="">------------------------------<wbr class="">-------- ----- ----- ----- ----- ----- -----<br class="">zclei21 <wbr class=""> 10.1G 3.62T 0 112 823 8.82M<br class=""> HGST_HUS724040ALA640_PN2334PBJ<wbr class="">52XWT1 10.1G 3.62T 0 46 626 4.40M<br class="">logs <wbr class=""> - - - - - -<br class=""> lv_slog <wbr class=""> 225M 9.72G 0 66 198 4.45M<br class="">cache <wbr class=""> - - - - - -<br class=""> lv_cache <wbr class=""> 9.81G 204G 0 46 56 4.13M<br class="">------------------------------<wbr class="">-------- ----- ----- ----- ----- ----- -----<br class=""><br class=""> <wbr class=""> capacity operations bandwidth<br class="">pool <wbr class=""> alloc free read write read write<br class="">------------------------------<wbr class="">-------- ----- ----- ----- ----- ----- -----<br class="">zclei21 <wbr class=""> 10.1G 3.62T 0 191 0 12.8M<br class=""> HGST_HUS724040ALA640_PN2334PBJ<wbr class="">52XWT1 10.1G 3.62T 0 0 0 0<br class="">logs <wbr class=""> - - - - - -<br class=""> lv_slog <wbr class=""> 225M 9.72G 0 191 0 12.8M<br class="">cache <wbr class=""> - - - - - -<br class=""> lv_cache <wbr class=""> 9.83G 204G 0 218 0 20.0M<br class="">------------------------------<wbr class="">-------- ----- ----- ----- ----- ----- -----<br class=""><br class=""> <wbr class=""> capacity operations bandwidth<br class="">pool <wbr class=""> alloc free read write read write<br class="">------------------------------<wbr class="">-------- ----- ----- ----- ----- ----- -----<br class="">zclei21 <wbr class=""> 10.1G 3.62T 0 191 0 12.7M<br class=""> HGST_HUS724040ALA640_PN2334PBJ<wbr class="">52XWT1 10.1G 3.62T 0 0 0 0<br class="">logs <wbr class=""> - - - - - -<br class=""> lv_slog <wbr class=""> 225M 9.72G 0 191 0 12.7M<br class="">cache <wbr class=""> - - - - - -<br class=""> lv_cache <wbr class=""> 9.83G 204G 0 72 0 7.68M<br class="">------------------------------<wbr class="">-------- ----- ----- ----- ----- ----- -----<br class=""><br class=""></div><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647HOEnZb"><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">On Fri, Mar 3, 2017 at 2:32 PM, Arman Khalatyan <span dir="ltr" class=""><<a href="mailto:arm2arm@gmail.com" target="_blank" class="">arm2arm@gmail.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class=""><div class="">Glusterfs now in healing mode:<br class=""></div>Receiver:<br class="">[root@clei21 ~]# arcstat.py 1<br class=""> time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c <br class="">13:24:49 0 0 0 0 0 0 0 0 0 4.6G 31G <br class="">13:24:50 154 80 51 80 51 0 0 80 51 4.6G 31G <br class="">13:24:51 179 62 34 62 34 0 0 62 42 4.6G 31G <br class="">13:24:52 148 68 45 68 45 0 0 68 45 4.6G 31G <br class="">13:24:53 140 64 45 64 45 0 0 64 45 4.6G 31G <br class="">13:24:54 124 48 38 48 38 0 0 48 38 4.6G 31G <br class="">13:24:55 157 80 50 80 50 0 0 80 50 4.7G 31G <br class="">13:24:56 202 68 33 68 33 0 0 68 41 4.7G 31G <br class="">13:24:57 127 54 42 54 42 0 0 54 42 4.7G 31G <br class="">13:24:58 126 50 39 50 39 0 0 50 39 4.7G 31G <br class="">13:24:59 116 40 34 40 34 0 0 40 34 4.7G 31G <br class=""><br class=""><div class=""><br class="">Sender<br class="">[root@clei22 ~]# arcstat.py 1<br class=""> time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c <br class="">13:28:37 8 2 25 2 25 0 0 2 25 468M 31G <br class="">13:28:38 1.2K 727 62 727 62 0 0 525 54 469M 31G <br class="">13:28:39 815 508 62 508 62 0 0 376 55 469M 31G <br class="">13:28:40 994 624 62 624 62 0 0 450 54 469M 31G <br class="">13:28:41 783 456 58 456 58 0 0 338 50 470M 31G <br class="">13:28:42 916 541 59 541 59 0 0 390 50 470M 31G <br class="">13:28:43 768 437 56 437 57 0 0 313 48 471M 31G <br class="">13:28:44 877 534 60 534 60 0 0 393 53 470M 31G <br class="">13:28:45 957 630 65 630 65 0 0 450 57 470M 31G <br class="">13:28:46 819 479 58 479 58 0 0 357 51 471M 31G <br class=""><br class=""></div></div><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713HOEnZb"><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">On Thu, Mar 2, 2017 at 7:18 PM, Juan Pablo <span dir="ltr" class=""><<a href="mailto:pablo.localhost@gmail.com" target="_blank" class="">pablo.localhost@gmail.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class="">hey,<br class="">what are you using for zfs? get an arc status and show please<br class=""><br class=""></div><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781HOEnZb"><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">2017-03-02 9:57 GMT-03:00 Arman Khalatyan <span dir="ltr" class=""><<a href="mailto:arm2arm@gmail.com" target="_blank" class="">arm2arm@gmail.com</a>></span>:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr" class=""><div class=""><div class="">no, <br class=""></div>ZFS itself is not on top of lvm. only ssd was spitted by lvm for slog(10G) and cache (the rest)<br class=""></div><div class="">but in any-case the ssd does not help much on glusterfs/ovirt load it has almost 100% cache misses....:( (terrible performance compare with nfs)<br class=""><br class=""></div><div class=""><br class=""></div><br class=""><br class=""></div><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570HOEnZb"><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570h5"><div class="gmail_extra"><br class=""><div class="gmail_quote">On Thu, Mar 2, 2017 at 1:47 PM, FERNANDO FREDIANI <span dir="ltr" class=""><<a href="mailto:fernando.frediani@upx.com" target="_blank" class="">fernando.frediani@upx.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF" class=""><p class="">Am I understanding correctly, but you have Gluster on the top of
ZFS which is on the top of LVM ? If so, why the usage of LVM was
necessary ? I have ZFS with any need of LVM.</p><span class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901HOEnZb"><font color="#888888" class=""><p class="">Fernando<br class="">
</p></font></span><div class=""><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901h5">
<br class="">
<div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901m_3542297157065926681moz-cite-prefix">On 02/03/2017 06:19, Arman Khalatyan
wrote:<br class="">
</div>
</div></div><blockquote type="cite" class=""><div class=""><div class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901h5">
<div dir="ltr" class="">
<div class="">
<div class="">Hi, <br class="">
</div>
I use 3 nodes with zfs and glusterfs.<br class="">
</div>
Are there any suggestions to optimize it?<br class="">
<div class=""><br class="">
host zfs config 4TB-HDD+250GB-SSD:<br class="">
[root@clei22 ~]# zpool status <br class="">
pool: zclei22<br class="">
state: ONLINE<br class="">
scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28
14:16:07 2017<br class="">
config:<br class="">
<br class="">
NAME <wbr class=""> STATE READ
WRITE CKSUM<br class="">
zclei22 <wbr class=""> ONLINE 0
0 0<br class="">
HGST_HUS724040ALA640_PN2334PBJ<wbr class="">4SV6T1 ONLINE 0
0 0<br class="">
logs<br class="">
lv_slog <wbr class=""> ONLINE 0
0 0<br class="">
cache<br class="">
lv_cache <wbr class=""> ONLINE 0
0 0<br class="">
<br class="">
errors: No known data errors<br class="">
<br class="">
Name:<br class="">
GluReplica<br class="">
Volume ID:<br class="">
ee686dfe-203a-4caa-a691-263534<wbr class="">60cc48<br class="">
Volume Type:<br class="">
Replicate (Arbiter)<br class="">
Replica Count:<br class="">
2 + 1<br class="">
Number of Bricks:<br class="">
3<br class="">
Transport Types:<br class="">
TCP, RDMA<br class="">
Maximum no of snapshots:<br class="">
256<br class="">
Capacity:<br class="">
3.51 TiB total, 190.56 GiB used, 3.33 TiB free<br class="">
</div>
</div>
<br class="">
<fieldset class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901m_3542297157065926681mimeAttachmentHeader"></fieldset>
<br class="">
</div></div><span class=""><pre class="">______________________________<wbr class="">_________________
Users mailing list
<a class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901m_3542297157065926681moz-txt-link-abbreviated" href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a class="gmail-m_6281861324822600694m_9102919535904979465m_-772355890576934878m_8401411119881083294m_-2168826882466388647m_-4808390406438333713m_6906981569320781m_349397128160904570m_-6203522006917276901m_3542297157065926681moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman<wbr class="">/listinfo/users</a>
</pre>
</span></blockquote>
<br class="">
</div>
<br class="">______________________________<wbr class="">_________________<br class="">
Users mailing list<br class="">
<a href="mailto:Users@ovirt.org" target="_blank" class="">Users@ovirt.org</a><br class="">
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank" class="">http://lists.ovirt.org/mailman<wbr class="">/listinfo/users</a><br class="">
<br class=""></blockquote></div><br class=""></div>
</div></div><br class="">______________________________<wbr class="">_________________<br class="">
Users mailing list<br class="">
<a href="mailto:Users@ovirt.org" target="_blank" class="">Users@ovirt.org</a><br class="">
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank" class="">http://lists.ovirt.org/mailman<wbr class="">/listinfo/users</a><br class="">
<br class=""></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div>
</div></div></blockquote></div><br class=""></div></div></div></div></div></div>
_______________________________________________<br class="">Users mailing list<br class=""><a href="mailto:Users@ovirt.org" class="">Users@ovirt.org</a><br class="">http://lists.ovirt.org/mailman/listinfo/users<br class=""></div></blockquote></div><br class=""></div></body></html>