<div dir="ltr">Also, if performance is the key, and you don't need the maximum space, try using 8x mirrors. If your ZVOL is doing MySQL or any database , set primary and secondary cache properties to metadata only. The caching in ZFS can sometimes hurt when the application does its own caching.<div><br></div><div>Having a ZIL/slog device is good, in my situation I have neither ZIL or l2_arc because my zpool is pure SSD.</div><div><br></div><div>Maximizing the amount of striped raid sets helps a lot with ZFS, as Karli mentioned, you'll see better performance with 2x8 than 1x16.</div><div><br></div><div>I too have seen RAID cards out perform a plain HBA + ZFS, in some situations. This is usually due to the cards doing their own internal caching and other "dangerous" (my opinion) types of behavior. Doing ZFS on those cards is semi-dangerous. A few months ago I threw away 8 Areca cards after we lost a 30TB RAID set due to errors in the Areca. I have yet to lose a single bit on ZFS. I have chosen to trade some performance for stability and data integrity. To me it was worth it.</div><div><br></div><div>For what it's worth, I use ZFS on Linux as the backing file system for our HPC cluster's parallel filesystem storage nodes, and after upgrading from 0.6.2 to 0.6.3 I saw a double in overall throughput to all storage systems. I've also had to tweak things like prefetching and cache tunables in ZFS to get better performance.</div><div><br></div><div>Right now my oVirt instance that's backed by ZFS has done well for MySQL with the iSCSI backed data domain. I believe the numbers were ~250 transactions per second on a small-ish (2 core, 8GB RAM) virtual machine using sysbench 0.5 on MariaDB. I was formatting the iSCSI backed VM disks as ext4 mounted with "nobarrier". Can post specifics about my zvol and zpool setup if it'll help, just don't want to flood oVirt list with too much ZFS stuff :)</div><div><br></div><div>- Trey</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Oct 23, 2014 at 4:34 AM, Karli Sjöberg <span dir="ltr"><<a href="mailto:Karli.Sjoberg@slu.se" target="_blank">Karli.Sjoberg@slu.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Thu, 2014-10-23 at 11:09 +0200, Arman Khalatyan wrote:<br>
On Thu, Oct 23, 2014 at 4:34 AM, Karli Sjöberg <Karli.Sjoberg@slu.se> wrote:
On Thu, 2014-10-23 at 11:09 +0200, Arman Khalatyan wrote:
> yes + 1x SSD cache
>
>   NAME                                            STATE   READ WRITE CKSUM
>   tank                                            ONLINE     0     0     0
>     raidz2-0                                      ONLINE     0     0     0
>       scsi-35000cca22be96bed                      ONLINE     0     0     0
>       scsi-35000cca22bc5a20e                      ONLINE     0     0     0
>       scsi-35000cca22bc515ee                      ONLINE     0     0     0
>       ata-Hitachi_HUS724030ALE640_PK2A31PAG9VJXW  ONLINE     0     0     0
>       scsi-35000cca22bc1f9cf                      ONLINE     0     0     0
>       scsi-35000cca22be68899                      ONLINE     0     0     0
>       scsi-35000cca22bc58e1b                      ONLINE     0     0     0
>       scsi-35000cca22bc4dc6b                      ONLINE     0     0     0
>       scsi-35000cca22bc394ee                      ONLINE     0     0     0
>       scsi-35000cca22bc10d97                      ONLINE     0     0     0
>       scsi-35000cca22bc605d1                      ONLINE     0     0     0
>       scsi-35000cca22bc412bf                      ONLINE     0     0     0
>       scsi-35000cca22bc3f9ad                      ONLINE     0     0     0
>       scsi-35000cca22bc53004                      ONLINE     0     0     0
>       scsi-35000cca22bc5b8e2                      ONLINE     0     0     0
>       scsi-35000cca22bc3beb3                      ONLINE     0     0     0
>   cache
>     sdc                                           ONLINE     0     0     0

OK, two things:

1) Redo the pool layout into 2x 8-disk raidz2
2) Add two really fast SSDs as mirrored log devices, e.g. a pair of
200 GB Intel DC S3700s (see the sketch below)
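[A minimal sketch of that layout; the device names (sda-sdp for the sixteen data disks, the two SSD IDs for the log mirror) are placeholders, not real devices:]

    zpool create tank \
      raidz2 sda sdb sdc sdd sde sdf sdg sdh \
      raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
      log mirror ata-INTEL_SSDSC2BA200G3_A ata-INTEL_SSDSC2BA200G3_B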

Do this and it may provide even better performance than the HW RAID.

But that depends on the specs of the rest of the HW, CPU and RAM mostly;
you can never have enough RAM with ZFS ;)

/K
>
> On Thu, Oct 23, 2014 at 10:57 AM, Karli Sjöberg <Karli.Sjoberg@slu.se>
> wrote:
> On Thu, 2014-10-23 at 10:11 +0200, Arman Khalatyan wrote:
> >
> > ---------- Forwarded message ----------
> > From: Arman Khalatyan <arm2arm@gmail.com>
> > Date: Thu, Oct 23, 2014 at 10:11 AM
> > Subject: Re: [ovirt-users] Are there some general strategy how to
> > iSCSI storage domain?
> > To: Trey Dockendorf <treydock@gmail.com>
> >
> > Thank you Trey for sharing your setup.
> >
> > I also have one test system with a zvol exported with iSCSI over 10G.
> > Unfortunately the difference in performance between ZFS and the RAID
> > controller is huge, particularly where a VM is running MySQL. I did
> > not try HBAs yet; I only have LSI/Adaptec/Areca RAID controllers and
> > they don't have IT mode. Maybe that is the reason.
> >
> > For sure, one always needs to find the sweet spot between performance
> > and reliability.
> >
> > Just for comparison with yours, on random IO I get:
> > zvol/16Disks/Raid2/tgtd/iscsi/10G -> multiple rsyncs on a VM give
> > ~100-150 MB/s. The same HW but with the disks on Areca RAID6 gives a
> > stable 650 MB/s, even more in some cases.
>
> Did you have separate log devices attached to the pool?
>
> The pool's name was '16Disks'. Did you have 16 disks in one raidz2
> vdev?
>
> /K
>
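[For reference, a mirrored log device can also be attached to an existing pool without rebuilding it; the pool name "tank" and the SSD device names are hypothetical:]

    zpool add tank log mirror ata-SSD_A ata-SSD_B
    zpool status tank    # the new "logs" section should show the mirror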
> >
> > The best performance I got was with FDR iSER ->
> > 80% of bare-metal performance: 1500 MB/s, but oVirt goes mad,
> > claiming that the network and disk devices are saturated. My VM goes
> > into a paused state from time to time.
> > That is because oVirt treats all IB devices as 10 Gbit cards (in
> > terms of speed) :(
> >
> > On Thu, Oct 23, 2014 at 8:30 AM, Trey Dockendorf
> > <treydock@gmail.com> wrote:
> > Not sure if it's a solution for you, but ZFS. My domains are all ZFS
> > (using ZFS on Linux in EL6.5) and my backup server receives
> > incremental snapshots from primary storage, which includes both NFS
> > exports and iSCSI. ZFS makes creating block devices for iSCSI very
> > easy, and they are included in snapshot replication. The replication
> > is not HA, but disaster recovery and off-site.
> >
> > I've hit 300 MB/s using ZFS send over IPoIB on my DDR fabric, which
> > isn't amazing but not terrible for an old DDR fabric.
> >
> > ZFS is probably not an easy solution as it requires rebuilding your
> > storage, but maybe for future use or other readers it will give some
> > useful ideas.
> >
> > - Trey
> >
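[A minimal sketch of that kind of incremental snapshot replication; the dataset "tank/domains", the target "backup/domains", and the host "backuphost" are assumptions for illustration:]

    # initial full replication of the dataset tree
    zfs snapshot -r tank/domains@base
    zfs send -R tank/domains@base | ssh backuphost zfs recv -Fu backup/domains

    # later: send only the changes between two snapshots
    zfs snapshot -r tank/domains@daily
    zfs send -R -i @base tank/domains@daily | ssh backuphost zfs recv -u backup/domains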
> > On Oct 22, 2014 11:56 AM, "Arman Khalatyan" <arm2arm@gmail.com>
> > wrote:
> >
> > Hi,
> > I have 2x 40 TB domains, each exported with iSER/iSCSI over IB and
> > 10 Gb interfaces.
> >
> > For sure they are RAID6 storage, a little bit safe against failure.
> > But I was wondering if there is any way to back up those domains,
> > particularly the master one.
> >
> > I was thinking of some DRBD-based replication with LVM snapshots
> > etc., but it looks like overkill.
> >
> > It would be nice to somehow deploy a replicated/HA master domain
> > with the ability to back up to tapes as well.
> >
> > Any ideas are welcome.
> >
> > Thanks,
> > Arman.
> >

--
With Kind Regards

-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg@slu.se
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users