yes + 1x SSD cache

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            scsi-35000cca22be96bed                      ONLINE       0     0     0
            scsi-35000cca22bc5a20e                      ONLINE       0     0     0
            scsi-35000cca22bc515ee                      ONLINE       0     0     0
            ata-Hitachi_HUS724030ALE640_PK2A31PAG9VJXW  ONLINE       0     0     0
            scsi-35000cca22bc1f9cf                      ONLINE       0     0     0
            scsi-35000cca22be68899                      ONLINE       0     0     0
            scsi-35000cca22bc58e1b                      ONLINE       0     0     0
            scsi-35000cca22bc4dc6b                      ONLINE       0     0     0
            scsi-35000cca22bc394ee                      ONLINE       0     0     0
            scsi-35000cca22bc10d97                      ONLINE       0     0     0
            scsi-35000cca22bc605d1                      ONLINE       0     0     0
            scsi-35000cca22bc412bf                      ONLINE       0     0     0
            scsi-35000cca22bc3f9ad                      ONLINE       0     0     0
            scsi-35000cca22bc53004                      ONLINE       0     0     0
            scsi-35000cca22bc5b8e2                      ONLINE       0     0     0
            scsi-35000cca22bc3beb3                      ONLINE       0     0     0
          cache
            sdc                                         ONLINE       0     0     0
On Thu, Oct 23, 2014 at 10:57 AM, Karli Sjöberg <Karli.Sjoberg(a)slu.se>
wrote:
On Thu, 2014-10-23 at 10:11 +0200, Arman Khalatyan wrote:
>
> ---------- Forwarded message ----------
> From: Arman Khalatyan <arm2arm(a)gmail.com>
> Date: Thu, Oct 23, 2014 at 10:11 AM
> Subject: Re: [ovirt-users] Are there some general strategy how to Iscsi
> storage domain?
> To: Trey Dockendorf <treydock(a)gmail.com>
>
>
> Thank you Trey for sharing your setup.
>
> I also have one test system with a zvol exported with iSCSI over 10G.
> Unfortunately the performance difference between ZFS and a RAID
> controller is huge, particularly for VMs running MySQL. I have not tried
> HBAs yet; I only have LSI/Adaptec/Areca RAID controllers and they don't
> have IT mode. Maybe that is the reason.
>
> For sure, one always needs to find the sweet spot between performance
> and reliability.
>
> Just for comparison with yours, this is what I get on random IO:
> zvol/16Disks/Raid2/tgtd/iscsi/10G -> multiple rsync on a VM:
> ~100-150 MB/s. Same hardware but with the disks on Areca RAID6:
> 650 MB/s stable, even more in some cases.
Did you have separate log devices attached to the pool?
The pool's name was '16Disks'. Did you have 16 disks in one raidz2 vdev?
/K
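For reference, attaching a separate log (SLOG) device to an existing pool is a one-liner; the pool name matches the example above, but the device names here are placeholders, not anyone's actual hardware:

```shell
# Attach a mirrored pair of SSDs as a dedicated log (ZIL/SLOG) device
# (sdx/sdy are placeholder device names):
zpool add tank log mirror /dev/sdx /dev/sdy

# A cache (L2ARC) device is added the same way:
zpool add tank cache /dev/sdz
```

A mirrored SLOG is worth the extra device, since losing an unmirrored log device can cost in-flight synchronous writes.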
>
> The best performance I got was with FDR iSER ->
> 80% of bare-metal performance: 1500 MB/s. But oVirt goes mad, claiming
> that the network and disk devices are saturated, and my VM goes into a
> paused state from time to time.
> That is because oVirt treats all IB devices as 10 Gbit
> cards (in terms of speed). :(
>
> On Thu, Oct 23, 2014 at 8:30 AM, Trey Dockendorf <treydock(a)gmail.com>
> wrote:
> Not sure if it's a solution for you, but consider ZFS. My domains are
> all ZFS (using ZFS on Linux on EL6.5) and my backup server
> receives incremental snapshots from primary storage, which
> covers both the NFS exports and iSCSI. ZFS makes creating block
> devices for iSCSI very easy, and they are included in the snapshot
> replication. The replication is not HA, but disaster recovery
> and off-site backup.
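A minimal sketch of the zvol-backed iSCSI workflow described above. The pool/dataset names and the target IQN are illustrative assumptions, not the actual setup; the export uses targetcli (LIO), though tgtd works similarly:

```shell
# Create a 100 GB zvol; it appears as /dev/zvol/tank/vm-disk0
zfs create -V 100G -o volblocksize=8K tank/vm-disk0

# Register the zvol as a block backstore and export it as a LUN
targetcli /backstores/block create name=vm-disk0 dev=/dev/zvol/tank/vm-disk0
targetcli /iscsi create iqn.2014-10.org.example:vm-disk0
targetcli /iscsi/iqn.2014-10.org.example:vm-disk0/tpg1/luns create /backstores/block/vm-disk0
```

Because the zvol is an ordinary ZFS dataset, it snapshots and replicates like any filesystem in the pool.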
>
> I've hit 300MB/s using ZFS send over IPoIB on my DDR fabric,
> which isn't amazing but not terrible for an old DDR fabric.
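The incremental snapshot replication described above boils down to something like this; the hostname `backup` and the dataset names are hypothetical:

```shell
# Initial full replication to the backup server
zfs snapshot tank/vm-disk0@snap1
zfs send tank/vm-disk0@snap1 | ssh backup zfs recv -F backup/vm-disk0

# Later: send only the delta between snap1 and snap2
zfs snapshot tank/vm-disk0@snap2
zfs send -i tank/vm-disk0@snap1 tank/vm-disk0@snap2 | ssh backup zfs recv backup/vm-disk0
```

Only the changed blocks travel over the wire on incremental sends, which is what makes this practical even over a modest IPoIB link.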
>
> ZFS is probably not an easy solution, as it requires rebuilding
> your storage, but maybe for future use or for other readers it
> will give some useful ideas.
>
> - Trey
>
> On Oct 22, 2014 11:56 AM, "Arman Khalatyan"
> <arm2arm(a)gmail.com> wrote:
>
> Hi,
> I have 2x 40TB domains, each exported via
> iSER/iSCSI over IB and 10Gb interfaces.
>
> They are RAID6 storage, so they are somewhat safe against
> disk failure.
> But I was wondering if there is any way to back up those
> domains, particularly the master one.
>
>
> I was thinking of some DRBD-based replication with
> LVM snapshots etc., but it looks like overkill.
>
> It would be nice to deploy a replicated/HA master
> domain, with the ability to back up to tapes as well.
>
>
> Any ideas are welcome.
>
> Thanks,
>
> Arman.
>
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
>
http://lists.ovirt.org/mailman/listinfo/users
>
--
With Kind Regards
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences Box 7079 (Visiting Address
Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg(a)slu.se