On Jan 8, 2014, at 11:55 AM, Karli Sjöberg <Karli.Sjoberg(a)slu.se> wrote:
Sent from my iPhone
> On 8 Jan 2014, at 18:47, "Darrell Budic" <darrell.budic(a)zenfire.com> wrote:
>
> Grégoire-
>
> My test setup, running a version of the nightly self-hosted setup w/ gluster distributed/replicated disks as shared storage, in an NFS cluster:
>
> Core i5 3570K @ 3.4Ghz, 16G Ram
> Boot disks: 2x 32G SATA SSDs in raid-1
> Storage system: 4x 500G Seagate RE3s in a ZFS raid-10 w/ 1GB ZIL & ~22G L2ARC caching from boot drives
> 1x 1G ethernet
> 2 VMs running
>
> Core2 Duo E8500 @ 3.16GHz, 8G Ram
> Boot disks: 2x 32G SATA SSDS in raid-1
> Storage system: 2x 1500G WD Green drives in a ZFS raid w/ 1GB ZIL & ~22G L2ARC cache from boot drives
> 1x 1G ethernet
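For readers wanting to reproduce a layout like the first box above, a striped-mirror ("raid-10") pool with a separate log device (ZIL) and cache device (L2ARC) carved from the boot SSDs might look roughly like this. Device names and the pool name are hypothetical; the actual partitioning isn't given in the thread:

```shell
# Sketch only: hypothetical device names, pool name "tank".
# Two mirrored pairs striped together = the "raid-10" described above.
zpool create tank \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde

# ZIL (sync-write log) and L2ARC (read cache) on partitions of the
# boot SSDs, matching the "1GB ZIL & ~22G L2ARC from boot drives" note.
zpool add tank log /dev/sda3
zpool add tank cache /dev/sda4
```

Putting the ZIL and L2ARC on SSD partitions is a common way to get SSD acceleration without dedicating whole drives to it, at the cost of sharing I/O with the boot pool.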
Just curious, are you doing ZFS in Linux?
/K
Yes, forgot to mention: those are freshly built CentOS 6.5 systems with zfs 0.6.2, glusterfs-3.4.1-3.el6.x86_64, and vdsm-gluster-4.13.2-1.el6.noarch for testing/experimenting. Bought some cheap SSDs and just grabbed systems and platters I had around for it. Testbedding and getting some experience with the self-hosted engine, since I'd like to move to it once it's released. Also looking forward to testing native gluster on this setup.
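For context, a two-node replicated gluster volume for a setup like this might be created along the following lines. Hostnames, the volume name, and brick paths are all hypothetical, not taken from the thread:

```shell
# Sketch only: hypothetical hostnames (node1/node2) and brick paths.
# Run from node1; joins node2 to the trusted pool.
gluster peer probe node2

# A replica-2 volume with one brick per node, so each file is
# mirrored across both hosts.
gluster volume create data replica 2 \
  node1:/tank/bricks/data node2:/tank/bricks/data

gluster volume start data
gluster volume info data
```

With glusterfs 3.4 the volume can then be consumed either over its built-in NFS server or, as "native gluster", via the FUSE client.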
I have a production oVirt cluster with a Linux ZFS-based NFS storage server; the backend has been very stable since I got rid of Nexenta and went to Linux. Sounds odd, I know, but I couldn't get good support for a community Nexenta server I inherited, and I was having driver-level box-lockup issues with OpenSolaris that I couldn't resolve. So I rebuilt it with Linux, imported the pool, and haven't looked back or had a storage failure since.
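The migration described above works because ZFS pools are portable across operating systems running compatible ZFS versions: the on-disk pool survives the OS rebuild and is simply re-imported. A minimal sketch, assuming a pool named "tank" (the real pool name isn't given in the thread):

```shell
# On the old host, if it is still bootable, cleanly release the pool:
zpool export tank

# On the rebuilt Linux host with the ZFS modules loaded:
zpool import            # scan attached disks, list importable pools
zpool import tank       # import by name; -f forces it if the pool
                        # was never cleanly exported (e.g. after a crash)
zpool status tank       # verify all vdevs show ONLINE
```

If the old host died outright (as with the lockups described), the export step is skipped and `zpool import -f tank` on the new host is typically enough.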
-Darrell