[Users] SSD Caching

Amedeo Salvati amedeo at oscert.net
Thu Jan 9 02:50:25 EST 2014


You can use flashcache under CentOS 6; it's stable and gives you a boost for reads/writes, but I have never used it with gluster: https://github.com/facebook/flashcache/

Under Fedora you have more choice: flashcache, bcache, dm-cache.

Regards,
a

Date: Wed, 8 Jan 2014 21:44:35 -0600
From: Darrell Budic <darrell.budic at zenfire.com>
To: Russell Purinton <russ at sonicbx.com>
Cc: "users at ovirt.org" <users at ovirt.org>
Subject: Re: [Users] SSD Caching
Message-ID: <A45059D4-B00D-4573-81E7-F00B2B9FA4AA at zenfire.com>
Content-Type: text/plain; charset="windows-1252"

Stick your bricks on ZFS and let it do it for you. Works well, although I haven't done much benchmarking of it. My test setup is described in the thread under [Users] Creation of preallocated disk with Gluster replication. I've seen some blog posts here and there about gluster on ZFS for this reason too.

 -Darrell

On Jan 7, 2014, at 9:56 PM, Russell Purinton <russ at sonicbx.com> wrote:

> [20:42] <sonicrose> is anybody out there using a good RAM+SSD caching system ahead of gluster storage?
> [20:42] <sonicrose> sorry if that came through twice
> [20:44] <sonicrose> im thinking about making the SSD one giant swap file then creating a very large ramdisk in virtual memory and using that as a block level cache for parts and pieces of virtual machine disk images
> [20:44] <sonicrose> then i think the memory managers would inherently play the role of storage tiering ie: keeping the hottest data in memory and the coldest data on swap
> [20:45] <sonicrose> everything i have seen today has been set up as "consumer" ===> network ===> SSD cache ===> real disks
> [20:45] <sonicrose> but i'd like to actually do "consumer" ===> RAM+SSD cache ===> network ===> real disks
> [20:46] <sonicrose> i realize doing a virtual memory disk means the cache will be cleared on every reboot, and I'm ok with that
> [20:47] <sonicrose> i know this can be done with NFS and cachefilesd (fscache), but how could something be integrated into the native gluster clients?
> [20:47] <sonicrose> i'd prefer not to have to access gluster via NFS
> [20:49] <sonicrose> any feedback from this room is greatly appreciated, getting someone started building managed HA cloud hosting
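For anyone who wants to try the flashcache route Amedeo describes, here is a
minimal sketch of building the module and putting a writeback cache in front
of a brick disk. The device names (/dev/sdb for the SSD, /dev/sdc for the
spinning disk) and the mount point are placeholders, not anything from this
thread:

    # build the out-of-tree module (kernel-devel headers required)
    git clone https://github.com/facebook/flashcache.git
    cd flashcache && make && make install
    modprobe flashcache

    # create a writeback cache device: SSD /dev/sdb in front of /dev/sdc
    flashcache_create -p back brick_cache /dev/sdb /dev/sdc

    # the cached device shows up under /dev/mapper and is used like any disk
    mkfs.xfs /dev/mapper/brick_cache
    mount /dev/mapper/brick_cache /export/brick1

Note that -p back (writeback) holds dirty data on the SSD before it reaches
the disk, so an SSD failure can lose writes; -p thru or -p around are the
safer modes.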
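sonicrose's swap-backed RAM cache idea can be approximated without writing
any new code: a file on tmpfs is swappable (unlike a brd ramdisk, which pins
kernel memory), so exposing it as a loop device gives a block device whose
hot pages live in RAM and whose cold pages get pushed out to SSD swap. A
rough sketch, with all sizes and paths invented for illustration:

    # SSD partition as swap, so cold tmpfs pages can spill onto flash
    mkswap /dev/sdb1
    swapon /dev/sdb1

    # a "ramdisk" the VM subsystem is allowed to swap: tmpfs + loop device
    mount -t tmpfs -o size=48g tmpfs /mnt/vmcache
    truncate -s 40G /mnt/vmcache/cache.img
    losetup /dev/loop0 /mnt/vmcache/cache.img

    # /dev/loop0 can now serve as the cache device for a block-level
    # caching layer (flashcache, dm-cache, ...) in front of the real storage

Whether the kernel's page reclaim actually behaves like a sane tiering
policy under this kind of load is exactly the open question in the log above.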
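The NFS + cachefilesd path mentioned at [20:47] is straightforward to try on
CentOS 6; a minimal sketch, where the cache directory, tag, and export are
all assumptions:

    # /etc/cachefilesd.conf -- cache directory should live on the SSD
    dir /var/cache/fscache
    tag vmcache
    brun 10%
    bcull 7%
    bstop 3%

    # yum install cachefilesd, then enable the daemon and mount with fsc
    chkconfig cachefilesd on && service cachefilesd start
    mount -t nfs -o fsc,vers=3 filer:/export/vms /mnt/vms

As the log notes, nothing equivalent is wired into the native gluster (FUSE)
client, which is why the question stands.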
From: users-bounces at ovirt.org
To: users at ovirt.org
Cc: 
Date: Thu, 09 Jan 2014 02:34:48 -0500
Subject: Users Digest, Vol 28, Issue 61

> Send Users mailing list submissions to
> 	users at ovirt.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> 	http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
> 	users-request at ovirt.org
> 
> You can reach the person managing the list at
> 	users-owner at ovirt.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
> 
> 
> Today's Topics:
> 
>    1. Re: SSD Caching (Darrell Budic)
>    2. Re: Ovirt DR setup (Hans Emmanuel)
>    3. Re: Experience with low cost NFS-Storage as VM-Storage?
>       (Markus Stockhausen)
>    4. Re: Experience with low cost NFS-Storage as VM-Storage?
>       (Karli Sjöberg)
>    5. Re: Experience with low cost NFS-Storage as VM-Storage? (squadra)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Wed, 8 Jan 2014 21:44:35 -0600
> From: Darrell Budic 
> To: Russell Purinton 
> Cc: "users at ovirt.org" 
> Subject: Re: [Users] SSD Caching
> Message-ID: 
> Content-Type: text/plain; charset="windows-1252"
> 
> Stick your bricks on ZFS and let it do it for you. Works well, although I haven't done much benchmarking of it. My test setup is described in the thread under [Users] Creation of preallocated disk with Gluster replication. I've seen some blog posts here and there about gluster on ZFS for this reason too.
> 
>  -Darrell
> 
> On Jan 7, 2014, at 9:56 PM, Russell Purinton  wrote:
> 
> > [20:42]  is anybody out there using a good RAM+SSD caching system ahead of gluster storage?
> > [20:42]  sorry if that came through twice
> > [20:44]  im thinking about making the SSD one giant swap file then creating a very large ramdisk in virtual memory and using that as a block level cache for parts and pieces of virtual machine disk images
> > [20:44]  then i think the memory managers would inherently play the role of storage tiering ie: keeping the hottest data in memory and the coldest data on swap
> > [20:45]  everything i have seen today has been setup as   "consumer"  ===>  network ====> SSD cache ====> real disks
> > [20:45]  but i'd like to actually do "consumer" ===> RAM+SSD cache ===>  network ===> real disks
> > [20:46]  i realize doing a virtual memory disk means the cache will be cleared on every reboot, and I'm ok with that
> > [20:47]  i know this can be done with NFS and cachefilesd(fscache), but how could something be integrated into the native gluster clients?
> > [20:47]  i'd prefer not to have to access gluster via NFS
> > [20:49]  any feedback from this room is greatly appreciated, getting someone started to build managed HA cloud hosting
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> ------------------------------
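As a concrete version of Darrell's suggestion, the usual way to get SSD
caching from ZFS is to add the SSD as an L2ARC (read cache) and, optionally,
a mirrored SLOG for synchronous writes, in the pool that holds the gluster
brick. A sketch with hypothetical pool, host, and device names:

    # pool of spinning disks for the brick data
    zpool create tank mirror /dev/sdb /dev/sdc

    # SSD partitions: one as L2ARC read cache, two mirrored as the ZIL/SLOG
    zpool add tank cache /dev/sdd1
    zpool add tank log mirror /dev/sdd2 /dev/sde2

    # dedicated dataset used as the gluster brick
    zfs create tank/brick1
    gluster volume create gv0 replica 2 host1:/tank/brick1 host2:/tank/brick1

This keeps the caching entirely below gluster, which is why it works without
any changes to the gluster client.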
> 
> Message: 2
> Date: Thu, 9 Jan 2014 10:34:26 +0530
> From: Hans Emmanuel 
> To: users at ovirt.org
> Subject: Re: [Users] Ovirt DR setup
> Message-ID:
> 	
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Could anyone please give me some suggestions?
> 
> 
> On Wed, Jan 8, 2014 at 11:39 AM, Hans Emmanuel wrote:
> 
> > Hi all,
> >
> > I would like to know about the possibility of setting up a Disaster
> > Recovery (DR) site for an oVirt cluster, i.e. if site 1 goes down I need
> > site 2 to come into action with minimal downtime.
> >
> > I am open to using NFS shared storage or local storage for the data
> > storage domain. I know we need to replicate the storage domain and the
> > oVirt configuration and DB across the sites, but I couldn't find any doc
> > for this. Isn't that possible with oVirt?
> >
> >  *Hans Emmanuel*
> >
> >
> > *NOthing to FEAR but something to FEEL......*
> >
> >
> 
> 
> -- 
> *Hans Emmanuel*
> 
> *NOthing to FEAR but something to FEEL......*
> 
> ------------------------------
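One hedged building block for the DR question above, assuming oVirt 3.3 or
later: the engine configuration and database can be captured with the
engine-backup tool and shipped to the DR site alongside the replicated
storage domain. Paths and the schedule here are invented:

    # on the active engine at site 1 (run from cron)
    engine-backup --mode=backup \
        --file=/backup/engine-$(date +%F).tar.bz2 \
        --log=/backup/engine-backup.log

    # on the standby engine host at site 2, after installing the same
    # ovirt-engine version and preparing the database per the docs:
    engine-backup --mode=restore \
        --file=/backup/engine-2014-01-09.tar.bz2 \
        --log=/backup/engine-restore.log

Storage-domain replication itself (DRBD, ZFS send/receive, or array-level
replication of the NFS export) still has to be handled outside oVirt, and
the site 2 hosts must see the replicated storage under the same paths.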
> 
> Message: 3
> Date: Thu, 9 Jan 2014 07:10:07 +0000
> From: Markus Stockhausen 
> To: squadra , "users at ovirt.org" 
> Subject: Re: [Users] Experience with low cost NFS-Storage as
> 	VM-Storage?
> Message-ID:
> 	<12EF8D94C6F8734FB2FF37B9FBEDD173585B991E at EXCHANGE.collogia.de>
> Content-Type: text/plain; charset="us-ascii"
> 
> > From: users-bounces at ovirt.org [users-bounces at ovirt.org] on behalf of squadra [squadra at gmail.com]
> > Sent: Wednesday, 8 January 2014 17:15
> > To: users at ovirt.org
> > Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
> >
> > Better go for iSCSI or something else... I would avoid NFS for VM hosting.
> > FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.
> >
> > Cheers,
> > 
> > Juergen
> 
> That is usually a matter of taste and the available environment.
> The minimal differences in performance usually only show up
> if you drive the storage to its limits. I guess you could help Sven
> better if you had some hard facts about why to favour iSCSI.
> 
> Best regards.
> 
> Markus
> 
> ------------------------------
> 
> Message: 4
> Date: Thu, 9 Jan 2014 07:30:56 +0000
> From: Karli Sjöberg 
> To: "stockhausen at collogia.de" 
> Cc: "squadra at gmail.com", "users at ovirt.org"
> Subject: Re: [Users] Experience with low cost NFS-Storage as
> 	VM-Storage?
> Message-ID: <5F9E965F5A80BC468BE5F40576769F095AFE3369 at exchange2-1>
> Content-Type: text/plain; charset="utf-8"
> 
> On Thu, 2014-01-09 at 07:10 +0000, Markus Stockhausen wrote:
> > > From: users-bounces at ovirt.org [users-bounces at ovirt.org] on behalf of squadra [squadra at gmail.com]
> > > Sent: Wednesday, 8 January 2014 17:15
> > > To: users at ovirt.org
> > > Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
> > >
> > > Better go for iSCSI or something else... I would avoid NFS for VM hosting.
> > > FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.
> > >
> > > Cheers,
> > > 
> > > Juergen
> > 
> > That is usually a matter of taste and the available environment.
> > The minimal differences in performance usually only show up
> > if you drive the storage to its limits. I guess you could help Sven
> > better if you had some hard facts about why to favour iSCSI.
> > 
> > Best regards.
> > 
> > Markus
> 
> The only technical difference I can think of is the iSCSI-level
> load balancing. With NFS you set up the network with LACP and let that
> load-balance for you (you should probably do that with iSCSI as well,
> but you don't strictly have to). I think it is the chance of pushing
> beyond the capacity of one network interface at a time from a single
> host (higher bandwidth) that makes people try iSCSI instead of plain
> NFS. I have tried that but was never able to achieve that effect, so in
> our situation there's no difference. Comparing them both in benchmarks,
> there was no performance difference at all, at least for our storage
> systems, which are based on FreeBSD.
> 
> /K
> 
> ------------------------------
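To make Karli's point concrete: the iSCSI-level load balancing he refers to
comes from logging in to the same target once per NIC and letting
dm-multipath spread I/O across the sessions, which is what can push a single
host past one interface's bandwidth. A sketch, with the interface names and
portal address made up:

    # bind one iSCSI interface to each physical NIC
    iscsiadm -m iface -I iface0 --op new
    iscsiadm -m iface -I iface0 --op update -n iface.net_ifacename -v eth2
    iscsiadm -m iface -I iface1 --op new
    iscsiadm -m iface -I iface1 --op update -n iface.net_ifacename -v eth3

    # discover and log in through both interfaces -> two sessions per LUN
    iscsiadm -m discovery -t sendtargets -p 10.0.0.10 -I iface0 -I iface1
    iscsiadm -m node -L all

    # with path_grouping_policy multibus in /etc/multipath.conf,
    # multipath -ll shows both paths active and I/O is spread across them

An NFS mount, by contrast, rides a single TCP connection, so LACP hashing
pins it to one link; and in Karli's tests even the multipath route did not
get past that in practice.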
> 
> Message: 5
> Date: Thu, 9 Jan 2014 08:34:44 +0100
> From: squadra 
> To: Markus Stockhausen 
> Cc: users at ovirt.org
> Subject: Re: [Users] Experience with low cost NFS-Storage as
> 	VM-Storage?
> Message-ID:
> 	
> Content-Type: text/plain; charset="utf-8"
> 
> There are already enough articles on the web about NFS problems related
> to locking, latency, etc. ... stacking one protocol onto another to fix a
> problem, and then maybe one more to glue them together.
> 
> Google for the SUSE PDF "Why NFS sucks"; I don't agree with the whole
> paper - NFS has its place, too. But not as a production filer for VMs.
> 
> Cheers,
> 
> Juergen, the NFS lover
> On Jan 9, 2014 8:10 AM, "Markus Stockhausen" 
> wrote:
> 
> > > From: users-bounces at ovirt.org [users-bounces at ovirt.org] on behalf of
> > squadra [squadra at gmail.com]
> > > Sent: Wednesday, 8 January 2014 17:15
> > > To: users at ovirt.org
> > > Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
> > >
> > > Better go for iSCSI or something else... I would avoid NFS for VM
> > hosting.
> > > FreeBSD 10 delivers a kernel iSCSI target now, which works great so far.
> > Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.
> > >
> > > Cheers,
> > >
> > > Juergen
> >
> > That is usually a matter of taste and the available environment.
> > The minimal differences in performance usually only show up
> > if you drive the storage to its limits. I guess you could help Sven
> > better if you had some hard facts about why to favour iSCSI.
> >
> > Best regards.
> >
> > Markus
> 
> ------------------------------
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> End of Users Digest, Vol 28, Issue 61
> *************************************