[Users] SSD Caching

Darrell Budic darrell.budic at zenfire.com
Thu Jan 9 03:44:35 UTC 2014


Stick your bricks on ZFS and let it handle the SSD caching for you. It works well, although I haven't done much benchmarking of it. My test setup is described in the thread "[Users] Creation of preallocated disk with Gluster replication". I've seen a few blog posts here and there about running Gluster on ZFS for this reason too.
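
A rough sketch of what that looks like, in case it helps. The pool layout, device names, hostnames, and volume name below are just placeholders, and the one tuning knob shown is only what's commonly suggested for Gluster-on-ZFS, not something I've benchmarked:

    # SSD partitions act as the ZIL (log) and L2ARC (cache) in front of the spinning disks
    # (sdc is assumed to be the SSD here)
    zpool create tank mirror /dev/sda /dev/sdb \
        log /dev/sdc1 \
        cache /dev/sdc2

    # store xattrs efficiently (commonly recommended for Gluster bricks on ZFS)
    zfs set xattr=sa tank
    zfs create tank/brick1

    # use the dataset as a brick in a replicated volume as usual
    gluster volume create gv0 replica 2 \
        server1:/tank/brick1/brick server2:/tank/brick1/brick
    gluster volume start gv0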

 -Darrell

On Jan 7, 2014, at 9:56 PM, Russell Purinton <russ at sonicbx.com> wrote:

> [20:42] <sonicrose> is anybody out there using a good RAM+SSD caching system ahead of gluster storage?
> [20:42] <sonicrose> sorry if that came through twice
> [20:44] <sonicrose> I'm thinking about making the SSD one giant swap file, then creating a very large ramdisk in virtual memory and using that as a block-level cache for parts and pieces of virtual machine disk images
> [20:44] <sonicrose> then I think the memory manager would inherently play the role of storage tiering, i.e. keeping the hottest data in memory and the coldest data on swap
> [20:45] <sonicrose> everything I have seen today has been set up as "consumer" ===> network ===> SSD cache ===> real disks
> [20:45] <sonicrose> but I'd like to actually do "consumer" ===> RAM+SSD cache ===> network ===> real disks
> [20:46] <sonicrose> I realize doing a virtual-memory disk means the cache will be cleared on every reboot, and I'm OK with that
> [20:47] <sonicrose> I know this can be done with NFS and cachefilesd (fscache), but how could something like it be integrated into the native gluster client?
> [20:47] <sonicrose> I'd prefer not to have to access gluster via NFS
> [20:49] <sonicrose> any feedback from this room is greatly appreciated; it would help get someone started building managed HA cloud hosting
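
Re: the giant-swap-plus-ramdisk idea above, a very rough sketch of the mechanics (untested here; the device names and sizes are placeholders):

    # the SSD becomes one big swap area (sdd assumed to be the SSD)
    mkswap /dev/sdd
    swapon -p 10 /dev/sdd

    # the "ramdisk in virtual memory": a tmpfs deliberately sized larger than RAM,
    # so the kernel can push its cold pages out to the SSD swap
    mkdir -p /mnt/vmcache
    mount -t tmpfs -o size=200G tmpfs /mnt/vmcache

    # a file on that tmpfs exposed as a block device, for layering a block-level
    # cache (dm-cache, bcache, etc.) on top of it
    truncate -s 150G /mnt/vmcache/cache.img
    losetup /dev/loop0 /mnt/vmcache/cache.img

The open question is still the one raised in the log: the native FUSE client exposes a filesystem rather than a block device, so there's no obvious place to hang a block-level cache like this in front of it.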
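
For reference, the NFS + cachefilesd (FS-Cache) route mentioned in the log looks roughly like this; the server name and mount point are placeholders, and it only helps if you can live with mounting over NFS:

    # cachefilesd provides the on-disk FS-Cache backing store
    # (point "dir" in /etc/cachefilesd.conf at a directory on the SSD)
    service cachefilesd start

    # mount the volume over Gluster's NFSv3 server with FS-Cache enabled (the fsc option)
    mount -t nfs -o vers=3,fsc server1:/gv0 /mnt/gv0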
