<div class="xam_msg_class">
<div style="font: normal 13px Arial; color:rgb(31, 28, 27);"><br>you can use flashcache under centos6, it's stable and give you a boost for read/write, but I never user with gluster:<br><br>https://github.com/facebook/flashcache/<br><br>under fedora you have more choice: flashcache, bcache, dm-cache<br><br>regards<br>a<br><br>Date: Wed, 8 Jan 2014 21:44:35 -0600<br>From: Darrell Budic <darrell.budic@zenfire.com><br>To: Russell Purinton <russ@sonicbx.com><br>Cc: "users@ovirt.org" <users@ovirt.org><br>Subject: Re: [Users] SSD Caching<br>Message-ID: <A45059D4-B00D-4573-81E7-F00B2B9FA4AA@zenfire.com><br>Content-Type: text/plain; charset="windows-1252"<br><br>Stick
Date: Wed, 8 Jan 2014 21:44:35 -0600
From: Darrell Budic <darrell.budic@zenfire.com>
To: Russell Purinton <russ@sonicbx.com>
Cc: "users@ovirt.org" <users@ovirt.org>
Subject: Re: [Users] SSD Caching

Stick your bricks on ZFS and let it do it for you. Works well, although I
haven't done much benchmarking of it. My test setup is described in the
thread under "[Users] Creation of preallocated disk with Gluster
replication". I've seen some blog posts here and there about Gluster on ZFS
for this reason too.

 -Darrell
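Letting ZFS do the caching usually means giving the pool an SSD read cache
(L2ARC) and, optionally, an SSD log device (SLOG) for synchronous writes. A
minimal sketch, assuming a pool named "tank" and SSD partitions that exist
only for this example:

  # SSD partition as L2ARC read cache
  zpool add tank cache /dev/sdb1
  # mirrored SSD partitions as SLOG for synchronous writes
  zpool add tank log mirror /dev/sdb2 /dev/sdc2
  # a filesystem on the pool then serves as the Gluster brick
  zfs create tank/brick1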
On Jan 7, 2014, at 9:56 PM, Russell Purinton <russ@sonicbx.com> wrote:

> [20:42] <sonicrose> is anybody out there using a good RAM+SSD caching system ahead of gluster storage?
> [20:42] <sonicrose> sorry if that came through twice
> [20:44] <sonicrose> im thinking about making the SSD one giant swap file then creating a very large ramdisk in virtual memory and using that as a block level cache for parts and pieces of virtual machine disk images
> [20:44] <sonicrose> then i think the memory managers would inherently play the role of storage tiering ie: keeping the hottest data in memory and the coldest data on swap
> [20:45] <sonicrose> everything i have seen today has been setup as "consumer" ===> network ====> SSD cache ====> real disks
> [20:45] <sonicrose> but i'd like to actually do "consumer" ===> RAM+SSD cache ===> network ===> real disks
> [20:46] <sonicrose> i realize doing a virtual memory disk means the cache will be cleared on every reboot, and I'm ok with that
> [20:47] <sonicrose> i know this can be done with NFS and cachefilesd(fscache), but how could something be integrated into the native gluster clients?
> [20:47] <sonicrose> i'd prefer not to have to access gluster via NFS
> [20:49] <sonicrose> any feedback from this room is greatly appreciated, getting someone started to build managed HA cloud hosting
<div><span style="font-family:Arial; font-size:11px; color:#5F5F5F;">Da</span><span style="font-family:Arial; font-size:12px; color:#5F5F5F; padding-left:5px;">: users-bounces@ovirt.org</span></div>
<div><span style="font-family:Arial; font-size:11px; color:#5F5F5F;">A</span><span style="font-family:Arial; font-size:12px; color:#5F5F5F; padding-left:5px;">: users@ovirt.org</span></div>
<div><span style="font-family:Arial; font-size:11px; color:#5F5F5F;">Cc</span><span style="font-family:Arial; font-size:12px; color:#5F5F5F; padding-left:5px;">: </span></div>
<div><span style="font-family:Arial; font-size:11px; color:#5F5F5F;">Data</span><span style="font-family:Arial; font-size:12px; color:#5F5F5F; padding-left:5px;">: Thu, 09 Jan 2014 02:34:48 -0500</span></div>
<div><span style="font-family:Arial; font-size:11px; color:#5F5F5F;">Oggetto</span><span style="font-family:Arial; font-size:12px; color:#5F5F5F; padding-left:5px;">: Users Digest, Vol 28, Issue 61</span></div>
<br>
> Today's Topics:
>
>    1. Re: SSD Caching (Darrell Budic)
>    2. Re: Ovirt DR setup (Hans Emmanuel)
>    3. Re: Experience with low cost NFS-Storage as VM-Storage? (Markus Stockhausen)
>    4. Re: Experience with low cost NFS-Storage as VM-Storage? (Karli Sjöberg)
>    5. Re: Experience with low cost NFS-Storage as VM-Storage? (squadra)
>
> ------------------------------
>
> Message: 2
> Date: Thu, 9 Jan 2014 10:34:26 +0530
> From: Hans Emmanuel <hansemmanuel@gmail.com>
> To: users@ovirt.org
> Subject: Re: [Users] Ovirt DR setup
>
> Could anyone please give me some suggestions?
>
> On Wed, Jan 8, 2014 at 11:39 AM, Hans Emmanuel <hansemmanuel@gmail.com> wrote:
>
> > Hi all,
> >
> > I would like to know about the possibility of setting up a Disaster
> > Recovery (DR) site for an oVirt cluster, i.e. if site 1 goes down I need
> > to trigger site 2 to come into action with minimal downtime.
> >
> > I am open to using NFS shared storage or local storage for the data
> > storage domain. I know we need to replicate the storage domain and the
> > oVirt configuration and DB across the sites, but I couldn't find any
> > documentation for this. Isn't it possible with oVirt?
> >
> > *Hans Emmanuel*
> >
> > *NOthing to FEAR but something to FEEL......*
>
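oVirt 3.x has no turnkey DR mode, but the two moving parts named above can
each be replicated. A sketch under assumptions: a Gluster data domain named
"vmstore", a standby site "drsite.example.com", a GlusterFS recent enough
for geo-replication, and an engine new enough to ship engine-backup:

  # asynchronously replicate the storage domain volume to the DR site
  gluster volume geo-replication vmstore drsite.example.com::vmstore create push-pem
  gluster volume geo-replication vmstore drsite.example.com::vmstore start

  # snapshot the engine configuration and database, then copy it off-site
  engine-backup --mode=backup --file=engine-backup-$(date +%F).tar.gz \
      --log=engine-backup.log

Failover would then mean restoring the engine backup on a site 2 engine and
attaching the replicated storage domain there.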
> ------------------------------
>
> Message: 3
> Date: Thu, 9 Jan 2014 07:10:07 +0000
> From: Markus Stockhausen <stockhausen@collogia.de>
> To: squadra <squadra@gmail.com>, "users@ovirt.org" <users@ovirt.org>
> Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
>
> > From: users-bounces@ovirt.org [users-bounces@ovirt.org] on behalf of squadra [squadra@gmail.com]
> > Sent: Wednesday, 8 January 2014 17:15
> > To: users@ovirt.org
> > Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
> >
> > better go for iscsi or something else... i would avoid nfs for vm hosting.
> > FreeBSD 10 delivers a kernel iSCSI target now, which works great so far.
> > Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.
> >
> > Cheers,
> >
> > Juergen
>
> That is usually a matter of taste and the available environment.
> The minimal differences in performance usually only show up
> if you drive the storage to its limits. I guess you could help Sven
> better if you had some hard facts on why to favour iSCSI.
>
> Best regards,
>
> Markus
>
> ------------------------------
>
> Message: 4
> Date: Thu, 9 Jan 2014 07:30:56 +0000
> From: Karli Sjöberg <karli.sjoberg@slu.se>
> To: "stockhausen@collogia.de" <stockhausen@collogia.de>
> Cc: "squadra@gmail.com" <squadra@gmail.com>, "users@ovirt.org" <users@ovirt.org>
> Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
>
> The only technical difference I can think of is the iSCSI-level
> load-balancing. With NFS you set up the network with LACP and let that
> load-balance for you (and you should probably do that with iSCSI as well,
> but you don't strictly have to). I think it is the chance of going beyond
> the capacity of one network interface at the same time, from one host
> (higher bandwidth), that makes people try iSCSI instead of plain NFS. I
> have tried that but was never able to achieve that effect, so in our
> situation there's no difference.
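One common way to do the LACP side on an EL6 oVirt host is plain kernel
bonding; interface names and addresses below are examples only:

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
  BOOTPROTO=none
  IPADDR=10.0.0.10
  NETMASK=255.255.255.0
  ONBOOT=yes

Note that 802.3ad hashes per flow, so a single NFS TCP connection still
tops out at one link's bandwidth, which matches the effect described above.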
In comparing</div><div>> them both in benchmarks, there was no performance difference at all, at</div><div>> least for our storage systems that are based on FreeBSD.</div><div>> </div><div>> /K</div><div>> </div><div>> ------------------------------</div><div>> </div><div>> Message: 5</div><div>> Date: Thu, 9 Jan 2014 08:34:44 +0100</div><div>> From: squadra <squadra@gmail.com></squadra@gmail.com></div><div>> To: Markus Stockhausen <stockhausen@collogia.de></stockhausen@collogia.de></div><div>> Cc: users@ovirt.org</div><div>> Subject: Re: [Users] Experience with low cost NFS-Storage as</div><div>>         VM-Storage?</div><div>> Message-ID:</div><div>>         <cabx==a33=tq=xzsbyssyfgxsycfheab7sxhgu8bx7fmhksj5aa@mail.gmail.com></cabx==a33=tq=xzsbyssyfgxsycfheab7sxhgu8bx7fmhksj5aa@mail.gmail.com></div><div>> Content-Type: text/plain; charset="utf-8"</div><div>> </div><div>> There's are already enaugh articles on the web about NFS problems related</div><div>> locking, latency, etc.... Eh stacking a protocol onto another to fix</div><div>> problem and then maybe one more to glue them together.</div><div>> </div><div>> Google for the suse PDF " why NFS sucks", I don't agree with the whole</div><div>> sheet.. NFS got his place,too. But not as production filer for VM.</div><div>> </div><div>> Cheers,</div><div>> </div><div>> Juergen, the NFS lover</div><div>> On Jan 9, 2014 8:10 AM, "Markus Stockhausen" <stockhausen@collogia.de></stockhausen@collogia.de></div><div>> wrote:</div><div>> </div><div>> > > Von: users-bounces@ovirt.org [users-bounces@ovirt.org]" im Auftrag von</div><div>> > "squadra [squadra@gmail.com]</div><div>> > > Gesendet: Mittwoch, 8. Januar 2014 17:15</div><div>> > > An: users@ovirt.org</div><div>> > > Betreff: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?</div><div>> > ></div><div>> > > better go for iscsi or something else... i whould avoid nfs for vm</div><div>> > hosting</div><div>> > > Freebsd10 delivers kernel iscsitarget now, which works great so far. or</div><div>> > go with omnios to get comstar iscsi, which is a rocksolid solution</div><div>> > ></div><div>> > > Cheers,</div><div>> > ></div><div>> > > Juergen</div><div>> ></div><div>> > That is usually a matter of taste and the available environment.</div><div>> > The minimal differences in performance usually only show up</div><div>> > if you drive the storage to its limits. I guess you could help Sven</div><div>> > better if you had some hard facts why to favour ISCSI.</div><div>> ></div><div>> > Best regards.</div><div>> ></div><div>> > Markus</div><div>> -------------- next part --------------</div><div>> An HTML attachment was scrubbed...</div><div>> URL: <http: lists.ovirt.org="" pipermail="" users="" attachments="" 20140109="" 3b206609="" attachment.html=""></http:></div><div>> </div><div>> ------------------------------</div><div>> </div><div>> _______________________________________________</div><div>> Users mailing list</div><div>> Users@ovirt.org</div><div>> http://lists.ovirt.org/mailman/listinfo/users</div><div>> </div><div>> </div><div>> End of Users Digest, Vol 28, Issue 61</div><div>> *************************************</div></div>