<div dir="ltr">another point is, that a correct configured multipathing is way more solid when it comes to a single path outage. at the software side, i have seen countless nfs servers which where unresponsive because of lockd issues for example, and only a reboot fixed this since its kernel based.<div>
<br></div><div>another contra for me is, that its rather complicated and a 50/50 chance that a nfs failover in a nfs ha setup works without any clients dying.</div><div><br></div><div>dont get me wrong, nfs is great for small setups. its easy to setup, easy to scale, i use it very widespread for content sharing and homedirs. but i am healed regarding vm images on nfs.</div>
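Just so we are talking about the same thing, this is roughly what I mean by correctly configured multipathing on the Linux initiator side. Only a rough sketch: the portal IPs, WWID and alias are placeholders, adjust to your environment.

# log the host in to the target over both storage portals
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m discovery -t sendtargets -p 10.0.1.1
iscsiadm -m node -l

# /etc/multipath.conf (minimal)
defaults {
    user_friendly_names yes
    polling_interval    5
}
multipaths {
    multipath {
        wwid                 3600144f0deadbeef00000000cafe0001   # placeholder LUN WWID
        alias                ovirt_data
        path_grouping_policy multibus   # spread I/O over all active paths
        no_path_retry        12         # queue I/O briefly instead of failing right away
    }
}

Afterwards "multipath -ll" should show the LUN with one path per portal, and losing a single path only costs bandwidth instead of hanging the VMs.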
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Jan 9, 2014 at 8:48 AM, Karli Sjöberg <span dir="ltr"><<a href="mailto:Karli.Sjoberg@slu.se" target="_blank">Karli.Sjoberg@slu.se</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:<br>
> Right, try multipathing with nfs :)<br>
<br>
</div>Yes, that´s what I meant, maybe could have been more clear about that,<br>
sorry. Multipathing (and the load-balancing it brings) is what really<br>
separates iSCSI from NFS.<br>
<br>
What I´d be interested in knowing is at what breaking-point, not having<br>
multipathing becomes an issue. I mean, we might not have such a big<br>
VM-park, about 300-400 VMs. But so far running without multipathing<br>
using good ole' NFS and no performance issues this far. Would be good to<br>
know beforehand if we´re headed for a wall of some sorts, and about<br>
"when" we´ll hit it...<br>
<span class="HOEnZb"><font color="#888888"><br>
/K<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
>
> On Jan 9, 2014 8:30 AM, "Karli Sjöberg" <Karli.Sjoberg@slu.se> wrote:
> On Thu, 2014-01-09 at 07:10 +0000, Markus Stockhausen wrote:
> > > From: users-bounces@ovirt.org [users-bounces@ovirt.org] on behalf of squadra [squadra@gmail.com]
> > > Sent: Wednesday, January 8, 2014 17:15
> > > To: users@ovirt.org
> > > Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
> > >
> > > Better go for iSCSI or something else... I would avoid NFS for VM hosting.
> > > FreeBSD 10 delivers a kernel iSCSI target now, which works great so far.
> > > Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.
> > >
> > > Cheers,
> > >
> > > Juergen
> >
> > That is usually a matter of taste and the available environment.
> > The minimal differences in performance usually only show up
> > if you drive the storage to its limits. I guess you could help Sven
> > better if you had some hard facts on why to favour iSCSI.
> >
> > Best regards,
> >
> > Markus
>
> The only technical difference I can think of is the iSCSI-level
> load-balancing. With NFS you set up the network with LACP and let that
> load-balance for you (you should probably do that with iSCSI as well,
> but you don't strictly have to). I think it's the prospect of going
> beyond the capacity of one network interface at a time, from one host
> (higher bandwidth), that makes people try iSCSI instead of plain NFS.
> I have tried that but was never able to achieve that effect, so in our
> situation there's no difference. Comparing them both in benchmarks,
> there was no performance difference at all, at least for our storage
> systems, which are based on FreeBSD.
>
> /K
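One remark on the LACP part quoted above: an NFS mount is a single TCP connection, and an LACP bond hashes each connection onto one slave link, so a single client never gets past one NIC's worth of bandwidth no matter how many links are in the bond. That is exactly the kind of setup being described, roughly like this (RHEL-style config, interface names and addresses are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.10.11
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

# /etc/sysconfig/network-scripts/ifcfg-em1 (and the same for em2)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

iSCSI multipathing sidesteps that, because each path is its own TCP session, so dm-multipath can really drive both links from a single host.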

-- 
Sent from the Delta quadrant using Borg technology!