Another point is that correctly configured multipathing is far more solid when it comes to a single path outage. On the software side, I have seen countless NFS servers that were unresponsive because of lockd issues, for example, and only a reboot fixed it since it's kernel based.
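
To give an idea of what I mean by correctly configured on the initiator side (just a rough sketch, assuming dm-multipath on the Linux hosts; the values are placeholders, not a production config):

    # /etc/multipath.conf (minimal example)
    defaults {
        polling_interval  5
        path_selector     "round-robin 0"
        failback          immediate
        no_path_retry     queue
    }

    # after logging in to both portals, check that both paths are active
    multipath -ll

With something like that in place, a single path outage just drops one path and I/O keeps flowing over the remaining one, no reboot needed.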

Another point against it, for me, is that it's rather complicated, and it's a 50/50 chance whether an NFS failover in an NFS HA setup works without any clients dying.

Don't get me wrong, NFS is great for small setups. It's easy to set up, easy to scale, and I use it widely for content sharing and home directories. But I'm cured of putting VM images on NFS.


On Thu, Jan 9, 2014 at 8:48 AM, Karli Sjöberg <Karli.Sjoberg@slu.se> wrote:
On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
> Right, try multipathing with nfs :)

Yes, that's what I meant; maybe I could have been clearer about that,
sorry. Multipathing (and the load-balancing it brings) is what really
separates iSCSI from NFS.
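
For reference, this is roughly what it looks like on the initiator side (just a sketch; the portal addresses are made-up examples):

    # discover the same target through two portals on separate subnets
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1:3260
    iscsiadm -m discovery -t sendtargets -p 192.168.20.1:3260
    # log in to all discovered portals, giving two sessions/paths
    iscsiadm -m node -L all
    # dm-multipath then aggregates the paths into one block device
    multipath -ll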

What I'd be interested in knowing is at what breaking point not having
multipathing becomes an issue. I mean, we might not have such a big
VM park, about 300-400 VMs. But so far we have been running without
multipathing, using good ole' NFS, with no performance issues. It would
be good to know beforehand if we're headed for a wall of some sort, and
roughly "when" we'll hit it...

/K

>
> On Jan 9, 2014 8:30 AM, "Karli Sjöberg" <Karli.Sjoberg@slu.se> wrote:
>         On Thu, 2014-01-09 at 07:10 +0000, Markus Stockhausen wrote:
>         > > From: users-bounces@ovirt.org [users-bounces@ovirt.org]" on
>         behalf of "squadra [squadra@gmail.com]
>         > > Sent: Wednesday, 8 January 2014 17:15
>         > > To: users@ovirt.org
>         > > Subject: Re: [Users] Experience with low cost NFS-Storage
>         as VM-Storage?
>         > >
>         > > Better go for iSCSI or something else... I would avoid
>         NFS for VM hosting.
>         > > FreeBSD 10 delivers a kernel iSCSI target now, which works
>         great so far. Or go with OmniOS to get COMSTAR iSCSI, which is
>         a rock-solid solution.
>         > >
>         > > Cheers,
>         > >
>         > > Juergen
>         >
>         > That is usually a matter of taste and the available
>         environment.
>         > The minimal differences in performance usually only show up
>         > if you drive the storage to its limits. I guess you could
>         help Sven
>         better if you had some hard facts on why to favour iSCSI.
>         >
>         > Best regards.
>         >
>         > Markus
>
>         The only technical difference I can think of is the iSCSI-level
>         load-balancing. With NFS you set up the network with LACP and
>         let that load-balance for you (and you should probably do that
>         with iSCSI as well, but you don't strictly have to). I think it
>         is the chance of going beyond the capacity of one network
>         interface at a time, from one host (higher bandwidth), that
>         makes people try iSCSI instead of plain NFS. I have tried that
>         but was never able to achieve that effect, so in our situation
>         there's no difference. Comparing the two in benchmarks, there
>         was no performance difference at all, at least for our storage
>         systems, which are based on FreeBSD.
>
>         /K




--
Sent from the Delta quadrant using Borg technology!