[Users] Users Digest, Vol 26, Issue 72

Ryan Barry phresus at gmail.com
Sun Nov 17 17:58:05 UTC 2013


Without knowing how the disks are split among the controllers, I don't want
to make any assumptions about how shared it actually is, since it may be
half and half with no multipathing.

While a multi-controller DAS array *may* be shared storage, it may not be.
Moreover, I have no idea whether VDSM looks at by-path, by-bus, dm-*, or
otherwise, and there is no guarantee that a SAS disk will present like an
FC LUN (by-path/pci...-fc-$wwn...), whereas OCFS2 as a POSIXFS domain is
assured to work, albeit with a more complex setup and another intermediary
layer.
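For illustration, the transport a disk arrived over can often be read out of the token udev embeds in its /dev/disk/by-path name. The snippet below is only a sketch of that idea; the sample device names are invented, and real names depend on udev rules and hardware:

```shell
# Sketch: classify a /dev/disk/by-path name by its transport token.
# The sample names are hypothetical, not taken from any real system.
classify() {
  case "$1" in
    *-fc-*)  echo fc ;;    # Fibre Channel LUN, e.g. pci-...-fc-0x<wwn>-lun-0
    *-sas-*) echo sas ;;   # SAS disk, e.g. pci-...-sas-0x<addr>-lun-0
    *)       echo other ;; # anything else (ATA, USB, ...)
  esac
}

classify "pci-0000:04:00.0-fc-0x5006016846e00a1a-lun-0"    # fc
classify "pci-0000:03:00.0-sas-0x5000c50012345678-lun-0"   # sas
```

On a real host you would run `ls -l /dev/disk/by-path/` and `multipath -ll` to see which form the array's LUNs actually take and whether both controllers' paths are grouped.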
On Nov 17, 2013 10:00 AM, <users-request at ovirt.org> wrote:

> Send Users mailing list submissions to
>         users at ovirt.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
>         users-request at ovirt.org
>
> You can reach the person managing the list at
>         users-owner at ovirt.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
>    1. Re: oVirt and SAS shared storage?? (Jeff Bailey)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 16 Nov 2013 21:39:35 -0500
> From: Jeff Bailey <bailey at cs.kent.edu>
> To: users at ovirt.org
> Subject: Re: [Users] oVirt and SAS shared storage??
> Message-ID: <52882C67.9000707 at cs.kent.edu>
> Content-Type: text/plain; charset=ISO-8859-1
>
>
> On 11/16/2013 9:22 AM, Ryan Barry wrote:
> >
> >     Unfortunately, I didn't get a reply to my question, so let's try
> >     again.
> >
> >     Does oVirt support SAS shared storage (e.g. the MSA2000sa) as a
> >     storage domain?
> >     If yes, what kind of storage domain do I have to choose at setup time?
> >
> > SAS is a bus which implements the SCSI protocol in a point-to-point
> > fashion. The array you have is the effective equivalent of attaching
> > additional hard drives directly to your computer.
> >
> > It is not necessarily faster than iSCSI or Fibre Channel; almost any
> > nearline storage these days will be SAS, as will almost all the SANs
> > in production and most tiered storage (because SAS supports SATA
> > drives). I'm not even sure whether NetApp still uses FC-AL drives
> > in their arrays. I think they're all SAS, but don't quote me
> > on that.
> >
> > What differentiates a SAN (iSCSI or Fibre Channel) from a NAS is that
> > a SAN presents raw devices over a fabric or switched medium rather
> > than point-to-point (point-to-point Fibre Channel still happens, but
> > it's easier to assume that it doesn't for the sake of argument). A NAS
> > presents network file systems (CIFS, GlusterFS, Lustre, NFS, Ceph,
> > whatever), though this also gets complicated when you start talking
> > about distributed clustered network file systems.
> >
> > Anyway, what you have is neither of these. It's directly-attached
> > storage. It may work, but it's an unsupported configuration, and is
> > only shared storage in the sense that it has multiple controllers. If
> > I were going to configure it for oVirt, I would:
> >
>
> It's shared storage in every sense of the word.  I would simply use an
> FC domain and choose the LUNs as usual.
>
> > Attach it to a 3rd server and export iSCSI LUNs from it
> > Attach it to a 3rd server and export NFS from it
> > Attach it to multiple CentOS/Fedora servers, configure clustering (so
> > you get fencing, a DLM, and the other requisites of a clustered
> > filesystem), and use raw cLVM block devices or GFS2/OCFS filesystems
> > as POSIXFS storage for oVirt.
> >
>
> These would be terrible choices for both performance and reliability.
> It's exactly like fronting an FC LUN with all of that crud when you
> could simply access the LUN directly.  If the array port count is a
> problem, just toss a SAS switch in between and you have an all-SAS
> equivalent of a Fibre Channel SAN.  This is exactly what we do in
> production vSphere environments, and there are no technical reasons it
> shouldn't work fine with oVirt.
>
> >     Thank you for your help
> >
> >     Hans-Joachim
> >
> >
> > Hans
> >
> > --
> > while (!asleep) { sheep++; }
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> End of Users Digest, Vol 26, Issue 72
> *************************************
>