[Users] oVirt and SAS shared storage??

Jeff Bailey bailey at cs.kent.edu
Sat Nov 16 21:39:35 EST 2013


On 11/16/2013 9:22 AM, Ryan Barry wrote:
>
>     Unfortunately, I didn't get a reply to my question. So... let's
>     try again.
>
>     Does oVirt support SAS shared storage (e.g. an MSA2000sa) as a
>     storage domain?
>     If yes, what kind of storage domain do I have to choose at setup
>     time?
>
> SAS is a bus which implements the SCSI protocol in a point-to-point
> fashion. The array you have is the effective equivalent of attaching
> additional hard drives directly to your computer.
>
> It is not necessarily faster than iSCSI or Fibre Channel; almost any
> nearline storage these days will be SAS, as will almost all the SANs
> in production and most of the tiered storage (because SAS supports
> SATA drives). I'm not even sure NetApp uses FC-AL drives in their
> arrays anymore. I think they're all SAS, but don't quote me on that.
>
> What differentiates a SAN (iSCSI or Fibre Channel) from a NAS is that
> a SAN presents raw devices over a fabric or switched medium rather
> than point-to-point (point-to-point Fibre Channel still happens, but
> it's easier to assume that it doesn't for the sake of argument). A NAS
> presents network file systems (CIFS, GlusterFS, Lustre, NFS, Ceph,
> whatever), though this also gets complicated when you start talking
> about distributed clustered network file systems.
>
> Anyway, what you have is neither of these. It's directly-attached
> storage. It may work, but it's an unsupported configuration, and is
> only shared storage in the sense that it has multiple controllers. If
> I were going to configure it for oVirt, I would:
>

It's shared storage in every sense of the word.  I would simply use an
FC domain and choose the LUNs as usual.
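
For what it's worth, a quick sanity check before creating the domain is
to confirm that every host really does see the same LUN(s) from the MSA,
since those are what you'll pick in the FC domain dialog. A rough Python
sketch, assuming passwordless SSH as root to each host; the host names
are placeholders:

  #!/usr/bin/env python
  # Sanity check: confirm every oVirt host sees the same MSA LUN(s)
  # before creating the FC-type storage domain. Host names below are
  # placeholders; assumes passwordless SSH as root.
  import subprocess

  HOSTS = ["node1.example.com", "node2.example.com"]

  def wwns(host):
      """Return the set of SCSI WWN identifiers visible on one host."""
      out = subprocess.check_output(
          ["ssh", host, "ls", "/dev/disk/by-id/"],
          universal_newlines=True)
      # udev creates one wwn-0x... symlink per SCSI logical unit it
      # can identify, so the names double as LUN identifiers.
      return set(n for n in out.split() if n.startswith("wwn-"))

  visible = dict((host, wwns(host)) for host in HOSTS)
  shared = set.intersection(*visible.values())

  print("LUNs visible on every host (candidates for the FC domain):")
  for wwn in sorted(shared):
      print("  " + wwn)

  for host, ids in visible.items():
      local_only = ids - shared
      if local_only:
          print("%s only: %s" % (host, ", ".join(sorted(local_only))))

The identifiers that show up on every host should line up with the LUN
IDs the Admin Portal offers when you create the new FC data domain.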

> - Attach it to a 3rd server and export iSCSI LUNs from it
> - Attach it to a 3rd server and export NFS from it
> - Attach it to multiple CentOS/Fedora servers, configure clustering
>   (so you get fencing, a DLM, and the other requisites of a clustered
>   filesystem), and use raw cLVM block devices or GFS2/OCFS filesystems
>   as POSIXFS storage for oVirt.
>

These would be terrible choices for both performance and reliability.
It's exactly like fronting an FC LUN with all of that crud when you
could simply access the LUN directly.  If the array port count is a
problem, just toss a SAS switch in between and you have an all-SAS
equivalent of a Fibre Channel SAN.  This is exactly what we do in
production vSphere environments, and there is no technical reason it
shouldn't work fine with oVirt.
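
If you want to put rough numbers behind that, compare a direct read
from the multipath device against the same LUN fronted by an NFS
re-export on a middleman box. A crude sketch; the device path and
mount point are placeholders, and iflag=direct keeps the page cache
out of the measurement:

  #!/usr/bin/env python
  # Crude throughput comparison: reading the SAS LUN directly vs.
  # through an NFS re-export on a middleman server. Paths are
  # placeholders for your environment.
  import subprocess
  import time

  TARGETS = [
      ("direct (multipath device)", "/dev/mapper/mpathb"),
      ("via NFS re-export",         "/mnt/nfs_reexport/testfile"),
  ]

  def timed_read(path, mib=1024):
      """Read `mib` MiB with dd (O_DIRECT) and return elapsed seconds."""
      start = time.time()
      subprocess.check_call(["dd", "if=" + path, "of=/dev/null",
                             "bs=1M", "count=%d" % mib, "iflag=direct"])
      return time.time() - start

  for label, path in TARGETS:
      secs = timed_read(path)
      print("%-28s %6.1f s  (~%.0f MB/s)" % (label, secs, 1024.0 / secs))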

>     Thank you for your help
>
>     Hans-Joachim
>
>
> Hans
>  
> -- 
> while (!asleep) { sheep++; }
>
>


