[ovirt-users] Good practices

FERNANDO FREDIANI fernando.frediani at upx.com
Tue Aug 8 13:31:55 UTC 2017


Exactly Moacir, that is my point.


A proper distributed filesystem should not rely on any type of RAID, as
it can provide its own redundancy without depending on any underlying
layer (look at Ceph). Using RAID may help with management and, in certain
scenarios, with replacing a faulty disk, but at a cost, and not a cheap
one. That's why, in terms of resource savings, if replica 3 brings the
issues mentioned, it is much better to have a small arbiter somewhere
instead of wasting a significant amount of disk space.
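
For reference, in a replica 3 arbiter 1 volume the third brick holds only
metadata, so it can live on a much smaller disk. A minimal sketch (host
names and brick paths are just examples):

    gluster volume create vmstore replica 3 arbiter 1 \
        host1:/bricks/data/brick \
        host2:/bricks/data/brick \
        host3:/bricks/arbiter/brick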


Fernando


On 08/08/2017 06:09, Moacir Ferreira wrote:
>
> Fernando,
>
>
> Let's see what people say... But this is what I understood Red Hat
> says is the best performance model. This is the main reason to open
> this discussion: as far as I can see, some of you in the community
> do not agree.
>
>
> But when I think about a "distributed file system" that can make any
> number of copies you want, it does not make sense to use a RAIDed
> brick; what makes sense is to use JBOD.
>
>
> Moacir
>
>
> ------------------------------------------------------------------------
> From: fernando.frediani at upx.com.br on behalf of FERNANDO FREDIANI
> <fernando.frediani at upx.com>
> Sent: Tuesday, August 8, 2017 3:08 AM
> To: Moacir Ferreira
> Cc: Colin Coe; users at ovirt.org
> Subject: Re: [ovirt-users] Good practices
> Moacir, I understand that with this type of configuration you will
> be severely impacted on storage performance, especially for writes.
> Even if you have a hardware RAID controller with writeback cache you
> will have a significant performance penalty and may not fully use all
> the resources you mentioned you have.
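>
> As a rough rule of thumb (the usual IOPS arithmetic, not a measurement
> on your hardware): random write IOPS ~= (number of disks * IOPS per
> disk) / write penalty, where the penalty is about 4 for RAID 5 and 6
> for RAID 6. With 7 x 10K disks at ~150 IOPS each, RAID 5 leaves you
> roughly 7 * 150 / 4 ~= 260 random write IOPS per node before the
> controller cache helps.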
>
> Fernando
>
> 2017-08-07 10:03 GMT-03:00 Moacir Ferreira <moacirferreira at hotmail.com 
> <mailto:moacirferreira at hotmail.com>>:
>
>     Hi Colin,
>
>
>     Take a look at Devin's response. Also, read the doc he shared,
>     which gives some hints on how to deploy Gluster.
>
>
>     It looks like, if you want high performance, you should have
>     the bricks created as RAID (5 or 6) by the server's disk
>     controller and then assemble a JBOD GlusterFS out of them. The
>     attached document is Gluster-specific, not for oVirt. But at this
>     point I think that having an SSD will not be a plus: behind the
>     RAID controller, Gluster will not be aware of the SSD. Regarding
>     the OS, my idea is to install it on a RAID 1 made of 2 low-cost HDDs.
>
>
>     So far, based on the information received, I should create a single
>     RAID 5 or 6 on each server and then use this disk as a brick to
>     create my Gluster cluster, made of 2 replicas + 1 arbiter. What is
>     new for me is the detail that the arbiter does not need a lot of
>     space, as it only keeps metadata.
>
>
>     Thanks for your response!
>
>     Moacir
>
>     ------------------------------------------------------------------------
>     From: Colin Coe <colin.coe at gmail.com>
>     Sent: Monday, August 7, 2017 12:41 PM
>     To: Moacir Ferreira
>     Cc: users at ovirt.org
>     Subject: Re: [ovirt-users] Good practices
>     Hi
>
>     I just thought that you'd do hardware RAID if you had the
>     controller, or JBOD if you didn't.  In hindsight, a server with
>     40Gbps NICs is pretty likely to have a hardware RAID controller.
>     I've never done JBOD with hardware RAID.  I think having a single
>     gluster brick on hardware JBOD would be riskier than multiple
>     bricks, each on a single disk, but that's not based on anything
>     other than my prejudices.
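>
>     To illustrate what I mean by multiple bricks, one brick per disk
>     (host names and paths invented): gluster groups each consecutive
>     set of 3 bricks into a replica set, so losing one disk affects
>     only one replica set instead of one huge brick.
>
>     gluster volume create vmstore replica 3 \
>         hostA:/bricks/disk1/brick hostB:/bricks/disk1/brick hostC:/bricks/disk1/brick \
>         hostA:/bricks/disk2/brick hostB:/bricks/disk2/brick hostC:/bricks/disk2/brick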
>
>     I thought gluster tiering was for the most frequently accessed
>     files, in which case all the VM disks would end up in the hot
>     tier.  However, I have been wrong before...
>
>     I just wanted to know where the OS was going, as I didn't see it
>     mentioned in the OP.  Normally I'd have the OS on a RAID 1, but in
>     your case that's a lot of wasted disk.
>
>     Honestly, I think Yaniv's answer was far better than my own and
>     made the important point about having an arbiter.
>
>     Thanks
>
>     On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira
>     <moacirferreira at hotmail.com> wrote:
>
>         Hi Colin,
>
>
>         I am in Portugal, so sorry for this late response. It is quite
>         confusing for me, so please consider:
>
>         1 - What if the RAID is done by the server's disk
>         controller, not by software?
>
>         2 - For JBOD I am just using gdeploy to deploy it (see the
>         sketch after this list). However, I am not using the oVirt
>         node GUI to do this.
>
>
>         3 - As the VM .qcow2 files are quite big, tiering would only
>         help if done by an intelligent system that uses the SSD for
>         chunks of data, not for the entire .qcow2 file. But I guess
>         this is a problem everybody else has. So, do you know how
>         tiering works in Gluster?
>
>
>         4 - I am putting the OS on the first disk. Would you do it
>         differently?
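>
>         The gdeploy configuration I mean in item 2 is roughly of this
>         shape; section names follow the gdeploy docs, but the hosts,
>         devices and paths below are just placeholders:
>
>         [hosts]
>         host1
>         host2
>         host3
>
>         [backend-setup]
>         devices=sdb,sdc
>         vgs=vg1,vg2
>         pools=pool1,pool2
>         lvs=lv1,lv2
>         mountpoints=/gluster/brick1,/gluster/brick2
>         brick_dirs=/gluster/brick1/b1,/gluster/brick2/b2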
>
>
>         Moacir
>
>         ------------------------------------------------------------------------
>         From: Colin Coe <colin.coe at gmail.com>
>         Sent: Monday, August 7, 2017 4:48 AM
>         To: Moacir Ferreira
>         Cc: users at ovirt.org
>         Subject: Re: [ovirt-users] Good practices
>         1) RAID5 may be a performance hit.
>
>         2) I'd be inclined to do this as JBOD, creating a distributed
>         disperse volume across the servers.  Something like:
>
>         # Brick count must be a multiple of disperse-data + redundancy
>         # (5 + 2 = 7); the loops are interleaved so each disperse set
>         # spans all three servers. The leading "echo" only prints the
>         # command for review; drop it to actually run.
>         echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
>             $(for BRICK in $(seq 1 7); do
>                 for SERVER in a b c; do
>                   echo -e "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"
>                 done
>               done)
>
>         3) I think the above.
>
>         4) Gluster does support tiering, but IIRC you'd need the same
>         number of SSDs as spindle drives. There may be another way to
>         use the SSD as a fast cache.
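>
>         If I remember the Gluster 3.x syntax right, attaching a hot
>         tier looks something like this (volume and brick names
>         invented):
>
>         gluster volume tier vmstore attach replica 3 \
>             servera:/ssd/brick serverb:/ssd/brick serverc:/ssd/brick
>
>         The "other way" I had in mind is dm-cache/lvmcache under the
>         brick LV, something like this (VG, LV and device names are
>         assumptions):
>
>         lvcreate --type cache -L 200G -n brickcache gluster_vg/brick_lv /dev/sdh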
>
>         Where are you putting the OS?
>
>         Hope I understood the question...
>
>         Thanks
>
>         On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira
>         <moacirferreira at hotmail.com> wrote:
>
>             I am willing to assemble an oVirt "pod", made of 3 servers,
>             each with 2 CPU sockets of 12 cores, 256GB RAM, 7 10K HDDs,
>             and 1 SSD. The idea is to use GlusterFS to provide HA for
>             the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb
>             NIC. So my intention is to create a loop, like a server
>             triangle, using the 40Gb NICs for virtualization file (VM
>             .qcow2) access and for moving VMs around the pod (east/west
>             traffic), while using the 10Gb interfaces for giving
>             services to the outside world (north/south traffic).
>
>
>             This said, my first question is: how should I deploy
>             GlusterFS in such an oVirt scenario? Specifically:
>
>
>             1 - Should I create 3 RAID arrays (e.g. RAID 5), one on
>             each oVirt node, and then create a GlusterFS volume using
>             them?
>
>             2 - Or should I instead create a JBOD array made of all
>             the servers' disks?
>
>             3 - What is the best Gluster configuration to provide HA
>             while not consuming too much disk space?
>
>             4 - Does an oVirt hypervisor pod like the one I am planning
>             to build, and the virtualization environment, benefit from
>             tiering when using an SSD disk? And if yes, will Gluster do
>             it by default or do I have to configure it to do so?
>
>
>             Bottom line: what is the good practice for using GlusterFS
>             in small pods for enterprises?
>
>
>             Your opinion/feedback will be really appreciated!
>
>             Moacir
>
>
>             _______________________________________________
>             Users mailing list
>             Users at ovirt.org
>             http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>     _______________________________________________
>     Users mailing list
>     Users at ovirt.org
>     http://lists.ovirt.org/mailman/listinfo/users
>
>
