[ovirt-users] Good practices

FERNANDO FREDIANI fernando.frediani at upx.com
Mon Aug 7 13:08:32 UTC 2017


Moacir, I believe that to use the 3 servers directly connected to each 
other without a switch, you have to have a bridge on each server spanning 
the 2 physical interfaces, so that traffic can pass through at layer 2 
(is it possible to create this from the oVirt Engine web interface?). If 
your ovirtmgmt network is separate from the others (it really should be), 
that should be fine to do.
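
A rough sketch of what that could look like at the OS level, with made-up 
interface names. In an oVirt setup the usual way is to model it as a 
logical network on the host NICs and let VDSM build the bridge, and I am 
not sure this exact two-port layout can be expressed from the web UI. 
Also note that bridging both 40Gb ports on all three hosts closes a 
layer-2 loop, so STP has to be enabled to block it:

    ip link add name br40 type bridge
    ip link set br40 type bridge stp_state 1   # the triangle is a loop, let STP break it
    ip link set enp94s0f0 master br40          # 40Gb port towards neighbour A
    ip link set enp94s0f1 master br40          # 40Gb port towards neighbour B
    ip link set enp94s0f0 up
    ip link set enp94s0f1 up
    ip link set br40 up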


Fernando


On 07/08/2017 07:13, Moacir Ferreira wrote:
>
> Hi, in-line responses.
>
>
> Thanks,
>
> Moacir
>
>
> ------------------------------------------------------------------------
> *From:* Yaniv Kaul <ykaul at redhat.com>
> *Sent:* Monday, August 7, 2017 7:42 AM
> *To:* Moacir Ferreira
> *Cc:* users at ovirt.org
> *Subject:* Re: [ovirt-users] Good practices
>
>
> On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira 
> <moacirferreira at hotmail.com <mailto:moacirferreira at hotmail.com>> wrote:
>
>     I am planning to assemble an oVirt "pod" made of 3 servers, each
>     with 2 CPU sockets of 12 cores, 256GB RAM, 7 10K HDDs and 1 SSD. The
>     idea is to use GlusterFS to provide HA for the VMs. The 3 servers
>     each have a dual 40Gb NIC and a dual 10Gb NIC, so my intention is to
>     create a loop, like a server triangle, using the 40Gb NICs for
>     access to the virtualization files (the VMs' .qcow2) and for moving
>     VMs around the pod (east/west traffic), while using the 10Gb
>     interfaces to serve the outside world (north/south traffic).
>
>
> Very nice gear. How are you planning the network exactly? Without a 
> switch, back-to-back? (sounds OK to me, just wanted to ensure this is 
> what the 'dual' is used for). However, I'm unsure if you have the 
> correct balance between the interface speeds (40g) and the disks (too 
> many HDDs?).
>
> Moacir: The idea is to have a very high-performance network for the 
> distributed file system and to prevent bottlenecks when we move a VM 
> from one node to another. Using 40Gb NICs I can just connect the servers 
> back-to-back. In this case I don't need an expensive 40Gb switch, I get 
> very high speed, and there is no contention between north/south and 
> east/west traffic.
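>
> Just to make "back-to-back" concrete: if the two 40Gb ports are used as
> routed point-to-point links (rather than the bridged approach Fernando
> describes), each pair of hosts shares its own small subnet, something
> like this on host1 (all names and addresses are only placeholders):
>
>     ip addr add 10.10.12.1/30 dev enp94s0f0   # link host1 <-> host2
>     ip addr add 10.10.13.1/30 dev enp94s0f1   # link host1 <-> host3
>     ip link set enp94s0f0 up
>     ip link set enp94s0f1 up
>
> The Gluster and migration traffic then stays on those links and never
> touches the 10Gb side.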
>
>
>     That said, how should I deploy GlusterFS in such an oVirt
>     scenario? My questions are:
>
>
>     1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt
>     node, and then create a GlusterFS volume on top of them?
>
> I would assume RAID 1 for the operating system (you don't want a 
> single point of failure there?) and the rest as JBOD. The SSD will be 
> used for caching, I reckon? (I personally would add more SSDs instead 
> of HDDs, but it does depend on the disk sizes and your space requirements.)
>
> Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic 
> JBOD or a JBOD assembled from RAID-5 "disks" created by the server's 
> disk controller?
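>
> For reference, with plain JBOD each data disk typically ends up as its
> own brick; a minimal hand-made sketch for one disk (device, VG and mount
> names are placeholders, and gdeploy normally does this for you, with
> thin-provisioned LVs):
>
>     pvcreate /dev/sdb
>     vgcreate gluster_vg_sdb /dev/sdb
>     lvcreate -l 100%FREE -n brick1 gluster_vg_sdb
>     mkfs.xfs -i size=512 /dev/gluster_vg_sdb/brick1
>     mkdir -p /gluster/brick1
>     mount /dev/gluster_vg_sdb/brick1 /gluster/brick1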
>
>     2 - Instead, should I create a JBOD array made of all of the server's disks?
>
>     3 - What is the best Gluster configuration to provide for HA while
>     not consuming too much disk space?
>
>
> Replica 2 + Arbiter sounds good to me.
> Moacir: I agree, and that is what I am using.
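>
> For the record, "replica 2 + arbiter" is created as a replica 3
> arbiter 1 volume, where the third brick holds only metadata, so you pay
> the full capacity cost twice rather than three times. A minimal sketch
> with placeholder host and brick names, which gdeploy would normally
> generate for you:
>
>     gluster volume create vmstore replica 3 arbiter 1 \
>         host1:/gluster/brick1/vmstore \
>         host2:/gluster/brick1/vmstore \
>         host3:/gluster/brick1/vmstore
>     gluster volume start vmstore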
>
>     4 - Does an oVirt hypervisor pod like the one I am planning to
>     build, and the virtualization environment, benefit from tiering
>     when using an SSD disk? And if yes, will Gluster do it by default
>     or do I have to configure it to do so?
>
>
> Yes, I believe using lvmcache is the best way to go.
>
>     Moacir: Are you sure? I say that because the qcow2 files will be
>     quite big. So if tiering is "file based", the SSD would have to be
>     very, very big, unless Gluster tiering does it by "chunks of data".
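>
> For what it's worth, lvmcache works at the block level underneath the
> brick filesystem, so the size of the qcow2 files doesn't matter the way
> it would with file-based tiering; only the hot blocks land on the SSD
> (Gluster's own tier feature, by contrast, moves whole files). A rough
> sketch, assuming the SSD is added to the same VG as the brick, with
> placeholder device and LV names:
>
>     pvcreate /dev/sdg                                 # sdg = the SSD
>     vgextend gluster_vg_sdb /dev/sdg
>     lvcreate --type cache-pool -L 200G -n brick1_cache gluster_vg_sdb /dev/sdg
>     lvconvert --type cache --cachepool gluster_vg_sdb/brick1_cache \
>         gluster_vg_sdb/brick1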
>
>
>     Bottom line: what is the good practice for using GlusterFS in
>     small pods for enterprises?
>
>
> Don't forget jumbo frames, libgfapi (hopefully coming in 4.1.5), and 
> sharding (enabled out of the box if you use a hyper-converged setup 
> via gdeploy).
> Moacir: Yes! This is another reason to have separate networks for 
> north/south and east/west. That way I can use the standard MTU on 
> the 10Gb NICs and jumbo frames on the 40Gb NICs used for file access 
> and VM moves.
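>
> A quick sketch of those two knobs, with placeholder interface and
> volume names (the MTU would normally be set persistently on the logical
> network in the engine rather than by hand, and a gdeploy
> hyper-converged install already turns sharding on):
>
>     ip link set dev enp94s0f0 mtu 9000            # jumbo frames on the 40Gb side
>     gluster volume set vmstore features.shard on
>     gluster volume set vmstore features.shard-block-size 512MB
>
> Sharding keeps the healing unit small, so a big qcow2 no longer has to
> be copied whole when a brick comes back.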
>
> Y.
>
>
>     Your opinion/feedback will be really appreciated!
>
>     Moacir
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
