On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira <moacirferreira(a)hotmail.com>
wrote:
I am planning to assemble an oVirt "pod" made of 3 servers, each with 2
CPU sockets of 12 cores, 256GB RAM, 7 HDDs (10K RPM) and 1 SSD. The idea
is to use GlusterFS to provide HA for the VMs. The 3 servers have a dual
40Gb NIC and a dual 10Gb NIC. My intention is to connect the servers in a
triangle, back-to-back, using the 40Gb NICs for access to the
virtualization files (the VMs' .qcow2 images) and for moving VMs around
the pod (east/west traffic), while the 10Gb interfaces serve the outside
world (north/south traffic).
Very nice gear. How are you planning the network exactly? Without a
switch, back-to-back? (Sounds OK to me, I just wanted to make sure this is
what the 'dual' is used for.) However, I'm not sure you have the right
balance between the interface speed (40Gb) and the disks (too many HDDs?).
That said, how should I deploy GlusterFS in such an oVirt scenario? My
questions are:
1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt node,
and then create a GlusterFS volume on top of them?
I would assume RAID 1 for the operating system (you don't want a single
point of failure there) and the rest as JBOD. The SSD will be used for
caching, I reckon? (I personally would add more SSDs instead of HDDs, but
it does depend on the disk sizes and your space requirements.)
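As a rough sketch of how one JBOD disk could become a Gluster brick
(device, VG/LV and mount point names are placeholders, adjust to your
layout):

  pvcreate /dev/sdb
  vgcreate gluster_vg_sdb /dev/sdb
  lvcreate -l 100%FREE -n gluster_lv_sdb gluster_vg_sdb
  mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_sdb
  mkdir -p /gluster_bricks/sdb
  mount /dev/gluster_vg_sdb/gluster_lv_sdb /gluster_bricks/sdb

gdeploy can generate this layout for you on all three nodes from a single
config file, so you don't have to do it by hand.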
2 - Instead, should I create a JBOD array made of all the servers' disks?
3 - What is the best Gluster configuration to provide HA while not
consuming too much disk space?
Replica 2 + Arbiter sounds good to me.
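In Gluster terms that is a 'replica 3 arbiter 1' volume: the third brick
only holds metadata, so it can be small. A minimal sketch, assuming
hostnames host1-3 and placeholder brick paths:

  gluster volume create data replica 3 arbiter 1 \
      host1:/gluster_bricks/data/data \
      host2:/gluster_bricks/data/data \
      host3:/gluster_bricks/data/data
  gluster volume start data

You get the quorum behaviour of replica 3 without paying for a third full
copy of the data.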
4 - Does an oVirt hypervisor pod like the one I am planning to build, and
its virtualization environment, benefit from tiering when using an SSD
disk? Also, will Gluster do it by default, or do I have to configure it?
Yes, I believe using lvmcache is the best way to go.
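A rough lvmcache sketch, assuming the SSD is /dev/sdh and the brick LV is
gluster_vg_sdb/gluster_lv_sdb (all names are placeholders; with a single
SSD you would carve it into partitions, one per VG you want to cache):

  # add an SSD partition to the brick's VG
  pvcreate /dev/sdh1
  vgextend gluster_vg_sdb /dev/sdh1
  # create a cache pool on the SSD and attach it to the brick LV
  lvcreate --type cache-pool -L 100G -n lv_cache gluster_vg_sdb /dev/sdh1
  lvconvert --type cache --cachepool gluster_vg_sdb/lv_cache \
      gluster_vg_sdb/gluster_lv_sdb

I believe gdeploy can set up the cache for you as well if you tell it
which SSD to use.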
Bottom line: what is the good practice for using GlusterFS in small
enterprise pods?
Don't forget jumbo frames on the storage network. Also libgfapi (coming,
hopefully, in 4.1.5), and sharding (enabled out of the box if you use a
hyper-converged setup via gdeploy).
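For example (the interface and volume names are placeholders; set the MTU
in the ifcfg file as well so it survives a reboot):

  # jumbo frames on the 40Gb storage interfaces
  ip link set dev ens1f0 mtu 9000
  # apply the virt option group (quorum, eager-lock, etc.)
  gluster volume set data group virt
  # make sure sharding is on
  gluster volume set data features.shard on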
Y.
Your opinion/feedback will be really appreciated!
Moacir