1) RAID 5 may be a performance hit: parity updates turn each small random
write into roughly four disk I/Os, which hurts VM workloads.
2) I'd be inclined to do this as JBOD, one brick per disk, with a distributed
disperse volume across the three servers. The brick count must be a multiple
of disperse-data + redundancy (5 + 2 = 7), so the 7 HDDs per server give
21 bricks = 3 disperse sets. Something like the following (the leading echo
just prints the generated command for review; interleaving the loops spreads
each disperse set across all three hosts):
echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
$(for BRICK in $(seq 1 7); do for SERVER in a b c; do
printf 'server%s:/brick/brick-%s%s/brick ' "$SERVER" "$SERVER" "$BRICK"
done; done)
3) I think the above also answers this one: a 5+2 dispersed volume leaves
5/7 (about 71%) of raw capacity usable while tolerating two failed bricks
per set, versus only 1/3 usable with replica 3.
4) Gluster does support tiering, but IIRC you'd need the same number of SSDs
as spindle drives. There may be another way to use the SSD as a fast cache;
see the sketch below.
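If you do experiment with tiering, the hot tier is attached as its own
(usually replicated) set of SSD bricks, roughly like this (a sketch only;
the SSD brick paths are made up, and check the tier syntax for your Gluster
version):

gluster volume tier dispersevol attach replica 3 \
servera:/ssd/hot/brick serverb:/ssd/hot/brick serverc:/ssd/hot/brick

The other common approach is lvmcache (dm-cache) underneath Gluster, carving
the SSD into cache LVs for the spindle bricks; that doesn't require matching
drive counts.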
Where are you putting the OS?
Hope I understood the question...
Thanks
On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <moacirferreira(a)hotmail.com> wrote:
I am planning to assemble an oVirt "pod" made of 3 servers, each with 2 CPU
sockets of 12 cores, 256 GB RAM, 7 10K-RPM HDDs, and 1 SSD. The idea is to
use GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC
and a dual 10Gb NIC, so my intention is to create a loop, like a server
triangle, using the 40Gb NICs for access to the virtualization files (the
VMs' .qcow2 images) and for moving VMs around the pod (east/west traffic),
while using the 10Gb interfaces to serve the outside world (north/south
traffic).
This said, how should I deploy GlusterFS in such an oVirt scenario? My
questions are:
1 - Should I create 3 RAID arrays (e.g., RAID 5), one on each oVirt node,
and then create a GlusterFS volume on top of them?
2 - Or should I instead create a JBOD array out of all of each server's disks?
3 - What is the best Gluster configuration to provide HA while not consuming
too much disk space?
4 - Would an oVirt hypervisor pod like the one I am planning to build, and
its virtualization environment, benefit from tiering with an SSD? And if so,
will Gluster do it by default, or do I have to configure it?
Bottom line: what is the good practice for using GlusterFS in small pods for
enterprises?
Your opinion/feedback will be really appreciated!
Moacir