[ovirt-users] Good practices
Moacir Ferreira
moacirferreira at hotmail.com
Mon Aug 7 10:19:57 UTC 2017
Devin,
Many, many thanks for your response. I will read the doc you sent, and if I still have questions I will post them here.
But why would I use RAIDed bricks if Gluster, by itself, already "protects" the data by making replicas? You see, that is what is confusing to me...
Thanks,
Moacir
________________________________
From: Devin Acosta <devin at pabstatencio.com>
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users at ovirt.org
Subject: Re: [ovirt-users] Good practices
Moacir,
I have recently installed multiple Red Hat Virtualization hosts for several different companies and have worked with the Red Hat Support Team in depth on the optimal configuration for setting up GlusterFS efficiently. I wanted to share with you what I learned.
In general, the Red Hat Virtualization team frowns upon using each disk of the system as plain JBOD. Sure, there is some protection from having the data replicated; however, the recommendation is to use RAID 6 (preferred), RAID 5, or RAID 1 at the very least.
Here is the direct quote from Red Hat when I asked about RAID and Bricks:
"A typical Gluster configuration would use RAID underneath the bricks. RAID 6 is most typical as it gives you 2 disk failure protection, but RAID 5 could be used too. Once you have the RAIDed bricks, you'd then apply the desired replication on top of that. The most popular way of doing this would be distributed replicated with 2x replication. In general you'll get better performance with larger bricks. 12 drives is often a sweet spot. Another option would be to create a separate tier using all SSD’s.”
In order to do SSD tiering, from my understanding you would need 1 x NVMe drive in each server, or a 4 x SSD hot tier (the hot tier needs to be distributed-replicated if not using NVMe). So, with you having only 1 SSD drive in each server, I'd suggest looking into the NVMe option.
Since you're using only 3 servers, what I'd probably suggest is 2 replicas + an arbiter node. This setup doesn't require the 3rd server to have big drives at all, as the arbiter only stores metadata about the files and not a full copy.
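To picture that on your 3 nodes, here is a minimal sketch; the hostnames, brick paths, and 6 TB data-brick size are placeholders I made up, not values from your setup:

# Hypothetical "replica 3 arbiter 1" layout for a 3-node oVirt/Gluster pod.
bricks = [
    ("node1", "/gluster/brick1/vmstore", "data"),      # full copy of every file
    ("node2", "/gluster/brick1/vmstore", "data"),      # second full copy
    ("node3", "/gluster/arbiter/vmstore", "arbiter"),  # metadata only, tiny footprint
]

BRICK_TB = 6.0  # assumed usable size of each data brick, for illustration

# With an arbiter, usable capacity equals ONE data brick, not the sum of all three.
usable_tb = BRICK_TB

for host, path, role in bricks:
    print(f"{host}:{path} ({role})")
print(f"Usable capacity of this replica set: ~{usable_tb:.1f} TB")

The matching Gluster volume type is "replica 3 arbiter 1": you keep two full data copies plus a quorum vote, and the third node only needs enough space for file metadata.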
Please see the attached document that was given to me by Red Hat to get more information on this. Hope this information helps you.
--
Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect
On August 6, 2017 at 7:29:29 PM, Moacir Ferreira (moacirferreira at hotmail.com) wrote:
I am willing to assemble an oVirt "pod" made of 3 servers, each with 2 CPU sockets of 12 cores, 256GB RAM, 7 x 10K HDDs and 1 SSD. The idea is to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop, like a server triangle, using the 40Gb NICs for virtualization file (VM .qcow2) access and for moving VMs around the pod (east/west traffic), while using the 10Gb interfaces for serving the outside world (north/south traffic).
This said, my first question is: how should I deploy GlusterFS in such an oVirt scenario? Specifically:
1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt node, and then create a GlusterFS volume on top of them?
2 - Or should I instead create a JBOD array made of all the servers' disks?
3 - What is the best Gluster configuration to provide HA while not consuming too much disk space?
4 - Does an oVirt hypervisor pod like the one I am planning to build, and the virtualization environment, benefit from tiering when using an SSD disk? And if so, will Gluster do it by default, or do I have to configure it?
Bottom line: what is the good practice for using GlusterFS in small pods for enterprises?
Your opinion/feedback will be really appreciated!
Moacir
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users