[ovirt-users] Good practices

Johan Bernhardsson johan at kafit.se
Tue Aug 8 10:24:31 UTC 2017


On oVirt, Gluster uses sharding, so all large files are broken up into small 
pieces (shards) across the Gluster bricks.
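
For illustration, sharding is just a volume option, and oVirt's hyperconverged 
setup normally enables it for you. A minimal sketch, assuming a volume named 
"data" (the volume name and the 512MB shard size are placeholders, not your 
actual values):

  # enable sharding and set the shard size used for newly written files
  gluster volume set data features.shard on
  gluster volume set data features.shard-block-size 512MB

  # verify the current sharding settings
  gluster volume get data features.shard
  gluster volume get data features.shard-block-size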

/Johan


On August 8, 2017 12:19:39 Moacir Ferreira <moacirferreira at hotmail.com> wrote:

> Thanks Johan, you brought "light" into my darkness! I went looking for the 
> GlusterFS tiering how-to and it looks quite simple to attach an SSD as a 
> hot tier. For those willing to read about it, go here: 
> http://blog.gluster.org/2016/03/automated-tiering-in-gluster/
>
>
> Now, I still have a question: VMs are made of very large .qcow2 files. My 
> understanding is that files in Gluster are kept whole on a single brick. If 
> so, I will not benefit from tiering, as a single SSD will not be big enough 
> to fit all my large VM .qcow2 files. This would not be true if Gluster can 
> spread the "blocks" of data that compose a large file over several bricks. 
> But if I am not wrong, this is one of the key differences between GlusterFS 
> and Ceph. Can you comment?
>
>
> Moacir
>
>
> ________________________________
> From: Johan Bernhardsson <johan at kafit.se>
> Sent: Tuesday, August 8, 2017 7:03 AM
> To: Moacir Ferreira; Devin Acosta; users at ovirt.org
> Subject: Re: [ovirt-users] Good practices
>
>
> You attach the SSD as a hot tier with a gluster command. I don't think that 
> gdeploy or the oVirt GUI can do it.
>
> The Gluster docs and Red Hat docs explain tiering quite well.
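>
> For example, a minimal sketch assuming a replica 3 volume named "data" and 
> one SSD brick per host (hostnames and brick paths below are placeholders):
>
>   # attach the SSD bricks as a replicated hot tier
>   gluster volume tier data attach replica 3 \
>       host1:/gluster/ssd/brick host2:/gluster/ssd/brick host3:/gluster/ssd/brick
>
>   # watch promotion/demotion activity, or detach the tier again later
>   gluster volume tier data status
>   gluster volume tier data detach start
>
> (Syntax from the Gluster 3.x tiering feature; check the docs for your version.)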
>
> /Johan
>
> On August 8, 2017 07:06:42 Moacir Ferreira <moacirferreira at hotmail.com> wrote:
>
> Hi Devin,
>
>
> Please consider that for the OS I have a RAID 1. Now, let's say I use RAID 5 
> to assemble a single disk on each server. In this case, the SSD will not 
> make any difference, right? I guess that for the SSD to be usable, it 
> should not be part of the RAID 5. In that case I could create a logical 
> volume made of the RAIDed brick and then extend it using the SSD, i.e., 
> using gdeploy:
>
>
> [disktype]
> jbod
>
> ....
>
> [pv1]
> action=create
> devices=sdb,sdc
> wipefs=yes
> ignore_vg_errors=no
>
> [vg1]
> action=create
> vgname=gluster_vg_jbod
> pvname=sdb
> ignore_vg_errors=no
>
> [vg2]
> action=extend
> vgname=gluster_vg_jbod
> pvname=sdc
> ignore_vg_errors=no
>
>
> But will Gluster be able to auto-detect and use this SSD brick for tiering? 
> Do I have to do some other configuration? Also, as the VM files (.qcow2) 
> are quite big, will I benefit from tiering? Or is this approach wrong and 
> should I take a different one?
>
>
> Thanks,
>
> Moacir
>
>
> ________________________________
> From: Devin Acosta <devin at pabstatencio.com>
> Sent: Monday, August 7, 2017 7:46 AM
> To: Moacir Ferreira; users at ovirt.org
> Subject: Re: [ovirt-users] Good practices
>
>
> Moacir,
>
> I have recently installed multiple Red Hat Virtualization hosts for several 
> different companies, and have dealt with the Red Hat Support Team in depth 
> about the optimal configuration for setting up GlusterFS most efficiently, 
> and I wanted to share with you what I learned.
>
> In general, the Red Hat Virtualization team frowns upon using each disk of 
> the system as just a JBOD. Sure, there is some protection by having the data 
> replicated; however, the recommendation is to use RAID 6 (preferred), RAID 5, 
> or at the very least RAID 1.
>
> Here is the direct quote from Red Hat when I asked about RAID and Bricks:
>
> "A typical Gluster configuration would use RAID underneath the bricks. RAID 
> 6 is most typical as it gives you 2 disk failure protection, but RAID 5 
> could be used too. Once you have the RAIDed bricks, you'd then apply the 
> desired replication on top of that. The most popular way of doing this 
> would be distributed replicated with 2x replication. In general you'll get 
> better performance with larger bricks. 12 drives is often a sweet spot. 
> Another option would be to create a separate tier using all SSD’s.”
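>
> As a concrete sketch of "distributed replicated with 2x replication" on 
> RAIDed bricks (the hostnames and mount points below are made up, and replica 
> 2 needs an even number of bricks):
>
>   gluster volume create data replica 2 \
>       server1:/rhgs/brick1/data server2:/rhgs/brick1/data \
>       server3:/rhgs/brick1/data server4:/rhgs/brick1/data
>   gluster volume start data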
>
> In order to do SSD tiering, from my understanding you would need 1 x NVMe 
> drive in each server, or 4 x SSDs for the hot tier (the hot tier needs to be 
> distributed-replicated if not using NVMe). So with you only having 1 SSD 
> drive in each server, I’d suggest maybe looking into the NVMe option.
>
> Since you’re using only 3 servers, what I’d probably suggest is to do 2 
> replicas + an arbiter node. This setup actually doesn’t require the 3rd 
> server to have big drives at all, as it only stores metadata about the 
> files and not an actual full copy.
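>
> A rough sketch of that layout (volume and path names are made up; the brick 
> on server3 is the small arbiter brick that only holds metadata):
>
>   gluster volume create data replica 3 arbiter 1 \
>       server1:/rhgs/brick1/data server2:/rhgs/brick1/data server3:/rhgs/arbiter/data
>   gluster volume start data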
>
> Please see the attached document that was given to me by Red Hat to get 
> more information on this. Hope this information helps you.
>
>
> --
>
> Devin Acosta, RHCA, RHVCA
> Red Hat Certified Architect
>
>
> On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
> (moacirferreira at hotmail.com) wrote:
>
> I am willing to assemble an oVirt "pod" made of 3 servers, each with 2 CPU 
> sockets of 12 cores, 256GB RAM, 7 x 10K HDD, and 1 SSD. The idea is to use 
> GlusterFS to provide HA for the VMs. The 3 servers each have a dual 40Gb NIC 
> and a dual 10Gb NIC. So my intention is to create a loop, like a server 
> triangle, using the 40Gb NICs for virtualization file (VM .qcow2) access and 
> for moving VMs around the pod (east/west traffic), while using the 10Gb 
> interfaces for providing services to the outside world (north/south traffic).
>
>
> This said, my main question is: how should I deploy GlusterFS in such an 
> oVirt scenario? Specifically:
>
>
> 1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt node, and 
> then create a GlusterFS volume using them?
>
> 2 - Instead, should I create a JBOD array made of all the servers' disks?
>
> 3 - What is the best Gluster configuration to provide HA while not 
> consuming too much disk space?
>
> 4 - Does an oVirt hypervisor pod like the one I am planning to build, and the 
> virtualization environment, benefit from tiering when using an SSD disk? 
> And if so, will Gluster do it by default or do I have to configure it to do so?
>
>
> Bottom line: what are the good practices for using GlusterFS in small 
> pods for enterprises?
>
>
> Your opinion/feedback will be really appreciated!
>
> Moacir
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

