[ovirt-users] Good practices

Colin Coe colin.coe at gmail.com
Mon Aug 7 11:41:19 UTC 2017


Hi

I just thought that you'd do hardware RAID if you had the controller, or
JBOD if you didn't.  In hindsight, a server with 40Gbps NICs is pretty
likely to have a hardware RAID controller.  I've never done JBOD on a
hardware RAID controller.  I think having a single Gluster brick spanning
a JBOD would be riskier than multiple bricks, each on a single disk, but
that's not based on anything other than my prejudices.

I thought Gluster tiering was for the most frequently accessed files, in
which case all the VMs' disks would end up in the hot tier.  However, I
have been wrong before...
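
For reference, attaching a hot tier with the Gluster 3.x tiering feature
looked roughly like this (a sketch with made-up volume and brick names;
check the syntax against your Gluster version):

  # attach a replicated SSD hot tier to an existing volume
  gluster volume tier vmstore attach replica 3 \
      servera:/gluster/ssd/brick serverb:/gluster/ssd/brick \
      serverc:/gluster/ssd/brick

  # watch promotion/demotion activity
  gluster volume tier vmstore status

Tiering works at whole-file granularity, so a busy .qcow2 would indeed be
promoted in its entirety, which is exactly the concern above.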

I just wanted to know where the OS was going as I didn't see it mentioned
in the OP.  Normally, I'd have the OS on a RAID1, but in your case that's a
lot of wasted disk.

Honestly, I think Yaniv's answer was far better than my own and made the
important point about having an arbiter.
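
For anyone following along, the arbiter setup is created like this (a
sketch with made-up host/brick names; the arbiter brick holds only file
metadata, so it can sit on a much smaller disk):

  gluster volume create vmstore replica 3 arbiter 1 \
      servera:/gluster/brick1/vmstore \
      serverb:/gluster/brick1/vmstore \
      serverc:/gluster/arbiter1/vmstore

This gives the split-brain protection of replica 3 while only storing two
full copies of the data.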

Thanks

On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira <moacirferreira at hotmail.com>
wrote:

> Hi Colin,
>
>
> I am in Portugal, so sorry for this late response. It is all quite
> confusing for me, so please consider:
>
>
> 1 - What if the RAID is done by the server's disk controller, not by
> software?
>
> 2 - For JBOD I am just using gdeploy to deploy it. However, I am not
> using the oVirt node GUI to do this.
>
>
> 3 - As the VM .qcow2 files are quite big, tiering would only help if done
> by an intelligent system that uses the SSD for hot chunks of data, not for
> the entire .qcow2 file. But I guess this is a problem everybody else has.
> So, do you know how tiering works in Gluster?
>
>
> 4 - I am putting the OS on the first disk. However, would you do it
> differently?
>
>
> Moacir
>
> ------------------------------
> From: Colin Coe <colin.coe at gmail.com>
> Sent: Monday, August 7, 2017 4:48 AM
> To: Moacir Ferreira
> Cc: users at ovirt.org
> Subject: Re: [ovirt-users] Good practices
>
> 1) RAID5 may be a performance hit.
>
> 2) I'd be inclined to do this as JBOD by creating a distributed-disperse
> volume across the servers.  Something like:
>
> echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
>   $(for SERVER in a b c; do
>       for BRICK in $(seq 1 5); do
>         echo -e "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"
>       done
>     done)
>
> 3) I think the above.
>
> 4) Gluster does support tiering, but IIRC you'd need the same number of
> SSDs as spindle drives.  There may be another way to use the SSD as a fast
> cache.
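>
> For example, LVM's dm-cache works at the block level, and IIRC gdeploy
> can set this up as well.  A minimal sketch, assuming the SSD is /dev/sdh
> and the brick sits on an LV at vg_bricks/brick1:
>
> pvcreate /dev/sdh
> vgextend vg_bricks /dev/sdh
> lvcreate --type cache-pool -L 180G -n brickcache vg_bricks /dev/sdh
> lvconvert --type cache --cachepool vg_bricks/brickcache vg_bricks/brick1
>
> That caches hot blocks rather than whole files, which suits large .qcow2
> images much better than file-level tiering.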
>
> Where are you putting the OS?
>
> Hope I understood the question...
>
> Thanks
>
> On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <
> moacirferreira at hotmail.com> wrote:
>
>> I am willing to assemble an oVirt "pod", made of 3 servers, each with 2
>> CPU sockets of 12 cores, 256GB RAM, 7 10K HDDs and 1 SSD. The idea is to
>> use GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb
>> NIC and a dual 10Gb NIC. So my intention is to create a ring (a server
>> triangle) using the 40Gb NICs for virtualization file (VM .qcow2) access
>> and for moving VMs around the pod (east/west traffic), while using the
>> 10Gb interfaces for giving services to the outside world (north/south
>> traffic).
>>
>>
>> That said, my question is: how should I deploy GlusterFS in such an
>> oVirt scenario? Specifically:
>>
>>
>> 1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt node,
>> and then create a GlusterFS volume using them?
>>
>> 2 - Instead, should I create a JBOD array made of each server's disks?
>>
>> 3 - What is the best Gluster configuration to provide for HA while not
>> consuming too much disk space?
>>
>> 4 - Does an oVirt hypervisor pod like the one I am planning to build,
>> and its virtualization environment, benefit from tiering when using an
>> SSD disk? And if yes, will Gluster do it by default, or do I have to
>> configure it to do so?
>>
>>
>> Bottom line: what is good practice for using GlusterFS in small pods
>> for enterprises?
>>
>>
>> Your opinion/feedback will be really appreciated!
>>
>> Moacir
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>