[ovirt-users] Hardware critique

Vincent Royer vincent at epicenergy.ca
Fri Apr 6 01:16:43 UTC 2018


Well good, we can at least bounce ideas off each other, and I'm sure we'll
get some good advice sooner or later! Best way to get good ideas on the
internet is to post bad ones and wait ;)

In the performance and sizing guide PDF, they make this statement:

*Standard servers with 4:2 erasure coding and JBOD bricks provide superior
value in terms of throughput per server cost.*

I don't know enough about erasure coding to know if that applies, but all
the graphs that involve JBOD in that document look pretty good to me.
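
For my own sake I worked through the usable-capacity arithmetic, comparing
4+2 erasure coding (a dispersed volume) on JBOD bricks against plain replica
3, and against replica 3 on top of RAID 10. This is only a back-of-the-envelope
sketch; the 12 x 2TB disk count is just an assumption lifted from the setups
we've been discussing:

    # Rough usable-capacity comparison (assumed: 3 servers, 12 x 2 TB disks total)
    raw_tb = 12 * 2.0                      # 24 TB raw across the cluster

    # Replica 3 on RAID 10: RAID 10 halves each host, replica 3 keeps 1 of 3 copies
    replica3_raid10 = raw_tb * 0.5 / 3     # -> 4 TB usable

    # Replica 3 on JBOD bricks: 1 usable byte for every 3 written
    replica3_jbod = raw_tb / 3             # -> 8 TB usable

    # 4+2 erasure coding (disperse): 4 data bricks in every group of 6
    disperse_4_2 = raw_tb * 4 / 6          # -> 16 TB usable

    print(replica3_raid10, replica3_jbod, disperse_4_2)

If I have that right, 4:2 erasure coding would yield roughly four times the
usable space of replica 3 on RAID 10 from the same 24 TB of raw disk, which I
assume is where the "superior value" claim comes from, though from what I've
read the trade-off is more CPU work and weaker small-file / random-write
performance.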

What do you know about ZFS?  It seems to be popular to use in combination
with Gluster.


On Thu, Apr 5, 2018, 2:38 PM Jayme, <jaymef at gmail.com> wrote:

> Vincent,
>
> I've been back and forth on SSDs vs HDDs and can't really get a clear
> answer.  You are correct though: it would only come to 4TB usable in the
> end, which is pretty crazy, but that many 7200 RPM HDDs cost about the
> same as three 2TB SSDs would.  I actually posted a question to this list
> not long ago asking how GlusterFS might perform with a small number of
> disks, such as one 2TB SSD per host, and some GlusterFS users commented
> that the network would be the bottleneck long before the disks, and that a
> small number of SSDs could bottleneck at the RPC layer.  Also, I believe
> that at this time GlusterFS is not really optimized to take full advantage
> of SSDs (though I believe strides have been made in that regard; I could
> be wrong here).
>
> As for replica 3 being overkill, that may be true as well, but from what
> I've read in the oVirt and GlusterFS list archives, people typically feel
> safer with replica 3: they run into fewer disaster scenarios and recovery
> is easier.  I'm not sold on replica 3 either; replica 3 arbiter 1 may be
> more than fine, but I wanted to err on the side of caution, as this setup
> may host production servers sometime in the future.
>
> I really wish I could get some straight answers on the best configuration
> for oVirt + GlusterFS, but thus far it has been a big question mark.  I
> don't know if RAID is better than JBOD, and I don't know whether a smaller
> number of SSDs would perform better or worse than a larger number of
> spinning disks in RAID 10.
>
> On Thu, Apr 5, 2018 at 5:38 PM, Vincent Royer <vincent at epicenergy.ca>
> wrote:
>
>> Jayme,
>>
>> I'm doing a very similar build; the only real difference is that I am
>> using SSDs instead of HDDs.  I have similar questions to yours regarding
>> expected performance.  Have you considered JBOD + NFS?  Putting a Gluster
>> replica 3 on top of RAID 10 arrays sounds very safe, but my gosh, the
>> capacity takes a massive hit.  Am I correct in saying you would only get
>> 4TB of total usable capacity out of 24TB worth of disks?  The cost per TB
>> in that sort of scenario is immense.
>>
>> My plan is two 2TB SSDs per server in JBOD with a caching RAID card, with
>> replica 3.  I would end up with the same 4TB total capacity using 12TB of
>> SSDs.
>>
>> I think Replica 3 is safe enough that you could forgo the RAID 10. But
>> I'm talking from zero experience...  Would love others to chime in with
>> their opinions on both these setups.
>>
>> *Vincent Royer*
>> *778-825-1057*
>>
>>
>> <http://www.epicenergy.ca/>
>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>>
>>
>>
>>
>> On Thu, Apr 5, 2018 at 12:22 PM, Jayme <jaymef at gmail.com> wrote:
>>
>>> Thanks for your feedback.  Any other opinions on this proposed setup?
>>> I'm very torn over using GlusterFS and what the expected performance may
>>> be; there seems to be little information out there.  Would love to hear
>>> any feedback specifically from oVirt users on hyperconverged
>>> configurations.
>>>
>>> On Thu, Apr 5, 2018 at 2:56 AM, Alex K <rightkicktech at gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> You should be OK with the setup.
>>>> I am running around 20 VMs (Linux and Windows, small and medium size)
>>>> with half of your specs. With a 10G network, replica 3 is OK.
>>>>
>>>> Alex
>>>>
>>>> On Wed, Apr 4, 2018, 16:13 Jayme <jaymef at gmail.com> wrote:
>>>>
>>>>> I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
>>>>> budget).  I plan to run 20-30 Linux VMs, most of them very lightweight,
>>>>> plus a couple of heavier-hitting web and DB servers with frequent rsync
>>>>> backups.  Some have a lot of small files from large GitHub repos, etc.
>>>>>
>>>>> 3X of the following:
>>>>>
>>>>> Dell PowerEdge R720
>>>>> 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
>>>>> 256GB RAM
>>>>> PERC H710
>>>>> 2x 10GbE NICs
>>>>>
>>>>> Boot/OS will likely be two cheaper, small SATA SSDs in RAID 1.
>>>>>
>>>>> Gluster bricks will be composed of 4x 2TB WD Gold 7200 RPM SATA HDDs in
>>>>> RAID 10 per server.  I'm using a replica 3 setup (and I'm currently
>>>>> thinking no arbiter, for extra redundancy, although I'm not sure what
>>>>> the performance hit may be as a result).  Will this allow for two host
>>>>> failures, or just one?
>>>>>
>>>>> I've been really struggling with storage choices; it seems very
>>>>> difficult to predict the performance of GlusterFS due to the variance
>>>>> in hardware (everyone is using something different).  I'm not sure if
>>>>> the performance will be adequate for my needs.
>>>>>
>>>>> I will be using an already existing Netgear XS716T 10GbE switch for the
>>>>> Gluster storage network.
>>>>>
>>>>> In addition, I plan to build another simple GlusterFS storage server
>>>>> that I can geo-replicate the Gluster volume to for DR purposes, and to
>>>>> use existing hardware to build an independent standby oVirt host that
>>>>> can start up a few high-priority VMs from the geo-replicated GlusterFS
>>>>> volume if, for some reason, the primary oVirt cluster/GlusterFS volume
>>>>> ever failed.
>>>>>
>>>>> I would love to hear any advice or critiques on this plan.
>>>>>
>>>>> Thanks!
>>>>>
>>>>
>>>
>>>
>>>
>>
>