Hi Strahil, Leo and Jayme,
This thread is getting more and more useful, great.
At the moment I have a 15-node cluster with shared storage from NetApp. The storage network (NFS 4.1) runs over a 20 Gb LACP bond, separated from the control network.
Performance is generally great, except in several test cases that use a "send next data only after write confirm" pattern. In that situation, the speed of the network, kernel buffers or any other buffers is irrelevant; only the
storage server's write latency matters, and that is where we hit the speed issue.
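To illustrate the pattern, a minimal sketch (not our actual test suite; the mount path is made up, and O_SYNC stands in for whatever confirmation mechanism the test uses):

    # Synchronous-write probe: each write must be confirmed (O_SYNC) before
    # the next is issued, so throughput is bounded by storage-server latency,
    # not by link bandwidth or any buffering in between.
    import os, time

    PATH = "/mnt/nfs/latency_probe.bin"   # hypothetical NFS mount point
    BLOCK = b"\0" * 4096                  # one 4 KiB block per round trip
    ROUNDS = 1000

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    start = time.monotonic()
    for _ in range(ROUNDS):
        os.write(fd, BLOCK)               # returns only after the server confirms
    elapsed = time.monotonic() - start
    os.close(fd)
    os.unlink(PATH)

    print(f"avg write-confirm latency: {elapsed / ROUNDS * 1000:.2f} ms")
    print(f"effective throughput: {len(BLOCK) * ROUNDS / elapsed / 1e6:.2f} MB/s")

Even on a 20 Gb link, roughly 1 ms of confirm latency caps this at about 4 MB/s for 4 KiB writes, which is why the network speed does not help here.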
The main reason I am asking about HCI is to get as close as possible to local-storage speed while keeping multiple hosts in the same cluster.
The idea is to add HCI to the current setup as a second cluster, utilizing the CPU, RAM and local storage of the joined nodes.
-- Is this actually a direction that will get me to the wanted result, or am I misunderstanding the purpose of HCI?
I understand that HCI with the Self-Hosted Engine (SHE) requires replica 2 + arbiter or replica 3, but that is not my situation; I only wish to add HCI for the reasons above.
-- Do I need a distributed-replicated volume in that case, or can I simply use a plain distributed setup (if that is still supported)? A sketch of the two layouts I mean follows below.
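For clarity, the two layouts driven through the standard gluster CLI from Python (host names and brick paths are hypothetical):

    # Hypothetical volume layouts: plain distributed vs distributed-replicated.
    import subprocess

    def gluster(*args):
        subprocess.run(["gluster", "volume", *args], check=True)

    # Plain distributed: files are spread across bricks with no redundancy,
    # so losing any one brick loses the VM disks stored on it.
    gluster("create", "vmstore-dist",
            "node1:/gluster/brick1", "node2:/gluster/brick1",
            "node3:/gluster/brick1")

    # Replica 3: every file exists on all three bricks; any single node
    # can fail safely.
    gluster("create", "vmstore-rep3", "replica", "3",
            "node1:/gluster/brick1", "node2:/gluster/brick1",
            "node3:/gluster/brick1")

    # Expanding the replica 3 volume by a host triplet turns it into a
    # distributed-replicated volume (2 x 3 bricks).
    gluster("add-brick", "vmstore-rep3", "replica", "3",
            "node4:/gluster/brick1", "node5:/gluster/brick1",
            "node6:/gluster/brick1")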
Jayme, I do have the resources to set this up in a staged environment, and I will be happy to share the info, but first I need to find out whether I am moving in the right direction at all.
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
e: m.vrgotic@activevideo.com
w: www.activevideo.com
On 29/02/2020, 11:53, "Strahil Nikolov" <hunter86_bg@yahoo.com> wrote:
On February 29, 2020 11:19:30 AM GMT+02:00, Jayme <jaymef@gmail.com> wrote:
>I currently have a three-host HCI in replica 3 (no arbiter), with a 10 GbE
>network and SSDs making up the bricks. I've wondered what the result of
>adding three more nodes to expand the HCI would be. Is there an overall
>storage performance increase when Gluster is expanded like this?
>
>On Sat, Feb 29, 2020 at 4:26 AM Leo David <leoalex@gmail.com> wrote:
>
>> Hi,
>> As a first setup, you can go with a 3-node HCI and have the data volume
>> in a replica 3 setup.
>> Afterwards, if you want to expand the HCI (compute and storage too), you
>> can add sets of 3 nodes, and the data volume will automatically become
>> distributed-replicated. You can safely add sets of 3 nodes, up to 12
>> nodes per HCI.
>> You can also add "compute only" nodes without extending storage. This
>> can be done by adding nodes one by one.
>> As an example, I have an implementation with 3 hyperconverged nodes
>> forming a replica 3 volume, where I later added a 4th node to the
>> cluster which only adds RAM and CPU, whilst consuming storage from the
>> existing 3-node volume.
>> Hope this helps.
>> Cheers,
>>
>> Leo
>>
>>
>> On Fri, Feb 28, 2020, 15:25 Vrgotic, Marko <M.Vrgotic@activevideo.com>
>> wrote:
>>
>>> Hi Strahil,
>>>
>>>
>>>
>>> I circled back on your reply from a while ago regarding oVirt
>>> hyperconverged and more than 3 nodes in a cluster:
>>>
>>>
>>>
>>> “Hi Marko, I guess you can use distributed-replicated volumes and
>>> oVirt cluster with host triplets.”
>>>
>>> Initially I understood that it's limited to 3 nodes max per HC cluster,
>>> but now, reading the documentation further at
>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Maintenance_and_Upgrading_Resources.html
>>> that no longer looks to be the case.
>>>
>>>
>>>
>>> Would you be so kind as to give me an example, or clarify what you meant
>>> by “*you can use distributed-replicated volumes and oVirt cluster with
>>> host triplets*”?
>>>
>>>
>>>
>>> Kindly awaiting your reply.
>>>
>>>
>>>
>>>
>>>
>>> -----
>>>
>>> kind regards/met vriendelijke groeten
>>>
>>>
>>>
>>> Marko Vrgotic
>>> ActiveVideo
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *From: *"Vrgotic, Marko" <M.Vrgotic@activevideo.com>
>>> *Date: *Friday, 11 October 2019 at 08:49
>>> *To: *Strahil <hunter86_bg@yahoo.com>
>>> *Cc: *users <users@ovirt.org>
>>> *Subject: *Re: [ovirt-users] Hyperconverged setup questions
>>>
>>>
>>>
>>> Hi Strahil,
>>>
>>>
>>>
>>> Thank you.
>>>
>>> One maybe stupid question, but significant to me:
>>>
>>> Considering I haven’t played with a hyperconverged setup in oVirt
>>> before: is this something I can do from the Cockpit UI, or does it
>>> require me to first set up GlusterFS on the hosts before doing anything
>>> via the oVirt API or web interface?
>>>
>>>
>>>
>>> Kindly awaiting your reply.
>>>
>>>
>>>
>>> Marko
>>>
>>>
>>>
>>> Sent from my iPhone
>>>
>>>
>>>
>>> On 11 Oct 2019, at 06:14, Strahil <hunter86_bg@yahoo.com> wrote:
>>>
>>> Hi Marko,
>>>
>>> I guess you can use distributed-replicated volumes and oVirt cluster
>>> with host triplets.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Oct 10, 2019 15:30, "Vrgotic, Marko" <M.Vrgotic@activevideo.com>
>>> wrote:
>>>
>>> Dear oVirt,
>>>
>>>
>>>
>>> Is it possible to add an oVirt 3-host/Gluster hyperconverged cluster to
>>> an existing oVirt setup? I need this to achieve local-storage
>>> performance, but still have a pool of hypervisors available.
>>>
>>> Is it possible to have more than 3 hosts in a hyperconverged setup?
>>>
>>>
>>>
>>> I currently have 1 shared cluster (NFS-based storage, where the SHE is
>>> also hosted) and 2 local-storage clusters.
>>>
>>>
>>>
>>> The oVirt version currently running is 4.3.4.
>>>
>>>
>>>
>>> Kindly awaiting your reply.
>>>
>>>
>>>
>>>
>>>
>>> — — —
>>> Met vriendelijke groet / Kind regards,
>>>
>>> *Marko Vrgotic*
>>>
>>> *ActiveVideo*
>>>
>>>
>>>
>>>
>>>
Hi Jayme,
As this is a distributed-replicated volume (or maybe the term is the reverse; it doesn't matter) with the shard option enabled, the VM disk will be spread over all bricks, and I expect faster reads and even slightly faster writes
(as one shard can go to the first replica set, while another can go to the second replica set).
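For reference, a sketch of the shard options in play ("vmstore" is a hypothetical volume name; 64MB is the shard size oVirt's HCI deployment normally sets):

    # Sharding splits each VM disk image into fixed-size pieces; each piece
    # is placed on a replica set independently by Gluster's DHT hash, which
    # is what spreads one disk image across all replica sets.
    import subprocess

    def vol_set(volume, key, value):
        subprocess.run(["gluster", "volume", "set", volume, key, value], check=True)

    vol_set("vmstore", "features.shard", "on")
    vol_set("vmstore", "features.shard-block-size", "64MB")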
The problem you can hit is that if a whole replica set is down (for example 2 of the newly added 3 bricks; it is important to mention that a gluster rebalance was done before this hypothetical situation), you can have many,
even all, VMs unable to power up or ending up in a paused state. Of course, the result would be the same if this happened with the current HCI setup.
You should never leave a replica 3 volume without quorum, nor let it suffer such a failure without proper action being taken.
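A sketch of the quorum options that enforce this (volume name again hypothetical):

    # Quorum settings for a replica 3 volume, set via the gluster CLI.
    import subprocess

    def vol_set(volume, key, value):
        subprocess.run(["gluster", "volume", "set", volume, key, value], check=True)

    # Client-side quorum: writes are allowed only while a majority of each
    # replica set is reachable, so a lone surviving brick goes read-only
    # instead of diverging.
    vol_set("vmstore", "cluster.quorum-type", "auto")
    # Server-side quorum: bricks are stopped if the trusted pool itself
    # loses quorum, so a minority partition cannot accept writes.
    vol_set("vmstore", "cluster.server-quorum-type", "server")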
On the contrary, if you have 2 separate HCI setups (each with its own HostedEngine), you can completely segregate the 2 environments. The drawback is that you can tolerate only 1 node failure per HCI.
If I were in your shoes, I would go with extending the existing HCI with 3 additional hosts.
About the performance: it doesn't scale out forever, because your Gluster servers are also your clients, and the NIC bandwidth has to grow as you scale out.
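A rough, made-up calculation of the write amplification involved:

    # Back-of-envelope: with replica 3, every write a VM makes is sent to
    # all 3 bricks, so the host NICs carry roughly 3x the VM write traffic
    # (slightly less when one replica is local). Numbers are illustrative.
    REPLICA = 3
    vm_writes_mb_s = 300                  # aggregate VM write load on one host
    wire_mb_s = vm_writes_mb_s * REPLICA  # what the NIC actually carries
    link_mb_s = 10_000 / 8                # a 10 GbE link, in MB/s

    print(f"{wire_mb_s} MB/s on the wire = "
          f"{wire_mb_s / link_mb_s:.0%} of a 10 GbE link")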
If you grow larger, you will need to separate the Gluster cluster from oVirt.
Best Regards,
Strahil Nikolov