Thanks, Jayme!
I'll read up on it.
Thanks again for your help!
On Mon, Sep 28, 2020 at 1:42 PM Jayme <jaymef(a)gmail.com> wrote:
It might be possible to do something similar to what is described in the
documentation here:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4...
-- but I'm not sure if oVirt HCI would support it. You might have to roll
out your own GlusterFS storage solution. Someone with more Gluster/HCI
knowledge might know better.
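
As a rough, untested sketch of the roll-your-own route (hostnames and
brick paths below are made up, and the second set reuses a node since
you have five servers), it might look something like:

  # replica-3 volume across three of the nodes
  gluster volume create datavol replica 3 \
      host1:/gluster/brick1 host2:/gluster/brick1 host3:/gluster/brick1
  gluster volume start datavol

  # grow it later by a whole replica set (bricks must be added in
  # multiples of the replica count), then rebalance the layout
  gluster volume add-brick datavol \
      host4:/gluster/brick1 host5:/gluster/brick1 host1:/gluster/brick2
  gluster volume rebalance datavol start
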
On Mon, Sep 28, 2020 at 1:26 PM C Williams <cwilliams3320(a)gmail.com>
wrote:
> Jayme,
>
> Thanks for getting back to me!
>
> If I wanted to be wasteful with storage, could I start with an initial
> replica 2 + arbiter and then add 2 bricks to the volume? Could the arbiter
> solve split-brains for 4 bricks?
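>
> Something like this is what I have in mind (hostnames and paths made
> up; I don't know if the CLI would even accept sharing the arbiter this
> way, or whether the new bricks would need their own):
>
>   # initial replica 2 + arbiter (the arbiter brick holds metadata only)
>   gluster volume create datavol replica 3 arbiter 1 \
>       host1:/brick host2:/brick host3:/arbiter
>   # later, add 2 more data bricks plus (presumably) a second arbiter
>   gluster volume add-brick datavol replica 3 arbiter 1 \
>       host4:/brick host5:/brick host3:/arbiter2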
>
> Thank you for your help!
>
> On Mon, Sep 28, 2020 at 12:05 PM Jayme <jaymef(a)gmail.com> wrote:
>
>> You can only do HCI in multiples of 3. You could do a 3-server HCI
>> setup and add the other two servers as compute nodes, or you could add a
>> 6th server and expand HCI across all 6.
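>>
>> If you went the 6th-server route, the Gluster side of the expansion is
>> basically adding a second replica-3 set of bricks to each volume,
>> something like this (untested; brick paths follow the usual HCI layout,
>> adjust for your deployment):
>>
>>   gluster volume add-brick data \
>>       host4:/gluster_bricks/data/data \
>>       host5:/gluster_bricks/data/data \
>>       host6:/gluster_bricks/data/data
>>   gluster volume rebalance data start
>>
>> You'd also still need to add the new hosts to the cluster from the
>> Administration Portal before expanding the volumes.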
>>
>> On Mon, Sep 28, 2020 at 12:28 PM C Williams <cwilliams3320(a)gmail.com>
>> wrote:
>>
>>> Hello,
>>>
>>> We recently received 5 servers. All have about 3 TB of storage.
>>>
>>> I want to deploy an oVirt HCI using as much of my storage and compute
>>> resources as possible.
>>>
>>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster?
>>>
>>> I have deployed replica 3 setups and know about replica 2 + arbiter, but
>>> an arbiter would not be applicable here, since I have equal storage on
>>> all of the planned bricks.
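>>>
>>> In Gluster terms, what I am picturing is something like this (names
>>> made up; I don't know whether the HCI installer would accept a
>>> replica count of 5):
>>>
>>>   gluster volume create vmstore replica 5 \
>>>       host1:/gluster/brick host2:/gluster/brick host3:/gluster/brick \
>>>       host4:/gluster/brick host5:/gluster/brick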
>>>
>>> Thank you for your help!!
>>>
>>> C Williams