One important step is to align the XFS filesystem to the RAID stripe size * stripe width.
Don't skip this, or you may run into performance problems.
Details can be found
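As an illustration, here is a minimal sketch of the alignment math and the corresponding mkfs.xfs flags, assuming a hypothetical RAID 6 of 6 disks with a 256 KiB per-disk chunk size (the numbers and device path are placeholders; use your controller's actual geometry):

```shell
# Hypothetical RAID 6 geometry: 6 disks total, 2 parity, so 4 data disks,
# with a 256 KiB per-disk chunk size.
stripe_unit_kb=256                 # su: per-disk chunk size
data_disks=4                       # sw: number of data-bearing disks
stripe_width_kb=$((stripe_unit_kb * data_disks))
echo "Full stripe: ${stripe_width_kb} KiB"

# Pass the geometry to mkfs.xfs (echoed here rather than run against a device):
echo mkfs.xfs -i size=512 -d su=${stripe_unit_kb}k,sw=${data_disks} /dev/raid6_lv
```

With su=256k and sw=4, XFS lays out allocations on 1024 KiB full-stripe boundaries, avoiding read-modify-write cycles on the array.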
Best Regards,
Strahil Nikolov
On Tuesday, September 29, 2020 at 16:36:10 GMT+3, C Williams
<cwilliams3320(a)gmail.com> wrote:
Hello,
We have decided to get a 6th server for the install. I hope to set up a 2x3 distributed
replica 3.
So we are not going to worry about the "5 server" situation.
Thank You All For Your Help !!
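For a 2x3 distributed replica 3, the volume create command might look like the sketch below, assuming example hostnames serverA through serverF and a brick directory under each server's /brick mount (all names are placeholders):

```shell
# Bricks are grouped into replica sets of 3 in the order listed:
# the first three form one replica set, the last three the other.
create_cmd="gluster volume create vol1 replica 3 \
serverA:/brick/brick1 serverB:/brick/brick1 serverC:/brick/brick1 \
serverD:/brick/brick1 serverE:/brick/brick1 serverF:/brick/brick1"
echo "$create_cmd"
```

df -h on a client mounting vol1 would then show roughly the sum of both replica sets (~6 TB with 3 TB bricks).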
On Mon, Sep 28, 2020 at 5:53 PM C Williams <cwilliams3320(a)gmail.com> wrote:
Hello,
More questions on this -- since I have 5 servers, could the following work? Each
server has one 3 TB RAID 6 partition that I want to use for contiguous storage.
Mountpoint for the RAID 6 partition (3 TB): /brick

Server A: VOL1 - Brick 1                  directory /brick/brick1 (VOL1 data brick)
Server B: VOL1 - Brick 2 + VOL2 - Brick 3 directories /brick/brick2 (VOL1 data brick)
                                          and /brick/brick3 (VOL2 arbiter brick)
Server C: VOL1 - Brick 3                  directory /brick/brick3 (VOL1 data brick)
Server D: VOL2 - Brick 1                  directory /brick/brick1 (VOL2 data brick)
Server E: VOL2 - Brick 2                  directory /brick/brick2 (VOL2 data brick)
Questions about this configuration:
1. Is it safe to use a mount point 2 times?
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
says "Ensure that no more than one brick is created from a single mount." In my
example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B.
2. Could I start a standard replica 3 (VOL1) with 3 data bricks, then add 2 additional data
bricks plus 1 arbiter brick (VOL2) to create a distributed-replicate cluster
providing ~6 TB of contiguous storage?
By contiguous storage I mean that df -h would show ~6 TB of disk space.
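If this layout were built as two separate volumes, the create commands might look roughly like the sketch below, with VOL2 as a 'replica 3 arbiter 1' volume whose arbiter brick lands on Server B (hostnames follow the lettering above and are placeholders; this only illustrates the layout, it does not answer whether the two can be merged into one distributed volume):

```shell
# VOL1: plain replica 3 across servers A, B, C
vol1_cmd="gluster volume create vol1 replica 3 \
serverA:/brick/brick1 serverB:/brick/brick2 serverC:/brick/brick3"

# VOL2: replica 3 arbiter 1 -- data bricks on D and E, arbiter brick on B
vol2_cmd="gluster volume create vol2 replica 3 arbiter 1 \
serverD:/brick/brick1 serverE:/brick/brick2 serverB:/brick/brick3"

echo "$vol1_cmd"
echo "$vol2_cmd"
```

Note that as two volumes, df -h would show ~3 TB per mount rather than one ~6 TB namespace.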
Thank You For Your Help !!
On Mon, Sep 28, 2020 at 4:04 PM C Williams <cwilliams3320(a)gmail.com> wrote:
> Strahil,
>
> Thank You For Your Help !
>
> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>> You can set up your bricks in such a way that each host has at least 1 brick.
>> For example:
>> Server A: VOL1 - Brick 1
>> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
>> Server D: VOL2 - brick 1
>>
>> The optimal setup would be to find a small system/VM to act as an arbiter, giving you
a 'replica 3 arbiter 1' volume.
>>
>> Best Regards,
>> Strahil Nikolov
>> On Monday, September 28, 2020 at 20:46:16 GMT+3, Jayme
<jaymef(a)gmail.com> wrote:
>> It might be possible to do something similar to what is described in the documentation
here:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4...
-- but I'm not sure if oVirt HCI would support it. You might have to roll your own
GlusterFS storage solution. Someone with more Gluster/HCI knowledge might know better.
>>
>> On Mon, Sep 28, 2020 at 1:26 PM C Williams <cwilliams3320(a)gmail.com>
wrote:
>>> Jayme,
>>>
>>> Thanks for getting back to me !
>>>
>>> If I wanted to be wasteful with storage, could I start with an initial
replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter resolve
split-brains for 4 bricks ?
>>>
>>> Thank You For Your Help !
>>>
>>> On Mon, Sep 28, 2020 at 12:05 PM Jayme <jaymef(a)gmail.com> wrote:
>>>> You can only do HCI in multiples of 3. You could do a 3-server HCI
setup and add the other two servers as compute nodes, or you could add a 6th server and
expand HCI across all 6.
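At the Gluster level, expanding an existing replica 3 volume across 6 servers could be sketched as below, assuming a volume named vol1 and example hostnames (oVirt HCI would normally drive this through its own tooling; this only shows the underlying mechanism):

```shell
# Add a second replica set of 3 bricks, turning a 1x3 volume into a 2x3
# distributed-replicate volume (hostnames and paths are examples):
expand_cmd="gluster volume add-brick vol1 \
serverD:/brick/brick1 serverE:/brick/brick1 serverF:/brick/brick1"
echo "$expand_cmd"

# Then rebalance so existing data spreads across both replica sets:
echo gluster volume rebalance vol1 start
```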
>>>>
>>>> On Mon, Sep 28, 2020 at 12:28 PM C Williams
<cwilliams3320(a)gmail.com> wrote:
>>>>> Hello,
>>>>>
>>>>> We recently received 5 servers. All have about 3 TB of storage.
>>>>>
>>>>> I want to deploy an oVirt HCI using as much of my storage and compute
resources as possible.
>>>>>
>>>>> Would oVirt support a "replica 5" HCI (Compute/Gluster)
cluster ?
>>>>>
>>>>> I have deployed replica 3s and know about replica 2 + arbiter -- but
an arbiter would not be applicable here -- since I have equal storage on all of the
planned bricks.
>>>>>
>>>>> Thank You For Your Help !!
>>>>>
>>>>> C Williams
>>>>> _______________________________________________
>>>>> Users mailing list -- users(a)ovirt.org
>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>> Privacy Statement:
https://www.ovirt.org/privacy-policy.html
>>>>> oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BAS...