[ovirt-users] Two node configuration.

Kasturi Narra knarra at redhat.com
Wed Dec 20 13:53:08 UTC 2017


Hello Jarek,

         As of today, gdeploy cannot work with different devices
on different nodes when deploying HC (hyperconverged). Currently the device
name has to be the same on the data and arbiter nodes.
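
To illustrate the limitation, here is a minimal gdeploy configuration sketch
(hostnames and values are placeholders; the [hosts] and [backend-setup]
section names follow gdeploy's config format, but exact keys may vary by
version). The single devices= value is applied to every host listed, which is
why the device name must match across data and arbiter nodes:

```ini
[hosts]
node1.example.com
node2.example.com
arbiter.example.com

; One device list for all hosts -- gdeploy applies the same
; name (e.g. sda3) on the data nodes and on the arbiter node.
[backend-setup]
devices=sda3
vgs=gluster_vg
pools=gluster_pool
lvs=gluster_lv
mountpoints=/gluster/brick1
```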

Hope this helps !!

Thanks
kasturi

On Wed, Dec 20, 2017 at 2:29 PM, Jarek <jaru at jaru.eu.org> wrote:

> One thing:
>
> I have two nodes with sda3 partitions for the gluster deployment and one
> node with, for example, vdd (the arbiter).
> How can I adjust the gdeploy config to use a different device name for the
> arbiter node (without udev rules)?
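>
> [Editorial note: one possible workaround, untested here and not a gdeploy
> feature, is to normalize the device name on each node with a symlink before
> running gdeploy, then point the config at the common name. Whether gdeploy
> and LVM accept the symlinked path is an assumption:]

```shell
# On each data node (device names from this thread):
ln -s /dev/sda3 /dev/gluster_disk
# On the arbiter node:
ln -s /dev/vdd /dev/gluster_disk
# Then reference the common name in the gdeploy config:
#   devices=/dev/gluster_disk
```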
> ------------------------------
> *From: *"Jaroslaw Augustynowicz" <jaru at jaru.eu.org>
> *To: *"Sandro Bonazzola" <sbonazzo at redhat.com>
> *Cc: *"users" <users at ovirt.org>
> *Sent: *Friday, December 15, 2017 4:44:50 PM
>
> *Subject: *Re: [ovirt-users] Two node configuration.
>
> Yes, sure :)
>
> On 15 December 2017 16:43:10 CET, Sandro Bonazzola <sbonazzo at redhat.com>
> wrote:
>>
>>
>>
>> 2017-12-15 16:37 GMT+01:00 Jarosław Augustynowicz <jaru at jaru.eu.org>:
>>
>>> This could be a good idea:
>>> 2 physical nodes with local disks (RAID6) as gluster bricks + 2 KVM VMs
>>> with pcs and DRBD for the arbiter.
>>>
>>
>> Just be sure not to run the KVM arbiter VMs on the same physical hosts, or
>> it may become a mess.
>>
>>
>>> I'll test it.
>>>
>>> On 15 December 2017 13:05:08 CET, Sandro Bonazzola <sbonazzo at redhat.com>
>>> wrote:
>>>>
>>>>
>>>>
>>>> 2017-12-15 12:53 GMT+01:00 Jarek <jaru at jaru.eu.org>:
>>>>
>>>>> Yes, I checked it, but it seems I still need three nodes - two for
>>>>> storage and one smaller one for the arbiter.
>>>>> Is it safe to deploy it on only two nodes?
>>>>>
>>>>
>>>> The arbiter host can even be just a Raspberry Pi, as far as I can tell.
>>>> Adding Sahina.
>>>>
>>>>
>>>>
>>>>> Am I wrong?
>>>>>
>>>>> ------------------------------
>>>>> *From: *"Sandro Bonazzola" <sbonazzo at redhat.com>
>>>>> *To: *"Jaroslaw Augustynowicz" <jaru at jaru.eu.org>
>>>>> *Cc: *"users" <users at ovirt.org>
>>>>> *Sent: *Friday, December 15, 2017 12:03:24 PM
>>>>> *Subject: *Re: [ovirt-users] Two node configuration.
>>>>>
>>>>>
>>>>>
>>>>> 2017-12-15 8:55 GMT+01:00 Jarek <jaru at jaru.eu.org>:
>>>>>
>>>>>> Hello, currently I'm using KVMs with pcs & DRBD on VMs... is there any
>>>>>> oVirt solution for HA with two nodes (storage on local disks) without
>>>>>> pcs & DRBD for storage? I know about Gluster storage, but it needs a
>>>>>> third host :/
>>>>>>
>>>>>
>>>>> Did you check
>>>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/creating_arbitrated_replicated_volumes
>>>>> ?
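>>>>>
>>>>> [Editorial note: for reference, that guide creates an arbitrated
>>>>> replicated volume with the "replica 3 arbiter 1" syntax. A sketch with
>>>>> placeholder hostnames and brick paths; the third brick, the arbiter,
>>>>> stores only file names and metadata, which is why a small host
>>>>> suffices:]

```shell
# Two full data bricks plus one arbiter brick (metadata only).
gluster volume create testvol replica 3 arbiter 1 \
    server1:/bricks/brick1 \
    server2:/bricks/brick1 \
    arbiter1:/bricks/arbiter-brick
gluster volume start testvol
```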
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list
>>>>>> Users at ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> SANDRO BONAZZOLA
>>>>>
>>>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>>>
>>>>> Red Hat EMEA <https://www.redhat.com/>
>>>>> <https://red.ht/sig>
>>>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>
>>
>>
>
>

