[Users] RFE: A manual way of saying that only hostA in a DC shall be used as proxy for power commands

Itamar Heim iheim at redhat.com
Mon Jul 30 15:18:29 UTC 2012


On 07/30/2012 04:25 PM, Karli Sjöberg wrote:
>
> On 30 Jul 2012, at 12:26, Itamar Heim wrote:
>
>> On 07/30/2012 12:03 PM, Karli Sjöberg wrote:
>>>
>>> On 30 Jul 2012, at 11:01, Itamar Heim wrote:
>>>
>>>> On 07/30/2012 08:56 AM, Karli Sjöberg wrote:
>>>>>
>>>>> On 28 Jul 2012, at 14:11, Moti Asayag wrote:
>>>>>
>>>>>> On 07/26/2012 02:53 PM, Karli Sjöberg wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> In my DC, I have three hosts added:
>>>>>>>
>>>>>>> hostA
>>>>>>> hostB
>>>>>>> hostC
>>>>>>>
>>>>>>> I want a way to force the engine to use only hostA as a proxy for
>>>>>>> power commands.
>>>>>>
>>>>>> The algorithm for selecting a host to act as a proxy for PM commands
>>>>>> is quite naive: any host in the system with status UP.
>>>>>>
>>>>>> You can see how the proxy is selected in
>>>>>> FencingExecutor.FindVdsToFence() in
>>>>>> ovirt-engine/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/FencingExecutor.java
>>>>>>
>>>>>> There is no other algorithm for the selection at the moment.
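>>>>>>
>>>>>> In rough terms, the current selection amounts to something like the
>>>>>> following (a simplified, self-contained sketch with made-up type and
>>>>>> helper names; the real code works on the engine's own types):
>>>>>>
>>>>>> import java.util.List;
>>>>>>
>>>>>> // Simplified sketch of the naive proxy selection; the real logic
>>>>>> // lives in FencingExecutor.FindVdsToFence().
>>>>>> class FencingSketch {
>>>>>>     enum Status { UP, DOWN }
>>>>>>
>>>>>>     static class Host {
>>>>>>         final String name;
>>>>>>         final Status status;
>>>>>>         Host(String name, Status status) {
>>>>>>             this.name = name;
>>>>>>             this.status = status;
>>>>>>         }
>>>>>>     }
>>>>>>
>>>>>>     // Return the first host that is UP and is not the host being
>>>>>>     // fenced; there is no preference beyond that.
>>>>>>     static Host findProxy(List<Host> hosts, Host toFence) {
>>>>>>         for (Host h : hosts) {
>>>>>>             if (h.status == Status.UP
>>>>>>                     && !h.name.equals(toFence.name)) {
>>>>>>                 return h;
>>>>>>             }
>>>>>>         }
>>>>>>         return null; // no proxy available
>>>>>>     }
>>>>>> }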
>>>>>>
>>>>>> How would you handle a case in which hostA isn't responsive? Wouldn't
>>>>>> you prefer trying to perform the fencing using another available host?
>>>>>
>>>>>
Let me explain a little so you can better understand my reasoning
>>>>> behind this configuration.
>>>>>
>>>>> We work with segmented, separated networks: one network for public
>>>>> access, one for storage traffic, one for management, and so on. That
>>>>> means that if the nodes themselves have to do their own power
>>>>> management, the nodes would require three interfaces each, and the
>>>>> metal we are using for hosts just doesn't have that. But if we can use
>>>>> the engine to do that, the hosts would only require two interfaces,
>>>>> which most 1U servers are equipped with as standard (plus one
>>>>> iLO/IPMI/whatever), so we can use them as hosts without issue. The
>>>>> backend then has one extra interface that it can use to communicate
>>>>> with the respective service processors over the power management
>>>>> network.
>>>>>
>>>>> Is there a "better" way to achieve what we are aiming for? Ideally, I
>>>>> would like to set up the two NICs in a bond and create VLAN interfaces
>>>>> on top of that bond. That way, I can have as many virtual interfaces as
>>>>> I want without having more than two physical NICs, but I haven't been
>>>>> able to find a good HOWTO explaining the process.
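>>>>>
>>>>> From what I have gathered so far, the configuration would look roughly
>>>>> like this (initscripts-style ifcfg files on RHEL/Fedora; the device
>>>>> names, VLAN ID and addresses are just placeholders, and I'm not sure
>>>>> this is the recommended way):
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1):
>>>>>   DEVICE=eth0
>>>>>   MASTER=bond0
>>>>>   SLAVE=yes
>>>>>   ONBOOT=yes
>>>>>   BOOTPROTO=none
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-bond0:
>>>>>   DEVICE=bond0
>>>>>   BONDING_OPTS="mode=active-backup miimon=100"
>>>>>   ONBOOT=yes
>>>>>   BOOTPROTO=none
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-bond0.100 (one per VLAN):
>>>>>   DEVICE=bond0.100
>>>>>   VLAN=yes
>>>>>   ONBOOT=yes
>>>>>   BOOTPROTO=none
>>>>>   IPADDR=192.168.100.10
>>>>>   NETMASK=255.255.255.0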
>>>>>
>>>>
>>>> I think there is a difference between:
>>>> 1. allowing the engine to fence
>>>> 2. allowing the fencing host to be chosen per cluster (or per host)
>>>>
>>>> It sounds like you actually want #1, but could live with #2 by
>>>> installing the engine as a host as well.
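>>>>
>>>> As a purely hypothetical sketch, #2 could be a preferred proxy host
>>>> with a fallback to the current naive selection (reusing the Host/Status
>>>> types and findProxy() from the sketch earlier in the thread):
>>>>
>>>> // Hypothetical: prefer a designated proxy host; if it is not UP,
>>>> // fall back to the current "any UP host" behaviour.
>>>> static Host findProxyWithPreference(List<Host> hosts, Host toFence,
>>>>                                     String preferredName) {
>>>>     for (Host h : hosts) {
>>>>         if (h.name.equals(preferredName) && h.status == Status.UP
>>>>                 && !h.name.equals(toFence.name)) {
>>>>             return h; // the designated proxy is up; use it
>>>>         }
>>>>     }
>>>>     return findProxy(hosts, toFence); // any UP host will do
>>>> }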
>>>
>>> Exactly, I can live with #2, as I have the engine added as hostA in my DC.
>>
>> Well, the question is whether choosing another host to use for fencing
>> would/should be limited to hosts from the same DC; if so, the engine
>> could only be used to fence one DC.
>
> I'm quoting you here:
> "1. power management is DC wide, not cluster."
>
> So this wouldn't be any different from its current state.

True, but if you have multiple DCs, the engine as a host can be used to
fence only one DC, while if the engine is 'special', it can be used to
fence in all DCs.

>
>
>> Also, for any host other than the engine, the question is what to do if
>> it is down...
>
>
>
> With kind regards
> -------------------------------------------------------------------------------
> Karli Sjöberg
> Swedish University of Agricultural Sciences
> Box 7079 (Visiting Address Kronåsvägen 8)
> S-750 07 Uppsala, Sweden
> Phone:  +46-(0)18-67 15 66
> karli.sjoberg at slu.se
>