[ovirt-users] OVirt Node / RHEV-H and Multiple NICs/bond
Christopher Young
mexigabacho at gmail.com
Fri Feb 26 20:35:28 UTC 2016
I should note that I was able to slowly work through this by adding
the node to the engine, running the hosted-engine setup from within
the node, removing the node via the Engine's WebUI when the conflict
message comes up, and letting the hosted-engine setup finish. That's
not ideal, but it works. In the future, I believe a simple prompt that
allows the user to simply acknowledge that the node has already been
added and reinitialize things would be sufficient.
This does make me fearful of one thing, so if you can clarify I would
really appreciate it:
Now that I have my nodes up and in the engine, is renaming the default
cluster/datacenter/etc. going to impact the hosted-engine in any way?
I worry because my first attempt at bringing this platform up failed
on the 2nd node for the hosted-engine setup due to the 'Default'
cluster not existing. Do the hosted-engine's HA pieces utilize this
in any way (or is it safe for me to rename things now that I have
them up)?
Many thanks,
Chris
On Fri, Feb 26, 2016 at 2:41 PM, Christopher Young
<mexigabacho at gmail.com> wrote:
> Thanks for the advice. This DID work; however, I've run into more
> (what I consider to be) ridiculousness:
>
> Running hosted-engine setup seems to assume that the host has not
> already been added to the Default cluster (I had an issue previously
> where I had renamed the cluster, and that broke the hosted-engine
> setup since it appears to look for 'Default' as the cluster name - in
> my view that is something it should query for rather than assume it
> never changed).
>
> So, since I've already added the host (in order to get storage NICs
> configured and reach the storage that holds the hosted engine's VM
> disk), it won't allow me to add the hosted engine to this node. This
> is not a good experience for a customer in my view. If the host has
> already been added, then a simple prompt that asks if that is the
> case should suffice.
>
> Please take my comments as nothing more than customer experience
> feedback, not hateful complaints. I genuinely believe in this
> product, but things like this are VERY common in the enterprise
> space and could very well scare off people who have used products
> where this process is significantly cleaner.
>
> We need to be able to create storage NICs prior to hosted-engine
> setup. What's more, we need to KNOW not to add a node to the engine
> if we intend to run a hosted engine on it (and thus allow the
> hosted-engine setup to add the node to the DC/cluster/etc.). That
> should be very, very clear.
>
> Thanks,
>
> Chris
>
> On Wed, Feb 24, 2016 at 8:16 AM, Fabian Deutsch <fdeutsch at redhat.com> wrote:
>> Hey Christopher,
>>
>> On Tue, Feb 23, 2016 at 8:29 PM, Christopher Young
>> <mexigabacho at gmail.com> wrote:
>>> So, I have what I think should be a standard setup where I have
>>> dedicated NICs (to be bonded via LACP for storage) as well as NICs
>>> for various VLANs, etc.
>>>
>>> As is typical, I have the main system interface for the usual
>>> system IPs (em1 in this case).
>>>
>>> A couple of observations (and a "chicken and the egg problem"):
>>>
>>> #1. The RHEV-H/Ovirt-Node interface doesn't allow you to configure
>>> more than one interface. Why is this?
>>
>> This is by design. The idea is that you use the TUI to configure the
>> initial NIC; all subsequent configuration will be done through Engine.
>>
>>> #2. This prevents me from bringing up an interface for access to my
>>> NetApp SAN (which I keep on separate networking/VLANs for
>>> best-practices purposes).
>>>
>>> If I'm unable to bring up a regular system interface AND an interface
>>> for my storage, then how am I going to be able to install a RHEV-M
>>> (engine) hosted-engine VM, since I can't both have an interface for
>>> this VM's IP AND be able to connect to my storage network?
>>>
>>> In short, I'm confused. I see this as a very standard enterprise
>>> setup so I feel like I must be missing something obvious. If someone
>>> could educate me, I'd really appreciate it.
>>>
>>
>> This is a valid point - you cannot configure Node from the TUI to
>> connect to more than one network.
>>
>> What you can do, however, is temporarily set up a route between the
>> two networks to bootstrap Node. After setup you can use Engine to
>> configure another NIC on Node to access the storage network.
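>>
>> For example, something like this could work (just a sketch - the
>> subnet, gateway and NIC names below are placeholders, adjust them to
>> your environment):
>>
>>   # reach the storage VLAN via the management NIC while bootstrapping
>>   ip route add 10.10.20.0/24 via 192.168.1.1 dev em1
>>
>>   # drop the temporary route once Engine manages the storage NIC
>>   ip route del 10.10.20.0/24 via 192.168.1.1 dev em1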
>>
>> The other option I see is to drop to shell and manually configure the
>> second NIC by creating an ifcfg file.
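>>
>> Roughly along these lines (again only a sketch - the device name,
>> address and path are examples, and the exact keys depend on your
>> Node version):
>>
>>   # /etc/sysconfig/network-scripts/ifcfg-em2
>>   DEVICE=em2
>>   ONBOOT=yes
>>   BOOTPROTO=none
>>   IPADDR=10.10.20.11
>>   NETMASK=255.255.255.0
>>   NM_CONTROLLED=no
>>
>> and then "ifup em2" (or a network service restart) to bring it up.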
>>
>> Note: In the future we plan to let you use Cockpit to configure
>> networking - this will also allow you to configure multiple NICs.
>>
>> Greetings
>> - fabian
>>
>>> Thanks,
>>> Chris
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>> --
>> Fabian Deutsch <fdeutsch at redhat.com>
>> RHEV Hypervisor
>> Red Hat