Thanks, Gianluca.

On 28/09/2022 08:12, Gianluca Cecchi wrote:
On Tue, Sep 27, 2022 at 5:22 PM Matthew J Black <matthew@peregrineit.net> wrote:
I need a quick piece of advice, please.

I'm at the stage of setting up the oVirt Engine VM (i.e. doing a "hosted-engine --deploy").

The Host has 3 NICs.

NIC_1 and NIC_2 are bonded (bond1) and run 2 VLANs (on bond1.1 and bond1.2).

VLAN_1 is to be used as the "everyday connection VLAN for the VMs" (including the oVirt Engine VM - I think).

VLAN_2 is *only* to be used for data traffic to-and-from our Ceph Cluster (i.e. via the Ceph iSCSI Gateway Nodes).

NIC_3 (running VLAN_3) is to be used for oVirt-host-to-oVirt-host comms (including "local" Gluster traffic - yes, the (oVirt) hosts are running a couple of Gluster drives).
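For reference, the layout is roughly the following (an nmcli sketch; the device names eno1/eno2/eno3, the LACP bonding mode, and the VLAN IDs 1/2/3 are placeholders rather than our real values):

  # bond1 over NIC_1 and NIC_2
  nmcli con add type bond con-name bond1 ifname bond1 bond.options "mode=802.3ad"
  nmcli con add type ethernet con-name bond1-port1 ifname eno1 master bond1
  nmcli con add type ethernet con-name bond1-port2 ifname eno2 master bond1
  # VLAN_1 (everyday VM traffic) and VLAN_2 (Ceph iSCSI data) on the bond
  nmcli con add type vlan con-name bond1.1 ifname bond1.1 dev bond1 id 1
  nmcli con add type vlan con-name bond1.2 ifname bond1.2 dev bond1 id 2
  # VLAN_3 (host-to-host and Gluster traffic) on NIC_3
  nmcli con add type vlan con-name eno3.3 ifname eno3.3 dev eno3 id 3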

My question is: Which interface should we use for the "ovirtmgmt" Bridge?

I suspect it should be NIC_3 (VLAN_3), and I'm 99.999% sure it *shouldn't* be bond1.2 (VLAN_2), but it might be bond1.1 (VLAN_1), so I thought I'd better get people's input.

You see, I'm not sure what the purpose of the "ovirtmgmt" bridge is. Is it for humans to talk to the oVirt Engine, or is it for the oVirt Engine to talk to the VMs (and hosts), or is it for some other purpose, or is it for some combination of these? (I have read the doco on the ovirtmgmt bridge, and I'm still somewhat confused.)

So, if someone wouldn't mind getting back to me about this, I'd appreciate it.

Cheers

Dulux-Oz


In general, the IP on the ovirtmgmt bridge is the one used to connect to the engine server (e.g. for web admin GUI access, and for SSH access to the server to check logs and so on); it is also used for management communication between the engine and the hosts.
So the choice of the adapter/bond should take this into consideration.
No VMs are involved, and there is no need to use that logical network (the ovirtmgmt one) for the VMs' virtual NICs as well, but you can do so if it is needed or practical (e.g. in a lab or a small environment).
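After deployment you can see the result on a host with something like this (standard iproute2 commands; "ovirtmgmt" is the default bridge name):

  # show the management bridge and the host's IP on it
  ip -br addr show ovirtmgmt
  # list the ports plugged into it (the adapter/bond, plus the engine
  # VM's vnic on the host where the engine VM is currently running)
  ip link show master ovirtmgmt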
The one real must is that the network where you decide to put the engine IP has to be routable to the networks where you decide to put the IPs of your hosts (e.g. host1 could be on network1 and host2 on network2, with both network1 and network2 routable to the network where the engine IP lives).
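You can verify that from each host with something like this (192.0.2.10 is just a placeholder for the engine IP you plan to use):

  # confirm the host has a route towards the engine IP
  ip route get 192.0.2.10
  # and that traffic actually gets through
  ping -c 3 192.0.2.10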
In the case of a Self-Hosted Engine environment (the engine runs as a VM inside the oVirt infrastructure that it manages), you start the deployment process with a command on one server, which by the end will have become one of the managed hosts (hypervisors). So in general, for the ovirtmgmt bridge you will use the network, and so the adapter/bond, that you use when you connect from your client to the host through SSH to start the whole process with the "hosted-engine --deploy" command.
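If you want to double-check which device that is before you start, from your SSH session on the host you can do something like this ($SSH_CLIENT is set by sshd to "client_ip client_port server_port"):

  # find the device your SSH session comes in on; the "dev ..." field
  # in the output is a good candidate for the ovirtmgmt bridge
  ip route get ${SSH_CLIENT%% *}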

See also this for temporary IP allocation for the self-hosted engine appliance (the links below are for downstream RHV, but it is much the same for oVirt):
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/rhv_requirements#general_requirements

Also these:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/considerations#networking-considerations
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/recommendations#networking-recommendations

and review the deploy process flow:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/installing_the_red_hat_virtualization_manager_she_cli_deploy#Deploying_the_Self-Hosted_Engine_Using_the_CLI_install_RHVM

HIH,
Gianluca