[hosted-engine] cluster network setup deadlock

Hi,

I'm testing hosted-engine in my lab and am having trouble with the network configuration of my cluster. I have two hosts plus storage (iSCSI/NFS), each host with two NICs. My aim is a highly available platform, so it must tolerate the loss of one NIC; therefore I want to use bonding + VLANs, with a scheme like this: the bond device carries the ovirtmgmt network (management/display/migration, not a VM network) with no VLAN tagging, and several VLAN-tagged networks for VMs and the storage connection sit on top of it. I use this kind of scheme all the time with non-self-hosted oVirt, as it provides reliability, requires only two NICs, allows PXE kickstarting, and requires no switch reconfiguration during or after host setup.

However, with the self-hosted engine I'm stuck. At the deploy stage the installer puts the engine VM on the ovirtmgmt network, making it a VM network, and I can't change that afterwards: when I uncheck "VM network" for ovirtmgmt in the web interface, the change cannot be applied on the host that is running the engine VM (OK, that's expected); but if I set up another host with the desired network configuration (i.e. ovirtmgmt as a non-VM network plus separate VLAN-tagged networks for VMs), the engine VM cannot migrate to or start on it, because that host lacks the "ovirtmgmt" bridge interface.

So, is there any way to change the engine VM's network settings so that it runs on a different bridge, not ovirtmgmt?

-- Yuriy Demchenko
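For illustration only, a minimal sketch of how the VLAN-tagged logical networks described above could be defined through oVirt's Python SDK. It uses ovirtsdk4 (the SDK for oVirt 4.x, which postdates this 2014 thread); the engine URL, credentials, data-center name, network names and VLAN IDs are all placeholders, not values from the thread.

```python
# Sketch, not a definitive procedure: define VLAN-tagged logical networks
# in the data center, to be attached to the bond on each host afterwards.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='secret',                                  # placeholder credentials
    ca_file='ca.pem',
)

networks_service = connection.system_service().networks_service()

# VLAN-tagged VM network that will ride on top of the bond.
networks_service.add(
    types.Network(
        name='vm_vlan100',                              # placeholder name
        data_center=types.DataCenter(name='Default'),   # placeholder data center
        vlan=types.Vlan(id=100),                        # placeholder VLAN ID
        usages=[types.NetworkUsage.VM],                 # the "VM network" checkbox
    ),
)

# VLAN-tagged storage network (iSCSI/NFS), intended as a non-VM network.
networks_service.add(
    types.Network(
        name='storage_vlan200',
        data_center=types.DataCenter(name='Default'),
        vlan=types.Vlan(id=200),
        usages=[],  # no 'vm' usage requested; verify the engine's default here
    ),
)

connection.close()
```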

----- Original Message -----
From: "Yuriy Demchenko" <demchenko.ya@gmail.com> To: users@ovirt.org Sent: Friday, September 5, 2014 11:25:17 AM Subject: [ovirt-users] [hosted-engine] cluster network setup deadlock
Since the hosted-engine is a VM, it should be connected to a VM network, which leaves only two options for that VM network:
1. VLAN - the network co-exists over the bond with the other VLAN networks (and, if you wish, you can define a non-VLAN, non-VM network to act as the display/migration network).
2. Non-VLAN - which requires the network to be defined exclusively on the host.
There is a bug recently fixed by Toni (cc'ed) which seems to solve your issue: https://bugzilla.redhat.com/show_bug.cgi?id=1124207
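Along the same lines, a hedged sketch (again using the later ovirtsdk4 Python SDK, with placeholder connection details) of inspecting ovirtmgmt's usages and dropping the 'vm' usage via the API; this mirrors the "VM network" checkbox discussed above. As noted in the thread, such a change cannot be applied on a host while a running VM, including the engine VM itself, is attached to the ovirtmgmt bridge there.

```python
# Sketch: read ovirtmgmt's usages, then update the network without the 'vm'
# usage. Connection details are placeholders; whether the empty/reduced
# usages list is accepted should be verified against your engine version.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

networks_service = connection.system_service().networks_service()

# Find the ovirtmgmt logical network (assumes a single data center;
# filtering is done client-side for simplicity).
ovirtmgmt = next(n for n in networks_service.list() if n.name == 'ovirtmgmt')
print('current usages:', ovirtmgmt.usages)

# Keep whatever other usages the network has, but drop 'vm'.
remaining = [u for u in (ovirtmgmt.usages or []) if u != types.NetworkUsage.VM]
networks_service.network_service(ovirtmgmt.id).update(
    types.Network(usages=remaining),
)

connection.close()
```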
participants (2)
- Moti Asayag
- Yuriy Demchenko