
On 02/14/2012 03:35 PM, Roy Golan wrote:
----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com>
To: "Roy Golan" <rgolan@redhat.com>
Cc: engine-devel@ovirt.org
Sent: Thursday, February 9, 2012 10:02:03 AM
Subject: Re: [Engine-devel] bridgless networks
On 02/06/2012 04:47 PM, Roy Golan wrote:
Hi All
Lately I've been working on the design of the bridge-less network feature in the engine. You can see it at http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge...
Please review the design. Note that there are some open issues, which you can find in the relevant section. Reviews and comments are very welcome.
1. validations
1.1. do you block setting a logical network to "don't allow running VMs" if it has a vnic associated with it?
1.2. do you check on import that a vnic isn't connected to a logical network which doesn't allow running VMs?
1.3. do you check, when the REST API tries to add/edit a vnic, that the chosen logical network is allowed to run VMs?
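To make the three validation questions concrete, here is a minimal sketch of what such checks could look like. All names here (`Network`, `vnics`, `vm_network`, the function names) are illustrative, not the actual engine code:

```python
# Illustrative sketch only -- not the actual oVirt engine code.
class Network:
    def __init__(self, name, vm_network=True):
        self.name = name
        self.vm_network = vm_network  # True: VMs are allowed on this network
        self.vnics = []               # vnics currently associated with it

def can_disable_vm_network(network):
    # 1.1: block flipping a network to "no VMs" while vnics still use it
    return len(network.vnics) == 0

def validate_vnic_attach(network):
    # 1.2 / 1.3: reject an imported or REST-added/edited vnic that points
    # at a logical network which doesn't allow running VMs
    if not network.vm_network:
        raise ValueError(
            "network %s does not allow running VMs" % network.name)

net = Network("blue", vm_network=False)
print(can_disable_vm_network(net))  # True - no vnics are attached
```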
2. changes
2.1. can a logical network be changed between allowing/disallowing running VMs?
2.2. what's the flow when enabling running VMs? will the logical network become non-operational until all hosts are reconfigured with a bridge (if applicable)? what is the user flow to reconfigure the hosts (go one by one? do what? there is no change to host-level config)?
2.3. what's the flow when disallowing running VMs (bridge-less)? no need to make the network non-operational, but same question: what should the admin do to reconfigure the hosts (no host-level config change is needed by him, just a reconfigure, iiuc)?
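The flow asked about in 2.2 can be sketched as a simple status computation: after enabling "run VMs" on a network, it would stay non-operational until every host in the cluster reports a bridge for it. This is a hypothetical illustration, not the engine's monitoring code:

```python
# Hypothetical sketch of the 2.2 flow: the network is operational only
# once every host in the cluster reports a bridge for it.
def network_operational(hosts, network_name):
    # hosts: {host_name: set of bridged network names the host reports}
    return all(network_name in bridges for bridges in hosts.values())

hosts = {"host1": {"blue"}, "host2": set()}
print(network_operational(hosts, "blue"))  # False - host2 lacks the bridge
hosts["host2"].add("blue")                 # admin reconfigures host2
print(network_operational(hosts, "blue"))  # True
```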
Thanks, Itamar
Since it will take some time until we add a type to a nic, the whole concept of enforcing bridging in the migration domain (namely the cluster) should be replaced with a much simpler approach: set bridged true/false during the attach action on the host (i.e. setupNetworks).
This means there are no monitoring checks, no new fields on logical networks and no validations, but migration might fail in case the target network is not bridged, the underlying nic is not a vNic, etc.
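Roughly, the proposal moves the decision into the attach call itself, and the cost shows up only at migration time. The verb and parameter names below are illustrative, not the real setupNetworks/VDSM API:

```python
# Sketch of the proposed per-attach flag; names are illustrative,
# not the real setupNetworks verb.
def setup_networks(host, network, nic, bridged):
    if bridged:
        return "%s: bridge %s on %s" % (host, network, nic)
    # bridge-less: put the network config directly on the nic
    return "%s: %s directly on %s" % (host, network, nic)

def migration_possible(src_bridged, dst_bridged):
    # with no cluster-level validation, this is discovered only at
    # migration time: a VM plugged into a bridge on the source host
    # needs a bridge on the destination as well
    return dst_bridged or not src_bridged

print(setup_networks("host1", "blue", "eth0", bridged=True))
print(migration_possible(True, False))  # False - migration would fail
```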
Once we support nic types it will be easy to add the ability to mark a network as "able to run VMs" and advise the attach-nic action, based on the nic type, whether to set up a bridge or not.
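That later step could be as small as a single decision function; again, the names and the `"vNic"` type string are hypothetical placeholders, not a committed design:

```python
# Hypothetical sketch of the nic-type-aware advice described above:
# once nic types exist, the attach action can decide on a bridge itself.
def needs_bridge(network_runs_vms, nic_type):
    # a VM-capable network on a plain physical nic needs a bridge;
    # a vNic (or a network that never runs VMs) does not
    return network_runs_vms and nic_type != "vNic"

print(needs_bridge(True, "physical"))  # True
print(needs_bridge(True, "vNic"))      # False
```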
thoughts?
what i don't like about this:
1. no validations == allows more user errors
2. more definitions at host level (+ allows more user error in misconfiguring the cluster).
3. probably need to obsolete this when we add it at the logical network level + handle upgrade for it.
so the question is: what is the implementation gap between doing this at the logical network (cluster level) and doing this at the host level?