----- Original Message -----
From: "Livnat Peer" <lpeer(a)redhat.com>
To: "Dan Kenigsberg" <danken(a)redhat.com>
Cc: "Itamar Heim" <iheim(a)redhat.com>, "Simon Grinberg"
Sent: Friday, November 23, 2012 9:02:09 AM
Subject: Re: [Engine-devel] Report vNic Implementation Details
On 22/11/12 13:59, Dan Kenigsberg wrote:
> On Thu, Nov 22, 2012 at 09:55:43AM +0200, Livnat Peer wrote:
>> On 22/11/12 00:02, Itamar Heim wrote:
>>> On 11/21/2012 10:53 PM, Andrew Cathrow wrote:
>>>> ----- Original Message -----
>>>>> From: "Moti Asayag" <masayag(a)redhat.com>
>>>>> To: engine-devel(a)ovirt.org
>>>>> Sent: Wednesday, November 21, 2012 12:36:58 PM
>>>>> Subject: [Engine-devel] Report vNic Implementation Details
>>>>> Hi all,
>>>>> This is a proposal for reporting the vNic implementation
>>>>> details as
>>>>> reported by the guest agent per each vNic.
>>>>> Please review the wiki and add your comments.
>>>> While we're making the change, is there anything else we'd want to
>>>> report - MTU, MAC (since a user might try to override it), etc.?
>>> iirc, the fact that ip addresses appear under guest info in the api
>>> and not under the vnic was a modeling decision.
>>> for example, what if the guest is using a bridge, or a bond (yes,
>>> unlikely, but the point is it may be incorrect to assume an ip-vnic
>>> mapping)?
>>> michael - do you remember the details of that discussion?
>> I'd love to know what drove this modeling decision.
>> The use case above is not a typical use case.
>> We know we won't be able to present the guest's internal network
>> configuration accurately in some scenarios, but if we cover 90% of
>> the use cases correctly, I think we should not let perfect be the
>> enemy of (very) good ;)
> We do not support this yet, but once we have nested virtualization,
> it won't be that odd to have a bridge or bond in the guest. I know
> that we already have users with in-guest vlan devices.
The fact that it's not odd does not mean it is common, which was the
point I was trying to make.
I agree that we should be able to accommodate such info, not sure
it is required at this point.
At this point what you need to make sure is that if it does exist, you
do not crash while handling its report.
Don't forget that by using a hook you can create nested virtualization today.
> I think that the api should accommodate these configurations - even
> if we do not report them right away. The guest agent already reports
> (some of) the information:
Which API are you referring to? If you are talking about the
VDSM-Engine API, we do not change it; we only use what is already
reported by the GA. I don't think we should change the API for future
support...
>> The Guest Agent reports the vNic details:
>> IP addresses (both IPv4 and IPv6).
>> vNic internal name
> Actually, the guest agent reports all in-guest network devices -
> vNics (and bonds and vlans) included.
True, but AFAIU we won't be able to build the networking topology of
the guest from this report. For example, if the guest agent reports a
bridge, it does not say which interface it is connected to, etc.
Actually in most cases it does, since by default the bridge will get
the interface's IP.
Well, not quite accurate: the bridge gets the lowest MAC of all the
devices it is connected to; however, there was a fix in libvirt some
time back to make sure the external interface's MAC is the one the
bridge gets, by giving the tap/tun devices connected to the bridge a
MAC address starting with 'FE'.
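Assuming the mechanism meant here is a Linux bridge adopting the numerically
lowest MAC among its ports (with libvirt's fix prefixing tap/tun MACs with
'FE' so the external interface's MAC wins), a tiny illustrative sketch:

```python
# Hypothetical sketch, not engine/libvirt code: a Linux bridge adopts the
# numerically lowest MAC among its ports. libvirt's fix gives tap/tun
# ports a MAC starting with 'fe', so the real interface's MAC always wins.

def bridge_mac(port_macs):
    """Return the MAC the bridge would adopt: the lowest among its ports."""
    return min(port_macs, key=lambda m: int(m.replace(':', ''), 16))

# Without the fix, a tap device with a low MAC could "steal" the bridge MAC:
print(bridge_mac(['52:54:00:12:34:56', '02:11:22:33:44:55']))  # tap wins

# With the 'fe' prefix on tap devices, the external NIC's MAC wins:
print(bridge_mac(['52:54:00:12:34:56', 'fe:11:22:33:44:55']))
```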
>> The information retrieved by the Guest Agent should be reported to
>> the ovirt-engine and made viewable by its clients.
>> Today only the IPv4 addresses are reported to the user, kept at the
>> VM level. This feature will maintain the information at the vNic
>> level.
> I think we should report the information at the vNic level only when
> we can. If we have a vlan device which we do not know how to tie to
> a specific vNic, we'd better report its IP address at the VM level.
If I understand you correctly, you are suggesting to keep the IP
property also at the VM level, and for devices with reported IP
addresses which the engine cannot correlate to a vNIC, to hold the
information at this VM level.
My concern is that in the UI we currently display the VM IP in the
main grid and the IP per vNIC on the network sub-tab.
If we choose to hold the IP addresses the engine does not correlate at
the VM level, they become the more visible addresses for the users,
which I am not sure is what we want.
What I suggest is to add a property to the VM, say 'network devices',
that holds the GA report as a 'blob'. We can display this info in the
API at the VM level, and in the UI maybe display it on the general
sub-tab or add a dialog on the network sub-tab.
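A minimal sketch of the blob idea (the field name and report shape are
assumptions for illustration, not actual engine code): keep the raw GA
network report as an opaque JSON string on the VM and decode it only for
display.

```python
# Hypothetical sketch: store the GA network report as an opaque JSON blob
# on the VM record, decode on demand for the API/UI.
import json

vm = {'id': 'vm-1', 'guest_net_devices': None}  # hypothetical VM record

ga_report = [{'name': 'eth0', 'mac': '00:1a:4a:00:00:01',
              'ips': ['192.168.1.5']}]

# Store the report as an opaque blob...
vm['guest_net_devices'] = json.dumps(ga_report)

# ...and decode it on demand, e.g. for the general sub-tab.
for dev in json.loads(vm['guest_net_devices']):
    print(dev['name'], dev['ips'])
```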
I think you should display it as correlated as possible - since I
agree with you that the common case is an IP set directly on the vNIC,
in most cases you'll present the right info in the right place. Let's
not allow 10% of the cases to block us from doing the right thing for
the other 90%.
Try to correlate based on MAC and present the internal name and IP as
the guest info on the same line with the defined nic. (In the API they
can (and should, as Itamar noted) be separated, and probably held as a
blob per your suggestion.)
Now your problematic scenarios:
1. You have a single VLAN, bridge, etc. -> It translates to many
entries with the same MAC but only one has an IP -> collapse them all
and present the one that has the IP.
2. You have a MAC that does not match -> Present this as a guest-only
device (i.e. a new line - instead of a name, say 'guest only') and let
the user see and correlate if he wants - it is his network; if he did
crazy things, it's his problem.
3. Multiple bridges and VLANs -> Collect all IPs per MAC and collapse,
then present as in 1.
Of the 3 above I expect only the first to be found ATM, and even that
not often.
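The collapse/correlate flow above could be sketched roughly as follows
(the function name and the report shape - a list of dicts with 'name',
'mac' and 'ips' keys - are assumptions for illustration, not the actual
engine code):

```python
# Hypothetical sketch of the MAC-based correlation rules discussed above.

def correlate(defined_vnics, ga_report):
    """Map each defined vNIC's MAC to one collapsed GA entry; report
    unmatched MACs as 'guest only' devices."""
    by_mac = {}
    for dev in ga_report:
        # Scenarios 1/3: several entries (vlan, bridge, ...) share a MAC;
        # collapse them, preferring the entry that carries the IPs.
        cur = by_mac.get(dev['mac'])
        if cur is None or (dev['ips'] and not cur['ips']):
            by_mac[dev['mac']] = dev
    rows = []
    for vnic in defined_vnics:
        guest = by_mac.pop(vnic['mac'], None)
        rows.append({'vnic': vnic['name'],
                     'guest_name': guest['name'] if guest else None,
                     'ips': guest['ips'] if guest else []})
    # Scenario 2: a MAC the engine does not know -> a guest-only line.
    for dev in by_mac.values():
        rows.append({'vnic': 'guest only',
                     'guest_name': dev['name'], 'ips': dev['ips']})
    return rows

# Example: a bridge and its enslaved nic share the engine-known MAC,
# plus one MAC the engine does not recognize.
vnics = [{'name': 'nic1', 'mac': '00:1a:4a:00:00:01'}]
report = [
    {'name': 'eth0', 'mac': '00:1a:4a:00:00:01', 'ips': []},
    {'name': 'br0', 'mac': '00:1a:4a:00:00:01', 'ips': ['192.168.1.5']},
    {'name': 'vnet9', 'mac': 'aa:bb:cc:dd:ee:ff', 'ips': ['10.0.0.7']},
]
rows = correlate(vnics, report)
```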
In the future we can work on what I've suggested in a previous email
on this thread: present a guest network topology, same as we do for
the host.
> It might be worthwhile to note that we should (try to) correlate our
> idea of a vNic with the guest agent report according to the mac
> address. The guest can try to fool us by changing the mac address,
> but that is true for every bit of info coming from the agent.