I didn't say it is the right design; I guess you are right that you gain
nothing from such a bond. But this was discussed at length a year ago
(maybe more) with the network devs when we planned this feature, and it
was decided to allow creating a bond from VFs.
Thanks,
On Tue, Dec 6, 2016 at 5:04 PM, Martin Polednik <mpolednik(a)redhat.com>
wrote:
On 06/12/16 16:58 +0200, Michael Burman wrote:
> Hello Nathanaël
>
> VFs are regular NICs and can be used as regular NICs.
> We allow creating a bond from them and attaching logical networks to
> them because, as long as they are not used by any VM, they can be used
> just like any other interface on the host.
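>
> (For illustration only, not from the original design discussion: a
> minimal Python sketch, with a hypothetical PF name "ens1f0", that lists
> the VFs of a PF via sysfs and shows that each one is an ordinary host
> net device until it is handed to a VM.)
>
>     # list the VFs of a PF via sysfs; the PF name is an example
>     import glob, os
>
>     pf = "ens1f0"
>     for virtfn in sorted(glob.glob(f"/sys/class/net/{pf}/device/virtfn*")):
>         net_dir = os.path.join(virtfn, "net")
>         if os.path.isdir(net_dir):
>             # bound to a host netdev driver: usable as a regular NIC
>             print(os.path.basename(virtfn), "->", os.listdir(net_dir))
>         else:
>             # no host netdev: the VF is likely assigned to a VM (vfio)
>             print(os.path.basename(virtfn), "-> no host netdev")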
>
> When creating a bond from 2 VFs you should see the pencil icon, because
> you want to be able to edit the bond.
> A VF is an interface, and you can use it just as you use a regular
> interface on the host.
>
> * When you create a bond from 2 VFs, those VFs are no longer free and
> you can't use them to run a VM.
> * The same goes for attaching a network to a VF: that VF is no longer
> considered a free VF.
>
> There is no problem here; this is exactly how it was designed :) You
> have to remember that as long as the VFs are not used by a VM, they can
> be used as regular NICs and you can do whatever you want with them.
>
That doesn't mean we have the right design: there is no problem in
bonding 2 VFs belonging to 2 different PFs, but every sane piece of
software warns you if you try to bond 2 VFs from the same PF together
(what do you gain from such a bond?) -- this applies to both host and
guest.
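
(For illustration, a minimal Python sketch of such a warning -- the
interface names are made-up examples, not anything oVirt actually runs.
It compares the physfn sysfs links of the candidate slaves to detect
two VFs hanging off the same PF.)

    # warn when two candidate bond slaves are VFs of the same PF
    import os

    def pf_of(vf_iface):
        # /sys/class/net/<vf>/device/physfn points at the parent PF
        link = f"/sys/class/net/{vf_iface}/device/physfn"
        return os.path.realpath(link) if os.path.exists(link) else None

    slaves = ["ens1f0v0", "ens1f0v1"]   # example candidate slaves
    pfs = [pf_of(s) for s in slaves]
    if None not in pfs and len(set(pfs)) == 1:
        print("warning: both VFs share one PF -- no extra redundancy")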
>
> http://www.ovirt.org/develop/release-management/features/engine/sr-iov/
>
> Regards,
>
>
> On Tue, Dec 6, 2016 at 4:23 PM, Nathanaël Blanchet <blanchet(a)abes.fr>
> wrote:
>>
>> On 06/12/2016 at 13:19, Martin Polednik wrote:
>>
>> On 06/12/16 12:14 +0100, Nathanaël Blanchet wrote:
>>>
>>>> Hello,
>>>>
>>>> My new 10G NICs now support SR-IOV, and I've played with this new
>>>> feature as a passthrough device, so as to reduce my host CPU
>>>> consumption.
>>>>
>>>> Originally, I set up a bond on both 10G PF NICs.
>>>>
>>>> After trying many configurations, the only way I managed to use a
>>>> VF in a VM was to take one NIC out of the bond.
>>>>
>>>> So does it mean that it is impossible to run a VM with a VF whose
>>>> PF is attached to a bond?
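>>>>
>>>> (For illustration, a minimal Python sketch of the situation, with
>>>> "ens1f0" as a placeholder PF name: checking via sysfs whether the
>>>> PF is still enslaved to a bond before trying to use one of its VFs.)
>>>>
>>>>     # does the PF have a master (bond/bridge)?
>>>>     import os
>>>>
>>>>     pf = "ens1f0"  # placeholder name
>>>>     master = f"/sys/class/net/{pf}/master"
>>>>     if os.path.islink(master):
>>>>         print(pf, "is enslaved to",
>>>>               os.path.basename(os.path.realpath(master)))
>>>>     else:
>>>>         print(pf, "is not part of a bond or bridge")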
>>>>
>>>>
>>> As far as I know, it's not possible to do that. The reason is that
>>> the bond normally creates a new (logical) interface; what you are
>>> doing is assigning "part" of the bond directly to a VM, and the
>>> driver within the VM isn't aware of the bond.
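>>>
>>> (To make that concrete, a minimal sketch, assuming a bond named
>>> "bond0": the bond's slave list only ever contains the PFs, so a VF
>>> passed through to a VM is invisible to it, and vice versa.)
>>>
>>>     # the bond only knows its slave PFs; a passed-through VF
>>>     # never appears here ("bond0" is an example name)
>>>     with open("/sys/class/net/bond0/bonding/slaves") as f:
>>>         print("bond0 slaves:", f.read().split())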
>>>
>> This is what I supposed; the UI should prevent us from creating a VF
>> when the NIC is attached to a bond. The pencil icon shouldn't appear
>> in this case.
>>
>>
>>>> Moreover, something strange happens: during the boot of the VM, the
>>>> passthrough device gets a DHCP IP on the native VLAN of the bond,
>>>> and once it is finally up, the device actually ends up on a
>>>> different, predefined VLAN. That forces me to reconfigure the
>>>> network before I can ping anything on the wanted VLAN. Really crazy.
>>>>
>>>>
>>> This could be explained by the previous statement: bonding PFs at
>>> the hypervisor level and then assigning VFs to a VM will most likely
>>> cause undefined behavior.
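>>>
>>> (One way to at least make the VLAN deterministic, sketched here with
>>> example names and IDs: pin the VF to a fixed VLAN on the PF with the
>>> standard iproute2 knob, so the guest never sits on the bond's native
>>> VLAN.)
>>>
>>>     # pin VF 0 of the PF to VLAN 24 (names/ids are examples)
>>>     import subprocess
>>>
>>>     subprocess.run(
>>>         ["ip", "link", "set", "dev", "ens1f0", "vf", "0", "vlan", "24"],
>>>         check=True,
>>>     )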
>>>
>> The issue is the same when the PF is not attached to a bond, i.e. in
>> a supposedly working setup.
>>
>>
>>>> Another question: in which case can it be useful to bond 2 VFs? The
>>>> UI lets us do so, but it is impossible to add any bridge on top of
>>>> that virtual bond.
>>>>
>>>>
>>> At the hypervisor level? I believe it doesn't make sense.
>>>
>> I wonder because the UI allows it. Same as above: the user shouldn't
>> be allowed to bond two VFs, nor to attach a virtual network to a VF.
>>
>>
>>> If you require a bond between 2 PFs, you can assign 2 VFs, each from
>>> a different PF, to a VM and bond them within the guest.
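>>>
>>> (A minimal sketch of that guest-side bond via the kernel bonding
>>> sysfs interface; the interface names are examples, and it assumes
>>> the bonding module is loaded and the NICs are down.)
>>>
>>>     # run inside the guest: bond the two VF NICs
>>>     def write(path, value):
>>>         with open(path, "w") as f:
>>>             f.write(value)
>>>
>>>     write("/sys/class/net/bonding_masters", "+bond0")
>>>     write("/sys/class/net/bond0/bonding/mode", "active-backup")
>>>     write("/sys/class/net/bond0/bonding/slaves", "+eth0")  # VF of PF 1
>>>     write("/sys/class/net/bond0/bonding/slaves", "+eth1")  # VF of PF 2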
>>>
>>>> Given the large number of restrictions (migration and others), my
>>>> opinion is that this feature seems very difficult to use in
>>>> production...
>>>>
>>>>
>>> The use case for SR-IOV is maximum performance at the cost of
>>> convenience while still (somewhat) allowing you to scale.
>>>
--
Michael Burman
RedHat Israel, RHV-M Network QE
Mobile: 054-5355725
IRC: mburman