[ovirt-devel] SR-IOV feature

Alona Kaplan alkaplan at redhat.com
Sun Oct 26 10:39:00 UTC 2014



----- Original Message -----
> From: "Antoni Segura Puimedon" <asegurap at redhat.com>
> To: "Itamar Heim" <iheim at redhat.com>
> Cc: "Alona Kaplan" <alkaplan at redhat.com>, devel at ovirt.org
> Sent: Friday, October 24, 2014 9:33:45 PM
> Subject: Re: [ovirt-devel] SR-IOV feature
> 
> 
> 
> ----- Original Message -----
> > From: "Itamar Heim" <iheim at redhat.com>
> > To: "Alona Kaplan" <alkaplan at redhat.com>, devel at ovirt.org
> > Sent: Friday, October 24, 2014 12:21:00 PM
> > Subject: Re: [ovirt-devel] SR-IOV feature
> > 
> > On 10/05/2014 07:02 AM, Alona Kaplan wrote:
> > > Hi all,
> > >
> > > Currently SR-IOV in oVirt is only supported using vdsm-hook [1].
> > > This feature will add SR-IOV support to oVirt management system
> > > (including
> > > migration).
> > >
> > > You are more than welcome to review the feature page-
> > > http://www.ovirt.org/Feature/SR-IOV
> > >
> > >
> > > Thanks,
> > > Alona.
> > > _______________________________________________
> > > Devel mailing list
> > > Devel at ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/devel
> > >
> > 
> > Glad to see this.
> > 
> > some questions:
> > 
> > > Note: this feature is about exposing a virtualized (or VirtIO) vNic to
> > > the
> > > guest, and not about exposing the PCI device to it. This restriction is
> > > necessary for migration to be supported.
> > 
> > did not understand this sentence - are you hinting to macvtap?
> 
> Most likely macvtap, yes.
> 
> Additionally I think Martin Poledník is looking into direct sr-iov attachment
> to VMs as part of the pci passthrough work he is doing.
> 
> > 
> > > add/edit profile
> > 
> > so i gather the implementation is at profile level, which is at logical
> > network level?
> > how does this work exactly? can this logical network be vlan tagged or
> > must be native? if vlan tagged who does the tagging for the passthrough
> > device? (I see later on vf_vlan is one of the parameters to vdsm, just
> > wondering how the mapping can be at host level if this is a passthrough
> > device)?
> > is this because the use of virtio (macvtap)?

The logical network can be vlan tagged.
As you mentioned, vf_vlan is one of the parameters passed to vdsm (on the create verb).
Setting the vlan on the vf is done as follows-
ip link set {DEVICE} vf {NUM} [ vlan VLANID ]
This is documented in the notes section of the feature page.
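
For concreteness, a minimal sketch (not vdsm's actual code; the device name
and VF index below are made up, and the command needs root privileges) of
applying such a vlan setting from Python:

    import subprocess

    def set_vf_vlan(device, vf_num, vlan_id):
        """Tag all traffic of VF vf_num on device with vlan_id.
        Passing vlan_id=0 removes the tag again."""
        subprocess.check_call(['ip', 'link', 'set', device,
                               'vf', str(vf_num), 'vlan', str(vlan_id)])

    set_vf_vlan('eth2', 0, 100)  # tag VF 0 of eth2 with vlan 100
    set_vf_vlan('eth2', 0, 0)    # clear the tag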

It is not related to the use of virtio. The vlan can be set on the vf whether it
is connected to the vm via macvtap or directly.

> > wouldn't it be better to support both macvtap and passthrough and just
> > flag the VM as non migratable in that case?

Martin Polednik is working on pci-passthrough-
http://www.ovirt.org/Features/hostdev_passthrough

Maybe we should wait for his feature to be ready and then combine it with the
sr-iov feature.
As I see on his feature page, he plans to attach a specific device directly
to the vm.
We can combine his feature with the sr-iov feature-
1. The network profile will have a type property-
   bridge (the regular configuration we have today, vnic->tap->bridge->physical nic)
   virtio (in the current feature design it is called passthrough, vnic->macvtap->vf)
   pci-passthrough (vnic->vf)
2. Attaching a network profile with the pci-passthrough type to a vnic will mark the vm as non-migratable.
3. When running a vm with a pci-passthrough vnic, a free VF will be attached to the vm with the vlan and mtu
configuration of the profile/network (same as for the virtio profile, as described in the feature page).

The benefit is that the user won't have to choose the vf directly and will still
be able to set the vlan and mtu on the vf (a rough sketch of the proposal follows below).
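
To make the proposal concrete, a rough sketch (illustrative names only, not
engine code) of the suggested profile type property and its effect on
migratability:

    from enum import Enum

    class ProfileType(Enum):
        BRIDGE = 'bridge'                # vnic->tap->bridge->physical nic
        VIRTIO = 'virtio'                # vnic->macvtap->vf, still migratable
        PCI_PASSTHROUGH = 'passthrough'  # vnic->vf, pins the vm to the host

    def vm_is_migratable(vnic_profile_types):
        # point 2 above: any pci-passthrough vnic makes the vm non-migratable
        return ProfileType.PCI_PASSTHROUGH not in vnic_profile_types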

> > 
> > also (and doesn't have to be in first phase) what happens if i ran out
> > of hosts with sr-iov (or they failed) - can i fail back to non
> > pcipassthrough profile for backup (policy question at vm level if more
> > important to have sr-iov or more important it will run even without it
> > since it provides a critical service, with a [scheduling] preference to
> > run on sr-iov?
> > (oh, i see this is in the "futures" section already.

:) 

> > 
> > 
> > > management, display and migration properties are not relevant for the VFs
> > > configuration
> > 
> > just wondering - any technical reason we can't put the management on a
> > VF (not saying its a priority to do so)?

Today we mark the logical network with a role (management/display/migration)
when attaching it to the cluster.
A logical network can be attached to only one physical nic (PF).

We can't reuse the current role attachment for sr-iov, since a network can
be configured as "vf allowed" on more than one nic (maybe even on all the nics).
If a network is "vf allowed" on a nic,
a vnic using this network can be attached to a free vf of that nic.

So we can't use the logical network to mark a vf with a role;
we would have to mark the vf explicitly.
Since the current design doesn't expose the vfs, setting the roles was blocked.
But if there is a requirement for setting a vf as management/migration/display, we can
rethink the design for it.

> > 
> > > sr-iov host nic management - num of VFs
> > 
> > I assume this is for admin to define a policy on how many VFs to use,
> > based on the max as reported by getVdsCaps. worth stating that for clarity.
> > 

Updated the wiki with the following-
"It is used by the admin to enable this number of VFs on the nic.
Changing this value will remove all the existing VFs from the nic and create #numOfVfs new VFs on it."

The value reported by getVdsCaps is just the theoretical maximum.
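
For illustration, a sketch (not vdsm's actual code; it assumes a driver that
exposes the sriov_numvfs sysfs interface, and the device name is an example)
of how enabling a given number of VFs could look:

    def set_num_vfs(device, num_vfs):
        base = '/sys/class/net/%s/device' % device
        with open(base + '/sriov_totalvfs') as f:
            total = int(f.read())  # the theoretical maximum (getVdsCaps)
        if num_vfs > total:
            raise ValueError('device supports at most %d VFs' % total)
        # the kernel refuses to change a non-zero value directly, which is
        # why all existing VFs are removed before the new ones are created
        with open(base + '/sriov_numvfs', 'w') as f:
            f.write('0')
        with open(base + '/sriov_numvfs', 'w') as f:
            f.write(str(num_vfs))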


> > >  User Experience - Setup networks - Option 1
> > 
> > in the last picture ("Edit VFs networks and labels") - why are there
> > labels here together with the networks (if labels appear at the PF level
> > in the first dialog)?
> > 
> > iiuc, the option 2 is re-using the setup networks, where the PF will
> > just be another physical interface, and networks or labels edited just
> > like for regular network interfaces?
> > (not sure where you are on this, but it sounds more straight
> > forward/similar to existing concepts iiuc).
> > 

As I wrote in the answer about the roles, there are two concepts-
1. The attachment of a network to a physical nic (what we have today).
2. Including the network in the "VFs management tab=>allowed networks" list of the nic.

In 1, we actually configure the host's nics and bridges according to the setup networks.
In 2, we just specify the "allowed" list; it isn't even sent to vdsm.
It is used by the engine when it schedules a host for a vm.
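
A simplified sketch of 2 (made-up data structures and a greedy check,
illustrative only, not engine code):

    def host_can_run(vm_networks, host_nics):
        """vm_networks: networks required by the vm's passthrough vnics.
        host_nics: list of (allowed_networks, free_vf_count) pairs."""
        nics = [[set(allowed), free] for allowed, free in host_nics]
        for net in vm_networks:
            for nic in nics:
                if net in nic[0] and nic[1] > 0:
                    nic[1] -= 1   # reserve a free vf on this nic
                    break
            else:
                return False      # no sr-iov nic can serve this network
        return True

    # e.g. a vm needing 'blue' and 'red' on a host with one enabled nic:
    host_can_run(['blue', 'red'], [({'blue', 'red'}, 2)])  # -> True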

The relation between networks and nics is many to many.
The same network can be part of both 1 and 2 on the same nic,
and even part of 2 on other sr-iov enabled nics.

Since 2 is a completely different concept than 1, we weren't sure that using the same
drag and drop as for PFs wouldn't be too much in this case.

> > Question: any issues with hot plug/unplug or just expected to work
> > normally?

Expected to work (but hasn't been tested yet).

> > 
> > Thanks,
> >     Itamar
> > _______________________________________________
> > Devel mailing list
> > Devel at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> > 
> 


