virt-v2v integration feature

Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik

Hi Arik,
Cool feature! I have some questions/comments:
The assumption here is that the proxy host has access to both the external system and the export domain. So you'll need to create two different providers if you need another proxy host, e.g. when the current external provider's proxy host doesn't have access to the second export domain. Did you consider choosing the proxy in the action itself, rather than beforehand? On one hand it might make the flow a bit more complex, but on the other hand it would make a clear separation between the source of the VMs and the destination. I mean: press "Import" --> choose an export domain --> choose a proxy host that can access it --> list VMs --> choose VM/VMs --> press OK. Or: press "Import" --> choose a proxy host --> list VMs --> choose VM/VMs --> choose an export domain the host can access --> press OK.
Another thought: can we do the listing from the engine itself, and not via VDSM? If so, the provider definition will not need a proxy host either, and you'll be able to see a list of VMs without having to select a proxy; the import itself will be done by one of the hosts in the DC. The Glance integration uses a similar approach.
As for installing virt-v2v, I don't know whether it's an issue to add a dependency on it and include it on all hosts supporting cluster level >= 3.6. However, if it is an issue, then it can be selected at the host level when adding the host, and bootstrapping will install it. It will then be used only on hosts that have this checked.
Oved ----- Original Message -----
From: "Arik Hadas" <ahadas@redhat.com> To: devel@ovirt.org, "Richard Jones" <rjones@redhat.com>, mbooth@redhat.com Sent: Wednesday, July 9, 2014 9:30:38 PM Subject: [ovirt-devel] virt-v2v integration feature
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
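For illustration of the engine-side listing idea above: libvirt's ESX driver can enumerate the VMs on a vSphere system directly, without going through a VDSM proxy host. A minimal sketch, assuming made-up hostnames and credentials:

  # List all VMs on a standalone ESXi host via libvirt's esx:// driver
  # (esxi.example.com and root are placeholders):
  $ virsh -c 'esx://root@esxi.example.com?no_verify=1' list --all

  # Or go through vCenter with the vpx:// driver, naming the datacenter and host:
  $ virsh -c 'vpx://administrator@vcenter.example.com/Datacenter/esxi1?no_verify=1' list --all

Whether the engine would shell out to virsh, use the libvirt bindings directly, or talk to the vSphere API itself is an open design question; the sketch only shows that listing does not inherently require a host in the DC.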

On Thu, Jul 10, 2014 at 01:04:48AM -0400, Oved Ourfali wrote:
As for installing virt-v2v, I don't know whether it's an issue to add a dependency on it and include it on all hosts supporting cluster level >= 3.6. However, if it is an issue, then it can be selected at the host level when adding the host, and bootstrapping will install it. It will then be used only on hosts that have this checked.
The new virt-v2v has considerably fewer dependencies than the previous version. In particular it no longer requires Perl, and libguestfs has split dependencies too so you only "pay for" features required by your guests (eg. if you don't need to support XFS guests, then you don't need to pull in the XFS tools). Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-builder quickly builds VMs from scratch http://libguestfs.org/virt-builder.1.html

On 09.07.2014 20:30, Arik Hadas wrote:
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Am I right that this still involves a fully operational (e.g. ESXi) host to import VMware VMs?
There is huge user demand for a simpler process for just converting and importing a VMware disk image. This feature will not solve this use case, will it?
-- Regards, Sven Kieske, Systemadministrator, Mittwald CM Service GmbH & Co. KG

On 07/10/2014 10:29 AM, Sven Kieske wrote:
On 09.07.2014 20:30, Arik Hadas wrote:
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Am I right that this still involves a fully operational (e.g. ESXi) host to import VMware VMs?
There is huge user demand for a simpler process for just converting and importing a VMware disk image. This feature will not solve this use case, will it?
I agree it should. Need to check whether virt-v2v can cover this; if not, we need to fix it so it will...

On 10.07.2014 09:41, Itamar Heim wrote:
On 07/10/2014 10:29 AM, Sven Kieske wrote:
On 09.07.2014 20:30, Arik Hadas wrote:
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Am I right that this still involves a fully operational (e.g. ESXi) host to import VMware VMs?
There is huge user demand for a simpler process for just converting and importing a VMware disk image. This feature will not solve this use case, will it?
I agree it should. Need to check whether virt-v2v can cover this; if not, we need to fix it so it will...
Well, here are the relevant BZ entries: https://bugzilla.redhat.com/show_bug.cgi?id=1062910 https://bugzilla.redhat.com/show_bug.cgi?id=1049604 CC'ing the users from these Bugzilla entries, maybe they can add something to gain some traction :)
-- Regards, Sven Kieske, Systemadministrator, Mittwald CM Service GmbH & Co. KG

Lots of virtual appliances come in OVF format. There isn't a way to import them into an oVirt/RHEV instance without a live ESX instance. This is very inconvenient and inefficient.
-----Original Message----- From: Sven Kieske [mailto:S.Kieske@mittwald.de] Sent: Thursday, July 10, 2014 5:01 AM To: Itamar Heim; devel@ovirt.org Cc: Users@ovirt.org List; Richard W.M. Jones; midnightsteel@msn.com; blaster@556nato.com; bugzilla@grendelman.com; fw@moov.de; R P Herrold; jspahr@bandwith.com Subject: Re: [ovirt-users] [ovirt-devel] virt-v2v integration feature
On 10.07.2014 09:41, Itamar Heim wrote:
On 07/10/2014 10:29 AM, Sven Kieske wrote:
On 09.07.2014 20:30, Arik Hadas wrote:
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Am I right that this still involves a fully operational (e.g. ESXi) host to import VMware VMs?
There is huge user demand for a simpler process for just converting and importing a VMware disk image. This feature will not solve this use case, will it?
I agree it should. Need to check whether virt-v2v can cover this; if not, we need to fix it so it will...
Well, here are the relevant BZ entries: https://bugzilla.redhat.com/show_bug.cgi?id=1062910 https://bugzilla.redhat.com/show_bug.cgi?id=1049604 CC'ing the users from these Bugzilla entries, maybe they can add something to gain some traction :)
-- Regards, Sven Kieske, Systemadministrator, Mittwald CM Service GmbH & Co. KG

On Thu, Jul 10, 2014 at 10:41:16AM +0300, Itamar Heim wrote:
On 07/10/2014 10:29 AM, Sven Kieske wrote:
On 09.07.2014 20:30, Arik Hadas wrote:
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Am I right that this still involves a fully operational (e.g. ESXi) host to import VMware VMs?
There is huge user demand for a simpler process for just converting and importing a VMware disk image. This feature will not solve this use case, will it?
I agree it should. Need to check whether virt-v2v can cover this; if not, we need to fix it so it will...
From live ESX, yes.
Offline VMware images (OVAs), not yet. However patches are welcome. We quite literally have no one available to implement this. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-p2v converts physical machines to virtual machines. Boot with a live CD or over the network (PXE) and turn machines into KVM guests. http://libguestfs.org/virt-v2v

On Thu, Jul 10, 2014 at 10:11:24AM +0100, Richard W.M. Jones wrote:
On Thu, Jul 10, 2014 at 10:41:16AM +0300, Itamar Heim wrote:
On 07/10/2014 10:29 AM, Sven Kieske wrote:
On 09.07.2014 20:30, Arik Hadas wrote:
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Am I right that this still involves a fully operational (e.g. ESXi) host to import VMware VMs?
There is huge user demand for a simpler process for just converting and importing a VMware disk image. This feature will not solve this use case, will it?
I agree it should. Need to check whether virt-v2v can cover this; if not, we need to fix it so it will...
From live ESX, yes.
Offline VMware images (OVAs), not yet. However patches are welcome. We quite literally have no one available to implement this.
Just catching up on this .. virt-v2v now supports OVA imports, and it works pretty well. Please make sure you use the new version, available here: https://www.redhat.com/archives/libguestfs/2014-May/msg00090.html or in Fedora >= 21 or Debian experimental. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com libguestfs lets you edit virtual machines. Supports shell scripting, bindings from many languages. http://libguestfs.org
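For reference, with that new version an OVA import is a single command; the paths and the export storage domain address below are placeholders, not something defined by the feature page:

  # Convert an OVA appliance and write the result to an oVirt/RHEV export
  # storage domain over NFS (appliance.ova and the NFS path are made up):
  $ virt-v2v -i ova appliance.ova -o rhev -os storage.example.com:/export/v2v -of qcow2

The converted VM then appears in the export domain and can be imported from the webadmin as usual.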

On 07/09/2014 09:30 PM, Arik Hadas wrote:
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Some thoughts after reviewing http://www.ovirt.org/Features/virt-v2v_Integration - great to see this, it's something which I hope will ease a lot of conversions to oVirt! Adding the users@ mailing list to get feedback on the use cases.
1. Need to understand whether it's possible to provide the VM by 'uploading it' [1].
2. Does this really merit a provider per external system? Or just the ability to choose a source path (including a simple NFS/POSIX path pointing to existing KVM disks or VMware VMDKs), or a URI/user/password for such a system if it's vSphere/etc.? I'm not opposed to allowing providers to be defined, I would just remember that's a specific single use case, and would separate the import/convert flow from being part of the provider (i.e., in the import/convert flow, allow choosing the source to be either a path or a pre-defined provider).
To sum up the above two points, I think the page should start with 'use cases':
- I have a KVM VM I want to 'convert' to RHEV (doesn't need full v2v, but a libvirt-->OVF conversion?); the VM(s) is/are in an NFS/POSIX path, or I want to upload the VM.
- I have a folder with VMs/disks I want to convert (make sure this covers the ESXi flow).
- I have a vSphere to import VMs from.
- I have Xen... (hope to see input from users on these. I'm sure there was a thread on this in the past - what do you know, I started one... [2])
3. Does this mean virt-v2v and all of its dependencies will be required by vdsm (and on ovirt-node)? Also, for distros which have them, the required Windows components?
4. The most important flow I would like to remove via this integration is the "double copy". I.e., if we run v2v from a host, there is no reason not to pre-create the VM/disks for it and have v2v overwrite their content, instead of converting into the export domain just to perform another copy/import. (This may mean you want virt-v2v to have a specific verb to just get the OVF, and a specific verb to convert a disk.)
[1] we want to support upload of an iso/VM/disks in any case.
[2] http://lists.ovirt.org/pipermail/users/2014-January/019839.html
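To make the use cases above concrete, these are roughly the plain virt-v2v invocations they map to today (guest names, paths and the export domain address are placeholders; this is a sketch of the command-line tool, not the proposed engine flow):

  # A local libvirt (KVM) guest -- libvirt is the default input:
  $ virt-v2v guestname -o rhev -os storage.example.com:/export/v2v

  # A bare disk image in an NFS/POSIX path, with no libvirt metadata:
  $ virt-v2v -i disk /mnt/images/disk.img -o rhev -os storage.example.com:/export/v2v

  # A guest on a live vSphere setup, via libvirt's vpx:// driver:
  $ virt-v2v -ic 'vpx://administrator@vcenter.example.com/Datacenter/esxi1?no_verify=1' \
      guestname -o rhev -os storage.example.com:/export/v2v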

Hi All,
Better late than never..
Thanks for all the feedback, it was really constructive. I made major changes to the wiki page to address the comments. Please take another look: http://www.ovirt.org/Features/virt-v2v_Integration
Thanks, Arik ----- Original Message -----
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik

On 02.09.14 09:26, Arik Hadas wrote:
Hi All,
Better late than never..
Thanks for all the feedback, it was really constructive. I made major changes to the wiki page to address the comments. Please take another look: http://www.ovirt.org/Features/virt-v2v_Integration
Thanks, Arik
A few issues came up after talking with the storage team:
The actual VM import will take place in two steps: 1. An SPM call to create the volume. 2. An HSM call that runs virt-v2v with the UUIDs created by the first SPM call. virt-v2v doesn't need to create any directory or volume, since the SPM create-volume call will create the structure for us. Rich: is that OK with the current virt-v2v?
Before creating the volume we run a virt-v2v conversion with --no-copy, which means we don't copy the disk, only get the OVF. By doing that we don't have the actual size of the disk, which causes a problem on block devices where we need to preallocate the volume and make sure that the destination has room for the new disk. Rich: Can we get the estimated size when we run with --no-copy?
Some notes: Since the second call is an HSM call we need to make sure that we take the virt-v2v "tasks" into consideration when it comes to host life cycle.
Thanks, Shahar.
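To make the two steps a bit more concrete, the metadata-only pass in front of them would look roughly like this at the command level (source URI and names are placeholders; whether --no-copy can be combined with the RHEV output mode this way, and whether virt-v2v can then be pointed at volumes the SPM pre-created, are exactly the questions for Rich above):

  # Metadata-only pass: convert without copying the disks, so the engine can
  # read the generated OVF and have the SPM create matching volumes first.
  $ virt-v2v -ic 'vpx://administrator@vcenter.example.com/Datacenter/esxi1?no_verify=1' \
      guestname -o rhev -os storage.example.com:/export/v2v --no-copy

  # The full HSM-side conversion would then re-run without --no-copy,
  # ideally writing straight into the pre-created volumes.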

On Wed, Sep 10, 2014 at 10:41:27AM +0300, Shahar Havivi wrote:
On 02.09.14 09:26, Arik Hadas wrote:
Hi All,
Better late than never..
Thanks for all the feedback, it was really constructive. I made major changes to the wiki page to address the comments. Please take another look: http://www.ovirt.org/Features/virt-v2v_Integration
Thanks, Arik
A few issues came up after talking with the storage team:
The actual VM import will take place in two steps: 1. An SPM call to create the volume. 2. An HSM call that runs virt-v2v with the UUIDs created by the first SPM call. virt-v2v doesn't need to create any directory or volume, since the SPM create-volume call will create the structure for us. Rich: is that OK with the current virt-v2v?
Before creating the volume we run a virt-v2v conversion with --no-copy, which means we don't copy the disk, only get the OVF. By doing that we don't have the actual size of the disk, which causes a problem on block devices where we need to preallocate the volume and make sure that the destination has room for the new disk. Rich: Can we get the estimated size when we run with --no-copy?
Currently you get the virtual size accurately. We could add an estimate of the used size, but I doubt it will be very accurate. It would probably be an upper limit, which I guess is useful. Rich.
Some notes: Since the second call is an HSM call we need to make sure that we take the virt-v2v "tasks" into consideration when it comes to host life cycle.
Thanks, Shahar.
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com libguestfs lets you edit virtual machines. Supports shell scripting, bindings from many languages. http://libguestfs.org
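For context on the size question: the same distinction exists at the image level, where the virtual size of the source disk is known exactly (and is what you would need to preallocate in the worst case), while the currently allocated size is only an estimate of how much data the copy will actually move. A quick way to see both for a source image (the path is a placeholder):

  # 'virtual size' = guest-visible capacity (exact);
  # 'disk size'    = space currently allocated in the source file (an estimate).
  $ qemu-img info /mnt/vmware/guest-flat.vmdk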

On 10.09.14 10:36, Richard W.M. Jones wrote:
On Wed, Sep 10, 2014 at 10:41:27AM +0300, Shahar Havivi wrote:
On 02.09.14 09:26, Arik Hadas wrote:
Hi All,
Better late than never..
Thanks for all the feedback, it was really constructive. I made major changes to the wiki page to address the comments. Please take another look: http://www.ovirt.org/Features/virt-v2v_Integration
Thanks, Arik
A few issues came up after talking with the storage team:
The actual VM import will take place in two steps: 1. An SPM call to create the volume. 2. An HSM call that runs virt-v2v with the UUIDs created by the first SPM call. virt-v2v doesn't need to create any directory or volume, since the SPM create-volume call will create the structure for us. Rich: is that OK with the current virt-v2v?
Before creating the volume we run a virt-v2v conversion with --no-copy, which means we don't copy the disk, only get the OVF. By doing that we don't have the actual size of the disk, which causes a problem on block devices where we need to preallocate the volume and make sure that the destination has room for the new disk. Rich: Can we get the estimated size when we run with --no-copy?
Currently you get the virtual size accurately. We could add an estimate of the used size, but I doubt it will be very accurate. It would probably be an upper limit, which I guess is useful.
An upper limit is good.
Rich.
Some notes: Since the second call is an HSM call we need to make sure that we take the virt-v2v "tasks" into consideration when it comes to host life cycle.
Thanks, Shahar.
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com libguestfs lets you edit virtual machines. Supports shell scripting, bindings from many languages. http://libguestfs.org

On Wed, Sep 10, 2014 at 10:41:27AM +0300, Shahar Havivi wrote:
Rich: Can we get the estimated size when we run with --no-copy?
BTW you need to file bugs for all feature changes needed in virt-v2v. If there's no bug it will get forgotten about. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-p2v converts physical machines to virtual machines. Boot with a live CD or over the network (PXE) and turn machines into KVM guests. http://libguestfs.org/virt-v2v

On 10.09.14 10:52, Richard W.M. Jones wrote:
On Wed, Sep 10, 2014 at 10:41:27AM +0300, Shahar Havivi wrote:
Rich: Can we get the estimated size when we run with --no-copy?
BTW you need to file bugs for all feature changes needed in virt-v2v. If there's no bug it will get forgotten about. Sure.
Rich.
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-p2v converts physical machines to virtual machines. Boot with a live CD or over the network (PXE) and turn machines into KVM guests. http://libguestfs.org/virt-v2v

On 09/02/2014 04:26 PM, Arik Hadas wrote:
Hi All,
Better late than never..
Thanks for all the feedback, it was really constructive. I made major changes to the wiki page to address the comments. Please take another look: http://www.ovirt.org/Features/virt-v2v_Integration
Thanks, Arik
----- Original Message -----
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Much, much better :) Now we just need to do all this...
A few comments:
- Upload VM - indeed TBD, but hopefully we will resolve this as well independently.
- Windows VMs - v2v would need to make sure the relevant ISO is in the ISO domain, probably with the drivers expected by v2v?
- Proxy host - do we really need this? I assume we are going to require virt-v2v by vdsm, so all hosts will simply have everything needed to do this. To rephrase - I wouldn't make this the architectural choice to require a single host with v2v; rather, default to all (for both ovirt-node and plain deployments). We may still want to allow choosing which host to do the import from, for specific network topology use cases. I wouldn't default to that, and I think it can actually be phase II.
- Per Shahar's comment on SPM from the discussion with the storage group - just a btw, we hope SPM will go away in 3.6. Still, you should separate metadata (create volume) and data (copy) operations.
- External providers - I think this one requires more thinking. If you define these as providers, with their credentials, then all admins can import. I don't think we should require the provider to be defined, nor require it to keep credentials (it could be just a helper list). I.e., unlike the other providers, I think here the providers are just there to auto-complete fields in the import form (I can choose a provider with its user/password, I can choose a provider without user/pass and fill them in myself, or I can just type a provider and user/pass).
- I'll be able to provide more feedback on mockups...
Thanks, Itamar

On Wed, Sep 10, 2014 at 01:36:59PM +0300, Itamar Heim wrote:
- Windows VMs - v2v would need to make sure the relevant ISO is in the ISO domain, probably with the drivers expected by v2v?
I'm not 100% sure how this works, but I'll tell you what virt-v2v does (which is the same as old virt-v2v). It installs RHEV-APT (.exe) in the guest and ensures it runs at first boot. Does RHEV-APT need an ISO to work? I was under the impression that the executable contained the drivers needed in itself. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-top is 'top' for virtual machines. Tiny program with many powerful monitoring features, net stats, disk stats, logging, etc. http://people.redhat.com/~rjones/virt-top

On 09/10/2014 02:55 PM, Richard W.M. Jones wrote:
On Wed, Sep 10, 2014 at 01:36:59PM +0300, Itamar Heim wrote:
- Windows VMs - v2v would need to make sure the relevant ISO is in the ISO domain, probably with the drivers expected by v2v?
I'm not 100% sure how this works, but I'll tell you what virt-v2v does (which is the same as old virt-v2v). It installs RHEV-APT (.exe) in the guest and ensures it runs at first boot. Does RHEV-APT need an ISO to work? I was under the impression that the executable contained the drivers needed in itself.
Rich.
rhev-apt (can't remember the upstream name off the top of my head) is just a utility that automatically installs/updates the tools based on:
- detecting there is an ISO
- detecting the ISO is a rhev-tools ISO
- detecting the ISO is properly signed
- running the various installers
so yes, we need to make sure if the converted VM is a windows VM, there is an ISO (and attach it to the VM). how/where does v2v get rhev-apt from btw? (not sure the newly introduced ovirt windows guest driver iso has installers yet).

On Wed, Sep 10, 2014 at 03:11:37PM +0300, Itamar Heim wrote:
On 09/10/2014 02:55 PM, Richard W.M. Jones wrote:
On Wed, Sep 10, 2014 at 01:36:59PM +0300, Itamar Heim wrote:
- Windows VMs - v2v would need to make sure the relevant ISO is in the ISO domain, probably with the drivers expected by v2v?
I'm not 100% sure how this works, but I'll tell you what virt-v2v does (which is the same as old virt-v2v). It installs RHEV-APT (.exe) in the guest and ensures it runs at first boot. Does RHEV-APT need an ISO to work? I was under the impression that the executable contained the drivers needed in itself.
Rich.
rhev-apt (can't remember the upstream name off the top of my head) is just a utility that automatically installs/updates the tools based on:
- detecting there is an ISO
- detecting the ISO is a rhev-tools ISO
- detecting the ISO is properly signed
- running the various installers
so yes, we need to make sure if the converted VM is a windows VM, there is an ISO (and attach it to the VM).
how/where does v2v get rhev-apt from btw?
Very good question! Both old and current virt-v2v use a binary (added to the RHEL build) which Matt extracted from rhev-apt, probably an old version. I have no idea where it comes from, and I doubt it is up to date.
$ grep RHEV-App libguestfs.spec
Source7: RHEV-Application_Provisioning_Tool_46267.exe
(not sure the newly introduced ovirt windows guest driver iso has installers yet).
Something we can build from source using the mingw-* chain would be useful, otherwise we can never add this to Fedora. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-builder quickly builds VMs from scratch http://libguestfs.org/virt-builder.1.html

On Wed, 10 Sep 2014, Itamar Heim wrote:
On 09/10/2014 02:55 PM, Richard W.M. Jones wrote:
On Wed, Sep 10, 2014 at 01:36:59PM +0300, Itamar Heim wrote:
- Windows VMs - v2v would need to make sure the relevant ISO is in the ISO domain, probably with the drivers expected by v2v?
I'm not 100% sure how this works, ...
I _think_ outside of the virt-v2v scope, it is with something like cloudbase-init [-1]
... but I'll tell you what virt-v2v does (which is the same as old virt-v2v). It installs RHEV-APT (.exe) in the guest and ensures it runs at first boot. Does RHEV-APT need an ISO to work? I was under the impression that the executable contained the drivers needed in itself.
rhev-apt (can't remember the upstream name off the top of my head) is just a utility that automatically installs/updates the tools based on:
- detecting there is an ISO
- detecting the ISO is a rhev-tools ISO
- detecting the ISO is properly signed
- running the various installers
I do not immediately find sources for a 'rhev-apt' package, probably implying it is a sub-package of some other package? Possibly in: rhel-guest-image but I get a nil result in the .spec file, and a qcow2 image, which may contain such, when I look [ Reading ~/rpmbuild/SOURCES/rhel-guest-image/guest-image-ovf-creator.py from that SRPM is sort of a disturbing mish mash of python code, and instance data. Why no driver file to specify the instance, and so to isolate code from data? ] I too am interested in where that file comes from. A pointer is appreciated, as we support in this space.
I see a later reply by Richard:
I have no idea where it comes from, and I doubt it is up to date.
$ grep RHEV-App libguestfs.spec
Source7: RHEV-Application_Provisioning_Tool_46267.exe
and I'm with him! I have no idea either, but am interested in the answer.
I think you MAY be referring to the capability of the type provided via 'cloud-init' (present as early as: 0.7.1 -- in via a rebase at RHEL 6.5 [0], which asserts it is at cloud-init-0.7.2) to mount and source a ConfigDrive (actually, a Config ISO-9660 ISO disk image) [1] [2]. There is a MSFT analog as well, and VMWare seems to run similar updates from time to time.
Red Hat support for such in its enterprise product seems to be explicitly adding ConfigDrive (also written as two words in some places) as of late 2013 [3], bugfix [4] to work properly under a 2.6 kernel.
Richard had posted as to a tool for making such without the need for root rights a while back titled: Creating a cloud-init config disk for non-cloud boots [5] but the utility of the article is broader than the title implies (thanks, Richard)
-- Russ herrold
End Notes: ==========
I do not see 'rhel-apt' as a sub-target in the spec file for rhel-guest-image either:
[herrold@centos-6 rhel-guest-image]$ rpmno.sh \
    rhel-guest-image-6.5-20131115.0.el6.src.rpm
rhel-guest-image.spec
guest-image-ovf-creator.py
rhel-guest-image-6-6.5-20131115.0-1.qcow2
[herrold@centos-6 rhel-guest-image]$ grep "rhel-apt" rhel-guest-image.spec
[herrold@centos-6 rhel-guest-image]$
Please excuse the negative number footnote, but I've been composing this all morning and a couple hours into the afternoon, as well as organizing my knowledge into a form I can refer to in the future as to ConfigDrives.
[-1] https://github.com/cloudbase/cloudbase-init [0] https://bugzilla.redhat.com/show_bug.cgi?id=968246 [1] http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#config-dr... [2] http://docs.openstack.org/grizzly/openstack-compute/admin/content/config-dri... [3] https://bugzilla.redhat.com/show_bug.cgi?id=970820 [4] https://rhn.redhat.com/errata/RHBA-2013-1744.html [5] https://rwmj.wordpress.com/2013/12/10/creating-a-cloud-init-config-disk-for-...
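The blog post Russ cites [5] covers this in detail; a minimal sketch of building such a config disk by hand looks like the following. This is the cloud-init NoCloud flavour (volume label 'cidata'), not the OpenStack ConfigDrive layout (which uses the 'config-2' label), and the file contents are placeholders:

  # Minimal user-data / meta-data, then an ISO that cloud-init's NoCloud
  # datasource will pick up at boot:
  $ printf '#cloud-config\nhostname: imported-vm\n' > user-data
  $ printf 'instance-id: imported-vm-001\n' > meta-data
  $ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data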

On 09/10/2014 09:07 PM, R P Herrold wrote:
On Wed, 10 Sep 2014, Itamar Heim wrote:
On 09/10/2014 02:55 PM, Richard W.M. Jones wrote:
On Wed, Sep 10, 2014 at 01:36:59PM +0300, Itamar Heim wrote:
- Windows VMs - v2v would need to make sure the relevant ISO is in the ISO domain, probably with the drivers expected by v2v?
I'm not 100% sure how this works, ...
I _think_ outside of the virt-v2v scope, it is with something like cloudbase-init [-1]
... but I'll tell you what virt-v2v does (which is the same as old virt-v2v). It installs RHEV-APT (.exe) in the guest and ensures it runs at first boot. Does RHEV-APT need an ISO to work? I was under the impression that the executable contained the drivers needed in itself.
rhev-apt (can't remember the upstream name off the top of my head) is just a utility that automatically installs/updates the tools based on:
- detecting there is an ISO
- detecting the ISO is a rhev-tools ISO
- detecting the ISO is properly signed
- running the various installers
I do not immediately find sources for a 'rhev-apt' package, probably implying it is a sub-package of some other package? Possibly in: rhel-guest-image but I get a nil result in the .spec file, and a qcow2 image, which may contain such, when I look
[ Reading ~/rpmbuild/SOURCES/rhel-guest-image/guest-image-ovf-creator.py from that SRPM is sort of a disturbing mish mash of python code, and instance data. Why no driver file to specify the instance, and so to isolate code from data? ]
I too am interested in where that file comes from. A pointer is appreciated, as we support in this space.
I see a later reply by Richard:
I have no idea where it comes from, and I doubt it is up to date.
$ grep RHEV-App libguestfs.spec
Source7: RHEV-Application_Provisioning_Tool_46267.exe
and I'm with him! I have no idea either, but am interested in the answer
I think you MAY be referring to the capability of the type provided via 'cloud-init' (present as early as: 0.7.1 -- in via a rebase at RHEL 6.5 [0], which asserts it is at cloud-init-0.7.2) to mount and source a ConfigDrive (actually, a Config ISO-9660 ISO disk image) [1] [2]. There is a MSFT analog as well, and VMWare seems to run similar updates from time to time
Red Hat support for such in its enterprise product seems to be explicitly adding ConfigDrive (also written as two words in some places) as of late 2013 [3], bugfix [4] to work properly under a 2.6 kernel
Richard had posted as to a tool for making such without the need for root rights a while back titled: Creating a cloud-init config disk for non-cloud boots [5] but the utility of the article is broader than the title implies (thanks, Richard)
-- Russ herrold
End Notes: ==========
I do not see 'rhel-apt' as a sub-target in the spec file for rhel-guest-image either:
[herrold@centos-6 rhel-guest-image]$ rpmno.sh \
    rhel-guest-image-6.5-20131115.0.el6.src.rpm
rhel-guest-image.spec
guest-image-ovf-creator.py
rhel-guest-image-6-6.5-20131115.0-1.qcow2
[herrold@centos-6 rhel-guest-image]$ grep "rhel-apt" rhel-guest-image.spec
[herrold@centos-6 rhel-guest-image]$
Please excuse the negative number footnote, but I've been composing this all morning and a couple hours into the afternoon, as well as organizing my knowledge into a form I can refer to in the future as to ConfigDrives.
[-1] https://github.com/cloudbase/cloudbase-init [0] https://bugzilla.redhat.com/show_bug.cgi?id=968246 [1] http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#config-dr... [2] http://docs.openstack.org/grizzly/openstack-compute/admin/content/config-dri... [3] https://bugzilla.redhat.com/show_bug.cgi?id=970820 [4] https://rhn.redhat.com/errata/RHBA-2013-1744.html [5] https://rwmj.wordpress.com/2013/12/10/creating-a-cloud-init-config-disk-for-...
OK, so this took a while to locate, considering how long it has been and since this utility literally went mostly unchanged since its inception circa 2008...
Rich - based on the naming of your utility, it seems like an old build (uses svn versioning...).
Russ - back in 2010 we started the effort of open-sourcing rhev-apt, located here: https://fedorahosted.org/CDUpgrader/ TBH, I don't remember what state it is in, or whether we finished moving rhev-apt to be based on CDUpgrader as a proper upstream, but CDUpgrader is probably where to start on this (and if we want to work on it, I suggest we create a repo for it on gerrit.ovirt.org instead of the current fedorahosted one).
(I highly suggest using this mailing list, rather than the one in the wiki, to discuss it more...)

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: "Arik Hadas" <ahadas@redhat.com>, devel@ovirt.org, "Richard Jones" <rjones@redhat.com>, "Matthew Booth" <mbooth@redhat.com> Sent: Wednesday, September 10, 2014 1:36:59 PM Subject: Re: [ovirt-devel] virt-v2v integration feature
On 09/02/2014 04:26 PM, Arik Hadas wrote:
Hi All,
Better late than never..
Thanks for all the feedback, it was really constructive. I made major changes to the wiki page to address the comments. Please take another look: http://www.ovirt.org/Features/virt-v2v_Integration
Thanks, Arik
----- Original Message -----
Hi All,
The proposed feature will introduce a new process for importing virtual machines from external systems into oVirt using virt-v2v. I've created a wiki page that contains initial thoughts and a design for it: http://www.ovirt.org/Features/virt-v2v_Integration
You are more than welcome to share your thoughts and insights.
Thanks, Arik
Much, much better :) Now we just need to do all this...
A few comments:
- Upload VM - indeed TBD, but hopefully we will resolve this as well independently.
- Windows VMs - v2v would need to make sure the relevant ISO is in the ISO domain, probably with the drivers expected by v2v?
- Proxy host - do we really need this? I assume we are going to require virt-v2v by vdsm, so all hosts will simply have everything needed to do this. To rephrase - I wouldn't make this the architectural choice to require a single host with v2v; rather, default to all (for both ovirt-node and plain deployments). We may still want to allow choosing which host to do the import from, for specific network topology use cases. I wouldn't default to that, and I think it can actually be phase II.
- Per Shahar's comment on SPM from the discussion with the storage group - just a btw, we hope SPM will go away in 3.6. Still, you should separate metadata (create volume) and data (copy) operations.
- External providers - I think this one requires more thinking. If you define these as providers, with their credentials, then all admins can import. I don't think we should require the provider to be defined, nor require it to keep credentials (it could be just a helper list). I.e., unlike the other providers, I think here the providers are just there to auto-complete fields in the import form (I can choose a provider with its user/password, I can choose a provider without user/pass and fill them in myself, or I can just type a provider and user/pass).
I guess that supporting multiple-VM import would address that, as it can be quite frustrating to fill in the details on each and every import. But if we allow importing them in one user step, then we don't need to keep the details anywhere.
- I'll be able to provide more feedback on mockups...
Thanks, Itamar

Hi all,
Currently SR-IOV in oVirt is only supported using a vdsm-hook [1]. This feature will add SR-IOV support to the oVirt management system (including migration).
You are more than welcome to review the feature page: http://www.ovirt.org/Feature/SR-IOV
Thanks, Alona.

On 10/05/2014 07:02 AM, Alona Kaplan wrote:
Hi all,
Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to oVirt management system (including migration).
You are more than welcome to review the feature page- http://www.ovirt.org/Feature/SR-IOV
Thanks, Alona.
Glad to see this. some questions:
Note: this feature is about exposing a virtualized (or VirtIO) vNic to the guest, and not about exposing the PCI device to it. This restriction is necessary for migration to be supported.
did not understand this sentence - are you hinting to macvtap?
add/edit profile
So I gather the implementation is at the profile level, which is at the logical network level? How does this work exactly? Can this logical network be VLAN tagged, or must it be native? If VLAN tagged, who does the tagging for the passthrough device? (I see later on that vf_vlan is one of the parameters to vdsm - just wondering how the mapping can be at the host level if this is a passthrough device.) Is this because of the use of virtio (macvtap)? Wouldn't it be better to support both macvtap and passthrough and just flag the VM as non-migratable in that case?
Also (and this doesn't have to be in the first phase): what happens if I run out of hosts with SR-IOV (or they failed) - can I fall back to a non-pci-passthrough profile for backup? (A policy question at the VM level: is it more important to have SR-IOV, or more important that the VM runs even without it since it provides a critical service, with a [scheduling] preference to run on SR-IOV?) (Oh, I see this is in the "futures" section already.)
management, display and migration properties are not relevant for the VFs configuration
just wondering - any technical reason we can't put the management on a VF (not saying its a priority to do so)?
sr-iov host nic management - num of VFs
I assume this is for admin to define a policy on how many VFs to use, based on the max as reported by getVdsCaps. worth stating that for clarity.
User Experience - Setup networks - Option 1
in the last picture ("Edit VFs networks and labels") - why are there labels here together with the networks (if labels appear at the PF level in the first dialog)? iiuc, the option 2 is re-using the setup networks, where the PF will just be another physical interface, and networks or labels edited just like for regular network interfaces? (not sure where you are on this, but it sounds more straight forward/similar to existing concepts iiuc). Question: any issues with hot plug/unplug or just expected to work normally? Thanks, Itamar

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: "Alona Kaplan" <alkaplan@redhat.com>, devel@ovirt.org Sent: Friday, October 24, 2014 12:21:00 PM Subject: Re: [ovirt-devel] SR-IOV feature
On 10/05/2014 07:02 AM, Alona Kaplan wrote:
Hi all,
Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to oVirt management system (including migration).
You are more than welcome to review the feature page- http://www.ovirt.org/Feature/SR-IOV
Thanks, Alona.
Glad to see this.
some questions:
Note: this feature is about exposing a virtualized (or VirtIO) vNic to the guest, and not about exposing the PCI device to it. This restriction is necessary for migration to be supported.
did not understand this sentence - are you hinting to macvtap?
Most likely macvtap, yes. Additionally I think Martin Poledník is looking into direct sr-iov attachment to VMs as part of the pci passthrough work he is doing.
add/edit profile
so i gather the implementation is at profile level, which is at logical network level? how does this work exactly? can this logical network be vlan tagged or must be native? if vlan tagged who does the tagging for the passthrough device? (I see later on vf_vlan is one of the parameters to vdsm, just wondering how the mapping can be at host level if this is a passthrough device)? is this because the use of virtio (macvtap)? wouldn't it be better to support both macvtap and passthrough and just flag the VM as non migratable in that case?
also (and doesn't have to be in first phase) what happens if i ran out of hosts with sr-iov (or they failed) - can i fail back to non pcipassthrough profile for backup (policy question at vm level if more important to have sr-iov or more important it will run even without it since it provides a critical service, with a [scheduling] preference to run on sr-iov? (oh, i see this is in the "futures" section already.
management, display and migration properties are not relevant for the VFs configuration
just wondering - any technical reason we can't put the management on a VF (not saying its a priority to do so)?
sr-iov host nic management - num of VFs
I assume this is for admin to define a policy on how many VFs to use, based on the max as reported by getVdsCaps. worth stating that for clarity.
User Experience - Setup networks - Option 1
in the last picture ("Edit VFs networks and labels") - why are there labels here together with the networks (if labels appear at the PF level in the first dialog)?
iiuc, the option 2 is re-using the setup networks, where the PF will just be another physical interface, and networks or labels edited just like for regular network interfaces? (not sure where you are on this, but it sounds more straight forward/similar to existing concepts iiuc).
Question: any issues with hot plug/unplug or just expected to work normally?
Thanks, Itamar

----- Original Message -----
From: "Antoni Segura Puimedon" <asegurap@redhat.com> To: "Itamar Heim" <iheim@redhat.com> Cc: "Alona Kaplan" <alkaplan@redhat.com>, devel@ovirt.org Sent: Friday, October 24, 2014 9:33:45 PM Subject: Re: [ovirt-devel] SR-IOV feature
----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: "Alona Kaplan" <alkaplan@redhat.com>, devel@ovirt.org Sent: Friday, October 24, 2014 12:21:00 PM Subject: Re: [ovirt-devel] SR-IOV feature
On 10/05/2014 07:02 AM, Alona Kaplan wrote:
Hi all,
Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to oVirt management system (including migration).
You are more than welcome to review the feature page- http://www.ovirt.org/Feature/SR-IOV
Thanks, Alona.
Glad to see this.
some questions:
Note: this feature is about exposing a virtualized (or VirtIO) vNic to the guest, and not about exposing the PCI device to it. This restriction is necessary for migration to be supported.
did not understand this sentence - are you hinting to macvtap?
Most likely macvtap, yes.
Additionally I think Martin Poledník is looking into direct sr-iov attachment to VMs as part of the pci passthrough work he is doing.
add/edit profile
so i gather the implementation is at profile level, which is at logical network level? how does this work exactly? can this logical network be vlan tagged or must be native? if vlan tagged who does the tagging for the passthrough device? (I see later on vf_vlan is one of the parameters to vdsm, just wondering how the mapping can be at host level if this is a passthrough device)? is this because the use of virtio (macvtap)?
The logical network can be vlan tagged. As you mentioned, vf_vlan is one of the parameters passed to vdsm (on the create verb). Setting the vlan on the vf is done as follows:
ip link set {DEVICE} vf {NUM} [ vlan VLANID ]
It is written in the notes section. It is not related to the use of virtio. The vlan can be set on the vf whether it is connected to the vm via macvtap or directly.
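With concrete values, for illustration (the device name, VF index, VLAN ID and MAC are placeholders):

  # Put VF 3 of enp2s0f0 on VLAN 100 and give it a fixed MAC address:
  $ ip link set enp2s0f0 vf 3 vlan 100
  $ ip link set enp2s0f0 vf 3 mac 52:54:00:12:34:56
  # 'ip link show enp2s0f0' then lists the per-VF vlan/mac settings.

The VLAN tagging for that VF is then handled by the NIC itself, which is why it works the same whether the VF is handed to the VM via macvtap or directly.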
wouldn't it be better to support both macvtap and passthrough and just flag the VM as non migratable in that case?
Martin Polednik is working on pci-passthrough: http://www.ovirt.org/Features/hostdev_passthrough
Maybe we should wait for his feature to be ready and then combine it with the sr-iov feature. As I see in his feature page, he plans to attach a specific device directly to the vm. We can combine his feature with the sr-iov feature:
1. The network profile will have a type property:
   bridge (the regular configuration we have today, vnic->tap->bridge->physical nic)
   virtio (in the current feature design it is called passthrough, vnic->macvtap->vf)
   pci-passthrough (vnic->vf)
2. Attaching a network profile with the pci-passthrough type to a vnic will mark the vm as non-migratable.
3. When running a vm with a pci-passthrough vnic, a free VF will be attached to the vm with the vlan and mtu configuration of the profile/network (same as for a virtio profile, as described in the feature page).
The benefit of it is that the user won't have to choose the vf directly and will be able to set vlan and mtu on the vf.
also (and doesn't have to be in first phase) what happens if i ran out of hosts with sr-iov (or they failed) - can i fail back to non pcipassthrough profile for backup (policy question at vm level if more important to have sr-iov or more important it will run even without it since it provides a critical service, with a [scheduling] preference to run on sr-iov? (oh, i see this is in the "futures" section already.
:)
management, display and migration properties are not relevant for the VFs configuration
just wondering - any technical reason we can't put the management on a VF (not saying its a priority to do so)?
Today we mark the logical network with a role (management/display/migration) when attaching it to the cluster. A logical network can be attached to one physical nic (PF). We can't use the current attachment of a role for sr-iov, since the network can be configured as "vf allowed" on more than one nic (maybe even on all the nics). If the network is "vf allowed" on the nic, a vnic with this network can be attached to a free vf on the nic. So we can't use the logical network to mark a vf with a role. We have to mark the vf explicitly. Since in the current design we don't expose the vf, setting the roles was blocked. But if there is a requirement for setting a vf as management/migration/display we can re-think about the design for it.
sr-iov host nic management - num of VFs
I assume this is for admin to define a policy on how many VFs to use, based on the max as reported by getVdsCaps. worth stating that for clarity.
Updated the wiki with the following- "It is used for admin to enable this number of VFs on the nic. Changing this value will remove all the VFs from the nic and create new #numOFVfs VFs on the nic." The max value reported by getVdsCaps is just the theoretical maximum value.
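For reference, on reasonably recent kernels this maps to a per-device sysfs knob (the interface name and count are placeholders; older drivers only expose a max_vfs module parameter instead):

  # Upper bound supported by the device:
  $ cat /sys/class/net/enp2s0f0/device/sriov_totalvfs
  # Changing the VF count requires going through 0 first, which matches the
  # "remove all VFs and recreate" behaviour described above:
  $ echo 0 > /sys/class/net/enp2s0f0/device/sriov_numvfs
  $ echo 7 > /sys/class/net/enp2s0f0/device/sriov_numvfs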
User Experience - Setup networks - Option 1
in the last picture ("Edit VFs networks and labels") - why are there labels here together with the networks (if labels appear at the PF level in the first dialog)?
iiuc, the option 2 is re-using the setup networks, where the PF will just be another physical interface, and networks or labels edited just like for regular network interfaces? (not sure where you are on this, but it sounds more straight forward/similar to existing concepts iiuc).
As I wrote in the answer about the roles, there are two concepts:
1. The attachment of a network to a physical nic (what we have today).
2. Containing the network in the "VFs management tab => allowed networks" of the nic.
In 1, we actually configure the host's nics and bridges according to the setup networks. In 2, we just specify the "allowed" list; it isn't even sent to the vdsm. It is used by the engine when it schedules a host for a vm.
The connection between networks and nics is many-to-many. The same network can be part of 1 and 2 on the same nic, and even part of 2 on other sr-iov enabled nics.
Since 2 is a completely different concept than 1, we weren't sure that using drag and drop as for PFs isn't too much in this case.
Question: any issues with hot plug/unplug or just expected to work normally?
Expected to work (but wasn't tested yet).
Thanks, Itamar

On Sun, Oct 26, 2014 at 06:39:00AM -0400, Alona Kaplan wrote:
On 10/05/2014 07:02 AM, Alona Kaplan wrote:
Hi all,
Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to oVirt management system (including migration).
You are more than welcome to review the feature page- http://www.ovirt.org/Feature/SR-IOV
Thanks, Alona.
Glad to see this.
some questions:
Note: this feature is about exposing a virtualized (or VirtIO) vNic to the guest, and not about exposing the PCI device to it. This restriction is necessary for migration to be supported.
did not understand this sentence - are you hinting to macvtap?
Most likely macvtap, yes.
Additionally I think Martin Poledník is looking into direct sr-iov attachment to VMs as part of the pci passthrough work he is doing.
add/edit profile
so i gather the implementation is at profile level, which is at logical network level? how does this work exactly? can this logical network be vlan tagged or must be native? if vlan tagged who does the tagging for the passthrough device? (I see later on vf_vlan is one of the parameters to vdsm, just wondering how the mapping can be at host level if this is a passthrough device)? is this because the use of virtio (macvtap)?
The logical network can be vlan tagged. As you mentioned the vf_vlan is one of the parameters to the vdsm (on create verb). Setting the vlan on the vf is done as follows- ip link set {DEVICE} vf {NUM} [ vlan VLANID ] It is written in the notes section.
It is not related to the use of virtio. The vlan can be set on the vf whether it is connected to the vm via macvtap or directly.
Are you sure about this? I think that when a host device is attached to a VM, it disappears from the host, and the guest can send arbitrary unmodified packets through the wire. But I may well be wrong.
wouldn't it be better to support both macvtap and passthrough and just flag the VM as non migratable in that case?
Martin Polednik is working on pci-passthrough- http://www.ovirt.org/Features/hostdev_passthrough
Maybe we should wait for his feature to be ready and then combine it with the sr-iov feature. As I see in his feature page he plans to attach a specific device directly to the vm. We can combine his feature with the sr-iov feature- 1. The network profile will have type property- bridge (the regular configuration we have today, vnic->tap->bridge->physical nic). virtio(in the current feature design it is called passthrough, vnic->macvtap->vf) pci-passthrough(vnic->vf) 2. Attaching a network profile with pci-passthrough type to a vnic will mark the vm as non-migratable.
This marking can be tuned by the admin. If the admin requests migration despite the pci-passthrough type, Vdsm can auto-unplug the PCI device before migration, and plug it back on the destination. That would allow some kind of migration to guests that are willing to see a PCI device disappear and re-appear.
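A rough sketch of what that could look like at the libvirt level (vdsm would do this through the API rather than virsh, and vf-hostdev.xml is a hypothetical file describing the VF's PCI address):

  # Hot-unplug the VF from the running guest before migration starts:
  $ virsh detach-device guestname vf-hostdev.xml --live
  # ... migrate as usual ...
  # Re-attach an equivalent free VF on the destination host:
  $ virsh attach-device guestname vf-hostdev.xml --live

The guest sees the PCI device disappear and re-appear, so it only works for guests that tolerate that, which is the caveat noted above.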
3. When running a vm with pci-passthrough vnic a free VF will be attached to the vm with the vlan and mtu configuration of the profile/network (same as for virio profile, as described in the feature page).
The benefit of it is that the user won't have to choose the vf directly and will be able to set vlan and mtu on the vf.
also (and doesn't have to be in first phase) what happens if i ran out of hosts with sr-iov (or they failed) - can i fail back to non pcipassthrough profile for backup (policy question at vm level if more important to have sr-iov or more important it will run even without it since it provides a critical service, with a [scheduling] preference to run on sr-iov? (oh, i see this is in the "futures" section already.
:)
A benefit of this "Nice to have passthrough" is that one could set it on vNic profiles that are already used by VMs. Once they are migrated to a new host, the passthrough-ness request would take effect.
management, display and migration properties are not relevant for the VFs configuration
just wondering - any technical reason we can't put the management on a VF (not saying its a priority to do so)?
Today we mark the logical network with a role (management/display/migration) when attaching it to the cluster. A logical network can be attached to one physical nic (PF).
We can't use the current attachment of a role for sr-iov, since the network can be configured as "vf allowed" on more than one nic (maybe even on all the nics). If the network is "vf allowed" on the nic, a vnic with this network can be attached to a free vf on the nic.
So we can't use the logical network to mark a vf with a role. We have to mark the vf explicitly. Since in the current design we don't expose the vf, setting the roles was blocked. But if there is a requirement for setting a vf as management/migration/display we can re-think about the design for it.
We can relax this requirement by allowing the network to be attached on one nic (be it VF or PF or legacy), and to set it as "vf allowed" on a completely disjoint set of PFs.
sr-iov host nic management - num of VFs
I assume this is for admin to define a policy on how many VFs to use, based on the max as reported by getVdsCaps. worth stating that for clarity.
Updated the wiki with the following- "It is used for admin to enable this number of VFs on the nic. Changing this value will remove all the VFs from the nic and create new #numOFVfs VFs on the nic."
The max value reported by getVdsCaps is just the theoretical maximum value.
I think that Itamar suggests that this should be automated. An admin could say "give me all the VFs you can", and when adding a new host, Engine would set it seamlessly. By the way, do you know what the downside of asking for the maximum number of VFs is? Is it memory overhead? CPU? Network performance? I wonder whether it makes sense for Vdsm to set the max on each reboot? Another related issue, mentioned as an open question: the current suggestion of having updateSriovMaxVFs as an independent verb has a downside: you cannot use it to updateSriovMaxVFs of the PF that is used by the management network. If we want to support this use case, we should probably expose the functionality within the transactional setupNetworks verb.
User Experience - Setup networks - Option 1
in the last picture ("Edit VFs networks and labels") - why are there labels here together with the networks (if labels appear at the PF level in the first dialog)?
iiuc, option 2 is re-using the setup networks dialog, where the PF will just be another physical interface, and networks or labels are edited just like for regular network interfaces? (not sure where you are on this, but it sounds more straightforward/similar to existing concepts iiuc).
As I wrote in the answer about the roles, there are two concepts- 1. The attachment of a network to a physical nic (what we have today). 2. Containing the network in the "VFs management tab=>allowed networks" of the nic.
In 1, we actually configure the host's nics and bridges according to the setup networks. In 2, we just specify the "allowed" list; it isn't even sent to the vdsm. It is used by the engine when it schedules a host for a vm.
The connection between networks and nics is many to many. The same network can be part of 1 and 2 on the same nic, and even part of 2 on other sr-iov enabled nics.
Since 2 is a completely different concept than 1, we weren't sure that using drag and drop, as for PFs, isn't too much in this case.
Question: any issues with hot plug/unplug or just expected to work normally?
Expected to work (but wasn't tested yet).

----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Alona Kaplan" <alkaplan@redhat.com>, bazulay@redhat.com Cc: "Itamar Heim" <iheim@redhat.com>, "Eldan Hildesheim" <ehildesh@redhat.com>, "Nir Yechiel" <nyechiel@redhat.com>, devel@ovirt.org Sent: Thursday, October 30, 2014 7:47:31 PM Subject: Re: [ovirt-devel] SR-IOV feature
On Sun, Oct 26, 2014 at 06:39:00AM -0400, Alona Kaplan wrote:
On 10/05/2014 07:02 AM, Alona Kaplan wrote:
Hi all,
Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to oVirt management system (including migration).
You are more than welcome to review the feature page- http://www.ovirt.org/Feature/SR-IOV
Thanks, Alona. _______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
Glad to see this.
some questions:
Note: this feature is about exposing a virtualized (or VirtIO) vNic to the guest, and not about exposing the PCI device to it. This restriction is necessary for migration to be supported.
did not understand this sentence - are you hinting to macvtap?
Most likely macvtap, yes.
Additionally I think Martin Poledník is looking into direct sr-iov attachment to VMs as part of the pci passthrough work he is doing.
add/edit profile
so I gather the implementation is at the profile level, which is at the logical network level? how does this work exactly? can this logical network be vlan tagged, or must it be native? if vlan tagged, who does the tagging for the passthrough device? (I see later on that vf_vlan is one of the parameters to vdsm; just wondering how the mapping can be at the host level if this is a passthrough device) is this because of the use of virtio (macvtap)?
The logical network can be vlan tagged. As you mentioned, the vf_vlan is one of the parameters to the vdsm (on the create verb). Setting the vlan on the vf is done as follows- ip link set {DEVICE} vf {NUM} [ vlan VLANID ]. It is written in the notes section.
It is not related to the use of virtio. The vlan can be set on the vf whether it is connected to the vm via macvtap or directly.
Are you sure about this? I think that when a host device is attached to a VM, it disappears from the host, and the guest can send arbitrary unmodified packets through the wire. But I may well be wrong.
I think you are correct for the case of mtu (that's why I added it as an open issue- "Is applying MTU on VF supported by libvirt?"). But as I understand from the documentation (although I didn't test it myself), that is the purpose of ip link set {DEVICE} vf {NUM} vlan VLANID. The documentation says- "all traffic sent from the VF will be tagged with the specified VLAN ID. Incoming traffic will be filtered for the specified VLAN ID, and will have all VLAN tags stripped before being passed to the VF." Note- It is also supported by libvirt. As you can read in- http://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/Virt... "type='hostdev' SR-IOV interfaces do support transparent vlan tagging of guest traffic".
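For illustration only, here is a minimal Python sketch of how a host agent could apply that command; the helper name and arguments are hypothetical and this is not Vdsm's actual code:

    import subprocess

    def set_vf_vlan(pf_device, vf_num, vlan_id):
        # Runs "ip link set <PF> vf <NUM> vlan <VLANID>": the driver then tags
        # traffic sent from the VF and filters/strips the tag on incoming
        # traffic, as quoted from the documentation above.
        subprocess.check_call(
            ['ip', 'link', 'set', pf_device, 'vf', str(vf_num),
             'vlan', str(vlan_id)])

    # Example (hypothetical device): tag VF 3 of eth2 with VLAN 100.
    # set_vf_vlan('eth2', 3, 100)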
wouldn't it be better to support both macvtap and passthrough and just flag the VM as non-migratable in that case?
Martin Polednik is working on pci-passthrough- http://www.ovirt.org/Features/hostdev_passthrough
Maybe we should wait for his feature to be ready and then combine it with the sr-iov feature. As I see in his feature page he plans to attach a specific device directly to the vm. We can combine his feature with the sr-iov feature-
1. The network profile will have a type property (sketched below)-
   - bridge (the regular configuration we have today, vnic->tap->bridge->physical nic)
   - virtio (in the current feature design it is called passthrough, vnic->macvtap->vf)
   - pci-passthrough (vnic->vf)
2. Attaching a network profile with pci-passthrough type to a vnic will mark the vm as non-migratable.
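A purely illustrative Python sketch of the proposed type property; the enum and its values are hypothetical and not the Engine's actual data model:

    from enum import Enum

    class VnicProfileType(Enum):
        BRIDGE = 'bridge'                    # vnic -> tap -> bridge -> physical nic
        VIRTIO = 'virtio'                    # vnic -> macvtap -> vf (migratable)
        PCI_PASSTHROUGH = 'pci_passthrough'  # vnic -> vf (vm marked non-migratable)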
This marking can be tuned by the admin. If the admin requests migration despite the pci-passthrough type, Vdsm can auto-unplug the PCI device before migration, and plug it back on the destination. That would allow some kind of migration to guests that are willing to see a PCI device disappear and re-appear.
Added it as an open issue to the feature page.
3. When running a vm with a pci-passthrough vnic, a free VF will be attached to the vm with the vlan and mtu configuration of the profile/network (same as for a virtio profile, as described in the feature page).
The benefit of it is that the user won't have to choose the vf directly and will be able to set vlan and mtu on the vf.
also (and this doesn't have to be in the first phase): what happens if I run out of hosts with sr-iov (or they fail)? can I fall back to a non-pci-passthrough profile as backup? (a policy question at the vm level: is it more important to have sr-iov, or more important that the vm runs even without it, since it provides a critical service, with a [scheduling] preference to run on sr-iov?) (oh, I see this is in the "futures" section already.
:)
A benefit of this "Nice to have passthrough" is that one could set it on vNic profiles that are already used by VMs. Once they are migrated to a new host, the passthrough-ness request would take effect.
Added this benefit to the feature page.
management, display and migration properties are not relevant for the VFs configuration
just wondering - any technical reason we can't put the management on a VF (not saying it's a priority to do so)?
Today we mark the logical network with a role (management/display/migration) when attaching it to the cluster. A logical network can be attached to one physical nic (PF).
We can't use the current attachment of a role for sr-iov, since the network can be configured as "vf allowed" on more than one nic (maybe even on all the nics). If the network is "vf allowed" on the nic, a vnic with this network can be attached to a free vf on the nic.
So we can't use the logical network to mark a vf with a role. We have to mark the vf explicitly. Since in the current design we don't expose the vf, setting the roles was blocked. But if there is a requirement for setting a vf as management/migration/display we can re-think about the design for it.
We can relax this requirement by allowing the network to be attached to one nic (be it VF or PF or legacy), and to set its "vf allowed" on a completely disjoint set of PFs.
I'm not sure I understand your suggestion. And I still don't understand the benefit of using a vf as management/display/migration.
sr-iov host nic management - num of VFs
I assume this is for admin to define a policy on how many VFs to use, based on the max as reported by getVdsCaps. worth stating that for clarity.
Updated the wiki with the following- "It is used for admin to enable this number of VFs on the nic. Changing this value will remove all the VFs from the nic and create new #numOFVfs VFs on the nic."
The max value reported by getVdsCaps is just the theoretical maximum value.
I think that Itamar suggests that this should be automated. An admin could say "give me all the VFs you can", and when adding a new host, Engine would set it seamlessly.
By the way, do you know what's the downside of asking for the maximum number of VFs? Is it memory overhead? CPU? network performance?
I think "give me all the VFs you can" would rarely be used because in practice this maximum is much lower, since each VF consumes resources. Network device needs the resources to support the VF such as queues for data, data address space, command processing, and more.
I wonder whether it makes sense for Vdsm to set the max on each reboot?
You're not updating the max, you're updating the number of existing VFs on a PF. On a reboot all the VFs are destroyed. When the host is started, #defaultNum of VFs are created. Updating the number of VFs via sysfs works across modules. Since the sriov_numvfs value passed to sysfs is not persistent across reboots, after a reboot the value is taken from the module-specific configuration. Each module has its own way to specify a persistent default number of VFs. For example, with the Intel igb driver you should add the line "options igb max_vfs=7" to a file in /etc/modprobe.d. If the module doesn't specify the number of VFs in its configuration, the default number is 0. So if vdsm doesn't set /sys/class/net/'device_name'/device/sriov_numvfs on each boot, the user will have to control the number manually and per module.
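To make the mechanics concrete, a minimal sketch assuming the standard sriov_totalvfs/sriov_numvfs sysfs attributes described above; the helper is hypothetical, not Vdsm's actual implementation:

    import os

    def set_num_vfs(device, num_vfs):
        # Recreate num_vfs VFs on the given PF via sysfs (non-persistent).
        base = '/sys/class/net/%s/device' % device
        with open(os.path.join(base, 'sriov_totalvfs')) as f:
            max_vfs = int(f.read())  # theoretical maximum, as in getVdsCaps
        if num_vfs > max_vfs:
            raise ValueError('%s supports at most %d VFs' % (device, max_vfs))
        # The kernel requires writing 0 before a new non-zero value, which is
        # why changing the value removes all existing VFs first.
        with open(os.path.join(base, 'sriov_numvfs'), 'w') as f:
            f.write('0')
        with open(os.path.join(base, 'sriov_numvfs'), 'w') as f:
            f.write(str(num_vfs))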
Another related issue, that is mentioned as an open question: the current suggestion of having updateSriovMaxVFs as an independent verb has a downside: you cannot use it to update the number of VFs on the PF that is used by the management network. If we want to support this use case, we should probably expose the functionality within the transactional setupNetworks verb.
Why can't it be used on the PF that is used by the management network? AFAIK the PF doesn't lose connectivity when updating /sys/class/net/eth0/device/sriov_numvfs, but I'm not sure about it. Added it to the open issues section.
User Experience - Setup networks - Option 1
in the last picture ("Edit VFs networks and labels") - why are there labels here together with the networks (if labels appear at the PF level in the first dialog)?
iiuc, option 2 is re-using the setup networks dialog, where the PF will just be another physical interface, and networks or labels are edited just like for regular network interfaces? (not sure where you are on this, but it sounds more straightforward/similar to existing concepts iiuc).
As I wrote in the answer about the roles, there are two concepts- 1. The attachment of a network to a physical nic (what we have today). 2. Containing the network in the "VFs management tab=>allowed networks" of the nic.
In 1, we actually configure the host's nics and bridges according to the setup networks. In 2, we just specify the "allowed" list; it isn't even sent to the vdsm. It is used by the engine when it schedules a host for a vm.
The connection between networks and nics is many to many. The same network can be part of 1 and 2 on the same nic, and even part of 2 on other sr-iov enabled nics.
Since 2 is a completely different concept than 1, we weren't sure that using drag and drop, as for PFs, isn't too much in this case.
Question: any issues with hot plug/unplug or just expected to work normally?
Expected to work (but wasn't tested yet).

Hey all, sorry for joining a bit late... General note: hostdev-passthrough wiki will be updated ASAP in order to reflect ongoing progress. ----- Original Message -----
From: "Alona Kaplan" <alkaplan@redhat.com> To: "Dan Kenigsberg" <danken@redhat.com> Cc: "Eldan Hildesheim" <ehildesh@redhat.com>, devel@ovirt.org, "Nir Yechiel" <nyechiel@redhat.com> Sent: Sunday, November 2, 2014 2:17:40 PM Subject: Re: [ovirt-devel] SR-IOV feature
----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Alona Kaplan" <alkaplan@redhat.com>, bazulay@redhat.com Cc: "Itamar Heim" <iheim@redhat.com>, "Eldan Hildesheim" <ehildesh@redhat.com>, "Nir Yechiel" <nyechiel@redhat.com>, devel@ovirt.org Sent: Thursday, October 30, 2014 7:47:31 PM Subject: Re: [ovirt-devel] SR-IOV feature
On Sun, Oct 26, 2014 at 06:39:00AM -0400, Alona Kaplan wrote:
On 10/05/2014 07:02 AM, Alona Kaplan wrote:
Hi all,
Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to oVirt management system (including migration).
You are more than welcome to review the feature page- http://www.ovirt.org/Feature/SR-IOV
Thanks, Alona. _______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
Glad to see this.
some questions:
Note: this feature is about exposing a virtualized (or VirtIO) vNic to the guest, and not about exposing the PCI device to it. This restriction is necessary for migration to be supported.
did not understand this sentence - are you hinting to macvtap?
Most likely macvtap, yes.
Additionally I think Martin Poledník is looking into direct sr-iov attachment to VMs as part of the pci passthrough work he is doing.
add/edit profile
so I gather the implementation is at the profile level, which is at the logical network level? how does this work exactly? can this logical network be vlan tagged, or must it be native? if vlan tagged, who does the tagging for the passthrough device? (I see later on that vf_vlan is one of the parameters to vdsm; just wondering how the mapping can be at the host level if this is a passthrough device) is this because of the use of virtio (macvtap)?
The logical network can be vlan tagged. As you mentioned, the vf_vlan is one of the parameters to the vdsm (on the create verb). Setting the vlan on the vf is done as follows- ip link set {DEVICE} vf {NUM} [ vlan VLANID ]. It is written in the notes section.
It is not related to the use of virtio. The vlan can be set on the vf whether it is connected to the vm via macvtap or directly.
Are you sure about this? I think that when a host device is attached to a VM, it disappears from the host, and the guest can send arbitrary unmodified packets through the wire. But I may well be wrong.
I think you are correct for the case of mtu (that's why I added it as an open issue- "Is applying MTU on VF supported by libvirt?"). But as I understand from the documentation (although I didn't test it myself), that is the purpose of ip link set {DEVICE} vf {NUM} vlan VLANID. The documentation says- "all traffic sent from the VF will be tagged with the specified VLAN ID. Incoming traffic will be filtered for the specified VLAN ID, and will have all VLAN tags stripped before being passed to the VF."
Note- It is also supported by libvirt. As you can read in- http://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/Virt... "type='hostdev' SR-IOV interfaces do support transparent vlan tagging of guest traffic".
wouldn't it be better to support both macvtap and passthrough and just flag the VM as non-migratable in that case?
Martin Polednik is working on pci-passthrough- http://www.ovirt.org/Features/hostdev_passthrough
I'm actively working on hostdev passthrough (not only PCI - currently PCI, SCSI and USB), and part of my testing was done on an SR-IOV capable nic (Intel 82576 chip).
Maybe we should wait for his feature to be ready and then combine it with the sr-iov feature. As I see in his feature page he plans to attach a specific device directly to the vm.
Hostdev passthrough works at VFIO granularity - that means it reports to engine the whole computer bus tree (libvirt's listAllDevices()), including a few unique device identifiers (for me that is the name of the device, such as pci_0000_af_01_1c, OR the tuple (vendor_id, device_id)). The api is very general - it doesn't care if we're dealing with a PF or a VF; the only restriction is that the whole IOMMU group has to be attached (libvirt limitation) - in the case of SR-IOV NICs that presents no complications, as these are in unique IOMMU groups. This is the API you should use when dealing with physical host devices; if anything is missing feel free to bring it up and we can work it in, at least so we don't implement the same thing twice.
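For readers unfamiliar with that API, a small hedged sketch of such a listing with libvirt-python; the flag choice and the parsing left to the caller are assumptions, not Vdsm's actual code:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    # Limit the listing to PCI node devices; SR-IOV VFs appear here as
    # ordinary PCI devices, each in its own IOMMU group.
    flags = libvirt.VIR_CONNECT_LIST_NODE_DEVICES_CAP_PCI_DEV
    for dev in conn.listAllDevices(flags):
        # dev.name() is the unique identifier mentioned above (e.g.
        # pci_0000_af_01_1c); dev.XMLDesc(0) carries vendor/product ids and
        # the IOMMU group for the caller to parse.
        print(dev.name())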
We can combine his feature with the sr-iov feature-
1. The network profile will have a type property-
   - bridge (the regular configuration we have today, vnic->tap->bridge->physical nic)
   - virtio (in the current feature design it is called passthrough, vnic->macvtap->vf)
   - pci-passthrough (vnic->vf)
2. Attaching a network profile with pci-passthrough type to a vnic will mark the vm as non-migratable.
This marking can be tuned by the admin. If the admin requests migration despite the pci-passthrough type, Vdsm can auto-unplug the PCI device before migration, and plug it back on the destination. That would allow some kind of migration to guests that are willing to see a PCI device disappear and re-appear.
For NICs this can even be avoided by using bonding[1]; for other devices we'll need to manually handle cases such as:
- specific device on a specific bus
- specific device (any bus)
- VF belonging to a specific PF
- VF (any PF)
(and possibly more, to be discussed)
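As a rough illustration of the auto-unplug idea discussed above, here is a hedged libvirt-python sketch; the hostdev XML, the PCI address and the migration flags are placeholders, and in practice the VF address on the destination host would differ:

    import libvirt

    # Hypothetical PCI address of the VF currently assigned to the guest.
    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0xaf' slot='0x01' function='0x4'/>
      </source>
    </hostdev>
    """

    def migrate_with_unplug(dom, dest_conn, dest_uri):
        # Unplug the VF from the running guest...
        dom.detachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        # ...migrate while the guest only sees the device disappear...
        new_dom = dom.migrate(dest_conn, libvirt.VIR_MIGRATE_LIVE, None, dest_uri, 0)
        # ...and plug an equivalent VF back in on the destination.
        new_dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        return new_dom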
Added it as an open issue to the feature page.
3. When running a vm with a pci-passthrough vnic, a free VF will be attached to the vm with the vlan and mtu configuration of the profile/network (same as for a virtio profile, as described in the feature page).
The benefit of it is that the user won't have to choose the vf directly and will be able to set vlan and mtu on the vf.
also (and this doesn't have to be in the first phase): what happens if I run out of hosts with sr-iov (or they fail)? can I fall back to a non-pci-passthrough profile as backup? (a policy question at the vm level: is it more important to have sr-iov, or more important that the vm runs even without it, since it provides a critical service, with a [scheduling] preference to run on sr-iov?) (oh, I see this is in the "futures" section already.
:)
A benefit of this "Nice to have passthrough" is that one could set it on vNic profiles that are already used by VMs. Once they are migrated to a new host, the passthrough-ness request would take effect.
Added this benefit to the feature page.
management, display and migration properties are not relevant for the VFs configuration
just wondering - any technical reason we can't put the management on a VF (not saying it's a priority to do so)?
Today we mark the logical network with a role (management/display/migration) when attaching it to the cluster. A logical network can be attached to one physical nic (PF).
We can't use the current attachment of a role for sr-iov, since the network can be configured as "vf allowed" on more than one nic (maybe even on all the nics). If the network is "vf allowed" on the nic, a vnic with this network can be attached to a free vf on the nic.
So we can't use the logical network to mark a vf with a role. We have to mark the vf explicitly. Since in the current design we don't expose the vf, setting the roles was blocked. But if there is a requirement for setting a vf as management/migration/display we can re-think about the design for it.
We can relax this requirement by allowing the network to be attached to one nic (be it VF or PF or legacy), and to set its "vf allowed" on a completely disjoint set of PFs.
I'm not sure I understand your suggestion. And I still don't understand the benefit of using a vf as management/display/migration.
sr-iov host nic management - num of VFs
I assume this is for admin to define a policy on how many VFs to use, based on the max as reported by getVdsCaps. worth stating that for clarity.
Updated the wiki with the following- "It is used for admin to enable this number of VFs on the nic. Changing this value will remove all the VFs from the nic and create new #numOFVfs VFs on the nic."
The max value reported by getVdsCaps is just the theoretical maximum value.
I think that Itamar suggests that this should be automated. An admin could say "give me all the VFs you can", and when adding a new host, Engine would set it seamlessly.
By the way, do you know what's the downside of asking for the maximum number of VFs? Is it memory overhead? CPU? network performance?
I think "give me all the VFs you can" would rarely be used because in practice this maximum is much lower, since each VF consumes resources. Network device needs the resources to support the VF such as queues for data, data address space, command processing, and more.
I wonder whether it makes sense for Vdsm to set the max on each reboot?
You're not updating the max, you're updating the number of existing VFs on a PF.
On a reboot all the VFs are destroyed. When the host is started, #defaultNum of VFs are created.
Updating the number of VFs via sysfs works across modules. Since the sriov_numvfs value passed to sysfs is not persistent across reboots, after a reboot the value is taken from the module-specific configuration.
Each module has its own way to specify a persistent default number of VFs. For example, with the Intel igb driver you should add the line "options igb max_vfs=7" to a file in /etc/modprobe.d. If the module doesn't specify the number of VFs in its configuration, the default number is 0.
So if vdsm doesn't set /sys/class/net/'device_name'/device/sriov_numvfs on each boot, the user will have to control the number manually and per module.
Another related issue, that is mentioned as an open question: the current suggestion of having updateSriovMaxVFs as an independent verb has a downside: you cannot use it to update the number of VFs on the PF that is used by the management network. If we want to support this use case, we should probably expose the functionality within the transactional setupNetworks verb.
Why can't it be used on the PF that is used by the management network? AFAIK the PF doesn't lose connectivity when updating /sys/class/net/eth0/device/sriov_numvfs, but I'm not sure about it. Added it to the open issues section.
User Experience - Setup networks - Option 1
in the last picture ("Edit VFs networks and labels") - why are there labels here together with the networks (if labels appear at the PF level in the first dialog)?
iiuc, option 2 is re-using the setup networks dialog, where the PF will just be another physical interface, and networks or labels are edited just like for regular network interfaces? (not sure where you are on this, but it sounds more straightforward/similar to existing concepts iiuc).
As I wrote in the answer about the roles, there are two concepts- 1. The attachment of a network to a physical nic (what we have today). 2. Containing the network in the "VFs management tab=>allowed networks" of the nic.
In 1, we actually configure the host's nics and bridges according to the setup networks. In 2, we just specify the "allowed" list; it isn't even sent to the vdsm. It is used by the engine when it schedules a host for a vm.
The connection between networks and nics is many to many. The same network can be part of 1 and 2 on the same nic, and even part of 2 on other sr-iov enabled nics.
Since 2 is a completely different concept than 1, we weren't sure that using drag and drop, as for PFs, isn't too much in this case.
Question: any issues with hot plug/unplug or just expected to work normally?
Expected to work (but wasn't tested yet).
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
participants (12)
- Alona Kaplan
- Antoni Segura Puimedon
- Arik Hadas
- Dan Kenigsberg
- Itamar Heim
- Martin Polednik
- Maurice James
- Oved Ourfali
- R P Herrold
- Richard W.M. Jones
- Shahar Havivi
- Sven Kieske