From iheim at redhat.com Thu Dec 1 04:32:55 2011 From: iheim at redhat.com (Itamar Heim) Date: Wed, 30 Nov 2011 23:32:55 -0500 (EST) Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <20111130224041.GM13803@us.ibm.com> References: <20111130224041.GM13803@us.ibm.com> Message-ID: > -----Original Message----- > From: Adam Litke [mailto:agl at us.ibm.com] > Sent: Thursday, December 01, 2011 0:41 AM > To: vdsm-devel at lists.fedorahosted.org; engine-devel at ovirt.org > Cc: Daniel P. Berrange; Chris Wright; Dan Kenigsberg; Itamar Heim > Subject: Proposed next-generation vdsm API > > Recently we've had some very productive discussions concerning the VDSM API. I > want to attempt to refocus the discussion around an emerging proposal and see if > we can agree on a sensible path forward. > > Based on the discussion, I have identified the following requirements that > a new API for vdsm should have: > > 1.) Single API that can be consumed by ovirt-engine and ISVs > - We don't want to maintain multiple parallel APIs > - To develop a vendor ecosystem, we must have a robust external API to > vdsm > > 2.) Full vdsm capabilities are exposed without requiring ovirt-engine > - ovirt components should be modular and independently useful > - Some deployments might want to manage nodes without ovirt-engine > > 3.) Standardized protocol with low overhead > - Required for widespread adoption > > 4.) Support for asynchronous tasks and events > - Needed by ovirt-engine and other consumers > > Based on these requirements, the following proposal has started to emerge: > > Create a REST API that will provide all of the functionality that is currently > available via the xmlrpc interface (with the goal of deprecating xmlrpc once it > becomes mature enough). To support advanced clustering features that > ovirt-engine is planning, we'll write a QMF broker that can proxy the REST API > onto a message bus. ovirt-engine will interact with vdsm exclusively over this > bus but the REST API will be the principal API and the entry point for ISV apps. > A REST API provides a light-weight and standard way to access all of the vdsm > functionality. > > The REST API will handle events by exposing a new 'events' collection at the api > root. REST users will use some sort of polling to collect these events. The > details of this interface are being worked on. Several ways for minimizing the > impact of polling have been discussed. The QMF broker can expose a > publish/subscribe model for events as appropriate. > > Is this model an acceptable way to improve the vdsm API? I would like to hear > the opinions of ovirt-engine developers, vdsm developers, and other > stakeholders. Thanks for providing feedback on this proposal! Why wrap things that are non-native to REST in QMF, rather than do the reverse? Or just do them in parallel, since it sounds like both are going to be first-class citizens? From abaron at redhat.com Thu Dec 1 07:02:27 2011 From: abaron at redhat.com (Ayal Baron) Date: Thu, 01 Dec 2011 02:02:27 -0500 (EST) Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: Message-ID: ----- Original Message ----- > > > > -----Original Message----- > > From: Adam Litke [mailto:agl at us.ibm.com] > > Sent: Thursday, December 01, 2011 0:41 AM > > To: vdsm-devel at lists.fedorahosted.org; engine-devel at ovirt.org > > Cc: Daniel P. 
Berrange; Chris Wright; Dan Kenigsberg; Itamar Heim > > Subject: Proposed next-generation vdsm API > > > > Recently we've had some very productive discussions concerning the > > VDSM > API. I > > want to attempt to refocus the discussion around an emerging > > proposal > and see if > > we can agree on a sensible path forward. > > > > Based on the discussion, I have identified the following > > requirements > that > > a new API for vdsm should have: > > > > 1.) Single API that can be consumed by ovirt-engine and ISVs > > - We don't want to maintain multiple parallel APIs > > - To develop a vendor ecosystem, we must have a robust external > > API to > > vdsm > > > > 2.) Full vdsm capabilities are exposed without requiring > > ovirt-engine > > - ovirt components should be modular and independently useful > > - Some deployments might want to manage nodes without ovirt-engine > > > > 3.) Standardized protocol with low overhead > > - Required for widespread adoption > > > > 4.) Support for asynchronous tasks and events > > - Needed by ovirt-engine and other consumers > > > > Based on these requirements, the following proposal has started to > emerge: > > > > Create a REST API that will provide all of the functionality that > > is > currently > > available via the xmlrpc interface (with the goal of deprecating > > xmlrpc > once it > > becomes mature enough). To support advanced clustering features > > that > > ovirt-engine is planning, we'll write an QMF broker that can proxy > > the > REST API > > onto a message bus. ovirt-engine will interact with vdsm > > exclusively > over this > > bus but the REST API will be the principle API and the entry point > > for > ISV apps. > > A REST API provides a light-weight and standard way to access all > > of the > vdsm > > functionality. > > > > The REST API will handle events by exposing a new 'events' > > collection at > the api > > root. REST users will use some sort of polling to collect these > > events. > The > > details of this interface are being worked on. Several ways for > minimizing the > > impact of polling have been discussed. The QMF broker can expose a > > publish/subscribe model for events as appropriate. > > > > Is this model an acceptable way to improve the vdsm API? I would > > like > to hear > > the opinions of ovirt-engine developers, vdsm developers, and other > > stakeholders. Thanks for providing feedback on this proposal! > > Why things non native to REST and wrap it in QMF, rather than do the > reverse? > Or just to them in parallel, since it sounds like both are going to > be > first class citizens? This was more my understanding from our discussion on IRC yesterday. REST API - everything that is relevant for single node management QMF - same API as above + multi-node relevant API calls. I don't see any reason for doing weird things over REST to support the latter. In fact, I don't even see any real reason for going through the REST API when using QMF. If you take a look at today's API you will see that there is nothing there that limits it to XML-RPC and we could easily expose all the calls using REST or anything else. In python, exposing a new verb in the various APIs can be automatic so this would require very little maintenance. Any multi-node or transport specific calls can be decorated as such and would be automatically ignored/picked up by the relevant API layer. This way, we could also easily enable using different bus protocols assuming a customer already has a deployment as was suggested yesterday. 
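Ayal's last point, decorating verbs so that each API layer picks up only the calls relevant to it, might look roughly like the sketch below. The decorator name, the registry, and the verbs are all hypothetical, not actual vdsm code:

    # Hypothetical sketch of transport-tagged verb exposure (not vdsm code).
    _VERBS = {}

    def exported(transports=('rest', 'qmf')):
        """Mark a verb for exposure on the given API layers."""
        def decorator(func):
            _VERBS[func.__name__] = (func, set(transports))
            return func
        return decorator

    @exported()                       # single-node verb: REST and QMF both see it
    def getVmStats(vmId):
        return {'vmId': vmId, 'status': 'Up'}

    @exported(transports=('qmf',))    # multi-node verb: picked up by the bus layer only
    def migrateVm(vmId, dstHost):
        return {'vmId': vmId, 'migratingTo': dstHost}

    def verbs_for(transport):
        """Each API layer builds its dispatch table from the registry."""
        return dict((name, func) for name, (func, tags) in _VERBS.items()
                    if transport in tags)

A REST layer would then expose verbs_for('rest') and the bus layer verbs_for('qmf'), so a new verb is written once and tagged for wherever it belongs.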
> _______________________________________________ > vdsm-devel mailing list > vdsm-devel at lists.fedorahosted.org > https://fedorahosted.org/mailman/listinfo/vdsm-devel > From lpeer at redhat.com Thu Dec 1 10:22:50 2011 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 01 Dec 2011 12:22:50 +0200 Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <20111130155434.GB1160@redhat.com> References: <20111130155434.GB1160@redhat.com> Message-ID: <4ED7557A.8060505@redhat.com> On 11/30/2011 05:54 PM, Daniel P. Berrange wrote: > On Wed, Nov 30, 2011 at 10:06:37AM -0500, Eli Mesika wrote: >> http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses > > My primary comment on this is that you likely don't want to restrict > yourself to PCI addresses. > > If RHEVM intends to use virtio-serial controllers you want to maintain > stable virtio serial addresses > > If RHEVM intends to use CCID smartcard controllers you want to maintain > stable CCID device addresses > > If RHEVM intends to use SCSI controllers you will want to maintain > SCSI drive addresses > > If RHEVM intends to use USB controllers you will want to maintain > USB device addresses > > I think you get the idea :-) In general you can say that every single > device listed in the XML will ultimately have an address associated > with it. The type of address will vary depending on what type of controller > the device is attached to. > > In addition, when you start dealing with these other non-PCI address > types, you will also need to start dealing with controller devices. > > eg, if you add a SCSI disk to the XML > > <disk type='file' device='disk'> > ... > <target dev='sda' bus='scsi'/> > </disk> > > First of all libvirt will want to assign an address to this drive > > <address type='drive' .../>
> > > Then, if the corresponding controller does not exist already, libvirt > will auto-add a controller device, which itself has an address you will > need to track: > > <controller type='scsi' index='0'> > <address type='pci' ... 
> > > > In addition, when QEMU gains support for PCI bridges, or multiple PCI > root complexes, there will also be the possibility of dealing with > multiple PCI controllers in the XML too. > > So even if you only implement PCI address support in the first iteration, > I'd really encourage you to at least consider how you will cope with the > other address types sooner, rather than later. > > Regards, > Daniel Hi Eli, - I suggest changing the feature name from stable PCI addresses to stable device addresses, then document which devices are supported. I think the first version includes PCI, VirtIO Serial, SCSI, IDE, CCID, actually anything libvirt supports - right? - I am missing in the documentation the format of the device addresses as interchanged between RHEVM and VDSM, specifically i am interested why use XML and not JSON format for that data. Thanks, Livnat From ofrenkel at redhat.com Thu Dec 1 10:56:45 2011 From: ofrenkel at redhat.com (Omer Frenkel) Date: Thu, 01 Dec 2011 05:56:45 -0500 (EST) Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <8a9ce5ed-e449-414a-bb11-b3c189a15f2b@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <625f96f0-38b6-4fda-b59b-c0b9b814dcaa@zmail07.collab.prod.int.phx2.redhat.com> my notes: why adding new parameter to create verb and not using the createInfo? (or this is the intention?) if we expect xml to be large, perhaps it would be better to get it from db only before sending it to vdsm, to keep the vm object from getting too big, meaning never get it as part of vm dynamic. ----- Original Message ----- > From: "Eli Mesika" > To: engine-devel at ovirt.org > Sent: Wednesday, November 30, 2011 5:17:42 PM > Subject: Re: [Engine-devel] Stable PCI Addresses design wiki > > Hi again > The following is a design draft for a new feature of oVirt-engine > planned for 3.1 > > The feature allow devices in guest virtual machines to retain the > same PCI address allocations as other devices are added or removed > from the guest configuration. This is particularly important for > Windows guests in order to prevent warnings or reactivation when > device addresses change. > > This feature is supported by libvirt and should be implemented by > RHEVM and VDSM. > > When creating a VM, QEMU allocates PCI addresses to the guest > devices, these addresses are being reported by libvirt to VDSM and > VDSM should report it back to RHEVM. RHEVM should persist the PCI > addresses and report it as part of the VM configuration on the next > run. If a change to the VM devices occurred RHEVM should detect the > change and persist the new PCI addresses. > > Please review. 
> > Thanks > Eli Mesika > Redhat ISRAEL > > > ----- Original Message ----- > > From: "Eli Mesika" > > To: engine-devel at ovirt.org > > Sent: Wednesday, November 30, 2011 5:06:37 PM > > Subject: [Engine-devel] Stable PCI Addresses design wiki > > > > http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From emesika at redhat.com Thu Dec 1 11:09:17 2011 From: emesika at redhat.com (Eli Mesika) Date: Thu, 01 Dec 2011 06:09:17 -0500 (EST) Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <625f96f0-38b6-4fda-b59b-c0b9b814dcaa@zmail07.collab.prod.int.phx2.redhat.com> Message-ID: <865083c6-ffb2-46e7-a2af-9f7019f056ab@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Omer Frenkel" > To: "Eli Mesika" > Cc: engine-devel at ovirt.org > Sent: Thursday, December 1, 2011 12:56:45 PM > Subject: Re: [Engine-devel] Stable PCI Addresses design wiki > > my notes: > why adding new parameter to create verb and not using the createInfo? > (or this is the intention?) Yes, we will use the createInfo to pass that > > if we expect xml to be large, perhaps it would be better to get it > from db only before sending it to vdsm, > to keep the vm object from getting too big, meaning never get it as > part of vm dynamic. Sure , I have considered that (see Logic Design) in original document: b. Get the domxml from DAL for each VM we are running (creating) This will insure that our VM entities are not keeping the domxml inside them, rather, they will got it on demand > > ----- Original Message ----- > > From: "Eli Mesika" > > To: engine-devel at ovirt.org > > Sent: Wednesday, November 30, 2011 5:17:42 PM > > Subject: Re: [Engine-devel] Stable PCI Addresses design wiki > > > > Hi again > > The following is a design draft for a new feature of oVirt-engine > > planned for 3.1 > > > > The feature allow devices in guest virtual machines to retain the > > same PCI address allocations as other devices are added or removed > > from the guest configuration. This is particularly important for > > Windows guests in order to prevent warnings or reactivation when > > device addresses change. > > > > This feature is supported by libvirt and should be implemented by > > RHEVM and VDSM. > > > > When creating a VM, QEMU allocates PCI addresses to the guest > > devices, these addresses are being reported by libvirt to VDSM and > > VDSM should report it back to RHEVM. RHEVM should persist the PCI > > addresses and report it as part of the VM configuration on the next > > run. If a change to the VM devices occurred RHEVM should detect the > > change and persist the new PCI addresses. > > > > Please review. 
> > > > Thanks > > Eli Mesika > > Redhat ISRAEL > > > > > > ----- Original Message ----- > > > From: "Eli Mesika" > > > To: engine-devel at ovirt.org > > > Sent: Wednesday, November 30, 2011 5:06:37 PM > > > Subject: [Engine-devel] Stable PCI Addresses design wiki > > > > > > http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From emesika at redhat.com Thu Dec 1 11:21:41 2011 From: emesika at redhat.com (Eli Mesika) Date: Thu, 01 Dec 2011 06:21:41 -0500 (EST) Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <20111130155434.GB1160@redhat.com> Message-ID: ----- Original Message ----- > From: "Daniel P. Berrange" > To: "Eli Mesika" > Cc: engine-devel at ovirt.org > Sent: Wednesday, November 30, 2011 5:54:34 PM > Subject: Re: [Engine-devel] Stable PCI Addresses design wiki > > On Wed, Nov 30, 2011 at 10:06:37AM -0500, Eli Mesika wrote: > > http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses > > My primary comment on this is that you likely don't want to restrict > yourself to PCI addresses. Sure, this is a bad name, have changed that to "Stable Device Addresses" In the term Device we include PCI, VirtIO Serial, SCSI, IDE, CCID and actually anything libvirt supports. Page was changed to : http://www.ovirt.org/wiki/Features/Design/StableDeviceAddresses Thanks > > If RHEVM intends to use virtio-serial controllers you want to > maintain > stable virtio serial addresses > > If RHEVM intends to use CCID smartcard controllers you want to > maintain > stable CCID device addresses > > If RHEVM intends to use SCSI controllers you will want to maintain > SCSI drive addresses > > If RHEVM intends to use USB controllers you will want to maintain > USB device addresses > > I think you get the idea :-) In general you can say that every single > device listed in the XML will ultimately have an address associated > with it. The type address will vary depending on what type of > controller > the device is attached to. > > In addition, when you start dealing with these other non-PCI address > types, you will also need to start dealing with controller devices. > > eg, if you add a SCSI disk to the XML > > > > > > > > First of all libvirt will want to assign an address to this drive > > > > > >
> > > Then, if the corresponding controller does not exist already, libvirt > will auto-add a controller device, which itself has an address you > will > need to track: > > <controller type='scsi' index='0'> > <address type='pci' ... 
function='0x0'/> > > > > In addition, when QEMU gains support for PCI bridges, or multiple PCI > root complexes, there will also be the possibility of dealing with > multiple PCI controllers in the XML too. > > So even if you only implement PCI address support in the first > iteration, > I'd really encourage you to at least consider how you will cope with > the > other address types sooner, rather than later. > > Regards, > Daniel > -- > |: http://berrange.com -o- > | http://www.flickr.com/photos/dberrange/ :| > |: http://libvirt.org -o- > | http://virt-manager.org :| > |: http://autobuild.org -o- > | http://search.cpan.org/~danberr/ :| > |: http://entangle-photo.org -o- > | http://live.gnome.org/gtk-vnc :| > From lpeer at redhat.com Thu Dec 1 12:06:52 2011 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 01 Dec 2011 14:06:52 +0200 Subject: [Engine-devel] Stable PCI/DEvice addresses In-Reply-To: <20111201114937.GH23066@redhat.com> References: <0137308a-f249-493c-8839-cbfe5bc847bd@zmail13.collab.prod.int.phx2.redhat.com> <20111201114937.GH23066@redhat.com> Message-ID: <4ED76DDC.3040806@redhat.com> Moving this back to list - On 12/01/2011 01:49 PM, Dan Kenigsberg wrote: > On Thu, Dec 01, 2011 at 06:26:16AM -0500, Eli Mesika wrote: >> Hi guys >> >> I need the xml/json format representing the VM installed devices. >> Livnat asked me to add it to my Wiki >> http://www.ovirt.org/wiki/Features/Design/StableDeviceAddresses >> >> Please feel free to either send it to me or edit the VDSM section adding this info. > > I think that it is wrong to document this in this point in time. The > data is a blob, generated by libvirt, copied by Vdsm, and not expected > to be editted by RHEV-M. > > If you REALLY want to know, it is simply libvirt's domain xml, which is > well-documented in http://libvirt.org/formatdomain.html. > > Dan. > Hi Dan, Since i suspect the next requirement on this would be for RHEVM to parse the "blob" and enable user to specify addresses i think the content of the "blob" should be discussed. Otherwise we'll have to support this "blob" format for the sake of backwards compatibility and not be able to set a reasonable API between the engine and VDSM. Livnat From mkenneth at redhat.com Thu Dec 1 14:09:23 2011 From: mkenneth at redhat.com (Miki Kenneth) Date: Thu, 01 Dec 2011 09:09:23 -0500 (EST) Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <8a9ce5ed-e449-414a-bb11-b3c189a15f2b@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <5a6ced9a-8651-4296-9484-666457191a07@mkenneth.csb> I know that we are talking about only stable addresses, but I would like to broaden the scope a bit (don't kick me guys)... Shouldn't we keep a run-time configuration vs "saved/commit" configuration. By run time I mean: the current memory/cpu/disks/address per VM and by "stable" I mean the "one in the DB". That way, I'm going to be able to change properties in the stable config, which will not affect the running one (and vice versa). Maybe this is totally different feature - but I decide to throw it on the table. Miki ----- Original Message ----- > From: "Eli Mesika" > To: engine-devel at ovirt.org > Sent: Wednesday, November 30, 2011 5:17:42 PM > Subject: Re: [Engine-devel] Stable PCI Addresses design wiki > > Hi again > The following is a design draft for a new feature of oVirt-engine > planned for 3.1 > > The feature allow devices in guest virtual machines to retain the > same PCI address allocations as other devices are added or removed > from the guest configuration. 
This is particularly important for > Windows guests in order to prevent warnings or reactivation when > device addresses change. > > This feature is supported by libvirt and should be implemented by > RHEVM and VDSM. > > When creating a VM, QEMU allocates PCI addresses to the guest > devices; these addresses are reported by libvirt to VDSM, and > VDSM should report them back to RHEVM. RHEVM should persist the PCI > addresses and report them as part of the VM configuration on the next > run. If a change to the VM devices occurred, RHEVM should detect the > change and persist the new PCI addresses. > > Please review. > > Thanks > Eli Mesika > Redhat ISRAEL > > > ----- Original Message ----- > > From: "Eli Mesika" > > To: engine-devel at ovirt.org > > Sent: Wednesday, November 30, 2011 5:06:37 PM > > Subject: [Engine-devel] Stable PCI Addresses design wiki > > > > http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From danken at redhat.com Thu Dec 1 14:11:07 2011 From: danken at redhat.com (Dan Kenigsberg) Date: Thu, 1 Dec 2011 16:11:07 +0200 Subject: [Engine-devel] Stable PCI/DEvice addresses In-Reply-To: <4ED76DDC.3040806@redhat.com> References: <0137308a-f249-493c-8839-cbfe5bc847bd@zmail13.collab.prod.int.phx2.redhat.com> <20111201114937.GH23066@redhat.com> <4ED76DDC.3040806@redhat.com> Message-ID: <20111201140831.GJ23066@redhat.com> On Thu, Dec 01, 2011 at 02:06:52PM +0200, Livnat Peer wrote: > Moving this back to list - > > On 12/01/2011 01:49 PM, Dan Kenigsberg wrote: > > On Thu, Dec 01, 2011 at 06:26:16AM -0500, Eli Mesika wrote: > >> Hi guys > >> > >> I need the xml/json format representing the VM installed devices. > >> Livnat asked me to add it to my Wiki > >> http://www.ovirt.org/wiki/Features/Design/StableDeviceAddresses > >> > >> Please feel free to either send it to me or edit the VDSM section adding this info. > > > > I think that it is wrong to document this at this point in time. The > > data is a blob, generated by libvirt, copied by Vdsm, and not expected > > to be edited by RHEV-M. > > > > If you REALLY want to know, it is simply libvirt's domain xml, which is > > well-documented in http://libvirt.org/formatdomain.html. > > > > Dan. > > > > Hi Dan, > > Since I suspect the next requirement on this would be for RHEVM to parse > the "blob" and enable users to specify addresses, I think the content of > the "blob" should be discussed. > > Otherwise we'll have to support this "blob" format for the sake of > backwards compatibility and not be able to set a reasonable API between > the engine and VDSM. The requirement for 3.1 allowed me to define an opaque parameter, with which Vdsm uses the Engine to store the VM device addresses. We "secretly" opted for storing the libvirt domxml because it already contains addresses for just about anything, and would allow us to do even more evil things in the future (I'm thinking about specifying complete boot order, and other things which libvirt long supports, but Vdsm does not). Another reason was that this was a very simple thing to do. The downside is that this "device-blob" is a bit bloated in size, and if you look deep into it, it has duplicate information on top of Vdsm's "create" verb. 
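Dan's point that the domxml already carries an address for just about every device is easy to see with a few lines of ElementTree. The XML fragment below is hand-written in the style documented at http://libvirt.org/formatdomain.html; it is illustrative, not actual libvirt output:

    import xml.etree.ElementTree as ET

    # Hand-written domain-XML fragment, for illustration only.
    DOMXML = """
    <domain>
      <devices>
        <disk device='disk'>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </disk>
        <interface type='bridge'>
          <mac address='52:54:00:12:34:56'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>
      </devices>
    </domain>
    """

    def device_addresses(domxml):
        """Yield (device tag, address attributes) for each addressed device."""
        for dev in ET.fromstring(domxml).find('devices'):
            addr = dev.find('address')
            if addr is not None:
                yield dev.tag, dict(addr.attrib)

    for tag, attrs in device_addresses(DOMXML):
        print('%s -> %s' % (tag, attrs))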
We should probably not include the elements in the blob they are very verbose and uninteresting to RHEV-M. If stressed, Vdsm could report only the element. It could also convert it to json or yaml, compress and encrypt it - but I do not see the point of these transformations. Dan. From iheim at redhat.com Thu Dec 1 15:27:07 2011 From: iheim at redhat.com (Itamar Heim) Date: Thu, 01 Dec 2011 17:27:07 +0200 Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <5a6ced9a-8651-4296-9484-666457191a07@mkenneth.csb> References: <5a6ced9a-8651-4296-9484-666457191a07@mkenneth.csb> Message-ID: <4ED79CCB.8050805@redhat.com> On 12/01/2011 04:09 PM, Miki Kenneth wrote: > I know that we are talking about only stable addresses, but I would like to broaden the scope a bit > (don't kick me guys)... > Shouldn't we keep a run-time configuration vs "saved/commit" configuration. > By run time I mean: the current memory/cpu/disks/address per VM and by "stable" I mean the "one in the DB". > That way, I'm going to be able to change properties in the stable config, which will not affect the running one > (and vice versa). > > Maybe this is totally different feature - but I decide to throw it on the table. shouldn't that be part of the snapshot improvements design? > > Miki > > ----- Original Message ----- >> From: "Eli Mesika" >> To: engine-devel at ovirt.org >> Sent: Wednesday, November 30, 2011 5:17:42 PM >> Subject: Re: [Engine-devel] Stable PCI Addresses design wiki >> >> Hi again >> The following is a design draft for a new feature of oVirt-engine >> planned for 3.1 >> >> The feature allow devices in guest virtual machines to retain the >> same PCI address allocations as other devices are added or removed >> from the guest configuration. This is particularly important for >> Windows guests in order to prevent warnings or reactivation when >> device addresses change. >> >> This feature is supported by libvirt and should be implemented by >> RHEVM and VDSM. >> >> When creating a VM, QEMU allocates PCI addresses to the guest >> devices, these addresses are being reported by libvirt to VDSM and >> VDSM should report it back to RHEVM. RHEVM should persist the PCI >> addresses and report it as part of the VM configuration on the next >> run. If a change to the VM devices occurred RHEVM should detect the >> change and persist the new PCI addresses. >> >> Please review. 
>> >> Thanks >> Eli Mesika >> Redhat ISRAEL >> >> >> ----- Original Message ----- >>> From: "Eli Mesika" >>> To: engine-devel at ovirt.org >>> Sent: Wednesday, November 30, 2011 5:06:37 PM >>> Subject: [Engine-devel] Stable PCI Addresses design wiki >>> >>> http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From dfediuck at redhat.com Thu Dec 1 15:52:42 2011 From: dfediuck at redhat.com (Doron Fediuck) Date: Thu, 1 Dec 2011 17:52:42 +0200 Subject: [Engine-devel] Stable PCI/DEvice addresses In-Reply-To: <20111201140831.GJ23066@redhat.com> References: <4ED76DDC.3040806@redhat.com> <20111201140831.GJ23066@redhat.com> Message-ID: <201112011752.43123.dfediuck@redhat.com> On Thursday 01 December 2011 16:11:07 Dan Kenigsberg wrote: > On Thu, Dec 01, 2011 at 02:06:52PM +0200, Livnat Peer wrote: > > Moving this back to list - > > > > On 12/01/2011 01:49 PM, Dan Kenigsberg wrote: > > > On Thu, Dec 01, 2011 at 06:26:16AM -0500, Eli Mesika wrote: > > >> Hi guys > > >> > > >> I need the xml/json format representing the VM installed devices. > > >> Livnat asked me to add it to my Wiki > > >> http://www.ovirt.org/wiki/Features/Design/StableDeviceAddresses > > >> > > >> Please feel free to either send it to me or edit the VDSM section adding this info. > > > > > > I think that it is wrong to document this in this point in time. The > > > data is a blob, generated by libvirt, copied by Vdsm, and not expected > > > to be editted by RHEV-M. > > > > > > If you REALLY want to know, it is simply libvirt's domain xml, which is > > > well-documented in http://libvirt.org/formatdomain.html. > > > > > > Dan. > > > > > > > Hi Dan, > > > > Since i suspect the next requirement on this would be for RHEVM to parse > > the "blob" and enable user to specify addresses i think the content of > > the "blob" should be discussed. > > > > Otherwise we'll have to support this "blob" format for the sake of > > backwards compatibility and not be able to set a reasonable API between > > the engine and VDSM. > > The requirement for 3.1 alowed me to define an opaque parameter, with > which Vdsm uses the Engine to store the VM device addresses. > > We "secretly" opted for storing the libvirt domxml because it already > contains addresses for just about anything, and would alow us to do > even more evil things in the future (I'm thinking about specifying > complete boot order, and other things which libvirt long supports, but > Vdsm does not). Another reason was that this was a very simple thing > to do. The down side is that this "device-blob" is a bit bloated in > size, and if you look deep into it, it has duplicate information on top > of Vdsm's "create" verb. > > We should probably not include the elements in the blob they > are very verbose and uninteresting to RHEV-M. > > If stressed, Vdsm could report only the element. It could also > convert it to json or yaml, compress and encrypt it - but I do not see > the point of these transformations. > > Dan. 
> The direction this is taking is for Engine core to be able to parse and edit libvirt's domxml, while vdsm is agnostic (or partially agnostic) to the blob. Is this what we really need? want? -- /d Never say "OOPS!" always say "Ah, Interesting!" From agl at us.ibm.com Thu Dec 1 18:35:44 2011 From: agl at us.ibm.com (Adam Litke) Date: Thu, 1 Dec 2011 12:35:44 -0600 Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <4ED6BE1C.8020007@redhat.com> References: <20111130224041.GM13803@us.ibm.com> <4ED6BE1C.8020007@redhat.com> Message-ID: <20111201183544.GN13803@us.ibm.com> On Thu, Dec 01, 2011 at 12:37:00AM +0100, Geert Jansen wrote: > Hi, > > i think this makes sense, but i'm not a VDSM expert. I did want to > point out one other point, below: > > On 11/30/2011 11:40 PM, Adam Litke wrote: > >Recently we've had some very productive discussions concerning the VDSM API. I > >want to attempt to refocus the discussion around an emerging proposal and see if > >we can agree on a sensible path forward. > > > >Based on the discussion, I have identified the following requirements that > >a new API for vdsm should have: > > > >1.) Single API that can be consumed by ovirt-engine and ISVs > > - We don't want to maintain multiple parallel APIs > > - To develop a vendor ecosystem, we must have a robust external API to > > vdsm > > I have doubts around how useful the VDSM API will be for creating an > ecosystem. If you look at most virtualization ISVs today, they want > to integrate with a multi-node API and not a single-node API. The > only use case that i know where integrating with a single node API > is requested is when you're basically creating a virtualization > management platform like oVirt itself. A single-node (or standalone VDSM deployment) is a very important use case. Many people are coming into the oVirt community from different perspectives. The strength of the ecosystem depends, in part, on the ability of oVirt components to be combined in unique ways with other software to produce solutions. The complete oVirt stack is a great thing, but not the only way to use the technology. > [Since we haven't met before, a brief intro... I have been > responsible at Red Hat for buiding our virtualization ecosystem for > the past year or so.] Hi and thanks for the introduction! I look forward to working with you and the rest of the oVirt community on these issues :) -- Adam Litke IBM Linux Technology Center From agl at us.ibm.com Thu Dec 1 18:40:06 2011 From: agl at us.ibm.com (Adam Litke) Date: Thu, 1 Dec 2011 12:40:06 -0600 Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: References: <20111130224041.GM13803@us.ibm.com> Message-ID: <20111201184006.GO13803@us.ibm.com> On Wed, Nov 30, 2011 at 11:32:55PM -0500, Itamar Heim wrote: > > > > -----Original Message----- > > From: Adam Litke [mailto:agl at us.ibm.com] > > Sent: Thursday, December 01, 2011 0:41 AM > > To: vdsm-devel at lists.fedorahosted.org; engine-devel at ovirt.org > > Cc: Daniel P. Berrange; Chris Wright; Dan Kenigsberg; Itamar Heim > > Subject: Proposed next-generation vdsm API > > > > Recently we've had some very productive discussions concerning the VDSM > API. I > > want to attempt to refocus the discussion around an emerging proposal > and see if > > we can agree on a sensible path forward. > > > > Based on the discussion, I have identified the following requirements > that > > a new API for vdsm should have: > > > > 1.) 
Single API that can be consumed by ovirt-engine and ISVs > > - We don't want to maintain multiple parallel APIs > > - To develop a vendor ecosystem, we must have a robust external API to > > vdsm > > > > 2.) Full vdsm capabilities are exposed without requiring ovirt-engine > > - ovirt components should be modular and independently useful > > - Some deployments might want to manage nodes without ovirt-engine > > > > 3.) Standardized protocol with low overhead > > - Required for widespread adoption > > > > 4.) Support for asynchronous tasks and events > > - Needed by ovirt-engine and other consumers > > > > Based on these requirements, the following proposal has started to > emerge: > > > > Create a REST API that will provide all of the functionality that is > currently > > available via the xmlrpc interface (with the goal of deprecating xmlrpc > once it > > becomes mature enough). To support advanced clustering features that > > ovirt-engine is planning, we'll write an QMF broker that can proxy the > REST API > > onto a message bus. ovirt-engine will interact with vdsm exclusively > over this > > bus but the REST API will be the principle API and the entry point for > ISV apps. > > A REST API provides a light-weight and standard way to access all of the > vdsm > > functionality. > > > > The REST API will handle events by exposing a new 'events' collection at > the api > > root. REST users will use some sort of polling to collect these events. > The > > details of this interface are being worked on. Several ways for > minimizing the > > impact of polling have been discussed. The QMF broker can expose a > > publish/subscribe model for events as appropriate. > > > > Is this model an acceptable way to improve the vdsm API? I would like > to hear > > the opinions of ovirt-engine developers, vdsm developers, and other > > stakeholders. Thanks for providing feedback on this proposal! > > Why things non native to REST and wrap it in QMF, rather than do the > reverse? > Or just to them in parallel, since it sounds like both are going to be > first class citizens? Parallel APIs mean dual maintenance. There will be inherent incompatibilities as each API would naturally have small differences. The reason for beginning with REST because of its low overhead and simplicity. Users of the REST API would not need to concern themselves with QMF at all but if that extra set of features is desired it can be easily added. -- Adam Litke IBM Linux Technology Center From gjansen at redhat.com Thu Dec 1 18:42:19 2011 From: gjansen at redhat.com (Geert Jansen) Date: Thu, 01 Dec 2011 19:42:19 +0100 Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <20111201183544.GN13803@us.ibm.com> References: <20111130224041.GM13803@us.ibm.com> <4ED6BE1C.8020007@redhat.com> <20111201183544.GN13803@us.ibm.com> Message-ID: <4ED7CA8B.4000103@redhat.com> On 12/01/2011 07:35 PM, Adam Litke wrote: > A single-node (or standalone VDSM deployment) is a very important use case. > Many people are coming into the oVirt community from different perspectives. > The strength of the ecosystem depends, in part, on the ability of oVirt > components to be combined in unique ways with other software to produce > solutions. The complete oVirt stack is a great thing, but not the only way to > use the technology. Just out of curiosity: what might those single node cases be, outside implementing an oVirt like solution? I think understanding those better is also critical input to the API design. 
Regards, Geert From smizrahi at redhat.com Thu Dec 1 18:33:14 2011 From: smizrahi at redhat.com (Saggi Mizrahi) Date: Thu, 01 Dec 2011 13:33:14 -0500 Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <20111130224041.GM13803@us.ibm.com> References: <20111130224041.GM13803@us.ibm.com> Message-ID: <4ED7C86A.4090601@redhat.com> On 11/30/2011 05:40 PM, Adam Litke wrote: > Recently we've had some very productive discussions concerning the VDSM API. I > want to attempt to refocus the discussion around an emerging proposal and see if > we can agree on a sensible path forward. > > Based on the discussion, I have identified the following requirements that > a new API for vdsm should have: > > 1.) Single API that can be consumed by ovirt-engine and ISVs > - We don't want to maintain multiple parallel APIs > - To develop a vendor ecosystem, we must have a robust external API to > vdsm > > 2.) Full vdsm capabilities are exposed without requiring ovirt-engine > - ovirt components should be modular and independently useful > - Some deployments might want to manage nodes without ovirt-engine > > 3.) Standardized protocol with low overhead > - Required for widespread adoption > > 4.) Support for asynchronous tasks and events > - Needed by ovirt-engine and other consumers > > Based on these requirements, the following proposal has started to emerge: > > Create a REST API that will provide all of the functionality that is currently > available via the xmlrpc interface I just want to stress that it WILL deprecate and significantly change some aspects of the API most obvious are the task aspects of the API. Even though the commands might be the same the current async\sync task semantics might change to better accommodate the QMF wrapper and other users. I don't think it's needed to go into details now but I am stressing that there WILL be API calls the will not migrate or significantly change in the REST API. > (with the goal of deprecating xmlrpc once it > becomes mature enough). To support advanced clustering features that > ovirt-engine is planning, we'll write an QMF broker that can proxy the REST API > onto a message bus. ovirt-engine will interact with vdsm exclusively over this > bus but the REST API will be the principle API and the entry point for ISV apps. > A REST API provides a light-weight and standard way to access all of the vdsm > functionality. > > The REST API will handle events by exposing a new 'events' collection at the api > root. REST users will use some sort of polling to collect these events. The > details of this interface are being worked on. Several ways for minimizing the > impact of polling have been discussed. The QMF broker can expose a > publish/subscribe model for events as appropriate. > > Is this model an acceptable way to improve the vdsm API? I would like to hear > the opinions of ovirt-engine developers, vdsm developers, and other > stakeholders. Thanks for providing feedback on this proposal! 
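Saggi's caveat above concerns exactly the task and event semantics of the proposed interface. For a sense of what the proposed 'events' collection could look like to a REST consumer, here is a polling sketch. The /api/events path, its query parameter, and the JSON shape are hypothetical, since the thread itself says these details are still being worked out:

    import json
    import time
    import urllib2  # Python 2, current when this thread was written

    BASE = 'http://vdsm-host:8080/api'

    def poll_events(last_id=0, interval=5):
        """Naive polling loop against a hypothetical events collection."""
        while True:
            resp = urllib2.urlopen('%s/events?from=%d' % (BASE, last_id))
            for event in json.load(resp):
                last_id = max(last_id, event['id'])
                print('event %s: %s' % (event['id'], event.get('type')))
            # Plain fixed-interval polling; long-polling or ETag-style
            # validation were among the ideas floated to reduce its cost.
            time.sleep(interval)

    if __name__ == '__main__':
        poll_events()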
> From acathrow at redhat.com Thu Dec 1 19:09:42 2011 From: acathrow at redhat.com (Andrew Cathrow) Date: Thu, 01 Dec 2011 14:09:42 -0500 (EST) Subject: [Engine-devel] Stable PCI/DEvice addresses In-Reply-To: <201112011752.43123.dfediuck@redhat.com> Message-ID: ----- Original Message ----- > From: "Doron Fediuck" > To: "Dan Kenigsberg" > Cc: engine-devel at ovirt.org, "Igor Lvovsky" > Sent: Thursday, December 1, 2011 10:52:42 AM > Subject: Re: [Engine-devel] Stable PCI/DEvice addresses > > On Thursday 01 December 2011 16:11:07 Dan Kenigsberg wrote: > > On Thu, Dec 01, 2011 at 02:06:52PM +0200, Livnat Peer wrote: > > > Moving this back to list - > > > > > > On 12/01/2011 01:49 PM, Dan Kenigsberg wrote: > > > > On Thu, Dec 01, 2011 at 06:26:16AM -0500, Eli Mesika wrote: > > > >> Hi guys > > > >> > > > >> I need the xml/json format representing the VM installed > > > >> devices. > > > >> Livnat asked me to add it to my Wiki > > > >> http://www.ovirt.org/wiki/Features/Design/StableDeviceAddresses > > > >> > > > >> Please feel free to either send it to me or edit the VDSM > > > >> section adding this info. > > > > > > > > I think that it is wrong to document this in this point in > > > > time. The > > > > data is a blob, generated by libvirt, copied by Vdsm, and not > > > > expected > > > > to be editted by RHEV-M. > > > > > > > > If you REALLY want to know, it is simply libvirt's domain xml, > > > > which is > > > > well-documented in http://libvirt.org/formatdomain.html. > > > > > > > > Dan. > > > > > > > > > > Hi Dan, > > > > > > Since i suspect the next requirement on this would be for RHEVM > > > to parse > > > the "blob" and enable user to specify addresses i think the > > > content of > > > the "blob" should be discussed. > > > > > > Otherwise we'll have to support this "blob" format for the sake > > > of > > > backwards compatibility and not be able to set a reasonable API > > > between > > > the engine and VDSM. > > > > The requirement for 3.1 alowed me to define an opaque parameter, > > with > > which Vdsm uses the Engine to store the VM device addresses. > > > > We "secretly" opted for storing the libvirt domxml because it > > already > > contains addresses for just about anything, and would alow us to do > > even more evil things in the future (I'm thinking about specifying > > complete boot order, and other things which libvirt long supports, > > but > > Vdsm does not). Another reason was that this was a very simple > > thing > > to do. ?The down side is that this "device-blob" is a bit bloated > > in > > size, and if you look deep into it, it has duplicate information on > > top > > of Vdsm's "create" verb. > > > > We should probably not include the elements in the blob > > they > > are very verbose and uninteresting to RHEV-M. > > > > If stressed, Vdsm could report only the element. It could > > also > > convert it to json or yaml, compress and encrypt it - but I do not > > see > > the point of these transformations. > > > > Dan. > > > The direction this is taking is for Engine core to be able to parse > and > edit libvirt's domxml, while vdsm is agnostic (or partially agnostic) > to > the blob. > > Is this what we really need? want? When we discuss the blob are we talking about just the address part - eg .
or something more? > -- > > /d > > Never say "OOPS!" always say "Ah, Interesting!" > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From agl at us.ibm.com Thu Dec 1 19:10:53 2011 From: agl at us.ibm.com (Adam Litke) Date: Thu, 1 Dec 2011 13:10:53 -0600 Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: References: Message-ID: <20111201191053.GP13803@us.ibm.com> On Thu, Dec 01, 2011 at 02:02:27AM -0500, Ayal Baron wrote: > > > ----- Original Message ----- > > > > > > > -----Original Message----- > > > From: Adam Litke [mailto:agl at us.ibm.com] > > > Sent: Thursday, December 01, 2011 0:41 AM > > > To: vdsm-devel at lists.fedorahosted.org; engine-devel at ovirt.org > > > Cc: Daniel P. Berrange; Chris Wright; Dan Kenigsberg; Itamar Heim > > > Subject: Proposed next-generation vdsm API > > > > > > Recently we've had some very productive discussions concerning the > > > VDSM > > API. I > > > want to attempt to refocus the discussion around an emerging > > > proposal > > and see if > > > we can agree on a sensible path forward. > > > > > > Based on the discussion, I have identified the following > > > requirements > > that > > > a new API for vdsm should have: > > > > > > 1.) Single API that can be consumed by ovirt-engine and ISVs > > > - We don't want to maintain multiple parallel APIs > > > - To develop a vendor ecosystem, we must have a robust external > > > API to > > > vdsm > > > > > > 2.) Full vdsm capabilities are exposed without requiring > > > ovirt-engine > > > - ovirt components should be modular and independently useful > > > - Some deployments might want to manage nodes without ovirt-engine > > > > > > 3.) Standardized protocol with low overhead > > > - Required for widespread adoption > > > > > > 4.) Support for asynchronous tasks and events > > > - Needed by ovirt-engine and other consumers > > > > > > Based on these requirements, the following proposal has started to > > emerge: > > > > > > Create a REST API that will provide all of the functionality that > > > is > > currently > > > available via the xmlrpc interface (with the goal of deprecating > > > xmlrpc > > once it > > > becomes mature enough). To support advanced clustering features > > > that > > > ovirt-engine is planning, we'll write an QMF broker that can proxy > > > the > > REST API > > > onto a message bus. ovirt-engine will interact with vdsm > > > exclusively > > over this > > > bus but the REST API will be the principle API and the entry point > > > for > > ISV apps. > > > A REST API provides a light-weight and standard way to access all > > > of the > > vdsm > > > functionality. > > > > > > The REST API will handle events by exposing a new 'events' > > > collection at > > the api > > > root. REST users will use some sort of polling to collect these > > > events. > > The > > > details of this interface are being worked on. Several ways for > > minimizing the > > > impact of polling have been discussed. The QMF broker can expose a > > > publish/subscribe model for events as appropriate. > > > > > > Is this model an acceptable way to improve the vdsm API? I would > > > like > > to hear > > > the opinions of ovirt-engine developers, vdsm developers, and other > > > stakeholders. Thanks for providing feedback on this proposal! > > > > Why things non native to REST and wrap it in QMF, rather than do the > > reverse? 
> > Or just to them in parallel, since it sounds like both are going to > > be > > first class citizens? > > This was more my understanding from our discussion on IRC yesterday. I'm afraid I did not follow that discussion to your conclusions... > REST API - everything that is relevant for single node management > QMF - same API as above + multi-node relevant API calls. I don't see any > reason for doing weird things over REST to support the latter. A QMF broker runs on the vdsm host and talks to the REST API. It connects to a bus and exposes an API to ovirt-engine on this bus using a vdsm-base schema. ovirt-engine wants additional clustering functionality. This API should be implemented completely between the QMF broker and ovirt-engine using a separate vdsm-cluster schema. > In fact, I don't even see any real reason for going through the REST API when > using QMF. Because we want to avoid the proliferation of APIs. I would prefer a mostly vertical chain of API components to a vdsm with several independent APIs (which are sure to diverge or be neglected by individual developers/patches). > If you take a look at today's API you will see that there is nothing there > that limits it to XML-RPC and we could easily expose all the calls using REST > or anything else. In python, exposing a new verb in the various APIs can be > automatic so this would require very little maintenance. Any multi-node or > transport specific calls can be decorated as such and would be automatically > ignored/picked up by the relevant API layer. This way, we could also easily > enable using different bus protocols assuming a customer already has a > deployment as was suggested yesterday. I don't think this will be as automatic as you suggest in practice. It sounds like it will increase code complexity, obfuscation, and maintenance burden. -- Adam Litke IBM Linux Technology Center From smizrahi at redhat.com Thu Dec 1 19:21:56 2011 From: smizrahi at redhat.com (Saggi Mizrahi) Date: Thu, 01 Dec 2011 14:21:56 -0500 Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <4ED7CA8B.4000103@redhat.com> References: <20111130224041.GM13803@us.ibm.com> <4ED6BE1C.8020007@redhat.com> <20111201183544.GN13803@us.ibm.com> <4ED7CA8B.4000103@redhat.com> Message-ID: <4ED7D3D4.1000206@redhat.com> On 12/01/2011 01:42 PM, Geert Jansen wrote: > > On 12/01/2011 07:35 PM, Adam Litke wrote: > >> A single-node (or standalone VDSM deployment) is a very important use >> case. >> Many people are coming into the oVirt community from different >> perspectives. >> The strength of the ecosystem depends, in part, on the ability of oVirt >> components to be combined in unique ways with other software to produce >> solutions. The complete oVirt stack is a great thing, but not the >> only way to >> use the technology. > > Just out of curiosity: what might those single node cases be, outside > implementing an oVirt like solution? I think understanding those > better is also critical input to the API design. > I would personally really like to see a server distro that installs virtual appliances instead of packages. With that I could have a server with a DHCP\httpd\mail server\webdav\IPA appliances running on the host completely separate using qemu or even lxc. Having a BL and UI around that would be very convenient for small setups and home servers. VDSM would be handy for storage, network and policy configuration as well as tracking the appliances stats. 
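Returning to Adam's description of the QMF broker earlier in this message: it amounts to a thin translation layer between bus messages and the local REST API. A transport-agnostic sketch follows; the message shape and the paths are invented for illustration, and the actual QMF schema plumbing is omitted:

    import json
    import urllib2

    REST_BASE = 'http://localhost:8080/api'  # the single-node REST API

    def handle_bus_message(message):
        """Proxy one bus request, e.g.
        {'method': 'GET', 'path': '/vms/123', 'body': None},
        onto the local REST API and return a reply for the bus."""
        body = json.dumps(message['body']) if message.get('body') else None
        req = urllib2.Request(REST_BASE + message['path'], data=body,
                              headers={'Content-Type': 'application/json'})
        req.get_method = lambda: message['method']  # allow PUT/DELETE too
        resp = urllib2.urlopen(req)
        return {'status': resp.getcode(), 'body': json.load(resp)}

The multi-node calls ovirt-engine needs (the vdsm-cluster schema) would then live entirely between the broker and the engine, leaving the REST API itself single-node.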
> Geert > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ryanh at us.ibm.com Thu Dec 1 20:40:26 2011 From: ryanh at us.ibm.com (Ryan Harper) Date: Thu, 1 Dec 2011 14:40:26 -0600 Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <4ED6BE1C.8020007@redhat.com> References: <20111130224041.GM13803@us.ibm.com> <4ED6BE1C.8020007@redhat.com> Message-ID: <20111201204026.GP17631@us.ibm.com> * Geert Jansen [2011-11-30 17:38]: > Hi, > > I think this makes sense, but i'm not a VDSM expert. I did want to point > out one other point, below: > > On 11/30/2011 11:40 PM, Adam Litke wrote: > > Recently we've had some very productive discussions concerning the VDSM API. I > > want to attempt to refocus the discussion around an emerging proposal and see if > > we can agree on a sensible path forward. > > > > Based on the discussion, I have identified the following requirements that > > a new API for vdsm should have: > > > > 1.) Single API that can be consumed by ovirt-engine and ISVs > > - We don't want to maintain multiple parallel APIs > > - To develop a vendor ecosystem, we must have a robust external API to > > vdsm > > I have doubts around how useful the VDSM API will be for creating an > ecosystem. If you look at most virtualization ISVs today, they want to > integrate with a multi-node API and not a single-node API. The only use > case that I know where integrating with a single node API is requested > is when you're basically creating a virtualization management platform > like oVirt itself. Without a first-class node-level API, we're precluding the very case you're aware of. If we're building a community and ecosystem around KVM management then we need to be open to someone building that management platform and doing it in a way that keeps things compatible. There are existing products (IBM has a number in this space) which utilize libvirt as a node-level API and won't be able to (easily) integrate all of oVirt just to obtain access to the nodes. Further, if the only way to consume VDSM is via the multi-node solution, then the price of entry is a much larger, more complex stack with more dependencies. This raises the barrier of entry and participation. IMHO, not ideal when we're attempting to grow a community. Alternatively, if we enable a common node API, not only do we support single-node deployments (think virtual appliances, or hardware appliances; IBM has a few of these) but we also allow competing (but compatible) multi-node/cluster solutions; it even lets single-node solutions be managed by different KVM-based management platforms, because they're all working with the same node-level API. Having a first-class node API is a critical starting point for building our larger KVM management community since it allows for easier integration of existing products. I cannot stress that point enough; if we're committed to being an open community then we need to design our interfaces to encourage participation. Lowering the cost of participation and integration is a great way to enable an ecosystem. 
-- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From danken at redhat.com Thu Dec 1 20:41:01 2011 From: danken at redhat.com (Dan Kenigsberg) Date: Thu, 1 Dec 2011 22:41:01 +0200 Subject: [Engine-devel] Stable PCI/DEvice addresses In-Reply-To: References: <201112011752.43123.dfediuck@redhat.com> Message-ID: <20111201204100.GC23878@redhat.com> On Thu, Dec 01, 2011 at 02:09:42PM -0500, Andrew Cathrow wrote: > > ----- Original Message ----- > > From: "Doron Fediuck" > > To: "Dan Kenigsberg" > > Cc: engine-devel at ovirt.org, "Igor Lvovsky" > > Sent: Thursday, December 1, 2011 10:52:42 AM > > Subject: Re: [Engine-devel] Stable PCI/DEvice addresses > > > > On Thursday 01 December 2011 16:11:07 Dan Kenigsberg wrote: > > > On Thu, Dec 01, 2011 at 02:06:52PM +0200, Livnat Peer wrote: > > > > Moving this back to list - > > > > > > > > On 12/01/2011 01:49 PM, Dan Kenigsberg wrote: > > > > > On Thu, Dec 01, 2011 at 06:26:16AM -0500, Eli Mesika wrote: > > > > >> Hi guys > > > > >> > > > > >> I need the xml/json format representing the VM installed > > > > >> devices. > > > > >> Livnat asked me to add it to my Wiki > > > > >> http://www.ovirt.org/wiki/Features/Design/StableDeviceAddresses > > > > >> > > > > >> Please feel free to either send it to me or edit the VDSM > > > > >> section adding this info. > > > > > > > > > > I think that it is wrong to document this in this point in > > > > > time. The > > > > > data is a blob, generated by libvirt, copied by Vdsm, and not > > > > > expected > > > > > to be editted by RHEV-M. > > > > > > > > > > If you REALLY want to know, it is simply libvirt's domain xml, > > > > > which is > > > > > well-documented in http://libvirt.org/formatdomain.html. > > > > > > > > > > Dan. > > > > > > > > > > > > > Hi Dan, > > > > > > > > Since i suspect the next requirement on this would be for RHEVM > > > > to parse > > > > the "blob" and enable user to specify addresses i think the > > > > content of > > > > the "blob" should be discussed. > > > > > > > > Otherwise we'll have to support this "blob" format for the sake > > > > of > > > > backwards compatibility and not be able to set a reasonable API > > > > between > > > > the engine and VDSM. > > > > > > The requirement for 3.1 alowed me to define an opaque parameter, > > > with > > > which Vdsm uses the Engine to store the VM device addresses. > > > > > > We "secretly" opted for storing the libvirt domxml because it > > > already > > > contains addresses for just about anything, and would alow us to do > > > even more evil things in the future (I'm thinking about specifying > > > complete boot order, and other things which libvirt long supports, > > > but > > > Vdsm does not). Another reason was that this was a very simple > > > thing > > > to do. ?The down side is that this "device-blob" is a bit bloated > > > in > > > size, and if you look deep into it, it has duplicate information on > > > top > > > of Vdsm's "create" verb. > > > > > > We should probably not include the elements in the blob > > > they > > > are very verbose and uninteresting to RHEV-M. > > > > > > If stressed, Vdsm could report only the element. It could > > > also > > > convert it to json or yaml, compress and encrypt it - but I do not > > > see > > > the point of these transformations. > > > > > > Dan. > > > > > The direction this is taking is for Engine core to be able to parse > > and > > edit libvirt's domxml, while vdsm is agnostic (or partially agnostic) > > to > > the blob. 
> > > > Is this what we really need? want? > > When we discuss the blob are we talking about just the address part - e.g. an <address type='pci' .../> element -
> > or something more? This is exactly the question. Until today, the blob was just a blob, opaque to the engine, and there was no discussion. If the engine wants to be able to control and edit the device addresses, we must agree on how to represent the address for each device type, and how to name each device. The simplest solution is to tap into the good work of the libvirt chaps. If we ever want to, it would allow the Engine to do the crazier stuff alluded to by Doron - but it does not force us to go down this path. In my original design I imagined the blob to be a complete libvirt domxml. However, I understand the benefits of a more specific "blob". For the feature at hand, it is safe to limit this down to the <devices> element, with its more verbose sub-elements stripped, and possibly converted to your data representation language du jour. Dan. From anthony at codemonkey.ws Fri Dec 2 22:54:12 2011 From: anthony at codemonkey.ws (Anthony Liguori) Date: Fri, 02 Dec 2011 16:54:12 -0600 Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <4ED7CA8B.4000103@redhat.com> References: <20111130224041.GM13803@us.ibm.com> <4ED6BE1C.8020007@redhat.com> <20111201183544.GN13803@us.ibm.com> <4ED7CA8B.4000103@redhat.com> Message-ID: <4ED95714.5090707@codemonkey.ws> On 12/01/2011 12:42 PM, Geert Jansen wrote: > > On 12/01/2011 07:35 PM, Adam Litke wrote: > >> A single-node (or standalone VDSM deployment) is a very important use case. >> Many people are coming into the oVirt community from different perspectives. >> The strength of the ecosystem depends, in part, on the ability of oVirt >> components to be combined in unique ways with other software to produce >> solutions. The complete oVirt stack is a great thing, but not the only way to >> use the technology. > > Just out of curiosity: what might those single node cases be, outside > implementing an oVirt like solution? ovirt-engine currently doesn't handle every possible scenario where you want to manage more than one physical machine. And while I'm sure total world domination is not that far away, there's going to be a time period where there continue to be use cases it is not suited for. To name a few: 1) Fixed environments where a handful of systems are needed, with some level of scripting used to coordinate things. Using a full three-tier management server would be too much for this environment. This isn't necessarily a small deployment; this may be a huge number of small deployments (think half a dozen blades in the back of a retail store, times 5,000). 2) Massive-scale clusters. I'm talking Top 500 scale. There are people out there doing this with KVM today. They use tools like xCAT and write directly against libvirt. But libvirt's lack of policy makes this harder than it should be. 3) Environments where virtualization is not the primary workload. There are a lot of cloud-like environments that are built with physical hardware only. In these cases, there is an existing infrastructure that does a lot of what oVirt does. It's easier for these environments to treat VMs as physical machines. As much as it's important to focus on the top-down view of oVirt as a cohesive stack, it's also important to look at each layer and make sure that each layer stands on its own. It's a 90/10 thing. You need a strong node-level interface to cover that last 10% of use-cases unless you're willing to spend 90% of the effort trying to also accommodate them. Regards, Anthony Liguori > I think understanding those better > is also critical input to the API design.
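For case 1, "some level of scripting" really can be this small, provided the node API is self-contained - a hypothetical sketch, assuming the proposed REST API were up on each blade and exposed something like a /api/vms collection returning JSON (the host names, port, paths and payload fields are all illustrative, not a settled vdsm interface):

    import json
    import urllib.request

    # Half a dozen blades in the back of one store; no central manager.
    NODES = ['blade-%d.store.example.com' % i for i in range(1, 7)]

    for node in NODES:
        # Hypothetical endpoint on the proposed vdsm REST API.
        url = 'http://%s:8080/api/vms' % node
        with urllib.request.urlopen(url) as resp:
            vms = json.load(resp)
        # Restart anything that has fallen over, per local policy.
        for vm in vms:
            if vm.get('status') == 'down':
                resp = urllib.request.urlopen(url + '/' + vm['id'] + '/start', data=b'')
                resp.close()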
> > Regards, > Geert > _______________________________________________ > vdsm-devel mailing list > vdsm-devel at lists.fedorahosted.org > https://fedorahosted.org/mailman/listinfo/vdsm-devel From lpeer at redhat.com Sat Dec 3 09:26:27 2011 From: lpeer at redhat.com (Livnat Peer) Date: Sat, 03 Dec 2011 11:26:27 +0200 Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <4ED79CCB.8050805@redhat.com> References: <5a6ced9a-8651-4296-9484-666457191a07@mkenneth.csb> <4ED79CCB.8050805@redhat.com> Message-ID: <4ED9EB43.9040204@redhat.com> On 12/01/2011 05:27 PM, Itamar Heim wrote: > On 12/01/2011 04:09 PM, Miki Kenneth wrote: >> I know that we are talking about only stable addresses, but I would >> like to broaden the scope a bit >> (don't kick me guys)... >> Shouldn't we keep a run-time configuration vs "saved/commit" >> configuration. >> By run time I mean: the current memory/cpu/disks/address per VM and by >> "stable" I mean the "one in the DB". >> That way, I'm going to be able to change properties in the stable >> config, which will not affect the running one >> (and vice versa). >> >> Maybe this is totally different feature - but I decide to throw it on >> the table. > It is a different feature ;) > shouldn't that be part of the snapshot improvements design? > What Miki is looking for, miki please correct me if i am wrong, is the ability to change VM configuration while the VM is running and expect the changes to apply starting from the next VM run. For the above feature to be 'complete' Miki wants to be able to view what is the VM current configuration (the one used when the VM started) and what is the configuration for the next run. After the VM is stopped you have only one configuration (the one for the next run). I guess i can see why you associated it with snapshots, as we can look at it as a temporary VM configuration snapshot, but i think it is another functionality (especially in UI/client perspective). Livnat >> >> Miki >> >> ----- Original Message ----- >>> From: "Eli Mesika" >>> To: engine-devel at ovirt.org >>> Sent: Wednesday, November 30, 2011 5:17:42 PM >>> Subject: Re: [Engine-devel] Stable PCI Addresses design wiki >>> >>> Hi again >>> The following is a design draft for a new feature of oVirt-engine >>> planned for 3.1 >>> >>> The feature allow devices in guest virtual machines to retain the >>> same PCI address allocations as other devices are added or removed >>> from the guest configuration. This is particularly important for >>> Windows guests in order to prevent warnings or reactivation when >>> device addresses change. >>> >>> This feature is supported by libvirt and should be implemented by >>> RHEVM and VDSM. >>> >>> When creating a VM, QEMU allocates PCI addresses to the guest >>> devices, these addresses are being reported by libvirt to VDSM and >>> VDSM should report it back to RHEVM. RHEVM should persist the PCI >>> addresses and report it as part of the VM configuration on the next >>> run. If a change to the VM devices occurred RHEVM should detect the >>> change and persist the new PCI addresses. >>> >>> Please review. 
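To make the reporting step above concrete - a rough sketch of what the vdsm side could do after the VM is created, assuming libvirt-python and ElementTree: fetch the domxml, pull each device's <address> element, and hand the result back in a structured form (the guest name and the dict layout are illustrative, not an agreed format):

    import json
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myvm')  # hypothetical guest name

    # The addresses QEMU allocated are visible in the live domain XML.
    root = ET.fromstring(dom.XMLDesc(0))
    devices = []
    for dev in root.find('devices'):   # <devices> is always present
        addr = dev.find('address')
        if addr is None:
            continue
        alias = dev.find('alias')
        devices.append({
            'device': dev.tag,
            'alias': alias.get('name') if alias is not None else None,
            'address': dict(addr.attrib),
        })

    # What vdsm would report back for the engine to persist.
    print(json.dumps(devices, indent=2))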
>>> >>> Thanks >>> Eli Mesika >>> Redhat ISRAEL >>> >>> >>> ----- Original Message ----- >>>> From: "Eli Mesika" >>>> To: engine-devel at ovirt.org >>>> Sent: Wednesday, November 30, 2011 5:06:37 PM >>>> Subject: [Engine-devel] Stable PCI Addresses design wiki >>>> >>>> http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From lpeer at redhat.com Sat Dec 3 09:47:35 2011 From: lpeer at redhat.com (Livnat Peer) Date: Sat, 03 Dec 2011 11:47:35 +0200 Subject: [Engine-devel] Stable PCI/DEvice addresses In-Reply-To: <20111201204100.GC23878@redhat.com> References: <201112011752.43123.dfediuck@redhat.com> <20111201204100.GC23878@redhat.com> Message-ID: <4ED9F037.5050807@redhat.com> On 12/01/2011 10:41 PM, Dan Kenigsberg wrote: > On Thu, Dec 01, 2011 at 02:09:42PM -0500, Andrew Cathrow wrote: >> >> ----- Original Message ----- >>> From: "Doron Fediuck" >>> To: "Dan Kenigsberg" >>> Cc: engine-devel at ovirt.org, "Igor Lvovsky" >>> Sent: Thursday, December 1, 2011 10:52:42 AM >>> Subject: Re: [Engine-devel] Stable PCI/DEvice addresses >>> >>> On Thursday 01 December 2011 16:11:07 Dan Kenigsberg wrote: >>>> On Thu, Dec 01, 2011 at 02:06:52PM +0200, Livnat Peer wrote: >>>>> Moving this back to list - >>>>> >>>>> On 12/01/2011 01:49 PM, Dan Kenigsberg wrote: >>>>>> On Thu, Dec 01, 2011 at 06:26:16AM -0500, Eli Mesika wrote: >>>>>>> Hi guys >>>>>>> >>>>>>> I need the xml/json format representing the VM installed >>>>>>> devices. >>>>>>> Livnat asked me to add it to my Wiki >>>>>>> http://www.ovirt.org/wiki/Features/Design/StableDeviceAddresses >>>>>>> >>>>>>> Please feel free to either send it to me or edit the VDSM >>>>>>> section adding this info. >>>>>> >>>>>> I think that it is wrong to document this in this point in >>>>>> time. The >>>>>> data is a blob, generated by libvirt, copied by Vdsm, and not >>>>>> expected >>>>>> to be editted by RHEV-M. >>>>>> >>>>>> If you REALLY want to know, it is simply libvirt's domain xml, >>>>>> which is >>>>>> well-documented in http://libvirt.org/formatdomain.html. >>>>>> >>>>>> Dan. >>>>>> >>>>> >>>>> Hi Dan, >>>>> >>>>> Since i suspect the next requirement on this would be for RHEVM >>>>> to parse >>>>> the "blob" and enable user to specify addresses i think the >>>>> content of >>>>> the "blob" should be discussed. >>>>> >>>>> Otherwise we'll have to support this "blob" format for the sake >>>>> of >>>>> backwards compatibility and not be able to set a reasonable API >>>>> between >>>>> the engine and VDSM. >>>> >>>> The requirement for 3.1 alowed me to define an opaque parameter, >>>> with >>>> which Vdsm uses the Engine to store the VM device addresses. 
>>>> >>>> We "secretly" opted for storing the libvirt domxml because it >>>> already >>>> contains addresses for just about anything, and would alow us to do >>>> even more evil things in the future (I'm thinking about specifying >>>> complete boot order, and other things which libvirt long supports, >>>> but >>>> Vdsm does not). Another reason was that this was a very simple >>>> thing >>>> to do. The down side is that this "device-blob" is a bit bloated >>>> in >>>> size, and if you look deep into it, it has duplicate information on >>>> top >>>> of Vdsm's "create" verb. >>>> >>>> We should probably not include the elements in the blob >>>> they >>>> are very verbose and uninteresting to RHEV-M. >>>> >>>> If stressed, Vdsm could report only the element. It could >>>> also >>>> convert it to json or yaml, compress and encrypt it - but I do not >>>> see >>>> the point of these transformations. >>>> >>>> Dan. >>>> >>> The direction this is taking is for Engine core to be able to parse >>> and >>> edit libvirt's domxml, while vdsm is agnostic (or partially agnostic) >>> to >>> the blob. >>> >>> Is this what we really need? want? >> >> When we discuss the blob are we talking about just the address part - eg . >>
>> >> or something more? > > This is exactly the question. Until today, the blob was just a blob, opaque to > the engine, and there was no discussion. > > If the engine wants to be able to control and edit the device addresses, we must > agree on how to represent the address for each device type, and how to name each > device. The simplest solution is to tap into the good work of the libvirt chaps. > If we ever want to, it would allow the Engine to do the crazier stuff alluded to by > Doron - but it does not force us to go down this path. > > In my original design I imagined the blob to be a complete libvirt domxml. However, > I understand the benefits of a more specific "blob". For the feature at hand, it > is safe to limit this down to the <devices> element, with its more verbose sub-elements > stripped, and possibly converted to your data representation language du jour. > > Dan. > Hi Dan, I understand why pass-through of the domxml is appealing: it reduces work for the current feature, and it also lets us easily support all types of device addresses that libvirt supports. What I like less is the format; I'd rather not use verbose XML for this but use JSON, as we have in several other places in the engine. The next cycle on this feature is to expose the ability for the user to edit addresses, and for that we'll need to manipulate the domxml in the engine - less fun. So although it is more work for us now, I'd rather get the device section in a JSON format. BTW - is there a chance libvirt will support a JSON format for describing the VM in the future? Then we could use the format they suggest for this, and it could be a JSON pass-through in the future ;) Livnat > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ykaul at redhat.com Sat Dec 3 18:38:45 2011 From: ykaul at redhat.com (Yaniv Kaul) Date: Sat, 03 Dec 2011 20:38:45 +0200 Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <4ED9EB43.9040204@redhat.com> References: <5a6ced9a-8651-4296-9484-666457191a07@mkenneth.csb> <4ED79CCB.8050805@redhat.com> <4ED9EB43.9040204@redhat.com> Message-ID: <4EDA6CB5.2000905@redhat.com> On 12/03/2011 11:26 AM, Livnat Peer wrote: > On 12/01/2011 05:27 PM, Itamar Heim wrote: >> On 12/01/2011 04:09 PM, Miki Kenneth wrote: >>> I know that we are talking about only stable addresses, but I would >>> like to broaden the scope a bit >>> (don't kick me guys)... >>> Shouldn't we keep a run-time configuration vs "saved/commit" >>> configuration. >>> By run time I mean: the current memory/cpu/disks/address per VM and by >>> "stable" I mean the "one in the DB". >>> That way, I'm going to be able to change properties in the stable >>> config, which will not affect the running one >>> (and vice versa). >>> >>> Maybe this is totally different feature - but I decide to throw it on >>> the table. > It is a different feature ;) > >> shouldn't that be part of the snapshot improvements design? >> > What Miki is looking for, miki please correct me if i am wrong, is the > ability to change VM configuration while the VM is running and expect > the changes to apply starting from the next VM run. In addition, turn a reboot of a VM into shutdown + run with the new parameters. That way an admin can tell a user 'I increased your VM's memory, reboot at your own preferred time and you'll have the extra memory'. (of course, hot-plugging memory is cooler). Y.
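A minimal sketch of the bookkeeping this implies on the engine side - one persisted record with a current view and a next-run view, where (re)starting the VM promotes the pending configuration (all names here are illustrative, not an existing engine structure):

    class VmConfig(object):
        def __init__(self, current):
            self.current = current      # config the running VM was started with
            self.next_run = None        # pending changes, applied on next start

        def update(self, **changes):
            # Edits while the VM is up only touch the next-run view.
            pending = dict(self.next_run or self.current)
            pending.update(changes)
            self.next_run = pending

        def on_start(self):
            # A reboot/start promotes the pending config to current.
            if self.next_run is not None:
                self.current = self.next_run
                self.next_run = None
            return self.current

    cfg = VmConfig({'memory_mb': 1024, 'vcpus': 2})
    cfg.update(memory_mb=2048)                # admin bumps memory while VM runs
    assert cfg.current['memory_mb'] == 1024   # running VM is unchanged
    cfg.on_start()                            # user reboots at their own time
    assert cfg.current['memory_mb'] == 2048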
> For the above feature to be 'complete' Miki wants to be able to view > what is the VM current configuration (the one used when the VM started) > and what is the configuration for the next run. > > After the VM is stopped you have only one configuration (the one for the > next run). > > I guess i can see why you associated it with snapshots, as we can look > at it as a temporary VM configuration snapshot, but i think it is > another functionality (especially in UI/client perspective). > > Livnat > > >>> Miki >>> >>> ----- Original Message ----- >>>> From: "Eli Mesika" >>>> To: engine-devel at ovirt.org >>>> Sent: Wednesday, November 30, 2011 5:17:42 PM >>>> Subject: Re: [Engine-devel] Stable PCI Addresses design wiki >>>> >>>> Hi again >>>> The following is a design draft for a new feature of oVirt-engine >>>> planned for 3.1 >>>> >>>> The feature allow devices in guest virtual machines to retain the >>>> same PCI address allocations as other devices are added or removed >>>> from the guest configuration. This is particularly important for >>>> Windows guests in order to prevent warnings or reactivation when >>>> device addresses change. >>>> >>>> This feature is supported by libvirt and should be implemented by >>>> RHEVM and VDSM. >>>> >>>> When creating a VM, QEMU allocates PCI addresses to the guest >>>> devices, these addresses are being reported by libvirt to VDSM and >>>> VDSM should report it back to RHEVM. RHEVM should persist the PCI >>>> addresses and report it as part of the VM configuration on the next >>>> run. If a change to the VM devices occurred RHEVM should detect the >>>> change and persist the new PCI addresses. >>>> >>>> Please review. >>>> >>>> Thanks >>>> Eli Mesika >>>> Redhat ISRAEL >>>> >>>> >>>> ----- Original Message ----- >>>>> From: "Eli Mesika" >>>>> To: engine-devel at ovirt.org >>>>> Sent: Wednesday, November 30, 2011 5:06:37 PM >>>>> Subject: [Engine-devel] Stable PCI Addresses design wiki >>>>> >>>>> http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From abaron at redhat.com Sat Dec 3 22:22:29 2011 From: abaron at redhat.com (Ayal Baron) Date: Sat, 03 Dec 2011 17:22:29 -0500 (EST) Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <20111201204026.GP17631@us.ibm.com> Message-ID: ----- Original Message ----- > * Geert Jansen [2011-11-30 17:38]: > > Hi, > > > > i think this makes sense, but i'm not a VDSM expert. I did want to > > point > > out one other point, below: > > > > On 11/30/2011 11:40 PM, Adam Litke wrote: > > > Recently we've had some very productive discussions concerning > > > the VDSM API. 
I > > > want to attempt to refocus the discussion around an emerging > > > proposal and see if > > > we can agree on a sensible path forward. > > > > > > Based on the discussion, I have identified the following > > > requirements that > > > a new API for vdsm should have: > > > > > > 1.) Single API that can be consumed by ovirt-engine and ISVs > > > - We don't want to maintain multiple parallel APIs > > > - To develop a vendor ecosystem, we must have a robust external > > > API to > > > vdsm > > > > I have doubts around how useful the VDSM API will be for creating > > an > > ecosystem. If you look at most virtualization ISVs today, they want > > to > > integrate with a multi-node API and not a single-node API. The only > > use > > case that i know where integrating with a single node API is > > requested > > is when you're basically creating a virtualization management > > platform > > like oVirt itself. > > Without a first-class node level API, we're precluding the very case > you're aware of. If we're building a community and ecosystem > around KVM management then we need to be open to someone building > that > management platform and doing it in a way that keeps things > compatible. > > There are existing products (IBM has a number in this space) which > utilize libvirt as a node-level API and which won't be able to (easily) > integrate all of oVirt just to obtain access to the nodes. > > Further, if the only way to consume VDSM is via the multi-node > solution, then > the price of entry is a much larger, more complex stack with more > dependencies. This raises the barrier to entry and participation. > IMHO, not ideal when we're attempting to grow a community. > > Alternatively, if we enable a common node API, not only do we support > single node deployments (think virtual appliances, or hardware > appliances; IBM has a few of these) but also allow competing (but > compatible) multi-node/cluster solutions; and it even allows > the single node solutions to be managed by different kvm-based > management platforms because they're all working with the same > node-level API. > > Having a first-class node API is a critical starting point for > building > our larger kvm management community since it allows for easier > integration of existing products. I cannot stress that point enough; > if > we're committed to being an open community then we need to design our > interfaces to encourage participation. Lowering the cost of > participation > and integration is a great way to enable an ecosystem. +1 The REST API should be exactly this. For multi-node environments, where things are already complex, we can raise the bar of demands, but for single node it has to be simple and straightforward, with very few requirements (e.g. requiring AMQP in the single-node use case is too complicated).
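To sketch what keeping that split cheap could look like in the code - a decorator that tags verbs with a scope, so the REST layer publishes only the single-node verbs while a bus binding can also pick up the cluster ones (illustrative only, not an existing vdsm mechanism):

    def api_verb(scope='node'):
        # scope: 'node' for single-node verbs, 'cluster' for multi-node ones.
        def tag(func):
            func.api_scope = scope
            return func
        return tag

    class VmAPI(object):
        @api_verb(scope='node')
        def list_vms(self):
            return []

        @api_verb(scope='cluster')
        def migrate(self, vm_id, dest_host):
            pass

    def verbs_for(api, scope):
        # Each transport layer picks up only the verbs it should expose.
        return [name for name in dir(api)
                if getattr(getattr(api, name), 'api_scope', None) == scope]

    rest_verbs = verbs_for(VmAPI(), 'node')      # ['list_vms']
    bus_verbs = verbs_for(VmAPI(), 'cluster')    # ['migrate'], plus node verbs if desired

New verbs then show up on the right transports automatically, without each one being wired into REST and the bus by hand.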
> > -- > Ryan Harper > Software Engineer; Linux Technology Center > IBM Corp., Austin, Tx > ryanh at us.ibm.com > > _______________________________________________ > vdsm-devel mailing list > vdsm-devel at lists.fedorahosted.org > https://fedorahosted.org/mailman/listinfo/vdsm-devel > From abaron at redhat.com Sat Dec 3 22:25:59 2011 From: abaron at redhat.com (Ayal Baron) Date: Sat, 03 Dec 2011 17:25:59 -0500 (EST) Subject: [Engine-devel] Proposed next-generation vdsm API In-Reply-To: <20111201191053.GP13803@us.ibm.com> Message-ID: <7f433fe8-fdd2-4999-bf19-a4deca6b8cf5@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On Thu, Dec 01, 2011 at 02:02:27AM -0500, Ayal Baron wrote: > > > > > > ----- Original Message ----- > > > > > > > > > > -----Original Message----- > > > > From: Adam Litke [mailto:agl at us.ibm.com] > > > > Sent: Thursday, December 01, 2011 0:41 AM > > > > To: vdsm-devel at lists.fedorahosted.org; engine-devel at ovirt.org > > > > Cc: Daniel P. Berrange; Chris Wright; Dan Kenigsberg; Itamar > > > > Heim > > > > Subject: Proposed next-generation vdsm API > > > > > > > > Recently we've had some very productive discussions concerning > > > > the > > > > VDSM > > > API. I > > > > want to attempt to refocus the discussion around an emerging > > > > proposal > > > and see if > > > > we can agree on a sensible path forward. > > > > > > > > Based on the discussion, I have identified the following > > > > requirements > > > that > > > > a new API for vdsm should have: > > > > > > > > 1.) Single API that can be consumed by ovirt-engine and ISVs > > > > - We don't want to maintain multiple parallel APIs > > > > - To develop a vendor ecosystem, we must have a robust > > > > external > > > > API to > > > > vdsm > > > > > > > > 2.) Full vdsm capabilities are exposed without requiring > > > > ovirt-engine > > > > - ovirt components should be modular and independently useful > > > > - Some deployments might want to manage nodes without > > > > ovirt-engine > > > > > > > > 3.) Standardized protocol with low overhead > > > > - Required for widespread adoption > > > > > > > > 4.) Support for asynchronous tasks and events > > > > - Needed by ovirt-engine and other consumers > > > > > > > > Based on these requirements, the following proposal has started > > > > to > > > emerge: > > > > > > > > Create a REST API that will provide all of the functionality > > > > that > > > > is > > > currently > > > > available via the xmlrpc interface (with the goal of > > > > deprecating > > > > xmlrpc > > > once it > > > > becomes mature enough). To support advanced clustering > > > > features > > > > that > > > > ovirt-engine is planning, we'll write an QMF broker that can > > > > proxy > > > > the > > > REST API > > > > onto a message bus. ovirt-engine will interact with vdsm > > > > exclusively > > > over this > > > > bus but the REST API will be the principle API and the entry > > > > point > > > > for > > > ISV apps. > > > > A REST API provides a light-weight and standard way to access > > > > all > > > > of the > > > vdsm > > > > functionality. > > > > > > > > The REST API will handle events by exposing a new 'events' > > > > collection at > > > the api > > > > root. REST users will use some sort of polling to collect > > > > these > > > > events. > > > The > > > > details of this interface are being worked on. Several ways > > > > for > > > minimizing the > > > > impact of polling have been discussed. 
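As a hypothetical illustration of the polling model described in the quoted proposal - the endpoint, the 'since' cursor and the payload shape are all illustrative, none of this is a settled interface:

    import json
    import time
    import urllib.request

    last_id = 0
    while True:
        # Hypothetical 'events' collection at the API root; a cursor keeps
        # each poll cheap and avoids re-reading events already seen.
        url = 'http://vdsm-host:8080/api/events?since=%d' % last_id
        with urllib.request.urlopen(url) as resp:
            events = json.load(resp)
        for ev in events:
            print(ev['type'], ev.get('detail'))
            last_id = max(last_id, ev['id'])
        time.sleep(5)   # crude; long-polling or a bus subscription would do better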
The QMF broker can > > > > expose a > > > > publish/subscribe model for events as appropriate. > > > > > > > > Is this model an acceptable way to improve the vdsm API? I > > > > would > > > > like > > > to hear > > > > the opinions of ovirt-engine developers, vdsm developers, and > > > > other > > > > stakeholders. Thanks for providing feedback on this proposal! > > > > > > Why things non native to REST and wrap it in QMF, rather than do > > > the > > > reverse? > > > Or just to them in parallel, since it sounds like both are going > > > to > > > be > > > first class citizens? > > > > This was more my understanding from our discussion on IRC > > yesterday. > > I'm afraid I did not follow that discussion to your conclusions... > > > REST API - everything that is relevant for single node management > > QMF - same API as above + multi-node relevant API calls. I don't > > see any > > reason for doing weird things over REST to support the latter. > > A QMF broker runs on the vdsm host and talks to the REST API. It > connects to a > bus and exposes an API to ovirt-engine on this bus using a vdsm-base > schema. > ovirt-engine wants additional clustering functionality. This API > should be > implemented completely between the QMF broker and ovirt-engine using > a separate > vdsm-cluster schema. > > > In fact, I don't even see any real reason for going through the > > REST API when > > using QMF. > > Because we want to avoid the proliferation of APIs. I would prefer a > mostly > vertical chain of API components to a vdsm with several independent > APIs (which > are sure to diverge or be neglected by individual > developers/patches). > > > If you take a look at today's API you will see that there is > > nothing there > > that limits it to XML-RPC and we could easily expose all the calls > > using REST > > or anything else. In python, exposing a new verb in the various > > APIs can be > > automatic so this would require very little maintenance. Any > > multi-node or > > transport specific calls can be decorated as such and would be > > automatically > > ignored/picked up by the relevant API layer. This way, we could > > also easily > > enable using different bus protocols assuming a customer already > > has a > > deployment as was suggested yesterday. > > I don't think this will be as automatic as you suggest in practice. > It sounds > like it will increase code complexity, obfuscation, and maintenance > burden. Even today we don't really have any reliance on the transport. The beauty of REST is its simplicity, but with that come limitations. If we require the qmf API to go through REST then we're basically limiting the qmf to the REST shortcomings or we implement things that bend REST the wrong way to accomodate the extra requirements. For any "simple" call I'm fine with qmf going through REST if it simplifies maintenance, but for things like events, I believe there is no justification for going through REST. > > -- > Adam Litke > IBM Linux Technology Center > > From mkenneth at redhat.com Sun Dec 4 05:02:44 2011 From: mkenneth at redhat.com (Miki Kenneth) Date: Sun, 04 Dec 2011 00:02:44 -0500 (EST) Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <4EDA6CB5.2000905@redhat.com> References: <5a6ced9a-8651-4296-9484-666457191a07@mkenneth.csb> <4ED79CCB.8050805@redhat.com> <4ED9EB43.9040204@redhat.com> <4EDA6CB5.2000905@redhat.com> Message-ID: <16F5CA8E-72F8-4A01-8EB4-CD23D2F7567F@redhat.com> Send from my iPhone . On 3 ???? 
2011, at 20:38, Yaniv Kaul wrote: > On 12/03/2011 11:26 AM, Livnat Peer wrote: >> On 12/01/2011 05:27 PM, Itamar Heim wrote: >>> On 12/01/2011 04:09 PM, Miki Kenneth wrote: >>>> I know that we are talking about only stable addresses, but I would >>>> like to broaden the scope a bit >>>> (don't kick me guys)... >>>> Shouldn't we keep a run-time configuration vs "saved/commit" >>>> configuration. >>>> By run time I mean: the current memory/cpu/disks/address per VM and by >>>> "stable" I mean the "one in the DB". >>>> That way, I'm going to be able to change properties in the stable >>>> config, which will not affect the running one >>>> (and vice versa). >>>> >>>> Maybe this is totally different feature - but I decide to throw it on >>>> the table. >> It is a different feature ;) >> >>> shouldn't that be part of the snapshot improvements design? >>> >> What Miki is looking for, miki please correct me if i am wrong, is the >> ability to change VM configuration while the VM is running and expect >> the changes to apply starting from the next VM run. > > In addition, turn a reboot of a VM into shutdown + run with the new parameters. > That way an admin can tell a user 'I increased your VM's memory, reboot at your own preferred time and you'll have the extra memory'. > (of course, hot-plugging memory is cooler). > Y. Agreed. > >> For the above feature to be 'complete' Miki wants to be able to view >> what is the VM current configuration (the one used when the VM started) >> and what is the configuration for the next run. >> >> After the VM is stopped you have only one configuration (the one for the >> next run). >> >> I guess i can see why you associated it with snapshots, as we can look >> at it as a temporary VM configuration snapshot, but i think it is >> another functionality (especially in UI/client perspective). I don't mind on what feature you implement this ... I think when you design a change in functionality in existing flow, you need to look at the broader scope, at that is why I mentioned. Btw, Livnat, let's start with having it on the engine, ui can follow later on... >> >> Livnat >> >> >>>> Miki >>>> >>>> ----- Original Message ----- >>>>> From: "Eli Mesika" >>>>> To: engine-devel at ovirt.org >>>>> Sent: Wednesday, November 30, 2011 5:17:42 PM >>>>> Subject: Re: [Engine-devel] Stable PCI Addresses design wiki >>>>> >>>>> Hi again >>>>> The following is a design draft for a new feature of oVirt-engine >>>>> planned for 3.1 >>>>> >>>>> The feature allow devices in guest virtual machines to retain the >>>>> same PCI address allocations as other devices are added or removed >>>>> from the guest configuration. This is particularly important for >>>>> Windows guests in order to prevent warnings or reactivation when >>>>> device addresses change. >>>>> >>>>> This feature is supported by libvirt and should be implemented by >>>>> RHEVM and VDSM. >>>>> >>>>> When creating a VM, QEMU allocates PCI addresses to the guest >>>>> devices, these addresses are being reported by libvirt to VDSM and >>>>> VDSM should report it back to RHEVM. RHEVM should persist the PCI >>>>> addresses and report it as part of the VM configuration on the next >>>>> run. If a change to the VM devices occurred RHEVM should detect the >>>>> change and persist the new PCI addresses. >>>>> >>>>> Please review. 
>>>>> >>>>> Thanks >>>>> Eli Mesika >>>>> Redhat ISRAEL >>>>> >>>>> >>>>> ----- Original Message ----- >>>>>> From: "Eli Mesika" >>>>>> To: engine-devel at ovirt.org >>>>>> Sent: Wednesday, November 30, 2011 5:06:37 PM >>>>>> Subject: [Engine-devel] Stable PCI Addresses design wiki >>>>>> >>>>>> http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From abaron at redhat.com Sun Dec 4 07:23:06 2011 From: abaron at redhat.com (Ayal Baron) Date: Sun, 04 Dec 2011 02:23:06 -0500 (EST) Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <16F5CA8E-72F8-4A01-8EB4-CD23D2F7567F@redhat.com> Message-ID: <351e8491-0e40-41d2-885d-121d84ff5a9d@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > > > Send from my iPhone . > > > On 3 ???? 2011, at 20:38, Yaniv Kaul wrote: > > > On 12/03/2011 11:26 AM, Livnat Peer wrote: > >> On 12/01/2011 05:27 PM, Itamar Heim wrote: > >>> On 12/01/2011 04:09 PM, Miki Kenneth wrote: > >>>> I know that we are talking about only stable addresses, but I > >>>> would > >>>> like to broaden the scope a bit > >>>> (don't kick me guys)... > >>>> Shouldn't we keep a run-time configuration vs "saved/commit" > >>>> configuration. > >>>> By run time I mean: the current memory/cpu/disks/address per VM > >>>> and by > >>>> "stable" I mean the "one in the DB". > >>>> That way, I'm going to be able to change properties in the > >>>> stable > >>>> config, which will not affect the running one > >>>> (and vice versa). > >>>> > >>>> Maybe this is totally different feature - but I decide to throw > >>>> it on > >>>> the table. > >> It is a different feature ;) > >> > >>> shouldn't that be part of the snapshot improvements design? > >>> > >> What Miki is looking for, miki please correct me if i am wrong, is > >> the > >> ability to change VM configuration while the VM is running and > >> expect > >> the changes to apply starting from the next VM run. > > > > In addition, turn a reboot of a VM into shutdown + run with the new > > parameters. > > That way an admin can tell a user 'I increased your VM's memory, > > reboot at your own preferred time and you'll have the extra > > memory'. > > (of course, hot-plugging memory is cooler). > > Y. > Agreed. > > > >> For the above feature to be 'complete' Miki wants to be able to > >> view > >> what is the VM current configuration (the one used when the VM > >> started) > >> and what is the configuration for the next run. 
> >> > >> After the VM is stopped you have only one configuration (the one > >> for the > >> next run). > >> > >> I guess i can see why you associated it with snapshots, as we can > >> look > >> at it as a temporary VM configuration snapshot, but i think it is > >> another functionality (especially in UI/client perspective). > I don't mind on what feature you implement this ... I think when you > design a change in functionality in existing flow, you need to look > at the broader scope, at that is why I mentioned. So let's broaden the scope just a little more. As a user, I would like to be able to revert my configuration changes to "last known good" (and no, I'm not talking about live snapshots, only config changes, because I don't want to revert my data changes on disk). I would store multiple versions of the config (20?) and let the user go back (without the need to remember what the config was. > > Btw, Livnat, let's start with having it on the engine, ui can follow > later on... > > >> > >> Livnat > >> > >> > >>>> Miki > >>>> > >>>> ----- Original Message ----- > >>>>> From: "Eli Mesika" > >>>>> To: engine-devel at ovirt.org > >>>>> Sent: Wednesday, November 30, 2011 5:17:42 PM > >>>>> Subject: Re: [Engine-devel] Stable PCI Addresses design wiki > >>>>> > >>>>> Hi again > >>>>> The following is a design draft for a new feature of > >>>>> oVirt-engine > >>>>> planned for 3.1 > >>>>> > >>>>> The feature allow devices in guest virtual machines to retain > >>>>> the > >>>>> same PCI address allocations as other devices are added or > >>>>> removed > >>>>> from the guest configuration. This is particularly important > >>>>> for > >>>>> Windows guests in order to prevent warnings or reactivation > >>>>> when > >>>>> device addresses change. > >>>>> > >>>>> This feature is supported by libvirt and should be implemented > >>>>> by > >>>>> RHEVM and VDSM. > >>>>> > >>>>> When creating a VM, QEMU allocates PCI addresses to the guest > >>>>> devices, these addresses are being reported by libvirt to VDSM > >>>>> and > >>>>> VDSM should report it back to RHEVM. RHEVM should persist the > >>>>> PCI > >>>>> addresses and report it as part of the VM configuration on the > >>>>> next > >>>>> run. If a change to the VM devices occurred RHEVM should detect > >>>>> the > >>>>> change and persist the new PCI addresses. > >>>>> > >>>>> Please review. 
> >>>>> > >>>>> Thanks > >>>>> Eli Mesika > >>>>> Redhat ISRAEL > >>>>> > >>>>> > >>>>> ----- Original Message ----- > >>>>>> From: "Eli Mesika" > >>>>>> To: engine-devel at ovirt.org > >>>>>> Sent: Wednesday, November 30, 2011 5:06:37 PM > >>>>>> Subject: [Engine-devel] Stable PCI Addresses design wiki > >>>>>> > >>>>>> http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses > >>>>>> _______________________________________________ > >>>>>> Engine-devel mailing list > >>>>>> Engine-devel at ovirt.org > >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>>>> > >>>>> _______________________________________________ > >>>>> Engine-devel mailing list > >>>>> Engine-devel at ovirt.org > >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>>> > >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Sun Dec 4 08:04:30 2011 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 04 Dec 2011 10:04:30 +0200 Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <351e8491-0e40-41d2-885d-121d84ff5a9d@zmail13.collab.prod.int.phx2.redhat.com> References: <351e8491-0e40-41d2-885d-121d84ff5a9d@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <4EDB298E.1040101@redhat.com> On 12/04/2011 09:23 AM, Ayal Baron wrote: > > > ----- Original Message ----- >> >> >> Send from my iPhone . >> >> >> On 3 ???? 2011, at 20:38, Yaniv Kaul wrote: >> >>> On 12/03/2011 11:26 AM, Livnat Peer wrote: >>>> On 12/01/2011 05:27 PM, Itamar Heim wrote: >>>>> On 12/01/2011 04:09 PM, Miki Kenneth wrote: >>>>>> I know that we are talking about only stable addresses, but I >>>>>> would >>>>>> like to broaden the scope a bit >>>>>> (don't kick me guys)... >>>>>> Shouldn't we keep a run-time configuration vs "saved/commit" >>>>>> configuration. >>>>>> By run time I mean: the current memory/cpu/disks/address per VM >>>>>> and by >>>>>> "stable" I mean the "one in the DB". >>>>>> That way, I'm going to be able to change properties in the >>>>>> stable >>>>>> config, which will not affect the running one >>>>>> (and vice versa). >>>>>> >>>>>> Maybe this is totally different feature - but I decide to throw >>>>>> it on >>>>>> the table. >>>> It is a different feature ;) >>>> >>>>> shouldn't that be part of the snapshot improvements design? >>>>> >>>> What Miki is looking for, miki please correct me if i am wrong, is >>>> the >>>> ability to change VM configuration while the VM is running and >>>> expect >>>> the changes to apply starting from the next VM run. >>> >>> In addition, turn a reboot of a VM into shutdown + run with the new >>> parameters. 
>>> That way an admin can tell a user 'I increased your VM's memory, >>> reboot at your own preferred time and you'll have the extra >>> memory'. >>> (of course, hot-plugging memory is cooler). >>> Y. >> Agreed. >>> >>>> For the above feature to be 'complete' Miki wants to be able to >>>> view >>>> what is the VM current configuration (the one used when the VM >>>> started) >>>> and what is the configuration for the next run. >>>> >>>> After the VM is stopped you have only one configuration (the one >>>> for the >>>> next run). >>>> >>>> I guess i can see why you associated it with snapshots, as we can >>>> look >>>> at it as a temporary VM configuration snapshot, but i think it is >>>> another functionality (especially in UI/client perspective). >> I don't mind on what feature you implement this ... I think when you >> design a change in functionality in existing flow, you need to look >> at the broader scope, at that is why I mentioned. > > So let's broaden the scope just a little more. As a user, I would like to be able to revert my configuration changes to "last known good" (and no, I'm not talking about live snapshots, only config changes, because I don't want to revert my data changes on disk). I would store multiple versions of the config (20?) and let the user go back (without the need to remember what the config was. > A nice addition. To summarize what was suggested so far: 1. changing configuration while VM is running (apply on next-run) 2. reflecting to the user current configuration vs. 'next-run' configuration 3. switch to the new configuration on restart 4. keeping vm configuration 'history' (either explicitly or implicitly) and enabling roll-back to a specific configuration. Anything else? Livnat >> >> Btw, Livnat, let's start with having it on the engine, ui can follow >> later on... >> >>>> >>>> Livnat >>>> >>>> >>>>>> Miki >>>>>> >>>>>> ----- Original Message ----- >>>>>>> From: "Eli Mesika" >>>>>>> To: engine-devel at ovirt.org >>>>>>> Sent: Wednesday, November 30, 2011 5:17:42 PM >>>>>>> Subject: Re: [Engine-devel] Stable PCI Addresses design wiki >>>>>>> >>>>>>> Hi again >>>>>>> The following is a design draft for a new feature of >>>>>>> oVirt-engine >>>>>>> planned for 3.1 >>>>>>> >>>>>>> The feature allow devices in guest virtual machines to retain >>>>>>> the >>>>>>> same PCI address allocations as other devices are added or >>>>>>> removed >>>>>>> from the guest configuration. This is particularly important >>>>>>> for >>>>>>> Windows guests in order to prevent warnings or reactivation >>>>>>> when >>>>>>> device addresses change. >>>>>>> >>>>>>> This feature is supported by libvirt and should be implemented >>>>>>> by >>>>>>> RHEVM and VDSM. >>>>>>> >>>>>>> When creating a VM, QEMU allocates PCI addresses to the guest >>>>>>> devices, these addresses are being reported by libvirt to VDSM >>>>>>> and >>>>>>> VDSM should report it back to RHEVM. RHEVM should persist the >>>>>>> PCI >>>>>>> addresses and report it as part of the VM configuration on the >>>>>>> next >>>>>>> run. If a change to the VM devices occurred RHEVM should detect >>>>>>> the >>>>>>> change and persist the new PCI addresses. >>>>>>> >>>>>>> Please review. 
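For point 4 of the summary above, the cheapest possible shape - a bounded history per VM with rollback, along the lines of Ayal's "store multiple versions (20?)" suggestion (a sketch; the names and the bound are illustrative):

    from collections import deque

    class ConfigHistory(object):
        def __init__(self, initial, depth=20):
            self._history = deque(maxlen=depth)   # oldest versions age out
            self.current = initial

        def commit(self, new_config):
            # Keep the outgoing version before applying the new one.
            self._history.append(self.current)
            self.current = new_config

        def rollback(self, steps=1):
            # Reverts configuration only - disk contents are untouched.
            for _ in range(steps):
                if not self._history:
                    break
                self.current = self._history.pop()
            return self.current

    h = ConfigHistory({'memory_mb': 1024})
    h.commit({'memory_mb': 2048})
    h.rollback()                      # back to last known good
    assert h.current['memory_mb'] == 1024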
>>>>>>> >>>>>>> Thanks >>>>>>> Eli Mesika >>>>>>> Redhat ISRAEL >>>>>>> >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> From: "Eli Mesika" >>>>>>>> To: engine-devel at ovirt.org >>>>>>>> Sent: Wednesday, November 30, 2011 5:06:37 PM >>>>>>>> Subject: [Engine-devel] Stable PCI Addresses design wiki >>>>>>>> >>>>>>>> http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses >>>>>>>> _______________________________________________ >>>>>>>> Engine-devel mailing list >>>>>>>> Engine-devel at ovirt.org >>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> Engine-devel mailing list >>>>>>> Engine-devel at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>> >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ofrenkel at redhat.com Sun Dec 4 08:21:43 2011 From: ofrenkel at redhat.com (Omer Frenkel) Date: Sun, 04 Dec 2011 03:21:43 -0500 (EST) Subject: [Engine-devel] Stable PCI Addresses design wiki In-Reply-To: <4EDB298E.1040101@redhat.com> Message-ID: <27ec9a47-3822-485b-933c-741ea696469c@zmail07.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Livnat Peer" > To: "Ayal Baron" > Cc: engine-devel at ovirt.org > Sent: Sunday, December 4, 2011 10:04:30 AM > Subject: Re: [Engine-devel] Stable PCI Addresses design wiki > > On 12/04/2011 09:23 AM, Ayal Baron wrote: > > > > > > ----- Original Message ----- > >> > >> > >> Send from my iPhone . > >> > >> > >> On 3 ???? 2011, at 20:38, Yaniv Kaul wrote: > >> > >>> On 12/03/2011 11:26 AM, Livnat Peer wrote: > >>>> On 12/01/2011 05:27 PM, Itamar Heim wrote: > >>>>> On 12/01/2011 04:09 PM, Miki Kenneth wrote: > >>>>>> I know that we are talking about only stable addresses, but I > >>>>>> would > >>>>>> like to broaden the scope a bit > >>>>>> (don't kick me guys)... > >>>>>> Shouldn't we keep a run-time configuration vs "saved/commit" > >>>>>> configuration. > >>>>>> By run time I mean: the current memory/cpu/disks/address per > >>>>>> VM > >>>>>> and by > >>>>>> "stable" I mean the "one in the DB". > >>>>>> That way, I'm going to be able to change properties in the > >>>>>> stable > >>>>>> config, which will not affect the running one > >>>>>> (and vice versa). > >>>>>> > >>>>>> Maybe this is totally different feature - but I decide to > >>>>>> throw > >>>>>> it on > >>>>>> the table. > >>>> It is a different feature ;) > >>>> > >>>>> shouldn't that be part of the snapshot improvements design? 
> >>>>> > >>>> What Miki is looking for, miki please correct me if i am wrong, > >>>> is > >>>> the > >>>> ability to change VM configuration while the VM is running and > >>>> expect > >>>> the changes to apply starting from the next VM run. > >>> > >>> In addition, turn a reboot of a VM into shutdown + run with the > >>> new > >>> parameters. > >>> That way an admin can tell a user 'I increased your VM's memory, > >>> reboot at your own preferred time and you'll have the extra > >>> memory'. > >>> (of course, hot-plugging memory is cooler). > >>> Y. > >> Agreed. > >>> > >>>> For the above feature to be 'complete' Miki wants to be able to > >>>> view > >>>> what is the VM current configuration (the one used when the VM > >>>> started) > >>>> and what is the configuration for the next run. > >>>> > >>>> After the VM is stopped you have only one configuration (the one > >>>> for the > >>>> next run). > >>>> > >>>> I guess i can see why you associated it with snapshots, as we > >>>> can > >>>> look > >>>> at it as a temporary VM configuration snapshot, but i think it > >>>> is > >>>> another functionality (especially in UI/client perspective). > >> I don't mind on what feature you implement this ... I think when > >> you > >> design a change in functionality in existing flow, you need to > >> look > >> at the broader scope, at that is why I mentioned. > > > > So let's broaden the scope just a little more. As a user, I would > > like to be able to revert my configuration changes to "last known > > good" (and no, I'm not talking about live snapshots, only config > > changes, because I don't want to revert my data changes on disk). > > I would store multiple versions of the config (20?) and let the > > user go back (without the need to remember what the config was. > > > > A nice addition. > To summarize what was suggested so far: > > 1. changing configuration while VM is running (apply on next-run) > 2. reflecting to the user current configuration vs. 'next-run' > configuration > 3. switch to the new configuration on restart > 4. keeping vm configuration 'history' (either explicitly or > implicitly) > and enabling roll-back to a specific configuration. the last one sounds like snapshot without disks? > > Anything else? > > Livnat > > > >> > >> Btw, Livnat, let's start with having it on the engine, ui can > >> follow > >> later on... > >> > >>>> > >>>> Livnat > >>>> > >>>> > >>>>>> Miki > >>>>>> > >>>>>> ----- Original Message ----- > >>>>>>> From: "Eli Mesika" > >>>>>>> To: engine-devel at ovirt.org > >>>>>>> Sent: Wednesday, November 30, 2011 5:17:42 PM > >>>>>>> Subject: Re: [Engine-devel] Stable PCI Addresses design wiki > >>>>>>> > >>>>>>> Hi again > >>>>>>> The following is a design draft for a new feature of > >>>>>>> oVirt-engine > >>>>>>> planned for 3.1 > >>>>>>> > >>>>>>> The feature allow devices in guest virtual machines to retain > >>>>>>> the > >>>>>>> same PCI address allocations as other devices are added or > >>>>>>> removed > >>>>>>> from the guest configuration. This is particularly important > >>>>>>> for > >>>>>>> Windows guests in order to prevent warnings or reactivation > >>>>>>> when > >>>>>>> device addresses change. > >>>>>>> > >>>>>>> This feature is supported by libvirt and should be > >>>>>>> implemented > >>>>>>> by > >>>>>>> RHEVM and VDSM. > >>>>>>> > >>>>>>> When creating a VM, QEMU allocates PCI addresses to the guest > >>>>>>> devices, these addresses are being reported by libvirt to > >>>>>>> VDSM > >>>>>>> and > >>>>>>> VDSM should report it back to RHEVM. 
RHEVM should persist the PCI addresses and report it as part of the VM
configuration on the next run. If a change to the VM devices occurred
RHEVM should detect the change and persist the new PCI addresses.
> >>>>>>>
> >>>>>>> Please review.
> >>>>>>>
> >>>>>>> Thanks
> >>>>>>> Eli Mesika
> >>>>>>> Redhat ISRAEL
> >>>>>>>
> >>>>>>> ----- Original Message -----
> >>>>>>>> From: "Eli Mesika"
> >>>>>>>> To: engine-devel at ovirt.org
> >>>>>>>> Sent: Wednesday, November 30, 2011 5:06:37 PM
> >>>>>>>> Subject: [Engine-devel] Stable PCI Addresses design wiki
> >>>>>>>>
> >>>>>>>> http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses
>
> _______________________________________________
> Engine-devel mailing list
> Engine-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/engine-devel
>

From lpeer at redhat.com  Sun Dec 4 08:34:25 2011
From: lpeer at redhat.com (Livnat Peer)
Date: Sun, 04 Dec 2011 10:34:25 +0200
Subject: [Engine-devel] Stable PCI Addresses design wiki
In-Reply-To: <27ec9a47-3822-485b-933c-741ea696469c@zmail07.collab.prod.int.phx2.redhat.com>
References: <27ec9a47-3822-485b-933c-741ea696469c@zmail07.collab.prod.int.phx2.redhat.com>
Message-ID: <4EDB3091.5010408@redhat.com>

On 12/04/2011 10:21 AM, Omer Frenkel wrote:
>
> ----- Original Message -----
>> From: "Livnat Peer"
>> To: "Ayal Baron"
>> Cc: engine-devel at ovirt.org
>> Sent: Sunday, December 4, 2011 10:04:30 AM
>> Subject: Re: [Engine-devel] Stable PCI Addresses design wiki
>>
>> On 12/04/2011 09:23 AM, Ayal Baron wrote:
>>>
>>> ----- Original Message -----
>>>>
>>>> Sent from my iPhone.
>>>>
>>>> On 3 Dec 2011, at 20:38, Yaniv Kaul wrote:
>>>>
>>>>> On 12/03/2011 11:26 AM, Livnat Peer wrote:
>>>>>> On 12/01/2011 05:27 PM, Itamar Heim wrote:
>>>>>>> On 12/01/2011 04:09 PM, Miki Kenneth wrote:
>>>>>>>> I know that we are talking about only stable addresses, but I
>>>>>>>> would like to broaden the scope a bit (don't kick me guys)...
>>>>>>>> Shouldn't we keep a run-time configuration vs. a "saved/commit"
>>>>>>>> configuration? By run time I mean the current
>>>>>>>> memory/cpu/disks/address per VM, and by "stable" I mean the
>>>>>>>> one in the DB. That way, I'm going to be able to change
>>>>>>>> properties in the stable config, which will not affect the
>>>>>>>> running one (and vice versa).
>>>>>>>>
>>>>>>>> Maybe this is a totally different feature - but I decided to
>>>>>>>> throw it on the table.
>>>>>> It is a different feature ;)
>>>>>>
>>>>>>> shouldn't that be part of the snapshot improvements design?
>>>>>>>
>>>>>> What Miki is looking for, Miki please correct me if I am wrong,
>>>>>> is the ability to change VM configuration while the VM is
>>>>>> running and expect the changes to apply starting from the next
>>>>>> VM run.
>>>>>
>>>>> In addition, turn a reboot of a VM into shutdown + run with the
>>>>> new parameters. That way an admin can tell a user 'I increased
>>>>> your VM's memory, reboot at your own preferred time and you'll
>>>>> have the extra memory'. (of course, hot-plugging memory is
>>>>> cooler).
>>>>> Y.
>>>> Agreed.
>>>>>
>>>>>> For the above feature to be 'complete' Miki wants to be able to
>>>>>> view what is the VM's current configuration (the one used when
>>>>>> the VM started) and what is the configuration for the next run.
>>>>>>
>>>>>> After the VM is stopped you have only one configuration (the one
>>>>>> for the next run).
>>>>>>
>>>>>> I guess I can see why you associated it with snapshots, as we
>>>>>> can look at it as a temporary VM configuration snapshot, but I
>>>>>> think it is another functionality (especially in UI/client
>>>>>> perspective).
>>>> I don't mind on what feature you implement this... I think when
>>>> you design a change in functionality in an existing flow, you need
>>>> to look at the broader scope, and that is why I mentioned it.
>>>
>>> So let's broaden the scope just a little more. As a user, I would
>>> like to be able to revert my configuration changes to "last known
>>> good" (and no, I'm not talking about live snapshots, only config
>>> changes, because I don't want to revert my data changes on disk).
>>> I would store multiple versions of the config (20?) and let the
>>> user go back (without the need to remember what the config was).
>>>
>> A nice addition.
>> To summarize what was suggested so far:
>>
>> 1. changing configuration while VM is running (apply on next-run)
>> 2. reflecting to the user the current configuration vs. the
>> 'next-run' configuration
>> 3. switch to the new configuration on restart
>> 4. keeping vm configuration 'history' (either explicitly or
>> implicitly) and enabling roll-back to a specific configuration.
>
> the last one sounds like snapshot without disks?
>
I agree. Maybe an addition is to take it implicitly if the feature is
turned on, like local history in eclipse.

Livnat

>> Anything else?
>>
>> Livnat
>>
>>>> Btw, Livnat, let's start with having it on the engine, ui can
>>>> follow later on...

From ykaul at redhat.com  Sun Dec 4 09:00:45 2011
From: ykaul at redhat.com (Yaniv Kaul)
Date: Sun, 04 Dec 2011 11:00:45 +0200
Subject: [Engine-devel] Stable PCI Addresses design wiki
In-Reply-To: <4EDB3091.5010408@redhat.com>
References: <27ec9a47-3822-485b-933c-741ea696469c@zmail07.collab.prod.int.phx2.redhat.com>
	<4EDB3091.5010408@redhat.com>
Message-ID: <4EDB36BD.3080500@redhat.com>

On 12/04/2011 10:34 AM, Livnat Peer wrote:
> On 12/04/2011 10:21 AM, Omer Frenkel wrote:
>>> 4. keeping vm configuration 'history' (either explicitly or
>>> implicitly) and enabling roll-back to a specific configuration.
>> the last one sounds like snapshot without disks?
>>
> I agree. Maybe an addition is to take it implicitly if the feature is
> turned on, like local history in eclipse.
>
> Livnat

You will be saving the HW config per snapshot regardless, right?
Y.

From abaron at redhat.com  Sun Dec 4 10:00:14 2011
From: abaron at redhat.com (Ayal Baron)
Date: Sun, 04 Dec 2011 05:00:14 -0500 (EST)
Subject: [Engine-devel] Stable PCI Addresses design wiki
In-Reply-To: <4EDB36BD.3080500@redhat.com>
Message-ID:

----- Original Message -----
> >>> 4. keeping vm configuration 'history' (either explicitly or
> >>> implicitly) and enabling roll-back to a specific configuration.
> >> the last one sounds like snapshot without disks?

Implementation-wise - yes. IMO feature-wise it is not, which is why I
specifically made the differentiation. Under the hood you can run a
"live snapshot without disks", but when I expose this to the user, it
is not presented as a diskless live snapshot.
I want to implicitly save configurations and let the user start up the
VM with old configs.
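To make the current-vs.-next-run distinction above concrete, here is a
minimal illustrative sketch of an engine-side config store that applies
pending changes on restart and keeps an implicit, bounded history for
roll-back. It is purely an illustration of the idea being discussed -
the class and field names are hypothetical and not taken from
ovirt-engine:

    from collections import deque

    class VmConfigStore(object):
        def __init__(self, initial, history_size=20):
            self.current = dict(initial)  # config the VM was started with
            self.next_run = None          # pending config, applied on restart
            self.history = deque(maxlen=history_size)

        def edit(self, **changes):
            # Editing a running VM only touches the next-run config.
            base = self.next_run if self.next_run is not None else self.current
            self.next_run = dict(base, **changes)

        def on_restart(self):
            # Reboot becomes shutdown + run with the new parameters.
            if self.next_run is not None:
                self.history.append(dict(self.current))
                self.current, self.next_run = self.next_run, None
            return self.current

        def rollback(self):
            # Revert to "last known good" config without touching disks.
            if self.history:
                self.next_run = self.history.pop()

    store = VmConfigStore({'memory_mb': 1024, 'cpus': 1})
    store.edit(memory_mb=2048)                 # admin edits a running VM
    assert store.current['memory_mb'] == 1024  # running VM is unaffected
    store.on_restart()                         # user reboots when ready
    assert store.current['memory_mb'] == 2048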
From danken at redhat.com  Sun Dec 4 10:04:09 2011
From: danken at redhat.com (Dan Kenigsberg)
Date: Sun, 4 Dec 2011 12:04:09 +0200
Subject: [Engine-devel] Stable PCI/Device addresses
In-Reply-To: <4ED9F037.5050807@redhat.com>
References: <201112011752.43123.dfediuck@redhat.com>
	<20111201204100.GC23878@redhat.com>
	<4ED9F037.5050807@redhat.com>
Message-ID: <20111204100408.GJ4580@redhat.com>

On Sat, Dec 03, 2011 at 11:47:35AM +0200, Livnat Peer wrote:
> On 12/01/2011 10:41 PM, Dan Kenigsberg wrote:
> > On Thu, Dec 01, 2011 at 02:09:42PM -0500, Andrew Cathrow wrote:
> >>
> >> ----- Original Message -----
> >>> From: "Doron Fediuck"
> >>> To: "Dan Kenigsberg"
> >>> Cc: engine-devel at ovirt.org, "Igor Lvovsky"
> >>> Sent: Thursday, December 1, 2011 10:52:42 AM
> >>> Subject: Re: [Engine-devel] Stable PCI/Device addresses
> >>>
> >>> On Thursday 01 December 2011 16:11:07 Dan Kenigsberg wrote:
> >>>> On Thu, Dec 01, 2011 at 02:06:52PM +0200, Livnat Peer wrote:
> >>>>> Moving this back to list -
> >>>>>
> >>>>> On 12/01/2011 01:49 PM, Dan Kenigsberg wrote:
> >>>>>> On Thu, Dec 01, 2011 at 06:26:16AM -0500, Eli Mesika wrote:
> >>>>>>> Hi guys
> >>>>>>>
> >>>>>>> I need the xml/json format representing the VM installed
> >>>>>>> devices. Livnat asked me to add it to my Wiki
> >>>>>>> http://www.ovirt.org/wiki/Features/Design/StableDeviceAddresses
> >>>>>>>
> >>>>>>> Please feel free to either send it to me or edit the VDSM
> >>>>>>> section adding this info.
> >>>>>>
> >>>>>> I think that it is wrong to document this at this point in
> >>>>>> time. The data is a blob, generated by libvirt, copied by
> >>>>>> Vdsm, and not expected to be edited by RHEV-M.
> >>>>>>
> >>>>>> If you REALLY want to know, it is simply libvirt's domain xml,
> >>>>>> which is well-documented in
> >>>>>> http://libvirt.org/formatdomain.html.
> >>>>>>
> >>>>>> Dan.
> >>>>>>
> >>>>>
> >>>>> Hi Dan,
> >>>>>
> >>>>> Since I suspect the next requirement on this would be for RHEVM
> >>>>> to parse the "blob" and enable the user to specify addresses, I
> >>>>> think the content of the "blob" should be discussed.
> >>>>>
> >>>>> Otherwise we'll have to support this "blob" format for the sake
> >>>>> of backwards compatibility and not be able to set a reasonable
> >>>>> API between the engine and VDSM.
> >>>>
> >>>> The requirement for 3.1 allowed me to define an opaque
> >>>> parameter, with which Vdsm uses the Engine to store the VM
> >>>> device addresses.
> >>>>
> >>>> We "secretly" opted for storing the libvirt domxml because it
> >>>> already contains addresses for just about anything, and would
> >>>> allow us to do even more evil things in the future (I'm thinking
> >>>> about specifying complete boot order, and other things which
> >>>> libvirt long supports, but Vdsm does not). Another reason was
> >>>> that this was a very simple thing to do. The down side is that
> >>>> this "device-blob" is a bit bloated in size, and if you look
> >>>> deep into it, it has duplicate information on top of Vdsm's
> >>>> "create" verb.
> >>>>
> >>>> There are elements we should probably not include in the blob -
> >>>> they are very verbose and uninteresting to RHEV-M.
> >>>>
> >>>> If stressed, Vdsm could report only the devices element. It
> >>>> could also convert it to json or yaml, compress and encrypt it -
> >>>> but I do not see the point of these transformations.
> >>>>
> >>>> Dan.
> >>>>
> >>> The direction this is taking is for Engine core to be able to
> >>> parse and edit libvirt's domxml, while vdsm is agnostic (or
> >>> partially agnostic) to the blob.
> >>>
> >>> Is this what we really need? want?
> >>
> >> When we discuss the blob, are we talking about just the address
> >> part - e.g. the address element - or something more?
> >
> > This is exactly the question. Until today, the blob was just a
> > blob, opaque to the engine, and there was no discussion.
> >
> > If the engine wants to be able to control and edit the device
> > addresses, we must agree on how to represent the address for each
> > device type, and how to name each device. The simplest solution is
> > to tap on the good work of the libvirt chaps. If we ever want to,
> > it would allow the Engine to do the crazier stuff alluded to by
> > Doron - but it does not force us to go down this path.
> >
> > In my original design I imagined the blob to be a complete libvirt
> > domxml. However, I understand the benefits of a more specific
> > "blob". For the feature at hand, it is safe to limit this down to
> > the devices element, with those verbose elements stripped, and
> > possibly converted to your data representation language du jour.
> >
> > Dan.
>
> Hi Dan,
>
> I understand why pass-through of the domxml is appealing: it reduces
> work for the current feature, and it also easily enables us to
> support all types of device addresses that are supported by libvirt.
>
> What I like less is the format. I'd rather not use a verbose xml for
> this but use JSON, as we have in several other places in the engine.
>
> The next cycle on this feature is to expose the ability to edit
> addresses by the user, and for that we'll need to manipulate the
> domxml in the engine - less fun.
>
> So although it is more work for us now, I'd rather get the device
> section in a JSON format.

I'm not sure I understand why xml->json conversion is more fun to do
in Vdsm than in Engine; I don't see it as a difficult step either way.

The problem is that we have an already-well-defined interface for
device naming and addressing (libvirt's). Inventing another one, even
if it is only a straight-forward xml-to-json conversion, would
inevitably add doubts and places for bugs.

> BTW - Is there a chance libvirt will support JSON format in the
> future for describing the VM? Then we can use the format they
> suggest for this and it can be a JSON pass-through in the future ;)

It has been years since I've touched infinitesimal probabilities, so I
shall leave this question to a libvirt Daniel.

Dan.
From lpeer at redhat.com  Sun Dec 4 11:16:30 2011
From: lpeer at redhat.com (Livnat Peer)
Date: Sun, 04 Dec 2011 13:16:30 +0200
Subject: [Engine-devel] Stable PCI Addresses design wiki
In-Reply-To: <4EDB36BD.3080500@redhat.com>
References: <27ec9a47-3822-485b-933c-741ea696469c@zmail07.collab.prod.int.phx2.redhat.com>
	<4EDB3091.5010408@redhat.com> <4EDB36BD.3080500@redhat.com>
Message-ID: <4EDB568E.6080809@redhat.com>

On 12/04/2011 11:00 AM, Yaniv Kaul wrote:
> You will be saving the HW config per snapshot regardless, right?

yes

> Y.

From kroberts at redhat.com  Sun Dec 4 17:10:38 2011
From: kroberts at redhat.com (Keith Robertson)
Date: Sun, 04 Dec 2011 12:10:38 -0500
Subject: [Engine-devel] New tool to upload OVF archives
Message-ID: <4EDBA98E.7000502@redhat.com>

All,

I have created a new tool that makes it easier to upload an OVF
archive file to an oVirt export domain. I've attached the patch to
this email so that you can try it out and see what it does.

I am looking for feedback on the tool so please let me know what you
think.

Cheers,
Keith

//---- Begin description

The new tool provided in this patch makes it easier to upload an OVF
archive to an export domain. An OVF archive is simply a zipped
archive that can contain an image and must contain an XML document
describing the image to be uploaded.

The tool has the following behavior:
1. Before unpacking the archive it will check for requisite space on
the local system.
2. Before uploading the requisite parts in the archive it will check
for space in the target NFS export domain.
3. At this time only NFS as a transport mechanism is supported. This
is slightly different behavior than the iso uploader, which supports
both NFS and SSH/SFTP.
4. The tool will allow you to rename the image.
5. The tool will allow you to change the UUID of the image.
6. The tool will only upload those files explicitly listed in the
archive's XML .ovf file. This prevents spurious cruft, which is
included in some OVF archives, from being moved to the export domain.

Example usage:
1. > python ovirt-image-uploader.py -n 127.0.0.1:/virt/exports
--template-name=new-name-here --template-id=new-uuid-here upload
keith.ovf --force
2. > python ovirt-image-uploader.py --conf-file=./imageuploader.conf list
Please provide the REST API password for RHEV-M (CTRL+D to abort):
Export Storage Domain Name | Datacenter | Export Domain Status
ExportDomain               | LegacyDC   | active
3. > python ovirt-image-uploader.py -e ExportDomain
--template-name=new-name-here --template-id=new-uuid-here upload
keith.ovf --force

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: 0001-tools-New-tool-to-upload-OVF-archives-xxxxxx.patch
URL:

From lpeer at redhat.com  Mon Dec 5 06:41:31 2011
From: lpeer at redhat.com (Livnat Peer)
Date: Mon, 05 Dec 2011 08:41:31 +0200
Subject: [Engine-devel] blog post ovirt-engine
Message-ID: <4EDC679B.30806@redhat.com>

Hi,

I wrote a post about migrating oVirt-engine from MS-SQL to PostgreSQL.
http://lpeer.blogspot.com/2011/12/migrating-ovirt-from-ms-sql-to.html

Livnat

From mkolesni at redhat.com  Mon Dec 5 10:16:16 2011
From: mkolesni at redhat.com (Mike Kolesnik)
Date: Mon, 05 Dec 2011 05:16:16 -0500 (EST)
Subject: [Engine-devel] Agenda for bi-weekly oVirt engine core meeting
	(Wed Dec. 7th)
In-Reply-To: <03d44f83-c95b-4a6d-a1a4-f8a9b6852bdc@zmail14.collab.prod.int.phx2.redhat.com>
Message-ID: <2b447fba-bbed-4643-b62d-fa842e3a1423@zmail14.collab.prod.int.phx2.redhat.com>

These are the topics that we are planning to discuss in the meeting:

* Open discussion on stable addressing support in the engine.
* Michael Kublin - Introducing synchronization/locking mechanism.
* Mike Kolesnik - Introducing upcoming changes due to snapshots fixes.

If anyone would like to discuss other topics, feel free to reply and
add them to the list.

Regards,
Mike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From berrange at redhat.com  Mon Dec 5 10:59:35 2011
From: berrange at redhat.com (Daniel P. Berrange)
Date: Mon, 5 Dec 2011 10:59:35 +0000
Subject: [Engine-devel] Stable PCI/Device addresses
In-Reply-To: <4ED9F037.5050807@redhat.com>
References: <201112011752.43123.dfediuck@redhat.com>
	<20111201204100.GC23878@redhat.com>
	<4ED9F037.5050807@redhat.com>
Message-ID: <20111205105935.GD2734@redhat.com>

On Sat, Dec 03, 2011 at 11:47:35AM +0200, Livnat Peer wrote:
> BTW - Is there a chance libvirt will support JSON format in the
> future for describing the VM? Then we can use the format they
> suggest for this and it can be a JSON pass-through in the future ;)

If an application wants to use JSON, then just use a standard
JSON<->XML convertor. There's no need to do this directly in libvirt.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
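To illustrate the standard-converter approach Daniel describes, here is
a minimal sketch using only the Python standard library. The converter
below is naive (it maps attributes, text, and repeated children only),
and the sample devices snippet is a made-up fragment in the shape of
libvirt's domain XML - real code would more likely reach for an
off-the-shelf XML<->JSON module:

    import json
    import xml.etree.ElementTree as ET

    def element_to_dict(elem):
        # Naive XML->JSON mapping: attributes, text, repeated children.
        node = dict(elem.attrib)
        if elem.text and elem.text.strip():
            node['#text'] = elem.text.strip()
        for child in elem:
            node.setdefault(child.tag, []).append(element_to_dict(child))
        return node

    devices_xml = """
    <devices>
      <disk type='block' device='disk'>
        <address type='pci' domain='0x0000' bus='0x00'
                 slot='0x05' function='0x0'/>
      </disk>
    </devices>
    """

    print(json.dumps({'devices': element_to_dict(ET.fromstring(devices_xml))},
                     indent=2))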
From acathrow at redhat.com  Mon Dec 5 13:22:27 2011
From: acathrow at redhat.com (Andrew Cathrow)
Date: Mon, 05 Dec 2011 08:22:27 -0500 (EST)
Subject: [Engine-devel] New tool to upload OVF archives
In-Reply-To: <4EDBA98E.7000502@redhat.com>
Message-ID: <64030ec0-32c8-46b0-b7c3-e1d0a62c4be9@zmail07.collab.prod.int.phx2.redhat.com>

Can we update the ID feature to autogenerate an ID instead of
specifying it?
Also the ID generation needs to flow through to the template/VM's disk
ID.

----- Original Message -----
> From: "Keith Robertson"
> To: engine-devel at ovirt.org
> Sent: Sunday, December 4, 2011 12:10:38 PM
> Subject: [Engine-devel] New tool to upload OVF archives
>
> 4. The tool will allow you to rename the image.
> 5. The tool will allow you to change the UUID of the image.

From kroberts at redhat.com  Mon Dec 5 13:58:48 2011
From: kroberts at redhat.com (Keith Robertson)
Date: Mon, 05 Dec 2011 08:58:48 -0500
Subject: [Engine-devel] New tool to upload OVF archives
In-Reply-To: <64030ec0-32c8-46b0-b7c3-e1d0a62c4be9@zmail07.collab.prod.int.phx2.redhat.com>
References: <64030ec0-32c8-46b0-b7c3-e1d0a62c4be9@zmail07.collab.prod.int.phx2.redhat.com>
Message-ID: <4EDCCE18.8000009@redhat.com>

On 12/05/2011 08:22 AM, Andrew Cathrow wrote:
>
> Can we update the ID feature to autogenerate an ID instead of
> specifying it?
Yes.
> Also the ID generation needs to flow through to the template/VM's
> disk ID.
>
Are you sure about that? In the sample OVF XML in the patch [1], the
XML element "TemplateId" has a UUID that ends in 9402. The only other
place that this 9402 UUID appears is in the "ovf:id" attribute of a
"Section" element. This UUID does not appear in any disk ID elements.
Hence, it was my assumption that the UUIDs for the TemplateId and the
"ovf:diskId" elements are distinctly separate.

Cheers,
Keith

[1]: .../engine-image-uploader/src/ovf/sample-ovf.xml

From lpeer at redhat.com  Mon Dec 5 15:34:55 2011
From: lpeer at redhat.com (Livnat Peer)
Date: Mon, 05 Dec 2011 17:34:55 +0200
Subject: [Engine-devel] New tool to upload OVF archives
In-Reply-To: <4EDCCE18.8000009@redhat.com>
References: <64030ec0-32c8-46b0-b7c3-e1d0a62c4be9@zmail07.collab.prod.int.phx2.redhat.com>
	<4EDCCE18.8000009@redhat.com>
Message-ID: <4EDCE49F.70207@redhat.com>

On 12/05/2011 03:58 PM, Keith Robertson wrote:
> Hence, it was my assumption that the UUIDs for the TemplateId and the
> "ovf:diskId" elements are distinctly separate.

Hi Keith,

As far as I know the template id appears only once, which is fine, but
the disk id should be updated in several places.
I think Andrew's question was referring to the disk-id.

Changing the disk id also requires taking care of all the references
to this id, for example:
- in the ovf file: the References element, the Disk element, and
devices under "VirtualHardwareSection"
- the image file names (image and meta-data file)
- the data inside the meta-data file

Is it covered by the tool?

Livnat

From kroberts at redhat.com  Mon Dec 5 16:27:27 2011
From: kroberts at redhat.com (Keith Robertson)
Date: Mon, 05 Dec 2011 11:27:27 -0500
Subject: [Engine-devel] New tool to upload OVF archives
In-Reply-To: <4EDCE49F.70207@redhat.com>
References: <64030ec0-32c8-46b0-b7c3-e1d0a62c4be9@zmail07.collab.prod.int.phx2.redhat.com>
	<4EDCCE18.8000009@redhat.com> <4EDCE49F.70207@redhat.com>
Message-ID: <4EDCF0EF.8020606@redhat.com>

On 12/05/2011 10:34 AM, Livnat Peer wrote:
> Is it covered by the tool?

Livnat,
Currently, the tool will only update the TemplateId and the 2 places
where it appears. It does not update the "ovf:diskId" attribute or the
5 places where the "diskId" UUID can appear.

The OVF XML schema is a bit vague on these issues so I had to make
some assumptions. Please correct my assumptions...

1. The UUID in the TemplateId element appears to be different than the
UUID for the "ovf:diskId" attribute. I am assuming that this is a
requirement.
2. The UUID for the TemplateId uniquely identifies the "image" to be
imported, *not* the disk ID(s). An image can have multiple disks and
each disk would have its own UUID.

Cheers,
Keith
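To illustrate the bookkeeping Livnat lists above, here is a rough
sketch of what rewriting a disk id consistently might involve. This is
purely illustrative and not taken from the patch: namespace handling
is ignored for simplicity, and the real sample-ovf.xml may use
different attribute names:

    import xml.etree.ElementTree as ET

    def rewrite_disk_id(ovf_path, old_id, new_id):
        tree = ET.parse(ovf_path)
        # The same UUID can appear as attribute values (and text) in
        # several places: References/File, the Disk element, and the
        # devices under VirtualHardwareSection - so walk everything.
        for elem in tree.iter():
            for attr, value in list(elem.attrib.items()):
                if old_id in value:
                    elem.set(attr, value.replace(old_id, new_id))
            if elem.text and old_id in elem.text:
                elem.text = elem.text.replace(old_id, new_id)
        tree.write(ovf_path)
        # Still left to do, per the list above: rename the image and
        # meta-data files on disk, and fix the id inside the meta-data
        # file itself.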
From agl at us.ibm.com  Mon Dec 5 17:34:18 2011
From: agl at us.ibm.com (Adam Litke)
Date: Mon, 5 Dec 2011 11:34:18 -0600
Subject: [Engine-devel] API design and plan
Message-ID: <20111205173418.GB2706@us.ibm.com>

Hi everyone. On today's VDSM call we discussed the requirements,
design, and plan for updating the API to include support for QMF and a
single-host REST API. All members present arrived at a general
consensus on the best way to design the next-generation API. I have
tried to capture this discussion in the oVirt wiki:

http://ovirt.org/wiki/Vdsm_API

Please take a look at this page and let's discuss any changes that may
be needed in order to adopt it as a working plan that we can begin to
execute. Thanks!

-- 
Adam Litke
IBM Linux Technology Center
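As a rough illustration of the layering being discussed (one
transport-neutral verb implementation shared by a REST binding and a
bus binding), consider the sketch below. All names are hypothetical -
the authoritative layering is what the Vdsm_API wiki page defines:

    class VdsmApi(object):
        """Transport-neutral verbs; bindings only translate calls."""

        def list_vms(self):
            return [{'id': 'vm-1', 'status': 'Up'}]  # stub data

        def migrate(self, vm_id, dst_host):
            return {'task': 'migrate-%s' % vm_id, 'dst': dst_host}

    class RestBinding(object):
        def __init__(self, api):
            self.api = api

        def handle(self, method, path):
            # e.g. GET /api/vms -> api.list_vms()
            if method == 'GET' and path == '/api/vms':
                return self.api.list_vms()
            raise KeyError(path)

    class QmfBinding(object):
        def __init__(self, api):
            self.api = api

        def on_message(self, verb, **kwargs):
            # The bus binding dispatches to the very same implementation.
            return getattr(self.api, verb)(**kwargs)

    api = VdsmApi()
    assert (RestBinding(api).handle('GET', '/api/vms') ==
            QmfBinding(api).on_message('list_vms'))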
-- Adam Litke IBM Linux Technology Center From lpeer at redhat.com Tue Dec 6 08:05:48 2011 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 06 Dec 2011 10:05:48 +0200 Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <4EDCF0EF.8020606@redhat.com> References: <64030ec0-32c8-46b0-b7c3-e1d0a62c4be9@zmail07.collab.prod.int.phx2.redhat.com> <4EDCCE18.8000009@redhat.com> <4EDCE49F.70207@redhat.com> <4EDCF0EF.8020606@redhat.com> Message-ID: <4EDDCCDC.3000907@redhat.com> On 12/05/2011 06:27 PM, Keith Robertson wrote: > On 12/05/2011 10:34 AM, Livnat Peer wrote: >> On 12/05/2011 03:58 PM, Keith Robertson wrote: >>> On 12/05/2011 08:22 AM, Andrew Cathrow wrote: >>>> Can we update the ID feature to autogenerate an ID instead of >>>> specifying it. >>> Yes. >>>> Also the ID generation needs to flow through to the template/VM's >>>> disk ID >>>> >>> Are you sure about that? In the sample OVF XML in the patch [1], the >>> XML Element, "TemplateId", has a UUID that ends in 9402. The only other >>> place that this 9402 UUID appears is in the "ovf:id" attribute a >>> "Section" element. This UUID does not appear in any disk ID elements. >>> Hence, it was my assumption that the UUIDs for the TemplateID and the >>> "ovf:diskId" elements are distinctly separate. >>> >>> Cheers, >>> Keith >>> >> Hi Keith, >> >> As far as i know the template id appears only once which is fine but the >> disk id should be updated in several places. >> I think Andrew's question was referencing to the disk-id. >> >> Changing the disk id also requires taking care of all the references to >> this id, for example: >> - in the ovf file the References element, Disk element and devices under >> "VirtualHardwareSection". >> - The image files names (image and meta-data file) >> - The data inside the meta-data file >> >> Is it covered by the tool? > Livnat, > Currently, the tool will only update the TemplateID and the 2 places > where it appears. It does not update the "ovf:diskID" attribute or the > 5 places where the "diskID" UUID can appear. > > The OVF XML schema is a bit vague on these issues so I had to make some > assumptions. Please correct my assumptions... I think Shahar can help with this. Shahar - can you publish the ovf docs on the oVirt wiki? > > 1. The UUID in the TemplateID element appears to be different than the > UUID for the "ovf:diskId" attribute. I am assuming that this is a > requirement. yes > 2. The UUID for the TemplateID uniquely identifies the "image" to be > imported *not* the disk ID(s). An image can have multiple disks and > each disk would have it's own UUID. > I am not sure i follow you with this question. IIUC the tool supports the ability to change template ID but not change it's disks ids. Then you have 2 different templates pointing to the same disks? Livnat > Cheers, > Keith > > > >> Livnat >> >>> [1]: .../engine-image-uploader/src/ovf/sample-ovf.xml >>>> ----- Original Message ----- >>>>> From: "Keith Robertson" >>>>> To: engine-devel at ovirt.org >>>>> Sent: Sunday, December 4, 2011 12:10:38 PM >>>>> Subject: [Engine-devel] New tool to upload OVF archives >>>>> >>>>> All, >>>>> >>>>> I have created a new tool that makes it easier to upload an OVF >>>>> archive >>>>> file to an oVirt export domain. I've attached the patch to this >>>>> email >>>>> so that you can try it out and see what it does. >>>>> >>>>> I am looking for feedback on the tool so please let me know what you >>>>> think. 
>>>>> >>>>> Cheers, >>>>> Keith >>>>> >>>>> >>>>> //---- Begin description >>>>> >>>>> The new tool provided in this patch makes it easier to upload an >>>>> OVF archive to an export domain. An OVF archive is simply a >>>>> zipped archive that can contain an image and must contain >>>>> an XML document describing the image to be uploaded. >>>>> >>>>> The tool has the following behavior: >>>>> 1. Before unpacking the archive it will check for requisite space >>>>> on the local system. >>>>> 2. Before uploading the requisite parts in the archive it will >>>>> check for space in the target NFS export domain. >>>>> 3. At this time only NFS as a transport mechanism is supported. >>>>> This is slightly different behavior than the iso uploader which >>>>> supports both NFS and SSH/SFTP. >>>>> 4. The tool will allow you to rename the image. >>>>> 5. The tool will allow you to change the UUID of the image. >>>>> 6. The tool will only upload those files explicitly listed in the >>>>> archive's XML .ovf file. This prevents spurious cruft which is >>>>> included >>>>> in some OVF archives from being moved to the export domain. >>>>> >>>>> Example usage: >>>>> 1.> python ovirt-image-uploader.py -n 127.0.0.1:/virt/exports >>>>> --template-name=new-name-here --template-id=new-uuid-here upload >>>>> keith.ovf --force >>>>> 2.> python ovirt-image-uploader.py --conf-file=./imageuploader.conf >>>>> list >>>>> Please provide the REST API password for RHEV-M (CTRL+D to abort): >>>>> Export Storage Domain Name | Datacenter | Export >>>>> Domain Status >>>>> ExportDomain | LegacyDC | active >>>>> 3.> python ovirt-image-uploader.py -e ExportDomain >>>>> --template-name=new-name-here --template-id=new-uuid-here upload >>>>> keith.ovf --force >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel From rgolan at redhat.com Tue Dec 6 09:08:41 2011 From: rgolan at redhat.com (Roy Golan) Date: Tue, 06 Dec 2011 11:08:41 +0200 Subject: [Engine-devel] Network features design wiki Message-ID: <4EDDDB99.1040103@redhat.com> http://www.ovirt.org/wiki/Features/Design/Network From rgolan at redhat.com Tue Dec 6 09:52:50 2011 From: rgolan at redhat.com (Roy Golan) Date: Tue, 06 Dec 2011 11:52:50 +0200 Subject: [Engine-devel] Network features design wiki In-Reply-To: <4EDDDB99.1040103@redhat.com> References: <4EDDDB99.1040103@redhat.com> Message-ID: <4EDDE5F2.4030402@redhat.com> On Tue 06 Dec 2011 11:08:41 AM IST, Roy Golan wrote: > http://www.ovirt.org/wiki/Features/Design/Network > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel Hi all, The link above points to the wiki describing changes and additions to network features for the near future. Network management by the engine is a fairly big issue that is only starting to evolve and may take a totally different approach in the future, so this is a (small) start. The changes typically extend the API to be more robust by provisioning a desired network topology in one call and by creating bridge-less networks to utilize the network more wisely.
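To make the "one call" idea concrete, the request would describe the complete desired state of a host at once. A rough sketch of such a payload (all field names here are invented for illustration; the wiki holds the actual design):

# Illustrative only: the full desired topology for one host, expressed
# as a single request, including a bridge-less network on a plain NIC.
request = {
    'networks': {
        'ovirtmgmt': {'bonding': 'bond0', 'vlan': 100, 'bridged': True},
        'storage': {'nic': 'eth3', 'bridged': False},  # bridge-less network
    },
    'bondings': {'bond0': {'nics': ['eth0', 'eth1']}},
    'options': {'connectivityCheck': True},
}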
Thanks, Roy From danken at redhat.com Tue Dec 6 12:58:59 2011 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 6 Dec 2011 14:58:59 +0200 Subject: [Engine-devel] API design and plan In-Reply-To: <20111205173418.GB2706@us.ibm.com> References: <20111205173418.GB2706@us.ibm.com> Message-ID: <20111206125858.GJ22779@redhat.com> On Mon, Dec 05, 2011 at 11:34:18AM -0600, Adam Litke wrote: > Hi everyone. On today's VDSM call we discussed the requirements, design, and > plan for updating the API to include support for QMF and single-host REST API. > All members present arrived at a general consensus on the best way to design the > next-generation API. I have tried to capture this discussion in the oVirt wiki: > > http://ovirt.org/wiki/Vdsm_API > > Please take a look at this page and let's discuss any changes that may be needed > in order to adopt it as a working plan that we can begin to execute. Thanks! > Very nice, I've fixed two bullets about the future of the xml-rpc. I think that we are missing something here: how do we model Vdsm-to-Vdsm communication in a binding-blind way? I'm less worried about the storage-based mailbox used for lvextend requests; my problem is with the migration command. Currently, the implementation of the "migrate" verb includes contacting the remote Vdsm over xml-rpc before issuing the libvirt migrateToURI2 command (the 'migrationCreate' verb). A Vdsm user who chooses to use the REST binding is likely to want this to be implemented using a REST request to the destination. This means that the implementation of Vdsm depends on the chosen binding. The issue can be mitigated by requiring the binding level to provide a "callback" for migrationCreate (and any other future Vdsm->world requests). This would complicate the beautiful png at http://ovirt.org/wiki/Vdsm_API#Design ... Does anyone have another suggestion? Dan. From kroberts at redhat.com Tue Dec 6 13:41:37 2011 From: kroberts at redhat.com (Keith Robertson) Date: Tue, 06 Dec 2011 08:41:37 -0500 Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <4EDDCCDC.3000907@redhat.com> References: <64030ec0-32c8-46b0-b7c3-e1d0a62c4be9@zmail07.collab.prod.int.phx2.redhat.com> <4EDCCE18.8000009@redhat.com> <4EDCE49F.70207@redhat.com> <4EDCF0EF.8020606@redhat.com> <4EDDCCDC.3000907@redhat.com> Message-ID: <4EDE1B91.7030901@redhat.com> On 12/06/2011 03:05 AM, Livnat Peer wrote: > >>> Is it covered by the tool? >> Livnat, >> Currently, the tool will only update the TemplateID and the 2 places >> where it appears. It does not update the "ovf:diskID" attribute or the >> 5 places where the "diskID" UUID can appear. >> >> The OVF XML schema is a bit vague on these issues so I had to make some >> assumptions. Please correct my assumptions... > I think Shahar can help with this. > Shahar - can you publish the ovf docs on the oVirt wiki? > > >> 1. The UUID in the TemplateID element appears to be different than the >> UUID for the "ovf:diskId" attribute. I am assuming that this is a >> requirement. > yes > >> 2. The UUID for the TemplateID uniquely identifies the "image" to be >> imported *not* the disk ID(s). An image can have multiple disks and >> each disk would have its own UUID. >> > I am not sure I follow you with this question. > IIUC the tool supports the ability to change the template ID but not change > its disk ids. Correct, this is the current behavior. > Then you have 2 different templates pointing to the same > disks?
No, I don't think you would have two templates pointing to the same disk(s) unless the tool that generated the OVF archive and its associated XML did it incorrectly. To summarize, the OVF XML schema appears to create a parent-child relationship between the TemplateID and the disks. See the abbreviated XML below, taken from an actual OVF file, and notice the following points: 1. The "Content" element appears to be the "parent" element. It has a "Name" and a "TemplateID" which I think uniquely identify the "image" or "template". 2. The "Content" element has "Item"(s) as indirect children. These "Item"s can be disks and are uniquely identified by their own UUID (i.e. InstanceID). 3. The "InstanceID" of a disk can be resolved to a physical disk by looking at the "id" and "href" combination in the "References" element. In short, the relationship appears to be... TemplateID (UUID-1) |- Disk (UUID-2) |- Disk (UUID-3) Do you agree? //---------- Begin XML
<References>
<File ovf:href="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad"
      ovf:id="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" ovf:size="8589934592"/>
</References>
<Section xsi:type="ovf:DiskSection_Type">
  <Info>List of Virtual Disks</Info>
  <Disk ovf:diskId="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" ovf:size="8"
        ovf:actual_size="1" ovf:vm_snapshot_id="c1398a3c-ca59-460a-ac0c-0ea91d7218be"
        ovf:fileRef="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad"
        ovf:format="http://www.gnome.org/~markmc/qcow-image-format.html"
        ovf:volume-format="COW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO"
        ovf:disk-type="System" ovf:boot="true" ovf:wipe-after-delete="false"/>
</Section>
<Content xsi:type="ovf:VirtualSystem_Type">
  <Name>tmcowrhel6</Name>
  <TemplateId>5272b689-cd9f-4532-9b5d-2413eb7b9402</TemplateId>
  <Section ovf:id="5272b689-cd9f-4532-9b5d-2413eb7b9402" ovf:required="false"
           xsi:type="ovf:OperatingSystemSection_Type">
    <Info>Guest Operating System</Info>
    <Description>RHEL6x64</Description>
  </Section>
  <Section xsi:type="ovf:VirtualHardwareSection_Type">
    <Info>1 CPU, 1024 Memeory</Info>
    <System>
      <vssd:VirtualSystemType>RHEVM 3.0.0.0</vssd:VirtualSystemType>
    </System>
    <Item>
      <rasd:Caption>Drive 1</rasd:Caption>
      <rasd:InstanceId>c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad</rasd:InstanceId>
      <rasd:ResourceType>17</rasd:ResourceType>
      <rasd:HostResource>2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad</rasd:HostResource>
    </Item>
  </Section>
</Content>
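For what it's worth, this parent-child reading can be checked mechanically with a few lines of Python. A quick sketch (it matches tags and attributes by local name so the ovf:/rasd: namespace prefixes don't get in the way; error handling omitted):

# Sketch: print the template id and its disks' fileRefs from an OVF file.
import xml.etree.ElementTree as ET

tree = ET.parse('5272b689-cd9f-4532-9b5d-2413eb7b9402.ovf')
for elem in tree.iter():
    if elem.tag.endswith('TemplateId'):
        print 'TemplateId:', elem.text
    elif elem.tag.endswith('Disk'):
        for key, value in elem.attrib.items():
            if key.endswith('fileRef'):
                # ovf:fileRef is "[Image Group Id]/[Image Id]"
                print '  Disk fileRef:', value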
> Livnat > > The point is that the TemplatID [1] can have multiple child disks and each ch From agl at us.ibm.com Tue Dec 6 14:46:57 2011 From: agl at us.ibm.com (Adam Litke) Date: Tue, 6 Dec 2011 08:46:57 -0600 Subject: [Engine-devel] API design and plan In-Reply-To: <20111206125858.GJ22779@redhat.com> References: <20111205173418.GB2706@us.ibm.com> <20111206125858.GJ22779@redhat.com> Message-ID: <20111206144657.GA20444@us.ibm.com> On Tue, Dec 06, 2011 at 02:58:59PM +0200, Dan Kenigsberg wrote: > On Mon, Dec 05, 2011 at 11:34:18AM -0600, Adam Litke wrote: > > Hi everyone. On today's VDSM call we discussed the requirements, design, and > > plan for updating the API to include support for QMF and single-host REST API. > > All members present arrived at a general consensus on the best way to design the > > next-generation API. I have tried to capture this discussion in the oVirt wiki: > > > > http://ovirt.org/wiki/Vdsm_API > > > > Please take a look at this page and let's discuss any changes that may be needed > > in order to adopt it as a working plan that we can begin to execute. Thanks! > > > > Very nice, I've fixed two bullets about the future of the xml-rpc. Thanks... Updates look good to me. > I think that we are missing something here: how do we model Vdsm-to-Vdsm > communication, in a binding-blind way? I'm less worried about the > storage-based mailbox used for lvextend requests: my problem is with > migration command. Ok, interesting... Besides migration, are there other features (current or planned) that would involve P2P communication? I want to ensure we consider the full problem space. > Currently, the implementation of the "migrate" verb includes contacting > the remote Vdsm over xml-rpc before issuing the libvirt migrateToURI2 > command ('migrationCreate' verb). > > A Vdsm user who choose to use the REST binding, is likely to want this to > be implemented this using a REST request to the destination. This means > that the implementation of Vdsm depends on the chosen binding. > > The issue can be mitigating by requiring the binding level to provide a > "callback" for migrationCreate (and any other future Vdsm->world requests). > This would complicate the beautiful png at > http://ovirt.org/wiki/Vdsm_API#Design ... Does anyone have another > suggestion? Actually, I think you are blending the external API with vdsm internals. As a management server or ovirt-engine, I don't care about the protocol that vdsm uses to contact the migration recipient. As far as I am concerned this is a special case internal function call. For that purpose, I think xmlrpc is perfectly well-suited to the task and should be used unconditionally, regardless of the bindings used to initiate the migration. So I would propose that we modify the design such that we keep an extremely thin xmlrpc server active whose sole purpose is to service internal P2P requests. 
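As far as I am concerned this is a special case internal function call. For that purpose, I think xmlrpc is perfectly well-suited to the task and should be used unconditionally, regardless of the bindings used to initiate the migration. So I would propose that we modify the design such that we keep an extremely thin xmlrpc server active whose sole purpose is to service internal P2P requests. Roughly this much (a sketch only; the port number and the verb's payload are illustrative, not a design):

# Sketch of the proposed internal-only xmlrpc endpoint (illustrative).
from SimpleXMLRPCServer import SimpleXMLRPCServer

def migrationCreate(params):
    # Destination-side hook: prepare storage and networking for the
    # incoming VM before the source issues libvirt's migrateToURI2.
    return {'status': {'code': 0, 'message': 'ready'}}

server = SimpleXMLRPCServer(('0.0.0.0', 54322), logRequests=False)  # port is illustrative
server.register_function(migrationCreate)
server.serve_forever()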
-- Adam Litke IBM Linux Technology Center From shaharh at redhat.com Tue Dec 6 14:48:47 2011 From: shaharh at redhat.com (Shahar Havivi) Date: Tue, 6 Dec 2011 16:48:47 +0200 Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <4EDE1B91.7030901@redhat.com> References: <64030ec0-32c8-46b0-b7c3-e1d0a62c4be9@zmail07.collab.prod.int.phx2.redhat.com> <4EDCCE18.8000009@redhat.com> <4EDCE49F.70207@redhat.com> <4EDCF0EF.8020606@redhat.com> <4EDDCCDC.3000907@redhat.com> <4EDE1B91.7030901@redhat.com> Message-ID: <20111206144846.GA16410@redhat.com> Hi, the ovf documentation can be found in this link http://cleo.tlv.redhat.com/qumrawiki/CategoryRhevmBackend/OvfFileFormat On 06.12.11 08:41, Keith Robertson wrote: > On 12/06/2011 03:05 AM, Livnat Peer wrote: > > > >>>Is it covered by the tool? > >>Livnat, > >>Currently, the tool will only update the TemplateID and the 2 places > >>where it appears. It does not update the "ovf:diskID" attribute or the > >>5 places where the "diskID" UUID can appear. > >> > >>The OVF XML schema is a bit vague on these issues so I had to make some > >>assumptions. Please correct my assumptions... > >I think Shahar can help with this. > >Shahar - can you publish the ovf docs on the oVirt wiki? > > > > > >>1. The UUID in the TemplateID element appears to be different than the > >>UUID for the "ovf:diskId" attribute. I am assuming that this is a > >>requirement. > >yes > > > >>2. The UUID for the TemplateID uniquely identifies the "image" to be > >>imported *not* the disk ID(s). An image can have multiple disks and > >>each disk would have it's own UUID. > >> > >I am not sure i follow you with this question. > >IIUC the tool supports the ability to change template ID but not change > >it's disks ids. > Correct, this is the current behavior. > >Then you have 2 different templates pointing to the same > >disks? > No, I don't think you would have two templates pointing to the same > disk(s) unless the tool that generated the OVF archive and it's > associated XML did it incorrectly. > > To summarize, the OVF XML schema appears to create a parent-child > relationship between the TemplateID and the disks. See the > abbreviated XML below from an actual OVF XML file below and notice > the following points: > > 1. The "Content" element appears to be the "parent" element. It has > a "Name" and a "TemplateID" which I think uniquely identify the > "image" or "template". > 2. The "Content" element has "Item"(s) as indirect children. These > "Item"s can be disks and are uniquely identified by their own UUID > (i.e. InstanceID). > 3. The "InstanceID" of a disk can be resolved to a physical disk by > looking at the "id" and "href" combination in the "References" > element. > > In short, the relationship appears to be... > > TemplateID (UUID-1) > |- Disk (UUID-2) > |- Disk (UUID-3) > > Do you agree? > > > //---------- Begin XML > > > > ovf:href="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > ovf:id="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > ovf:size="8589934592"/> > >
> List of Virtual Disks > ovf:actual_size="1" > ovf:vm_snapshot_id="c1398a3c-ca59-460a-ac0c-0ea91d7218be" > ovf:fileRef="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > ovf:format="http://www.gnome.org/~markmc/qcow-image-format.html" > ovf:volume-format="COW" ovf:volume-type="Sparse" > ovf:disk-interface="VirtIO" > ovf:disk-type="System" ovf:boot="true" > ovf:wipe-after-delete="false" /> >
> > tmcowrhel6 > 5272b689-cd9f-4532-9b5d-2413eb7b9402 >
ovf:required="false" > xsi:type="ovf:OperatingSystemSection_Type"> > Guest Operating System > RHEL6x64 >
>
> 1 CPU, 1024 Memeory > > RHEVM 3.0.0.0 > > > Drive 1 > c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > 17 > 2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > >
>
>
> > > > >Livnat > > > > > The point is that the TemplatID [1] can have multiple child disks > and each ch > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From acathrow at redhat.com Tue Dec 6 15:24:29 2011 From: acathrow at redhat.com (Andrew Cathrow) Date: Tue, 06 Dec 2011 10:24:29 -0500 (EST) Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <20111206144846.GA16410@redhat.com> Message-ID: <616503c4-fb89-4dbb-b8c7-3cc88394cbb0@zmail07.collab.prod.int.phx2.redhat.com> Can we post that on the upstream wiki? ----- Original Message ----- > From: "Shahar Havivi" > To: "Keith Robertson" > Cc: engine-devel at ovirt.org > Sent: Tuesday, December 6, 2011 9:48:47 AM > Subject: Re: [Engine-devel] New tool to upload OVF archives > > Hi, > the ovf documentation can be found in this link > http://cleo.tlv.redhat.com/qumrawiki/CategoryRhevmBackend/OvfFileFormat > > > On 06.12.11 08:41, Keith Robertson wrote: > > On 12/06/2011 03:05 AM, Livnat Peer wrote: > > > > > >>>Is it covered by the tool? > > >>Livnat, > > >>Currently, the tool will only update the TemplateID and the 2 > > >>places > > >>where it appears. It does not update the "ovf:diskID" attribute > > >>or the > > >>5 places where the "diskID" UUID can appear. > > >> > > >>The OVF XML schema is a bit vague on these issues so I had to > > >>make some > > >>assumptions. Please correct my assumptions... > > >I think Shahar can help with this. > > >Shahar - can you publish the ovf docs on the oVirt wiki? > > > > > > > > >>1. The UUID in the TemplateID element appears to be different > > >>than the > > >>UUID for the "ovf:diskId" attribute. I am assuming that this is > > >>a > > >>requirement. > > >yes > > > > > >>2. The UUID for the TemplateID uniquely identifies the "image" to > > >>be > > >>imported *not* the disk ID(s). An image can have multiple disks > > >>and > > >>each disk would have it's own UUID. > > >> > > >I am not sure i follow you with this question. > > >IIUC the tool supports the ability to change template ID but not > > >change > > >it's disks ids. > > Correct, this is the current behavior. > > >Then you have 2 different templates pointing to the same > > >disks? > > No, I don't think you would have two templates pointing to the same > > disk(s) unless the tool that generated the OVF archive and it's > > associated XML did it incorrectly. > > > > To summarize, the OVF XML schema appears to create a parent-child > > relationship between the TemplateID and the disks. See the > > abbreviated XML below from an actual OVF XML file below and notice > > the following points: > > > > 1. The "Content" element appears to be the "parent" element. It > > has > > a "Name" and a "TemplateID" which I think uniquely identify the > > "image" or "template". > > 2. The "Content" element has "Item"(s) as indirect children. These > > "Item"s can be disks and are uniquely identified by their own UUID > > (i.e. InstanceID). > > 3. The "InstanceID" of a disk can be resolved to a physical disk by > > looking at the "id" and "href" combination in the "References" > > element. > > > > In short, the relationship appears to be... > > > > TemplateID (UUID-1) > > |- Disk (UUID-2) > > |- Disk (UUID-3) > > > > Do you agree? 
> > > > > > //---------- Begin XML > > > > > > > > > ovf:href="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > ovf:id="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > ovf:size="8589934592"/> > > > >
> > List of Virtual Disks > > > ovf:size="8" > > ovf:actual_size="1" > > ovf:vm_snapshot_id="c1398a3c-ca59-460a-ac0c-0ea91d7218be" > > ovf:fileRef="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > ovf:format="http://www.gnome.org/~markmc/qcow-image-format.html" > > ovf:volume-format="COW" ovf:volume-type="Sparse" > > ovf:disk-interface="VirtIO" > > ovf:disk-type="System" ovf:boot="true" > > ovf:wipe-after-delete="false" /> > >
> > > > tmcowrhel6 > > 5272b689-cd9f-4532-9b5d-2413eb7b9402 > >
> ovf:required="false" > > xsi:type="ovf:OperatingSystemSection_Type"> > > Guest Operating System > > RHEL6x64 > >
> >
> > 1 CPU, 1024 Memeory > > > > RHEVM 3.0.0.0 > > > > > > Drive 1 > > c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > > > 17 > > 2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > > > > >
> >
> >
> > > > > > > > >Livnat > > > > > > > > The point is that the TemplatID [1] can have multiple child disks > > and each ch > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From shaharh at redhat.com Tue Dec 6 15:24:00 2011 From: shaharh at redhat.com (Shahar Havivi) Date: Tue, 6 Dec 2011 17:24:00 +0200 Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <616503c4-fb89-4dbb-b8c7-3cc88394cbb0@zmail07.collab.prod.int.phx2.redhat.com> References: <20111206144846.GA16410@redhat.com> <616503c4-fb89-4dbb-b8c7-3cc88394cbb0@zmail07.collab.prod.int.phx2.redhat.com> Message-ID: <20111206152359.GB16410@redhat.com> On 06.12.11 10:24, Andrew Cathrow wrote: > Can we post that on the upstream wiki? I dont see y not, and I did saw the Keith is inside redhat... > > ----- Original Message ----- > > From: "Shahar Havivi" > > To: "Keith Robertson" > > Cc: engine-devel at ovirt.org > > Sent: Tuesday, December 6, 2011 9:48:47 AM > > Subject: Re: [Engine-devel] New tool to upload OVF archives > > > > Hi, > > the ovf documentation can be found in this link > > http://cleo.tlv.redhat.com/qumrawiki/CategoryRhevmBackend/OvfFileFormat > > > > > > On 06.12.11 08:41, Keith Robertson wrote: > > > On 12/06/2011 03:05 AM, Livnat Peer wrote: > > > > > > > >>>Is it covered by the tool? > > > >>Livnat, > > > >>Currently, the tool will only update the TemplateID and the 2 > > > >>places > > > >>where it appears. It does not update the "ovf:diskID" attribute > > > >>or the > > > >>5 places where the "diskID" UUID can appear. > > > >> > > > >>The OVF XML schema is a bit vague on these issues so I had to > > > >>make some > > > >>assumptions. Please correct my assumptions... > > > >I think Shahar can help with this. > > > >Shahar - can you publish the ovf docs on the oVirt wiki? > > > > > > > > > > > >>1. The UUID in the TemplateID element appears to be different > > > >>than the > > > >>UUID for the "ovf:diskId" attribute. I am assuming that this is > > > >>a > > > >>requirement. > > > >yes > > > > > > > >>2. The UUID for the TemplateID uniquely identifies the "image" to > > > >>be > > > >>imported *not* the disk ID(s). An image can have multiple disks > > > >>and > > > >>each disk would have it's own UUID. > > > >> > > > >I am not sure i follow you with this question. > > > >IIUC the tool supports the ability to change template ID but not > > > >change > > > >it's disks ids. > > > Correct, this is the current behavior. > > > >Then you have 2 different templates pointing to the same > > > >disks? > > > No, I don't think you would have two templates pointing to the same > > > disk(s) unless the tool that generated the OVF archive and it's > > > associated XML did it incorrectly. > > > > > > To summarize, the OVF XML schema appears to create a parent-child > > > relationship between the TemplateID and the disks. See the > > > abbreviated XML below from an actual OVF XML file below and notice > > > the following points: > > > > > > 1. The "Content" element appears to be the "parent" element. It > > > has > > > a "Name" and a "TemplateID" which I think uniquely identify the > > > "image" or "template". > > > 2. The "Content" element has "Item"(s) as indirect children. 
These > > > "Item"s can be disks and are uniquely identified by their own UUID > > > (i.e. InstanceID). > > > 3. The "InstanceID" of a disk can be resolved to a physical disk by > > > looking at the "id" and "href" combination in the "References" > > > element. > > > > > > In short, the relationship appears to be... > > > > > > TemplateID (UUID-1) > > > |- Disk (UUID-2) > > > |- Disk (UUID-3) > > > > > > Do you agree? > > > > > > > > > //---------- Begin XML > > > > > > > > > > > > > > ovf:href="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > > ovf:id="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > > ovf:size="8589934592"/> > > > > > >
> > > List of Virtual Disks > > > > > ovf:size="8" > > > ovf:actual_size="1" > > > ovf:vm_snapshot_id="c1398a3c-ca59-460a-ac0c-0ea91d7218be" > > > ovf:fileRef="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > > ovf:format="http://www.gnome.org/~markmc/qcow-image-format.html" > > > ovf:volume-format="COW" ovf:volume-type="Sparse" > > > ovf:disk-interface="VirtIO" > > > ovf:disk-type="System" ovf:boot="true" > > > ovf:wipe-after-delete="false" /> > > >
> > > > > > tmcowrhel6 > > > 5272b689-cd9f-4532-9b5d-2413eb7b9402 > > >
> > ovf:required="false" > > > xsi:type="ovf:OperatingSystemSection_Type"> > > > Guest Operating System > > > RHEL6x64 > > >
> > >
> > > 1 CPU, 1024 Memeory > > > > > > RHEVM 3.0.0.0 > > > > > > > > > Drive 1 > > > c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > > > > > 17 > > > 2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > > > > > > > >
> > >
> > >
> > > > > > > > > > > > >Livnat > > > > > > > > > > > The point is that the TemplatID [1] can have multiple child disks > > > and each ch > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From danken at redhat.com Tue Dec 6 16:07:53 2011 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 6 Dec 2011 18:07:53 +0200 Subject: [Engine-devel] API design and plan In-Reply-To: <20111206144657.GA20444@us.ibm.com> References: <20111205173418.GB2706@us.ibm.com> <20111206125858.GJ22779@redhat.com> <20111206144657.GA20444@us.ibm.com> Message-ID: <20111206160747.GQ22779@redhat.com> On Tue, Dec 06, 2011 at 08:46:57AM -0600, Adam Litke wrote: > On Tue, Dec 06, 2011 at 02:58:59PM +0200, Dan Kenigsberg wrote: > > On Mon, Dec 05, 2011 at 11:34:18AM -0600, Adam Litke wrote: > > > Hi everyone. On today's VDSM call we discussed the requirements, design, and > > > plan for updating the API to include support for QMF and single-host REST API. > > > All members present arrived at a general consensus on the best way to design the > > > next-generation API. I have tried to capture this discussion in the oVirt wiki: > > > > > > http://ovirt.org/wiki/Vdsm_API > > > > > > Please take a look at this page and let's discuss any changes that may be needed > > > in order to adopt it as a working plan that we can begin to execute. Thanks! > > > > > > > Very nice, I've fixed two bullets about the future of the xml-rpc. > > Thanks... Updates look good to me. > > > I think that we are missing something here: how do we model Vdsm-to-Vdsm > > communication, in a binding-blind way? I'm less worried about the > > storage-based mailbox used for lvextend requests: my problem is with > > migration command. > > Ok, interesting... Besides migration, are there other features (current or > planned) that would involve P2P communication? I want to ensure we consider the > full problem space. Well, I can imagine we would like a host in distress to migrate VMs to whomever can take them, without central management driving this process. (CAVE split brain) At the momemt I cannot think of something that cannot be implemented by QMF events. Ayal? > > > Currently, the implementation of the "migrate" verb includes contacting > > the remote Vdsm over xml-rpc before issuing the libvirt migrateToURI2 > > command ('migrationCreate' verb). > > > > A Vdsm user who choose to use the REST binding, is likely to want this to > > be implemented this using a REST request to the destination. This means > > that the implementation of Vdsm depends on the chosen binding. > > > > The issue can be mitigating by requiring the binding level to provide a > > "callback" for migrationCreate (and any other future Vdsm->world requests). > > This would complicate the beautiful png at > > http://ovirt.org/wiki/Vdsm_API#Design ... Does anyone have another > > suggestion? > > Actually, I think you are blending the external API with vdsm internals. As a > management server or ovirt-engine, I don't care about the protocol that vdsm > uses to contact the migration recipient. 
As far as I am concerned this is a > special case internal function call. For that purpose, I think xmlrpc is > perfectly well-suited to the task and should be used unconditionally, regardless > of the bindings used to initiate the migration. > > So I would propose that we modify the design such that we keep an extremely thin > xmlrpc server active whose sole purpose is to service internal P2P requests. Interesting. We could avoid even that, if we could register a callback with libvirt, so that destination libvirtd called destination Vdsm to verify that all storage and networking resources are ready, before executing qemu. DanPB, can something like that be done? (I guess it is not realistic since we may need to pass vdsm-specific data from source to dest, and libvirt is not supposed to be a general purpose transport.) Dan. From kroberts at redhat.com Tue Dec 6 19:14:33 2011 From: kroberts at redhat.com (Keith Robertson) Date: Tue, 06 Dec 2011 14:14:33 -0500 Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <616503c4-fb89-4dbb-b8c7-3cc88394cbb0@zmail07.collab.prod.int.phx2.redhat.com> References: <616503c4-fb89-4dbb-b8c7-3cc88394cbb0@zmail07.collab.prod.int.phx2.redhat.com> Message-ID: <4EDE6999.3080708@redhat.com> Shahar and Andrew, There is some ambiguity between the OVF XML schema and the document (i.e. RHEVM-OVF-1.odt). The XSD defines an XML element called Content. This content element roughly correlates to a VM. It has such things as "Name", "TemplateID", and can have "Item" element(s) that describe such things as CPUs, Memory, and Disks. Currently, my tool will allow the user to change the "TemplateID" in the "Content" element. I can easily modify the tool to also change the UUID of each disk that it finds within the "Content" element (i.e. diskID); however, oVirt places an additional stipulation on Disk elements. It states that disks should be laid out like "[Image Group Id]/[Image Id]". What should the tool do if it opens an archive and discovers that disks do not have "Image Group ID"s? Will oVirt be able to appropriately load them if the tool simply copies the image into a path resembling [1], or does it require a path like [2]? [1] <export domain>/images/<image id here> [2] <export domain>/images/<image group id>/<image id here> Cheers, Keith On 12/06/2011 10:24 AM, Andrew Cathrow wrote: > Can we post that on the upstream wiki? > > ----- Original Message ----- >> From: "Shahar Havivi" >> To: "Keith Robertson" >> Cc: engine-devel at ovirt.org >> Sent: Tuesday, December 6, 2011 9:48:47 AM >> Subject: Re: [Engine-devel] New tool to upload OVF archives >> >> Hi, >> the ovf documentation can be found in this link >> http://cleo.tlv.redhat.com/qumrawiki/CategoryRhevmBackend/OvfFileFormat >> >> >> On 06.12.11 08:41, Keith Robertson wrote: >>> On 12/06/2011 03:05 AM, Livnat Peer wrote: >>>> >>>>>> Is it covered by the tool? >>>>> Livnat, >>>>> Currently, the tool will only update the TemplateID and the 2 >>>>> places >>>>> where it appears. It does not update the "ovf:diskID" attribute >>>>> or the >>>>> 5 places where the "diskID" UUID can appear. >>>>> >>>>> The OVF XML schema is a bit vague on these issues so I had to >>>>> make some >>>>> assumptions. Please correct my assumptions... >>>> I think Shahar can help with this. >>>> Shahar - can you publish the ovf docs on the oVirt wiki? >>>> >>>> >>>>> 1. The UUID in the TemplateID element appears to be different >>>>> than the >>>>> UUID for the "ovf:diskId" attribute. I am assuming that this is >>>>> a >>>>> requirement. >>>> yes >>>> >>>>> 2.
The UUID for the TemplateID uniquely identifies the "image" to >>>>> be >>>>> imported *not* the disk ID(s). An image can have multiple disks >>>>> and >>>>> each disk would have it's own UUID. >>>>> >>>> I am not sure i follow you with this question. >>>> IIUC the tool supports the ability to change template ID but not >>>> change >>>> it's disks ids. >>> Correct, this is the current behavior. >>>> Then you have 2 different templates pointing to the same >>>> disks? >>> No, I don't think you would have two templates pointing to the same >>> disk(s) unless the tool that generated the OVF archive and it's >>> associated XML did it incorrectly. >>> >>> To summarize, the OVF XML schema appears to create a parent-child >>> relationship between the TemplateID and the disks. See the >>> abbreviated XML below from an actual OVF XML file below and notice >>> the following points: >>> >>> 1. The "Content" element appears to be the "parent" element. It >>> has >>> a "Name" and a "TemplateID" which I think uniquely identify the >>> "image" or "template". >>> 2. The "Content" element has "Item"(s) as indirect children. These >>> "Item"s can be disks and are uniquely identified by their own UUID >>> (i.e. InstanceID). >>> 3. The "InstanceID" of a disk can be resolved to a physical disk by >>> looking at the "id" and "href" combination in the "References" >>> element. >>> >>> In short, the relationship appears to be... >>> >>> TemplateID (UUID-1) >>> |- Disk (UUID-2) >>> |- Disk (UUID-3) >>> >>> Do you agree? >>> >>> >>> //---------- Begin XML >>> >>> >>> >>> >> ovf:href="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" >>> ovf:id="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" >>> ovf:size="8589934592"/> >>> >>>
>>> List of Virtual Disks >>> >> ovf:size="8" >>> ovf:actual_size="1" >>> ovf:vm_snapshot_id="c1398a3c-ca59-460a-ac0c-0ea91d7218be" >>> ovf:fileRef="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" >>> ovf:format="http://www.gnome.org/~markmc/qcow-image-format.html" >>> ovf:volume-format="COW" ovf:volume-type="Sparse" >>> ovf:disk-interface="VirtIO" >>> ovf:disk-type="System" ovf:boot="true" >>> ovf:wipe-after-delete="false" /> >>>
>>> >>> tmcowrhel6 >>> 5272b689-cd9f-4532-9b5d-2413eb7b9402 >>>
>> ovf:required="false" >>> xsi:type="ovf:OperatingSystemSection_Type"> >>> Guest Operating System >>> RHEL6x64 >>>
>>>
>>> 1 CPU, 1024 Memeory >>> >>> RHEVM 3.0.0.0 >>> >>> >>> Drive 1 >>> c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad >>> >>> 17 >>> 2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad >>> >>> >>>
>>>
>>>
>>> >>> >>> >>>> Livnat >>>> >>>> >>> The point is that the TemplatID [1] can have multiple child disks >>> and each ch >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From shaharh at redhat.com Wed Dec 7 07:40:50 2011 From: shaharh at redhat.com (Shahar Havivi) Date: Wed, 7 Dec 2011 09:40:50 +0200 Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <4EDE6999.3080708@redhat.com> References: <616503c4-fb89-4dbb-b8c7-3cc88394cbb0@zmail07.collab.prod.int.phx2.redhat.com> <4EDE6999.3080708@redhat.com> Message-ID: <20111207074049.GB2586@redhat.com> On 06.12.11 14:14, Keith Robertson wrote: > Shahar and Andrew, > > There is some ambiguity between the OVF XML schema and the document > (i.e. RHEVM-OVF-1.odt). The XSD defines an XML element called > Content. This content element roughly correlates to a VM. It has > such things as "Name", "TemplateID", and can have "Item" element(s) > that describe such things as CPUs, Memory, and Disks. > > Currently, my tool will allow the user to change the "TemplateID" in > the "Content" element. I can easily modify the tool to also change > the UUID of each disk that it finds within the "Content" element > (i.e. diskID); however, oVirt places an additional stipulation on > Disk elements. It states that disks should be laid out like " > [Image Group Id]/[Image Id]". > > What should the tool do if it opens an archive and discovers that > disks to not have "Image Group ID"s? Will oVirt be able to > appropriately load them if the tool simply copies the image into a > path resembling [1] or is it requiring a path like [2]? > > [1] /images/ > [2] /images// > > Cheers, > Keith Hi Keith, ovrit must have image gropu id and the format must be: [Image Group Id]/[Image Id] if you don't have the image group id you can generate a guid and set it for the VM image group id. > > > > > On 12/06/2011 10:24 AM, Andrew Cathrow wrote: > >Can we post that on the upstream wiki? > > > >----- Original Message ----- > >>From: "Shahar Havivi" > >>To: "Keith Robertson" > >>Cc: engine-devel at ovirt.org > >>Sent: Tuesday, December 6, 2011 9:48:47 AM > >>Subject: Re: [Engine-devel] New tool to upload OVF archives > >> > >>Hi, > >>the ovf documentation can be found in this link > >>http://cleo.tlv.redhat.com/qumrawiki/CategoryRhevmBackend/OvfFileFormat > >> > >> > >>On 06.12.11 08:41, Keith Robertson wrote: > >>>On 12/06/2011 03:05 AM, Livnat Peer wrote: > >>>> > >>>>>>Is it covered by the tool? > >>>>>Livnat, > >>>>>Currently, the tool will only update the TemplateID and the 2 > >>>>>places > >>>>>where it appears. It does not update the "ovf:diskID" attribute > >>>>>or the > >>>>>5 places where the "diskID" UUID can appear. > >>>>> > >>>>>The OVF XML schema is a bit vague on these issues so I had to > >>>>>make some > >>>>>assumptions. Please correct my assumptions... > >>>>I think Shahar can help with this. > >>>>Shahar - can you publish the ovf docs on the oVirt wiki? > >>>> > >>>> > >>>>>1. The UUID in the TemplateID element appears to be different > >>>>>than the > >>>>>UUID for the "ovf:diskId" attribute. I am assuming that this is > >>>>>a > >>>>>requirement. > >>>>yes > >>>> > >>>>>2. 
The UUID for the TemplateID uniquely identifies the "image" to > >>>>>be > >>>>>imported *not* the disk ID(s). An image can have multiple disks > >>>>>and > >>>>>each disk would have it's own UUID. > >>>>> > >>>>I am not sure i follow you with this question. > >>>>IIUC the tool supports the ability to change template ID but not > >>>>change > >>>>it's disks ids. > >>>Correct, this is the current behavior. > >>>>Then you have 2 different templates pointing to the same > >>>>disks? > >>>No, I don't think you would have two templates pointing to the same > >>>disk(s) unless the tool that generated the OVF archive and it's > >>>associated XML did it incorrectly. > >>> > >>>To summarize, the OVF XML schema appears to create a parent-child > >>>relationship between the TemplateID and the disks. See the > >>>abbreviated XML below from an actual OVF XML file below and notice > >>>the following points: > >>> > >>>1. The "Content" element appears to be the "parent" element. It > >>>has > >>>a "Name" and a "TemplateID" which I think uniquely identify the > >>>"image" or "template". > >>>2. The "Content" element has "Item"(s) as indirect children. These > >>>"Item"s can be disks and are uniquely identified by their own UUID > >>>(i.e. InstanceID). > >>>3. The "InstanceID" of a disk can be resolved to a physical disk by > >>>looking at the "id" and "href" combination in the "References" > >>>element. > >>> > >>>In short, the relationship appears to be... > >>> > >>>TemplateID (UUID-1) > >>>|- Disk (UUID-2) > >>>|- Disk (UUID-3) > >>> > >>>Do you agree? > >>> > >>> > >>>//---------- Begin XML > >>> > >>> > >>> > >>> >>>ovf:href="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > >>> ovf:id="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > >>>ovf:size="8589934592"/> > >>> > >>>
> >>>List of Virtual Disks > >>> >>>ovf:size="8" > >>> ovf:actual_size="1" > >>>ovf:vm_snapshot_id="c1398a3c-ca59-460a-ac0c-0ea91d7218be" > >>>ovf:fileRef="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > >>>ovf:format="http://www.gnome.org/~markmc/qcow-image-format.html" > >>> ovf:volume-format="COW" ovf:volume-type="Sparse" > >>>ovf:disk-interface="VirtIO" > >>> ovf:disk-type="System" ovf:boot="true" > >>>ovf:wipe-after-delete="false" /> > >>>
> >>> > >>>tmcowrhel6 > >>>5272b689-cd9f-4532-9b5d-2413eb7b9402 > >>>
>>> ovf:required="false" > >>>xsi:type="ovf:OperatingSystemSection_Type"> > >>>Guest Operating System > >>>RHEL6x64 > >>>
> >>>
> >>>1 CPU, 1024 Memeory > >>> > >>>RHEVM 3.0.0.0 > >>> > >>> > >>>Drive 1 > >>>c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > >>> > >>>17 > >>>2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > >>> > >>> > >>>
> >>>
> >>>
> >>> > >>> > >>> > >>>>Livnat > >>>> > >>>> > >>>The point is that the TemplatID [1] can have multiple child disks > >>>and each ch > >>> > >>>_______________________________________________ > >>>Engine-devel mailing list > >>>Engine-devel at ovirt.org > >>>http://lists.ovirt.org/mailman/listinfo/engine-devel > >>_______________________________________________ > >>Engine-devel mailing list > >>Engine-devel at ovirt.org > >>http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > From ofrenkel at redhat.com Wed Dec 7 08:01:40 2011 From: ofrenkel at redhat.com (Omer Frenkel) Date: Wed, 07 Dec 2011 03:01:40 -0500 (EST) Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <20111207074049.GB2586@redhat.com> Message-ID: ----- Original Message ----- > From: "Shahar Havivi" > To: "Keith Robertson" > Cc: engine-devel at ovirt.org > Sent: Wednesday, December 7, 2011 9:40:50 AM > Subject: Re: [Engine-devel] New tool to upload OVF archives > > On 06.12.11 14:14, Keith Robertson wrote: > > Shahar and Andrew, > > > > There is some ambiguity between the OVF XML schema and the document > > (i.e. RHEVM-OVF-1.odt). The XSD defines an XML element called > > Content. This content element roughly correlates to a VM. It has > > such things as "Name", "TemplateID", and can have "Item" element(s) > > that describe such things as CPUs, Memory, and Disks. > > > > Currently, my tool will allow the user to change the "TemplateID" > > in > > the "Content" element. I can easily modify the tool to also change > > the UUID of each disk that it finds within the "Content" element > > (i.e. diskID); however, oVirt places an additional stipulation on > > Disk elements. It states that disks should be laid out like " > > [Image Group Id]/[Image Id]". > > > > What should the tool do if it opens an archive and discovers that > > disks to not have "Image Group ID"s? Will oVirt be able to > > appropriately load them if the tool simply copies the image into a > > path resembling [1] or is it requiring a path like [2]? > > > > [1] /images/ > > [2] /images// > here> > > > > Cheers, > > Keith > Hi Keith, > ovrit must have image gropu id and the format must be: [Image Group > Id]/[Image Id] > if you don't have the image group id you can generate a guid and set > it for > the VM image group id. first some clarifications so we could talk in the same language: image in vdsm called image-group in rhevm (and stands for a whole disk) it can have one or more volume(s) which is called image in rhevm (and stand for snapshots of the disk) so now, every volume file has a .meta file, and the image id is written there (under IMAGE=) so without it i don't think its a valid image entirely, if for some reason the image id is missing, it should be taken from the volume meta file. > > > > > > > > > > On 12/06/2011 10:24 AM, Andrew Cathrow wrote: > > >Can we post that on the upstream wiki? > > > > > >----- Original Message ----- > > >>From: "Shahar Havivi" > > >>To: "Keith Robertson" > > >>Cc: engine-devel at ovirt.org > > >>Sent: Tuesday, December 6, 2011 9:48:47 AM > > >>Subject: Re: [Engine-devel] New tool to upload OVF archives > > >> > > >>Hi, > > >>the ovf documentation can be found in this link > > >>http://cleo.tlv.redhat.com/qumrawiki/CategoryRhevmBackend/OvfFileFormat > > >> > > >> > > >>On 06.12.11 08:41, Keith Robertson wrote: > > >>>On 12/06/2011 03:05 AM, Livnat Peer wrote: > > >>>> > > >>>>>>Is it covered by the tool? 
> > >>>>>Livnat, > > >>>>>Currently, the tool will only update the TemplateID and the 2 > > >>>>>places > > >>>>>where it appears. It does not update the "ovf:diskID" > > >>>>>attribute > > >>>>>or the > > >>>>>5 places where the "diskID" UUID can appear. > > >>>>> > > >>>>>The OVF XML schema is a bit vague on these issues so I had to > > >>>>>make some > > >>>>>assumptions. Please correct my assumptions... > > >>>>I think Shahar can help with this. > > >>>>Shahar - can you publish the ovf docs on the oVirt wiki? > > >>>> > > >>>> > > >>>>>1. The UUID in the TemplateID element appears to be different > > >>>>>than the > > >>>>>UUID for the "ovf:diskId" attribute. I am assuming that this > > >>>>>is > > >>>>>a > > >>>>>requirement. > > >>>>yes > > >>>> > > >>>>>2. The UUID for the TemplateID uniquely identifies the "image" > > >>>>>to > > >>>>>be > > >>>>>imported *not* the disk ID(s). An image can have multiple > > >>>>>disks > > >>>>>and > > >>>>>each disk would have it's own UUID. > > >>>>> > > >>>>I am not sure i follow you with this question. > > >>>>IIUC the tool supports the ability to change template ID but > > >>>>not > > >>>>change > > >>>>it's disks ids. > > >>>Correct, this is the current behavior. > > >>>>Then you have 2 different templates pointing to the same > > >>>>disks? > > >>>No, I don't think you would have two templates pointing to the > > >>>same > > >>>disk(s) unless the tool that generated the OVF archive and it's > > >>>associated XML did it incorrectly. > > >>> > > >>>To summarize, the OVF XML schema appears to create a > > >>>parent-child > > >>>relationship between the TemplateID and the disks. See the > > >>>abbreviated XML below from an actual OVF XML file below and > > >>>notice > > >>>the following points: > > >>> > > >>>1. The "Content" element appears to be the "parent" element. It > > >>>has > > >>>a "Name" and a "TemplateID" which I think uniquely identify the > > >>>"image" or "template". > > >>>2. The "Content" element has "Item"(s) as indirect children. > > >>> These > > >>>"Item"s can be disks and are uniquely identified by their own > > >>>UUID > > >>>(i.e. InstanceID). > > >>>3. The "InstanceID" of a disk can be resolved to a physical disk > > >>>by > > >>>looking at the "id" and "href" combination in the "References" > > >>>element. > > >>> > > >>>In short, the relationship appears to be... > > >>> > > >>>TemplateID (UUID-1) > > >>>|- Disk (UUID-2) > > >>>|- Disk (UUID-3) > > >>> > > >>>Do you agree? > > >>> > > >>> > > >>>//---------- Begin XML > > >>> > > >>> > > >>> > > >>> > >>>ovf:href="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > >>> ovf:id="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > >>>ovf:size="8589934592"/> > > >>> > > >>>
> > >>>List of Virtual Disks > > >>> > >>>ovf:size="8" > > >>> ovf:actual_size="1" > > >>>ovf:vm_snapshot_id="c1398a3c-ca59-460a-ac0c-0ea91d7218be" > > >>>ovf:fileRef="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > >>>ovf:format="http://www.gnome.org/~markmc/qcow-image-format.html" > > >>> ovf:volume-format="COW" ovf:volume-type="Sparse" > > >>>ovf:disk-interface="VirtIO" > > >>> ovf:disk-type="System" ovf:boot="true" > > >>>ovf:wipe-after-delete="false" /> > > >>>
> > >>> > > >>>tmcowrhel6 > > >>>5272b689-cd9f-4532-9b5d-2413eb7b9402 > > >>>
> >>> ovf:required="false" > > >>>xsi:type="ovf:OperatingSystemSection_Type"> > > >>>Guest Operating System > > >>>RHEL6x64 > > >>>
> > >>>
> > >>>1 CPU, 1024 Memeory > > >>> > > >>>RHEVM 3.0.0.0 > > >>> > > >>> > > >>>Drive 1 > > >>>c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > >>> > > >>>17 > > >>>2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > >>> > > >>> > > >>>
> > >>>
> > >>>
> > >>> > > >>> > > >>> > > >>>>Livnat > > >>>> > > >>>> > > >>>The point is that the TemplatID [1] can have multiple child > > >>>disks > > >>>and each ch > > >>> > > >>>_______________________________________________ > > >>>Engine-devel mailing list > > >>>Engine-devel at ovirt.org > > >>>http://lists.ovirt.org/mailman/listinfo/engine-devel > > >>_______________________________________________ > > >>Engine-devel mailing list > > >>Engine-devel at ovirt.org > > >>http://lists.ovirt.org/mailman/listinfo/engine-devel > > >> > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From rgolan at redhat.com Wed Dec 7 08:54:29 2011 From: rgolan at redhat.com (Roy Golan) Date: Wed, 07 Dec 2011 10:54:29 +0200 Subject: [Engine-devel] Agenda for bi-weekly oVirt engine core meeting (Wed Dec. 7th) In-Reply-To: <2b447fba-bbed-4643-b62d-fa842e3a1423@zmail14.collab.prod.int.phx2.redhat.com> References: <03d44f83-c95b-4a6d-a1a4-f8a9b6852bdc@zmail14.collab.prod.int.phx2.redhat.com> <2b447fba-bbed-4643-b62d-fa842e3a1423@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4EDF29C5.6010005@redhat.com> On Mon 05 Dec 2011 12:16:16 PM IST, Mike Kolesnik wrote: > These are the topics that we are planning to discuss in the meeting: > > * Open discussion on stable addressing support in the engine. > * Michael Kublin - Introducing synchronization/locking mechanism. > * Mike Kolesnik - Introducing upcoming changes due to snapshots fixes. Roy Golan - intro to upcoming network changes (time permitting) > > If anyone would like to discuss other topics, feel free to reply and > add them to the list. > > Regards, > Mike > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From shaharh at redhat.com Wed Dec 7 15:04:57 2011 From: shaharh at redhat.com (Shahar Havivi) Date: Wed, 7 Dec 2011 17:04:57 +0200 Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <20111207074049.GB2586@redhat.com> References: <616503c4-fb89-4dbb-b8c7-3cc88394cbb0@zmail07.collab.prod.int.phx2.redhat.com> <4EDE6999.3080708@redhat.com> <20111207074049.GB2586@redhat.com> Message-ID: <20111207150456.GC6252@redhat.com> The ovirt link to the ovf document: http://www.ovirt.org/wiki/Ovf On 07.12.11 09:40, Shahar Havivi wrote: > On 06.12.11 14:14, Keith Robertson wrote: > > Shahar and Andrew, > > > > There is some ambiguity between the OVF XML schema and the document > > (i.e. RHEVM-OVF-1.odt). The XSD defines an XML element called > > Content. This content element roughly correlates to a VM. It has > > such things as "Name", "TemplateID", and can have "Item" element(s) > > that describe such things as CPUs, Memory, and Disks. > > > > Currently, my tool will allow the user to change the "TemplateID" in > > the "Content" element. I can easily modify the tool to also change > > the UUID of each disk that it finds within the "Content" element > > (i.e. diskID); however, oVirt places an additional stipulation on > > Disk elements. It states that disks should be laid out like " > > [Image Group Id]/[Image Id]". > > > > What should the tool do if it opens an archive and discovers that > > disks to not have "Image Group ID"s? Will oVirt be able to > > appropriately load them if the tool simply copies the image into a > > path resembling [1] or is it requiring a path like [2]? 
> > > > [1] /images/ > > [2] /images// > > > > Cheers, > > Keith > Hi Keith, > ovrit must have image gropu id and the format must be: [Image Group Id]/[Image Id] > if you don't have the image group id you can generate a guid and set it for > the VM image group id. > > > > > > > > > > On 12/06/2011 10:24 AM, Andrew Cathrow wrote: > > >Can we post that on the upstream wiki? > > > > > >----- Original Message ----- > > >>From: "Shahar Havivi" > > >>To: "Keith Robertson" > > >>Cc: engine-devel at ovirt.org > > >>Sent: Tuesday, December 6, 2011 9:48:47 AM > > >>Subject: Re: [Engine-devel] New tool to upload OVF archives > > >> > > >>Hi, > > >>the ovf documentation can be found in this link > > >>http://cleo.tlv.redhat.com/qumrawiki/CategoryRhevmBackend/OvfFileFormat > > >> > > >> > > >>On 06.12.11 08:41, Keith Robertson wrote: > > >>>On 12/06/2011 03:05 AM, Livnat Peer wrote: > > >>>> > > >>>>>>Is it covered by the tool? > > >>>>>Livnat, > > >>>>>Currently, the tool will only update the TemplateID and the 2 > > >>>>>places > > >>>>>where it appears. It does not update the "ovf:diskID" attribute > > >>>>>or the > > >>>>>5 places where the "diskID" UUID can appear. > > >>>>> > > >>>>>The OVF XML schema is a bit vague on these issues so I had to > > >>>>>make some > > >>>>>assumptions. Please correct my assumptions... > > >>>>I think Shahar can help with this. > > >>>>Shahar - can you publish the ovf docs on the oVirt wiki? > > >>>> > > >>>> > > >>>>>1. The UUID in the TemplateID element appears to be different > > >>>>>than the > > >>>>>UUID for the "ovf:diskId" attribute. I am assuming that this is > > >>>>>a > > >>>>>requirement. > > >>>>yes > > >>>> > > >>>>>2. The UUID for the TemplateID uniquely identifies the "image" to > > >>>>>be > > >>>>>imported *not* the disk ID(s). An image can have multiple disks > > >>>>>and > > >>>>>each disk would have it's own UUID. > > >>>>> > > >>>>I am not sure i follow you with this question. > > >>>>IIUC the tool supports the ability to change template ID but not > > >>>>change > > >>>>it's disks ids. > > >>>Correct, this is the current behavior. > > >>>>Then you have 2 different templates pointing to the same > > >>>>disks? > > >>>No, I don't think you would have two templates pointing to the same > > >>>disk(s) unless the tool that generated the OVF archive and it's > > >>>associated XML did it incorrectly. > > >>> > > >>>To summarize, the OVF XML schema appears to create a parent-child > > >>>relationship between the TemplateID and the disks. See the > > >>>abbreviated XML below from an actual OVF XML file below and notice > > >>>the following points: > > >>> > > >>>1. The "Content" element appears to be the "parent" element. It > > >>>has > > >>>a "Name" and a "TemplateID" which I think uniquely identify the > > >>>"image" or "template". > > >>>2. The "Content" element has "Item"(s) as indirect children. These > > >>>"Item"s can be disks and are uniquely identified by their own UUID > > >>>(i.e. InstanceID). > > >>>3. The "InstanceID" of a disk can be resolved to a physical disk by > > >>>looking at the "id" and "href" combination in the "References" > > >>>element. > > >>> > > >>>In short, the relationship appears to be... > > >>> > > >>>TemplateID (UUID-1) > > >>>|- Disk (UUID-2) > > >>>|- Disk (UUID-3) > > >>> > > >>>Do you agree? 
> > >>> > > >>> > > >>>//---------- Begin XML > > >>> > > >>> > > >>> > > >>> > >>>ovf:href="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > >>> ovf:id="c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > >>>ovf:size="8589934592"/> > > >>> > > >>>
> > >>>List of Virtual Disks > > >>> > >>>ovf:size="8" > > >>> ovf:actual_size="1" > > >>>ovf:vm_snapshot_id="c1398a3c-ca59-460a-ac0c-0ea91d7218be" > > >>>ovf:fileRef="2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad" > > >>>ovf:format="http://www.gnome.org/~markmc/qcow-image-format.html" > > >>> ovf:volume-format="COW" ovf:volume-type="Sparse" > > >>>ovf:disk-interface="VirtIO" > > >>> ovf:disk-type="System" ovf:boot="true" > > >>>ovf:wipe-after-delete="false" /> > > >>>
> > >>> > > >>>tmcowrhel6 > > >>>5272b689-cd9f-4532-9b5d-2413eb7b9402 > > >>>
> >>> ovf:required="false" > > >>>xsi:type="ovf:OperatingSystemSection_Type"> > > >>>Guest Operating System > > >>>RHEL6x64 > > >>>
> > >>>
> > >>>1 CPU, 1024 Memeory > > >>> > > >>>RHEVM 3.0.0.0 > > >>> > > >>> > > >>>Drive 1 > > >>>c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > >>> > > >>>17 > > >>>2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f/c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad > > >>> > > >>> > > >>>
> > >>>
> > >>>
> > >>> > > >>> > > >>> > > >>>>Livnat > > >>>> > > >>>> > > >>>The point is that the TemplatID [1] can have multiple child disks > > >>>and each ch > > >>> > > >>>_______________________________________________ > > >>>Engine-devel mailing list > > >>>Engine-devel at ovirt.org > > >>>http://lists.ovirt.org/mailman/listinfo/engine-devel > > >>_______________________________________________ > > >>Engine-devel mailing list > > >>Engine-devel at ovirt.org > > >>http://lists.ovirt.org/mailman/listinfo/engine-devel > > >> > > From kroberts at redhat.com Wed Dec 7 19:35:06 2011 From: kroberts at redhat.com (Keith Robertson) Date: Wed, 07 Dec 2011 14:35:06 -0500 Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: References: Message-ID: <4EDFBFEA.3050308@redhat.com> On 12/07/2011 03:01 AM, Omer Frenkel wrote: > first some clarifications so we could talk in the same language: Agreed. It's actually pretty hard to describe all of this. > image in vdsm called image-group in rhevm (and stands for a whole disk) > it can have one or more volume(s) which is called image in rhevm (and stand for snapshots of the disk) > > so now, every volume file has a .meta file, and the image id is written there (under IMAGE=) I have noticed that when I tell oVirt to import an image that it will import it even if there isn't a .meta file. As such, I would like to know if a .meta file required or will oVirt automatically generate one? > so without it i don't think its a valid image entirely, > if for some reason the image id is missing, it should be taken from the volume meta file. > > Here is a use case that illustrates some of the issues that I am facing: 1. Assume the following OVF archive. | |- 5272b689-cd9f-4532-9b5d-2413eb7b9402.ovf <-- The OVF XML |- c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad <-- The COW image 2. In the exmple above, there is no directory hiearchy defined in the archive (i.e. all files are in the root). 3. Also, in the example above the file "5272b689-cd9f-4532-9b5d-2413eb7b9402.ovf" appropriately refers to the image in it's XML without an "Image Group UUID" in the prefix. I can have the tool massage the XML and the directory layout so that it will upload to the export domain such that it looks like this... /virt/exports/ |----- images | |----- 2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f <-- Generated UUID | |----- 5272b689-cd9f-4532-9b5d-2413eb7b9402 |---- master |---- vms |---- 5272b689-cd9f-4532-9b5d-2413eb7b9402 |---- 5272b689-cd9f-4532-9b5d-2413eb7b9402.ovf Is this desireable? From acathrow at redhat.com Wed Dec 7 20:01:31 2011 From: acathrow at redhat.com (Andrew Cathrow) Date: Wed, 07 Dec 2011 15:01:31 -0500 (EST) Subject: [Engine-devel] New tool to upload OVF archives In-Reply-To: <4EDFBFEA.3050308@redhat.com> Message-ID: <3cd83304-991a-4110-84ae-3b1eefbe8b5e@zmail07.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Keith Robertson" > To: "Omer Frenkel" , "Andrew Cathrow" > Cc: "Shahar Havivi" , engine-devel at ovirt.org > Sent: Wednesday, December 7, 2011 2:35:06 PM > Subject: Re: [Engine-devel] New tool to upload OVF archives > > On 12/07/2011 03:01 AM, Omer Frenkel wrote: > > > first some clarifications so we could talk in the same language: > Agreed. It's actually pretty hard to describe all of this. 
> [...]
>
> Here is a use case that illustrates some of the issues that I am
> facing:
> 1. Assume the following OVF archive.
> |
> |- 5272b689-cd9f-4532-9b5d-2413eb7b9402.ovf <-- The OVF XML
> |- c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad <-- The COW image
>
> 2. In the example above, there is no directory hierarchy defined in
> the archive (i.e. all files are in the root).
> 3. Also, in the example above the file
> "5272b689-cd9f-4532-9b5d-2413eb7b9402.ovf" appropriately refers to
> the image in its XML without an "Image Group UUID" in the prefix.
>
> I can have the tool massage the XML and the directory layout so that
> it will upload to the export domain such that it looks like this...
> /virt/exports/
> |----- images
> |        |----- 2b30e705-c1d6-4bd8-a6cd-a1fe8a70614f <-- Generated UUID
> |                 |----- 5272b689-cd9f-4532-9b5d-2413eb7b9402
> |----- master
>          |---- vms
>                   |---- 5272b689-cd9f-4532-9b5d-2413eb7b9402
>                            |---- 5272b689-cd9f-4532-9b5d-2413eb7b9402.ovf
>
> Is this desirable?

So is it valid to have an ovf and image in the root of the zip, or do we
require a subdirectory?

From kroberts at redhat.com Wed Dec 7 20:13:32 2011
From: kroberts at redhat.com (Keith Robertson)
Date: Wed, 07 Dec 2011 15:13:32 -0500
Subject: [Engine-devel] New tool to upload OVF archives
In-Reply-To: <3cd83304-991a-4110-84ae-3b1eefbe8b5e@zmail07.collab.prod.int.phx2.redhat.com>
References: <3cd83304-991a-4110-84ae-3b1eefbe8b5e@zmail07.collab.prod.int.phx2.redhat.com>
Message-ID: <4EDFC8EC.1020500@redhat.com>

> ----- Original Message -----
>> From: "Keith Robertson"
>> To: "Omer Frenkel", "Andrew Cathrow"
>> Cc: "Shahar Havivi", engine-devel at ovirt.org
>> Sent: Wednesday, December 7, 2011 2:35:06 PM
>> Subject: Re: [Engine-devel] New tool to upload OVF archives
>>
>> [...]
>>
>> Is this desirable?
> So is it valid to have an ovf and image in the root of the zip or do
> we require a subdirectory?
Yes, it is valid in the XML. The question is whether oVirt supports it.

Here are 2 odd-ball scenarios:
1. The image is deeply nested in the archive.
dir1/dir2/dir3/.../dirN/cow-image-here
Inside the XML you would have... [the XML example, the remainder of this
message, and the headers of the following message were lost in the
archive; the next message is from David Jorm of the Red Hat Security
Response Team, per its signature]

Message-ID: <544e4be1-fe31-43ae-98c7-4e1e2abe9602@zmail15.collab.prod.int.phx2.redhat.com>

Hi All

I have followed the instructions on the wiki:
http://ovirt.org/wiki/Installing_ovirt-engine_from_rpm

And successfully installed oVirt engine. The instructions worked
perfectly. I noticed that JBoss AS 5 came bundled in the
ovirt-engine-jbossas package. I understand the reasoning for going out
with AS 5 for now. However, the AS 5 default security configuration has
not been changed. Once you install oVirt engine using the instructions
above, the JMX Console will be running with no authentication. Worms
exploiting this weakness are known to be circulating; people are likely
to get compromised.

For now, I have added instructions on securing the JMX Console to the
aforementioned wiki page. In the long term, I think we should either
disable or completely remove the JMX Console from JBoss AS as it is
distributed with oVirt engine.

Thanks
--
David Jorm / Red Hat Security Response Team

From iheim at redhat.com Thu Dec 8 04:48:53 2011
From: iheim at redhat.com (Itamar Heim)
Date: Thu, 08 Dec 2011 06:48:53 +0200
Subject: [Engine-devel] API design and plan
In-Reply-To: <20111205173418.GB2706@us.ibm.com>
References: <20111205173418.GB2706@us.ibm.com>
Message-ID: <4EE041B5.8060606@redhat.com>

On 12/05/2011 07:34 PM, Adam Litke wrote:
> Hi everyone. On today's VDSM call we discussed the requirements, design, and
> plan for updating the API to include support for QMF and single-host REST API.
> All members present arrived at a general consensus on the best way to design the
> next-generation API. I have tried to capture this discussion in the oVirt wiki:
>
> http://ovirt.org/wiki/Vdsm_API
>
> Please take a look at this page and let's discuss any changes that may be needed
> in order to adopt it as a working plan that we can begin to execute. Thanks!
>
as you are going to plan an api...
This piece by Geert Jansen summarizes lessons learned from the RHEV-M
(ovirt) REST API project:
https://fedorahosted.org/pipermail/rhevm-api/2011-August/002714.html

From abaron at redhat.com Thu Dec 8 09:56:17 2011
From: abaron at redhat.com (Ayal Baron)
Date: Thu, 08 Dec 2011 04:56:17 -0500 (EST)
Subject: [Engine-devel] API design and plan
In-Reply-To: <20111206160747.GQ22779@redhat.com>
Message-ID:

----- Original Message -----
> On Tue, Dec 06, 2011 at 08:46:57AM -0600, Adam Litke wrote:
> > On Tue, Dec 06, 2011 at 02:58:59PM +0200, Dan Kenigsberg wrote:
> > > On Mon, Dec 05, 2011 at 11:34:18AM -0600, Adam Litke wrote:
> > > > Hi everyone. On today's VDSM call we discussed the
> > > > requirements, design, and plan for updating the API to include
> > > > support for QMF and single-host REST API. All members present
> > > > arrived at a general consensus on the best way to design the
> > > > next-generation API. I have tried to capture this discussion
> > > > in the oVirt wiki:
> > > >
> > > > http://ovirt.org/wiki/Vdsm_API
> > > >
> > > > Please take a look at this page and let's discuss any changes
> > > > that may be needed in order to adopt it as a working plan that
> > > > we can begin to execute. Thanks!
> > >
> > > Very nice, I've fixed two bullets about the future of the
> > > xml-rpc.
> >
> > Thanks... Updates look good to me.
> >
> > > I think that we are missing something here: how do we model
> > > Vdsm-to-Vdsm communication, in a binding-blind way? I'm less
> > > worried about the storage-based mailbox used for lvextend
> > > requests: my problem is with the migration command.
> >
> > Ok, interesting... Besides migration, are there other features
> > (current or planned) that would involve P2P communication? I want
> > to ensure we consider the full problem space.
>
> Well, I can imagine we would like a host in distress to migrate VMs
> to whomever can take them, without central management driving this
> process.
> (CAVE split brain)
>
> At the moment I cannot think of something that cannot be implemented
> by QMF events. Ayal?
>
> > > Currently, the implementation of the "migrate" verb includes
> > > contacting the remote Vdsm over xml-rpc before issuing the
> > > libvirt migrateToURI2 command ('migrationCreate' verb).
> > >
> > > A Vdsm user who chooses to use the REST binding is likely to want
> > > this to be implemented using a REST request to the destination.
> > > This means that the implementation of Vdsm depends on the chosen
> > > binding.
> > >
> > > The issue can be mitigated by requiring the binding level to
> > > provide a "callback" for migrationCreate (and any other future
> > > Vdsm->world requests). This would complicate the beautiful png at
> > > http://ovirt.org/wiki/Vdsm_API#Design ... Does anyone have
> > > another suggestion?
> >
> > Actually, I think you are blending the external API with vdsm
> > internals. As a management server or ovirt-engine, I don't care
> > about the protocol that vdsm uses to contact the migration
> > recipient. As far as I am concerned this is a special case
> > internal function call. For that purpose, I think xmlrpc is
> > perfectly well-suited to the task and should be used
> > unconditionally, regardless of the bindings used to initiate the
> > migration.
> > So I would propose that we modify the design such that we keep an
> > extremely thin xmlrpc server active whose sole purpose is to
> > service internal P2P requests.
>
> Interesting. We could avoid even that, if we could register a
> callback with libvirt, so that destination libvirtd called
> destination Vdsm to verify that all storage and networking resources
> are ready, before executing qemu. DanPB, can something like that be
> done? (I guess it is not realistic since we may need to pass
> vdsm-specific data from source to dest, and libvirt is not supposed
> to be a general purpose transport.)
>
> Dan.

I don't understand the issue. The whole point of the REST API is to be
an easily consumable *single* node management API. Once you start
coordinating among different nodes you need clustering and management
(either distributed or centralized); in both cases it is fine to
require a bus, in which case you have your method of communication
between hosts to replace the current xml-rpc.

Requiring an additional xml-rpc server sounds wrong to me.

> _______________________________________________
> vdsm-devel mailing list
> vdsm-devel at lists.fedorahosted.org
> https://fedorahosted.org/mailman/listinfo/vdsm-devel
>

From agl at us.ibm.com Thu Dec 8 15:30:04 2011
From: agl at us.ibm.com (Adam Litke)
Date: Thu, 8 Dec 2011 09:30:04 -0600
Subject: [Engine-devel] API design and plan
In-Reply-To: References: <20111206160747.GQ22779@redhat.com>
Message-ID: <20111208153004.GA29101@us.ibm.com>

On Thu, Dec 08, 2011 at 04:56:17AM -0500, Ayal Baron wrote:
> [...]
> I don't understand the issue. The whole point of the REST API is to
> be an easily consumable *single* node management API. Once you start
> coordinating among different nodes you need clustering and management
> (either distributed or centralized); in both cases it is fine to
> require a bus, in which case you have your method of communication
> between hosts to replace the current xml-rpc.

Implicit in this statement is an assertion that live migration between
two vdsm instances will not be supported without orchestration from an
ovirt-engine instance. I don't agree with placing such a limitation on
vdsm since p2p migration is already well-supported by the underlying
components (libvirt and qemu).

> Requiring an additional xml-rpc server sounds wrong to me.

The other option is to support a migrateCreate binding in REST and QMF.

--
Adam Litke
IBM Linux Technology Center

From agl at us.ibm.com Thu Dec 8 15:49:06 2011
From: agl at us.ibm.com (Adam Litke)
Date: Thu, 8 Dec 2011 09:49:06 -0600
Subject: [Engine-devel] API design and plan
In-Reply-To: <4EE041B5.8060606@redhat.com>
References: <20111205173418.GB2706@us.ibm.com> <4EE041B5.8060606@redhat.com>
Message-ID: <20111208154906.GB29101@us.ibm.com>

On Thu, Dec 08, 2011 at 06:48:53AM +0200, Itamar Heim wrote:
> On 12/05/2011 07:34 PM, Adam Litke wrote:
> >Hi everyone. On today's VDSM call we discussed the requirements, design, and
> >plan for updating the API to include support for QMF and single-host REST API.
> >All members present arrived at a general consensus on the best way to design the
> >next-generation API. I have tried to capture this discussion in the oVirt wiki:
> >
> > http://ovirt.org/wiki/Vdsm_API
> >
> >Please take a look at this page and let's discuss any changes that may be needed
> >in order to adopt it as a working plan that we can begin to execute. Thanks!
> >
> as you are going to plan an api...
> This piece by Geert Jansen summarizes lessons learned from the
> RHEV-M (ovirt) REST API project
> https://fedorahosted.org/pipermail/rhevm-api/2011-August/002714.html

Thanks for the link! This is proving to be a very insightful read. I am
finding that I have come to many of these same conclusions in my own way
as I have been designing the API (especially regarding the use of JSON
over XML).

--
Adam Litke
IBM Linux Technology Center

From abaron at redhat.com Thu Dec 8 21:34:16 2011
From: abaron at redhat.com (Ayal Baron)
Date: Thu, 08 Dec 2011 16:34:16 -0500 (EST)
Subject: [Engine-devel] API design and plan
In-Reply-To: <20111208153004.GA29101@us.ibm.com>
Message-ID:

I just noticed, why are we cross-posting to engine-devel?
Anyway, comments inline

----- Original Message -----
> On Thu, Dec 08, 2011 at 04:56:17AM -0500, Ayal Baron wrote:
> > [...]
> Implicit in this statement is an assertion that live migration
> between two vdsm instances will not be supported without
> orchestration from an ovirt-engine instance. I don't agree with
> placing such a limitation on vdsm since p2p migration is already
> well-supported by the underlying components (libvirt and qemu).

Proper migration requires a lot of coordination, e.g. making sure that
both source and target have the same networks (required by the guest)
available, access to shared storage, access to the same vm
configuration, handling split brain scenarios, etc. This has to be
managed somehow... I don't think that ovirt-engine should be a
requirement for this, but something should take care of it. This entity
could just initiate migrateCreate on the destination if not using qmf.

> > Requiring an additional xml-rpc server sounds wrong to me.
>
> The other option is to support a migrateCreate binding in REST and
> QMF.

I see no problem with this; I don't view preparing to accept a
migration as an internal flow, it can be directed by either another
vdsm or a management server or anything else.
Seeing as it is in any event run by an external entity, it should be a
public API.

> --
> Adam Litke
> IBM Linux Technology Center
>

From lhornyak at redhat.com Fri Dec 9 20:43:43 2011
From: lhornyak at redhat.com (Laszlo Hornyak)
Date: Fri, 09 Dec 2011 15:43:43 -0500 (EST)
Subject: [Engine-devel] short research on postgresql enums
In-Reply-To: Message-ID:

Hi,

I looked into Mike's database patch ( http://gerrit.ovirt.org/#change,500 )
today and read Yaniv's comment on it. I have seen other patches related
to enums and how they are stored in the database. I made a quick test to
compare between varchar and enum and the results are here:
http://dummywarhead.blogspot.com/2011/12/postgresql-enums-vs-varchar.html

IMO enums could be a good solution, but changing enums could be a pain
before postgres 9.1. So what if we could use varchar now and migrate to
enum once postgres 9.1 replaces the older installations :)

Laszlo

From smizrahi at redhat.com Fri Dec 9 22:41:42 2011
From: smizrahi at redhat.com (Saggi Mizrahi)
Date: Fri, 09 Dec 2011 17:41:42 -0500
Subject: [Engine-devel] shared fs support
Message-ID: <4EE28EA6.2050906@redhat.com>

Hi, I have preliminary (WIP) patches for shared FS up on gerrit. There
is a lot of work to be done reorganizing the patches but I just wanted
all the TLV guys to have a chance to look at it on Sunday.

I did some testing and it should work as expected for most cases.

To test, just connectStorageServer with storageType=6 (sharedfs);
connection params are
{'id'=1,
 'spec'='server:/export'
 'vfs_type'='nfs\gluster\smb'
 'mnt_options'='opt,opt=3,opt' }

to check with an existing NFS domain you can just use
spec=server:/export
vfs_type=nfs
mnt_options=soft,timeo=600,retrans=6,nosharecache,vers=3

I only tested NFS but I am going to test more exotic stuff on Monday.

This is the patch to build the RPM from.
http://gerrit.ovirt.org/#change,560

Have a good weekend
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emesika at redhat.com Sat Dec 10 19:02:35 2011
From: emesika at redhat.com (Eli Mesika)
Date: Sat, 10 Dec 2011 14:02:35 -0500 (EST)
Subject: [Engine-devel] short research on postgresql enums
In-Reply-To: Message-ID: <45b4e8bc-5215-4db3-bf48-3396fcb7cfa9@zmail13.collab.prod.int.phx2.redhat.com>

----- Original Message -----
> From: "Laszlo Hornyak"
> To: engine-devel at ovirt.org
> Sent: Friday, December 9, 2011 10:43:43 PM
> Subject: [Engine-devel] short research on postgresql enums
>
> [...]
>
+1

> Laszlo
> _______________________________________________
> Engine-devel mailing list
> Engine-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/engine-devel
>

From iheim at redhat.com Sat Dec 10 19:23:06 2011
From: iheim at redhat.com (Itamar Heim)
Date: Sat, 10 Dec 2011 21:23:06 +0200
Subject: [Engine-devel] short research on postgresql enums
In-Reply-To: <45b4e8bc-5215-4db3-bf48-3396fcb7cfa9@zmail13.collab.prod.int.phx2.redhat.com>
References: <45b4e8bc-5215-4db3-bf48-3396fcb7cfa9@zmail13.collab.prod.int.phx2.redhat.com>
Message-ID: <4EE3B19A.3000104@redhat.com>

On 12/10/2011 09:02 PM, Eli Mesika wrote:
>
> ----- Original Message -----
>> [...]
>>
> +1

several considerations:
1. for existing enums - their numbering is part of the "api" (for
example, audit log event id).
2. they are also part of the "api" of the data warehouse.
3. space is a critical difference when you think of the data warehouse
rather than the operational db. that alone is reason enough to have
these as numeric.
4. we need to provide i18n - not sure if this will be easier in numeric
or string form.
5. if 9.1 will support enums, and currently all enums are stored as
numeric - why not keep it consistent until moving to 9.1?

however, worth remembering that the fact a platform component brings a
shiny new feature is not always a reason to try it before it matures a
bit:
a. it may be buggy (though it is fair to try and help it mature as
well).
b. it may not be available in the various distributions yet, making it
harder to consume the code.

From mpastern at redhat.com Sun Dec 11 13:31:45 2011
From: mpastern at redhat.com (Michael Pasternak)
Date: Sun, 11 Dec 2011 15:31:45 +0200
Subject: [Engine-devel] RESTAPI: optional body in DELETE request
Message-ID: <4EE4B0C1.3090408@redhat.com>

Recently we asked RESTeasy to support one of our unique use-cases:

- (optional) body in DELETE request

e.g. [the XML bodies below were stripped by the archive; reconstructed
from the /force/ parameter discussed next]

case 1: DELETE /api/vms/xxx with a body of
<action><force>true|false</force></action>

case 2: DELETE /api/vms/xxx with no body

and now they do allow us passing a body in a DELETE request, which is
permitted by the HTTP spec., but a bit awkward from the user
perspective, therefore I would like to start a discussion on this where
the goal is deprecating this feature and consuming /force/ from url
parameters instead, e.g.:

DELETE /api/vms/xxx
DELETE /api/vms/xxx?force=true

note: we continue supplying the very same uniform interface and all url
parameters will be properly documented in RSDL, U/G.
--
Michael Pasternak
RedHat, ENG-Virtualization R&D

From acathrow at redhat.com Sun Dec 11 15:15:23 2011
From: acathrow at redhat.com (Andrew Cathrow)
Date: Sun, 11 Dec 2011 10:15:23 -0500 (EST)
Subject: [Engine-devel] shared fs support
In-Reply-To: <4EE28EA6.2050906@redhat.com>
Message-ID:

----- Original Message -----
> From: "Saggi Mizrahi"
> To: "VDSM Project Development" , engine-devel at ovirt.org
> Sent: Friday, December 9, 2011 5:41:42 PM
> Subject: [Engine-devel] shared fs support
>
> [...]
>
> To test, just connectStorageServer with storageType=6 (sharedfs);
> connection params are
> {'id'=1,
> 'spec'='server:/export'
> 'vfs_type'='nfs\gluster\smb'
> 'mnt_options'='opt,opt=3,opt' }
>
> to check with an existing NFS domain you can just use
> spec=server:/export
> vfs_type=nfs
> mnt_options=soft,timeo=600,retrans=6,nosharecache,vers=3

So does that mean that we treat nfs custom types differently - e.g.
using the out-of-process stuff?

> I only tested NFS but I am going to test more exotic stuff on Monday.
>
> This is the patch to build the RPM from.
> http://gerrit.ovirt.org/#change,560
>
> Have a good weekend
>
> _______________________________________________
> Engine-devel mailing list
> Engine-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/engine-devel
>

From mlipchuk at redhat.com Sun Dec 11 15:54:55 2011
From: mlipchuk at redhat.com (Maor)
Date: Sun, 11 Dec 2011 17:54:55 +0200
Subject: [Engine-devel] short research on postgresql enums
In-Reply-To: References: Message-ID: <4EE4D24F.9030102@redhat.com>

On 12/09/2011 10:43 PM, Laszlo Hornyak wrote:
> [...]

+1, liked the article.
I wonder how common database enums are among different DB providers, in
case we need to support DB migration in the future.

I also searched a little bit on the subject and found the following
article:
http://www.designmagick.com/article/29/PostgreSQL-Enum-Types
Postgres 8.4.7 supports CHECK constraints, which validate specific
values (not sure how the performance is), so maybe we can use those if
we choose to adopt the enum idea later.
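To make the trade-off concrete, the three storage options discussed in
this thread look roughly like this side by side. This is a minimal
sketch assuming psycopg2 and a scratch database; the table, column, and
status names are illustrative, not the actual engine schema:

# Hedged sketch of the three ways to persist an engine enum that are
# discussed in this thread. All identifiers below are illustrative;
# assumes psycopg2 and a throwaway database named "scratch".
import psycopg2

DDL = """
-- 1. integer ordinal: the current approach; the value/name mapping
--    lives only in the Java enums and DAO code
CREATE TABLE vm_int (id serial PRIMARY KEY, status smallint NOT NULL);

-- 2. varchar + CHECK constraint: portable across RDBMSs, values are
--    validated by the database itself
CREATE TABLE vm_check (
    id serial PRIMARY KEY,
    status varchar(16) NOT NULL CHECK (status IN ('Down', 'Up', 'Paused'))
);

-- 3. native enum type: compact storage; note that before PostgreSQL
--    9.1 (ALTER TYPE ... ADD VALUE) extending it means creating a new
--    type and rewriting the column
CREATE TYPE vm_status AS ENUM ('Down', 'Up', 'Paused');
CREATE TABLE vm_enum (id serial PRIMARY KEY, status vm_status NOT NULL);
"""

conn = psycopg2.connect("dbname=scratch")
cur = conn.cursor()
cur.execute(DDL)
conn.commit()
conn.close()

Option 2 is the one Maor's CHECK suggestion maps to: it keeps the
column a plain varchar (so hibernate and other RDBMSs see nothing
special) while still rejecting invalid values at the database level.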
From iheim at redhat.com Sun Dec 11 15:57:57 2011
From: iheim at redhat.com (Itamar Heim)
Date: Sun, 11 Dec 2011 17:57:57 +0200
Subject: [Engine-devel] short research on postgresql enums
In-Reply-To: <4EE4D24F.9030102@redhat.com>
References: <4EE4D24F.9030102@redhat.com>
Message-ID: <4EE4D305.8040304@redhat.com>

On 12/11/2011 05:54 PM, Maor wrote:
> On 12/09/2011 10:43 PM, Laszlo Hornyak wrote:
>> [...]
> +1, liked the article.
> I wonder how common database enums are among different DB providers,
> in case we need to support DB migration in the future.

that's a very good point, considering we are still on the path of
moving to hibernate.

> I also searched a little bit on the subject and found the following
> article:
> http://www.designmagick.com/article/29/PostgreSQL-Enum-Types
> Postgres 8.4.7 supports CHECK constraints, which validate specific
> values (not sure how the performance is), so maybe we can use those
> if we choose to adopt the enum idea later.

From lhornyak at redhat.com Mon Dec 12 08:43:39 2011
From: lhornyak at redhat.com (Laszlo Hornyak)
Date: Mon, 12 Dec 2011 03:43:39 -0500 (EST)
Subject: [Engine-devel] short research on postgresql enums
In-Reply-To: <4EE4D305.8040304@redhat.com>
Message-ID: <3bcd5490-2d40-45d8-a020-4f3792ba4d4d@zmail01.collab.prod.int.phx2.redhat.com>

----- Original Message -----
> From: "Itamar Heim"
> To: "Maor"
> Cc: engine-devel at ovirt.org, "Laszlo Hornyak"
> Sent: Sunday, December 11, 2011 4:57:57 PM
> Subject: Re: [Engine-devel] short research on postgresql enums
>
> [...]
>
> that's a very good point, considering we are still on the path of
> moving to hibernate.

I did not do a test with hibernate yet, but since a postgresql enum
column is read/written as character data, I believe hibernate would not
see a difference between enum and varchar. On other RDBMSs these columns
could be just plain varchar columns. So we can look at this as a
PostgreSQL-specific optimization.

JPA supports ordinal or name
( http://docs.oracle.com/javaee/6/api/javax/persistence/EnumType.html ).
The current JDBC DAOs persist using the custom logic written into the
enums. I am not sure this is portable to JPA. So if we move to JPA, we
will have a choice:
- persist using ordinal - I think we would want to avoid this, since
the code we have in the enums and the DAO classes is there to avoid
this
- persist using name - in this case the column can be an enum on
postgresql, which actually has the same performance as the ordinal
values (and the custom DAO code), or it can be plain varchar (if enum
is not supported by the rdbms), which performs somewhat worse than the
integers.

> [...]

From jchoate at redhat.com Mon Dec 12 17:22:15 2011
From: jchoate at redhat.com (Jon Choate)
Date: Mon, 12 Dec 2011 12:22:15 -0500
Subject: [Engine-devel] VM hibernation to storage domain
Message-ID: <4EE63847.7030009@redhat.com>

Is there any reason anyone can think of why we would need to specify a
specific storage domain for a VM to use when it hibernates? Ideally we
could just grab any storage domain that has space and use it for
hibernation (as long as we remember where we did it!).

thoughts?

From kroberts at redhat.com Mon Dec 12 18:11:18 2011
From: kroberts at redhat.com (Keith Robertson)
Date: Mon, 12 Dec 2011 13:11:18 -0500
Subject: [Engine-devel] Updated: New tool to upload OVF archives
Message-ID: <4EE643C6.90906@redhat.com>

Please find attached to this email an updated tool that makes it easier
to upload an OVF archive file to an oVirt export domain. This version of
the tool has the following improvements:
1. The tool will automatically generate UUIDs for the TemplateID and
the disk ID.
2. The tool attempts to outline in the help how an OVF archive should
be formatted so that a creator will produce archives that are
consumable by the tool and oVirt.
3. The makefile has been updated to produce the image uploader's RPM.

Sample Usage:
> ovirt-image-uploader.py -n 127.0.0.1:/virt/exports -t NEW-IMAGE-NAME-HERE -i -d upload ovf-tgz-here.ovf

// First few lines of the help that attempt to document *how* an
archive should be formatted...
[keith@whiplash src (master *)]$ python engine-image-uploader.py -h
Usage: engine-image-uploader.py [options] list
       engine-image-uploader.py [options] upload [file].[file]...[file]

The image uploader can be used to list export storage domains and upload
OVF files to export storage domains. This tool supports OVF files
created in the following manner:
1. The OVF archive must be a gzip archive.
2.
The archive must have the following internal layout:
|-- images
|     |-- <Image Group UUID>
|           |--- <Image UUID>
|-- master
      |---vms
            |--- <Template UUID>
                  |--- <Template UUID>.ovf
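To make that layout concrete, here is a minimal sketch of how an
uploader could stage a bare OVF-plus-image pair into an export domain
tree. The function name and the generate-a-UUID-when-missing behaviour
are my reading of this thread, not the shipped tool:

# Hedged sketch of the export-domain staging discussed in this thread.
# The helper name and the "generate a UUID when the archive omits one"
# rule are assumptions drawn from the emails above, not the actual
# engine-image-uploader implementation. Assumes fresh directories.
import os
import shutil
import uuid

def stage_export_layout(export_root, ovf_path, image_path,
                        template_id=None, image_group_id=None):
    """Copy a bare OVF + COW image pair into images/ and master/vms/."""
    template_id = template_id or str(uuid.uuid4())
    image_group_id = image_group_id or str(uuid.uuid4())  # "Generated UUID"

    # images/<Image Group UUID>/<Image UUID>
    group_dir = os.path.join(export_root, 'images', image_group_id)
    os.makedirs(group_dir)
    shutil.copy(image_path, os.path.join(group_dir,
                                         os.path.basename(image_path)))

    # master/vms/<Template UUID>/<Template UUID>.ovf
    vm_dir = os.path.join(export_root, 'master', 'vms', template_id)
    os.makedirs(vm_dir)
    shutil.copy(ovf_path, os.path.join(vm_dir, template_id + '.ovf'))
    return template_id, image_group_id

# e.g., using the UUIDs from the earlier example in this thread:
# stage_export_layout('/virt/exports',
#                     '5272b689-cd9f-4532-9b5d-2413eb7b9402.ovf',
#                     'c0e51e1b-004e-4d10-abc0-8b9f5e21f3ad')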