From iheim at redhat.com Wed Aug 1 04:42:06 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 01 Aug 2012 07:42:06 +0300 Subject: [Engine-devel] Adding VNC support In-Reply-To: <20120731134441.Horde.SjnvwJir309QGEO5nM2T-nA@imap.linux.ibm.com> References: <20120731100043.GA4725@bogey.xentower.nl> <1636398506.1290873.1343742250166.JavaMail.root@redhat.com> <20120731152553.GC4725@bogey.xentower.nl> <20120731134441.Horde.SjnvwJir309QGEO5nM2T-nA@imap.linux.ibm.com> Message-ID: <5018B39E.8010001@redhat.com> On 07/31/2012 11:44 PM, snmishra at linux.vnet.ibm.com wrote: > > Quoting Ewoud Kohl van Wijngaarden : > >> On Tue, Jul 31, 2012 at 09:44:10AM -0400, Alon Bar-Lev wrote: >>> Ewoud Kohl van Wijngaarden wrote: >>> > On Tue, Jul 31, 2012 at 10:09:26AM +0100, Daniel P. Berrange wrote: >>> > > On Tue, Jul 31, 2012 at 09:18:50AM +0300, Itamar Heim wrote: >>> > > > On 07/26/2012 05:36 PM, snmishra at linux.vnet.ibm.com wrote: >>> > > > 5.2 novnc websocket server - i see three options >>> > > > >>> > > > 5.2.1 extend qemu to do this, so novnc can connect to it directly >>> > > > like we do today for vnc/spice >>> > > >>> > > I don't think this is a desirable approach. One of the nice >>> > > benefits >>> > > you gain from using a websocket proxy is that you only need to have >>> > > one single TCP port exposed to the internet now. If you put >>> > > websockets >>> > > in QEMU itself, you'd be stuck with having to open your firewall to >>> > > allow 100's of ports. With a separate web proxy, you can even make >>> > > each QEMU server now use a local UNIX socket for their VNC server, >>> > > since only the proxy needs to be able to connect. This means that >>> > > the VNC server would no longer be exposed to random local user >>> > > access too. >>> > >>> > Another benefit of a proxy is that you can run it in a DMZ and not >>> > have >>> > to expose all your virtualization hosts to the internet. >>> >>> But this way you do expose them :) >> >> Since I've worked with VNCAuthProxy I'll explain how that works. >> >> First of all it listens on a control port. This can be inside the >> firewall and has a simple JSON-based protocol. On this control port you >> can ask it to open a connection on port X to virt-host.example.org:Y. >> virt-host.example.org can also be behind the firewall and now only port >> X is exposed to the internet. > > I am coming from the libvirt/libvirt-cim world and I don't completely > follow this discussion. In libvrt-cim (higher level layer using libvirt > to create and manage VMs), we took the input from user on what VNC IP, > port, vncpassword etc. the user wants to use to access the VM and > created a libvirt XML using these user provided values. This XML was > then passed to libvirt which created the new VM and magically set vnc > up. The user then opened any VNC viewer of their choice to access the > VM. If ovirt is using libvirt, why can't we use the same magic? that's already implemented today - you can click the UI to get a dialog with the vnc details and open the session yourself. the thread discussed something which will launch vnc from the browser for you. launching from browser has 3 ways: - browser wrapper - activex, xpi, etc. - mime based - html based - like the novnc client (well, also java applet based but less used today) > > Pardon my ignorance here. > -Sharad Mishra > >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Wed Aug 1 04:45:10 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 01 Aug 2012 07:45:10 +0300 Subject: [Engine-devel] Domain rescan action question In-Reply-To: <1401554098.6341190.1343769718592.JavaMail.root@redhat.com> References: <1401554098.6341190.1343769718592.JavaMail.root@redhat.com> Message-ID: <5018B456.8030700@redhat.com> On 08/01/2012 12:21 AM, Ayal Baron wrote: > > > ----- Original Message ----- >> On 07/31/2012 11:30 PM, Hopper, Ricky wrote: >>> Hey all, >>> >>> As I'm making progress with the domain rescan functionality, I've >>> realized that I'm unsure what to do with any disks that are >>> detected on >>> the domain. Should I add them back into the database to be listed >>> as >>> floating disks, or should I just return a list of disk images to be >>> attached to whatever the caller of the query needs? >>> >>> - Ricky >> >> i'm not sure they should be added automatically. >> I think a dialog[1] showing orphan disks/images on the storage domain >> for user to choose which to import as 'floating' disks would be >> better >> than auto importing them. > > why? this same functionality would be used to import an existing domain. > If these disks are referenced in OVFs we are not familiar with on this domain then we should import the *VMs*. > If they are referenced by other VMs that are already in the system (but disks have been unreachable until now) then the disks should just be added to the db in attached mode. > If neither, then the disks should be added as floating disks. > For the import functionality, once you subsequently import another domain with OVFs which do reference these disks then if user hasn't appropriated them for other VMs then they would move to attached state, otherwise need to add those VMs with errors. > > Note that there are 2 reasons for unknown valid disks to be on the domain: > 1. delete was initiated in engine but not performed on storage (then floating is fine as only way to automatically delete them is if user chooses to do so) > 2. disks were created there outside of the system - should just detect and import and use logic above. > >> >> there is also the reverse of flagging existing disks as 'missing' in >> storage? > > If disks were floating then they should just be removed, otherwise should be moved to illegal state (we have this state for disks today). > >> >> >> [1] or a subtab on the storage domain. > > Another sub-tab for disks? > It's possible but what would you do when importing an existing domain into the system? require user to manually select which disks to import? or would you have different flows for import of domain and rescan of contents? (I'd rather keep it simple with less tabs and manual operations for user to perform, seems more intuitive to me). > I'm not sure orphaed images and importing an entire domain should be the same flow from UX part. and for a netapp native clone, you'd want to only import the newly cloned ("orphaned) image from the storage via an api call, not the entire domain/images From abaron at redhat.com Wed Aug 1 07:48:28 2012 From: abaron at redhat.com (Ayal Baron) Date: Wed, 1 Aug 2012 03:48:28 -0400 (EDT) Subject: [Engine-devel] Domain rescan action question In-Reply-To: <5018B456.8030700@redhat.com> Message-ID: <571010747.6484571.1343807308605.JavaMail.root@redhat.com> ----- Original Message ----- > On 08/01/2012 12:21 AM, Ayal Baron wrote: > > > > > > ----- Original Message ----- > >> On 07/31/2012 11:30 PM, Hopper, Ricky wrote: > >>> Hey all, > >>> > >>> As I'm making progress with the domain rescan functionality, I've > >>> realized that I'm unsure what to do with any disks that are > >>> detected on > >>> the domain. Should I add them back into the database to be listed > >>> as > >>> floating disks, or should I just return a list of disk images to > >>> be > >>> attached to whatever the caller of the query needs? > >>> > >>> - Ricky > >> > >> i'm not sure they should be added automatically. > >> I think a dialog[1] showing orphan disks/images on the storage > >> domain > >> for user to choose which to import as 'floating' disks would be > >> better > >> than auto importing them. > > > > why? this same functionality would be used to import an existing > > domain. > > If these disks are referenced in OVFs we are not familiar with on > > this domain then we should import the *VMs*. > > If they are referenced by other VMs that are already in the system > > (but disks have been unreachable until now) then the disks should > > just be added to the db in attached mode. > > If neither, then the disks should be added as floating disks. > > For the import functionality, once you subsequently import another > > domain with OVFs which do reference these disks then if user > > hasn't appropriated them for other VMs then they would move to > > attached state, otherwise need to add those VMs with errors. > > > > Note that there are 2 reasons for unknown valid disks to be on the > > domain: > > 1. delete was initiated in engine but not performed on storage > > (then floating is fine as only way to automatically delete them is > > if user chooses to do so) > > 2. disks were created there outside of the system - should just > > detect and import and use logic above. > > > >> > >> there is also the reverse of flagging existing disks as 'missing' > >> in > >> storage? > > > > If disks were floating then they should just be removed, otherwise > > should be moved to illegal state (we have this state for disks > > today). > > > >> > >> > >> [1] or a subtab on the storage domain. > > > > Another sub-tab for disks? > > It's possible but what would you do when importing an existing > > domain into the system? require user to manually select which > > disks to import? or would you have different flows for import of > > domain and rescan of contents? (I'd rather keep it simple with > > less tabs and manual operations for user to perform, seems more > > intuitive to me). > > > > I'm not sure orphaed images and importing an entire domain should be > the > same flow from UX part. > and for a netapp native clone, you'd want to only import the newly > cloned ("orphaned) image from the storage via an api call, not the > entire domain/images For proper bookkeeping I think we'd want these disks to be reflected in the GUI. I do not however think we need to add additional tabs for this. We could have yet another state ('orphaned' or sth.) for disks to be able to easily differentiate in the GUI etc. I would imagine we'd regularly scan domains to account for storage usage. If we do add such a state then we'd need an API to 'enable' it (so that it could be done automatically). From vszocs at redhat.com Wed Aug 1 09:40:50 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Wed, 1 Aug 2012 05:40:50 -0400 (EDT) Subject: [Engine-devel] oVirt UI Plugins: Follow-up Meeting Message-ID: <1341427663.24843903.1343814050905.JavaMail.root@redhat.com> The following is a new meeting request: Subject: oVirt UI Plugins: Follow-up Meeting Organizer: "Vojtech Szocs" Time: Tuesday, August 14, 2012, 4:30:00 PM - 5:30:00 PM GMT +01:00 Belgrade, Bratislava, Budapest, Ljubljana, Prague Invitees: engine-devel at ovirt.org; George.Costea at netapp.com; Troy.Mangum at netapp.com; Dustin.Schoenbrun at netapp.com; Ricky.Hopper at netapp.com; Chris.Frantz at hp.com *~*~*~*~*~*~*~*~*~* Hi guys, this is a follow-up meeting for discussing progress on oVirt UI Plugins feature. Here are the details required for joining the session. Intercall dial-in numbers can be found at: https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=7128867405 Intercall Conference Code ID: 7128867405 # Elluminate session: Regards, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 4493 bytes Desc: not available URL: From ofrenkel at redhat.com Wed Aug 1 09:43:27 2012 From: ofrenkel at redhat.com (Omer Frenkel) Date: Wed, 1 Aug 2012 05:43:27 -0400 (EDT) Subject: [Engine-devel] Domain rescan action question In-Reply-To: <1049657803.1388688.1343769882067.JavaMail.root@redhat.com> Message-ID: <253410604.1614860.1343814207457.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Andrew Cathrow" > To: "Itamar Heim" , "Dan Yasny" , "Ricky Hopper" > Cc: engine-devel at ovirt.org > Sent: Wednesday, August 1, 2012 12:24:42 AM > Subject: Re: [Engine-devel] Domain rescan action question > > > > ----- Original Message ----- > > From: "Itamar Heim" > > To: "Ricky Hopper" > > Cc: engine-devel at ovirt.org > > Sent: Tuesday, July 31, 2012 4:44:34 PM > > Subject: Re: [Engine-devel] Domain rescan action question > > > > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: > > > Hey all, > > > > > > As I'm making progress with the domain rescan functionality, I've > > > realized that I'm unsure what to do with any disks that are > > > detected on > > > the domain. Should I add them back into the database to be listed > > > as > > > floating disks, or should I just return a list of disk images to > > > be > > > attached to whatever the caller of the query needs? > > > > > > - Ricky > > > > i'm not sure they should be added automatically. > > I think a dialog[1] showing orphan disks/images on the storage > > domain > > for user to choose which to import as 'floating' disks would be > > better > > than auto importing them. > > > > there is also the reverse of flagging existing disks as 'missing' > > in > > storage? > > > > Perhaps we should start a feature page to discuss and better scope > it. > There is a feature page that we could expand, it doesn't discuss the > notion of importing those disks which is certainly something we need > to address. > > > http://wiki.ovirt.org/wiki/Features/Orphaned_Images > +1 also, how will be handled images with snapshots? or broken chain of images (not sure its a valid scenario) > > > > [1] or a subtab on the storage domain. > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From vszocs at redhat.com Wed Aug 1 09:43:47 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Wed, 1 Aug 2012 05:43:47 -0400 (EDT) Subject: [Engine-devel] oVirt UI Plugins Meeting In-Reply-To: <5017FC38.4060601@redhat.com> Message-ID: <604015569.24847931.1343814227395.JavaMail.root@redhat.com> Thanks Itamar, I've scheduled a follow-up meeting on Monday, August 14, same time as the previous meeting. Vojtech ----- Original Message ----- From: "Itamar Heim" To: "Vojtech Szocs" Cc: "engine-devel" , "George Costea" , "Troy Mangum" , "Dustin Schoenbrun" , "Ricky Hopper" , "Chris Frantz" Sent: Tuesday, July 31, 2012 5:39:36 PM Subject: Re: [Engine-devel] oVirt UI Plugins Meeting my notes from the call: vojtech will schedule a follow up for two weeks from now same time. next step is to provide simple sample java script based plugins in the following order: 1. add a main tab showing html page from external url 2. add a sub tab showing html page from external url 3. add a context menu item opening an external url 4. plugin performs a REST API call to the engine 5. cross origin header - plugin asking another server/url a question if anyone wants to pick on of those up rather than wait for vojtech on them - would be great. Thanks, Itamar On 07/30/2012 03:03 PM, Vojtech Szocs wrote: From dyasny at redhat.com Wed Aug 1 09:59:10 2012 From: dyasny at redhat.com (Dan Yasny) Date: Wed, 1 Aug 2012 05:59:10 -0400 (EDT) Subject: [Engine-devel] Domain rescan action question In-Reply-To: <1049657803.1388688.1343769882067.JavaMail.root@redhat.com> Message-ID: <1631790052.4861341.1343815150755.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Andrew Cathrow" > To: "Itamar Heim" , "Dan Yasny" , "Ricky Hopper" > Cc: engine-devel at ovirt.org > Sent: Wednesday, 1 August, 2012 12:24:42 AM > Subject: Re: [Engine-devel] Domain rescan action question > > > > ----- Original Message ----- > > From: "Itamar Heim" > > To: "Ricky Hopper" > > Cc: engine-devel at ovirt.org > > Sent: Tuesday, July 31, 2012 4:44:34 PM > > Subject: Re: [Engine-devel] Domain rescan action question > > > > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: > > > Hey all, > > > > > > As I'm making progress with the domain rescan functionality, I've > > > realized that I'm unsure what to do with any disks that are > > > detected on > > > the domain. Should I add them back into the database to be listed > > > as > > > floating disks, or should I just return a list of disk images to > > > be > > > attached to whatever the caller of the query needs? > > > > > > - Ricky > > > > i'm not sure they should be added automatically. > > I think a dialog[1] showing orphan disks/images on the storage > > domain > > for user to choose which to import as 'floating' disks would be > > better > > than auto importing them. > > > > there is also the reverse of flagging existing disks as 'missing' > > in > > storage? > > > > Perhaps we should start a feature page to discuss and better scope > it. > There is a feature page that we could expand, it doesn't discuss the > notion of importing those disks which is certainly something we need > to address. > > > http://wiki.ovirt.org/wiki/Features/Orphaned_Images The original idea was to scan the storage domains and compare the images lists to the database, thus getting a list of images no longer relevant and scrubbing the storage. This will actually be addressed properly in the future (Ayal can elaborate on that) but for now this is needed at least for that use case. As I understand, the conversation here is about trying to take an already populated SD (from another setup I suppose), scanning it and putting it into RHEV? > > > > > [1] or a subtab on the storage domain. > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > -- Regards, Dan Yasny Red Hat Israel +972 9769 2280 From Ricky.Hopper at netapp.com Wed Aug 1 13:34:53 2012 From: Ricky.Hopper at netapp.com (Hopper, Ricky) Date: Wed, 1 Aug 2012 13:34:53 +0000 Subject: [Engine-devel] Domain rescan action question In-Reply-To: <1631790052.4861341.1343815150755.JavaMail.root@redhat.com> Message-ID: On 8/1/12 5:59 AM, "Dan Yasny" wrote: > > >----- Original Message ----- >> From: "Andrew Cathrow" >> To: "Itamar Heim" , "Dan Yasny" , >>"Ricky Hopper" >> Cc: engine-devel at ovirt.org >> Sent: Wednesday, 1 August, 2012 12:24:42 AM >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> ----- Original Message ----- >> > From: "Itamar Heim" >> > To: "Ricky Hopper" >> > Cc: engine-devel at ovirt.org >> > Sent: Tuesday, July 31, 2012 4:44:34 PM >> > Subject: Re: [Engine-devel] Domain rescan action question >> > >> > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: >> > > Hey all, >> > > >> > > As I'm making progress with the domain rescan functionality, I've >> > > realized that I'm unsure what to do with any disks that are >> > > detected on >> > > the domain. Should I add them back into the database to be listed >> > > as >> > > floating disks, or should I just return a list of disk images to >> > > be >> > > attached to whatever the caller of the query needs? >> > > >> > > - Ricky >> > >> > i'm not sure they should be added automatically. >> > I think a dialog[1] showing orphan disks/images on the storage >> > domain >> > for user to choose which to import as 'floating' disks would be >> > better >> > than auto importing them. >> > >> > there is also the reverse of flagging existing disks as 'missing' >> > in >> > storage? >> > >> >> Perhaps we should start a feature page to discuss and better scope >> it. >> There is a feature page that we could expand, it doesn't discuss the >> notion of importing those disks which is certainly something we need >> to address. >> >> >> http://wiki.ovirt.org/wiki/Features/Orphaned_Images > >The original idea was to scan the storage domains and compare the images >lists to the database, thus getting a list of images no longer relevant >and scrubbing the storage. This will actually be addressed properly in >the future (Ayal can elaborate on that) but for now this is needed at >least for that use case. > > >As I understand, the conversation here is about trying to take an already >populated SD (from another setup I suppose), scanning it and putting it >into RHEV? As I understood it, the purpose of this functionality wasn't to find images which should be removed from storage, but to find images on the domain that oVirt was unaware of and importing them for use (for instance, if a disk was created outside of oVirt on the domain). If one of the use cases for this feature is also the orphaned images mentioned on the feature page, that may expand the functionality into a separate domain scrub and storage import, both of which would call the rescan (meaning the rescan would not actually add to the database, but instead return a list of "orphaned" disk images). Another solution would be to import all disk images into the database either way, and let the user delete any orphaned images from the GUI. > >> >> > >> > [1] or a subtab on the storage domain. >> > >> > _______________________________________________ >> > Engine-devel mailing list >> > Engine-devel at ovirt.org >> > http://lists.ovirt.org/mailman/listinfo/engine-devel >> > >> > >-- > > > >Regards, > >Dan Yasny >Red Hat Israel >+972 9769 2280 From dyasny at redhat.com Wed Aug 1 13:42:37 2012 From: dyasny at redhat.com (Dan Yasny) Date: Wed, 1 Aug 2012 09:42:37 -0400 (EDT) Subject: [Engine-devel] Domain rescan action question In-Reply-To: Message-ID: <176087497.4951388.1343828557474.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Ricky Hopper" > To: "Dan Yasny" , "Andrew Cathrow" > Cc: engine-devel at ovirt.org, "Itamar Heim" , "Ricky Hopper" > Sent: Wednesday, 1 August, 2012 4:34:53 PM > Subject: Re: [Engine-devel] Domain rescan action question > > > > On 8/1/12 5:59 AM, "Dan Yasny" wrote: > > > > > > >----- Original Message ----- > >> From: "Andrew Cathrow" > >> To: "Itamar Heim" , "Dan Yasny" > >> , > >>"Ricky Hopper" > >> Cc: engine-devel at ovirt.org > >> Sent: Wednesday, 1 August, 2012 12:24:42 AM > >> Subject: Re: [Engine-devel] Domain rescan action question > >> > >> > >> > >> ----- Original Message ----- > >> > From: "Itamar Heim" > >> > To: "Ricky Hopper" > >> > Cc: engine-devel at ovirt.org > >> > Sent: Tuesday, July 31, 2012 4:44:34 PM > >> > Subject: Re: [Engine-devel] Domain rescan action question > >> > > >> > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: > >> > > Hey all, > >> > > > >> > > As I'm making progress with the domain rescan functionality, > >> > > I've > >> > > realized that I'm unsure what to do with any disks that are > >> > > detected on > >> > > the domain. Should I add them back into the database to be > >> > > listed > >> > > as > >> > > floating disks, or should I just return a list of disk images > >> > > to > >> > > be > >> > > attached to whatever the caller of the query needs? > >> > > > >> > > - Ricky > >> > > >> > i'm not sure they should be added automatically. > >> > I think a dialog[1] showing orphan disks/images on the storage > >> > domain > >> > for user to choose which to import as 'floating' disks would be > >> > better > >> > than auto importing them. > >> > > >> > there is also the reverse of flagging existing disks as > >> > 'missing' > >> > in > >> > storage? > >> > > >> > >> Perhaps we should start a feature page to discuss and better scope > >> it. > >> There is a feature page that we could expand, it doesn't discuss > >> the > >> notion of importing those disks which is certainly something we > >> need > >> to address. > >> > >> > >> http://wiki.ovirt.org/wiki/Features/Orphaned_Images > > > >The original idea was to scan the storage domains and compare the > >images > >lists to the database, thus getting a list of images no longer > >relevant > >and scrubbing the storage. This will actually be addressed properly > >in > >the future (Ayal can elaborate on that) but for now this is needed > >at > >least for that use case. > > > > > >As I understand, the conversation here is about trying to take an > >already > >populated SD (from another setup I suppose), scanning it and putting > >it > >into RHEV? > > As I understood it, the purpose of this functionality wasn't to find > images which should be removed from storage, but to find images on > the > domain that oVirt was unaware of and importing them for use (for > instance, > if a disk was created outside of oVirt on the domain). If one of the > use > cases for this feature is also the orphaned images mentioned on the > feature page, that may expand the functionality into a separate > domain > scrub and storage import, both of which would call the rescan > (meaning the > rescan would not actually add to the database, but instead return a > list > of "orphaned" disk images). > > Another solution would be to import all disk images into the database > either way, and let the user delete any orphaned images from the GUI. I think are nice to have, but the problem with the scanning is that if we're not scanning a master domain or an export domain, all we will see is a bunch of images with no context or even hints as to where they belong. The data that makes it all usable is in the engine database and in the ovf files on the master domain. This is why I stopped at the orphaned images part of the feature - because there it's feasible, I would rely on the engine database for image ID comparisons. If we present a user with a list of nameless disks, I doubt it will be of any use. > > > >> > >> > > >> > [1] or a subtab on the storage domain. > >> > > >> > _______________________________________________ > >> > Engine-devel mailing list > >> > Engine-devel at ovirt.org > >> > http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > >> > > > >-- > > > > > > > >Regards, > > > >Dan Yasny > >Red Hat Israel > >+972 9769 2280 > > -- Regards, Dan Yasny Red Hat Israel +972 9769 2280 From Ricky.Hopper at netapp.com Wed Aug 1 13:56:45 2012 From: Ricky.Hopper at netapp.com (Hopper, Ricky) Date: Wed, 1 Aug 2012 13:56:45 +0000 Subject: [Engine-devel] Domain rescan action question In-Reply-To: <176087497.4951388.1343828557474.JavaMail.root@redhat.com> Message-ID: On 8/1/12 9:42 AM, "Dan Yasny" wrote: > > >----- Original Message ----- >> From: "Ricky Hopper" >> To: "Dan Yasny" , "Andrew Cathrow" >> >> Cc: engine-devel at ovirt.org, "Itamar Heim" , "Ricky >>Hopper" >> Sent: Wednesday, 1 August, 2012 4:34:53 PM >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> On 8/1/12 5:59 AM, "Dan Yasny" wrote: >> >> > >> > >> >----- Original Message ----- >> >> From: "Andrew Cathrow" >> >> To: "Itamar Heim" , "Dan Yasny" >> >> , >> >>"Ricky Hopper" >> >> Cc: engine-devel at ovirt.org >> >> Sent: Wednesday, 1 August, 2012 12:24:42 AM >> >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> >> >> >> >> ----- Original Message ----- >> >> > From: "Itamar Heim" >> >> > To: "Ricky Hopper" >> >> > Cc: engine-devel at ovirt.org >> >> > Sent: Tuesday, July 31, 2012 4:44:34 PM >> >> > Subject: Re: [Engine-devel] Domain rescan action question >> >> > >> >> > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: >> >> > > Hey all, >> >> > > >> >> > > As I'm making progress with the domain rescan functionality, >> >> > > I've >> >> > > realized that I'm unsure what to do with any disks that are >> >> > > detected on >> >> > > the domain. Should I add them back into the database to be >> >> > > listed >> >> > > as >> >> > > floating disks, or should I just return a list of disk images >> >> > > to >> >> > > be >> >> > > attached to whatever the caller of the query needs? >> >> > > >> >> > > - Ricky >> >> > >> >> > i'm not sure they should be added automatically. >> >> > I think a dialog[1] showing orphan disks/images on the storage >> >> > domain >> >> > for user to choose which to import as 'floating' disks would be >> >> > better >> >> > than auto importing them. >> >> > >> >> > there is also the reverse of flagging existing disks as >> >> > 'missing' >> >> > in >> >> > storage? >> >> > >> >> >> >> Perhaps we should start a feature page to discuss and better scope >> >> it. >> >> There is a feature page that we could expand, it doesn't discuss >> >> the >> >> notion of importing those disks which is certainly something we >> >> need >> >> to address. >> >> >> >> >> >> http://wiki.ovirt.org/wiki/Features/Orphaned_Images >> > >> >The original idea was to scan the storage domains and compare the >> >images >> >lists to the database, thus getting a list of images no longer >> >relevant >> >and scrubbing the storage. This will actually be addressed properly >> >in >> >the future (Ayal can elaborate on that) but for now this is needed >> >at >> >least for that use case. >> > >> > >> >As I understand, the conversation here is about trying to take an >> >already >> >populated SD (from another setup I suppose), scanning it and putting >> >it >> >into RHEV? >> >> As I understood it, the purpose of this functionality wasn't to find >> images which should be removed from storage, but to find images on >> the >> domain that oVirt was unaware of and importing them for use (for >> instance, >> if a disk was created outside of oVirt on the domain). If one of the >> use >> cases for this feature is also the orphaned images mentioned on the >> feature page, that may expand the functionality into a separate >> domain >> scrub and storage import, both of which would call the rescan >> (meaning the >> rescan would not actually add to the database, but instead return a >> list >> of "orphaned" disk images). >> >> Another solution would be to import all disk images into the database >> either way, and let the user delete any orphaned images from the GUI. > >I think are nice to have, but the problem with the scanning is that if >we're not scanning a master domain or an export domain, all we will see >is a bunch of images with no context or even hints as to where they >belong. The data that makes it all usable is in the engine database and >in the ovf files on the master domain. > >This is why I stopped at the orphaned images part of the feature - >because there it's feasible, I would rely on the engine database for >image ID comparisons. > >If we present a user with a list of nameless disks, I doubt it will be of >any use. The way this would work is by comparing a list of disk images from vdsm and from oVirt's database, finding the ones vdsm returns that oVirt doesn't have, and then either adding or returning those images. So oVirt's db will be used in the comparison. As far as presenting the user with nameless disks, that's a point I hadn't considered; we could generate some sort of placeholder metadata upon addition to show the user that these are new/orphaned disks that were found on the storage domain. Is it safe to assume that the disks discovered by this feature won't be attached to anything? > > >> > >> >> >> >> > >> >> > [1] or a subtab on the storage domain. >> >> > >> >> > _______________________________________________ >> >> > Engine-devel mailing list >> >> > Engine-devel at ovirt.org >> >> > http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> > >> >> >> > >> >-- >> > >> > >> > >> >Regards, >> > >> >Dan Yasny >> >Red Hat Israel >> >+972 9769 2280 >> >> > >-- > > > >Regards, > >Dan Yasny >Red Hat Israel >+972 9769 2280 From dyasny at redhat.com Wed Aug 1 14:05:15 2012 From: dyasny at redhat.com (Dan Yasny) Date: Wed, 1 Aug 2012 10:05:15 -0400 (EDT) Subject: [Engine-devel] Domain rescan action question In-Reply-To: Message-ID: <1235552045.4962754.1343829915979.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Ricky Hopper" > To: "Dan Yasny" > Cc: engine-devel at ovirt.org, "Itamar Heim" , "Andrew Cathrow" > Sent: Wednesday, 1 August, 2012 4:56:45 PM > Subject: Re: [Engine-devel] Domain rescan action question > > > > On 8/1/12 9:42 AM, "Dan Yasny" wrote: > > > > > > >----- Original Message ----- > >> From: "Ricky Hopper" > >> To: "Dan Yasny" , "Andrew Cathrow" > >> > >> Cc: engine-devel at ovirt.org, "Itamar Heim" , > >> "Ricky > >>Hopper" > >> Sent: Wednesday, 1 August, 2012 4:34:53 PM > >> Subject: Re: [Engine-devel] Domain rescan action question > >> > >> > >> > >> On 8/1/12 5:59 AM, "Dan Yasny" wrote: > >> > >> > > >> > > >> >----- Original Message ----- > >> >> From: "Andrew Cathrow" > >> >> To: "Itamar Heim" , "Dan Yasny" > >> >> , > >> >>"Ricky Hopper" > >> >> Cc: engine-devel at ovirt.org > >> >> Sent: Wednesday, 1 August, 2012 12:24:42 AM > >> >> Subject: Re: [Engine-devel] Domain rescan action question > >> >> > >> >> > >> >> > >> >> ----- Original Message ----- > >> >> > From: "Itamar Heim" > >> >> > To: "Ricky Hopper" > >> >> > Cc: engine-devel at ovirt.org > >> >> > Sent: Tuesday, July 31, 2012 4:44:34 PM > >> >> > Subject: Re: [Engine-devel] Domain rescan action question > >> >> > > >> >> > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: > >> >> > > Hey all, > >> >> > > > >> >> > > As I'm making progress with the domain rescan > >> >> > > functionality, > >> >> > > I've > >> >> > > realized that I'm unsure what to do with any disks that are > >> >> > > detected on > >> >> > > the domain. Should I add them back into the database to be > >> >> > > listed > >> >> > > as > >> >> > > floating disks, or should I just return a list of disk > >> >> > > images > >> >> > > to > >> >> > > be > >> >> > > attached to whatever the caller of the query needs? > >> >> > > > >> >> > > - Ricky > >> >> > > >> >> > i'm not sure they should be added automatically. > >> >> > I think a dialog[1] showing orphan disks/images on the > >> >> > storage > >> >> > domain > >> >> > for user to choose which to import as 'floating' disks would > >> >> > be > >> >> > better > >> >> > than auto importing them. > >> >> > > >> >> > there is also the reverse of flagging existing disks as > >> >> > 'missing' > >> >> > in > >> >> > storage? > >> >> > > >> >> > >> >> Perhaps we should start a feature page to discuss and better > >> >> scope > >> >> it. > >> >> There is a feature page that we could expand, it doesn't > >> >> discuss > >> >> the > >> >> notion of importing those disks which is certainly something we > >> >> need > >> >> to address. > >> >> > >> >> > >> >> http://wiki.ovirt.org/wiki/Features/Orphaned_Images > >> > > >> >The original idea was to scan the storage domains and compare the > >> >images > >> >lists to the database, thus getting a list of images no longer > >> >relevant > >> >and scrubbing the storage. This will actually be addressed > >> >properly > >> >in > >> >the future (Ayal can elaborate on that) but for now this is > >> >needed > >> >at > >> >least for that use case. > >> > > >> > > >> >As I understand, the conversation here is about trying to take an > >> >already > >> >populated SD (from another setup I suppose), scanning it and > >> >putting > >> >it > >> >into RHEV? > >> > >> As I understood it, the purpose of this functionality wasn't to > >> find > >> images which should be removed from storage, but to find images on > >> the > >> domain that oVirt was unaware of and importing them for use (for > >> instance, > >> if a disk was created outside of oVirt on the domain). If one of > >> the > >> use > >> cases for this feature is also the orphaned images mentioned on > >> the > >> feature page, that may expand the functionality into a separate > >> domain > >> scrub and storage import, both of which would call the rescan > >> (meaning the > >> rescan would not actually add to the database, but instead return > >> a > >> list > >> of "orphaned" disk images). > >> > >> Another solution would be to import all disk images into the > >> database > >> either way, and let the user delete any orphaned images from the > >> GUI. > > > >I think are nice to have, but the problem with the scanning is that > >if > >we're not scanning a master domain or an export domain, all we will > >see > >is a bunch of images with no context or even hints as to where they > >belong. The data that makes it all usable is in the engine database > >and > >in the ovf files on the master domain. > > > >This is why I stopped at the orphaned images part of the feature - > >because there it's feasible, I would rely on the engine database for > >image ID comparisons. > > > >If we present a user with a list of nameless disks, I doubt it will > >be of > >any use. > > The way this would work is by comparing a list of disk images from > vdsm > and from oVirt's database, finding the ones vdsm returns that oVirt > doesn't have, and then either adding or returning those images. So > oVirt's > db will be used in the comparison. This will work only when scanning storage domains already attached and in use by the current oVirt setup. What I am talking about is what will happen if a LUN that used to be a SD in another oVirt setup is discovered and scanned, with no engine db to compare with. If we don't consider such a use case, life is definitely quite easy, and we're basically within the scope of the orphaned images feature > > As far as presenting the user with nameless disks, that's a point I > hadn't > considered; we could generate some sort of placeholder metadata upon > addition to show the user that these are new/orphaned disks that were > found on the storage domain. Is it safe to assume that the disks > discovered by this feature won't be attached to anything? The oVirt paradigm says "if it isn't in the engine db, it's not ours", so any LV or image we discover that is missing from the DB or the snapshot chain of the image in the DB, is nameless, and orphaned. Such an image on a current SD, belonging to a working oVirt setup is definitely an orphaned image. Attaching these to VMs is usually also useless, because they are more often than not discarded snapshots that didn't get discarded cleanly for some reason. Now, if we want to make this usable, we might want to actually check the qcow2 metadata of the image to see whether it's a mid-chain snapshot (and if so it's probably just a candidate for cleanup), or a standalone qcow2 or raw image, and then we can move on with the virt-* tools, to find out the image size and the filesystems it contains. This will at least provide the user with some usable information about the detected image. If we're talking about scanning an SD that doesn't presently belong to the current oVirt setup, then this is even more relevant, because all of the images will have no VM-related context. > > > > > >> > > >> >> > >> >> > > >> >> > [1] or a subtab on the storage domain. > >> >> > > >> >> > _______________________________________________ > >> >> > Engine-devel mailing list > >> >> > Engine-devel at ovirt.org > >> >> > http://lists.ovirt.org/mailman/listinfo/engine-devel > >> >> > > >> >> > >> > > >> >-- > >> > > >> > > >> > > >> >Regards, > >> > > >> >Dan Yasny > >> >Red Hat Israel > >> >+972 9769 2280 > >> > >> > > > >-- > > > > > > > >Regards, > > > >Dan Yasny > >Red Hat Israel > >+972 9769 2280 > > -- Regards, Dan Yasny Red Hat Israel +972 9769 2280 From Ricky.Hopper at netapp.com Wed Aug 1 17:36:09 2012 From: Ricky.Hopper at netapp.com (Hopper, Ricky) Date: Wed, 1 Aug 2012 17:36:09 +0000 Subject: [Engine-devel] Domain rescan action question In-Reply-To: <1235552045.4962754.1343829915979.JavaMail.root@redhat.com> Message-ID: On 8/1/12 10:05 AM, "Dan Yasny" wrote: > > >----- Original Message ----- >> From: "Ricky Hopper" >> To: "Dan Yasny" >> Cc: engine-devel at ovirt.org, "Itamar Heim" , "Andrew >>Cathrow" >> Sent: Wednesday, 1 August, 2012 4:56:45 PM >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> On 8/1/12 9:42 AM, "Dan Yasny" wrote: >> >> > >> > >> >----- Original Message ----- >> >> From: "Ricky Hopper" >> >> To: "Dan Yasny" , "Andrew Cathrow" >> >> >> >> Cc: engine-devel at ovirt.org, "Itamar Heim" , >> >> "Ricky >> >>Hopper" >> >> Sent: Wednesday, 1 August, 2012 4:34:53 PM >> >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> >> >> >> >> On 8/1/12 5:59 AM, "Dan Yasny" wrote: >> >> >> >> > >> >> > >> >> >----- Original Message ----- >> >> >> From: "Andrew Cathrow" >> >> >> To: "Itamar Heim" , "Dan Yasny" >> >> >> , >> >> >>"Ricky Hopper" >> >> >> Cc: engine-devel at ovirt.org >> >> >> Sent: Wednesday, 1 August, 2012 12:24:42 AM >> >> >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> >> >> >> >> >> >> >> >> ----- Original Message ----- >> >> >> > From: "Itamar Heim" >> >> >> > To: "Ricky Hopper" >> >> >> > Cc: engine-devel at ovirt.org >> >> >> > Sent: Tuesday, July 31, 2012 4:44:34 PM >> >> >> > Subject: Re: [Engine-devel] Domain rescan action question >> >> >> > >> >> >> > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: >> >> >> > > Hey all, >> >> >> > > >> >> >> > > As I'm making progress with the domain rescan >> >> >> > > functionality, >> >> >> > > I've >> >> >> > > realized that I'm unsure what to do with any disks that are >> >> >> > > detected on >> >> >> > > the domain. Should I add them back into the database to be >> >> >> > > listed >> >> >> > > as >> >> >> > > floating disks, or should I just return a list of disk >> >> >> > > images >> >> >> > > to >> >> >> > > be >> >> >> > > attached to whatever the caller of the query needs? >> >> >> > > >> >> >> > > - Ricky >> >> >> > >> >> >> > i'm not sure they should be added automatically. >> >> >> > I think a dialog[1] showing orphan disks/images on the >> >> >> > storage >> >> >> > domain >> >> >> > for user to choose which to import as 'floating' disks would >> >> >> > be >> >> >> > better >> >> >> > than auto importing them. >> >> >> > >> >> >> > there is also the reverse of flagging existing disks as >> >> >> > 'missing' >> >> >> > in >> >> >> > storage? >> >> >> > >> >> >> >> >> >> Perhaps we should start a feature page to discuss and better >> >> >> scope >> >> >> it. >> >> >> There is a feature page that we could expand, it doesn't >> >> >> discuss >> >> >> the >> >> >> notion of importing those disks which is certainly something we >> >> >> need >> >> >> to address. >> >> >> >> >> >> >> >> >> http://wiki.ovirt.org/wiki/Features/Orphaned_Images >> >> > >> >> >The original idea was to scan the storage domains and compare the >> >> >images >> >> >lists to the database, thus getting a list of images no longer >> >> >relevant >> >> >and scrubbing the storage. This will actually be addressed >> >> >properly >> >> >in >> >> >the future (Ayal can elaborate on that) but for now this is >> >> >needed >> >> >at >> >> >least for that use case. >> >> > >> >> > >> >> >As I understand, the conversation here is about trying to take an >> >> >already >> >> >populated SD (from another setup I suppose), scanning it and >> >> >putting >> >> >it >> >> >into RHEV? >> >> >> >> As I understood it, the purpose of this functionality wasn't to >> >> find >> >> images which should be removed from storage, but to find images on >> >> the >> >> domain that oVirt was unaware of and importing them for use (for >> >> instance, >> >> if a disk was created outside of oVirt on the domain). If one of >> >> the >> >> use >> >> cases for this feature is also the orphaned images mentioned on >> >> the >> >> feature page, that may expand the functionality into a separate >> >> domain >> >> scrub and storage import, both of which would call the rescan >> >> (meaning the >> >> rescan would not actually add to the database, but instead return >> >> a >> >> list >> >> of "orphaned" disk images). >> >> >> >> Another solution would be to import all disk images into the >> >> database >> >> either way, and let the user delete any orphaned images from the >> >> GUI. >> > >> >I think are nice to have, but the problem with the scanning is that >> >if >> >we're not scanning a master domain or an export domain, all we will >> >see >> >is a bunch of images with no context or even hints as to where they >> >belong. The data that makes it all usable is in the engine database >> >and >> >in the ovf files on the master domain. >> > >> >This is why I stopped at the orphaned images part of the feature - >> >because there it's feasible, I would rely on the engine database for >> >image ID comparisons. >> > >> >If we present a user with a list of nameless disks, I doubt it will >> >be of >> >any use. >> >> The way this would work is by comparing a list of disk images from >> vdsm >> and from oVirt's database, finding the ones vdsm returns that oVirt >> doesn't have, and then either adding or returning those images. So >> oVirt's >> db will be used in the comparison. > >This will work only when scanning storage domains already attached and in >use by the current oVirt setup. What I am talking about is what will >happen if a LUN that used to be a SD in another oVirt setup is discovered >and scanned, with no engine db to compare with. If we don't consider such >a use case, life is definitely quite easy, and we're basically within the >scope of the orphaned images feature This use case should definitely be considered, maybe have a separate case where the rescan would return all "compatible" disks (i.e. disks that aren't just partial snapshots and the like) if the domain has not yet been mounted. Essentially, it would run the same comparison, but compare against an empty list rather than a list of disks. There's no way it's as simple as that (I'm unsure of the methods oVirt uses to mount a domain), but it's a good starting point. > >> >> As far as presenting the user with nameless disks, that's a point I >> hadn't >> considered; we could generate some sort of placeholder metadata upon >> addition to show the user that these are new/orphaned disks that were >> found on the storage domain. Is it safe to assume that the disks >> discovered by this feature won't be attached to anything? > >The oVirt paradigm says "if it isn't in the engine db, it's not ours", so >any LV or image we discover that is missing from the DB or the snapshot >chain of the image in the DB, is nameless, and orphaned. > >Such an image on a current SD, belonging to a working oVirt setup is >definitely an orphaned image. Attaching these to VMs is usually also >useless, because they are more often than not discarded snapshots that >didn't get discarded cleanly for some reason. > > >Now, if we want to make this usable, we might want to actually check the >qcow2 metadata of the image to see whether it's a mid-chain snapshot (and >if so it's probably just a candidate for cleanup), or a standalone qcow2 >or raw image, and then we can move on with the virt-* tools, to find out >the image size and the filesystems it contains. This will at least >provide the user with some usable information about the detected image. >If we're talking about scanning an SD that doesn't presently belong to >the current oVirt setup, then this is even more relevant, because all of >the images will have no VM-related context. We're currently working on having disks created outside of the oVirt environment, so not all orphaned disks on the existing storage domain will be artifacts of supposedly-deleted data. For our use case, disk images created by us will be able to be imported into oVirt and attached to a VM created through the engine. Because of this, saying "if it isn't in the engine db, it's not ours" wouldn't necessarily be true. When you talk about checking the metadata, does either oVirt or vdsm have a simple way to do this? A query of some sort would be ideal for this, as it could be run for each image as a qualifier for import. Also, as far as writing the functionality itself, I'm gathering that it should be structured as a query to return these orphaned images, which can then be acted upon/added to the database through a separate command after checking the validity of each image? > >> > >> > >> >> > >> >> >> >> >> >> > >> >> >> > [1] or a subtab on the storage domain. >> >> >> > >> >> >> > _______________________________________________ >> >> >> > Engine-devel mailing list >> >> >> > Engine-devel at ovirt.org >> >> >> > http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> >> > >> >> >> >> >> > >> >> >-- >> >> > >> >> > >> >> > >> >> >Regards, >> >> > >> >> >Dan Yasny >> >> >Red Hat Israel >> >> >+972 9769 2280 >> >> >> >> >> > >> >-- >> > >> > >> > >> >Regards, >> > >> >Dan Yasny >> >Red Hat Israel >> >+972 9769 2280 >> >> > >-- > > > >Regards, > >Dan Yasny >Red Hat Israel >+972 9769 2280 From dyasny at redhat.com Wed Aug 1 18:35:29 2012 From: dyasny at redhat.com (Dan Yasny) Date: Wed, 1 Aug 2012 14:35:29 -0400 (EDT) Subject: [Engine-devel] Domain rescan action question In-Reply-To: Message-ID: <1952269421.5171062.1343846129879.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Ricky Hopper" > To: "Dan Yasny" > Cc: engine-devel at ovirt.org, "Itamar Heim" , "Andrew Cathrow" > Sent: Wednesday, 1 August, 2012 8:36:09 PM > Subject: Re: [Engine-devel] Domain rescan action question > > > > On 8/1/12 10:05 AM, "Dan Yasny" wrote: > > > > > > >----- Original Message ----- > >> From: "Ricky Hopper" > >> To: "Dan Yasny" > >> Cc: engine-devel at ovirt.org, "Itamar Heim" , > >> "Andrew > >>Cathrow" > >> Sent: Wednesday, 1 August, 2012 4:56:45 PM > >> Subject: Re: [Engine-devel] Domain rescan action question > >> > >> > >> > >> On 8/1/12 9:42 AM, "Dan Yasny" wrote: > >> > >> > > >> > > >> >----- Original Message ----- > >> >> From: "Ricky Hopper" > >> >> To: "Dan Yasny" , "Andrew Cathrow" > >> >> > >> >> Cc: engine-devel at ovirt.org, "Itamar Heim" , > >> >> "Ricky > >> >>Hopper" > >> >> Sent: Wednesday, 1 August, 2012 4:34:53 PM > >> >> Subject: Re: [Engine-devel] Domain rescan action question > >> >> > >> >> > >> >> > >> >> On 8/1/12 5:59 AM, "Dan Yasny" wrote: > >> >> > >> >> > > >> >> > > >> >> >----- Original Message ----- > >> >> >> From: "Andrew Cathrow" > >> >> >> To: "Itamar Heim" , "Dan Yasny" > >> >> >> , > >> >> >>"Ricky Hopper" > >> >> >> Cc: engine-devel at ovirt.org > >> >> >> Sent: Wednesday, 1 August, 2012 12:24:42 AM > >> >> >> Subject: Re: [Engine-devel] Domain rescan action question > >> >> >> > >> >> >> > >> >> >> > >> >> >> ----- Original Message ----- > >> >> >> > From: "Itamar Heim" > >> >> >> > To: "Ricky Hopper" > >> >> >> > Cc: engine-devel at ovirt.org > >> >> >> > Sent: Tuesday, July 31, 2012 4:44:34 PM > >> >> >> > Subject: Re: [Engine-devel] Domain rescan action question > >> >> >> > > >> >> >> > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: > >> >> >> > > Hey all, > >> >> >> > > > >> >> >> > > As I'm making progress with the domain rescan > >> >> >> > > functionality, > >> >> >> > > I've > >> >> >> > > realized that I'm unsure what to do with any disks that > >> >> >> > > are > >> >> >> > > detected on > >> >> >> > > the domain. Should I add them back into the database to > >> >> >> > > be > >> >> >> > > listed > >> >> >> > > as > >> >> >> > > floating disks, or should I just return a list of disk > >> >> >> > > images > >> >> >> > > to > >> >> >> > > be > >> >> >> > > attached to whatever the caller of the query needs? > >> >> >> > > > >> >> >> > > - Ricky > >> >> >> > > >> >> >> > i'm not sure they should be added automatically. > >> >> >> > I think a dialog[1] showing orphan disks/images on the > >> >> >> > storage > >> >> >> > domain > >> >> >> > for user to choose which to import as 'floating' disks > >> >> >> > would > >> >> >> > be > >> >> >> > better > >> >> >> > than auto importing them. > >> >> >> > > >> >> >> > there is also the reverse of flagging existing disks as > >> >> >> > 'missing' > >> >> >> > in > >> >> >> > storage? > >> >> >> > > >> >> >> > >> >> >> Perhaps we should start a feature page to discuss and better > >> >> >> scope > >> >> >> it. > >> >> >> There is a feature page that we could expand, it doesn't > >> >> >> discuss > >> >> >> the > >> >> >> notion of importing those disks which is certainly something > >> >> >> we > >> >> >> need > >> >> >> to address. > >> >> >> > >> >> >> > >> >> >> http://wiki.ovirt.org/wiki/Features/Orphaned_Images > >> >> > > >> >> >The original idea was to scan the storage domains and compare > >> >> >the > >> >> >images > >> >> >lists to the database, thus getting a list of images no longer > >> >> >relevant > >> >> >and scrubbing the storage. This will actually be addressed > >> >> >properly > >> >> >in > >> >> >the future (Ayal can elaborate on that) but for now this is > >> >> >needed > >> >> >at > >> >> >least for that use case. > >> >> > > >> >> > > >> >> >As I understand, the conversation here is about trying to take > >> >> >an > >> >> >already > >> >> >populated SD (from another setup I suppose), scanning it and > >> >> >putting > >> >> >it > >> >> >into RHEV? > >> >> > >> >> As I understood it, the purpose of this functionality wasn't to > >> >> find > >> >> images which should be removed from storage, but to find images > >> >> on > >> >> the > >> >> domain that oVirt was unaware of and importing them for use > >> >> (for > >> >> instance, > >> >> if a disk was created outside of oVirt on the domain). If one > >> >> of > >> >> the > >> >> use > >> >> cases for this feature is also the orphaned images mentioned on > >> >> the > >> >> feature page, that may expand the functionality into a separate > >> >> domain > >> >> scrub and storage import, both of which would call the rescan > >> >> (meaning the > >> >> rescan would not actually add to the database, but instead > >> >> return > >> >> a > >> >> list > >> >> of "orphaned" disk images). > >> >> > >> >> Another solution would be to import all disk images into the > >> >> database > >> >> either way, and let the user delete any orphaned images from > >> >> the > >> >> GUI. > >> > > >> >I think are nice to have, but the problem with the scanning is > >> >that > >> >if > >> >we're not scanning a master domain or an export domain, all we > >> >will > >> >see > >> >is a bunch of images with no context or even hints as to where > >> >they > >> >belong. The data that makes it all usable is in the engine > >> >database > >> >and > >> >in the ovf files on the master domain. > >> > > >> >This is why I stopped at the orphaned images part of the feature > >> >- > >> >because there it's feasible, I would rely on the engine database > >> >for > >> >image ID comparisons. > >> > > >> >If we present a user with a list of nameless disks, I doubt it > >> >will > >> >be of > >> >any use. > >> > >> The way this would work is by comparing a list of disk images from > >> vdsm > >> and from oVirt's database, finding the ones vdsm returns that > >> oVirt > >> doesn't have, and then either adding or returning those images. So > >> oVirt's > >> db will be used in the comparison. > > > >This will work only when scanning storage domains already attached > >and in > >use by the current oVirt setup. What I am talking about is what will > >happen if a LUN that used to be a SD in another oVirt setup is > >discovered > >and scanned, with no engine db to compare with. If we don't consider > >such > >a use case, life is definitely quite easy, and we're basically > >within the > >scope of the orphaned images feature > > This use case should definitely be considered, maybe have a separate > case > where the rescan would return all "compatible" disks (i.e. disks that > aren't just partial snapshots and the like) if the domain has not yet > been > mounted. Essentially, it would run the same comparison, but compare > against an empty list rather than a list of disks. There's no way > it's as > simple as that (I'm unsure of the methods oVirt uses to mount a > domain), > but it's a good starting point. There is no complex method there. For file storage it's just a mount command, and for block it's LVM (plus iscsi session establishment, if needed) > > > >> > >> As far as presenting the user with nameless disks, that's a point > >> I > >> hadn't > >> considered; we could generate some sort of placeholder metadata > >> upon > >> addition to show the user that these are new/orphaned disks that > >> were > >> found on the storage domain. Is it safe to assume that the disks > >> discovered by this feature won't be attached to anything? > > > >The oVirt paradigm says "if it isn't in the engine db, it's not > >ours", so > >any LV or image we discover that is missing from the DB or the > >snapshot > >chain of the image in the DB, is nameless, and orphaned. > > > >Such an image on a current SD, belonging to a working oVirt setup is > >definitely an orphaned image. Attaching these to VMs is usually also > >useless, because they are more often than not discarded snapshots > >that > >didn't get discarded cleanly for some reason. > > > > > >Now, if we want to make this usable, we might want to actually check > >the > >qcow2 metadata of the image to see whether it's a mid-chain snapshot > >(and > >if so it's probably just a candidate for cleanup), or a standalone > >qcow2 > >or raw image, and then we can move on with the virt-* tools, to find > >out > >the image size and the filesystems it contains. This will at least > >provide the user with some usable information about the detected > >image. > >If we're talking about scanning an SD that doesn't presently belong > >to > >the current oVirt setup, then this is even more relevant, because > >all of > >the images will have no VM-related context. > > We're currently working on having disks created outside of the oVirt > environment, so not all orphaned disks on the existing storage domain > will > be artifacts of supposedly-deleted data. Do you mean like rhevm-image-upload, or something different? > For our use case, disk > images > created by us will be able to be imported into oVirt and attached to > a VM > created through the engine. Because of this, saying "if it isn't in > the > engine db, it's not ours" wouldn't necessarily be true. > > When you talk about checking the metadata, does either oVirt or vdsm > have > a simple way to do this? A query of some sort would be ideal for > this, as > it could be run for each image as a qualifier for import. qemu-img info and libguestfs commands should do. Besides, our images do come with some metadata (in the LVM tags or a .meta file) > > Also, as far as writing the functionality itself, I'm gathering that > it > should be structured as a query to return these orphaned images, > which can > then be acted upon/added to the database through a separate command > after > checking the validity of each image? Yes, a simple way to say "import this one to DB, attach to VM X or make floating", "delete that one", "skip" > > > >> > > >> > > >> >> > > >> >> >> > >> >> >> > > >> >> >> > [1] or a subtab on the storage domain. > >> >> >> > > >> >> >> > _______________________________________________ > >> >> >> > Engine-devel mailing list > >> >> >> > Engine-devel at ovirt.org > >> >> >> > http://lists.ovirt.org/mailman/listinfo/engine-devel > >> >> >> > > >> >> >> > >> >> > > >> >> >-- > >> >> > > >> >> > > >> >> > > >> >> >Regards, > >> >> > > >> >> >Dan Yasny > >> >> >Red Hat Israel > >> >> >+972 9769 2280 > >> >> > >> >> > >> > > >> >-- > >> > > >> > > >> > > >> >Regards, > >> > > >> >Dan Yasny > >> >Red Hat Israel > >> >+972 9769 2280 > >> > >> > > > >-- > > > > > > > >Regards, > > > >Dan Yasny > >Red Hat Israel > >+972 9769 2280 > > -- Regards, Dan Yasny Red Hat Israel +972 9769 2280 From snmishra at linux.vnet.ibm.com Thu Aug 2 00:02:55 2012 From: snmishra at linux.vnet.ibm.com (snmishra at linux.vnet.ibm.com) Date: Wed, 01 Aug 2012 17:02:55 -0700 Subject: [Engine-devel] Adding VNC support In-Reply-To: <501778CA.40105@redhat.com> References: <20120726073643.Horde.XbhN_Jir309QEVX7YpIizCA@imap.linux.ibm.com> <501778CA.40105@redhat.com> Message-ID: <20120801170255.Horde.IXbmDJir309QGcOv0v93vAA@imap.linux.ibm.com> Quoting Itamar Heim : > On 07/26/2012 05:36 PM, snmishra at linux.vnet.ibm.com wrote: >> >> Hi, >> >> I am looking at adding VNC support in ovirt. What does the community >> think? Ideas, suggestions, comments? > > so to sum this up: > 1. there is the new dialog to open vnc manually. > http://gerrit.ovirt.org/#/c/4790/ good > > 2. Alon suggested it should be allowed to open this dialog for spice > as well, not only for vnc. +1 > > 3. Alon also suggested to have a launch button on that window (or > parallel to it) which will try to launch vnc or spice by returning a > specific mime type response, allowing client to choose the vnc/spice > client to run for this mime type, and passing command line > parameters to it in the mime type reply. +1 I like the idea of being able to launch vnc and spice from the same place. > > 4. provide a vnc xpi/activex wrappers to allow launching it via web > browsers like spice > main limitation of this compared to novnc is you need to do this for > every browser/platform. I like the noVNC option better since most modern web browsers support the canvas element of HTML 5. With noVNC we don't have to port to other platforms/browsers. > > 5. novnc > 5.1 novnc client - i'd start with the one recently pushed to fedora. > https://bugzilla.redhat.com/show_bug.cgi?id=822187 +1 that is an added advantage. > > 5.2 novnc websocket server - i see three options > > 5.2.1 extend qemu to do this, so novnc can connect to it directly > like we do today for vnc/spice > > 5.2.2 use the python based one from: > https://bugzilla.redhat.com/show_bug.cgi?id=822187 > > 5.2.3 look at a java based websocket solution, assuming easier to > deploy it as part of webadmin/user portal war than another service > (requires a bit of research) > looking forward user portal and webadmin would be deployed on > multiple hosts, so a websockets would need to be deployed next to > them. I can see myself going either way with java or python based websockets. -Sharad Mishra > > from the little i looked at, the various websocket implementations > are mostly nascent and are not scaleable/robust/etc. > I'd love to be proven wrong, and worth playing with them a bit to > measure that. > > 6. spice.html5 > while very nascent - worth mentioning on this thread and trying to > take a look: > http://www.spice-space.org/page/Html5 From shuming at linux.vnet.ibm.com Thu Aug 2 01:44:56 2012 From: shuming at linux.vnet.ibm.com (Shu Ming) Date: Thu, 02 Aug 2012 09:44:56 +0800 Subject: [Engine-devel] Domain rescan action question In-Reply-To: References: Message-ID: <5019DB98.6050702@linux.vnet.ibm.com> No comments below. I am just curious how you can have disk created outside of the oVirt environment if the storage domain is mounted on oVirt engine. Does your application try to communicate with VDSM though SPM host to do that? Are the two applications(oVirt engine and your application) coexisting and manipulating the storage domain at the same time or one application is quiet when the other one is functioning? IMHO, modifying the storage domain meta data from two application are dangerous and I suppose your application should know the reason and what these disks are for. So telling oVirt to import the disks automatically should be reasonable because you know what these disks are for and it is unnecessary for the oVirt administrator to select which one to import. On 2012-8-2 1:36, Hopper, Ricky wrote: > > On 8/1/12 10:05 AM, "Dan Yasny" wrote: > >> >> ----- Original Message ----- >>> From: "Ricky Hopper" >>> To: "Dan Yasny" >>> Cc: engine-devel at ovirt.org, "Itamar Heim" , "Andrew >>> Cathrow" >>> Sent: Wednesday, 1 August, 2012 4:56:45 PM >>> Subject: Re: [Engine-devel] Domain rescan action question >>> >>> >>> >>> On 8/1/12 9:42 AM, "Dan Yasny" wrote: >>> >>>> >>>> ----- Original Message ----- >>>>> From: "Ricky Hopper" >>>>> To: "Dan Yasny" , "Andrew Cathrow" >>>>> >>>>> Cc: engine-devel at ovirt.org, "Itamar Heim" , >>>>> "Ricky >>>>> Hopper" >>>>> Sent: Wednesday, 1 August, 2012 4:34:53 PM >>>>> Subject: Re: [Engine-devel] Domain rescan action question >>>>> >>>>> >>>>> >>>>> On 8/1/12 5:59 AM, "Dan Yasny" wrote: >>>>> >>>>>> >>>>>> ----- Original Message ----- >>>>>>> From: "Andrew Cathrow" >>>>>>> To: "Itamar Heim" , "Dan Yasny" >>>>>>> , >>>>>>> "Ricky Hopper" >>>>>>> Cc: engine-devel at ovirt.org >>>>>>> Sent: Wednesday, 1 August, 2012 12:24:42 AM >>>>>>> Subject: Re: [Engine-devel] Domain rescan action question >>>>>>> >>>>>>> >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> From: "Itamar Heim" >>>>>>>> To: "Ricky Hopper" >>>>>>>> Cc: engine-devel at ovirt.org >>>>>>>> Sent: Tuesday, July 31, 2012 4:44:34 PM >>>>>>>> Subject: Re: [Engine-devel] Domain rescan action question >>>>>>>> >>>>>>>> On 07/31/2012 11:30 PM, Hopper, Ricky wrote: >>>>>>>>> Hey all, >>>>>>>>> >>>>>>>>> As I'm making progress with the domain rescan >>>>>>>>> functionality, >>>>>>>>> I've >>>>>>>>> realized that I'm unsure what to do with any disks that are >>>>>>>>> detected on >>>>>>>>> the domain. Should I add them back into the database to be >>>>>>>>> listed >>>>>>>>> as >>>>>>>>> floating disks, or should I just return a list of disk >>>>>>>>> images >>>>>>>>> to >>>>>>>>> be >>>>>>>>> attached to whatever the caller of the query needs? >>>>>>>>> >>>>>>>>> - Ricky >>>>>>>> i'm not sure they should be added automatically. >>>>>>>> I think a dialog[1] showing orphan disks/images on the >>>>>>>> storage >>>>>>>> domain >>>>>>>> for user to choose which to import as 'floating' disks would >>>>>>>> be >>>>>>>> better >>>>>>>> than auto importing them. >>>>>>>> >>>>>>>> there is also the reverse of flagging existing disks as >>>>>>>> 'missing' >>>>>>>> in >>>>>>>> storage? >>>>>>>> >>>>>>> Perhaps we should start a feature page to discuss and better >>>>>>> scope >>>>>>> it. >>>>>>> There is a feature page that we could expand, it doesn't >>>>>>> discuss >>>>>>> the >>>>>>> notion of importing those disks which is certainly something we >>>>>>> need >>>>>>> to address. >>>>>>> >>>>>>> >>>>>>> http://wiki.ovirt.org/wiki/Features/Orphaned_Images >>>>>> The original idea was to scan the storage domains and compare the >>>>>> images >>>>>> lists to the database, thus getting a list of images no longer >>>>>> relevant >>>>>> and scrubbing the storage. This will actually be addressed >>>>>> properly >>>>>> in >>>>>> the future (Ayal can elaborate on that) but for now this is >>>>>> needed >>>>>> at >>>>>> least for that use case. >>>>>> >>>>>> >>>>>> As I understand, the conversation here is about trying to take an >>>>>> already >>>>>> populated SD (from another setup I suppose), scanning it and >>>>>> putting >>>>>> it >>>>>> into RHEV? >>>>> As I understood it, the purpose of this functionality wasn't to >>>>> find >>>>> images which should be removed from storage, but to find images on >>>>> the >>>>> domain that oVirt was unaware of and importing them for use (for >>>>> instance, >>>>> if a disk was created outside of oVirt on the domain). If one of >>>>> the >>>>> use >>>>> cases for this feature is also the orphaned images mentioned on >>>>> the >>>>> feature page, that may expand the functionality into a separate >>>>> domain >>>>> scrub and storage import, both of which would call the rescan >>>>> (meaning the >>>>> rescan would not actually add to the database, but instead return >>>>> a >>>>> list >>>>> of "orphaned" disk images). >>>>> >>>>> Another solution would be to import all disk images into the >>>>> database >>>>> either way, and let the user delete any orphaned images from the >>>>> GUI. >>>> I think are nice to have, but the problem with the scanning is that >>>> if >>>> we're not scanning a master domain or an export domain, all we will >>>> see >>>> is a bunch of images with no context or even hints as to where they >>>> belong. The data that makes it all usable is in the engine database >>>> and >>>> in the ovf files on the master domain. >>>> >>>> This is why I stopped at the orphaned images part of the feature - >>>> because there it's feasible, I would rely on the engine database for >>>> image ID comparisons. >>>> >>>> If we present a user with a list of nameless disks, I doubt it will >>>> be of >>>> any use. >>> The way this would work is by comparing a list of disk images from >>> vdsm >>> and from oVirt's database, finding the ones vdsm returns that oVirt >>> doesn't have, and then either adding or returning those images. So >>> oVirt's >>> db will be used in the comparison. >> This will work only when scanning storage domains already attached and in >> use by the current oVirt setup. What I am talking about is what will >> happen if a LUN that used to be a SD in another oVirt setup is discovered >> and scanned, with no engine db to compare with. If we don't consider such >> a use case, life is definitely quite easy, and we're basically within the >> scope of the orphaned images feature > This use case should definitely be considered, maybe have a separate case > where the rescan would return all "compatible" disks (i.e. disks that > aren't just partial snapshots and the like) if the domain has not yet been > mounted. Essentially, it would run the same comparison, but compare > against an empty list rather than a list of disks. There's no way it's as > simple as that (I'm unsure of the methods oVirt uses to mount a domain), > but it's a good starting point. >>> As far as presenting the user with nameless disks, that's a point I >>> hadn't >>> considered; we could generate some sort of placeholder metadata upon >>> addition to show the user that these are new/orphaned disks that were >>> found on the storage domain. Is it safe to assume that the disks >>> discovered by this feature won't be attached to anything? >> The oVirt paradigm says "if it isn't in the engine db, it's not ours", so >> any LV or image we discover that is missing from the DB or the snapshot >> chain of the image in the DB, is nameless, and orphaned. >> >> Such an image on a current SD, belonging to a working oVirt setup is >> definitely an orphaned image. Attaching these to VMs is usually also >> useless, because they are more often than not discarded snapshots that >> didn't get discarded cleanly for some reason. >> >> >> Now, if we want to make this usable, we might want to actually check the >> qcow2 metadata of the image to see whether it's a mid-chain snapshot (and >> if so it's probably just a candidate for cleanup), or a standalone qcow2 >> or raw image, and then we can move on with the virt-* tools, to find out >> the image size and the filesystems it contains. This will at least >> provide the user with some usable information about the detected image. >> If we're talking about scanning an SD that doesn't presently belong to >> the current oVirt setup, then this is even more relevant, because all of >> the images will have no VM-related context. > We're currently working on having disks created outside of the oVirt > environment, so not all orphaned disks on the existing storage domain will > be artifacts of supposedly-deleted data. For our use case, disk images > created by us will be able to be imported into oVirt and attached to a VM > created through the engine. Because of this, saying "if it isn't in the > engine db, it's not ours" wouldn't necessarily be true. > > When you talk about checking the metadata, does either oVirt or vdsm have > a simple way to do this? A query of some sort would be ideal for this, as > it could be run for each image as a qualifier for import. > > Also, as far as writing the functionality itself, I'm gathering that it > should be structured as a query to return these orphaned images, which can > then be acted upon/added to the database through a separate command after > checking the validity of each image? >>>> >>>>>>>> [1] or a subtab on the storage domain. >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Engine-devel mailing list >>>>>>>> Engine-devel at ovirt.org >>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>>> >>>>>> -- >>>>>> >>>>>> >>>>>> >>>>>> Regards, >>>>>> >>>>>> Dan Yasny >>>>>> Red Hat Israel >>>>>> +972 9769 2280 >>>>> >>>> -- >>>> >>>> >>>> >>>> Regards, >>>> >>>> Dan Yasny >>>> Red Hat Israel >>>> +972 9769 2280 >>> >> -- >> >> >> >> Regards, >> >> Dan Yasny >> Red Hat Israel >> +972 9769 2280 > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > -- Shu Ming IBM China Systems and Technology Laboratory From iheim at redhat.com Thu Aug 2 06:59:36 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 02 Aug 2012 09:59:36 +0300 Subject: [Engine-devel] Domain rescan action question In-Reply-To: <5019DB98.6050702@linux.vnet.ibm.com> References: <5019DB98.6050702@linux.vnet.ibm.com> Message-ID: <501A2558.6000401@redhat.com> On 08/02/2012 04:44 AM, Shu Ming wrote: > No comments below. I am just curious how you can have disk created > outside of the oVirt environment if the storage domain is mounted on > oVirt engine. Does your application try to communicate with VDSM though > SPM host to do that? Are the two applications(oVirt engine and your > application) coexisting and manipulating the storage domain at the same > time or one application is quiet when the other one is functioning? > IMHO, modifying the storage domain meta data from two application are > dangerous and I suppose your application should know the reason and what > these disks are for. So telling oVirt to import the disks > automatically should be reasonable because you know what these disks are > for and it is unnecessary for the oVirt administrator to select which > one to import. two use cases: 1. NFS storage allows this quite easily, and very relevant with storage side cloning. 2. engine and storage go out of sync (recover either from backup) > > On 2012-8-2 1:36, Hopper, Ricky wrote: >> >> On 8/1/12 10:05 AM, "Dan Yasny" wrote: >> >>> >>> ----- Original Message ----- >>>> From: "Ricky Hopper" >>>> To: "Dan Yasny" >>>> Cc: engine-devel at ovirt.org, "Itamar Heim" , "Andrew >>>> Cathrow" >>>> Sent: Wednesday, 1 August, 2012 4:56:45 PM >>>> Subject: Re: [Engine-devel] Domain rescan action question >>>> >>>> >>>> >>>> On 8/1/12 9:42 AM, "Dan Yasny" wrote: >>>> >>>>> >>>>> ----- Original Message ----- >>>>>> From: "Ricky Hopper" >>>>>> To: "Dan Yasny" , "Andrew Cathrow" >>>>>> >>>>>> Cc: engine-devel at ovirt.org, "Itamar Heim" , >>>>>> "Ricky >>>>>> Hopper" >>>>>> Sent: Wednesday, 1 August, 2012 4:34:53 PM >>>>>> Subject: Re: [Engine-devel] Domain rescan action question >>>>>> >>>>>> >>>>>> >>>>>> On 8/1/12 5:59 AM, "Dan Yasny" wrote: >>>>>> >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> From: "Andrew Cathrow" >>>>>>>> To: "Itamar Heim" , "Dan Yasny" >>>>>>>> , >>>>>>>> "Ricky Hopper" >>>>>>>> Cc: engine-devel at ovirt.org >>>>>>>> Sent: Wednesday, 1 August, 2012 12:24:42 AM >>>>>>>> Subject: Re: [Engine-devel] Domain rescan action question >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> ----- Original Message ----- >>>>>>>>> From: "Itamar Heim" >>>>>>>>> To: "Ricky Hopper" >>>>>>>>> Cc: engine-devel at ovirt.org >>>>>>>>> Sent: Tuesday, July 31, 2012 4:44:34 PM >>>>>>>>> Subject: Re: [Engine-devel] Domain rescan action question >>>>>>>>> >>>>>>>>> On 07/31/2012 11:30 PM, Hopper, Ricky wrote: >>>>>>>>>> Hey all, >>>>>>>>>> >>>>>>>>>> As I'm making progress with the domain rescan >>>>>>>>>> functionality, >>>>>>>>>> I've >>>>>>>>>> realized that I'm unsure what to do with any disks that are >>>>>>>>>> detected on >>>>>>>>>> the domain. Should I add them back into the database to be >>>>>>>>>> listed >>>>>>>>>> as >>>>>>>>>> floating disks, or should I just return a list of disk >>>>>>>>>> images >>>>>>>>>> to >>>>>>>>>> be >>>>>>>>>> attached to whatever the caller of the query needs? >>>>>>>>>> >>>>>>>>>> - Ricky >>>>>>>>> i'm not sure they should be added automatically. >>>>>>>>> I think a dialog[1] showing orphan disks/images on the >>>>>>>>> storage >>>>>>>>> domain >>>>>>>>> for user to choose which to import as 'floating' disks would >>>>>>>>> be >>>>>>>>> better >>>>>>>>> than auto importing them. >>>>>>>>> >>>>>>>>> there is also the reverse of flagging existing disks as >>>>>>>>> 'missing' >>>>>>>>> in >>>>>>>>> storage? >>>>>>>>> >>>>>>>> Perhaps we should start a feature page to discuss and better >>>>>>>> scope >>>>>>>> it. >>>>>>>> There is a feature page that we could expand, it doesn't >>>>>>>> discuss >>>>>>>> the >>>>>>>> notion of importing those disks which is certainly something we >>>>>>>> need >>>>>>>> to address. >>>>>>>> >>>>>>>> >>>>>>>> http://wiki.ovirt.org/wiki/Features/Orphaned_Images >>>>>>> The original idea was to scan the storage domains and compare the >>>>>>> images >>>>>>> lists to the database, thus getting a list of images no longer >>>>>>> relevant >>>>>>> and scrubbing the storage. This will actually be addressed >>>>>>> properly >>>>>>> in >>>>>>> the future (Ayal can elaborate on that) but for now this is >>>>>>> needed >>>>>>> at >>>>>>> least for that use case. >>>>>>> >>>>>>> >>>>>>> As I understand, the conversation here is about trying to take an >>>>>>> already >>>>>>> populated SD (from another setup I suppose), scanning it and >>>>>>> putting >>>>>>> it >>>>>>> into RHEV? >>>>>> As I understood it, the purpose of this functionality wasn't to >>>>>> find >>>>>> images which should be removed from storage, but to find images on >>>>>> the >>>>>> domain that oVirt was unaware of and importing them for use (for >>>>>> instance, >>>>>> if a disk was created outside of oVirt on the domain). If one of >>>>>> the >>>>>> use >>>>>> cases for this feature is also the orphaned images mentioned on >>>>>> the >>>>>> feature page, that may expand the functionality into a separate >>>>>> domain >>>>>> scrub and storage import, both of which would call the rescan >>>>>> (meaning the >>>>>> rescan would not actually add to the database, but instead return >>>>>> a >>>>>> list >>>>>> of "orphaned" disk images). >>>>>> >>>>>> Another solution would be to import all disk images into the >>>>>> database >>>>>> either way, and let the user delete any orphaned images from the >>>>>> GUI. >>>>> I think are nice to have, but the problem with the scanning is that >>>>> if >>>>> we're not scanning a master domain or an export domain, all we will >>>>> see >>>>> is a bunch of images with no context or even hints as to where they >>>>> belong. The data that makes it all usable is in the engine database >>>>> and >>>>> in the ovf files on the master domain. >>>>> >>>>> This is why I stopped at the orphaned images part of the feature - >>>>> because there it's feasible, I would rely on the engine database for >>>>> image ID comparisons. >>>>> >>>>> If we present a user with a list of nameless disks, I doubt it will >>>>> be of >>>>> any use. >>>> The way this would work is by comparing a list of disk images from >>>> vdsm >>>> and from oVirt's database, finding the ones vdsm returns that oVirt >>>> doesn't have, and then either adding or returning those images. So >>>> oVirt's >>>> db will be used in the comparison. >>> This will work only when scanning storage domains already attached >>> and in >>> use by the current oVirt setup. What I am talking about is what will >>> happen if a LUN that used to be a SD in another oVirt setup is >>> discovered >>> and scanned, with no engine db to compare with. If we don't consider >>> such >>> a use case, life is definitely quite easy, and we're basically within >>> the >>> scope of the orphaned images feature >> This use case should definitely be considered, maybe have a separate case >> where the rescan would return all "compatible" disks (i.e. disks that >> aren't just partial snapshots and the like) if the domain has not yet >> been >> mounted. Essentially, it would run the same comparison, but compare >> against an empty list rather than a list of disks. There's no way it's as >> simple as that (I'm unsure of the methods oVirt uses to mount a domain), >> but it's a good starting point. >>>> As far as presenting the user with nameless disks, that's a point I >>>> hadn't >>>> considered; we could generate some sort of placeholder metadata upon >>>> addition to show the user that these are new/orphaned disks that were >>>> found on the storage domain. Is it safe to assume that the disks >>>> discovered by this feature won't be attached to anything? >>> The oVirt paradigm says "if it isn't in the engine db, it's not >>> ours", so >>> any LV or image we discover that is missing from the DB or the snapshot >>> chain of the image in the DB, is nameless, and orphaned. >>> >>> Such an image on a current SD, belonging to a working oVirt setup is >>> definitely an orphaned image. Attaching these to VMs is usually also >>> useless, because they are more often than not discarded snapshots that >>> didn't get discarded cleanly for some reason. >>> >>> >>> Now, if we want to make this usable, we might want to actually check the >>> qcow2 metadata of the image to see whether it's a mid-chain snapshot >>> (and >>> if so it's probably just a candidate for cleanup), or a standalone qcow2 >>> or raw image, and then we can move on with the virt-* tools, to find out >>> the image size and the filesystems it contains. This will at least >>> provide the user with some usable information about the detected image. >>> If we're talking about scanning an SD that doesn't presently belong to >>> the current oVirt setup, then this is even more relevant, because all of >>> the images will have no VM-related context. >> We're currently working on having disks created outside of the oVirt >> environment, so not all orphaned disks on the existing storage domain >> will >> be artifacts of supposedly-deleted data. For our use case, disk images >> created by us will be able to be imported into oVirt and attached to a VM >> created through the engine. Because of this, saying "if it isn't in the >> engine db, it's not ours" wouldn't necessarily be true. >> >> When you talk about checking the metadata, does either oVirt or vdsm have >> a simple way to do this? A query of some sort would be ideal for this, as >> it could be run for each image as a qualifier for import. >> >> Also, as far as writing the functionality itself, I'm gathering that it >> should be structured as a query to return these orphaned images, which >> can >> then be acted upon/added to the database through a separate command after >> checking the validity of each image? >>>>> >>>>>>>>> [1] or a subtab on the storage domain. >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Engine-devel mailing list >>>>>>>>> Engine-devel at ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>>>> >>>>>>> -- >>>>>>> >>>>>>> >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> Dan Yasny >>>>>>> Red Hat Israel >>>>>>> +972 9769 2280 >>>>>> >>>>> -- >>>>> >>>>> >>>>> >>>>> Regards, >>>>> >>>>> Dan Yasny >>>>> Red Hat Israel >>>>> +972 9769 2280 >>>> >>> -- >>> >>> >>> >>> Regards, >>> >>> Dan Yasny >>> Red Hat Israel >>> +972 9769 2280 >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > > From iheim at redhat.com Thu Aug 2 08:52:43 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 02 Aug 2012 11:52:43 +0300 Subject: [Engine-devel] Fwd: [jboss-rpm] New jboss-as update (7.1.1-6) In-Reply-To: <501A3EA0.2090808@redhat.com> References: <501A3EA0.2090808@redhat.com> Message-ID: <501A3FDB.2020906@redhat.com> fyi -------- Original Message -------- Subject: [jboss-rpm] New jboss-as update (7.1.1-6) Date: Thu, 02 Aug 2012 10:47:28 +0200 From: Marek Goldmann To: jboss-rpm at lists.jboss.org Hi all, I just submitted a new update (7.1.1-6) to Fedora. This is a bugfix release which fixes issues with oVirt: - https://bugzilla.redhat.com/show_bug.cgi?id=843285 - https://bugzilla.redhat.com/show_bug.cgi?id=844554 - https://bugzilla.redhat.com/show_bug.cgi?id=842997 - https://bugzilla.redhat.com/show_bug.cgi?id=842996 (All above issues are talking about the same thing) Additionally a fix preventing the package from building from source was applied. Once it hits testing, please give it a try and bump the karma: https://admin.fedoraproject.org/updates/jboss-as-7.1.1-6.fc17 Thanks! --Marek _______________________________________________ jboss-rpm mailing list jboss-rpm at lists.jboss.org https://lists.jboss.org/mailman/listinfo/jboss-rpm From Ricky.Hopper at netapp.com Thu Aug 2 14:37:10 2012 From: Ricky.Hopper at netapp.com (Hopper, Ricky) Date: Thu, 2 Aug 2012 14:37:10 +0000 Subject: [Engine-devel] Domain rescan action question In-Reply-To: <1952269421.5171062.1343846129879.JavaMail.root@redhat.com> Message-ID: In the interest of good discussion, we've put up a feature page for this feature (http://wiki.ovirt.org/wiki/Features/Domain_Scan), which links to a talk page where modifications can be proposed to how I've laid out the feature. So far, it covers how the query works and which commands will come about to implement it. I'd appreciate it if anyone concerned could check this out and make any changes as they see fit so we can get going with the coding. - Ricky On 8/1/12 2:35 PM, "Dan Yasny" wrote: > > >----- Original Message ----- >> From: "Ricky Hopper" >> To: "Dan Yasny" >> Cc: engine-devel at ovirt.org, "Itamar Heim" , "Andrew >>Cathrow" >> Sent: Wednesday, 1 August, 2012 8:36:09 PM >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> On 8/1/12 10:05 AM, "Dan Yasny" wrote: >> >> > >> > >> >----- Original Message ----- >> >> From: "Ricky Hopper" >> >> To: "Dan Yasny" >> >> Cc: engine-devel at ovirt.org, "Itamar Heim" , >> >> "Andrew >> >>Cathrow" >> >> Sent: Wednesday, 1 August, 2012 4:56:45 PM >> >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> >> >> >> >> On 8/1/12 9:42 AM, "Dan Yasny" wrote: >> >> >> >> > >> >> > >> >> >----- Original Message ----- >> >> >> From: "Ricky Hopper" >> >> >> To: "Dan Yasny" , "Andrew Cathrow" >> >> >> >> >> >> Cc: engine-devel at ovirt.org, "Itamar Heim" , >> >> >> "Ricky >> >> >>Hopper" >> >> >> Sent: Wednesday, 1 August, 2012 4:34:53 PM >> >> >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> >> >> >> >> >> >> >> >> On 8/1/12 5:59 AM, "Dan Yasny" wrote: >> >> >> >> >> >> > >> >> >> > >> >> >> >----- Original Message ----- >> >> >> >> From: "Andrew Cathrow" >> >> >> >> To: "Itamar Heim" , "Dan Yasny" >> >> >> >> , >> >> >> >>"Ricky Hopper" >> >> >> >> Cc: engine-devel at ovirt.org >> >> >> >> Sent: Wednesday, 1 August, 2012 12:24:42 AM >> >> >> >> Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> ----- Original Message ----- >> >> >> >> > From: "Itamar Heim" >> >> >> >> > To: "Ricky Hopper" >> >> >> >> > Cc: engine-devel at ovirt.org >> >> >> >> > Sent: Tuesday, July 31, 2012 4:44:34 PM >> >> >> >> > Subject: Re: [Engine-devel] Domain rescan action question >> >> >> >> > >> >> >> >> > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: >> >> >> >> > > Hey all, >> >> >> >> > > >> >> >> >> > > As I'm making progress with the domain rescan >> >> >> >> > > functionality, >> >> >> >> > > I've >> >> >> >> > > realized that I'm unsure what to do with any disks that >> >> >> >> > > are >> >> >> >> > > detected on >> >> >> >> > > the domain. Should I add them back into the database to >> >> >> >> > > be >> >> >> >> > > listed >> >> >> >> > > as >> >> >> >> > > floating disks, or should I just return a list of disk >> >> >> >> > > images >> >> >> >> > > to >> >> >> >> > > be >> >> >> >> > > attached to whatever the caller of the query needs? >> >> >> >> > > >> >> >> >> > > - Ricky >> >> >> >> > >> >> >> >> > i'm not sure they should be added automatically. >> >> >> >> > I think a dialog[1] showing orphan disks/images on the >> >> >> >> > storage >> >> >> >> > domain >> >> >> >> > for user to choose which to import as 'floating' disks >> >> >> >> > would >> >> >> >> > be >> >> >> >> > better >> >> >> >> > than auto importing them. >> >> >> >> > >> >> >> >> > there is also the reverse of flagging existing disks as >> >> >> >> > 'missing' >> >> >> >> > in >> >> >> >> > storage? >> >> >> >> > >> >> >> >> >> >> >> >> Perhaps we should start a feature page to discuss and better >> >> >> >> scope >> >> >> >> it. >> >> >> >> There is a feature page that we could expand, it doesn't >> >> >> >> discuss >> >> >> >> the >> >> >> >> notion of importing those disks which is certainly something >> >> >> >> we >> >> >> >> need >> >> >> >> to address. >> >> >> >> >> >> >> >> >> >> >> >> http://wiki.ovirt.org/wiki/Features/Orphaned_Images >> >> >> > >> >> >> >The original idea was to scan the storage domains and compare >> >> >> >the >> >> >> >images >> >> >> >lists to the database, thus getting a list of images no longer >> >> >> >relevant >> >> >> >and scrubbing the storage. This will actually be addressed >> >> >> >properly >> >> >> >in >> >> >> >the future (Ayal can elaborate on that) but for now this is >> >> >> >needed >> >> >> >at >> >> >> >least for that use case. >> >> >> > >> >> >> > >> >> >> >As I understand, the conversation here is about trying to take >> >> >> >an >> >> >> >already >> >> >> >populated SD (from another setup I suppose), scanning it and >> >> >> >putting >> >> >> >it >> >> >> >into RHEV? >> >> >> >> >> >> As I understood it, the purpose of this functionality wasn't to >> >> >> find >> >> >> images which should be removed from storage, but to find images >> >> >> on >> >> >> the >> >> >> domain that oVirt was unaware of and importing them for use >> >> >> (for >> >> >> instance, >> >> >> if a disk was created outside of oVirt on the domain). If one >> >> >> of >> >> >> the >> >> >> use >> >> >> cases for this feature is also the orphaned images mentioned on >> >> >> the >> >> >> feature page, that may expand the functionality into a separate >> >> >> domain >> >> >> scrub and storage import, both of which would call the rescan >> >> >> (meaning the >> >> >> rescan would not actually add to the database, but instead >> >> >> return >> >> >> a >> >> >> list >> >> >> of "orphaned" disk images). >> >> >> >> >> >> Another solution would be to import all disk images into the >> >> >> database >> >> >> either way, and let the user delete any orphaned images from >> >> >> the >> >> >> GUI. >> >> > >> >> >I think are nice to have, but the problem with the scanning is >> >> >that >> >> >if >> >> >we're not scanning a master domain or an export domain, all we >> >> >will >> >> >see >> >> >is a bunch of images with no context or even hints as to where >> >> >they >> >> >belong. The data that makes it all usable is in the engine >> >> >database >> >> >and >> >> >in the ovf files on the master domain. >> >> > >> >> >This is why I stopped at the orphaned images part of the feature >> >> >- >> >> >because there it's feasible, I would rely on the engine database >> >> >for >> >> >image ID comparisons. >> >> > >> >> >If we present a user with a list of nameless disks, I doubt it >> >> >will >> >> >be of >> >> >any use. >> >> >> >> The way this would work is by comparing a list of disk images from >> >> vdsm >> >> and from oVirt's database, finding the ones vdsm returns that >> >> oVirt >> >> doesn't have, and then either adding or returning those images. So >> >> oVirt's >> >> db will be used in the comparison. >> > >> >This will work only when scanning storage domains already attached >> >and in >> >use by the current oVirt setup. What I am talking about is what will >> >happen if a LUN that used to be a SD in another oVirt setup is >> >discovered >> >and scanned, with no engine db to compare with. If we don't consider >> >such >> >a use case, life is definitely quite easy, and we're basically >> >within the >> >scope of the orphaned images feature >> >> This use case should definitely be considered, maybe have a separate >> case >> where the rescan would return all "compatible" disks (i.e. disks that >> aren't just partial snapshots and the like) if the domain has not yet >> been >> mounted. Essentially, it would run the same comparison, but compare >> against an empty list rather than a list of disks. There's no way >> it's as >> simple as that (I'm unsure of the methods oVirt uses to mount a >> domain), >> but it's a good starting point. > >There is no complex method there. For file storage it's just a mount >command, and for block it's LVM (plus iscsi session establishment, if >needed) > >> > >> >> >> >> As far as presenting the user with nameless disks, that's a point >> >> I >> >> hadn't >> >> considered; we could generate some sort of placeholder metadata >> >> upon >> >> addition to show the user that these are new/orphaned disks that >> >> were >> >> found on the storage domain. Is it safe to assume that the disks >> >> discovered by this feature won't be attached to anything? >> > >> >The oVirt paradigm says "if it isn't in the engine db, it's not >> >ours", so >> >any LV or image we discover that is missing from the DB or the >> >snapshot >> >chain of the image in the DB, is nameless, and orphaned. >> > >> >Such an image on a current SD, belonging to a working oVirt setup is >> >definitely an orphaned image. Attaching these to VMs is usually also >> >useless, because they are more often than not discarded snapshots >> >that >> >didn't get discarded cleanly for some reason. >> > >> > >> >Now, if we want to make this usable, we might want to actually check >> >the >> >qcow2 metadata of the image to see whether it's a mid-chain snapshot >> >(and >> >if so it's probably just a candidate for cleanup), or a standalone >> >qcow2 >> >or raw image, and then we can move on with the virt-* tools, to find >> >out >> >the image size and the filesystems it contains. This will at least >> >provide the user with some usable information about the detected >> >image. >> >If we're talking about scanning an SD that doesn't presently belong >> >to >> >the current oVirt setup, then this is even more relevant, because >> >all of >> >the images will have no VM-related context. >> >> We're currently working on having disks created outside of the oVirt >> environment, so not all orphaned disks on the existing storage domain >> will >> be artifacts of supposedly-deleted data. > >Do you mean like rhevm-image-upload, or something different? > >> For our use case, disk >> images >> created by us will be able to be imported into oVirt and attached to >> a VM >> created through the engine. Because of this, saying "if it isn't in >> the >> engine db, it's not ours" wouldn't necessarily be true. >> >> When you talk about checking the metadata, does either oVirt or vdsm >> have >> a simple way to do this? A query of some sort would be ideal for >> this, as >> it could be run for each image as a qualifier for import. > >qemu-img info and libguestfs commands should do. >Besides, our images do come with some metadata (in the LVM tags or a >.meta file) > >> >> Also, as far as writing the functionality itself, I'm gathering that >> it >> should be structured as a query to return these orphaned images, >> which can >> then be acted upon/added to the database through a separate command >> after >> checking the validity of each image? > >Yes, a simple way to say "import this one to DB, attach to VM X or make >floating", "delete that one", "skip" > > >> > >> >> > >> >> > >> >> >> > >> >> >> >> >> >> >> >> > >> >> >> >> > [1] or a subtab on the storage domain. >> >> >> >> > >> >> >> >> > _______________________________________________ >> >> >> >> > Engine-devel mailing list >> >> >> >> > Engine-devel at ovirt.org >> >> >> >> > http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> >> >> > >> >> >> >> >> >> >> > >> >> >> >-- >> >> >> > >> >> >> > >> >> >> > >> >> >> >Regards, >> >> >> > >> >> >> >Dan Yasny >> >> >> >Red Hat Israel >> >> >> >+972 9769 2280 >> >> >> >> >> >> >> >> > >> >> >-- >> >> > >> >> > >> >> > >> >> >Regards, >> >> > >> >> >Dan Yasny >> >> >Red Hat Israel >> >> >+972 9769 2280 >> >> >> >> >> > >> >-- >> > >> > >> > >> >Regards, >> > >> >Dan Yasny >> >Red Hat Israel >> >+972 9769 2280 >> >> > >-- > > > >Regards, > >Dan Yasny >Red Hat Israel >+972 9769 2280 >_______________________________________________ >Engine-devel mailing list >Engine-devel at ovirt.org >http://lists.ovirt.org/mailman/listinfo/engine-devel From dyasny at redhat.com Thu Aug 2 14:43:13 2012 From: dyasny at redhat.com (Dan Yasny) Date: Thu, 2 Aug 2012 10:43:13 -0400 (EDT) Subject: [Engine-devel] Domain rescan action question In-Reply-To: Message-ID: <1747379863.5617935.1343918593837.JavaMail.root@redhat.com> Thanks Ricky, I've added reviewing this to my todo list Dan ----- Original Message ----- > From: "Ricky Hopper" > To: "Dan Yasny" > Cc: engine-devel at ovirt.org > Sent: Thursday, 2 August, 2012 5:37:10 PM > Subject: Re: [Engine-devel] Domain rescan action question > > In the interest of good discussion, we've put up a feature page for > this > feature (http://wiki.ovirt.org/wiki/Features/Domain_Scan), which > links to > a talk page where modifications can be proposed to how I've laid out > the > feature. So far, it covers how the query works and which commands > will > come about to implement it. I'd appreciate it if anyone concerned > could > check this out and make any changes as they see fit so we can get > going > with the coding. > > - Ricky > > On 8/1/12 2:35 PM, "Dan Yasny" wrote: > > > > > > >----- Original Message ----- > >> From: "Ricky Hopper" > >> To: "Dan Yasny" > >> Cc: engine-devel at ovirt.org, "Itamar Heim" , > >> "Andrew > >>Cathrow" > >> Sent: Wednesday, 1 August, 2012 8:36:09 PM > >> Subject: Re: [Engine-devel] Domain rescan action question > >> > >> > >> > >> On 8/1/12 10:05 AM, "Dan Yasny" wrote: > >> > >> > > >> > > >> >----- Original Message ----- > >> >> From: "Ricky Hopper" > >> >> To: "Dan Yasny" > >> >> Cc: engine-devel at ovirt.org, "Itamar Heim" , > >> >> "Andrew > >> >>Cathrow" > >> >> Sent: Wednesday, 1 August, 2012 4:56:45 PM > >> >> Subject: Re: [Engine-devel] Domain rescan action question > >> >> > >> >> > >> >> > >> >> On 8/1/12 9:42 AM, "Dan Yasny" wrote: > >> >> > >> >> > > >> >> > > >> >> >----- Original Message ----- > >> >> >> From: "Ricky Hopper" > >> >> >> To: "Dan Yasny" , "Andrew Cathrow" > >> >> >> > >> >> >> Cc: engine-devel at ovirt.org, "Itamar Heim" > >> >> >> , > >> >> >> "Ricky > >> >> >>Hopper" > >> >> >> Sent: Wednesday, 1 August, 2012 4:34:53 PM > >> >> >> Subject: Re: [Engine-devel] Domain rescan action question > >> >> >> > >> >> >> > >> >> >> > >> >> >> On 8/1/12 5:59 AM, "Dan Yasny" wrote: > >> >> >> > >> >> >> > > >> >> >> > > >> >> >> >----- Original Message ----- > >> >> >> >> From: "Andrew Cathrow" > >> >> >> >> To: "Itamar Heim" , "Dan Yasny" > >> >> >> >> , > >> >> >> >>"Ricky Hopper" > >> >> >> >> Cc: engine-devel at ovirt.org > >> >> >> >> Sent: Wednesday, 1 August, 2012 12:24:42 AM > >> >> >> >> Subject: Re: [Engine-devel] Domain rescan action question > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> ----- Original Message ----- > >> >> >> >> > From: "Itamar Heim" > >> >> >> >> > To: "Ricky Hopper" > >> >> >> >> > Cc: engine-devel at ovirt.org > >> >> >> >> > Sent: Tuesday, July 31, 2012 4:44:34 PM > >> >> >> >> > Subject: Re: [Engine-devel] Domain rescan action > >> >> >> >> > question > >> >> >> >> > > >> >> >> >> > On 07/31/2012 11:30 PM, Hopper, Ricky wrote: > >> >> >> >> > > Hey all, > >> >> >> >> > > > >> >> >> >> > > As I'm making progress with the domain rescan > >> >> >> >> > > functionality, > >> >> >> >> > > I've > >> >> >> >> > > realized that I'm unsure what to do with any disks > >> >> >> >> > > that > >> >> >> >> > > are > >> >> >> >> > > detected on > >> >> >> >> > > the domain. Should I add them back into the database > >> >> >> >> > > to > >> >> >> >> > > be > >> >> >> >> > > listed > >> >> >> >> > > as > >> >> >> >> > > floating disks, or should I just return a list of > >> >> >> >> > > disk > >> >> >> >> > > images > >> >> >> >> > > to > >> >> >> >> > > be > >> >> >> >> > > attached to whatever the caller of the query needs? > >> >> >> >> > > > >> >> >> >> > > - Ricky > >> >> >> >> > > >> >> >> >> > i'm not sure they should be added automatically. > >> >> >> >> > I think a dialog[1] showing orphan disks/images on the > >> >> >> >> > storage > >> >> >> >> > domain > >> >> >> >> > for user to choose which to import as 'floating' disks > >> >> >> >> > would > >> >> >> >> > be > >> >> >> >> > better > >> >> >> >> > than auto importing them. > >> >> >> >> > > >> >> >> >> > there is also the reverse of flagging existing disks as > >> >> >> >> > 'missing' > >> >> >> >> > in > >> >> >> >> > storage? > >> >> >> >> > > >> >> >> >> > >> >> >> >> Perhaps we should start a feature page to discuss and > >> >> >> >> better > >> >> >> >> scope > >> >> >> >> it. > >> >> >> >> There is a feature page that we could expand, it doesn't > >> >> >> >> discuss > >> >> >> >> the > >> >> >> >> notion of importing those disks which is certainly > >> >> >> >> something > >> >> >> >> we > >> >> >> >> need > >> >> >> >> to address. > >> >> >> >> > >> >> >> >> > >> >> >> >> http://wiki.ovirt.org/wiki/Features/Orphaned_Images > >> >> >> > > >> >> >> >The original idea was to scan the storage domains and > >> >> >> >compare > >> >> >> >the > >> >> >> >images > >> >> >> >lists to the database, thus getting a list of images no > >> >> >> >longer > >> >> >> >relevant > >> >> >> >and scrubbing the storage. This will actually be addressed > >> >> >> >properly > >> >> >> >in > >> >> >> >the future (Ayal can elaborate on that) but for now this is > >> >> >> >needed > >> >> >> >at > >> >> >> >least for that use case. > >> >> >> > > >> >> >> > > >> >> >> >As I understand, the conversation here is about trying to > >> >> >> >take > >> >> >> >an > >> >> >> >already > >> >> >> >populated SD (from another setup I suppose), scanning it > >> >> >> >and > >> >> >> >putting > >> >> >> >it > >> >> >> >into RHEV? > >> >> >> > >> >> >> As I understood it, the purpose of this functionality wasn't > >> >> >> to > >> >> >> find > >> >> >> images which should be removed from storage, but to find > >> >> >> images > >> >> >> on > >> >> >> the > >> >> >> domain that oVirt was unaware of and importing them for use > >> >> >> (for > >> >> >> instance, > >> >> >> if a disk was created outside of oVirt on the domain). If > >> >> >> one > >> >> >> of > >> >> >> the > >> >> >> use > >> >> >> cases for this feature is also the orphaned images mentioned > >> >> >> on > >> >> >> the > >> >> >> feature page, that may expand the functionality into a > >> >> >> separate > >> >> >> domain > >> >> >> scrub and storage import, both of which would call the > >> >> >> rescan > >> >> >> (meaning the > >> >> >> rescan would not actually add to the database, but instead > >> >> >> return > >> >> >> a > >> >> >> list > >> >> >> of "orphaned" disk images). > >> >> >> > >> >> >> Another solution would be to import all disk images into the > >> >> >> database > >> >> >> either way, and let the user delete any orphaned images from > >> >> >> the > >> >> >> GUI. > >> >> > > >> >> >I think are nice to have, but the problem with the scanning is > >> >> >that > >> >> >if > >> >> >we're not scanning a master domain or an export domain, all we > >> >> >will > >> >> >see > >> >> >is a bunch of images with no context or even hints as to where > >> >> >they > >> >> >belong. The data that makes it all usable is in the engine > >> >> >database > >> >> >and > >> >> >in the ovf files on the master domain. > >> >> > > >> >> >This is why I stopped at the orphaned images part of the > >> >> >feature > >> >> >- > >> >> >because there it's feasible, I would rely on the engine > >> >> >database > >> >> >for > >> >> >image ID comparisons. > >> >> > > >> >> >If we present a user with a list of nameless disks, I doubt it > >> >> >will > >> >> >be of > >> >> >any use. > >> >> > >> >> The way this would work is by comparing a list of disk images > >> >> from > >> >> vdsm > >> >> and from oVirt's database, finding the ones vdsm returns that > >> >> oVirt > >> >> doesn't have, and then either adding or returning those images. > >> >> So > >> >> oVirt's > >> >> db will be used in the comparison. > >> > > >> >This will work only when scanning storage domains already > >> >attached > >> >and in > >> >use by the current oVirt setup. What I am talking about is what > >> >will > >> >happen if a LUN that used to be a SD in another oVirt setup is > >> >discovered > >> >and scanned, with no engine db to compare with. If we don't > >> >consider > >> >such > >> >a use case, life is definitely quite easy, and we're basically > >> >within the > >> >scope of the orphaned images feature > >> > >> This use case should definitely be considered, maybe have a > >> separate > >> case > >> where the rescan would return all "compatible" disks (i.e. disks > >> that > >> aren't just partial snapshots and the like) if the domain has not > >> yet > >> been > >> mounted. Essentially, it would run the same comparison, but > >> compare > >> against an empty list rather than a list of disks. There's no way > >> it's as > >> simple as that (I'm unsure of the methods oVirt uses to mount a > >> domain), > >> but it's a good starting point. > > > >There is no complex method there. For file storage it's just a mount > >command, and for block it's LVM (plus iscsi session establishment, > >if > >needed) > > > >> > > >> >> > >> >> As far as presenting the user with nameless disks, that's a > >> >> point > >> >> I > >> >> hadn't > >> >> considered; we could generate some sort of placeholder metadata > >> >> upon > >> >> addition to show the user that these are new/orphaned disks > >> >> that > >> >> were > >> >> found on the storage domain. Is it safe to assume that the > >> >> disks > >> >> discovered by this feature won't be attached to anything? > >> > > >> >The oVirt paradigm says "if it isn't in the engine db, it's not > >> >ours", so > >> >any LV or image we discover that is missing from the DB or the > >> >snapshot > >> >chain of the image in the DB, is nameless, and orphaned. > >> > > >> >Such an image on a current SD, belonging to a working oVirt setup > >> >is > >> >definitely an orphaned image. Attaching these to VMs is usually > >> >also > >> >useless, because they are more often than not discarded snapshots > >> >that > >> >didn't get discarded cleanly for some reason. > >> > > >> > > >> >Now, if we want to make this usable, we might want to actually > >> >check > >> >the > >> >qcow2 metadata of the image to see whether it's a mid-chain > >> >snapshot > >> >(and > >> >if so it's probably just a candidate for cleanup), or a > >> >standalone > >> >qcow2 > >> >or raw image, and then we can move on with the virt-* tools, to > >> >find > >> >out > >> >the image size and the filesystems it contains. This will at > >> >least > >> >provide the user with some usable information about the detected > >> >image. > >> >If we're talking about scanning an SD that doesn't presently > >> >belong > >> >to > >> >the current oVirt setup, then this is even more relevant, because > >> >all of > >> >the images will have no VM-related context. > >> > >> We're currently working on having disks created outside of the > >> oVirt > >> environment, so not all orphaned disks on the existing storage > >> domain > >> will > >> be artifacts of supposedly-deleted data. > > > >Do you mean like rhevm-image-upload, or something different? > > > >> For our use case, disk > >> images > >> created by us will be able to be imported into oVirt and attached > >> to > >> a VM > >> created through the engine. Because of this, saying "if it isn't > >> in > >> the > >> engine db, it's not ours" wouldn't necessarily be true. > >> > >> When you talk about checking the metadata, does either oVirt or > >> vdsm > >> have > >> a simple way to do this? A query of some sort would be ideal for > >> this, as > >> it could be run for each image as a qualifier for import. > > > >qemu-img info and libguestfs commands should do. > >Besides, our images do come with some metadata (in the LVM tags or a > >.meta file) > > > >> > >> Also, as far as writing the functionality itself, I'm gathering > >> that > >> it > >> should be structured as a query to return these orphaned images, > >> which can > >> then be acted upon/added to the database through a separate > >> command > >> after > >> checking the validity of each image? > > > >Yes, a simple way to say "import this one to DB, attach to VM X or > >make > >floating", "delete that one", "skip" > > > > > >> > > >> >> > > >> >> > > >> >> >> > > >> >> >> >> > >> >> >> >> > > >> >> >> >> > [1] or a subtab on the storage domain. > >> >> >> >> > > >> >> >> >> > _______________________________________________ > >> >> >> >> > Engine-devel mailing list > >> >> >> >> > Engine-devel at ovirt.org > >> >> >> >> > http://lists.ovirt.org/mailman/listinfo/engine-devel > >> >> >> >> > > >> >> >> >> > >> >> >> > > >> >> >> >-- > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> >Regards, > >> >> >> > > >> >> >> >Dan Yasny > >> >> >> >Red Hat Israel > >> >> >> >+972 9769 2280 > >> >> >> > >> >> >> > >> >> > > >> >> >-- > >> >> > > >> >> > > >> >> > > >> >> >Regards, > >> >> > > >> >> >Dan Yasny > >> >> >Red Hat Israel > >> >> >+972 9769 2280 > >> >> > >> >> > >> > > >> >-- > >> > > >> > > >> > > >> >Regards, > >> > > >> >Dan Yasny > >> >Red Hat Israel > >> >+972 9769 2280 > >> > >> > > > >-- > > > > > > > >Regards, > > > >Dan Yasny > >Red Hat Israel > >+972 9769 2280 > >_______________________________________________ > >Engine-devel mailing list > >Engine-devel at ovirt.org > >http://lists.ovirt.org/mailman/listinfo/engine-devel > > -- Regards, Dan Yasny Red Hat Israel +972 9769 2280 From lpeer at redhat.com Mon Aug 6 07:27:51 2012 From: lpeer at redhat.com (Livnat Peer) Date: Mon, 6 Aug 2012 03:27:51 -0400 (EDT) Subject: [Engine-devel] oVirt engine core Message-ID: <660234482.447166.1344238071709.JavaMail.root@redhat.com> The following meeting has been modified: Subject: oVirt engine core Organiser: "Livnat Peer" Time: 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem [MODIFIED] Recurrence : Every 2 weeks on Wednesday End by 14 Aug, 2012 Effective 23 Nov, 2011 Invitees: engine-devel at ovirt.org; wangbo_bupt at hotmail.com; mkolesni at redhat.com; ykaul at redhat.com; ofrenkel at redhat.com; lhornyak at redhat.com; smizrahi at redhat.com; oschreib at redhat.com; sgordon at redhat.com; dedutta at cisco.com; emesika at redhat.com ... *~*~*~*~*~*~*~*~*~* Hi All, I am cancelling this series as we set a specific meeting for any subject that needs to be discussed. Some of the discussions are taking place on the ovirt general sync meetings. Thanks, Livnat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 3883 bytes Desc: not available URL: From lpeer at redhat.com Mon Aug 6 07:34:21 2012 From: lpeer at redhat.com (Livnat Peer) Date: Mon, 6 Aug 2012 03:34:21 -0400 (EDT) Subject: [Engine-devel] ovirt network Message-ID: <383645166.448526.1344238461084.JavaMail.root@redhat.com> The following is a new meeting request: Subject: ovirt network Organiser: "Livnat Peer" Time: Wednesday, 15 August, 2012, 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem Invitees: engine-devel at ovirt.org; vdsm-devel at lists.fedorahosted.org *~*~*~*~*~*~*~*~*~* Hi All, As discussed previously on the list, I am adding a monthly discussion on Networking in oVirt. In this meeting we'll discuss general status of networking and features that we're missing. Thanks, Livnat Bridge ID: 972506565679 Dial-in information: Reservationless-Plus Toll Free Dial-In Number (US & Canada): (800) 451-8679 Reservationless-Plus International Dial-In Number: (212) 729-5016 Conference code: 8425973915 Global Access Numbers Local: Australia, Sydney Dial-In #: 0289852326 Austria, Vienna Dial-In #: 012534978196 Belgium, Brussels Dial-In #: 027920405 China Dial-In #: 4006205013 Denmark, Copenhagen Dial-In #: 32729215 Finland, Helsinki Dial-In #: 0923194436 France, Paris Dial-In #: 0170377140 Germany, Berlin Dial-In #: 030300190579 Ireland, Dublin Dial-In #: 014367793 Italy, Milan Dial-In #: 0236269529 Netherlands, Amsterdam Dial-In #: 0207975872 Norway, Oslo Dial-In #: 21033188 Singapore Dial-In #: 64840858 Spain, Barcelona Dial-In #: 935452328 Sweden, Stockholm Dial-In #: 0850513770 Switzerland, Geneva Dial-In #: 0225927881 United Kingdom Dial-In #: 02078970515 United Kingdom Dial-In #: 08445790676 United Kingdom, LocalCall Dial-In #: 08445790678 United States Dial-In #: 2127295016 Global Access Numbers Tollfree: Argentina Dial-In #: 8004441016 Australia Dial-In #: 1800337169 Austria Dial-In #: 0800005898 Bahamas Dial-In #: 18002054776 Bahrain Dial-In #: 80004377 Belgium Dial-In #: 080048325 Brazil Dial-In #: 08008921002 Bulgaria Dial-In #: 008001100236 Chile Dial-In #: 800370228 Colombia Dial-In #: 018009134033 Costa Rica Dial-In #: 08000131048 Cyprus Dial-In #: 80095297 Czech Republic Dial-In #: 800700318 Denmark Dial-In #: 80887114 Dominican Republic Dial-In #: 18887512313 Estonia Dial-In #: 8000100232 Finland Dial-In #: 0800117116 France Dial-In #: 0805632867 Germany Dial-In #: 8006647541 Greece Dial-In #: 00800127562 Hong Kong Dial-In #: 800930349 Hungary Dial-In #: 0680016796 Iceland Dial-In #: 8008967 India Dial-In #: 0008006501533 Indonesia Dial-In #: 0018030179162 Ireland Dial-In #: 1800932401 Israel Dial-In #: 1809462557 Italy Dial-In #: 800985897 Jamaica Dial-In #: 18002050328 Japan Dial-In #: 0120934453 Korea (South) Dial-In #: 007986517393 Latvia Dial-In #: 80003339 Lithuania Dial-In #: 880030479 Luxembourg Dial-In #: 80026595 Malaysia Dial-In #: 1800814451 Mexico Dial-In #: 0018664590915 New Zealand Dial-In #: 0800888167 Norway Dial-In #: 80012994 Panama Dial-In #: 008002269184 Philippines Dial-In #: 180011100991 Poland Dial-In #: 008001210187 Portugal Dial-In #: 800814625 Russian Federation Dial-In #: 81080028341012 Saint Kitts and Nevis Dial-In #: 18002059252 Singapore Dial-In #: 8006162235 Slovak Republic Dial-In #: 0800001441 South Africa Dial-In #: 0800981148 Spain Dial-In #: 800300524 Sweden Dial-In #: 200896860 Switzerland Dial-In #: 800650077 Taiwan Dial-In #: 00801127141 Thailand Dial-In #: 001800656966 Trinidad and Tobago Dial-In #: 18002024615 United Arab Emirates Dial-In #: 8000650591 United Kingdom Dial-In #: 08006948057 United States Dial-In #: 8004518679 Uruguay Dial-In #: 00040190315 Venezuela Dial-In #: 08001627182 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 8917 bytes Desc: not available URL: From lpeer at redhat.com Mon Aug 6 07:55:07 2012 From: lpeer at redhat.com (Livnat Peer) Date: Mon, 6 Aug 2012 03:55:07 -0400 (EDT) Subject: [Engine-devel] ovirt network Message-ID: <291105241.454699.1344239707543.JavaMail.root@redhat.com> The following meeting has been modified: Subject: ovirt network Organiser: "Livnat Peer" Time: 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem Recurrence : Every 5 weeks on Wednesday No end date Effective 15 Aug, 2012 Invitees: engine-devel at ovirt.org; vdsm-devel at lists.fedorahosted.org; GARGYA at de.ibm.com; dyasny at redhat.com; simon at redhat.com *~*~*~*~*~*~*~*~*~* Hi All, As discussed previously on the list, I am adding a monthly discussion on Networking in oVirt. In this meeting we'll discuss general status of networking and features that we're missing. Thanks, Livnat Bridge ID: 972506565679 Dial-in information: Reservationless-Plus Toll Free Dial-In Number (US & Canada): (800) 451-8679 Reservationless-Plus International Dial-In Number: (212) 729-5016 Conference code: 8425973915 Global Access Numbers Local: Australia, Sydney Dial-In #: 0289852326 Austria, Vienna Dial-In #: 012534978196 Belgium, Brussels Dial-In #: 027920405 China Dial-In #: 4006205013 Denmark, Copenhagen Dial-In #: 32729215 Finland, Helsinki Dial-In #: 0923194436 France, Paris Dial-In #: 0170377140 Germany, Berlin Dial-In #: 030300190579 Ireland, Dublin Dial-In #: 014367793 Italy, Milan Dial-In #: 0236269529 Netherlands, Amsterdam Dial-In #: 0207975872 Norway, Oslo Dial-In #: 21033188 Singapore Dial-In #: 64840858 Spain, Barcelona Dial-In #: 935452328 Sweden, Stockholm Dial-In #: 0850513770 Switzerland, Geneva Dial-In #: 0225927881 United Kingdom Dial-In #: 02078970515 United Kingdom Dial-In #: 08445790676 United Kingdom, LocalCall Dial-In #: 08445790678 United States Dial-In #: 2127295016 Global Access Numbers Tollfree: Argentina Dial-In #: 8004441016 Australia Dial-In #: 1800337169 Austria Dial-In #: 0800005898 Bahamas Dial-In #: 18002054776 Bahrain Dial-In #: 80004377 Belgium Dial-In #: 080048325 Brazil Dial-In #: 08008921002 Bulgaria Dial-In #: 008001100236 Chile Dial-In #: 800370228 Colombia Dial-In #: 018009134033 Costa Rica Dial-In #: 08000131048 Cyprus Dial-In #: 80095297 Czech Republic Dial-In #: 800700318 Denmark Dial-In #: 80887114 Dominican Republic Dial-In #: 18887512313 Estonia Dial-In #: 8000100232 Finland Dial-In #: 0800117116 France Dial-In #: 0805632867 Germany Dial-In #: 8006647541 Greece Dial-In #: 00800127562 Hong Kong Dial-In #: 800930349 Hungary Dial-In #: 0680016796 Iceland Dial-In #: 8008967 India Dial-In #: 0008006501533 Indonesia Dial-In #: 0018030179162 Ireland Dial-In #: 1800932401 Israel Dial-In #: 1809462557 Italy Dial-In #: 800985897 Jamaica Dial-In #: 18002050328 Japan Dial-In #: 0120934453 Korea (South) Dial-In #: 007986517393 Latvia Dial-In #: 80003339 Lithuania Dial-In #: 880030479 Luxembourg Dial-In #: 80026595 Malaysia Dial-In #: 1800814451 Mexico Dial-In #: 0018664590915 New Zealand Dial-In #: 0800888167 Norway Dial-In #: 80012994 Panama Dial-In #: 008002269184 Philippines Dial-In #: 180011100991 Poland Dial-In #: 008001210187 Portugal Dial-In #: 800814625 Russian Federation Dial-In #: 81080028341012 Saint Kitts and Nevis Dial-In #: 18002059252 Singapore Dial-In #: 8006162235 Slovak Republic Dial-In #: 0800001441 South Africa Dial-In #: 0800981148 Spain Dial-In #: 800300524 Sweden Dial-In #: 200896860 Switzerland Dial-In #: 800650077 Taiwan Dial-In #: 00801127141 Thailand Dial-In #: 001800656966 Trinidad and Tobago Dial-In #: 18002024615 United Arab Emirates Dial-In #: 8000650591 United Kingdom Dial-In #: 08006948057 United States Dial-In #: 8004518679 Uruguay Dial-In #: 00040190315 Venezuela Dial-In #: 08001627182 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 9718 bytes Desc: not available URL: From Ricky.Hopper at netapp.com Tue Aug 7 14:06:01 2012 From: Ricky.Hopper at netapp.com (Hopper, Ricky) Date: Tue, 7 Aug 2012 14:06:01 +0000 Subject: [Engine-devel] Getting the storage domain of a LunDisk Message-ID: Hi all, Does anyone know the best way to get the storage domain of a LunDisk object? I don't know if there's some query or anything I'm failing to find, but I can't find anything within the class itself that's definitive. Thanks, - Ricky -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpadgett at redhat.com Tue Aug 7 14:41:27 2012 From: gpadgett at redhat.com (Greg Padgett) Date: Tue, 07 Aug 2012 10:41:27 -0400 Subject: [Engine-devel] Getting the storage domain of a LunDisk In-Reply-To: References: Message-ID: <50212917.9010507@redhat.com> On 08/07/2012 10:06 AM, Hopper, Ricky wrote: > Hi all, > > Does anyone know the best way to get the storage domain of a LunDisk object? > I don't know if there's some query or anything I'm failing to find, but I > can't find anything within the class itself that's definitive. > > Thanks, > > - Ricky > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > Hi Ricky, As far as I can see, the DiskImage class can give this--but AFAIK there is no way to convert a LunDisk to a DiskImage. I am curious if the storage domain info should be moved into the Disk class, which is a superclass of both LunDisk and DiskImage, or if perhaps there is a better place for it? Here's a partial class hierarchy that I mapped out when looking into this for a bug I'm working on: BaseDisk '- Disk |- LunDisk '- DiskImageBase '- DiskImage Thanks, Greg From Dustin.Schoenbrun at netapp.com Tue Aug 7 17:02:52 2012 From: Dustin.Schoenbrun at netapp.com (Schoenbrun, Dustin) Date: Tue, 7 Aug 2012 17:02:52 +0000 Subject: [Engine-devel] oVirt UI Plugins Meeting In-Reply-To: <5017FC38.4060601@redhat.com> Message-ID: Hey Itamar, I'm going to be working on the plugin framework over on my side. I'll let you guys know if I need anything or have something cool to show off. Thanks! -- Dustin On 7/31/12 11:39 AM, "Itamar Heim" wrote: >my notes from the call: > >vojtech will schedule a follow up for two weeks from now same time. > >next step is to provide simple sample java script based plugins in the >following order: >1. add a main tab showing html page from external url >2. add a sub tab showing html page from external url >3. add a context menu item opening an external url >4. plugin performs a REST API call to the engine >5. cross origin header - plugin asking another server/url a question > >if anyone wants to pick on of those up rather than wait for vojtech on >them - would be great. > >Thanks, > Itamar > > >On 07/30/2012 03:03 PM, Vojtech Szocs wrote: > From iheim at redhat.com Tue Aug 7 23:41:12 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 08 Aug 2012 02:41:12 +0300 Subject: [Engine-devel] Getting the storage domain of a LunDisk In-Reply-To: References: Message-ID: <5021A798.30106@redhat.com> On 08/07/2012 05:06 PM, Hopper, Ricky wrote: > Hi all, > > Does anyone know the best way to get the storage domain of a LunDisk > object? I don't know if there's some query or anything I'm failing to > find, but I can't find anything within the class itself that's definitive. a LUN disk is not in any storage domain? From mkolesni at redhat.com Wed Aug 8 06:58:04 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Wed, 8 Aug 2012 02:58:04 -0400 (EDT) Subject: [Engine-devel] Getting the storage domain of a LunDisk In-Reply-To: <5021A798.30106@redhat.com> Message-ID: <681086246.1762748.1344409084599.JavaMail.root@redhat.com> ----- Original Message ----- > On 08/07/2012 05:06 PM, Hopper, Ricky wrote: > > Hi all, > > > > Does anyone know the best way to get the storage domain of a > > LunDisk > > object? I don't know if there's some query or anything I'm failing > > to > > find, but I can't find anything within the class itself that's > > definitive. > > a LUN disk is not in any storage domain? AFAIK LUN disk simply resides on a LUN, that's why it has the field: private LUNs lun; A SD can possibly also reside on this LUN if this behaviour is desirable (I'm not sure if this is acceptable or not in oVirt), but is not necessary for the disk to function. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From djasa at redhat.com Wed Aug 8 12:22:34 2012 From: djasa at redhat.com (David =?UTF-8?Q?Ja=C5=A1a?=) Date: Wed, 08 Aug 2012 14:22:34 +0200 Subject: [Engine-devel] wiki page RFC: How to Connect to SPICE Console Without Portal Message-ID: <1344428554.31846.3.camel@dhcp-29-7.brq.redhat.com> Hi, Based on several recent threads asking how to connect to a VM via spice outside of the Portals, I've put up together this wiki page: http://wiki.ovirt.org/wiki/How_to_Connect_to_SPICE_Console_Without_Portal Comments or corrections are welcome. David -- David Ja?a, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 From Ricky.Hopper at netapp.com Wed Aug 8 13:09:56 2012 From: Ricky.Hopper at netapp.com (Hopper, Ricky) Date: Wed, 8 Aug 2012 13:09:56 +0000 Subject: [Engine-devel] Getting the storage domain of a LunDisk In-Reply-To: <681086246.1762748.1344409084599.JavaMail.root@redhat.com> Message-ID: On 8/8/12 2:58 AM, "Mike Kolesnik" wrote: >----- Original Message ----- >> On 08/07/2012 05:06 PM, Hopper, Ricky wrote: >> > Hi all, >> > >> > Does anyone know the best way to get the storage domain of a >> > LunDisk >> > object? I don't know if there's some query or anything I'm failing >> > to >> > find, but I can't find anything within the class itself that's >> > definitive. >> >> a LUN disk is not in any storage domain? > >AFAIK LUN disk simply resides on a LUN, that's why it has the field: > private LUNs lun; > >A SD can possibly also reside on this LUN if this behaviour is desirable >(I'm not sure if this is acceptable or not in oVirt), but is not >necessary for the disk to function. Alright, that makes sense. Thanks! > >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> From dfediuck at redhat.com Wed Aug 8 21:41:31 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 8 Aug 2012 17:41:31 -0400 (EDT) Subject: [Engine-devel] Unifying (parts of) commit templates. In-Reply-To: <1398935866.6077472.1344461913159.JavaMail.root@redhat.com> Message-ID: <1629476053.6079144.1344462091144.JavaMail.root@redhat.com> Hi All, It seems that for commit subjects, vdsm is using a general concept of- BZ#??????? some message I'd like to suggest adopting it to the engine template we use today- BZ#??????? : short summary under 50 chars This may help us write some scripts which will work both for vdsm and engine BZs. Doron. From robert at middleswarth.net Thu Aug 9 02:05:51 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Wed, 08 Aug 2012 22:05:51 -0400 Subject: [Engine-devel] Jenkins testing of patch what do we want to check? Message-ID: <50231AFF.50203@middleswarth.net> Unlike vdsm were the entire unit test takes 2 min to process the Engine has several jobs many that take longer then 15 min to run. So here is the questions. What current jobs do we want to run on each patch submission? -- Thanks Robert Middleswarth @rmiddle (twitter/IRC) From iheim at redhat.com Thu Aug 9 05:37:03 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 09 Aug 2012 08:37:03 +0300 Subject: [Engine-devel] Jenkins testing of patch what do we want to check? In-Reply-To: <50231AFF.50203@middleswarth.net> References: <50231AFF.50203@middleswarth.net> Message-ID: <50234C7F.4030404@redhat.com> On 08/09/2012 05:05 AM, Robert Middleswarth wrote: > Unlike vdsm were the entire unit test takes 2 min to process the Engine > has several jobs many that take longer then 15 min to run. So here is > the questions. > > What current jobs do we want to run on each patch submission? > all... developers should verify the default unitests, or specific tests for areas they are touching. but jenkins should run the findbugs, gwt compilation and enable all unitests. From lpeer at redhat.com Thu Aug 9 06:11:45 2012 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 09 Aug 2012 09:11:45 +0300 Subject: [Engine-devel] Unifying (parts of) commit templates. In-Reply-To: <1629476053.6079144.1344462091144.JavaMail.root@redhat.com> References: <1629476053.6079144.1344462091144.JavaMail.root@redhat.com> Message-ID: <502354A1.5090900@redhat.com> On 09/08/12 00:41, Doron Fediuck wrote: > Hi All, > It seems that for commit subjects, vdsm is using a general concept of- > > BZ#??????? some message > > I'd like to suggest adopting it to the engine template we use today- > > BZ#??????? : short summary under 50 chars > > This may help us write some scripts which will work both for vdsm and engine BZs. > +1 with a small change - adding a \n after the bz number - BZ#??????? : short summary under 50 chars Long description of what this commit is about Livnat > Doron. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From eedri at redhat.com Thu Aug 9 07:26:37 2012 From: eedri at redhat.com (Eyal Edri) Date: Thu, 9 Aug 2012 03:26:37 -0400 (EDT) Subject: [Engine-devel] Jenkins testing of patch what do we want to check? In-Reply-To: <50234C7F.4030404@redhat.com> Message-ID: <500774761.2278461.1344497197213.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Itamar Heim" > To: "Robert Middleswarth" > Cc: engine-devel at ovirt.org, "infra" > Sent: Thursday, August 9, 2012 8:37:03 AM > Subject: Re: [Engine-devel] Jenkins testing of patch what do we want to check? > > On 08/09/2012 05:05 AM, Robert Middleswarth wrote: > > Unlike vdsm were the entire unit test takes 2 min to process the > > Engine > > has several jobs many that take longer then 15 min to run. So here > > is > > the questions. > > > > What current jobs do we want to run on each patch submission? > > > > all... > developers should verify the default unitests, or specific tests for > areas they are touching. > but jenkins should run the findbugs, gwt compilation and enable all > unitests. > like itamar said. we should run every job we can to verify code with to avoid regressions and errors pre-commit. - create/upgrade db - checkstyle, findbugs, unit-tests, dao-tests... - maybe animal sniffer - to verify backward compatibility to jdk6 We can't do it with current jenkins resources, that's the main reason why we need more slaves. > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Thu Aug 9 08:11:24 2012 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 09 Aug 2012 11:11:24 +0300 Subject: [Engine-devel] Data center level 3.2 and cluster level 3.2 Message-ID: <502370AC.6070903@redhat.com> Hi All, We pushed a fix for bridge-less networks that requires vdsm version 3.2. Does anyone have any concerns with adding engine cluster and DC 3.2 so we can enable bridge-less networks? We had in mind that DC 3.2 will support clusters 3.0, 3.1 and 3.2. Cluster and DC 3.2 will have parity features to 3.1 ATM, except for bridgless networks that will require 3.2 DC. Thanks, Livnat From iheim at redhat.com Thu Aug 9 08:52:54 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 09 Aug 2012 11:52:54 +0300 Subject: [Engine-devel] Unifying (parts of) commit templates. In-Reply-To: <502354A1.5090900@redhat.com> References: <1629476053.6079144.1344462091144.JavaMail.root@redhat.com> <502354A1.5090900@redhat.com> Message-ID: <50237A66.5070302@redhat.com> On 08/09/2012 09:11 AM, Livnat Peer wrote: > On 09/08/12 00:41, Doron Fediuck wrote: >> Hi All, >> It seems that for commit subjects, vdsm is using a general concept of- >> >> BZ#??????? some message >> >> I'd like to suggest adopting it to the engine template we use today- >> >> BZ#??????? : short summary under 50 chars >> >> This may help us write some scripts which will work both for vdsm and engine BZs. >> > > +1 > with a small change - adding a \n after the bz number - wouldn't this kill git shortlog? patch short summary must be in first line iirc > BZ#??????? > : > short summary under 50 chars > > Long description of what this commit is about > > Livnat > >> Doron. >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Thu Aug 9 08:54:38 2012 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 09 Aug 2012 11:54:38 +0300 Subject: [Engine-devel] Unifying (parts of) commit templates. In-Reply-To: <50237A66.5070302@redhat.com> References: <1629476053.6079144.1344462091144.JavaMail.root@redhat.com> <502354A1.5090900@redhat.com> <50237A66.5070302@redhat.com> Message-ID: <50237ACE.9080603@redhat.com> On 09/08/12 11:52, Itamar Heim wrote: > On 08/09/2012 09:11 AM, Livnat Peer wrote: >> On 09/08/12 00:41, Doron Fediuck wrote: >>> Hi All, >>> It seems that for commit subjects, vdsm is using a general concept of- >>> >>> BZ#??????? some message >>> >>> I'd like to suggest adopting it to the engine template we use today- >>> >>> BZ#??????? >> webadmin>: short summary under 50 chars >>> >>> This may help us write some scripts which will work both for vdsm and >>> engine BZs. >>> >> >> +1 >> with a small change - adding a \n after the bz number - > > wouldn't this kill git shortlog? > patch short summary must be in first line iirc > yes it will. +1 for Doron's initial proposal >> BZ#??????? >> : >> short summary under 50 chars >> >> Long description of what this commit is about >> >> Livnat >> >>> Doron. >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > > From robert at middleswarth.net Thu Aug 9 13:18:48 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Thu, 09 Aug 2012 09:18:48 -0400 Subject: [Engine-devel] Jenkins testing of patch what do we want to check? In-Reply-To: <500774761.2278461.1344497197213.JavaMail.root@redhat.com> References: <500774761.2278461.1344497197213.JavaMail.root@redhat.com> Message-ID: <5023B8B8.7060509@middleswarth.net> On 08/09/2012 03:26 AM, Eyal Edri wrote: > > ----- Original Message ----- >> From: "Itamar Heim" >> To: "Robert Middleswarth" >> Cc: engine-devel at ovirt.org, "infra" >> Sent: Thursday, August 9, 2012 8:37:03 AM >> Subject: Re: [Engine-devel] Jenkins testing of patch what do we want to check? >> >> On 08/09/2012 05:05 AM, Robert Middleswarth wrote: >>> Unlike vdsm were the entire unit test takes 2 min to process the >>> Engine >>> has several jobs many that take longer then 15 min to run. So here >>> is >>> the questions. >>> >>> What current jobs do we want to run on each patch submission? >>> >> all... >> developers should verify the default unitests, or specific tests for >> areas they are touching. >> but jenkins should run the findbugs, gwt compilation and enable all >> unitests. >> > like itamar said. > we should run every job we can to verify code with to avoid regressions and errors pre-commit. > > - create/upgrade db > - checkstyle, findbugs, unit-tests, dao-tests... > - maybe animal sniffer - to verify backward compatibility to jdk6 > > We can't do it with current jenkins resources, that's the main reason why > we need more slaves. Ok I will start testing a pre patch processing of the diff test on ovirt-engine. Please be aware of the fact I will be testing today so if you get a fail on something today it might be a fails positive until I have a change to confirm the process. I will send out an email when I feel I have all to bugs worked out. Thanks Robert >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> -- Thanks Robert Middleswarth @rmiddle (twitter/IRC) From iheim at redhat.com Thu Aug 9 14:05:29 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 09 Aug 2012 17:05:29 +0300 Subject: [Engine-devel] Jenkins testing of patch what do we want to check? In-Reply-To: <5023B8B8.7060509@middleswarth.net> References: <500774761.2278461.1344497197213.JavaMail.root@redhat.com> <5023B8B8.7060509@middleswarth.net> Message-ID: <5023C3A9.3060108@redhat.com> On 08/09/2012 04:18 PM, Robert Middleswarth wrote: > On 08/09/2012 03:26 AM, Eyal Edri wrote: >> >> ----- Original Message ----- >>> From: "Itamar Heim" >>> To: "Robert Middleswarth" >>> Cc: engine-devel at ovirt.org, "infra" >>> Sent: Thursday, August 9, 2012 8:37:03 AM >>> Subject: Re: [Engine-devel] Jenkins testing of patch what do we want >>> to check? >>> >>> On 08/09/2012 05:05 AM, Robert Middleswarth wrote: >>>> Unlike vdsm were the entire unit test takes 2 min to process the >>>> Engine >>>> has several jobs many that take longer then 15 min to run. So here >>>> is >>>> the questions. >>>> >>>> What current jobs do we want to run on each patch submission? >>>> >>> all... >>> developers should verify the default unitests, or specific tests for >>> areas they are touching. >>> but jenkins should run the findbugs, gwt compilation and enable all >>> unitests. >>> >> like itamar said. >> we should run every job we can to verify code with to avoid >> regressions and errors pre-commit. >> >> - create/upgrade db >> - checkstyle, findbugs, unit-tests, dao-tests... >> - maybe animal sniffer - to verify backward compatibility to jdk6 >> >> We can't do it with current jenkins resources, that's the main reason why >> we need more slaves. > Ok I will start testing a pre patch processing of the diff test on > ovirt-engine. Please be aware of the fact I will be testing today so if > you get a fail on something today it might be a fails positive until I > have a change to confirm the process. I will send out an email when I > feel I have all to bugs worked out. you can do a silent testing in gerrit plugin - so it will tell you the job failed, but not update gerrit with it. From amureini at redhat.com Thu Aug 9 15:41:09 2012 From: amureini at redhat.com (Allon Mureinik) Date: Thu, 9 Aug 2012 11:41:09 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks Message-ID: <304934555.2667108.1344526869797.JavaMail.root@redhat.com> Hi guys, As you may know the engine currently has the ability to fire an SPM task, and be asynchronously be "woken-up" when it ends. This is great, but we found the for the Live Storage Migration feature we need something a bit complex - the ability to have a series of async tasks in a single control flow. Here's my initial design for this, your comments and criticism would be welcome: http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design -Allon From vszocs at redhat.com Thu Aug 9 16:56:06 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Thu, 9 Aug 2012 12:56:06 -0400 (EDT) Subject: [Engine-devel] Update on UI Plugins feature: PoC patch revision 2 In-Reply-To: <743744301.7825703.1344529966706.JavaMail.root@redhat.com> Message-ID: <800461780.7830021.1344531366474.JavaMail.root@redhat.com> Hi guys, I've improved the original plugin infrastructure, please find the 2nd revision of UI Plugins proof-of-concept (PoC) patch attached. Here's a quick summary of changes: * PluginSourcePageServlet looks up the actual plugin code in local filesystem (.js), so you can experiment with different plugins. If you want to add new plugins, just modify WebadminDynamicHostingServlet.writeAdditionalJsData() method. * PluginManager now calls UiInit function on plugins (plugin objects) within the scope of WebAdmin main section (user has logged in, main section UI is initialized and ready), and disables plugin execution outside main section (e.g. when the user logs out) . (Please find a sample plugin code attached as well. PluginSourcePageServlet tries to load it from a hard-coded location in local filesystem, you probably want to modify this to suit your environment.) UiInit function is a special event handler function that gets called once, after plugin reports as ready, and before other event handler functions are called. UiInit function would be a good place to extend default WebAdmin UI (adding main tab, etc.). This is illustrated on the following use case: 1. user requests WebAdmin page, during initialization a plugin iframe gets created and attached to DOM, plugin HTML page gets requested asynchronously, application init code still runs so iframe plugin code evaluation is blocked (this is because of JavaScript runtime being single-threaded in its nature) 2. application init code finishes, plugin code gets evaluated, plugin registers itself into pluginApi.plugins and reports back as ready (calls the ready function) 3. since the user is still in login section (not logged into WebAdmin), plugin invocation is disabled, until the user logs in 4. user logs into the application, UI redirects to main section, and after UI gets initialized, plugin invocation is enabled 5. UiInit function is called on the plugin 6. user performs some actions and WebAdmin calls different functions on the plugin 7. assume the user logs out, plugin invocation is disabled, until the user logs in again 8. user logs in again, but UiInit isn't called now because it has been called already before 9. goto step 6 The reason why UiInit is called just once (after visiting main section for the first time), is because WebAdmin UI (Views) are mostly singletons, so even when you switch to different section (login section) and go to main section again, singleton Views will still be there, with any adjustments/extensions made previously by plugins. Now, as for the next steps, we can proceed with actual tasks Itamar outlined in his email: * use UiInit event to extend UI (add main tab, etc.) * define other events (table context menu event, etc.) * allow plugins to do REST API calls through pluginApi object I've tried to implement "add main tab" functionality. Unfortunately, this isn't quite easy to do with GWT-Platform (GWTP) framework we use. Each tab in WebAdmin has some place (GWT history token = URL hash fragment) associated. The way GWTP handles tabs is that individual tabs (Presenter) reveal themselves into tab container (TabContainerPresenter), with presenter reveal flow being processed bottom-up. I strongly suggest to go through [http://code.google.com/p/gwt-platform/wiki/GettingStarted] to get some basic understanding of GWTP framework and how tabs work in general. Long story short, to add tabs dynamically in a proper way, we need to write custom presenter proxy, here are some links on this matter: Discussion [https://groups.google.com/forum/#!topic/gwt-platform/aJrGOf9Gu04/discussion ] Dynamic tab example [http://code.google.com/r/goudreauchristian-update/source/browse/ ] Working demo [http://olivier.monaco.free.fr/lab/gwtp-editor/] So adding main/sub tabs is a task that will require some additional work, especially since we wish to combine both static tabs and dynamic tabs in one tab container. I'll try to work on this one. On the other hand, it would be great if others could take the latest PoC patch (attached), and experiment with other stuff like context menu events, REST API calls, etc. You can always reach me on #ovirt (vszocs) if you have a question or need help with anything. Cheers, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: myPlugin.js.example Type: application/octet-stream Size: 276 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-WIP-UI-Plugins-PoC-revision-2.patch Type: text/x-patch Size: 23281 bytes Desc: not available URL: From iheim at redhat.com Thu Aug 9 22:03:03 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 10 Aug 2012 01:03:03 +0300 Subject: [Engine-devel] Data center level 3.2 and cluster level 3.2 In-Reply-To: <502370AC.6070903@redhat.com> References: <502370AC.6070903@redhat.com> Message-ID: <50243397.4010108@redhat.com> On 08/09/2012 11:11 AM, Livnat Peer wrote: > Hi All, > > We pushed a fix for bridge-less networks that requires vdsm version 3.2. > Does anyone have any concerns with adding engine cluster and DC 3.2 so > we can enable bridge-less networks? > > We had in mind that DC 3.2 will support clusters 3.0, 3.1 and 3.2. > Cluster and DC 3.2 will have parity features to 3.1 ATM, except for > bridgless networks that will require 3.2 DC. sounds ok to me. remember you need to clone config which are about compatibility levels. I'd review this to begin with this one, though i imagine we have more by now. commit 700b13a515939c0a413288d3a7f71d38351f9ac6 Author: Eli Mesika Date: Thu Feb 2 00:53:48 2012 +0200 core: Adding version dependant config values for version 3.1 vdc_options includes settings marked with version='general' that indicates that this setting is not version dependant and with other settings marked with version='x.y' where x is the realesae number and y is the major number of the version. Those entries indicates version dependant values and are repeated with the correct setting for each added version. V2: Fixing EmulatedMachine to Fedora Change-Id: I6bbc61210d175f178d2d1974b1f09944bff8c070 From emesika at redhat.com Fri Aug 10 00:40:48 2012 From: emesika at redhat.com (Eli Mesika) Date: Thu, 9 Aug 2012 20:40:48 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <304934555.2667108.1344526869797.JavaMail.root@redhat.com> Message-ID: <532949446.36197412.1344559248995.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Allon Mureinik" > To: "engine-devel" > Cc: "Eduardo Warszawski" , "Yeela Kaplan" , "Federico Simoncelli" > , "Liron Aravot" > Sent: Thursday, August 9, 2012 6:41:09 PM > Subject: [Engine-devel] Serial Execution of Async Tasks > > Hi guys, > > As you may know the engine currently has the ability to fire an SPM > task, and be asynchronously be "woken-up" when it ends. > This is great, but we found the for the Live Storage Migration > feature we need something a bit complex - the ability to have a > series of async tasks in a single control flow. > > Here's my initial design for this, your comments and criticism would > be welcome: > http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design Apart from the short explanation & flow , since this is a detailed design , I would add 1) Class diagram 2) Flow diagram > > > -Allon > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Fri Aug 10 19:48:16 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Fri, 10 Aug 2012 15:48:16 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <532949446.36197412.1344559248995.JavaMail.root@redhat.com> Message-ID: <536706378.9223948.1344628096135.JavaMail.root@redhat.com> ----- Original Message ----- From: "Eli Mesika" To: "Allon Mureinik" Cc: "Liron Aravot" , "Federico Simoncelli" , "engine-devel" , "Eduardo Warszawski" , "Yeela Kaplan" Sent: Friday, August 10, 2012 3:40:48 AM Subject: Re: [Engine-devel] Serial Execution of Async Tasks ----- Original Message ----- > From: "Allon Mureinik" > To: "engine-devel" > Cc: "Eduardo Warszawski" , "Yeela Kaplan" , "Federico Simoncelli" > , "Liron Aravot" > Sent: Thursday, August 9, 2012 6:41:09 PM > Subject: [Engine-devel] Serial Execution of Async Tasks > > Hi guys, > > As you may know the engine currently has the ability to fire an SPM > task, and be asynchronously be "woken-up" when it ends. > This is great, but we found the for the Live Storage Migration > feature we need something a bit complex - the ability to have a > series of async tasks in a single control flow. > > Here's my initial design for this, your comments and criticism would > be welcome: > http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design Apart from the short explanation & flow , since this is a detailed design , I would add 1) Class diagram 2) Flow diagram +1 I am also interested to get a flow how a task is created (i.e - replacement of ConcreateCreateTask) - but this will be handled in what Eli has asked for. In addition, you have two titles of "Successful Execution". At "compensate" - see how revertTasks currently behaves. Also read - http://wiki.ovirt.org/wiki/Main_Page/features/RunningCommandsOnEndActionFailure This is the work I did for CloneVmFromSnapshot - not saying it's perfect - but you should have an infrastructure/pattern to rollback not just via spmRevertTask but also using an engine command. Yair > > > -Allon > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > _______________________________________________ Engine-devel mailing list Engine-devel at ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Sat Aug 11 22:19:16 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 12 Aug 2012 01:19:16 +0300 Subject: [Engine-devel] Proposal to add Tomas Jelinek as maintainers to webadmin and user portal Message-ID: <5026DA64.8000700@redhat.com> Tomas has worked on oVirt for the past 9 months, developing several features (including merging the infrastructure of userportal over the webadmin infra) and fixing a slew of bugs. I'd like to propose Tomas as a maintainer for the webadmin and user portal. From iheim at redhat.com Sat Aug 11 22:19:50 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 12 Aug 2012 01:19:50 +0300 Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging Message-ID: <5026DA86.2030703@redhat.com> Juan has worked on oVirt for the past 12 months, with considerable contribution to the packaging and deployment parts. he also almost single handedly packaged ovirt engine for fedora and the new ovirt-engine service for fedora. I'd like to propose Juan as a maintainer of the packaging subproject of ovirt-engine. From iheim at redhat.com Sat Aug 11 22:20:17 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 12 Aug 2012 01:20:17 +0300 Subject: [Engine-devel] Proposal to add Vojtech Szocs as maintainer to user portal Message-ID: <5026DAA1.6090707@redhat.com> Vojtech has been working on the webadmin since its inception. His recent work that allowed the user portal and web-admin to be based on the same infrastructure. He also ported the user portal to work on top of this shared infrastructure. I'd like to propose Vojtech as a maintainer of the user portal. From iheim at redhat.com Sat Aug 11 22:20:43 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 12 Aug 2012 01:20:43 +0300 Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin Message-ID: <5026DABB.9080708@redhat.com> Alona has worked on oVirt for the past 9 months, developing several features in the webadmin (including localization and integrated dashboards) and also a slew of bugs... I'd like to propose Alona as a maintainer of the webadmin From bazulay at redhat.com Sat Aug 11 23:38:32 2012 From: bazulay at redhat.com (Barak Azulay) Date: Sat, 11 Aug 2012 19:38:32 -0400 (EDT) Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <5026DA86.2030703@redhat.com> References: <5026DA86.2030703@redhat.com> Message-ID: <26A7D761-5EE6-4446-B2E9-31C244EC4A05@redhat.com> Ack He has proven himself to be highly effective & professional Barak Azulay On Aug 12, 2012, at 1:19, Itamar Heim wrote: > Juan has worked on oVirt for the past 12 months, with considerable contribution to the packaging and deployment parts. > he also almost single handedly packaged ovirt engine for fedora and the new ovirt-engine service for fedora. > > I'd like to propose Juan as a maintainer of the packaging subproject of ovirt-engine. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lpeer at redhat.com Sun Aug 12 05:02:07 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 12 Aug 2012 08:02:07 +0300 Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <5026DA86.2030703@redhat.com> References: <5026DA86.2030703@redhat.com> Message-ID: <502738CF.4050908@redhat.com> On 12/08/12 01:19, Itamar Heim wrote: > Juan has worked on oVirt for the past 12 months, with considerable > contribution to the packaging and deployment parts. > he also almost single handedly packaged ovirt engine for fedora and the > new ovirt-engine service for fedora. > > I'd like to propose Juan as a maintainer of the packaging subproject of > ovirt-engine. > ________________________________ +1, Juan put a lot of work in packaging ovirt-engine for Fedora, thanks to his work we were able to push it to Fedora 17. _______________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From lpeer at redhat.com Sun Aug 12 05:23:00 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 12 Aug 2012 08:23:00 +0300 Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin In-Reply-To: <5026DABB.9080708@redhat.com> References: <5026DABB.9080708@redhat.com> Message-ID: <50273DB4.6040005@redhat.com> On 12/08/12 01:20, Itamar Heim wrote: > Alona has worked on oVirt for the past 9 months, developing several > features in the webadmin (including localization and integrated > dashboards) and also a slew of bugs... > > I'd like to propose Alona as a maintainer of the webadmin In the past 3 months Alona was focused on the new setup network dialog and she did a great Job, we started with a dialog that de-facto was not working and ended up with one of the more attractive dialogs in the webadmin. +1 > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ecohen at redhat.com Sun Aug 12 05:47:25 2012 From: ecohen at redhat.com (Einav Cohen) Date: Sun, 12 Aug 2012 01:47:25 -0400 (EDT) Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin In-Reply-To: <50273DB4.6040005@redhat.com> Message-ID: <1921326042.12930380.1344750445436.JavaMail.root@redhat.com> > ----- Original Message ----- > From: "Livnat Peer" > Sent: Sunday, August 12, 2012 8:23:00 AM > > On 12/08/12 01:20, Itamar Heim wrote: > > Alona has worked on oVirt for the past 9 months, developing several > > features in the webadmin (including localization and integrated > > dashboards) and also a slew of bugs... > > > > I'd like to propose Alona as a maintainer of the webadmin > > In the past 3 months Alona was focused on the new setup network > dialog > and she did a great Job, we started with a dialog that de-facto was > not > working and ended up with one of the more attractive dialogs in the > webadmin. > > +1 +1 > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > From ovedo at redhat.com Sun Aug 12 06:05:17 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Sun, 12 Aug 2012 02:05:17 -0400 (EDT) Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin In-Reply-To: <1921326042.12930380.1344750445436.JavaMail.root@redhat.com> Message-ID: <97473392.6941104.1344751517676.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Einav Cohen" > To: "Itamar Heim" > Cc: "Asaf Shakarchi" , engine-devel at ovirt.org > Sent: Sunday, August 12, 2012 8:47:25 AM > Subject: Re: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin > > > ----- Original Message ----- > > From: "Livnat Peer" > > Sent: Sunday, August 12, 2012 8:23:00 AM > > > > On 12/08/12 01:20, Itamar Heim wrote: > > > Alona has worked on oVirt for the past 9 months, developing > > > several > > > features in the webadmin (including localization and integrated > > > dashboards) and also a slew of bugs... > > > > > > I'd like to propose Alona as a maintainer of the webadmin > > > > In the past 3 months Alona was focused on the new setup network > > dialog > > and she did a great Job, we started with a dialog that de-facto was > > not > > working and ended up with one of the more attractive dialogs in the > > webadmin. > > > > +1 > > +1 > +1 > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Sun Aug 12 06:28:30 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 12 Aug 2012 09:28:30 +0300 Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin In-Reply-To: <97473392.6941104.1344751517676.JavaMail.root@redhat.com> References: <97473392.6941104.1344751517676.JavaMail.root@redhat.com> Message-ID: <50274D0E.3050905@redhat.com> +1 On 08/12/2012 09:05 AM, Oved Ourfalli wrote: > > > ----- Original Message ----- >> From: "Einav Cohen" >> To: "Itamar Heim" >> Cc: "Asaf Shakarchi" , engine-devel at ovirt.org >> Sent: Sunday, August 12, 2012 8:47:25 AM >> Subject: Re: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin >> >>> ----- Original Message ----- >>> From: "Livnat Peer" >>> Sent: Sunday, August 12, 2012 8:23:00 AM >>> >>> On 12/08/12 01:20, Itamar Heim wrote: >>>> Alona has worked on oVirt for the past 9 months, developing >>>> several >>>> features in the webadmin (including localization and integrated >>>> dashboards) and also a slew of bugs... >>>> >>>> I'd like to propose Alona as a maintainer of the webadmin >>> >>> In the past 3 months Alona was focused on the new setup network >>> dialog >>> and she did a great Job, we started with a dialog that de-facto was >>> not >>> working and ended up with one of the more attractive dialogs in the >>> webadmin. >>> >>> +1 >> >> +1 >> > +1 >>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ovedo at redhat.com Sun Aug 12 06:38:00 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Sun, 12 Aug 2012 02:38:00 -0400 (EDT) Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <5026DA86.2030703@redhat.com> Message-ID: <73684232.6954416.1344753480067.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Itamar Heim" > To: "Barak Azulay" , "Eyal Edri" , "Ofer Schreiber" > Cc: engine-devel at ovirt.org > Sent: Sunday, August 12, 2012 1:19:50 AM > Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging > > Juan has worked on oVirt for the past 12 months, with considerable > contribution to the packaging and deployment parts. > he also almost single handedly packaged ovirt engine for fedora and > the > new ovirt-engine service for fedora. > > I'd like to propose Juan as a maintainer of the packaging subproject > of > ovirt-engine. +1. He did a wonderful work to the oVirt project, especially in the packaging area. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Sun Aug 12 06:39:23 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 12 Aug 2012 09:39:23 +0300 Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <532949446.36197412.1344559248995.JavaMail.root@redhat.com> References: <532949446.36197412.1344559248995.JavaMail.root@redhat.com> Message-ID: <50274F9B.9090506@redhat.com> On 10/08/12 03:40, Eli Mesika wrote: > > > ----- Original Message ----- >> From: "Allon Mureinik" >> To: "engine-devel" >> Cc: "Eduardo Warszawski" , "Yeela Kaplan" , "Federico Simoncelli" >> , "Liron Aravot" >> Sent: Thursday, August 9, 2012 6:41:09 PM >> Subject: [Engine-devel] Serial Execution of Async Tasks >> >> Hi guys, >> >> As you may know the engine currently has the ability to fire an SPM >> task, and be asynchronously be "woken-up" when it ends. >> This is great, but we found the for the Live Storage Migration >> feature we need something a bit complex - the ability to have a >> series of async tasks in a single control flow. >> >> Here's my initial design for this, your comments and criticism would >> be welcome: >> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design > > Apart from the short explanation & flow , since this is a detailed design , I would add > 1) Class diagram > 2) Flow diagram > +1, it would help understanding the flow. - It looks like you chose not re-use/extend the ExecutionHandler (the entity used for building the tasks view exposed to the users). It might be a good idea to keep the separation between the engine Jobs and the underlying vdsm tasks, but I want to make sure you are familiar with this mechanism and ruled it out with a reason. If this is the case please share why you decided not to use it. - how does this design survives a jboss restart? Can you please a section in the wiki to explain that. -successful execution - * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? * If the second task is an HSM command (vs. SPM command), I think you should explain in the design how to handle such flows as well. * Why do we need before task? can you give a concrete example of what would you do in such a method. - I see you added SPMAsyncTaskHandler, any reason not to use SPMAsyncTasK to manage it own life-cycle? - In the life-cycle managed by the SPMAsyncTaskHandler there is a step 'createTask - how to create the async task' can you please elaborate what are the options. Livnat >> >> >> -Allon >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Sun Aug 12 06:55:13 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 12 Aug 2012 09:55:13 +0300 Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <73684232.6954416.1344753480067.JavaMail.root@redhat.com> References: <73684232.6954416.1344753480067.JavaMail.root@redhat.com> Message-ID: <50275351.40005@redhat.com> On 08/12/2012 09:38 AM, Oved Ourfalli wrote: > > > ----- Original Message ----- >> From: "Itamar Heim" >> To: "Barak Azulay" , "Eyal Edri" , "Ofer Schreiber" >> Cc: engine-devel at ovirt.org >> Sent: Sunday, August 12, 2012 1:19:50 AM >> Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging >> >> Juan has worked on oVirt for the past 12 months, with considerable >> contribution to the packaging and deployment parts. >> he also almost single handedly packaged ovirt engine for fedora and >> the >> new ovirt-engine service for fedora. >> >> I'd like to propose Juan as a maintainer of the packaging subproject >> of >> ovirt-engine. > +1. He did a wonderful work to the oVirt project, especially in the packaging area. +1 > > >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From eedri at redhat.com Sun Aug 12 07:13:01 2012 From: eedri at redhat.com (Eyal Edri) Date: Sun, 12 Aug 2012 03:13:01 -0400 (EDT) Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <5026DA86.2030703@redhat.com> Message-ID: <1680305482.3517660.1344755581394.JavaMail.root@redhat.com> +1 for it without a doubt. ----- Original Message ----- > From: "Itamar Heim" > To: "Barak Azulay" , "Eyal Edri" , "Ofer Schreiber" > Cc: engine-devel at ovirt.org > Sent: Sunday, August 12, 2012 1:19:50 AM > Subject: Proposal to add Juan Hernandez as maintainer to packaging > > Juan has worked on oVirt for the past 12 months, with considerable > contribution to the packaging and deployment parts. > he also almost single handedly packaged ovirt engine for fedora and > the > new ovirt-engine service for fedora. > > I'd like to propose Juan as a maintainer of the packaging subproject > of > ovirt-engine. > From iheim at redhat.com Sun Aug 12 07:18:14 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 12 Aug 2012 10:18:14 +0300 Subject: [Engine-devel] proposal to add... Message-ID: <502758B6.8070700@redhat.com> just to clarify on process: the various +1's are all welcome and can help maintainers make up their mind. but per our guidelines[1], I'll be counting only existing maintainer votes per subproject[2]. [1] "The current set of maintainers vote additional maintainers onto the project. " [2] http://www.ovirt.org/project/subprojects/ From ecohen at redhat.com Sun Aug 12 07:21:19 2012 From: ecohen at redhat.com (Einav Cohen) Date: Sun, 12 Aug 2012 03:21:19 -0400 (EDT) Subject: [Engine-devel] Proposal to add Tomas Jelinek as maintainers to webadmin and user portal In-Reply-To: <5026DA64.8000700@redhat.com> Message-ID: <595913570.12954548.1344756079881.JavaMail.root@redhat.com> > ----- Original Message ----- > From: "Itamar Heim" > Sent: Sunday, August 12, 2012 1:19:16 AM > > Tomas has worked on oVirt for the past 9 months, developing several > features (including merging the infrastructure of userportal over the > webadmin infra) and fixing a slew of bugs. > > I'd like to propose Tomas as a maintainer for the webadmin and user > portal. +2 > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From ecohen at redhat.com Sun Aug 12 07:21:55 2012 From: ecohen at redhat.com (Einav Cohen) Date: Sun, 12 Aug 2012 03:21:55 -0400 (EDT) Subject: [Engine-devel] Proposal to add Vojtech Szocs as maintainer to user portal In-Reply-To: <5026DAA1.6090707@redhat.com> Message-ID: <1977195271.12954571.1344756115237.JavaMail.root@redhat.com> > ----- Original Message ----- > From: "Itamar Heim" > Sent: Sunday, August 12, 2012 1:20:17 AM > > Vojtech has been working on the webadmin since its inception. His > recent > work that allowed the user portal and web-admin to be based on the > same > infrastructure. He also ported the user portal to work on top of this > shared infrastructure. > > I'd like to propose Vojtech as a maintainer of the user portal. +1 > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From gchaplik at redhat.com Sun Aug 12 07:24:20 2012 From: gchaplik at redhat.com (Gilad Chaplik) Date: Sun, 12 Aug 2012 03:24:20 -0400 (EDT) Subject: [Engine-devel] Proposal to add Tomas Jelinek as maintainers to webadmin and user portal In-Reply-To: <595913570.12954548.1344756079881.JavaMail.root@redhat.com> Message-ID: <1389925264.3990727.1344756260261.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Einav Cohen" > To: "Itamar Heim" > Cc: engine-devel at ovirt.org, "Daniel Erez" , "Asaf Shakarchi" , "Tal Nisan" > , "Vojtech Szocs" , "Gilad Chaplik" , "Alexey Chub" > > Sent: Sunday, August 12, 2012 10:21:19 AM > Subject: Re: [Engine-devel] Proposal to add Tomas Jelinek as maintainers to webadmin and user portal > > > ----- Original Message ----- > > From: "Itamar Heim" > > Sent: Sunday, August 12, 2012 1:19:16 AM > > > > Tomas has worked on oVirt for the past 9 months, developing several > > features (including merging the infrastructure of userportal over > > the > > webadmin infra) and fixing a slew of bugs. > > > > I'd like to propose Tomas as a maintainer for the webadmin and user > > portal. > > +2 +1 > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > From gchaplik at redhat.com Sun Aug 12 07:25:32 2012 From: gchaplik at redhat.com (Gilad Chaplik) Date: Sun, 12 Aug 2012 03:25:32 -0400 (EDT) Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin In-Reply-To: <1921326042.12930380.1344750445436.JavaMail.root@redhat.com> Message-ID: <376238565.3992064.1344756332545.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Einav Cohen" > To: "Itamar Heim" > Cc: "Asaf Shakarchi" , "Tal Nisan" , "Vojtech Szocs" , > "Gilad Chaplik" , "Daniel Erez" , "Alexey Chub" , "Livnat > Peer" , engine-devel at ovirt.org > Sent: Sunday, August 12, 2012 8:47:25 AM > Subject: Re: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin > > > ----- Original Message ----- > > From: "Livnat Peer" > > Sent: Sunday, August 12, 2012 8:23:00 AM > > > > On 12/08/12 01:20, Itamar Heim wrote: > > > Alona has worked on oVirt for the past 9 months, developing > > > several > > > features in the webadmin (including localization and integrated > > > dashboards) and also a slew of bugs... > > > > > > I'd like to propose Alona as a maintainer of the webadmin > > > > In the past 3 months Alona was focused on the new setup network > > dialog > > and she did a great Job, we started with a dialog that de-facto was > > not > > working and ended up with one of the more attractive dialogs in the > > webadmin. > > > > +1 > > +1 +1 > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > From oschreib at redhat.com Sun Aug 12 07:30:54 2012 From: oschreib at redhat.com (Ofer Schreiber) Date: Sun, 12 Aug 2012 03:30:54 -0400 (EDT) Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <5026DA86.2030703@redhat.com> Message-ID: <1336613597.3992400.1344756654345.JavaMail.root@redhat.com> ----- Original Message ----- > Juan has worked on oVirt for the past 12 months, with considerable > contribution to the packaging and deployment parts. > he also almost single handedly packaged ovirt engine for fedora and > the > new ovirt-engine service for fedora. > > I'd like to propose Juan as a maintainer of the packaging subproject > of > ovirt-engine. > As the packaging maintainer, There's no doubt Juan did a wonderful job in the past year. +1 From masayag at redhat.com Sun Aug 12 07:39:28 2012 From: masayag at redhat.com (Moti Asayag) Date: Sun, 12 Aug 2012 10:39:28 +0300 Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin In-Reply-To: <5026DABB.9080708@redhat.com> References: <5026DABB.9080708@redhat.com> Message-ID: <50275DB0.9080901@redhat.com> On 08/12/2012 01:20 AM, Itamar Heim wrote: > Alona has worked on oVirt for the past 9 months, developing several > features in the webadmin (including localization and integrated > dashboards) and also a slew of bugs... > > I'd like to propose Alona as a maintainer of the webadmin > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel +1 From masayag at redhat.com Sun Aug 12 07:41:08 2012 From: masayag at redhat.com (Moti Asayag) Date: Sun, 12 Aug 2012 10:41:08 +0300 Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <5026DA86.2030703@redhat.com> References: <5026DA86.2030703@redhat.com> Message-ID: <50275E14.8020003@redhat.com> On 08/12/2012 01:19 AM, Itamar Heim wrote: > Juan has worked on oVirt for the past 12 months, with considerable > contribution to the packaging and deployment parts. > he also almost single handedly packaged ovirt engine for fedora and the > new ovirt-engine service for fedora. > > I'd like to propose Juan as a maintainer of the packaging subproject of > ovirt-engine. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel +1 From derez at redhat.com Sun Aug 12 07:50:57 2012 From: derez at redhat.com (Daniel Erez) Date: Sun, 12 Aug 2012 03:50:57 -0400 (EDT) Subject: [Engine-devel] Proposal to add Vojtech Szocs as maintainer to user portal In-Reply-To: <1977195271.12954571.1344756115237.JavaMail.root@redhat.com> Message-ID: <1484113877.3994018.1344757857615.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Einav Cohen" > To: "Itamar Heim" > Cc: engine-devel at ovirt.org, "Daniel Erez" , "Tal Nisan" , "Asaf Shakarchi" > > Sent: Sunday, August 12, 2012 10:21:55 AM > Subject: Re: [Engine-devel] Proposal to add Vojtech Szocs as maintainer to user portal > > > ----- Original Message ----- > > From: "Itamar Heim" > > Sent: Sunday, August 12, 2012 1:20:17 AM > > > > Vojtech has been working on the webadmin since its inception. His > > recent > > work that allowed the user portal and web-admin to be based on the > > same > > infrastructure. He also ported the user portal to work on top of > > this > > shared infrastructure. > > > > I'd like to propose Vojtech as a maintainer of the user portal. > > +1 +1 > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > From derez at redhat.com Sun Aug 12 07:51:39 2012 From: derez at redhat.com (Daniel Erez) Date: Sun, 12 Aug 2012 03:51:39 -0400 (EDT) Subject: [Engine-devel] Proposal to add Tomas Jelinek as maintainers to webadmin and user portal In-Reply-To: <1389925264.3990727.1344756260261.JavaMail.root@redhat.com> Message-ID: <59874512.3994022.1344757899217.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Gilad Chaplik" > To: "Einav Cohen" > Cc: engine-devel at ovirt.org, "Daniel Erez" , "Asaf Shakarchi" , "Tal Nisan" > , "Vojtech Szocs" , "Alexey Chub" , "Itamar Heim" > > Sent: Sunday, August 12, 2012 10:24:20 AM > Subject: Re: [Engine-devel] Proposal to add Tomas Jelinek as maintainers to webadmin and user portal > > ----- Original Message ----- > > From: "Einav Cohen" > > To: "Itamar Heim" > > Cc: engine-devel at ovirt.org, "Daniel Erez" , "Asaf > > Shakarchi" , "Tal Nisan" > > , "Vojtech Szocs" , "Gilad > > Chaplik" , "Alexey Chub" > > > > Sent: Sunday, August 12, 2012 10:21:19 AM > > Subject: Re: [Engine-devel] Proposal to add Tomas Jelinek as > > maintainers to webadmin and user portal > > > > > ----- Original Message ----- > > > From: "Itamar Heim" > > > Sent: Sunday, August 12, 2012 1:19:16 AM > > > > > > Tomas has worked on oVirt for the past 9 months, developing > > > several > > > features (including merging the infrastructure of userportal over > > > the > > > webadmin infra) and fixing a slew of bugs. > > > > > > I'd like to propose Tomas as a maintainer for the webadmin and > > > user > > > portal. > > > > +2 > > +1 +1 > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > > > > > From derez at redhat.com Sun Aug 12 07:52:02 2012 From: derez at redhat.com (Daniel Erez) Date: Sun, 12 Aug 2012 03:52:02 -0400 (EDT) Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin In-Reply-To: <376238565.3992064.1344756332545.JavaMail.root@redhat.com> Message-ID: <1962327080.3994038.1344757922083.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Gilad Chaplik" > To: "Einav Cohen" > Cc: "Asaf Shakarchi" , "Tal Nisan" , "Vojtech Szocs" , > "Daniel Erez" , "Alexey Chub" , "Livnat Peer" , > engine-devel at ovirt.org, "Itamar Heim" > Sent: Sunday, August 12, 2012 10:25:32 AM > Subject: Re: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin > > ----- Original Message ----- > > From: "Einav Cohen" > > To: "Itamar Heim" > > Cc: "Asaf Shakarchi" , "Tal Nisan" > > , "Vojtech Szocs" , > > "Gilad Chaplik" , "Daniel Erez" > > , "Alexey Chub" , "Livnat > > Peer" , engine-devel at ovirt.org > > Sent: Sunday, August 12, 2012 8:47:25 AM > > Subject: Re: [Engine-devel] Proposal to add Alona Kaplan as > > maintainers to webadmin > > > > > ----- Original Message ----- > > > From: "Livnat Peer" > > > Sent: Sunday, August 12, 2012 8:23:00 AM > > > > > > On 12/08/12 01:20, Itamar Heim wrote: > > > > Alona has worked on oVirt for the past 9 months, developing > > > > several > > > > features in the webadmin (including localization and integrated > > > > dashboards) and also a slew of bugs... > > > > > > > > I'd like to propose Alona as a maintainer of the webadmin > > > > > > In the past 3 months Alona was focused on the new setup network > > > dialog > > > and she did a great Job, we started with a dialog that de-facto > > > was > > > not > > > working and ended up with one of the more attractive dialogs in > > > the > > > webadmin. > > > > > > +1 > > > > +1 > > +1 +1 > > > > > > > > > > _______________________________________________ > > > > Engine-devel mailing list > > > > Engine-devel at ovirt.org > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > > From amureini at redhat.com Sun Aug 12 08:17:19 2012 From: amureini at redhat.com (Allon Mureinik) Date: Sun, 12 Aug 2012 04:17:19 -0400 (EDT) Subject: [Engine-devel] Unifying (parts of) commit templates. In-Reply-To: <50237ACE.9080603@redhat.com> Message-ID: <100355826.3523505.1344759439615.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Livnat Peer" > To: "Itamar Heim" > Cc: engine-devel at ovirt.org > Sent: Thursday, August 9, 2012 11:54:38 AM > Subject: Re: [Engine-devel] Unifying (parts of) commit templates. > > On 09/08/12 11:52, Itamar Heim wrote: > > On 08/09/2012 09:11 AM, Livnat Peer wrote: > >> On 09/08/12 00:41, Doron Fediuck wrote: > >>> Hi All, > >>> It seems that for commit subjects, vdsm is using a general > >>> concept of- > >>> > >>> BZ#??????? some message > >>> > >>> I'd like to suggest adopting it to the engine template we use > >>> today- > >>> > >>> BZ#??????? >>> userportal | > >>> webadmin>: short summary under 50 chars > >>> > >>> This may help us write some scripts which will work both for vdsm > >>> and > >>> engine BZs. > >>> > >> > >> +1 > >> with a small change - adding a \n after the bz number - > > > > wouldn't this kill git shortlog? > > patch short summary must be in first line iirc > > > > yes it will. > +1 for Doron's initial proposal +1. Also, while we're at it, a wiki explaining the correct way to use this template would be great. I know, it's pretty straight forward, but new contributed may get confused as to the distinction between core and engine, or how to mark a vertical patch that fixes a UI dialog and the backend logic behind it (with a coma between components? a slash? a pipe?) > > >> BZ#??????? > >> >> webadmin>: > >> short summary under 50 chars > >> > >> Long description of what this commit is about > >> > >> Livnat > >> > >>> Doron. > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>> > >> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From asaf at redhat.com Sun Aug 12 08:48:29 2012 From: asaf at redhat.com (Asaf Shakarchi) Date: Sun, 12 Aug 2012 11:48:29 +0300 Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <5026DA86.2030703@redhat.com> References: <5026DA86.2030703@redhat.com> Message-ID: <20120812084828.GA17296@asaf-nb.redhat.com> +1 On 08/12/12 at 01:19AM +0300, Itamar Heim wrote: > Juan has worked on oVirt for the past 12 months, with considerable > contribution to the packaging and deployment parts. > he also almost single handedly packaged ovirt engine for fedora and the > new ovirt-engine service for fedora. > > I'd like to propose Juan as a maintainer of the packaging subproject of > ovirt-engine. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -- Best Regards, Asaf Shakarchi. --- cell: +972-54-3094949 From dfediuck at redhat.com Sun Aug 12 11:56:23 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Sun, 12 Aug 2012 07:56:23 -0400 (EDT) Subject: [Engine-devel] Unifying (parts of) commit templates. In-Reply-To: <100355826.3523505.1344759439615.JavaMail.root@redhat.com> Message-ID: <734294556.13072571.1344772583146.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Allon Mureinik" > To: engine-devel at ovirt.org > Sent: Sunday, August 12, 2012 11:17:19 AM > Subject: Re: [Engine-devel] Unifying (parts of) commit templates. > > > > ----- Original Message ----- > > From: "Livnat Peer" > > To: "Itamar Heim" > > Cc: engine-devel at ovirt.org > > Sent: Thursday, August 9, 2012 11:54:38 AM > > Subject: Re: [Engine-devel] Unifying (parts of) commit templates. > > > > On 09/08/12 11:52, Itamar Heim wrote: > > > On 08/09/2012 09:11 AM, Livnat Peer wrote: > > >> On 09/08/12 00:41, Doron Fediuck wrote: > > >>> Hi All, > > >>> It seems that for commit subjects, vdsm is using a general > > >>> concept of- > > >>> > > >>> BZ#??????? some message > > >>> > > >>> I'd like to suggest adopting it to the engine template we use > > >>> today- > > >>> > > >>> BZ#??????? > >>> userportal | > > >>> webadmin>: short summary under 50 chars > > >>> > > >>> This may help us write some scripts which will work both for > > >>> vdsm > > >>> and > > >>> engine BZs. > > >>> > > >> > > >> +1 > > >> with a small change - adding a \n after the bz number - > > > > > > wouldn't this kill git shortlog? > > > patch short summary must be in first line iirc > > > > > > > yes it will. > > +1 for Doron's initial proposal > +1. > Also, while we're at it, a wiki explaining the correct way to use > this template would be great. > I know, it's pretty straight forward, but new contributed may get > confused as to the distinction between core and engine, or how to > mark a vertical patch that fixes a UI dialog and the backend logic > behind it (with a coma between components? a slash? a pipe?) Patch submitted: http://gerrit.ovirt.org/#/c/7101/2 +2 is needed there. Wiki may take time, so anyone who's willing to spare some time, feel free to start with: git commit -s -F config/engine-commit-template.txt -e > > > > >> BZ#??????? > > >> > >> webadmin>: > > >> short summary under 50 chars > > >> > > >> Long description of what this commit is about > > >> > > >> Livnat > > >> > > >>> Doron. > > >>> _______________________________________________ > > >>> Engine-devel mailing list > > >>> Engine-devel at ovirt.org > > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > > >>> > > >> > > >> _______________________________________________ > > >> Engine-devel mailing list > > >> Engine-devel at ovirt.org > > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > >> > > > > > > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From iheim at redhat.com Mon Aug 13 08:41:02 2012 From: iheim at redhat.com (Itamar Heim) Date: Mon, 13 Aug 2012 11:41:02 +0300 Subject: [Engine-devel] Proposal to add Alona Kaplan as maintainers to webadmin In-Reply-To: <1921326042.12930380.1344750445436.JavaMail.root@redhat.com> References: <1921326042.12930380.1344750445436.JavaMail.root@redhat.com> Message-ID: <5028BD9E.30700@redhat.com> On 08/12/2012 08:47 AM, Einav Cohen wrote: >> ----- Original Message ----- >> From: "Livnat Peer" >> Sent: Sunday, August 12, 2012 8:23:00 AM >> >> On 12/08/12 01:20, Itamar Heim wrote: >>> Alona has worked on oVirt for the past 9 months, developing several >>> features in the webadmin (including localization and integrated >>> dashboards) and also a slew of bugs... >>> >>> I'd like to propose Alona as a maintainer of the webadmin >> >> In the past 3 months Alona was focused on the new setup network >> dialog >> and she did a great Job, we started with a dialog that de-facto was >> not >> working and ended up with one of the more attractive dialogs in the >> webadmin. >> >> +1 > > +1 > >> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> I got 3 acks from maintainers and no nacks, alona has been added in gerrit. From iheim at redhat.com Mon Aug 13 08:43:14 2012 From: iheim at redhat.com (Itamar Heim) Date: Mon, 13 Aug 2012 11:43:14 +0300 Subject: [Engine-devel] Proposal to add Tomas Jelinek as maintainers to webadmin and user portal In-Reply-To: <5026DA64.8000700@redhat.com> References: <5026DA64.8000700@redhat.com> Message-ID: <5028BE22.1030904@redhat.com> On 08/12/2012 01:19 AM, Itamar Heim wrote: > Tomas has worked on oVirt for the past 9 months, developing several > features (including merging the infrastructure of userportal over the > webadmin infra) and fixing a slew of bugs. > > I'd like to propose Tomas as a maintainer for the webadmin and user portal. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel I got 3 acks from maintainers and no nacks, Tomas has been added in gerrit. From iheim at redhat.com Mon Aug 13 08:49:17 2012 From: iheim at redhat.com (Itamar Heim) Date: Mon, 13 Aug 2012 11:49:17 +0300 Subject: [Engine-devel] Proposal to add Juan Hernandez as maintainer to packaging In-Reply-To: <5026DA86.2030703@redhat.com> References: <5026DA86.2030703@redhat.com> Message-ID: <5028BF8D.8050409@redhat.com> On 08/12/2012 01:19 AM, Itamar Heim wrote: > Juan has worked on oVirt for the past 12 months, with considerable > contribution to the packaging and deployment parts. > he also almost single handedly packaged ovirt engine for fedora and the > new ovirt-engine service for fedora. > > I'd like to propose Juan as a maintainer of the packaging subproject of > ovirt-engine. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel I got acks from all 3 maintainers - juan added in gerrit. From ashakarc at redhat.com Mon Aug 13 09:14:23 2012 From: ashakarc at redhat.com (Asaf Shakarchi) Date: Mon, 13 Aug 2012 05:14:23 -0400 (EDT) Subject: [Engine-devel] Proposal to add Vojtech Szocs as maintainer to user portal In-Reply-To: <5028BE40.6030402@redhat.com> Message-ID: <662291980.15028657.1344849263959.JavaMail.root@redhat.com> +1 ! -------- Original Message -------- > Subject: [Engine-devel] Proposal to add Vojtech Szocs as maintainer > to > user portal > Date: Sun, 12 Aug 2012 01:20:17 +0300 > From: Itamar Heim > To: Einav Cohen , Daniel Erez , > Tal Nisan , Asaf Shakarchi > > CC: engine-devel at ovirt.org > > Vojtech has been working on the webadmin since its inception. His > recent > work that allowed the user portal and web-admin to be based on the > same > infrastructure. He also ported the user portal to work on top of this > shared infrastructure. > > I'd like to propose Vojtech as a maintainer of the user portal. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From vszocs at redhat.com Mon Aug 13 13:36:32 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 13 Aug 2012 09:36:32 -0400 (EDT) Subject: [Engine-devel] UI Plugins meeting postponed for one week In-Reply-To: <605481775.8755789.1344864594925.JavaMail.root@redhat.com> Message-ID: <2132312971.8760604.1344864992799.JavaMail.root@redhat.com> Hi guys, we had the UI Plugins follow-up meeting planned for tomorrow (Aug 14). Unfortunately, I'm sick this week and need to stay in bed. Let's postpone the meeting for one week (Aug 21), I'll send out an updated invitation. Vojtech From vszocs at redhat.com Mon Aug 13 13:36:56 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 13 Aug 2012 09:36:56 -0400 (EDT) Subject: [Engine-devel] oVirt UI Plugins: Follow-up Meeting Message-ID: <838648654.8760680.1344865016630.JavaMail.root@redhat.com> The following meeting has been modified: Subject: oVirt UI Plugins: Follow-up Meeting Organizer: "Vojtech Szocs" Time: Tuesday, August 21, 2012, 4:30:00 PM - 5:30:00 PM GMT +01:00 Belgrade, Bratislava, Budapest, Ljubljana, Prague [MODIFIED] Invitees: engine-devel at ovirt.org; George.Costea at netapp.com; Troy.Mangum at netapp.com; Dustin.Schoenbrun at netapp.com; Ricky.Hopper at netapp.com; Chris.Frantz at hp.com; kroberts at redhat.com; ovedo at redhat.com; iheim at redhat.com; ilvovsky at redhat.com; ecohen at redhat.com *~*~*~*~*~*~*~*~*~* Hi guys, this is a follow-up meeting for discussing progress on oVirt UI Plugins feature. Here are the details required for joining the session. Intercall dial-in numbers can be found at: https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=7128867405 Intercall Conference Code ID: 7128867405 # Elluminate session: Regards, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 5367 bytes Desc: not available URL: From robert at middleswarth.net Tue Aug 14 05:43:06 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Tue, 14 Aug 2012 01:43:06 -0400 Subject: [Engine-devel] Jenkins testing. Message-ID: <5029E56A.8070909@middleswarth.net> After a few false starts it looks like we have per patch testing working on VDSM, oVirt-engine, oVirt-engine-sdk and oVirt-engine-cli. There are 3 status a patch can get. 1) Success - Means the patch ran though the tests without issue. 2) Failure - Means the tests failed. 3) Aborted - Generally means the submitter is not in the whitelist and the tests were never run. If you have any questions please feel free to ask. -- Thanks Robert Middleswarth @rmiddle (twitter/IRC) From eedri at redhat.com Tue Aug 14 07:22:13 2012 From: eedri at redhat.com (Eyal Edri) Date: Tue, 14 Aug 2012 03:22:13 -0400 (EDT) Subject: [Engine-devel] Jenkins testing. In-Reply-To: <5029E56A.8070909@middleswarth.net> Message-ID: <1133835900.4334166.1344928933385.JavaMail.root@redhat.com> Great job! I know it required a great effort and time to make this work so kudos for making it happen. Eyal. ----- Original Message ----- > From: "Robert Middleswarth" > To: "infra" , engine-devel at ovirt.org, "VDSM Project Development" > , "arch" > Sent: Tuesday, August 14, 2012 8:43:06 AM > Subject: Jenkins testing. > > After a few false starts it looks like we have per patch testing > working > on VDSM, oVirt-engine, oVirt-engine-sdk and oVirt-engine-cli. There > are > 3 status a patch can get. 1) Success - Means the patch ran though > the > tests without issue. 2) Failure - Means the tests failed. 3) > Aborted - > Generally means the submitter is not in the whitelist and the tests > were > never run. If you have any questions please feel free to ask. > > -- > Thanks > Robert Middleswarth > @rmiddle (twitter/IRC) > > _______________________________________________ > Infra mailing list > Infra at ovirt.org > http://lists.ovirt.org/mailman/listinfo/infra > From deepakcs at linux.vnet.ibm.com Tue Aug 14 07:22:11 2012 From: deepakcs at linux.vnet.ibm.com (Deepak C Shetty) Date: Tue, 14 Aug 2012 12:52:11 +0530 Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <5029E56A.8070909@middleswarth.net> References: <5029E56A.8070909@middleswarth.net> Message-ID: <5029FCA3.90109@linux.vnet.ibm.com> On 08/14/2012 11:13 AM, Robert Middleswarth wrote: > After a few false starts it looks like we have per patch testing > working on VDSM, oVirt-engine, oVirt-engine-sdk and oVirt-engine-cli. > There are 3 status a patch can get. 1) Success - Means the patch ran > though the tests without issue. 2) Failure - Means the tests failed. > 3) Aborted - Generally means the submitter is not in the whitelist and > the tests were never run. If you have any questions please feel free > to ask. > So what is needed for the submitted to be in whitelist ? I once for Success for few of my patches.. then got failure for some other patch( maybe thats due to the false starts u had) and then for the latest patch of mine, it says aborted. So not sure if i am in whitelist or not ? If not, what do i need to do to be part of it ? If yes, why did the build abort for my latest patch ? From deepakcs at linux.vnet.ibm.com Tue Aug 14 08:54:31 2012 From: deepakcs at linux.vnet.ibm.com (Deepak C Shetty) Date: Tue, 14 Aug 2012 14:24:31 +0530 Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <5029FCA3.90109@linux.vnet.ibm.com> References: <5029E56A.8070909@middleswarth.net> <5029FCA3.90109@linux.vnet.ibm.com> Message-ID: <502A1247.3060508@linux.vnet.ibm.com> On 08/14/2012 12:52 PM, Deepak C Shetty wrote: > On 08/14/2012 11:13 AM, Robert Middleswarth wrote: >> After a few false starts it looks like we have per patch testing >> working on VDSM, oVirt-engine, oVirt-engine-sdk and >> oVirt-engine-cli. There are 3 status a patch can get. 1) Success - >> Means the patch ran though the tests without issue. 2) Failure - >> Means the tests failed. 3) Aborted - Generally means the submitter >> is not in the whitelist and the tests were never run. If you have >> any questions please feel free to ask. >> > So what is needed for the submitted to be in whitelist ? > I once for Success for few of my patches.. then got failure for some > other patch( maybe thats due to the false starts u had) and then for > the latest patch of mine, it says aborted. > > So not sure if i am in whitelist or not ? > If not, what do i need to do to be part of it ? > If yes, why did the build abort for my latest patch ? > Pls see http://gerrit.ovirt.org/#/c/6856/ For patch1 it says build success, for patch 2, it says aborted.. why ? From danken at redhat.com Tue Aug 14 09:51:11 2012 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 14 Aug 2012 12:51:11 +0300 Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <5029E56A.8070909@middleswarth.net> References: <5029E56A.8070909@middleswarth.net> Message-ID: <20120814095111.GA12200@redhat.com> On Tue, Aug 14, 2012 at 01:43:06AM -0400, Robert Middleswarth wrote: > After a few false starts it looks like we have per patch testing > working on VDSM, oVirt-engine, oVirt-engine-sdk and > oVirt-engine-cli. There are 3 status a patch can get. 1) Success - > Means the patch ran though the tests without issue. 2) Failure - > Means the tests failed. 3) Aborted - Generally means the submitter > is not in the whitelist and the tests were never run. If you have > any questions please feel free to ask. Thanks Robert, for this great improvement. However, it seems to me that the script is pulling the wrong git hash. For example, my patch http://gerrit.ovirt.org/#/c/7097/ with git has 539ccfbf02f0ca9605149885ae6b3e6feb4f1976 reports of success. However the console output http://jenkins.ovirt.info/job/patch_vdsm_unit_tests/417/console show that the git hash that was actually built was 8af050b205994746198e5fb257652cd2fb8bfbc1 (vdsm/master) Something is fishy here, and I may be getting false positive results. Regards, Dan. From eedri at redhat.com Tue Aug 14 10:20:11 2012 From: eedri at redhat.com (Eyal Edri) Date: Tue, 14 Aug 2012 06:20:11 -0400 (EDT) Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <502A1247.3060508@linux.vnet.ibm.com> Message-ID: <560190193.4470348.1344939611714.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Deepak C Shetty" > To: "Deepak C Shetty" > Cc: engine-devel at ovirt.org, "arch" , "VDSM Project Development" , > "infra" > Sent: Tuesday, August 14, 2012 11:54:31 AM > Subject: Re: [Engine-devel] [vdsm] Jenkins testing. > > On 08/14/2012 12:52 PM, Deepak C Shetty wrote: > > On 08/14/2012 11:13 AM, Robert Middleswarth wrote: > >> After a few false starts it looks like we have per patch testing > >> working on VDSM, oVirt-engine, oVirt-engine-sdk and > >> oVirt-engine-cli. There are 3 status a patch can get. 1) Success > >> - > >> Means the patch ran though the tests without issue. 2) Failure - > >> Means the tests failed. 3) Aborted - Generally means the > >> submitter > >> is not in the whitelist and the tests were never run. If you have > >> any questions please feel free to ask. > >> > > So what is needed for the submitted to be in whitelist ? > > I once for Success for few of my patches.. then got failure for > > some > > other patch( maybe thats due to the false starts u had) and then > > for > > the latest patch of mine, it says aborted. > > > > So not sure if i am in whitelist or not ? > > If not, what do i need to do to be part of it ? > > If yes, why did the build abort for my latest patch ? > > > Pls see http://gerrit.ovirt.org/#/c/6856/ > For patch1 it says build success, for patch 2, it says aborted.. why > ? > it's because your email address wasn't in the jenkins-whitelist.txt file. this file includes emails address for users that jenkins will allow running test jobs on thier patches. this was introduced as a way for defending the jenkins from malicious users that might send harmful patches to jenkins. i've added you to the whitelist, you should be ok now. the list was generated automaticly from git log, so it still might missing known people, if anyone sees this aborted msg in the gerrit, please contact infra team so he can be added to the whitelist. Eyal. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ewoud+ovirt at kohlvanwijngaarden.nl Tue Aug 14 10:28:37 2012 From: ewoud+ovirt at kohlvanwijngaarden.nl (Ewoud Kohl van Wijngaarden) Date: Tue, 14 Aug 2012 12:28:37 +0200 Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <560190193.4470348.1344939611714.JavaMail.root@redhat.com> References: <502A1247.3060508@linux.vnet.ibm.com> <560190193.4470348.1344939611714.JavaMail.root@redhat.com> Message-ID: <20120814102836.GH25069@bogey.xentower.nl> On Tue, Aug 14, 2012 at 06:20:11AM -0400, Eyal Edri wrote: > Deepak C Shetty wrote: > > On 08/14/2012 12:52 PM, Deepak C Shetty wrote: > > > On 08/14/2012 11:13 AM, Robert Middleswarth wrote: > > >> After a few false starts it looks like we have per patch testing > > >> working on VDSM, oVirt-engine, oVirt-engine-sdk and > > >> oVirt-engine-cli. There are 3 status a patch can get. 1) Success > > >> - > > >> Means the patch ran though the tests without issue. 2) Failure - > > >> Means the tests failed. 3) Aborted - Generally means the > > >> submitter > > >> is not in the whitelist and the tests were never run. If you have > > >> any questions please feel free to ask. > > >> > > > So what is needed for the submitted to be in whitelist ? > > > I once for Success for few of my patches.. then got failure for > > > some > > > other patch( maybe thats due to the false starts u had) and then > > > for > > > the latest patch of mine, it says aborted. > > > > > > So not sure if i am in whitelist or not ? > > > If not, what do i need to do to be part of it ? > > > If yes, why did the build abort for my latest patch ? > > > > > Pls see http://gerrit.ovirt.org/#/c/6856/ > > For patch1 it says build success, for patch 2, it says aborted.. why > > ? > > > > it's because your email address wasn't in the jenkins-whitelist.txt file. > this file includes emails address for users that jenkins will allow > running test jobs on thier patches. this was introduced as a way for > defending the jenkins from malicious users that might send harmful > patches to jenkins. > > i've added you to the whitelist, you should be ok now. > > the list was generated automaticly from git log, so it still might > missing known people, if anyone sees this aborted msg in the gerrit, > please contact infra team so he can be added to the whitelist. Note that the whitelist is in the jenkins-whitelist repo so see http://gerrit.ovirt.org/gitweb?p=jenkins-whitelist.git and http://gerrit.ovirt.org/#/q/project:jenkins-whitelist,n,z for more details. From amureini at redhat.com Tue Aug 14 11:10:55 2012 From: amureini at redhat.com (Allon Mureinik) Date: Tue, 14 Aug 2012 07:10:55 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <50274F9B.9090506@redhat.com> Message-ID: <518329067.4513405.1344942654993.JavaMail.root@redhat.com> Hi guys, Thanks for all your comments! The correct response for many these points is to update the wiki. I'm enclosing here the quick-and-dirty replies just to keep this thread alive, and will update the wiki shortly. See inline. ----- Original Message ----- > From: "Livnat Peer" > To: "Allon Mureinik" > Cc: "Eli Mesika" , "Liron Aravot" , "Federico Simoncelli" > , "engine-devel" , "Eduardo Warszawski" , "Yeela > Kaplan" > Sent: Sunday, August 12, 2012 9:39:23 AM > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > On 10/08/12 03:40, Eli Mesika wrote: > > > > > > ----- Original Message ----- > >> From: "Allon Mureinik" > >> To: "engine-devel" > >> Cc: "Eduardo Warszawski" , "Yeela Kaplan" > >> , "Federico Simoncelli" > >> , "Liron Aravot" > >> Sent: Thursday, August 9, 2012 6:41:09 PM > >> Subject: [Engine-devel] Serial Execution of Async Tasks > >> > >> Hi guys, > >> > >> As you may know the engine currently has the ability to fire an > >> SPM > >> task, and be asynchronously be "woken-up" when it ends. > >> This is great, but we found the for the Live Storage Migration > >> feature we need something a bit complex - the ability to have a > >> series of async tasks in a single control flow. > >> > >> Here's my initial design for this, your comments and criticism > >> would > >> be welcome: > >> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design > > > > Apart from the short explanation & flow , since this is a detailed > > design , I would add > > 1) Class diagram > > 2) Flow diagram Good idea, I'll see if I can jimmy something up. > > > > +1, it would help understanding the flow. > > - It looks like you chose not re-use/extend the ExecutionHandler (the > entity used for building the tasks view exposed to the users). > It might be a good idea to keep the separation between the engine > Jobs > and the underlying vdsm tasks, but I want to make sure you are > familiar > with this mechanism and ruled it out with a reason. If this is the > case > please share why you decided not to use it. As you said Jobs and Steps are pure engine entities - they can contain no VDSM tasks, one VDSM task, or plausibly, in the future, several tasks. Even /today/, AsyncTasks and Jobs/Steps are two different kinds of animals - I don't see any added value in mixing them together. > > > - how does this design survives a jboss restart? Can you please a > section in the wiki to explain that. Basically, the way as a Command does today - the task is saved with the executionIndex, and continues when the command is woken up. I'll clarify this point in the wiki. > > -successful execution - > * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? This is the new suggested format of executeCommand(). I'll clarify this too. > * If the second task is an HSM command (vs. SPM command), I think you > should explain in the design how to handle such flows as well. HSM commands do not create AsyncTasks, as they do today - I will clarify this. > * Why do we need before task? can you give a concrete example of what > would you do in such a method. Basically, /today/, command look like this: executeCommand() { doStuffInTheDB(); runVdsCommand(someCommand); } endSuccessfully() { doMoreStuffInTheDB(); } endWithFailure() { doMoreStuffForFailureInTheDB(); } In the new design, the entire doStuffInTheDB() should be moved to a breforeTask of the (only) SPMAsyncTaskHandler. > > - I see you added SPMAsyncTaskHandler, any reason not to use > SPMAsyncTasK to manage it own life-cycle? Conserving today's design - The SPMAsyncTaskHandler is the place to add additional, non-SPM, logic around the SPM task execution, like CommandBase allows today. > > - In the life-cycle managed by the SPMAsyncTaskHandler there is a > step > 'createTask - how to create the async task' can you please elaborate > what are the options. new [any type of async task] > > > > > > Livnat > > >> > >> > >> -Allon > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > From mlipchuk at redhat.com Tue Aug 14 11:35:00 2012 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Tue, 14 Aug 2012 14:35:00 +0300 Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <518329067.4513405.1344942654993.JavaMail.root@redhat.com> References: <518329067.4513405.1344942654993.JavaMail.root@redhat.com> Message-ID: <502A37E4.4060303@redhat.com> How should we handle the auditLogMessages? Basically when a command ends it print an audit log. When we will start to use multiple tasks I assume user might get a bulk of audit logs which are actually related to the same action (when we fail for example the process will be create and delete). It might be a bit confusing for the user not to know which action is related to the operation Maybe we will need to use the correlation id of the Execution handler as Eli suggested or maybe add new states at CommandActionState? On 08/14/2012 02:10 PM, Allon Mureinik wrote: > Hi guys, > > Thanks for all your comments! > The correct response for many these points is to update the wiki. > I'm enclosing here the quick-and-dirty replies just to keep this thread alive, and will update the wiki shortly. > > See inline. > > ----- Original Message ----- >> From: "Livnat Peer" >> To: "Allon Mureinik" >> Cc: "Eli Mesika" , "Liron Aravot" , "Federico Simoncelli" >> , "engine-devel" , "Eduardo Warszawski" , "Yeela >> Kaplan" >> Sent: Sunday, August 12, 2012 9:39:23 AM >> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >> >> On 10/08/12 03:40, Eli Mesika wrote: >>> >>> >>> ----- Original Message ----- >>>> From: "Allon Mureinik" >>>> To: "engine-devel" >>>> Cc: "Eduardo Warszawski" , "Yeela Kaplan" >>>> , "Federico Simoncelli" >>>> , "Liron Aravot" >>>> Sent: Thursday, August 9, 2012 6:41:09 PM >>>> Subject: [Engine-devel] Serial Execution of Async Tasks >>>> >>>> Hi guys, >>>> >>>> As you may know the engine currently has the ability to fire an >>>> SPM >>>> task, and be asynchronously be "woken-up" when it ends. >>>> This is great, but we found the for the Live Storage Migration >>>> feature we need something a bit complex - the ability to have a >>>> series of async tasks in a single control flow. >>>> >>>> Here's my initial design for this, your comments and criticism >>>> would >>>> be welcome: >>>> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design >>> >>> Apart from the short explanation & flow , since this is a detailed >>> design , I would add >>> 1) Class diagram >>> 2) Flow diagram > Good idea, I'll see if I can jimmy something up. > >>> >> >> +1, it would help understanding the flow. >> >> - It looks like you chose not re-use/extend the ExecutionHandler (the >> entity used for building the tasks view exposed to the users). >> It might be a good idea to keep the separation between the engine >> Jobs >> and the underlying vdsm tasks, but I want to make sure you are >> familiar >> with this mechanism and ruled it out with a reason. If this is the >> case >> please share why you decided not to use it. > As you said Jobs and Steps are pure engine entities - they can contain no VDSM tasks, one VDSM task, or plausibly, in the future, several tasks. > Even /today/, AsyncTasks and Jobs/Steps are two different kinds of animals - I don't see any added value in mixing them together. > >> >> >> - how does this design survives a jboss restart? Can you please a >> section in the wiki to explain that. > Basically, the way as a Command does today - the task is saved with the executionIndex, and continues when the command is woken up. > I'll clarify this point in the wiki. > >> >> -successful execution - >> * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? > This is the new suggested format of executeCommand(). I'll clarify this too. > >> * If the second task is an HSM command (vs. SPM command), I think you >> should explain in the design how to handle such flows as well. > HSM commands do not create AsyncTasks, as they do today - I will clarify this. > >> * Why do we need before task? can you give a concrete example of what >> would you do in such a method. > Basically, /today/, command look like this: > executeCommand() { > doStuffInTheDB(); > runVdsCommand(someCommand); > } > > endSuccessfully() { > doMoreStuffInTheDB(); > } > > endWithFailure() { > doMoreStuffForFailureInTheDB(); > } > > In the new design, the entire doStuffInTheDB() should be moved to a breforeTask of the (only) SPMAsyncTaskHandler. > >> >> - I see you added SPMAsyncTaskHandler, any reason not to use >> SPMAsyncTasK to manage it own life-cycle? > Conserving today's design - The SPMAsyncTaskHandler is the place to add additional, non-SPM, logic around the SPM task execution, like CommandBase allows today. > >> >> - In the life-cycle managed by the SPMAsyncTaskHandler there is a >> step >> 'createTask - how to create the async task' can you please elaborate >> what are the options. > new [any type of async task] >> >> >> >> >> >> Livnat >> >>>> >>>> >>>> -Allon >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From michal.skrivanek at redhat.com Tue Aug 14 13:25:01 2012 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Tue, 14 Aug 2012 15:25:01 +0200 Subject: [Engine-devel] Unifying (parts of) commit templates. In-Reply-To: <734294556.13072571.1344772583146.JavaMail.root@redhat.com> References: <734294556.13072571.1344772583146.JavaMail.root@redhat.com> Message-ID: <4CE438DA-E1CC-47F3-9634-5B647E7E9698@redhat.com> On Aug 12, 2012, at 13:56 , Doron Fediuck wrote: > > > ----- Original Message ----- >> From: "Allon Mureinik" >> To: engine-devel at ovirt.org >> Sent: Sunday, August 12, 2012 11:17:19 AM >> Subject: Re: [Engine-devel] Unifying (parts of) commit templates. >> >> >> >> ----- Original Message ----- >>> From: "Livnat Peer" >>> To: "Itamar Heim" >>> Cc: engine-devel at ovirt.org >>> Sent: Thursday, August 9, 2012 11:54:38 AM >>> Subject: Re: [Engine-devel] Unifying (parts of) commit templates. >>> >>> On 09/08/12 11:52, Itamar Heim wrote: >>>> On 08/09/2012 09:11 AM, Livnat Peer wrote: >>>>> On 09/08/12 00:41, Doron Fediuck wrote: >>>>>> Hi All, >>>>>> It seems that for commit subjects, vdsm is using a general >>>>>> concept of- >>>>>> >>>>>> BZ#??????? some message >>>>>> >>>>>> I'd like to suggest adopting it to the engine template we use >>>>>> today- >>>>>> >>>>>> BZ#??????? >>>>> userportal | >>>>>> webadmin>: short summary under 50 chars >>>>>> >>>>>> This may help us write some scripts which will work both for >>>>>> vdsm >>>>>> and >>>>>> engine BZs. Is is a problem adopting the same for all the projects? It would be nice to be really consistent everywhere. Is the >>>>> >>>>> >>>>> +1 >>>>> with a small change - adding a \n after the bz number - >>>> >>>> wouldn't this kill git shortlog? >>>> patch short summary must be in first line iirc >>>> >>> >>> yes it will. >>> +1 for Doron's initial proposal >> +1. >> Also, while we're at it, a wiki explaining the correct way to use >> this template would be great. >> I know, it's pretty straight forward, but new contributed may get >> confused as to the distinction between core and engine, or how to >> mark a vertical patch that fixes a UI dialog and the backend logic >> behind it (with a coma between components? a slash? a pipe?) > > Patch submitted: > http://gerrit.ovirt.org/#/c/7101/2 > +2 is needed there. > > Wiki may take time, so anyone who's willing to spare some time, > feel free to start with: > git commit -s -F config/engine-commit-template.txt -e > > >>> >>>>> BZ#??????? >>>>> >>>> webadmin>: >>>>> short summary under 50 chars >>>>> >>>>> Long description of what this commit is about >>>>> >>>>> Livnat >>>>> >>>>>> Doron. >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> >>>> >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From dfediuck at redhat.com Tue Aug 14 13:56:01 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Tue, 14 Aug 2012 09:56:01 -0400 (EDT) Subject: [Engine-devel] Unifying (parts of) commit templates. In-Reply-To: <4CE438DA-E1CC-47F3-9634-5B647E7E9698@redhat.com> Message-ID: <860340147.19577639.1344952561681.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Michal Skrivanek" > To: "Doron Fediuck" > Cc: engine-devel at ovirt.org > Sent: Tuesday, August 14, 2012 4:25:01 PM > Subject: Re: [Engine-devel] Unifying (parts of) commit templates. > > > On Aug 12, 2012, at 13:56 , Doron Fediuck wrote: > > > > > > > ----- Original Message ----- > >> From: "Allon Mureinik" > >> To: engine-devel at ovirt.org > >> Sent: Sunday, August 12, 2012 11:17:19 AM > >> Subject: Re: [Engine-devel] Unifying (parts of) commit templates. > >> > >> > >> > >> ----- Original Message ----- > >>> From: "Livnat Peer" > >>> To: "Itamar Heim" > >>> Cc: engine-devel at ovirt.org > >>> Sent: Thursday, August 9, 2012 11:54:38 AM > >>> Subject: Re: [Engine-devel] Unifying (parts of) commit templates. > >>> > >>> On 09/08/12 11:52, Itamar Heim wrote: > >>>> On 08/09/2012 09:11 AM, Livnat Peer wrote: > >>>>> On 09/08/12 00:41, Doron Fediuck wrote: > >>>>>> Hi All, > >>>>>> It seems that for commit subjects, vdsm is using a general > >>>>>> concept of- > >>>>>> > >>>>>> BZ#??????? some message > >>>>>> > >>>>>> I'd like to suggest adopting it to the engine template we use > >>>>>> today- > >>>>>> > >>>>>> BZ#??????? >>>>>> userportal | > >>>>>> webadmin>: short summary under 50 chars > >>>>>> > >>>>>> This may help us write some scripts which will work both for > >>>>>> vdsm > >>>>>> and > >>>>>> engine BZs. > Is is a problem adopting the same for all the projects? It would be > nice to be really consistent everywhere. > We can't force (sub-)projects for a convention. We can only recommend. > Is the easy to say which is the right one, things are shared between > webadmin and portal, etc... It is. It helps us write a script to look for patches in specific areas. If you have more than one relevant component, add: 936b5b09244b81e8a0d02bad3163f49da28771ba packaging, tools: Generate engine-config.xml from template > In the past I found "BZ #123456 - bug title" on the first line to be > the most helpful. Some patches may apply to features or more than one bz, so additional flexibility is needed. Also, one BZ may be fixed by several patches... > > Also it would imho be great if we get consistent tags for each build > (or at least the commit ids) I know we are trying, but still its not > really at the same time when the build is created. > > Thanks, > michal > > >>>>>> > >>>>> > >>>>> +1 > >>>>> with a small change - adding a \n after the bz number - > >>>> > >>>> wouldn't this kill git shortlog? > >>>> patch short summary must be in first line iirc > >>>> > >>> > >>> yes it will. > >>> +1 for Doron's initial proposal > >> +1. > >> Also, while we're at it, a wiki explaining the correct way to use > >> this template would be great. > >> I know, it's pretty straight forward, but new contributed may get > >> confused as to the distinction between core and engine, or how to > >> mark a vertical patch that fixes a UI dialog and the backend logic > >> behind it (with a coma between components? a slash? a pipe?) > > > > Patch submitted: > > http://gerrit.ovirt.org/#/c/7101/2 > > +2 is needed there. > > > > Wiki may take time, so anyone who's willing to spare some time, > > feel free to start with: > > git commit -s -F config/engine-commit-template.txt -e > > > > > >>> > >>>>> BZ#??????? > >>>>> >>>>> webadmin>: > >>>>> short summary under 50 chars > >>>>> > >>>>> Long description of what this commit is about > >>>>> > >>>>> Livnat > >>>>> > >>>>>> Doron. From iheim at redhat.com Tue Aug 14 14:33:35 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Aug 2012 17:33:35 +0300 Subject: [Engine-devel] Proposal to add Vojtech Szocs as maintainer to user portal In-Reply-To: <5026DAA1.6090707@redhat.com> References: <5026DAA1.6090707@redhat.com> Message-ID: <502A61BF.6050305@redhat.com> On 08/12/2012 01:20 AM, Itamar Heim wrote: > Vojtech has been working on the webadmin since its inception. His recent > work that allowed the user portal and web-admin to be based on the same > infrastructure. He also ported the user portal to work on top of this > shared infrastructure. > > I'd like to propose Vojtech as a maintainer of the user portal. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel I got 3 acks from user portal maintainers for vojtech. updated http://www.ovirt.org/project/subprojects/ as well From cctrieloff at redhat.com Tue Aug 14 14:36:13 2012 From: cctrieloff at redhat.com (Carl Trieloff) Date: Tue, 14 Aug 2012 10:36:13 -0400 Subject: [Engine-devel] Proposal to add Vojtech Szocs as maintainer to user portal In-Reply-To: <502A61BF.6050305@redhat.com> References: <5026DAA1.6090707@redhat.com> <502A61BF.6050305@redhat.com> Message-ID: <502A625D.7010200@redhat.com> On 08/14/2012 10:33 AM, Itamar Heim wrote: > > I got 3 acks from user portal maintainers for vojtech. > > updated http://www.ovirt.org/project/subprojects/ as well If you also send a summary note to the board list. thx Carl. From iheim at redhat.com Tue Aug 14 14:37:05 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Aug 2012 17:37:05 +0300 Subject: [Engine-devel] Proposal to add Vojtech Szocs as maintainer to user portal In-Reply-To: <502A625D.7010200@redhat.com> References: <5026DAA1.6090707@redhat.com> <502A61BF.6050305@redhat.com> <502A625D.7010200@redhat.com> Message-ID: <502A6291.3080605@redhat.com> On 08/14/2012 05:36 PM, Carl Trieloff wrote: > On 08/14/2012 10:33 AM, Itamar Heim wrote: >> >> I got 3 acks from user portal maintainers for vojtech. >> >> updated http://www.ovirt.org/project/subprojects/ as well > > > If you also send a summary note to the board list. why would we update board on any new maintainer in any sub project? From iheim at redhat.com Tue Aug 14 14:23:44 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Aug 2012 17:23:44 +0300 Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <502A37E4.4060303@redhat.com> References: <518329067.4513405.1344942654993.JavaMail.root@redhat.com> <502A37E4.4060303@redhat.com> Message-ID: <502A5F70.6020807@redhat.com> On 08/14/2012 02:35 PM, Maor Lipchuk wrote: > How should we handle the auditLogMessages? > Basically when a command ends it print an audit log. > > When we will start to use multiple tasks I assume user might get a bulk > of audit logs which are actually related to the same action (when we > fail for example the process will be create and delete). > It might be a bit confusing for the user not to know which action is > related to the operation I thought audit log gets written regardless of the transaction, so audit log appears "as they happen"? > > Maybe we will need to use the correlation id of the Execution handler as > Eli suggested or maybe add new states at CommandActionState? > > On 08/14/2012 02:10 PM, Allon Mureinik wrote: >> Hi guys, >> >> Thanks for all your comments! >> The correct response for many these points is to update the wiki. >> I'm enclosing here the quick-and-dirty replies just to keep this thread alive, and will update the wiki shortly. >> >> See inline. >> >> ----- Original Message ----- >>> From: "Livnat Peer" >>> To: "Allon Mureinik" >>> Cc: "Eli Mesika" , "Liron Aravot" , "Federico Simoncelli" >>> , "engine-devel" , "Eduardo Warszawski" , "Yeela >>> Kaplan" >>> Sent: Sunday, August 12, 2012 9:39:23 AM >>> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >>> >>> On 10/08/12 03:40, Eli Mesika wrote: >>>> >>>> >>>> ----- Original Message ----- >>>>> From: "Allon Mureinik" >>>>> To: "engine-devel" >>>>> Cc: "Eduardo Warszawski" , "Yeela Kaplan" >>>>> , "Federico Simoncelli" >>>>> , "Liron Aravot" >>>>> Sent: Thursday, August 9, 2012 6:41:09 PM >>>>> Subject: [Engine-devel] Serial Execution of Async Tasks >>>>> >>>>> Hi guys, >>>>> >>>>> As you may know the engine currently has the ability to fire an >>>>> SPM >>>>> task, and be asynchronously be "woken-up" when it ends. >>>>> This is great, but we found the for the Live Storage Migration >>>>> feature we need something a bit complex - the ability to have a >>>>> series of async tasks in a single control flow. >>>>> >>>>> Here's my initial design for this, your comments and criticism >>>>> would >>>>> be welcome: >>>>> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design >>>> >>>> Apart from the short explanation & flow , since this is a detailed >>>> design , I would add >>>> 1) Class diagram >>>> 2) Flow diagram >> Good idea, I'll see if I can jimmy something up. >> >>>> >>> >>> +1, it would help understanding the flow. >>> >>> - It looks like you chose not re-use/extend the ExecutionHandler (the >>> entity used for building the tasks view exposed to the users). >>> It might be a good idea to keep the separation between the engine >>> Jobs >>> and the underlying vdsm tasks, but I want to make sure you are >>> familiar >>> with this mechanism and ruled it out with a reason. If this is the >>> case >>> please share why you decided not to use it. >> As you said Jobs and Steps are pure engine entities - they can contain no VDSM tasks, one VDSM task, or plausibly, in the future, several tasks. >> Even /today/, AsyncTasks and Jobs/Steps are two different kinds of animals - I don't see any added value in mixing them together. >> >>> >>> >>> - how does this design survives a jboss restart? Can you please a >>> section in the wiki to explain that. >> Basically, the way as a Command does today - the task is saved with the executionIndex, and continues when the command is woken up. >> I'll clarify this point in the wiki. >> >>> >>> -successful execution - >>> * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? >> This is the new suggested format of executeCommand(). I'll clarify this too. >> >>> * If the second task is an HSM command (vs. SPM command), I think you >>> should explain in the design how to handle such flows as well. >> HSM commands do not create AsyncTasks, as they do today - I will clarify this. >> >>> * Why do we need before task? can you give a concrete example of what >>> would you do in such a method. >> Basically, /today/, command look like this: >> executeCommand() { >> doStuffInTheDB(); >> runVdsCommand(someCommand); >> } >> >> endSuccessfully() { >> doMoreStuffInTheDB(); >> } >> >> endWithFailure() { >> doMoreStuffForFailureInTheDB(); >> } >> >> In the new design, the entire doStuffInTheDB() should be moved to a breforeTask of the (only) SPMAsyncTaskHandler. >> >>> >>> - I see you added SPMAsyncTaskHandler, any reason not to use >>> SPMAsyncTasK to manage it own life-cycle? >> Conserving today's design - The SPMAsyncTaskHandler is the place to add additional, non-SPM, logic around the SPM task execution, like CommandBase allows today. >> >>> >>> - In the life-cycle managed by the SPMAsyncTaskHandler there is a >>> step >>> 'createTask - how to create the async task' can you please elaborate >>> what are the options. >> new [any type of async task] >>> >>> >>> >>> >>> >>> Livnat >>> >>>>> >>>>> >>>>> -Allon >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From workshop-pc at ovirt.org Tue Aug 14 21:44:05 2012 From: workshop-pc at ovirt.org (workshop-pc at ovirt.org) Date: Tue, 14 Aug 2012 14:44:05 -0700 Subject: [Engine-devel] oVirt Workshop Europe 2012: Call For Participation Message-ID: <20120814214405.GN20407@x200.localdomain> ================================================================= oVirt Workshop Europe 2012: Call For Participation November 7-9, 2012 - Hotel Fira Palace - Barcelona, Spain (All submissions must be received before midnight Sep 14th, 2012) ================================================================= The oVirt Project is an open virtualization project for anyone who cares about Linux-based KVM virtualization. Providing a feature-rich server virtualization management system with advanced capabilities for hosts and guests, including high availability, live migration, storage management, system scheduler, and more. By open we mean open source & open governance, done right. During this workshop you?ll learn about the technical background and direction of the oVirt project. You?ll meet the developers, and have an opportunity to see and dive into the code right away. The workshop is open to all who want to use, get involved with, or learn about the comprehensive open virtualization management platform, oVirt. The sessions cover the technical projects details, governance, getting involved, usage, and much more. If you have any interest in an Open Virtualization Management platform, this workshop is for you! We are excited to announce that this oVirt Workshop will be held in conjunction with the KVM Forum. http://events.linuxfoundation.org/events/kvm-forum/ The KVM Forum and oVirt Workshop are co-located with the Linux Foundation's 2012 LinuxCon Europe in Barcelona, Spain. oVirt Workshop attendees will be able to attend KVM Forum sessions and are eligible to attend LinuxCon Europe for a discounted rate. http://events.linuxfoundation.org/events/kvm-forum/register We invite you to lead part of the discussion by submitting a speaking proposal for oVirt Workshop 2012. http://events.linuxfoundation.org/cfp Suggested topics: - community use case/stories - roadmaps - deep dives into features/areas - deep dives into code/debugging/tuning - integration and extensions - components: engine, vdsm, node, sdk/cli, reports, mom, guest agent, etc. - subjects: network, storage, vm life cycle, scheduling & sla, gluster, etc. - packaging, installation and distributions - community infrastructure and services SUBMISSION REQUIREMENTS Abstracts due: Sep 14th, 2012 Notification: Sep 28th, 2012 Please submit a short abstract (~150 words) describing your presentation proposal. In your submission please note how long your talk will take. Slots vary in length up to 45 minutes. Also include in your proposal the proposal type -- one of: - technical talk - end-user talk - birds of a feather (BOF) session Submit your proposal here: http://events.linuxfoundation.org/cfp You will receive a notification whether or not your presentation proposal was accepted by Sep 14th. END-USER COLLABORATION One of the big challenges as developers is to know what, where and how people actually use our software. We will reserve a few slots for end users talking about their deployment challenges and achievements. If you are using oVirt in production you are encouraged submit a speaking proposal. Simply mark it as an end-user collaboration proposal. As an end user, this is a unique opportunity to get your input to developers. BOF SESSION We will reserve some slots in the evening after the main conference tracks, for birds of a feather (BOF) sessions. These sessions will be less formal than presentation tracks and targetted for people who would like to discuss specific issues with other developers and/or users. If you are interested in getting developers and/or uses together to discuss a specific problem, please submit a BOF proposal. LIGHTNING TALKS In addition to submitted talks we will also have some room for lightning talks. These are short (5 minute) discussions to highlight new work or ideas that aren't complete enough to warrant a full presentation slot. Lightning talk submissions and scheduling will be handled on-site at oVirt Workshop. HOTEL / TRAVEL The oVirt Workshop Europe 2012 will be held in Barcelona, Spain at the Hotel Fira Palace. http://events.linuxfoundation.org/events/kvm-forum/hotel Thank you for your interest in oVirt. We're looking forward to your submissions and seeing you at the oVirt Workshop Europe 2012 in November! Thanks, your oVirt Workshop Europe 2012 Program Commitee Please contact us with any questions or comments. workshop-pc at ovirt.org From mlipchuk at redhat.com Thu Aug 16 12:21:41 2012 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Thu, 16 Aug 2012 15:21:41 +0300 Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <502A5F70.6020807@redhat.com> References: <518329067.4513405.1344942654993.JavaMail.root@redhat.com> <502A37E4.4060303@redhat.com> <502A5F70.6020807@redhat.com> Message-ID: <502CE5D5.3030307@redhat.com> On 08/14/2012 05:23 PM, Itamar Heim wrote: > On 08/14/2012 02:35 PM, Maor Lipchuk wrote: >> How should we handle the auditLogMessages? >> Basically when a command ends it print an audit log. >> >> When we will start to use multiple tasks I assume user might get a bulk >> of audit logs which are actually related to the same action (when we >> fail for example the process will be create and delete). >> It might be a bit confusing for the user not to know which action is >> related to the operation > > I thought audit log gets written regardless of the transaction, so audit > log appears "as they happen"? That is correct, The issue that I was referring to, is that now, with multiple tasks execution, we will get many audit logs which related to the same transaction but each one will be printed at a different time. I think that it might be confusing for the user to relate each audit log to the operation that was started. For example : User run an action that executes some tasks of create volumes, then the engine encounter a problem, and decide to rollback the operation and delete the volumes, in that case the engine will execute a delete task for the volumes, so user might see that delete of the volume (for example a snapshot) was initiated. Since those are asynchronous tasks, audit log will be printed in a different period of time and a user might not be aware what is the relation of those specific delete. > >> >> Maybe we will need to use the correlation id of the Execution handler as >> Eli suggested or maybe add new states at CommandActionState? >> >> On 08/14/2012 02:10 PM, Allon Mureinik wrote: >>> Hi guys, >>> >>> Thanks for all your comments! >>> The correct response for many these points is to update the wiki. >>> I'm enclosing here the quick-and-dirty replies just to keep this >>> thread alive, and will update the wiki shortly. >>> >>> See inline. >>> >>> ----- Original Message ----- >>>> From: "Livnat Peer" >>>> To: "Allon Mureinik" >>>> Cc: "Eli Mesika" , "Liron Aravot" >>>> , "Federico Simoncelli" >>>> , "engine-devel" , >>>> "Eduardo Warszawski" , "Yeela >>>> Kaplan" >>>> Sent: Sunday, August 12, 2012 9:39:23 AM >>>> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >>>> >>>> On 10/08/12 03:40, Eli Mesika wrote: >>>>> >>>>> >>>>> ----- Original Message ----- >>>>>> From: "Allon Mureinik" >>>>>> To: "engine-devel" >>>>>> Cc: "Eduardo Warszawski" , "Yeela Kaplan" >>>>>> , "Federico Simoncelli" >>>>>> , "Liron Aravot" >>>>>> Sent: Thursday, August 9, 2012 6:41:09 PM >>>>>> Subject: [Engine-devel] Serial Execution of Async Tasks >>>>>> >>>>>> Hi guys, >>>>>> >>>>>> As you may know the engine currently has the ability to fire an >>>>>> SPM >>>>>> task, and be asynchronously be "woken-up" when it ends. >>>>>> This is great, but we found the for the Live Storage Migration >>>>>> feature we need something a bit complex - the ability to have a >>>>>> series of async tasks in a single control flow. >>>>>> >>>>>> Here's my initial design for this, your comments and criticism >>>>>> would >>>>>> be welcome: >>>>>> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design >>>>>> >>>>> >>>>> Apart from the short explanation & flow , since this is a detailed >>>>> design , I would add >>>>> 1) Class diagram >>>>> 2) Flow diagram >>> Good idea, I'll see if I can jimmy something up. >>> >>>>> >>>> >>>> +1, it would help understanding the flow. >>>> >>>> - It looks like you chose not re-use/extend the ExecutionHandler (the >>>> entity used for building the tasks view exposed to the users). >>>> It might be a good idea to keep the separation between the engine >>>> Jobs >>>> and the underlying vdsm tasks, but I want to make sure you are >>>> familiar >>>> with this mechanism and ruled it out with a reason. If this is the >>>> case >>>> please share why you decided not to use it. >>> As you said Jobs and Steps are pure engine entities - they can >>> contain no VDSM tasks, one VDSM task, or plausibly, in the future, >>> several tasks. >>> Even /today/, AsyncTasks and Jobs/Steps are two different kinds of >>> animals - I don't see any added value in mixing them together. >>> >>>> >>>> >>>> - how does this design survives a jboss restart? Can you please a >>>> section in the wiki to explain that. >>> Basically, the way as a Command does today - the task is saved with >>> the executionIndex, and continues when the command is woken up. >>> I'll clarify this point in the wiki. >>> >>>> >>>> -successful execution - >>>> * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? >>> This is the new suggested format of executeCommand(). I'll clarify >>> this too. >>> >>>> * If the second task is an HSM command (vs. SPM command), I think you >>>> should explain in the design how to handle such flows as well. >>> HSM commands do not create AsyncTasks, as they do today - I will >>> clarify this. >>> >>>> * Why do we need before task? can you give a concrete example of what >>>> would you do in such a method. >>> Basically, /today/, command look like this: >>> executeCommand() { >>> doStuffInTheDB(); >>> runVdsCommand(someCommand); >>> } >>> >>> endSuccessfully() { >>> doMoreStuffInTheDB(); >>> } >>> >>> endWithFailure() { >>> doMoreStuffForFailureInTheDB(); >>> } >>> >>> In the new design, the entire doStuffInTheDB() should be moved to a >>> breforeTask of the (only) SPMAsyncTaskHandler. >>> >>>> >>>> - I see you added SPMAsyncTaskHandler, any reason not to use >>>> SPMAsyncTasK to manage it own life-cycle? >>> Conserving today's design - The SPMAsyncTaskHandler is the place to >>> add additional, non-SPM, logic around the SPM task execution, like >>> CommandBase allows today. >>> >>>> >>>> - In the life-cycle managed by the SPMAsyncTaskHandler there is a >>>> step >>>> 'createTask - how to create the async task' can you please elaborate >>>> what are the options. >>> new [any type of async task] >>>> >>>> >>>> >>>> >>>> >>>> Livnat >>>> >>>>>> >>>>>> >>>>>> -Allon >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> >>>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > > From iheim at redhat.com Thu Aug 16 15:51:31 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 16 Aug 2012 18:51:31 +0300 Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <502CE5D5.3030307@redhat.com> References: <518329067.4513405.1344942654993.JavaMail.root@redhat.com> <502A37E4.4060303@redhat.com> <502A5F70.6020807@redhat.com> <502CE5D5.3030307@redhat.com> Message-ID: <502D1703.6090006@redhat.com> On 08/16/2012 03:21 PM, Maor Lipchuk wrote: > On 08/14/2012 05:23 PM, Itamar Heim wrote: >> On 08/14/2012 02:35 PM, Maor Lipchuk wrote: >>> How should we handle the auditLogMessages? >>> Basically when a command ends it print an audit log. >>> >>> When we will start to use multiple tasks I assume user might get a bulk >>> of audit logs which are actually related to the same action (when we >>> fail for example the process will be create and delete). >>> It might be a bit confusing for the user not to know which action is >>> related to the operation >> >> I thought audit log gets written regardless of the transaction, so audit >> log appears "as they happen"? > That is correct, > The issue that I was referring to, is that now, with multiple tasks > execution, we will get many audit logs which related to the same > transaction but each one will be printed at a different time. > > I think that it might be confusing for the user to relate each audit log > to the operation that was started. > > > For example : > User run an action that executes some tasks of create volumes, > then the engine encounter a problem, and decide to rollback the > operation and delete the volumes, in that case the engine will execute a > delete task for the volumes, so user might see that delete of the volume > (for example a snapshot) was initiated. > Since those are asynchronous tasks, audit log will be printed in a > different period of time and a user might not be aware what is the > relation of those specific delete. async doesn't mean we don't print an audit log when we start it, and when we end it. so user would get the starting audit log when the task failed in your example. of course this may happen 2 hours after they started the task. as long as we can correlate the audit log to be part of the same "job", i don't see the issue. >> >>> >>> Maybe we will need to use the correlation id of the Execution handler as >>> Eli suggested or maybe add new states at CommandActionState? >>> >>> On 08/14/2012 02:10 PM, Allon Mureinik wrote: >>>> Hi guys, >>>> >>>> Thanks for all your comments! >>>> The correct response for many these points is to update the wiki. >>>> I'm enclosing here the quick-and-dirty replies just to keep this >>>> thread alive, and will update the wiki shortly. >>>> >>>> See inline. >>>> >>>> ----- Original Message ----- >>>>> From: "Livnat Peer" >>>>> To: "Allon Mureinik" >>>>> Cc: "Eli Mesika" , "Liron Aravot" >>>>> , "Federico Simoncelli" >>>>> , "engine-devel" , >>>>> "Eduardo Warszawski" , "Yeela >>>>> Kaplan" >>>>> Sent: Sunday, August 12, 2012 9:39:23 AM >>>>> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >>>>> >>>>> On 10/08/12 03:40, Eli Mesika wrote: >>>>>> >>>>>> >>>>>> ----- Original Message ----- >>>>>>> From: "Allon Mureinik" >>>>>>> To: "engine-devel" >>>>>>> Cc: "Eduardo Warszawski" , "Yeela Kaplan" >>>>>>> , "Federico Simoncelli" >>>>>>> , "Liron Aravot" >>>>>>> Sent: Thursday, August 9, 2012 6:41:09 PM >>>>>>> Subject: [Engine-devel] Serial Execution of Async Tasks >>>>>>> >>>>>>> Hi guys, >>>>>>> >>>>>>> As you may know the engine currently has the ability to fire an >>>>>>> SPM >>>>>>> task, and be asynchronously be "woken-up" when it ends. >>>>>>> This is great, but we found the for the Live Storage Migration >>>>>>> feature we need something a bit complex - the ability to have a >>>>>>> series of async tasks in a single control flow. >>>>>>> >>>>>>> Here's my initial design for this, your comments and criticism >>>>>>> would >>>>>>> be welcome: >>>>>>> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design >>>>>>> >>>>>> >>>>>> Apart from the short explanation & flow , since this is a detailed >>>>>> design , I would add >>>>>> 1) Class diagram >>>>>> 2) Flow diagram >>>> Good idea, I'll see if I can jimmy something up. >>>> >>>>>> >>>>> >>>>> +1, it would help understanding the flow. >>>>> >>>>> - It looks like you chose not re-use/extend the ExecutionHandler (the >>>>> entity used for building the tasks view exposed to the users). >>>>> It might be a good idea to keep the separation between the engine >>>>> Jobs >>>>> and the underlying vdsm tasks, but I want to make sure you are >>>>> familiar >>>>> with this mechanism and ruled it out with a reason. If this is the >>>>> case >>>>> please share why you decided not to use it. >>>> As you said Jobs and Steps are pure engine entities - they can >>>> contain no VDSM tasks, one VDSM task, or plausibly, in the future, >>>> several tasks. >>>> Even /today/, AsyncTasks and Jobs/Steps are two different kinds of >>>> animals - I don't see any added value in mixing them together. >>>> >>>>> >>>>> >>>>> - how does this design survives a jboss restart? Can you please a >>>>> section in the wiki to explain that. >>>> Basically, the way as a Command does today - the task is saved with >>>> the executionIndex, and continues when the command is woken up. >>>> I'll clarify this point in the wiki. >>>> >>>>> >>>>> -successful execution - >>>>> * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? >>>> This is the new suggested format of executeCommand(). I'll clarify >>>> this too. >>>> >>>>> * If the second task is an HSM command (vs. SPM command), I think you >>>>> should explain in the design how to handle such flows as well. >>>> HSM commands do not create AsyncTasks, as they do today - I will >>>> clarify this. >>>> >>>>> * Why do we need before task? can you give a concrete example of what >>>>> would you do in such a method. >>>> Basically, /today/, command look like this: >>>> executeCommand() { >>>> doStuffInTheDB(); >>>> runVdsCommand(someCommand); >>>> } >>>> >>>> endSuccessfully() { >>>> doMoreStuffInTheDB(); >>>> } >>>> >>>> endWithFailure() { >>>> doMoreStuffForFailureInTheDB(); >>>> } >>>> >>>> In the new design, the entire doStuffInTheDB() should be moved to a >>>> breforeTask of the (only) SPMAsyncTaskHandler. >>>> >>>>> >>>>> - I see you added SPMAsyncTaskHandler, any reason not to use >>>>> SPMAsyncTasK to manage it own life-cycle? >>>> Conserving today's design - The SPMAsyncTaskHandler is the place to >>>> add additional, non-SPM, logic around the SPM task execution, like >>>> CommandBase allows today. >>>> >>>>> >>>>> - In the life-cycle managed by the SPMAsyncTaskHandler there is a >>>>> step >>>>> 'createTask - how to create the async task' can you please elaborate >>>>> what are the options. >>>> new [any type of async task] >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Livnat >>>>> >>>>>>> >>>>>>> >>>>>>> -Allon >>>>>>> _______________________________________________ >>>>>>> Engine-devel mailing list >>>>>>> Engine-devel at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>> >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>> >>>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> >> > > From mlipchuk at redhat.com Thu Aug 16 17:27:33 2012 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Thu, 16 Aug 2012 20:27:33 +0300 Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <502D1703.6090006@redhat.com> References: <518329067.4513405.1344942654993.JavaMail.root@redhat.com> <502A37E4.4060303@redhat.com> <502A5F70.6020807@redhat.com> <502CE5D5.3030307@redhat.com> <502D1703.6090006@redhat.com> Message-ID: <502D2D85.5040205@redhat.com> On 08/16/2012 06:51 PM, Itamar Heim wrote: > On 08/16/2012 03:21 PM, Maor Lipchuk wrote: >> On 08/14/2012 05:23 PM, Itamar Heim wrote: >>> On 08/14/2012 02:35 PM, Maor Lipchuk wrote: >>>> How should we handle the auditLogMessages? >>>> Basically when a command ends it print an audit log. >>>> >>>> When we will start to use multiple tasks I assume user might get a bulk >>>> of audit logs which are actually related to the same action (when we >>>> fail for example the process will be create and delete). >>>> It might be a bit confusing for the user not to know which action is >>>> related to the operation >>> >>> I thought audit log gets written regardless of the transaction, so audit >>> log appears "as they happen"? >> That is correct, >> The issue that I was referring to, is that now, with multiple tasks >> execution, we will get many audit logs which related to the same >> transaction but each one will be printed at a different time. >> >> I think that it might be confusing for the user to relate each audit log >> to the operation that was started. >> >> >> For example : >> User run an action that executes some tasks of create volumes, >> then the engine encounter a problem, and decide to rollback the >> operation and delete the volumes, in that case the engine will execute a >> delete task for the volumes, so user might see that delete of the volume >> (for example a snapshot) was initiated. >> Since those are asynchronous tasks, audit log will be printed in a >> different period of time and a user might not be aware what is the >> relation of those specific delete. > > async doesn't mean we don't print an audit log when we start it, and > when we end it. > so user would get the starting audit log when the task failed in your > example. of course this may happen 2 hours after they started the task. > as long as we can correlate the audit log to be part of the same "job", > i don't see the issue. yes, but if I understood correctly, we don't want to correlate the multiple tasks with the execution handler (which AFAIK handle the correlation id). I assume this issue can be addressed in a future phase, but maybe it is an issue that might worth to think about. > >>> >>>> >>>> Maybe we will need to use the correlation id of the Execution >>>> handler as >>>> Eli suggested or maybe add new states at CommandActionState? >>>> >>>> On 08/14/2012 02:10 PM, Allon Mureinik wrote: >>>>> Hi guys, >>>>> >>>>> Thanks for all your comments! >>>>> The correct response for many these points is to update the wiki. >>>>> I'm enclosing here the quick-and-dirty replies just to keep this >>>>> thread alive, and will update the wiki shortly. >>>>> >>>>> See inline. >>>>> >>>>> ----- Original Message ----- >>>>>> From: "Livnat Peer" >>>>>> To: "Allon Mureinik" >>>>>> Cc: "Eli Mesika" , "Liron Aravot" >>>>>> , "Federico Simoncelli" >>>>>> , "engine-devel" , >>>>>> "Eduardo Warszawski" , "Yeela >>>>>> Kaplan" >>>>>> Sent: Sunday, August 12, 2012 9:39:23 AM >>>>>> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >>>>>> >>>>>> On 10/08/12 03:40, Eli Mesika wrote: >>>>>>> >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> From: "Allon Mureinik" >>>>>>>> To: "engine-devel" >>>>>>>> Cc: "Eduardo Warszawski" , "Yeela Kaplan" >>>>>>>> , "Federico Simoncelli" >>>>>>>> , "Liron Aravot" >>>>>>>> Sent: Thursday, August 9, 2012 6:41:09 PM >>>>>>>> Subject: [Engine-devel] Serial Execution of Async Tasks >>>>>>>> >>>>>>>> Hi guys, >>>>>>>> >>>>>>>> As you may know the engine currently has the ability to fire an >>>>>>>> SPM >>>>>>>> task, and be asynchronously be "woken-up" when it ends. >>>>>>>> This is great, but we found the for the Live Storage Migration >>>>>>>> feature we need something a bit complex - the ability to have a >>>>>>>> series of async tasks in a single control flow. >>>>>>>> >>>>>>>> Here's my initial design for this, your comments and criticism >>>>>>>> would >>>>>>>> be welcome: >>>>>>>> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> Apart from the short explanation & flow , since this is a detailed >>>>>>> design , I would add >>>>>>> 1) Class diagram >>>>>>> 2) Flow diagram >>>>> Good idea, I'll see if I can jimmy something up. >>>>> >>>>>>> >>>>>> >>>>>> +1, it would help understanding the flow. >>>>>> >>>>>> - It looks like you chose not re-use/extend the ExecutionHandler (the >>>>>> entity used for building the tasks view exposed to the users). >>>>>> It might be a good idea to keep the separation between the engine >>>>>> Jobs >>>>>> and the underlying vdsm tasks, but I want to make sure you are >>>>>> familiar >>>>>> with this mechanism and ruled it out with a reason. If this is the >>>>>> case >>>>>> please share why you decided not to use it. >>>>> As you said Jobs and Steps are pure engine entities - they can >>>>> contain no VDSM tasks, one VDSM task, or plausibly, in the future, >>>>> several tasks. >>>>> Even /today/, AsyncTasks and Jobs/Steps are two different kinds of >>>>> animals - I don't see any added value in mixing them together. >>>>> >>>>>> >>>>>> >>>>>> - how does this design survives a jboss restart? Can you please a >>>>>> section in the wiki to explain that. >>>>> Basically, the way as a Command does today - the task is saved with >>>>> the executionIndex, and continues when the command is woken up. >>>>> I'll clarify this point in the wiki. >>>>> >>>>>> >>>>>> -successful execution - >>>>>> * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? >>>>> This is the new suggested format of executeCommand(). I'll clarify >>>>> this too. >>>>> >>>>>> * If the second task is an HSM command (vs. SPM command), I think you >>>>>> should explain in the design how to handle such flows as well. >>>>> HSM commands do not create AsyncTasks, as they do today - I will >>>>> clarify this. >>>>> >>>>>> * Why do we need before task? can you give a concrete example of what >>>>>> would you do in such a method. >>>>> Basically, /today/, command look like this: >>>>> executeCommand() { >>>>> doStuffInTheDB(); >>>>> runVdsCommand(someCommand); >>>>> } >>>>> >>>>> endSuccessfully() { >>>>> doMoreStuffInTheDB(); >>>>> } >>>>> >>>>> endWithFailure() { >>>>> doMoreStuffForFailureInTheDB(); >>>>> } >>>>> >>>>> In the new design, the entire doStuffInTheDB() should be moved to a >>>>> breforeTask of the (only) SPMAsyncTaskHandler. >>>>> >>>>>> >>>>>> - I see you added SPMAsyncTaskHandler, any reason not to use >>>>>> SPMAsyncTasK to manage it own life-cycle? >>>>> Conserving today's design - The SPMAsyncTaskHandler is the place to >>>>> add additional, non-SPM, logic around the SPM task execution, like >>>>> CommandBase allows today. >>>>> >>>>>> >>>>>> - In the life-cycle managed by the SPMAsyncTaskHandler there is a >>>>>> step >>>>>> 'createTask - how to create the async task' can you please elaborate >>>>>> what are the options. >>>>> new [any type of async task] >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Livnat >>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -Allon >>>>>>>> _______________________________________________ >>>>>>>> Engine-devel mailing list >>>>>>>> Engine-devel at ovirt.org >>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> Engine-devel mailing list >>>>>>> Engine-devel at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>> >>>>>> >>>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> >>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> >>> >> >> > > From robert at middleswarth.net Fri Aug 17 01:32:20 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Thu, 16 Aug 2012 21:32:20 -0400 Subject: [Engine-devel] Minor change in the per patch process. Message-ID: <502D9F24.2050904@middleswarth.net> Part of the process now include creating rpm packages. http://jenkins.ovirt.info/view/patches/job/patch_engine_create_rpms/ This allows people to download and test packages based on the change if they want to. -- Thanks Robert Middleswarth @rmiddle (twitter/IRC) From danken at redhat.com Sun Aug 19 07:21:00 2012 From: danken at redhat.com (Dan Kenigsberg) Date: Sun, 19 Aug 2012 10:21:00 +0300 Subject: [Engine-devel] Minor change in the per patch process. In-Reply-To: <502D9F24.2050904@middleswarth.net> References: <502D9F24.2050904@middleswarth.net> Message-ID: <20120819072100.GD12807@redhat.com> On Thu, Aug 16, 2012 at 09:32:20PM -0400, Robert Middleswarth wrote: > Part of the process now include creating rpm packages. > http://jenkins.ovirt.info/view/patches/job/patch_engine_create_rpms/ > This allows people to download and test packages based on the change > if they want to. I wonder if this is needed by people. For me it duplicates the number of emails per change... BTW, I see that a failed job http://jenkins.ovirt.info/job/patch_vdsm_unit_tests/486/console does not set V-1 on the change. Could this be fixed? I'd like the poster and human reviewer to be perfectly aware that a change breaks unit tests. Regrads, Dan. From lpeer at redhat.com Sun Aug 19 07:22:58 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 19 Aug 2012 10:22:58 +0300 Subject: [Engine-devel] Engine supporting Cluster and Data Center level 3.2 Message-ID: <50309452.2090406@redhat.com> Hi All, Following previous discussion on the list a patch was sent (and accepted) to add cluster and Data center 3.2 in the engine. http://gerrit.ovirt.org/#/c/7169/ Please note that vdsm is reporting version 3.2 for a while now (started reporting 3.2 soon after oVirt 3.1 release was built). To make it easier for us to keep track of what features are supported in 3.2 cluster and data center I created the following wiki page - http://wiki.ovirt.org/wiki/Features/compatibilityVersion Thanks, Livnat From robert at middleswarth.net Sun Aug 19 08:06:09 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Sun, 19 Aug 2012 04:06:09 -0400 Subject: [Engine-devel] Minor change in the per patch process. In-Reply-To: <20120819072100.GD12807@redhat.com> References: <502D9F24.2050904@middleswarth.net> <20120819072100.GD12807@redhat.com> Message-ID: <50309E71.1070705@middleswarth.net> On 08/19/2012 03:21 AM, Dan Kenigsberg wrote: > On Thu, Aug 16, 2012 at 09:32:20PM -0400, Robert Middleswarth wrote: >> Part of the process now include creating rpm packages. >> http://jenkins.ovirt.info/view/patches/job/patch_engine_create_rpms/ >> This allows people to download and test packages based on the change >> if they want to. > I wonder if this is needed by people. For me it duplicates the number of > emails per change... > > BTW, I see that a failed job http://jenkins.ovirt.info/job/patch_vdsm_unit_tests/486/console > does not set V-1 on the change. Could this be fixed? I'd like the poster > and human reviewer to be perfectly aware that a change breaks unit > tests. > > Regrads, > Dan. This was detailed in an earlier email. There are limits related to the current plugin and how it processes patches. The issue is simple and effects job when aborted because someone isn't in the whitelist. I have spent a lot of time testing diff options and the best I could come up with is that we abort the process. The biggest problem is the gerrit-trigger plugin treats aborts as if they are a failure instead of as a non event. I tried several diff ways to make it a non event but couldn't find one. The choice I went with was to add text that they failed and leave them at zero the other options was to mark all failure / aborts (Including people not in the whitelist) as a -1 that wasn't really acceptable. There are 2 diff bug reports and if either gets fixed we will be able to -1 failures but until the are done I am very limited on the options. -- Thanks Robert Middleswarth @rmiddle (twitter/IRC) From danken at redhat.com Sun Aug 19 15:50:12 2012 From: danken at redhat.com (Dan Kenigsberg) Date: Sun, 19 Aug 2012 18:50:12 +0300 Subject: [Engine-devel] Minor change in the per patch process. In-Reply-To: <50309E71.1070705@middleswarth.net> References: <502D9F24.2050904@middleswarth.net> <20120819072100.GD12807@redhat.com> <50309E71.1070705@middleswarth.net> Message-ID: <20120819155012.GA29167@redhat.com> On Sun, Aug 19, 2012 at 04:06:09AM -0400, Robert Middleswarth wrote: > On 08/19/2012 03:21 AM, Dan Kenigsberg wrote: > >On Thu, Aug 16, 2012 at 09:32:20PM -0400, Robert Middleswarth wrote: > >>Part of the process now include creating rpm packages. > >>http://jenkins.ovirt.info/view/patches/job/patch_engine_create_rpms/ > >>This allows people to download and test packages based on the change > >>if they want to. > >I wonder if this is needed by people. For me it duplicates the number of > >emails per change... > > > >BTW, I see that a failed job http://jenkins.ovirt.info/job/patch_vdsm_unit_tests/486/console > >does not set V-1 on the change. Could this be fixed? I'd like the poster > >and human reviewer to be perfectly aware that a change breaks unit > >tests. > > > >Regrads, > >Dan. > This was detailed in an earlier email. I'm sorry that I've missed it. > There are limits related to > the current plugin and how it processes patches. The issue is > simple and effects job when aborted because someone isn't in the > whitelist. I have spent a lot of time testing diff options and the > best I could come up with is that we abort the process. The biggest > problem is the gerrit-trigger plugin treats aborts as if they are a > failure instead of as a non event. I tried several diff ways to > make it a non event but couldn't find one. The choice I went with > was to add text that they failed and leave them at zero the other > options was to mark all failure / aborts (Including people not in > the whitelist) as a -1 that wasn't really acceptable. There are 2 > diff bug reports and if either gets fixed we will be able to -1 > failures but until the are done I am very limited on the options. We'll make with what we've got. Thanks. Oh, and a tiny "security" comment: we may trust infallible at engineer.org but not her fallible at engineer.org colleague. So grep'ing the whitelist should be done with something like grep -q ^${email}$ From robert at middleswarth.net Sun Aug 19 19:29:57 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Sun, 19 Aug 2012 15:29:57 -0400 Subject: [Engine-devel] Minor change in the per patch process. In-Reply-To: <20120819155012.GA29167@redhat.com> References: <502D9F24.2050904@middleswarth.net> <20120819072100.GD12807@redhat.com> <50309E71.1070705@middleswarth.net> <20120819155012.GA29167@redhat.com> Message-ID: <50313EB5.3030705@middleswarth.net> On 08/19/2012 11:50 AM, Dan Kenigsberg wrote: > On Sun, Aug 19, 2012 at 04:06:09AM -0400, Robert Middleswarth wrote: >> On 08/19/2012 03:21 AM, Dan Kenigsberg wrote: >>> On Thu, Aug 16, 2012 at 09:32:20PM -0400, Robert Middleswarth wrote: >>>> Part of the process now include creating rpm packages. >>>> http://jenkins.ovirt.info/view/patches/job/patch_engine_create_rpms/ >>>> This allows people to download and test packages based on the change >>>> if they want to. >>> I wonder if this is needed by people. For me it duplicates the number of >>> emails per change... >>> >>> BTW, I see that a failed job http://jenkins.ovirt.info/job/patch_vdsm_unit_tests/486/console >>> does not set V-1 on the change. Could this be fixed? I'd like the poster >>> and human reviewer to be perfectly aware that a change breaks unit >>> tests. >>> >>> Regrads, >>> Dan. >> This was detailed in an earlier email. > I'm sorry that I've missed it. > >> There are limits related to >> the current plugin and how it processes patches. The issue is >> simple and effects job when aborted because someone isn't in the >> whitelist. I have spent a lot of time testing diff options and the >> best I could come up with is that we abort the process. The biggest >> problem is the gerrit-trigger plugin treats aborts as if they are a >> failure instead of as a non event. I tried several diff ways to >> make it a non event but couldn't find one. The choice I went with >> was to add text that they failed and leave them at zero the other >> options was to mark all failure / aborts (Including people not in >> the whitelist) as a -1 that wasn't really acceptable. There are 2 >> diff bug reports and if either gets fixed we will be able to -1 >> failures but until the are done I am very limited on the options. > We'll make with what we've got. Thanks. > > Oh, and a tiny "security" comment: we may trust infallible at engineer.org > but not her fallible at engineer.org colleague. So grep'ing the > whitelist should be done with something like > > grep -q ^${email}$ Point taken and grep line tested and changed. -- Thanks Robert Middleswarth @rmiddle (twitter/IRC) From robert at middleswarth.net Sun Aug 19 20:03:54 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Sun, 19 Aug 2012 16:03:54 -0400 Subject: [Engine-devel] Any way to break the dao_unit_tests up? Message-ID: <503146AA.9070908@middleswarth.net> The DAO unit tests take twice as long as the rest of the test to run is there any way to break them up into two pieces? -- Thanks Robert Middleswarth @rmiddle (twitter/IRC) From mkolesni at redhat.com Mon Aug 20 05:46:27 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Mon, 20 Aug 2012 01:46:27 -0400 (EDT) Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <503146AA.9070908@middleswarth.net> Message-ID: <1808305604.8209657.1345441587191.JavaMail.root@redhat.com> ----- Original Message ----- > The DAO unit tests take twice as long as the rest of the test to run > is > there any way to break them up into two pieces? > It will not be easy.. The way the tests are built today is with DB-unit. DB-unit allows to have an XML file with predefined data (called fixtures) which is used to recreate the DB data each time a test-class is run. This is all fine, except that in our tests there are (at least) 2 issues: 1. The same fixtures.xml file is used in all DAO tests. 2. Some DAOs require fixtures for several tables. Now, we could fix issue #1 by splitting the fixtures file into smaller files, each relating to only one table, which would allow us to run these tests in parallel on the same DB. Issue #2 would require to figure out which tests require several fixtures, and have them run isolated from the other tests which require only a single table. A simpler solution could be to have the tests run each on it's own db schema (or it's own db) which eliminates the dependencies and allows to run all in parallel, but is a bit more complicated to maintain (we would need some script that generates these schemas/dbs for tests automatically) and keeping multiple schemas up to date would also require CPU time. This is speaking in terms of the tests themselves, without considering the build process itself. > > -- > Thanks > Robert Middleswarth > @rmiddle (twitter/IRC) > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ykaul at redhat.com Mon Aug 20 06:02:22 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 20 Aug 2012 09:02:22 +0300 Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <503146AA.9070908@middleswarth.net> References: <503146AA.9070908@middleswarth.net> Message-ID: <5031D2EE.4090103@redhat.com> On 08/19/2012 11:03 PM, Robert Middleswarth wrote: > The DAO unit tests take twice as long as the rest of the test to run > is there any way to break them up into two pieces? > > Can they run in parallel to the rest of the tests? May be a KISS solution for this problem. Y. From robert at middleswarth.net Mon Aug 20 06:40:13 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Mon, 20 Aug 2012 02:40:13 -0400 Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <5031D2EE.4090103@redhat.com> References: <503146AA.9070908@middleswarth.net> <5031D2EE.4090103@redhat.com> Message-ID: <5031DBCD.8070201@middleswarth.net> On 08/20/2012 02:02 AM, Yaniv Kaul wrote: > On 08/19/2012 11:03 PM, Robert Middleswarth wrote: >> The DAO unit tests take twice as long as the rest of the test to run >> is there any way to break them up into two pieces? >> >> > > Can they run in parallel to the rest of the tests? > May be a KISS solution for this problem. > Y. > Yaniv, That is what I am doing but the current test can't be ran in parallel on the same host and the jobs backup several hours and none of the results get written to Gerrit until all the jobes finishes. -- Thanks Robert Middleswarth @rmiddle (twitter/Freenode IRC) @RobertM (OFTC IRC) From iheim at redhat.com Mon Aug 20 07:26:10 2012 From: iheim at redhat.com (Itamar Heim) Date: Mon, 20 Aug 2012 10:26:10 +0300 Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <5031DBCD.8070201@middleswarth.net> References: <503146AA.9070908@middleswarth.net> <5031D2EE.4090103@redhat.com> <5031DBCD.8070201@middleswarth.net> Message-ID: <5031E692.5020602@redhat.com> On 08/20/2012 09:40 AM, Robert Middleswarth wrote: > On 08/20/2012 02:02 AM, Yaniv Kaul wrote: >> On 08/19/2012 11:03 PM, Robert Middleswarth wrote: >>> The DAO unit tests take twice as long as the rest of the test to run >>> is there any way to break them up into two pieces? >>> >>> >> >> Can they run in parallel to the rest of the tests? >> May be a KISS solution for this problem. >> Y. >> > Yaniv, > > That is what I am doing but the current test can't be ran in parallel on > the same host and the jobs backup several hours and none of the results actually, why can't the test run in parallel on same host? > get written to Gerrit until all the jobes finishes. > From eedri at redhat.com Mon Aug 20 07:30:12 2012 From: eedri at redhat.com (Eyal Edri) Date: Mon, 20 Aug 2012 03:30:12 -0400 (EDT) Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <5031E692.5020602@redhat.com> Message-ID: <344505909.7239547.1345447812949.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Itamar Heim" > To: "Robert Middleswarth" > Cc: "Yaniv Kaul" , engine-devel at ovirt.org, "Eyal Edri" > Sent: Monday, August 20, 2012 10:26:10 AM > Subject: Re: [Engine-devel] Any way to break the dao_unit_tests up? > > On 08/20/2012 09:40 AM, Robert Middleswarth wrote: > > On 08/20/2012 02:02 AM, Yaniv Kaul wrote: > >> On 08/19/2012 11:03 PM, Robert Middleswarth wrote: > >>> The DAO unit tests take twice as long as the rest of the test to > >>> run > >>> is there any way to break them up into two pieces? > >>> > >>> > >> > >> Can they run in parallel to the rest of the tests? > >> May be a KISS solution for this problem. > >> Y. > >> > > Yaniv, > > > > That is what I am doing but the current test can't be ran in > > parallel on > > the same host and the jobs backup several hours and none of the > > results > > actually, why can't the test run in parallel on same host? i think we run into errors when trying to do it.. it's worth trying to do again and see if the errors are due to environment issues or problems in the tests themselfs. > > > get written to Gerrit until all the jobes finishes. > > > > > From iheim at redhat.com Mon Aug 20 07:31:15 2012 From: iheim at redhat.com (Itamar Heim) Date: Mon, 20 Aug 2012 10:31:15 +0300 Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <344505909.7239547.1345447812949.JavaMail.root@redhat.com> References: <344505909.7239547.1345447812949.JavaMail.root@redhat.com> Message-ID: <5031E7C3.1010309@redhat.com> On 08/20/2012 10:30 AM, Eyal Edri wrote: > > > ----- Original Message ----- >> From: "Itamar Heim" >> To: "Robert Middleswarth" >> Cc: "Yaniv Kaul" , engine-devel at ovirt.org, "Eyal Edri" >> Sent: Monday, August 20, 2012 10:26:10 AM >> Subject: Re: [Engine-devel] Any way to break the dao_unit_tests up? >> >> On 08/20/2012 09:40 AM, Robert Middleswarth wrote: >>> On 08/20/2012 02:02 AM, Yaniv Kaul wrote: >>>> On 08/19/2012 11:03 PM, Robert Middleswarth wrote: >>>>> The DAO unit tests take twice as long as the rest of the test to >>>>> run >>>>> is there any way to break them up into two pieces? >>>>> >>>>> >>>> >>>> Can they run in parallel to the rest of the tests? >>>> May be a KISS solution for this problem. >>>> Y. >>>> >>> Yaniv, >>> >>> That is what I am doing but the current test can't be ran in >>> parallel on >>> the same host and the jobs backup several hours and none of the >>> results >> >> actually, why can't the test run in parallel on same host? > > i think we run into errors when trying to do it.. > it's worth trying to do again and see if the errors are due to environment issues or problems in the tests themselfs. if you try to use same schema name, you will fail, but i think we fixed the dao tests to run in parallel in the past by setting the schema name per run, etc. From eedri at redhat.com Mon Aug 20 07:48:22 2012 From: eedri at redhat.com (Eyal Edri) Date: Mon, 20 Aug 2012 03:48:22 -0400 (EDT) Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <5031E7C3.1010309@redhat.com> Message-ID: <1704239865.7254645.1345448902335.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Itamar Heim" > To: "Eyal Edri" > Cc: "Yaniv Kaul" , engine-devel at ovirt.org, "Robert Middleswarth" > Sent: Monday, August 20, 2012 10:31:15 AM > Subject: Re: [Engine-devel] Any way to break the dao_unit_tests up? > > On 08/20/2012 10:30 AM, Eyal Edri wrote: > > > > > > ----- Original Message ----- > >> From: "Itamar Heim" > >> To: "Robert Middleswarth" > >> Cc: "Yaniv Kaul" , engine-devel at ovirt.org, "Eyal > >> Edri" > >> Sent: Monday, August 20, 2012 10:26:10 AM > >> Subject: Re: [Engine-devel] Any way to break the dao_unit_tests > >> up? > >> > >> On 08/20/2012 09:40 AM, Robert Middleswarth wrote: > >>> On 08/20/2012 02:02 AM, Yaniv Kaul wrote: > >>>> On 08/19/2012 11:03 PM, Robert Middleswarth wrote: > >>>>> The DAO unit tests take twice as long as the rest of the test > >>>>> to > >>>>> run > >>>>> is there any way to break them up into two pieces? > >>>>> > >>>>> > >>>> > >>>> Can they run in parallel to the rest of the tests? > >>>> May be a KISS solution for this problem. > >>>> Y. > >>>> > >>> Yaniv, > >>> > >>> That is what I am doing but the current test can't be ran in > >>> parallel on > >>> the same host and the jobs backup several hours and none of the > >>> results > >> > >> actually, why can't the test run in parallel on same host? > > > > i think we run into errors when trying to do it.. > > it's worth trying to do again and see if the errors are due to > > environment issues or problems in the tests themselfs. > > if you try to use same schema name, you will fail, but i think we > fixed > the dao tests to run in parallel in the past by setting the schema > name > per run, etc. > we're using a diff db name, if that's what you mean. i meant some random tests were failing, but lets check it again and verify it is the case. From sanjal at redhat.com Mon Aug 20 10:30:43 2012 From: sanjal at redhat.com (Shireesh Anjal) Date: Mon, 20 Aug 2012 16:00:43 +0530 Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <1808305604.8209657.1345441587191.JavaMail.root@redhat.com> References: <1808305604.8209657.1345441587191.JavaMail.root@redhat.com> Message-ID: <503211D3.1050800@redhat.com> On Monday 20 August 2012 11:16 AM, Mike Kolesnik wrote: > ----- Original Message ----- >> The DAO unit tests take twice as long as the rest of the test to run >> is >> there any way to break them up into two pieces? >> > It will not be easy.. > > The way the tests are built today is with DB-unit. > DB-unit allows to have an XML file with predefined data (called fixtures) which is used to recreate the DB data each time a test-class is run. > > This is all fine, except that in our tests there are (at least) 2 issues: > 1. The same fixtures.xml file is used in all DAO tests. > 2. Some DAOs require fixtures for several tables. > > Now, we could fix issue #1 by splitting the fixtures file into smaller files, each relating to only one table, which would allow us to run these tests in parallel on the same DB. > Issue #2 would require to figure out which tests require several fixtures, and have them run isolated from the other tests which require only a single table. > > A simpler solution could be to have the tests run each on it's own db schema (or it's own db) which eliminates the dependencies and allows to run all in parallel, but is a bit more complicated to maintain (we would need some script that generates these schemas/dbs for tests automatically) and keeping multiple schemas up to date would also require CPU time. > > This is speaking in terms of the tests themselves, without considering the build process itself. There are two issues here: 1) DB connection is created during initialization of every test case, and destroyed at the end of each test case execution 2) The fixtures data is inserted during initialization of every test case I think both of these can be resolved by - creating the test data only during initialization of the first test case, which will include creating the connection (with auto-commit = false), inserting fixtures data and committing it - rolling back any changes done to the database during test case execution in the tearDown method I just tried this in two phases. Using the same connection across all test cases brought down the dao unit tests run time from 4:42.683s to 1:07.628s, and inserting the fixtures data only once further brought it down to just 22.295s ! (on my local development machine) I've just sent a patch with these changes: http://gerrit.ovirt.org/7336 > >> -- >> Thanks >> Robert Middleswarth >> @rmiddle (twitter/IRC) >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From amureini at redhat.com Mon Aug 20 10:40:37 2012 From: amureini at redhat.com (Allon Mureinik) Date: Mon, 20 Aug 2012 06:40:37 -0400 (EDT) Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <503211D3.1050800@redhat.com> Message-ID: <1050852842.7466046.1345459237983.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Shireesh Anjal" > To: "Mike Kolesnik" > Cc: engine-devel at ovirt.org, "infra" > Sent: Monday, August 20, 2012 1:30:43 PM > Subject: Re: [Engine-devel] Any way to break the dao_unit_tests up? > > On Monday 20 August 2012 11:16 AM, Mike Kolesnik wrote: > > ----- Original Message ----- > >> The DAO unit tests take twice as long as the rest of the test to > >> run > >> is > >> there any way to break them up into two pieces? > >> > > It will not be easy.. > > > > The way the tests are built today is with DB-unit. > > DB-unit allows to have an XML file with predefined data (called > > fixtures) which is used to recreate the DB data each time a > > test-class is run. > > > > This is all fine, except that in our tests there are (at least) 2 > > issues: > > 1. The same fixtures.xml file is used in all DAO tests. > > 2. Some DAOs require fixtures for several tables. > > > > Now, we could fix issue #1 by splitting the fixtures file into > > smaller files, each relating to only one table, which would allow > > us to run these tests in parallel on the same DB. > > Issue #2 would require to figure out which tests require several > > fixtures, and have them run isolated from the other tests which > > require only a single table. > > > > A simpler solution could be to have the tests run each on it's own > > db schema (or it's own db) which eliminates the dependencies and > > allows to run all in parallel, but is a bit more complicated to > > maintain (we would need some script that generates these > > schemas/dbs for tests automatically) and keeping multiple schemas > > up to date would also require CPU time. > > > > This is speaking in terms of the tests themselves, without > > considering the build process itself. > > There are two issues here: > > 1) DB connection is created during initialization of every test case, > and destroyed at the end of each test case execution > 2) The fixtures data is inserted during initialization of every test > case > > I think both of these can be resolved by > - creating the test data only during initialization of the first > test > case, which will include creating the connection (with auto-commit = > false), inserting fixtures data and committing it > - rolling back any changes done to the database during test case > execution in the tearDown method > > I just tried this in two phases. Using the same connection across all > test cases brought down the dao unit tests run time from 4:42.683s to > 1:07.628s, and inserting the fixtures data only once further brought > it > down to just 22.295s ! (on my local development machine) > > I've just sent a patch with these changes: > http://gerrit.ovirt.org/7336 Beat me to the punch :-) (BTW, I have some implementation details - see inline in gerrit) > > > > >> -- > >> Thanks > >> Robert Middleswarth > >> @rmiddle (twitter/IRC) > >> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Mon Aug 20 11:24:03 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Mon, 20 Aug 2012 14:24:03 +0300 Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <503211D3.1050800@redhat.com> References: <1808305604.8209657.1345441587191.JavaMail.root@redhat.com> <503211D3.1050800@redhat.com> Message-ID: <50321E53.40901@redhat.com> On 08/20/2012 01:30 PM, Shireesh Anjal wrote: > On Monday 20 August 2012 11:16 AM, Mike Kolesnik wrote: >> ----- Original Message ----- >>> The DAO unit tests take twice as long as the rest of the test to run >>> is >>> there any way to break them up into two pieces? >>> >> It will not be easy.. >> >> The way the tests are built today is with DB-unit. >> DB-unit allows to have an XML file with predefined data (called >> fixtures) which is used to recreate the DB data each time a test-class >> is run. >> >> This is all fine, except that in our tests there are (at least) 2 issues: >> 1. The same fixtures.xml file is used in all DAO tests. >> 2. Some DAOs require fixtures for several tables. >> >> Now, we could fix issue #1 by splitting the fixtures file into smaller >> files, each relating to only one table, which would allow us to run >> these tests in parallel on the same DB. >> Issue #2 would require to figure out which tests require several >> fixtures, and have them run isolated from the other tests which >> require only a single table. >> >> A simpler solution could be to have the tests run each on it's own db >> schema (or it's own db) which eliminates the dependencies and allows >> to run all in parallel, but is a bit more complicated to maintain (we >> would need some script that generates these schemas/dbs for tests >> automatically) and keeping multiple schemas up to date would also >> require CPU time. >> >> This is speaking in terms of the tests themselves, without considering >> the build process itself. > > There are two issues here: > > 1) DB connection is created during initialization of every test case, > and destroyed at the end of each test case execution > 2) The fixtures data is inserted during initialization of every test case > > I think both of these can be resolved by > - creating the test data only during initialization of the first test > case, which will include creating the connection (with auto-commit = > false), inserting fixtures data and committing it > - rolling back any changes done to the database during test case > execution in the tearDown method > > I just tried this in two phases. Using the same connection across all > test cases brought down the dao unit tests run time from 4:42.683s to > 1:07.628s, and inserting the fixtures data only once further brought it > down to just 22.295s ! (on my local development machine) > > I've just sent a patch with these changes: http://gerrit.ovirt.org/7336 Hi Shireesh, I am also providing some input on some missing functionality > >> >>> -- >>> Thanks >>> Robert Middleswarth >>> @rmiddle (twitter/IRC) >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From vszocs at redhat.com Mon Aug 20 14:07:21 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 20 Aug 2012 10:07:21 -0400 (EDT) Subject: [Engine-devel] oVirt UI Plugins: Follow-up Meeting Message-ID: <1000617980.11550618.1345471641661.JavaMail.root@redhat.com> The following meeting has been modified: Subject: oVirt UI Plugins: Follow-up Meeting Organizer: "Vojtech Szocs" Time: Tuesday, August 21, 2012, 4:30:00 PM - 5:30:00 PM GMT +01:00 Belgrade, Bratislava, Budapest, Ljubljana, Prague Invitees: engine-devel at ovirt.org; George.Costea at netapp.com; Troy.Mangum at netapp.com; Dustin.Schoenbrun at netapp.com; Ricky.Hopper at netapp.com; Chris.Frantz at hp.com; kroberts at redhat.com; ovedo at redhat.com; iheim at redhat.com; ilvovsky at redhat.com; ecohen at redhat.com ... *~*~*~*~*~*~*~*~*~* Hi guys, this is a follow-up meeting for discussing progress on oVirt UI Plugins feature. Topics covered in this meeting: * walk through changes in the plugin infrastructure (2nd revision of UI Plugins proof-of-concept patch) * discuss the current progress of implementing dynamic tabs with GWT-Platform (GWTP) framework * discuss any open questions or issues, in case somebody started working on other items Here are the details required for joining the session. Intercall dial-in numbers can be found at: https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=7128867405 Intercall Conference Code ID: 7128867405 # Elluminate session: Regards, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 6269 bytes Desc: not available URL: From vszocs at redhat.com Mon Aug 20 15:21:46 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 20 Aug 2012 11:21:46 -0400 (EDT) Subject: [Engine-devel] oVirt UI Plugins: Follow-up Meeting Message-ID: <2138681341.11593104.1345476106866.JavaMail.root@redhat.com> The following meeting has been modified: Subject: oVirt UI Plugins: Follow-up Meeting Organizer: "Vojtech Szocs" Time: Tuesday, August 21, 2012, 4:30:00 PM - 5:30:00 PM GMT +01:00 Belgrade, Bratislava, Budapest, Ljubljana, Prague Invitees: engine-devel at ovirt.org; George.Costea at netapp.com; Troy.Mangum at netapp.com; Dustin.Schoenbrun at netapp.com; Ricky.Hopper at netapp.com; Chris.Frantz at hp.com; kroberts at redhat.com; ovedo at redhat.com; iheim at redhat.com; ilvovsky at redhat.com; ecohen at redhat.com ... *~*~*~*~*~*~*~*~*~* Hi guys, this is a follow-up meeting for discussing progress on oVirt UI Plugins feature. Topics covered in this meeting: * walk through changes in the plugin infrastructure (2nd revision of UI Plugins proof-of-concept patch) * discuss the current progress of implementing dynamic tabs with GWT-Platform (GWTP) framework * discuss any open questions or issues, in case somebody started working on other items Here are the details required for joining the session. Intercall dial-in numbers can be found at: https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=7128867405 Intercall Conference Code ID: 7128867405 # Elluminate session: https://sas.elluminate.com/m.jnlp?sid=819&password=M.CDDD4C1B4E2E33D90E5897F4942DD9 Regards, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 6485 bytes Desc: not available URL: From robert at middleswarth.net Mon Aug 20 19:51:24 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Mon, 20 Aug 2012 15:51:24 -0400 Subject: [Engine-devel] oVirt UI Plugins: Follow-up Meeting In-Reply-To: <2138681341.11593104.1345476106866.JavaMail.root@redhat.com> References: <2138681341.11593104.1345476106866.JavaMail.root@redhat.com> Message-ID: <5032953C.7010301@middleswarth.net> On 08/20/2012 11:21 AM, Vojtech Szocs wrote: > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel This is at the same time as the infra meeting? Should it be rescheduled? -- Thanks Robert Middleswarth @rmiddle (twitter/Freenode IRC) @RobertM (OFTC IRC) From iheim at redhat.com Mon Aug 20 21:08:51 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 21 Aug 2012 00:08:51 +0300 Subject: [Engine-devel] oVirt UI Plugins: Follow-up Meeting In-Reply-To: <5032953C.7010301@middleswarth.net> References: <2138681341.11593104.1345476106866.JavaMail.root@redhat.com> <5032953C.7010301@middleswarth.net> Message-ID: <5032A763.8050703@redhat.com> On 08/20/2012 10:51 PM, Robert Middleswarth wrote: > On 08/20/2012 11:21 AM, Vojtech Szocs wrote: >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > This is at the same time as the infra meeting? Should it be rescheduled? > i think this is mostly different audience, not sure its an issue. From mkolesni at redhat.com Tue Aug 21 06:43:23 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Tue, 21 Aug 2012 02:43:23 -0400 (EDT) Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <503211D3.1050800@redhat.com> Message-ID: <2031729343.8738969.1345531403624.JavaMail.root@redhat.com> ----- Original Message ----- > On Monday 20 August 2012 11:16 AM, Mike Kolesnik wrote: > > ----- Original Message ----- > >> The DAO unit tests take twice as long as the rest of the test to > >> run > >> is > >> there any way to break them up into two pieces? > >> > > It will not be easy.. > > > > The way the tests are built today is with DB-unit. > > DB-unit allows to have an XML file with predefined data (called > > fixtures) which is used to recreate the DB data each time a > > test-class is run. > > > > This is all fine, except that in our tests there are (at least) 2 > > issues: > > 1. The same fixtures.xml file is used in all DAO tests. > > 2. Some DAOs require fixtures for several tables. > > > > Now, we could fix issue #1 by splitting the fixtures file into > > smaller files, each relating to only one table, which would allow > > us to run these tests in parallel on the same DB. > > Issue #2 would require to figure out which tests require several > > fixtures, and have them run isolated from the other tests which > > require only a single table. > > > > A simpler solution could be to have the tests run each on it's own > > db schema (or it's own db) which eliminates the dependencies and > > allows to run all in parallel, but is a bit more complicated to > > maintain (we would need some script that generates these > > schemas/dbs for tests automatically) and keeping multiple schemas > > up to date would also require CPU time. > > > > This is speaking in terms of the tests themselves, without > > considering the build process itself. > > There are two issues here: > > 1) DB connection is created during initialization of every test case, > and destroyed at the end of each test case execution > 2) The fixtures data is inserted during initialization of every test > case > > I think both of these can be resolved by > - creating the test data only during initialization of the first > test > case, which will include creating the connection (with auto-commit = > false), inserting fixtures data and committing it > - rolling back any changes done to the database during test case > execution in the tearDown method > > I just tried this in two phases. Using the same connection across all > test cases brought down the dao unit tests run time from 4:42.683s to > 1:07.628s, and inserting the fixtures data only once further brought > it > down to just 22.295s ! (on my local development machine) > > I've just sent a patch with these changes: > http://gerrit.ovirt.org/7336 Patch merged, Thanks Shireesh for the contribution, now the DAO tests are super fast! > > > > >> -- > >> Thanks > >> Robert Middleswarth > >> @rmiddle (twitter/IRC) > >> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > From iheim at redhat.com Tue Aug 21 07:22:49 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 21 Aug 2012 10:22:49 +0300 Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <5029FCA3.90109@linux.vnet.ibm.com> References: <5029E56A.8070909@middleswarth.net> <5029FCA3.90109@linux.vnet.ibm.com> Message-ID: <50333749.8070605@redhat.com> On 08/14/2012 10:22 AM, Deepak C Shetty wrote: > On 08/14/2012 11:13 AM, Robert Middleswarth wrote: >> After a few false starts it looks like we have per patch testing >> working on VDSM, oVirt-engine, oVirt-engine-sdk and oVirt-engine-cli. >> There are 3 status a patch can get. 1) Success - Means the patch ran >> though the tests without issue. 2) Failure - Means the tests failed. >> 3) Aborted - Generally means the submitter is not in the whitelist and >> the tests were never run. If you have any questions please feel free >> to ask. >> > So what is needed for the submitted to be in whitelist ? > I once for Success for few of my patches.. then got failure for some > other patch( maybe thats due to the false starts u had) and then for the > latest patch of mine, it says aborted. > > So not sure if i am in whitelist or not ? > If not, what do i need to do to be part of it ? robert is adding these per failed jobs. we track the whitelist as a git repo in gerrit: http://gerrit.ovirt.org/gitweb?p=jenkins-whitelist.git;a=blob;f=jenkins-whitelist.txt > If yes, why did the build abort for my latest patch ? > > _______________________________________________ > Infra mailing list > Infra at ovirt.org > http://lists.ovirt.org/mailman/listinfo/infra From amureini at redhat.com Tue Aug 21 08:59:19 2012 From: amureini at redhat.com (Allon Mureinik) Date: Tue, 21 Aug 2012 04:59:19 -0400 (EDT) Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <2031729343.8738969.1345531403624.JavaMail.root@redhat.com> Message-ID: <101738546.8209710.1345539559177.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Mike Kolesnik" > To: "Shireesh Anjal" > Cc: engine-devel at ovirt.org, "infra" > Sent: Tuesday, August 21, 2012 9:43:23 AM > Subject: Re: [Engine-devel] Any way to break the dao_unit_tests up? > > ----- Original Message ----- > > On Monday 20 August 2012 11:16 AM, Mike Kolesnik wrote: > > > ----- Original Message ----- > > >> The DAO unit tests take twice as long as the rest of the test to > > >> run > > >> is > > >> there any way to break them up into two pieces? > > >> > > > It will not be easy.. > > > > > > The way the tests are built today is with DB-unit. > > > DB-unit allows to have an XML file with predefined data (called > > > fixtures) which is used to recreate the DB data each time a > > > test-class is run. > > > > > > This is all fine, except that in our tests there are (at least) 2 > > > issues: > > > 1. The same fixtures.xml file is used in all DAO tests. > > > 2. Some DAOs require fixtures for several tables. > > > > > > Now, we could fix issue #1 by splitting the fixtures file into > > > smaller files, each relating to only one table, which would allow > > > us to run these tests in parallel on the same DB. > > > Issue #2 would require to figure out which tests require several > > > fixtures, and have them run isolated from the other tests which > > > require only a single table. > > > > > > A simpler solution could be to have the tests run each on it's > > > own > > > db schema (or it's own db) which eliminates the dependencies and > > > allows to run all in parallel, but is a bit more complicated to > > > maintain (we would need some script that generates these > > > schemas/dbs for tests automatically) and keeping multiple schemas > > > up to date would also require CPU time. > > > > > > This is speaking in terms of the tests themselves, without > > > considering the build process itself. > > > > There are two issues here: > > > > 1) DB connection is created during initialization of every test > > case, > > and destroyed at the end of each test case execution > > 2) The fixtures data is inserted during initialization of every > > test > > case > > > > I think both of these can be resolved by > > - creating the test data only during initialization of the first > > test > > case, which will include creating the connection (with auto-commit > > = > > false), inserting fixtures data and committing it > > - rolling back any changes done to the database during test case > > execution in the tearDown method > > > > I just tried this in two phases. Using the same connection across > > all > > test cases brought down the dao unit tests run time from 4:42.683s > > to > > 1:07.628s, and inserting the fixtures data only once further > > brought > > it > > down to just 22.295s ! (on my local development machine) > > > > I've just sent a patch with these changes: > > http://gerrit.ovirt.org/7336 > > Patch merged, > > Thanks Shireesh for the contribution, now the DAO tests are super > fast! 20-something seconds to run DAO tests? awesome! Kudos, Shireesh! > > > > > > > > >> -- > > >> Thanks > > >> Robert Middleswarth > > >> @rmiddle (twitter/IRC) > > >> > > >> _______________________________________________ > > >> Engine-devel mailing list > > >> Engine-devel at ovirt.org > > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > >> > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Tue Aug 21 09:01:53 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 21 Aug 2012 12:01:53 +0300 Subject: [Engine-devel] Any way to break the dao_unit_tests up? In-Reply-To: <101738546.8209710.1345539559177.JavaMail.root@redhat.com> References: <101738546.8209710.1345539559177.JavaMail.root@redhat.com> Message-ID: <50334E81.2010307@redhat.com> On 08/21/2012 11:59 AM, Allon Mureinik wrote: > > > ----- Original Message ----- >> From: "Mike Kolesnik" >> To: "Shireesh Anjal" >> Cc: engine-devel at ovirt.org, "infra" >> Sent: Tuesday, August 21, 2012 9:43:23 AM >> Subject: Re: [Engine-devel] Any way to break the dao_unit_tests up? >> >> ----- Original Message ----- >>> On Monday 20 August 2012 11:16 AM, Mike Kolesnik wrote: >>>> ----- Original Message ----- >>>>> The DAO unit tests take twice as long as the rest of the test to >>>>> run >>>>> is >>>>> there any way to break them up into two pieces? >>>>> >>>> It will not be easy.. >>>> >>>> The way the tests are built today is with DB-unit. >>>> DB-unit allows to have an XML file with predefined data (called >>>> fixtures) which is used to recreate the DB data each time a >>>> test-class is run. >>>> >>>> This is all fine, except that in our tests there are (at least) 2 >>>> issues: >>>> 1. The same fixtures.xml file is used in all DAO tests. >>>> 2. Some DAOs require fixtures for several tables. >>>> >>>> Now, we could fix issue #1 by splitting the fixtures file into >>>> smaller files, each relating to only one table, which would allow >>>> us to run these tests in parallel on the same DB. >>>> Issue #2 would require to figure out which tests require several >>>> fixtures, and have them run isolated from the other tests which >>>> require only a single table. >>>> >>>> A simpler solution could be to have the tests run each on it's >>>> own >>>> db schema (or it's own db) which eliminates the dependencies and >>>> allows to run all in parallel, but is a bit more complicated to >>>> maintain (we would need some script that generates these >>>> schemas/dbs for tests automatically) and keeping multiple schemas >>>> up to date would also require CPU time. >>>> >>>> This is speaking in terms of the tests themselves, without >>>> considering the build process itself. >>> >>> There are two issues here: >>> >>> 1) DB connection is created during initialization of every test >>> case, >>> and destroyed at the end of each test case execution >>> 2) The fixtures data is inserted during initialization of every >>> test >>> case >>> >>> I think both of these can be resolved by >>> - creating the test data only during initialization of the first >>> test >>> case, which will include creating the connection (with auto-commit >>> = >>> false), inserting fixtures data and committing it >>> - rolling back any changes done to the database during test case >>> execution in the tearDown method >>> >>> I just tried this in two phases. Using the same connection across >>> all >>> test cases brought down the dao unit tests run time from 4:42.683s >>> to >>> 1:07.628s, and inserting the fixtures data only once further >>> brought >>> it >>> down to just 22.295s ! (on my local development machine) >>> >>> I've just sent a patch with these changes: >>> http://gerrit.ovirt.org/7336 >> >> Patch merged, >> >> Thanks Shireesh for the contribution, now the DAO tests are super >> fast! > 20-something seconds to run DAO tests? > awesome! > Kudos, Shireesh! +100 on this, good work! >> >>> >>>> >>>>> -- >>>>> Thanks >>>>> Robert Middleswarth >>>>> @rmiddle (twitter/IRC) >>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From robert at middleswarth.net Wed Aug 22 02:10:02 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Tue, 21 Aug 2012 22:10:02 -0400 Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <502A1247.3060508@linux.vnet.ibm.com> References: <5029E56A.8070909@middleswarth.net> <5029FCA3.90109@linux.vnet.ibm.com> <502A1247.3060508@linux.vnet.ibm.com> Message-ID: <50343F7A.1070508@middleswarth.net> On 08/14/2012 04:54 AM, Deepak C Shetty wrote: > On 08/14/2012 12:52 PM, Deepak C Shetty wrote: >> On 08/14/2012 11:13 AM, Robert Middleswarth wrote: >>> After a few false starts it looks like we have per patch testing >>> working on VDSM, oVirt-engine, oVirt-engine-sdk and >>> oVirt-engine-cli. There are 3 status a patch can get. 1) Success - >>> Means the patch ran though the tests without issue. 2) Failure - >>> Means the tests failed. 3) Aborted - Generally means the submitter >>> is not in the whitelist and the tests were never run. If you have >>> any questions please feel free to ask. >>> >> So what is needed for the submitted to be in whitelist ? >> I once for Success for few of my patches.. then got failure for some >> other patch( maybe thats due to the false starts u had) and then for >> the latest patch of mine, it says aborted. >> >> So not sure if i am in whitelist or not ? >> If not, what do i need to do to be part of it ? >> If yes, why did the build abort for my latest patch ? >> > Pls see http://gerrit.ovirt.org/#/c/6856/ > For patch1 it says build success, for patch 2, it says aborted.. why ? > All the abort means as a protective measure we don't run the tests unless we know the committer. With that said you are now in the whitelist so it shouldn't be an issue in the feature. -- Thanks Robert Middleswarth @rmiddle (twitter/Freenode IRC) @RobertM (OFTC IRC) From deepakcs at linux.vnet.ibm.com Wed Aug 22 04:03:20 2012 From: deepakcs at linux.vnet.ibm.com (Deepak C Shetty) Date: Wed, 22 Aug 2012 09:33:20 +0530 Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <50343F7A.1070508@middleswarth.net> References: <5029E56A.8070909@middleswarth.net> <5029FCA3.90109@linux.vnet.ibm.com> <502A1247.3060508@linux.vnet.ibm.com> <50343F7A.1070508@middleswarth.net> Message-ID: <50345A08.7020602@linux.vnet.ibm.com> On 08/22/2012 07:40 AM, Robert Middleswarth wrote: > On 08/14/2012 04:54 AM, Deepak C Shetty wrote: >> On 08/14/2012 12:52 PM, Deepak C Shetty wrote: >>> On 08/14/2012 11:13 AM, Robert Middleswarth wrote: >>>> After a few false starts it looks like we have per patch testing >>>> working on VDSM, oVirt-engine, oVirt-engine-sdk and >>>> oVirt-engine-cli. There are 3 status a patch can get. 1) Success - >>>> Means the patch ran though the tests without issue. 2) Failure - >>>> Means the tests failed. 3) Aborted - Generally means the submitter >>>> is not in the whitelist and the tests were never run. If you have >>>> any questions please feel free to ask. >>>> >>> So what is needed for the submitted to be in whitelist ? >>> I once for Success for few of my patches.. then got failure for some >>> other patch( maybe thats due to the false starts u had) and then for >>> the latest patch of mine, it says aborted. >>> >>> So not sure if i am in whitelist or not ? >>> If not, what do i need to do to be part of it ? >>> If yes, why did the build abort for my latest patch ? >>> >> Pls see http://gerrit.ovirt.org/#/c/6856/ >> For patch1 it says build success, for patch 2, it says aborted.. why ? >> > All the abort means as a protective measure we don't run the tests > unless we know the committer. With that said you are now in the > whitelist so it shouldn't be an issue in the feature. > Thanks for putting me in the whitelist. But it still doesn't clarify how patch 1 got build success and subsequent patch 2 had abort ? From robert at middleswarth.net Wed Aug 22 04:44:22 2012 From: robert at middleswarth.net (Robert Middleswarth) Date: Wed, 22 Aug 2012 00:44:22 -0400 Subject: [Engine-devel] [vdsm] Jenkins testing. In-Reply-To: <50345A08.7020602@linux.vnet.ibm.com> References: <5029E56A.8070909@middleswarth.net> <5029FCA3.90109@linux.vnet.ibm.com> <502A1247.3060508@linux.vnet.ibm.com> <50343F7A.1070508@middleswarth.net> <50345A08.7020602@linux.vnet.ibm.com> Message-ID: <503463A6.7070205@middleswarth.net> On 08/22/2012 12:03 AM, Deepak C Shetty wrote: > On 08/22/2012 07:40 AM, Robert Middleswarth wrote: >> On 08/14/2012 04:54 AM, Deepak C Shetty wrote: >>> On 08/14/2012 12:52 PM, Deepak C Shetty wrote: >>>> On 08/14/2012 11:13 AM, Robert Middleswarth wrote: >>>>> After a few false starts it looks like we have per patch testing >>>>> working on VDSM, oVirt-engine, oVirt-engine-sdk and >>>>> oVirt-engine-cli. There are 3 status a patch can get. 1) Success - >>>>> Means the patch ran though the tests without issue. 2) Failure - >>>>> Means the tests failed. 3) Aborted - Generally means the submitter >>>>> is not in the whitelist and the tests were never run. If you have >>>>> any questions please feel free to ask. >>>>> >>>> So what is needed for the submitted to be in whitelist ? >>>> I once for Success for few of my patches.. then got failure for some >>>> other patch( maybe thats due to the false starts u had) and then for >>>> the latest patch of mine, it says aborted. >>>> >>>> So not sure if i am in whitelist or not ? >>>> If not, what do i need to do to be part of it ? >>>> If yes, why did the build abort for my latest patch ? >>>> >>> Pls see http://gerrit.ovirt.org/#/c/6856/ >>> For patch1 it says build success, for patch 2, it says aborted.. why ? >>> >> All the abort means as a protective measure we don't run the tests >> unless we know the committer. With that said you are now in the >> whitelist so it shouldn't be an issue in the feature. >> > Thanks for putting me in the whitelist. > But it still doesn't clarify how patch 1 got build success and > subsequent patch 2 had abort ? > Patch 1 happened in the small window well I was testing before the whitelist went live. Patch 2 happened after the whitelist went live. Since you are now in the whitelist all new patches for you will run. -- Thanks Robert Middleswarth @rmiddle (twitter/Freenode IRC) @RobertM (OFTC IRC) From amureini at redhat.com Wed Aug 22 09:18:07 2012 From: amureini at redhat.com (Allon Mureinik) Date: Wed, 22 Aug 2012 05:18:07 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <518329067.4513405.1344942654993.JavaMail.root@redhat.com> Message-ID: <349094506.8912540.1345627087053.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Allon Mureinik" > To: "Livnat Peer" > Cc: "Eli Mesika" , "Liron Aravot" , "Federico Simoncelli" > , "engine-devel" , "Eduardo Warszawski" , "Yeela > Kaplan" > Sent: Tuesday, August 14, 2012 2:10:55 PM > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > Hi guys, > > Thanks for all your comments! > The correct response for many these points is to update the wiki. > I'm enclosing here the quick-and-dirty replies just to keep this > thread alive, and will update the wiki shortly. > > See inline. > > ----- Original Message ----- > > From: "Livnat Peer" > > To: "Allon Mureinik" > > Cc: "Eli Mesika" , "Liron Aravot" > > , "Federico Simoncelli" > > , "engine-devel" , > > "Eduardo Warszawski" , "Yeela > > Kaplan" > > Sent: Sunday, August 12, 2012 9:39:23 AM > > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > > > On 10/08/12 03:40, Eli Mesika wrote: > > > > > > > > > ----- Original Message ----- > > >> From: "Allon Mureinik" > > >> To: "engine-devel" > > >> Cc: "Eduardo Warszawski" , "Yeela Kaplan" > > >> , "Federico Simoncelli" > > >> , "Liron Aravot" > > >> Sent: Thursday, August 9, 2012 6:41:09 PM > > >> Subject: [Engine-devel] Serial Execution of Async Tasks > > >> > > >> Hi guys, > > >> > > >> As you may know the engine currently has the ability to fire an > > >> SPM > > >> task, and be asynchronously be "woken-up" when it ends. > > >> This is great, but we found the for the Live Storage Migration > > >> feature we need something a bit complex - the ability to have a > > >> series of async tasks in a single control flow. > > >> > > >> Here's my initial design for this, your comments and criticism > > >> would > > >> be welcome: > > >> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design > > > > > > Apart from the short explanation & flow , since this is a > > > detailed > > > design , I would add > > > 1) Class diagram > > > 2) Flow diagram > Good idea, I'll see if I can jimmy something up. > > > > > > > > +1, it would help understanding the flow. > > > > - It looks like you chose not re-use/extend the ExecutionHandler > > (the > > entity used for building the tasks view exposed to the users). > > It might be a good idea to keep the separation between the engine > > Jobs > > and the underlying vdsm tasks, but I want to make sure you are > > familiar > > with this mechanism and ruled it out with a reason. If this is the > > case > > please share why you decided not to use it. > As you said Jobs and Steps are pure engine entities - they can > contain no VDSM tasks, one VDSM task, or plausibly, in the future, > several tasks. > Even /today/, AsyncTasks and Jobs/Steps are two different kinds of > animals - I don't see any added value in mixing them together. > > > > > > > - how does this design survives a jboss restart? Can you please a > > section in the wiki to explain that. > Basically, the way as a Command does today - the task is saved with > the executionIndex, and continues when the command is woken up. > I'll clarify this point in the wiki. Added to the wiki. > > > > > -successful execution - > > * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? > This is the new suggested format of executeCommand(). I'll clarify > this too. Added to the wiki. > > * If the second task is an HSM command (vs. SPM command), I think > > you > > should explain in the design how to handle such flows as well. > HSM commands do not create AsyncTasks, as they do today - I will > clarify this. Added to the wiki. > > > * Why do we need before task? can you give a concrete example of > > what > > would you do in such a method. > Basically, /today/, command look like this: > executeCommand() { > doStuffInTheDB(); > runVdsCommand(someCommand); > } > > endSuccessfully() { > doMoreStuffInTheDB(); > } > > endWithFailure() { > doMoreStuffForFailureInTheDB(); > } > > In the new design, the entire doStuffInTheDB() should be moved to a > breforeTask of the (only) SPMAsyncTaskHandler. > > > > > - I see you added SPMAsyncTaskHandler, any reason not to use > > SPMAsyncTasK to manage it own life-cycle? > Conserving today's design - The SPMAsyncTaskHandler is the place to > add additional, non-SPM, logic around the SPM task execution, like > CommandBase allows today. > > > > > - In the life-cycle managed by the SPMAsyncTaskHandler there is a > > step > > 'createTask - how to create the async task' can you please > > elaborate > > what are the options. > new [any type of async task] > > > > > > > > > > > > Livnat > > > > >> > > >> > > >> -Allon > > >> _______________________________________________ > > >> Engine-devel mailing list > > >> Engine-devel at ovirt.org > > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > >> > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > From amureini at redhat.com Wed Aug 22 09:21:23 2012 From: amureini at redhat.com (Allon Mureinik) Date: Wed, 22 Aug 2012 05:21:23 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <502D2D85.5040205@redhat.com> Message-ID: <658531399.8916617.1345627283559.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Maor Lipchuk" > To: "Itamar Heim" > Cc: "Allon Mureinik" , "engine-devel" , "Eduardo Warszawski" > , "Yeela Kaplan" , "Federico Simoncelli" , "Liron > Aravot" > Sent: Thursday, August 16, 2012 8:27:33 PM > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > On 08/16/2012 06:51 PM, Itamar Heim wrote: > > On 08/16/2012 03:21 PM, Maor Lipchuk wrote: > >> On 08/14/2012 05:23 PM, Itamar Heim wrote: > >>> On 08/14/2012 02:35 PM, Maor Lipchuk wrote: > >>>> How should we handle the auditLogMessages? > >>>> Basically when a command ends it print an audit log. > >>>> > >>>> When we will start to use multiple tasks I assume user might get > >>>> a bulk > >>>> of audit logs which are actually related to the same action > >>>> (when we > >>>> fail for example the process will be create and delete). > >>>> It might be a bit confusing for the user not to know which > >>>> action is > >>>> related to the operation > >>> > >>> I thought audit log gets written regardless of the transaction, > >>> so audit > >>> log appears "as they happen"? > >> That is correct, > >> The issue that I was referring to, is that now, with multiple > >> tasks > >> execution, we will get many audit logs which related to the same > >> transaction but each one will be printed at a different time. > >> > >> I think that it might be confusing for the user to relate each > >> audit log > >> to the operation that was started. > >> > >> > >> For example : > >> User run an action that executes some tasks of create volumes, > >> then the engine encounter a problem, and decide to rollback the > >> operation and delete the volumes, in that case the engine will > >> execute a > >> delete task for the volumes, so user might see that delete of the > >> volume > >> (for example a snapshot) was initiated. > >> Since those are asynchronous tasks, audit log will be printed in a > >> different period of time and a user might not be aware what is the > >> relation of those specific delete. > > > > async doesn't mean we don't print an audit log when we start it, > > and > > when we end it. > > so user would get the starting audit log when the task failed in > > your > > example. of course this may happen 2 hours after they started the > > task. > > as long as we can correlate the audit log to be part of the same > > "job", > > i don't see the issue. > yes, but if I understood correctly, we don't want to correlate the > multiple tasks with the execution handler (which AFAIK handle the > correlation id). I actually didn't mention this, but I don't see why not. What's I'd probably like to have is a log with "Correlation ID xyzabc, step #3 starting/executing/ending" Does this make any sense? > > I assume this issue can be addressed in a future phase, > but maybe it is an issue that might worth to think about. > > > >>> > >>>> > >>>> Maybe we will need to use the correlation id of the Execution > >>>> handler as > >>>> Eli suggested or maybe add new states at CommandActionState? > >>>> > >>>> On 08/14/2012 02:10 PM, Allon Mureinik wrote: > >>>>> Hi guys, > >>>>> > >>>>> Thanks for all your comments! > >>>>> The correct response for many these points is to update the > >>>>> wiki. > >>>>> I'm enclosing here the quick-and-dirty replies just to keep > >>>>> this > >>>>> thread alive, and will update the wiki shortly. > >>>>> > >>>>> See inline. > >>>>> > >>>>> ----- Original Message ----- > >>>>>> From: "Livnat Peer" > >>>>>> To: "Allon Mureinik" > >>>>>> Cc: "Eli Mesika" , "Liron Aravot" > >>>>>> , "Federico Simoncelli" > >>>>>> , "engine-devel" > >>>>>> , > >>>>>> "Eduardo Warszawski" , "Yeela > >>>>>> Kaplan" > >>>>>> Sent: Sunday, August 12, 2012 9:39:23 AM > >>>>>> Subject: Re: [Engine-devel] Serial Execution of Async Tasks > >>>>>> > >>>>>> On 10/08/12 03:40, Eli Mesika wrote: > >>>>>>> > >>>>>>> > >>>>>>> ----- Original Message ----- > >>>>>>>> From: "Allon Mureinik" > >>>>>>>> To: "engine-devel" > >>>>>>>> Cc: "Eduardo Warszawski" , "Yeela > >>>>>>>> Kaplan" > >>>>>>>> , "Federico Simoncelli" > >>>>>>>> , "Liron Aravot" > >>>>>>>> Sent: Thursday, August 9, 2012 6:41:09 PM > >>>>>>>> Subject: [Engine-devel] Serial Execution of Async Tasks > >>>>>>>> > >>>>>>>> Hi guys, > >>>>>>>> > >>>>>>>> As you may know the engine currently has the ability to fire > >>>>>>>> an > >>>>>>>> SPM > >>>>>>>> task, and be asynchronously be "woken-up" when it ends. > >>>>>>>> This is great, but we found the for the Live Storage > >>>>>>>> Migration > >>>>>>>> feature we need something a bit complex - the ability to > >>>>>>>> have a > >>>>>>>> series of async tasks in a single control flow. > >>>>>>>> > >>>>>>>> Here's my initial design for this, your comments and > >>>>>>>> criticism > >>>>>>>> would > >>>>>>>> be welcome: > >>>>>>>> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design > >>>>>>>> > >>>>>>>> > >>>>>>> > >>>>>>> Apart from the short explanation & flow , since this is a > >>>>>>> detailed > >>>>>>> design , I would add > >>>>>>> 1) Class diagram > >>>>>>> 2) Flow diagram > >>>>> Good idea, I'll see if I can jimmy something up. > >>>>> > >>>>>>> > >>>>>> > >>>>>> +1, it would help understanding the flow. > >>>>>> > >>>>>> - It looks like you chose not re-use/extend the > >>>>>> ExecutionHandler (the > >>>>>> entity used for building the tasks view exposed to the users). > >>>>>> It might be a good idea to keep the separation between the > >>>>>> engine > >>>>>> Jobs > >>>>>> and the underlying vdsm tasks, but I want to make sure you are > >>>>>> familiar > >>>>>> with this mechanism and ruled it out with a reason. If this is > >>>>>> the > >>>>>> case > >>>>>> please share why you decided not to use it. > >>>>> As you said Jobs and Steps are pure engine entities - they can > >>>>> contain no VDSM tasks, one VDSM task, or plausibly, in the > >>>>> future, > >>>>> several tasks. > >>>>> Even /today/, AsyncTasks and Jobs/Steps are two different kinds > >>>>> of > >>>>> animals - I don't see any added value in mixing them together. > >>>>> > >>>>>> > >>>>>> > >>>>>> - how does this design survives a jboss restart? Can you > >>>>>> please a > >>>>>> section in the wiki to explain that. > >>>>> Basically, the way as a Command does today - the task is saved > >>>>> with > >>>>> the executionIndex, and continues when the command is woken up. > >>>>> I'll clarify this point in the wiki. > >>>>> > >>>>>> > >>>>>> -successful execution - > >>>>>> * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? > >>>>> This is the new suggested format of executeCommand(). I'll > >>>>> clarify > >>>>> this too. > >>>>> > >>>>>> * If the second task is an HSM command (vs. SPM command), I > >>>>>> think you > >>>>>> should explain in the design how to handle such flows as well. > >>>>> HSM commands do not create AsyncTasks, as they do today - I > >>>>> will > >>>>> clarify this. > >>>>> > >>>>>> * Why do we need before task? can you give a concrete example > >>>>>> of what > >>>>>> would you do in such a method. > >>>>> Basically, /today/, command look like this: > >>>>> executeCommand() { > >>>>> doStuffInTheDB(); > >>>>> runVdsCommand(someCommand); > >>>>> } > >>>>> > >>>>> endSuccessfully() { > >>>>> doMoreStuffInTheDB(); > >>>>> } > >>>>> > >>>>> endWithFailure() { > >>>>> doMoreStuffForFailureInTheDB(); > >>>>> } > >>>>> > >>>>> In the new design, the entire doStuffInTheDB() should be moved > >>>>> to a > >>>>> breforeTask of the (only) SPMAsyncTaskHandler. > >>>>> > >>>>>> > >>>>>> - I see you added SPMAsyncTaskHandler, any reason not to use > >>>>>> SPMAsyncTasK to manage it own life-cycle? > >>>>> Conserving today's design - The SPMAsyncTaskHandler is the > >>>>> place to > >>>>> add additional, non-SPM, logic around the SPM task execution, > >>>>> like > >>>>> CommandBase allows today. > >>>>> > >>>>>> > >>>>>> - In the life-cycle managed by the SPMAsyncTaskHandler there > >>>>>> is a > >>>>>> step > >>>>>> 'createTask - how to create the async task' can you please > >>>>>> elaborate > >>>>>> what are the options. > >>>>> new [any type of async task] > >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>> Livnat > >>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> -Allon > >>>>>>>> _______________________________________________ > >>>>>>>> Engine-devel mailing list > >>>>>>>> Engine-devel at ovirt.org > >>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>>>>>> > >>>>>>> _______________________________________________ > >>>>>>> Engine-devel mailing list > >>>>>>> Engine-devel at ovirt.org > >>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>>>>> > >>>>>> > >>>>>> > >>>>> _______________________________________________ > >>>>> Engine-devel mailing list > >>>>> Engine-devel at ovirt.org > >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>>> > >>>> > >>>> > >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>> > >>> > >>> > >> > >> > > > > > > > From amureini at redhat.com Wed Aug 22 09:40:28 2012 From: amureini at redhat.com (Allon Mureinik) Date: Wed, 22 Aug 2012 05:40:28 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <536706378.9223948.1344628096135.JavaMail.root@redhat.com> Message-ID: <1140946129.8937936.1345628428476.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Yair Zaslavsky" > To: "Eli Mesika" > Cc: "Liron Aravot" , "Federico Simoncelli" , "engine-devel" > , "Eduardo Warszawski" , "Yeela Kaplan" , "Allon > Mureinik" > Sent: Friday, August 10, 2012 10:48:16 PM > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > > > ----- Original Message ----- > From: "Eli Mesika" > To: "Allon Mureinik" > Cc: "Liron Aravot" , "Federico Simoncelli" > , "engine-devel" , > "Eduardo Warszawski" , "Yeela Kaplan" > > Sent: Friday, August 10, 2012 3:40:48 AM > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > > > ----- Original Message ----- > > From: "Allon Mureinik" > > To: "engine-devel" > > Cc: "Eduardo Warszawski" , "Yeela Kaplan" > > , "Federico Simoncelli" > > , "Liron Aravot" > > Sent: Thursday, August 9, 2012 6:41:09 PM > > Subject: [Engine-devel] Serial Execution of Async Tasks > > > > Hi guys, > > > > As you may know the engine currently has the ability to fire an SPM > > task, and be asynchronously be "woken-up" when it ends. > > This is great, but we found the for the Live Storage Migration > > feature we need something a bit complex - the ability to have a > > series of async tasks in a single control flow. > > > > Here's my initial design for this, your comments and criticism > > would > > be welcome: > > http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design > > Apart from the short explanation & flow , since this is a detailed > design , I would add > 1) Class diagram > 2) Flow diagram > > +1 > I am also interested to get a flow how a task is created (i.e - > replacement of ConcreateCreateTask) - but this will be handled in > what Eli has asked for. > > In addition, you have two titles of "Successful Execution". Fixed. > At "compensate" - see how revertTasks currently behaves. > Also read - > http://wiki.ovirt.org/wiki/Main_Page/features/RunningCommandsOnEndActionFailure > > This is the work I did for CloneVmFromSnapshot - not saying it's > perfect - but you should have an infrastructure/pattern to rollback > not just via spmRevertTask but also using an engine command. This is what the endWithFailure does - or am I missing your point? > > Yair > > > > > > > > > -Allon > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From amureini at redhat.com Wed Aug 22 09:48:01 2012 From: amureini at redhat.com (Allon Mureinik) Date: Wed, 22 Aug 2012 05:48:01 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <532949446.36197412.1344559248995.JavaMail.root@redhat.com> Message-ID: <364622524.8944168.1345628881870.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Eli Mesika" > To: "Allon Mureinik" > Cc: "Eduardo Warszawski" , "Yeela Kaplan" , "Federico Simoncelli" > , "Liron Aravot" , "engine-devel" > Sent: Friday, August 10, 2012 3:40:48 AM > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > > > ----- Original Message ----- > > From: "Allon Mureinik" > > To: "engine-devel" > > Cc: "Eduardo Warszawski" , "Yeela Kaplan" > > , "Federico Simoncelli" > > , "Liron Aravot" > > Sent: Thursday, August 9, 2012 6:41:09 PM > > Subject: [Engine-devel] Serial Execution of Async Tasks > > > > Hi guys, > > > > As you may know the engine currently has the ability to fire an SPM > > task, and be asynchronously be "woken-up" when it ends. > > This is great, but we found the for the Live Storage Migration > > feature we need something a bit complex - the ability to have a > > series of async tasks in a single control flow. > > > > Here's my initial design for this, your comments and criticism > > would > > be welcome: > > http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design > > Apart from the short explanation & flow , since this is a detailed > design , I would add > 1) Class diagram Done > 2) Flow diagram > > > > > > > -Allon > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From alonbl at redhat.com Wed Aug 22 10:32:27 2012 From: alonbl at redhat.com (Alon Bar-Lev) Date: Wed, 22 Aug 2012 06:32:27 -0400 (EDT) Subject: [Engine-devel] NOTICE: vdsm-bootstrap master is required for ovirt-engine master In-Reply-To: <1885300945.2560540.1345631435509.JavaMail.root@redhat.com> Message-ID: <1106641080.2560594.1345631547183.JavaMail.root@redhat.com> Hello, Due to recent changes in the bootstrap process, engine master now requires vdsm-bootstrap from vdsm master as well. The major changes are listed bellow. If you experience any issue, please CC me. Regards, Alon Bar-Lev. commit ec60b1fe12273ef2d5a55183bc6031cfa4cbedbb Author: Alon Bar-Lev Date: Wed Aug 8 17:50:13 2012 +0300 bootstrap: send complete bootstrap from engine CURRENT BEHAVIOR vds_installer.py is part of ovirt-engine, upon bootstrap, the script is sent to the node using ssh. Then vds_installer.py pulls vds-bootstrap* files using HTTP from engine. The vds_installer.py and vds_bootstrap pulls ssh public key from engine using HTTP. NEW BEHAVIOR vds_installer.py was moved into the vdsm-bootstrap and renamed to setup. vdsm-bootstrap repository was updated to create directory per bootstrap interface with 'setup' script. ovirt-engine copies public key to node in similar way of firewall rules. At bootstrap time, engine create tar archive from the bootstrap directory, cache it and pipe the archive into node in order to extract it and run the setup script. No HTTP communication is needed. No conflict with existing files. CONFIGURATION New: BootstrapCommand Control which command is sent during bootstrap. New: BootstrapCacheRefreshInterval Control the interval of testing if cache is valid. New: BootstrapPackageDirectory Directory to pack and send to node. New: BootstrapPackageName Cache archive name (basename). New: SSHKeyAlias Engine SSH key alias MODIFICATIONS Use umask 0077 when transferring installations so accessible only to logged on user. Change-Id: I6f4a09ca9e66f0c9f5f4f7b283a5f43986b7e603 Signed-off-by: Alon Bar-Lev commit 8d7d8ecb07cca8c47ae539115b581a7124923235 Author: Alon Bar-Lev Date: Wed Jul 25 16:21:07 2012 +0300 bootstrap: new implementation for apache-sshd usage Major changes: 1. Do not use temporary files for compression/decompression. 2. Do not use wget to pull large files, use ssh for all transfers. 3. One pass on files for digest, compress/decompress, send/receive. 4. Do not pull every 1 second for bytes/status. 5. Test for command status code. 6. File transfer using ssh and md5sum at same session, md5sum written to stderr. 7. Limit buffer size when reading remote output, so we won't exhaust all memory. 8. Do not echo back whole file content when sending file. 9. Consistent error, exception handling and debugging information. 10. More unit tests. Split between pure ssh implementation[1] and application logic[2]. Unit tests now have their own dedicated generic sshd[3], for proper work in embedded mode ssh apache-ssh-0.7.0 is required. Separate unit tests dedicated to OVirtSSH implementation, by default embedded apache-sshd is used, this can be overridden by setting java system properties, see[4]. As unit tests takes long time, use -Penable-ssh-tests to activate. [1] org.ovirt.engine.core.utils.ssh.OVirtSSH [2] org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper [3] org.ovirt.engine.core.utils.ssh.SSHD [4] org.ovirt.engine.core.utils.ssh.TestCommon Change-Id: I50ba60f2db364114907485da3074feb714615e0c Signed-off-by: Alon Bar-Lev From amureini at redhat.com Wed Aug 22 11:55:05 2012 From: amureini at redhat.com (Allon Mureinik) Date: Wed, 22 Aug 2012 07:55:05 -0400 (EDT) Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <364622524.8944168.1345628881870.JavaMail.root@redhat.com> Message-ID: <1420018276.9013501.1345636505299.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Allon Mureinik" > To: "Eli Mesika" > Cc: "Liron Aravot" , "engine-devel" , "Eduardo Warszawski" > > Sent: Wednesday, August 22, 2012 12:48:01 PM > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > > > ----- Original Message ----- > > From: "Eli Mesika" > > To: "Allon Mureinik" > > Cc: "Eduardo Warszawski" , "Yeela Kaplan" > > , "Federico Simoncelli" > > , "Liron Aravot" , > > "engine-devel" > > Sent: Friday, August 10, 2012 3:40:48 AM > > Subject: Re: [Engine-devel] Serial Execution of Async Tasks > > > > > > > > ----- Original Message ----- > > > From: "Allon Mureinik" > > > To: "engine-devel" > > > Cc: "Eduardo Warszawski" , "Yeela Kaplan" > > > , "Federico Simoncelli" > > > , "Liron Aravot" > > > Sent: Thursday, August 9, 2012 6:41:09 PM > > > Subject: [Engine-devel] Serial Execution of Async Tasks > > > > > > Hi guys, > > > > > > As you may know the engine currently has the ability to fire an > > > SPM > > > task, and be asynchronously be "woken-up" when it ends. > > > This is great, but we found the for the Live Storage Migration > > > feature we need something a bit complex - the ability to have a > > > series of async tasks in a single control flow. > > > > > > Here's my initial design for this, your comments and criticism > > > would > > > be welcome: > > > http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design > > > > Apart from the short explanation & flow , since this is a detailed > > design , I would add > > 1) Class diagram > Done > > 2) Flow diagram Done too. > > > > > > > > > > > -Allon > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From iheim at redhat.com Wed Aug 22 13:29:03 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 22 Aug 2012 16:29:03 +0300 Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <658531399.8916617.1345627283559.JavaMail.root@redhat.com> References: <658531399.8916617.1345627283559.JavaMail.root@redhat.com> Message-ID: <5034DE9F.9040403@redhat.com> On 08/22/2012 12:21 PM, Allon Mureinik wrote: > > > ----- Original Message ----- >> From: "Maor Lipchuk" >> To: "Itamar Heim" >> Cc: "Allon Mureinik" , "engine-devel" , "Eduardo Warszawski" >> , "Yeela Kaplan" , "Federico Simoncelli" , "Liron >> Aravot" >> Sent: Thursday, August 16, 2012 8:27:33 PM >> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >> >> On 08/16/2012 06:51 PM, Itamar Heim wrote: >>> On 08/16/2012 03:21 PM, Maor Lipchuk wrote: >>>> On 08/14/2012 05:23 PM, Itamar Heim wrote: >>>>> On 08/14/2012 02:35 PM, Maor Lipchuk wrote: >>>>>> How should we handle the auditLogMessages? >>>>>> Basically when a command ends it print an audit log. >>>>>> >>>>>> When we will start to use multiple tasks I assume user might get >>>>>> a bulk >>>>>> of audit logs which are actually related to the same action >>>>>> (when we >>>>>> fail for example the process will be create and delete). >>>>>> It might be a bit confusing for the user not to know which >>>>>> action is >>>>>> related to the operation >>>>> >>>>> I thought audit log gets written regardless of the transaction, >>>>> so audit >>>>> log appears "as they happen"? >>>> That is correct, >>>> The issue that I was referring to, is that now, with multiple >>>> tasks >>>> execution, we will get many audit logs which related to the same >>>> transaction but each one will be printed at a different time. >>>> >>>> I think that it might be confusing for the user to relate each >>>> audit log >>>> to the operation that was started. >>>> >>>> >>>> For example : >>>> User run an action that executes some tasks of create volumes, >>>> then the engine encounter a problem, and decide to rollback the >>>> operation and delete the volumes, in that case the engine will >>>> execute a >>>> delete task for the volumes, so user might see that delete of the >>>> volume >>>> (for example a snapshot) was initiated. >>>> Since those are asynchronous tasks, audit log will be printed in a >>>> different period of time and a user might not be aware what is the >>>> relation of those specific delete. >>> >>> async doesn't mean we don't print an audit log when we start it, >>> and >>> when we end it. >>> so user would get the starting audit log when the task failed in >>> your >>> example. of course this may happen 2 hours after they started the >>> task. >>> as long as we can correlate the audit log to be part of the same >>> "job", >>> i don't see the issue. >> yes, but if I understood correctly, we don't want to correlate the >> multiple tasks with the execution handler (which AFAIK handle the >> correlation id). > I actually didn't mention this, but I don't see why not. > What's I'd probably like to have is a log with "Correlation ID xyzabc, step #3 starting/executing/ending" > Does this make any sense? I think it is important to decide if correlation ID is also a "job id", or since correlation id can be controlled by user, for multiple steps over several commands (and maybe allowing clients to do multiple actions as a single transaction) we want a more formal "job id". >> >> I assume this issue can be addressed in a future phase, >> but maybe it is an issue that might worth to think about. >>> >>>>> >>>>>> >>>>>> Maybe we will need to use the correlation id of the Execution >>>>>> handler as >>>>>> Eli suggested or maybe add new states at CommandActionState? >>>>>> >>>>>> On 08/14/2012 02:10 PM, Allon Mureinik wrote: >>>>>>> Hi guys, >>>>>>> >>>>>>> Thanks for all your comments! >>>>>>> The correct response for many these points is to update the >>>>>>> wiki. >>>>>>> I'm enclosing here the quick-and-dirty replies just to keep >>>>>>> this >>>>>>> thread alive, and will update the wiki shortly. >>>>>>> >>>>>>> See inline. >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> From: "Livnat Peer" >>>>>>>> To: "Allon Mureinik" >>>>>>>> Cc: "Eli Mesika" , "Liron Aravot" >>>>>>>> , "Federico Simoncelli" >>>>>>>> , "engine-devel" >>>>>>>> , >>>>>>>> "Eduardo Warszawski" , "Yeela >>>>>>>> Kaplan" >>>>>>>> Sent: Sunday, August 12, 2012 9:39:23 AM >>>>>>>> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >>>>>>>> >>>>>>>> On 10/08/12 03:40, Eli Mesika wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> ----- Original Message ----- >>>>>>>>>> From: "Allon Mureinik" >>>>>>>>>> To: "engine-devel" >>>>>>>>>> Cc: "Eduardo Warszawski" , "Yeela >>>>>>>>>> Kaplan" >>>>>>>>>> , "Federico Simoncelli" >>>>>>>>>> , "Liron Aravot" >>>>>>>>>> Sent: Thursday, August 9, 2012 6:41:09 PM >>>>>>>>>> Subject: [Engine-devel] Serial Execution of Async Tasks >>>>>>>>>> >>>>>>>>>> Hi guys, >>>>>>>>>> >>>>>>>>>> As you may know the engine currently has the ability to fire >>>>>>>>>> an >>>>>>>>>> SPM >>>>>>>>>> task, and be asynchronously be "woken-up" when it ends. >>>>>>>>>> This is great, but we found the for the Live Storage >>>>>>>>>> Migration >>>>>>>>>> feature we need something a bit complex - the ability to >>>>>>>>>> have a >>>>>>>>>> series of async tasks in a single control flow. >>>>>>>>>> >>>>>>>>>> Here's my initial design for this, your comments and >>>>>>>>>> criticism >>>>>>>>>> would >>>>>>>>>> be welcome: >>>>>>>>>> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> Apart from the short explanation & flow , since this is a >>>>>>>>> detailed >>>>>>>>> design , I would add >>>>>>>>> 1) Class diagram >>>>>>>>> 2) Flow diagram >>>>>>> Good idea, I'll see if I can jimmy something up. >>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> +1, it would help understanding the flow. >>>>>>>> >>>>>>>> - It looks like you chose not re-use/extend the >>>>>>>> ExecutionHandler (the >>>>>>>> entity used for building the tasks view exposed to the users). >>>>>>>> It might be a good idea to keep the separation between the >>>>>>>> engine >>>>>>>> Jobs >>>>>>>> and the underlying vdsm tasks, but I want to make sure you are >>>>>>>> familiar >>>>>>>> with this mechanism and ruled it out with a reason. If this is >>>>>>>> the >>>>>>>> case >>>>>>>> please share why you decided not to use it. >>>>>>> As you said Jobs and Steps are pure engine entities - they can >>>>>>> contain no VDSM tasks, one VDSM task, or plausibly, in the >>>>>>> future, >>>>>>> several tasks. >>>>>>> Even /today/, AsyncTasks and Jobs/Steps are two different kinds >>>>>>> of >>>>>>> animals - I don't see any added value in mixing them together. >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> - how does this design survives a jboss restart? Can you >>>>>>>> please a >>>>>>>> section in the wiki to explain that. >>>>>>> Basically, the way as a Command does today - the task is saved >>>>>>> with >>>>>>> the executionIndex, and continues when the command is woken up. >>>>>>> I'll clarify this point in the wiki. >>>>>>> >>>>>>>> >>>>>>>> -successful execution - >>>>>>>> * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? >>>>>>> This is the new suggested format of executeCommand(). I'll >>>>>>> clarify >>>>>>> this too. >>>>>>> >>>>>>>> * If the second task is an HSM command (vs. SPM command), I >>>>>>>> think you >>>>>>>> should explain in the design how to handle such flows as well. >>>>>>> HSM commands do not create AsyncTasks, as they do today - I >>>>>>> will >>>>>>> clarify this. >>>>>>> >>>>>>>> * Why do we need before task? can you give a concrete example >>>>>>>> of what >>>>>>>> would you do in such a method. >>>>>>> Basically, /today/, command look like this: >>>>>>> executeCommand() { >>>>>>> doStuffInTheDB(); >>>>>>> runVdsCommand(someCommand); >>>>>>> } >>>>>>> >>>>>>> endSuccessfully() { >>>>>>> doMoreStuffInTheDB(); >>>>>>> } >>>>>>> >>>>>>> endWithFailure() { >>>>>>> doMoreStuffForFailureInTheDB(); >>>>>>> } >>>>>>> >>>>>>> In the new design, the entire doStuffInTheDB() should be moved >>>>>>> to a >>>>>>> breforeTask of the (only) SPMAsyncTaskHandler. >>>>>>> >>>>>>>> >>>>>>>> - I see you added SPMAsyncTaskHandler, any reason not to use >>>>>>>> SPMAsyncTasK to manage it own life-cycle? >>>>>>> Conserving today's design - The SPMAsyncTaskHandler is the >>>>>>> place to >>>>>>> add additional, non-SPM, logic around the SPM task execution, >>>>>>> like >>>>>>> CommandBase allows today. >>>>>>> >>>>>>>> >>>>>>>> - In the life-cycle managed by the SPMAsyncTaskHandler there >>>>>>>> is a >>>>>>>> step >>>>>>>> 'createTask - how to create the async task' can you please >>>>>>>> elaborate >>>>>>>> what are the options. >>>>>>> new [any type of async task] >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Livnat >>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -Allon >>>>>>>>>> _______________________________________________ >>>>>>>>>> Engine-devel mailing list >>>>>>>>>> Engine-devel at ovirt.org >>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Engine-devel mailing list >>>>>>>>> Engine-devel at ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> Engine-devel mailing list >>>>>>> Engine-devel at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> >> From vszocs at redhat.com Thu Aug 23 12:14:17 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Thu, 23 Aug 2012 08:14:17 -0400 (EDT) Subject: [Engine-devel] UI Plugins configuration In-Reply-To: Message-ID: <1711114539.12944839.1345724057853.JavaMail.root@redhat.com> Hi Chris, thanks for taking the time to make this patch, these are some excellent ideas! (CC'ing engine-devel so that we can discuss this with other guys as well) First of all, I really like the way you designed plugin source page URLs (going through PluginSourcePageServlet ), e.g. "/webadmin/webadmin/plugin//.html", plus the concept of "path" JSON attribute. WebadminDynamicHostingServlet loads and caches all plugin definitions ( *.json files), and directly embeds them into WebAdmin host page as pluginDefinitions JavaScript object. I'm assuming that pluginDefinitions object will now look like this: var pluginDefinitions = { "test": { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c":3} } } Originally, the pluginDefinitions object looked like this: var pluginDefinitions = { "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple pluginName -> pluginSourcePageUrl mappings } This is because PluginManager (WebAdmin) only needs pluginName ("name") and pluginSourcePageUrl ("url") during startup, when creating plugin iframe. But this can be changed :) Plugin "version" makes sense, plus the plugin configuration object ("config") can be useful directly on the client. Let me explain: Originally, plugin configuration was supposed to be passed to actual plugin code (through immediately-invoked-function-expression, or IIFE), just like this: (function (pluginApi, pluginConfig) { // JavaScript IIFE // ... actual plugin code ... })( parent.pluginApi, /* reference to global pluginApi object */ {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript object */ ); The whole purpose of PluginSourcePageServlet was to "wrap" actual plugin code into HTML, so that users don't need to write HTML pages for their plugins manually. PluginSourcePageServlet would handle any plugin dependencies (placed into HTML head), with actual plugin code being wrapped into IIFE, as shown above. Plugin configuration was meant to be stored in a separate file, e.g. -config.json , so that users could change the default plugin configuration to suit their needs. Inspired by your patch, rather than reading/embedding plugin configuration when serving plugin HTML page ( PluginSourcePageServlet ), it's even better to have the plugin configuration embedded directly into WebAdmin host page, along with introducing new pluginApi function to retrieve the plugin configuration object. Based on this, I suggest following modifications to the original concept: - modify original pluginDefinitions structure, from pluginName -> pluginSourcePageUrl , to pluginName -> pluginDefObject - pluginDefObject is basically a subset of physical plugin definition ( test.json , see below), suitable for use on the client - add following attributes to pluginDefObject : version , url , config * note #1: name is not needed, since it's already the key of pluginName -> pluginDefObject mapping * note #2: path is not needed on the client (more on this below) - introduce pluginApi.config(pluginName) function for plugins to retrieve their configuration object, and remove pluginConfig parameter from main IIFE (as shown above) [a] Physical plugin definition file (JSON) might be located at oVirt "DataDir", e.g. /usr/share/ovirt-engine/ui-plugins/test.json , for example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/start.html", "path": "/tmp", "config": "test-config.json" } [b] Plugin configuration file (JSON) might be located at oVirt "ConfigDir", e.g. /etc/ovirt-engine/ui-plugins/test-config.json , for example: { "a":1, "b":2, "c":3 } [c] Finally, plugin static resources (plugin source page, actual plugin code, plugin dependencies, CSS/images, etc.) would be located at /tmp (as shown in [a]), for example: /tmp/start.html -> plugin source page, used to load actual plugin code /tmp/test.js -> actual plugin code /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency For example: "/webadmin/webadmin/plugin/test/start.html" will be mapped to /tmp/start.html "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to /tmp/deps/jquery-min.js This approach has some pros and cons: (+) plugin static resources can be served through PluginSourcePageServlet (pretty much like oVirt documentation resources, served through oVirt Engine root war's FileServlet ) (+) plugin author has complete control over plugin source page (-) plugin author actually needs to write plugin source page Overall, I think this approach is better than the previous one (where PluginSourcePageServlet took care of rendering plugin source page, but sacrificed some flexibility). By the way, here's what would happen behind the scenes: 1. user requests WebAdmin host page, WebadminDynamicHostingServlet loads and caches all plugin definitions [a] + plugin configurations [b] and constructs/embeds appropriate pluginDefinitions JavaScript object 2. during WebAdmin startup, PluginManager registers the plugin (name/version/url/config), and creates/attaches the iframe to fetch plugin source page ansynchronously 3. PluginSourcePageServlet handles plugin source page request, resolves the correct path [c] and just streams the file content back to client > 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. Sounds good, we can implement these later on :) > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. Yes, but we can defend against these, restricting access only to plugin's "path" and its sub-directories. > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? I suppose you mean plugin definition files [a], cannot tell for sure, but we can change this anytime :) Chris, please let me know what you think, and again - many thanks for sending the patch! Regards, Vojtech ----- Original Message ----- From: "Chris Frantz" To: vszocs at redhat.com Sent: Wednesday, August 22, 2012 7:56:45 PM Subject: UI Plugins configuration Vojtech, I decided to work on making the plugin patch a bit more configurable, following some of the ideas expressed by Itamar and others in the meeting yesterday. The attached patch is a simple first-attempt. Plugin configurations are stored in /usr/share/ovirt-engine/ui-plugins/*.json. Example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c": 3} } The engine reads all of the *.json files in that directory to build the list of known plugins and gives that list to the webadmin. When webadmin loads a plugin, it requests the URL given in the plugin config file. The "plugin" URL is mapped to PluginSourcePage, which will translate the first part of the path ("test") into whatever path is stored in pluginConfig ("/tmp") in this case, and then serve the static file (e.g. "/tmp/foo.html"). I didn't use the renderPluginSourcePage() method in favor of just serving a static file, but I have no strong opinion on the matter. However, a plugin may want to store static resources at "path" and have the engine serve those resources. By just serving files through PluginSourcePage, we don't need any other servlets to provide those resources. There is still a bit of work to do: 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. 3. Is /usr/share/ovirt-engine the right place for the plugin config files? Let me know what you think, --Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From George.Costea at netapp.com Thu Aug 23 13:09:05 2012 From: George.Costea at netapp.com (Costea, George) Date: Thu, 23 Aug 2012 13:09:05 +0000 Subject: [Engine-devel] UI Plugins configuration In-Reply-To: <1711114539.12944839.1345724057853.JavaMail.root@redhat.com> References: <1711114539.12944839.1345724057853.JavaMail.root@redhat.com> Message-ID: <6C8AC8C50E170C4E9B44D47B39B24A480931C86F@SACEXCMBX04-PRD.hq.netapp.com> Thanks Chris and Vojtech for continuing this discussion. I think I?m missing the link between providing the plugin definition file and defining the plugins. If I want to add 3 main tabs and 6 context menus, do I provide 9 plugin definitions? Or do I provide 1 plugin definition with multiple ?urls? where each one points to a distinct path? If ?url? is configured to point to an external application server hosting my plugin, what is the intent of ?path?? For example, if I configure ?url? to point to ?https://10.10.10.10/myplugin/entrypoint.html? then presumably the application server will render the page it needs as a main tab or context menu. It would have no need for ?path? since all dependencies would be resolved by the application server. George From: engine-devel-bounces at ovirt.org [mailto:engine-devel-bounces at ovirt.org] On Behalf Of Vojtech Szocs Sent: Thursday, August 23, 2012 8:14 AM To: Chris Frantz Cc: engine-devel Subject: Re: [Engine-devel] UI Plugins configuration Hi Chris, thanks for taking the time to make this patch, these are some excellent ideas! (CC'ing engine-devel so that we can discuss this with other guys as well) First of all, I really like the way you designed plugin source page URLs (going through PluginSourcePageServlet), e.g. "/webadmin/webadmin/plugin//.html", plus the concept of "path" JSON attribute. WebadminDynamicHostingServlet loads and caches all plugin definitions (*.json files), and directly embeds them into WebAdmin host page as pluginDefinitions JavaScript object. I'm assuming that pluginDefinitions object will now look like this: var pluginDefinitions = { "test": { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c":3} } } Originally, the pluginDefinitions object looked like this: var pluginDefinitions = { "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple pluginName -> pluginSourcePageUrl mappings } This is because PluginManager (WebAdmin) only needs pluginName ("name") and pluginSourcePageUrl ("url") during startup, when creating plugin iframe. But this can be changed :) Plugin "version" makes sense, plus the plugin configuration object ("config") can be useful directly on the client. Let me explain: Originally, plugin configuration was supposed to be passed to actual plugin code (through immediately-invoked-function-expression, or IIFE), just like this: (function (pluginApi, pluginConfig) { // JavaScript IIFE // ... actual plugin code ... })( parent.pluginApi, /* reference to global pluginApi object */ {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript object */ ); The whole purpose of PluginSourcePageServlet was to "wrap" actual plugin code into HTML, so that users don't need to write HTML pages for their plugins manually. PluginSourcePageServlet would handle any plugin dependencies (placed into HTML head), with actual plugin code being wrapped into IIFE, as shown above. Plugin configuration was meant to be stored in a separate file, e.g. -config.json, so that users could change the default plugin configuration to suit their needs. Inspired by your patch, rather than reading/embedding plugin configuration when serving plugin HTML page (PluginSourcePageServlet), it's even better to have the plugin configuration embedded directly into WebAdmin host page, along with introducing new pluginApi function to retrieve the plugin configuration object. Based on this, I suggest following modifications to the original concept: - modify original pluginDefinitions structure, from pluginName -> pluginSourcePageUrl, to pluginName -> pluginDefObject - pluginDefObject is basically a subset of physical plugin definition (test.json, see below), suitable for use on the client - add following attributes to pluginDefObject: version, url, config * note #1: name is not needed, since it's already the key of pluginName -> pluginDefObject mapping * note #2: path is not needed on the client (more on this below) - introduce pluginApi.config(pluginName) function for plugins to retrieve their configuration object, and remove pluginConfig parameter from main IIFE (as shown above) [a] Physical plugin definition file (JSON) might be located at oVirt "DataDir", e.g. /usr/share/ovirt-engine/ui-plugins/test.json, for example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/start.html", "path": "/tmp", "config": "test-config.json" } [b] Plugin configuration file (JSON) might be located at oVirt "ConfigDir", e.g. /etc/ovirt-engine/ui-plugins/test-config.json, for example: { "a":1, "b":2, "c":3 } [c] Finally, plugin static resources (plugin source page, actual plugin code, plugin dependencies, CSS/images, etc.) would be located at /tmp (as shown in [a]), for example: /tmp/start.html -> plugin source page, used to load actual plugin code /tmp/test.js -> actual plugin code /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency For example: "/webadmin/webadmin/plugin/test/start.html" will be mapped to /tmp/start.html "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to /tmp/deps/jquery-min.js This approach has some pros and cons: (+) plugin static resources can be served through PluginSourcePageServlet (pretty much like oVirt documentation resources, served through oVirt Engine root war's FileServlet) (+) plugin author has complete control over plugin source page (-) plugin author actually needs to write plugin source page Overall, I think this approach is better than the previous one (where PluginSourcePageServlet took care of rendering plugin source page, but sacrificed some flexibility). By the way, here's what would happen behind the scenes: 1. user requests WebAdmin host page, WebadminDynamicHostingServlet loads and caches all plugin definitions [a] + plugin configurations [b] and constructs/embeds appropriate pluginDefinitions JavaScript object 2. during WebAdmin startup, PluginManager registers the plugin (name/version/url/config), and creates/attaches the iframe to fetch plugin source page ansynchronously 3. PluginSourcePageServlet handles plugin source page request, resolves the correct path [c] and just streams the file content back to client > 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. Sounds good, we can implement these later on :) > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. Yes, but we can defend against these, restricting access only to plugin's "path" and its sub-directories. > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? I suppose you mean plugin definition files [a], cannot tell for sure, but we can change this anytime :) Chris, please let me know what you think, and again - many thanks for sending the patch! Regards, Vojtech ________________________________ From: "Chris Frantz" > To: vszocs at redhat.com Sent: Wednesday, August 22, 2012 7:56:45 PM Subject: UI Plugins configuration Vojtech, I decided to work on making the plugin patch a bit more configurable, following some of the ideas expressed by Itamar and others in the meeting yesterday. The attached patch is a simple first-attempt. Plugin configurations are stored in /usr/share/ovirt-engine/ui-plugins/*.json. Example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c": 3} } The engine reads all of the *.json files in that directory to build the list of known plugins and gives that list to the webadmin. When webadmin loads a plugin, it requests the URL given in the plugin config file. The "plugin" URL is mapped to PluginSourcePage, which will translate the first part of the path ("test") into whatever path is stored in pluginConfig ("/tmp") in this case, and then serve the static file (e.g. "/tmp/foo.html"). I didn't use the renderPluginSourcePage() method in favor of just serving a static file, but I have no strong opinion on the matter. However, a plugin may want to store static resources at "path" and have the engine serve those resources. By just serving files through PluginSourcePage, we don't need any other servlets to provide those resources. There is still a bit of work to do: 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. 3. Is /usr/share/ovirt-engine the right place for the plugin config files? Let me know what you think, --Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlipchuk at redhat.com Thu Aug 23 14:24:27 2012 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Thu, 23 Aug 2012 17:24:27 +0300 Subject: [Engine-devel] Serial Execution of Async Tasks In-Reply-To: <658531399.8916617.1345627283559.JavaMail.root@redhat.com> References: <658531399.8916617.1345627283559.JavaMail.root@redhat.com> Message-ID: <50363D1B.8030809@redhat.com> On 08/22/2012 12:21 PM, Allon Mureinik wrote: > > > ----- Original Message ----- >> From: "Maor Lipchuk" >> To: "Itamar Heim" >> Cc: "Allon Mureinik" , "engine-devel" , "Eduardo Warszawski" >> , "Yeela Kaplan" , "Federico Simoncelli" , "Liron >> Aravot" >> Sent: Thursday, August 16, 2012 8:27:33 PM >> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >> >> On 08/16/2012 06:51 PM, Itamar Heim wrote: >>> On 08/16/2012 03:21 PM, Maor Lipchuk wrote: >>>> On 08/14/2012 05:23 PM, Itamar Heim wrote: >>>>> On 08/14/2012 02:35 PM, Maor Lipchuk wrote: >>>>>> How should we handle the auditLogMessages? >>>>>> Basically when a command ends it print an audit log. >>>>>> >>>>>> When we will start to use multiple tasks I assume user might get >>>>>> a bulk >>>>>> of audit logs which are actually related to the same action >>>>>> (when we >>>>>> fail for example the process will be create and delete). >>>>>> It might be a bit confusing for the user not to know which >>>>>> action is >>>>>> related to the operation >>>>> >>>>> I thought audit log gets written regardless of the transaction, >>>>> so audit >>>>> log appears "as they happen"? >>>> That is correct, >>>> The issue that I was referring to, is that now, with multiple >>>> tasks >>>> execution, we will get many audit logs which related to the same >>>> transaction but each one will be printed at a different time. >>>> >>>> I think that it might be confusing for the user to relate each >>>> audit log >>>> to the operation that was started. >>>> >>>> >>>> For example : >>>> User run an action that executes some tasks of create volumes, >>>> then the engine encounter a problem, and decide to rollback the >>>> operation and delete the volumes, in that case the engine will >>>> execute a >>>> delete task for the volumes, so user might see that delete of the >>>> volume >>>> (for example a snapshot) was initiated. >>>> Since those are asynchronous tasks, audit log will be printed in a >>>> different period of time and a user might not be aware what is the >>>> relation of those specific delete. >>> >>> async doesn't mean we don't print an audit log when we start it, >>> and >>> when we end it. >>> so user would get the starting audit log when the task failed in >>> your >>> example. of course this may happen 2 hours after they started the >>> task. >>> as long as we can correlate the audit log to be part of the same >>> "job", >>> i don't see the issue. >> yes, but if I understood correctly, we don't want to correlate the >> multiple tasks with the execution handler (which AFAIK handle the >> correlation id). > I actually didn't mention this, but I don't see why not. > What's I'd probably like to have is a log with "Correlation ID xyzabc, step #3 starting/executing/ending" > Does this make any sense? Sound's great to me. >> >> I assume this issue can be addressed in a future phase, >> but maybe it is an issue that might worth to think about. >>> >>>>> >>>>>> >>>>>> Maybe we will need to use the correlation id of the Execution >>>>>> handler as >>>>>> Eli suggested or maybe add new states at CommandActionState? >>>>>> >>>>>> On 08/14/2012 02:10 PM, Allon Mureinik wrote: >>>>>>> Hi guys, >>>>>>> >>>>>>> Thanks for all your comments! >>>>>>> The correct response for many these points is to update the >>>>>>> wiki. >>>>>>> I'm enclosing here the quick-and-dirty replies just to keep >>>>>>> this >>>>>>> thread alive, and will update the wiki shortly. >>>>>>> >>>>>>> See inline. >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> From: "Livnat Peer" >>>>>>>> To: "Allon Mureinik" >>>>>>>> Cc: "Eli Mesika" , "Liron Aravot" >>>>>>>> , "Federico Simoncelli" >>>>>>>> , "engine-devel" >>>>>>>> , >>>>>>>> "Eduardo Warszawski" , "Yeela >>>>>>>> Kaplan" >>>>>>>> Sent: Sunday, August 12, 2012 9:39:23 AM >>>>>>>> Subject: Re: [Engine-devel] Serial Execution of Async Tasks >>>>>>>> >>>>>>>> On 10/08/12 03:40, Eli Mesika wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> ----- Original Message ----- >>>>>>>>>> From: "Allon Mureinik" >>>>>>>>>> To: "engine-devel" >>>>>>>>>> Cc: "Eduardo Warszawski" , "Yeela >>>>>>>>>> Kaplan" >>>>>>>>>> , "Federico Simoncelli" >>>>>>>>>> , "Liron Aravot" >>>>>>>>>> Sent: Thursday, August 9, 2012 6:41:09 PM >>>>>>>>>> Subject: [Engine-devel] Serial Execution of Async Tasks >>>>>>>>>> >>>>>>>>>> Hi guys, >>>>>>>>>> >>>>>>>>>> As you may know the engine currently has the ability to fire >>>>>>>>>> an >>>>>>>>>> SPM >>>>>>>>>> task, and be asynchronously be "woken-up" when it ends. >>>>>>>>>> This is great, but we found the for the Live Storage >>>>>>>>>> Migration >>>>>>>>>> feature we need something a bit complex - the ability to >>>>>>>>>> have a >>>>>>>>>> series of async tasks in a single control flow. >>>>>>>>>> >>>>>>>>>> Here's my initial design for this, your comments and >>>>>>>>>> criticism >>>>>>>>>> would >>>>>>>>>> be welcome: >>>>>>>>>> http://wiki.ovirt.org/wiki/Features/Serial_Execution_of_Asynchronous_Tasks_Detailed_Design >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> Apart from the short explanation & flow , since this is a >>>>>>>>> detailed >>>>>>>>> design , I would add >>>>>>>>> 1) Class diagram >>>>>>>>> 2) Flow diagram >>>>>>> Good idea, I'll see if I can jimmy something up. >>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> +1, it would help understanding the flow. >>>>>>>> >>>>>>>> - It looks like you chose not re-use/extend the >>>>>>>> ExecutionHandler (the >>>>>>>> entity used for building the tasks view exposed to the users). >>>>>>>> It might be a good idea to keep the separation between the >>>>>>>> engine >>>>>>>> Jobs >>>>>>>> and the underlying vdsm tasks, but I want to make sure you are >>>>>>>> familiar >>>>>>>> with this mechanism and ruled it out with a reason. If this is >>>>>>>> the >>>>>>>> case >>>>>>>> please share why you decided not to use it. >>>>>>> As you said Jobs and Steps are pure engine entities - they can >>>>>>> contain no VDSM tasks, one VDSM task, or plausibly, in the >>>>>>> future, >>>>>>> several tasks. >>>>>>> Even /today/, AsyncTasks and Jobs/Steps are two different kinds >>>>>>> of >>>>>>> animals - I don't see any added value in mixing them together. >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> - how does this design survives a jboss restart? Can you >>>>>>>> please a >>>>>>>> section in the wiki to explain that. >>>>>>> Basically, the way as a Command does today - the task is saved >>>>>>> with >>>>>>> the executionIndex, and continues when the command is woken up. >>>>>>> I'll clarify this point in the wiki. >>>>>>> >>>>>>>> >>>>>>>> -successful execution - >>>>>>>> * "CommandBase iterates over its SPMAsyncTaskHandlers" - when? >>>>>>> This is the new suggested format of executeCommand(). I'll >>>>>>> clarify >>>>>>> this too. >>>>>>> >>>>>>>> * If the second task is an HSM command (vs. SPM command), I >>>>>>>> think you >>>>>>>> should explain in the design how to handle such flows as well. >>>>>>> HSM commands do not create AsyncTasks, as they do today - I >>>>>>> will >>>>>>> clarify this. >>>>>>> >>>>>>>> * Why do we need before task? can you give a concrete example >>>>>>>> of what >>>>>>>> would you do in such a method. >>>>>>> Basically, /today/, command look like this: >>>>>>> executeCommand() { >>>>>>> doStuffInTheDB(); >>>>>>> runVdsCommand(someCommand); >>>>>>> } >>>>>>> >>>>>>> endSuccessfully() { >>>>>>> doMoreStuffInTheDB(); >>>>>>> } >>>>>>> >>>>>>> endWithFailure() { >>>>>>> doMoreStuffForFailureInTheDB(); >>>>>>> } >>>>>>> >>>>>>> In the new design, the entire doStuffInTheDB() should be moved >>>>>>> to a >>>>>>> breforeTask of the (only) SPMAsyncTaskHandler. >>>>>>> >>>>>>>> >>>>>>>> - I see you added SPMAsyncTaskHandler, any reason not to use >>>>>>>> SPMAsyncTasK to manage it own life-cycle? >>>>>>> Conserving today's design - The SPMAsyncTaskHandler is the >>>>>>> place to >>>>>>> add additional, non-SPM, logic around the SPM task execution, >>>>>>> like >>>>>>> CommandBase allows today. >>>>>>> >>>>>>>> >>>>>>>> - In the life-cycle managed by the SPMAsyncTaskHandler there >>>>>>>> is a >>>>>>>> step >>>>>>>> 'createTask - how to create the async task' can you please >>>>>>>> elaborate >>>>>>>> what are the options. >>>>>>> new [any type of async task] >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Livnat >>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -Allon >>>>>>>>>> _______________________________________________ >>>>>>>>>> Engine-devel mailing list >>>>>>>>>> Engine-devel at ovirt.org >>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Engine-devel mailing list >>>>>>>>> Engine-devel at ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> Engine-devel mailing list >>>>>>> Engine-devel at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> >> From Chris.Frantz at hp.com Thu Aug 23 15:12:02 2012 From: Chris.Frantz at hp.com (Frantz, Chris) Date: Thu, 23 Aug 2012 15:12:02 +0000 Subject: [Engine-devel] UI Plugins configuration In-Reply-To: <1711114539.12944839.1345724057853.JavaMail.root@redhat.com> References: <1711114539.12944839.1345724057853.JavaMail.root@redhat.com> Message-ID: Vojtech, Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional: { # Mandatory fields: name, enabled, version, url, apiversion, author, license # Name of the plugin "name": "test", # Whether or not plugin is enabed "enabled": true, # version of the plugin "version": "1.0", # How to load the plugin "url": "/webadmin/webadmin/plugin/test/start.html", # Which version of engine plugin is meant to work with "apiversion": "3.1.0", # Who wrote the plugin and how is it licensed? "author": "SuperBig Corporation", "license": "Proprietary", # Optional fields path, config # Where to locate plugin (if loaded by webadmin/plugin) "path": "/tmp", # Plugin configuration information (if any) "config": "test-config.json", } I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? Thanks, --Chris From: Vojtech Szocs [mailto:vszocs at redhat.com] Sent: Thursday, August 23, 2012 7:14 AM To: Frantz, Chris Cc: engine-devel Subject: Re: UI Plugins configuration Hi Chris, thanks for taking the time to make this patch, these are some excellent ideas! (CC'ing engine-devel so that we can discuss this with other guys as well) First of all, I really like the way you designed plugin source page URLs (going through PluginSourcePageServlet), e.g. "/webadmin/webadmin/plugin//.html", plus the concept of "path" JSON attribute. WebadminDynamicHostingServlet loads and caches all plugin definitions (*.json files), and directly embeds them into WebAdmin host page as pluginDefinitions JavaScript object. I'm assuming that pluginDefinitions object will now look like this: var pluginDefinitions = { "test": { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c":3} } } Originally, the pluginDefinitions object looked like this: var pluginDefinitions = { "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple pluginName -> pluginSourcePageUrl mappings } This is because PluginManager (WebAdmin) only needs pluginName ("name") and pluginSourcePageUrl ("url") during startup, when creating plugin iframe. But this can be changed :) Plugin "version" makes sense, plus the plugin configuration object ("config") can be useful directly on the client. Let me explain: Originally, plugin configuration was supposed to be passed to actual plugin code (through immediately-invoked-function-expression, or IIFE), just like this: (function (pluginApi, pluginConfig) { // JavaScript IIFE // ... actual plugin code ... })( parent.pluginApi, /* reference to global pluginApi object */ {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript object */ ); The whole purpose of PluginSourcePageServlet was to "wrap" actual plugin code into HTML, so that users don't need to write HTML pages for their plugins manually. PluginSourcePageServlet would handle any plugin dependencies (placed into HTML head), with actual plugin code being wrapped into IIFE, as shown above. Plugin configuration was meant to be stored in a separate file, e.g. -config.json, so that users could change the default plugin configuration to suit their needs. Inspired by your patch, rather than reading/embedding plugin configuration when serving plugin HTML page (PluginSourcePageServlet), it's even better to have the plugin configuration embedded directly into WebAdmin host page, along with introducing new pluginApi function to retrieve the plugin configuration object. Based on this, I suggest following modifications to the original concept: - modify original pluginDefinitions structure, from pluginName -> pluginSourcePageUrl, to pluginName -> pluginDefObject - pluginDefObject is basically a subset of physical plugin definition (test.json, see below), suitable for use on the client - add following attributes to pluginDefObject: version, url, config * note #1: name is not needed, since it's already the key of pluginName -> pluginDefObject mapping * note #2: path is not needed on the client (more on this below) - introduce pluginApi.config(pluginName) function for plugins to retrieve their configuration object, and remove pluginConfig parameter from main IIFE (as shown above) [a] Physical plugin definition file (JSON) might be located at oVirt "DataDir", e.g. /usr/share/ovirt-engine/ui-plugins/test.json, for example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/start.html", "path": "/tmp", "config": "test-config.json" } [b] Plugin configuration file (JSON) might be located at oVirt "ConfigDir", e.g. /etc/ovirt-engine/ui-plugins/test-config.json, for example: { "a":1, "b":2, "c":3 } [c] Finally, plugin static resources (plugin source page, actual plugin code, plugin dependencies, CSS/images, etc.) would be located at /tmp (as shown in [a]), for example: /tmp/start.html -> plugin source page, used to load actual plugin code /tmp/test.js -> actual plugin code /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency For example: "/webadmin/webadmin/plugin/test/start.html" will be mapped to /tmp/start.html "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to /tmp/deps/jquery-min.js This approach has some pros and cons: (+) plugin static resources can be served through PluginSourcePageServlet (pretty much like oVirt documentation resources, served through oVirt Engine root war's FileServlet) (+) plugin author has complete control over plugin source page (-) plugin author actually needs to write plugin source page Overall, I think this approach is better than the previous one (where PluginSourcePageServlet took care of rendering plugin source page, but sacrificed some flexibility). By the way, here's what would happen behind the scenes: 1. user requests WebAdmin host page, WebadminDynamicHostingServlet loads and caches all plugin definitions [a] + plugin configurations [b] and constructs/embeds appropriate pluginDefinitions JavaScript object 2. during WebAdmin startup, PluginManager registers the plugin (name/version/url/config), and creates/attaches the iframe to fetch plugin source page ansynchronously 3. PluginSourcePageServlet handles plugin source page request, resolves the correct path [c] and just streams the file content back to client > 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. Sounds good, we can implement these later on :) > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. Yes, but we can defend against these, restricting access only to plugin's "path" and its sub-directories. > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? I suppose you mean plugin definition files [a], cannot tell for sure, but we can change this anytime :) Chris, please let me know what you think, and again - many thanks for sending the patch! Regards, Vojtech ________________________________ From: "Chris Frantz" > To: vszocs at redhat.com Sent: Wednesday, August 22, 2012 7:56:45 PM Subject: UI Plugins configuration Vojtech, I decided to work on making the plugin patch a bit more configurable, following some of the ideas expressed by Itamar and others in the meeting yesterday. The attached patch is a simple first-attempt. Plugin configurations are stored in /usr/share/ovirt-engine/ui-plugins/*.json. Example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c": 3} } The engine reads all of the *.json files in that directory to build the list of known plugins and gives that list to the webadmin. When webadmin loads a plugin, it requests the URL given in the plugin config file. The "plugin" URL is mapped to PluginSourcePage, which will translate the first part of the path ("test") into whatever path is stored in pluginConfig ("/tmp") in this case, and then serve the static file (e.g. "/tmp/foo.html"). I didn't use the renderPluginSourcePage() method in favor of just serving a static file, but I have no strong opinion on the matter. However, a plugin may want to store static resources at "path" and have the engine serve those resources. By just serving files through PluginSourcePage, we don't need any other servlets to provide those resources. There is still a bit of work to do: 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. 3. Is /usr/share/ovirt-engine the right place for the plugin config files? Let me know what you think, --Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From iheim at redhat.com Thu Aug 23 15:26:52 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 23 Aug 2012 18:26:52 +0300 Subject: [Engine-devel] UI Plugins configuration In-Reply-To: References: <1711114539.12944839.1345724057853.JavaMail.root@redhat.com> Message-ID: <50364BBC.30408@redhat.com> On 08/23/2012 06:12 PM, Frantz, Chris wrote: > Vojtech, > > Your assumption about the structure of the pluginDefinitions object is > correct. It?s no longer a String->String mapping , but a String to > Object mapping. > > I liked the original IIFE approach, except that it seemed that having > additional static resources (jquery, images, html templates, etc) was > going to be more cumbersome. I don?t think having the plugin author > write a basic start.html is that big of a burden :). > > I agree that the plugin configuration was always going to be a resource > (probably a local file) that the end user could customize. I?m not sure > it I really needs to be separate from the plugin definition file > (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on > how complex the configuration is going to be and on some of the > implementation details surrounding the plugin definition file. if we expect a user to "touch" it (change url of external service, etc.) it must be under /etc as a config file. so we should separate what we exepct admin to change (config) and what is part of the plugin itself ("definition"?) which would reside where the code lives. > > In my patch, I simply used Jackson to parse the file into a tree of > JsonNodes. Should the plugin definition be a java object of some sort? > (please please please don?t make me learn about java beans?). I stuck > with the JsonNodes because Jackson makes them easy to work with and > they?re really easy to re-serialize back to json to give to the webadmin. > > We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The > definition and config files will difficult for end-users (or even > developers) to understand without comments. > > We need to formalize the structure of the plugin definition and decide > which fields are mandatory and which are optional: > > { > > # Mandatory fields: name, enabled, version, url, apiversion, author, > license > # Name of the plugin > > "name": "test", > > # Whether or not plugin is enabed > > "enabled": true, > > # version of the plugin > > "version": "1.0", > > # How to load the plugin > > "url": "/webadmin/webadmin/plugin/test/start.html", > > # Which version of engine plugin is meant to work with > > "apiversion": "3.1.0", > > # Who wrote the plugin and how is it licensed? > > "author": "SuperBig Corporation", > "license": "Proprietary", > > # Optional fields path, config > > # Where to locate plugin (if loaded by webadmin/plugin) > > "path": "/tmp", > > # Plugin configuration information (if any) > > "config": "test-config.json", > } > > I can work on the plugin Definition loader some more and make it enforce > mandatory/optional fields. I?ll also investigate the directory climbing > issue I mentioned in my previous mail. > > Also, I?m curious how things are going to work when the ?url? points to > a foreign resource as the plugin start page. I don?t think the plugin?s > iframe is going to be able to access parent.pluginApi. Perhaps there is > some aspect of CORS that I don?t understand? > > Thanks, > > --Chris > > *From:*Vojtech Szocs [mailto:vszocs at redhat.com] > *Sent:* Thursday, August 23, 2012 7:14 AM > *To:* Frantz, Chris > *Cc:* engine-devel > *Subject:* Re: UI Plugins configuration > > Hi Chris, > > thanks for taking the time to make this patch, these are some excellent > ideas! (CC'ing engine-devel so that we can discuss this with other guys > as well) > > First of all, I really like the way you designed plugin source page URLs > (going through /PluginSourcePageServlet/), e.g. > "/webadmin/webadmin/plugin//.html", plus > the concept of "path" JSON attribute. > > /WebadminDynamicHostingServlet/ loads and caches all plugin definitions > (/*.json/ files), and directly embeds them into WebAdmin host page as > /pluginDefinitions/ JavaScript object. I'm assuming that > /pluginDefinitions/ object will now look like this: > > var pluginDefinitions = { > "test": { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/foo.html", > "path": "/tmp", > "config": {"a":1, "b":2, "c":3} > } > } > > Originally, the /pluginDefinitions/ object looked like this: > > var pluginDefinitions = { > "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple > pluginName -> pluginSourcePageUrl mappings > } > > This is because PluginManager (WebAdmin) only needs /pluginName/ > ("name") and /pluginSourcePageUrl/ ("url") during startup, when creating > plugin iframe. But this can be changed :) > > Plugin "version" makes sense, plus the plugin configuration object > ("config") can be useful directly on the client. Let me explain: > > Originally, plugin configuration was supposed to be passed to actual > plugin code (through immediately-invoked-function-expression, or IIFE), > just like this: > > (function (pluginApi, pluginConfig) { // JavaScript IIFE > // ... actual plugin code ... > })( > parent.pluginApi, /* reference to global pluginApi object */ > {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript > object */ > ); > > The whole purpose of /PluginSourcePageServlet/ was to "wrap" actual > plugin code into HTML, so that users don't need to write HTML pages for > their plugins manually. /PluginSourcePageServlet/ would handle any > plugin dependencies (placed into HTML head), with actual plugin code > being wrapped into IIFE, as shown above. Plugin configuration was meant > to be stored in a separate file, e.g. /-config.json/, so > that users could change the default plugin configuration to suit their > needs. > > Inspired by your patch, rather than reading/embedding plugin > configuration when serving plugin HTML page (/PluginSourcePageServlet/), > it's even better to have the plugin configuration embedded directly into > WebAdmin host page, along with introducing new /pluginApi/ function to > retrieve the plugin configuration object. > > Based on this, I suggest following modifications to the original concept: > > - modify original /pluginDefinitions/ structure, from /pluginName -> > pluginSourcePageUrl/, to /pluginName -> pluginDefObject/ > - /pluginDefObject/ is basically a subset of physical plugin definition > (/test.json/, see below), suitable for use on the client > - add following attributes to /pluginDefObject/: /version/, /url/, /config/ > * note #1: /name/ is not needed, since it's already the key of > /pluginName -> pluginDefObject/ mapping > * note #2: /path/ is not needed on the client (more on this below) > - introduce /pluginApi.config(pluginName)/ function for plugins to > retrieve their configuration object, and remove /pluginConfig/ parameter > from main IIFE (as shown above) > > [a] Physical plugin definition file (JSON) might be located at oVirt > "DataDir", e.g. //usr/share/ovirt-engine/ui-plugins/test.json/, for example: > > { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/start.html", > "path": "/tmp", > "config": "test-config.json" > } > > [b] Plugin configuration file (JSON) might be located at oVirt > "ConfigDir", e.g. //etc/ovirt-engine/ui-plugins/test-config.json/, for > example: > > { > "a":1, "b":2, "c":3 > } > > [c] Finally, plugin static resources (plugin source page, actual plugin > code, plugin dependencies, CSS/images, etc.) would be located at //tmp/ > (as shown in [a]), for example: > > /tmp/start.html -> plugin source page, used to load actual plugin code > /tmp/test.js -> actual plugin code > /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency > > For example: > "/webadmin/webadmin/plugin/test/start.html" will be mapped to > //tmp/start.html/ > "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to > //tmp/deps/jquery-min.js/ > > This approach has some pros and cons: > (+) plugin static resources can be served through > /PluginSourcePageServlet/ (pretty much like oVirt documentation > resources, served through oVirt Engine root war's /FileServlet/) > (+) plugin author has complete control over plugin source page > (-) plugin author actually needs to write plugin source page > > Overall, I think this approach is better than the previous one (where > /PluginSourcePageServlet/ took care of rendering plugin source page, but > sacrificed some flexibility). > > By the way, here's what would happen behind the scenes: > > 1. user requests WebAdmin host page, /WebadminDynamicHostingServlet/ > loads and caches all plugin definitions [a] + plugin configurations > [b] and constructs/embeds appropriate /pluginDefinitions/ JavaScript > object > 2. during WebAdmin startup, /PluginManager/ registers the plugin > (name/version/url/config), and creates/attaches the iframe to fetch > plugin source page ansynchronously > 3. /PluginSourcePageServlet/ handles plugin source page request, > resolves the correct path [c] and just streams the file content back > to client > >> 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. > > Sounds good, we can implement these later on :) > >> 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. > > Yes, but we can defend against these, restricting access only to > plugin's "path" and its sub-directories. > >> 3. Is /usr/share/ovirt-engine the right place for the plugin config files? > > I suppose you mean plugin definition files [a], cannot tell for sure, > but we can change this anytime :) > > > Chris, please let me know what you think, and again - many thanks for > sending the patch! > > > Regards, > Vojtech > > ------------------------------------------------------------------------ > > > From: "Chris Frantz" > > To: vszocs at redhat.com > Sent: Wednesday, August 22, 2012 7:56:45 PM > Subject: UI Plugins configuration > > Vojtech, > > I decided to work on making the plugin patch a bit more configurable, > following some of the ideas expressed by Itamar and others in the > meeting yesterday. The attached patch is a simple first-attempt. > > Plugin configurations are stored in > /usr/share/ovirt-engine/ui-plugins/*.json. > > Example: > { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/foo.html", > "path": "/tmp", > "config": {"a":1, "b":2, "c": 3} > } > > The engine reads all of the *.json files in that directory to build the > list of known plugins and gives that list to the webadmin. > > When webadmin loads a plugin, it requests the URL given in the plugin > config file. The "plugin" URL is mapped to PluginSourcePage, which will > translate the first part of the path ("test") into whatever path is > stored in pluginConfig ("/tmp") in this case, and then serve the static > file (e.g. "/tmp/foo.html"). > > I didn't use the renderPluginSourcePage() method in favor of just > serving a static file, but I have no strong opinion on the matter. > However, a plugin may want to store static resources at "path" and > have the engine serve those resources. By just serving files through > PluginSourcePage, we don't need any other servlets to provide those > resources. > > There is still a bit of work to do: > > 1. The plugin configuration files should probably have an "enabled" > field and an "apiVersion" field that should be examined to determine > whether or not to use the plugin. > > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable > to directory climbing attacks. > > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? > > Let me know what you think, > --Chris > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From jhernand at redhat.com Fri Aug 24 11:27:23 2012 From: jhernand at redhat.com (Juan Hernandez) Date: Fri, 24 Aug 2012 13:27:23 +0200 Subject: [Engine-devel] Change in default ports: 6090 and 6091 Message-ID: <5037651B.5030105@redhat.com> Hello, Some time ago I requested feedback about a change in the port numbers used by the engine, the details are in this thread: http://lists.ovirt.org/pipermail/engine-devel/2012-July/002089.html I didn't receive bad feedback, neither in the thread or in the proposed change in gerrit: http://gerrit.ovirt.org/6348 I tested it as much as I can, so I think it is ready for merge. Unless someone has an strong reason to not do this change I will merge it on Monday 27 afternoon. Remember that with this change the default ports used by the engine will be 6090 for HTTP and 6091 for HTTPS. This will affect RPM installations and also development environments, so next time you do "mvn -Psetup" your local installation of the application server will start using port 6090 instead of 8080. Also take into account that if you are using Apache as a proxy it will continue using ports 80 and 443 by default, no change there. If you have objections please let me know. Regards, Juan Hern?ndez -- Direcci?n Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3?D, 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid ? C.I.F. B82657941 - Red Hat S.L. From iheim at redhat.com Fri Aug 24 11:42:01 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 24 Aug 2012 14:42:01 +0300 Subject: [Engine-devel] Change in default ports: 6090 and 6091 In-Reply-To: <5037651B.5030105@redhat.com> References: <5037651B.5030105@redhat.com> Message-ID: <50376889.4070601@redhat.com> On 08/24/2012 02:27 PM, Juan Hernandez wrote: > Hello, > > Some time ago I requested feedback about a change in the port numbers > used by the engine, the details are in this thread: > > http://lists.ovirt.org/pipermail/engine-devel/2012-July/002089.html > > I didn't receive bad feedback, neither in the thread or in the proposed > change in gerrit: > > http://gerrit.ovirt.org/6348 > > I tested it as much as I can, so I think it is ready for merge. Unless > someone has an strong reason to not do this change I will merge it on > Monday 27 afternoon. > > Remember that with this change the default ports used by the engine will > be 6090 for HTTP and 6091 for HTTPS. This will affect RPM installations danken - what is the range of spice ports used by vdsm (iirc they start with 5900) and what are the secure ports? juan - maybe something that close to spice/vnc ports to avoid collission on an all-in-one. > and also development environments, so next time you do "mvn -Psetup" > your local installation of the application server will start using port > 6090 instead of 8080. > > Also take into account that if you are using Apache as a proxy it will > continue using ports 80 and 443 by default, no change there. > > If you have objections please let me know. > > Regards, > Juan Hern?ndez > From jhernand at redhat.com Fri Aug 24 12:45:29 2012 From: jhernand at redhat.com (Juan Hernandez) Date: Fri, 24 Aug 2012 14:45:29 +0200 Subject: [Engine-devel] Change in default ports: 6090 and 6091 In-Reply-To: <50376889.4070601@redhat.com> References: <5037651B.5030105@redhat.com> <50376889.4070601@redhat.com> Message-ID: <50377769.6050406@redhat.com> On 08/24/2012 01:42 PM, Itamar Heim wrote: > On 08/24/2012 02:27 PM, Juan Hernandez wrote: >> Hello, >> >> Some time ago I requested feedback about a change in the port numbers >> used by the engine, the details are in this thread: >> >> http://lists.ovirt.org/pipermail/engine-devel/2012-July/002089.html >> >> I didn't receive bad feedback, neither in the thread or in the proposed >> change in gerrit: >> >> http://gerrit.ovirt.org/6348 >> >> I tested it as much as I can, so I think it is ready for merge. Unless >> someone has an strong reason to not do this change I will merge it on >> Monday 27 afternoon. >> >> Remember that with this change the default ports used by the engine will >> be 6090 for HTTP and 6091 for HTTPS. This will affect RPM installations > > danken - what is the range of spice ports used by vdsm (iirc they start > with 5900) and what are the secure ports? > juan - maybe something that close to spice/vnc ports to avoid collission > on an all-in-one. Yes, it is be good idea to stay away from those spice and VNC port ranges. What about 8700 and 8701? They are also available according to /etc/services and IANA [1]. >> and also development environments, so next time you do "mvn -Psetup" >> your local installation of the application server will start using port >> 6090 instead of 8080. >> >> Also take into account that if you are using Apache as a proxy it will >> continue using ports 80 and 443 by default, no change there. >> >> If you have objections please let me know. >> >> Regards, >> Juan Hern?ndez [1] http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xml -- Direcci?n Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3?D, 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid ? C.I.F. B82657941 - Red Hat S.L. From iheim at redhat.com Sun Aug 26 07:36:26 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 26 Aug 2012 10:36:26 +0300 Subject: [Engine-devel] Change in default ports: 6090 and 6091 In-Reply-To: <50377769.6050406@redhat.com> References: <5037651B.5030105@redhat.com> <50376889.4070601@redhat.com> <50377769.6050406@redhat.com> Message-ID: <5039D1FA.7080005@redhat.com> On 08/24/2012 03:45 PM, Juan Hernandez wrote: > On 08/24/2012 01:42 PM, Itamar Heim wrote: >> On 08/24/2012 02:27 PM, Juan Hernandez wrote: >>> Hello, >>> >>> Some time ago I requested feedback about a change in the port numbers >>> used by the engine, the details are in this thread: >>> >>> http://lists.ovirt.org/pipermail/engine-devel/2012-July/002089.html >>> >>> I didn't receive bad feedback, neither in the thread or in the proposed >>> change in gerrit: >>> >>> http://gerrit.ovirt.org/6348 >>> >>> I tested it as much as I can, so I think it is ready for merge. Unless >>> someone has an strong reason to not do this change I will merge it on >>> Monday 27 afternoon. >>> >>> Remember that with this change the default ports used by the engine will >>> be 6090 for HTTP and 6091 for HTTPS. This will affect RPM installations >> >> danken - what is the range of spice ports used by vdsm (iirc they start >> with 5900) and what are the secure ports? >> juan - maybe something that close to spice/vnc ports to avoid collission >> on an all-in-one. > > Yes, it is be good idea to stay away from those spice and VNC port > ranges. What about 8700 and 8701? They are also available according to > /etc/services and IANA [1]. sounds good to me. > >>> and also development environments, so next time you do "mvn -Psetup" >>> your local installation of the application server will start using port >>> 6090 instead of 8080. >>> >>> Also take into account that if you are using Apache as a proxy it will >>> continue using ports 80 and 443 by default, no change there. >>> >>> If you have objections please let me know. >>> >>> Regards, >>> Juan Hern?ndez > > [1] > http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xml > > From jhernand at redhat.com Mon Aug 27 08:58:36 2012 From: jhernand at redhat.com (Juan Hernandez) Date: Mon, 27 Aug 2012 10:58:36 +0200 Subject: [Engine-devel] Chang in default ports 8700 and 8701 (was "Change in default ports: 6090 and 6091") In-Reply-To: <5039D1FA.7080005@redhat.com> References: <5037651B.5030105@redhat.com> <50376889.4070601@redhat.com> <50377769.6050406@redhat.com> <5039D1FA.7080005@redhat.com> Message-ID: <503B36BC.9060002@redhat.com> Take two, using ports 8700 and 8701 to avoid overlap with VNC and spice ports in all in one configurations. Any objection? If not I will merge it end of day today. Yaniv, any objection from QA point of view? Eyal, do you foresee any problem in the CI environment? On 08/26/2012 09:36 AM, Itamar Heim wrote: > On 08/24/2012 03:45 PM, Juan Hernandez wrote: >> On 08/24/2012 01:42 PM, Itamar Heim wrote: >>> On 08/24/2012 02:27 PM, Juan Hernandez wrote: >>>> Hello, >>>> >>>> Some time ago I requested feedback about a change in the port numbers >>>> used by the engine, the details are in this thread: >>>> >>>> http://lists.ovirt.org/pipermail/engine-devel/2012-July/002089.html >>>> >>>> I didn't receive bad feedback, neither in the thread or in the proposed >>>> change in gerrit: >>>> >>>> http://gerrit.ovirt.org/6348 >>>> >>>> I tested it as much as I can, so I think it is ready for merge. Unless >>>> someone has an strong reason to not do this change I will merge it on >>>> Monday 27 afternoon. >>>> >>>> Remember that with this change the default ports used by the engine will >>>> be 6090 for HTTP and 6091 for HTTPS. This will affect RPM installations >>> >>> danken - what is the range of spice ports used by vdsm (iirc they start >>> with 5900) and what are the secure ports? >>> juan - maybe something that close to spice/vnc ports to avoid collission >>> on an all-in-one. >> >> Yes, it is be good idea to stay away from those spice and VNC port >> ranges. What about 8700 and 8701? They are also available according to >> /etc/services and IANA [1]. > > sounds good to me. > >> >>>> and also development environments, so next time you do "mvn -Psetup" >>>> your local installation of the application server will start using port >>>> 6090 instead of 8080. >>>> >>>> Also take into account that if you are using Apache as a proxy it will >>>> continue using ports 80 and 443 by default, no change there. >>>> >>>> If you have objections please let me know. >>>> >>>> Regards, >>>> Juan Hern?ndez >> >> [1] >> http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xml -- Direcci?n Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3?D, 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid ? C.I.F. B82657941 - Red Hat S.L. From ryanh at us.ibm.com Mon Aug 27 13:36:49 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Mon, 27 Aug 2012 08:36:49 -0500 Subject: [Engine-devel] storagedomain status via API Message-ID: <20120827133649.GR13822@frylock.phx.austin.ibm.com> Should all created storage domains have 'status' in the API? Below, I've got one active NFS data domain, and a second ISO domain defined, but not activated. the element is only available for the the ISO domain. I'd really like a way to enumerate the storage domains, and check whether a domain is OK or not. via python shell ovirt-sdk: >>> iso = api.storagedomains.get(name='isos-20120614') >>> iso.name 'isos-20120614' >>> iso.status.state 'unattached' >>> sd = api.storagedomains.get(name='images-cluster1') >>> sd.name 'images-cluster1' >>> sd.status.state Traceback (most recent call last): File "", line 1, in AttributeError: 'NoneType' object has no attribute 'state' Refrence XML via https://enginehost/api/storagedomains/ images-cluster1 data true nfs
ichigo-dom209.phx.austin.ibm.com
/images-cluster1
6442450944 39728447488 0 v1
isos-20120614 iso unattached false nfs
ichigo-dom209.phx.austin.ibm.com
/iso-cluster1
6442450944 39728447488 0 v1
-- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From vszocs at redhat.com Mon Aug 27 13:40:51 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 27 Aug 2012 09:40:51 -0400 (EDT) Subject: [Engine-devel] UI Plugins configuration In-Reply-To: <6C8AC8C50E170C4E9B44D47B39B24A480931C86F@SACEXCMBX04-PRD.hq.netapp.com> Message-ID: <2067093209.14076040.1346074851579.JavaMail.root@redhat.com> Hi George, > If I want to add 3 main tabs and 6 context menus, do I provide 9 plugin definitions? Or do I provide 1 plugin definition with multiple ?urls? where each one points to a distinct path? The JSON plugin definition file (maybe we should call it "plugin descriptor") should contain basic information about the plugin and how it's supposed to be loaded by WebAdmin, for example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/start.html", // Invokes the actual plugin code ... more attributes ... } You can do many things inside one plugin (add multiple tabs, context menu items, etc.) - you just need to add multiple event handler functions inside the actual plugin code, for example: pluginApi.plugins['test'] = { // UiInit event handler function, the first function to be called on the plugin UiInit: function() { pluginApi.ui.addMainTab('Custom Tab One', 'custom-tab-1', 'http://www.example.com/1'); pluginApi.ui.addMainTab('Custom Tab Two', 'custom-tab-2', 'http://www.example.com/2'); } , // HostContextMenu event handler function, just an example (not implemented yet) HostContextMenu: function(ctx) { // 'ctx' represents the context of this event handler function, containing: // - information about host(s) currently selected // - API (functions) to add custom context menu items } // Similarly, we could define VmContextMenu, etc. (everything inside one plugin) }; > If ?url? is configured to point to an external application server hosting my plugin, what is the intent of ?path?? For example, if I configure ?url? to point to ?https://10.10.10.10/myplugin/entrypoint.html? then presumably the application server will render the page it needs as a main tab or context menu. It would have no need for ?path? since all dependencies would be resolved by the application server. You're right, the "path" attribute makes sense only when serving plugin resources (most importantly, plugin HTML page) through a special oVirt Engine servlet (currently called PluginSourcePageServlet , should be renamed to something like "PluginResourceServlet"). If "url" points to some external application server, the "path" attribute can be omitted (optional attribute). However, the "url" attribute denotes the location from which plugin HTML page will be requested by the plugin's iframe. Regards, Vojtech ----- Original Message ----- From: "George Costea" To: "Vojtech Szocs" , "Chris Frantz" Cc: "engine-devel" Sent: Thursday, August 23, 2012 3:09:05 PM Subject: RE: [Engine-devel] UI Plugins configuration Thanks Chris and Vojtech for continuing this discussion. I think I?m missing the link between providing the plugin definition file and defining the plugins. If I want to add 3 main tabs and 6 context menus, do I provide 9 plugin definitions? Or do I provide 1 plugin definition with multiple ?urls? where each one points to a distinct path? If ?url? is configured to point to an external application server hosting my plugin, what is the intent of ?path?? For example, if I configure ?url? to point to ?https://10.10.10.10/myplugin/entrypoint.html? then presumably the application server will render the page it needs as a main tab or context menu. It would have no need for ?path? since all dependencies would be resolved by the application server. George From: engine-devel-bounces at ovirt.org [mailto:engine-devel-bounces at ovirt.org] On Behalf Of Vojtech Szocs Sent: Thursday, August 23, 2012 8:14 AM To: Chris Frantz Cc: engine-devel Subject: Re: [Engine-devel] UI Plugins configuration Hi Chris, thanks for taking the time to make this patch, these are some excellent ideas! (CC'ing engine-devel so that we can discuss this with other guys as well) First of all, I really like the way you designed plugin source page URLs (going through PluginSourcePageServlet ), e.g. "/webadmin/webadmin/plugin//.html", plus the concept of "path" JSON attribute. WebadminDynamicHostingServlet loads and caches all plugin definitions ( *.json files), and directly embeds them into WebAdmin host page as pluginDefinitions JavaScript object. I'm assuming that pluginDefinitions object will now look like this: var pluginDefinitions = { "test": { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c":3} } } Originally, the pluginDefinitions object looked like this: var pluginDefinitions = { "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple pluginName -> pluginSourcePageUrl mappings } This is because PluginManager (WebAdmin) only needs pluginName ("name") and pluginSourcePageUrl ("url") during startup, when creating plugin iframe. But this can be changed :) Plugin "version" makes sense, plus the plugin configuration object ("config") can be useful directly on the client. Let me explain: Originally, plugin configuration was supposed to be passed to actual plugin code (through immediately-invoked-function-expression, or IIFE), just like this: (function (pluginApi, pluginConfig) { // JavaScript IIFE // ... actual plugin code ... })( parent.pluginApi, /* reference to global pluginApi object */ {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript object */ ); The whole purpose of PluginSourcePageServlet was to "wrap" actual plugin code into HTML, so that users don't need to write HTML pages for their plugins manually. PluginSourcePageServlet would handle any plugin dependencies (placed into HTML head), with actual plugin code being wrapped into IIFE, as shown above. Plugin configuration was meant to be stored in a separate file, e.g. -config.json , so that users could change the default plugin configuration to suit their needs. Inspired by your patch, rather than reading/embedding plugin configuration when serving plugin HTML page ( PluginSourcePageServlet ), it's even better to have the plugin configuration embedded directly into WebAdmin host page, along with introducing new pluginApi function to retrieve the plugin configuration object. Based on this, I suggest following modifications to the original concept: - modify original pluginDefinitions structure, from pluginName -> pluginSourcePageUrl , to pluginName -> pluginDefObject - pluginDefObject is basically a subset of physical plugin definition ( test.json , see below), suitable for use on the client - add following attributes to pluginDefObject : version , url , config * note #1: name is not needed, since it's already the key of pluginName -> pluginDefObject mapping * note #2: path is not needed on the client (more on this below) - introduce pluginApi.config(pluginName) function for plugins to retrieve their configuration object, and remove pluginConfig parameter from main IIFE (as shown above) [a] Physical plugin definition file (JSON) might be located at oVirt "DataDir", e.g. /usr/share/ovirt-engine/ui-plugins/test.json , for example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/start.html", "path": "/tmp", "config": "test-config.json" } [b] Plugin configuration file (JSON) might be located at oVirt "ConfigDir", e.g. /etc/ovirt-engine/ui-plugins/test-config.json , for example: { "a":1, "b":2, "c":3 } [c] Finally, plugin static resources (plugin source page, actual plugin code, plugin dependencies, CSS/images, etc.) would be located at /tmp (as shown in [a]), for example: /tmp/start.html -> plugin source page, used to load actual plugin code /tmp/test.js -> actual plugin code /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency For example: "/webadmin/webadmin/plugin/test/start.html" will be mapped to /tmp/start.html "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to /tmp/deps/jquery-min.js This approach has some pros and cons: (+) plugin static resources can be served through PluginSourcePageServlet (pretty much like oVirt documentation resources, served through oVirt Engine root war's FileServlet ) (+) plugin author has complete control over plugin source page (-) plugin author actually needs to write plugin source page Overall, I think this approach is better than the previous one (where PluginSourcePageServlet took care of rendering plugin source page, but sacrificed some flexibility). By the way, here's what would happen behind the scenes: 1. user requests WebAdmin host page, WebadminDynamicHostingServlet loads and caches all plugin definitions [a] + plugin configurations [b] and constructs/embeds appropriate pluginDefinitions JavaScript object 2. during WebAdmin startup, PluginManager registers the plugin (name/version/url/config), and creates/attaches the iframe to fetch plugin source page ansynchronously 3. PluginSourcePageServlet handles plugin source page request, resolves the correct path [c] and just streams the file content back to client > 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. Sounds good, we can implement these later on :) > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. Yes, but we can defend against these, restricting access only to plugin's "path" and its sub-directories. > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? I suppose you mean plugin definition files [a], cannot tell for sure, but we can change this anytime :) Chris, please let me know what you think, and again - many thanks for sending the patch! Regards, Vojtech ----- Original Message ----- From: "Chris Frantz" < Chris.Frantz at hp.com > To: vszocs at redhat.com Sent: Wednesday, August 22, 2012 7:56:45 PM Subject: UI Plugins configuration Vojtech, I decided to work on making the plugin patch a bit more configurable, following some of the ideas expressed by Itamar and others in the meeting yesterday. The attached patch is a simple first-attempt. Plugin configurations are stored in /usr/share/ovirt-engine/ui-plugins/*.json. Example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c": 3} } The engine reads all of the *.json files in that directory to build the list of known plugins and gives that list to the webadmin. When webadmin loads a plugin, it requests the URL given in the plugin config file. The "plugin" URL is mapped to PluginSourcePage, which will translate the first part of the path ("test") into whatever path is stored in pluginConfig ("/tmp") in this case, and then serve the static file (e.g. "/tmp/foo.html"). I didn't use the renderPluginSourcePage() method in favor of just serving a static file, but I have no strong opinion on the matter. However, a plugin may want to store static resources at "path" and have the engine serve those resources. By just serving files through PluginSourcePage, we don't need any other servlets to provide those resources. There is still a bit of work to do: 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. 3. Is /usr/share/ovirt-engine the right place for the plugin config files? Let me know what you think, --Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpastern at redhat.com Mon Aug 27 14:22:43 2012 From: mpastern at redhat.com (Michael Pasternak) Date: Mon, 27 Aug 2012 17:22:43 +0300 Subject: [Engine-devel] Fwd: storagedomain status via API In-Reply-To: <503B8020.20103@redhat.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <503B8020.20103@redhat.com> Message-ID: <503B82B3.8020309@redhat.com> Hi Ryan, SD status available only in datacenter context [1], as some of domains may be shared between datacenters, i.e: dc = api.datacenters.get(name="xxx") sd = dc.storagedomains.get(name="images-cluster1") sd.status [1] except of 'unatached' domains. On 08/27/2012 05:11 PM, Itamar Heim wrote: > > > > -------- Original Message -------- > Subject: [Engine-devel] storagedomain status via API > Date: Mon, 27 Aug 2012 08:36:49 -0500 > From: Ryan Harper > To: engine-devel at ovirt.org > > Should all created storage domains have 'status' in the API? Below, I've got one active NFS data domain, and a second ISO domain defined, but not activated. the > element is only available for the the ISO domain. > > I'd really like a way to enumerate the storage domains, and check whether a domain is OK or not. > > > via python shell ovirt-sdk: >>>> iso = api.storagedomains.get(name='isos-20120614') >>>> iso.name > 'isos-20120614' >>>> iso.status.state > 'unattached' >>>> sd = api.storagedomains.get(name='images-cluster1') >>>> sd.name > 'images-cluster1' >>>> sd.status.state > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'NoneType' object has no attribute 'state' > > > Refrence XML via https://enginehost/api/storagedomains/ > > > > images-cluster1 > > data > true > > nfs >
ichigo-dom209.phx.austin.ibm.com
> /images-cluster1 >
> 6442450944 > 39728447488 > 0 > v1 >
> > isos-20120614 > > > iso > > unattached > > false > > nfs >
ichigo-dom209.phx.austin.ibm.com
> /iso-cluster1 >
> 6442450944 > 39728447488 > 0 > v1 >
>
> > -- Michael Pasternak RedHat, ENG-Virtualization R&D From ryanh at us.ibm.com Mon Aug 27 14:28:57 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Mon, 27 Aug 2012 09:28:57 -0500 Subject: [Engine-devel] storagedomain status via API In-Reply-To: <20120827133649.GR13822@frylock.phx.austin.ibm.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> Message-ID: <20120827142857.GB17196@frylock.phx.austin.ibm.com> * Ryan Harper [2012-08-27 08:39]: > Should all created storage domains have 'status' in the API? Below, I've got one active NFS data domain, and a second ISO domain defined, but not activated. the element is only available for the the ISO domain. > > I'd really like a way to enumerate the storage domains, and check whether a domain is OK or not. And while I'm here... trying to delete an unattached storage domain: >>> iso >>> iso.delete() Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 3180, in delete body=ParseHelper.toXml(storagedomain)) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 111, in delete return self.request('DELETE', url, body, headers) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 128, in request last=last) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 154, in __doRequest raise RequestError, response ovirtsdk.infrastructure.errors.RequestError: status: 400 reason: Bad Request detail: StorageDomain [host.id|name] required for remove > > > via python shell ovirt-sdk: > >>> iso = api.storagedomains.get(name='isos-20120614') > >>> iso.name > 'isos-20120614' > >>> iso.status.state > 'unattached' > >>> sd = api.storagedomains.get(name='images-cluster1') > >>> sd.name > 'images-cluster1' > >>> sd.status.state > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'NoneType' object has no attribute 'state' > > > Refrence XML via https://enginehost/api/storagedomains/ > > > > images-cluster1 > > data > true > > nfs >
ichigo-dom209.phx.austin.ibm.com
> /images-cluster1 >
> 6442450944 > 39728447488 > 0 > v1 >
> > isos-20120614 > > > iso > > unattached > > false > > nfs >
ichigo-dom209.phx.austin.ibm.com
> /iso-cluster1 >
> 6442450944 > 39728447488 > 0 > v1 >
>
> > > -- > Ryan Harper > Software Engineer; Linux Technology Center > IBM Corp., Austin, Tx > ryanh at us.ibm.com > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From ryanh at us.ibm.com Mon Aug 27 14:41:20 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Mon, 27 Aug 2012 09:41:20 -0500 Subject: [Engine-devel] Fwd: storagedomain status via API In-Reply-To: <503B82B3.8020309@redhat.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <503B8020.20103@redhat.com> <503B82B3.8020309@redhat.com> Message-ID: <20120827144120.GC17196@frylock.phx.austin.ibm.com> * Michael Pasternak [2012-08-27 09:24]: > > > Hi Ryan, > He Michael, > SD status available only in datacenter context [1], as some of domains > may be shared between datacenters, i.e: > > dc = api.datacenters.get(name="xxx") > sd = dc.storagedomains.get(name="images-cluster1") > sd.status > > [1] except of 'unatached' domains. Right, this does make checking the state/status of a storage domain programtically troublesome. I can get a list of all storagedomains, but only some of them will have. the 'status' attribute. If it doesn't have the status attribute, then I need to enumerate all of the datacenters, and for each datacenter, I can list the storage domains, and check the status there. Any reason the status of each storagedomain can't be present under api/storagedomains? Also, any tips on deleting 'unattached' domains via the python bindings? >>> iso.delete() Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 3180, in delete body=ParseHelper.toXml(storagedomain)) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 111, in delete return self.request('DELETE', url, body, headers) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 128, in request last=last) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 154, in __doRequest raise RequestError, response ovirtsdk.infrastructure.errors.RequestError: status: 400 reason: Bad Request detail: StorageDomain [host.id|name] required for remove > > On 08/27/2012 05:11 PM, Itamar Heim wrote: > > > > > > > > -------- Original Message -------- > > Subject: [Engine-devel] storagedomain status via API > > Date: Mon, 27 Aug 2012 08:36:49 -0500 > > From: Ryan Harper > > To: engine-devel at ovirt.org > > > > Should all created storage domains have 'status' in the API? Below, I've got one active NFS data domain, and a second ISO domain defined, but not activated. the > > element is only available for the the ISO domain. > > > > I'd really like a way to enumerate the storage domains, and check whether a domain is OK or not. > > > > > > via python shell ovirt-sdk: > >>>> iso = api.storagedomains.get(name='isos-20120614') > >>>> iso.name > > 'isos-20120614' > >>>> iso.status.state > > 'unattached' > >>>> sd = api.storagedomains.get(name='images-cluster1') > >>>> sd.name > > 'images-cluster1' > >>>> sd.status.state > > Traceback (most recent call last): > > File "", line 1, in > > AttributeError: 'NoneType' object has no attribute 'state' > > > > > > Refrence XML via https://enginehost/api/storagedomains/ > > > > > > > > images-cluster1 > > > > data > > true > > > > nfs > >
ichigo-dom209.phx.austin.ibm.com
> > /images-cluster1 > >
> > 6442450944 > > 39728447488 > > 0 > > v1 > >
> > > > isos-20120614 > > > > > > iso > > > > unattached > > > > false > > > > nfs > >
ichigo-dom209.phx.austin.ibm.com
> > /iso-cluster1 > >
> > 6442450944 > > 39728447488 > > 0 > > v1 > >
> >
> > > > > > > -- > > Michael Pasternak > RedHat, ENG-Virtualization R&D -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From mpastern at redhat.com Mon Aug 27 14:43:57 2012 From: mpastern at redhat.com (Michael Pasternak) Date: Mon, 27 Aug 2012 17:43:57 +0300 Subject: [Engine-devel] storagedomain status via API In-Reply-To: <20120827142857.GB17196@frylock.phx.austin.ibm.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <20120827142857.GB17196@frylock.phx.austin.ibm.com> Message-ID: <503B87AD.7070700@redhat.com> Ryan, each sdk method is well documented, if you'll take a look on .delete().__doc__ it's: @param storagedomain.host.id|name: string [@param async: boolean (true|false)] [@param correlation_id: any string] and that's exactly what error says, -> StorageDomain [host.id|name] required for remove do: sd = api.storagedomains.get(name="xxx") sd.delete(storagedomain=params.StorageDomain(host=params.Host(id="yyy"))) On 08/27/2012 05:28 PM, Ryan Harper wrote: > * Ryan Harper [2012-08-27 08:39]: >> Should all created storage domains have 'status' in the API? Below, I've got one active NFS data domain, and a second ISO domain defined, but not activated. the element is only available for the the ISO domain. >> >> I'd really like a way to enumerate the storage domains, and check whether a domain is OK or not. > > And while I'm here... trying to delete an unattached storage domain: > >>>> iso > >>>> iso.delete() > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 3180, in delete > body=ParseHelper.toXml(storagedomain)) > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 111, in delete > return self.request('DELETE', url, body, headers) > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 128, in request > last=last) > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 154, in __doRequest > raise RequestError, response > ovirtsdk.infrastructure.errors.RequestError: > status: 400 > reason: Bad Request > detail: StorageDomain [host.id|name] required for remove > > > >> >> >> via python shell ovirt-sdk: >>>>> iso = api.storagedomains.get(name='isos-20120614') >>>>> iso.name >> 'isos-20120614' >>>>> iso.status.state >> 'unattached' >>>>> sd = api.storagedomains.get(name='images-cluster1') >>>>> sd.name >> 'images-cluster1' >>>>> sd.status.state >> Traceback (most recent call last): >> File "", line 1, in >> AttributeError: 'NoneType' object has no attribute 'state' >> >> >> Refrence XML via https://enginehost/api/storagedomains/ >> >> >> >> images-cluster1 >> >> data >> true >> >> nfs >>
ichigo-dom209.phx.austin.ibm.com
>> /images-cluster1 >>
>> 6442450944 >> 39728447488 >> 0 >> v1 >>
>> >> isos-20120614 >> >> >> iso >> >> unattached >> >> false >> >> nfs >>
ichigo-dom209.phx.austin.ibm.com
>> /iso-cluster1 >>
>> 6442450944 >> 39728447488 >> 0 >> v1 >>
>>
>> >> >> -- >> Ryan Harper >> Software Engineer; Linux Technology Center >> IBM Corp., Austin, Tx >> ryanh at us.ibm.com >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > -- Michael Pasternak RedHat, ENG-Virtualization R&D From ryanh at us.ibm.com Mon Aug 27 14:48:53 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Mon, 27 Aug 2012 09:48:53 -0500 Subject: [Engine-devel] storagedomain status via API In-Reply-To: <503B87AD.7070700@redhat.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <20120827142857.GB17196@frylock.phx.austin.ibm.com> <503B87AD.7070700@redhat.com> Message-ID: <20120827144853.GD17196@frylock.phx.austin.ibm.com> * Michael Pasternak [2012-08-27 09:45]: > > Ryan, > > each sdk method is well documented, if you'll take a look > on .delete().__doc__ it's: > > @param storagedomain.host.id|name: string > [@param async: boolean (true|false)] > [@param correlation_id: any string] > Thanks for the tip on the __doc__ string. > and that's exactly what error says, > -> StorageDomain [host.id|name] required for remove > > do: > > sd = api.storagedomains.get(name="xxx") > sd.delete(storagedomain=params.StorageDomain(host=params.Host(id="yyy"))) > > > On 08/27/2012 05:28 PM, Ryan Harper wrote: > > * Ryan Harper [2012-08-27 08:39]: > >> Should all created storage domains have 'status' in the API? Below, I've got one active NFS data domain, and a second ISO domain defined, but not activated. the element is only available for the the ISO domain. > >> > >> I'd really like a way to enumerate the storage domains, and check whether a domain is OK or not. > > > > And while I'm here... trying to delete an unattached storage domain: > > > >>>> iso > > > >>>> iso.delete() > > Traceback (most recent call last): > > File "", line 1, in > > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 3180, in delete > > body=ParseHelper.toXml(storagedomain)) > > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 111, in delete > > return self.request('DELETE', url, body, headers) > > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 128, in request > > last=last) > > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 154, in __doRequest > > raise RequestError, response > > ovirtsdk.infrastructure.errors.RequestError: > > status: 400 > > reason: Bad Request > > detail: StorageDomain [host.id|name] required for remove > > > > > > > >> > >> > >> via python shell ovirt-sdk: > >>>>> iso = api.storagedomains.get(name='isos-20120614') > >>>>> iso.name > >> 'isos-20120614' > >>>>> iso.status.state > >> 'unattached' > >>>>> sd = api.storagedomains.get(name='images-cluster1') > >>>>> sd.name > >> 'images-cluster1' > >>>>> sd.status.state > >> Traceback (most recent call last): > >> File "", line 1, in > >> AttributeError: 'NoneType' object has no attribute 'state' > >> > >> > >> Refrence XML via https://enginehost/api/storagedomains/ > >> > >> > >> > >> images-cluster1 > >> > >> data > >> true > >> > >> nfs > >>
ichigo-dom209.phx.austin.ibm.com
> >> /images-cluster1 > >>
> >> 6442450944 > >> 39728447488 > >> 0 > >> v1 > >>
> >> > >> isos-20120614 > >> > >> > >> iso > >> > >> unattached > >> > >> false > >> > >> nfs > >>
ichigo-dom209.phx.austin.ibm.com
> >> /iso-cluster1 > >>
> >> 6442450944 > >> 39728447488 > >> 0 > >> v1 > >>
> >>
> >> > >> > >> -- > >> Ryan Harper > >> Software Engineer; Linux Technology Center > >> IBM Corp., Austin, Tx > >> ryanh at us.ibm.com > >> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > -- > > Michael Pasternak > RedHat, ENG-Virtualization R&D -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From vszocs at redhat.com Mon Aug 27 14:48:44 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 27 Aug 2012 10:48:44 -0400 (EDT) Subject: [Engine-devel] UI Plugins configuration In-Reply-To: Message-ID: <127266215.14115417.1346078924449.JavaMail.root@redhat.com> Hi Chris, > Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. Yes :) but maybe we could also formalize some terms, for example: Plugin descriptor is the JSON file that contains important plugin meta-data (including the plugin source page URL), e.g. /usr/share/ovirt-engine/ui-plugins/test.json Plugin definition is the JavaScript object representing plugin descriptor meta-data suitable for use on client (GWT WebAdmin). Plugin definition is embedded into WebAdmin host page within pluginDefinitions object, and read by PluginManager during WebAdmin startup. Plugin configuration is the JSON file that contains optional plugin configuration, e.g. /etc/ovirt-engine/ui-plugins/test-config.json I think we can combine two things here: 1) allow plugin authors to define standard (fallback) configuration directly inside plugin descriptor 2) allow plugin users to override standard configuration by modifying dedicated plugin configuration file Finally, plugin source page is the HTML page used to invoke actual plugin code (this page is referenced by plugin descriptor's "url" attribute). Plugin source page can also load external resources required by the plugin, e.g. 3rd party JavaScript libraries, CSS, images, etc. > I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). You're right, for such additional plugin resources, even more configuration/parsing/logic would be required. Even though plugin authors need to write the plugin source page themselves, they have full control over it, which is a good thing in general. > I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. Yeah, let's make the concept of the plugin configuration file optional for now (standard plugin configuration can be part of plugin descriptor). > In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. I think using Jackson's JSON representation in Java (JsonNode) is perfectly suitable in this situation. No need to have separate Java bean for that :) > We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. Agreed. > We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional Sounds good, but I'd skip some attributes for now (enabled, apiVersion, author, license) for the sake of simplicity. As you wrote, when loading plugin descriptor, we should enforce mandatory attributes (name, version, url) . As for plugin configuration, there could be two different attributes: - "config" for standard (fallback) plugin configuration (JSON object) - "configFile" for external plugin configuration file (path to file, relative to /etc/ovirt-engine/ui-plugins/ ) , that overrides the standard configuration Note that when loading plugin descriptor, the loader should also "merge" the configuration together (custom config on top of standard config). > I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. Sounds good! I was planning to incorporate your original patch in next PoC revision, but of course, you can work on the loader some more and send another patch :) For the directory climbing issue, see /backend/manager/modules/root/src/main/java/org/ovirt/engine/core/FileServlet.java (there's a method called isSane for dealing with such issue). > Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? When the plugin iframe references a resource on different origin (protocol, domain, port) than WebAdmin main page origin, JavaScript code running inside that iframe will not be able to access parent (top-level) pluginApi object. You're right, the statement "parent.pluginApi" will not work, because of Same-Origin Policy enforced by the browser. CORS is just one alternative, see http://stackoverflow.com/questions/3076414/ways-to-circumvent-the-same-origin-policy for more. However, CORS needs to be supported by the browser (a special HTTP response header is used to tell that the iframe is allowed to access resources from another - WebAdmin main page - origin). We need to investigate this a bit more I guess. Regards, Vojtech ----- Original Message ----- From: "Chris Frantz" To: "Vojtech Szocs" Cc: "engine-devel" Sent: Thursday, August 23, 2012 5:12:02 PM Subject: RE: UI Plugins configuration Vojtech, Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional: { # Mandatory fields: name, enabled, version, url, apiversion, author, license # Name of the plugin "name": "test", # Whether or not plugin is enabed "enabled": true, # version of the plugin "version": "1.0", # How to load the plugin "url": "/webadmin/webadmin/plugin/test/start.html", # Which version of engine plugin is meant to work with "apiversion": "3.1.0", # Who wrote the plugin and how is it licensed? "author": "SuperBig Corporation", "license": "Proprietary", # Optional fields path, config # Where to locate plugin (if loaded by webadmin/plugin) "path": "/tmp", # Plugin configuration information (if any) "config": "test-config.json", } I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? Thanks, --Chris From: Vojtech Szocs [mailto:vszocs at redhat.com] Sent: Thursday, August 23, 2012 7:14 AM To: Frantz, Chris Cc: engine-devel Subject: Re: UI Plugins configuration Hi Chris, thanks for taking the time to make this patch, these are some excellent ideas! (CC'ing engine-devel so that we can discuss this with other guys as well) First of all, I really like the way you designed plugin source page URLs (going through PluginSourcePageServlet ), e.g. "/webadmin/webadmin/plugin//.html", plus the concept of "path" JSON attribute. WebadminDynamicHostingServlet loads and caches all plugin definitions ( *.json files), and directly embeds them into WebAdmin host page as pluginDefinitions JavaScript object. I'm assuming that pluginDefinitions object will now look like this: var pluginDefinitions = { "test": { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c":3} } } Originally, the pluginDefinitions object looked like this: var pluginDefinitions = { "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple pluginName -> pluginSourcePageUrl mappings } This is because PluginManager (WebAdmin) only needs pluginName ("name") and pluginSourcePageUrl ("url") during startup, when creating plugin iframe. But this can be changed :) Plugin "version" makes sense, plus the plugin configuration object ("config") can be useful directly on the client. Let me explain: Originally, plugin configuration was supposed to be passed to actual plugin code (through immediately-invoked-function-expression, or IIFE), just like this: (function (pluginApi, pluginConfig) { // JavaScript IIFE // ... actual plugin code ... })( parent.pluginApi, /* reference to global pluginApi object */ {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript object */ ); The whole purpose of PluginSourcePageServlet was to "wrap" actual plugin code into HTML, so that users don't need to write HTML pages for their plugins manually. PluginSourcePageServlet would handle any plugin dependencies (placed into HTML head), with actual plugin code being wrapped into IIFE, as shown above. Plugin configuration was meant to be stored in a separate file, e.g. -config.json , so that users could change the default plugin configuration to suit their needs. Inspired by your patch, rather than reading/embedding plugin configuration when serving plugin HTML page ( PluginSourcePageServlet ), it's even better to have the plugin configuration embedded directly into WebAdmin host page, along with introducing new pluginApi function to retrieve the plugin configuration object. Based on this, I suggest following modifications to the original concept: - modify original pluginDefinitions structure, from pluginName -> pluginSourcePageUrl , to pluginName -> pluginDefObject - pluginDefObject is basically a subset of physical plugin definition ( test.json , see below), suitable for use on the client - add following attributes to pluginDefObject : version , url , config * note #1: name is not needed, since it's already the key of pluginName -> pluginDefObject mapping * note #2: path is not needed on the client (more on this below) - introduce pluginApi.config(pluginName) function for plugins to retrieve their configuration object, and remove pluginConfig parameter from main IIFE (as shown above) [a] Physical plugin definition file (JSON) might be located at oVirt "DataDir", e.g. /usr/share/ovirt-engine/ui-plugins/test.json , for example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/start.html", "path": "/tmp", "config": "test-config.json" } [b] Plugin configuration file (JSON) might be located at oVirt "ConfigDir", e.g. /etc/ovirt-engine/ui-plugins/test-config.json , for example: { "a":1, "b":2, "c":3 } [c] Finally, plugin static resources (plugin source page, actual plugin code, plugin dependencies, CSS/images, etc.) would be located at /tmp (as shown in [a]), for example: /tmp/start.html -> plugin source page, used to load actual plugin code /tmp/test.js -> actual plugin code /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency For example: "/webadmin/webadmin/plugin/test/start.html" will be mapped to /tmp/start.html "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to /tmp/deps/jquery-min.js This approach has some pros and cons: (+) plugin static resources can be served through PluginSourcePageServlet (pretty much like oVirt documentation resources, served through oVirt Engine root war's FileServlet ) (+) plugin author has complete control over plugin source page (-) plugin author actually needs to write plugin source page Overall, I think this approach is better than the previous one (where PluginSourcePageServlet took care of rendering plugin source page, but sacrificed some flexibility). By the way, here's what would happen behind the scenes: 1. user requests WebAdmin host page, WebadminDynamicHostingServlet loads and caches all plugin definitions [a] + plugin configurations [b] and constructs/embeds appropriate pluginDefinitions JavaScript object 2. during WebAdmin startup, PluginManager registers the plugin (name/version/url/config), and creates/attaches the iframe to fetch plugin source page ansynchronously 3. PluginSourcePageServlet handles plugin source page request, resolves the correct path [c] and just streams the file content back to client > 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. Sounds good, we can implement these later on :) > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. Yes, but we can defend against these, restricting access only to plugin's "path" and its sub-directories. > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? I suppose you mean plugin definition files [a], cannot tell for sure, but we can change this anytime :) Chris, please let me know what you think, and again - many thanks for sending the patch! Regards, Vojtech ----- Original Message ----- From: "Chris Frantz" < Chris.Frantz at hp.com > To: vszocs at redhat.com Sent: Wednesday, August 22, 2012 7:56:45 PM Subject: UI Plugins configuration Vojtech, I decided to work on making the plugin patch a bit more configurable, following some of the ideas expressed by Itamar and others in the meeting yesterday. The attached patch is a simple first-attempt. Plugin configurations are stored in /usr/share/ovirt-engine/ui-plugins/*.json. Example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c": 3} } The engine reads all of the *.json files in that directory to build the list of known plugins and gives that list to the webadmin. When webadmin loads a plugin, it requests the URL given in the plugin config file. The "plugin" URL is mapped to PluginSourcePage, which will translate the first part of the path ("test") into whatever path is stored in pluginConfig ("/tmp") in this case, and then serve the static file (e.g. "/tmp/foo.html"). I didn't use the renderPluginSourcePage() method in favor of just serving a static file, but I have no strong opinion on the matter. However, a plugin may want to store static resources at "path" and have the engine serve those resources. By just serving files through PluginSourcePage, we don't need any other servlets to provide those resources. There is still a bit of work to do: 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. 3. Is /usr/share/ovirt-engine the right place for the plugin config files? Let me know what you think, --Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpastern at redhat.com Mon Aug 27 14:55:24 2012 From: mpastern at redhat.com (Michael Pasternak) Date: Mon, 27 Aug 2012 17:55:24 +0300 Subject: [Engine-devel] Fwd: storagedomain status via API In-Reply-To: <20120827144120.GC17196@frylock.phx.austin.ibm.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <503B8020.20103@redhat.com> <503B82B3.8020309@redhat.com> <20120827144120.GC17196@frylock.phx.austin.ibm.com> Message-ID: <503B8A5C.4040503@redhat.com> On 08/27/2012 05:41 PM, Ryan Harper wrote: > * Michael Pasternak [2012-08-27 09:24]: >> > >> > >> > Hi Ryan, >> > > He Michael, > >> > SD status available only in datacenter context [1], as some of domains >> > may be shared between datacenters, i.e: >> > >> > dc = api.datacenters.get(name="xxx") >> > sd = dc.storagedomains.get(name="images-cluster1") >> > sd.status >> > >> > [1] except of 'unatached' domains. > Right, this does make checking the state/status of a storage domain > programtically troublesome. > > I can get a list of all storagedomains, but only some of them will have. > the 'status' attribute. If it doesn't have the status attribute, then I > need to enumerate all of the datacenters, and for each datacenter, I can > list the storage domains, and check the status there. > > Any reason the status of each storagedomain can't be present under > api/storagedomains? > storagedomain can be shared between different datacenters, i.e it may have status X in DC1 and Y in DC2, so only use-case for SD to have status outside of DC context, is when it not attached yet. > > Also, any tips on deleting 'unattached' domains via the python bindings? > see my reply in original email. -- Michael Pasternak RedHat, ENG-Virtualization R&D From ryanh at us.ibm.com Mon Aug 27 15:03:59 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Mon, 27 Aug 2012 10:03:59 -0500 Subject: [Engine-devel] Fwd: storagedomain status via API In-Reply-To: <503B8A5C.4040503@redhat.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <503B8020.20103@redhat.com> <503B82B3.8020309@redhat.com> <20120827144120.GC17196@frylock.phx.austin.ibm.com> <503B8A5C.4040503@redhat.com> Message-ID: <20120827150359.GE17196@frylock.phx.austin.ibm.com> * Michael Pasternak [2012-08-27 09:57]: > > On 08/27/2012 05:41 PM, Ryan Harper wrote: > > * Michael Pasternak [2012-08-27 09:24]: > >> > > >> > > >> > Hi Ryan, > >> > > > He Michael, > > > >> > SD status available only in datacenter context [1], as some of domains > >> > may be shared between datacenters, i.e: > >> > > >> > dc = api.datacenters.get(name="xxx") > >> > sd = dc.storagedomains.get(name="images-cluster1") > >> > sd.status > >> > > >> > [1] except of 'unatached' domains. > > Right, this does make checking the state/status of a storage domain > > programtically troublesome. > > > > I can get a list of all storagedomains, but only some of them will have. > > the 'status' attribute. If it doesn't have the status attribute, then I > > need to enumerate all of the datacenters, and for each datacenter, I can > > list the storage domains, and check the status there. > > > > Any reason the status of each storagedomain can't be present under > > api/storagedomains? > > > > storagedomain can be shared between different datacenters, > i.e it may have status X in DC1 and Y in DC2, so only use-case > for SD to have status outside of DC context, is when it not attached > yet. OK, that makes sense. > > > > > Also, any tips on deleting 'unattached' domains via the python bindings? > > > > see my reply in original email. Thanks! > > -- > > Michael Pasternak > RedHat, ENG-Virtualization R&D -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From vszocs at redhat.com Mon Aug 27 17:58:40 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 27 Aug 2012 13:58:40 -0400 (EDT) Subject: [Engine-devel] Update on UI Plugins: PoC patch revision 3 In-Reply-To: <450866661.14161957.1346087156975.JavaMail.root@redhat.com> Message-ID: <1182196199.14184524.1346090320195.JavaMail.root@redhat.com> Hi guys, here comes the most recent revision of UI Plugins proof-of-concept patch (please find it attached). There's only one significant change on top of previous revision: the possibility to add custom main tabs :) * MainTabCustomPresenter is the (non-singleton) presenter of the custom tab component * MainTabCustomView is the (non-singleton) view of the custom tab component, could be improved to actually show the content of the given URL (e.g. through an iframe) * MainTabCustomProxy is the (non-singleton) presenter proxy implementation Here's what happens when a plugin adds a new custom main tab (the process starts at PluginManager.addMainTab method): * MainTabCustomProxyFactory creates new presenter proxy ( MainTabCustomProxy ) * MainTabCustomProxy binds itself to the given place (so that it will be able to respond to GWTP PlaceRequestInternalEvent ), creates the "actual" proxy ( WrappedProxy ) providing actual tab data, and also connects with WebAdmin default gatekeeper (user must be signed into WebAdmin in order to access the place of this custom tab) * From that point on, the presenter proxy for custom tab component is bound and ready for use In GWTP, when main tab panel (e.g. MainTabPanelPresenter ) renders its tabs, it fires an event ( RequestTabsEvent ) saying "any tab which belongs to me, report in". Besides the standard main tab proxies, MainTabCustomProxy instance(s) report in as well, registering themselves into main tab panel through TabContainerPresenter.addTab method. In effect, main tab panel will contain clickable tab headers for all tabs that belong to it. When the custom tab gets activated, or more precisely, when the custom tab presenter gets revealed (e.g. by clicking custom tab header, or programatically using PlaceManager ), GWTP will first ask all proxies "which one handles #custom-tab place", and MainTabCustomProxy indeed responds and hands over its presenter. GWTP presenter reveal flow and view composition gets triggered (bottom-up, from MainTabCustomPresenter right up to RootPresenter ), and after reaching the top of presenter hierarchy ( RootPresenter ), presenter life-cycle methods (such as onReveal , onReset , etc.) are called in a top-down fashion. I've tested this patch using following plugin code ( myPlugin.js ): pluginApi.plugins['myPlugin'] = { UiInit: function() { pluginApi.ui.addMainTab('Custom Tab', 'custom-tab', 'http://www.google.com/'); } }; pluginApi.ready('myPlugin'); As a side note, this PoC patch only adds one kind of custom main tab (the one that is supposed to have its content rendered from the given URL). We could also implement other kinds of custom main tabs, two of which come to my mind right now (this is just an idea for future improvement): * custom table-based main tab, with API for adding columns, buttons and actual row data * custom form-based main tab, with API for adding key-value data pairs Let me know what you think. Cheers, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: WIP-UI-Plugins-PoC-revision-3.patch Type: text/x-patch Size: 39483 bytes Desc: not available URL: From ryanh at us.ibm.com Mon Aug 27 19:30:40 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Mon, 27 Aug 2012 14:30:40 -0500 Subject: [Engine-devel] storagedomain status via API In-Reply-To: <503B87AD.7070700@redhat.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <20120827142857.GB17196@frylock.phx.austin.ibm.com> <503B87AD.7070700@redhat.com> Message-ID: <20120827193040.GF17196@frylock.phx.austin.ibm.com> * Michael Pasternak [2012-08-27 09:45]: > > Ryan, > > each sdk method is well documented, if you'll take a look > on .delete().__doc__ it's: > > @param storagedomain.host.id|name: string > [@param async: boolean (true|false)] > [@param correlation_id: any string] > > and that's exactly what error says, > -> StorageDomain [host.id|name] required for remove > > do: > > sd = api.storagedomains.get(name="xxx") > sd.delete(storagedomain=params.StorageDomain(host=params.Host(id="yyy"))) This failed with bad request, but this worked: sd.delete(storagedomain=params.StorageDomain(host=api.hosts.get('hostname here'))) Thanks again, Ryan -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From mpastern at redhat.com Tue Aug 28 06:04:37 2012 From: mpastern at redhat.com (Michael Pasternak) Date: Tue, 28 Aug 2012 09:04:37 +0300 Subject: [Engine-devel] storagedomain status via API In-Reply-To: <20120827193040.GF17196@frylock.phx.austin.ibm.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <20120827142857.GB17196@frylock.phx.austin.ibm.com> <503B87AD.7070700@redhat.com> <20120827193040.GF17196@frylock.phx.austin.ibm.com> Message-ID: <503C5F75.1030303@redhat.com> On 08/27/2012 10:30 PM, Ryan Harper wrote: > * Michael Pasternak [2012-08-27 09:45]: >> >> Ryan, >> >> each sdk method is well documented, if you'll take a look >> on .delete().__doc__ it's: >> >> @param storagedomain.host.id|name: string >> [@param async: boolean (true|false)] >> [@param correlation_id: any string] >> >> and that's exactly what error says, >> -> StorageDomain [host.id|name] required for remove >> >> do: >> >> sd = api.storagedomains.get(name="xxx") >> sd.delete(storagedomain=params.StorageDomain(host=params.Host(id="yyy"))) > > This failed with bad request, but this worked: are you sure about host id? i.e is it was same id as in host fetched by api.hosts.get('hostname here') ? > > sd.delete(storagedomain=params.StorageDomain(host=api.hosts.get('hostname here'))) > > Thanks again, > Ryan > -- Michael Pasternak RedHat, ENG-Virtualization R&D From jhernand at redhat.com Tue Aug 28 08:20:45 2012 From: jhernand at redhat.com (Juan Hernandez) Date: Tue, 28 Aug 2012 10:20:45 +0200 Subject: [Engine-devel] Default ports are 8700 and 8701 now Message-ID: <503C7F5D.7020900@redhat.com> Hello, Change http://gerrit.ovirt.org/6348 has been merged, so next time you run "mvn -Psetup ..." with the updated code you will need to use 8700 instead of 8080. Let me know if you find any issue related to this change. Regards, Juan Hernandez -- Direcci?n Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3?D, 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid ? C.I.F. B82657941 - Red Hat S.L. From vszocs at redhat.com Tue Aug 28 10:17:29 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Tue, 28 Aug 2012 06:17:29 -0400 (EDT) Subject: [Engine-devel] Update on UI Plugins: Improved plugin API In-Reply-To: <455300925.14438202.1346146868031.JavaMail.root@redhat.com> Message-ID: <1302819559.14447954.1346149049849.JavaMail.root@redhat.com> Hi guys, I was thinking about improving JavaScript plugin API so that each time a plugin calls WebAdmin, the plugin name will be part of the call. This way, PluginManager (WebAdmin) can validate the call before doing any further actions, e.g. "plugin invocation must be allowed && plugin must be initialized". In other words, this is just to ensure consistent behavior for "plugin -> WebAdmin" calls. Here's a draft of new JavaScript API in action: http://jsfiddle.net/tHk5n/ (see the code below the comment saying "ACTUAL TEST CODE") (I took my inspiration from jQuery source code, even though I don't fully understand JavaScript prototype OOP concept, seems a bit weird to me.) For comparison, here's some plugin code that uses current plugin API: // Register our plugin object (containing event handler functions) into pluginApi.plugins pluginApi.plugins['myPlugin'] = { UiInit: function() { pluginApi.ui.addMainTab('Custom Tab', 'custom-tab', 'http://www.example.com/'); } }; // Tell WebAdmin that we are ready, we need the plugin name to identify this plugin pluginApi.ready('myPlugin'); And here's an equivalent plugin code that uses new plugin API: // Plugin API instance for our plugin var myPlugin = pluginApi('myPlugin'); // Register our plugin object (containing event handler functions) into myPlugin myPlugin.register({ UiInit: function() { myPlugin.ui.addMainTab('Custom Tab', 'custom-tab', 'http://www.example.com/'); } }); // Tell WebAdmin that we are ready , plugin name is already part of myPlugin myPlugin.ready(); // Note: the above line is equivalent to: pluginApi('myPlugin').ready() ; Let me know what you think. Cheers, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhernand at redhat.com Tue Aug 28 18:07:40 2012 From: jhernand at redhat.com (Juan Hernandez) Date: Tue, 28 Aug 2012 20:07:40 +0200 Subject: [Engine-devel] UI Plugins configuration In-Reply-To: <127266215.14115417.1346078924449.JavaMail.root@redhat.com> References: <127266215.14115417.1346078924449.JavaMail.root@redhat.com> Message-ID: <503D08EC.9050509@redhat.com> A couple of comments from the packaging point of view. On 08/27/2012 04:48 PM, Vojtech Szocs wrote: > Hi Chris, > >> Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. > > Yes :) but maybe we could also formalize some terms, for example: > > Plugin descriptor is the JSON file that contains important plugin meta-data (including the plugin source page URL), e.g. /usr/share/ovirt-engine/ui-plugins/test.json Note that the fact that this file goes in the /usr directory means that it is *not* intended to be modified. It means also that the packaging system (well, at least RPM) will *not* preserve it when updating the package that contains it, so any change made by the end user will be lost. So it is important to make clear for plugin developers and end users that these files are *not* to be modified after installation. > Plugin definition is the JavaScript object representing plugin descriptor meta-data suitable for use on client (GWT WebAdmin). Plugin definition is embedded into WebAdmin host page within pluginDefinitions object, and read by PluginManager during WebAdmin startup. > > Plugin configuration is the JSON file that contains optional plugin configuration, e.g. /etc/ovirt-engine/ui-plugins/test-config.json For files in the /etc directory the converse happens: they are intended to be modified by the end user, and usually (if marked properly) they are preserved when a new version of the package is installed. > I think we can combine two things here: > 1) allow plugin authors to define standard (fallback) configuration directly inside plugin descriptor > 2) allow plugin users to override standard configuration by modifying dedicated plugin configuration file > > Finally, plugin source page is the HTML page used to invoke actual plugin code (this page is referenced by plugin descriptor's "url" attribute). Plugin source page can also load external resources required by the plugin, e.g. 3rd party JavaScript libraries, CSS, images, etc. > >> I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). > > You're right, for such additional plugin resources, even more configuration/parsing/logic would be required. Even though plugin authors need to write the plugin source page themselves, they have full control over it, which is a good thing in general. > >> I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. > > Yeah, let's make the concept of the plugin configuration file optional for now (standard plugin configuration can be part of plugin descriptor). > >> In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. > > I think using Jackson's JSON representation in Java (JsonNode) is perfectly suitable in this situation. No need to have separate Java bean for that :) > >> We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. > > Agreed. > >> We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional > > Sounds good, but I'd skip some attributes for now (enabled, apiVersion, author, license) for the sake of simplicity. > > As you wrote, when loading plugin descriptor, we should enforce mandatory attributes (name, version, url) . > > As for plugin configuration, there could be two different attributes: > - "config" for standard (fallback) plugin configuration (JSON object) > - "configFile" for external plugin configuration file (path to file, relative to /etc/ovirt-engine/ui-plugins/ ) , that overrides the standard configuration > > Note that when loading plugin descriptor, the loader should also "merge" the configuration together (custom config on top of standard config). > >> I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. > > Sounds good! I was planning to incorporate your original patch in next PoC revision, but of course, you can work on the loader some more and send another patch :) > > For the directory climbing issue, see /backend/manager/modules/root/src/main/java/org/ovirt/engine/core/FileServlet.java (there's a method called isSane for dealing with such issue). > >> Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? > > When the plugin iframe references a resource on different origin (protocol, domain, port) than WebAdmin main page origin, JavaScript code running inside that iframe will not be able to access parent (top-level) pluginApi object. You're right, the statement "parent.pluginApi" will not work, because of Same-Origin Policy enforced by the browser. > > CORS is just one alternative, see http://stackoverflow.com/questions/3076414/ways-to-circumvent-the-same-origin-policy for more. However, CORS needs to be supported by the browser (a special HTTP response header is used to tell that the iframe is allowed to access resources from another - WebAdmin main page - origin). We need to investigate this a bit more I guess. > > Regards, > Vojtech > > > ----- Original Message ----- > > From: "Chris Frantz" > To: "Vojtech Szocs" > Cc: "engine-devel" > Sent: Thursday, August 23, 2012 5:12:02 PM > Subject: RE: UI Plugins configuration > > > > Vojtech, > > Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. > > I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). > > I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. > > In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. > > We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. > > We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional: > > { > # Mandatory fields: name, enabled, version, url, apiversion, author, license > # Name of the plugin > "name": "test", > > > # Whether or not plugin is enabed > "enabled": true, > > > # version of the plugin > "version": "1.0", > > > # How to load the plugin > "url": "/webadmin/webadmin/plugin/test/start.html", > > > # Which version of engine plugin is meant to work with > "apiversion": "3.1.0", > > > # Who wrote the plugin and how is it licensed? > "author": "SuperBig Corporation", > "license": "Proprietary", > > > # Optional fields path, config > # Where to locate plugin (if loaded by webadmin/plugin) > "path": "/tmp", > > # Plugin configuration information (if any) > "config": "test-config.json", > } > > I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. > > Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? > > Thanks, > --Chris > > > > > > From: Vojtech Szocs [mailto:vszocs at redhat.com] > Sent: Thursday, August 23, 2012 7:14 AM > To: Frantz, Chris > Cc: engine-devel > Subject: Re: UI Plugins configuration > > > Hi Chris, > > thanks for taking the time to make this patch, these are some excellent ideas! (CC'ing engine-devel so that we can discuss this with other guys as well) > > First of all, I really like the way you designed plugin source page URLs (going through PluginSourcePageServlet ), e.g. "/webadmin/webadmin/plugin//.html", plus the concept of "path" JSON attribute. > > WebadminDynamicHostingServlet loads and caches all plugin definitions ( *.json files), and directly embeds them into WebAdmin host page as pluginDefinitions JavaScript object. I'm assuming that pluginDefinitions object will now look like this: > > var pluginDefinitions = { > "test": { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/foo.html", > "path": "/tmp", > "config": {"a":1, "b":2, "c":3} > } > } > > Originally, the pluginDefinitions object looked like this: > > var pluginDefinitions = { > "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple pluginName -> pluginSourcePageUrl mappings > } > > This is because PluginManager (WebAdmin) only needs pluginName ("name") and pluginSourcePageUrl ("url") during startup, when creating plugin iframe. But this can be changed :) > > Plugin "version" makes sense, plus the plugin configuration object ("config") can be useful directly on the client. Let me explain: > > Originally, plugin configuration was supposed to be passed to actual plugin code (through immediately-invoked-function-expression, or IIFE), just like this: > > (function (pluginApi, pluginConfig) { // JavaScript IIFE > // ... actual plugin code ... > })( > parent.pluginApi, /* reference to global pluginApi object */ > {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript object */ > ); > > The whole purpose of PluginSourcePageServlet was to "wrap" actual plugin code into HTML, so that users don't need to write HTML pages for their plugins manually. PluginSourcePageServlet would handle any plugin dependencies (placed into HTML head), with actual plugin code being wrapped into IIFE, as shown above. Plugin configuration was meant to be stored in a separate file, e.g. -config.json , so that users could change the default plugin configuration to suit their needs. > > Inspired by your patch, rather than reading/embedding plugin configuration when serving plugin HTML page ( PluginSourcePageServlet ), it's even better to have the plugin configuration embedded directly into WebAdmin host page, along with introducing new pluginApi function to retrieve the plugin configuration object. > > Based on this, I suggest following modifications to the original concept: > > - modify original pluginDefinitions structure, from pluginName -> pluginSourcePageUrl , to pluginName -> pluginDefObject > - pluginDefObject is basically a subset of physical plugin definition ( test.json , see below), suitable for use on the client > - add following attributes to pluginDefObject : version , url , config > * note #1: name is not needed, since it's already the key of pluginName -> pluginDefObject mapping > * note #2: path is not needed on the client (more on this below) > - introduce pluginApi.config(pluginName) function for plugins to retrieve their configuration object, and remove pluginConfig parameter from main IIFE (as shown above) > > [a] Physical plugin definition file (JSON) might be located at oVirt "DataDir", e.g. /usr/share/ovirt-engine/ui-plugins/test.json , for example: > > { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/start.html", > "path": "/tmp", > "config": "test-config.json" > } > > [b] Plugin configuration file (JSON) might be located at oVirt "ConfigDir", e.g. /etc/ovirt-engine/ui-plugins/test-config.json , for example: > > { > "a":1, "b":2, "c":3 > } > > [c] Finally, plugin static resources (plugin source page, actual plugin code, plugin dependencies, CSS/images, etc.) would be located at /tmp (as shown in [a]), for example: > > /tmp/start.html -> plugin source page, used to load actual plugin code > /tmp/test.js -> actual plugin code > /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency > > For example: > "/webadmin/webadmin/plugin/test/start.html" will be mapped to /tmp/start.html > "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to /tmp/deps/jquery-min.js > > This approach has some pros and cons: > (+) plugin static resources can be served through PluginSourcePageServlet (pretty much like oVirt documentation resources, served through oVirt Engine root war's FileServlet ) > (+) plugin author has complete control over plugin source page > (-) plugin author actually needs to write plugin source page > > Overall, I think this approach is better than the previous one (where PluginSourcePageServlet took care of rendering plugin source page, but sacrificed some flexibility). > > By the way, here's what would happen behind the scenes: > > 1. user requests WebAdmin host page, WebadminDynamicHostingServlet loads and caches all plugin definitions [a] + plugin configurations [b] and constructs/embeds appropriate pluginDefinitions JavaScript object > 2. during WebAdmin startup, PluginManager registers the plugin (name/version/url/config), and creates/attaches the iframe to fetch plugin source page ansynchronously > 3. PluginSourcePageServlet handles plugin source page request, resolves the correct path [c] and just streams the file content back to client > >> 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. > > Sounds good, we can implement these later on :) > >> 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. > > Yes, but we can defend against these, restricting access only to plugin's "path" and its sub-directories. > >> 3. Is /usr/share/ovirt-engine the right place for the plugin config files? > > I suppose you mean plugin definition files [a], cannot tell for sure, but we can change this anytime :) > > > Chris, please let me know what you think, and again - many thanks for sending the patch! > > > Regards, > Vojtech > > > ----- Original Message ----- > > > From: "Chris Frantz" < Chris.Frantz at hp.com > > To: vszocs at redhat.com > Sent: Wednesday, August 22, 2012 7:56:45 PM > Subject: UI Plugins configuration > > Vojtech, > > I decided to work on making the plugin patch a bit more configurable, following some of the ideas expressed by Itamar and others in the meeting yesterday. The attached patch is a simple first-attempt. > > Plugin configurations are stored in /usr/share/ovirt-engine/ui-plugins/*.json. > > Example: > { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/foo.html", > "path": "/tmp", > "config": {"a":1, "b":2, "c": 3} > } > > The engine reads all of the *.json files in that directory to build the list of known plugins and gives that list to the webadmin. > > When webadmin loads a plugin, it requests the URL given in the plugin config file. The "plugin" URL is mapped to PluginSourcePage, which will translate the first part of the path ("test") into whatever path is stored in pluginConfig ("/tmp") in this case, and then serve the static file (e.g. "/tmp/foo.html"). > > I didn't use the renderPluginSourcePage() method in favor of just serving a static file, but I have no strong opinion on the matter. However, a plugin may want to store static resources at "path" and have the engine serve those resources. By just serving files through PluginSourcePage, we don't need any other servlets to provide those resources. > > There is still a bit of work to do: > > 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. > > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. > > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? > > Let me know what you think, > --Chris > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > -- Direcci?n Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3?D, 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid ? C.I.F. B82657941 - Red Hat S.L. From ryanh at us.ibm.com Tue Aug 28 18:56:55 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Tue, 28 Aug 2012 13:56:55 -0500 Subject: [Engine-devel] storagedomain status via API In-Reply-To: <503C5F75.1030303@redhat.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <20120827142857.GB17196@frylock.phx.austin.ibm.com> <503B87AD.7070700@redhat.com> <20120827193040.GF17196@frylock.phx.austin.ibm.com> <503C5F75.1030303@redhat.com> Message-ID: <20120828185655.GN17196@frylock.phx.austin.ibm.com> * Michael Pasternak [2012-08-28 01:04]: > On 08/27/2012 10:30 PM, Ryan Harper wrote: > > * Michael Pasternak [2012-08-27 09:45]: > >> > >> Ryan, > >> > >> each sdk method is well documented, if you'll take a look > >> on .delete().__doc__ it's: > >> > >> @param storagedomain.host.id|name: string > >> [@param async: boolean (true|false)] > >> [@param correlation_id: any string] > >> > >> and that's exactly what error says, > >> -> StorageDomain [host.id|name] required for remove > >> > >> do: > >> > >> sd = api.storagedomains.get(name="xxx") > >> sd.delete(storagedomain=params.StorageDomain(host=params.Host(id="yyy"))) > > > > This failed with bad request, but this worked: > > are you sure about host id? i.e is it was same id as in host fetched by > api.hosts.get('hostname here') ? [root at ichigo-dom223 ~]# python2.7 Python 2.7.3 (default, Jul 24 2012, 10:05:38) [GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from ovirtsdk.api import API >>> from ovirtsdk.xml import params >>> api = API(url='https://localhost:443/api', username='admin at internal', password='XXXXXXX') >>> sd = api.storagedomains.get(name='iso-cluster1') >>> sd.name 'iso-cluster1' >>> sd.status.state 'unattached' >>> host = api.hosts.get('ichigo-dom224.phx.austin.ibm.com') >>> host.name 'ichigo-dom224.phx.austin.ibm.com' >>> sd.delete(storagedomain=params.StorageDomain(host=params.Host(id=host.name))) Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 3180, in delete body=ParseHelper.toXml(storagedomain)) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 111, in delete return self.request('DELETE', url, body, headers) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 128, in request last=last) File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 154, in __doRequest raise RequestError, response ovirtsdk.infrastructure.errors.RequestError: status: 500 reason: Internal Server Error detail: HTTP Status 500 >>> sd.delete(storagedomain=params.StorageDomain(host=api.hosts.get(host.name))) '' [root at ichigo-dom223 ~]# cat /etc/issue Fedora release 17 (Beefy Miracle) Kernel \r on an \m (\l) [root at ichigo-dom223 ~]# rpm -qa | grep ovirt- ovirt-engine-userportal-3.1.0-2.fc17.noarch ovirt-engine-webadmin-portal-3.1.0-2.fc17.noarch ovirt-engine-sdk-3.1.0.4-1.fc17.noarch ovirt-engine-setup-3.1.0-2.fc17.noarch ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch ovirt-engine-restapi-3.1.0-2.fc17.noarch ovirt-engine-3.1.0-2.fc17.noarch ovirt-engine-backend-3.1.0-2.fc17.noarch ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch ovirt-engine-config-3.1.0-2.fc17.noarch ovirt-engine-genericapi-3.1.0-2.fc17.noarch ovirt-engine-tools-common-3.1.0-2.fc17.noarch ovirt-engine-dbscripts-3.1.0-2.fc17.noarch ovirt-log-collector-3.1.0-0.git10d719.fc17.noarch ovirt-engine-notification-service-3.1.0-2.fc17.noarch -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From mpastern at redhat.com Wed Aug 29 06:52:00 2012 From: mpastern at redhat.com (Michael Pasternak) Date: Wed, 29 Aug 2012 09:52:00 +0300 Subject: [Engine-devel] storagedomain status via API In-Reply-To: <20120828185655.GN17196@frylock.phx.austin.ibm.com> References: <20120827133649.GR13822@frylock.phx.austin.ibm.com> <20120827142857.GB17196@frylock.phx.austin.ibm.com> <503B87AD.7070700@redhat.com> <20120827193040.GF17196@frylock.phx.austin.ibm.com> <503C5F75.1030303@redhat.com> <20120828185655.GN17196@frylock.phx.austin.ibm.com> Message-ID: <503DBC10.8010303@redhat.com> On 08/28/2012 09:56 PM, Ryan Harper wrote: > * Michael Pasternak [2012-08-28 01:04]: >> On 08/27/2012 10:30 PM, Ryan Harper wrote: >>> * Michael Pasternak [2012-08-27 09:45]: >>>> >>>> Ryan, >>>> >>>> each sdk method is well documented, if you'll take a look >>>> on .delete().__doc__ it's: >>>> >>>> @param storagedomain.host.id|name: string >>>> [@param async: boolean (true|false)] >>>> [@param correlation_id: any string] >>>> >>>> and that's exactly what error says, >>>> -> StorageDomain [host.id|name] required for remove >>>> >>>> do: >>>> >>>> sd = api.storagedomains.get(name="xxx") >>>> sd.delete(storagedomain=params.StorageDomain(host=params.Host(id="yyy"))) >>> >>> This failed with bad request, but this worked: >> >> are you sure about host id? i.e is it was same id as in host fetched by >> api.hosts.get('hostname here') ? > > [root at ichigo-dom223 ~]# python2.7 > Python 2.7.3 (default, Jul 24 2012, 10:05:38) > [GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> from ovirtsdk.api import API >>>> from ovirtsdk.xml import params >>>> api = API(url='https://localhost:443/api', username='admin at internal', password='XXXXXXX') >>>> sd = api.storagedomains.get(name='iso-cluster1') >>>> sd.name > 'iso-cluster1' >>>> sd.status.state > 'unattached' >>>> host = api.hosts.get('ichigo-dom224.phx.austin.ibm.com') >>>> host.name > 'ichigo-dom224.phx.austin.ibm.com' >>>> sd.delete(storagedomain=params.StorageDomain(host=params.Host(id=host.name))) well, this is your problem: you trying to set host.name in to 'id' property ^, it should be => sd.delete(storagedomain=params.StorageDomain(host=params.Host(id=host.id))) the error on the server side, is 'name' str. to UUID conversion failure, though response error not informative enough - i'll address this. > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 3180, in delete > body=ParseHelper.toXml(storagedomain)) > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 111, in delete > return self.request('DELETE', url, body, headers) > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 128, in request > last=last) > File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 154, in __doRequest > raise RequestError, response > ovirtsdk.infrastructure.errors.RequestError: > status: 500 > reason: Internal Server Error > detail: HTTP Status 500 >>>> sd.delete(storagedomain=params.StorageDomain(host=api.hosts.get(host.name))) > '' > > [root at ichigo-dom223 ~]# cat /etc/issue > Fedora release 17 (Beefy Miracle) > Kernel \r on an \m (\l) > > [root at ichigo-dom223 ~]# rpm -qa | grep ovirt- > ovirt-engine-userportal-3.1.0-2.fc17.noarch > ovirt-engine-webadmin-portal-3.1.0-2.fc17.noarch > ovirt-engine-sdk-3.1.0.4-1.fc17.noarch > ovirt-engine-setup-3.1.0-2.fc17.noarch > ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch > ovirt-engine-restapi-3.1.0-2.fc17.noarch > ovirt-engine-3.1.0-2.fc17.noarch > ovirt-engine-backend-3.1.0-2.fc17.noarch > ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch > ovirt-engine-config-3.1.0-2.fc17.noarch > ovirt-engine-genericapi-3.1.0-2.fc17.noarch > ovirt-engine-tools-common-3.1.0-2.fc17.noarch > ovirt-engine-dbscripts-3.1.0-2.fc17.noarch > ovirt-log-collector-3.1.0-0.git10d719.fc17.noarch > ovirt-engine-notification-service-3.1.0-2.fc17.noarch > > -- Michael Pasternak RedHat, ENG-Virtualization R&D From sanjal at redhat.com Wed Aug 29 10:19:16 2012 From: sanjal at redhat.com (Shireesh Anjal) Date: Wed, 29 Aug 2012 15:49:16 +0530 Subject: [Engine-devel] JUnit tests in 'bll' project Message-ID: <503DECA4.60909@redhat.com> Hi, When I do a full build of oVirt engine (mvn clean install -Pgwtdev,gwt-admin,dep,enable-dao-tests), it seems that the JUnit tests in project 'bll' are not executed. Is this done intentionally? If yes, is there a simple way to execute them using mvn ? Thanks, Shireesh From eedri at redhat.com Wed Aug 29 10:27:24 2012 From: eedri at redhat.com (Eyal Edri) Date: Wed, 29 Aug 2012 06:27:24 -0400 (EDT) Subject: [Engine-devel] JUnit tests in 'bll' project In-Reply-To: <503DECA4.60909@redhat.com> Message-ID: <2072538000.12430065.1346236044788.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Shireesh Anjal" > To: engine-devel at ovirt.org > Sent: Wednesday, August 29, 2012 1:19:16 PM > Subject: [Engine-devel] JUnit tests in 'bll' project > > Hi, > > When I do a full build of oVirt engine (mvn clean install > -Pgwtdev,gwt-admin,dep,enable-dao-tests), it seems that the JUnit > tests > in project 'bll' are not executed. Is this done intentionally? If > yes, > is there a simple way to execute them using mvn ? there is a maven profile enable-bll-itests from the bll submodule dir. hanv't run it in a while so it might not work... > > Thanks, > Shireesh > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Wed Aug 29 10:28:47 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Wed, 29 Aug 2012 13:28:47 +0300 Subject: [Engine-devel] JUnit tests in 'bll' project In-Reply-To: <2072538000.12430065.1346236044788.JavaMail.root@redhat.com> References: <2072538000.12430065.1346236044788.JavaMail.root@redhat.com> Message-ID: <503DEEDF.7070308@redhat.com> itests are integration tests, do not confuse them with the bll unit tests which use Mockito (for example). Shireesh - what are you referring to exactly? Yair On 08/29/2012 01:27 PM, Eyal Edri wrote: > > > ----- Original Message ----- >> From: "Shireesh Anjal" >> To: engine-devel at ovirt.org >> Sent: Wednesday, August 29, 2012 1:19:16 PM >> Subject: [Engine-devel] JUnit tests in 'bll' project >> >> Hi, >> >> When I do a full build of oVirt engine (mvn clean install >> -Pgwtdev,gwt-admin,dep,enable-dao-tests), it seems that the JUnit >> tests >> in project 'bll' are not executed. Is this done intentionally? If >> yes, >> is there a simple way to execute them using mvn ? > > there is a maven profile enable-bll-itests from the bll submodule dir. > > hanv't run it in a while so it might not work... > >> >> Thanks, >> Shireesh >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From sanjal at redhat.com Wed Aug 29 11:14:07 2012 From: sanjal at redhat.com (Shireesh Anjal) Date: Wed, 29 Aug 2012 16:44:07 +0530 Subject: [Engine-devel] JUnit tests in 'bll' project In-Reply-To: <503DEEDF.7070308@redhat.com> References: <2072538000.12430065.1346236044788.JavaMail.root@redhat.com> <503DEEDF.7070308@redhat.com> Message-ID: <503DF97F.30805@redhat.com> I'm referring to bll unit tests. There are lots of them in backend/manager/modules/bll/src/test/java, only a minority of them are itests (org.ovirt.engine.core.itests.*) ~Shireesh On Wednesday 29 August 2012 03:58 PM, Yair Zaslavsky wrote: > itests are integration tests, do not confuse them with the bll unit > tests which use Mockito (for example). > Shireesh - what are you referring to exactly? > > Yair > > > On 08/29/2012 01:27 PM, Eyal Edri wrote: >> >> >> ----- Original Message ----- >>> From: "Shireesh Anjal" >>> To: engine-devel at ovirt.org >>> Sent: Wednesday, August 29, 2012 1:19:16 PM >>> Subject: [Engine-devel] JUnit tests in 'bll' project >>> >>> Hi, >>> >>> When I do a full build of oVirt engine (mvn clean install >>> -Pgwtdev,gwt-admin,dep,enable-dao-tests), it seems that the JUnit >>> tests >>> in project 'bll' are not executed. Is this done intentionally? If >>> yes, >>> is there a simple way to execute them using mvn ? >> >> there is a maven profile enable-bll-itests from the bll submodule dir. >> >> hanv't run it in a while so it might not work... >> >>> >>> Thanks, >>> Shireesh >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From yzaslavs at redhat.com Wed Aug 29 11:19:14 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Wed, 29 Aug 2012 14:19:14 +0300 Subject: [Engine-devel] JUnit tests in 'bll' project In-Reply-To: <503DF97F.30805@redhat.com> References: <2072538000.12430065.1346236044788.JavaMail.root@redhat.com> <503DEEDF.7070308@redhat.com> <503DF97F.30805@redhat.com> Message-ID: <503DFAB2.4050001@redhat.com> Shireesh, what happens when you run for example "mvn clean install" ? I just did that, and it DID run the bll tests for me Yair On 08/29/2012 02:14 PM, Shireesh Anjal wrote: > I'm referring to bll unit tests. There are lots of them in > backend/manager/modules/bll/src/test/java, only a minority of them are > itests (org.ovirt.engine.core.itests.*) > > ~Shireesh > > On Wednesday 29 August 2012 03:58 PM, Yair Zaslavsky wrote: >> itests are integration tests, do not confuse them with the bll unit >> tests which use Mockito (for example). >> Shireesh - what are you referring to exactly? >> >> Yair >> >> >> On 08/29/2012 01:27 PM, Eyal Edri wrote: >>> >>> >>> ----- Original Message ----- >>>> From: "Shireesh Anjal" >>>> To: engine-devel at ovirt.org >>>> Sent: Wednesday, August 29, 2012 1:19:16 PM >>>> Subject: [Engine-devel] JUnit tests in 'bll' project >>>> >>>> Hi, >>>> >>>> When I do a full build of oVirt engine (mvn clean install >>>> -Pgwtdev,gwt-admin,dep,enable-dao-tests), it seems that the JUnit >>>> tests >>>> in project 'bll' are not executed. Is this done intentionally? If >>>> yes, >>>> is there a simple way to execute them using mvn ? >>> >>> there is a maven profile enable-bll-itests from the bll submodule dir. >>> >>> hanv't run it in a while so it might not work... >>> >>>> >>>> Thanks, >>>> Shireesh >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > From sanjal at redhat.com Wed Aug 29 12:13:55 2012 From: sanjal at redhat.com (Shireesh Anjal) Date: Wed, 29 Aug 2012 17:43:55 +0530 Subject: [Engine-devel] JUnit tests in 'bll' project In-Reply-To: <503DFAB2.4050001@redhat.com> References: <2072538000.12430065.1346236044788.JavaMail.root@redhat.com> <503DEEDF.7070308@redhat.com> <503DF97F.30805@redhat.com> <503DFAB2.4050001@redhat.com> Message-ID: <503E0783.7060809@redhat.com> It was my mistake, the bll tests are running fine. Sorry about the confusion.. Thanks, Shireesh On Wednesday 29 August 2012 04:49 PM, Yair Zaslavsky wrote: > Shireesh, what happens when you run for example "mvn clean install" ? > I just did that, and it DID run the bll tests for me > > Yair > > > On 08/29/2012 02:14 PM, Shireesh Anjal wrote: >> I'm referring to bll unit tests. There are lots of them in >> backend/manager/modules/bll/src/test/java, only a minority of them are >> itests (org.ovirt.engine.core.itests.*) >> >> ~Shireesh >> >> On Wednesday 29 August 2012 03:58 PM, Yair Zaslavsky wrote: >>> itests are integration tests, do not confuse them with the bll unit >>> tests which use Mockito (for example). >>> Shireesh - what are you referring to exactly? >>> >>> Yair >>> >>> >>> On 08/29/2012 01:27 PM, Eyal Edri wrote: >>>> >>>> >>>> ----- Original Message ----- >>>>> From: "Shireesh Anjal" >>>>> To: engine-devel at ovirt.org >>>>> Sent: Wednesday, August 29, 2012 1:19:16 PM >>>>> Subject: [Engine-devel] JUnit tests in 'bll' project >>>>> >>>>> Hi, >>>>> >>>>> When I do a full build of oVirt engine (mvn clean install >>>>> -Pgwtdev,gwt-admin,dep,enable-dao-tests), it seems that the JUnit >>>>> tests >>>>> in project 'bll' are not executed. Is this done intentionally? If >>>>> yes, >>>>> is there a simple way to execute them using mvn ? >>>> >>>> there is a maven profile enable-bll-itests from the bll submodule dir. >>>> >>>> hanv't run it in a while so it might not work... >>>> >>>>> >>>>> Thanks, >>>>> Shireesh >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From vszocs at redhat.com Wed Aug 29 12:26:13 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Wed, 29 Aug 2012 08:26:13 -0400 (EDT) Subject: [Engine-devel] UI Plugins configuration In-Reply-To: <503D08EC.9050509@redhat.com> Message-ID: <893370008.14923719.1346243173704.JavaMail.root@redhat.com> Hi Juan, thanks for your comments! Indeed, plugin descriptors have the attributes as you described for files under /usr/share/ovirt-engine (oVirt DataDir): - they are NOT intended to be modified by end users - they are NOT intended to be preserved after package upgrade - packaging system takes care of creating/updating/removing them (Each plugin descriptor provides mandatory meta-data for a specific version of the given plugin.) Indeed as well, plugin configuration files have also the attributes as you described for files under /etc/ovirt-engine (oVirt ConfigDir): - they are intended to be modified by end users - they are intended to be preserved after package upgrade - their presence is optional, therefore they are NOT part of the relevant package (Plugin configuration files are used to override default plugin configuration as necessary.) Thanks for bringing up these points, we should indeed mention these in oVirt plugin wiki page. Regards, Vojtech ----- Original Message ----- From: "Juan Hernandez" To: "Vojtech Szocs" Cc: "Chris Frantz" , "engine-devel" Sent: Tuesday, August 28, 2012 8:07:40 PM Subject: Re: [Engine-devel] UI Plugins configuration A couple of comments from the packaging point of view. On 08/27/2012 04:48 PM, Vojtech Szocs wrote: > Hi Chris, > >> Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. > > Yes :) but maybe we could also formalize some terms, for example: > > Plugin descriptor is the JSON file that contains important plugin meta-data (including the plugin source page URL), e.g. /usr/share/ovirt-engine/ui-plugins/test.json Note that the fact that this file goes in the /usr directory means that it is *not* intended to be modified. It means also that the packaging system (well, at least RPM) will *not* preserve it when updating the package that contains it, so any change made by the end user will be lost. So it is important to make clear for plugin developers and end users that these files are *not* to be modified after installation. > Plugin definition is the JavaScript object representing plugin descriptor meta-data suitable for use on client (GWT WebAdmin). Plugin definition is embedded into WebAdmin host page within pluginDefinitions object, and read by PluginManager during WebAdmin startup. > > Plugin configuration is the JSON file that contains optional plugin configuration, e.g. /etc/ovirt-engine/ui-plugins/test-config.json For files in the /etc directory the converse happens: they are intended to be modified by the end user, and usually (if marked properly) they are preserved when a new version of the package is installed. > I think we can combine two things here: > 1) allow plugin authors to define standard (fallback) configuration directly inside plugin descriptor > 2) allow plugin users to override standard configuration by modifying dedicated plugin configuration file > > Finally, plugin source page is the HTML page used to invoke actual plugin code (this page is referenced by plugin descriptor's "url" attribute). Plugin source page can also load external resources required by the plugin, e.g. 3rd party JavaScript libraries, CSS, images, etc. > >> I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). > > You're right, for such additional plugin resources, even more configuration/parsing/logic would be required. Even though plugin authors need to write the plugin source page themselves, they have full control over it, which is a good thing in general. > >> I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. > > Yeah, let's make the concept of the plugin configuration file optional for now (standard plugin configuration can be part of plugin descriptor). > >> In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. > > I think using Jackson's JSON representation in Java (JsonNode) is perfectly suitable in this situation. No need to have separate Java bean for that :) > >> We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. > > Agreed. > >> We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional > > Sounds good, but I'd skip some attributes for now (enabled, apiVersion, author, license) for the sake of simplicity. > > As you wrote, when loading plugin descriptor, we should enforce mandatory attributes (name, version, url) . > > As for plugin configuration, there could be two different attributes: > - "config" for standard (fallback) plugin configuration (JSON object) > - "configFile" for external plugin configuration file (path to file, relative to /etc/ovirt-engine/ui-plugins/ ) , that overrides the standard configuration > > Note that when loading plugin descriptor, the loader should also "merge" the configuration together (custom config on top of standard config). > >> I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. > > Sounds good! I was planning to incorporate your original patch in next PoC revision, but of course, you can work on the loader some more and send another patch :) > > For the directory climbing issue, see /backend/manager/modules/root/src/main/java/org/ovirt/engine/core/FileServlet.java (there's a method called isSane for dealing with such issue). > >> Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? > > When the plugin iframe references a resource on different origin (protocol, domain, port) than WebAdmin main page origin, JavaScript code running inside that iframe will not be able to access parent (top-level) pluginApi object. You're right, the statement "parent.pluginApi" will not work, because of Same-Origin Policy enforced by the browser. > > CORS is just one alternative, see http://stackoverflow.com/questions/3076414/ways-to-circumvent-the-same-origin-policy for more. However, CORS needs to be supported by the browser (a special HTTP response header is used to tell that the iframe is allowed to access resources from another - WebAdmin main page - origin). We need to investigate this a bit more I guess. > > Regards, > Vojtech > > > ----- Original Message ----- > > From: "Chris Frantz" > To: "Vojtech Szocs" > Cc: "engine-devel" > Sent: Thursday, August 23, 2012 5:12:02 PM > Subject: RE: UI Plugins configuration > > > > Vojtech, > > Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. > > I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). > > I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. > > In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. > > We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. > > We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional: > > { > # Mandatory fields: name, enabled, version, url, apiversion, author, license > # Name of the plugin > "name": "test", > > > # Whether or not plugin is enabed > "enabled": true, > > > # version of the plugin > "version": "1.0", > > > # How to load the plugin > "url": "/webadmin/webadmin/plugin/test/start.html", > > > # Which version of engine plugin is meant to work with > "apiversion": "3.1.0", > > > # Who wrote the plugin and how is it licensed? > "author": "SuperBig Corporation", > "license": "Proprietary", > > > # Optional fields path, config > # Where to locate plugin (if loaded by webadmin/plugin) > "path": "/tmp", > > # Plugin configuration information (if any) > "config": "test-config.json", > } > > I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. > > Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? > > Thanks, > --Chris > > > > > > From: Vojtech Szocs [mailto:vszocs at redhat.com] > Sent: Thursday, August 23, 2012 7:14 AM > To: Frantz, Chris > Cc: engine-devel > Subject: Re: UI Plugins configuration > > > Hi Chris, > > thanks for taking the time to make this patch, these are some excellent ideas! (CC'ing engine-devel so that we can discuss this with other guys as well) > > First of all, I really like the way you designed plugin source page URLs (going through PluginSourcePageServlet ), e.g. "/webadmin/webadmin/plugin//.html", plus the concept of "path" JSON attribute. > > WebadminDynamicHostingServlet loads and caches all plugin definitions ( *.json files), and directly embeds them into WebAdmin host page as pluginDefinitions JavaScript object. I'm assuming that pluginDefinitions object will now look like this: > > var pluginDefinitions = { > "test": { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/foo.html", > "path": "/tmp", > "config": {"a":1, "b":2, "c":3} > } > } > > Originally, the pluginDefinitions object looked like this: > > var pluginDefinitions = { > "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple pluginName -> pluginSourcePageUrl mappings > } > > This is because PluginManager (WebAdmin) only needs pluginName ("name") and pluginSourcePageUrl ("url") during startup, when creating plugin iframe. But this can be changed :) > > Plugin "version" makes sense, plus the plugin configuration object ("config") can be useful directly on the client. Let me explain: > > Originally, plugin configuration was supposed to be passed to actual plugin code (through immediately-invoked-function-expression, or IIFE), just like this: > > (function (pluginApi, pluginConfig) { // JavaScript IIFE > // ... actual plugin code ... > })( > parent.pluginApi, /* reference to global pluginApi object */ > {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript object */ > ); > > The whole purpose of PluginSourcePageServlet was to "wrap" actual plugin code into HTML, so that users don't need to write HTML pages for their plugins manually. PluginSourcePageServlet would handle any plugin dependencies (placed into HTML head), with actual plugin code being wrapped into IIFE, as shown above. Plugin configuration was meant to be stored in a separate file, e.g. -config.json , so that users could change the default plugin configuration to suit their needs. > > Inspired by your patch, rather than reading/embedding plugin configuration when serving plugin HTML page ( PluginSourcePageServlet ), it's even better to have the plugin configuration embedded directly into WebAdmin host page, along with introducing new pluginApi function to retrieve the plugin configuration object. > > Based on this, I suggest following modifications to the original concept: > > - modify original pluginDefinitions structure, from pluginName -> pluginSourcePageUrl , to pluginName -> pluginDefObject > - pluginDefObject is basically a subset of physical plugin definition ( test.json , see below), suitable for use on the client > - add following attributes to pluginDefObject : version , url , config > * note #1: name is not needed, since it's already the key of pluginName -> pluginDefObject mapping > * note #2: path is not needed on the client (more on this below) > - introduce pluginApi.config(pluginName) function for plugins to retrieve their configuration object, and remove pluginConfig parameter from main IIFE (as shown above) > > [a] Physical plugin definition file (JSON) might be located at oVirt "DataDir", e.g. /usr/share/ovirt-engine/ui-plugins/test.json , for example: > > { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/start.html", > "path": "/tmp", > "config": "test-config.json" > } > > [b] Plugin configuration file (JSON) might be located at oVirt "ConfigDir", e.g. /etc/ovirt-engine/ui-plugins/test-config.json , for example: > > { > "a":1, "b":2, "c":3 > } > > [c] Finally, plugin static resources (plugin source page, actual plugin code, plugin dependencies, CSS/images, etc.) would be located at /tmp (as shown in [a]), for example: > > /tmp/start.html -> plugin source page, used to load actual plugin code > /tmp/test.js -> actual plugin code > /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency > > For example: > "/webadmin/webadmin/plugin/test/start.html" will be mapped to /tmp/start.html > "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to /tmp/deps/jquery-min.js > > This approach has some pros and cons: > (+) plugin static resources can be served through PluginSourcePageServlet (pretty much like oVirt documentation resources, served through oVirt Engine root war's FileServlet ) > (+) plugin author has complete control over plugin source page > (-) plugin author actually needs to write plugin source page > > Overall, I think this approach is better than the previous one (where PluginSourcePageServlet took care of rendering plugin source page, but sacrificed some flexibility). > > By the way, here's what would happen behind the scenes: > > 1. user requests WebAdmin host page, WebadminDynamicHostingServlet loads and caches all plugin definitions [a] + plugin configurations [b] and constructs/embeds appropriate pluginDefinitions JavaScript object > 2. during WebAdmin startup, PluginManager registers the plugin (name/version/url/config), and creates/attaches the iframe to fetch plugin source page ansynchronously > 3. PluginSourcePageServlet handles plugin source page request, resolves the correct path [c] and just streams the file content back to client > >> 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. > > Sounds good, we can implement these later on :) > >> 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. > > Yes, but we can defend against these, restricting access only to plugin's "path" and its sub-directories. > >> 3. Is /usr/share/ovirt-engine the right place for the plugin config files? > > I suppose you mean plugin definition files [a], cannot tell for sure, but we can change this anytime :) > > > Chris, please let me know what you think, and again - many thanks for sending the patch! > > > Regards, > Vojtech > > > ----- Original Message ----- > > > From: "Chris Frantz" < Chris.Frantz at hp.com > > To: vszocs at redhat.com > Sent: Wednesday, August 22, 2012 7:56:45 PM > Subject: UI Plugins configuration > > Vojtech, > > I decided to work on making the plugin patch a bit more configurable, following some of the ideas expressed by Itamar and others in the meeting yesterday. The attached patch is a simple first-attempt. > > Plugin configurations are stored in /usr/share/ovirt-engine/ui-plugins/*.json. > > Example: > { > "name": "test", > "version": "1.0", > "url": "/webadmin/webadmin/plugin/test/foo.html", > "path": "/tmp", > "config": {"a":1, "b":2, "c": 3} > } > > The engine reads all of the *.json files in that directory to build the list of known plugins and gives that list to the webadmin. > > When webadmin loads a plugin, it requests the URL given in the plugin config file. The "plugin" URL is mapped to PluginSourcePage, which will translate the first part of the path ("test") into whatever path is stored in pluginConfig ("/tmp") in this case, and then serve the static file (e.g. "/tmp/foo.html"). > > I didn't use the renderPluginSourcePage() method in favor of just serving a static file, but I have no strong opinion on the matter. However, a plugin may want to store static resources at "path" and have the engine serve those resources. By just serving files through PluginSourcePage, we don't need any other servlets to provide those resources. > > There is still a bit of work to do: > > 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. > > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. > > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? > > Let me know what you think, > --Chris > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > -- Direcci?n Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3?D, 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid ? C.I.F. B82657941 - Red Hat S.L. From vszocs at redhat.com Thu Aug 30 15:39:39 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Thu, 30 Aug 2012 11:39:39 -0400 (EDT) Subject: [Engine-devel] Update on UI Plugins: PoC patch revision 4 In-Reply-To: <532646312.15477541.1346339532822.JavaMail.root@redhat.com> Message-ID: <1240508860.15487250.1346341179026.JavaMail.root@redhat.com> Hello everyone, as a follow-up to my last email on improving plugin API, here comes the latest revision of UI Plugins proof-of-concept patch (please find it attached). This patch is focused on improving JavaScript plugin API, along with important changes and improvements made to plugin infrastructure ( PluginManager ). Let's walk through the changes step by step. Improved plugin API, taking some inspiration from jQuery Following is a sample plugin code that uses new plugin API: var myPlugin = pluginApi('myPlugin'); // Obtain plugin API instance for 'myPlugin' var myPluginConfig = myPlugin.configObject(); // Obtain plugin-specific configuration // Register event handler functions to be invoked by WebAdmin // Note: all functions are optional, the plugin only defines functions for events it wants to handle myPlugin.register({ UiInit: function() { var testUrl = 'http://www.example.com/' + myPluginConfig.foo; // Assume plugin configuration has 'foo' attribute myPlugin.ui.addMainTab('Custom Tab', 'custom-tab', testUrl); // Invoke some operation using plugin API } }); myPlugin.ready(); // Event handler functions are registered, we are now ready to get initialized (UiInit) UI plugin life-cycle, enforced by plugin infrastructure The PluginState enumeration lists possible states of a plugin during its runtime: * DEFINED : This is the initial state for all plugins. Plugin meta-data has been read by PluginManager and the corresponding iframe element has been created for the plugin. Note that at this point, the iframe element is not attached to DOM yet. * LOADING : The iframe element for the plugin has been attached to DOM, which causes plugin host page (previously known as plugin source page) to be fetched asynchronously in the background. We are now waiting for plugin to report in as ready. In practice, due to JavaScript runtime being single-threaded, WebAdmin startup logic will continue to execute until the JavaScript runtime is "idle" (browser event loop returns), and at this point JavaScript plugin code gets invoked through the plugin host page. * READY : The plugin has indicated that it is ready for use. We assume the plugin has already registered its event handler object (object containing various event handler functions to be called by WebAdmin) at this point. We can now proceed with plugin initialization. * INITIALIZED : The plugin has been initialized by calling UiInit function on its event handler object. We can now call other event handler functions, the plugin is now initialized and in use. Note on plugin initialization: the UiInit function will be called just once during the lifetime of the plugin, after the plugin reports in as ready AND WebAdmin enters the state that allows plugins to be invoked (entering main section for logged-in users), and before other event handler functions are invoked by the plugin infrastructure. Plugin meta-data is now passed to client using different format Previously, plugin meta-data was embedded into WebAdmin host page as a simple JavaScript object, like so: var pluginDefinitions = { myPlugin: "", anotherPlugin: "" } Now, plugin meta-data is embedded into WebAdmin host page as a JavaScript array, like so: var pluginDefinitions = [ { name: "myPlugin", url: "", config: { "foo": 1, "bar": "whatever" } }, { name: "anotherPlugin", url: "" } ]; As you can see, pluginDefinitions is now an array of JavaScript objects, with each object representing plugin meta-data. The "name" and "url" attributes are mandatory (we need to check them when loading plugin descriptors). "config" is the plugin configuration (JSON) object, obtained by merging default plugin configuration (defined in plugin descriptor) with custom plugin configuration (defined in external plugin configuration file). Note that the "config" attribute is optional. In terms of Java classes, pluginDefinitions is mapped to PluginDefinitions overlay type, and each meta-data object within the array is mapped to PluginMetaData overlay type. Note on using assert statements in client code: you might notice that I'm using a lot of assert statements in Plugin class. This is to ensure consistency and guard against corrupted state during development. In GWT, assert statements work in a different way than in standard Java VM. When debugging GWT application using Development Mode, assert statements are checked and throw assertion errors during runtime (they are displayed in Development Mode console). However, when compiling GWT application to JavaScript (Production Mode), assert statements are removed by GWT compiler, so they don't affect the application running in Production Mode. Cheers, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: WIP-UI-Plugins-PoC-revision-4.patch Type: text/x-patch Size: 49169 bytes Desc: not available URL: From sesubram at redhat.com Thu Aug 30 11:41:09 2012 From: sesubram at redhat.com (Selvasundaram) Date: Thu, 30 Aug 2012 17:11:09 +0530 Subject: [Engine-devel] Gluster IPTable configuration Message-ID: <503F5155.6020603@redhat.com> Hi, I want to add gluster specific IPTable configuration in addition to the ovirt IPTable configuration (if it is gluster node). There are two approaches, 1. Having one more gluster specific IP table config in db and merge with ovirt IPTable config (merging NOT appending) [I have the patch engine: Gluster specific firewall configurations #7244] 2. Having two different IP Table config (ovirt and ovirt+gluster) and use either one. Please provide your suggestions or improvements on this. -- Regards Selvasundaram -------------- next part -------------- An HTML attachment was scrubbed... URL: From sesubram at redhat.com Thu Aug 30 13:30:16 2012 From: sesubram at redhat.com (Selvasundaram) Date: Thu, 30 Aug 2012 19:00:16 +0530 Subject: [Engine-devel] Gluster IPTable configuration In-Reply-To: <503F5155.6020603@redhat.com> References: <503F5155.6020603@redhat.com> Message-ID: <503F6AE8.7010303@redhat.com> Hi, I want to add gluster specific IPTable configuration in addition to the ovirt IPTable configuration (if it is gluster node). There are two approaches, 1. Having one more gluster specific IP table config in db and merge with ovirt IPTable config (merging NOT appending) [I have the patch engine: Gluster specific firewall configurations #7244] 2. Having two different IP Table config (ovirt and ovirt+gluster) and use either one. Please provide your suggestions or improvements on this. -- Regards Selvasundaram -------------- next part -------------- An HTML attachment was scrubbed... URL: From lpeer at redhat.com Thu Aug 30 12:22:29 2012 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 30 Aug 2012 15:22:29 +0300 Subject: [Engine-devel] network subnet Message-ID: <503F5B05.7070402@redhat.com> Hi All, Today when a user wants to define a network subnet mask, he does it when he attaches the network to a host NIC. I was wondering if there is a reason not to define the network subnet on the logical network entity (Data center level). Thanks, Livnat From vszocs at redhat.com Thu Aug 30 15:12:12 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Thu, 30 Aug 2012 11:12:12 -0400 (EDT) Subject: [Engine-devel] Update on UI Plugins: PoC patch revision 4 In-Reply-To: <288818200.15450142.1346334714989.JavaMail.root@redhat.com> Message-ID: <532646312.15477541.1346339532822.JavaMail.root@redhat.com> Hello everyone, as a follow-up to my last email on improving plugin API, here comes the latest revision of UI Plugins proof-of-concept patch (please find it attached). This patch is focused on improving JavaScript plugin API, along with important changes and improvements made to plugin infrastructure ( PluginManager ). Let's walk through the changes step by step. Improved plugin API, taking some inspiration from jQuery Following is a sample plugin code that uses new plugin API: var myPlugin = pluginApi('myPlugin'); // Obtain plugin API instance for 'myPlugin' var myPluginConfig = myPlugin.configObject(); // Obtain plugin-specific configuration // Register event handler functions to be invoked by WebAdmin // Note: all functions are optional, the plugin only defines functions for events it wants to handle myPlugin.register({ UiInit: function() { var testUrl = 'http://www.example.com/' + myPluginConfig.foo; // Assume plugin configuration has 'foo' attribute myPlugin.ui.addMainTab('Custom Tab', 'custom-tab', testUrl); // Invoke some operation using plugin API } }); myPlugin.ready(); // Event handler functions are registered, we are now ready to get initialized (UiInit) UI plugin life-cycle, enforced by plugin infrastructure The PluginState enumeration lists possible states of a plugin during its runtime: * DEFINED : This is the initial state for all plugins. Plugin meta-data has been read by PluginManager and the corresponding iframe element has been created for the plugin. Note that at this point, the iframe element is not attached to DOM yet. * LOADING : The iframe element for the plugin has been attached to DOM, which causes plugin host page (previously known as plugin source page) to be fetched asynchronously in the background. We are now waiting for plugin to report in as ready. In practice, due to JavaScript runtime being single-threaded, WebAdmin startup logic will continue to execute until the JavaScript runtime is "idle" (browser event loop returns), and at this point JavaScript plugin code gets invoked through the plugin host page. * READY : The plugin has indicated that it is ready for use. We assume the plugin has already registered its event handler object (object containing various event handler functions to be called by WebAdmin) at this point. We can now proceed with plugin initialization. * INITIALIZED : The plugin has been initialized by calling UiInit function on its event handler object. We can now call other event handler functions, the plugin is now initialized and in use. Note on plugin initialization: the UiInit function will be called just once during the lifetime of the plugin, after the plugin reports in as ready AND WebAdmin enters the state that allows plugins to be invoked (entering main section for logged-in users), and before other event handler functions are invoked by the plugin infrastructure. Plugin meta-data is now passed to client using different format Previously, plugin meta-data was embedded into WebAdmin host page as a simple JavaScript object, like so: var pluginDefinitions = { myPlugin: "", anotherPlugin: "" } Now, plugin meta-data is embedded into WebAdmin host page as a JavaScript array, like so: var pluginDefinitions = [ { name: "myPlugin", url: "", config: { "foo": 1, "bar": "whatever" } }, { name: "anotherPlugin", url: "" } ]; As you can see, pluginDefinitions is now an array of JavaScript objects, with each object representing plugin meta-data . The "name" and "url" attributes are mandatory (we need to check them when loading plugin descriptors). "config" is the plugin configuration (JSON) object, obtained by merging default plugin configuration (defined in plugin descriptor) with custom plugin configuration (defined in external plugin configuration file) . Note that the "config" attribute is optional. In terms of Java classes, pluginDefinitions is mapped to PluginDefinitions overlay type, and each meta-data object within the array is mapped to PluginMetaData overlay type. Note on using assert statements in client code : you might notice that I'm using a lot of assert statements in Plugin class. This is to ensure consistency and guard against corrupted state during development. In GWT, assert statements work in a different way than in standard Java VM. When debugging GWT application using Development Mode, assert statements are checked and throw assertion errors during runtime (they are displayed in Development Mode console). However, when compiling GWT application to JavaScript (Production Mode), assert statements are removed by GWT compiler, so they don't affect the application running in Production Mode. Let me know what you think guys. Cheers, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: WIP-UI-Plugins-PoC-revision-4.patch Type: text/x-patch Size: 49169 bytes Desc: not available URL: From Chris.Frantz at hp.com Thu Aug 30 18:06:24 2012 From: Chris.Frantz at hp.com (Frantz, Chris) Date: Thu, 30 Aug 2012 18:06:24 +0000 Subject: [Engine-devel] UI Plugins configuration In-Reply-To: <127266215.14115417.1346078924449.JavaMail.root@redhat.com> References: <127266215.14115417.1346078924449.JavaMail.root@redhat.com> Message-ID: Vojtech, I agree with your formalized names: Plugin Descriptor is the JSON file containing plugin meta-data. The plugin descriptor may also contain the default configuration data. It is located in $DATADIR/ui-plugins. Plugin Configuration is the JSON file containing optional plugin configuration info. It is located in $CONFIGDIR/ui-plugins (unless the Plugin Descriptor contains an absolute path). Plugin Definition is the JavaScript object used by WebAdmin. In the current implementation, the Plugin Definition contains both the Plugin Descriptor and the Plugin Configuraion. Plugin Source Page is the HTML page used to invoke the plugin code and shall be referenced by the plugin descriptor?s ?url? attribute. I?ve implemented the config merging you?ve suggested: the structure in configFile gets merged with the structure of ?config?, with the data in configFile winning in the case of duplicate key names. BTW, the patch is against ovirt-engine + 0001-WIP-UI-Plugins-PoC-revision-2. Let me know what you think, --Chris From: Vojtech Szocs [mailto:vszocs at redhat.com] Sent: Monday, August 27, 2012 9:49 AM To: Frantz, Chris Cc: engine-devel Subject: Re: UI Plugins configuration Hi Chris, > Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. Yes :) but maybe we could also formalize some terms, for example: Plugin descriptor is the JSON file that contains important plugin meta-data (including the plugin source page URL), e.g. /usr/share/ovirt-engine/ui-plugins/test.json Plugin definition is the JavaScript object representing plugin descriptor meta-data suitable for use on client (GWT WebAdmin). Plugin definition is embedded into WebAdmin host page within pluginDefinitions object, and read by PluginManager during WebAdmin startup. Plugin configuration is the JSON file that contains optional plugin configuration, e.g. /etc/ovirt-engine/ui-plugins/test-config.json I think we can combine two things here: 1) allow plugin authors to define standard (fallback) configuration directly inside plugin descriptor 2) allow plugin users to override standard configuration by modifying dedicated plugin configuration file Finally, plugin source page is the HTML page used to invoke actual plugin code (this page is referenced by plugin descriptor's "url" attribute). Plugin source page can also load external resources required by the plugin, e.g. 3rd party JavaScript libraries, CSS, images, etc. > I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). You're right, for such additional plugin resources, even more configuration/parsing/logic would be required. Even though plugin authors need to write the plugin source page themselves, they have full control over it, which is a good thing in general. > I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. Yeah, let's make the concept of the plugin configuration file optional for now (standard plugin configuration can be part of plugin descriptor). > In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. I think using Jackson's JSON representation in Java (JsonNode) is perfectly suitable in this situation. No need to have separate Java bean for that :) > We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. Agreed. > We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional Sounds good, but I'd skip some attributes for now (enabled, apiVersion, author, license) for the sake of simplicity. As you wrote, when loading plugin descriptor, we should enforce mandatory attributes (name, version, url). As for plugin configuration, there could be two different attributes: - "config" for standard (fallback) plugin configuration (JSON object) - "configFile" for external plugin configuration file (path to file, relative to /etc/ovirt-engine/ui-plugins/), that overrides the standard configuration Note that when loading plugin descriptor, the loader should also "merge" the configuration together (custom config on top of standard config). > I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. Sounds good! I was planning to incorporate your original patch in next PoC revision, but of course, you can work on the loader some more and send another patch :) For the directory climbing issue, see /backend/manager/modules/root/src/main/java/org/ovirt/engine/core/FileServlet.java (there's a method called isSane for dealing with such issue). > Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? When the plugin iframe references a resource on different origin (protocol, domain, port) than WebAdmin main page origin, JavaScript code running inside that iframe will not be able to access parent (top-level) pluginApi object. You're right, the statement "parent.pluginApi" will not work, because of Same-Origin Policy enforced by the browser. CORS is just one alternative, see http://stackoverflow.com/questions/3076414/ways-to-circumvent-the-same-origin-policy for more. However, CORS needs to be supported by the browser (a special HTTP response header is used to tell that the iframe is allowed to access resources from another - WebAdmin main page - origin). We need to investigate this a bit more I guess. Regards, Vojtech ________________________________ From: "Chris Frantz" > To: "Vojtech Szocs" > Cc: "engine-devel" > Sent: Thursday, August 23, 2012 5:12:02 PM Subject: RE: UI Plugins configuration Vojtech, Your assumption about the structure of the pluginDefinitions object is correct. It?s no longer a String->String mapping , but a String to Object mapping. I liked the original IIFE approach, except that it seemed that having additional static resources (jquery, images, html templates, etc) was going to be more cumbersome. I don?t think having the plugin author write a basic start.html is that big of a burden :). I agree that the plugin configuration was always going to be a resource (probably a local file) that the end user could customize. I?m not sure it I really needs to be separate from the plugin definition file (/usr/share/ovirt-engine/ui-plugins/test.json). I suppose it depends on how complex the configuration is going to be and on some of the implementation details surrounding the plugin definition file. In my patch, I simply used Jackson to parse the file into a tree of JsonNodes. Should the plugin definition be a java object of some sort? (please please please don?t make me learn about java beans?). I stuck with the JsonNodes because Jackson makes them easy to work with and they?re really easy to re-serialize back to json to give to the webadmin. We should probably turn on JsonParser.Feature.ALLOW_COMMENTS. The definition and config files will difficult for end-users (or even developers) to understand without comments. We need to formalize the structure of the plugin definition and decide which fields are mandatory and which are optional: { # Mandatory fields: name, enabled, version, url, apiversion, author, license # Name of the plugin "name": "test", # Whether or not plugin is enabed "enabled": true, # version of the plugin "version": "1.0", # How to load the plugin "url": "/webadmin/webadmin/plugin/test/start.html", # Which version of engine plugin is meant to work with "apiversion": "3.1.0", # Who wrote the plugin and how is it licensed? "author": "SuperBig Corporation", "license": "Proprietary", # Optional fields path, config # Where to locate plugin (if loaded by webadmin/plugin) "path": "/tmp", # Plugin configuration information (if any) "config": "test-config.json", } I can work on the plugin Definition loader some more and make it enforce mandatory/optional fields. I?ll also investigate the directory climbing issue I mentioned in my previous mail. Also, I?m curious how things are going to work when the ?url? points to a foreign resource as the plugin start page. I don?t think the plugin?s iframe is going to be able to access parent.pluginApi. Perhaps there is some aspect of CORS that I don?t understand? Thanks, --Chris From: Vojtech Szocs [mailto:vszocs at redhat.com] Sent: Thursday, August 23, 2012 7:14 AM To: Frantz, Chris Cc: engine-devel Subject: Re: UI Plugins configuration Hi Chris, thanks for taking the time to make this patch, these are some excellent ideas! (CC'ing engine-devel so that we can discuss this with other guys as well) First of all, I really like the way you designed plugin source page URLs (going through PluginSourcePageServlet), e.g. "/webadmin/webadmin/plugin//.html", plus the concept of "path" JSON attribute. WebadminDynamicHostingServlet loads and caches all plugin definitions (*.json files), and directly embeds them into WebAdmin host page as pluginDefinitions JavaScript object. I'm assuming that pluginDefinitions object will now look like this: var pluginDefinitions = { "test": { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c":3} } } Originally, the pluginDefinitions object looked like this: var pluginDefinitions = { "test": "/webadmin/webadmin/plugin/test/foo.html" // Simple pluginName -> pluginSourcePageUrl mappings } This is because PluginManager (WebAdmin) only needs pluginName ("name") and pluginSourcePageUrl ("url") during startup, when creating plugin iframe. But this can be changed :) Plugin "version" makes sense, plus the plugin configuration object ("config") can be useful directly on the client. Let me explain: Originally, plugin configuration was supposed to be passed to actual plugin code (through immediately-invoked-function-expression, or IIFE), just like this: (function (pluginApi, pluginConfig) { // JavaScript IIFE // ... actual plugin code ... })( parent.pluginApi, /* reference to global pluginApi object */ {"a":1, "b":2, "c":3} /* embedded plugin configuration as JavaScript object */ ); The whole purpose of PluginSourcePageServlet was to "wrap" actual plugin code into HTML, so that users don't need to write HTML pages for their plugins manually. PluginSourcePageServlet would handle any plugin dependencies (placed into HTML head), with actual plugin code being wrapped into IIFE, as shown above. Plugin configuration was meant to be stored in a separate file, e.g. -config.json, so that users could change the default plugin configuration to suit their needs. Inspired by your patch, rather than reading/embedding plugin configuration when serving plugin HTML page (PluginSourcePageServlet), it's even better to have the plugin configuration embedded directly into WebAdmin host page, along with introducing new pluginApi function to retrieve the plugin configuration object. Based on this, I suggest following modifications to the original concept: - modify original pluginDefinitions structure, from pluginName -> pluginSourcePageUrl, to pluginName -> pluginDefObject - pluginDefObject is basically a subset of physical plugin definition (test.json, see below), suitable for use on the client - add following attributes to pluginDefObject: version, url, config * note #1: name is not needed, since it's already the key of pluginName -> pluginDefObject mapping * note #2: path is not needed on the client (more on this below) - introduce pluginApi.config(pluginName) function for plugins to retrieve their configuration object, and remove pluginConfig parameter from main IIFE (as shown above) [a] Physical plugin definition file (JSON) might be located at oVirt "DataDir", e.g. /usr/share/ovirt-engine/ui-plugins/test.json, for example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/start.html", "path": "/tmp", "config": "test-config.json" } [b] Plugin configuration file (JSON) might be located at oVirt "ConfigDir", e.g. /etc/ovirt-engine/ui-plugins/test-config.json, for example: { "a":1, "b":2, "c":3 } [c] Finally, plugin static resources (plugin source page, actual plugin code, plugin dependencies, CSS/images, etc.) would be located at /tmp (as shown in [a]), for example: /tmp/start.html -> plugin source page, used to load actual plugin code /tmp/test.js -> actual plugin code /tmp/deps/jquery-min.js -> simulate 3rd party plugin dependency For example: "/webadmin/webadmin/plugin/test/start.html" will be mapped to /tmp/start.html "/webadmin/webadmin/plugin/test/deps/jquery-min.js" will be mapped to /tmp/deps/jquery-min.js This approach has some pros and cons: (+) plugin static resources can be served through PluginSourcePageServlet (pretty much like oVirt documentation resources, served through oVirt Engine root war's FileServlet) (+) plugin author has complete control over plugin source page (-) plugin author actually needs to write plugin source page Overall, I think this approach is better than the previous one (where PluginSourcePageServlet took care of rendering plugin source page, but sacrificed some flexibility). By the way, here's what would happen behind the scenes: 1. user requests WebAdmin host page, WebadminDynamicHostingServlet loads and caches all plugin definitions [a] + plugin configurations [b] and constructs/embeds appropriate pluginDefinitions JavaScript object 2. during WebAdmin startup, PluginManager registers the plugin (name/version/url/config), and creates/attaches the iframe to fetch plugin source page ansynchronously 3. PluginSourcePageServlet handles plugin source page request, resolves the correct path [c] and just streams the file content back to client > 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. Sounds good, we can implement these later on :) > 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. Yes, but we can defend against these, restricting access only to plugin's "path" and its sub-directories. > 3. Is /usr/share/ovirt-engine the right place for the plugin config files? I suppose you mean plugin definition files [a], cannot tell for sure, but we can change this anytime :) Chris, please let me know what you think, and again - many thanks for sending the patch! Regards, Vojtech ________________________________ From: "Chris Frantz" > To: vszocs at redhat.com Sent: Wednesday, August 22, 2012 7:56:45 PM Subject: UI Plugins configuration Vojtech, I decided to work on making the plugin patch a bit more configurable, following some of the ideas expressed by Itamar and others in the meeting yesterday. The attached patch is a simple first-attempt. Plugin configurations are stored in /usr/share/ovirt-engine/ui-plugins/*.json. Example: { "name": "test", "version": "1.0", "url": "/webadmin/webadmin/plugin/test/foo.html", "path": "/tmp", "config": {"a":1, "b":2, "c": 3} } The engine reads all of the *.json files in that directory to build the list of known plugins and gives that list to the webadmin. When webadmin loads a plugin, it requests the URL given in the plugin config file. The "plugin" URL is mapped to PluginSourcePage, which will translate the first part of the path ("test") into whatever path is stored in pluginConfig ("/tmp") in this case, and then serve the static file (e.g. "/tmp/foo.html"). I didn't use the renderPluginSourcePage() method in favor of just serving a static file, but I have no strong opinion on the matter. However, a plugin may want to store static resources at "path" and have the engine serve those resources. By just serving files through PluginSourcePage, we don't need any other servlets to provide those resources. There is still a bit of work to do: 1. The plugin configuration files should probably have an "enabled" field and an "apiVersion" field that should be examined to determine whether or not to use the plugin. 2. I suspect the way I've modified PluginSourcePage makes it vulnerable to directory climbing attacks. 3. Is /usr/share/ovirt-engine the right place for the plugin config files? Let me know what you think, --Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sample.tar.gz Type: application/x-gzip Size: 1096 bytes Desc: sample.tar.gz URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: UI-Plugins-Config-2.patch Type: application/octet-stream Size: 16835 bytes Desc: UI-Plugins-Config-2.patch URL: From jhernand at redhat.com Thu Aug 30 18:24:02 2012 From: jhernand at redhat.com (Juan Hernandez) Date: Thu, 30 Aug 2012 20:24:02 +0200 Subject: [Engine-devel] Update on UI Plugins: PoC patch revision 4 In-Reply-To: <1240508860.15487250.1346341179026.JavaMail.root@redhat.com> References: <1240508860.15487250.1346341179026.JavaMail.root@redhat.com> Message-ID: <503FAFC2.2060504@redhat.com> Nice work Vojtech, just some comments about the PluginSourcePageServlet: * You can avoid the hardcoded plugin code location with something like this: import org.ovirt.engine.core.utils.LocalConfig; File dataDir = LocalConfig.getInstance().getUsrDir(); File pluginCodeLocation = new File(etcDir, "ui-plugins"); That will result in /usr/share/ovirt-engine/ui-plugins or whatever directory is configured in the ENGINE_USR parameter in the /etc/sysconfig/ovirt-engine file. * It is very important to check the sanity of the value of the "plugin" parameter, otherwise an attacker could send you a name with backpaths, and that can result in accessing an unexpected file. In this particular case you are adding the ".js" extension, so it won't probably result in accessing dangerous files, but anyhow it is a good practice. I would recommend to do something like this: String pluginName = request.getParameter("plugin"); if (pluginName != null || !isSane(pluginName)) { ... } The "isSane" method can do something similar to the "isSane" method in the "FileServlet" class (I think you already mentioned this at some point), maybe even forbid slashes as well. * When copying the plugin file to the generated page you can avoid the extra Buffered reader/writer as you are already using your own buffer in the "copyChars" method (which is very good practice). For the output you can directly use "response.getWriter()" instead of "response.getOutputStream()", that is already buffered by the container. On 08/30/2012 05:39 PM, Vojtech Szocs wrote: > > > Hello everyone, > > as a follow-up to my last email on improving plugin API, here comes the latest revision of UI Plugins proof-of-concept patch (please find it attached). > > This patch is focused on improving JavaScript plugin API, along with important changes and improvements made to plugin infrastructure ( PluginManager ). Let's walk through the changes step by step. > > > > Improved plugin API, taking some inspiration from jQuery > > Following is a sample plugin code that uses new plugin API: > > var myPlugin = pluginApi('myPlugin'); // Obtain plugin API instance for 'myPlugin' > var myPluginConfig = myPlugin.configObject(); // Obtain plugin-specific configuration > > // Register event handler functions to be invoked by WebAdmin > // Note: all functions are optional, the plugin only defines functions for events it wants to handle > myPlugin.register({ > UiInit: function() { > var testUrl = 'http://www.example.com/' + myPluginConfig.foo; // Assume plugin configuration has 'foo' attribute > myPlugin.ui.addMainTab('Custom Tab', 'custom-tab', testUrl); // Invoke some operation using plugin API > } > }); > > myPlugin.ready(); // Event handler functions are registered, we are now ready to get initialized (UiInit) > > > > UI plugin life-cycle, enforced by plugin infrastructure > > The PluginState enumeration lists possible states of a plugin during its runtime: > > * DEFINED : This is the initial state for all plugins. Plugin meta-data has been read by PluginManager and the corresponding iframe element has been created for the plugin. Note that at this point, the iframe element is not attached to DOM yet. > * LOADING : The iframe element for the plugin has been attached to DOM, which causes plugin host page (previously known as plugin source page) to be fetched asynchronously in the background. We are now waiting for plugin to report in as ready. In practice, due to JavaScript runtime being single-threaded, WebAdmin startup logic will continue to execute until the JavaScript runtime is "idle" (browser event loop returns), and at this point JavaScript plugin code gets invoked through the plugin host page. > * READY : The plugin has indicated that it is ready for use. We assume the plugin has already registered its event handler object (object containing various event handler functions to be called by WebAdmin) at this point. We can now proceed with plugin initialization. > * INITIALIZED : The plugin has been initialized by calling UiInit function on its event handler object. We can now call other event handler functions, the plugin is now initialized and in use. > > > Note on plugin initialization: the UiInit function will be called just once during the lifetime of the plugin, after the plugin reports in as ready AND WebAdmin enters the state that allows plugins to be invoked (entering main section for logged-in users), and before other event handler functions are invoked by the plugin infrastructure. > > > > > Plugin meta-data is now passed to client using different format > > > Previously, plugin meta-data was embedded into WebAdmin host page as a simple JavaScript object, like so: > > > var pluginDefinitions = { myPlugin: "", anotherPlugin: "" } > > > > Now, plugin meta-data is embedded into WebAdmin host page as a JavaScript array, like so: > > > > var pluginDefinitions = [ > { name: "myPlugin", url: "", config: { "foo": 1, "bar": "whatever" } }, > { name: "anotherPlugin", url: "" } > > ]; > > > As you can see, pluginDefinitions is now an array of JavaScript objects, with each object representing plugin meta-data. The "name" and "url" attributes are mandatory (we need to check them when loading plugin descriptors). "config" is the plugin configuration (JSON) object, obtained by merging default plugin configuration (defined in plugin descriptor) with custom plugin configuration (defined in external plugin configuration file). Note that the "config" attribute is optional. > > > > In terms of Java classes, pluginDefinitions is mapped to PluginDefinitions overlay type, and each meta-data object within the array is mapped to PluginMetaData overlay type. > > > > > > Note on using assert statements in client code: you might notice that I'm using a lot of assert statements in Plugin class. This is to ensure consistency and guard against corrupted state during development. In GWT, assert statements work in a different way than in standard Java VM. When debugging GWT application using Development Mode, assert statements are checked and throw assertion errors during runtime (they are displayed in Development Mode console). However, when compiling GWT application to JavaScript (Production Mode), assert statements are removed by GWT compiler, so they don't affect the application running in Production Mode. > > > > Cheers, > Vojtech > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > -- Direcci?n Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3?D, 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid ? C.I.F. B82657941 - Red Hat S.L. From alonbl at redhat.com Thu Aug 30 18:35:16 2012 From: alonbl at redhat.com (Alon Bar-Lev) Date: Thu, 30 Aug 2012 14:35:16 -0400 (EDT) Subject: [Engine-devel] Gluster IPTable configuration In-Reply-To: <503F6AE8.7010303@redhat.com> Message-ID: <278376844.4015237.1346351716576.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Selvasundaram" > To: engine-devel at ovirt.org > Cc: "Shireesh Anjal" > Sent: Thursday, August 30, 2012 4:30:16 PM > Subject: [Engine-devel] Gluster IPTable configuration > > > Hi, > > I want to add gluster specific IPTable configuration in addition to > the ovirt IPTable configuration (if it is gluster node). > > There are two approaches, > 1. Having one more gluster specific IP table config in db and merge > with ovirt IPTable config (merging NOT appending) > [I have the patch engine: Gluster specific firewall configurations > #7244] > 2. Having two different IP Table config (ovirt and ovirt+gluster) and > use either one. > > Please provide your suggestions or improvements on this. > Hello all, The mentioned patch[1], adds hard coded gluster code into the bootstrap code, manipulate the firewall configuration to be gluster specific. It hardcoded search for "reject", insert before some other rules. I believe this hardcode approach is obsolete now that we have proper tools for templates. A more robust solution would be defining generic profiles, each profile as a template, each template can refer to different profiles, and assign profile to a node. This way the implementation is not gluster [or any] specific and can be reused for more setups, code is cleaner. Example: BASIC.PRE :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] BASIC.IN accept ... accept ... BASIC.POST reject ... reject ... BASIC ${BASIC.PRE} ${BASIC.IN} ${BASIC.POST} GLUSTER ${BASIC.PRE} ${BASIC.IN} accept ... ${BASIC.POST} reject ... Regards, Alon Bar-Lev [1] http://gerrit.ovirt.org/#/c/7244/ From acathrow at redhat.com Thu Aug 30 18:37:59 2012 From: acathrow at redhat.com (Andrew Cathrow) Date: Thu, 30 Aug 2012 14:37:59 -0400 (EDT) Subject: [Engine-devel] Gluster IPTable configuration In-Reply-To: <278376844.4015237.1346351716576.JavaMail.root@redhat.com> Message-ID: <1378011110.3476214.1346351879584.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Alon Bar-Lev" > To: "Selvasundaram" > Cc: "Shireesh Anjal" , engine-devel at ovirt.org > Sent: Thursday, August 30, 2012 2:35:16 PM > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > ----- Original Message ----- > > From: "Selvasundaram" > > To: engine-devel at ovirt.org > > Cc: "Shireesh Anjal" > > Sent: Thursday, August 30, 2012 4:30:16 PM > > Subject: [Engine-devel] Gluster IPTable configuration > > > > > > Hi, > > > > I want to add gluster specific IPTable configuration in addition to > > the ovirt IPTable configuration (if it is gluster node). > > > > There are two approaches, > > 1. Having one more gluster specific IP table config in db and merge > > with ovirt IPTable config (merging NOT appending) > > [I have the patch engine: Gluster specific firewall configurations > > #7244] > > 2. Having two different IP Table config (ovirt and ovirt+gluster) > > and > > use either one. > > > > Please provide your suggestions or improvements on this. > > > > Hello all, > > The mentioned patch[1], adds hard coded gluster code into the > bootstrap code, manipulate the firewall configuration to be gluster > specific. It hardcoded search for "reject", insert before some other > rules. > > I believe this hardcode approach is obsolete now that we have proper > tools for templates. > > A more robust solution would be defining generic profiles, each > profile as a template, each template can refer to different > profiles, and assign profile to a node. > > This way the implementation is not gluster [or any] specific and can > be reused for more setups, code is cleaner. or create custom chains ? > > Example: > > BASIC.PRE > :INPUT ACCEPT [0:0] > :FORWARD ACCEPT [0:0] > :OUTPUT ACCEPT [0:0] > BASIC.IN > accept ... > accept ... > BASIC.POST > reject ... > reject ... > > BASIC > ${BASIC.PRE} > ${BASIC.IN} > ${BASIC.POST} > > GLUSTER > ${BASIC.PRE} > ${BASIC.IN} > accept ... > ${BASIC.POST} > reject ... > > Regards, > Alon Bar-Lev > > [1] http://gerrit.ovirt.org/#/c/7244/ > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From alonbl at redhat.com Thu Aug 30 18:39:10 2012 From: alonbl at redhat.com (Alon Bar-Lev) Date: Thu, 30 Aug 2012 14:39:10 -0400 (EDT) Subject: [Engine-devel] network subnet In-Reply-To: <503F5B05.7070402@redhat.com> Message-ID: <493287114.4015967.1346351950293.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Livnat Peer" > To: engine-devel at ovirt.org > Sent: Thursday, August 30, 2012 3:22:29 PM > Subject: [Engine-devel] network subnet > > Hi All, > > Today when a user wants to define a network subnet mask, he does it > when > he attaches the network to a host NIC. > > I was wondering if there is a reason not to define the network subnet > on > the logical network entity (Data center level). > > Thanks, Livnat Hello, I am sorry, maybe I do not understand... the IP scheme enforces the use of address mask in order to properly route packets. Network mask is used in any case, I guess it can be dropped from configuration in favour of using the address class as mask, is that what you suggest? Regards, Alon From alonbl at redhat.com Thu Aug 30 18:40:22 2012 From: alonbl at redhat.com (Alon Bar-Lev) Date: Thu, 30 Aug 2012 14:40:22 -0400 (EDT) Subject: [Engine-devel] Gluster IPTable configuration In-Reply-To: <1378011110.3476214.1346351879584.JavaMail.root@redhat.com> Message-ID: <1896719975.4016337.1346352022153.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Andrew Cathrow" > To: "Alon Bar-Lev" > Cc: "Shireesh Anjal" , engine-devel at ovirt.org, "Selvasundaram" > Sent: Thursday, August 30, 2012 9:37:59 PM > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > ----- Original Message ----- > > From: "Alon Bar-Lev" > > To: "Selvasundaram" > > Cc: "Shireesh Anjal" , engine-devel at ovirt.org > > Sent: Thursday, August 30, 2012 2:35:16 PM > > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > > > > > ----- Original Message ----- > > > From: "Selvasundaram" > > > To: engine-devel at ovirt.org > > > Cc: "Shireesh Anjal" > > > Sent: Thursday, August 30, 2012 4:30:16 PM > > > Subject: [Engine-devel] Gluster IPTable configuration > > > > > > > > > Hi, > > > > > > I want to add gluster specific IPTable configuration in addition > > > to > > > the ovirt IPTable configuration (if it is gluster node). > > > > > > There are two approaches, > > > 1. Having one more gluster specific IP table config in db and > > > merge > > > with ovirt IPTable config (merging NOT appending) > > > [I have the patch engine: Gluster specific firewall > > > configurations > > > #7244] > > > 2. Having two different IP Table config (ovirt and ovirt+gluster) > > > and > > > use either one. > > > > > > Please provide your suggestions or improvements on this. > > > > > > > Hello all, > > > > The mentioned patch[1], adds hard coded gluster code into the > > bootstrap code, manipulate the firewall configuration to be gluster > > specific. It hardcoded search for "reject", insert before some > > other > > rules. > > > > I believe this hardcode approach is obsolete now that we have > > proper > > tools for templates. > > > > A more robust solution would be defining generic profiles, each > > profile as a template, each template can refer to different > > profiles, and assign profile to a node. > > > > This way the implementation is not gluster [or any] specific and > > can > > be reused for more setups, code is cleaner. > > > or create custom chains ? Can you please elaborate what is custom chains? Thanks! > > > > Example: > > > > BASIC.PRE > > :INPUT ACCEPT [0:0] > > :FORWARD ACCEPT [0:0] > > :OUTPUT ACCEPT [0:0] > > BASIC.IN > > accept ... > > accept ... > > BASIC.POST > > reject ... > > reject ... > > > > BASIC > > ${BASIC.PRE} > > ${BASIC.IN} > > ${BASIC.POST} > > > > GLUSTER > > ${BASIC.PRE} > > ${BASIC.IN} > > accept ... > > ${BASIC.POST} > > reject ... > > > > Regards, > > Alon Bar-Lev > > > > [1] http://gerrit.ovirt.org/#/c/7244/ > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From lpeer at redhat.com Thu Aug 30 19:16:05 2012 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 30 Aug 2012 22:16:05 +0300 Subject: [Engine-devel] network subnet In-Reply-To: <493287114.4015967.1346351950293.JavaMail.root@redhat.com> References: <493287114.4015967.1346351950293.JavaMail.root@redhat.com> Message-ID: <503FBBF5.6010707@redhat.com> On 30/08/12 21:39, Alon Bar-Lev wrote: > > > ----- Original Message ----- >> From: "Livnat Peer" >> To: engine-devel at ovirt.org >> Sent: Thursday, August 30, 2012 3:22:29 PM >> Subject: [Engine-devel] network subnet >> >> Hi All, >> >> Today when a user wants to define a network subnet mask, he does it >> when >> he attaches the network to a host NIC. >> >> I was wondering if there is a reason not to define the network subnet >> on >> the logical network entity (Data center level). >> >> Thanks, Livnat > > Hello, > > I am sorry, maybe I do not understand... the IP scheme enforces the use of address mask in order to properly route packets. of course. My proposal is related to our user usage of the system. Today ovirt user, who wants to define a network subnet, has to type the subnet per host (per network), I think the user should only define it once on the logical network entity in the Data Center. Propagating the value to all hosts is needed but it should be our internal implementation detail. > > Network mask is used in any case, I guess it can be dropped from configuration in favour of using the address class as mask, is that what you suggest? > No, hope the above paragraph made it more clear. > Regards, > Alon > From Chris.Frantz at hp.com Thu Aug 30 20:03:02 2012 From: Chris.Frantz at hp.com (Frantz, Chris) Date: Thu, 30 Aug 2012 20:03:02 +0000 Subject: [Engine-devel] UI Plugins configuration In-Reply-To: References: <127266215.14115417.1346078924449.JavaMail.root@redhat.com> Message-ID: Vojtech, Here is my patch against WIP-UI-Plugins-PoC-revision-4. I?ve also included 2 dummy plugins in sample.tar.gz. --Chris From: engine-devel-bounces at ovirt.org [mailto:engine-devel-bounces at ovirt.org] On Behalf Of Frantz, Chris Sent: Thursday, August 30, 2012 1:06 PM To: Vojtech Szocs Cc: engine-devel Subject: Re: [Engine-devel] UI Plugins configuration Vojtech, I agree with your formalized names: Plugin Descriptor is the JSON file containing plugin meta-data. The plugin descriptor may also contain the default configuration data. It is located in $DATADIR/ui-plugins. Plugin Configuration is the JSON file containing optional plugin configuration info. It is located in $CONFIGDIR/ui-plugins (unless the Plugin Descriptor contains an absolute path). Plugin Definition is the JavaScript object used by WebAdmin. In the current implementation, the Plugin Definition contains both the Plugin Descriptor and the Plugin Configuraion. Plugin Source Page is the HTML page used to invoke the plugin code and shall be referenced by the plugin descriptor?s ?url? attribute. I?ve implemented the config merging you?ve suggested: the structure in configFile gets merged with the structure of ?config?, with the data in configFile winning in the case of duplicate key names. BTW, the patch is against ovirt-engine + 0001-WIP-UI-Plugins-PoC-revision-2. Let me know what you think, --Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sample.tar.gz Type: application/x-gzip Size: 1250 bytes Desc: sample.tar.gz URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: UI-Plugins-Config-4.patch Type: application/octet-stream Size: 17270 bytes Desc: UI-Plugins-Config-4.patch URL: From alonbl at redhat.com Thu Aug 30 20:11:30 2012 From: alonbl at redhat.com (Alon Bar-Lev) Date: Thu, 30 Aug 2012 16:11:30 -0400 (EDT) Subject: [Engine-devel] network subnet In-Reply-To: <503FBBF5.6010707@redhat.com> Message-ID: <571855213.4031461.1346357490996.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Livnat Peer" > To: "Alon Bar-Lev" > Cc: engine-devel at ovirt.org > Sent: Thursday, August 30, 2012 10:16:05 PM > Subject: Re: [Engine-devel] network subnet > > On 30/08/12 21:39, Alon Bar-Lev wrote: > > > > > > ----- Original Message ----- > >> From: "Livnat Peer" > >> To: engine-devel at ovirt.org > >> Sent: Thursday, August 30, 2012 3:22:29 PM > >> Subject: [Engine-devel] network subnet > >> > >> Hi All, > >> > >> Today when a user wants to define a network subnet mask, he does > >> it > >> when > >> he attaches the network to a host NIC. > >> > >> I was wondering if there is a reason not to define the network > >> subnet > >> on > >> the logical network entity (Data center level). > >> > >> Thanks, Livnat > > > > Hello, > > > > I am sorry, maybe I do not understand... the IP scheme enforces the > > use of address mask in order to properly route packets. > > of course. My proposal is related to our user usage of the system. > Today > ovirt user, who wants to define a network subnet, has to type the > subnet > per host (per network), I think the user should only define it once > on > the logical network entity in the Data Center. > Propagating the value to all hosts is needed but it should be our > internal implementation detail. > > > > > Network mask is used in any case, I guess it can be dropped from > > configuration in favour of using the address class as mask, is > > that what you suggest? > > > > No, hope the above paragraph made it more clear. > Hello, Then you assume that a logical network, which is actually layer 2 network in our implementation, has layer 3 characteristics, right? In our current implementation "data center logical network" is pure layer 2 segment aka layer 2 broadcast domain. One can use the same logical network for multiple layer 3 segments, which is totally valid and consistent with standard physical layer 2 setup. Unless I am missing something crucial, I would suggest to keep the consistent physical->virtual mapping, unless we emulate layer 3 switching. Layer 2 does not have layer 3 characteristics. Regards, Alon. From djasa at redhat.com Fri Aug 31 08:57:11 2012 From: djasa at redhat.com (David =?UTF-8?Q?Ja=C5=A1a?=) Date: Fri, 31 Aug 2012 10:57:11 +0200 Subject: [Engine-devel] Gluster IPTable configuration In-Reply-To: <1896719975.4016337.1346352022153.JavaMail.root@redhat.com> References: <1896719975.4016337.1346352022153.JavaMail.root@redhat.com> Message-ID: <1346403431.18636.36.camel@dhcp-29-7.brq.redhat.com> Alon Bar-Lev p??e v ?t 30. 08. 2012 v 14:40 -0400: > > ----- Original Message ----- > > From: "Andrew Cathrow" > > To: "Alon Bar-Lev" > > Cc: "Shireesh Anjal" , engine-devel at ovirt.org, "Selvasundaram" > > Sent: Thursday, August 30, 2012 9:37:59 PM > > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > > > > > ----- Original Message ----- > > > From: "Alon Bar-Lev" > > > To: "Selvasundaram" > > > Cc: "Shireesh Anjal" , engine-devel at ovirt.org > > > Sent: Thursday, August 30, 2012 2:35:16 PM > > > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > > > > > > > > > ----- Original Message ----- > > > > From: "Selvasundaram" > > > > To: engine-devel at ovirt.org > > > > Cc: "Shireesh Anjal" > > > > Sent: Thursday, August 30, 2012 4:30:16 PM > > > > Subject: [Engine-devel] Gluster IPTable configuration > > > > > > > > > > > > Hi, > > > > > > > > I want to add gluster specific IPTable configuration in addition > > > > to > > > > the ovirt IPTable configuration (if it is gluster node). > > > > > > > > There are two approaches, > > > > 1. Having one more gluster specific IP table config in db and > > > > merge > > > > with ovirt IPTable config (merging NOT appending) > > > > [I have the patch engine: Gluster specific firewall > > > > configurations > > > > #7244] > > > > 2. Having two different IP Table config (ovirt and ovirt+gluster) > > > > and > > > > use either one. > > > > > > > > Please provide your suggestions or improvements on this. > > > > > > > > > > Hello all, > > > > > > The mentioned patch[1], adds hard coded gluster code into the > > > bootstrap code, manipulate the firewall configuration to be gluster > > > specific. It hardcoded search for "reject", insert before some > > > other > > > rules. > > > > > > I believe this hardcode approach is obsolete now that we have > > > proper > > > tools for templates. > > > > > > A more robust solution would be defining generic profiles, each > > > profile as a template, each template can refer to different > > > profiles, and assign profile to a node. > > > > > > This way the implementation is not gluster [or any] specific and > > > can > > > be reused for more setups, code is cleaner. > > > > > > or create custom chains ? > > Can you please elaborate what is custom chains? > Thanks! iptables -N my_new_chain iptables -A my_new_chain iptables -A my_new_chain ... iptables -A my_new_chain # if this is matched, packet goes through rules in my_new_chain iptables -A INPUT -j my_new_chain David > > > > > > > Example: > > > > > > BASIC.PRE > > > :INPUT ACCEPT [0:0] > > > :FORWARD ACCEPT [0:0] > > > :OUTPUT ACCEPT [0:0] > > > BASIC.IN > > > accept ... > > > accept ... > > > BASIC.POST > > > reject ... > > > reject ... > > > > > > BASIC > > > ${BASIC.PRE} > > > ${BASIC.IN} > > > ${BASIC.POST} > > > > > > GLUSTER > > > ${BASIC.PRE} > > > ${BASIC.IN} > > > accept ... > > > ${BASIC.POST} > > > reject ... > > > > > > Regards, > > > Alon Bar-Lev > > > > > > [1] http://gerrit.ovirt.org/#/c/7244/ > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -- David Ja?a, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 From alonbl at redhat.com Fri Aug 31 09:09:47 2012 From: alonbl at redhat.com (Alon Bar-Lev) Date: Fri, 31 Aug 2012 05:09:47 -0400 (EDT) Subject: [Engine-devel] Gluster IPTable configuration In-Reply-To: <1346403431.18636.36.camel@dhcp-29-7.brq.redhat.com> Message-ID: <1859251011.4125137.1346404187219.JavaMail.root@redhat.com> ----- Original Message ----- > From: "David Ja?a" > To: engine-devel at ovirt.org > Sent: Friday, August 31, 2012 11:57:11 AM > Subject: Re: [Engine-devel] Gluster IPTable configuration > > Alon Bar-Lev p??e v ?t 30. 08. 2012 v 14:40 -0400: > > > > ----- Original Message ----- > > > From: "Andrew Cathrow" > > > To: "Alon Bar-Lev" > > > Cc: "Shireesh Anjal" , engine-devel at ovirt.org, > > > "Selvasundaram" > > > Sent: Thursday, August 30, 2012 9:37:59 PM > > > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > > > > > > > > > ----- Original Message ----- > > > > From: "Alon Bar-Lev" > > > > To: "Selvasundaram" > > > > Cc: "Shireesh Anjal" , > > > > engine-devel at ovirt.org > > > > Sent: Thursday, August 30, 2012 2:35:16 PM > > > > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "Selvasundaram" > > > > > To: engine-devel at ovirt.org > > > > > Cc: "Shireesh Anjal" > > > > > Sent: Thursday, August 30, 2012 4:30:16 PM > > > > > Subject: [Engine-devel] Gluster IPTable configuration > > > > > > > > > > > > > > > Hi, > > > > > > > > > > I want to add gluster specific IPTable configuration in > > > > > addition > > > > > to > > > > > the ovirt IPTable configuration (if it is gluster node). > > > > > > > > > > There are two approaches, > > > > > 1. Having one more gluster specific IP table config in db and > > > > > merge > > > > > with ovirt IPTable config (merging NOT appending) > > > > > [I have the patch engine: Gluster specific firewall > > > > > configurations > > > > > #7244] > > > > > 2. Having two different IP Table config (ovirt and > > > > > ovirt+gluster) > > > > > and > > > > > use either one. > > > > > > > > > > Please provide your suggestions or improvements on this. > > > > > > > > > > > > > Hello all, > > > > > > > > The mentioned patch[1], adds hard coded gluster code into the > > > > bootstrap code, manipulate the firewall configuration to be > > > > gluster > > > > specific. It hardcoded search for "reject", insert before some > > > > other > > > > rules. > > > > > > > > I believe this hardcode approach is obsolete now that we have > > > > proper > > > > tools for templates. > > > > > > > > A more robust solution would be defining generic profiles, each > > > > profile as a template, each template can refer to different > > > > profiles, and assign profile to a node. > > > > > > > > This way the implementation is not gluster [or any] specific > > > > and > > > > can > > > > be reused for more setups, code is cleaner. > > > > > > > > > or create custom chains ? > > > > Can you please elaborate what is custom chains? > > Thanks! > > iptables -N my_new_chain > iptables -A my_new_chain > iptables -A my_new_chain ... > iptables -A my_new_chain > > # if this is matched, packet goes through rules in > my_new_chain > iptables -A INPUT -j my_new_chain > Hello, How does this solve the original issue? The need to provide different rules to different hosts by software installed on destination? Standard host needs iptables X. Gluster host needs iptables X+Y. XXX host needs iptables X+Z. Maintainer of Gluster knows what Y is. Maintainer of XXX knows what Z is. If we merge all to one entry product comes with default X. User override X to A. New version of product comes with default Y. Upgrade options: 1. System continues to use A. 2. Some AI to upgrade and create A'. 3. Revert to Y, dropping user's customization. Or we can maintain one large table with complete configuration and conditionals. Alon. From acathrow at redhat.com Fri Aug 31 12:12:34 2012 From: acathrow at redhat.com (Andrew Cathrow) Date: Fri, 31 Aug 2012 08:12:34 -0400 (EDT) Subject: [Engine-devel] Gluster IPTable configuration In-Reply-To: <1859251011.4125137.1346404187219.JavaMail.root@redhat.com> Message-ID: <1750220219.3758540.1346415154344.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Alon Bar-Lev" > To: "David Ja?a" > Cc: engine-devel at ovirt.org > Sent: Friday, August 31, 2012 5:09:47 AM > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > ----- Original Message ----- > > From: "David Ja?a" > > To: engine-devel at ovirt.org > > Sent: Friday, August 31, 2012 11:57:11 AM > > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > Alon Bar-Lev p??e v ?t 30. 08. 2012 v 14:40 -0400: > > > > > > ----- Original Message ----- > > > > From: "Andrew Cathrow" > > > > To: "Alon Bar-Lev" > > > > Cc: "Shireesh Anjal" , > > > > engine-devel at ovirt.org, > > > > "Selvasundaram" > > > > Sent: Thursday, August 30, 2012 9:37:59 PM > > > > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "Alon Bar-Lev" > > > > > To: "Selvasundaram" > > > > > Cc: "Shireesh Anjal" , > > > > > engine-devel at ovirt.org > > > > > Sent: Thursday, August 30, 2012 2:35:16 PM > > > > > Subject: Re: [Engine-devel] Gluster IPTable configuration > > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > From: "Selvasundaram" > > > > > > To: engine-devel at ovirt.org > > > > > > Cc: "Shireesh Anjal" > > > > > > Sent: Thursday, August 30, 2012 4:30:16 PM > > > > > > Subject: [Engine-devel] Gluster IPTable configuration > > > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > I want to add gluster specific IPTable configuration in > > > > > > addition > > > > > > to > > > > > > the ovirt IPTable configuration (if it is gluster node). > > > > > > > > > > > > There are two approaches, > > > > > > 1. Having one more gluster specific IP table config in db > > > > > > and > > > > > > merge > > > > > > with ovirt IPTable config (merging NOT appending) > > > > > > [I have the patch engine: Gluster specific firewall > > > > > > configurations > > > > > > #7244] > > > > > > 2. Having two different IP Table config (ovirt and > > > > > > ovirt+gluster) > > > > > > and > > > > > > use either one. > > > > > > > > > > > > Please provide your suggestions or improvements on this. > > > > > > > > > > > > > > > > Hello all, > > > > > > > > > > The mentioned patch[1], adds hard coded gluster code into the > > > > > bootstrap code, manipulate the firewall configuration to be > > > > > gluster > > > > > specific. It hardcoded search for "reject", insert before > > > > > some > > > > > other > > > > > rules. > > > > > > > > > > I believe this hardcode approach is obsolete now that we have > > > > > proper > > > > > tools for templates. > > > > > > > > > > A more robust solution would be defining generic profiles, > > > > > each > > > > > profile as a template, each template can refer to different > > > > > profiles, and assign profile to a node. > > > > > > > > > > This way the implementation is not gluster [or any] specific > > > > > and > > > > > can > > > > > be reused for more setups, code is cleaner. > > > > > > > > > > > > or create custom chains ? > > > > > > Can you please elaborate what is custom chains? > > > Thanks! > > > > iptables -N my_new_chain > > iptables -A my_new_chain > > iptables -A my_new_chain ... > > iptables -A my_new_chain > > > > # if this is matched, packet goes through rules in > > my_new_chain > > iptables -A INPUT -j my_new_chain > > > > Hello, > > How does this solve the original issue? It makes it easier for customers who are adding their own IPTables configuration - when we do rhev-h plugins it'll make things easier. This way we don't wipe out the original rules we just add our own chain. > The need to provide different rules to different hosts by software > installed on destination? > > Standard host needs iptables X. > Gluster host needs iptables X+Y. > XXX host needs iptables X+Z. > Maintainer of Gluster knows what Y is. > Maintainer of XXX knows what Z is. > > If we merge all to one entry product comes with default X. > User override X to A. > New version of product comes with default Y. > Upgrade options: > 1. System continues to use A. > 2. Some AI to upgrade and create A'. > 3. Revert to Y, dropping user's customization. > > Or we can maintain one large table with complete configuration and > conditionals. > > Alon. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel >