From mkolesni at redhat.com Wed Feb 1 06:03:47 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Wed, 01 Feb 2012 01:03:47 -0500 (EST) Subject: [Engine-devel] Simplifying our POJOs In-Reply-To: <4F2835A6.9020707@redhat.com> Message-ID: <89269de6-728f-4278-88b0-aef60a6240f8@zmail14.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 01/31/2012 12:45 PM, Doron Fediuck wrote: > > On 31/01/12 12:39, Livnat Peer wrote: > >> On 31/01/12 12:02, Mike Kolesnik wrote: > >>> Hi, > >>> > >>> Today many POJO > >>> s > >>> are used throughout the system to convey data: > >>> > >>> * Parameters - To send data to commands. > >>> * Business Entities - To transfer data in the parameters & > >>> to/from > >>> the DB. > >>> > >>> These POJOs are (usually) very verbose and full of boilerplate > >>> code > >>> . > >>> > >>> This, in turn, reduces their readability and maintainability for > >>> a > >>> couple of reasons (that I can think of): > >>> > >>> * It's hard to know what does what: > >>> o Who participates in equals/hashCode? > >>> o What fields are printed in toString? > >>> * Consistency is problematic: > >>> o A field may be part of equals but not hashCode, or vice > >>> versa. > >>> o This breaks the Object.hashCode() > >>> > >>> contract! > >>> * Adding/Removing fields take more time since you need to > >>> synchronize > >>> the change to all boilerplate methods. > >>> o Again, we're facing the consistency problem. > >>> * These simple classes tend to be very long and not very > >>> readable. > >>> * Boilerplate code makes it harder to find out which methods > >>> *don't* > >>> behave the default way. > >>> * Javadoc, if existent, is usually meaningless (but you might > >>> see some > >>> banal documentation that doesn't add any real value). > >>> * Our existing classes are not up to standard! > >>> > >>> > >>> So what can be done to remedy the situation? > >>> > >>> We could, of course, try to simplify the classes as much as we > >>> can and > >>> maybe address some of the issues. > >>> This won't alleviate the boilerplate code problem altogether, > >>> though. > >>> > >>> We could write annotations to do some of the things for us > >>> automatically. > >>> The easiest approach would be runtime-based, and would hinder > >>> performance. > >>> This also means we need to maintain this "infrastructure" and all > >>> the > >>> implications of such a decision. > >>> > >>> > >>> Luckily, there is a much easier solution: Someone else already > >>> did it! > >>> > >>> Check out Project Lombok: http://projectlombok.org > >>> What Lombok gives us, among some other things, is a way to > >>> greatly > >>> simplify our POJOs by using annotations to get the boilerplate > >>> code > >>> automatically generated. > >>> This means we get the benefit of annotations which would simplify > >>> the > >>> code a whole lot, while not imposing a performance cost (since > >>> the > >>> boilerplate code is generated during compilation). > >>> However, it's also possible to create the methods yourself if you > >>> want > >>> them to behave differently. > >>> Outside the POJO itself, you would see it as you would always see > >>> it. > >>> > >>> So what are the downsides to this approach? > >>> > >>> * First of all, Lombok provides also some other capabilities > >>> which I'm > >>> not sure are required/wanted at this time. > >>> o That's why I propose we use it for commons project, and > >>> make use > >>> of it's POJO-related annotations ONLY. 
> >>> * There might be a problem debugging the code since it's > >>> auto-generated. > >>> o I think this is rather negligible, since usually you > >>> don't debug > >>> POJOs anyway. > >>> * There might be a problem if the auto-generated code throws an > >>> Exception. > >>> o As before, I'm rather sure this is an edge-case which we > >>> usually > >>> won't hit (if at all). > >>> > >>> > >>> Even given these possible downsides, I think that we would > >>> benefit > >>> greatly if we would introduce this library. > >>> > >>> If you have any questions, you're welcome to study out the > >>> project site > >>> which has very thorough documentation: http://projectlombok.org > >>> > >>> Your thoughts on the matter? > >>> > >> > >> - I think an example of before/after pojo would help demonstrating > >> how > >> good the framework is. > >> > >> - Would it work when adding JPA annotations? > I suspect that yes (needs to be checked) > Will it work with GWT (if we create new business entity that needs to > be > exposed to GWT guys) ? As it is stated on the site, it supports GWT. > >> > >>> Regards, > >>> Mike > >>> > > > > Watching the demo it looks like we'll get less code, which in many > > cases is a good thing. > > What I'm concerned about is traceability; or- how can we track > > issues coming from the field > > when function calls and line numbers in the stack trace will not > > match the code we know. > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Wed Feb 1 06:59:32 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 01 Feb 2012 08:59:32 +0200 Subject: [Engine-devel] Simplifying our POJOs In-Reply-To: <89269de6-728f-4278-88b0-aef60a6240f8@zmail14.collab.prod.int.phx2.redhat.com> References: <89269de6-728f-4278-88b0-aef60a6240f8@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F28E2D4.5000803@redhat.com> On 01/02/12 08:03, Mike Kolesnik wrote: > > ----- Original Message ----- >> On 01/31/2012 12:45 PM, Doron Fediuck wrote: >>> On 31/01/12 12:39, Livnat Peer wrote: >>>> On 31/01/12 12:02, Mike Kolesnik wrote: >>>>> Hi, >>>>> >>>>> Today many POJO >>>>> s >>>>> are used throughout the system to convey data: >>>>> >>>>> * Parameters - To send data to commands. >>>>> * Business Entities - To transfer data in the parameters & >>>>> to/from >>>>> the DB. >>>>> >>>>> These POJOs are (usually) very verbose and full of boilerplate >>>>> code >>>>> . >>>>> >>>>> This, in turn, reduces their readability and maintainability for >>>>> a >>>>> couple of reasons (that I can think of): >>>>> >>>>> * It's hard to know what does what: >>>>> o Who participates in equals/hashCode? >>>>> o What fields are printed in toString? >>>>> * Consistency is problematic: >>>>> o A field may be part of equals but not hashCode, or vice >>>>> versa. >>>>> o This breaks the Object.hashCode() >>>>> >>>>> contract! >>>>> * Adding/Removing fields take more time since you need to >>>>> synchronize >>>>> the change to all boilerplate methods. >>>>> o Again, we're facing the consistency problem. >>>>> * These simple classes tend to be very long and not very >>>>> readable. >>>>> * Boilerplate code makes it harder to find out which methods >>>>> *don't* >>>>> behave the default way. >>>>> * Javadoc, if existent, is usually meaningless (but you might >>>>> see some >>>>> banal documentation that doesn't add any real value). >>>>> * Our existing classes are not up to standard! 
>>>>> >>>>> >>>>> So what can be done to remedy the situation? >>>>> >>>>> We could, of course, try to simplify the classes as much as we >>>>> can and >>>>> maybe address some of the issues. >>>>> This won't alleviate the boilerplate code problem altogether, >>>>> though. >>>>> >>>>> We could write annotations to do some of the things for us >>>>> automatically. >>>>> The easiest approach would be runtime-based, and would hinder >>>>> performance. >>>>> This also means we need to maintain this "infrastructure" and all >>>>> the >>>>> implications of such a decision. >>>>> >>>>> >>>>> Luckily, there is a much easier solution: Someone else already >>>>> did it! >>>>> >>>>> Check out Project Lombok: http://projectlombok.org >>>>> What Lombok gives us, among some other things, is a way to >>>>> greatly >>>>> simplify our POJOs by using annotations to get the boilerplate >>>>> code >>>>> automatically generated. >>>>> This means we get the benefit of annotations which would simplify >>>>> the >>>>> code a whole lot, while not imposing a performance cost (since >>>>> the >>>>> boilerplate code is generated during compilation). >>>>> However, it's also possible to create the methods yourself if you >>>>> want >>>>> them to behave differently. >>>>> Outside the POJO itself, you would see it as you would always see >>>>> it. >>>>> >>>>> So what are the downsides to this approach? >>>>> >>>>> * First of all, Lombok provides also some other capabilities >>>>> which I'm >>>>> not sure are required/wanted at this time. >>>>> o That's why I propose we use it for commons project, and >>>>> make use >>>>> of it's POJO-related annotations ONLY. >>>>> * There might be a problem debugging the code since it's >>>>> auto-generated. >>>>> o I think this is rather negligible, since usually you >>>>> don't debug >>>>> POJOs anyway. >>>>> * There might be a problem if the auto-generated code throws an >>>>> Exception. >>>>> o As before, I'm rather sure this is an edge-case which we >>>>> usually >>>>> won't hit (if at all). >>>>> >>>>> >>>>> Even given these possible downsides, I think that we would >>>>> benefit >>>>> greatly if we would introduce this library. >>>>> >>>>> If you have any questions, you're welcome to study out the >>>>> project site >>>>> which has very thorough documentation: http://projectlombok.org >>>>> >>>>> Your thoughts on the matter? >>>>> >>>> >>>> - I think an example of before/after pojo would help demonstrating >>>> how >>>> good the framework is. >>>> >>>> - Would it work when adding JPA annotations? >> I suspect that yes (needs to be checked) >> Will it work with GWT (if we create new business entity that needs to >> be >> exposed to GWT guys) ? > > As it is stated on the site, it supports GWT. > Since this package is required only during compile time it is relatively easy to push it in. Need to make sure it is working nice with debugging and give it a try. I like this package, +1 from me. >>>> >>>>> Regards, >>>>> Mike >>>>> >>> >>> Watching the demo it looks like we'll get less code, which in many >>> cases is a good thing. >>> What I'm concerned about is traceability; or- how can we track >>> issues coming from the field >>> when function calls and line numbers in the stack trace will not >>> match the code we know. 
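As a concrete illustration of the before/after comparison requested above, here is a minimal sketch of what an entity annotated with Lombok could look like. The class name and fields are made up for the example (this is not an actual engine class), and it assumes Lombok's standard @Getter/@Setter/@EqualsAndHashCode/@ToString annotations:

    import lombok.EqualsAndHashCode;
    import lombok.Getter;
    import lombok.NoArgsConstructor;
    import lombok.Setter;
    import lombok.ToString;

    // Hypothetical entity, for illustration only (not an existing oVirt class).
    @Getter
    @Setter
    @NoArgsConstructor
    @EqualsAndHashCode(of = { "id" })       // only 'id' participates in equals()/hashCode()
    @ToString(exclude = { "description" })  // keep toString() output short
    public class StorageDomainExample {
        private String id;
        private String name;
        private long sizeInGb;
        private String description;
    }

The hand-written equivalent would need a getter and setter per field plus manually maintained equals(), hashCode() and toString(); with the annotations, which fields participate in equals/hashCode and toString is declared in one place, which addresses the consistency concerns raised in the original proposal.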
>>> >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From yzaslavs at redhat.com Wed Feb 1 07:13:05 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Wed, 01 Feb 2012 09:13:05 +0200 Subject: [Engine-devel] Simplifying our POJOs In-Reply-To: <4F28E2D4.5000803@redhat.com> References: <89269de6-728f-4278-88b0-aef60a6240f8@zmail14.collab.prod.int.phx2.redhat.com> <4F28E2D4.5000803@redhat.com> Message-ID: <4F28E601.9010800@redhat.com> On 02/01/2012 08:59 AM, Livnat Peer wrote: > On 01/02/12 08:03, Mike Kolesnik wrote: >> >> ----- Original Message ----- >>> On 01/31/2012 12:45 PM, Doron Fediuck wrote: >>>> On 31/01/12 12:39, Livnat Peer wrote: >>>>> On 31/01/12 12:02, Mike Kolesnik wrote: >>>>>> Hi, >>>>>> >>>>>> Today many POJO >>>>>> s >>>>>> are used throughout the system to convey data: >>>>>> >>>>>> * Parameters - To send data to commands. >>>>>> * Business Entities - To transfer data in the parameters & >>>>>> to/from >>>>>> the DB. >>>>>> >>>>>> These POJOs are (usually) very verbose and full of boilerplate >>>>>> code >>>>>> . >>>>>> >>>>>> This, in turn, reduces their readability and maintainability for >>>>>> a >>>>>> couple of reasons (that I can think of): >>>>>> >>>>>> * It's hard to know what does what: >>>>>> o Who participates in equals/hashCode? >>>>>> o What fields are printed in toString? >>>>>> * Consistency is problematic: >>>>>> o A field may be part of equals but not hashCode, or vice >>>>>> versa. >>>>>> o This breaks the Object.hashCode() >>>>>> >>>>>> contract! >>>>>> * Adding/Removing fields take more time since you need to >>>>>> synchronize >>>>>> the change to all boilerplate methods. >>>>>> o Again, we're facing the consistency problem. >>>>>> * These simple classes tend to be very long and not very >>>>>> readable. >>>>>> * Boilerplate code makes it harder to find out which methods >>>>>> *don't* >>>>>> behave the default way. >>>>>> * Javadoc, if existent, is usually meaningless (but you might >>>>>> see some >>>>>> banal documentation that doesn't add any real value). >>>>>> * Our existing classes are not up to standard! >>>>>> >>>>>> >>>>>> So what can be done to remedy the situation? >>>>>> >>>>>> We could, of course, try to simplify the classes as much as we >>>>>> can and >>>>>> maybe address some of the issues. >>>>>> This won't alleviate the boilerplate code problem altogether, >>>>>> though. >>>>>> >>>>>> We could write annotations to do some of the things for us >>>>>> automatically. >>>>>> The easiest approach would be runtime-based, and would hinder >>>>>> performance. >>>>>> This also means we need to maintain this "infrastructure" and all >>>>>> the >>>>>> implications of such a decision. >>>>>> >>>>>> >>>>>> Luckily, there is a much easier solution: Someone else already >>>>>> did it! >>>>>> >>>>>> Check out Project Lombok: http://projectlombok.org >>>>>> What Lombok gives us, among some other things, is a way to >>>>>> greatly >>>>>> simplify our POJOs by using annotations to get the boilerplate >>>>>> code >>>>>> automatically generated. >>>>>> This means we get the benefit of annotations which would simplify >>>>>> the >>>>>> code a whole lot, while not imposing a performance cost (since >>>>>> the >>>>>> boilerplate code is generated during compilation). 
>>>>>> However, it's also possible to create the methods yourself if you >>>>>> want >>>>>> them to behave differently. >>>>>> Outside the POJO itself, you would see it as you would always see >>>>>> it. >>>>>> >>>>>> So what are the downsides to this approach? >>>>>> >>>>>> * First of all, Lombok provides also some other capabilities >>>>>> which I'm >>>>>> not sure are required/wanted at this time. >>>>>> o That's why I propose we use it for commons project, and >>>>>> make use >>>>>> of it's POJO-related annotations ONLY. >>>>>> * There might be a problem debugging the code since it's >>>>>> auto-generated. >>>>>> o I think this is rather negligible, since usually you >>>>>> don't debug >>>>>> POJOs anyway. >>>>>> * There might be a problem if the auto-generated code throws an >>>>>> Exception. >>>>>> o As before, I'm rather sure this is an edge-case which we >>>>>> usually >>>>>> won't hit (if at all). >>>>>> >>>>>> >>>>>> Even given these possible downsides, I think that we would >>>>>> benefit >>>>>> greatly if we would introduce this library. >>>>>> >>>>>> If you have any questions, you're welcome to study out the >>>>>> project site >>>>>> which has very thorough documentation: http://projectlombok.org >>>>>> >>>>>> Your thoughts on the matter? >>>>>> >>>>> >>>>> - I think an example of before/after pojo would help demonstrating >>>>> how >>>>> good the framework is. >>>>> >>>>> - Would it work when adding JPA annotations? >>> I suspect that yes (needs to be checked) >>> Will it work with GWT (if we create new business entity that needs to >>> be >>> exposed to GWT guys) ? >> >> As it is stated on the site, it supports GWT. >> > > Since this package is required only during compile time it is relatively > easy to push it in. > Need to make sure it is working nice with debugging and give it a try. > > I like this package, > +1 from me. > Another issue to check - (I'm sure it does, but still) - Are empty CTORs generated as well? (There is a long debate for POJOs that contain X fields whether they should have an empty CTOR, as usage of empty CTOR may yield to potential bugs (logically speaking) of "partial state") - Unfortunately, some frameworks require existence of empty CTOR (I admit, still haven't look at the site thoroughly, so I'm just sharing here thoughts of what should we check for). Yair > >>>>> >>>>>> Regards, >>>>>> Mike >>>>>> >>>> >>>> Watching the demo it looks like we'll get less code, which in many >>>> cases is a good thing. >>>> What I'm concerned about is traceability; or- how can we track >>>> issues coming from the field >>>> when function calls and line numbers in the stack trace will not >>>> match the code we know. 
>>>> >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Wed Feb 1 07:16:34 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 01 Feb 2012 09:16:34 +0200 Subject: [Engine-devel] Simplifying our POJOs In-Reply-To: <4F28E601.9010800@redhat.com> References: <89269de6-728f-4278-88b0-aef60a6240f8@zmail14.collab.prod.int.phx2.redhat.com> <4F28E2D4.5000803@redhat.com> <4F28E601.9010800@redhat.com> Message-ID: <4F28E6D2.1050204@redhat.com> On 01/02/12 09:13, Yair Zaslavsky wrote: > On 02/01/2012 08:59 AM, Livnat Peer wrote: >> On 01/02/12 08:03, Mike Kolesnik wrote: >>> >>> ----- Original Message ----- >>>> On 01/31/2012 12:45 PM, Doron Fediuck wrote: >>>>> On 31/01/12 12:39, Livnat Peer wrote: >>>>>> On 31/01/12 12:02, Mike Kolesnik wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Today many POJO >>>>>>> s >>>>>>> are used throughout the system to convey data: >>>>>>> >>>>>>> * Parameters - To send data to commands. >>>>>>> * Business Entities - To transfer data in the parameters & >>>>>>> to/from >>>>>>> the DB. >>>>>>> >>>>>>> These POJOs are (usually) very verbose and full of boilerplate >>>>>>> code >>>>>>> . >>>>>>> >>>>>>> This, in turn, reduces their readability and maintainability for >>>>>>> a >>>>>>> couple of reasons (that I can think of): >>>>>>> >>>>>>> * It's hard to know what does what: >>>>>>> o Who participates in equals/hashCode? >>>>>>> o What fields are printed in toString? >>>>>>> * Consistency is problematic: >>>>>>> o A field may be part of equals but not hashCode, or vice >>>>>>> versa. >>>>>>> o This breaks the Object.hashCode() >>>>>>> >>>>>>> contract! >>>>>>> * Adding/Removing fields take more time since you need to >>>>>>> synchronize >>>>>>> the change to all boilerplate methods. >>>>>>> o Again, we're facing the consistency problem. >>>>>>> * These simple classes tend to be very long and not very >>>>>>> readable. >>>>>>> * Boilerplate code makes it harder to find out which methods >>>>>>> *don't* >>>>>>> behave the default way. >>>>>>> * Javadoc, if existent, is usually meaningless (but you might >>>>>>> see some >>>>>>> banal documentation that doesn't add any real value). >>>>>>> * Our existing classes are not up to standard! >>>>>>> >>>>>>> >>>>>>> So what can be done to remedy the situation? >>>>>>> >>>>>>> We could, of course, try to simplify the classes as much as we >>>>>>> can and >>>>>>> maybe address some of the issues. >>>>>>> This won't alleviate the boilerplate code problem altogether, >>>>>>> though. >>>>>>> >>>>>>> We could write annotations to do some of the things for us >>>>>>> automatically. >>>>>>> The easiest approach would be runtime-based, and would hinder >>>>>>> performance. >>>>>>> This also means we need to maintain this "infrastructure" and all >>>>>>> the >>>>>>> implications of such a decision. >>>>>>> >>>>>>> >>>>>>> Luckily, there is a much easier solution: Someone else already >>>>>>> did it! >>>>>>> >>>>>>> Check out Project Lombok: http://projectlombok.org >>>>>>> What Lombok gives us, among some other things, is a way to >>>>>>> greatly >>>>>>> simplify our POJOs by using annotations to get the boilerplate >>>>>>> code >>>>>>> automatically generated. 
>>>>>>> This means we get the benefit of annotations which would simplify >>>>>>> the >>>>>>> code a whole lot, while not imposing a performance cost (since >>>>>>> the >>>>>>> boilerplate code is generated during compilation). >>>>>>> However, it's also possible to create the methods yourself if you >>>>>>> want >>>>>>> them to behave differently. >>>>>>> Outside the POJO itself, you would see it as you would always see >>>>>>> it. >>>>>>> >>>>>>> So what are the downsides to this approach? >>>>>>> >>>>>>> * First of all, Lombok provides also some other capabilities >>>>>>> which I'm >>>>>>> not sure are required/wanted at this time. >>>>>>> o That's why I propose we use it for commons project, and >>>>>>> make use >>>>>>> of it's POJO-related annotations ONLY. >>>>>>> * There might be a problem debugging the code since it's >>>>>>> auto-generated. >>>>>>> o I think this is rather negligible, since usually you >>>>>>> don't debug >>>>>>> POJOs anyway. >>>>>>> * There might be a problem if the auto-generated code throws an >>>>>>> Exception. >>>>>>> o As before, I'm rather sure this is an edge-case which we >>>>>>> usually >>>>>>> won't hit (if at all). >>>>>>> >>>>>>> >>>>>>> Even given these possible downsides, I think that we would >>>>>>> benefit >>>>>>> greatly if we would introduce this library. >>>>>>> >>>>>>> If you have any questions, you're welcome to study out the >>>>>>> project site >>>>>>> which has very thorough documentation: http://projectlombok.org >>>>>>> >>>>>>> Your thoughts on the matter? >>>>>>> >>>>>> >>>>>> - I think an example of before/after pojo would help demonstrating >>>>>> how >>>>>> good the framework is. >>>>>> >>>>>> - Would it work when adding JPA annotations? >>>> I suspect that yes (needs to be checked) >>>> Will it work with GWT (if we create new business entity that needs to >>>> be >>>> exposed to GWT guys) ? >>> >>> As it is stated on the site, it supports GWT. >>> >> >> Since this package is required only during compile time it is relatively >> easy to push it in. >> Need to make sure it is working nice with debugging and give it a try. >> >> I like this package, >> +1 from me. >> > Another issue to check - (I'm sure it does, but still) - > Are empty CTORs generated as well? (There is a long debate for POJOs > that contain X fields whether they should have an empty CTOR, as usage > of empty CTOR may yield to potential bugs (logically speaking) of > "partial state") - Unfortunately, some frameworks require existence of > empty CTOR (I admit, still haven't look at the site thoroughly, so I'm > just sharing here thoughts of what should we check for). > > > Yair > It seems like you can define what ever you like - @NoArgsConstructor, @RequiredArgsConstructor @AllArgsConstructor Livnat >> >>>>>> >>>>>>> Regards, >>>>>>> Mike >>>>>>> >>>>> >>>>> Watching the demo it looks like we'll get less code, which in many >>>>> cases is a good thing. >>>>> What I'm concerned about is traceability; or- how can we track >>>>> issues coming from the field >>>>> when function calls and line numbers in the stack trace will not >>>>> match the code we know. 
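To make the constructor point concrete, here is a small sketch of how the three annotations behave on a hypothetical class (the names are invented; this is not engine code):

    import lombok.AccessLevel;
    import lombok.AllArgsConstructor;
    import lombok.NoArgsConstructor;
    import lombok.NonNull;
    import lombok.RequiredArgsConstructor;

    // Illustration only: the constructors Lombok would generate for a hypothetical parameters class.
    @NoArgsConstructor(access = AccessLevel.PROTECTED) // empty ctor, restricted to the frameworks that need it
    @RequiredArgsConstructor                           // ctor taking the @NonNull fields: (id, name)
    @AllArgsConstructor                                // ctor taking every field: (id, name, description)
    public class VmParametersExample {
        @NonNull
        private String id;
        @NonNull
        private String name;
        private String description;
    }

Keeping the generated empty constructor non-public is one way to limit the "partial state" risk Yair mentions while still satisfying frameworks that require a no-args constructor.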
>>>>> >>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > From abaron at redhat.com Wed Feb 1 09:41:43 2012 From: abaron at redhat.com (Ayal Baron) Date: Wed, 01 Feb 2012 04:41:43 -0500 (EST) Subject: [Engine-devel] ovirt core MOM In-Reply-To: <4F1E9BA0.1050300@redhat.com> Message-ID: <9eb9ade8-0cd2-48d8-94f5-fc5d963cef53@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 01/23/2012 11:01 PM, Ayal Baron wrote: > > > > > > ----- Original Message ----- > >> > >> > >> ----- Original Message ----- > >>> From: "Ayal Baron" > >>> To: "Itamar Heim" > >>> Cc: engine-devel at ovirt.org, "Miki Kenneth" > >>> Sent: Sunday, January 22, 2012 11:19:03 AM > >>> Subject: Re: [Engine-devel] ovirt core MOM > >>> > >>> > >>> > >>> ----- Original Message ----- > >>>> On 01/20/2012 11:42 PM, Miki Kenneth wrote: > >>>>> > >>>>> > >>>>> ----- Original Message ----- > >>>>>> From: "Itamar Heim" > >>>>>> To: "Ayal Baron" > >>>>>> Cc: engine-devel at ovirt.org > >>>>>> Sent: Friday, January 20, 2012 2:12:27 AM > >>>>>> Subject: Re: [Engine-devel] ovirt core MOM > >>>>>> > >>>>>> On 01/19/2012 11:58 AM, Ayal Baron wrote: > >>>>>>> > >>>>>>> > >>>>>>> ----- Original Message ----- > >>>>>>>> On 01/18/2012 05:53 PM, Livnat Peer wrote: > >>>>>>>>> Hi All, > >>>>>>>>> > >>>>>>>>> This is what we've discussed in the meeting today: > >>>>>>>>> > >>>>>>>>> Multiple storage domain: > >>>>>>>>> - Should have a single generic verb for removing a disk. > >>>>>>>>> - We block removing the last template disk - template is > >>>>>>>>> immutable. > >>>>>>>> > >>>>>>>> but it will be deleted when deleting the template, right? > >>>>>>> > >>>>>>> Of course. > >>>>>>> The point is that the template is an immutable object and > >>>>>>> should > >>>>>>> not change (until we support editing a template at which > >>>>>>> point > >>>>>>> the > >>>>>>> user would have to change the template to edit mode before > >>>>>>> being > >>>>>>> able to make such changes and maybe also be able to run it > >>>>>>> and > >>>>>>> make changes internally?). > >>>>>> > >>>>>> When i hear "edit a template" i don't expect replacing the > >>>>>> files. > >>>>>> I expect a new edition of disks appearing as a new version of > >>>>>> the > >>>>>> template. but they don't have to derive from same original > >>>>>> template. > >>>>>> say i want to create a "Fedora 16 template", then update it > >>>>>> every > >>>>>> month > >>>>>> with latest "yum update". > >>>>>> it doesn't matter if i use a VM from same template or just > >>>>>> create > >>>>>> a > >>>>>> new one. > >>>>>> then specify it is V2 of the "Fedora 16 template". > >>>>>> when someone creates a VM from this template, default version > >>>>>> would > >>>>>> be > >>>>>> latest (but we can let them choose specific older versions as > >>>>>> well) > >>>>> +1. Nicely put. > >>>>> And just to add another common use case is the pool usage. > >>>>> When we creating stateless VM pool from the template, > >>>>> it would be nice to be able to update the template to V2, > >>>>> and have all the newly created VMs dynamically based to the new > >>>>> template. 
> >>>> > >>>> that is indeed where i was going with it as well, but not as > >>>> trivial, > >>>> since need to wait for VMs to stop and return to pool and create > >>>> new > >>>> ones and remove old ones. > >>>> also, creating new ones may involve an admin action of first > >>>> boot > >>>> + > >>>> take > >>>> of first snapshot > >>>> > >>>> (hence i stopped the previous description before this part, but > >>>> since > >>>> you opened the door...) > >>> > >>> Yes, but this all goes to template versioning (which is great and > >>> we > >>> need). > >>> For the user though, creating a new template version like you > >>> described would be a long and wasteful process, and is not what > >>> I'm > >>> talking about. > >>> > >>> Unless we support nested templates (second template would be a > >>> snapshot over the first one), then we're likely to require way > >>> too > >>> much space and creation process would be too slow (having to copy > >>> over all the bits). > >>> I think the pool example is an excellent example where I would > >>> not > >>> want to have 2 copies of the template where the only difference > >>> between them is a set of security patches I've applied to the new > >>> template. > >> Not sure I understand how you do that while vms are still running > >> on > >> the original template? > > > > They either: > > 1. wouldn't be (if changes are in place) > > 2. if we support template over template (from snapshot) then no > > issue at all. > > Once all VMs stop running on previous template we can live > > merge the 2. > > > >>> > >>> So the 2 options are for what I'm suggesting are: > >>> 1. decommission the old template by making in place changes > >>> 2. support template snapshots > >> Not sure how this will work and what use case it serves? > > > > number 1: changing the template for stateless pools. > > number 2: for anything you want including template versioning. > > Template versioning should have 2 flavours: > > 1. my golden image is outdated and I would like to decommission it > > and replace with a new one created from scratch (i.e. same name, > > new VMs would be derived from new template, no data dedup). > > 2. my golden image is outdated and I would like to update it > > internally - create a VM from it, make the changes, seal this VM > > as the new version of the template (not using the process we have > > today which copies all the data, just change it to be immutable). > > > > The latter requires supporting trees. > > use case wise, #1 is easier, and covers both use cases - it only > varies > in amount of IO/Space, so when someone tackles this implementation > wise, > I'd vote for doing #1 first. > No, it varies in amount of time and complexity for user. It might also be quite complex to create the same image again. To this I can only say 'provisioning provisioning provisioning'. The point is to make the user's life easier and making provisioning a breeze, forcing #1 is going in the opposite direction. 
From iheim at redhat.com Wed Feb 1 10:16:39 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 01 Feb 2012 05:16:39 -0500 (EST) Subject: [Engine-devel] ovirt core MOM In-Reply-To: <9eb9ade8-0cd2-48d8-94f5-fc5d963cef53@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: ----- Original Message ----- > From: "Ayal Baron" > To: "Itamar Heim" > Cc: "Miki Kenneth" , engine-devel at ovirt.org > Sent: Wednesday, February 1, 2012 11:41:43 AM > Subject: Re: [Engine-devel] ovirt core MOM > > > > ----- Original Message ----- > > On 01/23/2012 11:01 PM, Ayal Baron wrote: > > > > > > > > > ----- Original Message ----- > > >> > > >> > > >> ----- Original Message ----- > > >>> From: "Ayal Baron" > > >>> To: "Itamar Heim" > > >>> Cc: engine-devel at ovirt.org, "Miki Kenneth" > > >>> Sent: Sunday, January 22, 2012 11:19:03 AM > > >>> Subject: Re: [Engine-devel] ovirt core MOM > > >>> > > >>> > > >>> > > >>> ----- Original Message ----- > > >>>> On 01/20/2012 11:42 PM, Miki Kenneth wrote: > > >>>>> > > >>>>> > > >>>>> ----- Original Message ----- > > >>>>>> From: "Itamar Heim" > > >>>>>> To: "Ayal Baron" > > >>>>>> Cc: engine-devel at ovirt.org > > >>>>>> Sent: Friday, January 20, 2012 2:12:27 AM > > >>>>>> Subject: Re: [Engine-devel] ovirt core MOM > > >>>>>> > > >>>>>> On 01/19/2012 11:58 AM, Ayal Baron wrote: > > >>>>>>> > > >>>>>>> > > >>>>>>> ----- Original Message ----- > > >>>>>>>> On 01/18/2012 05:53 PM, Livnat Peer wrote: > > >>>>>>>>> Hi All, > > >>>>>>>>> > > >>>>>>>>> This is what we've discussed in the meeting today: > > >>>>>>>>> > > >>>>>>>>> Multiple storage domain: > > >>>>>>>>> - Should have a single generic verb for removing a disk. > > >>>>>>>>> - We block removing the last template disk - template is > > >>>>>>>>> immutable. > > >>>>>>>> > > >>>>>>>> but it will be deleted when deleting the template, right? > > >>>>>>> > > >>>>>>> Of course. > > >>>>>>> The point is that the template is an immutable object and > > >>>>>>> should > > >>>>>>> not change (until we support editing a template at which > > >>>>>>> point > > >>>>>>> the > > >>>>>>> user would have to change the template to edit mode before > > >>>>>>> being > > >>>>>>> able to make such changes and maybe also be able to run it > > >>>>>>> and > > >>>>>>> make changes internally?). > > >>>>>> > > >>>>>> When i hear "edit a template" i don't expect replacing the > > >>>>>> files. > > >>>>>> I expect a new edition of disks appearing as a new version > > >>>>>> of > > >>>>>> the > > >>>>>> template. but they don't have to derive from same original > > >>>>>> template. > > >>>>>> say i want to create a "Fedora 16 template", then update it > > >>>>>> every > > >>>>>> month > > >>>>>> with latest "yum update". > > >>>>>> it doesn't matter if i use a VM from same template or just > > >>>>>> create > > >>>>>> a > > >>>>>> new one. > > >>>>>> then specify it is V2 of the "Fedora 16 template". > > >>>>>> when someone creates a VM from this template, default > > >>>>>> version > > >>>>>> would > > >>>>>> be > > >>>>>> latest (but we can let them choose specific older versions > > >>>>>> as > > >>>>>> well) > > >>>>> +1. Nicely put. > > >>>>> And just to add another common use case is the pool usage. > > >>>>> When we creating stateless VM pool from the template, > > >>>>> it would be nice to be able to update the template to V2, > > >>>>> and have all the newly created VMs dynamically based to the > > >>>>> new > > >>>>> template. 
> > >>>> > > >>>> that is indeed where i was going with it as well, but not as > > >>>> trivial, > > >>>> since need to wait for VMs to stop and return to pool and > > >>>> create > > >>>> new > > >>>> ones and remove old ones. > > >>>> also, creating new ones may involve an admin action of first > > >>>> boot > > >>>> + > > >>>> take > > >>>> of first snapshot > > >>>> > > >>>> (hence i stopped the previous description before this part, > > >>>> but > > >>>> since > > >>>> you opened the door...) > > >>> > > >>> Yes, but this all goes to template versioning (which is great > > >>> and > > >>> we > > >>> need). > > >>> For the user though, creating a new template version like you > > >>> described would be a long and wasteful process, and is not what > > >>> I'm > > >>> talking about. > > >>> > > >>> Unless we support nested templates (second template would be a > > >>> snapshot over the first one), then we're likely to require way > > >>> too > > >>> much space and creation process would be too slow (having to > > >>> copy > > >>> over all the bits). > > >>> I think the pool example is an excellent example where I would > > >>> not > > >>> want to have 2 copies of the template where the only difference > > >>> between them is a set of security patches I've applied to the > > >>> new > > >>> template. > > >> Not sure I understand how you do that while vms are still > > >> running > > >> on > > >> the original template? > > > > > > They either: > > > 1. wouldn't be (if changes are in place) > > > 2. if we support template over template (from snapshot) then no > > > issue at all. > > > Once all VMs stop running on previous template we can live > > > merge the 2. > > > > > >>> > > >>> So the 2 options are for what I'm suggesting are: > > >>> 1. decommission the old template by making in place changes > > >>> 2. support template snapshots > > >> Not sure how this will work and what use case it serves? > > > > > > number 1: changing the template for stateless pools. > > > number 2: for anything you want including template versioning. > > > Template versioning should have 2 flavours: > > > 1. my golden image is outdated and I would like to decommission > > > it > > > and replace with a new one created from scratch (i.e. same name, > > > new VMs would be derived from new template, no data dedup). > > > 2. my golden image is outdated and I would like to update it > > > internally - create a VM from it, make the changes, seal this VM > > > as the new version of the template (not using the process we have > > > today which copies all the data, just change it to be immutable). > > > > > > The latter requires supporting trees. > > > > use case wise, #1 is easier, and covers both use cases - it only > > varies > > in amount of IO/Space, so when someone tackles this implementation > > wise, > > I'd vote for doing #1 first. > > > No, it varies in amount of time and complexity for user. > It might also be quite complex to create the same image again. > To this I can only say 'provisioning provisioning provisioning'. > The point is to make the user's life easier and making provisioning a > breeze, forcing #1 is going in the opposite direction. > > #2 does not solve #1. #2 allows doing part of #1 in a more efficient way. so there is no reason to not do #1. 
(there is also no reason to not do #2) From abaron at redhat.com Wed Feb 1 10:26:12 2012 From: abaron at redhat.com (Ayal Baron) Date: Wed, 01 Feb 2012 05:26:12 -0500 (EST) Subject: [Engine-devel] ovirt core MOM In-Reply-To: Message-ID: <929c1a9c-acfb-4a33-92b5-590a72a0d687@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > > > ----- Original Message ----- > > From: "Ayal Baron" > > To: "Itamar Heim" > > Cc: "Miki Kenneth" , engine-devel at ovirt.org > > Sent: Wednesday, February 1, 2012 11:41:43 AM > > Subject: Re: [Engine-devel] ovirt core MOM > > > > > > > > ----- Original Message ----- > > > On 01/23/2012 11:01 PM, Ayal Baron wrote: > > > > > > > > > > > > ----- Original Message ----- > > > >> > > > >> > > > >> ----- Original Message ----- > > > >>> From: "Ayal Baron" > > > >>> To: "Itamar Heim" > > > >>> Cc: engine-devel at ovirt.org, "Miki > > > >>> Kenneth" > > > >>> Sent: Sunday, January 22, 2012 11:19:03 AM > > > >>> Subject: Re: [Engine-devel] ovirt core MOM > > > >>> > > > >>> > > > >>> > > > >>> ----- Original Message ----- > > > >>>> On 01/20/2012 11:42 PM, Miki Kenneth wrote: > > > >>>>> > > > >>>>> > > > >>>>> ----- Original Message ----- > > > >>>>>> From: "Itamar Heim" > > > >>>>>> To: "Ayal Baron" > > > >>>>>> Cc: engine-devel at ovirt.org > > > >>>>>> Sent: Friday, January 20, 2012 2:12:27 AM > > > >>>>>> Subject: Re: [Engine-devel] ovirt core MOM > > > >>>>>> > > > >>>>>> On 01/19/2012 11:58 AM, Ayal Baron wrote: > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> ----- Original Message ----- > > > >>>>>>>> On 01/18/2012 05:53 PM, Livnat Peer wrote: > > > >>>>>>>>> Hi All, > > > >>>>>>>>> > > > >>>>>>>>> This is what we've discussed in the meeting today: > > > >>>>>>>>> > > > >>>>>>>>> Multiple storage domain: > > > >>>>>>>>> - Should have a single generic verb for removing a > > > >>>>>>>>> disk. > > > >>>>>>>>> - We block removing the last template disk - template > > > >>>>>>>>> is > > > >>>>>>>>> immutable. > > > >>>>>>>> > > > >>>>>>>> but it will be deleted when deleting the template, > > > >>>>>>>> right? > > > >>>>>>> > > > >>>>>>> Of course. > > > >>>>>>> The point is that the template is an immutable object and > > > >>>>>>> should > > > >>>>>>> not change (until we support editing a template at which > > > >>>>>>> point > > > >>>>>>> the > > > >>>>>>> user would have to change the template to edit mode > > > >>>>>>> before > > > >>>>>>> being > > > >>>>>>> able to make such changes and maybe also be able to run > > > >>>>>>> it > > > >>>>>>> and > > > >>>>>>> make changes internally?). > > > >>>>>> > > > >>>>>> When i hear "edit a template" i don't expect replacing the > > > >>>>>> files. > > > >>>>>> I expect a new edition of disks appearing as a new version > > > >>>>>> of > > > >>>>>> the > > > >>>>>> template. but they don't have to derive from same original > > > >>>>>> template. > > > >>>>>> say i want to create a "Fedora 16 template", then update > > > >>>>>> it > > > >>>>>> every > > > >>>>>> month > > > >>>>>> with latest "yum update". > > > >>>>>> it doesn't matter if i use a VM from same template or just > > > >>>>>> create > > > >>>>>> a > > > >>>>>> new one. > > > >>>>>> then specify it is V2 of the "Fedora 16 template". > > > >>>>>> when someone creates a VM from this template, default > > > >>>>>> version > > > >>>>>> would > > > >>>>>> be > > > >>>>>> latest (but we can let them choose specific older versions > > > >>>>>> as > > > >>>>>> well) > > > >>>>> +1. Nicely put. 
> > > >>>>> And just to add another common use case is the pool usage. > > > >>>>> When we creating stateless VM pool from the template, > > > >>>>> it would be nice to be able to update the template to V2, > > > >>>>> and have all the newly created VMs dynamically based to the > > > >>>>> new > > > >>>>> template. > > > >>>> > > > >>>> that is indeed where i was going with it as well, but not as > > > >>>> trivial, > > > >>>> since need to wait for VMs to stop and return to pool and > > > >>>> create > > > >>>> new > > > >>>> ones and remove old ones. > > > >>>> also, creating new ones may involve an admin action of first > > > >>>> boot > > > >>>> + > > > >>>> take > > > >>>> of first snapshot > > > >>>> > > > >>>> (hence i stopped the previous description before this part, > > > >>>> but > > > >>>> since > > > >>>> you opened the door...) > > > >>> > > > >>> Yes, but this all goes to template versioning (which is great > > > >>> and > > > >>> we > > > >>> need). > > > >>> For the user though, creating a new template version like you > > > >>> described would be a long and wasteful process, and is not > > > >>> what > > > >>> I'm > > > >>> talking about. > > > >>> > > > >>> Unless we support nested templates (second template would be > > > >>> a > > > >>> snapshot over the first one), then we're likely to require > > > >>> way > > > >>> too > > > >>> much space and creation process would be too slow (having to > > > >>> copy > > > >>> over all the bits). > > > >>> I think the pool example is an excellent example where I > > > >>> would > > > >>> not > > > >>> want to have 2 copies of the template where the only > > > >>> difference > > > >>> between them is a set of security patches I've applied to the > > > >>> new > > > >>> template. > > > >> Not sure I understand how you do that while vms are still > > > >> running > > > >> on > > > >> the original template? > > > > > > > > They either: > > > > 1. wouldn't be (if changes are in place) > > > > 2. if we support template over template (from snapshot) then no > > > > issue at all. > > > > Once all VMs stop running on previous template we can live > > > > merge the 2. > > > > > > > >>> > > > >>> So the 2 options are for what I'm suggesting are: > > > >>> 1. decommission the old template by making in place changes > > > >>> 2. support template snapshots > > > >> Not sure how this will work and what use case it serves? > > > > > > > > number 1: changing the template for stateless pools. > > > > number 2: for anything you want including template versioning. > > > > Template versioning should have 2 flavours: > > > > 1. my golden image is outdated and I would like to decommission > > > > it > > > > and replace with a new one created from scratch (i.e. same > > > > name, > > > > new VMs would be derived from new template, no data dedup). > > > > 2. my golden image is outdated and I would like to update it > > > > internally - create a VM from it, make the changes, seal this > > > > VM > > > > as the new version of the template (not using the process we > > > > have > > > > today which copies all the data, just change it to be > > > > immutable). > > > > > > > > The latter requires supporting trees. > > > > > > use case wise, #1 is easier, and covers both use cases - it only > > > varies > > > in amount of IO/Space, so when someone tackles this > > > implementation > > > wise, > > > I'd vote for doing #1 first. > > > > > No, it varies in amount of time and complexity for user. > > It might also be quite complex to create the same image again. 
> > To this I can only say 'provisioning provisioning provisioning'. > > The point is to make the user's life easier and making provisioning > > a > > breeze, forcing #1 is going in the opposite direction. > > > > > > #2 does not solve #1. > #2 allows doing part of #1 in a more efficient way. > so there is no reason to not do #1. > (there is also no reason to not do #2) > I agree, that is why I said template versioning should have both. From mkenneth at redhat.com Wed Feb 1 12:16:27 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Wed, 01 Feb 2012 07:16:27 -0500 (EST) Subject: [Engine-devel] ovirt core MOM In-Reply-To: <929c1a9c-acfb-4a33-92b5-590a72a0d687@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <12dd4864-0397-4778-b1fa-332f41ea6f48@mkenneth.csb> ----- Original Message ----- > From: "Ayal Baron" > To: "Itamar Heim" > Cc: "Miki Kenneth" , engine-devel at ovirt.org > Sent: Wednesday, February 1, 2012 12:26:12 PM > Subject: Re: [Engine-devel] ovirt core MOM > > > > ----- Original Message ----- > > > > > > ----- Original Message ----- > > > From: "Ayal Baron" > > > To: "Itamar Heim" > > > Cc: "Miki Kenneth" , engine-devel at ovirt.org > > > Sent: Wednesday, February 1, 2012 11:41:43 AM > > > Subject: Re: [Engine-devel] ovirt core MOM > > > > > > > > > > > > ----- Original Message ----- > > > > On 01/23/2012 11:01 PM, Ayal Baron wrote: > > > > > > > > > > > > > > > ----- Original Message ----- > > > > >> > > > > >> > > > > >> ----- Original Message ----- > > > > >>> From: "Ayal Baron" > > > > >>> To: "Itamar Heim" > > > > >>> Cc: engine-devel at ovirt.org, "Miki > > > > >>> Kenneth" > > > > >>> Sent: Sunday, January 22, 2012 11:19:03 AM > > > > >>> Subject: Re: [Engine-devel] ovirt core MOM > > > > >>> > > > > >>> > > > > >>> > > > > >>> ----- Original Message ----- > > > > >>>> On 01/20/2012 11:42 PM, Miki Kenneth wrote: > > > > >>>>> > > > > >>>>> > > > > >>>>> ----- Original Message ----- > > > > >>>>>> From: "Itamar Heim" > > > > >>>>>> To: "Ayal Baron" > > > > >>>>>> Cc: engine-devel at ovirt.org > > > > >>>>>> Sent: Friday, January 20, 2012 2:12:27 AM > > > > >>>>>> Subject: Re: [Engine-devel] ovirt core MOM > > > > >>>>>> > > > > >>>>>> On 01/19/2012 11:58 AM, Ayal Baron wrote: > > > > >>>>>>> > > > > >>>>>>> > > > > >>>>>>> ----- Original Message ----- > > > > >>>>>>>> On 01/18/2012 05:53 PM, Livnat Peer wrote: > > > > >>>>>>>>> Hi All, > > > > >>>>>>>>> > > > > >>>>>>>>> This is what we've discussed in the meeting today: > > > > >>>>>>>>> > > > > >>>>>>>>> Multiple storage domain: > > > > >>>>>>>>> - Should have a single generic verb for removing a > > > > >>>>>>>>> disk. > > > > >>>>>>>>> - We block removing the last template disk - template > > > > >>>>>>>>> is > > > > >>>>>>>>> immutable. > > > > >>>>>>>> > > > > >>>>>>>> but it will be deleted when deleting the template, > > > > >>>>>>>> right? > > > > >>>>>>> > > > > >>>>>>> Of course. > > > > >>>>>>> The point is that the template is an immutable object > > > > >>>>>>> and > > > > >>>>>>> should > > > > >>>>>>> not change (until we support editing a template at > > > > >>>>>>> which > > > > >>>>>>> point > > > > >>>>>>> the > > > > >>>>>>> user would have to change the template to edit mode > > > > >>>>>>> before > > > > >>>>>>> being > > > > >>>>>>> able to make such changes and maybe also be able to run > > > > >>>>>>> it > > > > >>>>>>> and > > > > >>>>>>> make changes internally?). > > > > >>>>>> > > > > >>>>>> When i hear "edit a template" i don't expect replacing > > > > >>>>>> the > > > > >>>>>> files. 
> > > > >>>>>> I expect a new edition of disks appearing as a new > > > > >>>>>> version > > > > >>>>>> of > > > > >>>>>> the > > > > >>>>>> template. but they don't have to derive from same > > > > >>>>>> original > > > > >>>>>> template. > > > > >>>>>> say i want to create a "Fedora 16 template", then update > > > > >>>>>> it > > > > >>>>>> every > > > > >>>>>> month > > > > >>>>>> with latest "yum update". > > > > >>>>>> it doesn't matter if i use a VM from same template or > > > > >>>>>> just > > > > >>>>>> create > > > > >>>>>> a > > > > >>>>>> new one. > > > > >>>>>> then specify it is V2 of the "Fedora 16 template". > > > > >>>>>> when someone creates a VM from this template, default > > > > >>>>>> version > > > > >>>>>> would > > > > >>>>>> be > > > > >>>>>> latest (but we can let them choose specific older > > > > >>>>>> versions > > > > >>>>>> as > > > > >>>>>> well) > > > > >>>>> +1. Nicely put. > > > > >>>>> And just to add another common use case is the pool > > > > >>>>> usage. > > > > >>>>> When we creating stateless VM pool from the template, > > > > >>>>> it would be nice to be able to update the template to V2, > > > > >>>>> and have all the newly created VMs dynamically based to > > > > >>>>> the > > > > >>>>> new > > > > >>>>> template. > > > > >>>> > > > > >>>> that is indeed where i was going with it as well, but not > > > > >>>> as > > > > >>>> trivial, > > > > >>>> since need to wait for VMs to stop and return to pool and > > > > >>>> create > > > > >>>> new > > > > >>>> ones and remove old ones. > > > > >>>> also, creating new ones may involve an admin action of > > > > >>>> first > > > > >>>> boot > > > > >>>> + > > > > >>>> take > > > > >>>> of first snapshot > > > > >>>> > > > > >>>> (hence i stopped the previous description before this > > > > >>>> part, > > > > >>>> but > > > > >>>> since > > > > >>>> you opened the door...) > > > > >>> > > > > >>> Yes, but this all goes to template versioning (which is > > > > >>> great > > > > >>> and > > > > >>> we > > > > >>> need). > > > > >>> For the user though, creating a new template version like > > > > >>> you > > > > >>> described would be a long and wasteful process, and is not > > > > >>> what > > > > >>> I'm > > > > >>> talking about. > > > > >>> > > > > >>> Unless we support nested templates (second template would > > > > >>> be > > > > >>> a > > > > >>> snapshot over the first one), then we're likely to require > > > > >>> way > > > > >>> too > > > > >>> much space and creation process would be too slow (having > > > > >>> to > > > > >>> copy > > > > >>> over all the bits). > > > > >>> I think the pool example is an excellent example where I > > > > >>> would > > > > >>> not > > > > >>> want to have 2 copies of the template where the only > > > > >>> difference > > > > >>> between them is a set of security patches I've applied to > > > > >>> the > > > > >>> new > > > > >>> template. > > > > >> Not sure I understand how you do that while vms are still > > > > >> running > > > > >> on > > > > >> the original template? > > > > > > > > > > They either: > > > > > 1. wouldn't be (if changes are in place) > > > > > 2. if we support template over template (from snapshot) then > > > > > no > > > > > issue at all. > > > > > Once all VMs stop running on previous template we can > > > > > live > > > > > merge the 2. > > > > > > > > > >>> > > > > >>> So the 2 options are for what I'm suggesting are: > > > > >>> 1. decommission the old template by making in place changes > > > > >>> 2. 
support template snapshots > > > > >> Not sure how this will work and what use case it serves? > > > > > > > > > > number 1: changing the template for stateless pools. > > > > > number 2: for anything you want including template > > > > > versioning. > > > > > Template versioning should have 2 flavours: > > > > > 1. my golden image is outdated and I would like to > > > > > decommission > > > > > it > > > > > and replace with a new one created from scratch (i.e. same > > > > > name, > > > > > new VMs would be derived from new template, no data dedup). > > > > > 2. my golden image is outdated and I would like to update it > > > > > internally - create a VM from it, make the changes, seal this > > > > > VM > > > > > as the new version of the template (not using the process we > > > > > have > > > > > today which copies all the data, just change it to be > > > > > immutable). > > > > > > > > > > The latter requires supporting trees. > > > > > > > > use case wise, #1 is easier, and covers both use cases - it > > > > only > > > > varies > > > > in amount of IO/Space, so when someone tackles this > > > > implementation > > > > wise, > > > > I'd vote for doing #1 first. > > > > > > > No, it varies in amount of time and complexity for user. > > > It might also be quite complex to create the same image again. > > > To this I can only say 'provisioning provisioning provisioning'. > > > The point is to make the user's life easier and making > > > provisioning > > > a > > > breeze, forcing #1 is going in the opposite direction. > > > > > > > > > > #2 does not solve #1. > > #2 allows doing part of #1 in a more efficient way. > > so there is no reason to not do #1. > > (there is also no reason to not do #2) > > > > I agree, that is why I said template versioning should have both. Decision? 
> From iheim at redhat.com Wed Feb 1 12:19:15 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 01 Feb 2012 14:19:15 +0200 Subject: [Engine-devel] ovirt core MOM In-Reply-To: <12dd4864-0397-4778-b1fa-332f41ea6f48@mkenneth.csb> References: <12dd4864-0397-4778-b1fa-332f41ea6f48@mkenneth.csb> Message-ID: <4F292DC3.3060406@redhat.com> On 02/01/2012 02:16 PM, Miki Kenneth wrote: > > > ----- Original Message ----- >> From: "Ayal Baron" >> To: "Itamar Heim" >> Cc: "Miki Kenneth", engine-devel at ovirt.org >> Sent: Wednesday, February 1, 2012 12:26:12 PM >> Subject: Re: [Engine-devel] ovirt core MOM >> >> >> >> ----- Original Message ----- >>> >>> >>> ----- Original Message ----- >>>> From: "Ayal Baron" >>>> To: "Itamar Heim" >>>> Cc: "Miki Kenneth", engine-devel at ovirt.org >>>> Sent: Wednesday, February 1, 2012 11:41:43 AM >>>> Subject: Re: [Engine-devel] ovirt core MOM >>>> >>>> >>>> >>>> ----- Original Message ----- >>>>> On 01/23/2012 11:01 PM, Ayal Baron wrote: >>>>>> >>>>>> >>>>>> ----- Original Message ----- >>>>>>> >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> From: "Ayal Baron" >>>>>>>> To: "Itamar Heim" >>>>>>>> Cc: engine-devel at ovirt.org, "Miki >>>>>>>> Kenneth" >>>>>>>> Sent: Sunday, January 22, 2012 11:19:03 AM >>>>>>>> Subject: Re: [Engine-devel] ovirt core MOM >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> ----- Original Message ----- >>>>>>>>> On 01/20/2012 11:42 PM, Miki Kenneth wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> ----- Original Message ----- >>>>>>>>>>> From: "Itamar Heim" >>>>>>>>>>> To: "Ayal Baron" >>>>>>>>>>> Cc: engine-devel at ovirt.org >>>>>>>>>>> Sent: Friday, January 20, 2012 2:12:27 AM >>>>>>>>>>> Subject: Re: [Engine-devel] ovirt core MOM >>>>>>>>>>> >>>>>>>>>>> On 01/19/2012 11:58 AM, Ayal Baron wrote: >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> ----- Original Message ----- >>>>>>>>>>>>> On 01/18/2012 05:53 PM, Livnat Peer wrote: >>>>>>>>>>>>>> Hi All, >>>>>>>>>>>>>> >>>>>>>>>>>>>> This is what we've discussed in the meeting today: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Multiple storage domain: >>>>>>>>>>>>>> - Should have a single generic verb for removing a >>>>>>>>>>>>>> disk. >>>>>>>>>>>>>> - We block removing the last template disk - template >>>>>>>>>>>>>> is >>>>>>>>>>>>>> immutable. >>>>>>>>>>>>> >>>>>>>>>>>>> but it will be deleted when deleting the template, >>>>>>>>>>>>> right? >>>>>>>>>>>> >>>>>>>>>>>> Of course. >>>>>>>>>>>> The point is that the template is an immutable object >>>>>>>>>>>> and >>>>>>>>>>>> should >>>>>>>>>>>> not change (until we support editing a template at >>>>>>>>>>>> which >>>>>>>>>>>> point >>>>>>>>>>>> the >>>>>>>>>>>> user would have to change the template to edit mode >>>>>>>>>>>> before >>>>>>>>>>>> being >>>>>>>>>>>> able to make such changes and maybe also be able to run >>>>>>>>>>>> it >>>>>>>>>>>> and >>>>>>>>>>>> make changes internally?). >>>>>>>>>>> >>>>>>>>>>> When i hear "edit a template" i don't expect replacing >>>>>>>>>>> the >>>>>>>>>>> files. >>>>>>>>>>> I expect a new edition of disks appearing as a new >>>>>>>>>>> version >>>>>>>>>>> of >>>>>>>>>>> the >>>>>>>>>>> template. but they don't have to derive from same >>>>>>>>>>> original >>>>>>>>>>> template. >>>>>>>>>>> say i want to create a "Fedora 16 template", then update >>>>>>>>>>> it >>>>>>>>>>> every >>>>>>>>>>> month >>>>>>>>>>> with latest "yum update". >>>>>>>>>>> it doesn't matter if i use a VM from same template or >>>>>>>>>>> just >>>>>>>>>>> create >>>>>>>>>>> a >>>>>>>>>>> new one. >>>>>>>>>>> then specify it is V2 of the "Fedora 16 template". 
>>>>>>>>>>> when someone creates a VM from this template, default >>>>>>>>>>> version >>>>>>>>>>> would >>>>>>>>>>> be >>>>>>>>>>> latest (but we can let them choose specific older >>>>>>>>>>> versions >>>>>>>>>>> as >>>>>>>>>>> well) >>>>>>>>>> +1. Nicely put. >>>>>>>>>> And just to add another common use case is the pool >>>>>>>>>> usage. >>>>>>>>>> When we creating stateless VM pool from the template, >>>>>>>>>> it would be nice to be able to update the template to V2, >>>>>>>>>> and have all the newly created VMs dynamically based to >>>>>>>>>> the >>>>>>>>>> new >>>>>>>>>> template. >>>>>>>>> >>>>>>>>> that is indeed where i was going with it as well, but not >>>>>>>>> as >>>>>>>>> trivial, >>>>>>>>> since need to wait for VMs to stop and return to pool and >>>>>>>>> create >>>>>>>>> new >>>>>>>>> ones and remove old ones. >>>>>>>>> also, creating new ones may involve an admin action of >>>>>>>>> first >>>>>>>>> boot >>>>>>>>> + >>>>>>>>> take >>>>>>>>> of first snapshot >>>>>>>>> >>>>>>>>> (hence i stopped the previous description before this >>>>>>>>> part, >>>>>>>>> but >>>>>>>>> since >>>>>>>>> you opened the door...) >>>>>>>> >>>>>>>> Yes, but this all goes to template versioning (which is >>>>>>>> great >>>>>>>> and >>>>>>>> we >>>>>>>> need). >>>>>>>> For the user though, creating a new template version like >>>>>>>> you >>>>>>>> described would be a long and wasteful process, and is not >>>>>>>> what >>>>>>>> I'm >>>>>>>> talking about. >>>>>>>> >>>>>>>> Unless we support nested templates (second template would >>>>>>>> be >>>>>>>> a >>>>>>>> snapshot over the first one), then we're likely to require >>>>>>>> way >>>>>>>> too >>>>>>>> much space and creation process would be too slow (having >>>>>>>> to >>>>>>>> copy >>>>>>>> over all the bits). >>>>>>>> I think the pool example is an excellent example where I >>>>>>>> would >>>>>>>> not >>>>>>>> want to have 2 copies of the template where the only >>>>>>>> difference >>>>>>>> between them is a set of security patches I've applied to >>>>>>>> the >>>>>>>> new >>>>>>>> template. >>>>>>> Not sure I understand how you do that while vms are still >>>>>>> running >>>>>>> on >>>>>>> the original template? >>>>>> >>>>>> They either: >>>>>> 1. wouldn't be (if changes are in place) >>>>>> 2. if we support template over template (from snapshot) then >>>>>> no >>>>>> issue at all. >>>>>> Once all VMs stop running on previous template we can >>>>>> live >>>>>> merge the 2. >>>>>> >>>>>>>> >>>>>>>> So the 2 options are for what I'm suggesting are: >>>>>>>> 1. decommission the old template by making in place changes >>>>>>>> 2. support template snapshots >>>>>>> Not sure how this will work and what use case it serves? >>>>>> >>>>>> number 1: changing the template for stateless pools. >>>>>> number 2: for anything you want including template >>>>>> versioning. >>>>>> Template versioning should have 2 flavours: >>>>>> 1. my golden image is outdated and I would like to >>>>>> decommission >>>>>> it >>>>>> and replace with a new one created from scratch (i.e. same >>>>>> name, >>>>>> new VMs would be derived from new template, no data dedup). >>>>>> 2. my golden image is outdated and I would like to update it >>>>>> internally - create a VM from it, make the changes, seal this >>>>>> VM >>>>>> as the new version of the template (not using the process we >>>>>> have >>>>>> today which copies all the data, just change it to be >>>>>> immutable). >>>>>> >>>>>> The latter requires supporting trees. 
>>>>> >>>>> use case wise, #1 is easier, and covers both use cases - it >>>>> only >>>>> varies >>>>> in amount of IO/Space, so when someone tackles this >>>>> implementation >>>>> wise, >>>>> I'd vote for doing #1 first. >>>>> >>>> No, it varies in amount of time and complexity for user. >>>> It might also be quite complex to create the same image again. >>>> To this I can only say 'provisioning provisioning provisioning'. >>>> The point is to make the user's life easier and making >>>> provisioning >>>> a >>>> breeze, forcing #1 is going in the opposite direction. >>>> >>>> >>> >>> #2 does not solve #1. >>> #2 allows doing part of #1 in a more efficient way. >>> so there is no reason to not do #1. >>> (there is also no reason to not do #2) >>> >> >> I agree, that is why I said template versioning should have both. > Decision? decision about what? this is a theoretical discussion until someone will actively work on the template versioning feature. both modes should be supported. which one first would probably depend on the priorities of the one sending the patches. From lpeer at redhat.com Wed Feb 1 12:26:32 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 01 Feb 2012 14:26:32 +0200 Subject: [Engine-devel] agenda for today's meeting Message-ID: <4F292F78.3010006@redhat.com> Hi All, This week we'll continue the agenda from last meeting: - setupNetworks API - stable device addresses if we have time - Hot plug NIC - Direct LUN - Serialized commands Thanks, Livnat From lpeer at redhat.com Wed Feb 1 15:03:50 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 01 Feb 2012 17:03:50 +0200 Subject: [Engine-devel] agenda for today's meeting In-Reply-To: <4F292F78.3010006@redhat.com> References: <4F292F78.3010006@redhat.com> Message-ID: <4F295456.8000808@redhat.com> On 01/02/12 14:26, Livnat Peer wrote: > Hi All, > This week we'll continue the agenda from last meeting: > > - setupNetworks API > - stable device addresses > > if we have time > > - Hot plug NIC > - Direct LUN > - Serialized commands > > Thanks, Livnat AI from the meeting: 1. Roy - set a meeting on the Network UI. 2. Andrew - send the problematic network configurations, we'll go over these scenarios in the UI meeting. 3. Eli - start a discussion on the need of VM version and if we can use the OVF version as a starting point for this feature. Thanks, Livnat > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ovedo at redhat.com Wed Feb 1 15:57:19 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Wed, 01 Feb 2012 10:57:19 -0500 (EST) Subject: [Engine-devel] Problem working with LDAP domains In-Reply-To: <03c9c00d-6c33-4d11-b44a-4637e7d8d46a@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <3ad8ce0b-b09b-4611-9c80-29126b769b93@zmail02.collab.prod.int.phx2.redhat.com> Hey all, There was an issue a week ago of problem performing DNS SRV records queries from the engine, and it caused issues when working with LDAP. In order to fix these issues please make sure you copy the file: /deployment/modules/sun/jdk/main/module.xml To: $JBOSS_HOME/modules/sun/jdk/main/module.xml You should do that even if you are only working with the internal domain, as one day you might try to work with LDAP one. Note that a fix for that was posted, and it is a part of the RPMs as well, so it is only relevant for people who already have this environment set up, and not for new environments. 
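For background, the engine discovers the LDAP servers of a domain through DNS SRV queries, which go through the JDK's JNDI DNS provider (com.sun.jndi.dns) - presumably the path that the module.xml above has to expose under the application server's modular class loading. A minimal sketch of such a lookup (the domain below is only a placeholder, and this is not necessarily the exact query the engine performs):

import java.util.Hashtable;

import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.InitialDirContext;

public class SrvLookupSketch {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        // The JNDI DNS provider; the deployment can only reach it if the
        // relevant com.sun.* paths are visible to it.
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        env.put("java.naming.provider.url", "dns:");

        InitialDirContext ctx = new InitialDirContext(env);
        // SRV records advertising the domain's LDAP servers (placeholder domain).
        Attribute srv = ctx.getAttributes("_ldap._tcp.example.com",
                new String[] { "SRV" }).get("SRV");
        for (int i = 0; srv != null && i < srv.size(); i++) {
            // Each value is "priority weight port target", e.g. "0 100 389 dc1.example.com."
            System.out.println(srv.get(i));
        }
        ctx.close();
    }
}

If those SRV records cannot be resolved, there are simply no LDAP servers to contact, which matches the symptom described above.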
Thank you,
Oved

From derez at redhat.com  Wed Feb  1 17:04:21 2012
From: derez at redhat.com (Daniel Erez)
Date: Wed, 01 Feb 2012 12:04:21 -0500 (EST)
Subject: [Engine-devel] Floating Disk feature description
In-Reply-To: <812f6939-3416-43ca-8db7-ccbc332e2e71@zmail14.collab.prod.int.phx2.redhat.com>
Message-ID: <7c5ffe6f-5d6d-4ce1-917f-2e1d7b528c51@zmail14.collab.prod.int.phx2.redhat.com>

Hi,

Floating Disk feature description Wiki page:
http://www.ovirt.org/wiki/Features/DetailedFloatingDisk

Best Regards,
Daniel

From emesika at redhat.com  Thu Feb  2 00:56:10 2012
From: emesika at redhat.com (Eli Mesika)
Date: Wed, 01 Feb 2012 19:56:10 -0500 (EST)
Subject: [Engine-devel] oVirt upstream meeting : VM Version
In-Reply-To: <700f08e3-b086-45a4-b24a-46825818f490@zmail13.collab.prod.int.phx2.redhat.com>
Message-ID: 

Hi

We discussed the Stable Device Addresses feature today.
One of the questions that arose from the meeting (and is actually listed as an open issue in the feature wiki) is:
what happens to a 3.1 VM running on a 3.1 cluster when it is moved to a 3.0 cluster?
We found that the VM may lose some configuration data and may even end up corrupted.
From that we concluded that we somehow have to maintain a VM version that will allow us to
block moving a VM if its version is not fully compatible with the target cluster.
One idea for getting the VM version is the OVF, which actually holds an OvfVersion inside its header.
The question is: is the OVF good enough for all our needs, or should we persist the version elsewhere (for example, in the DB)?
Also, are there any other issues/difficulties we may encounter when implementing and storing a VM version?

Keep in mind that this is a new feature that impacts the Stable Device Addresses feature but may be useful/relevant
for other features as well.

Thanks
Eli

From iheim at redhat.com  Thu Feb  2 06:46:46 2012
From: iheim at redhat.com (Itamar Heim)
Date: Thu, 02 Feb 2012 08:46:46 +0200
Subject: [Engine-devel] oVirt upstream meeting : VM Version
In-Reply-To: 
References: 
Message-ID: <4F2A3156.5030808@redhat.com>

On 02/02/2012 02:56 AM, Eli Mesika wrote:
> Hi
>
> We discussed the Stable Device Addresses feature today.
> One of the questions that arose from the meeting (and is actually listed as an open issue in the feature wiki) is:
> what happens to a 3.1 VM running on a 3.1 cluster when it is moved to a 3.0 cluster?
> We found that the VM may lose some configuration data and may even end up corrupted.
> From that we concluded that we somehow have to maintain a VM version that will allow us to
> block moving a VM if its version is not fully compatible with the target cluster.
> One idea for getting the VM version is the OVF, which actually holds an OvfVersion inside its header.
> The question is: is the OVF good enough for all our needs, or should we persist the version elsewhere (for example, in the DB)?
> Also, are there any other issues/difficulties we may encounter when implementing and storing a VM version?
>
> Keep in mind that this is a new feature that impacts the Stable Device Addresses feature but may be useful/relevant
> for other features as well.

Can you give some examples of what would cause an issue when moving a VM from a
3.1 cluster to a 3.0 one?
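For illustration only, the kind of check being discussed - read a version out of the OVF header and compare it against the target cluster's compatibility level - could look roughly like the sketch below. The attribute name ("ovf:version") and the helper names are assumptions, not the engine's actual OVF handling:

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

public class OvfVersionCheckSketch {

    // Reads a version attribute from the OVF envelope's root element.
    // The attribute name is an assumption for illustration purposes only.
    public static String readOvfVersion(File ovfFile) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(ovfFile);
        return doc.getDocumentElement().getAttribute("ovf:version");
    }

    // Block the move when the VM's version is newer than the target cluster's
    // compatibility level, e.g. a "3.1" VM moving to a "3.0" cluster.
    public static boolean canMoveToCluster(String vmVersion, String clusterLevel) {
        return compareVersions(vmVersion, clusterLevel) <= 0;
    }

    private static int compareVersions(String a, String b) {
        String[] as = a.split("\\.");
        String[] bs = b.split("\\.");
        int len = Math.max(as.length, bs.length);
        for (int i = 0; i < len; i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
            if (ai != bi) {
                return ai < bi ? -1 : 1;
            }
        }
        return 0;
    }
}

Whether the version ends up living in the OVF or in the DB, the comparison stays the same; only readOvfVersion() would change.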
From iheim at redhat.com Thu Feb 2 07:08:56 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 02 Feb 2012 09:08:56 +0200 Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <7c5ffe6f-5d6d-4ce1-917f-2e1d7b528c51@zmail14.collab.prod.int.phx2.redhat.com> References: <7c5ffe6f-5d6d-4ce1-917f-2e1d7b528c51@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F2A3688.4040606@redhat.com> On 02/01/2012 07:04 PM, Daniel Erez wrote: > Hi, > > Floating Disk feature description Wiki page: > http://www.ovirt.org/wiki/Features/DetailedFloatingDisk some questions/notes: 1. why do you need a floating/not floating state? isn't a disk floating if it was detached from all VMs? or is that only a helper property to optimize lookups? 2. you mention fields of disks (Floating/Shared/Managed) 2.1 do we have a definition of "Managed" disk somewhere? I assume unmanaged would be a direct LUN, but i think we need a better terminology here. 2.2 same goes for "floating" actually... do we really want to tell the user the disk is "floating"? I guess suggestion welcome for a better name. 2.3 finally, for shared, maybe more interesting is number of VMs the disk is connected to, rather than just a boolean (though i assume this increases complexity for calculation, or redundancy of data, and not a big issue) 3. List of Storage Domains in which the selected Disk resides. this is only relevant for template disks? maybe consider splitting the main grid if looking at tempalte disks or vm disks, and show for vm disks the storage domain in main grid? maybe start with vm disks only and not consider template disks so much? 4. "Templates (visible for disks that reside in templates) List of Templates to which the selected Disk is attached. " same comment as above of maybe consider only vm disks for now. and also a question - how can a template disk belong to more than a single template? which again hints for a template disk you would want another view, with the template name in the main grid 5. Tree: 'Resources' vs. 'free disks' while i understand why separating them - naming is very confusing. maybe a single node in tree and a way to filter the search from the right side grid in some manner for known lookups (relevant to other main tabs as well?) 6. permissions not available for disks? at all? what do you mean power user would be able to attach them by their type? does it mean they can associate any shared disk in the system? I hope i'm misunderstanding, as doesn't make sense to me. or is this caveat specific to the user portal and not the admin? not allowing creating a floating disk from user portal is not a problem in my view for this phase. I assume anyone can add a disk on a storage domain they have quota to. who can edit a disk? remove a disk? attach disk to VM (which gives them ability to edit the disk) (attach disk to VM obviously requires permission on both disk and VM) 7. related features - data ware house may be affected by disks being unattached, or shared between multiple disks. 
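A small illustration of points 1 and 2.3 above: if "floating" is derived purely from attachments rather than stored on the disk, then the boolean and the attached-VM count fall out of the same value (class and method names here are hypothetical, not the engine's actual code):

/**
 * Illustrative only - shows how "floating" and "shared" can be derived from a
 * single attachment count instead of being stored as separate flags on the disk.
 */
public final class DiskAttachmentInfo {

    private final int attachedVmCount;

    public DiskAttachmentInfo(int attachedVmCount) {
        this.attachedVmCount = attachedVmCount;
    }

    /** A disk is "floating" (detached) when no VM uses it. */
    public boolean isFloating() {
        return attachedVmCount == 0;
    }

    /** A disk is "shared" when more than one VM uses it. */
    public boolean isShared() {
        return attachedVmCount > 1;
    }

    /** The count itself can be shown in the grid, as suggested in 2.3. */
    public int getAttachedVmCount() {
        return attachedVmCount;
    }
}

Showing getAttachedVmCount() in the grid would then subsume both booleans.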
From ykaul at redhat.com Thu Feb 2 08:27:06 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 02 Feb 2012 10:27:06 +0200 Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <7c5ffe6f-5d6d-4ce1-917f-2e1d7b528c51@zmail14.collab.prod.int.phx2.redhat.com> References: <7c5ffe6f-5d6d-4ce1-917f-2e1d7b528c51@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F2A48DA.9090806@redhat.com> On 02/01/2012 07:04 PM, Daniel Erez wrote: > Hi, > > Floating Disk feature description Wiki page: > http://www.ovirt.org/wiki/Features/DetailedFloatingDisk > > Best Regards, > Daniel > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel 0. Is it a floating disk or a floating image? would be nice to use the same terminology for all projects, where possible (http://www.ovirt.org/wiki/Vdsm_Storage_Terminology#Image) 1. I don't see why a disk name should be unique. I don't think it's enforceable under any normal circumstances: If user A decided to call his disk 'system', user B who is completely unaware of A cannot call his disk 'system' ? It should be unique at some level, but not system-wide. 2. I'm not sure I understand why exporting a floating disk is 'not supported'. In the current design? implementation? ever? Y. From rgolan at redhat.com Thu Feb 2 08:48:17 2012 From: rgolan at redhat.com (Roy Golan) Date: Thu, 02 Feb 2012 10:48:17 +0200 Subject: [Engine-devel] oVirt upstream meeting - setup networks MOM Message-ID: <4F2A4DD1.1070903@redhat.com> Setup networks feature have been introduced and few question rose: mgoldboi asked to give attention to error handling or reporting AI - need to make sure I have proper error codes from VDSM on validation,failure in committing the new topology etc... acathrew raised an issue of known configuration that won't work e.g specific bonding over bridge that should fail AI - gather those improper configuratations ( Andy pls reply with the exact details if you have them) lpeer asked to set a UI sync meeting lead by them - done Thanks, Roy From lhornyak at redhat.com Thu Feb 2 09:30:25 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Thu, 02 Feb 2012 04:30:25 -0500 (EST) Subject: [Engine-devel] Simplifying our POJOs In-Reply-To: <4F28E6D2.1050204@redhat.com> Message-ID: <79730f63-75a4-4d82-892e-9d49fb7a954e@zmail01.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Livnat Peer" > To: "Yair Zaslavsky" > Cc: engine-devel at ovirt.org > Sent: Wednesday, February 1, 2012 8:16:34 AM > Subject: Re: [Engine-devel] Simplifying our POJOs > > On 01/02/12 09:13, Yair Zaslavsky wrote: > > On 02/01/2012 08:59 AM, Livnat Peer wrote: > >> On 01/02/12 08:03, Mike Kolesnik wrote: > >>> > >>> ----- Original Message ----- > >>>> On 01/31/2012 12:45 PM, Doron Fediuck wrote: > >>>>> On 31/01/12 12:39, Livnat Peer wrote: > >>>>>> On 31/01/12 12:02, Mike Kolesnik wrote: > >>>>>>> Hi, > >>>>>>> > >>>>>>> Today many POJO > >>>>>>> s > >>>>>>> are used throughout the system to convey data: > >>>>>>> > >>>>>>> * Parameters - To send data to commands. > >>>>>>> * Business Entities - To transfer data in the parameters > >>>>>>> & > >>>>>>> to/from > >>>>>>> the DB. > >>>>>>> > >>>>>>> These POJOs are (usually) very verbose and full of > >>>>>>> boilerplate > >>>>>>> code > >>>>>>> . 
> >>>>>>> > >>>>>>> This, in turn, reduces their readability and maintainability > >>>>>>> for > >>>>>>> a > >>>>>>> couple of reasons (that I can think of): > >>>>>>> > >>>>>>> * It's hard to know what does what: > >>>>>>> o Who participates in equals/hashCode? > >>>>>>> o What fields are printed in toString? > >>>>>>> * Consistency is problematic: > >>>>>>> o A field may be part of equals but not hashCode, or > >>>>>>> vice > >>>>>>> versa. > >>>>>>> o This breaks the Object.hashCode() > >>>>>>> > >>>>>>> contract! > >>>>>>> * Adding/Removing fields take more time since you need to > >>>>>>> synchronize > >>>>>>> the change to all boilerplate methods. > >>>>>>> o Again, we're facing the consistency problem. > >>>>>>> * These simple classes tend to be very long and not very > >>>>>>> readable. > >>>>>>> * Boilerplate code makes it harder to find out which > >>>>>>> methods > >>>>>>> *don't* > >>>>>>> behave the default way. > >>>>>>> * Javadoc, if existent, is usually meaningless (but you > >>>>>>> might > >>>>>>> see some > >>>>>>> banal documentation that doesn't add any real value). > >>>>>>> * Our existing classes are not up to standard! > >>>>>>> > >>>>>>> > >>>>>>> So what can be done to remedy the situation? > >>>>>>> > >>>>>>> We could, of course, try to simplify the classes as much as > >>>>>>> we > >>>>>>> can and > >>>>>>> maybe address some of the issues. > >>>>>>> This won't alleviate the boilerplate code problem altogether, > >>>>>>> though. > >>>>>>> > >>>>>>> We could write annotations to do some of the things for us > >>>>>>> automatically. > >>>>>>> The easiest approach would be runtime-based, and would hinder > >>>>>>> performance. > >>>>>>> This also means we need to maintain this "infrastructure" and > >>>>>>> all > >>>>>>> the > >>>>>>> implications of such a decision. > >>>>>>> > >>>>>>> > >>>>>>> Luckily, there is a much easier solution: Someone else > >>>>>>> already > >>>>>>> did it! > >>>>>>> > >>>>>>> Check out Project Lombok: http://projectlombok.org > >>>>>>> What Lombok gives us, among some other things, is a way to > >>>>>>> greatly > >>>>>>> simplify our POJOs by using annotations to get the > >>>>>>> boilerplate > >>>>>>> code > >>>>>>> automatically generated. > >>>>>>> This means we get the benefit of annotations which would > >>>>>>> simplify > >>>>>>> the > >>>>>>> code a whole lot, while not imposing a performance cost > >>>>>>> (since > >>>>>>> the > >>>>>>> boilerplate code is generated during compilation). > >>>>>>> However, it's also possible to create the methods yourself if > >>>>>>> you > >>>>>>> want > >>>>>>> them to behave differently. > >>>>>>> Outside the POJO itself, you would see it as you would always > >>>>>>> see > >>>>>>> it. > >>>>>>> > >>>>>>> So what are the downsides to this approach? > >>>>>>> > >>>>>>> * First of all, Lombok provides also some other > >>>>>>> capabilities > >>>>>>> which I'm > >>>>>>> not sure are required/wanted at this time. > >>>>>>> o That's why I propose we use it for commons project, > >>>>>>> and > >>>>>>> make use > >>>>>>> of it's POJO-related annotations ONLY. > >>>>>>> * There might be a problem debugging the code since it's > >>>>>>> auto-generated. > >>>>>>> o I think this is rather negligible, since usually you > >>>>>>> don't debug > >>>>>>> POJOs anyway. > >>>>>>> * There might be a problem if the auto-generated code > >>>>>>> throws an > >>>>>>> Exception. > >>>>>>> o As before, I'm rather sure this is an edge-case which > >>>>>>> we > >>>>>>> usually > >>>>>>> won't hit (if at all). 
> >>>>>>> > >>>>>>> > >>>>>>> Even given these possible downsides, I think that we would > >>>>>>> benefit > >>>>>>> greatly if we would introduce this library. > >>>>>>> > >>>>>>> If you have any questions, you're welcome to study out the > >>>>>>> project site > >>>>>>> which has very thorough documentation: > >>>>>>> http://projectlombok.org > >>>>>>> > >>>>>>> Your thoughts on the matter? > >>>>>>> > >>>>>> > >>>>>> - I think an example of before/after pojo would help > >>>>>> demonstrating > >>>>>> how > >>>>>> good the framework is. > >>>>>> > >>>>>> - Would it work when adding JPA annotations? > >>>> I suspect that yes (needs to be checked) > >>>> Will it work with GWT (if we create new business entity that > >>>> needs to > >>>> be > >>>> exposed to GWT guys) ? > >>> > >>> As it is stated on the site, it supports GWT. > >>> > >> > >> Since this package is required only during compile time it is > >> relatively > >> easy to push it in. > >> Need to make sure it is working nice with debugging and give it a > >> try. > >> > >> I like this package, > >> +1 from me. > >> > > Another issue to check - (I'm sure it does, but still) - > > Are empty CTORs generated as well? (There is a long debate for > > POJOs > > that contain X fields whether they should have an empty CTOR, as > > usage > > of empty CTOR may yield to potential bugs (logically speaking) of > > "partial state") - Unfortunately, some frameworks require existence > > of > > empty CTOR (I admit, still haven't look at the site thoroughly, so > > I'm > > just sharing here thoughts of what should we check for). > > > > > > Yair > > > > It seems like you can define what ever you like - > @NoArgsConstructor, > @RequiredArgsConstructor > @AllArgsConstructor I am keeping an eye on project lombok for a good while and I really like it's approach, but I have never seen it in a production app so far. Could be interesting to give it a try! Just one more thing I would like to know about annotations: some frameworks (jaxb for example) require you to place the annotations on the getters (creating a hell of annotations). Fortunately the model classes are not serialized by the rest api as far as I know, but would it work together with lombok? Btw... Vojtech has a similar project to simplify resource file generations in the frontend. http://code.google.com/p/genftw/ Laszlo > > Livnat > > >> > >>>>>> > >>>>>>> Regards, > >>>>>>> Mike > >>>>>>> > >>>>> > >>>>> Watching the demo it looks like we'll get less code, which in > >>>>> many > >>>>> cases is a good thing. > >>>>> What I'm concerned about is traceability; or- how can we track > >>>>> issues coming from the field > >>>>> when function calls and line numbers in the stack trace will > >>>>> not > >>>>> match the code we know. 
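To ground the Lombok discussion, here is a rough before/after sketch for an invented entity (not one of the engine's real classes); the exact annotation set - @Data versus individual @Getter/@Setter/@EqualsAndHashCode/@ToString, plus the constructor annotations mentioned above - would be a project decision:

// Before (hand-written): every field has to be kept in sync with the
// getters/setters, equals, hashCode and toString by hand.
public class SampleEntity {
    private String id;
    private String name;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SampleEntity)) return false;
        SampleEntity other = (SampleEntity) o;
        return (id == null ? other.id == null : id.equals(other.id))
                && (name == null ? other.name == null : name.equals(other.name));
    }

    @Override
    public int hashCode() {
        int result = (id == null) ? 0 : id.hashCode();
        return 31 * result + ((name == null) ? 0 : name.hashCode());
    }

    @Override
    public String toString() {
        return "SampleEntity(id=" + id + ", name=" + name + ")";
    }
}

// After (same class in a separate file, rewritten with Lombok): the members
// above are generated at compile time.
import lombok.Data;

@Data
// @NoArgsConstructor / @AllArgsConstructor can be added where frameworks need them.
public class SampleEntity {
    private String id;
    private String name;
}

Lombok also lets any of the generated methods be written by hand where different behaviour is needed.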
> >>>>> > >>>> > >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>> > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From dlaor at redhat.com Thu Feb 2 09:49:10 2012 From: dlaor at redhat.com (Dor Laor) Date: Thu, 02 Feb 2012 11:49:10 +0200 Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <4F2A3156.5030808@redhat.com> References: <4F2A3156.5030808@redhat.com> Message-ID: <4F2A5C16.4040206@redhat.com> On 02/02/2012 08:46 AM, Itamar Heim wrote: > On 02/02/2012 02:56 AM, Eli Mesika wrote: >> Hi >> >> We had discussed today the Stable Device Addresses feature >> One of the questions arose from the meeting (and actually defined as >> an open issue in the feature wiki) is: >> What happens to a 3.1 VM running on 3.1 Cluster when it is moved to a >> 3.0 cluster. >> We encountered that VM may lose some configuration data but also may >> be corrupted. >> From that point we got to the conclusion that we have somehow to >> maintain a VM Version that will allow us to What do you mean by VM version? Is that the guest hardware abstraction version (which is the kvm hypervisor release + the '-M' flag for compatibility)? I think its the above + the meta data /devices you keep for it. >> block moving VM if it's version is not fully supported compatible with >> the target Cluster. >> One idea for getting the VM version is the OVF which actually holds >> inside its header OvfVersion. >> The question is , is the OVF good enough for all our needs or should >> we persist that else (for example in DB) >> Also, any other issues/difficulties we may encounter implementing and >> storing VM version. >> >> Keep in mind that this is a new feature that impacts the Stable Device >> Addresses feature but may be useful/relevant >> for other features as well. > > Can you give some examples which will cause an issue moving a VM from a > 3.1 cluster to a 3.0 one? > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From abaron at redhat.com Thu Feb 2 10:15:12 2012 From: abaron at redhat.com (Ayal Baron) Date: Thu, 02 Feb 2012 05:15:12 -0500 (EST) Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <4F2A5C16.4040206@redhat.com> Message-ID: ----- Original Message ----- > On 02/02/2012 08:46 AM, Itamar Heim wrote: > > On 02/02/2012 02:56 AM, Eli Mesika wrote: > >> Hi > >> > >> We had discussed today the Stable Device Addresses feature > >> One of the questions arose from the meeting (and actually defined > >> as > >> an open issue in the feature wiki) is: > >> What happens to a 3.1 VM running on 3.1 Cluster when it is moved > >> to a > >> 3.0 cluster. > >> We encountered that VM may lose some configuration data but also > >> may > >> be corrupted. > >> From that point we got to the conclusion that we have somehow to > >> maintain a VM Version that will allow us to > > What do you mean by VM version? > Is that the guest hardware abstraction version (which is the kvm > hypervisor release + the '-M' flag for compatibility)? 
> > I think its the above + the meta data /devices you keep for it. Correct. There are several issues here: 1. you loose the stable device addresses (no point in keeping the data in the db as the next time the VM is run the devices can get different addresses) 2. If you move the VM to an older cluster where the hosts don't support the VM's compatibility mode (-M) then the VM would be started with different virtual hardware which might cause problems 3. Once we support s4 then running the VM again with different hardware might be even more problematic than just running it from shutdown (e.g. once we have a balloon device with memory assigned to it which suddenly disappears, what would happen to the VM?) 4. Same applies for migrate to file, but this can be dealt with by not allowing to move a VM between incompatible clusters in case it has a migrate to file state (or delete the file). A side note - I'm not sure if exporting a VM also exports the state file after migrate to file? if not then probably it should... I'm sure there are additional scenarios we're not thinking of. > > >> block moving VM if it's version is not fully supported compatible > >> with > >> the target Cluster. > >> One idea for getting the VM version is the OVF which actually > >> holds > >> inside its header OvfVersion. > >> The question is , is the OVF good enough for all our needs or > >> should > >> we persist that else (for example in DB) > >> Also, any other issues/difficulties we may encounter implementing > >> and > >> storing VM version. > >> > >> Keep in mind that this is a new feature that impacts the Stable > >> Device > >> Addresses feature but may be useful/relevant > >> for other features as well. > > > > Can you give some examples which will cause an issue moving a VM > > from a > > 3.1 cluster to a 3.0 one? > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From derez at redhat.com Thu Feb 2 10:25:52 2012 From: derez at redhat.com (Daniel Erez) Date: Thu, 02 Feb 2012 05:25:52 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <4F2A3688.4040606@redhat.com> Message-ID: <3d7d71cf-f9c4-4dfd-be98-b98c5c25cf15@zmail14.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Itamar Heim" > To: "Daniel Erez" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 2, 2012 9:08:56 AM > Subject: Re: [Engine-devel] Floating Disk feature description > > On 02/01/2012 07:04 PM, Daniel Erez wrote: > > Hi, > > > > Floating Disk feature description Wiki page: > > http://www.ovirt.org/wiki/Features/DetailedFloatingDisk > > some questions/notes: > 1. why do you need a floating/not floating state? isn't a disk > floating > if it was detached from all VMs? > or is that only a helper property to optimize lookups? > Yes, the floating state is an indication whether the disk is attached to any VM. It's not a persistent property on the disk, but rather a DB view calculated value. > 2. you mention fields of disks (Floating/Shared/Managed) > > 2.1 do we have a definition of "Managed" disk somewhere? > I assume unmanaged would be a direct LUN, but i think we need a > better > terminology here. Indeed, we're looking for a better teminology. Suggestions are welcomed... 
> > 2.2 same goes for "floating" actually... do we really want to tell > the > user the disk is "floating"? > I guess suggestion welcome for a better name. "unattached" has been mentioned once as an alternative. > > 2.3 finally, for shared, maybe more interesting is number of VMs the > disk is connected to, rather than just a boolean (though i assume > this > increases complexity for calculation, or redundancy of data, and not > a > big issue) Actually, as part of the "Shared raw disk" feature, we do want to display the number of VMs (and probably a list too) the disk is connected to - in the 'Disks' sub-tab (under VMs main tab). Hence, it might be rather simple to show that number also in the Disks main tab (the list of VMs will be displayed under VMs sub-tab). > > 3. List of Storage Domains in which the selected Disk resides. > this is only relevant for template disks? Yes, it's only for cloned templates. > maybe consider splitting the main grid if looking at tempalte disks > or > vm disks, and show for vm disks the storage domain in main grid? > maybe start with vm disks only and not consider template disks so > much? Miki? > > 4. "Templates (visible for disks that reside in templates) List of > Templates to which the selected Disk is attached. " > > same comment as above of maybe consider only vm disks for now. > and also a question - how can a template disk belong to more than a > single template? Yes, for now, a template disk cannot belong to more than a single template. However, won't we like to have a shared disk for a template in the future? > > which again hints for a template disk you would want another view, > with > the template name in the main grid > > 5. Tree: 'Resources' vs. 'free disks' > while i understand why separating them - naming is very confusing. > maybe a single node in tree and a way to filter the search from the > right side grid in some manner for known lookups (relevant to other > main > tabs as well?) For now, we've agreed that sorting abilities in columns is needed for easing the orientation. > > 6. permissions not available for disks? > at all? > what do you mean power user would be able to attach them by their > type? > does it mean they can associate any shared disk in the system? I hope > i'm misunderstanding, as doesn't make sense to me. > > or is this caveat specific to the user portal and not the admin? > not allowing creating a floating disk from user portal is not a > problem > in my view for this phase. > > I assume anyone can add a disk on a storage domain they have quota > to. > who can edit a disk? remove a disk? attach disk to VM (which gives > them > ability to edit the disk) > (attach disk to VM obviously requires permission on both disk and VM) Since we won't support permissions on disks entities (at first stage), as a compromise for the power user portal, we've agreed to simply hide floating non shared disks from the user. > > 7. related features > - data ware house may be affected by disks being unattached, or > shared > between multiple disks. 
> > > > > > > From ykaul at redhat.com Thu Feb 2 10:29:31 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 02 Feb 2012 12:29:31 +0200 Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <3d7d71cf-f9c4-4dfd-be98-b98c5c25cf15@zmail14.collab.prod.int.phx2.redhat.com> References: <3d7d71cf-f9c4-4dfd-be98-b98c5c25cf15@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F2A658B.5040708@redhat.com> On 02/02/2012 12:25 PM, Daniel Erez wrote: > > ----- Original Message ----- >> From: "Itamar Heim" >> To: "Daniel Erez" >> Cc: engine-devel at ovirt.org >> Sent: Thursday, February 2, 2012 9:08:56 AM >> Subject: Re: [Engine-devel] Floating Disk feature description >> >> On 02/01/2012 07:04 PM, Daniel Erez wrote: >>> Hi, >>> >>> Floating Disk feature description Wiki page: >>> http://www.ovirt.org/wiki/Features/DetailedFloatingDisk >> some questions/notes: >> 1. why do you need a floating/not floating state? isn't a disk >> floating >> if it was detached from all VMs? >> or is that only a helper property to optimize lookups? >> > Yes, the floating state is an indication whether the disk is attached to any VM. > It's not a persistent property on the disk, but rather a DB view calculated value. > >> 2. you mention fields of disks (Floating/Shared/Managed) >> >> 2.1 do we have a definition of "Managed" disk somewhere? >> I assume unmanaged would be a direct LUN, but i think we need a >> better >> terminology here. > Indeed, we're looking for a better teminology. Suggestions are welcomed... > >> 2.2 same goes for "floating" actually... do we really want to tell >> the >> user the disk is "floating"? >> I guess suggestion welcome for a better name. > "unattached" has been mentioned once as an alternative. Google says the world prefers 'detached': ~196M entries vs. ~5.7M entries for 'unattached'. Y. > >> 2.3 finally, for shared, maybe more interesting is number of VMs the >> disk is connected to, rather than just a boolean (though i assume >> this >> increases complexity for calculation, or redundancy of data, and not >> a >> big issue) > Actually, as part of the "Shared raw disk" feature, we do want to display the number of VMs > (and probably a list too) the disk is connected to - in the 'Disks' sub-tab (under VMs main tab). > Hence, it might be rather simple to show that number also in the Disks main tab > (the list of VMs will be displayed under VMs sub-tab). > >> 3. List of Storage Domains in which the selected Disk resides. >> this is only relevant for template disks? > Yes, it's only for cloned templates. > >> maybe consider splitting the main grid if looking at tempalte disks >> or >> vm disks, and show for vm disks the storage domain in main grid? >> maybe start with vm disks only and not consider template disks so >> much? > Miki? > >> 4. "Templates (visible for disks that reside in templates) List of >> Templates to which the selected Disk is attached. " >> >> same comment as above of maybe consider only vm disks for now. >> and also a question - how can a template disk belong to more than a >> single template? > Yes, for now, a template disk cannot belong to more than a single template. > However, won't we like to have a shared disk for a template in the future? > >> which again hints for a template disk you would want another view, >> with >> the template name in the main grid >> >> 5. Tree: 'Resources' vs. 'free disks' >> while i understand why separating them - naming is very confusing. 
>> maybe a single node in tree and a way to filter the search from the >> right side grid in some manner for known lookups (relevant to other >> main >> tabs as well?) > For now, we've agreed that sorting abilities in columns is needed for easing the orientation. > >> 6. permissions not available for disks? >> at all? >> what do you mean power user would be able to attach them by their >> type? >> does it mean they can associate any shared disk in the system? I hope >> i'm misunderstanding, as doesn't make sense to me. >> >> or is this caveat specific to the user portal and not the admin? >> not allowing creating a floating disk from user portal is not a >> problem >> in my view for this phase. >> >> I assume anyone can add a disk on a storage domain they have quota >> to. >> who can edit a disk? remove a disk? attach disk to VM (which gives >> them >> ability to edit the disk) >> (attach disk to VM obviously requires permission on both disk and VM) > Since we won't support permissions on disks entities (at first stage), > as a compromise for the power user portal, we've agreed to simply hide > floating non shared disks from the user. > >> 7. related features >> - data ware house may be affected by disks being unattached, or >> shared >> between multiple disks. >> >> >> >> >> >> >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From derez at redhat.com Thu Feb 2 11:35:03 2012 From: derez at redhat.com (Daniel Erez) Date: Thu, 02 Feb 2012 06:35:03 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <4F2A48DA.9090806@redhat.com> Message-ID: <2604ecab-482a-42c2-8306-24ad32807d2d@zmail14.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Yaniv Kaul" > To: "Daniel Erez" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 2, 2012 10:27:06 AM > Subject: Re: [Engine-devel] Floating Disk feature description > > On 02/01/2012 07:04 PM, Daniel Erez wrote: > > Hi, > > > > Floating Disk feature description Wiki page: > > http://www.ovirt.org/wiki/Features/DetailedFloatingDisk > > > > Best Regards, > > Daniel > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > 0. Is it a floating disk or a floating image? would be nice to use > the > same terminology for all projects, where possible > (http://www.ovirt.org/wiki/Vdsm_Storage_Terminology#Image) Disk and Image are the same ("Disk" is the RHEV-M term for VDSM's "Image"). Both of them means: the collection of "volumes" (or, "DiskImages", in RHEV-M terminology) that comprise the full disk. We have no problem using either terminology, however might be confusing either way. [BTW, a floating disk can't conatin snapshots - so it doesn't really matter if you are talking about disk, image, diskImage or volume - they are all the same] > 1. I don't see why a disk name should be unique. I don't think it's > enforceable under any normal circumstances: If user A decided to call > his disk 'system', user B who is completely unaware of A cannot call > his > disk 'system' ? It should be unique at some level, but not > system-wide. The enforcement for uniqueness has been suggested for avoiding a list of duplicate named disks in the Disks main tab and for identifying a specific disk. 
The problem is that any disk can theoretically be floating, so you cannot differentiate between
the disks using the name of the VM to which a disk is attached, for example (moreover, some of the
disks in the system are shared, so which VM name would you use?...)
Maybe we can use some other attribute for identification?

> 2. I'm not sure I understand why exporting a floating disk is 'not
> supported'. In the current design? implementation? ever?

Currently, export is done at the VM/Template level. Support for exporting/importing (floating) disks is new functionality which requires additional thinking/design/etc.

>
> Y.
>

From ykaul at redhat.com  Thu Feb  2 11:43:58 2012
From: ykaul at redhat.com (Yaniv Kaul)
Date: Thu, 02 Feb 2012 13:43:58 +0200
Subject: Re: [Engine-devel] Floating Disk feature description
In-Reply-To: <2604ecab-482a-42c2-8306-24ad32807d2d@zmail14.collab.prod.int.phx2.redhat.com>
References: <2604ecab-482a-42c2-8306-24ad32807d2d@zmail14.collab.prod.int.phx2.redhat.com>
Message-ID: <4F2A76FE.8020205@redhat.com>

On 02/02/2012 01:35 PM, Daniel Erez wrote:
>
> ----- Original Message -----
>> From: "Yaniv Kaul"
>> To: "Daniel Erez"
>> Cc: engine-devel at ovirt.org
>> Sent: Thursday, February 2, 2012 10:27:06 AM
>> Subject: Re: [Engine-devel] Floating Disk feature description
>>
>> On 02/01/2012 07:04 PM, Daniel Erez wrote:
>>> Hi,
>>>
>>> Floating Disk feature description Wiki page:
>>> http://www.ovirt.org/wiki/Features/DetailedFloatingDisk
>>>
>>> Best Regards,
>>> Daniel
>>> _______________________________________________
>>> Engine-devel mailing list
>>> Engine-devel at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/engine-devel
>> 0. Is it a floating disk or a floating image? would be nice to use
>> the
>> same terminology for all projects, where possible
>> (http://www.ovirt.org/wiki/Vdsm_Storage_Terminology#Image)
> Disk and Image are the same ("Disk" is the RHEV-M term for VDSM's "Image"). Both of them means: the collection of "volumes" (or, "DiskImages", in RHEV-M terminology) that comprise the full disk. We have no problem using either terminology, however might be confusing either way.
> [BTW, a floating disk can't conatin snapshots - so it doesn't really matter if you are talking about disk, image, diskImage or volume - they are all the same]

If they are the same, then please use the term 'image'.
And it looks like the feature is 'floating single-volume image'. I wonder if the limitation is really an issue or not. Can't think of a real use case where it would be, but here's an imaginary one: before attaching the disk to a VM (or after attaching and before running), I'd take a snapshot, then run the VM, do whatever, and revert before/after detaching, so the floating disk would go back to its original state.

>
>> 1. I don't see why a disk name should be unique. I don't think it's
>> enforceable under any normal circumstances: If user A decided to call
>> his disk 'system', user B who is completely unaware of A cannot call
>> his
>> disk 'system' ? It should be unique at some level, but not
>> system-wide.
> The enforcement for uniqueness has been suggested for avoiding a list of
> duplicate named disks in the Disks main tab and for identifying a specific disk.

Understood, but it's not good enough. Need to solve this, as it's not practical to ask me not to share a property with you - which both of us do not really share.
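On the identification question: one way out is to treat the disk's immutable UUID as the real identity and keep the name as a free-form alias that does not need to be unique at all. A hypothetical sketch (not the engine's actual model):

import java.util.UUID;

/**
 * Hypothetical sketch: identity comes from the immutable UUID, while the
 * user-visible alias ("system", "data", ...) does not need to be unique.
 */
public final class DiskIdentity {

    private final UUID id;   // what equals/hashCode and lookups are based on
    private String alias;    // free-form label shown in the grid, may repeat

    public DiskIdentity(UUID id, String alias) {
        this.id = id;
        this.alias = alias;
    }

    public UUID getId() { return id; }
    public String getAlias() { return alias; }
    public void setAlias(String alias) { this.alias = alias; }

    @Override
    public boolean equals(Object o) {
        return o instanceof DiskIdentity && id.equals(((DiskIdentity) o).id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }

    @Override
    public String toString() {
        // Showing the UUID alongside the alias disambiguates duplicates in the UI.
        return alias + " (" + id + ")";
    }
}

Displaying the UUID (or a shortened form of it) next to the alias would disambiguate duplicates in the grid without forcing system-wide unique names.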
> Probelm is that any disk theoretically can be floating, so you cannot differentiate between the disks using the VM name to which it is attached, for example (moreover, some of the disks in the system are shared, so which VM name will you use?...) The fact VM names are unique is also an annoying, problematic issue, but I imagine there are less VMs than disks, and most are going to be FQDN based anyway. 'system' and 'data' are quite common names for disks, whereas VM names might be more original. Anyway, both don't scale. > Maybe we can use some other attribute for identification? The real identification can be done via the serial number, no harm in displaying that cryptic ID in the UI. Makes us look professional. Of course, it makes more sense to use the volume UUID. May come handy in locating it physically on disk, if it exists. > >> 2. I'm not sure I understand why exporting a floating disk is 'not >> supported'. In the current design? implementation? ever? > Currently, Export is done in a VM/Template level. Support in export/import (floating) disks is a new functionality which requires additional thinking/design/etc. Right, so lets add 'currently not supported'. Y. > >> Y. >> From mkenneth at redhat.com Thu Feb 2 12:08:29 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Thu, 02 Feb 2012 07:08:29 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <4F2A3688.4040606@redhat.com> Message-ID: <23f06fdc-2a94-4eda-abe4-b39b2c8ca96c@mkenneth.csb> ----- Original Message ----- > From: "Itamar Heim" > To: "Daniel Erez" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 2, 2012 9:08:56 AM > Subject: Re: [Engine-devel] Floating Disk feature description > > On 02/01/2012 07:04 PM, Daniel Erez wrote: > > Hi, > > > > Floating Disk feature description Wiki page: > > http://www.ovirt.org/wiki/Features/DetailedFloatingDisk > > some questions/notes: > 1. why do you need a floating/not floating state? isn't a disk > floating > if it was detached from all VMs? > or is that only a helper property to optimize lookups? > > 2. you mention fields of disks (Floating/Shared/Managed) > > 2.1 do we have a definition of "Managed" disk somewhere? > I assume unmanaged would be a direct LUN, but i think we need a > better > terminology here. > > 2.2 same goes for "floating" actually... do we really want to tell > the > user the disk is "floating"? > I guess suggestion welcome for a better name. > > 2.3 finally, for shared, maybe more interesting is number of VMs the > disk is connected to, rather than just a boolean (though i assume > this > increases complexity for calculation, or redundancy of data, and not > a > big issue) > > 3. List of Storage Domains in which the selected Disk resides. > this is only relevant for template disks? > maybe consider splitting the main grid if looking at tempalte disks > or > vm disks, and show for vm disks the storage domain in main grid? > maybe start with vm disks only and not consider template disks so > much? > > 4. "Templates (visible for disks that reside in templates) List of > Templates to which the selected Disk is attached. " > > same comment as above of maybe consider only vm disks for now. > and also a question - how can a template disk belong to more than a > single template? > > which again hints for a template disk you would want another view, > with > the template name in the main grid > > 5. Tree: 'Resources' vs. 'free disks' > while i understand why separating them - naming is very confusing. 
> maybe a single node in tree and a way to filter the search from the > right side grid in some manner for known lookups (relevant to other > main > tabs as well?) I agree - we need to find a nicer way. Maybe Eldan can help. > > 6. permissions not available for disks? > at all? > what do you mean power user would be able to attach them by their > type? > does it mean they can associate any shared disk in the system? I hope > i'm misunderstanding, as doesn't make sense to me. > > or is this caveat specific to the user portal and not the admin? > not allowing creating a floating disk from user portal is not a > problem > in my view for this phase. > > I assume anyone can add a disk on a storage domain they have quota > to. > who can edit a disk? remove a disk? attach disk to VM (which gives > them > ability to edit the disk) > (attach disk to VM obviously requires permission on both disk and VM) This is a huge caveat!!! and we need to find a way to allow accessing floating disks from the Power User portal. Tough I understand the complexity, let's think what can be done to overcome at least the attach process. > > 7. related features > - data ware house may be affected by disks being unattached, or > shared > between multiple disks. > > > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkenneth at redhat.com Thu Feb 2 12:10:54 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Thu, 02 Feb 2012 07:10:54 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <3d7d71cf-f9c4-4dfd-be98-b98c5c25cf15@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <52eb7d88-0876-44a5-9f4b-70af26d7be0a@mkenneth.csb> ----- Original Message ----- > From: "Daniel Erez" > To: "Itamar Heim" > Cc: "Miki Kenneth" , engine-devel at ovirt.org > Sent: Thursday, February 2, 2012 12:25:52 PM > Subject: Re: [Engine-devel] Floating Disk feature description > > > > ----- Original Message ----- > > From: "Itamar Heim" > > To: "Daniel Erez" > > Cc: engine-devel at ovirt.org > > Sent: Thursday, February 2, 2012 9:08:56 AM > > Subject: Re: [Engine-devel] Floating Disk feature description > > > > On 02/01/2012 07:04 PM, Daniel Erez wrote: > > > Hi, > > > > > > Floating Disk feature description Wiki page: > > > http://www.ovirt.org/wiki/Features/DetailedFloatingDisk > > > > some questions/notes: > > 1. why do you need a floating/not floating state? isn't a disk > > floating > > if it was detached from all VMs? > > or is that only a helper property to optimize lookups? > > > > Yes, the floating state is an indication whether the disk is attached > to any VM. > It's not a persistent property on the disk, but rather a DB view > calculated value. > > > 2. you mention fields of disks (Floating/Shared/Managed) > > > > 2.1 do we have a definition of "Managed" disk somewhere? > > I assume unmanaged would be a direct LUN, but i think we need a > > better > > terminology here. > > Indeed, we're looking for a better teminology. Suggestions are > welcomed... > > > > > 2.2 same goes for "floating" actually... do we really want to tell > > the > > user the disk is "floating"? > > I guess suggestion welcome for a better name. > > "unattached" has been mentioned once as an alternative. 
> > > > > 2.3 finally, for shared, maybe more interesting is number of VMs > > the > > disk is connected to, rather than just a boolean (though i assume > > this > > increases complexity for calculation, or redundancy of data, and > > not > > a > > big issue) > > Actually, as part of the "Shared raw disk" feature, we do want to > display the number of VMs > (and probably a list too) the disk is connected to - in the 'Disks' > sub-tab (under VMs main tab). > Hence, it might be rather simple to show that number also in the > Disks main tab > (the list of VMs will be displayed under VMs sub-tab). > > > > > 3. List of Storage Domains in which the selected Disk resides. > > this is only relevant for template disks? > > Yes, it's only for cloned templates. > > > maybe consider splitting the main grid if looking at tempalte disks > > or > > vm disks, and show for vm disks the storage domain in main grid? > > maybe start with vm disks only and not consider template disks so > > much? > > Miki? Good point - we can do either by a column and sortby or by two-interchangeable tabs. There are two many properties we would like to sort by, that's way I suggested the column way. > > > > > 4. "Templates (visible for disks that reside in templates) List of > > Templates to which the selected Disk is attached. " > > > > same comment as above of maybe consider only vm disks for now. > > and also a question - how can a template disk belong to more than a > > single template? > > Yes, for now, a template disk cannot belong to more than a single > template. > However, won't we like to have a shared disk for a template in the > future? > > > > > which again hints for a template disk you would want another view, > > with > > the template name in the main grid > > > > 5. Tree: 'Resources' vs. 'free disks' > > while i understand why separating them - naming is very confusing. > > maybe a single node in tree and a way to filter the search from the > > right side grid in some manner for known lookups (relevant to other > > main > > tabs as well?) > > For now, we've agreed that sorting abilities in columns is needed for > easing the orientation. > > > > > 6. permissions not available for disks? > > at all? > > what do you mean power user would be able to attach them by their > > type? > > does it mean they can associate any shared disk in the system? I > > hope > > i'm misunderstanding, as doesn't make sense to me. > > > > or is this caveat specific to the user portal and not the admin? > > not allowing creating a floating disk from user portal is not a > > problem > > in my view for this phase. > > > > I assume anyone can add a disk on a storage domain they have quota > > to. > > who can edit a disk? remove a disk? attach disk to VM (which gives > > them > > ability to edit the disk) > > (attach disk to VM obviously requires permission on both disk and > > VM) > > Since we won't support permissions on disks entities (at first > stage), > as a compromise for the power user portal, we've agreed to simply > hide > floating non shared disks from the user. > > > > > 7. related features > > - data ware house may be affected by disks being unattached, or > > shared > > between multiple disks. 
> > > > > > > > > > > > > > > From jchoate at redhat.com Thu Feb 2 13:47:07 2012 From: jchoate at redhat.com (Jon Choate) Date: Thu, 02 Feb 2012 08:47:07 -0500 Subject: [Engine-devel] in the UI - missing list of storage domains in detach modal Message-ID: <4F2A93DB.9040809@redhat.com> In the Data Center tab, if you select a storage domain to detach, a modal dialog box pops up saying "Are you sure you want to Detach the following storage(s)?". Below this there are no storage domains listed. I would suggest either listing them or changing the word 'following' to 'selected' From ecohen at redhat.com Thu Feb 2 14:20:16 2012 From: ecohen at redhat.com (Einav Cohen) Date: Thu, 02 Feb 2012 09:20:16 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <52eb7d88-0876-44a5-9f4b-70af26d7be0a@mkenneth.csb> Message-ID: > ----- Original Message ----- > From: "Miki Kenneth" > Sent: Thursday, February 2, 2012 2:10:54 PM > ... > > > > Hi, > > > > > > > > Floating Disk feature description Wiki page: > > > > http://www.ovirt.org/wiki/Features/DetailedFloatingDisk > > > > > > some questions/notes: > > > 1. why do you need a floating/not floating state? isn't a disk > > > floating > > > if it was detached from all VMs? > > > or is that only a helper property to optimize lookups? > > > > > > > Yes, the floating state is an indication whether the disk is > > attached > > to any VM. > > It's not a persistent property on the disk, but rather a DB view > > calculated value. > > > > > 2. you mention fields of disks (Floating/Shared/Managed) > > > > > > 2.1 do we have a definition of "Managed" disk somewhere? > > > I assume unmanaged would be a direct LUN, but i think we need a > > > better > > > terminology here. > > > > Indeed, we're looking for a better teminology. Suggestions are > > welcomed... > > > > > > > > 2.2 same goes for "floating" actually... do we really want to > > > tell > > > the > > > user the disk is "floating"? > > > I guess suggestion welcome for a better name. > > > > "unattached" has been mentioned once as an alternative. > > > > > > > > 2.3 finally, for shared, maybe more interesting is number of VMs > > > the > > > disk is connected to, rather than just a boolean (though i assume > > > this > > > increases complexity for calculation, or redundancy of data, and > > > not > > > a > > > big issue) > > > > Actually, as part of the "Shared raw disk" feature, we do want to > > display the number of VMs > > (and probably a list too) the disk is connected to - in the 'Disks' > > sub-tab (under VMs main tab). > > Hence, it might be rather simple to show that number also in the > > Disks main tab > > (the list of VMs will be displayed under VMs sub-tab). > > > > > > > > 3. List of Storage Domains in which the selected Disk resides. > > > this is only relevant for template disks? > > > > Yes, it's only for cloned templates. > > > > > maybe consider splitting the main grid if looking at tempalte > > > disks > > > or > > > vm disks, and show for vm disks the storage domain in main grid? > > > maybe start with vm disks only and not consider template disks so > > > much? > > > > Miki? > Good point - we can do either by a column and sortby or by > two-interchangeable tabs. > There are two many properties we would like to sort by, that's way I > suggested the column way. 
I actually believe that Itamar's suggestion of "start with vm disks only and not consider template disks so much" is better and simpler; Template Disks aren't the reason for implementing the Disks main tab anyway: there shouldn't be to many template disks in the system and they are not floating/shared, they cannot be attached/detached, etc. Having two grids in a main tab isn't consistent with the current graphical "language" of the GUI. We can still have a single grid with a column for indication for "VM Disk vs. Template Disk", however having there also a column of "storage domain" or "template name" is problematic ("storage domain" column is problematic for Template Disks, since they can reside on multiple storage domains; "template name" is problematic, since it is relevant only for templates' disks). > > > > > > > > 4. "Templates (visible for disks that reside in templates) List > > > of > > > Templates to which the selected Disk is attached. " > > > > > > same comment as above of maybe consider only vm disks for now. > > > and also a question - how can a template disk belong to more than > > > a > > > single template? > > > > Yes, for now, a template disk cannot belong to more than a single > > template. > > However, won't we like to have a shared disk for a template in the > > future? > > > > > > > > which again hints for a template disk you would want another > > > view, > > > with > > > the template name in the main grid Again, as I stated above - maybe worth ignoring Templates' Disks altogether in the Disks main tab context. > > > > > > 5. Tree: 'Resources' vs. 'free disks' > > > while i understand why separating them - naming is very > > > confusing. > > > maybe a single node in tree and a way to filter the search from > > > the > > > right side grid in some manner for known lookups (relevant to > > > other > > > main > > > tabs as well?) > > > > For now, we've agreed that sorting abilities in columns is needed > > for > > easing the orientation. > > > > > > > > 6. permissions not available for disks? > > > at all? > > > what do you mean power user would be able to attach them by their > > > type? > > > does it mean they can associate any shared disk in the system? I > > > hope > > > i'm misunderstanding, as doesn't make sense to me. > > > > > > or is this caveat specific to the user portal and not the admin? > > > not allowing creating a floating disk from user portal is not a > > > problem > > > in my view for this phase. > > > > > > I assume anyone can add a disk on a storage domain they have > > > quota > > > to. > > > who can edit a disk? remove a disk? attach disk to VM (which > > > gives > > > them > > > ability to edit the disk) > > > (attach disk to VM obviously requires permission on both disk and > > > VM) > > > > Since we won't support permissions on disks entities (at first > > stage), > > as a compromise for the power user portal, we've agreed to simply > > hide > > floating non shared disks from the user. > > > > > > > > 7. related features > > > - data ware house may be affected by disks being unattached, or > > > shared > > > between multiple disks. 
> > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From iheim at redhat.com Thu Feb 2 14:29:30 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 02 Feb 2012 16:29:30 +0200 Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <3d7d71cf-f9c4-4dfd-be98-b98c5c25cf15@zmail14.collab.prod.int.phx2.redhat.com> References: <3d7d71cf-f9c4-4dfd-be98-b98c5c25cf15@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F2A9DCA.4080001@redhat.com> On 02/02/2012 12:25 PM, Daniel Erez wrote: ... >> 6. permissions not available for disks? >> at all? >> what do you mean power user would be able to attach them by their >> type? >> does it mean they can associate any shared disk in the system? I hope >> i'm misunderstanding, as doesn't make sense to me. >> >> or is this caveat specific to the user portal and not the admin? >> not allowing creating a floating disk from user portal is not a >> problem >> in my view for this phase. >> >> I assume anyone can add a disk on a storage domain they have quota >> to. >> who can edit a disk? remove a disk? attach disk to VM (which gives >> them >> ability to edit the disk) >> (attach disk to VM obviously requires permission on both disk and VM) > > Since we won't support permissions on disks entities (at first stage), > as a compromise for the power user portal, we've agreed to simply hide > floating non shared disks from the user. I still think we won't find a decent way to model this without permissions, regardless of the power user portal. we'll hit too many problems. I'll look into this a bit more. From ecohen at redhat.com Thu Feb 2 14:45:30 2012 From: ecohen at redhat.com (Einav Cohen) Date: Thu, 02 Feb 2012 09:45:30 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <4F2A76FE.8020205@redhat.com> Message-ID: > ----- Original Message ----- > From: "Yaniv Kaul" > Sent: Thursday, February 2, 2012 1:43:58 PM > > On 02/02/2012 01:35 PM, Daniel Erez wrote: > > > > ----- Original Message ----- > >> From: "Yaniv Kaul" > >> To: "Daniel Erez" > >> Cc: engine-devel at ovirt.org > >> Sent: Thursday, February 2, 2012 10:27:06 AM > >> Subject: Re: [Engine-devel] Floating Disk feature description > >> > >> On 02/01/2012 07:04 PM, Daniel Erez wrote: > >>> Hi, > >>> > >>> Floating Disk feature description Wiki page: > >>> http://www.ovirt.org/wiki/Features/DetailedFloatingDisk > >>> > >>> Best Regards, > >>> Daniel > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> 0. Is it a floating disk or a floating image? would be nice to use > >> the > >> same terminology for all projects, where possible > >> (http://www.ovirt.org/wiki/Vdsm_Storage_Terminology#Image) > > Disk and Image are the same ("Disk" is the RHEV-M term for VDSM's > > "Image"). Both of them means: the collection of "volumes" (or, > > "DiskImages", in RHEV-M terminology) that comprise the full disk. > > We have no problem using either terminology, however might be > > confusing either way. > > [BTW, a floating disk can't conatin snapshots - so it doesn't > > really matter if you are talking about disk, image, diskImage or > > volume - they are all the same] > > If they are the same, then please use the term 'image'. 
As I said - no technical problem doing that, however in the engine we use the term "Disk" and the current name of the relevant VMs sub-tab is "Disks" and the name of the new main tab would be "Disks" (or "Virtual Disks"), etc. - so it will be strange to call the feature "floating image" when "floating" is going to be column in a GUI grid titled "Disks". I assume that you can also find explanations of why using the term "Disk" is confusing and "Image" is better; I am just genuinely not sure what is less confusing. > And it looks like the feature is 'floating single-volume image' . I > wonder if the limitation is really an issue or not. Can't think of a > real use case it would be, but here's an imaginary one: before > attaching > it to a VM (or after and before running), I'd take a snapshot, then > run > the VM, do whatever, and revert before/after detaching, so the > floating > would go back to its original state. > > > > >> 1. I don't see why a disk name should be unique. I don't think > >> it's > >> enforceable under any normal circumstances: If user A decided to > >> call > >> his disk 'system', user B who is completely unaware of A cannot > >> call > >> his > >> disk 'system' ? It should be unique at some level, but not > >> system-wide. > > The enforcement for uniqueness has been suggested for avoiding a > > list of > > duplicate named disks in the Disks main tab and for identifying a > > specific disk. > > Understood, but it's not good enough. Need to solve this, as it's not > practical to ask me not to share a property with you - which both us > do > not really share. > > > Probelm is that any disk theoretically can be floating, so you > > cannot differentiate between the disks using the VM name to which > > it is attached, for example (moreover, some of the disks in the > > system are shared, so which VM name will you use?...) > > The fact VM names are unique is also an annoying, problematic issue, > but > I imagine there are less VMs than disks, and most are going to be > FQDN > based anyway. 'system' and 'data' are quite common names for disks, > whereas VM names might be more original. Anyway, both don't scale. > > > Maybe we can use some other attribute for identification? > > The real identification can be done via the serial number, no harm in > displaying that cryptic ID in the UI. Makes us look professional. Of > course, it makes more sense to use the volume UUID. May come handy in > locating it physically on disk, if it exists. > > > > > >> 2. I'm not sure I understand why exporting a floating disk is 'not > >> supported'. In the current design? implementation? ever? > > Currently, Export is done in a VM/Template level. Support in > > export/import (floating) disks is a new functionality which > > requires additional thinking/design/etc. > > Right, so lets add 'currently not supported'. > Y. > > > > >> Y. 
> >> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkenneth at redhat.com Thu Feb 2 15:13:19 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Thu, 02 Feb 2012 10:13:19 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: Message-ID: <146028b9-4859-43b8-9fac-0fafe8ba30f9@mkenneth.csb> ----- Original Message ----- > From: "Einav Cohen" > To: "Yaniv Kaul" > Cc: engine-devel at ovirt.org, "Daniel Erez" , "Miki Kenneth" > Sent: Thursday, February 2, 2012 4:45:30 PM > Subject: Re: [Engine-devel] Floating Disk feature description > > > ----- Original Message ----- > > From: "Yaniv Kaul" > > Sent: Thursday, February 2, 2012 1:43:58 PM > > > > On 02/02/2012 01:35 PM, Daniel Erez wrote: > > > > > > ----- Original Message ----- > > >> From: "Yaniv Kaul" > > >> To: "Daniel Erez" > > >> Cc: engine-devel at ovirt.org > > >> Sent: Thursday, February 2, 2012 10:27:06 AM > > >> Subject: Re: [Engine-devel] Floating Disk feature description > > >> > > >> On 02/01/2012 07:04 PM, Daniel Erez wrote: > > >>> Hi, > > >>> > > >>> Floating Disk feature description Wiki page: > > >>> http://www.ovirt.org/wiki/Features/DetailedFloatingDisk > > >>> > > >>> Best Regards, > > >>> Daniel > > >>> _______________________________________________ > > >>> Engine-devel mailing list > > >>> Engine-devel at ovirt.org > > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > > >> 0. Is it a floating disk or a floating image? would be nice to > > >> use > > >> the > > >> same terminology for all projects, where possible > > >> (http://www.ovirt.org/wiki/Vdsm_Storage_Terminology#Image) > > > Disk and Image are the same ("Disk" is the RHEV-M term for VDSM's > > > "Image"). Both of them means: the collection of "volumes" (or, > > > "DiskImages", in RHEV-M terminology) that comprise the full disk. > > > We have no problem using either terminology, however might be > > > confusing either way. > > > [BTW, a floating disk can't conatin snapshots - so it doesn't > > > really matter if you are talking about disk, image, diskImage or > > > volume - they are all the same] > > > > If they are the same, then please use the term 'image'. > > As I said - no technical problem doing that, however in the engine we > use the term "Disk" and the current name of the relevant VMs sub-tab > is "Disks" and the name of the new main tab would be "Disks" (or > "Virtual Disks"), etc. - so it will be strange to call the feature > "floating image" when "floating" is going to be column in a GUI grid > titled "Disks". > I assume that you can also find explanations of why using the term > "Disk" is confusing and "Image" is better; I am just genuinely not > sure what is less confusing. This is inconsistency I agree, but I think that from the User perspective we should stick with either Disk or Drive. In any OS of a Server/Desktop the term rather Disk or Drives. > > > And it looks like the feature is 'floating single-volume image' . I > > wonder if the limitation is really an issue or not. Can't think of > > a > > real use case it would be, but here's an imaginary one: before > > attaching > > it to a VM (or after and before running), I'd take a snapshot, then > > run > > the VM, do whatever, and revert before/after detaching, so the > > floating > > would go back to its original state. > > > > > > > >> 1. I don't see why a disk name should be unique. 
I don't think > > >> it's > > >> enforceable under any normal circumstances: If user A decided to > > >> call > > >> his disk 'system', user B who is completely unaware of A cannot > > >> call > > >> his > > >> disk 'system' ? It should be unique at some level, but not > > >> system-wide. > > > The enforcement for uniqueness has been suggested for avoiding a > > > list of > > > duplicate named disks in the Disks main tab and for identifying a > > > specific disk. > > > > Understood, but it's not good enough. Need to solve this, as it's > > not > > practical to ask me not to share a property with you - which both > > us > > do > > not really share. > > > > > Probelm is that any disk theoretically can be floating, so you > > > cannot differentiate between the disks using the VM name to which > > > it is attached, for example (moreover, some of the disks in the > > > system are shared, so which VM name will you use?...) > > > > The fact VM names are unique is also an annoying, problematic > > issue, > > but > > I imagine there are less VMs than disks, and most are going to be > > FQDN > > based anyway. 'system' and 'data' are quite common names for disks, > > whereas VM names might be more original. Anyway, both don't scale. > > > > > Maybe we can use some other attribute for identification? > > > > The real identification can be done via the serial number, no harm > > in > > displaying that cryptic ID in the UI. Makes us look professional. > > Of > > course, it makes more sense to use the volume UUID. May come handy > > in > > locating it physically on disk, if it exists. > > > > > > > > > >> 2. I'm not sure I understand why exporting a floating disk is > > >> 'not > > >> supported'. In the current design? implementation? ever? > > > Currently, Export is done in a VM/Template level. Support in > > > export/import (floating) disks is a new functionality which > > > requires additional thinking/design/etc. > > > > Right, so lets add 'currently not supported'. > > Y. > > > > > > > >> Y. > > >> > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From mlipchuk at redhat.com Thu Feb 2 15:15:40 2012 From: mlipchuk at redhat.com (Maor) Date: Thu, 02 Feb 2012 17:15:40 +0200 Subject: [Engine-devel] SharedRawDisk feature detail Message-ID: <4F2AA89C.7090605@redhat.com> Hello all, The shared raw disk feature description can be found under the following links: http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk http://www.ovirt.org/wiki/Features/SharedRawDisk Please feel free, to share your comments. Regards, Maor From gchaplik at redhat.com Thu Feb 2 15:30:12 2012 From: gchaplik at redhat.com (Gilad Chaplik) Date: Thu, 02 Feb 2012 10:30:12 -0500 (EST) Subject: [Engine-devel] in the UI - missing list of storage domains in detach modal In-Reply-To: <4F2A93DB.9040809@redhat.com> Message-ID: It's a bug, sending a patch to fix it. The modal dialog should list all the storage domains names that should be detached. Thanks, Gilad. ----- Original Message ----- > From: "Jon Choate" > To: engine-devel at ovirt.org > Sent: Thursday, February 2, 2012 3:47:07 PM > Subject: [Engine-devel] in the UI - missing list of storage domains in detach modal > > In the Data Center tab, if you select a storage domain to detach, a > modal dialog box pops up saying "Are you sure you want to Detach the > following storage(s)?". > > Below this there are no storage domains listed. 
I would suggest > either > listing them or changing the word 'following' to 'selected' > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From jchoate at redhat.com Thu Feb 2 16:01:15 2012 From: jchoate at redhat.com (Jon Choate) Date: Thu, 02 Feb 2012 11:01:15 -0500 Subject: [Engine-devel] multiple destinations for disks in create/import template? Message-ID: <4F2AB34B.702@redhat.com> Given the changes for multiple storage domains, do we want to allow a user to specify multiple storage domains per disk when creating or importing a template? Otherwise the user will need to use the copy(clone) template disk afterwards to create the copies of the storage domain disks where they want them. If so, what would the UI look like for this? It would require the backend to receive something that looks like Map>. thoughts? From mlipchuk at redhat.com Thu Feb 2 16:19:49 2012 From: mlipchuk at redhat.com (Maor) Date: Thu, 02 Feb 2012 18:19:49 +0200 Subject: [Engine-devel] multiple destinations for disks in create/import template? In-Reply-To: <4F2AB34B.702@redhat.com> References: <4F2AB34B.702@redhat.com> Message-ID: <4F2AB7A5.90303@redhat.com> On 02/02/2012 06:01 PM, Jon Choate wrote: > Given the changes for multiple storage domains, do we want to allow a > user to specify multiple storage domains per disk when creating or > importing a template? > > Otherwise the user will need to use the copy(clone) template disk > afterwards to create the copies of the storage domain disks where they > want them. > > If so, what would the UI look for this? It would require the backend to > receive something that looks like Map>. > > thoughts? I think something that should be taken into consideration if we do that is that the VM from which the template is created will stay image-locked for much longer. Also, what would be the desired behaviour if a few of the storage domains were not available and those copies failed? (Right now, relying on the AsyncTaskManager mechanism, the whole create-template operation is rolled back, and the template is not created at all.) > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ovedo at redhat.com Thu Feb 2 17:05:10 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Thu, 02 Feb 2012 12:05:10 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <4F2A9DCA.4080001@redhat.com> Message-ID: <2bda839a-dea8-4ca1-9b77-afe45c3f6a0d@zmail02.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Itamar Heim" > To: "Daniel Erez" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 2, 2012 4:29:30 PM > Subject: Re: [Engine-devel] Floating Disk feature description > > On 02/02/2012 12:25 PM, Daniel Erez wrote: > ... > >> 6. permissions not available for disks? > >> at all? > >> what do you mean power user would be able to attach them by their > >> type? > >> does it mean they can associate any shared disk in the system? I > >> hope > >> i'm misunderstanding, as doesn't make sense to me. > >> > >> or is this caveat specific to the user portal and not the admin? > >> not allowing creating a floating disk from user portal is not a > >> problem > >> in my view for this phase. > >> > >> I assume anyone can add a disk on a storage domain they have quota > >> to. > >> who can edit a disk? remove a disk?
attach disk to VM (which gives > >> them > >> ability to edit the disk) > >> (attach disk to VM obviously requires permission on both disk and > >> VM) > > > > Since we won't support permissions on disks entities (at first > > stage), > > as a compromise for the power user portal, we've agreed to simply > > hide > > floating non shared disks from the user. > > I still think we won't find a decent way to model this without > permissions, regardless of the power user portal. > we'll hit too many problems. > I'll look into this a bit more. I agree that permissions on disks are the right solution. But, if not possible to have disk permissions in the next version, as a compromise, maybe we can somehow use Quota. i.e, only users with permissions to consume from the Quota the disk resides on can attach the disk to another VM (if unattached). It can work to shared disks as well. There are some problems in this solution, though, as not everyone will use Quotas, you might want to share disks regardless of Quota, Thoughts? > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From jchoate at redhat.com Thu Feb 2 17:16:28 2012 From: jchoate at redhat.com (Jon Choate) Date: Thu, 02 Feb 2012 12:16:28 -0500 Subject: [Engine-devel] multiple destinations for disks in create/import template? In-Reply-To: <4F2AB7A5.90303@redhat.com> References: <4F2AB34B.702@redhat.com> <4F2AB7A5.90303@redhat.com> Message-ID: <4F2AC4EC.7010903@redhat.com> On 02/02/2012 11:19 AM, Maor wrote: > On 02/02/2012 06:01 PM, Jon Choate wrote: >> Given the changes for multiple storage domains, do we want to allow a >> user to specify multiple storage domains per disk when creating or >> importing a template? >> >> Otherwise the user will need to use the copy(clone) template disk >> afterwards to create the copies of the storage domain disks where they >> want them. >> >> If so, what would the UI look for this? It would require the backend to >> receive something that looks like Map>. >> >> thoughts? > I think something that should be taken in consider if doing that, is > that the VM which the template is created from will stay in image lock > much longer. Not necessarily. Once we get one copy of each disk down, we can release the vm and use these copies as the source of the other copies. > Also what would be the desired behaviour if few of storage domains would > not be available, and will fail. (right now, if counting on the > AsyncTaskManager mechanism, the all operation of create template will be > rolled back, and the template would not be created at all. Yes, the failure cases need to be considered. I would think that as long as one copy of each disk can be created then the template should persist. If we can't create a copy of each disk then we need to roll back and not create the template. But in this approach how do we convey the list of failures back to the user? 
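As a rough sketch of the parameter shape described above (the generic type arguments appear to have been stripped by the archive from "Map>"; java.util.UUID stands in for the engine's id type here, and none of these class names are the real engine classes), together with a per-disk, per-domain result that could carry partial failures back to the user:

    import java.util.List;
    import java.util.Map;
    import java.util.UUID;

    // Hypothetical shapes only -- not the actual engine parameter/return classes.
    public class CreateTemplateSketch {

        // Each source disk id maps to the list of target storage domain ids it
        // should be copied to (the "map of disk to list of storage domains" shape).
        public static class CreateTemplateParameters {
            private final Map<UUID, List<UUID>> diskToTargetDomains;

            public CreateTemplateParameters(Map<UUID, List<UUID>> diskToTargetDomains) {
                this.diskToTargetDomains = diskToTargetDomains;
            }

            public Map<UUID, List<UUID>> getDiskToTargetDomains() {
                return diskToTargetDomains;
            }
        }

        // One entry per (disk, storage domain) copy attempt, so the command can
        // report partial failures instead of a single pass/fail answer.
        public static class CopyResult {
            public final UUID diskId;
            public final UUID storageDomainId;
            public final boolean succeeded;
            public final String failureReason; // null when the copy succeeded

            public CopyResult(UUID diskId, UUID storageDomainId,
                              boolean succeeded, String failureReason) {
                this.diskId = diskId;
                this.storageDomainId = storageDomainId;
                this.succeeded = succeeded;
                this.failureReason = failureReason;
            }
        }
    }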
>> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Thu Feb 2 17:30:46 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 02 Feb 2012 19:30:46 +0200 Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <2bda839a-dea8-4ca1-9b77-afe45c3f6a0d@zmail02.collab.prod.int.phx2.redhat.com> References: <2bda839a-dea8-4ca1-9b77-afe45c3f6a0d@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <4F2AC846.6010904@redhat.com> On 02/02/2012 07:05 PM, Oved Ourfalli wrote: > > > ----- Original Message ----- >> From: "Itamar Heim" >> To: "Daniel Erez" >> Cc: engine-devel at ovirt.org >> Sent: Thursday, February 2, 2012 4:29:30 PM >> Subject: Re: [Engine-devel] Floating Disk feature description >> >> On 02/02/2012 12:25 PM, Daniel Erez wrote: >> ... >>>> 6. permissions not available for disks? >>>> at all? >>>> what do you mean power user would be able to attach them by their >>>> type? >>>> does it mean they can associate any shared disk in the system? I >>>> hope >>>> i'm misunderstanding, as doesn't make sense to me. >>>> >>>> or is this caveat specific to the user portal and not the admin? >>>> not allowing creating a floating disk from user portal is not a >>>> problem >>>> in my view for this phase. >>>> >>>> I assume anyone can add a disk on a storage domain they have quota >>>> to. >>>> who can edit a disk? remove a disk? attach disk to VM (which gives >>>> them >>>> ability to edit the disk) >>>> (attach disk to VM obviously requires permission on both disk and >>>> VM) >>> >>> Since we won't support permissions on disks entities (at first >>> stage), >>> as a compromise for the power user portal, we've agreed to simply >>> hide >>> floating non shared disks from the user. >> >> I still think we won't find a decent way to model this without >> permissions, regardless of the power user portal. >> we'll hit too many problems. >> I'll look into this a bit more. > > I agree that permissions on disks are the right solution. > > But, if not possible to have disk permissions in the next version, as a compromise, maybe we can somehow use Quota. i.e, only users with permissions to consume from the Quota the disk resides on can attach the disk to another VM (if unattached). It can work to shared disks as well. > > There are some problems in this solution, though, as not everyone will use Quotas, you might want to share disks regardless of Quota, quota is fine for a permission to add a floating disk, as quotas are the way we manage permissions to add disks to VMs as well. problem is with 'add/import external disk', which would be a system wide permission (that should be ok, it is not a permission on a disk, rather on system). my problem is with permission on the created (or detached) disks - who can edit/delete/attach them (and attach is a special case, as it requires checking permission on both disk and vm) From iheim at redhat.com Thu Feb 2 19:17:07 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 02 Feb 2012 21:17:07 +0200 Subject: [Engine-devel] multiple destinations for disks in create/import template? 
In-Reply-To: <4F2AC4EC.7010903@redhat.com> References: <4F2AB34B.702@redhat.com> <4F2AB7A5.90303@redhat.com> <4F2AC4EC.7010903@redhat.com> Message-ID: <4F2AE133.50507@redhat.com> On 02/02/2012 07:16 PM, Jon Choate wrote: > On 02/02/2012 11:19 AM, Maor wrote: >> On 02/02/2012 06:01 PM, Jon Choate wrote: >>> Given the changes for multiple storage domains, do we want to allow a >>> user to specify multiple storage domains per disk when creating or >>> importing a template? >>> >>> Otherwise the user will need to use the copy(clone) template disk >>> afterwards to create the copies of the storage domain disks where they >>> want them. >>> >>> If so, what would the UI look for this? It would require the backend to >>> receive something that looks like Map>. >>> >>> thoughts? >> I think something that should be taken in consider if doing that, is >> that the VM which the template is created from will stay in image lock >> much longer. > Not necessarily. Once we get one copy of each disk down, we can release > the vm and use these copies as the source of the other copies. >> Also what would be the desired behaviour if few of storage domains would >> not be available, and will fail. (right now, if counting on the >> AsyncTaskManager mechanism, the all operation of create template will be >> rolled back, and the template would not be created at all. > Yes, the failure cases need to be considered. I would think that as long > as one copy of each disk can be created then the template should > persist. If we can't create a copy of each disk then we need to roll > back and not create the template. > > But in this approach how do we convey the list of failures back to the > user? how about we start with KISS to see everything works post all changes going around, and later can add support for multiple clones (parallel or serial)? From iheim at redhat.com Fri Feb 3 10:47:35 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 03 Feb 2012 12:47:35 +0200 Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <146028b9-4859-43b8-9fac-0fafe8ba30f9@mkenneth.csb> References: <146028b9-4859-43b8-9fac-0fafe8ba30f9@mkenneth.csb> Message-ID: <4F2BBB47.2030908@redhat.com> On 02/02/2012 05:13 PM, Miki Kenneth wrote: >>> If they are the same, then please use the term 'image'. >> >> As I said - no technical problem doing that, however in the engine we >> use the term "Disk" and the current name of the relevant VMs sub-tab >> is "Disks" and the name of the new main tab would be "Disks" (or >> "Virtual Disks"), etc. - so it will be strange to call the feature >> "floating image" when "floating" is going to be column in a GUI grid >> titled "Disks". >> I assume that you can also find explanations of why using the term >> "Disk" is confusing and "Image" is better; I am just genuinely not >> sure what is less confusing. > This is inconsistency I agree, but I think that from the User perspective we should stick with either Disk or Drive. In any OS of a Server/Desktop the term rather Disk or Drives. I'd go with what the REST API calls them right now, as changing it would be the painful part. 
From iheim at redhat.com Fri Feb 3 11:30:54 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 03 Feb 2012 13:30:54 +0200 Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <2604ecab-482a-42c2-8306-24ad32807d2d@zmail14.collab.prod.int.phx2.redhat.com> References: <2604ecab-482a-42c2-8306-24ad32807d2d@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F2BC56E.4010006@redhat.com> On 02/02/2012 01:35 PM, Daniel Erez wrote: ... >> 1. I don't see why a disk name should be unique. I don't think it's >> enforceable under any normal circumstances: If user A decided to call >> his disk 'system', user B who is completely unaware of A cannot call >> his >> disk 'system' ? It should be unique at some level, but not >> system-wide. > > The enforcement for uniqueness has been suggested for avoiding a list of > duplicate named disks in the Disks main tab and for identifying a specific disk. > Probelm is that any disk theoretically can be floating, so you cannot differentiate between the disks using the VM name to which it is attached, for example (moreover, some of the disks in the system are shared, so which VM name will you use?...) > Maybe we can use some other attribute for identification? I agree with Kaul here - you can't expect disk name to be unique. makes sense to make it unique inside same VM, but as you mentioned, there is an issue with floating disks. I suggest we consider disk ID would be a generated id humanoids can follow (not uuid), and a field for description. converting the uuid from hexadecimal to a full alphanumeric representation will probably give a short enough id we can live with[1] i.e., the disk real ID would be represented to the user as a short alphanumeric ID they cannot change. import/clone/etc. change the disk uuid anyway, so we'll get a new ID for these disks. i did a fast calculation it will be a a string of 12 characters with 36 alphanumeric to cover 128bit UUID - hope i didn't do it too fast From ecohen at redhat.com Fri Feb 3 12:01:43 2012 From: ecohen at redhat.com (Einav Cohen) Date: Fri, 03 Feb 2012 07:01:43 -0500 (EST) Subject: [Engine-devel] Floating Disk feature description In-Reply-To: <4F2BBB47.2030908@redhat.com> Message-ID: <1059af98-9e3b-475a-a376-d763e1143c06@zmail04.collab.prod.int.phx2.redhat.com> > ----- Original Message ----- > From: "Itamar Heim" > Sent: Friday, February 3, 2012 12:47:35 PM > > On 02/02/2012 05:13 PM, Miki Kenneth wrote: > >>> If they are the same, then please use the term 'image'. > >> > >> As I said - no technical problem doing that, however in the engine > >> we > >> use the term "Disk" and the current name of the relevant VMs > >> sub-tab > >> is "Disks" and the name of the new main tab would be "Disks" (or > >> "Virtual Disks"), etc. - so it will be strange to call the feature > >> "floating image" when "floating" is going to be column in a GUI > >> grid > >> titled "Disks". > >> I assume that you can also find explanations of why using the term > >> "Disk" is confusing and "Image" is better; I am just genuinely not > >> sure what is less confusing. > > This is inconsistency I agree, but I think that from the User > > perspective we should stick with either Disk or Drive. In any OS > > of a Server/Desktop the term rather Disk or Drives. > > I'd go with what the REST API calls them right now, as changing it > would > be the painful part. REST API uses the term "disks", so I think that the current terminology for this feature should remain as is (it is also consistent with Miki's argument). 
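For reference on Itamar's quick calculation above about a short base-36 disk ID: covering the full 128 bits of a UUID with a 36-character alphabet (0-9, a-z) takes ceil(128 / log2 36) = 25 characters rather than 12 (12 base-36 characters carry only about 62 bits). A minimal JDK-only sketch of such an encoding (this is not engine code):

    import java.math.BigInteger;
    import java.nio.ByteBuffer;
    import java.util.UUID;

    public class ShortDiskId {

        // Render the 128-bit UUID as an unsigned base-36 string (up to 25 chars;
        // leading zeros are dropped, so pad if a fixed width is wanted).
        public static String toBase36(UUID uuid) {
            ByteBuffer buf = ByteBuffer.allocate(16);
            buf.putLong(uuid.getMostSignificantBits());
            buf.putLong(uuid.getLeastSignificantBits());
            return new BigInteger(1, buf.array()).toString(36);
        }

        public static void main(String[] args) {
            UUID id = UUID.randomUUID();
            String shortId = toBase36(id);
            System.out.println(id + " -> " + shortId + " (" + shortId.length() + " chars)");
        }
    }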
> From jchoate at redhat.com Fri Feb 3 14:41:44 2012 From: jchoate at redhat.com (Jon Choate) Date: Fri, 03 Feb 2012 09:41:44 -0500 Subject: [Engine-devel] adding a second disk to a new VM Message-ID: <4F2BF228.7060301@redhat.com> When creating new VM, you are asked if you want to add a virtual disk. If you do, you are asked if you want to add another. The issue is that while the first disk is being added, the VM is locked so unless the first disk is added very quickly, a second disk cannot be added and the user will get an error saying that the VM is not down. I think user may find this confusing and/or annoying. Is there anything that can be done to improve this experience? From iheim at redhat.com Fri Feb 3 14:59:24 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 03 Feb 2012 16:59:24 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F2AA89C.7090605@redhat.com> References: <4F2AA89C.7090605@redhat.com> Message-ID: <4F2BF64C.5010602@redhat.com> On 02/02/2012 05:15 PM, Maor wrote: > Hello all, > > The shared raw disk feature description can be found under the following > links: > http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk > http://www.ovirt.org/wiki/Features/SharedRawDisk > > Please feel free, to share your comments. 1. Affected oVirt projects i'm pretty sure the history data warehouse will need to adapt to this. 2. "The shared raw disk feature should provide the ability to attach disk to many VMs with safe concurrent access," this could be read as if ovirt or vdsm somehow provides a mechanism for safe concurrent access. maybe something like "to multiple VMs that can handle concurrent access to a shared disk without risk of corruption". and having just written this - sounds like setting this flag at UI level should include a prompt to the user to make sure they understand that flagging the disk as shared *will* lead to corruption if it is attached to virtual machines which do not support and expect it to be shared with other virtual or physical machines[1] 3. "The synchronization/clustering of shared raw disk between VMs will be managed in the file system. " either i don't understand what this mean, or it could be read with a misleading meaning. 4. VM Pools VM Pools are always based (at least today) on templates, and templates have no shared disks. I'd just block attaching a shared disk to a VM which is part of a pool (unless there is a very interesting use case meriting this) 5. "Quota has to be taken in consideration, for every new feature that will involve consumption of resources managed by it." I thought quota is not relevant in this feature. 6. future work - Permissions should be added for disk entity so who can add a shared disk? same as for floating disks, i find it hard to imagine a flow in which if someone flagged a disk as shared, suddenly everyone can have access to it. same as my statement of floating disks - I'll spend some more time to reflect on this specific part. [1] an external LUN based disk could be shared with a physical server as well. 
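A minimal sketch of the kind of validation suggested above (blocking attach of a shared disk to a VM that is part of a pool, and requiring an explicit user confirmation of the corruption warning); the class and method names are illustrative assumptions, not the engine's real command or validation code:

    // Illustrative validation only -- names do not correspond to real engine classes.
    public class AttachSharedDiskValidation {

        public static final class ValidationResult {
            public final boolean allowed;
            public final String reason; // null when allowed

            public ValidationResult(boolean allowed, String reason) {
                this.allowed = allowed;
                this.reason = reason;
            }
        }

        // A shared disk attached to guests that do not coordinate access will be
        // corrupted, hence the pool restriction and the explicit confirmation flag.
        public static ValidationResult canAttachSharedDisk(boolean diskIsShared,
                                                           boolean vmIsPartOfPool,
                                                           boolean userConfirmedSharing) {
            if (!diskIsShared) {
                return new ValidationResult(true, null);
            }
            if (vmIsPartOfPool) {
                return new ValidationResult(false,
                        "Shared disks cannot be attached to VMs that are part of a pool");
            }
            if (!userConfirmedSharing) {
                return new ValidationResult(false,
                        "The user must confirm that all attached guests coordinate access to the shared disk");
            }
            return new ValidationResult(true, null);
        }
    }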
From iheim at redhat.com Fri Feb 3 15:00:15 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 03 Feb 2012 17:00:15 +0200 Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: References: Message-ID: <4F2BF67F.6090404@redhat.com> On 02/02/2012 12:15 PM, Ayal Baron wrote: > > > ----- Original Message ----- >> On 02/02/2012 08:46 AM, Itamar Heim wrote: >>> On 02/02/2012 02:56 AM, Eli Mesika wrote: >>>> Hi >>>> >>>> We had discussed today the Stable Device Addresses feature >>>> One of the questions arose from the meeting (and actually defined >>>> as >>>> an open issue in the feature wiki) is: >>>> What happens to a 3.1 VM running on 3.1 Cluster when it is moved >>>> to a >>>> 3.0 cluster. >>>> We encountered that VM may lose some configuration data but also >>>> may >>>> be corrupted. >>>> From that point we got to the conclusion that we have somehow to >>>> maintain a VM Version that will allow us to >> >> What do you mean by VM version? >> Is that the guest hardware abstraction version (which is the kvm >> hypervisor release + the '-M' flag for compatibility)? >> >> I think its the above + the meta data /devices you keep for it. > > Correct. > There are several issues here: > 1. you loose the stable device addresses (no point in keeping the data in the db as the next time the VM is run the devices can get different addresses) > 2. If you move the VM to an older cluster where the hosts don't support the VM's compatibility mode (-M) then the VM would be started with different virtual hardware which might cause problems > 3. Once we support s4 then running the VM again with different hardware might be even more problematic than just running it from shutdown (e.g. once we have a balloon device with memory assigned to it which suddenly disappears, what would happen to the VM?) > 4. Same applies for migrate to file, but this can be dealt with by not allowing to move a VM between incompatible clusters in case it has a migrate to file state (or delete the file). same would apply for a direct lun on the vm, custom properties defined to it, multiple monitors for spice for linux guests, etc. I think we should add validations for things we know are not supported, but otherwise allow it. > A side note - I'm not sure if exporting a VM also exports the state file after migrate to file? if not then probably it should... > > I'm sure there are additional scenarios we're not thinking of. From ecohen at redhat.com Fri Feb 3 18:13:18 2012 From: ecohen at redhat.com (Einav Cohen) Date: Fri, 03 Feb 2012 13:13:18 -0500 (EST) Subject: [Engine-devel] adding a second disk to a new VM In-Reply-To: <4F2BF228.7060301@redhat.com> Message-ID: <642ab524-54dc-4f44-b40c-02af1e9e4a82@zmail04.collab.prod.int.phx2.redhat.com> > ----- Original Message ----- > From: "Jon Choate" > Sent: Friday, February 3, 2012 4:41:44 PM > > When creating new VM, you are asked if you want to add a virtual > disk. > If you do, you are asked if you want to add another. The issue is > that > while the first disk is being added, the VM is locked so unless the > first disk is added very quickly, a second disk cannot be added and > the > user will get an error saying that the VM is not down. I think user > may > find this confusing and/or annoying. Is there anything that can be > done > to improve this experience? Currently, the "Guide Me" dialog isn't being dynamically updated according to the relevant business entity status; i.e. 
you would want the "add another disk" button in the dialog to be disabled as long as the VM is in the "Image Locked" status, but once the VM becomes "Down", you would want the "add another disk" button to become enabled; currently, it is not the case and the "add another disk" button is simply always enabled. We have https://bugzilla.redhat.com/show_bug.cgi?id=692450 on that. We are still not sure regarding the exact behavior improvement that we would like to introduce here. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Sat Feb 4 13:18:40 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sat, 04 Feb 2012 15:18:40 +0200 Subject: [Engine-devel] adding a second disk to a new VM In-Reply-To: <642ab524-54dc-4f44-b40c-02af1e9e4a82@zmail04.collab.prod.int.phx2.redhat.com> References: <642ab524-54dc-4f44-b40c-02af1e9e4a82@zmail04.collab.prod.int.phx2.redhat.com> Message-ID: <4F2D3030.50605@redhat.com> On 03/02/12 20:13, Einav Cohen wrote: >> ----- Original Message ----- >> From: "Jon Choate" >> Sent: Friday, February 3, 2012 4:41:44 PM >> >> When creating new VM, you are asked if you want to add a virtual >> disk. >> If you do, you are asked if you want to add another. The issue is >> that >> while the first disk is being added, the VM is locked so unless the >> first disk is added very quickly, a second disk cannot be added and >> the >> user will get an error saying that the VM is not down. I think user >> may >> find this confusing and/or annoying. Is there anything that can be >> done >> to improve this experience? > > Currently, the "Guide Me" dialog isn't being dynamically updated according to the relevant business entity status; i.e. you would want the "add another disk" button in the dialog to be disabled as long as the VM is in the "Image Locked" status, but once the VM becomes "Down", you would want the "add another disk" button to become enabled; currently, it is not the case and the "add another disk" button is simply always enabled. > We have https://bugzilla.redhat.com/show_bug.cgi?id=692450 on that. We are still not sure regarding the exact behavior improvement that we would like to introduce here. > We plan that add disk won't lock the whole VM but only the VM disk, so adding another disk (simultaneously) should be supported very soon. >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ovedo at redhat.com Sun Feb 5 07:33:59 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Sun, 05 Feb 2012 02:33:59 -0500 (EST) Subject: [Engine-devel] Using categories in the Feature pages In-Reply-To: Message-ID: <9fabf120-c256-4453-aa67-d3c16840d367@zmail02.collab.prod.int.phx2.redhat.com> Hey all, When you write a feature page, please make sure you end it with: [[Category:Feature]] We will add a link to this category, so that features will be easier to track. In your detailed feature page, put: [[Category:DetailedFeature]] And in the design page put: [[Category:FeatureDesign]] If the feature is simple enough, and creating all the three pages is an overkill, then create the first one (with the correct category), and put all the details there. 
Thank you, Oved From lpeer at redhat.com Sun Feb 5 08:29:46 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 05 Feb 2012 10:29:46 +0200 Subject: Re: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <4F2BF67F.6090404@redhat.com> References: <4F2BF67F.6090404@redhat.com> Message-ID: <4F2E3DFA.7010804@redhat.com> On 03/02/12 17:00, Itamar Heim wrote: > On 02/02/2012 12:15 PM, Ayal Baron wrote: >> >> >> ----- Original Message ----- >>> On 02/02/2012 08:46 AM, Itamar Heim wrote: >>>> On 02/02/2012 02:56 AM, Eli Mesika wrote: >>>>> Hi >>>>> >>>>> We had discussed today the Stable Device Addresses feature >>>>> One of the questions arose from the meeting (and actually defined >>>>> as >>>>> an open issue in the feature wiki) is: >>>>> What happens to a 3.1 VM running on 3.1 Cluster when it is moved >>>>> to a >>>>> 3.0 cluster. >>>>> We encountered that VM may lose some configuration data but also >>>>> may >>>>> be corrupted. >>>>> From that point we got to the conclusion that we have somehow to >>>>> maintain a VM Version that will allow us to >>> >>> What do you mean by VM version? >>> Is that the guest hardware abstraction version (which is the kvm >>> hypervisor release + the '-M' flag for compatibility)? >>> >>> I think its the above + the meta data /devices you keep for it. >> >> Correct. >> There are several issues here: >> 1. you loose the stable device addresses (no point in keeping the data >> in the db as the next time the VM is run the devices can get different >> addresses) >> 2. If you move the VM to an older cluster where the hosts don't >> support the VM's compatibility mode (-M) then the VM would be started >> with different virtual hardware which might cause problems >> 3. Once we support s4 then running the VM again with different >> hardware might be even more problematic than just running it from >> shutdown (e.g. once we have a balloon device with memory assigned to >> it which suddenly disappears, what would happen to the VM?) >> 4. Same applies for migrate to file, but this can be dealt with by not >> allowing to move a VM between incompatible clusters in case it has a >> migrate to file state (or delete the file). > > same would apply for a direct lun on the vm, custom properties defined > to it, multiple monitors for spice for linux guests, etc. > I think we should add validations for things we know are not supported, > but otherwise allow it. > IIUC you suggest using feature-level granularity to decide on which cluster (version) the VM can be started. Note that *all* VMs that were started on a 3.1 cluster will lose functionality when running on a 3.0 cluster (stable device addresses will be lost). I would go with a simple approach here: derive the VM version from the cluster version. The VM can be executed on all hosts in the cluster without losing any functionality, and when we change the VM's cluster we practically change the VM version. I would require a force flag to execute the VM on a lower cluster version. What we are missing today is saving this version as part of the OVF, to support version-compatibility functionality during import/export VM flows and snapshots of VM configuration. >> A side note - I'm not sure if exporting a VM also exports the state >> file after migrate to file? if not then probably it should... >> >> I'm sure there are additional scenarios we're not thinking of.
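A rough sketch of the simple approach outlined above (the VM's effective version follows the compatibility version of its cluster and would be persisted in the OVF, and running it on a lower-version cluster requires an explicit force flag); the Version type and names here are assumptions, not the actual engine model:

    // Sketch only -- types and names are assumptions, not the engine's model.
    public class VmClusterCompatibility {

        public static final class Version implements Comparable<Version> {
            public final int major;
            public final int minor;

            public Version(int major, int minor) {
                this.major = major;
                this.minor = minor;
            }

            @Override
            public int compareTo(Version other) {
                return major != other.major
                        ? Integer.compare(major, other.major)
                        : Integer.compare(minor, other.minor);
            }

            @Override
            public String toString() {
                return major + "." + minor;
            }
        }

        // The VM version is the compatibility version of the cluster it last ran on.
        // Running on a cluster with a lower compatibility version loses functionality
        // such as stable device addresses, so it is only allowed with a force flag.
        public static boolean canRunOnCluster(Version vmVersion, Version clusterVersion,
                                              boolean force) {
            return clusterVersion.compareTo(vmVersion) >= 0 || force;
        }
    }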
> _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ykaul at redhat.com Sun Feb 5 08:34:55 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 05 Feb 2012 03:34:55 -0500 (EST) Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <4F2E3DFA.7010804@redhat.com> Message-ID: <0ab1b64d-59e7-43b7-b9f6-3ecc5a41cfd3@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 03/02/12 17:00, Itamar Heim wrote: > > On 02/02/2012 12:15 PM, Ayal Baron wrote: > >> > >> > >> ----- Original Message ----- > >>> On 02/02/2012 08:46 AM, Itamar Heim wrote: > >>>> On 02/02/2012 02:56 AM, Eli Mesika wrote: > >>>>> Hi > >>>>> > >>>>> We had discussed today the Stable Device Addresses feature > >>>>> One of the questions arose from the meeting (and actually > >>>>> defined > >>>>> as > >>>>> an open issue in the feature wiki) is: > >>>>> What happens to a 3.1 VM running on 3.1 Cluster when it is > >>>>> moved > >>>>> to a > >>>>> 3.0 cluster. > >>>>> We encountered that VM may lose some configuration data but > >>>>> also > >>>>> may > >>>>> be corrupted. > >>>>> From that point we got to the conclusion that we have somehow > >>>>> to > >>>>> maintain a VM Version that will allow us to > >>> > >>> What do you mean by VM version? > >>> Is that the guest hardware abstraction version (which is the kvm > >>> hypervisor release + the '-M' flag for compatibility)? > >>> > >>> I think its the above + the meta data /devices you keep for it. > >> > >> Correct. > >> There are several issues here: > >> 1. you loose the stable device addresses (no point in keeping the > >> data > >> in the db as the next time the VM is run the devices can get > >> different > >> addresses) > >> 2. If you move the VM to an older cluster where the hosts don't > >> support the VM's compatibility mode (-M) then the VM would be > >> started > >> with different virtual hardware which might cause problems > >> 3. Once we support s4 then running the VM again with different > >> hardware might be even more problematic than just running it from > >> shutdown (e.g. once we have a balloon device with memory assigned > >> to > >> it which suddenly disappears, what would happen to the VM?) > >> 4. Same applies for migrate to file, but this can be dealt with by > >> not > >> allowing to move a VM between incompatible clusters in case it has > >> a > >> migrate to file state (or delete the file). > > > > same would apply for a direct lun on the vm, custom properties > > defined > > to it, multiple monitors for spice for linux guests, etc. > > I think we should add validations for things we know are not > > supported, > > but otherwise allow it. > > > > IIUC you suggest to use features granularity for setting on which > cluster (version) the VM can be started. Note that *all* VMs that > were > started on a 3.1 cluster will loose functionality when running on 3.0 > cluster (stable device addressed will be lost). > > I would go with a simple approach here. > Derive the VM version from the cluster version, VM can be executed on > all hosts in the cluster without loosing any functionality, when we > change the VM cluster we practically change the VM version. > I would require a force flag to execute the VM on a lower cluster > version. Isn't the VM version derived from the version of the cluster on which it was last edited? For example: you've created a VM on a cluster v3.0. 
When it is running on a v3.2 cluster, is there any reason to change its version? When it is edited, then perhaps yes - because it may have changed/added properties/features that are only applicable to v3.2. But until then - let it stay in the same version as it was created. (btw, how does this map, if at all, to the '-m' qemu command line switch?) Y. > > What we are missing today is saving this version as part of the OVF > to > support version compatibility functionality during import/export VM > flows and snapshots of VM configuration. > > >> A side note - I'm not sure if exporting a VM also exports the > >> state > >> file after migrate to file? if not then probably it should... > >> > >> I'm sure there are additional scenarios we're not thinking of. > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Sun Feb 5 08:45:36 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 05 Feb 2012 10:45:36 +0200 Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <0ab1b64d-59e7-43b7-b9f6-3ecc5a41cfd3@zmail13.collab.prod.int.phx2.redhat.com> References: <0ab1b64d-59e7-43b7-b9f6-3ecc5a41cfd3@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <4F2E41B0.7030901@redhat.com> On 05/02/12 10:34, Yaniv Kaul wrote: > ----- Original Message ----- >> On 03/02/12 17:00, Itamar Heim wrote: >>> On 02/02/2012 12:15 PM, Ayal Baron wrote: >>>> >>>> >>>> ----- Original Message ----- >>>>> On 02/02/2012 08:46 AM, Itamar Heim wrote: >>>>>> On 02/02/2012 02:56 AM, Eli Mesika wrote: >>>>>>> Hi >>>>>>> >>>>>>> We had discussed today the Stable Device Addresses feature >>>>>>> One of the questions arose from the meeting (and actually >>>>>>> defined >>>>>>> as >>>>>>> an open issue in the feature wiki) is: >>>>>>> What happens to a 3.1 VM running on 3.1 Cluster when it is >>>>>>> moved >>>>>>> to a >>>>>>> 3.0 cluster. >>>>>>> We encountered that VM may lose some configuration data but >>>>>>> also >>>>>>> may >>>>>>> be corrupted. >>>>>>> From that point we got to the conclusion that we have somehow >>>>>>> to >>>>>>> maintain a VM Version that will allow us to >>>>> >>>>> What do you mean by VM version? >>>>> Is that the guest hardware abstraction version (which is the kvm >>>>> hypervisor release + the '-M' flag for compatibility)? >>>>> >>>>> I think its the above + the meta data /devices you keep for it. >>>> >>>> Correct. >>>> There are several issues here: >>>> 1. you loose the stable device addresses (no point in keeping the >>>> data >>>> in the db as the next time the VM is run the devices can get >>>> different >>>> addresses) >>>> 2. If you move the VM to an older cluster where the hosts don't >>>> support the VM's compatibility mode (-M) then the VM would be >>>> started >>>> with different virtual hardware which might cause problems >>>> 3. Once we support s4 then running the VM again with different >>>> hardware might be even more problematic than just running it from >>>> shutdown (e.g. once we have a balloon device with memory assigned >>>> to >>>> it which suddenly disappears, what would happen to the VM?) >>>> 4. 
Same applies for migrate to file, but this can be dealt with by >>>> not >>>> allowing to move a VM between incompatible clusters in case it has >>>> a >>>> migrate to file state (or delete the file). >>> >>> same would apply for a direct lun on the vm, custom properties >>> defined >>> to it, multiple monitors for spice for linux guests, etc. >>> I think we should add validations for things we know are not >>> supported, >>> but otherwise allow it. >>> >> >> IIUC you suggest to use features granularity for setting on which >> cluster (version) the VM can be started. Note that *all* VMs that >> were >> started on a 3.1 cluster will loose functionality when running on 3.0 >> cluster (stable device addressed will be lost). >> >> I would go with a simple approach here. >> Derive the VM version from the cluster version, VM can be executed on >> all hosts in the cluster without loosing any functionality, when we >> change the VM cluster we practically change the VM version. >> I would require a force flag to execute the VM on a lower cluster >> version. > > Isn't the VM version derived from the version of the cluster on which it was last edited? > For example: you've created a VM on a cluster v3.0. When it is running on a v3.2 cluster, is there any reason to change its version? > When it is edited, then perhaps yes - because it may have changed/added properties/features that are only applicable to v3.2. > But until then - let it stay in the same version as it was created. > (btw, how does this map, if at all, to the '-m' qemu command line switch?) > Y. > Currently we do not persist the VM version at all, it is derived from the cluster version the VM belongs to (that's why I suggested to save it as part of the OVF so we can be aware of the VM version when exporting/importing a VM etc.). The VM does not have to be edited to be influenced by the cluster version. For example if you start a VM on 3.1 cluster you get the stable device address feature with no manual editing. Livnat >> >> What we are missing today is saving this version as part of the OVF >> to >> support version compatibility functionality during import/export VM >> flows and snapshots of VM configuration. >> >>>> A side note - I'm not sure if exporting a VM also exports the >>>> state >>>> file after migrate to file? if not then probably it should... >>>> >>>> I'm sure there are additional scenarios we're not thinking of. 
>>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> From lpeer at redhat.com Sun Feb 5 08:46:56 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 05 Feb 2012 10:46:56 +0200 Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <4F2E41B0.7030901@redhat.com> References: <0ab1b64d-59e7-43b7-b9f6-3ecc5a41cfd3@zmail13.collab.prod.int.phx2.redhat.com> <4F2E41B0.7030901@redhat.com> Message-ID: <4F2E4200.8010308@redhat.com> On 05/02/12 10:45, Livnat Peer wrote: > On 05/02/12 10:34, Yaniv Kaul wrote: >> ----- Original Message ----- >>> On 03/02/12 17:00, Itamar Heim wrote: >>>> On 02/02/2012 12:15 PM, Ayal Baron wrote: >>>>> >>>>> >>>>> ----- Original Message ----- >>>>>> On 02/02/2012 08:46 AM, Itamar Heim wrote: >>>>>>> On 02/02/2012 02:56 AM, Eli Mesika wrote: >>>>>>>> Hi >>>>>>>> >>>>>>>> We had discussed today the Stable Device Addresses feature >>>>>>>> One of the questions arose from the meeting (and actually >>>>>>>> defined >>>>>>>> as >>>>>>>> an open issue in the feature wiki) is: >>>>>>>> What happens to a 3.1 VM running on 3.1 Cluster when it is >>>>>>>> moved >>>>>>>> to a >>>>>>>> 3.0 cluster. >>>>>>>> We encountered that VM may lose some configuration data but >>>>>>>> also >>>>>>>> may >>>>>>>> be corrupted. >>>>>>>> From that point we got to the conclusion that we have somehow >>>>>>>> to >>>>>>>> maintain a VM Version that will allow us to >>>>>> >>>>>> What do you mean by VM version? >>>>>> Is that the guest hardware abstraction version (which is the kvm >>>>>> hypervisor release + the '-M' flag for compatibility)? >>>>>> >>>>>> I think its the above + the meta data /devices you keep for it. >>>>> >>>>> Correct. >>>>> There are several issues here: >>>>> 1. you loose the stable device addresses (no point in keeping the >>>>> data >>>>> in the db as the next time the VM is run the devices can get >>>>> different >>>>> addresses) >>>>> 2. If you move the VM to an older cluster where the hosts don't >>>>> support the VM's compatibility mode (-M) then the VM would be >>>>> started >>>>> with different virtual hardware which might cause problems >>>>> 3. Once we support s4 then running the VM again with different >>>>> hardware might be even more problematic than just running it from >>>>> shutdown (e.g. once we have a balloon device with memory assigned >>>>> to >>>>> it which suddenly disappears, what would happen to the VM?) >>>>> 4. Same applies for migrate to file, but this can be dealt with by >>>>> not >>>>> allowing to move a VM between incompatible clusters in case it has >>>>> a >>>>> migrate to file state (or delete the file). >>>> >>>> same would apply for a direct lun on the vm, custom properties >>>> defined >>>> to it, multiple monitors for spice for linux guests, etc. >>>> I think we should add validations for things we know are not >>>> supported, >>>> but otherwise allow it. >>>> >>> >>> IIUC you suggest to use features granularity for setting on which >>> cluster (version) the VM can be started. Note that *all* VMs that >>> were >>> started on a 3.1 cluster will loose functionality when running on 3.0 >>> cluster (stable device addressed will be lost). >>> >>> I would go with a simple approach here. 
>>> Derive the VM version from the cluster version, VM can be executed on >>> all hosts in the cluster without loosing any functionality, when we >>> change the VM cluster we practically change the VM version. >>> I would require a force flag to execute the VM on a lower cluster >>> version. >> >> Isn't the VM version derived from the version of the cluster on which it was last edited? >> For example: you've created a VM on a cluster v3.0. When it is running on a v3.2 cluster, is there any reason to change its version? >> When it is edited, then perhaps yes - because it may have changed/added properties/features that are only applicable to v3.2. >> But until then - let it stay in the same version as it was created. >> (btw, how does this map, if at all, to the '-m' qemu command line switch?) >> Y. >> > > Currently we do not persist the VM version at all, it is derived from > the cluster version the VM belongs to (that's why I suggested to save it > as part of the OVF so we can be aware of the VM version when > exporting/importing a VM etc.). > > The VM does not have to be edited to be influenced by the cluster > version. For example if you start a VM on 3.1 cluster you get the stable > device address feature with no manual editing. > > Livnat > About the -m switch the engine derives it from the cluster level. >>> >>> What we are missing today is saving this version as part of the OVF >>> to >>> support version compatibility functionality during import/export VM >>> flows and snapshots of VM configuration. >>> >>>>> A side note - I'm not sure if exporting a VM also exports the >>>>> state >>>>> file after migrate to file? if not then probably it should... >>>>> >>>>> I'm sure there are additional scenarios we're not thinking of. >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From lpeer at redhat.com Sun Feb 5 11:28:18 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 05 Feb 2012 13:28:18 +0200 Subject: [Engine-devel] multiple destinations for disks in create/import template? In-Reply-To: <4F2AE133.50507@redhat.com> References: <4F2AB34B.702@redhat.com> <4F2AB7A5.90303@redhat.com> <4F2AC4EC.7010903@redhat.com> <4F2AE133.50507@redhat.com> Message-ID: <4F2E67D2.4060109@redhat.com> On 02/02/12 21:17, Itamar Heim wrote: > On 02/02/2012 07:16 PM, Jon Choate wrote: >> On 02/02/2012 11:19 AM, Maor wrote: >>> On 02/02/2012 06:01 PM, Jon Choate wrote: >>>> Given the changes for multiple storage domains, do we want to allow a >>>> user to specify multiple storage domains per disk when creating or >>>> importing a template? >>>> >>>> Otherwise the user will need to use the copy(clone) template disk >>>> afterwards to create the copies of the storage domain disks where they >>>> want them. >>>> >>>> If so, what would the UI look for this? It would require the backend to >>>> receive something that looks like Map>. >>>> >>>> thoughts? >>> I think something that should be taken in consider if doing that, is >>> that the VM which the template is created from will stay in image lock >>> much longer. >> Not necessarily. 
Once we get one copy of each disk down, we can release >> the vm and use these copies as the source of the other copies. >>> Also what would be the desired behaviour if few of storage domains would >>> not be available, and will fail. (right now, if counting on the >>> AsyncTaskManager mechanism, the all operation of create template will be >>> rolled back, and the template would not be created at all. >> Yes, the failure cases need to be considered. I would think that as long >> as one copy of each disk can be created then the template should >> persist. If we can't create a copy of each disk then we need to roll >> back and not create the template. >> >> But in this approach how do we convey the list of failures back to the >> user? > > how about we start with KISS to see everything works post all changes > going around, and later can add support for multiple clones (parallel or > serial)? +1 for KISS Other than that we have many scenarios where we want to aggregate number of users actions as one user operation (plug and activate disk for example and some older flows like configure local storage). I think that after introducing a general mechanism in the engine for executing sequence of commands all the above modeling will be redundant. Theoretically the user will be able to create a template followed by clone disk for the relevant disks etc. Livnat > ______________________________ _________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From mlipchuk at redhat.com Sun Feb 5 13:14:35 2012 From: mlipchuk at redhat.com (Maor) Date: Sun, 05 Feb 2012 15:14:35 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F2BF64C.5010602@redhat.com> References: <4F2AA89C.7090605@redhat.com> <4F2BF64C.5010602@redhat.com> Message-ID: <4F2E80BB.4060604@redhat.com> On 02/03/2012 04:59 PM, Itamar Heim wrote: > On 02/02/2012 05:15 PM, Maor wrote: >> Hello all, >> >> The shared raw disk feature description can be found under the following >> links: >> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk >> http://www.ovirt.org/wiki/Features/SharedRawDisk >> >> Please feel free, to share your comments. > > 1. Affected oVirt projects > i'm pretty sure the history data warehouse will need to adapt to this. > > 2. "The shared raw disk feature should provide the ability to attach > disk to many VMs with safe concurrent access," > this could be read as if ovirt or vdsm somehow provides a mechanism for > safe concurrent access. > maybe something like "to multiple VMs that can handle concurrent access > to a shared disk without risk of corruption". > and having just written this - sounds like setting this flag at UI level > should include a prompt to the user to make sure they understand that > flagging the disk as shared *will* lead to corruption if it is attached > to virtual machines which do not support and expect it to be shared with > other virtual or physical machines[1] I agree, I will change it. > > 3. "The synchronization/clustering of shared raw disk between VMs will > be managed in the file system. " > > either i don't understand what this mean, or it could be read with a > misleading meaning. Maybe the following rephrase will be more accurate: "The synchronization/clustering of shared raw disk between VMs should be based on external independent application which will be synchronized with the guest application." > > 4. 
VM Pools > VM Pools are always based (at least today) on templates, and templates > have no shared disks. > I'd just block attaching a shared disk to a VM which is part of a pool > (unless there is a very interesting use case meriting this) If there is no reason to attach shared disk to a VM from pool, maybe its also not that relevant to attach shared disk to stateless VM. Miki? > > 5. "Quota has to be taken in consideration, for every new feature that > will involve consumption of resources managed by it." > > I thought quota is not relevant in this feature. Why not? Quota should be taken in consideration when adding new shared raw disk, or moving a disk to other domains. We should also notice that shared raw disk should not consume more quota when sharing the disk with different VMs. > > 6. future work - Permissions should be added for disk entity > so who can add a shared disk? Data Center Administrator or System Administrator will be initialized with permissions for creating shared raw disk, or changing shared disk to be unshared. Regarding attach/detach disks to/from VM, I was thinking that for phase one we will count on the user VM permissions. If user will have permissions to create new disks on the VM, he will also have permissions to attach new shared raw disk to it. > same as for floating disks, i find it hard to imagine a flow in which if > someone flagged a disk as shared, suddenly everyone can have access to it. > same as my statement of floating disks - I'll spend some more time to > reflect on this specific part. > > [1] an external LUN based disk could be shared with a physical server as > well. From jchoate at redhat.com Sun Feb 5 13:23:03 2012 From: jchoate at redhat.com (Jon Choate) Date: Sun, 05 Feb 2012 08:23:03 -0500 (EST) Subject: [Engine-devel] Using categories in the Feature pages In-Reply-To: <9fabf120-c256-4453-aa67-d3c16840d367@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: You also should not create the feature pages with a url like ovirt.org/wiki/Features/MyFeature since we are using natural language page names and not url hierarchies to organize pages. I would also suggest creating a new category with the name of your feature to organize all of the pages related to your feature. You can customize the category page for your feature category to act as a "home page" for the feature. ----- Original Message ----- > From: "Oved Ourfalli" > To: engine-devel at ovirt.org > Sent: Sunday, February 5, 2012 2:33:59 AM > Subject: [Engine-devel] Using categories in the Feature pages > > Hey all, > > When you write a feature page, please make sure you end it with: > [[Category:Feature]] > > We will add a link to this category, so that features will be easier > to track. > > In your detailed feature page, put: > [[Category:DetailedFeature]] > > And in the design page put: > [[Category:FeatureDesign]] > > If the feature is simple enough, and creating all the three pages is > an overkill, then create the first one (with the correct category), > and put all the details there. 
> > Thank you, > Oved > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkenneth at redhat.com Sun Feb 5 13:57:43 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Sun, 05 Feb 2012 08:57:43 -0500 (EST) Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <4F2E4200.8010308@redhat.com> Message-ID: <9d055ec5-3aea-4644-b5a8-859750d84ef3@mkenneth.csb> ----- Original Message ----- > From: "Livnat Peer" > To: "Yaniv Kaul" > Cc: dlaor at redhat.com, engine-devel at ovirt.org > Sent: Sunday, February 5, 2012 10:46:56 AM > Subject: Re: [Engine-devel] oVirt upstream meeting : VM Version > > On 05/02/12 10:45, Livnat Peer wrote: > > On 05/02/12 10:34, Yaniv Kaul wrote: > >> ----- Original Message ----- > >>> On 03/02/12 17:00, Itamar Heim wrote: > >>>> On 02/02/2012 12:15 PM, Ayal Baron wrote: > >>>>> > >>>>> > >>>>> ----- Original Message ----- > >>>>>> On 02/02/2012 08:46 AM, Itamar Heim wrote: > >>>>>>> On 02/02/2012 02:56 AM, Eli Mesika wrote: > >>>>>>>> Hi > >>>>>>>> > >>>>>>>> We had discussed today the Stable Device Addresses feature > >>>>>>>> One of the questions arose from the meeting (and actually > >>>>>>>> defined > >>>>>>>> as > >>>>>>>> an open issue in the feature wiki) is: > >>>>>>>> What happens to a 3.1 VM running on 3.1 Cluster when it is > >>>>>>>> moved > >>>>>>>> to a > >>>>>>>> 3.0 cluster. > >>>>>>>> We encountered that VM may lose some configuration data but > >>>>>>>> also > >>>>>>>> may > >>>>>>>> be corrupted. > >>>>>>>> From that point we got to the conclusion that we have > >>>>>>>> somehow > >>>>>>>> to > >>>>>>>> maintain a VM Version that will allow us to > >>>>>> > >>>>>> What do you mean by VM version? > >>>>>> Is that the guest hardware abstraction version (which is the > >>>>>> kvm > >>>>>> hypervisor release + the '-M' flag for compatibility)? > >>>>>> > >>>>>> I think its the above + the meta data /devices you keep for > >>>>>> it. > >>>>> > >>>>> Correct. > >>>>> There are several issues here: > >>>>> 1. you loose the stable device addresses (no point in keeping > >>>>> the > >>>>> data > >>>>> in the db as the next time the VM is run the devices can get > >>>>> different > >>>>> addresses) > >>>>> 2. If you move the VM to an older cluster where the hosts don't > >>>>> support the VM's compatibility mode (-M) then the VM would be > >>>>> started > >>>>> with different virtual hardware which might cause problems > >>>>> 3. Once we support s4 then running the VM again with different > >>>>> hardware might be even more problematic than just running it > >>>>> from > >>>>> shutdown (e.g. once we have a balloon device with memory > >>>>> assigned > >>>>> to > >>>>> it which suddenly disappears, what would happen to the VM?) > >>>>> 4. Same applies for migrate to file, but this can be dealt with > >>>>> by > >>>>> not > >>>>> allowing to move a VM between incompatible clusters in case it > >>>>> has > >>>>> a > >>>>> migrate to file state (or delete the file). > >>>> > >>>> same would apply for a direct lun on the vm, custom properties > >>>> defined > >>>> to it, multiple monitors for spice for linux guests, etc. > >>>> I think we should add validations for things we know are not > >>>> supported, > >>>> but otherwise allow it. > >>>> > >>> > >>> IIUC you suggest to use features granularity for setting on which > >>> cluster (version) the VM can be started. 
Note that *all* VMs that > >>> were > >>> started on a 3.1 cluster will loose functionality when running on > >>> 3.0 > >>> cluster (stable device addressed will be lost). > >>> > >>> I would go with a simple approach here. > >>> Derive the VM version from the cluster version, VM can be > >>> executed on > >>> all hosts in the cluster without loosing any functionality, when > >>> we > >>> change the VM cluster we practically change the VM version. > >>> I would require a force flag to execute the VM on a lower cluster > >>> version. > >> > >> Isn't the VM version derived from the version of the cluster on > >> which it was last edited? > >> For example: you've created a VM on a cluster v3.0. When it is > >> running on a v3.2 cluster, is there any reason to change its > >> version? > >> When it is edited, then perhaps yes - because it may have > >> changed/added properties/features that are only applicable to > >> v3.2. > >> But until then - let it stay in the same version as it was > >> created. > >> (btw, how does this map, if at all, to the '-m' qemu command line > >> switch?) > >> Y. > >> > > > > Currently we do not persist the VM version at all, it is derived > > from > > the cluster version the VM belongs to (that's why I suggested to > > save it > > as part of the OVF so we can be aware of the VM version when > > exporting/importing a VM etc.). > > > > The VM does not have to be edited to be influenced by the cluster > > version. For example if you start a VM on 3.1 cluster you get the > > stable > > device address feature with no manual editing. > > > > Livnat > > However, I do agree with Yaniv that changing the VM version "under the hood" is a bit problematic. Version is a parameter associated with create/update operation, and less with Run command. > > About the -m switch the engine derives it from the cluster level. > > >>> > >>> What we are missing today is saving this version as part of the > >>> OVF > >>> to > >>> support version compatibility functionality during import/export > >>> VM > >>> flows and snapshots of VM configuration. > >>> > >>>>> A side note - I'm not sure if exporting a VM also exports the > >>>>> state > >>>>> file after migrate to file? if not then probably it should... > >>>>> > >>>>> I'm sure there are additional scenarios we're not thinking of. 
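To make the trade-off being debated here concrete, below is a rough sketch of the kind of check RunVm would need if the engine persisted a per-VM compatibility version and compared it against the levels a cluster supports, with a force flag for knowingly running on a lower cluster version. All names are invented for illustration; this is not engine code.

    import java.util.Set;

    // Illustrative only: the engine has no such class today.
    public class VmCompatibilityCheck {

        /**
         * A VM may run if the cluster supports its compatibility version,
         * or if the caller explicitly forces a run on a lower cluster level,
         * accepting the loss of newer features (e.g. stable device addresses
         * when a 3.1 VM is started on a 3.0 cluster).
         */
        public static boolean canRunVm(String vmCompatibilityVersion,
                                       Set<String> clusterSupportedVersions,
                                       boolean force) {
            if (clusterSupportedVersions.contains(vmCompatibilityVersion)) {
                return true;
            }
            return force;
        }
    }

For example, a VM persisted as "3.1" would be blocked on a cluster that only reports "3.0" unless force is passed, which is essentially the force-flag behaviour suggested above.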
> >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>> > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>> > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ovedo at redhat.com Sun Feb 5 14:25:49 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Sun, 05 Feb 2012 09:25:49 -0500 (EST) Subject: [Engine-devel] Using categories in the Feature pages In-Reply-To: Message-ID: ----- Original Message ----- > From: "Jon Choate" > To: "Oved Ourfalli" > Cc: engine-devel at ovirt.org > Sent: Sunday, February 5, 2012 3:23:03 PM > Subject: Re: [Engine-devel] Using categories in the Feature pages > > You also should not create the feature pages with a url like > ovirt.org/wiki/Features/MyFeature since we are using natural > language page names and not url hierarchies to organize pages. > > I would also suggest creating a new category with the name of your > feature to organize all of the pages related to your feature. You > can customize the category page for your feature category to act as > a "home page" for the feature. > I would put that in the feature page. Today, in the template, there is a link to the detailed feature page. So, a link to the design should be added there as well. Thoughts? > ----- Original Message ----- > > From: "Oved Ourfalli" > > To: engine-devel at ovirt.org > > Sent: Sunday, February 5, 2012 2:33:59 AM > > Subject: [Engine-devel] Using categories in the Feature pages > > > > Hey all, > > > > When you write a feature page, please make sure you end it with: > > [[Category:Feature]] > > > > We will add a link to this category, so that features will be > > easier > > to track. > > > > In your detailed feature page, put: > > [[Category:DetailedFeature]] > > > > And in the design page put: > > [[Category:FeatureDesign]] > > > > If the feature is simple enough, and creating all the three pages > > is > > an overkill, then create the first one (with the correct category), > > and put all the details there. 
> > > > Thank you, > > Oved > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ovedo at redhat.com Sun Feb 5 14:40:21 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Sun, 05 Feb 2012 09:40:21 -0500 (EST) Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F2E80BB.4060604@redhat.com> Message-ID: <8ff8fb38-f845-4a6c-b07f-b56eb35686b0@zmail02.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Maor" > To: "Itamar Heim" > Cc: engine-devel at ovirt.org > Sent: Sunday, February 5, 2012 3:14:35 PM > Subject: Re: [Engine-devel] SharedRawDisk feature detail > > On 02/03/2012 04:59 PM, Itamar Heim wrote: > > On 02/02/2012 05:15 PM, Maor wrote: > >> Hello all, > >> > >> The shared raw disk feature description can be found under the > >> following > >> links: > >> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk > >> http://www.ovirt.org/wiki/Features/SharedRawDisk > >> > >> Please feel free, to share your comments. > > > > 1. Affected oVirt projects > > i'm pretty sure the history data warehouse will need to adapt to > > this. > > > > 2. "The shared raw disk feature should provide the ability to > > attach > > disk to many VMs with safe concurrent access," > > this could be read as if ovirt or vdsm somehow provides a mechanism > > for > > safe concurrent access. > > maybe something like "to multiple VMs that can handle concurrent > > access > > to a shared disk without risk of corruption". > > and having just written this - sounds like setting this flag at UI > > level > > should include a prompt to the user to make sure they understand > > that > > flagging the disk as shared *will* lead to corruption if it is > > attached > > to virtual machines which do not support and expect it to be shared > > with > > other virtual or physical machines[1] > I agree, I will change it. > > > > 3. "The synchronization/clustering of shared raw disk between VMs > > will > > be managed in the file system. " > > > > either i don't understand what this mean, or it could be read with > > a > > misleading meaning. > Maybe the following rephrase will be more accurate: "The > synchronization/clustering of shared raw disk between VMs should be > based on external independent application which will be synchronized > with the guest application." > > > > 4. VM Pools > > VM Pools are always based (at least today) on templates, and > > templates > > have no shared disks. > > I'd just block attaching a shared disk to a VM which is part of a > > pool > > (unless there is a very interesting use case meriting this) > If there is no reason to attach shared disk to a VM from pool, maybe > its > also not that relevant to attach shared disk to stateless VM. > Miki? > I think there is such a use-case in clustered environments (DB cluster, for example), in which you have several disks that are not shared (OS, applications, etc.), and several disks that are shared (DB disks). In this case, in order to create this clustered environment, it will be nice if you create a template with the regular disks, create a pool from it, and attach all the VMs in the pool the shared DB disks. 
(It would be even nicer if the shared disk will be a part of the template, and when creating VMs from it they will have have a "link" to this shared disk - but I agree that it may be complex so maybe we should leave it aside for now). Thoughts? > > > > 5. "Quota has to be taken in consideration, for every new feature > > that > > will involve consumption of resources managed by it." > > > > I thought quota is not relevant in this feature. > Why not? > Quota should be taken in consideration when adding new shared raw > disk, > or moving a disk to other domains. We should also notice that shared > raw > disk should not consume more quota when sharing the disk with > different VMs. > > > > > 6. future work - Permissions should be added for disk entity > > so who can add a shared disk? > Data Center Administrator or System Administrator will be initialized > with permissions for creating shared raw disk, or changing shared > disk > to be unshared. > Regarding attach/detach disks to/from VM, I was thinking that for > phase > one we will count on the user VM permissions. If user will have > permissions to create new disks on the VM, he will also have > permissions > to attach new shared raw disk to it. > > same as for floating disks, i find it hard to imagine a flow in > > which if > > someone flagged a disk as shared, suddenly everyone can have access > > to it. > > same as my statement of floating disks - I'll spend some more time > > to > > reflect on this specific part. > > > > [1] an external LUN based disk could be shared with a physical > > server as > > well. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ykaul at redhat.com Sun Feb 5 15:13:17 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 05 Feb 2012 17:13:17 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <8ff8fb38-f845-4a6c-b07f-b56eb35686b0@zmail02.collab.prod.int.phx2.redhat.com> References: <8ff8fb38-f845-4a6c-b07f-b56eb35686b0@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <4F2E9C8D.6060505@redhat.com> On 02/05/2012 04:40 PM, Oved Ourfalli wrote: > > ----- Original Message ----- >> From: "Maor" >> To: "Itamar Heim" >> Cc: engine-devel at ovirt.org >> Sent: Sunday, February 5, 2012 3:14:35 PM >> Subject: Re: [Engine-devel] SharedRawDisk feature detail >> >> On 02/03/2012 04:59 PM, Itamar Heim wrote: >>> On 02/02/2012 05:15 PM, Maor wrote: >>>> Hello all, >>>> >>>> The shared raw disk feature description can be found under the >>>> following >>>> links: >>>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk >>>> http://www.ovirt.org/wiki/Features/SharedRawDisk >>>> >>>> Please feel free, to share your comments. >>> 1. Affected oVirt projects >>> i'm pretty sure the history data warehouse will need to adapt to >>> this. >>> >>> 2. "The shared raw disk feature should provide the ability to >>> attach >>> disk to many VMs with safe concurrent access," >>> this could be read as if ovirt or vdsm somehow provides a mechanism >>> for >>> safe concurrent access. >>> maybe something like "to multiple VMs that can handle concurrent >>> access >>> to a shared disk without risk of corruption". 
>>> and having just written this - sounds like setting this flag at UI >>> level >>> should include a prompt to the user to make sure they understand >>> that >>> flagging the disk as shared *will* lead to corruption if it is >>> attached >>> to virtual machines which do not support and expect it to be shared >>> with >>> other virtual or physical machines[1] >> I agree, I will change it. >>> 3. "The synchronization/clustering of shared raw disk between VMs >>> will >>> be managed in the file system. " >>> >>> either i don't understand what this mean, or it could be read with >>> a >>> misleading meaning. >> Maybe the following rephrase will be more accurate: "The >> synchronization/clustering of shared raw disk between VMs should be >> based on external independent application which will be synchronized >> with the guest application." >>> 4. VM Pools >>> VM Pools are always based (at least today) on templates, and >>> templates >>> have no shared disks. >>> I'd just block attaching a shared disk to a VM which is part of a >>> pool >>> (unless there is a very interesting use case meriting this) >> If there is no reason to attach shared disk to a VM from pool, maybe >> its >> also not that relevant to attach shared disk to stateless VM. >> Miki? >> > I think there is such a use-case in clustered environments (DB cluster, for example), in which you have several disks that are not shared (OS, applications, etc.), and several disks that are shared (DB disks). > In this case, in order to create this clustered environment, it will be nice if you create a template with the regular disks, create a pool from it, and attach all the VMs in the pool the shared DB disks. > (It would be even nicer if the shared disk will be a part of the template, and when creating VMs from it they will have have a "link" to this shared disk - but I agree that it may be complex so maybe we should leave it aside for now). > > Thoughts? Unless you have a correct 'uniqifying' process ('sysprep' for Windows), this is really not going to work. For example, even Windows sysprep fails at times. We've known for quite some time that it does not make the DTC UUID unique, so when cloning+sysprep'ing the system, you are still left with severe connectivity issues at the DTC RPC level. I therefore assume other applications may suffer from it as well (how do you make a RHEVM instance unique? which fields in the DB, config files, etc. do you have to re-create?). Y. > >>> 5. "Quota has to be taken in consideration, for every new feature >>> that >>> will involve consumption of resources managed by it." >>> >>> I thought quota is not relevant in this feature. >> Why not? >> Quota should be taken in consideration when adding new shared raw >> disk, >> or moving a disk to other domains. We should also notice that shared >> raw >> disk should not consume more quota when sharing the disk with >> different VMs. >> >>> 6. future work - Permissions should be added for disk entity >>> so who can add a shared disk? >> Data Center Administrator or System Administrator will be initialized >> with permissions for creating shared raw disk, or changing shared >> disk >> to be unshared. >> Regarding attach/detach disks to/from VM, I was thinking that for >> phase >> one we will count on the user VM permissions. If user will have >> permissions to create new disks on the VM, he will also have >> permissions >> to attach new shared raw disk to it. 
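As a purely hypothetical sketch of that phase-one rule (every type and permission name here is invented for illustration and is not the engine's actual API), attaching a shared disk would be gated only by the caller's permission on the target VM, which is exactly what the concern quoted next is about:

    // Hypothetical sketch only; User, Vm and PermissionChecker are invented types.
    public class AttachSharedDiskValidator {

        interface PermissionChecker {
            boolean canCreateDiskOnVm(User user, Vm vm);
        }

        // Phase one: reuse the "may create a disk on this VM" permission.
        // Note there is deliberately no check against the shared disk itself,
        // so a user could attach a disk they otherwise have no permission on.
        public static boolean canAttachSharedDisk(User user, Vm vm, PermissionChecker checker) {
            return checker.canCreateDiskOnVm(user, vm);
        }
    }

    // Minimal placeholder types so the sketch is self-contained.
    class User { }
    class Vm { }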
>>> same as for floating disks, i find it hard to imagine a flow in >>> which if >>> someone flagged a disk as shared, suddenly everyone can have access >>> to it. >>> same as my statement of floating disks - I'll spend some more time >>> to >>> reflect on this specific part. >>> >>> [1] an external LUN based disk could be shared with a physical >>> server as >>> well. >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Sun Feb 5 16:59:32 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 05 Feb 2012 17:59:32 +0100 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F2E80BB.4060604@redhat.com> References: <4F2AA89C.7090605@redhat.com> <4F2BF64C.5010602@redhat.com> <4F2E80BB.4060604@redhat.com> Message-ID: <4F2EB574.2060807@redhat.com> On 02/05/2012 02:14 PM, Maor wrote: ... >> 3. "The synchronization/clustering of shared raw disk between VMs will >> be managed in the file system. " >> >> either i don't understand what this mean, or it could be read with a >> misleading meaning. > Maybe the following rephrase will be more accurate: "The > synchronization/clustering of shared raw disk between VMs should be > based on external independent application which will be synchronized > with the guest application." "The synchronization/clustering of shared raw disk between VMs is the responsibility of the guests. Unaware guests will lead to corruption of the shared disk." >> >> 4. VM Pools >> VM Pools are always based (at least today) on templates, and templates >> have no shared disks. >> I'd just block attaching a shared disk to a VM which is part of a pool >> (unless there is a very interesting use case meriting this) > If there is no reason to attach shared disk to a VM from pool, maybe its > also not that relevant to attach shared disk to stateless VM. > Miki? I think pools and stateless are different. I can envision a use case where stateless guests would use a shared disk (say, in read only for same data). >> >> 6. future work - Permissions should be added for disk entity >> so who can add a shared disk? > Data Center Administrator or System Administrator will be initialized > with permissions for creating shared raw disk, or changing shared disk > to be unshared. > Regarding attach/detach disks to/from VM, I was thinking that for phase > one we will count on the user VM permissions. If user will have > permissions to create new disks on the VM, he will also have permissions > to attach new shared raw disk to it. this means they can attach shared disks from other VMs they have no permission on... as i said earlier - need to think about this one some more. From iheim at redhat.com Sun Feb 5 17:00:34 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 05 Feb 2012 18:00:34 +0100 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <8ff8fb38-f845-4a6c-b07f-b56eb35686b0@zmail02.collab.prod.int.phx2.redhat.com> References: <8ff8fb38-f845-4a6c-b07f-b56eb35686b0@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <4F2EB5B2.8050707@redhat.com> On 02/05/2012 03:40 PM, Oved Ourfalli wrote: ... >>> 4. VM Pools >>> VM Pools are always based (at least today) on templates, and >>> templates >>> have no shared disks. 
>>> I'd just block attaching a shared disk to a VM which is part of a >>> pool >>> (unless there is a very interesting use case meriting this) >> If there is no reason to attach shared disk to a VM from pool, maybe >> its >> also not that relevant to attach shared disk to stateless VM. >> Miki? >> > > I think there is such a use-case in clustered environments (DB cluster, for example), in which you have several disks that are not shared (OS, applications, etc.), and several disks that are shared (DB disks). > In this case, in order to create this clustered environment, it will be nice if you create a template with the regular disks, create a pool from it, and attach all the VMs in the pool the shared DB disks. > (It would be even nicer if the shared disk will be a part of the template, and when creating VMs from it they will have have a "link" to this shared disk - but I agree that it may be complex so maybe we should leave it aside for now). > > Thoughts? that we can wait with an interesting enough use case before we pursue this and ignore it for simplification for now. From iheim at redhat.com Sun Feb 5 17:07:39 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 05 Feb 2012 18:07:39 +0100 Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <9d055ec5-3aea-4644-b5a8-859750d84ef3@mkenneth.csb> References: <9d055ec5-3aea-4644-b5a8-859750d84ef3@mkenneth.csb> Message-ID: <4F2EB75B.50104@redhat.com> On 02/05/2012 02:57 PM, Miki Kenneth wrote: ... >>>> Isn't the VM version derived from the version of the cluster on >>>> which it was last edited? >>>> For example: you've created a VM on a cluster v3.0. When it is >>>> running on a v3.2 cluster, is there any reason to change its >>>> version? >>>> When it is edited, then perhaps yes - because it may have >>>> changed/added properties/features that are only applicable to >>>> v3.2. >>>> But until then - let it stay in the same version as it was >>>> created. >>>> (btw, how does this map, if at all, to the '-m' qemu command line >>>> switch?) >>>> Y. >>>> >>> >>> Currently we do not persist the VM version at all, it is derived >>> from >>> the cluster version the VM belongs to (that's why I suggested to >>> save it >>> as part of the OVF so we can be aware of the VM version when >>> exporting/importing a VM etc.). >>> >>> The VM does not have to be edited to be influenced by the cluster >>> version. For example if you start a VM on 3.1 cluster you get the >>> stable >>> device address feature with no manual editing. >>> >>> Livnat >>> > However, I do agree with Yaniv that changing the VM version "under the hood" is a bit problematic. Version is a parameter associated with create/update operation, and less with Run command. but the engine currently has no logic to detect the need to increase the emulated machine to support feature X. the engine currently does not save this parameter at VM level. it will also need to compare it to the list of supported emulated machines at the cluster, and prevent running the VM if there isn't a match. it also increases the matrix of possible emulated machines being run on different versions of hypervisor to N*cluster_levels, instead of just the number of cluster levels. plus, if a cluster is increased to a new version of hosts which doesn't support an older emulated machine level - user will need to upgrade all VMs one by one? 
(or will engine block upgrading cluster level if the new cluster level doesn't have an emulated machine in use by one of the virtual machines) it also means engine needs to handle validation logic for this field when exporting/importing (point of this discussion), as well as just moving a VM between clusters. so before introducing all this logic - were issues observed where changing the cluster level (i.e., -M at host level) resulted in problematic changes at guest level worth all of these? From abaron at redhat.com Mon Feb 6 07:42:59 2012 From: abaron at redhat.com (Ayal Baron) Date: Mon, 06 Feb 2012 02:42:59 -0500 (EST) Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <4F2EB75B.50104@redhat.com> Message-ID: <5f2c99b7-38c8-42cd-980d-206a147d12f6@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 02/05/2012 02:57 PM, Miki Kenneth wrote: > ... > > >>>> Isn't the VM version derived from the version of the cluster on > >>>> which it was last edited? > >>>> For example: you've created a VM on a cluster v3.0. When it is > >>>> running on a v3.2 cluster, is there any reason to change its > >>>> version? > >>>> When it is edited, then perhaps yes - because it may have > >>>> changed/added properties/features that are only applicable to > >>>> v3.2. > >>>> But until then - let it stay in the same version as it was > >>>> created. > >>>> (btw, how does this map, if at all, to the '-m' qemu command > >>>> line > >>>> switch?) > >>>> Y. > >>>> > >>> > >>> Currently we do not persist the VM version at all, it is derived > >>> from > >>> the cluster version the VM belongs to (that's why I suggested to > >>> save it > >>> as part of the OVF so we can be aware of the VM version when > >>> exporting/importing a VM etc.). > >>> > >>> The VM does not have to be edited to be influenced by the cluster > >>> version. For example if you start a VM on 3.1 cluster you get the > >>> stable > >>> device address feature with no manual editing. > >>> > >>> Livnat > >>> > > However, I do agree with Yaniv that changing the VM version "under > > the hood" is a bit problematic. Version is a parameter associated > > with create/update operation, and less with Run command. It's not under the hood, user effectively chose to change it when she changed the cluster level. Going forward, we could check the version before running the VM and then warning the user (so that the change would take effect per VM and not per cluster) but that would be annoying and to mitigate that, we would need to add a checkbox when changing the cluster level "Automatically upgrade VMs" or something (to keep current simple behaviour). > > but the engine currently has no logic to detect the need to increase > the > emulated machine to support feature X. > the engine currently does not save this parameter at VM level. > it will also need to compare it to the list of supported emulated > machines at the cluster, and prevent running the VM if there isn't a > match. > it also increases the matrix of possible emulated machines being run > on > different versions of hypervisor to N*cluster_levels, instead of just > the number of cluster levels. > plus, if a cluster is increased to a new version of hosts which > doesn't > support an older emulated machine level - user will need to upgrade > all > VMs one by one? 
> (or will engine block upgrading cluster level if the new cluster > level > doesn't have an emulated machine in use by one of the virtual > machines) > it also means engine needs to handle validation logic for this field > when exporting/importing (point of this discussion), as well as just > moving a VM between clusters. > > so before introducing all this logic - were issues observed where > changing the cluster level (i.e., -M at host level) resulted in > problematic changes at guest level worth all of these? > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkolesni at redhat.com Mon Feb 6 07:59:57 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Mon, 06 Feb 2012 02:59:57 -0500 (EST) Subject: [Engine-devel] Simplifying our POJOs In-Reply-To: <79730f63-75a4-4d82-892e-9d49fb7a954e@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: ----- Original Message ----- > > > ----- Original Message ----- > > From: "Livnat Peer" > > To: "Yair Zaslavsky" > > Cc: engine-devel at ovirt.org > > Sent: Wednesday, February 1, 2012 8:16:34 AM > > Subject: Re: [Engine-devel] Simplifying our POJOs > > > > On 01/02/12 09:13, Yair Zaslavsky wrote: > > > On 02/01/2012 08:59 AM, Livnat Peer wrote: > > >> On 01/02/12 08:03, Mike Kolesnik wrote: > > >>> > > >>> ----- Original Message ----- > > >>>> On 01/31/2012 12:45 PM, Doron Fediuck wrote: > > >>>>> On 31/01/12 12:39, Livnat Peer wrote: > > >>>>>> On 31/01/12 12:02, Mike Kolesnik wrote: > > >>>>>>> Hi, > > >>>>>>> > > >>>>>>> Today many POJO > > >>>>>>> s > > >>>>>>> are used throughout the system to convey data: > > >>>>>>> > > >>>>>>> * Parameters - To send data to commands. > > >>>>>>> * Business Entities - To transfer data in the > > >>>>>>> parameters > > >>>>>>> & > > >>>>>>> to/from > > >>>>>>> the DB. > > >>>>>>> > > >>>>>>> These POJOs are (usually) very verbose and full of > > >>>>>>> boilerplate > > >>>>>>> code > > >>>>>>> . > > >>>>>>> > > >>>>>>> This, in turn, reduces their readability and > > >>>>>>> maintainability > > >>>>>>> for > > >>>>>>> a > > >>>>>>> couple of reasons (that I can think of): > > >>>>>>> > > >>>>>>> * It's hard to know what does what: > > >>>>>>> o Who participates in equals/hashCode? > > >>>>>>> o What fields are printed in toString? > > >>>>>>> * Consistency is problematic: > > >>>>>>> o A field may be part of equals but not hashCode, or > > >>>>>>> vice > > >>>>>>> versa. > > >>>>>>> o This breaks the Object.hashCode() > > >>>>>>> > > >>>>>>> contract! > > >>>>>>> * Adding/Removing fields take more time since you need to > > >>>>>>> synchronize > > >>>>>>> the change to all boilerplate methods. > > >>>>>>> o Again, we're facing the consistency problem. > > >>>>>>> * These simple classes tend to be very long and not very > > >>>>>>> readable. > > >>>>>>> * Boilerplate code makes it harder to find out which > > >>>>>>> methods > > >>>>>>> *don't* > > >>>>>>> behave the default way. > > >>>>>>> * Javadoc, if existent, is usually meaningless (but you > > >>>>>>> might > > >>>>>>> see some > > >>>>>>> banal documentation that doesn't add any real value). > > >>>>>>> * Our existing classes are not up to standard! > > >>>>>>> > > >>>>>>> > > >>>>>>> So what can be done to remedy the situation? > > >>>>>>> > > >>>>>>> We could, of course, try to simplify the classes as much as > > >>>>>>> we > > >>>>>>> can and > > >>>>>>> maybe address some of the issues. 
> > >>>>>>> This won't alleviate the boilerplate code problem > > >>>>>>> altogether, > > >>>>>>> though. > > >>>>>>> > > >>>>>>> We could write annotations to do some of the things for us > > >>>>>>> automatically. > > >>>>>>> The easiest approach would be runtime-based, and would > > >>>>>>> hinder > > >>>>>>> performance. > > >>>>>>> This also means we need to maintain this "infrastructure" > > >>>>>>> and > > >>>>>>> all > > >>>>>>> the > > >>>>>>> implications of such a decision. > > >>>>>>> > > >>>>>>> > > >>>>>>> Luckily, there is a much easier solution: Someone else > > >>>>>>> already > > >>>>>>> did it! > > >>>>>>> > > >>>>>>> Check out Project Lombok: http://projectlombok.org > > >>>>>>> What Lombok gives us, among some other things, is a way to > > >>>>>>> greatly > > >>>>>>> simplify our POJOs by using annotations to get the > > >>>>>>> boilerplate > > >>>>>>> code > > >>>>>>> automatically generated. > > >>>>>>> This means we get the benefit of annotations which would > > >>>>>>> simplify > > >>>>>>> the > > >>>>>>> code a whole lot, while not imposing a performance cost > > >>>>>>> (since > > >>>>>>> the > > >>>>>>> boilerplate code is generated during compilation). > > >>>>>>> However, it's also possible to create the methods yourself > > >>>>>>> if > > >>>>>>> you > > >>>>>>> want > > >>>>>>> them to behave differently. > > >>>>>>> Outside the POJO itself, you would see it as you would > > >>>>>>> always > > >>>>>>> see > > >>>>>>> it. > > >>>>>>> > > >>>>>>> So what are the downsides to this approach? > > >>>>>>> > > >>>>>>> * First of all, Lombok provides also some other > > >>>>>>> capabilities > > >>>>>>> which I'm > > >>>>>>> not sure are required/wanted at this time. > > >>>>>>> o That's why I propose we use it for commons project, > > >>>>>>> and > > >>>>>>> make use > > >>>>>>> of it's POJO-related annotations ONLY. > > >>>>>>> * There might be a problem debugging the code since it's > > >>>>>>> auto-generated. > > >>>>>>> o I think this is rather negligible, since usually > > >>>>>>> you > > >>>>>>> don't debug > > >>>>>>> POJOs anyway. > > >>>>>>> * There might be a problem if the auto-generated code > > >>>>>>> throws an > > >>>>>>> Exception. > > >>>>>>> o As before, I'm rather sure this is an edge-case > > >>>>>>> which > > >>>>>>> we > > >>>>>>> usually > > >>>>>>> won't hit (if at all). > > >>>>>>> > > >>>>>>> > > >>>>>>> Even given these possible downsides, I think that we would > > >>>>>>> benefit > > >>>>>>> greatly if we would introduce this library. > > >>>>>>> > > >>>>>>> If you have any questions, you're welcome to study out the > > >>>>>>> project site > > >>>>>>> which has very thorough documentation: > > >>>>>>> http://projectlombok.org > > >>>>>>> > > >>>>>>> Your thoughts on the matter? > > >>>>>>> > > >>>>>> > > >>>>>> - I think an example of before/after pojo would help > > >>>>>> demonstrating > > >>>>>> how > > >>>>>> good the framework is. > > >>>>>> > > >>>>>> - Would it work when adding JPA annotations? > > >>>> I suspect that yes (needs to be checked) > > >>>> Will it work with GWT (if we create new business entity that > > >>>> needs to > > >>>> be > > >>>> exposed to GWT guys) ? > > >>> > > >>> As it is stated on the site, it supports GWT. > > >>> > > >> > > >> Since this package is required only during compile time it is > > >> relatively > > >> easy to push it in. > > >> Need to make sure it is working nice with debugging and give it > > >> a > > >> try. > > >> > > >> I like this package, > > >> +1 from me. 
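Since a before/after POJO example was asked for earlier in the thread, here is a minimal, hypothetical sketch (the entity name and fields are invented and not real engine classes) of the Lombok side; the hand-written equivalent would also need getters, setters, equals, hashCode and toString, roughly sixty extra lines that @Data generates at compile time, and the constructor annotations cover the empty/full constructors discussed just below:

    import lombok.AllArgsConstructor;
    import lombok.Data;
    import lombok.NoArgsConstructor;

    // Hypothetical entity for illustration only.
    @Data                // generates getters, setters, equals, hashCode, toString
    @NoArgsConstructor   // empty constructor, for frameworks that require one
    @AllArgsConstructor  // constructor taking all fields
    public class DiskImageStub {
        private String id;
        private String description;
        private long sizeInGB;
    }

Usage is unchanged from a hand-written POJO (new DiskImageStub().setDescription("boot disk") still compiles), and any of the generated methods can still be written by hand when different behaviour is needed, in which case Lombok skips generating it.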
> > >> > > > Another issue to check - (I'm sure it does, but still) - > > > Are empty CTORs generated as well? (There is a long debate for > > > POJOs > > > that contain X fields whether they should have an empty CTOR, as > > > usage > > > of empty CTOR may yield to potential bugs (logically speaking) of > > > "partial state") - Unfortunately, some frameworks require > > > existence > > > of > > > empty CTOR (I admit, still haven't look at the site thoroughly, > > > so > > > I'm > > > just sharing here thoughts of what should we check for). > > > > > > > > > Yair > > > > > > > It seems like you can define what ever you like - > > @NoArgsConstructor, > > @RequiredArgsConstructor > > @AllArgsConstructor > > I am keeping an eye on project lombok for a good while and I really > like it's approach, but I have never seen it in a production app so > far. Could be interesting to give it a try! > > Just one more thing I would like to know about annotations: some > frameworks (jaxb for example) require you to place the annotations > on the getters (creating a hell of annotations). Fortunately the > model classes are not serialized by the rest api as far as I know, > but would it work together with lombok? Since we don't use JAXB annotations any more that's not an issue. However, you could keep the getters with annotations in place if you need and lombok will not generate them for you. > > Btw... Vojtech has a similar project to simplify resource file > generations in the frontend. > http://code.google.com/p/genftw/ > > Laszlo > > > > > Livnat > > > > >> > > >>>>>> > > >>>>>>> Regards, > > >>>>>>> Mike > > >>>>>>> > > >>>>> > > >>>>> Watching the demo it looks like we'll get less code, which in > > >>>>> many > > >>>>> cases is a good thing. > > >>>>> What I'm concerned about is traceability; or- how can we > > >>>>> track > > >>>>> issues coming from the field > > >>>>> when function calls and line numbers in the stack trace will > > >>>>> not > > >>>>> match the code we know. > > >>>>> > > >>>> > > >>>> _______________________________________________ > > >>>> Engine-devel mailing list > > >>>> Engine-devel at ovirt.org > > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > > >>>> > > >>> _______________________________________________ > > >>> Engine-devel mailing list > > >>> Engine-devel at ovirt.org > > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > > >> > > > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From rgolan at redhat.com Mon Feb 6 14:47:11 2012 From: rgolan at redhat.com (Roy Golan) Date: Mon, 06 Feb 2012 09:47:11 -0500 (EST) Subject: [Engine-devel] bridgless networks In-Reply-To: <46b99319-c1f0-4b23-b34a-a1c6bf68488a@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4cd20748-a940-4f77-b491-7c8a75ac8a23@zmail01.collab.prod.int.phx2.redhat.com> Hi All Lately I've been working on a design of bridge-less network feature in the engine. You can see it in http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks Please review the design. Note, there are some open issues, you can find in the relevant section. Reviews and comments are very welcome. 
Thanks, Roy From agl at us.ibm.com Mon Feb 6 14:49:33 2012 From: agl at us.ibm.com (Adam Litke) Date: Mon, 6 Feb 2012 08:49:33 -0600 Subject: [Engine-devel] Eclipse IDE setup Message-ID: <20120206144933.GF3026@us.ibm.com> Hi all, I am trying to set up an eclipse development environment for ovirt-engine and am running into a stubborn problem with missing classes. I have followed the directions for importing the Maven projects as written here: http://ovirt.org/wiki/Building_Ovirt_Engine/IDE The projects are able to be imported but I see lots of errors about missing imports such as: import org.ovirt.engine.api.model.* import org.ovirt.engine.core.common.* I should have a complete ovirt-engine source repository (I cloned the ovirt-engine git repo). Has anyone seen this problem before? Can you offer any suggestions to help me resolve it? Thanks! -- Adam Litke IBM Linux Technology Center From ovedo at redhat.com Mon Feb 6 15:01:57 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Mon, 06 Feb 2012 10:01:57 -0500 (EST) Subject: [Engine-devel] bridgless networks In-Reply-To: <4cd20748-a940-4f77-b491-7c8a75ac8a23@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <3650d693-2f71-4986-8575-44494b700a30@zmail02.collab.prod.int.phx2.redhat.com> I think that in the UI we should automatically check the "bridged" for VM networks, and uncheck it for non-VM ones. In the future, when we'll support more network types that can run VMs without a bridge (VEPA/VNLink/SRIOV) then we would change this logic. As for the open issues: [1] if a network is checked with "allowToRunVms" and an underlying host will need an un-bridged(SRIOV...) network to fulfil that how do we treat that during monitoring? we should be able to distinguish on interfaces that can run vm with/without bridge and deduce that cluster compatibility didn't break [oved] We will be able to make this distinction. Less relevant for today, but will be relevant in the future. [2] if, for some reason an admin wants a non VM network to be bridged, should we allow it? [oved] I would allow it. ----- Original Message ----- > From: "Roy Golan" > To: engine-devel at ovirt.org > Sent: Monday, February 6, 2012 4:47:11 PM > Subject: [Engine-devel] bridgless networks > > Hi All > > Lately I've been working on a design of bridge-less network feature > in the engine. > You can see it in > http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks > > Please review the design. > Note, there are some open issues, you can find in the relevant > section. > Reviews and comments are very welcome. > > Thanks, > Roy > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From sanjal at redhat.com Mon Feb 6 15:06:23 2012 From: sanjal at redhat.com (Shireesh Anjal) Date: Mon, 06 Feb 2012 20:36:23 +0530 Subject: [Engine-devel] Fwd: Re: Moving to Jboss AS7 In-Reply-To: <803c568c-626c-4bd9-8287-60c4601e815a@zmail02.collab.prod.int.phx2.redhat.com> References: <803c568c-626c-4bd9-8287-60c4601e815a@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <4F2FEC6F.6000406@redhat.com> Hi Oved, After rebasing our POC code with upstream, I tried to follow the below steps for moving to JBoss AS7. Since a newer version of JBoss (CR1) is available, I downloaded that instead of Beta1 assuming that it will be more stable and will contain mostly bug-fixes and hence should work. 
However it turns out that the format of standalone.xml itself has changed and that created/modified by our "deploy" profile doesn't work. I had to download the "Beta1" tarball to get the deployment to work. This is just FYI. Please ignore in case you are already aware of this :) Thanks, Shireesh -------- Original Message -------- Subject: Re: [Engine-devel] Moving to Jboss AS7 Date: Sun, 15 Jan 2012 04:05:33 -0500 (EST) From: Oved Ourfalli To: engine-devel at ovirt.org Hey all, The patches are now pushed! So please, if you fetch from now on you'll have to perform the actions below. The wiki was updated as well. Short list of steps: 1. Download jboss 7.1.0 Beta1b (wget http://download.jboss.org/jbossas/7.1/jboss-as-7.1.0.Beta1b/jboss-as-7.1.0.Beta1b.tar.gz) 2. Fetch the latest changes from our git repository 3. Change the Jboss home to the new path, both in the JBOSS_HOME environment variable, and in maven settings file (~/.m2/settings.xml) 4. Build the engine, with profiles "dep" and "setup". This will put all the proper configuration files, postgresql driver and make all the other needed changes in Jboss in order to make it work properly mvn clean install -Psetup,dep ....... 5. In order to run Jboss go to JBOSS_HOME/bin and run ./standalone.sh A more descriptive set of steps and notes can be found in my previous E-mail below. I'm here if you need any help. Thank you, Oved ----- Original Message ----- > From: "Oved Ourfalli" > To: engine-devel at ovirt.org > Sent: Wednesday, January 11, 2012 2:57:19 PM > Subject: Moving to Jboss AS7 > > Hey all, > > The code changes required to make the engine work on Jboss AS7 will > soon be push > It will, of course, require you to install it, and start working with > it :-) > > A separate E-mail will be sent to notify you all once pushed, and > then you'll have to perform the following steps: > > 1. Download jboss 7.1.0 Beta1b > (http://download.jboss.org/jbossas/7.1/jboss-as-7.1.0.Beta1b/jboss-as-7.1.0.Beta1b.tar.gz) > - There is a newer version, but it has issues in the REST-API, so we > decided to work with the beta version until a proper fix will be > released. > 2. Fetch the latest changes from our git repository > 3. Change the Jboss home to the new path, both in the JBOSS_HOME > environment variable, and in maven settings file > (~/.m2/settings.xml) > 4. Build the engine, with profiles "dep" and "setup". This will put > all the proper configuration files, postgresql driver and make all > the other needed changes in Jboss in order to make it work properly > mvn clean install -Psetup,dep ....... > 5. In order to run Jboss go to JBOSS_HOME/bin and run ./standalone.sh > 6. Look inside the JBOSS_HOME/bin/standalone.conf file in order to > enable jboss debugging (just uncomment the line > JAVA_OPTS="$JAVA_OPTS > -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n") > 7. If you have a krb5.conf file you are working with, put it in > JBOSS_HOME/standalone/configuration directory > 8. Run Jboss (and be impressed by the startup speed!) > > The above will be also added to the wiki page for building the engine > as soon as the patches will be pushed upstream. > > Some facts about Jboss 7: > 1. It supports both standalone deployment, and domain deployment. We > use the standalone one. > 2. 
Stuff related to the standalone mode can be found in the > JBOSS_HOME/standalone directory: > * configuration - contains the main configuration file called > standalone.xml file, plus some other configuration files > * deployments - that's where your deployments should be. When adding > a new one, don't forget to create a file called > ".dodeploy" in order to make it deploy. > * log - that's where the log files are written (unless stated > differently in the standalone.xml file). > 3. The different modules that come with Jboss 7 are located in > JBOSS_HOME/modules directory > * No more common/lib directory. > * Every module has a module.xml file which contains it's > dependencies in other modules, the jars that are part of the > module, and etc. > * In order to use a dependency from there you have to add > "Dependencies" section to your manifest file (do git grep > "Dependencies" to take a look at some examples done in the engine > source code). > 4. Useful links: > * Documentation - > https://docs.jboss.org/author/display/AS7/Documentation > * Class loading changes - > https://docs.jboss.org/author/display/AS7/Class+Loading+in+AS7 > * The Jboss community - http://community.jboss.org/wiki > > > Please send issues/feedback to this mailing list. > > Thank you all, > Oved _______________________________________________ Engine-devel mailing list Engine-devel at ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel From ovedo at redhat.com Mon Feb 6 15:10:07 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Mon, 06 Feb 2012 10:10:07 -0500 (EST) Subject: [Engine-devel] Moving to Jboss AS7 In-Reply-To: <4F2FEC6F.6000406@redhat.com> Message-ID: <43730908-f3f0-4b7f-be76-cfbaa78e16d4@zmail02.collab.prod.int.phx2.redhat.com> Hey, Yes, I'm aware of that issue. The reason we chose to work with Beta1b is due to several regressions in resteasy 2.3.1 (Beta1b ships with resteasy 2.2.3). In the next AS version those issues are supposed to be fixed, and then we'll work with it upstream. If you are curios in understanding the standalone.xml changes you can take a look at this patch: http://gerrit.ovirt.org/#change,715 It contains both changes in standalone.xml, and removal of the jboss naming jar from our repo (it was there due to an issue in LDAP on beta1b, that was fixed in CR1b). Thank you, Oved ----- Original Message ----- > From: "Shireesh Anjal" > To: "Oved Ourfalli" , engine-devel at ovirt.org > Cc: rhev-gluster at redhat.com > Sent: Monday, February 6, 2012 5:06:23 PM > Subject: Fwd: Re: [Engine-devel] Moving to Jboss AS7 > > Hi Oved, > > After rebasing our POC code with upstream, I tried to follow the > below > steps for moving to JBoss AS7. Since a newer version of JBoss (CR1) > is > available, I downloaded that instead of Beta1 assuming that it will > be > more stable and will contain mostly bug-fixes and hence should work. > > However it turns out that the format of standalone.xml itself has > changed and that created/modified by our "deploy" profile doesn't > work. > I had to download the "Beta1" tarball to get the deployment to work. > > This is just FYI. Please ignore in case you are already aware of this > :) > > Thanks, > Shireesh > > -------- Original Message -------- > Subject: Re: [Engine-devel] Moving to Jboss AS7 > Date: Sun, 15 Jan 2012 04:05:33 -0500 (EST) > From: Oved Ourfalli > To: engine-devel at ovirt.org > > > > Hey all, > The patches are now pushed! > > So please, if you fetch from now on you'll have to perform the > actions below. > The wiki was updated as well. 
> > Short list of steps: > 1. Download jboss 7.1.0 Beta1b (wget > http://download.jboss.org/jbossas/7.1/jboss-as-7.1.0.Beta1b/jboss-as-7.1.0.Beta1b.tar.gz) > 2. Fetch the latest changes from our git repository > 3. Change the Jboss home to the new path, both in the JBOSS_HOME > environment variable, and in maven settings file > (~/.m2/settings.xml) > 4. Build the engine, with profiles "dep" and "setup". This will put > all the proper configuration files, postgresql driver and make all > the other needed changes in Jboss in order to make it work properly > mvn clean install -Psetup,dep ....... > 5. In order to run Jboss go to JBOSS_HOME/bin and run ./standalone.sh > > A more descriptive set of steps and notes can be found in my previous > E-mail below. > > I'm here if you need any help. > > Thank you, > Oved > > ----- Original Message ----- > > From: "Oved Ourfalli" > > To: engine-devel at ovirt.org > > Sent: Wednesday, January 11, 2012 2:57:19 PM > > Subject: Moving to Jboss AS7 > > > > Hey all, > > > > The code changes required to make the engine work on Jboss AS7 > > will > > soon be push > > It will, of course, require you to install it, and start working > > with > > it :-) > > > > A separate E-mail will be sent to notify you all once pushed, and > > then you'll have to perform the following steps: > > > > 1. Download jboss 7.1.0 Beta1b > > (http://download.jboss.org/jbossas/7.1/jboss-as-7.1.0.Beta1b/jboss-as-7.1.0.Beta1b.tar.gz) > > - There is a newer version, but it has issues in the REST-API, so > > we > > decided to work with the beta version until a proper fix will be > > released. > > 2. Fetch the latest changes from our git repository > > 3. Change the Jboss home to the new path, both in the JBOSS_HOME > > environment variable, and in maven settings file > > (~/.m2/settings.xml) > > 4. Build the engine, with profiles "dep" and "setup". This will > > put > > all the proper configuration files, postgresql driver and make all > > the other needed changes in Jboss in order to make it work > > properly > > mvn clean install -Psetup,dep ....... > > 5. In order to run Jboss go to JBOSS_HOME/bin and run > > ./standalone.sh > > 6. Look inside the JBOSS_HOME/bin/standalone.conf file in order to > > enable jboss debugging (just uncomment the line > > JAVA_OPTS="$JAVA_OPTS > > -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n") > > 7. If you have a krb5.conf file you are working with, put it in > > JBOSS_HOME/standalone/configuration directory > > 8. Run Jboss (and be impressed by the startup speed!) > > > > The above will be also added to the wiki page for building the > > engine > > as soon as the patches will be pushed upstream. > > > > Some facts about Jboss 7: > > 1. It supports both standalone deployment, and domain deployment. > > We > > use the standalone one. > > 2. Stuff related to the standalone mode can be found in the > > JBOSS_HOME/standalone directory: > > * configuration - contains the main configuration file called > > standalone.xml file, plus some other configuration files > > * deployments - that's where your deployments should be. When > > adding > > a new one, don't forget to create a file called > > ".dodeploy" in order to make it deploy. > > * log - that's where the log files are written (unless stated > > differently in the standalone.xml file). > > 3. The different modules that come with Jboss 7 are located in > > JBOSS_HOME/modules directory > > * No more common/lib directory. 
> > * Every module has a module.xml file which contains it's > > dependencies in other modules, the jars that are part of the > > module, and etc. > > * In order to use a dependency from there you have to add > > "Dependencies" section to your manifest file (do git grep > > "Dependencies" to take a look at some examples done in the engine > > source code). > > 4. Useful links: > > * Documentation - > > https://docs.jboss.org/author/display/AS7/Documentation > > * Class loading changes - > > https://docs.jboss.org/author/display/AS7/Class+Loading+in+AS7 > > * The Jboss community - http://community.jboss.org/wiki > > > > > > Please send issues/feedback to this mailing list. > > > > Thank you all, > > Oved > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > > From ykaul at redhat.com Mon Feb 6 15:10:16 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 06 Feb 2012 17:10:16 +0200 Subject: [Engine-devel] bridgless networks In-Reply-To: <4cd20748-a940-4f77-b491-7c8a75ac8a23@zmail01.collab.prod.int.phx2.redhat.com> References: <4cd20748-a940-4f77-b491-7c8a75ac8a23@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F2FED58.7010209@redhat.com> On 02/06/2012 04:47 PM, Roy Golan wrote: > Hi All > > Lately I've been working on a design of bridge-less network feature in the engine. > You can see it in http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks > > Please review the design. > Note, there are some open issues, you can find in the relevant section. > Reviews and comments are very welcome. > > Thanks, > Roy > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel 0. Fixed some typos in the wiki. There are others I couldn't understand. 1. "Also looking forward a capable of running VMs nics should be bridged on regular nics and un-bridged in case of dedicated special nics" - don't understand what it means (English-wise too). 2. "UI shall user shall" . 3. Not sure the REST API is complete. How is the property set on the logical network (upon creation or later) ? 4. So, if there's no bridge on my bond, can I now use the bond methods that are incompatible with bridges and therefore we did not allow them until now? Y. From lhornyak at redhat.com Mon Feb 6 15:58:13 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Mon, 06 Feb 2012 10:58:13 -0500 (EST) Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <20120206144933.GF3026@us.ibm.com> Message-ID: Hi Adam! Please check if workspace maven resolution is enabled, and run a maven build with install. If it is still broken, then there must be a bad dependency in the pom.xml-s... it happens :-( Laszlo ----- Original Message ----- > From: "Adam Litke" > To: engine-devel at ovirt.org > Sent: Monday, February 6, 2012 3:49:33 PM > Subject: [Engine-devel] Eclipse IDE setup > > Hi all, > > I am trying to set up an eclipse development environment for > ovirt-engine and am > running into a stubborn problem with missing classes. 
I have > followed the > directions for importing the Maven projects as written here: > http://ovirt.org/wiki/Building_Ovirt_Engine/IDE > > The projects are able to be imported but I see lots of errors about > missing > imports such as: > > import org.ovirt.engine.api.model.* > import org.ovirt.engine.core.common.* > > I should have a complete ovirt-engine source repository (I cloned the > ovirt-engine git repo). Has anyone seen this problem before? Can > you offer any > suggestions to help me resolve it? Thanks! > > -- > Adam Litke > IBM Linux Technology Center > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From agl at us.ibm.com Mon Feb 6 17:18:34 2012 From: agl at us.ibm.com (Adam Litke) Date: Mon, 6 Feb 2012 11:18:34 -0600 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: References: <20120206144933.GF3026@us.ibm.com> Message-ID: <20120206171834.GH3026@us.ibm.com> On Mon, Feb 06, 2012 at 10:58:13AM -0500, Laszlo Hornyak wrote: > Hi Adam! > > Please check if workspace maven resolution is enabled, and run a maven build with install. > If it is still broken, then there must be a bad dependency in the pom.xml-s... it happens :-( Thanks for your suggestions. Maven resolution is enabled. Then I tried to build on the command line using mvn directly but got the same errors as in eclipse. Next, I tried to checkout out the 3.0 branch (assuming that the build should be more stable) and I got a different set of compilation errors. This brings up a few questions: 1.) Which jdk should I use? I am currently using OpenJDK /usr/lib/jvm/java-1.6.0-openjdk/bin/java -version java version "1.6.0_23" OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1) OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) 2,) Does this need a Fedora/RH system to compile? 3.) My guess is that others are able to compile oVirt even if there are bad dependencies in the pom.xml files. Otherwise they would already be fixed. How do others fix the dependencies on their local systems. Thanks for the help! > > Laszlo > > ----- Original Message ----- > > From: "Adam Litke" > > To: engine-devel at ovirt.org > > Sent: Monday, February 6, 2012 3:49:33 PM > > Subject: [Engine-devel] Eclipse IDE setup > > > > Hi all, > > > > I am trying to set up an eclipse development environment for > > ovirt-engine and am > > running into a stubborn problem with missing classes. I have > > followed the > > directions for importing the Maven projects as written here: > > http://ovirt.org/wiki/Building_Ovirt_Engine/IDE > > > > The projects are able to be imported but I see lots of errors about > > missing > > imports such as: > > > > import org.ovirt.engine.api.model.* > > import org.ovirt.engine.core.common.* > > > > I should have a complete ovirt-engine source repository (I cloned the > > ovirt-engine git repo). Has anyone seen this problem before? Can > > you offer any > > suggestions to help me resolve it? Thanks! 
> > > > -- > > Adam Litke > > IBM Linux Technology Center > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > -- Adam Litke IBM Linux Technology Center From jchoate at redhat.com Mon Feb 6 17:55:54 2012 From: jchoate at redhat.com (Jon Choate) Date: Mon, 06 Feb 2012 12:55:54 -0500 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <20120206171834.GH3026@us.ibm.com> References: <20120206144933.GF3026@us.ibm.com> <20120206171834.GH3026@us.ibm.com> Message-ID: <4F30142A.9040901@redhat.com> On 02/06/2012 12:18 PM, Adam Litke wrote: > On Mon, Feb 06, 2012 at 10:58:13AM -0500, Laszlo Hornyak wrote: >> Hi Adam! >> >> Please check if workspace maven resolution is enabled, and run a maven build with install. >> If it is still broken, then there must be a bad dependency in the pom.xml-s... it happens :-( > Thanks for your suggestions. Maven resolution is enabled. Then I tried to > build on the command line using mvn directly but got the same errors as in > eclipse. Next, I tried to checkout out the 3.0 branch (assuming that the build > should be more stable) and I got a different set of compilation errors. > > This brings up a few questions: > > 1.) Which jdk should I use? I am currently using OpenJDK > > /usr/lib/jvm/java-1.6.0-openjdk/bin/java -version > java version "1.6.0_23" > OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1) > OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) I am using a very similar jdk but on Fedora - #> java -version java version "1.6.0_22" OpenJDK Runtime Environment (IcedTea6 1.10.4) (fedora-60.1.10.4.fc15-x86_64) OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) I can check what I am using on my Ubuntu machine when I get home. Have you put the correct settings in ~/.m2/settings.xml? http://www.ovirt.org/wiki/Building_oVirt_engine#Maven_personal_settings Jon > 2,) Does this need a Fedora/RH system to compile? > > 3.) My guess is that others are able to compile oVirt even if there are bad > dependencies in the pom.xml files. Otherwise they would already be fixed. How > do others fix the dependencies on their local systems. > > Thanks for the help! > >> Laszlo >> >> ----- Original Message ----- >>> From: "Adam Litke" >>> To: engine-devel at ovirt.org >>> Sent: Monday, February 6, 2012 3:49:33 PM >>> Subject: [Engine-devel] Eclipse IDE setup >>> >>> Hi all, >>> >>> I am trying to set up an eclipse development environment for >>> ovirt-engine and am >>> running into a stubborn problem with missing classes. I have >>> followed the >>> directions for importing the Maven projects as written here: >>> http://ovirt.org/wiki/Building_Ovirt_Engine/IDE >>> >>> The projects are able to be imported but I see lots of errors about >>> missing >>> imports such as: >>> >>> import org.ovirt.engine.api.model.* >>> import org.ovirt.engine.core.common.* >>> >>> I should have a complete ovirt-engine source repository (I cloned the >>> ovirt-engine git repo). Has anyone seen this problem before? Can >>> you offer any >>> suggestions to help me resolve it? Thanks! >>> >>> -- >>> Adam Litke >>> IBM Linux Technology Center >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lpeer at redhat.com Mon Feb 6 18:28:19 2012 From: lpeer at redhat.com (Livnat Peer) Date: Mon, 06 Feb 2012 20:28:19 +0200 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <20120206171834.GH3026@us.ibm.com> References: <20120206144933.GF3026@us.ibm.com> <20120206171834.GH3026@us.ibm.com> Message-ID: <4F301BC3.1070305@redhat.com> On 06/02/12 19:18, Adam Litke wrote: > On Mon, Feb 06, 2012 at 10:58:13AM -0500, Laszlo Hornyak wrote: >> Hi Adam! >> >> Please check if workspace maven resolution is enabled, and run a maven build with install. >> If it is still broken, then there must be a bad dependency in the pom.xml-s... it happens :-( > > Thanks for your suggestions. Maven resolution is enabled. Then I tried to > build on the command line using mvn directly but got the same errors as in > eclipse. Next, I tried to checkout out the 3.0 branch (assuming that the build > should be more stable) and I got a different set of compilation errors. > Hi Adam, > This brings up a few questions: > > 1.) Which jdk should I use? I am currently using OpenJDK > > /usr/lib/jvm/java-1.6.0-openjdk/bin/java -version > java version "1.6.0_23" > OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1) > OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) > you are using the right JDK. > 2,) Does this need a Fedora/RH system to compile? The engine works on Fedora, RHEL, Ubuntu Gentoo and should work on any other Linux based operating system (Java is platform agnostic). > > 3.) My guess is that others are able to compile oVirt even if there are bad > dependencies in the pom.xml files. Otherwise they would already be fixed. How > do others fix the dependencies on their local systems. > There should not be any local issues, let's try to figure out what the issues are. The errors are probably not related to eclipse because you have compilation errors from the command line as well. I would start by compiling the engine and api with no tests and no UI: Run from the command line - 1. $ovirt_engine_home> mvn clean 2. $ovirt_engine_home> mvn install -DskipTests What is the result of the above two? Livnat > Thanks for the help! > >> >> Laszlo >> >> ----- Original Message ----- >>> From: "Adam Litke" >>> To: engine-devel at ovirt.org >>> Sent: Monday, February 6, 2012 3:49:33 PM >>> Subject: [Engine-devel] Eclipse IDE setup >>> >>> Hi all, >>> >>> I am trying to set up an eclipse development environment for >>> ovirt-engine and am >>> running into a stubborn problem with missing classes. I have >>> followed the >>> directions for importing the Maven projects as written here: >>> http://ovirt.org/wiki/Building_Ovirt_Engine/IDE >>> >>> The projects are able to be imported but I see lots of errors about >>> missing >>> imports such as: >>> >>> import org.ovirt.engine.api.model.* >>> import org.ovirt.engine.core.common.* >>> >>> I should have a complete ovirt-engine source repository (I cloned the >>> ovirt-engine git repo). Has anyone seen this problem before? Can >>> you offer any >>> suggestions to help me resolve it? Thanks! 
>>> >>> -- >>> Adam Litke >>> IBM Linux Technology Center >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> > From agl at us.ibm.com Mon Feb 6 19:47:24 2012 From: agl at us.ibm.com (Adam Litke) Date: Mon, 6 Feb 2012 13:47:24 -0600 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <4F301BC3.1070305@redhat.com> References: <20120206144933.GF3026@us.ibm.com> <20120206171834.GH3026@us.ibm.com> <4F301BC3.1070305@redhat.com> Message-ID: <20120206194724.GM3026@us.ibm.com> On Mon, Feb 06, 2012 at 08:28:19PM +0200, Livnat Peer wrote: > On 06/02/12 19:18, Adam Litke wrote: > > On Mon, Feb 06, 2012 at 10:58:13AM -0500, Laszlo Hornyak wrote: > >> Hi Adam! > >> > >> Please check if workspace maven resolution is enabled, and run a maven build with install. > >> If it is still broken, then there must be a bad dependency in the pom.xml-s... it happens :-( > > > > Thanks for your suggestions. Maven resolution is enabled. Then I tried to > > build on the command line using mvn directly but got the same errors as in > > eclipse. Next, I tried to checkout out the 3.0 branch (assuming that the build > > should be more stable) and I got a different set of compilation errors. > > > > Hi Adam, > > > This brings up a few questions: > > > > 1.) Which jdk should I use? I am currently using OpenJDK > > > > /usr/lib/jvm/java-1.6.0-openjdk/bin/java -version > > java version "1.6.0_23" > > OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1) > > OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) > > > > you are using the right JDK. > > > 2,) Does this need a Fedora/RH system to compile? > > The engine works on Fedora, RHEL, Ubuntu Gentoo and should work on any > other Linux based operating system (Java is platform agnostic). > > > > > > 3.) My guess is that others are able to compile oVirt even if there are bad > > dependencies in the pom.xml files. Otherwise they would already be fixed. How > > do others fix the dependencies on their local systems. > > > > There should not be any local issues, let's try to figure out what the > issues are. > > The errors are probably not related to eclipse because you have > compilation errors from the command line as well. > > I would start by compiling the engine and api with no tests and no UI: > > Run from the command line - > > 1. $ovirt_engine_home> mvn clean > 2. $ovirt_engine_home> mvn install -DskipTests > > What is the result of the above two? Thanks Livnat! The mvn clean was successful. 
Here are the errors from the install step: [INFO] ------------------------------------------------------------------------ [INFO] Building Shared GWT code [INFO] task-segment: [install] [INFO] ------------------------------------------------------------------------ [INFO] [clean:clean {execution: auto-clean}] [INFO] [dependency:unpack {execution: copy}] [INFO] Configured Artifact: org.ovirt.engine.core:common:sources:3.0.0-0001:jar [INFO] Configured Artifact: org.ovirt.engine.core:compat:sources:3.0.0-0001:jar [INFO] Configured Artifact: org.ovirt.engine.core:searchbackend:sources:3.0.0-0001:jar [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/common/3.0.0-0001/common-3.0.0-0001-sources.jar to /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java with includes null and excludes:null [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/compat/3.0.0-0001/compat-3.0.0-0001-sources.jar to /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java with includes null and excludes:null [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/searchbackend/3.0.0-0001/searchbackend-3.0.0-0001-sources.jar to /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java with includes null and excludes:null [INFO] [resources:resources {execution: default-resources}] [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 3 resources [INFO] skip non existing resourceDirectory /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/resources [INFO] [gwt:resources {execution: default}] [INFO] 750 source files copied from GWT module org.ovirt.engine.SharedGwt [INFO] [compiler:compile {execution: default-compile}] [INFO] Compiling 749 source files to /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/target/classes [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Compilation failure Quota.java:[52,52] cannot find symbol symbol : variable QUOTA_NAME_SIZE location: class org.ovirt.engine.core.common.businessentities.BusinessEntitiesDefinitions Quota.java:[58,52] cannot find symbol symbol : variable QUOTA_DESCRIPTION_SIZE location: class org.ovirt.engine.core.common.businessentities.BusinessEntitiesDefinitions [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 minute 52 seconds [INFO] Finished at: Mon Feb 06 13:45:09 CST 2012 [INFO] Final Memory: 241M/575M [INFO] ------------------------------------------------------------------------ -- Adam Litke IBM Linux Technology Center From lpeer at redhat.com Mon Feb 6 20:24:36 2012 From: lpeer at redhat.com (Livnat Peer) Date: Mon, 06 Feb 2012 22:24:36 +0200 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <20120206194724.GM3026@us.ibm.com> References: <20120206144933.GF3026@us.ibm.com> <20120206171834.GH3026@us.ibm.com> <4F301BC3.1070305@redhat.com> <20120206194724.GM3026@us.ibm.com> Message-ID: <4F303704.7030606@redhat.com> On 06/02/12 21:47, Adam Litke wrote: > On Mon, Feb 06, 2012 at 08:28:19PM +0200, Livnat Peer wrote: >> On 06/02/12 19:18, Adam Litke wrote: >>> On Mon, Feb 06, 2012 at 10:58:13AM -0500, Laszlo Hornyak wrote: >>>> Hi Adam! 
>>>> >>>> Please check if workspace maven resolution is enabled, and run a maven build with install. >>>> If it is still broken, then there must be a bad dependency in the pom.xml-s... it happens :-( >>> >>> Thanks for your suggestions. Maven resolution is enabled. Then I tried to >>> build on the command line using mvn directly but got the same errors as in >>> eclipse. Next, I tried to checkout out the 3.0 branch (assuming that the build >>> should be more stable) and I got a different set of compilation errors. >>> >> >> Hi Adam, >> >>> This brings up a few questions: >>> >>> 1.) Which jdk should I use? I am currently using OpenJDK >>> >>> /usr/lib/jvm/java-1.6.0-openjdk/bin/java -version >>> java version "1.6.0_23" >>> OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1) >>> OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) >>> >> >> you are using the right JDK. >> >>> 2,) Does this need a Fedora/RH system to compile? >> >> The engine works on Fedora, RHEL, Ubuntu Gentoo and should work on any >> other Linux based operating system (Java is platform agnostic). >> >> >>> >>> 3.) My guess is that others are able to compile oVirt even if there are bad >>> dependencies in the pom.xml files. Otherwise they would already be fixed. How >>> do others fix the dependencies on their local systems. >>> >> >> There should not be any local issues, let's try to figure out what the >> issues are. >> >> The errors are probably not related to eclipse because you have >> compilation errors from the command line as well. >> >> I would start by compiling the engine and api with no tests and no UI: >> >> Run from the command line - >> >> 1. $ovirt_engine_home> mvn clean >> 2. $ovirt_engine_home> mvn install -DskipTests >> >> What is the result of the above two? > > Thanks Livnat! The mvn clean was successful. Here are the errors from the > install step: 1. do you have latest? when did you fetch last (I can fetch the same commit hash to make sure it compiles, I have latest and it compiles) 2. let try to compile the engine without the GWT stub - $ovirt_engine_home> cd backend/manager $ovirt_engine_home/backend/manager > mvn install -DskipTests What is the result of the above? BTW if you want online help I am on the ovirt IRC channel. 
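One more thing worth ruling out before digging further: the sharedgwt module unpacks the common/compat/searchbackend sources jars from the local Maven repository (see the "Unpacking /home/.../.m2/repository/org/ovirt/..." lines in the output below), so a stale org/ovirt artifact left in ~/.m2 from an older checkout or branch can produce exactly this kind of "cannot find symbol" failure. A quick, generic way to rule that out — these are standard Maven commands, nothing engine-specific, and the paths are the defaults:

    # drop any previously installed ovirt-engine artifacts so nothing stale gets unpacked
    rm -rf ~/.m2/repository/org/ovirt
    # rebuild and reinstall from the current checkout
    mvn clean install -DskipTests
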
> > [INFO] ------------------------------------------------------------------------ > [INFO] Building Shared GWT code > [INFO] task-segment: [install] > [INFO] ------------------------------------------------------------------------ > [INFO] [clean:clean {execution: auto-clean}] > [INFO] [dependency:unpack {execution: copy}] > [INFO] Configured Artifact: org.ovirt.engine.core:common:sources:3.0.0-0001:jar > [INFO] Configured Artifact: org.ovirt.engine.core:compat:sources:3.0.0-0001:jar > [INFO] Configured Artifact: org.ovirt.engine.core:searchbackend:sources:3.0.0-0001:jar > [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/common/3.0.0-0001/common-3.0.0-0001-sources.jar to > /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java > with includes null and excludes:null > [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/compat/3.0.0-0001/compat-3.0.0-0001-sources.jar to > /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java > with includes null and excludes:null > [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/searchbackend/3.0.0-0001/searchbackend-3.0.0-0001-sources.jar to > /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java > with includes null and excludes:null > [INFO] [resources:resources {execution: default-resources}] > [INFO] Using 'UTF-8' encoding to copy filtered resources. > [INFO] Copying 3 resources > [INFO] skip non existing resourceDirectory /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/resources > [INFO] [gwt:resources {execution: default}] > [INFO] 750 source files copied from GWT module org.ovirt.engine.SharedGwt > [INFO] [compiler:compile {execution: default-compile}] > [INFO] Compiling 749 source files to /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/target/classes > [INFO] ------------------------------------------------------------------------ > [ERROR] BUILD FAILURE > [INFO] ------------------------------------------------------------------------ > [INFO] Compilation failure > > Quota.java:[52,52] cannot find symbol > symbol : variable QUOTA_NAME_SIZE > location: class org.ovirt.engine.core.common.businessentities.BusinessEntitiesDefinitions > > Quota.java:[58,52] cannot find symbol > symbol : variable QUOTA_DESCRIPTION_SIZE > location: class org.ovirt.engine.core.common.businessentities.BusinessEntitiesDefinitions > > > [INFO] ------------------------------------------------------------------------ > [INFO] For more information, run Maven with the -e switch > [INFO] ------------------------------------------------------------------------ > [INFO] Total time: 1 minute 52 seconds > [INFO] Finished at: Mon Feb 06 13:45:09 CST 2012 > [INFO] Final Memory: 241M/575M > [INFO] ------------------------------------------------------------------------ > > From dlaor at redhat.com Tue Feb 7 10:01:58 2012 From: dlaor at redhat.com (Dor Laor) Date: Tue, 07 Feb 2012 12:01:58 +0200 Subject: [Engine-devel] bridgless networks In-Reply-To: <4cd20748-a940-4f77-b491-7c8a75ac8a23@zmail01.collab.prod.int.phx2.redhat.com> References: <4cd20748-a940-4f77-b491-7c8a75ac8a23@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F30F696.5090409@redhat.com> On 02/06/2012 04:47 PM, Roy Golan wrote: > Hi All > > Lately I've been working on a design of bridge-less network feature in the engine. 
> You can see it in http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks > > Please review the design. > Note, there are some open issues, you can find in the relevant section. > Reviews and comments are very welcome. I'm not in the details of the above design but just please make sure this change will be able to accommodate w/: - Different bridging types: - Today's Linux bridge - openVswitch bridge - macvtap bridges. - pci device assignment w/o sriov - virtio over macvtap over sriov virtual function Cheers, Dor > > Thanks, > Roy > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From oschreib at redhat.com Tue Feb 7 11:05:49 2012 From: oschreib at redhat.com (Ofer Schreiber) Date: Tue, 07 Feb 2012 06:05:49 -0500 (EST) Subject: [Engine-devel] oVirt's First Release go/no go meeting Message-ID: <1d8aeedf-04a5-438e-9658-b5a7e30df8a0@zmail14.collab.prod.int.phx2.redhat.com> The following is a new meeting request: Subject: oVirt's First Release go/no go meeting Organizer: "Ofer Schreiber" Location: #ovirt on oftc Time: Tuesday, February 7, 2012, 5:00:00 PM - 6:00:00 PM GMT +02:00 Jerusalem Invitees: engine-devel at ovirt.org; node-devel at ovirt.org; board at ovirt.org *~*~*~*~*~*~*~*~*~* This is the official go/no go meeting for Ovirt's first release. Useful resources: 1. http://www.ovirt.org/wiki/Releases/First_Release_Blockers 2. http://www.ovirt.org/wiki/Releases/First_Release -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 2789 bytes Desc: not available URL: From agl at us.ibm.com Tue Feb 7 14:16:33 2012 From: agl at us.ibm.com (Adam Litke) Date: Tue, 7 Feb 2012 08:16:33 -0600 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <4F303704.7030606@redhat.com> References: <20120206144933.GF3026@us.ibm.com> <20120206171834.GH3026@us.ibm.com> <4F301BC3.1070305@redhat.com> <20120206194724.GM3026@us.ibm.com> <4F303704.7030606@redhat.com> Message-ID: <20120207141633.GA2840@us.ibm.com> On Mon, Feb 06, 2012 at 10:24:36PM +0200, Livnat Peer wrote: > On 06/02/12 21:47, Adam Litke wrote: > > On Mon, Feb 06, 2012 at 08:28:19PM +0200, Livnat Peer wrote: > >> On 06/02/12 19:18, Adam Litke wrote: > >>> On Mon, Feb 06, 2012 at 10:58:13AM -0500, Laszlo Hornyak wrote: > >>>> Hi Adam! > >>>> > >>>> Please check if workspace maven resolution is enabled, and run a maven build with install. > >>>> If it is still broken, then there must be a bad dependency in the pom.xml-s... it happens :-( > >>> > >>> Thanks for your suggestions. Maven resolution is enabled. Then I tried to > >>> build on the command line using mvn directly but got the same errors as in > >>> eclipse. Next, I tried to checkout out the 3.0 branch (assuming that the build > >>> should be more stable) and I got a different set of compilation errors. > >>> > >> > >> Hi Adam, > >> > >>> This brings up a few questions: > >>> > >>> 1.) Which jdk should I use? I am currently using OpenJDK > >>> > >>> /usr/lib/jvm/java-1.6.0-openjdk/bin/java -version > >>> java version "1.6.0_23" > >>> OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1) > >>> OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) > >>> > >> > >> you are using the right JDK. > >> > >>> 2,) Does this need a Fedora/RH system to compile? 
> >> > >> The engine works on Fedora, RHEL, Ubuntu Gentoo and should work on any > >> other Linux based operating system (Java is platform agnostic). > >> > >> > >>> > >>> 3.) My guess is that others are able to compile oVirt even if there are bad > >>> dependencies in the pom.xml files. Otherwise they would already be fixed. How > >>> do others fix the dependencies on their local systems. > >>> > >> > >> There should not be any local issues, let's try to figure out what the > >> issues are. > >> > >> The errors are probably not related to eclipse because you have > >> compilation errors from the command line as well. > >> > >> I would start by compiling the engine and api with no tests and no UI: > >> > >> Run from the command line - > >> > >> 1. $ovirt_engine_home> mvn clean > >> 2. $ovirt_engine_home> mvn install -DskipTests > >> > >> What is the result of the above two? > > > > Thanks Livnat! The mvn clean was successful. Here are the errors from the > > install step: > > > 1. do you have latest? when did you fetch last (I can fetch the same > commit hash to make sure it compiles, I have latest and it compiles) Ok. I guess I had the 3.0 branch checked out when I was trying to fix the compile. By moving back to master, I was able to build from the command line successfully. However, I still get lots of errors in eclipse. I will include a few below: Action cannot be resolved to a type ActionResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 34 Java Problem Actions cannot be resolved to a type ActionsBuilder.java /restapi-definition/src/main/java/org/ovirt/engine/api/model line 35 Java Problem BaseDevice cannot be resolved to a type DeviceResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 28 Java Problem BaseDevices cannot be resolved to a type DevicesResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 33 Java Problem BaseResource cannot be resolved to a type RemovableStorageDomainContentsResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 28 Java Problem BaseResources cannot be resolved to a type RemovableStorageDomainContentsResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 28 Java Problem Bound mismatch: The type C is not a valid substitute for the bounded parameter of the type ReadOnlyDevicesResource DevicesResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 34 Java Problem Bound mismatch: The type D is not a valid substitute for the bounded parameter of the type DeviceResource DevicesResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 52 Java Problem Bound mismatch: The type R is not a valid substitute for the bounded parameter of the type StorageDomainContentResource StorageDomainContentsResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 36 Java Problem Capabilities cannot be resolved to a type CapabilitiesResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 33 Java Problem CdRom cannot be resolved to a type TemplateResource.java /restapi-definition/src/main/java/org/ovirt/engine/api/resource line 51 Java Problem -- Adam Litke IBM Linux Technology Center From sanjal at redhat.com Tue Feb 7 14:26:10 2012 From: sanjal at redhat.com (Shireesh Anjal) Date: Tue, 07 Feb 2012 09:26:10 -0500 (EST) Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <4F303704.7030606@redhat.com> Message-ID: Hi, I went through 
setting Eclipse IDE for engine development recently. After getting the maven build to work, and after importing all projects into eclipse, following setup is required to get rid of all compilation errors reported by eclipse: 1) at restapi-definition -> project -> properties -> java build path -> source -> add source folder -> target/generated sources/xjc 2) at webadmin -> project -> properties -> java build path -> source -> add source folder-> target/generated sources/annotations,gwt,test-annotations 3) To get rid of the error "The method setCharacterEncoding(String) is undefined for the type ?HttpServletResponse" in source frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/server/gwt/WebadminDynamicHostingServlet.java, I modified pom.xml at root level to change servlet API version from 2.3 to 2.4 as the concerned API is introduced in 2.4 3.0 - 2.3 + 2.4 0.1.42 4) Make sure that you import the engine code formatter into eclipse _before_ starting development. Window -> Preferences -> Java -> Code Style -> Formatter -> Import -> /config/engine-code-format.xml 5) Above mentioned formatted doesn't work in comments. This can be resolved by adding "Remove trailing whitespace" in "Save actions" as follows: Window -> Preferences -> Java -> Editor -> Save Actions -> Additional Actions -> Configure -> Code Organizing -> Remove trailing whitespace -> ?All lines 6) For some reason, editing a properties file in eclipse results in a lot of "diff" in git, making it difficult to review the code change. So I'm resorting to editing properties file in a text editor outside eclipse for the time being. I suspect this may not happen on all machines. Regards, Shireesh ----- Original Message ----- From: "Livnat Peer" To: "Adam Litke" Cc: engine-devel at ovirt.org Sent: Tuesday, February 7, 2012 1:54:36 AM Subject: Re: [Engine-devel] Eclipse IDE setup On 06/02/12 21:47, Adam Litke wrote: > On Mon, Feb 06, 2012 at 08:28:19PM +0200, Livnat Peer wrote: >> On 06/02/12 19:18, Adam Litke wrote: >>> On Mon, Feb 06, 2012 at 10:58:13AM -0500, Laszlo Hornyak wrote: >>>> Hi Adam! >>>> >>>> Please check if workspace maven resolution is enabled, and run a maven build with install. >>>> If it is still broken, then there must be a bad dependency in the pom.xml-s... it happens :-( >>> >>> Thanks for your suggestions. ?Maven resolution is enabled. ?Then I tried to >>> build on the command line using mvn directly but got the same errors as in >>> eclipse. ?Next, I tried to checkout out the 3.0 branch (assuming that the build >>> should be more stable) and I got a different set of compilation errors. >>> >> >> Hi Adam, >> >>> This brings up a few questions: >>> >>> 1.) Which jdk should I use? ?I am currently using OpenJDK >>> >>> /usr/lib/jvm/java-1.6.0-openjdk/bin/java -version >>> java version "1.6.0_23" >>> OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1) >>> OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) >>> >> >> you are using the right JDK. >> >>> 2,) Does this need a Fedora/RH system to compile? >> >> The engine works on Fedora, RHEL, Ubuntu Gentoo ?and should work on any >> other Linux based operating system (Java is platform agnostic). >> >> >>> >>> 3.) My guess is that others are able to compile oVirt even if there are bad >>> dependencies in the pom.xml files. ?Otherwise they would already be fixed. ?How >>> do others fix the dependencies on their local systems. >>> >> >> There should not be any local issues, let's try to figure out what the >> issues are. 
>> >> The errors are probably not related to eclipse because you have >> compilation errors from the command line as well. >> >> I would start by compiling the engine and api with no tests and no UI: >> >> Run from the command line - >> >> 1. $ovirt_engine_home> mvn clean >> 2. $ovirt_engine_home> mvn install -DskipTests >> >> What is the result of the above two? > > Thanks Livnat! ?The mvn clean was successful. ?Here are the errors from the > install step: 1. do you have latest? when did you fetch last (I can fetch the same commit hash to make sure it compiles, I have latest and it compiles) 2. let try to compile the engine without the GWT stub - ?$ovirt_engine_home> cd backend/manager ?$ovirt_engine_home/backend/manager > mvn install -DskipTests What is the result of the above? BTW if you want online help I am on the ovirt IRC channel. > > [INFO] ------------------------------------------------------------------------ > [INFO] Building Shared GWT code > [INFO] ? ?task-segment: [install] > [INFO] ------------------------------------------------------------------------ > [INFO] [clean:clean {execution: auto-clean}] > [INFO] [dependency:unpack {execution: copy}] > [INFO] Configured Artifact: org.ovirt.engine.core:common:sources:3.0.0-0001:jar > [INFO] Configured Artifact: org.ovirt.engine.core:compat:sources:3.0.0-0001:jar > [INFO] Configured Artifact: org.ovirt.engine.core:searchbackend:sources:3.0.0-0001:jar > [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/common/3.0.0-0001/common-3.0.0-0001-sources.jar to > ? /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java > ? ?with includes null and excludes:null > [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/compat/3.0.0-0001/compat-3.0.0-0001-sources.jar to > ? /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java > ? ?with includes null and excludes:null > [INFO] Unpacking /home/aglitke/.m2/repository/org/ovirt/engine/core/searchbackend/3.0.0-0001/searchbackend-3.0.0-0001-sources.jar to > ? /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/java > ? ?with includes null and excludes:null > [INFO] [resources:resources {execution: default-resources}] > [INFO] Using 'UTF-8' encoding to copy filtered resources. 
> [INFO] Copying 3 resources > [INFO] skip non existing resourceDirectory /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/src/main/resources > [INFO] [gwt:resources {execution: default}] > [INFO] 750 source files copied from GWT module org.ovirt.engine.SharedGwt > [INFO] [compiler:compile {execution: default-compile}] > [INFO] Compiling 749 source files to /home/aglitke/src/ovirt-engine/frontend/webadmin/modules/sharedgwt/target/classes > [INFO] ------------------------------------------------------------------------ > [ERROR] BUILD FAILURE > [INFO] ------------------------------------------------------------------------ > [INFO] Compilation failure > > Quota.java:[52,52] cannot find symbol > symbol ?: variable QUOTA_NAME_SIZE > location: class org.ovirt.engine.core.common.businessentities.BusinessEntitiesDefinitions > > Quota.java:[58,52] cannot find symbol > symbol ?: variable QUOTA_DESCRIPTION_SIZE > location: class org.ovirt.engine.core.common.businessentities.BusinessEntitiesDefinitions > > > [INFO] ------------------------------------------------------------------------ > [INFO] For more information, run Maven with the -e switch > [INFO] ------------------------------------------------------------------------ > [INFO] Total time: 1 minute 52 seconds > [INFO] Finished at: Mon Feb 06 13:45:09 CST 2012 > [INFO] Final Memory: 241M/575M > [INFO] ------------------------------------------------------------------------ > > _______________________________________________ Engine-devel mailing list Engine-devel at ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel From abaron at redhat.com Tue Feb 7 15:43:24 2012 From: abaron at redhat.com (Ayal Baron) Date: Tue, 07 Feb 2012 10:43:24 -0500 (EST) Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: <5f2c99b7-38c8-42cd-980d-206a147d12f6@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: ----- Original Message ----- > > > ----- Original Message ----- > > On 02/05/2012 02:57 PM, Miki Kenneth wrote: > > ... > > > > >>>> Isn't the VM version derived from the version of the cluster > > >>>> on > > >>>> which it was last edited? > > >>>> For example: you've created a VM on a cluster v3.0. When it is > > >>>> running on a v3.2 cluster, is there any reason to change its > > >>>> version? > > >>>> When it is edited, then perhaps yes - because it may have > > >>>> changed/added properties/features that are only applicable to > > >>>> v3.2. > > >>>> But until then - let it stay in the same version as it was > > >>>> created. > > >>>> (btw, how does this map, if at all, to the '-m' qemu command > > >>>> line > > >>>> switch?) > > >>>> Y. > > >>>> > > >>> > > >>> Currently we do not persist the VM version at all, it is > > >>> derived > > >>> from > > >>> the cluster version the VM belongs to (that's why I suggested > > >>> to > > >>> save it > > >>> as part of the OVF so we can be aware of the VM version when > > >>> exporting/importing a VM etc.). > > >>> > > >>> The VM does not have to be edited to be influenced by the > > >>> cluster > > >>> version. For example if you start a VM on 3.1 cluster you get > > >>> the > > >>> stable > > >>> device address feature with no manual editing. > > >>> > > >>> Livnat > > >>> > > > However, I do agree with Yaniv that changing the VM version > > > "under > > > the hood" is a bit problematic. Version is a parameter associated > > > with create/update operation, and less with Run command. 
> > It's not under the hood, user effectively chose to change it when she > changed the cluster level. > > Going forward, we could check the version before running the VM and > then warning the user (so that the change would take effect per VM > and not per cluster) but that would be annoying and to mitigate > that, we would need to add a checkbox when changing the cluster > level "Automatically upgrade VMs" or something (to keep current > simple behaviour). Another thing that would require VM version is unicode support in the ovf. > > > > > but the engine currently has no logic to detect the need to > > increase > > the > > emulated machine to support feature X. > > the engine currently does not save this parameter at VM level. > > it will also need to compare it to the list of supported emulated > > machines at the cluster, and prevent running the VM if there isn't > > a > > match. > > it also increases the matrix of possible emulated machines being > > run > > on > > different versions of hypervisor to N*cluster_levels, instead of > > just > > the number of cluster levels. > > plus, if a cluster is increased to a new version of hosts which > > doesn't > > support an older emulated machine level - user will need to upgrade > > all > > VMs one by one? > > (or will engine block upgrading cluster level if the new cluster > > level > > doesn't have an emulated machine in use by one of the virtual > > machines) > > it also means engine needs to handle validation logic for this > > field > > when exporting/importing (point of this discussion), as well as > > just > > moving a VM between clusters. > > > > so before introducing all this logic - were issues observed where > > changing the cluster level (i.e., -M at host level) resulted in > > problematic changes at guest level worth all of these? > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From agl at us.ibm.com Tue Feb 7 16:29:18 2012 From: agl at us.ibm.com (Adam Litke) Date: Tue, 7 Feb 2012 10:29:18 -0600 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <4F303704.7030606@redhat.com> References: <20120206144933.GF3026@us.ibm.com> <20120206171834.GH3026@us.ibm.com> <4F301BC3.1070305@redhat.com> <20120206194724.GM3026@us.ibm.com> <4F303704.7030606@redhat.com> Message-ID: <20120207162918.GC2840@us.ibm.com> On Mon, Feb 06, 2012 at 10:24:36PM +0200, Livnat Peer wrote: > BTW if you want online help I am on the ovirt IRC channel. Thanks. I might need to take you up on your offer. Even after following the suggestions in this thread I still have around 400 unsolved errors across several different projects. What is your IRC nick? I wasn't able to recognize you on #ovirt. -- Adam Litke IBM Linux Technology Center From iheim at redhat.com Tue Feb 7 23:02:58 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 08 Feb 2012 01:02:58 +0200 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: References: Message-ID: <4F31ADA2.5000808@redhat.com> On 02/07/2012 04:26 PM, Shireesh Anjal wrote: > Hi, > > I went through setting Eclipse IDE for engine development recently. 
> > After getting the maven build to work, and after importing all projects into eclipse, following setup is required to get rid of all compilation errors reported by eclipse: > > 1) at restapi-definition -> project -> properties -> java build path -> source -> add source folder -> target/generated sources/xjc > 2) at webadmin -> project -> properties -> java build path -> source -> add source folder-> target/generated sources/annotations,gwt,test-annotations > > 3) To get rid of the error "The method setCharacterEncoding(String) is undefined for the type HttpServletResponse" in source frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/server/gwt/WebadminDynamicHostingServlet.java, I modified pom.xml at root level to change servlet API version from 2.3 to 2.4 as the concerned API is introduced in 2.4 > > 3.0 > -2.3 > +2.4 > 0.1.42 > > 4) Make sure that you import the engine code formatter into eclipse _before_ starting development. > > Window -> Preferences -> Java -> Code Style -> Formatter -> Import -> /config/engine-code-format.xml > > 5) Above mentioned formatted doesn't work in comments. This can be resolved by adding "Remove trailing whitespace" in "Save actions" as follows: > > Window -> Preferences -> Java -> Editor -> Save Actions -> Additional Actions -> Configure -> Code Organizing -> Remove trailing whitespace -> All lines > > 6) For some reason, editing a properties file in eclipse results in a lot of "diff" in git, making it difficult to review the code change. So I'm resorting to editing properties file in a text editor outside eclipse for the time being. I suspect this may not happen on all machines. Shireesh - great points - care to update the wiki? Also - I'd like to hope we can clean the pom files so #1-#3 would not be needed manually? From yzaslavs at redhat.com Wed Feb 8 06:43:31 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Wed, 08 Feb 2012 08:43:31 +0200 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <4F31ADA2.5000808@redhat.com> References: <4F31ADA2.5000808@redhat.com> Message-ID: <4F321993.6010206@redhat.com> On 02/08/2012 01:02 AM, Itamar Heim wrote: > On 02/07/2012 04:26 PM, Shireesh Anjal wrote: >> Hi, >> >> I went through setting Eclipse IDE for engine development recently. >> >> After getting the maven build to work, and after importing all >> projects into eclipse, following setup is required to get rid of all >> compilation errors reported by eclipse: >> >> 1) at restapi-definition -> project -> properties -> java build >> path -> source -> add source folder -> target/generated sources/xjc >> 2) at webadmin -> project -> properties -> java build path -> >> source -> add source folder-> target/generated >> sources/annotations,gwt,test-annotations >> >> 3) To get rid of the error "The method setCharacterEncoding(String) is >> undefined for the type HttpServletResponse" in source >> frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/server/gwt/WebadminDynamicHostingServlet.java, >> I modified pom.xml at root level to change servlet API version from >> 2.3 to 2.4 as the concerned API is introduced in 2.4 >> >> 3.0 >> -2.3 >> +2.4 >> 0.1.42 >> >> 4) Make sure that you import the engine code formatter into eclipse >> _before_ starting development. >> >> Window -> Preferences -> Java -> Code Style -> Formatter -> >> Import -> /config/engine-code-format.xml >> >> 5) Above mentioned formatted doesn't work in comments. 
This can be >> resolved by adding "Remove trailing whitespace" in "Save actions" as >> follows: >> >> Window -> Preferences -> Java -> Editor -> Save Actions -> >> Additional Actions -> Configure -> Code Organizing -> Remove >> trailing whitespace -> All lines I would like to add that our maven checkstyle plugin also fails build on unused import. Make sure you also got it covered. I personally did not set this at my IDE, but I rather handle this issue manually. >> >> 6) For some reason, editing a properties file in eclipse results in a >> lot of "diff" in git, making it difficult to review the code change. >> So I'm resorting to editing properties file in a text editor outside >> eclipse for the time being. I suspect this may not happen on all >> machines. > > Shireesh - great points - care to update the wiki? > Also - I'd like to hope we can clean the pom files so #1-#3 would not be > needed manually? > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From lpeer at redhat.com Wed Feb 8 07:05:25 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 08 Feb 2012 09:05:25 +0200 Subject: [Engine-devel] Eclipse IDE setup In-Reply-To: <20120207162918.GC2840@us.ibm.com> References: <20120206144933.GF3026@us.ibm.com> <20120206171834.GH3026@us.ibm.com> <4F301BC3.1070305@redhat.com> <20120206194724.GM3026@us.ibm.com> <4F303704.7030606@redhat.com> <20120207162918.GC2840@us.ibm.com> Message-ID: <4F321EB5.4060806@redhat.com> On 07/02/12 18:29, Adam Litke wrote: > On Mon, Feb 06, 2012 at 10:24:36PM +0200, Livnat Peer wrote: >> BTW if you want online help I am on the ovirt IRC channel. > > Thanks. I might need to take you up on your offer. Even after following the > suggestions in this thread I still have around 400 unsolved errors across > several different projects. What is your IRC nick? I wasn't able to recognize > you on #ovirt. > lpeer, I'll be happy to help, if I'm not online you can try: - mkolesni - lhornyak Or in Westford (maybe the time zone will work better for you) we have: - jumper45 Livnat From dfediuck at redhat.com Wed Feb 8 07:56:39 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 08 Feb 2012 09:56:39 +0200 Subject: [Engine-devel] oVirt upstream meeting : VM Version In-Reply-To: References: Message-ID: <4F322AB7.2070202@redhat.com> On 07/02/12 17:43, Ayal Baron wrote: > > > ----- Original Message ----- >> >> >> ----- Original Message ----- >>> On 02/05/2012 02:57 PM, Miki Kenneth wrote: >>> ... >>> >>>>>>> Isn't the VM version derived from the version of the cluster >>>>>>> on >>>>>>> which it was last edited? >>>>>>> For example: you've created a VM on a cluster v3.0. When it is >>>>>>> running on a v3.2 cluster, is there any reason to change its >>>>>>> version? >>>>>>> When it is edited, then perhaps yes - because it may have >>>>>>> changed/added properties/features that are only applicable to >>>>>>> v3.2. >>>>>>> But until then - let it stay in the same version as it was >>>>>>> created. >>>>>>> (btw, how does this map, if at all, to the '-m' qemu command >>>>>>> line >>>>>>> switch?) >>>>>>> Y. >>>>>>> >>>>>> >>>>>> Currently we do not persist the VM version at all, it is >>>>>> derived >>>>>> from >>>>>> the cluster version the VM belongs to (that's why I suggested >>>>>> to >>>>>> save it >>>>>> as part of the OVF so we can be aware of the VM version when >>>>>> exporting/importing a VM etc.). 
>>>>>> >>>>>> The VM does not have to be edited to be influenced by the >>>>>> cluster >>>>>> version. For example if you start a VM on 3.1 cluster you get >>>>>> the >>>>>> stable >>>>>> device address feature with no manual editing. >>>>>> >>>>>> Livnat >>>>>> >>>> However, I do agree with Yaniv that changing the VM version >>>> "under >>>> the hood" is a bit problematic. Version is a parameter associated >>>> with create/update operation, and less with Run command. >> >> It's not under the hood, user effectively chose to change it when she >> changed the cluster level. >> >> Going forward, we could check the version before running the VM and >> then warning the user (so that the change would take effect per VM >> and not per cluster) but that would be annoying and to mitigate >> that, we would need to add a checkbox when changing the cluster >> level "Automatically upgrade VMs" or something (to keep current >> simple behaviour). > > Another thing that would require VM version is unicode support in the ovf. > Why isn't unicode an OVF version issue? >> >>> >>> but the engine currently has no logic to detect the need to >>> increase >>> the >>> emulated machine to support feature X. >>> the engine currently does not save this parameter at VM level. >>> it will also need to compare it to the list of supported emulated >>> machines at the cluster, and prevent running the VM if there isn't >>> a >>> match. >>> it also increases the matrix of possible emulated machines being >>> run >>> on >>> different versions of hypervisor to N*cluster_levels, instead of >>> just >>> the number of cluster levels. >>> plus, if a cluster is increased to a new version of hosts which >>> doesn't >>> support an older emulated machine level - user will need to upgrade >>> all >>> VMs one by one? >>> (or will engine block upgrading cluster level if the new cluster >>> level >>> doesn't have an emulated machine in use by one of the virtual >>> machines) >>> it also means engine needs to handle validation logic for this >>> field >>> when exporting/importing (point of this discussion), as well as >>> just >>> moving a VM between clusters. >>> >>> so before introducing all this logic - were issues observed where >>> changing the cluster level (i.e., -M at host level) resulted in >>> problematic changes at guest level worth all of these? >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -- /d "The answer, my friend, is blowing in the wind" --Bob Dylan, Blowin' in the Wind (1963) From ovedo at redhat.com Wed Feb 8 11:43:47 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Wed, 08 Feb 2012 06:43:47 -0500 (EST) Subject: [Engine-devel] SPICE related features In-Reply-To: <6685d495-3a1d-4b8a-a2b1-687f2d354e5e@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: Hello all, The following feature page describes the engine adjustments needed for new SPICE features. http://www.ovirt.org/wiki/Features/SPICERelatedFeatures Feel free to share your comments. 
Thank you, Oved From dlaor at redhat.com Wed Feb 8 11:49:56 2012 From: dlaor at redhat.com (Dor Laor) Date: Wed, 08 Feb 2012 13:49:56 +0200 Subject: [Engine-devel] SPICE related features In-Reply-To: References: Message-ID: <4F326164.6000005@redhat.com> On 02/08/2012 01:43 PM, Oved Ourfalli wrote: > Hello all, > > The following feature page describes the engine adjustments needed for new SPICE features. > > http://www.ovirt.org/wiki/Features/SPICERelatedFeatures Better to cross-post it w/ spice-devel upstream for co-review. > > Feel free to share your comments. > > Thank you, > Oved > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From oschreib at redhat.com Wed Feb 8 14:25:52 2012 From: oschreib at redhat.com (Ofer Schreiber) Date: Wed, 08 Feb 2012 16:25:52 +0200 Subject: [Engine-devel] New oVirt-Engine available - Release Candidate - V2 Message-ID: <4F3285F0.8090307@redhat.com> Hi all, A new ovirt-engine (3.0.0_0001-1.6) build has been uploaded into ovirt.org. This release is considered as a release candidate for oVirt's first release, scheduled for tomorrow [1]. In order to grab the rpms, please follow instructions at http://www.ovirt.org/wiki/Installing_ovirt_from_rpm The tarball can be found at http://www.ovirt.org/releases/nightly/src/ovirt-engine-3.0.0_0001.20120208.tar.gz The oVirt-Engine team [1] http://ovirt.org/meetings/ovirt/2012/ovirt.2012-02-07-15.01.html From ovedo at redhat.com Wed Feb 8 14:43:23 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Wed, 08 Feb 2012 09:43:23 -0500 (EST) Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F328856.9090808@redhat.com> Message-ID: <71287314-0781-4ae6-a2b1-1d018037c37f@zmail02.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Hans de Goede" > To: dlaor at redhat.com > Cc: "Oved Ourfalli" , spice-devel at lists.freedesktop.org, engine-devel at ovirt.org > Sent: Wednesday, February 8, 2012 4:36:06 PM > Subject: Re: [Spice-devel] [Engine-devel] SPICE related features > > Hi all, > > Dor, thanks for the forward. > > On 02/08/2012 12:49 PM, Dor Laor wrote: > > On 02/08/2012 01:43 PM, Oved Ourfalli wrote: > >> Hello all, > >> > >> The following feature page describes the engine adjustments needed > >> for new SPICE features. > >> > >> http://www.ovirt.org/wiki/Features/SPICERelatedFeatures > > Al in all this looks good, some remarks: > > * WRT multi monitor support for RHEL, the latest RHEL > xorg-x11-drv-qxl and > spice-vdagent packages do support multi monitor support using > multiple > cards in Xinerama mode. We are waiting for a RHEL-6 z-stream > update to > fix an x11-xorg-server-Xorg bug which atm makes the mouse unusable > in this > mode wants this lands, multi-monitor support this way should be > available > for RHEL-6.2 (and later) guests. The same holds true for Fedora > guests, > although I don't expect the necessary Xorg changes to be available > for > versions older then Fedora 17. The driving multiple monitors from > a single > qxl device support OTOH is still a long time away, likely 6 months > or so. > The idea behind this support is to have it on a single PCI card, and not on multiple ones. > * WRT the native USB support, the wiki page says: > "If the cluster level is 3.1 (which supports native USB support), > but the > client only has non-native USB support (old client), then we will > use the > old client. 
This means that we'll have to keep supporting the > non-native USB > support, side-by-side with the native one." > > Note that the new usb-support requires starting the guest with a > number of > extra emulated devices. These will just sit around and do nothing > if unused, > so I don't really expect any issues with this, but this still is > something > to be aware of. OTOH the old usb-support requires the installation > of extra > software inside the guest, if this is not installed falling back > to the old > client will not help wrt usb support. > Of course. Added a note on that in the wiki page. Thank you, Oved > Regards, > > Hans > From hdegoede at redhat.com Wed Feb 8 14:36:06 2012 From: hdegoede at redhat.com (Hans de Goede) Date: Wed, 08 Feb 2012 15:36:06 +0100 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F326164.6000005@redhat.com> References: <4F326164.6000005@redhat.com> Message-ID: <4F328856.9090808@redhat.com> Hi all, Dor, thanks for the forward. On 02/08/2012 12:49 PM, Dor Laor wrote: > On 02/08/2012 01:43 PM, Oved Ourfalli wrote: >> Hello all, >> >> The following feature page describes the engine adjustments needed for new SPICE features. >> >> http://www.ovirt.org/wiki/Features/SPICERelatedFeatures Al in all this looks good, some remarks: * WRT multi monitor support for RHEL, the latest RHEL xorg-x11-drv-qxl and spice-vdagent packages do support multi monitor support using multiple cards in Xinerama mode. We are waiting for a RHEL-6 z-stream update to fix an x11-xorg-server-Xorg bug which atm makes the mouse unusable in this mode wants this lands, multi-monitor support this way should be available for RHEL-6.2 (and later) guests. The same holds true for Fedora guests, although I don't expect the necessary Xorg changes to be available for versions older then Fedora 17. The driving multiple monitors from a single qxl device support OTOH is still a long time away, likely 6 months or so. * WRT the native USB support, the wiki page says: "If the cluster level is 3.1 (which supports native USB support), but the client only has non-native USB support (old client), then we will use the old client. This means that we'll have to keep supporting the non-native USB support, side-by-side with the native one." Note that the new usb-support requires starting the guest with a number of extra emulated devices. These will just sit around and do nothing if unused, so I don't really expect any issues with this, but this still is something to be aware of. OTOH the old usb-support requires the installation of extra software inside the guest, if this is not installed falling back to the old client will not help wrt usb support. Regards, Hans From hdegoede at redhat.com Wed Feb 8 15:00:51 2012 From: hdegoede at redhat.com (Hans de Goede) Date: Wed, 08 Feb 2012 16:00:51 +0100 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <71287314-0781-4ae6-a2b1-1d018037c37f@zmail02.collab.prod.int.phx2.redhat.com> References: <71287314-0781-4ae6-a2b1-1d018037c37f@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <4F328E23.3020903@redhat.com> Hi, On 02/08/2012 03:43 PM, Oved Ourfalli wrote: > > > ----- Original Message ----- >> From: "Hans de Goede" >> To: dlaor at redhat.com >> Cc: "Oved Ourfalli", spice-devel at lists.freedesktop.org, engine-devel at ovirt.org >> Sent: Wednesday, February 8, 2012 4:36:06 PM >> Subject: Re: [Spice-devel] [Engine-devel] SPICE related features >> >> Hi all, >> >> Dor, thanks for the forward. 
>> >> On 02/08/2012 12:49 PM, Dor Laor wrote: >>> On 02/08/2012 01:43 PM, Oved Ourfalli wrote: >>>> Hello all, >>>> >>>> The following feature page describes the engine adjustments needed >>>> for new SPICE features. >>>> >>>> http://www.ovirt.org/wiki/Features/SPICERelatedFeatures >> >> Al in all this looks good, some remarks: >> >> * WRT multi monitor support for RHEL, the latest RHEL >> xorg-x11-drv-qxl and >> spice-vdagent packages do support multi monitor support using >> multiple >> cards in Xinerama mode. We are waiting for a RHEL-6 z-stream >> update to >> fix an x11-xorg-server-Xorg bug which atm makes the mouse unusable >> in this >> mode wants this lands, multi-monitor support this way should be >> available >> for RHEL-6.2 (and later) guests. The same holds true for Fedora >> guests, >> although I don't expect the necessary Xorg changes to be available >> for >> versions older then Fedora 17. The driving multiple monitors from >> a single >> qxl device support OTOH is still a long time away, likely 6 months >> or so. >> > The idea behind this support is to have it on a single PCI card, and not on multiple ones. Right, I understand, but my point is, that current RHEL (and other Linux distro based) hypervisors as well as guests do not support the multiple monitors on a single PCI card setup, but they *do* support the multiple monitors, with each a separate PCI card setup like we also do for windows. This means that we may want to enable multiple monitor support for Linux guests *now* using the same code paths / method as for windows guests and then later, for vms where both the guest and the cluster support the multiple monitors on a single PCI card setup, use that instead. Which is not something which the wiki page reflects. Regards, Hans From mkublin at redhat.com Wed Feb 8 15:21:24 2012 From: mkublin at redhat.com (Michael Kublin) Date: Wed, 08 Feb 2012 10:21:24 -0500 (EST) Subject: [Engine-devel] Default TransactionScope at engine In-Reply-To: Message-ID: <87b830f6-0d8c-4562-b227-c3ccbd78dde1@zmail14.collab.prod.int.phx2.redhat.com> Hi All, Today most of the flows of runAction at engine look like at the following way: 1. Perform some selects 2. Perform call to host (xml rpc) or call to some number of other actions in synchronous way 3. If a call success than perform update to DB. It is means that we actually need open transaction only at the step 3, it is mean that by default we are keeping transaction much more longer than it should be, the call to host can take a while. So, the possible plan of action is 1. Change a default scope of transaction from TransactionScopeOption.Required to TransactionScopeOption.Suppress 2. Remove annotation NonTransactiveCommandAttribute 3. The actions which are need to run in transaction - appropriate transaction scope should be passed via parameters It is a lot of dirty and not to interesting work, but I think it will provide a great benefit: 1. Reduced a time of open transaction 2. Better db connection utilization 3. I think that number of open transaction will also reduce 4. 
As a result, performance will improve.

By the way, many of the new actions written today are marked as NonTransactiveCommandAttribute by default; if we continue this way, almost every command class will end up marked with NonTransactiveCommandAttribute.

Regards Michael

From dennisml at conversis.de Wed Feb 8 16:47:18 2012
From: dennisml at conversis.de (Dennis Jacobfeuerborn)
Date: Wed, 08 Feb 2012 17:47:18 +0100
Subject: [Engine-devel] New oVirt-Engine available - Release Candidate - V2
In-Reply-To: <4F3285F0.8090307@redhat.com>
References: <4F3285F0.8090307@redhat.com>
Message-ID: <4F32A716.6040002@conversis.de>

On 02/08/2012 03:25 PM, Ofer Schreiber wrote:
> Hi all,
>
> A new ovirt-engine (3.0.0_0001-1.6) build has been uploaded into ovirt.org.
> This release is considered as a release candidate for oVirt's first
> release, scheduled for tomorrow [1].
>
> In order to grab the rpms, please follow instructions at
> http://www.ovirt.org/wiki/Installing_ovirt_from_rpm
> The tarball can be found at
> http://www.ovirt.org/releases/nightly/src/ovirt-engine-3.0.0_0001.20120208.tar.gz

What are the requirements to install this? Can this be installed on a plain RHEL/CentOS 6.2, Fedora 15/16, Ubuntu, etc., and how much disk space does the engine require?

Regards,
  Dennis

From sanjal at redhat.com Thu Feb 9 07:09:21 2012
From: sanjal at redhat.com (Shireesh Anjal)
Date: Thu, 09 Feb 2012 12:39:21 +0530
Subject: [Engine-devel] Eclipse IDE setup
In-Reply-To: <4F31ADA2.5000808@redhat.com>
References: <4F31ADA2.5000808@redhat.com>
Message-ID: <4F337121.2030802@redhat.com>

On Wednesday 08 February 2012 04:32 AM, Itamar Heim wrote:
> On 02/07/2012 04:26 PM, Shireesh Anjal wrote:
>> Hi,
>>
>> I went through setting up the Eclipse IDE for engine development recently.
>>
>> After getting the maven build to work, and after importing all
>> projects into eclipse, the following setup is required to get rid of all
>> compilation errors reported by eclipse:
>>
>> 1) at restapi-definition -> project -> properties -> java build
>> path -> source -> add source folder -> target/generated sources/xjc
>> 2) at webadmin -> project -> properties -> java build path ->
>> source -> add source folder -> target/generated
>> sources/annotations,gwt,test-annotations
>>
>> 3) To get rid of the error "The method setCharacterEncoding(String)
>> is undefined for the type HttpServletResponse" in source
>> frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/server/gwt/WebadminDynamicHostingServlet.java,
>> I modified pom.xml at root level to change the servlet API version from
>> 2.3 to 2.4, as the concerned API was introduced in 2.4
>> (see the sketch after this list)
>>
>> 3.0
>> -2.3
>> +2.4
>> 0.1.42
>>
>> 4) Make sure that you import the engine code formatter into eclipse
>> _before_ starting development.
>>
>> Window -> Preferences -> Java -> Code Style -> Formatter ->
>> Import -> /config/engine-code-format.xml
>>
>> 5) The above mentioned formatter doesn't work in comments. This can be
>> resolved by adding "Remove trailing whitespace" in "Save actions" as
>> follows:
>>
>> Window -> Preferences -> Java -> Editor -> Save Actions ->
>> Additional Actions -> Configure -> Code Organizing -> Remove
>> trailing whitespace -> All lines
>>
>> 6) For some reason, editing a properties file in eclipse results in a
>> lot of "diff" in git, making it difficult to review the code change.
>> So I'm resorting to editing properties files in a text editor outside
>> eclipse for the time being. I suspect this may not happen on all
>> machines.
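One way to double-check the servlet API bump from point 3; treat this as a sketch, since the exact Maven coordinates and property names in the oVirt poms may differ:

    # from the root of the checkout: show which servlet-api version each module resolves
    # the javax.servlet groupId filter is an assumption, adjust it to the real coordinates
    mvn dependency:tree -Dincludes=javax.servlet

Running it before and after editing the root pom.xml shows whether 2.4 actually took effect, or whether a module-level pom still pins the older version.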
> > Shireesh - great points - care to update the wiki? Have added the above points to an existing wiki page: http://www.ovirt.org/wiki/Building_Ovirt_Engine/IDE > Also - I'd like to hope we can clean the pom files so #1-#3 would not > be needed manually? -- Regards, Shireesh From lpeer at redhat.com Thu Feb 9 07:40:13 2012 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 09 Feb 2012 09:40:13 +0200 Subject: [Engine-devel] Default TransactionScope at engine In-Reply-To: <87b830f6-0d8c-4562-b227-c3ccbd78dde1@zmail14.collab.prod.int.phx2.redhat.com> References: <87b830f6-0d8c-4562-b227-c3ccbd78dde1@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F33785D.1090400@redhat.com> On 08/02/12 17:21, Michael Kublin wrote: > Hi All, > > Today most of the flows of runAction at engine look like at the following way: > > 1. Perform some selects > 2. Perform call to host (xml rpc) or call to some number of other actions in synchronous way > 3. If a call success than perform update to DB. > There are also the following flows to consider - * An action that is internal to the engine and does not involve a round trip to the host, like all MLA related flows, quota etc. * An action that is not internal but commits before the round trip to the host like adding entities to db or changing entities status. In addition we need to figure how this idea works when moving to JPA. > It is means that we actually need open transaction only at the step 3, it is mean that by default > we are keeping transaction much more longer than it should be, the call to host can take a while. > > So, the possible plan of action is > 1. Change a default scope of transaction from TransactionScopeOption.Required to TransactionScopeOption.Suppress > 2. Remove annotation NonTransactiveCommandAttribute > 3. The actions which are need to run in transaction - appropriate transaction scope should be passed via parameters > > It is a lot of dirty and not to interesting work, but I think it will provide a great benefit: > 1. Reduced a time of open transaction > 2. Better db connection utilization > 3. I think that number of open transaction will also reduce > 4. As a result performance will improve > > By the way, many of new actions which are written today by default marked as NonTransactiveCommandAttribute, if we will continue > in such way almost every class will be marked with NonTransactiveCommandAttribute > > Regards Michael > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Thu Feb 9 07:55:20 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 09 Feb 2012 09:55:20 +0200 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F328856.9090808@redhat.com> References: <4F326164.6000005@redhat.com> <4F328856.9090808@redhat.com> Message-ID: <4F337BE8.4020607@redhat.com> On 02/08/2012 04:36 PM, Hans de Goede wrote: > Hi all, > > Dor, thanks for the forward. > > On 02/08/2012 12:49 PM, Dor Laor wrote: >> On 02/08/2012 01:43 PM, Oved Ourfalli wrote: >>> Hello all, >>> >>> The following feature page describes the engine adjustments needed >>> for new SPICE features. >>> >>> http://www.ovirt.org/wiki/Features/SPICERelatedFeatures > > Al in all this looks good, some remarks: > > * WRT multi monitor support for RHEL, the latest RHEL xorg-x11-drv-qxl and > spice-vdagent packages do support multi monitor support using multiple > cards in Xinerama mode. 
We are waiting for a RHEL-6 z-stream update to > fix an x11-xorg-server-Xorg bug which atm makes the mouse unusable in this > mode wants this lands, multi-monitor support this way should be available > for RHEL-6.2 (and later) guests. The same holds true for Fedora guests, > although I don't expect the necessary Xorg changes to be available for > versions older then Fedora 17. The driving multiple monitors from a single > qxl device support OTOH is still a long time away, likely 6 months or so. so this means we need to ask the user for linux guests if they want single head or multiple heads when they choose multi monitor? this will cause their (single) head to spin... any better UX we can suggest users? > > * WRT the native USB support, the wiki page says: > "If the cluster level is 3.1 (which supports native USB support), but the > client only has non-native USB support (old client), then we will use the > old client. This means that we'll have to keep supporting the non-native > USB > support, side-by-side with the native one." > > Note that the new usb-support requires starting the guest with a number of > extra emulated devices. These will just sit around and do nothing if > unused, > so I don't really expect any issues with this, but this still is something > to be aware of. OTOH the old usb-support requires the installation of extra > software inside the guest, if this is not installed falling back to the old > client will not help wrt usb support. true, I think we need to offer at least a single version that offers backward compatibility before we can deprecate it. From iheim at redhat.com Thu Feb 9 08:02:03 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 09 Feb 2012 10:02:03 +0200 Subject: [Engine-devel] bridgless networks In-Reply-To: <4cd20748-a940-4f77-b491-7c8a75ac8a23@zmail01.collab.prod.int.phx2.redhat.com> References: <4cd20748-a940-4f77-b491-7c8a75ac8a23@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F337D7B.5050808@redhat.com> On 02/06/2012 04:47 PM, Roy Golan wrote: > Hi All > > Lately I've been working on a design of bridge-less network feature in the engine. > You can see it in http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks > > Please review the design. > Note, there are some open issues, you can find in the relevant section. > Reviews and comments are very welcome. 1. validations 1.1. do you block setting a logical network to don't allow running VMs if it has a vnic associated with it? 1.2. do you check on import a vnic isn't connected to a logical network which doesn't allow running VMs? 1.3. do you check when REST API tries to add/edit a vnic that the chosen logical network is allowed to run VMs? 2. changes 2.1 can a logical network be changed between allow/disallow running VMs? 2.2 what's the flow when enabling running VMs? will the logical network become non-operational until all hosts are reconfigured with a bridge (if applicable)? what is the user flow to reconfigure the hosts (go one by one? do what (there is no change to host level config)? 
2.3 what's the flow to not allowing to run VMs (bridge-less) - no need to make the network non operational, but same question - what should the admin do to reconfigure the hosts (no host level config change is needed by him, just a reconfigure iiuc) Thanks, Itamar From hdegoede at redhat.com Thu Feb 9 08:31:15 2012 From: hdegoede at redhat.com (Hans de Goede) Date: Thu, 09 Feb 2012 09:31:15 +0100 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F337BE8.4020607@redhat.com> References: <4F326164.6000005@redhat.com> <4F328856.9090808@redhat.com> <4F337BE8.4020607@redhat.com> Message-ID: <4F338453.4090106@redhat.com> Hi, On 02/09/2012 08:55 AM, Itamar Heim wrote: > On 02/08/2012 04:36 PM, Hans de Goede wrote: >> Hi all, >> >> Dor, thanks for the forward. >> >> On 02/08/2012 12:49 PM, Dor Laor wrote: >>> On 02/08/2012 01:43 PM, Oved Ourfalli wrote: >>>> Hello all, >>>> >>>> The following feature page describes the engine adjustments needed >>>> for new SPICE features. >>>> >>>> http://www.ovirt.org/wiki/Features/SPICERelatedFeatures >> >> Al in all this looks good, some remarks: >> >> * WRT multi monitor support for RHEL, the latest RHEL xorg-x11-drv-qxl and >> spice-vdagent packages do support multi monitor support using multiple >> cards in Xinerama mode. We are waiting for a RHEL-6 z-stream update to >> fix an x11-xorg-server-Xorg bug which atm makes the mouse unusable in this >> mode wants this lands, multi-monitor support this way should be available >> for RHEL-6.2 (and later) guests. The same holds true for Fedora guests, >> although I don't expect the necessary Xorg changes to be available for >> versions older then Fedora 17. The driving multiple monitors from a single >> qxl device support OTOH is still a long time away, likely 6 months or so. > > so this means we need to ask the user for linux guests if they want single head or multiple heads when they choose multi monitor? We could ask the user, but I don't think that that is a good idea. > this will cause their (single) head to spin... With which you seem to agree :) > any better UX we can suggest users? Yes, no UI at all, the current solution using multiple single monitor pci cards means using Xinerama, which disables Xrandr, and thus allows no dynamic adjustment of the monitor settings of the guest, instead an xorg.conf file must be written (the linux agent can generate one based on the current client monitor info) and Xorg needs to be restarted. This is the result of the multiple pci cards which each 1 monitor model we've been using for windows guests being a poor match for Linux guests. So we are working on adding support to drive multiple monitors from a single qxl pci device. This requires changes on both the host and guest side, but if both sides support it this configuration is much better, so IMHO ovirt should just automatically enable it if both the host (the cluster) and the guest support it. On the guest side, this is the current status: RHEL <= 6.1 no multi monitor support RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so 1 monitor/card, multiple cards) RHEL >= 6.? multi monitor support using a single card with multiple outputs Just like when exactly the new multi mon support will be available for guests, it is a similar question mark for when it will be available for the host. *) Note for 6.2 this requires a z-stream xorg server update. 
>> * WRT the native USB support, the wiki page says: >> "If the cluster level is 3.1 (which supports native USB support), but the >> client only has non-native USB support (old client), then we will use the >> old client. This means that we'll have to keep supporting the non-native >> USB >> support, side-by-side with the native one." >> >> Note that the new usb-support requires starting the guest with a number of >> extra emulated devices. These will just sit around and do nothing if >> unused, >> so I don't really expect any issues with this, but this still is something >> to be aware of. OTOH the old usb-support requires the installation of extra >> software inside the guest, if this is not installed falling back to the old >> client will not help wrt usb support. > > true, I think we need to offer at least a single version that offers backward compatibility before we can deprecate it. Agreed. Regards, Hans From iheim at redhat.com Thu Feb 9 08:33:38 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 09 Feb 2012 10:33:38 +0200 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F338453.4090106@redhat.com> References: <4F326164.6000005@redhat.com> <4F328856.9090808@redhat.com> <4F337BE8.4020607@redhat.com> <4F338453.4090106@redhat.com> Message-ID: <4F3384E2.7070407@redhat.com> On 02/09/2012 10:31 AM, Hans de Goede wrote: > Hi, > > On 02/09/2012 08:55 AM, Itamar Heim wrote: >> On 02/08/2012 04:36 PM, Hans de Goede wrote: >>> Hi all, >>> >>> Dor, thanks for the forward. >>> >>> On 02/08/2012 12:49 PM, Dor Laor wrote: >>>> On 02/08/2012 01:43 PM, Oved Ourfalli wrote: >>>>> Hello all, >>>>> >>>>> The following feature page describes the engine adjustments needed >>>>> for new SPICE features. >>>>> >>>>> http://www.ovirt.org/wiki/Features/SPICERelatedFeatures >>> >>> Al in all this looks good, some remarks: >>> >>> * WRT multi monitor support for RHEL, the latest RHEL >>> xorg-x11-drv-qxl and >>> spice-vdagent packages do support multi monitor support using multiple >>> cards in Xinerama mode. We are waiting for a RHEL-6 z-stream update to >>> fix an x11-xorg-server-Xorg bug which atm makes the mouse unusable in >>> this >>> mode wants this lands, multi-monitor support this way should be >>> available >>> for RHEL-6.2 (and later) guests. The same holds true for Fedora guests, >>> although I don't expect the necessary Xorg changes to be available for >>> versions older then Fedora 17. The driving multiple monitors from a >>> single >>> qxl device support OTOH is still a long time away, likely 6 months or >>> so. >> >> so this means we need to ask the user for linux guests if they want >> single head or multiple heads when they choose multi monitor? > > We could ask the user, but I don't think that that is a good idea. > >> this will cause their (single) head to spin... > > With which you seem to agree :) > >> any better UX we can suggest users? > > Yes, no UI at all, the current solution using multiple single monitor > pci cards means using Xinerama, which disables Xrandr, and thus allows > no dynamic adjustment of the monitor settings of the guest, instead > an xorg.conf file must be written (the linux agent can generate one > based on the current client monitor info) and Xorg needs to be restarted. > > This is the result of the multiple pci cards which each 1 monitor model > we've been using for windows guests being a poor match for Linux guests. > > So we are working on adding support to drive multiple monitors from a > single qxl pci device. 
This requires changes on both the host and > guest side, but if both sides support it this configuration is much > better, so IMHO ovirt should just automatically enable it > if both the host (the cluster) and the guest support it. > > On the guest side, this is the current status: > > RHEL <= 6.1 no multi monitor support > RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so 1 > monitor/card, multiple cards) > RHEL >= 6.? multi monitor support using a single card with multiple outputs > > Just like when exactly the new multi mon support will be available > for guests, it is a similar question mark for when it will be available for > the host. this is the ovirt mailing list, so upstream versions are more relevant here. in any case, I have the same issue with backward compatibilty. say you fix this in fedora 17. user started a guest VM when host was fedora 16. admin upgraded host and changed cluster level to utilize new features. suddenly on next boot guest will move from 4 heads to single head? I'm guessing it will break user configuration. i.e., user should be able to choose to move to utilize the new mode? From hdegoede at redhat.com Thu Feb 9 09:05:48 2012 From: hdegoede at redhat.com (Hans de Goede) Date: Thu, 09 Feb 2012 10:05:48 +0100 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F3384E2.7070407@redhat.com> References: <4F326164.6000005@redhat.com> <4F328856.9090808@redhat.com> <4F337BE8.4020607@redhat.com> <4F338453.4090106@redhat.com> <4F3384E2.7070407@redhat.com> Message-ID: <4F338C6C.9010705@redhat.com> Hi, On 02/09/2012 09:33 AM, Itamar Heim wrote: > On 02/09/2012 10:31 AM, Hans de Goede wrote: >>> so this means we need to ask the user for linux guests if they want >>> single head or multiple heads when they choose multi monitor? >> >> We could ask the user, but I don't think that that is a good idea. >> >>> this will cause their (single) head to spin... >> >> With which you seem to agree :) >> >>> any better UX we can suggest users? >> >> Yes, no UI at all, the current solution using multiple single monitor >> pci cards means using Xinerama, which disables Xrandr, and thus allows >> no dynamic adjustment of the monitor settings of the guest, instead >> an xorg.conf file must be written (the linux agent can generate one >> based on the current client monitor info) and Xorg needs to be restarted. >> >> This is the result of the multiple pci cards which each 1 monitor model >> we've been using for windows guests being a poor match for Linux guests. >> >> So we are working on adding support to drive multiple monitors from a >> single qxl pci device. This requires changes on both the host and >> guest side, but if both sides support it this configuration is much >> better, so IMHO ovirt should just automatically enable it >> if both the host (the cluster) and the guest support it. >> >> On the guest side, this is the current status: >> >> RHEL <= 6.1 no multi monitor support >> RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so 1 >> monitor/card, multiple cards) >> RHEL >= 6.? multi monitor support using a single card with multiple outputs >> >> Just like when exactly the new multi mon support will be available >> for guests, it is a similar question mark for when it will be available for >> the host. > > this is the ovirt mailing list, so upstream versions are more relevant here. > in any case, I have the same issue with backward compatibilty. > say you fix this in fedora 17. > user started a guest VM when host was fedora 16. 
> admin upgraded host and changed cluster level to utilize new features. > suddenly on next boot guest will move from 4 heads to single head? I'm guessing it will break user configuration. > i.e., user should be able to choose to move to utilize the new mode? I see this as something which gets decided at vm creation time, and then stored in the vm config. So if the vm gets created with a guest OS which does not support multiple monitors per qxl device, or when the cluster does not support it, it uses the old setup with 1 card / monitor. Even if the guest OS or the cluster gets upgraded later. Regards, Hans From iheim at redhat.com Thu Feb 9 09:07:20 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 09 Feb 2012 11:07:20 +0200 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F338C6C.9010705@redhat.com> References: <4F326164.6000005@redhat.com> <4F328856.9090808@redhat.com> <4F337BE8.4020607@redhat.com> <4F338453.4090106@redhat.com> <4F3384E2.7070407@redhat.com> <4F338C6C.9010705@redhat.com> Message-ID: <4F338CC8.8060208@redhat.com> On 02/09/2012 11:05 AM, Hans de Goede wrote: > Hi, > > On 02/09/2012 09:33 AM, Itamar Heim wrote: >> On 02/09/2012 10:31 AM, Hans de Goede wrote: > > > >>>> so this means we need to ask the user for linux guests if they want >>>> single head or multiple heads when they choose multi monitor? >>> >>> We could ask the user, but I don't think that that is a good idea. >>> >>>> this will cause their (single) head to spin... >>> >>> With which you seem to agree :) >>> >>>> any better UX we can suggest users? >>> >>> Yes, no UI at all, the current solution using multiple single monitor >>> pci cards means using Xinerama, which disables Xrandr, and thus allows >>> no dynamic adjustment of the monitor settings of the guest, instead >>> an xorg.conf file must be written (the linux agent can generate one >>> based on the current client monitor info) and Xorg needs to be >>> restarted. >>> >>> This is the result of the multiple pci cards which each 1 monitor model >>> we've been using for windows guests being a poor match for Linux guests. >>> >>> So we are working on adding support to drive multiple monitors from a >>> single qxl pci device. This requires changes on both the host and >>> guest side, but if both sides support it this configuration is much >>> better, so IMHO ovirt should just automatically enable it >>> if both the host (the cluster) and the guest support it. >>> >>> On the guest side, this is the current status: >>> >>> RHEL <= 6.1 no multi monitor support >>> RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so 1 >>> monitor/card, multiple cards) >>> RHEL >= 6.? multi monitor support using a single card with multiple >>> outputs >>> >>> Just like when exactly the new multi mon support will be available >>> for guests, it is a similar question mark for when it will be >>> available for >>> the host. >> >> this is the ovirt mailing list, so upstream versions are more relevant >> here. >> in any case, I have the same issue with backward compatibilty. >> say you fix this in fedora 17. >> user started a guest VM when host was fedora 16. >> admin upgraded host and changed cluster level to utilize new features. >> suddenly on next boot guest will move from 4 heads to single head? I'm >> guessing it will break user configuration. >> i.e., user should be able to choose to move to utilize the new mode? > > I see this as something which gets decided at vm creation time, and then > stored in the vm config. 
So if the vm gets created with a guest OS which > does not support multiple monitors per qxl device, or when the cluster does > not support it, it uses the old setup with 1 card / monitor. Even if the > guest OS or the cluster gets upgraded later. so instead of letting user change this, we'd force this at vm creation time? I'm not sure this is "friendlier". From iheim at redhat.com Thu Feb 9 10:55:38 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 09 Feb 2012 12:55:38 +0200 Subject: [Engine-devel] Adding gluster support Message-ID: <4F33A62A.20307@redhat.com> Hi, The following wiki describes the approach i'm suggesting for adding gluster support in phases to ovirt. http://www.ovirt.org/wiki/AddingGlusterSupportToOvirt comments welcome. Thanks, Itamar From jiaoyang at opzoon.com Thu Feb 9 12:20:12 2012 From: jiaoyang at opzoon.com (jiaoyang at opzoon.com) Date: Thu, 9 Feb 2012 20:20:12 +0800 Subject: [Engine-devel] Exception with JRebel References: Message-ID: <20120209201911070118148@opzoon.com> Hi all I setup JRebel as wiki suggests to reduce redeployment time. When I started up JBoss, it's generally perfect except some errors ========== 2012-02-09 19:55:51,032 INFO [stdout] (MSC service thread 1-1) JRebel: Class 'org.jboss.as.ee.component.ViewConfiguration' could not be processed: 2012-02-09 19:55:51,033 ERROR [stderr] (MSC service thread 1-1) org.zeroturnaround.bundled.javassist.NotFoundException: addViewPostConstructInterceptor(..) is not found in org.jboss.as.ee.component.ViewConfiguration 2012-02-09 19:55:51,034 ERROR [stderr] (MSC service thread 1-1) at org.zeroturnaround.bundled.javassist.CtClassType.getDeclaredMethod(JRebel:1196) 2012-02-09 19:55:51,034 ERROR [stderr] (MSC service thread 1-1) at org.zeroturnaround.javarebel.jboss7.cbp.ViewConfigurationCBP.process(ViewConfigurationCBP.java:59 ========== Is anyone with same situation , or it is a minor exception I can ignore? Log is attached if you want it. thanks Joseph -------------- next part -------------- A non-text attachment was scrubbed... Name: nohup.out Type: application/octet-stream Size: 77085 bytes Desc: not available URL: From lhornyak at redhat.com Thu Feb 9 12:33:34 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Thu, 09 Feb 2012 07:33:34 -0500 (EST) Subject: [Engine-devel] Exception with JRebel In-Reply-To: <20120209201911070118148@opzoon.com> Message-ID: <1aeac342-1566-44db-bb0a-c6ba3a02875f@zmail01.collab.prod.int.phx2.redhat.com> Hi! I am not sure, I remember I have seen this exception in my installation and still it did not break any functionality, code replace worked fine. Now it is working fine without the exception... Laszlo ----- Original Message ----- > From: jiaoyang at opzoon.com > To: "Laszlo Hornyak" , "engine-devel" > Sent: Thursday, February 9, 2012 1:20:12 PM > Subject: Exception with JRebel > > Hi all > > I setup JRebel as wiki suggests to reduce redeployment time. When I > started up JBoss, > it's generally perfect except some errors > > ========== > 2012-02-09 19:55:51,032 INFO [stdout] (MSC service thread 1-1) > JRebel: Class 'org.jboss.as.ee.component.ViewConfiguration' could > not be processed: > 2012-02-09 19:55:51,033 ERROR [stderr] (MSC service thread 1-1) > org.zeroturnaround.bundled.javassist.NotFoundException: > addViewPostConstructInterceptor(..) 
is not found in > org.jboss.as.ee.component.ViewConfiguration > 2012-02-09 19:55:51,034 ERROR [stderr] (MSC service thread 1-1) > at > org.zeroturnaround.bundled.javassist.CtClassType.getDeclaredMethod(JRebel:1196) > 2012-02-09 19:55:51,034 ERROR [stderr] (MSC service thread 1-1) > at > org.zeroturnaround.javarebel.jboss7.cbp.ViewConfigurationCBP.process(ViewConfigurationCBP.java:59 > ========== > > Is anyone with same situation , or it is a minor exception I can > ignore? Log is attached if you want it. > > thanks > Joseph From djasa at redhat.com Thu Feb 9 12:50:59 2012 From: djasa at redhat.com (David =?UTF-8?Q?Ja=C5=A1a?=) Date: Thu, 09 Feb 2012 13:50:59 +0100 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F338CC8.8060208@redhat.com> References: <4F326164.6000005@redhat.com> <4F328856.9090808@redhat.com> <4F337BE8.4020607@redhat.com> <4F338453.4090106@redhat.com> <4F3384E2.7070407@redhat.com> <4F338C6C.9010705@redhat.com> <4F338CC8.8060208@redhat.com> Message-ID: <1328791859.12032.249.camel@dhcp-29-7.brq.redhat.com> Itamar Heim p??e v ?t 09. 02. 2012 v 11:07 +0200: > On 02/09/2012 11:05 AM, Hans de Goede wrote: > > Hi, > > > > On 02/09/2012 09:33 AM, Itamar Heim wrote: > >> On 02/09/2012 10:31 AM, Hans de Goede wrote: > > > > > > > >>>> so this means we need to ask the user for linux guests if they want > >>>> single head or multiple heads when they choose multi monitor? > >>> > >>> We could ask the user, but I don't think that that is a good idea. > >>> > >>>> this will cause their (single) head to spin... > >>> > >>> With which you seem to agree :) > >>> > >>>> any better UX we can suggest users? > >>> > >>> Yes, no UI at all, the current solution using multiple single monitor > >>> pci cards means using Xinerama, which disables Xrandr, and thus allows > >>> no dynamic adjustment of the monitor settings of the guest, instead > >>> an xorg.conf file must be written (the linux agent can generate one > >>> based on the current client monitor info) and Xorg needs to be > >>> restarted. > >>> > >>> This is the result of the multiple pci cards which each 1 monitor model > >>> we've been using for windows guests being a poor match for Linux guests. > >>> > >>> So we are working on adding support to drive multiple monitors from a > >>> single qxl pci device. This requires changes on both the host and > >>> guest side, but if both sides support it this configuration is much > >>> better, so IMHO ovirt should just automatically enable it > >>> if both the host (the cluster) and the guest support it. > >>> > >>> On the guest side, this is the current status: > >>> > >>> RHEL <= 6.1 no multi monitor support > >>> RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so 1 > >>> monitor/card, multiple cards) > >>> RHEL >= 6.? multi monitor support using a single card with multiple > >>> outputs > >>> > >>> Just like when exactly the new multi mon support will be available > >>> for guests, it is a similar question mark for when it will be > >>> available for > >>> the host. > >> > >> this is the ovirt mailing list, so upstream versions are more relevant > >> here. > >> in any case, I have the same issue with backward compatibilty. > >> say you fix this in fedora 17. > >> user started a guest VM when host was fedora 16. > >> admin upgraded host and changed cluster level to utilize new features. > >> suddenly on next boot guest will move from 4 heads to single head? I'm > >> guessing it will break user configuration. 
> >> i.e., user should be able to choose to move to utilize the new mode? > > > > I see this as something which gets decided at vm creation time, and then > > stored in the vm config. So if the vm gets created with a guest OS which > > does not support multiple monitors per qxl device, or when the cluster does > > not support it, it uses the old setup with 1 card / monitor. Even if the > > guest OS or the cluster gets upgraded later. > > so instead of letting user change this, we'd force this at vm creation > time? I'm not sure this is "friendlier". I think that some history-watching logic & one UI bit could be the way to go. The UI bit would be yet another select button that would let user choose what graphic layout ("all monitors on single graphic card", "one graphic card per monitor (legacy)"). The logic would be like this: * pre-existing guest that now supports new layout in 3.1 cluster * The guest uses 1 monitor, is swithed to 2+ --> new * The guest uses 2+ monitor layout --> old, big fat warning when changing to the new that user should wipe xinerama configuration in the guest * pre-existing guest in old or mixed cluster: * guest uses 2+ monitors --> old * guest is newly configured for 2+ monitors --> show warning that user either has co configure xinerama or use newer cluster --> old * new guest in new cluster: * --> new * if user switches to old, show warning * old guest in any type of cluster * --> old This kind of behavior should provide sensible defaults, all valid choices in all possible scenarios and it should not interfere too much when admin chooses to do anything. David > ______________________________________ > _________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -- David Ja?a, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 From juan.hernandez at redhat.com Thu Feb 9 18:46:43 2012 From: juan.hernandez at redhat.com (Juan Hernandez) Date: Thu, 09 Feb 2012 19:46:43 +0100 Subject: [Engine-devel] Packaging apache-mina and apache-sshd Message-ID: <4F341493.4040509@redhat.com> Hello, As part of the effort to package oVirt for Fedora I need to package apache-mina and apache-sshd: http://mina.apache.org/ http://mina.apache.org/sshd As there is not indication in bugzilla that this has been done before I am starting to do it. Let me know if you have any suggestions. Regards, Juan Hernandez From kroberts at redhat.com Fri Feb 10 14:42:00 2012 From: kroberts at redhat.com (Keith Robertson) Date: Fri, 10 Feb 2012 09:42:00 -0500 Subject: [Engine-devel] New oVirt GIT Repo Request Message-ID: <4F352CB8.8060006@redhat.com> All, I would like to move some of the oVirt tools into their own GIT repos so that they are easier to manage/maintain. In particular, I would like to move the ovirt-log-collector, ovirt-iso-uploader, and ovirt-image-uploader each into their own GIT repos. The Plan: Step 1: Create naked GIT repos on oVirt.org for the 3 tools. Step 2: Link git repos to gerrit. Step 3: Populate naked GIT repos with source and build standalone spec files for each. Step 4: In one patch do both a) and b)... a) Update oVirt manager GIT repo by removing tool source. b) Update oVirt manager GIT repo such that spec has dependencies on 3 new RPMs. Optional: - These three tools share some python classes that are very similar. I would like to create a GIT repo (perhaps ovirt-tools-common) to contain these classes so that a fix in one place will fix the issue everywhere. 
Perhaps we can also create a naked GIT repo for these common classes while addressing the primary concerns above. Please comment, Keith Robertson From oschreib at redhat.com Sat Feb 11 08:48:54 2012 From: oschreib at redhat.com (Ofer Schreiber) Date: Sat, 11 Feb 2012 03:48:54 -0500 (EST) Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F352CB8.8060006@redhat.com> References: <4F352CB8.8060006@redhat.com> Message-ID: <8FF5A0E4-AE69-4F19-87A9-2BEEE70DD78D@redhat.com> On 10 Feb 2012, at 16:42, Keith Robertson wrote: > All, > > I would like to move some of the oVirt tools into their own GIT repos so that they are easier to manage/maintain. In particular, I would like to move the ovirt-log-collector, ovirt-iso-uploader, and ovirt-image-uploader each into their own GIT repos. > > The Plan: > Step 1: Create naked GIT repos on oVirt.org for the 3 tools. > Step 2: Link git repos to gerrit. > Step 3: Populate naked GIT repos with source and build standalone spec files for each. > Step 4: In one patch do both a) and b)... > a) Update oVirt manager GIT repo by removing tool source. > b) Update oVirt manager GIT repo such that spec has dependencies on 3 new RPMs. > > Optional: > - These three tools share some python classes that are very similar. I would like to create a GIT repo (perhaps ovirt-tools-common) to contain these classes so that a fix in one place will fix the issue everywhere. Perhaps we can also create a naked GIT repo for these common classes while addressing the primary concerns above. +1 on the entire suggestion. about the common stuff- will this package be obsolete once the tools will be base on the sdk? > > Please comment, > Keith Robertson > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppinatti at linux.vnet.ibm.com Fri Feb 10 19:33:44 2012 From: ppinatti at linux.vnet.ibm.com (Paulo de Rezende Pinatti) Date: Fri, 10 Feb 2012 17:33:44 -0200 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F352CB8.8060006@redhat.com> References: <4F352CB8.8060006@redhat.com> Message-ID: <4F357118.6050204@linux.vnet.ibm.com> Hi, a suggestion for the plan: you could use git filter-branch with the --subdirectory-filter option for creating the repos at step 1. That way it will keep commit history of the files being moved in the new repos. Paulo de Rezende Pinatti Staff Software Engineer IBM Linux Technology Center On 02/10/2012 12:42 PM, Keith Robertson wrote: > All, > > I would like to move some of the oVirt tools into their own GIT repos > so that they are easier to manage/maintain. In particular, I would > like to move the ovirt-log-collector, ovirt-iso-uploader, and > ovirt-image-uploader each into their own GIT repos. > > The Plan: > Step 1: Create naked GIT repos on oVirt.org for the 3 tools. > Step 2: Link git repos to gerrit. > Step 3: Populate naked GIT repos with source and build standalone spec > files for each. > Step 4: In one patch do both a) and b)... > a) Update oVirt manager GIT repo by removing tool source. > b) Update oVirt manager GIT repo such that spec has dependencies on 3 > new RPMs. > > Optional: > - These three tools share some python classes that are very similar. > I would like to create a GIT repo (perhaps ovirt-tools-common) to > contain these classes so that a fix in one place will fix the issue > everywhere. 
Perhaps we can also create a naked GIT repo for these > common classes while addressing the primary concerns above. > > Please comment, > Keith Robertson > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From kroberts at redhat.com Sat Feb 11 13:43:36 2012 From: kroberts at redhat.com (Keith Robertson) Date: Sat, 11 Feb 2012 08:43:36 -0500 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <8FF5A0E4-AE69-4F19-87A9-2BEEE70DD78D@redhat.com> References: <4F352CB8.8060006@redhat.com> <8FF5A0E4-AE69-4F19-87A9-2BEEE70DD78D@redhat.com> Message-ID: <4F367088.5010608@redhat.com> On 02/11/2012 03:48 AM, Ofer Schreiber wrote: > > On 10 Feb 2012, at 16:42, Keith Robertson > wrote: > >> All, >> >> I would like to move some of the oVirt tools into their own GIT repos >> so that they are easier to manage/maintain. In particular, I would >> like to move the ovirt-log-collector, ovirt-iso-uploader, and >> ovirt-image-uploader each into their own GIT repos. >> >> The Plan: >> Step 1: Create naked GIT repos on oVirt.org for >> the 3 tools. >> Step 2: Link git repos to gerrit. >> Step 3: Populate naked GIT repos with source and build standalone >> spec files for each. >> Step 4: In one patch do both a) and b)... >> a) Update oVirt manager GIT repo by removing tool source. >> b) Update oVirt manager GIT repo such that spec has dependencies on 3 >> new RPMs. >> >> Optional: >> - These three tools share some python classes that are very similar. >> I would like to create a GIT repo (perhaps ovirt-tools-common) to >> contain these classes so that a fix in one place will fix the issue >> everywhere. Perhaps we can also create a naked GIT repo for these >> common classes while addressing the primary concerns above. > > +1 on the entire suggestion. > about the common stuff- will this package be obsolete once the tools > will be base on the sdk? No. The SDK is different it provides a common mechanism for accessing the REST API. Whereas, the common tools repo is more geared to the tooling (e.g. common classes for logging, option parsing, etc.). It would look like this... [Common Tools] [REST SDK] \ / [image-uploader, iso-uploader, log-collector] Cheers, Keith > >> >> Please comment, >> Keith Robertson >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From kroberts at redhat.com Sat Feb 11 13:44:58 2012 From: kroberts at redhat.com (Keith Robertson) Date: Sat, 11 Feb 2012 08:44:58 -0500 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F357118.6050204@linux.vnet.ibm.com> References: <4F352CB8.8060006@redhat.com> <4F357118.6050204@linux.vnet.ibm.com> Message-ID: <4F3670DA.5010502@redhat.com> On 02/10/2012 02:33 PM, Paulo de Rezende Pinatti wrote: > Hi, > > a suggestion for the plan: you could use git filter-branch with the > --subdirectory-filter option for creating the repos at step 1. That > way it will keep commit history of the files being moved in the new > repos. No argument from me here. I'd like to keep my history. > > > Paulo de Rezende Pinatti > Staff Software Engineer > IBM Linux Technology Center > > > On 02/10/2012 12:42 PM, Keith Robertson wrote: >> All, >> >> I would like to move some of the oVirt tools into their own GIT repos >> so that they are easier to manage/maintain. 
In particular, I would >> like to move the ovirt-log-collector, ovirt-iso-uploader, and >> ovirt-image-uploader each into their own GIT repos. >> >> The Plan: >> Step 1: Create naked GIT repos on oVirt.org for the 3 tools. >> Step 2: Link git repos to gerrit. >> Step 3: Populate naked GIT repos with source and build standalone >> spec files for each. >> Step 4: In one patch do both a) and b)... >> a) Update oVirt manager GIT repo by removing tool source. >> b) Update oVirt manager GIT repo such that spec has dependencies on >> 3 new RPMs. >> >> Optional: >> - These three tools share some python classes that are very similar. >> I would like to create a GIT repo (perhaps ovirt-tools-common) to >> contain these classes so that a fix in one place will fix the issue >> everywhere. Perhaps we can also create a naked GIT repo for these >> common classes while addressing the primary concerns above. >> >> Please comment, >> Keith Robertson >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch >> > From iheim at redhat.com Sat Feb 11 22:41:39 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 12 Feb 2012 00:41:39 +0200 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F352CB8.8060006@redhat.com> References: <4F352CB8.8060006@redhat.com> Message-ID: <4F36EEA3.50006@redhat.com> On 02/10/2012 04:42 PM, Keith Robertson wrote: > All, > > I would like to move some of the oVirt tools into their own GIT repos so > that they are easier to manage/maintain. In particular, I would like to > move the ovirt-log-collector, ovirt-iso-uploader, and > ovirt-image-uploader each into their own GIT repos. > > The Plan: > Step 1: Create naked GIT repos on oVirt.org for the 3 tools. > Step 2: Link git repos to gerrit. above two are same step - create a project in gerrit. I'll do that if list doesn't have any objections by monday. > Step 3: Populate naked GIT repos with source and build standalone spec > files for each. > Step 4: In one patch do both a) and b)... > a) Update oVirt manager GIT repo by removing tool source. > b) Update oVirt manager GIT repo such that spec has dependencies on 3 > new RPMs. > > Optional: > - These three tools share some python classes that are very similar. I > would like to create a GIT repo (perhaps ovirt-tools-common) to contain > these classes so that a fix in one place will fix the issue everywhere. > Perhaps we can also create a naked GIT repo for these common classes > while addressing the primary concerns above. would this hold both python and java common code? From iheim at redhat.com Sat Feb 11 22:44:40 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 12 Feb 2012 00:44:40 +0200 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F36EEA3.50006@redhat.com> References: <4F352CB8.8060006@redhat.com> <4F36EEA3.50006@redhat.com> Message-ID: <4F36EF58.70702@redhat.com> On 02/12/2012 12:41 AM, Itamar Heim wrote: >> The Plan: >> Step 1: Create naked GIT repos on oVirt.org for the 3 tools. >> Step 2: Link git repos to gerrit. > > above two are same step - create a project in gerrit. > I'll do that if list doesn't have any objections by monday. > >> Step 3: Populate naked GIT repos with source and build standalone spec >> files for each. >> Step 4: In one patch do both a) and b)... >> a) Update oVirt manager GIT repo by removing tool source. >> b) Update oVirt manager GIT repo such that spec has dependencies on 3 >> new RPMs. 
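Paulo's filter-branch suggestion from earlier in the thread would cover steps 1 and 3 along these lines; a rough sketch only, where the clone URL and the tool's subdirectory path inside the engine tree are placeholders rather than the real layout:

    # one-time split per tool, keeping the commit history of the moved files
    git clone <ovirt-engine-repo-url> ovirt-log-collector
    cd ovirt-log-collector
    # keep only the commits that touch the tool's directory and make it the new repo root
    git filter-branch --subdirectory-filter <path/to/log-collector-sources> HEAD
    # then add the new gerrit project as the remote and push the rewritten history there

Repeating the same few commands per tool gives the three standalone repos with their history intact, so git log and git blame keep working after the move.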
Also, run at list pylint jobs tracking changes to these new repos (cc'd eedri to coordinate with) From eedri at redhat.com Sun Feb 12 08:26:10 2012 From: eedri at redhat.com (Eyal Edri) Date: Sun, 12 Feb 2012 03:26:10 -0500 (EST) Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F36EF58.70702@redhat.com> Message-ID: ----- Original Message ----- > From: "Itamar Heim" > To: "Keith Robertson" > Cc: engine-devel at ovirt.org, arch at ovirt.org, "Eyal Edri" > Sent: Sunday, February 12, 2012 12:44:40 AM > Subject: Re: [Engine-devel] New oVirt GIT Repo Request > > On 02/12/2012 12:41 AM, Itamar Heim wrote: > >> The Plan: > >> Step 1: Create naked GIT repos on oVirt.org for the 3 tools. > >> Step 2: Link git repos to gerrit. > > > > above two are same step - create a project in gerrit. > > I'll do that if list doesn't have any objections by monday. > > > >> Step 3: Populate naked GIT repos with source and build standalone > >> spec > >> files for each. > >> Step 4: In one patch do both a) and b)... > >> a) Update oVirt manager GIT repo by removing tool source. > >> b) Update oVirt manager GIT repo such that spec has dependencies > >> on 3 > >> new RPMs. > > Also, run at list pylint jobs tracking changes to these new repos > (cc'd > eedri to coordinate with) > It shouldn't be a problem if we want to test python code (pyflakes/pylint) via gerrit & jenkins. (we might want to use -E just for errors, otherwise we'll get a lot of warnings we can't handle). testing java (+maven) code will require moving to maven 3.0.X due to jenkins plugins backward compatible issues. Eyal. From mkenneth at redhat.com Sun Feb 12 10:35:40 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Sun, 12 Feb 2012 05:35:40 -0500 (EST) Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F2EB574.2060807@redhat.com> Message-ID: <548298ad-4546-47d7-b5d8-168c67c2a002@mkenneth.csb> ----- Original Message ----- > From: "Itamar Heim" > To: "Maor" > Cc: engine-devel at ovirt.org, "Yaniv Dary" , "Miki Kenneth" > Sent: Sunday, February 5, 2012 6:59:32 PM > Subject: Re: [Engine-devel] SharedRawDisk feature detail > > On 02/05/2012 02:14 PM, Maor wrote: > ... > > >> 3. "The synchronization/clustering of shared raw disk between VMs > >> will > >> be managed in the file system. " > >> > >> either i don't understand what this mean, or it could be read with > >> a > >> misleading meaning. > > Maybe the following rephrase will be more accurate: "The > > synchronization/clustering of shared raw disk between VMs should be > > based on external independent application which will be > > synchronized > > with the guest application." > "The synchronization/clustering of shared raw disk between VMs is the > responsibility of the guests. Unaware guests will lead to corruption > of > the shared disk." > > >> > >> 4. VM Pools > >> VM Pools are always based (at least today) on templates, and > >> templates > >> have no shared disks. > >> I'd just block attaching a shared disk to a VM which is part of a > >> pool > >> (unless there is a very interesting use case meriting this) > > If there is no reason to attach shared disk to a VM from pool, > > maybe its > > also not that relevant to attach shared disk to stateless VM. > > Miki? > > I think pools and stateless are different. I can envision a use case > where stateless guests would use a shared disk (say, in read only for > same data). Not sure what you mean here. 
I do think that if you have a pool of Servers (rather than desktops), It is a valid use case to be able have a shared raw disk attached to it. I agree that this can be handled later on - but I would like the design at list to handle it. > > > >> > >> 6. future work - Permissions should be added for disk entity > >> so who can add a shared disk? > > Data Center Administrator or System Administrator will be > > initialized > > with permissions for creating shared raw disk, or changing shared > > disk > > to be unshared. > > Regarding attach/detach disks to/from VM, I was thinking that for > > phase > > one we will count on the user VM permissions. If user will have > > permissions to create new disks on the VM, he will also have > > permissions > > to attach new shared raw disk to it. > > this means they can attach shared disks from other VMs they have no > permission on... > as i said earlier - need to think about this one some more. > From kroberts at redhat.com Sun Feb 12 13:32:12 2012 From: kroberts at redhat.com (Keith Robertson) Date: Sun, 12 Feb 2012 08:32:12 -0500 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F36EEA3.50006@redhat.com> References: <4F352CB8.8060006@redhat.com> <4F36EEA3.50006@redhat.com> Message-ID: <4F37BF5C.20801@redhat.com> On 02/11/2012 05:41 PM, Itamar Heim wrote: > On 02/10/2012 04:42 PM, Keith Robertson wrote: >> All, >> >> I would like to move some of the oVirt tools into their own GIT repos so >> that they are easier to manage/maintain. In particular, I would like to >> move the ovirt-log-collector, ovirt-iso-uploader, and >> ovirt-image-uploader each into their own GIT repos. >> >> The Plan: >> Step 1: Create naked GIT repos on oVirt.org for the 3 tools. >> Step 2: Link git repos to gerrit. > > above two are same step - create a project in gerrit. > I'll do that if list doesn't have any objections by monday. Sure, np. > >> Step 3: Populate naked GIT repos with source and build standalone spec >> files for each. >> Step 4: In one patch do both a) and b)... >> a) Update oVirt manager GIT repo by removing tool source. >> b) Update oVirt manager GIT repo such that spec has dependencies on 3 >> new RPMs. >> >> Optional: >> - These three tools share some python classes that are very similar. I >> would like to create a GIT repo (perhaps ovirt-tools-common) to contain >> these classes so that a fix in one place will fix the issue everywhere. >> Perhaps we can also create a naked GIT repo for these common classes >> while addressing the primary concerns above. > > would this hold both python and java common code? None of the 3 tools currently have any requirement for Java code, but I think the installer does. That said, I wouldn't have a problem mixing Java code in the "common" component as long as they're in separate package directories. If we do something like this do we want a "python" common RPM and a "java" common RPM or just a single RPM for all common code? I don't really have a preference. 
Perhaps: common/src/ common/src//com/ovirt/whatever From lpeer at redhat.com Sun Feb 12 17:03:12 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 12 Feb 2012 19:03:12 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F2AA89C.7090605@redhat.com> References: <4F2AA89C.7090605@redhat.com> Message-ID: <4F37F0D0.7090504@redhat.com> On 02/02/12 17:15, Maor wrote: > Hello all, > > The shared raw disk feature description can be found under the following > links: > http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk > http://www.ovirt.org/wiki/Features/SharedRawDisk > > Please feel free, to share your comments. > > Regards, > Maor > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel Hi Maor, - "when taking a VM snapshot, a snapshot of the shared disk will not be taken." I think it is worth mentioning that the shared disk will be part of the VM snapshot configuration. The disk will appear as unplugged. - Move VM is deprecated in 3.1. - It seems from the wiki that shared disk is not supported for template but is supported for VM pool. I am not sure how can we do that? iirc we create pool from template. What is the complexity of supporting shared disk in Templates? off the top of my head it seems like it is more complicated to block shared disks in templates than to support it. What do you think? Livnat From abaron at redhat.com Sun Feb 12 21:22:54 2012 From: abaron at redhat.com (Ayal Baron) Date: Sun, 12 Feb 2012 16:22:54 -0500 (EST) Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F2EB574.2060807@redhat.com> Message-ID: ----- Original Message ----- > On 02/05/2012 02:14 PM, Maor wrote: > ... > > >> 3. "The synchronization/clustering of shared raw disk between VMs > >> will > >> be managed in the file system. " > >> > >> either i don't understand what this mean, or it could be read with > >> a > >> misleading meaning. > > Maybe the following rephrase will be more accurate: "The > > synchronization/clustering of shared raw disk between VMs should be > > based on external independent application which will be > > synchronized > > with the guest application." > "The synchronization/clustering of shared raw disk between VMs is the > responsibility of the guests. Unaware guests will lead to corruption > of > the shared disk." > > >> > >> 4. VM Pools > >> VM Pools are always based (at least today) on templates, and > >> templates > >> have no shared disks. > >> I'd just block attaching a shared disk to a VM which is part of a > >> pool > >> (unless there is a very interesting use case meriting this) > > If there is no reason to attach shared disk to a VM from pool, > > maybe its > > also not that relevant to attach shared disk to stateless VM. > > Miki? > > I think pools and stateless are different. I can envision a use case > where stateless guests would use a shared disk (say, in read only for > same data). Read only is just as relevant to pools as it is to stateless... > > > >> > >> 6. future work - Permissions should be added for disk entity > >> so who can add a shared disk? > > Data Center Administrator or System Administrator will be > > initialized > > with permissions for creating shared raw disk, or changing shared > > disk > > to be unshared. > > Regarding attach/detach disks to/from VM, I was thinking that for > > phase > > one we will count on the user VM permissions. 
If user will have > > permissions to create new disks on the VM, he will also have > > permissions > > to attach new shared raw disk to it. > > this means they can attach shared disks from other VMs they have no > permission on... > as i said earlier - need to think about this one some more. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From abaron at redhat.com Sun Feb 12 22:06:30 2012 From: abaron at redhat.com (Ayal Baron) Date: Sun, 12 Feb 2012 17:06:30 -0500 (EST) Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F37F0D0.7090504@redhat.com> Message-ID: <95fc2520-439b-4eed-9e42-d3bc9cd35e18@zmail13.collab.prod.int.phx2.redhat.com> I started writing the changes in the email but got tired of it and just made a bunch of changes in the wiki (impossible to track in email, such things should be done using etherpad or something). A few questions though: plugged vs. enabled - I thought we converged on attached/detached and enabled/disabled and not plugged/unplugged? * Shared disks are attached with R/W permissions. - What about enabling R/O ? (esp. for stateless/pools) * Template disks should not be shared. - Why not? (read only) * When exporting a VM, only the disks which are not shared will be exported. - Why is the above not treated the same as a snapshot? the configuration will reference the shared disk as unplugged? (or will it and it's just not clear?) I didn't touch stateless/pools but should be fixed to reflect comments on this thread. Is Remove shared disk and Delete shared disk the same thing? if so, why the dual terminology? I don't quite follow the logic determining which section is under functionality and which under user experience. For example, why do you have a 'Delete shared disk' section in the Ux section but not a 'Move shared disk' section (there is no shared disk specific logic visible in the delete action UI). * Disk name should be generated automatically based on the vm name and disk number in the VM. Description will be empty. * New disk should enforce the user to enter a name for the disk. - Huh? the above 2 items seem like an oxymoron, but I may be missing something... * Attach/Detach of a shared disk can be performed only when the VM is in status 'down'. - Why? under the functionality section you clearly stated that attaching a disk will result in it being attached but disabled... ----- Original Message ----- > On 02/02/12 17:15, Maor wrote: > > Hello all, > > > > The shared raw disk feature description can be found under the > > following > > links: > > http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk > > http://www.ovirt.org/wiki/Features/SharedRawDisk > > > > Please feel free, to share your comments. > > > > Regards, > > Maor > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > Hi Maor, > > - "when taking a VM snapshot, a snapshot of the shared disk will not > be > taken." > I think it is worth mentioning that the shared disk will be part of > the > VM snapshot configuration. The disk will appear as unplugged. > > - Move VM is deprecated in 3.1. > > - It seems from the wiki that shared disk is not supported for > template > but is supported for VM pool. > I am not sure how can we do that? iirc we create pool from template. > > What is the complexity of supporting shared disk in Templates? 
off > the > top of my head it seems like it is more complicated to block shared > disks in templates than to support it. What do you think? > > > Livnat > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Mon Feb 13 07:17:41 2012 From: lpeer at redhat.com (Livnat Peer) Date: Mon, 13 Feb 2012 09:17:41 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <95fc2520-439b-4eed-9e42-d3bc9cd35e18@zmail13.collab.prod.int.phx2.redhat.com> References: <95fc2520-439b-4eed-9e42-d3bc9cd35e18@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <4F38B915.1070200@redhat.com> On 13/02/12 00:06, Ayal Baron wrote: > I started writing the changes in the email but got tired of it and just made a bunch of changes in the wiki (impossible to track in email, such things should be done using etherpad or something). > > A few questions though: > > plugged vs. enabled - I thought we converged on attached/detached and enabled/disabled and not plugged/unplugged? > > * Shared disks are attached with R/W permissions. > - What about enabling R/O ? (esp. for stateless/pools) > Do we have ack from libvirt that attaching disk in r/o mode actually works? (I know we opened a bug for testing this but i can't find the bug - Haim?) > > * Template disks should not be shared. > - Why not? (read only) > > * When exporting a VM, only the disks which are not shared will be exported. > - Why is the above not treated the same as a snapshot? the configuration will reference the shared disk as unplugged? (or will it and it's just not clear?) > It is not the same case as snapshot. We don't have in the export domain the shared image. > I didn't touch stateless/pools but should be fixed to reflect comments on this thread. > > Is Remove shared disk and Delete shared disk the same thing? if so, why the dual terminology? > I don't quite follow the logic determining which section is under functionality and which under user experience. For example, why do you have a 'Delete shared disk' section in the Ux section but not a 'Move shared disk' section (there is no shared disk specific logic visible in the delete action UI). > > > * Disk name should be generated automatically based on the vm name and disk number in the VM. Description will be empty. > * New disk should enforce the user to enter a name for the disk. > - Huh? the above 2 items seem like an oxymoron, but I may be missing something... > The first line is referring to upgrade flow (it is under the upgrade section), the second line is the behavior going forward. I agree this is confusing, Maor I suggest you remove the general behavior from the upgrade section. > * Attach/Detach of a shared disk can be performed only when the VM is in status 'down'. > - Why? under the functionality section you clearly stated that attaching a disk will result in it being attached but disabled... > > I agree with Ayal on this. Attach/Detach a disk should be enabled regardless if the VM is running or not, this applies to all disks not only to shared disk. Maor, few more questions: * "Regular disk can become a shared raw disk, by editing the existing disk and marking the 'share disk' property type." There are limitation to this, for example disk with snapshots can not be shared (we support only shared raw disks etc.) * "When removing a VM with shared disks attached to it, the shared disks will not be deleted. 
" If the shared disk is not attached to any other VM, why don't we delete it? I think we should behave with it as any other disk. Maybe going forward when deleting a VM we should ask if to remove the disks as well. This logic can apply to shared disk in the same way, I don't think we should have a special logic around this for shared disks. * The VDSM owner is missing from the doc, and vdsm is missing from the affected projects. Livnat > > ----- Original Message ----- >> On 02/02/12 17:15, Maor wrote: >>> Hello all, >>> >>> The shared raw disk feature description can be found under the >>> following >>> links: >>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk >>> http://www.ovirt.org/wiki/Features/SharedRawDisk >>> >>> Please feel free, to share your comments. >>> >>> Regards, >>> Maor >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> Hi Maor, >> >> - "when taking a VM snapshot, a snapshot of the shared disk will not >> be >> taken." >> I think it is worth mentioning that the shared disk will be part of >> the >> VM snapshot configuration. The disk will appear as unplugged. >> >> - Move VM is deprecated in 3.1. >> >> - It seems from the wiki that shared disk is not supported for >> template >> but is supported for VM pool. >> I am not sure how can we do that? iirc we create pool from template. >> >> What is the complexity of supporting shared disk in Templates? off >> the >> top of my head it seems like it is more complicated to block shared >> disks in templates than to support it. What do you think? >> >> >> Livnat >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> From rgolan at redhat.com Mon Feb 13 07:55:01 2012 From: rgolan at redhat.com (Roy Golan) Date: Mon, 13 Feb 2012 02:55:01 -0500 (EST) Subject: [Engine-devel] network - UI Sync meeting Message-ID: <12714450-8b25-413d-919e-dba835be293c@zmail01.collab.prod.int.phx2.redhat.com> The following meeting has been modified: Subject: network - UI Sync meeting Organizer: "Roy Golan" Location: asia-tlv at redhat.com Time: Monday, February 13, 2012, 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem Invitees: mkenneth at redhat.com; sgrinber at redhat.com; lpeer at redhat.com; dfediuck at redhat.com; drankevi at redhat.com; ecohen at redhat.com; iheim at redhat.com; ovedo at redhat.com; acathrow at redhat.com; drankevi at redhat.com; engine-devel at ovirt.org *~*~*~*~*~*~*~*~*~* We'll gather to see we have full coverage and discuss validation issues Bridge ID: 1814335863 https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=1814335863 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: meeting.ics Type: text/calendar Size: 4021 bytes Desc: not available URL: From ovedo at redhat.com Mon Feb 13 07:59:55 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Mon, 13 Feb 2012 02:59:55 -0500 (EST) Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F38B915.1070200@redhat.com> Message-ID: <5c2f50bc-d297-4dd1-b723-7506ff21c784@zmail02.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Livnat Peer" > To: "Maor" > Cc: engine-devel at ovirt.org, "Haim Ateya" > Sent: Monday, February 13, 2012 9:17:41 AM > Subject: Re: [Engine-devel] SharedRawDisk feature detail > > On 13/02/12 00:06, Ayal Baron wrote: > > I started writing the changes in the email but got tired of it and > > just made a bunch of changes in the wiki (impossible to track in > > email, such things should be done using etherpad or something). > > > > A few questions though: > > > > plugged vs. enabled - I thought we converged on attached/detached > > and enabled/disabled and not plugged/unplugged? > > > > * Shared disks are attached with R/W permissions. > > - What about enabling R/O ? (esp. for stateless/pools) > > > > Do we have ack from libvirt that attaching disk in r/o mode actually > works? (I know we opened a bug for testing this but i can't find the > bug > - Haim?) > > > > > * Template disks should not be shared. > > - Why not? (read only) > > > > * When exporting a VM, only the disks which are not shared will be > > exported. > > - Why is the above not treated the same as a snapshot? the > > configuration will reference the shared disk as unplugged? (or > > will it and it's just not clear?) > > > > It is not the same case as snapshot. We don't have in the export > domain > the shared image. > > > I didn't touch stateless/pools but should be fixed to reflect > > comments on this thread. > > > > Is Remove shared disk and Delete shared disk the same thing? if so, > > why the dual terminology? > > I don't quite follow the logic determining which section is under > > functionality and which under user experience. For example, why > > do you have a 'Delete shared disk' section in the Ux section but > > not a 'Move shared disk' section (there is no shared disk specific > > logic visible in the delete action UI). > > > > > > * Disk name should be generated automatically based on the vm name > > and disk number in the VM. Description will be empty. > > * New disk should enforce the user to enter a name for the disk. > > - Huh? the above 2 items seem like an oxymoron, but I may be > > missing something... > > > > The first line is referring to upgrade flow (it is under the upgrade > section), the second line is the behavior going forward. > > I agree this is confusing, Maor I suggest you remove the general > behavior from the upgrade section. > > > > * Attach/Detach of a shared disk can be performed only when the VM > > is in status 'down'. > > - Why? under the functionality section you clearly stated that > > attaching a disk will result in it being attached but disabled... > > > > > > I agree with Ayal on this. > Attach/Detach a disk should be enabled regardless if the VM is > running > or not, this applies to all disks not only to shared disk. > > Maor, few more questions: > > * "Regular disk can become a shared raw disk, by editing the existing > disk and marking the 'share disk' property type." > > There are limitation to this, for example disk with snapshots can not > be > shared (we support only shared raw disks etc.) 
> > > * "When removing a VM with shared disks attached to it, the shared > disks > will not be deleted. " > > If the shared disk is not attached to any other VM, why don't we > delete it? > I think we should behave with it as any other disk. > Maybe going forward when deleting a VM we should ask if to remove the > disks as well. This logic can apply to shared disk in the same way, I > don't think we should have a special logic around this for shared > disks. > Do we plan to support deleting disks in the disks tab? (as part of supporting floating disks) If not then we should delete the disk (as we won't be able to delete it after the last VM using it is deleted....). If we do, then I think we should indeed ask the user whether he would like to delete those disks or not (in case it is the last VM). Deleting it implicitly might be wrong, as these disks might contain data that the user would like to attach to a new VM. > > * The VDSM owner is missing from the doc, and vdsm is missing from > the > affected projects. > > > Livnat > > > > > ----- Original Message ----- > >> On 02/02/12 17:15, Maor wrote: > >>> Hello all, > >>> > >>> The shared raw disk feature description can be found under the > >>> following > >>> links: > >>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk > >>> http://www.ovirt.org/wiki/Features/SharedRawDisk > >>> > >>> Please feel free, to share your comments. > >>> > >>> Regards, > >>> Maor > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > >> Hi Maor, > >> > >> - "when taking a VM snapshot, a snapshot of the shared disk will > >> not > >> be > >> taken." > >> I think it is worth mentioning that the shared disk will be part > >> of > >> the > >> VM snapshot configuration. The disk will appear as unplugged. > >> > >> - Move VM is deprecated in 3.1. > >> > >> - It seems from the wiki that shared disk is not supported for > >> template > >> but is supported for VM pool. > >> I am not sure how can we do that? iirc we create pool from > >> template. > >> > >> What is the complexity of supporting shared disk in Templates? off > >> the > >> top of my head it seems like it is more complicated to block > >> shared > >> disks in templates than to support it. What do you think? 
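Pulling together the constraints raised in this thread, here is a small illustrative Java sketch of the kind of validation an attach-shared-disk flow would need: raw format only, no snapshots, no template disks, and a permission check on the disk itself rather than only on the target VM. Every name in it (SharedDiskValidator, DiskImage, PermissionChecker) is hypothetical; this is a sketch of the constraints under discussion, not the engine's actual command code.

import java.util.Set;

// Illustrative only -- the names below are hypothetical, not real ovirt-engine classes.
final class SharedDiskValidator {

    enum VolumeFormat { RAW, COW }

    // Tiny stand-in for a disk entity, carrying only the fields the thread argues about.
    static final class DiskImage {
        final String id;
        final VolumeFormat format;
        final boolean hasSnapshots;
        final boolean shareable;
        final boolean templateDisk;

        DiskImage(String id, VolumeFormat format, boolean hasSnapshots,
                  boolean shareable, boolean templateDisk) {
            this.id = id;
            this.format = format;
            this.hasSnapshots = hasSnapshots;
            this.shareable = shareable;
            this.templateDisk = templateDisk;
        }
    }

    interface PermissionChecker {
        // Permission is checked on the disk itself, not only on the target VM.
        boolean canAttachDisk(String userId, String diskId);
    }

    private final PermissionChecker permissions;

    SharedDiskValidator(PermissionChecker permissions) {
        this.permissions = permissions;
    }

    /** Returns an error message, or null if the attach looks valid. */
    String validateAttach(String userId, DiskImage disk, Set<String> vmsAlreadyUsingDisk) {
        if (!disk.shareable) {
            return "Disk is not marked as shareable";
        }
        if (disk.format != VolumeFormat.RAW) {
            return "Only RAW disks can be shared";
        }
        if (disk.hasSnapshots) {
            return "A disk with snapshots cannot be shared";
        }
        if (disk.templateDisk) {
            return "Template disks are read-only and are not attached as shared disks";
        }
        if (!permissions.canAttachDisk(userId, disk.id)) {
            return "User has no permission on this disk (VM permissions alone are not enough)";
        }
        // Being attached to vmsAlreadyUsingDisk already is fine -- sharing is the whole point.
        return null;
    }
}

The PermissionChecker call is the interesting part: it makes explicit that permissions on the target VM are not sufficient, which is the gap Ayal points at earlier in the thread.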
> >> > >> > >> Livnat > >> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ykaul at redhat.com Mon Feb 13 10:03:33 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 13 Feb 2012 12:03:33 +0200 Subject: [Engine-devel] compilation failure: VmSnapshotListModel.java:[234, 36] cannot find symbol, symbol : variable RemoveSnapshot Message-ID: <4F38DFF5.2060404@redhat.com> git hash 727abd1bd4447be81ca0e9dcd3d03563b74a7046 (but I really don't recall when was the last time I tried to compile!): [INFO] Compiling 39 source files to /home/ykaul/ovirt-engine/frontend/webadmin/modules/userportal/target/classes [INFO] ------------------------------------------------------------- [ERROR] COMPILATION ERROR : [INFO] ------------------------------------------------------------- [ERROR] VmSnapshotListModel.java:[234,36] cannot find symbol symbol : variable RemoveSnapshot location: class org.ovirt.engine.core.common.action.VdcActionType [INFO] 1 error [INFO] ------------------------------------------------------------- [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE On Fedora 16/x64, fully updated. TIA, Y. From lhornyak at redhat.com Mon Feb 13 10:32:34 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Mon, 13 Feb 2012 05:32:34 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: Message-ID: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> Hi, Please review the plan document for autorecovery. http://www.ovirt.org/wiki/Features/Autorecovery Thank you, Laszlo From ykaul at redhat.com Mon Feb 13 10:42:39 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 13 Feb 2012 12:42:39 +0200 Subject: [Engine-devel] compilation failure: VmSnapshotListModel.java:[234, 36] cannot find symbol, symbol : variable RemoveSnapshot In-Reply-To: <4F38DFF5.2060404@redhat.com> References: <4F38DFF5.2060404@redhat.com> Message-ID: <4F38E91F.7020004@redhat.com> On 02/13/2012 12:03 PM, Yaniv Kaul wrote: > git hash 727abd1bd4447be81ca0e9dcd3d03563b74a7046 (but I really don't > recall when was the last time I tried to compile!): > > [INFO] Compiling 39 source files to > /home/ykaul/ovirt-engine/frontend/webadmin/modules/userportal/target/classes > [INFO] ------------------------------------------------------------- > [ERROR] COMPILATION ERROR : > [INFO] ------------------------------------------------------------- > [ERROR] VmSnapshotListModel.java:[234,36] cannot find symbol > symbol : variable RemoveSnapshot > location: class org.ovirt.engine.core.common.action.VdcActionType > [INFO] 1 error > [INFO] ------------------------------------------------------------- > [INFO] > ------------------------------------------------------------------------ > [ERROR] BUILD FAILURE > > On Fedora 16/x64, fully updated. > > TIA, > Y. 
> _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel mvn2 clean install before the build cleared that one, but I am getting several errors, some of it seems relevant to recent log changes: [INFO] [gwt:compile {execution: gwtcompile}] [INFO] auto discovered modules [org.ovirt.engine.ui.GwtExtension, org.ovirt.engine.ui.Frontend, org.ovirt.engine.ui.webadmin.WebAdmin, org.ovirt.engine.ui.common.GwtCommon, org.ovirt.engine.ui.UICommonWeb] [WARNING] Don't declare gwt-dev as a project dependency. This may introduce complex dependency conflicts [INFO] org.ovirt.engine.ui.GwtExtension has no EntryPoint - compilation skipped [INFO] org.ovirt.engine.ui.Frontend has no EntryPoint - compilation skipped [INFO] org.ovirt.engine.ui.common.GwtCommon has no EntryPoint - compilation skipped [INFO] org.ovirt.engine.ui.UICommonWeb has no EntryPoint - compilation skipped [ERROR] [AppClassLoader at 64601bb1] info AspectJ Weaver Version 1.6.11 built on Tuesday Mar 15, 2011 at 15:31:04 GMT [ERROR] [AppClassLoader at 64601bb1] info register classloader sun.misc.Launcher$AppClassLoader at 64601bb1 [ERROR] [AppClassLoader at 64601bb1] info using configuration file:/home/ykaul/ovirt-engine/frontend/webadmin/modules/gwt-extension/target/gwt-extension-3.0.0-0001.jar!/META-INF/aop.xml [ERROR] [AppClassLoader at 64601bb1] info using configuration file:/home/ykaul/.m2/repository/org/ovirt/engine/ui/gwt-extension/3.0.0-0001/gwt-extension-3.0.0-0001-sources.jar!/META-INF/aop.xml [ERROR] [AppClassLoader at 64601bb1] info register aspect org.ovirt.engine.ui.gwtextension.DontPrune [ERROR] [AppClassLoader at 64601bb1] info register aspect org.ovirt.engine.ui.gwtextension.DontPrune [INFO] Compiling module org.ovirt.engine.ui.webadmin.WebAdmin [INFO] Validating newly compiled units [INFO] [ERROR] Errors in 'jar:file:/home/ykaul/ovirt-engine/frontend/webadmin/modules/sharedgwt/target/sharedgwt-3.0.0-0001.jar!/org/ovirt/engine/core/common/businessentities/VdsFencingOptions.java' [INFO] [ERROR] Line 631: No source code is available for type org.ovirt.engine.core.compat.LogCompat; did you forget to inherit a required module? [INFO] [ERROR] Line 631: No source code is available for type org.ovirt.engine.core.compat.LogFactoryCompat; did you forget to inherit a required module? From ovedo at redhat.com Mon Feb 13 11:31:23 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Mon, 13 Feb 2012 06:31:23 -0500 (EST) Subject: [Engine-devel] [Users] Autorecovery feature plan for review In-Reply-To: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <05347800-1e91-4090-baab-3db97e51a94f@zmail02.collab.prod.int.phx2.redhat.com> Some comments: 1. I think the amount of time between tests should be configurable. 2. I guess some of the actions done by the autorecovery process should be monitored, so take a look at "http://www.ovirt.org/wiki/Features/TaskManagerDetailed#Job_for_System_Monitors" in order to monitor this action. Oved ----- Original Message ----- > From: "Laszlo Hornyak" > To: "engine-devel" , users at ovirt.org > Sent: Monday, February 13, 2012 12:32:34 PM > Subject: [Users] Autorecovery feature plan for review > > Hi, > > Please review the plan document for autorecovery. 
> http://www.ovirt.org/wiki/Features/Autorecovery > > Thank you, > Laszlo > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From vszocs at redhat.com Mon Feb 13 11:32:14 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 13 Feb 2012 06:32:14 -0500 (EST) Subject: [Engine-devel] Frontend Session: GwtCommon Module Message-ID: The following meeting has been modified: Subject: Frontend Session: GwtCommon Module Organizer: "Vojtech Szocs" Location: "asia-tlv" [MODIFIED] Resources: asia-tlv at redhat.com (Asia-tlv) Time: Tuesday, February 14, 2012, 2:00:00 PM - 3:00:00 PM GMT +01:00 Belgrade, Bratislava, Budapest, Ljubljana, Prague Invitees: drankevi at redhat.com; ashakarc at redhat.com; derez at redhat.com; gchaplik at redhat.com; alkaplan at redhat.com; tjelinek at redhat.com; ecohen at redhat.com; engine-devel at ovirt.org *~*~*~*~*~*~*~*~*~* Hi, I'd like to give a tech session about Frontend GwtCommon module. The agenda is following: - What is GwtCommon - How we use it in Frontend projects - Guidelines for expanding GwtCommon - Ideas for UI reuse Please find the PDF slides attached to this invitation. To join the session, please dial the Intercall bridge number according to your country, and enter the conference ID provided below. Dial-In Numbers: https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=7128867405 Conference Code ID: 7128867405# Regards, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 4833 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GwtCommon.pdf Type: application/pdf Size: 366057 bytes Desc: not available URL: From lhornyak at redhat.com Mon Feb 13 12:24:34 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Mon, 13 Feb 2012 07:24:34 -0500 (EST) Subject: [Engine-devel] [Users] Autorecovery feature plan for review In-Reply-To: <05347800-1e91-4090-baab-3db97e51a94f@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: ----- Original Message ----- > From: "Oved Ourfalli" > To: "Laszlo Hornyak" > Cc: "engine-devel" , users at ovirt.org > Sent: Monday, February 13, 2012 12:31:23 PM > Subject: Re: [Users] Autorecovery feature plan for review > > Some comments: > 1. I think the amount of time between tests should be configurable. Agreed. > 2. I guess some of the actions done by the autorecovery process > should be monitored, so take a look at > "http://www.ovirt.org/wiki/Features/TaskManagerDetailed#Job_for_System_Monitors" > in order to monitor this action. > > Oved > > ----- Original Message ----- > > From: "Laszlo Hornyak" > > To: "engine-devel" , users at ovirt.org > > Sent: Monday, February 13, 2012 12:32:34 PM > > Subject: [Users] Autorecovery feature plan for review > > > > Hi, > > > > Please review the plan document for autorecovery. 
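For readers following the autorecovery plan, here is a minimal Java sketch of a periodic recovery pass with a configurable interval and a per-entity flag, the two points just agreed on. All names (AutoRecoveryScheduler, Recoverable, RecoverableSource) are hypothetical; this only illustrates the behaviour being discussed and is not the proposed implementation.

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a periodic auto-recovery pass; not the actual oVirt design.
final class AutoRecoveryScheduler {

    interface Recoverable {
        String getName();
        boolean isAutoRecoverable();   // the per-entity flag discussed in the thread
        boolean isNonOperational();
        void tryActivate() throws Exception;
    }

    interface RecoverableSource {
        List<Recoverable> failedHosts();
        List<Recoverable> failedStorageDomains();
    }

    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private final RecoverableSource source;

    AutoRecoveryScheduler(RecoverableSource source) {
        this.source = source;
    }

    /** intervalMinutes would come from engine configuration, per the comment above. */
    void start(long intervalMinutes) {
        executor.scheduleWithFixedDelay(this::recoverAll,
                intervalMinutes, intervalMinutes, TimeUnit.MINUTES);
    }

    private void recoverAll() {
        recover(source.failedHosts());
        recover(source.failedStorageDomains());
    }

    private void recover(List<Recoverable> entities) {
        for (Recoverable entity : entities) {
            if (!entity.isAutoRecoverable() || !entity.isNonOperational()) {
                continue;  // entities with the flag off keep today's manual behaviour
            }
            try {
                entity.tryActivate();
            } catch (Exception e) {
                // Failures are expected; the next pass retries. A real implementation would
                // also report through the job/task monitoring framework mentioned above.
                System.err.println("Auto-recovery of " + entity.getName() + " failed: " + e);
            }
        }
    }

    void stop() {
        executor.shutdownNow();
    }
}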
> > http://www.ovirt.org/wiki/Features/Autorecovery > > > > Thank you, > > Laszlo > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > From bazulay at redhat.com Mon Feb 13 12:31:37 2012 From: bazulay at redhat.com (Barak Azulay) Date: Mon, 13 Feb 2012 14:31:37 +0200 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F37BF5C.20801@redhat.com> References: <4F352CB8.8060006@redhat.com> <4F36EEA3.50006@redhat.com> <4F37BF5C.20801@redhat.com> Message-ID: <4F3902A9.2060405@redhat.com> On 02/12/2012 03:32 PM, Keith Robertson wrote: > On 02/11/2012 05:41 PM, Itamar Heim wrote: >> On 02/10/2012 04:42 PM, Keith Robertson wrote: >>> All, >>> >>> I would like to move some of the oVirt tools into their own GIT repos so >>> that they are easier to manage/maintain. In particular, I would like to >>> move the ovirt-log-collector, ovirt-iso-uploader, and >>> ovirt-image-uploader each into their own GIT repos. >>> >>> The Plan: >>> Step 1: Create naked GIT repos on oVirt.org for the 3 tools. >>> Step 2: Link git repos to gerrit. >> >> above two are same step - create a project in gerrit. >> I'll do that if list doesn't have any objections by monday. > Sure, np. >> >>> Step 3: Populate naked GIT repos with source and build standalone spec >>> files for each. >>> Step 4: In one patch do both a) and b)... >>> a) Update oVirt manager GIT repo by removing tool source. >>> b) Update oVirt manager GIT repo such that spec has dependencies on 3 >>> new RPMs. >>> >>> Optional: >>> - These three tools share some python classes that are very similar. I >>> would like to create a GIT repo (perhaps ovirt-tools-common) to contain >>> these classes so that a fix in one place will fix the issue everywhere. >>> Perhaps we can also create a naked GIT repo for these common classes >>> while addressing the primary concerns above. >> >> would this hold both python and java common code? > > None of the 3 tools currently have any requirement for Java code, but I > think the installer does. That said, I wouldn't have a problem mixing > Java code in the "common" component as long as they're in separate > package directories. > > If we do something like this do we want a "python" common RPM and a > "java" common RPM or just a single RPM for all common code? I don't > really have a preference. I would go with separating the java common and python common, even if it's just to ease build/release issues. 
> > Perhaps: > common/src/ > common/src//com/ovirt/whatever > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch From vszocs at redhat.com Mon Feb 13 12:35:05 2012 From: vszocs at redhat.com (Vojtech Szocs) Date: Mon, 13 Feb 2012 07:35:05 -0500 (EST) Subject: [Engine-devel] Frontend Session: GwtCommon Module Message-ID: <2db100d8-81ff-4cb4-9dcf-27d0fbd90b78@zmail16.collab.prod.int.phx2.redhat.com> The following meeting has been modified: Subject: Frontend Session: GwtCommon Module Organizer: "Vojtech Szocs" Location: "asia-tlv" Resources: asia-tlv at redhat.com (Asia-tlv) Time: Tuesday, February 14, 2012, 2:00:00 PM - 3:00:00 PM GMT +01:00 Belgrade, Bratislava, Budapest, Ljubljana, Prague Invitees: drankevi at redhat.com; ashakarc at redhat.com; derez at redhat.com; gchaplik at redhat.com; alkaplan at redhat.com; tjelinek at redhat.com; ecohen at redhat.com; engine-devel at ovirt.org; ovedo at redhat.com; ykaul at redhat.com; amureini at redhat.com ... *~*~*~*~*~*~*~*~*~* Hi, I'd like to give a tech session about Frontend GwtCommon module. The agenda is following: - What is GwtCommon - How we use it in Frontend projects - Guidelines for expanding GwtCommon - Ideas for UI reuse Please find the PDF slides attached to this invitation. To join the session, please dial the Intercall bridge number according to your country, and enter the conference ID provided below. Dial-In Numbers: https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=7128867405 Conference Code ID: 7128867405# Regards, Vojtech -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 5382 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GwtCommon.pdf Type: application/pdf Size: 292331 bytes Desc: not available URL: From dougsland at redhat.com Mon Feb 13 15:57:27 2012 From: dougsland at redhat.com (Douglas Landgraf) Date: Mon, 13 Feb 2012 10:57:27 -0500 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F3902A9.2060405@redhat.com> References: <4F352CB8.8060006@redhat.com> <4F36EEA3.50006@redhat.com> <4F37BF5C.20801@redhat.com> <4F3902A9.2060405@redhat.com> Message-ID: <4F3932E7.1010501@redhat.com> On 02/13/2012 07:31 AM, Barak Azulay wrote: > On 02/12/2012 03:32 PM, Keith Robertson wrote: >> On 02/11/2012 05:41 PM, Itamar Heim wrote: >>> On 02/10/2012 04:42 PM, Keith Robertson wrote: >>>> All, >>>> >>>> I would like to move some of the oVirt tools into their own GIT >>>> repos so >>>> that they are easier to manage/maintain. In particular, I would >>>> like to >>>> move the ovirt-log-collector, ovirt-iso-uploader, and >>>> ovirt-image-uploader each into their own GIT repos. >>>> >>>> The Plan: >>>> Step 1: Create naked GIT repos on oVirt.org for the 3 tools. >>>> Step 2: Link git repos to gerrit. >>> >>> above two are same step - create a project in gerrit. >>> I'll do that if list doesn't have any objections by monday. >> Sure, np. >>> >>>> Step 3: Populate naked GIT repos with source and build standalone spec >>>> files for each. >>>> Step 4: In one patch do both a) and b)... >>>> a) Update oVirt manager GIT repo by removing tool source. >>>> b) Update oVirt manager GIT repo such that spec has dependencies on 3 >>>> new RPMs. 
>>>> >>>> Optional: >>>> - These three tools share some python classes that are very similar. I >>>> would like to create a GIT repo (perhaps ovirt-tools-common) to >>>> contain >>>> these classes so that a fix in one place will fix the issue >>>> everywhere. >>>> Perhaps we can also create a naked GIT repo for these common classes >>>> while addressing the primary concerns above. >>> >>> would this hold both python and java common code? >> >> None of the 3 tools currently have any requirement for Java code, but I >> think the installer does. That said, I wouldn't have a problem mixing >> Java code in the "common" component as long as they're in separate >> package directories. >> >> If we do something like this do we want a "python" common RPM and a >> "java" common RPM or just a single RPM for all common code? I don't >> really have a preference. > > I would go with separating the java common and python common, even if > it's just to ease build/release issues. > +1 and if needed one package be required to the other. -- Cheers Douglas From kroberts at redhat.com Mon Feb 13 13:20:40 2012 From: kroberts at redhat.com (Keith Robertson) Date: Mon, 13 Feb 2012 08:20:40 -0500 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F3932E7.1010501@redhat.com> References: <4F352CB8.8060006@redhat.com> <4F36EEA3.50006@redhat.com> <4F37BF5C.20801@redhat.com> <4F3902A9.2060405@redhat.com> <4F3932E7.1010501@redhat.com> Message-ID: <4F390E28.9060300@redhat.com> On 02/13/2012 10:57 AM, Douglas Landgraf wrote: > On 02/13/2012 07:31 AM, Barak Azulay wrote: >> On 02/12/2012 03:32 PM, Keith Robertson wrote: >>> On 02/11/2012 05:41 PM, Itamar Heim wrote: >>>> On 02/10/2012 04:42 PM, Keith Robertson wrote: >>>>> All, >>>>> >>>>> I would like to move some of the oVirt tools into their own GIT >>>>> repos so >>>>> that they are easier to manage/maintain. In particular, I would >>>>> like to >>>>> move the ovirt-log-collector, ovirt-iso-uploader, and >>>>> ovirt-image-uploader each into their own GIT repos. >>>>> >>>>> The Plan: >>>>> Step 1: Create naked GIT repos on oVirt.org for the 3 tools. >>>>> Step 2: Link git repos to gerrit. >>>> >>>> above two are same step - create a project in gerrit. >>>> I'll do that if list doesn't have any objections by monday. >>> Sure, np. >>>> >>>>> Step 3: Populate naked GIT repos with source and build standalone >>>>> spec >>>>> files for each. >>>>> Step 4: In one patch do both a) and b)... >>>>> a) Update oVirt manager GIT repo by removing tool source. >>>>> b) Update oVirt manager GIT repo such that spec has dependencies on 3 >>>>> new RPMs. >>>>> >>>>> Optional: >>>>> - These three tools share some python classes that are very >>>>> similar. I >>>>> would like to create a GIT repo (perhaps ovirt-tools-common) to >>>>> contain >>>>> these classes so that a fix in one place will fix the issue >>>>> everywhere. >>>>> Perhaps we can also create a naked GIT repo for these common classes >>>>> while addressing the primary concerns above. >>>> >>>> would this hold both python and java common code? >>> >>> None of the 3 tools currently have any requirement for Java code, but I >>> think the installer does. That said, I wouldn't have a problem mixing >>> Java code in the "common" component as long as they're in separate >>> package directories. >>> >>> If we do something like this do we want a "python" common RPM and a >>> "java" common RPM or just a single RPM for all common code? I don't >>> really have a preference. 
>> >> I would go with separating the java common and python common, even if >> it's just to ease build/release issues. >> > +1 and if needed one package be required to the other. > Sounds like a plan. Full speed ahead. Cheers From ecohen at redhat.com Mon Feb 13 15:23:27 2012 From: ecohen at redhat.com (Einav Cohen) Date: Mon, 13 Feb 2012 10:23:27 -0500 (EST) Subject: [Engine-devel] setup-networks review meeting minutes [Feb 13] In-Reply-To: Message-ID: <61e628a5-e844-4870-96fc-8cc5a3824116@zmail04.collab.prod.int.phx2.redhat.com> sub-tab: - need horizontal scroll - need to maybe separate statistics view and structure view (you don't always want to see the mac address, for example) dialog: - need to have more iterations on the GUI (Andrew will set-up a meeting for tomorrow). Some of the comments raised: - need to graphically emphasize the link between the right table (logical layer) and the left table (physical layer) - We don't necessarily need a separate dialog for bonds; we can maybe somehow have a single dialog in which the left section is "static" and contains the Host network topology, while the right section contains the available NICs/Networks/etc according to the selection in the left section. - Bridgeless networks should be marked in a special way. - There should be a way to add a NIC to an existing bond (and also to remove a NIC from bond). - Error messages should be clear From mlipchuk at redhat.com Mon Feb 13 17:44:35 2012 From: mlipchuk at redhat.com (Maor) Date: Mon, 13 Feb 2012 19:44:35 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F37F0D0.7090504@redhat.com> References: <4F2AA89C.7090605@redhat.com> <4F37F0D0.7090504@redhat.com> Message-ID: <4F394C03.6070206@redhat.com> On 02/12/2012 07:03 PM, Livnat Peer wrote: > On 02/02/12 17:15, Maor wrote: >> Hello all, >> >> The shared raw disk feature description can be found under the following >> links: >> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk >> http://www.ovirt.org/wiki/Features/SharedRawDisk >> >> Please feel free, to share your comments. >> >> Regards, >> Maor >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > Hi Maor, > > - "when taking a VM snapshot, a snapshot of the shared disk will not be > taken." > I think it is worth mentioning that the shared disk will be part of the > VM snapshot configuration. The disk will appear as unplugged. Agreed, I changed it to the following: when taking a vm snapshot, a snapshot of the shared disk should not be taken, although it will be part of the VM snapshot configuration and the disk will appear as unplugged. > > - Move VM is deprecated in 3.1. Right, I removed this anecdote from the wiki. > > - It seems from the wiki that shared disk is not supported for template > but is supported for VM pool. > I am not sure how can we do that? iirc we create pool from template. What I was thinking about, is that the administrator can take a VM from the pool and attach it a shared disk, after the VM was created (for testing). The motivation for adding shared disk was that each entity that can be added with a disk can also be added with shared disk. Today, Administrator can add a disk to a VM from pool, which might be wrong behaviour, so maybe its better not to support it... > > What is the complexity of supporting shared disk in Templates? 
off the > top of my head it seems like it is more complicated to block shared > disks in templates than to support it. What do you think? Implementation wize it might be less complex, the problem is the use cases it raises, some of them which I'm thinking about are: * If the disk will be deleted from the DC, should we remove it from the template? or leave an indication in the template that there was a shared disk there, maybe should not allow to delete the disk in the first place, until it is unattached from the template? * What do we want to do when creating a template from VM with shared disk - Should User choose whether to create a template with/without the shared disk? Blocking shared disk from template means creating the template without the shared disk, the implementation for it is to check if the disk is shared or not. I think that if GUI will support attaching shared disk to multiple VMs, there is no strong use case for allowing adding shared disk to a template. > > > Livnat > From iheim at redhat.com Tue Feb 14 03:56:58 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Feb 2012 05:56:58 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> References: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F39DB8A.2060502@redhat.com> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: > Hi, > > Please review the plan document for autorecovery. > http://www.ovirt.org/wiki/Features/Autorecovery why would we disable auto recovery by default? it sounds like the preferred behavior? From lpeer at redhat.com Tue Feb 14 06:57:31 2012 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 14 Feb 2012 08:57:31 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F39DB8A.2060502@redhat.com> References: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> <4F39DB8A.2060502@redhat.com> Message-ID: <4F3A05DB.8050307@redhat.com> On 14/02/12 05:56, Itamar Heim wrote: > On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >> Hi, >> >> Please review the plan document for autorecovery. >> http://www.ovirt.org/wiki/Features/Autorecovery > > why would we disable auto recovery by default? it sounds like the > preferred behavior? > I think that by default Laszlo meant in the upgrade process to maintain current behavior. I agree that for new entities the default should be true. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Tue Feb 14 06:59:07 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Feb 2012 08:59:07 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3A05DB.8050307@redhat.com> References: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> <4F39DB8A.2060502@redhat.com> <4F3A05DB.8050307@redhat.com> Message-ID: <4F3A063B.1060309@redhat.com> On 02/14/2012 08:57 AM, Livnat Peer wrote: > On 14/02/12 05:56, Itamar Heim wrote: >> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>> Hi, >>> >>> Please review the plan document for autorecovery. >>> http://www.ovirt.org/wiki/Features/Autorecovery >> >> why would we disable auto recovery by default? it sounds like the >> preferred behavior? >> > > I think that by default Laszlo meant in the upgrade process to maintain > current behavior. 
> > I agree that for new entities the default should be true. i think the only combination which will allow this is for db to default to false and code to default to true for this property? From yzaslavs at redhat.com Tue Feb 14 07:20:10 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 14 Feb 2012 09:20:10 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3A063B.1060309@redhat.com> References: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> <4F39DB8A.2060502@redhat.com> <4F3A05DB.8050307@redhat.com> <4F3A063B.1060309@redhat.com> Message-ID: <4F3A0B2A.404@redhat.com> On 02/14/2012 08:59 AM, Itamar Heim wrote: > On 02/14/2012 08:57 AM, Livnat Peer wrote: >> On 14/02/12 05:56, Itamar Heim wrote: >>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>> Hi, >>>> >>>> Please review the plan document for autorecovery. >>>> http://www.ovirt.org/wiki/Features/Autorecovery >>> >>> why would we disable auto recovery by default? it sounds like the >>> preferred behavior? >>> >> >> I think that by default Laszlo meant in the upgrade process to maintain >> current behavior. >> >> I agree that for new entities the default should be true. > > i think the only combination which will allow this is for db to default > to false and code to default to true for this property? Why can't we during upgrade process set to all existing entities in DB the value to false, but still have the column defined as "default true"? > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From lpeer at redhat.com Tue Feb 14 07:17:59 2012 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 14 Feb 2012 09:17:59 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F394C03.6070206@redhat.com> References: <4F2AA89C.7090605@redhat.com> <4F37F0D0.7090504@redhat.com> <4F394C03.6070206@redhat.com> Message-ID: <4F3A0AA7.9060903@redhat.com> On 13/02/12 19:44, Maor wrote: > On 02/12/2012 07:03 PM, Livnat Peer wrote: >> On 02/02/12 17:15, Maor wrote: >>> Hello all, >>> >>> The shared raw disk feature description can be found under the following >>> links: >>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk >>> http://www.ovirt.org/wiki/Features/SharedRawDisk >>> >>> Please feel free, to share your comments. >>> >>> Regards, >>> Maor >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> Hi Maor, >> >> - "when taking a VM snapshot, a snapshot of the shared disk will not be >> taken." >> I think it is worth mentioning that the shared disk will be part of the >> VM snapshot configuration. The disk will appear as unplugged. > Agreed, I changed it to the following: > when taking a vm snapshot, a snapshot of the shared disk should not be > taken, although it will be part of the VM snapshot configuration and the > disk will appear as unplugged. >> >> - Move VM is deprecated in 3.1. > Right, I removed this anecdote from the wiki. >> >> - It seems from the wiki that shared disk is not supported for template >> but is supported for VM pool. >> I am not sure how can we do that? iirc we create pool from template. > What I was thinking about, is that the administrator can take a VM from > the pool and attach it a shared disk, after the VM was created (for > testing). 
> > The motivation for adding shared disk was that each entity that can be > added with a disk can also be added with shared disk. > Today, Administrator can add a disk to a VM from pool, which might be > wrong behaviour, so maybe its better not to support it... >> >> What is the complexity of supporting shared disk in Templates? off the >> top of my head it seems like it is more complicated to block shared >> disks in templates than to support it. What do you think? > Implementation wize it might be less complex, the problem is the use > cases it raises, > some of them which I'm thinking about are: > * If the disk will be deleted from the DC, should we remove it from the > template? or leave an indication in the template that there was a shared > disk there, maybe should not allow to delete the disk in the first > place, until it is unattached from the template? Since template configuration is 'read-only', you cannot change a disk to be plugged or unplugged. I would say you cannot delete a disk that is part of a template, regardless of whether it is shared or not. > * What do we want to do when creating a template from VM with shared > disk - Should User choose whether to create a template with/without the > shared disk? > If a user is creating a template from a VM, the configuration should be identical to the VM. > Blocking shared disk from template means creating the template without > the shared disk, the implementation for it is to check if the disk is > shared or not. > I think that if GUI will support attaching shared disk to multiple VMs, > there is no strong use case for allowing adding shared disk to a template. I am not sure what the above comment means, but remember that we have API users as well as UI. I think that if we don't have a strong case for not supporting shared disks in templates, the default should be to support them. >> >> Livnat >> From oschreib at redhat.com Tue Feb 14 07:26:15 2012 From: oschreib at redhat.com (Ofer Schreiber) Date: Tue, 14 Feb 2012 02:26:15 -0500 (EST) Subject: [Engine-devel] DB Upgrade doesn't work In-Reply-To: <98031938-e59a-45e6-93b7-fb24d0de7b77@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <83b5823a-7b31-4207-8f29-45e2d3184302@zmail14.collab.prod.int.phx2.redhat.com> I've recently opened https://bugzilla.redhat.com/show_bug.cgi?id=790303 (Upgrade from first release doesn't work). Generally, the DB upgrade process tries to run only scripts with a higher number than the last script that ran. But if a new release includes dbscripts with a lower number, they'll never run, causing a DB upgrade issue. Such a situation is not rare, since we cherry-pick patches into a build branch, so the build branch might look like: a, b, c, f, h and the master might look like a, b, c, d, e, f, g, h, i So with the current code, only scripts higher than "h" will run. This issue is blocking the upgrade utility of ovirt-engine. Any estimation on a fix date? Thanks, Ofer Schreiber. From lpeer at redhat.com Tue Feb 14 07:34:10 2012 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 14 Feb 2012 09:34:10 +0200 Subject: [Engine-devel] DB Upgrade doesn't work In-Reply-To: <83b5823a-7b31-4207-8f29-45e2d3184302@zmail14.collab.prod.int.phx2.redhat.com> References: <83b5823a-7b31-4207-8f29-45e2d3184302@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F3A0E72.4080106@redhat.com> On 14/02/12 09:26, Ofer Schreiber wrote: > I've recently opened https://bugzilla.redhat.com/show_bug.cgi?id=790303 (Upgrade from first release doesn't work). 
> > Generally, the DB upgrade scripts tries to run only scripts with higher number than the last script that ran. > But if a new release includes dbscrips with lower number, they'll never run, and cause DB upgrade issue. > Such a situation is not rare, since we cherry-pick patches into a build branch, so the build branch might look like: > a, b, c, f, h > and the master might look like > a, b, c, d, e, f, g, h, i > > So with the current code, only scripts higher then "h" will run. > > This issue is blocking the upgrade utility of ovirt-engine. any estimination on a fix date? > No estimation yet. We need to rethink the upgrade flow as a whole. It will take some time. Livnat > Thanks, > Ofer Schreiber. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From oschreib at redhat.com Tue Feb 14 07:36:27 2012 From: oschreib at redhat.com (Ofer Schreiber) Date: Tue, 14 Feb 2012 02:36:27 -0500 (EST) Subject: [Engine-devel] DB Upgrade doesn't work In-Reply-To: <4F3A0E72.4080106@redhat.com> Message-ID: <8291bdaf-f59f-4320-a151-a64baa356641@zmail14.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 14/02/12 09:26, Ofer Schreiber wrote: > > I've recently opened > > https://bugzilla.redhat.com/show_bug.cgi?id=790303 (Upgrade from > > first release doesn't work). > > > > Generally, the DB upgrade scripts tries to run only scripts with > > higher number than the last script that ran. > > But if a new release includes dbscrips with lower number, they'll > > never run, and cause DB upgrade issue. > > Such a situation is not rare, since we cherry-pick patches into a > > build branch, so the build branch might look like: > > a, b, c, f, h > > and the master might look like > > a, b, c, d, e, f, g, h, i > > > > So with the current code, only scripts higher then "h" will run. > > > > This issue is blocking the upgrade utility of ovirt-engine. any > > estimination on a fix date? > > > > No estimation yet. > We need to rethink the upgrade flow as a whole. It will take some > time. > > Livnat We need some sort of estimation, as this issue might block further releases of oVirt. > > > Thanks, > > Ofer Schreiber. > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > From lpeer at redhat.com Tue Feb 14 07:41:24 2012 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 14 Feb 2012 09:41:24 +0200 Subject: [Engine-devel] DB Upgrade doesn't work In-Reply-To: <8291bdaf-f59f-4320-a151-a64baa356641@zmail14.collab.prod.int.phx2.redhat.com> References: <8291bdaf-f59f-4320-a151-a64baa356641@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <4F3A1024.40009@redhat.com> On 14/02/12 09:36, Ofer Schreiber wrote: > > > ----- Original Message ----- >> On 14/02/12 09:26, Ofer Schreiber wrote: >>> I've recently opened >>> https://bugzilla.redhat.com/show_bug.cgi?id=790303 (Upgrade from >>> first release doesn't work). >>> >>> Generally, the DB upgrade scripts tries to run only scripts with >>> higher number than the last script that ran. >>> But if a new release includes dbscrips with lower number, they'll >>> never run, and cause DB upgrade issue. 
>>> Such a situation is not rare, since we cherry-pick patches into a >>> build branch, so the build branch might look like: >>> a, b, c, f, h >>> and the master might look like >>> a, b, c, d, e, f, g, h, i >>> >>> So with the current code, only scripts higher then "h" will run. >>> >>> This issue is blocking the upgrade utility of ovirt-engine. any >>> estimination on a fix date? >>> >> >> No estimation yet. >> We need to rethink the upgrade flow as a whole. It will take some >> time. >> >> Livnat > > We need some sort of estimation, as this issue might block further releases of oVirt. > It should be within the next weeks. I don't have estimation because we haven't discussed the different solutions yet. Next oVirt release should be around May, it will be solved long before that so should not hold back the release. >> >>> Thanks, >>> Ofer Schreiber. >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> From oschreib at redhat.com Tue Feb 14 07:48:55 2012 From: oschreib at redhat.com (Ofer Schreiber) Date: Tue, 14 Feb 2012 02:48:55 -0500 (EST) Subject: [Engine-devel] DB Upgrade doesn't work In-Reply-To: <4F3A1024.40009@redhat.com> Message-ID: ----- Original Message ----- > On 14/02/12 09:36, Ofer Schreiber wrote: > > > > > > ----- Original Message ----- > >> On 14/02/12 09:26, Ofer Schreiber wrote: > >>> I've recently opened > >>> https://bugzilla.redhat.com/show_bug.cgi?id=790303 (Upgrade from > >>> first release doesn't work). > >>> > >>> Generally, the DB upgrade scripts tries to run only scripts with > >>> higher number than the last script that ran. > >>> But if a new release includes dbscrips with lower number, they'll > >>> never run, and cause DB upgrade issue. > >>> Such a situation is not rare, since we cherry-pick patches into a > >>> build branch, so the build branch might look like: > >>> a, b, c, f, h > >>> and the master might look like > >>> a, b, c, d, e, f, g, h, i > >>> > >>> So with the current code, only scripts higher then "h" will run. > >>> > >>> This issue is blocking the upgrade utility of ovirt-engine. any > >>> estimination on a fix date? > >>> > >> > >> No estimation yet. > >> We need to rethink the upgrade flow as a whole. It will take some > >> time. > >> > >> Livnat > > > > We need some sort of estimation, as this issue might block further > > releases of oVirt. > > > > It should be within the next weeks. I don't have estimation because > we > haven't discussed the different solutions yet. > > Next oVirt release should be around May, it will be solved long > before > that so should not hold back the release. Good to know, thanks. > > >> > >>> Thanks, > >>> Ofer Schreiber. 
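To make the failure mode concrete, the following self-contained Java sketch contrasts the current selection rule (run only scripts numbered above the highest applied one) with an applied-set rule, using the a..i example from this thread. It is illustrative only and is not the engine's actual upgrade code.

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Illustrative only: contrasts "run everything above the highest applied version"
// with "run everything not yet recorded as applied".
public final class UpgradeScriptSelection {

    // What was shipped and applied on the cherry-picked build branch.
    static final List<String> APPLIED = List.of("a", "b", "c", "f", "h");

    // What master now contains.
    static final List<String> MASTER =
            List.of("a", "b", "c", "d", "e", "f", "g", "h", "i");

    // Current behaviour: only scripts sorting after the highest applied one run,
    // so d, e and g are silently skipped.
    static Set<String> byHighestVersion() {
        String highest = new TreeSet<>(APPLIED).last();   // "h"
        Set<String> toRun = new LinkedHashSet<>();
        for (String script : MASTER) {
            if (script.compareTo(highest) > 0) {
                toRun.add(script);
            }
        }
        return toRun;                                      // [i]
    }

    // One possible fix: record every applied script (e.g. in a schema version table)
    // and run whatever is missing, in order.
    static Set<String> byAppliedSet() {
        Set<String> applied = new LinkedHashSet<>(APPLIED);
        Set<String> toRun = new LinkedHashSet<>();
        for (String script : MASTER) {
            if (!applied.contains(script)) {
                toRun.add(script);
            }
        }
        return toRun;                                      // [d, e, g, i]
    }

    public static void main(String[] args) {
        System.out.println("highest-version rule runs: " + byHighestVersion());
        System.out.println("applied-set rule runs:     " + byAppliedSet());
    }
}

With a, b, c, f, h applied, the highest-version rule runs only i and silently skips d, e and g, while tracking the full set of applied scripts picks up d, e, g and i. Whether the eventual fix records applied scripts or changes the numbering scheme is exactly the open design question referred to above.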
> >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > >> > > From ofrenkel at redhat.com Tue Feb 14 07:48:56 2012 From: ofrenkel at redhat.com (Omer Frenkel) Date: Tue, 14 Feb 2012 02:48:56 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3A0B2A.404@redhat.com> Message-ID: <50ca594f-4b80-4e3d-84bc-852fb6e864d0@ofrenkel.csb> ----- Original Message ----- > From: "Yair Zaslavsky" > To: engine-devel at ovirt.org > Sent: Tuesday, February 14, 2012 9:20:10 AM > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > On 02/14/2012 08:59 AM, Itamar Heim wrote: > > On 02/14/2012 08:57 AM, Livnat Peer wrote: > >> On 14/02/12 05:56, Itamar Heim wrote: > >>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: > >>>> Hi, > >>>> > >>>> Please review the plan document for autorecovery. > >>>> http://www.ovirt.org/wiki/Features/Autorecovery > >>> > >>> why would we disable auto recovery by default? it sounds like the > >>> preferred behavior? > >>> > >> > >> I think that by default Laszlo meant in the upgrade process to > >> maintain > >> current behavior. > >> > >> I agree that for new entities the default should be true. > > > > i think the only combination which will allow this is for db to > > default > > to false and code to default to true for this property? > Why can't we during upgrade process set to all existing entities in > DB > the value to false, but still have the column defined as "default > true"? why all the trouble? i think this field should be mandatory as any other field, user has to specify it during the entity creation, right where he provide the name and any other field for the new entity. > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Tue Feb 14 07:59:15 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 14 Feb 2012 09:59:15 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <50ca594f-4b80-4e3d-84bc-852fb6e864d0@ofrenkel.csb> References: <50ca594f-4b80-4e3d-84bc-852fb6e864d0@ofrenkel.csb> Message-ID: <4F3A1453.6040508@redhat.com> On 02/14/2012 09:48 AM, Omer Frenkel wrote: > > > ----- Original Message ----- >> From: "Yair Zaslavsky" >> To: engine-devel at ovirt.org >> Sent: Tuesday, February 14, 2012 9:20:10 AM >> Subject: Re: [Engine-devel] Autorecovery feature plan for review >> >> On 02/14/2012 08:59 AM, Itamar Heim wrote: >>> On 02/14/2012 08:57 AM, Livnat Peer wrote: >>>> On 14/02/12 05:56, Itamar Heim wrote: >>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>>>> Hi, >>>>>> >>>>>> Please review the plan document for autorecovery. >>>>>> http://www.ovirt.org/wiki/Features/Autorecovery >>>>> >>>>> why would we disable auto recovery by default? it sounds like the >>>>> preferred behavior? >>>>> >>>> >>>> I think that by default Laszlo meant in the upgrade process to >>>> maintain >>>> current behavior. >>>> >>>> I agree that for new entities the default should be true. >>> >>> i think the only combination which will allow this is for db to >>> default >>> to false and code to default to true for this property? 
>> Why can't we during upgrade process set to all existing entities in >> DB >> the value to false, but still have the column defined as "default >> true"? > > why all the trouble? i think this field should be mandatory as any other field, > user has to specify it during the entity creation, right where he provide the name and any other field for the new entity. Fine by me. > >> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> From yzaslavs at redhat.com Tue Feb 14 08:06:44 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 14 Feb 2012 10:06:44 +0200 Subject: [Engine-devel] Clone VM from snapshot feature Message-ID: <4F3A1614.9030405@redhat.com> Hi all, I modified the Wiki pages of this feature: http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot Comments are more than welcome Kind regards, Yair From ykaul at redhat.com Tue Feb 14 08:29:09 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 14 Feb 2012 10:29:09 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F3A1614.9030405@redhat.com> References: <4F3A1614.9030405@redhat.com> Message-ID: <4F3A1B55.1070007@redhat.com> On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: > Hi all, > I modified the Wiki pages of this feature: > > http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot > > http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot - Missing error handling. I hope all will goes well, of course. - Will you be copying the disks in parallel, or serially? - Too bad the disks have to be copied by the SPM. Not sure why, really. Same for the merge, which is not really mentioned where/how it's going to take place (VDSM-wise). - If the 'Disk1' , 'Disk2' are RAW, would be nice to have an option NOT to copy them. Especially as you have a snapshot on top of them. Y. > > Comments are more than welcome > > Kind regards, > Yair > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From yzaslavs at redhat.com Tue Feb 14 08:35:57 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 14 Feb 2012 10:35:57 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F3A1B55.1070007@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F3A1B55.1070007@redhat.com> Message-ID: <4F3A1CED.3060300@redhat.com> On 02/14/2012 10:29 AM, Yaniv Kaul wrote: > On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: >> Hi all, >> I modified the Wiki pages of this feature: >> >> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >> >> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot > > - Missing error handling. I hope all will goes well, of course. > - Will you be copying the disks in parallel, or serially? > - Too bad the disks have to be copied by the SPM. Not sure why, really. Typo, will be fixed. > Same for the merge, which is not really mentioned where/how it's going > to take place (VDSM-wise). > - If the 'Disk1' , 'Disk2' are RAW, would be nice to have an option NOT > to copy them. Especially as you have a snapshot on top of them. > Y. 
> >> >> Comments are more than welcome >> >> Kind regards, >> Yair >> >> >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Tue Feb 14 08:53:15 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 14 Feb 2012 10:53:15 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F3A1CED.3060300@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F3A1B55.1070007@redhat.com> <4F3A1CED.3060300@redhat.com> Message-ID: <4F3A20FB.1090104@redhat.com> On 02/14/2012 10:35 AM, Yair Zaslavsky wrote: > On 02/14/2012 10:29 AM, Yaniv Kaul wrote: >> On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: >>> Hi all, >>> I modified the Wiki pages of this feature: >>> >>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >>> >>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >> >> - Missing error handling. I hope all will goes well, of course. Will be added. Not sure though what can we do in case for example you fail to copy image N out of M , besides of course >> - Will you be copying the disks in parallel, or serially? CopyImage is an asycnrhonous verb that will be monitored by the AsyncTaskManager at Engine core. >> - Too bad the disks have to be copied by the SPM. Not sure why, really. > Typo, will be fixed. >> Same for the merge, which is not really mentioned where/how it's going >> to take place (VDSM-wise). The copy operation will perform collapse on destination. Maybe I do not understand your question here- please elaborate. >> - If the 'Disk1' , 'Disk2' are RAW, would be nice to have an option NOT >> to copy them. Especially as you have a snapshot on top of them. Please elaborate on that. >> Y. >> >>> >>> Comments are more than welcome >>> >>> Kind regards, >>> Yair >>> >>> >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ykaul at redhat.com Tue Feb 14 09:03:47 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 14 Feb 2012 11:03:47 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F3A20FB.1090104@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F3A1B55.1070007@redhat.com> <4F3A1CED.3060300@redhat.com> <4F3A20FB.1090104@redhat.com> Message-ID: <4F3A2373.6020507@redhat.com> On 02/14/2012 10:53 AM, Yair Zaslavsky wrote: > On 02/14/2012 10:35 AM, Yair Zaslavsky wrote: >> On 02/14/2012 10:29 AM, Yaniv Kaul wrote: >>> On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: >>>> Hi all, >>>> I modified the Wiki pages of this feature: >>>> >>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >>>> >>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >>> - Missing error handling. I hope all will goes well, of course. > Will be added. Not sure though what can we do in case for example you > fail to copy image N out of M , besides of course Since it's not clear that if you merge the snapshots regardless of the base image (if it's RAW), or you merge them all to one big image, I'm not sure if there are two processes here or not - I assume there are: copy and merge. Each can fail independently, and rollback is probably required? >>> - Will you be copying the disks in parallel, or serially? 
> CopyImage is an asycnrhonous verb that will be monitored by the > AsyncTaskManager at Engine core. Which means that if there are N disks you copy them in parallel or one by one? May make sense to do it depending on the storage domain - if it's the same for all or not, etc. An optimization, I guess. > >>> - Too bad the disks have to be copied by the SPM. Not sure why, really. >> Typo, will be fixed. >>> Same for the merge, which is not really mentioned where/how it's going >>> to take place (VDSM-wise). > The copy operation will perform collapse on destination. > Maybe I do not understand your question here- please elaborate. Will the merge of the snapshots be done by SPM or HSM? > >>> - If the 'Disk1' , 'Disk2' are RAW, would be nice to have an option NOT >>> to copy them. Especially as you have a snapshot on top of them. > Please elaborate on that. If you are going to merge snapshots into the base, not sure it needs to be copied first - I wonder if there's an option to collapse to a new destination. QEMU feature, I guess. Y. >>> Y. >>> >>>> Comments are more than welcome >>>> >>>> Kind regards, >>>> Yair >>>> >>>> >>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel From ykaul at redhat.com Tue Feb 14 09:45:21 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 14 Feb 2012 11:45:21 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3A1453.6040508@redhat.com> References: <50ca594f-4b80-4e3d-84bc-852fb6e864d0@ofrenkel.csb> <4F3A1453.6040508@redhat.com> Message-ID: <4F3A2D31.7060901@redhat.com> On 02/14/2012 09:59 AM, Yair Zaslavsky wrote: > On 02/14/2012 09:48 AM, Omer Frenkel wrote: >> >> ----- Original Message ----- >>> From: "Yair Zaslavsky" >>> To: engine-devel at ovirt.org >>> Sent: Tuesday, February 14, 2012 9:20:10 AM >>> Subject: Re: [Engine-devel] Autorecovery feature plan for review >>> >>> On 02/14/2012 08:59 AM, Itamar Heim wrote: >>>> On 02/14/2012 08:57 AM, Livnat Peer wrote: >>>>> On 14/02/12 05:56, Itamar Heim wrote: >>>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Please review the plan document for autorecovery. >>>>>>> http://www.ovirt.org/wiki/Features/Autorecovery >>>>>> why would we disable auto recovery by default? it sounds like the >>>>>> preferred behavior? >>>>>> >>>>> I think that by default Laszlo meant in the upgrade process to >>>>> maintain >>>>> current behavior. Why? Why not improve their user experience and provide them with such feature? Current behaviour sucks - as your system admin. >>>>> >>>>> I agree that for new entities the default should be true. >>>> i think the only combination which will allow this is for db to >>>> default >>>> to false and code to default to true for this property? >>> Why can't we during upgrade process set to all existing entities in >>> DB >>> the value to false, but still have the column defined as "default >>> true"? >> why all the trouble? i think this field should be mandatory as any other field, >> user has to specify it during the entity creation, right where he provide the name and any other field for the new entity. > Fine by me. I'm not sure I see the reason a user would want to turn it off, on a per-object basis. 
If it's in the 'Advanced' settings of a host/storage, fine, but otherwise, it's just another cryptic feature to turn on/off. Y. > >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From mlipchuk at redhat.com Tue Feb 14 09:44:36 2012 From: mlipchuk at redhat.com (Maor) Date: Tue, 14 Feb 2012 11:44:36 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F3A0AA7.9060903@redhat.com> References: <4F2AA89C.7090605@redhat.com> <4F37F0D0.7090504@redhat.com> <4F394C03.6070206@redhat.com> <4F3A0AA7.9060903@redhat.com> Message-ID: <4F3A2D04.3020300@redhat.com> On 02/14/2012 09:17 AM, Livnat Peer wrote: > On 13/02/12 19:44, Maor wrote: >> On 02/12/2012 07:03 PM, Livnat Peer wrote: >>> On 02/02/12 17:15, Maor wrote: >>>> Hello all, >>>> >>>> The shared raw disk feature description can be found under the following >>>> links: >>>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk >>>> http://www.ovirt.org/wiki/Features/SharedRawDisk >>>> >>>> Please feel free, to share your comments. >>>> >>>> Regards, >>>> Maor >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >>> Hi Maor, >>> >>> - "when taking a VM snapshot, a snapshot of the shared disk will not be >>> taken." >>> I think it is worth mentioning that the shared disk will be part of the >>> VM snapshot configuration. The disk will appear as unplugged. >> Agreed, I changed it to the following: >> when taking a vm snapshot, a snapshot of the shared disk should not be >> taken, although it will be part of the VM snapshot configuration and the >> disk will appear as unplugged. >>> >>> - Move VM is deprecated in 3.1. >> Right, I removed this anecdote from the wiki. >>> >>> - It seems from the wiki that shared disk is not supported for template >>> but is supported for VM pool. >>> I am not sure how can we do that? iirc we create pool from template. >> What I was thinking about, is that the administrator can take a VM from >> the pool and attach it a shared disk, after the VM was created (for >> testing). >> >> The motivation for adding shared disk was that each entity that can be >> added with a disk can also be added with shared disk. >> Today, Administrator can add a disk to a VM from pool, which might be >> wrong behaviour, so maybe its better not to support it... >>> >>> What is the complexity of supporting shared disk in Templates? off the >>> top of my head it seems like it is more complicated to block shared >>> disks in templates than to support it. What do you think? >> Implementation wize it might be less complex, the problem is the use >> cases it raises, >> some of them which I'm thinking about are: >> * If the disk will be deleted from the DC, should we remove it from the >> template? or leave an indication in the template that there was a shared >> disk there, maybe should not allow to delete the disk in the first >> place, until it is unattached from the template? 
> > Since template configuration is 'read-only' you can not change a disk to > be plugged or unplugged. > I would say you can not delete a disk that is part of a template > regardless if it is shared or not. So in that case template with shared disk, will block the user from removing the shared disk from the DC. Won't it will make the flow for the user a bit complicated. User who wants to remove the shared disk, will need to remove the VM's which are based on the template and then remove the template it self. > >> * What do we want to do when creating a template from VM with shared >> disk - Should User choose whether to create a template with/without the >> shared disk? >> > > If a user is creating a template from VM the configuration should be > identical to the VM. > >> Blocking shared disk from template means creating the template without >> the shared disk, the implementation for it is to check if the disk is >> shared or not. >> I think that if GUI will support attaching shared disk to multiple VMs, >> there is no strong use case for allowing adding shared disk to a template. > > I am not sure what the above comment means but remember that we have API > users as well as UI. > > I think that if we don't have a strong case for not supporting shared > disk in templates the default should be to support it. > >>> >>> Livnat >>> > From yzaslavs at redhat.com Tue Feb 14 10:32:43 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 14 Feb 2012 12:32:43 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F3A2373.6020507@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F3A1B55.1070007@redhat.com> <4F3A1CED.3060300@redhat.com> <4F3A20FB.1090104@redhat.com> <4F3A2373.6020507@redhat.com> Message-ID: <4F3A384B.6070904@redhat.com> On 02/14/2012 11:03 AM, Yaniv Kaul wrote: > On 02/14/2012 10:53 AM, Yair Zaslavsky wrote: >> On 02/14/2012 10:35 AM, Yair Zaslavsky wrote: >>> On 02/14/2012 10:29 AM, Yaniv Kaul wrote: >>>> On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: >>>>> Hi all, >>>>> I modified the Wiki pages of this feature: >>>>> >>>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >>>>> >>>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >>>> - Missing error handling. I hope all will goes well, of course. >> Will be added. Not sure though what can we do in case for example you >> fail to copy image N out of M , besides of course > > Since it's not clear that if you merge the snapshots regardless of the > base image (if it's RAW), or you merge them all to one big image, I'm > not sure if there are two processes here or not - I assume there are: > copy and merge. Each can fail independently, and rollback is probably > required? > >>>> - Will you be copying the disks in parallel, or serially? >> CopyImage is an asycnrhonous verb that will be monitored by the >> AsyncTaskManager at Engine core. > > Which means that if there are N disks you copy them in parallel or one > by one? May make sense to do it depending on the storage domain - if > it's the same for all or not, etc. An optimization, I guess. This engine core code that is required to launch VDS command + create a task for monitoring it takes less time than completion of the monitoring itself - so the part of lauch VDS command + create task for monitoring is serial, but the monitoring itself is performed periodically, according to the behavior of AsyncTaskManager. Engine-core is indifferent to whether the copies are performed concurrently or not in VDSM. 
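A sketch of the shape Yair describes, with made-up interface and method names (SpmClient, copyImage and getTaskStatus are stand-ins, not the real engine or VDSM API): the per-disk submission loop is short and serial, while completion is observed by a periodic poll, so the engine side does not depend on whether VDSM performs the copies concurrently.

    import java.util.*;
    import java.util.concurrent.*;

    public class CloneVmFromSnapshotSketch {

        enum TaskStatus { RUNNING, FINISHED, FAILED }

        // Hypothetical stand-in for the SPM/VDSM broker.
        interface SpmClient {
            UUID copyImage(UUID srcDisk, UUID dstDomain);   // returns an async task id
            TaskStatus getTaskStatus(UUID taskId);
        }

        private final SpmClient spm;
        private final Map<UUID, UUID> taskPerDisk = new ConcurrentHashMap<>();
        private final ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();

        CloneVmFromSnapshotSketch(SpmClient spm) { this.spm = spm; }

        void cloneDisks(List<UUID> disks, UUID dstDomain) {
            // Serial, cheap part: one copyImage submission (and task registration) per disk.
            for (UUID disk : disks) {
                taskPerDisk.put(disk, spm.copyImage(disk, dstDomain));
            }
            // Periodic part: poll all registered tasks until each finishes or fails.
            poller.scheduleWithFixedDelay(this::pollOnce, 2, 2, TimeUnit.SECONDS);
        }

        private void pollOnce() {
            taskPerDisk.forEach((disk, task) -> {
                TaskStatus status = spm.getTaskStatus(task);
                if (status != TaskStatus.RUNNING) {
                    System.out.println("disk " + disk + " ended with " + status);
                    taskPerDisk.remove(disk);   // a FAILED status would trigger cleanup in a real flow
                }
            });
            if (taskPerDisk.isEmpty()) {
                poller.shutdown();
            }
        }

        public static void main(String[] args) {
            SpmClient fake = new SpmClient() {
                public UUID copyImage(UUID srcDisk, UUID dstDomain) { return UUID.randomUUID(); }
                public TaskStatus getTaskStatus(UUID taskId) { return TaskStatus.FINISHED; }
            };
            new CloneVmFromSnapshotSketch(fake)
                    .cloneDisks(List.of(UUID.randomUUID(), UUID.randomUUID()), UUID.randomUUID());
        }
    }

In the real flow the poller role belongs to the AsyncTaskManager, and a failed copy would feed the rollback and error handling Yaniv is asking about.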
> >> >>>> - Too bad the disks have to be copied by the SPM. Not sure why, really. >>> Typo, will be fixed. >>>> Same for the merge, which is not really mentioned where/how it's going >>>> to take place (VDSM-wise). >> The copy operation will perform collapse on destination. >> Maybe I do not understand your question here- please elaborate. > > Will the merge of the snapshots be done by SPM or HSM? > >> >>>> - If the 'Disk1' , 'Disk2' are RAW, would be nice to have an option NOT >>>> to copy them. Especially as you have a snapshot on top of them. >> Please elaborate on that. > > If you are going to merge snapshots into the base, not sure it needs to > be copied first - I wonder if there's an option to collapse to a new > destination. QEMU feature, I guess. > Y. > >>>> Y. >>>> >>>>> Comments are more than welcome >>>>> >>>>> Kind regards, >>>>> Yair >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > From lhornyak at redhat.com Tue Feb 14 10:50:30 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Tue, 14 Feb 2012 05:50:30 -0500 (EST) Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: <194680b3-167f-4e7f-98ff-ae8e50c528e5@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <9ed2edad-0d4a-4d7e-9a53-bd6616d62393@zmail01.collab.prod.int.phx2.redhat.com> hi, I was playing with the quartz jobs in the backend and I thought this is an area where some simplification and/or cleanup would be useful. - SchedulerUtil interface would be nice to hide quartz from the rest of the code, but it very rarely used, the clients are bound to it's single implementation, SchedulerUtilQuartzImpl through it's getInstance() method. - It was designed to be a local EJB, Backend actually expects it to be injected. (this field is not used) - when scheduling a job, you call schedule...Job(Object instance, String methodName, ...) however, it is not the _methodname_ that the executor will look for - instead, it will check the OnTimerMethodAnnotation on all the methods. But this annotation has everywhere the methodName as value - JobWrapper actually iterates over all the methods to find the one with the right annotation So a quick simplification could be: - The annotation is not needed, it could be removed - JobWrapper could just getMethod(methodName, argClasses) instead of looking for the annotation in all of the methods - I am really not for factoryes, but if we want to separate the interface from the implementation, then probably a SchedulerUtilFactory could help here. The dummy implementation would do just the very same thing as the SchedulerUtilQuartzImpl.getInstance() - I would remove the reference to SchedulerUtil from Backend as well, since it is not used. Really _should_ the Backend class do any scheduling? Please share your thoughts. 
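A compact illustration of the two lookup strategies described above. The annotation below is a local stand-in rather than the engine's real OnTimerMethodAnnotation, and the signatures are simplified:

    import java.lang.annotation.*;
    import java.lang.reflect.Method;

    public class JobMethodLookup {

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.METHOD)
        @interface OnTimer { String value(); }

        // Today: scan every method and match the annotation value against methodName.
        static Method findByAnnotationScan(Object instance, String methodName) {
            for (Method m : instance.getClass().getMethods()) {
                OnTimer marker = m.getAnnotation(OnTimer.class);
                if (marker != null && marker.value().equals(methodName)) {
                    return m;
                }
            }
            throw new IllegalArgumentException("no @OnTimer method named " + methodName);
        }

        // Proposed: resolve the method directly from its name and argument types.
        static Method findByName(Object instance, String methodName, Class<?>... argClasses)
                throws NoSuchMethodException {
            return instance.getClass().getMethod(methodName, argClasses);
        }

        // Example target, the way a monitored service might expose a timer callback.
        public static class VdsRefresher {
            @OnTimer("onTimer")
            public void onTimer() { System.out.println("refreshing..."); }
        }

        public static void main(String[] args) throws Exception {
            VdsRefresher r = new VdsRefresher();
            findByAnnotationScan(r, "onTimer").invoke(r);
            findByName(r, "onTimer").invoke(r);
        }
    }

The getMethod() variant also fails at scheduling time if the name or argument types are wrong, instead of at execution time when the scan finds no matching annotated method.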
Thank you, Laszlo From lpeer at redhat.com Tue Feb 14 12:00:35 2012 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 14 Feb 2012 14:00:35 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3A2D31.7060901@redhat.com> References: <50ca594f-4b80-4e3d-84bc-852fb6e864d0@ofrenkel.csb> <4F3A1453.6040508@redhat.com> <4F3A2D31.7060901@redhat.com> Message-ID: <4F3A4CE3.7080907@redhat.com> On 14/02/12 11:45, Yaniv Kaul wrote: > On 02/14/2012 09:59 AM, Yair Zaslavsky wrote: >> On 02/14/2012 09:48 AM, Omer Frenkel wrote: >>> >>> ----- Original Message ----- >>>> From: "Yair Zaslavsky" >>>> To: engine-devel at ovirt.org >>>> Sent: Tuesday, February 14, 2012 9:20:10 AM >>>> Subject: Re: [Engine-devel] Autorecovery feature plan for review >>>> >>>> On 02/14/2012 08:59 AM, Itamar Heim wrote: >>>>> On 02/14/2012 08:57 AM, Livnat Peer wrote: >>>>>> On 14/02/12 05:56, Itamar Heim wrote: >>>>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> Please review the plan document for autorecovery. >>>>>>>> http://www.ovirt.org/wiki/Features/Autorecovery >>>>>>> why would we disable auto recovery by default? it sounds like the >>>>>>> preferred behavior? >>>>>>> >>>>>> I think that by default Laszlo meant in the upgrade process to >>>>>> maintain >>>>>> current behavior. > > Why? Why not improve their user experience and provide them with such > feature? Current behaviour sucks - as your system admin. > I don't have objections either way. Laszlo - let's update the wiki to upgrade by default to true, if we'll get good reason why not to upgrade to true then we can open it again for discussion. >>>>>> >>>>>> I agree that for new entities the default should be true. >>>>> i think the only combination which will allow this is for db to >>>>> default >>>>> to false and code to default to true for this property? >>>> Why can't we during upgrade process set to all existing entities in >>>> DB >>>> the value to false, but still have the column defined as "default >>>> true"? >>> why all the trouble? i think this field should be mandatory as any >>> other field, >>> user has to specify it during the entity creation, right where he >>> provide the name and any other field for the new entity. >> Fine by me. > > I'm not sure I see the reason a user would want to turn it off, on a > per-object basis. > If it's in the 'Advanced' settings of a host/storage, fine, but > otherwise, it's just another cryptic feature to turn on/off. > Y. 
> >> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From lhornyak at redhat.com Tue Feb 14 12:16:58 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Tue, 14 Feb 2012 07:16:58 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3A4CE3.7080907@redhat.com> Message-ID: ----- Original Message ----- > From: "Livnat Peer" > To: "Yaniv Kaul" > Cc: engine-devel at ovirt.org > Sent: Tuesday, February 14, 2012 1:00:35 PM > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > On 14/02/12 11:45, Yaniv Kaul wrote: > > On 02/14/2012 09:59 AM, Yair Zaslavsky wrote: > >> On 02/14/2012 09:48 AM, Omer Frenkel wrote: > >>> > >>> ----- Original Message ----- > >>>> From: "Yair Zaslavsky" > >>>> To: engine-devel at ovirt.org > >>>> Sent: Tuesday, February 14, 2012 9:20:10 AM > >>>> Subject: Re: [Engine-devel] Autorecovery feature plan for review > >>>> > >>>> On 02/14/2012 08:59 AM, Itamar Heim wrote: > >>>>> On 02/14/2012 08:57 AM, Livnat Peer wrote: > >>>>>> On 14/02/12 05:56, Itamar Heim wrote: > >>>>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: > >>>>>>>> Hi, > >>>>>>>> > >>>>>>>> Please review the plan document for autorecovery. > >>>>>>>> http://www.ovirt.org/wiki/Features/Autorecovery > >>>>>>> why would we disable auto recovery by default? it sounds like > >>>>>>> the > >>>>>>> preferred behavior? > >>>>>>> > >>>>>> I think that by default Laszlo meant in the upgrade process to > >>>>>> maintain > >>>>>> current behavior. > > > > Why? Why not improve their user experience and provide them with > > such > > feature? Current behaviour sucks - as your system admin. > > > > I don't have objections either way. > Laszlo - let's update the wiki to upgrade by default to true, if > we'll > get good reason why not to upgrade to true then we can open it again > for > discussion. So be it, I changed the wikipage. > > > >>>>>> > >>>>>> I agree that for new entities the default should be true. > >>>>> i think the only combination which will allow this is for db to > >>>>> default > >>>>> to false and code to default to true for this property? > >>>> Why can't we during upgrade process set to all existing entities > >>>> in > >>>> DB > >>>> the value to false, but still have the column defined as > >>>> "default > >>>> true"? > >>> why all the trouble? i think this field should be mandatory as > >>> any > >>> other field, > >>> user has to specify it during the entity creation, right where he > >>> provide the name and any other field for the new entity. > >> Fine by me. > > > > I'm not sure I see the reason a user would want to turn it off, on > > a > > per-object basis. > > If it's in the 'Advanced' settings of a host/storage, fine, but > > otherwise, it's just another cryptic feature to turn on/off. > > Y. 
> > > >> > >>>>> _______________________________________________ > >>>>> Engine-devel mailing list > >>>>> Engine-devel at ovirt.org > >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkolesni at redhat.com Tue Feb 14 12:21:07 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Tue, 14 Feb 2012 07:21:07 -0500 (EST) Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: <9ed2edad-0d4a-4d7e-9a53-bd6616d62393@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: > hi, > > I was playing with the quartz jobs in the backend and I thought this > is an area where some simplification and/or cleanup would be useful. > > - SchedulerUtil interface would be nice to hide quartz from the rest > of the code, but it very rarely used, the clients are bound to it's > single implementation, SchedulerUtilQuartzImpl through it's > getInstance() method. I think the whole class name is misleading, since usually when I imagine a utils class, it's a simple class that does some menial work for me in static methods, and not really calls anything else or even has an instance. Maybe the class can be renamed to just Scheduler, or ScheduleManager which will be more precise. > - It was designed to be a local EJB, Backend actually expects it to > be injected. (this field is not used) > - when scheduling a job, you call schedule...Job(Object instance, > String methodName, ...) however, it is not the _methodname_ that > the executor will look for > - instead, it will check the OnTimerMethodAnnotation on all the > methods. But this annotation has everywhere the methodName as value > - JobWrapper actually iterates over all the methods to find the one > with the right annotation > > So a quick simplification could be: > - The annotation is not needed, it could be removed > - JobWrapper could just getMethod(methodName, argClasses) instead of > looking for the annotation in all of the methods Sounds good, or maybe just keep the annotation and not the method name in the call/annotation since then if the method name changes it won't break and we can easily locate all jobs by searching for the annotation.. > - I am really not for factoryes, but if we want to separate the > interface from the implementation, then probably a > SchedulerUtilFactory could help here. The dummy implementation > would do just the very same thing as the > SchedulerUtilQuartzImpl.getInstance() > - I would remove the reference to SchedulerUtil from Backend as > well, since it is not used. Really _should_ the Backend class do > any scheduling? Backend does schedule at least one job in it's Initialize() method.. Maybe the class should be injected, but I don't know if that happens so maybe that's why it's unused. > > Please share your thoughts. 
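The variant Mike suggests (keep a marker annotation, drop the method-name string from the call) could look roughly like this; again the annotation and class names are made up, not engine code:

    import java.lang.annotation.*;
    import java.lang.reflect.Method;
    import java.util.*;
    import java.util.stream.Collectors;

    public class AnnotationOnlyScheduling {

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.METHOD)
        @interface TimerCallback { }

        // Resolve the single @TimerCallback method of the given instance.
        static Method timerCallbackOf(Object instance) {
            List<Method> marked = Arrays.stream(instance.getClass().getMethods())
                    .filter(m -> m.isAnnotationPresent(TimerCallback.class))
                    .collect(Collectors.toList());
            if (marked.size() != 1) {
                throw new IllegalStateException(instance.getClass().getName()
                        + " must declare exactly one @TimerCallback method, found " + marked.size());
            }
            return marked.get(0);
        }

        public static class DbUsersCacheRefresher {
            @TimerCallback
            public void refresh() { System.out.println("checking directory users..."); }
        }

        public static void main(String[] args) throws Exception {
            Object job = new DbUsersCacheRefresher();
            timerCallbackOf(job).invoke(job);   // no method-name string anywhere in the call
        }
    }

The trade-off is that a class can then expose only one timer callback per marker annotation (or the annotation needs a discriminator again), in exchange for rename-safe scheduling and an easy "find all jobs" search on the annotation.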
> > Thank you, > Laszlo > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From jhenner at redhat.com Tue Feb 14 12:57:55 2012 From: jhenner at redhat.com (Jaroslav Henner) Date: Tue, 14 Feb 2012 13:57:55 +0100 Subject: [Engine-devel] ovirtSDK: ObjectsFactory would bring ability to customize objects Message-ID: <4F3A5A53.2080400@redhat.com> Greetings. I'm an automation tester in charge of testing SDK, my problem is that I cannot influence the objects that are created without so called "monkey patching" (google that and you will see what it means) the ovirtsdk.infrastructure.brokers.*. I made some comments about this #782891, but I was not so clear there, so I hope this will be better: In [12]: api.datacenters.list()[0] Out[12]: You see that this returns an object that was declared somewhere in SDK. We have AFAIK no good way to say which object should be created. It would be good for us to be able to tell SDK: " My dear SDK, every time you are asked, please don't create and give me ovirtsdk.infrastructure.brokers.DataCenter, but something very similar: our.tests.infrastructure.brokers.DataCenter " This object would be a subclass SDK object. Than we would not have make another layer which would be converting between objects from ovirtsdk and objects we need, which will contain our own code (__eq__, __in__, str, ...). I hope this would be good for other SDK users as well. I think this could be achieved by having a overridable mapping: entitites_map = { 'DataCenters': ovirtsdk.api.DataCenters, # Use the default. 'DataCenters': our.tests.infrastructure.brokers.DataCenter, 'Clusters': our.tests.infrastructure.brokers.Clusters, 'Cluster': our.tests.infrastructure.brokers.Cluster } SDK would create the objects somehow like this: new_cluster = entities_map['Cluster'](param1="sdff", param2="dsfs", ...) Or even better if there was a Factory: class DefaultEntitiesFactory(object): def cluster(self, param1=None, param2=None, datacenter=...): cluster = ovirtsdk.infrastructure.brokers.Cluster(param1, param2, ...) ... return cluster def datacenter(self, param1=None, param2=None) dc = ovirtsdk.infrastructure.brokers.Datacenter(param1, param2, ...) ... return dc and there was, somewhere in the sdk: ovirtsdk..somemodule.ENTITIES_FACTORY = DefaultEntitiesFactory We could than switch the factory to something which fits better: ovirtsdk..somemodule.ENTITIES_FACTORY = MyBetterEntitiesFactory whereas the MyBetterEntitiesFactory would be a subclass: class MyBetterEntitiesFactory(DefaultEntitiesFactory): def cluster(self, param1=None, param2=None, datacenter=...): cluster = our.tests.infrastructure.brokers.Cluster(param1, param2, ...) do_some_fancy_cool_stuff(self, cluster) return cluster pass I hope I made myself clear this time. Jarda From yzaslavs at redhat.com Tue Feb 14 13:01:41 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 14 Feb 2012 15:01:41 +0200 Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: References: Message-ID: <4F3A5B35.3050902@redhat.com> On 02/14/2012 02:21 PM, Mike Kolesnik wrote: >> hi, >> >> I was playing with the quartz jobs in the backend and I thought this >> is an area where some simplification and/or cleanup would be useful. 
>> >> - SchedulerUtil interface would be nice to hide quartz from the rest >> of the code, but it very rarely used, the clients are bound to it's >> single implementation, SchedulerUtilQuartzImpl through it's >> getInstance() method. > > I think the whole class name is misleading, since usually when I imagine a utils class, it's a simple class that does some menial work for me in static methods, and not really calls anything else or even has an instance. +1 > > Maybe the class can be renamed to just Scheduler, or ScheduleManager which will be more precise. > >> - It was designed to be a local EJB, Backend actually expects it to >> be injected. (this field is not used) >> - when scheduling a job, you call schedule...Job(Object instance, >> String methodName, ...) however, it is not the _methodname_ that >> the executor will look for >> - instead, it will check the OnTimerMethodAnnotation on all the >> methods. But this annotation has everywhere the methodName as value >> - JobWrapper actually iterates over all the methods to find the one >> with the right annotation >> >> So a quick simplification could be: >> - The annotation is not needed, it could be removed >> - JobWrapper could just getMethod(methodName, argClasses) instead of >> looking for the annotation in all of the methods > > Sounds good, or maybe just keep the annotation and not the method name in the call/annotation since then if the method name changes it won't break and we can easily locate all jobs by searching for the annotation.. > >> - I am really not for factoryes, but if we want to separate the >> interface from the implementation, then probably a >> SchedulerUtilFactory could help here. The dummy implementation >> would do just the very same thing as the >> SchedulerUtilQuartzImpl.getInstance() >> - I would remove the reference to SchedulerUtil from Backend as >> well, since it is not used. Really _should_ the Backend class do >> any scheduling? > > Backend does schedule at least one job in it's Initialize() method.. Yes, we have the DbUsers cache manager that performs periodic checks for db users against AD/IPA. This scheduler should start upon @PostConstruct (or any logical equivalent). > Maybe the class should be injected, but I don't know if that happens so maybe that's why it's unused. > >> >> Please share your thoughts. >> >> Thank you, >> Laszlo >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From rgolan at redhat.com Tue Feb 14 13:35:29 2012 From: rgolan at redhat.com (Roy Golan) Date: Tue, 14 Feb 2012 08:35:29 -0500 (EST) Subject: [Engine-devel] bridgless networks In-Reply-To: <4F337D7B.5050808@redhat.com> Message-ID: <5fe81486-2785-48f0-89db-a02d5f3fd4af@zmail01.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Itamar Heim" > To: "Roy Golan" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 9, 2012 10:02:03 AM > Subject: Re: [Engine-devel] bridgless networks > > On 02/06/2012 04:47 PM, Roy Golan wrote: > > Hi All > > > > Lately I've been working on a design of bridge-less network feature > > in the engine. > > You can see it in > > http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks > > > > Please review the design. 
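Yair's point about starting the DbUsers refresh from @PostConstruct, sketched against the Quartz 2.x fluent API and plain EJB 3.1 annotations; the class and job names are invented and the engine's own scheduler wrapper is deliberately not shown:

    import javax.annotation.PostConstruct;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;

    import org.quartz.*;
    import org.quartz.impl.StdSchedulerFactory;

    @Singleton
    @Startup
    public class DirectoryUsersCacheManager {

        public static class DirectoryUsersRefreshJob implements Job {
            @Override
            public void execute(JobExecutionContext context) {
                // would re-validate engine DB users against AD/IPA here
            }
        }

        @PostConstruct
        void installRefreshJob() {
            try {
                Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
                scheduler.start();
                JobDetail job = JobBuilder.newJob(DirectoryUsersRefreshJob.class)
                        .withIdentity("dbUsersRefresh")
                        .build();
                Trigger trigger = TriggerBuilder.newTrigger()
                        .startNow()
                        .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                                .withIntervalInMinutes(60)
                                .repeatForever())
                        .build();
                scheduler.scheduleJob(job, trigger);
            } catch (SchedulerException e) {
                throw new IllegalStateException("could not install the refresh job", e);
            }
        }
    }

Whether each service installs its own job this way, or goes through the renamed Scheduler/SchedulerManager wrapper discussed above, is exactly the open question in this thread.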
> > Note, there are some open issues, you can find in the relevant > > section. > > Reviews and comments are very welcome. > > 1. validations > 1.1. do you block setting a logical network to don't allow running > VMs > if it has a vnic associated with it? > 1.2. do you check on import a vnic isn't connected to a logical > network > which doesn't allow running VMs? > 1.3. do you check when REST API tries to add/edit a vnic that the > chosen > logical network is allowed to run VMs? > > 2. changes > 2.1 can a logical network be changed between allow/disallow running > VMs? > 2.2 what's the flow when enabling running VMs? will the logical > network > become non-operational until all hosts are reconfigured with a bridge > (if applicable)? > what is the user flow to reconfigure the hosts (go one by one? do > what > (there is no change to host level config)? > 2.3 what's the flow to not allowing to run VMs (bridge-less) - no > need > to make the network non operational, but same question - what should > the > admin do to reconfigure the hosts (no host level config change is > needed > by him, just a reconfigure iiuc) > > Thanks, > Itamar > Since it will take some time till we'll add a type to a nic, the whole concept of enforcing bridging in the migration domain, namely the cluster, should be replaced with much more simple approach - set bridged true/false during the attach action on the host (i.e setupnetworks). This means there are no monitoring checks, no new fields to logical networks and no validations but migration might fail in case the target network is not bridged and the underlying nic is not vNic etc. Once we will support nic types it will be easy to add the ability to mark a network as "able to run VMs" to advice the attach nic action, based on the nic type to set a bridge or not. thoughts? From rgolan at redhat.com Tue Feb 14 13:47:30 2012 From: rgolan at redhat.com (Roy Golan) Date: Tue, 14 Feb 2012 08:47:30 -0500 (EST) Subject: [Engine-devel] network - UI Sync meeting Message-ID: The following meeting has been modified: Subject: network - UI Sync meeting Organizer: "Roy Golan" Location: asia-tlv at redhat.com Time: Wednesday, February 22, 2012, 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem [MODIFIED] Invitees: mkenneth at redhat.com; sgrinber at redhat.com; lpeer at redhat.com; dfediuck at redhat.com; drankevi at redhat.com; ecohen at redhat.com; iheim at redhat.com; ovedo at redhat.com; acathrow at redhat.com; drankevi at redhat.com; engine-devel at ovirt.org ... *~*~*~*~*~*~*~*~*~* Bridge ID: 1814335863 https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=1814335863 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: meeting.ics Type: text/calendar Size: 4055 bytes Desc: not available URL: From rgolan at redhat.com Tue Feb 14 13:48:36 2012 From: rgolan at redhat.com (Roy Golan) Date: Tue, 14 Feb 2012 08:48:36 -0500 (EST) Subject: [Engine-devel] Cancelled: network - UI Sync meeting Message-ID: <0fafbca9-9c62-408d-a029-f406acf8c09a@zmail01.collab.prod.int.phx2.redhat.com> The following meeting has been cancelled: Subject: network - UI Sync meeting Organizer: "Roy Golan" Location: asia-tlv at redhat.com Time: Wednesday, February 22, 2012, 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem Invitees: mkenneth at redhat.com; sgrinber at redhat.com; lpeer at redhat.com; dfediuck at redhat.com; drankevi at redhat.com; ecohen at redhat.com; iheim at redhat.com; ovedo at redhat.com; acathrow at redhat.com; drankevi at redhat.com; engine-devel at ovirt.org ... *~*~*~*~*~*~*~*~*~* Bridge ID: 1814335863 https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=1814335863 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 3987 bytes Desc: not available URL: From iheim at redhat.com Tue Feb 14 14:58:05 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Feb 2012 16:58:05 +0200 Subject: [Engine-devel] ovirtSDK: ObjectsFactory would bring ability to customize objects In-Reply-To: <4F3A5A53.2080400@redhat.com> References: <4F3A5A53.2080400@redhat.com> Message-ID: <4F3A767D.1050907@redhat.com> On 02/14/2012 02:57 PM, Jaroslav Henner wrote: > Greetings. > > I'm an automation tester in charge of testing SDK, my problem is that I > cannot influence the objects that are created without so called "monkey > patching" (google that and you will see what it means) the > ovirtsdk.infrastructure.brokers.*. I made some comments about this > #782891, but I was not so clear there, so I hope this will be better: > > > In [12]: api.datacenters.list()[0] > Out[12]: > > You see that this returns an object that was declared somewhere in SDK. > We have AFAIK no good way to say which object should be created. > > It would be good for us to be able to tell SDK: > " > My dear SDK, every time you are asked, please don't create and give me > ovirtsdk.infrastructure.brokers.DataCenter, but something very similar: > our.tests.infrastructure.brokers.DataCenter > " I'm not sure i understand - you want the SDK to return an object from a class it does not know (i.e., the SDK to return an object from your class)? how will this look like if the SDK was in another language (say java)? From rgolan at redhat.com Tue Feb 14 15:08:19 2012 From: rgolan at redhat.com (Roy Golan) Date: Tue, 14 Feb 2012 10:08:19 -0500 (EST) Subject: [Engine-devel] network - UI Sync meeting Message-ID: The following is a new meeting request: Subject: network - UI Sync meeting Organizer: "Roy Golan" Time: Monday, February 20, 2012, 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem Invitees: mkenneth at redhat.com; sgrinber at redhat.com; lpeer at redhat.com; dfediuck at redhat.com; drankevi at redhat.com; ecohen at redhat.com; iheim at redhat.com; ovedo at redhat.com; acathrow at redhat.com; engine-devel at ovirt.org; kroberts at redhat.com *~*~*~*~*~*~*~*~*~* Bridge ID: 1814335863 https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=1814335863 -------------- next part -------------- An HTML attachment was scrubbed... 
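On Itamar's question of how the pluggable factory would look in Java: a rough sketch with entirely hypothetical names (this is not the ovirt-engine-sdk API), where the SDK routes object construction through a replaceable factory and a test suite swaps in its own subclasses:

    public class SdkEntityFactorySketch {

        // --- what the SDK would ship ---------------------------------------

        public static class DataCenter {
            private final String id;
            public DataCenter(String id) { this.id = id; }
            public String getId() { return id; }
        }

        // Creation hook used everywhere the SDK turns XML into broker objects.
        public interface EntityFactory {
            DataCenter dataCenter(String id);
            // ... one method per generated broker type
        }

        public static class DefaultEntityFactory implements EntityFactory {
            @Override public DataCenter dataCenter(String id) { return new DataCenter(id); }
        }

        // Global, replaceable factory instance (a constructor or setter on the
        // Api entry point would work just as well).
        private static volatile EntityFactory factory = new DefaultEntityFactory();
        public static void setEntityFactory(EntityFactory f) { factory = f; }

        // Stand-in for the real response decoder.
        static DataCenter decodeDataCenter(String xmlId) {
            return factory.dataCenter(xmlId);
        }

        // --- what a test suite would plug in --------------------------------

        public static class TestDataCenter extends DataCenter {
            public TestDataCenter(String id) { super(id); }
            @Override public String toString() { return "TestDataCenter(" + getId() + ")"; }
            // equals/hashCode/assertion helpers would live here
        }

        public static void main(String[] args) {
            setEntityFactory(new DefaultEntityFactory() {
                @Override public DataCenter dataCenter(String id) { return new TestDataCenter(id); }
            });
            System.out.println(decodeDataCenter("dc-1"));   // prints TestDataCenter(dc-1)
        }
    }

The shape is the same abstract-factory idea Jaroslav describes for the Python SDK; only the override mechanism (subclass plus setter) is spelled out in Java here.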
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 3727 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 4071 bytes Desc: not available URL: From jhenner at redhat.com Tue Feb 14 16:01:09 2012 From: jhenner at redhat.com (Jaroslav Henner) Date: Tue, 14 Feb 2012 17:01:09 +0100 Subject: [Engine-devel] ovirtSDK: ObjectsFactory would bring ability to customize objects In-Reply-To: <4F3A767D.1050907@redhat.com> References: <4F3A5A53.2080400@redhat.com> <4F3A767D.1050907@redhat.com> Message-ID: <4F3A8545.9030207@redhat.com> On Tue 14 Feb 2012 03:58:05 PM CET, Itamar Heim wrote: > On 02/14/2012 02:57 PM, Jaroslav Henner wrote: >> Greetings. >> >> I'm an automation tester in charge of testing SDK, my problem is that I >> cannot influence the objects that are created without so called "monkey >> patching" (google that and you will see what it means) the >> ovirtsdk.infrastructure.brokers.*. I made some comments about this >> #782891, but I was not so clear there, so I hope this will be better: >> >> >> In [12]: api.datacenters.list()[0] >> Out[12]: >> >> You see that this returns an object that was declared somewhere in SDK. >> We have AFAIK no good way to say which object should be created. >> >> It would be good for us to be able to tell SDK: >> " >> My dear SDK, every time you are asked, please don't create and give me >> ovirtsdk.infrastructure.brokers.DataCenter, but something very similar: >> our.tests.infrastructure.brokers.DataCenter >> " > > I'm not sure i understand - you want the SDK to return an object from > a class it does not know (i.e., the SDK to return an object from your > class)? > how will this look like if the SDK was in another language (say java)? There would be some interface/abstract/concrete class DefaultFactory. This will define some common interface. There would be some setter inside SDK which would contain the instance of that DefaultFactory. SDK user would be able to set it to his own instance of (descendant) of DefaultFactory. From this point on, the objects created by SDK would be from the newly set factory -- customized. [http://en.wikipedia.org/wiki/Abstract_factory_pattern] From lhornyak at redhat.com Tue Feb 14 16:49:25 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Tue, 14 Feb 2012 11:49:25 -0500 (EST) Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: <4F3A5B35.3050902@redhat.com> Message-ID: ----- Original Message ----- > From: "Yair Zaslavsky" > To: engine-devel at ovirt.org > Sent: Tuesday, February 14, 2012 2:01:41 PM > Subject: Re: [Engine-devel] [backend] a little confusion about the quartz jobs > > On 02/14/2012 02:21 PM, Mike Kolesnik wrote: > >> hi, > >> > >> I was playing with the quartz jobs in the backend and I thought > >> this > >> is an area where some simplification and/or cleanup would be > >> useful. > >> > >> - SchedulerUtil interface would be nice to hide quartz from the > >> rest > >> of the code, but it very rarely used, the clients are bound to > >> it's > >> single implementation, SchedulerUtilQuartzImpl through it's > >> getInstance() method. > > > > I think the whole class name is misleading, since usually when I > > imagine a utils class, it's a simple class that does some menial > > work for me in static methods, and not really calls anything else > > or even has an instance. > +1 Agreed, I will rename it. 
> > > > Maybe the class can be renamed to just Scheduler, or > > ScheduleManager which will be more precise. > > > >> - It was designed to be a local EJB, Backend actually expects it > >> to > >> be injected. (this field is not used) > >> - when scheduling a job, you call schedule...Job(Object instance, > >> String methodName, ...) however, it is not the _methodname_ that > >> the executor will look for > >> - instead, it will check the OnTimerMethodAnnotation on all the > >> methods. But this annotation has everywhere the methodName as > >> value > >> - JobWrapper actually iterates over all the methods to find the > >> one > >> with the right annotation > >> > >> So a quick simplification could be: > >> - The annotation is not needed, it could be removed > >> - JobWrapper could just getMethod(methodName, argClasses) instead > >> of > >> looking for the annotation in all of the methods > > > > Sounds good, or maybe just keep the annotation and not the method > > name in the call/annotation since then if the method name changes > > it won't break and we can easily locate all jobs by searching for > > the annotation.. > > > >> - I am really not for factoryes, but if we want to separate the > >> interface from the implementation, then probably a > >> SchedulerUtilFactory could help here. The dummy implementation > >> would do just the very same thing as the > >> SchedulerUtilQuartzImpl.getInstance() > >> - I would remove the reference to SchedulerUtil from Backend as > >> well, since it is not used. Really _should_ the Backend class do > >> any scheduling? > > > > Backend does schedule at least one job in it's Initialize() > > method.. > Yes, we have the DbUsers cache manager that performs periodic checks > for > db users against AD/IPA. > This scheduler should start upon @PostConstruct (or any logical > equivalent). > Yes but I am not sure this should happen right there. All the other service installs it's own jobs, so maybe SessionDataContainer should do so as well. It would look more consistent. > > Maybe the class should be injected, but I don't know if that > > happens so maybe that's why it's unused. > > > >> > >> Please share your thoughts. 
> >> > >> Thank you, > >> Laszlo > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Tue Feb 14 17:44:21 2012 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 14 Feb 2012 19:44:21 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F3A2D04.3020300@redhat.com> References: <4F2AA89C.7090605@redhat.com> <4F37F0D0.7090504@redhat.com> <4F394C03.6070206@redhat.com> <4F3A0AA7.9060903@redhat.com> <4F3A2D04.3020300@redhat.com> Message-ID: <4F3A9D75.3050206@redhat.com> On 14/02/12 11:44, Maor wrote: > On 02/14/2012 09:17 AM, Livnat Peer wrote: >> On 13/02/12 19:44, Maor wrote: >>> On 02/12/2012 07:03 PM, Livnat Peer wrote: >>>> On 02/02/12 17:15, Maor wrote: >>>>> Hello all, >>>>> >>>>> The shared raw disk feature description can be found under the following >>>>> links: >>>>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk >>>>> http://www.ovirt.org/wiki/Features/SharedRawDisk >>>>> >>>>> Please feel free, to share your comments. >>>>> >>>>> Regards, >>>>> Maor >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>>> Hi Maor, >>>> >>>> - "when taking a VM snapshot, a snapshot of the shared disk will not be >>>> taken." >>>> I think it is worth mentioning that the shared disk will be part of the >>>> VM snapshot configuration. The disk will appear as unplugged. >>> Agreed, I changed it to the following: >>> when taking a vm snapshot, a snapshot of the shared disk should not be >>> taken, although it will be part of the VM snapshot configuration and the >>> disk will appear as unplugged. >>>> >>>> - Move VM is deprecated in 3.1. >>> Right, I removed this anecdote from the wiki. >>>> >>>> - It seems from the wiki that shared disk is not supported for template >>>> but is supported for VM pool. >>>> I am not sure how can we do that? iirc we create pool from template. >>> What I was thinking about, is that the administrator can take a VM from >>> the pool and attach it a shared disk, after the VM was created (for >>> testing). >>> >>> The motivation for adding shared disk was that each entity that can be >>> added with a disk can also be added with shared disk. >>> Today, Administrator can add a disk to a VM from pool, which might be >>> wrong behaviour, so maybe its better not to support it... >>>> >>>> What is the complexity of supporting shared disk in Templates? off the >>>> top of my head it seems like it is more complicated to block shared >>>> disks in templates than to support it. What do you think? >>> Implementation wize it might be less complex, the problem is the use >>> cases it raises, >>> some of them which I'm thinking about are: >>> * If the disk will be deleted from the DC, should we remove it from the >>> template? or leave an indication in the template that there was a shared >>> disk there, maybe should not allow to delete the disk in the first >>> place, until it is unattached from the template? 
>> >> Since template configuration is 'read-only' you can not change a disk to >> be plugged or unplugged. >> I would say you can not delete a disk that is part of a template >> regardless if it is shared or not. > So in that case template with shared disk, will block the user from > removing the shared disk from the DC. > Won't it will make the flow for the user a bit complicated. > User who wants to remove the shared disk, will need to remove the VM's > which are based on the template and then remove the template it self. I see the complication of delete, we have similar complications for delete regardless of shared disk (deleting disk with snapshots). Other than delete can you think of other complicated scenarios? >> >>> * What do we want to do when creating a template from VM with shared >>> disk - Should User choose whether to create a template with/without the >>> shared disk? >>> >> >> If a user is creating a template from VM the configuration should be >> identical to the VM. >> >>> Blocking shared disk from template means creating the template without >>> the shared disk, the implementation for it is to check if the disk is >>> shared or not. >>> I think that if GUI will support attaching shared disk to multiple VMs, >>> there is no strong use case for allowing adding shared disk to a template. >> >> I am not sure what the above comment means but remember that we have API >> users as well as UI. >> >> I think that if we don't have a strong case for not supporting shared >> disk in templates the default should be to support it. >> >>>> >>>> Livnat >>>> >> > From iheim at redhat.com Tue Feb 14 20:02:58 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Feb 2012 22:02:58 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <50ca594f-4b80-4e3d-84bc-852fb6e864d0@ofrenkel.csb> References: <50ca594f-4b80-4e3d-84bc-852fb6e864d0@ofrenkel.csb> Message-ID: <4F3ABDF2.1050701@redhat.com> On 02/14/2012 09:48 AM, Omer Frenkel wrote: > > > ----- Original Message ----- >> From: "Yair Zaslavsky" >> To: engine-devel at ovirt.org >> Sent: Tuesday, February 14, 2012 9:20:10 AM >> Subject: Re: [Engine-devel] Autorecovery feature plan for review >> >> On 02/14/2012 08:59 AM, Itamar Heim wrote: >>> On 02/14/2012 08:57 AM, Livnat Peer wrote: >>>> On 14/02/12 05:56, Itamar Heim wrote: >>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>>>> Hi, >>>>>> >>>>>> Please review the plan document for autorecovery. >>>>>> http://www.ovirt.org/wiki/Features/Autorecovery >>>>> >>>>> why would we disable auto recovery by default? it sounds like the >>>>> preferred behavior? >>>>> >>>> >>>> I think that by default Laszlo meant in the upgrade process to >>>> maintain >>>> current behavior. >>>> >>>> I agree that for new entities the default should be true. >>> >>> i think the only combination which will allow this is for db to >>> default >>> to false and code to default to true for this property? >> Why can't we during upgrade process set to all existing entities in >> DB >> the value to false, but still have the column defined as "default >> true"? > > why all the trouble? i think this field should be mandatory as any other field, > user has to specify it during the entity creation, right where he provide the name and any other field for the new entity. because this will break the API? what happens if user doesn't pass it, like they didn't so far? i.e., you need to decide on a default. 
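One way to settle the "you need to decide on a default" point without breaking existing API callers, sketched with invented class names (not the engine's real parameter or *_static classes): default to true in code and treat an omitted value as "not sent", while the upgrade script decides separately what pre-existing rows get.

    public class AutoRecoverableDefaultSketch {

        public static class HostConfig {
            // Boxed Boolean so "not sent" (null) can be told apart from an explicit false.
            private Boolean autoRecoverable = Boolean.TRUE;

            public void setAutoRecoverable(Boolean value) {
                // ignore null: an older client that omits the field keeps the default
                if (value != null) {
                    this.autoRecoverable = value;
                }
            }

            public boolean isAutoRecoverable() {
                return autoRecoverable;
            }
        }

        public static void main(String[] args) {
            HostConfig fromOldClient = new HostConfig();       // property never sent
            fromOldClient.setAutoRecoverable(null);
            System.out.println(fromOldClient.isAutoRecoverable());   // true, API unchanged

            HostConfig optedOut = new HostConfig();            // admin explicitly disables it
            optedOut.setAutoRecoverable(false);
            System.out.println(optedOut.isAutoRecoverable());        // false
        }
    }

Whatever the DB column default and the upgrade script write for existing entities simply overwrites this in-code default when the row is loaded, which is the combination discussed earlier in the thread.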
From iheim at redhat.com Tue Feb 14 20:03:17 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Feb 2012 22:03:17 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3A0B2A.404@redhat.com> References: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> <4F39DB8A.2060502@redhat.com> <4F3A05DB.8050307@redhat.com> <4F3A063B.1060309@redhat.com> <4F3A0B2A.404@redhat.com> Message-ID: <4F3ABE05.60905@redhat.com> On 02/14/2012 09:20 AM, Yair Zaslavsky wrote: > On 02/14/2012 08:59 AM, Itamar Heim wrote: >> On 02/14/2012 08:57 AM, Livnat Peer wrote: >>> On 14/02/12 05:56, Itamar Heim wrote: >>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>>> Hi, >>>>> >>>>> Please review the plan document for autorecovery. >>>>> http://www.ovirt.org/wiki/Features/Autorecovery >>>> >>>> why would we disable auto recovery by default? it sounds like the >>>> preferred behavior? >>>> >>> >>> I think that by default Laszlo meant in the upgrade process to maintain >>> current behavior. >>> >>> I agree that for new entities the default should be true. >> >> i think the only combination which will allow this is for db to default >> to false and code to default to true for this property? > Why can't we during upgrade process set to all existing entities in DB > the value to false, but still have the column defined as "default true"? because upgrade and clean install are running the same scripts? From iheim at redhat.com Tue Feb 14 20:19:47 2012 From: iheim at redhat.com (Itamar Heim) Date: Tue, 14 Feb 2012 22:19:47 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F3A9D75.3050206@redhat.com> References: <4F2AA89C.7090605@redhat.com> <4F37F0D0.7090504@redhat.com> <4F394C03.6070206@redhat.com> <4F3A0AA7.9060903@redhat.com> <4F3A2D04.3020300@redhat.com> <4F3A9D75.3050206@redhat.com> Message-ID: <4F3AC1E3.9090808@redhat.com> On 02/14/2012 07:44 PM, Livnat Peer wrote: > On 14/02/12 11:44, Maor wrote: >> On 02/14/2012 09:17 AM, Livnat Peer wrote: >>> On 13/02/12 19:44, Maor wrote: >>>> On 02/12/2012 07:03 PM, Livnat Peer wrote: >>>>> On 02/02/12 17:15, Maor wrote: >>>>>> Hello all, >>>>>> >>>>>> The shared raw disk feature description can be found under the following >>>>>> links: >>>>>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk >>>>>> http://www.ovirt.org/wiki/Features/SharedRawDisk >>>>>> >>>>>> Please feel free, to share your comments. >>>>>> >>>>>> Regards, >>>>>> Maor >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>>> Hi Maor, >>>>> >>>>> - "when taking a VM snapshot, a snapshot of the shared disk will not be >>>>> taken." >>>>> I think it is worth mentioning that the shared disk will be part of the >>>>> VM snapshot configuration. The disk will appear as unplugged. >>>> Agreed, I changed it to the following: >>>> when taking a vm snapshot, a snapshot of the shared disk should not be >>>> taken, although it will be part of the VM snapshot configuration and the >>>> disk will appear as unplugged. >>>>> >>>>> - Move VM is deprecated in 3.1. >>>> Right, I removed this anecdote from the wiki. >>>>> >>>>> - It seems from the wiki that shared disk is not supported for template >>>>> but is supported for VM pool. >>>>> I am not sure how can we do that? iirc we create pool from template. 
>>>> What I was thinking about, is that the administrator can take a VM from >>>> the pool and attach it a shared disk, after the VM was created (for >>>> testing). >>>> >>>> The motivation for adding shared disk was that each entity that can be >>>> added with a disk can also be added with shared disk. >>>> Today, Administrator can add a disk to a VM from pool, which might be >>>> wrong behaviour, so maybe its better not to support it... >>>>> >>>>> What is the complexity of supporting shared disk in Templates? off the >>>>> top of my head it seems like it is more complicated to block shared >>>>> disks in templates than to support it. What do you think? >>>> Implementation wize it might be less complex, the problem is the use >>>> cases it raises, >>>> some of them which I'm thinking about are: >>>> * If the disk will be deleted from the DC, should we remove it from the >>>> template? or leave an indication in the template that there was a shared >>>> disk there, maybe should not allow to delete the disk in the first >>>> place, until it is unattached from the template? >>> >>> Since template configuration is 'read-only' you can not change a disk to >>> be plugged or unplugged. >>> I would say you can not delete a disk that is part of a template >>> regardless if it is shared or not. >> So in that case template with shared disk, will block the user from >> removing the shared disk from the DC. >> Won't it will make the flow for the user a bit complicated. >> User who wants to remove the shared disk, will need to remove the VM's >> which are based on the template and then remove the template it self. > > I see the complication of delete, we have similar complications for > delete regardless of shared disk (deleting disk with snapshots). > > Other than delete can you think of other complicated scenarios? if it makes things more complex, why not postpone this part of the feature to a later phase? From yzaslavs at redhat.com Tue Feb 14 21:36:16 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 14 Feb 2012 23:36:16 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3ABE05.60905@redhat.com> References: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> <4F39DB8A.2060502@redhat.com> <4F3A05DB.8050307@redhat.com> <4F3A063B.1060309@redhat.com> <4F3A0B2A.404@redhat.com> <4F3ABE05.60905@redhat.com> Message-ID: <4F3AD3D0.9040202@redhat.com> On 02/14/2012 10:03 PM, Itamar Heim wrote: > On 02/14/2012 09:20 AM, Yair Zaslavsky wrote: >> On 02/14/2012 08:59 AM, Itamar Heim wrote: >>> On 02/14/2012 08:57 AM, Livnat Peer wrote: >>>> On 14/02/12 05:56, Itamar Heim wrote: >>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>>>> Hi, >>>>>> >>>>>> Please review the plan document for autorecovery. >>>>>> http://www.ovirt.org/wiki/Features/Autorecovery >>>>> >>>>> why would we disable auto recovery by default? it sounds like the >>>>> preferred behavior? >>>>> >>>> >>>> I think that by default Laszlo meant in the upgrade process to maintain >>>> current behavior. >>>> >>>> I agree that for new entities the default should be true. >>> >>> i think the only combination which will allow this is for db to default >>> to false and code to default to true for this property? >> Why can't we during upgrade process set to all existing entities in DB >> the value to false, but still have the column defined as "default true"? > > because upgrade and clean install are running the same scripts? I guess I still fail to understand. 
Scenarios (as both upgrade and clean install run the same scripts) a. In environment to be upgraded we have X entities that are non recoverable - after upgrade these X entities have the boolean flag set to false. New entities in the system will be created with auto recoverable set to true. b. In environment to be clean installed -we have 0 existing entities - after clean install all new entities in the system will be create with auto recoverable set to true. Will this be considered a bad behavior? From abaron at redhat.com Tue Feb 14 22:17:32 2012 From: abaron at redhat.com (Ayal Baron) Date: Tue, 14 Feb 2012 17:17:32 -0500 (EST) Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4F3AC1E3.9090808@redhat.com> Message-ID: <4ecd5a37-095a-4a4b-9aab-acb0e6f5ebf6@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 02/14/2012 07:44 PM, Livnat Peer wrote: > > On 14/02/12 11:44, Maor wrote: > >> On 02/14/2012 09:17 AM, Livnat Peer wrote: > >>> On 13/02/12 19:44, Maor wrote: > >>>> On 02/12/2012 07:03 PM, Livnat Peer wrote: > >>>>> On 02/02/12 17:15, Maor wrote: > >>>>>> Hello all, > >>>>>> > >>>>>> The shared raw disk feature description can be found under the > >>>>>> following > >>>>>> links: > >>>>>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk > >>>>>> http://www.ovirt.org/wiki/Features/SharedRawDisk > >>>>>> > >>>>>> Please feel free, to share your comments. > >>>>>> > >>>>>> Regards, > >>>>>> Maor > >>>>>> _______________________________________________ > >>>>>> Engine-devel mailing list > >>>>>> Engine-devel at ovirt.org > >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>>> > >>>>> Hi Maor, > >>>>> > >>>>> - "when taking a VM snapshot, a snapshot of the shared disk > >>>>> will not be > >>>>> taken." > >>>>> I think it is worth mentioning that the shared disk will be > >>>>> part of the > >>>>> VM snapshot configuration. The disk will appear as unplugged. > >>>> Agreed, I changed it to the following: > >>>> when taking a vm snapshot, a snapshot of the shared disk should > >>>> not be > >>>> taken, although it will be part of the VM snapshot configuration > >>>> and the > >>>> disk will appear as unplugged. > >>>>> > >>>>> - Move VM is deprecated in 3.1. > >>>> Right, I removed this anecdote from the wiki. > >>>>> > >>>>> - It seems from the wiki that shared disk is not supported for > >>>>> template > >>>>> but is supported for VM pool. > >>>>> I am not sure how can we do that? iirc we create pool from > >>>>> template. > >>>> What I was thinking about, is that the administrator can take a > >>>> VM from > >>>> the pool and attach it a shared disk, after the VM was created > >>>> (for > >>>> testing). > >>>> > >>>> The motivation for adding shared disk was that each entity that > >>>> can be > >>>> added with a disk can also be added with shared disk. > >>>> Today, Administrator can add a disk to a VM from pool, which > >>>> might be > >>>> wrong behaviour, so maybe its better not to support it... > >>>>> > >>>>> What is the complexity of supporting shared disk in Templates? > >>>>> off the > >>>>> top of my head it seems like it is more complicated to block > >>>>> shared > >>>>> disks in templates than to support it. What do you think? > >>>> Implementation wize it might be less complex, the problem is the > >>>> use > >>>> cases it raises, > >>>> some of them which I'm thinking about are: > >>>> * If the disk will be deleted from the DC, should we remove it > >>>> from the > >>>> template? 
or leave an indication in the template that there was > >>>> a shared > >>>> disk there, maybe should not allow to delete the disk in the > >>>> first > >>>> place, until it is unattached from the template? > >>> > >>> Since template configuration is 'read-only' you can not change a > >>> disk to > >>> be plugged or unplugged. > >>> I would say you can not delete a disk that is part of a template > >>> regardless if it is shared or not. > >> So in that case template with shared disk, will block the user > >> from > >> removing the shared disk from the DC. > >> Won't it will make the flow for the user a bit complicated. > >> User who wants to remove the shared disk, will need to remove the > >> VM's > >> which are based on the template and then remove the template it > >> self. > > > > I see the complication of delete, we have similar complications for > > delete regardless of shared disk (deleting disk with snapshots). There should be no such thing as 'delete disk with snapshots' When deleting a disk the only thing that should be deleted is topmost layer. The disk does not stop to exist in the history of the VM and when reverting to an old snapshot the disk should be there. Shared disks have no snapshots so there is no issue in this sense, however, if a template has a reference to the disk then deleting the disk would effectively modify the template configuration(?) and iirc engine does not allow changing template configuration post creation (but we had a very long thread on this already). > > > > Other than delete can you think of other complicated scenarios? > > if it makes things more complex, why not postpone this part of the > feature to a later phase? I think we're approaching this the wrong way. There are 2 possible problems we're trying to solve here and having the original shared disk as part of the template is the wrong solution for both. The first problem is - user wants to attach the shared disk to all VMs derived from the template - in this case the shared disk is *not* a part of the template and what is needed is an automatic way to configure newly created VMs that would allow to attach the shared disk. The second problem is - user wants the data on the shared disk to be part of the template - This is the general case for template seeing as a template is a *copy* of the original VM with stripped identity. in this case what is needed is a *copy* of the data and the fact that the disk is shared is coincidental. What the above means is that when you create a template from a VM with a shared disk the user should choose whether the operation would also create a new disk and copy the content of the shared disk to it (it is the user's responsibility to make sure that the data does not change while this operation is taking place, but we can help a little there) or exclude the shared disk from the template. 
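To make the two options above concrete, here is a purely illustrative sketch of the per-disk decision at "create template from VM" time. All names and structures are hypothetical (plain dicts standing in for engine entities); this is not actual engine code.

# Illustrative pseudo-flow only; every name here is hypothetical.

COPY_DATA = "copy"      # create a new disk and copy the shared disk's content into it
EXCLUDE = "exclude"     # leave the shared disk out of the template entirely


def disks_for_template(vm_disks, shared_disk_policy=EXCLUDE):
    """Decide which disks become part of a new template.

    Regular disks are copied as usual (identity stripped); for a shared disk
    the user chooses between copying its data or excluding it.
    """
    template_disks = []
    for disk in vm_disks:
        if not disk.get("shared"):
            template_disks.append({"copy_of": disk["id"]})
        elif shared_disk_policy == COPY_DATA:
            # the user must keep the shared data stable while the copy runs
            template_disks.append({"copy_of": disk["id"], "was_shared": True})
        # EXCLUDE: simply skip the shared disk
    return template_disks


vm = [{"id": "os-disk", "shared": False}, {"id": "quorum-disk", "shared": True}]
print(disks_for_template(vm))                                # shared disk excluded
print(disks_for_template(vm, shared_disk_policy=COPY_DATA))  # shared disk copied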
> _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From abaron at redhat.com Tue Feb 14 22:37:46 2012 From: abaron at redhat.com (Ayal Baron) Date: Tue, 14 Feb 2012 17:37:46 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3AD3D0.9040202@redhat.com> Message-ID: ----- Original Message ----- > On 02/14/2012 10:03 PM, Itamar Heim wrote: > > On 02/14/2012 09:20 AM, Yair Zaslavsky wrote: > >> On 02/14/2012 08:59 AM, Itamar Heim wrote: > >>> On 02/14/2012 08:57 AM, Livnat Peer wrote: > >>>> On 14/02/12 05:56, Itamar Heim wrote: > >>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: > >>>>>> Hi, > >>>>>> > >>>>>> Please review the plan document for autorecovery. > >>>>>> http://www.ovirt.org/wiki/Features/Autorecovery > >>>>> > >>>>> why would we disable auto recovery by default? it sounds like > >>>>> the > >>>>> preferred behavior? > >>>>> > >>>> > >>>> I think that by default Laszlo meant in the upgrade process to > >>>> maintain > >>>> current behavior. > >>>> > >>>> I agree that for new entities the default should be true. > >>> > >>> i think the only combination which will allow this is for db to > >>> default > >>> to false and code to default to true for this property? > >> Why can't we during upgrade process set to all existing entities > >> in DB > >> the value to false, but still have the column defined as "default > >> true"? > > > > because upgrade and clean install are running the same scripts? > I guess I still fail to understand. > Scenarios (as both upgrade and clean install run the same scripts) > a. In environment to be upgraded we have X entities that are non > recoverable - after upgrade these X entities have the boolean flag > set > to false. New entities in the system will be created with auto > recoverable set to true. > b. In environment to be clean installed -we have 0 existing entities > - > after clean install all new entities in the system will be create > with > auto recoverable set to true. > Will this be considered a bad behavior? > Why is there a field in the db for this? Why is there absolutely no description in the wiki what this feature *actually* does? Why is there a periodic process to do this? iiuc host/storage/whatever goes into non-operational mode due to monitoring of this object and after a certain amount of time (or immediately) where the object was reported to be in an error state it is moved to non-operational. Monitoring of these objects should just *not* stop and the second it is reported ok, move the object back to up/active/whatever state. What am I missing? > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From iheim at redhat.com Wed Feb 15 07:01:26 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 15 Feb 2012 09:01:26 +0200 Subject: [Engine-devel] ovirtSDK: ObjectsFactory would bring ability to customize objects In-Reply-To: <4F3A8545.9030207@redhat.com> References: <4F3A5A53.2080400@redhat.com> <4F3A767D.1050907@redhat.com> <4F3A8545.9030207@redhat.com> Message-ID: <4F3B5846.5000100@redhat.com> On 02/14/2012 06:01 PM, Jaroslav Henner wrote: > On Tue 14 Feb 2012 03:58:05 PM CET, Itamar Heim wrote: >> On 02/14/2012 02:57 PM, Jaroslav Henner wrote: >>> Greetings. 
>>> >>> I'm an automation tester in charge of testing SDK, my problem is that I >>> cannot influence the objects that are created without so called "monkey >>> patching" (google that and you will see what it means) the >>> ovirtsdk.infrastructure.brokers.*. I made some comments about this >>> #782891, but I was not so clear there, so I hope this will be better: >>> >>> >>> In [12]: api.datacenters.list()[0] >>> Out[12]: >>> >>> You see that this returns an object that was declared somewhere in SDK. >>> We have AFAIK no good way to say which object should be created. >>> >>> It would be good for us to be able to tell SDK: >>> " >>> My dear SDK, every time you are asked, please don't create and give me >>> ovirtsdk.infrastructure.brokers.DataCenter, but something very similar: >>> our.tests.infrastructure.brokers.DataCenter >>> " >> >> I'm not sure I understand - you want the SDK to return an object from >> a class it does not know (i.e., the SDK to return an object from your >> class)? >> how will this look if the SDK was in another language (say java)? > > There would be some interface/abstract/concrete class DefaultFactory. > This will define some common interface. There would be some setter > inside SDK which would contain the instance of that DefaultFactory. SDK > user would be able to set it to his own instance of (a descendant of) > DefaultFactory. From this point on, the objects created by SDK would be > from the newly set factory -- customized. > [http://en.wikipedia.org/wiki/Abstract_factory_pattern] > While I understand the benefit this will give you in extending the classes, it doesn't sound to me like a classical SDK, which just gives you the API at the OO class level? From ilvovsky at redhat.com Wed Feb 15 07:44:38 2012 From: ilvovsky at redhat.com (Igor Lvovsky) Date: Wed, 15 Feb 2012 02:44:38 -0500 (EST) Subject: [Engine-devel] Empty cdrom drive. In-Reply-To: <3df0ea66-d8a0-4772-9605-d394799561ee@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: Hi, I want to discuss $subject over email just to be sure that we are all on the same page. So, today in 3.0 vdsm has two ways to create a VM with a cdrom: 1. If RHEV-M asks to create a VM with a cdrom, vdsm just creates it 2. If RHEV-M doesn't ask to create a VM with a cdrom, vdsm still creates the VM with an empty cdrom. Vdsm creates this device as 'hdc' (IDE device, index 2), because of libvirt restrictions. In this case RHEV-M will be able to "insert" a cdrom on the fly with a changeCD request. In the new-style API we want to get rid of stupid scenario #2, because we want to be able to create a VM without a cdrom at all. It means that now we need to change our scenarios a little: 1. If RHEV-M asks to create a VM with a cdrom, vdsm just creates it 2. RHEV-M doesn't want to create a VM with a cdrom, but it wants to be able to "insert" a cdrom on the fly after this. Here we have two options: a. RHEV-M should pass an empty cdrom device on VM creation and use a regular changeCD after that b. RHEV-M can create the VM without a cdrom and add a cdrom later through the hotplugDisk command. Note: The new libvirt removes the previous restriction on cdrom devices. Now a cdrom can be created as an IDE or VIRTIO device at any index. It means we can easily hotplug it. Regards, Igor Lvovsky From ykaul at redhat.com Wed Feb 15 09:13:02 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 15 Feb 2012 11:13:02 +0200 Subject: [Engine-devel] Empty cdrom drive.
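Going back to the ObjectsFactory question above, here is a minimal sketch of the DefaultFactory-plus-setter idea. It only shows the abstract-factory shape; DefaultFactory, set_broker_factory and the stand-in DataCenter class are hypothetical names, not the real ovirtsdk internals.

# Minimal sketch of the proposed DefaultFactory/setter idea -- all names here
# (DefaultFactory, set_broker_factory, DataCenter) are hypothetical.

class DataCenter(object):
    """Stand-in for the broker class the SDK normally instantiates."""
    def __init__(self, raw):
        self.raw = raw


class DefaultFactory(object):
    """Default broker factory: creates the SDK's own classes."""
    def create_datacenter(self, raw):
        return DataCenter(raw)


class TestFactory(DefaultFactory):
    """A user-supplied descendant that returns customized objects."""
    class TracingDataCenter(DataCenter):
        def __repr__(self):
            return "<TracingDataCenter %r>" % (self.raw,)

    def create_datacenter(self, raw):
        return self.TracingDataCenter(raw)


_factory = DefaultFactory()


def set_broker_factory(factory):
    """The setter the SDK would expose; everything created afterwards goes
    through the user's factory instead of the default one."""
    global _factory
    _factory = factory


def list_datacenters(raw_results):
    # stands in for something like api.datacenters.list()
    return [_factory.create_datacenter(raw) for raw in raw_results]


print(list_datacenters([{"name": "dc1"}]))   # default brokers
set_broker_factory(TestFactory())
print(list_datacenters([{"name": "dc1"}]))   # [<TracingDataCenter {'name': 'dc1'}>]

With this shape the test code never needs to monkey-patch ovirtsdk.infrastructure.brokers; it only registers its own descendant of the default factory.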
In-Reply-To: References: Message-ID: <4F3B771E.6020402@redhat.com> On 02/15/2012 09:44 AM, Igor Lvovsky wrote: > Hi, > I want to discuss $subject on the email just to be sure that we all on the > same page. > > So, today in 3.0 vdsm has two ways to create VM with cdrom : > 1. If RHEV-M ask to create VM with cdrom, vdsm just create it > 2. RHEV-M doesn't ask to create VM with cdrom, vdsm still creates VM with > empty cdrom. Vdsm creates this device as 'hdc' (IDE device, index 2), > because of libvirt restrictions. > In this case RHEV-M will be able to "insert" cdrom on the fly with > changeCD request. > > In the new style API we want to get rid from stupid scenario #2, because > we want to be able to create VM without cdrom at all. > It means, that now we need to change a little our scenarios: > 1. If RHEV-M ask to create VM with cdrom, vdsm just create it > 2. RHEV-M doesn't want to create VM with cdrom, but it want to be able to > "insert" cdrom on the fly after this. Here we have two options: > a. RHEV-M should to pass empty cdrom device on VM creation and use > regular changeCD after that > b. RHEV-M can create VM without cdrom and add cdrom later through > hotplugDisk command. > > Note: The new libvirt remove previous restriction on cdrom devices. Now > cdrom can be created as IDE or VIRTIO device in any index. > It means we can easily hotplug it. I didn't know a CDROM can be a virtio device, but in any way it requires driver (which may not exist on Windows). I didn't know an IDE CDROM can be hot-plugged (only USB-based?), perhaps I'm wrong here. Y. > > > Regards, > Igor Lvovsky > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From rgolan at redhat.com Wed Feb 15 09:16:55 2012 From: rgolan at redhat.com (Roy Golan) Date: Wed, 15 Feb 2012 04:16:55 -0500 (EST) Subject: [Engine-devel] bridgless networks In-Reply-To: <4F30F696.5090409@redhat.com> Message-ID: ----- Original Message ----- > From: "Dor Laor" > To: "Roy Golan" > Cc: engine-devel at ovirt.org > Sent: Tuesday, February 7, 2012 12:01:58 PM > Subject: Re: [Engine-devel] bridgless networks > > On 02/06/2012 04:47 PM, Roy Golan wrote: > > Hi All > > > > Lately I've been working on a design of bridge-less network feature > > in the engine. > > You can see it in > > http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks > > > > Please review the design. > > Note, there are some open issues, you can find in the relevant > > section. > > Reviews and comments are very welcome. > > I'm not in the details of the above design but just please make sure > this change will be able to accommodate w/: > - Different bridging types: > - Today's Linux bridge > - openVswitch bridge > - macvtap bridges. > - pci device assignment w/o sriov > - virtio over macvtap over sriov virtual function > and you can mix any bridge type with any nic? > Cheers, > Dor > > > > > Thanks, > > Roy > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > From shuming at linux.vnet.ibm.com Wed Feb 15 09:17:25 2012 From: shuming at linux.vnet.ibm.com (Shu Ming) Date: Wed, 15 Feb 2012 17:17:25 +0800 Subject: [Engine-devel] engine.log was gone Message-ID: <4F3B7825.8060709@linux.vnet.ibm.com> Hi, I deleted the engine.log under /var/log/ovirt-engine by mistake. 
It seems that the log file can not be recreated whatever engine-clean and engine-setup were run. Even rebooting the server didn't work. Is there any quick way to bring my engine.log back? -- Shu Ming IBM China Systems and Technology Laboratory From abaron at redhat.com Wed Feb 15 09:23:54 2012 From: abaron at redhat.com (Ayal Baron) Date: Wed, 15 Feb 2012 04:23:54 -0500 (EST) Subject: [Engine-devel] Empty cdrom drive. In-Reply-To: <4F3B771E.6020402@redhat.com> Message-ID: <7f174e1b-4d5e-4125-ae79-f8da481d46cb@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 02/15/2012 09:44 AM, Igor Lvovsky wrote: > > Hi, > > I want to discuss $subject on the email just to be sure that we all > > on the > > same page. > > > > So, today in 3.0 vdsm has two ways to create VM with cdrom : > > 1. If RHEV-M ask to create VM with cdrom, vdsm just create it > > 2. RHEV-M doesn't ask to create VM with cdrom, vdsm still creates > > VM with > > empty cdrom. Vdsm creates this device as 'hdc' (IDE device, > > index 2), > > because of libvirt restrictions. > > In this case RHEV-M will be able to "insert" cdrom on the fly > > with > > changeCD request. > > > > In the new style API we want to get rid from stupid scenario #2, > > because > > we want to be able to create VM without cdrom at all. > > It means, that now we need to change a little our scenarios: > > 1. If RHEV-M ask to create VM with cdrom, vdsm just create it > > 2. RHEV-M doesn't want to create VM with cdrom, but it want to be > > able to > > "insert" cdrom on the fly after this. Here we have two > > options: > > a. RHEV-M should to pass empty cdrom device on VM creation and > > use > > regular changeCD after that > > b. RHEV-M can create VM without cdrom and add cdrom later > > through > > hotplugDisk command. > > > > Note: The new libvirt remove previous restriction on cdrom devices. > > Now > > cdrom can be created as IDE or VIRTIO device in any index. > > It means we can easily hotplug it. > > I didn't know a CDROM can be a virtio device, but in any way it > requires > driver (which may not exist on Windows). > I didn't know an IDE CDROM can be hot-plugged (only USB-based?), It can't be hotplugged. usb based is not ide (the ide device is the usb port, the cdrom is a usb device afaik). The point of this email is that since we want to support being able to start VMs *without* a cdrom then the default behaviour of attaching a cdrom device needs to be implemented in engine or we shall have a regression. In the new API (for stable device addresses) vdsm doesn't automatically attach a cdrom. > perhaps > I'm wrong here. > Y. > > > > > > > Regards, > > Igor Lvovsky > > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkenneth at redhat.com Wed Feb 15 09:29:58 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Wed, 15 Feb 2012 04:29:58 -0500 (EST) Subject: [Engine-devel] Empty cdrom drive. In-Reply-To: <7f174e1b-4d5e-4125-ae79-f8da481d46cb@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: ----- Original Message ----- > From: "Ayal Baron" > To: "Yaniv Kaul" > Cc: engine-devel at ovirt.org > Sent: Wednesday, February 15, 2012 11:23:54 AM > Subject: Re: [Engine-devel] Empty cdrom drive. 
> > > > ----- Original Message ----- > > On 02/15/2012 09:44 AM, Igor Lvovsky wrote: > > > Hi, > > > I want to discuss $subject on the email just to be sure that we > > > all > > > on the > > > same page. > > > > > > So, today in 3.0 vdsm has two ways to create VM with cdrom : > > > 1. If RHEV-M ask to create VM with cdrom, vdsm just create it > > > 2. RHEV-M doesn't ask to create VM with cdrom, vdsm still > > > creates > > > VM with > > > empty cdrom. Vdsm creates this device as 'hdc' (IDE device, > > > index 2), > > > because of libvirt restrictions. > > > In this case RHEV-M will be able to "insert" cdrom on the > > > fly > > > with > > > changeCD request. > > > > > > In the new style API we want to get rid from stupid scenario #2, > > > because > > > we want to be able to create VM without cdrom at all. > > > It means, that now we need to change a little our scenarios: > > > 1. If RHEV-M ask to create VM with cdrom, vdsm just create it > > > 2. RHEV-M doesn't want to create VM with cdrom, but it want to > > > be > > > able to > > > "insert" cdrom on the fly after this. Here we have two > > > options: > > > a. RHEV-M should to pass empty cdrom device on VM creation > > > and > > > use > > > regular changeCD after that > > > b. RHEV-M can create VM without cdrom and add cdrom later > > > through > > > hotplugDisk command. > > > > > > Note: The new libvirt remove previous restriction on cdrom > > > devices. > > > Now > > > cdrom can be created as IDE or VIRTIO device in any index. > > > It means we can easily hotplug it. > > > > I didn't know a CDROM can be a virtio device, but in any way it > > requires > > driver (which may not exist on Windows). > > I didn't know an IDE CDROM can be hot-plugged (only USB-based?), > > It can't be hotplugged. > usb based is not ide (the ide device is the usb port, the cdrom is a > usb device afaik). > > The point of this email is that since we want to support being able > to start VMs *without* a cdrom then the default behaviour of > attaching a cdrom device needs to be implemented in engine or we > shall have a regression. This is a regression that we can not live with... > In the new API (for stable device addresses) vdsm doesn't > automatically attach a cdrom. > > > perhaps > > I'm wrong here. > > Y. > > > > > > > > > > > Regards, > > > Igor Lvovsky > > > > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From rgolan at redhat.com Wed Feb 15 09:34:05 2012 From: rgolan at redhat.com (Roy Golan) Date: Wed, 15 Feb 2012 04:34:05 -0500 (EST) Subject: [Engine-devel] bridgless networks In-Reply-To: <4F2FED58.7010209@redhat.com> Message-ID: <9e8d1866-760d-4078-a5fa-71d692e6e06a@zmail01.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Yaniv Kaul" > To: "Roy Golan" > Cc: engine-devel at ovirt.org > Sent: Monday, February 6, 2012 5:10:16 PM > Subject: Re: [Engine-devel] bridgless networks > > On 02/06/2012 04:47 PM, Roy Golan wrote: > > Hi All > > > > Lately I've been working on a design of bridge-less network feature > > in the engine. 
> > You can see it in > > http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks > > > > Please review the design. > > Note, there are some open issues, you can find in the relevant > > section. > > Reviews and comments are very welcome. > > > > Thanks, > > Roy > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > 0. Fixed some typos in the wiki. There are others I couldn't > understand. > 1. "Also looking forward a capable of running VMs nics should be > bridged > on regular nics and un-bridged in case of dedicated special nics" - > don't understand what it means (English-wise too). correct, the phrasing is bad. I meant that when we are doing the actual attach, should we implicitly choose not to create a bridge on vNic or SRIOV ? Anyway for now the best and fastest approach I think is to give freedom of choice - the user will choose if the network should be bridged or not during the attach. > 2. "UI shall user shall" . > 3. Not sure the REST API is complete. How is the property set on the > logical network (upon creation or later) ? please see my former post. I suggest we won't have this property on the logical network at all. > 4. So, if there's no bridge on my bond, can I now use the bond > methods > that are incompatible with bridges and therefore we did not allow > them > until now? why not? is VDSM blocking those? > > Y. > > > From ykaul at redhat.com Wed Feb 15 09:38:27 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 15 Feb 2012 11:38:27 +0200 Subject: [Engine-devel] bridgless networks In-Reply-To: <9e8d1866-760d-4078-a5fa-71d692e6e06a@zmail01.collab.prod.int.phx2.redhat.com> References: <9e8d1866-760d-4078-a5fa-71d692e6e06a@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F3B7D13.1020200@redhat.com> On 02/15/2012 11:34 AM, Roy Golan wrote: > > ----- Original Message ----- >> From: "Yaniv Kaul" >> To: "Roy Golan" >> Cc: engine-devel at ovirt.org >> Sent: Monday, February 6, 2012 5:10:16 PM >> Subject: Re: [Engine-devel] bridgless networks >> >> On 02/06/2012 04:47 PM, Roy Golan wrote: >>> Hi All >>> >>> Lately I've been working on a design of bridge-less network feature >>> in the engine. >>> You can see it in >>> http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks >>> >>> Please review the design. >>> Note, there are some open issues, you can find in the relevant >>> section. >>> Reviews and comments are very welcome. >>> >>> Thanks, >>> Roy >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> 0. Fixed some typos in the wiki. There are others I couldn't >> understand. >> 1. "Also looking forward a capable of running VMs nics should be >> bridged >> on regular nics and un-bridged in case of dedicated special nics" - >> don't understand what it means (English-wise too). > correct, the phrasing is bad. I meant that when we are doing the actual attach, should > we implicitly choose not to create a bridge on vNic or SRIOV ? > Anyway for now the best and fastest approach I think is to give freedom of choice - the user will choose > if the network should be bridged or not during the attach. > >> 2. "UI shall user shall" . >> 3. Not sure the REST API is complete. How is the property set on the >> logical network (upon creation or later) ? > please see my former post. 
I suggest we won't have this property on the logical network at all. >> 4. So, if there's no bridge on my bond, can I now use the bond >> methods >> that are incompatible with bridges and therefore we did not allow >> them >> until now? > why not? is VDSM blocking those? Either UI or engine do not show bond modes that are incompatible with bridges. Perhaps it's not a limitation we need to worry about. Y. >> Y. >> >> >> From eedri at redhat.com Wed Feb 15 09:53:07 2012 From: eedri at redhat.com (Eyal Edri) Date: Wed, 15 Feb 2012 04:53:07 -0500 (EST) Subject: [Engine-devel] Unit tests fail on ovirt-engine Jenkins In-Reply-To: <13d2fc9b-e3e6-494e-8eff-1b41883dedbf@zmail17.collab.prod.int.phx2.redhat.com> Message-ID: <5d929d25-2b59-411d-914d-1b702993a5ef@zmail17.collab.prod.int.phx2.redhat.com> FYI, Some of the unit tests for ovirt-engine fails due to latest commits. Failures started after these commits: http://jenkins.ovirt.org/job/ovirt_engine_unit_tests/136/: Changes engine-core: Adding non-blocking VDSM api (detail / gitweb) engine-core: poll vds while setupnetwork is running (detail / gitweb) engine-core: add validations on VdsNetworkInterface (detail / gitweb) fix checkstyle import (detail / gitweb) dbscripts: fix adjustment for engine-db-install to support systemd (detail / gitweb) engine-core: setupNetworks api change (detail / gitweb) engine-core: clean and fix typos (detail / gitweb) Changes userportal: UiCommon model activation upon tab selection (detail / gitweb) rest-api: exposing setupnetworks action under /api/hosts/{id}/nics (detail / gitweb) Please try to run tests prior to pushing to master in order to avoid breaking the build. Eyal Edri oVirt Infrastructure http://jenkins.ovirt.org From mkenneth at redhat.com Wed Feb 15 10:09:48 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Wed, 15 Feb 2012 05:09:48 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: Message-ID: ----- Original Message ----- > From: "Ayal Baron" > To: "Yair Zaslavsky" > Cc: engine-devel at ovirt.org > Sent: Wednesday, February 15, 2012 12:37:46 AM > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > > > ----- Original Message ----- > > On 02/14/2012 10:03 PM, Itamar Heim wrote: > > > On 02/14/2012 09:20 AM, Yair Zaslavsky wrote: > > >> On 02/14/2012 08:59 AM, Itamar Heim wrote: > > >>> On 02/14/2012 08:57 AM, Livnat Peer wrote: > > >>>> On 14/02/12 05:56, Itamar Heim wrote: > > >>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: > > >>>>>> Hi, > > >>>>>> > > >>>>>> Please review the plan document for autorecovery. > > >>>>>> http://www.ovirt.org/wiki/Features/Autorecovery > > >>>>> > > >>>>> why would we disable auto recovery by default? it sounds like > > >>>>> the > > >>>>> preferred behavior? > > >>>>> > > >>>> > > >>>> I think that by default Laszlo meant in the upgrade process to > > >>>> maintain > > >>>> current behavior. > > >>>> > > >>>> I agree that for new entities the default should be true. > > >>> > > >>> i think the only combination which will allow this is for db to > > >>> default > > >>> to false and code to default to true for this property? > > >> Why can't we during upgrade process set to all existing entities > > >> in DB > > >> the value to false, but still have the column defined as > > >> "default > > >> true"? > > > > > > because upgrade and clean install are running the same scripts? > > I guess I still fail to understand. > > Scenarios (as both upgrade and clean install run the same scripts) > > a. 
In environment to be upgraded we have X entities that are non > > recoverable - after upgrade these X entities have the boolean flag > > set > > to false. New entities in the system will be created with auto > > recoverable set to true. > > b. In environment to be clean installed -we have 0 existing > > entities > > - > > after clean install all new entities in the system will be create > > with > > auto recoverable set to true. > > Will this be considered a bad behavior? > > > > Why is there a field in the db for this? > Why is there absolutely no description in the wiki what this feature > *actually* does? > Why is there a periodic process to do this? iiuc > host/storage/whatever goes into non-operational mode due to > monitoring of this object and after a certain amount of time (or > immediately) where the object was reported to be in an error state > it is moved to non-operational. > Monitoring of these objects should just *not* stop and the second it > is reported ok, move the object back to up/active/whatever state. > What am I missing? Let me see if I got it right: it means I've one process that will go over all the "down" objects every X seconds, and will issue "activate" action per object? this will be done sequentially I guess... I would reduce the audit log time to 1 hour. > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkenneth at redhat.com Wed Feb 15 10:20:53 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Wed, 15 Feb 2012 05:20:53 -0500 (EST) Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: <4ecd5a37-095a-4a4b-9aab-acb0e6f5ebf6@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: ----- Original Message ----- > From: "Ayal Baron" > To: "Itamar Heim" > Cc: engine-devel at ovirt.org > Sent: Wednesday, February 15, 2012 12:17:32 AM > Subject: Re: [Engine-devel] SharedRawDisk feature detail > > > > ----- Original Message ----- > > On 02/14/2012 07:44 PM, Livnat Peer wrote: > > > On 14/02/12 11:44, Maor wrote: > > >> On 02/14/2012 09:17 AM, Livnat Peer wrote: > > >>> On 13/02/12 19:44, Maor wrote: > > >>>> On 02/12/2012 07:03 PM, Livnat Peer wrote: > > >>>>> On 02/02/12 17:15, Maor wrote: > > >>>>>> Hello all, > > >>>>>> > > >>>>>> The shared raw disk feature description can be found under > > >>>>>> the > > >>>>>> following > > >>>>>> links: > > >>>>>> http://www.ovirt.org/wiki/Features/DetailedSharedRawDisk > > >>>>>> http://www.ovirt.org/wiki/Features/SharedRawDisk > > >>>>>> > > >>>>>> Please feel free, to share your comments. > > >>>>>> > > >>>>>> Regards, > > >>>>>> Maor > > >>>>>> _______________________________________________ > > >>>>>> Engine-devel mailing list > > >>>>>> Engine-devel at ovirt.org > > >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > > >>>>> > > >>>>> Hi Maor, > > >>>>> > > >>>>> - "when taking a VM snapshot, a snapshot of the shared disk > > >>>>> will not be > > >>>>> taken." > > >>>>> I think it is worth mentioning that the shared disk will be > > >>>>> part of the > > >>>>> VM snapshot configuration. The disk will appear as unplugged. 
> > >>>> Agreed, I changed it to the following: > > >>>> when taking a vm snapshot, a snapshot of the shared disk > > >>>> should > > >>>> not be > > >>>> taken, although it will be part of the VM snapshot > > >>>> configuration > > >>>> and the > > >>>> disk will appear as unplugged. > > >>>>> > > >>>>> - Move VM is deprecated in 3.1. > > >>>> Right, I removed this anecdote from the wiki. > > >>>>> > > >>>>> - It seems from the wiki that shared disk is not supported > > >>>>> for > > >>>>> template > > >>>>> but is supported for VM pool. > > >>>>> I am not sure how can we do that? iirc we create pool from > > >>>>> template. > > >>>> What I was thinking about, is that the administrator can take > > >>>> a > > >>>> VM from > > >>>> the pool and attach it a shared disk, after the VM was created > > >>>> (for > > >>>> testing). > > >>>> > > >>>> The motivation for adding shared disk was that each entity > > >>>> that > > >>>> can be > > >>>> added with a disk can also be added with shared disk. > > >>>> Today, Administrator can add a disk to a VM from pool, which > > >>>> might be > > >>>> wrong behaviour, so maybe its better not to support it... > > >>>>> > > >>>>> What is the complexity of supporting shared disk in > > >>>>> Templates? > > >>>>> off the > > >>>>> top of my head it seems like it is more complicated to block > > >>>>> shared > > >>>>> disks in templates than to support it. What do you think? > > >>>> Implementation wize it might be less complex, the problem is > > >>>> the > > >>>> use > > >>>> cases it raises, > > >>>> some of them which I'm thinking about are: > > >>>> * If the disk will be deleted from the DC, should we remove it > > >>>> from the > > >>>> template? or leave an indication in the template that there > > >>>> was > > >>>> a shared > > >>>> disk there, maybe should not allow to delete the disk in the > > >>>> first > > >>>> place, until it is unattached from the template? > > >>> > > >>> Since template configuration is 'read-only' you can not change > > >>> a > > >>> disk to > > >>> be plugged or unplugged. > > >>> I would say you can not delete a disk that is part of a > > >>> template > > >>> regardless if it is shared or not. > > >> So in that case template with shared disk, will block the user > > >> from > > >> removing the shared disk from the DC. > > >> Won't it will make the flow for the user a bit complicated. > > >> User who wants to remove the shared disk, will need to remove > > >> the > > >> VM's > > >> which are based on the template and then remove the template it > > >> self. > > > > > > I see the complication of delete, we have similar complications > > > for > > > delete regardless of shared disk (deleting disk with snapshots). > > There should be no such thing as 'delete disk with snapshots' > When deleting a disk the only thing that should be deleted is topmost > layer. > The disk does not stop to exist in the history of the VM and when > reverting to an old snapshot the disk should be there. > > Shared disks have no snapshots so there is no issue in this sense, > however, if a template has a reference to the disk then deleting the > disk would effectively modify the template configuration(?) and iirc > engine does not allow changing template configuration post creation > (but we had a very long thread on this already). > > > > > > > Other than delete can you think of other complicated scenarios? > > > > if it makes things more complex, why not postpone this part of the > > feature to a later phase? 
> > I think we're approaching this the wrong way. > There are 2 possible problems we're trying to solve here and having > the original shared disk as part of the template is the wrong > solution for both. > > The first problem is - user wants to attach the shared disk to all > VMs derived from the template - in this case the shared disk is > *not* a part of the template and what is needed is an automatic way > to configure newly created VMs that would allow to attach the shared > disk. My personal feeling is that this is the common use case. > > The second problem is - user wants the data on the shared disk to be > part of the template - > This is the general case for template seeing as a template is a > *copy* of the original VM with stripped identity. > in this case what is needed is a *copy* of the data and the fact that > the disk is shared is coincidental. > > What the above means is that when you create a template from a VM > with a shared disk the user should choose whether the operation > would also create a new disk and copy the content of the shared disk > to it (it is the user's responsibility to make sure that the data > does not change while this operation is taking place, but we can > help a little there) or exclude the shared disk from the template. Although this is a valid use case, I think this is very confusing and defeat the "notion" of shared. So I would defer on this one for now. > > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ykaul at redhat.com Wed Feb 15 10:23:22 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 15 Feb 2012 12:23:22 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3AD3D0.9040202@redhat.com> References: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> <4F39DB8A.2060502@redhat.com> <4F3A05DB.8050307@redhat.com> <4F3A063B.1060309@redhat.com> <4F3A0B2A.404@redhat.com> <4F3ABE05.60905@redhat.com> <4F3AD3D0.9040202@redhat.com> Message-ID: <4F3B879A.9010904@redhat.com> On 02/14/2012 11:36 PM, Yair Zaslavsky wrote: > On 02/14/2012 10:03 PM, Itamar Heim wrote: >> On 02/14/2012 09:20 AM, Yair Zaslavsky wrote: >>> On 02/14/2012 08:59 AM, Itamar Heim wrote: >>>> On 02/14/2012 08:57 AM, Livnat Peer wrote: >>>>> On 14/02/12 05:56, Itamar Heim wrote: >>>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Please review the plan document for autorecovery. >>>>>>> http://www.ovirt.org/wiki/Features/Autorecovery >>>>>> why would we disable auto recovery by default? it sounds like the >>>>>> preferred behavior? >>>>>> >>>>> I think that by default Laszlo meant in the upgrade process to maintain >>>>> current behavior. >>>>> >>>>> I agree that for new entities the default should be true. >>>> i think the only combination which will allow this is for db to default >>>> to false and code to default to true for this property? >>> Why can't we during upgrade process set to all existing entities in DB >>> the value to false, but still have the column defined as "default true"? >> because upgrade and clean install are running the same scripts? > I guess I still fail to understand. > Scenarios (as both upgrade and clean install run the same scripts) > a. 
In environment to be upgraded we have X entities that are non > recoverable - after upgrade these X entities have the boolean flag set > to false. New entities in the system will be created with auto > recoverable set to true. I still fail to understand why you 'punish' existing objects and not giving them the new feature enabled by default. Y. > b. In environment to be clean installed -we have 0 existing entities - > after clean install all new entities in the system will be create with > auto recoverable set to true. > Will this be considered a bad behavior? > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From dlaor at redhat.com Wed Feb 15 10:23:33 2012 From: dlaor at redhat.com (Dor Laor) Date: Wed, 15 Feb 2012 12:23:33 +0200 Subject: [Engine-devel] bridgless networks In-Reply-To: References: Message-ID: <4F3B87A5.7090704@redhat.com> On 02/15/2012 11:16 AM, Roy Golan wrote: > > > ----- Original Message ----- >> From: "Dor Laor" >> To: "Roy Golan" >> Cc: engine-devel at ovirt.org >> Sent: Tuesday, February 7, 2012 12:01:58 PM >> Subject: Re: [Engine-devel] bridgless networks >> >> On 02/06/2012 04:47 PM, Roy Golan wrote: >>> Hi All >>> >>> Lately I've been working on a design of bridge-less network feature >>> in the engine. >>> You can see it in >>> http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks >>> >>> Please review the design. >>> Note, there are some open issues, you can find in the relevant >>> section. >>> Reviews and comments are very welcome. >> >> I'm not in the details of the above design but just please make sure >> this change will be able to accommodate w/: >> - Different bridging types: >> - Today's Linux bridge >> - openVswitch bridge >> - macvtap bridges. >> - pci device assignment w/o sriov >> - virtio over macvtap over sriov virtual function >> > and you can mix any bridge type with any nic? Yes, virtio/e1000.rtl8139 are all good over any backend. >> Cheers, >> Dor >> >>> >>> Thanks, >>> Roy >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> From djasa at redhat.com Wed Feb 15 10:30:27 2012 From: djasa at redhat.com (David =?UTF-8?Q?Ja=C5=A1a?=) Date: Wed, 15 Feb 2012 11:30:27 +0100 Subject: [Engine-devel] engine.log was gone In-Reply-To: <4F3B7825.8060709@linux.vnet.ibm.com> References: <4F3B7825.8060709@linux.vnet.ibm.com> Message-ID: <1329301827.12032.836.camel@dhcp-29-7.brq.redhat.com> Shu Ming p??e v St 15. 02. 2012 v 17:17 +0800: > Hi, > I deleted the engine.log under /var/log/ovirt-engine by mistake. It > seems that the log file can not be recreated whatever engine-clean and > engine-setup were run. Even rebooting the server didn't work. Is there > any quick way to bring my engine.log back? > I would try to touch the file, modify permissions to match those of other log files in directory and then restore its selinux context. 
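A rough sketch of those steps, assuming the default /var/log/ovirt-engine layout: create the file empty, copy owner/group/mode from a sibling log file, then restore the SELinux context. It is shown in Python here; the same can be done from a shell with touch, chown, chmod and restorecon. Run as root.

# Recreate a deleted log file: "touch" it, match the permissions and ownership
# of another log in the same directory, then restore the SELinux context.
import os
import stat
import subprocess

LOG_DIR = "/var/log/ovirt-engine"
TARGET = os.path.join(LOG_DIR, "engine.log")

# 1. "touch" the file
open(TARGET, "a").close()

# 2. copy permissions and ownership from any other log file in the directory
sibling = next(os.path.join(LOG_DIR, name) for name in os.listdir(LOG_DIR)
               if name != "engine.log" and name.endswith(".log"))
st = os.stat(sibling)
os.chown(TARGET, st.st_uid, st.st_gid)
os.chmod(TARGET, stat.S_IMODE(st.st_mode))

# 3. restore the SELinux context
subprocess.call(["restorecon", "-v", TARGET])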
David -- David Ja?a, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 From dlaor at redhat.com Wed Feb 15 10:52:46 2012 From: dlaor at redhat.com (Dor Laor) Date: Wed, 15 Feb 2012 12:52:46 +0200 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <1328791859.12032.249.camel@dhcp-29-7.brq.redhat.com> References: <4F326164.6000005@redhat.com> <4F328856.9090808@redhat.com> <4F337BE8.4020607@redhat.com> <4F338453.4090106@redhat.com> <4F3384E2.7070407@redhat.com> <4F338C6C.9010705@redhat.com> <4F338CC8.8060208@redhat.com> <1328791859.12032.249.camel@dhcp-29-7.brq.redhat.com> Message-ID: <4F3B8E7E.5020503@redhat.com> On 02/09/2012 02:50 PM, David Ja?a wrote: > Itamar Heim p??e v ?t 09. 02. 2012 v 11:07 +0200: >> On 02/09/2012 11:05 AM, Hans de Goede wrote: >>> Hi, >>> >>> On 02/09/2012 09:33 AM, Itamar Heim wrote: >>>> On 02/09/2012 10:31 AM, Hans de Goede wrote: >>> >>> >>> >>>>>> so this means we need to ask the user for linux guests if they want >>>>>> single head or multiple heads when they choose multi monitor? >>>>> >>>>> We could ask the user, but I don't think that that is a good idea. >>>>> >>>>>> this will cause their (single) head to spin... >>>>> >>>>> With which you seem to agree :) >>>>> >>>>>> any better UX we can suggest users? >>>>> >>>>> Yes, no UI at all, the current solution using multiple single monitor >>>>> pci cards means using Xinerama, which disables Xrandr, and thus allows >>>>> no dynamic adjustment of the monitor settings of the guest, instead >>>>> an xorg.conf file must be written (the linux agent can generate one >>>>> based on the current client monitor info) and Xorg needs to be >>>>> restarted. >>>>> >>>>> This is the result of the multiple pci cards which each 1 monitor model >>>>> we've been using for windows guests being a poor match for Linux guests. >>>>> >>>>> So we are working on adding support to drive multiple monitors from a >>>>> single qxl pci device. This requires changes on both the host and >>>>> guest side, but if both sides support it this configuration is much >>>>> better, so IMHO ovirt should just automatically enable it >>>>> if both the host (the cluster) and the guest support it. >>>>> >>>>> On the guest side, this is the current status: >>>>> >>>>> RHEL<= 6.1 no multi monitor support >>>>> RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so 1 >>>>> monitor/card, multiple cards) >>>>> RHEL>= 6.? multi monitor support using a single card with multiple >>>>> outputs >>>>> >>>>> Just like when exactly the new multi mon support will be available >>>>> for guests, it is a similar question mark for when it will be >>>>> available for >>>>> the host. >>>> >>>> this is the ovirt mailing list, so upstream versions are more relevant >>>> here. >>>> in any case, I have the same issue with backward compatibilty. >>>> say you fix this in fedora 17. >>>> user started a guest VM when host was fedora 16. >>>> admin upgraded host and changed cluster level to utilize new features. >>>> suddenly on next boot guest will move from 4 heads to single head? I'm >>>> guessing it will break user configuration. >>>> i.e., user should be able to choose to move to utilize the new mode? >>> >>> I see this as something which gets decided at vm creation time, and then >>> stored in the vm config. 
So if the vm gets created with a guest OS which >>> does not support multiple monitors per qxl device, or when the cluster does >>> not support it, it uses the old setup with 1 card / monitor. Even if the >>> guest OS or the cluster gets upgraded later. >> >> so instead of letting user change this, we'd force this at vm creation >> time? I'm not sure this is "friendlier". > > I think that some history-watching logic& one UI bit could be the way > to go. The UI bit would be yet another select button that would let user > choose what graphic layout ("all monitors on single graphic card", "one > graphic card per monitor (legacy)"). The logic would be like this: > * pre-existing guest that now supports new layout in 3.1 cluster > * The guest uses 1 monitor, is swithed to 2+ --> new > * The guest uses 2+ monitor layout --> old, big fat > warning when changing to the new that user should wipe > xinerama configuration in the guest > * pre-existing guest in old or mixed cluster: > * guest uses 2+ monitors --> old > * guest is newly configured for 2+ monitors --> show > warning that user either has co configure xinerama or > use newer cluster --> old > * new guest in new cluster: > * --> new > * if user switches to old, show warning > * old guest in any type of cluster > * --> old > > This kind of behavior should provide sensible defaults, all valid > choices in all possible scenarios and it should not interfere too much > when admin chooses to do anything. In short, the same rule of the thumb applies to all of our virtual hardware: - new vm creation should use the greatest and latest virtual hardware version (if the current cluster allows it) - For existing VMs we should preserve their current virtual hardware set (-M flag in qemu machine type vocabulary, cluster terminology for ovirt). - Changing the virtual hardware by either changing existing devices, adding devices, changing pci slots, or changing the virtual hardware revision should done only by user consent. The later may have the exception of smart offline v2v tool. Dor > > David > >> ______________________________________ >> _________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > From abaron at redhat.com Wed Feb 15 11:19:48 2012 From: abaron at redhat.com (Ayal Baron) Date: Wed, 15 Feb 2012 06:19:48 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3B879A.9010904@redhat.com> Message-ID: <67f1779f-3c9b-4530-86aa-4278bb7c704d@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 02/14/2012 11:36 PM, Yair Zaslavsky wrote: > > On 02/14/2012 10:03 PM, Itamar Heim wrote: > >> On 02/14/2012 09:20 AM, Yair Zaslavsky wrote: > >>> On 02/14/2012 08:59 AM, Itamar Heim wrote: > >>>> On 02/14/2012 08:57 AM, Livnat Peer wrote: > >>>>> On 14/02/12 05:56, Itamar Heim wrote: > >>>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: > >>>>>>> Hi, > >>>>>>> > >>>>>>> Please review the plan document for autorecovery. > >>>>>>> http://www.ovirt.org/wiki/Features/Autorecovery > >>>>>> why would we disable auto recovery by default? it sounds like > >>>>>> the > >>>>>> preferred behavior? > >>>>>> > >>>>> I think that by default Laszlo meant in the upgrade process to > >>>>> maintain > >>>>> current behavior. > >>>>> > >>>>> I agree that for new entities the default should be true. > >>>> i think the only combination which will allow this is for db to > >>>> default > >>>> to false and code to default to true for this property? 
> >>> Why can't we during upgrade process set to all existing entities > >>> in DB > >>> the value to false, but still have the column defined as "default > >>> true"? > >> because upgrade and clean install are running the same scripts? > > I guess I still fail to understand. > > Scenarios (as both upgrade and clean install run the same scripts) > > a. In environment to be upgraded we have X entities that are non > > recoverable - after upgrade these X entities have the boolean flag > > set > > to false. New entities in the system will be created with auto > > recoverable set to true. > > I still fail to understand why you 'punish' existing objects and not > giving them the new feature enabled by default. This is not a feature, it's a bug! This should not be treated as a feature and this should not be configurable! Today an object moves to non-operational due to state reported by vdsm. The object should immediately return to up the moment vdsm reports the object as ok (this means that you don't stop monitoring just because there is an error). That's it. no db field and no nothing... This pertains to storage domains, network, host status, whatever. > Y. > > > b. In environment to be clean installed -we have 0 existing > > entities - > > after clean install all new entities in the system will be create > > with > > auto recoverable set to true. > > Will this be considered a bad behavior? > > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Wed Feb 15 11:21:37 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 15 Feb 2012 13:21:37 +0200 Subject: [Engine-devel] agenda for today's meeting Message-ID: <4F3B9541.5070002@redhat.com> Hi All, Agenda for this week meeting: - Implementation of bridged networks - Clone Vm from snapshot - Automatic recovery of entities Thanks, Livnat From ovedo at redhat.com Wed Feb 15 11:25:37 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Wed, 15 Feb 2012 06:25:37 -0500 (EST) Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <4F3B8E7E.5020503@redhat.com> Message-ID: <96683d4d-ebf3-4837-9eaf-09238a9cfbcb@zmail02.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Dor Laor" > To: spice-devel at lists.freedesktop.org, engine-devel at ovirt.org > Cc: "David Ja?a" , "Oved Ourfalli" , "Hans de Goede" > Sent: Wednesday, February 15, 2012 12:52:46 PM > Subject: Re: [Engine-devel] [Spice-devel] SPICE related features > > On 02/09/2012 02:50 PM, David Ja?a wrote: > > Itamar Heim p??e v ?t 09. 02. 2012 v 11:07 +0200: > >> On 02/09/2012 11:05 AM, Hans de Goede wrote: > >>> Hi, > >>> > >>> On 02/09/2012 09:33 AM, Itamar Heim wrote: > >>>> On 02/09/2012 10:31 AM, Hans de Goede wrote: > >>> > >>> > >>> > >>>>>> so this means we need to ask the user for linux guests if they > >>>>>> want > >>>>>> single head or multiple heads when they choose multi monitor? > >>>>> > >>>>> We could ask the user, but I don't think that that is a good > >>>>> idea. > >>>>> > >>>>>> this will cause their (single) head to spin... > >>>>> > >>>>> With which you seem to agree :) > >>>>> > >>>>>> any better UX we can suggest users? 
> >>>>> > >>>>> Yes, no UI at all, the current solution using multiple single > >>>>> monitor > >>>>> pci cards means using Xinerama, which disables Xrandr, and thus > >>>>> allows > >>>>> no dynamic adjustment of the monitor settings of the guest, > >>>>> instead > >>>>> an xorg.conf file must be written (the linux agent can generate > >>>>> one > >>>>> based on the current client monitor info) and Xorg needs to be > >>>>> restarted. > >>>>> > >>>>> This is the result of the multiple pci cards which each 1 > >>>>> monitor model > >>>>> we've been using for windows guests being a poor match for > >>>>> Linux guests. > >>>>> > >>>>> So we are working on adding support to drive multiple monitors > >>>>> from a > >>>>> single qxl pci device. This requires changes on both the host > >>>>> and > >>>>> guest side, but if both sides support it this configuration is > >>>>> much > >>>>> better, so IMHO ovirt should just automatically enable it > >>>>> if both the host (the cluster) and the guest support it. > >>>>> > >>>>> On the guest side, this is the current status: > >>>>> > >>>>> RHEL<= 6.1 no multi monitor support > >>>>> RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so 1 > >>>>> monitor/card, multiple cards) > >>>>> RHEL>= 6.? multi monitor support using a single card with > >>>>> multiple > >>>>> outputs > >>>>> > >>>>> Just like when exactly the new multi mon support will be > >>>>> available > >>>>> for guests, it is a similar question mark for when it will be > >>>>> available for > >>>>> the host. > >>>> > >>>> this is the ovirt mailing list, so upstream versions are more > >>>> relevant > >>>> here. > >>>> in any case, I have the same issue with backward compatibilty. > >>>> say you fix this in fedora 17. > >>>> user started a guest VM when host was fedora 16. > >>>> admin upgraded host and changed cluster level to utilize new > >>>> features. > >>>> suddenly on next boot guest will move from 4 heads to single > >>>> head? I'm > >>>> guessing it will break user configuration. > >>>> i.e., user should be able to choose to move to utilize the new > >>>> mode? > >>> > >>> I see this as something which gets decided at vm creation time, > >>> and then > >>> stored in the vm config. So if the vm gets created with a guest > >>> OS which > >>> does not support multiple monitors per qxl device, or when the > >>> cluster does > >>> not support it, it uses the old setup with 1 card / monitor. Even > >>> if the > >>> guest OS or the cluster gets upgraded later. > >> > >> so instead of letting user change this, we'd force this at vm > >> creation > >> time? I'm not sure this is "friendlier". > > > > I think that some history-watching logic& one UI bit could be the > > way > > to go. The UI bit would be yet another select button that would let > > user > > choose what graphic layout ("all monitors on single graphic card", > > "one > > graphic card per monitor (legacy)"). 
The logic would be like this:
> >       * pre-existing guest that now supports new layout in 3.1
> >         cluster
> >               * The guest uses 1 monitor, is switched to 2+ --> new
> >               * The guest uses 2+ monitor layout --> old, big fat
> >                 warning when changing to the new that user should wipe
> >                 xinerama configuration in the guest
> >       * pre-existing guest in old or mixed cluster:
> >               * guest uses 2+ monitors --> old
> >               * guest is newly configured for 2+ monitors --> show
> >                 warning that user either has to configure xinerama or
> >                 use newer cluster --> old
> >       * new guest in new cluster:
> >               * --> new
> >               * if user switches to old, show warning
> >       * old guest in any type of cluster
> >               * --> old
> >
> > This kind of behavior should provide sensible defaults, all valid
> > choices in all possible scenarios and it should not interfere too much
> > when admin chooses to do anything.
>
> In short, the same rule of thumb applies to all of our virtual
> hardware:
>  - new vm creation should use the greatest and latest virtual hardware
>    version (if the current cluster allows it)
>  - For existing VMs we should preserve their current virtual hardware
>    set (-M flag in qemu machine type vocabulary, cluster terminology
>    for ovirt).
>  - Changing the virtual hardware by either changing existing devices,
>    adding devices, changing pci slots, or changing the virtual
>    hardware revision should be done only by user consent.
>    The latter may have the exception of a smart offline v2v tool.
>
> Dor
>

I agree.
What I don't understand (not a question for you Dor, but more for the SPICE guys), though, is the requirement to support Xinerama configuration on the guest.
Today the ovirt-engine doesn't support using more than one monitor on Linux guests.
If someone did something which is not the standard, somewhere behind the scenes, the engine cannot be aware of that, thus it cannot take it into consideration.
Moreover, we don't change the device layout when we add support, as the guest had one display device with one head, and it will remain that way, unless he chooses to work with multiple monitors through the engine management system.

>
> >
> > David
> >
> >> _______________________________________________
> >> Engine-devel mailing list
> >> Engine-devel at ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/engine-devel
> >
>
> _______________________________________________
> Engine-devel mailing list
> Engine-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/engine-devel
>

From shuming at linux.vnet.ibm.com  Wed Feb 15 11:32:02 2012
From: shuming at linux.vnet.ibm.com (Shu Ming)
Date: Wed, 15 Feb 2012 19:32:02 +0800
Subject: [Engine-devel] engine.log was gone
In-Reply-To: <1329301827.12032.836.camel@dhcp-29-7.brq.redhat.com>
References: <4F3B7825.8060709@linux.vnet.ibm.com>
	<1329301827.12032.836.camel@dhcp-29-7.brq.redhat.com>
Message-ID: <4F3B97B2.7030904@linux.vnet.ibm.com>

On 2012-2-15 18:30, David Jaša wrote:
> Shu Ming wrote on Wed 15. 02. 2012 at 17:17 +0800:
>> Hi,
>> I deleted the engine.log under /var/log/ovirt-engine by mistake. It
>> seems that the log file cannot be recreated even though engine-clean and
>> engine-setup were run. Even rebooting the server didn't work. Is there
>> any quick way to bring my engine.log back?
>>
> I would try to touch the file, modify permissions to match those of
> other log files in directory and then restore its selinux context.
That is what I did. But I don't know how to restore its selinux
context, do you have an example?
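
For illustration, the steps David describes could look like the shell sketch
below. The sibling log file used as a reference and the exact paths are
assumptions, not something stated in the thread:

    # recreate the missing log file
    touch /var/log/ovirt-engine/engine.log
    # copy owner/group and mode from another log file in the same directory
    # (assumption: something like server.log is present there)
    chown --reference=/var/log/ovirt-engine/server.log /var/log/ovirt-engine/engine.log
    chmod --reference=/var/log/ovirt-engine/server.log /var/log/ovirt-engine/engine.log
    # restore the SELinux label from the file-context policy
    restorecon -Fvv /var/log/ovirt-engine/engine.log
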
> > David > -- Shu Ming IBM China Systems and Technology Laboratory From lpeer at redhat.com Wed Feb 15 11:33:52 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 15 Feb 2012 13:33:52 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3B879A.9010904@redhat.com> References: <172cbbe7-2473-4e67-b435-a23be583e522@zmail01.collab.prod.int.phx2.redhat.com> <4F39DB8A.2060502@redhat.com> <4F3A05DB.8050307@redhat.com> <4F3A063B.1060309@redhat.com> <4F3A0B2A.404@redhat.com> <4F3ABE05.60905@redhat.com> <4F3AD3D0.9040202@redhat.com> <4F3B879A.9010904@redhat.com> Message-ID: <4F3B9820.1020808@redhat.com> On 15/02/12 12:23, Yaniv Kaul wrote: > On 02/14/2012 11:36 PM, Yair Zaslavsky wrote: >> On 02/14/2012 10:03 PM, Itamar Heim wrote: >>> On 02/14/2012 09:20 AM, Yair Zaslavsky wrote: >>>> On 02/14/2012 08:59 AM, Itamar Heim wrote: >>>>> On 02/14/2012 08:57 AM, Livnat Peer wrote: >>>>>> On 14/02/12 05:56, Itamar Heim wrote: >>>>>>> On 02/13/2012 12:32 PM, Laszlo Hornyak wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> Please review the plan document for autorecovery. >>>>>>>> http://www.ovirt.org/wiki/Features/Autorecovery >>>>>>> why would we disable auto recovery by default? it sounds like the >>>>>>> preferred behavior? >>>>>>> >>>>>> I think that by default Laszlo meant in the upgrade process to >>>>>> maintain >>>>>> current behavior. >>>>>> >>>>>> I agree that for new entities the default should be true. >>>>> i think the only combination which will allow this is for db to >>>>> default >>>>> to false and code to default to true for this property? >>>> Why can't we during upgrade process set to all existing entities in DB >>>> the value to false, but still have the column defined as "default >>>> true"? >>> because upgrade and clean install are running the same scripts? >> I guess I still fail to understand. >> Scenarios (as both upgrade and clean install run the same scripts) >> a. In environment to be upgraded we have X entities that are non >> recoverable - after upgrade these X entities have the boolean flag set >> to false. New entities in the system will be created with auto >> recoverable set to true. > > I still fail to understand why you 'punish' existing objects and not > giving them the new feature enabled by default. > Y. We agreed that users will get by default the auto-recovery feature (wiki is updated accordingly). The discussion above is theoretical about setting different values during upgrade and setting default for new entities. > >> b. In environment to be clean installed -we have 0 existing entities - >> after clean install all new entities in the system will be create with >> auto recoverable set to true. >> Will this be considered a bad behavior? 
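
As an illustration of the upgrade approach being debated above (column default
true, pre-existing rows flipped to false), a rough sketch; the database, table
and column names are invented for the example and do not come from the actual
upgrade scripts. On a clean install the UPDATE simply matches zero rows, so the
same script can run in both cases:

    # assumption: engine database name and table/column names are examples only
    psql -U postgres -d engine <<'SQL'
    -- new entities pick up the default automatically
    ALTER TABLE business_entity ADD COLUMN auto_recoverable BOOLEAN NOT NULL DEFAULT TRUE;
    -- entities that existed before the upgrade keep today's behavior
    UPDATE business_entity SET auto_recoverable = FALSE;
    SQL
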
>> >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ykaul at redhat.com Wed Feb 15 11:35:56 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 15 Feb 2012 13:35:56 +0200 Subject: [Engine-devel] agenda for today's meeting In-Reply-To: <4F3B9541.5070002@redhat.com> References: <4F3B9541.5070002@redhat.com> Message-ID: <4F3B989C.6090805@redhat.com> On 02/15/2012 01:21 PM, Livnat Peer wrote: > Hi All, > Agenda for this week meeting: > > - Implementation of bridged networks > - Clone Vm from snapshot > - Automatic recovery of entities > > Thanks, Livnat > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel If there's time, I'd be happy to add another topic to the agenda: - upstream stabilization (especially considering the recent failures in unit tests) Y. From lhornyak at redhat.com Wed Feb 15 11:36:59 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Wed, 15 Feb 2012 06:36:59 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <67f1779f-3c9b-4530-86aa-4278bb7c704d@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: Hi Ayal, ----- Original Message ----- > From: "Ayal Baron" > To: "Yaniv Kaul" > Cc: engine-devel at ovirt.org > Sent: Wednesday, February 15, 2012 12:19:48 PM > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > > > > > I still fail to understand why you 'punish' existing objects and > > not > > giving them the new feature enabled by default. > > This is not a feature, it's a bug! Whatever we call it, it is a change in behavior. We agreed that it will be enabled for all existing objects by default. http://globalnerdy.com/wordpress/wp-content/uploads/2007/12/bug_vs_feature.gif > This should not be treated as a feature and this should not be > configurable! I can imagine some situations when I would not like the autorecovery to happen, but if everyone agrees not to make it configurable, I will just remove it from my patchset. > Today an object moves to non-operational due to state reported by > vdsm. The object should immediately return to up the moment vdsm > reports the object as ok (this means that you don't stop monitoring > just because there is an error). > That's it. no db field and no nothing... > This pertains to storage domains, network, host status, whatever. > > > Y. > > > > > b. In environment to be clean installed -we have 0 existing > > > entities - > > > after clean install all new entities in the system will be create > > > with > > > auto recoverable set to true. > > > Will this be considered a bad behavior? 
> > > > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From eedri at redhat.com Wed Feb 15 11:39:30 2012 From: eedri at redhat.com (Eyal Edri) Date: Wed, 15 Feb 2012 06:39:30 -0500 (EST) Subject: [Engine-devel] agenda for today's meeting In-Reply-To: <4F3B989C.6090805@redhat.com> Message-ID: ----- Original Message ----- > From: "Yaniv Kaul" > To: "Livnat Peer" > Cc: engine-devel at ovirt.org > Sent: Wednesday, February 15, 2012 1:35:56 PM > Subject: Re: [Engine-devel] agenda for today's meeting > > On 02/15/2012 01:21 PM, Livnat Peer wrote: > > Hi All, > > Agenda for this week meeting: > > > > - Implementation of bridged networks > > - Clone Vm from snapshot > > - Automatic recovery of entities > > > > Thanks, Livnat > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > If there's time, I'd be happy to add another topic to the agenda: > - upstream stabilization (especially considering the recent failures > in > unit tests) > Y. +1. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From abaron at redhat.com Wed Feb 15 11:46:05 2012 From: abaron at redhat.com (Ayal Baron) Date: Wed, 15 Feb 2012 06:46:05 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: Message-ID: <77b2a8a2-e26b-4c82-a3f0-4f698736409a@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > Hi Ayal, > > ----- Original Message ----- > > From: "Ayal Baron" > > To: "Yaniv Kaul" > > Cc: engine-devel at ovirt.org > > Sent: Wednesday, February 15, 2012 12:19:48 PM > > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > > > > > > > > > I still fail to understand why you 'punish' existing objects and > > > not > > > giving them the new feature enabled by default. > > > > This is not a feature, it's a bug! > > Whatever we call it, it is a change in behavior. We agreed that it > will be enabled for all existing objects by default. > > http://globalnerdy.com/wordpress/wp-content/uploads/2007/12/bug_vs_feature.gif > > > This should not be treated as a feature and this should not be > > configurable! > > I can imagine some situations when I would not like the autorecovery > to happen, but if everyone agrees not to make it configurable, I > will just remove it from my patchset. It's not autorecovery, you're not recovering anything. You're reflecting the fact that the resource is back to normal (not due to anything that the engine did). This is why it is a bug today. This is why it should not be configurable. > > > Today an object moves to non-operational due to state reported by > > vdsm. The object should immediately return to up the moment vdsm > > reports the object as ok (this means that you don't stop monitoring > > just because there is an error). > > That's it. no db field and no nothing... > > This pertains to storage domains, network, host status, whatever. > > > > > Y. 
> > > > > > > b. In environment to be clean installed -we have 0 existing > > > > entities - > > > > after clean install all new entities in the system will be > > > > create > > > > with > > > > auto recoverable set to true. > > > > Will this be considered a bad behavior? > > > > > > > > > > > > _______________________________________________ > > > > Engine-devel mailing list > > > > Engine-devel at ovirt.org > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > From alevy at redhat.com Wed Feb 15 12:51:47 2012 From: alevy at redhat.com (Alon Levy) Date: Wed, 15 Feb 2012 14:51:47 +0200 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <96683d4d-ebf3-4837-9eaf-09238a9cfbcb@zmail02.collab.prod.int.phx2.redhat.com> References: <4F3B8E7E.5020503@redhat.com> <96683d4d-ebf3-4837-9eaf-09238a9cfbcb@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <20120215125147.GT15812@garlic.tlv.redhat.com> On Wed, Feb 15, 2012 at 06:25:37AM -0500, Oved Ourfalli wrote: > > > ----- Original Message ----- > > From: "Dor Laor" > > To: spice-devel at lists.freedesktop.org, engine-devel at ovirt.org > > Cc: "David Ja?a" , "Oved Ourfalli" , "Hans de Goede" > > Sent: Wednesday, February 15, 2012 12:52:46 PM > > Subject: Re: [Engine-devel] [Spice-devel] SPICE related features > > > > On 02/09/2012 02:50 PM, David Ja?a wrote: > > > Itamar Heim p??e v ?t 09. 02. 2012 v 11:07 +0200: > > >> On 02/09/2012 11:05 AM, Hans de Goede wrote: > > >>> Hi, > > >>> > > >>> On 02/09/2012 09:33 AM, Itamar Heim wrote: > > >>>> On 02/09/2012 10:31 AM, Hans de Goede wrote: > > >>> > > >>> > > >>> > > >>>>>> so this means we need to ask the user for linux guests if they > > >>>>>> want > > >>>>>> single head or multiple heads when they choose multi monitor? > > >>>>> > > >>>>> We could ask the user, but I don't think that that is a good > > >>>>> idea. > > >>>>> > > >>>>>> this will cause their (single) head to spin... > > >>>>> > > >>>>> With which you seem to agree :) > > >>>>> > > >>>>>> any better UX we can suggest users? > > >>>>> > > >>>>> Yes, no UI at all, the current solution using multiple single > > >>>>> monitor > > >>>>> pci cards means using Xinerama, which disables Xrandr, and thus > > >>>>> allows > > >>>>> no dynamic adjustment of the monitor settings of the guest, > > >>>>> instead > > >>>>> an xorg.conf file must be written (the linux agent can generate > > >>>>> one > > >>>>> based on the current client monitor info) and Xorg needs to be > > >>>>> restarted. > > >>>>> > > >>>>> This is the result of the multiple pci cards which each 1 > > >>>>> monitor model > > >>>>> we've been using for windows guests being a poor match for > > >>>>> Linux guests. > > >>>>> > > >>>>> So we are working on adding support to drive multiple monitors > > >>>>> from a > > >>>>> single qxl pci device. This requires changes on both the host > > >>>>> and > > >>>>> guest side, but if both sides support it this configuration is > > >>>>> much > > >>>>> better, so IMHO ovirt should just automatically enable it > > >>>>> if both the host (the cluster) and the guest support it. 
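
To make the two layouts concrete, a sketch of how the difference shows up from
the host side with libvirt (GUEST is a placeholder for the domain name; the XML
values are illustrative, not taken from the thread):

    # inspect which layout a guest currently has - look at its <video> elements
    virsh dumpxml GUEST | grep -A2 '<video>'
    #
    # legacy layout, one single-head qxl device per monitor (guest side needs Xinerama):
    #   <video><model type='qxl' heads='1'/></video>   ...repeated once per monitor
    #
    # new layout, a single qxl device exposing several heads (guest side can use RandR):
    #   <video><model type='qxl' heads='2'/></video>
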
> > >>>>> > > >>>>> On the guest side, this is the current status: > > >>>>> > > >>>>> RHEL<= 6.1 no multi monitor support > > >>>>> RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so 1 > > >>>>> monitor/card, multiple cards) > > >>>>> RHEL>= 6.? multi monitor support using a single card with > > >>>>> multiple > > >>>>> outputs > > >>>>> > > >>>>> Just like when exactly the new multi mon support will be > > >>>>> available > > >>>>> for guests, it is a similar question mark for when it will be > > >>>>> available for > > >>>>> the host. > > >>>> > > >>>> this is the ovirt mailing list, so upstream versions are more > > >>>> relevant > > >>>> here. > > >>>> in any case, I have the same issue with backward compatibilty. > > >>>> say you fix this in fedora 17. > > >>>> user started a guest VM when host was fedora 16. > > >>>> admin upgraded host and changed cluster level to utilize new > > >>>> features. > > >>>> suddenly on next boot guest will move from 4 heads to single > > >>>> head? I'm > > >>>> guessing it will break user configuration. > > >>>> i.e., user should be able to choose to move to utilize the new > > >>>> mode? > > >>> > > >>> I see this as something which gets decided at vm creation time, > > >>> and then > > >>> stored in the vm config. So if the vm gets created with a guest > > >>> OS which > > >>> does not support multiple monitors per qxl device, or when the > > >>> cluster does > > >>> not support it, it uses the old setup with 1 card / monitor. Even > > >>> if the > > >>> guest OS or the cluster gets upgraded later. > > >> > > >> so instead of letting user change this, we'd force this at vm > > >> creation > > >> time? I'm not sure this is "friendlier". > > > > > > I think that some history-watching logic& one UI bit could be the > > > way > > > to go. The UI bit would be yet another select button that would let > > > user > > > choose what graphic layout ("all monitors on single graphic card", > > > "one > > > graphic card per monitor (legacy)"). The logic would be like this: > > > * pre-existing guest that now supports new layout in 3.1 > > > cluster > > > * The guest uses 1 monitor, is swithed to 2+ --> > > > new > > > * The guest uses 2+ monitor layout --> old, big fat > > > warning when changing to the new that user should > > > wipe > > > xinerama configuration in the guest > > > * pre-existing guest in old or mixed cluster: > > > * guest uses 2+ monitors --> old > > > * guest is newly configured for 2+ monitors --> > > > show > > > warning that user either has co configure xinerama > > > or > > > use newer cluster --> old > > > * new guest in new cluster: > > > * --> new > > > * if user switches to old, show warning > > > * old guest in any type of cluster > > > * --> old > > > > > > This kind of behavior should provide sensible defaults, all valid > > > choices in all possible scenarios and it should not interfere too > > > much > > > when admin chooses to do anything. > > > > In short, the same rule of the thumb applies to all of our virtual > > hardware: > > - new vm creation should use the greatest and latest virtual > > hardware > > version (if the current cluster allows it) > > - For existing VMs we should preserve their current virtual > > hardware > > set (-M flag in qemu machine type vocabulary, cluster > > terminology > > for ovirt). > > - Changing the virtual hardware by either changing existing > > devices, > > adding devices, changing pci slots, or changing the virtual > > hardware revision should done only by user consent. 
> > The latter may have the exception of a smart offline v2v tool.
> >
> > Dor
> >
>
> I agree.
> What I don't understand (not a question for you Dor, but more for the SPICE guys), though, is the requirement to support Xinerama configuration on the guest.
> Today the ovirt-engine doesn't support using more than one monitor on Linux guests.

Why? With Xinerama support we have a solution for this. That's the
reason for asking for more than one monitor support for linux guests, no?

> If someone did something which is not the standard, somewhere behind the scenes, the engine cannot be aware of that, thus it cannot take it into consideration.
> Moreover, we don't change the device layout when we add support, as the guest had one display device with one head, and it will remain that way, unless he chooses to work with multiple monitors through the engine management system.
> > > >>>>> > > > >>>>> Yes, no UI at all, the current solution using multiple > > > >>>>> single > > > >>>>> monitor > > > >>>>> pci cards means using Xinerama, which disables Xrandr, and > > > >>>>> thus > > > >>>>> allows > > > >>>>> no dynamic adjustment of the monitor settings of the guest, > > > >>>>> instead > > > >>>>> an xorg.conf file must be written (the linux agent can > > > >>>>> generate > > > >>>>> one > > > >>>>> based on the current client monitor info) and Xorg needs to > > > >>>>> be > > > >>>>> restarted. > > > >>>>> > > > >>>>> This is the result of the multiple pci cards which each 1 > > > >>>>> monitor model > > > >>>>> we've been using for windows guests being a poor match for > > > >>>>> Linux guests. > > > >>>>> > > > >>>>> So we are working on adding support to drive multiple > > > >>>>> monitors > > > >>>>> from a > > > >>>>> single qxl pci device. This requires changes on both the > > > >>>>> host > > > >>>>> and > > > >>>>> guest side, but if both sides support it this configuration > > > >>>>> is > > > >>>>> much > > > >>>>> better, so IMHO ovirt should just automatically enable it > > > >>>>> if both the host (the cluster) and the guest support it. > > > >>>>> > > > >>>>> On the guest side, this is the current status: > > > >>>>> > > > >>>>> RHEL<= 6.1 no multi monitor support > > > >>>>> RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so > > > >>>>> 1 > > > >>>>> monitor/card, multiple cards) > > > >>>>> RHEL>= 6.? multi monitor support using a single card with > > > >>>>> multiple > > > >>>>> outputs > > > >>>>> > > > >>>>> Just like when exactly the new multi mon support will be > > > >>>>> available > > > >>>>> for guests, it is a similar question mark for when it will > > > >>>>> be > > > >>>>> available for > > > >>>>> the host. > > > >>>> > > > >>>> this is the ovirt mailing list, so upstream versions are > > > >>>> more > > > >>>> relevant > > > >>>> here. > > > >>>> in any case, I have the same issue with backward > > > >>>> compatibilty. > > > >>>> say you fix this in fedora 17. > > > >>>> user started a guest VM when host was fedora 16. > > > >>>> admin upgraded host and changed cluster level to utilize new > > > >>>> features. > > > >>>> suddenly on next boot guest will move from 4 heads to single > > > >>>> head? I'm > > > >>>> guessing it will break user configuration. > > > >>>> i.e., user should be able to choose to move to utilize the > > > >>>> new > > > >>>> mode? > > > >>> > > > >>> I see this as something which gets decided at vm creation > > > >>> time, > > > >>> and then > > > >>> stored in the vm config. So if the vm gets created with a > > > >>> guest > > > >>> OS which > > > >>> does not support multiple monitors per qxl device, or when > > > >>> the > > > >>> cluster does > > > >>> not support it, it uses the old setup with 1 card / monitor. > > > >>> Even > > > >>> if the > > > >>> guest OS or the cluster gets upgraded later. > > > >> > > > >> so instead of letting user change this, we'd force this at vm > > > >> creation > > > >> time? I'm not sure this is "friendlier". > > > > > > > > I think that some history-watching logic& one UI bit could be > > > > the > > > > way > > > > to go. The UI bit would be yet another select button that would > > > > let > > > > user > > > > choose what graphic layout ("all monitors on single graphic > > > > card", > > > > "one > > > > graphic card per monitor (legacy)"). 
The logic would be like > > > > this: > > > > * pre-existing guest that now supports new layout in 3.1 > > > > cluster > > > > * The guest uses 1 monitor, is swithed to 2+ --> > > > > new > > > > * The guest uses 2+ monitor layout --> old, big > > > > fat > > > > warning when changing to the new that user > > > > should > > > > wipe > > > > xinerama configuration in the guest > > > > * pre-existing guest in old or mixed cluster: > > > > * guest uses 2+ monitors --> old > > > > * guest is newly configured for 2+ monitors --> > > > > show > > > > warning that user either has co configure > > > > xinerama > > > > or > > > > use newer cluster --> old > > > > * new guest in new cluster: > > > > * --> new > > > > * if user switches to old, show warning > > > > * old guest in any type of cluster > > > > * --> old > > > > > > > > This kind of behavior should provide sensible defaults, all > > > > valid > > > > choices in all possible scenarios and it should not interfere > > > > too > > > > much > > > > when admin chooses to do anything. > > > > > > In short, the same rule of the thumb applies to all of our > > > virtual > > > hardware: > > > - new vm creation should use the greatest and latest virtual > > > hardware > > > version (if the current cluster allows it) > > > - For existing VMs we should preserve their current virtual > > > hardware > > > set (-M flag in qemu machine type vocabulary, cluster > > > terminology > > > for ovirt). > > > - Changing the virtual hardware by either changing existing > > > devices, > > > adding devices, changing pci slots, or changing the virtual > > > hardware revision should done only by user consent. > > > The later may have the exception of smart offline v2v tool. > > > > > > Dor > > > > > > > I agree. > > What I don't understand (not a question for you Dor, but more for > > the SPICE guys), though, is the requirement to support Xinerama > > configuration on the guest. > > Today the ovirt-engine doesn't support using more than one monitor > > on Linux guests. > > Why? With Xinerama support we have a solution for this. That's the > reason for asking for more the one monitor support for linux guests, > no? > AFAIU there was no requirement for adding Xinerama support, but only supporting using multiple heads on a single device. If this is a requirement we should know about it, and design it. Either way, as both features are new we have no backward compatibility issue here. > > If someone did something with is not the standard, somewhere behind > > the scenes, the engine cannot be aware of that, thus it cannot > > take it into consideration. > > Moreover, don't change the device layout when we add support, as > > the guest had one display device with one head, and it will remain > > that way, unless he chooses to work with multiple monitors through > > the engine management system. 
> > > > > > > > > > David > > > > > > > >> ______________________________________ > > > >> _________ > > > >> Engine-devel mailing list > > > >> Engine-devel at ovirt.org > > > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > _______________________________________________ > > Spice-devel mailing list > > Spice-devel at lists.freedesktop.org > > http://lists.freedesktop.org/mailman/listinfo/spice-devel > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From djasa at redhat.com Wed Feb 15 13:10:59 2012 From: djasa at redhat.com (David =?UTF-8?Q?Ja=C5=A1a?=) Date: Wed, 15 Feb 2012 14:10:59 +0100 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <96683d4d-ebf3-4837-9eaf-09238a9cfbcb@zmail02.collab.prod.int.phx2.redhat.com> References: <96683d4d-ebf3-4837-9eaf-09238a9cfbcb@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <1329311459.12032.940.camel@dhcp-29-7.brq.redhat.com> Oved Ourfalli p??e v St 15. 02. 2012 v 06:25 -0500: > > ----- Original Message ----- > > From: "Dor Laor" > > To: spice-devel at lists.freedesktop.org, engine-devel at ovirt.org > > Cc: "David Ja?a" , "Oved Ourfalli" , "Hans de Goede" > > Sent: Wednesday, February 15, 2012 12:52:46 PM > > Subject: Re: [Engine-devel] [Spice-devel] SPICE related features > > > > On 02/09/2012 02:50 PM, David Ja?a wrote: > > > Itamar Heim p??e v ?t 09. 02. 2012 v 11:07 +0200: > > >> On 02/09/2012 11:05 AM, Hans de Goede wrote: > > >>> Hi, > > >>> > > >>> On 02/09/2012 09:33 AM, Itamar Heim wrote: > > >>>> On 02/09/2012 10:31 AM, Hans de Goede wrote: > > >>> > > >>> > > >>> > > >>>>>> so this means we need to ask the user for linux guests if they > > >>>>>> want > > >>>>>> single head or multiple heads when they choose multi monitor? > > >>>>> > > >>>>> We could ask the user, but I don't think that that is a good > > >>>>> idea. > > >>>>> > > >>>>>> this will cause their (single) head to spin... > > >>>>> > > >>>>> With which you seem to agree :) > > >>>>> > > >>>>>> any better UX we can suggest users? > > >>>>> > > >>>>> Yes, no UI at all, the current solution using multiple single > > >>>>> monitor > > >>>>> pci cards means using Xinerama, which disables Xrandr, and thus > > >>>>> allows > > >>>>> no dynamic adjustment of the monitor settings of the guest, > > >>>>> instead > > >>>>> an xorg.conf file must be written (the linux agent can generate > > >>>>> one > > >>>>> based on the current client monitor info) and Xorg needs to be > > >>>>> restarted. > > >>>>> > > >>>>> This is the result of the multiple pci cards which each 1 > > >>>>> monitor model > > >>>>> we've been using for windows guests being a poor match for > > >>>>> Linux guests. > > >>>>> > > >>>>> So we are working on adding support to drive multiple monitors > > >>>>> from a > > >>>>> single qxl pci device. This requires changes on both the host > > >>>>> and > > >>>>> guest side, but if both sides support it this configuration is > > >>>>> much > > >>>>> better, so IMHO ovirt should just automatically enable it > > >>>>> if both the host (the cluster) and the guest support it. > > >>>>> > > >>>>> On the guest side, this is the current status: > > >>>>> > > >>>>> RHEL<= 6.1 no multi monitor support > > >>>>> RHEL 6.2(*) - 6.? 
multi monitor support using Xinerama (so 1 > > >>>>> monitor/card, multiple cards) > > >>>>> RHEL>= 6.? multi monitor support using a single card with > > >>>>> multiple > > >>>>> outputs > > >>>>> > > >>>>> Just like when exactly the new multi mon support will be > > >>>>> available > > >>>>> for guests, it is a similar question mark for when it will be > > >>>>> available for > > >>>>> the host. > > >>>> > > >>>> this is the ovirt mailing list, so upstream versions are more > > >>>> relevant > > >>>> here. > > >>>> in any case, I have the same issue with backward compatibilty. > > >>>> say you fix this in fedora 17. > > >>>> user started a guest VM when host was fedora 16. > > >>>> admin upgraded host and changed cluster level to utilize new > > >>>> features. > > >>>> suddenly on next boot guest will move from 4 heads to single > > >>>> head? I'm > > >>>> guessing it will break user configuration. > > >>>> i.e., user should be able to choose to move to utilize the new > > >>>> mode? > > >>> > > >>> I see this as something which gets decided at vm creation time, > > >>> and then > > >>> stored in the vm config. So if the vm gets created with a guest > > >>> OS which > > >>> does not support multiple monitors per qxl device, or when the > > >>> cluster does > > >>> not support it, it uses the old setup with 1 card / monitor. Even > > >>> if the > > >>> guest OS or the cluster gets upgraded later. > > >> > > >> so instead of letting user change this, we'd force this at vm > > >> creation > > >> time? I'm not sure this is "friendlier". > > > > > > I think that some history-watching logic& one UI bit could be the > > > way > > > to go. The UI bit would be yet another select button that would let > > > user > > > choose what graphic layout ("all monitors on single graphic card", > > > "one > > > graphic card per monitor (legacy)"). The logic would be like this: > > > * pre-existing guest that now supports new layout in 3.1 > > > cluster > > > * The guest uses 1 monitor, is swithed to 2+ --> > > > new > > > * The guest uses 2+ monitor layout --> old, big fat > > > warning when changing to the new that user should > > > wipe > > > xinerama configuration in the guest > > > * pre-existing guest in old or mixed cluster: > > > * guest uses 2+ monitors --> old > > > * guest is newly configured for 2+ monitors --> > > > show > > > warning that user either has co configure xinerama > > > or > > > use newer cluster --> old > > > * new guest in new cluster: > > > * --> new > > > * if user switches to old, show warning > > > * old guest in any type of cluster > > > * --> old > > > > > > This kind of behavior should provide sensible defaults, all valid > > > choices in all possible scenarios and it should not interfere too > > > much > > > when admin chooses to do anything. > > > > In short, the same rule of the thumb applies to all of our virtual > > hardware: > > - new vm creation should use the greatest and latest virtual > > hardware > > version (if the current cluster allows it) > > - For existing VMs we should preserve their current virtual > > hardware > > set (-M flag in qemu machine type vocabulary, cluster > > terminology > > for ovirt). > > - Changing the virtual hardware by either changing existing > > devices, > > adding devices, changing pci slots, or changing the virtual > > hardware revision should done only by user consent. > > The later may have the exception of smart offline v2v tool. > > > > Dor > > > > I agree. 
> What I don't understand (not a question for you Dor, but more for the > SPICE guys), though, is the requirement to support Xinerama > configuration on the guest. > Today the ovirt-engine doesn't support using more than one monitor on > Linux guests. Downstream RHEV 3.0 does support multi-monitor linux guests using Xinerama so it has to be taken into consideration. David > If someone did something with is not the standard, somewhere behind > the scenes, the engine cannot be aware of that, thus it cannot take it > into consideration. > Moreover, don't change the device layout when we add support, as the > guest had one display device with one head, and it will remain that > way, unless he chooses to work with multiple monitors through the > engine management system. > > > > > > > David > > > > > >> ______________________________________ > > >> _________ > > >> Engine-devel mailing list > > >> Engine-devel at ovirt.org > > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -- David Ja?a, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 From lpeer at redhat.com Wed Feb 15 13:51:16 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 15 Feb 2012 15:51:16 +0200 Subject: [Engine-devel] agenda for today's meeting In-Reply-To: References: Message-ID: <4F3BB854.6010707@redhat.com> On 15/02/12 13:39, Eyal Edri wrote: > > > ----- Original Message ----- >> From: "Yaniv Kaul" >> To: "Livnat Peer" >> Cc: engine-devel at ovirt.org >> Sent: Wednesday, February 15, 2012 1:35:56 PM >> Subject: Re: [Engine-devel] agenda for today's meeting >> >> On 02/15/2012 01:21 PM, Livnat Peer wrote: >>> Hi All, >>> Agenda for this week meeting: >>> >>> - Implementation of bridged networks >>> - Clone Vm from snapshot >>> - Automatic recovery of entities >>> >>> Thanks, Livnat >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> If there's time, I'd be happy to add another topic to the agenda: >> - upstream stabilization (especially considering the recent failures >> in >> unit tests) >> Y. > > +1. > This was added to agenda. >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> From djasa at redhat.com Wed Feb 15 13:52:12 2012 From: djasa at redhat.com (David =?UTF-8?Q?Ja=C5=A1a?=) Date: Wed, 15 Feb 2012 14:52:12 +0100 Subject: [Engine-devel] engine.log was gone In-Reply-To: <4F3B97B2.7030904@linux.vnet.ibm.com> References: <4F3B7825.8060709@linux.vnet.ibm.com> <1329301827.12032.836.camel@dhcp-29-7.brq.redhat.com> <4F3B97B2.7030904@linux.vnet.ibm.com> Message-ID: <1329313932.12032.949.camel@dhcp-29-7.brq.redhat.com> Shu Ming p??e v St 15. 02. 2012 v 19:32 +0800: > On 2012-2-15 18:30, David Ja?a wrote: > > Shu Ming p??e v St 15. 02. 2012 v 17:17 +0800: > >> Hi, > >> I deleted the engine.log under /var/log/ovirt-engine by mistake. It > >> seems that the log file can not be recreated whatever engine-clean and > >> engine-setup were run. Even rebooting the server didn't work. 
Is there > >> any quick way to bring my engine.log back? > >> > > I would try to touch the file, modify permissions to match those of > > other log files in directory and then restore its selinux context. > That is what I did. But don't know hot to restore its selinux context, > do have have an example? > restorecon(8) is your friend. In short, "restorecon -FvvR " will restore context of recursively (but it won't follow symlinks). David > > > > > David > > > > -- David Ja?a, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 From ryanh at us.ibm.com Wed Feb 15 13:56:36 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Wed, 15 Feb 2012 07:56:36 -0600 Subject: [Engine-devel] Change jboss listening ip/port configuration Message-ID: <20120215135636.GC6051@us.ibm.com> Hi, I've followed the building engine from source wiki[1] and I've got it all running (thanks!) but to test things out, I wanted to exercise the gui and other parts via the web portal. By default, jboss is listening only to localhost:8080, and I was wondering the right way to change that to something else? I see localhost and ports buried down in ./backend/manager/conf/standalone.xml but not clear if I should be mucking with those and re-deploying or what. Thanks for the help, -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From dfediuck at redhat.com Wed Feb 15 14:15:13 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 15 Feb 2012 16:15:13 +0200 Subject: [Engine-devel] Change jboss listening ip/port configuration In-Reply-To: <20120215135636.GC6051@us.ibm.com> References: <20120215135636.GC6051@us.ibm.com> Message-ID: <4F3BBDF1.3000507@redhat.com> On 15/02/12 15:56, Ryan Harper wrote: > Hi, > > I've followed the building engine from source wiki[1] and I've got it > all running (thanks!) but to test things out, I wanted to exercise the > gui and other parts via the web portal. By default, jboss is listening > only to localhost:8080, and I was wondering the right way to change that > to something else? > > I see localhost and ports buried down in > > ./backend/manager/conf/standalone.xml > > but not clear if I should be mucking with those and re-deploying or > what. > > Thanks for the help, > Hi Ryan, This is the JBoss default. There are 2 options when changing the relevant ports, and you need to decide how to proceed; - ports <1024 In Linux processes binding to this port range (for example 80, 22 ...), require root permissions. There are technical solutions for it, such as mod_ssl and iptables routin. - ports >=1024 This can be resolved in standalone.xml or- http://www.postgresql.org/docs/8.1/static/sql-alterdatabase.html -- /d Why doesn't DOS ever say "EXCELLENT command or filename!" From dfediuck at redhat.com Wed Feb 15 14:18:57 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 15 Feb 2012 16:18:57 +0200 Subject: [Engine-devel] Change jboss listening ip/port configuration In-Reply-To: <4F3BBDF1.3000507@redhat.com> References: <20120215135636.GC6051@us.ibm.com> <4F3BBDF1.3000507@redhat.com> Message-ID: <4F3BBED1.9010706@redhat.com> On 15/02/12 16:15, Doron Fediuck wrote: > On 15/02/12 15:56, Ryan Harper wrote: >> Hi, >> >> I've followed the building engine from source wiki[1] and I've got it >> all running (thanks!) but to test things out, I wanted to exercise the >> gui and other parts via the web portal. 
By default, jboss is listening >> only to localhost:8080, and I was wondering the right way to change that >> to something else? >> >> I see localhost and ports buried down in >> >> ./backend/manager/conf/standalone.xml >> >> but not clear if I should be mucking with those and re-deploying or >> what. >> >> Thanks for the help, >> > Hi Ryan, > This is the JBoss default. > There are 2 options when changing the relevant ports, and you need to > decide how to proceed; > > - ports <1024 > In Linux processes binding to this port range (for example 80, 22 ...), > require root permissions. There are technical solutions for it, such as > mod_ssl and iptables routin. > > - ports >=1024 > This can be resolved in standalone.xml or- http://www.postgresql.org/docs/8.1/static/sql-alterdatabase.html > and now the right link- https://community.jboss.org/thread/168140 -- /d "The answer, my friend, is blowing in the wind" --Bob Dylan, Blowin' in the Wind (1963) From djasa at redhat.com Wed Feb 15 14:22:44 2012 From: djasa at redhat.com (David =?UTF-8?Q?Ja=C5=A1a?=) Date: Wed, 15 Feb 2012 15:22:44 +0100 Subject: [Engine-devel] [Spice-devel] SPICE related features In-Reply-To: <00644af6-6717-4c21-8b8a-4259e656552d@zmail02.collab.prod.int.phx2.redhat.com> References: <00644af6-6717-4c21-8b8a-4259e656552d@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <1329315764.12032.961.camel@dhcp-29-7.brq.redhat.com> Oved Ourfalli p??e v St 15. 02. 2012 v 08:06 -0500: > > ----- Original Message ----- > > From: "Alon Levy" > > To: "Oved Ourfalli" > > Cc: "David Ja?a" , spice-devel at lists.freedesktop.org, dlaor at redhat.com, engine-devel at ovirt.org > > Sent: Wednesday, February 15, 2012 2:51:47 PM > > Subject: Re: [Engine-devel] [Spice-devel] SPICE related features > > > > On Wed, Feb 15, 2012 at 06:25:37AM -0500, Oved Ourfalli wrote: > > > > > > > > > ----- Original Message ----- > > > > From: "Dor Laor" > > > > To: spice-devel at lists.freedesktop.org, engine-devel at ovirt.org > > > > Cc: "David Ja?a" , "Oved Ourfalli" > > > > , "Hans de Goede" > > > > Sent: Wednesday, February 15, 2012 12:52:46 PM > > > > Subject: Re: [Engine-devel] [Spice-devel] SPICE related features > > > > > > > > On 02/09/2012 02:50 PM, David Ja?a wrote: > > > > > Itamar Heim p??e v ?t 09. 02. 2012 v 11:07 +0200: > > > > >> On 02/09/2012 11:05 AM, Hans de Goede wrote: > > > > >>> Hi, > > > > >>> > > > > >>> On 02/09/2012 09:33 AM, Itamar Heim wrote: > > > > >>>> On 02/09/2012 10:31 AM, Hans de Goede wrote: > > > > >>> > > > > >>> > > > > >>> > > > > >>>>>> so this means we need to ask the user for linux guests if > > > > >>>>>> they > > > > >>>>>> want > > > > >>>>>> single head or multiple heads when they choose multi > > > > >>>>>> monitor? > > > > >>>>> > > > > >>>>> We could ask the user, but I don't think that that is a > > > > >>>>> good > > > > >>>>> idea. > > > > >>>>> > > > > >>>>>> this will cause their (single) head to spin... > > > > >>>>> > > > > >>>>> With which you seem to agree :) > > > > >>>>> > > > > >>>>>> any better UX we can suggest users? 
> > > > >>>>> > > > > >>>>> Yes, no UI at all, the current solution using multiple > > > > >>>>> single > > > > >>>>> monitor > > > > >>>>> pci cards means using Xinerama, which disables Xrandr, and > > > > >>>>> thus > > > > >>>>> allows > > > > >>>>> no dynamic adjustment of the monitor settings of the guest, > > > > >>>>> instead > > > > >>>>> an xorg.conf file must be written (the linux agent can > > > > >>>>> generate > > > > >>>>> one > > > > >>>>> based on the current client monitor info) and Xorg needs to > > > > >>>>> be > > > > >>>>> restarted. > > > > >>>>> > > > > >>>>> This is the result of the multiple pci cards which each 1 > > > > >>>>> monitor model > > > > >>>>> we've been using for windows guests being a poor match for > > > > >>>>> Linux guests. > > > > >>>>> > > > > >>>>> So we are working on adding support to drive multiple > > > > >>>>> monitors > > > > >>>>> from a > > > > >>>>> single qxl pci device. This requires changes on both the > > > > >>>>> host > > > > >>>>> and > > > > >>>>> guest side, but if both sides support it this configuration > > > > >>>>> is > > > > >>>>> much > > > > >>>>> better, so IMHO ovirt should just automatically enable it > > > > >>>>> if both the host (the cluster) and the guest support it. > > > > >>>>> > > > > >>>>> On the guest side, this is the current status: > > > > >>>>> > > > > >>>>> RHEL<= 6.1 no multi monitor support > > > > >>>>> RHEL 6.2(*) - 6.? multi monitor support using Xinerama (so > > > > >>>>> 1 > > > > >>>>> monitor/card, multiple cards) > > > > >>>>> RHEL>= 6.? multi monitor support using a single card with > > > > >>>>> multiple > > > > >>>>> outputs > > > > >>>>> > > > > >>>>> Just like when exactly the new multi mon support will be > > > > >>>>> available > > > > >>>>> for guests, it is a similar question mark for when it will > > > > >>>>> be > > > > >>>>> available for > > > > >>>>> the host. > > > > >>>> > > > > >>>> this is the ovirt mailing list, so upstream versions are > > > > >>>> more > > > > >>>> relevant > > > > >>>> here. > > > > >>>> in any case, I have the same issue with backward > > > > >>>> compatibilty. > > > > >>>> say you fix this in fedora 17. > > > > >>>> user started a guest VM when host was fedora 16. > > > > >>>> admin upgraded host and changed cluster level to utilize new > > > > >>>> features. > > > > >>>> suddenly on next boot guest will move from 4 heads to single > > > > >>>> head? I'm > > > > >>>> guessing it will break user configuration. > > > > >>>> i.e., user should be able to choose to move to utilize the > > > > >>>> new > > > > >>>> mode? > > > > >>> > > > > >>> I see this as something which gets decided at vm creation > > > > >>> time, > > > > >>> and then > > > > >>> stored in the vm config. So if the vm gets created with a > > > > >>> guest > > > > >>> OS which > > > > >>> does not support multiple monitors per qxl device, or when > > > > >>> the > > > > >>> cluster does > > > > >>> not support it, it uses the old setup with 1 card / monitor. > > > > >>> Even > > > > >>> if the > > > > >>> guest OS or the cluster gets upgraded later. > > > > >> > > > > >> so instead of letting user change this, we'd force this at vm > > > > >> creation > > > > >> time? I'm not sure this is "friendlier". > > > > > > > > > > I think that some history-watching logic& one UI bit could be > > > > > the > > > > > way > > > > > to go. 
The UI bit would be yet another select button that would > > > > > let > > > > > user > > > > > choose what graphic layout ("all monitors on single graphic > > > > > card", > > > > > "one > > > > > graphic card per monitor (legacy)"). The logic would be like > > > > > this: > > > > > * pre-existing guest that now supports new layout in 3.1 > > > > > cluster > > > > > * The guest uses 1 monitor, is swithed to 2+ --> > > > > > new > > > > > * The guest uses 2+ monitor layout --> old, big > > > > > fat > > > > > warning when changing to the new that user > > > > > should > > > > > wipe > > > > > xinerama configuration in the guest > > > > > * pre-existing guest in old or mixed cluster: > > > > > * guest uses 2+ monitors --> old > > > > > * guest is newly configured for 2+ monitors --> > > > > > show > > > > > warning that user either has co configure > > > > > xinerama > > > > > or > > > > > use newer cluster --> old > > > > > * new guest in new cluster: > > > > > * --> new > > > > > * if user switches to old, show warning > > > > > * old guest in any type of cluster > > > > > * --> old > > > > > > > > > > This kind of behavior should provide sensible defaults, all > > > > > valid > > > > > choices in all possible scenarios and it should not interfere > > > > > too > > > > > much > > > > > when admin chooses to do anything. > > > > > > > > In short, the same rule of the thumb applies to all of our > > > > virtual > > > > hardware: > > > > - new vm creation should use the greatest and latest virtual > > > > hardware > > > > version (if the current cluster allows it) > > > > - For existing VMs we should preserve their current virtual > > > > hardware > > > > set (-M flag in qemu machine type vocabulary, cluster > > > > terminology > > > > for ovirt). > > > > - Changing the virtual hardware by either changing existing > > > > devices, > > > > adding devices, changing pci slots, or changing the virtual > > > > hardware revision should done only by user consent. > > > > The later may have the exception of smart offline v2v tool. > > > > > > > > Dor > > > > > > > > > > I agree. > > > What I don't understand (not a question for you Dor, but more for > > > the SPICE guys), though, is the requirement to support Xinerama > > > configuration on the guest. > > > Today the ovirt-engine doesn't support using more than one monitor > > > on Linux guests. > > > > Why? With Xinerama support we have a solution for this. That's the > > reason for asking for more the one monitor support for linux guests, > > no? > > > > AFAIU there was no requirement for adding Xinerama support, but only supporting using multiple heads on a single device. > If this is a requirement we should know about it, and design it. > Either way, as both features are new we have no backward compatibility issue here. No and no - if you take downstream RHEV into account. RHEV 3.0 _does_ support multimonitor linux guests using more single-headed devices which in turn require Xinerama configuration in guests. When such guests are placed on top of new multi-headed device, their configuration will break - so the feature is neither new, nor backward-compatibility-free. If you choose to ignore this issue, it may turn out to be a pain to fix it in late (RHEV-M) 3.1 cycle, when it is definitely branched from upstream. David > > > > If someone did something with is not the standard, somewhere behind > > > the scenes, the engine cannot be aware of that, thus it cannot > > > take it into consideration. 
> > > Moreover, don't change the device layout when we add support, as > > > the guest had one display device with one head, and it will remain > > > that way, unless he chooses to work with multiple monitors through > > > the engine management system. > > > > > > > > > > > > > David > > > > > > > > > >> ______________________________________ > > > > >> _________ > > > > >> Engine-devel mailing list > > > > >> Engine-devel at ovirt.org > > > > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > > > > > > _______________________________________________ > > > > Engine-devel mailing list > > > > Engine-devel at ovirt.org > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > _______________________________________________ > > > Spice-devel mailing list > > > Spice-devel at lists.freedesktop.org > > > http://lists.freedesktop.org/mailman/listinfo/spice-devel > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -- David Ja?a, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 From ryanh at us.ibm.com Wed Feb 15 14:42:18 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Wed, 15 Feb 2012 08:42:18 -0600 Subject: [Engine-devel] Change jboss listening ip/port configuration In-Reply-To: <4F3BBDF1.3000507@redhat.com> References: <20120215135636.GC6051@us.ibm.com> <4F3BBDF1.3000507@redhat.com> Message-ID: <20120215144218.GD6051@us.ibm.com> * Doron Fediuck [2012-02-15 08:16]: > On 15/02/12 15:56, Ryan Harper wrote: > > Hi, > > > > I've followed the building engine from source wiki[1] and I've got it > > all running (thanks!) but to test things out, I wanted to exercise the > > gui and other parts via the web portal. By default, jboss is listening > > only to localhost:8080, and I was wondering the right way to change that > > to something else? > > > > I see localhost and ports buried down in > > > > ./backend/manager/conf/standalone.xml > > > > but not clear if I should be mucking with those and re-deploying or > > what. > > > > Thanks for the help, > > > Hi Ryan, > This is the JBoss default. > There are 2 options when changing the relevant ports, and you need to > decide how to proceed; > > - ports <1024 > In Linux processes binding to this port range (for example 80, 22 ...), > require root permissions. There are technical solutions for it, such as > mod_ssl and iptables routin. > > - ports >=1024 > This can be resolved in standalone.xml or- http://www.postgresql.org/docs/8.1/static/sql-alterdatabase.html OK, sounds like if I want to change either hostname or ports, I'd do that via database settings in standalone.xml? Once I've changed them do I just do another mvn2 clean install -Pdep do I also need to restart jboss-as at some point? > > -- > > /d > > Why doesn't DOS ever say "EXCELLENT command or filename!" 
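
For a source build like this, two knobs usually cover it; a sketch using the
stock JBoss AS 7 options (the paths and property names below are the upstream
defaults and may differ in a given setup):

    # bind the standalone server to all interfaces instead of 127.0.0.1
    ./bin/standalone.sh -b 0.0.0.0

    # or shift every socket binding (8080 -> 9080, and so on) without editing standalone.xml
    ./bin/standalone.sh -Djboss.socket.binding.port-offset=1000

    # a permanent change goes into the <socket-binding name="http" port="8080"/>
    # element of standalone.xml, followed by a restart of jboss-as
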
-- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From dfediuck at redhat.com Wed Feb 15 14:50:28 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 15 Feb 2012 16:50:28 +0200 Subject: [Engine-devel] Change jboss listening ip/port configuration In-Reply-To: <20120215144218.GD6051@us.ibm.com> References: <20120215135636.GC6051@us.ibm.com> <4F3BBDF1.3000507@redhat.com> <20120215144218.GD6051@us.ibm.com> Message-ID: <4F3BC634.2050203@redhat.com> On 15/02/12 16:42, Ryan Harper wrote: -- /d "2B | !2B = FF" > * Doron Fediuck [2012-02-15 08:16]: >> On 15/02/12 15:56, Ryan Harper wrote: >>> Hi, >>> >>> I've followed the building engine from source wiki[1] and I've got it >>> all running (thanks!) but to test things out, I wanted to exercise the >>> gui and other parts via the web portal. By default, jboss is listening >>> only to localhost:8080, and I was wondering the right way to change that >>> to something else? >>> >>> I see localhost and ports buried down in >>> >>> ./backend/manager/conf/standalone.xml >>> >>> but not clear if I should be mucking with those and re-deploying or >>> what. >>> >>> Thanks for the help, >>> >> Hi Ryan, >> This is the JBoss default. >> There are 2 options when changing the relevant ports, and you need to >> decide how to proceed; >> >> - ports <1024 >> In Linux processes binding to this port range (for example 80, 22 ...), >> require root permissions. There are technical solutions for it, such as >> mod_ssl and iptables routin. >> >> - ports >=1024 >> This can be resolved in standalone.xml or- http://www.postgresql.org/docs/8.1/static/sql-alterdatabase.html > > OK, sounds like if I want to change either hostname or ports, I'd do > that via database settings in standalone.xml? Once I've changed them The standalone.xml JBoss configuration holds several settings amongst them is data source. Port binding is one of these settings. Look for something like- socket-binding name="http" port="8080" > do I just do another > > mvn2 clean install -Pdep No need, since you're not changing the application, but the underlying application server setup. > > > do I also need to restart jboss-as at some point? That's the only thing you need to do. > > > >> >> -- >> >> /d >> >> Why doesn't DOS ever say "EXCELLENT command or filename!" > From ykaul at redhat.com Wed Feb 15 15:26:48 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 15 Feb 2012 17:26:48 +0200 Subject: [Engine-devel] Upstream stabilization [was: Re: agenda for today's meeting] In-Reply-To: <4F3BB854.6010707@redhat.com> References: <4F3BB854.6010707@redhat.com> Message-ID: <4F3BCEB8.402@redhat.com> Since regretfully there was no time for my topic, I'll convey it via email. I'd like to suggest 3 simple rules to assist in keeping the projects stable: 1. If you've broken the Jenkins automated tests, provide a fix ASAP. 2. Don't fix what's ain't broken. 3. Follow rules 1 & 2 at all times. Thanks, Y. 
On 02/15/2012 03:51 PM, Livnat Peer wrote: > On 15/02/12 13:39, Eyal Edri wrote: >> >> ----- Original Message ----- >>> From: "Yaniv Kaul" >>> To: "Livnat Peer" >>> Cc: engine-devel at ovirt.org >>> Sent: Wednesday, February 15, 2012 1:35:56 PM >>> Subject: Re: [Engine-devel] agenda for today's meeting >>> >>> On 02/15/2012 01:21 PM, Livnat Peer wrote: >>>> Hi All, >>>> Agenda for this week meeting: >>>> >>>> - Implementation of bridged networks >>>> - Clone Vm from snapshot >>>> - Automatic recovery of entities >>>> >>>> Thanks, Livnat >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> If there's time, I'd be happy to add another topic to the agenda: >>> - upstream stabilization (especially considering the recent failures >>> in >>> unit tests) >>> Y. >> +1. >> > This was added to agenda. > >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> From jchoate at redhat.com Wed Feb 15 16:15:27 2012 From: jchoate at redhat.com (Jon Choate) Date: Wed, 15 Feb 2012 11:15:27 -0500 Subject: [Engine-devel] Upstream stabilization [was: Re: agenda for today's meeting] In-Reply-To: <4F3BCEB8.402@redhat.com> References: <4F3BB854.6010707@redhat.com> <4F3BCEB8.402@redhat.com> Message-ID: <4F3BDA1F.8050302@redhat.com> On 02/15/2012 10:26 AM, Yaniv Kaul wrote: > Since regretfully there was no time for my topic, I'll convey it via > email. > > I'd like to suggest 3 simple rules to assist in keeping the projects > stable: > 1. If you've broken the Jenkins automated tests, provide a fix ASAP. > 2. Don't fix what's ain't broken. > 3. Follow rules 1 & 2 at all times. > > Thanks, > Y. Since we are engineers we can start numbering at zero: 0. Don't push anything that you know will cause test failures or won't compile. Every commit point is publicly available and any member of the community might grab oVirt at your broken commit point. When they fail to build it makes the project look unprofessional and immature. +1 to #1 -1 to #2 - code needs to be fluid and flexible. A bias against proactive refactoring tends to lead to code that is brittle and eventually unmaintainable. I'd like to add to that: If you give at patch the verified flag this implies that you have: 1. pulled the patch locally and ensured that it builds 2. have run all the unit tests and ensure that they all pass If either of these are not true, you need to give the patch "-1 Fails" If you don't do this you are as much to blame as the author. 
> > > On 02/15/2012 03:51 PM, Livnat Peer wrote: >> On 15/02/12 13:39, Eyal Edri wrote: >>> >>> ----- Original Message ----- >>>> From: "Yaniv Kaul" >>>> To: "Livnat Peer" >>>> Cc: engine-devel at ovirt.org >>>> Sent: Wednesday, February 15, 2012 1:35:56 PM >>>> Subject: Re: [Engine-devel] agenda for today's meeting >>>> >>>> On 02/15/2012 01:21 PM, Livnat Peer wrote: >>>>> Hi All, >>>>> Agenda for this week meeting: >>>>> >>>>> - Implementation of bridged networks >>>>> - Clone Vm from snapshot >>>>> - Automatic recovery of entities >>>>> >>>>> Thanks, Livnat >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> If there's time, I'd be happy to add another topic to the agenda: >>>> - upstream stabilization (especially considering the recent failures >>>> in >>>> unit tests) >>>> Y. >>> +1. >>> >> This was added to agenda. >> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From lhornyak at redhat.com Wed Feb 15 16:18:21 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Wed, 15 Feb 2012 11:18:21 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <77b2a8a2-e26b-4c82-a3f0-4f698736409a@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: Hi, A short summary from the call today, please correct me if I forgot or misunderstood something. Ayal argued that the failed host/storagedomain should be reactivated by a periodically executed job, he would prefer if the engine could [try to] correct the problem right on discovery. Livnat's point was that this is hard to implement and it is OK if we move it to Nonoperational state and periodically check it again. There was a little arguing if we call the current behavior a bug or a missing behavior, I believe this is not quite important. I did not fully understand the last few sentences from Livant, did we manage to agree in a change in the plan? Anyway, I agree with Ayal that it would be very nice if the engine could fix the issues right on discovery, but I also agree that this feature would take a bigger effort. It would be nice to know what effort it would take to get the monitoring do this safely. Could we still call it monitoring then? Laszlo ----- Original Message ----- > From: "Ayal Baron" > To: "Laszlo Hornyak" > Cc: engine-devel at ovirt.org, "Yaniv Kaul" > Sent: Wednesday, February 15, 2012 12:46:05 PM > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > > > ----- Original Message ----- > > Hi Ayal, > > > > ----- Original Message ----- > > > From: "Ayal Baron" > > > To: "Yaniv Kaul" > > > Cc: engine-devel at ovirt.org > > > Sent: Wednesday, February 15, 2012 12:19:48 PM > > > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > > > > > > > > > > > > > I still fail to understand why you 'punish' existing objects > > > > and > > > > not > > > > giving them the new feature enabled by default. > > > > > > This is not a feature, it's a bug! > > > > Whatever we call it, it is a change in behavior. We agreed that it > > will be enabled for all existing objects by default. 
> > > > http://globalnerdy.com/wordpress/wp-content/uploads/2007/12/bug_vs_feature.gif > > > > > This should not be treated as a feature and this should not be > > > configurable! > > > > I can imagine some situations when I would not like the > > autorecovery > > to happen, but if everyone agrees not to make it configurable, I > > will just remove it from my patchset. > > It's not autorecovery, you're not recovering anything. You're > reflecting the fact that the resource is back to normal (not due to > anything that the engine did). > This is why it is a bug today. > This is why it should not be configurable. > > > > > > Today an object moves to non-operational due to state reported by > > > vdsm. The object should immediately return to up the moment vdsm > > > reports the object as ok (this means that you don't stop > > > monitoring > > > just because there is an error). > > > That's it. no db field and no nothing... > > > This pertains to storage domains, network, host status, whatever. > > > > > > > Y. > > > > > > > > > b. In environment to be clean installed -we have 0 existing > > > > > entities - > > > > > after clean install all new entities in the system will be > > > > > create > > > > > with > > > > > auto recoverable set to true. > > > > > Will this be considered a bad behavior? > > > > > > > > > > > > > > > _______________________________________________ > > > > > Engine-devel mailing list > > > > > Engine-devel at ovirt.org > > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > _______________________________________________ > > > > Engine-devel mailing list > > > > Engine-devel at ovirt.org > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > From abaron at redhat.com Wed Feb 15 16:28:11 2012 From: abaron at redhat.com (Ayal Baron) Date: Wed, 15 Feb 2012 11:28:11 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: Message-ID: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > Hi, > > A short summary from the call today, please correct me if I forgot or > misunderstood something. > > Ayal argued that the failed host/storagedomain should be reactivated > by a periodically executed job, he would prefer if the engine could > [try to] correct the problem right on discovery. > Livnat's point was that this is hard to implement and it is OK if we > move it to Nonoperational state and periodically check it again. > > There was a little arguing if we call the current behavior a bug or a > missing behavior, I believe this is not quite important. > > I did not fully understand the last few sentences from Livant, did we > manage to agree in a change in the plan? A couple of points that we agreed upon: 1. no need for new mechanism, just initiate this from the monitoring context. Preferably, if not difficult, evaluate the monitoring data, if host should remain in non-op then don't bother running initVdsOnUp 2. configuration of when to call initvdsonup is orthogonal to auto-init behaviour and if introduced should be on by default and user should be able to configure this either on or off for the host in general (no lower granularity) and can only be configured via the API. 
When disabled initVdsOnUp would be called only when admin activates the host/storage and any error would keep it inactive (I still don't understand why this is at all needed but whatever). Note that going forward what I envision is engine pushing down the entire host configuration once and from that point on the host would try to keep this configuration up and running. Once this happens there will be no need for initVdsOnUp at all. > > Anyway, I agree with Ayal that it would be very nice if the engine > could fix the issues right on discovery, but I also agree that this > feature would take a bigger effort. It would be nice to know what > effort it would take to get the monitoring do this safely. Could we > still call it monitoring then? > > Laszlo > > ----- Original Message ----- > > From: "Ayal Baron" > > To: "Laszlo Hornyak" > > Cc: engine-devel at ovirt.org, "Yaniv Kaul" > > Sent: Wednesday, February 15, 2012 12:46:05 PM > > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > > > > > > > ----- Original Message ----- > > > Hi Ayal, > > > > > > ----- Original Message ----- > > > > From: "Ayal Baron" > > > > To: "Yaniv Kaul" > > > > Cc: engine-devel at ovirt.org > > > > Sent: Wednesday, February 15, 2012 12:19:48 PM > > > > Subject: Re: [Engine-devel] Autorecovery feature plan for > > > > review > > > > > > > > > > > > > > > > > > I still fail to understand why you 'punish' existing objects > > > > > and > > > > > not > > > > > giving them the new feature enabled by default. > > > > > > > > This is not a feature, it's a bug! > > > > > > Whatever we call it, it is a change in behavior. We agreed that > > > it > > > will be enabled for all existing objects by default. > > > > > > http://globalnerdy.com/wordpress/wp-content/uploads/2007/12/bug_vs_feature.gif > > > > > > > This should not be treated as a feature and this should not be > > > > configurable! > > > > > > I can imagine some situations when I would not like the > > > autorecovery > > > to happen, but if everyone agrees not to make it configurable, I > > > will just remove it from my patchset. > > > > It's not autorecovery, you're not recovering anything. You're > > reflecting the fact that the resource is back to normal (not due to > > anything that the engine did). > > This is why it is a bug today. > > This is why it should not be configurable. > > > > > > > > > Today an object moves to non-operational due to state reported > > > > by > > > > vdsm. The object should immediately return to up the moment > > > > vdsm > > > > reports the object as ok (this means that you don't stop > > > > monitoring > > > > just because there is an error). > > > > That's it. no db field and no nothing... > > > > This pertains to storage domains, network, host status, > > > > whatever. > > > > > > > > > Y. > > > > > > > > > > > b. In environment to be clean installed -we have 0 existing > > > > > > entities - > > > > > > after clean install all new entities in the system will be > > > > > > create > > > > > > with > > > > > > auto recoverable set to true. > > > > > > Will this be considered a bad behavior? 
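[Editor's note] To picture the "initiate this from the monitoring context" point above, here is a minimal Java sketch, assuming the monitoring pass already holds the host's health data. All names (HostMonitorSketch, initHostOnUp and so on) are hypothetical and not the actual engine code; the idea is only that the periodic monitoring pass itself decides whether a non-operational host looks healthy again and, if so, re-runs the initialization flow, serialized per host so monitoring and initialization cannot race.

// Hypothetical sketch of auto-recovery driven by the monitoring pass itself.
// All names are illustrative; this is not the actual ovirt-engine code.
public class HostMonitorSketch {

    public enum HostStatus { UP, NON_OPERATIONAL, MAINTENANCE }

    static class Host {
        final String name;
        HostStatus status = HostStatus.NON_OPERATIONAL;
        boolean autoRecoverEnabled = true; // per-host flag, on by default
        boolean reportsHealthy = true;     // state already gathered by monitoring

        Host(String name) { this.name = name; }
    }

    /** One periodic monitoring pass over a single host. */
    void monitor(Host host) {
        synchronized (host) { // serialize monitoring and init per host to avoid races
            if (host.status == HostStatus.NON_OPERATIONAL
                    && host.autoRecoverEnabled
                    && host.reportsHealthy) {
                // The resource is back to normal: reflect it and re-run the
                // initialization flow instead of waiting for a manual Activate.
                if (initHostOnUp(host)) {
                    host.status = HostStatus.UP;
                }
            }
        }
    }

    boolean initHostOnUp(Host host) {
        // Placeholder for the real initialization flow (initVdsOnUp):
        // reconnect storage, set up the required networks, and so on.
        System.out.println("re-initializing host " + host.name);
        return true;
    }

    public static void main(String[] args) {
        HostMonitorSketch monitorJob = new HostMonitorSketch();
        Host host = new Host("host1");
        monitorJob.monitor(host);
        System.out.println(host.name + " is now " + host.status); // host1 is now UP
    }
}

If initialization fails, nothing special happens in this sketch: the host stays non-operational and the next monitoring cycle simply tries again, which is what makes a separate recovery job unnecessary.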
> > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > Engine-devel mailing list > > > > > > Engine-devel at ovirt.org > > > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > > > _______________________________________________ > > > > > Engine-devel mailing list > > > > > Engine-devel at ovirt.org > > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > > _______________________________________________ > > > > Engine-devel mailing list > > > > Engine-devel at ovirt.org > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > > > > From ewoud+ovirt at kohlvanwijngaarden.nl Wed Feb 15 16:30:43 2012 From: ewoud+ovirt at kohlvanwijngaarden.nl (Ewoud Kohl van Wijngaarden) Date: Wed, 15 Feb 2012 17:30:43 +0100 Subject: [Engine-devel] Upstream stabilization [was: Re: agenda for today's meeting] In-Reply-To: <4F3BDA1F.8050302@redhat.com> References: <4F3BB854.6010707@redhat.com> <4F3BCEB8.402@redhat.com> <4F3BDA1F.8050302@redhat.com> Message-ID: <20120215163039.GC4385@bogey.xentower.nl> On Wed, Feb 15, 2012 at 11:15:27AM -0500, Jon Choate wrote: > I'd like to add to that: > > If you give at patch the verified flag this implies that you have: > 1. pulled the patch locally and ensured that it builds > 2. have run all the unit tests and ensure that they all pass > > If either of these are not true, you need to give the patch "-1 > Fails" If you don't do this you are as much to blame as the author. There's a jenkins plugin that builds each patch and gives -1 or +1 depending on if it builds. Now I don't know if we have the computing power to do the same, but it might be worth considering. See https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger. From rgolan at redhat.com Wed Feb 15 16:50:11 2012 From: rgolan at redhat.com (Roy Golan) Date: Wed, 15 Feb 2012 11:50:11 -0500 (EST) Subject: [Engine-devel] bridgeless networks - update In-Reply-To: <5fbee67f-94ba-465c-8255-b715710654d9@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <875f86ce-dd81-4ec1-8b86-d2541eaf38ac@zmail01.collab.prod.int.phx2.redhat.com> following ovirt engine weekly here's a summary of the changes: 1. no validation during vmInterface creation 2. when attaching a network the default value is bridged (GUI responsibility) 3. monitoring - detect mixed configured cluster (network "foo" is bridged on one host and not on another) and issue an audit log warning with event interval of 1 day wiki will be updated accordingly. Thanks, Roy From lpeer at redhat.com Wed Feb 15 17:02:35 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 15 Feb 2012 19:02:35 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> References: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <4F3BE52B.5070402@redhat.com> On 15/02/12 18:28, Ayal Baron wrote: > > > ----- Original Message ----- >> Hi, >> >> A short summary from the call today, please correct me if I forgot or >> misunderstood something. >> >> Ayal argued that the failed host/storagedomain should be reactivated >> by a periodically executed job, he would prefer if the engine could >> [try to] correct the problem right on discovery. >> Livnat's point was that this is hard to implement and it is OK if we >> move it to Nonoperational state and periodically check it again. 
>> >> There was a little arguing if we call the current behavior a bug or a >> missing behavior, I believe this is not quite important. >> >> I did not fully understand the last few sentences from Livant, did we >> manage to agree in a change in the plan? > > A couple of points that we agreed upon: > 1. no need for new mechanism, just initiate this from the monitoring context. > Preferably, if not difficult, evaluate the monitoring data, if host should remain in non-op then don't bother running initVdsOnUp > 2. configuration of when to call initvdsonup is orthogonal to auto-init behaviour and if introduced should be on by default and user should be able to configure this either on or off for the host in general (no lower granularity) and can only be configured via the API. > When disabled initVdsOnUp would be called only when admin activates the host/storage and any error would keep it inactive (I still don't understand why this is at all needed but whatever). > Also a note from Moran on the call was to check if we can unify the non-operational and Error statuses of the host. It was mentioned on the call that the reason for having ERROR state is for recovery (time out of the error state) but since we are about to recover from non-operational status as well there is no reason to have two different statuses. > Note that going forward what I envision is engine pushing down the entire host configuration once and from that point on the host would try to keep this configuration up and running. Once this happens there will be no need for initVdsOnUp at all. > > >> >> Anyway, I agree with Ayal that it would be very nice if the engine >> could fix the issues right on discovery, but I also agree that this >> feature would take a bigger effort. It would be nice to know what >> effort it would take to get the monitoring do this safely. Could we >> still call it monitoring then? >> Basically the monitoring flow moves the host to non-operational, what Ayal suggests is that it will also trigger the recovery flow (initialization flow). I think that modeling it to be triggered from the monitoring flow will block monitoring of the host during the initialization flow which can save us races going forward. Let's see if we can design the solution to be triggered by the monitoring. >> Laszlo >> >> ----- Original Message ----- >>> From: "Ayal Baron" >>> To: "Laszlo Hornyak" >>> Cc: engine-devel at ovirt.org, "Yaniv Kaul" >>> Sent: Wednesday, February 15, 2012 12:46:05 PM >>> Subject: Re: [Engine-devel] Autorecovery feature plan for review >>> >>> >>> >>> ----- Original Message ----- >>>> Hi Ayal, >>>> >>>> ----- Original Message ----- >>>>> From: "Ayal Baron" >>>>> To: "Yaniv Kaul" >>>>> Cc: engine-devel at ovirt.org >>>>> Sent: Wednesday, February 15, 2012 12:19:48 PM >>>>> Subject: Re: [Engine-devel] Autorecovery feature plan for >>>>> review >>>>> >>>>> >>>>>> >>>>>> I still fail to understand why you 'punish' existing objects >>>>>> and >>>>>> not >>>>>> giving them the new feature enabled by default. >>>>> >>>>> This is not a feature, it's a bug! >>>> >>>> Whatever we call it, it is a change in behavior. We agreed that >>>> it >>>> will be enabled for all existing objects by default. >>>> >>>> http://globalnerdy.com/wordpress/wp-content/uploads/2007/12/bug_vs_feature.gif >>>> >>>>> This should not be treated as a feature and this should not be >>>>> configurable! 
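[Editor's note] Picking up the note above about unifying the non-operational and Error statuses: one possible shape, sketched in hypothetical Java (none of these names exist in the engine), is a single non-operational status whose reason is tagged as transient or not, with the old Error behavior expressed as a cool-down before auto-recovery. The 30-minute figure and the reason names are assumptions for the example, not agreed values.

import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch: one NON_OPERATIONAL status instead of a separate ERROR
// state, with the old "error" behavior expressed as a cool-down on recovery.
// Names are illustrative, not actual ovirt-engine classes.
public class NonOperationalReason {

    public enum Kind {
        HOST_TOO_LOADED(true),             // transient: retry after a cool-down
        STORAGE_DOMAIN_UNREACHABLE(true),
        VDSM_MISCONFIGURED(false),         // non-transient: needs admin intervention
        MISSING_CLUSTER_NETWORK(false);

        private final boolean transientFailure;
        Kind(boolean transientFailure) { this.transientFailure = transientFailure; }
        public boolean isTransient() { return transientFailure; }
    }

    private final Kind kind;
    private final Instant since;
    private static final Duration COOL_DOWN = Duration.ofMinutes(30);

    public NonOperationalReason(Kind kind, Instant since) {
        this.kind = kind;
        this.since = since;
    }

    /** Should the periodic recovery flow try to reactivate this host now? */
    public boolean eligibleForAutoRecovery(Instant now) {
        return kind.isTransient()
                && Duration.between(since, now).compareTo(COOL_DOWN) >= 0;
    }

    public static void main(String[] args) {
        NonOperationalReason reason = new NonOperationalReason(
                Kind.HOST_TOO_LOADED, Instant.now().minus(Duration.ofHours(1)));
        System.out.println(reason.eligibleForAutoRecovery(Instant.now())); // true
    }
}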
>>>> >>>> I can imagine some situations when I would not like the >>>> autorecovery >>>> to happen, but if everyone agrees not to make it configurable, I >>>> will just remove it from my patchset. >>> >>> It's not autorecovery, you're not recovering anything. You're >>> reflecting the fact that the resource is back to normal (not due to >>> anything that the engine did). >>> This is why it is a bug today. >>> This is why it should not be configurable. >>> >>>> >>>>> Today an object moves to non-operational due to state reported >>>>> by >>>>> vdsm. The object should immediately return to up the moment >>>>> vdsm >>>>> reports the object as ok (this means that you don't stop >>>>> monitoring >>>>> just because there is an error). >>>>> That's it. no db field and no nothing... >>>>> This pertains to storage domains, network, host status, >>>>> whatever. >>>>> >>>>>> Y. >>>>>> >>>>>>> b. In environment to be clean installed -we have 0 existing >>>>>>> entities - >>>>>>> after clean install all new entities in the system will be >>>>>>> create >>>>>>> with >>>>>>> auto recoverable set to true. >>>>>>> Will this be considered a bad behavior? >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Engine-devel mailing list >>>>>>> Engine-devel at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> >>> >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From jchoate at redhat.com Wed Feb 15 17:49:28 2012 From: jchoate at redhat.com (Jon Choate) Date: Wed, 15 Feb 2012 12:49:28 -0500 Subject: [Engine-devel] Unit test failures and RunVmCommand Message-ID: <4F3BF028.1040508@redhat.com> I was trying to fix the broken unit tests so that I can make sure my changes are not breaking anything. While trying to fix the RunVmCommand tests I found some logic that I am unsure of. It seems like with all of the nested conditions in this method the scope of some of the checks is wrong. In RunVmCommand.CanRunVm we check the boot sequence. If the vm is set to only boot from a hard disk, we check to make sure that the vm has a hard disk and that it is plugged. If both of these are true, we do not perform any other checks and return that the vm can start. One of the checks that gets skipped is whether or not the vm is already running. Do we really want to skip that check? From iheim at redhat.com Wed Feb 15 20:27:20 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 15 Feb 2012 22:27:20 +0200 Subject: [Engine-devel] SharedRawDisk feature detail In-Reply-To: References: Message-ID: <4F3C1528.8010400@redhat.com> On 02/15/2012 12:20 PM, Miki Kenneth wrote: ... >> I think we're approaching this the wrong way. >> There are 2 possible problems we're trying to solve here and having >> the original shared disk as part of the template is the wrong >> solution for both. 
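[Editor's note] Regarding the RunVmCommand.CanRunVm question raised in this thread (the boot-from-hard-disk branch returning early and skipping the "is it already running" check), here is a deliberately simplified Java sketch of the problem and of one way to scope the checks. It is not the real command code; the types and fields are invented to keep the example self-contained.

// Simplified, hypothetical illustration of the CanRunVm ordering question;
// not the real RunVmCommand code.
public class CanRunVmSketch {

    public enum BootSequence { HARD_DISK, CD, NETWORK }
    public enum VmStatus { DOWN, UP }

    static class Vm {
        BootSequence bootSequence = BootSequence.HARD_DISK;
        VmStatus status = VmStatus.UP;
        boolean hasPluggedDisk = true;
    }

    /** Problematic shape: the early return skips every later check. */
    static boolean canRunVmNested(Vm vm) {
        if (vm.bootSequence == BootSequence.HARD_DISK) {
            // Returns as soon as the disk checks pass -> "is it already
            // running?" is never evaluated on this branch.
            return vm.hasPluggedDisk;
        }
        return vm.status == VmStatus.DOWN /* && other checks */;
    }

    /** Flat shape: global checks first, boot-sequence checks after. */
    static boolean canRunVmFlat(Vm vm) {
        if (vm.status != VmStatus.DOWN) {
            return false; // applies regardless of boot sequence
        }
        if (vm.bootSequence == BootSequence.HARD_DISK && !vm.hasPluggedDisk) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Vm runningVm = new Vm(); // UP, boots from a plugged hard disk
        System.out.println(canRunVmNested(runningVm)); // true  -> the surprise
        System.out.println(canRunVmFlat(runningVm));   // false -> expected
    }
}

The flat version makes it explicit which checks are global and which belong to a specific boot sequence, which is the scoping question the mail raises.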
>> >> The first problem is - user wants to attach the shared disk to all >> VMs derived from the template - in this case the shared disk is >> *not* a part of the template and what is needed is an automatic way >> to configure newly created VMs that would allow to attach the shared >> disk. > My personal feeling is that this is the common use case. my view is the use case of shared disk defined at template level is nice, but can be done at a later phase. the higher priority would be to actually support a shared disk, even if user needs to attach it to the VMs they instantiated from the template. so we need to remember the template use case, and not do things making it more complex later, but can also do some validations preventing it if it makes things more complex. From lpeer at redhat.com Wed Feb 15 20:59:45 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 15 Feb 2012 22:59:45 +0200 Subject: [Engine-devel] bridgeless networks - update In-Reply-To: <875f86ce-dd81-4ec1-8b86-d2541eaf38ac@zmail01.collab.prod.int.phx2.redhat.com> References: <875f86ce-dd81-4ec1-8b86-d2541eaf38ac@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F3C1CC1.7080303@redhat.com> On 15/02/12 18:50, Roy Golan wrote: > following ovirt engine weekly here's a summary of the changes: > > 1. no validation during vmInterface creation > 2. when attaching a network the default value is bridged (GUI responsibility) > 3. monitoring - detect mixed configured cluster (network "foo" is bridged on one host and not on another) > and issue an audit log warning with event interval of 1 day I think it is worth mentioning that mixed meaning mix within a cluster (not DC). > > wiki will be updated accordingly. > > Thanks, > Roy > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Wed Feb 15 22:21:49 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 16 Feb 2012 00:21:49 +0200 Subject: [Engine-devel] bridgeless networks - update In-Reply-To: <875f86ce-dd81-4ec1-8b86-d2541eaf38ac@zmail01.collab.prod.int.phx2.redhat.com> References: <875f86ce-dd81-4ec1-8b86-d2541eaf38ac@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F3C2FFD.9060904@redhat.com> On 02/15/2012 06:50 PM, Roy Golan wrote: > following ovirt engine weekly here's a summary of the changes: > > 1. no validation during vmInterface creation > 2. when attaching a network the default value is bridged (GUI responsibility) so what is the default if one didn't pass it in the API (i.e., backend should have a default value. UI may choose whatver). (and the UI/API name in the wiki is something around "allow to run vms", not bridged - so maybe change the terminology used to discuss this to reduce risk of confusion/mistakes later). > 3. monitoring - detect mixed configured cluster (network "foo" is bridged on one host and not on another) > and issue an audit log warning with event interval of 1 day why does this matter if cluster is mixed? i.e., wouldn't this be interesting per host, that a network which could be bridgeless is bridged to warn about? > > wiki will be updated accordingly. having fast read it, couldn't understand the backward compatibility section ("Its compatibility version is 3.1 and enforced by the enclosed command as mentioned already") i'd make it clear this is a 3.1 DC feature, and not a cluster one(?) becuase iirc, setupNetworks is actually host level 3.1 compatibility level, not cluster/dc wide? 
> > Thanks, > Roy > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Wed Feb 15 22:38:35 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 16 Feb 2012 00:38:35 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3BE52B.5070402@redhat.com> References: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> <4F3BE52B.5070402@redhat.com> Message-ID: <4F3C33EB.40504@redhat.com> On 02/15/2012 07:02 PM, Livnat Peer wrote: > On 15/02/12 18:28, Ayal Baron wrote: >> >> >> ----- Original Message ----- >>> Hi, >>> >>> A short summary from the call today, please correct me if I forgot or >>> misunderstood something. >>> >>> Ayal argued that the failed host/storagedomain should be reactivated >>> by a periodically executed job, he would prefer if the engine could >>> [try to] correct the problem right on discovery. >>> Livnat's point was that this is hard to implement and it is OK if we >>> move it to Nonoperational state and periodically check it again. >>> >>> There was a little arguing if we call the current behavior a bug or a >>> missing behavior, I believe this is not quite important. >>> >>> I did not fully understand the last few sentences from Livant, did we >>> manage to agree in a change in the plan? >> >> A couple of points that we agreed upon: >> 1. no need for new mechanism, just initiate this from the monitoring context. >> Preferably, if not difficult, evaluate the monitoring data, if host should remain in non-op then don't bother running initVdsOnUp >> 2. configuration of when to call initvdsonup is orthogonal to auto-init behaviour and if introduced should be on by default and user should be able to configure this either on or off for the host in general (no lower granularity) and can only be configured via the API. >> When disabled initVdsOnUp would be called only when admin activates the host/storage and any error would keep it inactive (I still don't understand why this is at all needed but whatever). >> > > Also a note from Moran on the call was to check if we can unify the > non-operational and Error statuses of the host. > It was mentioned on the call that the reason for having ERROR state is > for recovery (time out of the error state) but since we are about to > recover from non-operational status as well there is no reason to have > two different statuses. they are not exactly the same. or should i say, error is supposed to be when reason isn't related to host being non-operational. what is error state? a host will go into error state if it fails to run 3 (configurable) VMs, that succeeded running on other host on retry. i.e., something is wrong with that host, failing to launch VMs. as it happens, it already "auto recovers" for this mode after a certain period of time. why? because the host will fail to run virtual machines, and will be the least loaded, so it will be the first target selected to run them, which will continue to fail. so there is a negative scoring mechanism on number of errors, till host is taken out for a while. (I don't remember if the reverse is true and the VM goes into error mode if the VM failed to launch on all hosts per number of retries. i think this wasn't needed and user just got an error in audit log) i can see two reasons a host will go into error state: 1. 
monitoring didn't detect an issue yet, and host would have/will/should go into non-operational mode. if host will go into non-operational mode, and will auto recover with the above flow, i guess it is fine. 2. cause for failure isn't something we monitor for (upgraded to a bad version of qemu, or qemu got corrupted). now, the error mode was developed quite a long time ago (august 2007 iirc), so could be it mostly compensated for the first reason which is now better monitored. i wonder how often error state is seen due to a reason which isn't monitored already. moran - do you have examples of when you see error state of hosts? From iheim at redhat.com Thu Feb 16 00:31:17 2012 From: iheim at redhat.com (Itamar Heim) Date: Thu, 16 Feb 2012 02:31:17 +0200 Subject: [Engine-devel] Upstream stabilization [was: Re: agenda for today's meeting] In-Reply-To: <20120215163039.GC4385@bogey.xentower.nl> References: <4F3BB854.6010707@redhat.com> <4F3BCEB8.402@redhat.com> <4F3BDA1F.8050302@redhat.com> <20120215163039.GC4385@bogey.xentower.nl> Message-ID: <4F3C4E55.9080307@redhat.com> On 02/15/2012 06:30 PM, Ewoud Kohl van Wijngaarden wrote: > On Wed, Feb 15, 2012 at 11:15:27AM -0500, Jon Choate wrote: >> I'd like to add to that: >> >> If you give at patch the verified flag this implies that you have: >> 1. pulled the patch locally and ensured that it builds >> 2. have run all the unit tests and ensure that they all pass >> >> If either of these are not true, you need to give the patch "-1 >> Fails" If you don't do this you are as much to blame as the author. > There's a jenkins plugin that builds each patch and gives -1 or +1 > depending on if it builds. Now I don't know if we have the computing > power to do the same, but it might be worth considering. See > https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger. indeed. eyal is fighting some hopefully last maven3 conflicts between git/gerrit plugin to get this rolling. From mgoldboi at redhat.com Thu Feb 16 07:29:17 2012 From: mgoldboi at redhat.com (Moran Goldboim) Date: Thu, 16 Feb 2012 09:29:17 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3C33EB.40504@redhat.com> References: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> <4F3BE52B.5070402@redhat.com> <4F3C33EB.40504@redhat.com> Message-ID: <4F3CB04D.6000107@redhat.com> On 02/16/2012 12:38 AM, Itamar Heim wrote: > On 02/15/2012 07:02 PM, Livnat Peer wrote: >> On 15/02/12 18:28, Ayal Baron wrote: >>> >>> >>> ----- Original Message ----- >>>> Hi, >>>> >>>> A short summary from the call today, please correct me if I forgot or >>>> misunderstood something. >>>> >>>> Ayal argued that the failed host/storagedomain should be reactivated >>>> by a periodically executed job, he would prefer if the engine could >>>> [try to] correct the problem right on discovery. >>>> Livnat's point was that this is hard to implement and it is OK if we >>>> move it to Nonoperational state and periodically check it again. >>>> >>>> There was a little arguing if we call the current behavior a bug or a >>>> missing behavior, I believe this is not quite important. >>>> >>>> I did not fully understand the last few sentences from Livant, did we >>>> manage to agree in a change in the plan? >>> >>> A couple of points that we agreed upon: >>> 1. no need for new mechanism, just initiate this from the monitoring >>> context. 
>>> Preferably, if not difficult, evaluate the monitoring data, if >>> host should remain in non-op then don't bother running initVdsOnUp >>> 2. configuration of when to call initvdsonup is orthogonal to >>> auto-init behaviour and if introduced should be on by default and >>> user should be able to configure this either on or off for the host >>> in general (no lower granularity) and can only be configured via the >>> API. >>> When disabled initVdsOnUp would be called only when admin activates >>> the host/storage and any error would keep it inactive (I still don't >>> understand why this is at all needed but whatever). >>> >> >> Also a note from Moran on the call was to check if we can unify the >> non-operational and Error statuses of the host. >> It was mentioned on the call that the reason for having ERROR state is >> for recovery (time out of the error state) but since we are about to >> recover from non-operational status as well there is no reason to have >> two different statuses. > > they are not exactly the same. > or should i say, error is supposed to be when reason isn't related to > host being non-operational. > > what is error state? > a host will go into error state if it fails to run 3 (configurable) > VMs, that succeeded running on other host on retry. > i.e., something is wrong with that host, failing to launch VMs. > as it happens, it already "auto recovers" for this mode after a > certain period of time. > > why? because the host will fail to run virtual machines, and will be > the least loaded, so it will be the first target selected to run them, > which will continue to fail. > > so there is a negative scoring mechanism on number of errors, till > host is taken out for a while. > > (I don't remember if the reverse is true and the VM goes into error > mode if the VM failed to launch on all hosts per number of retries. i > think this wasn't needed and user just got an error in audit log) > > i can see two reasons a host will go into error state: > 1. monitoring didn't detect an issue yet, and host would > have/will/should go into non-operational mode. > if host will go into non-operational mode, and will auto recover with > the above flow, i guess it is fine. > > 2. cause for failure isn't something we monitor for (upgraded to a bad > version of qemu, or qemu got corrupted). > > now, the error mode was developed quite a long time ago (august 2007 > iirc), so could be it mostly compensated for the first reason which is > now better monitored. > i wonder how often error state is seen due to a reason which isn't > monitored already. > moran - do you have examples of when you see error state of hosts? usually it happened when there were a problematic/ misconfigurated vdsm / libvirt which failed to run vms (nothing we can recover from)- i haven't faced the issue of "host it too loaded" that status has some other syndromes, however the behaviour on that state is very much the same -waiting for 30 min (?) and than move it to activated. Moran. 
From ykaul at redhat.com Thu Feb 16 07:35:13 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 16 Feb 2012 09:35:13 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3CB04D.6000107@redhat.com> References: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> <4F3BE52B.5070402@redhat.com> <4F3C33EB.40504@redhat.com> <4F3CB04D.6000107@redhat.com> Message-ID: <4F3CB1B1.9030903@redhat.com> On 02/16/2012 09:29 AM, Moran Goldboim wrote: > On 02/16/2012 12:38 AM, Itamar Heim wrote: >> On 02/15/2012 07:02 PM, Livnat Peer wrote: >>> On 15/02/12 18:28, Ayal Baron wrote: >>>> >>>> >>>> ----- Original Message ----- >>>>> Hi, >>>>> >>>>> A short summary from the call today, please correct me if I forgot or >>>>> misunderstood something. >>>>> >>>>> Ayal argued that the failed host/storagedomain should be reactivated >>>>> by a periodically executed job, he would prefer if the engine could >>>>> [try to] correct the problem right on discovery. >>>>> Livnat's point was that this is hard to implement and it is OK if we >>>>> move it to Nonoperational state and periodically check it again. >>>>> >>>>> There was a little arguing if we call the current behavior a bug or a >>>>> missing behavior, I believe this is not quite important. >>>>> >>>>> I did not fully understand the last few sentences from Livant, did we >>>>> manage to agree in a change in the plan? >>>> >>>> A couple of points that we agreed upon: >>>> 1. no need for new mechanism, just initiate this from the >>>> monitoring context. >>>> Preferably, if not difficult, evaluate the monitoring data, if >>>> host should remain in non-op then don't bother running initVdsOnUp >>>> 2. configuration of when to call initvdsonup is orthogonal to >>>> auto-init behaviour and if introduced should be on by default and >>>> user should be able to configure this either on or off for the host >>>> in general (no lower granularity) and can only be configured via >>>> the API. >>>> When disabled initVdsOnUp would be called only when admin activates >>>> the host/storage and any error would keep it inactive (I still >>>> don't understand why this is at all needed but whatever). >>>> >>> >>> Also a note from Moran on the call was to check if we can unify the >>> non-operational and Error statuses of the host. >>> It was mentioned on the call that the reason for having ERROR state is >>> for recovery (time out of the error state) but since we are about to >>> recover from non-operational status as well there is no reason to have >>> two different statuses. >> >> they are not exactly the same. >> or should i say, error is supposed to be when reason isn't related to >> host being non-operational. >> >> what is error state? >> a host will go into error state if it fails to run 3 (configurable) >> VMs, that succeeded running on other host on retry. >> i.e., something is wrong with that host, failing to launch VMs. >> as it happens, it already "auto recovers" for this mode after a >> certain period of time. >> >> why? because the host will fail to run virtual machines, and will be >> the least loaded, so it will be the first target selected to run >> them, which will continue to fail. >> >> so there is a negative scoring mechanism on number of errors, till >> host is taken out for a while. >> >> (I don't remember if the reverse is true and the VM goes into error >> mode if the VM failed to launch on all hosts per number of retries. 
i >> think this wasn't needed and user just got an error in audit log) >> >> i can see two reasons a host will go into error state: >> 1. monitoring didn't detect an issue yet, and host would >> have/will/should go into non-operational mode. >> if host will go into non-operational mode, and will auto recover with >> the above flow, i guess it is fine. >> >> 2. cause for failure isn't something we monitor for (upgraded to a >> bad version of qemu, or qemu got corrupted). >> >> now, the error mode was developed quite a long time ago (august 2007 >> iirc), so could be it mostly compensated for the first reason which >> is now better monitored. >> i wonder how often error state is seen due to a reason which isn't >> monitored already. >> moran - do you have examples of when you see error state of hosts? > > usually it happened when there were a problematic/ misconfigurated > vdsm / libvirt which failed to run vms (nothing we can recover from)- > i haven't faced the issue of "host it too loaded" that status has some > other syndromes, however the behaviour on that state is very much the > same -waiting for 30 min (?) and than move it to activated. > Moran. 'host is too loaded' is too loaded is the only transient state where a temporary 'error' state makes sense, but in the same time, it can also fit the 'non operational' state description. From my experience, the problem with KVM/libvirt/VDSM mis-configured is never temporary, (= magically solved by itself, without concrete user intervention). IMHO, it should move the host to an error state that would not automatically recover from. Regardless, consolidating the names of the states ('inactive, detached, non operational, maintenance, error, unknown' ...) would be nice too. Probably can't be done for all, of course. Y. > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From mgoldboi at redhat.com Thu Feb 16 08:01:37 2012 From: mgoldboi at redhat.com (Moran Goldboim) Date: Thu, 16 Feb 2012 10:01:37 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3CB1B1.9030903@redhat.com> References: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> <4F3BE52B.5070402@redhat.com> <4F3C33EB.40504@redhat.com> <4F3CB04D.6000107@redhat.com> <4F3CB1B1.9030903@redhat.com> Message-ID: <4F3CB7E1.50606@redhat.com> On 02/16/2012 09:35 AM, Yaniv Kaul wrote: > On 02/16/2012 09:29 AM, Moran Goldboim wrote: >> On 02/16/2012 12:38 AM, Itamar Heim wrote: >>> On 02/15/2012 07:02 PM, Livnat Peer wrote: >>>> On 15/02/12 18:28, Ayal Baron wrote: >>>>> >>>>> >>>>> ----- Original Message ----- >>>>>> Hi, >>>>>> >>>>>> A short summary from the call today, please correct me if I >>>>>> forgot or >>>>>> misunderstood something. >>>>>> >>>>>> Ayal argued that the failed host/storagedomain should be reactivated >>>>>> by a periodically executed job, he would prefer if the engine could >>>>>> [try to] correct the problem right on discovery. >>>>>> Livnat's point was that this is hard to implement and it is OK if we >>>>>> move it to Nonoperational state and periodically check it again. >>>>>> >>>>>> There was a little arguing if we call the current behavior a bug >>>>>> or a >>>>>> missing behavior, I believe this is not quite important. >>>>>> >>>>>> I did not fully understand the last few sentences from Livant, >>>>>> did we >>>>>> manage to agree in a change in the plan? 
>>>>> >>>>> A couple of points that we agreed upon: >>>>> 1. no need for new mechanism, just initiate this from the >>>>> monitoring context. >>>>> Preferably, if not difficult, evaluate the monitoring data, if >>>>> host should remain in non-op then don't bother running initVdsOnUp >>>>> 2. configuration of when to call initvdsonup is orthogonal to >>>>> auto-init behaviour and if introduced should be on by default and >>>>> user should be able to configure this either on or off for the >>>>> host in general (no lower granularity) and can only be configured >>>>> via the API. >>>>> When disabled initVdsOnUp would be called only when admin >>>>> activates the host/storage and any error would keep it inactive (I >>>>> still don't understand why this is at all needed but whatever). >>>>> >>>> >>>> Also a note from Moran on the call was to check if we can unify the >>>> non-operational and Error statuses of the host. >>>> It was mentioned on the call that the reason for having ERROR state is >>>> for recovery (time out of the error state) but since we are about to >>>> recover from non-operational status as well there is no reason to have >>>> two different statuses. >>> >>> they are not exactly the same. >>> or should i say, error is supposed to be when reason isn't related >>> to host being non-operational. >>> >>> what is error state? >>> a host will go into error state if it fails to run 3 (configurable) >>> VMs, that succeeded running on other host on retry. >>> i.e., something is wrong with that host, failing to launch VMs. >>> as it happens, it already "auto recovers" for this mode after a >>> certain period of time. >>> >>> why? because the host will fail to run virtual machines, and will be >>> the least loaded, so it will be the first target selected to run >>> them, which will continue to fail. >>> >>> so there is a negative scoring mechanism on number of errors, till >>> host is taken out for a while. >>> >>> (I don't remember if the reverse is true and the VM goes into error >>> mode if the VM failed to launch on all hosts per number of retries. >>> i think this wasn't needed and user just got an error in audit log) >>> >>> i can see two reasons a host will go into error state: >>> 1. monitoring didn't detect an issue yet, and host would >>> have/will/should go into non-operational mode. >>> if host will go into non-operational mode, and will auto recover >>> with the above flow, i guess it is fine. >>> >>> 2. cause for failure isn't something we monitor for (upgraded to a >>> bad version of qemu, or qemu got corrupted). >>> >>> now, the error mode was developed quite a long time ago (august 2007 >>> iirc), so could be it mostly compensated for the first reason which >>> is now better monitored. >>> i wonder how often error state is seen due to a reason which isn't >>> monitored already. >>> moran - do you have examples of when you see error state of hosts? >> >> usually it happened when there were a problematic/ misconfigurated >> vdsm / libvirt which failed to run vms (nothing we can recover from)- >> i haven't faced the issue of "host it too loaded" that status has >> some other syndromes, however the behaviour on that state is very >> much the same -waiting for 30 min (?) and than move it to activated. >> Moran. > > 'host is too loaded' is too loaded is the only transient state where a > temporary 'error' state makes sense, but in the same time, it can also > fit the 'non operational' state description. 
> From my experience, the problem with KVM/libvirt/VDSM mis-configured > is never temporary, (= magically solved by itself, without concrete > user intervention). IMHO, it should move the host to an error state > that would not automatically recover from. > Regardless, consolidating the names of the states ('inactive, > detached, non operational, maintenance, error, unknown' ...) would be > nice too. Probably can't be done for all, of course. > Y. agreed, most of the causes of ERROR state aren't transient, but looks to me as if this state is redundant and could be taken care as part of the other host states, since the way it's being used today isn't very helpful as well. Moran. > > >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkenneth at redhat.com Thu Feb 16 08:28:12 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Thu, 16 Feb 2012 03:28:12 -0500 (EST) Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3CB7E1.50606@redhat.com> Message-ID: <8ce49456-6cea-46be-96db-1770c060b375@mkenneth.csb> ----- Original Message ----- > From: "Moran Goldboim" > To: "Yaniv Kaul" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 16, 2012 10:01:37 AM > Subject: Re: [Engine-devel] Autorecovery feature plan for review > > On 02/16/2012 09:35 AM, Yaniv Kaul wrote: > > On 02/16/2012 09:29 AM, Moran Goldboim wrote: > >> On 02/16/2012 12:38 AM, Itamar Heim wrote: > >>> On 02/15/2012 07:02 PM, Livnat Peer wrote: > >>>> On 15/02/12 18:28, Ayal Baron wrote: > >>>>> > >>>>> > >>>>> ----- Original Message ----- > >>>>>> Hi, > >>>>>> > >>>>>> A short summary from the call today, please correct me if I > >>>>>> forgot or > >>>>>> misunderstood something. > >>>>>> > >>>>>> Ayal argued that the failed host/storagedomain should be > >>>>>> reactivated > >>>>>> by a periodically executed job, he would prefer if the engine > >>>>>> could > >>>>>> [try to] correct the problem right on discovery. > >>>>>> Livnat's point was that this is hard to implement and it is OK > >>>>>> if we > >>>>>> move it to Nonoperational state and periodically check it > >>>>>> again. > >>>>>> > >>>>>> There was a little arguing if we call the current behavior a > >>>>>> bug > >>>>>> or a > >>>>>> missing behavior, I believe this is not quite important. > >>>>>> > >>>>>> I did not fully understand the last few sentences from Livant, > >>>>>> did we > >>>>>> manage to agree in a change in the plan? > >>>>> > >>>>> A couple of points that we agreed upon: > >>>>> 1. no need for new mechanism, just initiate this from the > >>>>> monitoring context. > >>>>> Preferably, if not difficult, evaluate the monitoring data, > >>>>> if > >>>>> host should remain in non-op then don't bother running > >>>>> initVdsOnUp > >>>>> 2. configuration of when to call initvdsonup is orthogonal to > >>>>> auto-init behaviour and if introduced should be on by default > >>>>> and > >>>>> user should be able to configure this either on or off for the > >>>>> host in general (no lower granularity) and can only be > >>>>> configured > >>>>> via the API. > >>>>> When disabled initVdsOnUp would be called only when admin > >>>>> activates the host/storage and any error would keep it inactive > >>>>> (I > >>>>> still don't understand why this is at all needed but whatever). 
> >>>>> > >>>> > >>>> Also a note from Moran on the call was to check if we can unify > >>>> the > >>>> non-operational and Error statuses of the host. > >>>> It was mentioned on the call that the reason for having ERROR > >>>> state is > >>>> for recovery (time out of the error state) but since we are > >>>> about to > >>>> recover from non-operational status as well there is no reason > >>>> to have > >>>> two different statuses. > >>> > >>> they are not exactly the same. > >>> or should i say, error is supposed to be when reason isn't > >>> related > >>> to host being non-operational. > >>> > >>> what is error state? > >>> a host will go into error state if it fails to run 3 > >>> (configurable) > >>> VMs, that succeeded running on other host on retry. > >>> i.e., something is wrong with that host, failing to launch VMs. > >>> as it happens, it already "auto recovers" for this mode after a > >>> certain period of time. > >>> > >>> why? because the host will fail to run virtual machines, and will > >>> be > >>> the least loaded, so it will be the first target selected to run > >>> them, which will continue to fail. > >>> > >>> so there is a negative scoring mechanism on number of errors, > >>> till > >>> host is taken out for a while. > >>> > >>> (I don't remember if the reverse is true and the VM goes into > >>> error > >>> mode if the VM failed to launch on all hosts per number of > >>> retries. > >>> i think this wasn't needed and user just got an error in audit > >>> log) > >>> > >>> i can see two reasons a host will go into error state: > >>> 1. monitoring didn't detect an issue yet, and host would > >>> have/will/should go into non-operational mode. > >>> if host will go into non-operational mode, and will auto recover > >>> with the above flow, i guess it is fine. > >>> > >>> 2. cause for failure isn't something we monitor for (upgraded to > >>> a > >>> bad version of qemu, or qemu got corrupted). > >>> > >>> now, the error mode was developed quite a long time ago (august > >>> 2007 > >>> iirc), so could be it mostly compensated for the first reason > >>> which > >>> is now better monitored. > >>> i wonder how often error state is seen due to a reason which > >>> isn't > >>> monitored already. > >>> moran - do you have examples of when you see error state of > >>> hosts? > >> > >> usually it happened when there were a problematic/ misconfigurated > >> vdsm / libvirt which failed to run vms (nothing we can recover > >> from)- > >> i haven't faced the issue of "host it too loaded" that status has > >> some other syndromes, however the behaviour on that state is very > >> much the same -waiting for 30 min (?) and than move it to > >> activated. > >> Moran. > > > > 'host is too loaded' is too loaded is the only transient state > > where a > > temporary 'error' state makes sense, but in the same time, it can > > also > > fit the 'non operational' state description. > > From my experience, the problem with KVM/libvirt/VDSM > > mis-configured > > is never temporary, (= magically solved by itself, without concrete > > user intervention). IMHO, it should move the host to an error state > > that would not automatically recover from. > > Regardless, consolidating the names of the states ('inactive, > > detached, non operational, maintenance, error, unknown' ...) would > > be > > nice too. Probably can't be done for all, of course. > > Y. 
> > agreed, most of the causes of ERROR state aren't transient, but looks > to > me as if this state is redundant and could be taken care as part of > the > other host states, since the way it's being used today isn't very > helpful as well. > Moran. However, I can envision an ERROR state that you don't want to keep retry mechanism on... which might be a different behavior than the NON-OP one. > > > > > > > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mgoldboi at redhat.com Thu Feb 16 08:45:41 2012 From: mgoldboi at redhat.com (Moran Goldboim) Date: Thu, 16 Feb 2012 10:45:41 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <8ce49456-6cea-46be-96db-1770c060b375@mkenneth.csb> References: <8ce49456-6cea-46be-96db-1770c060b375@mkenneth.csb> Message-ID: <4F3CC235.6070304@redhat.com> On 02/16/2012 10:28 AM, Miki Kenneth wrote: > > ----- Original Message ----- >> From: "Moran Goldboim" >> To: "Yaniv Kaul" >> Cc: engine-devel at ovirt.org >> Sent: Thursday, February 16, 2012 10:01:37 AM >> Subject: Re: [Engine-devel] Autorecovery feature plan for review >> >> On 02/16/2012 09:35 AM, Yaniv Kaul wrote: >>> On 02/16/2012 09:29 AM, Moran Goldboim wrote: >>>> On 02/16/2012 12:38 AM, Itamar Heim wrote: >>>>> On 02/15/2012 07:02 PM, Livnat Peer wrote: >>>>>> On 15/02/12 18:28, Ayal Baron wrote: >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> Hi, >>>>>>>> >>>>>>>> A short summary from the call today, please correct me if I >>>>>>>> forgot or >>>>>>>> misunderstood something. >>>>>>>> >>>>>>>> Ayal argued that the failed host/storagedomain should be >>>>>>>> reactivated >>>>>>>> by a periodically executed job, he would prefer if the engine >>>>>>>> could >>>>>>>> [try to] correct the problem right on discovery. >>>>>>>> Livnat's point was that this is hard to implement and it is OK >>>>>>>> if we >>>>>>>> move it to Nonoperational state and periodically check it >>>>>>>> again. >>>>>>>> >>>>>>>> There was a little arguing if we call the current behavior a >>>>>>>> bug >>>>>>>> or a >>>>>>>> missing behavior, I believe this is not quite important. >>>>>>>> >>>>>>>> I did not fully understand the last few sentences from Livant, >>>>>>>> did we >>>>>>>> manage to agree in a change in the plan? >>>>>>> A couple of points that we agreed upon: >>>>>>> 1. no need for new mechanism, just initiate this from the >>>>>>> monitoring context. >>>>>>> Preferably, if not difficult, evaluate the monitoring data, >>>>>>> if >>>>>>> host should remain in non-op then don't bother running >>>>>>> initVdsOnUp >>>>>>> 2. configuration of when to call initvdsonup is orthogonal to >>>>>>> auto-init behaviour and if introduced should be on by default >>>>>>> and >>>>>>> user should be able to configure this either on or off for the >>>>>>> host in general (no lower granularity) and can only be >>>>>>> configured >>>>>>> via the API. >>>>>>> When disabled initVdsOnUp would be called only when admin >>>>>>> activates the host/storage and any error would keep it inactive >>>>>>> (I >>>>>>> still don't understand why this is at all needed but whatever). >>>>>>> >>>>>> Also a note from Moran on the call was to check if we can unify >>>>>> the >>>>>> non-operational and Error statuses of the host. 
>>>>>> It was mentioned on the call that the reason for having ERROR >>>>>> state is >>>>>> for recovery (time out of the error state) but since we are >>>>>> about to >>>>>> recover from non-operational status as well there is no reason >>>>>> to have >>>>>> two different statuses. >>>>> they are not exactly the same. >>>>> or should i say, error is supposed to be when reason isn't >>>>> related >>>>> to host being non-operational. >>>>> >>>>> what is error state? >>>>> a host will go into error state if it fails to run 3 >>>>> (configurable) >>>>> VMs, that succeeded running on other host on retry. >>>>> i.e., something is wrong with that host, failing to launch VMs. >>>>> as it happens, it already "auto recovers" for this mode after a >>>>> certain period of time. >>>>> >>>>> why? because the host will fail to run virtual machines, and will >>>>> be >>>>> the least loaded, so it will be the first target selected to run >>>>> them, which will continue to fail. >>>>> >>>>> so there is a negative scoring mechanism on number of errors, >>>>> till >>>>> host is taken out for a while. >>>>> >>>>> (I don't remember if the reverse is true and the VM goes into >>>>> error >>>>> mode if the VM failed to launch on all hosts per number of >>>>> retries. >>>>> i think this wasn't needed and user just got an error in audit >>>>> log) >>>>> >>>>> i can see two reasons a host will go into error state: >>>>> 1. monitoring didn't detect an issue yet, and host would >>>>> have/will/should go into non-operational mode. >>>>> if host will go into non-operational mode, and will auto recover >>>>> with the above flow, i guess it is fine. >>>>> >>>>> 2. cause for failure isn't something we monitor for (upgraded to >>>>> a >>>>> bad version of qemu, or qemu got corrupted). >>>>> >>>>> now, the error mode was developed quite a long time ago (august >>>>> 2007 >>>>> iirc), so could be it mostly compensated for the first reason >>>>> which >>>>> is now better monitored. >>>>> i wonder how often error state is seen due to a reason which >>>>> isn't >>>>> monitored already. >>>>> moran - do you have examples of when you see error state of >>>>> hosts? >>>> usually it happened when there were a problematic/ misconfigurated >>>> vdsm / libvirt which failed to run vms (nothing we can recover >>>> from)- >>>> i haven't faced the issue of "host it too loaded" that status has >>>> some other syndromes, however the behaviour on that state is very >>>> much the same -waiting for 30 min (?) and than move it to >>>> activated. >>>> Moran. >>> 'host is too loaded' is too loaded is the only transient state >>> where a >>> temporary 'error' state makes sense, but in the same time, it can >>> also >>> fit the 'non operational' state description. >>> From my experience, the problem with KVM/libvirt/VDSM >>> mis-configured >>> is never temporary, (= magically solved by itself, without concrete >>> user intervention). IMHO, it should move the host to an error state >>> that would not automatically recover from. >>> Regardless, consolidating the names of the states ('inactive, >>> detached, non operational, maintenance, error, unknown' ...) would >>> be >>> nice too. Probably can't be done for all, of course. >>> Y. >> agreed, most of the causes of ERROR state aren't transient, but looks >> to >> me as if this state is redundant and could be taken care as part of >> the >> other host states, since the way it's being used today isn't very >> helpful as well. >> Moran. 
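(For illustration only: the "negative scoring" and auto-recovery timeout behaviour described in the quoted text could be pictured roughly like the sketch below. The failure threshold, the 30-minute window and all class/method names are made-up placeholders, not the engine's actual configuration or code.)

    public class VdsErrorScoreSketch {
        private static final int MAX_FAILURES = 3;                    // "configurable" in the thread above
        private static final long RECOVER_AFTER_MS = 30L * 60 * 1000; // ~30 minutes, also a placeholder

        private int failedVmLaunches;
        private long enteredErrorAt = -1;

        /** Called when a VM failed to start on this host but then succeeded on another host. */
        public synchronized void onVmLaunchFailedHere() {
            failedVmLaunches++;
            if (failedVmLaunches >= MAX_FAILURES && enteredErrorAt < 0) {
                enteredErrorAt = System.currentTimeMillis();
                // Host leaves the scheduling pool, so it stops being picked as the
                // "least loaded" target for VMs it cannot actually run.
            }
        }

        /** Polled by monitoring: should the host be taken out of ERROR and retried? */
        public synchronized boolean shouldAutoRecover() {
            return enteredErrorAt >= 0
                    && System.currentTimeMillis() - enteredErrorAt >= RECOVER_AFTER_MS;
        }

        public synchronized void reset() {
            failedVmLaunches = 0;
            enteredErrorAt = -1;
        }
    }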
> However, I can envision an ERROR state that you don't want to keep a retry mechanism on...
> which might be a different behavior than the NON-OP one.
it still means that the host will be non-operational, just that you don't want to perform retries on it; it needs to be divided into transient/non-transient treatments (may apply to other scenarios as well - like qemu isn't there, or virtualization isn't enabled in the BIOS, etc.)
Moran.
>>
>>> >>>> _______________________________________________
>>>> Engine-devel mailing list
>>>> Engine-devel at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel
>> _______________________________________________
>> Engine-devel mailing list
>> Engine-devel at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/engine-devel
>>

From rgolan at redhat.com Thu Feb 16 08:48:08 2012
From: rgolan at redhat.com (Roy Golan)
Date: Thu, 16 Feb 2012 03:48:08 -0500 (EST)
Subject: [Engine-devel] bridgeless networks - update
In-Reply-To: <4F3C2FFD.9060904@redhat.com>
Message-ID: <8585235b-ea42-4a1b-b809-c64779bb66b1@zmail01.collab.prod.int.phx2.redhat.com>

----- Original Message -----
> From: "Itamar Heim"
> To: "Roy Golan"
> Cc: engine-devel at ovirt.org
> Sent: Thursday, February 16, 2012 12:21:49 AM
> Subject: Re: [Engine-devel] bridgeless networks - update
>
> On 02/15/2012 06:50 PM, Roy Golan wrote:
> > following ovirt engine weekly here's a summary of the changes:
> >
> > 1. no validation during vmInterface creation
> > 2. when attaching a network the default value is bridged (GUI
> > responsibility)
>
> so what is the default if one didn't pass it in the API (i.e., backend
> should have a default value. UI may choose whatever).
I generally dislike default values in the API because the client won't know what it is, default behavior might change in time, etc...
> (and the UI/API name in the wiki is something around "allow to run vms",
> not bridged - so maybe change the terminology used to discuss this to
> reduce risk of confusion/mistakes later).
>
> > 3. monitoring - detect mixed configured cluster (network "foo" is
> > bridged on one host and not on another)
> > and issue an audit log warning with event interval of 1 day
>
> why does this matter if cluster is mixed?
to inform about a potential migration problem
> i.e., wouldn't this be interesting per host, that a network which could
> be bridgeless is bridged to warn about?
I agree that an admin might like a notice that he can improve the host's performance.
But currently how can we say a network can be bridgeless, given that we don't have the nic type yet?
Detect if no vmInterfaces are connected to it, maybe?
>
> > wiki will be updated accordingly.
>
> having read it quickly, couldn't understand the backward compatibility
> section ("Its compatibility version is 3.1 and enforced by the enclosed
> command as mentioned already")
> i'd make it clear this is a 3.1 DC feature, and not a cluster one(?)
> because iirc, setupNetworks is actually host level 3.1 compatibility
> level, not cluster/dc wide?
Yes, it's not clear enough that a 3.0 cluster cannot have its networks bridgeless. Will fix this to be clearer.
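(As an aside, item 3 in Roy's summary - detect a cluster where a network is bridged on some hosts and not on others, and warn at most once a day - could be sketched roughly as below. This is only an illustration; the class, the map layout and the println standing in for an audit-log event are assumptions, not the engine's real monitoring code.)

    import java.util.Date;
    import java.util.HashMap;
    import java.util.Map;

    public class MixedBridgeDetector {
        private static final long EVENT_INTERVAL_MS = 24L * 60 * 60 * 1000; // 1 day
        private final Map<String, Date> lastWarning = new HashMap<String, Date>();

        /**
         * hostNetworks maps host name -> (network name -> is the network bridged on that host).
         * Called periodically from the cluster monitoring flow.
         */
        public void check(String clusterName, Map<String, Map<String, Boolean>> hostNetworks) {
            Map<String, Boolean> firstSeen = new HashMap<String, Boolean>();
            for (Map.Entry<String, Map<String, Boolean>> host : hostNetworks.entrySet()) {
                for (Map.Entry<String, Boolean> net : host.getValue().entrySet()) {
                    Boolean previous = firstSeen.get(net.getKey());
                    if (previous == null) {
                        firstSeen.put(net.getKey(), net.getValue());
                    } else if (!previous.equals(net.getValue())) {
                        warnOncePerDay(clusterName, net.getKey());
                    }
                }
            }
        }

        private void warnOncePerDay(String clusterName, String networkName) {
            Date now = new Date();
            Date last = lastWarning.get(networkName);
            if (last == null || now.getTime() - last.getTime() >= EVENT_INTERVAL_MS) {
                lastWarning.put(networkName, now);
                // The real implementation would raise an audit-log event instead of printing.
                System.out.println("Warning: network '" + networkName + "' is bridged on some hosts"
                        + " of cluster '" + clusterName + "' and bridgeless on others.");
            }
        }
    }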
> > > > > Thanks, > > Roy > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > From lpeer at redhat.com Thu Feb 16 09:22:45 2012 From: lpeer at redhat.com (Livnat Peer) Date: Thu, 16 Feb 2012 11:22:45 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3CB7E1.50606@redhat.com> References: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> <4F3BE52B.5070402@redhat.com> <4F3C33EB.40504@redhat.com> <4F3CB04D.6000107@redhat.com> <4F3CB1B1.9030903@redhat.com> <4F3CB7E1.50606@redhat.com> Message-ID: <4F3CCAE5.3090802@redhat.com> On 16/02/12 10:01, Moran Goldboim wrote: > On 02/16/2012 09:35 AM, Yaniv Kaul wrote: >> On 02/16/2012 09:29 AM, Moran Goldboim wrote: >>> On 02/16/2012 12:38 AM, Itamar Heim wrote: >>>> On 02/15/2012 07:02 PM, Livnat Peer wrote: >>>>> On 15/02/12 18:28, Ayal Baron wrote: >>>>>> >>>>>> >>>>>> ----- Original Message ----- >>>>>>> Hi, >>>>>>> >>>>>>> A short summary from the call today, please correct me if I >>>>>>> forgot or >>>>>>> misunderstood something. >>>>>>> >>>>>>> Ayal argued that the failed host/storagedomain should be reactivated >>>>>>> by a periodically executed job, he would prefer if the engine could >>>>>>> [try to] correct the problem right on discovery. >>>>>>> Livnat's point was that this is hard to implement and it is OK if we >>>>>>> move it to Nonoperational state and periodically check it again. >>>>>>> >>>>>>> There was a little arguing if we call the current behavior a bug >>>>>>> or a >>>>>>> missing behavior, I believe this is not quite important. >>>>>>> >>>>>>> I did not fully understand the last few sentences from Livant, >>>>>>> did we >>>>>>> manage to agree in a change in the plan? >>>>>> >>>>>> A couple of points that we agreed upon: >>>>>> 1. no need for new mechanism, just initiate this from the >>>>>> monitoring context. >>>>>> Preferably, if not difficult, evaluate the monitoring data, if >>>>>> host should remain in non-op then don't bother running initVdsOnUp >>>>>> 2. configuration of when to call initvdsonup is orthogonal to >>>>>> auto-init behaviour and if introduced should be on by default and >>>>>> user should be able to configure this either on or off for the >>>>>> host in general (no lower granularity) and can only be configured >>>>>> via the API. >>>>>> When disabled initVdsOnUp would be called only when admin >>>>>> activates the host/storage and any error would keep it inactive (I >>>>>> still don't understand why this is at all needed but whatever). >>>>>> >>>>> >>>>> Also a note from Moran on the call was to check if we can unify the >>>>> non-operational and Error statuses of the host. >>>>> It was mentioned on the call that the reason for having ERROR state is >>>>> for recovery (time out of the error state) but since we are about to >>>>> recover from non-operational status as well there is no reason to have >>>>> two different statuses. >>>> >>>> they are not exactly the same. >>>> or should i say, error is supposed to be when reason isn't related >>>> to host being non-operational. >>>> >>>> what is error state? >>>> a host will go into error state if it fails to run 3 (configurable) >>>> VMs, that succeeded running on other host on retry. >>>> i.e., something is wrong with that host, failing to launch VMs. >>>> as it happens, it already "auto recovers" for this mode after a >>>> certain period of time. >>>> >>>> why? 
because the host will fail to run virtual machines, and will be >>>> the least loaded, so it will be the first target selected to run >>>> them, which will continue to fail. >>>> >>>> so there is a negative scoring mechanism on number of errors, till >>>> host is taken out for a while. >>>> >>>> (I don't remember if the reverse is true and the VM goes into error >>>> mode if the VM failed to launch on all hosts per number of retries. >>>> i think this wasn't needed and user just got an error in audit log) >>>> >>>> i can see two reasons a host will go into error state: >>>> 1. monitoring didn't detect an issue yet, and host would >>>> have/will/should go into non-operational mode. >>>> if host will go into non-operational mode, and will auto recover >>>> with the above flow, i guess it is fine. >>>> >>>> 2. cause for failure isn't something we monitor for (upgraded to a >>>> bad version of qemu, or qemu got corrupted). >>>> >>>> now, the error mode was developed quite a long time ago (august 2007 >>>> iirc), so could be it mostly compensated for the first reason which >>>> is now better monitored. >>>> i wonder how often error state is seen due to a reason which isn't >>>> monitored already. >>>> moran - do you have examples of when you see error state of hosts? >>> >>> usually it happened when there were a problematic/ misconfigurated >>> vdsm / libvirt which failed to run vms (nothing we can recover from)- >>> i haven't faced the issue of "host it too loaded" that status has >>> some other syndromes, however the behaviour on that state is very >>> much the same -waiting for 30 min (?) and than move it to activated. >>> Moran. >> >> 'host is too loaded' is too loaded is the only transient state where a >> temporary 'error' state makes sense, but in the same time, it can also >> fit the 'non operational' state description. >> From my experience, the problem with KVM/libvirt/VDSM mis-configured >> is never temporary, (= magically solved by itself, without concrete >> user intervention). IMHO, it should move the host to an error state >> that would not automatically recover from. >> Regardless, consolidating the names of the states ('inactive, >> detached, non operational, maintenance, error, unknown' ...) would be >> nice too. Probably can't be done for all, of course. >> Y. > > agreed, most of the causes of ERROR state aren't transient, but looks to > me as if this state is redundant and could be taken care as part of the > other host states, since the way it's being used today isn't very > helpful as well. > Moran. > Currently host status is changed to non-operational on various reasons, some of them are static like vdsm version and cpu model and some of them are (potentially) transient like network failure. The Error state, as Itamar detailed earlier on this thread, is used currently on what I would call (potentially) transient reason. The original intention (I think) was to move host to non-operational on reasons which are static and to Error on reasons which are transient, and I guess that is why there is timeout on the Error state and OE tries to initialize a host after 30 minutes in Error state. The problem is that as the code evolved this is not the case anymore. I suggest that we use the non-operational state for transient reasons, which we detect in monitoring flow, or execution failures and do the initialization retry as Laszlo suggested in the document. Use the Error state for static errors and remove the 'timeout' mechanism we currently have (from Error state). 
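(To make the proposed split concrete, here is a minimal illustrative sketch of treating transient reasons with an automatic re-initialization attempt and static ones without. All names, the reason list and the 5-minute delay are placeholders chosen for the example, not the engine's actual classes or values.)

    import java.util.EnumSet;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class AutoRecoverySketch {

        /** Illustrative split of the reasons mentioned in the thread. */
        public enum NonOperationalReason {
            NETWORK_UNREACHABLE,          // potentially transient
            STORAGE_DOMAIN_UNREACHABLE,   // potentially transient
            VDSM_VERSION_MISMATCH,        // static - needs admin intervention
            CPU_MODEL_INCOMPATIBLE;       // static - needs admin intervention

            public boolean isTransient() {
                return EnumSet.of(NETWORK_UNREACHABLE, STORAGE_DOMAIN_UNREACHABLE).contains(this);
            }
        }

        private final ScheduledExecutorService executor =
                Executors.newSingleThreadScheduledExecutor();

        /** Called from the monitoring flow when a host is moved to a failed state. */
        public void onHostFailed(String hostName, NonOperationalReason reason,
                                 Runnable initVdsOnUp) {
            if (!reason.isTransient()) {
                // Static problem (vdsm version, cpu model, ...): no automatic retry,
                // the admin fixes the host and activates it explicitly.
                return;
            }
            // Transient problem: retry initialization after a grace period.
            executor.schedule(initVdsOnUp, 5, TimeUnit.MINUTES);
        }
    }

The point of the sketch is only that the transient/static decision is made once, at the place the host leaves the Up state, so the retry policy lives in a single spot.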
Livnat > >> >> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ovedo at redhat.com Thu Feb 16 09:35:24 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Thu, 16 Feb 2012 04:35:24 -0500 (EST) Subject: [Engine-devel] bridgeless networks - update In-Reply-To: <8585235b-ea42-4a1b-b809-c64779bb66b1@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: ----- Original Message ----- > From: "Roy Golan" > To: "Itamar Heim" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 16, 2012 10:48:08 AM > Subject: Re: [Engine-devel] bridgeless networks - update > > > > ----- Original Message ----- > > From: "Itamar Heim" > > To: "Roy Golan" > > Cc: engine-devel at ovirt.org > > Sent: Thursday, February 16, 2012 12:21:49 AM > > Subject: Re: [Engine-devel] bridgeless networks - update > > > > On 02/15/2012 06:50 PM, Roy Golan wrote: > > > following ovirt engine weekly here's a summary of the changes: > > > > > > 1. no validation during vmInterface creation > > > 2. when attaching a network the default value is bridged (GUI > > > responsibility) > > > > so what is the default if one didn't pass it in the API (i.e., > > backend > > should have a default value. UI may choose whatver). > I generally dislike default values in API because client > wont know what it is, default behavior might change in time etc... > > (and the UI/API name in the wiki is something around "allow to run > > vms", > > not bridged - so maybe change the terminology used to discuss this > > to > > reduce risk of confusion/mistakes later). > > > > > > > 3. monitoring - detect mixed configured cluster (network "foo" is > > > bridged on one host and not on another) > > > and issue an audit log warning with event interval of 1 day > > > > why does this matter if cluster is mixed? > to inform a potential migration problem > > i.e., wouldn't this be interesting per host, that a network which > > could > > be bridgeless http://www.ovirt.org/Features/Design/Network/SetupNetworks/is bridged to warn about? > I agree that an admin might like a notice that he can improve the > host's performance. > but currently how can we say a network can be bridgeless given that > we don't have the nic type yet? > detect if no vmInterfaces are connected to it maybe? > Itamar - It was decided to remove the "allow to run VMs" attribute on the logical network level (to simplify things, not sure it is the right way, though). AFAIU the main reason for that is that we might want to use a network for VMs in one cluster, and for other uses on another cluster. Not sure how important this use-case is, but in AFAIU that was the main reason to leave this attribute aside for now, and add it only later on. If we indeed don't add this attribute then we can't really tell the admin "this network can be bridgeless" as we don't know that. Maybe the user wants to run VMs on it. So, we can just warn him in several scenarios that what he is doing is probably wrong, or we can monitor that and issue a warning event. If we do add this attribute then we can decide on a different logic: allowing only bridge for VM networks, allowing both but warn the user, monitoring for "mixed" configurations, and etc., according to what we think is a reasonable use-case. 
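(As an illustration of the options listed above, a hedged sketch of what an attach-time check could look like if such a flag is added - the policy names, classes and messages are made up for the example and are not the engine's real validation code:)

    public class AttachNetworkValidator {

        public enum Policy { FORBID_BRIDGELESS_VM_NETWORK, WARN_ONLY }

        public static class ValidationResult {
            public final boolean allowed;
            public final String message;
            public ValidationResult(boolean allowed, String message) {
                this.allowed = allowed;
                this.message = message;
            }
        }

        /**
         * vmNetwork - the hypothetical "allow to run VMs" flag on the logical network
         * bridged   - whether the admin asked for a bridge on this host
         */
        public ValidationResult validate(Policy policy, boolean vmNetwork, boolean bridged) {
            if (vmNetwork && !bridged) {
                if (policy == Policy.FORBID_BRIDGELESS_VM_NETWORK) {
                    return new ValidationResult(false,
                            "A network marked for VMs must be attached as bridged.");
                }
                return new ValidationResult(true,
                        "Warning: VMs will not be able to use this network on this host.");
            }
            if (!vmNetwork && bridged) {
                return new ValidationResult(true,
                        "Note: a bridge is not required here; dropping it may improve performance.");
            }
            return new ValidationResult(true, null);
        }
    }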
> > > > > > > > wiki will be updated accordingly. > > > > having fast read it, couldn't understand the backward compatibility > > section ("Its compatibility version is 3.1 and enforced by the > > enclosed > > command as mentioned already") > > i'd make it clear this is a 3.1 DC feature, and not a cluster > > one(?) > > becuase iirc, setupNetworks is actually host level 3.1 > > compatibility > > level, not cluster/dc wide? > yes its not clear enough that 3.0 cluster cannot have their network > bridgeless. will fix this to be clearer. > > > > > > > > Thanks, > > > Roy > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkublin at redhat.com Thu Feb 16 11:00:10 2012 From: mkublin at redhat.com (Michael Kublin) Date: Thu, 16 Feb 2012 06:00:10 -0500 (EST) Subject: [Engine-devel] Unit test failures and RunVmCommand In-Reply-To: <4F3BF028.1040508@redhat.com> Message-ID: <5d6f7974-c487-4a49-9c43-14d7de047e08@zmail14.collab.prod.int.phx2.redhat.com> All issues were fixed ----- Original Message ----- From: "Jon Choate" To: engine-devel at ovirt.org Sent: Wednesday, February 15, 2012 7:49:28 PM Subject: [Engine-devel] Unit test failures and RunVmCommand I was trying to fix the broken unit tests so that I can make sure my changes are not breaking anything. While trying to fix the RunVmCommand tests I found some logic that I am unsure of. It seems like with all of the nested conditions in this method the scope of some of the checks is wrong. In RunVmCommand.CanRunVm we check the boot sequence. If the vm is set to only boot from a hard disk, we check to make sure that the vm has a hard disk and that it is plugged. If both of these are true, we do not perform any other checks and return that the vm can start. One of the checks that gets skipped is whether or not the vm is already running. Do we really want to skip that check? _______________________________________________ Engine-devel mailing list Engine-devel at ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel From jchoate at redhat.com Thu Feb 16 14:04:04 2012 From: jchoate at redhat.com (Jon Choate) Date: Thu, 16 Feb 2012 09:04:04 -0500 (EST) Subject: [Engine-devel] Unit test failures and RunVmCommand In-Reply-To: <5d6f7974-c487-4a49-9c43-14d7de047e08@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <7f1fcd65-9b11-442a-815d-7afc14e59d01@zmail12.collab.prod.int.phx2.redhat.com> awesome! Thanks! ----- Original Message ----- > From: "Michael Kublin" > To: "Jon Choate" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 16, 2012 6:00:10 AM > Subject: Re: [Engine-devel] Unit test failures and RunVmCommand > > All issues were fixed > > ----- Original Message ----- > From: "Jon Choate" > To: engine-devel at ovirt.org > Sent: Wednesday, February 15, 2012 7:49:28 PM > Subject: [Engine-devel] Unit test failures and RunVmCommand > > I was trying to fix the broken unit tests so that I can make sure my > changes are not breaking anything. While trying to fix the > RunVmCommand > tests I found some logic that I am unsure of. > > It seems like with all of the nested conditions in this method the > scope > of some of the checks is wrong. > > In RunVmCommand.CanRunVm we check the boot sequence. 
If the vm is set > to > only boot from a hard disk, we check to make sure that the vm has a > hard > disk and that it is plugged. If both of these are true, we do not > perform any other checks and return that the vm can start. > > One of the checks that gets skipped is whether or not the vm is > already > running. Do we really want to skip that check? > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From iheim at redhat.com Fri Feb 17 00:16:39 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 17 Feb 2012 02:16:39 +0200 Subject: [Engine-devel] bridgeless networks - update In-Reply-To: References: Message-ID: <4F3D9C67.9060102@redhat.com> On 02/16/2012 11:35 AM, Oved Ourfalli wrote: ... >>> why does this matter if cluster is mixed? >> to inform a potential migration problem >>> i.e., wouldn't this be interesting per host, that a network which >>> could >>> be bridgeless http://www.ovirt.org/Features/Design/Network/SetupNetworks/is bridged to warn about? >> I agree that an admin might like a notice that he can improve the >> host's performance. >> but currently how can we say a network can be bridgeless given that >> we don't have the nic type yet? >> detect if no vmInterfaces are connected to it maybe? >> > Itamar - It was decided to remove the "allow to run VMs" attribute on the logical network level (to simplify things, not sure it is the right way, though). AFAIU the main reason for that is that we might want to use a network for VMs in one cluster, and for other uses on another cluster. Not sure how important this use-case is, but in AFAIU that was the main reason to leave this attribute aside for now, and add it only later on. I thought it was supposed to be per cluster (in the grand scheme of things, as part of also defining what would be the live migration networks, and things like that - all cluster level definitions). btw, I may have missed the email on this change, but the wiki doesn't reflect it at all. > > If we indeed don't add this attribute then we can't really tell the admin "this network can be bridgeless" as we don't know that. Maybe the user wants to run VMs on it. So, we can just warn him in several scenarios that what he is doing is probably wrong, or we can monitor that and issue a warning event. > > If we do add this attribute then we can decide on a different logic: allowing only bridge for VM networks, allowing both but warn the user, monitoring for "mixed" configurations, and etc., according to what we think is a reasonable use-case. From iheim at redhat.com Fri Feb 17 00:20:29 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 17 Feb 2012 02:20:29 +0200 Subject: [Engine-devel] bridgeless networks - update In-Reply-To: <8585235b-ea42-4a1b-b809-c64779bb66b1@zmail01.collab.prod.int.phx2.redhat.com> References: <8585235b-ea42-4a1b-b809-c64779bb66b1@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F3D9D4D.1000107@redhat.com> On 02/16/2012 10:48 AM, Roy Golan wrote: ... >> On 02/15/2012 06:50 PM, Roy Golan wrote: >>> following ovirt engine weekly here's a summary of the changes: >>> >>> 1. no validation during vmInterface creation >>> 2. when attaching a network the default value is bridged (GUI >>> responsibility) >> >> so what is the default if one didn't pass it in the API (i.e., >> backend >> should have a default value. UI may choose whatver). 
> I generally dislike default values in API because client > wont know what it is, default behavior might change in time etc... it is not a question of like/dislike - are you proposing we break the api for existing users since we don't want to choose a default value for them? >> (and the UI/API name in the wiki is something around "allow to run >> vms", >> not bridged - so maybe change the terminology used to discuss this to >> reduce risk of confusion/mistakes later). >> >> >>> 3. monitoring - detect mixed configured cluster (network "foo" is >>> bridged on one host and not on another) >>> and issue an audit log warning with event interval of 1 day >> >> why does this matter if cluster is mixed? > to inform a potential migration problem if logical network doesn't need to run VMs, if it is bridged on some hosts, doesn't mean there is an issue. if logical network is supposed to run VMs, either the network or the hosts would have it as non operational until the issue is fixed. how is that different from other network mis configurations? >> i.e., wouldn't this be interesting per host, that a network which >> could >> be bridgeless is bridged to warn about? > I agree that an admin might like a notice that he can improve the host's performance. > but currently how can we say a network can be bridgeless given that we don't have the nic type yet? > detect if no vmInterfaces are connected to it maybe? what do you mean by nic type? From iheim at redhat.com Fri Feb 17 00:25:06 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 17 Feb 2012 02:25:06 +0200 Subject: [Engine-devel] Autorecovery feature plan for review In-Reply-To: <4F3CCAE5.3090802@redhat.com> References: <42b4dcb5-4937-4654-80fd-02bc885df3f9@zmail13.collab.prod.int.phx2.redhat.com> <4F3BE52B.5070402@redhat.com> <4F3C33EB.40504@redhat.com> <4F3CB04D.6000107@redhat.com> <4F3CB1B1.9030903@redhat.com> <4F3CB7E1.50606@redhat.com> <4F3CCAE5.3090802@redhat.com> Message-ID: <4F3D9E62.8070509@redhat.com> On 02/16/2012 11:22 AM, Livnat Peer wrote: > On 16/02/12 10:01, Moran Goldboim wrote: >> On 02/16/2012 09:35 AM, Yaniv Kaul wrote: >>> On 02/16/2012 09:29 AM, Moran Goldboim wrote: >>>> On 02/16/2012 12:38 AM, Itamar Heim wrote: >>>>> On 02/15/2012 07:02 PM, Livnat Peer wrote: >>>>>> On 15/02/12 18:28, Ayal Baron wrote: >>>>>>> >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> Hi, >>>>>>>> >>>>>>>> A short summary from the call today, please correct me if I >>>>>>>> forgot or >>>>>>>> misunderstood something. >>>>>>>> >>>>>>>> Ayal argued that the failed host/storagedomain should be reactivated >>>>>>>> by a periodically executed job, he would prefer if the engine could >>>>>>>> [try to] correct the problem right on discovery. >>>>>>>> Livnat's point was that this is hard to implement and it is OK if we >>>>>>>> move it to Nonoperational state and periodically check it again. >>>>>>>> >>>>>>>> There was a little arguing if we call the current behavior a bug >>>>>>>> or a >>>>>>>> missing behavior, I believe this is not quite important. >>>>>>>> >>>>>>>> I did not fully understand the last few sentences from Livant, >>>>>>>> did we >>>>>>>> manage to agree in a change in the plan? >>>>>>> >>>>>>> A couple of points that we agreed upon: >>>>>>> 1. no need for new mechanism, just initiate this from the >>>>>>> monitoring context. >>>>>>> Preferably, if not difficult, evaluate the monitoring data, if >>>>>>> host should remain in non-op then don't bother running initVdsOnUp >>>>>>> 2. 
configuration of when to call initvdsonup is orthogonal to >>>>>>> auto-init behaviour and if introduced should be on by default and >>>>>>> user should be able to configure this either on or off for the >>>>>>> host in general (no lower granularity) and can only be configured >>>>>>> via the API. >>>>>>> When disabled initVdsOnUp would be called only when admin >>>>>>> activates the host/storage and any error would keep it inactive (I >>>>>>> still don't understand why this is at all needed but whatever). >>>>>>> >>>>>> >>>>>> Also a note from Moran on the call was to check if we can unify the >>>>>> non-operational and Error statuses of the host. >>>>>> It was mentioned on the call that the reason for having ERROR state is >>>>>> for recovery (time out of the error state) but since we are about to >>>>>> recover from non-operational status as well there is no reason to have >>>>>> two different statuses. >>>>> >>>>> they are not exactly the same. >>>>> or should i say, error is supposed to be when reason isn't related >>>>> to host being non-operational. >>>>> >>>>> what is error state? >>>>> a host will go into error state if it fails to run 3 (configurable) >>>>> VMs, that succeeded running on other host on retry. >>>>> i.e., something is wrong with that host, failing to launch VMs. >>>>> as it happens, it already "auto recovers" for this mode after a >>>>> certain period of time. >>>>> >>>>> why? because the host will fail to run virtual machines, and will be >>>>> the least loaded, so it will be the first target selected to run >>>>> them, which will continue to fail. >>>>> >>>>> so there is a negative scoring mechanism on number of errors, till >>>>> host is taken out for a while. >>>>> >>>>> (I don't remember if the reverse is true and the VM goes into error >>>>> mode if the VM failed to launch on all hosts per number of retries. >>>>> i think this wasn't needed and user just got an error in audit log) >>>>> >>>>> i can see two reasons a host will go into error state: >>>>> 1. monitoring didn't detect an issue yet, and host would >>>>> have/will/should go into non-operational mode. >>>>> if host will go into non-operational mode, and will auto recover >>>>> with the above flow, i guess it is fine. >>>>> >>>>> 2. cause for failure isn't something we monitor for (upgraded to a >>>>> bad version of qemu, or qemu got corrupted). >>>>> >>>>> now, the error mode was developed quite a long time ago (august 2007 >>>>> iirc), so could be it mostly compensated for the first reason which >>>>> is now better monitored. >>>>> i wonder how often error state is seen due to a reason which isn't >>>>> monitored already. >>>>> moran - do you have examples of when you see error state of hosts? >>>> >>>> usually it happened when there were a problematic/ misconfigurated >>>> vdsm / libvirt which failed to run vms (nothing we can recover from)- >>>> i haven't faced the issue of "host it too loaded" that status has >>>> some other syndromes, however the behaviour on that state is very >>>> much the same -waiting for 30 min (?) and than move it to activated. >>>> Moran. >>> >>> 'host is too loaded' is too loaded is the only transient state where a >>> temporary 'error' state makes sense, but in the same time, it can also >>> fit the 'non operational' state description. >>> From my experience, the problem with KVM/libvirt/VDSM mis-configured >>> is never temporary, (= magically solved by itself, without concrete >>> user intervention). 
IMHO, it should move the host to an error state >>> that would not automatically recover from. >>> Regardless, consolidating the names of the states ('inactive, >>> detached, non operational, maintenance, error, unknown' ...) would be >>> nice too. Probably can't be done for all, of course. >>> Y. >> >> agreed, most of the causes of ERROR state aren't transient, but looks to >> me as if this state is redundant and could be taken care as part of the >> other host states, since the way it's being used today isn't very >> helpful as well. >> Moran. >> > > Currently host status is changed to non-operational on various reasons, > some of them are static like vdsm version and cpu model and some of them > are (potentially) transient like network failure. > > The Error state, as Itamar detailed earlier on this thread, is used > currently on what I would call (potentially) transient reason. > > The original intention (I think) was to move host to non-operational on > reasons which are static and to Error on reasons which are transient, > and I guess that is why there is timeout on the Error state and OE tries > to initialize a host after 30 minutes in Error state. > > The problem is that as the code evolved this is not the case anymore. > I suggest that we use the non-operational state for transient reasons, > which we detect in monitoring flow, or execution failures and do the > initialization retry as Laszlo suggested in the document. Use the Error > state for static errors and remove the 'timeout' mechanism we currently > have (from Error state). we are just adding a retry mechanism where we didn't have it. I wouldn't remove the one we have so soon, as we may get it back very fast as 'need retry/timeout on errors'. it sounds like both statuses are indeed different - but even if we think error covers mostly non transient, we can't be sure. From iheim at redhat.com Fri Feb 17 15:09:30 2012 From: iheim at redhat.com (Itamar Heim) Date: Fri, 17 Feb 2012 17:09:30 +0200 Subject: [Engine-devel] New oVirt GIT Repo Request In-Reply-To: <4F390E28.9060300@redhat.com> References: <4F352CB8.8060006@redhat.com> <4F36EEA3.50006@redhat.com> <4F37BF5C.20801@redhat.com> <4F3902A9.2060405@redhat.com> <4F3932E7.1010501@redhat.com> <4F390E28.9060300@redhat.com> Message-ID: <4F3E6DAA.5050206@redhat.com> On 02/13/2012 03:20 PM, Keith Robertson wrote: > On 02/13/2012 10:57 AM, Douglas Landgraf wrote: >> On 02/13/2012 07:31 AM, Barak Azulay wrote: >>> On 02/12/2012 03:32 PM, Keith Robertson wrote: >>>> On 02/11/2012 05:41 PM, Itamar Heim wrote: >>>>> On 02/10/2012 04:42 PM, Keith Robertson wrote: >>>>>> All, >>>>>> >>>>>> I would like to move some of the oVirt tools into their own GIT >>>>>> repos so >>>>>> that they are easier to manage/maintain. In particular, I would >>>>>> like to >>>>>> move the ovirt-log-collector, ovirt-iso-uploader, and >>>>>> ovirt-image-uploader each into their own GIT repos. >>>>>> >>>>>> The Plan: >>>>>> Step 1: Create naked GIT repos on oVirt.org for the 3 tools. >>>>>> Step 2: Link git repos to gerrit. >>>>> >>>>> above two are same step - create a project in gerrit. >>>>> I'll do that if list doesn't have any objections by monday. >>>> Sure, np. >>>>> >>>>>> Step 3: Populate naked GIT repos with source and build standalone >>>>>> spec >>>>>> files for each. >>>>>> Step 4: In one patch do both a) and b)... >>>>>> a) Update oVirt manager GIT repo by removing tool source. >>>>>> b) Update oVirt manager GIT repo such that spec has dependencies on 3 >>>>>> new RPMs. 
>>>>>> >>>>>> Optional: >>>>>> - These three tools share some python classes that are very >>>>>> similar. I >>>>>> would like to create a GIT repo (perhaps ovirt-tools-common) to >>>>>> contain >>>>>> these classes so that a fix in one place will fix the issue >>>>>> everywhere. >>>>>> Perhaps we can also create a naked GIT repo for these common classes >>>>>> while addressing the primary concerns above. >>>>> >>>>> would this hold both python and java common code? >>>> >>>> None of the 3 tools currently have any requirement for Java code, but I >>>> think the installer does. That said, I wouldn't have a problem mixing >>>> Java code in the "common" component as long as they're in separate >>>> package directories. >>>> >>>> If we do something like this do we want a "python" common RPM and a >>>> "java" common RPM or just a single RPM for all common code? I don't >>>> really have a preference. >>> >>> I would go with separating the java common and python common, even if >>> it's just to ease build/release issues. >>> >> +1 and if needed one package be required to the other. >> > Sounds like a plan. Full speed ahead. The following repo's were created: ovirt-image-uploader ovirt-iso-uploader ovirt-log-collector ovirt-tools-common-python I've used the existing ovirt-engine-tools group for its maintainers, as this is only a split of part of the tools from using the engine git, but tools project was defined as separate wrt maintainers. From ryanh at us.ibm.com Fri Feb 17 17:36:07 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Fri, 17 Feb 2012 11:36:07 -0600 Subject: [Engine-devel] attempting to deploy ovirt-engine built from source Message-ID: <20120217173607.GF3145@us.ibm.com> Hi, I've been attempting to build and deploy ovirt-engine from source; i've been following the guide[1] and I'm still having a bit of trouble; hoping for some help here. I've got a F16 64-bit server. I'm using the jboss that's available via the f16 repos: ovirt-engine-jbossas-1.2-2.fc16.x86_64 which is jboss 7.1 based. I've updated my .m2/settings.xml to match the install path of the above RPM: [build at f16-node ear]$ cat ~/.m2/settings.xml oVirtEnvSettings oVirtEnvSettings /usr/share/jboss-as-7.1.0.Beta1b /usr/lib/jvm/java-1.6.0-openjdk.x86_64 always engine and the gui build fine with the commands from the wiki. When I attempt to test the connection and webgui they fail. so I looked out in the deployment directory and I see the following: [root at f16-node deployments]# pwd /usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments [root at f16-node deployments]# ls -al total 28 drwxrwxr-x. 3 jboss-as jboss-as 4096 Feb 17 12:28 . drwxrwxr-x. 8 jboss-as jboss-as 4096 Feb 3 11:19 .. -rw-rw-r--. 1 jboss-as jboss-as 8868 Dec 1 16:10 README.txt drwxrwxr-x 15 build build 4096 Feb 17 12:21 engine.ear -rw-rw-r-- 1 jboss-as jboss-as 2353 Feb 17 12:21 engine.ear.failed I'm attaching the engine.ear.failed. 1. http://ovirt.org/wiki/Building_oVirt_engine -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com -------------- next part -------------- {"JBAS014653: Composite operation failed and was rolled back. 
Steps that failed:" => {"Operation step-2" => {"JBAS014671: Failed services" => {"jboss.deployment.subunit.\"engine.ear\".\"engine-bll.jar\".POST_MODULE" => "org.jboss.msc.service.StartException in service jboss.deployment.subunit.\"engine.ear\".\"engine-bll.jar\".POST_MODULE: Failed to process phase POST_MODULE of subdeployment \"engine-bll.jar\" of deployment \"engine.ear\""},"JBAS014771: Services with missing/unavailable dependencies" => ["jboss.naming.context.java.comp.engine.engine-scheduler.Scheduler.Validatorjboss.naming.context.java.comp.engine.engine-scheduler.SchedulerMissing[jboss.naming.context.java.comp.engine.engine-scheduler.Scheduler.Validatorjboss.naming.context.java.comp.engine.engine-scheduler.Scheduler]","jboss.naming.context.java.comp.engine.engine-vdsbroker.VdsBroker.Validatorjboss.naming.context.java.comp.engine.engine-vdsbroker.VdsBrokerMissing[jboss.naming.context.java.comp.engine.engine-vdsbroker.VdsBroker.Validatorjboss.naming.context.java.comp.engine.engine-vdsbroker.VdsBroker]","jboss.naming.context.java.comp.engine.engine-genericapi.GenericApiService.ValidatorFactoryjboss.naming.context.java.comp.engine.engine-genericapi.GenericApiServiceMissing[jboss.naming.context.java.comp.engine.engine-genericapi.GenericApiService.ValidatorFactoryjboss.naming.context.java.comp.engine.engine-genericapi.GenericApiService]","jboss.naming.context.java.comp.engine.engine-scheduler.Scheduler.ValidatorFactoryjboss.naming.context.java.comp.engine.engine-scheduler.SchedulerMissing[jboss.naming.context.java.comp.engine.engine-scheduler.Scheduler.ValidatorFactoryjboss.naming.context.java.comp.engine.engine-scheduler.Scheduler]","jboss.naming.context.java.comp.engine.engine-vdsbroker.VdsBroker.ValidatorFactoryjboss.naming.context.java.comp.engine.engine-vdsbroker.VdsBrokerMissing[jboss.naming.context.java.comp.engine.engine-vdsbroker.VdsBroker.ValidatorFactoryjboss.naming.context.java.comp.engine.engine-vdsbroker.VdsBroker]","jboss.naming.context.java.comp.engine.engine-genericapi.GenericApiService.Validatorjboss.naming.context.java.comp.engine.engine-genericapi.GenericApiServiceMissing[jboss.naming.context.java.comp.engine.engine-genericapi.GenericApiService.Validatorjboss.naming.context.java.comp.engine.engine-genericapi.GenericApiService]"]}}} From dougsland at redhat.com Fri Feb 17 21:18:21 2012 From: dougsland at redhat.com (Douglas Landgraf) Date: Fri, 17 Feb 2012 16:18:21 -0500 Subject: [Engine-devel] attempting to deploy ovirt-engine built from source In-Reply-To: <20120217173607.GF3145@us.ibm.com> References: <20120217173607.GF3145@us.ibm.com> Message-ID: <4F3EC41D.2020606@redhat.com> Hi Ryan, On 02/17/2012 12:36 PM, Ryan Harper wrote: > Hi, > > I've been attempting to build and deploy ovirt-engine from source; i've > been following the guide[1] and I'm still having a bit of trouble; > hoping for some help here. > > I've got a F16 64-bit server. I'm using the jboss that's available via > the f16 repos: > > ovirt-engine-jbossas-1.2-2.fc16.x86_64 > > which is jboss 7.1 based. I've updated my .m2/settings.xml to match the > install path of the above RPM: > > > [build at f16-node ear]$ cat ~/.m2/settings.xml > xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 > http://maven.apache.org/xsd/settings-1.0.0.xsd"> > > > > > oVirtEnvSettings > > > > > oVirtEnvSettings > > /usr/share/jboss-as-7.1.0.Beta1b > /usr/lib/jvm/java-1.6.0-openjdk.x86_64 > always > > > > > > engine and the gui build fine with the commands from the wiki. 
When I attempt > to test the connection and webgui they fail. so I looked out in the > deployment directory and I see the following: > > [root at f16-node deployments]# pwd > /usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments > [root at f16-node deployments]# ls -al > total 28 > drwxrwxr-x. 3 jboss-as jboss-as 4096 Feb 17 12:28 . > drwxrwxr-x. 8 jboss-as jboss-as 4096 Feb 3 11:19 .. > -rw-rw-r--. 1 jboss-as jboss-as 8868 Dec 1 16:10 README.txt > drwxrwxr-x 15 build build 4096 Feb 17 12:21 engine.ear > -rw-rw-r-- 1 jboss-as jboss-as 2353 Feb 17 12:21 engine.ear.failed > > I'm attaching the engine.ear.failed. > > > > 1. http://ovirt.org/wiki/Building_oVirt_engine I have made a few changes into this url yesterday, have you tried today? Have you noticed any error during the maven work? Any error during the jboss start? Can you please try to execute chmod 777 /usr/share/jboss-as and the run again the deploy commands? http://ovirt.org/wiki/Building_oVirt_engine#Deploy Cheers Douglas From ryanh at us.ibm.com Fri Feb 17 19:15:18 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Fri, 17 Feb 2012 13:15:18 -0600 Subject: [Engine-devel] attempting to deploy ovirt-engine built from source In-Reply-To: <4F3EC41D.2020606@redhat.com> References: <20120217173607.GF3145@us.ibm.com> <4F3EC41D.2020606@redhat.com> Message-ID: <20120217191518.GI3145@us.ibm.com> * Douglas Landgraf [2012-02-17 12:15]: > Hi Ryan, Hey Douglas, thanks for the reply, > > On 02/17/2012 12:36 PM, Ryan Harper wrote: > >Hi, > > > >I've been attempting to build and deploy ovirt-engine from source; i've > >been following the guide[1] and I'm still having a bit of trouble; > >hoping for some help here. > > > >I've got a F16 64-bit server. I'm using the jboss that's available via > >the f16 repos: > > > >ovirt-engine-jbossas-1.2-2.fc16.x86_64 > > > >which is jboss 7.1 based. I've updated my .m2/settings.xml to match the > >install path of the above RPM: > > > > > >[build at f16-node ear]$ cat ~/.m2/settings.xml > > > xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > > xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 > > http://maven.apache.org/xsd/settings-1.0.0.xsd"> > > > > > > > > > > oVirtEnvSettings > > > > > > > > > > oVirtEnvSettings > > > > /usr/share/jboss-as-7.1.0.Beta1b > > /usr/lib/jvm/java-1.6.0-openjdk.x86_64 > > always > > > > > > > > > > > >engine and the gui build fine with the commands from the wiki. When I > >attempt > >to test the connection and webgui they fail. so I looked out in the > >deployment directory and I see the following: > > > >[root at f16-node deployments]# pwd > >/usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments > >[root at f16-node deployments]# ls -al > >total 28 > >drwxrwxr-x. 3 jboss-as jboss-as 4096 Feb 17 12:28 . > >drwxrwxr-x. 8 jboss-as jboss-as 4096 Feb 3 11:19 .. > >-rw-rw-r--. 1 jboss-as jboss-as 8868 Dec 1 16:10 README.txt > >drwxrwxr-x 15 build build 4096 Feb 17 12:21 engine.ear > >-rw-rw-r-- 1 jboss-as jboss-as 2353 Feb 17 12:21 engine.ear.failed > > > >I'm attaching the engine.ear.failed. > > > > > > > >1. http://ovirt.org/wiki/Building_oVirt_engine > I have made a few changes into this url yesterday, have you tried today? I looked at the wiki and didn't see anything significantly different. I've pulled the latest engine code and re-ran the build and deploy with the same results. I did see the "I've made a change how do I test" and the only comment there is that the script calls 'mvn' and the rest of the wiki page uses 'mvn2' to ensure folks are using the right maven. 
> Have you noticed any error during the maven work? Any error during the > jboss start? Not that I can tell. The build log shows what looks to be errors, but it never fails out, and the report lines always show 0 Errors. Nothing from the service restart jboss-as, nothing in /var/log/messages nor /var/log/jboss-as/console.log > > Can you please try to execute chmod 777 /usr/share/jboss-as and the run > again the deploy commands? > http://ovirt.org/wiki/Building_oVirt_engine#Deploy [root at f16-node share]# ls -al | grep jboss lrwxrwxrwx. 1 jboss-as jboss-as 32 Feb 3 11:13 jboss-as -> /usr/share/jboss-as-7.1.0.Beta1b drwxrwxr-x. 10 jboss-as jboss-as 4096 Feb 15 11:09 jboss-as-7.1.0.Beta1b They're already 777. Again, thanks for the help debugging this. -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From ryanh at us.ibm.com Fri Feb 17 19:17:23 2012 From: ryanh at us.ibm.com (Ryan Harper) Date: Fri, 17 Feb 2012 13:17:23 -0600 Subject: [Engine-devel] attempting to deploy ovirt-engine built from source In-Reply-To: <20120217191518.GI3145@us.ibm.com> References: <20120217173607.GF3145@us.ibm.com> <4F3EC41D.2020606@redhat.com> <20120217191518.GI3145@us.ibm.com> Message-ID: <20120217191723.GJ3145@us.ibm.com> * Ryan Harper [2012-02-17 13:15]: > * Douglas Landgraf [2012-02-17 12:15]: > > Hi Ryan, > > Hey Douglas, thanks for the reply, > > > > > On 02/17/2012 12:36 PM, Ryan Harper wrote: > > >Hi, > > > > > >I've been attempting to build and deploy ovirt-engine from source; i've > > >been following the guide[1] and I'm still having a bit of trouble; > > >hoping for some help here. > > > > > >I've got a F16 64-bit server. I'm using the jboss that's available via > > >the f16 repos: > > > > > >ovirt-engine-jbossas-1.2-2.fc16.x86_64 > > > > > >which is jboss 7.1 based. I've updated my .m2/settings.xml to match the > > >install path of the above RPM: > > > > > > > > >[build at f16-node ear]$ cat ~/.m2/settings.xml > > > > > xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > > > xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 > > > http://maven.apache.org/xsd/settings-1.0.0.xsd"> > > > > > > > > > > > > > > > oVirtEnvSettings > > > > > > > > > > > > > > > oVirtEnvSettings > > > > > > /usr/share/jboss-as-7.1.0.Beta1b > > > /usr/lib/jvm/java-1.6.0-openjdk.x86_64 > > > always > > > > > > > > > > > > > > > > > >engine and the gui build fine with the commands from the wiki. When I > > >attempt > > >to test the connection and webgui they fail. so I looked out in the > > >deployment directory and I see the following: > > > > > >[root at f16-node deployments]# pwd > > >/usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments > > >[root at f16-node deployments]# ls -al > > >total 28 > > >drwxrwxr-x. 3 jboss-as jboss-as 4096 Feb 17 12:28 . > > >drwxrwxr-x. 8 jboss-as jboss-as 4096 Feb 3 11:19 .. > > >-rw-rw-r--. 1 jboss-as jboss-as 8868 Dec 1 16:10 README.txt > > >drwxrwxr-x 15 build build 4096 Feb 17 12:21 engine.ear > > >-rw-rw-r-- 1 jboss-as jboss-as 2353 Feb 17 12:21 engine.ear.failed > > > > > >I'm attaching the engine.ear.failed. > > > > > > > > > > > >1. http://ovirt.org/wiki/Building_oVirt_engine > > I have made a few changes into this url yesterday, have you tried today? > > I looked at the wiki and didn't see anything significantly different. > I've pulled the latest engine code and re-ran the build and deploy with > the same results. 
> > I did see the "I've made a change how do I test" and the only comment > there is that the script calls 'mvn' and the rest of the wiki page uses > 'mvn2' to ensure folks are using the right maven. > > > Have you noticed any error during the maven work? Any error during the > > jboss start? > > Not that I can tell. The build log shows what looks to be errors, but > it never fails out, and the report lines always show 0 Errors. Nothing > from the service restart jboss-as, nothing in /var/log/messages nor > /var/log/jboss-as/console.log > > > > > Can you please try to execute chmod 777 /usr/share/jboss-as and the run > > again the deploy commands? > > http://ovirt.org/wiki/Building_oVirt_engine#Deploy > > [root at f16-node share]# ls -al | grep jboss > lrwxrwxrwx. 1 jboss-as jboss-as 32 Feb 3 11:13 jboss-as -> /usr/share/jboss-as-7.1.0.Beta1b > drwxrwxr-x. 10 jboss-as jboss-as 4096 Feb 15 11:09 jboss-as-7.1.0.Beta1b > > They're already 777. > > > Again, thanks for the help debugging this. actually, after the git update, build and restarting jboss, it looks like everything is running! Might have been an issue with the current git checkout that I had. Again, thanks for taking the time to take a look! [root at f16-node deployments]# ls -al total 28 drwxrwxr-x. 3 jboss-as jboss-as 4096 Feb 17 14:15 . drwxrwxr-x. 8 jboss-as jboss-as 4096 Feb 3 11:19 .. -rw-rw-r--. 1 jboss-as jboss-as 8868 Dec 1 16:10 README.txt drwxrwxr-x 15 build build 4096 Feb 17 14:04 engine.ear -rw-rw-r-- 1 jboss-as jboss-as 10 Feb 17 14:04 engine.ear.deployed Ryan -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh at us.ibm.com From lpeer at redhat.com Sat Feb 18 17:07:01 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sat, 18 Feb 2012 19:07:01 +0200 Subject: [Engine-devel] VM disks Message-ID: <4F3FDAB5.9020203@redhat.com> Hi, These days we are working on various features around VM disks, in the different threads it was decided that we'll have the ability to attach a disk to a VM but it will be added as inactive, then the user can activate it for it to be accessible from within the guest. Flow of adding a new disk would be: - creating the disk - attaching the disk to the VM - activating it Flow of adding a shared disk (or any other existing disk): - attach the disk - activate it It seems to me a lot like adding a storage domain and I remember a lot of rejections on the storage domain flow (mostly about it being too cumbersome). After discussing the issue with various people we could not find a good reason for having a VM disk in attached but inactive mode. Of course we can wrap the above steps in one step for specific flows (add+attach within a VM context for example) but can anyone think on a good reason to support attached but inactive disk? I would suggest that when attaching a disk to a VM it becomes part of the VM (active) like in 'real' machines. 
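(For illustration, wrapping the three steps into one user-visible operation could look roughly like the sketch below. The DiskOps interface and all names are hypothetical stand-ins, not the engine's actual commands; the point is that clients never see, or have to poll for, an "attached but inactive" state.)

    public class AddAndActivateDiskCommand {

        /** Hypothetical lower-level operations; not the engine's real interfaces. */
        public interface DiskOps {
            String createDisk(long sizeGb);                // returns the new disk id
            void attachDisk(String vmId, String diskId);   // adds the device to the VM
            void activateDisk(String vmId, String diskId); // plugs it (hot plug if the VM is up)
            void removeDisk(String diskId);                // used only for rollback here
        }

        private final DiskOps ops;

        public AddAndActivateDiskCommand(DiskOps ops) {
            this.ops = ops;
        }

        /** One step from the user's point of view: the disk ends up attached and active. */
        public String execute(String vmId, long sizeGb) {
            String diskId = ops.createDisk(sizeGb);
            try {
                ops.attachDisk(vmId, diskId);
                ops.activateDisk(vmId, diskId);
                return diskId;
            } catch (RuntimeException e) {
                // Best-effort compensation so a failure does not leave a floating disk behind.
                ops.removeDisk(diskId);
                throw e;
            }
        }
    }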
Thank you, Livnat From ovedo at redhat.com Sun Feb 19 07:48:31 2012 From: ovedo at redhat.com (Oved Ourfalli) Date: Sun, 19 Feb 2012 02:48:31 -0500 (EST) Subject: [Engine-devel] VM disks In-Reply-To: <4F3FDAB5.9020203@redhat.com> Message-ID: <1c4f45c4-a00b-41d7-8b9e-3f28c4a733e0@zmail02.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Livnat Peer" > To: engine-devel at ovirt.org > Sent: Saturday, February 18, 2012 7:07:01 PM > Subject: [Engine-devel] VM disks > > Hi, > > These days we are working on various features around VM disks, in the > different threads it was decided that we'll have the ability to > attach a > disk to a VM but it will be added as inactive, then the user can > activate it for it to be accessible from within the guest. > > Flow of adding a new disk would be: > - creating the disk > - attaching the disk to the VM > - activating it > > Flow of adding a shared disk (or any other existing disk): > - attach the disk > - activate it > > It seems to me a lot like adding a storage domain and I remember a > lot > of rejections on the storage domain flow (mostly about it being too > cumbersome). > After discussing the issue with various people we could not find a > good > reason for having a VM disk in attached but inactive mode. > > Of course we can wrap the above steps in one step for specific flows > (add+attach within a VM context for example) but can anyone think on > a > good reason to support attached but inactive disk? > > I would suggest that when attaching a disk to a VM it becomes part of > the VM (active) like in 'real' machines. > +1 on that (regardless of whether the disk is shared or not). IMO - in the case of shared disk we should make it as clear as possible to the user/admin that the added disk is shared, but the flow should be exactly the same. > > Thank you, Livnat > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkublin at redhat.com Sun Feb 19 07:59:51 2012 From: mkublin at redhat.com (Michael Kublin) Date: Sun, 19 Feb 2012 02:59:51 -0500 (EST) Subject: [Engine-devel] VM disks In-Reply-To: <1c4f45c4-a00b-41d7-8b9e-3f28c4a733e0@zmail02.collab.prod.int.phx2.redhat.com> Message-ID: <7a5504e7-3aa8-4b72-8818-415502da11f5@zmail14.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Oved Ourfalli" > To: "Livnat Peer" > Cc: engine-devel at ovirt.org > Sent: Sunday, February 19, 2012 9:48:31 AM > Subject: Re: [Engine-devel] VM disks > > > > ----- Original Message ----- > > From: "Livnat Peer" > > To: engine-devel at ovirt.org > > Sent: Saturday, February 18, 2012 7:07:01 PM > > Subject: [Engine-devel] VM disks > > > > Hi, > > > > These days we are working on various features around VM disks, in > > the > > different threads it was decided that we'll have the ability to > > attach a > > disk to a VM but it will be added as inactive, then the user can > > activate it for it to be accessible from within the guest. 
> > > > Flow of adding a new disk would be: > > - creating the disk > > - attaching the disk to the VM > > - activating it These should be in a one step, otherwise the clients (rest and gui) will need to pool us for every disk > > Flow of adding a shared disk (or any other existing disk): > > - attach the disk > > - activate it These is just simple as a hot plug , should be and it is easy implement as one step > > It seems to me a lot like adding a storage domain and I remember a > > lot > > of rejections on the storage domain flow (mostly about it being too > > cumbersome). > > After discussing the issue with various people we could not find a > > good > > reason for having a VM disk in attached but inactive mode. > > > > Of course we can wrap the above steps in one step for specific > > flows Agreed, should be in one step > > (add+attach within a VM context for example) but can anyone think > > on > > a > > good reason to support attached but inactive disk? I don't see a reason also. > > I would suggest that when attaching a disk to a VM it becomes part > > of > > the VM (active) like in 'real' machines. > > > +1 on that (regardless of whether the disk is shared or not). > IMO - in the case of shared disk we should make it as clear as > possible to the user/admin that the added disk is shared, but the > flow should be exactly the same. Also agreed > > > > > Thank you, Livnat > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From iheim at redhat.com Sun Feb 19 10:35:27 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 19 Feb 2012 12:35:27 +0200 Subject: [Engine-devel] VM disks In-Reply-To: <4F3FDAB5.9020203@redhat.com> References: <4F3FDAB5.9020203@redhat.com> Message-ID: <4F40D06F.4040106@redhat.com> On 02/18/2012 07:07 PM, Livnat Peer wrote: > Hi, > > These days we are working on various features around VM disks, in the > different threads it was decided that we'll have the ability to attach a > disk to a VM but it will be added as inactive, then the user can > activate it for it to be accessible from within the guest. > > Flow of adding a new disk would be: > - creating the disk > - attaching the disk to the VM > - activating it > > Flow of adding a shared disk (or any other existing disk): > - attach the disk > - activate it > > It seems to me a lot like adding a storage domain and I remember a lot > of rejections on the storage domain flow (mostly about it being too > cumbersome). true, you'll be asked to provide an option for the initial state in that case. > After discussing the issue with various people we could not find a good > reason for having a VM disk in attached but inactive mode. > > Of course we can wrap the above steps in one step for specific flows > (add+attach within a VM context for example) but can anyone think on a > good reason to support attached but inactive disk? > > I would suggest that when attaching a disk to a VM it becomes part of > the VM (active) like in 'real' machines. so hotunplug would make the disk floating, as it will detach it as well? 
From derez at redhat.com Sun Feb 19 10:36:18 2012 From: derez at redhat.com (Daniel Erez) Date: Sun, 19 Feb 2012 05:36:18 -0500 (EST) Subject: [Engine-devel] VM disks In-Reply-To: <7a5504e7-3aa8-4b72-8818-415502da11f5@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <65392c7a-33f7-4c1c-8b52-0ea87295e6ac@zmail14.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Michael Kublin" > To: engine-devel at ovirt.org > Sent: Sunday, February 19, 2012 9:59:51 AM > Subject: Re: [Engine-devel] VM disks > > > > ----- Original Message ----- > > From: "Oved Ourfalli" > > To: "Livnat Peer" > > Cc: engine-devel at ovirt.org > > Sent: Sunday, February 19, 2012 9:48:31 AM > > Subject: Re: [Engine-devel] VM disks > > > > > > > > ----- Original Message ----- > > > From: "Livnat Peer" > > > To: engine-devel at ovirt.org > > > Sent: Saturday, February 18, 2012 7:07:01 PM > > > Subject: [Engine-devel] VM disks > > > > > > Hi, > > > > > > These days we are working on various features around VM disks, in > > > the > > > different threads it was decided that we'll have the ability to > > > attach a > > > disk to a VM but it will be added as inactive, then the user can > > > activate it for it to be accessible from within the guest. > > > > > > Flow of adding a new disk would be: > > > - creating the disk > > > - attaching the disk to the VM > > > - activating it > These should be in a one step, otherwise the clients (rest and gui) > will need to pool us > for every disk > > > Flow of adding a shared disk (or any other existing disk): > > > - attach the disk > > > - activate it > These is just simple as a hot plug , should be and it is easy > implement as one step > > > It seems to me a lot like adding a storage domain and I remember > > > a > > > lot > > > of rejections on the storage domain flow (mostly about it being > > > too > > > cumbersome). > > > After discussing the issue with various people we could not find > > > a > > > good > > > reason for having a VM disk in attached but inactive mode. > > > > > > Of course we can wrap the above steps in one step for specific > > > flows > Agreed, should be in one step > > > (add+attach within a VM context for example) but can anyone think > > > on > > > a > > > good reason to support attached but inactive disk? > I don't see a reason also. > > > > I would suggest that when attaching a disk to a VM it becomes > > > part > > > of > > > the VM (active) like in 'real' machines. > > > > > +1 on that (regardless of whether the disk is shared or not). > > IMO - in the case of shared disk we should make it as clear as > > possible to the user/admin that the added disk is shared, but the > > flow should be exactly the same. > Also agreed > > +1 I think that any disk (new/attached) should be activated (plugged) by default. It seems less confusing to the user and probably a better UX. Joining the operations would save the client redundant disk status polling. 
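[Illustration] To make the single-step idea concrete, below is a rough client-side sketch of what an "add, attach and plug in one call" disk operation could look like through the REST API. It is only an illustration: the <active> element, the exact resource layout, and all ids/credentials here are assumptions, not the settled API.

# Rough client-side sketch only; element names such as <active> and the
# exact resource layout are assumptions for illustration, not the final API.
import base64
import httplib

ENGINE_HOST = "engine.example.com"                      # hypothetical
VM_ID = "11111111-2222-3333-4444-555555555555"          # hypothetical

DISK_XML = """
<disk>
    <storage_domains>
        <storage_domain id="66666666-7777-8888-9999-000000000000"/>
    </storage_domains>
    <size>10737418240</size>
    <interface>virtio</interface>
    <format>cow</format>
    <active>true</active>
</disk>
"""

headers = {
    "Content-Type": "application/xml",
    "Authorization": "Basic " + base64.b64encode("admin@internal:password"),
}

conn = httplib.HTTPSConnection(ENGINE_HOST)
# one request: create the image, attach it to the VM and plug it
conn.request("POST", "/api/vms/%s/disks" % VM_ID, DISK_XML, headers)
resp = conn.getresponse()
print resp.status, resp.reason

The alternative three-call sequence (add, attach, activate) would force exactly the per-disk status polling mentioned above.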
> > > > > > Thank you, Livnat > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Sun Feb 19 11:06:39 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 19 Feb 2012 13:06:39 +0200 Subject: [Engine-devel] Empty cdrom drive. In-Reply-To: References: Message-ID: <4F40D7BF.7040401@redhat.com> On 15/02/12 11:29, Miki Kenneth wrote: > > > ----- Original Message ----- >> From: "Ayal Baron" >> To: "Yaniv Kaul" >> Cc: engine-devel at ovirt.org >> Sent: Wednesday, February 15, 2012 11:23:54 AM >> Subject: Re: [Engine-devel] Empty cdrom drive. >> >> >> >> ----- Original Message ----- >>> On 02/15/2012 09:44 AM, Igor Lvovsky wrote: >>>> Hi, >>>> I want to discuss $subject on the email just to be sure that we >>>> all >>>> on the >>>> same page. >>>> >>>> So, today in 3.0 vdsm has two ways to create VM with cdrom : >>>> 1. If RHEV-M ask to create VM with cdrom, vdsm just create it >>>> 2. RHEV-M doesn't ask to create VM with cdrom, vdsm still >>>> creates >>>> VM with >>>> empty cdrom. Vdsm creates this device as 'hdc' (IDE device, >>>> index 2), >>>> because of libvirt restrictions. >>>> In this case RHEV-M will be able to "insert" cdrom on the >>>> fly >>>> with >>>> changeCD request. >>>> >>>> In the new style API we want to get rid from stupid scenario #2, >>>> because >>>> we want to be able to create VM without cdrom at all. >>>> It means, that now we need to change a little our scenarios: >>>> 1. If RHEV-M ask to create VM with cdrom, vdsm just create it >>>> 2. RHEV-M doesn't want to create VM with cdrom, but it want to >>>> be >>>> able to >>>> "insert" cdrom on the fly after this. Here we have two >>>> options: >>>> a. RHEV-M should to pass empty cdrom device on VM creation >>>> and >>>> use >>>> regular changeCD after that >>>> b. RHEV-M can create VM without cdrom and add cdrom later >>>> through >>>> hotplugDisk command. >>>> The preferred solution IMO would be to let the user choose if he wants a VM with CD or not. I think the motivation for the above is to 'save' IDE slot if a user does not need CD. If the user wants to have a VM with CD the engine would create an empty CD and pass it to VDSM as a device, but if the user does not require a CD there is no reason to create it in VDSM nor in the OE (oVirt Engine). Supporting the above requires the engine upgrade to create empty CD device to all VMs. Dan - what happens in 3.0 API if the engine passes the element cdrom but with empty path attribute. (I know that if the engine does not pass cdrom element VDSM creates empty CD) Livnat >>>> Note: The new libvirt remove previous restriction on cdrom >>>> devices. >>>> Now >>>> cdrom can be created as IDE or VIRTIO device in any index. >>>> It means we can easily hotplug it. >>> >>> I didn't know a CDROM can be a virtio device, but in any way it >>> requires >>> driver (which may not exist on Windows). >>> I didn't know an IDE CDROM can be hot-plugged (only USB-based?), >> >> It can't be hotplugged. >> usb based is not ide (the ide device is the usb port, the cdrom is a >> usb device afaik). 
>> >> The point of this email is that since we want to support being able >> to start VMs *without* a cdrom then the default behaviour of >> attaching a cdrom device needs to be implemented in engine or we >> shall have a regression. > This is a regression that we can not live with... >> In the new API (for stable device addresses) vdsm doesn't >> automatically attach a cdrom. >> >>> perhaps >>> I'm wrong here. >>> Y. >>> >>>> >>>> >>>> Regards, >>>> Igor Lvovsky >>>> >>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From lpeer at redhat.com Sun Feb 19 11:23:56 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 19 Feb 2012 13:23:56 +0200 Subject: [Engine-devel] VM disks In-Reply-To: <4F40D06F.4040106@redhat.com> References: <4F3FDAB5.9020203@redhat.com> <4F40D06F.4040106@redhat.com> Message-ID: <4F40DBCC.7020304@redhat.com> On 19/02/12 12:35, Itamar Heim wrote: > On 02/18/2012 07:07 PM, Livnat Peer wrote: >> Hi, >> >> These days we are working on various features around VM disks, in the >> different threads it was decided that we'll have the ability to attach a >> disk to a VM but it will be added as inactive, then the user can >> activate it for it to be accessible from within the guest. >> >> Flow of adding a new disk would be: >> - creating the disk >> - attaching the disk to the VM >> - activating it >> >> Flow of adding a shared disk (or any other existing disk): >> - attach the disk >> - activate it >> >> It seems to me a lot like adding a storage domain and I remember a lot >> of rejections on the storage domain flow (mostly about it being too >> cumbersome). > > true, you'll be asked to provide an option for the initial state in that > case. > >> After discussing the issue with various people we could not find a good >> reason for having a VM disk in attached but inactive mode. >> >> Of course we can wrap the above steps in one step for specific flows >> (add+attach within a VM context for example) but can anyone think on a >> good reason to support attached but inactive disk? >> >> I would suggest that when attaching a disk to a VM it becomes part of >> the VM (active) like in 'real' machines. > > so hotunplug would make the disk floating, as it will detach it as well? In short - yes. The user will be able to attach/detach disk, the implementation would be to hotplug or simply plug according to the VM status (up or not) . 
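[Illustration] A purely conceptual sketch of the dispatch Livnat describes above ("hotplug or simply plug according to the VM status"); all names are invented for illustration and are not actual engine or VDSM symbols:

def attach_disk(vm, disk, vdsm, db):
    # the disk becomes part of the VM in the database either way
    db.attach_disk_to_vm(disk.id, vm.id)
    if vm.status == "Up":
        # VM is running: plug the disk into the live guest via VDSM
        vdsm.hotplug_disk(vm.id, disk.spec)
    # VM is down: nothing to do on the host, the disk is simply part of
    # the VM definition on the next run
    db.mark_disk_plugged(vm.id, disk.id)

def detach_disk(vm, disk, vdsm, db):
    if vm.status == "Up":
        vdsm.hotunplug_disk(vm.id, disk.spec)   # live unplug first
    db.detach_disk_from_vm(disk.id, vm.id)      # the disk becomes floating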
From ykaul at redhat.com Sun Feb 19 12:15:42 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 19 Feb 2012 14:15:42 +0200 Subject: [Engine-devel] VM disks In-Reply-To: <4F3FDAB5.9020203@redhat.com> References: <4F3FDAB5.9020203@redhat.com> Message-ID: <4F40E7EE.5070506@redhat.com> On 02/18/2012 07:07 PM, Livnat Peer wrote: > Hi, > > These days we are working on various features around VM disks, in the > different threads it was decided that we'll have the ability to attach a > disk to a VM but it will be added as inactive, then the user can > activate it for it to be accessible from within the guest. > > Flow of adding a new disk would be: > - creating the disk > - attaching the disk to the VM > - activating it > > Flow of adding a shared disk (or any other existing disk): > - attach the disk > - activate it > > It seems to me a lot like adding a storage domain and I remember a lot > of rejections on the storage domain flow (mostly about it being too > cumbersome). > After discussing the issue with various people we could not find a good > reason for having a VM disk in attached but inactive mode. And since you probably can't find a good reason to have two steps for storage domain, lets fix this as well. (Downstream RFE https://bugzilla.redhat.com/show_bug.cgi?id=567585 - discuss only the export domain, but I'm quite sure it's applicable to other domains as well) Y. > > Of course we can wrap the above steps in one step for specific flows > (add+attach within a VM context for example) but can anyone think on a > good reason to support attached but inactive disk? > > I would suggest that when attaching a disk to a VM it becomes part of > the VM (active) like in 'real' machines. > > > Thank you, Livnat > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilvovsky at redhat.com Sun Feb 19 15:42:02 2012 From: ilvovsky at redhat.com (Igor Lvovsky) Date: Sun, 19 Feb 2012 10:42:02 -0500 (EST) Subject: [Engine-devel] Empty cdrom drive. In-Reply-To: <4F40D7BF.7040401@redhat.com> References: <4F40D7BF.7040401@redhat.com> Message-ID: <014f01ccef1d$aab3c560$001b5020$@com> > -----Original Message----- > From: engine-devel-bounces at ovirt.org [mailto:engine-devel-bounces at ovirt.org] > On Behalf Of Livnat Peer > Sent: Sunday, February 19, 2012 1:07 PM > To: Dan Kenigsberg > Cc: engine-devel at ovirt.org; arch at ovirt.org > Subject: Re: [Engine-devel] Empty cdrom drive. > > On 15/02/12 11:29, Miki Kenneth wrote: > > > > > > ----- Original Message ----- > >> From: "Ayal Baron" > >> To: "Yaniv Kaul" > >> Cc: engine-devel at ovirt.org > >> Sent: Wednesday, February 15, 2012 11:23:54 AM > >> Subject: Re: [Engine-devel] Empty cdrom drive. > >> > >> > >> > >> ----- Original Message ----- > >>> On 02/15/2012 09:44 AM, Igor Lvovsky wrote: > >>>> Hi, > >>>> I want to discuss $subject on the email just to be sure that we > >>>> all > >>>> on the > >>>> same page. > >>>> > >>>> So, today in 3.0 vdsm has two ways to create VM with cdrom : > >>>> 1. If RHEV-M ask to create VM with cdrom, vdsm just create it > >>>> 2. RHEV-M doesn't ask to create VM with cdrom, vdsm still > >>>> creates > >>>> VM with > >>>> empty cdrom. Vdsm creates this device as 'hdc' (IDE device, > >>>> index 2), > >>>> because of libvirt restrictions. 
> >>>> In this case RHEV-M will be able to "insert" cdrom on the > >>>> fly > >>>> with > >>>> changeCD request. > >>>> > >>>> In the new style API we want to get rid from stupid scenario #2, > >>>> because > >>>> we want to be able to create VM without cdrom at all. > >>>> It means, that now we need to change a little our scenarios: > >>>> 1. If RHEV-M ask to create VM with cdrom, vdsm just create it > >>>> 2. RHEV-M doesn't want to create VM with cdrom, but it want to > >>>> be > >>>> able to > >>>> "insert" cdrom on the fly after this. Here we have two > >>>> options: > >>>> a. RHEV-M should to pass empty cdrom device on VM creation > >>>> and > >>>> use > >>>> regular changeCD after that > >>>> b. RHEV-M can create VM without cdrom and add cdrom later > >>>> through > >>>> hotplugDisk command. > >>>> > > > The preferred solution IMO would be to let the user choose if he wants a > VM with CD or not. > I think the motivation for the above is to 'save' IDE slot if a user > does not need CD. > > If the user wants to have a VM with CD the engine would create an empty > CD and pass it to VDSM as a device, but if the user does not require a > CD there is no reason to create it in VDSM nor in the OE (oVirt Engine). > > Supporting the above requires the engine upgrade to create empty CD > device to all VMs. > +1 Indeed, this is a right thing to do > Dan - what happens in 3.0 API if the engine passes the element cdrom but > with empty path attribute. (I know that if the engine does not pass > cdrom element VDSM creates empty CD) We will still create an empty CD > > > Livnat > > > >>>> Note: The new libvirt remove previous restriction on cdrom > >>>> devices. > >>>> Now > >>>> cdrom can be created as IDE or VIRTIO device in any index. > >>>> It means we can easily hotplug it. > >>> > >>> I didn't know a CDROM can be a virtio device, but in any way it > >>> requires > >>> driver (which may not exist on Windows). > >>> I didn't know an IDE CDROM can be hot-plugged (only USB-based?), > >> > >> It can't be hotplugged. > >> usb based is not ide (the ide device is the usb port, the cdrom is a > >> usb device afaik). > >> > >> The point of this email is that since we want to support being able > >> to start VMs *without* a cdrom then the default behaviour of > >> attaching a cdrom device needs to be implemented in engine or we > >> shall have a regression. > > This is a regression that we can not live with... > >> In the new API (for stable device addresses) vdsm doesn't > >> automatically attach a cdrom. > >> > >>> perhaps > >>> I'm wrong here. > >>> Y. 
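[Illustration] For reference, a stand-alone sketch of the device under discussion: an empty IDE cdrom as libvirt models it, plus a media "insert" done by updating the device, which is roughly what a changeCD-style call amounts to. This is illustration only, not VDSM's actual implementation; the VM name and ISO path are made up.

import libvirt

# an empty cdrom drive: no <source>; 'hdc' == IDE, index 2
EMPTY_CDROM_XML = """
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
"""

# the same device with media "inserted" (hypothetical ISO path)
CDROM_WITH_MEDIA_XML = """
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/srv/isos/example.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('example-vm')   # hypothetical running VM

# EMPTY_CDROM_XML is what would sit in the domain definition at create
# time; changing media later is an update of that existing device:
dom.updateDeviceFlags(CDROM_WITH_MEDIA_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)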
> >>> > >>>> > >>>> > >>>> Regards, > >>>> Igor Lvovsky > >>>> > >>>> > >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>> > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Sun Feb 19 16:18:35 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 19 Feb 2012 18:18:35 +0200 Subject: [Engine-devel] bridgless networks In-Reply-To: <5fe81486-2785-48f0-89db-a02d5f3fd4af@zmail01.collab.prod.int.phx2.redhat.com> References: <5fe81486-2785-48f0-89db-a02d5f3fd4af@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F4120DB.4040004@redhat.com> On 02/14/2012 03:35 PM, Roy Golan wrote: > ----- Original Message ----- >> From: "Itamar Heim" >> To: "Roy Golan" >> Cc: engine-devel at ovirt.org >> Sent: Thursday, February 9, 2012 10:02:03 AM >> Subject: Re: [Engine-devel] bridgless networks >> >> On 02/06/2012 04:47 PM, Roy Golan wrote: >>> Hi All >>> >>> Lately I've been working on a design of bridge-less network feature >>> in the engine. >>> You can see it in >>> http://www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks#Bridge-less_Networks >>> >>> Please review the design. >>> Note, there are some open issues, you can find in the relevant >>> section. >>> Reviews and comments are very welcome. >> >> 1. validations >> 1.1. do you block setting a logical network to don't allow running >> VMs >> if it has a vnic associated with it? >> 1.2. do you check on import a vnic isn't connected to a logical >> network >> which doesn't allow running VMs? >> 1.3. do you check when REST API tries to add/edit a vnic that the >> chosen >> logical network is allowed to run VMs? >> >> 2. changes >> 2.1 can a logical network be changed between allow/disallow running >> VMs? >> 2.2 what's the flow when enabling running VMs? will the logical >> network >> become non-operational until all hosts are reconfigured with a bridge >> (if applicable)? >> what is the user flow to reconfigure the hosts (go one by one? do >> what >> (there is no change to host level config)? >> 2.3 what's the flow to not allowing to run VMs (bridge-less) - no >> need >> to make the network non operational, but same question - what should >> the >> admin do to reconfigure the hosts (no host level config change is >> needed >> by him, just a reconfigure iiuc) >> >> Thanks, >> Itamar >> > > > Since it will take some time till we'll add a type to a nic, the whole concept of > enforcing bridging in the migration domain, namely the cluster, should be replaced with much more simple > approach - set bridged true/false during the attach action on the host (i.e setupnetworks). 
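[Illustration] As an illustration of the attach-time option described above, the per-host call could look roughly like the sketch below, using VDSM's setupNetworks verb. The host address, the exact parameter keys such as 'bridged' and 'bootproto', and the omitted SSL/certificate handling are all simplifying assumptions.

import xmlrpclib

# hypothetical host; VDSM's XML-RPC port, SSL/cert setup omitted for brevity
vdsm = xmlrpclib.ServerProxy("https://host01.example.com:54321")

networks = {
    "vmdata":  {"nic": "eth2", "vlan": "100",
                "bridged": "true",  "bootproto": "none"},  # bridged network
    "storage": {"nic": "eth3",
                "bridged": "false", "bootproto": "dhcp"},  # bridgeless network
}
bondings = {}
options = {"connectivityCheck": "true", "connectivityTimeout": 60}

result = vdsm.setupNetworks(networks, bondings, options)
print result["status"]["code"], result["status"]["message"]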
> > This means there are no monitoring checks, no new fields to logical networks and no validations but > migration might fail in case the target network is not bridged and the underlying nic is not vNic etc. > > Once we will support nic types it will be easy to add the ability to mark a network as "able to run VMs" to > advice the attach nic action, based on the nic type to set a bridge or not. > > thoughts? what i don't like about this: 1. no validations == allows more users errors 2. more definitions at host level (+ allows more user error on misconfiguring the cluster). 3. probably need to obsolete this when will add this at logical network + handle upgrade for this so question is what is the implementation gap between doing this at logical network (cluster level) to doing this at host level? From derez at redhat.com Sun Feb 19 18:56:46 2012 From: derez at redhat.com (Daniel Erez) Date: Sun, 19 Feb 2012 13:56:46 -0500 (EST) Subject: [Engine-devel] VM disks In-Reply-To: <4F40DBCC.7020304@redhat.com> Message-ID: ----- Original Message ----- > From: "Livnat Peer" > To: "Itamar Heim" > Cc: engine-devel at ovirt.org > Sent: Sunday, February 19, 2012 1:23:56 PM > Subject: Re: [Engine-devel] VM disks > > On 19/02/12 12:35, Itamar Heim wrote: > > On 02/18/2012 07:07 PM, Livnat Peer wrote: > >> Hi, > >> > >> These days we are working on various features around VM disks, in > >> the > >> different threads it was decided that we'll have the ability to > >> attach a > >> disk to a VM but it will be added as inactive, then the user can > >> activate it for it to be accessible from within the guest. > >> > >> Flow of adding a new disk would be: > >> - creating the disk > >> - attaching the disk to the VM > >> - activating it > >> > >> Flow of adding a shared disk (or any other existing disk): > >> - attach the disk > >> - activate it > >> > >> It seems to me a lot like adding a storage domain and I remember a > >> lot > >> of rejections on the storage domain flow (mostly about it being > >> too > >> cumbersome). > > > > true, you'll be asked to provide an option for the initial state in > > that > > case. > > > >> After discussing the issue with various people we could not find a > >> good > >> reason for having a VM disk in attached but inactive mode. > >> > >> Of course we can wrap the above steps in one step for specific > >> flows > >> (add+attach within a VM context for example) but can anyone think > >> on a > >> good reason to support attached but inactive disk? > >> > >> I would suggest that when attaching a disk to a VM it becomes part > >> of > >> the VM (active) like in 'real' machines. > > > > so hotunplug would make the disk floating, as it will detach it as > > well? > > In short - yes. > > The user will be able to attach/detach disk, the implementation would > be > to hotplug or simply plug according to the VM status (up or not) . What about disks with snapshots? By the current design of floating disks, detaching a disk with snapshots can be done only by collapsing and marking the snapshots as broken. Thus, removing a disk momentarily might be problematic without Plugged/Unplugged status. Maybe we should keep the current Activate/Deactivate buttons for disks in addition to encapsulating attach/detach and plug/unplug commands. So, adding/attaching a new disk will plug the disk automatically while allowing the user deactivating a disk temporarily. 
> _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From lpeer at redhat.com Sun Feb 19 19:48:49 2012 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 19 Feb 2012 21:48:49 +0200 Subject: [Engine-devel] VM disks In-Reply-To: References: Message-ID: <4F415221.5060907@redhat.com> On 19/02/12 20:56, Daniel Erez wrote: > > > ----- Original Message ----- >> From: "Livnat Peer" >> To: "Itamar Heim" >> Cc: engine-devel at ovirt.org >> Sent: Sunday, February 19, 2012 1:23:56 PM >> Subject: Re: [Engine-devel] VM disks >> >> On 19/02/12 12:35, Itamar Heim wrote: >>> On 02/18/2012 07:07 PM, Livnat Peer wrote: >>>> Hi, >>>> >>>> These days we are working on various features around VM disks, in >>>> the >>>> different threads it was decided that we'll have the ability to >>>> attach a >>>> disk to a VM but it will be added as inactive, then the user can >>>> activate it for it to be accessible from within the guest. >>>> >>>> Flow of adding a new disk would be: >>>> - creating the disk >>>> - attaching the disk to the VM >>>> - activating it >>>> >>>> Flow of adding a shared disk (or any other existing disk): >>>> - attach the disk >>>> - activate it >>>> >>>> It seems to me a lot like adding a storage domain and I remember a >>>> lot >>>> of rejections on the storage domain flow (mostly about it being >>>> too >>>> cumbersome). >>> >>> true, you'll be asked to provide an option for the initial state in >>> that >>> case. >>> >>>> After discussing the issue with various people we could not find a >>>> good >>>> reason for having a VM disk in attached but inactive mode. >>>> >>>> Of course we can wrap the above steps in one step for specific >>>> flows >>>> (add+attach within a VM context for example) but can anyone think >>>> on a >>>> good reason to support attached but inactive disk? >>>> >>>> I would suggest that when attaching a disk to a VM it becomes part >>>> of >>>> the VM (active) like in 'real' machines. >>> >>> so hotunplug would make the disk floating, as it will detach it as >>> well? >> >> In short - yes. >> >> The user will be able to attach/detach disk, the implementation would >> be >> to hotplug or simply plug according to the VM status (up or not) . > > > What about disks with snapshots? > By the current design of floating disks, detaching a disk with snapshots > can be done only by collapsing and marking the snapshots as broken. > Thus, removing a disk momentarily might be problematic without Plugged/Unplugged status. > when taking the snapshots the user can choose if he wants to have the shared disk or direct lun in the snapshot or not, once the user makes the call that would be reflected in the snapshot configuration. > Maybe we should keep the current Activate/Deactivate buttons for disks in addition to > encapsulating attach/detach and plug/unplug commands. > So, adding/attaching a new disk will plug the disk automatically while allowing the user > deactivating a disk temporarily. IIUC that's the original design which I am suggesting to change. We got negative feedback on a similar approach with regard to storage domains I suspect it will be even more acute when it comes to VM disks which is much more common. 
Livnat > > >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> From mkolesni at redhat.com Mon Feb 20 12:14:13 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Mon, 20 Feb 2012 07:14:13 -0500 (EST) Subject: [Engine-devel] VM disks In-Reply-To: <4F415221.5060907@redhat.com> Message-ID: <24bc9ab0-b274-4365-aa72-29fefff2c2c9@zmail14.collab.prod.int.phx2.redhat.com> > On 19/02/12 20:56, Daniel Erez wrote: > > > > > > ----- Original Message ----- > >> From: "Livnat Peer" > >> To: "Itamar Heim" > >> Cc: engine-devel at ovirt.org > >> Sent: Sunday, February 19, 2012 1:23:56 PM > >> Subject: Re: [Engine-devel] VM disks > >> > >> On 19/02/12 12:35, Itamar Heim wrote: > >>> On 02/18/2012 07:07 PM, Livnat Peer wrote: > >>>> Hi, > >>>> > >>>> These days we are working on various features around VM disks, > >>>> in > >>>> the > >>>> different threads it was decided that we'll have the ability to > >>>> attach a > >>>> disk to a VM but it will be added as inactive, then the user can > >>>> activate it for it to be accessible from within the guest. > >>>> > >>>> Flow of adding a new disk would be: > >>>> - creating the disk > >>>> - attaching the disk to the VM > >>>> - activating it > >>>> > >>>> Flow of adding a shared disk (or any other existing disk): > >>>> - attach the disk > >>>> - activate it > >>>> > >>>> It seems to me a lot like adding a storage domain and I remember > >>>> a > >>>> lot > >>>> of rejections on the storage domain flow (mostly about it being > >>>> too > >>>> cumbersome). > >>> > >>> true, you'll be asked to provide an option for the initial state > >>> in > >>> that > >>> case. > >>> > >>>> After discussing the issue with various people we could not find > >>>> a > >>>> good > >>>> reason for having a VM disk in attached but inactive mode. > >>>> > >>>> Of course we can wrap the above steps in one step for specific > >>>> flows > >>>> (add+attach within a VM context for example) but can anyone > >>>> think > >>>> on a > >>>> good reason to support attached but inactive disk? > >>>> > >>>> I would suggest that when attaching a disk to a VM it becomes > >>>> part > >>>> of > >>>> the VM (active) like in 'real' machines. > >>> > >>> so hotunplug would make the disk floating, as it will detach it > >>> as > >>> well? > >> > >> In short - yes. > >> > >> The user will be able to attach/detach disk, the implementation > >> would > >> be > >> to hotplug or simply plug according to the VM status (up or not) . > > > > > > What about disks with snapshots? > > By the current design of floating disks, detaching a disk with > > snapshots > > can be done only by collapsing and marking the snapshots as broken. > > Thus, removing a disk momentarily might be problematic without > > Plugged/Unplugged status. > > > > when taking the snapshots the user can choose if he wants to have the > shared disk or direct lun in the snapshot or not, once the user makes > the call that would be reflected in the snapshot configuration. What derez meant is that once disk is detached from VM it cannot retain it's history, as today snapshot data is part of the VM definition and not the single disk, so then all it's images should be collapsed, especially if it is to be attached to an entirely different VM. > > > > Maybe we should keep the current Activate/Deactivate buttons for > > disks in addition to > > encapsulating attach/detach and plug/unplug commands. 
> > So, adding/attaching a new disk will plug the disk automatically > > while allowing the user > > deactivating a disk temporarily. > > IIUC that's the original design which I am suggesting to change. > We got negative feedback on a similar approach with regard to storage > domains I suspect it will be even more acute when it comes to VM > disks > which is much more common. I think the downside of improving UX like you suggest (by chaining the atomic commands in the client IIUC), is that the client needs to poll us repetitively which poses several issues such as performance and the need for the client to manage a "transaction". Since these cases of the need to run several commands in a "flow" is increasing, maybe we need to offer a generic API that allows to run several commands in a simple flow (simple BPEL style perhaps?) and take the load off the clients. > > Livnat > > > From yzaslavs at redhat.com Tue Feb 21 06:29:01 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 21 Feb 2012 08:29:01 +0200 Subject: [Engine-devel] Future of IImage and DiskImageBase? Message-ID: <4F4339AD.3000803@redhat.com> Hi all, Now with the elimination of DiskImageTemplate entity, Do we really need DiskImageBase and IImage? Yair From mkolesni at redhat.com Tue Feb 21 06:33:18 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Tue, 21 Feb 2012 01:33:18 -0500 (EST) Subject: [Engine-devel] Future of IImage and DiskImageBase? In-Reply-To: <4F4339AD.3000803@redhat.com> Message-ID: <1f306f58-333c-49ff-bcfc-75865ba15d19@zmail14.collab.prod.int.phx2.redhat.com> > Hi all, > Now with the elimination of DiskImageTemplate entity, > Do we really need DiskImageBase and IImage? DiskImageBase is part of our API in the commands, and it represents only the essential disk/image data that needs to be passed for certain operations, so I'd say we can keep it. IImage is irrelevant anymore since it was an abstraction over the DiskImage/DiskImageTemplate entities, so that one's for the trash can. > > Yair > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From gchaplik at redhat.com Tue Feb 21 15:11:51 2012 From: gchaplik at redhat.com (Gilad Chaplik) Date: Tue, 21 Feb 2012 10:11:51 -0500 (EST) Subject: [Engine-devel] 'Import VM/Template More Than Once' feature In-Reply-To: <268c68ca-34a3-4acc-a666-d13fb7c52f0d@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: Hello all, The 'Import VM/Template More Than Once' feature description can be found under the following link: http://www.ovirt.org/wiki/Features/ImportMoreThanOnce Please review, and feel free to share your comments and thoughts. Thanks, Gilad. From ykaul at redhat.com Tue Feb 21 15:27:19 2012 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 21 Feb 2012 17:27:19 +0200 Subject: [Engine-devel] 'Import VM/Template More Than Once' feature In-Reply-To: References: Message-ID: <4F43B7D7.8030804@redhat.com> On 02/21/2012 05:11 PM, Gilad Chaplik wrote: > Hello all, > > The 'Import VM/Template More Than Once' feature description can be found under the following link: > > http://www.ovirt.org/wiki/Features/ImportMoreThanOnce > > Please review, and feel free to share your comments and thoughts. > > Thanks, > Gilad. > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel 1. Missing high level (user level) summary. 
For example, what does it mean that a VM already exist in the setup? If I had a VM with 10GB disk, without an OS installed, then exported it, then installed an OS into it (so now the disk is a bit full, as opposed to the emptied exported one). Does it means that an identical entity already exist in the setup or not? (think of overwriting files). The design should (ALWAYS?) start with the user flow. 2. 'clone' doesn't strike me as a great parameter name in the API. Not in the UI either (but I don't have a better suggestion - yet). 3. What is the equivalent to 'suffix' in the API? Or do we expect to provide a name when we import? I'm not sure how the API works in the case of an existing VM, really. Only after we fetch via the API the fact it has an existing VM, we give it a new suffix? 4. Small typos (I'll fix them later directly in the wiki, except for the mockups, which I can't fix). Y. From yzaslavs at redhat.com Tue Feb 21 16:03:46 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Tue, 21 Feb 2012 18:03:46 +0200 Subject: [Engine-devel] 'Import VM/Template More Than Once' feature In-Reply-To: <4F43B7D7.8030804@redhat.com> References: <4F43B7D7.8030804@redhat.com> Message-ID: <4F43C062.1000600@redhat.com> On 02/21/2012 05:27 PM, Yaniv Kaul wrote: > On 02/21/2012 05:11 PM, Gilad Chaplik wrote: >> Hello all, >> >> The 'Import VM/Template More Than Once' feature description can be >> found under the following link: >> >> http://www.ovirt.org/wiki/Features/ImportMoreThanOnce >> >> Please review, and feel free to share your comments and thoughts. >> >> Thanks, >> Gilad. >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > 1. Missing high level (user level) summary. For example, what does it > mean that a VM already exist in the setup? If I had a VM with 10GB disk, > without an OS installed, then exported it, then installed an OS into it > (so now the disk is a bit full, as opposed to the emptied exported one). > Does it means that an identical entity already exist in the setup or > not? (think of overwriting files). > The design should (ALWAYS?) start with the user flow. > 2. 'clone' doesn't strike me as a great parameter name in the API. Not > in the UI either (but I don't have a better suggestion - yet). > 3. What is the equivalent to 'suffix' in the API? Or do we expect to > provide a name when we import? I'm not sure how the API works in the > case of an existing VM, really. Only after we fetch via the API the fact This reminds me that I was in a dilemma of suffix/prefix issue myself, in Clone VM from snapshot feature. I think this issue may be problematic , and for simplicity we should simply provide a new name. > it has an existing VM, we give it a new suffix? > 4. Small typos (I'll fix them later directly in the wiki, except for the > mockups, which I can't fix). > Y. 
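[Illustration] Regarding point 3 above (and the 'clone' flag from point 2), a hedged sketch of what the import action could look like from a REST client. The <clone> element follows the wiki proposal; the resource path, ids and credentials are illustrative assumptions.

import base64
import httplib

EXPORT_DOMAIN = "aaaaaaaa-0000-0000-0000-000000000001"   # hypothetical ids
VM_ID = "bbbbbbbb-0000-0000-0000-000000000002"

IMPORT_ACTION_XML = """
<action>
    <cluster id="cccccccc-0000-0000-0000-000000000003"/>
    <storage_domain id="dddddddd-0000-0000-0000-000000000004"/>
    <clone>true</clone>
    <vm>
        <name>my_vm-copy1</name>   <!-- new name supplied by the client -->
    </vm>
</action>
"""

headers = {
    "Content-Type": "application/xml",
    "Authorization": "Basic " + base64.b64encode("admin@internal:password"),
}
conn = httplib.HTTPSConnection("engine.example.com")
conn.request("POST",
             "/api/storagedomains/%s/vms/%s/import" % (EXPORT_DOMAIN, VM_ID),
             IMPORT_ACTION_XML, headers)
print conn.getresponse().status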
> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From ofrenkel at redhat.com Tue Feb 21 16:30:24 2012 From: ofrenkel at redhat.com (Omer Frenkel) Date: Tue, 21 Feb 2012 11:30:24 -0500 (EST) Subject: [Engine-devel] 'Import VM/Template More Than Once' feature In-Reply-To: <4F43B7D7.8030804@redhat.com> Message-ID: <00064d8c-41a8-4581-8c5f-a37a4dc19228@ofrenkel.csb> ----- Original Message ----- > From: "Yaniv Kaul" > To: "Gilad Chaplik" > Cc: engine-devel at ovirt.org > Sent: Tuesday, February 21, 2012 5:27:19 PM > Subject: Re: [Engine-devel] 'Import VM/Template More Than Once' feature > > On 02/21/2012 05:11 PM, Gilad Chaplik wrote: > > Hello all, > > > > The 'Import VM/Template More Than Once' feature description can be > > found under the following link: > > > > http://www.ovirt.org/wiki/Features/ImportMoreThanOnce > > > > Please review, and feel free to share your comments and thoughts. > > > > Thanks, > > Gilad. > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > 1. Missing high level (user level) summary. For example, what does it > mean that a VM already exist in the setup? If I had a VM with 10GB > disk, > without an OS installed, then exported it, then installed an OS into > it > (so now the disk is a bit full, as opposed to the emptied exported > one). > Does it means that an identical entity already exist in the setup or > not? (think of overwriting files). > The design should (ALWAYS?) start with the user flow. > 2. 'clone' doesn't strike me as a great parameter name in the API. > Not > in the UI either (but I don't have a better suggestion - yet). i would say: importAsNewEntity - because what it really does is create a new entity, in terms of ids, disks, nics... also, the copyCollapse flag should be on as well. > 3. What is the equivalent to 'suffix' in the API? Or do we expect to > provide a name when we import? I'm not sure how the API works in the > case of an existing VM, really. Only after we fetch via the API the > fact > it has an existing VM, we give it a new suffix? > 4. Small typos (I'll fix them later directly in the wiki, except for > the > mockups, which I can't fix). > Y. > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From ecohen at redhat.com Tue Feb 21 16:49:17 2012 From: ecohen at redhat.com (Einav Cohen) Date: Tue, 21 Feb 2012 11:49:17 -0500 (EST) Subject: [Engine-devel] 'Import VM/Template More Than Once' feature In-Reply-To: <4F43C062.1000600@redhat.com> Message-ID: <91137465-e86b-45e4-b38f-059dcb6d84f9@zmail04.collab.prod.int.phx2.redhat.com> > ----- Original Message ----- > From: "Yair Zaslavsky" > Sent: Tuesday, February 21, 2012 6:03:46 PM > > On 02/21/2012 05:27 PM, Yaniv Kaul wrote: > > On 02/21/2012 05:11 PM, Gilad Chaplik wrote: > >> Hello all, > >> > >> The 'Import VM/Template More Than Once' feature description can be > >> found under the following link: > >> > >> http://www.ovirt.org/wiki/Features/ImportMoreThanOnce > >> > >> Please review, and feel free to share your comments and thoughts. > >> > >> Thanks, > >> Gilad. 
> >> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > 1. Missing high level (user level) summary. For example, what does > > it > > mean that a VM already exist in the setup? If I had a VM with 10GB > > disk, > > without an OS installed, then exported it, then installed an OS > > into it > > (so now the disk is a bit full, as opposed to the emptied exported > > one). > > Does it means that an identical entity already exist in the setup > > or > > not? (think of overwriting files). > > The design should (ALWAYS?) start with the user flow. > > 2. 'clone' doesn't strike me as a great parameter name in the API. > > Not > > in the UI either (but I don't have a better suggestion - yet). > > 3. What is the equivalent to 'suffix' in the API? Or do we expect > > to > > provide a name when we import? I'm not sure how the API works in > > the > > case of an existing VM, really. Only after we fetch via the API the > > fact > > This reminds me that I was in a dilemma of suffix/prefix issue > myself, > in Clone VM from snapshot feature. > I think this issue may be problematic , and for simplicity we should > simply provide a new name. * just to clarify: the suffix feature is a pure GUI feature, and it shouldn't involve the backend/api; the api should require a new name for every imported VM; the suffix text-box is just to allow the user to provide a new name for several imported VMs at once. * in case of "clone VM from snapshot", I think that providing a new name is enough (maybe provide a default name in the new name text-box, but again - it is GUI only), mainly because I don't think that we will allow cloning from several VMs/snapshots at once. [we might want to clone several VMs from the same snapshot, not sure how we will provide multiple names in such case] > > > it has an existing VM, we give it a new suffix? > > 4. Small typos (I'll fix them later directly in the wiki, except > > for the > > mockups, which I can't fix). > > Y. > > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From mkenneth at redhat.com Tue Feb 21 17:21:22 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Tue, 21 Feb 2012 12:21:22 -0500 (EST) Subject: [Engine-devel] VM disks In-Reply-To: <24bc9ab0-b274-4365-aa72-29fefff2c2c9@zmail14.collab.prod.int.phx2.redhat.com> Message-ID: <13e9c6d5-5b6d-4512-a6dc-d9a12386205e@mkenneth.csb> Top posting: It's not that I am for breaking the flow to create/attach/activate, but we need to consider all the use cases. Just want to highlight a use case, and pls find the solution for it: I've a VM with 4 different disks on 4 different storage domains. What will happen if the on Run VM when one of the SDs is inaccessible? Of course the VM should be able to run, but the Disk on the inaccessible SD should be "off/down"? Note that if the Disk on the inaccessible SD is "boot" disk, the Run VM should probably fail, or ? 
Thanks, Miki ----- Original Message ----- > From: "Mike Kolesnik" > To: "Livnat Peer" > Cc: engine-devel at ovirt.org > Sent: Monday, February 20, 2012 2:14:13 PM > Subject: Re: [Engine-devel] VM disks > > > On 19/02/12 20:56, Daniel Erez wrote: > > > > > > > > > ----- Original Message ----- > > >> From: "Livnat Peer" > > >> To: "Itamar Heim" > > >> Cc: engine-devel at ovirt.org > > >> Sent: Sunday, February 19, 2012 1:23:56 PM > > >> Subject: Re: [Engine-devel] VM disks > > >> > > >> On 19/02/12 12:35, Itamar Heim wrote: > > >>> On 02/18/2012 07:07 PM, Livnat Peer wrote: > > >>>> Hi, > > >>>> > > >>>> These days we are working on various features around VM disks, > > >>>> in > > >>>> the > > >>>> different threads it was decided that we'll have the ability > > >>>> to > > >>>> attach a > > >>>> disk to a VM but it will be added as inactive, then the user > > >>>> can > > >>>> activate it for it to be accessible from within the guest. > > >>>> > > >>>> Flow of adding a new disk would be: > > >>>> - creating the disk > > >>>> - attaching the disk to the VM > > >>>> - activating it > > >>>> > > >>>> Flow of adding a shared disk (or any other existing disk): > > >>>> - attach the disk > > >>>> - activate it > > >>>> > > >>>> It seems to me a lot like adding a storage domain and I > > >>>> remember > > >>>> a > > >>>> lot > > >>>> of rejections on the storage domain flow (mostly about it > > >>>> being > > >>>> too > > >>>> cumbersome). > > >>> > > >>> true, you'll be asked to provide an option for the initial > > >>> state > > >>> in > > >>> that > > >>> case. > > >>> > > >>>> After discussing the issue with various people we could not > > >>>> find > > >>>> a > > >>>> good > > >>>> reason for having a VM disk in attached but inactive mode. > > >>>> > > >>>> Of course we can wrap the above steps in one step for specific > > >>>> flows > > >>>> (add+attach within a VM context for example) but can anyone > > >>>> think > > >>>> on a > > >>>> good reason to support attached but inactive disk? > > >>>> > > >>>> I would suggest that when attaching a disk to a VM it becomes > > >>>> part > > >>>> of > > >>>> the VM (active) like in 'real' machines. > > >>> > > >>> so hotunplug would make the disk floating, as it will detach it > > >>> as > > >>> well? > > >> > > >> In short - yes. > > >> > > >> The user will be able to attach/detach disk, the implementation > > >> would > > >> be > > >> to hotplug or simply plug according to the VM status (up or not) > > >> . > > > > > > > > > What about disks with snapshots? > > > By the current design of floating disks, detaching a disk with > > > snapshots > > > can be done only by collapsing and marking the snapshots as > > > broken. > > > Thus, removing a disk momentarily might be problematic without > > > Plugged/Unplugged status. > > > > > > > when taking the snapshots the user can choose if he wants to have > > the > > shared disk or direct lun in the snapshot or not, once the user > > makes > > the call that would be reflected in the snapshot configuration. > > What derez meant is that once disk is detached from VM it cannot > retain it's history, as today snapshot data is part of the VM > definition and not the single disk, so then all it's images should > be collapsed, especially if it is to be attached to an entirely > different VM. > > > > > > > > Maybe we should keep the current Activate/Deactivate buttons for > > > disks in addition to > > > encapsulating attach/detach and plug/unplug commands. 
> > > So, adding/attaching a new disk will plug the disk automatically > > > while allowing the user > > > deactivating a disk temporarily. > > > > IIUC that's the original design which I am suggesting to change. > > We got negative feedback on a similar approach with regard to > > storage > > domains I suspect it will be even more acute when it comes to VM > > disks > > which is much more common. > > I think the downside of improving UX like you suggest (by chaining > the atomic commands in the client IIUC), is that the client needs to > poll us repetitively which poses several issues such as performance > and the need for the client to manage a "transaction". > > Since these cases of the need to run several commands in a "flow" is > increasing, maybe we need to offer a generic API that allows to run > several commands in a simple flow (simple BPEL style perhaps?) and > take the load off the clients. > > > > > Livnat > > > > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From rgolan at redhat.com Wed Feb 22 00:15:40 2012 From: rgolan at redhat.com (Roy Golan) Date: Tue, 21 Feb 2012 19:15:40 -0500 (EST) Subject: [Engine-devel] network - UI Sync meeting Message-ID: <69015606-afb8-48db-93af-ad7ae9400c58@zmail01.collab.prod.int.phx2.redhat.com> The following meeting has been modified: Subject: network - UI Sync meeting Organizer: "Roy Golan" Time: Monday, February 27, 2012, 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem [MODIFIED] Invitees: mkenneth at redhat.com; sgrinber at redhat.com; lpeer at redhat.com; dfediuck at redhat.com; drankevi at redhat.com; ecohen at redhat.com; iheim at redhat.com; ovedo at redhat.com; acathrow at redhat.com; engine-devel at ovirt.org; kroberts at redhat.com ... *~*~*~*~*~*~*~*~*~* Follow-up meeting on setup networks UI. issues to follow: 1. can VDSM attach many non-vlan and many vlan networks to a single nic? (Dan - please reply if its doable) 2. if yes is the UI breakdown of vlan/non-vlan is probably not necessary? open issues: 1.should we use "VmNetwork"? (or "allow/able to run VMs" you name it) would it be a DC or a Cluster property? 2.should we implicitly set bridge/bridgeless when attaching a network with setupnetworks? 3.nickless networks - was that planned for this version? VDSM support it already but we are missing the UI and Backend for it. Bridge ID: 1814335863 https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=1814335863 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: meeting.ics
Type: text/calendar
Size: 5113 bytes
Desc: not available
URL: 

From gchaplik at redhat.com  Wed Feb 22 09:08:55 2012
From: gchaplik at redhat.com (Gilad Chaplik)
Date: Wed, 22 Feb 2012 04:08:55 -0500 (EST)
Subject: Re: [Engine-devel] 'Import VM/Template More Than Once' feature
In-Reply-To: <4F43B7D7.8030804@redhat.com>
Message-ID: 

----- Original Message -----
> From: "Yaniv Kaul" 
> To: "Gilad Chaplik" 
> Cc: engine-devel at ovirt.org
> Sent: Tuesday, February 21, 2012 5:27:19 PM
> Subject: Re: [Engine-devel] 'Import VM/Template More Than Once' feature
> 
> On 02/21/2012 05:11 PM, Gilad Chaplik wrote:
> > Hello all,
> >
> > The 'Import VM/Template More Than Once' feature description can be
> > found under the following link:
> >
> > http://www.ovirt.org/wiki/Features/ImportMoreThanOnce
> >
> > Please review, and feel free to share your comments and thoughts.
> >
> > Thanks,
> > Gilad.
> >
> > _______________________________________________
> > Engine-devel mailing list
> > Engine-devel at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/engine-devel
> 
> 1. Missing high level (user level) summary. For example, what does it
> mean that a VM already exist in the setup? If I had a VM with 10GB
> disk,
> without an OS installed, then exported it, then installed an OS into
> it
> (so now the disk is a bit full, as opposed to the emptied exported
> one).
> Does it means that an identical entity already exist in the setup or
> not? (think of overwriting files).

not. We need to think how to portray it to the user, it's a bit tricky.

> The design should (ALWAYS?) start with the user flow.

I'll change it.

> 2. 'clone' doesn't strike me as a great parameter name in the API.
> Not
> in the UI either (but I don't have a better suggestion - yet).

Other suggestions are welcome, Omer has one.

> 3. What is the equivalent to 'suffix' in the API? Or do we expect to
> provide a name when we import? I'm not sure how the API works in the
> case of an existing VM, really. Only after we fetch via the API the
> fact
> it has an existing VM, we give it a new suffix?

In fact it's the same (just need to clarify that), the suffix is GUI only,
and the client passes the vm name in the vm's vm_name field (old_name + suffix).
In REST the vm name will be passed as part of the action parameters:

<action>
    <clone>true</clone> //This is the new value
    <vm>
        <name>new_name</name>
    </vm>
</action>

note: in case the clone flag is true the vm.name is mandatory.
(will update the wiki for all the above)

> 4. Small typos (I'll fix them later directly in the wiki, except for
> the
> mockups, which I can't fix).
> Y.
> 
> 
> 
From ecohen at redhat.com  Wed Feb 22 09:50:09 2012
From: ecohen at redhat.com (Einav Cohen)
Date: Wed, 22 Feb 2012 04:50:09 -0500 (EST)
Subject: [Engine-devel] gwt: new "userportal-gwtp" module, new "gwt-user-gwtp" mvn profile
In-Reply-To: <65bd0332-3de6-4290-9dc9-ebd496e45552@zmail04.collab.prod.int.phx2.redhat.com>
Message-ID: <39a939d2-542f-4c98-8bfc-80e9b2b27d6c@zmail04.collab.prod.int.phx2.redhat.com>

Hi,

A new oVirt UI module that is called "userportal-gwtp" has been introduced recently; it is written on top of the "gwt-common" infrastructure (on which the "web-admin" is based), and is targeted to eventually replace the existing "userportal" module.
This module is still in the works - it is not fully functional yet.

In order to build this new project (i.e., compile it to JavaScript), you need to "mvn build" oVirt with the "gwt-user-gwtp" profile (i.e., with "-Pgwt-user-gwtp").
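[Illustration] The full invocation would be something along the lines of "mvn clean install -Pgwt-user-gwtp" from the repository root, optionally adding "-DskipTests" to shorten the build; the exact goals and any extra profiles may vary per local setup.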
The "gwt-user-gwtp" mvn profile has been added to the oVirt gwt Jenkins job, so in case of committing a change in the oVirt code that breaks mvn compilation that includes the "gwt-user-gwtp" profile, a relevant notification from Jenkins will be sent to the relevant recipients (just like notifications regarding breaking oVirt compilation with the other two gwt profiles, i.e, "gwt-user" and "gwt-admin", are already being sent). ---- Thanks, Einav From simon at redhat.com Wed Feb 22 10:56:23 2012 From: simon at redhat.com (Simon Grinberg) Date: Wed, 22 Feb 2012 05:56:23 -0500 (EST) Subject: [Engine-devel] network - UI Sync meeting In-Reply-To: <69015606-afb8-48db-93af-ad7ae9400c58@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: > Follow-up meeting on setup networks UI. > issues to follow: > 1. can VDSM attach many non-vlan and many vlan networks to a single > nic? (Dan - please reply if its doable) Yes and no Easy to do: One non-Vlan + many vlan tagged If all the non-Vlan are also bridles then yes: Either using iprout or alias IP http://www.shorewall.net/Shorewall_and_Aliased_Interfaces.html Alias IP can't be enslaved to a bridge thus if deciding to use alias IP then can also do: 1 Non VLAN tagged + many tagged to many bridge-less non-tagged You probably can also create many iprout muti IP on a single bridge but not sure about the benefit of this - and still need verification. > 2. if yes is the UI breakdown of vlan/non-vlan is probably not > necessary? Like you can see above more then necessary, may even need a third category :( > open issues: > 1.should we use "VmNetwork"? (or "allow/able to run VMs" you name it) > would it be a DC or a Cluster property? I tend to say yes. Cluster level. PS. Network may have few attributes. Example: Management + VMs on one network Storage on the other Display + VMs + mamagement + storage (Small setup, with single network, Local storage DC) etc.. > 2.should we implicitly set bridge/bridgeless when attaching a network > with setupnetworks? Depending on the VM-networks tag if we have it. Best way is to be add by vdsm as needed (When creating the first VM the uses it) but this raises some flows that not sure you want to have for 3.1 time frame > 3.nickless networks - was that planned for this version? VDSM support > it already but we are missing the UI and Backend for it. From iheim at redhat.com Wed Feb 22 11:23:19 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 22 Feb 2012 13:23:19 +0200 Subject: [Engine-devel] gwt: new "userportal-gwtp" module, new "gwt-user-gwtp" mvn profile In-Reply-To: <39a939d2-542f-4c98-8bfc-80e9b2b27d6c@zmail04.collab.prod.int.phx2.redhat.com> References: <39a939d2-542f-4c98-8bfc-80e9b2b27d6c@zmail04.collab.prod.int.phx2.redhat.com> Message-ID: <4F44D027.7080001@redhat.com> On 02/22/2012 11:50 AM, Einav Cohen wrote: > Hi, > > A new oVirt UI module that is called "userportal-gwtp" has been introduced recently; it is written on top of the "gwt-common" infrastructure (on which the "web-admin" is based), and is targeted to eventually replace the existing "userportal" module. > This module is still on the works - it is not fully functional yet. > > In order to build this new project (i.e., compile it to java-script), you need to "mvn build" oVirt with the "gwt-user-gwtp" profile (i.e., with "-Pgwt-user-gwtp"). 
> > The "gwt-user-gwtp" mvn profile has been added to the oVirt gwt Jenkins job, so in case of committing a change in the oVirt code that breaks mvn compilation that includes the "gwt-user-gwtp" profile, a relevant notification from Jenkins will be sent to the relevant recipients (just like notifications regarding breaking oVirt compilation with the other two gwt profiles, i.e, "gwt-user" and "gwt-admin", are already being sent). since this is time and cpu consuming - how about making the default build compile only the firefox permutation, and make others optional via a profile? From mburns at redhat.com Wed Feb 22 14:57:38 2012 From: mburns at redhat.com (Mike Burns) Date: Wed, 22 Feb 2012 09:57:38 -0500 Subject: [Engine-devel] Support for stateless nodes Message-ID: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> There has been a lot of interest in being able to run stateless Nodes with ovirt-engine. ovirt-node has designed a way [1] to achieve this on the node side, but we need input from the engine and vdsm teams to see if we're missing some requirement or if there needs to be changes on the engine/vdsm side to achieve this. As it currently stands, every time you reboot an ovirt-node that is stateless, it would require manually removing the host in engine, then re-registering/approving it again in engine. Any thoughts, concerns, input on how to solve this? Thanks Mike [1] http://ovirt.org/wiki/Node_Stateless From dfediuck at redhat.com Wed Feb 22 15:33:15 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 22 Feb 2012 17:33:15 +0200 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> Message-ID: <4F450ABB.30009@redhat.com> On 22/02/12 16:57, Mike Burns wrote: > There has been a lot of interest in being able to run stateless Nodes > with ovirt-engine. ovirt-node has designed a way [1] to achieve this on > the node side, but we need input from the engine and vdsm teams to see > if we're missing some requirement or if there needs to be changes on the > engine/vdsm side to achieve this. > > As it currently stands, every time you reboot an ovirt-node that is > stateless, it would require manually removing the host in engine, then > re-registering/approving it again in engine. > > Any thoughts, concerns, input on how to solve this? > > Thanks > > Mike > > [1] http://ovirt.org/wiki/Node_Stateless > Some points need to be considered; - Installation issues * Just stating the obvious, which is users need to remove-add the host on every reboot. This will not make this feature a lovable one from user's point of view. * During initial boot, vdsm-reg configures the networking and creates a management network bridge. This is a very delicate process which may fail due to networking issues such as resolution, routing, etc. So re-doing this on every boot increases the chances of loosing a node due to network problems. * CA pollution; generating a certificate on each reboot for each node will create a huge number of certificates in the engine side, which eventually may damage the CA. (Unsure if there's a limitation to certificates number, but having hundreds of junk cert's can't be good). * Today there's a supported flow that for nodes with password, the user is allowed to use the "add host" scenario. For stateless, it means re-configuring a password on every boot... 
- Other issues * Local storage; so far we were able to define a local storage in ovirt node. Stateless will block this ability. * Node upgrade; currently it's possible to upgrade a node from the engine. In stateless it will error, since no where to d/l the iso file to. * Collecting information; core dumps and logging may not be available due to lack of space? Or will it cause kernel panic if all space is consumed? -- /d "Hi, my name is Any Key. Please don't hit me!" From mburns at redhat.com Wed Feb 22 15:58:48 2012 From: mburns at redhat.com (Mike Burns) Date: Wed, 22 Feb 2012 10:58:48 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F450ABB.30009@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> Message-ID: <1329926328.6140.30.camel@beelzebub.mburnsfire.net> On Wed, 2012-02-22 at 17:33 +0200, Doron Fediuck wrote: > On 22/02/12 16:57, Mike Burns wrote: > > There has been a lot of interest in being able to run stateless Nodes > > with ovirt-engine. ovirt-node has designed a way [1] to achieve this on > > the node side, but we need input from the engine and vdsm teams to see > > if we're missing some requirement or if there needs to be changes on the > > engine/vdsm side to achieve this. > > > > As it currently stands, every time you reboot an ovirt-node that is > > stateless, it would require manually removing the host in engine, then > > re-registering/approving it again in engine. > > > > Any thoughts, concerns, input on how to solve this? > > > > Thanks > > > > Mike > > > > [1] http://ovirt.org/wiki/Node_Stateless > > > > Some points need to be considered; > > - Installation issues > > * Just stating the obvious, which is users need > to remove-add the host on every reboot. This will > not make this feature a lovable one from user's point of view. Yes, this is something that will cause this to be a non-starter. We'd need to change something in the engine/vdsm to make it smoother. Perhaps, a flag in engine on the host saying that it's stateless. Then if a host comes up with the same information, but no certs, etc, it would validate some other embedded key (TPM, key embedded in the node itself), and auto-approve it to be the same state as the previous boot > > * During initial boot, vdsm-reg configures the networking > and creates a management network bridge. This is a very > delicate process which may fail due to networking issues > such as resolution, routing, etc. So re-doing this on > every boot increases the chances of loosing a node due > to network problems. vdsm-reg runs on *every* boot anyway and renames the bridge. This is something that was debated previously, but it was decided to re-run it every boot. > > * CA pollution; generating a certificate on each reboot > for each node will create a huge number of certificates > in the engine side, which eventually may damage the CA. > (Unsure if there's a limitation to certificates number, > but having hundreds of junk cert's can't be good). We could have vdsm/engine store the certs on the engine side, and on boot, after validating the host (however that is done), it will load the certs onto the node machine. > > * Today there's a supported flow that for nodes with > password, the user is allowed to use the "add host" > scenario. For stateless, it means re-configuring a password > on every boot... Stateless is really targeted for a PXE environment. There is a supported kernel param that can be set that will set this password. 
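To sketch what that could look like in a PXE profile: management_server= is mentioned later in this thread as the parameter pointing a node at its engine, while the password parameter name and the generic live-image arguments below are assumptions that should be checked against the oVirt Node boot-parameter documentation:

    # pxelinux.cfg sketch for a stateless oVirt Node
    # management_server= comes from this thread; adminpw= is an assumed name
    LABEL ovirt-node-stateless
      KERNEL vmlinuz0
      APPEND initrd=initrd0.img <usual oVirt Node live-image boot args> management_server=engine.example.com:443 adminpw=<crypted password hash>
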
Also, if we follow the design mentioned ^^, then it's not an issue since the host will auto-approve itself when it connects > > - Other issues > > * Local storage; so far we were able to define a local > storage in ovirt node. Stateless will block this ability. Yes, this would be unavailable if you're running stateless. I think that's a fine tradeoff since people want the host to be diskless. > > * Node upgrade; currently it's possible to upgrade a node > from the engine. In stateless it will error, since no where > to d/l the iso file to. Upgrade is handled easily by rebooting the host after updating the pxe server > > * Collecting information; core dumps and logging may not > be available due to lack of space? Or will it cause kernel > panic if all space is consumed? A valid concern, but a stateless environment would likely have collectd/rsyslog/netconsole servers running elsewhere that will collect the logs. kdumps can be configured to dump remotely as well. > Another concern raised is swap and overcommit. First version would likely disable swap completely. This would disable overcommit as well. Future versions could enable a local disk to be used completely for swap, but that is another tradeoff that people would need to evaluate when choosing between stateless and stateful installs. Mike From dfediuck at redhat.com Wed Feb 22 15:59:54 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 22 Feb 2012 17:59:54 +0200 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F450ABB.30009@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> Message-ID: <4F4510FA.4050107@redhat.com> On 22/02/12 17:33, Doron Fediuck wrote: > On 22/02/12 16:57, Mike Burns wrote: >> There has been a lot of interest in being able to run stateless Nodes >> with ovirt-engine. ovirt-node has designed a way [1] to achieve this on >> the node side, but we need input from the engine and vdsm teams to see >> if we're missing some requirement or if there needs to be changes on the >> engine/vdsm side to achieve this. >> >> As it currently stands, every time you reboot an ovirt-node that is >> stateless, it would require manually removing the host in engine, then >> re-registering/approving it again in engine. >> >> Any thoughts, concerns, input on how to solve this? >> >> Thanks >> >> Mike >> >> [1] http://ovirt.org/wiki/Node_Stateless >> > > Some points need to be considered; > > - Installation issues > > * Just stating the obvious, which is users need > to remove-add the host on every reboot. This will > not make this feature a lovable one from user's point of view. > > * During initial boot, vdsm-reg configures the networking > and creates a management network bridge. This is a very > delicate process which may fail due to networking issues > such as resolution, routing, etc. So re-doing this on > every boot increases the chances of loosing a node due > to network problems. > > * CA pollution; generating a certificate on each reboot > for each node will create a huge number of certificates > in the engine side, which eventually may damage the CA. > (Unsure if there's a limitation to certificates number, > but having hundreds of junk cert's can't be good). > > * Today there's a supported flow that for nodes with > password, the user is allowed to use the "add host" > scenario. For stateless, it means re-configuring a password > on every boot... 
> > - Other issues > > * Local storage; so far we were able to define a local > storage in ovirt node. Stateless will block this ability. > > * Node upgrade; currently it's possible to upgrade a node > from the engine. In stateless it will error, since no where > to d/l the iso file to. > > * Collecting information; core dumps and logging may not > be available due to lack of space? Or will it cause kernel > panic if all space is consumed? > One more question / thing to consider; Currently when you manually install a node, you need to configure the management-server's address. Will I need to re-do it on every boot of a stateless node? -- /d "Air conditioned environment - Do NOT open Windows!" From lhornyak at redhat.com Wed Feb 22 16:00:20 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Wed, 22 Feb 2012 11:00:20 -0500 (EST) Subject: [Engine-devel] reviewer needed for schedulerutil cleanup In-Reply-To: <7768efc6-3968-4078-8198-930b84006ead@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <536d84b4-4641-4925-a1de-2f1bc5c76536@zmail01.collab.prod.int.phx2.redhat.com> hi, As we previously agreed, I renamed the SchedulerUtil to SchedulerManager and removed the OnTimerAnnotation, can someone please review? http://gerrit.ovirt.org/#change,2140 Thanks, Laszlo From pmyers at redhat.com Wed Feb 22 16:06:40 2012 From: pmyers at redhat.com (Perry Myers) Date: Wed, 22 Feb 2012 11:06:40 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F450ABB.30009@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> Message-ID: <4F451290.3090103@redhat.com> > * Just stating the obvious, which is users need > to remove-add the host on every reboot. This will > not make this feature a lovable one from user's point of view. I think the point mburns is trying to make in his initial email is that we're going to need to do some joint work between node and vdsm teams to change the registration process so that this is no longer necessary. It will require some redesigning of the registration process > * During initial boot, vdsm-reg configures the networking > and creates a management network bridge. This is a very > delicate process which may fail due to networking issues > such as resolution, routing, etc. So re-doing this on > every boot increases the chances of loosing a node due > to network problems. Well, if the network is busted which leads to the bridge rename failing, wouldn't the fact that the network is broken cause other problems anyhow? So I don't see this as a problem. If your network doesn't work properly, don't expect hosts in the network to subsequently work properly. As an aside, why is reverse DNS lookup a requirement? If we remove that it makes things a lot easier, no? > * CA pollution; generating a certificate on each reboot > for each node will create a huge number of certificates > in the engine side, which eventually may damage the CA. > (Unsure if there's a limitation to certificates number, > but having hundreds of junk cert's can't be good). I don't think we should regenerate a new certificate on each boot. 
I think we need a way for 'an already registered host to retrieve it's certificate from the oVirt Engine server' Using an embedded encryption key (if you trust your mgmt network or are booting from embedded flash), or for the paranoid a key stored in TPM can be used to have vdsm safely retrieve this from the oVirt Engine server on each boot so that it's not required to regenerate/reregister on each boot > * Today there's a supported flow that for nodes with > password, the user is allowed to use the "add host" > scenario. For stateless, it means re-configuring a password > on every boot... This flow would still be applicable. We are going to allow setting of the admin password embedded in the core ISO via an offline process. Once vdsm is fixed to use a non-root account for installation flow, this is no longer a problem Also, if we (as described above) make registrations persistent across reboots by changing the registration flow a bit, then the install user password only need be set for the initial boot anyhow. Therefore I think as a requirement for stateless oVirt Node, we must have as a prerequsite removing root account usage for registration/installation > - Other issues > > * Local storage; so far we were able to define a local > storage in ovirt node. Stateless will block this ability. It shouldn't. The Node should be able to automatically scan locally attached disks to look for a well defined VG or partition label and based on that automatically activate/mount Stateless doesn't imply diskless. It is a requirement even for stateless node usage to be able to leverage locally attached disks both for VM storage and also for Swap. > * Node upgrade; currently it's possible to upgrade a node > from the engine. In stateless it will error, since no where > to d/l the iso file to. Upgrades are no longer needed with stateless. To upgrade a stateless node all you need to do is 'reboot from a newer image'. i.e. all upgrades would be done via PXE server image replacement. So the flow of 'upload ISO to running oVirt Node' is no longer even necessary > * Collecting information; core dumps and logging may not > be available due to lack of space? Or will it cause kernel > panic if all space is consumed? We already provide ability to send kdumps to remote ssh/NFS location and already provide the ability to use both collectd and rsyslogs to pipe logs/stats to remote server(s). Local logs can be set to logrotate to a reasonable size so that local RAM FS always contains recent log information for quick triage, but long term historical logging would be maintained on the rsyslog server Perry From mburns at redhat.com Wed Feb 22 16:08:12 2012 From: mburns at redhat.com (Mike Burns) Date: Wed, 22 Feb 2012 11:08:12 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F4510FA.4050107@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <4F4510FA.4050107@redhat.com> Message-ID: <1329926892.6140.39.camel@beelzebub.mburnsfire.net> On Wed, 2012-02-22 at 17:59 +0200, Doron Fediuck wrote: > On 22/02/12 17:33, Doron Fediuck wrote: > > On 22/02/12 16:57, Mike Burns wrote: > >> There has been a lot of interest in being able to run stateless Nodes > >> with ovirt-engine. ovirt-node has designed a way [1] to achieve this on > >> the node side, but we need input from the engine and vdsm teams to see > >> if we're missing some requirement or if there needs to be changes on the > >> engine/vdsm side to achieve this. 
> >> > >> As it currently stands, every time you reboot an ovirt-node that is > >> stateless, it would require manually removing the host in engine, then > >> re-registering/approving it again in engine. > >> > >> Any thoughts, concerns, input on how to solve this? > >> > >> Thanks > >> > >> Mike > >> > >> [1] http://ovirt.org/wiki/Node_Stateless > >> > > > > Some points need to be considered; > > > > - Installation issues > > > > * Just stating the obvious, which is users need > > to remove-add the host on every reboot. This will > > not make this feature a lovable one from user's point of view. > > > > * During initial boot, vdsm-reg configures the networking > > and creates a management network bridge. This is a very > > delicate process which may fail due to networking issues > > such as resolution, routing, etc. So re-doing this on > > every boot increases the chances of loosing a node due > > to network problems. > > > > * CA pollution; generating a certificate on each reboot > > for each node will create a huge number of certificates > > in the engine side, which eventually may damage the CA. > > (Unsure if there's a limitation to certificates number, > > but having hundreds of junk cert's can't be good). > > > > * Today there's a supported flow that for nodes with > > password, the user is allowed to use the "add host" > > scenario. For stateless, it means re-configuring a password > > on every boot... > > > > - Other issues > > > > * Local storage; so far we were able to define a local > > storage in ovirt node. Stateless will block this ability. > > > > * Node upgrade; currently it's possible to upgrade a node > > from the engine. In stateless it will error, since no where > > to d/l the iso file to. > > > > * Collecting information; core dumps and logging may not > > be available due to lack of space? Or will it cause kernel > > panic if all space is consumed? > > > > One more question / thing to consider; > Currently when you manually install a node, > you need to configure the management-server's address. > Will I need to re-do it on every boot of a stateless node? As answered in the other response, there are kernel command line parameters to set the management_server. Since this will likely be in a pxe environment, setting the pxe profile to include management_server= should be fine. Another solution could be to setup a specific DNS SRV record that points to the ovirt-engine and have node automatically query that for the location. Mike From info at je-eigen-domein.nl Wed Feb 22 16:09:43 2012 From: info at je-eigen-domein.nl (Floris Bos / Maxnet) Date: Wed, 22 Feb 2012 17:09:43 +0100 Subject: [Engine-devel] Support for stateless nodes In-Reply-To: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> Message-ID: <4F451347.6030903@je-eigen-domein.nl> On 02/22/2012 03:57 PM, Mike Burns wrote: > There has been a lot of interest in being able to run stateless Nodes > with ovirt-engine. ovirt-node has designed a way [1] to achieve this on > the node side, but we need input from the engine and vdsm teams to see > if we're missing some requirement or if there needs to be changes on the > engine/vdsm side to achieve this. > > As it currently stands, every time you reboot an ovirt-node that is > stateless, it would require manually removing the host in engine, then > re-registering/approving it again in engine. > > Any thoughts, concerns, input on how to solve this? 
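As a concrete illustration of the DNS SRV suggestion above, a record like the following could let a node discover its engine at boot; the service label is an assumption (no such convention is defined in this thread), and the port/target are examples only:

    ; zone-file sketch - _ovirt-engine._tcp is an assumed service label
    _ovirt-engine._tcp.example.com. 3600 IN SRV 0 5 443 engine.example.com.
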
Perhaps the node can perform some very basic form of authentication based on IP-address and a key derived from hardware. I see that TPM is already mentioned on the wiki, but even on systems without it, one could simply take a hash of all the MAC-addresses of the system, the CPU serial and the BIOS info from /sys/class/dmi and use that as a form of password. It's better than nothing, or approving nodes all the time (and how do you know if the node you are approving is really THE node?) -- Yours sincerely, Floris Bos From dfediuck at redhat.com Wed Feb 22 16:10:50 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 22 Feb 2012 18:10:50 +0200 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <1329926328.6140.30.camel@beelzebub.mburnsfire.net> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <1329926328.6140.30.camel@beelzebub.mburnsfire.net> Message-ID: <4F45138A.9060407@redhat.com> On 22/02/12 17:58, Mike Burns wrote: > On Wed, 2012-02-22 at 17:33 +0200, Doron Fediuck wrote: >> On 22/02/12 16:57, Mike Burns wrote: >>> There has been a lot of interest in being able to run stateless Nodes >>> with ovirt-engine. ovirt-node has designed a way [1] to achieve this on >>> the node side, but we need input from the engine and vdsm teams to see >>> if we're missing some requirement or if there needs to be changes on the >>> engine/vdsm side to achieve this. >>> >>> As it currently stands, every time you reboot an ovirt-node that is >>> stateless, it would require manually removing the host in engine, then >>> re-registering/approving it again in engine. >>> >>> Any thoughts, concerns, input on how to solve this? >>> >>> Thanks >>> >>> Mike >>> >>> [1] http://ovirt.org/wiki/Node_Stateless >>> >> >> Some points need to be considered; >> >> - Installation issues >> >> * Just stating the obvious, which is users need >> to remove-add the host on every reboot. This will >> not make this feature a lovable one from user's point of view. > > Yes, this is something that will cause this to be a non-starter. We'd > need to change something in the engine/vdsm to make it smoother. > Perhaps, a flag in engine on the host saying that it's stateless. Then > if a host comes up with the same information, but no certs, etc, it > would validate some other embedded key (TPM, key embedded in the node > itself), and auto-approve it to be the same state as the previous boot > This will require some thinking. >> >> * During initial boot, vdsm-reg configures the networking >> and creates a management network bridge. This is a very >> delicate process which may fail due to networking issues >> such as resolution, routing, etc. So re-doing this on >> every boot increases the chances of loosing a node due >> to network problems. > > vdsm-reg runs on *every* boot anyway and renames the bridge. This is > something that was debated previously, but it was decided to re-run it > every boot. > Close, but not exactly; vdsm-reg will run on every boot, but if the relevant bridge is found, then networking is unchanged. >> >> * CA pollution; generating a certificate on each reboot >> for each node will create a huge number of certificates >> in the engine side, which eventually may damage the CA. >> (Unsure if there's a limitation to certificates number, >> but having hundreds of junk cert's can't be good). 
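A minimal sketch of the hardware-derived key idea suggested earlier in this message (hashing the MAC addresses plus DMI identifiers); which identifiers to trust, and how the result is presented to the engine, are design choices not settled in this thread:

    #!/bin/sh
    # derive a per-host key from stable hardware identifiers (MACs + DMI data)
    cat /sys/class/net/*/address \
        /sys/class/dmi/id/product_uuid /sys/class/dmi/id/board_serial 2>/dev/null \
        | sort | sha256sum | awk '{print $1}'
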
> > We could have vdsm/engine store the certs on the engine side, and on > boot, after validating the host (however that is done), it will load the > certs onto the node machine. > This is a security issue, since the key pair should be generated on the node. This will lead us back to your TPM suggestion, but (although I like it, ) will cause us to be tpm-dependent, not to mention a non-trivial implementation. >> >> * Today there's a supported flow that for nodes with >> password, the user is allowed to use the "add host" >> scenario. For stateless, it means re-configuring a password >> on every boot... > > Stateless is really targeted for a PXE environment. There is a > supported kernel param that can be set that will set this password. > Also, if we follow the design mentioned ^^, then it's not an issue since > the host will auto-approve itself when it connects > >> >> - Other issues >> >> * Local storage; so far we were able to define a local >> storage in ovirt node. Stateless will block this ability. > > Yes, this would be unavailable if you're running stateless. I think > that's a fine tradeoff since people want the host to be diskless. >> >> * Node upgrade; currently it's possible to upgrade a node >> from the engine. In stateless it will error, since no where >> to d/l the iso file to. > > Upgrade is handled easily by rebooting the host after updating the pxe > server > >> >> * Collecting information; core dumps and logging may not >> be available due to lack of space? Or will it cause kernel >> panic if all space is consumed? > > A valid concern, but a stateless environment would likely have > collectd/rsyslog/netconsole servers running elsewhere that will collect > the logs. kdumps can be configured to dump remotely as well. This will also need some work on the vdsm side. >> > > Another concern raised is swap and overcommit. First version would > likely disable swap completely. This would disable overcommit as well. > Future versions could enable a local disk to be used completely for > swap, but that is another tradeoff that people would need to evaluate > when choosing between stateless and stateful installs. Indeed so- completely forgot about swap... > > Mike > -- /d ?Funny,? he intoned funereally, ?how just when you think life can't possibly get any worse it suddenly does.? --Douglas Adams, The Hitchhiker's Guide to the Galaxy From pmyers at redhat.com Wed Feb 22 16:21:49 2012 From: pmyers at redhat.com (Perry Myers) Date: Wed, 22 Feb 2012 11:21:49 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F45138A.9060407@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <1329926328.6140.30.camel@beelzebub.mburnsfire.net> <4F45138A.9060407@redhat.com> Message-ID: <4F45161D.6010208@redhat.com> >>> >>> * CA pollution; generating a certificate on each reboot >>> for each node will create a huge number of certificates >>> in the engine side, which eventually may damage the CA. >>> (Unsure if there's a limitation to certificates number, >>> but having hundreds of junk cert's can't be good). >> >> We could have vdsm/engine store the certs on the engine side, and on >> boot, after validating the host (however that is done), it will load the >> certs onto the node machine. >> > This is a security issue, since the key pair should be > generated on the node. 
This will lead us back to your TPM > suggestion, but (although I like it, ) will cause us > to be tpm-dependent, not to mention a non-trivial implementation. Not necessarily 1. generate cert on oVirt Node 2. generate symmetric key and embed in TPM or use embedded symmetric key (for secured network model) 3. encrypt certs w/ symmetric key 4. push encryted cert to oVirt Engine On reboot 1. download encrypted cert from OE 2. use either embedded symmetric key or retrieve TPM based symmetric key and use to decrypt cert So no dependency on TPM, but the security is definitely much better if you have it. Use cases like this are one of the fundamental reasons why TPM exists :) From mburns at redhat.com Wed Feb 22 16:25:36 2012 From: mburns at redhat.com (Mike Burns) Date: Wed, 22 Feb 2012 11:25:36 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F45138A.9060407@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <1329926328.6140.30.camel@beelzebub.mburnsfire.net> <4F45138A.9060407@redhat.com> Message-ID: <1329927936.6140.46.camel@beelzebub.mburnsfire.net> On Wed, 2012-02-22 at 18:10 +0200, Doron Fediuck wrote: > On 22/02/12 17:58, Mike Burns wrote: > > On Wed, 2012-02-22 at 17:33 +0200, Doron Fediuck wrote: > >> On 22/02/12 16:57, Mike Burns wrote: > > > > vdsm-reg runs on *every* boot anyway and renames the bridge. This is > > something that was debated previously, but it was decided to re-run it > > every boot. > > > Close, but not exactly; vdsm-reg will run on every boot, but > if the relevant bridge is found, then networking is unchanged. Yes, that's true, but vdsm-reg doesn't persist the changes it makes. So on the next boot, it will never find the management bridge it's looking for. So while the condition is there to skip it, it will actually never find the bridge and will run the rename every boot. From dfediuck at redhat.com Wed Feb 22 16:40:59 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 22 Feb 2012 18:40:59 +0200 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F45161D.6010208@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <1329926328.6140.30.camel@beelzebub.mburnsfire.net> <4F45138A.9060407@redhat.com> <4F45161D.6010208@redhat.com> Message-ID: <4F451A9B.9090303@redhat.com> On 22/02/12 18:21, Perry Myers wrote: >>>> >>>> * CA pollution; generating a certificate on each reboot >>>> for each node will create a huge number of certificates >>>> in the engine side, which eventually may damage the CA. >>>> (Unsure if there's a limitation to certificates number, >>>> but having hundreds of junk cert's can't be good). >>> >>> We could have vdsm/engine store the certs on the engine side, and on >>> boot, after validating the host (however that is done), it will load the >>> certs onto the node machine. >>> >> This is a security issue, since the key pair should be >> generated on the node. This will lead us back to your TPM >> suggestion, but (although I like it, ) will cause us >> to be tpm-dependent, not to mention a non-trivial implementation. > > Not necessarily > > 1. generate cert on oVirt Node > 2. generate symmetric key and embed in TPM or use embedded symmetric > key (for secured network model) IIUC in this step you're using TPM. What if there is no TPM (at all)? > 3. encrypt certs w/ symmetric key > 4. push encryted cert to oVirt Engine > > On reboot > > 1. download encrypted cert from OE > 2. 
use either embedded symmetric key or retrieve TPM based symmetric > key and use to decrypt cert > > So no dependency on TPM, but the security is definitely much better if > you have it. Use cases like this are one of the fundamental reasons why > TPM exists :) > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel -- /d "Ford," he said, "you're turning into a penguin. Stop it." --Douglas Adams, The Hitchhiker's Guide to the Galaxy From pmyers at redhat.com Wed Feb 22 16:42:50 2012 From: pmyers at redhat.com (Perry Myers) Date: Wed, 22 Feb 2012 11:42:50 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F451A9B.9090303@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <1329926328.6140.30.camel@beelzebub.mburnsfire.net> <4F45138A.9060407@redhat.com> <4F45161D.6010208@redhat.com> <4F451A9B.9090303@redhat.com> Message-ID: <4F451B0A.3000300@redhat.com> On 02/22/2012 11:40 AM, Doron Fediuck wrote: > On 22/02/12 18:21, Perry Myers wrote: >>>>> >>>>> * CA pollution; generating a certificate on each reboot >>>>> for each node will create a huge number of certificates >>>>> in the engine side, which eventually may damage the CA. >>>>> (Unsure if there's a limitation to certificates number, >>>>> but having hundreds of junk cert's can't be good). >>>> >>>> We could have vdsm/engine store the certs on the engine side, and on >>>> boot, after validating the host (however that is done), it will load the >>>> certs onto the node machine. >>>> >>> This is a security issue, since the key pair should be >>> generated on the node. This will lead us back to your TPM >>> suggestion, but (although I like it, ) will cause us >>> to be tpm-dependent, not to mention a non-trivial implementation. >> >> Not necessarily >> >> 1. generate cert on oVirt Node >> 2. generate symmetric key and embed in TPM or use embedded symmetric >> key (for secured network model) > IIUC in this step you're using TPM. > What if there is no TPM (at all)? That statement had an 'or' in it. Either you use TPM with a self generated key 'or' you use a key that is preembedded in the image on either a node by node basis or per site. >> 3. encrypt certs w/ symmetric key >> 4. push encryted cert to oVirt Engine >> >> On reboot >> >> 1. download encrypted cert from OE >> 2. use either embedded symmetric key or retrieve TPM based symmetric >> key and use to decrypt cert >> >> So no dependency on TPM, but the security is definitely much better if >> you have it. 
Use cases like this are one of the fundamental reasons why >> TPM exists :) >> _______________________________________________ >> node-devel mailing list >> node-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/node-devel > > From dfediuck at redhat.com Wed Feb 22 16:47:31 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 22 Feb 2012 18:47:31 +0200 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <1329927936.6140.46.camel@beelzebub.mburnsfire.net> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <1329926328.6140.30.camel@beelzebub.mburnsfire.net> <4F45138A.9060407@redhat.com> <1329927936.6140.46.camel@beelzebub.mburnsfire.net> Message-ID: <4F451C23.4070908@redhat.com> On 22/02/12 18:25, Mike Burns wrote: > On Wed, 2012-02-22 at 18:10 +0200, Doron Fediuck wrote: >> On 22/02/12 17:58, Mike Burns wrote: >>> On Wed, 2012-02-22 at 17:33 +0200, Doron Fediuck wrote: >>>> On 22/02/12 16:57, Mike Burns wrote: > > > >>> >>> vdsm-reg runs on *every* boot anyway and renames the bridge. This is >>> something that was debated previously, but it was decided to re-run it >>> every boot. >>> >> Close, but not exactly; vdsm-reg will run on every boot, but >> if the relevant bridge is found, then networking is unchanged. > > Yes, that's true, but vdsm-reg doesn't persist the changes it makes. So > on the next boot, it will never find the management bridge it's looking > for. So while the condition is there to skip it, it will actually never > find the bridge and will run the rename every boot. > Sounds like a bug to me. Last time I handled it, network configuration was persisted. You can see in line 102 here: http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm_reg/vdsm-reg-setup.in;h=c51f40c53f5303cfb447cedf5bc0c16228cb876d;hb=HEAD -- /d "Who's General Failure and why's he reading my disk?" From dfediuck at redhat.com Wed Feb 22 16:50:15 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 22 Feb 2012 18:50:15 +0200 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <1329926892.6140.39.camel@beelzebub.mburnsfire.net> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <4F4510FA.4050107@redhat.com> <1329926892.6140.39.camel@beelzebub.mburnsfire.net> Message-ID: <4F451CC7.4050909@redhat.com> On 22/02/12 18:08, Mike Burns wrote: > On Wed, 2012-02-22 at 17:59 +0200, Doron Fediuck wrote: >> On 22/02/12 17:33, Doron Fediuck wrote: >>> On 22/02/12 16:57, Mike Burns wrote: >>>> There has been a lot of interest in being able to run stateless Nodes >>>> with ovirt-engine. ovirt-node has designed a way [1] to achieve this on >>>> the node side, but we need input from the engine and vdsm teams to see >>>> if we're missing some requirement or if there needs to be changes on the >>>> engine/vdsm side to achieve this. >>>> >>>> As it currently stands, every time you reboot an ovirt-node that is >>>> stateless, it would require manually removing the host in engine, then >>>> re-registering/approving it again in engine. >>>> >>>> Any thoughts, concerns, input on how to solve this? >>>> >>>> Thanks >>>> >>>> Mike >>>> >>>> [1] http://ovirt.org/wiki/Node_Stateless >>>> >>> >>> Some points need to be considered; >>> >>> - Installation issues >>> >>> * Just stating the obvious, which is users need >>> to remove-add the host on every reboot. This will >>> not make this feature a lovable one from user's point of view. 
>>> >>> * During initial boot, vdsm-reg configures the networking >>> and creates a management network bridge. This is a very >>> delicate process which may fail due to networking issues >>> such as resolution, routing, etc. So re-doing this on >>> every boot increases the chances of loosing a node due >>> to network problems. >>> >>> * CA pollution; generating a certificate on each reboot >>> for each node will create a huge number of certificates >>> in the engine side, which eventually may damage the CA. >>> (Unsure if there's a limitation to certificates number, >>> but having hundreds of junk cert's can't be good). >>> >>> * Today there's a supported flow that for nodes with >>> password, the user is allowed to use the "add host" >>> scenario. For stateless, it means re-configuring a password >>> on every boot... >>> >>> - Other issues >>> >>> * Local storage; so far we were able to define a local >>> storage in ovirt node. Stateless will block this ability. >>> >>> * Node upgrade; currently it's possible to upgrade a node >>> from the engine. In stateless it will error, since no where >>> to d/l the iso file to. >>> >>> * Collecting information; core dumps and logging may not >>> be available due to lack of space? Or will it cause kernel >>> panic if all space is consumed? >>> >> >> One more question / thing to consider; >> Currently when you manually install a node, >> you need to configure the management-server's address. >> Will I need to re-do it on every boot of a stateless node? > > As answered in the other response, there are kernel command line > parameters to set the management_server. Since this will likely be in a > pxe environment, setting the pxe profile to include > management_server= should be fine. > I agree it's a valid solution as long as you assume this is relevant for PXE only use case. > Another solution could be to setup a specific DNS SRV record that points > to the ovirt-engine and have node automatically query that for the > location. This was discussed in the past and for some reason not implemented. > > Mike > > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel -- /d "Ford," he said, "you're turning into a penguin. Stop it." --Douglas Adams, The Hitchhiker's Guide to the Galaxy From pmyers at redhat.com Wed Feb 22 16:54:05 2012 From: pmyers at redhat.com (Perry Myers) Date: Wed, 22 Feb 2012 11:54:05 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F451CC7.4050909@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <4F4510FA.4050107@redhat.com> <1329926892.6140.39.camel@beelzebub.mburnsfire.net> <4F451CC7.4050909@redhat.com> Message-ID: <4F451DAD.9090706@redhat.com> >> As answered in the other response, there are kernel command line >> parameters to set the management_server. Since this will likely be in a >> pxe environment, setting the pxe profile to include >> management_server= should be fine. >> > I agree it's a valid solution as long as you assume this is relevant > for PXE only use case. Not necessarily... Take the ISO/USB Stick and you can embed the kargs into the ISO/USB itself so that it always boots with that mgmt server arg This actually also enables use of 'stateless' combined with static IP addressing as well. As you can create a USB Stick and embed the kargs for the NIC configuration, rsyslog config, etc, etc. 
>> Another solution could be to setup a specific DNS SRV record that points >> to the ovirt-engine and have node automatically query that for the >> location. > This was discussed in the past and for some reason not implemented. Concerns about security, iirc. Assumption that someone could hijack the DNS SRV record and provide a man-in-the-middle oVirt Engine server. If you're paranoid about security, don't use DNS SRV of course, instead use hardcoded kargs as described above. But for some DNS SRV might be an ok option From dfediuck at redhat.com Wed Feb 22 17:10:37 2012 From: dfediuck at redhat.com (Doron Fediuck) Date: Wed, 22 Feb 2012 19:10:37 +0200 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F451290.3090103@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <4F451290.3090103@redhat.com> Message-ID: <4F45218D.1020205@redhat.com> On 22/02/12 18:06, Perry Myers wrote: >> * Just stating the obvious, which is users need >> to remove-add the host on every reboot. This will >> not make this feature a lovable one from user's point of view. > > I think the point mburns is trying to make in his initial email is that > we're going to need to do some joint work between node and vdsm teams to > change the registration process so that this is no longer necessary. > > It will require some redesigning of the registration process > I'm aware of it, and that's why I'm raising my concerns, so we'll have a (partial) task list ;) >> * During initial boot, vdsm-reg configures the networking >> and creates a management network bridge. This is a very >> delicate process which may fail due to networking issues >> such as resolution, routing, etc. So re-doing this on >> every boot increases the chances of loosing a node due >> to network problems. > > Well, if the network is busted which leads to the bridge rename failing, > wouldn't the fact that the network is broken cause other problems anyhow? > Perry, my point is that we're increasing the chances to get into these holes. Network is not busted most of the time, but occasionally there's a glitch and we'd like to stay away from it. I'm sure you know what I'm talking about. > So I don't see this as a problem. If your network doesn't work > properly, don't expect hosts in the network to subsequently work properly. See previous answer. > As an aside, why is reverse DNS lookup a requirement? If we remove that > it makes things a lot easier, no? > Not sure I'm the right guy to defend it, but in order to drop reverse-dns, you need to consider dropping SSL, LDAP and some other important shortcuts... >> * CA pollution; generating a certificate on each reboot >> for each node will create a huge number of certificates >> in the engine side, which eventually may damage the CA. >> (Unsure if there's a limitation to certificates number, >> but having hundreds of junk cert's can't be good). > > I don't think we should regenerate a new certificate on each boot. I > think we need a way for 'an already registered host to retrieve it's > certificate from the oVirt Engine server' > > Using an embedded encryption key (if you trust your mgmt network or are > booting from embedded flash), or for the paranoid a key stored in TPM > can be used to have vdsm safely retrieve this from the oVirt Engine > server on each boot so that it's not required to regenerate/reregister > on each boot > Thoughtful redesign needed here... 
>> * Today there's a supported flow that for nodes with >> password, the user is allowed to use the "add host" >> scenario. For stateless, it means re-configuring a password >> on every boot... > > This flow would still be applicable. We are going to allow setting of > the admin password embedded in the core ISO via an offline process. > Once vdsm is fixed to use a non-root account for installation flow, this > is no longer a problem This is not exactly vdsm. More like vdsm-bootstrap. > > Also, if we (as described above) make registrations persistent across > reboots by changing the registration flow a bit, then the install user > password only need be set for the initial boot anyhow. > > Therefore I think as a requirement for stateless oVirt Node, we must > have as a prerequsite removing root account usage for > registration/installation > This is both for vdsm and engine, and I'm not sure it's that trivial. >> - Other issues >> >> * Local storage; so far we were able to define a local >> storage in ovirt node. Stateless will block this ability. > > It shouldn't. The Node should be able to automatically scan locally > attached disks to look for a well defined VG or partition label and > based on that automatically activate/mount > > Stateless doesn't imply diskless. It is a requirement even for > stateless node usage to be able to leverage locally attached disks both > for VM storage and also for Swap. > Still, in a pure disk-less setup you will not have local storage. See also Mike's answer. >> * Node upgrade; currently it's possible to upgrade a node >> from the engine. In stateless it will error, since no where >> to d/l the iso file to. > > Upgrades are no longer needed with stateless. To upgrade a stateless > node all you need to do is 'reboot from a newer image'. i.e. all > upgrades would be done via PXE server image replacement. So the flow of > 'upload ISO to running oVirt Node' is no longer even necessary > This is assuming PXE only use-case. I'm not sure it's the only one. >> * Collecting information; core dumps and logging may not >> be available due to lack of space? Or will it cause kernel >> panic if all space is consumed? > > We already provide ability to send kdumps to remote ssh/NFS location and > already provide the ability to use both collectd and rsyslogs to pipe > logs/stats to remote server(s). Local logs can be set to logrotate to a > reasonable size so that local RAM FS always contains recent log > information for quick triage, but long term historical logging would be > maintained on the rsyslog server > This needs to be co-ordinated with log-collection, as well as the bootstrapping code. > Perry -- /d "Willyoupleasehelpmefixmykeyboard?Thespacebarisbroken!" From pmyers at redhat.com Wed Feb 22 17:23:36 2012 From: pmyers at redhat.com (Perry Myers) Date: Wed, 22 Feb 2012 12:23:36 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F45218D.1020205@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <4F451290.3090103@redhat.com> <4F45218D.1020205@redhat.com> Message-ID: <4F452498.7040105@redhat.com> >> Well, if the network is busted which leads to the bridge rename failing, >> wouldn't the fact that the network is broken cause other problems anyhow? >> > Perry, my point is that we're increasing the chances to get > into these holes. Network is not busted most of the time, but occasionally > there's a glitch and we'd like to stay away from it. 
I'm sure > you know what I'm talking about. What if oVirt Node creates ifcfg-ovirt (instead of ifcfg-br0) by default as part of bringing up the network on each boot (via either DHCP or kernel args)? Then vdsm would never need to do this. This particular step could be something that is turned on/off only if vdsm is installed so that it doesn't affect any non-oVirt usages of oVirt Node (Archipel, etc) >>> * Today there's a supported flow that for nodes with >>> password, the user is allowed to use the "add host" >>> scenario. For stateless, it means re-configuring a password >>> on every boot... >> >> This flow would still be applicable. We are going to allow setting of >> the admin password embedded in the core ISO via an offline process. >> Once vdsm is fixed to use a non-root account for installation flow, this >> is no longer a problem > This is not exactly vdsm. More like vdsm-bootstrap. ack >> >> Also, if we (as described above) make registrations persistent across >> reboots by changing the registration flow a bit, then the install user >> password only need be set for the initial boot anyhow. >> >> Therefore I think as a requirement for stateless oVirt Node, we must >> have as a prerequsite removing root account usage for >> registration/installation >> > This is both for vdsm and engine, and I'm not sure it's that trivial. Understood, but it's a requirement for other things. There are security considerations for requiring remote root ssh access as part of your core infrastructure. So this needs to be dealt with regardless. >>> - Other issues >>> >>> * Local storage; so far we were able to define a local >>> storage in ovirt node. Stateless will block this ability. >> >> It shouldn't. The Node should be able to automatically scan locally >> attached disks to look for a well defined VG or partition label and >> based on that automatically activate/mount >> >> Stateless doesn't imply diskless. It is a requirement even for >> stateless node usage to be able to leverage locally attached disks both >> for VM storage and also for Swap. >> > Still, in a pure disk-less setup you will not have local storage. > See also Mike's answer. Sure. If you want diskless specifically and then complain about lack of swap or local storage for VMs... then you might not be getting the point :) That has no bearing on the stateless discussion, except that the first pass of stateless might not allow config of local disk/swap to start with. We might do it incrementally >>> * Node upgrade; currently it's possible to upgrade a node >>> from the engine. In stateless it will error, since no where >>> to d/l the iso file to. >> >> Upgrades are no longer needed with stateless. To upgrade a stateless >> node all you need to do is 'reboot from a newer image'. i.e. all >> upgrades would be done via PXE server image replacement. So the flow of >> 'upload ISO to running oVirt Node' is no longer even necessary >> > This is assuming PXE only use-case. I'm not sure it's the only one. Nope... copy oVirt Node 2.2.3 to a USB stick (via ovirt-iso-to-usb-disk) boot a host with it Later... copy oVirt Node 2.2.4 to same USB stick (via ovirt-iso-to-usb-disk) boot the host with it Yes, it requires you to touch the USB stick. If you specifically want stateless (implying no 'installation' of the Node) and you won't be using PXE to run, then it involves legwork. But again, we're not planning to eliminate the current 'install' methods. 
Stateless is in addition to installing to disk, and using the 'iso upload' upgrade method >>> * Collecting information; core dumps and logging may not >>> be available due to lack of space? Or will it cause kernel >>> panic if all space is consumed? >> >> We already provide ability to send kdumps to remote ssh/NFS location and >> already provide the ability to use both collectd and rsyslogs to pipe >> logs/stats to remote server(s). Local logs can be set to logrotate to a >> reasonable size so that local RAM FS always contains recent log >> information for quick triage, but long term historical logging would be >> maintained on the rsyslog server >> > This needs to be co-ordinated with log-collection, as well as the bootstrapping > code. Yep. Lots of stuff for vdsm/oVirt Engine team to do in order to meet this requirement :) In contrast, making oVirt Node stateless is quite trivial. Most of the work here is actually for vdsm and other related utilities (like log collector) Perry From iheim at redhat.com Wed Feb 22 20:38:01 2012 From: iheim at redhat.com (Itamar Heim) Date: Wed, 22 Feb 2012 22:38:01 +0200 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F452498.7040105@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <4F451290.3090103@redhat.com> <4F45218D.1020205@redhat.com> <4F452498.7040105@redhat.com> Message-ID: <4F455229.2030604@redhat.com> On 02/22/2012 07:23 PM, Perry Myers wrote: >>> Well, if the network is busted which leads to the bridge rename failing, >>> wouldn't the fact that the network is broken cause other problems anyhow? >>> >> Perry, my point is that we're increasing the chances to get >> into these holes. Network is not busted most of the time, but occasionally >> there's a glitch and we'd like to stay away from it. I'm sure >> you know what I'm talking about. > > What if oVirt Node creates ifcfg-ovirt (instead of ifcfg-br0) by default > as part of bringing up the network on each boot (via either DHCP or > kernel args)? what if admin wants this bonded, or bridgeless, or jumbo frames? stateless doesn't mean you can't save configuration somewhere on the network, right (could be on engine, could be just on some nfs or http location). if you have a tpm, just encrypt all the data to make sure no one tampered with your config, or if you don't care, just download your config (well, you trust the network to download your image without encryption, so no need to be fanatic i guess). so in short: 1. pxe boot the image 2. download from known location (kernerl param) the rest of the config you care about, certificates, etc. 3. use tpm for private key (or get the password to keystore in config via kernel parameter if you don't want/have tpm. I guess my main question is why does stateless implies no saved config for all the various issues > > Then vdsm would never need to do this. This particular step could be > something that is turned on/off only if vdsm is installed so that it > doesn't affect any non-oVirt usages of oVirt Node (Archipel, etc) > >>>> * Today there's a supported flow that for nodes with >>>> password, the user is allowed to use the "add host" >>>> scenario. For stateless, it means re-configuring a password >>>> on every boot... >>> >>> This flow would still be applicable. We are going to allow setting of >>> the admin password embedded in the core ISO via an offline process. 
>>> Once vdsm is fixed to use a non-root account for installation flow, this >>> is no longer a problem >> This is not exactly vdsm. More like vdsm-bootstrap. > > ack > >>> >>> Also, if we (as described above) make registrations persistent across >>> reboots by changing the registration flow a bit, then the install user >>> password only need be set for the initial boot anyhow. >>> >>> Therefore I think as a requirement for stateless oVirt Node, we must >>> have as a prerequsite removing root account usage for >>> registration/installation >>> >> This is both for vdsm and engine, and I'm not sure it's that trivial. > > Understood, but it's a requirement for other things. There are security > considerations for requiring remote root ssh access as part of your core > infrastructure. So this needs to be dealt with regardless. > >>>> - Other issues >>>> >>>> * Local storage; so far we were able to define a local >>>> storage in ovirt node. Stateless will block this ability. >>> >>> It shouldn't. The Node should be able to automatically scan locally >>> attached disks to look for a well defined VG or partition label and >>> based on that automatically activate/mount >>> >>> Stateless doesn't imply diskless. It is a requirement even for >>> stateless node usage to be able to leverage locally attached disks both >>> for VM storage and also for Swap. >>> >> Still, in a pure disk-less setup you will not have local storage. >> See also Mike's answer. > > Sure. If you want diskless specifically and then complain about lack of > swap or local storage for VMs... then you might not be getting the point :) > > That has no bearing on the stateless discussion, except that the first > pass of stateless might not allow config of local disk/swap to start > with. We might do it incrementally > >>>> * Node upgrade; currently it's possible to upgrade a node >>>> from the engine. In stateless it will error, since no where >>>> to d/l the iso file to. >>> >>> Upgrades are no longer needed with stateless. To upgrade a stateless >>> node all you need to do is 'reboot from a newer image'. i.e. all >>> upgrades would be done via PXE server image replacement. So the flow of >>> 'upload ISO to running oVirt Node' is no longer even necessary >>> >> This is assuming PXE only use-case. I'm not sure it's the only one. > > Nope... > > copy oVirt Node 2.2.3 to a USB stick (via ovirt-iso-to-usb-disk) > boot a host with it > > Later... > > copy oVirt Node 2.2.4 to same USB stick (via ovirt-iso-to-usb-disk) > boot the host with it > > Yes, it requires you to touch the USB stick. If you specifically want > stateless (implying no 'installation' of the Node) and you won't be > using PXE to run, then it involves legwork. > > But again, we're not planning to eliminate the current 'install' > methods. Stateless is in addition to installing to disk, and using the > 'iso upload' upgrade method > >>>> * Collecting information; core dumps and logging may not >>>> be available due to lack of space? Or will it cause kernel >>>> panic if all space is consumed? >>> >>> We already provide ability to send kdumps to remote ssh/NFS location and >>> already provide the ability to use both collectd and rsyslogs to pipe >>> logs/stats to remote server(s). 
Local logs can be set to logrotate to a >>> reasonable size so that local RAM FS always contains recent log >>> information for quick triage, but long term historical logging would be >>> maintained on the rsyslog server >>> >> This needs to be co-ordinated with log-collection, as well as the bootstrapping >> code. > > Yep. Lots of stuff for vdsm/oVirt Engine team to do in order to meet > this requirement :) > > In contrast, making oVirt Node stateless is quite trivial. Most of the > work here is actually for vdsm and other related utilities (like log > collector) > > Perry > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From bos at je-eigen-domein.nl Wed Feb 22 16:03:39 2012 From: bos at je-eigen-domein.nl (Floris Bos / Maxnet) Date: Wed, 22 Feb 2012 17:03:39 +0100 Subject: [Engine-devel] Support for stateless nodes In-Reply-To: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> Message-ID: <4F4511DB.4090604@je-eigen-domein.nl> On 02/22/2012 03:57 PM, Mike Burns wrote: > There has been a lot of interest in being able to run stateless Nodes > with ovirt-engine. ovirt-node has designed a way [1] to achieve this on > the node side, but we need input from the engine and vdsm teams to see > if we're missing some requirement or if there needs to be changes on the > engine/vdsm side to achieve this. > > As it currently stands, every time you reboot an ovirt-node that is > stateless, it would require manually removing the host in engine, then > re-registering/approving it again in engine. > > Any thoughts, concerns, input on how to solve this? Perhaps the node can perform some very basic form of authentication based on IP-address and a key derived from hardware. I see that TPM is already mentioned on the wiki, but even on systems without it, one could simply take a hash of all the MAC-addresses of the system, the CPU serial and the BIOS info from /sys/class/dmi and use that as a form of password. It's better than nothing, or approving nodes all the time (how do you know if the node you are approving is really THE node?) -- Yours sincerely, Floris Bos From pmyers at redhat.com Wed Feb 22 20:54:01 2012 From: pmyers at redhat.com (Perry Myers) Date: Wed, 22 Feb 2012 15:54:01 -0500 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F455229.2030604@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <4F451290.3090103@redhat.com> <4F45218D.1020205@redhat.com> <4F452498.7040105@redhat.com> <4F455229.2030604@redhat.com> Message-ID: <4F4555E9.8000908@redhat.com> On 02/22/2012 03:38 PM, Itamar Heim wrote: > On 02/22/2012 07:23 PM, Perry Myers wrote: >>>> Well, if the network is busted which leads to the bridge rename >>>> failing, >>>> wouldn't the fact that the network is broken cause other problems >>>> anyhow? >>>> >>> Perry, my point is that we're increasing the chances to get >>> into these holes. Network is not busted most of the time, but >>> occasionally >>> there's a glitch and we'd like to stay away from it. I'm sure >>> you know what I'm talking about. >> >> What if oVirt Node creates ifcfg-ovirt (instead of ifcfg-br0) by default >> as part of bringing up the network on each boot (via either DHCP or >> kernel args)? > > what if admin wants this bonded, or bridgeless, or jumbo frames? 
Right now those aren't things you can configure via oVirt Node, they're things you configure via oVirt Engine/vdsm. If we want to add things like this to oVirt Node, we can (just file some RFEs for us) and then we'll expose this sort of stuff via the kernel cmd line args as we do with all of our other config params > stateless doesn't mean you can't save configuration somewhere on the > network, right (could be on engine, could be just on some nfs or http > location). Correct. Our present thinking is that oVirt Node specific stuff us saved via kernel cmd line args persisted either on PXE server or directly on boot media. Anything else (oVirt Engine specific for example) would be stored in the mgmt server (OE) itself > if you have a tpm, just encrypt all the data to make sure no one > tampered with your config, or if you don't care, just download your > config (well, you trust the network to download your image without > encryption, so no need to be fanatic i guess). > so in short: > 1. pxe boot the image > 2. download from known location (kernerl param) the rest of the config > you care about, certificates, etc. > 3. use tpm for private key (or get the password to keystore in config > via kernel parameter if you don't want/have tpm. Agreed, though w/ vdsm you can simplify by just encrypting the certs 1. boot up, network and basics are configured via kernel cmd args or DNS SRV 2. vdsm retrieves cert from OE and decrypts via locally stored key (TPM or embedded in ISO) 3. vdsm now can securely communicate with OE, so it retrieves config for network/storage from OE and applies that config > I guess my main question is why does stateless implies no saved config > for all the various issues It doesn't imply no saved config, it implies no saved config on the local node itself. Config absolutely will need to be retrieved from a remote server. Right now, oVirt Node config (outside of vdsm) consists of: * network config (dhcp vs. static) * logging/rsyslog config/collectd * mgmt server config (oVirt Engine IP/port) * kdump remote config * auth info (admin password, etc) All of that can be handled through some combination of: * DNS SRV * kernel command line args * embedding things in the ISO pre-deployment (passwords for example) The remaining config is what vdsm does after the node has the above configured, so things like bonding, add'l interfaces, storage, should all be configured by vdsm on each boot by retrieving the configuration details from oVirt Engine and applying them on vdsm startup Perry From djasa at redhat.com Thu Feb 23 11:56:59 2012 From: djasa at redhat.com (David =?UTF-8?Q?Ja=C5=A1a?=) Date: Thu, 23 Feb 2012 12:56:59 +0100 Subject: [Engine-devel] [node-devel] Support for stateless nodes In-Reply-To: <4F451DAD.9090706@redhat.com> References: <1329922658.6140.14.camel@beelzebub.mburnsfire.net> <4F450ABB.30009@redhat.com> <4F4510FA.4050107@redhat.com> <1329926892.6140.39.camel@beelzebub.mburnsfire.net> <4F451CC7.4050909@redhat.com> <4F451DAD.9090706@redhat.com> Message-ID: <1329998220.4319.133.camel@dhcp-29-7.brq.redhat.com> Perry Myers p??e v St 22. 02. 2012 v 11:54 -0500: > >> As answered in the other response, there are kernel command line > >> parameters to set the management_server. Since this will likely be in a > >> pxe environment, setting the pxe profile to include > >> management_server= should be fine. > >> > > I agree it's a valid solution as long as you assume this is relevant > > for PXE only use case. > > Not necessarily... 
> > Take the ISO/USB Stick and you can embed the kargs into the ISO/USB > itself so that it always boots with that mgmt server arg > > This actually also enables use of 'stateless' combined with static IP > addressing as well. As you can create a USB Stick and embed the kargs > for the NIC configuration, rsyslog config, etc, etc. > > >> Another solution could be to setup a specific DNS SRV record that points > >> to the ovirt-engine and have node automatically query that for the > >> location. > > This was discussed in the past and for some reason not implemented. > > Concerns about security, iirc. Assumption that someone could hijack the > DNS SRV record and provide a man-in-the-middle oVirt Engine server. > What about DNSSEC validation for DNS records in node? David > If you're paranoid about security, don't use DNS SRV of course, instead > use hardcoded kargs as described above. But for some DNS SRV might be > an ok option > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel -- David Ja?a, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 From yzaslavs at redhat.com Thu Feb 23 12:31:03 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Thu, 23 Feb 2012 14:31:03 +0200 Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: References: Message-ID: <4F463187.8000608@redhat.com> On 02/14/2012 06:49 PM, Laszlo Hornyak wrote: > > > ----- Original Message ----- >> From: "Yair Zaslavsky" >> To: engine-devel at ovirt.org >> Sent: Tuesday, February 14, 2012 2:01:41 PM >> Subject: Re: [Engine-devel] [backend] a little confusion about the quartz jobs >> >> On 02/14/2012 02:21 PM, Mike Kolesnik wrote: >>>> hi, >>>> >>>> I was playing with the quartz jobs in the backend and I thought >>>> this >>>> is an area where some simplification and/or cleanup would be >>>> useful. >>>> >>>> - SchedulerUtil interface would be nice to hide quartz from the >>>> rest >>>> of the code, but it very rarely used, the clients are bound to >>>> it's >>>> single implementation, SchedulerUtilQuartzImpl through it's >>>> getInstance() method. >>> >>> I think the whole class name is misleading, since usually when I >>> imagine a utils class, it's a simple class that does some menial >>> work for me in static methods, and not really calls anything else >>> or even has an instance. >> +1 > > Agreed, I will rename it. > >>> >>> Maybe the class can be renamed to just Scheduler, or >>> ScheduleManager which will be more precise. >>> >>>> - It was designed to be a local EJB, Backend actually expects it >>>> to >>>> be injected. (this field is not used) >>>> - when scheduling a job, you call schedule...Job(Object instance, >>>> String methodName, ...) however, it is not the _methodname_ that >>>> the executor will look for >>>> - instead, it will check the OnTimerMethodAnnotation on all the >>>> methods. 
But this annotation has everywhere the methodName as >>>> value >>>> - JobWrapper actually iterates over all the methods to find the >>>> one >>>> with the right annotation >>>> >>>> So a quick simplification could be: >>>> - The annotation is not needed, it could be removed >>>> - JobWrapper could just getMethod(methodName, argClasses) instead >>>> of >>>> looking for the annotation in all of the methods >>> >>> Sounds good, or maybe just keep the annotation and not the method >>> name in the call/annotation since then if the method name changes >>> it won't break and we can easily locate all jobs by searching for >>> the annotation.. This is why the annotations were introduced in the first place, we have too much places in code where we rely on usage of strings and reflection , so if a method name gets changed, the code stops working after being compiled. As this is the case, we should consider sticking to @OnTimer annotation, but maybe a proper documentation on the motivation for it should be added. >>> >>>> - I am really not for factoryes, but if we want to separate the >>>> interface from the implementation, then probably a >>>> SchedulerUtilFactory could help here. The dummy implementation >>>> would do just the very same thing as the >>>> SchedulerUtilQuartzImpl.getInstance() >>>> - I would remove the reference to SchedulerUtil from Backend as >>>> well, since it is not used. Really _should_ the Backend class do >>>> any scheduling? >>> >>> Backend does schedule at least one job in it's Initialize() >>> method.. >> Yes, we have the DbUsers cache manager that performs periodic checks >> for >> db users against AD/IPA. >> This scheduler should start upon @PostConstruct (or any logical >> equivalent). >> > > Yes but I am not sure this should happen right there. All the other service installs it's own jobs, so maybe SessionDataContainer should do so as well. It would look more consistent. > >>> Maybe the class should be injected, but I don't know if that >>> happens so maybe that's why it's unused. >>> >>>> >>>> Please share your thoughts. >>>> >>>> Thank you, >>>> Laszlo >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> From masayag at redhat.com Thu Feb 23 12:37:48 2012 From: masayag at redhat.com (Moti Asayag) Date: Thu, 23 Feb 2012 14:37:48 +0200 Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: <4F463187.8000608@redhat.com> References: <4F463187.8000608@redhat.com> Message-ID: <4F46331C.9020209@redhat.com> On 02/23/2012 02:31 PM, Yair Zaslavsky wrote: > On 02/14/2012 06:49 PM, Laszlo Hornyak wrote: >> >> >> ----- Original Message ----- >>> From: "Yair Zaslavsky" >>> To: engine-devel at ovirt.org >>> Sent: Tuesday, February 14, 2012 2:01:41 PM >>> Subject: Re: [Engine-devel] [backend] a little confusion about the quartz jobs >>> >>> On 02/14/2012 02:21 PM, Mike Kolesnik wrote: >>>>> hi, >>>>> >>>>> I was playing with the quartz jobs in the backend and I thought >>>>> this >>>>> is an area where some simplification and/or cleanup would be >>>>> useful. 
>>>>> >>>>> - SchedulerUtil interface would be nice to hide quartz from the >>>>> rest >>>>> of the code, but it very rarely used, the clients are bound to >>>>> it's >>>>> single implementation, SchedulerUtilQuartzImpl through it's >>>>> getInstance() method. >>>> >>>> I think the whole class name is misleading, since usually when I >>>> imagine a utils class, it's a simple class that does some menial >>>> work for me in static methods, and not really calls anything else >>>> or even has an instance. >>> +1 >> >> Agreed, I will rename it. >> >>>> >>>> Maybe the class can be renamed to just Scheduler, or >>>> ScheduleManager which will be more precise. >>>> >>>>> - It was designed to be a local EJB, Backend actually expects it >>>>> to >>>>> be injected. (this field is not used) >>>>> - when scheduling a job, you call schedule...Job(Object instance, >>>>> String methodName, ...) however, it is not the _methodname_ that >>>>> the executor will look for >>>>> - instead, it will check the OnTimerMethodAnnotation on all the >>>>> methods. But this annotation has everywhere the methodName as >>>>> value >>>>> - JobWrapper actually iterates over all the methods to find the >>>>> one >>>>> with the right annotation >>>>> >>>>> So a quick simplification could be: >>>>> - The annotation is not needed, it could be removed >>>>> - JobWrapper could just getMethod(methodName, argClasses) instead >>>>> of >>>>> looking for the annotation in all of the methods >>>> >>>> Sounds good, or maybe just keep the annotation and not the method >>>> name in the call/annotation since then if the method name changes >>>> it won't break and we can easily locate all jobs by searching for >>>> the annotation.. > This is why the annotations were introduced in the first place, we have > too much places in code where we rely on usage of strings and reflection > , so if a method name gets changed, the code stops working after being > compiled. > As this is the case, we should consider sticking to @OnTimer annotation, > but maybe a proper documentation on the motivation for it should be added. > Since we don't have track for the scheduled jobs on ovirt-engine, perhaps it would be better to concentrate the system jobs that currently are spread all over is a single Job's name constant file, and provide a meaningful name for each job instead the "OnTime". It will able somehow to gain control over the jobs in the system in a single point of code instead searching for the OnTimer annotation all over. >>>> >>>>> - I am really not for factoryes, but if we want to separate the >>>>> interface from the implementation, then probably a >>>>> SchedulerUtilFactory could help here. The dummy implementation >>>>> would do just the very same thing as the >>>>> SchedulerUtilQuartzImpl.getInstance() >>>>> - I would remove the reference to SchedulerUtil from Backend as >>>>> well, since it is not used. Really _should_ the Backend class do >>>>> any scheduling? >>>> >>>> Backend does schedule at least one job in it's Initialize() >>>> method.. >>> Yes, we have the DbUsers cache manager that performs periodic checks >>> for >>> db users against AD/IPA. >>> This scheduler should start upon @PostConstruct (or any logical >>> equivalent). >>> >> >> Yes but I am not sure this should happen right there. All the other service installs it's own jobs, so maybe SessionDataContainer should do so as well. It would look more consistent. >> >>>> Maybe the class should be injected, but I don't know if that >>>> happens so maybe that's why it's unused. 
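A sketch of what Moti's suggestion above could look like: a single constants holder that names every system job, so the jobs are discoverable in one place instead of searching for the annotation all over. The job names below are invented purely for illustration:

    // One place to give each system job a readable name instead of reusing
    // "OnTimer" everywhere. The names are illustrative, not the real jobs list.
    public final class SystemJobNames {

        public static final String DB_USERS_REFRESH = "RefreshDbUsersFromDirectory";
        public static final String EXPIRED_SESSION_CLEANUP = "CleanExpiredSessions";
        public static final String ASYNC_TASK_POLLING = "PollAsyncTasks";

        private SystemJobNames() {
            // constants holder, not meant to be instantiated
        }
    }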
>>>> >>>>> >>>>> Please share your thoughts. >>>>> >>>>> Thank you, >>>>> Laszlo >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> >>> _______________________________________________ >>> Engine-devel mailing list >>> Engine-devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From rgolan at redhat.com Thu Feb 23 13:20:40 2012 From: rgolan at redhat.com (Roy Golan) Date: Thu, 23 Feb 2012 15:20:40 +0200 Subject: [Engine-devel] bridge-less networks design wiki update Message-ID: <4F463D28.9060007@redhat.com> Hi all, Please find bridge-less network design wiki updated with changes following the latest discussions. www.ovirt.org/wiki/Features/Design/Network/Bridgeless_Networks Thanks, Roy From lhornyak at redhat.com Thu Feb 23 15:23:46 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Thu, 23 Feb 2012 10:23:46 -0500 (EST) Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: <4F463187.8000608@redhat.com> Message-ID: <787147c7-f301-421f-8187-53f8036558ce@zmail01.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Yair Zaslavsky" > To: "Laszlo Hornyak" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 23, 2012 1:31:03 PM > Subject: Re: [Engine-devel] [backend] a little confusion about the quartz jobs > > On 02/14/2012 06:49 PM, Laszlo Hornyak wrote: > > > > > > ----- Original Message ----- > >> From: "Yair Zaslavsky" > >> To: engine-devel at ovirt.org > >> Sent: Tuesday, February 14, 2012 2:01:41 PM > >> Subject: Re: [Engine-devel] [backend] a little confusion about the > >> quartz jobs > >> > >> On 02/14/2012 02:21 PM, Mike Kolesnik wrote: > >>>> hi, > >>>> > >>>> I was playing with the quartz jobs in the backend and I thought > >>>> this > >>>> is an area where some simplification and/or cleanup would be > >>>> useful. > >>>> > >>>> - SchedulerUtil interface would be nice to hide quartz from the > >>>> rest > >>>> of the code, but it very rarely used, the clients are bound to > >>>> it's > >>>> single implementation, SchedulerUtilQuartzImpl through it's > >>>> getInstance() method. > >>> > >>> I think the whole class name is misleading, since usually when I > >>> imagine a utils class, it's a simple class that does some menial > >>> work for me in static methods, and not really calls anything else > >>> or even has an instance. > >> +1 > > > > Agreed, I will rename it. > > > >>> > >>> Maybe the class can be renamed to just Scheduler, or > >>> ScheduleManager which will be more precise. > >>> > >>>> - It was designed to be a local EJB, Backend actually expects > >>>> it > >>>> to > >>>> be injected. (this field is not used) > >>>> - when scheduling a job, you call schedule...Job(Object > >>>> instance, > >>>> String methodName, ...) however, it is not the _methodname_ > >>>> that > >>>> the executor will look for > >>>> - instead, it will check the OnTimerMethodAnnotation on all the > >>>> methods. 
But this annotation has everywhere the methodName as > >>>> value > >>>> - JobWrapper actually iterates over all the methods to find the > >>>> one > >>>> with the right annotation > >>>> > >>>> So a quick simplification could be: > >>>> - The annotation is not needed, it could be removed > >>>> - JobWrapper could just getMethod(methodName, argClasses) > >>>> instead > >>>> of > >>>> looking for the annotation in all of the methods > >>> > >>> Sounds good, or maybe just keep the annotation and not the method > >>> name in the call/annotation since then if the method name changes > >>> it won't break and we can easily locate all jobs by searching for > >>> the annotation.. > This is why the annotations were introduced in the first place, we > have > too much places in code where we rely on usage of strings and > reflection > , so if a method name gets changed, the code stops working after > being > compiled. > As this is the case, we should consider sticking to @OnTimer > annotation, > but maybe a proper documentation on the motivation for it should be > added. I understand your decision but... - the methods are usually about 5-10 lines below the schedule.*Job call, it is very hard not to notice the connection - for safe and easy refactoring, it could be better to pass over a callback - plus in the schedule.*Job call it could then be better to check if such method still exists, should throw an IllegalArgumentException if not there in this case we could catch the problem right at the cause, not when scheduled > > >>> > >>>> - I am really not for factoryes, but if we want to separate the > >>>> interface from the implementation, then probably a > >>>> SchedulerUtilFactory could help here. The dummy implementation > >>>> would do just the very same thing as the > >>>> SchedulerUtilQuartzImpl.getInstance() > >>>> - I would remove the reference to SchedulerUtil from Backend as > >>>> well, since it is not used. Really _should_ the Backend class > >>>> do > >>>> any scheduling? > >>> > >>> Backend does schedule at least one job in it's Initialize() > >>> method.. > >> Yes, we have the DbUsers cache manager that performs periodic > >> checks > >> for > >> db users against AD/IPA. > >> This scheduler should start upon @PostConstruct (or any logical > >> equivalent). > >> > > > > Yes but I am not sure this should happen right there. All the other > > service installs it's own jobs, so maybe SessionDataContainer > > should do so as well. It would look more consistent. > > > >>> Maybe the class should be injected, but I don't know if that > >>> happens so maybe that's why it's unused. > >>> > >>>> > >>>> Please share your thoughts. 
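The two lookup strategies being compared read roughly like this (a sketch only: the annotation is declared inline as a stand-in for the engine's OnTimerMethodAnnotation, and the helper names are made up):

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.reflect.Method;

    public class JobLookupSketch {

        // Stand-in for the engine's annotation, reduced to what the thread
        // describes: it carries the method name as its value.
        @Retention(RetentionPolicy.RUNTIME)
        public @interface OnTimerMethodAnnotation {
            String value();
        }

        // What JobWrapper does today, as described above: walk every public
        // method and pick the one whose annotation value matches.
        public static Method findByAnnotation(Object instance, String methodName) {
            for (Method m : instance.getClass().getMethods()) {
                OnTimerMethodAnnotation a = m.getAnnotation(OnTimerMethodAnnotation.class);
                if (a != null && a.value().equals(methodName)) {
                    return m;
                }
            }
            return null;
        }

        // The proposed simplification: ask for the method directly by name
        // and parameter types.
        public static Method findByName(Object instance, String methodName,
                                        Class<?>... argClasses) throws NoSuchMethodException {
            return instance.getClass().getMethod(methodName, argClasses);
        }
    }

The first variant has to scan every method and trusts the annotation value; the second goes straight to the named method and fails as soon as it is not there.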
> >>>> > >>>> Thank you, > >>>> Laszlo > >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>> > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > From yzaslavs at redhat.com Thu Feb 23 15:54:52 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Thu, 23 Feb 2012 17:54:52 +0200 Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: <787147c7-f301-421f-8187-53f8036558ce@zmail01.collab.prod.int.phx2.redhat.com> References: <787147c7-f301-421f-8187-53f8036558ce@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F46614C.6010600@redhat.com> On 02/23/2012 05:23 PM, Laszlo Hornyak wrote: > > ----- Original Message ----- >> From: "Yair Zaslavsky" >> To: "Laszlo Hornyak" >> Cc: engine-devel at ovirt.org >> Sent: Thursday, February 23, 2012 1:31:03 PM >> Subject: Re: [Engine-devel] [backend] a little confusion about the quartz jobs >> >> On 02/14/2012 06:49 PM, Laszlo Hornyak wrote: >>> >>> >>> ----- Original Message ----- >>>> From: "Yair Zaslavsky" >>>> To: engine-devel at ovirt.org >>>> Sent: Tuesday, February 14, 2012 2:01:41 PM >>>> Subject: Re: [Engine-devel] [backend] a little confusion about the >>>> quartz jobs >>>> >>>> On 02/14/2012 02:21 PM, Mike Kolesnik wrote: >>>>>> hi, >>>>>> >>>>>> I was playing with the quartz jobs in the backend and I thought >>>>>> this >>>>>> is an area where some simplification and/or cleanup would be >>>>>> useful. >>>>>> >>>>>> - SchedulerUtil interface would be nice to hide quartz from the >>>>>> rest >>>>>> of the code, but it very rarely used, the clients are bound to >>>>>> it's >>>>>> single implementation, SchedulerUtilQuartzImpl through it's >>>>>> getInstance() method. >>>>> >>>>> I think the whole class name is misleading, since usually when I >>>>> imagine a utils class, it's a simple class that does some menial >>>>> work for me in static methods, and not really calls anything else >>>>> or even has an instance. >>>> +1 >>> >>> Agreed, I will rename it. >>> >>>>> >>>>> Maybe the class can be renamed to just Scheduler, or >>>>> ScheduleManager which will be more precise. >>>>> >>>>>> - It was designed to be a local EJB, Backend actually expects >>>>>> it >>>>>> to >>>>>> be injected. (this field is not used) >>>>>> - when scheduling a job, you call schedule...Job(Object >>>>>> instance, >>>>>> String methodName, ...) however, it is not the _methodname_ >>>>>> that >>>>>> the executor will look for >>>>>> - instead, it will check the OnTimerMethodAnnotation on all the >>>>>> methods. 
But this annotation has everywhere the methodName as >>>>>> value >>>>>> - JobWrapper actually iterates over all the methods to find the >>>>>> one >>>>>> with the right annotation >>>>>> >>>>>> So a quick simplification could be: >>>>>> - The annotation is not needed, it could be removed >>>>>> - JobWrapper could just getMethod(methodName, argClasses) >>>>>> instead >>>>>> of >>>>>> looking for the annotation in all of the methods >>>>> >>>>> Sounds good, or maybe just keep the annotation and not the method >>>>> name in the call/annotation since then if the method name changes >>>>> it won't break and we can easily locate all jobs by searching for >>>>> the annotation.. >> This is why the annotations were introduced in the first place, we >> have >> too much places in code where we rely on usage of strings and >> reflection >> , so if a method name gets changed, the code stops working after >> being >> compiled. >> As this is the case, we should consider sticking to @OnTimer >> annotation, >> but maybe a proper documentation on the motivation for it should be >> added. > > I understand your decision but... > - the methods are usually about 5-10 lines below the schedule.*Job call, it is very hard not to notice the connection > - for safe and easy refactoring, it could be better to pass over a callback Callback will introduce some limitations (I don't need to tell what are the limitations of anonymous inner classes :) ) > - plus in the schedule.*Job call it could then be better to check if such method still exists, should throw an IllegalArgumentException if not there in this case we could catch the problem right at the cause, not when scheduled That depends on what point you start scheduling - do we schedule all our jobs on startup? > >> >>>>> >>>>>> - I am really not for factoryes, but if we want to separate the >>>>>> interface from the implementation, then probably a >>>>>> SchedulerUtilFactory could help here. The dummy implementation >>>>>> would do just the very same thing as the >>>>>> SchedulerUtilQuartzImpl.getInstance() >>>>>> - I would remove the reference to SchedulerUtil from Backend as >>>>>> well, since it is not used. Really _should_ the Backend class >>>>>> do >>>>>> any scheduling? >>>>> >>>>> Backend does schedule at least one job in it's Initialize() >>>>> method.. >>>> Yes, we have the DbUsers cache manager that performs periodic >>>> checks >>>> for >>>> db users against AD/IPA. >>>> This scheduler should start upon @PostConstruct (or any logical >>>> equivalent). >>>> >>> >>> Yes but I am not sure this should happen right there. All the other >>> service installs it's own jobs, so maybe SessionDataContainer >>> should do so as well. It would look more consistent. >>> >>>>> Maybe the class should be injected, but I don't know if that >>>>> happens so maybe that's why it's unused. >>>>> >>>>>> >>>>>> Please share your thoughts. 
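Side by side, the two registration styles under discussion look roughly like this (the interface and method names are illustrative, not the actual SchedulerUtil signatures):

    import java.util.concurrent.TimeUnit;

    // Contrast of name-based registration with callback-based registration.
    public interface SchedulerRegistrationSketch {

        // current style: a target instance plus the name of an annotated
        // method, resolved later by reflection
        void schedule(Object instance, String methodName,
                      Class<?>[] argClasses, Object[] args,
                      long initialDelay, long delay, TimeUnit unit);

        // callback style: the compiler checks the reference and renames are
        // safe, but (pre Java 8) every call site needs an anonymous inner
        // class, which is the limitation mentioned above
        void schedule(Runnable job, long initialDelay, long delay, TimeUnit unit);
    }

The Quartz wiring underneath would be the same either way; the trade-off is compile-time safety versus an anonymous inner class at every call site.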
>>>>>> >>>>>> Thank you, >>>>>> Laszlo >>>>>> _______________________________________________ >>>>>> Engine-devel mailing list >>>>>> Engine-devel at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >> >> From lhornyak at redhat.com Thu Feb 23 16:25:22 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Thu, 23 Feb 2012 11:25:22 -0500 (EST) Subject: [Engine-devel] [backend] a little confusion about the quartz jobs In-Reply-To: <4F46614C.6010600@redhat.com> Message-ID: <356db482-aaad-436a-a7da-8281637ab9a1@zmail01.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > From: "Yair Zaslavsky" > To: "Laszlo Hornyak" > Cc: engine-devel at ovirt.org > Sent: Thursday, February 23, 2012 4:54:52 PM > Subject: Re: [Engine-devel] [backend] a little confusion about the quartz jobs > > On 02/23/2012 05:23 PM, Laszlo Hornyak wrote: > > > > ----- Original Message ----- > >> From: "Yair Zaslavsky" > >> To: "Laszlo Hornyak" > >> Cc: engine-devel at ovirt.org > >> Sent: Thursday, February 23, 2012 1:31:03 PM > >> Subject: Re: [Engine-devel] [backend] a little confusion about the > >> quartz jobs > >> > >> On 02/14/2012 06:49 PM, Laszlo Hornyak wrote: > >>> > >>> > >>> ----- Original Message ----- > >>>> From: "Yair Zaslavsky" > >>>> To: engine-devel at ovirt.org > >>>> Sent: Tuesday, February 14, 2012 2:01:41 PM > >>>> Subject: Re: [Engine-devel] [backend] a little confusion about > >>>> the > >>>> quartz jobs > >>>> > >>>> On 02/14/2012 02:21 PM, Mike Kolesnik wrote: > >>>>>> hi, > >>>>>> > >>>>>> I was playing with the quartz jobs in the backend and I > >>>>>> thought > >>>>>> this > >>>>>> is an area where some simplification and/or cleanup would be > >>>>>> useful. > >>>>>> > >>>>>> - SchedulerUtil interface would be nice to hide quartz from > >>>>>> the > >>>>>> rest > >>>>>> of the code, but it very rarely used, the clients are bound > >>>>>> to > >>>>>> it's > >>>>>> single implementation, SchedulerUtilQuartzImpl through it's > >>>>>> getInstance() method. > >>>>> > >>>>> I think the whole class name is misleading, since usually when > >>>>> I > >>>>> imagine a utils class, it's a simple class that does some > >>>>> menial > >>>>> work for me in static methods, and not really calls anything > >>>>> else > >>>>> or even has an instance. > >>>> +1 > >>> > >>> Agreed, I will rename it. > >>> > >>>>> > >>>>> Maybe the class can be renamed to just Scheduler, or > >>>>> ScheduleManager which will be more precise. > >>>>> > >>>>>> - It was designed to be a local EJB, Backend actually expects > >>>>>> it > >>>>>> to > >>>>>> be injected. (this field is not used) > >>>>>> - when scheduling a job, you call schedule...Job(Object > >>>>>> instance, > >>>>>> String methodName, ...) however, it is not the _methodname_ > >>>>>> that > >>>>>> the executor will look for > >>>>>> - instead, it will check the OnTimerMethodAnnotation on all > >>>>>> the > >>>>>> methods. 
But this annotation has everywhere the methodName as > >>>>>> value > >>>>>> - JobWrapper actually iterates over all the methods to find > >>>>>> the > >>>>>> one > >>>>>> with the right annotation > >>>>>> > >>>>>> So a quick simplification could be: > >>>>>> - The annotation is not needed, it could be removed > >>>>>> - JobWrapper could just getMethod(methodName, argClasses) > >>>>>> instead > >>>>>> of > >>>>>> looking for the annotation in all of the methods > >>>>> > >>>>> Sounds good, or maybe just keep the annotation and not the > >>>>> method > >>>>> name in the call/annotation since then if the method name > >>>>> changes > >>>>> it won't break and we can easily locate all jobs by searching > >>>>> for > >>>>> the annotation.. > >> This is why the annotations were introduced in the first place, we > >> have > >> too much places in code where we rely on usage of strings and > >> reflection > >> , so if a method name gets changed, the code stops working after > >> being > >> compiled. > >> As this is the case, we should consider sticking to @OnTimer > >> annotation, > >> but maybe a proper documentation on the motivation for it should > >> be > >> added. > > > > I understand your decision but... > > - the methods are usually about 5-10 lines below the schedule.*Job > > call, it is very hard not to notice the connection > > - for safe and easy refactoring, it could be better to pass over a > > callback > Callback will introduce some limitations (I don't need to tell what > are > the limitations of anonymous inner classes :) ) > > - plus in the schedule.*Job call it could then be better to check > > if such method still exists, should throw an > > IllegalArgumentException if not there in this case we could catch > > the problem right at the cause, not when scheduled > That depends on what point you start scheduling - do we schedule all > our > jobs on startup? No, only some of the jobs. I am not telling that the method should be checked on backend start, I am telling that the method name should be checked by the SchedulerUtilQuartzImpl when the schedule.*Job method is called, not when the job is scheduled to run. > > > > >> > >>>>> > >>>>>> - I am really not for factoryes, but if we want to separate > >>>>>> the > >>>>>> interface from the implementation, then probably a > >>>>>> SchedulerUtilFactory could help here. The dummy > >>>>>> implementation > >>>>>> would do just the very same thing as the > >>>>>> SchedulerUtilQuartzImpl.getInstance() > >>>>>> - I would remove the reference to SchedulerUtil from Backend > >>>>>> as > >>>>>> well, since it is not used. Really _should_ the Backend class > >>>>>> do > >>>>>> any scheduling? > >>>>> > >>>>> Backend does schedule at least one job in it's Initialize() > >>>>> method.. > >>>> Yes, we have the DbUsers cache manager that performs periodic > >>>> checks > >>>> for > >>>> db users against AD/IPA. > >>>> This scheduler should start upon @PostConstruct (or any logical > >>>> equivalent). > >>>> > >>> > >>> Yes but I am not sure this should happen right there. All the > >>> other > >>> service installs it's own jobs, so maybe SessionDataContainer > >>> should do so as well. It would look more consistent. > >>> > >>>>> Maybe the class should be injected, but I don't know if that > >>>>> happens so maybe that's why it's unused. > >>>>> > >>>>>> > >>>>>> Please share your thoughts. 
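Concretely, the fail-fast check proposed here would sit in the schedule*Job entry point itself, something like the following sketch (only getMethod and IllegalArgumentException are real JDK pieces; the rest of the names are illustrative, not the actual SchedulerUtilQuartzImpl code):

    import java.util.Arrays;

    public class ScheduleValidationSketch {

        public void scheduleAJob(Object instance, String methodName,
                                 Class<?>[] argClasses, Object[] args, long delayMillis) {
            try {
                // resolve the target method at registration time ...
                instance.getClass().getMethod(methodName, argClasses);
            } catch (NoSuchMethodException e) {
                // ... so a typo or a rename is reported to the caller right
                // here, not when Quartz fires the trigger for the first time
                throw new IllegalArgumentException("Cannot schedule job: no public method "
                        + methodName + Arrays.toString(argClasses) + " on "
                        + instance.getClass().getName(), e);
            }
            // hand the job over to Quartz exactly as before (omitted)
        }
    }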
> >>>>>> > >>>>>> Thank you, > >>>>>> Laszlo > >>>>>> _______________________________________________ > >>>>>> Engine-devel mailing list > >>>>>> Engine-devel at ovirt.org > >>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>>>> > >>>>> _______________________________________________ > >>>>> Engine-devel mailing list > >>>>> Engine-devel at ovirt.org > >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>> > >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>> > >> > >> > > From iheim at redhat.com Sun Feb 26 12:05:48 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 26 Feb 2012 14:05:48 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F3A1614.9030405@redhat.com> References: <4F3A1614.9030405@redhat.com> Message-ID: <4F4A201C.2030000@redhat.com> On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: > Hi all, > I modified the Wiki pages of this feature: > > http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot > > http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot > > Comments are more than welcome 1. "Shared disks and direct LUN diskes behavior - For shared disks and direct LUN based disks, the user who performs the snapshot will specify during snapshot creation whether the disk should be plugged or unplugged upon performing the clone." direct lun - if it is not already in shared mode, cannot be used by more than one VM, hence should not be cloned, unless already flagged as shared. 2. it sounds like there should be some general code shared for import vm and clone vm for handling items which can't be duplicate by default (say, mac addresses). 3. MLA - are you cloning the permissions on the VM as well, or only creating an owner permission on the new entity? 4. MLA - what permission does one need to have on source VM/snapsot to clone it? if a non-owner can clone a VM/snapshot, and become owner of the new entity, need to make sure no privilege escalation flows exist. is the intent to share the code of clone VM with AddVm (which is what clone is), with a task to clone the disks rather than create them (otherwise you need to duplicate the code for quota and permission handling?) Thanks, Itamar From yzaslavs at redhat.com Sun Feb 26 12:38:33 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 26 Feb 2012 14:38:33 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A201C.2030000@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F4A201C.2030000@redhat.com> Message-ID: <4F4A27C9.70200@redhat.com> On 02/26/2012 02:05 PM, Itamar Heim wrote: > On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: >> Hi all, >> I modified the Wiki pages of this feature: >> >> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >> >> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >> >> Comments are more than welcome > > 1. "Shared disks and direct LUN diskes behavior - For shared disks and > direct LUN based disks, the user who performs the snapshot will specify > during snapshot creation whether the disk should be plugged or unplugged > upon performing the clone." > > direct lun - if it is not already in shared mode, cannot be used by more > than one VM, hence should not be cloned, unless already flagged as shared. Understood. What should be the behaviour if shared flag is set to false? > > 2. 
it sounds like there should be some general code shared for import vm > and clone vm for handling items which can't be duplicate by default > (say, mac addresses). True, I will revisit this. Aren't we facing actually this issue also in creating a VM from template? > > 3. MLA - are you cloning the permissions on the VM as well, or only > creating an owner permission on the new entity? > > 4. MLA - what permission does one need to have on source VM/snapsot to > clone it? > if a non-owner can clone a VM/snapshot, and become owner of the new > entity, need to make sure no privilege escalation flows exist. > is the intent to share the code of clone VM with AddVm (which is what > clone is), with a task to clone the disks rather than create them > (otherwise you need to duplicate the code for quota and permission > handling?) If I understand you correctly - Cloning images commands (AddVmFromTemplate, cloning vm from snapshot, etc..) will invoke a CopyImage internal command. > > Thanks, > Itamar From iheim at redhat.com Sun Feb 26 13:04:26 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 26 Feb 2012 15:04:26 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A27C9.70200@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F4A201C.2030000@redhat.com> <4F4A27C9.70200@redhat.com> Message-ID: <4F4A2DDA.10504@redhat.com> On 02/26/2012 02:38 PM, Yair Zaslavsky wrote: > On 02/26/2012 02:05 PM, Itamar Heim wrote: >> On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: >>> Hi all, >>> I modified the Wiki pages of this feature: >>> >>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >>> >>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >>> >>> Comments are more than welcome >> >> 1. "Shared disks and direct LUN diskes behavior - For shared disks and >> direct LUN based disks, the user who performs the snapshot will specify >> during snapshot creation whether the disk should be plugged or unplugged >> upon performing the clone." >> >> direct lun - if it is not already in shared mode, cannot be used by more >> than one VM, hence should not be cloned, unless already flagged as shared. > Understood. What should be the behavior if shared flag is set to false? warning to audit log that the disk isn't part of the clone. > >> >> 2. it sounds like there should be some general code shared for import vm >> and clone vm for handling items which can't be duplicate by default >> (say, mac addresses). > True, I will revisit this. Aren't we facing actually this issue also in > creating a VM from template? I assume it already has such logic. I'm suggesting to check how redundant it is across the various commands (if it is), before creating another care. >> >> 3. MLA - are you cloning the permissions on the VM as well, or only >> creating an owner permission on the new entity? >> >> 4. MLA - what permission does one need to have on source VM/snapsot to >> clone it? >> if a non-owner can clone a VM/snapshot, and become owner of the new >> entity, need to make sure no privilege escalation flows exist. >> is the intent to share the code of clone VM with AddVm (which is what >> clone is), with a task to clone the disks rather than create them >> (otherwise you need to duplicate the code for quota and permission >> handling?) > If I understand you correctly - Cloning images commands > (AddVmFromTemplate, cloning vm from snapshot, etc..) will invoke a > CopyImage internal command. iiuc, internal commands don't perform permission checks? 
From yzaslavs at redhat.com Sun Feb 26 13:20:45 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 26 Feb 2012 15:20:45 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A2DDA.10504@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F4A201C.2030000@redhat.com> <4F4A27C9.70200@redhat.com> <4F4A2DDA.10504@redhat.com> Message-ID: <4F4A31AD.30007@redhat.com> On 02/26/2012 03:04 PM, Itamar Heim wrote: > On 02/26/2012 02:38 PM, Yair Zaslavsky wrote: >> On 02/26/2012 02:05 PM, Itamar Heim wrote: >>> On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: >>>> Hi all, >>>> I modified the Wiki pages of this feature: >>>> >>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >>>> >>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >>>> >>>> Comments are more than welcome >>> >>> 1. "Shared disks and direct LUN diskes behavior - For shared disks and >>> direct LUN based disks, the user who performs the snapshot will specify >>> during snapshot creation whether the disk should be plugged or unplugged >>> upon performing the clone." >>> >>> direct lun - if it is not already in shared mode, cannot be used by more >>> than one VM, hence should not be cloned, unless already flagged as >>> shared. >> Understood. What should be the behavior if shared flag is set to false? > > warning to audit log that the disk isn't part of the clone. > >> >>> >>> 2. it sounds like there should be some general code shared for import vm >>> and clone vm for handling items which can't be duplicate by default >>> (say, mac addresses). >> True, I will revisit this. Aren't we facing actually this issue also in >> creating a VM from template? > > I assume it already has such logic. I'm suggesting to check how > redundant it is across the various commands (if it is), before creating > another care. Just checked, and you're correct. We do have such logic at AddVmCommand (adding network of new VM part). > >>> >>> 3. MLA - are you cloning the permissions on the VM as well, or only >>> creating an owner permission on the new entity? >>> >>> 4. MLA - what permission does one need to have on source VM/snapsot to >>> clone it? >>> if a non-owner can clone a VM/snapshot, and become owner of the new >>> entity, need to make sure no privilege escalation flows exist. >>> is the intent to share the code of clone VM with AddVm (which is what >>> clone is), with a task to clone the disks rather than create them >>> (otherwise you need to duplicate the code for quota and permission >>> handling?) >> If I understand you correctly - Cloning images commands >> (AddVmFromTemplate, cloning vm from snapshot, etc..) will invoke a >> CopyImage internal command. > > iiuc, internal commands don't perform permission checks? Correct, they do not. From iheim at redhat.com Sun Feb 26 13:19:17 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 26 Feb 2012 15:19:17 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A31AD.30007@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F4A201C.2030000@redhat.com> <4F4A27C9.70200@redhat.com> <4F4A2DDA.10504@redhat.com> <4F4A31AD.30007@redhat.com> Message-ID: <4F4A3155.5060101@redhat.com> On 02/26/2012 03:20 PM, Yair Zaslavsky wrote: ... >>>> 4. MLA - what permission does one need to have on source VM/snapsot to >>>> clone it? >>>> if a non-owner can clone a VM/snapshot, and become owner of the new >>>> entity, need to make sure no privilege escalation flows exist. 
>>>> is the intent to share the code of clone VM with AddVm (which is what >>>> clone is), with a task to clone the disks rather than create them >>>> (otherwise you need to duplicate the code for quota and permission >>>> handling?) >>> If I understand you correctly - Cloning images commands >>> (AddVmFromTemplate, cloning vm from snapshot, etc..) will invoke a >>> CopyImage internal command. >> >> iiuc, internal commands don't perform permission checks? > Correct, they do not. then how do you not duplicate checks like user is allowed to the cluster (and later, to custom properties, logical networks, shared disks, etc.) From yzaslavs at redhat.com Sun Feb 26 13:24:16 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 26 Feb 2012 15:24:16 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A3155.5060101@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F4A201C.2030000@redhat.com> <4F4A27C9.70200@redhat.com> <4F4A2DDA.10504@redhat.com> <4F4A31AD.30007@redhat.com> <4F4A3155.5060101@redhat.com> Message-ID: <4F4A3280.1040708@redhat.com> On 02/26/2012 03:19 PM, Itamar Heim wrote: > On 02/26/2012 03:20 PM, Yair Zaslavsky wrote: > ... >>>>> 4. MLA - what permission does one need to have on source VM/snapsot to >>>>> clone it? >>>>> if a non-owner can clone a VM/snapshot, and become owner of the new >>>>> entity, need to make sure no privilege escalation flows exist. >>>>> is the intent to share the code of clone VM with AddVm (which is what >>>>> clone is), with a task to clone the disks rather than create them >>>>> (otherwise you need to duplicate the code for quota and permission >>>>> handling?) >>>> If I understand you correctly - Cloning images commands >>>> (AddVmFromTemplate, cloning vm from snapshot, etc..) will invoke a >>>> CopyImage internal command. >>> >>> iiuc, internal commands don't perform permission checks? >> Correct, they do not. > > then how do you not duplicate checks like user is allowed to the cluster > (and later, to custom properties, logical networks, shared disks, etc.) Not sure if I understand - are you asking if why I'm not duplicating this from the original VM? From iheim at redhat.com Sun Feb 26 13:27:02 2012 From: iheim at redhat.com (Itamar Heim) Date: Sun, 26 Feb 2012 15:27:02 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A3280.1040708@redhat.com> References: <4F3A1614.9030405@redhat.com> <4F4A201C.2030000@redhat.com> <4F4A27C9.70200@redhat.com> <4F4A2DDA.10504@redhat.com> <4F4A31AD.30007@redhat.com> <4F4A3155.5060101@redhat.com> <4F4A3280.1040708@redhat.com> Message-ID: <4F4A3326.2070601@redhat.com> On 02/26/2012 03:24 PM, Yair Zaslavsky wrote: > On 02/26/2012 03:19 PM, Itamar Heim wrote: >> On 02/26/2012 03:20 PM, Yair Zaslavsky wrote: >> ... >>>>>> 4. MLA - what permission does one need to have on source VM/snapsot to >>>>>> clone it? >>>>>> if a non-owner can clone a VM/snapshot, and become owner of the new >>>>>> entity, need to make sure no privilege escalation flows exist. >>>>>> is the intent to share the code of clone VM with AddVm (which is what >>>>>> clone is), with a task to clone the disks rather than create them >>>>>> (otherwise you need to duplicate the code for quota and permission >>>>>> handling?) >>>>> If I understand you correctly - Cloning images commands >>>>> (AddVmFromTemplate, cloning vm from snapshot, etc..) will invoke a >>>>> CopyImage internal command. >>>> >>>> iiuc, internal commands don't perform permission checks? >>> Correct, they do not. 
>> >> then how do you not duplicate checks like user is allowed to the cluster >> (and later, to custom properties, logical networks, shared disks, etc.) > Not sure if I understand - are you asking if why I'm not duplicating > this from the original VM? > I'm asking if a non owner of the original VM can copy these, and also if you are cloning the permissions of the original VM From abaron at redhat.com Sun Feb 26 14:16:21 2012 From: abaron at redhat.com (Ayal Baron) Date: Sun, 26 Feb 2012 09:16:21 -0500 (EST) Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F3A384B.6070904@redhat.com> Message-ID: <584b6bb8-b5c1-4868-8b7d-dd566897fd7a@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 02/14/2012 11:03 AM, Yaniv Kaul wrote: > > On 02/14/2012 10:53 AM, Yair Zaslavsky wrote: > >> On 02/14/2012 10:35 AM, Yair Zaslavsky wrote: > >>> On 02/14/2012 10:29 AM, Yaniv Kaul wrote: > >>>> On 02/14/2012 10:06 AM, Yair Zaslavsky wrote: > >>>>> Hi all, > >>>>> I modified the Wiki pages of this feature: > >>>>> > >>>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot > >>>>> > >>>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot > >>>> - Missing error handling. I hope all will goes well, of course. > >> Will be added. Not sure though what can we do in case for example > >> you > >> fail to copy image N out of M , besides of course > > > > Since it's not clear that if you merge the snapshots regardless of > > the > > base image (if it's RAW), or you merge them all to one big image, > > I'm > > not sure if there are two processes here or not - I assume there > > are: > > copy and merge. Each can fail independently, and rollback is > > probably > > required? > > > >>>> - Will you be copying the disks in parallel, or serially? > >> CopyImage is an asycnrhonous verb that will be monitored by the > >> AsyncTaskManager at Engine core. > > > > Which means that if there are N disks you copy them in parallel or > > one > > by one? May make sense to do it depending on the storage domain - > > if > > it's the same for all or not, etc. An optimization, I guess. > This engine core code that is required to launch VDS command + create > a > task for monitoring it takes less time than completion of the > monitoring > itself - so the part of lauch VDS command + create task for > monitoring > is serial, but the monitoring itself is performed periodically, > according to the behavior of AsyncTaskManager. The simple answer is that disks are being copied concurrently. Yaniv, to answer a previous question you had - there is no copy + merge, it is a single operation which creates the target already collapsed (qemu-img convert) Yaniv, I'm not sure what you meant with the disk1 and disk2 being raw question. And wrt the copy being done by other hosts - unfortunately we do not yet support that. That would require separating the creation of the target volumes/files from the copy operation. > Engine-core is indifferent to whether the copies are performed > concurrently or not in VDSM. > > > > > > >> > >>>> - Too bad the disks have to be copied by the SPM. Not sure why, > >>>> really. > >>> Typo, will be fixed. > >>>> Same for the merge, which is not really mentioned where/how it's > >>>> going > >>>> to take place (VDSM-wise). > >> The copy operation will perform collapse on destination. > >> Maybe I do not understand your question here- please elaborate. > > > > Will the merge of the snapshots be done by SPM or HSM? 
> > > >> > >>>> - If the 'Disk1' , 'Disk2' are RAW, would be nice to have an > >>>> option NOT > >>>> to copy them. Especially as you have a snapshot on top of them. > >> Please elaborate on that. > > > > If you are going to merge snapshots into the base, not sure it > > needs to > > be copied first - I wonder if there's an option to collapse to a > > new > > destination. QEMU feature, I guess. > > Y. > > > >>>> Y. > >>>> > >>>>> Comments are more than welcome > >>>>> > >>>>> Kind regards, > >>>>> Yair > >>>>> > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> Engine-devel mailing list > >>>>> Engine-devel at ovirt.org > >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>> _______________________________________________ > >>> Engine-devel mailing list > >>> Engine-devel at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From abaron at redhat.com Sun Feb 26 14:38:01 2012 From: abaron at redhat.com (Ayal Baron) Date: Sun, 26 Feb 2012 09:38:01 -0500 (EST) Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F3A1614.9030405@redhat.com> Message-ID: Yair, what about import of VM more than once? ----- Original Message ----- > Hi all, > I modified the Wiki pages of this feature: > > http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot > > http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot > > Comments are more than welcome > > Kind regards, > Yair > > > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From yzaslavs at redhat.com Sun Feb 26 14:50:02 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 26 Feb 2012 16:50:02 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: References: Message-ID: <4F4A469A.2050809@redhat.com> On 02/26/2012 04:38 PM, Ayal Baron wrote: > Yair, what about import of VM more than once? Hi Ayal, We consider this as a different feature. Gilad Chaplik is the feature owner. I can think of some very similar features to this one (not just import more than once). > > ----- Original Message ----- >> Hi all, >> I modified the Wiki pages of this feature: >> >> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >> >> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >> >> Comments are more than welcome >> >> Kind regards, >> Yair >> >> >> >> _______________________________________________ >> Engine-devel mailing list >> Engine-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/engine-devel >> From abaron at redhat.com Sun Feb 26 14:55:03 2012 From: abaron at redhat.com (Ayal Baron) Date: Sun, 26 Feb 2012 09:55:03 -0500 (EST) Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A469A.2050809@redhat.com> Message-ID: <7f5e1bb8-7726-4bf0-a399-e9b9633efc54@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 02/26/2012 04:38 PM, Ayal Baron wrote: > > Yair, what about import of VM more than once? > Hi Ayal, > We consider this as a different feature. > Gilad Chaplik is the feature owner. > I can think of some very similar features to this one (not just > import > more than once). First, I couldn't find a feature page for that. 
Second, I don't really understand the difference, there are subtle differences in the flow, but it is basically the same. In fact. the only difference I can think of is that it is initiated from import and not from right click on the snapshot... What other similar features will this *not* cover? > > > > > ----- Original Message ----- > >> Hi all, > >> I modified the Wiki pages of this feature: > >> > >> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot > >> > >> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot > >> > >> Comments are more than welcome > >> > >> Kind regards, > >> Yair > >> > >> > >> > >> _______________________________________________ > >> Engine-devel mailing list > >> Engine-devel at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > >> > > From yzaslavs at redhat.com Sun Feb 26 15:02:44 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 26 Feb 2012 17:02:44 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <7f5e1bb8-7726-4bf0-a399-e9b9633efc54@zmail13.collab.prod.int.phx2.redhat.com> References: <7f5e1bb8-7726-4bf0-a399-e9b9633efc54@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <4F4A4994.9040403@redhat.com> On 02/26/2012 04:55 PM, Ayal Baron wrote: > > > ----- Original Message ----- >> On 02/26/2012 04:38 PM, Ayal Baron wrote: >>> Yair, what about import of VM more than once? >> Hi Ayal, >> We consider this as a different feature. >> Gilad Chaplik is the feature owner. >> I can think of some very similar features to this one (not just >> import >> more than once). > > First, I couldn't find a feature page for that. > Second, I don't really understand the difference, there are subtle differences in the flow, but it is basically the same. > In fact. the only difference I can think of is that it is initiated from import and not from right click on the snapshot... Gilad? > > What other similar features will this *not* cover? > > >> >>> >>> ----- Original Message ----- >>>> Hi all, >>>> I modified the Wiki pages of this feature: >>>> >>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >>>> >>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >>>> >>>> Comments are more than welcome >>>> >>>> Kind regards, >>>> Yair >>>> >>>> >>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >> >> From yzaslavs at redhat.com Sun Feb 26 15:03:33 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 26 Feb 2012 17:03:33 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A4994.9040403@redhat.com> References: <7f5e1bb8-7726-4bf0-a399-e9b9633efc54@zmail13.collab.prod.int.phx2.redhat.com> <4F4A4994.9040403@redhat.com> Message-ID: <4F4A49C5.2000908@redhat.com> On 02/26/2012 05:02 PM, Yair Zaslavsky wrote: > On 02/26/2012 04:55 PM, Ayal Baron wrote: >> >> >> ----- Original Message ----- >>> On 02/26/2012 04:38 PM, Ayal Baron wrote: >>>> Yair, what about import of VM more than once? >>> Hi Ayal, >>> We consider this as a different feature. >>> Gilad Chaplik is the feature owner. >>> I can think of some very similar features to this one (not just >>> import >>> more than once). >> >> First, I couldn't find a feature page for that. >> Second, I don't really understand the difference, there are subtle differences in the flow, but it is basically the same. >> In fact. 
the only difference I can think of is that it is initiated from import and not from right click on the snapshot... > Gilad? CC'ing Gilad on this >> >> What other similar features will this *not* cover? >> >> >>> >>>> >>>> ----- Original Message ----- >>>>> Hi all, >>>>> I modified the Wiki pages of this feature: >>>>> >>>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >>>>> >>>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >>>>> >>>>> Comments are more than welcome >>>>> >>>>> Kind regards, >>>>> Yair >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Engine-devel mailing list >>>>> Engine-devel at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>>> >>> >>> > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From yzaslavs at redhat.com Sun Feb 26 15:05:32 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Sun, 26 Feb 2012 17:05:32 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <7f5e1bb8-7726-4bf0-a399-e9b9633efc54@zmail13.collab.prod.int.phx2.redhat.com> References: <7f5e1bb8-7726-4bf0-a399-e9b9633efc54@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <4F4A4A3C.1070809@redhat.com> On 02/26/2012 04:55 PM, Ayal Baron wrote: > > > ----- Original Message ----- >> On 02/26/2012 04:38 PM, Ayal Baron wrote: >>> Yair, what about import of VM more than once? >> Hi Ayal, >> We consider this as a different feature. >> Gilad Chaplik is the feature owner. >> I can think of some very similar features to this one (not just >> import >> more than once). > > First, I couldn't find a feature page for that. > Second, I don't really understand the difference, there are subtle differences in the flow, but it is basically the same. > In fact. the only difference I can think of is that it is initiated from import and not from right click on the snapshot... > > What other similar features will this *not* cover? I can tell you that for current testing until fully integrated with snapshots modifications, I am testing it on VM which is down. Not sure we're interested in this, but here is an example of possible feature. > > >> >>> >>> ----- Original Message ----- >>>> Hi all, >>>> I modified the Wiki pages of this feature: >>>> >>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot >>>> >>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot >>>> >>>> Comments are more than welcome >>>> >>>> Kind regards, >>>> Yair >>>> >>>> >>>> >>>> _______________________________________________ >>>> Engine-devel mailing list >>>> Engine-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel >>>> >> >> From abaron at redhat.com Sun Feb 26 15:25:30 2012 From: abaron at redhat.com (Ayal Baron) Date: Sun, 26 Feb 2012 10:25:30 -0500 (EST) Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A4A3C.1070809@redhat.com> Message-ID: <36546b69-eb38-4ad1-a6d6-dbbd685946ad@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 02/26/2012 04:55 PM, Ayal Baron wrote: > > > > > > ----- Original Message ----- > >> On 02/26/2012 04:38 PM, Ayal Baron wrote: > >>> Yair, what about import of VM more than once? > >> Hi Ayal, > >> We consider this as a different feature. > >> Gilad Chaplik is the feature owner. > >> I can think of some very similar features to this one (not just > >> import > >> more than once). 
> > > > First, I couldn't find a feature page for that. > > Second, I don't really understand the difference, there are subtle > > differences in the flow, but it is basically the same. > > In fact. the only difference I can think of is that it is initiated > > from import and not from right click on the snapshot... > > > > What other similar features will this *not* cover? > I can tell you that for current testing until fully integrated with > snapshots modifications, I am testing it on VM which is down. > Not sure we're interested in this, but here is an example of possible > feature. I don't understand, what possible other feature? > > > > > > >> > >>> > >>> ----- Original Message ----- > >>>> Hi all, > >>>> I modified the Wiki pages of this feature: > >>>> > >>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot > >>>> > >>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot > >>>> > >>>> Comments are more than welcome > >>>> > >>>> Kind regards, > >>>> Yair > >>>> > >>>> > >>>> > >>>> _______________________________________________ > >>>> Engine-devel mailing list > >>>> Engine-devel at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>> > >> > >> > > From gchaplik at redhat.com Sun Feb 26 16:45:55 2012 From: gchaplik at redhat.com (Gilad Chaplik) Date: Sun, 26 Feb 2012 11:45:55 -0500 (EST) Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A49C5.2000908@redhat.com> Message-ID: Thanks, Gilad. ----- Original Message ----- > From: "Yair Zaslavsky" > To: "Ayal Baron" > Cc: engine-devel at ovirt.org, "gilad Chaplik" > Sent: Sunday, February 26, 2012 5:03:33 PM > Subject: Re: [Engine-devel] Clone VM from snapshot feature > > On 02/26/2012 05:02 PM, Yair Zaslavsky wrote: > > On 02/26/2012 04:55 PM, Ayal Baron wrote: > >> > >> > >> ----- Original Message ----- > >>> On 02/26/2012 04:38 PM, Ayal Baron wrote: > >>>> Yair, what about import of VM more than once? > >>> Hi Ayal, > >>> We consider this as a different feature. > >>> Gilad Chaplik is the feature owner. > >>> I can think of some very similar features to this one (not just > >>> import > >>> more than once). > >> > >> First, I couldn't find a feature page for that. > >> Second, I don't really understand the difference, there are subtle > >> differences in the flow, but it is basically the same. > >> In fact. the only difference I can think of is that it is > >> initiated from import and not from right click on the snapshot... > > Gilad? > CC'ing Gilad on this http://www.ovirt.org/wiki/Features/ImportMoreThanOnce > >> > >> What other similar features will this *not* cover? 
> >> > >> > >>> > >>>> > >>>> ----- Original Message ----- > >>>>> Hi all, > >>>>> I modified the Wiki pages of this feature: > >>>>> > >>>>> http://www.ovirt.org/wiki/Features/CloneVmFromSnapshot > >>>>> > >>>>> http://www.ovirt.org/wiki/Features/DetailedCloneVmFromSnapshot > >>>>> > >>>>> Comments are more than welcome > >>>>> > >>>>> Kind regards, > >>>>> Yair > >>>>> > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> Engine-devel mailing list > >>>>> Engine-devel at ovirt.org > >>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel > >>>>> > >>> > >>> > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > From abaron at redhat.com Mon Feb 27 10:06:12 2012 From: abaron at redhat.com (Ayal Baron) Date: Mon, 27 Feb 2012 05:06:12 -0500 (EST) Subject: [Engine-devel] [vdsm] [node-devel] Support for stateless nodes In-Reply-To: <1329998220.4319.133.camel@dhcp-29-7.brq.redhat.com> Message-ID: <749d6385-4ad6-4751-b5fa-2db43fb2d3e2@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > Perry Myers p??e v St 22. 02. 2012 v 11:54 -0500: > > >> As answered in the other response, there are kernel command line > > >> parameters to set the management_server. Since this will likely > > >> be in a > > >> pxe environment, setting the pxe profile to include > > >> management_server= should be fine. > > >> > > > I agree it's a valid solution as long as you assume this is > > > relevant > > > for PXE only use case. > > > > Not necessarily... > > > > Take the ISO/USB Stick and you can embed the kargs into the ISO/USB > > itself so that it always boots with that mgmt server arg > > > > This actually also enables use of 'stateless' combined with static > > IP > > addressing as well. As you can create a USB Stick and embed the > > kargs > > for the NIC configuration, rsyslog config, etc, etc. > > > > >> Another solution could be to setup a specific DNS SRV record > > >> that points > > >> to the ovirt-engine and have node automatically query that for > > >> the > > >> location. > > > This was discussed in the past and for some reason not > > > implemented. > > > > Concerns about security, iirc. Assumption that someone could > > hijack the > > DNS SRV record and provide a man-in-the-middle oVirt Engine server. > > > > What about DNSSEC validation for DNS records in node? This will require more than just changes to the registration process and it's quite difficult to track the required changes here on email. Let's setup a call to discuss this and try to capture the list of issues we already know about (I'm sure we'll discover more once we actually try to do this). To play devil's advocate though, I know there is interest, but I really don't understand the incentive. What is the *problem* you're trying to solve here (stateless is a solution) > > David > > > If you're paranoid about security, don't use DNS SRV of course, > > instead > > use hardcoded kargs as described above. 
But for some DNS SRV might > > be > > an ok option > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > -- > > David Ja?a, RHCE > > SPICE QE based in Brno > GPG Key: 22C33E24 > Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 > > > > _______________________________________________ > vdsm-devel mailing list > vdsm-devel at lists.fedorahosted.org > https://fedorahosted.org/mailman/listinfo/vdsm-devel > From rgolan at redhat.com Mon Feb 27 13:45:27 2012 From: rgolan at redhat.com (Roy Golan) Date: Mon, 27 Feb 2012 08:45:27 -0500 (EST) Subject: [Engine-devel] network - UI Sync meeting Message-ID: The following meeting has been modified: Subject: network - UI Sync meeting Organizer: "Roy Golan" Location: "Asia-tlv" [MODIFIED] Resources: asia-tlv at redhat.com [MODIFIED] Time: Monday, February 27, 2012, 4:00:00 PM - 5:00:00 PM GMT +02:00 Jerusalem Invitees: mkenneth at redhat.com; sgrinber at redhat.com; lpeer at redhat.com; dfediuck at redhat.com; drankevi at redhat.com; ecohen at redhat.com; iheim at redhat.com; ovedo at redhat.com; acathrow at redhat.com; engine-devel at ovirt.org; kroberts at redhat.com ... *~*~*~*~*~*~*~*~*~* Follow-up meeting on setup networks UI. issues to follow: 1. can VDSM attach many non-vlan and many vlan networks to a single nic? (Dan - please reply if its doable) 2. if yes is the UI breakdown of vlan/non-vlan is probably not necessary? open issues: 1.should we use "VmNetwork"? (or "allow/able to run VMs" you name it) would it be a DC or a Cluster property? 2.should we implicitly set bridge/bridgeless when attaching a network with setupnetworks? 3.nickless networks - was that planned for this version? VDSM support it already but we are missing the UI and Backend for it. Bridge ID: 1814335863 https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=1814335863 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 5733 bytes Desc: not available URL: From shaharh at redhat.com Mon Feb 27 14:19:02 2012 From: shaharh at redhat.com (Shahar Havivi) Date: Mon, 27 Feb 2012 16:19:02 +0200 Subject: [Engine-devel] VM Payload feature In-Reply-To: <20120123151223.GB2300@redhat.com> References: <20120123150728.GA2300@redhat.com> <20120123151223.GB2300@redhat.com> Message-ID: <20120227141901.GF22174@redhat.com> We encounter a problem with persisting the content to engine database (we don't want to save the file the database). There are some solution for that: 1. we do want to persist files to the database (with size limitation). 2. engine can expect file path (nfs or http) and persist only the file url 3. we can use this feature in run-once only hence no persistence is needed. Thoughts, other ideas? 
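As a rough sketch of what the size guard in option 1 might look like (VmPayloadValidator, PAYLOAD_SIZE_LIMIT and the 16K figure are illustrative assumptions borrowed from the EC2 limit discussed later in this thread, not actual engine code):

    // Illustrative only -- class and constant names are assumptions, not the engine's.
    import java.nio.charset.Charset;

    public class VmPayloadValidator {

        // EC2 caps user-data at 16K before base64 encoding; a similar cap would keep
        // payloads small enough to persist in the engine database (option 1 above).
        private static final int PAYLOAD_SIZE_LIMIT = 16 * 1024;

        // Returns true if the payload is small enough to store in the DB; callers
        // would otherwise reject the request or fall back to an external URL (option 2).
        public static boolean isPersistable(String payloadContent) {
            if (payloadContent == null) {
                return true; // nothing to store
            }
            byte[] raw = payloadContent.getBytes(Charset.forName("UTF-8"));
            return raw.length <= PAYLOAD_SIZE_LIMIT;
        }
    }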
On 23.01.12 17:12, Shahar Havivi wrote: > On 23.01.12 10:11, Andrew Cathrow wrote: > > > > > > ----- Original Message ----- > > > From: "Shahar Havivi" > > > To: "Joseph VLcek" > > > Cc: "Oved Ourfalli" , engine-devel at ovirt.org, "Michal Fojtik" , "David > > > Lutterkort" > > > Sent: Monday, January 23, 2012 10:07:30 AM > > > Subject: Re: [Engine-devel] VM Payload feature > > > > > > On 23.01.12 09:39, Joseph VLcek wrote: > > > > > > > > On Jan 22, 2012, at 3:09 AM, Oved Ourfalli wrote: > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > >> From: "Ayal Baron" > > > > >> To: "Oved Ourfalli" > > > > >> Cc: engine-devel at ovirt.org > > > > >> Sent: Thursday, January 19, 2012 4:05:08 PM > > > > >> Subject: Re: [Engine-devel] VM Payload feature > > > > >> > > > > >> > > > > >> > > > > >> ----- Original Message ----- > > > > >>> Hey all, > > > > >>> > > > > >>> Continuing the discussion about Aeolus instance data injection > > > > >>> to a > > > > >>> VM > > > > >>> (http://lists.ovirt.org/pipermail/engine-devel/2012-January/000423.html) > > > > >>> we propose a new VM Payload feature. > > > > >>> > > > > >>> The following wiki page contains a description page of the > > > > >>> feature. > > > > >>> http://www.ovirt.org/wiki/Features/VMPayload > > > > >>> > > > > >>> Please read and review. > > > > >>> There are several approaches there, and we wish to head your > > > > >>> opinions > > > > >>> and thoughts about them. > > > > >>> > > > > >>> Once we agree on an approach, we will start designing. > > > > >> > > > > >> Permanent payload availability requires determining where the > > > > >> payload > > > > >> is stored. > > > > >> Makes sense to me to store it together with the VM disks on the > > > > >> storage domain, but that requires the small object store which > > > > >> will > > > > >> not be available in the coming version (payloads can be large > > > > >> and > > > > >> keeping them in the DB and passing over the net every time the > > > > >> VM is > > > > >> run doesn't make much sense). > > > > >> > > > > > I guess we can start with storing it in the database, with some > > > > > size limitation, and move it to the storage domain later on. > > > > > > > > > >> Wrt availability, I don't see a reason to exclude attaching both > > > > >> a CD > > > > >> and a payload via another CD at the same time (i.e. multiple > > > > >> devices). > > > > >> > > > > >>> > > > > >>> Thank you, > > > > >>> Oved > > > > >>> _______________________________________________ > > > > >>> Engine-devel mailing list > > > > >>> Engine-devel at ovirt.org > > > > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > >>> > > > > >> _______________________________________________ > > > > >> Engine-devel mailing list > > > > >> Engine-devel at ovirt.org > > > > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > >> > > > > > > > > > > > > > > > > My perspective is that of the end user, the instance retrieving the > > > > data. > > > > > > > > From a functional standpoint I would like to see similar > > > > performance to > > > > what EC2 provides. AWS EC2 user data is limited to 16K. This limit > > > > applies to the data in raw form, not base64 encoded form. 
> > > > see: > > > > http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/instancedata-data-categories.html > > > > > > > > I am concerned about the 512k limit as mentioned in the notes > > > > of: http://www.ovirt.org/wiki/Features/VMPayload > > > > "if the content of the file is bigger the 512K it will pass an nfs > > > > share for vdsm to fetch the file/s" > > > > > > > > Please confirm: > > > > - Will it be possible to pass user data to larger than 512k? > > > > - If so what will the instance need to do in order to retrieve > > > > user-data bigger than 512k. > > > > - What will the MAX size supported for the user-data? > > > 512k is a suggestion, > > > we don't want to embed large files in the verb that ovirt calls vdsm, > > > instead > > > if it bigger then 512k/1M we will pass urls/nfs path of the files and > > > VDSM > > > will add them to the iso file. > > > there is not limitation of size... > > > > If we're talking about URLs keep in mind SELinux restrictions (eg. passing a URL to a HTTP hosted IS will be blocked, iirc) > right, > its proffered to be common share directory > > > > > > > > > > Thank you. > > > > Joe VLcek > > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel From yzaslavs at redhat.com Mon Feb 27 14:49:49 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Mon, 27 Feb 2012 16:49:49 +0200 Subject: [Engine-devel] Do we need ImagesToRemoveList in RemoveImageParameters? Message-ID: <4F4B980D.8070701@redhat.com> Hi all, I see the getter code for this field is not used by Engine-Core. We perform removal of the image specified by the imageId parameter. Can we remove this parameter? Yair From danken at redhat.com Mon Feb 27 15:45:11 2012 From: danken at redhat.com (Dan Kenigsberg) Date: Mon, 27 Feb 2012 17:45:11 +0200 Subject: [Engine-devel] VM Payload feature In-Reply-To: <20120227141901.GF22174@redhat.com> References: <20120123150728.GA2300@redhat.com> <20120123151223.GB2300@redhat.com> <20120227141901.GF22174@redhat.com> Message-ID: <20120227154510.GF4730@redhat.com> On Mon, Feb 27, 2012 at 04:19:02PM +0200, Shahar Havivi wrote: > We encounter a problem with persisting the content to engine database (we don't > want to save the file the database). > > There are some solution for that: > 1. we do want to persist files to the database (with size limitation). > 2. engine can expect file path (nfs or http) and persist only the file url > 3. we can use this feature in run-once only hence no persistence is needed. > > Thoughts, other ideas? Let's re-ask the question about the (Engine) API: Do we need the payload to be passed on VM definition? Or is it enough to pass it on VM startup? 
> > > On 23.01.12 17:12, Shahar Havivi wrote: > > On 23.01.12 10:11, Andrew Cathrow wrote: > > > > > > > > > ----- Original Message ----- > > > > From: "Shahar Havivi" > > > > To: "Joseph VLcek" > > > > Cc: "Oved Ourfalli" , engine-devel at ovirt.org, "Michal Fojtik" , "David > > > > Lutterkort" > > > > Sent: Monday, January 23, 2012 10:07:30 AM > > > > Subject: Re: [Engine-devel] VM Payload feature > > > > > > > > On 23.01.12 09:39, Joseph VLcek wrote: > > > > > > > > > > On Jan 22, 2012, at 3:09 AM, Oved Ourfalli wrote: > > > > > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > >> From: "Ayal Baron" > > > > > >> To: "Oved Ourfalli" > > > > > >> Cc: engine-devel at ovirt.org > > > > > >> Sent: Thursday, January 19, 2012 4:05:08 PM > > > > > >> Subject: Re: [Engine-devel] VM Payload feature > > > > > >> > > > > > >> > > > > > >> > > > > > >> ----- Original Message ----- > > > > > >>> Hey all, > > > > > >>> > > > > > >>> Continuing the discussion about Aeolus instance data injection > > > > > >>> to a > > > > > >>> VM > > > > > >>> (http://lists.ovirt.org/pipermail/engine-devel/2012-January/000423.html) > > > > > >>> we propose a new VM Payload feature. > > > > > >>> > > > > > >>> The following wiki page contains a description page of the > > > > > >>> feature. > > > > > >>> http://www.ovirt.org/wiki/Features/VMPayload > > > > > >>> > > > > > >>> Please read and review. > > > > > >>> There are several approaches there, and we wish to head your > > > > > >>> opinions > > > > > >>> and thoughts about them. > > > > > >>> > > > > > >>> Once we agree on an approach, we will start designing. > > > > > >> > > > > > >> Permanent payload availability requires determining where the > > > > > >> payload > > > > > >> is stored. > > > > > >> Makes sense to me to store it together with the VM disks on the > > > > > >> storage domain, but that requires the small object store which > > > > > >> will > > > > > >> not be available in the coming version (payloads can be large > > > > > >> and > > > > > >> keeping them in the DB and passing over the net every time the > > > > > >> VM is > > > > > >> run doesn't make much sense). > > > > > >> > > > > > > I guess we can start with storing it in the database, with some > > > > > > size limitation, and move it to the storage domain later on. > > > > > > > > > > > >> Wrt availability, I don't see a reason to exclude attaching both > > > > > >> a CD > > > > > >> and a payload via another CD at the same time (i.e. multiple > > > > > >> devices). > > > > > >> > > > > > >>> > > > > > >>> Thank you, > > > > > >>> Oved > > > > > >>> _______________________________________________ > > > > > >>> Engine-devel mailing list > > > > > >>> Engine-devel at ovirt.org > > > > > >>> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > >>> > > > > > >> _______________________________________________ > > > > > >> Engine-devel mailing list > > > > > >> Engine-devel at ovirt.org > > > > > >> http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > >> > > > > > > > > > > > > > > > > > > > > My perspective is that of the end user, the instance retrieving the > > > > > data. > > > > > > > > > > From a functional standpoint I would like to see similar > > > > > performance to > > > > > what EC2 provides. AWS EC2 user data is limited to 16K. This limit > > > > > applies to the data in raw form, not base64 encoded form. 
> > > > > see: > > > > > http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/instancedata-data-categories.html > > > > > > > > > > I am concerned about the 512k limit as mentioned in the notes > > > > > of: http://www.ovirt.org/wiki/Features/VMPayload > > > > > "if the content of the file is bigger the 512K it will pass an nfs > > > > > share for vdsm to fetch the file/s" > > > > > > > > > > Please confirm: > > > > > - Will it be possible to pass user data to larger than 512k? > > > > > - If so what will the instance need to do in order to retrieve > > > > > user-data bigger than 512k. > > > > > - What will the MAX size supported for the user-data? > > > > 512k is a suggestion, > > > > we don't want to embed large files in the verb that ovirt calls vdsm, > > > > instead > > > > if it bigger then 512k/1M we will pass urls/nfs path of the files and > > > > VDSM > > > > will add them to the iso file. > > > > there is not limitation of size... > > > > > > If we're talking about URLs keep in mind SELinux restrictions (eg. passing a URL to a HTTP hosted IS will be blocked, iirc) > > right, > > its proffered to be common share directory > > > > > > > > > > > > > Thank you. > > > > > Joe VLcek > > > > > > > > > _______________________________________________ > > > > Engine-devel mailing list > > > > Engine-devel at ovirt.org > > > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > > > > > > _______________________________________________ > > > Engine-devel mailing list > > > Engine-devel at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/engine-devel From iheim at redhat.com Mon Feb 27 16:22:29 2012 From: iheim at redhat.com (Itamar Heim) Date: Mon, 27 Feb 2012 18:22:29 +0200 Subject: [Engine-devel] VM Payload feature In-Reply-To: <20120227154510.GF4730@redhat.com> References: <20120123150728.GA2300@redhat.com> <20120123151223.GB2300@redhat.com> <20120227141901.GF22174@redhat.com> <20120227154510.GF4730@redhat.com> Message-ID: <4F4BADC5.9090206@redhat.com> On 02/27/2012 05:45 PM, Dan Kenigsberg wrote: > On Mon, Feb 27, 2012 at 04:19:02PM +0200, Shahar Havivi wrote: >> We encounter a problem with persisting the content to engine database (we don't >> want to save the file the database). >> >> There are some solution for that: >> 1. we do want to persist files to the database (with size limitation). >> 2. engine can expect file path (nfs or http) and persist only the file url >> 3. we can use this feature in run-once only hence no persistence is needed. i think size limitation should be ok. looking forward to next phase of supporting the cloud based api for guests which david mentioned earlier in the thread, vm payload should allow passing things like guest keypair files. looking at the API[1] david mentioned, i didn't notice files other than keypair's, but i may have missed them. the rest of the API seems to be based on data that can be retrieved from the DB. David - any more insight as to when/how files are needed? (btw, can you please remind where in the metadata do you pass extra information aelous needs?) >> >> Thoughts, other ideas? > > Let's re-ask the question about the (Engine) API: Do we need the payload > to be passed on VM definition? Or is it enough to pass it on VM startup? my view is vm definition, not only startup. 
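For context on the guest side of the instance-data API referenced in [1] below: a cloud-init style tool simply reads the opaque user-data over HTTP from a well-known link-local address. The sketch below assumes an EC2-compatible endpoint is exposed to the guest, which is exactly what is still open for oVirt here, so the URL is an assumption, not an existing oVirt interface.

    // Guest-side sketch: fetch user-data from an EC2-compatible instance-data service.
    // The link-local address is EC2's convention; oVirt exposing such an endpoint is an
    // assumption for illustration only.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class UserDataClient {

        private static final String USER_DATA_URL = "http://169.254.169.254/latest/user-data";

        public static String fetchUserData() throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(USER_DATA_URL).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line).append('\n');
                }
            } finally {
                reader.close();
            }
            // The content is opaque to the platform; the guest interprets it (e.g. cloud-init).
            return body.toString();
        }
    }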
[1] http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html From iheim at redhat.com Mon Feb 27 19:09:31 2012 From: iheim at redhat.com (Itamar Heim) Date: Mon, 27 Feb 2012 21:09:31 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4A4A3C.1070809@redhat.com> References: <7f5e1bb8-7726-4bf0-a399-e9b9633efc54@zmail13.collab.prod.int.phx2.redhat.com> <4F4A4A3C.1070809@redhat.com> Message-ID: <4F4BD4EB.5000904@redhat.com> On 02/26/2012 05:05 PM, Yair Zaslavsky wrote: > On 02/26/2012 04:55 PM, Ayal Baron wrote: >> >> >> ----- Original Message ----- >>> On 02/26/2012 04:38 PM, Ayal Baron wrote: >>>> Yair, what about import of VM more than once? >>> Hi Ayal, >>> We consider this as a different feature. >>> Gilad Chaplik is the feature owner. >>> I can think of some very similar features to this one (not just >>> import >>> more than once). >> >> First, I couldn't find a feature page for that. >> Second, I don't really understand the difference, there are subtle differences in the flow, but it is basically the same. >> In fact. the only difference I can think of is that it is initiated from import and not from right click on the snapshot... >> >> What other similar features will this *not* cover? > I can tell you that for current testing until fully integrated with > snapshots modifications, I am testing it on VM which is down. > Not sure we're interested in this, but here is an example of possible > feature. I think Ayal point/question is similar to mine - why at general code level (obviously there are implementation differences) and user experience wise, the following aren't similar: [AddVmFromBlank] AddVmFromTemplate AddVmFromVm AddVmFromSnapshot AddVmFromImportCandidate[1] [1] yes, there is a difference if you select more than a single import candidate (adding multiple VMs), but that's actually at UI level, not backend implementation. From lutter at redhat.com Mon Feb 27 19:14:01 2012 From: lutter at redhat.com (David Lutterkort) Date: Mon, 27 Feb 2012 11:14:01 -0800 Subject: [Engine-devel] VM Payload feature In-Reply-To: <4F4BADC5.9090206@redhat.com> References: <20120123150728.GA2300@redhat.com> <20120123151223.GB2300@redhat.com> <20120227141901.GF22174@redhat.com> <20120227154510.GF4730@redhat.com> <4F4BADC5.9090206@redhat.com> Message-ID: <1330370041.2540.17.camel@avon.watzmann.net> On Mon, 2012-02-27 at 18:22 +0200, Itamar Heim wrote: > On 02/27/2012 05:45 PM, Dan Kenigsberg wrote: > > On Mon, Feb 27, 2012 at 04:19:02PM +0200, Shahar Havivi wrote: > >> We encounter a problem with persisting the content to engine database (we don't > >> want to save the file the database). > >> > >> There are some solution for that: > >> 1. we do want to persist files to the database (with size limitation). > >> 2. engine can expect file path (nfs or http) and persist only the file url > >> 3. we can use this feature in run-once only hence no persistence is needed. > > i think size limitation should be ok. Yes, this only needs to allow for a small amount of data; EC2 limits it to 16k (though they further limit it to 2k in certain circumstances[1]) Other providers have similar limits. In general, people pass only a few hundred bytes in in most cases, so a limit of 16k is plenty. > looking forward to next phase of supporting the cloud based api for > guests which david mentioned earlier in the thread, vm payload should > allow passing things like guest keypair files. 
> looking at the API[1] david mentioned, i didn't notice files other than > keypair's, but i may have missed them. > the rest of the API seems to be based on data that can be retrieved from > the DB. > David - any more insight as to when/how files are needed? A good thing to look at is cloud-init; it's a widely used tool for early initialization of cloud instances. Yes, by far the most important thing it does is retrieve the public key from the meta-data server and stuffs it into $USER/.ssh/authroized_keys (USER is either root or sth like ec2-user, depending on the image) > (btw, can you please remind where in the metadata do you pass extra > information aelous needs?) For Aeolus, it passes all the information it needs to talk to its config server and pull down additional things as the user-data. The format is completely custom, and the cloud platform only needs to make sure that the data that's passed in through the API shows up in the instance - it should be completely opaque to the oVirt. > >> > >> Thoughts, other ideas? > > > > Let's re-ask the question about the (Engine) API: Do we need the payload > > to be passed on VM definition? Or is it enough to pass it on VM startup? > > my view is vm definition, not only startup. For a cloud, you generally want to launch multiple VM's off the same image, and therefore need to be able to specify the user data for each VM. Whether that happens when you define or launch the VM isn't all that important, though it would be a little friendlier to users to do it at VM start, since they then do not need to have the user data ready (or know it) when they define a VM. David [1] https://forums.aws.amazon.com/thread.jspa?threadID=74488 [2] https://help.ubuntu.com/community/CloudInit From iheim at redhat.com Mon Feb 27 19:17:02 2012 From: iheim at redhat.com (Itamar Heim) Date: Mon, 27 Feb 2012 21:17:02 +0200 Subject: [Engine-devel] VM Payload feature In-Reply-To: <1330370041.2540.17.camel@avon.watzmann.net> References: <20120123150728.GA2300@redhat.com> <20120123151223.GB2300@redhat.com> <20120227141901.GF22174@redhat.com> <20120227154510.GF4730@redhat.com> <4F4BADC5.9090206@redhat.com> <1330370041.2540.17.camel@avon.watzmann.net> Message-ID: <4F4BD6AE.9060801@redhat.com> On 02/27/2012 09:14 PM, David Lutterkort wrote: > On Mon, 2012-02-27 at 18:22 +0200, Itamar Heim wrote: >> On 02/27/2012 05:45 PM, Dan Kenigsberg wrote: >>> On Mon, Feb 27, 2012 at 04:19:02PM +0200, Shahar Havivi wrote: >>>> We encounter a problem with persisting the content to engine database (we don't >>>> want to save the file the database). >>>> >>>> There are some solution for that: >>>> 1. we do want to persist files to the database (with size limitation). >>>> 2. engine can expect file path (nfs or http) and persist only the file url >>>> 3. we can use this feature in run-once only hence no persistence is needed. >> >> i think size limitation should be ok. > > Yes, this only needs to allow for a small amount of data; EC2 limits it > to 16k (though they further limit it to 2k in certain circumstances[1]) > Other providers have similar limits. In general, people pass only a few > hundred bytes in in most cases, so a limit of 16k is plenty. > >> looking forward to next phase of supporting the cloud based api for >> guests which david mentioned earlier in the thread, vm payload should >> allow passing things like guest keypair files. >> looking at the API[1] david mentioned, i didn't notice files other than >> keypair's, but i may have missed them. 
>> the rest of the API seems to be based on data that can be retrieved from >> the DB. >> David - any more insight as to when/how files are needed? > > A good thing to look at is cloud-init; it's a widely used tool for early > initialization of cloud instances. Yes, by far the most important thing > it does is retrieve the public key from the meta-data server and stuffs > it into $USER/.ssh/authroized_keys (USER is either root or sth like > ec2-user, depending on the image) > >> (btw, can you please remind where in the metadata do you pass extra >> information aelous needs?) > > For Aeolus, it passes all the information it needs to talk to its config > server and pull down additional things as the user-data. The format is > completely custom, and the cloud platform only needs to make sure that > the data that's passed in through the API shows up in the instance - it > should be completely opaque to the oVirt. can you give a pointer how/where in the EC2 api this would appear under? > >>>> >>>> Thoughts, other ideas? >>> >>> Let's re-ask the question about the (Engine) API: Do we need the payload >>> to be passed on VM definition? Or is it enough to pass it on VM startup? >> >> my view is vm definition, not only startup. > > For a cloud, you generally want to launch multiple VM's off the same > image, and therefore need to be able to specify the user data for each > VM. Whether that happens when you define or launch the VM isn't all that > important, though it would be a little friendlier to users to do it at > VM start, since they then do not need to have the user data ready (or > know it) when they define a VM. well, true for cloud when image start is also creating the instance, while in ovirt, at least for now, you need to create the VM then launch it. i agree you need to be able to pass this at VM start, but considering VM are mostly stateful, i don't see why you wouldn't let user persist this like the VM is presistent. > > David > > [1] https://forums.aws.amazon.com/thread.jspa?threadID=74488 > [2] https://help.ubuntu.com/community/CloudInit > > From pmyers at redhat.com Mon Feb 27 19:38:44 2012 From: pmyers at redhat.com (Perry Myers) Date: Mon, 27 Feb 2012 14:38:44 -0500 Subject: [Engine-devel] [node-devel] [vdsm] Support for stateless nodes In-Reply-To: <749d6385-4ad6-4751-b5fa-2db43fb2d3e2@zmail13.collab.prod.int.phx2.redhat.com> References: <749d6385-4ad6-4751-b5fa-2db43fb2d3e2@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: <4F4BDBC4.3020103@redhat.com> On 02/27/2012 05:06 AM, Ayal Baron wrote: > > > ----- Original Message ----- >> Perry Myers p??e v St 22. 02. 2012 v 11:54 -0500: >>>>> As answered in the other response, there are kernel command line >>>>> parameters to set the management_server. Since this will likely >>>>> be in a >>>>> pxe environment, setting the pxe profile to include >>>>> management_server= should be fine. >>>>> >>>> I agree it's a valid solution as long as you assume this is >>>> relevant >>>> for PXE only use case. >>> >>> Not necessarily... >>> >>> Take the ISO/USB Stick and you can embed the kargs into the ISO/USB >>> itself so that it always boots with that mgmt server arg >>> >>> This actually also enables use of 'stateless' combined with static >>> IP >>> addressing as well. As you can create a USB Stick and embed the >>> kargs >>> for the NIC configuration, rsyslog config, etc, etc. 
>>> >>>>> Another solution could be to setup a specific DNS SRV record >>>>> that points >>>>> to the ovirt-engine and have node automatically query that for >>>>> the >>>>> location. >>>> This was discussed in the past and for some reason not >>>> implemented. >>> >>> Concerns about security, iirc. Assumption that someone could >>> hijack the >>> DNS SRV record and provide a man-in-the-middle oVirt Engine server. >>> >> >> What about DNSSEC validation for DNS records in node? > > This will require more than just changes to the registration process > and it's quite difficult to track the required changes here on email. > Let's setup a call to discuss this and try to capture the list of > issues we already know about (I'm sure we'll discover more once we > actually try to do this). I'm fine with setting up a call as long as we publish the dial in info on list so that folks in the community can join in if they are interested. Also, stateless design has been a topic on the oVirt Node weekly IRC meeting for the last few weeks as we flesh out design issues, etc. I'd be happy for folks from other teams to join in there > To play devil's advocate though, I know there is interest, but I > really don't understand the incentive. > What is the *problem* you're trying to solve here (stateless is a solution) The primary motivation is diskless nodes. Being able to purchase mass quantities of servers and not needing to put any sort of storage in them. Since disks have much lower MTBF than other components (aside from fans probably), diskless servers require less maintenance. As mentioned earlier in this thread, diskless would mean no swap and no local storage. Both of which are suitable if you're using SAN (FC/iSCSI) based storage for VM images and if memory overcommit is not allowed. The argument for 'stateless without diskless' is a little more tenuous, however, if you're going to do stateless to get support for diskless, adding in support for local VM image storage and swap is fairly trivial. Andy, I think you've got some opinions here, care to weigh in? Otherwise if there is continued pushback on this feature, I'm happy to not waste effort on it. There are other things we can work on :) Perry From yzaslavs at redhat.com Mon Feb 27 19:42:11 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Mon, 27 Feb 2012 21:42:11 +0200 Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4BD4EB.5000904@redhat.com> References: <7f5e1bb8-7726-4bf0-a399-e9b9633efc54@zmail13.collab.prod.int.phx2.redhat.com> <4F4A4A3C.1070809@redhat.com> <4F4BD4EB.5000904@redhat.com> Message-ID: <4F4BDC93.9070603@redhat.com> On 02/27/2012 09:09 PM, Itamar Heim wrote: > On 02/26/2012 05:05 PM, Yair Zaslavsky wrote: >> On 02/26/2012 04:55 PM, Ayal Baron wrote: >>> >>> >>> ----- Original Message ----- >>>> On 02/26/2012 04:38 PM, Ayal Baron wrote: >>>>> Yair, what about import of VM more than once? >>>> Hi Ayal, >>>> We consider this as a different feature. >>>> Gilad Chaplik is the feature owner. >>>> I can think of some very similar features to this one (not just >>>> import >>>> more than once). >>> >>> First, I couldn't find a feature page for that. >>> Second, I don't really understand the difference, there are subtle >>> differences in the flow, but it is basically the same. >>> In fact. the only difference I can think of is that it is initiated >>> from import and not from right click on the snapshot... >>> >>> What other similar features will this *not* cover? 
>> I can tell you that for current testing until fully integrated with >> snapshots modifications, I am testing it on VM which is down. >> Not sure we're interested in this, but here is an example of possible >> feature. > > I think Ayal point/question is similar to mine - why at general code > level (obviously there are implementation differences) and user > experience wise, the following aren't similar: > [AddVmFromBlank] > AddVmFromTemplate > AddVmFromVm > AddVmFromSnapshot > AddVmFromImportCandidate[1] Of course there is shared code - for example, in the class diagram I presented for Clone VM from Snapshot, it can clearly be seen that there is code reuse, and there are two paths of "image-creation" (actually, there is of course also a 3rd path of 'createImage' verb) - path for snapshot and path for copyImage - In addition, the code of the "addVmXXXCommand" usually extends AddVmCommand, and of course introduces some changes. In addition , differences may occur at MLA (for example, for clone-vm-from-snapshot, and maybe other future flows I'm considering of adding a new action-group for CloneVm to the code - I'll update the wiki shortly on this), and of course UI-wise. So I hope that now I gave more explanations on where the code is similar and where its different (actually, I would like to point that in my changes, I'm striving to introduce more code-reuse). Yair > > > [1] yes, there is a difference if you select more than a single import > candidate (adding multiple VMs), but that's actually at UI level, not > backend implementation. > From lutter at redhat.com Mon Feb 27 19:44:51 2012 From: lutter at redhat.com (David Lutterkort) Date: Mon, 27 Feb 2012 11:44:51 -0800 Subject: [Engine-devel] VM Payload feature In-Reply-To: <4F4BD6AE.9060801@redhat.com> References: <20120123150728.GA2300@redhat.com> <20120123151223.GB2300@redhat.com> <20120227141901.GF22174@redhat.com> <20120227154510.GF4730@redhat.com> <4F4BADC5.9090206@redhat.com> <1330370041.2540.17.camel@avon.watzmann.net> <4F4BD6AE.9060801@redhat.com> Message-ID: <1330371891.2540.21.camel@avon.watzmann.net> On Mon, 2012-02-27 at 21:17 +0200, Itamar Heim wrote: > On 02/27/2012 09:14 PM, David Lutterkort wrote: > > On Mon, 2012-02-27 at 18:22 +0200, Itamar Heim wrote: > >> On 02/27/2012 05:45 PM, Dan Kenigsberg wrote: > >>> On Mon, Feb 27, 2012 at 04:19:02PM +0200, Shahar Havivi wrote: > >>>> > >>> Let's re-ask the question about the (Engine) API: Do we need the payload > >>> to be passed on VM definition? Or is it enough to pass it on VM startup? > >> > >> my view is vm definition, not only startup. > > > > For a cloud, you generally want to launch multiple VM's off the same > > image, and therefore need to be able to specify the user data for each > > VM. Whether that happens when you define or launch the VM isn't all that > > important, though it would be a little friendlier to users to do it at > > VM start, since they then do not need to have the user data ready (or > > know it) when they define a VM. > > well, true for cloud when image start is also creating the instance, > while in ovirt, at least for now, you need to create the VM then launch it. > i agree you need to be able to pass this at VM start, but considering VM > are mostly stateful, i don't see why you wouldn't let user persist this > like the VM is presistent. The point I was making is that for cloud you want a notion of an image (template) that's one-to-many with the VM's launched off of it, no matter whether the VM's themselves are stateful or stateless. 
The important thing is that user data is associated to the VM, not the image/template. David From abaron at redhat.com Mon Feb 27 22:16:49 2012 From: abaron at redhat.com (Ayal Baron) Date: Mon, 27 Feb 2012 17:16:49 -0500 (EST) Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <4F4BDC93.9070603@redhat.com> Message-ID: <918b602c-b136-48de-a76f-721510d40164@zmail13.collab.prod.int.phx2.redhat.com> ----- Original Message ----- > On 02/27/2012 09:09 PM, Itamar Heim wrote: > > On 02/26/2012 05:05 PM, Yair Zaslavsky wrote: > >> On 02/26/2012 04:55 PM, Ayal Baron wrote: > >>> > >>> > >>> ----- Original Message ----- > >>>> On 02/26/2012 04:38 PM, Ayal Baron wrote: > >>>>> Yair, what about import of VM more than once? > >>>> Hi Ayal, > >>>> We consider this as a different feature. > >>>> Gilad Chaplik is the feature owner. > >>>> I can think of some very similar features to this one (not just > >>>> import > >>>> more than once). > >>> > >>> First, I couldn't find a feature page for that. > >>> Second, I don't really understand the difference, there are > >>> subtle > >>> differences in the flow, but it is basically the same. > >>> In fact. the only difference I can think of is that it is > >>> initiated > >>> from import and not from right click on the snapshot... > >>> > >>> What other similar features will this *not* cover? > >> I can tell you that for current testing until fully integrated > >> with > >> snapshots modifications, I am testing it on VM which is down. > >> Not sure we're interested in this, but here is an example of > >> possible > >> feature. > > > > I think Ayal point/question is similar to mine - why at general > > code > > level (obviously there are implementation differences) and user > > experience wise, the following aren't similar: > > [AddVmFromBlank] > > AddVmFromTemplate > > AddVmFromVm > > AddVmFromSnapshot > > AddVmFromImportCandidate[1] > > Of course there is shared code - for example, in the class diagram I > presented for Clone VM from Snapshot, it can clearly be seen that > there > is code reuse, and there are two paths of "image-creation" (actually, > there is of course also a 3rd path of 'createImage' verb) - path for > snapshot and path for copyImage - In addition, the code of the > "addVmXXXCommand" usually extends AddVmCommand, and of course > introduces > some changes. > In addition , differences may occur at MLA (for example, for > clone-vm-from-snapshot, and maybe other future flows I'm considering > of > adding a new action-group for CloneVm to the code - I'll update the > wiki > shortly on this), and of course UI-wise. > So I hope that now I gave more explanations on where the code is > similar > and where its different (actually, I would like to point that in my > changes, I'm striving to introduce more code-reuse). The point is not about code reuse. It is about the above being the same functionality from the user perspective with a few nuances. It could actually be implemented in different ways but it's still not multiple features. Which also means that user experience should be almost the same in all these scenarios, which is why they should derive from the same point (design wise not implementation wise). 
Today as you mentioned there are 2 different low level commands for achieving the above, but in fact going forward in the new image API, creating a new image whether it is based on an existing image or not and whether it is optimized for performance (collapse) or optimized for space (qcow2 / something else) would be a single command. Somthing like: createImage size [source] [perf/space] So again, code wise you could in fact be invoking different commands (so no code reuse), but the user doesn't care that underlying it is different flows, it is the same operation - creating a new image. everything else is a nuance. > > Yair > > > > > > > [1] yes, there is a difference if you select more than a single > > import > > candidate (adding multiple VMs), but that's actually at UI level, > > not > > backend implementation. > > > > From mkenneth at redhat.com Tue Feb 28 05:21:54 2012 From: mkenneth at redhat.com (Miki Kenneth) Date: Tue, 28 Feb 2012 00:21:54 -0500 (EST) Subject: [Engine-devel] Clone VM from snapshot feature In-Reply-To: <918b602c-b136-48de-a76f-721510d40164@zmail13.collab.prod.int.phx2.redhat.com> References: <918b602c-b136-48de-a76f-721510d40164@zmail13.collab.prod.int.phx2.redhat.com> Message-ID: Send from my iPhone . On 28 ???? 2012, at 00:16, Ayal Baron wrote: > > > ----- Original Message ----- >> On 02/27/2012 09:09 PM, Itamar Heim wrote: >>> On 02/26/2012 05:05 PM, Yair Zaslavsky wrote: >>>> On 02/26/2012 04:55 PM, Ayal Baron wrote: >>>>> >>>>> >>>>> ----- Original Message ----- >>>>>> On 02/26/2012 04:38 PM, Ayal Baron wrote: >>>>>>> Yair, what about import of VM more than once? >>>>>> Hi Ayal, >>>>>> We consider this as a different feature. >>>>>> Gilad Chaplik is the feature owner. >>>>>> I can think of some very similar features to this one (not just >>>>>> import >>>>>> more than once). >>>>> >>>>> First, I couldn't find a feature page for that. >>>>> Second, I don't really understand the difference, there are >>>>> subtle >>>>> differences in the flow, but it is basically the same. >>>>> In fact. the only difference I can think of is that it is >>>>> initiated >>>>> from import and not from right click on the snapshot... >>>>> >>>>> What other similar features will this *not* cover? >>>> I can tell you that for current testing until fully integrated >>>> with >>>> snapshots modifications, I am testing it on VM which is down. >>>> Not sure we're interested in this, but here is an example of >>>> possible >>>> feature. >>> >>> I think Ayal point/question is similar to mine - why at general >>> code >>> level (obviously there are implementation differences) and user >>> experience wise, the following aren't similar: >>> [AddVmFromBlank] >>> AddVmFromTemplate >>> AddVmFromVm >>> AddVmFromSnapshot >>> AddVmFromImportCandidate[1] >> >> Of course there is shared code - for example, in the class diagram I >> presented for Clone VM from Snapshot, it can clearly be seen that >> there >> is code reuse, and there are two paths of "image-creation" (actually, >> there is of course also a 3rd path of 'createImage' verb) - path for >> snapshot and path for copyImage - In addition, the code of the >> "addVmXXXCommand" usually extends AddVmCommand, and of course >> introduces >> some changes. >> In addition , differences may occur at MLA (for example, for >> clone-vm-from-snapshot, and maybe other future flows I'm considering >> of >> adding a new action-group for CloneVm to the code - I'll update the >> wiki >> shortly on this), and of course UI-wise. 
>> So I hope that now I gave more explanations on where the code is >> similar >> and where its different (actually, I would like to point that in my >> changes, I'm striving to introduce more code-reuse). > > The point is not about code reuse. It is about the above being the same functionality from the user perspective with a few nuances. It could actually be implemented in different ways but it's still not multiple features. > Which also means that user experience should be almost the same in all these scenarios, which is why they should derive from the same point (design wise not implementation wise). > Today as you mentioned there are 2 different low level commands for achieving the above, but in fact going forward in the new image API, creating a new image whether it is based on an existing image or not and whether it is optimized for performance (collapse) or optimized for space (qcow2 / something else) would be a single command. Somthing like: createImage size [source] [perf/space] > > So again, code wise you could in fact be invoking different commands (so no code reuse), but the user doesn't care that underlying it is different flows, it is the same operation - creating a new image. > everything else is a nuance. I would state it a bit different, the user would like to create new vm (this is the feature) all the rest are different params (flows if you want). > >> >> Yair >> >>> >>> >>> [1] yes, there is a difference if you select more than a single >>> import >>> candidate (adding multiple VMs), but that's actually at UI level, >>> not >>> backend implementation. >>> >> >> > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From eedri at redhat.com Tue Feb 28 08:07:02 2012 From: eedri at redhat.com (Eyal Edri) Date: Tue, 28 Feb 2012 03:07:02 -0500 (EST) Subject: [Engine-devel] [oVirt Jenkins] Dao test fails due to latest commits In-Reply-To: <377c858e-3da3-4647-a834-d4c9e42a635d@zmail17.collab.prod.int.phx2.redhat.com> Message-ID: <9e6b78e5-9978-4f5a-85ee-81f9e85f4bca@zmail17.collab.prod.int.phx2.redhat.com> FYI, there is a new failure in dao tests on jenkins.ovirt.org: http://jenkins.ovirt.org/job/ovirt_engine_dao_unit_tests/lastCompletedBuild/testReport/ Test: org.ovirt.engine.core.dao.DiskImageDAOTest.testSave latest commits (that may have caused this): Changes engine: Add pre defined roles for consume quota (detail / gitweb) engine: Create default unlimited quota for DC (detail / gitweb) engine: Add quota to audit log (detail / gitweb) engine: Add validation for quota name (detail / gitweb) engine: Add/Edit quota commands (detail / gitweb) engine: Remove quota command (detail / gitweb) engine: Add quota id to VM parameters (detail / gitweb) engine: Add quota id to disk image (detail / gitweb) engine: Add quota id to vm static (detail / gitweb) frontend: Add business entities to sharedGWT (detail / gitweb) engine: Add search support for quota. (detail / gitweb) engine: Add audit log messages (detail / gitweb) engine: Add audit log messages to CRUD commands. 
(detail / gitweb) engine: Adding fix to quota stored procedure (detail / gitweb) engine: Add quota manager class (detail / gitweb) engine: Add quota validation support commandBase (detail / gitweb) engine: Add quota validation for AddVmCommand (detail / gitweb) engine: Add quota validation for Add Disk to VM (detail / gitweb) engine: Add quota validation for Add Vm Tempalte (detail / gitweb) engine:Add quota validation for RunVmCommand (detail / gitweb) Please check, Eyal Edri oVirt Infrastructure Team From tjelinek at redhat.com Tue Feb 28 09:02:11 2012 From: tjelinek at redhat.com (Tomas Jelinek) Date: Tue, 28 Feb 2012 04:02:11 -0500 (EST) Subject: [Engine-devel] final fields on entities sent to client In-Reply-To: <73416051-6981-4196-bfdb-5aca992453b1@zmail16.collab.prod.int.phx2.redhat.com> Message-ID: <9041c31d-bd90-4930-b7d4-bb8643d69175@zmail16.collab.prod.int.phx2.redhat.com> Hi all, as the same entities as the ones used on the server are also sent to the client (e.g. org.ovirt.engine.core.common.businessentities.VM), this entities will than be serialized/deserialized using the GWT serialization. It means, this entities should follow the restrictions given by the GWT serialization: http://code.google.com/webtoolkit/doc/latest/DevGuideServerCommunication.html#DevGuideSerializableTypes >From it I would like to point out this: "Fields that are declared final are also not exchanged during RPCs, so they should generally be marked transient as well." It means, that no fields which will be sent to client should be marked with the final keyword. Thanks, Tomas From mkolesni at redhat.com Tue Feb 28 10:05:08 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Tue, 28 Feb 2012 05:05:08 -0500 (EST) Subject: [Engine-devel] Do we need ImagesToRemoveList in RemoveImageParameters? In-Reply-To: <4F4B980D.8070701@redhat.com> Message-ID: > Hi all, > I see the getter code for this field is not used by Engine-Core. > We perform removal of the image specified by the imageId parameter. > > Can we remove this parameter? Seems so, just need to update clients constructor calls. > > Yair > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From sanjal at redhat.com Tue Feb 28 12:34:42 2012 From: sanjal at redhat.com (Shireesh Anjal) Date: Tue, 28 Feb 2012 18:04:42 +0530 Subject: [Engine-devel] Wiki page for introducing entity search Message-ID: <4F4CC9E2.4010002@redhat.com> Hi all, Based on my recent experience with entity search, I have created a Wiki page providing step-by-step instructions on introducing entity search for a new entity in oVirt. It is available at following URL: http://www.ovirt.org/wiki/Development/Introducing_Entity_Search Please feel free to contribute to the same in order to correct any discrepancies or enhance the information provided. -- Thanks, Shireesh From jhernand at redhat.com Tue Feb 28 17:23:55 2012 From: jhernand at redhat.com (Juan Hernandez) Date: Tue, 28 Feb 2012 18:23:55 +0100 Subject: [Engine-devel] Problem with log4j Message-ID: <4F4D0DAB.6060401@redhat.com> Hello, A patch related to log4j that I created was recently merged (commit 8ee5c9bb46bbff7a624a84a07ee6b2f9fa4ec9d4) and apparently is causing a problem during the deployment of the engine for some developers (Moti Asayag pointed that to me). 
If you find this when deploying the engine: Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Level Them I am to be blamed and the short term solution is to undo part of my patch in the root pom.xml file: --- a/pom.xml +++ b/pom.xml @@ -244,7 +244,6 @@ log4j log4j ${log4j.version} - provided I can't reproduce this myself, so please let me know if you find it. If this is common I think that the long term solution would be to add log4j to the list of dependencies of the affected EJB: --- a/backend/manager/modules/beans/scheduler/pom.xml +++ b/backend/manager/modules/beans/scheduler/pom.xml @@ -51,7 +51,7 @@ false - org.slf4j,deployment.engine.ear.engine-vdsbroker.jar + org.slf4j,org.apache.log4j,deployment.engine.ear.engine-vdsbroker.jar Regards, Juan Hernandez -- Direcci?n Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3?D, 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid ? C.I.F. B82657941 - Red Hat S.L. From lpeer at redhat.com Wed Feb 29 13:54:53 2012 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 29 Feb 2012 15:54:53 +0200 Subject: [Engine-devel] Agenda for todays meeting Message-ID: <4F4E2E2D.5070906@redhat.com> Hi All, Today we'll discuss the following: - Summary of mailing list discussions on networking: VM network and optional network. - Floating disks - Snapshot behavior (and live snapshot if time allows) Thanks, Livnat From lhornyak at redhat.com Wed Feb 29 14:23:53 2012 From: lhornyak at redhat.com (Laszlo Hornyak) Date: Wed, 29 Feb 2012 09:23:53 -0500 (EST) Subject: [Engine-devel] synthetic-access In-Reply-To: Message-ID: <1ab3955b-cf30-419c-9678-ccab62349203@zmail01.collab.prod.int.phx2.redhat.com> Hi, I am reviewing Allon's patches (e.g. http://gerrit.ovirt.org/#patch,sidebyside,2188,11,backend/manager/modules/dal/src/main/java/org/ovirt/engine/core/dao/VdsGroupDAODbFacadeImpl.java ) and this is the first time I have met @SuppressWarnings("synthetic-access") annotations in the ovirt code. It is right, eclipse warns about the performance problem synthetic access (if turned on, by default it is turned off). This mostly happens in DAO's because rowmappers are private inner classes. What if, instead of adding an annotation to ignore this - we could make the rowmapper classes package protected? - or since most of these classes are stateless and thread safe, we can add a public final static rowmapper instance and instead of instantiating the rowmapper over and over again, use that single instance. Please share your thoughts. Thank you, Laszlo From yzaslavs at redhat.com Wed Feb 29 14:54:28 2012 From: yzaslavs at redhat.com (Yair Zaslavsky) Date: Wed, 29 Feb 2012 16:54:28 +0200 Subject: [Engine-devel] synthetic-access In-Reply-To: <1ab3955b-cf30-419c-9678-ccab62349203@zmail01.collab.prod.int.phx2.redhat.com> References: <1ab3955b-cf30-419c-9678-ccab62349203@zmail01.collab.prod.int.phx2.redhat.com> Message-ID: <4F4E3C24.7060904@redhat.com> On 02/29/2012 04:23 PM, Laszlo Hornyak wrote: > Hi, > > I am reviewing Allon's patches (e.g. http://gerrit.ovirt.org/#patch,sidebyside,2188,11,backend/manager/modules/dal/src/main/java/org/ovirt/engine/core/dao/VdsGroupDAODbFacadeImpl.java ) and this is the first time I have met @SuppressWarnings("synthetic-access") annotations in the ovirt code. It is right, eclipse warns about the performance problem synthetic access (if turned on, by default it is turned off). This mostly happens in DAO's because rowmappers are private inner classes. 
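As a rough illustration of the single shared row mapper idea proposed just below (TagDAO, TagEntity and the query are made-up names, not the engine's actual DAO code; only the Spring JDBC RowMapper contract is real):

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.jdbc.core.RowMapper;

    public class TagDAO {

        // Package-visible (not private) nested class, so no synthetic accessor is
        // generated when the outer class instantiates it.
        static class TagRowMapper implements RowMapper<TagEntity> {
            @Override
            public TagEntity mapRow(ResultSet rs, int rowNum) throws SQLException {
                TagEntity tag = new TagEntity();
                tag.setId(rs.getLong("id"));
                tag.setName(rs.getString("name"));
                return tag;
            }
        }

        // The mapper is stateless and thread safe, so one shared instance is reused
        // for every query instead of being re-instantiated per call.
        static final RowMapper<TagEntity> MAPPER = new TagRowMapper();

        private final JdbcTemplate template;

        public TagDAO(JdbcTemplate template) {
            this.template = template;
        }

        public TagEntity get(long id) {
            return template.queryForObject(
                    "SELECT id, name FROM tags WHERE id = ?", new Object[] { id }, MAPPER);
        }
    }

    // Plain bean used only by this sketch.
    class TagEntity {
        private long id;
        private String name;
        public long getId() { return id; }
        public void setId(long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }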
What if, instead of adding an annotation to ignore this > - we could make the rowmapper classes package protected? > - or since most of these classes are stateless and thread safe, we can add a public final static rowmapper instance and instead of instantiating the rowmapper over and over again, use that single instance. +1 on this, and I'm already giving comments to people on this issue (to make a single static instance of a mapper) > > Please share your thoughts. > > Thank you, > Laszlo > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel From mkolesni at redhat.com Wed Feb 29 17:39:37 2012 From: mkolesni at redhat.com (Mike Kolesnik) Date: Wed, 29 Feb 2012 12:39:37 -0500 (EST) Subject: [Engine-devel] synthetic-access In-Reply-To: <4F4E3C24.7060904@redhat.com> Message-ID: <961907e9-d4e3-4c27-b56d-f8bdd5046d6c@zmail14.collab.prod.int.phx2.redhat.com> > On 02/29/2012 04:23 PM, Laszlo Hornyak wrote: > > Hi, > > > > I am reviewing Allon's patches (e.g. > > http://gerrit.ovirt.org/#patch,sidebyside,2188,11,backend/manager/modules/dal/src/main/java/org/ovirt/engine/core/dao/VdsGroupDAODbFacadeImpl.java > > ) and this is the first time I have met > > @SuppressWarnings("synthetic-access") annotations in the ovirt > > code. It is right, eclipse warns about the performance problem > > synthetic access (if turned on, by default it is turned off). This > > mostly happens in DAO's because rowmappers are private inner > > classes. What if, instead of adding an annotation to ignore this > > - we could make the rowmapper classes package protected? > > - or since most of these classes are stateless and thread safe, we > > can add a public final static rowmapper instance and instead of > > instantiating the rowmapper over and over again, use that single > > instance. > +1 on this, and I'm already giving comments to people on this issue > (to > make a single static instance of a mapper) +1 this sounds like the right thing to do anyway. > > > > > Please share your thoughts. > > > > Thank you, > > Laszlo > > _______________________________________________ > > Engine-devel mailing list > > Engine-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/engine-devel > > _______________________________________________ > Engine-devel mailing list > Engine-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/engine-devel > From smizrahi at redhat.com Wed Feb 29 20:15:47 2012 From: smizrahi at redhat.com (Saggi Mizrahi) Date: Wed, 29 Feb 2012 15:15:47 -0500 (EST) Subject: [Engine-devel] My experience using the UI In-Reply-To: <92f3d8e8-5660-4a16-94e0-ab84677dc0d6@zmail16.collab.prod.int.phx2.redhat.com> Message-ID: I recently had to use the Engine UI (GWT one). The UI in my opinion is very unintuitve and, at some flows, even counter productive. It took me a lot more time to figure out how to do even the simplest tasks. Lets start with the rudimentary things. These are agreed upon UI idioms that are not used currently thus causing needless confusion. These idioms are ingrained into how users grasp a UI when they first it. I did not invent these they are written in various Windows\MacOS\KDE\Gnome and many other UI\HI guidelines. This is not a complete list of all the problems, just the things that annoyed me the most. * Buttons\menu items that spawn dialog windows should end with an ellipsis ("edit..." vs "remove"). * Grayed out means disabled not "not selected" (eg. 
From smizrahi at redhat.com Wed Feb 29 20:15:47 2012
From: smizrahi at redhat.com (Saggi Mizrahi)
Date: Wed, 29 Feb 2012 15:15:47 -0500 (EST)
Subject: [Engine-devel] My experience using the UI
In-Reply-To: <92f3d8e8-5660-4a16-94e0-ab84677dc0d6@zmail16.collab.prod.int.phx2.redhat.com>
Message-ID: 

I recently had to use the Engine UI (the GWT one). The UI, in my opinion, is
very unintuitive and, in some flows, even counterproductive. It took me a lot
more time than it should have to figure out how to do even the simplest tasks.

Let's start with the rudimentary things. These are agreed-upon UI idioms that
are not currently used, which causes needless confusion. These idioms are
ingrained in how users grasp a UI when they first see it. I did not invent
them; they are written down in various Windows\MacOS\KDE\Gnome and many other
UI\HI guidelines. This is not a complete list of all the problems, just the
things that annoyed me the most.

* Buttons\menu items that spawn dialog windows should end with an ellipsis
  ("edit..." vs "remove").
* Grayed out means disabled, not "not selected" (eg. the domain creation
  dialog -> the lun chooser).
* Never write things vertically, only horizontally (eg. the domain creation
  dialog).
* The entire object should be clickable, not only the text (tabs).
* Clickable items should be either underlined, in another color, or inside a
  box, to differentiate them from regular text (menu items).
* Buttons in the same group should be the same width and height, and the
  entire area should be clickable.
* Border toggling cannot be used as a hover indicator, only a background
  and/or text color toggle.
* Don't use "/" in column headers: "host/ip" should be "Host address".
* Everything should be arranged in relative proportions (the search bar looks
  weird on large screens).
* Everything about the error message windows.
* Try scaling the browser window to 800x600 and see what happens.
* Icons that don't have text next to them have to have a tooltip (bookmark
  search icon).
* (This one is my personal preference) Everything that is clickable should
  have a hover toggle that shows the full extent of the object you are
  clicking.
* Color scheme!!! Alerts (red on gray?); where does the green in the logo fit
  into all of this? There are tools to help you create an eye-pleasing color
  scheme (eg. http://colorschemedesigner.com/).

Now that the basics are out of the way, the following are my opinions and are
debatable.

- Top panel:
  * The current user name should be next to the sign-out link, not at the
    opposite end of the list.
  * About and guide could be merged into "help". None of them deserves that
    much prominence in the UI.
  * Configure? What, who? Find somewhere contextual to put it.

- Search: This is a long-standing gripe against the UI.
  * First and foremost, it doesn't work properly. Try navigating the Tree and
    then editing the search bar.
  * The browser already has a bookmarks feature, so reusing the name is
    confusing; maybe saved views? Perspectives? I don't know.
  * The search bar changes when I navigate, which is confusing. Imagine having
    the gmail search bar change when you navigate gmail. To fix this I suggest
    that the search bar remain empty unless the user starts using it, and that
    the bookmark button be moved to a place where it's obvious it relates to
    the current view\perspective.

- The Tree
  * Tree? Really? Is that a proper title?
  * Tree refresh annoyingly collapses everything.

- Events
  * What is the difference between the button on the right and the one on the
    bottom?
  * I'd rather it be called log\messages, because these are not really events.

- All tabs
  * The difference between "guide me" and "edit" is confusing and arbitrary.
  * Buttons in the toolbar should be grouped, and spaced according to those
    groups.
  * Buttons should follow the general UI guidelines above.
  * Similar buttons like "run" and "run Once", or "new desktop" and "new
    server", might be better displayed as a drop-down button
    (http://bit.ly/xMAUbK).
  * Less important actions like "change CD" could be grouped under a drop-down
    button as well, so it's less cluttered.
  * Paging is so 2000; everyone does continuous scroll now
    (http://bit.ly/a8B9UC).

- Guide Me: Does not guide and is utterly confusing.
  * Some buttons are grayed out, but you have no idea why until you figure out
    that there is a hint hidden in the tooltip (an allusion to XKCD?).
  * Can't use existing entities! Only create new ones. Why?
  * "Optional actions"? Really?
  * "There are still unconfigured entities". What? I don't understand what is
    wrong. "Unconfigured" isn't even a proper word. Do you mean that I still
    have entities to set up? What entities? What is an entity?
  * "Configure later"? What is wrong with "close"?
  * Again, unify this with edit; I kept clicking edit even after I realized
    that I needed to click guide me.

Pet peeves:
* The green line under the black part of the top bar is just annoying.
* CPU Name -> CPU Architecture (name)?
* destroy (in storage domain) -> force remove

There are a lot more things that could be streamlined. The lun selection
screen, for example, is just plain confusing. But that would require me to
start drafting mockups, and I have a lot of other work to do.

I recommend reading at least the WinXP and Apple HIG guidelines
(http://bit.ly/djCHb8). Finally, when in doubt, copy Google; they have an
excellent web UX team.