From dneary at redhat.com Mon Sep 2 19:41:58 2013 From: dneary at redhat.com (Dave Neary) Date: Mon, 02 Sep 2013 21:41:58 +0200 Subject: Enabled SpamAssassin for mailing list filtering pre-moderation Message-ID: <5224EA06.9050105@redhat.com> Hi all, I have just set up SpamAssassin for Mailman for all ovirt.org mailing lists - I will be checking in regularly to make sure that this change does not cause dropping of legitimate emails. If you send an email to the list and it doesn't get there, please mail me personally at dneary @redhat.com to let me know, and we'll adjust the spam level appropriately. I have trained all the lists with a corpus of ham and spam, so hopefully this will significantly lessen the moderation load. Let me know if you notice anything out of the ordinary on *any* ovirt.org lists, please! Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From dfediuck at redhat.com Tue Sep 3 13:03:08 2013 From: dfediuck at redhat.com (Doron Fediuck) Date: Tue, 3 Sep 2013 09:03:08 -0400 (EDT) Subject: [Users] deep dive - scheduling in 3.3 In-Reply-To: <1796377068.6676747.1377595604442.JavaMail.root@redhat.com> References: <1796377068.6676747.1377595604442.JavaMail.root@redhat.com> Message-ID: <152854599.12264036.1378213388484.JavaMail.root@redhat.com> Hi all, session will start momentarily. Join us, Doron ----- Original Message ----- > From: "Doron Fediuck" > To: "users" , "arch" > Sent: Tuesday, August 27, 2013 12:26:44 PM > Subject: [Users] deep dive - scheduling in 3.3 > > Hi all, > Next Tuesday (September 3rd) at 16:00 UTC+3, we're going to have > a deep dive session on the new oVirt scheduling. > > You're more than welcome to join us using: > > - Audio only bridge > * Dial your local access number in: > https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=972545636785 > * Use the following bridge ID: 972545636785 > > - Visual session (Elluminate) > * Browse to: > https://sas.elluminate.com/m.jnlp?sid=819&password=M.97AA703D4B077C9D0263B1D791D482 > > Slides will be available after the session. > See you then, > Doron > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From mburns at redhat.com Tue Sep 3 14:11:27 2013 From: mburns at redhat.com (Mike Burns) Date: Tue, 03 Sep 2013 10:11:27 -0400 Subject: Enabled SpamAssassin for mailing list filtering pre-moderation In-Reply-To: <5224EA06.9050105@redhat.com> References: <5224EA06.9050105@redhat.com> Message-ID: <5225EE0F.6050009@redhat.com> On 09/02/2013 03:41 PM, Dave Neary wrote: > Hi all, > > I have just set up SpamAssassin for Mailman for all ovirt.org mailing > lists - I will be checking in regularly to make sure that this change > does not cause dropping of legitimate emails. If you send an email to > the list and it doesn't get there, please mail me personally at dneary > @redhat.com to let me know, and we'll adjust the spam level appropriately. > > I have trained all the lists with a corpus of ham and spam, so hopefully > this will significantly lessen the moderation load. > > Let me know if you notice anything out of the ordinary on *any* > ovirt.org lists, please! > > Thanks, > Dave. > Woot! thanks Dave. 
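For list admins curious how the pieces fit together, here is a minimal sketch of the kind of pre-moderation check described above. It assumes a running SpamAssassin daemon with the stock spamc client on PATH, and is only an illustration of the mechanism - not the actual lists.ovirt.org configuration:

    import subprocess
    import sys

    def is_spam(raw_message):
        """Return True if SpamAssassin classifies the message as spam.

        spamc -c ("check only") prints "score/threshold" on stdout and
        exits with a non-zero status when the score crosses the threshold.
        """
        result = subprocess.run(
            ["spamc", "-c"],
            input=raw_message,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        return result.returncode != 0

    if __name__ == "__main__":
        # Feed one raw message in, e.g.: python check_spam.py < message.eml
        print("spam" if is_spam(sys.stdin.buffer.read()) else "ham")

A filter along these lines runs before the moderation queue, which is why training it on a good corpus of ham and spam directly reduces the moderation load.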
From dfediuck at redhat.com Sat Sep 7 19:41:27 2013
From: dfediuck at redhat.com (Doron Fediuck)
Date: Sat, 7 Sep 2013 15:41:27 -0400 (EDT)
Subject: deep dive - hosted engine
In-Reply-To: <1796377068.6676747.1377595604442.JavaMail.root@redhat.com>
Message-ID: <648743419.16080410.1378582887201.JavaMail.root@redhat.com>

Hi all,
Next Tuesday (September 9) at 16:00 UTC+3, we're going to have
a deep dive session on the cool hosted engine.

You're more than welcome to join us using:

- Audio only bridge
  * Dial your local access number in:
    https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=0543004646
  * Use the following bridge ID: 0543004646

- Visual session (Elluminate)
  * Browse to:
    https://sas.elluminate.com/m.jnlp?sid=819&password=M.2BA8BA41E5D0E38D2CC8C19918A932

Slides will be available after the session.
See you then,
Doron

From mkolesni at redhat.com Sun Sep 8 09:30:20 2013
From: mkolesni at redhat.com (Mike Kolesnik)
Date: Sun, 8 Sep 2013 05:30:20 -0400 (EDT)
Subject: Design issue when using optional networks for administrative usages
In-Reply-To: <1784627598.11325788.1378630979139.JavaMail.root@redhat.com>
Message-ID: <378570826.11327031.1378632620084.JavaMail.root@redhat.com>

Hi,

I would like to hear opinions about what I consider a design issue in oVirt.

First of all, a short description of the current situation in oVirt 3.3:
Network is a data-center level entity, representing a L2 broadcast domain.
Each network can be attached to one or more clusters, where the attachment
can have several properties:
- Required/Optional - Does the network have to be on all hosts or not?
- Usages (administrative):
  - Display network - used for the display traffic
  - Migration network - used for the migration traffic

Now, what bothers me is the affinity between these two properties - if a
network is defined "optional", can it be used for an "administrative" usage?

Currently I can have the following situation:
0. Fresh install with some hosts and a shared storage, and no networks
other than default.
1. Create a network X.
2. Attach to a cluster as "migration", "display", "optional".
3. Create a VM in the same cluster.

Now all is well and everything is green across the board, BUT:
1. The VM can't be run on any host in that cluster if the host doesn't have
the display network.
2. VM will migrate over the default network if the network is not present
on the source host.
3. Migration will not work if the network is not present on the destination
host.

I find this situation very troublesome!
We give the admin the impression that everything is fine and dandy, but
underneath the surface everything is NOT.

If we look at the previous points we can see that:
1. No VM can run in that cluster, but hosts and network seem A-OK - this is
intrinsically awful as we don't reflect the real problem anywhere in the
network nor the host statuses but rather postpone it until someone makes
an attempt to actually use the VM.
2. Migration network is NOT being used, which was obviously not the intent
of the admin who set it up.
3. There is still an open bug for it ( https://bugzilla.redhat.com/983515 )
and it's unclear as to what should happen, but it would be either what
happens in case #1 or in case #2.

What I suggest is to have any network with usage be "required".
This will utilize the existing logic for required networks:
- Either the network should not be used until it's available on all hosts
(reflected in the network status being Non-Operational)
- Or the host should be Non-Operational as it's incapable of
running/migrating VMs

Therefore reflecting the problem to the admin and giving him a chance to
fix it properly, and not hiding the failure until it occurs or doing some
unexpected behavior.

I would love to hear your thoughts on the subject.

Regards,
Mike
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
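To make the proposal concrete, here is a toy sketch of the suggested invariant - a network carrying an administrative usage must also be "required". This is illustrative pseudo-logic only, not engine code; the usage names are the two discussed above:

    ADMIN_USAGES = {"display", "migration"}

    def attachment_errors(required, usages):
        """Validate a cluster-network attachment against the proposed rule.

        Returns a list of error strings; an empty list means the
        attachment is acceptable.  A network with an administrative
        usage must be required, so a missing network surfaces through
        the existing Non-Operational logic instead of failing later.
        """
        admin = set(usages) & ADMIN_USAGES
        if admin and not required:
            return ["network used for %s must also be required"
                    % ", ".join(sorted(admin))]
        return []

    # The setup from steps 0-3 above would now be rejected up front:
    assert attachment_errors(required=False, usages={"migration", "display"})
    # ...while a required migration network passes:
    assert not attachment_errors(required=True, usages={"migration"})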
From dfediuck at redhat.com Sun Sep 8 21:46:02 2013
From: dfediuck at redhat.com (Doron Fediuck)
Date: Sun, 8 Sep 2013 17:46:02 -0400 (EDT)
Subject: [Users] deep dive - hosted engine
In-Reply-To: 
References: <1796377068.6676747.1377595604442.JavaMail.root@redhat.com> <648743419.16080410.1378582887201.JavaMail.root@redhat.com>
Message-ID: <284607254.16299245.1378676762250.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Andrew Lau" 
> To: "Doron Fediuck" 
> Cc: "arch" , "users" 
> Sent: Sunday, September 8, 2013 3:48:58 AM
> Subject: Re: [Users] deep dive - hosted engine
> 
> On Sun, Sep 8, 2013 at 5:41 AM, Doron Fediuck < dfediuck at redhat.com > wrote:
> 
> Hi all,
> Next Tuesday (September 9) at 16:00 UTC+3, we're going to have
> a deep dive session on the cool hosted engine.
> 
> You're more than welcome to join us using:
> 
> - Audio only bridge
> * Dial your local access number in:
> https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=0543004646
> * Use the following bridge ID: 0543004646
> 
> - Visual session (Elluminate)
> * Browse to:
> https://sas.elluminate.com/m.jnlp?sid=819&password=M.2BA8BA41E5D0E38D2CC8C19918A932
> 
> Slides will be available after the session.
> See you then,
> Doron
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> Hi Doron,
> 
> Does that mean hosted-engine is ready to be used? Also Sept 9 is a Monday.
> 
> I'm excited!
> 
> Andrew.

Hi Andrew,
This is a technical session, explaining the design, architecture, flows
and various aspects. The actual implementation is still being tested,
thus not ready yet for release.

Thanks for noticing the day issue - the session will be held on
Monday, September 9.

See you then ;)
Doron.

From dfediuck at redhat.com Sun Sep 8 21:48:11 2013
From: dfediuck at redhat.com (Doron Fediuck)
Date: Sun, 8 Sep 2013 17:48:11 -0400 (EDT)
Subject: [Users] deep dive - hosted engine
In-Reply-To: <648743419.16080410.1378582887201.JavaMail.root@redhat.com>
References: <648743419.16080410.1378582887201.JavaMail.root@redhat.com>
Message-ID: <585010270.16299400.1378676891226.JavaMail.root@redhat.com>

Just clarifying my typo - the session will be on September 9, which is Monday ;)

----- Original Message -----
> From: "Doron Fediuck" 
> To: "users" , "arch" 
> Sent: Saturday, September 7, 2013 10:41:27 PM
> Subject: [Users] deep dive - hosted engine
> 
> Hi all,
> Next Tuesday (September 9) at 16:00 UTC+3, we're going to have
> a deep dive session on the cool hosted engine.
>
> You're more than welcome to join us using:
>
> - Audio only bridge
> * Dial your local access number in:
> https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=0543004646
> * Use the following bridge ID: 0543004646
>
> - Visual session (Elluminate)
> * Browse to:
> https://sas.elluminate.com/m.jnlp?sid=819&password=M.2BA8BA41E5D0E38D2CC8C19918A932
>
> Slides will be available after the session.
> See you then,
> Doron
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From iheim at redhat.com Mon Sep 9 12:17:35 2013
From: iheim at redhat.com (Itamar Heim)
Date: Mon, 09 Sep 2013 15:17:35 +0300
Subject: rebasing for oVirt 3.3.1?
Message-ID: <522DBC5F.8000403@redhat.com>

with 3.3.0 coming soon, one of the questions I heard is "what about
3.3.1", considering the number of patches for bugs that went into the
master branch since we branched to stabilize 3.3.0
(i.e., most of the work in the master branch has been focused on bug fixes).

so my suggestion is for 3.3.1 that we rebase from master, then move to
backporting patches to that branch for the rest of the 3.3 time frame.

while this poses a small risk, I believe it's the best course toward
making oVirt 3.3 a more robust and stable version going forward.

this is mostly about ovirt-engine, and probably vdsm. for the other
projects, it's up to the maintainer, based on risk/benefit.

thoughts?

Thanks,
Itamar

From mburns at redhat.com Mon Sep 9 13:19:50 2013
From: mburns at redhat.com (Mike Burns)
Date: Mon, 09 Sep 2013 09:19:50 -0400
Subject: rebasing for oVirt 3.3.1?
In-Reply-To: <522DBC5F.8000403@redhat.com>
References: <522DBC5F.8000403@redhat.com>
Message-ID: <522DCAF6.6000504@redhat.com>

On 09/09/2013 08:17 AM, Itamar Heim wrote:
> with 3.3.0 coming soon, one of the questions I heard is "what about
> 3.3.1", considering the number of patches for bugs that went into the
> master branch since we branched to stabilize 3.3.0
> (i.e., most of the work in the master branch has been focused on bug fixes).
> 
> so my suggestion is for 3.3.1 that we rebase from master, then move to
> backporting patches to that branch for the rest of the 3.3 time frame.
> 
> while this poses a small risk, I believe it's the best course toward
> making oVirt 3.3 a more robust and stable version going forward.
> 
> this is mostly about ovirt-engine, and probably vdsm. for the other
> projects, it's up to the maintainer, based on risk/benefit.
> 

I have no objections as long as we're not taking features into the 3.3.1
release and we're not changing the package set.  We had an issue with
one of the 3.2.x updates where we pulled a change in vdsm that removed
the vdsm-gluster package.  As long as we're making every effort to avoid
features and avoid packaging changes, then I'm happy.

Mike

> thoughts?
> 
> Thanks,
> Itamar
> _______________________________________________
> Arch mailing list
> Arch at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/arch
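For readers following along, the flow Itamar proposes corresponds roughly to the sketch below: re-create the stable branch from current master once, then backport later fixes one by one. The repository path, branch name, and commit ids are illustrative assumptions - the thread does not prescribe exact commands:

    import subprocess

    REPO = "/path/to/ovirt-engine"   # illustrative checkout location
    BACKPORTS = ["abc1234"]          # hypothetical commit ids approved for 3.3.1

    def git(*args):
        subprocess.run(("git",) + args, cwd=REPO, check=True)

    git("fetch", "origin")
    # "rebase from master": point the stable branch at current master,
    # picking up the bug fixes that accumulated since 3.3.0 branched.
    git("checkout", "-B", "ovirt-engine-3.3", "origin/master")
    # ...then "move to backporting": cherry-pick each later fix, with -x
    # recording the original commit id in the backport's message.
    for sha in BACKPORTS:
        git("cherry-pick", "-x", sha)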
From iheim at redhat.com Mon Sep 9 13:28:49 2013
From: iheim at redhat.com (Itamar Heim)
Date: Mon, 09 Sep 2013 16:28:49 +0300
Subject: rebasing for oVirt 3.3.1?
In-Reply-To: <522DCAF6.6000504@redhat.com>
References: <522DBC5F.8000403@redhat.com> <522DCAF6.6000504@redhat.com>
Message-ID: <522DCD11.4000603@redhat.com>

On 09/09/2013 04:19 PM, Mike Burns wrote:
> On 09/09/2013 08:17 AM, Itamar Heim wrote:
>> with 3.3.0 coming soon, one of the questions I heard is "what about
>> 3.3.1", considering the number of patches for bugs that went into the
>> master branch since we branched to stabilize 3.3.0
>> (i.e., most of the work in the master branch has been focused on bug fixes).
>>
>> so my suggestion is for 3.3.1 that we rebase from master, then move to
>> backporting patches to that branch for the rest of the 3.3 time frame.
>>
>> while this poses a small risk, I believe it's the best course toward
>> making oVirt 3.3 a more robust and stable version going forward.
>>
>> this is mostly about ovirt-engine, and probably vdsm. for the other
>> projects, it's up to the maintainer, based on risk/benefit.
>>
> 
> I have no objections as long as we're not taking features into the 3.3.1
> release and we're not changing the package set.  We had an issue with
> one of the 3.2.x updates where we pulled a change in vdsm that removed
> the vdsm-gluster package.  As long as we're making every effort to avoid
> features and avoid packaging changes, then I'm happy.

I think there is a feature or two, but I think the version would still
be way better off with this, considering the ratio of patches that went
into it.

I do expect us to do a bit more testing on it than if we didn't rebase,
but I think it's worth it.

(as a side note, I also think it will be worthwhile to release
hosted-engine asynchronously to the beta testing / release.)

> 
> Mike
> 
>> thoughts?
>>
>> Thanks,
>> Itamar
>> _______________________________________________
>> Arch mailing list
>> Arch at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/arch
> 

From mburns at redhat.com Mon Sep 9 13:31:03 2013
From: mburns at redhat.com (Mike Burns)
Date: Mon, 09 Sep 2013 09:31:03 -0400
Subject: rebasing for oVirt 3.3.1?
In-Reply-To: <522DCD11.4000603@redhat.com>
References: <522DBC5F.8000403@redhat.com> <522DCAF6.6000504@redhat.com> <522DCD11.4000603@redhat.com>
Message-ID: <522DCD97.5020606@redhat.com>

On 09/09/2013 09:28 AM, Itamar Heim wrote:
> On 09/09/2013 04:19 PM, Mike Burns wrote:
>> On 09/09/2013 08:17 AM, Itamar Heim wrote:
>>> with 3.3.0 coming soon, one of the questions I heard is "what about
>>> 3.3.1", considering the number of patches for bugs that went into the
>>> master branch since we branched to stabilize 3.3.0
>>> (i.e., most of the work in the master branch has been focused on bug fixes).
>>>
>>> so my suggestion is for 3.3.1 that we rebase from master, then move to
>>> backporting patches to that branch for the rest of the 3.3 time frame.
>>>
>>> while this poses a small risk, I believe it's the best course toward
>>> making oVirt 3.3 a more robust and stable version going forward.
>>>
>>> this is mostly about ovirt-engine, and probably vdsm. for the other
>>> projects, it's up to the maintainer, based on risk/benefit.
>>>
>>
>> I have no objections as long as we're not taking features into the 3.3.1
>> release and we're not changing the package set.  We had an issue with
>> one of the 3.2.x updates where we pulled a change in vdsm that removed
>> the vdsm-gluster package.  As long as we're making every effort to avoid
>> features and avoid packaging changes, then I'm happy.
> > i think there is a feature or two, but i think the version would still > be way better off with this, considering the ratio of patches that went > into it. Ok, can we at least make sure we call them out so we can list them in announcement email? > > I do expect us to do a bit more testing on it than if we didn't rebase, > but i think its worth it. > > (as a side note, i also think it will be worth while to release > hosted-engine in async to beta testing / release). Yes, hosted-engine is something we had discussed as an async feature and there were no objections to it going async. Mike > > >> >> Mike >> >>> thoughts? >>> >>> Thanks, >>> Itamar >>> _______________________________________________ >>> Arch mailing list >>> Arch at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/arch >> > > _______________________________________________ > Infra mailing list > Infra at ovirt.org > http://lists.ovirt.org/mailman/listinfo/infra From iheim at redhat.com Mon Sep 9 13:34:04 2013 From: iheim at redhat.com (Itamar Heim) Date: Mon, 09 Sep 2013 16:34:04 +0300 Subject: rebasing for oVirt 3.3.1? In-Reply-To: <522DCD97.5020606@redhat.com> References: <522DBC5F.8000403@redhat.com> <522DCAF6.6000504@redhat.com> <522DCD11.4000603@redhat.com> <522DCD97.5020606@redhat.com> Message-ID: <522DCE4C.1020206@redhat.com> On 09/09/2013 04:31 PM, Mike Burns wrote: > On 09/09/2013 09:28 AM, Itamar Heim wrote: >> On 09/09/2013 04:19 PM, Mike Burns wrote: >>> On 09/09/2013 08:17 AM, Itamar Heim wrote: >>>> with 3.3.0 coming soon, one of the questions I heard is "what about >>>> 3.3.1" considering the number of patches fox bugs that went into master >>>> branch since since we branched to stabilize 3.3.0. >>>> i.e., most of the work in master branch has been focused on bug fixes) >>>> >>>> so my suggestion is for 3.3.1 that we rebase from master, then move to >>>> backporting patches to that branch for the rest of 3.3 time frame. >>>> >>>> while this poses a small risk, i believe its the best course forward to >>>> making ovirt 3.3 a more robust and stable version going forward. >>>> >>>> this is mostly about ovirt-engine, and probably vdsm. for the other >>>> projects, its up to the maintainer, based on risk/benefit. >>>> >>> >>> I have no objections as long as we're not taking features into the 3.3.1 >>> release and we're not changing the package set. We had an issue with >>> one of the 3.2.x updates where we pulled a change in vdsm that removed >>> the vdsm-gluster package. As long as we're making every effort to avoid >>> features and avoid packaging changes, then I'm happy. >> >> i think there is a feature or two, but i think the version would still >> be way better off with this, considering the ratio of patches that went >> into it. > > Ok, can we at least make sure we call them out so we can list them in > announcement email? goes without saying. I'm thinking we should create that branch asap (say, post wednesday ovirt meeting), to be able to start testing it, to be able to release 3.3.1 in a few weeks. probably with a mini test day around install flows and upgrade from 3.2 and 3.3.0. > >> >> I do expect us to do a bit more testing on it than if we didn't rebase, >> but i think its worth it. >> >> (as a side note, i also think it will be worth while to release >> hosted-engine in async to beta testing / release). > > Yes, hosted-engine is something we had discussed as an async feature and > there were no objections to it going async. > > Mike > >> >> >>> >>> Mike >>> >>>> thoughts? 
>>>> >>>> Thanks, >>>> Itamar >>>> _______________________________________________ >>>> Arch mailing list >>>> Arch at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/arch >>> >> >> _______________________________________________ >> Infra mailing list >> Infra at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/infra > From dfediuck at redhat.com Mon Sep 9 16:01:13 2013 From: dfediuck at redhat.com (Doron Fediuck) Date: Mon, 9 Sep 2013 12:01:13 -0400 (EDT) Subject: deep dive - hosted engine In-Reply-To: <648743419.16080410.1378582887201.JavaMail.root@redhat.com> References: <648743419.16080410.1378582887201.JavaMail.root@redhat.com> Message-ID: <997399378.16894501.1378742473511.JavaMail.root@redhat.com> Hi all, Session slides uploaded and now available here: http://www.ovirt.org/OVirt_3.3_release_notes#Deep_dives Feel free to send questions to the users list. Thanks again to Sandro & Greg for delivering it! Doron ----- Original Message ----- > From: "Doron Fediuck" > To: "users" , "arch" > Cc: iheim at redhat.com, "Sandro Bonazzola" , "Greg Padgett" > Sent: Saturday, September 7, 2013 10:41:27 PM > Subject: deep dive - hosted engine > > Hi all, > Next Monday (September 9) at 16:00 UTC+3, we're going to have > a deep dive session on the cool hosted engine. > > You're more than welcome to join us using: > > - Audio only bridge > * Dial your local access number in: > https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=0543004646 > * Use the following bridge ID: 0543004646 > > - Visual session (Elluminate) > * Browse to: > https://sas.elluminate.com/m.jnlp?sid=819&password=M.2BA8BA41E5D0E38D2CC8C19918A932 > > Slides will be available after the session. > See you then, > Doron From danken at redhat.com Mon Sep 9 16:34:17 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Mon, 9 Sep 2013 17:34:17 +0100 Subject: Design issue when using optional networks for administrative usages In-Reply-To: <378570826.11327031.1378632620084.JavaMail.root@redhat.com> References: <1784627598.11325788.1378630979139.JavaMail.root@redhat.com> <378570826.11327031.1378632620084.JavaMail.root@redhat.com> Message-ID: <20130909163417.GE23100@redhat.com> On Sun, Sep 08, 2013 at 05:30:20AM -0400, Mike Kolesnik wrote: > Hi, > > I would like to hear opinions about what I consider a design issue in oVirt. > > First of all, a short description of the current situation in oVirt 3.3: > Network is a data-center level entity, representing a L2 broadcast domain. > Each network can be attached to one or more clusters, where the attachment can have several properties: > - Required/Optional - Does the network have to be on all hosts or not? > - Usages (administrative): > - Display network - used for the display traffic > - Migration network - used for the migration traffic > > Now, what bothers me is the affinity between these two properties - if a network is defined "optional", can is be used for an "administrative" usage? > > Currently I can have the following situation: > 0. Fresh install with some hosts and a shared storage, and no networks other than default. > 1. Create a network X. > 2. Attach to a cluster as "migration", "display", "optional". > 3. Create a VM in the same cluster. > > Now all is well and everything is green across the board, BUT: > 1. The VM can't be run on any host in that cluster if the host doesn't have the display network. > 2. VM will migrate over the default network if the network is not present on the source host. > 3. 
Migration will not work if the network is not present on the destination
> host.
> 
> I find this situation very troublesome!
> We give the admin the impression that everything is fine and dandy, but
> underneath the surface everything is NOT.
> 
> If we look at the previous points we can see that:
> 1. No VM can run in that cluster, but hosts and network seem A-OK - this is
> intrinsically awful as we don't reflect the real problem anywhere in the
> network nor the host statuses but rather postpone it until someone makes
> an attempt to actually use the VM.
> 2. Migration network is NOT being used, which was obviously not the intent
> of the admin who set it up.
> 3. There is still an open bug for it ( https://bugzilla.redhat.com/983515 )
> and it's unclear as to what should happen, but it would be either what
> happens in case #1 or in case #2.
> 
> What I suggest is to have any network with usage be "required".
> This will utilize the existing logic for required networks:
> - Either the network should not be used until it's available on all hosts
> (reflected in the network status being Non-Operational)
> - Or the host should be Non-Operational as it's incapable of
> running/migrating VMs
> 
> Therefore reflecting the problem to the admin and giving him a chance to
> fix it properly, and not hiding the failure until it occurs or doing some
> unexpected behavior.
> 
> I would love to hear your thoughts on the subject.

Some history first. Once upon a time, we wanted an Up host to mean
"this host is ready to run any of its cluster's VMs". This meant that if
a host lost connectivity to one of the cluster networks, it had to be
taken down.

Customers did not like our over-protection, so we've introduced
non-required networks. When an admin uses this option he says "I know
what I'm doing, let me do stuff on this host even if the network is
down."

I think that this request is a valid one, even when a network serves
other purposes than connecting VMs. When designing migration network,
we've decided that if it is missing, migration would be attempted over
the management network, as a fallback. I can imagine an admin who says:
I don't care much about migrations, most of my VMs are pinned-to-host
anyway. So if the migration network is gone, don't make a fuss out of
it.

The use case for letting a host be Up even if its display network is
down is less obvious. But then again, I can think of an admin who uses
a vdsm hook to set the display IP of each VM. He does not care if the
display network is up or not.

In my opinion, the meaning and the danger of non-req networks should be
properly documented and clear to customers, but some of them are
expected to find it useful.

Dan.
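A toy model of the migration-network fallback Dan describes, just to make the behavior concrete - illustrative pseudo-logic, not VDSM or engine code:

    def pick_migration_network(host_networks, migration_network,
                               management_network="ovirtmgmt"):
        """Choose the network a migration will actually travel over.

        Prefer the designated migration network when the host has it;
        otherwise fall back to the management network rather than
        blocking the migration.
        """
        if migration_network in host_networks:
            return migration_network
        return management_network

    # A host that never got the (non-required) migration network attached
    # silently migrates over the management network:
    assert pick_migration_network({"ovirtmgmt"}, "migration") == "ovirtmgmt"

This is the silent behavior Mike flags as point #2 above: the migration proceeds, but not over the network the admin designated.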
> > Each network can be attached to one or more clusters, where the attachment > > can have several properties: > > - Required/Optional - Does the network have to be on all hosts or not? > > - Usages (administrative): > > - Display network - used for the display traffic > > - Migration network - used for the migration traffic > > > > Now, what bothers me is the affinity between these two properties - if a > > network is defined "optional", can is be used for an "administrative" > > usage? > > > > Currently I can have the following situation: > > 0. Fresh install with some hosts and a shared storage, and no networks > > other than default. > > 1. Create a network X. > > 2. Attach to a cluster as "migration", "display", "optional". > > 3. Create a VM in the same cluster. > > > > Now all is well and everything is green across the board, BUT: > > 1. The VM can't be run on any host in that cluster if the host doesn't have > > the display network. > > 2. VM will migrate over the default network if the network is not present > > on the source host. > > 3. Migration will not work if the network is not present on the destination > > host. > > > > I find this situation very troublesome! > > We give the admin the impression that everything is fine and dandy, but > > underneath the surface everything is NOT. > > > > If we look at the previous points we can see that: > > 1. No VM can run in that cluster, but hosts and network seem A-OK - this is > > intrinsically awful as we don't reflect the real problem anywhere in the > > network nor the host statuses but rather postpone it until someone makes > > an attempt to actually use the VM. > > 2. Migration network is NOT being used, which was obviously not the intent > > of the admin who set it up. > > 3. There is still an open bug for it ( https://bugzilla.redhat.com/983515 ) > > and it's unclear as to what should happen, but it would be either what > > happens in case #1 or in case #2. > > > > What I suggest is to have any network with usage be "required". > > This will utilize the existing logic for required networks: > > - Either the network should not be used until its available on all hosts > > (reflected in the network status being Non-Operational) > > - Or the host should be Non-Operational as it's incapable of > > running/migrating VMs > > > > Therefore reflecting the problem to the admin and giving him a chance to > > fix it properly, and not hiding the failure until it occurs or doing some > > unexpected behavior. > > > > I would love to hear your thoughts on the subject. > > Some history first. Once upon at time, we wanted an Up host to mean > "this host is ready to run any of its cluster's VMs". This meant that if > a host lost connectivity to one of the cluster networks, it had to be > taken down. > > Customers did not like our over protection, so we've introduced > non-required networks. When an admin uses this option he says "I know > what I'm doing, let me do stuff on this host even if the network is > down." So what you're saying is non-required networks should not protect the user at all? In this case I say we shouldn't impose any limitations whatsoever in this situation, and if the VM fails to start/migrate then let it fail. > > I think that this request is a valid one, even when a network serves > other purposes than connecting VMs. When designing migration network, > we've decided that if it is missing, migration would be attempted over > the management network, as a fallback. 
I can imagine an admin who says: > I don't care much about migrations, most of my VMs are pinned-to-host > anyway. so if the migration network is gone, don't make a fuss out of > it. > > The use case for letting a host be Up even if its display network is > less obvious. But then again, I can think of an admin who uses a vdsm > hook to set the display IP of each VM. He does not care if the display > network is up or not. If the admin uses hooks for his networking needs then I don't see why he even needs this support in oVirt, so your point is not clear to me.. > > In my opinion, the meaning and the danger on non-req networks should be > properly documented and clear to customers, but some of them are > expected to find it useful. I agree, if this is our approach then it should be very very well documented. > > Dan. > From lpeer at redhat.com Tue Sep 10 07:08:46 2013 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 10 Sep 2013 10:08:46 +0300 Subject: Design issue when using optional networks for administrative usages In-Reply-To: <20130909163417.GE23100@redhat.com> References: <1784627598.11325788.1378630979139.JavaMail.root@redhat.com> <378570826.11327031.1378632620084.JavaMail.root@redhat.com> <20130909163417.GE23100@redhat.com> Message-ID: <522EC57E.7090906@redhat.com> On 09/09/2013 07:34 PM, Dan Kenigsberg wrote: > On Sun, Sep 08, 2013 at 05:30:20AM -0400, Mike Kolesnik wrote: >> Hi, >> >> I would like to hear opinions about what I consider a design issue in oVirt. >> >> First of all, a short description of the current situation in oVirt 3.3: >> Network is a data-center level entity, representing a L2 broadcast domain. >> Each network can be attached to one or more clusters, where the attachment can have several properties: >> - Required/Optional - Does the network have to be on all hosts or not? >> - Usages (administrative): >> - Display network - used for the display traffic >> - Migration network - used for the migration traffic >> >> Now, what bothers me is the affinity between these two properties - if a network is defined "optional", can is be used for an "administrative" usage? >> >> Currently I can have the following situation: >> 0. Fresh install with some hosts and a shared storage, and no networks other than default. >> 1. Create a network X. >> 2. Attach to a cluster as "migration", "display", "optional". >> 3. Create a VM in the same cluster. >> >> Now all is well and everything is green across the board, BUT: >> 1. The VM can't be run on any host in that cluster if the host doesn't have the display network. >> 2. VM will migrate over the default network if the network is not present on the source host. >> 3. Migration will not work if the network is not present on the destination host. >> >> I find this situation very troublesome! >> We give the admin the impression that everything is fine and dandy, but underneath the surface everything is NOT. >> >> If we look at the previous points we can see that: >> 1. No VM can run in that cluster, but hosts and network seem A-OK - this is intrinsically awful as we don't reflect the real problem anywhere in the network nor the host statuses but rather postpone it until someone makes an attempt to actually use the VM. >> 2. Migration network is NOT being used, which was obviously not the intent of the admin who set it up. >> 3. There is still an open bug for it ( https://bugzilla.redhat.com/983515 ) and it's unclear as to what should happen, but it would be either what happens in case #1 or in case #2. 
>> 
>> What I suggest is to have any network with usage be "required".
>> This will utilize the existing logic for required networks:
>> - Either the network should not be used until it's available on all hosts
>> (reflected in the network status being Non-Operational)
>> - Or the host should be Non-Operational as it's incapable of
>> running/migrating VMs
>> 
>> Therefore reflecting the problem to the admin and giving him a chance to
>> fix it properly, and not hiding the failure until it occurs or doing some
>> unexpected behavior.
>> 
>> I would love to hear your thoughts on the subject.
> 
> Some history first. Once upon a time, we wanted an Up host to mean
> "this host is ready to run any of its cluster's VMs". This meant that if
> a host lost connectivity to one of the cluster networks, it had to be
> taken down.
> 
> Customers did not like our over-protection, so we've introduced
> non-required networks. When an admin uses this option he says "I know
> what I'm doing, let me do stuff on this host even if the network is
> down."
> 
> I think that this request is a valid one, even when a network serves
> other purposes than connecting VMs. When designing migration network,
> we've decided that if it is missing, migration would be attempted over
> the management network, as a fallback. I can imagine an admin who says:
> I don't care much about migrations, most of my VMs are pinned-to-host
> anyway. So if the migration network is gone, don't make a fuss out of
> it.
> 

I think this is a classic case of flexibility vs. simplicity.
Usually I'm all for not over-protecting the user, but looking at this
specific case I think it makes sense to have the migration network as a
required network.

I think the motivation at the time for setting the migration network as
optional was to be able to support different migration networks for
different tenants.
I think this use case is mostly relevant for a public cloud, which is not
our target user.

If a user does not want to use migration, or wants to use it only on rare
occasions, then he does not have to define a migration network; he can use
the default, which is using the management network for migration.

The downside of having the migration network as optional is that if
someone chooses that option by mistake (or not), he has to read long
documentation, which is complicated and unintuitive, to understand the
expected behavior. (BTW, only when I put it in words did I realize how
cumbersome the current behavior is.)

> The use case for letting a host be Up even if its display network is
> down is less obvious. But then again, I can think of an admin who uses
> a vdsm hook to set the display IP of each VM. He does not care if the
> display network is up or not.
> 

I would not design behavior around the case where someone is using hooks
to override the behavior in oVirt. If the user wants, he can leave the
defaults - which is to use RHEV as the display network - and override
this configuration in hooks.

All in all I agree with Mike - I think that 'administrative' networks
should be required.

> In my opinion, the meaning and the danger of non-req networks should be
> properly documented and clear to customers, but some of them are
> expected to find it useful.
> 
> Dan.
> _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From danken at redhat.com Tue Sep 10 08:02:15 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 10 Sep 2013 09:02:15 +0100 Subject: Design issue when using optional networks for administrative usages In-Reply-To: <693349734.12138664.1378790932419.JavaMail.root@redhat.com> References: <1784627598.11325788.1378630979139.JavaMail.root@redhat.com> <378570826.11327031.1378632620084.JavaMail.root@redhat.com> <20130909163417.GE23100@redhat.com> <693349734.12138664.1378790932419.JavaMail.root@redhat.com> Message-ID: <20130910080215.GA966@redhat.com> On Tue, Sep 10, 2013 at 01:28:52AM -0400, Mike Kolesnik wrote: > > ----- Original Message ----- > > On Sun, Sep 08, 2013 at 05:30:20AM -0400, Mike Kolesnik wrote: > > > Hi, > > > > > > I would like to hear opinions about what I consider a design issue in > > > oVirt. > > > > > > First of all, a short description of the current situation in oVirt 3.3: > > > Network is a data-center level entity, representing a L2 broadcast domain. > > > Each network can be attached to one or more clusters, where the attachment > > > can have several properties: > > > - Required/Optional - Does the network have to be on all hosts or not? > > > - Usages (administrative): > > > - Display network - used for the display traffic > > > - Migration network - used for the migration traffic > > > > > > Now, what bothers me is the affinity between these two properties - if a > > > network is defined "optional", can is be used for an "administrative" > > > usage? > > > > > > Currently I can have the following situation: > > > 0. Fresh install with some hosts and a shared storage, and no networks > > > other than default. > > > 1. Create a network X. > > > 2. Attach to a cluster as "migration", "display", "optional". > > > 3. Create a VM in the same cluster. > > > > > > Now all is well and everything is green across the board, BUT: > > > 1. The VM can't be run on any host in that cluster if the host doesn't have > > > the display network. > > > 2. VM will migrate over the default network if the network is not present > > > on the source host. > > > 3. Migration will not work if the network is not present on the destination > > > host. > > > > > > I find this situation very troublesome! > > > We give the admin the impression that everything is fine and dandy, but > > > underneath the surface everything is NOT. > > > > > > If we look at the previous points we can see that: > > > 1. No VM can run in that cluster, but hosts and network seem A-OK - this is > > > intrinsically awful as we don't reflect the real problem anywhere in the > > > network nor the host statuses but rather postpone it until someone makes > > > an attempt to actually use the VM. > > > 2. Migration network is NOT being used, which was obviously not the intent > > > of the admin who set it up. > > > 3. There is still an open bug for it ( https://bugzilla.redhat.com/983515 ) > > > and it's unclear as to what should happen, but it would be either what > > > happens in case #1 or in case #2. > > > > > > What I suggest is to have any network with usage be "required". 
> > > This will utilize the existing logic for required networks: > > > - Either the network should not be used until its available on all hosts > > > (reflected in the network status being Non-Operational) > > > - Or the host should be Non-Operational as it's incapable of > > > running/migrating VMs > > > > > > Therefore reflecting the problem to the admin and giving him a chance to > > > fix it properly, and not hiding the failure until it occurs or doing some > > > unexpected behavior. > > > > > > I would love to hear your thoughts on the subject. > > > > Some history first. Once upon at time, we wanted an Up host to mean > > "this host is ready to run any of its cluster's VMs". This meant that if > > a host lost connectivity to one of the cluster networks, it had to be > > taken down. > > > > Customers did not like our over protection, so we've introduced > > non-required networks. When an admin uses this option he says "I know > > what I'm doing, let me do stuff on this host even if the network is > > down." > > So what you're saying is non-required networks should not protect the user at all? > > In this case I say we shouldn't impose any limitations whatsoever in this situation, > and if the VM fails to start/migrate then let it fail. Alona and others thought about it in the context of migration network and decided that for that case, we'd like a fallback to the management network. I believe that the main motivation was not to introduce migration blockage on upgrade from ovirt-3.2 to ovirt-3.3. On ovirt-3.2, migration was possible as long as ovirtmgmt was up, so we wanted to keep that. Even with the price of ignoring the user's request to use a designated migration network. That's the old protection-vs-comfort equilibrium - we chose for comfort, where the choice of protection is not preposterous. (I'm delving into this issue only because we have to deal with bug 975786 VM migration fails when required network, configured with migration usages is turned down). > > > > > I think that this request is a valid one, even when a network serves > > other purposes than connecting VMs. When designing migration network, > > we've decided that if it is missing, migration would be attempted over > > the management network, as a fallback. I can imagine an admin who says: > > I don't care much about migrations, most of my VMs are pinned-to-host > > anyway. so if the migration network is gone, don't make a fuss out of > > it. > > > > The use case for letting a host be Up even if its display network is > > less obvious. But then again, I can think of an admin who uses a vdsm > > hook to set the display IP of each VM. He does not care if the display > > network is up or not. > > If the admin uses hooks for his networking needs then I don't see why he > even needs this support in oVirt, so your point is not clear to me.. The user does not need ovirt's support, it needs ovirt to not get in his way. Assume the user wants to transport the display of each VM over a different IP address. And assume that he has the logic to choose this address tucked in a vdsm hook. He then does not care whether the ovirt-designated displaynetwork is up or down. Monitoring it is a liability for him. > > > > > In my opinion, the meaning and the danger on non-req networks should be > > properly documented and clear to customers, but some of them are > > expected to find it useful. > > I agree, if this is our approach then it should be very very well documented. 
It has been our approach since the introduction of non-req networks. From danken at redhat.com Tue Sep 10 08:10:32 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 10 Sep 2013 09:10:32 +0100 Subject: Design issue when using optional networks for administrative usages In-Reply-To: <522EC57E.7090906@redhat.com> References: <1784627598.11325788.1378630979139.JavaMail.root@redhat.com> <378570826.11327031.1378632620084.JavaMail.root@redhat.com> <20130909163417.GE23100@redhat.com> <522EC57E.7090906@redhat.com> Message-ID: <20130910081032.GB966@redhat.com> On Tue, Sep 10, 2013 at 10:08:46AM +0300, Livnat Peer wrote: > On 09/09/2013 07:34 PM, Dan Kenigsberg wrote: > > On Sun, Sep 08, 2013 at 05:30:20AM -0400, Mike Kolesnik wrote: > >> Hi, > >> > >> I would like to hear opinions about what I consider a design issue in oVirt. > >> > >> First of all, a short description of the current situation in oVirt 3.3: > >> Network is a data-center level entity, representing a L2 broadcast domain. > >> Each network can be attached to one or more clusters, where the attachment can have several properties: > >> - Required/Optional - Does the network have to be on all hosts or not? > >> - Usages (administrative): > >> - Display network - used for the display traffic > >> - Migration network - used for the migration traffic > >> > >> Now, what bothers me is the affinity between these two properties - if a network is defined "optional", can is be used for an "administrative" usage? > >> > >> Currently I can have the following situation: > >> 0. Fresh install with some hosts and a shared storage, and no networks other than default. > >> 1. Create a network X. > >> 2. Attach to a cluster as "migration", "display", "optional". > >> 3. Create a VM in the same cluster. > >> > >> Now all is well and everything is green across the board, BUT: > >> 1. The VM can't be run on any host in that cluster if the host doesn't have the display network. > >> 2. VM will migrate over the default network if the network is not present on the source host. > >> 3. Migration will not work if the network is not present on the destination host. > >> > >> I find this situation very troublesome! > >> We give the admin the impression that everything is fine and dandy, but underneath the surface everything is NOT. > >> > >> If we look at the previous points we can see that: > >> 1. No VM can run in that cluster, but hosts and network seem A-OK - this is intrinsically awful as we don't reflect the real problem anywhere in the network nor the host statuses but rather postpone it until someone makes an attempt to actually use the VM. > >> 2. Migration network is NOT being used, which was obviously not the intent of the admin who set it up. > >> 3. There is still an open bug for it ( https://bugzilla.redhat.com/983515 ) and it's unclear as to what should happen, but it would be either what happens in case #1 or in case #2. > >> > >> What I suggest is to have any network with usage be "required". > >> This will utilize the existing logic for required networks: > >> - Either the network should not be used until its available on all hosts (reflected in the network status being Non-Operational) > >> - Or the host should be Non-Operational as it's incapable of running/migrating VMs > >> > >> Therefore reflecting the problem to the admin and giving him a chance to fix it properly, and not hiding the failure until it occurs or doing some unexpected behavior. > >> > >> I would love to hear your thoughts on the subject. > > > > Some history first. 
Once upon at time, we wanted an Up host to mean > > "this host is ready to run any of its cluster's VMs". This meant that if > > a host lost connectivity to one of the cluster networks, it had to be > > taken down. > > > > Customers did not like our over protection, so we've introduced > > non-required networks. When an admin uses this option he says "I know > > what I'm doing, let me do stuff on this host even if the network is > > down." > > > > I think that this request is a valid one, even when a network serves > > other purposes than connecting VMs. When designing migration network, > > we've decided that if it is missing, migration would be attempted over > > the management network, as a fallback. I can imagine an admin who says: > > I don't care much about migrations, most of my VMs are pinned-to-host > > anyway. so if the migration network is gone, don't make a fuss out of > > it. > > > > I think this is a classic case of flexibility vs. simplicity. > Usually I'm all for not over-protecting the user but looking on this > specific case I think it make sense to have migration network as > required network. > > I think the motivation at the time for setting the migration network as > optional was to be able to support different migration network to > different tenets. > I think this use case is mostly relevant for public cloud which is not > our target user. > > If a user does not want to use migration or want to use it on rare > occasions then he does not have to define migration network, he can use > the defaults which is the using the management network for migration. > > The down side of having the migration network as optional is that if > someone chooses that option by mistake (or not) he had to read long > documentation which are complicated and not intuitive to understand what > is the expected behavior. (BTW only when I put it in words I realized > how cumbersome the current behavior is) > > > The use case for letting a host be Up even if its display network is > > less obvious. But then again, I can think of an admin who uses a vdsm > > hook to set the display IP of each VM. He does not care if the display > > network is up or not. > > > > I would not design behavior for the case someone is using hooks to > override the behavior in oVirt. We do not design a behavior for this case. We already had the behavior specifically asked by users for VMs. I suggest, again, to be open for more use cases. If we see that users ask us to nanny them, and to limit their ability to choose non-req networks, let's do it. Currently, I do not find the danger as a real one. Dan. From lpeer at redhat.com Tue Sep 10 09:21:06 2013 From: lpeer at redhat.com (Livnat Peer) Date: Tue, 10 Sep 2013 12:21:06 +0300 Subject: Design issue when using optional networks for administrative usages In-Reply-To: <20130910080215.GA966@redhat.com> References: <1784627598.11325788.1378630979139.JavaMail.root@redhat.com> <378570826.11327031.1378632620084.JavaMail.root@redhat.com> <20130909163417.GE23100@redhat.com> <693349734.12138664.1378790932419.JavaMail.root@redhat.com> <20130910080215.GA966@redhat.com> Message-ID: <522EE482.9030200@redhat.com> On 09/10/2013 11:02 AM, Dan Kenigsberg wrote: > On Tue, Sep 10, 2013 at 01:28:52AM -0400, Mike Kolesnik wrote: >> >> ----- Original Message ----- >>> On Sun, Sep 08, 2013 at 05:30:20AM -0400, Mike Kolesnik wrote: >>>> Hi, >>>> >>>> I would like to hear opinions about what I consider a design issue in >>>> oVirt. 
>>>> >>>> First of all, a short description of the current situation in oVirt 3.3: >>>> Network is a data-center level entity, representing a L2 broadcast domain. >>>> Each network can be attached to one or more clusters, where the attachment >>>> can have several properties: >>>> - Required/Optional - Does the network have to be on all hosts or not? >>>> - Usages (administrative): >>>> - Display network - used for the display traffic >>>> - Migration network - used for the migration traffic >>>> >>>> Now, what bothers me is the affinity between these two properties - if a >>>> network is defined "optional", can is be used for an "administrative" >>>> usage? >>>> >>>> Currently I can have the following situation: >>>> 0. Fresh install with some hosts and a shared storage, and no networks >>>> other than default. >>>> 1. Create a network X. >>>> 2. Attach to a cluster as "migration", "display", "optional". >>>> 3. Create a VM in the same cluster. >>>> >>>> Now all is well and everything is green across the board, BUT: >>>> 1. The VM can't be run on any host in that cluster if the host doesn't have >>>> the display network. >>>> 2. VM will migrate over the default network if the network is not present >>>> on the source host. >>>> 3. Migration will not work if the network is not present on the destination >>>> host. >>>> >>>> I find this situation very troublesome! >>>> We give the admin the impression that everything is fine and dandy, but >>>> underneath the surface everything is NOT. >>>> >>>> If we look at the previous points we can see that: >>>> 1. No VM can run in that cluster, but hosts and network seem A-OK - this is >>>> intrinsically awful as we don't reflect the real problem anywhere in the >>>> network nor the host statuses but rather postpone it until someone makes >>>> an attempt to actually use the VM. >>>> 2. Migration network is NOT being used, which was obviously not the intent >>>> of the admin who set it up. >>>> 3. There is still an open bug for it ( https://bugzilla.redhat.com/983515 ) >>>> and it's unclear as to what should happen, but it would be either what >>>> happens in case #1 or in case #2. >>>> >>>> What I suggest is to have any network with usage be "required". >>>> This will utilize the existing logic for required networks: >>>> - Either the network should not be used until its available on all hosts >>>> (reflected in the network status being Non-Operational) >>>> - Or the host should be Non-Operational as it's incapable of >>>> running/migrating VMs >>>> >>>> Therefore reflecting the problem to the admin and giving him a chance to >>>> fix it properly, and not hiding the failure until it occurs or doing some >>>> unexpected behavior. >>>> >>>> I would love to hear your thoughts on the subject. >>> >>> Some history first. Once upon at time, we wanted an Up host to mean >>> "this host is ready to run any of its cluster's VMs". This meant that if >>> a host lost connectivity to one of the cluster networks, it had to be >>> taken down. >>> >>> Customers did not like our over protection, so we've introduced >>> non-required networks. When an admin uses this option he says "I know >>> what I'm doing, let me do stuff on this host even if the network is >>> down." >> >> So what you're saying is non-required networks should not protect the user at all? >> >> In this case I say we shouldn't impose any limitations whatsoever in this situation, >> and if the VM fails to start/migrate then let it fail. 
> 
> Alona and others thought about it in the context of migration network
> and decided that for that case, we'd like a fallback to the management
> network. I believe that the main motivation was not to introduce
> migration blockage on upgrade from ovirt-3.2 to ovirt-3.3. On ovirt-3.2,
> migration was possible as long as ovirtmgmt was up, so we wanted to keep
> that. Even with the price of ignoring the user's request to use a
> designated migration network.
> That's the old protection-vs-comfort equilibrium - we chose for comfort,
> where the choice of protection is not preposterous.
> (I'm delving into this issue only because we have to deal with bug
> 975786 VM migration fails when required network, configured with
> migration usages is turned down).
> 

I think that VM networks and administrative networks are substantially
different.
In the case of an administrative network it makes the system behavior
unpredictable and cumbersome; since this was never asked for by our
users, I don't see the need to complicate the common use case.

>> 
>>> 
>>> I think that this request is a valid one, even when a network serves
>>> other purposes than connecting VMs. When designing migration network,
>>> we've decided that if it is missing, migration would be attempted over
>>> the management network, as a fallback. I can imagine an admin who says:
>>> I don't care much about migrations, most of my VMs are pinned-to-host
>>> anyway. So if the migration network is gone, don't make a fuss out of
>>> it.
>>> 
>>> The use case for letting a host be Up even if its display network is
>>> down is less obvious. But then again, I can think of an admin who uses
>>> a vdsm hook to set the display IP of each VM. He does not care if the
>>> display network is up or not.
>> 
>> If the admin uses hooks for his networking needs then I don't see why he
>> even needs this support in oVirt, so your point is not clear to me.
> 
> The user does not need ovirt's support, it needs ovirt to not get in his way.
> Assume the user wants to transport the display of each VM over a
> different IP address. And assume that he has the logic to choose this
> address tucked in a vdsm hook. He then does not care whether the
> ovirt-designated displaynetwork is up or down. Monitoring it is a
> liability for him.
> 

As I explained above, in that case the user can use the management network
as the display network (which is the default); that would not block him
while he rewrites the display network in the hook.

>> 
>>> 
>>> In my opinion, the meaning and the danger of non-req networks should be
>>> properly documented and clear to customers, but some of them are
>>> expected to find it useful.
>> 
>> I agree, if this is our approach then it should be very very well documented.

> It has been our approach since the introduction of non-req networks.

From danken at redhat.com  Tue Sep 10 11:57:37 2013
From: danken at redhat.com (Dan Kenigsberg)
Date: Tue, 10 Sep 2013 12:57:37 +0100
Subject: Design issue when using optional networks for administrative usages
In-Reply-To: <522EE482.9030200@redhat.com>
References: <1784627598.11325788.1378630979139.JavaMail.root@redhat.com> <378570826.11327031.1378632620084.JavaMail.root@redhat.com> <20130909163417.GE23100@redhat.com> <693349734.12138664.1378790932419.JavaMail.root@redhat.com> <20130910080215.GA966@redhat.com> <522EE482.9030200@redhat.com>
Message-ID: <20130910115737.GD16447@redhat.com>

On Tue, Sep 10, 2013 at 12:21:06PM +0300, Livnat Peer wrote:
> On 09/10/2013 11:02 AM, Dan Kenigsberg wrote:
> > On Tue, Sep 10, 2013 at 01:28:52AM -0400, Mike Kolesnik wrote:
> > > [...]
>
> I think that VM networks and administrative networks are substantially
> different.
> In the case of an administrative network it makes the system behavior
> unpredictable and cumbersome; since this was never asked for by our
> users, I don't see the need to complicate the common use case.

Maybe Simon Grinberg would chime in. He once requested to have a
migration network that is VM-specific. I have a vague memory of a
customer asking for a VM-specific display network. And once we have
storage networks - are they considered "administrative" and must-have
on all hosts, or would you allow an admin to be more flexible?

Not all "administrative networks" are alike. Some are important, some
are less critical.

> [...]
>
> As I explained above, in that case the user can use the management
> network as the display network (which is the default), and it would
> not block him while he rewrites the display network in the hook.

It's easy to think of an admin who wants to have some of his
non-critical VMs use a non-ovirtmgmt network for display, and do an
uber-complex hook for several critical VMs. He likes to use a display
network, but would not want to take a host down if the net is gone.

From dcaroest at redhat.com  Tue Sep 10 16:00:23 2013
From: dcaroest at redhat.com (David Caro)
Date: Tue, 10 Sep 2013 18:00:23 +0200
Subject: Today maintenance window
Message-ID: <522F4217.6070608@redhat.com>

Hi everyone,

Today at 21:30 CEST, 22:30 IDT, the rackspace02 machine will be rebooted.

The downtime will affect only some Jenkins slaves (f18 and centos) and
will last less than 30 min; I'll notify you before starting and after
finishing.

We will have installed a new 2TB disk - no more out-of-space issues ;)

Thanks!

--
David Caro

Red Hat Czech s.r.o.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dcaro at redhat.com
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
RHT Global #: 82-62605

From dcaroest at redhat.com  Tue Sep 10 19:30:20 2013
From: dcaroest at redhat.com (David Caro Estevez)
Date: Tue, 10 Sep 2013 15:30:20 -0400 (EDT)
Subject: Today maintenance window
In-Reply-To: <522F4217.6070608@redhat.com>
References: <522F4217.6070608@redhat.com>
Message-ID: <1195056765.12388319.1378841420690.JavaMail.root@redhat.com>

Maintenance window started!

----- Original Message -----
> From: "David Caro"
> To: "infra" , arch at ovirt.org
> Sent: Tuesday, September 10, 2013 6:00:23 PM
> Subject: Today maintenance window
>
> [...]

From dcaroest at redhat.com  Tue Sep 10 20:16:42 2013
From: dcaroest at redhat.com (David Caro)
Date: Tue, 10 Sep 2013 22:16:42 +0200
Subject: Today maintenance window
In-Reply-To: <1195056765.12388319.1378841420690.JavaMail.root@redhat.com>
References: <522F4217.6070608@redhat.com> <1195056765.12388319.1378841420690.JavaMail.root@redhat.com>
Message-ID: <522F7E2A.4060906@redhat.com>

On Tue 10 Sep 2013 09:30:20 PM CEST, David Caro Estevez wrote:
> Maintenance window started!
>
> ----- Original Message -----
>> [...]

Maintenance finished!

We have now a new shiny 2TB disk on rackspace02 :)

--
David Caro

Red Hat Czech s.r.o.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dcaro at redhat.com
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
RHT Global #: 82-62605

From mkolesni at redhat.com  Thu Sep 12 04:56:37 2013
From: mkolesni at redhat.com (Mike Kolesnik)
Date: Thu, 12 Sep 2013 00:56:37 -0400 (EDT)
Subject: Design issue when using optional networks for administrative usages
In-Reply-To:
References: <1784627598.11325788.1378630979139.JavaMail.root@redhat.com> <378570826.11327031.1378632620084.JavaMail.root@redhat.com> <20130909163417.GE23100@redhat.com> <693349734.12138664.1378790932419.JavaMail.root@redhat.com> <20130910080215.GA966@redhat.com> <522EE482.9030200@redhat.com> <20130910115737.GD16447@redhat.com>
Message-ID: <913810477.13269644.1378961797207.JavaMail.root@redhat.com>

----- Original Message -----
>
> On Sep 10, 2013, at 2:57 PM, Dan Kenigsberg wrote:
>
> > On Tue, Sep 10, 2013 at 12:21:06PM +0300, Livnat Peer wrote:
> > [...]
> >
> > It's easy to think of an admin who wants to have some of his
> > non-critical VMs use a non-ovirtmgmt network for display, and do an
> > uber-complex hook for several critical VMs. He likes to use a display
> > network, but would not want to take a host down if the net is gone.
>
> Indeed,
> I think that my opinion is known - I didn't like the notion of the
> not-required network in the first place. I think it's a poor
> substitute for dynamic networks.

I agree, but currently there are no dynamic networks that can be used
for the "VM services" networks that you mention.

> A network is either dynamic or static, where:
> Static = expect to find the network there
> Dynamic = expect this to be created on a per-need basis
> Both should be part of the host configuration, while dynamic is a
> placeholder that says this host is capable of accommodating this
> dynamic network.
>
> Another missing notion is the redundancy group.
> Networks may be part of redundancy groups; no redundancy group setting
> means a redundancy group of one.
>
> As I see it there are a few types of networks:
> 1. Facilities (Storage, Management, other) - static by nature
> 2. VM services (Migration, Display)
> 3. VM connectivity networks
> The first group should affect host operational status - meaning they
> are required (as on failure the host just can't run ANY VM) - but
> should not affect the host's operational state unless all the networks
> in the redundancy group are down.
> The last group are not required by nature; they could be dynamic and
> be set up on demand, or be static, based on the underlying technology
> (like Mellanox vs. Linux bridge), and should only affect scheduling
> decisions.
>
> Now let's discuss the confusing group #2 - VM services. I claim that
> they are basically not required, since like the VM networks they
> affect a certain aspect of the VM.

They affect an aspect of the VM, that's correct, but they're not
explicitly used by it, which makes them different from "VM" networks.

> - Display: I've raised in the past that this should be a per-VM
> property that takes its default from the cluster's display default,
> but can be any network in the cluster.
> -- None should be an allowed value (KVM supports a VM with no display
> device)
> -- If set, then only allow running the VM on hosts that support the
> selected network (either as dynamic or static)
>
> This also answers an RFE to support multiple display networks for
> multi-tenancy, where multiple tenants may even be two departments
> using different VLANs in the organization.

Do you know if such an RFE is filed?

> - Migration: Again, a per-VM property that takes its default from the
> cluster's migration network default, but can be any network in the
> cluster.
> -- None should be allowed; it's like "never migrate".
> -- If set, then if the network is down the worst case is that the VM
> can't be migrated via this network
> --- on manual migration, ask to select a network for the migration
> --- on host maintenance, use the cluster default - if it's not
> available then the maintenance will fail
>
> I hope this makes sense (and that arch@ won't reject this mail)

I think that currently we can ease the life of the admin by being more
strict; it's not nannying him, IMHO, but giving him the right status
for his data center.

In the future, when and if we decide to implement these ideas (which
are currently not implemented), we can revisit and probably find a
better way to accomplish this.

> Regards,
> Simon.
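
Simon's per-VM defaulting scheme above can be stated precisely in a few
lines. A rough Python sketch - every name here is invented, since, as
noted in the reply, none of this exists in oVirt today:

    # Sketch of Simon's proposal: per-VM display/migration network
    # properties that default to the cluster-level setting.
    # Invented structures, not oVirt code.

    UNSET = object()  # per-VM property never set; use the cluster default

    def resolve_network(vm_value, cluster_default):
        """Per-VM value wins; an explicit None is legal (no display
        device / never migrate); otherwise fall back to the cluster
        default."""
        return cluster_default if vm_value is UNSET else vm_value

    def candidate_hosts(hosts, network):
        """Only schedule the VM on hosts that carry the resolved network
        (statically configured, or able to set it up dynamically)."""
        if network is None:
            return list(hosts)  # feature disabled for this VM
        return [h for h in hosts if network in h["networks"]]

    hosts = [{"name": "host1", "networks": {"ovirtmgmt", "display-net"}},
             {"name": "host2", "networks": {"ovirtmgmt"}}]
    display = resolve_network(UNSET, cluster_default="display-net")
    print([h["name"] for h in candidate_hosts(hosts, display)])  # ['host1']

Under this scheme the network stops gating the host's status and only
gates scheduling, which is exactly the split Simon describes for his
group #2.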

From dfediuck at redhat.com  Sun Sep 15 15:01:37 2013
From: dfediuck at redhat.com (Doron Fediuck)
Date: Sun, 15 Sep 2013 11:01:37 -0400 (EDT)
Subject: Network traffic shaping.
In-Reply-To: <52037ECD.2040307@redhat.com>
References: <1809186597.402470.1373280341185.JavaMail.root@redhat.com> <51FF5E8A.4010409@redhat.com> <927454428.13264363.1375958897072.JavaMail.root@redhat.com> <52037ECD.2040307@redhat.com>
Message-ID: <1435742337.21538357.1379257297265.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Lior Vernia"
> To: "Giuseppe Vallarelli"
> Cc: arch at ovirt.org
> Sent: Thursday, August 8, 2013 2:19:41 PM
> Subject: Re: Network traffic shaping.
>
> On 08/08/13 13:48, Giuseppe Vallarelli wrote:
> > ----- Original Message -----
> > | From: "Lior Vernia"
> > | To: "Giuseppe Vallarelli"
> > | Cc: arch at ovirt.org
> > | Sent: Monday, August 5, 2013 10:12:58 AM
> > | Subject: Re: Network traffic shaping.
> > |
> > | Hey Giuseppe and everyone else,
> >
> > Hello Lior,
> >
> > |
> > | Sorry for being late to the party. I've read all the e-mails and
> > | have been rolling the idea around in my head for a couple of days.
> > | Here are my two main thoughts, more UX-oriented; let me know what
> > | you think.
> > |
> > | 1. I would prefer not to be able to create a host network QoS
> > | entity, which doesn't really have any significance as an
> > | independent entity. However, I would like to be able to copy the
> > | configuration from one host to another for the same network,
> > | right? So how about we add a "Copy/Clone from" UI field that lets
> > | you choose a host from which to copy the QoS configuration for
> > | that network?
> > |
> > | This would appear next to the manual configuration, so users would
> > | still be able to input other custom values if they prefer. Once we
> > | do this, we also won't need to allow defining it on a per-network
> > | basis, where it doesn't really make sense; we could do with just
> > | defining it for pairs (i.e., say, in the edit network dialog when
> > | attaching a network to a host NIC).
> >
> > I like your idea and I think it's a good simplification overall.
> >
> > | To further clarify, copying/cloning would be INSTEAD OF creating a
> > | Network QoS through some subtab, naming it, and then picking it in
> > | a list box. There would be no way to create a named host network
> > | QoS configuration.
> >
> > I thought it was the right approach simply because it's what has
> > been implemented already in: http://www.ovirt.org/Features/Network_QoS
> > (I'll refer to it as Network_QoS from now on)
> >
> > I didn't want to have 2 different ways to create what I would call
> > simply QoS (the infamous 6 values) that can be applied to a host
> > network, a vnic and so on. That's where the idea of associating a
> > predefined QoS comes from.
> >
> > | 2. I would prefer not to have to fill six fields to define QoS.
> > | Even if there are default values for these fields, it makes it
> > | look complicated.
> >
> > I'm completely with you on this - libvirt only requires the average
> > attribute; burst and peak are optional. But again, for uniformity of
> > behaviour with Network_QoS I opted to have all 6 user-defined.
> > Perhaps the uniform behaviour is misplaced in this case; I thought
> > that it might be confusing for the user to provide 6 values in
> > Network_QoS and only one, average, for QoS on the host network side.
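
For readers joining here: the "6 values" are libvirt's inbound/outbound
average, peak and burst. A minimal Python sketch of the one-typical-value
idea being discussed - the expansion heuristic below is invented for
illustration, not something oVirt or libvirt prescribes:

    def expand_qos(typical_mbit):
        """Expand one 'typical' link figure into the six libvirt
        bandwidth values (average/peak/burst, inbound and outbound).
        The 2x-peak and one-second-burst heuristics are illustrative."""
        average = typical_mbit * 1000 // 8   # libvirt counts kilobytes/sec
        shaping = {"average": average,       # sustained rate
                   "peak": average * 2,      # short-term ceiling
                   "burst": average}         # ~one second of data, in KB
        return {"inbound": dict(shaping), "outbound": dict(shaping)}

    print(expand_qos(100))  # one field for the user, six for libvirt

The user would fill one field; the other five would be derived
transparently, with all six still reachable under advanced settings.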
> >
> > I discussed having only one compulsory value in a different thread
> > ("network and vnic qos"), but Doron gave me a rationale (SLA-related)
> > for having all six values defined.
>
> Yeah, I saw that discussion and I see Doron's point. I just wanna
> clarify that all 6 values would still be set as per those comments -
> but this assignment would be transparent to the users, unless they
> access the "advanced settings" (still not sure how that should be
> accessed).
>
> >
> > | I think these six values could be replaced by just one typical
> > | value for the network's traffic. The six-field configuration would
> > | still be accessible somehow, but I don't want it to be necessary.
> >
> > +1
> >
> > |
> > | Regarding the empty values discussion, I'm not saying to leave the
> > | values empty. I get why we want to fill them. But fill them
> > | ourselves in some reasonable way that users won't be aware of
> > | unless they go into advanced settings.
> > |
> > | An alternative might be to allow two values, one for inbound
> > | traffic and another for outbound traffic. However, I think this
> > | would only be necessary if a user wants to actually manage both
> > | inbound and outbound traffic in detail, which sounds to me like
> > | the uncommon use case. In general people would just want to avoid
> > | host traffic, either inbound or outbound, being taken over by one
> > | network.
> >
> > That's a good idea. My only concern is how it fits with QoS in the
> > engine as a whole. It's almost like we have different definitions of
> > QoS, which might be fine, but maybe we want to find a different
> > naming convention.
>
> I'm all for a different naming convention anyway, as we do not really
> provide network QoS at the moment - networks are DC-wide entities and
> we can't guarantee anything about what happens when packets leave the
> host. I actually think "Host Traffic Shaping" is pretty good; I would
> maybe call it "Host Traffic Control", because control is a simpler
> word to fathom than shaping.

Actually, this feature provides an upper limit, which enables QoS by
preventing starvation. So QoS is definitely relevant here.
With regard to the "traffic *" name - it implies an implementation,
while QoS may be set by other means (such as defining it in the network
device). So I still keep my original definition that this should be
host-level network QoS.

> > | And again, distinguishing between inbound and outbound would still
> > | be accessible through some advanced settings.
> > |
> > | Lior.
> >
> > Thanks for the contribution - I hope my feedback will help.
> >
> > Giuseppe
> >
> > | On 08/07/13 13:45, Giuseppe Vallarelli wrote:
> > | > Hi everybody, I'm working to implement traffic shaping at the
> > | > network level [1].
> > | > This feature is composed of two distinct parts: definition of
> > | > traffic shaping for a logical network entity, and optional
> > | > redefinition of traffic shaping when the user is doing a Setup
> > | > Host Networks task. Initial focus will be on the first part.
> > | > There are some points of contact with Network QoS [2], which is
> > | > why I proposed to reuse some backend code.
> > | >
> > | > Cheers, Giuseppe
> > | >
> > | > [1] http://www.ovirt.org/Features/Network_traffic_shaping
> > | > [2] http://www.ovirt.org/Features/Network_QoS

From iheim at redhat.com  Mon Sep 16 23:21:40 2013
From: iheim at redhat.com (Itamar Heim)
Date: Tue, 17 Sep 2013 02:21:40 +0300
Subject: oVirt Updates - September 16th
Message-ID: <52379284.7060303@redhat.com>

Summer vacations and holidays, but many updates...

Conferences/Talks
- Greg Padgett will cover the new SLA mechanism in oVirt at
  LinuxCon/CloudOpen NA (September 17th, New Orleans)
- a lot of sessions around LinuxCon Europe / KVM Forum, plus the annual
  developer summit. Dave Neary created this helpful page:
  http://www.ovirt.org/KVM_Forum_2013

3.3
- the release is going out any minute now... looks like an exciting one,
  packed with features!
- deep dives into some of the new 3.3 features, if you missed them, are
  available at the end of this page:
  http://www.ovirt.org/OVirt_3.3_release_notes
- a note on the native GlusterFS storage domain and .el6: CentOS/RHEL 6.4
  don't have qemu/libvirt packages supporting it; these are expected with
  6.5. We are trying to find a solution prior to 6.5.
- Zhou Zheng Sheng graciously created guest agent packages for Ubuntu and
  Debian [1]
- heads up that we're planning a somewhat larger 3.3.1, which will rebase
  and undergo some more testing, followed by more frequent 3.3.x updates
  for issues found
- Jason Brooks blogged "Testing oVirt 3.3 with Nested KVM" and "Up and
  Running with oVirt 3.3":
  http://community.redhat.com/testing-ovirt-3-3-with-nested-kvm
  http://community.redhat.com/up-and-running-with-ovirt-3-3/

3.4
- I started a thread on "what do you want next" - I will collect and
  publish the results separately. Feel free to chime in [2]

Case Studies
- Dave Neary worked with Alter Way to publish one:
  http://www.ovirt.org/Alter_Way_case_study
- We'd love to see more of these - please ping dneary at redhat.com

Kimchi
- Adam Litke followed up on the vote to accept Kimchi as an incubated
  project. Kimchi is a lightweight, single-machine virt manager -
  simplified, HTML5, etc.
- Mike Burns reported they are already trying to build a standalone
  ovirt-node with it.
- Adam will present Kimchi at the KVM Forum (see Conferences above)

Other
- René Koch asked for early testing of a new version of the Nagios
  plugin [3]
- Karli Sjöberg published screenshots of their "order portal" [4]
- (YouTube) how to use a Glance image with oVirt, by Hervé Leclerc:
  http://www.youtube.com/watch?v=4yeopUKjjnY

Thanks,
   Itamar

[1] http://lists.ovirt.org/pipermail/users/2013-September/016308.html
[2] http://lists.ovirt.org/pipermail/users/2013-August/015807.html
[3] http://lists.ovirt.org/pipermail/users/2013-August/015779.html
[4] http://lists.ovirt.org/pipermail/users/2013-August/015619.html

From mburns at redhat.com  Tue Sep 17 12:00:44 2013
From: mburns at redhat.com (Mike Burns)
Date: Tue, 17 Sep 2013 08:00:44 -0400 (EDT)
Subject: Cancelled: oVirt Weekly Meeting
Message-ID: <1170741728.14960904.1379419244499.JavaMail.root@redhat.com>

You have been removed from the attendee list by the organizer.
Cancelled instance occurring on: Tuesday, September 17, 2013 9:00 AM GMT-0400

From mburns at redhat.com  Tue Sep 17 12:01:35 2013
From: mburns at redhat.com (Mike Burns)
Date: Tue, 17 Sep 2013 08:01:35 -0400 (EDT)
Subject: Cancelled: oVirt Weekly Meeting
Message-ID: <169987693.14961374.1379419295615.JavaMail.root@redhat.com>

You have been removed from the attendee list by the organizer.

Cancelled instance occurring on: Wednesday, September 25, 2013 10:00 AM GMT-0400

From danken at redhat.com  Wed Sep 18 23:04:44 2013
From: danken at redhat.com (Dan Kenigsberg)
Date: Thu, 19 Sep 2013 00:04:44 +0100
Subject: rebasing for oVirt 3.3.1?
In-Reply-To: <522DBC5F.8000403@redhat.com>
References: <522DBC5F.8000403@redhat.com>
Message-ID: <20130918230443.GA29084@redhat.com>

On Mon, Sep 09, 2013 at 03:17:35PM +0300, Itamar Heim wrote:
> with 3.3.0 coming soon, one of the questions I heard is "what about
> 3.3.1", considering the number of patches for bugs that went into the
> master branch since we branched to stabilize 3.3.0 (i.e., most of the
> work in the master branch has been focused on bug fixes).
>
> so my suggestion for 3.3.1 is that we rebase from master, then move
> to backporting patches to that branch for the rest of the 3.3 time
> frame.
>
> while this poses a small risk, i believe it's the best course forward
> to making ovirt 3.3 a more robust and stable version going forward.
>
> this is mostly about ovirt-engine, and probably vdsm. for the other
> projects, it's up to the maintainer, based on risk/benefit.

To make this happen for Vdsm, we need to slow things down a bit,
stabilize what we have, and test it out.

Most of our work since ovirt-3.3 was bug fixing (23 patches), but some
of the 101 patches we've got are related to refactoring (19), cleanups
(27), test improvements (21), behind-the-scenes features (6), and
visible features (5).

Refactoring included Zhou Zheng Sheng's Ubuntu-readiness patches, which
may still incur surprises in the sysV/systemd/upstart service framework,
and changes to how network configurators are to be used.

Behind-the-scenes features include speedups to block-based storage:
- One shot teardown.
- Avoid Img and Vol produces in fileVolume.getV*Size
- Make lvm.listPVNames() be based on vgs information.
- One shot prepare.
- Introduce lvm short filters.

Visible features are few, and only one of them:
- clientIF: automatically unpause vms in EIO when SD becomes active
carries some kind of risk to a timely release. The rest of them are:
- Support for multiple heads for Qxl display device
- Add support for direct setting of cpu_shares when creating a VM
- Introducing hidden_vlans configurable.
- macspoof hooks: new hook script to enable macspoof filtering per vnic.

I think we can release vdsm-4.13.0 within a week if we put a hold on
new features and big changes, and put enough effort into testing the
most-changed areas:
- service framework
- VM lifecycle over block storage (including auto unpause)
- network configuration

Then, we could release vdsm-4.13.z without risking the stability of
ovirt-3.3.1.
Let's do it! Dan.
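
The one visible feature flagged above as carrying release risk -
automatically unpausing VMs that hit EIO once their storage domain
becomes active again - reduces to a small state transition. A hedged
Python sketch, with invented names; the real vdsm clientIF code is
event-driven and more involved:

    # Sketch of "auto-unpause VMs in EIO when the SD becomes active".
    # Invented structures - not the actual vdsm implementation.

    PAUSED_EIO = "paused-eio"
    RUNNING = "running"

    class VM:
        def __init__(self, name, domain):
            self.name, self.domain, self.state = name, domain, RUNNING

        def cont(self):
            self.state = RUNNING  # stands in for the libvirt resume call

    def on_domain_state_change(vms, domain, active):
        """When a storage domain turns active again, resume every VM
        that was paused with an I/O error on that domain."""
        if not active:
            return
        for vm in vms:
            if vm.state == PAUSED_EIO and vm.domain == domain:
                vm.cont()

    vm = VM("web01", domain="sd-1")
    vm.state = PAUSED_EIO               # the VM hit EIO while sd-1 was down
    on_domain_state_change([vm], "sd-1", active=True)
    print(vm.state)                     # running

The risk noted above comes from the interaction with storage monitoring:
the resume must fire only on a genuine domain recovery, which is exactly
the "VM lifecycle over block storage" area singled out for testing.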