From liuyang3240 at gmail.com Thu Dec 1 08:29:23 2011 From: liuyang3240 at gmail.com (Kevein Liu) Date: Thu, 01 Dec 2011 16:29:23 +0800 Subject: ovirt-release package In-Reply-To: <32cd1bd2-daa2-4bec-994a-fe0a8a9a53e9@zmail15.collab.prod.int.phx2.redhat.com> References: <32cd1bd2-daa2-4bec-994a-fe0a8a9a53e9@zmail15.collab.prod.int.phx2.redhat.com> Message-ID: <4ED73AE3.8000606@gmail.com> Hello Steve, Where is the installation documentation available now? How do I deploy via RPM? Thanks! Regards! Kevein On 11/30/2011 04:26 AM, Steve Gordon wrote: > Hi all, > > In last week's sync up meeting we again discussed the possibility of creating an RPM to deliver the yum repository file (which I have had a crack at providing, but more on that in a moment). A few questions came up which I'd like to also put to the list with my own thoughts, hopefully others can provide theirs. > > ------ > > Q: Why provide an RPM for this? The user still has to download the RPM to install it! > A: The benefit of providing the repository file wrapped in an RPM as I see it is that the user only has to manually download the RPM once; if (more likely when, in my opinion) we need to change the repo file and/or add GPG keys etc., they pick up the updated ovirt-release RPM via yum update. If we don't provide it in an RPM then they will have to manually wget the updated repo file, and keys if applicable, and we would need a way to communicate to users that it was in fact necessary to do this. > > ------ > > Q: Would we need to get this RPM into Fedora? > A: No, this repo file (and RPM) is for users wishing to track releases and nightly builds as they are put up by the oVirt Project. As I understand it the various oVirt components are not in any way tied to a particular distribution's release schedule. 
As such it seems probable that there will be times when the release available at ovirt.org is more up to date than the version that is packaged in Fedora; obviously with nightly builds this will almost always be the case. > > ------ > > Q: What does the repo file actually look like? > A: There are two variations at the moment, the one initially linked from the wiki: > > http://www.ovirt.org/releases/nightly/fedora/16/ovirt-engine.repo > > And one that Karsten I think had been working on as a result of the previous meeting: > > http://ovirt.org/wiki/Yum_repo_file > > The latter is probably closer to what we need long term, in particular supporting both nightly and stable releases and using variables for the release version and architecture. At the moment however the repository doesn't support this level of granularity, as we seem to have lumped x86_64 and noarch packages in the same directory? Is this something we intend to change long term? Where are we planning to place SRPMs? > > ------ > > Q: Where would this RPM exist in the repository? Where would the spec file live? > A: This was the open question that was posed at the last meeting and probably the bit I need help with. I created a spec file based on the one used for fedora-release which I put on the wiki page for want of a better place: > > http://www.ovirt.org/wiki/Yum_repo_file#Spec_File > > To get the benefits I talked about earlier in this message the resultant RPM really needs to both: > 1) Exist in the repository itself so that once it has installed itself users get updates to it. > 2) Be easily discoverable, probably linked from whatever 'Get oVirt!' style page we have under Fedora. > > The question really I guess is how does it get into the repository? > > ------ > > What do other people think? 
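For concreteness, a repo file along the lines being discussed might look something like the sketch below; the section names, URLs, and key path are hypothetical, the point is just the $releasever/$basearch yum variables that let one file serve every release and architecture:

```ini
# Hypothetical ovirt.repo sketch -- paths and GPG key URL are illustrative,
# not the project's actual repository layout.
[ovirt-stable]
name=oVirt stable releases
baseurl=http://www.ovirt.org/releases/stable/fedora/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://www.ovirt.org/releases/RPM-GPG-KEY-ovirt

[ovirt-nightly]
name=oVirt nightly builds
baseurl=http://www.ovirt.org/releases/nightly/fedora/$releasever/$basearch/
enabled=0
gpgcheck=0
```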
I realise that the above could easily be construed as bike shedding, but I think that even though these are relatively simple matters it is important to flesh them out now, particularly if we are looking to make a "release" (whatever that entails) in the near future. > > -Steve > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch -- Best Regards! Kevein Liu From iheim at redhat.com Thu Dec 1 10:55:45 2011 From: iheim at redhat.com (Itamar Heim) Date: Thu, 01 Dec 2011 12:55:45 +0200 Subject: ovirt-release package In-Reply-To: <4ED73AE3.8000606@gmail.com> References: <32cd1bd2-daa2-4bec-994a-fe0a8a9a53e9@zmail15.collab.prod.int.phx2.redhat.com> <4ED73AE3.8000606@gmail.com> Message-ID: <4ED75D31.5060108@redhat.com> On 12/01/2011 10:29 AM, Kevein Liu wrote: > Hello Steve, > > Where is the installation documentation available now? How do I deploy via RPM? http://www.ovirt.org/wiki/Installing_ovirt-engine_from_rpm > Thanks! > Regards! > Kevein > > On 11/30/2011 04:26 AM, Steve Gordon wrote: >> Hi all, >> >> In last week's sync up meeting we again discussed the possibility of >> creating an RPM to deliver the yum repository file (which I have had a >> crack at providing, but more on that in a moment). A few questions >> came up which I'd like to also put to the list with my own thoughts, >> hopefully others can provide theirs. >> >> ------ >> >> Q: Why provide an RPM for this? The user still has to download the RPM >> to install it! >> A: The benefit of providing the repository file wrapped in an RPM as I >> see it is that the user only has to manually download the RPM once, if >> (more likely when in my opinion) we need to change the repo file >> and/or add GPG keys etc they pick up the updated ovirt-release RPM >> via yum update. 
If we don't provide it in an RPM then they will have >> to manually wget the updated repo file, and keys if applicable, and we >> would need a way to communicate to users that it was in fact necessary >> to do this. >> >> ------ >> >> Q: Would we need to get this RPM into Fedora? >> A: No, this repo file (and RPM) is for users wishing to track releases >> and nightly builds as they are put up by the oVirt Project. As I >> understand it the various oVirt components are not in any way tied to >> a particular distribution's release schedule. As such it seems probable >> that there will be times when the release available at ovirt.org is >> more up to date than the version that is packaged in Fedora, obviously >> with nightly builds this will almost always be the case. >> >> ------ >> >> Q: What does the repo file actually look like? >> A: There are two variations at the moment, the one initially linked >> from the wiki: >> >> http://www.ovirt.org/releases/nightly/fedora/16/ovirt-engine.repo >> >> And one that Karsten I think had been working on as a result of the >> previous meeting: >> >> http://ovirt.org/wiki/Yum_repo_file >> >> The latter is probably closer to what we need long term, in particular >> supporting both nightly and stable releases and using variables for >> the release version and architecture. At the moment however the >> repository doesn't support this level of granularity as we seem to >> have lumped x86_64 and noarch packages in the same directory? Is this >> something we intend to change long term? Where are we planning to >> place SRPMs? >> >> ------ >> >> Q: Where would this RPM exist in the repository? Where would the spec >> file live? >> A: This was the open question that was posed at the last meeting and >> probably the bit I need help with. 
I created a spec file based on the >> one used for fedora-release which I put on the wiki page for want of a >> better place: >> >> http://www.ovirt.org/wiki/Yum_repo_file#Spec_File >> >> To get the benefits I talked about earlier in this message the >> resultant RPM really needs to both: >> 1) Exist in the repository itself so that once it has installed itself >> users get updates to it. >> 2) Easily discoverable, probably linked from whatever 'Get oVirt!' >> style page we have under Fedora. >> >> The question really I guess is how does it get into the repository? >> >> ------ >> >> What do other people think? I realise that the above could easily be >> construed as bike shedding but I think that even though these are >> relatively simple matters it is important to flesh them out now >> particularly if we are looking to make a "release" (whatever that >> entails) in the near future. >> >> -Steve >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch > > From dougsland at redhat.com Thu Dec 1 17:21:47 2011 From: dougsland at redhat.com (Douglas Landgraf) Date: Thu, 01 Dec 2011 12:21:47 -0500 Subject: First community release In-Reply-To: <4ED56779.9040605@redhat.com> References: <4EB80CAB.3030908@redhat.com> <4ED50144.60406@redhat.com> <4ED51D3F.7050203@redhat.com> <4ED5253B.7060600@redhat.com> <4ED53441.5050702@redhat.com> <4ED56779.9040605@redhat.com> Message-ID: <4ED7B7AB.3050703@redhat.com> On 11/29/2011 06:15 PM, Douglas Landgraf wrote: > Hi, > > On 11/29/2011 02:36 PM, Perry Myers wrote: >>> 1 - What is preventing us from releasing today? >> oVirt Node still can't get vdsm to register yet. dougsland is working >> on this from the vdsm side, and mburns is assisting with providing him >> with Node builds as he needs them >> >>> 2 - Are these really worth holding up the release? >> Yes >> >>> 3 - What can we do today so that the issues in #1 are gone tomorrow? 
>> It's being worked on, and dougsland is making progress. Not sure what >> the ETA is though. >> > > Currently, I can make vdsm register ovirt Node into ovirt GUI and > still have some adjustments to make and test, like adding new storage. I > will have more news at the end of this week. > > Here are a few comments: > > - As Itamar said in the previous email, we will need to automate these > steps: > http://ovirt.org/wiki/Engine_Node_Integration#Engine_core_machine > > - vdsm will require this change in ovirt: > https://bugzilla.redhat.com/show_bug.cgi?id=756136 > > - This BZ might be important to resolve before we launch: > https://bugzilla.redhat.com/show_bug.cgi?id=755749 > Could anyone from ovirt-engine give me a hand (please) with this: https://bugzilla.redhat.com/show_bug.cgi?id=752464#c22 Thanks Douglas From pmyers at redhat.com Thu Dec 1 14:32:48 2011 From: pmyers at redhat.com (Perry Myers) Date: Thu, 01 Dec 2011 09:32:48 -0500 Subject: oVirt Node designs for stateless operation and 3rd party plugins Message-ID: <4ED79010.8060202@redhat.com> the Node development team has been trying to write up rough requirements around the stateless and plugins concepts. And also some working high level design. They can be reviewed on these two wiki pages: http://ovirt.org/wiki/Node_plugins http://ovirt.org/wiki/Node_Stateless Since the plugin model and the stateless model affect more than just the oVirt Node itself, we definitely would like to get input from other teams on the oVirt project. Please add comments here or directly to the wiki. Cheers, Perry From quintela at redhat.com Mon Dec 5 14:13:10 2011 From: quintela at redhat.com (Juan Quintela) Date: Mon, 05 Dec 2011 15:13:10 +0100 Subject: KVM call agenda for 12/6 (Tuesday) @ 10am US/Eastern Message-ID: <87pqg35cp5.fsf@neno.neno> Hi Please send in any agenda items you are interested in covering. Proposal (from Anthony): > 1. 
A short introduction to each of the guest agents, what guests they > support, and what verbs they support. > 2. A short description of key requirements from each party (oVirt, > libvirt, QEMU) for a guest agent > 3. An open discussion about possible ways to collaborate/converge. Notice that guest integration will take more than one week (Anthony estimation also). For libvirt and ovirt folks, please contact me or Chris for details of the call. Thanks, Juan. From chrisw at redhat.com Mon Dec 5 19:21:48 2011 From: chrisw at redhat.com (Chris Wright) Date: Mon, 5 Dec 2011 11:21:48 -0800 Subject: KVM Call Agenda for 12/6 (Tuesday) @ 10am US/Eastern In-Reply-To: <20111130162947.GA26932@x200.localdomain> References: <4ED63F5C.8000808@us.ibm.com> <20111130162947.GA26932@x200.localdomain> Message-ID: <20111205192148.GF12221@x200.localdomain> * Chris Wright (chrisw at redhat.com) wrote: > * Anthony Liguori (aliguori at us.ibm.com) wrote: > > 1. A short introduction to each of the guest agents, what guests they > > support, and what verbs they support. > > I think we did this once before w/ Matahari. Can we please capture > these things in email before the call, so people actually have time > to ponder the details. > > > 2. A short description of key requirements from each party (oVirt, libvirt, > > QEMU) for a guest agent > > Same here...call this the abstract/intro of the above detailed list of > verbs and guest support, and send it by Friday this week. > > I know there's plenty of details buried in the current thread and old > discussions of Matahari. But that's just it...buried... It's past Friday. Barak's links are all we have so far... 
thanks, -chris From mdroth at linux.vnet.ibm.com Mon Dec 5 21:36:59 2011 From: mdroth at linux.vnet.ibm.com (Michael Roth) Date: Mon, 05 Dec 2011 15:36:59 -0600 Subject: [Qemu-devel] Guest Agent Summaries In-Reply-To: <20111205192148.GF12221@x200.localdomain> References: <4ED63F5C.8000808@us.ibm.com> <20111130162947.GA26932@x200.localdomain> <20111205192148.GF12221@x200.localdomain> Message-ID: <4EDD397B.8090709@linux.vnet.ibm.com> On 12/05/2011 01:21 PM, Chris Wright wrote: > * Chris Wright (chrisw at redhat.com) wrote: >> * Anthony Liguori (aliguori at us.ibm.com) wrote: >>> 1. A short introduction to each of the guest agents, what guests they >>> support, and what verbs they support. >> >> I think we did this once before w/ Matahari. Can we please capture >> these things in email before the call, so people actually have time >> to ponder the details. >> >>> 2. A short description of key requirements from each party (oVirt, libvirt, >>> QEMU) for a guest agent >> >> Same here...call this the abstract/intro of the above detailed list of >> verbs and guest support, and send it by Friday this week. >> >> I know there's plenty of details buried in the current thread and old >> discussions of Matahari. But that's just it...buried... > > It's past Friday. Barak's links are all we have so far... Sorry this slipped by me. However, Barak's link to the guest agent proposals: http://www.ovirt.org/wiki/Guest_agent_proposals is a summary of the recent discussion on guest agents for oVirt from the following thread: http://thread.gmane.org/gmane.comp.emulators.ovirt.vdsm.devel/93/focus=93 Requirements were posted there for oVirt (ovirt-guest-agent), QEMU (qemu-ga), and Spice (vdagent) and pulled into the wiki, so as far as requirements go that is probably the best summary available at the moment. There is also a summary of the current proposals for how to go about leveraging ovirt-guest-agent or qemu-ga for oVirt/QEMU requirements. 
Matahari was mentioned only briefly since it didn't come up much in that particular discussion, but feel free to add as a response to this email and I can add it to the wiki so we can start getting all this stuff in one place. But for brevity, a (slightly) higher-level summary would be: A. oVirt (currently using ovirt-guest-agent) 1) supported functionality: - protocol: JSON RPC over virtio-serial - verbs: lock screen, login/logoff (automatic/SSO on RHEL/Windows with plugins installed), shutdown - guest info: machine name, OS, packages, avail. RAM, logged in users, disk usage, network interfaces - notifications: guest/agent up, heartbeat, user info, session lock/unlock/logoff/logon, agent uninstalled 2) key requirements: - first-class support for oVirt guest extensions - VM life-cycle assistance - single sign-on support for spice desktop sessions - monitoring and inventory - make VDSM management more robust/guest-aware 3) additional info: - http://www.ovirt.org/w/images/2/20/Ovirt-guest-agent.pdf - http://www.ovirt.org/wiki/Ovirt_guest_agent B. QEMU (currently using qemu-ga): 1) supported functionality: - protocol: JSON RPC (QMP) over virtio-serial/"isa"-serial - verbs: ping, agent info, shutdown, file open/read/write/seek/flush/close, filesystem freeze, command exec (experimental, RFC this week) - guest info: arbitrary (via file read/command exec) - notifications: on hold till QMP/QAPI integration completed 2) key requirements: - first-class support for QEMU guest extensions (usable by device model, integrated into QMP, same repo (for lock-step versioning and hypervisor deployability via ISO or other host-initiated mechanism rather than guest distro support)) - implement low-level primitives that QEMU can use, higher-level functionality built on top of the QMP interfaces it exposes. 3) additional info: - http://wiki.qemu.org/Features/QAPI/GuestAgent (might be down atm =/) C. 
Spice (vdagent): 1) supported functionality: - protocol: binary RPC over virtio-serial - verbs: set mouse state, monitor/display config, copy/paste 2) key requirements: - first-class support for Spice extensions (managing QXL devices/displays remotely, desktop integration (copy/paste, etc.)) - session-level guest agent 3) additional info: - http://spice-space.org/page/Whiteboard/AgentProtocol Please feel free to add to this, and I'll roll it back into the wiki. > > thanks, > -chris > From pmyers at redhat.com Tue Dec 6 04:21:54 2011 From: pmyers at redhat.com (Perry Myers) Date: Mon, 05 Dec 2011 23:21:54 -0500 Subject: [Qemu-devel] Guest Agent Summaries In-Reply-To: <4EDD397B.8090709@linux.vnet.ibm.com> References: <4ED63F5C.8000808@us.ibm.com> <20111130162947.GA26932@x200.localdomain> <20111205192148.GF12221@x200.localdomain> <4EDD397B.8090709@linux.vnet.ibm.com> Message-ID: <4EDD9862.4000509@redhat.com> On 12/05/2011 04:36 PM, Michael Roth wrote: > On 12/05/2011 01:21 PM, Chris Wright wrote: >> * Chris Wright (chrisw at redhat.com) wrote: >>> * Anthony Liguori (aliguori at us.ibm.com) wrote: >>>> 1. A short introduction to each of the guest agents, what guests they
However, Barak's link to the guest agent > proposals: > > http://www.ovirt.org/wiki/Guest_agent_proposals > > is a summary of the recent discussion on guest agents for oVirt from the > following thread: > > http://thread.gmane.org/gmane.comp.emulators.ovirt.vdsm.devel/93/focus=93 > > Requirements were posted there for oVirt (ovirt-guest-agent), QEMU > (qemu-ga), and Spice (vdagent) and pulled into wiki, so as far as > requirements go that is probably the best summary available at the > moment. There is also summary of the current proposals for how to go > about leveraging ovirt-guest-agent or qemu-ga for oVirt/QEMU requirements. > > Matahari was mentioned only in brief since it didn't come up much in > that particular discussion, but feel free to add as a response to this > email and I can add it to the wiki so we can start getting all this > stuff in one place. Matahari really isn't a candidate for a lightweight guest agent for virtualization because of the heavy dependency chain. So it's not really in contention with ovirt guest agent or qemu guest agent. Matahari will continue to be developed for general systems management functionality, but it doesn't need to be further considered for this specific usage. From geoffocallaghan at gmail.com Tue Dec 6 10:18:56 2011 From: geoffocallaghan at gmail.com (Geoff O'Callaghan) Date: Tue, 06 Dec 2011 21:18:56 +1100 Subject: oVirt Node designs for stateless operation and 3rd party plugins In-Reply-To: <4ED79010.8060202@redhat.com> References: <4ED79010.8060202@redhat.com> Message-ID: <1323166736.10490.46.camel@mrpointy> On Thu, 2011-12-01 at 09:32 -0500, Perry Myers wrote: > the Node development team has been trying to write up rough requirements > around the stateless and plugins concepts. And also some working high > level design. 
> > They can be reviewed on these two wiki pages: > > http://ovirt.org/wiki/Node_plugins > http://ovirt.org/wiki/Node_Stateless > > Since the plugin model and the stateless model affect more than just the > oVirt Node itself, we definitely would like to get input from other > teams on the oVirt project. > > Please add comments here or directly to the wiki. > Hi There I work for a *large* organisation, and I have issues with the goal of a stateless design. * Being able to install without a local disk I don't see this as a compelling reason for doing anything. In fact, in many cases for other nameless hypervisors we use local disk as a source for logging / dumps etc. I think the goal for stateless should instead be configuration neutral. i.e. if the node is destroyed the configuration can be re-deployed without issue. The other issue is that the node should continue to be re-bootable even if the configuration server is unavailable, which is a reason for having the configuration on a local disk or a SAN-attached LUN. This should apply to the entire operational environment - if the engine is unavailable during a restart I should continue working the way I was configured to do so - that implies state is retained. It needs to be easily refreshable :-) The configuration bundle should be refreshable from a configuration server (part of the engine) and that could either be just configuration or agents or even s/w images - all would be preferred and it's pretty simple conceptually to have an active/backup image on local disk concept to allow easy rollbacks etc. Yes, all this, except for the logging / swap, could be on a USB key. The bundle should all be pushed via an SSL-encrypted RESTful API using known non-priv credentials, preferably with rotating passwords or some cert-based approach. The server should also know who previously managed it to reduce hostile attempts to change ownership of the node. 
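As a sketch of the kind of push I have in mind (the endpoint path and header name here are made up, not an actual oVirt API), the engine could sign each config bundle so the receiving node can verify who sent it before accepting a change of ownership:

```python
import hashlib
import hmac
import json

def build_config_push(node_id, bundle, secret):
    """Build a signed config-push request for a hypothetical
    /api/nodes/<id>/config endpoint. The node would recompute the
    HMAC with its shared secret before accepting the bundle, so a
    stolen URL alone is not enough to re-own the node."""
    # Canonical JSON body so signer and verifier hash identical bytes.
    body = json.dumps({"node": node_id, "bundle": bundle},
                      sort_keys=True).encode("utf-8")
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return {"path": "/api/nodes/%s/config" % node_id,
            "headers": {"Content-Type": "application/json",
                        "X-Config-Signature": sig},
            "body": body}
```

Rotating the shared secret or swapping the HMAC for per-node client certificates would slot into the same shape.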
* DHCP and PXE booting Many corporate security policies prohibit the use of DHCP or PXE booting servers for production environments. I don't see it as a big issue to boot an install image and be a good woodpecker and hit enter a few times and configure a management IP address. It should be possible to script the complete configuration / addition of the node after that step. I see the initial install as a trivial part of the complete node lifecycle. * DNS SRV records Sorry, I hate the idea. Large corporates have so many different teams doing little things that adding this in as a requirement simply adds delays to the deployments and opportunities for misconfiguration. Having the node image and config on local disk (or usb) avoids this requirement as the node knows who manages it. A complete rebuild could occur and the configuration reloaded once added back into the engine. * Previously configured state Yes, the node should remember the previous operational state if it can't talk to the engine. This is not a bad thing. * Configuration server This should be part of the engine. It should know the complete configuration of a node, right down to hypervisor 'firmware' image. The process should be 2-way. An admin should be able to 'pull' the image/config from an operational and accessible node and new configurations/images should be pushable to it. I really don't think this needs to be a separate server to the engine. * New bundle deployments / Upgrades The engine should keep track of what images are on a node. If a new config / image is to be deployed then for example, the node would be tagged with the new image. If the node was online, an alternate image would be pushed, vm's migrated to an alternate node and the node restarted implementing the new image when requested. 
If the node was offline at the time the new image was configured in the engine or if the node was built say with an old image then when it connects to the engine the image would be refreshed and the node recycled. * Swap Local disk swap is likely to be required. Overcommit is common and SSD local disk is something that is quite useful :-) So in summary, I prefer to think that the target should be configuration neutrality or even just plain old distributed configuration from a central source rather than completely stateless. The goal should be toleration of complete destruction of a node image and configuration and a simple process to re-add it and automatically re-apply the configuration/sw image. Just some thoughts for discussion / abuse ;-) Tks Geoff > Cheers, > > Perry > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch From mburns at redhat.com Tue Dec 6 13:52:52 2011 From: mburns at redhat.com (Mike Burns) Date: Tue, 06 Dec 2011 08:52:52 -0500 Subject: oVirt Node designs for stateless operation and 3rd party plugins In-Reply-To: <1323166736.10490.46.camel@mrpointy> References: <4ED79010.8060202@redhat.com> <1323166736.10490.46.camel@mrpointy> Message-ID: <1323179572.3915.124.camel@beelzebub.mburnsfire.net> Comments Inline On Tue, 2011-12-06 at 21:18 +1100, Geoff O'Callaghan wrote: > On Thu, 2011-12-01 at 09:32 -0500, Perry Myers wrote: > > the Node development team has been trying to write up rough requirements > > around the stateless and plugins concepts. And also some working high > > level design. > > > > They can be reviewed on these two wiki pages: > > > > http://ovirt.org/wiki/Node_plugins > > http://ovirt.org/wiki/Node_Stateless > > > > Since the plugin model and the stateless model affect more than just the > > oVirt Node itself, we definitely would like to get input from other > > teams on the oVirt project. > > > > Please add comments here or directly to the wiki. 
> > > > Hi There > > I work for a *large* organisation, I have issues with the goal of a > stateless design. Thanks for the feedback overall. I'll try to address all your points below. > > * Being able to install without a local disk > > I don't see this as a compelling reason for doing anything. In fact, > in many cases for other nameless hypervisors we use local disk as a > source for logging / dumps etc. That may be the case in your environment, but when we presented this at the oVirt Workshop, the idea of a diskless deployment was very well received. I suppose that what we're calling stateless is really more of a diskless feature rather than truly stateless since we're keeping the stateful information in a configuration server. > > I think the goal for stateless should be instead be configuration > neutral. ie. if the node is destroyed the configuration can be > re-deployed without issue. Redeployed on the same machine? Or redeployed on a different machine? We already provide autoinstallation options that will do redeployments easily and one of the goals or ideas along with the proposed stateless model is that the machine gets re-provisioned and downloads its config bundle. This would successfully recover the node if someone were to power it off or destroy it somehow. If you're looking to move the config to a new machine, then that's not quite as simple. The easiest would be to simply install it again from scratch. > > The other issue is that the node should continue to be re-bootable even > if the configuration server is unavailable, which is a reason for having > the configuration on a local disk or a san attached LUN. This should > apply to the entire operational environment - if the engine is > unavailable during a restart I should continue working the way I was > configured to do so - that implies state is retained. It needs to be > easily refreshable :-) I will admit that the thought of the config server being unavailable hadn't come up previously. 
If this is something that you're legitimately concerned about, then it sounds like you'd want to continue doing local installations and not stateless installs. Currently, node images will install to local disk and they will boot fine without a management server or config server. But they won't be truly functional unless there is a management server available to tell it what to do. This is the case for all hypervisors, whether they're ovirt-node images or Fedora 16 images with VDSM installed or any of the other distributions. It's a limitation the VDSM and Engine need to solve outside the scope of ovirt-node. > > The configuration bundle should be refreshable from a configuration > server (part of the engine) and that could either be just configuration > or agents or even s/w images - all would be preferred and it's pretty > simple conceptually to have an active/backup image on local disk concept > to allow easy rollbacks etc. Yes all this , except for the logging / > swap could be in a usb key. We do provide a RootBackup partition that we automatically activate if something goes wrong with an upgrade. It would make sense that we should keep a backup configuration bundle on the management server as well. The actual image itself is a livecd, so updating that would be a matter of changing the usb stick/cd-rom/pxe image to the old/new version > > The bundle should all be pushed via a SSL encrypted RESTful api using > known non-priv credentials, preferably with rotating passwords or some > cert based approach. The server should also know who previously > managed it to reduce hostile attempts to change ownership of the node. Yes, the security issues are something that we're definitely aware of and not taking lightly. The actual process for how we do this is something that still would need to be worked out. The initial design was something along the lines of a free posting to the config server that the admin has to approve. 
The thought was that we would have different levels of security that could be configured depending on your deployment and the strictness of the rules in your environment. > > * DHCP and PXE booting > > Many corporate security policies prohibit the use of DHCP or PXE booting > servers for production environments. I don't see it as a big issue to > boot an install image and be a good woodpecker and hit enter a few times > and configure a management IP address. It should be possible to script > the complete configuration / addition of the node after that step. I > see the initial install as a trivial part of the complete node > lifecycle. So a couple thoughts here: 1. If only pxe is restricted, then you could have a usb stick or cd-rom with the image in each machine and still do stateless as defined otherwise. 2. If just DHCP, then you could have a pxe profile per machine that sets up the static networking options needed 3. If both are restricted, then you would have to go with a stateful installation. It's not going away, just another mode that we will provide. Actual installation and configuration can be completed automatically using kernel command line options. That is independent of whether you're using a stateful or stateless installation. > > * DNS SRV records > > Sorry, I hate the idea. Large corporates have so many different teams > doing little things that adding this in as a requirement simply adds > delays to the deployments and opportunities for misconfiguration. Sure, that's a valid possibility. Perhaps another commandline option that allows someone to specify the config server manually. > > Having the node image and config on local disk (or usb) avoids this > requirement as the node knows who manages it. A complete rebuild could > occur and the configuration reloaded once added back into the engine. Yes, this is a valid use case. And if that's the way you want to deploy your environment, then use the install to disk option and not stateless. 
We will provide both. > > * Previously configured state > > Yes, the node should remember the previous operational state if it > can't talk to the engine. This is not a bad thing. > > * Configuration server > > This should be part of the engine. It should know the complete > configuration of a node, right down to hypervisor 'firmware' image. The > process should be 2-way. An admin should be able to 'pull' the > image/config from an operational and accessible node and new > configurations/images should be pushable to it. > > I really don't think this needs to be a separate server to the engine. I agree, it should be part of the engine, probably will be. Depending on time frames and availability, it might be developed separately initially, but long term we probably want to integrate with the management server. > > * New bundle deployments / Upgrades > > The engine should keep track of what images are on a node. If a new > config / image is to be deployed then for example, the node would be > tagged with the new image. If the node was online, an alternate image > would be pushed, vm's migrated to an alternate node and the node > restarted implementing the new image when requested. This is mostly already done, I think. I know the functionality is there in RHEV-M, but not sure if it's all in the webadmin UI yet. I know the backend pieces are all there though. A running node has its version info that vdsm reads initially and reports back to the engine. An admin logs into the engine, and can see the details of the node including the version that it's currently running. There is an option to push out a new image to the node and have it upgrade itself. The node does have to be in maintenance mode to start the process, which causes all VMs to be migrated away. 
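Roughly, the engine-side flow I'm describing could be sketched like this (hypothetical helper, not the actual engine or VDSM code): a node only receives a new image once it is in maintenance mode with its VMs migrated away.

```python
def plan_upgrade(node, target_image):
    """Return the ordered steps the engine would take to move a node
    to target_image; an empty list means nothing to do. 'node' is a
    dict with 'image' and 'status' keys (illustrative shape only)."""
    steps = []
    if node["image"] == target_image:
        return steps  # already running the target image
    if node["status"] != "maintenance":
        # VMs must be off the node before the image can change.
        steps.append("migrate running VMs to another node")
        steps.append("switch node to maintenance mode")
    steps.append("push image %s" % target_image)
    steps.append("reboot node into new image")
    return steps
```

For example, a node already in maintenance mode skips straight to the push-and-reboot steps, which is why the upgrade option in the UI requires maintenance mode first.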
> > If the node was offline at the time the new image was configured in the > engine or if the node was built say with an old image then when it > connects to the engine the image would be refreshed and the node > recycled. Automatic upgrades like this aren't done at the moment. There probably needs to be some policy engine that can control it so all machines don't suddenly try to upgrade themselves. This whole section really applies to stateful installations though. In stateless mode, you just need to refresh the image in the PXE server/cd-rom/usb stick and reboot the machine (after putting it in maintenance mode). > > * Swap > > Local disk swap is likely to be required. Overcommit is common and SSD > local disk is something that is quite useful :-) I agree, in general. I did talk to one person at the workshop who had a machine with 300+GB RAM and had 0 interest in doing overcommit. So there is certainly a use case for being able to support both. > > So in summary, I prefer to think that the target should be > configuration neutrality or even just plain old distributed > configuration from a central source rather than completely stateless. > The goal should be toleration of complete destruction of a node image > and configuration and a simple process to re-add it and automatically > re-apply the configuration/sw image. I like the thought of storing the configuration to a central location even when having the image installed locally. I definitely think there will be people that can't or won't go with stateless for various reasons, many of which you state above. But I also think there are some that will want it as well. The simplest use case for wanting a stateless model like we designed is someone that has a rack of blades without local disks. They set up PXE and DHCP, and just turn on the blades. 
Mike > > Just some thoughts for discussion / abuse ;-) > > Tks > Geoff > > > Cheers, > > > > Perry > > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch From pmyers at redhat.com Tue Dec 6 14:11:38 2011 From: pmyers at redhat.com (Perry Myers) Date: Tue, 06 Dec 2011 09:11:38 -0500 Subject: oVirt Node designs for stateless operation and 3rd party plugins In-Reply-To: <1323166736.10490.46.camel@mrpointy> References: <4ED79010.8060202@redhat.com> <1323166736.10490.46.camel@mrpointy> Message-ID: <4EDE229A.5090306@redhat.com> Hi Geoff, Thanks for weighing in on this. Let me preface my comments below by saying that I think by and large we're on the same page with the design/intent of stateless here. I just think you may have some confusion about terminology and what stateless will imply/not imply for the end user. I'll try to clarify below so that we can get on the same page. On 12/06/2011 05:18 AM, Geoff O'Callaghan wrote: > On Thu, 2011-12-01 at 09:32 -0500, Perry Myers wrote: >> the Node development team has been trying to write up rough requirements >> around the stateless and plugins concepts. And also some working high >> level design. >> >> They can be reviewed on these two wiki pages: >> >> http://ovirt.org/wiki/Node_plugins >> http://ovirt.org/wiki/Node_Stateless >> >> Since the plugin model and the stateless model affect more than just the >> oVirt Node itself, we definitely would like to get input from other >> teams on the oVirt project. >> >> Please add comments here or directly to the wiki. >> > > Hi There > > I work for a *large* organisation, I have issues with the goal of a > stateless design. > > * Being able to install without a local disk > > I don't see this as a compelling reason for doing anything. 
In fact, > in many cases for other nameless hypervisors we use local disk as a > source for logging / dumps etc. This feature doesn't mandate that a local disk not be present. In fact, we must support using the local disk for things like swap and kernel dumps as you mention. Not sure why you thought stateless required no local disk... It's purely one of many deployment options. i.e. you could do: stateless w/ no local disk stateless w/ local disk for swap/kdump etc > I think the goal for stateless should be instead be configuration > neutral. ie. if the node is destroyed the configuration can be > re-deployed without issue. That is the goal > The other issue is that the node should continue to be re-bootable even > if the configuration server is unavailable, which is a reason for having > the configuration on a local disk or a san attached LUN. This should > apply to the entire operational environment - if the engine is > unavailable during a restart I should continue working the way I was > configured to do so - that implies state is retained. It needs to be > easily refreshable :-) The Config Server != oVirt Engine Server. They are two separate servers. I agree that in the event of the config server being down, you cannot boot the oVirt Node. However, a common deployment scenario is using PXE to boot the node and this already suffers from this drawback and people do seem comfortable with that scenario. Basically, 'stateless' is an option, not a mandate. If you want/need to continue to persist local config data on the Node, we're not going to prevent that. We're just adding an option to allow people to use a centralized config server with the option of no local disks. Adding this option won't prevent you from still doing a stateful install, so I don't think this feature conflicts with your requirements. 
> The configuration bundle should be refreshable from a configuration > server (part of the engine) and that could either be just configuration > or agents or even s/w images - all would be preferred and it's pretty > simple conceptually to have an active/backup image on local disk concept > to allow easy rollbacks etc. Yes all this , except for the logging / > swap could be in a usb key. > > The bundle should all be pushed via a SSL encrypted RESTful api using > known non-priv credentials, preferably with rotating passwords or some > cert based approach. The server should also know who previously > managed it to reduce hostile attempts to change ownership of the node. > > * DHCP and PXE booting > > Many corporate security policies prohibit the use of DHCP or PXE booting > servers for production environments. I don't see it as a big issue to > boot an install image and be a good woodpecker and hit enter a few times > and configure a management IP address. It should be possible to script > the complete configuration / addition of the node after that step. I > see the initial install as a trivial part of the complete node > lifecycle. You are correct. Many do prohibit this, but many don't. So as stated above, we'll allow both boot from USB key, local disk and PXE. We already do this today. > * DNS SRV records > > Sorry, I hate the idea. Large corporates have so many different teams > doing little things that adding this in as a requirement simply adds > delays to the deployments and opportunities for misconfiguration. Sorry that you hate the idea. Others like it, so we'll provide it as an option. Again, we're not _mandating_ the usage of DNS SRV, we're providing it as an option. If you want, you can certainly manually configure every Node in your datacenter. Using things like DHCP and DNS SRV help with automating large deployments, but we certainly won't require their usage for those that wish to have more control over every Node's configuration. 
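For what SRV-based discovery might look like from the node's point of view, here is a minimal selection sketch. The `_ovirt-config._tcp` service name in the comment is hypothetical, and real RFC 2782 selection uses weighted *random* choice within a priority tier; this version picks deterministically for clarity.

```python
# Sketch of config-server discovery via DNS SRV, as discussed above.
# A record set is modeled as (priority, weight, port, target) tuples,
# e.g. the answers for a hypothetical _ovirt-config._tcp.example.com.

def pick_srv(records):
    """Pick a (target, port) from SRV records: lowest priority wins;
    within that priority, prefer the highest weight (a deterministic
    stand-in for RFC 2782 weighted randomness)."""
    best_priority = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == best_priority]
    _, _, port, target = max(candidates, key=lambda r: r[1])
    return target, port

records = [
    (10, 60, 443, "cfg1.example.com"),
    (10, 20, 443, "cfg2.example.com"),
    (20, 0,  443, "backup.example.com"),
]
print(pick_srv(records))  # ('cfg1.example.com', 443)
```

The point of the priority field is exactly the fallback case raised in the thread: the backup server is only consulted when no priority-10 server answers.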
> Having the node image and config on local disk (or usb) avoids this > requirement as the node knows who manages it. A complete rebuild could > occur and the configuration reloaded once added back into the engine. We'll continue allowing local disk usage for config. No plans to remove this as an install option. > * Previously configured state > > Yes, the node should remember the previous operational state if it > can't talk to the engine. This is not a bad thing. > > * Configuration server > > This should be part of the engine. It should know the complete > configuration of a node, right down to hypervisor 'firmware' image. The > process should be 2-way. An admin should be able to 'pull' the > image/config from an operational and accessible node and new > configurations/images should be pushable to it. It will be collocated with the Engine, but we will design it in such a way that it can be run independently from the oVirt Engine. There are several reasons for this: 1. re-usability outside of the oVirt context 2. scaling multiple config servers for different areas of the datacenter 3. perhaps one part of the datacenter is comfortable using a config server and another is not. You could co-locate the config server with the portion of Nodes that are using DNS SRV/DHCP, etc and keep them physically separate from the Nodes that are using static config and local disks for configuration Keep in mind that most of the Node configuration is _already_ done by oVirt Engine (advanced storage config, network config, vm information). The only thing that this config server will need to store are: * config of the mgmt network interface * config of vdsm so that the Node can talk back to the oVirt Engine * config of the local passwd file Most everything else can/is applied dynamically by oVirt Engine sending config to vdsm. So this config server really is just bootstrapping for the basic stuff, and we let the oVirt Engine handle everything else more complex. 
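A minimal bootstrap bundle along the lines Perry describes might look like the following. Every field name here is invented for illustration; only the three categories (management interface config, vdsm's pointer back to the engine, local passwd) come from the discussion.

```python
# Hypothetical shape of the minimal bootstrap bundle: just enough to
# bring up the mgmt interface, point vdsm at the engine, and seed
# local credentials. All field names are invented for this sketch.
import json

bundle = {
    "mgmt_interface": {
        "device": "eth0",
        "bootproto": "static",
        "ip": "192.0.2.10",
        "netmask": "255.255.255.0",
        "gateway": "192.0.2.1",
    },
    "vdsm": {"engine_host": "engine.example.com", "engine_port": 443},
    "passwd": {"admin": "$6$examplehash"},  # hashed, never plaintext
}

# The bundle round-trips cleanly as JSON, so it could be served by a
# simple REST endpoint as suggested earlier in the thread.
serialized = json.dumps(bundle, indent=2)
restored = json.loads(serialized)
print(restored["vdsm"]["engine_host"])  # engine.example.com
```

Everything more complex (storage, VM networks, etc.) stays out of the bundle, per the point above that oVirt Engine applies it dynamically through vdsm.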
> I really don't think this needs to be a separate server to the engine. Noted. I'd be interested to see if others have an opinion here. > * New bundle deployments / Upgrades > > The engine should keep track of what images are on a node. If a new > config / image is to be deployed then for example, the node would be > tagged with the new image. If the node was online, an alternate image > would be pushed, vm's migrated to an alternate node and the node > restarted implementing the new image when requested. Engine already does this. It knows which version of oVirt Node ISO has been pushed to each node in the datacenter. That is also how it knows when a Node is eligible for an upgrade. > If the node was offline at the time the new image was configured in the > engine or if the node was built say with an old image then when it > connects to the engine the image would be refreshed and the node > recycled. > > * Swap > > Local disk swap is likely to be required. Overcommit is common and SSD > local disk is something that is quite useful :-) Yes, please read the wiki where it says: > In order to overcommit a host, you need to have swap space to support it > First implementation will probably disable swap > Future implementation may allow the system to configure a local disk as swap space So yes, the plan is to allow swap even during stateless operation if the administrator chooses to do so. > So in summary, I prefer to think that the target should be > configuration neutrality or even just plain old distributed > configuration from a central source rather than completely stateless. You're confusing stateless with diskless. Stateless is configuration neutrality. Nowhere in the wiki does it imply that we must be diskless > The goal should be toleration of complete destruction of a node image > and configuration and a simple process to re-add it and automatically > re-apply the configuration/sw image. 
Agreed, and that is indeed the intent of this design > Just some thoughts for discussion / abuse ;-) Thanks for the feedback. I think roughly speaking we are on the same page. It just seems like perhaps the wiki made you think we required diskless and PXE/DNS SRV vs. making those things options that the administrator could choose or reject as a deployment detail. I think the only area of contention is whether or not the Config Server should be integral with oVirt Engine, and on that point I think we can discuss further. But please keep in mind that this Config Engine is for the bare minimum config info and anything more complex will be coming from the oVirt Engine server via vdsm anyhow. I think if you limit the scope of the Config Server to be that, it is more reasonable to make it a standalone/separate component Perhaps to make it less confusing we should call it the "Bootstrap Server" since it won't be a true "Config Server" since it only has bootstrap config information to allow the Node to get add'l config from the oVirt Engine via vdsm Perry From mburns at redhat.com Tue Dec 6 14:26:29 2011 From: mburns at redhat.com (Mike Burns) Date: Tue, 06 Dec 2011 09:26:29 -0500 Subject: [node-devel] oVirt Node designs for stateless operation and 3rd party plugins In-Reply-To: <1323179572.3915.124.camel@beelzebub.mburnsfire.net> References: <4ED79010.8060202@redhat.com> <1323166736.10490.46.camel@mrpointy> <1323179572.3915.124.camel@beelzebub.mburnsfire.net> Message-ID: <1323181589.3915.127.camel@beelzebub.mburnsfire.net> On Tue, 2011-12-06 at 08:52 -0500, Mike Burns wrote: > Comments Inline > > On Tue, 2011-12-06 at 21:18 +1100, Geoff O'Callaghan wrote: > > On Thu, 2011-12-01 at 09:32 -0500, Perry Myers wrote: > > > the Node development team has been trying to write up rough requirements > > > around the stateless and plugins concepts. And also some working high > > > level design. 
> > > > > > They can be reviewed on these two wiki pages: > > > > > > http://ovirt.org/wiki/Node_plugins > > > http://ovirt.org/wiki/Node_Stateless > > > > > > Since the plugin model and the stateless model affect more than just the > > > oVirt Node itself, we definitely would like to get input from other > > > teams on the oVirt project. > > > > > > Please add comments here or directly to the wiki. > > > > > > > Hi There > > > > I work for a *large* organisation, I have issues with the goal of a > > stateless design. > > Thanks for the feedback overall. I'll try to address all your points > below. > > > > > * Being able to install without a local disk > > > > I don't see this as a compelling reason for doing anything. In fact, > > in many cases for other nameless hypervisors we use local disk as a > > source for logging / dumps etc. > > That may be the case in your environment, but when we presented this at > the oVirt Workshop, the idea of a diskless deployment was very well > received. > I suppose that what we're calling stateless is really more of > a diskless feature rather than truly stateless since we're keeping the > stateful information in a configuration server. This is actually not correct. My mind was just caught up in thinking of a totally diskless system. What Perry said is correct. We're looking to move all configuration to a central location (configuration neutrality). Disk would then become optional for things like swap and/or kdump, etc. > > > > > I think the goal for stateless should be instead be configuration > > neutral. ie. if the node is destroyed the configuration can be > > re-deployed without issue. > > Redeployed on the same machine? Or redeployed on a different machine? > We already provide autoinstallation options that will do redeployments > easily and one of the goals or ideas along with the proposed stateless > model is that the machine gets re-provisioned and downloads its config > bundle. 
This would successfully recover the node if someone were to > power it off or destroy it somehow. If you're looking to move the > config to a new machine, then that's not quite as simple. The easiest > would be to simply install it again from scratch. > > > > > The other issue is that the node should continue to be re-bootable even > > if the configuration server is unavailable, which is a reason for having > > the configuration on a local disk or a san attached LUN. This should > > apply to the entire operational environment - if the engine is > > unavailable during a restart I should continue working the way I was > > configured to do so - that implies state is retained. It needs to be > > easily refreshable :-) > > I will admit that the thought of the config server being unavailable > hadn't come up previously. If this is something that you're > legitimately concerned about, then it sounds like you'd want to continue > doing local installations and not stateless installs. > > Currently, node images will install to local disk and they will boot > fine without a management server or config server. But they won't be > truly functional unless there is a management server available to tell > it what to do. This is the case for all hypervisors, whether they're > ovirt-node images or Fedora 16 images with VDSM installed or any of the > other distributions. It's a limitation the VDSM and Engine need to > solve outside the scope of ovirt-node. > > > > The configuration bundle should be refreshable from a configuration > > server (part of the engine) and that could either be just configuration > > or agents or even s/w images - all would be preferred and it's pretty > > simple conceptually to have an active/backup image on local disk concept > > to allow easy rollbacks etc. Yes all this , except for the logging / > > swap could be in a usb key. > > We do provide a RootBackup partition that we automatically activate if > something goes wrong with an upgrade. 
It would make sense that we > should keep a backup configuration bundle on the management server as > well. The actual image itself is a livecd, so updating that would be a > matter of changing the usb stick/cd-rom/pxe image to the old/new version > > > > > The bundle should all be pushed via a SSL encrypted RESTful api using > > known non-priv credentials, preferably with rotating passwords or some > > cert based approach. The server should also know who previously > > managed it to reduce hostile attempts to change ownership of the node. > > Yes, the security issues are something that we're definitely aware of > and not taking lightly. The actual process for how we do this is > something that still would need to be worked out. The initial design > was something along the lines of a free posting to the config server > that the admin has to approve. The thought was that we would have > different levels of security that could be configured depending on your > deployment and the strictness of the rules in your environment. > > > > > * DHCP and PXE booting > > > > Many corporate security policies prohibit the use of DHCP or PXE booting > > servers for production environments. I don't see it as a big issue to > > boot an install image and be a good woodpecker and hit enter a few times > > and configure a management IP address. It should be possible to script > > the complete configuration / addition of the node after that step. I > > see the initial install as a trivial part of the complete node > > lifecycle. > > So a couple thoughts here: > > 1. If only pxe is restricted, then you could have a usb stick or cd-rom > with the image in each machine and still do stateless as defined > otherwise. > 2. If just DHCP, then you could have a pxe profile per machine that > sets up the static networking options needed > 3. If both are restricted, then you would have to go with a stateful > installation. It's not going away, just another mode that we will > provide. 
> > Actual installation and configuration can be completed automatically > using kernel command line options. That is independent of whether > you're using a stateful or stateless installation. > > > > > * DNS SRV records > > > > Sorry, I hate the idea. Large corporates have so many different teams > > doing little things that adding this in as a requirement simply adds > > delays to the deployments and opportunities for misconfiguration. > > Sure, that's a valid possibility. Perhaps another commandline option > that allows someone to specify the config server manually. > > > > > Having the node image and config on local disk (or usb) avoids this > > requirement as the node knows who manages it. A complete rebuild could > > occur and the configuration reloaded once added back into the engine. > > Yes, this is a valid use case. And if that's the way you want to deploy > your environment, then use the install to disk option and not stateless. > We will provide both > > > > > * Previously configured state > > > > Yes, the node should remember the previous operational state if it > > can't talk to the engine. This is not a bad thing. > > > > * Configuration server > > > > This should be part of the engine. It should know the complete > > configuration of a node, right down to hypervisor 'firmware' image. The > > process should be 2-way. An admin should be able to 'pull' the > > image/config from an operational and accessible node and new > > configurations/images should be pushable to it. > > > > I really don't think this needs to be a separate server to the engine. > > I agree, it should be part of the engine, probably will be. Depending > on time frames and availability, it might be developed separate > initially, but long term we probably want to integrate with the > management server. > > > > * New bundle deployments / Upgrades > > > > The engine should keep track of what images are on a node. 
If a new > > config / image is to be deployed then for example, the node would be > > tagged with the new image. If the node was online, an alternate image > > would be pushed, vm's migrated to an alternate node and the node > > restarted implementing the new image when requested. > > This is mostly already done, I think. I know the functionality is there > in RHEV-M, but not sure if it's all in the webadmin UI yet. I know the > backend pieces are all there though. > > A running node has it's version info that vdsm reads initially and > reports back to the engine. An admin logs into the engine, and can see > the details of the node including the version that it's currently > running. There is an option to push out a new image to the node and > have it upgrade itself. The node does have to be in maintenance mode to > start the process which causes all VMs to be migrated away. > > > > > If the node was offline at the time the new image was configured in the > > engine or if the node was built say with an old image then when it > > connects to the engine the image would be refreshed and the node > > recycled. > > Automatic upgrades like this aren't done at the moment. There probably > needs to be some policy engine that can control it so all machines don't > suddenly try to upgrade themselves. > > This whole section really applies to stateful installations though. In > Stateless, you just need to refresh the image in the PXE > server/cd-rom/usb stick and reboot the machine (after putting it in > maintenance mode) > > > > > * Swap > > > > Local disk swap is likely to be required. Overcommit is common and SSD > > local disk is something that is quite useful :-) > > I agree, in general. I did talk to one person at the workshop that had > a machine with 300+GB RAM and had 0 interest in doing overcommit. So > there is certainly a use case for being able to support both. 
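The swap/overcommit trade-off above is easy to quantify: swap is only needed when the memory committed to guests exceeds physical RAM. A rough sketch, with purely illustrative numbers:

```python
# Why swap matters for overcommit: the RAM committed to VMs beyond
# physical memory has to be backed by swap. Numbers are illustrative
# only; real sizing policy would be more conservative.

def swap_needed_gib(vm_mem_gib, host_ram_gib):
    """Minimum swap (GiB) to back the committed-memory shortfall."""
    committed = sum(vm_mem_gib)
    return max(0, committed - host_ram_gib)

# A host overcommitting RAM needs swap to cover the gap...
print(swap_needed_gib([16, 16, 16, 16], 48))   # 16
# ...while a large-memory host that never overcommits needs none,
# like the 300+GB RAM case mentioned above.
print(swap_needed_gib([64, 64], 300))          # 0
```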
> > > > > So in summary, I prefer to think that the target should be > > configuration neutrality or even just plain old distributed > > configuration from a central source rather than completely stateless. > > The goal should be toleration of complete destruction of a node image > > and configuration and a simple process to re-add it and automatically > > re-apply the configuration/sw image. > > I like the thought of storing the configuration to a central location > even when having the image installed locally. I definitely think there > will be people that can't or won't go with stateless for various reasons > many of which you state above. But I also think there are some that > will want it as well. > > The simplest use case for wanting a stateless model like we designed is > someone that has a rack of blades without local disks. The setup pxe > and dhcp, and just turn on the blades. > > Mike > > > > Just some thoughts for discussion / abuse ;-) > > > > Tks > > Geoff > > > > > Cheers, > > > > > > Perry > > > _______________________________________________ > > > Arch mailing list > > > Arch at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/arch > > > > > > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel From jbjohnso at us.ibm.com Tue Dec 6 14:45:20 2011 From: jbjohnso at us.ibm.com (Jarrod B Johnson) Date: Tue, 6 Dec 2011 09:45:20 -0500 Subject: oVirt Node designs for stateless operation and 3rd party plugins In-Reply-To: <1323166736.10490.46.camel@mrpointy> References: <4ED79010.8060202@redhat.com> <1323166736.10490.46.camel@mrpointy> Message-ID: arch-bounces at ovirt.org wrote on 12/06/2011 05:18:56 AM: > From: "Geoff O'Callaghan" > To: arch at ovirt.org, node-devel , > Date: 12/06/2011 05:19 AM > 
Subject: Re: oVirt Node designs for stateless operation and 3rd party plugins > Sent by: arch-bounces at ovirt.org > > On Thu, 2011-12-01 at 09:32 -0500, Perry Myers wrote: > > the Node development team has been trying to write up rough requirements > > around the stateless and plugins concepts. And also some working high > > level design. > > > > They can be reviewed on these two wiki pages: > > > > http://ovirt.org/wiki/Node_plugins > > http://ovirt.org/wiki/Node_Stateless > > > > Since the plugin model and the stateless model affect more than just the > > oVirt Node itself, we definitely would like to get input from other > > teams on the oVirt project. > > > > Please add comments here or directly to the wiki. > > > > Hi There > > I work for a *large* organisation, I have issues with the goal of a > stateless design. > > * Being able to install without a local disk > > I don't see this as a compelling reason for doing anything. In fact, > in many cases for other nameless hypervisors we use local disk as a > source for logging / dumps etc. > The ability to operate without disk does not imply the inability to take advantage of local disk when available and appropriate. > I think the goal for stateless should be instead be configuration > neutral. ie. if the node is destroyed the configuration can be > re-deployed without issue. > > The other issue is that the node should continue to be re-bootable even > if the configuration server is unavailable, which is a reason for having > the configuration on a local disk or a san attached LUN. This should > apply to the entire operational environment - if the engine is > unavailable during a restart I should continue working the way I was > configured to do so - that implies state is retained. It needs to be > easily refreshable :-) One, I would think 'the' configuration server would be a bit more robust than implied here. 
I think certain examples to date have been fairly poor, but a respectable architecture would work out well. I think there is some opportunity for risk if a 'central' authority can repair configuration automagically (including restarting VMs on another node) *and* the downed node can also independently operate as last expected. There is a risk of a split-brain scenario. Maybe there is a large degree of confidence in fencing and storage locking mitigating this sufficiently. > > The configuration bundle should be refreshable from a configuration > server (part of the engine) and that could either be just configuration > or agents or even s/w images - all would be preferred and it's pretty > simple conceptually to have an active/backup image on local disk concept > to allow easy rollbacks etc. Yes all this , except for the logging / > swap could be in a usb key. > > The bundle should all be pushed via a SSL encrypted RESTful api using > known non-priv credentials, preferably with rotating passwords or some > cert based approach. The server should also know who previously > managed it to reduce hostile attempts to change ownership of the node. > > * DHCP and PXE booting > > Many corporate security policies prohibit the use of DHCP or PXE booting > servers for production environments. I don't see it as a big issue to > boot an install image and be a good woodpecker and hit enter a few times > and configure a management IP address. It should be possible to script > the complete configuration / addition of the node after that step. I > see the initial install as a trivial part of the complete node > lifecycle. I find it a big issue when configuring a few thousand servers. Of course, taking a bigger picture here, we can make unattended stateless image booting a secure prospect even over PXE. Either way from the inside, tolerating an ISO boot generally covers the same concerns as a PXE boot anyway. 
Large corporates have so many different teams > doing little things that adding this in as a requirement simply adds > delays to the deployments and opportunities for misconfiguration. A capability does not imply requirement. Active Directory makes use of this as can Kerberos, so it's not exactly without precedent. I'd expect /proc/cmdline to be the typical path however. You could add SLP, mDNS, or roll-your-own multicast discovery to the list though. > > Having the node image and config on local disk (or usb) avoids this > requirement as the node knows who manages it. A complete rebuild could > occur and the configuration reloaded once added back into the engine. Once you sanely provide for the automated 'rebuild' case, you've solved the problem for arbitrary boots anyway. > > * Previously configured state > > Yes, the node should remember the previous operational state if it > can't talk to the engine. This is not a bad thing. Depends on what the 'weak' point is. If the chances are your inability to talk to the configuration infrastructure favor a split-brain scenario, restoring last state could be a bad thing. > > * Configuration server > > This should be part of the engine. It should know the complete > configuration of a node, right down to hypervisor 'firmware' image. The > process should be 2-way. An admin should be able to 'pull' the > image/config from an operational and accessible node and new > configurations/images should be pushable to it. > > I really don't think this needs to be a separate server to the engine. > > * New bundle deployments / Upgrades > > The engine should keep track of what images are on a node. If a new > config / image is to be deployed then for example, the node would be > tagged with the new image. If the node was online, an alternate image > would be pushed, vm's migrated to an alternate node and the node > restarted implementing the new image when requested. 
> > If the node was offline at the time the new image was configured in the > engine or if the node was built say with an old image then when it > connects to the engine the image would be refreshed and the node > recycled. The nice thing about stateless, it's a lot more straightforward to work through these workflows. > > * Swap > > Local disk swap is likely to be required. Overcommit is common and SSD > local disk is something that is quite useful :-) Flexibility is good, support swap and non-swap cases. In fact, stateless is a good match for this, you can use the local disk for things like distributed filesystems, logging and swap, the OS image isn't bound to disk and neither is the configuration. > > So in summary, I prefer to think that the target should be > configuration neutrality or even just plain old distributed > configuration from a central source rather than completely stateless. > The goal should be toleration of complete destruction of a node image > and configuration and a simple process to re-add it and automatically > re-apply the configuration/sw image. I think a bullet-proof configuration infrastructure that makes stateless just as good as stateful handles the failure cases a lot more smoothly. Emphasis on bulletproof. > > Just some thoughts for discussion / abuse ;-) > > Tks > Geoff > > > Cheers, > > > > Perry > > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alevy at redhat.com Wed Dec 7 09:15:27 2011 From: alevy at redhat.com (Alon Levy) Date: Wed, 7 Dec 2011 11:15:27 +0200 Subject: [libvirt] [Qemu-devel] Guest Agent Summaries In-Reply-To: <4EDD397B.8090709@linux.vnet.ibm.com> References: <4ED63F5C.8000808@us.ibm.com> <20111130162947.GA26932@x200.localdomain> <20111205192148.GF12221@x200.localdomain> <4EDD397B.8090709@linux.vnet.ibm.com> Message-ID: <20111207091527.GC3896@garlic.redhat.com> On Mon, Dec 05, 2011 at 03:36:59PM -0600, Michael Roth wrote: > On 12/05/2011 01:21 PM, Chris Wright wrote: > >* Chris Wright (chrisw at redhat.com) wrote: > >>* Anthony Liguori (aliguori at us.ibm.com) wrote: > >>>1. A short introduction to each of the guest agents, what guests they > >>>support, and what verbs they support. > >> > >>I think we did this once before w/ Matahari. Can we please capture > >>these things in email before the call, so people actually have time > >>to ponder the details. > >> > >>>2. A short description of key requirements from each party (oVirt, libvirt, > >>>QEMU) for a guest agent > >> > >>Same here...call this the abstract/intro of the above detailed list of > >>verbs and guest support, and send it by Friday this week. > >> > >>I know there's plenty of details buried in the current thread and old > >>discussions of Matahari. But that's just it...buried... > > > >It's past Friday. Barak's links are all we have so far... > > Sorry this slipped by me. However, Barak's link to the guest agent > proposals: > > http://www.ovirt.org/wiki/Guest_agent_proposals > > is a summary of the recent discussion on guest agents for oVirt from > the following thread: > > http://thread.gmane.org/gmane.comp.emulators.ovirt.vdsm.devel/93/focus=93 > > Requirements were posted there for oVirt (ovirt-guest-agent), QEMU > (qemu-ga), and Spice (vdagent) and pulled into wiki, so as far as > requirements go that is probably the best summary available at the > moment. 
There is also a summary of the current proposals for how to go > about leveraging ovirt-guest-agent or qemu-ga for oVirt/QEMU > requirements. > > Matahari was mentioned only briefly since it didn't come up much in > that particular discussion, but feel free to add as a response to > this email and I can add it to the wiki so we can start getting all > this stuff in one place. > > But for brevity, a (slightly) higher-level summary would be: > > A. oVirt (currently using ovirt-guest-agent) > > 1) supported functionality: > - protocol: JSON RPC over virtio-serial > - verbs: lock screen, login/logoff (automatic/SSO on > RHEL/Windows with plugins installed), shutdown > - guest info: machine name, OS, packages, avail. RAM, logged > in users, disk usage, network interfaces > - notifications: guest/agent up, heartbeat, user info, session > lock/unlock/logoff/logon, agent uninstalled > > 2) key requirements: > - first-class support for oVirt guest extensions > - VM life-cycle assistance > - single sign-on support for spice desktop sessions > - monitoring and inventory > - make VDSM management more robust/guest-aware > > 3) additional info: > - http://www.ovirt.org/w/images/2/20/Ovirt-guest-agent.pdf > - http://www.ovirt.org/wiki/Ovirt_guest_agent > > > B. 
QEMU (currently using qemu-ga): > > 1) supported functionality: > - protocol: JSON RPC (QMP) over virtio-serial/"isa"-serial > - verbs: ping, agent info, shutdown, file > open/read/write/seek/flush/close, filesystem freeze, command exec > (experimental, RFC this week) > - guest info: arbitrary (via file read/command exec) > - notifications: on hold till QMP/QAPI integration completed > 2) key requirements: > - first-class support for QEMU guest extensions (usable by > device model, integrated into QMP, same repo (for lock-step > versioning and hypervisor deployability via ISO or other > host-initiated mechanism rather than guest distro support) > - implement low-level primitives that QEMU can use, > higher-level functionality built on top of the QMP interfaces it > exposes. > 3) additional info: > - http://wiki.qemu.org/Features/QAPI/GuestAgent (might be down > atm =/) > > > C. Spice (vdagent): > > 1) supported functionality: > - protocol: binary RPC over virtio-serial > - verbs: set mouse state, monitor/display config, copy/paste > 2) key requirements: > - first-class support for Spice extensions (managing QXL > devices/displays remotely, desktop integration (copy/paste, etc) > - session-level guest agent > 3) additional info: > - http://spice-space.org/page/Whiteboard/AgentProtocol > 4) binary data passthrough for copy-paste. Any large file would otherwise go through uuencode / uudecode needlessly (it cannot be verified in any way with any schema since it's by definition opaque). Sorry for the late reply. > Please feel free to add to this, and I'll roll it back into the wiki. 
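[Editor's note: the summary above describes qemu-ga's wire format as JSON RPC (QMP-style) over virtio-serial. As a rough sketch of the framing only — the error shape and any device paths are assumptions for illustration; the qemu-ga/QAPI docs are authoritative — each command is a single-line JSON document and each reply carries either a 'return' or an 'error' member:]

```python
import json

def build_command(verb, arguments=None):
    """Frame a guest-agent command (e.g. "guest-ping") as one
    newline-terminated JSON document, ready to write to the
    virtio-serial channel."""
    cmd = {"execute": verb}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd) + "\n"

def parse_reply(line):
    """A reply carries either a 'return' payload or an 'error' member;
    surface the error, otherwise hand back the payload."""
    reply = json.loads(line)
    if "error" in reply:
        raise RuntimeError(reply["error"])
    return reply.get("return")
```

In the guest the conventional channel is a virtio-serial port (commonly exposed under /dev/virtio-ports/); on the host it is whatever chardev QEMU was started with — both paths are deployment-specific, so this sketch stops at the framing.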
> > > > >thanks, > >-chris > > > > -- > libvir-list mailing list > libvir-list at redhat.com > https://www.redhat.com/mailman/listinfo/libvir-list From kwade at redhat.com Thu Dec 8 00:38:17 2011 From: kwade at redhat.com (Karsten 'quaid' Wade) Date: Wed, 07 Dec 2011 16:38:17 -0800 Subject: showing community activity on ovirt.org Message-ID: <4EE006F9.2010401@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I've just started some changes to ovirt.org to show recent community activity. Work is happening and we want people to see that. http://ovirt.org/community-activity ... and a bundled git commit RSS feed in the sidebar. (Except it's showing the oldest commits first ...) This concept could quickly stretch to cover other activities; I picked git and the wiki as heavy activity areas that already give RSS feeds (thankfully). (The wiki recent changes are on the far end of the feed, since it's in reverse date ...) And the output could be prettier with more information. I'd like to run the RSS tools locally (the sidebar feed is a Yahoo! Pipes bundle, for example ... which is why I can't control the feed.) I'm still looking for tools and solutions, and welcome any help. One solution is to use SimplePie in WordPress, which is a PHP class for handling RSS: http://simplepie.org/ If anyone could spare a few cycles, I could use some help hacking a solution up to: 1. Take an arbitrary list of RSS URLs. 2. Merge the feeds into one feed that can be sorted by date, source, etc. 3. Use that feed plus more PHP and CSS skills to improve on the output in WordPress. Cheers - Karsten - -- name: Karsten 'quaid' Wade, Sr. 
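[Editor's note: Karsten's item 2 — merging an arbitrary list of RSS feeds into one date-sorted feed — is the mechanical core of the request. He asks for PHP/SimplePie; purely as a language-neutral illustration of the merge-and-sort step, here is a stdlib-only sketch over RSS 2.0 documents:]

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def merge_feeds(rss_documents):
    """Collect the <item> elements of several RSS 2.0 documents into
    one list of (date, title, link) tuples, newest first, keyed on
    each item's RFC 822 <pubDate>."""
    items = []
    for doc in rss_documents:
        root = ET.fromstring(doc)
        for item in root.iter("item"):
            date = parsedate_to_datetime(item.findtext("pubDate"))
            items.append((date, item.findtext("title"),
                          item.findtext("link")))
    items.sort(key=lambda entry: entry[0], reverse=True)
    return items
```

Fetching the URLs, caching, and rendering (Karsten's items 1 and 3) are left out; this only shows that the merge itself is a sort over parsed pubDates.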
Community Architect team: Red Hat Community Architecture & Leadership uri: http://communityleadershipteam.org http://TheOpenSourceWay.org gpg: AD0E0C41 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iD8DBQFO4Ab52ZIOBq0ODEERAmppAJ9jcDbRGv8Jp9zzUNgEuiVGxHiuOQCaA3XI WWhE0hMLu5YAE4ew2CR4EzY= =yHBE -----END PGP SIGNATURE----- From iheim at redhat.com Thu Dec 8 04:23:51 2011 From: iheim at redhat.com (Itamar Heim) Date: Thu, 08 Dec 2011 06:23:51 +0200 Subject: showing community activity on ovirt.org In-Reply-To: <4EE006F9.2010401@redhat.com> References: <4EE006F9.2010401@redhat.com> Message-ID: <4EE03BD7.7010204@redhat.com> On 12/08/2011 02:38 AM, Karsten 'quaid' Wade wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > I've just started some changes to ovirt.org to show recent community > activity. Work is happening and we want people to see that. > > http://ovirt.org/community-activity very nice! From iheim at redhat.com Thu Dec 8 05:02:50 2011 From: iheim at redhat.com (Itamar Heim) Date: Thu, 08 Dec 2011 07:02:50 +0200 Subject: oVirt Node designs for stateless operation and 3rd party plugins In-Reply-To: <4EDE229A.5090306@redhat.com> References: <4ED79010.8060202@redhat.com> <1323166736.10490.46.camel@mrpointy> <4EDE229A.5090306@redhat.com> Message-ID: <4EE044FA.7070106@redhat.com> On 12/06/2011 04:11 PM, Perry Myers wrote: > Hi Geoff, > > Thanks for weighing in on this. Let me preface my comments below by > saying that I think by and large we're on the same page with the > design/intent of stateless here. I just think you may have some > confusion about terminology and what stateless will imply/not imply for > the end user. I'll try to clarify below so that we can get on the same > page. +1 - thanks for the input and insights. one more point below ... >> * Configuration server >> >> This should be part of the engine. 
It should know the complete >> configuration of a node, right down to hypervisor 'firmware' image. The >> process should be 2-way. An admin should be able to 'pull' the >> image/config from an operational and accessible node and new >> configurations/images should be pushable to it. > > It will be collocated with the Engine, but we will design it in such a > way that it can be run independently from the oVirt Engine. There are > several reasons for this: > > 1. re-usability outside of the oVirt context > 2. scaling multiple config servers for different areas of the datacenter > 3. perhaps one part of the datacenter is comfortable using a config > server and another is not. You could co-locate the config server > with the portion of Nodes that are using DNS SRV/DHCP, etc and keep > them physically separate from the Nodes that are using static config > and local disks for configuration > > Keep in mind that most of the Node configuration is _already_ done by > oVirt Engine (advanced storage config, network config, vm information). > The only things that this config server will need to store are: > > * config of the mgmt network interface > * config of vdsm so that the Node can talk back to the oVirt Engine > * config of the local passwd file > > Most everything else can/is applied dynamically by oVirt Engine sending > config to vdsm. So this config server really is just bootstrapping for > the basic stuff, and we let the oVirt Engine handle everything else more > complex. > >> I really don't think this needs to be a separate server to the engine. > > Noted. I'd be interested to see if others have an opinion here. I understand deploying config as a separate service from the engine, but can it optionally use the engine to get what it needs to avoid the need for an extra db to sync/upgrade/etc. 
a deployment in a remote site could cache the information to provide it in case the engine is not responding (so the node can finish booting to be used by the engine next time it is running). so you could configure it to use a local store, but in a ovirt-engine deployment it would just be a service focusing on the relevant communication method and delivery of image if needed. I think that would make deployment/life cycle management much easier. From rgolan at redhat.com Thu Dec 8 07:01:51 2011 From: rgolan at redhat.com (Roy Golan) Date: Thu, 08 Dec 2011 09:01:51 +0200 Subject: showing community activity on ovirt.org In-Reply-To: <4EE03BD7.7010204@redhat.com> References: <4EE006F9.2010401@redhat.com> <4EE03BD7.7010204@redhat.com> Message-ID: <4EE060DF.7090009@redhat.com> On Thu 08 Dec 2011 06:23:51 AM IST, Itamar Heim wrote: > On 12/08/2011 02:38 AM, Karsten 'quaid' Wade wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> I've just started some changes to ovirt.org to show recent community >> activity. Work is happening and we want people to see that. >> >> http://ovirt.org/community-activity > > very nice! > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch +1 I think dates for activities are missing in the "community activity" widget in the home page. From oschreib at redhat.com Thu Dec 8 09:03:58 2011 From: oschreib at redhat.com (Ofer Schreiber) Date: Thu, 08 Dec 2011 04:03:58 -0500 (EST) Subject: First oVirt Release - 31 JAN Message-ID: <06db01ccb588$53964b00$fac2e100$@redhat.com> Yesterday, on the weekly meeting, we agreed on the 31st of January as the first release of oVirt. Since we might want to stabilize the build a bit before, we will enter a quiet period (basically, create a new branch out of the master, and cherry-pick needed commits) on the 14th of January. Components owners - Please make sure your component is up & running by that time. 
If there's any reason you won't make it, please make sure we know about it. A wiki about the first release (timeline, release criteria and more) can be found at http://www.ovirt.org/wiki/Releases/First_Release Thanks, -- Ofer Schreiber oVirt Release Manager From mburns at redhat.com Thu Dec 8 14:58:47 2011 From: mburns at redhat.com (Mike Burns) Date: Thu, 08 Dec 2011 09:58:47 -0500 Subject: First oVirt Release - 31 JAN In-Reply-To: <06db01ccb588$53964b00$fac2e100$@redhat.com> References: <06db01ccb588$53964b00$fac2e100$@redhat.com> Message-ID: <1323356327.12161.5.camel@beelzebub.mburnsfire.net> On Thu, 2011-12-08 at 04:03 -0500, Ofer Schreiber wrote: > Yesterday, on the weekly meeting, we agreed on the 31st of January as the > first release of oVirt. > > Since we might want to stabilize the build a bit before, we will enter a > quiet period (basically, create a new branch out of the master, and > cherry-pick needed commits) on the 14th of January. > > Components owners - Please make sure your component is up & running by > that time. If there's any reason you won't make it, please make sure we > know about it. ovirt-node is essentially ready to go pending some vdsm and engine issues. Is there a process setup for what the projects need to do w.r.t. branching/tagging/etc in gerrit/git? 
Thanks Mike > > A wiki about the first release (timeline, release criteria and more) can > be found at http://www.ovirt.org/wiki/Releases/First_Release > > Thanks, > -- > Ofer Schreiber > oVirt Release Manager > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch 
From cctrieloff at redhat.com Thu Dec 8 16:08:06 2011 From: cctrieloff at redhat.com (Carl Trieloff) Date: Thu, 08 Dec 2011 11:08:06 -0500 Subject: showing community activity on ovirt.org In-Reply-To: <4EE060DF.7090009@redhat.com> References: <4EE006F9.2010401@redhat.com> <4EE03BD7.7010204@redhat.com> <4EE060DF.7090009@redhat.com> Message-ID: <4EE0E0E6.6080008@redhat.com> On 12/08/2011 02:01 AM, Roy Golan wrote: > On Thu 08 Dec 2011 06:23:51 AM IST, Itamar Heim wrote: >> On 12/08/2011 02:38 AM, Karsten 'quaid' Wade wrote: >>> -----BEGIN PGP SIGNED MESSAGE----- >>> Hash: SHA1 >>> >>> I've just started some changes to ovirt.org to show recent community >>> activity. Work is happening and we want people to see that. >>> >>> http://ovirt.org/community-activity >> >> very nice! >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch > > +1 > I think dates for activities are missing in the "community activity" > widget in the home page. > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch would it not be better to have the link from the widget on the home page go to http://www.ovirt.org/community-activity/ ? Carl. From oschreib at redhat.com Thu Dec 8 17:30:36 2011 From: oschreib at redhat.com (Ofer Schreiber) Date: Thu, 08 Dec 2011 12:30:36 -0500 (EST) Subject: First oVirt Release - 31 JAN In-Reply-To: <1323356327.12161.5.camel@beelzebub.mburnsfire.net> References: <06db01ccb588$53964b00$fac2e100$@redhat.com> <1323356327.12161.5.camel@beelzebub.mburnsfire.net> Message-ID: <2AC09DCC-1C3C-48DC-A7F1-C2FBE1CBD2BC@redhat.com> On 8 Dec 2011, at 16:58, Mike Burns wrote: > On Thu, 2011-12-08 at 04:03 -0500, Ofer Schreiber wrote: >> Yesterday, on the weekly meeting, we agreed on the 31st of January as the >> first release of oVirt. 
>> >> Since we might want to stabilize the build a bit before, we will enter a >> quiet period (basically, create a new branch out of the master, and >> cherry-pick needed commits) on the 14th of January. >> >> Components owners - Please make sure your component is up & running by >> that time. If there's any reason you won't make it, please make sure we >> know about it. > > ovirt-node is essentially ready to go pending some vdsm and engine > issues. Is there a process setup for what the projects need to do > w.r.t. branching/tagging/etc in gerrit/git? > > Thanks > > Mike Good point. I'll make such a wiki page on Sunday. Btw- can you elaborate on the pending engine/vdsm issues? > >> >> A wiki about the first release (timeline, release criteria and more) can >> be found at http://www.ovirt.org/wiki/Releases/First_Release >> >> Thanks, >> -- >> Ofer Schreiber >> oVirt Release Manager >> >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mburns at redhat.com Thu Dec 8 17:36:22 2011 From: mburns at redhat.com (Mike Burns) Date: Thu, 08 Dec 2011 12:36:22 -0500 Subject: First oVirt Release - 31 JAN In-Reply-To: <2AC09DCC-1C3C-48DC-A7F1-C2FBE1CBD2BC@redhat.com> References: <06db01ccb588$53964b00$fac2e100$@redhat.com> <1323356327.12161.5.camel@beelzebub.mburnsfire.net> <2AC09DCC-1C3C-48DC-A7F1-C2FBE1CBD2BC@redhat.com> Message-ID: <1323365782.2853.1.camel@mburns-laptop.usersys.redhat.com> On Thu, 2011-12-08 at 12:30 -0500, Ofer Schreiber wrote: > > > On 8 Dec 2011, at 16:58, Mike Burns wrote: > > > > > On Thu, 2011-12-08 at 04:03 -0500, Ofer Schreiber wrote: > > > Yesterday, on the weekly meeting, we agreed on the 31st of January > > > as the > > > first release of oVirt. 
> > > > > > Since we might want to stabilize the build a bit before, we will > > > enter a > > > quiet period (basically, create a new branch out of the master, > > > and > > > cherry-pick needed commits) on the 14th of January. > > > > > > Components owners - Please make sure your component is up & > > > running by > > > that time. If there's any reason you won't make it, please make > > > sure we > > > know about it. > > > > ovirt-node is essentially ready to go pending some vdsm and engine > > issues. Is there a process setup for what the projects need to do > > w.r.t. branching/tagging/etc in gerrit/git? > > > > Thanks > > > > Mike > > > > > Good point. > I'll make such a wiki page on Sunday. > > > Btw- can you elaborate on the pending engine/vdsm issues? Engine: https://bugzilla.redhat.com/show_bug.cgi?id=760756 VDSM: https://bugzilla.redhat.com/show_bug.cgi?id=752464 Not sure if the engine one is a must fix or we can just document around it, but the VDSM one is a blocker for release. 
Mike > > > > > > > > > A wiki about the first release (timeline, release criteria and > > > more) can > > > be found at http://www.ovirt.org/wiki/Releases/First_Release > > > > > > Thanks, > > > -- > > > Ofer Schreiber > > > oVirt Release Manager > > > > > > _______________________________________________ > > > Arch mailing list > > > Arch at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/arch > > > > > > From kwade at redhat.com Thu Dec 8 19:48:42 2011 From: kwade at redhat.com (Karsten 'quaid' Wade) Date: Thu, 08 Dec 2011 11:48:42 -0800 Subject: showing community activity on ovirt.org In-Reply-To: <4EE0E0E6.6080008@redhat.com> References: <4EE006F9.2010401@redhat.com> <4EE03BD7.7010204@redhat.com> <4EE060DF.7090009@redhat.com> <4EE0E0E6.6080008@redhat.com> Message-ID: <4EE1149A.6050409@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 12/08/2011 08:08 AM, Carl Trieloff wrote: > > would it not be better to have the link from the widget on the home > page go to http://www.ovirt.org/community-activity/ ? Definitely, but I didn't see a way to make the title of a widget a hyperlink. This (plus previous comment about including dates) falls under the heading of PHP-hacking-outside-of-my-current-skillset. But with someone's help, I'll learn quickly. :) - - Karsten - -- name: Karsten 'quaid' Wade, Sr. 
Community Architect team: Red Hat Community Architecture & Leadership uri: http://communityleadershipteam.org http://TheOpenSourceWay.org gpg: AD0E0C41 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iD8DBQFO4RSa2ZIOBq0ODEERAtn1AJ4sOWcHrioSueN+9BrWq3OA3zI4WQCg51sC EiVKfcLSUJR1ufcnmkUgpG4= =MXEx -----END PGP SIGNATURE----- From yihherng at us.ibm.com Fri Dec 9 17:08:07 2011 From: yihherng at us.ibm.com (Yih-Herng Chuang) Date: Fri, 9 Dec 2011 10:08:07 -0700 Subject: AUTO: Yih-Herng Chuang is out of the office (returning 12/13/2011) Message-ID: I am out of the office until 12/13/2011. I will respond to emails when I return to the office. If there is an emergency please contact Doug Jans for technical issues or my manager Carol Roth. Note: This is an automated response to your message "Arch Digest, Vol 2, Issue 7" sent on 12/9/2011 10:00:02. This is the only notification you will receive while this person is away. From kwade at redhat.com Wed Dec 14 00:53:52 2011 From: kwade at redhat.com (Karsten 'quaid' Wade) Date: Tue, 13 Dec 2011 16:53:52 -0800 Subject: watching community activity Message-ID: <4EE7F3A0.9030402@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Wanted to quickly highlight that we now have a few ways of highlighting community activity. Aside from showing other people "there is something there", it is also valuable for the existing community to see what is going on in real time. * Aggregated changes feed on sidebar of http://ovirt.org . * All changes from git repos and wiki edits on http://ovirt.org/community-activity . * Notification of wiki changes via ovirtbot on the #ovirt IRC channel. On the latter, I'm hoping it makes the IRC channel more useful and interesting; we can consider turning it off if we have a heavy wiki editing session that is interrupting e.g. meetings. - - Karsten - -- name: Karsten 'quaid' Wade, Sr. 
Community Architect team: Red Hat Community Architecture & Leadership uri: http://communityleadershipteam.org http://TheOpenSourceWay.org gpg: AD0E0C41 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iD8DBQFO5/Og2ZIOBq0ODEERAp73AKCl5BU/7MIHu0sju1rzpMisIpG/lgCfRLsD gBM1upcJ+AkdqT5kGp1rSPY= =8zvH -----END PGP SIGNATURE----- From cctrieloff at redhat.com Wed Dec 14 15:45:37 2011 From: cctrieloff at redhat.com (Carl Trieloff) Date: Wed, 14 Dec 2011 10:45:37 -0500 Subject: watching community activity In-Reply-To: <4EE7F3A0.9030402@redhat.com> References: <4EE7F3A0.9030402@redhat.com> Message-ID: <4EE8C4A1.8020902@redhat.com> I also added a top-level menu; we can track to see which way the page gets more hits and then tune it. Carl. On 12/13/2011 07:53 PM, Karsten 'quaid' Wade wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Wanted to quickly highlight that we now have a few ways of highlighting > community activity. Aside from showing other people "there is > something there", it is also valuable for the existing community to see what > is going on in real time. > > * Aggregated changes feed on sidebar of http://ovirt.org . > * All changes from git repos and wiki edits on > http://ovirt.org/community-activity . > * Notification of wiki changes via ovirtbot on the #ovirt IRC channel. > > On the latter, I'm hoping it makes the IRC channel more useful and > interesting; we can consider turning it off if we have a heavy wiki > editing session that is interrupting e.g. meetings. > > - - Karsten > - -- > name: Karsten 'quaid' Wade, Sr. 
Community Architect > team: Red Hat Community Architecture & Leadership > uri: http://communityleadershipteam.org > http://TheOpenSourceWay.org > gpg: AD0E0C41 > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.11 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iD8DBQFO5/Og2ZIOBq0ODEERAp73AKCl5BU/7MIHu0sju1rzpMisIpG/lgCfRLsD > gBM1upcJ+AkdqT5kGp1rSPY= > =8zvH > -----END PGP SIGNATURE----- > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch From kwade at redhat.com Thu Dec 15 00:12:21 2011 From: kwade at redhat.com (Karsten 'quaid' Wade) Date: Wed, 14 Dec 2011 16:12:21 -0800 Subject: wiki writing tips, guidelines, and rules Message-ID: <4EE93B65.3000004@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 You can find a comprehensive guide to writing on our wiki here: http://ovirt.org/wiki/Help:Editing ... with links out to other pages in this category: http://ovirt.org/wiki/Category:Documentation_for_writing_on_the_wiki All of this work was modified from materials from fedoraproject.org/wiki. I used that as a starting point because the ruleset is i) very adherent to the MediaWiki way of doing things, and ii) 80% of the content is very generic and got us 90% of the way there. ;-) Also, those rules and tips were hammered out over multiple years of using MediaWiki as a community and collaborator documentation location, so they are pretty solid. HTH - Karsten - -- name: Karsten 'quaid' Wade, Sr. 
Community Architect team: Red Hat Community Architecture & Leadership uri: http://communityleadershipteam.org http://TheOpenSourceWay.org gpg: AD0E0C41 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iD8DBQFO6Ttl2ZIOBq0ODEERAk3dAJ9zWDmtwi/OIJl77+pZ1bjA8/iMmQCglBuh hgrUG/QBA7Y/1zdDsOBZVXU= =fSkq -----END PGP SIGNATURE----- From sgordon at redhat.com Thu Dec 22 21:15:15 2011 From: sgordon at redhat.com (Steve Gordon) Date: Thu, 22 Dec 2011 16:15:15 -0500 (EST) Subject: oVirt Release Notes In-Reply-To: Message-ID: <7dea1d9f-c1b3-4c6e-9dd3-d06e6fc4bfaa@zmail15.collab.prod.int.phx2.redhat.com> Hi all, One of the topics we discussed in the meeting this week was how we should go about generating the release notes for the first release of oVirt. To this end a flag has been added to bugzilla, ovirt_requires_release_note, to allow us to flag bugs which are noteworthy and should be included in whatever release notes documents we end up building. The actual text of the release note will be drawn from the Technical Notes fields of the flagged bugs. For now, all I ask is that if you are working on a bug and moving it to MODIFIED, and you think it *might* be noteworthy and worth inclusion in the release notes, please consider: - Writing a brief message in the bug's Technical Note field describing what the change being provided fixes/changed. Try to highlight the cause of the issue, the consequence it had in the past, the fix being implemented, and the result from a user perspective. That said, generally we are only looking for a short paragraph per bug. - Flagging the bug as ovirt_requires_release_note? We can review the list closer to the actual branching and release activity in the new year; getting some initial bugs noted and flagged will also give me some source data to demonstrate how we can automate the actual generation of a release notes document so we can discuss further. 
Thanks, Steve From rgolan at redhat.com Mon Dec 26 13:55:00 2011 From: rgolan at redhat.com (Roy Golan) Date: Mon, 26 Dec 2011 15:55:00 +0200 Subject: support for old registration URLs Message-ID: <4EF87CB4.3000709@redhat.com> During the work on http://gerrit.ovirt.org/#change,318 Dan suggested we need to support old RHEVH hosts whose unchangeable registration link points to "RHEVManagerWeb/" . To overcome that I thought to use the jboss rewrite engine (like an httpd mod_rewrite Java port) and to rewrite RHEVManagerWeb to EngineManagerWeb. Simple examples I tried on AS 5.1 didn't work at all. Does anyone have experience with it? Thanks, Roy From ykaul at redhat.com Mon Dec 26 14:07:28 2011 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 26 Dec 2011 16:07:28 +0200 Subject: support for old registration URLs In-Reply-To: <4EF87CB4.3000709@redhat.com> References: <4EF87CB4.3000709@redhat.com> Message-ID: <4EF87FA0.5090906@redhat.com> On 12/26/2011 03:55 PM, Roy Golan wrote: > During the work on http://gerrit.ovirt.org/#change,318 Dan suggested > we need to support > old RHEVH hosts whose unchangeable registration link points to > "RHEVManagerWeb/" . Why would we care about old RHEVH with upstream oVirt? Is there an old ovirt-node that actually registers there? Y. > > To overcome that I thought to use the jboss rewrite engine (like an httpd > mod_rewrite Java port) and to rewrite RHEVManagerWeb to EngineManagerWeb. > Simple examples I tried on AS 5.1 didn't work at all. > > Does anyone have experience with it? > > Thanks, > Roy > > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch
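[Editor's note: whatever rewrite mechanism ends up working on AS 5.1 (a container rewrite valve, a servlet filter, etc. — the thread leaves that open), the rule Roy describes is just a context-root substitution. As a sketch of the mapping such a rule would implement, with the regex itself being only an illustration:]

```python
import re

# Legacy context root and its replacement, per Roy's mail.
OLD_ROOT = re.compile(r"^/RHEVManagerWeb(/.*)?$")

def rewrite(path):
    """Map a legacy RHEVManagerWeb request path onto the new
    EngineManagerWeb context root; leave other paths untouched."""
    match = OLD_ROOT.match(path)
    if not match:
        return path
    return "/EngineManagerWeb" + (match.group(1) or "")
```

Any real deployment would express this as the container's rewrite-rule syntax rather than application code; the sketch only pins down the intended before/after behavior.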