From mburns at redhat.com Tue Jan 3 13:31:11 2012 From: mburns at redhat.com (Mike Burns) Date: Tue, 03 Jan 2012 08:31:11 -0500 Subject: [node-devel] [PATCH] Add network configure in emergency shell In-Reply-To: <4EF2D10E.9080107@linux.vnet.ibm.com> References: <1324453677-14915-1-git-send-email-taget@linux.vnet.ibm.com> <1324478487.12161.177.camel@beelzebub.mburnsfire.net> <4EF2D10E.9080107@linux.vnet.ibm.com> Message-ID: <1325597471.3782.14.camel@beelzebub.mburnsfire.net> Thought I replied to this already, but I can't find the reply, so I'll reply again. On Thu, 2011-12-22 at 14:41 +0800, Eli Qiao wrote: > > > ? 2011?12?21? 22:41, Mike Burns ??: > > Thanks for the patch. > > > > Just so you're aware, the normal process for patch submissions is to > > post patches to gerrit: > > > > http://ovirt.org/wiki/Working_with_oVirt_Gerrit > > > > It looks like you're working off a rather old version. The > > autoinstall_failed function was moved to ovirt-functions back in > > September. > > > > The current repo is at: > > Gitweb: http://gerrit.ovirt.org/gitweb?p=ovirt-node.git > > giturl: http://gerrit.ovirt.org/p/ovirt-node.git > > > > The actual commit that moved the function is here: > > > > http://gerrit.ovirt.org/gitweb?p=ovirt-node.git;a=commit;h=a6f000ae812ee1ee4d762ba1f6c807968ac13170 > > > > > > Some initial comments on you're patch are below: > > > > On Wed, 2011-12-21 at 15:47 +0800, taget at linux.vnet.ibm.com wrote: > > > From: jimmy > > > > > > This patch is to configure networking before dropping in to emergency shell. > > > By doing this, user can using ssh to connect host server. > > > > > > Signed-off-by: Eli > > > --- > > > ovirt-firstboot | 3 +++ > > > 1 files changed, 3 insertions(+), 0 deletions(-) > > > > > > diff --git a/scripts/virt-firstboot b/scripts/ovirt-firstboot > > > index ef0a071..48f31e6 100755 > > > --- a/scripts/ovirt-firstboot > > > +++ b/scripts/ovirt-firstboot > > > @@ -36,6 +36,9 @@ trap 'exit $?' 1 2 13 15 > > > autoinstall_failed(){ > > > log "Automatic installation failed. Please review console messages." > > > log "Press Enter to drop to emergency shell." > > > + /usr/libexec/ovirt-config-networking AUTO > > Since changed to use python scripts instead of bash. This should be > > calling the network_auto function in network.py. You can see how this > > is called in ovirt-auto-install.py > > > > > + log "Configure root password : " > > > + /usr/bin/passwd > > This will set the root password, but ssh password authentication is > > disabled by default, so you need to enable that too. > > > > > read < /dev/console > > > bash < /dev/console > > > } > > > > The overall concept of being able to ssh in when auto-installation fails > > is good, but it's not solved completely here. Assuming I understand > > correctly, you're looking to be able to start the installation process > > and then ssh in to debug it if/when it fails. You're current patch > > doesn't remove the need for console access since you need to be able to > > type the password anyway to set it. > > > > My recommendation would be to: > > > > 1. get the updated repo > > 2. work in ovirt-auto-install.py to configure networking in the else > > condition > > 3. use existing means to set passwords and ssh password authentication > > you can set root password with rootpw commandline option > > you can set admin password with adminpw commandline option > > you can enable ssh password auth with ssh_pwauth option > > These options are documented in ovirt-early where they're > > parsed. 
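(Purely as an illustration -- the option names above are the real ones, but the
value formats shown here are placeholders, so check ovirt-early for what it
actually accepts -- an automated install could carry something like this on the
kernel command line alongside the other install arguments:

    rootpw=<password-or-crypted-hash> adminpw=<password-or-crypted-hash> ssh_pwauth=1

That way no console access is needed just to set a password when the
installation fails.)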
> > > hi Mike, > I don't think add network configure and enable ssh password in the > else condition is a good way , cause if so , the code is duplicated > and ugly. > > > So , can we just configure networking and enable ssh password before > we call storage_auto() ? By doing this , we could access the host even > the automatic installation failed. > like this : > We can't run the network configuration before storage configuration. Network config does some persisting that requires that we have persistent storage. If the persistent storage does not exist, then the networking config will get lost after a reboot. In the case of a failure in the storage configuration, we just want the networking up and running so that it can be debugged. The duplication of code in this case is minimal since it is simply a function call. We also can't just pull the networking piece out and call it after everything else since other parts of the auto-installation process depend on networking. The right solution may be to break up the ovirt_auto_install.py script and do something like: storage_auto check for failure run networking regardless of storage status check for failure if either storage or networking failed, don't run anything else if they didn't fail, run other automated setup and check for errors on each exit successfully if all steps passed, otherwise fail > in scripts/ovirt-auto-install.py > .... > import os > import sys > from ovirtnode.ovirtfunctions import * > from ovirtnode.storage import * > from ovirtnode.install import * > from ovirtnode.network import * > from ovirtnode.log import * > from ovirtnode.kdump import * > from ovirtnode.snmp import * > from ovirt_config_setup.collectd import * > > # store /etc/shadow if adminpw/rootpw are set, handled already in > ovirt-early > file = open("/proc/cmdline") > args = file.read() > if "adminpw" in args or "rootpw" in args: > print "Storing /etc/shadow" > ovirt_store_config("/etc/passwd") > ovirt_store_config("/etc/shadow") > file.close() > # network configuration > print "Configuring Network" > if OVIRT_VARS["OVIRT_BOOTIF"] != "": > network_auto() > > if OVIRT_VARS.has_key("OVIRT_HOSTNAME"): > system("hostname %s" % OVIRT_VARS["OVIRT_HOSTNAME"]) > #set ssh_passwd_auth > if OVIRT_VARS.has_key("OVIRT_SSH_PWAUTH"): > if self.ssh_passwd_status.value() == 1: > > augtool("set","/files/etc/ssh/sshd_config/PasswordAuthentication", > "yes") > elif self.ssh_passwd_status.value() == 0: > > augtool("set","/files/etc/ssh/sshd_config/PasswordAuthentication", > "no") > print "Performing automatic disk partitioning" > if storage_auto(): > # iscsi handled in install.py > print "Configuring Logging" > logging_auto() > print "Configuring Collectd" > collectd_auto() > install = Install() > print "Configuring KDump" > kdump_auto() > print "Configuring SNMP" > snmp_auto() > ... ... > else > print "Automatic installation failed. 
Please > review /tmp/ovirt.log" > sys.exit(1) > > -- > best regards > eli > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel From mburns at redhat.com Thu Jan 5 14:46:47 2012 From: mburns at redhat.com (Mike Burns) Date: Thu, 05 Jan 2012 09:46:47 -0500 (EST) Subject: [node-devel] oVirt Node weekly meeting Message-ID: The following meeting has been modified: Subject: oVirt Node weekly meeting Organizer: "Mike Burns" Location: #ovirt on irc.oftc.net Time: 9:00:00 AM - 9:30:00 AM GMT -05:00 US/Canada Eastern [MODIFIED] Recurrence : Every Tuesday No end date Effective Nov 22, 2011 Invitees: node-devel at ovirt.org; aliguori at linux.vnet.ibm.com; anthony at codemonkey.ws *~*~*~*~*~*~*~*~*~* Weekly call to go over features, bugs, tasks, etc -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 2301 bytes Desc: not available URL: From jboggs at redhat.com Thu Jan 5 15:35:08 2012 From: jboggs at redhat.com (Joey Boggs) Date: Thu, 05 Jan 2012 10:35:08 -0500 Subject: [node-devel] ovirt-node uefi status update Message-ID: <4F05C32C.1060508@redhat.com> Just an update and hopefully a search for help/opinions on finishing uefi installation steps. So far ovirt-node can be installed to usb drives without being booted in uefi mode, this works for usb because it can use a default bootloader in EFI/BOOT/BOOTX64.EFI which is where we place it for now. The problem with installing it to other media like sata drives etc is that we have to put a bootloader entry in using efibootmgr which requires being booted in uefi mode. Here are some problems with getting that completed: - livecd-iso-to-disk doesn't support grub2 yet, using legacy grub can't happen as it's obsoleted by grub2 now in the repos, so the grub.conf generated config is written for grub legacy. Manually writing in kernel/initrd entries will boot fine - can possibly patch this add a --grub2 option or similar - we use /dev/mapper devices when available, only usb devices show as /dev/sdXXX. - grub2-efi-install /dev/sdX --boot-directory=/liveos we translate back to /dev/sdXX for all /dev/mapper devices as I was told devmapper/multipath support isnt quite there, not a problem there AFAIK for now - grub2-efi-install attempts adding an entry for /EFI/ovirt/grubx64.efi which fails, reports no disk found, checking manually with efibootmgr -v, shows no entry added. - manually adding an entry as efibootmgr -c -l \\EFI\\ovirt\\grubx64.efi -L oVirt still puts me at a blinking cursor on boot As a baseline for the machine installing RHEL6.2 uefi works fine but it's using grub legacy. Typical storage layout as it's different from a Fedora/RHEL install, livecd format and installs to a disk in the same format. 
/dev/mapper/XXXXXXp1 - EFI partition mounted to /liveos/efi (256M fat32) /dev/mapper/XXXXXXp2 - Second Root for upgrades/backup boot /dev/mapper/XXXXXXp3 - Actual Root partition on first install, contents basically mirrors a livecd + grub directories mounted to /liveos /dev/mapper/XXXXXXp4 - LVM partition for stateful storage So not sure where to go at this point and just want to throw my progress out there to see if anyone can point me on the right path as usb does work just not internal drives From mjg at redhat.com Thu Jan 5 15:56:34 2012 From: mjg at redhat.com (Matthew Garrett) Date: Thu, 5 Jan 2012 15:56:34 +0000 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <4F05C32C.1060508@redhat.com> References: <4F05C32C.1060508@redhat.com> Message-ID: <20120105155633.GB21118@srcf.ucam.org> On Thu, Jan 05, 2012 at 10:35:08AM -0500, Joey Boggs wrote: > - grub2-efi-install attempts adding an entry for > /EFI/ovirt/grubx64.efi which fails, reports no disk found, checking > manually with efibootmgr -v, shows no entry added. > - manually adding an entry as efibootmgr -c -l > \\EFI\\ovirt\\grubx64.efi -L oVirt still puts me at a blinking > cursor on boot What does efibootmgr -v report in this setup? -- Matthew Garrett | mjg59 at srcf.ucam.org From pjones at redhat.com Thu Jan 5 16:30:59 2012 From: pjones at redhat.com (Peter Jones) Date: Thu, 05 Jan 2012 11:30:59 -0500 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <4F05C32C.1060508@redhat.com> References: <4F05C32C.1060508@redhat.com> Message-ID: <4F05D043.7070904@redhat.com> On 01/05/2012 10:35 AM, Joey Boggs wrote: > Just an update and hopefully a search for help/opinions on finishing uefi > installation steps. > > So far ovirt-node can be installed to usb drives without being booted in > uefi mode, this works for usb because it can use a default bootloader in > EFI/BOOT/BOOTX64.EFI which is where we place it for now. The problem with > installing it to other media like sata drives etc is that we have to put a > bootloader entry in using efibootmgr which requires being booted in uefi > mode. The current spec also provides for using /EFI/BOOT/BOOTX64.EFI as a fallback on non-removable media. > Typical storage layout as it's different from a Fedora/RHEL install, livecd > format and installs to a disk in the same format. > > /dev/mapper/XXXXXXp1 - EFI partition mounted to /liveos/efi (256M fat32) You probably really do want to stick this in a subdirectory that you can "chmod o-rwx" - or can boot passwords and whatnot actually not be set in this environment? -- Peter From jboggs at redhat.com Thu Jan 5 17:24:35 2012 From: jboggs at redhat.com (Joey Boggs) Date: Thu, 05 Jan 2012 12:24:35 -0500 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <20120105155633.GB21118@srcf.ucam.org> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> Message-ID: <4F05DCD3.6020309@redhat.com> On 01/05/2012 10:56 AM, Matthew Garrett wrote: > On Thu, Jan 05, 2012 at 10:35:08AM -0500, Joey Boggs wrote: >> - grub2-efi-install attempts adding an entry for >> /EFI/ovirt/grubx64.efi which fails, reports no disk found, checking >> manually with efibootmgr -v, shows no entry added. >> - manually adding an entry as efibootmgr -c -l >> \\EFI\\ovirt\\grubx64.efi -L oVirt still puts me at a blinking >> cursor on boot > What does efibootmgr -v report in this setup? 
> BootCurrent: 0009 Timeout: 1 seconds BootOrder: 0000,0004,0005,0006,0009 Boot0000* oVirt HD(1,800,79800,24f832dc-406a-4a56-a0d3-96775851ee61)File(\EFI\ovirt\grubx64.efi) Boot0004* Hard Drive BIOS(2,0,00)PNY USB 2.0 FD 8192. Boot0005* CD/DVD Drive BIOS(3,0,00)SATA: TSSTcorp CDDVDW TS-H653F. Boot0006* UEFI: VBTM Store 'n' Go 5.00 ACPI(a0341d0,0)PCI(1a,0)USB(1,0)USB(3,0) Boot0009* UEFI: PNY USB 2.0 FD 8192 ACPI(a0341d0,0)PCI(1a,0)USB(1,0)USB(4,0)HD(1,800,f17001,5f9c5a4f-ace8-4908-ae87-604382036913) From mjg at redhat.com Thu Jan 5 17:32:39 2012 From: mjg at redhat.com (Matthew Garrett) Date: Thu, 5 Jan 2012 17:32:39 +0000 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <4F05DCD3.6020309@redhat.com> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> Message-ID: <20120105173239.GB23205@srcf.ucam.org> On Thu, Jan 05, 2012 at 12:24:35PM -0500, Joey Boggs wrote: > On 01/05/2012 10:56 AM, Matthew Garrett wrote: > >On Thu, Jan 05, 2012 at 10:35:08AM -0500, Joey Boggs wrote: > >> - grub2-efi-install attempts adding an entry for > >>/EFI/ovirt/grubx64.efi which fails, reports no disk found, checking > >>manually with efibootmgr -v, shows no entry added. > >> - manually adding an entry as efibootmgr -c -l > >>\\EFI\\ovirt\\grubx64.efi -L oVirt still puts me at a blinking > >>cursor on boot > >What does efibootmgr -v report in this setup? > > > > > BootCurrent: 0009 > Timeout: 1 seconds > BootOrder: 0000,0004,0005,0006,0009 > Boot0000* oVirt HD(1,800,79800,24f832dc-406a-4a56-a0d3-96775851ee61)File(\EFI\ovirt\grubx64.efi) > Boot0004* Hard Drive BIOS(2,0,00)PNY USB 2.0 FD 8192. > Boot0005* CD/DVD Drive BIOS(3,0,00)SATA: TSSTcorp CDDVDW TS-H653F. > Boot0006* UEFI: VBTM Store 'n' Go 5.00 > ACPI(a0341d0,0)PCI(1a,0)USB(1,0)USB(3,0) > Boot0009* UEFI: PNY USB 2.0 FD 8192 ACPI(a0341d0,0)PCI(1a,0)USB(1,0)USB(4,0)HD(1,800,f17001,5f9c5a4f-ace8-4908-ae87-604382036913) And if you mount /dev/sda1, the file EFI/ovirt/grubx64.efi exists? -- Matthew Garrett | mjg59 at srcf.ucam.org From jboggs at redhat.com Thu Jan 5 18:36:45 2012 From: jboggs at redhat.com (Joey Boggs) Date: Thu, 05 Jan 2012 13:36:45 -0500 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <20120105173239.GB23205@srcf.ucam.org> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> Message-ID: <4F05EDBD.1030706@redhat.com> On 01/05/2012 12:32 PM, Matthew Garrett wrote: > On Thu, Jan 05, 2012 at 12:24:35PM -0500, Joey Boggs wrote: >> On 01/05/2012 10:56 AM, Matthew Garrett wrote: >>> On Thu, Jan 05, 2012 at 10:35:08AM -0500, Joey Boggs wrote: >>>> - grub2-efi-install attempts adding an entry for >>>> /EFI/ovirt/grubx64.efi which fails, reports no disk found, checking >>>> manually with efibootmgr -v, shows no entry added. >>>> - manually adding an entry as efibootmgr -c -l >>>> \\EFI\\ovirt\\grubx64.efi -L oVirt still puts me at a blinking >>>> cursor on boot >>> What does efibootmgr -v report in this setup? >>> >> >> BootCurrent: 0009 >> Timeout: 1 seconds >> BootOrder: 0000,0004,0005,0006,0009 >> Boot0000* oVirt HD(1,800,79800,24f832dc-406a-4a56-a0d3-96775851ee61)File(\EFI\ovirt\grubx64.efi) >> Boot0004* Hard Drive BIOS(2,0,00)PNY USB 2.0 FD 8192. >> Boot0005* CD/DVD Drive BIOS(3,0,00)SATA: TSSTcorp CDDVDW TS-H653F. 
>> Boot0006* UEFI: VBTM Store 'n' Go 5.00 >> ACPI(a0341d0,0)PCI(1a,0)USB(1,0)USB(3,0) >> Boot0009* UEFI: PNY USB 2.0 FD 8192 ACPI(a0341d0,0)PCI(1a,0)USB(1,0)USB(4,0)HD(1,800,f17001,5f9c5a4f-ace8-4908-ae87-604382036913) > And if you mount /dev/sda1, the file EFI/ovirt/grubx64.efi exists? > Just to throw out multipath/dev-mapper If I mount /dev/mapper/XXXXX its there, if I kill multipath and partprobe /dev/sda it there as well. It's an asus p8h61-m le motherboard I'm testing with that has the ez bios if that adds anything From jboggs at redhat.com Thu Jan 5 18:40:35 2012 From: jboggs at redhat.com (Joey Boggs) Date: Thu, 05 Jan 2012 13:40:35 -0500 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <4F05EDBD.1030706@redhat.com> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> Message-ID: <4F05EEA3.8030704@redhat.com> On 01/05/2012 01:36 PM, Joey Boggs wrote: > On 01/05/2012 12:32 PM, Matthew Garrett wrote: >> On Thu, Jan 05, 2012 at 12:24:35PM -0500, Joey Boggs wrote: >>> On 01/05/2012 10:56 AM, Matthew Garrett wrote: >>>> On Thu, Jan 05, 2012 at 10:35:08AM -0500, Joey Boggs wrote: >>>>> - grub2-efi-install attempts adding an entry for >>>>> /EFI/ovirt/grubx64.efi which fails, reports no disk found, checking >>>>> manually with efibootmgr -v, shows no entry added. >>>>> - manually adding an entry as efibootmgr -c -l >>>>> \\EFI\\ovirt\\grubx64.efi -L oVirt still puts me at a blinking >>>>> cursor on boot >>>> What does efibootmgr -v report in this setup? >>>> >>> >>> BootCurrent: 0009 >>> Timeout: 1 seconds >>> BootOrder: 0000,0004,0005,0006,0009 >>> Boot0000* oVirt >>> HD(1,800,79800,24f832dc-406a-4a56-a0d3-96775851ee61)File(\EFI\ovirt\grubx64.efi) >>> Boot0004* Hard Drive BIOS(2,0,00)PNY USB 2.0 FD 8192. >>> Boot0005* CD/DVD Drive BIOS(3,0,00)SATA: TSSTcorp CDDVDW TS-H653F. >>> Boot0006* UEFI: VBTM Store 'n' Go 5.00 >>> ACPI(a0341d0,0)PCI(1a,0)USB(1,0)USB(3,0) >>> Boot0009* UEFI: PNY USB 2.0 FD 8192 >>> ACPI(a0341d0,0)PCI(1a,0)USB(1,0)USB(4,0)HD(1,800,f17001,5f9c5a4f-ace8-4908-ae87-604382036913) >> And if you mount /dev/sda1, the file EFI/ovirt/grubx64.efi exists? >> > Just to throw out multipath/dev-mapper If I mount /dev/mapper/XXXXX > its there, if I kill multipath and partprobe /dev/sda it there as well. 
> > It's an asus p8h61-m le motherboard I'm testing with that has the ez > bios if that adds anything > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel Also tried rerunning grub2-efi-install again with just /dev/sda and multipath off, No errors reported about the drive this time, but still a blinking cursor grub2-efi-install /dev/sda --boot-directory=/liveos /dev/sda1 -> /liveos/efi /dev/sda3 -> /liveos From mjg at redhat.com Thu Jan 5 18:42:22 2012 From: mjg at redhat.com (Matthew Garrett) Date: Thu, 5 Jan 2012 18:42:22 +0000 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <4F05EDBD.1030706@redhat.com> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> Message-ID: <20120105184221.GA24974@srcf.ucam.org> On Thu, Jan 05, 2012 at 01:36:45PM -0500, Joey Boggs wrote: > Just to throw out multipath/dev-mapper If I mount /dev/mapper/XXXXX > its there, if I kill multipath and partprobe /dev/sda it there as > well. > > It's an asus p8h61-m le motherboard I'm testing with that has the ez > bios if that adds anything Ok. Just to avoid some other corner cases - does it work if you just do a normal EFI install to the bare metal? -- Matthew Garrett | mjg59 at srcf.ucam.org From mjg at redhat.com Thu Jan 5 19:00:43 2012 From: mjg at redhat.com (Matthew Garrett) Date: Thu, 5 Jan 2012 19:00:43 +0000 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <4F05EEA3.8030704@redhat.com> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> <4F05EEA3.8030704@redhat.com> Message-ID: <20120105190043.GA25407@srcf.ucam.org> On Thu, Jan 05, 2012 at 01:40:35PM -0500, Joey Boggs wrote: > Also tried rerunning grub2-efi-install again with just /dev/sda and > multipath off, No errors reported about the drive this time, but > still a blinking cursor > > grub2-efi-install /dev/sda --boot-directory=/liveos > /dev/sda1 -> /liveos/efi > /dev/sda3 -> /liveos Yeah, wouldn't necessarily expect grub2-efi-install to do anything useful right now. Make sure you're using grub. -- Matthew Garrett | mjg59 at srcf.ucam.org From jboggs at redhat.com Thu Jan 5 19:06:47 2012 From: jboggs at redhat.com (Joey Boggs) Date: Thu, 05 Jan 2012 14:06:47 -0500 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <20120105184221.GA24974@srcf.ucam.org> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> <20120105184221.GA24974@srcf.ucam.org> Message-ID: <4F05F4C7.4070001@redhat.com> On 01/05/2012 01:42 PM, Matthew Garrett wrote: > On Thu, Jan 05, 2012 at 01:36:45PM -0500, Joey Boggs wrote: > >> Just to throw out multipath/dev-mapper If I mount /dev/mapper/XXXXX >> its there, if I kill multipath and partprobe /dev/sda it there as >> well. >> >> It's an asus p8h61-m le motherboard I'm testing with that has the ez >> bios if that adds anything > Ok. Just to avoid some other corner cases - does it work if you just do > a normal EFI install to the bare metal? 
> fresh boot, killed everything running/mounted from the /dev/mapper lvm devices multipath -F service multipathd stop dd'd the first 50MB of the disk then ran: parted /dev/sda mklabel gpt mkpart efi 0M 255M set 1 boot on mkpart root 256M 512M partprobe /dev/sda sda1/sd2 exist mkfs.vfat -F32 /dev/sda1 mount /dev/sda2 /liveos mkdir /liveos/efi mount /dev/sda1 /liveos/efi grub2-efi-install /dev/sda --boot-directory=/liveos # successful, no errors efibootmgr -v # looks the same umount /liveos/efi umount /liveos reboot Now I get a grub prompt, anything else I can possibly rule out to narrow it down? its a start now :) From mjg at redhat.com Thu Jan 5 19:13:58 2012 From: mjg at redhat.com (Matthew Garrett) Date: Thu, 5 Jan 2012 19:13:58 +0000 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <4F05F4C7.4070001@redhat.com> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> <20120105184221.GA24974@srcf.ucam.org> <4F05F4C7.4070001@redhat.com> Message-ID: <20120105191358.GA25627@srcf.ucam.org> On Thu, Jan 05, 2012 at 02:06:47PM -0500, Joey Boggs wrote: > Now I get a grub prompt, anything else I can possibly rule out to > narrow it down? its a start now :) If the EFI entries are the same then there's something different about the partitions - can you give parted's idea of the partition table in both cases, along with the efibootmgr -v and make sure that the grub executable installed looks the same? -- Matthew Garrett | mjg59 at srcf.ucam.org From jboggs at redhat.com Mon Jan 9 01:10:12 2012 From: jboggs at redhat.com (Joey Boggs) Date: Sun, 08 Jan 2012 20:10:12 -0500 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <20120105191358.GA25627@srcf.ucam.org> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> <20120105184221.GA24974@srcf.ucam.org> <4F05F4C7.4070001@redhat.com> <20120105191358.GA25627@srcf.ucam.org> Message-ID: <4F0A3E74.3030605@redhat.com> On 01/05/2012 02:13 PM, Matthew Garrett wrote: > On Thu, Jan 05, 2012 at 02:06:47PM -0500, Joey Boggs wrote: > >> Now I get a grub prompt, anything else I can possibly rule out to >> narrow it down? its a start now :) > If the EFI entries are the same then there's something different about > the partitions - can you give parted's idea of the partition table in > both cases, along with the efibootmgr -v and make sure that the grub > executable installed looks the same? > Partitions appear the same, if I go through the motions I did before but using /dev/mapper/1ATAXXXXXXX, I narrowed it down a bit more mklabel /dev/mapper/1ATAXXXXX mkpart efi 1M 256M mkpart root 256M 512M mkfs.vfat -F32 /dev/mapper/1ATAXXXXp1 *** this reports "unable to get drive geometry, using default 255/63 on /dev/mapper/XXXp1" *** not a deal breaker it looks like though mke2fs /dev/mapper/1ATAXXXXp2 mount /dev/mapper/1ATAXXXXp2 /liveos mkdir /liveos/efi mount /dev/mapper/1ATAXXXXp1 /liveos/efi grub2-efi-install /dev/mapper/1ATAXXX --boot-directory=/liveos *** reports output of efibootmgr -v and the below error Could not open disk: No such file or directory Installation Finished. 
No Error Reported grub2-efi-probe --target=device -v /liveos ** /dev/mapper/1ATAXXXXp2 grub2-efi-probe --target=device -v /liveos/efi ** /dev/mapper/1ATAXXXXp1 efibootmgr -c -l \\EFI\\ovirt\\grubx64.efi -L ovirt --gpt -d /dev/mapper/1ATAXXXX ** returns fine, no error My guess is some translation in the grub2-efi-install is going wrong... causing "Could not open disk: No such file or directory" Peter any ideas? From mjg at redhat.com Mon Jan 9 02:44:25 2012 From: mjg at redhat.com (Matthew Garrett) Date: Mon, 9 Jan 2012 02:44:25 +0000 Subject: [node-devel] ovirt-node uefi status update In-Reply-To: <4F0A3E74.3030605@redhat.com> References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> <20120105184221.GA24974@srcf.ucam.org> <4F05F4C7.4070001@redhat.com> <20120105191358.GA25627@srcf.ucam.org> <4F0A3E74.3030605@redhat.com> Message-ID: <20120109024424.GA3252@srcf.ucam.org> On EFI systems there's no requirement for grub to know what the drive geometry is, so as long as it's installed a binary then it shouldn't matter. But, as noted before, please just try with grub at the moment - grub2 won't be the default for EFI until F17 at least. -- Matthew Garrett | mjg59 at srcf.ucam.org From hypervisor at qq.com Mon Jan 9 06:14:40 2012 From: hypervisor at qq.com (=?ISO-8859-1?B?TGF1cmVuY2U=?=) Date: Mon, 9 Jan 2012 14:14:40 +0800 Subject: [node-devel] where is this code? Message-ID: Hi, I found there is a "RHEV-M" configuration page in node setup UI. But I CAN'T find any source code related to this page in node project code, while other code of this UI is mainly in ovirt-config-setup.py. Where is "RHEV-M" configuration code?? -------------- next part -------------- An HTML attachment was scrubbed... URL: From work.iec23801 at gmail.com Mon Jan 9 07:36:40 2012 From: work.iec23801 at gmail.com (Jarod. w) Date: Mon, 9 Jan 2012 15:36:40 +0800 Subject: [node-devel] where is this code? In-Reply-To: References: Message-ID: You can get it from vdsm/vdsm_reg/engine.py. 2012/1/9 Laurence > Hi, > I found there is a "RHEV-M" configuration page in node setup UI. > But I CAN'T find any source code related to this page in node project > code, while other code of this UI is mainly in ovirt-config-setup.py. > Where is "RHEV-M" configuration code?? > > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mburns at redhat.com Tue Jan 10 14:43:06 2012 From: mburns at redhat.com (Mike Burns) Date: Tue, 10 Jan 2012 09:43:06 -0500 Subject: [node-devel] oVirt Node Weekly Meeting Summary Message-ID: <1326206586.3202.43.camel@beelzebub.mburnsfire.net> Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-10-14.00.html Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-10-14.00.txt Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-10-14.00.log.html Good discussion on stateless in connection with work by the archipel group. 
Action items, by person

apevec
  apevec to send proposal to node-devel re: virt-minimal comps group
mburns
  mburns to pick up 768491
  mburns to follow up with vdsm team on rhevm plugin renaming to engine
  mburns to build 2.2.1 rpms Jan 12 and iso Jan 13
  mburns deferred 2.3.0 bug and schedule review to next week
  mburns to investigate creating ovirt-node mirror on github
  mburns to rework stateless plan to mimic archipel workflow

From hypervisor at qq.com Wed Jan 11 11:39:05 2012
From: hypervisor at qq.com (=?ISO-8859-1?B?TGF1cmVuY2U=?=)
Date: Wed, 11 Jan 2012 19:39:05 +0800
Subject: [node-devel] problem with ovirt node startup
Message-ID: 

hi,
I occasionally run into this problem: when the node starts, I have to wait a
long time (about 15 minutes) until I see the "login:" prompt. Then I checked
the /var/log/messages file; it said "...localhost systemd[1]: ovirt.service
operation timed out..." or sometimes "...localhost systemd[1]:
ovirt-awake.service operation timed out...". I looked at these two scripts,
but couldn't find anything abnormal. Can anybody help me?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pmyers at redhat.com Wed Jan 11 18:32:48 2012
From: pmyers at redhat.com (Perry Myers)
Date: Wed, 11 Jan 2012 13:32:48 -0500
Subject: [node-devel] oVirt Node Weekly Meeting Summary
In-Reply-To: <1326206586.3202.43.camel@beelzebub.mburnsfire.net>
References: <1326206586.3202.43.camel@beelzebub.mburnsfire.net>
Message-ID: <4F0DD5D0.1070509@redhat.com>

On 01/10/2012 09:43 AM, Mike Burns wrote:
> Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-10-14.00.html
> Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-10-14.00.txt
> Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-10-14.00.log.html
>
> Good discussion on stateless in connection with work by the archipel group.
>
> Action items, by person
>
> apevec
>   apevec to send proposal to node-devel re: virt-minimal comps group
> mburns
>   mburns to pick up 768491
>   mburns to follow up with vdsm team on rhevm plugin renaming to engine
>   mburns to build 2.2.1 rpms Jan 12 and iso Jan 13
>   mburns deferred 2.3.0 bug and schedule review to next week
>   mburns to investigate creating ovirt-node mirror on github
>   mburns to rework stateless plan to mimic archipel workflow

Should add:

jboggs to finalize EFI patches and post to list in time for 2.2.1 inclusion

And then once EFI is out of the way, we can start focusing on the plugin
model work.  This may end up dovetailing with apevec's suggestions for
virt-minimal comps group.

Ack?

Perry

From jim at meyering.net Mon Jan 16 13:47:34 2012
From: jim at meyering.net (Jim Meyering)
Date: Mon, 16 Jan 2012 14:47:34 +0100
Subject: [node-devel] ovirt-node uefi status update
In-Reply-To: <4F05F4C7.4070001@redhat.com> (Joey Boggs's message of "Thu, 05 Jan 2012 14:06:47 -0500")
References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> <20120105184221.GA24974@srcf.ucam.org> <4F05F4C7.4070001@redhat.com>
Message-ID: <87aa5n3gjt.fsf@rho.meyering.net>

Joey Boggs wrote:
> On 01/05/2012 01:42 PM, Matthew Garrett wrote:
>> On Thu, Jan 05, 2012 at 01:36:45PM -0500, Joey Boggs wrote:
>>
>>> Just to throw out multipath/dev-mapper If I mount /dev/mapper/XXXXX
>>> its there, if I kill multipath and partprobe /dev/sda it there as
>>> well.
>>>
>>> It's an asus p8h61-m le motherboard I'm testing with that has the ez
>>> bios if that adds anything
>> Ok.
Just to avoid some other corner cases - does it work if you just do >> a normal EFI install to the bare metal? >> > fresh boot, killed everything running/mounted from the /dev/mapper lvm devices > multipath -F > service multipathd stop > > dd'd the first 50MB of the disk then ran: > > parted /dev/sda > mklabel gpt > mkpart efi 0M 255M > set 1 boot on > mkpart root 256M 512M > partprobe /dev/sda > sda1/sd2 exist > mkfs.vfat -F32 /dev/sda1 > mount /dev/sda2 /liveos > mkdir /liveos/efi > mount /dev/sda1 /liveos/efi > grub2-efi-install /dev/sda --boot-directory=/liveos > # successful, no errors > efibootmgr -v > # looks the same > umount /liveos/efi > umount /liveos > reboot > > Now I get a grub prompt, anything else I can possibly rule out to > narrow it down? its a start now :) Hi Joey, Here are a few suggestions (although unrelated to the current problem, taking these suggestions may help avoid confusion and performance problems in actual use, i.e., if you happen to use an SSD or a disk for which alignment makes a difference): - Do not use "primary" as a GPT partition name. "primary" makes sense when using an msdos partition table, but not with a GPT table. There, it suggests a misunderstanding. Use a meaningful name instead, as below. - Do not use the "M" suffix, because e.g., 256M translates to byte 256000000, which may let parted give you a partition that is not well-aligned. For some types of disks, the resulting systems would then have unnecessarily poor I/O performance. With parted-3.1, you will be able to use "MiB" as a suffix, and it will do the right thing, but for now, that is not portable to RHEL. - I see you invoked parted only once above, but in case it's run more than once for a given device in the code... invoking parted multiple times is not needed/desired. It's more efficient and has less duplication to do it like this (note use of "--" to avoid the need to keep the "-34" from being interpret as an option): k=1024 m=$((k*k)) parted -s /dev/mapper/XXXXX -- \ mklabel gpt \ mkpart efi 34s $((256*m-1))b \ mkpart root $((256*m))b $((512*m-1))b \ mkpart root-backup $((512*m))b $((768*m-1))b \ mkpart data $((768*m))b -34s The 34s (34 sectors) is to allow room for the primary GPT header. The -34s as endpoint of final partition says to use everything up to the end of the disk, but leaving 34 sectors for the backup GPT header. Note that I've used meaningful partition names and that all other partition start and end offsets are specified in bytes (note the "b" suffixes), and each partition other than the first one starts on a byte address that is a multiple of 256MiB (not 256MB). The EFI partition is used only once, at boot, so its alignment does not matter. ------------------------------------- Maybe relevant, I noticed you wrote this: set 1 boot on Did you mean to use "bios_grub" instead of "boot"? 
set 1 bios_grub on

-------------------------------------

To try the above, I did this:
(this uses 1GB of memory, so be sure you have at least that much free)

  modprobe scsi_debug dev_size_mb=1000
  dev=/dev/sdd
  k=1024
  m=$((k*k))
  parted -s $dev -- \
    mklabel gpt \
    mkpart efi 34s $((256*m-1))b \
    mkpart root $((256*m))b $((512*m-1))b \
    mkpart root-backup $((512*m))b $((768*m-1))b \
    mkpart data $((768*m))b -34s
  parted -s $dev u mi p u s p

Model: Linux scsi_debug (scsi)
Disk /dev/sdd: 1000MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start    End      Size    File system  Name         Flags
 1      0.02MiB  256MiB   256MiB               efi          bios_grub
 2      256MiB   512MiB   256MiB               root
 3      512MiB   768MiB   256MiB               root-backup
 4      768MiB   1000MiB  232MiB               data

Model: Linux scsi_debug (scsi)
Disk /dev/sdd: 2048000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start     End       Size     File system  Name         Flags
 1      34s       524287s   524254s               efi          bios_grub
 2      524288s   1048575s  524288s               root
 3      1048576s  1572863s  524288s               root-backup
 4      1572864s  2047966s  475103s               data

If you convert those start sector numbers to hexadecimal, you see that
they are well-aligned: 80000 100000 180000

-------------------------------------

Note that using the commands you listed above, I do end up with
well-aligned partitions, but with an 800KB free gap between the two
partitions:

  # dev=/dev/sdd
  # parted -s $dev -- mklabel gpt mkpart efi 0 255M mkpart root 256M 512M
  # parted -s $dev u s p free

  Model: Linux scsi_debug (scsi)
  Disk /dev/sdd: 2048000s
  Sector size (logical/physical): 512B/512B
  Partition Table: gpt

  Number  Start    End       Size      File system  Name  Flags
   1      34s      498046s   498013s                efi
          498047s  499711s   1665s     Free Space
   2      499712s  999423s   499712s                root
          999424s  2047966s  1048543s  Free Space

From apevec at gmail.com Mon Jan 16 22:37:52 2012
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 16 Jan 2012 23:37:52 +0100
Subject: [node-devel] ovirt-node uefi status update
In-Reply-To: <87aa5n3gjt.fsf@rho.meyering.net>
References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> <20120105184221.GA24974@srcf.ucam.org> <4F05F4C7.4070001@redhat.com> <87aa5n3gjt.fsf@rho.meyering.net>
Message-ID: 

Hi Jim,

thanks for yours, as always :) very detailed instructions! I'll let
Joey test them out.
Just one clarification:
> I noticed you wrote this:
>
> ? ?set 1 boot on
>
> Did you mean to use "bios_grub" instead of "boot"?
>
> ? ?set 1 bios_grub on

That sets http://en.wikipedia.org/wiki/BIOS_Boot_partition while AFAIK
boot flag is for ESP http://en.wikipedia.org/wiki/EFI_System_partition
which is what we need to boot on an EFI system.

Cheers,
Alan

From jim at meyering.net Tue Jan 17 11:29:52 2012
From: jim at meyering.net (Jim Meyering)
Date: Tue, 17 Jan 2012 12:29:52 +0100
Subject: [node-devel] ovirt-node uefi status update
In-Reply-To: (Alan Pevec's message of "Mon, 16 Jan 2012 23:37:52 +0100")
References: <4F05C32C.1060508@redhat.com> <20120105155633.GB21118@srcf.ucam.org> <4F05DCD3.6020309@redhat.com> <20120105173239.GB23205@srcf.ucam.org> <4F05EDBD.1030706@redhat.com> <20120105184221.GA24974@srcf.ucam.org> <4F05F4C7.4070001@redhat.com> <87aa5n3gjt.fsf@rho.meyering.net>
Message-ID: <87aa5my3bj.fsf@rho.meyering.net>

Alan Pevec wrote:
> Hi Jim,
>
> thanks for yours, as always :) very detailed instructions! I'll let
> Joey test them out.
> Just one clarification:
>> I noticed you wrote this:
>>
>> ? ?set 1 boot on
>>
>> Did you mean to use "bios_grub" instead of "boot"?
>>
>> ? 
?set 1 bios_grub on > > That sets http://en.wikipedia.org/wiki/BIOS_Boot_partition while AFAIK > boot flag is for ESP http://en.wikipedia.org/wiki/EFI_System_partition > which is what we need to boot on an EFI system. Hi Alan, Thanks, you're right of course. I conflated them. From mburns at redhat.com Tue Jan 17 14:20:49 2012 From: mburns at redhat.com (Mike Burns) Date: Tue, 17 Jan 2012 09:20:49 -0500 Subject: [node-devel] oVirt Node Weekly Sync Meeting Minutes Message-ID: <1326810049.7241.6.camel@beelzebub.mburnsfire.net> Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-17-14.00.txt Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-17-14.00.txt Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-17-14.00.log.html Action items, by person 1. apevec 1. apevec to send virt-minimal comps group proposal to node-devel and fedora-devel 2. mburns 1. mburns to build final image for test day on 17 Jan 2. mburns to rework stateless plan to mimic archipel workflow (didn't get to it this past week) 3. mburns defers 2.3.0 bug and schedule review until after test day 4. mburns to tag/branch today for release 5. mburns to followup with jenkins owner for build deps 3. jboggs 1. jboggs to followup on jim meyering's email about uefi From mburns at redhat.com Wed Jan 18 13:58:01 2012 From: mburns at redhat.com (Mike Burns) Date: Wed, 18 Jan 2012 08:58:01 -0500 Subject: [node-devel] [ovirt-devel] blockdev --flushbufs required [was: parted issue/question In-Reply-To: <87haztruph.fsf_-_@rho.meyering.net> References: <4F13ADB3.5070806@redhat.com> <87lip6t157.fsf@rho.meyering.net> <20120118003802.GA11769@agk-dp.fab.redhat.com> <87haztruph.fsf_-_@rho.meyering.net> Message-ID: <1326895081.27879.0.camel@beelzebub.mburnsfire.net> Thanks Jim Moving to correct ovirt-node mailing list (node-devel at ovirt.org) On Wed, 2012-01-18 at 14:44 +0100, Jim Meyering wrote: > [Following up on this thread: > http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/14999] > > Alasdair G Kergon wrote: > > Try > > blkdev --flushbufs > > after any cmd that writes to a dev to see if that makes any difference. > > Thanks for the work-around. > Using "blockdev --flushbufs $dev" does indeed make parted > behave the same with dm-backed storage as with other devices. > > Adjusting my small example, > > cd /tmp; truncate -s 10m g && loop=$(losetup --show -f g) > echo 0 100 linear $loop 0 | dmsetup create zub > dev=/dev/mapper/zub > parted -s $dev \ > mklabel gpt \ > mkpart efi 34s 34s \ > mkpart root 35s 35s \ > mkpart roo2 36s 36s \ > u s p > blockdev --flushbufs $dev # FIXME: required with device-mapper-1.02.65-5 > > # write random bits to p1 > dd of=${dev}p1 if=/dev/urandom count=1 > dd if=${dev}p1 of=p1-copy.pre count=1 > parted -s $dev mkpart p4 37s 37s > blockdev --flushbufs $dev # FIXME: required with device-mapper-1.02.65-5 > > dd if=${dev}p1 of=p1-copy.post count=1 > cmp -l p1-copy.pre p1-copy.post > > With that, the "cmp" show no differences. > > Does this sound like a problem in device-mapper land, > or in how parted interacts with DM? 
> > _______________________________________________ > ovirt-devel mailing list > ovirt-devel at redhat.com > https://www.redhat.com/mailman/listinfo/ovirt-devel From apevec at gmail.com Wed Jan 18 16:22:14 2012 From: apevec at gmail.com (Alan Pevec) Date: Wed, 18 Jan 2012 17:22:14 +0100 Subject: [node-devel] [ovirt-devel] blockdev --flushbufs required [was: parted issue/question In-Reply-To: <1326895081.27879.0.camel@beelzebub.mburnsfire.net> References: <4F13ADB3.5070806@redhat.com> <87lip6t157.fsf@rho.meyering.net> <20120118003802.GA11769@agk-dp.fab.redhat.com> <87haztruph.fsf_-_@rho.meyering.net> <1326895081.27879.0.camel@beelzebub.mburnsfire.net> Message-ID: > On Wed, 2012-01-18 at 14:44 +0100, Jim Meyering wrote: >> [Following up on this thread: >> ?http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/14999] >> Alasdair G Kergon wrote: >> > Try >> > ? blkdev --flushbufs >> > after any cmd that writes to a dev to see if that makes any difference. >> >> Thanks for the work-around. >> Using "blockdev --flushbufs $dev" does indeed make parted >> behave the same with dm-backed storage as with other devices. Thanks Jim! That reminds me we've already seen something similar and there's still workaround with drop_caches in ovirt-config-boot installer: # flush to sync DM and blockdev, workaround from rhbz#623846#c14 echo 3 > /proc/sys/vm/drop_caches But 623846 was supposed to be fixed in RHEL 6.0 ? Alan From jim at meyering.net Thu Jan 19 13:00:45 2012 From: jim at meyering.net (Jim Meyering) Date: Thu, 19 Jan 2012 14:00:45 +0100 Subject: [node-devel] [ovirt-devel] blockdev --flushbufs required [was: parted issue/question In-Reply-To: (Alan Pevec's message of "Wed, 18 Jan 2012 17:22:14 +0100") References: <4F13ADB3.5070806@redhat.com> <87lip6t157.fsf@rho.meyering.net> <20120118003802.GA11769@agk-dp.fab.redhat.com> <87haztruph.fsf_-_@rho.meyering.net> <1326895081.27879.0.camel@beelzebub.mburnsfire.net> Message-ID: <871uqvkfsy.fsf@rho.meyering.net> Alan Pevec wrote: >> On Wed, 2012-01-18 at 14:44 +0100, Jim Meyering wrote: >>> [Following up on this thread: >>> ?http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/14999] >>> Alasdair G Kergon wrote: >>> > Try >>> > ? blkdev --flushbufs >>> > after any cmd that writes to a dev to see if that makes any difference. >>> >>> Thanks for the work-around. >>> Using "blockdev --flushbufs $dev" does indeed make parted >>> behave the same with dm-backed storage as with other devices. > > Thanks Jim! > That reminds me we've already seen something similar and there's still > workaround with drop_caches in ovirt-config-boot installer: > # flush to sync DM and blockdev, workaround from rhbz#623846#c14 > echo 3 > /proc/sys/vm/drop_caches > > But 623846 was supposed to be fixed in RHEL 6.0 ? FYI, Niels de Vos has just posted a patch that should fix this: http://thread.gmane.org/gmane.linux.kernel/1241227 With that, maybe you'll be able to remove that other work-around. 
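For anyone wiring this into a script in the meantime, the belt-and-braces
version is just the two flush calls discussed above, e.g. (device name and
partition layout are placeholders):

    dev=/dev/mapper/XXXXX                 # target DM device
    parted -s "$dev" -- mklabel gpt mkpart efi 34s 524287s
    blockdev --flushbufs "$dev"           # re-sync the blockdev view of the DM device
    echo 3 > /proc/sys/vm/drop_caches     # older ovirt-config-boot workaround, rhbz#623846#c14

Once a fix like the one above is in the shipped kernel, both flush lines
should become unnecessary.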
From mburns at redhat.com Tue Jan 24 14:17:34 2012 From: mburns at redhat.com (Mike Burns) Date: Tue, 24 Jan 2012 09:17:34 -0500 Subject: [node-devel] oVirt Node Weekly Sync Meeting Minutes -- 2012-01-24 Message-ID: <1327414654.2644.1.camel@beelzebub.mburnsfire.net> Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-24-14.00.html Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-24-14.00.txt Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-24-14.00.log.html Action items, by person apevec apevec to incorporate into node kickstart and send info to node-devel mburns mburns to rework stateless plan based on archipel-node work mburns to get next release built as soon as uefi patches are acked From pmyers at redhat.com Thu Jan 26 15:47:43 2012 From: pmyers at redhat.com (Perry Myers) Date: Thu, 26 Jan 2012 10:47:43 -0500 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins Message-ID: <4F21759F.9030302@redhat.com> The current thinking/design around doing oVirt Node Plugins is here: http://ovirt.org/wiki/Node_plugins And is based mostly on the premise that: * Plugins are self contained blobs of RPMs that are internally dependency complete * Plugins are installed via smth like rpm -Uvh of a set of RPMs contained inside the blob (tarball likely) As I was thinking about some additional use cases for plugins (like including CIM/tog-pegasus and making vdsm a plugin), it seems like a lot of redundancy to pull packages out of Fedora repos, and stick them in a tarball when there are perfectly good yum mirrors that have those packages. It's also a lot of overhead on the part of the plugin creators to be doing dual updates: Update RPM in Fedora and simultaneously update and publish a new plugin. The core problem is: remote retrieval of packages and dependency resolution... wait, doesn't yum solve that set of problems? But there's no yum on oVirt Node... The original reasons for excluding yum were: * No python on the node (but vdsm pulled python in, so that's moot now) * Don't want folks running yum on a live oVirt Node image (we can address that by making yum impossible to run when the image is booted vs. offline) So I'd like to rethink the plugins concept by first starting with putting yum back on oVirt Node, and leveraging that for what it is good at. If we put yum on the node, then plugin installation could be as simple as: mount ISO cp foo.repo /etc/yum.conf.d/ yum install foo --enablerepo=foo If offline is desired, then the plugin is basically a repo inside a tarball and you do mount ISO cp foo.repo /etc/yum.conf.d/ yum localinstall foo/repo/foo.rpm In either case, we could enforce offline vs. 
online plugin integration by always setting all repo files to disabled, and manually needing to enable them with --enablerepo=* if the user is doing an online plugin So a plugin could just simply be: * repo file (with one or more repo definitions that are not included in the base distro) * rpm list * blacklisting info * optionally a collection of RPMs with repo metadata This means that we can let normal yum dep resolution work and plugins essentially become dependent on things like 'what version of ovirt-node is installed' or 'what version of the kernel is installed' and if dependencies aren't met, the plugin installation should fail gracefully We can prevent _core_ files from being upgraded (like ovirt-node, kernel, etc) by adding explicit excludepkg directives so that if a plugin tries to bring in a version of a package already core to oVirt Node, it fails and reports "dude, you need a newer ISO already" Thoughts? This should make the plugin concept easier to implement and also allow us to include support for plugins that pull packages from remote repositories much easier. Perry From kmestery at cisco.com Thu Jan 26 15:54:22 2012 From: kmestery at cisco.com (kmestery) Date: Thu, 26 Jan 2012 09:54:22 -0600 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins In-Reply-To: <4F21759F.9030302@redhat.com> References: <4F21759F.9030302@redhat.com> Message-ID: <441A57F9-DD8D-45EF-91B9-6C8E418CE9DF@cisco.com> On Jan 26, 2012, at 9:47 AM, Perry Myers wrote: > The current thinking/design around doing oVirt Node Plugins is here: > http://ovirt.org/wiki/Node_plugins > > And is based mostly on the premise that: > * Plugins are self contained blobs of RPMs that are internally > dependency complete > * Plugins are installed via smth like rpm -Uvh of a set of RPMs > contained inside the blob (tarball likely) > > As I was thinking about some additional use cases for plugins (like > including CIM/tog-pegasus and making vdsm a plugin), it seems like a lot > of redundancy to pull packages out of Fedora repos, and stick them in a > tarball when there are perfectly good yum mirrors that have those packages. > > It's also a lot of overhead on the part of the plugin creators to be > doing dual updates: Update RPM in Fedora and simultaneously update and > publish a new plugin. > > The core problem is: remote retrieval of packages and dependency > resolution... wait, doesn't yum solve that set of problems? > > But there's no yum on oVirt Node... The original reasons for excluding > yum were: > * No python on the node (but vdsm pulled python in, so that's moot now) > * Don't want folks running yum on a live oVirt Node image (we can > address that by making yum impossible to run when the image is booted > vs. offline) > > So I'd like to rethink the plugins concept by first starting with > putting yum back on oVirt Node, and leveraging that for what it is good at. > > If we put yum on the node, then plugin installation could be as simple as: > > mount ISO > cp foo.repo /etc/yum.conf.d/ > yum install foo --enablerepo=foo > > If offline is desired, then the plugin is basically a repo inside a > tarball and you do > > mount ISO > cp foo.repo /etc/yum.conf.d/ > yum localinstall foo/repo/foo.rpm > > In either case, we could enforce offline vs. 
online plugin integration > by always setting all repo files to disabled, and manually needing to > enable them with --enablerepo=* if the user is doing an online plugin > > So a plugin could just simply be: > * repo file (with one or more repo definitions that are not included in > the base distro) > * rpm list > * blacklisting info > * optionally a collection of RPMs with repo metadata > > This means that we can let normal yum dep resolution work and plugins > essentially become dependent on things like 'what version of ovirt-node > is installed' or 'what version of the kernel is installed' and if > dependencies aren't met, the plugin installation should fail gracefully > > We can prevent _core_ files from being upgraded (like ovirt-node, > kernel, etc) by adding explicit excludepkg directives so that if a > plugin tries to bring in a version of a package already core to oVirt > Node, it fails and reports "dude, you need a newer ISO already" > > Thoughts? This should make the plugin concept easier to implement and > also allow us to include support for plugins that pull packages from > remote repositories much easier. > I like this design a lot, because it does leverage the things which yum is good at, and makes the job of plugin writers much easier. My only question is, what is the main difference in offline vs. online if all repo files are always set to disabled? In either case, wouldn't you need to do a --enablerepo=*? Thanks, Kyle > Perry > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel From apevec at gmail.com Fri Jan 27 10:35:29 2012 From: apevec at gmail.com (Alan Pevec) Date: Fri, 27 Jan 2012 11:35:29 +0100 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins In-Reply-To: <4F21759F.9030302@redhat.com> References: <4F21759F.9030302@redhat.com> Message-ID: > The original reasons for excluding yum were: > * No python on the node (but vdsm pulled python in, so that's moot now) > * Don't want folks running yum on a live oVirt Node image (we can > ?address that by making yum impossible to run when the image is booted > ?vs. offline) Good idea, also running oVirt Node has read-only rootfs, so any update attempt would fail. And for a graceful failure, we could have a yum plugin which aborts transaction if it detects running Node, with a user-friendly message. > If we put yum on the node, then plugin installation could be as simple as: > mount ISO > cp foo.repo /etc/yum.conf.d/ > yum install foo --enablerepo=foo Of course, you still need edit-livecd around this, which will handle setting up chroot and repackaging updated rootfs back into ISO. Alan From pmyers at redhat.com Fri Jan 27 13:01:54 2012 From: pmyers at redhat.com (Perry Myers) Date: Fri, 27 Jan 2012 08:01:54 -0500 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins In-Reply-To: References: <4F21759F.9030302@redhat.com> Message-ID: <4F22A042.4050804@redhat.com> On 01/27/2012 05:35 AM, Alan Pevec wrote: >> The original reasons for excluding yum were: >> * No python on the node (but vdsm pulled python in, so that's moot now) >> * Don't want folks running yum on a live oVirt Node image (we can >> address that by making yum impossible to run when the image is booted >> vs. offline) > > Good idea, also running oVirt Node has read-only rootfs, so any update > attempt would fail. 
> And for a graceful failure, we could have a yum plugin which aborts > transaction if it detects running Node, with a user-friendly message. good idea re: plugin >> If we put yum on the node, then plugin installation could be as simple as: >> mount ISO >> cp foo.repo /etc/yum.conf.d/ >> yum install foo --enablerepo=foo > > Of course, you still need edit-livecd around this, which will handle > setting up chroot and repackaging updated rootfs back into ISO. yes, I sort of put that all into the magic of "mount iso" :) From mburns at redhat.com Sun Jan 29 14:57:59 2012 From: mburns at redhat.com (Mike Burns) Date: Sun, 29 Jan 2012 09:57:59 -0500 Subject: [node-devel] New oVirt Node pakages and image available -- Release Candidates Message-ID: <1327849079.27954.6.camel@beelzebub.mburnsfire.net> We've just uploaded ovirt-node-2.2.2 RPMs and ISO image to ovirt.org. This is the Release Candidate for the first oVirt Project release. Feedback, comments, and bug reports are welcome as always. Release Notes and Known issues are at: http://ovirt.org/wiki/Node_Release_Notes#2.2.2_Release_Notes Links to packages and image are here: http://ovirt.org/wiki/Category:Node#Current_Release Thanks The Node Team -- Michael Burns Software Engineer, Cloud Infrastructure Tech Lead, oVirt Node, RHEV-H Red Hat mburns at redhat.com From pmyers at redhat.com Sun Jan 29 15:29:01 2012 From: pmyers at redhat.com (Perry Myers) Date: Sun, 29 Jan 2012 10:29:01 -0500 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins In-Reply-To: <4F21759F.9030302@redhat.com> References: <4F21759F.9030302@redhat.com> Message-ID: <4F2565BD.6090107@redhat.com> Geert, Itamar mentioned to me that you might have interest in this topic and possibly from your experience some thoughts on requirements here. Take a look at the below email and also the wiki page mentioned and let us know if you have any thoughts. Thanks! (For others following this thread, there are some other points made below the original message from discussion w/ Itamar) On 01/26/2012 10:47 AM, Perry Myers wrote: > The current thinking/design around doing oVirt Node Plugins is here: > http://ovirt.org/wiki/Node_plugins > > And is based mostly on the premise that: * Plugins are self contained > blobs of RPMs that are internally dependency complete * Plugins are > installed via smth like rpm -Uvh of a set of RPMs contained inside > the blob (tarball likely) > > As I was thinking about some additional use cases for plugins (like > including CIM/tog-pegasus and making vdsm a plugin), it seems like a > lot of redundancy to pull packages out of Fedora repos, and stick > them in a tarball when there are perfectly good yum mirrors that have > those packages. > > It's also a lot of overhead on the part of the plugin creators to be > doing dual updates: Update RPM in Fedora and simultaneously update > and publish a new plugin. > > The core problem is: remote retrieval of packages and dependency > resolution... wait, doesn't yum solve that set of problems? > > But there's no yum on oVirt Node... The original reasons for > excluding yum were: * No python on the node (but vdsm pulled python > in, so that's moot now) * Don't want folks running yum on a live > oVirt Node image (we can address that by making yum impossible to run > when the image is booted vs. offline) > > So I'd like to rethink the plugins concept by first starting with > putting yum back on oVirt Node, and leveraging that for what it is > good at. 
> > If we put yum on the node, then plugin installation could be as > simple as: > > mount ISO cp foo.repo /etc/yum.conf.d/ yum install foo > --enablerepo=foo > > If offline is desired, then the plugin is basically a repo inside a > tarball and you do > > mount ISO cp foo.repo /etc/yum.conf.d/ yum localinstall > foo/repo/foo.rpm > > In either case, we could enforce offline vs. online plugin > integration by always setting all repo files to disabled, and > manually needing to enable them with --enablerepo=* if the user is > doing an online plugin > > So a plugin could just simply be: * repo file (with one or more repo > definitions that are not included in the base distro) * rpm list * > blacklisting info * optionally a collection of RPMs with repo > metadata > > This means that we can let normal yum dep resolution work and > plugins essentially become dependent on things like 'what version of > ovirt-node is installed' or 'what version of the kernel is installed' > and if dependencies aren't met, the plugin installation should fail > gracefully > > We can prevent _core_ files from being upgraded (like ovirt-node, > kernel, etc) by adding explicit excludepkg directives so that if a > plugin tries to bring in a version of a package already core to > oVirt Node, it fails and reports "dude, you need a newer ISO > already" > > Thoughts? This should make the plugin concept easier to implement > and also allow us to include support for plugins that pull packages > from remote repositories much easier. Will the rpm's survive node upgrade? ------------------------------------ Only if the image you are upgrading with has also had the appropriate plugins merged into it. The proper procedure would be: * Get ISOv1 * Run plugin tool to merge in Plugin1, 2, 3 * Deploy Later: * ISOv2 comes out * Get ISOv2 * Run plugin tool to merge in Plugin1, 2, 3 * Deploy If you merge in the plugins you want onto every ISO, you're fine. But if you decide that you don't like Plugin3, you would do: * Get ISOv2 * Run plugin tool to merge in Plugin1, 2 * Deploy And in this case, the reinstalled/updated node would only have Plugin1,2 and not Plugin3 As far as I understand this is the behavior that is wanted. Especially since the long term is to move to a completely stateless Node where nothing is persisted to disk aside from swap partition. How will oVirt Engine know what plugins a Node has installed? ------------------------------------------------------------- Since plugins are just normal RPMs, there won't be any way to figure out from a rpm -qa command 'which plugins are installed', but since each plugin is a separate entity with a metadata file, we'll maintain a registry of which plugins are installed and what version each is at. Something like: /etc/ovirt.plugins.d/cim /etc/ovirt.plugins.d/vdsm /etc/ovirt.plugins.d/isv-module-foo And vdsm can look at this to determine what to report back to oVirt Engine for display to the user. From pmyers at redhat.com Sun Jan 29 15:39:03 2012 From: pmyers at redhat.com (Perry Myers) Date: Sun, 29 Jan 2012 10:39:03 -0500 Subject: [node-devel] CIM plugin for oVirt Node Message-ID: <4F256817.6030000@redhat.com> One of the items on our backlog has been to include CIM server/providers on oVirt Node. Initially we'll do this statically and include things like sblim, tog-pegasus, libvirt-cim as part of the core Node recipe. Later, we can use the 'plugin' concept so that this functionality can be added by those that need it, and for those that don't they can ignore. 
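For concreteness, the "static" approach described above might look roughly like the following kickstart fragment. This is only a sketch: the package and service names are assumptions based on the components named in this thread, not lines taken from the actual ovirt-node recipe.

    # Illustrative kickstart fragment, NOT from the ovirt-node recipe.
    # Package and service names are assumptions and would need to be
    # checked against Fedora.
    %packages
    tog-pegasus
    libvirt-cim
    sblim-cmpi-base
    %end

    %post
    # On a stateless image this can't be toggled at runtime, so the
    # recipe itself has to turn the CIM server on.
    chkconfig tog-pegasus on
    %end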
Some questions have come up around this point, and since the Node team aren't CIM experts, we wanted to reach out to folks that have been using it a little more heavily to make sure we're on the right track. Some of the technical things we've run into are: 1. Our initial attempt at getting tog-pegasus and friends running failed due to lots of issues with r/o root filesystem. Might need help from folks more knowledgeable about CIM to help resolve that. Anthony, I think you might have some kickstart snippets that would be of use here, correct? 2. Once you've got the CIM server there (tog-pegasus) you need to have some way to enable/disable it, which right now isn't easy to do except via offline image manipulation (since you can't persist symlinks in stateless Linux). 3. When the CIM server is enabled, need to unblock the appropriate firewall port, which again is not trivial to do given the stateless nature of the Node via tools like lokkit. (Perhaps firewalld will make this easier, but for now firewalld doesn't look mature enough to begin using in earnest) 4. How should CIM be secured and configured for authentication? Do we need to provide some mechanism for deploying SSL client certs into the Node for tog-pegasus to use? What about setting simple user/pass auth? 5. What sort of other configuration should be exposed for CIM providers? Geert/Anthony/DV, if you guys have thoughts on the above questions or can point us at other people to loop into this thread, that would be helpful. Thanks! Perry
From veillard at redhat.com Mon Jan 30 13:45:40 2012 From: veillard at redhat.com (Daniel Veillard) Date: Mon, 30 Jan 2012 21:45:40 +0800 Subject: [node-devel] CIM plugin for oVirt Node In-Reply-To: <4F256817.6030000@redhat.com> References: <4F256817.6030000@redhat.com> Message-ID: <20120130134540.GS2632@redhat.com> Hi everybody, I guess the best is to ask Chip Vincent about those oVirt Node integration issues. CIM is not always trivial to set up in a normal RHEL environment, and I'm afraid nobody tried it on a read-only root/stateless environment. Chip I think the expertise from some of your libvirt-cim team is needed there, I guess the best is to provide an image to someone knowledgeable in the set-up and have him check the issues. Maybe Eduardo or you can have a look ? thanks ! Daniel On Sun, Jan 29, 2012 at 10:39:03AM -0500, Perry Myers wrote: > One of the items on our backlog has been to include CIM server/providers > on oVirt Node. Initially we'll do this statically and include things > like sblim, tog-pegasus, libvirt-cim as part of the core Node recipe. P.S.: shouldn't only one of sblim/tog-pegasus be needed and not both ? One server should be sufficient isn't it and the goal is still to limit the size of images. Which one to pick may be the result of which one is the easier to coerce to work in root RO mode, or the smaller of the two ... > Later, we can use the 'plugin' concept so that this functionality can be > added by those that need it, and for those that don't they can ignore. > > Some questions have come up around this point, and since the Node team > aren't CIM experts, we wanted to reach out to folks that have been using > it a little more heavily to make sure we're on the right track. > > Some of the technical things we've run into are: > > 1. Our initial attempt at getting tog-pegasus and friends running > failed due to lots of issues with r/o root filesystem. Might need > help from folks more knowledgeable about CIM to help resolve that.
> > Anthony, I think you might have some kickstart snippets that would > be of use here, correct? > > 2. Once you've got the CIM server there (tog-pegasus) you need to have > some way to enable/disable it, which right now isn't easy to do > except via offline image manipulation (since you can't persist > symlinks in stateless Linux). > > 3. When the CIM server is enabled, need to unblock the appropriate > firewall port, which again is not trivial to do given the stateless > nature of the Node via tools like lokkit. (Perhaps firewalld will > make this easier, but for now firewalld doesn't look mature enough > to begin using in earnest) > > 4. How should CIM be secured and configured for authentication? Do we > need to provide some mechanism for deploying SSL client certs into > the Node for tog-pegasus to use? What about setting simple > user/pass auth? > > 5. What sort of other configuration should be exposed for CIM > providers? > > Geert/Anthony/DV, if you guys have thoughts on the above questions or > can point us at other people to loop into this thread, that would be > helpful. > > Thanks! > > Perry -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel at veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/ From pmyers at redhat.com Mon Jan 30 14:42:01 2012 From: pmyers at redhat.com (Perry Myers) Date: Mon, 30 Jan 2012 09:42:01 -0500 Subject: [node-devel] CIM plugin for oVirt Node In-Reply-To: <20120130134540.GS2632@redhat.com> References: <4F256817.6030000@redhat.com> <20120130134540.GS2632@redhat.com> Message-ID: <4F26AC39.1020609@redhat.com> On 01/30/2012 08:45 AM, Daniel Veillard wrote: > Hi everybody, > > I guess the best is to ask Chip Vincent about those oVirt Node > integration issues. CIM is not always trivial to setup in a normal > RHEL environment, and I'm afraid nobody tried it on a read-only > root/stateless environment. Chip I think the expertise from some > of your libvirt-cim team is needed there, I guess the best is to provide > an image to someone knowledgeable in the set-u and have him check > the issues. Maybe Eduardo ro you can have a look ? > > thanks ! > > Daniel > > On Sun, Jan 29, 2012 at 10:39:03AM -0500, Perry Myers wrote: >> One of the items on our backlog has been to include CIM server/providers >> on oVirt Node. Initially we'll do this statically and include things >> like sblim, tog-pegasus, libvirt-cim as part of the core Node recipe. > > P.S.: shouldn't only one of sblim/tog-pegasus be needed and not both ? > One server should be sufficient isn't it and the goal is still > to limit the size of images. Which one to pick may be the result > of which one is the easier to coerce to work in root RO mode, > or the smaller of the two ... I asked Anthony about this, and he explained it to me... sblim is both a collection of CIM providers as well as a server. tog-pegasus is just the server. So you can either use: sblim + tog-pegasus or sblim + sblim-sfcb If you omit sfcb from the oVirt Node, then you can use tog-pegasus in its place. It's also my understanding that sblim-sfcb and tog-pegasus are not fully interchangeable as there are some providers that will only work with one or the other. So far, it seems like tog-pegasus is what folks want specifically, so that is what we have been focusing on. Perry >> Later, we can use the 'plugin' concept so that this functionality can be >> added by those that need it, and for those that don't they can ignore. 
>> >> Some questions have come up around this point, and since the Node team >> aren't CIM experts, we wanted to reach out to folks that have been using >> it a little more heavily to make sure we're on the right track. >> >> Some of the technical things we've run into are: >> >> 1. Our initial attempt at getting tog-pegasus and friends running >> failed due to lots of issues with r/o root filesystem. Might need >> help from folks more knowledgeable about CIM to halp resolve that. >> >> Anthony, I think you might have some kickstart snippets that would >> be of use here, correct? >> >> 2. Once you've got the CIM server there (tog-pegasus) you need to have >> some way to enable/disable it, which right now isn't easy to do >> except via offline image manipulation (since you can't persist >> symlinks in stateless Linux). >> >> 3. When the CIM server is enabled, need to unblock the appropriate >> firewall port, which again is not trivial to do given the stateless >> nature of the Node via tools like lokkit. (Perhaps firewalld will >> make this easier, but for now firewalld doesn't look mature enough >> to begin using in earnest) >> >> 4. How should CIM be secured and configured for authentication? Do we >> need to provide some mechanism for deploying SSL client certs into >> the Node for tog-pegasus to use? What about setting simple >> user/pass auth? >> >> 5. What sort of other configuration should be exposed for CIM >> providers? >> >> Geert/Anthony/DV, if you guys have thoughts on the above questions or >> can point us at other people to loop into this thread, that would be >> helpful. >> >> Thanks! >> >> Perry > From Charles_Rose at Dell.com Mon Jan 30 14:59:32 2012 From: Charles_Rose at Dell.com (Charles_Rose at Dell.com) Date: Mon, 30 Jan 2012 20:29:32 +0530 Subject: [node-devel] CIM plugin for oVirt Node In-Reply-To: <4F26AC39.1020609@redhat.com> References: <4F256817.6030000@redhat.com> <20120130134540.GS2632@redhat.com> <4F26AC39.1020609@redhat.com> Message-ID: <4F26B054.5020802@dell.com> On Monday 30 January 2012 08:12 PM, Perry Myers wrote: > On 01/30/2012 08:45 AM, Daniel Veillard wrote: >> Hi everybody, >> >> I guess the best is to ask Chip Vincent about those oVirt Node >> integration issues. CIM is not always trivial to setup in a normal >> RHEL environment, and I'm afraid nobody tried it on a read-only >> root/stateless environment. Chip I think the expertise from some >> of your libvirt-cim team is needed there, I guess the best is to provide >> an image to someone knowledgeable in the set-u and have him check >> the issues. Maybe Eduardo ro you can have a look ? >> >> thanks ! >> >> Daniel >> >> On Sun, Jan 29, 2012 at 10:39:03AM -0500, Perry Myers wrote: >>> One of the items on our backlog has been to include CIM server/providers >>> on oVirt Node. Initially we'll do this statically and include things >>> like sblim, tog-pegasus, libvirt-cim as part of the core Node recipe. >> >> P.S.: shouldn't only one of sblim/tog-pegasus be needed and not both ? >> One server should be sufficient isn't it and the goal is still >> to limit the size of images. Which one to pick may be the result >> of which one is the easier to coerce to work in root RO mode, >> or the smaller of the two ... > > I asked Anthony about this, and he explained it to me... sblim is both > a collection of CIM providers as well as a server. tog-pegasus is just > the server. 
> > So you can either use: > sblim + tog-pegasus > or > sblim + sblim-sfcb > > If you omit sfcb from the oVirt Node, then you can use tog-pegasus in > its place. It's also my understanding that sblim-sfcb and tog-pegasus > are not fully interchangeable as there are some providers that will only > work with one or the other. So far, it seems like tog-pegasus is what > folks want specifically, so that is what we have been focusing on. We have had issues with sfcb and tog-pegasus conflicting in the past: https://bugzilla.redhat.com/show_bug.cgi?id=604578 sblim + sblim-sfcb is what we needed and tog-pegasus was installed as part of the @base install. Charles > > Perry > >>> Later, we can use the 'plugin' concept so that this functionality can be >>> added by those that need it, and for those that don't they can ignore. >>> >>> Some questions have come up around this point, and since the Node team >>> aren't CIM experts, we wanted to reach out to folks that have been using >>> it a little more heavily to make sure we're on the right track. >>> >>> Some of the technical things we've run into are: >>> >>> 1. Our initial attempt at getting tog-pegasus and friends running >>> failed due to lots of issues with r/o root filesystem. Might need >>> help from folks more knowledgeable about CIM to halp resolve that. >>> >>> Anthony, I think you might have some kickstart snippets that would >>> be of use here, correct? >>> >>> 2. Once you've got the CIM server there (tog-pegasus) you need to have >>> some way to enable/disable it, which right now isn't easy to do >>> except via offline image manipulation (since you can't persist >>> symlinks in stateless Linux). >>> >>> 3. When the CIM server is enabled, need to unblock the appropriate >>> firewall port, which again is not trivial to do given the stateless >>> nature of the Node via tools like lokkit. (Perhaps firewalld will >>> make this easier, but for now firewalld doesn't look mature enough >>> to begin using in earnest) >>> >>> 4. How should CIM be secured and configured for authentication? Do we >>> need to provide some mechanism for deploying SSL client certs into >>> the Node for tog-pegasus to use? What about setting simple >>> user/pass auth? >>> >>> 5. What sort of other configuration should be exposed for CIM >>> providers? >>> >>> Geert/Anthony/DV, if you guys have thoughts on the above questions or >>> can point us at other people to loop into this thread, that would be >>> helpful. >>> >>> Thanks! >>> >>> Perry >> > > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel > From gjansen at redhat.com Mon Jan 30 16:39:10 2012 From: gjansen at redhat.com (Geert Jansen) Date: Mon, 30 Jan 2012 17:39:10 +0100 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins In-Reply-To: <4F2565BD.6090107@redhat.com> References: <4F21759F.9030302@redhat.com> <4F2565BD.6090107@redhat.com> Message-ID: <4F26C7AE.4090005@redhat.com> Thanks Perry. I had a look at the Wiki and i think it is mostly very reasonable. Some comments/questions: - Do i understand it correctly that the plugin <-> OS interface you propose is "model based" rather than "file based"? All plugins' files are under a separate root, and they only become enabled once a logical setting in an XML description file enables it (e.g. services, kernel modules, etc). I think that is probably the right approach. It will make it cleaner for sure to manage plugins this way. 
It also allows much better reporting of what a plugin does in the GUI. The alternative would be a "file based" interface where the plugin just puts files directly in various system directories. I think the main requirement to ensure is that with a model based approach our initial model is rich enough. - One important thing (actually #1 request so far) is to allow VDSM hooks to be installed. So the XML descriptor would need to support that. - The signature infrastructure is something that requires a business process at Red Hat to manage the requests of ISVs, signing keys, etc. I'm skeptical if we need this for v1 to be honest. Feedback i have from vendors so far is that they'd prefer, at least initially, to keep it open. We still maintain list of plugins that are supported, but initially the overhead of basically runnning a PKI for ISVs is something that i wonder we want to do on a short time scale. I think the risks for abuse, for sure initially, is very low. - I think the offline merging of plugins with the hypervisor ISO is good enough for v1, given that a re-install of RHEV-H via RHEV-M is easy. So doing a live install only in v2 should be fine; however: - We should also allow plugins to be installed on RHEL... Otherwise ISVs have to maintain two packages for their extensions, one for RHEL and one for RHEV-H. Regards, Geert On 01/29/2012 04:29 PM, Perry Myers wrote: > Geert, Itamar mentioned to me that you might have interest in this topic > and possibly from your experience some thoughts on requirements here. > Take a look at the below email and also the wiki page mentioned and let > us know if you have any thoughts. Thanks! > > (For others following this thread, there are some other points made > below the original message from discussion w/ Itamar) > > On 01/26/2012 10:47 AM, Perry Myers wrote: >> The current thinking/design around doing oVirt Node Plugins is here: >> http://ovirt.org/wiki/Node_plugins >> >> And is based mostly on the premise that: * Plugins are self contained >> blobs of RPMs that are internally dependency complete * Plugins are >> installed via smth like rpm -Uvh of a set of RPMs contained inside >> the blob (tarball likely) >> >> As I was thinking about some additional use cases for plugins (like >> including CIM/tog-pegasus and making vdsm a plugin), it seems like a >> lot of redundancy to pull packages out of Fedora repos, and stick >> them in a tarball when there are perfectly good yum mirrors that have >> those packages. >> >> It's also a lot of overhead on the part of the plugin creators to be >> doing dual updates: Update RPM in Fedora and simultaneously update >> and publish a new plugin. >> >> The core problem is: remote retrieval of packages and dependency >> resolution... wait, doesn't yum solve that set of problems? >> >> But there's no yum on oVirt Node... The original reasons for >> excluding yum were: * No python on the node (but vdsm pulled python >> in, so that's moot now) * Don't want folks running yum on a live >> oVirt Node image (we can address that by making yum impossible to run >> when the image is booted vs. offline) >> >> So I'd like to rethink the plugins concept by first starting with >> putting yum back on oVirt Node, and leveraging that for what it is >> good at. 
>> >> If we put yum on the node, then plugin installation could be as >> simple as: >> >> mount ISO cp foo.repo /etc/yum.conf.d/ yum install foo >> --enablerepo=foo >> >> If offline is desired, then the plugin is basically a repo inside a >> tarball and you do >> >> mount ISO cp foo.repo /etc/yum.conf.d/ yum localinstall >> foo/repo/foo.rpm >> >> In either case, we could enforce offline vs. online plugin >> integration by always setting all repo files to disabled, and >> manually needing to enable them with --enablerepo=* if the user is >> doing an online plugin >> >> So a plugin could just simply be: * repo file (with one or more repo >> definitions that are not included in the base distro) * rpm list * >> blacklisting info * optionally a collection of RPMs with repo >> metadata >> >> This means that we can let normal yum dep resolution work and >> plugins essentially become dependent on things like 'what version of >> ovirt-node is installed' or 'what version of the kernel is installed' >> and if dependencies aren't met, the plugin installation should fail >> gracefully >> >> We can prevent _core_ files from being upgraded (like ovirt-node, >> kernel, etc) by adding explicit excludepkg directives so that if a >> plugin tries to bring in a version of a package already core to >> oVirt Node, it fails and reports "dude, you need a newer ISO >> already" >> >> Thoughts? This should make the plugin concept easier to implement >> and also allow us to include support for plugins that pull packages >> from remote repositories much easier. > > Will the rpm's survive node upgrade? > ------------------------------------ > Only if the image you are upgrading with has also had the appropriate > plugins merged into it. The proper procedure would be: > > * Get ISOv1 > * Run plugin tool to merge in Plugin1, 2, 3 > * Deploy > > Later: > * ISOv2 comes out > * Get ISOv2 > * Run plugin tool to merge in Plugin1, 2, 3 > * Deploy > > If you merge in the plugins you want onto every ISO, you're fine. But > if you decide that you don't like Plugin3, you would do: > > * Get ISOv2 > * Run plugin tool to merge in Plugin1, 2 > * Deploy > > And in this case, the reinstalled/updated node would only have Plugin1,2 > and not Plugin3 > > As far as I understand this is the behavior that is wanted. Especially > since the long term is to move to a completely stateless Node where > nothing is persisted to disk aside from swap partition. > > How will oVirt Engine know what plugins a Node has installed? > ------------------------------------------------------------- > Since plugins are just normal RPMs, there won't be any way to figure out > from a rpm -qa command 'which plugins are installed', but since each > plugin is a separate entity with a metadata file, we'll maintain a > registry of which plugins are installed and what version each is at. > Something like: > > /etc/ovirt.plugins.d/cim > /etc/ovirt.plugins.d/vdsm > /etc/ovirt.plugins.d/isv-module-foo > > And vdsm can look at this to determine what to report back to oVirt > Engine for display to the user. -- Geert Jansen Sr. Product Marketing Manager, Red Hat Enterprise Virtualization Red Hat S.r.L. O: +39 095 916287 Via G. 
Fara 26 C: +39 348 1980079 Milan 20124, Italy E: gjansen at redhat.com From jboggs at redhat.com Mon Jan 30 17:36:11 2012 From: jboggs at redhat.com (Joey Boggs) Date: Mon, 30 Jan 2012 12:36:11 -0500 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins In-Reply-To: <4F26C7AE.4090005@redhat.com> References: <4F21759F.9030302@redhat.com> <4F2565BD.6090107@redhat.com> <4F26C7AE.4090005@redhat.com> Message-ID: <4F26D50B.50907@redhat.com> On 01/30/2012 11:39 AM, Geert Jansen wrote: > Thanks Perry. I had a look at the Wiki and i think it is mostly very > reasonable. Some comments/questions: > > - Do i understand it correctly that the plugin <-> OS interface you > propose is "model based" rather than "file based"? All plugins' files > are under a separate root, and they only become enabled once a logical > setting in an XML description file enables it (e.g. services, kernel > modules, etc). I think that is probably the right approach. It will > make it cleaner for sure to manage plugins this way. It also allows > much better reporting of what a plugin does in the GUI. The goal is to have it all separated with minimal overlap such as suggested earlier with a place to register the plugins that are installed on the node visible within the web ui. > > The alternative would be a "file based" interface where the plugin > just puts files directly in various system directories. I think the > main requirement to ensure is that with a model based approach our > initial model is rich enough. > > - One important thing (actually #1 request so far) is to allow VDSM > hooks to be installed. So the XML descriptor would need to support that. Don't see that being an issue, it should be added as a requirement now rather than later > > - The signature infrastructure is something that requires a business > process at Red Hat to manage the requests of ISVs, signing keys, etc. > I'm skeptical if we need this for v1 to be honest. Feedback i have > from vendors so far is that they'd prefer, at least initially, to keep > it open. We still maintain list of plugins that are supported, but > initially the overhead of basically runnning a PKI for ISVs is > something that i wonder we want to do on a short time scale. I think > the risks for abuse, for sure initially, is very low. > > - I think the offline merging of plugins with the hypervisor ISO is > good enough for v1, given that a re-install of RHEV-H via RHEV-M is > easy. So doing a live install only in v2 should be fine; however: > > - We should also allow plugins to be installed on RHEL... Otherwise > ISVs have to maintain two packages for their extensions, one for RHEL > and one for RHEV-H. As long as vdsm looks in the same predefined places this will not be an issue and should be fine for both scenarios > > Regards, > Geert > > > On 01/29/2012 04:29 PM, Perry Myers wrote: >> Geert, Itamar mentioned to me that you might have interest in this topic >> and possibly from your experience some thoughts on requirements here. >> Take a look at the below email and also the wiki page mentioned and let >> us know if you have any thoughts. Thanks! 
>> >> (For others following this thread, there are some other points made >> below the original message from discussion w/ Itamar) >> >> On 01/26/2012 10:47 AM, Perry Myers wrote: >>> The current thinking/design around doing oVirt Node Plugins is here: >>> http://ovirt.org/wiki/Node_plugins >>> >>> And is based mostly on the premise that: * Plugins are self contained >>> blobs of RPMs that are internally dependency complete * Plugins are >>> installed via smth like rpm -Uvh of a set of RPMs contained inside >>> the blob (tarball likely) >>> >>> As I was thinking about some additional use cases for plugins (like >>> including CIM/tog-pegasus and making vdsm a plugin), it seems like a >>> lot of redundancy to pull packages out of Fedora repos, and stick >>> them in a tarball when there are perfectly good yum mirrors that have >>> those packages. >>> >>> It's also a lot of overhead on the part of the plugin creators to be >>> doing dual updates: Update RPM in Fedora and simultaneously update >>> and publish a new plugin. >>> >>> The core problem is: remote retrieval of packages and dependency >>> resolution... wait, doesn't yum solve that set of problems? >>> >>> But there's no yum on oVirt Node... The original reasons for >>> excluding yum were: * No python on the node (but vdsm pulled python >>> in, so that's moot now) * Don't want folks running yum on a live >>> oVirt Node image (we can address that by making yum impossible to run >>> when the image is booted vs. offline) >>> >>> So I'd like to rethink the plugins concept by first starting with >>> putting yum back on oVirt Node, and leveraging that for what it is >>> good at. >>> >>> If we put yum on the node, then plugin installation could be as >>> simple as: >>> >>> mount ISO cp foo.repo /etc/yum.conf.d/ yum install foo >>> --enablerepo=foo >>> >>> If offline is desired, then the plugin is basically a repo inside a >>> tarball and you do >>> >>> mount ISO cp foo.repo /etc/yum.conf.d/ yum localinstall >>> foo/repo/foo.rpm >>> >>> In either case, we could enforce offline vs. online plugin >>> integration by always setting all repo files to disabled, and >>> manually needing to enable them with --enablerepo=* if the user is >>> doing an online plugin >>> >>> So a plugin could just simply be: * repo file (with one or more repo >>> definitions that are not included in the base distro) * rpm list * >>> blacklisting info * optionally a collection of RPMs with repo >>> metadata >>> >>> This means that we can let normal yum dep resolution work and >>> plugins essentially become dependent on things like 'what version of >>> ovirt-node is installed' or 'what version of the kernel is installed' >>> and if dependencies aren't met, the plugin installation should fail >>> gracefully >>> >>> We can prevent _core_ files from being upgraded (like ovirt-node, >>> kernel, etc) by adding explicit excludepkg directives so that if a >>> plugin tries to bring in a version of a package already core to >>> oVirt Node, it fails and reports "dude, you need a newer ISO >>> already" >>> >>> Thoughts? This should make the plugin concept easier to implement >>> and also allow us to include support for plugins that pull packages >>> from remote repositories much easier. >> >> Will the rpm's survive node upgrade? >> ------------------------------------ >> Only if the image you are upgrading with has also had the appropriate >> plugins merged into it. 
The proper procedure would be: >> >> * Get ISOv1 >> * Run plugin tool to merge in Plugin1, 2, 3 >> * Deploy >> >> Later: >> * ISOv2 comes out >> * Get ISOv2 >> * Run plugin tool to merge in Plugin1, 2, 3 >> * Deploy >> >> If you merge in the plugins you want onto every ISO, you're fine. But >> if you decide that you don't like Plugin3, you would do: >> >> * Get ISOv2 >> * Run plugin tool to merge in Plugin1, 2 >> * Deploy >> >> And in this case, the reinstalled/updated node would only have Plugin1,2 >> and not Plugin3 >> >> As far as I understand this is the behavior that is wanted. Especially >> since the long term is to move to a completely stateless Node where >> nothing is persisted to disk aside from swap partition. >> >> How will oVirt Engine know what plugins a Node has installed? >> ------------------------------------------------------------- >> Since plugins are just normal RPMs, there won't be any way to figure out >> from a rpm -qa command 'which plugins are installed', but since each >> plugin is a separate entity with a metadata file, we'll maintain a >> registry of which plugins are installed and what version each is at. >> Something like: >> >> /etc/ovirt.plugins.d/cim >> /etc/ovirt.plugins.d/vdsm >> /etc/ovirt.plugins.d/isv-module-foo >> >> And vdsm can look at this to determine what to report back to oVirt >> Engine for display to the user. > From aliguori at us.ibm.com Mon Jan 30 19:37:18 2012 From: aliguori at us.ibm.com (Anthony Liguori) Date: Mon, 30 Jan 2012 13:37:18 -0600 Subject: [node-devel] CIM plugin for oVirt Node In-Reply-To: <4F26B054.5020802@dell.com> References: <4F256817.6030000@redhat.com> <20120130134540.GS2632@redhat.com> <4F26AC39.1020609@redhat.com> <4F26B054.5020802@dell.com> Message-ID: <4F26F16E.5040202@us.ibm.com> On 01/30/2012 08:59 AM, Charles_Rose at dell.com wrote: > > > On Monday 30 January 2012 08:12 PM, Perry Myers wrote: >> On 01/30/2012 08:45 AM, Daniel Veillard wrote: >>> Hi everybody, >>> >>> I guess the best is to ask Chip Vincent about those oVirt Node >>> integration issues. CIM is not always trivial to setup in a normal >>> RHEL environment, and I'm afraid nobody tried it on a read-only >>> root/stateless environment. Chip I think the expertise from some >>> of your libvirt-cim team is needed there, I guess the best is to provide >>> an image to someone knowledgeable in the set-u and have him check >>> the issues. Maybe Eduardo ro you can have a look ? >>> >>> thanks ! >>> >>> Daniel >>> >>> On Sun, Jan 29, 2012 at 10:39:03AM -0500, Perry Myers wrote: >>>> One of the items on our backlog has been to include CIM server/providers >>>> on oVirt Node. Initially we'll do this statically and include things >>>> like sblim, tog-pegasus, libvirt-cim as part of the core Node recipe. >>> >>> P.S.: shouldn't only one of sblim/tog-pegasus be needed and not both ? >>> One server should be sufficient isn't it and the goal is still >>> to limit the size of images. Which one to pick may be the result >>> of which one is the easier to coerce to work in root RO mode, >>> or the smaller of the two ... >> >> I asked Anthony about this, and he explained it to me... sblim is both >> a collection of CIM providers as well as a server. tog-pegasus is just >> the server. >> >> So you can either use: >> sblim + tog-pegasus >> or >> sblim + sblim-sfcb >> >> If you omit sfcb from the oVirt Node, then you can use tog-pegasus in >> its place. 
It's also my understanding that sblim-sfcb and tog-pegasus >> are not fully interchangeable as there are some providers that will only >> work with one or the other. So far, it seems like tog-pegasus is what >> folks want specifically, so that is what we have been focusing on. > > We have had issues with sfcb and tog-pegasus conflicting in the past: > https://bugzilla.redhat.com/show_bug.cgi?id=604578 > > sblim + sblim-sfcb is what we needed and tog-pegasus was installed as > part of the @base install. Let me start out by saying that I know almost nothing about CIM so I'm largely talking out of my.... I think the first question to answer is why we're adding CIM to oVirt node in the first place. Is the goal to expose a specific DMTF schema for on-node management? Is it to allow arbitrary third-party CMPI providers to be added for hardware management? Is it to allow third party CIM management suites to see and manage ovirt-node instances? For just about any of these, if the answer isn't qualified with "exclusively for use by ovirt-engine", how is a split brain going to be avoided? I think the provider needs ultimately will determine what cimmon is necessary. Just to further complicate things, most providers these days are still only 32-bit as I understand it... Regards, Anthony Liguori From veillard at redhat.com Tue Jan 31 10:59:56 2012 From: veillard at redhat.com (Daniel Veillard) Date: Tue, 31 Jan 2012 18:59:56 +0800 Subject: [node-devel] CIM plugin for oVirt Node In-Reply-To: <4F26AC39.1020609@redhat.com> References: <4F256817.6030000@redhat.com> <20120130134540.GS2632@redhat.com> <4F26AC39.1020609@redhat.com> Message-ID: <20120131105956.GU2632@redhat.com> On Mon, Jan 30, 2012 at 09:42:01AM -0500, Perry Myers wrote: > On 01/30/2012 08:45 AM, Daniel Veillard wrote: > > Hi everybody, > > > > I guess the best is to ask Chip Vincent about those oVirt Node > > integration issues. CIM is not always trivial to setup in a normal > > RHEL environment, and I'm afraid nobody tried it on a read-only > > root/stateless environment. Chip I think the expertise from some > > of your libvirt-cim team is needed there, I guess the best is to provide > > an image to someone knowledgeable in the set-u and have him check > > the issues. Maybe Eduardo ro you can have a look ? > > > > thanks ! > > > > Daniel > > > > On Sun, Jan 29, 2012 at 10:39:03AM -0500, Perry Myers wrote: > >> One of the items on our backlog has been to include CIM server/providers > >> on oVirt Node. Initially we'll do this statically and include things > >> like sblim, tog-pegasus, libvirt-cim as part of the core Node recipe. > > > > P.S.: shouldn't only one of sblim/tog-pegasus be needed and not both ? > > One server should be sufficient isn't it and the goal is still > > to limit the size of images. Which one to pick may be the result > > of which one is the easier to coerce to work in root RO mode, > > or the smaller of the two ... > > I asked Anthony about this, and he explained it to me... sblim is both > a collection of CIM providers as well as a server. tog-pegasus is just > the server. > > So you can either use: > sblim + tog-pegasus > or > sblim + sblim-sfcb > > If you omit sfcb from the oVirt Node, then you can use tog-pegasus in > its place. It's also my understanding that sblim-sfcb and tog-pegasus > are not fully interchangeable as there are some providers that will only > work with one or the other. So far, it seems like tog-pegasus is what > folks want specifically, so that is what we have been focusing on. 
Ah, okay, makes it clear, thanks ! Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel at veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/
From eblima at linux.vnet.ibm.com Tue Jan 31 13:07:47 2012 From: eblima at linux.vnet.ibm.com (Eduardo Lima (Etrunko)) Date: Tue, 31 Jan 2012 11:07:47 -0200 Subject: [node-devel] CIM plugin for oVirt Node In-Reply-To: <20120130134540.GS2632@redhat.com> References: <4F256817.6030000@redhat.com> <20120130134540.GS2632@redhat.com> Message-ID: <4F27E7A3.8080600@linux.vnet.ibm.com> On 01/30/2012 11:45 AM, Daniel Veillard wrote: > Hi everybody, > > I guess the best is to ask Chip Vincent about those oVirt Node > integration issues. CIM is not always trivial to set up in a normal > RHEL environment, and I'm afraid nobody tried it on a read-only > root/stateless environment. Chip I think the expertise from some > of your libvirt-cim team is needed there, I guess the best is to provide > an image to someone knowledgeable in the set-up and have him check > the issues. Maybe Eduardo or you can have a look ? > Hi there, Copied on this email is Leonardo Garcia, who works on a project with similar requirements to those described. I am sure he and the team have already been through many of the problems described. In a quick conversation he explained to me that Pegasus requires some configuration files to be persistent, and these are stored in a partition that is mounted at boot time. I have forwarded him the entire conversation so he can give his thoughts about it. Best regards, Eduardo. > thanks ! > > Daniel > > On Sun, Jan 29, 2012 at 10:39:03AM -0500, Perry Myers wrote: >> One of the items on our backlog has been to include CIM server/providers >> on oVirt Node. Initially we'll do this statically and include things >> like sblim, tog-pegasus, libvirt-cim as part of the core Node recipe. > > P.S.: shouldn't only one of sblim/tog-pegasus be needed and not both ? > One server should be sufficient isn't it and the goal is still > to limit the size of images. Which one to pick may be the result > of which one is the easier to coerce to work in root RO mode, > or the smaller of the two ... > >> Later, we can use the 'plugin' concept so that this functionality can be >> added by those that need it, and for those that don't they can ignore. >> >> Some questions have come up around this point, and since the Node team >> aren't CIM experts, we wanted to reach out to folks that have been using >> it a little more heavily to make sure we're on the right track. >> >> Some of the technical things we've run into are: >> >> 1. Our initial attempt at getting tog-pegasus and friends running >> failed due to lots of issues with r/o root filesystem. Might need >> help from folks more knowledgeable about CIM to help resolve that. >> >> Anthony, I think you might have some kickstart snippets that would >> be of use here, correct? >> >> 2. Once you've got the CIM server there (tog-pegasus) you need to have >> some way to enable/disable it, which right now isn't easy to do >> except via offline image manipulation (since you can't persist >> symlinks in stateless Linux). >> >> 3. When the CIM server is enabled, need to unblock the appropriate >> firewall port, which again is not trivial to do given the stateless >> nature of the Node via tools like lokkit. (Perhaps firewalld will >> make this easier, but for now firewalld doesn't look mature enough >> to begin using in earnest) >> >> 4.
How should CIM be secured and configured for authentication? Do we >> need to provide some mechanism for deploying SSL client certs into >> the Node for tog-pegasus to use? What about setting simple >> user/pass auth? >> >> 5. What sort of other configuration should be exposed for CIM >> providers? >> >> Geert/Anthony/DV, if you guys have thoughts on the above questions or >> can point us at other people to loop into this thread, that would be >> helpful. >> >> Thanks! >> >> Perry > -- Eduardo de Barros Lima Software Engineer, Open Virtualization Linux Technology Center - IBM/Brazil eblima at br.ibm.com From mburns at redhat.com Tue Jan 31 14:35:01 2012 From: mburns at redhat.com (Mike Burns) Date: Tue, 31 Jan 2012 09:35:01 -0500 Subject: [node-devel] oVirt Node Weekly Sync -- 2012-01-31 Message-ID: <1328020501.2417.76.camel@beelzebub.mburnsfire.net> Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-31-14.00.html Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-31-14.00.txt Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-01-31-14.00.log.html Action items, by person 1. apevec 1. apevec to incorporate virt-minimal comps group into node kickstart and send info to node-devel 2. jboggs 1. jboggs to follow up with rgolan on node registration to engine 2. jboggs to follow up with dnaori on failed installation 3. mburns 1. mburns to rework stateless plan based on archipel-node work 2. mburns to follow up on virt minimal with apevec 3. mburns to build 2.2.2-2 4. mburns to clone rhel6 bugs to upstream and mark for 2.2.3/2.3.0/etc 5. mburns to update backlog page From pmyers at redhat.com Tue Jan 31 16:40:56 2012 From: pmyers at redhat.com (Perry Myers) Date: Tue, 31 Jan 2012 11:40:56 -0500 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins In-Reply-To: <4F26D50B.50907@redhat.com> References: <4F21759F.9030302@redhat.com> <4F2565BD.6090107@redhat.com> <4F26C7AE.4090005@redhat.com> <4F26D50B.50907@redhat.com> Message-ID: <4F281998.4010500@redhat.com> On 01/30/2012 12:36 PM, Joey Boggs wrote: > On 01/30/2012 11:39 AM, Geert Jansen wrote: >> Thanks Perry. I had a look at the Wiki and i think it is mostly very >> reasonable. Some comments/questions: >> >> - Do i understand it correctly that the plugin <-> OS interface you >> propose is "model based" rather than "file based"? All plugins' files >> are under a separate root, and they only become enabled once a logical >> setting in an XML description file enables it (e.g. services, kernel >> modules, etc). I think that is probably the right approach. It will >> make it cleaner for sure to manage plugins this way. It also allows >> much better reporting of what a plugin does in the GUI. > > The goal is to have it all separated with minimal overlap such as > suggested earlier with a place to register the plugins that are > installed on the node visible within the web ui. >> >> The alternative would be a "file based" interface where the plugin >> just puts files directly in various system directories. I think the >> main requirement to ensure is that with a model based approach our >> initial model is rich enough. Actually I think this is a hybrid of the two models you describe above. On the one hand, what is being installed are just 'normal RPMs' and so from that perspective the RPM used for vdsm should be no different on oVirt Node from Fedora usage. It should put the files where they normally go in directories like /usr/bin, etc. But on the other hand, being a plugin implies a higher level model. 
So the 'top level' RPM in a given plugin (like vdsm) should include some metadata about itself so that we can do things like 'report which plugins are installed' back to oVirt Engine. That being said... RPM has a rich metadata structure, so can't we use some of RPM metadata to identify a particular package (vdsm, cim-plugin, etc) as such? That way the process of finding out 'which plugins are installed' could simply be rpm queries looking for which rpms are classified as plugins. There is other metadata needed like: * firewall ports * services that need to start But this is normally handled in RPM %post anyhow. >> - One important thing (actually #1 request so far) is to allow VDSM >> hooks to be installed. So the XML descriptor would need to support that. > > Don't see that being an issue, it should be added as a requirement now > rather than later ack Do we have an example of a vdsm hook RPM upstream that we can use as a test case? >> - The signature infrastructure is something that requires a business >> process at Red Hat to manage the requests of ISVs, signing keys, etc. >> I'm skeptical if we need this for v1 to be honest. Feedback i have >> from vendors so far is that they'd prefer, at least initially, to keep >> it open. We still maintain list of plugins that are supported, but >> initially the overhead of basically runnning a PKI for ISVs is >> something that i wonder we want to do on a short time scale. I think >> the risks for abuse, for sure initially, is very low. Right. I think even for upstream we should do signature verification. Here's a model that should work: 1. Upstream developer wants to create new oVirt Node plugin foo (and foo is not part of Fedora, if it is, we just leverage normal Fedora package signatures) 2. Upstream developer has to create a keypair, and they sign their package foo with their private key. Meanwhile, they need to publicly distribute their public key 3. The plugin tool should be able to import a new public key into the oVirt Node image 4. Once this public key is imported, then a subsequent install of foo should work since the signature will be able to be verified From a product perspective, perhaps the ability to do step 3) is restricted to the company distributing the oVirt Node derivative. But that's not necessarily a topic for upstream to decide :) >> - I think the offline merging of plugins with the hypervisor ISO is >> good enough for v1, given that a re-install of RHEV-H via RHEV-M is >> easy. So doing a live install only in v2 should be fine; however: Agreed. The other thing to consider is that with live installs, because oVirt Node is stateless you'd have to completely reinstall all plugins on every boot, which may end up being a source of additional complexity/errors. So while live plugin installation is theoretically possible, I want to avoid it at least for the time being. >> - We should also allow plugins to be installed on RHEL... Otherwise >> ISVs have to maintain two packages for their extensions, one for RHEL >> and one for RHEV-H. > > As long as vdsm looks in the same predefined places this will not be an > issue and should be fine for both scenarios Actually... if plugins are simply 'the normal Fedora/RHEL package with some additional RPM metadata and possibly some additional other metadata' then this becomes trivial. So completely agree. So moving things to be more RPM/yum oriented I think buys us a lot here in terms of not reinventing the wheel.
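To sketch how that could work: suppose, purely as an assumption and not something settled in this thread, that every plugin's top-level package carried a virtual Provides such as ovirt-node-plugin in its spec file. Then "which plugins are installed" reduces to an ordinary rpm query that vdsm (or anything else) could run:

    # Assumed convention: each plugin's top-level package declares
    #     Provides: ovirt-node-plugin
    # in its spec file (the capability name here is hypothetical).
    #
    # List installed plugins with their versions:
    rpm -q --whatprovides ovirt-node-plugin \
        --qf '%{NAME} %{VERSION}-%{RELEASE}\n'

The same query works unchanged on a full Fedora/RHEL host, which is one reason leaning on plain RPM metadata is attractive here.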
As mburns has pointed out to me, the hard work here is going to be: * determining what the necessary metadata is that we need to insert into the spec file or other metadata file * determining how oVirt Node should handle that metadata. Example: + package foo says it needs port X opened. It can either do this in %post via smth like lokkit or it can just specify this in the metadata + If done via lokkit, we need oVirt Node to be able to emulate the lokkit command so that we 'do the right thing' in the firewall + If done via metadata we need to have the plugin process that metadata so that it makes the appropriate change to the firewall * Doing the work associated with tracking filesystem/config changes so that we can audit each change a plugin makes to the system Perry From pmyers at redhat.com Tue Jan 31 16:54:23 2012 From: pmyers at redhat.com (Perry Myers) Date: Tue, 31 Jan 2012 11:54:23 -0500 Subject: [node-devel] CIM plugin for oVirt Node In-Reply-To: <4F26F16E.5040202@us.ibm.com> References: <4F256817.6030000@redhat.com> <20120130134540.GS2632@redhat.com> <4F26AC39.1020609@redhat.com> <4F26B054.5020802@dell.com> <4F26F16E.5040202@us.ibm.com> Message-ID: <4F281CBF.2000108@redhat.com> > Let me start out by saying that I know almost nothing about CIM so I'm > largely talking out of my.... "..." is code for "secret love for CIM" right? :) > I think the first question to answer is why we're adding CIM to oVirt > node in the first place. > > Is the goal to expose a specific DMTF schema for on-node management? Is > it to allow arbitrary third-party CMPI providers to be added for > hardware management? Is it to allow third party CIM management suites > to see and manage ovirt-node instances? Those are good questions, none of which I have context/answers for :) I think there are some different paths we could take here (not necessarily mutually exclusive) 1. Expose providers for nominal read-only access so that existing monitoring solutions can leverage CIM at least for getting system state 2. Get someone to write vdsm-cim which exposes a subset of standard CIM API calls but processed through vdsm so that we don't end up having split brain 3. Enable providers in read-write mode for usages of oVirt Node where oVirt Engine is not involved (single node use case) As an aside, this is the reason why we want to add this functionality as a plugin vs. making it integral into the Node. We don't see/think that everyone will find it useful, so it'll be smth optional rather than required. > For just about any of these, if the answer isn't qualified with > "exclusively for use by ovirt-engine", how is a split brain going to be > avoided? For libvirt-cim we planned on making it so you could only call the read-only functions. I forget the details here, but iirc we did do some testing to make sure that read-only access from libvirt-cim to libvirtd worked properly. DV might recall... > I think the provider needs ultimately will determine what cimmon is > necessary. > > Just to further complicate things, most providers these days are still > only 32-bit as I understand it... 
I think right now the only concrete thing we're going to add is libvirt-cim, which does have a 64-bit version.
From gjansen at redhat.com Tue Jan 31 17:55:04 2012 From: gjansen at redhat.com (Geert Jansen) Date: Tue, 31 Jan 2012 18:55:04 +0100 Subject: [node-devel] Rethinking implementation for oVirt Node Plugins In-Reply-To: <4F281998.4010500@redhat.com> References: <4F21759F.9030302@redhat.com> <4F2565BD.6090107@redhat.com> <4F26C7AE.4090005@redhat.com> <4F26D50B.50907@redhat.com> <4F281998.4010500@redhat.com> Message-ID: <4F282AF8.9080604@redhat.com> On 01/31/2012 05:40 PM, Perry Myers wrote: > Do we have an example of a vdsm hook RPM upstream that we can use as a > test case? These two hooks are the most popular so far: - http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=tree;f=vdsm_hooks/directlun - http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=tree;f=vdsm_hooks/promisc Regards, Geert
From gjansen at redhat.com Tue Jan 31 18:33:52 2012 From: gjansen at redhat.com (Geert Jansen) Date: Tue, 31 Jan 2012 19:33:52 +0100 Subject: [node-devel] CIM plugin for oVirt Node In-Reply-To: <4F26F16E.5040202@us.ibm.com> References: <4F256817.6030000@redhat.com> <20120130134540.GS2632@redhat.com> <4F26AC39.1020609@redhat.com> <4F26B054.5020802@dell.com> <4F26F16E.5040202@us.ibm.com> Message-ID: <4F283410.9070505@redhat.com> On 01/30/2012 08:37 PM, Anthony Liguori wrote: > Let me start out by saying that I know almost nothing about CIM so I'm > largely talking out of my.... > > I think the first question to answer is why we're adding CIM to oVirt > node in the first place. > > Is the goal to expose a specific DMTF schema for on-node management? Is > it to allow arbitrary third-party CMPI providers to be added for > hardware management? Is it to allow third party CIM management suites > to see and manage ovirt-node instances? The requests I have had from RHEV ISV partners so far all were around using CIM for /read-only/ discovery of physical and virtual resources. One partner, for example, has a management platform that is structured in such a way that the lowest level of management (discovery) is all done via CIM, which is heterogeneous. The higher levels then use the native APIs for each of the virtualization systems and are therefore bespoke for the virtualization solution that is being integrated with. Regards, Geert
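For reference, that kind of read-only discovery usually amounts to plain instance enumeration against whatever CIMOM the Node exposes. A rough sketch with sblim's wbemcli is below; the namespace and class names are assumptions (libvirt-cim's actual schema and the sblim base providers may differ), so treat it as illustrative only.

    # Hypothetical discovery queries against a Node running tog-pegasus.
    # Namespace and class names are assumptions, not verified against
    # libvirt-cim or sblim-cmpi-base.
    wbemcli ein 'http://root:password@node.example.com:5988/root/virt:KVM_ComputerSystem'
    wbemcli ei  'http://root:password@node.example.com:5988/root/cimv2:Linux_OperatingSystem'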