From wenyi at linux.vnet.ibm.com  Fri Mar  1 01:23:16 2013
From: wenyi at linux.vnet.ibm.com (Wenyi Gao)
Date: Fri, 01 Mar 2013 09:23:16 +0800
Subject: [node-devel] oVirt Node 2.6.1 Release
In-Reply-To: <1362086779.2421.14.camel@fdeutsch-laptop.local>
References: <1362086779.2421.14.camel@fdeutsch-laptop.local>
Message-ID: <51300304.9090109@linux.vnet.ibm.com>

On 2013-03-01 05:26, Fabian Deutsch wrote:
> Hey,
>
> oVirt Node 2.6.1 has been released [0] and can be downloaded [1].
>
> One big reason for this update was this bug [2], also known as
> CVE-2013-0293. Additionally this release contains an updated
> vdsm-4.10.3-9 and the latest Fedora 18 packages.
>
> Thanks
> The oVirt Node Team

Fabian,

Thanks for the release. I didn't find the tag 2.6.1 in the tag list,
so could you push a tag for this release?

Wenyi Gao

> ---
> [0] http://www.ovirt.org/Node_Release_Notes#2.6.1_Release_Notes
> [1] http://www.ovirt.org/Node#Current_Release
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=911699
>
>
> _______________________________________________
> node-devel mailing list
> node-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/node-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fabiand at redhat.com  Fri Mar  1 08:05:13 2013
From: fabiand at redhat.com (Fabian Deutsch)
Date: Fri, 01 Mar 2013 09:05:13 +0100
Subject: [node-devel] oVirt Node 2.6.1 Release
In-Reply-To: <51300304.9090109@linux.vnet.ibm.com>
References: <1362086779.2421.14.camel@fdeutsch-laptop.local>
	<51300304.9090109@linux.vnet.ibm.com>
Message-ID: <1362125113.2406.0.camel@fdeutsch-laptop.local>

Am Freitag, den 01.03.2013, 09:23 +0800 schrieb Wenyi Gao:
> On 2013-03-01 05:26, Fabian Deutsch wrote:
>
> > Hey,
> >
> > oVirt Node 2.6.1 has been released [0] and can be downloaded [1].
> >
> > One big reason for this update was this bug [2], also known as
> > CVE-2013-0293. Additionally this release contains an updated
> > vdsm-4.10.3-9 and the latest Fedora 18 packages.
> >
> > Thanks
> > The oVirt Node Team
>
> Fabian,
>
> Thanks for the release. I didn't find the tag 2.6.1 on the tag list,
> so could you give a tag for this release?

Hey Wenyi,

I've now also pushed the tag for ovirt-node-2.6.1.

Thanks for the reminder and
Greetings
fabian
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part
URL:

From fabiand at redhat.com  Fri Mar  1 12:39:06 2013
From: fabiand at redhat.com (Fabian Deutsch)
Date: Fri, 01 Mar 2013 13:39:06 +0100
Subject: [node-devel] State of the installer
In-Reply-To: <1361995261.2393.25.camel@fdeutsch-laptop.local>
References: <1361995261.2393.25.camel@fdeutsch-laptop.local>
Message-ID: <1362141546.2406.10.camel@fdeutsch-laptop.local>

Am Mittwoch, den 27.02.2013, 21:01 +0100 schrieb Fabian Deutsch:
> Hey,
>
> so - lately I've been working on "rebasing" our Node installer on the
> new UI code.
>
> This build [0] contains most patches to get a working installer - the
> patches are all in gerrit (tui-installer topic branch).
> Boot the iso, in the old installer, press F2 to drop to shell, then run
> python -m ovirt.node.installer
> to launch the new installer.

This image should be quite fine:
http://jenkins.ovirt.org/view/ovirt_node/job/ovirt-node-iso-devel/1799/

I could address some of the issues I was seeing earlier.

- fabian
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part
URL:

From hongsheng.jia at yahoo.com  Mon Mar  4 12:32:47 2013
From: hongsheng.jia at yahoo.com (Jia Hongsheng)
Date: Mon, 4 Mar 2013 04:32:47 -0800 (PST)
Subject: [node-devel] node-creator suspending for few hours. Anybody knows why?>
Message-ID: <1362400367.79400.YahooMailNeo@web161301.mail.bf1.yahoo.com>

I followed the guide at http://www.ovirt.org/Node_Building#Get_the_Source,
but the build procedure seems to be stuck for more than several hours at
the step /home/jhs/ovirt-node/recipe/node-creator ovirt-node-iso.ks.
Does anybody know why?

      else \
            echo "# OVIRT_REPO_URL=" > repos.ks ;\
            for repo in ; do \
               echo "repo --name=repo${i} --baseurl=${repo}" >> repos.ks ;\
               i=${i}_ ;\
            done ;\
      fi ;\
)
( \
    echo "PRODUCT='"oVirt Node Hypervisor"'" ;\
    echo "PRODUCT_SHORT='"oVirt Node Hypervisor"'" ;\
    echo "PACKAGE=ovirt-node-iso" ;\
    echo "VERSION=2.6.1" ;\
    echo "RELEASE=999.001.fc18" ;\
) > version.ks
ksflatten -c ovirt-node-image.ks -o ovirt-node-iso.ks
/home/jiahongsheng/ovirt-node/recipe/node-creator ovirt-node-iso.ks

Love the world in my eyes.
----Hongsheng Jia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fabiand at redhat.com  Wed Mar  6 13:40:15 2013
From: fabiand at redhat.com (Fabian Deutsch)
Date: Wed, 06 Mar 2013 14:40:15 +0100
Subject: [node-devel] node-creator suspending for few hours. Anybody knows why?>
In-Reply-To: <1362400367.79400.YahooMailNeo@web161301.mail.bf1.yahoo.com>
References: <1362400367.79400.YahooMailNeo@web161301.mail.bf1.yahoo.com>
Message-ID: <1362577215.2379.17.camel@fdeutsch-laptop.local>

Am Montag, den 04.03.2013, 04:32 -0800 schrieb Jia Hongsheng:
> I follows the guide
> of http://www.ovirt.org/Node_Building#Get_the_Source, but the building
> procedure seems that suspending for more than several hours in the
> step of /home/jhs/ovirt-node/recipe/node-creator ovirt-node-iso.ks.
> Anybody knows why?
> ksflatten -c ovirt-node-image.ks -o ovirt-node-iso.ks
> /home/jiahongsheng/ovirt-node/recipe/node-creator ovirt-node-iso.ks

The process can hang at that point for some time; how long it takes
depends heavily on the machine. The build process ranges from ~20min to
~2hrs on my machines.

But a couple of hours just for ksflatten is quite long. What kind of
machine have you got?

- fabian
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part
URL:

From mburns at redhat.com  Wed Mar  6 13:43:34 2013
From: mburns at redhat.com (Mike Burns)
Date: Wed, 06 Mar 2013 08:43:34 -0500
Subject: [node-devel] node-creator suspending for few hours. Anybody knows why?>
In-Reply-To: <1362577215.2379.17.camel@fdeutsch-laptop.local>
References: <1362400367.79400.YahooMailNeo@web161301.mail.bf1.yahoo.com>
	<1362577215.2379.17.camel@fdeutsch-laptop.local>
Message-ID: <51374806.2070504@redhat.com>

On 03/06/2013 08:40 AM, Fabian Deutsch wrote:
> Am Montag, den 04.03.2013, 04:32 -0800 schrieb Jia Hongsheng:
>> I follows the guide
>> of http://www.ovirt.org/Node_Building#Get_the_Source, but the building
>> procedure seems that suspending for more than several hours in the
>> step of /home/jhs/ovirt-node/recipe/node-creator ovirt-node-iso.ks.
>> Anybody knows why?
>
>
>> ksflatten -c ovirt-node-image.ks -o ovirt-node-iso.ks
>> /home/jiahongsheng/ovirt-node/recipe/node-creator ovirt-node-iso.ks
>
> The process can hang at that point for some time, it heavily depends on
> the machine how long it takes. The build process ranges fom ~2hrs to
> ~20min on my machines.

It's not a hang, it's downloading the packages it needs to do the build.
These will generally show up under ~/ovirt-cache/yum-x86_64. There is no
feedback at this point, though. Subsequent builds (as long as you don't
remove the ovirt-cache directory) should run quicker.

> But a couple of hours just for ksflatten is quite long. What kind of
> machine have you got?
>
> - fabian
>
>
>
> _______________________________________________
> node-devel mailing list
> node-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/node-devel
>

From fabiand at redhat.com  Wed Mar  6 19:20:06 2013
From: fabiand at redhat.com (Fabian Deutsch)
Date: Wed, 06 Mar 2013 20:20:06 +0100
Subject: [node-devel] State of the installer
In-Reply-To: <1362141546.2406.10.camel@fdeutsch-laptop.local>
References: <1361995261.2393.25.camel@fdeutsch-laptop.local>
	<1362141546.2406.10.camel@fdeutsch-laptop.local>
Message-ID: <1362597606.2379.21.camel@fdeutsch-laptop.local>

Hey,

so - switching from the old to the new installer wasn't as easy as I
thought. urwid makes more assumptions about the terminal than newt does
(mainly to get the window size); this led to problems and an upstream
patch.

However, this build:
http://jenkins.ovirt.org/view/ovirt_node/job/ovirt-node-iso-devel/1820/

contains a fixed dracut module, switches to the new installer and the
setup UI.
Upgrades are known to be broken (though I don't know why yet). But
reinstalls and installs work.

Greetings
fabian
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part
URL:

From mburns at redhat.com  Wed Mar  6 21:03:21 2013
From: mburns at redhat.com (Mike Burns)
Date: Wed, 06 Mar 2013 16:03:21 -0500
Subject: [node-devel] State of the installer
In-Reply-To: <1362597606.2379.21.camel@fdeutsch-laptop.local>
References: <1361995261.2393.25.camel@fdeutsch-laptop.local>
	<1362141546.2406.10.camel@fdeutsch-laptop.local>
	<1362597606.2379.21.camel@fdeutsch-laptop.local>
Message-ID: <5137AF19.3090401@redhat.com>

On 03/06/2013 02:20 PM, Fabian Deutsch wrote:
> Hey,
>
> so - the switching from the old to the new installer wasn't as easy as I
> thought. urwid makes more assumptions about the terminal compared to
> newt (mainly to get the windowsize), this lead to problems and an
> upstream patch.
>
> However, this build:
> http://jenkins.ovirt.org/view/ovirt_node/job/ovirt-node-iso-devel/1820/
>
> Contains a fixed dracut module, switches to the new installer and the
> setup ui.
> Upgrades are known to be broken (though I don't know why yet). But
> reinstall and installs work.

Patches acked and merged.  I also sent some feedback directly to you.

Thanks!
Mike

>
> Greetings
> fabian
>
>
>
> _______________________________________________
> node-devel mailing list
> node-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/node-devel
>

From wenyi at linux.vnet.ibm.com  Thu Mar  7 09:03:57 2013
From: wenyi at linux.vnet.ibm.com (Wenyi Gao)
Date: Thu, 07 Mar 2013 17:03:57 +0800
Subject: [node-devel] About check_existing_hostvg check to auto installation
Message-ID: <513857FD.2010701@linux.vnet.ibm.com>

Hey Joey,


When I automatically install the ovirt-node-2.6.1 iso on our machine
via pxe, I run into the following error:

Starting ovirt-firstboot: Performing automatic disk partitioning
ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
ERROR:ovirtnode.storage:Manual Intervention required


It seems the auto install will stop if a "HostVG" already exists on a
disk in the machine.
I checked the code and found the following patch:

http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html

The patch adds a HostVG check to the auto installation to fix
rhbz#889198, which I have no access to.

It seems ovirt-node can't be auto-installed via pxe with the patch in
place without deleting the existing HostVG manually.

So what do you think about the issue, and could you give me some
suggestions to fix it? Thanks.

Best regards
Wenyi Gao

From mburns at redhat.com  Thu Mar  7 12:52:50 2013
From: mburns at redhat.com (Mike Burns)
Date: Thu, 07 Mar 2013 07:52:50 -0500
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <513857FD.2010701@linux.vnet.ibm.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
Message-ID: <51388DA2.6050309@redhat.com>

On 03/07/2013 04:03 AM, Wenyi Gao wrote:
> Hey Jbos,
>
>
> When I install automatically the ovirt-node-2.6.1 iso to our machine
> via pxe, I ran into the following error:
>
> Starting ovirt-firstboot: Performing automatic disk partitioning
> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
> ERROR:ovirtnode.storage:Manual Intervention required
>
>
> It seems the auto install will stop if the "HostVG" exists on the
> disk on the machine.
> I checked the code and find the following patch:
>
> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>
> The patch add hostvg check to auto installation to fix rhbz#889198
> that I have no access to it.
>
> It seems it can't auto install ovirt-node via pxe with the patch and
> need to delete the existing hostvg manually.
>
> So what do you think about the issue and could you give me some
> suggestions to fix it? Thanks.

What is your pxe commandline?  What device are you installing to?  What
device contains the previous installation?

The intention of the fix is to prevent users from wiping data
accidentally.  It's existed in the TUI install for some time, and was
previously in the auto-install, but in a migration from bash to python,
it was missed.

As for the bug, it was a customer-filed issue in RHEV-H, and not
something I can make public, unfortunately.
Mike

>
> Best regards
> Wenyi Gao
>
>
> _______________________________________________
> node-devel mailing list
> node-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/node-devel

From wenyi at linux.vnet.ibm.com  Fri Mar  8 02:58:58 2013
From: wenyi at linux.vnet.ibm.com (Wenyi Gao)
Date: Fri, 08 Mar 2013 10:58:58 +0800
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <51388DA2.6050309@redhat.com>
References: <513857FD.2010701@linux.vnet.ibm.com> <51388DA2.6050309@redhat.com>
Message-ID: <513953F2.70504@linux.vnet.ibm.com>

On 2013-03-07 20:52, Mike Burns wrote:
> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>> Hey Jbos,
>>
>>
>> When I install automatically the ovirt-node-2.6.1 iso to our machine
>> via pxe, I ran into the following error:
>>
>> Starting ovirt-firstboot: Performing automatic disk partitioning
>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>> ERROR:ovirtnode.storage:Manual Intervention required
>>
>>
>> It seems the auto install will stop if the "HostVG" exists on the
>> disk on the machine.
>> I checked the code and find the following patch:
>>
>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>
>> The patch add hostvg check to auto installation to fix rhbz#889198
>> that I have no access to it.
>>
>> It seems it can't auto install ovirt-node via pxe with the patch and
>> need to delete the existing hostvg manually.
>>
>> So what do you think about the issue and could you give me some
>> suggestions to fix it? Thanks.
>
> What is your pxe commandline? What device are you installing to? What
> device contains the previous installation?
>
> The intention of the fix is to prevent users from wiping data
> accidentally. It's existed in the TUI install for some time, and was
> previously in the auto-install, but in a migration from bash to
> python, was missed.
>
> As for the bug, it was a customer filed issue in RHEV-H, and not
> something I can make public, unfortunately.
>
> Mike

Thank you for answering the question.

My pxe command line is as follows:

In the emergency shell:

[root at mcptest ~]# cat /proc/cmdline
ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
liveimg nomodeset check rootflags=loop crashkernel=512M-2G:64M,2G-:128M
elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
rd_NO_MD rd_NO_DM

[root at mcptest ~]# pvs
  PV                                              VG     Fmt  Attr PSize PFree
  /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0

We are installing it to the hard disk, and the previous installation is
RHEV-H 6.3. During the installation, before performing the disk format,
it checks that there is an existing HostVG on the disk, so it stops
installing.
Wenyi Gao

>>>>
>>>> Best regards
>>>> Wenyi Gao
>>>>
>>>>
>>>> _______________________________________________
>>>> node-devel mailing list
>>>> node-devel at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>>>
>>
>

From mburns at redhat.com  Fri Mar  8 12:37:52 2013
From: mburns at redhat.com (Mike Burns)
Date: Fri, 08 Mar 2013 07:52:52 -0500
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <513953F2.70504@linux.vnet.ibm.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
Message-ID: <5139DBA0.3090704@redhat.com>

On 03/07/2013 09:58 PM, Wenyi Gao wrote:
> On 2013-03-07 20:52, Mike Burns wrote:
>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>> Hey Jbos,
>>>
>>>
>>> When I install automatically the ovirt-node-2.6.1 iso to our machine
>>> via pxe, I ran into the following error:
>>>
>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>
>>>
>>> It seems the auto install will stop if the "HostVG" exists on the
>>> disk on the machine.
>>> I checked the code and find the following patch:
>>>
>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>
>>> The patch add hostvg check to auto installation to fix rhbz#889198
>>> that I have no access to it.
>>>
>>> It seems it can't auto install ovirt-node via pxe with the patch and
>>> need to delete the existing hostvg manually.
>>>
>>> So what do you think about the issue and could you give me some
>>> suggestions to fix it? Thanks.
>>
>> What is your pxe commandline? What device are you installing to? What
>> device contains the previous installation?
>>
>> The intention of the fix is to prevent users from wiping data
>> accidentally. It's existed in the TUI install for some time, and was
>> previously in the auto-install, but in a migration from bash to
>> python, was missed.
>>
>> As for the bug, it was a customer filed issue in RHEV-H, and not
>> something I can make public, unfortunately.
>>
>> Mike
>
> Thank you answering the question.
>
> My pex command line is as follow:
>
> In the emergency shell
>
> [root at mcptest ~]# cat /proc/cmdline
> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
> upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
> vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
> liveimg nomodeset check rootflags=loop crashkernel=512M-2G:64M,2G-:128M
> elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
> rd_NO_MD rd_NO_DM
>
> [root at mcptest ~]# pvs
>   PV                                              VG     Fmt  Attr PSize PFree
>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>
>
> We are installing it to the hard disk , and the previous installation is
> RHEVH 6.3. During the installation,
> before performing the disk format, it checks that there is a existing
> HostVG on the disk, so it stops installing.

It's possible we're not translating correctly from /dev/sda to
/dev/mapper/3600*.

Can you try one thing?  Try adding reinstall instead of install to the
PXE command line.  Also, you probably shouldn't pass both upgrade and
install on the same command line.  It may be getting confused because
it has both of those.

Joey, can you see if you can reproduce this?
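One quick way to test that translation suspicion on the failing machine
is to ask sysfs which device-mapper node has claimed sda. This is a
throwaway diagnostic sketch, not ovirt-node code, and the helper name
dm_holders is made up for illustration:

    import os

    def dm_holders(disk="sda"):
        """dm-* devices stacked on top of <disk>; multipath shows up
        here when it has claimed the disk, e.g. ['dm-0']."""
        path = "/sys/block/%s/holders" % disk
        return os.listdir(path) if os.path.isdir(path) else []

    for dm in dm_holders("sda"):
        # /sys/block/dm-N/dm/name holds the friendly /dev/mapper name
        with open("/sys/block/%s/dm/name" % dm) as f:
            print("/dev/sda is claimed by /dev/mapper/%s" % f.read().strip())

If this prints the 3600605b0... name from the pvs output above, then
/dev/sda and the /dev/mapper device are two names for the same disk,
and the check would have to translate between them.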
Thanks

Mike

>
>
> Wenyi Gao
>>>>
>>>> Best regards
>>>> Wenyi Gao
>>>>
>>>>
>>>> _______________________________________________
>>>> node-devel mailing list
>>>> node-devel at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>>>
>>
>

From jboggs at redhat.com  Fri Mar  8 15:17:54 2013
From: jboggs at redhat.com (Joey Boggs)
Date: Fri, 08 Mar 2013 10:17:54 -0500
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <5139DBA0.3090704@redhat.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
	<5139DBA0.3090704@redhat.com>
Message-ID: <513A0122.20200@redhat.com>

On 03/08/2013 07:37 AM, Mike Burns wrote:
> On 03/07/2013 09:58 PM, Wenyi Gao wrote:
>> On 2013-03-07 20:52, Mike Burns wrote:
>>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>>> Hey Jbos,
>>>>
>>>>
>>>> When I install automatically the ovirt-node-2.6.1 iso to our machine
>>>> via pxe, I ran into the following error:
>>>>
>>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>>
>>>>
>>>> It seems the auto install will stop if the "HostVG" exists on the
>>>> disk on the machine.
>>>> I checked the code and find the following patch:
>>>>
>>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>>
>>>> The patch add hostvg check to auto installation to fix rhbz#889198
>>>> that I have no access to it.
>>>>
>>>> It seems it can't auto install ovirt-node via pxe with the patch and
>>>> need to delete the existing hostvg manually.
>>>>
>>>> So what do you think about the issue and could you give me some
>>>> suggestions to fix it? Thanks.
>>>
>>> What is your pxe commandline? What device are you installing to? What
>>> device contains the previous installation?
>>>
>>> The intention of the fix is to prevent users from wiping data
>>> accidentally. It's existed in the TUI install for some time, and was
>>> previously in the auto-install, but in a migration from bash to
>>> python, was missed.
>>>
>>> As for the bug, it was a customer filed issue in RHEV-H, and not
>>> something I can make public, unfortunately.
>>>
>>> Mike
>>
>> Thank you answering the question.
>>
>> My pex command line is as follow:
>>
>> In the emergency shell
>>
>> [root at mcptest ~]# cat /proc/cmdline
>> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
>> upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
>> vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
>> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
>> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
>> liveimg nomodeset check rootflags=loop crashkernel=512M-2G:64M,2G-:128M
>> elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
>> rd_NO_MD rd_NO_DM
>>
>> [root at mcptest ~]# pvs
>>   PV                                              VG     Fmt  Attr PSize PFree
>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>
>>
>> We are installing it to the hard disk , and the previous installation is
>> RHEVH 6.3. During the installation,
>> before performing the disk format, it checks that there is a existing
>> HostVG on the disk, so it stops installing.
>
> It's possible we're not translating correctly from /dev/sda to
> /dev/mapper/3600*.
>
> Can you try one thing? try adding reinstall instead of install to the
> PXE command line.
> Also, you probably shouldn't pass both upgrade and
> install on the same command line. It's may be getting confused
> because it has both of those.
>
> Joey, can you see if you can reproduce this?
>
> Thanks
>
> Mike
>>
>>
>> Wenyi Gao
>>>>>
>>>>> Best regards
>>>>> Wenyi Gao
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> node-devel mailing list
>>>>> node-devel at ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>>>>
>>>
>>
>

Unable to reproduce the failure. It installs fine over top of an
existing installation using your cmdline args. Long shot, but does the
length of the PXE args have any effect? It's over 400 chars long.

From mburns at redhat.com  Fri Mar  8 15:26:29 2013
From: mburns at redhat.com (Mike Burns)
Date: Fri, 08 Mar 2013 10:26:29 -0500
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <513A0122.20200@redhat.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
	<5139DBA0.3090704@redhat.com> <513A0122.20200@redhat.com>
Message-ID: <513A0325.3000404@redhat.com>

On 03/08/2013 10:17 AM, Joey Boggs wrote:
> On 03/08/2013 07:37 AM, Mike Burns wrote:
>> On 03/07/2013 09:58 PM, Wenyi Gao wrote:
>>> On 2013-03-07 20:52, Mike Burns wrote:
>>>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>>>> Hey Jbos,
>>>>>
>>>>>
>>>>> When I install automatically the ovirt-node-2.6.1 iso to our machine
>>>>> via pxe, I ran into the following error:
>>>>>
>>>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>>>
>>>>>
>>>>> It seems the auto install will stop if the "HostVG" exists on the
>>>>> disk on the machine.
>>>>> I checked the code and find the following patch:
>>>>>
>>>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>>>
>>>>> The patch add hostvg check to auto installation to fix rhbz#889198
>>>>> that I have no access to it.
>>>>>
>>>>> It seems it can't auto install ovirt-node via pxe with the patch and
>>>>> need to delete the existing hostvg manually.
>>>>>
>>>>> So what do you think about the issue and could you give me some
>>>>> suggestions to fix it? Thanks.
>>>>
>>>> What is your pxe commandline? What device are you installing to? What
>>>> device contains the previous installation?
>>>>
>>>> The intention of the fix is to prevent users from wiping data
>>>> accidentally. It's existed in the TUI install for some time, and was
>>>> previously in the auto-install, but in a migration from bash to
>>>> python, was missed.
>>>>
>>>> As for the bug, it was a customer filed issue in RHEV-H, and not
>>>> something I can make public, unfortunately.
>>>>
>>>> Mike
>>>
>>> Thank you answering the question.
>>>
>>> My pex command line is as follow:
>>>
>>> In the emergency shell
>>>
>>> [root at mcptest ~]# cat /proc/cmdline
>>> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
>>> upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
>>> vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
>>> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
>>> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
>>> liveimg nomodeset check rootflags=loop crashkernel=512M-2G:64M,2G-:128M
>>> elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
>>> rd_NO_MD rd_NO_DM
>>>
>>> [root at mcptest ~]# pvs
>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>
>>>
>>> We are installing it to the hard disk , and the previous installation is
>>> RHEVH 6.3. During the installation,
>>> before performing the disk format, it checks that there is a existing
>>> HostVG on the disk, so it stops installing.
>>
>> It's possible we're not translating correctly from /dev/sda to
>> /dev/mapper/3600*.
>>
>> Can you try one thing? try adding reinstall instead of install to the
>> PXE command line. Also, you probably shouldn't pass both upgrade and
>> install on the same command line. It's may be getting confused
>> because it has both of those.
>>
>> Joey, can you see if you can reproduce this?
>>
>> Thanks
>>
>> Mike
>>>
>>>
>>> Wenyi Gao
>>>>>>
>>>>>> Best regards
>>>>>> Wenyi Gao
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> node-devel mailing list
>>>>>> node-devel at ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>>>>>
>>>>
>>>
>>
>
> unable to reproduce the failure. Installs fine over top of existing
> installation using your cmdline args. Long shot but does the length of
> the pxe args have any affect its over 400 chars long.

No, that issue is only on some pxe servers, and it only causes problems
with getting the commandline to the machine.  In this case, Wenyi says
that this is from /proc/cmdline.

From wenyi at linux.vnet.ibm.com  Mon Mar 11 07:15:46 2013
From: wenyi at linux.vnet.ibm.com (Wenyi Gao)
Date: Mon, 11 Mar 2013 15:15:46 +0800
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <5139DBA0.3090704@redhat.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
	<5139DBA0.3090704@redhat.com>
Message-ID: <513D84A2.6050703@linux.vnet.ibm.com>

On 2013-03-08 20:37, Mike Burns wrote:
> On 03/07/2013 09:58 PM, Wenyi Gao wrote:
>> On 2013-03-07 20:52, Mike Burns wrote:
>>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>>> Hey Jbos,
>>>>
>>>>
>>>> When I install automatically the ovirt-node-2.6.1 iso to our machine
>>>> via pxe, I ran into the following error:
>>>>
>>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>>
>>>>
>>>> It seems the auto install will stop if the "HostVG" exists on the
>>>> disk on the machine.
>>>> I checked the code and find the following patch:
>>>>
>>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>>
>>>> The patch add hostvg check to auto installation to fix rhbz#889198
>>>> that I have no access to it.
>>>>
>>>> It seems it can't auto install ovirt-node via pxe with the patch and
>>>> need to delete the existing hostvg manually.
>>>>
>>>> So what do you think about the issue and could you give me some
>>>> suggestions to fix it? Thanks.
>>>
>>> What is your pxe commandline? What device are you installing to? What
>>> device contains the previous installation?
>>>
>>> The intention of the fix is to prevent users from wiping data
>>> accidentally. It's existed in the TUI install for some time, and was
>>> previously in the auto-install, but in a migration from bash to
>>> python, was missed.
>>>
>>> As for the bug, it was a customer filed issue in RHEV-H, and not
>>> something I can make public, unfortunately.
>>>
>>> Mike
>>
>> Thank you answering the question.
>>
>> My pex command line is as follow:
>>
>> In the emergency shell
>>
>> [root at mcptest ~]# cat /proc/cmdline
>> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
>> upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
>> vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
>> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
>> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
>> liveimg nomodeset check rootflags=loop crashkernel=512M-2G:64M,2G-:128M
>> elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
>> rd_NO_MD rd_NO_DM
>>
>> [root at mcptest ~]# pvs
>>   PV                                              VG     Fmt  Attr PSize PFree
>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>
>>
>> We are installing it to the hard disk , and the previous installation is
>> RHEVH 6.3. During the installation,
>> before performing the disk format, it checks that there is a existing
>> HostVG on the disk, so it stops installing.
>
> It's possible we're not translating correctly from /dev/sda to
> /dev/mapper/3600*.
>
> Can you try one thing? try adding reinstall instead of install to the
> PXE command line. Also, you probably shouldn't pass both upgrade and
> install on the same command line. It's may be getting confused
> because it has both of those.

I removed upgrade and changed install to reinstall in the parameters,
and still got the same error. The error is caused by the following code
in storage.py:

def storage_auto():
    storage = Storage()
    if not _functions.OVIRT_VARS["OVIRT_INIT"] == "":
        #force root install variable for autoinstalls
        _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
        if _functions.check_existing_hostvg("") or \
           _functions.check_existing_hostvg("","AppVG"):
            logger.error("HostVG/AppVG exists on a separate disk")
            logger.error("Manual Intervention required")
            return False
        if storage.perform_partitioning():
            return True
    else:
        logger.error("Storage Device Is Required for Auto Installation")
    return False

When check_existing_hostvg runs, because our disk has an existing
HostVG:

[root at mcptest ~]# pvs
  PV                                              VG     Fmt  Attr PSize PFree
  /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0

check_existing_hostvg("") always returns True, which leads to the issue.
I think the HostVG should be there because the machine had RHEV-H
installed before. So can we skip the check_existing_hostvg for a machine
that already has a HostVG?

>
> Joey, can you see if you can reproduce this?
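For illustration, a minimal sketch of one possible shape of a fix:
scope the check to the disks named in storage_init instead of matching
any disk. This assumes check_existing_hostvg() treats its first
argument as the install target(s) to exclude from the "separate disk"
check, and that OVIRT_VARS["OVIRT_INIT"] holds those disks; the real
signature and parsing may differ.

    def storage_auto():
        storage = Storage()
        if not _functions.OVIRT_VARS["OVIRT_INIT"] == "":
            #force root install variable for autoinstalls
            _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
            # Assumption: passing the storage_init disks here means a
            # HostVG/AppVG on the install target itself no longer aborts
            # the run; only a VG on some *other* disk should require
            # manual intervention.
            install_devs = _functions.OVIRT_VARS["OVIRT_INIT"]  # e.g. "/dev/sda"
            if _functions.check_existing_hostvg(install_devs) or \
               _functions.check_existing_hostvg(install_devs, "AppVG"):
                logger.error("HostVG/AppVG exists on a separate disk")
                logger.error("Manual Intervention required")
                return False
            if storage.perform_partitioning():
                return True
        else:
            logger.error("Storage Device Is Required for Auto Installation")
        return False

Even then, /dev/sda would still have to resolve to the /dev/mapper/3600*
name that pvs reports, so a device-name translation step may also be
needed.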
>
> Thanks
>
> Mike
>>
>>
>> Wenyi Gao
>>>>
>>>> Best regards
>>>> Wenyi Gao
>>>>
>>>>
>>>> _______________________________________________
>>>> node-devel mailing list
>>>> node-devel at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>>>
>>
>

From mburns at redhat.com  Wed Mar 13 12:47:53 2013
From: mburns at redhat.com (Mike Burns)
Date: Wed, 13 Mar 2013 08:47:53 -0400
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <513D84A2.6050703@linux.vnet.ibm.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
	<5139DBA0.3090704@redhat.com> <513D84A2.6050703@linux.vnet.ibm.com>
Message-ID: <51407579.6000008@redhat.com>

On 03/11/2013 03:15 AM, Wenyi Gao wrote:
> On 2013-03-08 20:37, Mike Burns wrote:
>> On 03/07/2013 09:58 PM, Wenyi Gao wrote:
>>> On 2013-03-07 20:52, Mike Burns wrote:
>>>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>>>> Hey Jbos,
>>>>>
>>>>>
>>>>> When I install automatically the ovirt-node-2.6.1 iso to our machine
>>>>> via pxe, I ran into the following error:
>>>>>
>>>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>>>
>>>>>
>>>>> It seems the auto install will stop if the "HostVG" exists on the
>>>>> disk on the machine.
>>>>> I checked the code and find the following patch:
>>>>>
>>>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>>>
>>>>> The patch add hostvg check to auto installation to fix rhbz#889198
>>>>> that I have no access to it.
>>>>>
>>>>> It seems it can't auto install ovirt-node via pxe with the patch and
>>>>> need to delete the existing hostvg manually.
>>>>>
>>>>> So what do you think about the issue and could you give me some
>>>>> suggestions to fix it? Thanks.
>>>>
>>>> What is your pxe commandline? What device are you installing to? What
>>>> device contains the previous installation?
>>>>
>>>> The intention of the fix is to prevent users from wiping data
>>>> accidentally. It's existed in the TUI install for some time, and was
>>>> previously in the auto-install, but in a migration from bash to
>>>> python, was missed.
>>>>
>>>> As for the bug, it was a customer filed issue in RHEV-H, and not
>>>> something I can make public, unfortunately.
>>>>
>>>> Mike
>>>
>>> Thank you answering the question.
>>>
>>> My pex command line is as follow:
>>>
>>> In the emergency shell
>>>
>>> [root at mcptest ~]# cat /proc/cmdline
>>> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
>>> upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
>>> vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
>>> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
>>> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
>>> liveimg nomodeset check rootflags=loop crashkernel=512M-2G:64M,2G-:128M
>>> elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
>>> rd_NO_MD rd_NO_DM
>>>
>>> [root at mcptest ~]# pvs
>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>
>>>
>>> We are installing it to the hard disk , and the previous installation is
>>> RHEVH 6.3. During the installation,
>>> before performing the disk format, it checks that there is a existing
>>> HostVG on the disk, so it stops installing.
>>
>> It's possible we're not translating correctly from /dev/sda to
>> /dev/mapper/3600*.
>>
>> Can you try one thing? try adding reinstall instead of install to the
>> PXE command line. Also, you probably shouldn't pass both upgrade and
>> install on the same command line. It's may be getting confused
>> because it has both of those.
>
> I removed upgrade and change install to reinstall in the parameters, and
> still got the same error. The error is caused by following code in
> storage.py:
>
>
> def storage_auto():
>     storage = Storage()
>     if not _functions.OVIRT_VARS["OVIRT_INIT"] == "":
>         #force root install variable for autoinstalls
>         _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
>         if _functions.check_existing_hostvg("") or \
>            _functions.check_existing_hostvg("","AppVG"):
>             logger.error("HostVG/AppVG exists on a separate disk")
>             logger.error("Manual Intervention required")
>             return False
>         if storage.perform_partitioning():
>             return True
>     else:
>         logger.error("Storage Device Is Required for Auto Installation")
>     return False
>
> When run check_existing_hostvg, because our disk have a existing HostVG,
>
> [root at mcptest ~]# pvs
>   PV                                              VG     Fmt  Attr PSize PFree
>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>
>
> So check_existing_hostvg("") always return "True", which leads to the
> issue. I think the HostVG should be there
> because the machine have installed RHEVH system before. So can we can
> skip the check_existing_hostvg
> for an machine with a HostVG already?

Yes, you're right, that logic is broken.  It should be passing the disks
mentioned in storage_init.

Can you file a bug on this?  I'll try to work up a patch.

Thanks

Mike

>
>
>
>>
>> Joey, can you see if you can reproduce this?
>>
>> Thanks
>>
>> Mike
>>>
>>>
>>> Wenyi Gao
>>>>>>
>>>>>> Best regards
>>>>>> Wenyi Gao
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> node-devel mailing list
>>>>>> node-devel at ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>>>>>
>>>>
>>>
>>
>
> _______________________________________________
> node-devel mailing list
> node-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/node-devel

From wenyi at linux.vnet.ibm.com  Thu Mar 14 04:53:02 2013
From: wenyi at linux.vnet.ibm.com (Wenyi Gao)
Date: Thu, 14 Mar 2013 12:53:02 +0800
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <51407579.6000008@redhat.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
	<5139DBA0.3090704@redhat.com> <513D84A2.6050703@linux.vnet.ibm.com>
	<51407579.6000008@redhat.com>
Message-ID: <514157AE.8030202@linux.vnet.ibm.com>

On 2013-03-13 20:47, Mike Burns wrote:
> On 03/11/2013 03:15 AM, Wenyi Gao wrote:
>> On 2013-03-08 20:37, Mike Burns wrote:
>>> On 03/07/2013 09:58 PM, Wenyi Gao wrote:
>>>> On 2013-03-07 20:52, Mike Burns wrote:
>>>>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>>>>> Hey Jbos,
>>>>>>
>>>>>>
>>>>>> When I install automatically the ovirt-node-2.6.1 iso to our machine
>>>>>> via pxe, I ran into the following error:
>>>>>>
>>>>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>>>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>>>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>>>>
>>>>>>
>>>>>> It seems the auto install will stop if the "HostVG" exists on the
>>>>>> disk on the machine.
>>>>>> I checked the code and find the following patch:
>>>>>>
>>>>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>>>>
>>>>>>
>>>>>> The patch add hostvg check to auto installation to fix rhbz#889198
>>>>>> that I have no access to it.
>>>>>>
>>>>>> It seems it can't auto install ovirt-node via pxe with the patch and
>>>>>> need to delete the existing hostvg manually.
>>>>>>
>>>>>> So what do you think about the issue and could you give me some
>>>>>> suggestions to fix it? Thanks.
>>>>>
>>>>> What is your pxe commandline? What device are you installing to? What
>>>>> device contains the previous installation?
>>>>>
>>>>> The intention of the fix is to prevent users from wiping data
>>>>> accidentally. It's existed in the TUI install for some time, and was
>>>>> previously in the auto-install, but in a migration from bash to
>>>>> python, was missed.
>>>>>
>>>>> As for the bug, it was a customer filed issue in RHEV-H, and not
>>>>> something I can make public, unfortunately.
>>>>>
>>>>> Mike
>>>>
>>>> Thank you answering the question.
>>>>
>>>> My pex command line is as follow:
>>>>
>>>> In the emergency shell
>>>>
>>>> [root at mcptest ~]# cat /proc/cmdline
>>>> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
>>>> upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
>>>> vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
>>>> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
>>>> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
>>>> liveimg nomodeset check rootflags=loop crashkernel=512M-2G:64M,2G-:128M
>>>> elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
>>>> rd_NO_MD rd_NO_DM
>>>>
>>>> [root at mcptest ~]# pvs
>>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>>
>>>>
>>>> We are installing it to the hard disk , and the previous installation is
>>>> RHEVH 6.3. During the installation,
>>>> before performing the disk format, it checks that there is a existing
>>>> HostVG on the disk, so it stops installing.
>>>
>>> It's possible we're not translating correctly from /dev/sda to
>>> /dev/mapper/3600*.
>>>
>>> Can you try one thing? try adding reinstall instead of install to the
>>> PXE command line. Also, you probably shouldn't pass both upgrade and
>>> install on the same command line. It's may be getting confused
>>> because it has both of those.
>>
>> I removed upgrade and change install to reinstall in the parameters, and
>> still got the same error. The error is caused by following code in
>> storage.py:
>>
>>
>> def storage_auto():
>>     storage = Storage()
>>     if not _functions.OVIRT_VARS["OVIRT_INIT"] == "":
>>         #force root install variable for autoinstalls
>>         _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
>>         if _functions.check_existing_hostvg("") or \
>>            _functions.check_existing_hostvg("","AppVG"):
>>             logger.error("HostVG/AppVG exists on a separate disk")
>>             logger.error("Manual Intervention required")
>>             return False
>>         if storage.perform_partitioning():
>>             return True
>>     else:
>>         logger.error("Storage Device Is Required for Auto Installation")
>>     return False
>>
>> When run check_existing_hostvg, because our disk have a existing HostVG,
>>
>> [root at mcptest ~]# pvs
>>   PV                                              VG     Fmt  Attr PSize PFree
>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>
>>
>> So check_existing_hostvg("") always return "True", which leads to the
>> issue.
>> I think the HostVG should be there
>> because the machine have installed RHEVH system before. So can we can
>> skip the check_existing_hostvg
>> for an machine with a HostVG already?
>
> Yes, you're right, that logic is broken. It should be passing the
> disks mentioned in storage_init.
>
> Can you file a bug on this? I'll try to work up a patch.
>
> Thanks
>
> Mike

Mike,

There is another question that confuses me. If I install
rhev-hypervisor6-6.4-20130221 with ovirt-node-2.5.0 via pxe, which has
the same check_existing_hostvg code as mentioned above, I can install
it successfully. In addition, I did some debugging before this code
runs:

    _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
    if _functions.check_existing_hostvg("") or \
       _functions.check_existing_hostvg("","AppVG"):
        logger.error("HostVG/AppVG exists on a separate disk")
        logger.error("Manual Intervention required")
        return False
    if storage.perform_partitioning():
        return True

[root at mcptest ~]# pvs

I didn't get the HostVG information that I got with the ovirt-node-2.6.1
version. So I guess ovirt-node-2.5.0 does something with the HostVG
before the check, but ovirt-node-2.6.1 doesn't.

Thanks
Wenyi Gao

>>
>>
>>
>>>
>>> Joey, can you see if you can reproduce this?
>>>
>>> Thanks
>>>
>>> Mike
>>>>
>>>>
>>>> Wenyi Gao
>>>>>>>
>>>>>>> Best regards
>>>>>>> Wenyi Gao
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> node-devel mailing list
>>>>>>> node-devel at ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>>>>>>
>>>>>
>>>>
>>>
>>
>> _______________________________________________
>> node-devel mailing list
>> node-devel at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/node-devel

From mburns at redhat.com  Thu Mar 14 12:57:39 2013
From: mburns at redhat.com (Mike Burns)
Date: Thu, 14 Mar 2013 08:57:39 -0400
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <514157AE.8030202@linux.vnet.ibm.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
	<5139DBA0.3090704@redhat.com> <513D84A2.6050703@linux.vnet.ibm.com>
	<51407579.6000008@redhat.com> <514157AE.8030202@linux.vnet.ibm.com>
Message-ID: <5141C943.3060803@redhat.com>

On 03/14/2013 12:53 AM, Wenyi Gao wrote:
> On 2013-03-13 20:47, Mike Burns wrote:
>> On 03/11/2013 03:15 AM, Wenyi Gao wrote:
>>> On 2013-03-08 20:37, Mike Burns wrote:
>>>> On 03/07/2013 09:58 PM, Wenyi Gao wrote:
>>>>> On 2013-03-07 20:52, Mike Burns wrote:
>>>>>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>>>>>> Hey Jbos,
>>>>>>>
>>>>>>>
>>>>>>> When I install automatically the ovirt-node-2.6.1 iso to our machine
>>>>>>> via pxe, I ran into the following error:
>>>>>>>
>>>>>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>>>>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>>>>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>>>>>
>>>>>>>
>>>>>>> It seems the auto install will stop if the "HostVG" exists on the
>>>>>>> disk on the machine.
>>>>>>> I checked the code and find the following patch:
>>>>>>>
>>>>>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>>>>>
>>>>>>>
>>>>>>> The patch add hostvg check to auto installation to fix rhbz#889198
>>>>>>> that I have no access to it.
>>>>>>>
>>>>>>> It seems it can't auto install ovirt-node via pxe with the patch and
>>>>>>> need to delete the existing hostvg manually.
>>>>>>>
>>>>>>> So what do you think about the issue and could you give me some
>>>>>>> suggestions to fix it?
>>>>>>> Thanks.
>>>>>>
>>>>>> What is your pxe commandline? What device are you installing to?
>>>>>> What
>>>>>> device contains the previous installation?
>>>>>>
>>>>>> The intention of the fix is to prevent users from wiping data
>>>>>> accidentally. It's existed in the TUI install for some time, and was
>>>>>> previously in the auto-install, but in a migration from bash to
>>>>>> python, was missed.
>>>>>>
>>>>>> As for the bug, it was a customer filed issue in RHEV-H, and not
>>>>>> something I can make public, unfortunately.
>>>>>>
>>>>>> Mike
>>>>>
>>>>> Thank you answering the question.
>>>>>
>>>>> My pex command line is as follow:
>>>>>
>>>>> In the emergency shell
>>>>>
>>>>> [root at mcptest ~]# cat /proc/cmdline
>>>>> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
>>>>> upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
>>>>> vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
>>>>> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
>>>>> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
>>>>> liveimg nomodeset check rootflags=loop crashkernel=512M-2G:64M,2G-:128M
>>>>> elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
>>>>> rd_NO_MD rd_NO_DM
>>>>>
>>>>> [root at mcptest ~]# pvs
>>>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>>>
>>>>>
>>>>> We are installing it to the hard disk , and the previous installation is
>>>>> RHEVH 6.3. During the installation,
>>>>> before performing the disk format, it checks that there is a existing
>>>>> HostVG on the disk, so it stops installing.
>>>>
>>>> It's possible we're not translating correctly from /dev/sda to
>>>> /dev/mapper/3600*.
>>>>
>>>> Can you try one thing? try adding reinstall instead of install to the
>>>> PXE command line. Also, you probably shouldn't pass both upgrade and
>>>> install on the same command line. It's may be getting confused
>>>> because it has both of those.
>>>
>>> I removed upgrade and change install to reinstall in the parameters, and
>>> still got the same error. The error is caused by following code in
>>> storage.py:
>>>
>>>
>>> def storage_auto():
>>>     storage = Storage()
>>>     if not _functions.OVIRT_VARS["OVIRT_INIT"] == "":
>>>         #force root install variable for autoinstalls
>>>         _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
>>>         if _functions.check_existing_hostvg("") or \
>>>            _functions.check_existing_hostvg("","AppVG"):
>>>             logger.error("HostVG/AppVG exists on a separate disk")
>>>             logger.error("Manual Intervention required")
>>>             return False
>>>         if storage.perform_partitioning():
>>>             return True
>>>     else:
>>>         logger.error("Storage Device Is Required for Auto Installation")
>>>     return False
>>>
>>> When run check_existing_hostvg, because our disk have a existing HostVG,
>>>
>>> [root at mcptest ~]# pvs
>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>
>>>
>>> So check_existing_hostvg("") always return "True", which leads to the
>>> issue. I think the HostVG should be there
>>> because the machine have installed RHEVH system before. So can we can
>>> skip the check_existing_hostvg
>>> for an machine with a HostVG already?
>>
>> Yes, you're right, that logic is broken. It should be passing the
>> disks mentioned in storage_init.
>>
>> Can you file a bug on this? I'll try to work up a patch.
>>
>> Thanks
>>
>> Mike
>
> Mike,
>
> There is another question confused me.
> If I install
> rhev-hypervisor6-6.4-20130221 with ovirt-node-2.5.0 via pxe, which has
> same check_existing_hostvg
> code as what mentioned above, I can install it successfully, in
> addition, I do some debug before run
>     _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
>     if _functions.check_existing_hostvg("") or \
>        _functions.check_existing_hostvg("","AppVG"):
>         logger.error("HostVG/AppVG exists on a separate disk")
>         logger.error("Manual Intervention required")
>         return False
>     if storage.perform_partitioning():
>         return True
>
> [root at mcptest ~]# pvs
>
> I didn't got the HostVG information that got in ovirt-node-2.6.1
> version. So I guess ovirt-node-2.5.0 does something about HostVG before
> check
> but ovirt-node-2.6.1 doesn't.

Yes, we need to investigate what's happening.  Any chance you can file a
bz for this so we can track it?

Mike

>
> Thanks
> Wenyi Gao
>
>
>
>>
>>
>>>
>>> Joey, can you see if you can reproduce this?
>>>
>>> Thanks
>>>
>>> Mike
>>>>
>>>>
>>>> Wenyi Gao
>>>>>>>
>>>>>>> Best regards
>>>>>>> Wenyi Gao
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> node-devel mailing list
>>>>>>> node-devel at ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>>>>>>
>>>>>
>>>>
>>>
>>
>> _______________________________________________
>> node-devel mailing list
>> node-devel at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/node-devel
>
> _______________________________________________
> node-devel mailing list
> node-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/node-devel

From wenyi at linux.vnet.ibm.com  Mon Mar 18 03:20:29 2013
From: wenyi at linux.vnet.ibm.com (Wenyi Gao)
Date: Mon, 18 Mar 2013 11:20:29 +0800
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <5141C943.3060803@redhat.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
	<5139DBA0.3090704@redhat.com> <513D84A2.6050703@linux.vnet.ibm.com>
	<51407579.6000008@redhat.com> <514157AE.8030202@linux.vnet.ibm.com>
	<5141C943.3060803@redhat.com>
Message-ID: <514687FD.7020006@linux.vnet.ibm.com>

On 2013-03-14 20:57, Mike Burns wrote:
> On 03/14/2013 12:53 AM, Wenyi Gao wrote:
>> On 2013-03-13 20:47, Mike Burns wrote:
>>> On 03/11/2013 03:15 AM, Wenyi Gao wrote:
>>>> On 2013-03-08 20:37, Mike Burns wrote:
>>>>> On 03/07/2013 09:58 PM, Wenyi Gao wrote:
>>>>>> On 2013-03-07 20:52, Mike Burns wrote:
>>>>>>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>>>>>>> Hey Jbos,
>>>>>>>>
>>>>>>>>
>>>>>>>> When I install automatically the ovirt-node-2.6.1 iso to our
>>>>>>>> machine
>>>>>>>> via pxe, I ran into the following error:
>>>>>>>>
>>>>>>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>>>>>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>>>>>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>>>>>>
>>>>>>>>
>>>>>>>> It seems the auto install will stop if the "HostVG" exists on the
>>>>>>>> disk on the machine.
>>>>>>>> I checked the code and find the following patch:
>>>>>>>>
>>>>>>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>>>>>>
>>>>>>>>
>>>>>>>> The patch add hostvg check to auto installation to fix rhbz#889198
>>>>>>>> that I have no access to it.
>>>>>>>>
>>>>>>>> It seems it can't auto install ovirt-node via pxe with the
>>>>>>>> patch and
>>>>>>>> need to delete the existing hostvg manually.
>>>>>>>>
>>>>>>>> So what do you think about the issue and could you give me some
>>>>>>>> suggestions to fix it? Thanks.
>>>>>>>
>>>>>>> What is your pxe commandline? What device are you installing to?
>>>>>>> What
>>>>>>> device contains the previous installation?
>>>>>>>
>>>>>>> The intention of the fix is to prevent users from wiping data
>>>>>>> accidentally. It's existed in the TUI install for some time,
>>>>>>> and was
>>>>>>> previously in the auto-install, but in a migration from bash to
>>>>>>> python, was missed.
>>>>>>>
>>>>>>> As for the bug, it was a customer filed issue in RHEV-H, and not
>>>>>>> something I can make public, unfortunately.
>>>>>>>
>>>>>>> Mike
>>>>>>
>>>>>> Thank you answering the question.
>>>>>>
>>>>>> My pex command line is as follow:
>>>>>>
>>>>>> In the emergency shell
>>>>>>
>>>>>> [root at mcptest ~]# cat /proc/cmdline
>>>>>> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda
>>>>>> lang=
>>>>>> upgrade storage_vol=50:1024:512:5:2048:-1 standalone
>>>>>> hostname=mcptest
>>>>>> vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
>>>>>> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
>>>>>> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
>>>>>> liveimg nomodeset check rootflags=loop
>>>>>> crashkernel=512M-2G:64M,2G-:128M
>>>>>> elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
>>>>>> rd_NO_MD rd_NO_DM
>>>>>>
>>>>>> [root at mcptest ~]# pvs
>>>>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>>>>
>>>>>>
>>>>>> We are installing it to the hard disk , and the previous
>>>>>> installation is
>>>>>> RHEVH 6.3. During the installation,
>>>>>> before performing the disk format, it checks that there is a
>>>>>> existing
>>>>>> HostVG on the disk, so it stops installing.
>>>>>
>>>>> It's possible we're not translating correctly from /dev/sda to
>>>>> /dev/mapper/3600*.
>>>>>
>>>>> Can you try one thing? try adding reinstall instead of install to
>>>>> the
>>>>> PXE command line. Also, you probably shouldn't pass both upgrade and
>>>>> install on the same command line. It's may be getting confused
>>>>> because it has both of those.
>>>>
>>>> I removed upgrade and change install to reinstall in the
>>>> parameters, and
>>>> still got the same error. The error is caused by following code in
>>>> storage.py:
>>>>
>>>>
>>>> def storage_auto():
>>>>     storage = Storage()
>>>>     if not _functions.OVIRT_VARS["OVIRT_INIT"] == "":
>>>>         #force root install variable for autoinstalls
>>>>         _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
>>>>         if _functions.check_existing_hostvg("") or \
>>>>            _functions.check_existing_hostvg("","AppVG"):
>>>>             logger.error("HostVG/AppVG exists on a separate disk")
>>>>             logger.error("Manual Intervention required")
>>>>             return False
>>>>         if storage.perform_partitioning():
>>>>             return True
>>>>     else:
>>>>         logger.error("Storage Device Is Required for Auto Installation")
>>>>     return False
>>>>
>>>> When run check_existing_hostvg, because our disk have a existing
>>>> HostVG,
>>>>
>>>> [root at mcptest ~]# pvs
>>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>>
>>>>
>>>> So check_existing_hostvg("") always return "True", which leads to the
>>>> issue. I think the HostVG should be there
>>>> because the machine have installed RHEVH system before. So can we can
>>>> skip the check_existing_hostvg
>>>> for an machine with a HostVG already?
>>>
>>> Yes, you're right, that logic is broken. It should be passing the
>>> disks mentioned in storage_init.
>>>
>>> Can you file a bug on this? I'll try to work up a patch.
>>>
>>> Thanks
>>>
>>> Mike
>>
>> Mike,
>>
>> There is another thing that confuses me. If I install
>> rhev-hypervisor6-6.4-20130221 with ovirt-node-2.5.0 via pxe, which
>> has the same check_existing_hostvg code as mentioned above, I can
>> install it successfully. In addition, I did some debugging before
>> this code runs:
>>
>>     _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
>>     if _functions.check_existing_hostvg("") or \
>>        _functions.check_existing_hostvg("", "AppVG"):
>>         logger.error("HostVG/AppVG exists on a separate disk")
>>         logger.error("Manual Intervention required")
>>         return False
>>     if storage.perform_partitioning():
>>         return True
>>
>> [root at mcptest ~]# pvs
>>
>> and I did not get the HostVG information that I get with
>> ovirt-node-2.6.1. So I guess ovirt-node-2.5.0 does something about
>> HostVG before the check, but ovirt-node-2.6.1 doesn't.
>
> Yes, we need to investigate what's happening. Any chance you can file
> a bz for this so we can track it?
>
> Mike

Done with BZ: https://bugzilla.redhat.com/show_bug.cgi?id=922593

Thanks Mike.

Wenyi Gao

_______________________________________________
node-devel mailing list
node-devel at ovirt.org
http://lists.ovirt.org/mailman/listinfo/node-devel

From mburns at redhat.com  Mon Mar 18 13:26:45 2013
From: mburns at redhat.com (Mike Burns)
Date: Mon, 18 Mar 2013 09:26:45 -0400
Subject: [node-devel] About check_existing_hostvg check to auto installation
In-Reply-To: <514687FD.7020006@linux.vnet.ibm.com>
References: <513857FD.2010701@linux.vnet.ibm.com>
	<51388DA2.6050309@redhat.com> <513953F2.70504@linux.vnet.ibm.com>
	<5139DBA0.3090704@redhat.com> <513D84A2.6050703@linux.vnet.ibm.com>
	<51407579.6000008@redhat.com> <514157AE.8030202@linux.vnet.ibm.com>
	<5141C943.3060803@redhat.com> <514687FD.7020006@linux.vnet.ibm.com>
Message-ID: <51471615.1000600@redhat.com>

On 03/17/2013 11:20 PM, Wenyi Gao wrote:
> On 2013-03-14 20:57, Mike Burns wrote:
>> On 03/14/2013 12:53 AM, Wenyi Gao wrote:
>>> On 2013-03-13 20:47, Mike Burns wrote:
>>>> On 03/11/2013 03:15 AM, Wenyi Gao wrote:
>>>>> On 2013-03-08 20:37, Mike Burns wrote:
>>>>>> On 03/07/2013 09:58 PM, Wenyi Gao wrote:
>>>>>>> On 2013-03-07 20:52, Mike Burns wrote:
>>>>>>>> On 03/07/2013 04:03 AM, Wenyi Gao wrote:
>>>>>>>>> Hey Jbos,
>>>>>>>>>
>>>>>>>>> When I install the ovirt-node-2.6.1 iso automatically on our
>>>>>>>>> machine via pxe, I run into the following error:
>>>>>>>>>
>>>>>>>>> Starting ovirt-firstboot: Performing automatic disk partitioning
>>>>>>>>> ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
>>>>>>>>> ERROR:ovirtnode.storage:Manual Intervention required
>>>>>>>>>
>>>>>>>>> It seems the auto install will stop if a "HostVG" already exists
>>>>>>>>> on a disk in the machine. I checked the code and found the
>>>>>>>>> following patch:
>>>>>>>>>
>>>>>>>>> http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html
>>>>>>>>>
>>>>>>>>> The patch adds a hostvg check to the auto installation to fix
>>>>>>>>> rhbz#889198, which I have no access to.
>>>>>>>>>
>>>>>>>>> It seems ovirt-node can't be auto-installed via pxe with this
>>>>>>>>> patch; the existing hostvg has to be deleted manually.
>>>>>>>>>
>>>>>>>>> So what do you think about the issue, and could you give me
>>>>>>>>> some suggestions for fixing it? Thanks.
>>>>>>>>
>>>>>>>> What is your pxe command line? What device are you installing
>>>>>>>> to? What device contains the previous installation?
>>>>>>>>
>>>>>>>> The intention of the fix is to prevent users from accidentally
>>>>>>>> wiping data. It has existed in the TUI install for some time,
>>>>>>>> and was previously in the auto-install, but was missed in a
>>>>>>>> migration from bash to python.
>>>>>>>>
>>>>>>>> As for the bug, it was a customer-filed issue in RHEV-H, and
>>>>>>>> not something I can make public, unfortunately.
>>>>>>>>
>>>>>>>> Mike
>>>>>>>
>>>>>>> Thank you for answering the question.
>>>>>>>
>>>>>>> My pxe command line is as follows, taken from the emergency
>>>>>>> shell:
>>>>>>>
>>>>>>> [root at mcptest ~]# cat /proc/cmdline
>>>>>>> ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda
>>>>>>> lang= upgrade storage_vol=50:1024:512:5:2048:-1 standalone
>>>>>>> hostname=mcptest vga=791
>>>>>>> rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
>>>>>>> root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
>>>>>>> ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto
>>>>>>> ro liveimg nomodeset check rootflags=loop
>>>>>>> crashkernel=512M-2G:64M,2G-:128M elevator=deadline
>>>>>>> processor.max_cstate=1 install rhgb rd_NO_LUKS rd_NO_MD rd_NO_DM
>>>>>>>
>>>>>>> [root at mcptest ~]# pvs
>>>>>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>>>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>>>>>
>>>>>>> We are installing to the hard disk, and the previous
>>>>>>> installation is RHEV-H 6.3. During the installation, before
>>>>>>> performing the disk format, it finds an existing HostVG on the
>>>>>>> disk, so it stops installing.
>>>>>>
>>>>>> It's possible we're not translating correctly from /dev/sda to
>>>>>> /dev/mapper/3600*.
>>>>>>
>>>>>> Can you try one thing? Try adding reinstall instead of install
>>>>>> to the PXE command line. Also, you probably shouldn't pass both
>>>>>> upgrade and install on the same command line. It may be getting
>>>>>> confused because it has both of those.
>>>>>
>>>>> I removed upgrade and changed install to reinstall in the
>>>>> parameters, and still got the same error.
>>>>> The error is caused by the following code in storage.py:
>>>>>
>>>>> def storage_auto():
>>>>>     storage = Storage()
>>>>>     if not _functions.OVIRT_VARS["OVIRT_INIT"] == "":
>>>>>         #force root install variable for autoinstalls
>>>>>         _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
>>>>>         if _functions.check_existing_hostvg("") or \
>>>>>            _functions.check_existing_hostvg("", "AppVG"):
>>>>>             logger.error("HostVG/AppVG exists on a separate disk")
>>>>>             logger.error("Manual Intervention required")
>>>>>             return False
>>>>>         if storage.perform_partitioning():
>>>>>             return True
>>>>>     else:
>>>>>         logger.error("Storage Device Is Required for Auto Installation")
>>>>>         return False
>>>>>
>>>>> When check_existing_hostvg runs, it fails because our disk has an
>>>>> existing HostVG:
>>>>>
>>>>> [root at mcptest ~]# pvs
>>>>>   PV                                              VG     Fmt  Attr PSize PFree
>>>>>   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0
>>>>>
>>>>> So check_existing_hostvg("") always returns True, which leads to
>>>>> the issue. I think the HostVG should be there, because the machine
>>>>> had a RHEV-H system installed on it before. So can we skip the
>>>>> check_existing_hostvg for a machine that already has a HostVG?
>>>>
>>>> Yes, you're right, that logic is broken. It should be passing the
>>>> disks mentioned in storage_init.
>>>>
>>>> Can you file a bug on this? I'll try to work up a patch.
>>>>
>>>> Thanks
>>>>
>>>> Mike
>>>
>>> Mike,
>>>
>>> There is another thing that confuses me. If I install
>>> rhev-hypervisor6-6.4-20130221 with ovirt-node-2.5.0 via pxe, which
>>> has the same check_existing_hostvg code as mentioned above, I can
>>> install it successfully. In addition, I did some debugging before
>>> this code runs:
>>>
>>>     _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
>>>     if _functions.check_existing_hostvg("") or \
>>>        _functions.check_existing_hostvg("", "AppVG"):
>>>         logger.error("HostVG/AppVG exists on a separate disk")
>>>         logger.error("Manual Intervention required")
>>>         return False
>>>     if storage.perform_partitioning():
>>>         return True
>>>
>>> [root at mcptest ~]# pvs
>>>
>>> and I did not get the HostVG information that I get with
>>> ovirt-node-2.6.1. So I guess ovirt-node-2.5.0 does something about
>>> HostVG before the check, but ovirt-node-2.6.1 doesn't.
>>
>> Yes, we need to investigate what's happening. Any chance you can
>> file a bz for this so we can track it?
>>
>> Mike
>
> Done with BZ: https://bugzilla.redhat.com/show_bug.cgi?id=922593

Thanks, I'll get people to start working on this ASAP.

Mike

> Thanks Mike.
>
> Wenyi Gao
>>> Thanks
>>> Wenyi Gao
>>>>>> Joey, can you see if you can reproduce this?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Mike
>>>>>>>
>>>>>>> Wenyi Gao
>>>>>>>>>
>>>>>>>>> Best regards
>>>>>>>>> Wenyi Gao
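[Mike's hypothesis above is that storage_init=/dev/sda is not being
translated to the multipath name /dev/mapper/3600* before the HostVG
comparison. A minimal, self-contained sketch of how such a lookup can
be done on Linux by walking the device-mapper slaves in sysfs; purely
illustrative, and the helper name mapper_name_for is made up, not the
actual ovirt-node code:

    import os

    def mapper_name_for(disk):
        """Return the /dev/mapper node stacked on a plain disk, if any."""
        base = os.path.basename(os.path.realpath(disk))  # e.g. "sda"
        for entry in os.listdir("/sys/block"):
            if not entry.startswith("dm-"):
                continue
            slaves = "/sys/block/%s/slaves" % entry
            if os.path.isdir(slaves) and base in os.listdir(slaves):
                # the dm "name" attribute is what appears under /dev/mapper
                with open("/sys/block/%s/dm/name" % entry) as f:
                    return "/dev/mapper/" + f.read().strip()
        return disk  # not behind device-mapper; use the plain device

On Wenyi's machine this would be expected to turn /dev/sda into the
/dev/mapper/3600605b003b86620166a1ddb0bfa15b7 name that pvs reports,
so the install-target exclusion in the HostVG check can match.]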
_______________________________________________
node-devel mailing list
node-devel at ovirt.org
http://lists.ovirt.org/mailman/listinfo/node-devel

From mburns at redhat.com  Tue Mar 26 11:31:20 2013
From: mburns at redhat.com (Mike Burns)
Date: Tue, 26 Mar 2013 07:31:20 -0400 (EDT)
Subject: [node-devel] Cancelled: oVirt Node weekly meeting
Message-ID: <672348665.23841427.1364297480979.JavaMail.root@redhat.com>

A single instance of the following meeting has been cancelled:

Subject: oVirt Node weekly meeting
Organizer: "Mike Burns"
Location: #ovirt on irc.oftc.net
Time: Tuesday, March 26, 2013, 9:00:00 AM - 9:30:00 AM GMT -05:00
      US/Canada Eastern
Invitees: aliguori at linux.vnet.ibm.com; anthony at codemonkey.ws;
      node-devel at ovirt.org; whenry at redhat.com

*~*~*~*~*~*~*~*~*~*

Weekly call to go over features, bugs, tasks, etc.