[ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

Yuval Turgeman yuvalt at redhat.com
Tue May 9 20:47:05 UTC 2017


Not sure where those grub2-probe errors come from; they seem suspicious and need
checking. Is everything running OK?

Regarding the updates: since the last update didn't really finish successfully
(you had to run grub2-mkconfig by hand), the new image will try to update itself
again when you run yum update after rebooting into it. You can take the update
rpm and register it on the new image with rpm --justdb --nodeps; it's all very
hackish, but it should do the trick.
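
Something along these lines should do it (just a sketch; it assumes you download
the update rpm locally first with yumdownloader from yum-utils, and the exact
package file name may differ on your host):

    yumdownloader ovirt-node-ng-image-update
    rpm -ivh --justdb --nodeps ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch.rpm
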
As for the oVirt repos, they should have something like
'includepkgs=ovirt-node-ng-image-update'. Take a look at the post-install
script of ovirt-release-host-node (something along those lines; I don't have my
laptop at the moment).
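
Roughly like this, for example (a sketch from memory; the repo file name,
baseurl, and exact package list are assumptions, so check what the
ovirt-release-host-node post-install script actually writes on your host):

    # /etc/yum.repos.d/ovirt-4.1.repo -- illustrative only
    [ovirt-4.1]
    name=Latest oVirt 4.1 Release
    baseurl=http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/
    enabled=1
    # restrict what this repo may provide on a node:
    includepkgs=ovirt-node-ng-image-update ovirt-node-ng-image ovirt-release41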

On May 9, 2017 22:35, "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
wrote:

> OK, I entered that verbatim and got this:
>
>  grubby --copy-default --add-kernel /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64 --initrd /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img --args rhgb crashkernel=auto root=/dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1 --title ovirt-node-ng-4.1.1.1-0.20170406.0 --bad-image-okay
>
> grubby: unexpected argument crashkernel=auto
>
>
>
> It seemed to be tripping up after '--args', so I added some double quotes:
>
> grubby --copy-default --add-kernel /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64 --initrd /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img --args "rhgb crashkernel=auto root=/dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1" --title ovirt-node-ng-4.1.1.1-0.20170406.0 --bad-image-okay
>
> grubby fatal error: unable to find a suitable template
>
>
>
> On a hunch, I checked '/etc/default/grub', which I recall having to edit at
> one point in order to allow kdump integration with 'power management'
> (out-of-band management, a Dell iDRAC in this case):
>
> GRUB_TIMEOUT=5
> GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
> GRUB_DEFAULT=saved
> GRUB_DISABLE_SUBMENU=true
> GRUB_TERMINAL_OUTPUT="console"
> GRUB_CMDLINE_LINUX='crashkernel=auto rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.0.3-0.20160830.0+1 rd.lvm.lv=onn_labvmhostt05/swap rhgb quiet'
> GRUB_DISABLE_RECOVERY="true"
>
>
>
> And updated the GRUB_CMDLINE_LINUX line as follows:
>
> GRUB_CMDLINE_LINUX='crashkernel=auto rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap rhgb quiet'
>
>
>
> Then ran this:
>
>
>
> grub2-mkconfig -o /boot/grub2/grub.cfg
>
> Generating grub configuration file ...
> Found linux image: /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1//vmlinuz-3.10.0-514.10.2.el7.x86_64
> Found initrd image: /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img
> Found linux image: /boot/ovirt-node-ng-4.0.3-0.20160830.0+1//vmlinuz-3.10.0-327.28.3.el7.x86_64
> Found initrd image: /boot/ovirt-node-ng-4.0.3-0.20160830.0+1/initramfs-3.10.0-327.28.3.el7.x86_64.img
> Found linux image: /boot/vmlinuz-3.10.0-514.16.1.el7.x86_64
> Found initrd image: /boot/initramfs-3.10.0-514.16.1.el7.x86_64.img
> Found linux image: /boot/vmlinuz-3.10.0-514.10.2.el7.x86_64
> Found initrd image: /boot/initramfs-3.10.0-514.10.2.el7.x86_64.img
> Found linux image: /boot/vmlinuz-3.10.0-514.6.1.el7.x86_64
> Found initrd image: /boot/initramfs-3.10.0-514.6.1.el7.x86_64.img
> Found linux image: /boot/vmlinuz-3.10.0-327.28.3.el7.x86_64
> Found initrd image: /boot/initramfs-3.10.0-327.28.3.el7.x86_64.img
> Found linux image: /boot/vmlinuz-3.10.0-514.16.1.el7.x86_64
> Found initrd image: /boot/initramfs-3.10.0-514.16.1.el7.x86_64.img
> Found linux image: /boot/vmlinuz-3.10.0-514.10.2.el7.x86_64
> Found initrd image: /boot/initramfs-3.10.0-514.10.2.el7.x86_64.img
> Found linux image: /boot/vmlinuz-3.10.0-514.6.1.el7.x86_64
> Found initrd image: /boot/initramfs-3.10.0-514.6.1.el7.x86_64.img
> Found linux image: /boot/vmlinuz-3.10.0-327.28.3.el7.x86_64
> Found initrd image: /boot/initramfs-3.10.0-327.28.3.el7.x86_64.img
> /usr/sbin/grub2-probe: error: disk `lvmid/TlxT0v-hF13-Z0Ex-Jubj-qPxe-3G6w-5a7Thb/u6GnMJ-Bx1p-a6tB-kNXm-6RiS-bRWQ-Uucioi' not found.
> Found CentOS Linux release 7.3.1611 (Core)  on /dev/mapper/onn_labvmhostt05-ovirt--node--ng--4.1.1.1--0.20170406.0
> /usr/sbin/grub2-probe: error: disk `lvmid/TlxT0v-hF13-Z0Ex-Jubj-qPxe-3G6w-5a7Thb/u6GnMJ-Bx1p-a6tB-kNXm-6RiS-bRWQ-Uucioi' not found.
> /usr/sbin/grub2-probe: error: disk `lvmid/TlxT0v-hF13-Z0Ex-Jubj-qPxe-3G6w-5a7Thb/jeo8ci-Ev4x-XF1m-DhYw-HHc2-fmpi-8rRfaS' not found.
> Found CentOS Linux release 7.3.1611 (Core)  on /dev/mapper/onn_labvmhostt05-ovirt--node--ng--4.1.1.1--0.20170406.0+1
> /usr/sbin/grub2-probe: error: disk `lvmid/TlxT0v-hF13-Z0Ex-Jubj-qPxe-3G6w-5a7Thb/qtjURN-CKIf-yAYe-IMUw-mV20-rtxP-WxKZdR' not found.
> Found CentOS Linux release 7.2.1511 (Core)  on /dev/mapper/onn_labvmhostt05-root
> Done
>
>
>
> This looks encouraging…
>
>
>
> [root at labvmhostt05 ~]# nodectl check
>
> Status: OK
>
> Bootloader ... OK
>
>   Layer boot entries ... OK
>
>   Valid boot entries ... OK
>
> Mount points ... OK
>
>   Separate /var ... OK
>
>   Discard is used ... OK
>
> Basic storage ... OK
>
>   Initialized VG ... OK
>
>   Initialized Thin Pool ... OK
>
>   Initialized LVs ... OK
>
> Thin storage ... OK
>
>   Checking available space in thinpool ... OK
>
>   Checking thinpool auto-extend ... OK
>
> vdsmd ... OK
>
>
>
> After a reboot, all looks good and it’s running the newer kernel. But when
> I run a ‘yum update’ it wants to install a bunch of packages:
>
>  Package                     Arch     Version                  Repository                    Size
> ===================================================================================================
> Updating:
>  ansible                     noarch   2.2.2.0-3.el7            ovirt-centos-ovirt41         4.6 M
>  glusterfs                   x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38   512 k
>  glusterfs-api               x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38    90 k
>  glusterfs-cli               x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38   183 k
>  glusterfs-client-xlators    x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38   781 k
>  glusterfs-fuse              x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38   134 k
>  glusterfs-geo-replication   x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38   208 k
>  glusterfs-libs              x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38   379 k
>  glusterfs-rdma              x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38    61 k
>  glusterfs-server            x86_64   3.8.11-1.el7             ovirt-4.1-centos-gluster38   1.4 M
>  imgbased                    noarch   0.9.23-1.el7.centos      ovirt-4.1                    126 k
>  ovirt-node-ng-nodectl       noarch   4.1.0-0.20170406.0.el7   ovirt-4.1                     70 k
>  ovirt-release41             noarch   4.1.1.1-1.el7.centos     ovirt-4.1                     14 k
>
> Transaction Summary
> ===================================================================================================
> Upgrade  13 Packages
>
>
>
>
>
> Then I tried this…
>
>
>
> lvremove onn_labvmhostt05/ovirt-node-ng-4.0.3-0.20160830.0
>
> lvremove onn_labvmhostt05/ovirt-node-ng-4.0.3-0.20160830.0+1
>
> (rebooted)
>
> yum remove ovirt-node-ng-image-update ovirt-node-ng-image ovirt-release41
>
> rm -f /etc/yum.repos.d/*
>
> yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
>
> yum update ovirt-release41
>
> yum install ovirt-node-ng-image-update
>
>
>
> But it hangs after this…
>
>
>
> Installing:
>  ovirt-node-ng-image-update   noarch   4.1.1.1-1.el7.centos   ovirt-4.1   3.8 k
> Installing for dependencies:
>  ovirt-node-ng-image          noarch   4.1.1.1-1.el7.centos   ovirt-4.1   526 M
>
> Transaction Summary
> ================================================================================
> Install  1 Package (+1 Dependent package)
>
> Total size: 526 M
> Installed size: 526 M
> Is this ok [y/d/N]: y
> Downloading packages:
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>   Installing : ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch          1/2
>   Installing : ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch   2/2
>
>
>
> On another console, I see this is where it stops:
>
>
>
> [root at labvmhostt05 ~]# tail -f /tmp/imgbased.log
>     return self.call(["lvcreate"] + args, **kwargs)
>   File "/tmp/tmp.fN7Qb8tTs8/usr/lib/python2.7/site-packages/imgbased/utils.py", line 426, in call
>     return super(LvmBinary, self).call(*args, stderr=DEVNULL, **kwargs)
>   File "/tmp/tmp.fN7Qb8tTs8/usr/lib/python2.7/site-packages/imgbased/utils.py", line 355, in call
>     stdout = call(*args, **kwargs)
>   File "/tmp/tmp.fN7Qb8tTs8/usr/lib/python2.7/site-packages/imgbased/utils.py", line 140, in call
>     return subprocess.check_output(*args, **kwargs).strip()
>   File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
>     raise CalledProcessError(retcode, cmd, output=output)
> subprocess.CalledProcessError: Command '['lvcreate', '--thin', '--virtualsize', u'360689172480B', '--name', 'ovirt-node-ng-4.1.1.1-0.20170406.0', u'onn_labvmhostt05/pool00']' returned non-zero exit status 5
>
>
>
> Here are the LVs and current mounts:
>
> [root at labvmhostt05 ~]# lvs -a
>   LV                                   VG               Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   [lvol0_pmspare]                      onn_labvmhostt05 ewi-------  88.00m
>   ovirt-node-ng-4.1.1.1-0.20170406.0   onn_labvmhostt05 Vri---tz-k 335.92g pool00
>   ovirt-node-ng-4.1.1.1-0.20170406.0+1 onn_labvmhostt05 Vwi-aotz-- 335.92g pool00 ovirt-node-ng-4.1.1.1-0.20170406.0 0.86
>   pool00                               onn_labvmhostt05 twi-aotz-- 350.96g                                           1.91   0.13
>   [pool00_tdata]                       onn_labvmhostt05 Twi-ao---- 350.96g
>   [pool00_tmeta]                       onn_labvmhostt05 ewi-ao----   1.00g
>   root                                 onn_labvmhostt05 Vwi---tz-- 335.92g pool00
>   swap                                 onn_labvmhostt05 -wi-ao----   4.00g
>   var                                  onn_labvmhostt05 Vwi-aotz--  15.00g pool00                                    11.97
>
>
> [root at labvmhostt05 ~]# mount
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
> devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=264013912k,nr_inodes=66003478,mode=755)
> securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
> devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
> tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
> tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
> cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
> pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
> cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
> cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
> cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
> cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
> cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
> cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
> cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
> cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
> cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
> configfs on /sys/kernel/config type configfs (rw,relatime)
> /dev/mapper/onn_labvmhostt05-ovirt--node--ng--4.1.1.1--0.20170406.0+1 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
> rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
> systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=29,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
> debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
> mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
> nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
> /dev/mapper/361866da06a447c001fb358304710f8ea1 on /boot type ext4 (rw,relatime,seclabel,data=ordered)
> /dev/mapper/onn_labvmhostt05-var on /var type xfs (rw,relatime,seclabel,attr2,discard,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
> tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=52808308k,mode=700)
> /dev/mapper/onn_labvmhostt05-ovirt--node--ng--4.1.1.1--0.20170406.0+1 on /tmp/tmp.O5CIMItAuZ type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>
>
>
> Should I just blow away the disks and install fresh?  Any last things to
> try?
>
>
>
> Thanks,
>
> Daniel
>
>
>
> From: Yuval Turgeman <yuvalt at redhat.com>
> Date: Tuesday, May 9, 2017 at 9:55 AM
> To: "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> Cc: "sbonazzo at redhat.com" <sbonazzo at redhat.com>, Yedidyah Bar David <didi at redhat.com>, "users at ovirt.org" <users at ovirt.org>
> Subject: Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?
>
>
>
> My pleasure ! :)
>
>
>
> The line should be as follows:
>
>
>
> grubby --copy-default --add-kernel /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64 --initrd /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img --args rhgb crashkernel=auto root=/dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1 --title ovirt-node-ng-4.1.1.1-0.20170406.0 --bad-image-okay
>
>
>
>
>
>
>
> On Tue, May 9, 2017 at 5:49 PM, Beckman, Daniel <Daniel.Beckman at ingramcontent.com> wrote:
>
> Hi Yuval,
>
>
>
> Thanks for your patience. ☺
>
>
>
> I tried that: I completely removed /boot/ovirt-node-ng-4.1.1.1* and repeated
> the same steps as before. Before doing this I cleared out the imgbased.log
> file so it contains only the latest entries.
>
>
>
> I’m assuming this is the command you referenced:
>
> [DEBUG] Calling binary: (['grubby', '--copy-default', '--add-kernel', '/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64', '--initrd', '/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img', '--args', 'rhgb crashkernel=auto root=/dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1', '--title', 'ovirt-node-ng-4.1.1.1-0.20170406.0', '--bad-image-okay'],) {}
>
>
>
> I could use some help in getting the correct syntax. I’ve attached the
> latest imgbased.log file.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> From: Yuval Turgeman <yuvalt at redhat.com>
> Date: Tuesday, May 9, 2017 at 3:43 AM
> To: "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> Cc: "sbonazzo at redhat.com" <sbonazzo at redhat.com>, Yedidyah Bar David <didi at redhat.com>, "users at ovirt.org" <users at ovirt.org>
> Subject: Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?
>
>
>
> Hi, it seems like some stuff was left on /boot from previous attempts, which
> made the boot setup stage fail. That means the node is actually installed on
> the onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 LV, but the kernel
> wasn't installed, so it's impossible to boot into that LV.
>
> The way I see it, you could try to clean up the /boot/ovirt-node-ng-4.1.1.1*
> files and retry everything just like you did (umount, lvremove, reinstall the
> rpms, etc.). The thing is that in one of your runs there's a 'grubby' line
> that failed, and its stderr is not shown in the log. Follow the steps above
> and retry; if grubby fails again (you can see it in the last few lines of
> imgbased.log), try to manually run that grubby line from the log and send its
> output along with imgbased.log so we can continue from there.
>
>
>
> Thanks,
>
> Yuval.
>
>
>
>
>
> On Mon, May 8, 2017 at 11:14 PM, Beckman, Daniel <Daniel.Beckman at ingramcontent.com> wrote:
>
> Hello,
>
>
>
> I was originally on 4.0.3 (from the ISO). The two 4.1.1 layers were not
> mounted; I went ahead and used lvremove to remove them. I removed all three
> packages,  cleared out /etc/yum.repos.d, re-added ovirt-release41 from the
> URL, and then re-installed ovirt-node-ng-image-update, which installed
> ovirt-node-ng-image as a dependency. The install did not report any errors.
> It put the 4.1.1 layers back in. I’ve uploaded the latest
> /tmp/imgbased.log.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> From: Yuval Turgeman <yuvalt at redhat.com>
> Date: Friday, May 5, 2017 at 12:32 PM
> To: "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> Cc: "sbonazzo at redhat.com" <sbonazzo at redhat.com>, Yedidyah Bar David <didi at redhat.com>, "users at ovirt.org" <users at ovirt.org>
> Subject: Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?
>
>
>
> Were you on 4.0.3 or 4.0.6? Anyway, try to umount and lvremove the two 4.1.1
> layers, then redo the steps from the last email. If it doesn't work, please
> resend /tmp/imgbased.log.
>
>
>
> Thanks,
>
> Yuval
>
>
>
> On May 5, 2017 6:17 PM, "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com> wrote:
>
> Here is the output of 'lvs -a':
>
>
>
>   LV                                   VG               Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   [lvol0_pmspare]                      onn_labvmhostt05 ewi-------  88.00m
>   ovirt-node-ng-4.0.3-0.20160830.0     onn_labvmhostt05 Vwi---tz-k 335.92g pool00 root
>   ovirt-node-ng-4.0.3-0.20160830.0+1   onn_labvmhostt05 Vwi-aotz-- 335.92g pool00 ovirt-node-ng-4.0.3-0.20160830.0   1.26
>   ovirt-node-ng-4.1.1.1-0.20170406.0   onn_labvmhostt05 Vri---tz-k 335.92g pool00
>   ovirt-node-ng-4.1.1.1-0.20170406.0+1 onn_labvmhostt05 Vwi---tz-- 335.92g pool00 ovirt-node-ng-4.1.1.1-0.20170406.0
>   pool00                               onn_labvmhostt05 twi-aotz-- 350.96g                                           2.53   0.17
>   [pool00_tdata]                       onn_labvmhostt05 Twi-ao---- 350.96g
>   [pool00_tmeta]                       onn_labvmhostt05 ewi-ao----   1.00g
>   root                                 onn_labvmhostt05 Vwi---tz-- 335.92g pool00
>   swap                                 onn_labvmhostt05 -wi-ao----   4.00g
>   var                                  onn_labvmhostt05 Vwi-aotz--  15.00g pool00                                    8.47
>
>
>
> Thanks,
>
> Daniel
>
>
>
> From: Yuval Turgeman <yuvalt at redhat.com>
> Date: Thursday, May 4, 2017 at 4:18 PM
> To: "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> Cc: "sbonazzo at redhat.com" <sbonazzo at redhat.com>, Yedidyah Bar David <didi at redhat.com>, "users at ovirt.org" <users at ovirt.org>
> Subject: Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?
>
>
>
> What does `lvs -a` show?
>
>
>
> On May 4, 2017 21:50, "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> wrote:
>
> Hi Yuval,
>
>
>
> All three of those packages (ovirt-node-ng-image-update,
> ovirt-node-ng-image, ovirt-release41) were already installed. So I ran a
> ‘yum remove’ on all of them, removed everything from /etc/yum.repos.d,
> installed the release RPM, then installed the other two packages. Here’s
> the installation:
>
>
>
> ==========================================================================================
>  Package                      Arch     Version                Repository        Size
> ==========================================================================================
> Installing:
>  ovirt-node-ng-image-update   noarch   4.1.1.1-1.el7.centos   ovirt-4.1        3.8 k
> Installing for dependencies:
>  ovirt-node-ng-image          noarch   4.1.1.1-1.el7.centos   ovirt-4.1        526 M
>
> Transaction Summary
> ==========================================================================================
> Install  1 Package (+1 Dependent package)
>
>
>
> Total download size: 526 M
> Installed size: 526 M
> Is this ok [y/d/N]: y
> Downloading packages:
> (1/2): ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch.rpm   | 3.8 kB  00:00:00
> (2/2): ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch.rpm          | 526 MB  00:01:55
> ------------------------------------------------------------------------------------------
> Total                                                     4.6 MB/s | 526 MB  00:01:55
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>   Installing : ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch          1/2
>   Installing : ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch   2/2
> mount: special device /dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 does not exist
> rm: cannot remove ‘/tmp/tmp.uEAD6kCtlR/usr/share/imgbased/*image-update*.rpm’: No such file or directory
> umount: /tmp/tmp.uEAD6kCtlR: not mounted
>   Verifying  : ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch   1/2
>   Verifying  : ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch          2/2
>
> Installed:
>   ovirt-node-ng-image-update.noarch 0:4.1.1.1-1.el7.centos
>
> Dependency Installed:
>   ovirt-node-ng-image.noarch 0:4.1.1.1-1.el7.centos
>
>
>
> ...