[ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

Yuval Turgeman yuvalt at redhat.com
Tue May 9 14:55:21 UTC 2017


My pleasure ! :)

The command should be as follows (note that the value of --args must be
quoted as a single string):

grubby --copy-default \
  --add-kernel /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64 \
  --initrd /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img \
  --args "rhgb crashkernel=auto root=/dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1" \
  --title ovirt-node-ng-4.1.1.1-0.20170406.0 \
  --bad-image-okay
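
After adding the entry, a quick sanity check along these lines should confirm it
before rebooting (the new 4.1.1.1 entry should be listed, and the Bootloader
checks should stop failing):

grubby --info=ALL
nodectl check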



On Tue, May 9, 2017 at 5:49 PM, Beckman, Daniel <
Daniel.Beckman at ingramcontent.com> wrote:

> Hi Yuval,
>
>
>
> Thanks for your patience. ☺
>
>
>
> I tried that – completely removing /boot/ovirt-node-ng-4.1.1.1* and
> performing the same previous steps. Before doing this I cleared out the
> imgbased.log file so it has only the latest entries.
>
>
>
> I’m assuming this is the command you referenced:
>
> [DEBUG] Calling binary: (['grubby', '--copy-default', '--add-kernel',
> '/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64',
> '--initrd', '/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img',
> '--args', 'rhgb crashkernel=auto root=/dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1
> rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap
> quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1', '--title',
> 'ovirt-node-ng-4.1.1.1-0.20170406.0', '--bad-image-okay'],) {}
>
>
>
> I could use some help in getting the correct syntax. I’ve attached the
> latest imgbased.log file.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman <yuvalt at redhat.com>
> *Date: *Tuesday, May 9, 2017 at 3:43 AM
>
> *To: *"Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> *Cc: *"sbonazzo at redhat.com" <sbonazzo at redhat.com>, Yedidyah Bar David <
> didi at redhat.com>, "users at ovirt.org" <users at ovirt.org>
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> Hi, it seems like some stuff was left on /boot from previous attempts,
> making the boot setup stage fail, which means that the node is actually
> installed on the onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 LV
> but the kernel wasn't installed, making it impossible to boot to that LV.
>
> The way I see it, you could try to clean up the
> /boot/ovirt-node-ng-4.1.1.1* files and retry everything just like you did
> (umount, lvremove, reinstall rpms, etc), but the thing is that in one of
> your runs, there's a 'grubby' line that failed and stderr is not shown in
> the log.  Try to follow the steps above and retry, and if grubby fails
> again (you can see it in the last few lines of the imgbased.log), you could
> try to manually run that grubby line from the log and send its output and
> imgbased.log so we could continue from there.
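>
> For example, something along these lines should show whether the grubby call
> failed again and what it printed:
>
> tail -n 50 /tmp/imgbased.log
> grep -B2 -A10 grubby /tmp/imgbased.log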
>
>
>
> Thanks,
>
> Yuval.
>
>
>
>
>
> On Mon, May 8, 2017 at 11:14 PM, Beckman, Daniel <
> Daniel.Beckman at ingramcontent.com> wrote:
>
> Hello,
>
>
>
> I was originally on 4.0.3 (from the ISO). The two 4.1.1 layers were not
> mounted; I went ahead and used lvremove to remove them. I removed all three
> packages,  cleared out /etc/yum.repos.d, re-added ovirt-release41 from the
> URL, and then re-installed ovirt-node-ng-image-update, which installed
> ovirt-node-ng-image as a dependency. The install did not report any errors.
> It put the 4.1.1 layers back in. I’ve uploaded the latest
> /tmp/imgbased.log.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman <yuvalt at redhat.com>
> *Date: *Friday, May 5, 2017 at 12:32 PM
>
>
> *To: *"Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> *Cc: *"sbonazzo at redhat.com" <sbonazzo at redhat.com>, Yedidyah Bar David <
> didi at redhat.com>, "users at ovirt.org" <users at ovirt.org>
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> Were you on 4.0.3 or 4.0.6?  Anyway, try to umount and lvremove the two
> 4.1.1 layers, then redo the steps from the last email.  If it doesn't work,
> please resend /tmp/imgbased.log.
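>
> Roughly, the umount/lvremove part would look something like this (only if the
> layers are not in use, and with the VG/LV names matching your host):
>
> umount /dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1   # if mounted
> lvremove onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1
> lvremove onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0
> yum reinstall ovirt-node-ng-image-update ovirt-node-ng-image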
>
>
>
> Thanks,
>
> Yuval
>
>
>
> On May 5, 2017 6:17 PM, "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com> wrote:
>
> Here is output of ‘lvs -a’:
>
>   LV                                   VG               Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   [lvol0_pmspare]                      onn_labvmhostt05 ewi-------  88.00m
>   ovirt-node-ng-4.0.3-0.20160830.0     onn_labvmhostt05 Vwi---tz-k 335.92g pool00 root
>   ovirt-node-ng-4.0.3-0.20160830.0+1   onn_labvmhostt05 Vwi-aotz-- 335.92g pool00 ovirt-node-ng-4.0.3-0.20160830.0   1.26
>   ovirt-node-ng-4.1.1.1-0.20170406.0   onn_labvmhostt05 Vri---tz-k 335.92g pool00
>   ovirt-node-ng-4.1.1.1-0.20170406.0+1 onn_labvmhostt05 Vwi---tz-- 335.92g pool00 ovirt-node-ng-4.1.1.1-0.20170406.0
>   pool00                               onn_labvmhostt05 twi-aotz-- 350.96g                                           2.53   0.17
>   [pool00_tdata]                       onn_labvmhostt05 Twi-ao---- 350.96g
>   [pool00_tmeta]                       onn_labvmhostt05 ewi-ao----   1.00g
>   root                                 onn_labvmhostt05 Vwi---tz-- 335.92g pool00
>   swap                                 onn_labvmhostt05 -wi-ao----   4.00g
>   var                                  onn_labvmhostt05 Vwi-aotz--  15.00g pool00                                    8.47
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman <yuvalt at redhat.com>
> *Date: *Thursday, May 4, 2017 at 4:18 PM
> *To: *"Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> *Cc: *"sbonazzo at redhat.com" <sbonazzo at redhat.com>, Yedidyah Bar David <
> didi at redhat.com>, "users at ovirt.org" <users at ovirt.org>
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> what does `lvs -a` show ?
>
>
>
> On May 4, 2017 21:50, "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> wrote:
>
> Hi Yuval,
>
>
>
> All three of those packages (ovirt-node-ng-image-update,
> ovirt-node-ng-image, ovirt-release41) were already installed. So I ran a
> ‘yum remove’ on all of them, removed everything from /etc/yum.repos.d,
> installed the release RPM, then installed the other two packages. Here’s
> the installation:
>
>
>
> ================================================================================
>  Package                      Arch      Version               Repository    Size
> ================================================================================
> Installing:
>  ovirt-node-ng-image-update   noarch    4.1.1.1-1.el7.centos  ovirt-4.1    3.8 k
> Installing for dependencies:
>  ovirt-node-ng-image          noarch    4.1.1.1-1.el7.centos  ovirt-4.1    526 M
>
> Transaction Summary
> ================================================================================
> Install  1 Package (+1 Dependent package)
>
> Total download size: 526 M
> Installed size: 526 M
> Is this ok [y/d/N]: y
> Downloading packages:
> (1/2): ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch.rpm | 3.8 kB  00:00:00
> (2/2): ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch.rpm        | 526 MB  00:01:55
> --------------------------------------------------------------------------------
> Total                                                   4.6 MB/s | 526 MB  00:01:55
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>   Installing : ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch           1/2
>   Installing : ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch    2/2
> mount: special device /dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 does not exist
> rm: cannot remove ‘/tmp/tmp.uEAD6kCtlR/usr/share/imgbased/*image-update*.rpm’: No such file or directory
> umount: /tmp/tmp.uEAD6kCtlR: not mounted
>   Verifying  : ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch    1/2
>   Verifying  : ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch           2/2
>
> Installed:
>   ovirt-node-ng-image-update.noarch 0:4.1.1.1-1.el7.centos
>
> Dependency Installed:
>   ovirt-node-ng-image.noarch 0:4.1.1.1-1.el7.centos
>
> Complete!
>
>
>
> Also, note output of ‘nodectl check’:
>
> [root at labvmhostt05 yum.repos.d]# nodectl check
>
> Status: FAILED
>
> Bootloader ... FAILED - It looks like there are no valid bootloader
> entries. Please ensure this is fixed before rebooting.
>
>   Layer boot entries ... FAILED - No bootloader entries which point to
> imgbased layers
>
>   Valid boot entries ... FAILED - No valid boot entries for imgbased
> layers or non-imgbased layers
>
> Mount points ... OK
>
>   Separate /var ... OK
>
>   Discard is used ... OK
>
> Basic storage ... OK
>
>   Initialized VG ... OK
>
>   Initialized Thin Pool ... OK
>
>   Initialized LVs ... OK
>
> Thin storage ... OK
>
>   Checking available space in thinpool ... OK
>
>   Checking thinpool auto-extend ... OK
>
> vdsmd ... OK
>
>
>
> I’ll attach the /tmp/imgbased.log file.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman <yuvalt at redhat.com>
> *Date: *Wednesday, May 3, 2017 at 1:23 PM
> *To: *"Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> *Cc: *"users at ovirt.org" <users at ovirt.org>, Yedidyah Bar David <
> didi at redhat.com>, "sbonazzo at redhat.com" <sbonazzo at redhat.com>
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> Hi, you can try the following:
>
>
>
> 1.  Make sure you have a /etc/iscsi/initiatorname.iscsi file.  If you
> don't, create an empty one (to avoid a migration bug)
>
> 2.  Install the ovirt-release41 rpm (http://resources.ovirt.org/
> pub/yum-repo/ovirt-release41.rpm)
>
> 3.  yum update ovirt-node-ng-image-update
>
> 4.  Make sure only 2 rpms are about to be installed
> (ovirt-node-ng-image-update and ovirt-node-ng-image) ~530M
>
>
>
> Save /tmp/imgbased.log in case something fails so we could take a look :)
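>
> Roughly, the whole sequence would look something like this:
>
> touch /etc/iscsi/initiatorname.iscsi   # only needed if the file does not exist
> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
> yum update ovirt-node-ng-image-update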
>
>
>
> Thanks,
>
> Yuval
>
>
>
>
>
>
>
>
>
> On Wed, May 3, 2017 at 6:23 PM, Beckman, Daniel <
> Daniel.Beckman at ingramcontent.com> wrote:
>
> I don’t recall doing anything special with the repositories, apart from
> (recently) adding the oVirt 4.1 repository. These hosts were originally
> deployed by downloading the oVirt Node 4.0.3 ISO image.
>
>
>
> How should the repos be setup? Is there an RPM I can download that will
> install the appropriate repos?
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman <yuvalt at redhat.com>
> *Date: *Tuesday, May 2, 2017 at 2:56 PM
> *To: *"Beckman, Daniel" <Daniel.Beckman at ingramcontent.com>
> *Cc: *"users at ovirt.org" <users at ovirt.org>, Yedidyah Bar David <
> didi at redhat.com>, "sbonazzo at redhat.com" <sbonazzo at redhat.com>
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> Looks like your repos are not set up correctly.  oVirt Node is an image,
> and the update is a complete image as well, containing a set of packages
> that were tested and known to work well together.  This means that when you
> yum update your system, a single "ovirt-node-ng-image-update" rpm should be
> installed instead of a list of packages like you mentioned.  That's
> probably what messed things up.  How did you configure your repos?
>
>
>
> On May 1, 2017 6:15 PM, "Beckman, Daniel" <Daniel.Beckman at ingramcontent.com> wrote:
>
> Hello,
>
> I’ve attached the log file for one of the hosts from
> /var/log/ovirt-engine/host-deploy.
>
> As to the manual update: yes, I ran it after removing the ovirt40 repo and
> adding ovirt41 repo. Here are the packages it updated:
>     Updated      cockpit-ovirt-dashboard-0.10.6-1.4.2.el7.centos.noarch -> 0.10.7-0.0.16.el7.centos.noarch  @ovirt-4.1
>     Dep-Install  collectd-5.7.0-2.el7.x86_64  @centos-opstools-release
>     Dep-Install  collectd-disk-5.7.0-2.el7.x86_64  @centos-opstools-release
>     Dep-Install  collectd-netlink-5.7.0-2.el7.x86_64  @centos-opstools-release
>     Dep-Install  collectd-virt-5.7.0-2.el7.x86_64  @centos-opstools-release
>     Dep-Install  collectd-write_http-5.7.0-2.el7.x86_64  @centos-opstools-release
>     Dep-Install  fluentd-0.12.26-2.el7.noarch  @centos-opstools-release
>     Dep-Install  gdeploy-2.0.1-13.noarch  @rnachimu-gdeploy
>     Updated      glusterfs-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      glusterfs-api-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      glusterfs-cli-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      glusterfs-client-xlators-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      glusterfs-fuse-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      glusterfs-geo-replication-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      glusterfs-libs-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      glusterfs-rdma-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      glusterfs-server-3.7.20-1.el7.x86_64 -> 3.8.11-1.el7.x86_64  @ovirt-4.1-centos-gluster38
>     Updated      imgbased-0.8.11-0.201612061451git1b9e081.el7.centos.noarch -> 0.9.23-1.el7.centos.noarch  @ovirt-4.1
>     Updated      ioprocess-0.16.1-1.el7.x86_64 -> 0.17.0-1.201611101241.gitb7e353c.el7.centos.x86_64  @ovirt-4.1
>     Dep-Install  libtomcrypt-1.17-23.el7.x86_64  @ovirt-4.1-epel
>     Dep-Install  libtommath-0.42.0-4.el7.x86_64  @ovirt-4.1-epel
>     Dep-Install  libtool-ltdl-2.4.2-22.el7_3.x86_64  @updates
>     Updated      mom-0.5.8-1.el7.centos.noarch -> 0.5.9-1.el7.centos.noarch  @ovirt-4.1
>     Dep-Install  net-snmp-1:5.7.2-24.el7_3.2.x86_64  @updates
>     Dep-Install  net-snmp-agent-libs-1:5.7.2-24.el7_3.2.x86_64  @updates
>     Updated      openvswitch-2.5.0-2.el7.x86_64 -> 2.7.0-1.el7.centos.x86_64  @ovirt-4.1
>     Updated      otopi-1.5.2-1.el7.centos.noarch -> 1.6.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      ovirt-host-deploy-1.5.3-1.el7.centos.noarch -> 1.6.3-1.el7.centos.noarch  @ovirt-4.1
>     Updated      ovirt-hosted-engine-ha-2.0.6-1.el7.centos.noarch -> 2.1.0.5-1.el7.centos.noarch  @ovirt-4.1
>     Updated      ovirt-hosted-engine-setup-2.0.4.1-1.el7.centos.noarch -> 2.1.0.5-1.el7.centos.noarch  @ovirt-4.1
>     Updated      ovirt-imageio-common-0.4.0-1.el7.noarch -> 1.0.0-1.el7.noarch  @ovirt-centos-ovirt41
>     Updated      ovirt-imageio-daemon-0.4.0-1.el7.noarch -> 1.0.0-1.el7.noarch  @ovirt-centos-ovirt41
>     Updated      ovirt-node-ng-nodectl-4.0.6-0.20170111.0.el7.noarch -> 4.1.0-0.20170406.0.el7.noarch  @ovirt-4.1
>     Updated      ovirt-release-host-node-4.0.6.1-1.el7.noarch -> 4.1.1.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      ovirt-release41-4.1.1-1.el7.centos.noarch -> 4.1.1.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      ovirt-setup-lib-1.0.2-1.el7.centos.noarch -> 1.1.0-1.el7.centos.noarch  @ovirt-4.1
>     Dep-Install  python-babel-0.9.6-8.el7.noarch  @base
>     Dep-Install  python-dateutil-1.5-7.el7.noarch  @base
>     Dep-Install  python-httplib2-0.9.1-2.el7.noarch  @ovirt-centos-ovirt41
>     Updated      python-ioprocess-0.16.1-1.el7.noarch -> 0.17.0-1.201611101242.gitb7e353c.el7.centos.noarch  @ovirt-4.1
>     Dep-Install  python-jinja2-2.7.2-2.el7.noarch  @base
>     Dep-Install  python-keyczar-0.71c-2.el7.noarch  @ovirt-centos-ovirt41
>     Dep-Install  python-markupsafe-0.11-10.el7.x86_64  @base
>     Dep-Install  python-setuptools-0.9.8-4.el7.noarch  @base
>     Dep-Install  python2-crypto-2.6.1-13.el7.x86_64  @ovirt-4.1-epel
>     Dep-Install  python2-ecdsa-0.13-4.el7.noarch  @ovirt-4.1-epel
>     Dep-Install  python2-paramiko-1.16.1-2.el7.noarch  @ovirt-4.1-epel
>     Dep-Install  python2-passlib-1.6.5-1.el7.noarch  @ovirt-centos-ovirt41
>     Dep-Install  python2-pyasn1-0.1.9-7.el7.noarch  @base
>     Dep-Install  rng-tools-5-8.el7.x86_64  @base
>     Dep-Install  ruby-2.0.0.648-29.el7.x86_64  @base
>     Dep-Install  ruby-irb-2.0.0.648-29.el7.noarch  @base
>     Dep-Install  ruby-libs-2.0.0.648-29.el7.x86_64  @base
>     Dep-Install  rubygem-bigdecimal-1.2.0-29.el7.x86_64  @base
>     Dep-Install  rubygem-cool.io-1.2.4-2.el7.x86_64  @centos-opstools-release
>     Dep-Install  rubygem-fluent-plugin-rewrite-tag-filter-1.5.5-5.el7.noarch  @centos-opstools-release
>     Dep-Install  rubygem-fluent-plugin-secure-forward-0.4.3-1.el7.noarch  @centos-opstools-release
>     Dep-Install  rubygem-http_parser.rb-0.6.0-1.el7.x86_64  @centos-opstools-release
>     Dep-Install  rubygem-io-console-0.4.2-29.el7.x86_64  @base
>     Dep-Install  rubygem-json-1.7.7-29.el7.x86_64  @base
>     Dep-Install  rubygem-msgpack-0.5.11-1.el7.x86_64  @centos-opstools-release
>     Dep-Install  rubygem-proxifier-1.0.3-1.el7.noarch  @centos-opstools-release
>     Dep-Install  rubygem-psych-2.0.0-29.el7.x86_64  @base
>     Dep-Install  rubygem-rdoc-4.0.0-29.el7.noarch  @base
>     Dep-Install  rubygem-resolve-hostname-0.0.4-1.el7.noarch  @centos-opstools-release
>     Dep-Install  rubygem-sigdump-0.2.2-1.el7.noarch  @centos-opstools-release
>     Dep-Install  rubygem-string-scrub-0.0.5-1.el7.x86_64  @centos-opstools-release
>     Dep-Install  rubygem-thread_safe-0.3.4-1.el7.noarch  @centos-opstools-release
>     Dep-Install  rubygem-tzinfo-1.2.2-2.el7.noarch  @centos-opstools-release
>     Dep-Install  rubygem-tzinfo-data-1.2014.10-2.el7.noarch  @centos-opstools-release
>     Dep-Install  rubygem-yajl-ruby-1.2.1-1.el7.x86_64  @centos-opstools-release
>     Dep-Install  rubygems-2.0.14.1-29.el7.noarch  @base
>     Dep-Install  screen-4.1.0-0.23.20120314git3c2946.el7_2.x86_64  @base
>     Dep-Install  sshpass-1.05-5.el7.x86_64  @ovirt-centos-ovirt41
>     Dep-Install  tcpdump-14:4.5.1-3.el7.x86_64  @base
>     Updated      vdsm-4.18.21-1.el7.centos.x86_64 -> 4.19.10.1-1.el7.centos.x86_64  @ovirt-4.1
>     Updated      vdsm-api-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-cli-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Dep-Install  vdsm-client-4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-gluster-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-hook-ethtool-options-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-hook-fcoe-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-hook-macspoof-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-hook-nestedvt-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-hook-openstacknet-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Dep-Install  vdsm-hook-vhostmd-4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-hook-vmfex-dev-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-jsonrpc-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-python-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-xmlrpc-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Updated      vdsm-yajsonrpc-4.18.21-1.el7.centos.noarch -> 4.19.10.1-1.el7.centos.noarch  @ovirt-4.1
>     Dep-Install  vhostmd-0.5-11.el7.x86_64  @ovirt-centos-ovirt41
>
> Thanks,
> Daniel
>
> On 4/30/17, 12:44 AM, "Yedidyah Bar David" <didi at redhat.com> wrote:
>
>     On Thu, Apr 27, 2017 at 6:48 PM, Beckman, Daniel
>     <Daniel.Beckman at ingramcontent.com> wrote:
>     > Didi,
>     >
>     > Thanks for the tip on the utilities – I’ll add that for future
> upgrades. Since you pointed that out,  I’m reminded that in a previous
> upgrade (following one of the developer’s suggestions) I had added this:
>     > /etc/ovirt-engine/engine.conf.d/99-custom-truststore.conf
>     > So I guess that’s why my https certificate was preserved.
>
>     Good.
>
>     >
>     > As to the documentation, I did submit a pull request (#923) and
> ‘JohnMarksRH’ added that along with some additional edits. I’ll move any
> continuing discussion on that to another thread. And yes, the RHV
> documentation is excellent and I’ve often turned to it. It’s too bad some
> of the effort ends up being duplicated. Anyway….
>     >
>     > Here’s what I did with one of the oVirt nodes:
>     > yum -y remove ovirt-release40
>     > yum -y install http://resources.ovirt.org/
> pub/yum-repo/ovirt-release41.rpm
>     > cd /etc/yum.repos.d
>     > # ls
>     > CentOS-Base.repo       CentOS-fasttrack.repo         CentOS-Sources.repo       cockpit-preview-epel-7.repo
>     > CentOS-CR.repo         CentOS-fasttrack.repo.rpmnew  CentOS-Vault.repo         ovirt-4.0-dependencies.repo
>     > CentOS-Debuginfo.repo  CentOS-Media.repo             CentOS-Vault.repo.rpmnew  ovirt-4.0.repo
>     > rm -f ovirt-4.0*
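>     >
>     > To double-check the repos before retrying the upgrade from the GUI,
>     > something like 'yum repolist enabled | grep -i ovirt' should now list
>     > only 4.1 repositories.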
>     >
>     > After doing that, when I check again in the admin GUI for an
> upgrade, it shows one available (4.1.1.1). From the GUI I tell it to
> upgrade, and it runs along with no errors, seems to finish, and then
> reboots the host.
>     >
>     > When the host comes back up, it’s still running 4.0.6. When I check
> again for an available upgrade, it doesn’t see it available. I’m attaching
> the installation log that is referenced in Events in the GUI.
>     >
>     > If I go straight into the node and run ‘yum update’ and reboot, then
> it gets the latest 4.1.x image and the engine detects it as such.
>
>     You mean you do that after the above (removing 4.0 repos, adding 4.1)?
>
>     What packages did it update?
>
>     Please check also time-wise nearby log files for this host in
>     /var/log/ovirt-engine/host-deploy and share them.
>     'ovirt-host-mgmt*' is the result of checking for updates from the
> admin web ui.
>
>     > But of course that’s not the ideal method. I used the manual method
> for the remaining hosts.
>     >
>     > I don’t know if this is related, but since the upgrade I’ve also
> noticed an unfamiliar error when I log in directly to the engine host.
> (It’s a standalone CentOS 7 VM running on a separate KVM host.) Here it is:
>     >
>     > nodectl must be run as root!
>     > nodectl must be run as root!
>     > This comes up when *any* user logs into the box. When I switch to
> root I get this:
>     > /bin/python3: Error while finding spec for 'nodectl.__main__'
> (<class 'ImportError'>: No module named 'nodectl')
>     > /bin/python3: Error while finding spec for 'nodectl.__main__'
> (<class 'ImportError'>: No module named 'nodectl')
>     > So it looks like it’s been invoked from here:
>     > ls -llh /etc/profile.d/nodectl*
>     > -rwxr-xr-x. 1 root root 13 Apr  6 06:46
> /etc/profile.d/nodectl-motd.sh
>     > -rwxr-xr-x. 1 root root 24 Apr  6 06:46 /etc/profile.d/nodectl-run-
> banner.sh
>     > According to ‘yum whatprovides’ this appears to have been installed
> by package “ovirt-node-ng-nodectl-4.1.0-0.20170406.0.el7.noarch”.
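>     > (Running 'rpm -qf /etc/profile.d/nodectl-motd.sh' and 'rpm -q
>     > ovirt-node-ng-nodectl' directly on the engine VM should confirm the
>     > same thing locally.)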
>     >
>     > Anyone else getting this? I can try fixing the python error by
> adding the module, but I thought I’d report this first. Any suggestions as
> to next steps?
>
>     Adding Yuval for the node-specific issues.
>
>     Best,
>
>     >
>     > Thanks
>     > Daniel
>     >
>     >
>     >
>     >
>     >
>     >
>     >
>     >
>     >
>     >
>     >
>     > On 4/25/17, 2:01 AM, "Yedidyah Bar David" <didi at redhat.com> wrote:
>     >
>     >     On Tue, Apr 25, 2017 at 1:19 AM, Beckman, Daniel
>     >     <Daniel.Beckman at ingramcontent.com> wrote:
>     >     > So I successfully upgraded my engine from 4.06 to 4.1.1 with
> no major
>     >     > issues.
>     >     >
>     >     >
>     >     >
>     >     > A nice thing I noticed was that my custom CA certificate for
> https on the
>     >     > admin and user portals wasn’t clobbered by setup.
>     >     >
>     >     >
>     >     >
>     >     > I did have to restore my custom settings for ISO uploader, log
> collector,
>     >     > and websocket proxy:
>     >     >
>     >     > cp
>     >     > /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf.<
> latest_timestamp>
>     >     > /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf
>     >     >
>     >     > cp
>     >     > /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-
> setup.conf.<latest_timestamp>
>     >     > /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
>     >     >
>     >     > cp
>     >     > /etc/ovirt-engine/logcollector.conf.d/10-engine-
> setup.conf.<latest_timestamp>
>     >     > /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf
>     >
>     >     The utilities read these files sorted by name, last wins. So you
>     >     can add '99-my.conf' to each and have it override whatever
> engine-setup does.
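>     >
>     >     For example (same idea for the websocket proxy and log collector
>     >     directories):
>     >
>     >     cp /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf \
>     >        /etc/ovirt-engine/isouploader.conf.d/99-my.conf
>     >
>     >     would keep your custom settings winning over whatever the next
>     >     engine-setup writes into 10-engine-setup.conf.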
>     >
>     >     >
>     >     >
>     >     >
>     >     > Now I’m moving on to updating the oVirt node hosts, which are
> currently at
>     >     > oVirt Node 4.0.6.1. (I’m assuming I should do that before
> attempting to
>     >     > upgrade the cluster and data center compatibility level to
> 4.1.)
>     >     >
>     >     >
>     >     >
>     >     > When I right-click on a host and go to Installation / Check
> for Upgrade, the
>     >     > results are ‘no updates found.’ When I log into that host
> directly, I notice
>     >     > it’s still got the oVirt 4.0 repo, not 4.1. Is there an extra
> step I’m
>     >     > missing? The documentation I’ve found
>     >     > (http://www.ovirt.org/documentation/upgrade-guide/
> chap-Updates_between_Minor_Releases/)
>     >     > doesn’t mention this.
>     >
>     >     You are right. It's mentioned for the engine in the release
> notes [1]
>     >     but not for the hosts. Please file a github issue or send a pull
> request :-)
>     >
>     >     [1] https://www.ovirt.org/release/4.1.0/
>     >
>     >     >
>     >     >
>     >     >
>     >     >
>     >     >
>     >     > **
>     >     >
>     >     > If I can offer some unsolicited feedback: I feel like this
> list is populated
>     >     > with a lot of questions that could be averted with a little
> care and feeding
>     >     > of the documentation. It’s unfortunate because that makes for
> a rocky
>     >     > introduction to oVirt, and it makes it look like a neglected
> project, which
>     >     > I know is not the case.
>     >
>     >     Patches are welcome :-)
>     >
>     >     >
>     >     >
>     >     >
>     >     > On a related note, I know this has been discussed before but…
>     >     >
>     >     > The centralized control in Github for the documentation does
> not really
>     >     > encourage user contributions. What’s wrong with a wiki? If
> we’re really
>     >     > concerned about bad or malicious edits being posted, keep the
> official in
>     >     > git and add a separate wiki that is clearly marked as
> user-contributed.
>     >
>     >     That was indeed discussed in the past, I am not aware of any
> conclusions.
>     >     Perhaps start a separate thread about this? Adding Duck.
>     >
>     >     Please also note that you can have a look at RHV documentation
> [2].
>     >     Almost all of it applies to oVirt as well (and oVirt's to RHV).
>     >
>     >     [2] https://access.redhat.com/documentation/en/red-hat-
> virtualization/
>     >
>     >     Best,
>     >
>     >     >
>     >     > **
>     >     >
>     >     >
>     >     >
>     >     >
>     >     >
>     >     > Thanks,
>     >     >
>     >     > Daniel
>     >     >
>     >     >
>     >     > _______________________________________________
>     >     > Users mailing list
>     >     > Users at ovirt.org
>     >     > http://lists.ovirt.org/mailman/listinfo/users
>     >     >
>     >
>     >
>     >
>     >     --
>     >     Didi
>     >
>     >
>     >
>     >
>     >
>     >
>
>
>
>     --
>     Didi
>
>
>
>
>