On Sun, Jan 15, 2017 at 4:54 PM, Derek Atkins <derek(a)ihtfp.com> wrote:
>> - update the self hosted engine environment
>> (with commands:
>> yum update "ovirt-*-setup*"
>> engine-setup
>> )
I did "yum update" and not "yum update "ovirt-*-setup*"..
and...
>> - verify connection to engine web admin gui is still ok and at 4.0.6.
>> The engine OS at this time is still 7.2
> .... I updated the OS to 7.3 in the engine VM. I think that's the root
> of this bug: PostgreSQL getting restarted out from under dwhd.
> The fact that your engine is still at 7.2 implies you didn't also
> perform the OS update on the engine. (Not sure why you didn't).
> -derek
I wanted to do that. See below: I did it at the end, after the update
of the host.
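For reference, the engine update itself (the oVirt packages, done at
the beginning) was the usual minor-update sequence; a rough sketch from
memory, adapt to your own setup:

  # on the host, before touching the engine VM
  hosted-engine --set-maintenance --mode=global

  # inside the engine VM
  yum update "ovirt-*-setup*"
  engine-setup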
>> - shutdown engine VM
>> - put hypervisor host in local maintenance
>> - stop some services (ovirt-ha-agent, ovirt-ha-broker, vdsmd)
>> - run yum update that brings the hypervisor to 7.3 and also new vdsm
>> and related packages to the 4.0.6 level, plus qemu-kvm-ev at
>> 2.6.0-27.1.el7
Here, for the host, I used the double-update approach: OS packages and
oVirt packages together.
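Concretely, something along these lines (again a sketch; the service
names are the ones on my 4.0 host, and the engine VM is already shut
down at this point):

  hosted-engine --set-maintenance --mode=local
  systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd
  # single pass: CentOS 7.3 plus vdsm 4.0.6 and related packages
  yum update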
>> - adjust/merge some rpmnew files (both for the OS in general and
>> oVirt-related ones)
>> - stop again vdsmd (agent and broker remained down)
>> - stop sanlock (sometimes it times out, so I "kill -9" the remaining
>> process; otherwise the system cannot shut down because it is unable
>> to umount the NFS filesystems. In fact in my environment the host
>> itself provides the NFS mounts for the data storage domain and the
>> ISO one; the umount problem is only with the data one. See also the
>> note at the end of this mail)
>> - shutdown host and reboot it
>> - exit maintenance
>> - engine vm starts after a while
>> - enter global maintenance again
>> - yum update on engine vm and adjust rpmnew files
Here is the step where I update the engine VM general OS packages from
7.2 to 7.3...
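In commands, more or less (still inside global maintenance from the
step above):

  # inside the engine VM
  yum update
  # then look for config files to merge by hand
  find /etc -name "*.rpmnew"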
>> - shutdown engine vm
>> - exit global maintenance
>> - after a while engine vm starts
>> - power on the required VMs.
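And the closing part, as a sketch:

  # inside the engine VM
  shutdown -h now

  # on the host
  hosted-engine --set-maintenance --mode=none
  # after a while the HA agents start the engine VM again; check with:
  hosted-engine --vm-status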
>>
>> Gianluca
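About the sanlock timeout mentioned above: when "systemctl stop
sanlock" hangs, I do roughly this (a workaround that works for me, not
a recommendation):

  systemctl stop sanlock
  # if the stop times out, kill the leftover process by hand; otherwise
  # the data domain NFS export cannot be unmounted at shutdown
  pkill -9 -x sanlock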
> --
> Derek Atkins                 617-623-3745
> derek(a)ihtfp.com             www.ihtfp.com
> Computer and Internet Security Consultant