On Sun, Jan 15, 2017 at 3:39 PM, Derek Atkins <derek@ihtfp.com> wrote:


FWIW, I'm running on a single-host system with hosted-engine.

I made the same update on two single-host environments with a self-hosted engine without any problem.
My approach was:

- shut down all VMs except the self-hosted engine
- put the environment in global maintenance (see the command summary after this list)
- update the self-hosted engine environment
(with commands:
yum update "ovirt-*-setup*"
engine-setup
)
- verify that the connection to the engine web admin GUI is still OK and shows 4.0.6; the engine OS at this point is still 7.2
- shut down the engine VM
- put the hypervisor host in local maintenance
- stop some services (ovirt-ha-agent, ovirt-ha-broker, vdsmd)
- run yum update, which brings the hypervisor to 7.3, vdsm and related packages to the 4.0.6 level, and qemu-kvm-ev to 2.6.0-27.1.el7
- adjust/merge some .rpmnew files (both for the OS in general and for oVirt)
- stop vdsmd again (the agent and broker remained down)
- stop sanlock (sometimes this times out, so I "kill -9" the remaining process, otherwise the system cannot shut down because it is unable to unmount the NFS filesystems.
In fact, in my environment the host itself provides the NFS mounts for the data storage domain and the ISO one; the unmount problem only affects the data one. See the sanlock workaround sketched after this list)
- shut down the host and reboot it
- exit maintenance
- the engine VM starts after a while
- enter global maintenance again
- run yum update on the engine VM and adjust the .rpmnew files
- shut down the engine VM
- exit global maintenance
- after a while the engine VM starts
- power on the required VMs.
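
For reference, the maintenance/service steps above map roughly to these commands, run on the host. This is only a sketch of what I do; adapt it to your own setup:

hosted-engine --set-maintenance --mode=global   # global maintenance before touching the engine
hosted-engine --vm-status                       # check the HA state at any time
hosted-engine --set-maintenance --mode=local    # local maintenance before updating the host
systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd
hosted-engine --set-maintenance --mode=none     # exit maintenance after the reboot so the engine VM can start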
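
And the sanlock workaround I mentioned, again only as a sketch; the kill -9 is a last resort, and <pid> is whatever sanlock process is still left after the stop:

systemctl stop sanlock
# if the stop times out and a sanlock process remains:
ps -ef | grep sanlock
kill -9 <pid>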

Gianluca