On Thu, Feb 7, 2019 at 8:57 AM Edward Berger <edwberger@gmail.com> wrote:
> I'm seeing migration failures for the hosted-engine VM from a 4.2.8 node to a 4.3.0
> node, which is blocking completion of the node upgrades.
You may be running into
https://bugzilla.redhat.com/show_bug.cgi?id=1641798. Can you check the
version of libvirt used on the nodes?
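For example, on each node something like:

  # list the installed libvirt packages
  rpm -qa | grep -i libvirt
  # or ask the running daemon directly
  virsh version

should tell us whether the 4.2.8 and 4.3.0 nodes ended up on different libvirt versions.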
> In one case I tried to force an update on the last node, and now I have a cluster where
> the hosted-engine VM fails to start properly. Sometimes the status shows the VM as
> running, but clicking through for details shows it has no disk. I was so close to
> putting that one into production, with working LDAP user logins and a custom SSL cert,
> and then the upgrade broke it.
> In both cases the hosted-engine VM storage is on Gluster: one cluster is hyperconverged
> oVirt nodes, the other is two nodes backed by an external Gluster 5 server cluster.
> I will note that I had previously (while at 4.2.8) set the engines to use direct gluster
> access (libgfapi, enabled at cluster level 4.2) instead of the default FUSE client, for
> improved performance.
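(For reference, that setting is normally toggled on the engine with engine-config, e.g.

  # enable libgfapi disk access for the 4.2 cluster level,
  # then restart ovirt-engine for it to take effect
  engine-config -s LibgfApiSupported=true --cver=4.2
  systemctl restart ovirt-engine

so after the upgrade it is worth confirming it is still set the way you expect.)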
> Other details: the engine was updated first, and the nodes were updated with
> yum install https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-im...
> I forgot to mention another symptom: on the hyperconverged cluster, the "gluster volume
> heal engine" command errors out on 4.3, but the same command works on the other gluster
> volumes.
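Could you share the exact error it prints? The output of

  # pending heals and per-brick status for the engine volume
  gluster volume heal engine info
  # overall brick/process state
  gluster volume status engine

plus the glustershd log from the nodes (/var/log/glusterfs/glustershd.log) would help
narrow this down.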
+Ravishankar Narayanankutty to check the heal failures
>
> rolling back to 4.2.8 allows the heal command to succeed (nothing to heal, actually) on
> the engine volume, but I'm still unable to start a working engine VM with
> hosted-engine --vm-start
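When the engine VM fails to start, the HA agent's view is usually the first thing to
check, e.g.

  # state of the hosted-engine HA agents on each host
  hosted-engine --vm-status
  # agent and broker logs
  less /var/log/ovirt-hosted-engine-ha/agent.log
  less /var/log/ovirt-hosted-engine-ha/broker.log

Those logs should say why the agent considers the VM unhealthy (storage, sanlock, etc.).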