I'm seeing migration failures for the hosted-engine VM when moving it from a 4.2.8 node to a 4.3.0 node, which is blocking me from completing the node upgrades.

In one case I tried to force an update on the last node and now have a cluster where the hosted-engine VM fails to start properly. Sometimes the status reports the VM as running, but clicking through for details shows it with no disk attached. I was so close to putting that one into production with working LDAP user logins and a custom SSL cert, and then the upgrade broke it.

In both cases the hosted-engine VM storage is on Gluster. One cluster is hyperconverged oVirt nodes; the other is two nodes backed by an external Gluster 5 server cluster.

I will note that I had previously (while at 4.2.8) set the engines to use direct Gluster access by enabling libgfapi disk access at cluster level 4.2, instead of the default FUSE client, for better performance.
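For reference, I believe I enabled that on the engine machine with something like the following (the exact cluster-level value is from memory, so treat it as approximate):

engine-config -s LibgfApiSupported=true --cver=4.2
systemctl restart ovirt-engine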

Other details: the engine was updated first. Nodes were updated with:

yum install https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.0-1.el7.noarch.rpm
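In case it matters, the image update itself seemed to take on each node; I was checking roughly like this (going from memory on the exact output):

nodectl info
imgbase layout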

I forgot to mention another symptom.
On the hyperconverged cluster, the "gluster volume heal engine" command errors out on 4.3, but the same command works on the other gluster volumes.
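To be precise, what I'm running on one of the hyperconverged nodes is roughly:

gluster volume heal engine
gluster volume heal engine info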

Rolling back to 4.2.8 allows the heal command to succeed on the engine volume (nothing to heal, actually), but I'm still unable to start a working engine VM with hosted-engine --vm-start.
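For completeness, this is roughly the sequence I've been trying from the node (log path from memory, so it may differ slightly on your install):

hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status
hosted-engine --vm-start
tail -f /var/log/ovirt-hosted-engine-ha/agent.log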