Yeah. I dug that bug report up as well.
Our oVirt setup comprises six oVirt hosts forming two datacenters:

Datacenter: Luise (has a Ceph storage domain)
Cluster Luise01: node01 to node03, a hyperconverged oVirt GlusterFS setup. My original goal is to move the engine onto this cluster.
Cluster Luise02: node06 is currently the only node. This cluster will gain all nodes from the Default datacenter as soon as that is decommissioned.

Datacenter: Default
Cluster Default: node04 and node05, using NFS storage (see below). The engine used to live here.
All nodes are Supermicro machines with a 4-core Intel Xeon E5506 @ 2.13 GHz and 48GB RAM.
They use two bonded 1Gbit links for connectivity. I was sceptical about the performance, but
we are running around 10-20 TYPO3 CMSes plus less IO-hungry workloads in around 3 to
6 VMs per node. They all use Ceph as their storage domain.
All nodes I tried to deploy to carry no VM load.
Storage:
========
The GlusterFS storage comes from the hyperconverged setup provided by oVirt Node NG. It is
quite small because it is just our gateway to boot the Cinder VM that will guide our Ceph
nodes to their RBD destiny. Every node has a 250GB SSD, which gives us 90GB for the
:/engine gluster volume. There is also a 100GB volume for the supporting VMs that
bootstrap Cinder. There is little IO going to these volumes.
The Ceph storage is a 10Gbit-connected cluster of 3 nodes with SSD OSDs. The oVirt hosts
connect to it via a Cisco switch.
The NFS server exports a RAID-10, btrfs-formatted filesystem backed by bcache.
My migration history so far:
Tried to migrate to the Luise datacenter. I made a typo, entering Luise1 instead of the
correct cluster name Luise01. Ironically, that migration succeeded.
Tried to migrate back to Default. The first attempt erred out with "Disk Image is not
available in the cluster". The second succeeded after following your advice to try it
a second time. I removed the accidentally created Luise1 cluster.
Tried to migrate to Luise again. This time I had forgotten to completely shut down the
hosted engine on the Default datacenter. I shut it down and tried again.
All subsequent retries to deploy to the hyperconverged Luise01 cluster failed. I used the
local engine to reinstall node05, which had been hosting the engine I forgot to shut down.
I was not able to access the web interface for some reason, so I assumed it was damaged
in some way.
After 4 deployment attempts to our hyperconverged cluster that all erred with
"Disk Image is not available", I'm now trying to go back to the Default
Datacenter. Twice it erred out at "Wait for OVF_STORE disk content".
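In case it helps with debugging: between attempts I can run the usual checks on the hosts. A rough sketch of what I look at (the volume name "engine" is an assumption based on the :/engine volume described above):

```shell
# Sketch of diagnostics between failed hosted-engine deploy attempts.
# Assumes a standard oVirt Node NG install; "engine" is the gluster
# volume name backing the hosted-engine storage domain in my setup.

# Hosted-engine HA state as seen from a host
hosted-engine --vm-status

# Health of the gluster volume backing the engine storage domain
gluster volume status engine
gluster volume heal engine info

# Clean up leftover hosted-engine configuration before retrying
ovirt-hosted-engine-cleanup
```

None of these reported anything obviously wrong before the failed attempts.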