Hello!
On Fri, Jun 30, 2017 at 5:46 PM, cmc <iucounu@gmail.com> wrote:
> I ran 'hosted-engine --vm-start' after trying to ping the engine and
> running 'hosted-engine --vm-status' (which said it wasn't running) and
> it reported that it was 'destroying storage' and starting the engine,
> though it did not start it. I could not see any evidence from
> 'hosted-engine --vm-status' or logs that it started.
That sounds really strange. I would suspect some storage problem or
something similar. As I told you earlier, the output of --vm-status may
shed light on that issue.
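If it happens again, a quick way to collect that information from every HE host at once is something like the sketch below. The host names are placeholders for your own hosts; it only uses the standard hosted-engine tool and the HA services that ship with it:

    #!/bin/bash
    # Collect hosted-engine status and HA service state from each HE host.
    # Replace host1/host2/host3 with your actual host names.
    for h in host1 host2 host3; do
        echo "===== $h ====="
        ssh root@"$h" 'hosted-engine --vm-status; \
                       systemctl status -n0 ovirt-ha-agent ovirt-ha-broker'
    done

The interesting bits are usually the 'Engine status' and 'score' fields reported for each host, plus whether ovirt-ha-agent and ovirt-ha-broker are actually running.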
> By this point I was in a panic to get VMs running. So I had to fire up
> the old bare metal engine. This has been a very disappointing
> experience. I still have no idea why the IDs in 'host_id' differed
> from the spm ID, and
Did you try to migrate from the bare metal engine to the hosted engine?
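On the host_id / SPM id question above: one way to compare them is to look at host_id in /etc/ovirt-hosted-engine/hosted-engine.conf on each host and at the SPM ids the engine has assigned. A rough sketch follows; the vds_spm_id column in the vds_static table is my recollection of where the engine stores it, so double-check against your engine DB before relying on it:

    # On each HE host: the id the hosted-engine agent uses for sanlock.
    grep '^host_id' /etc/ovirt-hosted-engine/hosted-engine.conf

    # On the engine machine: the SPM ids the engine assigned to the hosts
    # (table/column names from memory - verify first).
    sudo -u postgres psql engine \
        -c "select vds_name, vds_spm_id from vds_static;"

If the two sets of ids disagree, that would explain sanlock/SPM conflicts during deployment.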
> 1. Why did the VMs (apart from the Hosted Engine VM) not start on
> power up of the hosts? Is it because the hosts were powered down, that
> they stay in a down state on power up of the host?
The engine is responsible for starting those VMs. As you had no engine,
there was nothing to start them. The Hosted Engine tools are only
responsible for the engine VM, not for other VMs.
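Once an engine (bare metal or hosted) is back, the VMs can be started from the Admin Portal, or scripted against the REST API along these lines. The URL, credentials and VM id below are placeholders, and the API path assumes an oVirt 4.x engine:

    # Start a VM through the engine's REST API (oVirt 4.x style URL assumed).
    # Replace engine.example.com, the password and <vm-id> with your own values.
    curl -k -u 'admin@internal:password' \
         -H 'Content-Type: application/xml' \
         -d '<action/>' \
         https://engine.example.com/ovirt-engine/api/vms/<vm-id>/start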
> 2. Now that I have connected the bare metal engine back to the
> cluster, is there a way back, or do I have to start from scratch
> again? I imagine there is no way of getting the Hosted Engine running
> again. If not, what do I need to 'clean' all the hosts of the remnants
> of the failed deployment? I can of course reinitialise the LUN that
> the Hosted Engine was on - anything else?
I know there is a 'bare metal to hosted engine' migration procedure,
but I doubt I know it well enough. If I remember correctly, you need to
take a backup of your bare metal engine database, run a migration
preparation script that handles the spm_id duplication, deploy your
first HE host, restore the database from the backup, and then deploy
more HE hosts. I'm not sure those steps are correct; it would be better
to ask Martin about the migration process.
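For reference, the backup/restore part of that procedure usually revolves around the engine-backup tool. A rough sketch of the flow is below; the file names are placeholders, and the spm_id preparation step is deliberately left out because I don't remember its exact name:

    # 1. On the bare metal engine: take a full backup (config + database).
    engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

    # 2. Inside the new engine VM on the first HE host: restore the backup
    #    instead of configuring a fresh engine, then run setup.
    engine-backup --mode=restore --file=engine-backup.tar.gz \
                  --log=restore.log --provision-db --restore-permissions
    engine-setup

    # 3. Deploy additional HE hosts afterwards.
    hosted-engine --deploy

Again, please treat this only as an outline and confirm the exact procedure with Martin or the documentation before trying it.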