On Sun, Sep 24, 2017 at 10:59 PM, Ben Bradley <listsbb@virtx.net> wrote:
On 23/09/17 00:27, Ben Bradley wrote:
On 20/09/17 15:41, Simone Tiraboschi wrote:

On Wed, Sep 20, 2017 at 12:30 AM, Ben Bradley <listsbb@virtx.net> wrote:

    Hi All

    I've been running a single-host ovirt setup for several months,
    having previously used a basic QEMU/KVM for a few years in lab
    environments.

    I currently have the ovirt engine running at the bare-metal level,
    with the box also acting as the single host. I am also running this
    with local storage.

    I now have an extra host I can use and would like to migrate to a
    hosted engine. The following documentation appears to be perfect and
    pretty clear about the steps involved:
    https://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine/
    and
    https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment

    However I'd like to try and get a bit more of an understanding of
    the process that happens behind the scenes during the cut-over from
    one engine to a new/hosted engine.

    As an experiment I attempted the following (rough commands sketched
    just below):
    - created a new VM within my current environment (bare-metal engine)
    - created an engine-backup
    - stopped the bare-metal engine
    - restored the backup into the new VM
    - ran engine-setup within the new VM
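
    That is, with placeholder file names, and restore options that may
    vary with the oVirt version:

        # on the bare-metal engine
        engine-backup --mode=backup --file=engine.backup --log=backup.log
        systemctl stop ovirt-engine

        # on the new VM, after copying the backup file over
        engine-backup --mode=restore --file=engine.backup --log=restore.log \
            --provision-db --restore-permissions
        engine-setup
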
    The new engine started up OK and I was able to connect and log in to
    the web UI. However my host was "unresponsive" and I was unable to
    manage it in any way from the VM. I shut the VM down and started the
    bare-metal ovirt-engine again on the host and everything worked as
    before. I didn't try very hard to make it work however.

    The magic missing from the basic process I tried is the
    synchronising and importing of the existing host, which is what the
    hosted-engine utility does.


No magic up to now: the hosts are simply in the DB you restored.
If the VM has network connectivity and the same host name as the old machine, you shouldn't see any issue.
If you changed the host name when moving to the VM, you simply need to run engine-rename after the restore.
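
For reference, the rename tool ships with the engine itself; something
along these lines, where the new FQDN is just a placeholder:

    /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename --newname=new-engine.example.com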

Thank you for the reply.
I tried this again this evening - again it failed.

The host is present within the new engine but I am unable to manage it.
Host is marked as down but Activate is greyed out. I can get into the "Edit" screen for the host, and on right-click I get the following options:
- Maintenance
- Confirm Host has been Rebooted
- SSH Management: Restart and Stop both available
The VMs are still running and accessible but are not listed as running in the web interface. This time, however, I did lose access to the ovirtmgmt bridge, so the web interface, the running VMs and the host SSH session were all unreachable until I rebooted.
Luckily I left ovirt-engine service enabled to restart on boot so everything came back up.
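
Next time it happens I will check the bridge state directly on the host
console, with something like (assuming bridge-utils is installed):

    ip addr show ovirtmgmt
    brctl show ovirtmgmt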

The engine URL is a CNAME so I re-pointed it to the hostname of the VM just before running engine-setup after the restore.

This time though I have kept the new engine VM so I can power it up again and try to debug.

I am going to try a few times over the weekend, and I have set up serial console access so I can do a bit more debugging.

What ovirt logs could I check on the host to see if the new engine VM is able to connect and sync to the host properly?

Thanks, Ben

So I tried again to migrate my bare-metal engine to a hosted VM, but no luck. The host remained in an unresponsive state in the engine web UI and I was unable to manage the host in any way, although all VMs continued to run.

I did capture some logs though.
From the new engine VM... engine.log
https://p.bsd-unix.net/view/raw/666839d1

From the host...
mom.log  https://p.bsd-unix.net/view/raw/ac9379f0
supervdsm.log  https://p.bsd-unix.net/view/raw/f9018dec
vdsm.log  https://p.bsd-unix.net/view/raw/bcdcdb13
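
On the host those logs live under /var/log/vdsm/, and the engine log is
/var/log/ovirt-engine/engine.log on the VM, so during the next attempt I
will just tail them while the engine tries to connect:

    # on the host
    tail -f /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log /var/log/vdsm/mom.log
    # on the engine VM
    tail -f /var/log/ovirt-engine/engine.log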


Sorry, but none of those links is working right now.
 
The engine VM is complaining about being unable to connect to the host, though I can see from tcpdump that communication is fine. I believe this is backed up by the pings seen in mom.log.
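
The capture was something along these lines on the host, with 192.0.2.10
standing in for the engine VM's address (vdsm listens on TCP 54321):

    tcpdump -i ovirtmgmt host 192.0.2.10 and tcp port 54321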

Though I can see the following in vdsm.log... [vds] recovery: waiting for storage pool to go up (clientIF:569)
So I wonder if this is what is blocking the engine from bringing the host up.

The host is running local storage, which I believe is a pretty recent addition to ovirt. So I could see how trying to run an engine VM on a host's local storage might cause weird issues.

Maybe I am missing a piece here.
Are you trying to migrate an old 3.6 all-in-one setup (where ovirt-engine, the manager, and vdsm, the agent on the host, run on the same host) to hosted-engine in a single step?
In that case it will certainly not work: when you power down the old engine machine you also power down your old host, so the new engine running on the VM cannot contact the host, since the old host is down.

The idea of hosted-engine is that the engine runs on a VM hosted on the very host it is managing.

In that case probably the most effective way is, within your old engine, to:
- create a datacenter with shared storage (please avoid using NFS in loopback)
- add your new hosts there
- migrate all of your VMs to the new datacenter
- only then migrate to hosted-engine (rough commands sketched below)
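
Roughly, on one of the new hosts once the shared storage is in place
(package name as in the current 4.1 releases); during the deploy you then
restore your engine backup inside the new engine VM instead of running a
clean engine-setup:

    yum install ovirt-hosted-engine-setup
    hosted-engine --deploy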
 

I realise that there won't be HA with this setup until I create my second host and configure HA on the VM.

If I am unable to migrate from bare-metal -> engine VM then it doesn't give me any confidence that I would be able to restore a setup from a backup onto a bare-metal host and recover host state.

So is the only supported method of migrating from bare-metal engine to hosted engine by...
1) Migrating to the appliance
2) Using a new host to migrate to VM
3) Using shared storage between hosts

Thanks, Ben



The only detail is that hosted-engine-setup will try to add the host it is running on to the engine, so you have to manually remove that host just after the restore in order to avoid a failure there.
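
Removing it can be done from the web UI (Maintenance, then Remove) or via
the REST API; a rough sketch, with placeholder credentials, FQDN and host
id:

    curl -k -u 'admin@internal:password' -X DELETE \
        https://engine.example.com/ovirt-engine/api/hosts/<host-id>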


    Can anyone describe that process in a bit more detail?
    Is it possible to perform any part of that process manually?

    I'm planning to expand my lab and dev environments so for me it's
    important to discover the following...
    - That I'm able to reverse the process back to bare-metal engine if
    I ever need/want to
    - That I can setup a new VM or host with nothing more than an
    engine-backup but still be able to regain control of existing hosts
    and VMs within the cluster

    My main concern after my basic attempt at a "restore/migration"
    above is that I might not be able to re-import/sync an existing host
    after I have restored engine from a backup.

    I have been able to export VMs to storage, remove them from ovirt,
    re-install engine and restore, then import VMs from the export
    domain. That all worked fine. But it involved shutting down all VMs
    and removing their definitions from the environment.

    Are there any pre-requisites to being able to re-import an existing
    running host (and VMs), such as placing ALL hosts into maintenance
    mode and shutting down any VMs first?

    Any insight into host recovery/import/sync processes and steps will
    be greatly appreciated.

    Best regards
    Ben
    _______________________________________________
    Users mailing list
    Users@ovirt.org
    http://lists.ovirt.org/mailman/listinfo/users


