Hi David,
Thanks for the reply. I have an oVirt env which is in very bad shape, and as a new
employee I have to fix it :). I have 5 hosts in the whole env. HA for the hosted
engine is broken, so the engine can only run on host1. I can't add another host
because it shows that the host was deployed with version 3.5 (if I'm not wrong,
this is fixed in 4.0). I also can't update/upgrade ovirt-engine because there is
only 500MB of free space left (after cleaning up) and no LVM, so I'm afraid I'll
run out of space during the update. Because of that I decided to add a completely
new server and migrate the hosted engine to a properly set up HE (with LVM) and
properly configured HA on the new host.
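For reference, my plan is to start with a full engine backup on the current engine
VM, roughly like this (the file paths are only placeholders, not the real ones I
will use):

  # on the engine VM, as root
  engine-backup --mode=backup \
      --file=/root/engine-backup.tar.gz \
      --log=/root/engine-backup.log

and then copy the resulting file off the engine VM, since it is short on disk space.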
Below is a short summary:
Hosted engine:
CentOS Linux release 7.2.1511 (Core)
ovirt-engine-3.6.7.5-1.el7.centos.noarch
Host0: running the Hosted Engine, which I need to update/upgrade
CentOS Linux release 7.2.1511 (Core)
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-release36-007-1.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-image-uploader-3.6.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
ovirt-engine-appliance-3.6-20160301.1.el7.centos.noarch
vdsm-jsonrpc-4.17.23.2-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.23.2-0.el7.centos.noarch
vdsm-4.17.23.2-0.el7.centos.noarch
vdsm-python-4.17.23.2-0.el7.centos.noarch
vdsm-infra-4.17.23.2-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.23.2-0.el7.centos.noarch
vdsm-xmlrpc-4.17.23.2-0.el7.centos.noarch
vdsm-cli-4.17.23.2-0.el7.centos.noarch
Output from 'hosted-engine --vm-status' on host1:
--== Host 1 status ==--
Status up-to-date : True
Hostname : dev-ovirtnode0.example.com
Host ID : 1
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : d7fdf8b6
Host timestamp : 1243846
--== Host 2 status ==--   <- this is stale garbage, because HA is not
installed and configured on host2
Status up-to-date : False
Hostname : dev-ovirtnode1.example.com
Host ID : 2
Engine status : unknown stale-data
Score : 0
stopped : True
Local maintenance : False
crc32 : fb5f379e
Host timestamp : 563
The remaining hosts 1-4 are updated and configured in the same way (done by me).
I had to replace the network cards, and now there is LACP over 4x10G cards
(before there was only a 1G card).
Because they run CentOS 7.4, I decided to install vdsm version 4.17.43 (from the
repo) to fix bugs. I am aware that 3.6 is only supported with version 7.2, but I
want to update the whole env to 3.6.x, then to 4.0, then to 4.1 to be up to date
(my rough idea of the engine update step is sketched after the package list below).
vdsm-jsonrpc-4.17.43-1.el7.centos.noarch
vdsm-xmlrpc-4.17.43-1.el7.centos.noarch
vdsm-4.17.43-1.el7.centos.noarch
vdsm-infra-4.17.43-1.el7.centos.noarch
vdsm-yajsonrpc-4.17.43-1.el7.centos.noarch
vdsm-cli-4.17.43-1.el7.centos.noarch
vdsm-python-4.17.43-1.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.43-1.el7.centos.noarch
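As mentioned above, my rough understanding of the engine minor update step is the
following (please correct me if this is not the right sequence for 3.6):

  # on the host currently running the engine VM
  hosted-engine --set-maintenance --mode=global

  # on the engine VM
  yum update "ovirt-engine-setup*"
  engine-setup

  # back on the host, once the engine is up again
  hosted-engine --set-maintenance --mode=none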
On hosts 1-4 I have around 400 VMs used by developers, and I need to keep the
downtime as short as possible (the best option would be no downtime at all, but
I'm not sure if that's possible). I decided to restore the HE on a completely new
host because I believe that in my case it's the easiest way to update and then
upgrade the whole env :)
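If I understand the procedure correctly, the new-host part would look roughly like
this (I may well be missing steps, so please correct me):

  # on the host currently running the HE, before touching anything
  hosted-engine --set-maintenance --mode=global

  # on the fresh host5, after adding the oVirt 3.6 repos
  yum install ovirt-hosted-engine-setup
  hosted-engine --deploy

and then restore the engine backup into the newly deployed engine VM, along the
lines you describe below.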
Many thanks for all the advice.
Regards
Krzysztof
2017-11-14 8:50 GMT+01:00 Yedidyah Bar David <didi(a)redhat.com>:
On Mon, Nov 13, 2017 at 11:58 PM, Krzysztof Wajda <vajdovski(a)gmail.com> wrote:
> Hello,
>
> I have to restore Hosted Engine on another host (completely new hardware).
> Based on this
> https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/
> is not clear for me if vm's will be rebooted during synchronization hosts
> with engine ?
They should not be rebooted automatically, but you might need to do
this yourself, see below.
>
> I have 5 hosts + 1 completely fresh. On host1 I have HE and there is no vm's
> on other 4 (host1-4) there are around 400 vm which can't be rebooted. Host5
> for restore HE.
Please provide more details about your backup/restore flow.
What died (storage? hosts? data?), what are you going to restore,
how, etc.
Which hosts are hosted-engine hosts? Do they have running VMs?
We are working on updating the documentation, but it will take some time.
For now, you should assume that the safest way is to pass during restore,
to engine-backup, '--he-remove-storage-vm' and '--he-remove-hosts'. This
will remove from the engine all the hosted-engine hosts and storage. So
when you add the hosts back, you'll have to somehow power off the VMs
there - the engine will refuse to add them with running VMs. If you do
not want to use these options, you should plan carefully and test.
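A restore invocation would then look roughly like this (only a sketch; adapt the
file names and the provisioning options to your setup):

  engine-backup --mode=restore \
      --file=engine.backup \
      --log=engine-restore.log \
      --he-remove-storage-vm \
      --he-remove-hosts \
      --provision-db \
      --restore-permissions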
See also:
https://bugzilla.redhat.com/show_bug.cgi?id=1235200
https://bugzilla.redhat.com/show_bug.cgi?id=1240466
https://bugzilla.redhat.com/show_bug.cgi?id=1441322
Best regards,
--
Didi