In a Red Hat Solution it is recommended to restart ovirt-ha-agent and
ovirt-ha-broker.
I usually set global maintenance and wait 20-30 seconds. Then I stop
ovirt-ha-agent.service and ovirt-ha-broker.service on all nodes. Once they are
stopped everywhere, I start the two services again on all nodes and wait 4-5 minutes.
Finally, verify the status on each host before removing global maintenance.
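
A minimal sketch of that sequence, assuming the standard oVirt service names
and the hosted-engine CLI (the stop/start steps must be run on every HA node):

  # On any HA host: enable global maintenance so the agents do not try
  # to restart or migrate the engine VM while the services are cycled.
  hosted-engine --set-maintenance --mode=global

  # On EVERY HA node: stop both HA services.
  systemctl stop ovirt-ha-agent.service ovirt-ha-broker.service

  # Once stopped everywhere, start them again on every node.
  systemctl start ovirt-ha-broker.service ovirt-ha-agent.service

  # After 4-5 minutes, confirm each host reports fresh, healthy data.
  hosted-engine --vm-status

  # Only then leave global maintenance.
  hosted-engine --set-maintenance --mode=none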
Best Regards,
Strahil Nikolov

On May 2, 2019 12:30, Andreas Elvers
<andreas.elvers+ovirtforum(a)solutions.work> wrote:
I have 5 nodes (node01 to node05). Originally all of these nodes were part of our default
datacenter/cluster with an NFS storage domain for VM disks, the engine and ISO images. All
five nodes were engine HA nodes.
Later node01, node02 and node03 were re-installed to have engine HA removed. Then those
nodes were removed from the default cluster. Eventually node01, node02 and node03 were
completely re-installed to host our new Ceph/Gluster based datacenter. The engine is still
running in the old default datacenter. Now I wish to move it over to our Ceph/Gluster
datacenter.
When I look at the current output of "hosted-engine --vm-status" I see:
--== Host node01.infra.solutions.work (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : node01.infra.solutions.work
Host ID : 1
Engine status : unknown stale-data
Score : 0
stopped : True
Local maintenance : False
crc32 : e437bff4
local_conf_timestamp : 155627
Host timestamp : 155877
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=155877 (Fri Aug 3 13:09:19 2018)
host-id=1
score=0
vm_conf_refresh_time=155627 (Fri Aug 3 13:05:08 2018)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
--== Host node02.infra.solutions.work (id: 2) status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : node02.infra.solutions.work
Host ID : 2
Engine status : unknown stale-data
Score : 0
stopped : True
Local maintenance : False
crc32 : 11185b04
local_conf_timestamp : 154757
Host timestamp : 154856
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=154856 (Fri Aug 3 13:22:19 2018)
host-id=2
score=0
vm_conf_refresh_time=154757 (Fri Aug 3 13:20:40 2018)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
--== Host node03.infra.solutions.work (id: 3) status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : node03.infra.solutions.work
Host ID : 3
Engine status : unknown stale-data
Score : 0
stopped : False
Local maintenance : True
crc32 : 9595bed9
local_conf_timestamp : 14363
Host timestamp : 14362
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=14362 (Thu Aug 2 18:03:25 2018)
host-id=3
score=0
vm_conf_refresh_time=14363 (Thu Aug 2 18:03:25 2018)
conf_on_shared_storage=True
maintenance=True
state=LocalMaintenance
stopped=False
--== Host node04.infra.solutions.work (id: 4) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : node04.infra.solutions.work
Host ID : 4
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 245854b1
local_conf_timestamp : 317498
Host timestamp : 317498
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=317498 (Thu May 2 09:44:47 2019)
host-id=4
score=3400
vm_conf_refresh_time=317498 (Thu May 2 09:44:47 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
--== Host node05.infra.solutions.work (id: 5) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : node05.infra.solutions.work
Host ID : 5
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 0711afa0
local_conf_timestamp : 318044
Host timestamp : 318044
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=318044 (Thu May 2 09:44:45 2019)
host-id=5
score=3400
vm_conf_refresh_time=318044 (Thu May 2 09:44:45 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
The old node01, node02 and node03 entries are still present.
The new incarnations of node01, node02 and node03 will be the destination for the
deployment of the new home of our engine, to which I wish to restore the backup. But
I'm not sure if (and how) the old data should be removed first.
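
For reference, a minimal sketch of how this is typically handled with the standard
hosted-engine and engine-backup tooling; the host ID and file names below are
illustrative assumptions, not taken from this thread:

  # On a healthy HA host: remove the stale metadata of an old host entry
  # by its host ID (IDs 1-3 correspond to the stale entries above).
  hosted-engine --clean-metadata --host-id=1 --force-clean

  # On the current engine VM: take a fresh backup of the engine.
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

  # On the new destination host: deploy the hosted engine and restore
  # the backup in one step (supported since oVirt 4.2).
  hosted-engine --deploy --restore-from-file=engine-backup.tar.gz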