Replace one engine host

Hello,

Still playing a bit with oVirt. My cluster is oVirt + Gluster, with a replica 3 GlusterFS volume for the engine (on vm01, vm02 and vm03) and 4 hosts for the data GlusterFS volume and for running virtual machines (vm01, vm02, vm03 and vm04). By mistake I deployed hosted-engine on vm04, but I don't want the engine on a machine without local data, and it's not working correctly anyway (score 0 and marked as in maintenance). I've already tried to reinstall vm04 from the GUI with the "Undeploy" option checked and to reinstall vm03 with the "Deploy" option checked, but nothing seems to work. Here is the hosted-engine --vm-status output from vm04. What can I do to deploy the hosted engine correctly on vm03?

--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : vm01.mydomain.tld
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : fee2f7d8
Host timestamp                     : 48911
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=48911 (Sat Sep 17 07:57:02 2016)
    host-id=1
    score=3400
    maintenance=False
    state=EngineDown
    stopped=False

--== Host 3 status ==--

Status up-to-date                  : True
Hostname                           : vm02.mydomain.tld
Host ID                            : 3
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 9138d24e
Host timestamp                     : 48907
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=48907 (Sat Sep 17 07:57:00 2016)
    host-id=3
    score=3400
    maintenance=False
    state=EngineUp
    stopped=False

--== Host 4 status ==--

Status up-to-date                  : False
Hostname                           : vm04.mydomain.tld
Host ID                            : 4
Engine status                      : unknown stale-data
Score                              : 0
stopped                            : False
Local maintenance                  : True
crc32                              : 221d262e
Host timestamp                     : 17958
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=17958 (Fri Sep 16 23:19:32 2016)
    host-id=4
    score=0
    maintenance=True
    state=LocalMaintenance
    stopped=False

--
Davide Ferrari
Senior Systems Engineer

vm03 is missing from this status output. Can you reinstall it from the UI? If you have already added the host vm03, just go to the Management menu, click Reinstall and choose the 'Deploy' option. After it's done and the host is activated, you'll need to manually switch the host out of maintenance (we have a bug on it - already solved in master).

On 18 September 2016 at 10:45, Davide Ferrari <davide@billymob.com> wrote:
Hello
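A rough sketch of that manual step from the command line, assuming it is run on the newly reinstalled host and that the hosted-engine HA services are already running there:

```bash
# On the freshly reinstalled host: drop out of hosted-engine local maintenance
hosted-engine --set-maintenance --mode=none

# Check that the host now reports a normal score instead of 0
hosted-engine --vm-status
```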

Hello Roy, I forgot to mention that I have already reinstalled vm03 (several times) from the GUI with the Deploy option checked, but there's no way to make it appear there. For the record, vm04 was the first host deployed (from the CLI) after the initial vm01 installation, then vm02 and then vm03.

On Sep 18, 2016 11:06, "Roy Golan" <rgolan@redhat.com> wrote:

Can you share the deploy logs of vm03, from /var/log/ovirt-engine/host-deploy/? And is there an /etc/ovirt-hosted-engine/hosted-engine.conf on vm03 at all?

On 18 September 2016 at 17:28, Davide Ferrari <davide@billymob.com> wrote:
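For reference, a minimal sketch of how to gather what Roy is asking for, assuming default paths (the host-deploy logs live on the engine VM, the hosted-engine config on the host itself):

```bash
# On the engine VM: list the most recent host-deploy logs (one per reinstall attempt)
ls -lt /var/log/ovirt-engine/host-deploy/ | head

# On vm03: check whether hosted-engine was configured on the host at all
cat /etc/ovirt-hosted-engine/hosted-engine.conf
```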

Oh, thanks! You put me on the right track, and now it works. These are the steps I took:

- reinstall + deploy vm03
- manually set maintenance to none on vm03
- reinstall + undeploy vm04 (I had left it deployed again last time, while trying to remove it)

Then I found the problem: the hosted-engine config on vm03 (/etc/ovirt-hosted-engine/hosted-engine.conf) had host_id set to 4, which conflicted with vm04's host_id. So I changed it to 2 (the only free one), restarted ovirt-ha-agent (and the broker, just to be sure) and it worked! Now in --vm-status I see vm01, vm02 and vm03 with a score of 3400 and I can migrate the HE VM to any of them.

Thanks a lot for your kind help, Roy!

2016-09-18 22:00 GMT+02:00 Roy Golan <rgolan@redhat.com>:
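A sketch of the workaround described above, assuming the standard ovirt-ha-agent and ovirt-ha-broker service names and the host_id=N key format in hosted-engine.conf; note the caveat in the next reply that editing the ID locally is not the supported way, since the engine owns the ID assignment:

```bash
# On vm03: see which host_id the local hosted-engine config ended up with
grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf

# Point it at a free ID (2 in this case) and restart the HA services so the
# agent picks it up
sed -i 's/^host_id=.*/host_id=2/' /etc/ovirt-hosted-engine/hosted-engine.conf
systemctl restart ovirt-ha-agent ovirt-ha-broker

# All three engine hosts should now show up with a score of 3400
hosted-engine --vm-status
```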

I suspect the host-id is now out of sync in the engine. Changing the host-id locally isn't supported; the engine is the only source of id assignment.

On 19 September 2016 at 00:21, Davide Ferrari <davide@billymob.com> wrote:

Mmmh, how can I check this? And how can I change the ID from the engine? The IDs are not even shown in the GUI, for example.

2016-09-19 10:35 GMT+02:00 Roy Golan <rgolan@redhat.com>:

Those IDs are hidden in the UI, but you can reveal them from the db:

```sql
select vds_spm_id, vds_name
from vds_spm_id_map map, vds_static static
where map.vds_id = static.vds_id;
```

On 19 September 2016 at 11:52, Davide Ferrari <davide@billymob.com> wrote:
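One way to run that query, assuming the default engine database name (engine) and local access as the postgres user on the engine VM:

```bash
# On the engine VM: map engine-side host IDs (vds_spm_id) to host names
sudo -u postgres psql engine -c \
  "select vds_spm_id, vds_name
     from vds_spm_id_map map, vds_static static
    where map.vds_id = static.vds_id;"
```

The vds_spm_id reported here is what the host_id in each host's /etc/ovirt-hosted-engine/hosted-engine.conf is expected to match.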
participants (2)
- Davide Ferrari
- Roy Golan