I always get "Service Unavailable" in the browser, and each time I reload
the page I get this in the error_log:
[proxy_ajp:error] [pid 1868] [client 10.8.1.76:63512] AH00896: failed to make connection to backend: 127.0.0.1
[Tue Jul 23 14:04:10.074023 2019] [proxy:error] [pid 1416] (111)Connection refused: AH00957: AJP: attempt to connect to 127.0.0.1:8702 (127.0.0.1) failed
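(For what it's worth, assuming the engine's AJP connector is what should be
answering on 8702 — the port shown in the log above — one way to check
whether anything is listening there, and whether the engine service itself
is up:

ss -tlnp | grep 8702
systemctl status ovirt-engine

If nothing is listening, httpd has nothing to proxy to and returns "Service
Unavailable".)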
Thanks & Regards
Carl
On Tue, Jul 23, 2019 at 12:59 PM carl langlois <crl.langlois(a)gmail.com>
wrote:
Hi
At one point we did have an issue with DNS resolution (mainly the reverse
lookup), but that was fixed. Yes, we can ping from each network to the
other and vice-versa.
Not sure how to multi-home the engine; I will do some research on that.
I did find something in the error_log on the engine: in
/etc/httpd/logs/error_log I always get these messages.
[Tue Jul 23 11:21:52.430555 2019] [proxy:error] [pid 3189] AH00959: ap_proxy_connect_backend disabling worker for (127.0.0.1) for 5s
[Tue Jul 23 11:21:52.430562 2019] [proxy_ajp:error] [pid 3189] [client 10.16.248.65:35154] AH00896: failed to make connection to backend: 127.0.0.1
10.16.248.65 is the new address of the host that was moved to the new
network.
Thanks & Regards
Carl
On Tue, Jul 23, 2019 at 11:52 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
> According to another post on the mailing list, the Engine hosts (those
> with ovirt-ha-agent/ovirt-ha-broker running) are checking
> http://{fqdn}/ovirt-engine/services/health
>
> As the IP has changed, I think you need to check that URL before and after
> the migration.
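>
> For example, something like this from each host should show whether the
> engine answers on its current address (substitute your engine's FQDN; a
> healthy engine should return HTTP 200):
>
> curl -sS -o /dev/null -w '%{http_code}\n' http://{fqdn}/ovirt-engine/services/health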
>
> Best Regards,
> Strahil Nikolov
>
> On Jul 23, 2019 16:41, Derek Atkins <derek(a)ihtfp.com> wrote:
> >
> > Hi,
> >
> > If I understand it correctly, the HE hosts try to ping (or SSH, or
> > otherwise reach) the Engine host. If it reaches it, then it passes the
> > liveness check. If it cannot reach it, then it fails. So to me this error
> > means that there is some configuration, somewhere, that is trying to
> > reach the engine on the old address (which fails when the engine has the
> > new address).
> >
> > I do not know where in the *host* configuration this data lives, so I
> > cannot suggest where you need to change it.
> >
> > Can 10.16.248.x reach 10.8.236.x and vice-versa?
> >
> > Maybe multi-home the engine on both networks for now until you figure it
> > out?
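> >
> > (As a sketch only — the interface name and any free address on the old
> > network are placeholders you would need to adapt — a second address could
> > be added temporarily on the engine VM with something like:
> >
> > ip addr add 10.8.236.200/24 dev eth0
> >
> > so the engine stays reachable from both networks while you debug.)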
> >
> > -derek
> >
> > On Tue, July 23, 2019 9:13 am, carl langlois wrote:
> > > Hi,
> > >
> > > We have managed to stabilize the DNS updates in our network. The
> > > current situation is:
> > > I have 3 hosts that can run the engine (hosted-engine).
> > > They were all in the 10.8.236.x network. Now I have moved one of them
> > > to the 10.16.248.x network.
> > >
> > > If I boot the engine on one of the hosts that is in the 10.8.236.x
> > > network, the engine comes up with status "good". I can access the
> > > engine UI and see all my hosts, even the one in the 10.16.248.x network.
> > >
> > > But if I boot the engine on the hosted-engine host that was switched
> > > to the 10.16.248.x network, the engine boots and I can ssh to it, but
> > > the status is always "fail for liveliness check".
> > > The main difference is that when I boot on the host in the 10.16.248.x
> > > network, the engine gets an address in the 248.x network.
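> > >
> > > (If it helps, each hosted-engine host reports its view of the engine,
> > > including the liveliness result, with:
> > >
> > > hosted-engine --vm-status)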
> > >
> > > On the engine I have this in
> > > /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
> > >
> > > 2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
> > >
> > > The engine.log seems okay.
> > >
> > > So I need to understand what this "liveliness check" does (or tries to
> > > do) so I can investigate why the engine status is not becoming good.
> > >
> > > The initial deployment was done on the 10.8.236.x network. Maybe it
> > > has something to do with that.
> > >
> > > Thanks & Regards
> > >
> > > Carl
> > >
> > > On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso <
> > > mdbarroso(a)redhat.com> wrote:
> > >
> > >> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso
> > >> <mdbarroso(a)redhat.com> wrote:
> > >> >
> > >> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois <crl.langlois(a)gmail.com> wrote:
> > >> > >
> > >> > > Hi Miguel,
> > >> > >
> > >> > > I have managed to change the config for the ovn-controller
> > >> > > with these commands:
> > >> > > ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
> > >> > > ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
> > >> > > and restarting the services.
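> > >> > >
> > >> > > (To confirm the values stuck, they can be read back with, e.g.:
> > >> > > ovs-vsctl get Open_vSwitch . external-ids
> > >> > > and, assuming systemd manages the services on these hosts, they
> > >> > > can be restarted with: systemctl restart ovn-controller.)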
> > >> >
> > >> > Yes, that's what the script is supposed to do, check [0].
> > >> >
> > >> > Not sure why running vdsm-tool didn't work for you.
> > >> >
> > >> > >
> > >> > > But even with this I still have the "fail for liveliness check"
> > >> > > when starting the oVirt engine. But one thing I noticed with our
> > >> > > new network is that the reverse DNS does not work (IP -> hostname).
> > >> > > The forward lookup is working fine. I am checking with our IT why
> > >> > > it is not working.
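> > >> > >
> > >> > > (A quick way to test the reverse lookup from a host, for example:
> > >> > > dig -x 10.16.248.65 +short
> > >> > > which should print the host's FQDN if the PTR record exists.)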
> > >> >
> > >> > Do you guys use OVN? If not, you could disable the provider,
> > >> > install the hosted-engine VM, then, if needed, re-add / re-activate
> > >> > it.
> > >>
> > >> I'm assuming it fails for the same reason you've stated initially -
> > >> i.e. ovn-controller is involved; if it is not, disregard this msg :)
> > >> >
> > >> > [0] -
> > >> > https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/se...
> > >> >
> > >> > >
> > >> > > Regards.
> > >> > > Carl
> > >>