<div dir="ltr"><div><div><div>Hi, and thanks for your replies.<br><br></div>The problem was the <b>quotation marks</b> in fence options, the lanplus="1" and power_wait="60" were misinterpreted, the correct form is lanplus=1, power_wait=60<br><br></div>After changing that the power management was configured and the test passed with : success, on.<br><br></div>thanks.<br><div><div><div><div><div class="gmail_extra"><br><div class="gmail_quote">2015-06-11 12:11 GMT+01:00 Martin Perina <span dir="ltr"><<a href="mailto:mperina@redhat.com" target="_blank">mperina@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>

2015-06-11 12:11 GMT+01:00 Martin Perina <mperina@redhat.com>:

Hi,

I have an HP DL160 G6 with ILO 2.0 and firmware 4.23, but I really have
no idea what your issue with power management is right now.

oVirt just executes fence_ipmilan on the selected proxy host, with the
parameters you can see in your vdsm log. And those parameters are
identical to the command-line options you used when testing it with
fence_ipmilan directly (which worked fine). Very strange :-(
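
If you want to rule this out on the proxy itself, you can replay by hand
exactly what vdsm does: feed the agent the same key=value block that
appears as "inp" in your vdsm log (a sketch; substitute the real password):

# run on hosted_engine_2, mirroring vdsm's call to /usr/sbin/fence_ipmilan
/usr/sbin/fence_ipmilan <<EOF
ipaddr=192.168.2.2
login=Administrator
passwd=<real password>
action=status
power_wait="60"
lanplus="1"
EOF
echo "rc=$?"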

Also, from the oVirt power management code's point of view, I can't see
any difference between CentOS 6 and CentOS 7. Just be warned that in
oVirt 3.6 not all features will be supported on CentOS 6, so CentOS 7 is
IMO the better option for now.

But anyway, could you please file a bug for oVirt 3.5.2 with a
description of your issue and your logs attached? We will try to
reproduce it, but so far I haven't been able to.

About your issue with the HP watchdog, I can't give any specific advice,
other than to try updating the BIOS/firmware to the latest version
and/or contacting HP support.
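
One more thing you could try as a workaround (an untested suggestion, not
a real fix): prevent the hpwdt module from loading at all, so the watchdog
cannot get in the way during reboot:

# blacklist the HP watchdog driver, rebuild the initramfs, then reboot
echo "blacklist hpwdt" > /etc/modprobe.d/hpwdt.conf
dracut -f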

Thanks a lot

Martin Perina


----- Original Message -----
> From: "wodel youchi" <<a href="mailto:wodel.youchi@gmail.com" target="_blank">wodel.youchi@gmail.com</a>><br>
</span><span>> To: "Martin Perina" <<a href="mailto:mperina@redhat.com" target="_blank">mperina@redhat.com</a>><br>
> Cc: "users" <<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>>, "Eli Mesika" <<a href="mailto:emesika@redhat.com" target="_blank">emesika@redhat.com</a>><br>
> Sent: Thursday, June 11, 2015 12:56:46 PM<br>
> Subject: Re: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] Test fence : Power management test failed for Host<br>
> hosted_engine1 Done<br>
><br>
> Hi Martin,
>
> Could you please tell me the version of the ILO firmware you are using?
>
> I upgraded mine from 1.40 to 2.10 but nothing changed; I also upgraded
> the Smart Array P420i card from 5.10 to 6.34, without luck so far.
>
> I checked all the parameters again; I can't find the error.
>
> I did all the updates for CentOS and oVirt.
>
> I have another problem: when rebooting any hypervisor, the hypervisor
> hangs. The problem is with hpwdt (the HP watchdog):
> "hpwdt unexpected close not stopping watchdog"
>
> I added "intremap=no_x2apic_optout" to the kernel parameters but it
> didn't change anything.
>
> I am thinking of testing with the latest available kernel to see if
> it's a kernel problem.
>
> And I am going to reinstall the platform with CentOS 6 to see if there
> is any difference.
>
>
>
>
> 2015-06-10 12:00 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
> >
> > Hi,
> >
> > The engine log is already in debug mode.
> >
> > Here it is:
> > 2015-06-10 11:48:23,653 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
> > Event ID: -1, Message: Host hosted_engine_2 from cluster Default was chosen
> > as a proxy to execute Status command on Host hosted_engine_1.
> > 2015-06-10 11:48:23,653 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> > (ajp--127.0.0.1-8702-12) Using Host hosted_engine_2 from cluster Default as
> > proxy to execute Status command on Host
> > 2015-06-10 11:48:23,673 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> > (ajp--127.0.0.1-8702-12) Executing <Status> Power Management command, Proxy
> > Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
> > IP:192.168.2.2, User:Administrator, Options: power_wait="60",lanplus="1",
> > Fencing policy:null
> > 2015-06-10 11:48:23,703 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > (ajp--127.0.0.1-8702-12) START, FenceVdsVDSCommand(HostName =
> > hosted_engine_2, HostId = 0192d1ac-b905-4660-b149-4bef578985dd, targetVdsId
> > = cf2d1260-7bb3-451a-9cd7-80e6a0ede52a, action = Status, ip = 192.168.2.2,
> > port = , type = ipmilan, user = Administrator, password = ******, options =
> > ' power_wait="60",lanplus="1"', policy = 'null'), log id: 2bda01bd
> > 2015-06-10 11:48:23,892 WARN
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
> > Event ID: -1, Message: Power Management test failed for Host
> > hosted_engine_1.Done
> > 2015-06-10 11:48:23,892 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > (ajp--127.0.0.1-8702-12) FINISH, FenceVdsVDSCommand, return:
> > Test Succeeded, unknown, log id: 2bda01bd
> > 2015-06-10 11:48:23,897 WARN
> > [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-12) Fencing
> > operation failed with proxy host 0192d1ac-b905-4660-b149-4bef578985dd,
> > trying another proxy...
> > 2015-06-10 11:48:24,039 ERROR [org.ovirt.engine.core.bll.FenceExecutor]
> > (ajp--127.0.0.1-8702-12) Failed to run Power Management command on Host ,
> > no running proxy Host was found.
> > 2015-06-10 11:48:24,039 WARN [org.ovirt.engine.core.bll.FenceExecutor]
> > (ajp--127.0.0.1-8702-12) Failed to find other proxy to re-run failed fence
> > operation, retrying with the same proxy...
> > 2015-06-10 11:48:24,143 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
> > Event ID: -1, Message: Host hosted_engine_2 from cluster Default was chosen
> > as a proxy to execute Status command on Host hosted_engine_1.
> > 2015-06-10 11:48:24,143 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> > (ajp--127.0.0.1-8702-12) Using Host hosted_engine_2 from cluster Default as
> > proxy to execute Status command on Host
> >
> > 2015-06-10 11:48:24,148 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> > (ajp--127.0.0.1-8702-12) Executing <Status> Power Management command, Proxy
> > Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
> > IP:192.168.2.2, User:Administrator, Options: power_wait="60",lanplus="1",
> > Fencing policy:null
> > 2015-06-10 11:48:24,165 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > (ajp--127.0.0.1-8702-12) START, FenceVdsVDSCommand(HostName =
> > hosted_engine_2, HostId = 0192d1ac-b905-4660-b149-4bef578985dd, targetVdsId
> > = cf2d1260-7bb3-451a-9cd7-80e6a0ede52a, action = Status, ip = 192.168.2.2,
> > port = , type = ipmilan, user = Administrator, password = ******, options =
> > ' power_wait="60",lanplus="1"', policy = 'null'), log id: 7e7f2726
> > 2015-06-10 11:48:24,360 WARN
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
> > Event ID: -1, Message: Power Management test failed for Host
> > hosted_engine_1.Done
> > 2015-06-10 11:48:24,360 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > (ajp--127.0.0.1-8702-12) FINISH, FenceVdsVDSCommand, return: Test
> > Succeeded, unknown, log id: 7e7f2726
> >
> >
> > VDSM log from hosted_engine_2:
> >
> > JsonRpcServer::DEBUG::2015-06-10
> > 11:48:23,640::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> > Waiting for request
> > Thread-2201::DEBUG::2015-06-10 11:48:23,642::API::1209::vds::(fenceNode)
> > fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=XXXX,action=status,secure=False,options=
> > power_wait="60"
> > lanplus="1",policy=None)
> > Thread-2201::DEBUG::2015-06-10 11:48:23,642::utils::739::root::(execCmd)
> > /usr/sbin/fence_ipmilan (cwd None)
> > Thread-2201::DEBUG::2015-06-10 11:48:23,709::utils::759::root::(execCmd)
> > FAILED: <err> = 'Failed: Unable to obtain correct plug status or plug is
> > not available\n\n\n'; <rc> = 1
> > Thread-2201::DEBUG::2015-06-10 11:48:23,710::API::1164::vds::(fence) rc 1
> > inp agent=fence_ipmilan
> > ipaddr=192.168.2.2
> > login=Administrator
> > action=status
> > passwd=XXXX
> > power_wait="60"
> > lanplus="1" out [] err ['Failed: Unable to obtain correct plug status or
> > plug is not available', '', '']
> > Thread-2201::DEBUG::2015-06-10 11:48:23,710::API::1235::vds::(fenceNode)
> > rc 1 in agent=fence_ipmilan
> > ipaddr=192.168.2.2
> > login=Administrator
> > action=status
> > passwd=XXXX
> > power_wait="60"
> > lanplus="1" out [] err ['Failed: Unable to obtain correct plug status or
> > plug is not available', '', '']
> > Thread-2201::DEBUG::2015-06-10
> > 11:48:23,710::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> > response
> > JsonRpc (StompReactor)::DEBUG::2015-06-10
> > 11:48:23,712::stompReactor::98::Broker.StompAdapter::(handle_frame)
> > Handling message <StompFrame command='SEND'>
> > JsonRpcServer::DEBUG::2015-06-10
> > 11:48:23,713::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> > Waiting for request
> > Thread-2202::DEBUG::2015-06-10 11:48:23,715::API::1209::vds::(fenceNode)
> > fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=XXXX,action=status,secure=False,options=
> > power_wait="60"
> > lanplus="1",policy=None)
> > Thread-2202::DEBUG::2015-06-10 11:48:23,715::utils::739::root::(execCmd)
> > /usr/sbin/fence_ipmilan (cwd None)
> > Thread-2202::DEBUG::2015-06-10 11:48:23,781::utils::759::root::(execCmd)
> > FAILED: <err> = 'Failed: Unable to obtain correct plug status or plug is
> > not available\n\n\n'; <rc> = 1
> > Thread-2202::DEBUG::2015-06-10 11:48:23,781::API::1164::vds::(fence) rc 1
> > inp agent=fence_ipmilan
> > ipaddr=192.168.2.2
> > login=Administrator
> > action=status
> > passwd=XXXX
> > power_wait="60"
> > lanplus="1" out [] err ['Failed: Unable to obtain correct plug status or
> > plug is not available', '', '']
> >
> >
> >
> > I triple-checked: I am using the correct IPs and login/password, and
> > the test from the console works.
> >
> > 2015-06-10 10:31 GMT+01:00 Martin Perina <mperina@redhat.com>:
> >
> >> Hi,
> >>
> >> I just installed engine 3.5.2 on CentOS 7.1, added 2 CentOS 7.1 hosts
> >> (both with ipmilan fence devices) and everything worked fine. I also
> >> tried to add the options
> >>
> >> lanplus="1", power_wait="60"
> >>
> >> and even with them, getting the power status of the hosts worked fine.
> >>
> >> So could you please check the settings of your hosts in webadmin again?
> >>
> >> hosted_engine1
> >> PM address: IP address of the ILO4 interface of the host hosted_engine1
> >>
> >>
> >> hosted_engine2
> >> PM address: IP address of the ILO4 interface of the host hosted_engine2
> >>
> >> If the IP addresses are entered correctly, please enable DEBUG logging
> >> for the engine, run the PM settings test for one host and attach the
> >> engine log and the VDSM logs from both hosts.
> >>
> >> Thanks
> >>
> >> Martin Perina
> >>
> >>
> >> ----- Original Message -----
> >> > From: "wodel youchi" <<a href="mailto:wodel.youchi@gmail.com" target="_blank">wodel.youchi@gmail.com</a>><br>
> >> > To: "users" <<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>><br>
> >> > Sent: Tuesday, June 9, 2015 2:41:02 PM<br>
> >> > Subject: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] Test fence : Power<br>
> >> management test failed for Host hosted_engine1<br>
> >> > Done<br>
> >> ><br>
> >> > Hi,
> >> >
> >> > I have a weird problem with fencing.
> >> >
> >> > I have a cluster of two HP DL380p G8 hosts (ILO4),
> >> > CentOS 7.1 x64 and oVirt 3.5.2, ALL UPDATED.
> >> >
> >> > I configured fencing first with ilo4, then with ipmilan.
> >> >
> >> > When testing fencing from the engine I get: Succeeded, Unknown
> >> >
> >> > And in the alerts tab I get: Power management test failed for Host
> >> > hosted_engine1 Done (the same for host2)
> >> >
> >> > I tested with fence_ilo4 and fence_ipmilan directly and they both
> >> > report the result correctly:
> >> > # fence_ipmilan -P -a 192.168.2.2 -o status -l Administrator -p ertyuiop -v
> >> > Executing: /usr/bin/ipmitool -I lanplus -H 192.168.2.2 -U Administrator -P
> >> > ertyuiop -p 623 -L ADMINISTRATOR chassis power status
> >> >
> >> > 0 Chassis Power is on
> >> >
> >> >
> >> > Status: ON
> >> >
> >> >
> >> > # fence_ilo4 -l Administrator -p ertyuiop -a 192.168.2.2 -o status -v
> >> > Executing: /usr/bin/ipmitool -I lanplus -H 192.168.2.2 -U Administrator -P
> >> > ertyuiop -p 623 -L ADMINISTRATOR chassis power status
> >> >
> >> > 0 Chassis Power is on
> >> >
> >> >
> >> > Status: ON
> >> >
> >> > ----------------------------------
> >> > These are the options passed to fence_ipmilan (I tested with the
> >> > options and without them):
> >> >
> >> > lanplus="1", power_wait="60"
> >> >
> >> >
> >> > This is the engine log:
> >> >
> >> > 2015-06-09 13:35:29,287 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> >> > (ajp--127.0.0.1-8702-7) Using Host hosted_engine_2 from cluster Default
> >> > as proxy to execute Status command on Host
> >> > 2015-06-09 13:35:29,289 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> >> > (ajp--127.0.0.1-8702-7) Executing <Status> Power Management command,
> >> > Proxy Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
> >> > IP:192.168.2.2, User:Administrator, Options: power_wait="60",lanplus="1",
> >> > Fencing policy:null
> >> > 2015-06-09 13:35:29,306 INFO
> >> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> >> > (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(
> >> > HostName = hosted_engine_2,
> >> > HostId = 0192d1ac-b905-4660-b149-4bef578985dd,
> >> > targetVdsId = cf2d1260-7bb3-451a-9cd7-80e6a0ede52a,
> >> > action = Status,
> >> > ip = 192.168.2.2,
> >> > port = ,
> >> > type = ipmilan,
> >> > user = Administrator,
> >> > password = ******,
> >> > options = ' power_wait="60",lanplus="1"',
> >> > policy = 'null'), log id: 24ce6206
> >> > 2015-06-09 13:35:29,516 WARN
> >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> > (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom
> >> > Event ID: -1, Message: Power Management test failed for Host
> >> > hosted_engine_1.Done
> >> > 2015-06-09 13:35:29,516 INFO
> >> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> >> > (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test
> >> > Succeeded, unknown, log id: 24ce6206
> >> >
> >> >
> >> > and here is the vdsm log from the proxy:
> >> >
> >> > JsonRpcServer::DEBUG::2015-06-09
> >> > 13:37:52,461::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> >> > Waiting for request
> >> > Thread-131907::DEBUG::2015-06-09 13:37:52,463::API::1209::vds::(fenceNode)
> >> > fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=XXXX,action=status,secure=False,options=
> >> > power_wait="60"
> >> > lanplus="1",policy=None)
> >> > Thread-131907::DEBUG::2015-06-09 13:37:52,463::utils::739::root::(execCmd)
> >> > /usr/sbin/fence_ipmilan (cwd None)
> >> > Thread-131907::DEBUG::2015-06-09 13:37:52,533::utils::759::root::(execCmd)
> >> > FAILED: <err> = 'Failed: Unable to obtain correct plug status or plug is
> >> > not available\n\n\n'; <rc> = 1
> >> > Thread-131907::DEBUG::2015-06-09 13:37:52,533::API::1164::vds::(fence) rc 1
> >> > inp agent=fence_ipmilan
> >> > ipaddr=192.168.2.2
> >> > login=Administrator
> >> > action=status
> >> > passwd=XXXX
> >> > power_wait="60"
> >> > lanplus="1" out [] err ['Failed: Unable to obtain correct plug status or
> >> > plug is not available', '', '']
> >> > Thread-131907::DEBUG::2015-06-09 13:37:52,533::API::1235::vds::(fenceNode)
> >> > rc 1 in agent=fence_ipmilan
> >> > ipaddr=192.168.2.2
> >> > login=Administrator
> >> > action=status
> >> > passwd=XXXX
> >> > power_wait="60"
> >> > lanplus="1" out [] err ['Failed: Unable to obtain correct plug status or
> >> > plug is not available', '', '']
> >> > Thread-131907::DEBUG::2015-06-09
> >> > 13:37:52,534::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> >> > response
> >> > Detector thread::DEBUG::2015-06-09
> >> > 13:37:53,670::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> >> > Adding connection from 127.0.0.1:55761
> >> >
> >> >
> >> > VDSM rpms:
> >> > # rpm -qa | grep vdsm
> >> > vdsm-cli-4.16.14-0.el7.noarch
> >> > vdsm-python-zombiereaper-4.16.14-0.el7.noarch
> >> > vdsm-xmlrpc-4.16.14-0.el7.noarch
> >> > vdsm-yajsonrpc-4.16.14-0.el7.noarch
> >> > vdsm-4.16.14-0.el7.x86_64
> >> > vdsm-python-4.16.14-0.el7.noarch
> >> > vdsm-jsonrpc-4.16.14-0.el7.noarch
> >> >
> >> > Any idea?
> >> >
> >> > Thanks in advance.
> >> >
> >> > _______________________________________________
> >> > Users mailing list
> >> > Users@ovirt.org
> >> > http://lists.ovirt.org/mailman/listinfo/users
> >> >
> >>
> >
> >
>