Hi,

I have a weird problem with fencing.

I have a cluster of two HP DL380p G8 servers (iLO4), running CentOS 7.1 x86_64 and oVirt 3.5.2, all fully updated.

I configured fencing first with ilo4, then with ipmilan.

When testing fencing from the engine I get: Succeeded, Unknown

And in the Alerts tab I get: "Power Management test failed for Host hosted_engine_1. Done" (the same for the second host).

I tested manually with fence_ilo4 and fence_ipmilan, and both report the status correctly:

# fence_ipmilan -P -a 192.168.2.2 -o status -l Administrator -p ertyuiop -v
Executing: /usr/bin/ipmitool -I lanplus -H 192.168.2.2 -U Administrator -P ertyuiop -p 623 -L ADMINISTRATOR chassis power status

0 Chassis Power is on

Status: ON


# fence_ilo4 -l Administrator -p ertyuiop -a 192.168.2.2 -o status -v
Executing: /usr/bin/ipmitool -I lanplus -H 192.168.2.2 -U Administrator -P ertyuiop -p 623 -L ADMINISTRATOR chassis power status

0 Chassis Power is on

Status: ON

----------------------------------
These are the options passed to fence_ipmilan (I tested both with these options and without them):

lanplus="1", power_wait="60"
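
For comparison, my understanding is that the same options would map to a manual invocation along these lines (--power-wait being how the fence_ipmilan man page spells the long option, if I read it right):

# fence_ipmilan -P -a 192.168.2.2 -l Administrator -p ertyuiop --power-wait=60 -o status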


This is the engine log:

2015-06-09 13:35:29,287 INFO  [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Using Host hosted_engine_2 from cluster Default as proxy to execute Status command on Host
2015-06-09 13:35:29,289 INFO  [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Executing <Status> Power Management command, Proxy Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management IP:192.168.2.2, User:Administrator, Options: power_wait="60",lanplus="1", Fencing policy:null
2015-06-09 13:35:29,306 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(
HostName = hosted_engine_2,
HostId = 0192d1ac-b905-4660-b149-4bef578985dd,
targetVdsId = cf2d1260-7bb3-451a-9cd7-80e6a0ede52a,
action = Status,
ip = 192.168.2.2,
port = ,
type = ipmilan,
user = Administrator,
password = ******,
options = ' power_wait="60",lanplus="1"',
policy = 'null'), log id: 24ce6206
2015-06-09 13:35:29,516 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Power Management test failed for Host hosted_engine_1.Done
2015-06-09 13:35:29,516 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded, unknown, log id: 24ce6206


And here is the vdsm log from the proxy:

JsonRpcServer::DEBUG::2015-06-09 13:37:52,461::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-131907::DEBUG::2015-06-09 13:37:52,463::API::1209::vds::(fenceNode) fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=XXXX,action=status,secure=False,options= power_wait="60"
lanplus="1",policy=None)
Thread-131907::DEBUG::2015-06-09 13:37:52,463::utils::739::root::(execCmd) /usr/sbin/fence_ipmilan (cwd None)
Thread-131907::DEBUG::2015-06-09 13:37:52,533::utils::759::root::(execCmd) FAILED: <err> = 'Failed: Unable to obtain correct plug status or plug is not available\n\n\n'; <rc> = 1
Thread-131907::DEBUG::2015-06-09 13:37:52,533::API::1164::vds::(fence) rc 1 inp agent=fence_ipmilan
ipaddr=192.168.2.2
login=Administrator
action=status
passwd=XXXX
 power_wait="60"
lanplus="1" out [] err ['Failed: Unable to obtain correct plug status or plug is not available', '', '']
Thread-131907::DEBUG::2015-06-09 13:37:52,533::API::1235::vds::(fenceNode) rc 1 in agent=fence_ipmilan
ipaddr=192.168.2.2
login=Administrator
action=status
passwd=XXXX
 power_wait="60"
lanplus="1" out [] err ['Failed: Unable to obtain correct plug status or plug is not available', '', '']
Thread-131907::DEBUG::2015-06-09 13:37:52,534::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Detector thread::DEBUG::2015-06-09 13:37:53,670::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:55761
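
If it helps to reproduce, the log above suggests vdsm drives the agent over stdin as key=value lines rather than via command-line flags (I also notice the options lines in both logs carry a leading space before power_wait, and the values keep their literal quotes). So the same call should be replayable by hand, something like this (the agent= line is presumably just ignored by the agent):

# /usr/sbin/fence_ipmilan <<EOF
agent=fence_ipmilan
ipaddr=192.168.2.2
login=Administrator
action=status
passwd=ertyuiop
power_wait="60"
lanplus="1"
EOF

which should hit the same "Unable to obtain correct plug status or plug is not available" error as in the log.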


VDSM rpms:
# rpm -qa | grep vdsm
vdsm-cli-4.16.14-0.el7.noarch
vdsm-python-zombiereaper-4.16.14-0.el7.noarch
vdsm-xmlrpc-4.16.14-0.el7.noarch
vdsm-yajsonrpc-4.16.14-0.el7.noarch
vdsm-4.16.14-0.el7.x86_64
vdsm-python-4.16.14-0.el7.noarch
vdsm-jsonrpc-4.16.14-0.el7.noarch

Any idea?

Thanks in advance.