<div dir="ltr"><div><div><div>Hi Eli,<br><br></div>All nodes are the same in the cluster (fresh install): <br><br>[mpc-ovirt-node03 ~]# yum list installed *OVIRT* *VDSM* *FENCE*<br>Loaded plugins: fastestmirror, langpacks, priorities<br>Loading mirror speeds from cached hostfile<br> * base: <a href="http://centos.mirror.rafal.ca">centos.mirror.rafal.ca</a><br> * epel: <a href="http://fedora-epel.mirror.iweb.com">fedora-epel.mirror.iweb.com</a><br> * extras: <a href="http://less.cogeco.net">less.cogeco.net</a><br> * ovirt-3.5: <a href="http://resources.ovirt.org">resources.ovirt.org</a><br> * ovirt-3.5-epel: <a href="http://fedora-epel.mirror.iweb.com">fedora-epel.mirror.iweb.com</a><br> * rpmforge: <a href="http://repoforge.mirror.constant.com">repoforge.mirror.constant.com</a><br> * updates: <a href="http://centos.mirror.rafal.ca">centos.mirror.rafal.ca</a><br>183 packages excluded due to repository priority protections<br>Installed Packages<br>fence-agents-all.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-apc.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-apc-snmp.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-bladecenter.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-brocade.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-cisco-mds.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-cisco-ucs.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-common.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-drac5.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-eaton-snmp.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-eps.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-hpblade.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-ibmblade.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-ifmib.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-ilo-mp.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-ilo-ssh.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-ilo2.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-intelmodular.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-ipdu.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-ipmilan.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-kdump.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-rhevm.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-rsb.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-scsi.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-vmware-soap.x86_64 4.0.11-11.el7_1 @updates <br>fence-agents-wti.x86_64 4.0.11-11.el7_1 @updates <br>fence-virt.x86_64 0.3.2-1.el7 @base <br>libgovirt.x86_64 0.3.1-3.el7 @base <br>ovirt-engine-sdk-python.noarch 3.5.1.0-1.el7.centos @ovirt-3.5 <br>ovirt-host-deploy.noarch 1.3.1-1.el7 @ovirt-3.5 <br>ovirt-hosted-engine-ha.noarch 1.2.5-1.el7.centos @ovirt-3.5 <br>ovirt-hosted-engine-setup.noarch 1.2.2-1.el7.centos @ovirt-3.5 <br>ovirt-release35.noarch 002-1 @/ovirt-release35<br>vdsm.x86_64 4.16.10-8.gitc937927.el7 @ovirt-3.5 <br>vdsm-cli.noarch 4.16.10-8.gitc937927.el7 @ovirt-3.5 <br>vdsm-gluster.noarch 4.16.10-8.gitc937927.el7 @ovirt-3.5 <br>vdsm-jsonrpc.noarch 4.16.10-8.gitc937927.el7 @ovirt-3.5 <br>vdsm-python.noarch 4.16.10-8.gitc937927.el7 @ovirt-3.5 <br>vdsm-python-zombiereaper.noarch 4.16.10-8.gitc937927.el7 @ovirt-3.5 <br>vdsm-xmlrpc.noarch 4.16.10-8.gitc937927.el7 @ovirt-3.5 <br>vdsm-yajsonrpc.noarch 4.16.10-8.gitc937927.el7 @ovirt-3.5 <br><br></div>Cheers,<br></div>Mike<br><div><div><div><br></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 22 April 2015 at 03:00, Eli Mesika <span dir="ltr"><<a href="mailto:emesika@redhat.com" target="_blank">emesika@redhat.com</a>></span> 
wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5"><br>
<br>
----- Original Message -----<br>
> From: "Mike Lindsay" <<a href="mailto:mike.lindsay@cbc.ca">mike.lindsay@cbc.ca</a>><br>
> To: <a href="mailto:users@ovirt.org">users@ovirt.org</a><br>
> Sent: Tuesday, April 21, 2015 8:16:18 PM<br>
> Subject: [ovirt-users] Options not being passed fence_ipmilan, Ovirt3.5 on Centos 7.1 hosts<br>
><br>
> Hi All,<br>
><br>
> I have a bit of an issue with a new install of oVirt 3.5 in a 4-node cluster<br>
> (our 3.4 cluster is working fine).<br>
><br>
> When I test fencing (or cause a kernel panic that triggers a fence), the fencing<br>
> fails. On investigation it appears that the fencing options are not being<br>
> passed to the fencing script (fence_ipmilan in this case):<br>
><br>
> Fence options (as entered in the GUI): lanplus, ipport=623,<br>
> power_wait=4, privlvl=operator<br>
><br>
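> (For a quick out-of-band sanity check, the same options can be fed to the agent by hand.<br>
> A rough sketch -- long-option names as I recall them from fence-agents 4.x, so verify them<br>
> against fence_ipmilan -h; the address and credentials are just the values from the GUI,<br>
> with the password masked:)<br>
><br>
> # query plug status directly against the node's BMC<br>
> fence_ipmilan --ip=x.x.x.x --ipport=623 --username=stonith --password=XXXX \<br>
>               --lanplus --privlvl=operator --power-wait=4 --action=status<br>
><br>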
> from vdsm.log on the fence proxy node:<br>
><br>
> Thread-818296::DEBUG::2015-04-21 12:39:39,136::API::1209::vds::(fenceNode)<br>
> fenceNode(addr=x.x.x.x,port=,agent=ipmilan,user=stonith,passwd=XXXX,action=status,secure=False,options=<br>
> power_wait=4<br>
> Thread-818296::DEBUG::2015-04-21 12:39:39,137::utils::739::root::(execCmd)<br>
> /usr/sbin/fence_ipmilan (cwd None)<br>
> Thread-818296::DEBUG::2015-04-21 12:39:39,295::utils::759::root::(execCmd)<br>
> FAILED: <err> = 'Failed: Unable to obtain correct plug status or plug is not<br>
> available\n\n\n'; <rc> = 1<br>
> Thread-818296::DEBUG::2015-04-21 12:39:39,296::API::1164::vds::(fence) rc 1<br>
> inp agent=fence_ipmilan<br>
> Thread-818296::DEBUG::2015-04-21 12:39:39,296::API::1235::vds::(fenceNode) rc<br>
> 1 in agent=fence_ipmilan<br>
> Thread-818296::DEBUG::2015-04-21<br>
> 12:39:39,297::stompReactor::163::yajsonrpc.StompServer::(send) Sending<br>
> response<br>
><br>
><br>
> from engine.log on the engine:<br>
> 2015-04-21 12:39:38,843 INFO<br>
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]<br>
> (ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom Event<br>
> ID: -1, Message: Host mpc-ovirt-node03 from cluster Default was chosen as a<br>
> proxy to execute Status command on Host mpc-ovirt-node04.<br>
> 2015-04-21 12:39:38,845 INFO [org.ovirt.engine.core.bll.FenceExecutor]<br>
> (ajp--127.0.0.1-8702-4) Using Host mpc-ovirt-node03 from cluster Default as<br>
> proxy to execute Status command on Host<br>
> 2015-04-21 12:39:38,885 INFO [org.ovirt.engine.core.bll.FenceExecutor]<br>
> (ajp--127.0.0.1-8702-4) Executing <Status> Power Management command, Proxy<br>
> Host:mpc-ovirt-node03, Agent:ipmilan, Target Host:, Management IP:x.x.x.x,<br>
> User:stonith, Options: power_wait=4, ipport=623, privlvl=operator,lanplus,<br>
> Fencing policy:null<br>
> 2015-04-21 12:39:38,921 INFO<br>
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]<br>
> (ajp--127.0.0.1-8702-4) START, FenceVdsVDSCommand(HostName =<br>
> mpc-ovirt-node03, HostId = 5613a489-589d-4e89-ab01-3642795eedb8, targetVdsId<br>
> = dbfa4e85-3e97-4324-b222-bf40a491db08, action = Status, ip = x.x.x.x, port<br>
> = , type = ipmilan, user = stonith, password = ******, options = '<br>
> power_wait=4, ipport=623, privlvl=operator,lanplus', policy = 'null'), log<br>
> id: 774f328<br>
> 2015-04-21 12:39:39,338 WARN<br>
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]<br>
> (ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom Event<br>
> ID: -1, Message: Power Management test failed for Host mpc-ovirt-node04.Done<br>
> 2015-04-21 12:39:39,339 INFO<br>
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]<br>
> (ajp--127.0.0.1-8702-4) FINISH, FenceVdsVDSCommand, return: Test Succeeded,<br>
> unknown, log id: 774f328<br>
> 2015-04-21 12:39:39,340 WARN [org.ovirt.engine.core.bll.FenceExecutor]<br>
> (ajp--127.0.0.1-8702-4) Fencing operation failed with proxy host<br>
> 5613a489-589d-4e89-ab01-3642795eedb8, trying another proxy...<br>
> 2015-04-21 12:39:39,594 INFO<br>
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]<br>
> (ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom Event<br>
> ID: -1, Message: Host mpc-ovirt-node01 from cluster Default was chosen as a<br>
> proxy to execute Status command on Host mpc-ovirt-node04.<br>
> 2015-04-21 12:39:39,595 INFO [org.ovirt.engine.core.bll.FenceExecutor]<br>
> (ajp--127.0.0.1-8702-4) Using Host mpc-ovirt-node01 from cluster Default as<br>
> proxy to execute Status command on Host<br>
> 2015-04-21 12:39:39,598 INFO [org.ovirt.engine.core.bll.FenceExecutor]<br>
> (ajp--127.0.0.1-8702-4) Executing <Status> Power Management command, Proxy<br>
> Host:mpc-ovirt-node01, Agent:ipmilan, Target Host:, Management IP:x.x.x.x,<br>
> User:stonith, Options: power_wait=4, ipport=623, privlvl=operator,lanplus,<br>
> Fencing policy:null<br>
> 2015-04-21 12:39:39,634 INFO<br>
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]<br>
> (ajp--127.0.0.1-8702-4) START, FenceVdsVDSCommand(HostName =<br>
> mpc-ovirt-node01, HostId = c3e8be6e-ac54-4861-b774-17ba5cc66dc6, targetVdsId<br>
> = dbfa4e85-3e97-4324-b222-bf40a491db08, action = Status, ip = x.x.x.x, port<br>
> = , type = ipmilan, user = stonith, password = ******, options = '<br>
> power_wait=4, ipport=623, privlvl=operator,lanplus', policy = 'null'), log<br>
> id: 6369eb1<br>
> 2015-04-21 12:39:40,056 WARN<br>
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]<br>
> (ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom Event<br>
> ID: -1, Message: Power Management test failed for Host mpc-ovirt-node04.Done<br>
> 2015-04-21 12:39:40,057 INFO<br>
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]<br>
> (ajp--127.0.0.1-8702-4) FINISH, FenceVdsVDSCommand, return: Test Succeeded,<br>
> unknown, log id: 6369eb1<br>
><br>
><br>
> For verification I temporarily replaced /usr/sbin/fence_ipmilan with a shell<br>
> script that dumps the environment plus any CLI args it receives into a log file:<br>
><br>
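> (A minimal sketch of such a wrapper -- the log path is arbitrary, the exit code just keeps<br>
> the test failing loudly, and the original agent should of course be kept and restored:)<br>
><br>
> #!/bin/bash<br>
> LOG=/tmp/fence_debug.log<br>
> {<br>
>   echo "-------------------------- $(date) ----------------------------"<br>
>   echo "ENV DUMP:"<br>
>   env<br>
>   echo "------------------------------"<br>
>   echo "CLI DUMP:"<br>
>   printf '%s\n' "$@"<br>
>   # fence agents also read name=value options on stdin, so an<br>
>   # "echo STDIN DUMP:; cat" here would catch that path as well<br>
> } >> "$LOG"<br>
> exit 1<br>
><br>
> The next fence test then produced:<br>
><br>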
> -------------------------- Tue Apr 21 12:39:39 EDT 2015<br>
> ----------------------------<br>
> ENV DUMP:<br>
> LC_ALL=C<br>
> USER=vdsm<br>
> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin<br>
> PWD=/<br>
> LANG=en_CA.UTF-8<br>
> LIBVIRT_LOG_FILTERS=<br>
> SHLVL=1<br>
> HOME=/var/lib/vdsm<br>
> LOGNAME=vdsm<br>
> LIBVIRT_LOG_OUTPUTS=<br>
> _=/usr/bin/env<br>
><br>
> ------------------------------<br>
> CLI DUMP:<br>
><br>
> <this is where the cli args should be listed><br>
><br>
><br>
> Version info:<br>
> libvirt version: 1.2.8, package: 16.el7_1.2 (CentOS BuildSystem<br>
> <<a href="http://bugs.centos.org" target="_blank">http://bugs.centos.org</a>>, 2015-03-26-23:17:42, <a href="http://worker1.bsys.centos.org" target="_blank">worker1.bsys.centos.org</a>)<br>
> fence_ipmilan: 4.0.11 (built Mon Apr 13 13:22:18 UTC 2015)<br>
> vdsm.x86_64: 4.16.10-8.gitc937927.el7<br>
> ovirt-engine.noarch: 3.5.1.1-1.el6<br>
><br>
> Engine OS: CentOS 6.6<br>
> Host OS: CentOS 7.1.1503<br>
><br>
> I've found some old posts from 2012 that describe the same problem. Has anyone<br>
> else run into this?<br>
><br>
> Any thoughts or suggestions would be appreciated.<br>
<br>
</div></div>Hi Mike<br>
<br>
It seems from the logs above that the oVirt engine got the options correctly, but somehow they are ignored by VDSM.<br>
Can you please share the installed VDSM RPM version on the host that served as a proxy for this operation?<br>
Please also provide the compatibility version of the cluster that contains the proxy host.<br>
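<br>
Something like the following should cover both (the engine DB table name is from memory for<br>
3.5, so double-check it first, or just read the compatibility version from the Clusters tab in the GUI):<br>
<br>
# on the proxy host (mpc-ovirt-node03)<br>
rpm -q vdsm vdsm-jsonrpc fence-agents-ipmilan<br>
<br>
# on the engine host<br>
sudo -u postgres psql engine -c "select name, compatibility_version from vds_groups;"<br>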
<br>
Thanks<br>
<br>
Eli<br>
<br>
<br>
<br>
><br>
> Cheers,<br>
> Mike<br>
><br>
> _______________________________________________<br>
> Users mailing list<br>
> <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
> <a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
><br>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature">
<div style="MARGIN:4px 4px 1px;FONT:10pt Microsoft Sans Serif">
<div><strong><font face="Terminal">Mike Lindsay</font></strong></div>
<div><font face="Terminal">Senior Systems Administrator</font></div>
<div><font face="Terminal" size="1"><em>Technological Maintenance and Support</em></font></div>
<div><em><font face="Terminal" size="1">CBC/SRC</font></em></div>
<div><a href="mailto:mike.lindsay@cbc.ca" target="_blank"><font face="Terminal">mike.lindsay@cbc.ca</font></a></div>
<div><font face="Terminal">(o) 416-205-8992</font></div>
<div><font face="Terminal">(c) 416-819-2841</font></div></div></div>
</div>