[ovirt-users] Info on fence_rhevm against oVirt 4.1.1
Juan Hernández
jhernand at redhat.com
Fri Apr 28 06:49:17 UTC 2017
On 04/28/2017 01:54 AM, Gianluca Cecchi wrote:
> On Thu, Apr 27, 2017 at 6:32 PM, Juan Hernández <jhernand at redhat.com> wrote:
>
> That is a known issue:
>
> fence_rhevm can only work as RHEV admin user, not a regular user (that
> requires the "Filter: true" HTTP header)
> https://bugzilla.redhat.com/1287059
>
> That was fixed in fence-agents-4.0.11-47.el7, but I guess it wasn't
> backported to CentOS 6.
>
> I'd suggest that you open a bug for this component in the Red Hat
> Enterprise Linux bug tracker, requesting that the fix be back-ported.
>
> Meanwhile, if you are in a hurry, you can take the CentOS 7 fence_rhevm
> script, which should work.
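>
> One way to get it, for example, is to extract just the script from the
> el7 package without installing it (the package file name below is only
> illustrative, use whatever version you download):
>
> ---8<---
> # Extract only the fence_rhevm script from the CentOS 7 RPM:
> rpm2cpio fence-agents-rhevm-4.0.11-66.el7.x86_64.rpm \
>   | cpio -idmv ./usr/sbin/fence_rhevm
> --->8---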
>
> You will most likely also need to add --ssl-insecure to the command line
> of the agent, because you are most likely using the default self-signed
> certificate authority used by the engine.
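>
> For a quick manual test of the agent, something like this should work
> (substitute your engine address, user, password and VM name):
>
> ---8<---
> # Ask the engine for the power status of the VM "myvm":
> fence_rhevm --ip=yourengine --username="admin@internal" \
>   --password="..." --ssl --ssl-insecure --plug=myvm --action=status
> --->8---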
>
> Note that the latest version of this script uses the 'Filter: true'
> header to drop privileges. That means that even when using
> 'admin@internal' you have to make sure that 'admin@internal' has
> permissions for the VM that you want to fence, otherwise it will not be
> able to find/fence it.
>
>
> Thanks for the feedback, Juan.
> I confirm that using the fence_rhevm script from the latest CentOS 7
> version worked.
> These were the lines in my cluster.conf:
>
> <clusternode name="p2viclnorasvi1" nodeid="1" votes="1">
>   <fence>
>     <method name="1">
>       <device name="ovirt_fencedelay" port="p2vorasvi1"/>
>     </method>
>   </fence>
> </clusternode>
> <clusternode name="p2viclnorasvi2" nodeid="2" votes="1">
>   <fence>
>     <method name="1">
>       <device name="ovirt_fence" port="p2vorasvi2"/>
>     </method>
>   </fence>
> </clusternode>
> </clusternodes>
> <quorumd label="p2vcluorasvi" votes="1">
>   <heuristic interval="2" program="ping -c1 -w1 172.16.10.231" score="1" tko="200"/>
> </quorumd>
> <fencedevices>
>   <fencedevice agent="fence_rhevm" delay="30" ipaddr="10.4.192.43" login="g.cecchi@internal" passwd_script="/usr/local/bin/pwd_dracnode01.sh" name="ovirt_fencedelay" ssl="on" ssl_insecure="on" shell_timeout="20" power_wait="10"/>
>   <fencedevice agent="fence_rhevm" ipaddr="10.4.192.43" login="g.cecchi@internal" passwd_script="/usr/local/bin/pwd_dracnode02.sh" name="ovirt_fence" ssl="on" ssl_insecure="on" shell_timeout="20" power_wait="10"/>
> </fencedevices>
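>
> As a side note, the RHCS 6 tools can be used to sanity check this kind
> of configuration before trusting it, e.g. (sketch):
>
> ---8<---
> # Validate the cluster.conf syntax:
> ccs_config_validate
> # Fence the other node through the configured device, as an end to end test:
> fence_node p2viclnorasvi2
> --->8---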
>
> Using admin@internal didn't work even when I set the permissions at the
> VM level too...
>
It should work if you add 'UserRole' to 'admin@internal'. The issue is
that the fence agent uses the 'Filter: true' header, so it drops its
super-user privileges for the query and won't see the VM unless it has
been explicitly granted permissions. To check it you can do the
following, for example:
---8<---
#!/bin/sh -ex

url="https://yourengine/ovirt-engine/api"
user="admin@internal"
password="..."

# Query the API for the VM, sending 'Filter: true' so that the results
# are filtered by the permissions of the user, like the fence agent does:
curl \
  --verbose \
  --cacert "/etc/pki/ovirt-engine/ca.pem" \
  --user "${user}:${password}" \
  --request GET \
  --header "Version: 3" \
  --header "Filter: true" \
  "${url}/vms?search=name%3Dmyvm"
--->8---
That should return the details of the VM, or nothing if the user doesn't
have permission to see that VM.
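If the permission is missing you can also grant it through the same API.
Roughly like this (illustrative sketch; VM_ID and USER_ID are
placeholders for the real identifiers returned by /vms and /users):
---8<---
#!/bin/sh -ex

url="https://yourengine/ovirt-engine/api"
user="admin@internal"
password="..."

# Grant 'UserRole' on the VM to the user:
curl \
  --cacert "/etc/pki/ovirt-engine/ca.pem" \
  --user "${user}:${password}" \
  --request POST \
  --header "Version: 3" \
  --header "Content-Type: application/xml" \
  --data '<permission><role><name>UserRole</name></role><user id="USER_ID"/></permission>' \
  "${url}/vms/VM_ID/permissions"
--->8---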
> It worked with my username (g.cecchi), which has the SuperUser
> privilege at system level and also at VM level.
>
> Is it still necessary to have a user with the SuperUser privilege at system level?
>
No, it shouldn't be necessary. Actually, as you are using the 'internal'
domain, it is easy to add a new dummy user without SuperUser privileges.
You can give that user permissions (with 'UserRole') only for the VMs
that are nodes of the cluster. That should be enough.
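For example, on the machine where the engine runs, something like this
should do (the user name 'fenceuser' is just an example):
---8<---
# Create a new user in the 'internal' domain and set its password:
ovirt-aaa-jdbc-tool user add fenceuser
ovirt-aaa-jdbc-tool user password-reset fenceuser \
  --password-valid-to="2025-01-01 00:00:00Z"
--->8---
Then grant that user 'UserRole' on each of the cluster node VMs, either
from the administration portal or with a permissions request like the
one above.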
> Tomorrow (today... ;-) I'm going to open a bugzilla to backport the feature.
>
> Thanks again,
> Gianluca
>