On Wed, Jan 20, 2021 at 3:52 AM Matt Snow <mattsnow@gmail.com> wrote:
[root@brick ~]# ps -efz | grep sanlock
Sorry, it's "ps -efZ", but we already know it's not SELinux.
[root@brick ~]# ps -ef | grep sanlock
sanlock 1308 1 0 10:21 ? 00:00:01 /usr/sbin/sanlock daemon
Does sanlock run with the right groups?
On a working system:
$ ps -efZ | grep sanlock | grep -v grep
system_u:system_r:sanlock_t:s0-s0:c0.c1023 sanlock 983 1 0 11:23 ? 00:00:03 /usr/sbin/sanlock daemon
system_u:system_r:sanlock_t:s0-s0:c0.c1023 root 986 983 0 11:23 ? 00:00:00 /usr/sbin/sanlock daemon
The sanlock process running as the "sanlock" user (pid=983) is the
interesting one.
The other one is a helper that never accesses storage.
$ grep Groups: /proc/983/status
Groups: 6 36 107 179
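To see what those GIDs mean, you can resolve them to names. On a typical
RHEL/CentOS host I would expect disk (6), kvm (36), qemu (107), and
sanlock (179), but verify on your system:

$ getent group 6 36 107 179

or simply:

$ id sanlock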
Vdsm verifies this on startup using "vdsm-tool is-configured". On a working system:
$ sudo vdsm-tool is-configured
lvm is configured for vdsm
libvirt is already configured for vdsm
sanlock is configured for vdsm
Managed volume database is already configured
Current revision of multipath.conf detected, preserving
abrt is already configured for vdsm
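If any component reports as not configured, vdsm-tool can also apply the
configuration; from memory (check the man page on your version), this is:

$ sudo vdsm-tool configure --force

The --force flag reconfigures even if the services are running; a single
component can be targeted with --module (e.g. --module sanlock).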
[root@brick ~]# ausearch -m avc
<no matches>
Looks good.
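One caveat: ausearch will not show denials silenced by dontaudit rules. To
rule those out completely, you can temporarily disable them, reproduce the
failure, and look again (commands from memory, verify before use):

$ sudo semodule -DB
... reproduce the failure, then re-run ausearch -m avc ...
$ sudo semodule -B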
[root@brick ~]# ls -lhZ /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
total 278K
-rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0 0 Jan 19 13:38 ids
Looks correct.
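If you want to be extra sure the sanlock daemon can actually read the lease
file, an illustrative check (not something we normally need; adjust the path
to your mount) is a direct read as the sanlock user:

$ sudo -u sanlock dd if=/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md/ids of=/dev/null bs=512 count=1

You can also ask sanlock itself what it sees with "sanlock client status",
which lists the lockspaces and resources the daemon currently holds.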