Not sure what's wrong with the sanlock daemon. What I expect is: since I am
not using any FENCING and shut down everything [oVirt manager / node /
AD server / storage] before leaving the office, sanlock may not have been
able to clean up its leases, and coordination failed because both nodes in
the cluster went down at the same time. So maybe after boot-up sanlock needs
a restart to clean up something left pending from before the last shutdown,
or manual fencing is required? Not sure my understanding is correct here,
because sanlock is supposed to kill any process holding a resource lease
within the lockspace and release it automatically. But will that work
without fencing?
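To see what state sanlock is actually in after boot, something like the
following could help (a sketch; sanlock CLI and log path as on EL6, the
service name assumed to be `sanlock`):

```shell
# Show lockspaces and resource leases sanlock currently holds; after an
# unclean shutdown of both nodes, a stale lockspace here would explain
# why SPM contention keeps failing:
sanlock client status

# sanlock's own log usually names the lease/lockspace it cannot acquire:
tail -n 50 /var/log/sanlock.log

# The manual workaround being used today (EL6 service syntax):
service sanlock restart
```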
On Mon, Oct 21, 2013 at 8:34 PM, Fabian Deutsch <fabiand(a)redhat.com> wrote:
On Monday, 21.10.2013, at 20:21 +0800, Anil Dhingra wrote:
>
> Below is the output after reboot. Also, after every reboot we need to
> restart the sanlock daemon manually, otherwise no SPM is selected and
> the hosts keep contending and failing.
Do you know what the problem with the sanlock daemon is, i.e. why it needs
to be restarted?
> [root@node1-3-3 ~]# getsebool -a | egrep -i 'nfs|sanlock'
> allow_ftpd_use_nfs --> off
> cobbler_use_nfs --> off
> git_system_use_nfs --> off
> httpd_use_nfs --> off
> qemu_use_nfs --> on
> rsync_use_nfs --> off
> samba_share_nfs --> off
> sanlock_use_fusefs --> off
> sanlock_use_nfs --> off
> sanlock_use_samba --> off
> sge_use_nfs --> off
> use_nfs_home_dirs --> on
> virt_use_nfs --> off
> virt_use_sanlock --> off
> xen_use_nfs --> off
> [root@node1-3-3 ~]# getsebool -a | egrep -i allow_execstack
> allow_execstack --> on
> [root@node1-3-3 ~]#
Seems like it was changed. Could it be VDSM changing it?
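One way to find out who is flipping the booleans (a sketch; assumes auditd
is running, which it is by default on EL6):

```shell
# SELinux boolean changes are recorded as MAC_CONFIG_CHANGE audit events;
# listing them with timestamps would show whether a change lines up with
# VDSM start-up:
ausearch -m MAC_CONFIG_CHANGE -i | tail -n 20
```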
Greetings
fabian
>
> On Mon, Oct 21, 2013 at 7:16 PM, Fabian Deutsch <fabiand(a)redhat.com>
> wrote:
> On Monday, 21.10.2013, at 15:44 +0800, Anil Dhingra wrote:
> > hi
> >
> > The permission issue is resolved after changing the permissions on the
> > OpenFiler NFS share, but we still need to set the values below manually
> > on every reboot. Any idea how to make them permanent?
> >
> > setsebool -P virt_use_sanlock=on
> > setsebool -P virt_use_nfs=on
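If the node image is stateless so that `setsebool -P` does not survive a
reboot, one workaround is to re-apply the booleans at boot (a sketch, not
a fix for the underlying cause; the use of oVirt Node's `persist` command
to keep the edited file across reboots is an assumption about this setup):

```shell
# Re-apply the booleans on every boot via rc.local (no -P needed, since
# the change only has to last until the next reboot anyway):
cat >> /etc/rc.d/rc.local <<'EOF'
setsebool virt_use_sanlock=on virt_use_nfs=on sanlock_use_nfs=on
EOF
chmod +x /etc/rc.d/rc.local

# On oVirt Node, modified config files must be persisted explicitly,
# otherwise the edit itself is lost at reboot:
persist /etc/rc.d/rc.local
```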
>
>
> Hum ... That's interesting.
> We actually set both of them to on during the installation of the
> ovirt-node selinux package:
>
> /usr/sbin/setsebool -P allow_execstack=0 \
>     virt_use_nfs=1 \
>     virt_use_sanlock=1 \
>     sanlock_use_nfs=1
>
> What does
> getsebool virt_use_sanlock virt_use_nfs
>
> say?
>
> - fabian
>
> >
> > On Wed, Oct 16, 2013 at 8:24 AM, Itamar Heim
> <iheim(a)redhat.com> wrote:
> > On 10/15/2013 11:05 AM, Anil Dhingra wrote:
> >
> > Hi Guys
> >
> > Any known issue why we are not able to start a VM due to a permission
> > issue on the disk image file? As per the docs, ownership should be
> > vdsm:kvm, but I am not sure why it shows the output below. I used both
> > ovirt-node-iso-3.0.1-1.0.1.vdsm.el6 and
> > ovirt-node-iso-3.0.1-1.0.2.vdsm.el6 -- same issue [using an NFS domain].
> >
> > VM n0001vdap is down. Exit message: internal error process exited while
> > connecting to monitor: qemu-kvm: -drive
> > file=/rhev/data-center/d09d8a3e-8ab4-42fc-84ec-86f307d144a0/1a04e13a-0ed4-40d6-a153-f7091c65d916/images/44e3fc9b-0382-4c11-b00c-35bd74032e9a/34542412-ed50-4350-8867-0d7d5f8127fd,if=none,id=drive-virtio-disk0,format=raw,serial=44e3fc9b-0382-4c11-b00c-35bd74032e9a,cache=none,werror=stop,rerror=stop,aio=threads:
> > *could not open* disk image
> > */rhev/data-center*/d09d8a3e-8ab4-42fc-84ec-86f307d144a0/1a04e13a-0ed4-40d6-a153-f7091c65d916/*images*/44e3fc9b-0382-4c11-b00c-35bd74032e9a/34542412-ed50-4350-8867-0d7d5f8127fd:
> > *Permission denied*
> >
> > [root@node1 44e3fc9b-0382-4c11-b00c-35bd74032e9a]# ls -lh
> > total 1.1M
> > -rw-rw----+ 1 *vdsm 96* 6.0G 2013-10-15 05:47 34542412-ed50-4350-8867-0d7d5f8127fd
> > -rw-rw----+ 1 *vdsm 96* 1.0M 2013-10-15 05:47 34542412-ed50-4350-8867-0d7d5f8127fd.lease
> > -rw-rw-rw-+ 1 *vdsm 96* 268 2013-10-15 05:47 34542412-ed50-4350-8867-0d7d5f8127fd.meta
> >
> > As it doesn't allow us to change permissions, is there any alternate
> > way to do this? Or do I need to manually set permissions in
> > *"/etc/libvirt/qemu.conf"*? Also, there is no group *"96"*, so I am not
> > sure where it picks this config from.
> >
> > Another question is related to the SELinux config change for the two
> > parameters below, needed to recover from the error "*internal error
> > Failed to open socket to sanlock daemon: Permission denied*". I saw
> > somewhere that this is fixed, but I am not sure why it appears; VDSM
> > should take care of this automatically.
> >
> > setsebool -P virt_use_sanlock=on
> > setsebool -P virt_use_nfs=on
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> > have you tried:
> > http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
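For reference, the common server-side fix for this class of NFS permission
error is to make the export owned by uid/gid 36, which is what vdsm:kvm map
to on the hypervisor (a sketch; the export path /mnt/ovirt-data and the
exact export options are assumptions about this OpenFiler setup):

```shell
# On the NFS server: hand the export directory to uid 36 / gid 36
# (vdsm:kvm on the oVirt node side):
chown -R 36:36 /mnt/ovirt-data    # hypothetical export path
chmod 0755 /mnt/ovirt-data

# Example /etc/exports entry -- all_squash maps every client to
# anonuid/anongid, which avoids files ending up with an unknown
# group like the "96" seen in the listing above:
# /mnt/ovirt-data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
```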