<div dir="ltr"><div>Not sure what's wrong with the sanlock daemon. What I'm expecting: as I am not using any FENCING and I shut down everything [ ovirt manager / node / AD server / Storage ] before leaving the office,</div>
<div>maybe sanlock is not able to clean up its leases and coordination fails, since both nodes in the cluster go down at the same time.</div><div> </div><div> </div><div>So maybe after boot-up it needs a sanlock restart to clean up something left pending from before the last shutdown, or is manual fencing required? Not sure my understanding is correct here, because sanlock is supposed to kill any process<br>
holding a resource lease within the lockspace and release it automatically .. but will that work without FENCING?</div><div> </div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Oct 21, 2013 at 8:34 PM, Fabian Deutsch <span dir="ltr"><<a href="mailto:fabiand@redhat.com" target="_blank">fabiand@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Monday, 2013-10-21 at 20:21 +0800, Anil Dhingra wrote:<br>
<div class="im">><br>
> below is the output after reboot .. also after reboot we need to<br>
> restart the sanlock daemon manually every time, else there is no SPM selection &<br>
> hosts keep on contending & failing<br>
<br>
</div>Do you know what the problem of the sanlock daemon is, i.e. why it needs<br>
to be restarted?<br>
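One way to narrow this down is to inspect sanlock's own state right after boot, before restarting it. A diagnostic sketch using standard sanlock commands (log path is the EL6 default; no oVirt-specific lockspace names are assumed):

```shell
# Is the daemon up, and which lockspaces/resources does it currently hold?
service sanlock status
sanlock client status

# Dump sanlock's internal debug buffer -- useful after an unclean shutdown
# to see whether it is stuck acquiring or releasing a lease.
sanlock client log_dump

# Check for sanlock/wdmd errors logged during boot
grep -E 'sanlock|wdmd' /var/log/messages | tail -n 50
```

If `sanlock client status` shows a lockspace still being added or a stale resource when the host comes up, that would explain why SPM contention only succeeds after a manual restart.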
<div class="im"><br>
> [root@node1-3-3 ~]# getsebool -a | egrep -i 'nfs|sanlock'<br>
> allow_ftpd_use_nfs --> off<br>
> cobbler_use_nfs --> off<br>
> git_system_use_nfs --> off<br>
> httpd_use_nfs --> off<br>
> qemu_use_nfs --> on<br>
> rsync_use_nfs --> off<br>
> samba_share_nfs --> off<br>
> sanlock_use_fusefs --> off<br>
> sanlock_use_nfs --> off<br>
> sanlock_use_samba --> off<br>
> sge_use_nfs --> off<br>
> use_nfs_home_dirs --> on<br>
> virt_use_nfs --> off<br>
> virt_use_sanlock --> off<br>
> xen_use_nfs --> off<br>
> [root@node1-3-3 ~]# getsebool -a | egrep -i allow_execstack<br>
> allow_execstack --> on<br>
> [root@node1-3-3 ~]#<br>
<br>
</div>Seems like it was changed. Is VDSM maybe changing it?<br>
<br>
Greetings<br>
<span class="HOEnZb"><font color="#888888">fabian<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
><br>
> On Mon, Oct 21, 2013 at 7:16 PM, Fabian Deutsch <<a href="mailto:fabiand@redhat.com">fabiand@redhat.com</a>><br>
> wrote:<br>
>          On Monday, 2013-10-21 at 15:44 +0800, Anil Dhingra wrote:<br>
>          > hi<br>
>          ><br>
>          > The permission issue is resolved after changing the openfiler NFS<br>
>          share<br>
>          > permissions, but still on every reboot we need to set the below<br>
>          values<br>
>          > manually<br>
>          > Any idea how to make them permanent?<br>
>          ><br>
>          > setsebool -P virt_use_sanlock=on<br>
>          > setsebool -P virt_use_nfs=on<br>
><br>
><br>
>          Hum ... That's interesting.<br>
>          We actually set both of them to on during the installation of<br>
>          the<br>
>          ovirt-node selinux package:<br>
>          /usr/sbin/setsebool -P allow_execstack=0 \<br>
>                              virt_use_nfs=1 \<br>
>                              virt_use_sanlock=1 \<br>
>                              sanlock_use_nfs=1<br>
><br>
>          What does<br>
>          getsebool virt_use_sanlock virt_use_nfs<br>
><br>
>          say?<br>
><br>
>          - fabian<br>
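Since `setsebool -P` is apparently not surviving reboots here, a pragmatic workaround is to re-apply the booleans at boot. This is a sketch (oVirt Node's image is largely stateless, so the persisted policy store may be getting reset; the rc.local path is the standard EL6 location):

```shell
# /etc/rc.d/rc.local -- re-apply the SELinux booleans on every boot.
# -P asks for persistence, but on a stateless node image re-applying
# at boot is the safe fallback until the root cause is found.
setsebool -P virt_use_sanlock=1 virt_use_nfs=1 sanlock_use_nfs=1

# Verify the result:
getsebool virt_use_sanlock virt_use_nfs sanlock_use_nfs
```

On oVirt Node specifically, changes outside the persisted paths are lost at reboot, which would match the symptom of booleans reverting to `off`.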
><br>
>          ><br>
>          > On Wed, Oct 16, 2013 at 8:24 AM, Itamar Heim<br>
>          <<a href="mailto:iheim@redhat.com">iheim@redhat.com</a>> wrote:<br>
>          >          On 10/15/2013 11:05 AM, Anil Dhingra wrote:<br>
>          ><br>
>          >                  Hi Guys<br>
>          >                  Any known issue why we are not able to start a<br>
>          VM due to a<br>
>          >                  permission issue<br>
>          >                  on the disk image file? As per the docs, ownership<br>
>          should be<br>
>          >                  vdsm:kvm, but not<br>
>          >                  sure why it's showing the below.<br>
>          >                  Used both<br>
>           ovirt-node-iso-3.0.1-1.0.1.vdsm.el6 &<br>
>          >                  ovirt-node-iso-3.0.1-1.0.2.vdsm.el6, same<br>
>          issue<br>
>          >                  [ using NFS Domain ]<br>
>          >                  VM n0001vdap is down. Exit message: internal<br>
>          error<br>
>          >                  process exited while<br>
>          >                  connecting to monitor: qemu-kvm: -drive<br>
>          ><br>
>          file=/rhev/data-center/d09d8a3e-8ab4-42fc-84ec-86f307d144a0/1a04e13a-0ed4-40d6-a153-f7091c65d916/images/44e3fc9b-0382-4c11-b00c-35bd74032e9a/34542412-ed50-4350-8867-0d7d5f8127fd,if=none,id=drive-virtio-disk0,format=raw,serial=44e3fc9b-0382-4c11-b00c-35bd74032e9a,cache=none,werror=stop,rerror=stop,aio=threads:<br>
>          ><br>
>          >                  *could not open *disk image<br>
>          ><br>
>          */rhev/data-center*/d09d8a3e-8ab4-42fc-84ec-86f307d144a0/1a04e13a-0ed4-40d6-a153-f7091c65d916/*images*/44e3fc9b-0382-4c11-b00c-35bd74032e9a/34542412-ed50-4350-8867-0d7d5f8127fd:<br>
>          >                  *Permission denied*<br>
>          ><br>
>          ><br>
>          >                  [root@node1<br>
>          44e3fc9b-0382-4c11-b00c-35bd74032e9a]# ls<br>
>          >                  -lh<br>
>          >                  total 1.1M<br>
>          ><br>
>          >                  -rw-rw----+ 1 *vdsm 96* 6.0G 2013-10-15<br>
>          05:47<br>
>          >                  34542412-ed50-4350-8867-0d7d5f8127fd<br>
>          >                  -rw-rw----+ 1 *vdsm 96* 1.0M 2013-10-15<br>
>          05:47<br>
>          >                  34542412-ed50-4350-8867-0d7d5f8127fd.lease<br>
>          >                  -rw-rw-rw-+ 1 *vdsm 96*  268 2013-10-15<br>
>          05:47<br>
>          ><br>
>          >                  34542412-ed50-4350-8867-0d7d5f8127fd.meta<br>
>          >                  As it doesn't allow us to change permissions,<br>
>          any<br>
>          >                  alternate way for this<br>
>          ><br>
>          >                  ? Or do I need to manually set permissions in<br>
>          >                  *"/etc/libvirt/qemu.conf"*<br>
>          >                  Also, there is no such *group *with *"96"* ..<br>
>          so from<br>
>          >                  where does it pick this<br>
>          ><br>
>          >                  config?<br>
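On the numeric group question: `ls -l` prints a raw gid like *96* when no local group name maps to it, which usually means the gid came from the NFS server rather than the host. A sketch for checking this locally (the `kvm:x:36:` entry below is a hypothetical example; oVirt hosts conventionally use vdsm uid 36 and kvm gid 36):

```shell
# Does any local group name map to gid 96? (Likely not, hence the raw number.)
getent group 96 || echo "gid 96 has no local name"

# What gid does kvm have on this host?
getent group kvm || true

# Extracting the gid field from a group entry, for illustration:
entry="kvm:x:36:"                     # hypothetical local entry
gid=$(printf '%s' "$entry" | cut -d: -f3)
echo "$gid"
```

If gid 96 is being stamped on files by the NFS server, fixing the export-side uid/gid mapping is the cleaner fix than editing qemu.conf.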
>          >                  Another question is related to a SELinux<br>
>          config change<br>
>          >                  for the below 2<br>
>          ><br>
>          >                  parameters, to recover from the error "*internal<br>
>          error<br>
>          >                  Failed to open socket<br>
>          >                  to sanlock daemon: Permission denied*". I saw<br>
>          somewhere<br>
>          >                  that this is fixed,<br>
>          ><br>
>          >                  but not sure why it appears. VDSM should<br>
>          take care of<br>
>          >                  this automatically.<br>
>          >                  setsebool -P virt_use_sanlock=on<br>
>          >                  setsebool -P virt_use_nfs=on<br>
>          ><br>
>          ><br>
>          ><br>
>          ><br>
>          _______________________________________________<br>
>          >                  Users mailing list<br>
>          >                  <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
>          ><br>
>          <a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
>          ><br>
>          ><br>
>          >          have you tried:<br>
>          ><br>
>          <a href="http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues" target="_blank">http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues</a><br>
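For an NFS server feeding oVirt, the usual fix for permission-denied errors like the one above is to squash client accesses to vdsm:kvm (uid/gid 36). A sketch of the underlying export line (openfiler manages exports through its UI, so treat this as the equivalent raw config; path and subnet are placeholders):

```shell
# /etc/exports on the NFS server -- illustrative, not verbatim:
# all accesses are mapped to uid 36 (vdsm) / gid 36 (kvm) so files land
# with the ownership oVirt expects.
/mnt/vg0/ovirt-data  192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
```

After changing the export, re-export with `exportfs -ra` on the server and check the resulting file ownership from the host.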
>          ><br>
>          ><br>
>          > _______________________________________________<br>
>          > Users mailing list<br>
>          > <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
>          > <a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
><br>
><br>
><br>
><br>
><br>
<br>
<br>
</div></div></blockquote></div><br></div>