[ovirt-users] storage domain and nfs problem
Douglas Schilling Landgraf
dougsland at redhat.com
Thu Jan 14 15:31:02 UTC 2016
On 01/14/2016 08:28 AM, alireza sadeh seighalan wrote:
> hi again
>
>
> The stale file handle was related to SELinux. I ran setenforce 0 and
> restarted NFS on hv03, 04, 05 and 07, and those messages disappeared :)
You can use audit2allow -a to identify these SELinux denials, then
generate a policy module with -M and install it with semodule -i. I hope
you didn't disable SELinux.
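A minimal sketch of that sequence, assuming the denials are already in
the audit log and using an arbitrary module name (ovirt_nfs_local is
just an example):

    # review the SELinux denials collected in /var/log/audit/audit.log
    audit2allow -a
    # generate a local policy module from those denials
    audit2allow -a -M ovirt_nfs_local
    # install the generated module, then go back to enforcing mode
    semodule -i ovirt_nfs_local.pp
    setenforce 1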
>
> But on some hosts, like hv03, when I run df -h it doesn't show the NFS path!
Could you please provide the vdsm logs from these hosts, taken after the
NFS restart?
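On the hosts the vdsm log normally sits under /var/log/vdsm/, and the
engine side logs to /var/log/ovirt-engine/engine.log; assuming a default
install, something like this captures the window around the restart:

    # collect the tail of the host log written after the NFS restart
    tail -n 1000 /var/log/vdsm/vdsm.log > vdsm-after-nfs-restart.log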
>
>
>
>
> On Thu, Jan 14, 2016 at 2:46 PM, alireza sadeh seighalan
> <seighalani at gmail.com> wrote:
>
> Hi everyone,
>
> I have a bad problem with oVirt 3.6.1. My hosts go non-responsive or
> non-operational. I get stale file handle errors from df -h on my main
> host and on some other hosts too:
>
> [root@mainhv ~]# df -h
> df: ‘/rhev/data-center/mnt/hv03:_VM’: Stale file handle
> df: ‘/rhev/data-center/mnt/hv04:_VM’: Stale file handle
> df: ‘/rhev/data-center/mnt/hv05:_VM’: Stale file handle
> df: ‘/rhev/data-center/mnt/hv07:_VM’: Stale file handle
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda6 4.0G 112M 3.9G 3% /
> devtmpfs 24G 0 24G 0% /dev
> tmpfs 24G 4.0K 24G 1% /dev/shm
> tmpfs 24G 41M 24G 1% /run
> tmpfs 24G 0 24G 0% /sys/fs/cgroup
>
>
> I attached vdsm.log and engine.log too.
>
>
> My server specification:
> OS: CentOS 7.1, updated
> oVirt 3.6.1
> vdsm: 4.17.13-0.el7.centos
> libvirt: 1.2.8-16.el7_1.5
> data storage: /VM, on an XFS filesystem
>
> fstab:
> uuid ...... /VM xfs defaults 0 0
>
>
> Thanks in advance
>
>
>
>