Make sure nfs-lock, nfs-server and rpcbind are running.
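For example, assuming a systemd-based host (unit names vary by distro, and
nfs-lock is rpc-statd on some newer releases):

  systemctl status rpcbind nfs-lock nfs-server
  systemctl start rpcbind nfs-lock nfs-server
  systemctl enable rpcbind nfs-lock nfs-server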
Run rpcinfo -p and make sure the ports it lists are allowed through the
firewall. It looks like you are using firewalld rather than plain iptables.
Check the firewall settings on both the host and the client.
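If it is firewalld, something like this on the NFS server should open the
relevant services (nfs, rpc-bind and mountd are the stock firewalld service
names):

  firewall-cmd --permanent --add-service=nfs
  firewall-cmd --permanent --add-service=rpc-bind
  firewall-cmd --permanent --add-service=mountd
  firewall-cmd --reload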
Make sure you have the following export options in your /etc/exports file:
(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
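A complete line, borrowing the /nfs/data path from your showmount output
below (adjust to your actual export), would look like:

  /nfs/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

then apply it with exportfs -ra.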
Restart the services and check showmount -e from the client end.
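For example (again assuming systemd unit names, with node1 as the NFS
server):

  systemctl restart rpcbind nfs-server   # on node1
  showmount -e node1                     # on node2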
Grab the nfscheck.py script from oVirt's GitHub repository and run it; it
will check your NFS mounts.
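If I remember right it just takes the export as an argument, something like
the following - but check the repository, as the exact script name and
arguments may differ:

  python nfscheck.py node1:/nfs/data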
On 16 March 2017 at 18:22, Yedidyah Bar David <didi(a)redhat.com> wrote:
On Wed, Mar 15, 2017 at 5:45 PM, Angel R. Gonzalez <angel.gonzalez(a)uam.es> wrote:
> Hello,
> I've installed an engine server and 2 host nodes. After installing the 2
> nodes I configured an NFS storage domain on the first node. After a few
> minutes, the second host went down and I cannot bring it back online.
>
> The log in the engine shows:
>> Host node2 cannot access the Storage Domain(s) nfsexport_node1 attached
>> to the Data Center Labs. Setting Host state to Non-Operational.
>
> The nfsexport Storage Domain Format is V4, but in /etc/nfsmount.conf I have:
> Defaultproto=tcp
> Defaultvers=3
> Nfsvers=3
>
> Also I've added the line
> NFS4_SUPPORT="no"
> to the /etc/sysconfig/nfs file.
>
> Node1's iptables rules are:
>
> ACCEPT     all  --  anywhere   anywhere   state RELATED,ESTABLISHED
> ACCEPT     icmp --  anywhere   anywhere
> ACCEPT     all  --  anywhere   anywhere
> ACCEPT     tcp  --  anywhere   anywhere   tcp dpt:54321
> ACCEPT     tcp  --  anywhere   anywhere   tcp dpt:54322
> ACCEPT     tcp  --  anywhere   anywhere   tcp dpt:sunrpc
> ACCEPT     udp  --  anywhere   anywhere   udp dpt:sunrpc
> ACCEPT     tcp  --  anywhere   anywhere   tcp dpt:ssh
> ACCEPT     udp  --  anywhere   anywhere   udp dpt:snmp
> ACCEPT     tcp  --  anywhere   anywhere   tcp dpt:websm
> ACCEPT     tcp  --  anywhere   anywhere   tcp dpt:16514
> ACCEPT     tcp  --  anywhere   anywhere   multiport dports rockwell-csp2
> ACCEPT     tcp  --  anywhere   anywhere   multiport dports rfb:6923
> ACCEPT     tcp  --  anywhere   anywhere   multiport dports 49152:49216
> ACCEPT     tcp  --  anywhere   anywhere   tcp dpt:nfs
> ACCEPT     udp  --  anywhere   anywhere   udp dpt:nfs
> ACCEPT     tcp  --  anywhere   anywhere   tcp dpt:mountd
> ACCEPT     udp  --  anywhere   anywhere   udp dpt:mountd
> REJECT     all  --  anywhere   anywhere   reject-with icmp-host-prohibited
>
> And the output of the showmount -e command in a terminal on node1 is:
>>/nfs/data *
>
> But the output of showmount -e node1 in a terminal on node2 is:
>> clnt_create: RPC: Port mapper failure - Unable to receive: errno 113
>> (No route to host)
>
> Any help?
Sounds like a general NFS issue, nothing specific to oVirt.
You might want to use the normal debugging means - tcpdump, strace,
google :-), etc.
I think your last error is because you need the portmapper port (111) open.
You should see this easily with tcpdump or strace.
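For example, running something like this on node1 while retrying showmount
from node2 should show whether the portmapper traffic arrives and whether
it gets rejected (replace eth0 with your actual interface); errno 113
usually means an ICMP host-prohibited reject, like the one the final REJECT
rule in your iptables listing produces:

  tcpdump -n -i eth0 port 111 or icmp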
Also please note that it's not considered a good idea to use one of the
hosts as an NFS server, although some people happily do that. See also:
https://lwn.net/Articles/595652/
Best,
>
> Thank you in advance.
>
> Ángel González
>
--
Didi
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Ian Neilsen
Mobile: 0424 379 762
LinkedIn: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen