[Users] NFS version 3 or 4 when mounting predefined engine ISO?
Gianluca Cecchi
gianluca.cecchi at gmail.com
Wed Jan 16 14:57:53 UTC 2013
Hello,
what NFS version should the default ISO domain created on the engine use in 3.2? Can I change it afterwards?
During engine setup I was only asked whether I wanted it or not
(F18 with the ovirt-nightly repo and 3.2.0-1.20130113.gitc954518):
Configure NFS share on this server to be used as an ISO Domain? ['yes'|'no'] [yes] :
Local ISO domain path [/var/lib/exports/iso] : /ISO
ok
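As far as I understand, on the NFS side the setup only needs to add an export for that path, so /etc/exports on the engine should now contain a line along these lines (the ACL spec here is my guess; I have not verified exactly what setup writes):

# /etc/exports -- assumed content, to be checked on the engine
/ISO    0.0.0.0/0.0.0.0(rw)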
Current situation on the engine regarding iptables:
[root@f18engine ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmptype 255
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:22
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:80
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:443
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW udp dpt:111
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:111
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW udp dpt:892
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:892
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW udp dpt:875
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:875
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW udp dpt:662
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:662
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:2049
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW tcp dpt:32803
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW udp dpt:32769
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
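If I read these rules correctly, apart from 111 (rpcbind) and 2049 (nfsd) they match the fixed port assignments one normally pins in /etc/sysconfig/nfs, so that the NFSv3 side services (mountd, statd, lockd, rquotad) do not land on random ports blocked by the firewall. Assuming the engine's file really sets them (still to be verified), that would be:

# /etc/sysconfig/nfs -- assumed values, matching the iptables rules above
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662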
And regarding NFS:
[root@f18engine ~]# ps -ef|grep [n]fs
root 1134 2 0 Jan15 ? 00:00:00 [nfsd4]
root 1135 2 0 Jan15 ? 00:00:00 [nfsd4_callbacks]
root 1136 2 0 Jan15 ? 00:00:00 [nfsd]
root 1137 2 0 Jan15 ? 00:00:00 [nfsd]
root 1138 2 0 Jan15 ? 00:00:00 [nfsd]
root 1139 2 0 Jan15 ? 00:00:00 [nfsd]
root 1140 2 0 Jan15 ? 00:00:00 [nfsd]
root 1141 2 0 Jan15 ? 00:00:00 [nfsd]
root 1142 2 0 Jan15 ? 00:00:00 [nfsd]
root 1143 2 0 Jan15 ? 00:00:00 [nfsd]
[root@f18engine ~]# systemctl status rpcbind.service
rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
   Active: active (running) since Tue, 2013-01-15 13:38:46 CET; 1 day and 2h ago
  Process: 1098 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited, status=0/SUCCESS)
 Main PID: 1128 (rpcbind)
   CGroup: name=systemd:/system/rpcbind.service
           └ 1128 /sbin/rpcbind -w

Jan 15 13:38:46 f18engine.ceda.polimi.it systemd[1]: Started RPC bind service.
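To see whether the v3 side services are actually registered on those ports, I suppose one can simply query rpcbind from the host:

rpcinfo -p f18engine

If v3 is fully up, the output should list mountd, nlockmgr and status entries besides nfs itself.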
When the host tries to attach the ISO domain, it fails.
The host is F18 with ovirt-nightly and vdsm-4.10.3-0.78.gitb005b54.fc18.x86_64.
I noticed:
[root@f18ovn03 ]# ps -ef|grep mount
root      1692     1  0 14:39 ?        00:00:00 /usr/sbin/rpc.mountd
root      6616  2334  0 15:17 ?        00:00:00 /usr/bin/sudo -n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 f18engine:/ISO /rhev/data-center/mnt/f18engine:_ISO
root      6617  6616  0 15:17 ?        00:00:00 /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 f18engine:/ISO /rhev/data-center/mnt/f18engine:_ISO
root      6618  6617  0 15:17 ?        00:00:00 /sbin/mount.nfs f18engine:/ISO /rhev/data-center/mnt/f18engine:_ISO -o rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3
root      6687  4147  0 15:17 pts/0    00:00:00 grep --color=auto mount
The problem here is the nfsvers=3 option.
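I suppose that option is chosen on the engine side when it tells VDSM to connect to the storage, not on the node itself; if I remember correctly the connection details are kept in the engine database, so something like this should show it (table and column names from memory, they may be off):

su - postgres -c "psql engine -c 'select connection, nfs_version from storage_server_connections;'"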
In fact, if I manually run on the node:
[root@f18ovn03 ]# mount -t nfs -o nfsvers=4 f18engine:/ISO /p
--> OK
and
[root@f18ovn03 ]# mount
...
f18engine:/ISO on /p type nfs4 (rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.4.4.59,local_lock=none,addr=10.4.4.60)
while
# mount -t nfs -o nfsvers=3 f18engine:/ISO /p
--> KO, the command stalls
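For what it's worth, my understanding is that a v3 mount also needs the separate MOUNT and NLM (lockd) protocols to get through, while v4 multiplexes everything over TCP 2049; so these standard nfs-utils commands, run from the node, should tell whether that side is what stalls:

showmount -e f18engine          # uses the MOUNT protocol, same path a v3 mount takes
rpcinfo -t f18engine nlockmgr   # checks lockd, which a v3 mount also needs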
What should I change: the engine, the host, or both?
Thanks in advance,
Gianluca