
Just to close off this issue: I found the problem. My /etc/exports file had a space in it where it shouldn't have (between the * and the options). This seemed to be acceptable with older versions of CentOS 7.2, but on my newer host with the latest CentOS 7.2 kernel (3.10.0-327.13.1.el7.x86_64) it only mounted read-only. Maybe the default changed from rw to ro?

I fixed the exports file and now both versions of CentOS work:

/mount_point *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

On 4/13/16 4:39 PM, Brett I. Holcomb wrote:
On Wed, 2016-04-13 at 15:52 -0700, Bill James wrote:
[vdsm@ovirt4 test /]$ touch /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs/test
touch: cannot touch ‘/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs/test’: Read-only file system
Hmm, read-only. :-(
ovirt3-ks.test.j2noc.com:/ovirt-store/nfs on /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs type nfs4 (*rw*,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,clientaddr=10.100.108.96,local_lock=none,addr=10.100.108.33)
now to figure out why....
[root@ovirt4 test ~]# ls -la /rhev/data-center/mnt/
total 8
drwxr-xr-x 4 vdsm kvm  110 Apr 13 15:30 .
drwxr-xr-x 3 vdsm kvm   16 Apr 13 08:06 ..
drwxr-xr-x 3 vdsm kvm 4096 Mar 11 15:19 netappqa3:_vol_cloud__images_ovirt__QA__export
drwxr-xr-x 3 vdsm kvm 4096 Mar 11 15:17 netappqa3:_vol_cloud__images_ovirt__QA__ISOs
The export and ISO domains mount fine too (and rw).
ovirt-engine-3.6.4.1-1.el7.centos.noarch
On 04/13/2016 03:21 PM, Brett I. Holcomb wrote:
I have a cluster working fine with 2 nodes. I'm trying to add a third and it is complaining:
StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs'
If I try the commands manually as vdsm, they work fine and the volume mounts.
[vdsm@ovirt4 test /]$ mkdir -p /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
[vdsm@ovirt4 test /]$ sudo -n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 ovirt3-ks.test.j2noc.com:/ovirt-store/nfs /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
[vdsm@ovirt4 test /]$ df -h /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
Filesystem                                 Size  Used Avail Use% Mounted on
ovirt3-ks.test.j2noc.com:/ovirt-store/nfs  1.1T  305G  759G  29% /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
After manually mounting the NFS volumes and activating the node, it still fails.
2016-04-13 14:55:16,559 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector ] (DefaultQuartzScheduler_Worker-61) [64ceea1d] Correlation ID: 64ceea1d, Job ID: a47b74c7-2ae0-43f9-9bdf-e50963a28895, Call Stack: null, Custom Event ID: -1, Message: Host ovirt4.test.j2noc.com cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center Default. Setting Host state to Non-Operational.
Not sure what the "UNKNOWN" storage is, unless it's one I deleted earlier that somehow isn't really removed.
Also tried "Reinstall" on the node; same issue.
Attached are engine and vdsm logs.
Thanks.
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On Wed, 2016-04-13 at 15:09 -0700, Bill James wrote the above.

Try adding anonuid=36,anongid=36 to the mount and make sure 36:36 is the owner:group on the mount point. I found this, http://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/, helpful.
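That ownership check can be sketched as a small helper. On oVirt hosts, uid 36 / gid 36 correspond to the vdsm user and kvm group; the helper name and example path below are illustrative, not part of any oVirt tooling:

```shell
# Sketch: verify that the directory backing an oVirt NFS export is owned
# by uid:gid 36:36 (vdsm:kvm). Helper name and paths are illustrative.
check_vdsm_ownership() {
    # GNU stat: print numeric uid:gid of the path; fail if path is missing.
    owner=$(stat -c '%u:%g' "$1") || return 1
    echo "$1 is owned by $owner"
    # Succeed only when the owner is exactly 36:36.
    [ "$owner" = "36:36" ]
}

# On the NFS server, e.g.:
#   check_vdsm_ownership /ovirt-store/nfs || chown 36:36 /ovirt-store/nfs
```

Run it on the server side, against the exported directory itself, since client-side mounts only reflect what the server presents.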
Try adding anonuid=36,anongid=36 to the NFS mount options.
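Closing the loop on Bill's root cause at the top of the thread: in /etc/exports, whitespace between the client spec and the option list makes the options apply to a separate anonymous entry, while the named client falls back to the read-only defaults. A hypothetical one-liner lint (the helper name is mine, not part of nfs-utils) to spot such lines:

```shell
# Flag /etc/exports lines where whitespace separates a client spec from
# its option list, e.g. "/mount_point * (rw,...)" instead of "*(rw,...)".
lint_exports() {
    # Match: a non-space character, then whitespace, then an opening paren.
    grep -nE '[^[:space:]][[:space:]]+\(' "$1"
}

# lint_exports /etc/exports   # any output means a suspect line
```

After fixing the file, re-export with `exportfs -ra` and confirm the effective options with `exportfs -v`.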