Weirdness with attempting NFS mount

I'm attempting to mount a share from a TrueNAS server in oVirt through the Storage > Storage Domains UI as a New Domain. Whenever I try to add the domain (as server:share), it kicks back a Permission Denied error. Looking at the vdsm.log file, I see the snippet below:

2023-11-17 10:05:32,059-0600 INFO (jsonrpc/7) [vdsm.api] START connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': '192.168.1.11:/mnt/POOL0', 'ipv6_enabled': 'false', 'id': '00000000-0000-0000-0000-000000000000', 'user': '', 'tpgt': '1'}]) from=::ffff:192.168.1.16,50468, flow_id=31d18c09-2e12-480f-8d6d-722a9fe78029, task_id=05818e9d-562f-4ca7-a19b-5c3b5a02bae8 (api:31)
2023-11-17 10:05:32,060-0600 INFO (jsonrpc/7) [storage.storageServer] Creating directory '/rhev/data-center/mnt/192.168.1.11:_mnt_POOL0' (storageServer:217)
2023-11-17 10:05:32,060-0600 INFO (jsonrpc/7) [storage.fileutils] Creating directory: /rhev/data-center/mnt/192.168.1.11:_mnt_POOL0 mode: None (fileUtils:213)
2023-11-17 10:05:32,060-0600 INFO (jsonrpc/7) [storage.mount] mounting 192.168.1.11:/mnt/POOL0 at /rhev/data-center/mnt/192.168.1.11:_mnt_POOL0 (mount:190)
2023-11-17 10:05:32,150-0600 WARN (jsonrpc/7) [storage.oop] Permission denied for directory: /rhev/data-center/mnt/192.168.1.11:_mnt_POOL0 with permissions:7 (outOfProcess:176)
2023-11-17 10:05:32,150-0600 INFO (jsonrpc/7) [storage.mount] unmounting /rhev/data-center/mnt/192.168.1.11:_mnt_POOL0 (mount:198)
2023-11-17 10:05:32,200-0600 ERROR (jsonrpc/7) [storage.storageServer] Could not connect to storage server (storageServer:75)
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/storage/fileSD.py", line 64, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/lib/python3.9/site-packages/vdsm/storage/outOfProcess.py", line 178, in validateAccess
    raise OSError(errno.EACCES, os.strerror(errno.EACCES))
PermissionError: [Errno 13] Permission denied

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/storage/storageServer.py", line 73, in connect_all
    con.connect()
  File "/usr/lib/python3.9/site-packages/vdsm/storage/storageServer.py", line 507, in connect
    return self._mountCon.connect()
  File "/usr/lib/python3.9/site-packages/vdsm/storage/storageServer.py", line 241, in connect
    six.reraise(t, v, tb)
  File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
    raise value
  File "/usr/lib/python3.9/site-packages/vdsm/storage/storageServer.py", line 233, in connect
    fileSD.validateDirAccess(
  File "/usr/lib/python3.9/site-packages/vdsm/storage/fileSD.py", line 75, in validateDirAccess
    raise se.StorageServerAccessPermissionError(dirPath)
vdsm.storage.exception.StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/192.168.1.11:_mnt_POOL0'

Stepping through it, am I correct that I'm seeing vdsm fail to create the local directory for the NFS mount? It was previously able to mount an NFS share from the host to itself when I initially set up the server for testing. When I do an ls -l on the directory it's trying to create the mount point in, the ownership looks correct:
[root@ovirt ~]# cd /rhev/data-center/
[root@ovirt data-center]# ls
c7d57de0-6ba8-11ee-9a48-00163e1989b9  mnt
[root@ovirt data-center]# ls -l
total 0
drwxr-xr-x. 2 vdsm kvm 66 Nov 15 23:32 c7d57de0-6ba8-11ee-9a48-00163e1989b9
drwxr-xr-x. 3 vdsm kvm 59 Nov 17 10:05 mnt
[root@ovirt data-center]# cd mnt
[root@ovirt mnt]# ls -l
total 0
drwxr-xr-x. 3 vdsm kvm 50 Nov 17 10:14 ovirt.theclocktower.home:_home_ovirt__storage
[root@ovirt mnt]#

Is there something missing? Something that got misconfigured, maybe? Or something I can configure to fix this?
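For reference, here's roughly how I could inspect what the export actually presents once it's mounted, since the "Permission denied ... with permissions:7" warning in the log fires after the mount succeeds, i.e. against the root of the exported filesystem rather than against /rhev/data-center itself. A sketch using a scratch mount point (not the one vdsm manages):

[root@ovirt ~]# mkdir -p /mnt/nfs-test
[root@ovirt ~]# mount -t nfs 192.168.1.11:/mnt/POOL0 /mnt/nfs-test
[root@ovirt ~]# ls -ldn /mnt/nfs-test     # numeric uid/gid and mode as the server exports them
[root@ovirt ~]# umount /mnt/nfs-test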

Also, I can mount that NFS share just fine from the CLI or Cockpit's Storage interface to the /mnt directory. I'm logged into Cockpit and SSH'd into the oVirt host as root, though, so that may be why those mounts work without issue. Still, the oVirt management interface should be using the vdsm service, right? And that service should have rights to the /rhev/data-center/ directory?
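Something I could try next, as a sketch: repeat that manual mount test as the vdsm user rather than root, since vdsm validates access as uid/gid 36:36, and an export that squashes or restricts those IDs would fail in exactly this way even though root can mount and browse it (same scratch mount point as above):

[root@ovirt ~]# mount -t nfs 192.168.1.11:/mnt/POOL0 /mnt/nfs-test
[root@ovirt ~]# sudo -u vdsm ls /mnt/nfs-test
[root@ovirt ~]# sudo -u vdsm touch /mnt/nfs-test/access-test && sudo -u vdsm rm /mnt/nfs-test/access-test
[root@ovirt ~]# umount /mnt/nfs-test

If those commands fail as vdsm but work as root, the problem is on the export/dataset side rather than in /rhev/data-center/.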

I have the same setup running here, and the only hiccup I experienced was the permission set on TrueNAS. You'll need to modify your filesystem permissions to allow read/write for user/group IDs 36:36. I created a local user 'ovirt' and a corresponding group, both with ID 36, on TrueNAS and granted only that user permission to access the share. I suspect that'll do the trick.

Patrick Dubois
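If you have shell access on the TrueNAS box, the end state I described above roughly amounts to the exported dataset being owned by that uid/gid pair. A minimal sketch, assuming /mnt/POOL0 is the dataset being exported (I set mine up through the TrueNAS permissions UI, so treat this as the equivalent result rather than the exact steps):

# run on the TrueNAS host; adjust the dataset path to match your export
chown -R 36:36 /mnt/POOL0
chmod 0755 /mnt/POOL0

Alternatively, the NFS share's maproot/mapall settings can be pointed at the uid-36 user so client accesses are mapped to it, but owning the dataset by 36:36 is the simpler fix.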

Okay, I don't see the person's reply to this thread (maybe it was a direct reply to me?), but now I can authenticate. Except... now it's complaining it can't get sanlock:

2023-11-17 17:34:43,315-0600 INFO (jsonrpc/4) [storage.sanlock] Initializing sanlock for domain a476e565-d06a-473f-bd35-d052cbca7653 path=/rhev/data-center/mnt/192.168.1.11:_mnt_POOL0/a476e565-d06a-473f-bd35-d052cbca7653/dom_md/ids alignment=1048576 block_size=512 io_timeout=10 (clusterlock:325)
2023-11-17 17:34:43,317-0600 ERROR (jsonrpc/4) [storage.sanlock] Cannot initialize lock for domain a476e565-d06a-473f-bd35-d052cbca7653 (clusterlock:347)
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/storage/clusterlock.py", line 333, in initLock
    sanlock.write_lockspace(
sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such device')
2023-11-17 17:34:43,317-0600 INFO (jsonrpc/4) [vdsm.api] FINISH createStorageDomain error=Could not initialize cluster lock: () from=::ffff:192.168.1.16,50468, flow_id=228696e3, task_id=f106211c-2e11-48de-8abc-42925ba70a0c (api:35)
2023-11-17 17:34:43,318-0600 ERROR (jsonrpc/4) [storage.taskmanager.task] (Task='f106211c-2e11-48de-8abc-42925ba70a0c') Unexpected error (task:860)
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/storage/clusterlock.py", line 333, in initLock
    sanlock.write_lockspace(
sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such device')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/storage/task.py", line 867, in _run
    return fn(*args, **kargs)
  File "<decorator-gen-123>", line 2, in createStorageDomain
  File "/usr/lib/python3.9/site-packages/vdsm/common/api.py", line 33, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/vdsm/storage/hsm.py", line 2315, in createStorageDomain
    newSD = create(
  File "/usr/lib/python3.9/site-packages/vdsm/storage/nfsSD.py", line 103, in create
    fsd.initSPMlease()
  File "/usr/lib/python3.9/site-packages/vdsm/storage/sd.py", line 1215, in initSPMlease
    return self._manifest.initDomainLock()
  File "/usr/lib/python3.9/site-packages/vdsm/storage/sd.py", line 628, in initDomainLock
    self._domainLock.initLock(self.getDomainLease())
  File "/usr/lib/python3.9/site-packages/vdsm/storage/clusterlock.py", line 349, in initLock
    raise se.ClusterLockInitError()
vdsm.storage.exception.ClusterLockInitError: Could not initialize cluster lock: ()
2023-11-17 17:34:43,318-0600 INFO (jsonrpc/4) [storage.taskmanager.task] (Task='f106211c-2e11-48de-8abc-42925ba70a0c') aborting: Task is aborted: 'value=Could not initialize cluster lock: () abortedcode=701' (task:1165)
2023-11-17 17:34:43,318-0600 ERROR (jsonrpc/4) [storage.dispatcher] FINISH createStorageDomain error=Could not initialize cluster lock: () (dispatcher:66)
2023-11-17 17:34:43,318-0600 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 701) in 2.46 seconds (__init__:300)

"No such device"? And in sanlock.log:

2023-11-17 17:20:30 150756 [1038]: open error -13 EACCES: no permission to open /rhev/data-center/mnt/192.168.1.11:_mnt_POOL0/dcd1274e-f8df-4f61-98bd-66ad335f7c94/dom_md/ids
2023-11-17 17:20:30 150756 [1038]: check that daemon user sanlock 179 group sanlock 179 has access to disk or file.
2023-11-17 17:34:43 151609 [1038]: open error -13 EACCES: no permission to open /rhev/data-center/mnt/192.168.1.11:_mnt_POOL0/a476e565-d06a-473f-bd35-d052cbca7653/dom_md/ids
2023-11-17 17:34:43 151609 [1038]: check that daemon user sanlock 179 group sanlock 179 has access to disk or file.

Why does so much of this need my personal touch to set up? Shouldn't access through 36:36 be sufficient? Sorry, I'm just not happy that something I thought would be a straightforward process is turning out to be more involved than expected.
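In case it's useful, this seems to be the kind of check the sanlock.log hint is asking for: confirming that the sanlock daemon's user (uid 179, which on an oVirt host is normally also a supplementary member of the kvm group, gid 36) can actually reach the ids file while the mount is still up. A sketch using the path from my log:

[root@ovirt ~]# id sanlock
[root@ovirt ~]# ls -l /rhev/data-center/mnt/192.168.1.11:_mnt_POOL0/a476e565-d06a-473f-bd35-d052cbca7653/dom_md/ids
[root@ovirt ~]# sudo -u sanlock dd if=/rhev/data-center/mnt/192.168.1.11:_mnt_POOL0/a476e565-d06a-473f-bd35-d052cbca7653/dom_md/ids of=/dev/null bs=512 count=1

If the dd fails with EACCES, the export or the dataset ACL is presumably limited to the single uid-36 user, so a uid-179 client like sanlock gets bounced; allowing the group as well, or squashing all client IDs to 36:36 on the share, would be possible ways around that.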

Finally, it let me create the storage domain. I went in and cleared out all of the previous attempts (which it apparently didn't know to clear out on its own), and after that it created successfully.
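In case it helps anyone following along: the failed attempts show up as UUID-named directories under the export root, so clearing them out is something along these lines (the UUID below is taken from my earlier sanlock.log purely as an example; double-check which directories are actually stale before deleting anything):

[root@ovirt ~]# mount -t nfs 192.168.1.11:/mnt/POOL0 /mnt/nfs-test
[root@ovirt ~]# ls -l /mnt/nfs-test
[root@ovirt ~]# rm -rf /mnt/nfs-test/dcd1274e-f8df-4f61-98bd-66ad335f7c94
[root@ovirt ~]# umount /mnt/nfs-test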