I'm trying to add 3 more nodes to an active cluster. The 3 existing nodes are
working fine.
I've tried 2 of the new nodes and they both get the same error.
It seems strange to me because the volume is mounted.
[root@ovirt5n prod vdsm]# df -h | grep rhev
10.2.2.230:/vol/ovirt_inside_export  440G   57G  384G  13%  /rhev/data-center/mnt/10.2.2.230:_vol_ovirt__inside__export
ovirt1n-gl.j2noc.com:/gv0             11T  239G   11T   3%  /rhev/data-center/mnt/glusterSD/ovirt1n-gl.j2noc.com:_gv0
engine.log says:
cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center
Default.
vdsm.log says:
2024-10-09 14:55:06,726-0700 ERROR (jsonrpc/2) [storage.dispatcher] FINISH connectStoragePool error=[Errno 13] Permission denied (dispatcher:70)
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/storage/dispatcher.py", line 57, in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/lib/python3.9/site-packages/vdsm/storage/task.py", line 93, in wrapper
    return m(self, *a, **kw)
  File "/usr/lib/python3.9/site-packages/vdsm/storage/task.py", line 1173, in prepare
    .....
  File "/usr/lib/python3.9/site-packages/ioprocess/__init__.py", line 479, in _sendCommand
    raise OSError(errcode, errstr)
PermissionError: [Errno 13] Permission denied
2024-10-09 14:55:06,726-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StoragePool.connect failed (error 302) in 2.09 seconds (__init__:300)
OS: Rocky 9
vdsm-4.50.5.1-1.el9.x86_64
Any ideas welcome.
Thanks
vdsm.log and engine.log attached.
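For anyone hitting the same trace: vdsm's ioprocess does its storage I/O as the unprivileged vdsm user (uid 36, group kvm, gid 36), so a mount that only root can traverse raises exactly this Errno 13 even though `df` shows it mounted. A minimal sketch of that failure mode, using a scratch directory as a stand-in for the real mount point:

```shell
# Sketch: a directory root can read but an unprivileged uid cannot.
# vdsm's ioprocess would raise OSError(13, 'Permission denied') on such a dir.
MNT=$(mktemp -d)        # stand-in for /rhev/data-center/mnt/<domain>
chmod 700 "$MNT"        # traversable by the owner only
stat -c '%a' "$MNT"     # prints 700: any other uid gets EACCES here
chmod 755 "$MNT"        # what a healthy domain mount point looks like
stat -c '%a' "$MNT"     # prints 755
rm -rf "$MNT"
```

Comparing `stat -c '%a'` on the real mount path between a working node and a failing node would show whether this is the difference.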
Hi Bill,
Which steps did you run through on the KVM hosts before trying to add them to the cluster?
Marcos
From: Bill James <bill.james(a)consensus.com>
Sent: Wednesday, October 9, 2024 7:02 PM
To: users <users(a)ovirt.org>
Subject: [External] : [ovirt-users] cannot access the Storage Domain(s) <UNKNOWN>
attached to the Data Center Default.
Hey,
I have had storage issues before where SELinux was the culprit. I suggest you check
what SELinux is set to on the existing nodes; perhaps they are permissive? The permission
denied might point in that direction.
Last time I had storage issues (my issues were related to migration and the sanlock
process not being able to set a lock), I changed the new node to permissive. It turned out
that the existing nodes were also set to permissive.
Hope this helps.
//J
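If it helps, a quick way to compare modes across hosts (a sketch; `getenforce` ships with libselinux-utils, and the fallback covers hosts without it):

```shell
# Report the SELinux mode on this host, tolerating hosts without the tooling.
if command -v getenforce >/dev/null 2>&1; then
    getenforce          # Enforcing, Permissive, or Disabled
else
    echo "SELinux tools not installed"
fi
```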
Thanks for the reply, I appreciate it.
Unfortunately we don't use SELinux, so that's not the issue.
[root@ovirt5n prod ~]# sestatus
SELinux status: disabled
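With SELinux out of the picture, the usual remaining suspect for Errno 13 on an NFS domain is ownership: oVirt expects the storage domain root to be owned by vdsm:kvm (36:36), and with root_squash in play the server-side export options (e.g. anonuid=36,anongid=36) matter too. A sketch of the ownership check, demonstrated on a scratch directory; the real comparison is `stat -c '%u:%g'` on the /rhev/data-center/mnt/... path on a working vs. failing node:

```shell
# Sketch: the ownership a storage-domain mount needs (vdsm:kvm = 36:36),
# shown on a scratch dir since the real mount isn't available here.
DIR=$(mktemp -d)
chown 36:36 "$DIR" 2>/dev/null || true   # needs root; harmless otherwise
stat -c '%u:%g' "$DIR"                   # a healthy domain root shows 36:36
rm -rf "$DIR"
```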
On Sun, Oct 13, 2024 at 6:22 AM change_jeeringly679--- via Users <
users(a)ovirt.org> wrote:
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G27NFGQAQDT...