Re: [External] : cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center Default.
by Bill James
basically following these steps:
https://www.ovirt.org/download/install_on_rhel.html
How do I know what storage domain it is saying it can't access?
From what I see in the engine log, it's:
2024-10-10 08:40:09,219-07 ERROR
[org.ovirt.engine.core.bll.InitVdsOnUpCommand]
(EE-ManagedThreadFactory-engine-Thread-14) [
] Could not connect host 'ovirt4n.j2noc.com' to pool 'Default': Error
storage pool connection: ("spUUID=16933ed6-28dc-11ef-baff-c4cbe1c7b014,
msdUUID=9df6c434-c848-4987-afd2-9fd8458ed78a, masterVersion=1, hostID=4,
domainsMap={'9cc85468-53f3-49a9-bf05-359b0d715fd3': 'active',
'9df6c434-c848-4987-afd2-9fd8458ed78a': 'active'}",)
9df6c434-c848-4987-afd2-9fd8458ed78a is ovirt1n-gv0, which is already mounted and working fine.
[root@ovirt5n prod vdsm]# df -h |grep gv0
ovirt1n-gl.j2noc.com:/gv0 11T 239G 11T 3% /rhev/data-center/mnt/glusterSD/ovirt1n-gl.j2noc.com:_gv0
[root@ovirt5n prod vdsm]# cd /rhev/data-center/mnt/glusterSD/ovirt1n-gl.j2noc.com:_gv0
[root@ovirt5n prod ovirt1n-gl.j2noc.com:_gv0]# ls -l
total 4
drwxr-xr-x 5 vdsm kvm 4096 Sep 13 11:27 9df6c434-c848-4987-afd2-9fd8458ed78a
[root@ovirt5n prod ovirt1n-gl.j2noc.com:_gv0]# cd 9df6c434-c848-4987-afd2-9fd8458ed78a/images/
[root@ovirt5n prod images]# ls -l
total 48
drwxr-xr-x 2 vdsm kvm 4096 Sep 30 17:33 09393484-e5e8-49cc-bce4-393b01d84ebf
drwxr-xr-x 2 vdsm kvm 4096 Sep 13 14:41 0b761aa1-f61f-422f-acf4-43315745c671
drwxr-xr-x 2 vdsm kvm 4096 Sep 17 11:58 109e1d23-3007-424e-a787-56b5c2631e58
drwxr-xr-x 2 vdsm kvm 4096 Sep 13 15:53 11e7c409-686e-4e71-b9e8-8bb503fc91c0
drwxr-xr-x 2 vdsm kvm 4096 Sep 17 12:00 2e84682f-9351-4401-99fa-e58575567136
drwxr-xr-x 2 vdsm kvm 4096 Sep 13 14:46 8f1cd1ad-9836-4abd-bba8-e569ec38311e
drwxr-xr-x 2 vdsm kvm 4096 Sep 13 14:41 aaceb015-e849-4b40-bb4a-22af2ccb8a9d
drwxr-xr-x 2 vdsm kvm 4096 Sep 13 15:23 ada1430e-ba3d-4b02-85c8-d0ee01f5eb37
drwxr-xr-x 2 vdsm kvm 4096 Sep 17 11:59 b1950739-d63d-41e7-8b5b-c00279b6453b
drwxr-xr-x 2 vdsm kvm 4096 Sep 13 15:26 bb6c8234-23e9-47f4-93e2-5ebf6a028320
drwxr-xr-x 2 vdsm kvm 4096 Sep 30 16:55 c15562ca-a94e-4e8b-a957-81eb0441d9da
drwxr-xr-x 2 vdsm kvm 4096 Sep 30 17:33 cb1ffbda-229a-4393-a10a-b2e67bb80bfc
[root@ovirt5n prod images]#
Why does it say it can't access it?
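A useful next check, given that the path is there but connectStoragePool fails with EACCES, is whether the vdsm user itself can traverse the mounts; oVirt expects storage domain mount points to be accessible to vdsm:kvm (uid/gid 36:36). A rough sketch, run on the new host, using the paths from the df output above:

sudo -u vdsm ls /rhev/data-center/mnt/glusterSD/ovirt1n-gl.j2noc.com:_gv0
sudo -u vdsm ls /rhev/data-center/mnt/10.2.2.230:_vol_ovirt__inside__export
# numeric ownership should be 36 36 on both mount points
ls -ldn /rhev/data-center/mnt/glusterSD/ovirt1n-gl.j2noc.com:_gv0 \
        /rhev/data-center/mnt/10.2.2.230:_vol_ovirt__inside__export

If either sudo -u vdsm command fails with Permission denied, the EACCES is coming from the storage side (export options or ownership) rather than from the engine.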
On Thu, Oct 10, 2024 at 5:22 AM Marcos Sungaila <marcos.sungaila(a)oracle.com>
wrote:
> Hi Bill,
>
>
>
> Which steps did you run through on the KVM hosts before trying to add them to
> the Cluster?
>
>
>
> Marcos
>
>
>
> *From:* Bill James <bill.james(a)consensus.com>
> *Sent:* Wednesday, October 9, 2024 7:02 PM
> *To:* users <users(a)ovirt.org>
> *Subject:* [External] : [ovirt-users] cannot access the Storage Domain(s)
> <UNKNOWN> attached to the Data Center Default.
>
>
>
> I'm trying to add 3 more nodes to an active cluster. 3 previous nodes are
> working fine.
>
> I've tried 2 new nodes and they get the same error.
>
> It seems strange to me because the volume is mounted.
>
>
>
> [root@ovirt5n prod vdsm]# df -h|grep rhev
> 10.2.2.230:/vol/ovirt_inside_export 440G 57G 384G 13% /rhev/data-center/mnt/10.2.2.230:_vol_ovirt__inside__export
> ovirt1n-gl.j2noc.com:/gv0 11T 239G 11T 3% /rhev/data-center/mnt/glusterSD/ovirt1n-gl.j2noc.com:_gv0
>
>
>
> engine.log says:
>
> cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center
> Default.
>
>
>
> vdsm.log says:
>
> 2024-10-09 14:55:06,726-0700 ERROR (jsonrpc/2) [storage.dispatcher] FINISH
> connectStoragePool error=[Errno 13] Permission denied (dispatcher:70)
> Traceback (most recent call last):
> File "/usr/lib/python3.9/site-packages/vdsm/storage/dispatcher.py", line
> 57, in wrapper
> result = ctask.prepare(func, *args, **kwargs)
> File "/usr/lib/python3.9/site-packages/vdsm/storage/task.py", line 93, in
> wrapper
> return m(self, *a, **kw)
> File "/usr/lib/python3.9/site-packages/vdsm/storage/task.py", line 1173,
> in prepare
> .....
>
> File "/usr/lib/python3.9/site-packages/ioprocess/__init__.py", line 479,
> in _sendCommand
> raise OSError(errcode, errstr)
> PermissionError: [Errno 13] Permission denied
> 2024-10-09 14:55:06,726-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.connect failed (error 302) in 2.09 seconds (__init__:300)
>
>
>
> OS: Rocky 9
>
> vdsm-4.50.5.1-1.el9.x86_64
>
>
>
> Any ideas welcome.
>
> Thanks
>
>
>
> vdsm.log and engine.log attached.
>
>
>
Error during SSO authentication access_denied : Cannot authenticate user No valid profile found in credentials..
by devis@gmx.com
Hello,
I cannot deploy the hosted engine because I get "Error during SSO authentication access_denied" during the migration of the VM from local to shared storage. I'm using an oVirt Node Master - Latest master, experimental ISO, specifically ovirt-node-ng-installer-4.5.6-2024081806.el9.iso.
Below is the log:
[ INFO ] ok: [localhost -> 192.168.222.238]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove cloud-init package]
[ INFO ] changed: [localhost -> 192.168.222.238]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if he_force_ip4 and he_force_ip6 are set at the same time]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Prepare getent key]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Trigger hosted engine OVF update and enable the serial console]
[ ERROR ] ovirtsdk4.AuthError: Error during SSO authentication access_denied : Cannot authenticate user No valid profile found in credentials..
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error during SSO authentication access_denied : Cannot authenticate user No valid profile found in credentials.."}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
Can you help me?
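The access_denied at the "Trigger hosted engine OVF update" step points to the setup role failing to authenticate against the engine API with the admin credentials it was given. A quick way to test those credentials by hand, assuming the engine VM is still reachable at the address shown in the log (192.168.222.238) and using a placeholder for the admin@internal password:

curl -k --data 'grant_type=password&scope=ovirt-app-api&username=admin@internal&password=YOUR_PASSWORD' https://192.168.222.238/ovirt-engine/sso/oauth/token

A JSON reply containing an access_token means the credentials work and the failure lies elsewhere; an error reply reproduces the same SSO problem outside the installer.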
future versions of oVirt
by Diggy Mc
Where can I find information on future versions of oVirt so I can see its direction and progress?
cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center Default.
by Bill James
I'm trying to add 3 more nodes to an active cluster. 3 previous nodes are
working fine.
I've tried 2 new nodes and they get the same error.
It seems strange to me because the volume is mounted.
[root@ovirt5n prod vdsm]# df -h|grep rhev
10.2.2.230:/vol/ovirt_inside_export 440G 57G 384G 13% /rhev/data-center/mnt/10.2.2.230:_vol_ovirt__inside__export
ovirt1n-gl.j2noc.com:/gv0 11T 239G 11T 3% /rhev/data-center/mnt/glusterSD/ovirt1n-gl.j2noc.com:_gv0
engine.log says:
cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center
Default.
vdsm.log says:
2024-10-09 14:55:06,726-0700 ERROR (jsonrpc/2) [storage.dispatcher] FINISH
connectStoragePool error=[Errno 13] Permission denied (dispatcher:70)
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/vdsm/storage/dispatcher.py", line
57, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File "/usr/lib/python3.9/site-packages/vdsm/storage/task.py", line 93, in
wrapper
return m(self, *a, **kw)
File "/usr/lib/python3.9/site-packages/vdsm/storage/task.py", line 1173,
in prepare
.....
File "/usr/lib/python3.9/site-packages/ioprocess/__init__.py", line 479,
in _sendCommand
raise OSError(errcode, errstr)
PermissionError: [Errno 13] Permission denied
2024-10-09 14:55:06,726-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
call StoragePool.connect failed (error 302) in 2.09 seconds (__init__:300)
OS: Rocky 9
vdsm-4.50.5.1-1.el9.x86_64
Any ideas welcome.
Thanks
vdsm.log and engine.log attached.
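Since connectStoragePool dies with a plain EACCES from ioprocess, storage-side ownership is worth ruling out first: oVirt needs the domains to be accessible to uid/gid 36:36 (vdsm:kvm). A hedged sketch of what to compare between a working host and a new one, with gv0 and the server addresses taken from the df output above:

# on a new host and on a working host - the vdsm uid and kvm gid should both be 36
id vdsm
# on one of the Gluster nodes - oVirt setups normally pin the volume owner to 36/36
gluster volume get gv0 all | grep storage.owner

If those match the working hosts, the export options on 10.2.2.230 (root squash / anonuid) are the next suspect, since the NFS export domain is the other mount the pool connect touches.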
iSCSI Storage Issue - move Hosts!?
by steve-pa@hotmail.com
Hello everyone,
I am currently trying to set up my VM environment again.
Status now:
Host1 + iSCSI storage connection
Plan:
Host2 + same iSCSI storage as above
Now I want to move all VMs to Host2. Host2 is already added, and I can start all VMs on it.
Problem:
Storage -> Manage Domain: Host1 is the host connected to the storage here, and I see no option to switch it to Host2.
What happens if I now put Host1 into maintenance mode or delete it completely? Will all my systems then crash because the storage connection has been lost?
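This can be checked from Host2 before touching Host1: each host that is Up in the data center maintains its own connection to the storage domains, and the host shown under Manage Domain is only the one used to carry out management operations. A rough way to confirm Host2 has its own path to the LUNs (run on Host2):

iscsiadm -m session -P 1
multipath -ll

If Host2 shows an established session to the same target and the multipath devices for the domain's LUNs, its storage access does not depend on Host1 staying up.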
Importing Storage Domain's Export from one Data Center to Another
by fodfateofdeath@yahoo.com
Is it possible to perform disaster recovery by importing a Storage Domain's export from one Data Center to another manually, without Ansible? I'm trying to import Data Center 1's export storage domain by cloning it and importing the clone into Data Center 2's environment through its IP address and NFS path in the oVirt web portal. However, I receive this error message: "there is no storage domain under the specified path". Below is what I have done to try to troubleshoot the issue. After looking through the documentation, I haven't seen information on this method. Should I consider another option? Is there a preferred way of testing DR within an environment?
-I have detached both of the export storage domains from both data centers before importing the cloned export.
-Changing import settings
-I have adjusted the metadata information within the cloned export after mounting it and reviewing the logs.
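That error generally means VDSM mounted the given path but did not find a storage-domain directory tree directly underneath it. A hedged way to verify the layout of the cloned export by hand before retrying the import; the server name and path below are placeholders for the clone's NFS export:

mount -t nfs nfs.server.example:/path/to/cloned_export /mnt/check
ls /mnt/check
# expect exactly one storage-domain UUID directory at this level
ls /mnt/check/<sd-uuid>/dom_md/
# expect: ids  inbox  leases  metadata  outbox
umount /mnt/check

If the clone ends up with the UUID directory nested one level deeper than in the original export, the path given in the import dialog has to point at the parent of the UUID directory, not at the UUID directory itself.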
Connect to hypervisor node with Virt-Manager
by luc.lalonde@polymtl.ca
Hello,
I can connect directly to one of my hypervisor nodes using Virt-manager with this auth:
Login: vdsm@ovirt
Password: shibboleth
I then get a listing of the machines on the node.
However, if I try to open the VM console, I'm prompted for another password... I have no idea what password to use.
Any ideas?
Thanks.
Migrate Ovirt Node from el8 to el9
by devis@gmx.com
Hello,
I tried to find a guide or procedure to upgrade an ovirt node from el8 to el9, but I didn't find anything.
Is it possible to migrate a node without reinstallation?
Thanks,
Devis