On Mon, Jan 18, 2021 at 8:58 AM Matt Snow <mattsnow(a)gmail.com>
wrote:
>
> I installed ovirt node 4.4.4 as well as 4.4.5-pre and experience the same problem with both versions. The issue occurs in both the cockpit UI and a tmux'd CLI run of ovirt-hosted-engine-setup. I get past the point where the VM is created and running.
> I tried to do some debugging on my own before reaching out to this list. Any help is much appreciated!
>
> ovirt node hardware: NUC-format Jetway w/ Intel N3160 (Braswell, 4 cores/4 threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I believe it meets the minimum requirements.
>
> NFS server:
> * Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
We don't test ZFS, but since you also tried a non-ZFS server and hit the same issue, the issue is probably not ZFS.
> * NFS share settings are just 'rw=@172.16.1.0/24' but I have also tried 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
You are missing anonuid=36,anongid=36. This should not affect sanlock, but you will have issues with libvirt without it.
I found this setting in another email thread and have applied it:
root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt | grep nfs
tanker/ovirt  sharenfs  rw,anonuid=36,anongid=36  local
root@stumpy:/tanker/ovirt/host_storage#
Here is a working export from my test system:
/export/00 *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
I have updated both the ZFS server and the Ubuntu 20.04 NFS server to use the above settings.
# ZFS server
root@stumpy:/tanker/ovirt/host_storage# zfs set sharenfs="rw,sync,anonuid=36,anongid=36,no_subtree_check" tanker/ovirt
root@stumpy:/tanker/ovirt/host_storage# zfs get all tanker/ovirt | grep nfs
tanker/ovirt  sharenfs  rw,sync,anonuid=36,anongid=36,no_subtree_check  local
root@stumpy:/tanker/ovirt/host_storage#
# Ubuntu laptop
ls -ld /exports/host_storage
drwxr-xr-x 2 36 36 4096 Jan 16 19:42 /exports/host_storage
root@msnowubntlt:/exports/host_storage# showmount -e dongle
Export list for dongle:
/exports/host_storage *
root@msnowubntlt:/exports/host_storage# grep host_storage /etc/exports
/exports/host_storage *(rw,sync,anonuid=36,anongid=36,no_subtree_check)
root@msnowubntlt:/exports/host_storage#
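One way to confirm the anonuid/anongid mapping is actually in effect is to write a file from a client and check who ends up owning it. A minimal sketch, assuming a Linux client; the helper name and mount point are illustrative, not oVirt tooling:

```shell
probe_ownership() {
  # Write a file in the given directory and report its uid:gid.
  local f="$1/owner-probe.$$"
  touch "$f" && stat -c '%u:%g' "$f" && rm -f "$f"
}
# On the oVirt node, against the mounted export (names from this thread):
#   mount -t nfs dongle:/exports/host_storage /mnt/test
#   probe_ownership /mnt/test   # expect 36:36 when anonuid/anongid apply
```

If the probe reports anything other than 36:36 for a root-written file, root_squash or the anon* options are not taking effect on the server.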
Interesting point: upon being re-prompted to configure storage after the initial failure, I provided *different* NFS server information (the Ubuntu laptop).
**snip**
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: nfs
          Please specify the nfs version you would like to use (auto, v3, v4, v4_0, v4_1, v4_2)[auto]:
          Please specify the full shared storage connection path to use (example: host:/path): dongle:/exports/host_storage
          If needed, specify additional mount options for the connection to the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
**snip**
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
**snip**
Upon checking mounts I see that the original server is still being used.
[root@brick ~]# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
stumpy:/tanker/ovirt/host_storage on /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.16.1.49,local_lock=none,addr=172.16.1.50)
[root@brick ~]#
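The odd-looking mount path comes from how vdsm escapes the connection string: existing underscores in the export path are doubled, then slashes become single underscores. A small sketch of that mapping (the helper name is mine) can help locate, and unmount, the stale domain mount before retrying with a different server:

```shell
vdsm_mount_path() {
  # Map "server:/export/path" to the path vdsm mounts it under.
  local server="${1%%:*}" path="${1#*:}"
  path="${path//_/__}"   # escape literal underscores first
  path="${path//\//_}"   # then map '/' to '_'
  printf '/rhev/data-center/mnt/%s:%s\n' "$server" "$path"
}
# e.g. umount "$(vdsm_mount_path stumpy:/tanker/ovirt/host_storage)"
```

For the connection in this thread the helper yields /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage, matching the mount output above.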
> * The target directory is always empty and chown'd 36:36 with 0755 permissions.
Correct.
> * I have tried using both IP and DNS names. Forward and reverse DNS works from the ovirt host and other systems on the network.
> * The NFS share always gets mounted successfully on the ovirt node system.
> * I have tried auto and v3 NFS versions in various other combinations.
> * I have also tried setting up an NFS server on non-ZFS-backed storage that is open to any host, and I get the same errors as shown below.
> * I ran the nfs-check.py script without issue against both NFS servers and followed the other verification steps listed at https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
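The manual write test from that troubleshooting page can be scripted. A hedged sketch (the function name is mine) that verifies a directory accepts a small direct-I/O write, which is how vdsm and sanlock access the domain:

```shell
vdsm_write_check() {
  # Touch a file, then write one block to it; prefer O_DIRECT, but fall
  # back to a flushed write on filesystems without O_DIRECT (e.g. tmpfs).
  local f="$1/write-probe.$$"
  touch "$f" || { echo FAIL; return 1; }
  dd if=/dev/zero of="$f" bs=4096 count=1 oflag=direct 2>/dev/null \
    || dd if=/dev/zero of="$f" bs=4096 count=1 conv=fsync 2>/dev/null \
    || { rm -f "$f"; echo FAIL; return 1; }
  rm -f "$f"
  echo OK
}
# On the node, run it as the vdsm user against the mounted share, e.g.:
#   sudo -u vdsm bash -c "$(declare -f vdsm_write_check); \
#     vdsm_write_check /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage"
```

If this fails as vdsm but succeeds as root, the squashing/ownership on the export is the likely culprit rather than connectivity.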
>
> ***Snip from ovirt-hosted-engine-setup***
> Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: nfs
> Please specify the nfs version you would like to use (auto, v3, v4, v4_0, v4_1, v4_2)[auto]:
> Please specify the full shared storage connection path to use (example: host:/path): stumpy.mydomain.com:/tanker/ovirt/host_storage
> If needed, specify additional mount options for the connection to the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []: rw
> [ INFO ] Creating Storage Domain
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage interface to be up]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir existence]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add iSCSI storage domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get storage domain details]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the appliance OVF]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Parse OVF]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get required size]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove unsuitable storage domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check storage domain free space]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
> Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
>
> ***End snippet**
>
> The relevant section from /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_storage_domain-20210116195220-bnfg1w.log:
> **Snip***
> 2021-01-16 19:53:56,010-0700 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f11a04c8320> kwargs
> 2021-01-16 19:53:57,219-0700 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.ovirt.hosted_engine_setup : Activate storage domain'}
> 2021-01-16 19:53:57,220-0700 DEBUG ansible on_any args TASK: ovirt.ovirt.hosted_engine_setup : Activate storage domain kwargs is_conditional:False
> 2021-01-16 19:53:57,221-0700 DEBUG ansible on_any args localhost TASK: ovirt.ovirt.hosted_engine_setup : Activate storage domain kwargs
> 2021-01-16 19:54:00,346-0700 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<class 'list'>" value: "[]"
> 2021-01-16 19:54:00,347-0700 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<class 'list'>" value: "[]"
> 2021-01-16 19:54:00,348-0700 DEBUG var changed: host "localhost" var "play_hosts" type "<class 'list'>" value: "[]"
> 2021-01-16 19:54:00,349-0700 ERROR ansible failed {
>     "ansible_host": "localhost",
>     "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
>     "ansible_result": {
>         "_ansible_no_log": false,
>         "changed": false,
>         "exception": "Traceback (most recent call last):\n  File \"/tmp/ansible_ovirt_storage_domain_payload_f38n25ab/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\", line 783, in main\n  File \"/tmp/ansible_ovirt_storage_domain_payload_f38n25ab/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\", line 638, in post_create_check\n  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py\", line 3647, in add\n    return self._internal_add(storage_domain, headers, query, wait)\n  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n    return future.wait() if wait else future\n  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 55, in wait\n    return self._code(response)\n  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 229, in callback\n    self._check_fault(response)\n  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n    self._raise_error(response, body)\n  File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n    raise error\novirtsdk4.Error: Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400.\n",
>         "invocation": {
>             "module_args": {
>                 "backup": null,
>                 "comment": null,
>                 "critical_space_action_blocker": null,
>                 "data_center": "Default",
>                 "description": null,
>                 "destroy": null,
>                 "discard_after_delete": null,
>                 "domain_function": "data",
>                 "fcp": null,
>                 "fetch_nested": false,
>                 "format": null,
>                 "glusterfs": null,
>                 "host": "brick.mydomain.com",
>                 "id": null,
>                 "iscsi": null,
>                 "localfs": null,
>                 "managed_block_storage": null,
>                 "name": "hosted_storage",
>                 "nested_attributes": [],
>                 "nfs": null,
>                 "poll_interval": 3,
>                 "posixfs": null,
>                 "state": "present",
>                 "timeout": 180,
>                 "wait": true,
>                 "warning_low_space": null,
>                 "wipe_after_delete": null
>             }
>         },
>         "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."
>     },
>     "ansible_task": "Activate storage domain",
>     "ansible_type": "task",
>     "status": "FAILED",
>     "task_duration": 4
> }
> 2021-01-16 19:54:00,350-0700 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f11a05f2d30> kwargs ignore_errors:None
> 2021-01-16 19:54:00,358-0700 INFO ansible stats {
>     "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
>     "ansible_playbook_duration": "01:35 Minutes",
>     "ansible_result": "type: <class 'dict'>\nstr: {'localhost': {'ok': 23, 'failures': 1, 'unreachable': 0, 'changed': 0, 'skipped': 7, 'rescued': 0, 'ignored': 0}}",
>     "ansible_type": "finish",
>     "status": "FAILED"
> }
> 2021-01-16 19:54:00,359-0700 INFO SUMMARY:
> Duration Task Name
> -------- --------
> [ < 1 sec ] Execute just a specific set of steps
> [ 00:05 ] Force facts gathering
> [ 00:03 ] Check local VM dir stat
> [ 00:04 ] Obtain SSO token using username/password credentials
> [ 00:04 ] Fetch host facts
> [ 00:02 ] Fetch cluster ID
> [ 00:04 ] Fetch cluster facts
> [ 00:04 ] Fetch Datacenter facts
> [ 00:02 ] Fetch Datacenter ID
> [ 00:02 ] Fetch Datacenter name
> [ 00:04 ] Add NFS storage domain
> [ 00:04 ] Get storage domain details
> [ 00:03 ] Find the appliance OVF
> [ 00:03 ] Parse OVF
> [ 00:02 ] Get required size
> [ FAILED ] Activate storage domain
> 2021-01-16 19:54:00,359-0700 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7f11a2e3b8d0> kwargs
> **End snip**
>
> The relevant section from /var/log/vdsm/vdsm.log:
> ***begin snip***
> 2021-01-16 19:53:58,439-0700 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=b8b21668-189e-4b68-a7f0-c2d2ebf14546 (api:48)
> 2021-01-16 19:53:58,439-0700 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=b8b21668-189e-4b68-a7f0-c2d2ebf14546 (api:54)
> 2021-01-16 19:53:58,440-0700 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:726)
> 2021-01-16 19:53:58,885-0700 INFO (jsonrpc/3) [vdsm.api] START connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 'false', 'id': '3ffd1e3b-168e-4248-a2af-b28fbdf49eef', 'user': '', 'tpgt': '1'}], options=None) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=5bd52fa3-f790-4ed3-826d-c1f51e5f2291 (api:48)
> 2021-01-16 19:53:58,892-0700 INFO (jsonrpc/3) [storage.StorageDomainCache] Invalidating storage domain cache (sdc:74)
> 2021-01-16 19:53:58,892-0700 INFO (jsonrpc/3) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '3ffd1e3b-168e-4248-a2af-b28fbdf49eef', 'status': 0}]} from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=5bd52fa3-f790-4ed3-826d-c1f51e5f2291 (api:54)
> 2021-01-16 19:53:58,914-0700 INFO (jsonrpc/6) [vdsm.api] START getStorageDomainInfo(sdUUID='54532dd4-3e5b-4885-b88e-599c81efb146', options=None) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=4d5a352d-6096-45b7-a4ee-b6e08ac02f7b (api:48)
> 2021-01-16 19:53:58,914-0700 INFO (jsonrpc/6) [storage.StorageDomain] sdUUID=54532dd4-3e5b-4885-b88e-599c81efb146 (fileSD:535)
> 2021-01-16 19:53:58,918-0700 INFO (jsonrpc/6) [vdsm.api] FINISH getStorageDomainInfo error=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=4d5a352d-6096-45b7-a4ee-b6e08ac02f7b (api:52)
> 2021-01-16 19:53:58,918-0700 ERROR (jsonrpc/6) [storage.TaskManager.Task] (Task='4d5a352d-6096-45b7-a4ee-b6e08ac02f7b') Unexpected error (task:880)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887, in _run
>     return fn(*args, **kargs)
>   File "<decorator-gen-131>", line 2, in getStorageDomainInfo
>   File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
>     ret = func(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2796, in getStorageDomainInfo
>     dom = self.validateSdUUID(sdUUID)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 312, in validateSdUUID
>     sdDom.validate()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 538, in validate
>     raise se.StorageDomainAccessError(self.sdUUID)
> vdsm.storage.exception.StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',)
> 2021-01-16 19:53:58,918-0700 INFO (jsonrpc/6) [storage.TaskManager.Task] (Task='4d5a352d-6096-45b7-a4ee-b6e08ac02f7b') aborting: Task is aborted: "value=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) abortedcode=379" (task:1190)
> 2021-01-16 19:53:58,918-0700 ERROR (jsonrpc/6) [storage.Dispatcher] FINISH getStorageDomainInfo error=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) (dispatcher:83)
> 2021-01-16 19:53:58,919-0700 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getInfo failed (error 379) in 0.00 seconds (__init__:312)
> 2021-01-16 19:53:58,956-0700 INFO (jsonrpc/1) [vdsm.api] START createStoragePool(poolType=None, spUUID='91fb3f22-5795-11eb-ad9f-00163e3f4683', poolName='Default', masterDom='54532dd4-3e5b-4885-b88e-599c81efb146', domList=['54532dd4-3e5b-4885-b88e-599c81efb146'], masterVersion=7, lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60, ioOpTimeoutSec=10, leaseRetries=3, options=None) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=a946357f-537a-4c9c-9040-e70a98ee5643 (api:48)
> 2021-01-16 19:53:58,958-0700 INFO (jsonrpc/1) [storage.StoragePool] updating pool 91fb3f22-5795-11eb-ad9f-00163e3f4683 backend from type NoneType instance 0x7f86e45f89d0 to type StoragePoolDiskBackend instance 0x7f8680738408 (sp:157)
> 2021-01-16 19:53:58,958-0700 INFO (jsonrpc/1) [storage.StoragePool] spUUID=91fb3f22-5795-11eb-ad9f-00163e3f4683 poolName=Default master_sd=54532dd4-3e5b-4885-b88e-599c81efb146 domList=['54532dd4-3e5b-4885-b88e-599c81efb146'] masterVersion=7 {'LEASERETRIES': 3, 'LEASETIMESEC': 60, 'LOCKRENEWALINTERVALSEC': 5, 'IOOPTIMEOUTSEC': 10} (sp:602)
> 2021-01-16 19:53:58,958-0700 INFO (jsonrpc/1) [storage.StorageDomain] sdUUID=54532dd4-3e5b-4885-b88e-599c81efb146 (fileSD:535)
> 2021-01-16 19:53:58,963-0700 ERROR (jsonrpc/1) [storage.StoragePool] Unexpected error (sp:618)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 613, in create
>     domain.validate()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 538, in validate
>     raise se.StorageDomainAccessError(self.sdUUID)
> vdsm.storage.exception.StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',)
> 2021-01-16 19:53:58,963-0700 INFO (jsonrpc/1) [vdsm.api] FINISH createStoragePool error=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=a946357f-537a-4c9c-9040-e70a98ee5643 (api:52)
> 2021-01-16 19:53:58,963-0700 ERROR (jsonrpc/1) [storage.TaskManager.Task] (Task='a946357f-537a-4c9c-9040-e70a98ee5643') Unexpected error (task:880)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 613, in create
>     domain.validate()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 538, in validate
>     raise se.StorageDomainAccessError(self.sdUUID)
> vdsm.storage.exception.StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',)
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887, in _run
>     return fn(*args, **kargs)
>   File "<decorator-gen-31>", line 2, in createStoragePool
>   File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
>     ret = func(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 1027, in createStoragePool
>     leaseParams)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 619, in create
>     raise se.StorageDomainAccessError(sdUUID)
> vdsm.storage.exception.StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',)
> 2021-01-16 19:53:58,964-0700 INFO (jsonrpc/1) [storage.TaskManager.Task] (Task='a946357f-537a-4c9c-9040-e70a98ee5643') aborting: Task is aborted: "value=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) abortedcode=379" (task:1190)
> 2021-01-16 19:53:58,964-0700 ERROR (jsonrpc/1) [storage.Dispatcher] FINISH createStoragePool error=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) (dispatcher:83)
> 2021-01-16 19:53:58,965-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.create failed (error 379) in 0.01 seconds (__init__:312)
> 2021-01-16 19:54:01,923-0700 INFO (jsonrpc/5) [api.host] START getStats() from=::ffff:192.168.222.53,41192 (api:48)
> 2021-01-16 19:54:01,964-0700 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::ffff:192.168.222.53,41192 (api:48)
> 2021-01-16 19:54:01,966-0700 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:192.168.222.53,41192 (api:54)
> 2021-01-16 19:54:01,977-0700 INFO (jsonrpc/5) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.222.53,41192, task_id=7a6c8536-3f1e-41b3-9ab7-bb58a5e75fac (api:48)
> 2021-01-16 19:54:01,978-0700 INFO (jsonrpc/5) [vdsm.api] FINISH repoStats return={} from=::ffff:192.168.222.53,41192, task_id=7a6c8536-3f1e-41b3-9ab7-bb58a5e75fac (api:54)
> 2021-01-16 19:54:01,980-0700 INFO (jsonrpc/5) [vdsm.api] START multipath_health() from=::ffff:192.168.222.53,41192, task_id=b8af45b5-6488-4940-b7d0-1d11b5f76db4 (api:48)
> 2021-01-16 19:54:01,981-0700 INFO (jsonrpc/5) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.222.53,41192, task_id=b8af45b5-6488-4940-b7d0-1d11b5f76db4 (api:54)
> 2021-01-16 19:54:01,996-0700 INFO (jsonrpc/5) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:192.168.222.53,41192 (api:54)
> ***end snip***
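The StorageDomainAccessError in the vdsm log above is raised by StorageDomain.validate(), which fails when the domain metadata under <mount>/<sdUUID>/dom_md cannot be read. A hedged sketch for checking that by hand (the helper name is mine; the UUID and mount path are taken from the logs above):

```shell
check_domain_md() {
  # Report whether the storage domain's metadata file is readable.
  local sd="$1"
  [ -r "$sd/dom_md/metadata" ] && echo readable || echo unreadable
}
# e.g. as the vdsm user on the node:
#   check_domain_md /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146
```

If this prints "unreadable" for the vdsm user but "readable" for root, it points back at the export's ownership/squashing options rather than at the engine.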
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2TQC6XTJTI...