On Mon, Jan 18, 2021 at 8:58 AM Matt Snow <mattsnow@gmail.com> wrote:
I installed oVirt Node 4.4.4 as well as 4.4.5-pre and experience the same problem with
both versions. The issue occurs in both the cockpit UI and the tmux'd CLI of
ovirt-hosted-engine-setup. I get past the point where the VM is created and running.
I tried to do some debugging on my own before reaching out to this list. Any help is much
appreciated!
oVirt node hardware: NUC-format Jetway w/ Intel N3160 (Braswell, 4 cores/4 threads), 8 GB
RAM, 64 GB SSD. I understand this is under-spec'd, but I believe it meets the minimum
requirements.
NFS server:
* Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
* NFS share settings are just 'rw=@172.16.1.0/24' but have also tried
'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
* The target directory is always empty and chown'd 36:36 with 0755 permissions.
* I have tried using both IP and DNS names. Forward and reverse DNS works from the oVirt
host and other systems on the network.
* The NFS share always gets mounted successfully on the ovirt node system.
* I have tried auto and v3 NFS versions in various other combinations.
* I have also tried setting up an NFS server on a non-ZFS backed storage system that is
open to any host and get the same errors as shown below.
* I ran the nfs-check.py script without issue against both NFS servers and followed the
other verification steps listed at
https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html (rough commands for
the setup and checks above are sketched after this list).
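For completeness, here is roughly what the setup above looks like as commands. This is a
sketch of what I described, not a transcript: the exact sharenfs syntax depends on the ZFS
version, I am assuming the dataset is mounted at /tanker/ovirt/host_storage (matching the
path in the vdsm log below), and the nfs-check.py invocation is from memory of the page
linked above.

# On the NFS server (Ubuntu + ZFS): export the dataset read/write to the LAN
zfs set sharenfs='rw=@172.16.1.0/24' tanker/ovirt/host_storage

# Ownership and permissions expected by oVirt (UID/GID 36 is vdsm:kvm on the node)
chown 36:36 /tanker/ovirt/host_storage
chmod 0755 /tanker/ovirt/host_storage

# From the oVirt node: the nfs-check.py script from the troubleshooting page, roughly
python3 nfs-check.py stumpy:/tanker/ovirt/host_storage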
***Snip from ovirt-hosted-engine-setup***
Removing snippets of other logs, as I think vdsm should be enough.
Thanks for the investigation so far!
The relevant section from /var/log/vdsm/vdsm.log:
***begin snip***
2021-01-16 19:53:58,439-0700 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList(options=None) from=internal,
task_id=b8b21668-189e-4b68-a7f0-c2d2ebf14546 (api:48)
2021-01-16 19:53:58,439-0700 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=b8b21668-189e-4b68-a7f0-c2d2ebf14546 (api:54)
2021-01-16 19:53:58,440-0700 INFO (vmrecovery) [vds] recovery: waiting for storage pool
to go up (clientIF:726)
2021-01-16 19:53:58,885-0700 INFO (jsonrpc/3) [vdsm.api] START
connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000',
conList=[{'password': '********', 'protocol_version':
'auto', 'port': '', 'iqn': '',
'connection': 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled':
'false', 'id': '3ffd1e3b-168e-4248-a2af-b28fbdf49eef',
'user': '', 'tpgt': '1'}], options=None)
from=::ffff:192.168.222.53,41192, flow_id=592e278f,
task_id=5bd52fa3-f790-4ed3-826d-c1f51e5f2291 (api:48)
Are you sure this is the first place that deals with this issue?
Perhaps search the log earlier, e.g. for 'host_storage' (part of your
path).
Also, please check supervdsm.log (in the same log directory).
Does it manage to mount it? Does it write anything there?
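For example, assuming the default log locations on the node, something like:

grep -n host_storage /var/log/vdsm/vdsm.log
grep -n host_storage /var/log/vdsm/supervdsm.log

# and, separately, check whether a manual mount from the node works and is writable
mkdir -p /tmp/nfstest
mount -t nfs stumpy:/tanker/ovirt/host_storage /tmp/nfstest
sudo -u vdsm touch /tmp/nfstest/probe && rm /tmp/nfstest/probe
umount /tmp/nfstest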
Best regards,
--
Didi