[ovirt-devel] [ OST Failure Report ] [ oVirt 4.1 ] [ 07/02/17 ] [ add_secondary_storage_domains ]

Nir Soffer nsoffer at redhat.com
Tue Mar 7 11:18:18 UTC 2017


Is this a 4.1 or a 4.0 test? These vdsm logs are in the 4.0 format.

On Mar 7, 2017 at 10:17 AM, "Yaniv Kaul" <ykaul at redhat.com> wrote:



On Tue, Mar 7, 2017 at 9:44 AM Daniel Belenky <dbelenky at redhat.com> wrote:

> *Test failed:* add_secondary_storage_domains
>
> *Link to failed Job:* test-repo_ovirt_experimental_4.1/889
> <http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_4.1/889>
>
> *Link to all logs:* logs from Jenkins
> <http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_4.1/889/artifact/exported-artifacts/basic-suit-4.1-el7/>
>
> *Error snippet from log:*
>
> 2017-03-07 01:29:03,789::utils.py::_ret_via_queue::59::lago.utils::ERROR::Error while running thread
>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 57, in _ret_via_queue
>     queue.put({'return': func()})
>   File "/home/jenkins/workspace/test-repo_ovirt_experimental_4.1/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py", line 574, in add_iso_storage_domain
>     add_generic_nfs_storage_domain(prefix, SD_ISO_NAME, SD_ISO_HOST_NAME, SD_ISO_PATH, sd_format='v1', sd_type='iso', nfs_version='v3')
>   File "/home/jenkins/workspace/test-repo_ovirt_experimental_4.1/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py", line 437, in add_generic_nfs_storage_domain
>     add_generic_nfs_storage_domain_4(prefix, sd_nfs_name, nfs_host_name, mount_path, sd_format, sd_type, nfs_version)
>   File "/home/jenkins/workspace/test-repo_ovirt_experimental_4.1/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py", line 493, in add_generic_nfs_storage_domain_4
>     _add_storage_domain_4(api, p)
>   File "/home/jenkins/workspace/test-repo_ovirt_experimental_4.1/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py", line 407, in _add_storage_domain_4
>     id=sd.id,
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 3488, in add
>     self._check_fault(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 98, in _check_fault
>     Service._raise_error(response, fault)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 71, in _raise_error
>     raise Error(msg)
> Error: Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.
>
>
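For context, the failing add_iso_storage_domain step boils down to roughly this call against the v4 Python SDK (a minimal sketch; the engine URL, credentials, and host name are placeholders, while the storage parameters come from the traceback and the logs below):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
sds_service = connection.system_service().storage_domains_service()
# This is the call that raises sdk.Error ("Operation Failed") when the
# host cannot mount the NFS export.
sd = sds_service.add(
    types.StorageDomain(
        name='iso',
        type=types.StorageDomainType.ISO,
        storage_format=types.StorageFormat.V1,
        host=types.Host(name='lago-basic-suite-4-1-host0'),
        storage=types.HostStorage(
            type=types.StorageType.NFS,
            address='192.168.201.2',
            path='/exports/nfs/iso',
            nfs_version=types.NfsVersion.V3,
        ),
    ),
)
connection.close()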
Looking at http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_4.1/889/artifact/exported-artifacts/basic-suit-4.1-el7/test_logs/basic-suite-4.1/post-002_bootstrap.py/lago-basic-suite-4-1-host1/_var_log/vdsm/supervdsm.log:
I don't see the return of the mount command:
MainProcess|jsonrpc/3::DEBUG::2017-03-06 20:26:01,872::supervdsmServer::93::SuperVdsm.ServerCallback::(wrapper) call mount with (u'192.168.201.2:/exports/nfs/iso', u'/rhev/data-center/mnt/192.168.201.2:_exports_nfs_iso') {'vfstype': 'nfs', 'mntOpts': 'soft,nosharecache,timeo=600,retrans=6,nfsvers=3', 'timeout': None, 'cgroup': None}
MainProcess|jsonrpc/3::DEBUG::2017-03-06 20:26:01,872::commands::69::root::(execCmd) /usr/bin/taskset --cpu-list 0-1 /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 192.168.201.2:/exports/nfs/iso /rhev/data-center/mnt/192.168.201.2:_exports_nfs_iso (cwd None)
MainProcess|jsonrpc/4::DEBUG::2017-03-06 20:29:02,501::supervdsmServer::93::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with () {}
MainProcess|jsonrpc/4::DEBUG::2017-03-06 20:29:02,501::commands::69::storage.HBA::(execCmd) /usr/bin/taskset --cpu-list 0-1 /usr/libexec/vdsm/fc-scan (cwd None)
MainProcess|jsonrpc/4::DEBUG::2017-03-06 20:29:02,573::supervdsmServer::100::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None
...

(I don't know why we do all those hbaRescan calls, but that's unrelated.)

Whereas on host0 it does return:

MainProcess|jsonrpc/5::DEBUG::2017-03-06 20:25:20,369::supervdsmServer::93::SuperVdsm.ServerCallback::(wrapper) call mount with (u'192.168.201.2:/exports/nfs/iso', u'/rhev/data-center/mnt/192.168.201.2:_exports_nfs_iso') {'vfstype': 'nfs', 'mntOpts': 'soft,nosharecache,timeo=600,retrans=6,nfsvers=3', 'timeout': None, 'cgroup': None}
MainProcess|jsonrpc/5::DEBUG::2017-03-06 20:25:20,369::commands::69::root::(execCmd) /usr/bin/taskset --cpu-list 0-1 /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 192.168.201.2:/exports/nfs/iso /rhev/data-center/mnt/192.168.201.2:_exports_nfs_iso (cwd None)
MainProcess|jsonrpc/5::DEBUG::2017-03-06 20:25:20,504::supervdsmServer::100::SuperVdsm.ServerCallback::(wrapper) return mount with None
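
One way to confirm this mechanically is to pair call/return records per thread in supervdsm.log. A hypothetical helper along these lines:

import sys

def unmatched_mounts(path):
    # Map thread name (e.g. "MainProcess|jsonrpc/3") to its pending
    # "call mount" record; a matching "return mount" clears it.
    # (A reused thread overwrites its previous entry - good enough
    # for a quick scan.)
    calls = {}
    with open(path) as f:
        for line in f:
            thread = line.split('::', 1)[0]
            if 'call mount with' in line:
                calls[thread] = line.rstrip()
            elif 'return mount with' in line:
                calls.pop(thread, None)
    return calls.values()

if __name__ == '__main__':
    for record in unmatched_mounts(sys.argv[1]):
        print(record)

Running it over host1's supervdsm.log should leave the jsonrpc/3 mount call above unmatched.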



On the storage server, both mount requests are seen and authenticated:

Mar  6 20:25:20 lago-basic-suite-4-1-engine rpc.mountd[4279]: authenticated mount request from 192.168.201.4:970 for /exports/nfs/iso (/exports/nfs/iso)
Mar  6 20:26:01 lago-basic-suite-4-1-engine rpc.mountd[4279]: authenticated mount request from 192.168.201.3:948 for /exports/nfs/iso (/exports/nfs/iso)

So host1's request (20:26:01) did reach the server and was authenticated, yet the mount never returned on the host, which points to a client-side hang.
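
To tell whether the hang is in the mount itself rather than in vdsm, the same mount could be tried directly on host1 with a timeout (a Python 3 sketch; the scratch mount point is hypothetical and the options are copied from the supervdsm log):

import os
import subprocess

MOUNT_POINT = '/tmp/iso-test'  # hypothetical scratch mount point
CMD = [
    '/usr/bin/mount', '-t', 'nfs',
    '-o', 'soft,nosharecache,timeo=600,retrans=6,nfsvers=3',
    '192.168.201.2:/exports/nfs/iso',
    MOUNT_POINT,
]

os.makedirs(MOUNT_POINT, exist_ok=True)
try:
    # If this blocks past the timeout, the hang is in the mount
    # itself, not in vdsm/supervdsm.
    subprocess.check_call(CMD, timeout=60)
    print('mount completed')
except subprocess.TimeoutExpired:
    print('mount did not return within 60s')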

-- 
>
> *Daniel Belenky*
>
> *RHV DevOps*
>
> *Red Hat Israel*