[JIRA] (OVIRT-1709) provision Persistent Storage for OpenShift
by Evgheni Dereveanchin (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1709?page=com.atlassian.jir... ]
Evgheni Dereveanchin reassigned OVIRT-1709:
-------------------------------------------
Assignee: Evgheni Dereveanchin (was: infra)
> provision Persistent Storage for OpenShift
> ------------------------------------------
>
> Key: OVIRT-1709
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1709
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Components: OpenShift
> Reporter: Evgheni Dereveanchin
> Assignee: Evgheni Dereveanchin
> Priority: High
> Labels: openshift
>
> The OpenShift instance in PHX currently does not have any Persistent Storage assigned to it, which is needed for things like databases and other important data. Opening this ticket to track how many volumes we may need and to attach them.
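(Illustration only: a minimal sketch of what attaching one such volume could look like, using the kubernetes Python client against the OpenShift API. The NFS server, export path, capacity and PV name below are placeholders, not details from this ticket.)

    from kubernetes import client, config

    config.load_kube_config()  # credentials for the target OpenShift instance (assumed)

    pv = client.V1PersistentVolume(
        metadata=client.V1ObjectMeta(name="ci-db-pv-01"),    # placeholder name
        spec=client.V1PersistentVolumeSpec(
            capacity={"storage": "10Gi"},                     # placeholder size
            access_modes=["ReadWriteOnce"],
            persistent_volume_reclaim_policy="Retain",
            nfs=client.V1NFSVolumeSource(                     # placeholder backend
                server="nfs.example.org",
                path="/exports/openshift/pv01"),
        ),
    )
    client.CoreV1Api().create_persistent_volume(body=pv)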
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
[JIRA] (OVIRT-1774) Upstream source collector can fail to push on multi-branch projects
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1774?page=com.atlassian.jir... ]
Barak Korren updated OVIRT-1774:
--------------------------------
Epic Link: OVIRT-400
> Upstream source collector can fail to push on multi-branch projects
> -------------------------------------------------------------------
>
> Key: OVIRT-1774
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1774
> Project: oVirt - virtualization made easy
> Issue Type: Bug
> Components: oVirt CI
> Reporter: Barak Korren
> Assignee: infra
> Labels: poll-upstream-sources, upstream-source-collector
>
> When the upstream source collection code looks for similar patches in order to avoid pushing a new patch, it looks for patches that contain the same change to the '{{upstream-sources.yaml}}' file, regardless of the branch they may belong to.
> The code currently ignores the possibility that similar changes might be required for different branches, in which case different patches may need to be pushed.
> The way of detecting similar patches should be changed so that either the branch name is included in the checksum that is used to identify a patch, or the query is limited to patches of the branch being handled.
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
[JIRA] (OVIRT-1774) Upstream source collector can fail to push on multi-branch projects
by Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-1774:
-----------------------------------
Summary: Upstream source collector can fail to push on multi-branch projects
Key: OVIRT-1774
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1774
Project: oVirt - virtualization made easy
Issue Type: Bug
Components: oVirt CI
Reporter: Barak Korren
Assignee: infra
When the upstream source collection code looks for similar patches in order to avoid pushing a new patch, it looks for patches that contain the same change to the '{{upstream-sources.yaml}}' file, regardless of the branch they may belong to.
The code currently ignores the possibility that similar changes might be required for different branches, in which case different patches may need to be pushed.
The way of detecting similar patches should be changed so that either the branch name is included in the checksum that is used to identify a patch, or the query is limited to patches of the branch being handled.
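A minimal sketch of the first option (folding the branch name into the checksum that identifies a generated patch); the function and argument names are illustrative only, not the actual upstream-source-collector code:

    import hashlib

    def patch_checksum(upstream_sources_yaml, branch):
        # Hash the target branch together with the changed file content, so
        # identical upstream-sources.yaml changes on different branches get
        # distinct identifiers and an open patch on one branch no longer
        # suppresses the push for another branch.
        digest = hashlib.sha1()
        digest.update(branch.encode('utf-8'))
        digest.update(b'\0')
        digest.update(upstream_sources_yaml.encode('utf-8'))
        return digest.hexdigest()

The second option would instead keep the current checksum and add a branch filter to the query that looks for existing patches.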
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
[ OST Failure Report ] [ oVirt master ] [ 20-11-2017 ] [004_basic_sanity.vm_run ]
by Dafna Ron
Hi,
We have a failure in OST on test 004_basic_sanity.vm_run.
It seems to be an error in the VM type ('vmType'), which is related to the patch reported.
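A side note on the failing access, purely for illustration and not the actual vdsm fix: the vdsm traceback below ends in KeyError: 'vmType' inside _getConfigVmStats, i.e. the VM's conf dict is missing the 'vmType' key. A defensive variant of that lookup would be:

    def _getConfigVmStats(conf):
        # conf mirrors Vm.conf from the traceback below; the 'kvm' default
        # is an assumption for this sketch only.
        return {'vmType': conf.get('vmType', 'kvm')}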
Link to suspected patches: https://gerrit.ovirt.org/#/c/84343/
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922/artifact
(Relevant) error snippet from the log:
<error>
vdsm log:
2017-11-20 07:40:12,779-0500 ERROR (jsonrpc/2) [jsonrpc.JsonRpcServer]
Internal server error (__init__:611)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
606, in _handle_request
res = method(**params)
File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201,
in _dynamicMethod
result = fn(*methodArgs)
File "<string>", line 2, in getAllVmStats
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48,
in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1341, in
getAllVmStats
statsList = self._cif.getAllVmStats()
File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 508, in
getAllVmStats
return [v.getStats() for v in self.vmContainer.values()]
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1664, in
getStats
stats.update(self._getConfigVmStats())
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1703, in
_getConfigVmStats
'vmType': self.conf['vmType'],
KeyError: 'vmType'
engine log:
2017-11-20 07:43:07,675-05 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker)
[] Message received: {"jsonrpc": "2.0", "id":
"5bf12e5a-4a09-4999-a6ce-a7dd639d3833", "error": {"message": "Internal
JSON-RPC error:
{'reason': \"'vmType'\"}", "code": -32603}}
2017-11-20 07:43:07,676-05 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Unexpected return
value: Status [code=-32603, message=Internal JSON-RPC error: {'r
eason': "'vmType'"}]
2017-11-20 07:43:07,676-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Failed in
'GetAllVmStatsVDS' method
2017-11-20 07:43:07,676-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Command
'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host-0, VdsIdV
DSCommandParametersBase:{hostId='1af28f2c-79db-4069-aa53-5bb46528c5e9'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
GetAllVmStatsVDS, error = Internal JSON-RPC error: {'reason':
"'vmType'"}, code = -32603
2017-11-20 07:43:07,676-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGeneric
Exception: VDSErrorException: Failed to GetAllVmStatsVDS, error =
Internal JSON-RPC error: {'reason': "'vmType'"}, code = -32603
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createDefaultConcreteException(VdsBrokerCommand.java:81)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.createException(BrokerCommandBase.java:223)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:193)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:23)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:112)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73)
[vdsbroker.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:387)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown
Source) [vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor247.invoke(Unknown Source)
[:1.8.0_151]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_151]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_151]
at
org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:77)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12)
[common.jar:]
at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
[:1.8.0_151]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_151]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_151]
at
org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown
Source) [vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher.poll(VmsStatisticsFetcher.java:29)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.VmsListFetcher.fetch(VmsListFetcher.java:57)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher.poll(PollVmStatsRefresher.java:42)
[vdsbroker.jar:]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[rt.jar:1.8.0_151]
at
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[rt.jar:1.8.0_151]
at
org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
[javax.enterprise.concurrent-1.0.jar:]
at
org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
[javax.enterprise.concurrent-1.0.jar:]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[rt.jar:1.8.0_151]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[rt.jar:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151]
at
org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
[javax.enterprise.concurrent-1.0.jar:]
at
org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
</error>
[ OST Failure Report ] [ oVirt master ] [ 20-11-2017 ] [ 002_bootstrap.verify_add_all_hosts ]
by Dafna Ron
Hi,
We had a failure in OST for test 002_bootstrap.verify_add_all_hosts.
From the logs I can see that vdsm on host0 was reporting that it cannot
find the physical volume, but eventually the storage was created and is
reported as responsive.
However, Host1 is reported to become non-operational with a "storage domain
does not exist" error, and I think that there is a race.
I think that we create the storage domain while host1 is being installed,
and if the domain is not created and reported as activated in time,
host1 will become NonOperational.
Are we starting installation of host1 before host0 and the storage are active?
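If so, one way to close the race would be to gate host1's installation on the master storage domain being reported as active, e.g. with a small poll helper. The sketch below is illustrative only; the helper and condition names are assumptions, not the actual OST suite code:

    import time

    def wait_until(cond, timeout=300, interval=5):
        # Poll cond() until it returns a truthy value or the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if cond():
                return True
            time.sleep(interval)
        raise RuntimeError('condition not met within %d seconds' % timeout)

    # e.g. wait_until(lambda: master_storage_domain_is_active(engine_api))
    # before starting the installation of host1 (both names hypothetical).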
Link to suspected patches: I do not think that the patch reported is
related to the error:
https://gerrit.ovirt.org/#/c/84133/
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/artifact/
(Relevant) error snippet from the log:
<error>
Lago log:
2017-11-18 11:15:25,472::log_utils.py::end_log_task::670::nose::INFO::
# add_master_storage_domain: ESC[32mSuccessESC[0m (in 0:01:09)
2017-11-18
11:15:25,472::log_utils.py::start_log_task::655::nose::INFO:: #
add_secondary_storage_domains: ESC[0mESC[0m
2017-11-18 11:16:47,455::log_utils.py::end_log_task::670::nose::INFO::
# add_secondary_storage_domains: ESC[32mSuccessESC[0m (in 0:01:21)
2017-11-18
11:16:47,456::log_utils.py::start_log_task::655::nose::INFO:: #
import_templates: ESC[0mESC[0m
2017-11-18 11:16:47,513::testlib.py::stopTest::198::nose::INFO:: *
SKIPPED: Exported domain generation not supported yet
2017-11-18 11:16:47,514::log_utils.py::end_log_task::670::nose::INFO::
# import_templates: ESC[32mSuccessESC[0m (in 0:00:00)
2017-11-18
11:16:47,514::log_utils.py::start_log_task::655::nose::INFO:: #
verify_add_all_hosts: ESC[0mESC[0m
2017-11-18
11:16:47,719::testlib.py::assert_equals_within::227::ovirtlago.testlib::ERROR::
* Unhandled exception in <function <lambda> at 0x2909230>
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
219, in assert_equals_within
res = func()
File
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 430, in <lambda>
lambda: _all_hosts_up(hosts_service, total_hosts)
File
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 129, in _all_hosts_up
_check_problematic_hosts(hosts_service)
File
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 149, in _check_problematic_hosts
raise RuntimeError(dump_hosts)
RuntimeError: 1 hosts failed installation:
lago-basic-suite-master-host-1: non_operational
2017-11-18
11:16:47,722::utils.py::wrapper::480::lago.utils::DEBUG::Looking for a
workdir
2017-11-18
11:16:47,722::workdir.py::resolve_workdir_path::361::lago.workdir::DEBUG::Checking
if /dev/shm/ost/deployment-basic-suite-master is a workdir
2017-11-18
11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO:: *
Collect artifacts: ESC[0mESC[0m
2017-11-18
11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO:: *
Collect artifacts: ESC[0mESC[0m
vdsm host0:
2017-11-18 06:14:23,980-0500 INFO (jsonrpc/0) [vdsm.api] START
getDeviceList(storageType=3,
guids=[u'360014059618895272774e97a2aaf5dd6'], checkStatus=False,
options={}) from=::ffff:192.168.201.4,45636,
flow_id=ed8310a1-a7af-4a67-b351-8ff
364766b8a, task_id=6ced0092-34cd-49f0-aa0f-6aae498af37f (api:46)
2017-11-18 06:14:24,353-0500 WARN (jsonrpc/0) [storage.LVM] lvm pvs
failed: 5 [] [' Failed to find physical volume
"/dev/mapper/360014059618895272774e97a2aaf5dd6".'] (lvm:322)
2017-11-18 06:14:24,353-0500 WARN (jsonrpc/0) [storage.HSM] getPV
failed for guid: 360014059618895272774e97a2aaf5dd6 (hsm:1973)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line
1970, in _getDeviceList
pv = lvm.getPV(guid)
File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 852,
in getPV
raise se.InaccessiblePhysDev((pvName,))
InaccessiblePhysDev: Multipath cannot access physical device(s):
"devices=(u'360014059618895272774e97a2aaf5dd6',)"
2017-11-18 06:14:24,389-0500 INFO (jsonrpc/0) [vdsm.api] FINISH
getDeviceList return={'devList': [{'status': 'unknown', 'vendorID':
'LIO-ORG', 'capacity': '21474836480', 'fwrev': '4.0',
'discard_zeroes_data': 0, 'vgUUID': '', 'pvsize': '', 'pathlist':
[{'initiatorname': u'default', 'connection': u'192.168.200.4', 'iqn':
u'iqn.2014-07.org.ovirt:storage', 'portal': '1', 'user': u'username',
'password': '********', 'port': '3260'}, {'initiatorname': u'default',
'connection': u'192.168.201.4', 'iqn': u'iqn.2014-07.org.ovirt:storage',
'portal': '1', 'user': u'username', 'password': '********', 'port':
'3260'}], 'logicalblocksize': '512', 'discard_max_bytes': 1073741824,
'pathstatus': [{'type': 'iSCSI', 'physdev': 'sda', 'capacity':
'21474836480', 'state': 'active', 'lun': '0'}, {'type': 'iSCSI',
'physdev': 'sdf', 'capacity': '21474836480', 'state': 'active', 'lun':
'0'}], 'devtype': 'iSCSI', 'physicalblocksize': '512', 'pvUUID': '',
'serial': 'SLIO-ORG_lun0_bdev_96188952-7277-4e97-a2aa-f5dd6aad6fc2',
'GUID': '360014059618895272774e97a2aaf5dd6', 'productID': 'lun0_bdev'}]}
from=::ffff:192.168.201.4,45636,
flow_id=ed8310a1-a7af-4a67-b351-8ff364766b8a,
task_id=6ced0092-34cd-49f0-aa0f-6aae498af37f (api:52)
2017-11-18 06:14:31,788-0500 INFO (jsonrpc/0) [vdsm.api] FINISH
getStorageDomainInfo return={'info': {'uuid':
'cc61e074-a3b6-4371-9185-66079a39f123', 'vgMetadataDevice':
'360014059618895272774e97a2aaf5dd6', 'vguuid': '7ifbmt-0elj-uWZZ-zS
LG-plA8-8hd3-JG298b', 'metadataDevice':
'360014059618895272774e97a2aaf5dd6', 'state': 'OK', 'version': '4',
'role': 'Regular', 'type': 'ISCSI', 'class': 'Data', 'pool': [], 'name':
'iscsi'}} from=::ffff:192.168.201.4,45636, flow_id=2c1876
99, task_id=c2080b61-d4a5-4bdb-9d75-f81580a8257a (api:
vdsm host1:
2017-11-18 06:16:34,315-0500 ERROR (monitor/c65437c) [storage.Monitor]
Setting up monitor for c65437ce-339f-4b01-aeb5-45c1d486bf49 failed
(monitor:329)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
326, in _setupLoop
self._setupMonitor()
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
348, in _setupMonitor
self._produceDomain()
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 177, in
wrapper
value = meth(self, *a, **kw)
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
366, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
in produce
domain.getRealDomain()
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51,
in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
in _findDomain
return findMethod(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176,
in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'c65437ce-339f-4b01-aeb5-45c1d486bf49',)
2017-11-18 06:16:40,377-0500 INFO (jsonrpc/7) [api.host] START
getStats() from=::ffff:192.168.201.4,58722 (api:46)
2017-11-18 06:16:40,378-0500 INFO (jsonrpc/7) [vdsm.api] START
repoStats(domains=()) from=::ffff:192.168.201.4,58722,
task_id=8fb74944-08c0-491e-ad55-a7a9f0a11ef8 (api:46)
2017-11-18 06:16:40,379-0500 INFO (jsonrpc/7) [vdsm.api] FINISH
repoStats return={u'c65437ce-339f-4b01-aeb5-45c1d486bf49': {'code': 358,
'actual': True, 'version': -1, 'acquired': False, 'delay': '0',
'lastCheck': '6.1', 'valid': False},
u'cc61e074-a3b6-4371-9185-66079a39f123': {'code': 0, 'actual': True,
'version': 4, 'acquired': True, 'delay': '0.00103987', 'lastCheck':
'6.5', 'valid': True}} from=::ffff:192.168.201.4,58722,
task_id=8fb74944-08c0-491e-ad55-a7a9f0a11ef8
(api:52)
engine log:
2017-11-18 06:15:54,040-05 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(EE-ManagedThreadFactory-engine-Thread-29) [4ce8aff3] Domain
'c65437ce-339f-4b01-aeb5-45c1d486bf49:nfs' was reported with error code
'358'
2017-11-18 06:15:54,041-05 ERROR
[org.ovirt.engine.core.bll.InitVdsOnUpCommand]
(EE-ManagedThreadFactory-engine-Thread-29) [4ce8aff3] Storage Domain
'nfs' of pool 'test-dc' is in problem in host
'lago-basic-suite-master-host-1'
2017-11-18 06:15:54,045-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-29) [4ce8aff3] EVENT_ID:
VDS_STORAGE_VDS_STATS_FAILED(189), Host lago-basic-suite-master-host-1
reports about one of the Active Storage Domains as Problematic.
</error>