Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!
by Oved Ourfali
Why not run it via Jenkins for patches?
Like, if you add a comment saying "run: ost", it will run it?
Or should it do it automatically based on some other trigger?
On Dec 21, 2016 17:42, "Eyal Edri" <eedri(a)redhat.com> wrote:
>
>
>
> On Wed, Dec 21, 2016 at 5:36 PM, Michal Skrivanek <
michal.skrivanek(a)redhat.com> wrote:
>>
>>
>>> On 21 Dec 2016, at 16:25, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>>>
>>>
>>>
>>> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek <
michal.skrivanek(a)redhat.com> wrote:
>>>>
>>>>
>>>>> On 21 Dec 2016, at 14:56, Michal Skrivanek <
michal.skrivanek(a)redhat.com> wrote:
>>>>>
>>>>>
>>>>>> On 21 Dec 2016, at 12:19, Eyal Edri <eedri(a)redhat.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra <
vfeenstr(a)redhat.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>> On Dec 21, 2016, at 11:17 AM, Barak Korren <bkorren(a)redhat.com>
wrote:
>>>>>>>>
>>>>>>>> The test for running VMs has been failing since yesterday.
>>>>>>>>
>>>>>>>> The patch merged before the failures started was:
>>>>>>>> https://gerrit.ovirt.org/#/c/68826/
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> The error we're seeing is a time-out (after two minutes) while
>>>>>>>> running this API call:
>>>>>>>>
>>>>>>>> api.vms.get(VM0_NAME).status.state == 'up'
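
For reference, a minimal sketch of the polling behind such a check; the
helper name, interval and the two-minute timeout are illustrative, not
OST's actual testlib code:

    import time

    def wait_for_vm_up(api, vm_name, timeout=120, interval=3):
        # Poll the REST API until the VM reports state 'up', failing
        # once the (illustrative) two-minute timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if api.vms.get(vm_name).status.state == 'up':
                return
            time.sleep(interval)
        raise AssertionError('%s not up within %ss' % (vm_name, timeout))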
>>>>>>>
>>>>>>>
>>>>>>> This is a REST API call, while the patch above is frontend code. So
>>>>>>> this is unrelated.
>>>>>>>
>>>>>>> However on Host 0 I can see this:
>>>>>>>
>>>>>>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed (vm:615)
>>>>>>> Traceback (most recent call last):
>>>>>>>   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
>>>>>>>     self._run()
>>>>>>>   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
>>>>>>>     self._connection.createXML(domxml, flags),
>>>>>>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
>>>>>>>     ret = f(*args, **kwargs)
>>>>>>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
>>>>>>>     return func(inst, *args, **kwargs)
>>>>>>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in createXML
>>>>>>>     if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
>>>>>>> libvirtError: internal error: process exited while connecting to monitor: 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>>>>>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
>>>>>>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed)
>>>>>
>>>>>
>>>>> it is likely related to the recent USB patches
>>>>> investigating
>>>>
>>>>
>>>> hm, there are multiple problems (features/bugs depending on preferred
>>>> point of view :)
>>>> but there is an easy "fix" taking care of this particular problem, so
>>>> we can start with that and figure out the proper approach later
>>>> Arik will push that and merge it soon, likely today
>>>
>>>
>>> Thanks - if there is a quicker way to resolve this by reverting, I
think it's a better option.
>>
>>
>> I really need to talk you out of this approach :-)
>> It does sound tempting and logical, but with our development model of
>> large patch series combined with late detection it really is quite risky.
>> Here it wouldn't help much... and figuring out the right revert patch is
>> more complicated than fixing it.
>
>
> Can we start asking developers to run OST before they merge, so we get
> early detection instead of late detection?
> We have video sessions on how to use OST, so there shouldn't be any
> issues running it on a patch.
>
>>
>> I believe the best is to identify it early and notify the maintainer who
>> merged that patch ASAP, as that person is in the best position to assess
>> whether a revert is safe or whether there is a simple follow-up patch he
>> can push right away.
>>
>> We can surely improve on reporting, so Barak, how/why did you point to
>> that particular patch in your email? It should have started failing with
>> 16c2ec236184b3152f1df8e874b43115f78d0989 (CommitDate: Fri Dec 16 01:56:07
>> 2016 -0500),
>> even though it may be that it was hidden because
>> of c46f653a7846c3c2a76507b8dcf5bc0391ec5709 (CommitDate: Mon Dec 19
>> 15:16:40 2016 -0500).
>>
>> (fix is ready, waiting on CI now)
>>
>> Thanks,
>> michal
>>
>>> Y.
>>>
>>>>
>>>>
>>>>>>> 2016-12-20 16:54:43,550 INFO (vm/d299ab29) [virt.vm] (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: internal error: process exited while connecting to monitor: 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>>>>>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
>>>>>>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed) (code=1) (vm:1197)
>>>>>>> 2016-12-20 16:54:43,550 INFO (vm/d299ab29) [virt.vm] (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection (guestagent:430)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> And on the engine, loads of these:
>>>>>>>
>>>>>>> 2016-12-20 16:53:57,844-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Command 'PollVDSCommand(HostName = lago-basic-suite-4-1-host0, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='994b5d79-605f-4415-94f2-02c79cfa246e'})' execution failed: VDSGenericException: VDSNetworkException: Timeout during rpc call
>>>>>>> 2016-12-20 16:53:57,849-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp Reactor) [7971dfb4] MESSAGE content-length:80 destination:jms.topic.vdsm_responses content-type:application/json subscription:5b6494d5-d5a0-4771-941c-a8be70f72450 {"jsonrpc": "2.0", "id": "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": true}
>>>>>>> 2016-12-20 16:53:57,850-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] Message received: {"jsonrpc": "2.0", "id": "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": true}
>>>>>>> 2016-12-20 16:53:57,850-05 ERROR [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able to update response for "3c95fdb0-5b77-4927-9f6e-adc7395c122d"
>>>>>>> 2016-12-20 16:53:57,844-05 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Timeout during rpc call
>>>>>>>     at org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73) [vdsbroker.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.network.host.HostSetupNetworkPoller.getValue(HostSetupNetworkPoller.java:56) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.network.host.HostSetupNetworkPoller.poll(HostSetupNetworkPoller.java:41) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand.invokeSetupNetworksCommand(HostSetupNetworksCommand.java:426) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand.executeCommand(HostSetupNetworksCommand.java:287) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1249) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1389) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2053) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:]
>>>>>>>     at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1449) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:395) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:511) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:493) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:446) [bll.jar:]
>>>>>>>     at sun.reflect.GeneratedMethodAccessor232.invoke(Unknown Source) [:1.8.0_111]
>>>>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_111]
>>>>>>>     at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_111]
>>>>>>>     at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
>>>>>>>     at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
>>>>>>>     at org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13) [bll.jar:]
>>>>>>>     at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source) [:1.8.0_111]
>>>>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_111]
>>>>>>>     at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_111]
>>>>>>>     at org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
>>>>>>>     at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl-2.3.5.Final.jar:2.3.5.Final]
>>>>>>>     at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
>>>>>>>     at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:243) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
>>>>>>>     at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
>>>>>>>     at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
>>>>>>>     at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
>>>>>>>     at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
>>>>>>>     at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
>>>>>>>     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
>>>>>>>     at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
>>>>>>>     at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
>>>>>>>     at org.ovirt.engine.core.common.interfaces.BackendLocal$$$view4.runAction(Unknown Source) [common.jar:]
>>>>>>>     at org.ovirt.engine.api.restapi.resource.BackendResource.doAction(BackendResource.java:250)
>>>>>>>     at org.ovirt.engine.api.restapi.resource.BackendResource.performAction(BackendResource.java:182)
>>>>>>>     at org.ovirt.engine.api.restapi.resource.BackendResource.performAction(BackendResource.java:170)
>>>>>>>     at org.ovirt.engine.api.restapi.resource.BackendHostResource.setupNetworks(BackendHostResource.java:212)
>>>>>>>     at org.ovirt.engine.api.v3.V3Server.adaptAction(V3Server.java:216)
>>>>>>>     at org.ovirt.engine.api.v3.servers.V3HostServer.setupNetworks(V3HostServer.java:182)
>>>>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_111]
>>>>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_111]
>>>>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_111]
>>>>>>>     at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_111]
>>>>>>>     at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:138) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:107) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:133) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:101) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:402) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:209) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:221) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) [resteasy-jaxrs-3.0.19.Final.jar:3.0.19.Final]
>>>>>>>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
>>>>>>>     at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:81)
>>>>>>>     at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
>>>>>>>     at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
>>>>>>>     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:274)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:209)
>>>>>>>     at io.undertow.servlet.spec.RequestDispatcherImpl.forwardImpl(RequestDispatcherImpl.java:221)
>>>>>>>     at io.undertow.servlet.spec.RequestDispatcherImpl.forwardImplSetup(RequestDispatcherImpl.java:147)
>>>>>>>     at io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:111)
>>>>>>>     at org.ovirt.engine.api.restapi.invocation.VersionFilter.doFilter(VersionFilter.java:139)
>>>>>>>     at org.ovirt.engine.api.restapi.invocation.VersionFilter.doFilter(VersionFilter.java:68)
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.api.restapi.invocation.CurrentFilter.doFilter(CurrentFilter.java:116)
>>>>>>>     at org.ovirt.engine.api.restapi.invocation.CurrentFilter.doFilter(CurrentFilter.java:71)
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.core.aaa.filters.RestApiSessionMgmtFilter.doFilter(RestApiSessionMgmtFilter.java:78) [aaa.jar:]
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.core.aaa.filters.EnforceAuthFilter.doFilter(EnforceAuthFilter.java:39) [aaa.jar:]
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.core.aaa.filters.SsoRestApiNegotiationFilter.doFilter(SsoRestApiNegotiationFilter.java:91) [aaa.jar:]
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter.doFilter(SsoRestApiAuthFilter.java:47) [aaa.jar:]
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.core.aaa.filters.SessionValidationFilter.doFilter(SessionValidationFilter.java:59) [aaa.jar:]
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.core.aaa.filters.RestApiSessionValidationFilter.doFilter(RestApiSessionValidationFilter.java:35) [aaa.jar:]
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.api.restapi.security.CSRFProtectionFilter.doFilter(CSRFProtectionFilter.java:111)
>>>>>>>     at org.ovirt.engine.api.restapi.security.CSRFProtectionFilter.doFilter(CSRFProtectionFilter.java:102)
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at org.ovirt.engine.api.restapi.security.CORSSupportFilter.doFilter(CORSSupportFilter.java:183)
>>>>>>>     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>>>>>>     at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
>>>>>>>     at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
>>>>>>>     at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
>>>>>>>     at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
>>>>>>>     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>     at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
>>>>>>>     at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
>>>>>>>     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>     at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:53)
>>>>>>>     at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
>>>>>>>     at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
>>>>>>>     at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:59)
>>>>>>>     at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
>>>>>>>     at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
>>>>>>>     at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
>>>>>>>     at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
>>>>>>>     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>     at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
>>>>>>>     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)
>>>>>>>     at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
>>>>>>>     at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
>>>>>>>     at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
>>>>>>>     at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
>>>>>>>     at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
>>>>>>>     at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
>>>>>>>     at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
>>>>>>>     at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
>>>>>>>     at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:805)
>>>>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_111]
>>>>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_111]
>>>>>>>     at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_111]
>>>>>>> 2016-12-20 16:53:57,859-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Timeout waiting for VDSM response: Internal timeout occurred
>>>>>>>
>>>>>>>
>>>>>>> This is definitely not related to the patch linked above.
>>>>>>>
>>>>>>> However, I am also not quite sure what the cause of this is right
>>>>>>> now; it might be a breaking change in VDSM.
>>>>>>>
>>>>>>
>>>>>> The DC level is 4.0; is it possible that this is the reason for the
>>>>>> failure?
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Full test code can be seen here:
>>>>>>>>
https://gerrit.ovirt.org/gitweb?p=ovirt-system-tests.git;a=blob;f=basic-s...
>>>>>>>>
>>>>>>>> Full test exception can be seen here:
>>>>>>>>
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.1/13/testRepo...
>>>>>>>>
>>>>>>>> Further logs can be seen in Jenkins:
>>>>>>>>
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.1/13/artifact...
>>>>>>>>
>>>>>>>> --
>>>>>>>> Barak Korren
>>>>>>>> bkorren(a)redhat.com
>>>>>>>> RHCE, RHCi, RHV-DevOps Team
>>>>>>>> https://ifireball.wordpress.com/
>>>>>>>> _______________________________________________
>>>>>>>> Devel mailing list
>>>>>>>> Devel(a)ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Devel mailing list
>>>>>>> Devel(a)ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Eyal Edri
>>>>>> Associate Manager
>>>>>> RHV DevOps
>>>>>> EMEA ENG Virtualization R&D
>>>>>> Red Hat Israel
>>>>>>
>>>>>> phone: +972-9-7692018
>>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>>>> _______________________________________________
>>>>>> Devel mailing list
>>>>>> Devel(a)ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>
>>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Devel mailing list
>>>> Devel(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>>
>>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
Re: [ovirt-devel] [RFE] treat local NFS storage as localfs
by Pavel Gashev
On Wed, 2016-12-21 at 16:25 +0100, Martin Sivak wrote:
>
> So VMs on one host will get better IO performance and the others will
> still use NFS as they do now.
Performance is not an issue when all I/O bound VMs are co-located with
disks. If all VMs are I/O bound, all hosts in a cluster can use local
storage.
> It is an interesting idea, I am just not sure if having a poor man's
> hyperconverged setup with all the drawbacks of NFS is worth it.
> Imagine for example what happens when that storage provider host needs
> to be fenced or put into maintenance. The whole cluster would go down
> (all VMs would lose their storage connection, not just the VMs on the
> affected host).
Losing a server together with its storage could be an issue, or not. It
really depends on the setup. At least it would be good to provide such a
feature :)
> I will let someone from the storage team respond to this, but I do
> not think that trading performance (each host has its own local
> storage) and resilience (well, at least one failing host does not
> affect the others) for migrations is a good deal.
>
> --
> Martin Sivak
> SLA / oVirt
>
> On Wed, Dec 21, 2016 at 2:18 PM, Sven Kieske <s.kieske(a)mittwald.de>
> wrote:
> >
> > On 21/12/16 11:44, Pavel Gashev wrote:
> > >
> > > Hello,
> > >
> > > I'd like to introduce a RFE that allows to use a local storage in
> > > multi
> > > server environments https://bugzilla.redhat.com/show_bug.cgi?id=1
> > > 406412
> > >
> > > Most servers have local storage. Some servers have very reliable
> > > storage with hardware RAID controllers and battery units.
> > >
> > > Example use cases:
> > > https://www.mail-archive.com/users@ovirt.org/msg36719.html
> > > https://www.mail-archive.com/users@ovirt.org/msg36772.html
> > >
> > > The best way to use local storage in multi-server "shared" datacenters
> > > is exporting it over NFS. Using NFS makes it possible to move disks
> > > and VMs among servers.
> > >
> > > In order to improve performance, disk I/O bound VMs can be pinned to
> > > a host with local storage. However, there is still a performance
> > > penalty from the NFS layers. Treating a local NFS storage as local
> > > storage improves performance for VMs pinned to that host.
> > >
> > > Currently, setting up NFS exports is out of oVirt's scope. However,
> > > this would be a way to get rid of the "Local/Shared" datacenter
> > > storage types, so that all storage is shared, but local storage is
> > > used as local.
> > >
> > > Any questions/comments are welcome.
> > >
> > > Specifically, I'd like to request comments on potential data
> > > integrity issues during online VM or disk migration between NFS and
> > > localfs.
> > >
> >
> > Just let me say that I really like this as an end user.
> >
> > Hope this gets in. This seems like less overhead than a complete
> > hyperconverged gluster setup.
> >
> >
> > --
> > Mit freundlichen Grüßen / Regards
> >
> > Sven Kieske
> >
> > Systemadministrator
> > Mittwald CM Service GmbH & Co. KG
> > Königsberger Straße 6
> > 32339 Espelkamp
> > T: +495772 293100
> > F: +495772 293333
> > https://www.mittwald.de
> > Geschäftsführer: Robert Meyer
> > St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
> > Oeynhausen
> > Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
> > Oeynhausen
> >
> >
> > _______________________________________________
> > Devel mailing list
> > Devel(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
Re: [ovirt-devel] [RFE] treat local NFS storage as localfs
by Michal Skrivanek
> On 21 Dec 2016, at 16:26, Martin Sivak <msivak(a)redhat.com> wrote:
>
> Hi,
>
>> Hope this gets in. This seems like less overhead than a complete
>> hyperconverged gluster setup.
>
> But NFS is still a single point of failure. Hyperconverged is supposed
> to address that.
>
>>> In order to improve performance, disk I/O bound VMs can be pinned to
>>> a host with local storage. However, there is still a performance
>>> penalty from the NFS layers. Treating a local NFS storage as local
>>> storage improves performance for VMs pinned to that host.
>
> So VMs on one host will get better IO performance and the others will
> still use NFS as they do now.
>
> It is an interesting idea, I am just not sure if having a poor man's
> hyperconverged setup with all the drawbacks of NFS is worth it.
> Imagine for example what happens when that storage provider host needs
> to be fenced or put into maintenance. The whole cluster would go down
> (all VMs would lose their storage connection, not just the VMs on the
> affected host).
>
> I will let someone from the storage team respond to this, but I do
> not think that trading performance (each host has its own local
> storage) and resilience (well, at least one failing host does not
> affect the others) for migrations is a good deal.
If disk performance is critical, there is also the option of direct
access on the local host, using either PCI passthrough of a local storage
controller or SCSI passthrough of LUNs.
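
For illustration, this is roughly the kind of libvirt device element that
PCI passthrough implies; a sketch only, the PCI address below is made up,
and in oVirt the XML would of course be generated rather than hand-written:

    # Illustrative only: a libvirt <hostdev> element for PCI passthrough
    # of a local storage controller. The PCI address is a placeholder.
    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """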
>
> --
> Martin Sivak
> SLA / oVirt
>
>> On Wed, Dec 21, 2016 at 2:18 PM, Sven Kieske <s.kieske(a)mittwald.de> wrote:
>>> On 21/12/16 11:44, Pavel Gashev wrote:
>>> Hello,
>>>
>>> I'd like to introduce a RFE that allows to use a local storage in multi
>>> server environments https://bugzilla.redhat.com/show_bug.cgi?id=1406412
>>>
>>> Most servers have local storage. Some servers have very reliable
>>> storage with hardware RAID controllers and battery units.
>>>
>>> Example use cases:
>>> https://www.mail-archive.com/users@ovirt.org/msg36719.html
>>> https://www.mail-archive.com/users@ovirt.org/msg36772.html
>>>
>>> The best way to use local storage in multi-server "shared" datacenters
>>> is exporting it over NFS. Using NFS makes it possible to move disks and
>>> VMs among servers.
>>>
>>> In order to improve performance, disk I/O bound VMs can be pinned to
>>> a host with local storage. However, there is still a performance
>>> penalty from the NFS layers. Treating a local NFS storage as local
>>> storage improves performance for VMs pinned to that host.
>>>
>>> Currently, setting up NFS exports is out of oVirt's scope. However,
>>> this would be a way to get rid of the "Local/Shared" datacenter
>>> storage types, so that all storage is shared, but local storage is
>>> used as local.
>>>
>>> Any questions/comments are welcome.
>>>
>>> Specifically, I'd like to request comments on potential data
>>> integrity issues during online VM or disk migration between NFS and
>>> localfs.
>>>
>>
>> Just let me say that I really like this as an end user.
>>
>> Hope this gets in. This seems like less overhead than a complete
>> hyperconverged gluster setup.
>>
>>
>> --
>> Mit freundlichen Grüßen / Regards
>>
>> Sven Kieske
>>
>> Systemadministrator
>> Mittwald CM Service GmbH & Co. KG
>> Königsberger Straße 6
>> 32339 Espelkamp
>> T: +495772 293100
>> F: +495772 293333
>> https://www.mittwald.de
>> Geschäftsführer: Robert Meyer
>> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
>> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
>>
>>
>> _______________________________________________
>> Devel mailing list
>> Devel(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
Branching Vdsm to ovirt-4.1
by Yaniv Bronheim
Hello devels,
I just introduced the ovirt-4.1 branch in vdsm, which will be the base for
the ovirt-4.1 alpha build.
Now master is tagged v4.20.0 and the first ovirt-4.1 tag is v4.19.1.
Basically it means that every fix that should be in 4.1 needs to be
backported and merged to the ovirt-4.1 branch by fromani or me. We are both
in #vdsm, so ping us if needed and add us to the patches. Bear in mind that
in most cases we will ask for a Bug-Url link in the commit message and the
same commit-id as in the master branch patch.
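
A typical backport then looks something like the sketch below; the helper
wrapper is just illustrative, but `git cherry-pick -x` really does record
the original master commit id in the backported commit message:

    import subprocess

    def backport(commit_id, branch='ovirt-4.1'):
        # Cherry-pick a commit from master onto the stable branch; -x
        # appends "(cherry picked from commit <id>)" to the message, so
        # the master commit-id travels with the backport.
        subprocess.check_call(['git', 'checkout', branch])
        subprocess.check_call(['git', 'cherry-pick', '-x', commit_id])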
For any further questions or wonders feel free to reply.
Greetings,
--
*Yaniv Bronhaim.*
Fwd: Fedora 23 End Of Life
by Sandro Bonazzola
FYI.
---------- Forwarded message ----------
From: "Mohan Boddu" <mboddu(a)redhat.com>
Date: 21 Dec 2016 04:05
Subject: Fedora 23 End Of Life
To: <announce(a)lists.fedoraproject.org>,
<test-announce(a)lists.fedoraproject.org>,
<devel-announce(a)lists.fedoraproject.org>
Cc:
As of the 20th of December 2016, Fedora 23 has reached its end of life
for updates and support. No further updates, including security
updates, will be available for Fedora 23. A previous reminder was sent
on 28th of November 2016 [0]. Fedora 24 will continue to receive
updates until approximately one month after the release of Fedora 26.
The maintenance schedule of Fedora releases is documented on the
Fedora Project wiki [1]. The Fedora Project wiki also contains
instructions [2] on how to upgrade from a previous release of Fedora
to a version receiving updates.
Mohan Boddu.
[0] https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/HLHKRTIB33EDZXP624GHF2OZLHWAGKSJ/#Q5O44X4BEBOYEKAEVLSXVI44DSNVHBYG
[1] https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Maintenance_Schedule
[2] https://fedoraproject.org/wiki/Upgrading?rd=DistributionUpgrades
_______________________________________________
devel-announce mailing list -- devel-announce(a)lists.fedoraproject.org
To unsubscribe send an email to devel-announce-leave(a)lists.fedoraproject.org
Heads-up: moving Libvirt xml creation to the engine
by Arik Hadas
Hi All,
We are working on something that is expected to have a big impact, hence this heads-up.
First, we want you to be aware of this change and provide your feedback to make it as good as possible.
Second, until the proposed mechanism is fully merged there will be a chase to cover all features, unless new features are also implemented with the new mechanism. So please, if you are working on something that adds or changes anything in Libvirt's domain XML, do it with the new mechanism as well (the first version will be merged soon).
* Goal
Creating Libvirt XML in the engine rather than in VDSM.
** Today's flow
Engine: VM business entity -> VM properties map
VDSM: VM properties map -> Libvirt XML
** Desired flow
Engine: VM business entity -> Libvirt XML
* Potential Benefits
1. Reduce the number of conversions from 2 to 1, reducing chances for mistakes in the process.
2. Reduce the amount of code in VDSM.
3. Make VM-related changes easier - today many of these changes need to be reviewed in 2 projects; this will eliminate the one that tends to take longer.
4. Prevent shortcuts in the form of VDSM-only changes that should be better reflected in the engine.
5. Not to re-generate the XML on each rerun attempt of VM run/migration.
6. Future - not to re-generate the XML on each attempt to auto-start HA VM when using vm-leases (need to make sure we're using the up-to-date VM configuration though).
7. We already found improvements and cleanups that could be made while touching this area (e.g., remove the boot order from devices in the database).
* Challenges
1. Not to move host-specific information to the engine, for example the path to a storage domain or the sockets of channels.
The solution is to use placeholders that will be replaced by VDSM (see the sketch after this list).
2. Backward compatibility.
3. The more challenging part is the other direction - that will be the next phase.
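
To illustrate the placeholder idea from challenge 1, here is a minimal
sketch; the "{name}" token syntax and the example values are hypothetical,
not the convention the patches below actually use:

    import re

    # Hypothetical "{name}" placeholder syntax; the real marker format
    # agreed between the engine and VDSM may differ.
    PLACEHOLDER = re.compile(r'\{(\w+)\}')

    def fill_host_specific(domxml, host_values):
        # Replace engine-side placeholders with values that only the
        # host knows, e.g. storage paths or channel socket paths.
        return PLACEHOLDER.sub(lambda m: host_values[m.group(1)], domxml)

    # Example: the engine ships XML containing {channel_path}; VDSM
    # substitutes the real socket path before defining the domain.
    xml = "<channel><source path='{channel_path}'/></channel>"
    print(fill_host_specific(xml, {'channel_path': '/var/lib/libvirt/qemu/channels/example'}))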
* Status
As a first step, we began producing the Libvirt XML in the engine by converting the VM properties map to XML [1],
and using the XML that is received as input in VDSM [2].
[1] https://gerrit.ovirt.org/#/c/64473/
[2] https://gerrit.ovirt.org/#/c/65182/
Regards,
Arik
Red Hat Summit: oVirt Talk(s) Submission
by Brian Proffitt
One of the big events for the year is the Red Hat Summit, which has a
dedicated open source community track for upstream projects. At the very
least this is an opportunity to give a "state of the union" talk for the
various communities, including oVirt.
There are a significant number of downstream-related talks for Summit, but
if you would like to submit an oVirt-related talk for the open source
community track[1] before the CFP for Summit closes on Dec. 16, please do
so!
Peace,
Brian
[1]
https://rh2017.smarteventscloud.com/portal/cfp/cfpLogin.ww?sc_cid=7016000...
--
Brian Proffitt
Principal Community Analyst
Open Source and Standards
@TheTechScribe
574.383.9BKP
set_modified_hook (update)
by Shlomo Ben David
Hi All,
This email is an update for the set_modified hook
hook: set_modified
hook goal: when a patch is merged, change the bug status from POST to MODIFIED.
- The current version of the hook doesn't check that all previous
patches (on the external tracker) were closed before setting the bug status
to MODIFIED.
- I released this [1] patch that fixes it (sketched below).
- If you have any questions/issues about this or other hooks, please feel
free to send an email to me / infra.ovirt.org
[1] - https://gerrit.ovirt.org/#/c/67512/1
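
Conceptually, the fixed check is something like the sketch below; the
helper names are hypothetical, not the hook's real API:

    def maybe_set_modified(bug):
        # Move the bug to MODIFIED only once every patch attached to it
        # on the external tracker has been merged (helpers hypothetical).
        patches = get_external_tracker_patches(bug)
        if all(p.status == 'MERGED' for p in patches):
            set_bug_status(bug, 'MODIFIED')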
Thanks for your cooperation,
Shlomi Ben-David | DevOps Engineer | Red Hat ISRAEL
RHCSA | RHCE
IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
OPEN SOURCE - 1 4 011 && 011 4 1