Hi,
Merged an engine side workaround - https://gerrit.ovirt.org/#/c/92656/
It passes system-tests basic and network suites:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/2872/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/2873/
There is also a vdsm side fix - https://gerrit.ovirt.org/#/c/92634/ (although the engine fix is enough to get the OST to pass). I'm not sure it passes OST without the engine fix; verifying it now.
Thanks,
Alona.

On Sun, Jul 1, 2018 at 11:21 AM, Ehud Yonasi <eyonasi@redhat.com> wrote:
Hey Alona,
What is the current status of the fix?

On Thu, Jun 28, 2018 at 5:57 PM Dafna Ron <dron@redhat.com> wrote:
vdsm also failing on the same issue:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8407/
Thanks,
Dafna

On Thu, Jun 28, 2018 at 11:11 AM, Dafna Ron <dron@redhat.com> wrote:
Thanks Alona,
Can you please update me once you have a fix?
Thanks,
Dafna

On Thu, Jun 28, 2018 at 10:28 AM, Alona Kaplan <alkaplan@redhat.com> wrote:
Hi,
I'm aware of the error. Francesco and I are working on it.
Thanks,
Alona.

On Thu, Jun 28, 2018, 12:23 Dafna Ron <dron@redhat.com> wrote:
ovirt-hosted-engine-ha failed on the same issue as well.

On Thu, Jun 28, 2018 at 10:07 AM, Dafna Ron <dron@redhat.com> wrote:
Alona, can you please take a look?

Hi,
We had a failure in test 098_ovirt_provider_ovn.use_ovn_provider.
Although CQ is pointing to this change: https://gerrit.ovirt.org/#/c/92567/ - packaging: Add python-netaddr requirement, I actually think from the error it's because of changes made to multiqueues:
https://gerrit.ovirt.org/#/c/92009/ - engine: Update libvirtVmXml to consider vmBase.multiQueuesEnabled attribute
https://gerrit.ovirt.org/#/c/92008/ - engine: Introduce algorithm for calculating how many queues to assign per vnic
https://gerrit.ovirt.org/#/c/92007/ - engine: Add multiQueuesEnabled to VmBase
https://gerrit.ovirt.org/#/c/92318/ - restapi: Add 'Multi Queues Enabled' to the relevant mappers
https://gerrit.ovirt.org/#/c/92149/ - webadmin: Add 'Multi Queues Enabled' to vm dialog

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8375/

Link to all logs:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8375/artifact/exported-artifacts/basic-suit.el7.x86_64/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/

(Relevant) error snippet from the log:
<error>
engine:
2018-06-27 13:59:25,976-04 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Command 'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host-1, VdsIdVDSCommandParametersBase: {hostId='d9094c95-3275-4616-b4c2-815e753bcfed'})' execution failed: VDSGenericException: VDSNetworkException: Broken pipe
2018-06-27 13:59:25,977-04 DEBUG [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (EE-ManagedThreadFactory-engine-Thread-442) [] Executing task: EE-ManagedThreadFactory-engine-Thread-442
2018-06-27 13:59:25,977-04 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engine-Thread-442) [] method: getVdsManager, params: [d9094c95-3275-4616-b4c2-815e753bcfed], timeElapsed: 0ms
2018-06-27 13:59:25,977-04 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engine-Thread-442) [] Host 'lago-basic-suite-master-host-1' is not responding.
2018-06-27 13:59:25,979-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM lago-basic-suite-master-host-1 command GetStatsAsyncVDS failed: Broken pipe
2018-06-27 13:59:25,976-04 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Broken pipe
at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:189) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:23) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) [vdsbroker.jar:]
at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]
at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:399) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor270.invoke(Unknown Source) [:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_171]
at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:77) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source) [:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_171]
at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher.poll(VmsStatisticsFetcher.java:29) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.monitoring.VmsListFetcher.fetch(VmsListFetcher.java:49) [vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher.poll(PollVmStatsRefresher.java:44) [vdsbroker.jar:]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_171]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_171]
at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:]
at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_171]
at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]
at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
2018-06-27 13:59:25,984-04 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] FINISH, GetAllVmStatsVDSCommand, return: , log id: 56d99e77
2018-06-27 13:59:25,984-04 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] method: runVdsCommand, params: [GetAllVmStats, VdsIdVDSCommandParametersBase: {hostId='d9094c95-3275-4616-b4c2-815e753bcfed'}], timeElapsed: 1497ms
2018-06-27 13:59:25,984-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Failed to fetch vms info for host 'lago-basic-suite-master-host-1' - skipping VMs monitoring.
vdsm:
2018-06-27 14:10:17,314-0400 INFO (jsonrpc/7) [virt.vm] (vmId='b8a11304-07e3-4e64-af35-7421be780d5b') Hotunplug NIC xml: <?xml version='1.0' encoding='utf-8'?>
<interface type="bridge">
<address bus="0x00" domain="0x0000" function="0x0" slot="0x0b" type="pci" />
<mac address="00:1a:4a:16:01:0e" />
<model type="virtio" />
<source bridge="network_1" />
<link state="up" />
<driver name="vhost" queues="" />
<alias name="ua-3c77476f-f194-476a-8412-d76a9e58d1f9" />
</interface>
(vm:3321)
2018-06-27 14:10:17,328-0400 ERROR (jsonrpc/7) [virt.vm] (vmId='b8a11304-07e3-4e64-af35-7421be780d5b') Hotunplug failed (vm:3353)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3343, in hotunplugNic
self._dom.detachDevice(nicXml)
File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 99, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 93, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1177, in detachDevice
if ret == -1: raise libvirtError ('virDomainDetachDevice() failed', dom=self)
libvirtError: 'queues' attribute must be positive number:
2018-06-27 14:10:17,345-0400 DEBUG (jsonrpc/7) [api] FINISH hotunplugNic response={'status': {'message': "'queues' attribute must be positive number: ", 'code': 50}} (api:136)
2018-06-27 14:10:17,346-0400 INFO (jsonrpc/7) [api.virt] FINISH hotunplugNic return={'status': {'message': "'queues' attribute must be positive number: ", 'code': 50}} from=::ffff:192.168.201.4,32976, flow_id=ecb6652, vmId=b8a11304-07e3-4e64-af35-7421be780d5b (api:53)
2018-06-27 14:10:17,346-0400 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.hotunplugNic failed (error 50) in 0.07 seconds (__init__:311)
2018-06-27 14:10:19,244-0400 DEBUG (qgapoller/2) [vds] Not sending QEMU-GA command 'guest-get-users' to vm_id='b8a11304-07e3-4e64-af35-7421be780d5b', command is not supported (qemuguestagent:192)
2018-06-27 14:10:20,038-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] Calling 'Host.getAllVmStats' in bridge with {} (__init__:328)
2018-06-27 14:10:20,038-0400 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=::1,48032 (api:47)
2018-06-27 14:10:20,041-0400 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,48032 (api:53)
2018-06-27 14:10:20,043-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] Return 'Host.getAllVmStats' in bridge with (suppressed) (__init__:355)
2018-06-27 14:10:20,043-0400 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:311)
2018-06-27 14:10:20,057-0400 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer] Calling 'Host.getAllVmIoTunePolicies' in bridge with {} (__init__:328)
2018-06-27 14:10:20,058-0400 INFO (jsonrpc/6) [api.host] START getAllVmIoTunePolicies() from=::1,48032 (api:47)
2018-06-27 14:10:20,058-0400 INFO (jsonrpc/6) [api.host] FINISH getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 'io_tune_policies_dict': {'b8a11304-07e3-4e64-af35-7421be780d5b': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/blockSD/cf23ceeb-81a3-4714-85a0-c6ddd1e024da/images/650fe4ae-47a1-4f2d-9cba-1617a8c868c3/03e75c3c-24e7-4e68-a6f1-21728aaaa73e', 'name': 'vda'}]}}} from=::1,48032 (api:53)
2018-06-27 14:10:20,059-0400 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer] Return 'Host.getAllVmIoTunePolicies' in bridge with {'b8a11304-07e3-4e64-af35-7421be780d5b': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/blockSD/cf23ceeb-81a3-4714-85a0-c6ddd1e024da/images/650fe4ae-47a1-4f2d-9cba-1617a8c868c3/03e75c3c-24e7-4e68-a6f1-21728aaaa73e', 'name': 'vda'}]}} (__init__:355)
</error>
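For context on the vdsm error above: the hot-unplug request carries `<driver name="vhost" queues="" />`, and libvirt's detachDevice rejects an empty 'queues' value. Below is a minimal sketch of the kind of guard that avoids this, in plain Python with the standard library; the function name and structure are illustrative assumptions, not the actual vdsm/engine code (the real fixes are in the gerrit changes linked in this thread).

```python
# Illustrative sketch (NOT vdsm code): build the hot(un)plug <interface>
# element and emit the 'queues' attribute only when it is a positive count,
# since libvirt rejects empty or non-positive values with
# "'queues' attribute must be positive number".
import xml.etree.ElementTree as ET


def build_interface_xml(mac, bridge, queues=None):
    """Return interface XML; 'queues' is omitted unless it is a positive int."""
    iface = ET.Element('interface', type='bridge')
    ET.SubElement(iface, 'mac', address=mac)
    ET.SubElement(iface, 'model', type='virtio')
    ET.SubElement(iface, 'source', bridge=bridge)
    driver_attrs = {'name': 'vhost'}
    if queues:  # skips None, 0 and '' - an empty string is what broke detachDevice
        driver_attrs['queues'] = str(queues)
    ET.SubElement(iface, 'driver', **driver_attrs)
    return ET.tostring(iface).decode()


# With multiqueue disabled the attribute is dropped entirely;
# with it enabled a positive count is serialized.
print(build_interface_xml('00:1a:4a:16:01:0e', 'network_1'))
print(build_interface_xml('00:1a:4a:16:01:0e', 'network_1', queues=4))
```

The point is only that the attribute must be absent, not empty, when multiqueue is off: the log's `queues=""` suggests the engine serialized the attribute unconditionally.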
_________________
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/infra@ovirt.org/message/52RUJGRYJVGAYTEVHD2PUCVINQRHE5QQ/