engine builds failing (Invalid Git ref given: jenkins-ovirt-engine_master_check-patch...)
by Greg Sheremeta
Hi,
It appears all engine check-patch builds are failing.
https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-engine_master_...
They all say
"""
15:40:10 ##########################################################
15:40:10 ## FINISHED SUCCESSFULLY
15:40:10 ##########################################################
15:40:10 Collecting mock logs
15:40:10 renamed './mock_logs.sWqMKYDi/populate_mock' ->
'exported-artifacts/mock_logs/populate_mock'
15:40:10 renamed './mock_logs.sWqMKYDi/script' ->
'exported-artifacts/mock_logs/script'
15:40:10 renamed './mock_logs.sWqMKYDi/init' ->
'exported-artifacts/mock_logs/init'
15:40:10 ##########################################################
15:40:10 [ovirt-engine_master_check-patch-fc28-x86_64] $ /bin/bash -xe
/tmp/jenkins7563971716314834364.sh
15:40:10 +
WORKSPACE=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64
15:40:10 +
LOGDIR=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs
15:40:10 + mkdir -p
/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs
15:40:10 + cd ./ovirt-engine
15:40:10 +
/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/jenkins/scripts/pusher.py
--log=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs/push_ovirt-engine.log
push --if-not-exists
--unless-hash=jenkins-ovirt-engine_master_check-patch-fc28-x86_64-822 master
15:40:10 Invalid Git ref given: 'jenkins-ovirt-engine_master_check-patch-fc28-x86_64-822'
15:40:10 Build step 'Execute shell' marked build as failure
15:40:10 $ ssh-agent -k
15:40:10 unset SSH_AUTH_SOCK;
15:40:10 unset SSH_AGENT_PID;
15:40:10 echo Agent pid 10302 killed;
15:40:10 [ssh-agent] Stopped.
15:40:11 Performing Post build task...
15:40:11 Match found for :.* : True
15:40:11 Logical operation result is TRUE
15:40:11 Running script : #!/bin/bash -ex
15:40:11 echo "shell-scripts/collect_artifacts.sh"
15:40:11 cat <<EOC
15:40:11 _______________________________________________________________________
15:40:11
15:40:11 #######################################################################
15:40:11 #                                                                     #
15:40:11 #                        ARTIFACT COLLECTION                          #
15:40:11 #                                                                     #
15:40:11 #######################################################################
"""
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
<https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
<https://red.ht/sig>
Strange error in CI when checking an engine patch
by Andrej Krejcir
Hi,
I'm getting a strange error from CI on one of my patches[1]. The error only
happens in CI; the local build passes OK.
[ERROR] Tests run: 29, Failures: 0, Errors: 4, Skipped: 0, Time elapsed:
0.073 s <<< FAILURE! - in org.ovirt.engine.core.bll.quota.QuotaManagerTest
[ERROR] testConsumeStorageQuotaSpecificOverThreshold Time elapsed: 0.022
s <<< ERROR!
java.lang.VerifyError:
Bad return type
Exception Details:
Location:
org/mockito/internal/junit/ExceptionFactory$JUnitArgsAreDifferent.create(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;)Ljava/lang/AssertionError;
@10: areturn
Reason:
Type 'org/mockito/exceptions/verification/junit/ArgumentsAreDifferent'
(current frame, stack[0]) is not assignable to 'java/lang/AssertionError'
(from method signature)
Current Frame:
bci: @10
flags: { }
locals: { 'java/lang/String', 'java/lang/String', 'java/lang/String' }
stack: {
'org/mockito/exceptions/verification/junit/ArgumentsAreDifferent' }
Bytecode:
0x0000000: bb00 0259 2a2b 2cb7 0003 b0
at
org.ovirt.engine.core.bll.quota.QuotaManagerTest.assertAuditLogWritten(QuotaManagerTest.java:110)
at
org.ovirt.engine.core.bll.quota.QuotaManagerTest.testConsumeStorageQuotaSpecificOverThreshold(QuotaManagerTest.java:180)
The whole log is here:
https://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64/...
The QuotaManagerTest class was changed in previous patches[2][3] in the
topic branch, but CI passes OK for them.
Can someone point me to a possible cause?
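This kind of VerifyError often shows up when the mockito and junit jars that
end up on the test classpath don't match each other, so one thing that might
be worth checking is whether the CI slave resolves the same versions as a
local build does. A quick sketch (run from the engine source tree; just a
diagnostic, not a fix):

#!/usr/bin/env python
# Sketch: print the mockito/junit part of the maven dependency tree so the
# local output can be diffed against the same command run in the CI job.
import subprocess

# dependency:tree accepts comma-separated groupId patterns in -Dincludes.
cmd = ['mvn', 'dependency:tree', '-Dincludes=org.mockito,junit']
print(subprocess.check_output(cmd).decode('utf-8', 'replace'))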
Thanks,
Andrej
[1] - https://gerrit.ovirt.org/#/c/85213/
[2] - https://gerrit.ovirt.org/#/c/90477/
[3] - https://gerrit.ovirt.org/#/c/92345/
[VDSM] All tests passed, but CI failed to archive the artifacts (No space left on device)
by Nir Soffer
Can we improve the CI so that the build succeeds once check-patch has
completed successfully, regardless of errors after that point?
All tests succeeded:
13:41:18   tests: commands succeeded
13:41:18   storage-py27: commands succeeded
13:41:18   storage-py36: commands succeeded
13:41:18   lib-py27: commands succeeded
13:41:18   lib-py36: commands succeeded
13:41:18   network-py27: commands succeeded
13:41:18   network-py36: commands succeeded
13:41:18   virt-py27: commands succeeded
13:41:18   virt-py36: commands succeeded
13:41:18   congratulations :)
But the build failed:
https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/229/
Because of:
13:42:33 POST BUILD TASK : SUCCESS
13:42:33 END OF POST BUILD TASK : 2
13:42:33 Archiving artifacts
13:42:35 ERROR: Step ‘Archive the artifacts’ aborted due to exception:
13:42:35 java.io.IOException: No space left on device
13:42:35 	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
13:42:35 	at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)
13:42:35 	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
13:42:35 	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
13:42:35 	at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)
13:42:35 	at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
13:42:35 	at java.nio.channels.Channels.writeFully(Channels.java:101)
13:42:35 	at java.nio.channels.Channels.access$000(Channels.java:61)
13:42:35 	at java.nio.channels.Channels$1.write(Channels.java:174)
13:42:35 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1793)
13:42:35 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)
13:42:35 	at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744)
13:42:35 	at hudson.util.IOUtils.copy(IOUtils.java:43)
13:42:35 	at hudson.FilePath.readFromTar(FilePath.java:2465)
13:42:35 	Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to vm0064.workers-phx.ovirt.org
13:42:35 	at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
13:42:35 	at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
13:42:35 	at hudson.remoting.Channel$2.adapt(Channel.java:990)
13:42:35 	at hudson.remoting.Channel$2.adapt(Channel.java:986)
13:42:35 	at hudson.remoting.FutureAdapter.get(FutureAdapter.java:59)
13:42:35 	at hudson.FilePath.copyRecursiveTo(FilePath.java:2368)
13:42:35 	at jenkins.model.StandardArtifactManager.archive(StandardArtifactManager.java:61)
13:42:35 	at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:235)
13:42:35 	at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
13:42:35 	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
13:42:35 	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
13:42:35 	at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
13:42:35 	at hudson.model.Build$BuildExecution.post2(Build.java:186)
13:42:35 	at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
13:42:35 	at hudson.model.Run.execute(Run.java:1819)
13:42:35 	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
See
https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/229/con...
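As a stopgap on our side, a small guard at the end of check-patch could at
least make the problem visible before Jenkins hits it, by comparing the size
of exported-artifacts with the free space left on the slave. A minimal sketch
(paths and wiring are assumptions, not something the jenkins repo has today):

#!/usr/bin/env python
# Sketch: warn when the slave does not have enough free space left to
# archive exported-artifacts. Paths are assumptions.
import os

WORKSPACE = os.environ.get('WORKSPACE', '.')
ARTIFACTS = os.path.join(WORKSPACE, 'exported-artifacts')


def tree_size(path):
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total


def free_space(path):
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize


if __name__ == '__main__':
    need = tree_size(ARTIFACTS)
    have = free_space(WORKSPACE)
    print('artifacts: %d MiB, free: %d MiB' % (need // 2**20, have // 2**20))
    if need > have:
        print('WARNING: not enough free space to archive the artifacts')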
Nir
Recent he-basic-ansible-suite-4.2 failures
by Yedidyah Bar David
Hi all,
I noticed that our hosted-engine suites [1] have been failing often
recently, and decided to have a look at [2], which is on 4.2 and should
hopefully be "rock solid" and basically never fail.
I looked at these, [3][4][5][6][7], which are all the ones that still
appear in [2] and are marked as failed.
Among them:
- All but one failed while "Waiting for agent to be ready" and timing
out after 10 minutes, as part of 008_restart_he_vm.py, which was added
a month ago [8] and then patched [9].
- The other one [7] failed while "Waiting for engine to migrate", also
eventually timing out after 10 minutes, as part of
010_local_mainentance.py, which was also added in [9].
I also had a look at the last ones that succeeded, builds 329 to 337
of [2]. There:
- "Waiting for agent to be ready" took between 26 and 48 seconds
- "Waiting for engine to migrate" took between 69 and 82 seconds
Assuming these numbers are reasonable (which might be debatable), 10
minutes indeed sounds like a reasonable timeout, and I think we should
handle each failure specifically. Did anyone check them? Was it an
infra issue/load/etc.? A bug? Something else?
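For reference, the wait that times out here is essentially a
poll-until-deadline loop, roughly this shape (just a sketch, not the actual
OST code):

import time


def wait_for(condition, timeout=600, interval=10):
    """Poll condition() until it returns True or timeout seconds pass."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise RuntimeError('timed out after %s seconds' % timeout)

# e.g. wait_for(lambda: agent_is_ready(host))  # agent_is_ready is hypothetical

So a step that normally finishes in under a minute has to degrade by more
than a factor of ten before it hits the 10-minute deadline.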
I didn't check the logs yet, might do this later. Also didn't check
the failures in other jobs in [1].
Best regards,
[1] https://jenkins.ovirt.org/search/?q=he-basic
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-4.2/
[3] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-4...
[4] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-4...
[5] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-4...
[6] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-4...
[7] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-4...
[8] https://gerrit.ovirt.org/91952
[9] https://gerrit.ovirt.org/92341
--
Didi
Gerrit trying to set 3rd party cookies
by Nir Soffer
After watching Sarah Bird's great talk about the terrifying web[1], I found
that for
some reason 3rd party cookies were enabled in my browser.
After disabling them, I found that gerrit is using 3rd party cookies from
gravatar.com.
(see attached screenshot).
Why do we allow 3rd parties like gravatar to set cookies?
Can we use gravatar without setting cookies?
[image: Screenshot from 2018-07-01 15-31-37.png]
[1] https://il.pycon.org/2018/schedule/presentation/18/
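For what it's worth, the avatar itself is just a static image addressed by an
MD5 hash of the email, so in principle it can be fetched (or proxied by
gerrit) without any cookie being involved; a minimal sketch of how the URL is
built:

# Sketch: build a gravatar image URL from an email address. The image is
# addressed purely by the MD5 of the lower-cased address; fetching it does
# not require a cookie (whether gravatar tries to set one is another matter).
import hashlib


def gravatar_url(email, size=32):
    digest = hashlib.md5(email.strip().lower().encode('utf-8')).hexdigest()
    return 'https://www.gravatar.com/avatar/%s?s=%d&d=identicon' % (digest, size)


print(gravatar_url('user@example.com'))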
Nir
[ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 28-06-2018 ] [ 098_ovirt_provider_ovn.use_ovn_provider.]
by Dafna Ron
Hi,
We had a failure in test 098_ovirt_provider_ovn.use_ovn_provider.
Although CQ is pointing to this change: https://gerrit.ovirt.org/#/c/92567/ -
packaging: Add python-netaddr requirement, I actually think, judging from the
error, that it's caused by the changes made to multiqueues:
https://gerrit.ovirt.org/#/c/92009/ - engine: Update libvirtVmXml to
consider vmBase.multiQueuesEnabled attribute
https://gerrit.ovirt.org/#/c/92008/ - engine: Introduce algorithm for
calculating how many queues asign per vnic
https://gerrit.ovirt.org/#/c/92007/ - engine: Add multiQueuesEnabled to
VmBase
https://gerrit.ovirt.org/#/c/92318/ - restapi: Add 'Multi Queues Enabled'
to the relevant mappers
https://gerrit.ovirt.org/#/c/92149/ - webadmin: Add 'Multi Queues Enabled'
to vm dialog
Alona, can you please take a look?
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8375/

Link to all logs:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8375/artif...

(Relevant) error snippet from the log:

<error>

engine:
2018-06-27 13:59:25,976-04 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Command
'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host-1,
VdsIdVDSCommandParametersBase:{hostId='d9094c95-3275-4616-b4c2-815e753bcfed'})'
execution failed: VDSGenericException: VDSNetworkException: Broken pipe
2018-06-27 13:59:25,977-04 DEBUG
[org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil]
(EE-ManagedThreadFactory-engine-Thread-442) [] Executing task:
EE-ManagedThreadFactory-engine-Thread-442
2018-06-27 13:59:25,977-04 DEBUG
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
(EE-ManagedThreadFactory-engine-Thread-442) [] method: getVdsManager,
params: [d9094c95-3275-4616-b4c2-815e753bcfed], timeElapsed: 0ms
2018-06-27 13:59:25,977-04 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager]
(EE-ManagedThreadFactory-engine-Thread-442) [] Host
'lago-basic-suite-master-host-1' is not responding.
2018-06-27 13:59:25,979-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-63) [] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM lago-basic-suite-master-host-1
command GetStatsAsyncVDS failed: Broken pipe
2018-06-27 13:59:25,976-04 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Broken pipe
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:189)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:23)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
[vdsbroker.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31)
[dal.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:399)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown
Source) [vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor270.invoke(Unknown Source)
[:1.8.0_171]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_171]
at
org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:77)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12)
[common.jar:]
at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source)
[:1.8.0_171]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_171]
at
org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown
Source) [vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher.poll(VmsStatisticsFetcher.java:29)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.VmsListFetcher.fetch(VmsListFetcher.java:49)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher.poll(PollVmStatsRefresher.java:44)
[vdsbroker.jar:]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[rt.jar:1.8.0_171]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[rt.jar:1.8.0_171]
at
org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
[javax.enterprise.concurrent-1.0.jar:]
at
org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
[javax.enterprise.concurrent-1.0.jar:]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[rt.jar:1.8.0_171]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[rt.jar:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_171]
at
org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
[javax.enterprise.concurrent-1.0.jar:]
at
org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
2018-06-27 13:59:25,984-04 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80) [] FINISH,
GetAllVmStatsVDSCommand, return: , log id: 56d99e77
2018-06-27 13:59:25,984-04 DEBUG
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
(EE-ManagedThreadFactory-engineScheduled-Thread-80) [] method:
runVdsCommand, params: [GetAllVmStats,
VdsIdVDSCommandParametersBase:{hostId='d9094c95-3275-4616-b4c2-815e753bcfed'}],
timeElapsed: 1497ms
2018-06-27 13:59:25,984-04 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher]
(EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Failed to fetch vms
info for host 'lago-basic-suite-master-host-1' - skipping VMs monitoring.
vdsm:

2018-06-27 14:10:17,314-0400 INFO  (jsonrpc/7) [virt.vm] (vmId='b8a11304-07e3-4e64-af35-7421be780d5b') Hotunplug NIC xml: <?xml version='1.0' encoding='utf-8'?><interface type="bridge"> <address bus="0x00" domain="0x0000" function="0x0" slot="0x0b" type="pci" /> <mac address="00:1a:4a:16:01:0e" /> <model type="virtio" /> <source bridge="network_1" /> <link state="up" /> <driver name="vhost" queues="" /> <alias name="ua-3c77476f-f194-476a-8412-d76a9e58d1f9" /></interface> (vm:3321)
2018-06-27 14:10:17,328-0400 ERROR (jsonrpc/7) [virt.vm] (vmId='b8a11304-07e3-4e64-af35-7421be780d5b') Hotunplug failed (vm:3353)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3343, in hotunplugNic
    self._dom.detachDevice(nicXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 99, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 93, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1177, in detachDevice
    if ret == -1: raise libvirtError ('virDomainDetachDevice() failed', dom=self)
libvirtError: 'queues' attribute must be positive number:
2018-06-27 14:10:17,345-0400 DEBUG (jsonrpc/7) [api] FINISH hotunplugNic response={'status': {'message': "'queues' attribute must be positive number: ", 'code': 50}} (api:136)
2018-06-27 14:10:17,346-0400 INFO  (jsonrpc/7) [api.virt] FINISH hotunplugNic return={'status': {'message': "'queues' attribute must be positive number: ", 'code': 50}} from=::ffff:192.168.201.4,32976, flow_id=ecb6652, vmId=b8a11304-07e3-4e64-af35-7421be780d5b (api:53)
2018-06-27 14:10:17,346-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.hotunplugNic failed (error 50) in 0.07 seconds (__init__:311)
2018-06-27 14:10:19,244-0400 DEBUG (qgapoller/2) [vds] Not sending QEMU-GA command 'guest-get-users' to vm_id='b8a11304-07e3-4e64-af35-7421be780d5b', command is not supported (qemuguestagent:192)
2018-06-27 14:10:20,038-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] Calling 'Host.getAllVmStats' in bridge with {} (__init__:328)
2018-06-27 14:10:20,038-0400 INFO  (jsonrpc/1) [api.host] START getAllVmStats() from=::1,48032 (api:47)
2018-06-27 14:10:20,041-0400 INFO  (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,48032 (api:53)
2018-06-27 14:10:20,043-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] Return 'Host.getAllVmStats' in bridge with (suppressed) (__init__:355)
2018-06-27 14:10:20,043-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:311)
2018-06-27 14:10:20,057-0400 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer] Calling 'Host.getAllVmIoTunePolicies' in bridge with {} (__init__:328)
2018-06-27 14:10:20,058-0400 INFO  (jsonrpc/6) [api.host] START getAllVmIoTunePolicies() from=::1,48032 (api:47)
2018-06-27 14:10:20,058-0400 INFO  (jsonrpc/6) [api.host] FINISH getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 'io_tune_policies_dict': {'b8a11304-07e3-4e64-af35-7421be780d5b': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/blockSD/cf23ceeb-81a3-4714-85a0-c6ddd1e024da/images/650fe4ae-47a1-4f2d-9cba-1617a8c868c3/03e75c3c-24e7-4e68-a6f1-21728aaaa73e', 'name': 'vda'}]}}} from=::1,48032 (api:53)
2018-06-27 14:10:20,059-0400 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer] Return 'Host.getAllVmIoTunePolicies' in bridge with {'b8a11304-07e3-4e64-af35-7421be780d5b': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/blockSD/cf23ceeb-81a3-4714-85a0-c6ddd1e024da/images/650fe4ae-47a1-4f2d-9cba-1617a8c868c3/03e75c3c-24e7-4e68-a6f1-21728aaaa73e', 'name': 'vda'}]}} (__init__:355)

</error>
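The vdsm traceback above shows the hot-unplugged interface XML carrying an
empty queues attribute (queues=""), which libvirt rejects. A minimal sketch of
the kind of guard that would avoid emitting the attribute when no queue count
was calculated (hypothetical, not the actual engine/vdsm code):

# Sketch (hypothetical): only emit the queues attribute on <driver> when a
# positive queue count is available, since libvirt rejects queues="".
import xml.etree.ElementTree as ET


def build_driver_element(queues=None):
    driver = ET.Element('driver', name='vhost')
    if queues is not None and int(queues) > 0:
        driver.set('queues', str(int(queues)))
    return driver


print(ET.tostring(build_driver_element()).decode())   # <driver name="vhost" />
print(ET.tostring(build_driver_element(4)).decode())  # <driver name="vhost" queues="4" />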