qemu-kvm-ev-2.9.0-16.el7_4.11.1 now available for testing
by Sandro Bonazzola
Hi, qemu-kvm-ev-2.9.0-16.el7_4.11.1
<https://cbs.centos.org/koji/buildinfo?buildID=21003> is now available for
testing.
If no negative feedback is reported, I'm going to push it to release on
Thursday, December 14th.
Here's the changelog:
* Mon Dec 11 2017 Sandro Bonazzola <sbonazzo(a)redhat.com> - ev-2.9.0-16.el7_4.11.1
- Removing RH branding from package name

* Mon Nov 13 2017 Miroslav Rezanina <mrezanin(a)redhat.com> - rhev-2.9.0-16.el7_4.11
- kvm-exec-use-qemu_ram_ptr_length-to-access-guest-ram.patch [bz#1472185]
- kvm-multiboot-validate-multiboot-header-address-values.patch [bz#1501123]
- Resolves: bz#1472185 (CVE-2017-11334 qemu-kvm-rhev: Qemu: exec: oob access during dma operation [rhel-7.4.z])
- Resolves: bz#1501123 (CVE-2017-14167 qemu-kvm-rhev: Qemu: i386: multiboot OOB access while loading kernel image [rhel-7.4.z])

* Mon Oct 23 2017 Miroslav Rezanina <mrezanin(a)redhat.com> - rhev-2.9.0-16.el7_4.10
- kvm-vga-stop-passing-pointers-to-vga_draw_line-functions.patch [bz#1501300]
- kvm-vga-drop-line_offset-variable.patch [bz#1501300]
- kvm-vga-handle-cirrus-vbe-mode-wraparounds.patch [bz#1501300]
- kvm-cirrus-fix-oob-access-in-mode4and5-write-functions.patch [bz#1501300]
- Resolves: bz#1501300 (CVE-2017-15289 qemu-kvm-rhev: Qemu: cirrus: OOB access issue in mode4and5 write functions [rhel-7.4.z])

* Mon Oct 09 2017 Miroslav Rezanina <mrezanin(a)redhat.com> - rhev-2.9.0-16.el7_4.9
- kvm-nbd-client-Fix-regression-when-server-sends-garbage.patch [bz#1495474]
- kvm-fix-build-failure-in-nbd_read_reply_entry.patch [bz#1495474]
- kvm-nbd-client-avoid-spurious-qio_channel_yield-re-entry.patch [bz#1495474]
- kvm-nbd-client-avoid-read_reply_co-entry-if-send-failed.patch [bz#1495474]
- kvm-qemu-iotests-improve-nbd-fault-injector.py-startup-p.patch [bz#1495474]
- kvm-qemu-iotests-test-NBD-over-UNIX-domain-sockets-in-08.patch [bz#1495474]
- kvm-block-nbd-client-nbd_co_send_request-fix-return-code.patch [bz#1495474]
- Resolves: bz#1495474 (Fail to quit source qemu when do live migration after mirroring guest to NBD server [rhel-7.4.z])
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
[ OST Failure Report ] [ oVirt Hosted-engine-ha Master ] [ 12-12-2017 ] [ 002_bootstrap.verify_add_hosts, 002_bootstrap.add_hosts ]
by Dafna Ron
Hi,
We had a failure on ovirt-hosted-engine-ha in two tests, on both the basic
and upgrade suites.
Both tests failed on host deploy with the same missing package: python2-lxml.
Link and headline of suspected patches:
Patch reported as failed: https://gerrit.ovirt.org/#/c/85279/ - build: ovirt-hosted-engine-ha-2.2.1
Patch reported as root cause of failure: https://gerrit.ovirt.org/#/c/85262/ - Fix OVF parsing of memory size when multiple Items are present
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4358/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4358/artifact/
(Relevant) error snippet from the log:
<error>
2017-12-11 16:14:19,691-0500 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
File "/tmp/ovirt-3qTn6VadoI/pythonlib/otopi/context.py", line 133, in _executeMethod
method['method']()
File "/tmp/ovirt-3qTn6VadoI/otopi-plugins/otopi/packagers/yumpackager.py", line 248, in _packages
self.processTransaction()
File "/tmp/ovirt-3qTn6VadoI/otopi-plugins/otopi/packagers/yumpackager.py", line 262, in processTransaction
if self._miniyum.buildTransaction():
File "/tmp/ovirt-3qTn6VadoI/pythonlib/otopi/miniyum.py", line 920, in buildTransaction
raise yum.Errors.YumBaseError(msg)
YumBaseError: [u'ovirt-hosted-engine-ha-2.2.1-1.el7.centos.noarch requires python2-lxml']
2017-12-11 16:14:19,692-0500 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Package installation': [u'ovirt-hosted-engine-ha-2.2.1-1.el7.centos.noarch requires python2-lxml']
2017-12-11 16:14:19,706-0500 DEBUG otopi.plugins.otopi.debug.debug_failure.debug_failure debug_failure._notification:100 tcp connections:
id uid local foreign state pid exe
</error>
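For reference, the dependency failure can be reproduced outside of OST by asking yum
whether python2-lxml is resolvable from the enabled repos. A rough sketch using the
yum Python API (the same API otopi's miniyum wraps); nothing here is OST-specific:

# Rough check: is python2-lxml provided by any enabled repo?
# Uses the python2-only yum API; run it on the same host/repo setup as the suite.
import yum

yb = yum.YumBase()
yb.setCacheDir()  # per-user cache so this also works as non-root

matches = yb.pkgSack.searchNames(['python2-lxml'])
if matches:
    for pkg in matches:
        print('found %s in repo %s' % (pkg, pkg.repoid))
else:
    print('python2-lxml is not provided by any enabled repo')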
[vdsm] libvirtconnection test failures on my box
by Francesco Romani
Hi all,
since yesterday, running 'make check' on my F26 box I get these errors:
======================================================================
ERROR: libvirtMock will raise an error when nodeDeviceLookupByName is
called.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/fromani/Projects/upstream/vdsm/tests/monkeypatch.py", line
134, in wrapper
return f(*args, **kw)
File "/home/fromani/Projects/upstream/vdsm/tests/monkeypatch.py", line
134, in wrapper
return f(*args, **kw)
File "/home/fromani/Projects/upstream/vdsm/tests/monkeypatch.py", line
134, in wrapper
return f(*args, **kw)
File
"/home/fromani/Projects/upstream/vdsm/tests/common/libvirtconnection_test.py",
line 150, in testCallFailedConnectionDown
connection = libvirtconnection.get(killOnFailure=True)
TypeError: __init__() got an unexpected keyword argument 'killOnFailure'
-------------------- >> begin captured logging << --------------------
2017-12-12 09:57:05,801 DEBUG (libvirt/events) [root] START thread
<Thread(libvirt/events, started daemon 140602075010816)> (func=<bound
method _EventLoop.__run of <vdsm.common.libvirtconnection._EventLoop
instance at 0x7fe081993128>>, args=(), kwargs={}) (concurrent:189)
2017-12-12 09:57:05,802 DEBUG (libvirt/events) [root] FINISH thread
<Thread(libvirt/events, started daemon 140602075010816)> (concurrent:192)
--------------------- >> end captured logging << ---------------------
======================================================================
ERROR: libvirtMock will raise an error when nodeDeviceLookupByName is
called.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/fromani/Projects/upstream/vdsm/tests/monkeypatch.py", line
134, in wrapper
return f(*args, **kw)
File "/home/fromani/Projects/upstream/vdsm/tests/monkeypatch.py", line
134, in wrapper
return f(*args, **kw)
File "/home/fromani/Projects/upstream/vdsm/tests/monkeypatch.py", line
134, in wrapper
return f(*args, **kw)
File
"/home/fromani/Projects/upstream/vdsm/tests/common/libvirtconnection_test.py",
line 132, in testCallFailedConnectionUp
connection = libvirtconnection.get(killOnFailure=True)
TypeError: __init__() got an unexpected keyword argument 'killOnFailure'
-------------------- >> begin captured logging << --------------------
2017-12-12 09:57:05,803 DEBUG (libvirt/events) [root] START thread
<Thread(libvirt/events, started daemon 140602075010816)> (func=<bound
method _EventLoop.__run of <vdsm.common.libvirtconnection._EventLoop
instance at 0x7fe081993128>>, args=(), kwargs={}) (concurrent:189)
2017-12-12 09:57:05,803 DEBUG (libvirt/events) [root] FINISH thread
<Thread(libvirt/events, started daemon 140602075010816)> (concurrent:192)
--------------------- >> end captured logging << ---------------------
======================================================================
ERROR: Positive test - libvirtMock does not raise any errors
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/fromani/Projects/upstream/vdsm/tests/monkeypatch.py", line
134, in wrapper
return f(*args, **kw)
File "/home/fromani/Projects/upstream/vdsm/tests/monkeypatch.py", line
134, in wrapper
return f(*args, **kw)
File
"/home/fromani/Projects/upstream/vdsm/tests/common/libvirtconnection_test.py",
line 118, in testCallSucceeded
connection.nodeDeviceLookupByName()
TypeError: nodeDeviceLookupByName() takes exactly 2 arguments (1 given)
-------------------- >> begin captured logging << --------------------
2017-12-12 09:57:05,804 DEBUG (libvirt/events) [root] START thread
<Thread(libvirt/events, started daemon 140602075010816)> (func=<bound
method _EventLoop.__run of <vdsm.common.libvirtconnection._EventLoop
instance at 0x7fe081993128>>, args=(), kwargs={}) (concurrent:189)
2017-12-12 09:57:05,804 DEBUG (libvirt/events) [root] FINISH thread
<Thread(libvirt/events, started daemon 140602075010816)> (concurrent:192)
--------------------- >> end captured logging << ---------------------
----------------------------------------------------------------------
Smells like incorrect monkeypatching leaking out of a test module.
The last one is easy; it seems to be just an incorrect call, and I have a fix pending.
However, why is it starting to fail just now?
It seems to run fine on CI, which is interesting.
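For anyone looking into it: the failure mode I suspect is a patch that is applied at
import time (or never undone) and therefore stays visible to the tests that run later.
A minimal illustration of the pattern, not the actual vdsm test helper:

# Minimal illustration of the suspected failure mode, not the vdsm helper itself:
# a patch that is not restored leaks into every test that runs afterwards.
import contextlib


@contextlib.contextmanager
def monkey_patch(obj, name, replacement):
    """Temporarily replace an attribute, always restoring the original."""
    original = getattr(obj, name)
    setattr(obj, name, replacement)
    try:
        yield
    finally:
        # Without this restore, the replacement stays in place for the
        # rest of the process, which is exactly what a leak looks like.
        setattr(obj, name, original)


class Connection(object):
    def nodeDeviceLookupByName(self, name):
        return name


conn = Connection()
with monkey_patch(Connection, 'nodeDeviceLookupByName', lambda self: 'patched'):
    print(conn.nodeDeviceLookupByName())   # patched version, no argument needed
print(conn.nodeDeviceLookupByName('dev'))  # original behaviour is back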
Any help is welcome
Bests,
--
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
[ OST Failure Report ] [ oVirt Master ] [ 05-12-2017 ] [ 006_migrations.migrate_vm ]
by Dafna Ron
Hi,
We had a failure for test 006_migrations.migrate_vm on master.
There was a libvirt disruption on the migration src, and after that vdsm
reported the migration as failed because the VM already exists, which
makes me suspect a split-brain case.
The patch reported has nothing to do with this issue, and I think we
simply stumbled on a race condition which can cause a split brain.
Please note that there are several metrics-related issues reported in the
vdsm logs as well.
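To confirm or rule out an actual split brain, one way is to ask both hosts which VMs
they report as running and see whether the same VM id shows up twice. A rough sketch
using vdsm's jsonrpc client; the host names, the port and the exact shape of the
getVMList response are assumptions I have not verified on these lago VMs:

# Rough split-brain check: ask each host which VMs it claims to be running.
# Assumes vdsm.client is installed and vdsm listens on the default port 54321.
from vdsm import client

HOSTS = ('lago-basic-suite-master-host-0', 'lago-basic-suite-master-host-1')
VM_ID = 'b3962e5c-08e3-444e-910e-ea675fa1a5c7'

owners = []
for host in HOSTS:
    cli = client.connect(host, 54321)
    try:
        if VM_ID in cli.Host.getVMList():
            owners.append(host)
    finally:
        cli.close()

print('VM %s reported running on: %s' % (VM_ID, owners or 'no host'))
if len(owners) > 1:
    print('split brain: more than one host claims the same VM')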
Link and headline of suspected patches:
Not related
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4278/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4278/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-006_migrations.py/
(Relevant) error snippet from the log:
<error>
Engine:
2017-12-05 06:26:48,546-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-385) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: vm0, Source: lago-basic-suite-master-host-1, Destination: lago-basic-suite-master-host-0).
dst:
2017-12-05 06:26:46,615-0500 WARN (jsonrpc/6) [vds] vm
b3962e5c-08e3-444e-910e-ea675fa1a5c7 already exists (API:179)
2017-12-05 06:26:46,615-0500 ERROR (jsonrpc/6) [api] FINISH create
error=Virtual machine already exists (api:124)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 117,
in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 180, in create
raise exception.VMExists()
VMExists: Virtual machine already exists
2017-12-05 06:26:46,620-0500 INFO (jsonrpc/6) [api.virt] FINISH create
return={'status': {'message': 'Virtual machine already exists', 'code':
4}} from=::ffff:192.168.201.3,50394 (api:52)
2017-12-05 06:26:46,620-0500 INFO (jsonrpc/6) [api.virt] FINISH
migrationCreate return={'status': {'message': 'Virtual machine already
exists', 'code': 4}} from=::ffff:192.168.201.3,50394 (api:52)
2017-12-05 06:26:46,620-0500 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer]
RPC call VM.migrationCreate failed (error 4) in 0.03 seconds (__init__:573)
2017-12-05 06:26:46,624-0500 INFO (jsonrpc/2) [api.virt] START
destroy(gracefulAttempts=1) from=::ffff:192.168.201.3,50394 (api:46)
2017-12-05 06:26:46,624-0500 INFO (jsonrpc/2) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') Release VM resources (vm:4967)
2017-12-05 06:26:46,625-0500 WARN (jsonrpc/2) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') trying to set state to
Powering down when already Down (vm:575)
2017-12-05 06:26:46,625-0500 INFO (jsonrpc/2) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') Stopping connection
(guestagent:435)
2017-12-05 06:26:46,625-0500 INFO (jsonrpc/2) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') _destroyVmGraceful attempt
#0 (vm:5004)
2017-12-05 06:26:46,626-0500 WARN (jsonrpc/2) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') VM
'b3962e5c-08e3-444e-910e-ea675fa1a5c7' couldn't be destroyed in libvirt:
Requested operation is not valid: domain is not running (vm
:5025)
2017-12-05 06:26:46,627-0500 INFO (jsonrpc/2) [vdsm.api] START
teardownImage(sdUUID='952bb427-b88c-4fbe-99ef-49970d3aaf70',
spUUID='9dcfeaaf-96b7-4e26-a327-5570e0e39261',
imgUUID='e6eadbae-ec7a-48f4-8832-64a622a12bef', volUUID=None) from
=::ffff:192.168.201.3,50394,
task_id=2da93725-5533-4354-a369-751eb44f9ef2 (api:46)
src:
2017-12-05 06:26:46,623-0500 ERROR (migsrc/b3962e5c) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') migration destination
error: Virtual machine already exists (migration:290)
disruption on dst:
2017-12-05 06:20:04,662-0500 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer]
RPC call VM.shutdown succeeded in 0.00 seconds (__init__:573)
2017-12-05 06:20:04,676-0500 ERROR (Thread-1) [root] Shutdown by QEMU
Guest Agent failed (vm:5097)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5088, in
qemuGuestAgentShutdown
self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)
File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
98, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 126, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 512, in
wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2403, in
shutdownFlags
if ret == -1: raise libvirtError ('virDomainShutdownFlags() failed',
dom=self)
libvirtError: Guest agent is not responding: QEMU guest agent is not
connected
2017-12-05 06:20:04,697-0500 INFO (libvirt/events) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') block threshold 1 exceeded
on 'vda'
(/rhev/data-center/mnt/blockSD/952bb427-b88c-4fbe-99ef-49970d3aaf70/images/e6eadbae-ec7a-48f4-
8832-64a622a12bef/1f063c73-f5cd-44e8-a84f-7810857f82df) (drivemonitor:162)
2017-12-05 06:20:04,698-0500 INFO (libvirt/events) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') drive 'vda' threshold
exceeded (storage:872)
2017-12-05 06:20:05,889-0500 INFO (jsonrpc/7) [api.host] START
getAllVmStats() from=::1,41118 (api:46)
2017-12-05 06:20:05,891-0500 INFO (jsonrpc/7) [api.host] FINISH
getAllVmStats return={'status': {'message': 'Done', 'code': 0},
'statsList': (suppressed)} from=::1,41118 (api:52)
2017-12-05 06:20:05,892-0500 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer]
RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
2017-12-05 06:20:07,466-0500 INFO (libvirt/events) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') underlying process
disconnected (vm:1024)
2017-12-05 06:20:07,469-0500 INFO (libvirt/events) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') Release VM resources (vm:4967)
2017-12-05 06:20:07,469-0500 INFO (libvirt/events) [virt.vm]
(vmId='b3962e5c-08e3-444e-910e-ea675fa1a5c7') Stopping connection
(guestagent:435)
2017-12-05 06:20:07,469-0500 INFO (libvirt/events) [vdsm.api] START
teardownImage(sdUUID='952bb427-b88c-4fbe-99ef-49970d3aaf70',
spUUID='9dcfeaaf-96b7-4e26-a327-5570e0e39261',
imgUUID='e6eadbae-ec7a-48f4-8832-64a622a12bef', volUUID=None)
from=internal, task_id=9efedf46-d3be-4e41-b7f9-a074ed6344f6 (api:46)
</error>
UI memory leak testing across browsers
by Greg Sheremeta
I thought this was interesting enough to share with everyone.
tl;dr: give Firefox Quantum a try!
We're doing UI memory leak testing of webadmin across browsers. The tests
use selenium [http://www.seleniumhq.org/], which drives the browser through
hundreds of repetitive actions; over time this exposes browser memory leaks
in the actions being performed.
X = repetitions (over time)
Y = memory consumed in MB
The graph shows that, as the number of repetitions through the application
increases, Firefox Quantum 57 leaks less (lower slope) than Chrome 62
(latest) and older Firefox 52 ESR. Also, it uses less memory the entire
time (lower on the graph). [FF 57 is also just plain faster per repetition,
but I haven't posted that graph here. Take my word for it :)]
[image: graph of memory consumed (MB) vs. repetitions for Firefox Quantum 57, Firefox 52 ESR and Chrome 62]
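If anyone wants to reproduce this kind of measurement, here is a rough sketch of the
loop (not our actual harness): drive one repetitive action with selenium and sample the
JS heap after every repetition. It assumes Chrome/Chromium, since performance.memory is
a Chrome-only API, and do_one_repetition() is a hypothetical placeholder for the real
webadmin navigation:

# Rough sketch of the measurement loop, not the actual test harness.
# Assumes Chrome (performance.memory is Chrome-only); do_one_repetition()
# is a placeholder for the real clicks through webadmin.
from selenium import webdriver

REPETITIONS = 200
WEBADMIN_URL = 'https://engine.example.com/ovirt-engine/webadmin/'  # placeholder


def do_one_repetition(driver):
    # Stand-in for the real actions (open a dialog, switch main tabs, ...).
    driver.refresh()


def used_js_heap_mb(driver):
    used = driver.execute_script('return performance.memory.usedJSHeapSize')
    return used / (1024.0 * 1024.0)


driver = webdriver.Chrome()
driver.get(WEBADMIN_URL)

for i in range(REPETITIONS):
    do_one_repetition(driver)
    print('%d,%.1f' % (i, used_js_heap_mb(driver)))  # CSV: repetition, MB

driver.quit()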
Best wishes,
Greg
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
<https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
<https://red.ht/sig>
[ANN] oVirt 4.2.0 First Candidate Release is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Candidate Release of oVirt 4.2.0, as of December 5th, 2017.
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the first candidate release of the 4.2.0 version. This
release brings more than 280 enhancements and more than one thousand bug
fixes, including more than 500 high or urgent severity fixes, on top of
oVirt 4.1 series.
What's new in oVirt 4.2.0?
- The Administration Portal has been completely redesigned using Patternfly, a widely adopted standard in web application design. It now features a cleaner, more intuitive design, for an improved user experience.
- There is an all-new VM Portal for non-admin users.
- A new High Performance virtual machine type has been added to the New VM dialog box in the Administration Portal.
- Open Virtual Network (OVN) adds support for Open vSwitch software defined networking (SDN).
- oVirt now supports Nvidia vGPU.
- The ovirt-ansible-roles set of packages helps users with common administration tasks.
- Virt-v2v now supports Debian/Ubuntu based VMs.
For more information about these and other features, check out the oVirt
4.2.0 blog post
<https://ovirt.org/blog/2017/11/introducing-ovirt-4.2.0-beta/> and stay
tuned for further blog posts.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2 (available for x86_64 only)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available.
- oVirt Node is already available [4]
Additional Resources:
* Read more about the oVirt 4.2.0 release highlights:
http://www.ovirt.org/release/4.2.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.2.0/
[4] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
[ OST Failure Report ] [ oVirt Master ] [ 07-11-2017 ] [ 007_sd_reattach.deactivate_storage_domain ]
by Dafna Ron
Hi,
We had a failure on the master basic suite for test
007_sd_reattach.deactivate_storage_domain.
The failure was that we failed to deactivate the domain due to running tasks.
It does not seem to be related to the patch it was testing, and I think
that the test itself needs to be modified to check that there are no
running tasks.
Is there perhaps a way to query if there are running tasks before
running the command? Can you please take a look at the test in OST?
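Something along these lines might work before the deactivate call (a rough sketch
using the Python SDK's jobs service; the connection details are placeholders and I
have not checked whether OST already has a helper for this):

# Rough sketch: wait until the engine reports no running jobs, then deactivate.
# Connection details are placeholders; ovirtsdk4 is assumed to be installed.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    insecure=True,
)

jobs_service = connection.system_service().jobs_service()


def running_jobs():
    return [j for j in jobs_service.list()
            if j.status == types.JobStatus.STARTED]


while running_jobs():
    time.sleep(5)

# ...now it should be safe to deactivate the storage domain.
connection.close()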
Link and headline of suspected patches: Not related to error
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4319/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4319/artifact/
(Relevant) error snippet from the log:
<error>
2017-12-06 20:13:23,166-05 WARN [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainWithOvfUpdateCommand] (default task-7) [d82880e8-1d40-4a3b-a1ba-3362f2f130a0] Validation of action 'DeactivateStorageDomainWithOvfUpdate' failed for user admin@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__DEACTIVATE,ERROR_CANNOT_DEACTIVATE_DOMAIN_WITH_TASKS
2017-12-06 20:13:23,167-05 INFO [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainWithOvfUpdateCommand] (default task-7) [d82880e8-1d40-4a3b-a1ba-3362f2f130a0] Lock freed to object 'EngineLock:{exclusiveLocks='[ea2fd992-8aa4-44fe-aa43-e96754a975ba=STORAGE]', sharedLocks='[5e0a0183-0e25-4f43-b5b0-0cfb5510248e=POOL]'}'
2017-12-06 20:13:23,172-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-7) [d82880e8-1d40-4a3b-a1ba-3362f2f130a0] method: runAction, params: [DeactivateStorageDomainWithOvfUpdate, DeactivateStorageDomainWithOvfUpdateParameters:{commandId='630c28e1-41ab-43db-9755-a2bb870dbcb3', user='null', commandType='Unknown'}], timeElapsed: 65ms
2017-12-06 20:13:23,176-05 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-7) [] Operation Failed: [Cannot deactivate Storage while there are running tasks on this Storage. -Please wait until tasks will finish and try again.]
</error>