[ OST Failure Report ] [ oVirt master ] [ 19/06/17 ] [ add_secondary_storage_domains ]
by Gil Shinar
Test failed: 002_bootstrap.add_secondary_storage_domains
Link to suspected patches:
Link to Job: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7264
Link to all logs:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7264/art...
Error snippet from the log:
Host 0:
2017-06-19 08:46:48,457-0400 WARN (jsonrpc/1) [storage.HSM] getPV failed for guid: 360014050f775dd654404bd39d061693c (hsm:1928)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 1925, in _getDeviceList
    pv = lvm.getPV(guid)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 852, in getPV
    raise se.InaccessiblePhysDev((pvName,))
InaccessiblePhysDev: Multipath cannot access physical device(s): "devices=(u'360014050f775dd654404bd39d061693c',)"
2017-06-19 08:46:48,546-0400 WARN (jsonrpc/1) [storage.LVM] lvm pvs failed: 5 [] [' Failed to find physical volume "/dev/mapper/360014050fae4a3a8e8047e6933531876".'] (lvm:322)
2017-06-19 08:46:48,546-0400 WARN (jsonrpc/1) [storage.HSM] getPV failed for guid: 360014050fae4a3a8e8047e6933531876 (hsm:1928)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 1925, in _getDeviceList
    pv = lvm.getPV(guid)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 852, in getPV
    raise se.InaccessiblePhysDev((pvName,))
InaccessiblePhysDev: Multipath cannot access physical device(s): "devices=(u'360014050fae4a3a8e8047e6933531876',)"
Host 1:
2017-06-19 08:49:03,151-0400 ERROR (monitor/b92f872) [storage.Monitor] Setting up monitor for b92f8727-8c51-4e12-a3fe-5441531c13ca failed (monitor:329)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 326, in _setupLoop
    self._setupMonitor()
  File "/usr/share/vdsm/storage/monitor.py", line 348, in _setupMonitor
    self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 237, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 366, in _produceDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 108, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 49, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 132, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 149, in _findDomain
    return findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'b92f8727-8c51-4e12-a3fe-5441531c13ca',)
Heads up: VdcAction* classes names have been changed
by Tal Nisan
Hi all,
I've merged patches renaming the legacy "VdcAction*" classes to "Action*";
the Vdc prefix dates back a long time and doesn't make much sense anymore
(if it ever did).
Please rebase your patches before submitting, and make sure CI passes again
even if it passed before the rebase; a sketch of a typical call-site fixup
follows the list below.
Changed classes are:
VdcActionType -> ActionType
VdcActionTypeDeserializer -> ActionTypeDeserializer
VdcActionParametersBase -> ActionParametersBase
VdcActionUtils -> ActionUtils
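For illustration, here is roughly what a rebase fixup looks like; this is a
hypothetical diff (the call site and the AddVm action are just examples),
only the type names change and the semantics stay identical:

-import org.ovirt.engine.core.common.action.VdcActionParametersBase;
-import org.ovirt.engine.core.common.action.VdcActionType;
+import org.ovirt.engine.core.common.action.ActionParametersBase;
+import org.ovirt.engine.core.common.action.ActionType;
...
-        backend.runAction(VdcActionType.AddVm, params);
+        backend.runAction(ActionType.AddVm, params);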
Thanks to the reviewers who helped review those long and boring patches
(amureini, masayah, frolland, ishaby, awels, gsheremeta, oourfali)
Next up on my list is VdcQuery*, stay tuned :)
[ENGINE][ACTION_NEEDED] - default maintainers, per path
by Roy Golan
Hi all,
Some time ago the infra team enabled the *gerrit default reviewer plugin*,
which you have probably noticed if you have logged in to gerrit.
For example, we can automatically assign the stable branch maintainers per
branch:
[filter "branch:ovirt-engine-4.1"]
reviewer = stableBranchMaintainersGroup
We can also assign reviewers based on path:
[filter "branch:master file:^frontend/.*"]
reviewer = j.r.r.t@shire.me
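As another hypothetical example (the group name and path below are made up,
and this assumes the plugin applies every matching filter), a single filter
can also list several reviewers:
[filter "branch:master file:^packaging/.*"]
reviewer = packagingMaintainersGroup
reviewer = someone@example.org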
*Action Needed:*
Nominate yourself, or others, by amending this patch online [1]. Once this
patch is fully acked and agreed upon we will merge it. If something can't
reach consensus we will drop it from the patch until the maintainers agree.
[1] https://gerrit.ovirt.org/#/c/77488/
Heads up: Checkstyle is now enforcing blank lines between import groups in oVirt Engine
by Tal Nisan
A couple of years ago commit a4f50e3b introduced a checkstyle validation in
oVirt Engine that enforces the order of imports. Some developers did not
follow the Engine code format, so commits often included, besides the actual
code, a reformatting of the imports that was unrelated to the patch.
After the validation was added the situation improved, but alignment was
still not 100%: the Engine code format also requires a blank line between
the different import groups, and since this was not enforced, some patches
kept adding or removing lines between the imports.
Today I've merged commit bf298fed, which fixes all the imports per the
project standard, and commit 1c6b710, which adds a checkstyle validation
that enforces it, so note that from now on you should maintain this layout.
You can import the code format, located in the Engine root under
config/engine-code-format.xml, into your IDE to have the imports arranged
automatically.
[ OST Failure Report ] [ oVirt 4.1 ] [ 15/06/17 ] [ snapshot merge ]
by Gil Shinar
Test failed: 004_basic_sanity.snapshots_merge
Link to Job: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.1/1713/
Link to all logs:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.1/1713/artifa...
Error snippet from the log:
Engine error:
2017-06-15 09:16:56,989-04 WARN [org.ovirt.engine.core.bll.GetVmConfigurationBySnapshotQuery] (default task-8) [d18e72b4-f40f-4229-850f-b347ab08aef6] Snapshot '5341b09d-3e06-4dd2-92ec-3325fa50a9d6' does not exist
2017-06-15 09:16:56,989-04 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-8) [] Operation Failed: Entity not found: null
VDSM error:
2017-06-15 09:15:14,948-0400 INFO (jsonrpc/6) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume cb913f2b-22ef-4beb-b9c8-4d865f2e48a7/cb64690e-473d-4c3c-a4e6-640de10f119b in use.\', \' Logical volume cb913f2b-22ef-4beb-b9c8-4d865f2e48a7/3dd815a0-92a9-4337-b467-6e0dd7fcf223 in use.\']\\ncb913f2b-22ef-4beb-b9c8-4d865f2e48a7/[\'3dd815a0-92a9-4337-b467-6e0dd7fcf223\', \'cb64690e-473d-4c3c-a4e6-640de10f119b\']",)',) from=::ffff:192.168.201.2,43638, flow_id=33e259a3-3db2-4c07-a6f0-a1e5be5b98d6 (api:50)
2017-06-15 09:15:14,949-0400 ERROR (jsonrpc/6) [storage.TaskManager.Task] (Task='d96ad445-3017-4fea-b6d4-59adae63156a') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3186, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1288, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/share/vdsm/storage/lvm.py", line 1304, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/share/vdsm/storage/lvm.py", line 843, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume cb913f2b-22ef-4beb-b9c8-4d865f2e48a7/cb64690e-473d-4c3c-a4e6-640de10f119b in use.\', \' Logical volume cb913f2b-22ef-4beb-b9c8-4d865f2e48a7/3dd815a0-92a9-4337-b467-6e0dd7fcf223 in use.\']\\ncb913f2b-22ef-4beb-b9c8-4d865f2e48a7/[\'3dd815a0-92a9-4337-b467-6e0dd7fcf223\', \'cb64690e-473d-4c3c-a4e6-640de10f119b\']",)',)
2017-06-15 09:15:14,952-0400 INFO (jsonrpc/6) [storage.TaskManager.Task] (Task='d96ad445-3017-4fea-b6d4-59adae63156a') aborting: Task is aborted: 'Cannot deactivate Logical Volume' - code 552 (task:1175)
2017-06-15 09:15:14,953-0400 ERROR (jsonrpc/6) [storage.Dispatcher] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume cb913f2b-22ef-4beb-b9c8-4d865f2e48a7/cb64690e-473d-4c3c-a4e6-640de10f119b in use.\', \' Logical volume cb913f2b-22ef-4beb-b9c8-4d865f2e48a7/3dd815a0-92a9-4337-b467-6e0dd7fcf223 in use.\']\\ncb913f2b-22ef-4beb-b9c8-4d865f2e48a7/[\'3dd815a0-92a9-4337-b467-6e0dd7fcf223\', \'cb64690e-473d-4c3c-a4e6-640de10f119b\']",)',) (dispatcher:78
UI Redesign patch has been merged.
by Alexander Wels
Hi,
I have just merged [1], a huge patch that updates the look and feel of the
webadmin UI to be more modern and based on PatternFly. I believe this is a
big step forward and should improve the usability and functionality of the
UI (it certainly looks better).
There are a bunch of follow-up patches that, once merged, will make the UI
look like [2]. It might take a few days for all of these patches to get into
master. More usability enhancements are planned as part of the overhaul
process.
As always, if there are any issues or comments let me know and I will be sure
to address them ASAP.
Alexander
ps. You will have to clear your browser cache to get some of the formatting to
look right.
[1] https://gerrit.ovirt.org/#/c/75669/
[2] http://imgur.com/a/JX0iG
[ovirt-system-tests] Failed after fluentd rpm update
by Valentina Makarova
Hello!
The fluentd packages were modified yesterday here:
http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
This repository is referenced in the reposync config.
Now run_suite.sh fails in the 003_00_metrics_bootstrap test with this error:
TASK [fluentd : Ensure fluentd configuration directory exists] *****************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "gid": 0, "group": "root", "mode": "0755", "msg": "chgrp failed: failed to look up group fluentd"
The same error appears on host0 and host1.
Does anyone know how to fix it?
Sincerely, Valentina Makarova
Removal of Export Storage Domain and virt-v2v
by Richard W.M. Jones
As you may know virt-v2v can use the Export Storage Domain (ESD) to
upload converted virtual machines to oVirt. It was brought to my
attention yesterday that the ESD feature is being dropped, so this
will no longer work at some point in the future. (BTW I would
appreciate notice if you're going to drop major features that we rely on.)
Although virt-v2v can still work via the GUI, this isn't really
suitable for bulk, scripted upload of hundreds or thousands of VMs.
The ESD method was never very good. It was sort of an undocumented
back door into oVirt, and it was slow, and still required manual
intervention (after virt-v2v had done its job, you still needed to go
through the GUI and import the guests into the Data Domain).
What we really need is a fully scripted method to upload VMs --
metadata and disk images -- to oVirt. Maybe one exists already? If
not, what's the best way to do this?
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org