[ANNOUNCE] Introducing STDCI V2
by Barak Korren
The CI team is thrilled to announce the general availability of the second
version of the oVirt CI standard. Work on this version included almost a
complete rewrite of the CI backend. The major user-visible features are:
- Project maintainers no longer need to maintain YAML in the ‘jenkins’
repository. Details that used to be specified there, including targeted
distributions, architectures and oVirt versions, are now specified (in a
different syntax) in a YAML file inside the project’s own repository.
- We now support “sub stages” which provide the ability to run multiple
different scripts in parallel within the same STDCI stage. There is also a
conditional syntax which allows controlling which scripts get executed
according to which files were changed in the patch being tested.
- The STDCI script file names and locations can now be customized via the
above-mentioned YAML file. This means that, for example, using the same script
for different stages can now be done by assigning it to those stages in the
YAML file instead of by using symlinks (see the sketch below).
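To make the last two points more concrete, here is a minimal sketch of what
such a configuration could look like. The announcement does not show this
syntax, so the key names (‘sub-stages’, ‘run-if’, ‘file-changed’, ‘script’) and
the script paths below are illustrative assumptions; the documentation linked
at the end of this post has the authoritative format.
stages:
  - check-patch:
      sub-stages:
        - backend:
            # hypothetical condition: run this sub-stage only when backend files change
            run-if:
              file-changed: 'backend/*'
            script: automation/check-backend.sh
        - frontend:
            run-if:
              file-changed: 'frontend/*'
            script: automation/check-frontend.sh
  - check-merged:
      # the same script assigned to a second stage via YAML instead of a symlink
      script: automation/check-backend.sh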
Inspecting job results in STDCI V2
----------------------------------
As already mentioned, the work on STDCI V2 consisted of a major rewrite of the
CI backend. One of the changes made is the switch from multiple “FreeStyle”
type jobs per project to just two pipeline jobs (pre-merge and post-merge). This
has implications for the way job results are inspected.
Since all the different parallel tasks now happen within the same job, looking
at the job output can be rather confusing as it includes the merged output of
all the tasks. Instead, the “Blue Ocean” view should be used. The “Blue Ocean”
view displays a graphical layout of the job execution allowing one to quickly
learn which parts of the job failed. It also allows drilling down and viewing
the logs of individual parts of the job.
Apart from the “Blue Ocean” view, job logs are also stored as artifact
files. The ‘exported-artifacts’ directory seen in the job results will now
include different subdirectories for the different parts of the job. Assuming we
have a ‘check-patch’ stage script running on ‘el7/x86_64’, we can find its
output under ‘exported-artifacts’ in:
check-patch.el7.x86_64/mock_logs/script/stdout_stderr.log
Any additional artifacts generated by the script would be present in the
‘check-patch.el7.x86_64’ directory as well.
I have a CI YAML file in my project already, is this really new?
----------------------------------------------------------------
We’ve been working on this for a while, and occasionally introduced V2 features
into individual projects as needed. In particular, our GitHub support was always
based on STDCI V2 code, so all GitHub projects (except Lago, which is
‘special’…) are already using STDCI V2.
A few Gerrit-based projects have already been converted to V2 as well, as part
of our efforts to test and debug the V2 code. Most notably, the “OST” and
“Jenkins” projects have been switched, although they are running the STDCI V1
jobs as well for the time being.
What is the process for switching my project to STDCI V2?
---------------------------------------------------------
The CI team is going to proactively work with project maintainers to switch them
to V2. The process for switching is as follows:
- Send a one-line patch to the ‘jenkins’ repo to enable the V2 jobs for the
project. At this point the V2 jobs will run side-by-side with the V1 jobs,
and will execute the STDCI scripts on el7/x86_64.
- Create an STDCI YAML file to define the target distributions, architectures
and oVirt versions for the project (see below for a sample file that would be
equivalent to what many projects currently have defined in V1). As soon as a
patch with the new YAML file is submitted to the project, the V2 job will
parse it and follow the instructions in it, which makes it easy to verify in
CI that the file works as intended.
- Remove the STDCI V1 job configuration from the ‘jenkins’ repo. This should be
the last patch project maintainers have to send to the ‘jenkins’ repo.
What does the new YAML file look like?
--------------------------------------
We defined multiple optional names for the file, so that each project owner can
choose the name that seems most appropriate. The following names can be used:
- stdci.yaml
- automation.yaml
- ovirtci.yaml
A dot (.) can also optionally be added at the beginning of the file name to make
the file hidden, and the file extension can also be “yml”. If multiple matching
files exist in the project repo, the first matching file according to the order
listed above will be used.
The file conforms to the YAML syntax. The key names in the file are
case-insensitive, and hyphens (-), underscores (_) and spaces ( ) in key names
are ignored. Additionally, we support multiple forms of the same word, so you
don’t need to remember whether the key should be ‘distro’, ‘distros’,
‘distributions’, ‘operating-systems’ or ‘OperatingSystems’, as all these forms
(and others) will work and mean the same thing.
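As a quick illustration, each of the following spellings should be treated
identically, so a real file would contain just one of them (a minimal sketch
using the aliases listed above):
distributions: [ "el7", "fc27" ]
# equivalent spellings, commented out since only one is needed:
# distros: [ "el7", "fc27" ]
# Operating-Systems: [ "el7", "fc27" ]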
To create complex test/build matrices, ‘stage’, ‘distribution’, ‘architecture’
and ‘sub-stage’ definitions can be nested within one another. We find this to be
more intuitive than having to maintain tedious ‘exclude’ lists as was needed in
V1.
Here is an example of an STDCI V2 YAML file that is compatible with the current
master branch V1 configuration of many oVirt projects:
---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27" ]
  - ppc64le:
      Distribution: el7
  - s390x:
      Distribution: fc27
Release Branches:
  master: ovirt-master
Note: since the file is committed into the project’s own repo, different
branches can simply carry different versions of the file, so there is no need
for one big convoluted file to configure all branches.
Since the above file does not mention stages, any STDCI scripts that exist in
the project repo and belong to a particular stage will be run on all specified
distribution and architecture combinations. Since it is sometimes desirable to
run ‘check-patch.sh’ on fewer platforms than ‘build-artifacts’, for example, a
slightly different file would be needed:
---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27" ]
  - ppc64le:
      Distribution: el7
  - s390x:
      Distribution: fc27
Stages:
  - check-patch:
      Architecture: x86_64
      Distribution: el7
  - build-artifacts
Release Branches:
  master: ovirt-master
The above file makes ‘check-patch’ run only on el7/x86_64, while build-artifacts
runs on all the specified platforms. check-merged would not run at all, because
it is not listed in the file.
Great effort has been made to make the file format very flexible yet intuitive
to use. Additionally, there are many defaults in place that allow specifying
complex behaviours with very brief YAML code. For further details about the
file format, please see the documentation linked below.
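As a sketch of how brief the file can get when the defaults are acceptable,
something like the following could be enough. The assumption here, not spelled
out in this post, is that anything left out (architecture, stages, script
locations) falls back to its STDCI default:
---
Distributions: [ "el7", "fc27" ]
Release Branches:
  master: ovirt-master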
About the relation between STDCI V2 and the change-queue
--------------------------------------------------------
In STDCI V1, the change queue that would run the OST tests and release a given
patch was determined by looking at the “version” part of the name of the
project’s build-artifacts jobs that got invoked for the patch.
This was confusing, as most people understood “version” to mean the internal
version of their own project rather than the oVirt version.
In V2 we decided to be more explicit and simply include a map from branches to
change queues in the YAML configuration under the “release-branches” option, as
can be seen in the examples above.
We also chose to no longer allow specifying the oVirt version as a shorthand for
the equivalent queue name (e.g. specifying ‘4.2’ instead of ‘ovirt-4.2’). This
should reduce the chance of confusion between project versions and queue names,
and it also allows us to create and use change queues for projects that are not
part of oVirt.
A project can choose not to include a “release-branches” option, in which case
its patches will not get submitted to any queues.
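For example, a project that maintains a 4.2 branch alongside master could map
each branch to its queue explicitly. The branch name below is hypothetical, and
per the above the full queue name (‘ovirt-4.2’, not ‘4.2’) must be used:
Release Branches:
  master: ovirt-master
  ovirt-engine-4.2: ovirt-4.2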
Further information
-------------------
The documentation for STDCI can be found at [1].
The documentation updates for V2 are still in progress and expected to be merged
soon. In the meantime, the GitHub-specific documentation [2] already provides a
great deal of information that is relevant for V2.
[1]: http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards
[2]: http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_GitHub
---
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
dynamic ownership changes
by Martin Polednik
Hey,
I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM without negatively affecting the
functionality of our storage code.
That of course comes with quite a bit of code removal, mostly in the
area of host devices, hwrng and anything that touches devices; a bunch
of test changes; and one XML generation caveat (storage is handled by
VDSM, therefore disk relabelling needs to be disabled at the VDSM
level).
Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implications this change has
on current/future features.
[0] https://gerrit.ovirt.org/#/c/89830/
mpolednik
OST Failure - Weekly update [/04/2018-20/04/2018]
by Dafna Ron
Hi,
I wanted to give a short status update on this week's failures and the current
state of OST.
I am glad to report that the issue with CQ alerts was resolved thanks to
Barak and Evgheni.
You can read more about the issue and how it was resolved here:
https://ovirt-jira.atlassian.net/browse/OVIRT-1974
Currently we have one ongoing possible regression, which was reported to
the list and to Arik.
The reported change: https://gerrit.ovirt.org/#/c/89852/ - examples: upload
ova as a virtual machine template.
you can view the details in this Jira:
https://ovirt-jira.atlassian.net/browse/OFT-648
The majority of issues we had this week were failed-build artifacts for
fc27. There were two different cases: one was reported to Francesco, who was
already working on a fix, and the second started and was resolved during the
evening/night of Apr 26-27.
You can see the Jira to these two issues here:
https://ovirt-jira.atlassian.net/browse/OFT-605
https://ovirt-jira.atlassian.net/browse/OFT-612
There was an infra issue with mirrors not being available for a few
minutes. The issue was momentary and resolved on its own.
https://ovirt-jira.atlassian.net/browse/OFT-606
Below you can see the chart for this week's resolved issues by cause of failure:
Code = regression of working components/functionalities
Infra = infrastructure/OST infrastructure/Lago related issues/power outages
OST Tests = package related issues, failed build artifacts
Below is a chart showing failures by suite type:
Below is a chart showing failures by version type:
Below you can see the number of reported failures by resolution status:
Thanks,
Dafna
Update: HC suites failing for 3 weeks ( was: [OST][HC] HE fails to deploy )
by Eyal Edri
FYI,
I've disabled the nightly runs of the 4.2 and master HC suites on CI, as they
have been failing constantly for almost 3 weeks and spamming the mailing lists.
I think fixing them should get higher priority if we want them to provide any
value.
Work can continue using the manual jobs or via check-patch.
On Mon, Apr 16, 2018 at 10:56 AM, Gal Ben Haim <gbenhaim(a)redhat.com> wrote:
> Any update on https://gerrit.ovirt.org/#/c/88887/ ?
> The HC suites are still failing and it's hard to understand why without the
> logs from the engine VM.
>
> On Sat, Apr 7, 2018 at 7:19 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>
>>
>>
>> On Fri, Apr 6, 2018 at 1:10 PM, Simone Tiraboschi <stirabos(a)redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Fri, Apr 6, 2018 at 9:28 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>>>
>>>> 2018-04-05 20:46:52,773-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Get local VM IP]
>>>> 2018-04-05 20:55:28,217-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible_parsed': True, u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:24:d3:63 | awk '{ print $5 }' | cut -f1 -d'/'", u'end': u'2018-04-05 20:55:28.046320', u'_ansible_no_log': False, u'stdout': u'', u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:24:d3:63 | awk '{ print $5 }' | cut -f1 -d'/'", u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'start': u'2018-04-05 20:55:28.000470', u'attempts': 50, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.045850', u'stdout_lines': []}
>>>> 2018-04-05 20:55:28,318-0400 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:24:d3:63 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.045850", "end": "2018-04-05 20:55:28.046320", "rc": 0, "start": "2018-04-05 20:55:28.000470", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
>>>>
>>>> Both the 4.2 and master suites are failing on getting local VM IP.
>>>> Any idea what changed or if I have to change the test?
>>>>
>>>> thanks!
>>>>
>>>
>>> Hi Sahina,
>>> The non-HC 4.2 and master suites are running correctly this morning.
>>> http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovi
>>> rt-system-tests_he-basic-ansible-suite-master/146/
>>> http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovi
>>> rt-system-tests_he-basic-ansible-suite-4.2/76/
>>>
>>> I'll try to check the difference with HC suites.
>>>
>>> Are you using more than one subnet in the HC suites?
>>>
>>
>> No, I'm not. And we haven't changed anything related to network in the
>> test suite.
>>
>>
>>
>
>
> --
> *GAL bEN HAIM*
> RHV DEVOPS
>
--
Eyal edri
MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
Mailing-Lists upgrade
by Marc Dequènes (Duck)
Quack,
On behalf of the oVirt infra team, I'd like to announce that the current
Mailing-Lists system is going to be upgraded to a brand-new Mailman 3
installation on Monday, during the 11:00-12:00 JST slot.
It should not take a full hour to migrate, as we have already done incremental
synchronization with the current system, but it is better to keep some margin.
The system will then take over delivery of the mails, but it might be a bit slow
at first as it needs to reindex all the archived mails (which might take
a few hours).
You can manage your subscriptions and delivery settings easily on the much
nicer web interface (https://lists.ovirt.org). There is a notion of an account,
so you don't need to log in separately for each mailing list.
You can sign in using Fedora, GitHub or Google, or create a local account
if you prefer. Please keep in mind that signing in with a different method
would create separate accounts (which cannot be merged at the moment).
But you can easily link your account to other authentication methods in
your settings (click on your name in the upper-right corner -> Account ->
Account Connections).
As for the original mail archives, because the previous system did not
have stable URLs, we cannot create mappings to the new pages. We decided
to keep the old archives around on the same URL (/pipermail), so the
Internet links would still work fine.
Hope you'll be happy with the new system.
\_o<
[ OST Failure Report ] [ oVirt Master (ovirt-engine-sdk) ] [ 27-04-2018 ] [ 004_basic_sanity.verify_vm_import ]
by Dafna Ron
Hi,
CQ reported a failure for test 004_basic_sanity.verify_vm_import on the basic
suite.
It seems to me that it's related to the reported change. Can you please have
a look?
Link and headline of suspected patches:
https://gerrit.ovirt.org/#/c/89852/ - examples: upload ova as a virtual machine template
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7166/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7166/artifa...
(Relevant) error snippet from the log:
<error>
vdsm:
2018-04-27 10:26:03,131-0400 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
call Host.dumpxmls succeeded in 0.00 seconds (__init__:311)
2018-04-27 10:26:04,825-0400 DEBUG (tasks/8) [root] FAILED: <err> = "virt-sparsify:
error: libguestfs error: guestfs_launch failed.\nThis usually means the
libguestfs appliance failed to start or crashed.\nDo:\n export
LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1\nand run the command again. For
further information, read:\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\nYou
<http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\nYou> can
also run 'libguestfs-test-tool' and post the *complete* output\ninto a bug
report or message to the libguestfs mailing list.\n\nIf reporting bugs, run
virt-sparsify with debugging enabled and include the \ncomplete output:\n\n
virt-sparsify -v -x [...]\n"; <rc> = 1 (commands:87)2018-04-27
10:26:04,829-0400 INFO (tasks/8) [storage.SANLock] Releasing
Lease(name='ee91b001-53e4-42b0-9dc1-3a24e5e2b273',
path=u'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1/c4befad8-ac1e-4cf7-929e-8c26e3a12935/images/46fd673b-4e94-4bc5-aab3-f7af16f11e19/ee91b001-53e4-42b0-9dc1-3a24e5e2b273.lease',
offset=0) (clusterlock:435)2018-04-27 10:26:04,835-0400 INFO (tasks/8)
[storage.SANLock] Successfully released
Lease(name='ee91b001-53e4-42b0-9dc1-3a24e5e2b273',
path=u'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1/c4befad8-ac1e-4cf7-929e-8c26e3a12935/images/46fd673b-4e94-4bc5-aab3-f7af16f11e19/ee91b001-53e4-42b0-9dc1-3a24e5e2b273.lease',
offset=0) (clusterlock:444)2018-04-27 10:26:04,835-0400 ERROR (tasks/8)
[root] Job u'f7cc21f1-26fa-430a-93d0-671b402613d1' failed
(jobs:221)Traceback (most recent call last): File
"/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run
self._run() File
"/usr/lib/python2.7/site-packages/vdsm/storage/sdm/api/sparsify_volume.py",
line 56, in _run virtsparsify.sparsify_inplace(self._vol_info.path) File
"/usr/lib/python2.7/site-packages/vdsm/virtsparsify.py", line 71, in
sparsify_inplace raise cmdutils.Error(cmd, rc, out, err)Error: Command
['/usr/bin/virt-sparsify', '--machine-readable', '--in-place',
u'/rhev/data-center/mnt/192.168.200
engine:
2018-04-27 10:16:40,016-04 ERROR
[org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand]
(default task-9) [] An error occurred while fetching unregistered disks
from Storage Domain id 'f25de789-f7ce-4b7a-9e92-156bae627b9c'{"jsonrpc":
"2.0", "id": "38e21a74-88e9-4f1b-a4bf-8ba61f6a0438", "error": {"message":
"Image does not exist in domain:
u'image=79a81745-1140-4ba8-aa6b-16ab7820df5e,
domain=9729b818-49e6-4ac6-b7a6-5d634bb06a83'", "code": 268}}
2018-04-27 10:28:00,286-04 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
Message received: {"jsonrpc": "2.0", "id":
"38e21a74-88e9-4f1b-a4bf-8ba61f6a0438", "error": {"message": "Image does
not exist in domain: u'image=79a81745-1140-4ba8-aa6b-16ab7820df5e,
domain=9729b818-49e6-4ac6-b7a6-5d634bb06a83'", "code": 268}}2018-04-27
10:28:00,296-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-12) [1d9c39cd] EVENT_ID:
IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS
failed: Image does not exist in domain:
u'image=79a81745-1140-4ba8-aa6b-16ab7820df5e,
domain=9729b818-49e6-4ac6-b7a6-5d634bb06a83'2018-04-27 10:28:00,296-04
ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-12) [1d9c39cd] Command
'DeleteImageGroupVDSCommand(
DeleteImageGroupVDSCommandParameters:{storagePoolId='42724c3b-3d09-4c1d-a716-56758c9ba2e3',
ignoreFailoverLimit='false',
storageDomainId='9729b818-49e6-4ac6-b7a6-5d634bb06a83',
imageGroupId='79a81745-1140-4ba8-aa6b-16ab7820df5e', postZeros='false',
discard='true', forceDelete='false'})' execution failed:
IRSGenericException: IRSErrorException: Image does not exist in domain:
u'image=79a81745-1140-4ba8-aa6b-16ab7820df5e,
domain=9729b818-49e6-4ac6-b7a6-5d634bb06a83'2018-04-27 10:28:00,296-04
DEBUG
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-12) [1d9c39cd] Exception:
org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException:
IRSGenericException: IRSErrorException: Image does not exist in domain:
u'image=79a81745-1140-4ba8-aa6b-16ab7820df5e,
domain=9729b818-49e6-4ac6-b7a6-5d634bb06a83' at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:180)
[vdsbroker.jar:] at
org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand.executeIrsBrokerCommand(DeleteImageGroupVDSCommand.java:30)
[vdsbroker.jar:] at
org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand.lambda$executeVDSCommand$0(IrsBrokerCommand.java:98)
[vdsbroker.jar:] at
org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy.runInControlledConcurrency(IrsProxy.java:274)
[vdsbroker.jar:] at
org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand.executeVDSCommand(IrsBrokerCommand.java:95)
[vdsbroker.jar:] at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
[vdsbroker.jar:] at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31)
[dal.jar:] at
org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
[vdsbroker.jar:] at
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:398)
[vdsbroker.jar:] at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown
Source) [vdsbroker.jar:] at
sun.reflect.GeneratedMethodAccessor271.invoke(Unknown Source)
[:1.8.0_161] at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_161] at
java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161]
at
org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at
org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:77)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at
org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12)
[common.jar:] at
sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source)
[:1.8.0_161] at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_161] at
java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161]
at
org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at
org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at
org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at
org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final] at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown
Source) [vdsbroker.jar:] at
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
[bll.jar:] at
org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2046)
[bll.jar:] at
org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand.performDeleteImageVdsmOperation(RemoveImageCommand.java:310)
[bll.jar:] at
org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand.executeCommand(RemoveImageCommand.java:112)
[bll.jar:] at
org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1133)
[bll.jar:] at
org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1286)
[bll.jar:] at
org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1935)
[bll.jar:] at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164)
[utils.jar:] at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103)
[utils.jar:] at
org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1346)
[bll.jar:] at
org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:400)
[bll.jar:] at
org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13)
[bll.jar:] at
org.ovirt.engine.core.bll.Backend.runAction(Backend.java:450)
[bll.jar:] at
org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:432)
[bll.jar:] at
org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:638)
[bll.jar:] at sun.reflect.GeneratedMethodAccessor551.invoke(Unknown
Source) [:1.8.0_161] at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_161] at
java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509):2018-04-27
10:28:00,358-04 INFO
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-12) [] Lock freed to object
'EngineLock:{exclusiveLocks='[2cf61dee-c89c-469a-8ab8-54aebd0ab875=VM,
imported_vm=VM_NAME]',
sharedLocks='[5e0b3417-5e0d-4f98-a3cb-1a78445de946=REMOTE_VM]'}'2018-04-27
10:28:00,372-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-12) [] EVENT_ID:
IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm imported_vm to
Data Center test-dc, Cluster test-cluster</error>
Fwd: [CentOS-devel] CentOS 7.4.1708 CR Available in CBS
by Sandro Bonazzola
FYI, the CentOS 7.5 beta is available in the CR repos.
I'm going to build qemu-kvm-ev for it in CBS, but it will be published to the
release repo only when CentOS 7.5 goes GA.
Please note that the test repos will be updated with this build, so in order to
use the test repo you'll need the CR repo enabled.
---------- Forwarded message ----------
From: Brian Stinson <brian(a)bstinson.com>
Date: 2018-04-26 23:16 GMT+02:00
Subject: [CentOS-devel] CentOS 7.4.1708 CR Available in CBS
To: centos-devel(a)centos.org
Hi Folks,
The CR (https://wiki.centos.org/AdditionalResources/Repositories/CR)
repository was released earlier today. We have regenerated all the
CentOS 7 buildroots in CBS so you can build against the content that
will make it into the next point-release of CentOS Linux.
Machines in ci.centos.org are available with CR as well so you may
continue your testing.
As we discussed earlier
(https://lists.centos.org/pipermail/centos-devel/2018-April/016627.html)
we are currently in Point-release Freeze which means anything tagged for
release will *NOT* get pushed out to mirror.centos.org and we will be
holding new push requests until we have a GA point-release.
If there are any questions please find us here or in #centos-devel on
Freenode.
Cheers!
--
Brian Stinson
_______________________________________________
CentOS-devel mailing list
CentOS-devel(a)centos.org
https://lists.centos.org/mailman/listinfo/centos-devel
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
<https://redhat.com/summit>
Changing unit labels from GB, MB, KB to GiB, MiB, KiB
by Andrej Krejcir
Hi,
In the ovirt-engine UI, we use the units KB, MB and GB with the meaning of
powers of 1024 bytes, instead of powers of 1000. Some time ago, there was a
bug[1] that changed some of the GB labels to GiB, but not in all places. This
causes inconsistent messages in the UI, where one part of a sentence uses
GB and another uses GiB.
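For reference, the two conventions differ as follows:
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 GB  = 10^9 bytes = 1,000,000,000 bytes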
To make it consistent, I have posted patches[2] that change unit labels in
other places as well.
This change touches strings from different parts of the project, so I'm
sending this email to let you know about it. Feel free to check the patches
or comment on whether the change is correct.
Andrej
[1] - https://bugzilla.redhat.com/1535714
[2] - https://gerrit.ovirt.org/#/q/project:ovirt-engine+topic:unit-relabel
gerrit service restart
by Evgheni Dereveanchin
Hi everyone,
As you may have noticed, Gerrit is slow to respond today. I will restart
the service in a few minutes to address that, so if you get failures
while trying to fetch/push/review patches, please retry in a few minutes.
--
Regards,
Evgheni Dereveanchin