From: Marc Dequènes (Duck) <duck(a)redhat.com>
To: oVirt Infra <infra(a)ovirt.org>, users <users(a)ovirt.org>
Subject: Mailing-Lists upgrade
On behalf of the oVirt infra team, I'd like to announce that the current
mailing-list system is going to be upgraded to a brand new Mailman 3
installation on Monday, during the 11:00-12:00 JST slot.
The migration should not take the full hour, as we have already been
synchronizing incrementally with the current system, but it is better to
keep some margin. The new system will then take over mail delivery, though
it might be a bit slow at first as it needs to reindex all the archived
mails (which might take a few hours).
You can manage your subscriptions and delivery settings easily on the much
nicer web interface (https://lists.ovirt.org). There is a notion of an
account, so you don't need to log in separately for each ML.
You can sign in using Fedora, GitHub, or Google, or create a local account
if you prefer. Please keep in mind that signing in with a different method
creates a separate account (and accounts cannot be merged at the moment).
But you can easily link your account to other authentication methods in
your settings (click on your name in the top-right corner -> Account ->
As for the original mail archives, because the previous system did not
have stable URLs, we cannot create mappings to the new pages. We decided
to keep the old archives around at the same URL (/pipermail), so existing
links on the Internet will still work fine.
We hope you'll be happy with the new system.
It is time again to reconsider branching out the 4.2 stable branch.
So far we have decided *not* to branch out, and we are taking tags for
oVirt 4.2 releases from the master branch.
This means we are merging only safe and/or stabilization patches in master.
I think it is time to reconsider this decision and branch out for 4.2,
for two reasons:
1. It sends a clearer signal that 4.2 is going into stabilization mode.
2. We have requests from the virt team, who want to start working on the
next cycle's features.
If we decide to branch out, I'd start the new branch on Monday, February
5 (1 week from now).
The discussion is open, please share your acks/nacks for branching out,
and for the branching date.
I myself am inclined to branch out, so if no one chimes in (!!) I'll
execute the above plan.
Senior SW Eng., Virtualization R&D
IRC: fromani github: @fromanirh
This message is aimed at project maintainers. Other developers may
also find it interesting for a glimpse at the oVirt-wide test and
release process.
TL;DR: To get accurate CI for oVirt 4.2, most projects
need to add 4.2 jobs in YAML.
Before I can explain what the current issue is and which action is
required, I need to provide a brief overview of how oVirt CI works.
oVirt CI has two major components:
1. The STDCI component, which is used to build and test individual
projects. Most developers interact with this on a daily basis, as it is
what responds to GitHub and Gerrit events.
2. The "change-queue" (CQ) component, which is used to automatically
compose the whole of oVirt from its subprojects and run system tests
(OST) on it. This component gathers the information the infra team uses
to compose the "OST failure report" you can occasionally see being sent
to this list. The change queue is also used to automatically maintain
the 'tested' and '*-snapshot' (AKA nightly) repositories.
The way the CQ composes oVirt is by looking at the post-merge STDCI
'build-artifacts' jobs and collecting the artifacts built by jobs that
target a specific oVirt version into that version's change queue.
Essentially, the '*_master_build-artifacts-*' jobs feed the
'ovirt-master' change queue, the '*_4.1_build-artifacts-*' jobs feed
the 'ovirt-4.1' change queue, etc.
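The name-to-queue routing above can be sketched as a small helper. This is only an illustration of the naming convention described here; the function and the example job names are hypothetical, not the actual CQ code:

```python
import re

def change_queue_for(job_name):
    """Map a post-merge build-artifacts job name to its change queue.

    Jobs named '<project>_<version>_build-artifacts-<platform>' feed the
    'ovirt-<version>' change queue; anything else is ignored by the CQ.
    """
    m = re.match(r'^[\w-]+_([\w.]+)_build-artifacts-', job_name)
    return 'ovirt-{}'.format(m.group(1)) if m else None

# Hypothetical job names, for illustration only:
print(change_queue_for('vdsm_master_build-artifacts-el7-x86_64'))  # ovirt-master
print(change_queue_for('vdsm_4.1_build-artifacts-fc27-x86_64'))    # ovirt-4.1
```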
Over the course of the oVirt 4.2 development, most projects used their
'master' branch, which is typically mapped to '*_master_*' jobs, for
developing 4.2 code. So the 'ovirt-master' CQ provided a good indication
of the state of the 4.2 code.
As projects started adding 4.2 branches, we created an 'ovirt-4.2' CQ
to gather them. We did so under the assumption that most projects would
branch soon after. That assumption turned out to be wrong, as most
projects have not yet forked and may not do so in the near future.
Since some projects did fork, the end result is that currently:
___there is no accurate representation of oVirt 4.2 in CI___
The 'ovirt-master' CQ no longer represents oVirt 4.2, as some projects
already have some 4.3 code in their 'master' branches.
The 'ovirt-4.2' CQ does not represent oVirt 4.2 either, as most projects
do not push artifacts into it.
To get any benefit from CI, we need it to represent what we are
actually going to release. This means that at this point we need all
projects to have '*_4.2_build-artifacts-*' jobs that map to the code
intended to be included in oVirt 4.2. Projects can either:
1. Create 4.2 branches and map the new jobs to them.
2. Keep 4.2 development in 'master' and create '4.2' jobs that map to it.
Taking route #2 means a commitment to not adding any 4.3 code to the
'master' branch. Please keep that commitment, as rolling back "too new"
builds from the various repos and caches we have is very difficult.
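Whichever route a project takes, the end state to verify is the same: its job list must include a '*_4.2_build-artifacts-*' job. A rough self-check sketch, assuming you have your project's job names at hand (the helper and the sample names are hypothetical, not part of STDCI):

```python
def has_ovirt_42_coverage(job_names):
    """Return True if any job would feed the 'ovirt-4.2' change queue,
    i.e. matches the '*_4.2_build-artifacts-*' naming pattern."""
    return any('_4.2_build-artifacts-' in name for name in job_names)

# Hypothetical job lists, for illustration only:
print(has_ovirt_42_coverage(['vdsm_master_build-artifacts-el7-x86_64']))  # False
print(has_ovirt_42_coverage(['vdsm_4.2_build-artifacts-el7-x86_64']))     # True
```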
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
I released ioprocess 1.0.0 for Fedora 27 and 28.
If you are using Fedora, please install the new version from the
and test it.
Please share your feedback here:
This version will be available soon from oVirt repositories.
For alignment purposes between the projects, we should consider
branching ovirt-engine as well.
At this point in time I'm for it; I would like to hear any acks/nacks so
we can decide how we proceed.
On Mon, Jan 29, 2018 at 9:02 AM, Dan Kenigsberg <danken(a)redhat.com> wrote:
> On Mon, Jan 29, 2018 at 9:39 AM, Francesco Romani <fromani(a)redhat.com>
> > Hi all,
> > It is time again to reconsider branching out the 4.2 stable branch.
> > [...]
> For network we don't see a lot of pending 4.2 work - except for the
> requirement to support el7.5. On the other hand, we too have already a
> patch for 4.3. Thus I'm fine with branching next week.
> When you do branch, please make sure to follow Barak Korren's request on
> [ovirt-devel] [ACTION-REQUIRED] Making accurate CI for oVirt 4.2
Last week we merged a patch that may reset the cluster's CPU type and
therefore could cause the following error when trying to run a VM:
"The CPU type of the cluster is unknown. ..."
A fix is now posted. In the meantime, the issue can be resolved by
refreshing the capabilities of one of your hosts.
Planned maintenance work will be performed in the PHX data center today.
It is related to core networking hardware upgrades and may cause short
connectivity loss for services hosted by the oVirt project, including:
* Jenkins CI
* Package repositories
* Project website
* Mailing lists
The maintenance window starts at 19:00 GMT and has a duration of one hour.
I will pause CI job execution before the maintenance window, so builds
will be queued and executed after the network upgrades are completed, to
decrease the possibility of false positives. I will follow up with any
further updates.
I have set up ovirt-engine for testing purposes and am using
ovirt-vdsmfake to provision hosts and VMs on that ovirt-engine, but I am
facing an error while creating a disk.
I have provisioned 10 hosts, all up with their own local storage. When
I try to add a disk in the disk section, an error pops up: "*Error
while executing action Add Disk to VM: General Exception*". I looked at
the logs and thought it was some permission error; I checked that, but
the error persists.
Can somebody provide some details on it?
I am attaching the engine.log with this mail.
ASSOCIATE SOFTWARE ENGINEER IN ENG PERF R&D, RHCSA
Red Hat BLR <https://www.redhat.com/>
IRC - tenstormavi
adasound(a)redhat.com M: +91-8653245552