One of the OST suites (hc-basic-suite-4.3) is failing with:

    error: libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid
    backend or is unavailable

    * Create network lago-hc-basic-suite-4-3-net-management: ERROR (in 0:00:11)
    # Start nets: ERROR (in 0:00:11)
    @ Start Prefix: ERROR (in 0:00:11)
    Error occurred, aborting
    Traceback (most recent call last):
      File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 969, in
      File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
        self._do_run(**vars(args))
      File "/usr/lib/python2.7/site-packages/lago/utils.py", line 573, in wrapper
        return func(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/lago/utils.py", line 584, in wrapper
        return func(*args, prefix=prefix,
      File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 271, in
      File "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line 50, in wrapped
        return func(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1323, in
      File "/usr/lib/python2.7/site-packages/lago/virt.py", line 341, in start
        net.start()
      File ..., line 115, in start
        net =
      File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4216, in networkCreateXML
        if ret is None: raise libvirtError('virNetworkCreateXML() failed', conn=self)
    libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid
    backend or is unavailable
    + on_exit
Does anyone have an idea what the issue could be?
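In case it helps with debugging: the INVALID_IPV string comes from firewalld rejecting the direct 'ipv6' rules that libvirt tries to install, which typically happens when firewalld's IPv6 backend is unavailable, e.g. because IPv6 is disabled on the host. A hedged, read-mostly diagnostic (the firewalld workaround on the last line is an assumption about this particular setup, not a confirmed fix):

```shell
# Hypothetical diagnosis for the INVALID_IPV error above.

# 1) Is IPv6 enabled at the kernel level? (prints 1 if disabled)
cat /proc/sys/net/ipv6/conf/all/disable_ipv6

# 2) Does the loopback interface have an IPv6 address at all?
ip -6 addr show lo

# 3) Can firewalld itself install a direct ipv6 rule? This is roughly
#    what libvirt does when creating the network, so it should reproduce
#    the same INVALID_IPV failure directly.
sudo firewall-cmd --direct --add-rule ipv6 filter INPUT 99 -j RETURN
```

If step 3 fails with the same message while the ipv4 equivalent succeeds, the problem is on the host's firewalld/IPv6 side rather than in lago or the suite definition.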
Benny joined the Storage team in November 2016 and since then has played a
key role in investigating very complex customer bugs around the oVirt
engine, as well as contributing to features such as DR, Cinderlib, and the
new export/import mechanism; he also rewrote, and still maintains, the LSM
mechanism.
Given his significant contributions and his knowledge of the engine, I'd
like to nominate Benny as an engine storage maintainer.
Your thoughts please.
---------- Forwarded message ---------
From: Miroslav Suchý <msuchy(a)redhat.com>
Date: Thu, 20 Feb 2020 at 10:28
Subject: Copr outage - details
To: Development discussions related to Fedora <devel(a)lists.fedoraproject.org>
tl;dr: On Sunday, 23rd February, there will be a Copr outage lasting most of
the day. The PPC64LE builders and chroots will be deactivated; the PPC64LE
builders should be back in a matter of weeks.
As previously announced, Fedora's infrastructure is moving to a different
datacenter. For some servers, the move is trivial. The Copr servers are
different: the Copr build system consists of four servers, plus four staging
servers, with eight TB of repos, four TB of dist-git, and several small
volumes.
The original plan was to move to the IAD2 datacenter in Washington, D.C. by
June. Copr is running in the Fedora OpenStack cloud, and this cloud has to
be evacuated by the beginning of March to free up IPs.
The plan was to move Copr to new hardware (thanks to Red Hat) and later move
this HW to the new datacenter. That would mean two outages, where the second
one would last at least 15 days (!).
We were looking for another option, and we found one. We are going to move
Copr to Amazon AWS and shut down the old VMs on Fedora Cloud, then move the
new HW to the IAD2 datacenter, and finally move Copr from AWS to the new HW
in IAD2 (FYI, the final destination is still subject to change). This still
means two outages, but they should each be just a few hours, and the web
server with the DNF repositories should be available the whole time.
The second outage will happen in May or June.
Here is a detailed schedule. We are going to update this table during the
migration, so you can watch the progress there.
Here is a short abstract:
* we are doing constant rsync to the new location
* we spin up staging and production instances in the new location
* on Sunday morning we stop the frontend, and therefore stop accepting new
jobs. The backend with the DNF repos will still be available
* we do final rsync (~6 hours)
* around 13:00 UTC we switch DNS to the new location
* we then enable all services
* once we confirm that everything is operational, the outage will be over
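The sync strategy in the steps above can be sketched roughly like this (hostnames and paths are made up for illustration; the actual commands and options used by the Copr team are not stated in the announcement):

```shell
# Constant background sync while the service is still running; this copies
# the bulk of the data ahead of time.
rsync -aH old-backend:/var/lib/copr/ new-backend:/var/lib/copr/

# ... outage window starts: the frontend is stopped, so no new jobs
# can modify the data ...

# The final pass only has to transfer the delta accumulated since the last
# background sync, which is why it takes ~6 hours rather than days.
rsync -aH --delete old-backend:/var/lib/copr/ new-backend:/var/lib/copr/

# Then DNS is switched to the new location and services are re-enabled.
```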
There are several caveats:
* After we enable services on Sunday at 13:00 UTC, you may see some
failures. Be assured that we will swiftly address them.
* Once we move out of Fedora Cloud, we lose access to the PPC64LE builders.
We are going to deactivate those chroots just before the migration. We
should get them back after a few weeks, but the ETA is unknown; the
worst-case scenario is June 2020. We will aim to bring them back as soon as
possible.
* Any small issue can easily change the schedule by hours. E.g., just a
simple 'chown -R' on the backend runs for ~4 hours.
There are going to be three Copr engineers and one fedora-infrastructure
member available the whole Sunday. If you experience a problem, do not
hesitate to contact us. We are in #fedora-buildsys on Freenode.
The link to the outage ticket is:
Miroslav Suchy, RHCA
Red Hat, Associate Manager, ABRT/Copr, #brno
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
Red Hat respects your work-life balance; there is no need to answer this
email outside of your office hours.
I'm trying to run the OST basic suite on a CentOS 8 based physical server,
with lago running on py3. I'm currently stuck at the 'verify_add_hosts'
step. The engine continuously tries to run:
but it ends up with a failure:
2020-02-18 05:29:01,386-05 DEBUG
(EE-ManagedThreadFactory-engine-Thread-1) [39a236fd] Translating
SQLException with SQL state '23505', error code '0', message [ERROR:
duplicate key value violates unique constraint "name_server_pkey"
Detail: Key (dns_resolver_configuration_id,
address)=(98794c52-8364-4909-bbaf-a7ac16a1bf2a, 192.168.201.1) already
exists.]
Here's part of my lago prefix status:
root password: 123456
So it seems the two bonded NICs indeed share the same DNS resolver
address, 192.168.201.1, but I suppose that's how it looks for all basic
suite runs, right?
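To confirm the duplicate from the database side, one hedged check (the table and column names are taken from the error message above; the database name 'engine' is the usual default but an assumption here):

```shell
# List (dns_resolver_configuration_id, address) pairs that appear more
# than once, i.e. rows that would violate name_server_pkey on insert.
sudo -u postgres psql engine -c "
  SELECT dns_resolver_configuration_id, address, count(*)
  FROM name_server
  GROUP BY dns_resolver_configuration_id, address
  HAVING count(*) > 1;"
```

If this returns no rows, the constraint violation happens only transiently, when the engine tries to insert the same resolver address a second time for the same configuration id.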
Does anyone know what could be wrong in this setup and why the engine keeps
failing here?
It seems I was able to get my setup working for NFS storage; thanks for your advice!
Now I'm afraid I'm stuck again: what I need to do is add validation of volume metadata to a certain command (to be exact, that VOLTYPE is LEAF). But I can't find any examples of us dealing with this data in the engine part at all. I understand this is ultimately executed by VDSM, but nevertheless, in what format do we even read and write volume metadata? Even a class/method name would be helpful.
Does anyone get this error while building the engine?
[ ERROR ] schema.sh: FATAL: Cannot execute sql command: --command=select 1;
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
This happened in a completely new setup, rebased from master, with a new
database, and even the build directory was manually deleted before running
make.
Any help would be appreciated,
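One low-tech way to narrow this down, assuming a local PostgreSQL and the default 'engine' database name (both assumptions): run the same trivial query that schema.sh is failing on, by hand.

```shell
# schema.sh is failing on the equivalent of this query. If it also fails
# when run manually, the problem is connectivity or credentials
# (pg_hba.conf, user/password), not the engine schema itself.
sudo -u postgres psql -d engine -c 'select 1;'
```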