OST Failure: libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid backend or is unavailable
by Gobinda Das
Hi,
One of the OST suites (hc-basic-suite-4.3) is failing with the
error: libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid
backend or is unavailable
Full log:
  * Create network lago-hc-basic-suite-4-3-net-management: ERROR (in 0:00:11)
# Start nets: ERROR (in 0:00:11)
@ Start Prefix: ERROR (in 0:00:11)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 969, in main
    cli_plugins[args.verb].do_run(args)
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
    self._do_run(**vars(args))
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 573, in wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 584, in wrapper
    return func(*args, prefix=prefix, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 271, in do_start
    prefix.start(vm_names=vm_names)
  File "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line 50, in wrapped
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1323, in start
    self.virt_env.start(vm_names=vm_names)
  File "/usr/lib/python2.7/site-packages/lago/virt.py", line 341, in start
    net.start()
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/network.py", line 115, in start
    net = self.libvirt_con.networkCreateXML(self._libvirt_xml())
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4216, in networkCreateXML
    if ret is None:raise libvirtError('virNetworkCreateXML() failed', conn=self)
libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid backend or is unavailable
+ on_exit
Does anyone have an idea what the issue could be?
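In case it helps with triage: the failing call is libvirt's networkCreateXML(), so it can be reproduced outside of lago with plain libvirt-python. Below is a minimal sketch; the network XML is only an illustrative stand-in for the one lago generates, and the firewalld remark in the comments is an assumption rather than a confirmed root cause.

# Minimal reproduction sketch -- assumes libvirt-python is installed and
# libvirtd is reachable on the host running lago.  The network XML is a
# hypothetical stand-in for lago's generated definition.
import libvirt

NET_XML = """
<network>
  <name>repro-ipv6-check</name>
  <bridge name='reproipv6br0'/>
  <ip address='192.168.250.1' netmask='255.255.255.0'/>
  <ip family='ipv6' address='fd8f:1391:3a82:150::1' prefix='64'/>
</network>
"""

conn = libvirt.open('qemu:///system')
try:
    # Same call that fails in lago/providers/libvirt/network.py
    net = conn.networkCreateXML(NET_XML)
    print('network started:', net.name())
    net.destroy()   # remove the transient test network again
except libvirt.libvirtError as err:
    # If firewalld on the host cannot program IPv6 rules (for example the
    # ip6tables backend is missing or IPv6 is disabled), this is where an
    # error like COMMAND_FAILED: INVALID_IPV would surface.
    print('networkCreateXML failed:', err)
finally:
    conn.close()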
--
Thanks,
Gobinda
Proposing Benny Zlotnik as an Engine Storage maintainer
by Tal Nisan
Hi everyone,
Benny joined the Storage team in November 2016 and has since played a key
role in investigating very complex customer bugs around the oVirt engine,
as well as contributing to features such as DR, Cinderlib, and the new
export/import mechanism; he also rewrote and still maintains the LSM mechanism.
Given his significant contributions and his knowledge of the engine, I'd like
to nominate Benny as an engine storage maintainer.
Your thoughts please.
Tal.
Fwd: Copr outage - details
by Sandro Bonazzola
FYI
---------- Forwarded message ---------
Da: Miroslav Suchý <msuchy(a)redhat.com>
Date: gio 20 feb 2020 alle ore 10:28
Subject: Copr outage - details
To: Development discussions related to Fedora <devel(a)lists.fedoraproject.org
>
tl;dr: On Sunday 23rd February, there will be a Copr outage. It will last
the whole day. The PPC64LE builders and chroots will be deactivated; the
builders should be back in a matter of weeks.
Hi.
As previously announced, Fedora's infrastructure is moving to a different
datacenter. For some servers, the move is trivial. Copr's servers are
different: the Copr build system consists of four servers plus four staging
servers, eight TB of repos, four TB of dist-git, and several small volumes.
The original plan was to move to the IAD2 datacenter in Washington, D.C. by
June. Copr is running in the Fedora OpenStack cloud, and this cloud has to
be evacuated by the beginning of March to free an IP range.
The plan was to move Copr to new hardware (thanks to Red Hat) and later move
this hardware to the new datacenter. That would have meant two outages, the
second of which would last at least 15 days (!).
We were looking for another option and we found one. We are going to move
Copr to Amazon AWS, shut down the old VMs on Fedora Cloud, move the new
hardware to the IAD2 datacenter, and then move Copr from AWS to the new
hardware in IAD2 - FYI, the final destination is still subject to change.
This still means two outages, but each should last just a few hours, and
the web server with the DNF repositories should be available the whole time.
The second outage will happen in May or June.
Here is a detailed schedule. We will keep updating this table during the
migration, so you can follow the progress there:
https://docs.google.com/spreadsheets/d/1jrCgdhseZwi91CTRlo9Y5DNwfl9VHoZfj...
Here is a short abstract:
* we are doing a constant rsync to the new location
* we spin up staging and production instances in the new location
* on Sunday morning we stop the frontend and therefore stop accepting new
jobs; the backend with the DNF repos will still be operational
* we do a final rsync (~6 hours)
* around 13:00 UTC we switch DNS to the new location
* we then enable all services
* once we confirm that everything is operational, the outage will be over
There are several caveats:
* After we enable the services on Sunday at 13:00 UTC, you may see some
failures. Be assured that we will address them swiftly.
* Once we leave Fedora Cloud, we lose access to the PPC64LE builders. We are
going to deactivate those chroots just before the migration. We should get
them back after a few weeks; the ETA is unknown, and the worst-case scenario
is June 2020. We will aim to bring them back as soon as possible.
* Any small issue can easily shift the schedule by hours. For example, a
simple 'chown -R' on the backend runs for ~4 hours.
There will be three Copr engineers and one fedora-infrastructure member
available the whole Sunday. If you experience a problem, do not hesitate to
contact us. We are on #fedora-buildsys on Freenode.
The link to the outage ticket is:
https://pagure.io/fedora-infrastructure/issue/8668
--
Miroslav Suchy, RHCA
Red Hat, Associate Manager, ABRT/Copr, #brno, #fedora-buildsys
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
OST basic suite failing on el8 physical server
by Marcin Sobczyk
Hi all,
I'm trying to run the OST basic suite on a CentOS 8-based physical
server with lago running on py3. I'm currently stuck
at the 'verify_add_hosts' step. The engine continuously tries to run:
org.ovirt.engine.core.vdsbroker.vdsbroker.CollectVdsNetworkDataAfterInstallationVDSCommand
but it ends up with a failure:
2020-02-18 05:29:01,386-05 DEBUG
[org.ovirt.engine.core.dal.dbbroker.CustomSQLErrorCodeSQLExceptionTranslator]
(EE-ManagedThreadFactory-engine-Thread-1) [39a236fd] Translating
SQLException with SQL state '23505', error code '0', message [ERROR:
duplicate key value violates unique constraint "name_server_pkey"
Detail: Key (dns_resolver_configuration_id,
address)=(98794c52-8364-4909-bbaf-a7ac16a1bf2a, 192.168.201.1) already
exists.
Here's part of my lago prefix status:
[lago-basic-suite-master-host-0]:
  [NICs]:
    [eth0]:
      ip: 192.168.200.3
      network: lago-basic-suite-master-net-management
    [eth1]:
      ip: 192.168.202.3
      network: lago-basic-suite-master-net-storage
    [eth2]:
      ip: 192.168.201.2
      network: lago-basic-suite-master-net-bonding
    [eth3]:
      ip: 192.168.201.3
      network: lago-basic-suite-master-net-bonding
  distro: el8
  [metadata]:
    deploy-scripts:
      $LAGO_PREFIX_PATH/scripts/_root_ovirt-system-tests_basic-suite-master_deploy-scripts_add_local_repo_no_ext_access.sh
      $LAGO_PREFIX_PATH/scripts/_root_ovirt-system-tests_basic-suite-master_deploy-scripts_setup_sar_stat.sh
      $LAGO_PREFIX_PATH/scripts/_root_ovirt-system-tests_basic-suite-master_deploy-scripts_setup_host_el7.sh
      $LAGO_PREFIX_PATH/scripts/_root_ovirt-system-tests_basic-suite-master_deploy-scripts_setup_1st_host_el7.sh
  root password: 123456
  status: running
[lago-basic-suite-master-host-1]:
  [NICs]:
    [eth0]:
      ip: 192.168.200.4
      network: lago-basic-suite-master-net-management
    [eth1]:
      ip: 192.168.202.4
      network: lago-basic-suite-master-net-storage
    [eth2]:
      ip: 192.168.201.4
      network: lago-basic-suite-master-net-bonding
    [eth3]:
      ip: 192.168.201.5
      network: lago-basic-suite-master-net-bonding
So it seems the two bonding NICs do indeed share the same DNS resolver
address, 192.168.201.1, but I suppose that's how it looks for all basic
suite runs, right?
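For what it's worth, one way to see which resolver configuration already holds that address is to query the engine DB directly. This is only a hedged sketch: the table name is inferred from name_server_pkey, and the connection parameters are assumptions, not my actual setup.

# Hedged sketch: check which DNS resolver configurations already contain
# the 192.168.201.1 address that the failing insert tries to add again.
# Assumptions (not taken from this thread): the constraint name_server_pkey
# belongs to a table called name_server, and the engine DB is reachable
# locally with these credentials.
import psycopg2

conn = psycopg2.connect(dbname='engine', user='engine',
                        password='engine', host='localhost')
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT dns_resolver_configuration_id, address "
            "FROM name_server WHERE address = %s",
            ('192.168.201.1',))
        for row in cur.fetchall():
            print('existing resolver row:', row)
finally:
    conn.close()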
Does anyone know what could be wrong in this setup and why the engine
is complaining?
Thanks, Marcin
do we even handle volume metadata in the engine?
by Fedor Gavrilov
Hi,
It seems I was able to get my setup working for NFS storage, thanks for your advice!
Now I'm afraid I am stuck again: I need to add validation of volume metadata to a certain command (to be exact, that VOLTYPE is LEAF). But I can't find any examples of us dealing with this data on the engine side at all. I understand this is in the end executed by VDSM, but nevertheless, in what format are we even reading and writing volume metadata? Even a class/method name would be helpful.
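From what I've seen so far on the VDSM side, the metadata appears to be stored as plain KEY=VALUE lines, so the LEAF check itself should be simple; here is a rough sketch of what I mean (the keys and sample values are illustrative, and I'm still unsure which engine class actually reads this):

# Rough sketch of the on-storage format only (not the engine code path).
# Assumption: VDSM keeps volume metadata as plain KEY=VALUE lines terminated
# by EOF (a .meta file next to the volume on file domains), so checking that
# a volume is a leaf is just a string comparison.
def parse_volume_metadata(text):
    """Parse VDSM-style KEY=VALUE metadata lines into a dict."""
    md = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line == 'EOF':
            continue
        key, _, value = line.partition('=')
        md[key] = value
    return md


def is_leaf(md):
    # VOLTYPE is LEAF for the writable top of a chain, INTERNAL or SHARED otherwise.
    return md.get('VOLTYPE') == 'LEAF'


if __name__ == '__main__':
    sample = "\n".join([
        "DOMAIN=11111111-2222-3333-4444-555555555555",  # illustrative value
        "VOLTYPE=LEAF",
        "FORMAT=COW",
        "LEGALITY=LEGAL",
        "EOF",
    ])
    print(is_leaf(parse_volume_metadata(sample)))  # True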
Thanks,
Fedor
Engine-setup failure
by Ori Liel
Does anyone get this error while building the Engine?
[ ERROR ] schema.sh: FATAL: Cannot execute sql command: --command=select 1;
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
This happened in a completely new setup: rebased from master, a new database,
and the build directory was even deleted manually before running make.
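For reference, the failing step boils down to running "select 1" against the engine database, so a quick connectivity check can help narrow it down. The sketch below is hedged: the connection parameters are assumptions based on typical engine-setup defaults, not my actual configuration.

# Quick hedged sanity check of what schema.sh is effectively doing: running
# "select 1" against the engine database.  The connection parameters below
# are assumptions (typical engine-setup defaults), not taken from this thread.
import psycopg2

try:
    conn = psycopg2.connect(dbname='engine', user='engine',
                            password='engine', host='localhost', port=5432)
    with conn.cursor() as cur:
        cur.execute('SELECT 1')
        print('connection OK:', cur.fetchone())
    conn.close()
except psycopg2.Error as err:
    # If even this fails, the problem is DB connectivity or credentials
    # rather than the schema refresh itself.
    print('cannot run select 1:', err)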
Any help would be appreciated,
thanks