Nominating Miguel Barosso as ovirt-provider-ovn maintainer
by Marcin Mirecki
I would like to propose Miguel Barosso as a maintainer for
ovirt-provider-ovn.
Miguel has been working on the project for almost a year now, and for the
last few months has been practically the only active contributor.
He has successfully implemented new features starting from the design
stage, added a new integration test framework, fixed countless bugs, and
contributed over 200 patches to the project.
Currently the only maintainer (me) is no longer actively working on the
project, which has created a review bottleneck.
Thanks,
Marcin
[ OST Failure Report ] [ oVirt 4.3 (vdsm) ] [ 22-03-2019 ] [ 002_bootstrap.add_master_storage_domain ]
by Dafna Ron
Hi,
We are failing branch 4.3 on test 002_bootstrap.add_master_storage_domain.
It seems that on one of the hosts vdsm is not starting;
there is nothing in vdsm.log or in supervdsm.log.
CQ identified this patch as the suspected root cause:
https://gerrit.ovirt.org/#/c/98748/ - vdsm: client: Add support for flow id
Milan, Marcin, can you please have a look?
full logs:
http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/326/artifact/b...
The only error I can see is about the host not being up (which makes sense,
as vdsm is not running).
Stacktrace
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/basic-suite-4.3/test-scenarios/002_bootstrap.py", line 417, in add_master_storage_domain
    add_iscsi_storage_domain(prefix)
  File "/home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/basic-suite-4.3/test-scenarios/002_bootstrap.py", line 561, in add_iscsi_storage_domain
    host=_random_host_from_dc(api, DC_NAME),
  File "/home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/basic-suite-4.3/test-scenarios/002_bootstrap.py", line 122, in _random_host_from_dc
    return _hosts_in_dc(api, dc_name, True)
  File "/home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/basic-suite-4.3/test-scenarios/002_bootstrap.py", line 119, in _hosts_in_dc
    raise RuntimeError('Could not find hosts that are up in DC %s' % dc_name)
'Could not find hosts that are up in DC test-dc
-------------------- >> begin captured logging << --------------------
lago.ssh: DEBUG: start task:937bdea7-a2a3-47ad-9383-36647ea37ddf:Get ssh client for lago-basic-suite-4-3-engine:
lago.ssh: DEBUG: end task:937bdea7-a2a3-47ad-9383-36647ea37ddf:Get ssh client for lago-basic-suite-4-3-engine:
lago.ssh: DEBUG: Running c07b5ee2 on lago-basic-suite-4-3-engine: cat /root/multipath.txt
lago.ssh: DEBUG: Command c07b5ee2 on lago-basic-suite-4-3-engine returned with 0
lago.ssh: DEBUG: Command c07b5ee2 on lago-basic-suite-4-3-engine output:
 3600140516f88cafa71243648ea218995
360014053e28f60001764fed9978ec4b3
360014059edc777770114a6484891dcf1
36001405d93d8585a50d43a4ad0bd8d19
36001405e31361631de14bcf87d43e55a

-----------
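Since nothing shows up in vdsm.log or supervdsm.log on the failing host, the
systemd journal is probably the next place to look. A minimal sketch of the
checks, assuming the standard vdsmd/supervdsmd unit names on the lago host:

# On the failing host: check why vdsmd did not come up, given that the vdsm
# logs themselves are empty (unit names assumed to be vdsmd/supervdsmd).
systemctl status vdsmd supervdsmd
# The journal for the current boot usually shows the failing dependency or
# the traceback that prevented the service from starting.
journalctl -b -u vdsmd -u supervdsmd --no-pager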
oVirt Node build failure on 4.2 snapshot
by Sandro Bonazzola
https://jenkins.ovirt.org/job/ovirt-node-ng_4.2_build-artifacts-el7-x86_6...
Not sure what happened, but yesterday the oVirt Node 4.2 Jenkins job
started failing with:
06:28:14,517 INFO program:Error: Package: vdsm-client-4.20.47-2.git31d3591.el7.noarch (ovirt-4.2-snapshot)
06:28:14,518 INFO program:Requires: vdsm-api = 4.20.47-2.git31d3591.el7
06:28:14,518 INFO program:Available: vdsm-api-4.20.23-1.el7.noarch (ovirt-4.2-centos-ovirt42)
06:28:14,519 INFO program:vdsm-api = 4.20.23-1.el7
06:28:14,520 INFO program:Available: vdsm-api-4.20.39.1-1.el7.noarch (ovirt-4.2-centos-ovirt42)
06:28:14,521 INFO program:vdsm-api = 4.20.39.1-1.el7
06:28:14,522 INFO program:Error: Package: vdsm-jsonrpc-4.20.47-2.git31d3591.el7.noarch (ovirt-4.2-snapshot)
06:28:14,522 INFO program:Requires: vdsm-api = 4.20.47-2.git31d3591.el7
06:28:14,523 INFO program:Available: vdsm-api-4.20.23-1.el7.noarch (ovirt-4.2-centos-ovirt42)
06:28:14,524 INFO program:vdsm-api = 4.20.23-1.el7
06:28:14,525 INFO program:Available: vdsm-api-4.20.39.1-1.el7.noarch (ovirt-4.2-centos-ovirt42)
06:28:14,526 INFO program:vdsm-api = 4.20.39.1-1.el7
06:28:14,527 INFO program:Error: Package: vdsm-python-4.20.47-2.git31d3591.el7.noarch (ovirt-4.2-snapshot)
06:28:14,528 INFO program:Requires: vdsm-api = 4.20.47-2.git31d3591.el7
06:28:14,528 INFO program:Available: vdsm-api-4.20.23-1.el7.noarch (ovirt-4.2-centos-ovirt42)
06:28:14,529 INFO program:vdsm-api = 4.20.23-1.el7
06:28:14,530 INFO program:Available: vdsm-api-4.20.39.1-1.el7.noarch (ovirt-4.2-centos-ovirt42)
06:28:14,531 INFO program:vdsm-api = 4.20.39.1-1.el7
Looking at vdsm.spec I see:
grep "\-api" vdsm.spec.in:
Requires: %{name}-api = %{version}-%{release}
Requires: %{name}-api = %{version}-%{release}
Obsoletes: %{name}-api < 4.16
Requires: %{name}-api = %{version}-%{release}
Requires: glusterfs-api >= %{gluster_version}
Looking at the Obsoletes:
%package jsonrpc
Summary: VDSM API Server
BuildArch: noarch
Requires: %{name}-python = %{version}-%{release}
Requires: %{name}-api = %{version}-%{release}
Requires: %{name}-yajsonrpc = %{version}-%{release}
Obsoletes: %{name}-api < 4.16
There's an Obsoletes without a matching Provides, while lines like:
Requires: %{name}-api = %{version}-%{release}
are still around, so I don't know how it could have worked until now.
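In general, when a subpackage is retired with an Obsoletes, whichever package
absorbs it carries a matching Provides so that existing Requires keep
resolving. A quick way to see what the composed repos actually provide is a
repoquery check like the sketch below (assuming repoquery from yum-utils is
available on the builder; the version string and repo ids are copied from the
log above):

# Which available packages satisfy the exact versioned requirement that
# fails in the build above?
repoquery --whatprovides 'vdsm-api = 4.20.47-2.git31d3591.el7'
# Which vdsm-api builds does each enabled repo ship?
repoquery --show-duplicates --qf '%{name}-%{version}-%{release} (%{repoid})' vdsm-api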
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
ovirt-vmconsole CI check-patch
by Sandro Bonazzola
Hi,
it looks like ovirt-vmconsole is missing the CI check-patch stage.
Is there any reason for not running at least an rpm build in check-patch?
I see the project has unit tests; why not run them in check-patch?
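In case it helps, below is a minimal sketch of what such a stage could look
like, assuming the usual STDCI automation/check-patch.sh layout and an
autotools-style build; the targets and tarball name are illustrative, not
taken from the ovirt-vmconsole repo:

#!/bin/bash -xe
# automation/check-patch.sh - illustrative sketch only
autoreconf -ivf
./configure
make check                              # run the unit tests
make dist                               # build a release tarball...
rpmbuild -ta ovirt-vmconsole-*.tar.gz   # ...and do a sanity rpm build from it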
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Re: Problem on host deployment from engine
by Nir Soffer
On Mon, Apr 15, 2019, 14:59 Amit Bawer <abawer(a)redhat.com> wrote:
> Hello Didi & Sandro,
>
> I have encountered the following issue when attempting to deploy a host from
> the engine management.
>
> Engine: Fedora 28
> Host: CentOS 7.6.1810
>
> engine.log error:
>
> 2019-04-14 14:08:35,578+03 INFO
> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (EE-ManagedThreadFactory-engine-Thread-2) [628278a7] SSH execute '
> root(a)10.35.0.229' 'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp
> -d -t ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null
> 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp
> -C "${MYTMP}" -x && "${MYTMP}"/ovirt-host-deploy
> DIALOG/dialect=str:machine DIALOG/customization=bool:True'
> 2019-04-14 14:08:35,670+03 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [628278a7] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An
> error has occurred during installation of Host host1: Python is required
> but missing.
> 2019-04-14 14:08:35,688+03 ERROR
> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (EE-ManagedThreadFactory-engine-Thread-2) [628278a7] SSH error running
> command root(a)10.35.0.229:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}"
> mktemp -d -t ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" >
> /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar
> --warning=no-timestamp -C "${MYTMP}" -x && "${MYTMP}"/ovirt-host-deploy
> DIALOG/dialect=str:machine DIALOG/customization=bool:True': IOException:
> Command returned failure code 1 during SSH session 'root(a)10.35.0.229'
>
>
> I managed to resolve it manually by installing the following rpms on the
> host machine before reattempting to deploy the host from the engine
> management:
>
> python2-otopi
> python2-ovirt-host-deploy
>
> I should mention that both the engine and the host were built from the git
> master branch and not installed from the release rpms; however, Nir can
> also testify that there is an issue when trying to deploy a host from Fedora
> engines (29 in Nir's case and 28 in my case).
>
> Thanks,
> Amit
>
Adding devel
>
>
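For anyone hitting the same "Python is required but missing" error: the
workaround described above amounts to pre-installing the two packages on the
host before retrying the deploy from the engine, e.g. (a sketch, assuming yum
on the CentOS 7.6 host):

# On the host to be added: make sure the python2 otopi/host-deploy bits are
# present, then retry "Add Host" from the engine (package names taken from
# the report above).
rpm -q python2-otopi python2-ovirt-host-deploy || \
    yum install -y python2-otopi python2-ovirt-host-deploy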