OST 4.1 failure: Error: ('Error while sending HTTP request', error('cannot add/remove handle - multi_perform() already running', ))

I got this failure in 4.1 build [1], which should not be relevant to the tested patch [2] - is this a known issue?

13:31:47  # add_hosts:
13:31:47  Error while running thread
13:31:47  Traceback (most recent call last):
13:31:47    File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in _ret_via_queue
13:31:47      queue.put({'return': func()})
13:31:47    File "/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py", line 320, in _add_host_4
13:31:47      name=CLUSTER_NAME,
13:31:47    File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 8726, in add
13:31:47      return self._internal_add(host, headers, query, wait)
13:31:47    File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 211, in _internal_add
13:31:47      context = self._connection.send(request)
13:31:47    File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 300, in send
13:31:47      sys.exc_info()[2]
13:31:47    File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 295, in send
13:31:47      return self.__send(request)
13:31:47    File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 413, in __send
13:31:47      self._multi.add_handle(curl)
13:31:47  Error: ('Error while sending HTTP request', error('cannot add/remove handle - multi_perform() already running',))

[1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_...
[2] https://gerrit.ovirt.org/76645

Nir
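For context, the failing step adds several hosts from parallel worker threads through the SDK, and the pycurl error above is what you get when more than one thread drives the same connection's curl multi handle at once. A minimal sketch of that pattern (not the actual OST code; the engine URL, credentials and host details are placeholders), with the obvious per-thread-connection workaround; the thread below instead discusses fixing the locking inside the SDK:

import threading

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

ENGINE_URL = 'https://engine.example.com/ovirt-engine/api'  # placeholder

def add_host(name, address):
    # Workaround sketch: give every thread its own Connection, so no two
    # threads ever touch the same pycurl multi handle concurrently.
    connection = sdk.Connection(
        url=ENGINE_URL,
        username='admin@internal',  # placeholder credentials
        password='secret',
        insecure=True,
    )
    try:
        hosts_service = connection.system_service().hosts_service()
        hosts_service.add(
            types.Host(
                name=name,
                address=address,
                root_password='secret',  # placeholder
                cluster=types.Cluster(name='test-cluster'),  # placeholder
            ),
        )
    finally:
        connection.close()

# Sharing one Connection between these threads instead is what triggers
# "cannot add/remove handle - multi_perform() already running".
threads = [
    threading.Thread(target=add_host, args=('host%d' % i, '192.0.2.%d' % i))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()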

We are aware of this failure. Initially it was a bug in SDK4 and was (probably) fixed by Ondra, but now we have another bug in repoman which picks up an older SDK version instead of the latest. We're working on a fix as we speak.

On Thu, May 11, 2017 at 6:26 PM, Nir Soffer <nsoffer@redhat.com> wrote:
--
Eyal Edri
ASSOCIATE MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

Hello Ondra.

Looks like the bump did not fix it, alas. See here:
http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_...

Thanks.

On Thu, May 11, 2017 at 5:31 PM, Eyal Edri <eedri@redhat.com> wrote:
--
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat

Take a look at this:

16:01:27  Package ovirt-engine-sdk-python-3.6.9.2-0.1.20161204.gite99bbd1.el7.centos.noarch already installed and latest version
16:01:28
16:01:28  ================================================================================
16:01:28   Package                   Arch    Version                                   Repository           Size
16:01:28  ================================================================================
16:01:28  Installing:
16:01:28   python-ovirt-engine-sdk4  x86_64  4.2.0-1.a0.20170511git210c375.el7.centos  ovirt-master-tested  446 k
16:01:28  Installing for dependencies:
16:01:28   python-enum34             noarch  1.0.4-1.el7                               centos-base-el7       52 k
16:01:28
16:01:28  Transaction Summary
16:01:28  ================================================================================
16:01:28  Install  1 Package (+1 Dependent package)
16:01:28
16:01:28  Total size: 498 k
16:01:28  Installed size: 5.1 M
16:01:30
16:01:30  Installed:
16:01:30   python-ovirt-engine-sdk4.x86_64 0:4.2.0-1.a0.20170511git210c375.el7.centos

It's replaced for some reason.

On Thu, May 11, 2017 at 6:02 PM, Anton Marchukov <amarchuk@redhat.com> wrote:

Hello Ondra.

For the bump patch at https://gerrit.ovirt.org/#/c/76732/1 I see that the job still built python-ovirt-engine-sdk4-4.2.0-1.a1.20170511git20eea95.fc25.x86_64.rpm while the bump was to a2. I am now going to check this in the manual job one more time to be sure.

Thanks.

On Thu, May 11, 2017 at 7:19 PM, Ondra Machacek <omachace@redhat.com> wrote:

I see that in all the runs it's updated to an incorrect version before the tests run, for some reason.

On Thu, May 11, 2017 at 7:46 PM, Anton Marchukov <amarchuk@redhat.com> wrote:

Also, the build job produced an a1 artifact while the bump seems to be to a2. Is that correct?

Anyway, I think the best thing is to merge it. Alas, until the date part updates we might have a problem, because the git hashes are not ordered.

I also started a fresh manual run to debug the manual job [1].

[1] http://jenkins.ovirt.org/job/ovirt-system-tests_manual/384/console

On Thu, May 11, 2017 at 8:09 PM, Ondra Machacek <omachace@redhat.com> wrote:

On Thu, May 11, 2017 at 8:14 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
> Also, the build job produced an a1 artifact while the bump seems to be to a2. Is that correct?
Yes, that's correct, it's handled by our automation script in SDK.

Hello All.

We checked the OST and so far it looks like it is correct for master. It does use the latest SDK4 version built by the job triggered as part of the fix [1]. It is visible in the console log [2]:

15:50:44  [basic_suit_el7] Updated:
15:50:44  [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64 0:4.2.0-1.a0.20170511git210c375.el7.centos

Also, checking the fix at [1], I see that according to the stacktrace from [2] we fail in send():

15:56:58  [basic_suit_el7] Error while running thread
15:56:58  [basic_suit_el7] Traceback (most recent call last):
15:56:58  [basic_suit_el7]   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in _ret_via_queue
15:56:58  [basic_suit_el7]     queue.put({'return': func()})
15:56:58  [basic_suit_el7]   File "/home/jenkins/workspace/test-repo_ovirt_experimental_master/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py", line 327, in _add_host_4
15:56:58  [basic_suit_el7]     name=CLUSTER_NAME,
15:56:58  [basic_suit_el7]   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 8726, in add
15:56:58  [basic_suit_el7]     return self._internal_add(host, headers, query, wait)
15:56:58  [basic_suit_el7]   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 211, in _internal_add
15:56:58  [basic_suit_el7]     context = self._connection.send(request)
15:56:58  [basic_suit_el7]   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 300, in send
15:56:58  [basic_suit_el7]     sys.exc_info()[2]
15:56:58  [basic_suit_el7]   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 295, in send
15:56:58  [basic_suit_el7]     return self.__send(request)
15:56:58  [basic_suit_el7]   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 413, in __send
15:56:58  [basic_suit_el7]     self._multi.add_handle(curl)
15:56:58  [basic_suit_el7] Error: ('Error while sending HTTP request', error('cannot add/remove handle - multi_perform() already running',))

In the fix [1] I see the lock is added to the wait() method, while according to the stacktrace we fail in the send() method, and as far as I can see wait() is only executed by _internal_add() later. So the code added in [1] is not reached yet.

Do we have any other fix that we have missed? Just want to make sure we track the correct gerrit fix through our system.

Anton.

[1] https://gerrit.ovirt.org/#/c/76713/
[2] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6643/consol...

On Thu, May 11, 2017 at 5:31 PM, Eyal Edri <eedri@redhat.com> wrote:

On Thu, May 11, 2017 at 7:34 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
> 15:50:44  [basic_suit_el7] Updated:
> 15:50:44  [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64 0:4.2.0-1.a0.20170511git210c375.el7.centos
This is an incorrect version. The correct one is:

python-ovirt-engine-sdk4-4.2.0-1.a0.20170511gitcd0adb4.el7.centos.x86_64.rpm
<http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts-el7-x86_64/71/artifact/exported-artifacts/python-ovirt-engine-sdk4-4.2.0-1.a0.20170511gitcd0adb4.el7.centos.x86_64.rpm>
From this build:
http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts...
> In the fix [1] I see the lock is added to the wait() method, while according to the stacktrace we fail in the send() method, and as far as I can see wait() is only executed by _internal_add() later. So the code added in [1] is not reached yet.
The send() method already has a lock.
> Do we have any other fix that we have missed? Just want to make sure we track the correct gerrit fix through our system.
This patch should be the correct fix.
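For reference, the kind of serialization being discussed (a single lock guarding every use of the shared pycurl multi handle, in send() as well as in wait()) looks roughly like the following. This is a sketch only, not the actual ovirtsdk4 code:

import threading

import pycurl

class MultiConnection(object):
    """Sketch of serializing access to one shared pycurl.CurlMulti.

    add_handle() and perform() must never run concurrently from two
    threads; otherwise pycurl raises "cannot add/remove handle -
    multi_perform() already running", as in the traceback above.
    """

    def __init__(self):
        self._multi = pycurl.CurlMulti()
        self._lock = threading.Lock()

    def send(self, curl):
        # Roughly what Connection.send()/__send() does: register the easy
        # handle with the multi handle, under the lock.
        with self._lock:
            self._multi.add_handle(curl)

    def wait(self):
        # Roughly what Connection.wait() does: drive the transfers. Taking
        # the same lock here as in send() is the point of the fix being
        # discussed.
        with self._lock:
            while True:
                ret, num_active = self._multi.perform()
                if ret != pycurl.E_CALL_MULTI_PERFORM:
                    break
        return num_active > 0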

On Thu, May 11, 2017 at 8:03 PM, Ondra Machacek <omachace@redhat.com> wrote:
> This is an incorrect version. The correct one is:
> python-ovirt-engine-sdk4-4.2.0-1.a0.20170511gitcd0adb4.el7.centos.x86_64.rpm
Sounds like we have a problem if the versions differ only by git hashes. They are not ordered.

I suggest we just merge the version bump at https://gerrit.ovirt.org/#/c/76732/ and then see which version it will install.

Any objections to that?
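To make the ordering problem concrete, here is a small illustration (not part of the thread's tooling; it assumes the rpm Python bindings are installed) comparing the two release strings seen above:

# Illustration only: release strings that differ just in the embedded git
# hash do not sort by build time, because rpm compares the hash segment as
# an ordinary alphanumeric string (and numeric runs outrank alphabetic runs).
import rpm  # provided by the rpm-python bindings

stale  = ('0', '4.2.0', '1.a0.20170511git210c375.el7.centos')   # the build yum keeps picking
latest = ('0', '4.2.0', '1.a0.20170511gitcd0adb4.el7.centos')   # the newer build from job 71

# labelCompare() returns 1 if the first (epoch, version, release) tuple is
# considered newer. Here the stale build wins, because after the common
# "20170511git" prefix the numeric run "210..." outranks the alphabetic
# run "cd..." - nothing to do with which commit is actually newer.
print(rpm.labelCompare(stale, latest))  # expected: 1

# Bumping the release to a1/a2 avoids this, since "a1" vs "a0" is decided
# before the git hash is ever looked at.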

On Thu, May 11, 2017 at 8:11 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
> I suggest we just merge the version bump at https://gerrit.ovirt.org/#/c/76732/ and then see which version it will install.
> Any objections to that?
OK, I will do a proper release.

Hello Ondra.

Thanks. It seems that the manual job populates the SDK from the custom repo only for the VMs under test, but the mock environment where the Python test code runs does not take it from there. So a release of the bumped version will be a good idea.

Anton.

On Thu, May 11, 2017 at 8:20 PM, Ondra Machacek <omachace@redhat.com> wrote:

Hello Anton,

So I've bumped the version, but it's still installing the old one. The bumped version:

python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm
<http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts-el7-x86_64/74/artifact/exported-artifacts/python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm>

Log from OST run:

07:25:59  [upgrade-from-release_suit_el7] ================================================================================
07:25:59  [upgrade-from-release_suit_el7]  Package                   Arch    Version                                   Repository             Size
07:25:59  [upgrade-from-release_suit_el7] ================================================================================
07:25:59  [upgrade-from-release_suit_el7] Installing:
07:25:59  [upgrade-from-release_suit_el7]  python-ovirt-engine-sdk4  x86_64  4.2.0-1.a0.20170511git210c375.el7.centos  ovirt-master-snapshot  446 k
07:25:59  [upgrade-from-release_suit_el7] Installing for dependencies:
07:25:59  [upgrade-from-release_suit_el7]  python-enum34             noarch  1.0.4-1.el7                               centos-base-el7         52 k
07:25:59  [upgrade-from-release_suit_el7]
07:25:59  [upgrade-from-release_suit_el7] Transaction Summary
07:25:59  [upgrade-from-release_suit_el7] ================================================================================

On Thu, May 11, 2017 at 8:35 PM, Anton Marchukov <amarchuk@redhat.com> wrote:

Hello Ondra.

Yes, I see it installs the old version, e.g. the latest master run at [1] installs:

07:43:13  [basic_suit_el7] Updated:
07:43:13  [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64 0:4.2.0-1.a0.20170511git210c375.el7.centos

while the latest version is indeed python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm.

Just for the record: latest and latest.under_test have the correct version of the package, so it does not look to be a repoman bug.

Checking OST sources now...

[1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6651/consol...

On Fri, May 12, 2017 at 9:43 AM, Ondra Machacek <omachace@redhat.com> wrote:

Anton, are you seeing repoman pull the right version in the lago logs? We need to know if it makes it into the Lago local repo or not.

Barak Korren
bkorren@redhat.com
RHCE, RHCi, RHV-DevOps Team
https://ifireball.wordpress.com/

On 12 May 2017 at 11:13, "Anton Marchukov" <amarchuk@redhat.com> wrote:

Hello Barak.

Yes, repoman pulls the latest version, and that version is in latest and latest.under_test on resources. Additionally it is confirmed by lago.log too. The only problem seems to be the mock env that runs the Python test code itself.

Anton.

On Fri, May 12, 2017 at 11:03 AM, Barak Korren <bkorren@redhat.com> wrote:

So, yum is installing the older version even though it has a newer one visible in a repo it is configured to use? I guess it's not reading the updated repodata then. We need to try and add 'yum clean metadata' after we configure the localrepo in the mock environment.
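A rough sketch of what that could look like in the job's setup step (the repo id and helper name here are hypothetical, not the actual CI code):

# Hypothetical helper: drop yum's cached repodata for the local repo inside
# the mock chroot before installing, so a repo that was regenerated under the
# same path is not served from a stale metadata cache.
import subprocess

def install_from_local_repo(package, repo_id='localrepo'):
    base = ['yum', '--disablerepo=*', '--enablerepo=' + repo_id]
    # Equivalent of "yum clean metadata", limited to the local repo.
    subprocess.check_call(base + ['clean', 'metadata'])
    subprocess.check_call(base + ['-y', 'install', package])

install_from_local_repo('python-ovirt-engine-sdk4')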
Hello Barak.
Yes. repoman pulls the latest version and that version is in latest and latest.under_test on resources. Additionally it is proven by lago.log too.
The only problem seems to be the mock env that runs the python itself.
Anton.
On Fri, May 12, 2017 at 11:03 AM, Barak Korren <bkorren@redhat.com> wrote:
Anton, are you seeing repoman pull the right version in the lago logs? We need to know if it makes it into the Lago local repo or not.
Barak Korren bkorren@redhat.com RHCE, RHCi, RHV-DevOps Team https://ifireball.wordpress.com/
On 12 May 2017 at 11:13, "Anton Marchukov" <amarchuk@redhat.com> wrote:
Hello Ondra.
Yes I see it installs the old version, e.g. the latest master run at [1] installs:
07:43:13 [basic_suit_el7] Updated:
07:43:13 [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64  0:4.2.0-1.a0.20170511git210c375.el7.centos
while the latest version is indeed python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm
Just for the record: latest and latest.under_test have correct version of the package, so it does not look to be a repoman bug.
Checking OST sources now...
[1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6651/consol...
On Fri, May 12, 2017 at 9:43 AM, Ondra Machacek <omachace@redhat.com> wrote:
Hello Anton,
So I've bumped the version, but it's still installing the old one. The bumped version:
python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm
Log from OST run:
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
07:25:59 [upgrade-from-release_suit_el7]  Package                   Arch     Version                                    Repository             Size
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
07:25:59 [upgrade-from-release_suit_el7] Installing:
07:25:59 [upgrade-from-release_suit_el7]  python-ovirt-engine-sdk4  x86_64   4.2.0-1.a0.20170511git210c375.el7.centos   ovirt-master-snapshot  446 k
07:25:59 [upgrade-from-release_suit_el7] Installing for dependencies:
07:25:59 [upgrade-from-release_suit_el7]  python-enum34             noarch   1.0.4-1.el7                                centos-base-el7         52 k
07:25:59 [upgrade-from-release_suit_el7]
07:25:59 [upgrade-from-release_suit_el7] Transaction Summary
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
On Thu, May 11, 2017 at 8:35 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
Hello Ondra.
Thanks.
It seems that the manual job populates the SDK from the custom repo only for the VMs under test, but the mock environment where the Python test code runs does not use it from there. So releasing a bumped version would be a good idea.
Anton.
On Thu, May 11, 2017 at 8:20 PM, Ondra Machacek <omachace@redhat.com> wrote:
On Thu, May 11, 2017 at 8:11 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
> On Thu, May 11, 2017 at 8:03 PM, Ondra Machacek <omachace@redhat.com> wrote:
>>
>>> 15:50:44 [basic_suit_el7] Updated:
>>> 15:50:44 [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64  0:4.2.0-1.a0.20170511git210c375.el7.centos
>>
>> This is an incorrect version. The correct one is:
>>
>> python-ovirt-engine-sdk4-4.2.0-1.a0.20170511gitcd0adb4.el7.centos.x86_64.rpm
>>
>> From this build:
>>
>> http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts...
>
> Sounds like we have a problem if the versions differ only by git hashes. They are not ordered.
>
> I suggest we just merge the version bump at https://gerrit.ovirt.org/#/c/76732/ and then see which version it will install.
>
> Any objections to that?
OK, I will do a proper release.
> --
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
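On the 'yum clean metadata' idea above, a minimal sketch of what could be run inside the mock environment right after the local repo is configured (the repo id 'internal' is an assumption, not taken from the actual job configuration):

    # drop any cached repodata for the local repo and rebuild it, so yum picks up
    # the freshly added SDK build instead of the stale metadata
    yum --disablerepo='*' --enablerepo=internal clean metadata
    yum --disablerepo='*' --enablerepo=internal makecache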

Ok, I found the issue:

PATH_TO_CONFIG=/etc/yum.repos.d/internal.repo

'/etc/yum.repos.d' is intentionally disabled in mock. The configuration should've been placed directly in /etc/yum/yum.conf.

On 12 May 2017 at 13:43, Barak Korren <bkorren@redhat.com> wrote:
So, yum is installing the older version even though it has a newer one visible in a repo it is configured to use? I guess it's not reading the updated repodata then. We need to try and add 'yum clean metadata' after we configure the localrepo in the mock environment.
On 12 May 2017 at 12:29, Anton Marchukov <amarchuk@redhat.com> wrote:
Hello Barak.
Yes. repoman pulls the latest version and that version is in latest and latest.under_test on resources. Additionally it is proven by lago.log too.
The only problem seems to be the mock env that runs the python itself.
Anton.
On Fri, May 12, 2017 at 11:03 AM, Barak Korren <bkorren@redhat.com> wrote:
Anton, are you seeing repoman pull the right version in the lago logs? We need to know if it makes it into the Lago local repo or not.
Barak Korren bkorren@redhat.com RHCE, RHCi, RHV-DevOps Team https://ifireball.wordpress.com/
On 12 May 2017 at 11:13, "Anton Marchukov" <amarchuk@redhat.com> wrote:
Hello Ondra.
Yes I see it installs the old version, e.g. the latest master run at [1] installs:
07:43:13 [basic_suit_el7] Updated:
07:43:13 [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64  0:4.2.0-1.a0.20170511git210c375.el7.centos
while the latest version is indeed python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm
Just for the record: latest and latest.under_test have correct version of the package, so it does not look to be a repoman bug.
Checking OST sources now...
[1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6651/consol...
On Fri, May 12, 2017 at 9:43 AM, Ondra Machacek <omachace@redhat.com> wrote:
Hello Anton,
So I've bumped the version, but it's still installing the old one. The bumped version:
python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm
Log from OST run:
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
07:25:59 [upgrade-from-release_suit_el7]  Package                   Arch     Version                                    Repository             Size
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
07:25:59 [upgrade-from-release_suit_el7] Installing:
07:25:59 [upgrade-from-release_suit_el7]  python-ovirt-engine-sdk4  x86_64   4.2.0-1.a0.20170511git210c375.el7.centos   ovirt-master-snapshot  446 k
07:25:59 [upgrade-from-release_suit_el7] Installing for dependencies:
07:25:59 [upgrade-from-release_suit_el7]  python-enum34             noarch   1.0.4-1.el7                                centos-base-el7         52 k
07:25:59 [upgrade-from-release_suit_el7]
07:25:59 [upgrade-from-release_suit_el7] Transaction Summary
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
On Thu, May 11, 2017 at 8:35 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
Hello Ondra.
Thanks.
It seems that the manual job populates the SDK from the custom repo only for the VMs under test, but the mock environment where the Python test code runs does not use it from there. So releasing a bumped version would be a good idea.
Anton.
On Thu, May 11, 2017 at 8:20 PM, Ondra Machacek <omachace@redhat.com> wrote:
> On Thu, May 11, 2017 at 8:11 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
>> On Thu, May 11, 2017 at 8:03 PM, Ondra Machacek <omachace@redhat.com> wrote:
>>>
>>>> 15:50:44 [basic_suit_el7] Updated:
>>>> 15:50:44 [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64  0:4.2.0-1.a0.20170511git210c375.el7.centos
>>>
>>> This is an incorrect version. The correct one is:
>>>
>>> python-ovirt-engine-sdk4-4.2.0-1.a0.20170511gitcd0adb4.el7.centos.x86_64.rpm
>>>
>>> From this build:
>>>
>>> http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts...
>>
>> Sounds like we have a problem if the versions differ only by git hashes. They are not ordered.
>>
>> I suggest we just merge the version bump at https://gerrit.ovirt.org/#/c/76732/ and then see which version it will install.
>>
>> Any objections to that?
>
> OK, I will do a proper release.
>
>> --
>> Anton Marchukov
>> Senior Software Engineer - RHEV CI - Red Hat
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
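On the finding that /etc/yum.repos.d is ignored inside mock: a minimal sketch, assuming a mock chroot config file (path and repo details are hypothetical), of appending the internal repo to the yum configuration that mock itself generates for the chroot:

    # hypothetical snippet appended to a mock chroot config (e.g. /etc/mock/epel-7-x86_64-ost.cfg),
    # after the existing config_opts['yum.conf'] definition; mock writes this string out as the
    # chroot's yum.conf, so a repo added here is visible to yum inside mock, unlike a file
    # dropped into /etc/yum.repos.d
    config_opts['yum.conf'] += """
    [internal]
    name=OST internal repo (path assumed)
    baseurl=file:///var/lib/lago/internal_repo
    enabled=1
    gpgcheck=0
    """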

So repoman pulls both of the versions to the internal repo? I think we're running repoman with only the 'latest' flag...

On May 12, 2017 12:29 PM, "Anton Marchukov" <amarchuk@redhat.com> wrote:
Hello Barak.
Yes. repoman pulls the latest version and that version is in latest and latest.under_test on resources. Additionally it is proven by lago.log too.
The only problem seems to be the mock env that runs the python itself.
Anton.
On Fri, May 12, 2017 at 11:03 AM, Barak Korren <bkorren@redhat.com> wrote:
Anton, are you seeing repoman pull the right version in the lago logs? We need to know if it makes it into the Lago local repo or not.
Barak Korren bkorren@redhat.com RHCE, RHCi, RHV-DevOps Team https://ifireball.wordpress.com/
On 12 May 2017 at 11:13, "Anton Marchukov" <amarchuk@redhat.com> wrote:
Hello Ondra.
Yes I see it installs the old version, e.g. the latest master run at [1] installs:
07:43:13 [basic_suit_el7] Updated:
07:43:13 [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64  0:4.2.0-1.a0.20170511git210c375.el7.centos
while the latest version is indeed python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm
Just for the record: latest and latest.under_test have correct version of the package, so it does not look to be a repoman bug.
Checking OST sources now...
[1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6651/consoleFull
On Fri, May 12, 2017 at 9:43 AM, Ondra Machacek <omachace@redhat.com> wrote:
Hello Anton,
So I've bumped the version, but it's still installing the old one. The bumped version:
python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm <http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts-el7-x86_64/74/artifact/exported-artifacts/python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm>
Log from OST run:
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
07:25:59 [upgrade-from-release_suit_el7]  Package                   Arch     Version                                    Repository             Size
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
07:25:59 [upgrade-from-release_suit_el7] Installing:
07:25:59 [upgrade-from-release_suit_el7]  python-ovirt-engine-sdk4  x86_64   4.2.0-1.a0.20170511git210c375.el7.centos   ovirt-master-snapshot  446 k
07:25:59 [upgrade-from-release_suit_el7] Installing for dependencies:
07:25:59 [upgrade-from-release_suit_el7]  python-enum34             noarch   1.0.4-1.el7                                centos-base-el7         52 k
07:25:59 [upgrade-from-release_suit_el7]
07:25:59 [upgrade-from-release_suit_el7] Transaction Summary
07:25:59 [upgrade-from-release_suit_el7] ================================================================================
On Thu, May 11, 2017 at 8:35 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
Hello Ondra.
Thanks.
It seems that the manual job populates the SDK from the custom repo only for the VMs under test, but the mock environment where the Python test code runs does not use it from there. So releasing a bumped version would be a good idea.
Anton.
On Thu, May 11, 2017 at 8:20 PM, Ondra Machacek <omachace@redhat.com> wrote:
On Thu, May 11, 2017 at 8:11 PM, Anton Marchukov <amarchuk@redhat.com> wrote:
> On Thu, May 11, 2017 at 8:03 PM, Ondra Machacek <omachace@redhat.com> wrote:
>>
>>> 15:50:44 [basic_suit_el7] Updated:
>>> 15:50:44 [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64  0:4.2.0-1.a0.20170511git210c375.el7.centos
>>
>> This is an incorrect version. The correct one is:
>>
>> python-ovirt-engine-sdk4-4.2.0-1.a0.20170511gitcd0adb4.el7.centos.x86_64.rpm
>> <http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts-el7-x86_64/71/artifact/exported-artifacts/python-ovirt-engine-sdk4-4.2.0-1.a0.20170511gitcd0adb4.el7.centos.x86_64.rpm>
>>
>> From this build:
>>
>> http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts-el7-x86_64/71/
>
> Sounds like we have a problem if the versions differ only by git hashes. They are not ordered.
>
> I suggest we just merge the version bump at https://gerrit.ovirt.org/#/c/76732/ and then see which version it will install.
>
> Any objections to that?
OK, I will do a proper release.
> --
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
-- Anton Marchukov Senior Software Engineer - RHEV CI - Red Hat
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel

On 12 May 2017 at 14:02, Daniel Belenky <dbelenky@redhat.com> wrote:
So repoman pulls both of the versions to the internal repo? I think we're running repoman with only the 'latest' flag...
No, the older version comes from 'tested'.

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

BTW, master experimental was fixed by removing the old broken pkg from the repo. If we know the root cause of the error, we can test the fix on the 4.1 job now.

On Fri, May 12, 2017 at 2:04 PM, Barak Korren <bkorren@redhat.com> wrote:
On 12 May 2017 at 14:02, Daniel Belenky <dbelenky@redhat.com> wrote:
So repoman pulls both of the versions to the internal repo? I think we're running repoman with only the 'latest' flag...
No, the older version comes from 'tested'.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted _______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
-- Eyal edri ASSOCIATE MANAGER RHV DevOps EMEA VIRTUALIZATION R&D Red Hat EMEA <https://www.redhat.com/> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted> phone: +972-9-7692018 irc: eedri (on #tlv #rhev-dev #rhev-integ)

On 12 May 2017 at 14:24, Eyal Edri <eedri@redhat.com> wrote:
BTW, master experimental was fixed by removing the old broken pkg from the repo. If we know the root cause of the error, we can test the fix on the 4.1 job now.
From where did you remove it? It made it into 'tested', so you probably also needed to remove it from the local cache on the slave.

The root cause was the bad SDK package. But it also uncovered the truth that Daniel's new patch was not really working and not taking the SDK from experimental. This made the fixed SDK not come into play.

I've a fix patch here: https://gerrit.ovirt.org/#/c/76765/

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
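For reference, a minimal sketch of the cleanup being described, i.e. removing the stale build from the 'tested' repo and from the slave-side cache; all paths here are hypothetical, not taken from the actual infrastructure:

    # hypothetical locations of the 'tested' repo and the slave cache
    TESTED_REPO=/srv/resources/repos/ovirt/tested/master/rpm/el7
    SLAVE_CACHE=/var/cache/ovirt-system-tests/internal_repo

    # drop the stale SDK build and regenerate the repo metadata
    rm -f "$TESTED_REPO"/python-ovirt-engine-sdk4-4.2.0-1.a0.20170511git210c375*.rpm
    createrepo --update "$TESTED_REPO"

    # clear the copy cached on the Jenkins slave so the next run re-syncs from 'tested'
    rm -rf "$SLAVE_CACHE"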

On Fri, May 12, 2017 at 3:06 PM, Barak Korren <bkorren@redhat.com> wrote:
On 12 May 2017 at 14:24, Eyal Edri <eedri@redhat.com> wrote:
BTW, master experimental was fixed by removing the old broken pkg from the repo. If we know the root cause of the error, we can test the fix on the 4.1 job now.
From where did you remove it? It made it into 'tested', so you probably also needed to remove it from the local cache on the slave.
Yes, removed it from both. However, experimental then ran on a new slave which I didn't clean, and it also passed.
The root cause was the bad SDK package.
But it also uncovered the truth that Daniel's new patch was not really working and not taking the SDK from experimental. This made the fixed SDK not come into play.
I've a fix patch here: https://gerrit.ovirt.org/#/c/76765/
Great! Can we verify it fixes 4.1 ?
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
-- Eyal edri ASSOCIATE MANAGER RHV DevOps EMEA VIRTUALIZATION R&D Red Hat EMEA <https://www.redhat.com/> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted> phone: +972-9-7692018 irc: eedri (on #tlv #rhev-dev #rhev-integ)

On 12 May 2017 at 15:08, Eyal Edri <eedri@redhat.com> wrote:
Yes, removed it from both. However, experimental then ran on a new slave which I didn't clean, and it also passed.
I guess we just got lucky there
Great! Can we verify it fixes 4.1 ?
Maybe. I guess we can run the manual job with it and add the newer SDK to extra_sources; then, if we see it getting installed in mock, we know it works.

But do we have a fixed SDK for 4.1?

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
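A minimal sketch of what the manual run's extra_sources could contain: one source per line, for example the URL of the Jenkins build holding the fixed SDK (the line below reuses the master build referenced earlier only to illustrate the format; the actual 4.1 build URL is not given in this thread):

    http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_master_build-artifacts-el7-x86_64/74/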

On Fri, May 12, 2017 at 3:23 PM, Barak Korren <bkorren@redhat.com> wrote:
On 12 May 2017 at 15:08, Eyal Edri <eedri@redhat.com> wrote:
Yes, removed it from both. However, experimental then ran on a new slave which I didn't clean, and it also passed.
I guess we just got lucky there
Great! Can we verify it fixes 4.1 ?
Maybe. I guess we can run the manual job with it and add the newer SDK to extra_sources; then, if we see it getting installed in mock, we know it works.
But do we have a fixed SDK for 4.1?
Yes, should be on latest experimental. The patch was merged: https://gerrit.ovirt.org/#/c/76714/
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
-- Eyal edri ASSOCIATE MANAGER RHV DevOps EMEA VIRTUALIZATION R&D Red Hat EMEA <https://www.redhat.com/> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted> phone: +972-9-7692018 irc: eedri (on #tlv #rhev-dev #rhev-integ)

On 12 May 2017 at 16:00, Eyal Edri <eedri@redhat.com> wrote:
Maybe. I guess we can run the manual job with it and add the newer SDK to extra_sources; then, if we see it getting installed in mock, we know it works.
But do we have a fixed SDK for 4.1?
Yes, should be on latest experimental. The patch was merged:
And do we still have a broken SDK package in "testing" for 4.1? If so, please try running manual for 4.1 with my patch.

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
participants (6)

- Anton Marchukov
- Barak Korren
- Daniel Belenky
- Eyal Edri
- Nir Soffer
- Ondra Machacek