Merge gating in Gerrit
by Barak Korren
Hi all,
Perhaps the main purpose of CI is to prevent breaking code from
getting merged into the stable/master branches. Unfortunately our CI
is not there yet, and one of the reasons for that is that we run a
large part of our CI tests only _after_ the code is merged.
The reason for that is that when balancing thorough but time-consuming
tests (e.g. an engine build with all permutations) vs. faster but more
basic ones (e.g. "findbugs" and a single-permutation build), we
typically choose the faster tests to run per patch set and leave the
thorough testing to be run only post-merge.
We'd like to change that and have the thorough tests also run before
merge. Ideally we would just hook things onto the "submit" button, but
Gerrit doesn't allow one to do that easily. So instead we'll need to
adopt some kind of flag to indicate that we want to submit, and have
Jenkins "click" the submit button on our behalf if the tests pass.
I see two options here:
1. Use Code-Review+2 as the indicator to run "heavy" CI and merge.
2. Add an "approve" flag that maintainers can set to +1 (This is
what OpenStack is doing).
What would you prefer?
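For illustration, option 2 could look roughly like the sketch below on the
Jenkins side: a job triggered by the new flag runs the heavy tests and, if
they pass, votes and submits through Gerrit's REST API. This is only a
sketch; the label name, environment variables and credentials are my
assumptions, not our actual setup.

#!/usr/bin/env python
# Hypothetical Jenkins post-build step: vote and submit the tested change
# through the Gerrit REST API once the heavy per-patch jobs have passed.
import os
import requests  # python-requests, assumed available on the slave

GERRIT = 'https://gerrit.ovirt.org'
change = os.environ['GERRIT_CHANGE_NUMBER']        # set by the Gerrit Trigger plugin
revision = os.environ['GERRIT_PATCHSET_REVISION']  # likewise
auth = (os.environ['GERRIT_USER'], os.environ['GERRIT_HTTP_PASSWORD'])

# Record the CI result on the tested revision (the label name is made up)...
requests.post(
    '%s/a/changes/%s/revisions/%s/review' % (GERRIT, change, revision),
    json={'labels': {'Continuous-Integration': 1}},
    auth=auth,
).raise_for_status()

# ...and "click" submit on the maintainer's behalf.
requests.post(
    '%s/a/changes/%s/submit' % (GERRIT, change),
    auth=auth,
).raise_for_status()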
--
Barak Korren
bkorren(a)redhat.com
RHEV-CI Team
Failures in OST (4.0/master) (was: error msg from Jenkins)
by Eyal Edri
Renaming title and adding devel.
On Sun, Nov 20, 2016 at 2:36 PM, Piotr Kliczewski <pkliczew(a)redhat.com>
wrote:
> The last failure seems to be storage related.
>
> @Nir please take a look.
>
> Here is engine side error:
>
> 2016-11-20 05:54:59,605 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
> (default task-5) [59fc0074] Exception: org.ovirt.engine.core.
> vdsbroker.irsbroker.IRSNoMasterDomainException: IRSGenericException:
> IRSErrorException: IRSNoMasterDomainException: Cannot find master domain:
> u'spUUID=1ca141f1-b64d-4a52-8861-05c7de2a72b2, msdUUID=7d4bf750-4fb8-463f-
> bbb0-92156c47306e'
>
> and here is vdsm:
>
> jsonrpc.Executor/5::ERROR::2016-11-20 05:54:56,331::multipath::95::
> Storage.Multipath::(resize_devices) Could not resize device
> 360014052749733c7b8248628637b990f
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/multipath.py", line 93, in resize_devices
> _resize_if_needed(guid)
> File "/usr/share/vdsm/storage/multipath.py", line 101, in
> _resize_if_needed
> for slave in devicemapper.getSlaves(name)]
> File "/usr/share/vdsm/storage/multipath.py", line 158, in getDeviceSize
> bs, phyBs = getDeviceBlockSizes(devName)
> File "/usr/share/vdsm/storage/multipath.py", line 150, in
> getDeviceBlockSizes
> "queue", "logical_block_size")).read())
> IOError: [Errno 2] No such file or directory:
> '/sys/block/sdb/queue/logical_block_size'
>
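For context: the IOError above means the sysfs node for the slave device
was already gone when multipath tried to read its block size, e.g. because
the device was being torn down under it. A hypothetical guard along these
lines, not the actual vdsm code, would turn that into a skipped device
instead of a failed resize pass:

import os

def get_logical_block_size(dev_name):
    path = os.path.join('/sys/block', dev_name, 'queue', 'logical_block_size')
    try:
        with open(path) as f:
            return int(f.read())
    except IOError:
        # The device vanished between enumeration and this read; let the
        # caller skip it instead of aborting resize_devices() entirely.
        return None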
We now see a different error in master [1], which also indicates the hosts
are in a problematic state (the 'assign_hosts_network_label' test fails):
status: 409
reason: Conflict
detail: Cannot add Label. Operation can be performed only when Host status
is Maintenance, Up, NonOperational.
-------------------- >> begin captured logging << --------------------
[1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3506/tes...
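If we want to fail earlier, the suite could assert host state up front,
before the network-label and VM tests run. A rough sketch, assuming the
suite's ovirtsdk (v3) api object is at hand; this is not the actual OST
code:

def assert_hosts_up(api, min_hosts=1):
    # Fail fast with a clear message instead of hitting 400/409 errors later.
    up = [h for h in api.hosts.list()
          if h.get_status().get_state() == 'up']
    assert len(up) >= min_hosts, (
        'expected at least %d host(s) in Up state, found %d' %
        (min_hosts, len(up)))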
>
>
> On Sun, Nov 20, 2016 at 12:50 PM, Eyal Edri <eedri(a)redhat.com> wrote:
>
>>
>>
>> On Sun, Nov 20, 2016 at 1:42 PM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>>
>>>
>>>
>>> On Sun, Nov 20, 2016 at 1:30 PM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Nov 20, 2016 at 1:18 PM, Eyal Edri <eedri(a)redhat.com> wrote:
>>>>
>>>>> The test fails to run a VM because no hosts are in UP state (?) [1];
>>>>> I'm not sure it is related to the triggering patch [2].
>>>>>
>>>>> status: 400
>>>>> reason: Bad Request
>>>>> detail: There are no hosts to use. Check that the cluster contains at
>>>>> least one host in Up state.
>>>>>
>>>>> Thoughts? Shouldn't we fail the test earlier if the hosts are not UP?
>>>>>
>>>>
>>>> Yes. It's more likely that we are picking the wrong host or so, but who
>>>> knows - where are the engine and VDSM logs?
>>>>
>>>
>>> A simple grep on the engine.log [1] finds several unrelated issues I'm
>>> not sure are reported; it's disheartening to even begin...
>>> That being said, I don't see the issue there. We may need better logging
>>> on the API level, to see what is being sent. Is it consistent?
>>>
>>
>> It just failed now for the first time; I didn't see it before.
>>
>>
>>> Y.
>>>
>>>
>>> [1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.
>>> 0/3015/artifact/exported-artifacts/basic_suite_4.0.sh-el7/ex
>>> ported-artifacts/test_logs/basic-suite-4.0/post-004_basic_
>>> sanity.py/lago-basic-suite-4-0-engine/_var_log_ovirt-engine/engine.log
>>>
>>>> Y.
>>>>
>>>>
>>>>>
>>>>>
>>>>> [1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimenta
>>>>> l_4.0/3015/testReport/junit/(root)/004_basic_sanity/vm_run/
>>>>> [2] http://jenkins.ovirt.org/job/ovirt-engine_4.0_build-arti
>>>>> facts-el7-x86_64/1535/changes#detail
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Nov 20, 2016 at 1:00 PM, <jenkins(a)jenkins.phx.ovirt.org>
>>>>> wrote:
>>>>>
>>>>>> Build: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.
>>>>>> 0/3015/,
>>>>>> Build Number: 3015,
>>>>>> Build Status: FAILURE
>>>>>> _______________________________________________
>>>>>> Infra mailing list
>>>>>> Infra(a)ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Eyal Edri
>>>>> Associate Manager
>>>>> RHV DevOps
>>>>> EMEA ENG Virtualization R&D
>>>>> Red Hat Israel
>>>>>
>>>>> phone: +972-9-7692018
>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHV DevOps
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
--
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
Re: [ovirt-devel] Where's MOM (on latest master)
by Michal Skrivanek
> On 18 Nov 2016, at 12:35, Martin Sivak <msivak(a)redhat.com> wrote:
>
>> I don't think it is related to version X or Y. It is a race, so might be
>> related to other factors.
>
> It never (seriously: NEVER) happened with xml-rpc before 4.0.5.
that is surprising
but we also didn’t have lago before;-)
>
>> likely because json-rpc is initialized after xml-rpc….or indeed whatever
>> else;-)
>
> But this is not about jsonrpc. The socket itself is shared according
> to what Piotr said.
it is
>
>> btw you likely still want to have a retry in mom once it
>> starts responding due to delayed vdsm async recovery taking potentially
>> minutes
>
> We handle this already. The only issue is the connection refused state.
then why don’t you handle the connection state as well? isn’t that a simple fix?
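to illustrate, something like the sketch below would treat "connection
refused" the same as "vdsm not ready yet" and keep retrying for a while.
this is a hypothetical sketch, not mom's actual connection code:

import errno
import socket
import time

def wait_for_vdsm(connect, retries=30, delay=2):
    # connect() stands for whatever mom uses to open its client connection.
    for _ in range(retries):
        try:
            return connect()
        except socket.error as e:
            if e.errno != errno.ECONNREFUSED:
                raise
            # vdsm is "up" as far as systemd is concerned, but its socket is
            # not accepting connections yet.
            time.sleep(delay)
    raise RuntimeError('vdsm did not start accepting connections in time')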
>
>
> Martin
>
>
> On Fri, Nov 18, 2016 at 12:19 PM, Michal Skrivanek
> <michal.skrivanek(a)redhat.com> wrote:
>>
>> On 18 Nov 2016, at 12:12, Oved Ourfali <oourfali(a)redhat.com> wrote:
>>
>> I don't think it is related to version X or Y. It is a race, so might be
>> related to other factors.
>>
>>
>> likely because json-rpc is initialized after xml-rpc….or indeed whatever
>> else;-)
>>
>> either way it needs to be solved. Either by improving the systemd service
>> file or mom retry (btw you likely still want to have a retry in mom once it
>> starts responding due to delayed vdsm async recovery taking potentially
>> minutes)
>>
>>
>> On Nov 18, 2016 12:59 PM, "Martin Sivak" <msivak(a)redhat.com> wrote:
>>>
>>>> Are we / can we use systemd socket activation there?
>>>
>>> That actually requires systemd-specific code IIRC (to take over the
>>> standby socket). I am actually wondering why the xml-rpc in 4.0.4
>>> was fine and json-rpc in 4.0.6 is too slow.
>>>
>>> Martin
>>>
>>> On Fri, Nov 18, 2016 at 11:53 AM, Anton Marchukov <amarchuk(a)redhat.com>
>>> wrote:
>>>> Hello All.
>>>>
>>>> Are we / can we use systemd socket activation there?
>>>>
>>>> Anton.
>>>>
>>>> On Fri, Nov 18, 2016 at 11:21 AM, Martin Sivak <msivak(a)redhat.com>
>>>> wrote:
>>>>>
>>>>> What about making vdsm ready to answer connections when it returns to
>>>>> systemd instead? I hate workarounds and this always worked fine.
>>>>>
>>>>> Martin
>>>>>
>>>>> On Fri, Nov 18, 2016 at 11:13 AM, Oved Ourfali <oourfali(a)redhat.com>
>>>>> wrote:
>>>>>> Seems like a race regardless of the protocol.
>>>>>> Should you add a retry?
>>>>>>
>>>>>>
>>>>>> On Nov 18, 2016 11:52 AM, "Martin Sivak" <msivak(a)redhat.com> wrote:
>>>>>>>
>>>>>>> Yes, because VDSM is supposed to be up (there is systemd
>>>>>>> dependency).
>>>>>>> This always worked fine with xml-rpc.
>>>>>>>
>>>>>>> Martin
>>>>>>>
>>>>>>> On Fri, Nov 18, 2016 at 10:14 AM, Nir Soffer <nsoffer(a)redhat.com>
>>>>>>> wrote:
>>>>>>>> On Fri, Nov 18, 2016 at 10:45 AM, Martin Sivak <msivak(a)redhat.com>
>>>>>>>> wrote:
>>>>>>>>> This happens because MOM can't connect to VDSM and so it quits.
>>>>>>>>
>>>>>>>> So mom tries to connect once, and if the connection fails it quits?
>>>>>>>>
>>>>>>>>> We
>>>>>>>>> discussed it on the mailing list
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> https://lists.fedoraproject.org/archives/list/vdsm-devel@lists.fedorahost...
>>>>>>>>> http://lists.ovirt.org/pipermail/devel/2016-November/014101.html
>>>>>>>>>
>>>>>>>>> This issue never happened with XML-RPC.
>>>>>>>>>
>>>>>>>>> Shira reported it as
>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1393012
>>>>>>>>>
>>>>>>>>> Martin
>>>>>>>>>
>>>>>>>>> On Thu, Nov 17, 2016 at 7:42 PM, Yaniv Kaul <ykaul(a)redhat.com>
>>>>>>>>> wrote:
>>>>>>>>>> I've recently seen, including now on Master, the following
>>>>>>>>>> warnings:
>>>>>>>>>> Nov 17 13:33:25 lago-basic-suite-master-host0 systemd[1]:
>>>>>>>>>> Started
>>>>>>>>>> MOM
>>>>>>>>>> instance configured for VDSM purposes.
>>>>>>>>>> Nov 17 13:33:25 lago-basic-suite-master-host0 systemd[1]:
>>>>>>>>>> Starting
>>>>>>>>>> MOM
>>>>>>>>>> instance configured for VDSM purposes...
>>>>>>>>>> Nov 17 13:33:35 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available, Policy could not be set.
>>>>>>>>>> Nov 17 13:33:39 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available.
>>>>>>>>>> Nov 17 13:33:39 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available, KSM stats will be missing.
>>>>>>>>>> Nov 17 13:33:55 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available.
>>>>>>>>>> Nov 17 13:33:55 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available, KSM stats will be missing.
>>>>>>>>>> Nov 17 13:34:10 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available.
>>>>>>>>>> Nov 17 13:34:10 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available, KSM stats will be missing.
>>>>>>>>>> Nov 17 13:34:26 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available.
>>>>>>>>>> Nov 17 13:34:26 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available, KSM stats will be missing.
>>>>>>>>>> Nov 17 13:34:42 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available.
>>>>>>>>>> Nov 17 13:34:42 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available, KSM stats will be missing.
>>>>>>>>>> Nov 17 13:34:57 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available.
>>>>>>>>>> Nov 17 13:34:57 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available, KSM stats will be missing.
>>>>>>>>>> Nov 17 13:35:12 lago-basic-suite-master-host0 vdsm[2012]: vdsm
>>>>>>>>>> MOM
>>>>>>>>>> WARN MOM
>>>>>>>>>> not available.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Any ideas what this is and why?
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Devel mailing list
>>>>>>>>>> Devel(a)ovirt.org
>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>>>>> _______________________________________________
>>>>>>>>> Devel mailing list
>>>>>>>>> Devel(a)ovirt.org
>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>>> _______________________________________________
>>>>>>> Devel mailing list
>>>>>>> Devel(a)ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>>>
>>>>>>>
>>>>>>
>>>>> _______________________________________________
>>>>> Devel mailing list
>>>>> Devel(a)ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Anton Marchukov
>>>> Senior Software Engineer - RHEV CI - Red Hat
>>>>
>>
>> _______________________________________________
>> Devel mailing list
>> Devel(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
Where's MOM (on latest master)
by Yaniv Kaul
I've recently seen, including now on Master, the following warnings:
Nov 17 13:33:25 lago-basic-suite-master-host0 systemd[1]: Started MOM
instance configured for VDSM purposes.
Nov 17 13:33:25 lago-basic-suite-master-host0 systemd[1]: Starting MOM
instance configured for VDSM purposes...
Nov 17 13:33:35 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, Policy could not be set.
Nov 17 13:33:39 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:33:39 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:33:55 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:33:55 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:34:10 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:34:10 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:34:26 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:34:26 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:34:42 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:34:42 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:34:57 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:34:57 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:35:12 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Any ideas what this is and why?
Introduce new package dependencies in vdsm
by Yaniv Bronheim
Hi
After merging
https://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic...
we now need, for any new requirement, to add a line in each of:
check-patch.packages.el7
check-patch.packages.fc24
check-merged.packages.el7
check-merged.packages.fc24
vdsm.spec.in
Dockerfile.centos
Dockerfile.fedora
It seems like we could add it once to the spec, with sections for Fedora
and CentOS, and have the rest of the places use yum-builddep. That sounds
more reasonable to me, and is probably the right way to handle RPM
dependencies. No?
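For example, a vdsm.spec.in fragment along these lines (the package names
here are just placeholders, not the real build requirements) would let the
check-* scripts and the Dockerfiles install everything with a plain
"yum-builddep vdsm.spec" instead of each keeping its own list:

# Hypothetical fragment, not the current vdsm.spec.in content.
BuildRequires: make
%if 0%{?fedora}
BuildRequires: python2-nose
%endif
%if 0%{?rhel}
BuildRequires: python-nose
%endif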
*Yaniv Bronhaim.*
Vdsm build failure
by Piotr Kliczewski
All,
I see a build failure [1] due to:
13:34:32 FAIL: test_from_invalid_to_valid_domain('selftest', <type
'exceptions.OSError'>)
(storage_monitor_test.TestMonitorThreadMonitoring)
13:34:32 ----------------------------------------------------------------------
13:34:32 Traceback (most recent call last):
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/testlib.py",
line 133, in wrapper
13:34:32 return f(self, *args)
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/storage_monitor_test.py",
line 470, in test_from_invalid_to_valid_domain
13:34:32 self.assertTrue(status.valid)
13:34:32 AssertionError: False is not true
13:34:32 -------------------- >> begin captured logging << --------------------
13:34:32 2016-11-14 13:33:04,968 DEBUG (monitor/uuid)
[storage.Monitor] Domain monitor for uuid started (monitor:304)
13:34:32 2016-11-14 13:33:04,968 DEBUG (monitor/uuid)
[storage.Monitor] Producing domain uuid (monitor:366)
13:34:32 2016-11-14 13:33:04,969 INFO (monitor/uuid) [test] Start
checking '/path/to/metadata' (storage_monitor_test:73)
13:34:32 2016-11-14 13:33:04,969 DEBUG (monitor/uuid) [test] Checking
if iso domain (storage_monitor_test:156)
13:34:32 2016-11-14 13:33:04,969 ERROR (monitor/uuid)
[storage.Monitor] Error checking domain uuid (monitor:426)
13:34:32 Traceback (most recent call last):
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/vdsm/storage/monitor.py",
line 407, in _checkDomainStatus
13:34:32 self.domain.selftest()
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/testlib.py",
line 616, in wrapper
13:34:32 raise exception
13:34:32 OSError
13:34:32 2016-11-14 13:33:04,969 INFO (monitor/uuid)
[storage.Monitor] Domain uuid became INVALID (monitor:456)
13:34:32 2016-11-14 13:33:04,969 DEBUG (monitor/uuid) [test] Emitting
event (args=('uuid', False), kwrags={}) (storage_monitor_test:57)
13:34:32 2016-11-14 13:33:05,170 ERROR (monitor/uuid)
[storage.Monitor] Error checking domain uuid (monitor:426)
13:34:32 Traceback (most recent call last):
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/vdsm/storage/monitor.py",
line 407, in _checkDomainStatus
13:34:32 self.domain.selftest()
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/testlib.py",
line 616, in wrapper
13:34:32 raise exception
13:34:32 OSError
13:34:32 2016-11-14 13:33:05,371 ERROR (monitor/uuid)
[storage.Monitor] Error checking domain uuid (monitor:426)
13:34:32 Traceback (most recent call last):
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/vdsm/storage/monitor.py",
line 407, in _checkDomainStatus
13:34:32 self.domain.selftest()
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/testlib.py",
line 616, in wrapper
13:34:32 raise exception
13:34:32 OSError
13:34:32 2016-11-14 13:33:05,461 DEBUG (monitor/uuid)
[storage.Monitor] Domain monitor for uuid canceled (monitor:309)
13:34:32 2016-11-14 13:33:05,461 DEBUG (monitor/uuid)
[storage.Monitor] Domain monitor for uuid stopped (shutdown=False)
(monitor:312)
13:34:32 2016-11-14 13:33:05,461 INFO (monitor/uuid) [test] Stop
checking '/path/to/metadata' (storage_monitor_test:79)
13:34:32 2016-11-14 13:33:05,461 DEBUG (monitor/uuid) [test] Releasing
host id (hostId=host_id, unused=True) (storage_monitor_test:143)
13:34:32 2016-11-14 13:33:05,461 ERROR (monitor/uuid)
[storage.Monitor] Error releasing host id host_id for domain uuid
(monitor:568)
13:34:32 Traceback (most recent call last):
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/vdsm/storage/monitor.py",
line 565, in _releaseHostId
13:34:32 self.domain.releaseHostId(self.hostId, unused=True)
13:34:32 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/storage_monitor_test.py",
line 146, in releaseHostId
13:34:32 assert self.acquired, "Attempt to release unacquired host id"
13:34:32 AssertionError: Attempt to release unacquired host id
Can someone please take a look?
Thanks,
Piotr
[1] http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc24-x86_64/4123/con...