vdsm Check-patch failed - ERROR: Error fetching remote repo 'origin'
by Eyal Shenitzky
Hey,
I saw the following error on the vdsm check-patch:
09:47:31 ERROR: Error fetching remote repo 'origin'
09:47:31 hudson.plugins.git.GitException: Failed to fetch from https://gerrit.ovirt.org/vdsm
09:47:31     at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:904)
09:47:31     at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1144)
09:47:31     at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1175)
09:47:31     at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:120)
09:47:31     at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:90)
09:47:31     at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:77)
09:47:31     at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
09:47:31     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
09:47:31     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
09:47:31     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
09:47:31     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
09:47:31     at java.lang.Thread.run(Thread.java:748)
09:47:31 Caused by: hudson.plugins.git.GitException: Command "git clean -fdx" returned status code 1:
09:47:31 stdout:
09:47:31 stderr: warning: failed to remove .tox/lib-py27/lib/python2.7/site-packages/pytest_cov/__pycache__/plugin.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove .tox/lib-py27/lib/python2.7/site-packages/pytest_cov/__pycache__/compat.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove .tox/lib-py27/lib/python2.7/site-packages/pytest_cov/__pycache__/engine.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove .tox/lib-py27/lib/python2.7/site-packages/pytest_cov/__pycache__/embed.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove tests/__pycache__/pywatch_test.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove tests/__pycache__/prlimit_test.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove tests/__pycache__/ssl_test.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove tests/__pycache__/hooking_test.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove tests/common/__pycache__/systemd_test.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove tests/common/__pycache__/systemctl_test.cpython-27-PYTEST.pyc: Permission denied
09:47:31 warning: failed to remove tests/common/__pycache__/commands_test.cpython-27-PYTEST.pyc: Permission denied
Has anyone else encountered this?
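The "git clean -fdx" failure means the workspace contains files the Jenkins user cannot delete, most likely .pyc files left behind by the tox/pytest run with restrictive permissions. A minimal sketch of a pre-clean step, assuming the leftovers are merely write-protected and not owned by another user (if they are root-owned, the slave would need a privileged cleanup instead):

    import os
    import stat
    import subprocess

    # Restore owner write/exec bits on everything in the workspace so
    # that "git clean -fdx" can descend into and remove it.
    for root, dirs, files in os.walk('.'):
        for name in dirs + files:
            path = os.path.join(root, name)
            try:
                mode = os.stat(path).st_mode
                os.chmod(path, mode | stat.S_IWUSR | stat.S_IXUSR)
            except OSError:
                pass  # leave anything we cannot touch for git clean to report

    subprocess.check_call(['git', 'clean', '-fdx'])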
--
Regards,
Eyal Shenitzky
Can't discover LUNs on iSCSI target when host is running Fedora 28
by Fedor Gavrilov
Hi,
Recently I hit an issue where I can't add an iSCSI storage domain because the engine can't see any LUNs there. I wonder if I missed some configuration step.
There is a Fedora 29 machine running an iSCSI target, which has a single LUN and the host (banana) registered in its ACL:
[cucumber@cucumber ~]$ sudo targetcli
[sudo] password for cucumber:
targetcli shell version 2.1.fb48
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/iscsi/iqn.20...e:target/tpg1> ls
o- tpg1 ..................................................................................................... [no-gen-acls, no-auth]
o- acls ................................................................................................................ [ACLs: 1]
| o- iqn.1994-05.com.redhat:4bf5ff1a15b6 ........................................................................ [Mapped LUNs: 1]
| o- mapped_lun0 ................................................................................. [lun0 fileio/filestore0 (rw)]
o- luns ................................................................................................................ [LUNs: 1]
| o- lun0 ................................................................... [fileio/filestore0 (/etc/fileio) (default_tg_pt_gp)]
o- portals .......................................................................................................... [Portals: 1]
o- 0.0.0.0:3260 ........................................................................................................... [OK]
From the Fedora 28 host I can log in and out, but iscsiadm segfaults when I try to get the LUN info:
[banana@banana ~]$ sudo iscsiadm --mode discovery --type sendtargets --portal cucumber
192.168.111.4:3260,1 iqn.2016-01.com.example:target
[banana@banana ~]$ sudo iscsiadm -m node -T iqn.2016-01.com.example:target -l
Logging in to [iface: default, target: iqn.2016-01.com.example:target, portal: 192.168.111.4,3260] (multiple)
Login to [iface: default, target: iqn.2016-01.com.example:target, portal: 192.168.111.4,3260] successful.
[banana@banana ~]$ sudo iscsiadm -m session -P 3 -r 2
iscsiadm: could not read session targetname: 5
iscsiadm: Could not get session info for sid 2
[banana@banana ~]$ sudo iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.874
Segmentation fault
It seems I am hitting this: https://bugzilla.redhat.com/show_bug.cgi?id=1462767
Could this be the reason there are no LUNs in the engine web UI after I discover the iSCSI target? Is anyone aware of a workaround, or am I doing something wrong?
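Until the iscsiadm crash is fixed, the session details it would have printed can be read straight from sysfs. A minimal sketch, assuming the standard /sys/class/iscsi_session layout used by open-iscsi:

    import glob
    import os

    # Print the target name of each active iSCSI session directly from
    # sysfs, bypassing the segfaulting "iscsiadm -m session -P 3" path.
    for session in sorted(glob.glob('/sys/class/iscsi_session/session*')):
        with open(os.path.join(session, 'targetname')) as f:
            print('%s -> %s' % (os.path.basename(session), f.read().strip()))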
Thanks,
Fedor Gavrilov
Re: [Vdo-devel] impact of --emulate512 setting for VDO volumes
by Guillaume Pavese
Thank you very much for your answer. I am copying ovirt-devel too as that
could be of interest to them.
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
On Sat, Mar 2, 2019 at 7:00 AM Michael Sclafani <sclafani(a)redhat.com> wrote:
> Hi!
>
> 512 emulation was intended to support drivers that only do a fraction of
> their I/O in blocks smaller than 4KB. It is not optimized for performance in any
> way. Under the covers, VDO is still operating on 4KB physical blocks, so
> each 512-byte read is potentially amplified to a 4KB read, and each
> 512-byte write to a 4KB read followed by a 4KB write. A workload consisting
> exclusively of 512-byte randomly-distributed writes could effectively be
> amplified by a factor of 16.
>
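To make the factor of 16 concrete, here is the worst-case arithmetic as a tiny sketch (the numbers follow directly from the read-modify-write behaviour described above):

    # Worst case for one 512-byte random write on a 4 KiB-block VDO volume:
    # the containing 4 KiB block is read, modified in memory, written back.
    logical = 512               # bytes the guest actually wrote
    physical = 4096 + 4096      # one 4 KiB read plus one 4 KiB write
    print(physical // logical)  # -> 16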
> We have a suite of automated tests we run in 512e mode on a nightly basis.
> That suite is a subset of our regular tests, containing only ones we expect
> would be likely to expose problems specific to the emulation.
>
> There should be no penalty to having emulation enabled on a volume that no
> longer uses it. If the I/O is 4KB-aligned and 4KB or larger, having it
> enabled won't affect it.
> It does not appear the setting can be modified by the VDO manager, but I
> cannot remember at this moment why that should be so.
>
> Hope this helps.
>
> On Fri, Mar 1, 2019 at 2:24 PM Guillaume Pavese <
> guillaume.pavese(a)interactiv-group.com> wrote:
>
>> Hello,
>>
>> We are planning to deploy VDO with oVirt 4.3 on centos 7.6 (on SSD
>> devices).
>> As oVirt does not support 4K devices yet, VDO volumes are created with
>> the parameter "--emulate512 enabled"
>>
>> What are the implications of this setting? Does it impact performance? If
>> so, is it IOPS or throughput that is impacted? What about reliability (is
>> that mode as thoroughly tested as the standard mode)?
>>
>> As I saw on RH Bugzilla, support for 4K devices in oVirt will need to
>> wait at least for Centos 7.7
>> Once that is supported, would it be possible to transition/upgrade an
>> emulate512 vdo volume to a standard one?
>>
>> Thanks,
>>
>> Guillaume Pavese
>> Systems and Network Engineer
>> Interactiv-Group
>> _______________________________________________
>> Vdo-devel mailing list
>> Vdo-devel(a)redhat.com
>> https://www.redhat.com/mailman/listinfo/vdo-devel
>>
>
[RHV] [CQ weekly status] [01-03-2019]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.
*CQ-RHV-4.2:* GREEN (#1)
Last failure was on 28-02-2019 for project rhevm-appliance due to a failed
build-artifacts job.
This issue is still happening and a message was sent to the developer.
*CQ-RHV-Master:* GREEN (#1)
Last failure is identical to 4.2 - build-artifacts for the same project.
** We do not yet have a 4.3 CQ job; we will start working on one next week.
Current running jobs for 4.2 [1] and for master [2] can be found below:
[1]
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/Change%20...
[2]
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/Change%20...
Happy week!
Dafna
------------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures are consistent with a healthy project, as we expect a
number of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
[Ovirt] [CQ weekly status] [01-03-2019]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.
*CQ-4.2*: GREEN (#1)
Last CQ job failure on 4.2 was 27-02-2019 on project ovirt-provider-ovn due
to a failed build-artifacts job. This issue is now fixed.
Please note that this is a CQ job failure but not an OST job failure as we
exit if no package is available.
*CQ-4.3*: RED (#1)
We added a CQ job for 4.3, but not all projects have created the branch yet.
I created a Google sheet to track the progress of this work, and I will
use this report to ask all the managers to go over the doc and add info:
https://docs.google.com/spreadsheets/d/1i142cTN_0QwOvogtlQiF6bcIYxTpBqYBe...
*CQ-Master:* RED (#1)
We have sporadic failures in the test get_host_devices.
This is affecting all projects and we do not yet know what is causing the issue.
Michal checked the issue and thinks the test should be modified to retry, as
the info is filled in by the host about 4 seconds after the query is sent.
Milan was asked to add a loop with a 1s delay to get_host_devices to fix the
test; a sketch of that approach follows.
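A minimal sketch of the suggested retry, assuming the usual polling style of the OST scenarios (the helper and the names in the commented usage are illustrative, not the actual patch):

    import time

    def retry_until(condition, attempts=10, delay=1):
        # Poll once per second, as suggested, giving the host time to
        # fill in its device list (~4 seconds after the query is sent).
        for _ in range(attempts):
            result = condition()
            if result:
                return result
            time.sleep(delay)
        raise RuntimeError('condition not met after %d attempts' % attempts)

    # Illustrative usage inside get_host_devices (hypothetical names):
    # device_list = retry_until(
    #     lambda: [d for d in host_service.devices_service().list()
    #              if d.name == 'block_vda_1'])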
Please note that we have ruled out a possible infra cause for this failure
before alerting dev as the failure is sporadic and happens on multiple
projects.
Jira: https://ovirt-jira.atlassian.net/browse/OVIRT-2686
Current running jobs for 4.2 [1] and master [2] can be found here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures are consistent with a healthy project, as we expect a
number of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
[ OST Failure Report ] [ oVirt Master ] [ 27-02-2019 ] [ 002_bootstrap.get_host_devices]
by Galit Rosenthal
Hi,
We are failing the basic suite on master (vdsm, ovirt-engine-sdk, ...).
It happens sporadically.
We have ruled out an infra issue.
Can you please have a look at the issue?
ERROR:
<testcase classname="002_bootstrap" name="get_host_devices" time="0.077">
<error type="exceptions.RuntimeError" message="Could not find
block_vda_1 device in host devices: "><![CDATA[Traceback (most recent
call last):
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
142, in wrapped_test
test()
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
60, in wrapper
return func(get_test_prefix(), *args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
79, in wrapper
prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
File "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 1016, in get_host_devices
raise RuntimeError('Could not find block_vda_1 device in host
devices: {}'.format(device_list))
RuntimeError: Could not find block_vda_1 device in host devices:
]]></error>
Thanks,
Galit
Examples, Full logs:
[1] https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/ovirt...
[2]
https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/ovirt...
--
GALIT ROSENTHAL
SOFTWARE ENGINEER
Red Hat
<https://www.redhat.com/>
galit(a)gmail.com T: 972-9-7692230
<https://red.ht/sig>
unboundid-ldapsdk-4.0.9 is now available for testing
by Sandro Bonazzola
On Fedora we previously had 4.0.7. The new release, 4.0.9, will soon be
available in the repositories for Fedora >= 30 and includes several bug
fixes and enhancements.
You can see release notes for 4.0.8 and 4.0.9 here:
https://github.com/pingidentity/ldapsdk/releases
In oVirt this package is required by ovirt-engine-extension-aaa-ldap.
If you're going to test the package, please provide feedback here.
Martin, Ravi, let me know if you see value in backporting this version to
Fedora 28 and EL7 for oVirt 4.3 consumption.
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>