oci: command line tool for oVirt CI
by Nir Soffer
I want to share a nice little tool that can make your life easier.
https://github.com/nirs/oci
With this PR:
https://github.com/nirs/oci/pull/1
you will be able to do this:
$ python system-tests.py 98535
2019-03-15 21:17:04,579 INFO [system-tests] [ 1/8 ] Getting build info for change 98535
2019-03-15 21:17:05,295 INFO [system-tests] [ 2/8 ] Starting build-artifacts job for {'url': u'git://gerrit.ovirt.org/vdsm', 'ref': u'refs/changes/35/98535/3'}
2019-03-15 21:17:06,134 INFO [system-tests] [ 3/8 ] Waiting for queue item https://jenkins.ovirt.org/queue/item/382806/
2019-03-15 21:17:07,960 INFO [system-tests] [ 4/8 ] Waiting for job http://jenkins.ovirt.org/job/standard-manual-runner/89/
2019-03-15 21:26:21,381 INFO [system-tests] [ 5/8 ] Starting oVirt system tests basic suite with custom repos http://jenkins.ovirt.org/job/standard-manual-runner/89/
2019-03-15 21:26:22,242 INFO [system-tests] [ 6/8 ] Waiting for queue item https://jenkins.ovirt.org/queue/item/382817/
2019-03-15 21:30:43,558 INFO [system-tests] [ 7/8 ] Waiting for job http://jenkins.ovirt.org/job/ovirt-system-tests_manual/4330/
2019-03-15 22:10:52,891 INFO [system-tests] [ 8/8 ] System tests completed with SUCCESS
Yes, one command from your favorite shell. It takes about 53 minutes, but
on Fedora you get
notifications when commands finish.
No need to add a "ci please build" comment, wait until the job is
complete, copy the job URL, open the OST page,
wait (Jenkins is slow), paste the build-artifacts URL, start the job, wait,
wait more, check if OST is done (not yet),
wait more..., OST finished, check the logs, did you paste the wrong URL?
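Under the hood the tool is mostly a loop of "start a Jenkins job, then poll
until it finishes". Here is a minimal sketch (not the actual oci code) of how
such a step can be written against the standard Jenkins JSON API, assuming
the queue item and build URLs shown in the log above; the function names are
made up for illustration:

import time
import requests

def wait_for_queue_item(queue_url):
    # Poll a Jenkins queue item (e.g. https://jenkins.ovirt.org/queue/item/382806/)
    # until Jenkins assigns it a build, then return the build URL.
    while True:
        info = requests.get(queue_url + "api/json").json()
        executable = info.get("executable")
        if executable:
            return executable["url"]
        time.sleep(10)

def wait_for_build(build_url):
    # Poll a Jenkins build (e.g. .../job/standard-manual-runner/89/) until it
    # finishes, then return its result ("SUCCESS", "FAILURE", ...).
    while True:
        info = requests.get(build_url + "api/json").json()
        if not info.get("building"):
            return info["result"]
        time.sleep(30)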
The tool is not finished yet. If you'd like to contribute to this fun little
project, please check the issues:
https://github.com/nirs/oci/issues
Most of the issues are easy; some are harder, like:
- https://github.com/nirs/oci/issues/11
- https://github.com/nirs/oci/issues/12
Feel free to open new issues if you have an idea for this tool.
Nir
Dependencies issue on manual OST
by Eyal Shenitzky
Hi,
I encountered this dependency issue while trying to run OST manually:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4343/
09:49:35 @ Deploy oVirt environment:
09:49:35   # Deploy environment:
09:49:35     * [Thread-2] Deploy VM lago-basic-suite-master-host-0:
09:49:35     * [Thread-3] Deploy VM lago-basic-suite-master-host-1:
09:49:35     * [Thread-4] Deploy VM lago-basic-suite-master-engine:
09:49:54     * [Thread-3] Deploy VM lago-basic-suite-master-host-1: Success (in 0:00:18)
09:49:56 STDERR
09:49:56 + yum -y install ovirt-host
09:49:56 Error: Package: ovirt-host-dependencies-4.4.0-0.0.master.20190308162916.git4c2aa13.el7.x86_64 (alocalsync)
09:49:56            Requires: vdsm-client >= 4.40.0
09:49:56            Available: vdsm-client-4.30.11-10.gitf23ba52.el7.noarch (alocalsync)
09:49:56                vdsm-client = 4.30.11-10.gitf23ba52.el7
09:49:56 Error: Package: ovirt-host-dependencies-4.4.0-0.0.master.20190308162916.git4c2aa13.el7.x86_64 (alocalsync)
09:49:56            Requires: vdsm >= 4.40.0
09:49:56            Installing: vdsm-4.30.11-10.gitf23ba52.el7.x86_64 (alocalsync)
09:49:56                vdsm = 4.30.11-10.gitf23ba52.el7
09:49:56 Error: Package: ovirt-host-dependencies-4.4.0-0.0.master.20190308162916.git4c2aa13.el7.x86_64 (alocalsync)
09:49:56            Requires: vdsm-client >= 4.40.0
09:49:56            Installing: vdsm-client-4.30.11-10.gitf23ba52.el7.noarch (alocalsync)
09:49:56                vdsm-client = 4.30.11-10.gitf23ba52.el7
Has anyone else encountered it?
Thanks
--
Regards,
Eyal Shenitzky
OST hyperconverged master fails
by Greg Sheremeta
Hi,
I'm trying to run OST hyperconverged master locally, but it fails because an
Ansible inventory file is missing. Is this a known issue?
+ lago copy-to-vm lago-hc-basic-suite-master-host-0
/home/greg/projects/ovirt-system-tests/hc-basic-suite-master/generate-hc-answerfile.sh
/root/generate-hc-answerfile.sh
@ Copy
/home/greg/projects/ovirt-system-tests/hc-basic-suite-master/generate-hc-answerfile.sh
to lago-hc-basic-suite-master-host-0:/root/generate-hc-answerfile.sh:
@ Copy
/home/greg/projects/ovirt-system-tests/hc-basic-suite-master/generate-hc-answerfile.sh
to lago-hc-basic-suite-master-host-0:/root/generate-hc-answerfile.sh:
Success (in 0:00:00)
+ lago shell lago-hc-basic-suite-master-host-0 /root/exec_playbook.sh
lago-hc-basic-suite-master-host-0 lago-hc-basic-suite-master-host-1
lago-hc-basic-suite-master-host-2
/root/exec_playbook.sh: line 9: /usr/share/doc/gluster.ansible/playbooks/hc-ansible-deployment/ohc_gluster_inventory.yml: No such file or directory
+ RET_CODE=1
+ '[' 1 -ne 0 ']'
+ echo 'ansible setup on lago-hc-basic-suite-master-host-0 failed with
status 1.'
ansible setup on lago-hc-basic-suite-master-host-0 failed with status 1.
+ exit 1
+ on_exit
+ [[ 1 -ne 0 ]]
+ logger.error 'on_exit: Exiting with a non-zero status'
+ logger.log ERROR 'on_exit: Exiting with a non-zero status'
+ set +x
2019-03-15 06:16:36.021775858-0400 run_suite.sh::on_exit::ERROR:: on_exit:
Exiting with a non-zero status
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
<https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
<https://red.ht/sig>
[Ovirt] [CQ weekly status] [15-03-2019]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
the status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-4.2*: GREEN (#1)
The last CQ job failure on 4.2 was on 14-03-2019 in the v2v-conversion-host
project, due to missing build artifacts from a failed build; this was fixed
the same day.
*CQ-4.3*: RED (#1)
The last job failed today. It had 13 changes and CQ is still bisecting the
result.
*CQ-Master:* RED (#1)
The last failure was today, due to a build-artifacts failure; a message was
sent to the patch owner.
Current running jobs for 4.2 [1] and master [2] can be found here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or current regressions
** intermittent failures are expected in a healthy project over the course of the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the project(s) has been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
Re: [URGENT] stop merging to vdsm master
by Sandro Bonazzola
Adding missing devel list.
Il giorno mar 12 mar 2019 alle ore 10:24 Dafna Ron <dron(a)redhat.com> ha
scritto:
> Hi,
>
> We have been continuously failing CQ on the vdsm project since 07-03-2019.
>
> The latest failure is still in the VM migration test and the root cause is
> still pointing to:
>
> https://gerrit.ovirt.org/#/c/97381/22 - virt: In FIPS mode, use SASL auth
> instead of qemu passwords
>
> Please stop merging until the issue is resolved, as we risk further
> regressions being merged.
>
> Thanks,
> Dafna
>
>
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
[hyperv] hvinfo: command line tool to get the HyperV enlightenments status from within the guest
by Francesco Romani
Hi all,
lately I've been involved again in hyperv support. I was tasked with helping
improve the testability of the hyperv guest configuration.
Hyperv support is a set of optimizations that libvirt/qemu offer to
improve the runtime behaviour (stability, performance) of Windows guests.
See for example: https://libvirt.org/formatdomain.html#elementsFeatures
oVirt does support them (see for example
https://github.com/oVirt/vdsm/blob/master/lib/vdsm/virt/libvirtxml.py#L243)
and will keep supporting them
because they are a key feature for Windows guests.
Up until now the easiest (only?) way to check whether a hyperv optimization
was really enabled was to inspect the libvirt XML and/or the QEMU
command line flags.
But we wanted to have another check.
Enter hvinfo: https://github.com/fromanirh/hvinfo
hvinfo is a simple tool that decodes CPUID information according to the
publicly-available HyperV specs
(https://github.com/MicrosoftDocs/Virtualization-Documentation/tree/live/tlfs)
and reports what the guest sees.
It takes no arguments (just run it!), requires no special privileges,
and emits easy-to-consume JSON. It is designed to integrate into fully
automated CI/QA.
Since it is a command line tool, it is hard to give a "screenshot", so let me
just show sample output (admin privileges are not actually required):
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
PS C:\Users\admin> cd .\Downloads\
PS C:\Users\admin\Downloads> .\hvinfo.exe
{
"HyperVsupport": true,
"Features": {
"GuestDebugging": false,
"PerformanceMonitor": false,
"PCPUDynamicPartitioningEvents": true,
"HypercallInputParamsXMM": false,
"VirtualGuestIdleState": false,
"HypervisorSleepState": false,
"NUMADistanceQuery": false,
"TimerFrequenciesQuery": false,
"SytheticMCEInjection": false,
"GuestCrashMSR": false,
"DebugMSR": false,
"NPIEP": false,
"DisableHypervisorAvailable": false,
"ExtendedGvaRangesForFlushVirtualAddressList": false,
"HypercallOutputReturnXMM": false,
"SintPollingMode": false,
"HypercallMsrLock": false,
"UseDirectSyntheticTimers": false
},
"Recommendations": {
"HypercallAddressSpaceSwitch": false,
"HypercallLocalTLBFlush": false,
"HypercallRemoteTLBFlush": false,
"MSRAPICRegisters": true,
"MSRSystemReset": false,
"RelaxedTiming": true,
"DMARemapping": false,
"InterruptRemapping": false,
"X2APICMSR": false,
"DeprecatingAutoEOI": false,
"SyntheticClusterIPI": false,
"ExProcessorMasks": false,
"Nested": false,
"INTForMBECSyscalls": false,
"NestedEVMCS": false,
"SyncedTimeline": false,
"DirectLocalFlushEntire": false,
"NoNonArchitecturalCoreSharing": false,
"SpinlockRetries": 8191
}
}
PS C:\Users\admin\Downloads>
Caveat: the names of the features are the same as in the spec, so we need
mappings for the oVirt flags, libvirt flags and so on.
For example
libvirt xml
domain.features.hyperv.relaxed[state="on"]
maps to the hvinfo JSON field
Recommendations.RelaxedTiming=true
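As an example of how this could be consumed in automated tests, here is a
minimal sketch (not part of hvinfo itself) that runs the tool inside the
guest, parses its JSON output, and checks that an expected libvirt flag is
enabled. The mapping table and function name are made up for illustration,
with only the "relaxed" entry taken from the example above, and hvinfo is
assumed to be in PATH:

import json
import subprocess

# Hypothetical mapping from libvirt hyperv flag names to hvinfo JSON paths.
# Only "relaxed" is taken from the example above; the rest would have to be
# filled in from the spec.
FLAG_MAP = {
    "relaxed": ("Recommendations", "RelaxedTiming"),
}

def check_hyperv_flags(expected):
    # Run hvinfo (assumed to be in PATH) and compare its output with the
    # expected flags, e.g. {"relaxed": True}. Returns the flags that do not
    # match, so an empty dict means success.
    info = json.loads(subprocess.check_output(["hvinfo"]))
    failures = {}
    for flag, value in expected.items():
        section, key = FLAG_MAP[flag]
        if info[section][key] != value:
            failures[flag] = info[section][key]
    return failures

# Example: assert that relaxed timing is enabled in the guest.
assert not check_hyperv_flags({"relaxed": True})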
Feel free to test it and report any issue
Bests,
--
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
[ OST Failure Report ] [ oVirt 4.3 (vdsm-jsonrpc-java) ] [ 10-3-19 ] [ Upgrade Suite ]
by Liora Milbaum
Link and headline of suspected patches:
Link to Job: http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/153/
Link to all logs:
(Relevant) error snippet from the log:
<error>
  # 001_upgrade_engine.test_initialize_engine:
    * Copy /home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/upgrade-from-prevrelease-suite-4.3/upgrade-engine-answer-file.conf to lago-upgrade-from-prevrelease-suite-4-3-engine:/tmp/answer-file-post:
    * Copy /home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/upgrade-from-prevrelease-suite-4.3/upgrade-engine-answer-file.conf to lago-upgrade-from-prevrelease-suite-4-3-engine:/tmp/answer-file-post: Success (in 0:00:00)
    * Collect artifacts:
      ~ [Thread-8] Copy from lago-upgrade-from-prevrelease-suite-4-3-engine:/tmp/otopi* to /dev/shm/ost/deployment-upgrade-from-prevrelease-suite-4.3/default/test_logs/001_upgrade_engine.test_initialize_engine-20190307160642/lago-upgrade-from-prevrelease-suite-4-3-engine/_tmp_otopi*: ERROR (in 0:00:00)
      - [Thread-8] lago-upgrade-from-prevrelease-suite-4-3-engine: ERROR (in 0:00:00)
    * Collect artifacts: ERROR (in 0:00:01)
  # 001_upgrade_engine.test_initialize_engine: ERROR (in 0:00:49)
  # Results located at /dev/shm/ost/deployment-upgrade-from-prevrelease-suite-4.3/001_upgrade_engine.py.junit.xml
@ Run test: 001_upgrade_engine.py: ERROR (in 0:00:49)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 383, in do_run
    self.cli_plugins[args.ovirtverb].do_run(args)
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
    self._do_run(**vars(args))
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 549, in wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 560, in wrapper
    return func(*args, prefix=prefix, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 107, in do_ovirt_runtest
    raise RuntimeError('Some tests failed')
RuntimeError: Some tests failed
</error>
--
*Liora Milbaum*
Senior Principal Software Engineer
RHV/CNV DevOps
EMEA VIRTUALIZATION R&D
T: +972-54-6560051
<https://red.ht/sig>