[Ovirt] [CQ weekly status] [19-07-2019]
by Dusan Fodor
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.
*CQ-4.2*: GREEN (#1)
Last failure was on 02-07 for ovirt-provider-ovn due to a failed test:
098_ovirt_provider_ovn.use_ovn_provide. This was a code regression, which
has since been fixed.
*CQ-4.3*: GREEN (#1)
Last failure was on 12-07 for ovirt-provider-ovn due to a failed
build-artifacts job, caused by a mirror issue.
*CQ-Master:* GREEN (#1)
Last failure was on 16-07 for ovirt-engine due to a failure in
build-artifacts, caused by a Gerrit issue that is under investigation by
Evgheni.
Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change...
[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dusan
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures are expected in a healthy project, as we
anticipate some failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour changes based on how long the project(s)
have been broken. Only active regressions are reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
_______________________________________________
Devel mailing list -- devel(a)ovirt.org
To unsubscribe send an email to devel-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YCNCKRK3G4E...
Doubt on Backing Up and Restoring Virtual Machines Using the Backup and Restore API:-
by smidhunraj@gmail.com
In this link https://ovirt.org/documentation/admin-guide/chap-Backups_and_Migration.ht...
point 4 says:
4. Attach the snapshot to the backup virtual machine and activate the disk:
POST /api/vms/22222222-2222-2222-2222-222222222222/disks/ HTTP/1.1
Accept: application/xml
Content-type: application/xml
<disk id="11111111-1111-1111-1111-111111111111">
<snapshot id="11111111-1111-1111-1111-111111111111"/>
<active>true</active>
</disk>
What does this 22222222-2222-2222-2222-222222222222 mean? Do we need to
create a new virtual machine and attach the snapshot to that?
Could you kindly explain it further?
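For what it's worth, in the documented flow 22222222-2222-2222-2222-222222222222 appears to be the UUID of the backup VM, i.e. a second VM that the snapshot's disk gets attached to so its contents can be read. A minimal sketch that just builds the XML payload of the quoted step (the UUIDs are the documentation's placeholders, not real IDs):

```python
import xml.etree.ElementTree as ET

# Placeholder UUIDs from the documentation excerpt above: 22222... is the
# backup VM the disk is attached to; 11111... identifies the disk and the
# snapshot of the VM being backed up.
BACKUP_VM_ID = "22222222-2222-2222-2222-222222222222"
DISK_ID = "11111111-1111-1111-1111-111111111111"
SNAPSHOT_ID = "11111111-1111-1111-1111-111111111111"

def attach_disk_body(disk_id, snapshot_id):
    """Build the XML payload for POST /api/vms/<backup-vm-id>/disks/."""
    disk = ET.Element("disk", id=disk_id)
    ET.SubElement(disk, "snapshot", id=snapshot_id)
    ET.SubElement(disk, "active").text = "true"
    return ET.tostring(disk, encoding="unicode")

print(attach_disk_body(DISK_ID, SNAPSHOT_ID))
```

The resulting body would be POSTed to /api/vms/{BACKUP_VM_ID}/disks/ with Content-type: application/xml, as in the documented request.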
[Ovirt] [CQ weekly status] [12-07-2019]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.
*CQ-4.2*: GREEN (#1)
Last failure was on 02-07 for ovirt-provider-ovn due to a failed test:
098_ovirt_provider_ovn.use_ovn_provide.
This was a code regression and was fixed by patch
https://gerrit.ovirt.org/#/c/97072/
*CQ-4.3*: GREEN (#1)
Last failure was on 12-07 for ovirt-provider-ovn due to a failed
build-artifacts job, caused by a mirror issue.
*CQ-Master:* RED (#1)
Last failure was on 12-07 for ovirt-provider-ovn due to a failed
build-artifacts job, caused by a mirror issue.
Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change...
[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
CI fails to download rpms
by Vojtech Juranek
Hi,
quite a lot of CI runs fail because CI fails to download
mom-0.5.14-0.0.master.fc29.noarch.rpm. The download fails with:
Downloading Packages:
12:36:34 [MIRROR] mom-0.5.14-0.0.master.fc29.noarch.rpm: Interrupted by header callback: Server reports Content-Length: 133628 but expected size is: 133676
12:36:34 [FAILED] mom-0.5.14-0.0.master.fc29.noarch.rpm: No more mirrors to try - All mirrors were already tried without success
12:36:34 Error: Error downloading packages:
12:36:34 Cannot download noarch/mom-0.5.14-0.0.master.fc29.noarch.rpm: All mirrors were tried
For the full log, see e.g. [1].
Any idea how to fix it?
Thanks
Vojta
[1] https://jenkins.ovirt.org/job/vdsm_standard-check-patch/7809/console
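For context, dnf aborts a download when the Content-Length the mirror reports disagrees with the size recorded in the repository metadata (repodata), which is usually a sign of a stale or mid-sync mirror. The check amounts to (numbers taken from the log above):

```python
def mirror_is_stale(reported_content_length, expected_size):
    # dnf refuses the package when the server's Content-Length header does
    # not match the size recorded in the repodata it downloaded earlier.
    return int(reported_content_length) != int(expected_size)

# Values from the failure log: server reported 133628, repodata said 133676.
print(mirror_is_stale(133628, 133676))  # → True: mirror and metadata disagree
```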
[ OST Failure Report ] [ oVirt Master (vdsm) ] [ 10-07-2019 ] [ 004_basic_sanity.vdsm_recovery ]
by Dafna Ron
Hi,
We have a failure of test 004_basic_sanity.vdsm_recovery on the basic suite.
The error seems to be in KSM tuning (IOError: Invalid argument).
Can you please have a look?
Link and headline of suspected patches:
CQ identified this as the cause of failure:
https://gerrit.ovirt.org/#/c/101603/ - localFsSD: Enable 4k block_size and
alignments
However, I can see some py3 patches merged at the same time:
py3: storage: Fix bytes x string in lvm locking type validation -
https://gerrit.ovirt.org/#/c/101124/
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14963/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14963/artif...
(Relevant) error snippet from the log:
<error>
s/da0eeccb-5dd8-47e5-9009-8a848fe17ea5.ovirt-guest-agent.0',) {}
MainProcess|vm/da0eeccb::DEBUG::2019-07-10
07:53:41,003::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper)
return prepareVmChannel with None
MainProcess|jsonrpc/1::DEBUG::2019-07-10
07:54:05,580::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper)
call ksmTune with ({u'pages_to_scan': 64, u'run': 1,
u'sleep_millisecs': 89.25152465623417},) {}
MainProcess|jsonrpc/1::ERROR::2019-07-10
07:54:05,581::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper)
Error in ksmTune
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
101, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_api/ksm.py", line
45, in ksmTune
f.write(str(v))
IOError: [Errno 22] Invalid argument
MainProcess|jsonrpc/5::DEBUG::2019-07-10
07:56:33,211::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper)
call rmAppropriateMultipathRules with
('da0eeccb-5dd8-47e5-9009-8a848fe17ea5',) {}
</error>
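The traceback points at the value being written rather than the file: /sys/kernel/mm/ksm/sleep_millisecs accepts only an unsigned integer, and ksmTune wrote the float 89.25152465623417, which the kernel's sysfs handler rejects with EINVAL. A minimal sketch of a defensive fix (this mirrors the shape of the vdsm helper, but it is an illustration, not the actual patch):

```python
import os

# KSM tunables live under /sys/kernel/mm/ksm/; the directory is a
# parameter here so the logic can be exercised without root or KSM.
def ksm_tune(tuning_params, ksm_dir="/sys/kernel/mm/ksm"):
    """Write KSM tunables, coercing each value to int first: the kernel
    rejects non-integer strings for pages_to_scan/run/sleep_millisecs
    with EINVAL, which matches the IOError in the log above."""
    for key, value in tuning_params.items():
        path = os.path.join(ksm_dir, key)
        with open(path, "w") as f:
            f.write(str(int(value)))  # 89.2515... becomes "89"
```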
Access to images in block domain from more VMs
by Tomáš Golembiovský
Hi,
we have a large gap in handling ISO files in data domains, and especially
on block domains. None of the flows that work with ISOs work (well, one
sort of does):
1) Change CD for running VM [1]
2) Attaching a boot disk in Run Once -- this works, but I wonder if it's by
accident, because the engine issues a spurious teardown on the images [2]
3) Guest tools during VM import [3]
4) Automatic attach of guest tools to Windows VMs
Before virt can fix all of those, we need input from storage on how we
should handle these images on block domains. What is special about an ISO
image is that it makes sense to have it attached to multiple VMs. That also
means something should track whether the image still needs attaching or is
already attached to the host. Similarly, detaching the image should be
attempted only when no VM on the host uses it any longer.
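One way to get that control is per-host reference counting: attach an image for its first user and detach it only after the last one is gone. A minimal sketch, with hypothetical attach_image/detach_image callbacks standing in for the real storage operations:

```python
from collections import defaultdict

class HostImageTracker:
    """Tracks, per host, which VMs use a shared (ISO) image, so the image
    is attached on first use and detached only after the last user."""

    def __init__(self, attach_image, detach_image):
        self._attach = attach_image    # hypothetical storage primitive
        self._detach = detach_image    # hypothetical storage primitive
        self._users = defaultdict(set)  # (host, image) -> {vm ids}

    def vm_needs_image(self, host, image, vm):
        key = (host, image)
        if not self._users[key]:
            self._attach(host, image)  # first user on this host: attach
        self._users[key].add(vm)

    def vm_released_image(self, host, image, vm):
        key = (host, image)
        self._users[key].discard(vm)
        if not self._users[key]:
            self._detach(host, image)  # last user gone: safe to detach
```

Doing this in one place on the engine side would also centralize the races this thread worries about; the same bookkeeping done per host in VDSM would need the API changes mentioned below.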
Hence my question to storage -- is there already some framework from
storage for handling this? Does it make more sense to do this on the
engine side, or should each host handle it on its own? To me it seems
preferable to handle this on the engine side, as that is where the
requests for attach/detach normally come from (disks in VM import are an
exception). It could also mean fewer race conditions (but I may be totally
wrong here). And having it on the VDSM side would probably mean changes to
the API for 1) and 2).
Any input appreciated.
Tomas
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1589763
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1721455
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1721455
--
Tomáš Golembiovský <tgolembi(a)redhat.com>