I'll be performing a planned Jenkins restart within the next hour.
No new jobs will be scheduled during this maintenance period.
I will inform you once it is over.
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to the colour map below for further information on the meaning of each colour:
*CQ-4.2*: GREEN (#1)
Last job failure was on Dec 11th.
Reason: infra related failure
*CQ-Master:* GREEN (#1)
Last job failure was on Dec 13th.
Reason: a failed package build caused CQ to fail to run. The package build
failure was infra related, and a new package was successfully built and
tested after this failure.
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our tests
1-3 days GREEN (#1)
4-7 days GREEN (#2)
Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting failures
** intermittent failures would indicate a healthy project, as we expect a
number of failures during the week
** I will not report any of the solved failures or regressions.
Solved job failures YELLOW (#1)
Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) have been broken. Only active regressions will be reported.
1-3 days RED (#1)
4-7 days RED (#2)
Over 7 days RED (#3)
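For quick triage, the day thresholds above can be sketched as a small helper (purely illustrative; the function name is mine):

```python
def colour_level(days_in_state):
    """Map consecutive days in a state (green or red) to the report's
    level number: (#1), (#2) or (#3)."""
    if days_in_state <= 3:
        return 1  # 1-3 days -> (#1)
    if days_in_state <= 7:
        return 2  # 4-7 days -> (#2)
    return 3      # over 7 days -> (#3)
```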
Hi Nir & Red Hat team,
1. Cloned VM support in RHV.
There are two types of clones, as per my understanding:
1. Full Cloned VM: Independent copy of a virtual machine that shares nothing with the parent template after the cloning operation.
2. Dependent Cloned VM: A copy of a virtual machine that shares virtual disks with the parent template. This type of clone is made from a snapshot of the parent, which is a qcow2 image. All files available on the parent at the moment of the snapshot continue to remain available to the linked clone. Ongoing changes to the virtual disk of the parent do not affect the linked clone, and changes to the disk of the linked clone do not affect the parent.
We want to know the customer preference (1 or 2), as both have pros & cons. What is the guidance given by RHV around this? How do we recommend customers patch the template disks when multiple VMs are cloned from them? Kindly provide more details.
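For reference, the two clone types roughly correspond to these qemu-img operations (a sketch only; the helper names and paths are mine, and RHV's actual storage flows involve more than plain qemu-img calls):

```python
def full_clone_cmd(template_img, clone_img):
    # Full clone: copy all data; the result shares nothing with the template.
    return ["qemu-img", "convert", "-O", "qcow2", template_img, clone_img]

def dependent_clone_cmd(template_img, clone_img):
    # Dependent (linked) clone: a thin qcow2 overlay whose backing file is
    # the template; clusters not yet written are read from the template.
    return ["qemu-img", "create", "-f", "qcow2",
            "-b", template_img, "-F", "qcow2", clone_img]
```

The patching question follows from this split: rewriting a template disk would invalidate the backing chain of dependent clones, while full clones are unaffected.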
2. Is there a way we can mount a QCOW2 image using a QCOW2 mounter and have a block-level view of the data? We read about libguestfs, but this gives a filesystem view.
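On the block-level question: one common approach (an assumption on my side, not RHV guidance) is qemu-nbd, which attaches a qcow2 image to a network block device; a sketch of the commands as argv lists, with /dev/nbd0 assumed to be free:

```python
def nbd_export_cmds(qcow2_image, nbd_dev="/dev/nbd0"):
    # Load the kernel nbd module, then attach the qcow2 image to the nbd
    # device; the guest data is then visible block-by-block on nbd_dev.
    return [
        ["modprobe", "nbd", "max_part=8"],
        ["qemu-nbd", "--connect", nbd_dev, "--format", "qcow2", qcow2_image],
    ]
```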
3. We would like to know when we can expect the “Allocation Regions API”. I’m referring to
This is marked for 4.3 at this point in time. I presume we will get it as part of a 4.2.x release as well.
Kindly confirm?
When: 8:30 PM - 9:00 PM December 13, 2018
Subject: [EXTERNAL] Canceled event: RHV- Veritas Netbackup Weekly Sync @ Thu Dec 13, 2018 10am - 10:30am (EST) (suchitra.herwadkar(a)veritas.com)
This event has been canceled.
RHV- Veritas Netbackup Weekly Sync
Thu Dec 13, 2018 10am – 10:30am Eastern Time - New York
pchavva(a)redhat.com - organizer
adbarbos(a)redhat.com - optional
renee.carlisle(a)veritas.com - optional
This Engineering weekly sync will cover the engagement activity for Netbackup and RHV integration.
To join the Meeting:
To join via Room System:
Video Conferencing System: <https://www.google.com/url?q=http%3A%2F%2Fredhat.bjn.vc&sa=D&ust=15451344...>
Meeting ID : 5723877297
To join via phone :
408-915-6466 (United States)
(see all numbers - <https://www.google.com/url?q=https%3A%2F%2Fwww.redhat.com%2Fen%2Fconferen...>
2) Enter Conference ID : 5723877297
I've been working on adding a coverage report for VDSM in OST
recently, and I'm happy to announce that the first batch of patches is merged.
To run a suite with coverage, look for 'COVERAGE' drop-down on OST's
build parameters page. If you run OST locally, pass a '--coverage' argument.
Currently, coverage works only for VDSM in basic-suite-master, but adding
VDSM support for other suites is now a no-brainer. More patches are on the way.
Since the option is named 'COVERAGE', and not 'VDSM_COVERAGE', other projects
are welcome to implement their coverage reports on top of it.
Hi, ovirt-system-tests_compat-4.2-suite-master has been failing since November 27
with the following error:
Error: Fault reason is "Operation Failed". Fault detail is "[Bond name
'bond_fancy0' does not follow naming criteria. For host compatibility
version 4.3 or greater, custom bond names must begin with the prefix
'bond' followed by 'a-z', 'A-Z', '0-9', or '_' characters. For host
compatibility version 4.2 and lower, custom bond names must begin with
the prefix 'bond' followed by a number.]". HTTP response code is 400.
I think that if the scope of the test is to check that 4.2 still works with
a 4.3 engine, then bond_fancy0, while valid for 4.3, is clearly not good for
4.2, and the test needs a fix.
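The naming rules quoted in the error message can be expressed as regexes (a sketch; the authoritative validation lives in the engine):

```python
import re

# Host compat >= 4.3: 'bond' followed by letters, digits or underscores.
BOND_NAME_43 = re.compile(r"bond[a-zA-Z0-9_]+")
# Host compat <= 4.2: 'bond' followed by a number only.
BOND_NAME_42 = re.compile(r"bond\d+")

def valid_bond_name(name, compat):
    pattern = BOND_NAME_43 if compat >= (4, 3) else BOND_NAME_42
    return bool(pattern.fullmatch(name))
```

This shows the reported conflict directly: 'bond_fancy0' passes the 4.3 rule but fails the 4.2 one, so a 4.2 compatibility suite cannot use it.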
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
I posted this patch for vdsm, adding NBD APIs:
The main purpose of these APIs is enabling incremental restore, but they also
enable a more efficient rhv-upload via NBD, and importing to thin disks,
which is not possible with the current solution.
The same idea can work for KubeVirt or other targets, minimizing target-specific code.
Here is how rhv-upload can work using NBD:
1. The rhv-upload plugin is split into pre and post scripts
- pre - prepare the disk and start the image transfer
- post - finalize the image transfer, create the VM, etc.
2. The rhv-upload-pre plugin creates a transfer with transport="nbd"
Engine does not implement <transport> yet, but this should be an easy change.
This will use the new NBD APIs to export the disk using NBD over a unix socket.
We can later also support NBD over TLS/PSK.
Engine will return NBD url in transfer_url:
3. v2v uses the transfer_url to start qemu-img with the NBD server:
nbdkit (vddk) -> qemu-img convert -> qemu-nbd
Note that nbdkit is removed from the rhv side of the pipeline. This should
improve the throughput significantly, since imageio is not very good with the
small requests generated by qemu-img.
4. The rhv-upload-post script is invoked to complete the transfer
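Step 3 above would boil down to a qemu-img invocation along these lines (a sketch as an argv list; the source URL and destination socket path are assumptions, and the exact flags may differ):

```python
def convert_cmd(src_nbd_url, dst_unix_socket):
    # qemu-img reads from the nbdkit (vddk) NBD export and writes into the
    # disk that qemu-nbd exports over a unix socket on the RHV side; -n
    # skips target creation since the disk already exists there.
    return ["qemu-img", "convert", "-n", "-f", "raw",
            src_nbd_url, "nbd:unix:" + dst_unix_socket]
```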
What do you think?