Update to latest master failed with Failed to execute stage 'Setup validation': 'changed'
by Michael Burman
Hi,
I'm trying to update to the latest master version,
4.3.0-0.4.master.20181219115514.git1d803c4.el7, and on engine-setup I get:
[ ERROR ] Failed to execute stage 'Setup validation': 'changed'
2018-12-20 08:44:50,803+0200 DEBUG otopi.context context._executeMethod:128
Stage validation METHOD otopi.plugins.ovirt_engine_common.base.core.uninstall.Plugin._validation_changed_files
2018-12-20 08:44:50,819+0200 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-common/base/core/uninstall.py", line 247, in _validation_changed_files
    if info['changed']
KeyError: 'changed'
2018-12-20 08:44:50,821+0200 ERROR otopi.context context._executeMethod:152
Failed to execute stage 'Setup validation': 'changed'
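For reference, the failure is a plain KeyError: the uninstall plugin indexes
each tracked file's info dict with info['changed'], and at least one entry
lacks that key. A minimal sketch of the failing pattern and a tolerant
variant (illustrative data, not the actual otopi code):

# Illustrative data only - not real setup state.
files = {
    '/etc/ovirt-engine/a.conf': {'changed': True},  # normal entry
    '/etc/ovirt-engine/b.conf': {},                 # entry missing the 'changed' key
}
try:
    # Fails exactly like the traceback above:
    changed = [name for name, info in files.items() if info['changed']]
except KeyError as e:
    print('KeyError:', e)
# A tolerant lookup treats a missing key as "not changed":
changed = [name for name, info in files.items() if info.get('changed')]
print(changed)  # ['/etc/ovirt-engine/a.conf']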
Known issue?
Thanks,
--
Michael Burman
Senior Quality Engineer - RHV Network - Red Hat Israel
Red Hat
<https://www.redhat.com>
mburman(a)redhat.com M: 0545355725 IM: mburman
planned Jenkins restart
by Evgheni Dereveanchin
Hi everyone,
I'll be performing a planned Jenkins restart within the next hour.
No new jobs will be scheduled during this maintenance period.
I will inform you once it is over.
Regards,
Evgheni Dereveanchin
[Ovirt] [CQ weekly status] [14-12-2018]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
the status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-4.2*: GREEN (#1)
Last job failure was on Dec 11th.
Project: ovirt-engine
Reason: infra-related failure
*CQ-Master:* GREEN (#1)
Last job failure was on Dec 13th.
Project: vdsm
Reason: a failed package build caused CQ to fail to run. The package build
failure was infra-related, and a new package was successfully built and
tested after this failure.
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures indicate a healthy project, as we expect a number
of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) have been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
Questions for Redhat team
by Suchitra Herwadkar
Hi Nir & Red Hat team,
1. Cloned VM support in RHV.
There are two types of clones, as per my understanding:
a. Full cloned VM: an independent copy of a virtual machine that shares nothing with the parent template after the cloning operation.
b. Dependent (linked) cloned VM: a copy of a virtual machine that shares virtual disks with the parent template. This type of clone is made from a snapshot of the parent, which is a qcow2 image. All files available on the parent at the moment of the snapshot continue to remain available to the linked clone. Ongoing changes to the virtual disk of the parent do not affect the linked clone, and changes to the disk of the linked clone do not affect the parent.
We want to know the customer preference (a or b), as both have pros & cons. What is the guidance given by RHV around this? How do we recommend customers patch the template disks when multiple VMs are cloned from it? Kindly provide more details.
2. Is there a way we can mount a QCOW2 image using a QCOW2 mounter and get a block-level view of the data? We read about libguestfs, but that provides a filesystem view.
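For context, a common way to get a block-level view of a qcow2 image is
qemu-nbd with the kernel nbd module; a minimal sketch, with a hypothetical
image path:

modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /path/to/disk.qcow2   # expose the image as a block device
# /dev/nbd0 (and partitions like /dev/nbd0p1) can now be read at block level
qemu-nbd --disconnect /dev/nbd0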
3. We would like to know when we can expect the “Allocation Regions API”. I’m referring to
https://bugzilla.redhat.com/show_bug.cgi?id=1622946
This is marked for 4.3 at this point in time. I presume we will get it as part of a 4.2.x release as well.
Kindly confirm?
Thanks
Suchitra
Announcing 'COVERAGE' option in OST
by Marcin Sobczyk
Hi,
I've been working recently on adding a coverage report for VDSM in OST,
and I'm happy to announce that the first batch of patches is merged!
To run a suite with coverage, look for the 'COVERAGE' drop-down on OST's
build parameters page. If you run OST locally, pass a '--coverage' argument
to 'run_suite.sh'.
Currently, coverage works only for VDSM in basic-suite-master, but adding
VDSM support for other suites is now a no-brainer. More patches are on the
way!
Since the option is named 'COVERAGE', and not 'VDSM_COVERAGE', other
projects are welcome to implement their coverage reports on top of it. A
local invocation is sketched below.
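For a local run, the invocation might look like this (a sketch, assuming a
checkout of ovirt-system-tests and the basic suite):

./run_suite.sh --coverage basic-suite-master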
Cheers, Marcin
ovirt-system-tests_compat-4.2-suite-master is failing since November 27
by Sandro Bonazzola
Hi, ovirt-system-tests_compat-4.2-suite-master has been failing since November 27
with the following error:
Error: Fault reason is "Operation Failed". Fault detail is "[Bond name
'bond_fancy0' does not follow naming criteria. For host compatibility
version 4.3 or greater, custom bond names must begin with the prefix
'bond' followed by 'a-z', 'A-Z', '0-9', or '_' characters. For host
compatibility version 4.2 and lower, custom bond names must begin with
the prefix 'bond' followed by a number.]". HTTP response code is 400.
I think that if the scope of the test is to check that 4.2 still works with a
4.3 engine, then 'bond_fancy0' works for 4.3 but is clearly not valid for 4.2,
and the test needs a fix; the two naming rules are sketched below.
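For reference, a minimal Python sketch of the two rules quoted in the error
(illustrative only, not the engine's actual validation code):

import re

# 4.3+: 'bond' followed by one or more of a-z, A-Z, 0-9 or '_'
BOND_NAME_4_3 = re.compile(r'^bond\w+$', re.ASCII)
# 4.2 and lower: 'bond' followed by a number
BOND_NAME_4_2 = re.compile(r'^bond\d+$', re.ASCII)

def bond_name_ok(name, compat_version):
    # Pick the rule matching the host compatibility version.
    rule = BOND_NAME_4_3 if compat_version >= (4, 3) else BOND_NAME_4_2
    return bool(rule.match(name))

assert bond_name_ok('bond_fancy0', (4, 3))      # accepted for 4.3
assert not bond_name_ok('bond_fancy0', (4, 2))  # rejected for 4.2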
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
v2v: -o rhv-upload: Upload via NBD
by Nir Soffer
I posted this patch for vdsm, adding NBD APIs:
https://gerrit.ovirt.org/c/96079/
The main purpose of these APIs is enabling incremental restore, but they also
enable a more efficient rhv-upload via NBD, and importing to thin disks, which
is not possible with the current solution.
The same idea can work for KubeVirt or other targets, minimizing
target-specific code.
Here is how rhv-upload can work using NBD:
1. The rhv-upload plugin is split into pre and post scripts:
- pre - prepare the disk and start the image transfer
- post - finalize the image transfer, create the VM, etc.
2. The rhv-upload-pre plugin creates a transfer with transport="nbd":
POST /imagetransfers
<image_transfer>
    <disk id="123"/>
    <direction>upload</direction>
    <format>raw</format>
    <transport>nbd</transport>
</image_transfer>
Engine does not implement <transport> yet, but this should be an easy change.
This will use the new NBD APIs to export the disk using NBD over a unix
socket; we can also support NBD over TLS/PSK later.
Engine will return the NBD URL in transfer_url:
<transfer_url>nbd:unix:/run/vdsm/nbd/<transfer_uuid>.sock</transfer_url>
3. v2v uses the transfer_url to start qemu-img against the NBD server:
nbdkit (vddk) -> qemu-img convert -> qemu-nbd
Note that nbdkit is removed from the RHV side of the pipeline. This is
expected to improve throughput significantly, since imageio does not cope
well with the many small requests generated by qemu-img; a possible
invocation is sketched below.
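For illustration, the qemu-img side could look roughly like this (a sketch:
the local source file and its format are assumptions, and -n is used because
the target NBD export already exists):

qemu-img convert -p -n -f vmdk -O raw guest.vmdk \
    'nbd:unix:/run/vdsm/nbd/<transfer_uuid>.sock'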
4. The rhv-upload-post script is invoked to complete the transfer.
What do you think?
Nir