New test failure on travis
by Nir Soffer
Looks like we need @brokentest("reason...", name="TRAVIS_CI") on this:
See https://travis-ci.org/oVirt/vdsm/jobs/181933329
======================================================================
ERROR: test suite for <module 'network.nmdbus_test' from
'/vdsm/tests/network/nmdbus_test.py'>
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nose/suite.py", line 209, in run
self.setUp()
File "/usr/lib/python2.7/site-packages/nose/suite.py", line 292, in setUp
self.setupContext(ancestor)
File "/usr/lib/python2.7/site-packages/nose/suite.py", line 315, in
setupContext
try_run(context, names)
File "/usr/lib/python2.7/site-packages/nose/util.py", line 471, in try_run
return func()
File "/vdsm/tests/testValidation.py", line 191, in wrapper
return f(*args, **kwargs)
File "/vdsm/tests/testValidation.py", line 97, in wrapper
return f(*args, **kwargs)
File "/vdsm/tests/network/nmdbus_test.py", line 48, in setup_module
NMDbus.init()
File "/vdsm/lib/vdsm/network/nm/nmdbus/__init__.py", line 33, in init
NMDbus.bus = dbus.SystemBus()
File "/usr/lib64/python2.7/site-packages/dbus/_dbus.py", line 194, in __new__
private=private)
File "/usr/lib64/python2.7/site-packages/dbus/_dbus.py", line 100, in __new__
bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 122, in __new__
bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
DBusException: org.freedesktop.DBus.Error.FileNotFound: Failed to
connect to socket /var/run/dbus/system_bus_socket: No such file or
directory
-------------------- >> begin captured logging << --------------------
2016-12-07 11:48:33,458 DEBUG (MainThread) [root] /usr/bin/taskset
--cpu-list 0-1 /bin/systemctl status NetworkManager (cwd None)
(commands:69)
2016-12-07 11:48:33,465 DEBUG (MainThread) [root] FAILED: <err> =
'Failed to get D-Bus connection: Operation not permitted\n'; <rc> = 1
(commands:93)
2016-12-07 11:48:33,465 DEBUG (MainThread) [root] /usr/bin/taskset
--cpu-list 0-1 /bin/systemctl start NetworkManager (cwd None)
(commands:69)
2016-12-07 11:48:33,470 DEBUG (MainThread) [root] FAILED: <err> =
'Failed to get D-Bus connection: Operation not permitted\n'; <rc> = 1
(commands:93)
--------------------- >> end captured logging << ---------------------
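A minimal sketch of how that could look in nmdbus_test.py, assuming the brokentest decorator in testValidation grows the proposed name= argument for selecting the CI environment variable (the current decorator does not take it):
```python
# Hypothetical sketch: skip the D-Bus dependent setup on Travis, assuming
# brokentest() accepts the proposed name= argument naming the CI env var.
from testValidation import brokentest
from vdsm.network.nm.nmdbus import NMDbus


@brokentest("no system D-Bus socket in the Travis container",
            name="TRAVIS_CI")
def setup_module():
    # dbus.SystemBus() fails on Travis because
    # /var/run/dbus/system_bus_socket does not exist in the container.
    NMDbus.init()
```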
7 years, 10 months
[Feature discussion] Full vacuum tool
by Roy Golan
Hi all,
This is a discussion on the RFE[1] to provide a tool to perform full vacuum
on our DBs.
First if you are not familiar with vacuum please read this [2]
# Background
The oVirt 'engine' DB has several busy tables with 2 different usage patterns. One
is audit_log and the others are the 'v*_statistics' tables, and the
difference between them is mostly inserts vs. mostly hot updates.
Tables with lots of updates create garbage, or 'dead' records, that should
be removed, and for this postgres has the aforementioned autovacuum
cleaner. It lets the db reuse its already allocated space to perform
future updates/inserts and so on.
Autovacuum is essential for a db to function optimally and tweaking it is
out of the scope of the feature.
Full vacuum is designed to reclaim the disk space and reset the table
statistics. It is a heavy maintenance task: it takes an exclusive lock on
the table and may take seconds to minutes. In some situations it is
effectively downtime due to the long table lock, and it should not be run
while the engine is running.
# Criteria
Provide a way to reclaim disk space claimed by the garbage created over
time by the engine db and dwh.
# Usage
Either use it as part of the upgrade procedure (after all db scripts have
been executed), or just provide the tool and the admin will run it on demand.
- engine db credentials read from /etc/ovirt-engine/engine.conf.d/
- invocation:
```
tool: [dbname(default engine)] [table: (default all)]
```
- if we invoke it on upgrade then an installation plugin should be added to
invoke it with defaults, with no interaction
- since VACUUM ANALYZE is considered a recommended maintenance task, we can do
it by default and ask the user before running FULL (see the sketch after this list)
- a remote db is supported as well; it doesn't have to be local
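To make the intended behaviour concrete, here is a minimal sketch of such a tool in Python, assuming psycopg2 and local peer authentication; the option names and defaults are illustrative only, not the final design:
```python
# Hypothetical sketch of the proposed vacuum tool; option names, defaults
# and connection handling are assumptions, not the final design.
import argparse
import psycopg2


def vacuum(dbname, table=None, full=False):
    conn = psycopg2.connect(dbname=dbname)
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    stmt = "VACUUM {}ANALYZE".format("FULL " if full else "")
    if table:
        stmt += " " + table  # default: all tables in the database
    with conn.cursor() as cur:
        cur.execute(stmt)
    conn.close()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Reclaim space and refresh statistics in the engine DB")
    parser.add_argument("--dbname", default="engine")
    parser.add_argument("--table", default=None, help="default: all tables")
    parser.add_argument("--full", action="store_true",
                        help="run VACUUM FULL (takes an exclusive table lock)")
    args = parser.parse_args()
    vacuum(args.dbname, args.table, args.full)
```
In an upgrade-time plugin the same function would simply be called with the defaults (plain VACUUM ANALYZE), keeping FULL behind an explicit flag.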
# Questions
- Will remote dwh have the credentials under
/etc/ovirt-engine/engine.conf.d?
- Should AAA schema be taken into account as well?
Please review, thanks
Roy
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1388430
[2]
https://www.postgresql.org/docs/9.2/static/runtime-config-autovacuum.html
[3] https://www.postgresql.org/docs/devel/static/sql-vacuum.html
7 years, 10 months
The engine build job you probably didn't know exists
by Eyal Edri
FYI,
After hearing that this might be useful to many developers, I wanted to
bring to your attention the ovirt-engine master build job from patch [1]
which allows you to build new rpms from an open ovirt-engine patch on
Gerrit.
It was created as a temp job for a team that needed it a few months ago, but
it seems it might be useful for other teams as well, so we decided to publish it
even though it's not yet standardized as part of our standard CI system.
I hope you can find it useful for your teams.
[1]
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_...
--
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
7 years, 10 months
[VDSM] Correct implementation of virt-sysprep job
by Shmuel Melamud
Hi!
I'm currently working on integration of virt-sysprep into oVirt.
Usually, if a user creates a template from a regular VM, and then creates new
VMs from this template, these new VMs inherit all the configuration of the
original VM, including SSH keys, UDEV rules, MAC addresses, system ID,
hostname etc. This is unfortunate, because you cannot have two network
devices with the same MAC address in the same network, for example.
To avoid this, the user must clean all machine-specific configuration from the
original VM before creating a template from it. You can do this manually,
but there is the virt-sysprep utility that does this automatically.
Ideally, virt-sysprep should be seamlessly integrated into the template
creation process. But the first step is to create a simple button: the user
selects a VM, clicks the button and oVirt executes virt-sysprep on the VM.
virt-sysprep works directly on the VM's filesystem. It accepts the list of all
disks of the VM as parameters:
virt-sysprep -a disk1.img -a disk2.img -a disk3.img
The architecture is as follows: a command on the Engine side runs a job on the
VDSM side and tracks its success/failure. The job on the VDSM side runs
virt-sysprep.
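As a rough illustration, the VDSM-side job body could be little more than building that command line from the VM's disk paths and running it; the sketch below is hypothetical and framework-agnostic, deliberately not tied to any existing VDSM job mechanism (which is exactly the open question below):
```python
# Hypothetical, framework-agnostic sketch of the VDSM-side job body:
# build the virt-sysprep command line from the VM's disk paths and run it.
import subprocess


def run_sysprep(disk_paths):
    cmd = ["virt-sysprep"]
    for path in disk_paths:
        cmd.extend(["-a", path])
    # virt-sysprep exits non-zero on failure; the surrounding job
    # framework would translate CalledProcessError into a failed job.
    subprocess.check_call(cmd)
```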
The question is how to implement the job correctly.
I thought about using storage jobs, but they are designed to work only with
a single volume, correct? Is it possible to use them with an operation that is
performed on multiple volumes?
Or, alternatively, is it possible to use some kind of 'VM jobs' that work
on the VM as a whole? How does v2v solve this problem?
Any ideas?
Shmuel
7 years, 10 months
Re: [ovirt-devel] [VDSM] Correct implementation of virt-sysprep job
by Oved Ourfali
On Dec 7, 2016 20:24, "Michal Skrivanek" <mskrivan(a)redhat.com> wrote:
>
>
>
> On 07 Dec 2016, at 09:17, Oved Ourfali <oourfali(a)redhat.com> wrote:
>
>>
>>
>> On Tue, Dec 6, 2016 at 11:12 PM, Adam Litke <alitke(a)redhat.com> wrote:
>>>
>>> On 06/12/16 22:06 +0200, Arik Hadas wrote:
>>>>
>>>> Adam,
>>>
>>>
>>> :) You seem upset. Sorry if I touched on a nerve...
>>>
>>>> Just out of curiosity: when you write "v2v has promised" - what
exactly do you
>>>> mean? the tool? Richard Jones (the maintainer of virt-v2v)? Shahar and
I that
>>>> implemented the integration with virt-v2v? I'm not aware of such a
promise by
>>>> any of these options :)
>>>
>>>
>>> Some history...
>>>
>>> Earlier this year Nir, Francesco (added), Shahar, and I began
>>> discussing the similarities between what storage needed to do with
>>> external commands and what was designed specifically for v2v. I am
>>> not sure if you were involved in the project at that time. The plan
>>> was to create common infrastructure that could be extended to fit the
>>> unique needs of the verticals. The v2v code was going to be moved
>>> over to the new infrastructure (see [1]) and the only thing that
>>> stopped the initial patch was lack of a VMWare testing environment for
>>> verification.
>>>
>>> At that time storage refocused on developing verbs that used the new
>>> infrastructure and have been maintaining its suitability for general
>>> use. Conversion of v2v -> Host Jobs is obviously a lower priority
>>> item and much more difficult now due to the early missed opportunity.
>>>
>>>> Anyway, let's say that you were given such a promise by someone and
thus
>>>> consider that mechanism to be deprecated - it doesn't really matter.
>>>
>>>
>>> I may be biased but I think my opinion does matter.
>>>
>>>> The current implementation doesn't well fit to this flow (it requires
>>>> per-volume job, it creates leases that are not needed for template's
disks,
>>>> ...) and with the "next-gen API" with proper support for virt flows
not even
>>>> being discussed with us (and iiuc also not with the infra team) yet, I
don't
>>>> understand what do you suggest except for some strong, though
irrelevant,
>>>> statements.
>>>
>>>
>>> If you are willing to engage in a good-faith technical discussion I am
>>> sure I can help you to understand. These operations to storage demand
>>> some form of locking protection. If volume leases aren't appropriate
then
>>> perhaps we should use the VM Leases / xleases that Nir is finishing
>>> off for 4.1 now.
>>>
>>>> I suggest loud and clear to reuse (not to add dependencies, not to
enhance, ..)
>>>> an existing mechanism for a very similar flow of virt-v2v that works
well and
>>>> simple.
>>>
>>>
>>> I clearly remember discussions involving infra (hello Oved), virt
>>> (hola Michal), and storage where we decided that new APIs performing
>>> async operations involving external commands should use the HostJobs
>>> infrastructure instead of adding more information to Host Stats.
>>> These were the "famous" entity polling meetings.
>>>
>>> Of course plans can change but I have never been looped into any such
>>> discussions.
>>>
>>
>> Well, I think that when someone builds a good infrastructure he first
needs to talk to all consumers and make sure it fits.
>> In this case it seems like most work was done to fit the storage
use-case, and now you check whether it can fit others as well....
>>
>> IMO it makes much more sense to use events where possible (and you've
promised to use those as well, but I don't see you doing that...). v2v
should use events for sure, and they have promised to do that in the past,
instead of using the v2v jobs. The reason events weren't used originally
with the v2v feature, was that it was too risky and the events
infrastructure was added too late in the game.
>
>
> Revisiting and refactoring code which is already in use is always a bit
of luxury we can rarely prioritize. So indeed v2v is not using events. The
generalization work has been done to some extent, but there is no incentive
to rewrite it completely.
> On the other hand we are now trying to add events to migration progress
reporting and hand over since that area is being touched due to post-copy
enhancements.
> So, when there is a practical chance to improve functionality by
utilizing events it indeed should be the first choice
+1.
Well understood.
>
>>
>>
>>>>
>>>> Do you "promise" to implement your "next gen API" for 4.1 as an
alternative?
>>>
>>>
>>> I guess we need the design first.
>>>
>>>
>>>> On Tue, Dec 6, 2016 at 5:04 PM, Adam Litke <alitke(a)redhat.com> wrote:
>>>>
>>>> On 05/12/16 11:17 +0200, Arik Hadas wrote:
>>>>
>>>>
>>>>
>>>> On Mon, Dec 5, 2016 at 10:05 AM, Nir Soffer <nsoffer(a)redhat.com>
wrote:
>>>>
>>>> On Sun, Dec 4, 2016 at 8:50 PM, Shmuel Melamud <
smelamud(a)redhat.com>
>>>> wrote:
>>>> >
>>>> > Hi!
>>>> >
>>>> > I'm currently working on integration of virt-sysprep into
oVirt.
>>>> >
>>>> > Usually, if user creates a template from a regular VM, and
then
>>>> creates
>>>> new VMs from this template, these new VMs inherit all
configuration
>>>> of the
>>>> original VM, including SSH keys, UDEV rules, MAC addresses,
system
>>>> ID,
>>>> hostname etc. It is unfortunate, because you cannot have two
network
>>>> devices with the same MAC address in the same network, for
example.
>>>> >
>>>> > To avoid this, user must clean all machine-specific
configuration
>>>> from
>>>> the original VM before creating a template from it. You can
do this
>>>> manually, but there is virt-sysprep utility that does this
>>>> automatically.
>>>> >
>>>> > Ideally, virt-sysprep should be seamlessly integrated into
>>>> template
>>>> creation process. But the first step is to create a simple
button:
>>>> user
>>>> selects a VM, clicks the button and oVirt executes
virt-sysprep on
>>>> the VM.
>>>> >
>>>> > virt-sysprep works directly on VM's filesystem. It accepts
list of
>>>> all
>>>> disks of the VM as parameters:
>>>> >
>>>> > virt-sysprep -a disk1.img -a disk2.img -a disk3.img
>>>> >
>>>> > The architecture is as follows: command on the Engine side
runs a
>>>> job on
>>>> VDSM side and tracks its success/failure. The job on VDSM
side runs
>>>> virt-sysprep.
>>>> >
>>>> > The question is how to implement the job correctly?
>>>> >
>>>> > I thought about using storage jobs, but they are designed
to work
>>>> only
>>>> with a single volume, correct?
>>>>
>>>> New storage verbs are volume based. This make it easy to
manage
>>>> them on the engine side, and will allow parallelizing volume
>>>> operations
>>>> on single or multiple hosts.
>>>>
>>>> A storage volume job is using sanlock lease on the modified
volume
>>>> and volume generation number. If a host running pending jobs
becomes
>>>> non-responsive and cannot be fenced, we can detect the state
of
>>>> the job, fence the job, and start the job on another host.
>>>>
>>>> In the SPM task, if a host becomes non-responsive and cannot
be
>>>> fenced, the whole setup is stuck, there is no way to perform
any
>>>> storage operation.
>>>> > Is is possible to use them with operation that is
performed on
>>>> multiple
>>>> volumes?
>>>> > Or, alternatively, is it possible to use some kind of 'VM
jobs' -
>>>> that
>>>> work on VM at whole?
>>>>
>>>> We can do:
>>>>
>>>> 1. Add jobs with multiple volumes leases - can make error
handling
>>>> very
>>>> complex. How do tell a job state if you have multiple
leases?
>>>> which
>>>> volume generation you use?
>>>>
>>>> 2. Use volume job using one of the volumes (the boot
volume?). This
>>>> does
>>>> not protect the other volumes from modification but
engine is
>>>> responsible
>>>> for this.
>>>>
>>>> 3. Use new "vm jobs", using a vm lease (should be available
this
>>>> week
>>>> on master).
>>>> This protects a vm during sysprep from starting the vm.
>>>> We still need a generation to detect the job state, I
think we
>>>> can
>>>> use the sanlock
>>>> lease generation for this.
>>>>
>>>> I like the last option since sysprep is much like running a
vm.
>>>> > How v2v solves this problem?
>>>>
>>>> It does not.
>>>>
>>>> v2v predates storage volume jobs. It does not use volume
leases and
>>>> generation
>>>> and does have any way to recover if a host running v2v
becomes
>>>> non-responsive
>>>> and cannot be fenced.
>>>>
>>>> It also does not use the jobs framework and does not use a
thread
>>>> pool for
>>>> v2v jobs, so it has no limit on the number of storage
operations on
>>>> a host.
>>>>
>>>>
>>>> Right, but let's be fair and present the benefits of v2v-jobs
as well:
>>>> 1. it is the simplest "infrastructure" in terms of LOC
>>>>
>>>>
>>>> It is also deprecated. V2V has promised to adopt the richer Host
Jobs
>>>> API in the future.
>>>>
>>>>
>>>> 2. it is the most efficient mechanism in terms of interactions
between
>>>> the
>>>> engine and VDSM (it doesn't require new verbs/call, the data is
>>>> attached to
>>>> VdsStats; probably the easiest mechanism to convert to events)
>>>>
>>>>
>>>> Engine is already polling the host jobs API so I am not sure I agree
>>>> with you here.
>>>>
>>>>
>>>> 3. it is the most efficient implementation in terms of
interaction with
>>>> the
>>>> database (no date is persisted into the database, no polling is
done)
>>>>
>>>>
>>>> Again, we're already using the Host Jobs API. We'll gain efficiency
>>>> by migrating away from the old v2v API and having a single, unified
>>>> approach (Host Jobs).
>>>>
>>>>
>>>> Currently we have 3 mechanisms to report jobs:
>>>> 1. VM jobs - that is currently used for live-merge. This
requires the
>>>> VM entity
>>>> to exist in VDSM, thus not suitable for virt-sysprep.
>>>>
>>>>
>>>> Correct, not appropriate for this application.
>>>>
>>>>
>>>> 2. storage jobs - complicated infrastructure, targeted for
recovering
>>>> from
>>>> failures to maintain storage consistency. Many of the things
this
>>>> infrastructure knows to handle is irrelevant for virt-sysprep
flow, and
>>>> the
>>>> fact that virt-sysprep is invoked on VM rather than particular
disk
>>>> makes it
>>>> less suitable.
>>>>
>>>>
>>>> These are more appropriately called HostJobs and the have the
>>>> following semantics:
>>>> - They represent an external process running on a single host
>>>> - They are not persisted. If the host or vdsm restarts, the job is
>>>> aborted
>>>> - They operate on entities. Currently storage is the first adopter
>>>> of the infrastructure but virt was going to adopt these for the
>>>> next-gen API. Entities can be volumes, storage domains, vms,
>>>> network interfaces, etc.
>>>> - Job status and progress is reported by the Host Jobs API. If a
job
>>>> is not present, then the underlying entitie(s) must be polled by
>>>> engine to determine the actual state.
>>>>
>>>>
>>>> 3. V2V jobs - no mechanism is provided to resume failed jobs, no
>>>> leases, etc
>>>>
>>>>
>>>> This is the old infra upon which Host Jobs are built. v2v has
>>>> promised to move to Host Jobs in the future so we should not add new
>>>> dependencies to this code.
>>>>
>>>>
>>>> I have some arguments for using V2V-like jobs [1]:
>>>> 1. creating template from vm is rarely done - if host goes
unresponsive
>>>> or any
>>>> other failure is detected we can just remove the template and
report
>>>> the error
>>>>
>>>>
>>>> We can chose this error handling with Host Jobs as well.
>>>>
>>>>
>>>> 2. the phase of virt-sysprep is, unlike typical storage
operation,
>>>> short -
>>>> reducing the risk of failures during the process
>>>>
>>>>
>>>> Reduced risk of failures is never an excuse to have lax error
>>>> handling. The storage flavored host jobs provide tons of utilities
>>>> for making error handling standardized, easy to implement, and
>>>> correct.
>>>>
>>>>
>>>> 3. during the operation the VM is down - by locking the
VM/template and
>>>> its
>>>> disks on the engine side, we render leases-like mechanism
redundant
>>>>
>>>>
>>>> Eventually we want to protect all operations on storage with sanlock
>>>> leases. This is safer and allows for a more distributed approach to
>>>> management. Again, the use of leases correctly in host jobs
requires
>>>> about 5 lines of code. The benefits of standardization far outweigh
>>>> any perceived simplification resulting from omitting it.
>>>>
>>>>
>>>> 4. in the worst case - the disk will not be corrupted (only
some of the
>>>> data
>>>> might be removed).
>>>>
>>>>
>>>> Again, the way engine chooses to handle job failures is independent
of
>>>> the mechanism. Let's separate that from this discussion.
>>>>
>>>>
>>>> So I think that the mechanism for storage jobs is an over-kill
for this
>>>> case.
>>>> We can keep it simple by generalise the V2V-job for other
virt-tools
>>>> jobs, like
>>>> virt-sysprep.
>>>>
>>>>
>>>> I think we ought to standardize on the Host Jobs framework where we
>>>> can collaborate on unit tests, standardized locking and error
>>>> handling, abort logic, etc. When v2v moves to host jobs then we
will
>>>> have a unified method of handling ephemeral jobs that are tied to
>>>> entities.
>>>>
>>>> --
>>>> Adam Litke
>>>>
>>>>
>>>
>>> --
>>> Adam Litke
>>
>>
7 years, 10 months
ovirt-node_ovirt-3.6_create-iso-el7_merged job failed
by Shlomo Ben David
Hi,
The [1] job failed with the following error:
10:21:33 Package qemu-kvm-tools is obsoleted by qemu-kvm-tools-ev, trying to install 10:qemu-kvm-tools-ev-2.6.0-27.1.el7.x86_64 instead
10:21:48 Error creating Live CD : Failed to build transaction :
10:21:48 10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires usbredir >= 0.7.1
10:21:48 10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires seavgabios-bin >= 1.9.1-4
10:21:48 10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires ipxe-roms-qemu >= 20160127-4
10:21:48 10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires libusbx >= 1.0.19
10:21:48 ERROR: ISO build failed.
[1] -
http://jenkins.ovirt.org/job/ovirt-node_ovirt-3.6_create-iso-el7_merged/1...
Best Regards,
Shlomi Ben-David | DevOps Engineer | Red Hat ISRAEL
RHCSA | RHCE
IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
OPEN SOURCE - 1 4 011 && 011 4 1
7 years, 10 months
Vdsm tests are 4X times faster on travis
by Nir Soffer
Hi all,
Watching vdsm travis builds in the last weeks, it is clear that vdsm tests
on travis are about 4X faster than jenkins builds.
Here is a typical build:
ovirt ci: http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc24-x86_64/5101/con...
travis ci: https://travis-ci.org/nirs/vdsm/builds/179056079
The build took 4:34 on travis, and 19:34 on ovirt ci.
This has a huge impact on vdsm maintainers. Having to wait 20 minutes
for each patch means that we must ignore the CI and merge, hoping that the
previous tests, run without rebasing on master, were good enough.
The builds are mostly the same, except:
- In travis we don't check if the build system was changed and packages
  should be built (takes 9:18 minutes in ovirt ci).
- In travis we don't clean or install anything before the test; we use a
  container with all the available packages, pulled from dockerhub (takes
  about 3:52 minutes in ovirt ci).
- In travis we don't enable coverage. Running the tests with coverage may
  slow down the tests (takes 5:04 minutes in ovirt ci; creating the
  coverage report takes only 15 seconds, not interesting).
- In travis we don't clean up anything after the test (takes 34 seconds in
  ovirt ci).
The biggest problem is the build system check taking 9:18 minutes.
Fixing it will cut the build time in half.
This is how time is spent in ovirt ci:
1. Starting (1:28)
00:00:00.001 Triggered by Gerrit: https://gerrit.ovirt.org/67338
00:00:00.039 [EnvInject] - Loading node environment variables.
00:00:00.056 Building remotely on fc24-vm15.phx.ovirt.org (phx nested
local_disk fc24) in workspace
/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64
2. Installing packages (2:24)
00:01:28.338 INFO: installing package(s): autoconf automake gdb git
libguestfs-tools-c libselinux-python3 libvirt-python3 m2crypto make
mom openvswitch policycoreutils-python PyYAML python-blivet
python-coverage python2-decorator python-devel python-inotify
python-ioprocess python-mock python-netaddr python-pthreading
python-setuptools python-six python-requests python3-decorator
python3-netaddr python3-nose python3-six python3-yaml rpm-build
sanlock-python sudo yum yum-utils
3. Setup in check-patch.sh (00:04)
00:03:52.838 + export VDSM_AUTOMATION=1
4. Running the actual tests
00:03:56.670 + ./autogen.sh --system --enable-hooks --enable-vhostmd
5. Installing python-debugifo (00:19)
00:04:15.385 Yum-utils package has been deprecated, use dnf instead.
6. Running make check (05:04)
00:04:30.948 + TIMEOUT=600
00:04:30.948 + make check NOSE_WITH_COVERAGE=1
NOSE_COVER_PACKAGE=/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/vdsm,/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/lib
...
00:09:33.981 tests: commands succeeded
00:09:33.981 congratulations :)
7. Creating coverage report (00:15)
00:09:34.017 + coverage html -d
/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/exported-artifacts/htmlcov
8. Finding modified files (09:18)
00:09:49.213 + git diff-tree --no-commit-id --name-only -r HEAD
00:09:49.213 + egrep --quiet 'vdsm.spec.in|Makefile.am'
9. Cleaning up (00:27)
00:19:07.994 Took 915 seconds
00:19:34.973 Finished: SUCCESS
7 years, 10 months
VDSM review & development tools
by Martin Polednik
Hello developers,
this e-mail is mostly aimed at VDSM contributors (current and
potential) and should serve as a continuation of the idea started at our
weekly meeting.
We've had a proposal to move the project to github[1] and to use
reviewable[2] for code review due to github's code review interface.
I would like to start a *discussion* regarding how we would like to
develop VDSM in the future. So far, the suggested options were (the 1st
implied):
2) github & reviewable,
3) mailing list.
What would be your favorite review & development tool and why? Do you
hate any of them? Let the flamewar^Wconstructive discussion begin! :)
<subjective part>
My preferred tool is mailing list with the main tree mirrored to
github. Why mailing list in 2016 (almost 2017)?
a) stack - We're built on top of libvirt, libvirt is
built on top of qemu, and qemu utilizes kvm, which is a kernel module.
Each of these projects uses mailing lists for development.
b) tooling - Everyone is free to use tools of choice. Any sane e-mail
client can handle mailing list patches, and it's up to reviewers to
choose the best way to handle the review. As for sending the patches,
there is the universal fallback in the form of git-send-email.
c) freedom - It's up to us to decide how we would handle such
development. As many system-level projects already use mailing lists
(see a), there is enough inspiration for workflow design[3].
d) accessibility - VDSM is in an unfortunate position between "cool",
high-level projects and "boring" low-level projects. I believe that we
should be more accessible to the developers from below the stack
rather than to the general public. Having a unified workflow that doesn't
require additional accounts and is compatible with their workflows
makes that easier.
</subjective part>
[1] https://github.com/
[2] https://reviewable.io/
[3] e.g. https://www.kernel.org/doc/Documentation/SubmittingPatches
Regards,
mpolednik
7 years, 10 months
ovirt_4.0_he-system-tests job failed
by Shlomo Ben David
Hi,
The [1] job failed with the following error:
2016-12-07 02:40:43,805::ssh.py::ssh::96::lago.ssh::DEBUG::Command
8b25fff2 on lago-he-basic-suite-4-0-host0 errors:
Error: Package:
ovirt-hosted-engine-setup-2.0.4.2-0.0.master.20161201083322.gita375924.el7.centos.noarch
(alocalsync)
Requires: ovirt-engine-sdk-python >= 3.6.3.0
[1] - http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/584/console
Best Regards,
Shlomi Ben-David | DevOps Engineer | Red Hat ISRAEL
RHCSA | RHCE
IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
OPEN SOURCE - 1 4 011 && 011 4 1
7 years, 10 months