I'm getting the feeling I'm not alone in this: authoring and publishing a
wiki page hasn't been as easy as it used to be for a long time now.
I want to suggest a slightly lighter workflow:
1. Everyone can merge their page (it's a wiki)
Same as with (public and open) code, no one has the motivation to publish
a badly written wiki page under their name. True, a bad page can have an
impact, but not as much as bad code can.
2. Use a page-status marker
The author first merges the draft. It's now out there and should be updated
as time goes by, while its status is DRAFT. Maintainers will come later and,
after review, change the status to PUBLISH. That could be a header on the page:
page status: DRAFT/PUBLISH
Simple, I think, and it should work.
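As a sketch, assuming the wiki pages support YAML front matter, the marker could be as simple as this (the field name is just a suggestion):

```yaml
---
title: My New Page
page_status: DRAFT   # maintainers flip this to PUBLISH after review
---
```

A trivial site-build step could then render DRAFT pages with a visible banner, or exclude them from navigation.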
Test failed: [ 002_bootstrap.add_secondary_storage_domains ]
Link to suspected patches: N/A
Link to Job:
Link to all logs:
Error snippet from the log:
2017-03-23 07:23:38,867-0400 ERROR (jsonrpc/2) [storage.TaskManager.Task]
(Task='8347d74c-92fe-4371-bc84-1314a43a2971') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1159, in attachStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
    dom = sdCache.produce(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 112, in produce
  File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
  File "/usr/share/vdsm/storage/sdc.py", line 136, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 153, in _findDomain
  File "/usr/share/vdsm/storage/sdc.py", line 178, in _findUnfetchedDomain
StorageDomainDoesNotExist: Storage domain does not exist:
Shlomi Ben-David | Software Engineer | Red Hat ISRAEL
RHCSA | RHCVA | RHCE
IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
OPEN SOURCE - 1 4 011 && 011 4 1
The error in vdsm.log:
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2016, in _setup_devices
  line 63, in setup
  File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 90, in create_libvirt_network
  line 94, in create_network
    if not is_libvirt_network(netname):
  line 159, in is_libvirt_network
    netname = LIBVIRT_NET_PREFIX + netname
TypeError: cannot concatenate 'str' and 'NoneType' objects
2017-03-29 22:58:39,559-0400 ERROR (vm/d71bdf4e) [virt.vm]
(vmId='d71bdf4e-1eb3-4762-bd0e-05bb9f5e43ef') The vm start process
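For clarity, the failure is a None network name reaching string concatenation. A minimal sketch of that failure mode and one possible guard (the prefix value and function name here are assumptions, not vdsm's actual code):

```python
LIBVIRT_NET_PREFIX = 'vdsm-'  # assumed value, standing in for vdsm's constant

def libvirt_network_name(netname):
    """Build the libvirt network name, guarding against a None netname.

    Without the guard, LIBVIRT_NET_PREFIX + None raises exactly the
    TypeError seen in the log above; validating the input turns it into
    a clear, early error instead.
    """
    if netname is None:
        raise ValueError('network name must not be None')
    return LIBVIRT_NET_PREFIX + netname
```

The real fix may belong further up the stack (whoever passes netname=None into device setup), but a guard like this at least makes the failure self-explanatory.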
The tests last passed on Mar 28. Did a recent patch break this?
The full build logs are at
We're looking to migrate from iptables to firewalld. We came up with a
couple of possible approaches we'd like opinions on. I'll list the options
first, and will expand on each below.
1) Replicate the existing flow:
As of today, iptables rules are inserted into the database via SQL config
files. During host deployment, VdsDeployIptablesUnit adds the required
rules (based on cluster/firewall configuration) to the deployment
configuration, en route to being deployed on the host via otopi and its
plugins.
Pros:
- Reuse of existing infrastructure.
Cons:
- Current infrastructure is overly complex...
- Many of the required services are provided by firewalld. Rewriting them
is wasteful; specifying them by name (instead of providing actual service .xml
content) will require adaptations on both (engine/host) sides. More on that
below.
2) Host side based configuration:
Essentially, all the required logic (aforementioned cluster/firewall
configuration) to determine if/how firewalld should be deployed could be
passed on to the host via ohd. Vdsm could take on the responsibility of
examining the relevant configuration, and then creating and/or adding the
required services (using vdsm.conf and vdsm-tool).
Pros:
- Engine side involvement is greatly diminished.
Cons:
- Custom services/rules capabilities will have to be rethought and
re-implemented (the current infrastructure supports custom iptables rules
specified in the SQL config file).
3) Some other hybrid approach:
If we're able to guarantee that all the required firewalld services are
statically provided one way or the other, the current procedure could be
replicated and made simpler. Instead of providing xml content in the form
of strings, service names could be supplied. Actual service deployment
becomes easier, and could be left to otopi (with the appropriate
modifications) or switched over to vdsm.
Regardless, the choice between statically provided and dynamically created
services remains an open question. I think we'd like to avoid implementing
logic that asks whether some service is provided (and then writes it if it
isn't...), so choosing between the dynamic and static approaches is also
needed. With the static approach, we'll have to guarantee that *all*
services are provided.
I do believe guaranteeing the presence of all required services is worth
it; however, custom services aren't going to be naively compatible, and
we'll still have to use a mechanism similar to the one described in #1
(service string -> .xml -> addition of service name to active zone).
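For context on the "service string -> .xml" step: a firewalld service is just a small XML file dropped into /etc/firewalld/services/ (or shipped in /usr/lib/firewalld/services/). A sketch of what a vdsm service could look like (the short name and description are assumptions; 54321/tcp is vdsm's well-known port):

```xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>vdsm</short>
  <description>oVirt host agent (vdsm) management port</description>
  <port protocol="tcp" port="54321"/>
</service>
```

Once such a file is in place, wiring it into the active zone is a one-liner on the host, e.g. `firewall-cmd --permanent --add-service=vdsm && firewall-cmd --reload`, which is part of why statically shipped services keep the deployment logic small.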
Your thoughts are welcome.
Hello UI devs,
with oVirt master UI using the latest GWT SDK, I'd like to host a live
demo session with two goals in mind:
a, explain how the (Java IDE based) Classic Dev Mode works and mention its
deprecation
b, explain how the (browser based) Super Dev Mode works and encourage
people to start using it to boost their productivity
Classic and Super Dev Mode are two possible ways to debug GWT
applications. This session will give you the knowledge to decide which one
to use in a specific situation.
Proposed time: Mon, Apr 3 @ 5pm CET / 6pm TLV / 11am US EST. (This can be
changed as needed.)
Let me know if this kind of session interests you, or if the above time
doesn't fit you but you'd still like to join.
we're using the latest GWT version in master UI now.
We can start using Java 8 syntax in our frontend code. GWT 2.8 also brings
partial support for Java 8 standard library APIs; see "JDK emulation"
for details.
Effectively, all Engine (Java Maven) modules are now built with Java 8
source & target level, except backend/manager/modules/extensions-api-root.
The compiled UI should take advantage of new web APIs (e.g. using ES6 Maps
to implement Java HashMap) as well as general performance improvements
(e.g. a faster "long" emulation).
GWT 2.8 removes the deRPC (direct-eval RPC) mechanism, but that's OK
because we're already using the standard GWT RPC.
There are no changes to the existing build & development process:
- debugging via Java IDE (aka Classic Dev Mode) remains the default debug
method
- debugging via browser (aka Super Dev Mode) can be enabled via the
DEV_BUILD_GWT_SUPER_DEV_MODE flag
  example: $ make gwt-debug DEBUG_MODULE=webadmin
Note that in GWT 2.8 the Super Dev Mode is the new default debug method
(with Classic Dev Mode being deprecated).
There are still some post-upgrade tasks to do; we're tracking them on
If you encounter any issues related to GWT compilation or debugging, let me
know.
Thanks very much for the response. I am currently trying to develop a
proposal for this project, but I have not yet been able to successfully
set up oVirt. I just realized I could not install the oVirt-engine on
Ubuntu 14, so I am currently trying to download and install Fedora 23,
where I intend to set up the oVirt-engine. I would then use another
machine with Ubuntu for the host.
With respect to this project, I would like to know if I am also
required to compile and install the source code for oVirt-engine, or do I
only need to build and install ovirt4cli.
On 17 March 2017 at 20:26, Yaniv Kaul <ykaul(a)redhat.com> wrote:
> Hi Konrad,
> I'm very happy to hear that and wish you good luck with the project.
> I think it'd be best if you can begin with familiarizing yourself with
> If you have a few computers around, it should be fairly easy to set it up. If
> you have a single computer, you can set up Lago and ovirt-system-tests
> to bring up an environment as well.
> The (very basic) code for ovirt4cli is available on my github - I have
> just created a very basic framework as a proof of concept - feel free to
> fork and modify it. Specifically, I think we should re-work it to share more
> code with our Ansible code.
> Let me know if there's anything I can help you with in the project!
>  http://lago.readthedocs.io/en/stable/
>  http://ovirt-system-tests.readthedocs.io/en/latest/
>  https://github.com/mykaul/ovirt4cli
> On Fri, Mar 17, 2017 at 9:06 AM, Konrad Djimeli <djkonro35(a)gmail.com> wrote:
>> My name is Konrad Djimeli, a third-year Computer Science student at the
>> University of Buea, Cameroon. I am interested in contributing to oVirt
>> and would like to work on the Google Summer of Code project
>> "ovirt4cli". I am very comfortable working with Python and have
>> experience working with web services like REST.
>> I would appreciate any suggestions on how to get started and how to
>> better familiarize myself with the project.
Following oVirt 4.1.1 GA, the oVirt team identified some bugs worth
addressing outside the usual release cycle.
For this reason, an async release including these fixes is under preparation.
Updates to the following packages have been pushed to the pre-release
repository for testing.
In order to test them, you'll need the pre-release repository enabled:
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41-pre.rpm
An update to the release notes, covering bugs related to the above packages
and the documentation text updates applied in Bugzilla, has been pushed to
https://github.com/oVirt/ovirt-site/pull/888 for review.
On behalf of the oVirt team,
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Recently I had a performance-improvement task in one of our network-related
flows and had a hard time following our DAL code. One of the outcomes of
the task was defining a couple of new, quite simple but neat queries.
When I came to coding those new queries, remembering how hard following
the existing DAL code was, I decided to make my own new methods clearer.
So I created  and  patches.
Everything is quite standard there, besides the fact that they do not use
any stored procedures, but use SQL directly. IMHO that saves the time I
spent trying to follow what a DAO method does. Looking into the method
code, you immediately understand what the method is all about:
- no looking for a stored procedure name buried in the DAO class
- no looking for the SP definition
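To illustrate the readability argument (the real DAOs are Java, so this is only a Python sqlite3 sketch with made-up table and column names):

```python
import sqlite3

# Made-up schema standing in for an engine table.
conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE network (id INTEGER PRIMARY KEY, name TEXT, cluster_id INTEGER)')
conn.executemany(
    'INSERT INTO network (name, cluster_id) VALUES (?, ?)',
    [('ovirtmgmt', 1), ('storage', 1), ('backup', 2)])

def get_network_names_by_cluster(cluster_id):
    # The whole query is visible at the call site -- no hunting for a
    # stored procedure name in the DAO class or for the SP definition.
    rows = conn.execute(
        'SELECT name FROM network WHERE cluster_id = ? ORDER BY name',
        (cluster_id,))
    return [name for (name,) in rows]
```

The point is only the shape: the method body *is* the query, so a flow-specific query (with its own join or filter) is cheap to add instead of reusing a general-purpose one and joining in bll code.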
So I'd like to propose moving towards this approach in general, in all
cases when nothing beyond a simple SQL statement is needed (no
stored-procedure programming language required).
From my experience with the performance-improvement task, it looks like
people avoid adding new queries for the specific need of a flow; instead
they use the existing general ones (e.g. dao.getAllForX()) and do the
actual join in the bll code.
IMHO the proposed approach would simplify adding new specific queries and
thereby prevent a decent part of performance issues in the future.
I do not propose changing all existing SPs to inline queries at once,
but to allow adding new queries this way. Also, we might consider
converting relatively simple SPs to inline SQL statements later, in a
gradual way.
 - https://gerrit.ovirt.org/#/c/74456
 - https://gerrit.ovirt.org/#/c/74458