I'll be performing a planned Jenkins restart within the next hour.
No new CI jobs will be scheduled during this maintenance period.
I will inform you once it is back online.
[adding devel list]
> On 29 Sep 2020, at 09:51, Ritesh Chikatwar <rchikatw(a)redhat.com> wrote:
> I am new to the VDSM codebase.
> There is one minor bug in gluster, and they have no near-term plan to fix it, so I need to fix this in VDSM.
> The bug is that when I run the command:
> [root@dhcp35-237 ~]# gluster v geo-replication status
> No active geo-replication sessions
> geo-replication command failed
> [root@dhcp35-237 ~]#
> So the engine is piling up with errors. I need to handle this in VDSM; the code is at this place:
> https://github.com/oVirt/vdsm/blob/master/lib/vdsm/gluster/cli.py#L1231
> If I run the above command with --xml, then I get:
> [root@dhcp35-237 ~]# gluster v geo-replication status --xml
> geo-replication command failed
> [root@dhcp35-237 ~]#
> So I have to run one more command before executing this, and check in the exception handler whether the message contains the string "No active geo-replication sessions". For that I did something like this:
Maybe it goes to stderr?
Other than that, no idea, sorry.
>     try:
>         xmltree = _execGluster(command)
>     except ge.GlusterCmdFailedException as e:
>         if str(e).find("No active geo-replication sessions") != -1:
>             return
> Seems like this is not working; can you help me here?
> Any help will be appreciated.
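If the message indeed goes to stderr, as suggested above, something along
these lines might work. This is only a rough sketch, not VDSM code: the
"err" attribute and the empty-list return value are assumptions that need
to be checked against lib/vdsm/gluster/exception.py and against what the
callers of this function expect.

    try:
        xmltree = _execGluster(command)
    except ge.GlusterCmdFailedException as e:
        # Look for the marker both in the formatted message and in the raw
        # stderr lines (the "err" attribute is an assumption, verify it).
        text = str(e) + " " + " ".join(getattr(e, "err", []) or [])
        if "No active geo-replication sessions" in text:
            return []  # assumed "no sessions" result; adjust to the caller's contract
        raise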
Currently, when we run virt-sparsify on a VM, or a user runs a VM with
discard enabled, and the disk is a qcow volume on block storage, the
results are not reflected in oVirt. The blocks get discarded, the storage
can reuse them and reports correct allocation statistics, but oVirt does
not. In oVirt one can still see the original allocation for the disk and
the storage domain as it was before the blocks were discarded. This is
super-confusing to users, because when they check after running
virt-sparsify and see the same values, they think sparsification is not
working, which is not true.
It all seems to be because of the LVM layout we have on the storage
domain. The feature page for discard suggests it could be solved by
running lvreduce, but this does not seem to be true. When blocks are
discarded, the qcow2 image does not necessarily change its apparent
size; the discarded blocks don't have to be at the end of the image.
So running lvreduce is likely to remove valuable data.
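To illustrate why: what limits a safe lvreduce is the highest offset
actually used inside the qcow2 image, and discarding clusters in the
middle of the image does not move that offset. A rough sketch of how one
could read it ("image-end-offset" is what recent qemu-img versions report
in the JSON output of qemu-img check; treat the field name and the whole
approach as an assumption to verify, and only run it on volumes that are
not in use):

    import json
    import subprocess

    def qcow2_end_offset(path):
        # Highest used offset in the qcow2 image; qemu-img check may exit
        # non-zero when it finds leaks, so don't fail on the exit code.
        out = subprocess.run(
            ["qemu-img", "check", "--output=json", path],
            stdout=subprocess.PIPE, check=False).stdout
        return json.loads(out)["image-end-offset"]

Shrinking the LV below this offset would cut into allocated data, which is
why discarded clusters in the middle of the image do not make the volume
shrinkable.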
At the moment I don't see how we could achieve the correct values. If
anyone has any idea, feel free to enlighten me. The only option seems
to be switching to LVM thin pools. Do we have any plans for doing that?
Tomáš Golembiovský <tgolembi(a)redhat.com>
My name is Luwen Zhang from Vinchin Backup & Recovery.
I'm sending you this email because our developers encountered some problems
while developing an incremental backup feature following the oVirt
documentation.
We downloaded oVirt 4.4.1 from the oVirt website and are now developing based
on this version. As per the documentation, when we perform the first full
backup we should obtain a "checkpoint id", so that when we perform an
incremental backup using this ID we know what has changed on the VM. The
problem is that we cannot get the checkpoint ID.
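For reference, a simplified sketch of how the checkpoint id is typically
obtained via the Python SDK (ovirtsdk4): start a full backup, wait for it
to become ready, and read the id from the backup object. The URL, UUIDs
and credentials are placeholders, and names such as to_checkpoint_id and
BackupPhase.READY follow the 4.4 SDK examples and should be verified
against the installed version:

    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal', password='password', insecure=True)

    vm_service = connection.system_service().vms_service().vm_service('VM-UUID')
    backups_service = vm_service.backups_service()

    # Full backup: no from_checkpoint_id, the engine creates a new checkpoint.
    backup = backups_service.add(types.Backup(disks=[types.Disk(id='DISK-UUID')]))
    backup_service = backups_service.backup_service(backup.id)

    while backup.phase != types.BackupPhase.READY:
        time.sleep(1)
        backup = backup_service.get()

    # This is the id the next, incremental backup passes as from_checkpoint_id.
    print("checkpoint id:", backup.to_checkpoint_id)

In our environment this field stays empty, which is the problem described
above.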
The components and versions of oVirt are as follows:
Could you please kindly check this issue and point out what is wrong and what we should do?
Looking forward to your reply.
Thanks & regards!
Luwen Zhang | Product Manager
F5, Block 8, National Information Security Industry Park, No.333 YunHua Road, Hi-Tech Zone, Chengdu, China | P.C.610015
Eitan Raviv has been working on the oVirt project for more than 3 years.
He contributed more than 40 patches to ovirt-system-tests, including
more than 35 patches to the network-suite.
He is already recognized as a relevant reviewer for the network-suite,
so it is time to give him the official role for the work he is already
doing.
I would like to propose Eitan as an ovirt-system-tests network-suite
maintainer.
I'm trying to build the engine on a new VM, installed based on the README,
and it fails in ansible-lint on:
Is this a known issue? Any workaround?
$ make clean install-dev PREFIX="$HOME/ovirt-engine"
+ /usr/bin/ansible-lint -c build/ansible-lint.conf
[WARNING]: While constructing a mapping from <unicode string>,
line 118, column 7, found a duplicate dict key
(when). Using last defined value only.
[WARNING]: While constructing a mapping from <unicode string>, line
118, column 7, found a duplicate dict key (when). Using last defined value only.
Traceback (most recent call last):
File "/usr/bin/ansible-lint", line 11, in <module>
load_entry_point('ansible-lint==4.1.0', 'console_scripts', 'ansible-lint')()
line 187, in main
line 282, in run
line 174, in run
line 84, in matchtasks
yaml = ansiblelint.utils.append_skipped_rules(yaml, text, file['type'])
File "/usr/lib/python3.6/site-packages/ansiblelint/utils.py", line
604, in append_skipped_rules
ruamel_data = yaml.load(file_text)
File "/usr/lib64/python3.6/site-packages/ruamel/yaml/main.py", line
266, in load
line 105, in get_single_data
line 115, in construct_document
for dummy in generator:
line 1357, in construct_yaml_map
line 1266, in construct_mapping
self.check_mapping_key(node, key_node, maptyp, key, value)
line 265, in check_mapping_key
ruamel.yaml.constructor.DuplicateKeyError: while constructing a mapping
in "<unicode string>", line 118, column 7:
- name: Populate logging_outputs d ...
^ (line: 118)
" (original value: "collect_ovirt_engine_log or collect_ovirt_vdsm_log")
in "<unicode string>", line 135, column 7:
^ (line: 135)
To suppress this check see:
Duplicate keys will become an error in future releases, and are errors
by default when using the new API.
make: *** [Makefile:366: validations] Error 1
make: Leaving directory '/home/nsoffer/src/ovirt-engine'
make: *** [Makefile:545: all-dev] Error 2
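For what it's worth, the crash comes from ruamel.yaml rather than from
ansible-lint itself: with the new ruamel API, duplicate keys in a mapping
are a hard error, so a playbook task with two "when" keys aborts the
linting run. A minimal reproduction, assuming a recent ruamel.yaml where
the new API is the default:

    from ruamel.yaml import YAML
    from ruamel.yaml.constructor import DuplicateKeyError

    SNIPPET = """
    - name: example task
      when: first_condition
      when: second_condition
    """

    try:
        YAML().load(SNIPPET)
    except DuplicateKeyError as exc:
        print("duplicate key rejected:", exc)

So a likely workaround is on the playbook side: merge the reported
duplicate "when" conditions into a single key (for example a list of
conditions), after which ansible-lint should get past this file.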
It has been reported to the oVirt website on
that we may improve the upgrade documentation, covering (or linking to
existing documentation) the gluster brick migration from el7 to el8 while
upgrading from 4.3 to 4.4.
Have we already got something we can provide there?
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Recently, pushing changes to gerrit has been very slow. It seems that the
issue is pulling changes during the rebase in "git review".
Sometimes "git review" is very slow, but "git review -R" is very quick. This
shows that the issue is fetching changes from gerrit when git-review
tries to rebase the current branch on master.
I noticed that origin/master and gerrit/master do not show the same commit:
$ git log origin/master | head -1
$ git log gerrit/master | head -1
$ git remote -v
gerrit http://gerrit.ovirt.org/p/vdsm.git (fetch)
gerrit ssh://email@example.com:29418/vdsm.git (push)
origin http://gerrit.ovirt.org/p/vdsm.git (fetch)
origin http://gerrit.ovirt.org/p/vdsm.git (push)
I've been using this configuration for several years, and the slowness
started only recently, so I guess the issue is not on my side.
Same issues also on ovirt-engine, using similar configuration:
$ git remote -v
gerrit http://gerrit.ovirt.org/ovirt-engine (fetch)
gerrit ssh://firstname.lastname@example.org:29418/ovirt-engine.git (push)
origin git://gerrit.ovirt.org/ovirt-engine (fetch)
origin git://gerrit.ovirt.org/ovirt-engine (push)
Has anyone experienced this?
I guess that restarting the gerrit server will "fix" this issue.