I want to remove inactive contributors from vdsm-master-maintainers.
I suggest a simple rule: remove anyone with two years of inactivity,
based on git log.
See the list below for current status:
As you probably know, we are now in a mode in which we develop our next
zstream version on the master branch, as opposed to how we worked before,
when master was dedicated to the next major version. This means the rapid
changes in master are delivered to customers at a much higher cadence,
which affects stability.
Because of that, we think it's best that from now on, merges into the
master branch are done only by stable-branch maintainers, after they have
inspected those patches.
What you need to do in order to get your patch merged:
- Have it pass Jenkins
- Have it get code review +2
- Have it marked Verified +1
- It's always encouraged to have it tested by OST; for bigger changes it's a
Once you have all of those covered, please add me as a reviewer and I'll
examine the patch and merge it if everything looks right. If I haven't done
so in a timely manner, feel free to ping me.
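For anyone scripting this, the checklist above could be expressed as a small sketch. The label names used here (`Continuous-Integration`, `Code-Review`, `Verified`) are assumptions based on common Gerrit setups, not confirmed project configuration:

```python
# Hedged sketch: decide whether a change meets the merge checklist above.
# The label names are assumptions; adjust to the project's actual Gerrit config.

def ready_to_merge(labels):
    """labels: dict mapping label name -> highest vote, e.g. {"Code-Review": 2}."""
    return (
        labels.get("Continuous-Integration", 0) >= 1  # passed Jenkins
        and labels.get("Code-Review", 0) >= 2         # code review +2
        and labels.get("Verified", 0) >= 1            # marked Verified +1
    )

print(ready_to_merge({"Continuous-Integration": 1, "Code-Review": 2, "Verified": 1}))  # True
print(ready_to_merge({"Code-Review": 2}))  # False
```

A maintainer would still review the patch itself; this only checks the votes.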
Dear Ladies and Gentlemen!
I am currently working with the java-sdk and I have encountered a problem.
When I try to retrieve the disk details, I get the following error:

wrong number of arguments

The error occurs in this line:

Disk currDisk = ovirtConnection.followLink(diskAttachment.disk());

The getResponse looks quite OK (I inspected it and it looks fine).
The code is quite similar to what you published on GitHub (https://github.com/oVirt/ovirt-engine-sdk-java/blob/master/sdk/src/test/j... ).
Can you confirm the defect?
we would like to ask the community about its interest in oVirt moving to CentOS Stream.
There were some requests before but it’s hard to see how many people would really like to see that.
With CentOS releases lagging behind RHEL for months it’s interesting to consider moving to CentOS Stream as it is much more up to date and allows us to fix bugs faster, with less workarounds and overhead for maintaining old code. E.g. our current integration tests do not really pass on CentOS 8.1 and we can’t really do much about that other than wait for more up to date packages. It would also bring us closer to make oVirt run smoothly on RHEL as that is also much closer to Stream than it is to outdated CentOS.
So, would you like us to support CentOS Stream?
We don’t really have capacity to run 3 different platforms, would you still want oVirt to support CentOS Stream if it means “less support” for regular CentOS?
There are some concerns about Stream being a bit less stable, do you share those concerns?
Thank you for your comments,
I'll be performing a planned Jenkins restart within the next hour.
No new CI jobs will be scheduled during this maintenance period.
I will inform you once it is back online.
[adding devel list]
> On 29 Sep 2020, at 09:51, Ritesh Chikatwar <rchikatw(a)redhat.com> wrote:
> I am new to the VDSM codebase.
> There is one minor bug in Gluster, and they have no near-term plans to fix it, so I need to handle it in VDSM.
> The bug shows up when I run this command:
> [root@dhcp35-237 ~]# gluster v geo-replication status
> No active geo-replication sessions
> geo-replication command failed
> [root@dhcp35-237 ~]#
> So the engine log is piling up with errors. I need to handle this in VDSM; the relevant code is here:
> https://github.com/oVirt/vdsm/blob/master/lib/vdsm/gluster/cli.py#L1231 <https://github.com/oVirt/vdsm/blob/master/lib/vdsm/gluster/cli.py#L1231>
> If I run the above command with --xml, I get:
> [root@dhcp35-237 ~]# gluster v geo-replication status --xml
> geo-replication command failed
> [root@dhcp35-237 ~]#
> So I have to run one more command before executing this one, and check whether the exception contains the string "No active geo-replication sessions". For that I did this:
maybe it goes to stderr?
other than that, no idea, sorry
> try:
>     xmltree = _execGluster(command)
> except ge.GlusterCmdFailedException as e:
>     if "No active geo-replication sessions" in str(e):
>         return
> It seems like this is not working. Can you help me here?
> Any help will be appreciated.
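As noted above, the gluster error text may end up on stderr rather than in the exception's string form, which would explain why the substring check never matches. A minimal sketch, assuming (hypothetically, this is not confirmed from vdsm's actual exception class) that `GlusterCmdFailedException` keeps the process's stderr lines in an `err` attribute, would check both places:

```python
# Hedged sketch: the real exception lives in vdsm/gluster/exception.py; this
# stand-in only mimics the assumed shape (rc plus stderr lines in `err`)
# so the handling logic can be shown in a self-contained way.

NO_SESSIONS = "No active geo-replication sessions"

class GlusterCmdFailedException(Exception):
    def __init__(self, rc=0, err=None):
        self.rc = rc
        self.err = err or []           # assumed: captured stderr lines
        super().__init__("rc=%d err=%s" % (rc, self.err))

def geo_rep_status(exec_gluster):
    """Run the status command; treat 'no sessions' as an empty result."""
    try:
        return exec_gluster()
    except GlusterCmdFailedException as e:
        # Search both the string form and the captured stderr lines.
        haystack = str(e) + " " + " ".join(e.err)
        if NO_SESSIONS in haystack:
            return []                  # no sessions is not an error
        raise                          # a real failure: propagate it
```

The key point is re-raising when the message does not match, so genuine geo-replication failures are still surfaced.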
Currently, when we run virt-sparsify on a VM, or a user runs a VM with
discard enabled, and the disk is a qcow volume on block storage, the
results are not reflected in oVirt. The blocks get discarded, the storage
can reuse them and reports correct allocation statistics, but oVirt does
not. In oVirt one can still see the original allocation for the disk and
the storage domain as it was before the blocks were discarded. This is
super confusing to users: when they check after running virt-sparsify and
see the same values, they think sparsification is not working, which is
not true.
It all seems to be because of the LVM layout that we have on the storage
domain. The feature page for discard suggests it could be solved by
running lvreduce, but this does not seem to be true. When blocks are
discarded, the qcow image does not necessarily change its apparent size;
the blocks don't have to be removed from the end of the disk, so running
lvreduce is likely to remove valuable data.
At the moment I don't see how we could achieve correct values. If anyone
has an idea, feel free to share it. The only option seems to be switching
to LVM thin pools. Do we have any plans for that?
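To illustrate why lvreduce is not a safe answer: discarding blocks in the middle of an image lowers its allocation without shrinking its apparent size, so truncating from the end would cut real data. A minimal sketch using an ordinary sparse file as a stand-in for the qcow image (assumes the filesystem supports sparse files, which most Linux filesystems do):

```python
import os
import tempfile

# A file whose apparent size is 64 MiB but whose allocation is tiny:
# the only real byte sits at the very end, everything before it is a
# hole -- analogous to a qcow image whose discarded blocks are in the
# middle. Shrinking from the end would destroy that last byte.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 64 * 1024 * 1024 - 1, os.SEEK_SET)
    os.write(fd, b"x")                 # one real byte at the end
    st = os.stat(path)
    apparent = st.st_size              # 64 MiB apparent size
    allocated = st.st_blocks * 512     # actual allocation: far smaller
    print("apparent:", apparent, "allocated:", allocated)
finally:
    os.close(fd)
    os.unlink(path)
```

The apparent size stays at 64 MiB even though almost nothing is allocated, which is exactly why the LV cannot simply be reduced to the allocated size.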
Tomáš Golembiovský <tgolembi(a)redhat.com>
My name is Luwen Zhang from Vinchin Backup & Recovery.
I'm sending you this email because our developers encountered some problems while developing an incremental backup feature following the oVirt documentation.
We downloaded oVirt 4.4.1 from the oVirt website and are now developing against this version. Per the documentation, when we perform the first full backup we should obtain a checkpoint ID, and when we later perform an incremental backup using this ID we should get only what has changed on the VM. The problem is that we cannot get the checkpoint ID.
The components and versions of oVirt are as follows:
Could you please check this issue and point out what is wrong and what we should do?
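One possible cause (an assumption here, not confirmed from the report): the backup's checkpoint ID is typically only populated once the backup reaches its ready phase, so reading it immediately after starting the backup yields nothing. A minimal polling sketch with an injected status function (the names `get_phase` and `READY` are illustrative, not the SDK's actual API):

```python
import time

READY = "ready"  # illustrative phase name, not necessarily the SDK's constant

def wait_for_phase(get_phase, wanted=READY, timeout=300, interval=1):
    """Poll get_phase() until it returns `wanted` or the timeout expires.

    get_phase is injected so the caller can wrap whatever API call
    refreshes the backup object; returns True on success, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_phase() == wanted:
            return True
        time.sleep(interval)
    return False

# Usage sketch with a fake status source: after starting the backup,
# wait until it is ready and only then read the checkpoint ID from the
# refreshed backup object.
phases = iter(["starting", "starting", READY])
assert wait_for_phase(lambda: next(phases), interval=0) is True
```

If the checkpoint ID is read only after this wait succeeds, the "empty checkpoint ID" symptom described above should not occur.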
Looking forward to your reply.
Thanks & regards!
Luwen Zhang | Product Manager
F5, Block 8, National Information Security Industry Park, No.333 YunHua Road, Hi-Tech Zone, Chengdu, China | P.C.610015
Eitan Raviv has been working on the oVirt project for more than 3 years.
He contributed more than 40 patches to ovirt-system-tests, including
more than 35 patches to the network-suite.
He is already recognized as a relevant reviewer for the network-suite,
so it is time to give him an official role for the work he is already
doing. I would like to propose Eitan as an ovirt-system-tests
network-suite maintainer.