[adding devel list]
> On 29 Sep 2020, at 09:51, Ritesh Chikatwar <rchikatw(a)redhat.com> wrote:
>
> Hello,
>
> I am new to the VDSM codebase.
>
> There is a minor bug in Gluster and they have no near-term plan to fix it, so I need to handle it in VDSM.
>
> The bug is that when I run the command
> [root@dhcp35-237 ~]# gluster v geo-replication status
> No active geo-replication sessions
> geo-replication command failed
> [root@dhcp35-237 ~]#
>
> So the engine log is piling up with errors. I need to handle this in VDSM, in the code at this place:
>
> https://github.com/oVirt/vdsm/blob/master/lib/vdsm/gluster/cli.py#L1231
>
> If I run the above command with --xml, I get
>
> [root@dhcp35-237 ~]# gluster v geo-replication status --xml
> geo-replication command failed
> [root@dhcp35-237 ~]#
>
> So I have to run one more command before executing this, and check in the exception whether it contains the string "No active geo-replication sessions". For that I did it like this:
Maybe it goes to stderr?
Other than that, no idea, sorry.
Thanks,
michal
>
> try:
>     xmltree = _execGluster(command)
> except ge.GlusterCmdFailedException as e:
>     if str(e).find("No active geo-replication sessions") != -1:
>         return []
> It seems like this is not working. Can you help me here?
>
>
> Any help will be appreciated
> Ritesh
>
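Following up on Michal's hint that the message may go to stderr, here is a minimal sketch of the exception handling. It assumes GlusterCmdFailedException may expose the command's stderr through an attribute such as err (that attribute name is an assumption, check vdsm/lib/vdsm/gluster/exception.py for the real fields), so it looks at both str(e) and that attribute, and re-raises anything that is not the "no sessions" case:

try:
    xmltree = _execGluster(command)
except ge.GlusterCmdFailedException as e:
    # Collect everything the exception carries; the message may be in
    # str(e) or, if gluster wrote it to stderr, in an attribute such as
    # e.err (attribute name assumed, not verified against vdsm).
    details = " ".join([str(e), str(getattr(e, "err", ""))])
    if "No active geo-replication sessions" in details:
        return []
    raise

Re-raising in the fallthrough case keeps other failures visible instead of silently swallowing the exception.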
Hi,
currently, when we run virt-sparsify on a VM, or a user runs a VM with
discard enabled, and the disk is on block storage in qcow2 format, the
results are not reflected in oVirt. The blocks get discarded, the storage
can reuse them and reports correct allocation statistics, but oVirt does
not. In oVirt one can still see the original allocation for the disk and
the storage domain as it was before the blocks were discarded. This is
super confusing to users: when they check after running virt-sparsify and
see the same values, they think sparsification is not working, which is
not true.
It all seems to be because of the LVM layout we have on the storage
domain. The feature page for discard [1] suggests it could be solved by
running lvreduce, but this does not seem to be true. When blocks are
discarded, the qcow2 image does not necessarily change its apparent size,
because the blocks don't have to be removed from the end of the disk. So
running lvreduce is likely to remove valuable data.
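To make the lvreduce point concrete, here is an illustrative sketch (not vdsm code; the LV path is hypothetical) that compares the qcow2 "image end offset" reported by qemu-img check with the LV size. After a discard, the holes usually sit in the middle of the image, so the end offset stays close to the LV size and there is nothing safe to cut off the end:

import json
import subprocess

def qcow2_end_offset(path):
    # qemu-img check reports how far into the device the qcow2 data extends.
    res = subprocess.run(
        ["qemu-img", "check", "-f", "qcow2", "--output=json", path],
        stdout=subprocess.PIPE, check=False)
    return json.loads(res.stdout)["image-end-offset"]

def lv_size_bytes(path):
    # Ask LVM for the logical volume size in plain bytes.
    out = subprocess.check_output(
        ["lvs", "--noheadings", "--nosuffix", "--units", "b",
         "-o", "lv_size", path])
    return int(out.strip())

path = "/dev/vg_name/lv_name"  # hypothetical LV backing the qcow2 volume
print("qcow2 image end offset:", qcow2_end_offset(path))
print("LV size:               ", lv_size_bytes(path))

If the end offset is still close to the LV size after sparsification, shrinking the LV would cut into allocated qcow2 clusters.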
At the moment I don't see how we could achieve the correct values. If
anyone has any idea, feel free to entertain me. The only option seems to
be switching to LVM thin pools. Do we have any plans to do that?
Tomas
[1] https://www.ovirt.org/develop/release-management/features/storage/pass-disc…
--
Tomáš Golembiovský <tgolembi(a)redhat.com>