Re: Incremental Backup | oVirt
by Nir Soffer
On Thu, Oct 8, 2020, 12:28 luwen.zhang(a)vinchin.com <luwen.zhang(a)vinchin.com>
wrote:
> Dear team,
>
> My name is Luwen Zhang from Vinchin Backup & Recovery.
>
> I'm sending you this email because our developers encountered some
> problems while developing the incremental backup feature following the oVirt
> documentation.
>
> https://www.ovirt.org/documentation/incremental-backup-guide/incremental-...
>
> We downloaded oVirt 4.4.1 from the oVirt website and are now developing based
> on this version. As per the documentation, when we perform the first
> full backup we should obtain a "checkpoint ID", and when we perform an
> incremental backup using this ID we should know what has changed on the
> VM. The problem is that we cannot get the checkpoint ID.
>
> The components and versions of oVirt are as follows:
>
You are using qemu 4.2 and libvirt 6.0. These versions are too old; they are
missing some features or have bugs that will not be fixed. They may be
good enough for development, but your product will need newer versions.
We now require libvirt 6.6 and qemu 5.1. These versions will be available
in CentOS 8.3.
On the libvirt side, incremental backup will be fully supported only in RHEL AV
8.3.1. I expect we will build the relevant version for CentOS after that
version is released for RHEL.
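A quick way to fail fast on versions below these minimums is to compare dotted version numbers; a minimal sketch, assuming plain dotted-integer version strings (real RPM versions carry epoch/release parts that this ignores):

```python
# Minimums stated above: libvirt 6.6 and qemu 5.1.
MINIMUMS = {"libvirt": "6.6.0", "qemu": "5.1.0"}

def version_tuple(version):
    """Turn a dotted version like '6.0.0' into (6, 0, 0) for comparison."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(component, installed):
    """True if the installed version satisfies the required minimum."""
    return version_tuple(installed) >= version_tuple(MINIMUMS[component])
```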
Nir
>
>
> Could you please kindly check on this issue and point out what should be
> wrong and what we should do?
>
> Looking forward to your reply.
>
> Thanks & regards!
> ------------------------------
> *Luwen Zhang* | Product Manager
>
> Tel: +86-28-8553-0156
> Mob: +86-138-8042-4687
> Skype: luwen.zhang_cn
> E-mail: luwen.zhang(a)vinchin.com
>
> F5, Block 8, National Information Security Industry Park, No.333 YunHua
> Road, Hi-Tech Zone, Chengdu, China | P.C.610015
> _______________________________________________
> Devel mailing list -- devel(a)ovirt.org
> To unsubscribe send an email to devel-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CC56L57GK3D...
>
Re: Incremental Backup | oVirt
by Eyal Shenitzky
Hi Luwen,
Full backup using the new API is supported by default, but in order to
enable incremental backup you need to follow these steps from the
feature page:
# engine-config -s "IsIncrementalBackupSupported=true"
# systemctl restart ovirt-engine
Once those steps are done, each backup operation will create a
checkpoint, and you will receive the created checkpoint ID.
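Once incremental backup is enabled, the checkpoint ID can be read from the backup entity the API returns; a hedged sketch of extracting it from a REST response body (the `to_checkpoint_id` element name is taken from the 4.4 backup API, and the sample XML below is illustrative, not a real engine response):

```python
import xml.etree.ElementTree as ET

# Illustrative response body; a real engine response carries more fields.
SAMPLE_BACKUP_XML = """
<backup id="backup-uuid">
    <phase>ready</phase>
    <to_checkpoint_id>1f2d3c4b-0000-0000-0000-000000000001</to_checkpoint_id>
</backup>
"""

def checkpoint_id(backup_xml):
    """Return the checkpoint ID created by a backup, or None if absent
    (e.g. when IsIncrementalBackupSupported was never enabled)."""
    element = ET.fromstring(backup_xml).find("to_checkpoint_id")
    return element.text if element is not None else None
```

A missing `to_checkpoint_id` in the response is exactly the symptom described above, so checking for `None` distinguishes "feature disabled" from a parsing problem.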
On Thu, 8 Oct 2020 at 12:28, luwen.zhang(a)vinchin.com <
luwen.zhang(a)vinchin.com> wrote:
> Dear team,
>
> My name is Luwen Zhang from Vinchin Backup & Recovery.
>
> I'm sending you this email because our developers encountered some
> problems while developing the incremental backup feature following the oVirt
> documentation.
>
> https://www.ovirt.org/documentation/incremental-backup-guide/incremental-...
>
> We downloaded oVirt 4.4.1 from the oVirt website and are now developing based
> on this version. As per the documentation, when we perform the first
> full backup we should obtain a "checkpoint ID", and when we perform an
> incremental backup using this ID we should know what has changed on the
> VM. The problem is that we cannot get the checkpoint ID.
>
> The components and versions of oVirt are as follows:
>
>
>
> Could you please kindly check on this issue and point out what should be
> wrong and what we should do?
>
> Looking forward to your reply.
>
> Thanks & regards!
> ------------------------------
> *Luwen Zhang* | Product Manager
>
> Tel: +86-28-8553-0156
> Mob: +86-138-8042-4687
> Skype: luwen.zhang_cn
> E-mail: luwen.zhang(a)vinchin.com
>
> F5, Block 8, National Information Security Industry Park, No.333 YunHua
> Road, Hi-Tech Zone, Chengdu, China | P.C.610015
--
Regards,
Eyal Shenitzky
certificates not created?
by lejeczek
hi
I have ovirt from ovirt-release44-4.4.2-1.el8.noarch.
I presumed these certificate files should be created without any user
intervention, and I wonder if this is a bug?
...
Oct 03 15:02:42 dzien.private.pawel libvirtd[1291696]:
Unable to read from monitor: Connection reset by peer
Oct 03 15:02:42 dzien.private.pawel libvirtd[1291696]:
internal error: qemu unexpectedly closed the monitor:
2020-10-03T14:02:42.051456Z qemu-kvm: warning: Spice:
reds.c:2943:reds_init_ssl: Could not load certificates from
/etc/pki/vdsm/libvirt-spice/server-cert.pem
2020-10-03T14:02:42.051477Z qemu-kvm: warning: Spice:
error:02001002:system library:fopen:No such file or directory
2020-10-03T14:02:42.051483Z qemu-kvm: warning: Spice:
error:20074002:BIO routines:file_ctrl:system lib
2020-10-03T14:02:42.051490Z qemu-kvm: warning: Spice:
error:140DC002:SSL
routines:use_certificate_chain_file:system lib
2020-10-03T14:02:42.051506Z qemu-kvm: failed to initialize
spice server
Oct 03 15:02:42 dzien.private.pawel libvirtd[1291696]:
internal error: process exited while connecting to monitor:
2020-10-03T14:02:42.051456Z qemu-kvm: warning: Spice:
reds.c:2943:reds_init_ssl: Could not load certificates from
/etc/pki/vdsm/libvirt-spice/server-cert.pem
2020-10-03T14:02:42.051477Z qemu-kvm: warning: Spice:
error:02001002:system library:fopen:No such file or directory
2020-10-03T14:02:42.051483Z qemu-kvm: warning: Spice:
error:20074002:BIO routines:file_ctrl:system lib
2020-10-03T14:02:42.051490Z qemu-kvm: warning: Spice:
error:140DC002:SSL
routines:use_certificate_chain_file:system lib
2020-10-03T14:02:42.051506Z qemu-kvm: failed to initialize
spice server
...
Would any of you experts have a possible resolution, a
workaround?
many thanks, L.
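The log points at /etc/pki/vdsm/libvirt-spice/server-cert.pem; a hedged sketch for listing which of the expected files are missing (only server-cert.pem appears in the log, the key and CA file names are assumptions based on the usual libvirt-spice layout):

```python
import os

# server-cert.pem comes from the log above; the other two names are
# assumed from the usual libvirt-spice certificate layout.
EXPECTED = ("ca-cert.pem", "server-cert.pem", "server-key.pem")

def missing_certs(base_dir="/etc/pki/vdsm/libvirt-spice"):
    """Return the expected certificate files that do not exist in base_dir."""
    return [name for name in EXPECTED
            if not os.path.exists(os.path.join(base_dir, name))]
```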
Re: Need help in handling catch block in vdsm
by Michal Skrivanek
[adding devel list]
> On 29 Sep 2020, at 09:51, Ritesh Chikatwar <rchikatw(a)redhat.com> wrote:
>
> Hello,
>
> I am new to the VDSM codebase.
>
> There is a minor bug in Gluster, and they have no near-term plan to fix it, so I need to handle it in VDSM.
>
> The bug is that when I run the command
> [root@dhcp35-237 ~]# gluster v geo-replication status
> No active geo-replication sessions
> geo-replication command failed
> [root@dhcp35-237 ~]#
>
> So the engine log is piling up with errors. I need to handle this in VDSM; the code is at this place:
>
> https://github.com/oVirt/vdsm/blob/master/lib/vdsm/gluster/cli.py#L1231 <https://github.com/oVirt/vdsm/blob/master/lib/vdsm/gluster/cli.py#L1231>
>
> If I run the above command with --xml, then I get:
>
> [root@dhcp35-237 ~]# gluster v geo-replication status --xml
> geo-replication command failed
> [root@dhcp35-237 ~]#
>
> So I have to run one more command before executing this, and check in the exception handler whether it contains the string "No active geo-replication sessions". For that I did this:
maybe it goes to stderr?
other than that, no idea, sorry
Thanks,
michal
>
> try:
>     xmltree = _execGluster(command)
> except ge.GlusterCmdFailedException as e:
>     if str(e).find("No active geo-replication sessions") != -1:
>         return []
> This does not seem to work; can you help me here?
>
>
> Any help will be appreciated
> Ritesh
>
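Following Michal's hint that the message may go to stderr, one hedged approach is to match the phrase against both the stringified exception and its captured stderr. The `err` attribute name is an assumption, and the class below is only a stand-in that mimics the real GlusterCmdFailedException:

```python
NO_SESSIONS = "No active geo-replication sessions"

class GlusterCmdFailedException(Exception):
    """Stand-in for vdsm.gluster.exception.GlusterCmdFailedException;
    the real class may expose stderr differently."""
    def __init__(self, rc=0, out=None, err=None):
        super().__init__(rc, out, err)
        self.err = err or []

def geo_rep_status_or_empty(exec_gluster, command):
    """Run the gluster command, treating a 'no sessions' failure as an
    empty result instead of an error."""
    try:
        return exec_gluster(command)
    except GlusterCmdFailedException as e:
        # Check both the stringified exception and the stderr lines,
        # since the phrase may only appear on stderr.
        if NO_SESSIONS in str(e) or any(NO_SESSIONS in line for line in e.err):
            return []
        raise
```

Re-raising unmatched failures keeps genuine geo-replication errors visible instead of silently returning an empty list.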
Disk sizes not updated on unmap/discard
by Tomáš Golembiovský
Hi,
currently, when we run virt-sparsify on VM or user runs VM with discard
enabled and when the disk is on block storage in qcow, the results are
not reflected in oVirt. The blocks get discarded, storage can reuse them
and reports correct allocation statistics, but oVirt does not. In oVirt
one can still see the original allocation for disk and storage domain as
it was before blocks were discarded. This is super-confusing to the
users because when they check after running virt-sparsify and see the
same values they think sparsification is not working. Which is not true.
It all seems to be because of our LVM layout that we have on storage
domain. The feature page for discard [1] suggests it could be solved by
running lvreduce. But this does not seem to be true: when blocks are
discarded, the QCOW does not necessarily change its apparent size, since the
discarded blocks don't have to be at the end of the disk. So running
lvreduce is likely to remove valuable data.
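The mismatch is easiest to see by summing allocated extents rather than looking at the apparent size; a hedged sketch over data in the shape of `qemu-img map --output=json` output (the sample extents are made up, and only the `data`/`length`/`start` fields of the real output are used):

```python
import json

# Made-up extents: after a discard, a hole can sit in the middle of the
# disk, so the apparent size (end of the last extent) stays the same
# while the actual allocation drops.
SAMPLE_MAP = json.loads("""
[
  {"start": 0,       "length": 1048576, "data": true},
  {"start": 1048576, "length": 4194304, "data": false},
  {"start": 5242880, "length": 1048576, "data": true}
]
""")

def allocated_bytes(extents):
    """Sum only the extents that actually hold data."""
    return sum(e["length"] for e in extents if e["data"])

def apparent_size(extents):
    """Apparent size is where the last extent ends, holes included."""
    last = extents[-1]
    return last["start"] + last["length"]
```

In this sample the hole sits in the middle, so a tail-only operation like lvreduce cannot reclaim it, and shrinking past the last data extent would destroy real data.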
At the moment I don't see how we could achieve the correct values. If
anyone has any idea, feel free to entertain me. The only option seems to
be to switch to LVM thin pools. Do we have any plans on doing that?
Tomas
[1] https://www.ovirt.org/develop/release-management/features/storage/pass-di...
--
Tomáš Golembiovský <tgolembi(a)redhat.com>