On Tue, May 19, 2020 at 1:52 PM FMGarcia <francisco.garcia(a)wbsgo.com> wrote:
...
Is there a tutorial for testing these builds? (Necessary dependencies,
scripts to remove all traces of the previously installed version, etc.)
The patch you want to test is merged, but is not included in the latest
release (4.4.1).
$ git log --oneline
dd2d7d391fa (HEAD -> master, origin/master, origin/HEAD) core: CreateSnapshotDisk - validate specified disk
d2af151b9ca engine: add warning message for low space for re attached storage domain
d068d4da10c core: prevent exporting a VM to a data domain with illegal disks
945afdc5b13 core: rephrase backup audit log messages
ae1c762858a core: Add VM_INCREMENTAL_BACKUP_FAILED_FULL_VM_BACKUP_NEEDED audit log
3cd2766ae15 core: set image initial size on CreateSnapshot
fd19d1cc73f packaging: backup: Handle grafana
4c5a5305843 build: post ovirt-engine-4.4.1
1997cba811f (tag: ovirt-engine-4.4.1) build: ovirt-engine-4.4.1
If you want to test it now, you can install the ovirt-release-master.rpm:
dnf install http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
Then install in the normal way as described in:
https://ovirt.org/download/
I usually do:
dnf update
engine-setup
If you have a previous 4.4 install you may be able to upgrade it. Sometimes
upgrades do not work, especially when upgrading from development versions.
In this case you can remove the older version and do a clean install using:
engine-cleanup
engine-setup
I understand that the changes you have made these days at
https://gerrit.ovirt.org/#/c/108991/ are almost final, right? Can I test
them already? Or should I wait a little?
You can test now if you want. If you don't want to test a development
version, wait until the fix is released (check the related bug).
Despite the fact that you have implemented some very complete scripts
(backup-disk and upload-disk), in the short term I'm still interested in
the snapshot-based model, since in my Java code I support full, incremental
and differential backups.
Creating snapshots and merging them for backup is complex, but they are
not going anywhere. You can continue to use them.
However, why do you try to restore the snapshots? We discussed this with
several backup folks and they were not interested in restoring snapshots,
but in restoring a complete disk.
Assuming you back up using snapshots, and you have:
backup-day-3.qcow2
backup-day-2.qcow2
backup-day-1.qcow2
To restore a disk to the state in backup-day-3.qcow2 you don't need to
upload the snapshot to storage. You can create a chain from these images:
qemu-img rebase -u -b backup-day-1.qcow2 -F qcow2 backup-day-2.qcow2
qemu-img rebase -u -b backup-day-2.qcow2 -F qcow2 backup-day-3.qcow2
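If you want to verify the result, qemu-img can print the whole chain, for
example:
qemu-img info --backing-chain backup-day-3.qcow2
This should list all three images, each with its backing file.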
Note that you don't need to use disk uuids for the rebase; your qcow2 files
are not uploaded to oVirt, so the backing chain is relevant only locally.
upload_disk.py reads the guest data from the chain and uploads the raw data
to the imageio server in oVirt.
Now you have this chain:
backup-day-1.qcow2 <- backup-day-2.qcow2 <- backup-day-3.qcow2 (top)
You can upload all the data in this chain to a new disk using:
upload_disk.py ... --disk-sparse --disk-format raw --sd-name my-domain
backup-day-3.qcow2
This is basically like:
qemu-img convert -f qcow2 -O raw backup-day-3.qcow2 new-disk.raw
So you can continue to use snapshots for backup, but restore to a disk with
a single layer.
This should be available in ovirt-4.4.0, released 5 days ago.
> with the limitation that the storage domain cannot be a block domain,
Why is a block domain a problem?
> in NFS (for example) it works very well! However, for vms that have
> some disk stored in a block domain, I can only make a backup by means of a
> snapshot clone. Until this is resolved :) hehe.
Why do you need a snapshot clone? You can download a snapshot from any
storage without cloning the snapshot.
> BTW, thanks for your '*backup_disk.py demo*', it looks so good! If I have
> some time I will try to test it in my environment. :)
You may be interested in these talks:
https://www.youtube.com/watch?v=kye6ncyxHXs
https://www.youtube.com/watch?v=foyi1UyadEc
Cheers,
Nir
Best regards,
> Fran
> On 5/14/20 5:39 PM, Nir Soffer wrote:
> On Wed, May 13, 2020 at 12:19 PM FMGarcia <francisco.garcia(a)wbsgo.com> wrote:
> Hi Fran, I'm moving the discussion to the devel mailing list where it belongs.
> In https://gerrit.ovirt.org/#/c/107082/ we have "several problems" deciding
on this patch:
> At the base (current version in github), the synergy
('download_disk_snapshot.py' and 'upload_disk_snapshot.py') does not work:
> 'download_disk_snapshot.py' only downloads the volumes of a disk.
> 'upload_disk_snapshot.py' requires: the virtual machine configuration
('.ovf'), a single disk to upload in the path './disks/xxxx', and a manual
action to attach the disk to the vm.
> Then, I think that if you want a synergy with both scripts, we should
change 'download_disk_snapshot.py' before 'upload_disk_snapshot.py'. If not,
you should edit 'upload_disk_snapshot.py' to add a variable 'vm_id' (like
the variable sd_name in this script) to attach the uploaded disk.
> I agree. It would be nice if we can do:
> $ mkdir -p backups/vm-id
> $ download_disk_snapshots.py --backup-dir backups/vm-id ...
> $ upload_disk_snapshots.py --backup-dir backups/vm-id ...
> download_disk_snapshots.py will download the vm ovf and all disks.
> upload_disk_snapshots.py would take the output of download_disk_snapshots.py
> and create a new vm.
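> For the attach step itself, a minimal sketch with the Python SDK could look
> like this (vm_id and disk_id are placeholders, and the interface is just an
> example):
>
> import ovirtsdk4 as sdk
> import ovirtsdk4.types as types
>
> connection = sdk.Connection(
>     url='https://engine3/ovirt-engine/api',
>     username='admin@internal',
>     password='password',
>     ca_file='engine3.pem',
> )
>
> # Attach the uploaded disk to the vm and activate it.
> attachments_service = (connection.system_service()
>     .vms_service()
>     .vm_service(vm_id)
>     .disk_attachments_service())
> attachments_service.add(
>     types.DiskAttachment(
>         disk=types.Disk(id=disk_id),
>         interface=types.DiskInterface.VIRTIO,
>         bootable=False,
>         active=True,
>     )
> )
>
> connection.close()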
> I suppose that the best thing is to discard the gerrit, and to propose
first what you want with 'download_disk_snapshot.py' and
'upload_disk_snapshot.py' and then act accordingly (several patches).
Do you agree?
> This is a bigger change that can take more time. I think we'd better fix
> the issues in the current scripts - the first one is the missing attach
> disk that you fix in your patch.
> Since you posted this fix with a lot of other unrelated fixes (some wrong
> or unneeded), we cannot merge it. This is another reason to post minimal
> patches that do one fix.
> I'm only truly interested in the open bug with block domains and volumes
of > 1GB: https://bugzilla.redhat.com/show_bug.cgi?id=1707707. I made these
changes to help a little, since you would help me by solving the bug. I
don't code in Python, I code in Java using the Java sdk, and the bug is a
major limitation in my software, so I want this bug (1 year old) resolved.
=( I hope you understand. :)
> Sure, I understand.
> If you don't have time to work on this, some other developer can take over
> this patch.
> The bug should be fixed by: https://gerrit.ovirt.org/c/108991/
> It would be nice if you can test this. I started a build here:
> https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/5867/
> When the build is ready, you will be able to install the engine from this
> build by adding a yum repo with the baseurl:
> https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/5...
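> For example, a repo file along these lines (the file name and repo id are
> just examples; use the full build artifacts url from jenkins as the baseurl):
>
> # /etc/yum.repos.d/ovirt-engine-check-patch.repo
> [ovirt-engine-check-patch]
> name=oVirt engine check-patch build
> baseurl=<build artifacts url>
> gpgcheck=0
> enabled=1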
> Note that this requires CentOS 8.1. If you want to test on CentOS 7, you
> need to wait until the fix is backported to 4.3, or since you like Java,
> maybe port it yourself?
> Note also that we have much more advanced backup and restore options:
> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/example...
> Here is an example run I did yesterday:
> I started with a full backup of a running vm:
> $ ./backup_vm.py full --engine-url https://engine3/ --username
> admin@internal --password-file /home/nsoffer/.config/ovirt/engine3/password
> --cafile /home/nsoffer/Downloads/certs/engine3.pem
> --backup-dir /home/nsoffer/tmp/backups/test-nfs
> b5732b5c-37ee-4c66-b77e-bda5d37a10fe
> [ 0.0 ] Starting full backup for VM b5732b5c-37ee-4c66-b77e-bda5d37a10fe
> [ 1.5 ] Waiting until backup f73541c6-88d1-4dac-a551-da922cdb3f55 is ready
> [ 4.6 ] Created checkpoint '4754dc34-da4b-4e62-84ea-164c413b003c'
> (to use in --from-checkpoint-uuid for the next incremental backup)
> [ 4.6 ] Creating image transfer for disk 566e6aa6-575b-4f83-88c9-e5e5b54d9649
> [ 5.9 ] Waiting until transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790 will be ready
> [ 5.9 ] Image transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790 is ready
> [ 5.9 ] Transfer url: https://host4:54322/images/13a0a396-5070-4b0f-a5cd-e2506c5abf0f
> Formatting '/home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2',
> fmt=qcow2 size=6442450944 cluster_size=65536 lazy_refcounts=off refcount_bits=16
> [ 100.00% ] 6.00 GiB, 18.34 seconds, 334.95 MiB/s
> [ 24.3 ] Finalizing transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790
> [ 24.5 ] Full backup completed successfully
> This downloads all the vm's disks to ~/tmp/backups/test-nfs/, creating
> 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2
> This file includes the entire disk content at the time the backup was
> started, including data from all snapshots.
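> If you want to sanity-check the downloaded file, something like:
> $ qemu-img info 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2
> should show a standalone qcow2 image with no backing file.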
> Then I ran an incremental backup of the same vm, recording the data
changed since the full backup:
> $ ./backup_vm.py incremental --engine-url https://engine3/ --username
> admin@internal --password-file /home/nsoffer/.config/ovirt/engine3/password
> --cafile /home/nsoffer/Downloads/certs/engine3.pem
> --backup-dir /home/nsoffer/tmp/backups/test-nfs
> --from-checkpoint-uuid 4754dc34-da4b-4e62-84ea-164c413b003c
> b5732b5c-37ee-4c66-b77e-bda5d37a10fe
> [ 0.0 ] Starting incremental backup for VM
> b5732b5c-37ee-4c66-b77e-bda5d37a10fe
> [ 1.3 ] Waiting until backup 01a88749-06eb-431a-81f2-b03db24b878e is ready
> [ 2.3 ] Created checkpoint '6f80d3c5-5b81-42ae-9700-2ccab37ad93b'
> (to use in --from-checkpoint-uuid for the next incremental backup)
> [ 2.3 ] Creating image transfer for disk 566e6aa6-575b-4f83-88c9-e5e5b54d9649
> [ 3.4 ] Waiting until transfer 16c90052-9411-46f6-8dc6-b2f260206708 will be ready
> [ 3.4 ] Image transfer 16c90052-9411-46f6-8dc6-b2f260206708 is ready
> [ 3.4 ] Transfer url: https://host4:54322/images/b9a44902-46f1-43b3-a9ad-9d72735c53ad
> Formatting '/home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2',
> fmt=qcow2 size=6442450944 cluster_size=65536 lazy_refcounts=off refcount_bits=16
> [ 100.00% ] 6.00 GiB, 0.63 seconds, 9.52 GiB/s
> [ 4.0 ] Finalizing transfer 16c90052-9411-46f6-8dc6-b2f260206708
> [ 4.1 ] Incremental backup completed successfully
> This backup is tiny, since the only things changed were a new directory
> created on the vm and some system logs modified since the full backup.
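> To see how small it really is, you can compare the actual allocation of
> the two backup files, e.g.:
> $ du -h *.full.qcow2 *.incremental.qcow2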
> Then I rebased the incremental backup on top of the full backup:
> $ cd /home/nsoffer/tmp/backups/test-nfs
> qemu-img rebase -u -b
> 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2 -F qcow2
> 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
> These images are now a valid qcow2 chain that can be uploaded using
> upload_disk.py:
> $ python3 upload_disk.py --engine-url https://engine3/ --username
> admin@internal --password-file /home/nsoffer/.config/ovirt/engine3/password
> --cafile /home/nsoffer/Downloads/certs/engine3.pem --disk-format qcow2
> --disk-sparse --sd-name iscsi2-1
> /home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 6442450944
> Disk initial size: 2755264512
> Disk name: 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
> Connecting...
> Creating disk...
> Disk id: a9785777-8aac-4515-a47a-2f5126e3af73
> Creating image transfer...
> Transfer ID: 6e0384b6-730b-4416-a954-bf45e627d5cf
> Transfer host: host4
> Uploading image...
> [ 100.00% ] 6.00 GiB, 20.50 seconds, 299.70 MiB/s
> Finalizing image transfer...
> Upload completed successfully
> The result is a single qcow2 disk on the domain iscsi2-1.
> I created a new vm from this disk.
> This backup script is not complete yet: we don't download the VM OVF
> in each backup, and we don't create the VM from the OVF. These features
> should be added later.
> You may want to start testing and integrating this code instead of the
> snapshot based download.
> See https://www.ovirt.org/develop/release-management/features/storage/increme...
> Nir