On Thu, Oct 28, 2021 at 7:21 PM <christian.peater(a)gmx.de> wrote:
Hello, is there any progress on this problem? I have tried to reduce a bloated volume by
triggering the reduce command over the REST API. I am using oVirt 4.3.10, but I get no
response, and the volume stays bloated. I'm getting frustrated, because I am using a
backup script which creates a snapshot, then creates a clone VM from the snapshot,
exports it, and removes the snapshot and the cloned VM. This is done every night, and
so the volumes keep growing.
It is a production cluster with 70 VMs, and I can't just stop them and do some magic.
I think the only way you can optimize the disk size is to move the disk to another
storage domain and back to the original storage domain.
When we copy a disk, we measure every volume in the chain and create a new volume
on the destination storage, using the optimal size for each volume. Then we copy the
data from the source volume to the new volume using qemu-img convert.
When qemu-img convert copies an image, it detects unallocated areas and areas that
read as zeroes. In the target image, these areas will not be allocated, or will be
stored efficiently as zero clusters (8 bytes for 64k of data).
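A minimal local sketch of that zero detection (an illustration only, assuming GNU
coreutils; the 64 KiB cluster size matches the qcow2 default):

```shell
# Build a test image: 15 all-zero clusters, then one cluster with data.
dd if=/dev/zero of=disk.img bs=64K count=15 status=none
printf 'x' | dd of=disk.img bs=64K seek=15 conv=notrunc status=none
truncate -s $((16 * 64 * 1024)) disk.img

# A cluster is "zero" if stripping its NUL bytes leaves nothing.
zero=0
for i in $(seq 0 15); do
    if [ -z "$(dd if=disk.img bs=64K skip=$i count=1 status=none | tr -d '\0')" ]; then
        zero=$((zero + 1))
    fi
done
echo "$zero of 16 clusters are zero"    # prints "15 of 16 clusters are zero"
```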
Detecting zeroes happens during the copy, not when you measure the volume, so after
the first copy the disk may still have unneeded allocation at the LVM level. When you
copy the disk back to the original storage, this extra allocation will be eliminated.
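You can see the effect of zero detection during a copy with GNU cp, which has a
similar sparse-copy mode (just an analogy for the copy step, not what oVirt runs):

```shell
# Write 50 MiB of literal zeros, so every block is allocated.
dd if=/dev/zero of=bloated.img bs=1M count=50 status=none

# Copy while detecting zero runs; the copy comes out sparse.
cp --sparse=always bloated.img copy.img

# Same apparent size, far fewer allocated blocks in the copy.
stat -c '%n: size=%s blocks=%b' bloated.img copy.img
```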
This is not a fast operation, but you can move disks while VMs are running, so there
is no downtime.
You can try this with a new VM:
1. create a VM with a 100g thin data disk
2. in the guest, fill the disk with zeros:
   dd if=/dev/zero bs=1M count=102400 of=/dev/sdb oflag=direct status=progress
3. the VM disk will be extended to 100g+
4. move the disk to another storage domain
5. after the move, the disk's actual size will still be 100g+
6. move the disk back to the original storage domain
7. the disk's actual size will go back to 1g
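The virtual vs. actual size distinction in these steps can be mimicked with a sparse
file on any local filesystem (an illustration only; on block storage oVirt tracks
actual size at the LVM level):

```shell
# A thin disk starts with a large virtual size but almost no actual allocation.
truncate -s 100M thin.img

# Filling it with zeros through dd, as in step 2, allocates everything.
dd if=/dev/zero of=extended.img bs=1M count=100 status=none

# Same virtual size, very different actual allocation.
stat -c '%n: virtual=%s actual_blocks=%b' thin.img extended.img
```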
Note that enabling discard and discarding unused areas in the guest (with fstrim on a
mounted filesystem, or blkdiscard on a raw device) before the copy will optimize the
process:
1. create a VM with a 100g thin virtio-scsi data disk, with "enable discard"
2. in the guest, fill the disk with zeros:
   dd if=/dev/zero bs=1M count=102400 of=/dev/sdb oflag=direct status=progress
3. the VM disk will be extended to 100g+
4. in the guest, run "blkdiscard /dev/sdb" (the disk was written raw; for a mounted
   filesystem, run fstrim on the mount point instead)
5. the disk's actual size will remain 100g+
6. move the disk to another storage domain
7. this move will be extremely quick, since the discarded areas read as zeroes and no
   data needs to be copied
8. the disk's actual size on the destination storage domain will be 1g
In oVirt 4.5 we plan to support disk format conversion - this will allow this kind of
sparsification without copying the data twice to another storage domain.
Nir