
Hello, I'm on 4.1 with 2 FC SAN storage domains and testing live migration of disk.
I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb. It seems that I can see the actual disk size in the VM --> Snapshots --> Active VM line and the right sub pane. If I remember correctly, before the live storage migration the actual size was 16Gb. But after the migration it shows an actual size greater than the virtual one (33Gb)?? There are no snapshots on the VM right now. Is this true? If so, could it depend on temporary snapshots done by the system for storage migration? But in that case, does it mean that a live merge generates a bigger image at the end..?
The command I saw during conversion was:
vdsm 17197 2707 3 22:39 ? 00:00:01 /usr/bin/qemu-img convert -p -t none -T none -f qcow2 /rhev/data-center/mnt/blockSD/922b5269-ab56-4c4d-838f-49d33427e2ab/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660 -O qcow2 -o compat=1.1 /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660he
See current situation here: https://drive.google.com/file/d/0BwoPbcrMv8mvMnhib2NCQlJfRVE/view?usp=sharing
thanks for clarification, Gianluca
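[Editorial note: for reference, a brief annotation of the qemu-img convert flags seen in that ps output. These are generic qemu-img options; the path placeholders below stand in for the storage-domain, image-group and volume UUIDs shown above.]

# Sketch of the copy command vdsm runs during live storage migration:
#   -p             report progress
#   -t none        destination cache mode: bypass the host page cache (direct I/O)
#   -T none        source cache mode: bypass the host page cache
#   -f qcow2       source image format
#   -O qcow2       output image format
#   -o compat=1.1  write a qcow2 version 3 ("compat 1.1") image
qemu-img convert -p -t none -T none -f qcow2 \
    /rhev/data-center/mnt/blockSD/<src-sd-uuid>/images/<image-uuid>/<volume-uuid> \
    -O qcow2 -o compat=1.1 \
    /rhev/data-center/mnt/blockSD/<dst-sd-uuid>/images/<image-uuid>/<volume-uuid>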

Hi Gianluca. This is most likely caused by the temporary snapshot we create when performing live storage migration. Please check your VM for snapshots and delete the temporary one.
On Mon, Feb 13, 2017 at 5:07 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm on 4.1 with 2 FC SAN storage domains and testing live migration of disk.
I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb. It seems that I can see the actual disk size in the VM --> Snapshots --> Active VM line and the right sub pane. If I remember correctly, before the live storage migration the actual size was 16Gb. But after the migration it shows an actual size greater than the virtual one (33Gb)?? There are no snapshots on the VM right now. Is this true? If so, could it depend on temporary snapshots done by the system for storage migration? But in that case, does it mean that a live merge generates a bigger image at the end..?
The command I saw during conversion was:
vdsm 17197 2707 3 22:39 ? 00:00:01 /usr/bin/qemu-img convert -p -t none -T none -f qcow2 /rhev/data-center/mnt/blockSD/922b5269-ab56-4c4d-838f-49d33427e2ab/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660 -O qcow2 -o compat=1.1 /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660he
See current situation here: https://drive.google.com/file/d/0BwoPbcrMv8mvMnhib2NCQlJfRVE/view?usp=sharing
thanks for clarification, Gianluca
-- Adam Litke
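[Editorial note: one hedged way to check from the host whether a temporary snapshot layer is still part of the chain is to inspect the qcow2 backing chain of the destination volume from the ps output above. This is a sketch, not something from the original thread; the LV behind the path is active while the VM runs on that host, and on newer qemu builds you may need -U (--force-share) while the image is in use.]

# One entry in the output means only the base volume exists; additional
# "backing file" entries would indicate snapshot layers still in place.
qemu-img info --backing-chain \
    /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660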

On Mon, Feb 20, 2017 at 3:38 PM, Adam Litke <alitke@redhat.com> wrote:
Hi Gianluca. This is most likely caused by the temporary snapshot we create when performing live storage migration. Please check your VM for snapshots and delete the temporary one.
Hi Adam, as you see from my attachment, there is one line for snapshots, the current one, and no others. So the VM is without snapshots. See bigger frame here: https://drive.google.com/file/d/0BwoPbcrMv8mvbDZaVFpvd1Ayd0k/view?usp=sharing
Gianluca
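[Editorial note: a hedged host-side cross-check, assuming the usual IU_/PU_ tagging oVirt uses for volume LVs on block storage domains: list the LVs of the destination domain and filter by the image group UUID from the thread. A single LV for the image means there is only the base volume and no leftover snapshot layer.]

# VG name = storage domain UUID, grep pattern = image group UUID
lvs -o lv_name,lv_size,lv_tags 5ed04196-87f1-480e-9fee-9dd450a3b53b | grep 6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3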

Ok, thanks for clarifying that. So it looks like the temporary snapshot was already merged/deleted. As part of that operation, vdsm calculates how large the original volume would have to be (LV size) in order to guarantee that the base volume has enough space for the merge to complete. It looks like in this case we decided that value was the virtual size + 10% for qcow2 metadata. If your base volume was only about 16G allocated prior to all of this then it seems we were way too conservative. Did you remove the snapshot quickly after the move completed? One thing that could cause this is if you wrote about 12-15 GB of data into that disk after moving but before deleting the temporary snapshot.
Do you happen to have the vdsm logs from the host running the VM and the host that was acting as SPM (if different) from the time you experienced this issue?
On Mon, Feb 20, 2017 at 10:02 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Mon, Feb 20, 2017 at 3:38 PM, Adam Litke <alitke@redhat.com> wrote:
Hi Gianluca. This is most likely caused by the temporary snapshot we create when performing live storage migration. Please check your VM for snapshots and delete the temporary one.
Hi Adam, as you see from my attachment, there is one line for snapshots, the current one, and no others. So the VM is without snapshots. See bigger frame here: https://drive.google.com/file/d/0BwoPbcrMv8mvbDZaVFpvd1Ayd0k/view?usp=sharing
Gianluca
-- Adam Litke

On Mon, Feb 20, 2017 at 8:39 PM, Adam Litke <alitke@redhat.com> wrote:
Ok, thanks for clarifying that. So it looks like the temporary snapshot was already merged/deleted. As part of that operation, vdsm calculates how large the original volume would have to be (LV size) in order to guarantee that the base volume has enough space for the merge to complete. It looks like in this case we decided that value was the virtual size + 10% for qcow2 metadata. If your base volume was only about 16G allocated prior to all of this then it seems we were way too conservative. Did you remove the snapshot quickly after the move completed? One thing that could cause this is if you wrote about 12-15 GB of data into that disk after moving but before deleting the temporary snapshot.
Do you happen to have the vdsm logs from the host running the VM and the host that was acting as SPM (if different) from the time you experienced this issue?
I did some snapshot creation to test backup, followed by a live merge after a couple of minutes, without using the VM much in the meantime. And I also tested storage migration. I don't have the logs, but I can try to reproduce similar operations and then send new log files if the anomaly persists...
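[Editorial note: a sketch of how the relevant vdsm log lines could be collected before attaching them. /var/log/vdsm/vdsm.log is the default location on oVirt hosts; the grep patterns (the volume UUID and qemu-img) are only examples.]

# Run on the host that ran the VM and on the SPM host; rotated logs sit in the same directory.
grep -E 'qemu-img|merge|9af3574d-dc83-485f-b906-0970ad09b660' /var/log/vdsm/vdsm.log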

On Tue, Feb 14, 2017 at 12:07 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm on 4.1 with 2 FC SAN storage domains and testing live migration of disk.
I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb. It seems that I can see the actual disk size in the VM --> Snapshots --> Active VM line and the right sub pane. If I remember correctly, before the live storage migration the actual size was 16Gb. But after the migration it shows an actual size greater than the virtual one (33Gb)??
Hi Gianluca,
This is possible for thin disks.
With thin disks we have some overhead for qcow2 metadata, which can be up to 10% in the worst case. So when doing operations on a thin disk, we may enlarge it to the virtual size * 1.1, which is 33G for a 30G disk.
Can you share the output of these commands?
(If the vm is running, you should skip the lvchange commands)
lvchange -ay 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
qemu-img check /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
lvchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
Nir
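[Editorial note: a quick sanity check of that 10% worst-case figure, plain shell arithmetic rather than an oVirt command.]

# A 30G virtual disk may be extended to virtual size * 1.1 during such operations:
echo "$((30 * 11 / 10))G"    # -> 33G, the "actual size" the engine shows after the move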
There are no snapshots on the VM right now. Is this true? If so, could it depend on temporary snapshots done by the system for storage migration? But in that case, does it mean that a live merge generates a bigger image at the end..?
The command I saw during conversion was:
vdsm 17197 2707 3 22:39 ? 00:00:01 /usr/bin/qemu-img convert -p -t none -T none -f qcow2 /rhev/data-center/mnt/blockSD/922b5269-ab56-4c4d-838f-49d33427e2ab/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660 -O qcow2 -o compat=1.1 /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660he
See current situation here: https://drive.google.com/file/d/0BwoPbcrMv8mvMnhib2NCQlJfRVE/view?usp=sharing
thanks for clarification, Gianluca

On Mon, Feb 20, 2017 at 11:08 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Feb 14, 2017 at 12:07 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm on 4.1 with 2 FC SAN storage domains and testing live migration of disk.
I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb. It seems that I can see the actual disk size in the VM --> Snapshots --> Active VM line and the right sub pane. If I remember correctly, before the live storage migration the actual size was 16Gb. But after the migration it shows an actual size greater than the virtual one (33Gb)??
Hi Gianluca,
This is possible for thin disks.
With thin disks we have some overhead for qcow2 metadata, which can be up to 10% in the worst case. So when doing operations on a thin disk, we may enlarge it to the virtual size * 1.1, which is 33G for a 30G disk.
Can you share the output of these commands?
(If the vm is running, you should skip the lvchange commands)
lvchange -ay 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
qemu-img check /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
lvchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
Nir
Yes, the VM is active. At the moment the disk name seems to be without the final "he" letters....
[root@ovmsrv07 ~]# qemu-img check /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660
No errors were found on the image.
41734/491520 = 8.49% allocated, 3.86% fragmented, 0.00% compressed clusters
Image end offset: 2736128000
[root@ovmsrv07 ~]#
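[Editorial note: a rough reading of those numbers, assuming the default 64 KiB qcow2 cluster size (491520 clusters * 64 KiB is exactly the 30G virtual size).]

echo "$((41734 * 64 / 1024)) MiB"           # -> 2608 MiB allocated in qcow2 clusters
echo "$((2736128000 / 1024 / 1024)) MiB"    # -> 2609 MiB image end offset, i.e. ~2.5G of data in a 33G LV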

On Tue, Feb 21, 2017 at 12:17 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Mon, Feb 20, 2017 at 11:08 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Feb 14, 2017 at 12:07 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, I'm on 4.1 with 2 FC SAN storage domains and testing live migration of disk.
I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb. It seems that I can see the actual disk size in the VM --> Snapshots --> Active VM line and the right sub pane. If I remember correctly, before the live storage migration the actual size was 16Gb. But after the migration it shows an actual size greater than the virtual one (33Gb)??
Hi Gianluca,
This is possible for thin disks.
With thin disks we have some overhead for qcow2 metadata, which can be up to 10% in the worst case. So when doing operations on a thin disk, we may enlarge it to the virtual size * 1.1, which is 33G for a 30G disk.
Can you share the output of these commands?
(If the vm is running, you should skip the lvchange commands)
lvchange -ay 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
qemu-img check /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
lvchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
Nir
Yes, the VM is active. At the moment the disk name seems to be without the final "he" letters....
Seems to be an issue between my keyboard and chair :-)
[root@ovmsrv07 ~]# qemu-img check /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660
No errors were found on the image.
41734/491520 = 8.49% allocated, 3.86% fragmented, 0.00% compressed clusters
Image end offset: 2736128000
So you have a 2.5G image in a 33G lv?
If this is an internal volume, you can reduce it to the next multiple of 128m - 2688m.
If this is the active volume, you want to leave empty space at the end. Since you are using a 4G extent size, you can reduce it to 6784m.
To reduce the lv:
1. Move the storage domain to maintenance
2. Check again the image end offset using qemu-img check
lvchange -ay vg-name/lv-name
qemu-img check /dev/vg-name/lv-name
lvchange -an vg-name/lv-name
3. Use lvreduce (update the size if needed)
lvreduce -L 2688m vg-name/lv-name
4. Activate the storage domain
We are working now on integrating this into flows like cold and live merge.
Nir
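[Editorial note: a consolidated sketch of the sequence Nir describes, with the VG/LV names from this thread filled in (storage domain UUID / volume UUID). It assumes the storage domain is already in maintenance; lvreduce will ask for confirmation before shrinking.]

VG=5ed04196-87f1-480e-9fee-9dd450a3b53b   # storage domain (VG) UUID
LV=9af3574d-dc83-485f-b906-0970ad09b660   # volume (LV) UUID

lvchange -ay "$VG/$LV"            # activate the LV so qemu-img can read it
qemu-img check "/dev/$VG/$LV"     # re-check the "Image end offset" before shrinking
lvchange -an "$VG/$LV"            # deactivate again
lvreduce -L 2688m "$VG/$LV"       # new size must stay above the image end offset
# Then activate the storage domain again from the engine.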

On Mon, Feb 20, 2017 at 11:39 PM, Nir Soffer <nsoffer@redhat.com> wrote:
Yes, the VM is active. At the moment the disk name seems to be without the final "he" letters....
Seems to be an issue between my keyboard and chair :-)
Actually you wrote the correct name that I intercepted during live storage migration. The output disk of the migration was
/rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660he
But I see that indeed the name is now without the final "he" part... don't know if the suffix is in place only during the migration as a sort of notifier/lock....
[g.cecchi@ovmsrv07 ~]$ ll /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660
lrwxrwxrwx. 1 vdsm kvm 78 Feb 13 22:39 /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660 -> /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660
[g.cecchi@ovmsrv07 ~]$
[root@ovmsrv07 ~]# qemu-img check /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660
No errors were found on the image.
41734/491520 = 8.49% allocated, 3.86% fragmented, 0.00% compressed clusters
Image end offset: 2736128000
So you have a 2.5G image in a 33G lv?
Yes, I created a 30Gb disk but in the meantime only a small part of it is used. Inside the VM:
[root@c7service ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   30G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   29G  0 part
  ├─cl-root     253:0    0   26G  0 lvm  /
  └─cl-swap     253:1    0    3G  0 lvm  [SWAP]
sr0              11:0    1 1024M  0 rom
[root@c7service ~]#
[root@c7service ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   26G  2.2G   24G   9% /
devtmpfs             2.0G     0  2.0G   0% /dev
tmpfs                2.0G     0  2.0G   0% /dev/shm
tmpfs                2.0G   17M  2.0G   1% /run
tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda1           1014M  150M  865M  15% /boot
tmpfs                396M     0  396M   0% /run/user/0
[root@c7service ~]#
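[Editorial note: those guest-side numbers line up roughly with the host-side qemu-img check output above; the remaining gap is filesystem metadata, swap activity and clusters that were written once and later freed inside the guest.]

echo "$((2200 + 150)) MiB"    # ~2.2G on / plus ~150M on /boot written in the guest,
                              # vs. ~2609 MiB allocated in the qcow2 image on the host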
If this is an internal volume, you can reduce it to the next multiple of 128m - 2688m.
If this is the active volume, you want to leave empty space at the end. Since you are using a 4G extent size, you can reduce it to 6784m.
To reduce the lv:
1. Move storage domain to maintenance
2. Check again the image end offset using qemu-img check
lvchange -ay vg-name/lv-name
qemu-img check /dev/vg-name/lv-name
lvchange -an vg-name/lv-name
3. Use lvreduce (update the size if needed)
lvreduce -L 2688m vg-name/lv-name
4. Activate the storage domain
We are working now on integrating this into flows like cold and live merge.
Nir
Thanks for the information, that could be useful. But my concern was not to reduce the disk size in this case: I'll leave the free space for future applications I have to install. My concern was that one would expect the actual size of a thin provisioned disk to always be less than or equal to the virtual one, and not the opposite....
participants (3):
- Adam Litke
- Gianluca Cecchi
- Nir Soffer