Hi all,
For 3.6, we will not support live VM snapshots, but this is a must for the next
release.
It is trivial to create a disk snapshot in ceph (using cinder APIs). The snapshot
is transparent to libvirt, qemu and the guest OS.
However, we want to create a consistent snapshot, so you can revert to the disk
snapshot and get a consistent file system state.
We also want to create a complete VM snapshot, including all disks and VM memory.
Libvirt and qemu provide that when given a new disk for the active layer, but
when using a ceph disk, we don't change the active layer - we continue to use the
same disk.
Since 1.2.5, libvirt provides virDomainFSFreeze and virDomainFSThaw:
https://libvirt.org/hvsupport.html
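For reference, a minimal sketch of how these calls look through the libvirt
python bindings (the connection URI and domain name are placeholders, and
fsFreeze/fsThaw require a running qemu-guest-agent in the guest):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('example-vm')  # placeholder domain name

    # Freeze all mounted guest file systems; with no mountpoints argument
    # every mounted file system is frozen.
    frozen = dom.fsFreeze()
    try:
        pass  # take the ceph snapshot via cinder here
    finally:
        # Always thaw, even if the snapshot fails, or guest I/O stays blocked.
        dom.fsThaw()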
So here are possible flows (ignoring engine-side stuff like locking VMs and disks):
Disk snapshot
-------------
1. Engine invokes VM.freezeFileSystems
2. Vdsm invokes libvirt.virDomainFSFreeze
3. Engine creates snapshot via cinder
4. Engine invokes VM.thawFileSystems
5. Vdsm invokes libvirt.virDomainFSThaw
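A rough sketch of how the engine side could drive this flow (vdsm_api and
create_cinder_snapshot are hypothetical placeholders for the engine's vdsm
client and its cinder integration):

    def create_disk_snapshot(vdsm_api, vm_id, volume_id):
        # 1-2. Freeze guest file systems through the new vdsm verb
        vdsm_api.freezeFileSystems(vm_id)
        try:
            # 3. Take the ceph snapshot through cinder; the guest keeps
            # running and keeps writing to the same rbd volume.
            create_cinder_snapshot(volume_id)
        finally:
            # 4-5. Thaw even if the cinder call failed
            vdsm_api.thawFileSystems(vm_id)

The try/finally is the important part - the guest must be thawed even when
the cinder call fails.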
VM snapshot
-----------
1. Engine invokes VM.freezeFileSystems
2. Vdsm invokes libvirt.virDomainFSFreeze
3. Engine creates snapshot via cinder
4. Engine invokes VM.snapshot
5. Vdsm creates snapshot, skipping ceph disks
6. Engine invokes VM.thawFileSystems
7. Vdsm invokes libvirt.virDomainFSThaw
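For step 5, one way vdsm could skip the ceph disks is to mark them
snapshot='no' in the snapshot XML, so libvirt creates a new external active
layer only for image-based disks while memory is dumped to a file. A sketch
with placeholder device names and paths (dom is the libvirt domain as in the
earlier example):

    snapshot_xml = """
    <domainsnapshot>
      <memory snapshot='external' file='/path/to/memory.dump'/>
      <disks>
        <!-- image-based disk: gets a new external active layer -->
        <disk name='vda' snapshot='external'>
          <source file='/path/to/new-active-layer.qcow2'/>
        </disk>
        <!-- ceph/rbd disk: left untouched, snapshotted via cinder instead -->
        <disk name='vdb' snapshot='no'/>
      </disks>
    </domainsnapshot>
    """
    # Optionally pass libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_LIVE to keep the
    # guest running while the memory is dumped.
    dom.snapshotCreateXML(snapshot_xml, 0)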
API changes
-----------
New verbs:
- VM.freezeFileSystems - basically invokes virDomainFSFreeze
- VM.thawFileSystems - basically invokes virDomainFSThaw
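On the vdsm side, both verbs can be thin wrappers around the libvirt calls.
A rough sketch, assuming the usual Vm class with self._dom as the underlying
libvirt domain (the response format and error codes here are illustrative,
not the real vdsm ones):

    import libvirt

    class Vm(object):
        # self._dom is the underlying libvirt.virDomain

        def freezeFileSystems(self):
            """Freeze all guest file systems via the qemu guest agent."""
            try:
                frozen = self._dom.fsFreeze()
            except libvirt.libvirtError as e:
                return {'status': {'code': 1, 'message': str(e)}}
            return {'status': {'code': 0, 'message': 'Done'}, 'frozen': frozen}

        def thawFileSystems(self):
            """Thaw the previously frozen guest file systems."""
            try:
                thawed = self._dom.fsThaw()
            except libvirt.libvirtError as e:
                return {'status': {'code': 1, 'message': str(e)}}
            return {'status': {'code': 0, 'message': 'Done'}, 'thawed': thawed}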
What do you think?
Nir