
On 19 Jun 2015, at 22:40, Nir Soffer wrote:
Hi all,
For 3.6, we will not support live vm snapshots, but this is a must for the next release.
It is trivial to create a disk snapshot in ceph (using cinder APIs). The snapshot is transparent to libvirt, qemu, and the guest OS.
However, we want to create a consistent snapshot, so you can revert to the disk snapshot and get a consistent file system state.
We also want to create a complete vm snapshot, including all disks and vm memory. Libvirt and qemu provide that when given a new disk for the active layer, but when using a ceph disk, we don't change the active layer - we continue to use the same disk.
Since 1.2.5, libvirt provides virDomainFSFreeze and virDomainFSThaw: https://libvirt.org/hvsupport.html
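For reference, the python bindings look roughly like this (a minimal sketch; "myvm" is a placeholder domain name, and both calls depend on a running qemu-guest-agent in the guest):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myvm')  # placeholder domain name

    # Freeze all mounted guest filesystems; returns the number frozen.
    dom.fsFreeze()
    try:
        pass  # take the external snapshot here
    finally:
        # Always thaw, even on failure, or the guest stays frozen.
        dom.fsThaw()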
So here are possible flows (ignoring engine-side stuff like locking vms and disks):
Disk snapshot
-------------
1. Engine invokes VM.freezeFileSystems
2. Vdsm invokes libvirt.virDomainFSFreeze
3. Engine creates snapshot via cinder
4. Engine invokes VM.thawFileSystems
5. Vdsm invokes libvirt.virDomainFSThaw
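To illustrate, step 3 could use python-cinderclient; here is a minimal sketch of the whole flow (the vdsm client object and the verb names are the proposed ones, not existing code, and the credentials are placeholders):

    from cinderclient import client as cinder_client

    cinder = cinder_client.Client('2', 'user', 'password', 'project',
                                  'http://keystone:5000/v2.0')

    def snapshot_ceph_disk(vdsm, vm_id, volume_id):
        vdsm.freezeFileSystems(vm_id)    # steps 1-2: quiesce guest filesystems
        try:
            # step 3: force=True is needed because the volume is attached
            cinder.volume_snapshots.create(volume_id, force=True)
        finally:
            vdsm.thawFileSystems(vm_id)  # steps 4-5: resume guest I/O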
Vm snapshot
-----------
1. Engine invokes VM.freezeFileSystems
2. Vdsm invokes libvirt.virDomainFSFreeze
3. Engine creates snapshot via cinder
4. Engine invokes VM.snapshot
5. Vdsm creates snapshot, skipping ceph disks
6. Engine invokes VM.thawFileSystems
7. Vdsm invokes libvirt.virDomainFSThaw
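Step 5 maps to virDomainSnapshotCreateXML with snapshot='no' for the ceph disks; a minimal sketch (device names and paths are made up):

    import libvirt

    SNAPSHOT_XML = """
    <domainsnapshot>
      <memory snapshot='external' file='/var/lib/libvirt/memory.save'/>
      <disks>
        <disk name='vda' snapshot='external'>
          <source file='/var/lib/libvirt/images/new-active.qcow2'/>
        </disk>
        <!-- ceph disk: already snapshotted via cinder, keep the same
             active layer -->
        <disk name='vdb' snapshot='no'/>
      </disks>
    </domainsnapshot>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myvm')
    # No VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE flag - the guest was already
    # frozen explicitly in step 2.
    dom.snapshotCreateXML(SNAPSHOT_XML, 0)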
API changes
-----------
New verbs:
- VM.freezeFileSystems - basically invokes virDomainFSFreeze
- VM.thawFileSystems - basically invokes virDomainFSThaw
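Something like this on the vdsm side (a rough sketch only - the response format and error code are made up):

    import libvirt

    class Vm(object):
        def __init__(self, dom):
            self._dom = dom  # the underlying libvirt.virDomain

        def freezeFileSystems(self):
            try:
                frozen = self._dom.fsFreeze()
                return {'status': {'code': 0, 'message': 'Done'},
                        'frozen': frozen}
            except libvirt.libvirtError as e:
                return {'status': {'code': 1, 'message': str(e)}}

        def thawFileSystems(self):
            try:
                self._dom.fsThaw()
                return {'status': {'code': 0, 'message': 'Done'}}
            except libvirt.libvirtError as e:
                return {'status': {'code': 1, 'message': str(e)}}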
Once we do it explicitly, we can drop the flag from the libvirt API which does it "atomically" for us right now. Also note the dependency on a functional qemu-ga (that's no different from today, but the current behavior is that when qemu-ga is not running we quietly do an unsafe snapshot).
What do you think?
Nir