On Wed, Jun 21, 2017 at 11:03 PM Deepak Jagtap <deepak.jagtap@maxta.com> wrote:

Hi Allon,


I am trying to leverage the snapshot capability of the underlying filesystem.

As per my understanding, the current snapshot flow works like this:

Base image (raw) -> snap1 (qcow) -> snap2 (qcow), i.e. after each snapshot the VM starts writing to the newly created qcow image.

So in this case the VM is going to do all new writes on the snap2 (qcow) volume and will redirect read I/Os to snap1 and the base image as required.


Right
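
For illustration, here is a minimal sketch of how such a chain is built
with qemu-img. This is not vdsm's actual code, and the file names are
made up:

    import subprocess

    def create_overlay(backing, backing_fmt, overlay):
        # Create a qcow2 overlay whose backing file is the previous image
        # in the chain; new writes go to the overlay, while reads of
        # untouched clusters fall through to the backing chain.
        subprocess.check_call([
            "qemu-img", "create", "-f", "qcow2",
            "-o", "backing_file=%s,backing_fmt=%s" % (backing, backing_fmt),
            overlay,
        ])

    # Base image (raw) -> snap1 (qcow2) -> snap2 (qcow2)
    create_overlay("base.raw", "raw", "snap1.qcow2")
    create_overlay("snap1.qcow2", "qcow2", "snap2.qcow2")
    # The VM is then switched to snap2.qcow2, so all new writes land
    # there while older data is read from snap1 and the base image.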

But in my case the snapshots created by the filesystem are read-only and in raw format.

As a result, the VM disk configuration won't change after taking a snapshot; the VM will continue doing writes on the same base image.

So the snapshots will look like this:

Base image (raw) -> snap1 (raw) -> snap2 (raw)

I'm not sure what snap1 and snap2 are - how do you create and use them
with your file system? What is the underlying file system?

The base image will always remain writable, while the snapshots will remain read-only, in raw format.


This works like Ceph volumes.

Our flow for Ceph is (sketched below):

1. engine invokes the VM.freeze vdsm API to ensure that guest file systems are consistent
2. engine creates a new snapshot via the Cinder API
3. engine may invoke the VM.snapshot vdsm API (without the Ceph disk) if a memory snapshot is needed;
    the memory snapshot is stored in a new disk created by the engine before this flow
4. engine invokes VM.thaw to unfreeze guest file systems
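
To make the flow concrete, here is a rough sketch in Python. The vdsm
verbs (VM.freeze, VM.snapshot, VM.thaw) are real, but the client
objects and argument names here are only illustrative assumptions, not
the actual signatures:

    def snapshot_ceph_vm(vdsm, cinder, vm_id, volume_id, with_memory=False):
        # 1. Freeze guest file systems so the storage snapshot is consistent.
        vdsm.VM.freeze(vm_id)
        try:
            # 2. Create the snapshot on the storage side via the Cinder API.
            cinder.volume_snapshots.create(volume_id, force=True)
            # 3. Optionally take a memory snapshot, excluding the Ceph disk.
            #    The memory is written to a disk the engine created before
            #    this flow started.
            if with_memory:
                vdsm.VM.snapshot(vm_id, snap_drives=[], memory=True)
        finally:
            # 4. Thaw guest file systems even if an earlier step failed.
            vdsm.VM.thaw(vm_id)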

Just wanted to confirm: is this configurable, so that the VM continues referring to the base image after the snapshot instead of the newly created qcow image?

No, it will use the new image; this is not possible with a snapshot.

Vdsm has the basic building blocks to do what you need, except creating
and deleting snapshots. To implement such a feature you would need to add
a new type of storage in the engine that calls the right vdsm APIs when
creating and deleting snapshots, and new vdsm APIs to create and delete snapshots.
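
As a purely hypothetical sketch, the new vdsm verbs could look
something like this. None of these names exist in vdsm today, and
"mysnapfs" stands in for whatever snapshot command your filesystem
provides:

    import subprocess

    class FSSnapshot(object):
        # Hypothetical vdsm verbs for filesystem-level snapshots.

        def create_snapshot(self, volume_path, snap_name):
            # Delegate to the filesystem's own snapshot tool.
            subprocess.check_call(
                ["mysnapfs", "snapshot", "create", volume_path, snap_name])

        def delete_snapshot(self, volume_path, snap_name):
            subprocess.check_call(
                ["mysnapfs", "snapshot", "delete", volume_path, snap_name])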

Nir