On 07.12.2017 at 23:19, Nir Soffer wrote:
> On Wed, Dec 6, 2017 at 6:02 PM Jason Lelievre <jlelievre@folksvfx.com>
> wrote:
>
> > Hello,
> >
> > What is the best way to set up a daily live snapshot for all VMs, and have
> > the possibility to recover, for example, a specific VM to a specific day?
>
> Each snapshot you create makes reads and writes slower, as qemu has to
> look up data through the entire chain.
This is true in principle. However, as long as the lookup is purely in
memory and doesn't involve I/O, you won't even notice this in average
use cases. Whether additional I/O is necessary depends on whether the
metadata caches already cover the part of the image that you're
accessing.
By choosing the right cache sizes for the use case, you can normally
ensure that all of the relevant metadata is already in memory.
Can you give more details about selecting the cache size?
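
For reference, the sizing rule is documented in QEMU's
docs/qcow2-cache.txt: each L2 table entry is 8 bytes and maps one
cluster, so covering the whole virtual disk needs
l2_cache_size = disk_size * 8 / cluster_size. A back-of-the-envelope
sketch in Python, assuming the default 64 KiB cluster size (the exact
numbers for a given workload are a separate question):

# Rough sizing of the qcow2 L2 metadata cache, following the rule of
# thumb from QEMU's docs/qcow2-cache.txt. With the default 64 KiB
# clusters this works out to 1 MiB of L2 cache per 8 GiB of virtual disk.

def l2_cache_bytes(disk_size_bytes, cluster_size=64 * 1024):
    return disk_size_bytes * 8 // cluster_size

GiB = 1024 ** 3
MiB = 1024 ** 2
print(l2_cache_bytes(100 * GiB) / MiB)   # 100 GiB disk -> 12.5 MiB of cache
print(l2_cache_bytes(500 * GiB) / MiB)   # 500 GiB disk -> 62.5 MiB of cache

The resulting value can be passed to the qcow2 driver's l2-cache-size
option. Note that each image in a backing chain has its own metadata
caches, so a long chain multiplies the memory cost accordingly.
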
> When we take a snapshot, we create a new file (or block device) and make
> the new file the active layer of the chain.
>
> For example, assuming you have a base image in raw format:
>
> image-1.raw (top)
>
> After taking a snapshot, you have:
>
> image-1.raw <- image-2.qcow2 (top)
>
> Now when qemu needs to read data from the image, it will try to get the
> data from the top layer (image-2); if the data is not there, it will try
> the backing file (image-1). The same happens when writing: if qemu needs
> to write a small amount of data, it may have to read an entire sector
> from another layer in the chain and copy it to the top layer.
Yes, though for this operation it doesn't matter whether it has to copy
it from the second image in the chain or the thirtieth. As soon as you
do a partial write to a cluster that hasn't been written yet since the
last snapshot was taken, you get to copy data, no matter the length of
the chain.
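
To make that concrete, here is a toy Python model of such a chain (a
sketch only, not QEMU code; the cluster handling is simplified and the
names are made up). Reads fall through the chain until some layer has
the cluster; a partial write copies exactly one cluster into the active
layer, wherever the old data was found:

CLUSTER_SIZE = 64 * 1024   # qcow2 default cluster size

class Layer:
    def __init__(self, backing=None):
        self.clusters = {}       # cluster index -> buffer; missing = unallocated
        self.backing = backing   # next image in the chain, None for the base

    def read_cluster(self, idx):
        # Reads fall through the chain until some layer has the cluster.
        layer = self
        while layer is not None:
            if idx in layer.clusters:
                return layer.clusters[idx]
            layer = layer.backing
        return bytes(CLUSTER_SIZE)   # nowhere allocated: reads as zeroes

    def write(self, offset, data):
        # A partial write to a cluster not yet allocated in the active
        # layer copies the whole cluster from wherever it currently lives
        # (copy-on-write), then applies the small write on top. That is
        # one cluster copy, whether the old data sits in the second image
        # of the chain or in the thirtieth.
        idx, off = divmod(offset, CLUSTER_SIZE)
        if idx not in self.clusters:
            self.clusters[idx] = bytearray(self.read_cluster(idx))
        self.clusters[idx][off:off + len(data)] = data

base = Layer()
base.write(0, b"x" * CLUSTER_SIZE)       # old data lives in the base image

active = base
for _ in range(30):                      # thirty snapshots -> thirty overlays
    active = Layer(backing=active)

active.write(100, b"small update")       # still exactly one cluster copy

The copy in write() happens once per cluster after each snapshot, and its
cost does not depend on how many overlays sit between the active layer
and the image that holds the old data; only the metadata lookup that
finds the old data has to walk the longer chain.
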
So do you think keeping 30 snapshots for backup/restore purposes is
a practical solution with a negligible effect on performance?
Kevin