Re: [ovirt-devel] [VDSM] Live snapshot with ceph disks


----- Original Message -----
From: "Christopher Pereira" <kripper@imatronix.cl> To: "Nir Soffer" <nsoffer@redhat.com>, devel@ovirt.org Cc: "Eric Blake" <eblake@redhat.com> Sent: Saturday, June 20, 2015 9:34:57 AM Subject: Re: [ovirt-devel] [VDSM] Live snapshot with ceph disks
> Hi Nir,
> Regarding "3. Engine creates snapshot *via cinder*"...
> What are the benefits of creating snapshots via cinder vs via libvirt?
Ceph provides thin provisioning and snapshots on the server side, which is more efficient and simpler to use. Ceph disks use the raw format, so we cannot use qcow2-based snapshots.
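
For illustration, a server-side snapshot through the python-rbd bindings looks roughly like this; it is a sketch only, and the pool name 'volumes' and image name 'volume-1234' are made-up placeholders (in our flow the equivalent call is issued by Cinder's rbd driver, not by vdsm):

    import rados
    import rbd

    # Connect to the cluster and open the pool holding the volume.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')        # placeholder pool
        try:
            image = rbd.Image(ioctx, 'volume-1234')  # placeholder image
            try:
                # The snapshot is taken inside the Ceph cluster; the raw
                # image gets a named point-in-time view, with no qcow2
                # layer on the host.
                image.create_snap('snap-1')
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()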
> Libvirt and qemu offer core VM-aware storage and memory snapshot features. Besides, snapshot-create-as has no VM downtime.
We don't plan to introduce downtime.
> It would be a mistake to implement snapshotting at the Ceph layer. At some point you would need VM-aware code (e.g., the VM memory state) and would end up going back to the libvirt + qemu way.
We will use libvirt to create the memory snapshot, stored on a Ceph disk instead of a vdsm image.
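
A rough sketch of that step with libvirt-python (the domain name and the memory file path below are placeholders; in practice the memory target would be a Ceph-backed volume mapped on the host):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('vm-1')       # placeholder domain name

    # Memory-only external snapshot: the disks are snapshotted by
    # Cinder/Ceph on the server side, so they are all marked
    # snapshot='no' here and only the memory image is written out.
    snap_xml = """
    <domainsnapshot>
      <memory snapshot='external' file='/run/vdsm/memory-snap'/>
      <disks>
        <disk name='vda' snapshot='no'/>
      </disks>
    </domainsnapshot>
    """

    # CREATE_LIVE keeps the guest running while memory is saved.
    dom.snapshotCreateXML(snap_xml, libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_LIVE)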
> There seems to be qemu + libvirt support for Ceph snapshots (via rbd commands), which probably offers some (?) VM-awareness, but what are the benefits of not using the good old core libvirt + qemu snapshot features? I must be missing something...
We want to support smart storage servers, offloading storage operations to the server. We also want to leverage the rich ecosystem of Cinder, which is supported by many storage vendors. So Engine creates Ceph volumes and snapshots via Cinder, and vdsm consumes the volumes via libvirt/qemu network disk support.
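
Concretely, a Ceph volume reaches qemu as a libvirt network disk, roughly like this (a sketch; the monitor host, pool/volume name, domain name, and secret UUID are placeholders):

    import libvirt

    # Placeholder monitor host, pool/volume name and secret UUID.
    disk_xml = """
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='volumes/volume-1234'>
        <host name='ceph-mon.example.com' port='6789'/>
      </source>
      <auth username='cinder'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('vm-1')       # placeholder domain name
    # Hot-plug the Ceph volume; qemu talks to the cluster directly,
    # so no kernel rbd mapping is needed on the host.
    dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)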
> 2) Not related:
> It seems like oVirt has shifted focus towards Ceph recently...
> I would like to drop Gluster for Ceph if the latter supports SEEK_HOLE reads and efficient sparse file operations. Can someone please confirm whether Ceph supports SEEK_HOLE? I saw some related code, but would like to ask for comments before setting up and benchmarking Ceph sparse image file operations.
Ceph provides block storage, and Gluster provides file-based storage. We are focused on providing both options so users can choose what works best for them.

Nir
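
P.S. For the SEEK_HOLE question above, a quick probe from Python 3.3+ can tell you whether a given mount (e.g. a CephFS mount) reports holes; the path below is a placeholder:

    import os

    def reports_holes(path):
        # Create a file that is one big hole and ask where the first
        # hole starts. Filesystems without SEEK_HOLE support fall back
        # to treating the whole file as data, so the call lands at EOF
        # instead of at offset 0.
        fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_TRUNC, 0o600)
        try:
            os.ftruncate(fd, 1024 * 1024)
            return os.lseek(fd, 0, os.SEEK_HOLE) == 0
        finally:
            os.close(fd)
            os.unlink(path)

    print(reports_holes('/mnt/cephfs/sparse-probe'))  # placeholder mount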