Hello oVirt Development team
Not sure if you got a chance to see this email; resending it.
Thanks
Ranjit
From: Ranjit DSouza
Sent: Monday, July 9, 2018 1:16 PM
To: 'Nir Soffer' <nsoffer(a)redhat.com>
Cc: devel <devel(a)ovirt.org>; DL-VTAS-ENG-NBU-EverestFalcons
<DL-VTAS-ENG-NBU-EverestFalcons(a)veritas.com>; Navin Tah
<Navin.Tah(a)veritas.com>; Sudhakar Paulzagade
<Sudhakar.Paulzagade(a)veritas.com>; Pavan Chavva; Yaniv Lavi (Dary)
<ylavi(a)redhat.com>; Nisan, Tal <tnisan(a)redhat.com>; Daniel Erez
<derez(a)redhat.com>
Subject: RE: [EXTERNAL] Re: [ovirt-devel] Image Transfer mechanism queries/API support
Hi Nir
Thanks for getting back!
We have a few follow-up questions:
1. On the new ‘Allocated extents API’:
Can you share the release timeline for 4.2.z? From the link below it seems like it will
be available on 7/30/2018.
https://www.ovirt.org/develop/release-management/releases/4.2.z/release-m...
However, we thought we would double check on this.
2. If this will be available in 4.2.z, does it mean we can assume it will also be
available in 4.3?
3. When we downloaded the snapshot disk using the Image Transfer API, the resulting
format of the disk was "raw". However, for upload we must upload a qcow2 disk (to enable
further snapshots).
This means we need to convert it first using qemu-img convert (see the conversion sketch
after question 5 below). Or is there a way to request a qcow2 directly via the API instead?
A portion of the response to "GET /storagedomains/{storagedomain:id}/disksnapshots":
"format" : "raw",
"shareable" : "false",
"sparse" : "true",
"status" : "ok",
"snapshot" : {
"id" : "4756036e-92aa-4ebb-ae4b-052a30cd5109"
},
"actual_size" : "1345228800",
"content_type" : "data",
"propagate_errors" : "false",
"provisioned_size" : "21474836480",
"storage_type" : "image",
"total_size" : "0",
"wipe_after_delete" : "false",
4. When downloading a snapshot in chunks, is there any recommended chunk size? For
our study we used 512 KB. We checked the documentation too, but did not find one.
5. Regarding your ask: ‘Can you file RFE for this, explaining the use case and the
current performance’.
Since you do not recommend direct access to NFS storage, we will consider it only if we
see significant performance degradation using HTTP download.
Also, if time permits, we may check whether there is a significant performance benefit
using a Unix socket over HTTP download (via the REST API).
But as it stands now, getting the allocated extents support (soon) will alleviate most of
our performance concerns.
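For the conversion in question 3, here is a minimal sketch, assuming qemu-img is available
on the backup host; the file names are placeholders only:

import subprocess

# Convert the downloaded raw disk to qcow2 before uploading.
# "backup.raw" and "backup.qcow2" are placeholder file names.
subprocess.run(
    ["qemu-img", "convert",
     "-f", "raw",       # source format reported by the disk snapshot
     "-O", "qcow2",     # target format needed for upload
     "backup.raw", "backup.qcow2"],
    check=True)

After the conversion, the resulting qcow2 file can be uploaded with the Image Transfer API.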
Thanks
Ranjit
From: Nir Soffer [mailto:nsoffer@redhat.com]
Sent: Tuesday, July 3, 2018 4:29 PM
To: Ranjit DSouza <Ranjit.DSouza@veritas.com>
Cc: devel <devel@ovirt.org>; DL-VTAS-ENG-NBU-EverestFalcons
<DL-VTAS-ENG-NBU-EverestFalcons@veritas.com>; Navin Tah <Navin.Tah@veritas.com>;
Sudhakar Paulzagade <Sudhakar.Paulzagade@veritas.com>; Pavan Chavva;
Yaniv Lavi (Dary) <ylavi@redhat.com>; Nisan, Tal <tnisan@redhat.com>;
Daniel Erez <derez@redhat.com>
Subject: [EXTERNAL] Re: [ovirt-devel] Image Transfer mechanism queries/API support
On Tue, Jul 3, 2018 at 11:32 AM Ranjit DSouza <Ranjit.DSouza@veritas.com> wrote:
...
We had a conversation with Pavan Chavva about supporting RHV. He suggested contacting
you with queries related to the oVirt APIs we plan to use.
We have the following queries:
1. While downloading a snapshot disk, can we identify allocated extents and download
only those using the oVirt API? We are able to download the disk using the Image Transfer
API mechanism.
However, this method downloads the entire disk including the non-allocated extents, which
is a performance overhead. If this functionality does not exist at this point, will it be
available in the near future?
Hi Ranjit,
There is no way to do this in the current 4.2, but we plan to introduce it in 4.2.z.
The API will be something like:
GET /images/xxx-yyy/map
...
[{ "start": 0, "length": 65536, "depth": 0,
"zero": false, "data": true, "offset": 0},
{ "start": 65536, "length": 983040, "depth": 0,
"zero": true, "data": false, "offset": 65536},
{ "start": 1048576, "length": 65536, "depth": 0,
"zero": false, "data": true, "offset": 1048576},
{ "start": 1114112, "length": 983040, "depth": 0,
"zero": true, "data": false, "offset": 1114112},
...
{ "start": 5465571328, "length": 22675456, "depth": 0,
"zero": false, "data": true, "offset": 5465571328},
{ "start": 5488246784, "length": 954138624, "depth": 0,
"zero": true, "data": false, "offset": 5488246784},
{ "start": 6442385408, "length": 65536, "depth": 0,
"zero": false, "data": true, "offset": 6442385408}]
This is basically what you get using qemu-img map.
You can play with this using:
virt-builder Fedora-27 -o /var/tmp/fedora-27.img
qemu-img map -f raw --output json /var/tmp/fedora-27.img
This is the first data segment:
{ "start": 0, "length": 65536, "depth": 0, "zero":
false, "data": true, "offset": 0}
This is a hole between the first data segment and the second:
{ "start": 65536, "length": 983040, "depth": 0,
"zero": true, "data": false, "offset": 65536}
This is the second data segment:
{ "start": 1048576, "length": 65536, "depth": 0,
"zero": false, "data": true, "offset": 1048576}
Based on this map output, you will be able to get the allocated parts of the image using:
Request:
GET /images/xxx-yyy HTTP/1.1
Range: bytes=0-65535
Response:
HTTP/1.1 206 Partial Content
Content-Range: bytes 0-65535/6442450944
<data of first segment>
Request:
GET /images/xxx-yyy HTTP/1.1
Range: bytes=1048576-1114111
Response:
HTTP/1.1 206 Partial Content
Content-Range: bytes 1048576-1114111/6442450944
<data of second segment>
And so on.
If you create a sparse file on your backup media, and download and write
the data segments at the correct offsets, you will get a sparse version of
the image, as on the server side.
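Putting this together, a minimal download sketch under a few assumptions: the planned
/images/xxx-yyy/map call described above, an imageio endpoint at a placeholder host and
port 54322, a placeholder ticket id and output file name, and no TLS verification or error
handling:

import json
import ssl
from http import client

HOST = "host.example.com"   # placeholder: host serving the image transfer
TICKET = "xxx-yyy"          # placeholder: transfer ticket id

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE   # sketch only - verify the CA cert in real code

conn = client.HTTPSConnection(HOST, 54322, context=ctx)

# Get the extent map (planned API, see above).
conn.request("GET", "/images/%s/map" % TICKET)
segments = json.loads(conn.getresponse().read())

with open("backup.raw", "wb") as out:
    for seg in segments:
        if not seg["data"]:
            continue   # skip holes; seeking past them keeps the file sparse
        start = seg["start"]
        end = start + seg["length"] - 1
        conn.request("GET", "/images/%s" % TICKET,
                     headers={"Range": "bytes=%d-%d" % (start, end)})
        out.seek(start)
        out.write(conn.getresponse().read())
    # Extend the file to the full virtual size if the image ends with a hole.
    out.truncate(segments[-1]["start"] + segments[-1]["length"])

Seeking over the holes instead of writing zeroes is what keeps the backup file sparse.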
This will work for raw or qcow2 images on NFS >= 4.2, or for qcow2 images on block
storage.
For older NFS versions, or raw images on block storage, we can solve the issue by
reading the entire image and detecting zeroes - which is quite expensive, so I'm not
sure we will implement this; maybe it will be done later.
We have an experimental patch using a special sparse format that can support this use
case, downloading the entire image in one pass. This commit message explains the
format:
https://gerrit.ovirt.org/#/c/85413/12//COMMIT_MSG
For more info on using random I/O APIs, see:
http://ovirt.github.io/ovirt-imageio/random-io
(available since 4.2.3)
For example code uploading sparse images see:
https://github.com/oVirt/ovirt-imageio/blob/master/examples/upload
For best performance, you should run your application on an oVirt host, using a unix
socket to communicate with imageio. See:
http://ovirt.github.io/ovirt-imageio/unix-socket
(will be available in 4.2.5)
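A minimal sketch of the unix socket approach, assuming the application runs on the host
performing the transfer; the socket path and ticket id are placeholders (see the
unix-socket page above for the actual values):

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # http.client connection that speaks HTTP over a unix domain socket.
    def __init__(self, path, timeout=60):
        self.path = path
        super().__init__("localhost", timeout=timeout)

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.settimeout(self.timeout)
        self.sock.connect(self.path)

# Placeholder socket path and ticket id.
conn = UnixHTTPConnection("/path/to/imageio.sock")
conn.request("GET", "/images/TICKET-UUID",
             headers={"Range": "bytes=0-65535"})
first_segment = conn.getresponse().read()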
All this will work only for the non-active layer in a qcow2 chain. We are working now
on incremental backup, which will allow the same for the active layer with a running
vm. This is expected in 4.3, and we may have a tech preview at some point in 4.2.z.
Incremental backup will use a similar API, allowing detection of the dirty parts of an
image, so you can download only the data that was changed since the last backup.
Please watch and comment on the feature page:
https://ovirt.org/develop/release-management/features/storage/incremental...
We are also considering exposing images using NBD. This will allow downloading
and uploading images using qemu-img from any host. This work depends on TLS-PSK
support in qemu-img and qemu-nbd. You can follow this work here:
https://lists.nongnu.org/archive/html/qemu-devel/2018-06/threads.html#08491
2. Is there an alternate method to transfer a snapshot to and from RHV storage? Are
there other methods, such as an NFS share, where we can download the snapshot image to
and from RHV storage?
We don't support direct access to storage by 3rd parties. You should use the imageio API.
Can you file an RFE for this, explaining the use case and the current performance
issues you experience?
Nir