Re: Image Transfer mechanism queries/API support


Hi!

The team has had a busy week -- sorry for the delay :) cc'ing Nir.

Work is almost always done in 4.3 (master) first, and then backported to 4.2.z. 4.2.5 is indeed just a couple weeks away.

Greg

On Thu, Jul 12, 2018 at 10:59 AM Ranjit DSouza <Ranjit.DSouza@veritas.com> wrote:
Hello oVirt Development team
Not sure you got a chance to see this email, resending it.
Thanks
Ranjit
*From:* Ranjit DSouza
*Sent:* Monday, July 9, 2018 1:16 PM
*To:* 'Nir Soffer' <nsoffer@redhat.com>
*Cc:* devel <devel@ovirt.org>; DL-VTAS-ENG-NBU-EverestFalcons <DL-VTAS-ENG-NBU-EverestFalcons@veritas.com>; Navin Tah <Navin.Tah@veritas.com>; Sudhakar Paulzagade <Sudhakar.Paulzagade@veritas.com>; Pavan Chavva; Yaniv Lavi (Dary) <ylavi@redhat.com>; Nisan, Tal <tnisan@redhat.com>; Daniel Erez <derez@redhat.com>
*Subject:* RE: [EXTERNAL] Re: [ovirt-devel] Image Transfer mechanism queries/API support
Hi Nir
Thanks for getting back!
We have a few follow-up questions:
1. On the new ‘Allocated extents API’:
Can you share the release timeline for 4.2.z? From the link below it seems like it will be available on 7/30/2018.
https://www.ovirt.org/develop/release-management/releases/4.2.z/release-mana...
However, we thought we would double check on this.
2. If this will be available in 4.2.z, does it mean we can assume it will be backported to 4.3 also?
3. When we downloaded the snapshot disk using the Image Transfer API, the resulting format of the disk is “raw”. However, for upload, we must upload a qcow2 disk (to enable further snapshots).
This means we need to convert it first using *qemu-img convert* (a sample conversion command follows the response excerpt below). Or is there a way to directly ask via API for a qcow2 instead?
Portion of the response of “GET /storagedomains/{storagedomain:id}/disksnapshots”
"format" : "*raw*",
"shareable" : "false",
"sparse" : "true",
"status" : "ok",
"snapshot" : {
"id" : "4756036e-92aa-4ebb-ae4b-052a30cd5109"
},
"actual_size" : "1345228800",
"content_type" : "data",
"propagate_errors" : "false",
"provisioned_size" : "21474836480",
"storage_type" : "image",
"total_size" : "0",
"wipe_after_delete" : "false",
4. When downloading a snapshot in chunks, is there any recommended chunk size? For our study we used 512 KB. We checked the documentation too.
5. Regarding your ask on ‘Can you file RFE for this, explaining the use case and the current performance’.
Since you do not recommend direct access to NFS storage, we will consider it only if we see significant performance degradation using HTTP download.
Also, if time permits, we may check whether there is a significant performance benefit to using a unix socket over HTTP download (via the REST API).
But as it stands now, getting the allocated extents support soon will alleviate most of our performance concerns.
Thanks
Ranjit
*From:* Nir Soffer [mailto:nsoffer@redhat.com]
*Sent:* Tuesday, July 3, 2018 4:29 PM
*To:* Ranjit DSouza <Ranjit.DSouza@veritas.com>
*Cc:* devel <devel@ovirt.org>; DL-VTAS-ENG-NBU-EverestFalcons <DL-VTAS-ENG-NBU-EverestFalcons@veritas.com>; Navin Tah <Navin.Tah@veritas.com>; Sudhakar Paulzagade <Sudhakar.Paulzagade@veritas.com>; Pavan Chavva; Yaniv Lavi (Dary) <ylavi@redhat.com>; Nisan, Tal <tnisan@redhat.com>; Daniel Erez <derez@redhat.com>
*Subject:* [EXTERNAL] Re: [ovirt-devel] Image Transfer mechanism queries/API support
On Tue, Jul 3, 2018 at 11:32 AM Ranjit DSouza <Ranjit.DSouza@veritas.com> wrote:
...
We had a conversation with Pavan Chavva about supporting RHV. He suggested contacting you with queries related to the oVirt APIs we plan to use.
We have the following queries:
1. While downloading a snapshot disk, can we identify allocated extents and download only those using the oVirt API? We are able to download the disk using the Image Transfer API mechanism.
However, this method downloads the entire disk including the non-allocated extents, which is a performance overhead. If this functionality does not exist at this point, will it be available in the near future?
Hi Ranjit,
There is no way to do this in the current 4.2, but we plan to introduce it in 4.2.z.
The API will be something like:
GET /images/xxx-yyy/map
...
[{ "start": 0, "length": 65536, "depth": 0, "zero": false, "data": true, "offset": 0},
{ "start": 65536, "length": 983040, "depth": 0, "zero": true, "data": false, "offset": 65536},
{ "start": 1048576, "length": 65536, "depth": 0, "zero": false, "data": true, "offset": 1048576},
{ "start": 1114112, "length": 983040, "depth": 0, "zero": true, "data": false, "offset": 1114112},
...
{ "start": 5465571328, "length": 22675456, "depth": 0, "zero": false, "data": true, "offset": 5465571328},
{ "start": 5488246784, "length": 954138624, "depth": 0, "zero": true, "data": false, "offset": 5488246784},
{ "start": 6442385408, "length": 65536, "depth": 0, "zero": false, "data": true, "offset": 6442385408}]
This is basically what you get using qemu-img map.
You can play with this using:
virt-builder Fedora-27 -o /var/tmp/fedora-27.img
qemu-img map -f raw --output json /var/tmp/fedora-27.img
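As an illustration of consuming that output, here is a small Python sketch that runs the same command and keeps only the allocated segments (file path as in the example above):

import json
import subprocess

# Run qemu-img map and parse the JSON extent list it prints.
out = subprocess.check_output(
    ["qemu-img", "map", "-f", "raw", "--output", "json",
     "/var/tmp/fedora-27.img"])
extents = json.loads(out.decode("utf-8"))

# Keep only allocated (data) segments; holes have "data": false.
for e in extents:
    if e["data"]:
        print("offset=%d length=%d" % (e["start"], e["length"]))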
This is the first data segment:
{ "start": 0, "length": 65536, "depth": 0, "zero": false, "data": true, "offset": 0}
This is a hole between the first data segment and the second:
{ "start": 65536, "length": 983040, "depth": 0, "zero": true, "data": false, "offset": 65536}
This is the second data segment:
{ "start": 1048576, "length": 65536, "depth": 0, "zero": false, "data": true, "offset": 1048576}
Based on this output, you will be able to get the allocated parts of the image using:
Request:
GET /image/xxx-yyy HTTP/1.1
Range: bytes=0-65535
Response:
HTTP/1.1 206 Partial Content
Content-Range: bytes 0-65535/6442450944
<data of first segment>
Request:
GET /image/xxx-yyy HTTP/1.1
Range: bytes=1048576-1114111
Response:
HTTP/1.1 206 Partial Content
Content-Range: bytes 1048576-1114111/6442450944
<data of second segment>
And so on.
If you create a sparse file on your backup media, and download and write
the data segments at the correct offsets, you will get a sparse version of
the image as on the server side.
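To make that flow concrete, here is a minimal sketch of such a download loop (assuming `url` is the transfer's imageio endpoint, `extents` is the segment list from the map API above, and `image_size` is the disk's virtual size; names, authentication, and error handling are placeholders):

import requests

def download_sparse(url, extents, out_path, image_size, ca_file=None):
    # Create a sparse file of the full image size; holes stay unallocated.
    with open(out_path, "wb") as f:
        f.truncate(image_size)
        for e in extents:
            if not e["data"]:
                continue  # holes are already zero in the sparse file
            start = e["start"]
            end = start + e["length"] - 1
            # Request only this segment using an HTTP Range header.
            resp = requests.get(
                url,
                headers={"Range": "bytes=%d-%d" % (start, end)},
                stream=True,
                verify=ca_file)
            resp.raise_for_status()  # expect 206 Partial Content
            # Write the segment at its original offset.
            f.seek(start)
            for chunk in resp.iter_content(chunk_size=1024 * 1024):
                f.write(chunk)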
This will work for raw or qcow2 images on NFS >= 4.2, or for qcow2 images on block
storage.
For older NFS versions, or raw images on block storage, we can solve the issue by
reading the entire image and detecting zeroes - which is quite expensive, so I'm not
sure we will implement this; it may be done later.
We have an experimental patch using a special sparse format that can support this use
case, downloading the entire image in one pass. This commit message explains the
format:
https://gerrit.ovirt.org/#/c/85413/12//COMMIT_MSG
For more info on using random I/O APIs, see:
http://ovirt.github.io/ovirt-imageio/random-io
(available since 4.2.3)
For example code uploading sparse images see:
https://github.com/oVirt/ovirt-imageio/blob/master/examples/upload
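For a flavor of what the random-io API looks like on the write side, here is a minimal sketch uploading one allocated segment with a PUT at its offset (the ticket URL, offsets, and CA file are placeholders, and the Content-Range usage follows the random-io documentation above; verify the details against your imageio version):

import requests

url = "https://host1.example.com:54322/images/xxx-yyy"  # placeholder ticket URL
image_size = 6442450944

# Upload one allocated segment at its original offset.
start, length = 1048576, 65536
with open("disk.img", "rb") as f:
    f.seek(start)
    data = f.read(length)

resp = requests.put(
    url,
    data=data,
    headers={"Content-Range": "bytes %d-%d/%d"
             % (start, start + length - 1, image_size)},
    verify="ca.pem")
resp.raise_for_status()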
For best performance, you should run your application on an oVirt host, using a unix
socket to communicate with imageio. See:
http://ovirt.github.io/ovirt-imageio/unix-socket
(will be available in 4.2.5)
All this will work only for the non-active layer in a qcow2 chain. We are working now
on incremental backup which will allow the same for the active layer with a running
vm. This is expected in 4.3, and we may have a tech preview at some point in 4.2.z.
Incremental backup will use a similar API, allowing detection of dirty parts of an image,
so you can download only the data that was changed since the last backup.
Please watch and comment on the feature page:
https://ovirt.org/develop/release-management/features/storage/incremental-ba...
We are also considering exposing images using NBD. This will allow downloading
and uploading images using qemu-img from any host. This work depends on TLS-PSK
support in qemu-img and qemu-nbd. You can follow this work here:
https://lists.nongnu.org/archive/html/qemu-devel/2018-06/threads.html#08491
2. Is there an alternate method to transfer a snapshot to and from RHV storage? Are there other methods, such as an NFS share, where we can download the snapshot image to and from RHV storage?
We don't support direct access to storage by 3rd parties. You should use the imageio API.
Can you file an RFE for this, explaining the use case and the current performance
issues you experience?
Nir
--
Greg Sheremeta
Senior Software Engineer - Team Lead - RHV UX
Red Hat NA <https://www.redhat.com/>
gshereme@redhat.com
IRC: gshereme

Hi team
For best performance, you should run your application on an oVirt host, using unix socket to communicate with imageio. See: http://ovirt.github.io/ovirt-imageio/unix-socket (will be available in 4.2.5)
Just to reconfirm here… the unix socket option to communicate with imageio is only available if the client application runs on the oVirt host. For a remote client, the only way is the REST API, right?

Thanks
-Suchitra

From: Ranjit DSouza <Ranjit.DSouza@veritas.com>
Date: Thursday, July 12, 2018 at 3:47 PM
To: "devel@ovirt.org" <devel@ovirt.org>
Cc: devel <devel@ovirt.org>, DL-VTAS-ENG-NBU-EverestFalcons <DL-VTAS-ENG-NBU-EverestFalcons@veritas.com>, Navin Tah <Navin.Tah@veritas.com>, Sudhakar Paulzagade <Sudhakar.Paulzagade@veritas.com>, Pavan Chavva <pchavva@redhat.com>, "Yaniv Lavi (Dary)" <ylavi@redhat.com>, "Nisan, Tal" <tnisan@redhat.com>, Daniel Erez <derez@redhat.com>, Nir Soffer <nsoffer@redhat.com>
Subject: Re: Image Transfer mechanism queries/API support

For best performance, you should run your application on an oVirt host, using a unix socket to communicate with imageio. See: http://ovirt.github.io/ovirt-imageio/unix-socket (will be available in 4.2.5)

On Mon, 27 Aug 2018, 20:10 Suchitra Herwadkar <Suchitra.Herwadkar@veritas.com> wrote:
Hi team
For best performance, you should run your application on an oVirt host, using unix
socket to communicate with imageio. See:
http://ovirt.github.io/ovirt-imageio/unix-socket
(will be available in 4.2.5)
Just to reconfirm here… the unix socket option to communicate with imageio is only available if the client application runs on the oVirt host. For a remote client, the only way is the REST API, right?
Correct. Here is a more detailed description.

1. Find which host can access the disk.

Check on which storage domain the disk is located, and to which data center this storage domain belongs. For example, in this setup:

dc1
  host1
  host2
  storage1
    disk1
dc2
  host3
  host4
  storage2
    disk2

If we want to backup/restore disk1, we should run our program on host1 or host2, so you can use a unix socket and avoid sending the data over the network. If you cannot run your transfer program on host1 or host2, you must use HTTPS and send the data over the network.

The host performing the upload should also be active. If the host is in maintenance, the disk will not be attached to this host.

2. Balance the backup/restore on all available hosts.

Check how many transfers are active on all available hosts, and schedule a new transfer on a host which is less busy. For reference, for virt-v2v imports, copying tens of 100G images from VMware to RHV, we got the best results when running 2-3 imports on 4 hosts, compared with running 10 imports on one host.

There is a useful document about virt-v2v performance recommendations created by our scale team. I think the recommendations should hold for backup/restore. Daniel, can you share a link to the document?

3. Starting the image transfer.

Once you choose a host, you can start a transfer:
- from your management system, and run the transfer program on that host
- or run the transfer program on the host, and let it start the transfer

virt-v2v took the second approach. This way the transfer program is easy to test, but you may find the first approach better for your needs. To start the transfer on the current host, you can use the host hardware id. The best example for this is the virt-v2v rhv-upload-plugin (BSD license):
https://github.com/libguestfs/libguestfs/blob/master/v2v/rhv-upload-plugin.p...
(A sketch of starting a transfer with the Python SDK follows this message.)

4. Transferring the data.

First check the host capabilities using an OPTIONS request over HTTPS:
http://ovirt.github.io/ovirt-imageio/random-io.html#options

This will tell you if the local imageio daemon supports the zero and flush operations, and a unix socket. All operations are supported since 4.2.3, but a robust program should handle an older daemon not supporting the new APIs.

If the daemon supports a unix socket, and you started the transfer on the same host your transfer program is running on, you should close the HTTPS connection and open a new one using the unix socket, to get 15% better performance with less CPU usage. (A sketch of this flow also follows this message.)

If the daemon supports zero, you can use the PATCH/zero request. With 4.2.6, this can give a significant performance improvement. If the daemon supports flush, you can defer flushing, possibly improving performance with some storage.

In all cases you should reuse the same connection for all requests, and always consume the entire response for all requests.

The best example for this is the imageio examples upload script (GPL license):
https://raw.githubusercontent.com/oVirt/ovirt-imageio/master/examples/upload

But note that this example works only for local upload; we are working on improving it to also support remote uploads, see here:
https://gerrit.ovirt.org/c/93327/

With this patch, if you can use Python, doing:

    from ovirt_imageio_common import client
    client.upload(filename, url, use_unix_socket=True)

will do the right thing. If you cannot use Python, you can use this as a reference for how to implement the same thing in other languages.

All this will eventually be documented properly.

Nir
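To make step 3 concrete, here is a minimal sketch of starting a download transfer with the Python SDK (ovirt-engine-sdk4). The connection details and disk id are placeholders, and the types follow the 4.2-era SDK examples; the rhv-upload-plugin linked above remains the authoritative reference:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details - adjust for your engine.
connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="password",
    ca_file="ca.pem")

transfers_service = connection.system_service().image_transfers_service()

# Start a download transfer for the disk image. To pin the transfer to the
# local host (as the rhv-upload-plugin does), look up the host by its
# hardware id (/etc/vdsm/vdsm.id) and pass host=types.Host(id=...) here.
transfer = transfers_service.add(
    types.ImageTransfer(
        image=types.Image(id="xxx-yyy"),  # placeholder disk image id
        direction=types.ImageTransferDirection.DOWNLOAD))

transfer_service = transfers_service.image_transfer_service(transfer.id)

# Wait until the transfer is ready before sending data requests.
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()

# transfer.transfer_url is the imageio endpoint on the chosen host.
# When done, call transfer_service.finalize() to close the transfer.
print(transfer.transfer_url)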
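And here is a minimal sketch of the step 4 capability check and unix-socket switch, using only the Python standard library. The host name, port, and ticket id are placeholders, and the OPTIONS response fields match the random-io documentation linked above; treat the details as assumptions to verify against your imageio version:

import http.client
import json
import socket
import ssl


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP connection over a UNIX socket."""

    def __init__(self, path):
        http.client.HTTPConnection.__init__(self, "localhost")
        self.path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.path)


ticket = "xxx-yyy"  # placeholder transfer ticket id
ctx = ssl.create_default_context(cafile="ca.pem")

# 1. Check the daemon capabilities over HTTPS (imageio daemon port 54322).
conn = http.client.HTTPSConnection("host1.example.com", 54322, context=ctx)
conn.request("OPTIONS", "/images/" + ticket)
res = conn.getresponse()
caps = json.loads(res.read())
# e.g. {"features": ["zero", "flush"], "unix_socket": "\0/org/ovirt/imageio"}

# 2. If we run on the same host and the daemon reports a unix socket,
#    reconnect over it for better performance and lower CPU usage.
if "unix_socket" in caps:
    conn.close()
    conn = UnixHTTPConnection(caps["unix_socket"])

# 3. Reuse this single connection for all GET/PUT requests, and always
#    read the entire response body after each request.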

Thanks, Nir, for the detailed response. On this point below:
Check how many transfers are active on all available hosts, and schedule a new transfer on a host which is less busy
Are there any resource limits configured on hosts that we should query in such a case? Or if there is a documented way to identify the best host for a transfer, can you share it?

Thanks
Suchitra

From: Nir Soffer <nsoffer@redhat.com>
Date: Tuesday, August 28, 2018 at 3:01 PM
To: Suchitra Herwadkar <Suchitra.Herwadkar@veritas.com>, Daniel Gur <dagur@redhat.com>
Cc: Daniel Erez <derez@redhat.com>, "Nisan, Tal" <tnisan@redhat.com>, Pavan Chavva <pchavva@redhat.com>, "Yaniv Lavi (Dary)" <ylavi@redhat.com>, "devel@ovirt.org" <devel@ovirt.org>
Subject: [EXTERNAL] Re: Image Transfer mechanism queries/API support