On Thu, Sep 14, 2017 at 5:45 PM Matthias Leopold <matthias.leopold(a)meduniwien.ac.at> wrote:
Hi Daniel and other friendly contributors,
finally I sorted out how to set provisioned_size/initial_size correctly
in upload_disk.py and my error is gone. It wasn't so easy, but maybe I
took an awkward route by starting with a preallocated qcow2 image. In
this special case you have to set provisioned_size to st_size, whereas
with sparse images provisioned_size is the "virtual size" from "qemu-img
info". This may seem obvious to others; I took the hard route.
Just to make this more clear - when you create a disk, you want to use the
virtual size for the disk size - this is the size that the guest will see.
For example:
$ qemu-img create -f qcow2 test.qcow2 20g
Formatting 'test.qcow2', fmt=qcow2 size=21474836480 encryption=off
cluster_size=65536 lazy_refcounts=off refcount_bits=16
$ ls -lh test.qcow2
-rw-r--r--. 1 nsoffer nsoffer 193K Sep 15 18:15 test.qcow2
The disk size should be exactly 20g.
The initial size of the disk depends on the actual file size; in this
example, anything bigger than 193k will be fine.
When using raw format, the disk size is always the file size.
When using preallocation, the actual file size may be more than the
virtual size, since the qcow file format needs extra space for image
metadata.
$ qemu-img create -f qcow2 test.qcow2 -o preallocation=full 1g
Formatting 'test.qcow2', fmt=qcow2 size=1073741824 encryption=off
cluster_size=65536 preallocation=full lazy_refcounts=off refcount_bits=16
$ ls -lht test.qcow2
-rw-r--r--. 1 nsoffer nsoffer 1.1G Sep 15 18:20 test.qcow2
$ python -c 'import os; print os.path.getsize("test.qcow2")'
1074135040
In this case the initial size must be 1074135040.
I guess the example upload code should be improved so it works
out of the box for preallocated images.
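Something along these lines could work for computing both values from the
image itself - just a rough sketch, not the actual upload_disk.py code, and
it assumes "qemu-img info --output=json" is available where the script runs:

import json
import os
import subprocess

def disk_sizes(path):
    # Read the image metadata in machine-readable form.
    out = subprocess.check_output(
        ["qemu-img", "info", "--output=json", path])
    info = json.loads(out)
    # provisioned_size: the virtual size, i.e. what the guest will see.
    provisioned_size = info["virtual-size"]
    # initial_size: must cover the apparent file size (st_size); for a
    # preallocated qcow2 this is bigger than the virtual size.
    initial_size = os.stat(path).st_size
    return provisioned_size, initial_size

provisioned_size, initial_size = disk_sizes("test.qcow2")
print("provisioned_size=%d initial_size=%d" % (provisioned_size, initial_size))

For a raw image qemu-img reports the file size as the virtual size, so the
same sketch should work there too.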
My approach stems from my desire to repeat the exact example in
upload_disk.py (which uses a qcow image) and my actual use case, which
is uploading a rather large image converted from vmdk (I have only tested
this with raw format so far), so I wanted to have some "really large" data
to upload.
I don't know any good reason to do full or falloc preallocation, since this
means preallocating the entire image - in this case you should use a raw
image and get better performance.
To simulate upload of a real image you can upload some DVD ISO, or
create a raw disk with some data and convert it to qcow2:
$ time dd if=/dev/zero bs=8M count=128 | tr "\0" "\1" > test.raw
$ qemu-img convert -f raw -O qcow2 test.raw test.qcow2
$ qemu-img info test.qcow2
image: test.qcow2
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 1.0G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
Nir
@nsoffer:
I'll open a bug for better ovirt-imageio-daemon as soon as I can.
Thanks a lot for the help
matthias
On 2017-09-13 at 16:49, Daniel Erez wrote:
> Hi Matthias,
>
> The 403 response from the daemon means the ticket can't be authenticated
> (for some reason). I assume that the issue here is the initial size of
> the disk.
> When uploading/downloading a qcow image, you should specify the apparent
> size of the file (see 'st_size' in [1]). You can get it simply by 'ls -l'
> [2] (which is a different value from the 'disk size' of qemu-img info
> [3]; see also the small Python sketch after [3] below).
> btw, why are you creating a preallocated qcow disk? For what use-case?
>
> [1] https://linux.die.net/man/2/stat
>
> [2] $ ls -l test.qcow2
> -rw-r--r--. 1 user user 1074135040 Sep 13 16:50 test.qcow2
>
> [3]
> $ qemu-img create -f qcow2 -o preallocation=full test.qcow2 1g
> $ qemu-img info test.qcow2
> image: test.qcow2
> file format: qcow2
> virtual size: 1.0G (1073741824 bytes)
> disk size: 1.0G
> cluster_size: 65536
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
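>
> The same apparent size is available from Python via os.stat - just a quick
> sketch, equivalent to the 'ls -l' in [2]:
>
> $ python -c 'import os; print os.stat("test.qcow2").st_size'
> 1074135040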
>
>
>
> On Wed, Sep 13, 2017 at 5:03 PM Matthias Leopold
> <matthias.leopold(a)meduniwien.ac.at> wrote:
>
> I tried it again twice:
>
> When using upload_disk.py from the oVirt engine host itself, the disk
> upload succeeds (despite a "503 Service Unavailable Completed 100%" in
> the script output at the end).
>
> Another try was from an ovirt-sdk installation directly on my Ubuntu
> desktop (yesterday I tried it from a CentOS VM on my desktop machine).
> This failed again, this time with "socket.error: [Errno 32] Broken pipe"
> after reaching "200 OK Completed 100%". In the imageio-proxy log I see
> the 403 error again at this moment.
>
> What's the difference between accessing the API from the engine host
> and from "outside" in this case?
>
> thx
> matthias
>
On 2017-09-12 at 16:42, Matthias Leopold wrote:
> > Thanks, I tried this script and it _almost_ worked ;-)
> >
> > I uploaded two images I created with
> > qemu-img create -f qcow2 -o preallocation=full
> > and
> > qemu-img create -f qcow2 -o preallocation=falloc
> >
> > For initial_size and provisioned_size I took the value reported by
> > "qemu-img info" in "virtual size" (same as "disk size" in this case)
> >
> > The upload goes to 100% and then fails with:
> >
> > 200 OK Completed 100%
> > Traceback (most recent call last):
> >   File "./upload_disk.py", line 157, in <module>
> >     headers=upload_headers,
> >   File "/usr/lib64/python2.7/httplib.py", line 1017, in request
> >     self._send_request(method, url, body, headers)
> >   File "/usr/lib64/python2.7/httplib.py", line 1051, in _send_request
> >     self.endheaders(body)
> >   File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
> >     self._send_output(message_body)
> >   File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
> >     self.send(msg)
> >   File "/usr/lib64/python2.7/httplib.py", line 840, in send
> >     self.sock.sendall(data)
> >   File "/usr/lib64/python2.7/ssl.py", line 746, in sendall
> >     v = self.send(data[count:])
> >   File "/usr/lib64/python2.7/ssl.py", line 712, in send
> >     v = self._sslobj.write(data)
> > socket.error: [Errno 104] Connection reset by peer
> >
> > In the web GUI the disk stays in Status: "Transferring via API";
> > it can only be removed after manually unlocking it (unlock_entity.sh)
> >
> > engine.log tells nothing interesting
> >
> > I attached the last lines of ovirt-imageio-proxy/image-proxy.log and
> > ovirt-imageio-daemon/daemon.log (from the executing node)
> >
> > The HTTP status 403 in ovirt-imageio-daemon/daemon.log doesn't look
> > too nice to me
> >
> > Can you explain what happens?
> >
> > oVirt engine is 4.1.5
> > oVirt node is 4.1.3 (is that a problem?)
> >
> > thx
> > matthias
> >
> >
> >
> > On 2017-09-12 at 13:15, Fred Rolland wrote:
> >> Hi,
> >>
> >> You can check this example:
> >>
> >> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
> >>
> >>
> >> Regards,
> >> Fred
> >>
> >> On Tue, Sep 12, 2017 at 11:49 AM, Matthias Leopold
> >> <matthias.leopold(a)meduniwien.ac.at> wrote:
> >>
> >> Hi,
> >>
> >> Is there a way to upload disk images (not OVF files, not ISO files)
> >> to oVirt storage domains via CLI? I need to upload an 800GB file and
> >> this is not really comfortable via the browser. I looked at ovirt-shell
> >> and
> >>
> >> https://www.ovirt.org/develop/release-management/features/storage/image-u...
> >>
> >> but I didn't find an option in either of them.
> >>
> >> thx
> >> matthias
> >>
> --
> Matthias Leopold
> IT Systems & Communications
> Medizinische Universität Wien
> Spitalgasse 23 / BT 88 /Ebene 00
> A-1090 Wien
> Tel: +43 1 40160-21241
> Fax: +43 1 40160-921200
--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200