<div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Thu, Sep 14, 2017 at 5:45 PM Matthias Leopold <<a href="mailto:matthias.leopold@meduniwien.ac.at">matthias.leopold@meduniwien.ac.at</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Daniel and other friendly contributors,<br>
<br>
Finally I sorted out how to set provisioned_size/initial_size correctly<br>
in upload_disk.py and my error is gone. It wasn't so easy, but maybe I<br>
took an awkward route when starting with a preallocated qcow2 image. In<br>
this special case you have to set provisioned_size to st_size, whereas<br>
with sparse images provisioned_size is the "virtual size" from "qemu-img<br>
info". This may seem obvious to others, I took the hard route.<br></blockquote><div><br></div><div>To make this clearer: when you create a disk, you want to use the</div><div>virtual size as the disk size; this is the size that the guest will see.</div><div><br></div><div>For example:</div><div><br></div><div><div>$ qemu-img create -f qcow2 test.qcow2 20g</div><div>Formatting 'test.qcow2', fmt=qcow2 size=21474836480 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16</div><div><br></div><div>$ ls -lh test.qcow2 </div><div>-rw-r--r--. 1 nsoffer nsoffer 193K Sep 15 18:15 test.qcow2</div></div><div><br></div><div>The disk size should be exactly 20g.</div><div><br></div><div>The initial size of the disk depends on the actual file size; in this example,</div><div>anything bigger than 193K will be fine.</div><div><br></div><div>When using raw format, the disk size is always the file size.</div><div><br></div><div>When using preallocation, the actual file size may be more than the</div><div>virtual size, since the qcow2 file format needs extra space for image</div><div>metadata:</div><div><br></div><div><div>$ qemu-img create -f qcow2 test.qcow2 -o preallocation=full 1g</div><div>Formatting 'test.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 preallocation=full lazy_refcounts=off refcount_bits=16</div><div><br></div><div>$ ls -lh test.qcow2 </div><div>-rw-r--r--. 1 nsoffer nsoffer 1.1G Sep 15 18:20 test.qcow2</div></div><div><br></div><div><div>$ python -c 'import os; print os.path.getsize("test.qcow2")'</div><div>1074135040</div></div><div><br></div><div>In this case the initial size must be at least 1074135040.</div><div><br></div><div>I guess the example upload code should be improved so it works</div><div>out of the box for preallocated images.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
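</blockquote><div><br></div><div>These two sizes can also be derived programmatically. A minimal Python sketch, assuming the image was inspected with "qemu-img info --output json" (the helper names here are illustrative, not part of the SDK):</div><div><br></div>

```python
# Sketch: compute provisioned_size and initial_size for an upload from
# qemu-img's JSON output plus the file's apparent size (st_size).
# The function names are illustrative; "virtual-size" is a real key in
# the output of "qemu-img info --output json".
import json
import os
import subprocess

def upload_sizes(info_json, file_size):
    """Return (provisioned_size, initial_size) for the new disk."""
    info = json.loads(info_json)
    provisioned_size = info["virtual-size"]  # the size the guest will see
    initial_size = file_size                 # must cover the actual file
    return provisioned_size, initial_size

def sizes_for(path):
    # Requires qemu-img on PATH.
    out = subprocess.check_output(
        ["qemu-img", "info", "--output", "json", path])
    return upload_sizes(out, os.stat(path).st_size)
```

<div><br></div><div>This gives sensible values for sparse and preallocated images alike, since initial_size always follows the file on disk and provisioned_size always follows the virtual size.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">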
<br>
My approach stems from my desire to repeat the exact example in<br>
upload_disk.py (which uses a qcow image) and my actual use case, which<br>
is uploading a rather large image converted from VMDK (I have only tested<br>
this with raw format so far), so I wanted to have some "real large" data to<br>
upload.<br></blockquote><div><br></div><div>I don't know of any good reason to do full or falloc preallocation, since this</div><div>means preallocating the entire image; in that case you should use a raw</div><div>image and get better performance.</div><div><br></div><div>To simulate uploading a real image you can upload a DVD ISO, or </div><div>create a raw disk with some data and convert it to qcow2:</div><div><br></div><div>$ time dd if=/dev/zero bs=8M count=128 | tr "\0" "\1" > test.raw<br></div><div><br></div><div>$ qemu-img convert -f raw -O qcow2 test.raw test.qcow2</div><div><br></div><div>$ qemu-img info test.qcow2 </div><div>image: test.qcow2</div><div>file format: qcow2</div><div>virtual size: 1.0G (1073741824 bytes)</div><div>disk size: 1.0G</div><div>cluster_size: 65536</div><div>Format specific information:</div><div> compat: 1.1</div><div> lazy refcounts: false</div><div> refcount bits: 16</div><div> corrupt: false</div><div><br></div><div>Nir</div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
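</blockquote><div><br></div><div>For reference, the dd/tr pipeline can also be expressed as a short Python sketch (the function name and sizes are arbitrary):</div><div><br></div>

```python
# Sketch: create a raw test image filled with 0x01 bytes, mirroring
# 'dd if=/dev/zero bs=8M count=128 | tr "\0" "\1"', so the data cannot
# collapse into holes during conversion or upload.
def make_test_image(path, size, chunk=8 * 1024 * 1024):
    """Write `size` bytes of 0x01 to `path` in `chunk`-sized blocks."""
    block = b"\x01" * chunk
    written = 0
    with open(path, "wb") as f:
        while written != size:
            n = min(chunk, size - written)
            f.write(block[:n])
            written += n
```

<div><br></div><div>Afterwards the file can be converted with qemu-img convert exactly as above.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">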
<br>
@nsoffer:<br>
I'll open a bug for better ovirt-imageio-daemon as soon as I can.<br>
<br>
Thanks a lot for the help.<br>
matthias<br>
<br>
On 2017-09-13 at 16:49, Daniel Erez wrote:<br>
> Hi Matthias,<br>
><br>
> The 403 response from the daemon means the ticket can't be authenticated<br>
> (for some reason). I assume that the issue here is the initial size of<br>
> the disk.<br>
> When uploading/downloading a qcow2 image, you should specify the apparent<br>
> size of the file (see 'st_size' in [1]). You can get it simply with 'ls<br>
> -l' [2] (which is a different value from the 'disk size' of qemu-img info [3]).<br>
> BTW, why are you creating a preallocated qcow2 disk? For what use case?<br>
><br>
> [1] <a href="https://linux.die.net/man/2/stat" rel="noreferrer" target="_blank">https://linux.die.net/man/2/stat</a><br>
><br>
> [2] $ ls -l test.qcow2<br>
> -rw-r--r--. 1 user user 1074135040 Sep 13 16:50 test.qcow2<br>
><br>
> [3]<br>
> $ qemu-img create -f qcow2 -o preallocation=full test.qcow2 1g<br>
> $ qemu-img info test.qcow2<br>
> image: test.qcow2<br>
> file format: qcow2<br>
> virtual size: 1.0G (1073741824 bytes)<br>
> disk size: 1.0G<br>
> cluster_size: 65536<br>
> Format specific information:<br>
> compat: 1.1<br>
> lazy refcounts: false<br>
> refcount bits: 16<br>
> corrupt: false<br>
><br>
><br>
><br>
> On Wed, Sep 13, 2017 at 5:03 PM Matthias Leopold<br>
> <<a href="mailto:matthias.leopold@meduniwien.ac.at" target="_blank">matthias.leopold@meduniwien.ac.at</a><br>
> <mailto:<a href="mailto:matthias.leopold@meduniwien.ac.at" target="_blank">matthias.leopold@meduniwien.ac.at</a>>> wrote:<br>
><br>
> I tried it again twice:<br>
><br>
> When using upload_disk.py from the oVirt engine host itself, the disk<br>
> upload succeeds (despite a "503 Service Unavailable Completed 100%" in<br>
> the script output at the end).<br>
><br>
> Another try was from an ovirt-sdk installation on my Ubuntu desktop<br>
> itself (yesterday I tried it from a CentOS VM on my desktop machine).<br>
> This failed again, this time with "socket.error: [Errno 32] Broken pipe"<br>
> after reaching "200 OK Completed 100%". In the imageio-proxy log I again<br>
> see the 403 error at that moment.<br>
><br>
> What's the difference between accessing the API from the engine host and<br>
> from "outside" in this case?<br>
><br>
> thx<br>
> matthias<br>
><br>
> On 2017-09-12 at 16:42, Matthias Leopold wrote:<br>
> > Thanks, I tried this script and it _almost_ worked ;-)<br>
> ><br>
> > I uploaded two images I created with<br>
> > qemu-img create -f qcow2 -o preallocation=full<br>
> > and<br>
> > qemu-img create -f qcow2 -o preallocation=falloc<br>
> ><br>
> > For initial_size and provisioned_size I took the value reported by<br>
> > "qemu-img info" as "virtual size" (the same as "disk size" in this case).<br>
> ><br>
> > The upload goes to 100% and then fails with<br>
> ><br>
> > 200 OK Completed 100%<br>
> > Traceback (most recent call last):<br>
> > File "./upload_disk.py", line 157, in <module><br>
> > headers=upload_headers,<br>
> > File "/usr/lib64/python2.7/httplib.py", line 1017, in request<br>
> > self._send_request(method, url, body, headers)<br>
> > File "/usr/lib64/python2.7/httplib.py", line 1051, in<br>
> _send_request<br>
> > self.endheaders(body)<br>
> > File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders<br>
> > self._send_output(message_body)<br>
> > File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output<br>
> > self.send(msg)<br>
> > File "/usr/lib64/python2.7/httplib.py", line 840, in send<br>
> > self.sock.sendall(data)<br>
> > File "/usr/lib64/python2.7/ssl.py", line 746, in sendall<br>
> > v = self.send(data[count:])<br>
> > File "/usr/lib64/python2.7/ssl.py", line 712, in send<br>
> > v = self._sslobj.write(data)<br>
> > socket.error: [Errno 104] Connection reset by peer<br>
> ><br>
> > In the web GUI the disk stays in status "Transferring via API";<br>
> > it can only be removed after manually unlocking it (unlock_entity.sh).<br>
> ><br>
> > engine.log shows nothing interesting.<br>
> ><br>
> > I attached the last lines of ovirt-imageio-proxy/image-proxy.log and<br>
> > ovirt-imageio-daemon/daemon.log (from the executing node).<br>
> ><br>
> > The HTTP status 403 in ovirt-imageio-daemon/daemon.log doesn't look too<br>
> > nice to me.<br>
> ><br>
> > Can you explain what happens?<br>
> ><br>
> > oVirt engine is 4.1.5<br>
> > oVirt node is 4.1.3 (is that a problem?)<br>
> ><br>
> > thx<br>
> > matthias<br>
> ><br>
> ><br>
> ><br>
> > On 2017-09-12 at 13:15, Fred Rolland wrote:<br>
> >> Hi,<br>
> >><br>
> >> You can check this example:<br>
> >><br>
> <a href="https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py" rel="noreferrer" target="_blank">https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py</a><br>
> >><br>
> >><br>
> >> Regards,<br>
> >> Fred<br>
> >><br>
> >> On Tue, Sep 12, 2017 at 11:49 AM, Matthias Leopold<br>
> >> <<a href="mailto:matthias.leopold@meduniwien.ac.at" target="_blank">matthias.leopold@meduniwien.ac.at</a><br>
> <mailto:<a href="mailto:matthias.leopold@meduniwien.ac.at" target="_blank">matthias.leopold@meduniwien.ac.at</a>><br>
> >> <mailto:<a href="mailto:matthias.leopold@meduniwien.ac.at" target="_blank">matthias.leopold@meduniwien.ac.at</a><br>
> <mailto:<a href="mailto:matthias.leopold@meduniwien.ac.at" target="_blank">matthias.leopold@meduniwien.ac.at</a>>>> wrote:<br>
> >><br>
> >> Hi,<br>
> >><br>
> >> Is there a way to upload disk images (not OVF files, not ISO files)<br>
> >> to oVirt storage domains via CLI? I need to upload an 800GB file and<br>
> >> this is not really comfortable via the browser. I looked at ovirt-shell<br>
> >> and<br>
> >><br>
> >><br>
> <a href="https://www.ovirt.org/develop/release-management/features/storage/image-upload/" rel="noreferrer" target="_blank">https://www.ovirt.org/develop/release-management/features/storage/image-upload/</a><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >> but I didn't find an option in either of them.<br>
> >><br>
> >> thx<br>
> >> matthias<br>
> >><br>
> >> _______________________________________________<br>
> >> Users mailing list<br>
> >> <a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a> <mailto:<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>> <mailto:<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
> <mailto:<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>>><br>
> >> <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
> >> <<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>><br>
> >><br>
> >><br>
> ><br>
><br>
> --<br>
> Matthias Leopold<br>
> IT Systems & Communications<br>
> Medizinische Universität Wien<br>
> Spitalgasse 23 / BT 88 /Ebene 00<br>
> A-1090 Wien<br>
> Tel: <a href="tel:+43%201%204016021241" value="+4314016021241" target="_blank">+43 1 40160-21241</a> <tel:+43%201%204016021241><br>
> Fax: <a href="tel:+43%201%2040160921200" value="+43140160921200" target="_blank">+43 1 40160-921200</a> <tel:+43%201%2040160921200><br>
><br>
<br>
--<br>
Matthias Leopold<br>
IT Systems & Communications<br>
Medizinische Universität Wien<br>
Spitalgasse 23 / BT 88 /Ebene 00<br>
A-1090 Wien<br>
Tel: <a href="tel:+43%201%204016021241" value="+4314016021241" target="_blank">+43 1 40160-21241</a><br>
Fax: <a href="tel:+43%201%2040160921200" value="+43140160921200" target="_blank">+43 1 40160-921200</a><br>
</blockquote></div></div>