Well... knowing how to do it with curl is helpful... but I think I did it:
[root@odin ~]# curl -s -k --user admin@internal:blahblah \
  https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep '<name>'
<name>data</name>
<name>hosted_storage</name>
<name>ovirt-image-repository</name>
I guess what I did was translate that field (--sd-name my-storage-domain)
into the "volume" name... My question is: where do those names come from?
And which one would you typically place all your VMs in?
I just took a guess and figured "data" sounded like a good place to
stick raw images to build into VMs...
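For reference, those names are just the storage domains the engine knows
about; the same list (with the domain types, which is what actually tells
you where VM disks belong) can be pulled with the Python SDK. A rough
sketch, assuming the same engine URL and admin@internal credentials as the
commands below (the password is a placeholder):

# Sketch: list storage domains and their types with the oVirt Python SDK.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://ovirte01.penguinpages.local/ovirt-engine/api',
    username='admin@internal',
    password='blahblah',   # placeholder
    insecure=True,         # or pass ca_file=... instead of skipping verification
)

sds_service = connection.system_service().storage_domains_service()
for sd in sds_service.list():
    # sd.type is a StorageDomainType (data, export, image, iso, ...);
    # regular VM disks live on a data domain.
    print(sd.name, sd.type)

connection.close()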
[root@medusa thorst.penguinpages.local:_vmstore]# python3 \
  /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
  --engine-url https://ovirte01.penguinpages.local/ \
  --username admin@internal \
  --password-file /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password \
  --cafile /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer \
  --sd-name data --disk-sparse \
  /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 21474836480
Disk initial size: 11574706176
Disk name: ns02.qcow2
Disk backup: False
Connecting...
Creating disk...
Disk ID: 9ccb26cf-dd4a-4c9a-830c-ee084074d7a1
Creating image transfer...
Transfer ID: 3a382f0b-1e7d-4397-ab16-4def0e9fe890
Transfer host name: medusa
Uploading image...
[ 100.00% ] 20.00 GiB, 249.86 seconds, 81.97 MiB/s
Finalizing image transfer...
Upload completed successfully
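For what it's worth, the Disk ID printed above can be checked from the SDK
before the disk is attached anywhere; a minimal sketch under the same
connection assumptions as the listing sketch earlier (the ID is the one
from this run):

# Sketch: look up the uploaded disk by ID and confirm it is OK.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://ovirte01.penguinpages.local/ovirt-engine/api',
    username='admin@internal',
    password='blahblah',   # placeholder
    insecure=True,
)

disk_service = connection.system_service().disks_service().disk_service(
    '9ccb26cf-dd4a-4c9a-830c-ee084074d7a1')
disk = disk_service.get()
print(disk.name, disk.status, disk.provisioned_size)
assert disk.status == types.DiskStatus.OK  # upload finished and disk unlocked

connection.close()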
[root@medusa thorst.penguinpages.local:_vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
--engine-url
https://ovirte01.penguinpages.local/ --username admin@internal
--password-file
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
--cafile
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
--sd-name data --disk-sparse
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_v^C
[root@medusa thorst.penguinpages.local:_vmstore]# ls
example.log f118dcae-6162-4e9a-89e4-f30ffcfb9ccf ns02_20200910.tgz
ns02.qcow2 ns02_var.qcow2
[root@medusa thorst.penguinpages.local:_vmstore]# python3 \
  /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
  --engine-url https://ovirte01.penguinpages.local/ \
  --username admin@internal \
  --password-file /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password \
  --cafile /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer \
  --sd-name data --disk-sparse \
  /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_var.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 107374182400
Disk initial size: 107390828544
Disk name: ns02_var.qcow2
Disk backup: False
Connecting...
Creating disk...
Disk ID: 26def4e7-1153-417c-88c1-fd3dfe2b0fb9
Creating image transfer...
Transfer ID: 41518eac-8881-453e-acc0-45391fd23bc7
Transfer host name: medusa
Uploading image...
[ 16.50% ] 16.50 GiB, 556.42 seconds, 30.37 MiB/s
Now, with those disk IDs, and since each disk kept its name (very helpful),
I am able to reconstitute the VM.
The VM boots fine. I still need to fix VLANs and set manual MACs on the
vNICs, but this process worked fine.
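In case it helps anyone else, the "reconstitute" step I did in the web UI
can also be scripted by attaching the uploaded disks to the VM by the IDs
upload_disk.py printed. A rough sketch, assuming a VM named ns02 already
exists and a virtio-scsi interface (both assumptions on my part):

# Sketch: attach an uploaded disk to a VM by disk ID.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://ovirte01.penguinpages.local/ovirt-engine/api',
    username='admin@internal',
    password='blahblah',   # placeholder
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=ns02')[0]   # assumes the VM exists

attachments = vms_service.vm_service(vm.id).disk_attachments_service()
attachments.add(
    types.DiskAttachment(
        disk=types.Disk(id='9ccb26cf-dd4a-4c9a-830c-ee084074d7a1'),
        interface=types.DiskInterface.VIRTIO_SCSI,
        bootable=True,   # the OS disk; ns02_var.qcow2 would be bootable=False
        active=True,
    )
)

connection.close()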
Thanks for the input. It would be nice to have a GUI "upload" via HTTP into
the system :)
On Mon, Sep 21, 2020 at 2:19 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
On Mon, Sep 21, 2020 at 8:37 PM penguin pages <jeremey.wise(a)gmail.com> wrote:
>
>
> I pasted an old example with the wrong file path above. But here is a
cleaner version with the error I am trying to root cause
>
> [root@odin vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
--engine-url
https://ovirte01.penguinpages.local/ --username
admin@internal --password-file
/gluster_bricks/vmstore/vmstore/.ovirt.password --cafile
/gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name
vmstore --disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 21474836480
> Disk initial size: 431751168
> Disk name: ns01.qcow2
> Disk backup: False
> Connecting...
> Creating disk...
> Traceback (most recent call last):
> File
"/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line
262, in <module>
> name=args.sd_name
> File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line
7697, in add
> return self._internal_add(disk, headers, query, wait)
> File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
232, in _internal_add
> return future.wait() if wait else future
> File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
55, in wait
> return self._code(response)
> File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
229, in callback
> self._check_fault(response)
> File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
132, in _check_fault
> self._raise_error(response, body)
> File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
118, in _raise_error
> raise error
> ovirtsdk4.NotFoundError: Fault reason is "Operation Failed". Fault
detail is "Entity not found: vmstore". HTTP response code is 404.
You used:
--sd-name vmstore
But there is no such storage domain in this setup.
Check the storage domains on this setup. One (ugly) way is:
$ curl -s -k --user admin@internal:password \
  https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep '<name>'
<name>export1</name>
<name>iscsi1</name>
<name>iscsi2</name>
<name>nfs1</name>
<name>nfs2</name>
<name>ovirt-image-repository</name>
Nir
--
jeremey.wise(a)gmail.com