
Hello list,
is anybody successfully using oVirt + Foreman for VM creation + provisioning?
I'm using Foreman (latest version, 1.15.2) with the latest oVirt version (4.1.3), but I'm encountering several problems, especially related to disks. For example:
- cannot create a VM with multiple disks through the Foreman CLI (hammer)
- if I create a multidisk VM from Foreman, the second disk always gets the "bootable" flag and not the primary image, making the VMs not bootable at all.
Any other Foreman user sharing the pain here? Foreman's list is not so useful, so I'm trying to ask here. How do you programmatically create virtual machines with oVirt and Foreman? Should I switch to directly using the oVirt API?
Thanks in advance
Davide

CC-ing Ohad and Ivan from the Foreman team to take a look.
Also, by default, RHV 4.1 will use v4 of the API, so you have to use a URL in Foreman that uses v3 (as Foreman doesn't support v4 yet; a sketch of such a URL follows below). I assume that's not your issue, otherwise you would have encountered more basic issues.
Also, can you please share your logs from both environments?
Ohad/Ivan, any clue?
Thanks, Oved
On Jul 24, 2017 18:08, "Davide Ferrari" <davide@billymob.com> wrote:
Hello list,
is anybody successfully using oVirt + Foreman for VM creation + provisioning?
I'm using Foreman (latest version, 1.15.2) with the latest oVirt version (4.1.3), but I'm encountering several problems, especially related to disks. For example:
- cannot create a VM with multiple disks through the Foreman CLI (hammer)
- if I create a multidisk VM from Foreman, the second disk always gets the "bootable" flag and not the primary image, making the VMs not bootable at all.
Any other Foreman user sharing the pain here? Foreman's list is not so useful, so I'm trying to ask here. How do you programmatically create virtual machines with oVirt and Foreman? Should I switch to directly using the oVirt API?
Thanks in advance
Davide
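Regarding the v3 URL mentioned above: a minimal sketch of checking such an endpoint, assuming a hypothetical engine hostname and admin@internal credentials (the /ovirt-engine/api/v3 path is the same one this thread later uses for storage domains):

# Hypothetical hostname; verifies that the v3 endpoint answers before
# pointing the Foreman compute resource URL at it
curl -k -u 'admin@internal:PASSWORD' https://engine.example.com/ovirt-engine/api/v3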

Oved Ourfali <oourfali@redhat.com> writes:
CC-ing Ohad and Ivan from the Foreman team to take a look.
Also, by default, RHV 4.1 will use v4 of the API, so you have to use a URL in Foreman that uses v3 (as Foreman doesn't support v4 yet).
I assume that's not your issue, otherwise you would have encountered more basic issues.
Also, can you please share your logs from both environments?
Ohad/Ivan, any clue?
Thanks, Oved
On Jul 24, 2017 18:08, "Davide Ferrari" <davide@billymob.com> wrote:
Hello list
is anybody successfully using oVirt + Foreman for VM creation + provisioning?
I'm using Foreman (latest version, 1.15.2) with the latest oVirt version (4.1.3), but I'm encountering several problems, especially related to disks. For example:
- cannot create a VM with multiple disks through the Foreman CLI (hammer)
Could you send the hammer command you're using?
- if I create a multidisk VM from Foreman, the second disk always gets the "bootable" flag and not the primary image, making the VMs not bootable at all.
Are the compute profiles involved in the provisioning by any chance?
/CC to Ori to have more pairs of eyes to look at this.
-- Ivan
Any other Foreman user sharing the pain here? Foreman's list is not so useful, so I'm trying to ask here. How do you programmatically create virtual machines with oVirt and Foreman? Should I switch to directly using the oVirt API?
Thanks in advance
Davide

Hello
I've attached logs from:
- hammer cli (debug) with the command line I've used
- foreman logs
- ovirt engine logs (server.log)
Basically I was trying to create a VM from an oVirt template linked to a Foreman image (CentOS_73), which consists of a single disk with the OS, and to attach 2 more disks via Hammer. In this case I get a 404 Resource Not Found from Foreman, and what I see in the oVirt logs is that the VM is created and then immediately deleted via the API.
Thanks!
On 24/07/17 20:56, Oved Ourfali wrote:
CC-ing Ohad and Ivan from the Foreman team to take a look.
Also, by default, RHV 4.1 will use v4 of the API, so you have to use a URL in Foreman that uses v3 (as Foreman doesn't support v4 yet).
I assume that's not your issue, otherwise you would have encountered more basic issues.
Also, can you please share your logs from both environments?
Ohad/Ivan, any clue?
Thanks, Oved
On Jul 24, 2017 18:08, "Davide Ferrari" <davide@billymob.com> wrote:
Hello list
is anybody successfully using oVirt + Foreman for VM creation + provisioning?
I'm using Foreman (latest version, 1.15.2) with the latest oVirt version (4.1.3), but I'm encountering several problems, especially related to disks. For example:
- cannot create a VM with multiple disks through the Foreman CLI (hammer)
- if I create a multidisk VM from Foreman, the second disk always gets the "bootable" flag and not the primary image, making the VMs not bootable at all.
Any other Foreman user sharing the pain here? Foreman's list is not so useful, so I'm trying to ask here. How do you programmatically create virtual machines with oVirt and Foreman? Should I switch to directly using the oVirt API?
Thanks in advance
Davide

Last time I looked at creating VMs from Foreman there was a problem with the compute resource being passed from the Foreman plugin to the oVirt API.
Can't remember exactly what was being sent, but it didn't match any available oVirt 'instance type', which is why it was failing to create the machine.
Not sure if you're facing the same issue, but maybe worth looking into...
On 25 July 2017 at 09:59, Davide Ferrari <davide@billymob.com> wrote:
Hello
I've attached logs from:
- hammer cli (debug) with the command line I've used
- foreman logs
- ovirt engine logs (server.log)
Basically I was trying to create a VM from an oVirt template linked to a Foreman image (CentOS_73), which consists of a single disk with the OS, and to attach 2 more disks via Hammer. In this case I get a 404 Resource Not Found from Foreman, and what I see in the oVirt logs is that the VM is created and then immediately deleted via the API.
Thanks!
On 24/07/17 20:56, Oved Ourfali wrote:
CC-ing Ohad and Ivan from the Foreman team to take a look.
Also, by default, RHV 4.1 will use v4 of the API, so you have to use a URL in Foreman that uses v3 (as Foreman doesn't support v4 yet).
I assume that's not your issue, otherwise you would have encountered more basic issues.
Also, can you please share your logs from both environments?
Ohad/Ivan, any clue?
Thanks, Oved
On Jul 24, 2017 18:08, "Davide Ferrari" <davide@billymob.com> wrote:
Hello list
is anybody successfully using oVirt + Foreman for VM creation + provisioning?
I'm using Foreman (latest version, 1.15.2) with the latest oVirt version (4.1.3), but I'm encountering several problems, especially related to disks. For example:
- cannot create a VM with multiple disks through the Foreman CLI (hammer)
- if I create a multidisk VM from Foreman, the second disk always gets the "bootable" flag and not the primary image, making the VMs not bootable at all.
Any other Foreman user sharing the pain here? Foreman's list is not so useful, so I'm trying to ask here. How do you programmatically create virtual machines with oVirt and Foreman? Should I switch to directly using the oVirt API?
Thanks in advance
Davide

On 25/07/17 12:19, Maton, Brett wrote:
Last time I looked at creating VMs from Foreman there was a problem with the compute resource being passed from the Foreman plugin to the oVirt API.
Can't remember exactly what was being sent, but it didn't match any available oVirt 'instance type', which is why it was failing to create the machine.
Not sure if you're facing the same issue, but maybe worth looking into...
Well, actually I can create a VM both from the Foreman UI and the Hammer CLI; the problem arises when I try to add more disks to that VM.

On 07/27/2017 09:46 AM, Davide Ferrari wrote:
On 25/07/17 10:59, Davide Ferrari wrote:
Hello
I've attached logs from:
- hammer cli (debug) with the command line I've used
- foreman logs
- ovirt engine logs (server.log)
Any idea about what might be happening?
Looks like the oVirt engine is rejecting the request to add the disk because some of the related entities don't exist. This is the relevant message in the engine log:

2017-07-25 08:28:03,063Z ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-51) [] Operation Failed: Entity not found: 23f8f1ae-a3ac-47bf-8223-5b5f7c29e508

It would be nice if you could check the /var/log/httpd/ssl_access_log on the oVirt engine machine. There should be a line there with the 404 HTTP status, something like this:

POST /ovirt-engine/api/vms/<vm_id>/disks 404

What is the exact content of that line? Is the VM id the one that appears in the above message?
Also, can you check what the identifiers of the relevant data center and storage domains are?
There should also be additional details in the /var/log/ovirt-engine/server.log file. Please check it.
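A quick way to pull such lines out of the access log, assuming the default log location on the engine machine:

# List failing POSTs against the VMs collection in the engine's Apache log
grep ' 404 ' /var/log/httpd/ssl_access_log | grep 'POST /ovirt-engine/api/vms'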

On 27/07/17 11:17, Juan Hernández wrote:
Looks like the oVirt engine is rejecting the request to add the disk because some of the related entities don't exist. This is the relevant message in the engine log:
2017-07-25 08:28:03,063Z ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-51) [] Operation Failed: Entity not found: 23f8f1ae-a3ac-47bf-8223-5b5f7c29e508
It would be nice if you could check the /var/log/httpd/ssl_access_log on the oVirt engine machine. There should be a line there with the 404 HTTP status, something like this:
POST /ovirt-engine/api/vms/<vm_id>/disks 404
What is the exact content of that line? Is the VM id the one that appears in the above message?
Bingo, there's definitely a 404 logged at the same hour:

192.168.10.158 - - [25/Jul/2017:08:28:02 +0000] "POST /ovirt-engine/api/vms/896098c2-5895-42c3-a419-0c3a43b5ff8b/disks HTTP/1.1" 404 169

But the ID is different.
Also, can you check what the identifiers of the relevant data center and storage domains are?
DC and storage UUID "should" be correct, I've copied them from the oVirt CLI output into my hammer command. These are the storage IDs for the datacenter where I'm trying to create the VM in:

[oVirt shell (connected)]# list glustervolumes --cluster-identifier 00000002-0002-0002-0002-000000000345
id : 23f8f1ae-a3ac-47bf-8223-5b5f7c29e508 name : data_ssd
id : 6be35972-4720-4d34-b2b0-26ffc294f8a3 name : engine
id : 66f33b1e-7bc8-44cf-9cca-9041b0e0dd15 name : export
id : cc2c9765-6a3d-4281-8af8-c3526a81cfab name : iso

and this is the command line I'm using:

hammer host create --architecture-id=1 --domain billy.preprod --operatingsystem-id=7 --hostgroup-title Billy/Preprod --name foo01 --partition-table-id=192 --provision-method image --root-password billy12345 --compute-resource 'LeaseWeb VMs prod' --image CentOS_7.3 --compute-attributes cluster=00000002-0002-0002-0002-000000000345,cores=2,memory=4294967296,start=1 --volume '"size_gb=20,storage_domain=23f8f1ae-a3ac-47bf-8223-5b5f7c29e508,bootable=0"' --volume '"size_gb=30,storage_domain=23f8f1ae-a3ac-47bf-8223-5b5f7c29e508,bootable=0"'
There should also be additional details in the /var/log/ovirt-engine/server.log file. Please check it.
Nope, no log with the same timestamp in server.log :/
Thanks for your kind help!
--
Davide

On 07/28/2017 09:34 AM, Davide Ferrari wrote:
On 27/07/17 11:17, Juan Hernández wrote:
Looks like the oVirt engine is rejecting the request to add the disk because some of the related entities don't exist. This is the relevant message in the engine log:
2017-07-25 08:28:03,063Z ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-51) [] Operation Failed: Entity not found: 23f8f1ae-a3ac-47bf-8223-5b5f7c29e508
It would be nice if you could check the /var/log/httpd/ssl_access_log on the oVirt engine machine. There should be a line there with the 404 HTTP status, something like this:
POST /ovirt-engine/api/vms/<vm_id>/disks 404
What is the exact content of that line? Is the VM id the one that appears in the above message?
Bingo, there's definitely a 404 logged at the same hour:
192.168.10.158 - - [25/Jul/2017:08:28:02 +0000] "POST /ovirt-engine/api/vms/896098c2-5895-42c3-a419-0c3a43b5ff8b/disks HTTP/1.1" 404 169
But the ID is different.
Also, can you check what the identifiers of the relevant data center and storage domains are?
DC and storage UUID "should" be correct, I've copied them from the oVirt CLI output into my hammer command. These are the storage IDs for the datacenter where I'm trying to create the VM in:
[oVirt shell (connected)]# list glustervolumes --cluster-identifier 00000002-0002-0002-0002-000000000345
id : 23f8f1ae-a3ac-47bf-8223-5b5f7c29e508 name : data_ssd
id : 6be35972-4720-4d34-b2b0-26ffc294f8a3 name : engine
id : 66f33b1e-7bc8-44cf-9cca-9041b0e0dd15 name : export
id : cc2c9765-6a3d-4281-8af8-c3526a81cfab name : iso
and this is the command line I'm using:
hammer host create --architecture-id=1 --domain billy.preprod --operatingsystem-id=7 --hostgroup-title Billy/Preprod --name foo01 --partition-table-id=192 --provision-method image --root-password billy12345 --compute-resource 'LeaseWeb VMs prod' --image CentOS_7.3 --compute-attributes cluster=00000002-0002-0002-0002-000000000345,cores=2,memory=4294967296,start=1 --volume '"size_gb=20,storage_domain=23f8f1ae-a3ac-47bf-8223-5b5f7c29e508,bootable=0"' --volume '"size_gb=30,storage_domain=23f8f1ae-a3ac-47bf-8223-5b5f7c29e508,bootable=0"'
So there is something wrong with the "data_ssd" storage domain, apparently, as the identifier that can't be found corresponds to that storage domain. Can you try to retrieve that storage domain? Just use your browser to get the following URL:

https://yourovirt/ovirt-engine/api/storagedomains/23f8f1ae-a3ac-47bf-8223-5b...

Also this, in case the problem is related to version 3 of the API:

https://yourovirt/ovirt-engine/api/v3/storagedomains/23f8f1ae-a3ac-47bf-8223...

Do they work?
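The same check can be done from the command line; a hedged sketch, using the full identifier from the engine log above and assuming admin@internal credentials:

# v4 endpoint
curl -k -u 'admin@internal:PASSWORD' 'https://yourovirt/ovirt-engine/api/storagedomains/23f8f1ae-a3ac-47bf-8223-5b5f7c29e508'
# v3 endpoint
curl -k -u 'admin@internal:PASSWORD' 'https://yourovirt/ovirt-engine/api/v3/storagedomains/23f8f1ae-a3ac-47bf-8223-5b5f7c29e508'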
There should also be additional details in the /var/log/ovirt-engine/server.log file. Please check it.
Nope, no log with the same timestamp in server.log :/
Thanks for your kind help!

On 28/07/17 10:50, Juan Hernández wrote:
[oVirt shell (connected)]# list glustervolumes --cluster-identifier 00000002-0002-0002-0002-000000000345
id : 23f8f1ae-a3ac-47bf-8223-5b5f7c29e508 name : data_ssd
id : 6be35972-4720-4d34-b2b0-26ffc294f8a3 name : engine
id : 66f33b1e-7bc8-44cf-9cca-9041b0e0dd15 name : export
id : cc2c9765-6a3d-4281-8af8-c3526a81cfab name : iso
So there is something wrong with the "data_ssd" storage domain, apparently, as the identifier that can't be found corresponds to that storage domain. Can you try to retrieve that storage domain? Just use your browser to get the following URL:
https://yourovirt/ovirt-engine/api/storagedomains/23f8f1ae-a3ac-47bf-8223-5b...
Also this, in case the problem is related to version 3 of the API:
https://yourovirt/ovirt-engine/api/v3/storagedomains/23f8f1ae-a3ac-47bf-8223...
Do they work?
Nope, 404 on both the v4 and v3 API, but if I go to the storagedomains/ root, I get completely different UUIDs listed there. For example, in the case of the "data_ssd" domain, the UUID is 7a28ea1a-df7e-4205-bb96-45ff2817f175.

Why is the oVirt console showing a completely different UUID?

Anyway, I've replaced the storage domain UUID with the one that works with the REST API and something improved: now I don't get the 404 from oVirt and the machine is not deleted. BUT: I've added 2 disks (20GB and 30GB) plus the base template 8GB disk, and I get a VM with four (4) 8GB disks, and the bootable one is a random disk.

I've attached the engine.log with the (I hope) relevant messages.

Thanks

On 07/28/2017 01:27 PM, Davide Ferrari wrote:
On 28/07/17 10:50, Juan Hernández wrote:
[oVirt shell (connected)]# list glustervolumes --cluster-identifier 00000002-0002-0002-0002-000000000345
id : 23f8f1ae-a3ac-47bf-8223-5b5f7c29e508 name : data_ssd
id : 6be35972-4720-4d34-b2b0-26ffc294f8a3 name : engine
id : 66f33b1e-7bc8-44cf-9cca-9041b0e0dd15 name : export
id : cc2c9765-6a3d-4281-8af8-c3526a81cfab name : iso
So there is something wrong with the "data_ssd" storage domain, apparently, as the identifier that can't be found corresponds to that storage domain. Can you try to retrieve that storage domain? Just use your browser to get the following URL:
https://yourovirt/ovirt-engine/api/storagedomains/23f8f1ae-a3ac-47bf-8223-5b...
Also this, in case the problem is related to version 3 of the API:
https://yourovirt/ovirt-engine/api/v3/storagedomains/23f8f1ae-a3ac-47bf-8223...
Do they work?
Nope, 404 on both the v4 and v3 API, but if I go to the storagedomains/ root, I get completely different UUIDs listed there. For example, in the case of the "data_ssd" domain, the UUID is 7a28ea1a-df7e-4205-bb96-45ff2817f175.
Why is the ovirt console showing a completely different UUID?
Ah, I see, in your command you are listing Gluster volumes, not storage domains. They are different kinds of objects inside oVirt, and thus they have different identifiers. That is completely normal. If you want to get the identifiers of the storage domains, use "list storagedomains".
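For reference, a short sketch of both ways of listing the storage domains themselves (the oVirt shell session is the same one shown earlier in the thread; the curl credentials are an assumption):

[oVirt shell (connected)]# list storagedomains

# or via the REST API; ids and names appear in the XML response
curl -k -u 'admin@internal:PASSWORD' 'https://yourovirt/ovirt-engine/api/storagedomains'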
Anyway, I've replaced the storage domain UUID with the one that works with the REST API and something improved: now I don't get the 404 from oVirt and the machine is not deleted. BUT: I've added 2 disks (20GB and 30GB) plus the base template 8GB disk, and I get a VM with four (4) 8GB disks, and the bootable one is a random disk.
I've attached the engine.log with the (I hope) relevant messages
Are you adding those disks and template using the Foreman CLI? Can you share the commands that you are using?
Also, can you share again the relevant part of the /var/log/httpd/ssl_access_log file? There we can see what requests are actually sent to the oVirt engine.

On 28/07/17 16:14, Juan Hernández wrote:
Ah, I see, in your command you are listing Gluster volumes, not storage domains. They are different kinds of objects inside oVirt, and thus they have different identifiers. That is completely normal. If you want to get the identifiers of the storage domains, use "list storagedomains".
Oh, got it. Thanks for the tip!
Anyway, I've replaced the storage domain UUID with the one that works with the REST API and something improved: now I don't get the 404 from oVirt and the machine is not deleted. BUT: I've added 2 disks (20GB and 30GB) plus the base template 8GB disk, and I get a VM with four (4) 8GB disks, and the bootable one is a random disk.
I've attached the engine.log with the (I hope) relevant messages
Are you adding those disks and template using the Foreman CLI? Can you share the commands that you are using?
Yes, I'm using Hammer CLI:

hammer host create --architecture-id=1 --domain billy.preprod --operatingsystem-id=7 --hostgroup-title Billy/Preprod --name foobar03 --partition-table-id=192 --provision-method image --root-password billy12345 --compute-resource 'LeaseWeb VMs prod' --image CentOS_7.3 --compute-attributes cluster=00000002-0002-0002-0002-000000000345,cores=2,memory=4294967296,start=1 --volume '"size_gb=20,storage_domain=ba2bd397-9222-424d-aecc-eb652c0169d9,bootable=0"' --volume '"size_gb=30,storage_domain=ba2bd397-9222-424d-aecc-eb652c0169d9,bootable=0"'
Also, can you share again the relevant part of the /var/log/httpd/ssl_access_log file? There we can see what requests are actually sent to the oVirt engine.
These are the requests arriving from Foreman:

192.168.10.158 - - [28/Jul/2017:14:19:42 +0000] "GET /ovirt-engine/api/vms/24831007-97ad-4f6d-9009-e6fb68a585f9 HTTP/1.1" 200 2865
192.168.10.158 - - [28/Jul/2017:14:26:19 +0000] "GET /ovirt-engine/api/datacenters?search= HTTP/1.1" 200 408
192.168.10.158 - - [28/Jul/2017:14:26:19 +0000] "GET /ovirt-engine/api/operatingsystems HTTP/1.1" 200 2943
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "GET /ovirt-engine/api/datacenters?search= HTTP/1.1" 200 408
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "GET /ovirt-engine/api/operatingsystems HTTP/1.1" 200 2943
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "GET /ovirt-engine/api/datacenters?search= HTTP/1.1" 200 408
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "GET /ovirt-engine/api/clusters/ HTTP/1.1" 200 1091
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "POST /ovirt-engine/api/vms HTTP/1.1" 202 1612
192.168.10.158 - - [28/Jul/2017:14:26:22 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2256
192.168.10.158 - - [28/Jul/2017:14:26:22 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/nics HTTP/1.1" 200 409
192.168.10.158 - - [28/Jul/2017:14:26:22 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2256
192.168.10.158 - - [28/Jul/2017:14:26:23 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2256
192.168.10.158 - - [28/Jul/2017:14:26:26 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2256
192.168.10.158 - - [28/Jul/2017:14:26:30 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2252
192.168.10.158 - - [28/Jul/2017:14:26:30 +0000] "DELETE /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/nics/bf0dabd2-796b-4b07-bd69-db3915409939 HTTP/1.1" 200 119
192.168.10.158 - - [28/Jul/2017:14:26:30 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/nics HTTP/1.1" 201 430
192.168.10.158 - - [28/Jul/2017:14:26:31 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/nics HTTP/1.1" 200 442
192.168.10.158 - - [28/Jul/2017:14:26:31 +0000] "GET /ovirt-engine/api/ HTTP/1.1" 200 873
192.168.10.158 - - [28/Jul/2017:14:26:31 +0000] "GET /ovirt-engine/api/datacenters/00000001-0001-0001-0001-0000000003e3 HTTP/1.1" 200 396
192.168.10.158 - - [28/Jul/2017:14:26:31 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 201 674
192.168.10.158 - - [28/Jul/2017:14:26:32 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 201 676
192.168.10.158 - - [28/Jul/2017:14:26:33 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 201 677
192.168.10.158 - - [28/Jul/2017:14:26:33 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1136
192.168.10.158 - - [28/Jul/2017:14:26:34 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2668
192.168.10.158 - - [28/Jul/2017:14:26:34 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2668
192.168.10.158 - - [28/Jul/2017:14:26:34 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1136
192.168.10.158 - - [28/Jul/2017:14:26:36 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2668
192.168.10.158 - - [28/Jul/2017:14:26:36 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1136
192.168.10.158 - - [28/Jul/2017:14:26:38 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2669
192.168.10.158 - - [28/Jul/2017:14:26:38 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1138
192.168.10.158 - - [28/Jul/2017:14:26:42 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2660
192.168.10.158 - - [28/Jul/2017:14:26:43 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1125
192.168.10.158 - - [28/Jul/2017:14:26:43 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/start HTTP/1.1" 200 616
192.168.10.158 - - [28/Jul/2017:14:26:44 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2804

There are 3 POSTs to /disks, so it seems that the culprit is indeed Foreman, but looking at Foreman's production.log I cannot see much more than this (even with the logging level set to debug):

2017-07-28 16:26:20 [app] [I] Parameters: {"host"=>{"name"=>"foobar03", "architecture_id"=>1, "domain_id"=>9, "operatingsystem_id"=>7, "ptable_id"=>192, "compute_resource_id"=>5, "hostgroup_id"=>34, "image_id"=>6, "build"=>true, "enabled"=>true, "provision_method"=>"image", "managed"=>true, "compute_attributes"=>{"cluster"=>"00000002-0002-0002-0002-000000000345", "cores"=>"2", "memory"=>"4294967296", "start"=>"1", "volumes_attributes"=>{"0"=>{"\"size_gb"=>"20", "storage_domain"=>"ba2bd397-9222-424d-aecc-eb652c0169d9", "bootable"=>"0"}, "1"=>{"\"size_gb"=>"30", "storage_domain"=>"ba2bd397-9222-424d-aecc-eb652c0169d9", "bootable"=>"0"}}}, "overwrite"=>true, "host_parameters_attributes"=>[], "interfaces_attributes"=>[], "root_pass"=>"[FILTERED]"}, "apiv"=>"v2"}

Moreover, /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks throws a 404; the endpoint seems to be /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/diskattachments, while /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks seems to work only with API v3. Maybe I should change the base URL for the oVirt API in the Foreman config, shouldn't I?

On 07/28/2017 04:53 PM, Davide Ferrari wrote:
On 28/07/17 16:14, Juan Hernández wrote:
Ah, I see, in your command you are listing Gluster volumes, not storage domains. They are different kinds of objects inside oVirt, and thus they have different identifiers. That is completely normal. If you want to get the identifiers of the storage domains use "list storagedomains".
Oh, got it. Thanks for the tip!
Anyway, I've replaced the storage domain UUID with the one that works with the REST API and something improved: now I don't get the 404 from oVirt and the machine is not deleted. BUT: I've added 2 disks (20GB and 30GB) plus the base template 8GB disk, and I get a VM with four (4) 8GB disks, and the bootable one is a random disk.
I've attached the engine.log with the (I hope) relevant messages
Are you adding those disks and template using the Foreman CLI? Can you share the commands that you are using?
Yes, I'm using Hammer CLI
hammer host create --architecture-id=1 --domain billy.preprod --operatingsystem-id=7 --hostgroup-title Billy/Preprod --name foobar03 --partition-table-id=192 --provision-method image --root-password billy12345 --compute-resource 'LeaseWeb VMs prod' --image CentOS_7.3 --compute-attributes cluster=00000002-0002-0002-0002-000000000345,cores=2,memory=4294967296,start=1 --volume '"size_gb=20,storage_domain=ba2bd397-9222-424d-aecc-eb652c0169d9,bootable=0"' --volume '"size_gb=30,storage_domain=ba2bd397-9222-424d-aecc-eb652c0169d9,bootable=0"'
Also, can you share again the relevant part of the /var/log/httpd/ssl_access_log file? There we can see what requests are actually sent to the oVirt engine.
These are the requests arriving from Foreman:
192.168.10.158 - - [28/Jul/2017:14:19:42 +0000] "GET /ovirt-engine/api/vms/24831007-97ad-4f6d-9009-e6fb68a585f9 HTTP/1.1" 200 2865
192.168.10.158 - - [28/Jul/2017:14:26:19 +0000] "GET /ovirt-engine/api/datacenters?search= HTTP/1.1" 200 408
192.168.10.158 - - [28/Jul/2017:14:26:19 +0000] "GET /ovirt-engine/api/operatingsystems HTTP/1.1" 200 2943
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "GET /ovirt-engine/api/datacenters?search= HTTP/1.1" 200 408
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "GET /ovirt-engine/api/operatingsystems HTTP/1.1" 200 2943
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "GET /ovirt-engine/api/datacenters?search= HTTP/1.1" 200 408
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "GET /ovirt-engine/api/clusters/ HTTP/1.1" 200 1091
192.168.10.158 - - [28/Jul/2017:14:26:20 +0000] "POST /ovirt-engine/api/vms HTTP/1.1" 202 1612
192.168.10.158 - - [28/Jul/2017:14:26:22 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2256
192.168.10.158 - - [28/Jul/2017:14:26:22 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/nics HTTP/1.1" 200 409
192.168.10.158 - - [28/Jul/2017:14:26:22 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2256
192.168.10.158 - - [28/Jul/2017:14:26:23 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2256
192.168.10.158 - - [28/Jul/2017:14:26:26 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2256
192.168.10.158 - - [28/Jul/2017:14:26:30 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2252
192.168.10.158 - - [28/Jul/2017:14:26:30 +0000] "DELETE /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/nics/bf0dabd2-796b-4b07-bd69-db3915409939 HTTP/1.1" 200 119
192.168.10.158 - - [28/Jul/2017:14:26:30 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/nics HTTP/1.1" 201 430
192.168.10.158 - - [28/Jul/2017:14:26:31 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/nics HTTP/1.1" 200 442
192.168.10.158 - - [28/Jul/2017:14:26:31 +0000] "GET /ovirt-engine/api/ HTTP/1.1" 200 873
192.168.10.158 - - [28/Jul/2017:14:26:31 +0000] "GET /ovirt-engine/api/datacenters/00000001-0001-0001-0001-0000000003e3 HTTP/1.1" 200 396
192.168.10.158 - - [28/Jul/2017:14:26:31 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 201 674
192.168.10.158 - - [28/Jul/2017:14:26:32 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 201 676
192.168.10.158 - - [28/Jul/2017:14:26:33 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 201 677
192.168.10.158 - - [28/Jul/2017:14:26:33 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1136
192.168.10.158 - - [28/Jul/2017:14:26:34 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2668
192.168.10.158 - - [28/Jul/2017:14:26:34 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2668
192.168.10.158 - - [28/Jul/2017:14:26:34 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1136
192.168.10.158 - - [28/Jul/2017:14:26:36 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2668
192.168.10.158 - - [28/Jul/2017:14:26:36 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1136
192.168.10.158 - - [28/Jul/2017:14:26:38 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2669
192.168.10.158 - - [28/Jul/2017:14:26:38 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1138
192.168.10.158 - - [28/Jul/2017:14:26:42 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2660
192.168.10.158 - - [28/Jul/2017:14:26:43 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks HTTP/1.1" 200 1125
192.168.10.158 - - [28/Jul/2017:14:26:43 +0000] "POST /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/start HTTP/1.1" 200 616
192.168.10.158 - - [28/Jul/2017:14:26:44 +0000] "GET /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7 HTTP/1.1" 200 2804
There are 3 POSTs to /disks, so it seems that the culprit is indeed Foreman, but looking at Foreman's production.log I cannot see much more than this (even with the logging level set to debug):
2017-07-28 16:26:20 [app] [I] Parameters: {"host"=>{"name"=>"foobar03", "architecture_id"=>1, "domain_id"=>9, "operatingsystem_id"=>7, "ptable_id"=>192, "compute_resource_id"=>5, "hostgroup_id"=>34, "image_id"=>6, "build"=>true, "enabled"=>true, "provision_method"=>"image", "managed"=>true, "compute_attributes"=>{"cluster"=>"00000002-0002-0002-0002-000000000345", "cores"=>"2", "memory"=>"4294967296", "start"=>"1", "volumes_attributes"=>{"0"=>{"\"size_gb"=>"20", "storage_domain"=>"ba2bd397-9222-424d-aecc-eb652c0169d9", "bootable"=>"0"}, "1"=>{"\"size_gb"=>"30", "storage_domain"=>"ba2bd397-9222-424d-aecc-eb652c0169d9", "bootable"=>"0"}}}, "overwrite"=>true, "host_parameters_attributes"=>[], "interfaces_attributes"=>[], "root_pass"=>"[FILTERED]"}, "apiv"=>"v2"}
The oVirt access log indeed shows that three disks are added to the virtual machine. Could it be that Foreman thinks that it has to explicitly add a boot disk? Ohad, Ivan, any idea?
Moreover, /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks throws a 404; the endpoint seems to be /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/diskattachments, while /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks seems to work only with API v3. Maybe I should change the base URL for the oVirt API in the Foreman config, shouldn't I?
I think you don't need to change anything there. Foreman uses 'rbovirt', and 'rbovirt' explicitly requests version 3 of the API using the 'Version: 3' header.
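As a hedged illustration of that version pinning (the VM id is the one from this thread; the credentials are an assumption), the same /disks sub-collection that 404s on v4 answers when the v3 header is sent:

curl -k -u 'admin@internal:PASSWORD' -H 'Version: 3' 'https://yourovirt/ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks'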

On 28/07/17 17:46, Juan Hernández wrote:
The oVirt access log indeed shows that three disks are added to the virtual machine. Could it be that Foreman thinks that it has to explicitly add a boot disk? Ohad, Ivan, any idea?
I've explicitly added the template id to the hammer command line and it still adds 3 disks, but at least now two of them respect the sizes I'm passing through Hammer. But it still sets a random disk as the bootable one, and I cannot find a way to force the disk already present in the oVirt template to be the bootable one.
Is there a way in oVirt to log the JSONs passed in the various POST requests?
Moreover, /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks throws a 404; the endpoint seems to be /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/diskattachments, while /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks seems to work only with API v3. Maybe I should change the base URL for the oVirt API in the Foreman config, shouldn't I?
I think you don't need to change anything there. Foreman uses 'rbovirt', and 'rbovirt' explicitly requests version 3 of the API using the 'Version: 3' header.
Well, I've added it anyway and it didn't break anything :)

On 07/28/2017 06:03 PM, Davide Ferrari wrote:
On 28/07/17 17:46, Juan Hernández wrote:
The oVirt access log indeed shows that three disks are added to the virtual machine. Could it be that Foreman thinks that it has to explicitly add a boot disk? Ohad, Ivan, any idea?
I've explicitly added the template id to the hammer command line and it still adds 3 disks, but at least now two of them respect the sizes I'm passing through Hammer. But it still sets a random disk as the bootable one, and I cannot find a way to force the disk already present in the oVirt template to be the bootable one. Is there a way in oVirt to log the JSONs passed in the various POST requests?
There is no such mechanism available by default.

You can get some more information about the requests and responses using the WildFly request dumping filter, but it won't give you the request or response bodies. If you want to do that, first you need to go to the oVirt engine machine and start the "jboss-cli.sh" tool:

# /usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh \
  --controller=localhost:8706 \
  --user=admin@internal \
  --connect

That will ask for the password of the "admin@internal" user, and then it should display a prompt like this:

[standalone@localhost:8706 /]

In that prompt you can type any WildFly management command. For more information see here:

https://docs.jboss.org/author/display/WFLY/Command+Line+Interface

In this particular case you can first add the request dumping filter to the configuration, typing the following command:

/subsystem=undertow/configuration=filter/custom-filter=myfilter:add(class-name=io.undertow.server.handlers.RequestDumpingHandler,module=io.undertow.core)

Then you can enable that filter for the /ovirt-engine/api/* URL:

/subsystem=undertow/server=default-server/host=default-host/filter-ref=myfilter:add(predicate="regex['/ovirt-engine/api.*']")

Note again that this won't give you the request and response bodies, so it may not be worth it.

Another thing you may want to try, on the Foreman side, is to modify the "rbovirt" gem so that it writes the request bodies somewhere. For example, you can locate the "rbovirt.rb" file in your Foreman installation, and then, after this line:

https://github.com/abenari/rbovirt/blob/v0.1.3/lib/rbovirt.rb#L131

add something that writes the request body to a file, for example:

open('/tmp/mylog', 'a') { |f| f.write(body) }

Then you will probably need to restart Foreman. Remember to restore the "rbovirt.rb" file when you finish.
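Once done, the filter can presumably be dropped again in the same jboss-cli session with the standard :remove operations (a sketch based on general WildFly CLI conventions, not verified against this exact engine version):

/subsystem=undertow/server=default-server/host=default-host/filter-ref=myfilter:remove
/subsystem=undertow/configuration=filter/custom-filter=myfilter:remove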
Moreover, /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks throws a 404; the endpoint seems to be /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/diskattachments, while /ovirt-engine/api/vms/47f5035a-696c-4578-ace9-b23d865c6aa7/disks seems to work only with API v3. Maybe I should change the base URL for the oVirt API in the Foreman config, shouldn't I?
I think you don't need to change anything there. Foreman uses 'rbovirt', and 'rbovirt' explicitly requests version 3 of the API using the 'Version: 3' header.
Well, I've added it anyway and it didn't break anything :)
participants (5)
- Davide Ferrari
- Ivan Necas
- Juan Hernández
- Maton, Brett
- Oved Ourfali