Hello,
I'm playing with export_vm_as_ova.py, downloaded from the SDK examples on GitHub:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export_vm_as_ova.py

My environment is oVirt 4.3.3.7 with iSCSI storage domain.
It fails, leaving an ova.tmp file behind.

In the webadmin GUI:
Starting to export Vm enginecopy1 as a Virtual Appliance
7/19/19 11:55:12 AM

VDSM ov301 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume fa33df49-b09d-4f86-9719-ede649542c21/0420ef47-0ad0-4cf9-babd-d89383f7536b in use.\']\\nfa33df49-b09d-4f86-9719-ede649542c21/[\'a7480dc5-b5ca-4cb3-986d-77bc12165be4\', \'0420ef47-0ad0-4cf9-babd-d89383f7536b\']",)',)
7/19/19 12:25:36 PM

Failed to export Vm enginecopy1 as a Virtual Appliance to path /save_ova/base/dump/myvm2.ova on Host ov301
7/19/19 12:25:37 PM

During the export I see this qemu-img process copying the disk onto a loop device:

root     30878 30871  0 11:55 pts/2    00:00:00 su -p -c qemu-img convert -T none -O qcow2 '/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b' '/dev/loop1' vdsm
vdsm     30882 30878 10 11:55 ?        00:00:00 qemu-img convert -T none -O qcow2 /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b /dev/loop1
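
To confirm what backs that loop device (and at what offset inside the tar the disk is being written), losetup should be able to show it. A sketch, with /dev/loop1 taken from the ps output above:

```shell
# Show the backing file and offset of the loop device the export writes to;
# /dev/loop1 is from the ps output above (fall back to listing all loop
# devices if that specific one is not set up on this host).
losetup -l /dev/loop1 2>/dev/null || losetup -l
```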

The ova.tmp file fills up while the command runs, e.g.:
[root@ov301 ]# du -sh /save_ova/base/dump/myvm2.ova.tmp
416M /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]# 

[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
911M /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 ]# 

and the final (incomplete) file ends up in this state:
[root@ov301 ]# qemu-img info /save_ova/base/dump/myvm2.ova.tmp
image: /save_ova/base/dump/myvm2.ova.tmp
file format: raw
virtual size: 30G (32217446400 bytes)
disk size: 30G
[root@ov301 sysctl.d]# 
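
If I understand the mechanism, an .ova is simply a tar archive (OVF descriptor plus disks); the export pre-allocates the whole file and writes the qcow2 disk into it through the loop device, which would explain why qemu-img info reports the container itself as "raw". A scratch demo of the tar layout (made-up file names, just to illustrate, not the real exporter):

```shell
# Build a throwaway "OVA" to show it is just a tar archive whose members
# (descriptor + disk image) become listable once packing completes.
tmpdir=$(mktemp -d)
echo '<Envelope/>' > "$tmpdir/vm.ovf"          # stand-in OVF descriptor
head -c 1024 /dev/zero > "$tmpdir/disk.qcow2"  # stand-in disk payload
tar cf "$tmpdir/demo.ova" -C "$tmpdir" vm.ovf disk.qcow2
tar tvf "$tmpdir/demo.ova"                     # lists vm.ovf and disk.qcow2
rm -rf "$tmpdir"
```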

But I notice that the file's timestamp is about 67 minutes after the start of the job, well after the failure was reported:

[root@ov301 sysctl.d]# ll /save_ova/base/dump/
total 30963632
-rw-------. 1 root root 32217446400 Jul 19 13:02 myvm2.ova.tmp
[root@ov301 sysctl.d]#

[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
30G /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]# 

In engine.log the first error I see comes 30 minutes after the start:

2019-07-19 12:25:31,563+02 ERROR [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Ansible playbook execution failed: Timeout occurred while executing Ansible playbook.
2019-07-19 12:25:31,563+02 INFO  [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Ansible playbook command has exited with value: 1
2019-07-19 12:25:31,564+02 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Failed to create OVA. Please check logs for more details: /var/log/ovirt-engine/ova/ovirt-export-ova-ansible-20190719115531-ov301-2001ddf4.log
2019-07-19 12:25:31,565+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] START, TeardownImageVDSCommand(HostName = ov301, ImageActionsVDSCommandParameters:{hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06'}), log id: 3d2246f7
2019-07-19 12:25:36,569+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ov301 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume fa33df49-b09d-4f86-9719-ede649542c21/0420ef47-0ad0-4cf9-babd-d89383f7536b in use.\']\\nfa33df49-b09d-4f86-9719-ede649542c21/[\'a7480dc5-b5ca-4cb3-986d-77bc12165be4\', \'0420ef47-0ad0-4cf9-babd-d89383f7536b\']",)',)
In the log file referenced by the Ansible error I don't see anything useful; it ends at the time the script was launched. Its last lines are:

2019-07-19 11:55:33,877 p=5699 u=ovirt |  TASK [ovirt-ova-export-pre-pack : Retrieving the temporary path for the OVA file] ***
2019-07-19 11:55:34,198 p=5699 u=ovirt |  changed: [ov301] => {
    "changed": true,
    "dest": "/save_ova/base/dump/myvm2.ova.tmp",
    "gid": 0,
    "group": "root",
    "mode": "0600",
    "owner": "root",
    "secontext": "system_u:object_r:nfs_t:s0",
    "size": 32217446912,
    "state": "file",
    "uid": 0
}
2019-07-19 11:55:34,204 p=5699 u=ovirt |  TASK [ovirt-ova-pack : Run packing script] *************************************

So it seems there is a 30-minute timeout. But a timeout on what, the Ansible job?
Or perhaps on the implicit user session created when running the Python script?
The snapshot has been correctly deleted (as I also see in engine.log); it no longer appears in the webadmin GUI.
Any known problem?

Just as a test I ran the export again at 14:24 and got the same Ansible error at 14:54.
The snapshot gets deleted, while the qemu-img command keeps running:

[root@ov301 sysctl.d]# ps -ef | grep qemu-img
root     13504 13501  0 14:24 pts/1    00:00:00 su -p -c qemu-img convert -T none -O qcow2 '/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b' '/dev/loop0' vdsm
vdsm     13505 13504  3 14:24 ?        00:01:26 qemu-img convert -T none -O qcow2 /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b /dev/loop0
root     17587 24530  0 15:05 pts/0    00:00:00 grep --color=auto qemu-img
[root@ov301 sysctl.d]# 

[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
24G /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]# ll /save_ova/base/dump/myvm2.ova.tmp
-rw-------. 1 root root 32217446400 Jul 19 15:14 /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]# 

The copy then continues until the image transfer completes, but by that time the job has already been aborted, so the final OVA packing never happens and I'm left with the ova.tmp file.

How can I extend the timeout?
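
Grepping around, I suspect the limit comes from the engine service configuration: /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf seems to define ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=30 (minutes). If that is the right knob, maybe an override dropped into engine.conf.d would work. An untested guess on my side:

```shell
# Untested guess: raise the Ansible playbook timeout to 120 minutes via an
# engine.conf.d override (variable name taken from the shipped service
# config, so treat it as an assumption), then restart the engine.
cat > /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf <<'EOF'
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=120
EOF
systemctl restart ovirt-engine
```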

Thanks in advance,
Gianluca