It seems to be working properly, but the OVF got updated recently and powering up the hosted-engine is still not working :)
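
(For completeness, the usual way I would sanity-check the hosted-engine side is shown below; this assumes the standard hosted-engine CLI and the default HA agent/broker log locations, so adjust paths if yours differ.)

# overall HA view of the engine VM and its storage
hosted-engine --vm-status

# recent agent/broker activity around the failed power-up
tail -n 200 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 200 /var/log/ovirt-hosted-engine-ha/broker.log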

[root@ovirt2 ~]# sudo -u vdsm tar -tvf  /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/441abdc8-6cb1-49a4-903f-a1ec0ed88429/c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-r--r-- 0/0             138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0           21164 2019-03-12 08:06 8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0              72 2019-03-12 08:06 metadata.json

[root@ovirt2 ~]# sudo -u vdsm tar -tvf /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0             138 2019-03-13 11:06 info.json
-rw-r--r-- 0/0           21164 2019-03-13 11:06 8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0              72 2019-03-13 11:06 metadata.json
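
(If it helps, the updated OVF can be dumped straight out of the OVF_STORE volume for inspection; this is just GNU tar extracting a single member to stdout, using the same path and member name as in the listing above.)

sudo -u vdsm tar -xOf /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4 8474ae07-f172-4a20-b516-375c73903df7.ovf | less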

Best Regards,
Strahil Nikolov

On Wednesday, March 13, 2019, 11:08:57 AM GMT+2, Simone Tiraboschi <stirabos@redhat.com> wrote:

On Wed, Mar 13, 2019 at 9:57 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Hi Simone, Nir,

>Also adding Nir on this; the whole sequence is tracked here:
>I'd suggest checking the ovirt-imageio and vdsm logs on ovirt2.localdomain around the same time.
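
(To correlate the two, a grep by flow_id over both logs should pull out the matching entries; this assumes the default log locations for vdsm and ovirt-imageio-daemon, and uses the flow_id from the VDSM log further down.)

grep d48d9272-2e65-438d-a7b2-46979309833b /var/log/vdsm/vdsm.log /var/log/ovirt-imageio-daemon/daemon.log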

I have tested again (after first wiping the current transfers) and the same thing is happening (phase 10).

engine=# \x
Expanded display is on.
engine=# select * from image_transfers;
-[ RECORD 1 ]-------------+-------------------------------------
command_id                | 11b2c162-29e0-46ef-b0a4-f41ebe3c2910
command_type              | 1024
phase                     | 10
last_updated              | 2019-03-13 09:38:30.365+02
message                   |
vds_id                    |
disk_id                   | 94ade632-6ecc-4901-8cec-8e39f3d69cb0
imaged_ticket_id          |
proxy_uri                 |
signed_ticket             |
bytes_sent                | 0
bytes_total               | 134217728
type                      | 1
active                    | f
daemon_uri                |
client_inactivity_timeout | 60

engine=# delete from image_transfers where disk_id='94ade632-6ecc-4901-8cec-8e39f3d69cb0';
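
(As a sanity check after the cleanup, re-running a select against the same table should show no leftover transfer rows for that disk, e.g.:)

engine=# select command_id, disk_id, phase, active from image_transfers;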


This is the VDSM log from the last test:


2019-03-13 09:38:23,229+0200 INFO  (jsonrpc/4) [vdsm.api] START prepareImage(sdUUID=u'808423f9-8a5c-40cd-bc9f-2568c85b8c74', spUUID=u'b803f7e4-2543-11e9-ba9a-00163e6272c8', imgUUID=u'94ade632-6ecc-4901-8cec-8e39f3d69cb0', leafUUID=u'9460fc4b-54f3-48e3-b7b6-da962321ecf4', allowIllegal=True) from=::ffff:192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:48)
2019-03-13 09:38:23,253+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4 (fileSD:623)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating domain run directory u'/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74' (fileSD:577)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.fileUtils] Creating directory: /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74 mode: None (fileUtils:199)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0 to /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0 (fileSD:580)
2019-03-13 09:38:23,260+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH prepareImage error=Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) from=::ffff:192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:52)
2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.TaskManager.Task] (Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3212, in prepareImage
    leafInfo = dom.produceVolume(imgUUID, leafUUID).getVmVolumeInfo()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 822, in produceVolume
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterVolume.py", line 45, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 801, in __init__
    self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 71, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 86, in __init__
    self.validate()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 112, in validate
    self.validateVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 136, in validateVolumePath
    self.validateMetaVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 118, in validateMetaVolumePath
    raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',)
2019-03-13 09:38:23,261+0200 INFO  (jsonrpc/4) [storage.TaskManager.Task] (Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') aborting: Task is aborted: "Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',)" - code 201 (task:1181)
2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH prepareImage error=Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) (dispatcher:83)
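
(Since validateMetaVolumePath() is what raises here, the thing worth checking on the gluster mount is probably the volume's metadata file and its ownership; the <volUUID>.meta naming below is the usual convention for file-based domains, so treat it as an assumption.)

sudo -u vdsm ls -l /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/
sudo -u vdsm stat /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4.meta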

Yet, the volume is there and is accessible:

Here you are accessing that volume as the root user, but vdsm will access it as the vdsm user.
Can you please repeat the test with sudo -u vdsm and check the volume file ownership?
 

[root@ovirt1 94ade632-6ecc-4901-8cec-8e39f3d69cb0]# tar -tvf /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0             138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0           21164 2019-03-12 08:06 8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0              72 2019-03-12 08:06 metadata.json


[root@ovirt1 94ade632-6ecc-4901-8cec-8e39f3d69cb0]# tar -tvf /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0             138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0           21164 2019-03-12 08:06 8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0              72 2019-03-12 08:06 metadata.json

Best Regards,
Strahil Nikolov