On Mon, May 23, 2022 at 3:31 PM <upalmin(a)gmail.com> wrote:
Hi,
Thank you for the fast response. In the meantime I have discovered what
the problem was in my case.
The problem was that the export domain and data domain from oVirt 4.3 had
OVF files using the <InstanceID> tag ("ID" in capital letters) instead of
the expected <InstanceId>. oVirt 4.4 expects the <InstanceId> tag, which
wasn't present here, so the engine assumed that the OVF files were
corrupted.
The fix for the export domain was simple: I swapped InstanceID with
InstanceId:
bash# for i in $(find . -name "*.ovf"); do sudo sed -i 's/InstanceID/InstanceId/g' "$i"; done
But I could not fix the data domain the same way, since I didn't want to
dive into the OVF_STORE disk. I am guessing there is a tool for editing
OVF_STORE disks without damaging the domain?!

The OVF_STORE disk contains a single tar file at offset 0. You can extract
the tar from the volume using:

tar xf /path/to/ovf_store/volume
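
If you first want to see which OVF files are inside without unpacking
anything, listing the archive should work the same way (same placeholder
path as above):

tar tvf /path/to/ovf_store/volume
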
On file storage this is easy - you can modify the contents of the OVF
files in the tar and write the modified tar back to the volume, but you
must update the size of the tar in the OVF store metadata file.
For example:
# grep DESCRIPTION /rhev/data-center/mnt/alpine\:_01/81738a7a-7ca6-43b8-b9d8-1866a1f81f83/images/0b0dd3b2-71a2-4c48-ad83-cea1dc900818/35dd9951-
DESCRIPTION={"Updated":true,"Size":23040,"Last Updated":"Sun Apr 24 15:46:27 IDT 2022","Storage Domains":[{"uuid":"81738a7a-7ca6-43b8-b9d8-1866a1f81f83"}],"Disk Description":"OVF_STORE"}

You need to keep "Size":23040 correct, since the engine uses it to read
the tar from storage.
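
Putting it together, here is a rough sketch of what I would try on a file
(NFS) domain - untested, the paths are placeholders, and you should keep a
copy of the volume and its metadata file before changing anything:

# mkdir /tmp/ovf_store && cd /tmp/ovf_store
# tar xf /path/to/ovf_store/volume
# find . -name "*.ovf" -exec sed -i 's/InstanceID/InstanceId/g' {} +
# tar cf /tmp/new_ovf_store.tar *
# dd if=/tmp/new_ovf_store.tar of=/path/to/ovf_store/volume conv=notrunc
# stat -c %s /tmp/new_ovf_store.tar

The size printed by stat is the new value to put in the "Size" field of
the DESCRIPTION line shown above.
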
On block storage updating the metadata is much harder, so I would not go
this way.

If the issue is code expecting "InstanceId" while the actual key is
"InstanceID", the right place to fix this is in the code, by accepting
either "InstanceId" or "InstanceID".
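
To double-check which spelling your OVF files actually use (assuming they
were extracted to /tmp/ovf_store as in the sketch above), something like
this should show a count per variant:

# grep -ho 'Instance[Ii][Dd]' /tmp/ovf_store/*.ovf | sort | uniq -c
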
In general this sounds like a bug, so you should file a bug for the
component reading the OVF (vdsm?).

Nir