
On Fri, Aug 28, 2020 at 2:31 AM <thomas@hoberg.net> wrote:
I am testing the migration from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4.
Exporting all VMs to OVAs and re-importing them on a new cluster built from scratch seems the safest and best method, because in the step-by-step migration there are simply far too many things that can go wrong and no easy way to fall back after each step.
You should really try attach/detach of a storage domain; this is the recommended way to move VMs from one oVirt system to another. You could detach the entire domain with all its VMs from the old system and connect it to the new system, without copying even one bit. I guess you cannot do this because you don't use shared storage? ...
So I have manually put the single-line fix in, which waits for udev to settle and ensures that disks are not exported as zeros. That's the bug which renders the final release of oVirt 4.3 forever unfit, four years before the end of maintenance of CentOS 7, because it won't be fixed there.
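As a hedged illustration of the kind of one-line fix described above (the exact location in the export flow and the timeout value are assumptions, not the actual patch): waiting for the udev event queue to drain before reading a newly activated volume, so the export does not read back zeros.

```shell
# Hypothetical sketch only: block until pending udev events are processed
# before touching the device; the 10s timeout is illustrative.
if command -v udevadm >/dev/null 2>&1; then
  udevadm settle --timeout=10
  settle_msg="udev settled (or timed out)"
else
  settle_msg="udevadm not available, step skipped"
fi
echo "$settle_msg"
```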
Using oVirt 4.3 now that 4.4 has been released is going to be painful; don't do this. ...
But just as I was exporting not one of the trivial machines I had been using for testing, but one of the bigger ones that actually contains a significant amount of data, I found myself hitting this timeout bug.
The disks for both the trivial and the less-trivial machines are defined at 500GB, thinly allocated. The trivial one is a naked OS at something like 7GB actually allocated; the 'real' one has 113GB allocated. In both cases the OVA export file on a local SSD xfs partition is 500GB, with lots of zeros and sparse allocation in the case of the first one.
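The thin-allocation behaviour above (a 500GB file that occupies far less on disk) can be reproduced with a sparse file; file names here are illustrative, not from the actual export.

```shell
# Demonstrate apparent size vs. blocks actually allocated for a sparse file.
cd "$(mktemp -d)"
truncate -s 1G thin.img                 # 1 GiB apparent size, no data written
stat -c 'apparent=%s bytes allocated=%b blocks' thin.img
du -h --apparent-size thin.img          # reports 1.0G
du -h thin.img                          # reports ~0, nothing is allocated
```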
The second came to 72GB of 500GB actually allocated, which already didn't seem like a good sign, but perhaps there was some compression involved?
Still, the export finished without error or incident, and the import on the other side went just as well. The machine even boots and runs; it was only once I started using it that I suddenly got all kinds of file system errors... It turns out the missing 113-72GB really had been cut off from the OVA export, and there is nobody and nothing checking for that.
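A sketch of the kind of verification the export flow lacks: comparing a checksum of the source disk against the exported copy would catch a silently truncated image even when the apparent size still matches. File names and sizes are hypothetical.

```shell
# Simulate a truncated export and show that a checksum comparison detects it.
cd "$(mktemp -d)"
dd if=/dev/urandom of=source.img bs=1M count=8 status=none
cp source.img export.img
truncate -s 4M export.img    # simulate the export cutting off the tail
truncate -s 8M export.img    # apparent size matches again, tail is now zeros
sha256sum source.img export.img
cmp -s source.img export.img && echo "export verified" || echo "export TRUNCATED"
# prints "export TRUNCATED"
```

Note that the sizes match, so only a content comparison reveals the lost data.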
You are hitting https://bugzilla.redhat.com/1854888 ...
I have the export domain backup running right now, but I'm not sure it isn't using the same mechanism under the covers, with potentially similar results.
No, the export domain uses qemu-img, which is the best tool for copying images. This is how all disks are copied in oVirt in all flows. There are no issues like ignored errors or silent failures in the storage code. ...
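As a rough illustration of why a qemu-img based copy preserves thin allocation (paths and sizes are made up for the demo, and the script falls back to coreutils if qemu-img is not installed):

```shell
# Copy a sparse raw image; unwritten regions stay as holes in the destination.
cd "$(mktemp -d)"
truncate -s 64M src.raw                          # sparse 64 MiB source image
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img convert -p -O raw src.raw dst.raw     # detects zero/unallocated areas
else
  cp --sparse=always src.raw dst.raw             # fallback sparse copy
fi
du -h src.raw dst.raw                            # both show ~0 allocated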
P.P.S. So just where (and on which machine) do I need to change the timeout?
There are no timeouts in the storage code, e.g. attach/detach domain or export to an export domain.

Nir