How can Gluster be a HCI default, when it's hardly ever working?
by thomas@hoberg.net
I've just tried to verify what you said here.
As a base line I started with the 1nHCI Gluster setup. From four VMs, two legacy, two Q35 on the single node Gluster, one survived the import, one failed silently with an empty disk, two failed somewhere in the middle of qemu-img trying to write the image to the Gluster storage. For each of those two, this always happened at the same block number, a unique one per machine, not in random places, as if qemu-img reading and writing the very same image could not agree. That's two types of error and a 75% failure rate
I created another storage domain, basically an NFS automount export from one of the HCI nodes (a 4.3 node serving as 4.4 storage), and imported the very same VMs (all from 4.3), transported via a re-attached export domain to 4.4. Three of the four imports worked fine, with no qemu-img error writing to NFS. All three VMs had full disk images and launched, which at least verified that there is nothing wrong with the exports.
But one still failed with the same qemu-img error.
I then tried to move the disks from NFS to Gluster, which internally is also done via qemu-img, and those moves failed every time.
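The disk move can be replicated by hand to take the engine out of the loop; a sketch using scratch directories in place of the real NFS and Gluster mount points (the paths in the comments are examples, not the actual domain paths):

```shell
set -e
command -v qemu-img >/dev/null 2>&1 || { echo "qemu-img not installed"; exit 0; }

# Scratch directories standing in for the real mount points, e.g.:
#   NFS:     /rhev/data-center/mnt/<nfs-server>:_export
#   Gluster: /rhev/data-center/mnt/glusterSD/<host>:_vmstore
nfs=$(mktemp -d); gluster=$(mktemp -d)
qemu-img create -f qcow2 "$nfs/disk.qcow2" 64M

# oVirt drives disk moves through qemu-img convert; adding "-t none -T none"
# (direct I/O, closer to what vdsm uses) makes write errors surface
# immediately, but needs O_DIRECT support on the target, so it is left
# out of this portable sketch.
qemu-img convert -p -f qcow2 -O qcow2 "$nfs/disk.qcow2" "$gluster/disk.qcow2"
check=$(qemu-img check "$gluster/disk.qcow2")
echo "$check"
rm -rf "$nfs" "$gluster"
```

Running the same convert against the real Gluster mount point should reproduce the failure at the same block and show whether it is qemu-img or the storage underneath.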
Gluster or HCI seems a bit of a game of Russian roulette for migrations, and I wonder how much better it is for normal operations.
I'm still going to try moving via a backup domain (on NFS) and moving between that and Gluster, to see if it makes any difference.
I really haven't done a lot of stress testing yet with oVirt, but this experience doesn't build confidence.
3 years, 7 months
Problem installing Windows VM on 4.4.1
by fgarat@gmail.com
Hi,
I'm having problems with Windows machines after I upgraded to 4.4.1.
The installer sees no disk. Even an IDE disk doesn't get detected, and the installation won't move forward no matter which driver I use for the disk.
Anyone else having this issue?
Regards,
Facundo
How can you avoid breaking 4.3.11 legacy VMs imported in 4.4.1 during a migration?
by thomas@hoberg.net
Testing the 4.3 to 4.4 migration... what I describe here as facts is mostly observation and conjecture, and could be wrong; stating it plainly just makes the writing easier...
While 4.3 seems to maintain a default emulated machine type (pc-i440fx-rhel7.6.0), it doesn't actually allow setting it in the cluster settings: it could be built in, or inherited from the default template. Most of my VMs were created with that default on 4.3.
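For reference, the machine type a VM actually runs with is recorded in its libvirt domain XML (the machine attribute of os/type). A self-contained sketch that extracts it from an inline sample; on a real host you would feed it the output of `virsh -r dumpxml <vm>` instead:

```shell
# Sample domain XML fragment (on a host, substitute: virsh -r dumpxml <vm>)
xml='<domain><os><type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type></os></domain>'

# Pull out the machine= attribute value.
machine=$(printf '%s' "$xml" | sed -n 's/.*machine="\([^"]*\)".*/\1/p')
echo "machine type: $machine"   # -> machine type: pc-i440fx-rhel7.6.0
```

Checking this per VM before and after import makes the silent i440fx-to-Q35 upgrade visible.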
oVirt 4.4 presets it to pc-q35-rhel8.1.0, and that has implications:
1. Any VM imported from an export of a 4.3 farm will get upgraded to Q35, which unfortunately breaks things; network adapters getting renamed was the first issue I stumbled on with some Debian machines.
2. If you try to compensate by lowering the cluster default from Q35 to pc-i440fx, the hosted engine will fail, because it was either built as or delivered as Q35 and can no longer find critical devices: it evidently doesn't reuse the VM configuration data it had at the last shutdown, but seems to regenerate it according to some obscure logic, which fails here.
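For the renamed adapters specifically: the i440fx-to-Q35 switch changes the PCI topology, which changes the predictable interface names. One workaround on a Debian guest is to pin the name to the NIC's MAC address with a systemd .link file; the MAC and name below are placeholders, not values from the setup described here:

```
# /etc/systemd/network/10-persistent-net.link  (inside the guest)
[Match]
MACAddress=56:6f:1a:2b:3c:4d

[Link]
Name=eth0
```

With this in place the interface keeps its name regardless of where the emulated machine type puts it on the PCI bus.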
I've tried creating a bit of backward compatibility by adding another template based on pc-i440fx, but at the time of the import I cannot switch the template.
If I try to downgrade the cluster, the hosted engine will fail to start, and I can't change the template of the hosted engine to something Q35-based.
Currently this leaves me in a position where I can't separate the move of VMs from 4.3 to 4.4 from the upgrade of the virtual hardware, which is a different mess for every OS in the mix of VMs.
Recommendations, tips anyone?
P.S. A hypervisor that reconstructs the virtual hardware from anywhere but storage at every launch is difficult to trust, IMHO.
Upgrade Doc typo.
by carl langlois
Hi,
Not sure if this is relevant, but in the oVirt doc for the 4.2 to 4.3 upgrade,
the repo rpm is specified with the 4.4 rpm.
Thanks
Carl
[image: image.png]