Are you reusing a Gluster volume, or have you created a fresh one?
On Tuesday, 1 September 2020, 02:58:19 GMT+3, thomas(a)hoberg.net
I've just tried to verify what you said here.
As a baseline I started with the 1nHCI Gluster setup. Of the four VMs (two legacy, two Q35)
on the single-node Gluster, one survived the import, one failed silently with an empty
disk, and two failed partway through qemu-img writing the image to the Gluster storage.
For each of those two, the failure always happened at the same block number, a unique one
per machine, never in random places, as if qemu-img reading and writing the very same
image could not agree with itself. That's two types of error and a 75% failure rate.
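Since those two failures reproduced at a fixed block number per machine, one way to narrow things down is to compare the source image against the partially written destination block by block and see where they first diverge (`qemu-img compare` does this properly, including for qcow2). Below is a minimal sketch for raw images; the 64 KiB block size and the file names are assumptions for illustration, not what qemu-img actually uses.

```python
import os
import tempfile

# Assumed block size for reporting; qemu-img uses its own internal granularity.
BLOCK_SIZE = 64 * 1024

def first_differing_block(path_a, path_b, block_size=BLOCK_SIZE):
    """Return the index of the first block where two raw images differ, or None."""
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        index = 0
        while True:
            chunk_a = a.read(block_size)
            chunk_b = b.read(block_size)
            if chunk_a != chunk_b:  # also catches one file ending early
                return index
            if not chunk_a:  # both streams exhausted: images identical
                return None
            index += 1

# Hypothetical demo: two images identical except for one byte in block 3.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "src.raw")
    dst = os.path.join(tmp, "dst.raw")
    data = bytearray(os.urandom(8 * BLOCK_SIZE))
    with open(src, "wb") as f:
        f.write(data)
    data[3 * BLOCK_SIZE] ^= 0xFF  # corrupt a single byte in block 3
    with open(dst, "wb") as f:
        f.write(data)
    print(first_differing_block(src, dst))  # → 3
```

If the divergence point on the Gluster copy matches the block number qemu-img reported, that would suggest the write path (rather than the read of the export) is where things go wrong.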
I created another domain, basically an NFS automount export from one of the HCI nodes
(a 4.3 node serving as 4.4 storage), and imported the very same VMs (all 4.3 sources),
transported to 4.4 via a re-attached export domain. Three of the four imports worked fine,
with no qemu-img errors writing to NFS. All VMs had full disk images and launched, which
at least verified that there is nothing wrong with the exports themselves.
But one still failed with the same qemu-img error.
I then tried to move the disks from NFS to Gluster, which is also done internally via
qemu-img, and those moves failed every time.
Gluster (or HCI) seems a bit of a game of Russian roulette for migrations, and I wonder
how much better it is for normal operations.
I'm still going to try moving via a backup domain (on NFS) and moving between that and
Gluster, to see if it makes any difference.
I really haven't done much stress testing with oVirt yet, but this experience
doesn't build confidence.
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/