I have an oVirt environment with 2 hosts. Normal VMs in this environment can be migrated, but the hosted engine VM cannot be migrated. Can anyone help? Thanks a lot!
Normal VM migration:
Hosted engine VM migration:
I set up oVirt 4.4.1 on CentOS 8.2 as an experiment, and I am trying to get an iSCSI domain working but am having issues. The little experimental cluster has 3 hosts. There is an ovirtmgmt network on the default VLAN, and two iSCSI networks (172.27.0.X and 172.27.1.X) with VLANs 20/21. ovirtmgmt has all the functions (data, display, migration, etc.); the iSCSI networks have none yet, and they are not set as required.
The SAN device is already serving a few iSCSI volumes to a VMware cluster, so I know things are fine on that end. It has two controllers with four NICs each, so a total of 8 NICs, with half of the NICs per controller on 172.27.0.X and half on 172.27.1.X.
When I create the iSCSI domain, log in to only one of the targets, and add the volume, all is good and I can use the disk fine.
However, when I log in to more than one of the targets, I start having issues with the volume. Even when I enabled multipath in the cluster and created a single multipath by selecting both the 172.27.0.X and 172.27.1.X networks and all the targets, the end result was the same. The hosts have difficulty accessing the volume; they may even swing between 'non-operational' and 'up' if I transfer data to the volume. When I ssh into the hosts and check things on the command line, I also get inconsistent results between hosts, and block devices that appeared in lsblk when I first set up iSCSI have disappeared after I try to actively use the volume.
I am new to iSCSI, so I am not sure how to debug this, or whether my multipath configuration is correct. The documentation on this part was not very detailed. I also tried removing the domain and experimenting with mounting the iSCSI volume from the command line, but I cannot even discover the target from the command line, which is very bizarre. The command
iscsiadm --mode discovery --type sendtargets --portal 172.27.0.55
returns the message 'iscsiadm: cannot make connection to 172.27.0.55: No route to host'. Yet through oVirt, if I select only one target, everything works fine!
Any suggestions on how to start debugging this would really be appreciated.
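For reference, the 'No route to host' error suggests the host may simply have no interface or route on that subnet, which `ip route get 172.27.0.55` should confirm. And since VDSM manages /etc/multipath.conf itself, I understand custom settings are supposed to go into a drop-in file under /etc/multipath/conf.d/. This is only a sketch of what I was planning to try, with placeholder vendor/product values, not a tested configuration:

```
# /etc/multipath/conf.d/san.conf -- sketch only; the vendor/product
# values are placeholders and must match the SAN (see `multipath -ll`).
defaults {
    user_friendly_names no
}
devices {
    device {
        vendor               "MYVENDOR"
        product              "MYPRODUCT"
        path_grouping_policy group_by_prio
        no_path_retry        4
    }
}
```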
Hello to all
I am trying to back up a normal VM, but it seems that I don't really understand the concept. First I found the possibility of backing up with the API: https://www.ovirt.org/documentation/administration_guide/#Setting_a_stora....
Creating a snapshot of the VM, finding the ID of the snapshot, and fetching the configuration of the VM makes sense to me.
But at this point, I would download the config and the snapshot and put them on my backup storage, not create a new VM, attach the disk, and run a backup with a backup program. And for restoring, do the same the other way around.
If I look at other projects, there seems to be a way to download the snapshot and config file, or am I wrong?
Maybe someone can explain to me why I should install additional software on an additional machine. Or, even better, someone can explain to me how to avoid using additional backup software.
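To make my question concrete, these are the REST endpoints I understand the snapshot-based flow to use (only a sketch: the engine host name and the IDs are placeholders, and real calls need authentication and TLS verification):

```python
# Sketch of the REST endpoints behind the snapshot-based backup flow.
# "engine.example.com" and all IDs are placeholders.
BASE = "https://engine.example.com/ovirt-engine/api"

def snapshot_collection(vm_id: str) -> str:
    # POST a snapshot description here to create a snapshot of the VM.
    return f"{BASE}/vms/{vm_id}/snapshots"

def snapshot_disks(vm_id: str, snap_id: str) -> str:
    # GET this to list the disks captured by a given snapshot.
    return f"{BASE}/vms/{vm_id}/snapshots/{snap_id}/disks"

def vm_configuration(vm_id: str) -> str:
    # GET this (Accept: application/xml) for the VM's configuration.
    return f"{BASE}/vms/{vm_id}"
```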
And on the same topic of backups:
The documentation describes the possibility of setting up backup storage.
It is nearly the same: create a snapshot, or clone the machine, and export it to the backup storage.
> Export the new virtual machine to a backup domain. See Exporting a Virtual Machine to a Data Domain in the Virtual Machine Management Guide.
Sadly, it only says what to do, not how, and the link points to a 404 page. Maybe someone can explain to me how to use backup storage.
thank you very much
Hello, I was using oVirt 4.3.10 with IBM AC922 (POWER9 / ppc64le) without any issues.
Since I’ve moved to 4.4.1 I can’t add the AC922 machine to the engine anymore, it complains with the following error:
The host CPU does not match the Cluster CPU type and is running in degraded mode. It is missing the following CPU flags: model_POWER9, powernv.
Any idea of what may be happening? The engine runs on x86_64, and I was using it this way on 4.3.10.
timebase : 512000000
platform : PowerNV
model : 8335-GTH
machine : PowerNV 8335-GTH
firmware : OPAL
MMU : Radix
I've just tried to verify what you said here.
As a baseline I started with the 1nHCI Gluster setup. From four VMs (two legacy, two Q35) on the single-node Gluster, one survived the import, one failed silently with an empty disk, and two failed somewhere in the middle of qemu-img writing the image to the Gluster storage. For each of those two, this always happened at the same block number, a unique one per machine, not in random places, as if qemu-img reading and writing the very same image could not agree. That's two types of error and a 75% failure rate.
I created another domain, basically using an NFS automount export from one of the HCI nodes (a 4.3 node serving as 4.4 storage), and imported the very same VMs (source all 4.3) transported via a re-attached export domain to 4.4. Three of the four imports worked fine, with no qemu-img errors writing to NFS. All VMs had full disk images and launched, which verified that there is nothing wrong with the exports, at least.
But there was still one that failed with the same qemu-img error.
I then tried to move the disks from NFS to Gluster, which internally is also done via qemu-img, and I had those fail every time.
Gluster or HCI seems a bit of Russian roulette for migrations, and I am wondering how much better it is for normal operations.
I'm still going to try moving via a backup domain (on NFS) and moving between that and Gluster, to see if it makes any difference.
I really haven't done a lot of stress testing yet with oVirt, but this experience doesn't build confidence.
I have some blades with 10GbE interfaces that have/need the Emulex be2net driver; however, it is no longer available on RHEL 8 / CentOS 8. Is there any way to install oVirt so it works on these machines?
Here are some links to related problems reported against Red Hat.
Emulex NIC using be2net driver
I did a lot of research and couldn't find a functional driver for oVirt.
I'm having a problem with Windows machines after I upgraded to 4.4.1.
The installer sees no disk. Even an IDE disk doesn't get detected, and the installation won't move forward no matter what driver I use for the disk.
Anyone else having this issue?
Testing the 4.3 to 4.4 migration... what I describe here as facts is mostly observations and conjecture; it could be wrong, it just makes writing easier...
While 4.3 seems to maintain a default emulated machine type (pc-i440fx-rhel7.6.0), it doesn't actually allow setting it in the cluster settings: it could be built in, or inherited from the default template. Most of my VMs were created with the default on 4.3.
oVirt 4.4 presets that to pc-q35-rhel8.1.0 and that has implications:
1. Any VM imported from an export on a 4.3 farm will get upgraded to Q35, which unfortunately breaks things, e.g. network adapters getting renamed, which was the first issue I stumbled on with some Debian machines.
2. If you try to compensate by lowering the cluster default from Q35 to pc-i440fx, the hosted engine will fail, because it was either built as or came as Q35 and can no longer find critical devices: it evidently doesn't use the VM configuration data it had at the last shutdown, but seems to regenerate it according to some obscure logic, which fails here.
I've tried creating a bit of backward compatibility by creating another template based on pc-i440fx, but at import time I cannot switch the template.
If I try to downgrade the cluster, the hosted-engine will fail to start and I can't change the template of the hosted-engine to something Q35.
Currently this leaves me in a position where I can't separate the move of VMs from 4.3 to 4.4 and the upgrade of the virtual hardware, which is a different mess for every OS in the mix of VMs.
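For reference, the difference ultimately shows up as the machine attribute in the libvirt domain XML; with the two defaults mentioned above, the fragments look roughly like this (a sketch, not copied from a live domain):

```xml
<!-- Q35 guest (the 4.4 default) -->
<os>
  <type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
</os>

<!-- i440fx guest (the 4.3 default) -->
<os>
  <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
</os>
```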
Recommendations, tips anyone?
P.S. A hypervisor reconstructing the virtual hardware from anywhere but storage at every launch is difficult to trust, IMHO.
Not sure it will actually do that, but if you create a new network (vNIC) profile, you can select a 'port mirroring' option: my understanding is that this is what you need. You may also want to deselect the network filter in the same place.
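If you prefer driving this through the REST API instead of the UI, the profile body would look roughly like this (a sketch; the network UUID and profile name are placeholders), POSTed to /ovirt-engine/api/vnicprofiles:

```xml
<vnic_profile>
  <name>mirrored</name>
  <network id="NETWORK-UUID"/>
  <port_mirroring>true</port_mirroring>
</vnic_profile>
```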