I recently deployed oVirt on some hosts to evaluate it as one of my main virtualization tools. I like it a lot, but I don't know if this behaviour is normal: I just configured my iSCSI network against an IBM V7000, and from that point on the host with the SPM role is continuously reading from storage at about 7 or 8 Mbps. I don't know what it is doing, since no storage domains have been created yet. How can I find out what it is doing?
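One way to pin down which process is generating the reads (a sketch; on an SPM host the reader is typically vdsmd or one of its children, but that is an assumption to verify):

```shell
# Per-process cumulative I/O counters live in /proc/<pid>/io; here we
# inspect our own shell as a harmless demonstration of the interface.
grep read_bytes /proc/self/io

# On the SPM host the interesting pid would be vdsm's, e.g.:
#   grep read_bytes /proc/$(pidof -s vdsmd)/io
# and for a live per-process view, iotop in batch mode:
#   iotop -obn 3
```

Sampling the counter twice a few seconds apart gives the read rate per process, which should match the 7-8 Mbps you see.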
I had a similar problem in the past with an ESXi 6.0 U2 host where I defined a
VM and used it as an oVirt node for nested virtualization. Running an L2 VM on
this "virtual" node went well as long as the custom emulated machine value
was "pc-i440fx-rhel7.2.0".
At some point the default became 7.3 and the L2 VM was not able to boot any
more: it stayed at a blank screen at boot.
The same holds up to now, where this default seems to be 7.5.
I'm having the same problem in a mini lab on a NUC6 where I have ESXi 6.7
installed and am trying to set up a hosted-engine environment.
This problem prevents me from configuring the hosted engine, because the L2 VM
that should become the hosted engine freezes (?) during its first startup, and I
see indeed that the qemu-kvm process is started with the option "-machine
pc-i440fx-rhel7.5.0".
Given that I'm in an unsupported configuration, I would like to
understand if I can correct something and proceed.
So, some questions:
1) How can I understand the difference between running qemu-kvm
with "-machine pc-i440fx-rhel7.2.0" and "-machine pc-i440fx-rhel7.5.0", so
that I can then dig into the ESXi parameters and possibly tune the settings of
the VM that becomes my oVirt hypervisor?
2) Do I have any chance of configuring my oVirt node so that when it runs the
hosted-engine deploy it starts the engine VM with the 7.2 parameter? Is there
any conf file on the host where I can somehow force 7.2 compatibility or set it
as the default?
3) During a gluster-based hosted-engine deployment, at the first "local" startup,
how can I connect to the console of the hosted-engine VM and see where it hangs?
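For questions 1 and 3, these are the commands I would start from (the binary path and the local VM name are the usual ones on an oVirt node, but treat this as a sketch to adapt to your host):

```shell
# Question 1: list the i440fx machine types this host's QEMU knows about;
# each versioned type is the base machine plus a set of compat properties.
/usr/libexec/qemu-kvm -machine help | grep pc-i440fx

# Question 3: during the first "local" startup the engine VM is visible to
# libvirt, so you can attach to its console directly:
hosted-engine --add-console-password
hosted-engine --console
# or via libvirt, while the bootstrap VM is still named HostedEngineLocal:
virsh -c qemu:///system list
virsh -c qemu:///system console HostedEngineLocal
```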
Thanks in advance,
I'm following this reference guide:
After having successfully run the gdeploy-based gluster setup, there is this section:
Setting up Hosted Engine
Use the Ansible based installation flow of Hosted Engine to set up oVirt
within a virtual machine. The storage details should be provided as type:
glusterfs and connection path as: <hostname>:/engine (replace hostname with
the address of the host on which the installation is carried out)
What exactly does "Use the Ansible based installation" mean? Does it mean
using the Cockpit web UI? In that case I suppose I have to choose:
Deploy oVirt hosted engine on storage that has already been provisioned
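In case it helps, the same storage details can also be passed to hosted-engine via an answer file; a minimal fragment would look like this (the hostname is a placeholder, and I believe these are the key names ovirt-hosted-engine-setup uses, so double-check against your version):

```ini
[environment:default]
OVEHOSTED_STORAGE/domainType=str:glusterfs
OVEHOSTED_STORAGE/storageDomainConnection=str:host1.example.com:/engine
```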
The latest oVirt Node stable (4.2.6 today) introduced a kernel bug: IP
over InfiniBand is not working anymore after an upgrade, due to a kernel
regression.
You can find some details here:
dmesg is full of "failed to modify QP to RTR: -22", and the networking
stack (in my case used to connect to storage) is broken. The interface
can obtain an address via DHCP, but even a simple ICMP ping fails.
Does someone have news about a fix for this issue?
I'm almost ready to start with a new oVirt deployment. I will use CentOS
7, a self-hosted engine, and Gluster storage.
I have 3 physical hosts. Each host has four NICs. My first idea is:
- configure a bond between the NICs
- configure a VLAN interface for the management network (and the local LAN)
- configure a VLAN interface for the Gluster network
- configure Gluster for the hosted engine
- start the "hosted-engine --deploy" process
Is this enough? Do I need a dedicated physical NIC for the management network?
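For what it's worth, the bond + VLAN part of the plan in CentOS 7 ifcfg form would look roughly like this (interface names, the VLAN ID, and the address are all made up for the sketch; the bond mode must match your switch configuration):

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eno1  (repeat for each slave NIC)
DEVICE=eno1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0.100  (VLAN 100 for gluster)
DEVICE=bond0.100
VLAN=yes
BOOTPROTO=none
IPADDR=10.0.100.11
PREFIX=24
ONBOOT=yes
```

A second VLAN interface on the same bond (e.g. bond0.10) would carry the management network, so no dedicated physical NIC is strictly required for it.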
Dear oVirt Gurus,
Using the oVirt user VM portal does not seem to work through the squid proxy setup (configured as per the guide). The page loads and login works fine through the proxy, but the asynchronous requests just hang. I've attached a screenshot, but you can see the "api" endpoint just hanging in a web inspector:
This works fine when not going through the proxy.
Is there a way to force noVNC as the console mode through the web UI, or at least have it as an option if not the default?
The console seems not to work when logged in with a base 'user role'.
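For reference, the engine has a configuration key for the default VNC client mode; if I remember correctly it can be switched to noVNC with engine-config, run on the engine machine (treat the exact value as something to verify against `engine-config -l` on your version):

```shell
engine-config -s ClientModeVncDefault=NoVnc
systemctl restart ovirt-engine
```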
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
I have a Windows 7 VM that is composed of a raw disk on a filesystem, on
Fedora 28.
I would like to transfer this VM to another environment, also based on the
same version of Fedora 28, but where the storage is configured in
virt-manager as LVM based, so similar to what happens in oVirt block-based
storage domains.
I'm trying to figure out how to transfer the image.
I presume I can create a new LV, leave some bytes for the LVM header at the
start of the created LV, and then do a sort of dd with an offset?
Does anyone have any hints?
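One note: LVM keeps its metadata in the PV header, outside the LV itself, so a plain dd of the raw image onto the LV device with no offset should normally be enough. If an offset were ever needed, dd's seek= covers it; here is a minimal sketch, demonstrated on plain files so the arithmetic is easy to verify (file names are made up; in the real case the target would be the /dev/<vg>/<lv> node):

```shell
IMG=disk.raw
TARGET=target.img                     # stand-in for the LV device node

# demo inputs: a 4 MiB "image" and an 8 MiB "LV"
dd if=/dev/urandom of="$IMG" bs=1M count=4 status=none
dd if=/dev/zero of="$TARGET" bs=1M count=8 status=none

# write the image starting 1 MiB into the target, without truncating it
dd if="$IMG" of="$TARGET" bs=1M seek=1 conv=notrunc status=none

# verify: the region at the offset matches the original image
dd if="$TARGET" of=check.img bs=1M skip=1 count=4 status=none
cmp -s check.img "$IMG" && echo OK
```

With a real LV, conv=notrunc is harmless (block devices cannot be truncated) and seek=0, i.e. no seek at all, is the usual case.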
Thanks in advance,