We are running oVirt Version 220.127.116.11-1.el6.
I have a VM on one of my hosts that appears to be stuck. The VM shows as
running; however, I am unable to successfully shut down, power down, or
otherwise change the status of the VM.
It is not pinging. The guest OS is Ubuntu 14.04 and the ovirt agent is
I have also attempted to stop the VM from the ovirt-shell and get the
following errors for each command:
[oVirt shell (connected)]# action vm connect-turbo-stage-03 stop
  status: 400
  reason: Bad Request
  detail: Unexpected exception
[oVirt shell (connected)]# remove vm connect-turbo-stage-03
  status: 409
  reason: Conflict
  detail: Cannot remove VM. VM is running.
[oVirt shell (connected)]# action vm connect-turbo-stage-03 detach
  status: 400
  reason: Bad Request
  detail: User is not authorized to perform this action.
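When the engine cannot change a VM's state at all, the usual next step on oVirt 3.x is to bypass the engine and talk to VDSM directly on the host that is actually running the VM (this assumes root access to that host; double-check the vmId before destroying anything):

```
# On the host running the VM: list the VMs VDSM knows about and note the vmId
vdsClient -s 0 list table

# Force-destroy the stuck VM by its vmId
vdsClient -s 0 destroy <vmId>
```

After the destroy, the engine should notice the VM is down within a monitoring cycle and let you work with it again.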
Thank you in advance for your assistance.
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue
We are running the following setup:
- CentOS release 6.6
- CentOS Linux release 7.1.1503
While adding a new host to our cluster the package installations fail.
I have attached a piece of engine.log and a full ovirt-host-deploy* log
from the node.
Why is this happening? Help would be very much appreciated.
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
tel: +386 1 479 8877, fax: +386 1 479 88 78
Is there somewhere a document or information on how to move a hosted engine
from one cluster to another?
Right now we still have one cluster running Fedora 20 that is also running
the hosted engine, and we have one cluster running CentOS 7.1 where the
hosted engine should go.
Any advice on how to proceed with this?
We actually created the ovirtmgmt bridge and the bond manually up front, and
then in "Setup Host Networks" we basically did this again (including
setting the IP address). Regarding the bonding in the gluster network we
did not have a problem: you just drag one interface onto the other and
then select the bonding mode, where you can also go for bonding mode TLB
or ALB if you choose "custom", or just LACP if you have switches that
support it.
Step by Step:
- set the engine to maintenance and shut it down
- configure the bond on the 2 nics for the ovirtmgmt bridge ( em1+em2 ->
bond0 -> ovirtmgmt )
- configure the IP on the bridge
- reboot the server and see whether it comes up correctly
- remove maintenance and let engine start
- set up ovirtmgmt in "Setup Host Networks", but do not forget to set the IP
and gateway there as well.
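For reference, the manual bond + bridge configuration from the steps above might look roughly like this in /etc/sysconfig/network-scripts/ on CentOS. This is a sketch only: the interface names em1/em2, the LACP bonding mode, and the addresses are assumptions to adapt to your environment:

```
# ifcfg-em1 (repeat for em2 with DEVICE=em2)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
NM_CONTROLLED=no

# ifcfg-bond0 (enslaved to the ovirtmgmt bridge, no IP of its own)
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100"
BRIDGE=ovirtmgmt
ONBOOT=yes
NM_CONTROLLED=no

# ifcfg-ovirtmgmt (the bridge carries the IP)
DEVICE=ovirtmgmt
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
ONBOOT=yes
NM_CONTROLLED=no
```

The IP lives on the bridge, not on the bond, which matches the step "configure the IP on the bridge" above.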
It should work without this hassle (if the bonding mode on the switch and
server is compatible), but this way it is easy to get server and switch into
the same mode and working without having to do anything in oVirt.
Hope that helps
On 02/07/15 00:31, "users-bounces(a)ovirt.org on behalf of Prof. Dr. Michael
Schefczyk" <users-bounces(a)ovirt.org on behalf of michael(a)schefczyk.net> wrote:
>Having set up a Centos 7 server with Gluster, oVirt 3.5.3 and hosted
>engine according to
>I was hoping that the NIC management and particularly NIC bond/bridge
>capabilities would have improved a bit. My server has four NICs, two
>connected to the LAN and two to an adjacent server to be used as a
>Gluster network between the two servers. My aim is to use NIC bonding for
>two NICs each.
>Via the engine, I would like to use Hosts -> Network Interfaces -> Setup
>Host Networks. As I use hosted engine, I cannot set the only host to
>maintenance mode. At least during normal operations, however, I am
>neither able to change the ovirt bridge from DHCP to static IP nor create
>a bond consisting of the two LAN facing NICs. In each case I get, "Error
>while executing action Setup Networks: Network is currently being used".
>Editing the network scripts manually is not an option either, as that
>does not survive a reboot. Contrary to this real-world behaviour, everything
>should be easily configurable according to section 6.6 of the oVirt
>One workaround approach could be to temporarily move one NIC connection
>from the adjacent server to the LAN or even temporarily swap both pairs
>of NICs and edit interfaces while they are not in use. Is this really the
>way forward? Should there not be a more elegant approach not requiring
>physically plugging NIC connections just to work around such an issue?
I've set up oVirt 3.5 with the hosted-engine deployment, using CentOS 7 as
the host and CentOS 6 as the engine OS.
The storage for the hosted engine is iSCSI.
The setup went smoothly; I have some VMs running on top of it without many issues.
Right now I have an issue with the hosted-engine metadata:
If I try to add an additional node, the deploy sequence fails at the end
with the error:
Failed to execute stage 'Setup validation': Metadata version 9 from host 5 too new for this agent (highest compatible version: 1)
On the first host, "hosted-engine --check-liveliness" confirms that
the engine is up ("Hosted Engine is up!"), but "hosted-engine --vm-status"
fails with a python exception:
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 116, in <module>
if not status_checker.print_status():
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 59, in print_status
all_host_stats = ha_cli.get_all_host_stats()
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 155, in get_all_host_stats
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 102, in get_all_stats
stats = self._parse_stats(stats, mode)
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 141, in _parse_stats
md = metadata.parse_metadata_to_dict(host_id, data)
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/metadata.py", line 147, in parse_metadata_to_dict
ovirt_hosted_engine_ha.lib.exceptions.FatalMetadataError: Metadata version 9 from host 5 too new for this agent (highest compatible version: 1)
Looking into the metadata file, which is really a block device (via some
symlinks), I see a lot of binary data after the plain ASCII text. Dumping
this device results in a 134MB file.
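To pin down exactly where the plain ASCII ends and the binary data begins in such a dump, a small helper like this can report the offset of the first non-printable byte (this function is my own sketch, not part of the hosted-engine tools; point it at your dump file):

```python
def first_binary_offset(path, chunk_size=4096):
    """Return the offset of the first byte that is not printable ASCII
    (or tab/newline/carriage return), or -1 if the whole file is text."""
    # Printable ASCII range plus common whitespace characters
    allowed = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}
    offset = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return -1  # no binary byte found
            # bytearray() so iteration yields ints on both Python 2 and 3
            for i, b in enumerate(bytearray(chunk)):
                if b not in allowed:
                    return offset + i
            offset += len(chunk)
```

Comparing that offset against the expected size of the readable metadata header would show how far the binary garbage reaches into the volume.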
(if someone is curious, the dump is here:
Reading other posts where the setup was over NFS, it was suggested to
truncate the file, but this is a device file and I'm not sure what to do.
Any hint on where I should look?
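For what it's worth, in the NFS-based reports the metadata was reinitialized by zeroing it while the HA services were stopped; the same idea for a block device might look like the sketch below. The paths are placeholders for your environment (resolve the real symlink under /rhev/data-center yourself), keep a backup, and treat this as a last resort rather than an official procedure:

```
# Stop the HA services on every hosted-engine host first
systemctl stop ovirt-ha-agent ovirt-ha-broker

# Back up the current metadata (placeholder path -- use the resolved symlink)
dd if=/path/to/hosted-engine.metadata of=/root/he-metadata.backup bs=1M

# Zero the metadata volume; dd ending with "no space left on device" is expected
dd if=/dev/zero of=/path/to/hosted-engine.metadata bs=1M

systemctl start ovirt-ha-agent ovirt-ha-broker
```

With clean metadata, the agents rewrite their own host entries on startup, which should clear the bogus "version 9 from host 5" record.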