I am testing guest VM migration in the event of a host crash, and I am
surprised to see that guests that are selected to be highly available do
not migrate when a host is forcibly turned off.
I have a 2 host cluster using iSCSI for storage, and when one of the
hosts, either the SPM or normal, is forcibly turned off, although the
engine sees the host as non-responsive, the VMs that were running on it
remain on that crashed host, and a question mark (?) appears next to
them. Other than checking "highly available", is there another step
that needs to be taken for a VM to be restarted on a working host should
the host it is running on fail?
Accidentally, I deleted the wrong live snapshot of a virtual machine. Which
shouldn't be that big a deal, since filer snapshots are created every hour.
But... after pulling out the qcow files of the VM, I am confronted with:
- the biggest one, which is most likely the main image, from what I was
able to investigate with qemu-img
- several smaller files, which seem to contain the deltas, so I expect
these files to be the live snapshots of the VM
Could anyone point me in a direction to:
- push the restored main image file + snapshot files back into oVirt (if
it's hacky, OK...)
- find out which snapshot files need to be merged into the main
qcow image to get the latest state
For option 2, importing a single qcow image is IMHO no big deal
(virt-v2v). But how the heck do I find out in which order / which files
need to be merged?
I would like to bring our hosts up to 3.5 compatibility version.
Currently we have one host at Compatibility Version 3.5(NonOperational)
and a few others at 3.3 in the cluster/datacenter.
I am guessing that if I change the Cluster Compatibility Version to 3.5,
all hosts not at 3.5 will go into the NonOperational state; the VMs will
try to migrate (and likely fail) and/or stop. VMs will need to be
'rebooted' onto the Operational host (the only one upgraded so far).
Does the Datacenter Compatibility Mode need to be updated for VM's to
What are people doing at this upgrade step to minimize VM downtime?
I connected (SPICE client in Firefox) to a VM with the "open in full
screen" option. Even after logging off the remote system, I still remain
connected to the VM.
How do I get out of the SPICE connection?
I tried Shift+F12 and Ctrl+Alt, but they did not work.
I can only close the SPICE connection when I power off the VM.
I think I found a nasty bug (or feature) in oVirt.
One of my network cards was set up with DHCP. At that specific time
there was not yet a DHCP server set up which could respond to DHCP requests.
Therefore my network interface was not able to obtain an IP address.
This "failure" led to my ovirtmgmt bridge not getting started.
__Maybe__ because ovirtmgmt sorts alphanumerically after dbvlan116? Because
all my bonding interfaces, bond0 and bond1, started just fine.
I was able to solve it by moving my /sbin/dhclient to
/sbin/dhclient.backup and creating a dummy "exit 0" bash script in its
place. Then the network startup process progressed to my ovirtmgmt
interface. From then on I was able to connect to and manage my host again,
and to switch my dbvlan116 interface from dhcp to none.
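The workaround above can be sketched as a small shell function. The directory is a parameter here so the steps can be rehearsed outside /sbin; on the affected host you would run this as root with dir=/sbin, and it is my restating of the poster's fix, not a supported oVirt mechanism.

```shell
# Park the real dhclient and drop in a no-op stub so ifup-eth stops
# blocking on the missing DHCP server.
stub_dhclient() {
    dir="$1"
    mv "$dir/dhclient" "$dir/dhclient.backup"
    cat > "$dir/dhclient" <<'EOF'
#!/bin/sh
# temporary stub: pretend the DHCP query succeeded so startup can continue
exit 0
EOF
    chmod +x "$dir/dhclient"
}
```

After switching the interface from dhcp to none, the real client should be restored with `mv /sbin/dhclient.backup /sbin/dhclient`.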
Here is the process list it seems to loop in:
root 2554 0.0 0.0 115612 1988 ? S< 10:06 0:00
/bin/bash /etc/sysconfig/network-scripts/ifup-eth ifcfg-dbvlan116
root 2594 0.0 0.0 104208 15620 ? S< 10:06 0:00
/sbin/dhclient -H ovirt-node06-stgt -1 -q -lf
/var/lib/dhclient/dhclient--dbvlan116.lease -pf /var/run/
root 32047 0.0 0.0 115348 1676 ? S<s 10:06 0:00 /bin/sh
root 32142 1.5 0.0 348460 24952 ? S< 10:06 0:00
Just killing the dhclient does not seem to work; it keeps retrying.
I reported a bug before, but maybe it's better to discuss it here first
and explain the bug properly, so that the bug tracker folks know what I
mean and what the problem is. :)
Maybe it's best to start the ovirtmgmt interface first? Otherwise a
wrongly configured interface will lock you out of the system.
Thanks for your time,
(sorry: resending as I wasn’t part of the list, yet)
This is my first post, so hello all and thank you for reading.
I have an issue with my production oVirt environment (22.214.171.124-1.el6).
My system consists of several datacenters.
Two of them are connected to an iSCSI SAN and they were working fine,
until the moment I had the bad idea of deleting a SAN volume from the SAN
manager before deleting the associated storage in oVirt. From that moment,
the DC where this storage was mounted became unresponsive: it cannot
attach the master storage (or any other).
I tried to:
1) manually destroy the offending storage (select -> destroy), but I
still cannot recover the situation;
2) right-click on the master storage and activate it;
3) re-initialize the datacenter using an NFS storage from the working
sister DC.
All hosts are still running even though their status is "unknown".
All VMs are still running even though their status is "not responding".
I half-resolved the issue by manually restarting the host where the
datastore was originally mounted. This cleared the orphaned multipath.
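For reference, the stale multipath maps left behind by a deleted LUN can usually be flushed without a full host reboot. This is a hedged sketch, not something from the post: `DRY_RUN=1` (the default here) only prints the commands, and the WWID below is a made-up example.

```shell
# Print commands instead of running them unless DRY_RUN=0.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run multipath -ll                  # list maps; stale ones show failed paths
run multipath -f 36000d31000abc    # example WWID: flush one stale map
run multipath -F                   # or flush all unused maps at once
```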
However, the SPM still does not come up.
This is an extract of the log:
2015-04-16 03:51:48,069 WARN
(DefaultQuartzScheduler_Worker-14) [61a44b19] could not stop spm of pool
00000002-0002-0002-0002-00000000009c on vds
89254f23-8748-402a-afc9-08438dca0975 - reason:
VDSGenericException: VDSNetworkException: Message timeout which can be
caused by communication issues
2015-04-16 03:51:48,072 INFO
(DefaultQuartzScheduler_Worker-14) [61a44b19] FINISH, SpmStopVDSCommand,
log id: 4354cf46
2015-04-16 03:51:48,072 WARN
(DefaultQuartzScheduler_Worker-14) [61a44b19] spm stop on spm failed,
stopping spm selection!
2015-04-16 03:51:58,223 INFO
(DefaultQuartzScheduler_Worker-4) [4ca2d938] hostFromVds::selectedVds -
Brachetto, spmStatus Free, storage pool IRDC-INTEL
2015-04-16 03:51:58,225 ERROR
(DefaultQuartzScheduler_Worker-4) [4ca2d938] SPM Init: could not find
reported vds or not up - pool:IRDC-INTEL vds_spm_id: 3
2015-04-16 03:51:58,239 INFO
(DefaultQuartzScheduler_Worker-4) [4ca2d938] SPM selection - vds seems as
2015-04-16 03:51:58,252 INFO
(DefaultQuartzScheduler_Worker-4) [4ca2d938] START,
SpmStopVDSCommand(HostName = sovana, HostId =
89254f23-8748-402a-afc9-08438dca0975, storagePoolId =
00000002-0002-0002-0002-00000000009c), log id: 63a17687
storagePoolId 00000002-0002-0002-0002-00000000009c is (was) hertz-dstore2,
which does not exist anymore on the SAN or in oVirt.
HostId 89254f23-8748-402a-afc9-08438dca0975 is the sovana server (the
current SPM).
I'm thinking about:
- putting the hosted engine host into maintenance
- shutting down the oVirt Manager
- rebooting the SPM server
- restarting the oVirt Manager
- taking the hosted engine host out of maintenance
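The plan above could be sketched with the hosted-engine CLI. The flags are real hosted-engine options, but this is only my reading of the steps, not a verified recovery procedure; `DRY_RUN=1` (the default here) just prints the commands, and `sovana` is the SPM host named earlier.

```shell
# Print commands instead of running them unless DRY_RUN=0.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run hosted-engine --set-maintenance --mode=global   # pause HA agent actions
run hosted-engine --vm-shutdown                     # stop the engine VM cleanly
run ssh root@sovana reboot                          # reboot the stuck SPM host
run hosted-engine --vm-start                        # bring the engine back up
run hosted-engine --set-maintenance --mode=none     # leave global maintenance
```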
Any help or clue is highly welcome. Cheers and beers!