I'd like to find any VMs with certain properties set (console-enabled or
serial_number-policy, for example) with ovirt-shell
-E "show vm name", but I can't find all the information that I can get with
the "edit VM" popup in the UI.
I believed these kinds of properties were unavailable because the flag was
false by default, but after updating the flag to true, it is still
not visible. Is there an extended way to get those features with ovirt-shell?
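For what it's worth, the REST API that ovirt-shell wraps can expose additional VM properties when asked. One approach (an assumption on my side, not a recipe verified against your exact version) is to query a VM with the "All-Content: true" header. A minimal sketch of building such a request; the engine URL and VM id are placeholders:

```python
# Sketch (assumption): the oVirt REST API returns additional VM details
# (console settings, serial-number policy, ...) when the request carries
# an "All-Content: true" header. Engine URL and VM id are placeholders.
def build_vm_query(engine_url, vm_id):
    """Return (url, headers) for a full-content GET on one VM."""
    url = "%s/api/vms/%s" % (engine_url.rstrip("/"), vm_id)
    headers = {"All-Content": "true", "Accept": "application/xml"}
    return url, headers

url, headers = build_vm_query("https://engine.example.com", "1234-abcd")
# Send with e.g.: curl -k -u admin@internal:PASS -H 'All-Content: true' "$url"
```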
I'm running oVirt 220.127.116.11-1.el7.centos and when I install a Win 7 guest
VM, using VirtIO disk, networking etc, it goes through the install process
ok, but blue screens upon boot with a vioscsi.sys error (attached). I have
tried the official version ovirt-guest-tools-iso 3.6.0 0.2_master.fc22, as
well as some earlier and later versions. I am using a PXE boot method with
a Windows deployment server, which has the drivers from the oVirt tools ISO
installed (indeed, it picks up the drive and networking and I can see it
installing the drivers). I have tried with the generic IDE and rtl8139
config on the guest, and it also fails with the same vioscsi.sys error
after rebooting at the end of installation, even though I'm using IDE as
the disk driver.
I have uploaded a Win 7 x64 ISO and tried installing from that; it loads the
VirtIO viostor driver (using the method at:
and even manages to partition the disks, but fails to install to the disk.
I've tried temporarily removing the vioscsi files from the install server
as a last resort, but, as expected, it fails to install properly, though I
thought it would use the viostor driver instead.
Thanks for any help.
There seems to be a pretty severe bug with using a hosted engine on gluster.
If the host that was used as the initial hosted-engine --deploy host goes away, the engine VM will crash and cannot be restarted until that host comes back.
This is regardless of which host the engine was currently running on.
The issue seems to be buried in the bowels of VDSM and is not an issue with gluster itself.
The gluster filesystem is still accessible from the host that was running the engine. The issue has been submitted to Bugzilla, but the fix is some way off (4.1).
Can my hosted engine be converted to use NFS (using the gluster NFS server on the same filesystem) without rebuilding my hosted engine (i.e. change domainType=glusterfs to domainType=nfs)?
What effect would that have on the hosted-engine storage domain inside oVirt? Would the same filesystem be mounted twice, or would it just break?
Will this actually fix the problem, or does the hosted engine have the same issue when it is on NFS?
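For reference, the glusterfs-vs-NFS choice lives in /etc/ovirt-hosted-engine/hosted-engine.conf on each host. Purely as an illustration of the kind of change the question is about (the values are made up, and this is not a tested or supported conversion procedure):

```
# /etc/ovirt-hosted-engine/hosted-engine.conf (excerpt, illustrative values)

# current: engine storage mounted through the gluster fuse client
domainType=glusterfs
storage=node1.example.com:/engine

# hypothetical: same volume reached through gluster's built-in NFS server
#domainType=nfs
#storage=node1.example.com:/engine
```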
[Here: oVirt 3.5.3, 3 x CentOS 7.0 hosts with a replica-3 gluster SD.]
On the switches, I have created a dedicated VLAN to isolate the GlusterFS
traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster NIC,
and a DNS record as well ("my_nodes_name_GL"), but I fear that using this
hostname or this IP in the oVirt GUI's host network interface tab would
lead oVirt to think this is a different host.
Not being sure this fear is clearly described, let me spell it out:
- On each node, I create a second IP (plus a DNS record in the zone) used by
gluster, plugged into the correct VLAN.
- In the oVirt GUI, in the host network settings tab, the interface will be
seen with its IP, but reverse DNS will resolve it to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare that this NIC
belongs to another host.
I would also prefer not to use a reverse record pointing to the name of the
host's management IP, as this is evil and I'm a good guy.
On your side, how do you cope with a dedicated storage network in case
of storage+compute mixed hosts?
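For what it's worth, one pattern I've seen (an assumption, not something oVirt prescribes) is to give the storage NICs their own names in /etc/hosts on the nodes only, keep those names out of the engine entirely, and peer-probe gluster with them, so the management hostname stays the only name oVirt ever sees. Illustrative addresses and names:

```
# /etc/hosts on each node (illustrative): one extra name per node for the
# storage VLAN, never registered in the oVirt engine
10.0.100.1  node1-gl
10.0.100.2  node2-gl
10.0.100.3  node3-gl

# gluster then peers over the dedicated network, e.g.:
#   gluster peer probe node2-gl
#   gluster peer probe node3-gl
```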
I'm confused: although I use ovirt-shell to script many actions
every day, and even after a lot of reading and testing, I cannot
find the correct syntax to move (offline/available) disks between
storage domains.
Could you help me, please?
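In case it helps frame the question: as far as I know, the 3.x REST API (which ovirt-shell wraps) exposes a "move" action on disks that takes the target storage domain in the action body. A sketch of building that request; the engine URL, disk id, and storage-domain name are placeholders, and I haven't verified this against your version:

```python
# Sketch (assumption): POST /api/disks/{id}/move with an <action> body
# naming the target storage domain moves a detached/available disk.
# All identifiers below are placeholders.
def build_disk_move_request(engine_url, disk_id, target_sd_name):
    """Return (url, xml_body) for a disk "move" action."""
    url = "%s/api/disks/%s/move" % (engine_url.rstrip("/"), disk_id)
    body = ("<action><storage_domain><name>%s</name>"
            "</storage_domain></action>" % target_sd_name)
    return url, body

url, body = build_disk_move_request(
    "https://engine.example.com", "1234-abcd", "target_sd")
# Send with e.g.:
#   curl -k -u admin@internal:PASS -H 'Content-Type: application/xml' \
#        -d "$body" "$url"
```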
Since we upgraded to the latest oVirt Node running 7.2, we're seeing that
nodes become unavailable after a while. A node runs fine, with a couple of
VMs on it, until it becomes non-responsive. At that moment it doesn't
even respond to ICMP. It will come back by itself after a while, but oVirt
fences the machine before then and restarts the VMs elsewhere.
Engine tells me this message:
VDSM host09 command failed: Message timeout which can be caused by
Is anyone else experiencing these issues with ixgbe drivers? I'm running on
Intel X540-AT2 cards.
With kind regards,
I have had a problem for a couple of weeks where, randomly, one VM (not always the same one) becomes completely unresponsive.
We find this out because our Icinga server complains that the host is down.
Upon inspection, we find we can't open a console to the VM, nor can we log in.
In the oVirt engine, the VM looks "up". The only weird thing is that RAM usage shows 0% and CPU usage shows 100% or 75%, depending on the number of cores.
The only way to recover is to force the VM off by issuing shutdown twice from the engine.
Could you please help me start debugging this?
I can provide any logs, but I'm not sure which ones, because I couldn't see anything with ERROR in the vdsm logs on the host.
The host is running
OS Version: RHEL - 7 - 1.1503.el7.centos.2.8
Kernel Version: 3.10.0 - 229.14.1.el7.x86_64
KVM Version: 2.1.2 - 23.el7_1.8.1
LIBVIRT Version: libvirt-1.2.8-16.el7_1.4
VDSM Version: vdsm-4.16.26-0.el7.centos
SPICE Version: 0.12.4 - 9.el7_1.3
GlusterFS Version: glusterfs-3.7.5-1.el7
We use a locally exported gluster volume as the storage domain (i.e. storage is on the same machine, exposed via gluster). No replica.
We run around 50 VMs on that host.
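Not an answer, but for the "which logs" part: on a stock oVirt 3.x host the usual suspects live under /var/log. A sketch that skims them for trouble; the VM name in the qemu log path is a placeholder, and missing files are simply skipped:

```shell
# Candidate logs on an oVirt 3.x host (the qemu log name is per-VM; adjust it).
CANDIDATE_LOGS="/var/log/vdsm/vdsm.log \
/var/log/libvirt/qemu/myvm.log \
/var/log/sanlock.log \
/var/log/messages"

for f in $CANDIDATE_LOGS; do
    if [ -f "$f" ]; then
        echo "== $f =="
        # last few warning/error/timeout lines, case-insensitive
        grep -iE 'error|warn|timeout' "$f" | tail -n 20
    fi
done
```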
Thank you for your help in this,
Hi everyone. I've been having trouble when exporting VMs: I get an error
when moving the image. I've created a whole new storage domain exclusively
for this issue, and the same thing happens. It's not always the same VM that
fails, but once it fails on a certain storage domain, I cannot export it any
more. Please tell me which logs are relevant so I can post them, along with
any other relevant information I can provide; maybe someone can help me get
through this problem.
The oVirt version is 18.104.22.168-1.el6. The hosted engine is CentOS 6. The
hosts are CentOS 7. The VMs are all CentOS 7, except for two that are CentOS 6 and
Please excuse my bad English!
Thanks in advance!