I need to change the gateway ping address, the one used by hosted engine setup.
Is it OK to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each node,
update the gateway parameter with the new IP address, and restart
the agent and broker on each node?
A quick blind test seems OK, but I need to understand whether this is the
right procedure.
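Concretely, what I tried on each node was roughly this (the new address is a
made-up example, and the el7 service names are an assumption on my part):
  # update the ping target in the shared config
  sed -i 's/^gateway=.*/gateway=192.168.1.254/' \
      /etc/ovirt-hosted-engine/hosted-engine.conf
  # restart the HA broker and agent so they pick up the new value
  systemctl restart ovirt-ha-broker ovirt-ha-agent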
I am looking for a way to get my hosted-engine running on el7 so I can
upgrade to oVirt 4.0. Currently my hosts already run el7, but my
hosted-engine VM is still el6; so far only the hosts have been upgraded.
I read https://www.ovirt.org/documentation/how-to/hosted-engine/, but it
only mentions upgrade of the hosted-engine software, not the OS.
I understood I can do a fresh hosted-engine install and then import my
storage domain into the new hosted engine, but:
- Do I need to restore my hosted-engine database? (like described here:
- Can I directly install hosted-engine 4.0 and then import the storage
domain, or should I install the same hosted-engine version first?
- Do I first need another master storage domain, or can I directly import my
old master storage domain?
- When importing the storage domain, what is the risk that it fails? (I have
backups, but a full restore would cost a day.)
- How long would the import take: a few minutes or hours? (I want to keep
downtime as low as possible.)
Another option would be to upgrade the OS in place (with
redhat-upgrade-tool), or is that a path to disaster?
I hope someone can tell me how I can smoothly upgrade my hosted-engine to
el7 and run oVirt 4.
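For reference, if a database restore does turn out to be needed, I assume
the backup half would look something like this on the current engine VM
(file names are just examples):
  # full engine backup before any reinstall/import attempt
  engine-backup --mode=backup --scope=all \
      --file=engine-backup-$(date +%F).tar.gz --log=engine-backup.log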
I'm running oVirt 184.108.40.206-1.el7.centos and when I install a Win 7 guest
VM, using VirtIO disk, networking, etc., it goes through the install process
OK, but blue-screens on boot with a vioscsi.sys error (attached). I have
tried the official version ovirt-guest-tools-iso 3.6.0-0.2_master.fc22, as
well as some earlier and later versions. I am using a PXE boot method with
a Windows deployment server, which has the drivers from the oVirt tools ISO
installed (indeed, it picks up the drive and networking and I can see it
installing the drivers). I have tried with the generic IDE and rtl8139
config on the guest, and it also fails with the same vioscsi.sys error
after rebooting when the installation finishes, even though I'm using IDE as
the disk driver.
I have uploaded a Win 7 x64 ISO and tried installing from that; it loads the
VirtIO viostor driver (using the method at:
and even manages to partition the disks, but then fails to install on the disk.
I've tried temporarily removing the vioscsi files from the install server
as a last resort, but as expected it fails to install properly, even though
I thought it would use the viostor driver instead.
Thanks for any help.
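In case it helps: one way to double-check which disk bus the guest actually
ended up with is a read-only dump of its definition on the host (the VM name
here is just a placeholder):
  # look for <target ... bus='...'/> inside each <disk> element
  virsh -r dumpxml win7-test | grep -A3 '<disk'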
I'm testing video and audio capabilities inside a guest.
The chosen guest is CentOS 6 with the latest updates.
oVirt is 3.6.6 on an Intel NUC6i5SYH with CentOS 7.2 as the host OS.
BTW: is the audio adapter on the host of any importance?
In case it matters, lspci on the host gives:
00:1f.3 Audio device: Intel Corporation Device 9d70 (rev 21)
The client connecting from the user portal is Fedora 23 on an Asus U36SD
laptop, where audio works and lspci gives:
00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family
High Definition Audio Controller (rev 05)
On the CentOS 6 guest, the audio adapter detected by the OS with lspci is:
00:08.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family)
High Definition Audio Controller (rev 01)
It all seems OK, except that on the guest I don't actually get any audio,
not even from the test-speakers page ;-)
Both on the guest and on the client the volume bar is near the maximum.
Any hints about debugging?
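On the guest side, I suppose I could also run the basic ALSA checks
(assuming alsa-utils is installed on the CentOS 6 guest):
  aplay -l           # list playback devices; the ICH6/intel-hda device should appear
  speaker-test -c 2  # play a test tone on both channels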
From the host's point of view, the qemu-kvm command line is the one below
(some chunks were cut off in the paste; marked with [...]).
I don't understand the QEMU_AUDIO_DRV=none environment variable in it.
If it is of any importance: I initially configured the guest without a
sound card, and in fact in GNOME I saw the audio card as "dummy".
Then I powered off the guest, enabled the sound card from the user portal's
edit-VM dialog (I see it enabled from the admin portal too, btw), and then
powered the VM back on.
Now the sound card seems to be present, but there is still no audio.
Thanks in advance,
2016-06-11 09:22:13.698+0000: starting up libvirt version: 1.2.17, package:
13.el7_2.4 (CentOS BuildSystem <http://bugs.centos.org>, 2016-03-31-16:56:26,
worker1.bsys.centos.org), qemu version: 2.3.0
QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name c6desktop -S -machine
pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Broadwell-noTSX -m
size=3145728k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
[...]sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=3072 -uuid
68a82ada-a1d2-44d0-85b0-f3a08cc2f579 -smbios type=1,manufacturer=oVirt,product=[...]
[...]-no-user-config -nodefaults -chardev [...]
[...]charmonitor,id=monitor,mode=control -rtc base=2016-06-11T09:22:13,driftfix=slew
-global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown
-boot menu=on,strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x9.0x7
[...]bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive [...]
[...],server,nowait -device isa-serial,chardev=charserial0,id=serial0 -chardev [...]
-chardev spicevmc,id=charchannel2,name=vdagent -device [...]
-vnc 0:3,password -device [...]
-device intel-hda,id=sound0,bus=pci.0,addr=0x8 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
2016-06-11T09:22:13.748410Z qemu-kvm: warning: CPU(s) not present in any
NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
2016-06-11T09:22:13.748524Z qemu-kvm: warning: All CPU(s) up to maxcpus
should be described in NUMA config
main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 21.910000 ms, bitrate
51635025 bps (49.242997 Mbps)
inputs_connect: inputs channel client create
main_channel_handle_parsed: agent start
main_channel_handle_parsed: agent start
There seems to be a pretty severe bug with using hosted engine on gluster.
If the host that was used as the initial hosted-engine --deploy host goes away, the engine VM will crash and cannot be restarted until that host comes back.
This is regardless of which host the engine was currently running on.
The issue seems to be buried in the bowels of VDSM and is not an issue with gluster itself.
The gluster filesystem is still accessible from the host that was running the engine. The issue has been submitted to Bugzilla, but the fix is some way off (4.1).
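For what it's worth, the current configuration can be checked on each host
with something like:
  # how the hosted-engine storage is configured right now
  grep -E '^(storage|domainType)=' /etc/ovirt-hosted-engine/hosted-engine.conf
  # and how it is actually mounted (hosted-engine storage lives under /rhev)
  mount | grep /rhev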
Can my hosted engine be converted to use NFS (using the gluster NFS server on the same filesystem) without rebuilding the hosted engine, i.e. by changing domainType=glusterfs to domainType=nfs?
What effect would that have on the hosted-engine storage domain inside oVirt, i.e. would the same filesystem be mounted twice, or would it just break?
Will this actually fix the problem, or does hosted engine on NFS have the same issue?
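In other words, would an edit along these lines on each host be enough, or
would it break something? (A hypothetical sketch only; the export path is
made up.)
  # switch the hosted-engine storage type from glusterfs to nfs
  sed -i 's/^domainType=glusterfs/domainType=nfs/' \
      /etc/ovirt-hosted-engine/hosted-engine.conf
  # storage= would then have to point at the gluster NFS export of the
  # same volume, e.g.:  storage=gluster1.example.com:/engine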
[Setup here: oVirt 3.5.3, 3 x CentOS 7.0 hosts with a replica-3 gluster SD
on the hosts themselves.]
On the switches, I have created a dedicated VLAN to isolate the glusterFS
traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster NIC,
and a matching DNS record ("my_nodes_name_GL"), but I fear that using this
hostname or this IP in the oVirt GUI's host network interface tab would
lead oVirt to think this is a different host.
Not being sure this fear is clearly described, let's say:
- On each node, I create a second IP (plus a DNS record in the zone) used
by gluster, plugged into the correct VLAN.
- In the oVirt GUI, in the host network settings tab, the interface will be
seen, with its IP, but reverse-DNS-related to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare that this NIC
belongs to another host.
I would also prefer not to use a reverse record pointing to the name of the
host's management IP, as this is evil and I'm a good guy.
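For concreteness, the naming scheme I have in mind is one extra record per
node on the storage VLAN, something like (names and addresses made up):
  # /etc/hosts style, or the equivalent A/PTR records in the zone
  10.10.10.1   node1-gl.example.com   node1-gl
  10.10.10.2   node2-gl.example.com   node2-gl
  10.10.10.3   node3-gl.example.com   node3-gl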
On your side, how do you cope with a dedicated storage network in the case
of mixed storage+compute hosts?