how to debug no audio in guest?
by Gianluca Cecchi
Hello,
I'm testing video and audio capability inside a guest.
The chosen guest is CentOS 6 with the latest updates.
oVirt is 3.6.6 on an Intel NUC6i5SYH running CentOS 7.2.
BTW: does the audio adapter on the host matter at all?
In case it matters, lspci on the host gives
00:1f.3 Audio device: Intel Corporation Device 9d70 (rev 21)
The client connecting from the user portal is Fedora 23 on an Asus U36SD laptop,
where audio works and lspci gives
00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family
High Definition Audio Controller (rev 05)
On the CentOS 6 guest the audio adapter detected by the OS with lspci is
00:08.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family)
High Definition Audio Controller (rev 01)
It all seems OK, except that on the guest I don't actually get any audio, not
even from the test speakers page ;-)
Both on the guest and on the client the volume bar is near the maximum.
Any hints on how to debug this?
From the host's point of view the qemu-kvm command line is the one below.
I don't understand the environment variable QEMU_AUDIO_DRV=none. What does it mean?
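In case it helps with debugging, here is a rough sketch of what I can check from the host side (the domain name c6desktop is the one from the log below; the pgrep pattern is just illustrative):

# which audio backend the running qemu process actually got
tr '\0' '\n' < /proc/$(pgrep -f 'qemu-kvm.*c6desktop')/environ | grep QEMU_AUDIO_DRV
# whether the libvirt domain XML contains a sound device and SPICE graphics
virsh -r dumpxml c6desktop | grep -E -A2 '<sound|<graphics'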
If it's of any importance, I initially configured the guest without a sound
card, and in fact GNOME showed the audio card as "dummy".
Then I powered off the guest, enabled the sound card from the user portal's
edit VM dialog (I see it enabled from the admin portal too, btw) and powered
the VM back on.
Now the sound card seems to be present, but there is still no audio.
Thanks in advance,
Gianluca
2016-06-11 09:22:13.698+0000: starting up libvirt version: 1.2.17, package: 13.el7_2.4 (CentOS BuildSystem <http://bugs.centos.org>, 2016-03-31-16:56:26, worker1.bsys.centos.org), qemu version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7_2.10.1)
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name c6desktop -S
-machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Broadwell-noTSX
-m size=3145728k,slots=16,maxmem=4294967296k -realtime mlock=off
-smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=3072
-uuid 68a82ada-a1d2-44d0-85b0-f3a08cc2f579
-smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=AC1EDDD3-CAF1-2711-EE16-B8AEED7F1711,uuid=68a82ada-a1d2-44d0-85b0-f3a08cc2f579
-no-user-config -nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-c6desktop/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control
-rtc base=2016-06-11T09:22:13,driftfix=slew
-global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown
-boot menu=on,strict=on
-device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x9.0x7
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x9.0x2
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x9.0x1
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x9
-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3
-device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/00000001-0001-0001-0001-000000000343/572eabe7-15d0-42c2-8fa9-0bd773e22e2e/images/aff55e62-6a41-4f75-bbd3-78561eae18f3/f520473e-8fbe-4886-bb64-921b42edf499,if=none,id=drive-virtio-disk0,format=raw,serial=aff55e62-6a41-4f75-bbd3-78561eae18f3,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x7
-chardev socket,id=charserial0,path=/var/run/ovirt-vmconsole-console/68a82ada-a1d2-44d0-85b0-f3a08cc2f579.sock,server,nowait
-device isa-serial,chardev=charserial0,id=serial0
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/68a82ada-a1d2-44d0-85b0-f3a08cc2f579.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/68a82ada-a1d2-44d0-85b0-f3a08cc2f579.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent
-device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice port=5901,tls-port=5902,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on
-vnc 0:3,password
-device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vgamem_mb=16,bus=pci.0,addr=0x2
-device intel-hda,id=sound0,bus=pci.0,addr=0x8
-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0
-chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0
-chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1
-chardev spicevmc,id=charredir2,name=usbredir -device usb-redir,chardev=charredir2,id=redir2
-chardev spicevmc,id=charredir3,name=usbredir -device usb-redir,chardev=charredir3,id=redir3
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
2016-06-11T09:22:13.748410Z qemu-kvm: warning: CPU(s) not present in any
NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
2016-06-11T09:22:13.748524Z qemu-kvm: warning: All CPU(s) up to maxcpus
should be described in NUMA config
main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 21.910000 ms, bitrate
51635025 bps (49.242997 Mbps)
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer:
main_channel_handle_parsed: agent start
main_channel_handle_parsed: agent start
Hosted engine on gluster problem
by Bond, Darryl
There seems to be a pretty severe bug with using hosted engine on gluster.
If the host that was used as the initial hosted-engine --deploy host goes away, the engine VM will crash and cannot be restarted until that host comes back, regardless of which host the engine is currently running on.
The issue seems to be buried in the bowels of VDSM and is not an issue with gluster itself.
The gluster filesystem is still accessible from the host that was running the engine. The issue has been submitted to bugzilla, but the fix is some way off (4.1).
Can my hosted engine be converted to use NFS (using the gluster NFS server on the same filesystem) without rebuilding my hosted engine (i.e. changing domainType=glusterfs to domainType=nfs)?
What effect would that have on the hosted-engine storage domain inside oVirt, i.e. would the same filesystem be mounted twice, or would it just break?
Will this actually fix the problem, or does the same issue exist when the hosted engine is on NFS?
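For reference, the setting I'm talking about lives in /etc/ovirt-hosted-engine/hosted-engine.conf on each host; a rough sketch (values are illustrative, and whether simply editing this file is supported is part of my question):

# current
storage=gluster1.example.com:/engine
domainType=glusterfs
# what I would like to end up with (pointing at the gluster NFS export of the same filesystem)
storage=gluster1.example.com:/engine
domainType=nfs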
Darryl
________________________________
Dedicated NICs for gluster network
by Nicolas Ecarnot
Hello,
[Here: oVirt 3.5.3, 3 x CentOS 7.0 hosts with a replica-3 gluster SD on
the hosts.]
On the switches, I have created a dedicated VLAN to isolate the glusterFS
traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster NIC,
and a DNS record to go with it ("my_nodes_name_GL"), but I fear that using
this hostname or this IP in the oVirt GUI's host network interface tab would
lead oVirt to think this is a different host.
Not being sure this fear is clearly described, let's say:
- On each node, I create a second IP (and a DNS record in the zone) used by
gluster, plugged into the correct VLAN.
- In the oVirt GUI, in the host network settings tab, the interface will be
seen with its IP, but reverse-DNS-related to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare that this NIC
belongs to another host.
I would also prefer not to use a reverse record pointing to the name of the
host's management IP, as this is evil and I'm a good guy.
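To make it concrete, a rough sketch of the naming scheme I have in mind (names and addresses are made up):

# management network: the name/IP the host was added to oVirt with
192.168.1.11    node1.example.com
# dedicated gluster VLAN: the "_GL" name I would like to use for the bricks
10.10.10.11     node1-gl.example.com
# the volume would then be created against the _GL names, e.g.:
# gluster volume create data replica 3 node1-gl.example.com:/gluster/brick1/data ...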
On your side, how do you cope with a dedicated storage network in the case
of storage+compute mixed hosts?
--
Nicolas ECARNOT
Ovirt 4.0 Login Issue
by Melissa Mesler
I am running 4.0 on CentOS 7.2. Sometimes when I first log in to the
admin page, it gives me an error that says "Request state does not
match session state." Then if I go through the process of logging in
again, it works with no issue. It doesn't do this every time,
but it does do it quite often. Any ideas why?
- MeLLy
oVirt-shell command to move a disk
by Nicolas Ecarnot
Hello,
I'm confused: even though I use ovirt-shell to script many actions
every day, and even after a lot of reading and testing, I cannot
find the correct syntax to move (offline/available) disks between
storage domains.
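For reference, the kind of invocation I have been trying, which may itself be the mistake (the disk alias and target domain name are made up, and the parameter names are my guesses from the CLI help):

[oVirt shell (connected)]# action disk mydisk01 move --storage_domain-name TARGET_SD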
Can you help me, please?
(oVirt 3.4.4)
--
Nicolas Ecarnot
ovirt 3.6.6 and gluster 3.7.13
by David Gossage
Is anyone running one of the recent 3.6.x lines with gluster 3.7.13? I am
looking to upgrade gluster from 3.7.11 to 3.7.13 for some bug fixes, but I have
been told by users on the gluster mailing list that, due to some gluster
changes, I'd need to change the disk parameters to use writeback cache;
something to do with aio support being removed.
I believe this could be done with custom parameters? But I believe the storage
tests are done using dd, so would they then fail with the current settings?
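If it helps, these are the kind of volume options I suspect are involved; a sketch only (the volume name GLUSTER1 is taken from the logs below, and whether these are really the right knobs for the 3.7.13 aio change is exactly what I'm asking):

# current values
gluster volume get GLUSTER1 network.remote-dio
gluster volume get GLUSTER1 performance.strict-o-direct
# the settings I have seen suggested for cache=none / O_DIRECT workloads
gluster volume set GLUSTER1 network.remote-dio enable
gluster volume set GLUSTER1 performance.strict-o-direct off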
On my last upgrade to 3.7.13 I had to roll back to 3.7.11 due to stability
issues where the gluster storage would go into the down state and always show
N/A as space available/used, even though the hosts still saw the storage and
VMs were running on it on all 3 hosts.
I saw a lot of messages like these, which went away once the gluster rollback
finished:
[2016-07-09 15:27:46.935694] I [fuse-bridge.c:4083:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel
7.22
[2016-07-09 15:27:49.555466] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-1: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:49.556574] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-0: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:49.556659] W [fuse-bridge.c:2227:fuse_readv_cbk]
0-glusterfs-fuse: 80: READ => -1 gfid=deb61291-5176-4b81-8315-3f1cf8e3534d
fd=0x7f5224002f68 (Operation not permitted)
[2016-07-09 15:27:59.612477] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-1: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:59.613700] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-0: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:59.613781] W [fuse-bridge.c:2227:fuse_readv_cbk]
0-glusterfs-fuse: 168: READ => -1 gfid=deb61291-5176-4b81-8315-3f1cf8e3534d
fd=0x7f5224002f68 (Operation not permitted)
*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284
management interface
by Fernando Fuentes
Team,
Is it possible to move the oVirt management interface to a separate VLAN
and keep it from being on the same interface as my LAN?
I.e., assign a VLAN tag to ovirtmgmt and keep another interface untagged
for my LAN to use with the VMs.
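To make it concrete, a rough sketch of what I think a host would end up with if ovirtmgmt were tagged (VLAN id 100, device names and addresses are made up; oVirt normally generates these ifcfg files itself once the logical network is tagged in the cluster):

# /etc/sysconfig/network-scripts/ifcfg-em1.100
DEVICE=em1.100
VLAN=yes
BRIDGE=ovirtmgmt
ONBOOT=yes
# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.0.100.11
NETMASK=255.255.255.0
ONBOOT=yes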
Thoughts ideas?
TIA!
Regards,
--
Fernando Fuentes
ffuentes(a)txweather.org
http://www.txweather.org
Upgrade from 3.6 to 4.0
by ovirt@timmi.org
Hi List,
is there a special upgrade procedure to get from oVirt 3.6 to 4.0, or is it
just the normal upgrade procedure?
I'm currently running oVirt 3.6.7 on CentOS 7.
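Unless there is something special, the flow I would naively follow on the engine host is the usual one; a sketch (the release RPM URL is the one I found for 4.0, please correct me if the path is different):

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
yum update "ovirt-engine-setup*"
engine-setup
# then update the hosts afterwards, from the web admin or with yum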
Best regards and thank you for the answer.
Christoph
disk not bootable
by Fernando Fuentes
Team,
After I successfully copied my template from one storage domain to another, I
was able to move my VM disks from my NFS domain to my iSCSI domain.
My Linux VMs, which are basic (no template), moved just fine and boot just fine.
On the other hand, my Windows VMs (template-based) can't boot once moved,
complaining that there is no bootable disk available.
What is going on?
oVirt 3.6.6
Hosts: CentOS 6.6 x86_64
iSCSI domain on TrueNAS, attached via oVirt.
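One thing I was going to check is whether the bootable flag survived the move on the template-based disks; a sketch (engine URL, credentials and VM id are placeholders, and this is the v3 REST API as I understand it):

curl -k -u 'admin@internal:password' \
  'https://engine.example.com/ovirt-engine/api/vms/<VM_ID>/disks' | grep -i bootable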
--
Fernando Fuentes
ffuentes(a)txweather.org
http://www.txweather.org
oVirt 4.0.1 on FC23: problem mounting NFSv4 with fsid=0
by wodel youchi
Hi,
I wanted to test the new version of oVirt, 4.0.1, on FC23, using the
hosted-engine (HE) method and NFSv4 for storage.
When launching the deploy command, choosing nfsv4 and filling in the mount
point, the setup exits with the error: Connection to storage server failed.
In the vdsm log I have this error:
Could not connect to storageServer
Traceback (most recent call last):
...
OSError: [Errno 2] Mount of `noveria.wodel.wd:/engine_vm` at
`/rhev/data-center/mnt/noveria.wodel.wd:_engine__vm` does not exist
The df -h command shows that the NFS mount was done, but the device path
contains a double slash:
noveria.wodel.wd://vmengine 656G 111G 513G 18%
/rhev/data-center/mnt/noveria.wodel.wd:_vmengine
I am not sure of the problem, but I think it's the double slash in the
NFSv4 path when using fsid=0: noveria.wodel.wd://vmengine
If I use NFSv4 with fsid=0, the problem shows up.
If I don't use fsid=0 and instead mount the full path of the NFS export, the
problem disappears.
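For completeness, the two export styles I am comparing in /etc/exports; a sketch (paths and option lists are illustrative):

# with an NFSv4 pseudo-root (fsid=0): the case that fails
/exports            *(rw,fsid=0,crossmnt)
/exports/engine_vm  *(rw,anonuid=36,anongid=36,all_squash)
# exporting the full path directly, without fsid=0: the case that works
/exports/engine_vm  *(rw,anonuid=36,anongid=36,all_squash)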
Regards.