[ovirt-users] [Users] 2 virtual monitors for Fedora guest

René Koch rkoch at linuxland.at
Thu Apr 10 10:48:33 EDT 2014


On 04/09/2014 12:57 PM, René Koch wrote:
> On 04/09/2014 11:24 AM, René Koch wrote:
>> Thanks a lot for testing.
>> Too bad that multiple monitors didn't work for you either.
>>
>> I'll test RHEL next - maybe this works better than Fedora...
>
> I just tested CentOS 6.5 with the Gnome desktop and 2 monitors aren't
> working there either.
> I can see 3 vdagent processes running in CentOS...

Short update from my side:
- RHEL 6.5 Workstation is working fine out of the box
- CentOS 6.5 is working now (the xorg-x11-drv-qxl package was not
installed in the CentOS guest)
- Fedora is still not working due to open bugs
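
In case someone wants to double-check the same thing, this is roughly what I
did in the CentOS guest (a rough sketch, assuming a yum-based EL6 guest with a
SPICE console):

    # inside the CentOS/RHEL guest
    rpm -q xorg-x11-drv-qxl spice-vdagent   # both should be installed
    yum install xorg-x11-drv-qxl            # this one was missing on my CentOS guest
    # restart X (or simply reboot) so the qxl driver gets loaded, then the
    # second monitor can be enabled from the "Display 2" entry in remote-viewer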

>
>>
>>
>> Regards,
>> René
>>
>>
>> On 04/08/2014 09:25 PM, Gianluca Cecchi wrote:
>>> Some preliminary tests on my side.
>>>
>>> oVirt 3.4 on Fedora 19 AIO.
>>> Datacenter and cluster configured at 3.4 compatibility level.
>>> Some relevant packages on it:
>>> libvirt-1.1.3.2-1.fc19.x86_64
>>> qemu-kvm-1.6.1-2.fc19.x86_64
>>> vdsm-4.14.6-0.fc19.x86_64
>>> spice-server-0.12.4-3.fc19.x86_64
>>>
>>> The guest is an updated Fedora 19 system based on the Blank template
>>> with OS=Linux, and has:
>>> xorg-x11-drv-qxl-0.1.1-3.fc19.x86_64
>>> spice-vdagent-0.14.0-5.fc19.x86_64
>>>
>>> Client is an updated Fedora 20 box with virt-viewer-0.6.0-1.fc20.x86_64
>>>
>>> If I select the "Single PCI" checkbox in the console options of the
>>> guest and connect from the Fedora 20 client, remote-viewer shows no
>>> option at all to open a second display and no new display is detected
>>> in the guest.
>>> And lspci in the guest indeed shows only one video controller.
>>>
>>> BTW: what is this option actually for, beyond what its name suggests?
>>>
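>>> For reference, one way to see how the displays are wired is to look at
>>> the libvirt domain XML on the host (read-only; "f19" is the VM name taken
>>> from the qemu command line below):
>>>
>>>   virsh -r dumpxml f19 | grep -B1 -A1 "type='qxl'"
>>>
>>> With "Single PCI" selected there should be a single QXL <video> device
>>> (possibly carrying the extra monitors as additional heads), while with it
>>> deselected there should be one QXL device per monitor - which would match
>>> the single vs. double video controller seen in lspci.
>>>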
>>> If I deselect the "Single PCI" checkbox I get the "Display 2" option
>>> in remote-viewer, but it is greyed out.
>>> No new monitor shows up in "Detect Displays" in the guest.
>>>
>>> In this last situation the host shows this qemu-kvm command line:
>>> qemu     16664     1 48 21:04 ?        00:02:42
>>> /usr/bin/qemu-system-x86_64 -machine accel=kvm -name f19 -S -machine
>>> pc-1.0,accel=kvm,usb=off -cpu Opteron_G3 -m 2048 -realtime mlock=off
>>> -smp 1,maxcpus=160,sockets=160,cores=1,threads=1 -uuid
>>> 55d8b95b-f420-4208-a2fb-5f370d05f5d8 -smbios
>>> type=1,manufacturer=oVirt,product=oVirt
>>> Node,version=19-8,serial=E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1,uuid=55d8b95b-f420-4208-a2fb-5f370d05f5d8
>>> -no-user-config -nodefaults -chardev
>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/f19.monitor,server,nowait
>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2014-04-08T19:04:45,driftfix=slew -no-shutdown -device
>>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -device
>>> usb-ccid,id=ccid0 -drive
>>> if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>>> file=/rhev/data-center/mnt/_data_DATA2/b24b94c7-5935-4940-9152-36ecd370ba7c/images/5e99a818-9fd1-47bb-99dc-50bd25374c2f/a2baa1e5-569f-4081-97a7-10ec2a20daab,if=none,id=drive-virtio-disk0,format=raw,serial=5e99a818-9fd1-47bb-99dc-50bd25374c2f,cache=none,werror=stop,rerror=stop,aio=threads
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>> -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:01:55,bus=pci.0,addr=0x3
>>> -chardev spicevmc,id=charsmartcard0,name=smartcard -device
>>> ccid-card-passthru,chardev=charsmartcard0,id=smartcard0,bus=ccid0.0
>>> -chardev
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.com.redhat.rhevm.vdsm,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>> -chardev
>>> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.org.qemu.guest_agent.0,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>>> -spice
>>> tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
>>> -k en-us -device
>>> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2
>>>
>>> -device
>>> qxl,id=video1,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x8
>>> -device AC97,id=sound0,bus=pci.0,addr=0x4 -device
>>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
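>>>
>>> (Note the two QXL devices in that command line: qxl-vga "video0" at
>>> addr=0x2 and qxl "video1" at addr=0x8 - they show up as the two display
>>> controllers in the lspci output below.)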
>>>
>>> On guest:
>>> [root@localhost ~]# lspci
>>> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma]
>>> (rev 02)
>>> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton
>>> II]
>>> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE
>>> [Natoma/Triton II]
>>> 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB
>>> [Natoma/Triton II] (rev 01)
>>> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>>> 00:02.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual
>>> graphic card (rev 03)
>>> 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
>>> 00:04.0 Multimedia audio controller: Intel Corporation 82801AA AC'97
>>> Audio Controller (rev 01)
>>> 00:05.0 Communication controller: Red Hat, Inc Virtio console
>>> 00:06.0 SCSI storage controller: Red Hat, Inc Virtio block device
>>> 00:07.0 RAM memory: Red Hat, Inc Virtio memory balloon
>>> 00:08.0 Display controller: Red Hat, Inc. QXL paravirtual graphic card
>>> (rev 03)
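>>>
>>> (A quick way to see what X itself makes of the second device is to run
>>> something like "DISPLAY=:0 xrandr -q" in the guest - just a sanity check,
>>> assuming an X session is running on display :0.)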
>>>
>>> See here Xorg.0.log generated on guest:
>>> https://drive.google.com/file/d/0BwoPbcrMv8mvTm9VbE53ZmVKcVk/edit?usp=sharing
>>>
>>> In particular I see many entries like these in it:
>>> [    64.234] (II) qxl(0): qxl_xf86crtc_resize: Placeholder resize
>>> 1024x768
>>> [    87.280] qxl_surface_create: Bad bpp: 1 (1)
>>> [    87.280] qxl_surface_create: Bad bpp: 1 (1)
>>> [    87.949] qxl_surface_create: Bad bpp: 1 (1)
>>> [   110.469] qxl_surface_create: Bad bpp: 1 (1)
>>> [   110.478] qxl_surface_create: Bad bpp: 1 (1)
>>> [   146.096] - 0th attempt
>>> [   146.096] - OOM at 962 511 24 (= 1474746 bytes)
>>> [   146.096] Cache contents:  null null null null null null null null
>>> null null null null null null null null null null null null null null
>>> null null null null null null null null null null null null null null
>>> null null null null null null null null null null null null null null
>>> 1008  997 1007 1005 1018 1003 1009 1011 1001 1012 1019 1016 1006 1013
>>>     total: 14
>>> [   146.107] - 1th attempt
>>> [   146.107] - OOM at 962 511 24 (= 1474746 bytes)
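>>>
>>> The qxl-related errors are easy to pull out of the guest log with e.g.:
>>>
>>>   grep -E "Bad bpp|OOM|qxl" /var/log/Xorg.0.log
>>>
>>> The repeated OOM entries look like the qxl driver running out of surface
>>> memory when it tries to bring up the additional head.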
>>>
>>> Gianluca
>>>

