Still no luck getting the GTX 1080 to enable inside the VM. I can see the code being generated in the XML by the hook, but I still get error code 43. Someone mentioned doing it with a UEFI BIOS and that worked for them, so when I get back from work today, perhaps I'll give that a try.
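
In case it helps anyone reproduce this, the way I've been checking the generated XML is a read-only virsh dump while the VM is running ("MyWin10VM" is just a placeholder for whatever the VM is called):

# read-only dump of the running domain's XML, then show the masking entries
virsh -r dumpxml MyWin10VM | grep -B2 -A2 'vendor_id\|hidden'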

On Mon, Mar 18, 2019, 6:10 AM Darin Schmidt <darinschmidt@gmail.com> wrote:
I have gotten the system to see the card; it's in Device Manager. The problem seems to be that I cannot use it in the VM because, from what I have been finding out, it gets an error code 43. Nvidia drivers disable the card if they detect it's being used in a VM. I have found some code to hook it into the XML with a before_vm_start hook.

99_mask_kvm
#!/usr/bin/python2
# vdsm hook, placed in /usr/libexec/vdsm/hooks/before_vm_start/
# Masks the hypervisor from the guest so the Nvidia driver
# does not bail out with error code 43.

import hooking

# Read the domain XML that vdsm is about to hand to libvirt.
domxml = hooking.read_domxml()

# Add <vendor_id state='on' value='1234567890ab'/> under <hyperv>
# to override the hypervisor vendor ID the guest sees.
hyperv = domxml.getElementsByTagName('hyperv')[0]
vendor_id = domxml.createElement('vendor_id')
vendor_id.setAttribute('state', 'on')
vendor_id.setAttribute('value', '1234567890ab')
hyperv.appendChild(vendor_id)

# Add <kvm><hidden state='on'/></kvm> under <features> so the
# guest cannot detect the KVM signature.
features = domxml.getElementsByTagName('features')[0]
kvm = domxml.createElement('kvm')
hidden = domxml.createElement('hidden')
hidden.setAttribute('state', 'on')
kvm.appendChild(hidden)
features.appendChild(kvm)

# Write the modified XML back for vdsm to use.
hooking.write_domxml(domxml)
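
For reference, when the hook runs, the XML generated at VM start should end up with something like this under <features> (values matching the script above):

<features>
  <hyperv>
    <vendor_id state='on' value='1234567890ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>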


I am currently reinstalling the drivers to see if this helps. 

kvm=off and vendor_id are now in the XML that gets generated when the VM is started. I'm going off of examples I'm finding online. Perhaps I just need to use 10de there instead of the generic value others are using.
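
If that's the case, I guess the change would just be the value line in the hook, something like this (10de being the Nvidia PCI vendor ID, though I'm not sure the hyperv vendor_id actually has to match it):

vendor_id.setAttribute('value', '10de')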

On Mon, Mar 18, 2019 at 6:02 AM Nisim Simsolo <nsimsolo@redhat.com> wrote:
Hi

Vendor ID of Nvidia is usually 10de.
You can locate the 'vendor ID:product ID' pair by running the lspci command, for example:
[root@intel-vfio ~]# lspci -Dnn | grep -i nvidia
0000:03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [Quadro K4200] [10de:11b4] (rev a1)
0000:03:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
[root@intel-vfio ~]#

In this example, the vendor ID of the VGA controller is 10de and the product ID is 11b4.

Please bear in mind that you need to enable IOMMU, add pci-stub (to prevent the host driver from claiming the GPU device), and blacklist the default nouveau driver on the host kernel command line.
To do that:
1. Edit the host's /etc/sysconfig/grub and add the following to GRUB_CMDLINE_LINUX:
  • intel_iommu=on or amd_iommu=on
  • pci-stub.ids=10de:11b4,10de:0e0a
  • rdblacklist=nouveau
2. Regenerate the boot loader configuration using the grub2-mkconfig command (on a UEFI host the target is typically /etc/grub2-efi.cfg):
# grub2-mkconfig -o /etc/grub2.cfg
3. Reboot the host.
4. Verify configuration:
[root@intel-vfio ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-957.5.1.el7.x86_64 root=/dev/mapper/vg0-lv_root ro crashkernel=auto rd.lvm.lv=vg0/lv_root rd.lvm.lv=vg0/lv_swap rhgb quiet pci-stub.ids=10de:11b4,10de:0e0a intel_iommu=on rdblacklist=nouveau LANG=en_US.UTF-8
[root@intel-vfio ~]# 
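
You can also verify that the stub driver actually claimed the GPU (using the 03:00.0 address from the lspci example above; yours may differ):

[root@intel-vfio ~]# lspci -nnk -s 03:00.0 | grep 'in use'
	Kernel driver in use: pci-stub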


After running this, you should be able to pass the GPU through to the VM.
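
For reference, the device ends up as a <hostdev> entry in the domain XML, roughly like this (again using the 03:00.0 address from the example above; managed='yes' lets libvirt bind and unbind the device for you):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>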

BTW, why are you using engine-config and not doing it from the oVirt UI or with the virsh edit command?

Thanks


On Mon, Mar 18, 2019 at 1:52 AM Darin Schmidt <darinschmidt@gmail.com> wrote:
Hello all, I'm trying to figure out how to configure the custom properties to enable my NVIDIA card to work in the VM. It's my understanding that the drivers don't work because they detect the card is in a VM.

I'm trying to do something like this:

engine-config -s UserDefinedVMProperties="kvmhidden=^(true|false)$;{type=vendor_id;state={^(on|off)$;value=^([0-9])$}}"


But that's clearly not working. If I do this:

engine-config -s UserDefinedVMProperties="kvmhidden=^(true|false)$;vendor_id={state=^(on|off)$;value=^([0-9])$}"


It works, but the options are messed up. I'm not sure how to find the correct syntax to get this to work. Would appreciate any advice.
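
Maybe each custom property just takes a single name=regex pair and the nested braces are the problem? Something like this (the 12-hex-digit value regex is just my guess at what a vendor_id string looks like):

engine-config -s UserDefinedVMProperties='kvmhidden=^(true|false)$;vendor_id=^[0-9a-f]{12}$'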


--
Nisim Simsolo
QE - Testing Engineer
IRC: nsimsolo
int phone - 8272305
mobile - 054-4779934