oVirt SHE 4.2.8 and CentOS 7.6
by Vrgotic, Marko
Hi all,
Looking at 4.2.8, the supported host version is EL 7.5.
Does that mean it is not recommended to upgrade an already running SHE host beyond EL 7.5, or that it should not be deployed on an EL version lower than 7.5?
Thank you.
Marko Vrgotic
5 years, 10 months
Re: converting oVirt host to be able to host a Self-Hosted Environment
by Simone Tiraboschi
On Mon, Jan 28, 2019 at 1:31 PM Jarosław Prokopowski <jprokopowski(a)gmail.com>
wrote:
> Thanks Simone
>
> I forgot to mention that this is to have second node that is able to host
> the self-hosted engine for HA purposes.
> There is already one node that hosts the self-hosted engine and I want to
> have second one.
> Will it work in this case?
>
No, in this case it's by far easier:
you should just set that host into maintenance mode from the engine and
remember to switch on the hosted-engine deployment flag when you go to
reinstall the host.
>
>
>
> On Mon, Jan 28, 2019 at 1:25 PM Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
>
>>
>>
>> On Mon, Jan 28, 2019 at 12:21 PM Jarosław Prokopowski <
>> jprokopowski(a)gmail.com> wrote:
>>
>>> Hi Guys,
>>>
>>> Is there a way to convert existing oVirt node (CentOS) to be able to
>>> host a Self-Hosted Environment?
>>> If so how can I do that?
>>>
>>
>> Hi, the best option is with backup and restore.
>> Basically you should take a backup of your current engine with
>> engine-backup
>> and then deploy hosted-engine with hosted-engine --deploy
>> --restore-from-file=mybackup.tar.gz
>>
>>
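>> The two commands above can be sketched as a small script (hedged: the
>> filename and log path are illustrative, and the exact options should be
>> checked against `engine-backup --help` on your version; step 1 runs on the
>> current engine VM, step 2 on the host that will run the new deployment):

```shell
# Hedged sketch of the backup-and-restore flow described above.
BACKUP=engine-backup-$(date +%Y%m%d).tar.gz

# 1. On the current engine VM: take a full backup (configuration + database).
if command -v engine-backup >/dev/null 2>&1; then
    engine-backup --mode=backup --scope=all --file="$BACKUP" --log=engine-backup.log
fi

# 2. Copy $BACKUP to the target host, then redeploy, restoring from it.
if command -v hosted-engine >/dev/null 2>&1; then
    hosted-engine --deploy --restore-from-file="$BACKUP"
fi
```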
>>>
>>> Thanks
>>> Jaroson
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZVY4SD5RTNV...
>>>
>>
5 years, 11 months
Ovirt snapshot issues
by Alex K
Hi all,
I have ovirt 4.2.7, self-hosted on top of gluster, with two servers.
I have a specific VM which has encountered some snapshot issues.
The engine lists 4 snapshots, and when trying to delete one of them I get
"General command validation failure".
The VM was being backed up periodically by a python script which was
creating a snapshot -> clone -> export -> delete clone -> delete snapshot.
There were times when the VM was complaining about illegal snapshots after
such backup runs, and I had to delete the illegal snapshot references from
the engine DB (following some steps found online); otherwise I could not
start the VM once it was shut down. It seems though that this is not a
clean process and leaves the underlying image of the VM in an inconsistent
state with regard to its snapshots, as when checking the backing chain of
the image file I get:
*b46d8efe-885b-4a68-94ca-e8f437566bee* (active VM) ->
*b7673dca-6e10-4a0f-9885-1c91b86616af* ->
*4f636d91-a66c-4d68-8720-d2736a3765df* ->
6826cb76-6930-4b53-a9f5-fdeb0e8012ac ->
61eea475-1135-42f4-b8d1-da6112946bac ->
*604d84c3-8d5f-4bb6-a2b5-0aea79104e43* ->
1e75898c-9790-4163-ad41-847cfe84db40 ->
*cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43* ->
3f54c98e-07ca-4810-82d8-cbf3964c7ce5 (raw image)
The bold ones are the ones shown in the engine GUI. The VM runs normally
without issues.
I was wondering if I could use qemu-img commit to consolidate and remove the
snapshots that are no longer referenced by the engine. Any ideas from your
side?
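As a sketch of what `qemu-img commit` does to a chain mechanically (hedged:
the paths below are throwaway /tmp images, not the real volume paths under
/rhev/data-center/...; and note that committing behind oVirt's back leaves
the engine DB still referencing the merged-away volumes, so only try this on
a copy of the chain, with the VM shut down and a full backup at hand):

```shell
# Demo of qemu-img commit collapsing a snapshot chain, on throwaway images.
if command -v qemu-img >/dev/null 2>&1; then
    rm -rf /tmp/chain-demo && mkdir /tmp/chain-demo && cd /tmp/chain-demo
    qemu-img create -f raw base.raw 1M
    qemu-img create -f qcow2 -o backing_file=base.raw,backing_fmt=raw snap1.qcow2
    qemu-img create -f qcow2 -o backing_file=snap1.qcow2,backing_fmt=qcow2 snap2.qcow2

    # the same view used on the real image: the whole backing chain
    qemu-img info --backing-chain snap2.qcow2

    # commit merges a layer down into its backing file; repeating it from
    # the top collapses the chain into the raw base
    qemu-img commit snap2.qcow2
    qemu-img commit snap1.qcow2
fi
```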
Thanx,
Alex
5 years, 11 months
Nvidia Grid K2 and oVirt GPU Passthrough
by jarheadx@hotmail.de
Hello,
I have tried every document I found to get GPU passthrough with an Nvidia Grid K2 working on oVirt, but I failed.
I am confident that it has to run with my hardware, but I have no ideas anymore. Maybe the community can help me.
Currently my VMs (Win7 and Win10) crash or hang on startup.
I have done these steps:
1. lspci -nnk
.........
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] [10de:11bf] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:100a]
Kernel driver in use: pci-stub
Kernel modules: nouveau
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] [10de:11bf] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:100a]
Kernel driver in use: pci-stub
Kernel modules: nouveau
.........
2. /etc/default/grub
.........
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet pci-stub.ids=10de:11bf rdblacklist=nouveau amd_iommu=on"
........
3.
Added line
"options vfio-pci ids=10de:11bf"
to /etc/modprobe.d/vfio.conf
dmesg | grep -i vfio ->
[ 11.202767] VFIO - User Level meta-driver version: 0.3
[ 11.315368] vfio_pci: add [10de:11bf[ffff:ffff]] class 0x000000/00000000
[ 1032.582778] vfio_ecap_init: 0000:07:00.0 hiding ecap 0x19@0x900
[ 1046.232009] vfio-pci 0000:07:00.0: irq 61 for MSI/MSI-X
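Two things worth verifying after those steps (a hedged sketch; the device
addresses 07:00.0/08:00.0 come from the lspci output above): that the
/etc/default/grub change was actually regenerated into the boot config
(grub2-mkconfig -o /boot/grub2/grub.cfg, dracut -f, then reboot), and that
each K2 GPU sits in an IOMMU group that can be handed to vfio-pci in full,
since every device in a group must be passed through together:

```shell
# List IOMMU groups and their devices. If the loop prints nothing, the IOMMU
# is not enabled (check that amd_iommu=on really made it onto the kernel
# command line: cat /proc/cmdline).
for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue             # glob did not match: no IOMMU groups
    g=${d#*/iommu_groups/}; g=${g%%/*}  # group number taken from the path
    printf 'IOMMU group %s: %s\n' "$g" "${d##*/}"
done | sort -V
```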
-------------------------------------------------------------------------
After assigning the GPU to the VM, the OS hangs on startup
Any ideas?
Best Regards
Reza
5 years, 11 months
Using oVirt for Desktop Virtualization
by jarheadx@hotmail.de
Hey guys,
I want to use oVirt as my desktop environment, so I bought a GRID K2 for GPU passthrough. But I still get lag on video streaming, for example HD videos.
I used SPICE, VNC and Remote Desktop, but I guess they are not good at video compression over WAN or LAN.
Is there any other remote app that would let me use oVirt as my virtual desktop with smooth and reliable video output?
Many Thanks
Volrath
5 years, 11 months
Re: ovirt 4.2 HCI rollout
by Dominik Holler
On Tue, 22 Jan 2019 11:15:12 +0000
Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
> Thanks for your reply,
>
> getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> 10.1.31.20
>
> attached you'll find the logs.
>
Thanks, to my eyes this looks like a bug.
I tried to isolate the relevant lines in the attached playbook.
Markus, would you be so kind as to check whether ovirt-4.2.8 works for you?
> ________________________________
> From: Dominik Holler <dholler(a)redhat.com>
> Sent: Monday, 21 January 2019 17:52:35
> To: Markus Schaufler
> Cc: users(a)ovirt.org; Simone Tiraboschi
> Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
>
> Would you please share the related ovirt-host-deploy-ansible-*.log
> stored on the host in /var/log/ovirt-hosted-engine-setup ?
>
> Would you please also share the output of
> getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> if executed on this host?
>
>
> On Mon, 21 Jan 2019 13:37:53 -0000
> "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
>
> > Hi,
> >
> > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 CentOS VMs by
> > following
> > https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> > gluster deployment was successful, but at HE deployment "stage 5" I
> > got the following error:
> >
> > [ INFO ] TASK [Reconfigure OVN central address]
> > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > an option with an undefined variable. The error was: 'dict object'
> > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > in
> > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > line 522, column 5, but may\nbe elsewhere in the file depending on
> > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > #
> > https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> > - name: Reconfigure OVN central address\n ^ here\n"}
> >
> >
> > /var/log/messages:
> > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > name=periodic/1 running <Task discardable <Operation
> > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > disabled state
> > Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
> > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > disabled state
> > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > [1548076206.9177] device (vnet0): state change: disconnected ->
> > unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > device (vnet0): released from master device ovirtmgmt Jan 21
> > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > 2019-01-21 13:10:07.126+0000: 2704: error :
> > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > `iptables -h' or 'iptables --help' for more information. Jan 21
> > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto
> > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > ip6tables v 1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > `iptables -h' or 'iptables --help' for more information. Jan 21
> > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > failed: iptables: Bad rule (does a matching rule exist in that
> > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > failed: ip6tables: Bad rule (does a matching rule exist in that
> > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > WARN
> > File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > to remove a non existing network:
> > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > 14:10:07 HCI01 vdsm[3650]: WARN
> > File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > already removed
> >
> > any ideas on that?
> > _______________________________________________
> > Users mailing list -- users(a)ovirt.org
> > To unsubscribe send an email to users-leave(a)ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/ List
> > Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHF...
>
5 years, 11 months
Re: Nvidia Grid K2 and Ovirt VGPU
by Michal Skrivanek
> On 11 Jan 2019, at 18:11, R A <jarheadx(a)hotmail.de> wrote:
>
> Hello Michal,
>
> many thanks for your response.
>
> Are you sure that there is no vGPU support? Please check lines below
Hi Reza,
it’s Nvidia’s doc; you would have to talk to them if you have more questions. All I understand from that doc (it’s just a little further down on the same page) is that they explicitly say that K1 and K2 do not support vGPU. This might be a licensing limitation, or temporary, or anything else; I don’t know.
Thanks,
michal
>
> <image.png>
>
>
> and here:
>
> <image.png>
>
> The NVIDIA Grid K2 is GRID capable.
>
> Many thanks!
> Reza
>
> From: Michal Skrivanek <michal.skrivanek(a)redhat.com>
> Sent: Friday, 11 January 2019 12:04
> To: jarheadx(a)hotmail.de
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] Nvidia Grid K2 and Ovirt VGPU
>
> > On 10 Jan 2019, at 02:14, jarheadx(a)hotmail.de wrote:
> >
> > Hello,
> >
> > sry for my bad english.
> >
> > I have just bought the NVIDIA Grid K2 and want to pass the vGPU through to several VMs, but I am not able to manage this.
> > I need the driver for guest and host, but cannot download it. On the official site of Nvidia there are only drivers for XenServer and VMware.
> >
> > This Nvidia site says that there is support for oVirt (RHEV) and K2 ( https://docs.nvidia.com/grid/4.6/grid-vgpu-release-notes-red-hat-el-kvm/i... )
>
> The page says K1 and K2 do not support vGPU so you could only do Host
> Device Passthrough of the whole GPU
>
> >
> > Can someone please tell me what is going on? Is there any possibility to run Grid K2 in Ovirt with vGpu (or Host Device Passtrough?)
> >
> > Furthermore, if I do "vdsm-client Host hostdevListByCaps" on the host, I do not get any mdev markers like described here: https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_H...
>
> That’s only for vGPU, which it doesn’t support, so you do not see any mdev
>
> >
> > Many Thanks
> > Volrath
> > _______________________________________________
> > Users mailing list -- users(a)ovirt.org
> > To unsubscribe send an email to users-leave(a)ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FBIEV6OCU4V...
5 years, 11 months