oVirt nVidia vGPU driver questions
by femi adegoke
I'm on oVirt 4.2.6.
1) If I have a Tesla P4 (Pascal) GPU (non-GRID), do I use the nVidia vGPU driver for RHEL KVM 6.2 or the Linux KVM vGPU 6.2 driver?
2) If I have a GRID K1 or K2, I assume the correct driver would be GRID 4.7?
3) Must the vGPU driver be installed on the host & in the VM?
4) Must the vGPU driver be installed before I can run "yum -y install vdsm-hook-vfio-mdev"?
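A rough way to sanity-check the host side before worrying about the hook (a sketch, assuming an NVIDIA vGPU host driver that registers mdev types under sysfs; the module name is the one the vGPU-for-KVM packages normally ship):
# is the vGPU-enabled kernel module loaded?
lsmod | grep nvidia_vgpu_vfio
# if the host driver is working, the supported mdev (vGPU) types show up here:
ls /sys/class/mdev_bus/*/mdev_supported_types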
5 years, 7 months
oVirt 4.3.2 - Cannot update Host via UI
by Strahil Nikolov
Hello guys,
I have the following issue after I successfully updated my engine from 4.3.1 to 4.3.2 - I cannot update any host via the UI.
The event log shows the start of the update, but there is no process running on the host, yum.log is not updated, and the engine log doesn't show anything meaningful.
Any hint where to look?
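For anyone hitting the same thing, the usual places to check would be something like this (a sketch; standard 4.3 log locations as far as I know):
# on the engine: the engine log plus the host-deploy/upgrade logs
tail -f /var/log/ovirt-engine/engine.log
ls -ltr /var/log/ovirt-engine/host-deploy/
# on the host: package transactions and vdsm
tail /var/log/yum.log
tail -f /var/log/vdsm/vdsm.log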
Thanks in advance.
Best Regards,
Strahil Nikolov
5 years, 7 months
How to fix ovn apparent inconsistency?
by Gianluca Cecchi
Hello,
passing from the old manual OVN setup to the current one in 4.3.1, it seems I now have some problems with OVN.
I cannot assign an OVN network to a VM (powered on or off makes no difference).
When I add/edit a vNIC, the OVN networks are not among the possible choices.
The environment is composed of three hosts and one engine (external, on vSphere).
The management network has over time been configured on a network named ovirtmgmntZ2Z3.
On the engine it seems there are 2 logical switches for every defined OVN network (ovn192 and ovn172).
Below is some command output, in case an inconsistency has remained that I can purge.
Thanks in advance.
Gianluca
- On manager ovmgr1:
[root@ovmgr1 ~]# ovs-vsctl show
eae54ff9-b86c-4050-8241-46f44336ba94
ovs_version: "2.10.1"
[root@ovmgr1 ~]#
[root@ovmgr1 ~]# ovn-nbctl show
switch 32367d8a-460f-4447-b35a-abe9ea5187e0 (ovn192)
port affc5570-3e5a-439c-9fdf-d75d6810e3a3
addresses: ["00:1a:4a:17:01:73"]
port f639d541-2118-4c24-b478-b7a586eb170c
addresses: ["00:1a:4a:17:01:75"]
switch 6110649a-db2b-4de7-8fbc-601095cfe510 (ovn192)
switch 64c4c17f-cd67-4e29-939e-2b952495159f (ovn172)
port 32c348d9-12e9-4bcf-a43f-69338c887cfc
addresses: ["00:1a:4a:17:01:72 dynamic"]
port 3c77c2ea-de00-43f9-a5c5-9b3ffea5ec69
addresses: ["00:1a:4a:17:01:74 dynamic"]
switch 04501f6b-3977-4ba1-9ead-7096768d796d (ovn172)
port 0a2a47bc-ea0d-4f1d-8f49-ec903e519983
addresses: ["00:1a:4a:17:01:65 dynamic"]
port 8fc7bed4-7663-4903-922b-05e490c6a5a1
addresses: ["00:1a:4a:17:01:64 dynamic"]
port f2b64f89-b719-484c-ac02-2a1ac8eaacdb
addresses: ["00:1a:4a:17:01:59 dynamic"]
port f7389c88-1ea1-47c2-92fd-6beffb2e2190
addresses: ["00:1a:4a:17:01:58 dynamic"]
[root@ovmgr1 ~]#
- On host ov200 (10.4.192.32 on ovirtmgmntZ2Z3):
[root@ov200 ~]# ovs-vsctl show
ae0a1256-7250-46a2-a1b6-8f0ae6105c20
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "ovn-ddecf0-0"
Interface "ovn-ddecf0-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.33"}
Port "ovn-b8872a-0"
Interface "ovn-b8872a-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.34"}
ovs_version: "2.10.1"
[root@ov200 ~]#
- On host ov300 (10.4.192.33 on ovirtmgmntZ2Z3):
[root@ov300 ~]# ovs-vsctl show
f1a41e9c-16fb-4aa2-a386-2f366ade4d3c
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "ovn-b8872a-0"
Interface "ovn-b8872a-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.34"}
Port "ovn-1dce5b-0"
Interface "ovn-1dce5b-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.32"}
ovs_version: "2.10.1"
[root@ov300 ~]#
- On host ov301 (10.4.192.34 on ovirtmgmntZ2Z3):
[root@ov301 ~]# ovs-vsctl show
3a38c5bb-0abf-493d-a2e6-345af8aedfe3
Bridge br-int
fail_mode: secure
Port "ovn-1dce5b-0"
Interface "ovn-1dce5b-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.32"}
Port "ovn-ddecf0-0"
Interface "ovn-ddecf0-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.33"}
Port br-int
Interface br-int
type: internal
ovs_version: "2.10.1"
[root@ov301 ~]#
In the web admin GUI, under Network -> Networks:
- ovn192
Id: 8fd63a10-a2ba-4c56-a8e0-0bc8d70be8b5
VDSM Name: ovn192
External ID: 32367d8a-460f-4447-b35a-abe9ea5187e0
- ovn172
Id: 7546d5d3-a0e3-40d5-9d22-cf355da47d3a
VDSM Name: ovn172
External ID: 64c4c17f-cd67-4e29-939e-2b952495159f
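For what it's worth, the External IDs above match only one switch of each pair (32367d8a... for ovn192 and 64c4c17f... for ovn172), so the other two switches look like leftovers from the old manual setup. A hedged sketch of how a stale duplicate could be inspected and then removed, once it is confirmed that neither the engine nor any VM still uses it (the UUID below is just the unreferenced ovn192 switch as an example):
# inspect the suspect switch and its ports first
ovn-nbctl show 6110649a-db2b-4de7-8fbc-601095cfe510
# delete the duplicate logical switch (this removes its ports as well)
ovn-nbctl ls-del 6110649a-db2b-4de7-8fbc-601095cfe510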
5 years, 7 months
oVirt Node 4.3 Master on fc28
by femi adegoke
4.3 Master is looking good!!
A few questions:
- In Step #2 (FQDNs): for a 3-host cluster, are we to add the 2nd & 3rd hosts?
- In Step #5 (Bricks):
What effect does the RAID setting have?
Is "Enable Dedupe & Compression" new?
Configure LV Cache: what do we enter for an SSD? If my disks are already all SSD, do I still benefit from using this? (See the sketch just below.)
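For context, the LV Cache step appears to set up LVM's dm-cache on the fast device; a minimal hedged sketch of the roughly equivalent manual commands (the device /dev/sdX, the VG/LV names gluster_vg and gluster_thinpool, and the 100G size are all hypothetical placeholders):
# carve a cache pool out of the SSD/NVMe device
lvcreate --type cache-pool -L 100G -n lv_cache gluster_vg /dev/sdX
# attach it to the thin pool that backs the gluster bricks
lvconvert --type cache --cachepool gluster_vg/lv_cache gluster_vg/gluster_thinpool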
5 years, 7 months
Expand existing gluster storage in ovirt 4.2/4.3
by adrianquintero@gmail.com
Hello,
I have a 3-node oVirt 4.3 cluster that I've set up using Gluster (hyperconverged setup).
I need to increase the amount of storage and compute, so I added a 4th host (server4.example.com). Is it possible to expand the number of bricks (storage) in the "data" volume?
I did some research, and this old post caught my eye: "https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-..."
So the question is: is that approach feasible? And is it even possible from an oVirt point of view?
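For reference, the conventional way to grow a replica 3 arbiter 1 volume is a whole subvolume at a time, i.e. bricks added in multiples of three (as far as I recall, the last brick of each added set becomes the arbiter; check the Gluster docs for your version). A hedged sketch with hypothetical brick paths, placing the new data bricks on server4 and an existing host and the new arbiter on another existing host:
gluster volume add-brick data \
  server4.example.com:/gluster_bricks/data2/data \
  server1.example1.com:/gluster_bricks/data2/data \
  server2.example.com:/gluster_bricks/data2/data
gluster volume rebalance data start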
---------------------------
My gluster volume:
---------------------------
Volume Name: data
Type: Replicate
Volume ID: 003ffea0-b441-43cb-a38f-ccdf6ffb77f8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: server1.example1.com:/gluster_bricks/data/data
Brick2: server2.example.com:/gluster_bricks/data/data
Brick3: server3.example.com:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Thanks.
5 years, 7 months
Re: Vagrant Plugin
by Jeremy Tourville
Thanks for your reply Luca,
I confirmed the cluster name; it is "Default". I even ran the script again and made sure the D in Default was uppercase, since Linux is case-sensitive. It still fails in the same way as before.
________________________________
From: Luca 'remix_tj' Lorenzetto <lorenzetto.luca(a)gmail.com>
Sent: Saturday, March 30, 2019 3:34 AM
To: Jeremy Tourville
Subject: Re: [ovirt-users] Vagrant Plugin
On Fri, 29 Mar 2019 at 19:12, Jeremy Tourville <Jeremy_Tourville(a)hotmail.com> wrote:
I am having some trouble getting the oVirt Vagrant plugin working. I was able to get Vagrant installed and could even run the example scenario listed in the blog: https://www.ovirt.org/blog/2017/02/using-oVirt-vagrant.html
My real issue is getting a VM generated by the SecGen project (https://github.com/SecGen/SecGen) to come up. If I use the VirtualBox provider everything works as expected and I can launch the VM with vagrant up. If I try to run it using the oVirt provider, it fails.
I had originally posted this over in the Google Groups / Vagrant forums and it was suggested I take it to oVirt. Hopefully somebody here has some insights.
The process fails quickly with the following output. Can anyone give some suggestions on how to fix the issue? I have also included a copy of my vagrantfile below. Thanks in advance for your assistance!
***Output***
Bringing machine 'escalation' up with 'ovirt4' provider...
==> escalation: Creating VM with the following settings...
==> escalation: -- Name: SecGen-default-scenario-escalation
==> escalation: -- Cluster: default
==> escalation: -- Template: debian_stretch_server_291118
==> escalation: -- Console Type: spice
==> escalation: -- Memory:
==> escalation: ---- Memory: 512 MB
==> escalation: ---- Maximum: 512 MB
==> escalation: ---- Guaranteed: 512 MB
==> escalation: -- Cpu:
==> escalation: ---- Cores: 1
==> escalation: ---- Sockets: 1
==> escalation: ---- Threads: 1
==> escalation: -- Cloud-Init: false
==> escalation: An error occured. Recovering..
==> escalation: VM is not created. Please run `vagrant up` first.
/home/secgenadmin/.vagrant.d/gems/2.4.4/gems/ovirt-engine-sdk-4.0.12/lib/ovirtsdk4/service.rb:52:in `raise_error': Fault reason is "Operation Failed". Fault detail is "Entity not found: Cluster: name=default". HTTP response code is 404.
Hello Jeremy,
Looks like you have no cluster called "default" in your setup. Edit your Vagrantfile according to your setup.
Luca
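For what it's worth, a minimal sketch of the relevant provider block, following the vagrant-ovirt4 options shown in the oVirt blog post linked above (the URL and credentials below are placeholders; the template name is taken from the output above, and the cluster string has to match the engine's cluster name exactly, since the lookup appears to be case-sensitive):
config.vm.provider :ovirt4 do |ovirt|
  ovirt.url      = 'https://engine.example.com/ovirt-engine/api'  # placeholder
  ovirt.username = 'admin@internal'                               # placeholder
  ovirt.password = 'secret'                                       # placeholder
  ovirt.cluster  = 'Default'   # 'default' != 'Default' on the engine side
  ovirt.template = 'debian_stretch_server_291118'
end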
5 years, 7 months
genev_sys_6081 left/entered promiscuous mode
by Arif Ali
Hi all,
I was wondering if anyone has seen the messages below in the syslog. I see
this set every second, I am not sure why, and it is filling the logs quite
a lot.
Mar 22 23:08:08 <snip> kernel: device genev_sys_6081 left promiscuous
mode
Mar 22 23:08:08 <snip> kernel: i40e 0000:3d:00.0 eno3: port 6081 already
offloaded
Mar 22 23:08:08 <snip> kernel: i40e 0000:3d:00.1 eno4: port 6081 already
offloaded
Mar 22 23:08:08 <snip> kernel: i40e 0000:3d:00.2 eno5: port 6081 already
offloaded
Mar 22 23:08:08 <snip> kernel: i40e 0000:3d:00.3 eno6: port 6081 already
offloaded
Mar 22 23:08:08 <snip> kernel: device genev_sys_6081 entered promiscuous
mode
Mar 22 23:08:09 <snip> kernel: i40e 0000:3d:00.0 eno3: UDP port 6081 was
not found, not deleting
Mar 22 23:08:09 <snip> kernel: i40e 0000:3d:00.1 eno4: UDP port 6081 was
not found, not deleting
Mar 22 23:08:09 <snip> kernel: i40e 0000:3d:00.2 eno5: UDP port 6081 was
not found, not deleting
Mar 22 23:08:09 <snip> kernel: i40e 0000:3d:00.3 eno6: UDP port 6081 was
not found, not deleting
Mar 22 23:08:09 <snip> kernel: device genev_sys_6081 left promiscuous
mode
--
regards,
Arif Ali
5 years, 7 months