TASK [ovirt.hosted_engine_setup : Get local VM IP] Problem
by Emre Özkan
hi guys,
I'm stuck waiting at this step when I try to install with the hosted-engine --deploy command. Do you have any knowledge about this? My environment: Hyper-V running a nested VM on Azure, with RHEV running on Hyper-V.
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Create local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
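While it sits on "Get local VM IP", the deploy is polling libvirt's DHCP leases for the local engine VM, so these are the things I would check on the host (a rough sketch; the paths are the usual libvirt ones and the sample lease values are made up, verify them on your setup):

```shell
# 1) Nested virtualization must actually be exposed to the Hyper-V guest;
#    without /dev/kvm the local engine VM cannot boot at all.
[ -e /dev/kvm ] && echo "KVM available" || echo "no /dev/kvm - enable nested virt in Hyper-V"

# 2) Check the leases libvirt's dnsmasq hands out (normally under
#    /var/lib/libvirt/dnsmasq/*.leases). Each line is:
#    "<expiry-epoch> <mac> <ip> <hostname> <client-id>"
parse_lease_ip() {
    # print the IP column of every lease in the given file
    awk '{ print $3 }' "$1"
}

# Demo on a sample lease file (hypothetical values):
tmp=$(mktemp)
echo "1566400000 52:54:00:ab:cd:ef 192.168.122.57 HostedEngineLocal *" > "$tmp"
parse_lease_ip "$tmp"    # prints 192.168.122.57
rm -f "$tmp"
```

If no lease ever appears, the local VM likely never got far enough to DHCP, which usually points back to nested virtualization or the libvirt default network.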
4 years, 7 months
numa pinning and reserved hugepages (1G) Bug in scheduler calculation or decision ?
by Ralf Schenk
Hello List,
I ran into problems using NUMA pinning and reserved hugepages.
- My EPYC 7281 based servers (dual socket) have 8 NUMA nodes, each with
32 GB of memory, for a total of 256 GB of system memory.
- I'm using 192 x 1 GB hugepages, reserved on the kernel cmdline:
default_hugepagesz=1G hugepagesz=1G hugepages=192
This reserves 24 hugepages on each NUMA node.
I wanted to pin a MariaDB VM using 32 GB (Custom Property
hugepages=1048576) to NUMA nodes 0-3 of CPU socket 1. Pinning in the GUI
etc. was no problem.
When trying to start the VM, this can't be done, since oVirt claims that
the host can't fulfill the memory requirements - which is simply not
correct, since there were > 164 hugepages free.
It should have taken 8 hugepages from each of NUMA nodes 0-3 to fulfill
the 32 GB memory requirement.
I also freed the host completely of other VMs, but that didn't work
either.
Is it possible that the scheduler only takes into account the "free
memory" (as seen in numactl -H below), *not* the memory reserved by
hugepages, for its decisions? Since the host has only < 8 GB of free mem
per NUMA node, I can understand that the VM was not able to start under
that condition.
The VM is running and using 32 hugepages without pinning, but a warning
states: "VM dbserver01b does not fit to a single NUMA node on host
myhost.mydomain.de. This may negatively impact its performance. Consider
using vNUMA and NUMA pinning for this VM."
This is the NUMA hardware layout and hugepages usage now, with other VMs
running.
From cat /proc/meminfo:
HugePages_Total: 192
HugePages_Free: 160
HugePages_Rsvd: 0
HugePages_Surp: 0
I can confirm that, even with other VMs running, there are at least 8
hugepages free on each of NUMA nodes 0-3:
grep "" /sys/devices/system/node/*/hugepages/hugepages-1048576kB/free_hugepages
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages:8
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages:23
/sys/devices/system/node/node2/hugepages/hugepages-1048576kB/free_hugepages:20
/sys/devices/system/node/node3/hugepages/hugepages-1048576kB/free_hugepages:22
/sys/devices/system/node/node4/hugepages/hugepages-1048576kB/free_hugepages:16
/sys/devices/system/node/node5/hugepages/hugepages-1048576kB/free_hugepages:5
/sys/devices/system/node/node6/hugepages/hugepages-1048576kB/free_hugepages:19
/sys/devices/system/node/node7/hugepages/hugepages-1048576kB/free_hugepages:24
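For what it's worth, the check I would expect the scheduler to make can be sketched like this (a minimal sketch, with the free-page counts for nodes 0-3 hardcoded from the grep output above; a real check would read the sysfs files directly):

```shell
# Can NUMA nodes 0-3 each supply the 8 x 1 GB pages that a 32 GB VM
# pinned across them needs?
free_pages="8 23 20 22"   # nodes 0-3, from free_hugepages above
need_per_node=8

ok=yes
node=0
for f in $free_pages; do
    if [ "$f" -lt "$need_per_node" ]; then
        echo "node $node short: $f < $need_per_node"
        ok=no
    fi
    node=$((node + 1))
done
echo "pinned VM fits: $ok"    # prints "pinned VM fits: yes"
```

So by hugepage accounting the placement should succeed, which is why I suspect the scheduler is looking at plain free memory instead.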
numactl -H:
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 32 33 34 35
node 0 size: 32673 MB
node 0 free: 3779 MB
node 1 cpus: 4 5 6 7 36 37 38 39
node 1 size: 32767 MB
node 1 free: 6162 MB
node 2 cpus: 8 9 10 11 40 41 42 43
node 2 size: 32767 MB
node 2 free: 6698 MB
node 3 cpus: 12 13 14 15 44 45 46 47
node 3 size: 32767 MB
node 3 free: 1589 MB
node 4 cpus: 16 17 18 19 48 49 50 51
node 4 size: 32767 MB
node 4 free: 2630 MB
node 5 cpus: 20 21 22 23 52 53 54 55
node 5 size: 32767 MB
node 5 free: 2487 MB
node 6 cpus: 24 25 26 27 56 57 58 59
node 6 size: 32767 MB
node 6 free: 3279 MB
node 7 cpus: 28 29 30 31 60 61 62 63
node 7 size: 32767 MB
node 7 free: 5513 MB
node distances:
node 0 1 2 3 4 5 6 7
0: 10 16 16 16 32 32 32 32
1: 16 10 16 16 32 32 32 32
2: 16 16 10 16 32 32 32 32
3: 16 16 16 10 32 32 32 32
4: 32 32 32 32 10 16 16 16
5: 32 32 32 32 16 10 16 16
6: 32 32 32 32 16 16 10 16
7: 32 32 32 32 16 16 16 10
--
*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *rs@databay.de* <mailto:rs@databay.de>
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
Need to enable STP on ovirt bridges
by ej.albany@gmail.com
Hello. I have been trying to figure out an issue for a very long time.
The issue is that the Ethernet and 10 Gb FC links I have on my cluster
get disabled any time a migration occurs.
I believe this is because I need to have STP turned on in order to
participate with the switch. However, there does not seem to be any
way to tell oVirt to stop turning it off! Very frustrating.
After adding a cronjob that enables STP on all bridges every minute, the
migration issue disappears...
Is there any way at all to do without this cronjob and set STP to be
ON without having to resort to such a silly solution?
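For reference, this is roughly what my cronjob does: turn STP on for every Linux bridge via the sysfs knob. The sketch below demonstrates it on a mock directory standing in for /sys/class/net/<bridge>/bridge so it is self-contained; on a real host you would loop over /sys/class/net/*/bridge/stp_state instead (or use brctl/ip for the same effect):

```shell
# Enable STP on a bridge by writing 1 to its bridge/stp_state sysfs file.
enable_stp() {
    # $1 = path to a bridge's "bridge" directory
    echo 1 > "$1/stp_state"
}

# Demo on a mock bridge directory (stand-in for /sys/class/net/<br>/bridge):
mock=$(mktemp -d)
mkdir -p "$mock/bridge"
echo 0 > "$mock/bridge/stp_state"
enable_stp "$mock/bridge"
cat "$mock/bridge/stp_state"    # prints 1
rm -rf "$mock"
```

If I remember right, logical networks in the Administration Portal do have an STP option, which might be the supported way to get this applied persistently - can anyone confirm whether that survives migrations?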
Here are some details about my systems, if you need it.
selinux is disabled.
[root@swm-02 ~]# rpm -qa | grep ovirt
ovirt-imageio-common-1.5.1-0.el7.x86_64
ovirt-release43-4.3.5.2-1.el7.noarch
ovirt-imageio-daemon-1.5.1-0.el7.noarch
ovirt-vmconsole-host-1.0.7-2.el7.noarch
ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.26-1.el7.noarch
python2-ovirt-host-deploy-1.8.0-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
cockpit-machines-ovirt-195.1-1.el7.noarch
ovirt-hosted-engine-ha-2.3.3-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
cockpit-ovirt-dashboard-0.13.5-1.el7.noarch
ovirt-provider-ovn-driver-1.2.22-1.el7.noarch
ovirt-host-deploy-common-1.8.0-1.el7.noarch
ovirt-host-4.3.4-1.el7.x86_64
python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64
ovirt-host-dependencies-4.3.4-1.el7.x86_64
ovirt-ansible-repositories-1.1.5-1.el7.noarch
[root@swm-02 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@swm-02 ~]# uname -r
3.10.0-957.27.2.el7.x86_64
[root@swm-02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
test state UP group default qlen 1000
link/ether d4:ae:52:8d:50:48 brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether d4:ae:52:8d:50:49 brd ff:ff:ff:ff:ff:ff
4: p1p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
ovirtmgmt state UP group default qlen 1000
link/ether 90:e2:ba:1e:14:80 brd ff:ff:ff:ff:ff:ff
5: p1p2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 90:e2:ba:1e:14:81 brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
group default qlen 1000
link/ether a2:b8:d6:e8:b3:d8 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 96:a0:c1:4a:45:4b brd ff:ff:ff:ff:ff:ff
25: test: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether d4:ae:52:8d:50:48 brd ff:ff:ff:ff:ff:ff
inet 10.15.11.21/24 brd 10.15.11.255 scope global test
valid_lft forever preferred_lft forever
26: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP group default qlen 1000
link/ether 90:e2:ba:1e:14:80 brd ff:ff:ff:ff:ff:ff
inet 10.15.28.31/24 brd 10.15.28.255 scope global ovirtmgmt
valid_lft forever preferred_lft forever
27: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
group default qlen 1000
link/ether 62:e5:e5:07:99:eb brd ff:ff:ff:ff:ff:ff
29: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
ovirtmgmt state UNKNOWN group default qlen 1000
link/ether fe:6f:9c:95:00:02 brd ff:ff:ff:ff:ff:ff
[root@swm-02 ~]# free -m
total used free shared buff/cache available
Mem: 64413 1873 61804 9 735 62062
Swap: 16383 0 16383
[root@swm-02 ~]# free -h
total used free shared buff/cache available
Mem: 62G 1.8G 60G 9.5M 735M 60G
Swap: 15G 0B 15G
[root@swm-02 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Model name: Intel(R) Xeon(R) CPU X5672 @ 3.20GHz
Stepping: 2
CPU MHz: 3192.064
BogoMIPS: 6384.12
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep
mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht
tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts
rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq
dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca
sse4_1 sse4_2 popcnt aes lahf_lm ssbd ibrs ibpb stibp tpr_shadow vnmi
flexpriority ept vpid dtherm ida arat spec_ctrl intel_stibp flush_l1d
[root@swm-02 ~]#
Migration from bare metal engine to hosted doesn't seem to work at all in 4.3.2
by Clinton Goudie-Nice
Hi all,
I'm trying to migrate from an ovirt bare metal instance to the hosted
appliance.
The instructions here:
https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_...
are at best murky, and at worst quite wrong, and I'm looking for some
help.
The instructions say "You must answer No to the following question so that
you can restore the BareMetal-Engine backup file on HostedEngine-VM before
running engine-setup. Automatically execute engine-setup on the engine
appliance on first boot (Yes, No)[Yes]? No"
Makes complete sense, except this question is not asked at all.
Some research on the internet says we should be able to do this via
append-answers:
[environment:default]
OVEHOSTED_VM/cloudinitExecuteEngineSetup=bool:False
Except this doesn't work either. When the appliance deploys, the engine
is set up and already running.
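For reference, this is how I'm trying to pass it (a sketch; I'm assuming the answers fragment is handed over with --config-append, please check hosted-engine --help on your version):

```shell
# Write the otopi-style answers fragment from the thread to a file,
# then hand it to the deploy command. The deploy invocation itself is
# commented out since it only makes sense on a real host.
cat > /tmp/he-answers.conf <<'EOF'
[environment:default]
OVEHOSTED_VM/cloudinitExecuteEngineSetup=bool:False
EOF

# hosted-engine --deploy --config-append=/tmp/he-answers.conf
grep cloudinitExecuteEngineSetup /tmp/he-answers.conf
```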
How do I take a backup file, and a backup log, and deploy them into a
hosted engine?
Clint
Creating hosts via the REST API using SSH Public Key authentication
by schill-julian@gmx.de
In the UI one can create hosts using two authentication methods: 'Password' and 'SSH Public Key'.
I have only found the Password authentication in the API Docs (/ovirt-engine/apidoc/#/services/hosts/methods/add).
My question is: how can I create hosts using SSH Public Key authentication via the REST API?
I would appreciate an example POST request!
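Not an authoritative answer, but from reading the API model I would try a POST body with the ssh element's authentication_method set to publickey, with the engine's SSH public key already installed in the host's /root/.ssh/authorized_keys beforehand. A sketch (hostnames and credentials are hypothetical; the curl call is commented out since it needs a live engine - please verify against your engine's apidoc):

```shell
# Build the request body for POST /ovirt-engine/api/hosts.
cat > /tmp/add-host.xml <<'EOF'
<host>
  <name>host2.example.com</name>
  <address>host2.example.com</address>
  <ssh>
    <authentication_method>publickey</authentication_method>
  </ssh>
</host>
EOF

# curl -k -u admin@internal:PASSWORD \
#      -H 'Content-Type: application/xml' \
#      -d @/tmp/add-host.xml \
#      https://engine.example.com/ovirt-engine/api/hosts
cat /tmp/add-host.xml
```

Can anyone confirm this is the intended equivalent of the UI's 'SSH Public Key' option?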
nodectl on plain CentOS hypervisors
by Gianluca Cecchi
Does it make sense to install the nodectl utility on plain CentOS 7.x nodes?
Or is there any alternative for plain-OS nodes vs ovirt-node-ng ones?
On my updated CentOS 7.6 oVirt node I don't have the command; I think it
is provided by the package ovirt-node-ng-nodectl, which is one of the
packages available if I run "yum search" on the system.
Thanks
Gianluca
oVirt 3.6: Node went into Error state while migrations happening
by Christopher Cox
On the node in question, the metadata isn't coming across, state-wise.
It shows VMs being in an unknown state (some are up and some are down),
some show as migrating, and there are 9 forever-hung migration tasks. We
tried to bring up some of the VMs that had a state of Down, but that
ended up getting them into the state "Wait for Launch", though those VMs
are actually started.
Right now, my plan is to attempt a restart of vdsmd on the node in
question, just trying to get the node to a working state again. There
are a total of 9 nodes in our cluster, but we can't manage any VMs on
the affected node right now.
Is there a way in 3.6 to cancel the hung tasks? I'm worried that if
vdsmd is restarted on the node, the tasks might be "attempted"... I
really need them to be forgotten if possible.
Ideally I want all "Unknown" VMs to return to either an "up" or "down"
state (depending on whether the VM is up or down), the "Wait for Launch"
ones to go to "up", and all the "Migrating" ones to go to "up" or "down"
(I think only one is actually down).
I'm concerned that any attempt to manually manipulate the state in the
oVirt mgmt head DB will be moot, because the node will be queried for
state and that state will be taken and override anything I attempt to
do.
Thoughts??
When I create a new domain in storage , it report : VDSM command ActivateStorageDomainVDS failed: Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842', )
by wangyu13476969128@126.com
The version of ovirt-engine is 4.2.8
The version of ovirt-node is 4.2.8
When I create a new domain in storage, with storage type NFS, it reports:
VDSM command ActivateStorageDomainVDS failed: Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',)
The error in vdsm.log is:
2019-08-23 11:02:14,740+0800 INFO (jsonrpc/4) [vdsm.api] START connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'c6893a09-ab28-4328-b186-d2f88f2320d4', u'connection': u'172.16.10.74:/ovirt-data', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:172.16.90.10,52962, flow_id=5d720a32, task_id=2af924ff-37a6-46a1-b79f-4251d21d5ff9 (api:46)
2019-08-23 11:02:14,743+0800 INFO (jsonrpc/4) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'c6893a09-ab28-4328-b186-d2f88f2320d4'}]} from=::ffff:172.16.90.10,52962, flow_id=5d720a32, task_id=2af924ff-37a6-46a1-b79f-4251d21d5ff9 (api:52)
2019-08-23 11:02:14,743+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.00 seconds (__init__:573)
2019-08-23 11:02:14,751+0800 INFO (jsonrpc/6) [vdsm.api] START activateStorageDomain(sdUUID=u'3bcdc32c-040e-4a4c-90fb-de950f54f1b4', spUUID=u'b87012a1-8f7a-4af5-8884-e0fb8002e842', options=None) from=::ffff:172.16.90.10,53080, flow_id=709ee722, task_id=9b8f32af-5fdd-4ffa-9520-c474af03db70 (api:46)
2019-08-23 11:02:14,752+0800 INFO (jsonrpc/6) [vdsm.api] FINISH activateStorageDomain error=Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',) from=::ffff:172.16.90.10,53080, flow_id=709ee722, task_id=9b8f32af-5fdd-4ffa-9520-c474af03db70 (api:50)
2019-08-23 11:02:14,752+0800 ERROR (jsonrpc/6) [storage.TaskManager.Task] (Task='9b8f32af-5fdd-4ffa-9520-c474af03db70') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in activateStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1262, in activateStorageDomain
pool = self.getPool(spUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 350, in getPool
raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',)
2019-08-23 11:02:14,752+0800 INFO (jsonrpc/6) [storage.TaskManager.Task] (Task='9b8f32af-5fdd-4ffa-9520-c474af03db70') aborting: Task is aborted: "Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',)" - code 309 (task:1181)
2019-08-23 11:02:14,752+0800 ERROR (jsonrpc/6) [storage.Dispatcher] FINISH activateStorageDomain error=Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',) (dispatcher:82)
How can I solve this problem?
large size OVA import fails
by jason.l.cox@l3harris.com
I have a fairly large OVA (~200 GB) that was exported from oVirt 4.3.5. I'm trying to import it into a new cluster, also oVirt 4.3.5. The import starts fine but fails again and again.
Everything I can find online appears to be outdated, mentioning incorrect log file locations and saying virt-v2v does the import.
On the engine in /var/log/ovirt-engine/engine.log I can see where it is doing the CreateImageVDSCommand, then a few outputs concerning adding the disk, which end with USER_ADD_DISK_TO_VM_FINISHED_SUCCESS, then the ansible command:
2019-08-20 15:40:38,653-04
Executing Ansible command: /usr/bin/ansible-playbook --ssh-common-args=-F /var/lib/ovirt-engine/.ssh/config -v --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8416464991088315694 --extra-vars=ovirt_import_ova_path="/mnt/vm_backups/myvm.ova" --extra-vars=ovirt_import_ova_disks="['/rhev/data-center/mnt/glusterSD/myhost.mydomain.com:_vmstore/59502c8b-fd1e-482b-bff7-39c699c196b3/images/886a3313-19a9-435d-aeac-64c2d507bb54/465ce2ba-8883-4378-bae7-e231047ea09d']" --extra-vars=ovirt_import_ova_image_mappings="{}" /usr/share/ovirt-engine/playbooks/ovirt-ova-import.yml [Logfile: /var/log/ovirt-engine/ova/ovirt-import-ova-ansible-20190820154038-myhost.mydomain.com-25f6ac6f-9bdc-4301-b896-d357712dbf01.log]
Then nothing about the import until:
2019-08-20 16:11:08,859-04 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [3321d4f6] Lock freed to object 'EngineLock:{exclusiveLocks='[myvm=VM_NAME, 464a25ba-8f0a-421d-a6ab-13eff67b4c96=VM]', sharedLocks=''}'
2019-08-20 16:11:08,894-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [3321d4f6] EVENT_ID: IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm myvm to Data Center Default, Cluster Default
I've found the import logs on the engine, in /var/log/ovirt-engine/ova, but the ovirt-import-ova-ansible*.logs for the imports of concern only contain:
2019-08-20 19:59:48,799 p=44701 u=ovirt | Using /usr/share/ovirt-engine/playbooks/ansible.cfg as config file
2019-08-20 19:59:49,271 p=44701 u=ovirt | PLAY [all] *********************************************************************
2019-08-20 19:59:49,280 p=44701 u=ovirt | TASK [ovirt-ova-extract : Run extraction script] *******************************
Watching the host selected for the import, I can see the qemu-img convert process running, but then the engine frees the lock on the VM and reports the import as having failed. However, the qemu-img process continues to run on the host. I don't know where else to look to try and find out what's going on and I cannot see anything that says why the import failed.
Since the qemu-img process on the host is still running after the engine log shows the lock has been freed and import failed, I'm guessing what's happening is on the engine side.
Looking at the time between the start of the ansible command and when the lock is freed it is consistently around 30 minutes.
# first try
2019-08-20 15:40:38,653-04 ansible command start
2019-08-20 16:11:08,859-04 lock freed
30 minutes, 30 seconds
# second try
2019-08-20 19:59:48,463-04 ansible command start
2019-08-20 20:30:21,697-04 lock freed
30 minutes, 33 seconds
# third try
2019-08-21 09:16:42,706-04 ansible command start
2019-08-21 09:46:47,103-04 lock freed
30 minutes, 5 seconds
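The deltas can be computed straight from the log timestamps with date arithmetic (the -04 offsets are identical on both sides, so they cancel):

```shell
# Seconds between ansible command start and lock-free for the first attempt.
start=$(date -u -d "2019-08-20 15:40:38" +%s)
end=$(date -u -d "2019-08-20 16:11:08" +%s)
echo $(( end - start ))    # prints 1830 (30 minutes, 30 seconds)
```

All three attempts land just past 1800 seconds, which is what makes a 30-minute timeout on the engine side look so likely.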
With that in mind, I took a look at the available configuration keys from engine-config --list. After getting each one, the only key set to ~30 minutes that looked like it could be the problem was SSHInactivityHardTimeoutSeconds (set to 1800 seconds). I set it to 3600 and tried the import again, but it still failed at ~30 minutes, so that's apparently not the correct key.
Also, just FYI, I tried to import the OVA using virt-v2v, but that fails immediately:
virt-v2v: error: expecting XML expression to return an integer (expression:
rasd:Parent/text(), matching string: 00000000-0000-0000-0000-000000000000)
If reporting bugs, run virt-v2v with debugging enabled and include the
complete output:
virt-v2v -v -x [...]
Does virt-v2v not support OVAs created by the oVirt 'export to ova' option?
So my main question is: is there a timeout for VM imports through the
engine web UI? And if so, is it configurable?
Thanks in advance.
ovn networking
by Staniforth, Paul
Hello
In the latest release of the engine (4.3.5.5-1.el7),
ovn-nbctl show
ovn-sbctl show
both seem to work, but they produce the errors:
net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory
net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)
PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory
PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)
should libibverbs be a dependency?
Paul S.
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/