VDO + HCI = failed reboots
by Donny Davis
I just spun up the latest and greatest oVirt has to offer, and I am
building out an HCI cluster. The deployment went wonderfully. I had DNS
set up for everything, and it just worked. Great job, team!
I just wanted to add something I noticed after rebooting the nodes: the
Gluster service and volumes would not come back up after a reboot. I
found that the fstab entries for the Gluster logical volumes need an
option telling systemd to start the VDO service first. Without this, the
node will fail to boot if you are using VDO. The mount options should be:
inode64,noatime,nodiratime,discard,x-systemd.requires=vdo.service
The defaults are:
inode64,noatime,nodiratime
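For example, a brick entry in /etc/fstab would end up looking something like
this (the device path and mount point below are just placeholders for
whatever your HCI deployment actually created):

# example only - adjust device, mount point and filesystem to your deployment
/dev/mapper/gluster_vg-gluster_lv /gluster_bricks/brick1 xfs inode64,noatime,nodiratime,discard,x-systemd.requires=vdo.service 0 0

The x-systemd.requires=vdo.service option is what makes systemd start the VDO
service before it tries to mount the brick, so the mount no longer races the
VDO device at boot.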
Hopefully this helps someone else
:)
Cheers
Migrating to HCI
by Vincent Royer
I have 2 hosts using NFS storage.
I want to migrate to HCI and add a third host. I have added the 10GbE
cards and drives to the 3 target hosts and set up a separate physical
network for storage.
Can I:
- deploy a single-node HCI to host 3
- export the VMs to an NFS export domain
- Import the VMs to the HCI node
- wipe the first 2 hosts and re-deploy them to the HCI DC
Is there a guide for this?
Or should I just export the VMs, blow away all the nodes, and deploy a
3-node HCI using the wizards? I'm hoping to keep VM downtime to under one
day.
*Vince*
One Windows VM is not responding
by Jonathan Baecker
Hello Everybody!
We have 13 VMs running under oVirt 4.2.4. Two of them are Windows
Server 2016. One of them runs AD, DNS, and one application.
The second one runs an SQL Server and also an application. This
second VM has the problem that it periodically goes into a state where
oVirt shows the message: *not responding*
Apart from this, the VM runs normally. I can connect to it over Remote
Desktop, but I cannot connect to it with oVirt/noVNC; the button is grayed out.
A backup script runs on a weekly cycle, which may be causing the
problem, but I really don't know how to debug this. The VM can run
normally for 3 weeks and then go into this state, so I cannot really
say when it will happen. The fact that the other Windows Server VM
runs normally also puzzles me.
Have you experienced this problem, or do you know how I can find out
what the issue is?
Best Regards!
Jonathan
Re: External ceph storage
by Luca 'remix_tj' Lorenzetto
No,
the only options are to configure Cinder to manage a Ceph pool or,
alternatively, to deploy an iSCSI gateway; no other ways are available
at the moment.
So you can't use RBD directly.
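For reference, the Cinder route generally means pointing a Cinder RBD
backend at the Ceph cluster and then adding that Cinder instance to oVirt as
an external provider. A minimal backend section in cinder.conf could look
roughly like the sketch below; the pool name, user and secret UUID are only
placeholders and have to match your own Ceph and libvirt setup:

[ceph]
# hypothetical values - adjust pool, user and secret UUID to your cluster
volume_backend_name = ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000

The backend also needs to be enabled with enabled_backends = ceph in the
[DEFAULT] section.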
Luca
On Sun, 27 May 2018 at 16:54, Leo David <leoalex(a)gmail.com> wrote:
> Thank you Luca,
> At the moment I would try the Cinder storage provider, since we already
> have a Proxmox cluster directly connecting to Ceph. The problem is that I
> just could not find a straightforward way to do this.
> i.e., specify the Ceph monitors and Ceph pool to connect to. Can oVirt
> directly connect to Ceph monitors? If so, how should the configuration be
> done?
> Thank you very much !
>
>
> On Sun, May 27, 2018, 17:20 Luca 'remix_tj' Lorenzetto <
> lorenzetto.luca(a)gmail.com> wrote:
>
>> Hello,
>>
>> Yes, using Cinder or through an iSCSI gateway.
>>
>> For a simpler setup I suggest the second option.
>>
>> Luca
>>
>> On Sun, 27 May 2018 at 16:08, Leo David <leoalex(a)gmail.com> wrote:
>>
>>> Hello everyone,
>>> I am new to oVirt and very impressed by its features. I would like to
>>> leverage our existing Ceph cluster to provide RBD images for VM disks;
>>> is this possible to achieve?
>>> Thank you very much !
>>> Regards,
>>> Leo
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>
>>
Unknown CPU model Broadwell-IBRS-SSBD
by me@brendanh.com
Hi,
I've just installed hosted-engine. It proceeds to near the end of the Ansible script, then pauses at:
"TASK [Check engine VM health]"
then eventually errors with: "Engine VM is not running, please check vdsm logs." The VDSM logs contain the error:
"libvirtError: internal error: Unknown CPU model Broadwell-IBRS-SSBD"
Also, in the oVirt Node web UI, the HostedEngine VM is created but not started. If I press Run, the following error is shown:
VM START action failed
error: Failed to start domain HostedEngine error: internal error: Unknown CPU model Broadwell-IBRS-SSBD
I noticed a Trello card was created to add the similar Skylake & EPYC CPU types, but not Broadwell:
https://trello.com/c/5nGTgQ9P/89-new-ibrs-and-ssbd-cpu-types#
Does the Broadwell-IBRS-SSBD CPU need to be added to this supported types list?
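As a quick check on the host itself, libvirt can list the CPU models it knows
about; if Broadwell-IBRS-SSBD is missing from that list, the installed
libvirt/QEMU build simply doesn't ship that model yet. A sketch (exact output
depends on your versions):

# list the CPU models libvirt knows for x86_64 on this node
virsh cpu-models x86_64 | grep -i broadwell
# the domain capabilities report also shows which models are usable
virsh domcapabilities | grep -i broadwell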
Many thanks.
Re: losing ib0 connection after activating host
by Douglas Duckworth
Sorry, I meant "migration network", for moving live-migration traffic.
FDR InfiniBand is much faster than the 1Gb network, which currently acts
as the migration network, VM network, display network, management network, etc.
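As a side note, since the interface used for NFS follows the host's routing
table (see Dominik's reply below), something like the following confirms
whether storage traffic would really go out over ib0; the address here is
only a placeholder for the actual NFS server:

# replace with the real NFS server address
ip route get 172.16.0.100
# the output should name ib0 as the outgoing device if routing is correct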
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York - LC-502
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Fri, Aug 24, 2018 at 9:36 AM, Dominik Holler <dholler(a)redhat.com> wrote:
> On Thu, 23 Aug 2018 13:51:39 -0400
> Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
>
> > THANKS!
> >
> > ib0 now up with NFS storage back on this hypervisor
> >
>
> Thanks for letting us know.
>
> > Though how do I make it a transfer network? I don't see an option.
> >
>
> I do not understand the meaning of "transfer network".
> The network interface to use for NFS results from the routing tables of
> the host.
> In "Compute > Clusters > Clustername > Logical Networks > Manage
> Networks" network roles for some kind of loads can be assigned, but not
> for NFS access.
>
>
> > Thanks,
> >
> > Douglas Duckworth, MSc, LFCS
> > HPC System Administrator
> > Scientific Computing Unit
> > Weill Cornell Medicine
> > 1300 York - LC-502
> > E: doug(a)med.cornell.edu
> > O: 212-746-6305
> > F: 212-746-8690
> >
> >
> > On Thu, Aug 23, 2018 at 11:12 AM, Douglas Duckworth
> > <dod2014(a)med.cornell.edu
> > > wrote:
> >
> > > Hi Dominik
> > >
> > > Yes, the network-script was created by our Ansible role that deploys
> > > CentOS hosts. It pulls the IP from DNS then templates the script
> > > and copies to host.
> > >
> > > I will try this oVirt step then see if it works!
> > >
> > > Thanks,
> > >
> > > Douglas Duckworth, MSc, LFCS
> > > HPC System Administrator
> > > Scientific Computing Unit
> > > Weill Cornell Medicine
> > > 1300 York - LC-502
> > > E: doug(a)med.cornell.edu
> > > O: 212-746-6305
> > > F: 212-746-8690
> > >
> > >
> > > On Thu, Aug 23, 2018 at 11:09 AM, Dominik Holler
> > > <dholler(a)redhat.com> wrote:
> > >
> > >> Is ifcfg-ib0 created before adding the host?
> > >> Can ib0 be reconfigured using engine, e.g. by
> > >> "Compute > Hosts > hostx > Network Interfaces > Setup Host
> > >> Networks"? If this some kind of self-hosted engine?
> > >>
> > >> On Thu, 23 Aug 2018 09:30:59 -0400
> > >> Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
> > >>
> > >> > Here's a link to the files:
> > >> >
> > >> > https://urldefense.proofpoint.com/v2/url?u=https-3A__bit.ly_
> > >> 2wjZ6Vo&d=DwICAg&c=lb62iw4YL4RFalcE2hQUQealT9-RXrryqt9KZX2qu
> > >> 2s&r=2Fzhh_78OGspKQpl_e-CbhH6xUjnRkaqPFUS2wTJ2cw&m=Y25-
> > >> OOvgu58jlC82-fzBeNIpQ7ZscoHznffUhqE6EBM&s=QQXlC9Tisa60TvimyS
> > >> 3BnFDCaDF7VPD8eCzT-Fke-p0&e=
> > >> >
> > >> > Thank you!
> > >> >
> > >> > Thanks,
> > >> >
> > >> > Douglas Duckworth, MSc, LFCS
> > >> > HPC System Administrator
> > >> > Scientific Computing Unit
> > >> > Weill Cornell Medicine
> > >> > 1300 York - LC-502
> > >> > E: doug(a)med.cornell.edu
> > >> > O: 212-746-6305
> > >> > F: 212-746-8690
> > >> >
> > >> >
> > >> > On Thu, Aug 23, 2018 at 6:51 AM, Dominik Holler
> > >> > <dholler(a)redhat.com> wrote:
> > >> >
> > >> > > Would you please share the vdsm.log and the supervdsm.log from
> > >> > > this host?
> > >> > >
> > >> > > On Wed, 22 Aug 2018 11:36:09 -0400
> > >> > > Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
> > >> > >
> > >> > > > Hi
> > >> > > >
> > >> > > > I keep losing ib0 connection on hypervisor after adding host
> > >> > > > to engine. This makes the host not really work since NFS
> > >> > > > will be mounted over ib0.
> > >> > > >
> > >> > > > I don't really understand why this occurs.
> > >> > > >
> > >> > > > OS:
> > >> > > >
> > >> > > > [root@ovirt-hv2 ~]# cat /etc/redhat-release
> > >> > > > CentOS Linux release 7.5.1804 (Core)
> > >> > > >
> > >> > > > Here's the network script:
> > >> > > >
> > >> > > > [root@ovirt-hv2 ~]#
> > >> > > > cat /etc/sysconfig/network-scripts/ifcfg-ib0 DEVICE=ib0
> > >> > > > BOOTPROTO=static
> > >> > > > IPADDR=172.16.0.207
> > >> > > > NETMASK=255.255.255.0
> > >> > > > ONBOOT=yes
> > >> > > > ZONE=public
> > >> > > >
> > >> > > > When I try "ifup"
> > >> > > >
> > >> > > > [root@ovirt-hv2 ~]# ifup ib0
> > >> > > > Error: Connection activation failed: No suitable device
> > >> > > > found for this connection.
> > >> > > >
> > >> > > > The error in syslog:
> > >> > > >
> > >> > > > Aug 22 11:31:50 ovirt-hv2 kernel: IPv4: martian source
> > >> > > > 172.16.0.87 from 172.16.0.49, on dev ib0
> > >> > > > Aug 22 11:31:53 ovirt-hv2 NetworkManager[1070]: <info>
> > >> > > > [1534951913.7486] audit: op="connection-activate"
> > >> > > > uuid="2ab4abde-b8a5-6cbc-19b1-2bfb193e4e89" name="System ib0"
> > >> > > > result="fail" reason="No suitable device found for this
> > >> > > > connection.
> > >> > > >
> > >> > > > As you can see media state up:
> > >> > > >
> > >> > > > [root@ovirt-hv2 ~]# ip a
> > >> > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state
> > >> > > > UNKNOWN group default qlen 1000
> > >> > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > >> > > > inet 127.0.0.1/8 scope host lo
> > >> > > > valid_lft forever preferred_lft forever
> > >> > > > 2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
> > >> > > > master ovirtmgmt state UP group default qlen 1000
> > >> > > > link/ether 50:9a:4c:89:d3:81 brd ff:ff:ff:ff:ff:ff
> > >> > > > 3: em2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq
> > >> > > > state DOWN group default qlen 1000
> > >> > > > link/ether 50:9a:4c:89:d3:82 brd ff:ff:ff:ff:ff:ff
> > >> > > > 4: p1p1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
> > >> > > > mq state DOWN group default qlen 1000
> > >> > > > link/ether b4:96:91:13:ea:68 brd ff:ff:ff:ff:ff:ff
> > >> > > > 5: p1p2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
> > >> > > > mq state DOWN group default qlen 1000
> > >> > > > link/ether b4:96:91:13:ea:6a brd ff:ff:ff:ff:ff:ff
> > >> > > > 6: idrac: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> > >> > > > pfifo_fast state UNKNOWN group default qlen 1000
> > >> > > > link/ether 50:9a:4c:89:d3:84 brd ff:ff:ff:ff:ff:ff
> > >> > > > inet 169.254.0.2/16 brd 169.254.255.255 scope global
> > >> > > > idrac valid_lft forever preferred_lft forever
> > >> > > > 7: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc mq
> > >> > > > state UP group default qlen 256
> > >> > > > link/infiniband
> > >> > > > a0:00:02:08:fe:80:00:00:00:00:00:00:ec:0d:9a:03:00:1d:13:41
> > >> > > > brd
> > >> > > > 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
> > >> > > > 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
> > >> > > > state DOWN group default qlen 1000 link/ether
> > >> > > > 12:b4:30:22:39:5b brd ff:ff:ff:ff:ff:ff 9: br-int:
> > >> > > > <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
> > >> > > > default qlen 1000 link/ether 3e:32:e6:66:98:49 brd
> > >> > > > ff:ff:ff:ff:ff:ff 25: ovirtmgmt:
> > >> > > > <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> > >> > > > state UP group default qlen 1000 link/ether
> > >> > > > 50:9a:4c:89:d3:81 brd ff:ff:ff:ff:ff:ff inet 10.0.0.183/16
> > >> > > > brd 10.0.255.255 scope global ovirtmgmt valid_lft forever
> > >> > > > preferred_lft forever 26: genev_sys_6081:
> > >> > > > <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue
> > >> > > > master ovs-system state UNKNOWN group default qlen 1000
> > >> > > > link/ether aa:32:82:1b:01:d9 brd ff:ff:ff:ff:ff:ff
> > >> > > > 27: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
> > >> > > > state DOWN group default qlen 1000 link/ether
> > >> > > > 32:ff:5d:b8:c2:b4 brd ff:ff:ff:ff:ff:ff
> > >> > > >
> > >> > > > The card is FDR:
> > >> > > >
> > >> > > > [root@ovirt-hv2 ~]# lspci -v | grep Mellanox
> > >> > > > 01:00.0 Network controller: Mellanox Technologies MT27500
> > >> > > > Family [ConnectX-3]
> > >> > > > Subsystem: Mellanox Technologies Device 0051
> > >> > > >
> > >> > > > Latest OFED driver:
> > >> > > >
> > >> > > > [root@ovirt-hv2 ~]# /etc/init.d/openibd status
> > >> > > >
> > >> > > > HCA driver loaded
> > >> > > >
> > >> > > > Configured IPoIB devices:
> > >> > > > ib0
> > >> > > >
> > >> > > > Currently active IPoIB devices:
> > >> > > > ib0
> > >> > > > Configured Mellanox EN devices:
> > >> > > >
> > >> > > > Currently active Mellanox devices:
> > >> > > > ib0
> > >> > > >
> > >> > > > The following OFED modules are loaded:
> > >> > > >
> > >> > > > rdma_ucm
> > >> > > > rdma_cm
> > >> > > > ib_ipoib
> > >> > > > mlx4_core
> > >> > > > mlx4_ib
> > >> > > > mlx4_en
> > >> > > > mlx5_core
> > >> > > > mlx5_ib
> > >> > > > ib_uverbs
> > >> > > > ib_umad
> > >> > > > ib_ucm
> > >> > > > ib_cm
> > >> > > > ib_core
> > >> > > > mlxfw
> > >> > > > mlx5_fpga_tools
> > >> > > >
> > >> > > > I can add an IP to ib0 using "ip addr" though I need Network
> > >> > > > Manager to work with ib0.
> > >> > > >
> > >> > > >
> > >> > > > Thanks,
> > >> > > >
> > >> > > > Douglas Duckworth, MSc, LFCS
> > >> > > > HPC System Administrator
> > >> > > > Scientific Computing Unit
> > >> > > > Weill Cornell Medicine
> > >> > > > 1300 York - LC-502
> > >> > > > E: doug(a)med.cornell.edu
> > >> > > > O: 212-746-6305
> > >> > > > F: 212-746-8690
> > >> > >
> > >> > >
> > >>
> > >>
> > >
>
>
Console to HostedEngine
by Daniel Menzel
Hi all,
we cannot access our hosted engine anymore. It started with an overfull
/var due to a growing database. We accessed the engine via SSH and tried
to fix that - but somehow we seem to have produced another problem on
the SSH server itself, so unfortunately we cannot log in anymore.
We then tried to access it via its host and a "hosted-engine --console"
but ran into an
internal error: cannot find character device <null>
which I know from KVM. With other VMs I could follow Red Hat's advice
to add a console
(https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/...)
but although I edited the hosted engine's profile, those changes weren't
applied - and after an engine restart they were even deleted again. That
kind of makes sense to me, but it limits my options.
So my question is: is there any way I can access the console with my
current limitations, fix the SSH server's problems, and then hopefully
fix everything?
Regards
Daniel
Re: losing ib0 connection after activating host
by Douglas Duckworth
Hi Dominik
Yes, the network script was created by our Ansible role that deploys CentOS
hosts. It pulls the IP from DNS, then templates the script and copies it to
the host.
I will try this oVirt step and see if it works!
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York - LC-502
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Thu, Aug 23, 2018 at 11:09 AM, Dominik Holler <dholler(a)redhat.com> wrote:
> Is ifcfg-ib0 created before adding the host?
> Can ib0 be reconfigured using engine, e.g. by
> "Compute > Hosts > hostx > Network Interfaces > Setup Host Networks"?
> If this some kind of self-hosted engine?
>
> On Thu, 23 Aug 2018 09:30:59 -0400
> Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
>
> > Here's a link to the files:
> >
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__bit.ly_
> 2wjZ6Vo&d=DwICAg&c=lb62iw4YL4RFalcE2hQUQealT9-RXrryqt9KZX2qu2s&r=2Fzhh_
> 78OGspKQpl_e-CbhH6xUjnRkaqPFUS2wTJ2cw&m=Y25-OOvgu58jlC82-
> fzBeNIpQ7ZscoHznffUhqE6EBM&s=QQXlC9Tisa60TvimyS3BnFDCaDF7VP
> D8eCzT-Fke-p0&e=
> >
> > Thank you!
> >
> > Thanks,
> >
> > Douglas Duckworth, MSc, LFCS
> > HPC System Administrator
> > Scientific Computing Unit
> > Weill Cornell Medicine
> > 1300 York - LC-502
> > E: doug(a)med.cornell.edu
> > O: 212-746-6305
> > F: 212-746-8690
> >
> >
> > On Thu, Aug 23, 2018 at 6:51 AM, Dominik Holler <dholler(a)redhat.com>
> > wrote:
> >
> > > Would you please share the vdsm.log and the supervdsm.log from this
> > > host?
> > >
> > > On Wed, 22 Aug 2018 11:36:09 -0400
> > > Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
> > >
> > > > Hi
> > > >
> > > > I keep losing ib0 connection on hypervisor after adding host to
> > > > engine. This makes the host not really work since NFS will be
> > > > mounted over ib0.
> > > >
> > > > I don't really understand why this occurs.
> > > >
> > > > OS:
> > > >
> > > > [root@ovirt-hv2 ~]# cat /etc/redhat-release
> > > > CentOS Linux release 7.5.1804 (Core)
> > > >
> > > > Here's the network script:
> > > >
> > > > [root@ovirt-hv2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ib0
> > > > DEVICE=ib0
> > > > BOOTPROTO=static
> > > > IPADDR=172.16.0.207
> > > > NETMASK=255.255.255.0
> > > > ONBOOT=yes
> > > > ZONE=public
> > > >
> > > > When I try "ifup"
> > > >
> > > > [root@ovirt-hv2 ~]# ifup ib0
> > > > Error: Connection activation failed: No suitable device found for
> > > > this connection.
> > > >
> > > > The error in syslog:
> > > >
> > > > Aug 22 11:31:50 ovirt-hv2 kernel: IPv4: martian source 172.16.0.87
> > > > from 172.16.0.49, on dev ib0
> > > > Aug 22 11:31:53 ovirt-hv2 NetworkManager[1070]: <info>
> > > > [1534951913.7486] audit: op="connection-activate"
> > > > uuid="2ab4abde-b8a5-6cbc-19b1-2bfb193e4e89" name="System ib0"
> > > > result="fail" reason="No suitable device found for this
> > > > connection.
> > > >
> > > > As you can see media state up:
> > > >
> > > > [root@ovirt-hv2 ~]# ip a
> > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state
> > > > UNKNOWN group default qlen 1000
> > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > inet 127.0.0.1/8 scope host lo
> > > > valid_lft forever preferred_lft forever
> > > > 2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
> > > > ovirtmgmt state UP group default qlen 1000
> > > > link/ether 50:9a:4c:89:d3:81 brd ff:ff:ff:ff:ff:ff
> > > > 3: em2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq
> > > > state DOWN group default qlen 1000
> > > > link/ether 50:9a:4c:89:d3:82 brd ff:ff:ff:ff:ff:ff
> > > > 4: p1p1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq
> > > > state DOWN group default qlen 1000
> > > > link/ether b4:96:91:13:ea:68 brd ff:ff:ff:ff:ff:ff
> > > > 5: p1p2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq
> > > > state DOWN group default qlen 1000
> > > > link/ether b4:96:91:13:ea:6a brd ff:ff:ff:ff:ff:ff
> > > > 6: idrac: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> > > > pfifo_fast state UNKNOWN group default qlen 1000
> > > > link/ether 50:9a:4c:89:d3:84 brd ff:ff:ff:ff:ff:ff
> > > > inet 169.254.0.2/16 brd 169.254.255.255 scope global idrac
> > > > valid_lft forever preferred_lft forever
> > > > 7: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc mq state
> > > > UP group default qlen 256
> > > > link/infiniband
> > > > a0:00:02:08:fe:80:00:00:00:00:00:00:ec:0d:9a:03:00:1d:13:41 brd
> > > > 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
> > > > 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state
> > > > DOWN group default qlen 1000
> > > > link/ether 12:b4:30:22:39:5b brd ff:ff:ff:ff:ff:ff
> > > > 9: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> > > > group default qlen 1000
> > > > link/ether 3e:32:e6:66:98:49 brd ff:ff:ff:ff:ff:ff
> > > > 25: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> > > > noqueue state UP group default qlen 1000
> > > > link/ether 50:9a:4c:89:d3:81 brd ff:ff:ff:ff:ff:ff
> > > > inet 10.0.0.183/16 brd 10.0.255.255 scope global ovirtmgmt
> > > > valid_lft forever preferred_lft forever
> > > > 26: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000
> > > > qdisc noqueue master ovs-system state UNKNOWN group default qlen
> > > > 1000 link/ether aa:32:82:1b:01:d9 brd ff:ff:ff:ff:ff:ff
> > > > 27: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state
> > > > DOWN group default qlen 1000
> > > > link/ether 32:ff:5d:b8:c2:b4 brd ff:ff:ff:ff:ff:ff
> > > >
> > > > The card is FDR:
> > > >
> > > > [root@ovirt-hv2 ~]# lspci -v | grep Mellanox
> > > > 01:00.0 Network controller: Mellanox Technologies MT27500 Family
> > > > [ConnectX-3]
> > > > Subsystem: Mellanox Technologies Device 0051
> > > >
> > > > Latest OFED driver:
> > > >
> > > > [root@ovirt-hv2 ~]# /etc/init.d/openibd status
> > > >
> > > > HCA driver loaded
> > > >
> > > > Configured IPoIB devices:
> > > > ib0
> > > >
> > > > Currently active IPoIB devices:
> > > > ib0
> > > > Configured Mellanox EN devices:
> > > >
> > > > Currently active Mellanox devices:
> > > > ib0
> > > >
> > > > The following OFED modules are loaded:
> > > >
> > > > rdma_ucm
> > > > rdma_cm
> > > > ib_ipoib
> > > > mlx4_core
> > > > mlx4_ib
> > > > mlx4_en
> > > > mlx5_core
> > > > mlx5_ib
> > > > ib_uverbs
> > > > ib_umad
> > > > ib_ucm
> > > > ib_cm
> > > > ib_core
> > > > mlxfw
> > > > mlx5_fpga_tools
> > > >
> > > > I can add an IP to ib0 using "ip addr" though I need Network
> > > > Manager to work with ib0.
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Douglas Duckworth, MSc, LFCS
> > > > HPC System Administrator
> > > > Scientific Computing Unit
> > > > Weill Cornell Medicine
> > > > 1300 York - LC-502
> > > > E: doug(a)med.cornell.edu
> > > > O: 212-746-6305
> > > > F: 212-746-8690
> > >
> > >
>
>