On Mon, Jan 6, 2020 at 9:21 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Hi Miguel,
I read some blogs about OVN and tried to collect some data that might hint at where
the issue is.
I still struggle to "decode" it, but it may be easier for you or anyone on
the list.
I am eager to receive your reply.
Thanks in advance, and Happy New Year!
Hi,
Sorry for not noticing your email before; hope late is better than never...
Best Regards,
Strahil Nikolov
On Wednesday, December 18, 2019 at 21:10:31 GMT+2, Strahil Nikolov
<hunter86_bg(a)yahoo.com> wrote:
That's a good question.
ovirtmgmt is using a Linux bridge, but I'm not so sure about br-int.
'brctl show' does not recognize br-int, so I guess it is openvswitch.
That is still a guess, though, so you can give me the command to verify it :)
You can use the GUI for that; access "Compute > Clusters", choose the
cluster in question, hit 'Edit', then look for the 'Switch type'
entry.
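
If you prefer checking from the host shell, a rough way to tell the two apart
(assuming the openvswitch and bridge-utils CLIs are installed on the host) is:

  # an OVS bridge such as br-int shows up here, a legacy Linux bridge does not
  ovs-vsctl list-br
  # and the other way around: Linux bridges such as ovirtmgmt show up here
  brctl show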
As the system was first built on 4.2.7, it most probably never used anything except
openvswitch.
Thanks in advance for your help. I really appreciate that.
Best Regards,
Strahil Nikolov
On Wednesday, December 18, 2019 at 17:53:31 GMT+2, Miguel Duarte de Mora Barroso
<mdbarroso(a)redhat.com> wrote:
On Wed, Dec 18, 2019 at 6:35 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>
> Hi Dominik,
>
> sadly reinstall of all hosts is not helping.
>
> @ Miguel,
>
> I have 2 clusters
> 1. Default (amd-based one) -> ovirt1 (192.168.1.90) & ovirt2 (192.168.1.64)
> 2. Intel (intel-based one and a gluster arbiter) -> ovirt3 (192.168.1.41)
But what are the switch types used on the clusters: openvswitch *or*
legacy / linux bridges?
>
> The output of the 2 commands (after I ran a reinstall on all hosts):
>
> [root@engine ~]# ovn-sbctl list encap
> _uuid : d4d98c65-11da-4dc8-9da3-780e7738176f
> chassis_name : "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> ip : "192.168.1.90"
> options : {csum="true"}
> type : geneve
>
> _uuid : ed8744a5-a302-493b-8c3b-19a4d2e170de
> chassis_name : "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> ip : "192.168.1.64"
> options : {csum="true"}
> type : geneve
>
> _uuid : b72ff0ab-92fc-450c-a6eb-ab2869dee217
> chassis_name : "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> ip : "192.168.1.41"
> options : {csum="true"}
> type : geneve
>
>
> [root@engine ~]# ovn-sbctl list chassis
> _uuid : b1da5110-f477-4c60-9963-b464ab96c644
> encaps : [ed8744a5-a302-493b-8c3b-19a4d2e170de]
> external_ids : {datapath-type="", iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
> hostname : "ovirt2.localdomain"
> name : "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> nb_cfg : 0
> transport_zones : []
> vtep_logical_switches: []
>
> _uuid : dcc94e1c-bf44-46a3-b9d1-45360c307b26
> encaps : [b72ff0ab-92fc-450c-a6eb-ab2869dee217]
> external_ids : {datapath-type="", iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
> hostname : "ovirt3.localdomain"
> name : "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> nb_cfg : 0
> transport_zones : []
> vtep_logical_switches: []
>
> _uuid : 897b34c5-d1d1-41a7-b2fd-5f1fa203c1da
> encaps : [d4d98c65-11da-4dc8-9da3-780e7738176f]
> external_ids : {datapath-type="", iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
> hostname : "ovirt1.localdomain"
> name : "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> nb_cfg : 0
> transport_zones : []
> vtep_logical_switches: []
>
>
> If you know an easy method to get back to the default settings, that would be
> best, as I'm currently not using OVN in production (just for tests and to learn
> more about how it works) and I can afford any kind of downtime.
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, December 17, 2019 at 11:28:25 GMT+2, Miguel Duarte de Mora Barroso
> <mdbarroso(a)redhat.com> wrote:
>
>
> On Tue, Dec 17, 2019 at 10:19 AM Miguel Duarte de Mora Barroso
> <mdbarroso(a)redhat.com> wrote:
> >
> > On Tue, Dec 17, 2019 at 9:17 AM Dominik Holler <dholler(a)redhat.com> wrote:
> > >
> > >
> > >
> > > On Tue, Dec 17, 2019 at 6:28 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
> > >>
> > >> Hi Dominik,
> > >>
> > >> Thanks for your reply.
> > >>
> > >> On ovirt1 I got the following:
> > >> [root@ovirt1 openvswitch]# less ovn-controller.log-20191216.gz
> > >> 2019-12-15T01:49:02.988Z|00032|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
> > >> 2019-12-16T01:18:02.114Z|00033|vlog|INFO|closing log file
> > >> ovn-controller.log-20191216.gz (END)
> > >>
> > >> Same is on the other node:
> > >>
> > >> [root@ovirt2 openvswitch]# less ovn-controller.log-20191216.gz
> > >> 2019-12-15T01:26:03.477Z|00028|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
> > >> 2019-12-16T01:30:01.718Z|00029|vlog|INFO|closing log file
> > >> ovn-controller.log-20191216.gz (END)
> > >>
> > >> The strange thing is that the geneve tunnels are there:
> > >
> > >
> > >
> > > Miguel, do you know how to remove and re-add the chassis to the southbound db?
> > > (We have to check this to address bug https://bugzilla.redhat.com/1758289 )
> >
> > "ovn-sbctl chassis-del CHASSIS" will get rid of the chassis / encap.
> >
> > Afterwards, the ovn-controller will re-register itself on OVN southbound.
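> >
> > As a rough example (the chassis name below is taken from the ovn-chassis-id
> > entries in your 'ovs-vsctl list port' output -- adjust as needed):
> >   # on the engine:
> >   ovn-sbctl chassis-del baa0199e-d1a4-484c-af13-a41bcad19dbc
> >   # on the matching host, restarting ovn-controller should make it re-register:
> >   systemctl restart ovn-controller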
> >
> > There's something weird about your chassis, I see you have listed 3
> > different chassis IDs on your port IDs:
> > - ovn-chassis-id="5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3(a)192.168.1.41"
> > - ovn-chassis-id="25cc77b3-046f-45c5-af0c-ffb2f77d73f1(a)192.168.1.64"
> > - ovn-chassis-id="baa0199e-d1a4-484c-af13-a41bcad19dbc(a)192.168.1.90"
> >
> > From your 'ovs-vsctl show' I can see the IP encaps 192.168.1.41 and
> > 192.168.1.64 are 'expected'.
> >
> > I would expect to see a duplicate chassis entry with the same IP. Who
> > has this 192.168.1.90 IP?
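> > (A plain 'ip addr | grep 192.168.1.90' on each host should tell us that.)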
> >
> > A couple of further questions:
> > - did you recently upgrade your environment? If so, from which
> > version to which version?
> > - what is the switch type of the cluster where you are running OVN?
> - what is the output of "ovn-sbctl list chassis" and "ovn-sbctl list encap" on your engine node?
>
>
> >
> > >
> > >>
> > >> [root@ovirt1 ~]# ovs-vsctl show
> > >> c0e938f1-b5b5-4d5a-9cda-29dae2986f29
> > >> Bridge br-int
> > >> fail_mode: secure
> > >> Port "ovn-25cc77-0"
> > >> Interface "ovn-25cc77-0"
> > >> type: geneve
> > >> options: {csum="true", key=flow, remote_ip="192.168.1.64"}
> > >> Port "ovn-566849-0"
> > >> Interface "ovn-566849-0"
> > >> type: geneve
> > >> options: {csum="true", key=flow, remote_ip="192.168.1.41"}
> > >> Port br-int
> > >> Interface br-int
> > >> type: internal
> > >> Port "vnet2"
> > >> Interface "vnet2"
> > >> ovs_version: "2.11.0"
> > >> [root@ovirt1 ~]# ovs-vsctl list ports
> > >> ovs-vsctl: unknown table "ports"
> > >> [root@ovirt1 ~]# ovs-vsctl list port
> > >> _uuid : fbf40569-925e-4430-a7c5-c78d58979bbc
> > >> bond_active_slave : []
> > >> bond_downdelay : 0
> > >> bond_fake_iface : false
> > >> bond_mode : []
> > >> bond_updelay : 0
> > >> cvlans : []
> > >> external_ids : {}
> > >> fake_bridge : false
> > >> interfaces : [3207c0cb-3000-40f2-a850-83548f76f090]
> > >> lacp : []
> > >> mac : []
> > >> name : "vnet2"
> > >> other_config : {}
> > >> protected : false
> > >> qos : []
> > >> rstp_statistics : {}
> > >> rstp_status : {}
> > >> statistics : {}
> > >> status : {}
> > >> tag : []
> > >> trunks : []
> > >> vlan_mode : []
> > >>
> > >> _uuid : 8947f82d-a089-429b-8843-71371314cb52
> > >> bond_active_slave : []
> > >> bond_downdelay : 0
> > >> bond_fake_iface : false
> > >> bond_mode : []
> > >> bond_updelay : 0
> > >> cvlans : []
> > >> external_ids : {}
> > >> fake_bridge : false
> > >> interfaces : [ec6a6688-e5d6-4346-ac47-ece1b8379440]
> > >> lacp : []
> > >> mac : []
> > >> name : br-int
> > >> other_config : {}
> > >> protected : false
> > >> qos : []
> > >> rstp_statistics : {}
> > >> rstp_status : {}
> > >> statistics : {}
> > >> status : {}
> > >> tag : []
> > >> trunks : []
> > >> vlan_mode : []
> > >>
> > >> _uuid : 72d612be-853e-43e9-8f5c-ce66cef0bebe
> > >> bond_active_slave : []
> > >> bond_downdelay : 0
> > >> bond_fake_iface : false
> > >> bond_mode : []
> > >> bond_updelay : 0
> > >> cvlans : []
> > >> external_ids : {ovn-chassis-id="5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3(a)192.168.1.41"}
> > >> fake_bridge : false
> > >> interfaces : [a31574fe-515b-420b-859d-7f2ac729638f]
> > >> lacp : []
> > >> mac : []
> > >> name : "ovn-566849-0"
> > >> other_config : {}
> > >> protected : false
> > >> qos : []
> > >> rstp_statistics : {}
> > >> rstp_status : {}
> > >> statistics : {}
> > >> status : {}
> > >> tag : []
> > >> trunks : []
> > >> vlan_mode : []
> > >>
> > >> _uuid : 2043a15f-ec39-4cc3-b875-7be00423dd7a
> > >> bond_active_slave : []
> > >> bond_downdelay : 0
> > >> bond_fake_iface : false
> > >> bond_mode : []
> > >> bond_updelay : 0
> > >> cvlans : []
> > >> external_ids : {ovn-chassis-id="25cc77b3-046f-45c5-af0c-ffb2f77d73f1(a)192.168.1.64"}
> > >> fake_bridge : false
> > >> interfaces : [f9a9e3ff-070e-4044-b601-7f7394dc295f]
> > >> lacp : []
> > >> mac : []
> > >> name : "ovn-25cc77-0"
> > >> other_config : {}
> > >> protected : false
> > >> qos : []
> > >> rstp_statistics : {}
> > >> rstp_status : {}
> > >> statistics : {}
> > >> status : {}
> > >> tag : []
> > >> trunks : []
> > >> vlan_mode : []
> > >> [root@ovirt1 ~]#
> > >>
> > >> [root@ovirt2 ~]# ovs-vsctl show
> > >> 3dbab138-6b90-44c5-af05-b8a944c9bf20
> > >> Bridge br-int
> > >> fail_mode: secure
> > >> Port "ovn-baa019-0"
> > >> Interface "ovn-baa019-0"
> > >> type: geneve
> > >> options: {csum="true", key=flow, remote_ip="192.168.1.90"}
> > >> Port br-int
> > >> Interface br-int
> > >> type: internal
> > >> Port "vnet5"
> > >> Interface "vnet5"
> > >> Port "ovn-566849-0"
> > >> Interface "ovn-566849-0"
> > >> type: geneve
> > >> options: {csum="true", key=flow, remote_ip="192.168.1.41"}
> > >> ovs_version: "2.11.0"
> > >> [root@ovirt2 ~]# ovs-vsctl list port
> > >> _uuid : 151e1188-f07a-4750-a620-392a08e7e7fe
> > >> bond_active_slave : []
> > >> bond_downdelay : 0
> > >> bond_fake_iface : false
> > >> bond_mode : []
> > >> bond_updelay : 0
> > >> cvlans : []
> > >> external_ids : {ovn-chassis-id="baa0199e-d1a4-484c-af13-a41bcad19dbc(a)192.168.1.90"}
> > >> fake_bridge : false
> > >> interfaces : [4d4bc12a-609a-4917-b839-d4f652acdc33]
> > >> lacp : []
> > >> mac : []
> > >> name : "ovn-baa019-0"
> > >> other_config : {}
> > >> protected : false
> > >> qos : []
> > >> rstp_statistics : {}
> > >> rstp_status : {}
> > >> statistics : {}
> > >> status : {}
> > >> tag : []
> > >> trunks : []
> > >> vlan_mode : []
> > >>
> > >> _uuid : 3a862f96-b3ec-46a9-bcf6-f385e5def410
> > >> bond_active_slave : []
> > >> bond_downdelay : 0
> > >> bond_fake_iface : false
> > >> bond_mode : []
> > >> bond_updelay : 0
> > >> cvlans : []
> > >> external_ids : {}
> > >> fake_bridge : false
> > >> interfaces : [777f2819-ca27-4890-8d2f-11349ca0d398]
> > >> lacp : []
> > >> mac : []
> > >> name : br-int
> > >> other_config : {}
> > >> protected : false
> > >> qos : []
> > >> rstp_statistics : {}
> > >> rstp_status : {}
> > >> statistics : {}
> > >> status : {}
> > >> tag : []
> > >> trunks : []
> > >> vlan_mode : []
> > >>
> > >> _uuid : a65109fa-f8b4-4670-8ae8-a2bd0bf6aba3
> > >> bond_active_slave : []
> > >> bond_downdelay : 0
> > >> bond_fake_iface : false
> > >> bond_mode : []
> > >> bond_updelay : 0
> > >> cvlans : []
> > >> external_ids : {ovn-chassis-id="5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3(a)192.168.1.41"}
> > >> fake_bridge : false
> > >> interfaces : [ed442077-f897-4e0b-97a1-a8051e9c3d56]
> > >> lacp : []
> > >> mac : []
> > >> name : "ovn-566849-0"
> > >> other_config : {}
> > >> protected : false
> > >> qos : []
> > >> rstp_statistics : {}
> > >> rstp_status : {}
> > >> statistics : {}
> > >> status : {}
> > >> tag : []
> > >> trunks : []
> > >> vlan_mode : []
> > >>
> > >> _uuid : a1622e6f-fcd0-4a8a-b259-ca4d0ccf1cd2
> > >> bond_active_slave : []
> > >> bond_downdelay : 0
> > >> bond_fake_iface : false
> > >> bond_mode : []
> > >> bond_updelay : 0
> > >> cvlans : []
> > >> external_ids : {}
> > >> fake_bridge : false
> > >> interfaces : [ca368654-54f3-49d0-a71c-8894426df6bf]
> > >> lacp : []
> > >> mac : []
> > >> name : "vnet5"
> > >> other_config : {}
> > >> protected : false
> > >> qos : []
> > >> rstp_statistics : {}
> > >> rstp_status : {}
> > >> statistics : {}
> > >> status : {}
> > >> tag : []
> > >> trunks : []
> > >> vlan_mode : []
> > >> [root@ovirt2 ~]#
> > >>
> > >> Best Regards,
> > >> Strahil Nikolov
> > >>
> > >> On Dec 16, 2019 23:28, Dominik Holler <dholler(a)redhat.com> wrote:
> > >> >
> > >> >
> > >> >
> > >> > On Sat, Dec 14, 2019 at 11:36 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> > >> >>
> > >> >> Hi Dominik,
> > >> >>
> > >> >> yes I was looking for those settings.
> > >> >>
> > >> >> I have added the external provider again, but I guess the mess is even
> > >> >> bigger as I made some stupid decisions (like removing 2 port groups :)
> > >> >> without knowing what I was doing).
> > >> >> Sadly, I can't remove all packages on the engine and hosts and reinstall
> > >> >> them from scratch.
> > >> >>
> > >> >> Pip fails to install the openstacksdk on the engine (CentOS 7 is not
> > >> >> great for such tasks), and my lack of knowledge of OVN makes it even
> > >> >> more difficult.
> > >> >>
> > >> >> So the symptoms are that 2 machines can communicate with each other only
> > >> >> if they are on the same host; when they are on separate hosts, no
> > >> >> communication happens.
> > >> >>
> > >> >
> > >> > This indicates that the tunnels between the hosts are not created.
> > >> > Can you please check the /var/log/openvswitch/ovn-controller.log on both
> > >> > hosts for errors and warnings, or share parts of the files here?
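> > >> > Something along these lines should surface them quickly (just a sketch):
> > >> >   grep -iE 'warn|err' /var/log/openvswitch/ovn-controller.log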
> > >> > If this does not point us to a problem, OVN has to be reconfigured.
> > >> > If possible, the easiest way to do this would be to ensure that
> > >> > ovirt-provider-ovn is the default network provider of the hosts' cluster,
> > >> > then put one host after another into maintenance mode and reinstall it.
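> > >> > If a full reinstall is not an option, re-running the OVN host configuration
> > >> > by hand might achieve the same result -- assuming ovirt-provider-ovn-driver
> > >> > is installed on the host, something along the lines of:
> > >> >   vdsm-tool ovn-config <engine-ip> <host-tunnel-ip>
> > >> > followed by a restart of ovn-controller. Treat this as a sketch, not a
> > >> > verified procedure.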
> > >> >
> > >> >
> > >> >>
> > >> >> How I created the network via UI:
> > >> >>
> > >> >> 1. Networks - new
> > >> >> 2. Fill in the name
> > >> >> 3. Create on external provider
> > >> >> 4. Network Port security -> disabled (even undefined does not work;
> > >> >> see the port-security check sketched right after this list)
> > >> >> 5. Connect to physical network -> ovirtmgmt
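> > >> >>
> > >> >> To double-check what OVN actually stored for port security, I guess
> > >> >> something like this could be used on the engine (the port UUID being one
> > >> >> of those listed by 'ovn-nbctl show' below):
> > >> >>   ovn-nbctl lsp-get-port-security c1eba112-5eed-4c04-b25c-d3dcfb934546
> > >> >> An empty result should mean no port security is set on that port.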
> > >> >>
> > >> >>
> > >> >> I would be happy to learn more about OVN and thus I would
> > >> >> like to make it work.
> > >> >>
> > >> >> Here is some info from the engine:
> > >> >>
> > >> >> [root@engine ~]# ovn-nbctl show
> > >> >> switch 1288ed26-471c-4bc2-8a7d-4531f306f44c (ovirt-pxelan-2a88b2e0-d04b-4196-ad50-074501e4ed08)
> > >> >> port c1eba112-5eed-4c04-b25c-d3dcfb934546
> > >> >> addresses: ["56:6f:5a:65:00:06"]
> > >> >> port 8b52ab60-f474-4d51-b258-cb2e0a53c34a
> > >> >> type: localnet
> > >> >> addresses: ["unknown"]
> > >> >> port b2753040-881b-487a-92a1-9721da749be4
> > >> >> addresses: ["56:6f:5a:65:00:09"]
> > >> >> [root@engine ~]# ovn-sbctl show
> > >> >> Chassis "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> > >> >> hostname: "ovirt3.localdomain"
> > >> >> Encap geneve
> > >> >> ip: "192.168.1.41"
> > >> >> options: {csum="true"}
> > >> >> Chassis "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> > >> >> hostname: "ovirt1.localdomain"
> > >> >> Encap geneve
> > >> >> ip: "192.168.1.90"
> > >> >> options: {csum="true"}
> > >> >> Chassis "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> > >> >> hostname: "ovirt2.localdomain"
> > >> >> Encap geneve
> > >> >> ip: "192.168.1.64"
> > >> >> options: {csum="true"}
> > >> >> Port_Binding "b2753040-881b-487a-92a1-9721da749be4"
> > >> >> Port_Binding "