[ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network

Marcin Mirecki mmirecki at redhat.com
Fri Jan 13 09:31:24 UTC 2017


Please push the patch into: https://gerrit.ovirt.org/ovirt-provider-ovn
(let me know if you need some directions)



----- Original Message -----
> From: "Sverker Abrahamsson" <sverker at abrahamsson.com>
> To: "Marcin Mirecki" <mmirecki at redhat.com>
> Cc: "Ovirt Users" <users at ovirt.org>
> Sent: Monday, January 9, 2017 1:45:37 PM
> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network
> 
> Ok, found it. The issue is right here:
> 
>          <interface type="bridge">
>              <mac address="00:1a:4a:16:01:54" />
>              <model type="virtio" />
>              <source bridge="br-int" />
>              <virtualport type="openvswitch" />
>              <link state="up" />
>              <boot order="2" />
>              <bandwidth />
>              <virtualport type="openvswitch">
>                  <parameters
> interfaceid="912cba79-982e-4a87-868e-241fedccb59a" />
>              </virtualport>
>          </interface>
> 
> There are two virtualport elements, the first without an id and the
> second with one. On h2 I had fixed this with the patch I posted earlier,
> although I switched back to using br-int after understanding that was the
> correct way. When that hook was copied to h1, the port gets attached fine.
> 
> Patch with updated testcase attached.
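[Editor's note: the cleanup the hook patch performs can be sketched roughly like this. This is a hypothetical helper, not the actual ovirt-provider-ovn patch; it simply drops the bare <virtualport> element and keeps the one carrying the interfaceid parameters, using the element names from the domain XML quoted above.]

```python
# Hypothetical sketch, not the actual hook patch: given an <interface>
# definition with a duplicate bare <virtualport type="openvswitch"/>,
# remove the empty duplicates and keep the <virtualport> that carries
# the <parameters interfaceid="..."/> child.
import xml.etree.ElementTree as ET

def dedup_virtualport(interface_xml):
    iface = ET.fromstring(interface_xml)
    ports = iface.findall('virtualport')
    if len(ports) > 1:
        for port in ports:
            # a bare <virtualport type="openvswitch"/> has no children
            if len(port) == 0:
                iface.remove(port)
    return ET.tostring(iface, encoding='unicode')
```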
> 
> /Sverker
> 
> 
> On 2017-01-09 at 10:41, Sverker Abrahamsson wrote:
> > This is the content of vdsm.log on h1 at this time:
> >
> > 2017-01-06 20:54:12,636 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> > call VM.create succeeded in 0.01 seconds (__init__:515)
> > 2017-01-06 20:54:12,636 INFO  (vm/6dd5291e) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') VM wrapper has started
> > (vm:1901)
> > 2017-01-06 20:54:12,636 INFO  (vm/6dd5291e) [vds] prepared volume
> > path:
> > /rhev/data-center/mnt/h2-int.limetransit.com:_var_lib_exports_iso/1d49c4bc-0fec-4503-a583-d476fa3a370d/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-NetInstall-1611.iso
> > (clientIF:374)
> > 2017-01-06 20:54:12,743 INFO  (vm/6dd5291e) [root]  (hooks:108)
> > 2017-01-06 20:54:12,847 INFO  (vm/6dd5291e) [root]  (hooks:108)
> > 2017-01-06 20:54:12,863 INFO  (vm/6dd5291e) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') <?xml version='1.0'
> > encoding='UTF-8'?>
> > <domain xmlns:ovirt="http://ovirt.org/vm/tune/1.0" type="kvm">
> >     <name>CentOS7_3</name>
> >     <uuid>6dd5291e-6556-4d29-8b4e-ea896e627645</uuid>
> >     <memory>1048576</memory>
> >     <currentMemory>1048576</currentMemory>
> >     <maxMemory slots="16">4294967296</maxMemory>
> >     <vcpu current="1">16</vcpu>
> >     <devices>
> >         <channel type="unix">
> >             <target name="com.redhat.rhevm.vdsm" type="virtio" />
> >             <source mode="bind"
> > path="/var/lib/libvirt/qemu/channels/6dd5291e-6556-4d29-8b4e-ea896e627645.com.redhat.rhevm.vdsm"
> > />
> >         </channel>
> >         <channel type="unix">
> >             <target name="org.qemu.guest_agent.0" type="virtio" />
> >             <source mode="bind"
> > path="/var/lib/libvirt/qemu/channels/6dd5291e-6556-4d29-8b4e-ea896e627645.org.qemu.guest_agent.0"
> > />
> >         </channel>
> >         <input bus="ps2" type="mouse" />
> >         <memballoon model="virtio" />
> >         <controller index="0" model="virtio-scsi" type="scsi" />
> >         <controller index="0" ports="16" type="virtio-serial" />
> >         <video>
> >             <model heads="1" ram="65536" type="qxl" vgamem="16384"
> > vram="32768" />
> >         </video>
> >         <graphics autoport="yes" defaultMode="secure" passwd="*****"
> > passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
> >             <channel mode="secure" name="main" />
> >             <channel mode="secure" name="inputs" />
> >             <channel mode="secure" name="cursor" />
> >             <channel mode="secure" name="playback" />
> >             <channel mode="secure" name="record" />
> >             <channel mode="secure" name="display" />
> >             <channel mode="secure" name="smartcard" />
> >             <channel mode="secure" name="usbredir" />
> >             <listen network="vdsm-ovirtmgmt" type="network" />
> >         </graphics>
> >         <interface type="bridge">
> >             <mac address="00:1a:4a:16:01:54" />
> >             <model type="virtio" />
> >             <source bridge="br-int" />
> >             <virtualport type="openvswitch" />
> >             <link state="up" />
> >             <boot order="2" />
> >             <bandwidth />
> >             <virtualport type="openvswitch">
> >                 <parameters
> > interfaceid="912cba79-982e-4a87-868e-241fedccb59a" />
> >             </virtualport>
> >         </interface>
> >         <disk device="cdrom" snapshot="no" type="file">
> >             <source
> > file="/rhev/data-center/mnt/h2-int.limetransit.com:_var_lib_exports_iso/1d49c4bc-0fec-4503-a583-d476fa3a370d/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-NetInstall-1611.iso"
> > startupPolicy="optional" />
> >             <target bus="ide" dev="hdc" />
> >             <readonly />
> >             <boot order="1" />
> >         </disk>
> >         <channel type="spicevmc">
> >             <target name="com.redhat.spice.0" type="virtio" />
> >         </channel>
> >     </devices>
> >     <metadata>
> >         <ovirt:qos />
> >     </metadata>
> >     <os>
> >         <type arch="x86_64" machine="pc-i440fx-rhel7.2.0">hvm</type>
> >         <smbios mode="sysinfo" />
> >         <bootmenu enable="yes" timeout="10000" />
> >     </os>
> >     <sysinfo type="smbios">
> >         <system>
> >             <entry name="manufacturer">oVirt</entry>
> >             <entry name="product">oVirt Node</entry>
> >             <entry name="version">7-3.1611.el7.centos</entry>
> >             <entry
> > name="serial">62f1adff-b29e-4a7c-abba-c2c4c73248c6</entry>
> >             <entry
> > name="uuid">6dd5291e-6556-4d29-8b4e-ea896e627645</entry>
> >         </system>
> >     </sysinfo>
> >     <clock adjustment="0" offset="variable">
> >         <timer name="rtc" tickpolicy="catchup" />
> >         <timer name="pit" tickpolicy="delay" />
> >         <timer name="hpet" present="no" />
> >     </clock>
> >     <features>
> >         <acpi />
> >     </features>
> >     <cpu match="exact">
> >         <model>SandyBridge</model>
> >         <topology cores="1" sockets="16" threads="1" />
> >         <numa>
> >             <cell cpus="0" memory="1048576" />
> >         </numa>
> >     </cpu>
> > </domain>
> >  (vm:1988)
> > 2017-01-06 20:54:13,046 INFO  (libvirt/events) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') CPU running: onResume
> > (vm:4863)
> > 2017-01-06 20:54:13,058 INFO  (vm/6dd5291e) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') Starting connection
> > (guestagent:245)
> > 2017-01-06 20:54:13,060 INFO  (vm/6dd5291e) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') CPU running: domain
> > initialization (vm:4863)
> > 2017-01-06 20:54:15,154 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
> > call Host.getVMFullList succeeded in 0.01 seconds (__init__:515)
> > 2017-01-06 20:54:17,571 INFO  (periodic/2) [dispatcher] Run and
> > protect: getVolumeSize(sdUUID=u'2ee54fb8-48f2-4576-8cff-f2346504b08b',
> > spUUID=u'584ebd64-0268-0193-025b-00000000038e',
> > imgUUID=u'5a3aae57-ffe0-4a3b-aa87-8461669db7f9',
> > volUUID=u'b6a88789-fcb1-4d3e-911b-2a4d3b6c69c7', options=None)
> > (logUtils:49)
> > 2017-01-06 20:54:17,573 INFO  (periodic/2) [dispatcher] Run and
> > protect: getVolumeSize, Return response: {'truesize': '1859723264',
> > 'apparentsize': '21474836480'} (logUtils:52)
> > 2017-01-06 20:54:21,211 INFO  (periodic/2) [dispatcher] Run and
> > protect: repoStats(options=None) (logUtils:49)
> > 2017-01-06 20:54:21,212 INFO  (periodic/2) [dispatcher] Run and
> > protect: repoStats, Return response:
> > {u'2ee54fb8-48f2-4576-8cff-f2346504b08b': {'code': 0, 'actual': True,
> > 'version': 3, 'acquired': True, 'delay': '0.000936552', 'lastCheck':
> > '1.4', 'valid': True}, u'1d49c4bc-0fec-4503-a583-d476fa3a370d':
> > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
> > '0.000960248', 'lastCheck': '1.4', 'valid': True}} (logUtils:52)
> > 2017-01-06 20:54:23,543 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
> > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
> > 2017-01-06 20:54:23,641 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
> > call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:515)
> > 2017-01-06 20:54:24,918 INFO  (jsonrpc/0) [dispatcher] Run and
> > protect: repoStats(options=None) (logUtils:49)
> > 2017-01-06 20:54:24,918 INFO  (jsonrpc/0) [dispatcher] Run and
> > protect: repoStats, Return response:
> > {u'2ee54fb8-48f2-4576-8cff-f2346504b08b': {'code': 0, 'actual': True,
> > 'version': 3, 'acquired': True, 'delay': '0.000936552', 'lastCheck':
> > '5.1', 'valid': True}, u'1d49c4bc-0fec-4503-a583-d476fa3a370d':
> > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
> > '0.000960248', 'lastCheck': '2.1', 'valid': True}} (logUtils:52)
> > 2017-01-06 20:54:24,924 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> > call Host.getStats succeeded in 0.01 seconds (__init__:515)
> >
> > Vdsm and the OVN driver must have been called, as the port IS created,
> > but with the wrong id. I can't find the faulty id in vdsm.log either;
> > the xml above has the correct id.
> > /Sverker
> >
> > On 2017-01-09 at 10:06, Marcin Mirecki wrote:
> >> The port is set up on the host by the ovirt-provider-ovn-driver.
> >> The driver is invoked by the vdsm hook whenever any operation on
> >> the port is done.
> >> Please ensure that this is installed properly.
> >> You can check the vdsm log (/var/log/vdsm/vdsm.log) to see if the
> >> hook was executed properly.
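[Editor's note: a quick way to do the check Marcin suggests. The log path is the one quoted in this thread; the exact hook log line format depends on the vdsm version, so treat this as a sketch.]

```shell
# Sketch: look for hook execution entries around the VM start time.
# Path as given in this thread; line format varies by vdsm version.
grep -i 'hooks' /var/log/vdsm/vdsm.log | tail -n 20
```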
> >>
> >>
> >> ----- Original Message -----
> >>> From: "Sverker Abrahamsson" <sverker at abrahamsson.com>
> >>> To: "Marcin Mirecki" <mmirecki at redhat.com>
> >>> Cc: "Ovirt Users" <users at ovirt.org>
> >>> Sent: Friday, January 6, 2017 9:00:26 PM
> >>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>> ovirtmgmt network
> >>>
> >>> I created a new VM in the ui and assigned it to host h1. In
> >>> /var/log/ovirt-provider-ovn.log I get the following:
> >>>
> >>> 2017-01-06 20:54:11,940   Request: GET : /v2.0/ports
> >>> 2017-01-06 20:54:11,940   Connecting to remote ovn database:
> >>> tcp:127.0.0.1:6641
> >>> 2017-01-06 20:54:12,157   Connected (number of retries: 2)
> >>> 2017-01-06 20:54:12,158   Response code: 200
> >>> 2017-01-06 20:54:12,158   Response body: {"ports": [{"name":
> >>> "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873", "network_id":
> >>> "e53554cf-e553-40a1-8d22-9c8d95ec0601", "device_owner": "oVirt",
> >>> "mac_address": "00:1a:4a:16:01:51", "id":
> >>> "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873", "device_id":
> >>> "40cd7328-d575-4c3d-b656-9ef9bacc0078"}, {"name":
> >>> "92f6d3c8-68b3-4986-9c09-60bee04644b5", "network_id":
> >>> "e53554cf-e553-40a1-8d22-9c8d95ec0601", "device_owner": "oVirt",
> >>> "mac_address": "00:1a:4a:16:01:52", "id":
> >>> "92f6d3c8-68b3-4986-9c09-60bee04644b5", "device_id":
> >>> "4baefa8c-3822-4de0-9cd0-1d025bab7844"}]}
> >>> 2017-01-06 20:54:12,160   Request: SHOW :
> >>> /v2.0/networks/e53554cf-e553-40a1-8d22-9c8d95ec0601
> >>> 2017-01-06 20:54:12,160   Connecting to remote ovn database:
> >>> tcp:127.0.0.1:6641
> >>> 2017-01-06 20:54:12,377   Connected (number of retries: 2)
> >>> 2017-01-06 20:54:12,378   Response code: 200
> >>> 2017-01-06 20:54:12,378   Response body: {"network": {"id":
> >>> "e53554cf-e553-40a1-8d22-9c8d95ec0601", "name": "ovirtbridge"}}
> >>> 2017-01-06 20:54:12,380   Request: POST : /v2.0/ports
> >>> 2017-01-06 20:54:12,380   Request body:
> >>> {
> >>>     "port" : {
> >>>       "name" : "nic1",
> >>>       "binding:host_id" : "h1.limetransit.com",
> >>>       "admin_state_up" : true,
> >>>       "device_id" : "e8553a88-05f0-401d-8b9b-5fff77f7bbbe",
> >>>       "device_owner" : "oVirt",
> >>>       "mac_address" : "00:1a:4a:16:01:54",
> >>>       "network_id" : "e53554cf-e553-40a1-8d22-9c8d95ec0601"
> >>>     }
> >>> }
> >>> 2017-01-06 20:54:12,380   Connecting to remote ovn database:
> >>> tcp:127.0.0.1:6641
> >>> 2017-01-06 20:54:12,610   Connected (number of retries: 2)
> >>> 2017-01-06 20:54:12,614   Response code: 200
> >>> 2017-01-06 20:54:12,614   Response body: {"port": {"name":
> >>> "912cba79-982e-4a87-868e-241fedccb59a", "network_id":
> >>> "e53554cf-e553-40a1-8d22-9c8d95ec0601", "device_owner": "oVirt",
> >>> "mac_address": "00:1a:4a:16:01:54", "id":
> >>> "912cba79-982e-4a87-868e-241fedccb59a", "device_id":
> >>> "e8553a88-05f0-401d-8b9b-5fff77f7bbbe"}}
> >>>
> >>> h1:/var/log/messages
> >>> Jan  6 20:54:12 h1 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl
> >>> --timeout=5 -- --if-exists del-port vnet1 -- add-port br-int vnet1 --
> >>> set Interface vnet1
> >>> "external-ids:attached-mac=\"00:1a:4a:16:01:54\"" --
> >>> set Interface vnet1
> >>> "external-ids:iface-id=\"20388407-0f76-41d8-97aa-8e2b5978f908\"" -- set
> >>> Interface vnet1
> >>> "external-ids:vm-id=\"6dd5291e-6556-4d29-8b4e-ea896e627645\"" -- set
> >>> Interface vnet1 external-ids:iface-status=active
> >>>
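[Editor's note: to compare the id vdsm actually wrote against the ports known to OVN, the external-ids can be read back directly. A sketch, using vnet1 as in the log above.]

```shell
# Sketch: read back the external-ids set on the port (vnet1 as in the
# log above) on the host running the VM...
ovs-vsctl --columns=external_ids list Interface vnet1
# ...and compare the iface-id against the port ids listed on the OVN
# central db host:
ovn-nbctl show
```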
> >>> [root at h2 ~]# ovn-nbctl show
> >>>       switch e53554cf-e553-40a1-8d22-9c8d95ec0601 (ovirtbridge)
> >>>           port 4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873
> >>>               addresses: ["00:1a:4a:16:01:51"]
> >>>           port 912cba79-982e-4a87-868e-241fedccb59a
> >>>               addresses: ["00:1a:4a:16:01:54"]
> >>>           port 92f6d3c8-68b3-4986-9c09-60bee04644b5
> >>>               addresses: ["00:1a:4a:16:01:52"]
> >>>           port ovirtbridge-port2
> >>>               addresses: ["unknown"]
> >>>           port ovirtbridge-port1
> >>>               addresses: ["unknown"]
> >>> [root at h2 ~]# ovn-sbctl show
> >>> Chassis "6e4dd29f-7607-48d7-8e5a-eef4c6aeefb5"
> >>>       hostname: "h2.limetransit.com"
> >>>       Encap geneve
> >>>           ip: "148.251.126.50"
> >>>           options: {csum="true"}
> >>>       Port_Binding "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873"
> >>>       Port_Binding "ovirtbridge-port1"
> >>> Chassis "4f10fb04-8fb2-48d7-8a3f-ea6444c02cf9"
> >>>       hostname: "h1.limetransit.com"
> >>>       Encap geneve
> >>>           ip: "144.76.84.73"
> >>>           options: {csum="true"}
> >>>       Port_Binding "ovirtbridge-port2"
> >>>       Port_Binding "92f6d3c8-68b3-4986-9c09-60bee04644b5"
> >>>
> >>> I.e. same issue
> >>> /Sverker
> >>>
> >>> On 2017-01-06 at 20:49, Sverker Abrahamsson wrote:
> >>>> The port is created from Ovirt UI, the ovs-vsctl command below is
> >>>> executed when VM is started. In /var/log/ovirt-provider-ovn.log on h2
> >>>> I get the following:
> >>>>
> >>>> 2017-01-06 20:19:25,452   Request: GET : /v2.0/ports
> >>>> 2017-01-06 20:19:25,452   Connecting to remote ovn database:
> >>>> tcp:127.0.0.1:6641
> >>>> 2017-01-06 20:19:25,670   Connected (number of retries: 2)
> >>>> 2017-01-06 20:19:25,670   Response code: 200
> >>>> 2017-01-06 20:19:25,670   Response body: {"ports": [{"name":
> >>>> "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873", "network_id":
> >>>> "e53554cf-e553-40a1-8d22-9c8d95ec0601", "device_owner": "oVirt",
> >>>> "mac_address": "00:1a:4a:16:01:51", "id":
> >>>> "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873", "device_id":
> >>>> "40cd7328-d575-4c3d-b656-9ef9bacc0078"}, {"name":
> >>>> "92f6d3c8-68b3-4986-9c09-60bee04644b5", "network_id":
> >>>> "e53554cf-e553-40a1-8d22-9c8d95ec0601", "device_owner": "oVirt",
> >>>> "mac_address": "00:1a:4a:16:01:52", "id":
> >>>> "92f6d3c8-68b3-4986-9c09-60bee04644b5", "device_id":
> >>>> "4baefa8c-3822-4de0-9cd0-1d025bab7844"}]}
> >>>> 2017-01-06 20:19:25,673   Request: PUT :
> >>>> /v2.0/ports/92f6d3c8-68b3-4986-9c09-60bee04644b5
> >>>> 2017-01-06 20:19:25,673   Request body:
> >>>> {
> >>>>    "port" : {
> >>>>      "binding:host_id" : "h1.limetransit.com",
> >>>>      "security_groups" : null
> >>>>    }
> >>>> }
> >>>> 2017-01-06 20:19:25,673   Connecting to remote ovn database:
> >>>> tcp:127.0.0.1:6641
> >>>> 2017-01-06 20:19:25,890   Connected (number of retries: 2)
> >>>> 2017-01-06 20:19:25,891   Response code: 200
> >>>> 2017-01-06 20:19:25,891   Response body: {"port": {"name":
> >>>> "92f6d3c8-68b3-4986-9c09-60bee04644b5", "network_id":
> >>>> "e53554cf-e553-40a1-8d22-9c8d95ec0601", "device_owner": "oVirt",
> >>>> "mac_address": "00:1a:4a:16:01:52", "id":
> >>>> "92f6d3c8-68b3-4986-9c09-60bee04644b5", "device_id":
> >>>> "4baefa8c-3822-4de0-9cd0-1d025bab7844"}}
> >>>>
> >>>> In /var/log/messages on h1 I get the following:
> >>>>
> >>>> Jan  6 20:18:56 h1 dbus-daemon: dbus[1339]: [system] Successfully
> >>>> activated service 'org.freedesktop.problems'
> >>>> Jan  6 20:19:26 h1 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl
> >>>> --timeout=5 -- --if-exists del-port vnet0 -- add-port br-int vnet0 --
> >>>> set Interface vnet0 "external-ids:attached-mac=\"00:1a:4a:16:01:52\""
> >>>> -- set Interface vnet0
> >>>> "external-ids:iface-id=\"72dafda5-03c2-4bb6-bcb6-241fa5c0a1f3\"" --
> >>>> set Interface vnet0
> >>>> "external-ids:vm-id=\"4d0c134a-11a0-40f4-b2fb-c13c17c7251c\"" -- set
> >>>> Interface vnet0 external-ids:iface-status=active
> >>>> Jan  6 20:19:26 h1 kernel: device vnet0 entered promiscuous mode
> >>>> Jan  6 20:19:26 h1 avahi-daemon[1391]: Registering new address record
> >>>> for fe80::fc1a:4aff:fe16:152 on vnet0.*.
> >>>> Jan  6 20:19:26 h1 systemd-machined: New machine qemu-4-CentOS72.
> >>>> Jan  6 20:19:26 h1 systemd: Started Virtual Machine qemu-4-CentOS72.
> >>>> Jan  6 20:19:26 h1 systemd: Starting Virtual Machine qemu-4-CentOS72.
> >>>>
> >>>> [root at h2 ~]# ovn-nbctl show
> >>>>      switch e53554cf-e553-40a1-8d22-9c8d95ec0601 (ovirtbridge)
> >>>>          port 4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873
> >>>>              addresses: ["00:1a:4a:16:01:51"]
> >>>>          port 92f6d3c8-68b3-4986-9c09-60bee04644b5
> >>>>              addresses: ["00:1a:4a:16:01:52"]
> >>>>          port ovirtbridge-port2
> >>>>              addresses: ["unknown"]
> >>>>          port ovirtbridge-port1
> >>>>              addresses: ["unknown"]
> >>>> [root at h2 ~]# ovn-sbctl show
> >>>> Chassis "6e4dd29f-7607-48d7-8e5a-eef4c6aeefb5"
> >>>>      hostname: "h2.limetransit.com"
> >>>>      Encap geneve
> >>>>          ip: "148.251.126.50"
> >>>>          options: {csum="true"}
> >>>>      Port_Binding "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873"
> >>>>      Port_Binding "ovirtbridge-port1"
> >>>> Chassis "4f10fb04-8fb2-48d7-8a3f-ea6444c02cf9"
> >>>>      hostname: "h1.limetransit.com"
> >>>>      Encap geneve
> >>>>          ip: "144.76.84.73"
> >>>>          options: {csum="true"}
> >>>>      Port_Binding "ovirtbridge-port2"
> >>>>
> >>>> I.e. the port is set up with the wrong ID and not attached to OVN.
> >>>>
> >>>> If I correct external-ids:iface-id like this:
> >>>> [root at h1 ~]# ovs-vsctl set Interface vnet0
> >>>> "external-ids:iface-id=\"92f6d3c8-68b3-4986-9c09-60bee04644b5\""
> >>>>
> >>>> then sb is correct:
> >>>> [root at h2 ~]# ovn-sbctl show
> >>>> Chassis "6e4dd29f-7607-48d7-8e5a-eef4c6aeefb5"
> >>>>      hostname: "h2.limetransit.com"
> >>>>      Encap geneve
> >>>>          ip: "148.251.126.50"
> >>>>          options: {csum="true"}
> >>>>      Port_Binding "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873"
> >>>>      Port_Binding "ovirtbridge-port1"
> >>>> Chassis "4f10fb04-8fb2-48d7-8a3f-ea6444c02cf9"
> >>>>      hostname: "h1.limetransit.com"
> >>>>      Encap geneve
> >>>>          ip: "144.76.84.73"
> >>>>          options: {csum="true"}
> >>>>      Port_Binding "ovirtbridge-port2"
> >>>>      Port_Binding "92f6d3c8-68b3-4986-9c09-60bee04644b5"
> >>>>
> >>>> I don't know where the ID 72dafda5-03c2-4bb6-bcb6-241fa5c0a1f3
> >>>> comes from; it doesn't show up in any log other than /var/log/messages.
> >>>>
> >>>> If I do the same exercise on the same host that the engine is
> >>>> running on, then the port for the VM gets the right id and works
> >>>> from the beginning.
> >>>> /Sverker
> >>>>
> >>>> On 2017-01-03 at 10:23, Marcin Mirecki wrote:
> >>>>> How did you create this port?
> >>>>>   From the oVirt engine UI?
> >>>>> The OVN provider creates the port when you add the port in the
> >>>>> engine UI,
> >>>>> it is then plugged into the ovs bridge by the VIF driver.
> >>>>> Please attach /var/log/ovirt-provider-ovn.log
> >>>>>
> >>>>>
> >>>>>
> >>>>> ----- Original Message -----
> >>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>> Sent: Tuesday, January 3, 2017 2:06:22 AM
> >>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>> ovirtmgmt
> >>>>>> network
> >>>>>>
> >>>>>> Found an issue with Ovirt - OVN integration.
> >>>>>>
> >>>>>> Engine and OVN central db running on host h2. Created VM to run
> >>>>>> on host
> >>>>>> h1, which is started. Ovn db state:
> >>>>>>
> >>>>>> [root at h2 env3]# ovn-nbctl show
> >>>>>>        switch e53554cf-e553-40a1-8d22-9c8d95ec0601 (ovirtbridge)
> >>>>>>            port 4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873
> >>>>>>                addresses: ["00:1a:4a:16:01:51"]
> >>>>>>            port 92f6d3c8-68b3-4986-9c09-60bee04644b5
> >>>>>>                addresses: ["00:1a:4a:16:01:52"]
> >>>>>>            port ovirtbridge-port2
> >>>>>>                addresses: ["unknown"]
> >>>>>>            port ovirtbridge-port1
> >>>>>>                addresses: ["unknown"]
> >>>>>> [root at h2 env3]# ovn-sbctl show
> >>>>>> Chassis "6e4dd29f-7607-48d7-8e5a-eef4c6aeefb5"
> >>>>>>        hostname: "h2.limetransit.com"
> >>>>>>        Encap geneve
> >>>>>>            ip: "148.251.126.50"
> >>>>>>            options: {csum="true"}
> >>>>>>        Port_Binding "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873"
> >>>>>>        Port_Binding "ovirtbridge-port1"
> >>>>>> Chassis "4f10fb04-8fb2-48d7-8a3f-ea6444c02cf9"
> >>>>>>        hostname: "h1.limetransit.com"
> >>>>>>        Encap geneve
> >>>>>>            ip: "144.76.84.73"
> >>>>>>            options: {csum="true"}
> >>>>>>        Port_Binding "ovirtbridge-port2"
> >>>>>>
> >>>>>> Port 92f6d3c8-68b3-4986-9c09-60bee04644b5 is for the new VM which is
> >>>>>> started on h1, but it is not assigned to that chassis. The reason is
> >>>>>> that on h1 the port on br-int is created like this:
> >>>>>>
> >>>>>> ovs-vsctl --timeout=5 -- --if-exists del-port vnet0 -- add-port
> >>>>>> br-int
> >>>>>> vnet0 -- set Interface vnet0
> >>>>>> "external-ids:attached-mac=\"00:1a:4a:16:01:52\"" -- set
> >>>>>> Interface vnet0
> >>>>>> "external-ids:iface-id=\"35bcbe31-2c7e-4d97-add9-ce150eeb2f11\""
> >>>>>> -- set
> >>>>>> Interface vnet0
> >>>>>> "external-ids:vm-id=\"4d0c134a-11a0-40f4-b2fb-c13c17c7251c\"" -- set
> >>>>>> Interface vnet0 external-ids:iface-status=active
> >>>>>>
> >>>>>> I.e. the external id of the interface is wrong. When I manually
> >>>>>> change it to the right id like this, the port works fine:
> >>>>>>
> >>>>>> ovs-vsctl --timeout=5 -- --if-exists del-port vnet0 -- add-port
> >>>>>> br-int
> >>>>>> vnet0 -- set Interface vnet0
> >>>>>> "external-ids:attached-mac=\"00:1a:4a:16:01:52\"" -- set
> >>>>>> Interface vnet0
> >>>>>> "external-ids:iface-id=\"92f6d3c8-68b3-4986-9c09-60bee04644b5\""
> >>>>>> -- set
> >>>>>> Interface vnet0
> >>>>>> "external-ids:vm-id=\"4d0c134a-11a0-40f4-b2fb-c13c17c7251c\"" -- set
> >>>>>> Interface vnet0 external-ids:iface-status=active
> >>>>>>
> >>>>>> sb db after correcting the port:
> >>>>>>
> >>>>>> Chassis "6e4dd29f-7607-48d7-8e5a-eef4c6aeefb5"
> >>>>>>        hostname: "h2.limetransit.com"
> >>>>>>        Encap geneve
> >>>>>>            ip: "148.251.126.50"
> >>>>>>            options: {csum="true"}
> >>>>>>        Port_Binding "4981ee5f-6e15-4bd5-a1cf-7ead9bdd5873"
> >>>>>>        Port_Binding "ovirtbridge-port1"
> >>>>>> Chassis "4f10fb04-8fb2-48d7-8a3f-ea6444c02cf9"
> >>>>>>        hostname: "h1.limetransit.com"
> >>>>>>        Encap geneve
> >>>>>>            ip: "144.76.84.73"
> >>>>>>            options: {csum="true"}
> >>>>>>        Port_Binding "ovirtbridge-port2"
> >>>>>>        Port_Binding "92f6d3c8-68b3-4986-9c09-60bee04644b5"
> >>>>>>
> >>>>>> I don't know where the faulty id comes from; it's not in any logs.
> >>>>>> In the domain xml as printed in vdsm.log, the id is correct:
> >>>>>>
> >>>>>>            <interface type="bridge">
> >>>>>>                <mac address="00:1a:4a:16:01:52" />
> >>>>>>                <model type="virtio" />
> >>>>>>                <source bridge="br-int" />
> >>>>>>                <virtualport type="openvswitch" />
> >>>>>>                <link state="up" />
> >>>>>>                <boot order="2" />
> >>>>>>                <bandwidth />
> >>>>>>                <virtualport type="openvswitch">
> >>>>>>                    <parameters
> >>>>>> interfaceid="92f6d3c8-68b3-4986-9c09-60bee04644b5" />
> >>>>>>                </virtualport>
> >>>>>>            </interface>
> >>>>>>
> >>>>>> Where is the ovs-vsctl command line built for this call?
> >>>>>>
> >>>>>> /Sverker
> >>>>>>
> >>>>>>
> >>>>>> On 2017-01-02 at 13:40, Sverker Abrahamsson wrote:
> >>>>>>> Got it to work now by following the env8 example in the OVN
> >>>>>>> tutorial, where a port is added with type l2gateway. Not sure how
> >>>>>>> that is different from the localnet variant, but I didn't succeed
> >>>>>>> in getting that one working. Now I'm able to ping and telnet over
> >>>>>>> the tunnel, but not ssh, even though the port answers to telnet.
> >>>>>>> Neither does nfs traffic work, even though the mount did. I
> >>>>>>> suspect an MTU issue. I did notice that ovn-controller starts too
> >>>>>>> early, before the network interfaces are established, and hence
> >>>>>>> can't reach the db. As this is a purely OVS/OVN issue I'll ask
> >>>>>>> about it on their mailing list.
> >>>>>>>
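[Editor's note: for readers following along, an env8-style l2gateway port is created along these lines. All names here are hypothetical except the switch and chassis ids taken from this thread; option names should be verified against the ovn-nb(5) documentation for your OVN version.]

```shell
# Sketch with hypothetical port/network names: attach the logical switch
# to a physical network via an l2gateway port bound to one chassis.
# Verify option names against your OVN version's ovn-nb(5) docs.
ovn-nbctl lsp-add ovirtbridge ovirtbridge-gw
ovn-nbctl lsp-set-type ovirtbridge-gw l2gateway
ovn-nbctl lsp-set-addresses ovirtbridge-gw unknown
ovn-nbctl lsp-set-options ovirtbridge-gw network_name=physnet \
    l2gateway-chassis=6e4dd29f-7607-48d7-8e5a-eef4c6aeefb5
```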
> >>>>>>> Getting back to the original issue with Ovirt, I've now added the
> >>>>>>> second host h1 to ovirt-engine. I had to do the same as with h2 to
> >>>>>>> create a dummy ovirtmgmt network, but configured access via the
> >>>>>>> public IP. My firewall settings were replaced with an iptables
> >>>>>>> config and vdsm.conf was overwritten when the engine was set up,
> >>>>>>> so those had to be manually restored. It would be preferable if it
> >>>>>>> were possible to configure ovirt-engine so that it does not "own"
> >>>>>>> the host, and instead complies with the settings the host already
> >>>>>>> has rather than enforcing its own view.
> >>>>>>>
> >>>>>>> Apart from that it seems the second host works, although I need to
> >>>>>>> resolve the traffic issue over the OVS tunnel.
> >>>>>>> /Sverker
> >>>>>>>
> >>>>>>> On 2017-01-02 at 01:13, Sverker Abrahamsson wrote:
> >>>>>>>> 1. That is not possible, as ovirt (or vdsm) will rewrite the
> >>>>>>>> network configuration to a non-working state. That is why I've
> >>>>>>>> set that interface as hidden to vdsm, and why I'm keen on getting
> >>>>>>>> OVS/OVN to work.
> >>>>>>>>
> >>>>>>>> 2. I've been reading the docs for OVN and am starting to connect
> >>>>>>>> the dots, which is not trivial as it is complex. Some insights so
> >>>>>>>> far:
> >>>>>>>>
> >>>>>>>> First step is the OVN database, installed by
> >>>>>>>> openvswitch-ovn-central, which I currently have running on the h2
> >>>>>>>> host. The 'ovn-nbctl' and 'ovn-sbctl' commands can only be
> >>>>>>>> executed on a database node. Two ip's are given to 'vdsm-tool
> >>>>>>>> ovn-config <ip to database> <tunnel ip>' as arguments, where
> >>>>>>>> <ip to database> is how this OVN node reaches the database and
> >>>>>>>> <tunnel ip> is the ip to which other OVN nodes set up a tunnel
> >>>>>>>> to this node. I.e. it is not for creating a tunnel to the
> >>>>>>>> database, which is what I first thought from the description in
> >>>>>>>> the blog post.
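[Editor's note: in other words, a node is configured along these lines. The addresses are examples only, not taken from this thread.]

```shell
# Example addresses only: 192.0.2.10 is where this node reaches the OVN
# central db, 198.51.100.5 is this node's own tunnel endpoint IP that
# other chassis will target with geneve tunnels.
vdsm-tool ovn-config 192.0.2.10 198.51.100.5
```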
> >>>>>>>>
> >>>>>>>> The tunnel between OVN nodes is of type geneve, which is a
> >>>>>>>> UDP-based protocol, but I have not been able to find anywhere
> >>>>>>>> which port is used so that I can open it in firewalld. I have
> >>>>>>>> added OVN on another host, called h1, and connected it to the
> >>>>>>>> db. I see there is traffic to the db port, but I don't see any
> >>>>>>>> geneve traffic between the nodes.
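[Editor's note: geneve has an IANA-assigned port, UDP 6081, so the firewalld opening the author is looking for would look like this, run on each host.]

```shell
# Geneve uses IANA-assigned UDP port 6081; open it on each host so the
# inter-chassis tunnels can pass traffic.
firewall-cmd --permanent --add-port=6081/udp
firewall-cmd --reload
```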
> >>>>>>>>
> >>>>>>>> Ovirt is now able to create its vnet0 interface on the br-int
> >>>>>>>> ovs bridge, but then I run into the next issue. How do I create
> >>>>>>>> a connection from the logical switch to the physical host? I
> >>>>>>>> need that to a) get a connection out to the internet through a
> >>>>>>>> masqueraded interface or ipv6, and b) be able to run a dhcp
> >>>>>>>> server to give IPs to the VMs.
> >>>>>>>>
> >>>>>>>> /Sverker
> >>>>>>>>
> >>>>>>>> On 2016-12-30 at 18:05, Marcin Mirecki wrote:
> >>>>>>>>> 1. Why not use your physical nic for ovirtmgmt then?
> >>>>>>>>>
> >>>>>>>>> 2. "ovn-nbctl ls-add" does not add a bridge, but a logical
> >>>>>>>>> switch. br-int is an internal OVN implementation detail, which
> >>>>>>>>> the user should not care about. What you see in the ovirt UI
> >>>>>>>>> are logical networks. They are implemented as OVN logical
> >>>>>>>>> switches in the case of the OVN provider.
> >>>>>>>>>
> >>>>>>>>> Please look at:
> >>>>>>>>> http://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
> >>>>>>>>> You can get the latest rpms from here:
> >>>>>>>>> http://resources.ovirt.org/repos/ovirt/experimental/master/ovirt-provider-ovn_fc24_46/rpm/fc24/noarch/
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> ----- Original Message -----
> >>>>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>> Sent: Friday, December 30, 2016 4:25:58 PM
> >>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>>>> ovirtmgmt network
> >>>>>>>>>>
> >>>>>>>>>> 1. No, I did not want to put the ovirtmgmt bridge on my
> >>>>>>>>>> physical nic, as it always messed up the network config,
> >>>>>>>>>> making the host unreachable. I have put an ovs bridge on this
> >>>>>>>>>> nic, which I will use to make tunnels when I add other hosts.
> >>>>>>>>>> Maybe br-int will be used for that instead; I'll see when I
> >>>>>>>>>> get that far.
> >>>>>>>>>>
> >>>>>>>>>> As it is now I have a dummy interface for the ovirtmgmt
> >>>>>>>>>> bridge, but this will probably not work when I add other
> >>>>>>>>>> hosts, as that bridge cannot connect to the other hosts. I'm
> >>>>>>>>>> considering keeping this just as a dummy to keep ovirt engine
> >>>>>>>>>> satisfied, while the actual communication happens over OVN/OVS
> >>>>>>>>>> bridges and tunnels.
> >>>>>>>>>>
> >>>>>>>>>> 2. On
> >>>>>>>>>> https://www.ovirt.org//develop/release-management/features/ovirt-ovn-provider/
> >>>>>>>>>> there are instructions on how to add an OVS bridge to OVN with
> >>>>>>>>>> 'ovn-nbctl ls-add <network name>'. If you want to use br-int,
> >>>>>>>>>> then it makes sense to make that bridge visible in the ovirt
> >>>>>>>>>> webui under networks so that it can be selected for VMs.
> >>>>>>>>>>
> >>>>>>>>>> It doesn't quite make sense to me that I can select another
> >>>>>>>>>> network for my VM, but then that setting is not used when
> >>>>>>>>>> setting up the network.
> >>>>>>>>>>
> >>>>>>>>>> /Sverker
> >>>>>>>>>>
> >>>>>>>>>> On 2016-12-30 at 15:34, Marcin Mirecki wrote:
> >>>>>>>>>>> Hi,
> >>>>>>>>>>>
> >>>>>>>>>>> The OVN provider does not require you to add any bridges
> >>>>>>>>>>> manually.
> >>>>>>>>>>> As I understand it, we were dealing with two problems:
> >>>>>>>>>>> 1. You only had one physical nic and wanted to put a bridge
> >>>>>>>>>>>    on it, attaching the management network to the bridge.
> >>>>>>>>>>>    This was the reason for creating the bridge (the
> >>>>>>>>>>>    recommended setup would be to use a separate physical nic
> >>>>>>>>>>>    for the management network). This bridge has nothing to do
> >>>>>>>>>>>    with the OVN bridge.
> >>>>>>>>>>> 2. OVN - you want to use OVN on this system. For this you
> >>>>>>>>>>>    have to install OVN on your hosts. This should create the
> >>>>>>>>>>>    br-int bridge, which is then used by the OVN provider.
> >>>>>>>>>>>    This br-int bridge must be configured to connect to other
> >>>>>>>>>>>    hosts using geneve tunnels.
> >>>>>>>>>>>
> >>>>>>>>>>> In both cases the systems will not be aware of any bridges
> >>>>>>>>>>> you create. They need a nic (be it physical or virtual) to
> >>>>>>>>>>> connect to other systems. Usually this is the physical nic.
> >>>>>>>>>>> In your case you decided to put a bridge on the physical nic,
> >>>>>>>>>>> and give oVirt a virtual nic attached to this bridge. This
> >>>>>>>>>>> works, but keep in mind that the bridge you have introduced
> >>>>>>>>>>> is outside of oVirt's (and OVN's) control (and as such is not
> >>>>>>>>>>> supported).
> >>>>>>>>>>>
> >>>>>>>>>>>> What is the purpose of
> >>>>>>>>>>>> adding my bridges to Ovirt through the external provider and
> >>>>>>>>>>>> configure
> >>>>>>>>>>>> them on my VM
> >>>>>>>>>>> I am not quite sure I understand.
> >>>>>>>>>>> The external provider (the OVN provider, to be specific) does
> >>>>>>>>>>> not add any bridges to the system. It uses the br-int bridge
> >>>>>>>>>>> created by OVN. The networks created by the OVN provider are
> >>>>>>>>>>> purely logical entities, implemented using the OVN br-int
> >>>>>>>>>>> bridge.
> >>>>>>>>>>>
> >>>>>>>>>>> Marcin
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>> Sent: Friday, December 30, 2016 12:15:43 PM
> >>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>> network
> >>>>>>>>>>>>
> >>>>>>>>>>>> Hi
> >>>>>>>>>>>> That is the logic I don't quite understand. What is the
> >>>>>>>>>>>> purpose of adding my bridges to Ovirt through the external
> >>>>>>>>>>>> provider and configuring them on my VM if you are
> >>>>>>>>>>>> disregarding that and using br-int anyway?
> >>>>>>>>>>>>
> >>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>
> >>>>>>>>>>>> On 2016-12-30 at 10:53, Marcin Mirecki wrote:
> >>>>>>>>>>>>> Sverker,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> br-int is the integration bridge created by default in
> >>>>>>>>>>>>> OVN. This
> >>>>>>>>>>>>> is the
> >>>>>>>>>>>>> bridge we use for the OVN provider. As OVN is required to be
> >>>>>>>>>>>>> installed,
> >>>>>>>>>>>>> we assume that this bridge is present.
> >>>>>>>>>>>>> Using any other ovs bridge is not supported, and will require
> >>>>>>>>>>>>> custom code
> >>>>>>>>>>>>> changes (such as the ones you created).
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> The proper setup in your case would probably be to create
> >>>>>>>>>>>>> br-int
> >>>>>>>>>>>>> and
> >>>>>>>>>>>>> connect
> >>>>>>>>>>>>> this to your ovirtbridge, although I don't know the
> >>>>>>>>>>>>> details of
> >>>>>>>>>>>>> your env,
> >>>>>>>>>>>>> so
> >>>>>>>>>>>>> this is just my best guess.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Marcin
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>, "Numan Siddique"
> >>>>>>>>>>>>>> <nusiddiq at redhat.com>
> >>>>>>>>>>>>>> Sent: Friday, December 30, 2016 1:14:50 AM
> >>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Even better: if the value is not hardcoded, then the
> >>>>>>>>>>>>>> configured value is used. I might be misunderstanding
> >>>>>>>>>>>>>> something, but this is the behaviour I expected, rather
> >>>>>>>>>>>>>> than it using br-int.
> >>>>>>>>>>>>>> Attached is a patch which properly sets up the XML in case
> >>>>>>>>>>>>>> there is already a virtual port there, plus test code for
> >>>>>>>>>>>>>> some variants.
> >>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On 2016-12-29 at 22:55, Sverker Abrahamsson wrote:
> >>>>>>>>>>>>>>> When I change
> >>>>>>>>>>>>>>> /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook
> >>>>>>>>>>>>>>> to use BRIDGE_NAME = 'ovirtbridge' instead of the
> >>>>>>>>>>>>>>> hardcoded br-int, I get the expected behaviour and
> >>>>>>>>>>>>>>> working network connectivity in my VM, with an IP
> >>>>>>>>>>>>>>> provided by DHCP.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> /Sverker
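[The hook change discussed above can be sketched roughly as follows. This is a minimal illustration of what a before_device_create hook does to the interface XML, not the actual vdsm hook source; the function name and the duplicate-virtualport cleanup are assumptions based on the patch described in this thread.]

```python
import xml.etree.ElementTree as ET

# BRIDGE_NAME is a stand-in for the constant discussed above; the shipped
# hook hardcodes 'br-int'.
BRIDGE_NAME = 'ovirtbridge'

def rewrite_interface(device_xml):
    """Point the interface at BRIDGE_NAME and keep exactly one
    openvswitch <virtualport> element (the one carrying parameters)."""
    iface = ET.fromstring(device_xml)
    iface.find('source').set('bridge', BRIDGE_NAME)
    # Drop any empty duplicate virtualport elements, keeping the one
    # that carries an interfaceid parameter (the duplicated
    # <virtualport> problem reported earlier in the thread).
    ports = iface.findall('virtualport')
    keep = next((p for p in ports if p.find('parameters') is not None),
                ports[0] if ports else None)
    for p in ports:
        if p is not keep:
            iface.remove(p)
    return ET.tostring(iface, encoding='unicode')

# Interface XML shaped like the libvirt device XML quoted in this thread,
# with the duplicated <virtualport> element.
sample = (
    '<interface type="bridge">'
    '<source bridge="br-int"/>'
    '<virtualport type="openvswitch"/>'
    '<virtualport type="openvswitch">'
    '<parameters interfaceid="912cba79"/>'
    '</virtualport>'
    '</interface>'
)
result = rewrite_interface(sample)
```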
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On 2016-12-29 at 22:07, Sverker Abrahamsson wrote:
> >>>>>>>>>>>>>>>> By default the vNic profile of my OVN bridge ovirtbridge
> >>>>>>>>>>>>>>>> gets a network filter named vdsm-no-mac-spoofing. If I
> >>>>>>>>>>>>>>>> instead set "No filter" then I don't get those ebtables /
> >>>>>>>>>>>>>>>> iptables messages. It seems that there is some issue
> >>>>>>>>>>>>>>>> between ovirt/vdsm and firewalld, which we can put to the
> >>>>>>>>>>>>>>>> side for now.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> It is not clear to me why the port is added on br-int
> >>>>>>>>>>>>>>>> instead of the bridge I've assigned to the VM, which is
> >>>>>>>>>>>>>>>> ovirtbridge.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On 2016-12-29 at 14:20, Sverker Abrahamsson wrote:
> >>>>>>>>>>>>>>>>> The specific command most likely fails because there
> >>>>>>>>>>>>>>>>> is no
> >>>>>>>>>>>>>>>>> chain
> >>>>>>>>>>>>>>>>> named libvirt-J-vnet0, but when should that have been
> >>>>>>>>>>>>>>>>> created?
> >>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> -------- Forwarded message --------
> >>>>>>>>>>>>>>>>> Subject:   Re: [ovirt-users] Issue with OVN/OVS and
> >>>>>>>>>>>>>>>>> mandatory ovirtmgmt network
> >>>>>>>>>>>>>>>>> Date:      Thu, 29 Dec 2016 08:06:29 -0500 (EST)
> >>>>>>>>>>>>>>>>> From:      Marcin Mirecki<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>>> To:        Sverker Abrahamsson<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>> Cc:        Ovirt Users<users at ovirt.org>, Lance Richardson
> >>>>>>>>>>>>>>>>> <lrichard at redhat.com>, Numan Siddique<nusiddiq at redhat.com>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Let me add the OVN team.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Lance, Numan,
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Can you please look at this?
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Trying to plug a vNIC results in:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 ovs-vsctl:
> >>>>>>>>>>>>>>>>>>>>>>>> ovs|00001|vsctl|INFO|Called as
> >>>>>>>>>>>>>>>>>>>>>>>> ovs-vsctl
> >>>>>>>>>>>>>>>>>>>>>>>> --timeout=5 -- --if-exists del-port vnet0 --
> >>>>>>>>>>>>>>>>>>>>>>>> add-port
> >>>>>>>>>>>>>>>>>>>>>>>> br-int
> >>>>>>>>>>>>>>>>>>>>>>>> vnet0 --
> >>>>>>>>>>>>>>>>>>>>>>>> set Interface vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> "external-ids:attached-mac=\"00:1a:4a:16:01:51\""
> >>>>>>>>>>>>>>>>>>>>>>>> -- set Interface vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> "external-ids:iface-id=\"e8853aac-8a75-41b0-8010-e630017dcdd8\""
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>> set Interface vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> "external-ids:vm-id=\"b9440d60-ef5a-4e2b-83cf-081df7c09e6f\""
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>> set
> >>>>>>>>>>>>>>>>>>>>>>>> Interface vnet0 external-ids:iface-status=active
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 kernel: device vnet0 entered
> >>>>>>>>>>>>>>>>>>>>>>>> promiscuous
> >>>>>>>>>>>>>>>>>>>>>>>> mode
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING:
> >>>>>>>>>>>>>>>>>>>>>>>> COMMAND_FAILED:
> >>>>>>>>>>>>>>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -D
> >>>>>>>>>>>>>>>>>>>>>>>> PREROUTING
> >>>>>>>>>>>>>>>>>>>>>>>> -i vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> -j
> >>>>>>>>>>>>>>>>>>>>>>>> libvirt-J-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING:
> >>>>>>>>>>>>>>>>>>>>>>>> COMMAND_FAILED:
> >>>>>>>>>>>>>>>>> More details below
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>>>>>>>> Sent: Thursday, December 29, 2016 1:42:11 PM
> >>>>>>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and
> >>>>>>>>>>>>>>>>>> mandatory
> >>>>>>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Hi
> >>>>>>>>>>>>>>>>>> Same problem still..
> >>>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> On 2016-12-29 at 13:34, Marcin Mirecki wrote:
> >>>>>>>>>>>>>>>>>>> Hi,
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> The tunnels are created to connect multiple OVN
> >>>>>>>>>>>>>>>>>>> controllers. If there is only one, there is no need
> >>>>>>>>>>>>>>>>>>> for the tunnels, so none will be created; this is the
> >>>>>>>>>>>>>>>>>>> correct behavior.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Does the problem still occur after configuring the
> >>>>>>>>>>>>>>>>>>> OVN controller?
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Marcin
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>>>>>>>>>> Sent: Thursday, December 29, 2016 11:44:32 AM
> >>>>>>>>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and
> >>>>>>>>>>>>>>>>>>>> mandatory
> >>>>>>>>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Hi
> >>>>>>>>>>>>>>>>>>>> The rpm packages you listed in the other mail are
> >>>>>>>>>>>>>>>>>>>> installed, but I had not run vdsm-tool ovn-config to
> >>>>>>>>>>>>>>>>>>>> create a tunnel, as the OVN controller is on the
> >>>>>>>>>>>>>>>>>>>> same host.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> [root at h2 ~]# rpm -q openvswitch-ovn-common
> >>>>>>>>>>>>>>>>>>>> openvswitch-ovn-common-2.6.90-1.el7.centos.x86_64
> >>>>>>>>>>>>>>>>>>>> [root at h2 ~]# rpm -q openvswitch-ovn-host
> >>>>>>>>>>>>>>>>>>>> openvswitch-ovn-host-2.6.90-1.el7.centos.x86_64
> >>>>>>>>>>>>>>>>>>>> [root at h2 ~]# rpm -q python-openvswitch
> >>>>>>>>>>>>>>>>>>>> python-openvswitch-2.6.90-1.el7.centos.noarch
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> After removing my manually created br-int and running
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> vdsm-tool ovn-config 127.0.0.1 172.27.1.1
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> I have the br-int, but 'ip link show' does not show
> >>>>>>>>>>>>>>>>>>>> any 'genev_sys_' link, nor does 'ovs-vsctl show' list
> >>>>>>>>>>>>>>>>>>>> any port for OVN. I assume these appear when there is
> >>>>>>>>>>>>>>>>>>>> an actual tunnel?
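[For reference, a geneve tunnel shows up in 'ovs-vsctl show' as an extra port on br-int with "type: geneve", so its absence is expected while there is only one host. Below is a small sketch of checking for it programmatically; the single-host text is modelled on the dump that follows, and the two-host sample (port name, remote_ip) is hypothetical.]

```python
import re

def has_geneve_port(ovs_vsctl_show_output):
    """True if any interface in an 'ovs-vsctl show' dump has type geneve."""
    return bool(re.search(r'type:\s*geneve', ovs_vsctl_show_output))

# Single-host dump, as in the output below: no tunnel port.
single_host = """Bridge br-int
    fail_mode: secure
    Port br-int
        Interface br-int
            type: internal
"""

# Hypothetical dump after a second chassis joins; the port name and
# remote_ip are made up for illustration.
two_hosts = single_host + """    Port "ovn-abc123-0"
        Interface "ovn-abc123-0"
            type: geneve
            options: {remote_ip="172.27.1.2"}
"""
```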
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> [root at h2 ~]# ovs-vsctl show
> >>>>>>>>>>>>>>>>>>>> ebb6aede-cbbc-4f4f-a88a-a9cd72b2bd23
> >>>>>>>>>>>>>>>>>>>>            Bridge br-int
> >>>>>>>>>>>>>>>>>>>>                fail_mode: secure
> >>>>>>>>>>>>>>>>>>>>                Port br-int
> >>>>>>>>>>>>>>>>>>>>                    Interface br-int
> >>>>>>>>>>>>>>>>>>>>                        type: internal
> >>>>>>>>>>>>>>>>>>>>            Bridge ovirtbridge
> >>>>>>>>>>>>>>>>>>>>                Port ovirtbridge
> >>>>>>>>>>>>>>>>>>>>                    Interface ovirtbridge
> >>>>>>>>>>>>>>>>>>>>                        type: internal
> >>>>>>>>>>>>>>>>>>>>            Bridge "ovsbridge0"
> >>>>>>>>>>>>>>>>>>>>                Port "ovsbridge0"
> >>>>>>>>>>>>>>>>>>>>                    Interface "ovsbridge0"
> >>>>>>>>>>>>>>>>>>>>                        type: internal
> >>>>>>>>>>>>>>>>>>>>                Port "eth0"
> >>>>>>>>>>>>>>>>>>>>                    Interface "eth0"
> >>>>>>>>>>>>>>>>>>>>            ovs_version: "2.6.90"
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> [root at h2 ~]# ip link show
> >>>>>>>>>>>>>>>>>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc
> >>>>>>>>>>>>>>>>>>>> noqueue state
> >>>>>>>>>>>>>>>>>>>> UNKNOWN
> >>>>>>>>>>>>>>>>>>>> mode
> >>>>>>>>>>>>>>>>>>>> DEFAULT qlen 1
> >>>>>>>>>>>>>>>>>>>>            link/loopback 00:00:00:00:00:00 brd
> >>>>>>>>>>>>>>>>>>>> 00:00:00:00:00:00
> >>>>>>>>>>>>>>>>>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>>>>>>>>>> qdisc
> >>>>>>>>>>>>>>>>>>>> pfifo_fast
> >>>>>>>>>>>>>>>>>>>> master ovs-system state UP mode DEFAULT qlen 1000
> >>>>>>>>>>>>>>>>>>>>            link/ether 44:8a:5b:84:7d:b3 brd
> >>>>>>>>>>>>>>>>>>>>            ff:ff:ff:ff:ff:ff
> >>>>>>>>>>>>>>>>>>>> 3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc
> >>>>>>>>>>>>>>>>>>>> noop
> >>>>>>>>>>>>>>>>>>>> state
> >>>>>>>>>>>>>>>>>>>> DOWN
> >>>>>>>>>>>>>>>>>>>> mode
> >>>>>>>>>>>>>>>>>>>> DEFAULT qlen 1000
> >>>>>>>>>>>>>>>>>>>>            link/ether 5a:14:cf:28:47:e2 brd
> >>>>>>>>>>>>>>>>>>>>            ff:ff:ff:ff:ff:ff
> >>>>>>>>>>>>>>>>>>>> 4: ovsbridge0: <BROADCAST,MULTICAST,UP,LOWER_UP>
> >>>>>>>>>>>>>>>>>>>> mtu 1500
> >>>>>>>>>>>>>>>>>>>> qdisc
> >>>>>>>>>>>>>>>>>>>> noqueue
> >>>>>>>>>>>>>>>>>>>> state UNKNOWN mode DEFAULT qlen 1000
> >>>>>>>>>>>>>>>>>>>>            link/ether 44:8a:5b:84:7d:b3 brd
> >>>>>>>>>>>>>>>>>>>>            ff:ff:ff:ff:ff:ff
> >>>>>>>>>>>>>>>>>>>> 5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
> >>>>>>>>>>>>>>>>>>>> state DOWN
> >>>>>>>>>>>>>>>>>>>> mode
> >>>>>>>>>>>>>>>>>>>> DEFAULT qlen 1000
> >>>>>>>>>>>>>>>>>>>>            link/ether 9e:b0:3a:9d:f2:4b brd
> >>>>>>>>>>>>>>>>>>>>            ff:ff:ff:ff:ff:ff
> >>>>>>>>>>>>>>>>>>>> 6: ovirtbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
> >>>>>>>>>>>>>>>>>>>> 1500 qdisc
> >>>>>>>>>>>>>>>>>>>> noqueue
> >>>>>>>>>>>>>>>>>>>> state UNKNOWN mode DEFAULT qlen 1000
> >>>>>>>>>>>>>>>>>>>>            link/ether a6:f6:e5:a4:5b:45 brd
> >>>>>>>>>>>>>>>>>>>>            ff:ff:ff:ff:ff:ff
> >>>>>>>>>>>>>>>>>>>> 7: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>>>>>>>>>> qdisc
> >>>>>>>>>>>>>>>>>>>> noqueue
> >>>>>>>>>>>>>>>>>>>> master
> >>>>>>>>>>>>>>>>>>>> ovirtmgmt state UNKNOWN mode DEFAULT qlen 1000
> >>>>>>>>>>>>>>>>>>>>            link/ether 66:e0:1c:c3:a9:d8 brd
> >>>>>>>>>>>>>>>>>>>>            ff:ff:ff:ff:ff:ff
> >>>>>>>>>>>>>>>>>>>> 8: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
> >>>>>>>>>>>>>>>>>>>> 1500
> >>>>>>>>>>>>>>>>>>>> qdisc
> >>>>>>>>>>>>>>>>>>>> noqueue
> >>>>>>>>>>>>>>>>>>>> state UP mode DEFAULT qlen 1000
> >>>>>>>>>>>>>>>>>>>>            link/ether 66:e0:1c:c3:a9:d8 brd
> >>>>>>>>>>>>>>>>>>>>            ff:ff:ff:ff:ff:ff
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> Firewall settings:
> >>>>>>>>>>>>>>>>>>>> [root at h2 ~]# firewall-cmd --list-all-zones
> >>>>>>>>>>>>>>>>>>>> work
> >>>>>>>>>>>>>>>>>>>>          target: default
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces:
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services: dhcpv6-client ssh
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: no
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> drop
> >>>>>>>>>>>>>>>>>>>>          target: DROP
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces:
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services:
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: no
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> internal
> >>>>>>>>>>>>>>>>>>>>          target: default
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces:
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services: dhcpv6-client mdns samba-client ssh
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: no
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> external
> >>>>>>>>>>>>>>>>>>>>          target: default
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces:
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services: ssh
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: yes
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> trusted
> >>>>>>>>>>>>>>>>>>>>          target: ACCEPT
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces:
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services:
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: no
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> home
> >>>>>>>>>>>>>>>>>>>>          target: default
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces:
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services: dhcpv6-client mdns samba-client ssh
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: no
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> dmz
> >>>>>>>>>>>>>>>>>>>>          target: default
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces:
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services: ssh
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: no
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> public (active)
> >>>>>>>>>>>>>>>>>>>>          target: default
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces: eth0 ovsbridge0
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services: dhcpv6-client ssh
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: no
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> block
> >>>>>>>>>>>>>>>>>>>>          target: %%REJECT%%
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces:
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services:
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: no
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> ovirt (active)
> >>>>>>>>>>>>>>>>>>>>          target: default
> >>>>>>>>>>>>>>>>>>>> icmp-block-inversion: no
> >>>>>>>>>>>>>>>>>>>>          interfaces: ovirtbridge ovirtmgmt
> >>>>>>>>>>>>>>>>>>>>          sources:
> >>>>>>>>>>>>>>>>>>>>          services: dhcp ovirt-fence-kdump-listener
> >>>>>>>>>>>>>>>>>>>>          ovirt-http
> >>>>>>>>>>>>>>>>>>>>          ovirt-https
> >>>>>>>>>>>>>>>>>>>> ovirt-imageio-proxy ovirt-postgres ovirt-provider-ovn
> >>>>>>>>>>>>>>>>>>>> ovirt-vmconsole-proxy ovirt-websocket-proxy ssh vdsm
> >>>>>>>>>>>>>>>>>>>>          ports:
> >>>>>>>>>>>>>>>>>>>>          protocols:
> >>>>>>>>>>>>>>>>>>>>          masquerade: yes
> >>>>>>>>>>>>>>>>>>>>          forward-ports:
> >>>>>>>>>>>>>>>>>>>>          sourceports:
> >>>>>>>>>>>>>>>>>>>>          icmp-blocks:
> >>>>>>>>>>>>>>>>>>>>          rich rules:
> >>>>>>>>>>>>>>>>>>>>                rule family="ipv4" port port="6641"
> >>>>>>>>>>>>>>>>>>>> protocol="tcp"
> >>>>>>>>>>>>>>>>>>>>                accept
> >>>>>>>>>>>>>>>>>>>>                rule family="ipv4" port port="6642"
> >>>>>>>>>>>>>>>>>>>> protocol="tcp"
> >>>>>>>>>>>>>>>>>>>>                accept
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> The db dump is attached
> >>>>>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>>>> On 2016-12-29 at 09:50, Marcin Mirecki wrote:
> >>>>>>>>>>>>>>>>>>>>> Hi,
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Can you please do: "sudo ovsdb-client dump"
> >>>>>>>>>>>>>>>>>>>>> on the host and send me the output?
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Have you configured the ovn controller to connect
> >>>>>>>>>>>>>>>>>>>>> to the
> >>>>>>>>>>>>>>>>>>>>> OVN north? You can do it using "vdsm-tool
> >>>>>>>>>>>>>>>>>>>>> ovn-config" or
> >>>>>>>>>>>>>>>>>>>>> using the OVN tools directly.
> >>>>>>>>>>>>>>>>>>>>> Please check out
> >>>>>>>>>>>>>>>>>>>>> https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
> >>>>>>>>>>>>>>>>>>>>> for details.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Also please note that the OVN provider is completely
> >>>>>>>>>>>>>>>>>>>>> different
> >>>>>>>>>>>>>>>>>>>>> from the neutron-openvswitch plugin. Please don't mix
> >>>>>>>>>>>>>>>>>>>>> the two.
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Marcin
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>>>>>>> From: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>>>>>>>> To: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>>>>>>>>>>>> Sent: Thursday, December 29, 2016 9:27:19 AM
> >>>>>>>>>>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and
> >>>>>>>>>>>>>>>>>>>>>> mandatory
> >>>>>>>>>>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Hi,
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> br-int is the OVN integration bridge; it should
> >>>>>>>>>>>>>>>>>>>>>> have been created when installing OVN. I assume you
> >>>>>>>>>>>>>>>>>>>>>> have the following packages installed on the host:
> >>>>>>>>>>>>>>>>>>>>>> openvswitch-ovn-common
> >>>>>>>>>>>>>>>>>>>>>> openvswitch-ovn-host
> >>>>>>>>>>>>>>>>>>>>>> python-openvswitch
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Please give me some time to look at the connectivity
> >>>>>>>>>>>>>>>>>>>>>> problem.
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Marcin
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>>>>>>>> From: "Sverker
> >>>>>>>>>>>>>>>>>>>>>>> Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>>>>>>>>>>>>> Sent: Thursday, December 29, 2016 12:47:04 AM
> >>>>>>>>>>>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and
> >>>>>>>>>>>>>>>>>>>>>>> mandatory
> >>>>>>>>>>>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> From
> >>>>>>>>>>>>>>>>>>>>>>> /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook
> >>>>>>>>>>>>>>>>>>>>>>> (installed by ovirt-provider-ovn-driver rpm):
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> BRIDGE_NAME = 'br-int'
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> On 2016-12-28 at 23:56, Sverker Abrahamsson wrote:
> >>>>>>>>>>>>>>>>>>>>>>>> Googling the message about br-int suggested
> >>>>>>>>>>>>>>>>>>>>>>>> adding that bridge to OVS:
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> ovs-vsctl add-br br-int
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Then the VM is able to boot, but it fails to get
> >>>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>> connectivity.
> >>>>>>>>>>>>>>>>>>>>>>>> Output in /var/log/messages:
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 ovs-vsctl:
> >>>>>>>>>>>>>>>>>>>>>>>> ovs|00001|vsctl|INFO|Called as
> >>>>>>>>>>>>>>>>>>>>>>>> ovs-vsctl
> >>>>>>>>>>>>>>>>>>>>>>>> --timeout=5 -- --if-exists del-port vnet0 --
> >>>>>>>>>>>>>>>>>>>>>>>> add-port
> >>>>>>>>>>>>>>>>>>>>>>>> br-int
> >>>>>>>>>>>>>>>>>>>>>>>> vnet0 --
> >>>>>>>>>>>>>>>>>>>>>>>> set Interface vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> "external-ids:attached-mac=\"00:1a:4a:16:01:51\""
> >>>>>>>>>>>>>>>>>>>>>>>> -- set Interface vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> "external-ids:iface-id=\"e8853aac-8a75-41b0-8010-e630017dcdd8\""
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>> set Interface vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> "external-ids:vm-id=\"b9440d60-ef5a-4e2b-83cf-081df7c09e6f\""
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>>>>>>>>>>> set
> >>>>>>>>>>>>>>>>>>>>>>>> Interface vnet0 external-ids:iface-status=active
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 kernel: device vnet0 entered
> >>>>>>>>>>>>>>>>>>>>>>>> promiscuous
> >>>>>>>>>>>>>>>>>>>>>>>> mode
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING:
> >>>>>>>>>>>>>>>>>>>>>>>> COMMAND_FAILED:
> >>>>>>>>>>>>>>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -D
> >>>>>>>>>>>>>>>>>>>>>>>> PREROUTING
> >>>>>>>>>>>>>>>>>>>>>>>> -i vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> -j
> >>>>>>>>>>>>>>>>>>>>>>>> libvirt-J-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING:
> >>>>>>>>>>>>>>>>>>>>>>>> COMMAND_FAILED:
> >>>>>>>>>>>>>>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -D
> >>>>>>>>>>>>>>>>>>>>>>>> POSTROUTING -o
> >>>>>>>>>>>>>>>>>>>>>>>> vnet0
> >>>>>>>>>>>>>>>>>>>>>>>> -j
> >>>>>>>>>>>>>>>>>>>>>>>> libvirt-P-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING:
> >>>>>>>>>>>>>>>>>>>>>>>> COMMAND_FAILED:
> >>>>>>>>>>>>>>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -L
> >>>>>>>>>>>>>>>>>>>>>>>> libvirt-J-vnet0'
> >>>>>>>>>>>>>>>>>>>>>>>> failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING:
> >>>>>>>>>>>>>>>>>>>>>>>> COMMAND_FAILED:
> >>>>>>>>>>>>>>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -L
> >>>>>>>>>>>>>>>>>>>>>>>> libvirt-P-vnet0'
> >>>>>>>>>>>>>>>>>>>>>>>> failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F J-vnet0-mac' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X J-vnet0-mac' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F J-vnet0-arp-mac' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X J-vnet0-arp-mac' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -E FP-vnet0 FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -E FJ-vnet0 FI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -E HJ-vnet0 HI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -E FP-vnet0 FO-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -E FJ-vnet0 FI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -E HJ-vnet0 HI-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-I-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-I-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-I-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-I-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -E libvirt-P-vnet0 libvirt-O-vnet0' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F I-vnet0-mac' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X I-vnet0-mac' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F I-vnet0-arp-mac' failed:
> >>>>>>>>>>>>>>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X I-vnet0-arp-mac' failed:
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> [root at h2 etc]# ovs-vsctl show
> >>>>>>>>>>>>>>>>>>>>>>>> ebb6aede-cbbc-4f4f-a88a-a9cd72b2bd23
> >>>>>>>>>>>>>>>>>>>>>>>>     Bridge ovirtbridge
> >>>>>>>>>>>>>>>>>>>>>>>>         Port "ovirtport0"
> >>>>>>>>>>>>>>>>>>>>>>>>             Interface "ovirtport0"
> >>>>>>>>>>>>>>>>>>>>>>>>                 type: internal
> >>>>>>>>>>>>>>>>>>>>>>>>         Port ovirtbridge
> >>>>>>>>>>>>>>>>>>>>>>>>             Interface ovirtbridge
> >>>>>>>>>>>>>>>>>>>>>>>>                 type: internal
> >>>>>>>>>>>>>>>>>>>>>>>>     Bridge "ovsbridge0"
> >>>>>>>>>>>>>>>>>>>>>>>>         Port "ovsbridge0"
> >>>>>>>>>>>>>>>>>>>>>>>>             Interface "ovsbridge0"
> >>>>>>>>>>>>>>>>>>>>>>>>                 type: internal
> >>>>>>>>>>>>>>>>>>>>>>>>         Port "eth0"
> >>>>>>>>>>>>>>>>>>>>>>>>             Interface "eth0"
> >>>>>>>>>>>>>>>>>>>>>>>>     Bridge br-int
> >>>>>>>>>>>>>>>>>>>>>>>>         Port br-int
> >>>>>>>>>>>>>>>>>>>>>>>>             Interface br-int
> >>>>>>>>>>>>>>>>>>>>>>>>                 type: internal
> >>>>>>>>>>>>>>>>>>>>>>>>         Port "vnet0"
> >>>>>>>>>>>>>>>>>>>>>>>>             Interface "vnet0"
> >>>>>>>>>>>>>>>>>>>>>>>>     ovs_version: "2.6.90"
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Searching through the code, it appears that br-int comes from the
> >>>>>>>>>>>>>>>>>>>>>>>> neutron-openvswitch plugin?
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> [root at h2 share]# rpm -qf
> >>>>>>>>>>>>>>>>>>>>>>>> /usr/share/otopi/plugins/ovirt-host-deploy/openstack/neutron_openvswitch.py
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> ovirt-host-deploy-1.6.0-0.0.master.20161215101008.gitb76ad50.el7.centos.noarch
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> On 2016-12-28 at 23:24, Sverker Abrahamsson wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>> In addition I had to add an alias to modprobe:
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>> [root at h2 modprobe.d]# cat dummy.conf
> >>>>>>>>>>>>>>>>>>>>>>>>> alias dummy0 dummy
> >>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>
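The modprobe alias above only covers module loading; for the interface to come up on every boot it is typically paired with an ifcfg file. A sketch assuming a CentOS network-scripts layout (the ifcfg contents are illustrative, not taken from the original mail):

```ini
# /etc/modprobe.d/dummy.conf -- load the dummy module when dummy0 is requested
alias dummy0 dummy

# /etc/sysconfig/network-scripts/ifcfg-dummy0 (hypothetical file)
DEVICE=dummy0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
```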
> >>>>>>>>>>>>>>>>>>>>>>>>> On 2016-12-28 at 23:03, Sverker Abrahamsson wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>> Hi
> >>>>>>>>>>>>>>>>>>>>>>>>>> I first tried to set the device name to dummy_0, but then ifup did not
> >>>>>>>>>>>>>>>>>>>>>>>>>> succeed in creating the device unless I first ran 'ip link add dummy_0
> >>>>>>>>>>>>>>>>>>>>>>>>>> type dummy', and even then the interface would not come up again on
> >>>>>>>>>>>>>>>>>>>>>>>>>> reboot.
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> Setting fake_nics = dummy0 did not work either, but this does:
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> fake_nics = dummy*
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> The engine is now able to find the interface and assign the ovirtmgmt
> >>>>>>>>>>>>>>>>>>>>>>>>>> bridge to it.
> >>>>>>>>>>>>>>>>>>>>>>>>>>
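A plausible explanation for why the glob works (an assumption from the observed behavior, not verified against the vdsm source) is that fake_nics entries are matched as shell-style patterns against device names, so dummy* covers any device whose name starts with dummy, including dummy_0:

```python
from fnmatch import fnmatch

# Hypothetical device names as vdsm might report them
devices = ["dummy0", "dummy_0", "eth0"]

# A literal entry matches only the exact name
print([d for d in devices if fnmatch(d, "dummy0")])   # ['dummy0']

# A glob entry matches every dummy device regardless of suffix
print([d for d in devices if fnmatch(d, "dummy*")])   # ['dummy0', 'dummy_0']
```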
> >>>>>>>>>>>>>>>>>>>>>>>>>> However, I then ran into the next issue when starting a VM:
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> 2016-12-28 22:28:23,897 ERROR
> >>>>>>>>>>>>>>>>>>>>>>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>>>>>>>>>>>>>>>>>>>>>>>> (ForkJoinPool-1-worker-2) [] Correlation ID: null, Call Stack: null,
> >>>>>>>>>>>>>>>>>>>>>>>>>> Custom Event ID: -1, Message: VM CentOS7 is down with error.
> >>>>>>>>>>>>>>>>>>>>>>>>>> Exit message: Cannot get interface MTU on 'br-int': No such device.
> >>>>>>>>>>>>>>>>>>>>>>>>>>
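The "Cannot get interface MTU" error is libvirt asking the kernel for a net device named br-int before any such device exists. A rough illustration of that kind of lookup in Python (the SIOCGIFMTU constant and struct layout are Linux-specific assumptions, and this is not libvirt's actual code):

```python
import fcntl
import socket
import struct

SIOCGIFMTU = 0x8921  # Linux ioctl: read an interface's MTU

def get_mtu(ifname: str) -> int:
    """Query the kernel for a device's MTU, roughly as libvirt does.

    Raises OSError (ENODEV, "No such device") when no kernel net device
    of that name exists -- the same failure reported here for br-int.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        # struct ifreq: 16-byte name, then an int (the MTU), padded out
        ifr = struct.pack("16si20x", ifname.encode(), 0)
        res = fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifr)
        return struct.unpack("16si20x", res)[1]

print(get_mtu("lo"))  # the loopback MTU, e.g. 65536 on Linux
```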
> >>>>>>>>>>>>>>>>>>>>>>>>>> This VM has a nic on ovirtbridge, which comes
> >>>>>>>>>>>>>>>>>>>>>>>>>> from
> >>>>>>>>>>>>>>>>>>>>>>>>>> the OVN
> >>>>>>>>>>>>>>>>>>>>>>>>>> provider.
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>> On 2016-12-28 at 14:38, Marcin Mirecki wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>> Sverker,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> Can you try adding a vnic named veth_* or dummy_* (or alternatively
> >>>>>>>>>>>>>>>>>>>>>>>>>>> add the name of the vnic to the vdsm.conf fake_nics option), and set
> >>>>>>>>>>>>>>>>>>>>>>>>>>> up the management network using this vnic?
> >>>>>>>>>>>>>>>>>>>>>>>>>>> I suppose adding the vnic you use for connecting to the engine to
> >>>>>>>>>>>>>>>>>>>>>>>>>>> fake_nics should make it visible to the engine, and you should be
> >>>>>>>>>>>>>>>>>>>>>>>>>>> able to use it for the setup.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> Marcin
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> From: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> To: "Sverker
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Sent: Wednesday, December 28, 2016 12:06:26 PM
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> OVN/OVS and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> mandatory
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ovirtmgmt network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have an internal OVS bridge called ovirtbridge which has a port
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> with an IP address, but in the host network settings that port is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> not visible.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> I just verified, and unfortunately the virtual ports are not visible
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> in the engine to assign a network to :(
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm afraid that the engine is not ready for such a scenario (even if
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> it works). Please give me some time to look for a solution.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: "Sverker
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Sent: Wednesday, December 28, 2016
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 11:48:24 AM
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> OVN/OVS and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> mandatory
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Marcin
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, that is my issue. I don't want to let ovirt/vdsm see eth0 or
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ovsbridge0, since as soon as it sees them it messes up the network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> config so that the host becomes unreachable.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have an internal OVS bridge called ovirtbridge which has a port
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> with an IP address, but in the host network settings that port is
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> not visible. It doesn't help to name it ovirtmgmt.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> The engine is able to communicate with the host on the IP it has
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> been given; it's just that it believes that it HAS to have an
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> ovirtmgmt network, which can't be on OVN.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> On 2016-12-28 at 10:45, Marcin Mirecki wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Sverker,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The management network is mandatory on each host. It's used by the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> engine to communicate with the host. Looking at your description
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and the exception, it looks like it is missing.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The error is caused by not having any network for the host (the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> network list is retrieved in InterfaceDaoImpl.getHostNetworksByCluster,
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which gets all the networks on nics for a host from the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> vds_interface table in the DB).
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Could you maybe create a virtual nic connected to ovsbridge0 (as I
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> understand you have no physical nic available) and use this for the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> management network?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I then create a bridge for use with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ovirt, with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> address.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm not quite sure I understand. Is this yet another bridge
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> connected to ovsbridge0?
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You could also attach the vnic for the management network here if
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> need be.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Please keep in mind that OVN is of no use in setting up the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> management network. The OVN provider can only handle external
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> networks, which cannot be used for a management network.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Marcin
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: "Sverker
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: users at ovirt.org
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Sent: Wednesday, December 28, 2016
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 12:39:59 AM
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: [ovirt-users] Issue with
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OVN/OVS and
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mandatory
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> For a long time I've been looking for proper Open vSwitch support
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in ovirt, so I'm happy that it is moving in the right direction.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> However, there still seems to be a dependency on an ovirtmgmt
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bridge, and I'm unable to move that to the OVN provider.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The hosting center where I rent hw instances has a bit special
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> network setup, so I have one physical network port with a /32
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> netmask and a point-to-point config to the router. The physical
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> port I connect to an ovs bridge which has the public IP. Since
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ovirt always messes up the network config when I've tried to let
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it have access to the network config for the physical port, I've
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> set eth0 and ovsbridge0 as hidden in vdsm.conf.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I then create a bridge for use with ovirt, with a private address.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> With the OVN provider I am now able to import these into the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> engine, and it looks good. When creating a VM I can select that it
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> will have a vNic on my OVS bridge.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> However, I can't start the VM, as an exception is thrown in the
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> log:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2016-12-28 00:13:33,350 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (default task-5) [3c882d53] Error during ValidateFailure.: java.lang.NullPointerException
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit.validateRequiredNetworksAvailable(NetworkPolicyUnit.java:140) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit.filter(NetworkPolicyUnit.java:69) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.scheduling.SchedulingManager.runInternalFilters(SchedulingManager.java:597) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.scheduling.SchedulingManager.runFilters(SchedulingManager.java:564) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.scheduling.SchedulingManager.canSchedule(SchedulingManager.java:494) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.validator.RunVmValidator.canRunVm(RunVmValidator.java:133) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.RunVmCommand.validate(RunVmCommand.java:940) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:886) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:366) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.canRunActions(PrevalidatingMultipleActionsRunner.java:113) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.invokeCommands(PrevalidatingMultipleActionsRunner.java:99) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.execute(PrevalidatingMultipleActionsRunner.java:76) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:613) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:583) [bll.jar:]
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Looking at the section of code where the exception is thrown, I see that it iterates over host networks to find required networks, which I assume means ovirtmgmt. In the host network setup dialog I don't see any networks at all, but it lists ovirtmgmt as required. It also lists the OVN networks, but these can't be statically assigned as they are added dynamically when needed, which is fine.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I believe I either need to remove the ovirtmgmt network or configure it to be provided by the OVN provider, but neither is possible. Preferably, which network is the management network (and mandatory) shouldn't be hardcoded but should be configurable.
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On 2016-12-27 at 17:10, Marcin Mirecki wrote:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
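[Editor's note: the validation loop described above can be sketched roughly as follows. This is a hypothetical illustration of the required-network check, with invented class and method names; it is not the actual oVirt engine code in CommandBase.internalValidate.]

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the validation step discussed above: the engine
// iterates over the cluster's required networks and fails validation if
// the host does not provide one of them (e.g. ovirtmgmt). All names here
// are illustrative, not the real oVirt engine API.
public class RequiredNetworkCheck {

    // Returns true only if every required network is attached to the host.
    static boolean hostSatisfiesRequiredNetworks(Set<String> hostNetworks,
                                                 List<String> requiredNetworks) {
        for (String required : requiredNetworks) {
            if (!hostNetworks.contains(required)) {
                return false; // missing required network -> host ineligible
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A host with no networks attached fails, because ovirtmgmt is required.
        System.out.println(hostSatisfiesRequiredNetworks(
                Set.of(), List.of("ovirtmgmt")));
        // OVN networks are attached dynamically, so only ovirtmgmt is checked.
        System.out.println(hostSatisfiesRequiredNetworks(
                Set.of("ovirtmgmt", "br-int"), List.of("ovirtmgmt")));
    }
}
```

This also shows why an empty host-network list in the setup dialog makes the host fail validation whenever any network is flagged as required.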
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> _______________________________________________
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Users mailing list
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Users at ovirt.org
> >>>>>>>>>>>>>>>>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>
> 
> 


More information about the Users mailing list