Hi All,
I am also facing the same issue (unable to change ovirtmgmt to the OVS type).
In the git repository I saw some commits which added support for switching
the network type on a host, but there is a red cross across those changes.
Could you let me know whether this issue will be solved in the 4.1 release?
I tried modifying the ifcfg-ovirtmgmt and ifcfg-eth0 files to support OVS
and then rebooted the machine.
When the machine booted up, I found that the configuration made in the
ifcfg files had been overwritten by vdsm.
Is there any workaround?
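For context, ifcfg files converted to OVS typically look something like the
sketch below (the OVS_* keys come from the openvswitch initscripts
integration; the address is a placeholder, not my real config):

```shell
# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt (sketch)
DEVICE=ovirtmgmt
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch)
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=ovirtmgmt
ONBOOT=yes
```

As noted above, vdsm regenerates these files on boot, so changes made this
way do not persist.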
Thanks,
~Rohit
On Thu, Dec 29, 2016 at 5:17 AM, Sverker Abrahamsson
<sverker(a)abrahamsson.com> wrote:
From
/usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook
(installed by ovirt-provider-ovn-driver rpm):
BRIDGE_NAME = 'br-int'
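So the hook hardcodes an integration bridge named br-int. If it is missing
on the host it can be created manually (a sketch; requires openvswitch
running and root, --may-exist just makes the call idempotent):

```shell
# Create the integration bridge the OVN hook expects, if not present yet
ovs-vsctl --may-exist add-br br-int
```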
On 2016-12-28 at 23:56, Sverker Abrahamsson wrote:
> Googling on the message about br-int suggested adding that bridge to ovs:
>
> ovs-vsctl add-br br-int
>
> Then the VM is able to boot, but it fails to get network connectivity.
> Output in /var/log/messages:
>
> Dec 28 23:31:35 h2 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl
> --timeout=5 -- --if-exists del-port vnet0 -- add-port br-int vnet0 -- set
> Interface vnet0 "external-ids:attached-mac=\"00:1a:4a:16:01:51\"" -- set
> Interface vnet0 "external-ids:iface-id=\"e8853aac-8a75-41b0-8010-e630017dcdd8\"" -- set
> Interface vnet0 "external-ids:vm-id=\"b9440d60-ef5a-4e2b-83cf-081df7c09e6f\"" -- set
> Interface vnet0 external-ids:iface-status=active
> Dec 28 23:31:35 h2 kernel: device vnet0 entered promiscuous mode
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> libvirt-J-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> libvirt-P-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F J-vnet0-mac' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X J-vnet0-mac' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F J-vnet0-arp-mac' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X J-vnet0-arp-mac' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet0 -g FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g
> FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g
> FI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0
> -g HI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -X FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -X FI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -X HI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -E FP-vnet0 FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -E FJ-vnet0 FI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -E HJ-vnet0 HI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet0 -g FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0
> -g FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g
> FI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in
> vnet0 -g HI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -F FI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -X HI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -E FP-vnet0 FO-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -E FJ-vnet0 FI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -E HJ-vnet0 HI-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> libvirt-I-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> libvirt-O-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-I-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-I-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-I-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -E libvirt-P-vnet0 libvirt-O-vnet0'
> failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F I-vnet0-mac' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X I-vnet0-mac' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F I-vnet0-arp-mac' failed:
> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X I-vnet0-arp-mac' failed:
>
>
> [root@h2 etc]# ovs-vsctl show
> ebb6aede-cbbc-4f4f-a88a-a9cd72b2bd23
>     Bridge ovirtbridge
>         Port "ovirtport0"
>             Interface "ovirtport0"
>                 type: internal
>         Port ovirtbridge
>             Interface ovirtbridge
>                 type: internal
>     Bridge "ovsbridge0"
>         Port "ovsbridge0"
>             Interface "ovsbridge0"
>                 type: internal
>         Port "eth0"
>             Interface "eth0"
>     Bridge br-int
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "vnet0"
>             Interface "vnet0"
>     ovs_version: "2.6.90"
>
> Searching through the code, it appears that br-int comes from the
> neutron-openvswitch plugin?
>
> [root@h2 share]# rpm -qf /usr/share/otopi/plugins/ovirt-host-deploy/openstack/neutron_openvswitch.py
> ovirt-host-deploy-1.6.0-0.0.master.20161215101008.gitb76ad50.el7.centos.noarch
>
>
> /Sverker
>
> On 2016-12-28 at 23:24, Sverker Abrahamsson wrote:
>
>> In addition I had to add an alias to modprobe:
>>
>> [root@h2 modprobe.d]# cat dummy.conf
>> alias dummy0 dummy
>>
>>
>> On 2016-12-28 at 23:03, Sverker Abrahamsson wrote:
>>
>>> Hi
>>> I first tried to set the device name to dummy_0, but then ifup did not
>>> succeed in creating the device unless I first ran 'ip link add dummy_0
>>> type dummy', and even then it would not succeed in bringing the
>>> interface up on reboot.
>>>
>>> Setting fake_nics = dummy0 would not work either, but this works:
>>>
>>> fake_nics = dummy*
>>>
>>> The engine is now able to find the interface and assign the ovirtmgmt
>>> bridge to it.
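For anyone hitting the same thing: the fake_nics setting above goes in
/etc/vdsm/vdsm.conf. A sketch of the relevant fragment (the [vars] section
name is how vdsm groups these options; vdsmd needs a restart afterwards):

```ini
; /etc/vdsm/vdsm.conf -- sketch; restart vdsmd after editing
[vars]
fake_nics = dummy*
```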
>>>
>>> However, I then run into the next issue when starting a VM:
>>>
>>> 2016-12-28 22:28:23,897 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (ForkJoinPool-1-worker-2) [] Correlation ID: null, Call Stack: null,
>>> Custom Event ID: -1, Message: VM CentOS7 is down with error. Exit
>>> message: Cannot get interface MTU on 'br-int': No such device.
>>>
>>> This VM has a nic on ovirtbridge, which comes from the OVN provider.
>>>
>>> /Sverker
>>>
>>> On 2016-12-28 at 14:38, Marcin Mirecki wrote:
>>>
>>>> Sverker,
>>>>
>>>> Can you try adding a vnic named veth_* or dummy_*
>>>> (or alternatively add the name of the vnic to the
>>>> vdsm.conf fake_nics option), and set up the management
>>>> network using this vnic?
>>>> I suppose adding the vnic you use for connecting
>>>> to the engine to fake_nics should make it visible
>>>> to the engine, and you should be able to use it for
>>>> the setup.
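A sketch of what that suggestion might look like on the host (the name
dummy0 is an assumption, the commands need root, and vdsmd must be
restarted to pick up the config change):

```shell
# Load the dummy module and create a nic that vdsm can treat as real
modprobe dummy
ip link add dummy0 type dummy
ip link set dummy0 up

# Then whitelist it in /etc/vdsm/vdsm.conf:
#   [vars]
#   fake_nics = dummy*
systemctl restart vdsmd
```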
>>>>
>>>> Marcin
>>>>
>>>>
>>>>
>>>> ----- Original Message -----
>>>>
>>>>> From: "Marcin Mirecki" <mmirecki(a)redhat.com>
>>>>> To: "Sverker Abrahamsson" <sverker(a)abrahamsson.com>
>>>>> Cc: "Ovirt Users" <users(a)ovirt.org>
>>>>> Sent: Wednesday, December 28, 2016 12:06:26 PM
>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt
>>>>> network
>>>>>
>>>>>> I have an internal OVS bridge called ovirtbridge which has a port
>>>>>> with an IP address, but in the host network settings that port is not
>>>>>> visible.
>>>>>>
>>>>> I just verified, and unfortunately the virtual ports are not visible
>>>>> in the engine to assign a network to :(
>>>>> I'm afraid that the engine is not ready for such a scenario (even if
>>>>> it works).
>>>>> Please give me some time to look for a solution.
>>>>>
>>>>> ----- Original Message -----
>>>>>
>>>>>> From: "Sverker Abrahamsson" <sverker(a)abrahamsson.com>
>>>>>> To: "Marcin Mirecki" <mmirecki(a)redhat.com>
>>>>>> Cc: "Ovirt Users" <users(a)ovirt.org>
>>>>>> Sent: Wednesday, December 28, 2016 11:48:24 AM
>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt
>>>>>> network
>>>>>>
>>>>>> Hi Marcin
>>>>>> Yes, that is my issue. I don't want to let ovirt/vdsm see eth0 or
>>>>>> ovsbridge0, since as soon as it sees them it messes up the network
>>>>>> config so that the host becomes unreachable.
>>>>>>
>>>>>> I have an internal OVS bridge called ovirtbridge which has a port
>>>>>> with an IP address, but in the host network settings that port is
>>>>>> not visible. It doesn't help to name it ovirtmgmt.
>>>>>>
>>>>>> The engine is able to communicate with the host on the IP it has
>>>>>> been given; it's just that it believes that it HAS to have an
>>>>>> ovirtmgmt network, which can't be on OVN.
>>>>>>
>>>>>> /Sverker
>>>>>>
>>>>>>
>>>>>> On 2016-12-28 at 10:45, Marcin Mirecki wrote:
>>>>>>
>>>>>>> Hi Sverker,
>>>>>>>
>>>>>>> The management network is mandatory on each host. It's used by the
>>>>>>> engine to communicate with the host.
>>>>>>> Looking at your description and the exception, it looks like it is
>>>>>>> missing.
>>>>>>> The error is caused by the host not having any network (the network
>>>>>>> list is retrieved in InterfaceDaoImpl.getHostNetworksByCluster,
>>>>>>> which gets all the networks on nics for a host from the
>>>>>>> vds_interface table in the DB).
>>>>>>>
>>>>>>> Could you maybe create a virtual nic connected to ovsbridge0 (as I
>>>>>>> understand you have no physical nic available) and use this for the
>>>>>>> management network?
>>>>>>>
>>>>>>>> I then create a bridge for use with ovirt, with a private address.
>>>>>>>>
>>>>>>> I'm not quite sure I understand. Is this yet another bridge
>>>>>>> connected to ovsbridge0?
>>>>>>> You could also attach the vnic for the management network here
>>>>>>> if need be.
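One way to create such a vnic could be an OVS internal port on the existing
bridge (a sketch only; the port name mgmt0 and the address are made up, and
the commands need root on the host):

```shell
# Add an internal port on ovsbridge0 to act as the management nic
ovs-vsctl --may-exist add-port ovsbridge0 mgmt0 -- set interface mgmt0 type=internal
ip addr add 192.0.2.20/24 dev mgmt0
ip link set mgmt0 up
```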
>>>>>>>
>>>>>>> Please keep in mind that OVN is of no use in setting up the
>>>>>>> management network. The OVN provider can only handle external
>>>>>>> networks, which cannot be used for a management network.
>>>>>>>
>>>>>>> Marcin
>>>>>>>
>>>>>>>
>>>>>>> ----- Original Message -----
>>>>>>>
>>>>>>>> From: "Sverker Abrahamsson" <sverker(a)abrahamsson.com>
>>>>>>>> To: users(a)ovirt.org
>>>>>>>> Sent: Wednesday, December 28, 2016 12:39:59 AM
>>>>>>>> Subject: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt
>>>>>>>> network
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Hi
>>>>>>>> For a long time I've been looking for proper support in ovirt for
>>>>>>>> Open vSwitch, so I'm happy that it is moving in the right
>>>>>>>> direction. However, there still seems to be a dependency on an
>>>>>>>> ovirtmgmt bridge, and I'm unable to move that to the OVN provider.
>>>>>>>>
>>>>>>>> The hosting center where I rent hw instances has a somewhat
>>>>>>>> special network setup: I have one physical network port with a /32
>>>>>>>> netmask and a point-to-point config to the router. The physical
>>>>>>>> port is connected to an ovs bridge which has the public IP. Since
>>>>>>>> ovirt always messes up the network config when I've tried to let
>>>>>>>> it have access to the config for the physical port, I've set eth0
>>>>>>>> and ovsbridge0 as hidden in vdsm.conf.
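The hiding referred to here can be done with the hidden_nics option. A
sketch of what such a vdsm.conf entry might look like (the option takes
comma-separated glob patterns, as far as I understand it):

```ini
; /etc/vdsm/vdsm.conf -- sketch
[vars]
hidden_nics = eth0,ovsbridge0
```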
>>>>>>>>
>>>>>>>>
>>>>>>>> I then create a bridge for use with ovirt, with a private address.
>>>>>>>> With the OVN provider I am now able to import these into the
>>>>>>>> engine and it looks good. When creating a VM I can select that it
>>>>>>>> will have a vNic on my OVS bridge.
>>>>>>>>
>>>>>>>> However, I can't start the VM as an exception is thrown in the log:
>>>>>>>>
>>>>>>>> 2016-12-28 00:13:33,350 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
>>>>>>>> (default task-5) [3c882d53] Error during ValidateFailure.:
>>>>>>>> java.lang.NullPointerException
>>>>>>>>     at org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit.validateRequiredNetworksAvailable(NetworkPolicyUnit.java:140) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit.filter(NetworkPolicyUnit.java:69) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.scheduling.SchedulingManager.runInternalFilters(SchedulingManager.java:597) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.scheduling.SchedulingManager.runFilters(SchedulingManager.java:564) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.scheduling.SchedulingManager.canSchedule(SchedulingManager.java:494) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.validator.RunVmValidator.canRunVm(RunVmValidator.java:133) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.RunVmCommand.validate(RunVmCommand.java:940) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:886) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:366) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.canRunActions(PrevalidatingMultipleActionsRunner.java:113) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.invokeCommands(PrevalidatingMultipleActionsRunner.java:99) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.execute(PrevalidatingMultipleActionsRunner.java:76) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:613) [bll.jar:]
>>>>>>>>     at org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:583) [bll.jar:]
>>>>>>>>
>>>>>>>>
>>>>>>>> Looking at the section of code where the exception is thrown, I
>>>>>>>> see that it iterates over host networks to find required networks,
>>>>>>>> which I assume means ovirtmgmt. In the host network setup dialog I
>>>>>>>> don't see any networks at all, but it lists ovirtmgmt as required.
>>>>>>>> It also lists the OVN networks, but these can't be statically
>>>>>>>> assigned as they are added dynamically when needed, which is fine.
>>>>>>>>
>>>>>>>> I believe that I either need to remove the ovirtmgmt network or
>>>>>>>> configure it to be provided by the OVN provider, but neither is
>>>>>>>> possible. Preferably, which network is management and mandatory
>>>>>>>> shouldn't be hardcoded but should be configurable.
>>>>>>>>
>>>>>>>> /Sverker
>>>>>>>> On 2016-12-27 at 17:10, Marcin Mirecki wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users