[Users] Setting up logical storage networks

Trey Dockendorf treydock at gmail.com
Fri Jun 22 22:22:15 UTC 2012


On Thu, Jun 21, 2012 at 7:17 AM, Dan Kenigsberg <danken at redhat.com> wrote:
> On Wed, Jun 20, 2012 at 02:52:19PM -0400, Mike Kolesnik wrote:
>> > Thanks for the response, see responses inline.
>>
>> You're welcome, responses inline.
>>
>> >
>> > On Wed, Jun 20, 2012 at 2:54 AM, Mike Kolesnik <mkolesni at redhat.com>
>> > wrote:
>> > > Hi,
>> > >
>> > > Please see reply in-line.
>> > >
>> > >> In ovirt-engine-3.1 I'm attempting to setup the base logical
>> > >> networks
>> > >> and have run into 2 major issues.
>> > >
>> > > Are you using cluster version 3.0 or 3.1?
>> > >
>> >
>> > I have been using 3.1 as it's the default.  Is the difference just the
>> > API updates?  All I could really find related to 3.0 vs 3.1
>> > pertaining
>> > to networking was this document
>> > http://www.ovirt.org/wiki/Features/Design/Network/SetupNetworks
>> >
>>
>> As Itamar replied, there are a few more network features in 3.1 other than
>> this one.
>>
>> For a Host which is in a 3.1 cluster there should be a "Setup Networks"
>> button which indeed enables the functionality described in that wiki.
>> This is a new feature for 3.1 & up which allows you to make several network
>> changes in an atomic manner, with an improved UI experience.
>>
>> However, from the logs it looks like you're using the old commands to edit
>> the networks on the Host, so if you have this button (you should) then you
>> can try using it.
>>
>> <SNIP>
>>
>> > >
>> > > Unfortunately, oVirt supports setting only the default gateway of
>> > > the Host
>> > > (This is the field you saw in the management network).
>> > >
>> > > We could theoretically use initscripts' static routing files, but
>> > > that is left for
>> > > future development.
>> > >
>> >
>> > So for now, is it then easier to just run all public interfaces
>> > through the same subnet/gateway?  The main reason to run management
>> > via 100Mbps and everything else via 1Gbps was that our campus is out
>> > of IPs, so we're trying to conserve gigabit IPs.
>>
>> Yes, currently the only gateway you can specify is the default one which
>> is set on the management network.
>
> However it is worth mentioning that VM networks should generally not
> have an IP address (or gateway) of their own; they serve best as
> layer-2-only entities. Putting the management network in one subnet and
> VMs on a different one makes a lot of sense.
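
If I follow, that means the bridge a pure VM network creates on the node
carries no IP configuration at all, i.e. something roughly like the
following, where "vmdata" is just a made-up network name and the exact
keys are my guess at what vdsm would write:

DEVICE=vmdata
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
DELAY=0
NM_CONTROLLED=no
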
>
>>
>> <SNIP>
>>
> <snap>
>> >
>> >
>> > So in the host interface eth5 I set the following via web interface
>> >
>> > Network: private1
>> > Boot Protocol: Static
>> > IP: 10.20.1.241
>> > Subnet Mask: 255.0.0.0
>> > Check: Save network configuration
>> >
>> > After the save the node's ifcfg-eth5 is touched (based on modified
>> > date in ls -la) but this is all it contains
>> > DEVICE=eth5
>> > ONBOOT=yes
>> > BOOTPROTO=none
>> > HWADDR=00:1b:21:1d:33:f1
>> > NM_CONTROLLED=no
>> > MTU=9000
>> >
>> >
>> > As far as I can tell the only setting from ovirt-engine that made it
>> > to that file was the MTU setting defined when creating the logical
>> > network for the cluster.
>> >
>> > Is my process somehow wrong or am I missing a step?  I've done this
>> > with the node being in both "Up" status and "Maintenance", same
>> > results.
>>
>> No, it looks like a bug that should be taken care of.
>
> And a serious one that hinders the usability of non-VM networks, and
> which I consider an oVirt-3.1 release blocker:
>
>  Bug 834281 - [vdsm][bridgeless] BOOTPROTO/IPADDR/NETMASK options are
>  not set on interface
>
> Thanks for reporting it.
>
> Dan.
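
For reference, given what I entered in the web interface, I would have
expected ifcfg-eth5 to come out looking roughly like this (just my guess
at the exact keys vdsm writes for a static, bridgeless network):

DEVICE=eth5
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.20.1.241
NETMASK=255.0.0.0
HWADDR=00:1b:21:1d:33:f1
NM_CONTROLLED=no
MTU=9000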

If I keep my ovirtmgmt interface on a 100Mbps subnet, and my VM
networks on a 1Gbps network, is there anything special I have to do in
routing to keep the VMs' traffic from following the default route
defined on ovirtmgmt?
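
If it comes to it, I assume I could drop a static route file on each node
by hand (the initscripts route files mentioned above), something along the
lines of /etc/sysconfig/network-scripts/route-eth5 below, where the
addresses are made up:

# hypothetical static routes for the 1Gbps interface, not managed by oVirt
172.16.0.0/16 via 10.20.1.1 dev eth5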

I'm also experiencing an issue with bonds that may be related.  I
create the bond and set it to Mode 5, yet the generated ifcfg-bond0
seems to reflect Mode 4 (802.3ad):

DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS='mode=802.3ad miimon=150'
NM_CONTROLLED=no
MTU=9000
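
If I'm reading the bonding docs right, mode 5 is balance-tlb, so I would
have expected something more like the following (not sure whether vdsm
writes the numeric or the named form):

BONDING_OPTS='mode=5 miimon=150'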


Here's what looks relevant in vdsm.log:


Thread-55232::DEBUG::2012-06-22
16:56:56,242::BindingXMLRPC::872::vds::(wrapper) client
[128.194.76.185]::call setupNetworks with ({'stor0': {'bonding':
'bond0', 'bridged': 'false', 'mtu': '9000'}}, {'bond0': {'nics':
['eth3', 'eth2'], 'BONDING_OPTS': 'mode=5'}}, {'connectivityCheck':
'true', 'connectivityTimeout': '60000'}) {} flowID [39d484a3]
Thread-55233::DEBUG::2012-06-22
16:56:56,242::BindingXMLRPC::872::vds::(wrapper) client
[128.194.76.185]::call ping with () {} flowID [39d484a3]
Thread-55233::DEBUG::2012-06-22
16:56:56,244::BindingXMLRPC::879::vds::(wrapper) return ping with
{'status': {'message': 'Done', 'code': 0}}
MainProcess|Thread-55232::DEBUG::2012-06-22
16:56:56,270::configNetwork::1061::setupNetworks::(setupNetworks)
Setting up network according to configuration: networks:{'stor0':
{'bonding': 'bond0', 'bridged': 'false', 'mtu': '9000'}},
bondings:{'bond0': {'nics': ['eth3', 'eth2'], 'BONDING_OPTS':
'mode=5'}}, options:{'connectivityCheck': 'true',
'connectivityTimeout': '60000'}
MainProcess|Thread-55232::DEBUG::2012-06-22
16:56:56,270::configNetwork::1065::root::(setupNetworks) Validating
configuration
Thread-55234::DEBUG::2012-06-22
16:56:56,294::BindingXMLRPC::872::vds::(wrapper) client
[128.194.76.185]::call ping with () {} flowID [39d484a3]
Thread-55234::DEBUG::2012-06-22
16:56:56,295::BindingXMLRPC::879::vds::(wrapper) return ping with
{'status': {'message': 'Done', 'code': 0}}
MainProcess|Thread-55232::DEBUG::2012-06-22
16:56:56,297::configNetwork::1070::setupNetworks::(setupNetworks)
Applying...
MainProcess|Thread-55232::DEBUG::2012-06-22
16:56:56,297::configNetwork::1099::setupNetworks::(setupNetworks)
Adding network 'stor0'
MainProcess|Thread-55232::DEBUG::2012-06-22
16:56:56,322::configNetwork::582::root::(addNetwork) validating
bridge...
MainProcess|Thread-55232::INFO::2012-06-22
16:56:56,323::configNetwork::591::root::(addNetwork) Adding network
stor0 with vlan=None, bonding=bond0, nics=['eth3', 'eth2'],
bondingOptions=None, mtu=9000, bridged=False, options={}


Looking at the code, I think I see where things are going wrong.  It
looks like the bonding options are passed in as
network['bonding']['BONDING_OPTS'] while the code is looking for
network['bonding']['options'], which would explain the
bondingOptions=None in the addNetwork line above.
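
In other words, something like this toy illustration (paraphrasing what
the log suggests, not the actual vdsm code):

# The engine side sends the bond options under the 'BONDING_OPTS' key ...
bondings = {'bond0': {'nics': ['eth3', 'eth2'], 'BONDING_OPTS': 'mode=5'}}

def add_network(bond_attrs):
    # ... while the setup path reads 'options', so the value is lost.
    options = bond_attrs.get('options')
    print("bondingOptions=%s" % options)

add_network(bondings['bond0'])  # prints: bondingOptions=None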

- Trey


