On 12/19/2015 05:53 AM, Gianluca Cecchi wrote:
On Sat, Dec 19, 2015 at 1:08 AM, John Florian <jflorian@doubledog.org> wrote:
I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs
(VIDs 101-104) for storage networks, 1 VLAN (VID 100) for
ovirtmgmt and 1 more (VID 1) for everything else. Because I know
of no way to manipulate the network configuration from the
management GUI once the HE is running and with only a single Host,
I made the OS configuration as close as possible to what I'd want
when done. This looks like:
Why do you think this pre-work is necessary?
Because my storage is iSCSI and I need the VLAN configuration in place
for the Host to access it on behalf of the HE. Otherwise, yes, I agree
it would be easier to let the hosted-engine script deal with the setup.
I've done a workable setup before by letting the script do everything,
but the mode 4 bonding only gave me half the possible performance
because, in effect, one NIC on the NAS did all the transmitting while
the other NIC did all the receiving. So I really need all of the
storage network setup in place prior to starting the HE deployment.
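
For what it's worth, the point of the separate storage VLANs is to let
iSCSI multipath keep sessions on distinct links in both directions,
rather than relying on LACP hashing. A rough sketch of the binding I
have in mind (the iface names, VLAN devices, and portal address here
are made up for illustration):

  # one iSCSI iface per storage VLAN device
  iscsiadm -m iface -I storage101 --op=new
  iscsiadm -m iface -I storage101 --op=update -n iface.net_ifacename -v bond0.101
  # ... repeat for storage102 through storage104 ...
  # then discover and log in over each iface
  iscsiadm -m discovery -t sendtargets -p 192.168.101.10 -I storage101
  iscsiadm -m node -L all

With one session per VLAN, dm-multipath can then keep both NAS NICs
busy transmitting and receiving at once.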
It seems like it should be trivial to convince the engine that the two
netmasks are indeed equivalent. In
/var/lib/vdsm/persistence/netconf/nets/ovirtmgmt I tried changing the
'"prefix": "24"' setting to '"netmask": "255.255.255.0"' and running
/usr/share/vdsm/vdsm-restore-net-config, but that didn't seem to change
anything WRT the network being out of sync.
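
For reference, that persisted file is plain JSON. Mine looks roughly
like the following (the addresses are examples only, and the exact set
of keys may vary by VDSM version, so take this as a sketch):

  {
      "nic": "bond0",
      "vlan": 100,
      "bridged": true,
      "bootproto": "none",
      "ipaddr": "192.0.2.10",
      "prefix": "24",
      "gateway": "192.0.2.254",
      "defaultRoute": true
  }

The edit I tried was simply swapping the "prefix" line for
'"netmask": "255.255.255.0"' before re-running vdsm-restore-net-config.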
I too configured (in 3.6.0) an environment with HE on a single host. I
only preconfigured my bond1 in 802.3ad mode with the interfaces I
planned to use for ovirtmgmt, and I left the other interfaces
unconfigured so that none of them were managed by NetworkManager.
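
For reference, the preconfigured bond looked more or less like this (a
sketch from memory; the slave name is illustrative):

  # /etc/sysconfig/network-scripts/ifcfg-bond1
  DEVICE=bond1
  TYPE=Bond
  BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
  BOOTPROTO=none
  ONBOOT=yes
  NM_CONTROLLED=no

  # /etc/sysconfig/network-scripts/ifcfg-em3  (one such file per slave)
  DEVICE=em3
  MASTER=bond1
  SLAVE=yes
  ONBOOT=yes
  NM_CONTROLLED=no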
During the "hosted-engine --deploy" setup I got this input:
--== NETWORK CONFIGURATION ==--
Please indicate a nic to set ovirtmgmt bridge on: (em1, bond1, em2) [em1]: bond1
iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
Please indicate a pingable gateway IP address [10.4.168.254]:
and then on preview of configuration to apply:
--== CONFIGURATION PREVIEW ==--

Bridge interface          : bond1
Engine FQDN               : ractorshe.mydomain.local
Bridge name               : ovirtmgmt
After setup I configured my VLAN-based networks for my VMs from the
GUI itself in the usual way, so now I have this bond0, created by the
oVirt GUI, on the other two interfaces (em1 and em2):
[root@ractor ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 17
Partner Key: 8
Partner Mac Address: 00:01:02:03:04:0c
Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f0
Aggregator ID: 2
Slave queue ID: 0
Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f2
Aggregator ID: 2
Slave queue ID: 0
And then "ip a" command returns:
9: bond0.65@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vlan65 state UP
    link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
10: vlan65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
with
[root@ractor ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;     8000.000000000000       no
ovirtmgmt       8000.002564ff0bf4       no              bond1
                                                        vnet0
vlan65          8000.002564ff0bf0       no              bond0.65
                                                        vnet1
                                                        vnet2
vnet1 and vnet2 being the virtual network interfaces of my two running
VMs.
The only note I can submit is that when you set up a network in the
oVirt GUI with mode=4 (802.3ad), it defaults to configuring it with
"lacp_rate=0" (slow), which I think is bad, judging from the many
articles I have read (but I'm not a network guru at all).
So I chose custom mode in the GUI and specified "mode=4 lacp_rate=1"
in the options, and this is reflected in my configuration, as you can
see above in the bond0 output.
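
If you want to double-check what the kernel actually applied, the
bonding driver exposes it through sysfs (standard bonding sysfs,
nothing oVirt-specific):

  [root@ractor ~]# cat /sys/class/net/bond0/bonding/lacp_rate
  fast 1

A value of "slow 0" would mean the default was still in effect.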
Can we set lacp_rate=1 as a default option for mode=4 in oVirt?
HIH,
Gianluca
--
John Florian