Thanks for the response; my replies are inline.
On Wed, Jun 20, 2012 at 2:54 AM, Mike Kolesnik <mkolesni(a)redhat.com> wrote:
Hi,
Please see reply in-line.
> In ovirt-engine-3.1 I'm attempting to setup the base logical networks
> and have run into 2 major issues.
Are you using cluster version 3.0 or 3.1?
I have been using 3.1 since it's the default. Is the difference just the
API updates? All I could really find about networking changes between
3.0 and 3.1 was this document:
http://www.ovirt.org/wiki/Features/Design/Network/SetupNetworks
>
> The first is I'm only seeing a Gateway field for the management
> interface. When I went to create a network for VMs (on a separate
> subnet) I did not see a place to specify a gateway (see img
> ovirt_network_missing_gateway.png). Right now my management port is
> on a 100 Mbps network and the bridged devices live on a 1 Gbps network
> (net140 in cluster). Is there a reason the gateway would be missing?
> I've attached ovirt_networks.png that shows all the interfaces on my
> host.
Unfortunately, oVirt only supports setting the default gateway of the Host
(this is the field you saw in the management network).
We could theoretically use initscripts' static routing files, but that is left for
future development.
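If you need per-network routes in the meantime, the usual manual workaround
is a static routes file on the host itself. Roughly something like this
(a sketch only; the interface name and addresses are placeholders, and
oVirt will neither create nor persist such a file):

# /etc/sysconfig/network-scripts/route-eth4  (example values only)
ADDRESS0=10.20.0.0
NETMASK0=255.255.0.0
GATEWAY0=10.20.1.1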
So for now, is it easier to just run all public interfaces through the
same subnet/gateway? The main reason to run management at 100 Mbps and
everything else at 1 Gbps is that our campus is running out of IPs, so
we're trying to conserve addresses on the gigabit network.
>
> The second issue I'm having is creating a storage network. I created
> 2 logical networks, private0 and private1. I left "VM Network"
> unchecked on both, as my assumption was that it dictates whether they
> can be added to VMs. Since these are only for hosts to connect to the
> iSCSI I didn't think that was necessary. When I set the IP information
> (private_network0.png) and select Ok, the save goes through, but when I
> edit the interface again the information is gone and the file
> ifcfg-eth4 does not have IP information. This is what it looks like:
>
> DEVICE=eth4
> ONBOOT=yes
> BOOTPROTO=none
> HWADDR=00:1b:21:1d:33:f0
> NM_CONTROLLED=no
> MTU=9000
I didn't quite understand what you did here.
What I think you meant is:
1. You edited the network on a NIC, and provided static boot protocol
with the parameters (ip, netmask).
2. After you clicked OK, the configuration was sent to the Host, and in
the Host's "Network Interfaces" tab you could see the IP in the
"Address" column. On the host, the ifcfg script for this network had
these fields set.
--- Assuming that no restart of Host or VDSM on Host was done ---
3. You edited the network again, didn't change anything, and clicked OK.
4. This time, the boot protocol info was gone from display & ifcfg file
on the Host.
Is this correct?
Also, do you by any chance have the log files from ovirt (engine.log) and
vdsm (vdsm.log) covering the flow that you did?
I'll try to clarify the steps I took; sorry if it was unclear before.
1. Create a logical network in the Cluster that was NOT a "VM Network"
(my assumption of how to set up a storage network)
2. Edit NIC on host, set boot protocol to static and provide
IP/Netmask, and select the logical network created in #1, check "Save
network configuration"
3. After clicking OK the corresponding ifcfg file on the node was
modified, but the values for IP/Netmask were missing. The values also
did not appear in the network interface list, and were not shown when
going back to that same interface and selecting "Add/Edit" again.
That process did not involve a reboot of the host.
So on the host interface eth5 I set the following via the web interface:
Network: private1
Boot Protocol: Static
IP: 10.20.1.241
Subnet Mask: 255.0.0.0
Check: Save network configuration
After the save, the node's ifcfg-eth5 is touched (based on the modified
date in ls -la), but this is all it contains:
DEVICE=eth5
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:1b:21:1d:33:f1
NM_CONTROLLED=no
MTU=9000
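For comparison, based on the values I entered above I would have expected
the file to contain something roughly like this (just my expectation of a
static configuration, not anything oVirt actually wrote):

DEVICE=eth5
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.20.1.241
NETMASK=255.0.0.0
HWADDR=00:1b:21:1d:33:f1
NM_CONTROLLED=no
MTU=9000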
As far as I can tell the only setting from ovirt-engine that made it
to that file was the MTU setting defined when creating the logical
network for the cluster.
Is my process somehow wrong, or am I missing a step? I've done this with
the node in both "Up" and "Maintenance" status, with the same results.
As a test I manually updated the IP/Netmask in ifcfg-eth4, and it shows
up in the web interface with the correct information; however, any change
made via the web interface removes the IPADDR and NETMASK lines again.
>
> I also attached image cluster_logical_networks.png that shows all of
> the logical networks on this cluster. So far my plan is to have a
> single public interface for VM traffic, then two for storage traffic,
> each going to a different switch. This setup is just an initial test
> but I'd hope to have it in production once I get some of these kinks
> worked out.
>
> Please let me know what information would be useful to debug this
> further.
>
> Thanks
> - Trey
Regards,
Mike
Attached are logs from the host and engine: host - node_vdsm.txt and
engine - engine.txt.
The only issues I see in them are two deprecation notices from vdsm.
Thanks
- Trey