Hi Everyone,
Okay, I'm again having problems getting basic networking set up with
oVirt 4.1. Here is my situation. I have two servers I want to use
to create an oVirt cluster, with two different networks. My "public"
network is a 1G link on device em1 connected to my Internet feed, and my
"storage" network is a 10G link connected on device p5p1 to my file
server. Since I need to connect to my storage network in order to do
the install, I selected p5p1 as the ovirtmgmt interface when installing
the hosted engine. That worked fine; I got everything installed, then
used some ssh-proxy magic to connect to the web console and completed
the install (set up a storage domain, created a new network vmNet for
VM networking, and added em1 to it).
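In case it matters, by "ssh-proxy magic" I mean something along these
lines (a sketch rather than the exact command; the host name is a
placeholder, and a plain -L port forward would do the same job):

ssh -D 1080 root@host1
# then point the browser's SOCKS proxy at localhost:1080 and open the
# engine's web console URL as usual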
The problem was that when I added a second network device to the
HostedEngine VM (so that I could connect to it from my public network),
it would intermittently go down. I did some digging and found IPv6
errors in dmesg (IPv6: eth1: IPv6 duplicate address
2001:410:e000:902:21a:4aff:fe16:151 detected!), so I disabled IPv6 on
both eth0 and eth1 in the HostedEngine and rebooted it. The trouble now
is that after restarting the VM, the eth1 device is missing.
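To be concrete, "disabled IPv6" means roughly the following inside the
engine VM, plus the IPV6INIT=no lines you can see in the ifcfg files at
the end of this mail (a sketch, not necessarily the exact commands I ran):

# runtime disable of IPv6 on both NICs in the HostedEngine
sysctl -w net.ipv6.conf.eth0.disable_ipv6=1
sysctl -w net.ipv6.conf.eth1.disable_ipv6=1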
So, my question is: can I add a second NIC to the HostedEngine VM and
make it stick, or will it be deleted whenever the engine VM is
restarted? Is there a better way to do what I'm trying to do, i.e.,
should I set up ovirtmgmt on the public em1 interface and then create
the "storage" network after the fact for connecting to the datastores
and such? Is that even possible, or required? I was thinking it would
be better for migrations and other management functions to happen on
the faster 10G network, but if the HostedEngine doesn't need to be able
to connect to the storage network, maybe it's not worth the effort?
Eventually I want to set up LACP on the storage network, but the last
time I tried that I had to wipe the servers and reinstall from scratch.
I suspect that was because I set up the bonding before installing
oVirt, so I didn't do it that way this time.
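For reference, what I have in mind for the storage bond is roughly the
usual 802.3ad setup below. This is only a sketch with a hypothetical
second 10G port (p5p2), not something I've applied, and my understanding
is the bond should probably be created through the engine's network UI
rather than by hand-editing files like this, which may be where I went
wrong last time.

ifcfg-bond0: (hypothetical)
----------------
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
ONBOOT=yes

ifcfg-p5p1: (hypothetical slave; same idea for p5p2)
----------------
DEVICE=p5p1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes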
Here are my /etc/sysconfig/network-scripts/ifcfg-* files, in case I
did something wrong there (I'm more familiar with Debian/Ubuntu network
setup than with CentOS):
ifcfg-eth0: (ovirtmgmt aka storage)
----------------
BROADCAST=192.168.130.255
NETMASK=255.255.255.0
BOOTPROTO=static
DEVICE=eth0
IPADDR=192.168.130.179
ONBOOT=yes
DOMAIN=public.net
ZONE=public
IPV6INIT=no
ifcfg-eth1: (vmNet aka Internet)
----------------
BROADCAST=192.168.1.255
NETMASK=255.255.255.0
BOOTPROTO=static
DEVICE=eth1
IPADDR=192.168.1.179
GATEWAY=192.168.1.254
ONBOOT=yes
DNS1=192.168.1.1
DNS2=192.168.1.2
DOMAIN=public.net
ZONE=public
IPV6INIT=no