Hello,
After a successful 3.1 setup, I'm starting a completely new 3.3 environment.
I installed CentOS 6.4 for the manager, with oVirt engine 3.3.
For the nodes, on Dell M620 blades, I installed
ovirt-node-iso-3.0.1-1.0.2.vdsm.el6.iso.
I set up a blade like this:
- the first two interfaces bonded + a bridge, dedicated to the oVirt
management network
- the next 4 interfaces bonded + a bridge, dedicated to the iSCSI (pure
copper) LAN (sketched below)
(I set up exactly the same environment on my previous oVirt 3.1
installation and it is working fine.)
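For reference, here is roughly what the iSCSI bond/bridge files look
like on a node (simplified from memory; the miimon value, IP address
and netmask below are placeholders, not the exact ones I use):

  # /etc/sysconfig/network-scripts/ifcfg-bond1
  DEVICE=bond1
  BONDING_OPTS="mode=1 miimon=100"
  BRIDGE=ovirtiscsi
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-ovirtiscsi
  DEVICE=ovirtiscsi
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=x.x.x.x
  NETMASK=255.255.255.0

(plus the usual MASTER=bond1 / SLAVE=yes lines in the ifcfg files of
the four physical interfaces)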
Once I've configured that correctly on the 3 nodes, I reboot (or
stonith) the first node to check that the setup is stable.
I get various effects:
- sometimes the bonding dedicated to the iSCSI LAN is lost (the
/etc/sysconfig/network-scripts/ifcfg-bond1 and ifcfg-ovirtiscsi files
are gone)
- sometimes the bonding dedicated to the ovirtmgmt part is lost or
changed: the bonding mode I had set to 1 (active-backup) is replaced
by mode 0 (balance-rr), which leads to other local network issues
- during reboot, I see "bnx2fs" issues: before them, pings to this node
are fine (mode 1); after them, pings to this node get DUPs (mode 0)
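To see which mode a bond actually came up in after a reboot, I check
it like this (bond1 being the iSCSI bond here; the same works for the
ovirtmgmt one):

  cat /sys/class/net/bond1/bonding/mode   # prints e.g. "active-backup 1" or "balance-rr 0"
  cat /proc/net/bonding/bond1             # full state: mode, miimon, slaves, link status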
It is very tiresome to keep losing time on something as simple as
bonding and bridging.
Using the command line on the node, I'm able to correct everything, but
nothing is reboot-proof, and I don't know what is causing those changes.
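The manual fix is basically to rewrite the ifcfg files as above and
then something like:

  service network restart        # bring the bond and bridge back up with the right mode
  brctl show                     # check the bridge has the bond as one of its ports
  cat /proc/net/bonding/bond1    # confirm mode 1 (active-backup) and the slaves are back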
I'm sure I did nothing exotic when first installing the node: I just
used the TUI and created a very simple bond with no VLAN.
Even when I concentrate only on the ovirtmgmt bond, I can't get it to
stay stable.
What should I look at now?
--
Nicolas Ecarnot