[Users] Packet loss to guests
René Koch (ovido)
r.koch at ovido.at
Fri Mar 8 09:12:44 EST 2013
Hi,
If you don't have another oVirt / RHEL KVM host in the same network, then
MACs won't be an issue. So it's totally safe to have oVirt in the same
network as all other systems.
I see in your email that you use bonding mode 2. This shouldn't cause
issues with switches, but I think I once had an issue with RHEV (bond
mode 2) and a Cisco switch at a customer site - please don't ask me for
details on what their network admins changed to make this setup work.
Can you try bonding mode 1 (active-backup) and check if you still have
packet loss?
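
For reference, switching to active-backup should only need a small change
in ifcfg-bond0 and a network restart. A rough sketch, based on the config
you posted below (miimon=100 is just a common default for link
monitoring, not something from your setup):

BONDING_OPTS="mode=1 miimon=100"

# apply the change; /proc/net/bonding/bond0 should then report
# "Bonding Mode: fault-tolerance (active-backup)"
service network restart
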
--
Best regards
René Koch
Senior Solution Architect
============================================
ovido gmbh - "Das Linux Systemhaus"
Brünner Straße 163, A-1210 Wien
Phone: +43 720 / 530 670
Mobile: +43 660 / 512 21 31
E-Mail: r.koch at ovido.at
============================================
On Fri, 2013-03-08 at 15:57 +0200, Neil wrote:
> Thanks Rene, I'll look into your suggestions.
>
> I don't think it's a conflicting MAC as there is only 1 guest, but
> will check it out.
>
> Would you advise running the engine and nodes on a separate network
> range, rather than on my existing network?
>
> Thanks.
>
> Regards.
>
> Neil Wilson.
>
> On Fri, Mar 8, 2013 at 1:52 PM, René Koch (ovido) <r.koch at ovido.at> wrote:
> > Hi Neil,
> >
> > I had a similar issue in my oVirt environment with some vms.
> > The issue on my side was an oVirt and a RHEV environment in the same
> > subnet and conflicting MAC addresses on some vms (as both use the same
> > MAC range by default and I didn't change this with engine-config).
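> >
> > If you want to double-check the range your engine hands out, you can
> > query and change it with engine-config. A quick sketch - the range
> > below is only an example, pick one that's free in your network:
> >
> > # show the MAC pool range oVirt assigns to new vms
> > engine-config -g MacPoolRanges
> > # set a custom range, then restart the engine to apply it
> > engine-config -s MacPoolRanges=00:1a:4a:a8:10:00-00:1a:4a:a8:10:ff
> > service ovirt-engine restart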
> >
> > So can you check if the MAC of your vm is in use by another host/vm
> > (maybe from a KVM installation)?
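> >
> > A quick way to spot a duplicate from any Linux box in your subnet is
> > to fill the ARP cache and then look for a MAC that shows up behind
> > more than one IP. A rough sketch for your 192.168.0.0/24 range:
> >
> > # populate the ARP cache, then print any MAC listed more than once
> > for i in $(seq 1 254); do ping -c1 -W1 192.168.0.$i >/dev/null & done; wait
> > arp -an | grep -v incomplete | awk '{print $4}' | sort | uniq -d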
> >
> >
> > --
> > Best regards
> >
> > René Koch
> > Senior Solution Architect
> >
> > ============================================
> > ovido gmbh - "Das Linux Systemhaus"
> > Brünner Straße 163, A-1210 Wien
> >
> > Phone: +43 720 / 530 670
> > Mobile: +43 660 / 512 21 31
> > E-Mail: r.koch at ovido.at
> > ============================================
> >
> >
> > On Fri, 2013-03-08 at 11:27 +0200, Neil wrote:
> >> Hi guys,
> >>
> >> I've got a bit of a strange one. I'm setting up an internal oVirt
> >> system (CentOS 6.3 64bit, dreyou repo) and I'm getting lots of packet
> >> loss on the guest I've installed. The packet loss doesn't happen on
> >> the physical hosts - only the VM gets it, when communicating to and
> >> from it.
> >>
> >> 1 node (CentOS 6.3 64bit):
> >> vdsm-4.10.0-0.46.15.el6.x86_64
> >> vdsm-cli-4.10.0-0.46.15.el6.noarch
> >> vdsm-xmlrpc-4.10.0-0.46.15.el6.noarch
> >> vdsm-python-4.10.0-0.46.15.el6.x86_64
> >>
> >> The engine (also CentOS 6.3 64bit; the engine has local NFS storage
> >> which the node connects to):
> >> ovirt-engine-userportal-3.1.0-3.19.el6.noarch
> >> ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
> >> ovirt-engine-3.1.0-3.19.el6.noarch
> >> ovirt-engine-backend-3.1.0-3.19.el6.noarch
> >> ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
> >> ovirt-image-uploader-3.1.0-16.el6.noarch
> >> ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
> >> ovirt-iso-uploader-3.1.0-16.el6.noarch
> >> ovirt-engine-restapi-3.1.0-3.19.el6.noarch
> >> ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
> >> ovirt-engine-sdk-3.2.0.8-1.el6.noarch
> >> ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
> >> ovirt-engine-cli-3.2.0.5-1.el6.noarch
> >> ovirt-log-collector-3.1.0-16.el6.noarch
> >> ovirt-engine-setup-3.1.0-3.19.el6.noarch
> >> ovirt-engine-jbossas711-1-0.x86_64
> >> ovirt-engine-config-3.1.0-3.19.el6.noarch
> >>
> >> Both the node and the engine have bonded interfaces. All NICs are
> >> Intel 82574L gigabit, and the managed switch reflects gigabit on each
> >> of the ports.
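> >>
> >> For what it's worth, the negotiated speed/duplex can also be
> >> confirmed per slave NIC with ethtool:
> >>
> >> ethtool eth0 | grep -E 'Speed|Duplex|Link detected'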
> >>
> >> The ifcfg-bond0 is below...
> >>
> >> DEVICE=bond0
> >> IPADDR=192.168.0.9
> >> NETWORK=192.168.0.0
> >> NETMASK=255.255.255.0
> >> USERCTL=no
> >> BONDING_OPTS=mode=2
> >> BOOTPROTO=none
> >> MTU=1500
> >> ONBOOT=yes
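> >>
> >> The running bond state (active mode and per-slave link status) can be
> >> read from the kernel, in case it's useful:
> >>
> >> cat /proc/net/bonding/bond0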
> >>
> >> Then the ifcfg-eth0 and eth1 are almost identical...
> >> DEVICE=eth2
> >> USERCTL=no
> >> ONBOOT=yes
> >> MASTER=bond0
> >> SLAVE=yes
> >> MTU=1500
> >> BOOTPROTO=none
> >>
> >>
> >> These are the network details on the guest. As you can see, there are
> >> no network errors showing on the guest at all, which is strange...
> >>
> >> eth0 Link encap:Ethernet HWaddr 00:1A:4A:A8:00:00
> >> inet addr:192.168.0.12 Bcast:192.168.0.255 Mask:255.255.255.0
> >> inet6 addr: fe80::21a:4aff:fea8:0/64 Scope:Link
> >> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> >> RX packets:5050 errors:0 dropped:0 overruns:0 frame:0
> >> TX packets:255 errors:0 dropped:0 overruns:0 carrier:0
> >> collisions:0 txqueuelen:1000
> >> RX bytes:490762 (479.2 KiB) TX bytes:32516 (31.7 KiB)
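> >>
> >> For reference, the loss is visible when pinging the guest from
> >> another machine on the LAN, and the host-side tap device for the
> >> guest (typically named vnet0 on the node, though the name may differ)
> >> can be checked for drops the same way:
> >>
> >> ping -c 100 192.168.0.12 | tail -2   # packet loss summary
> >> ifconfig vnet0 | grep -E 'errors|dropped'   # host-side tap counters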
> >>
> >> Ethernet controller: Red Hat, Inc Virtio network device
> >>
> >> Has anyone got any ideas? Have I set something up wrong?
> >>
> >> Any help or advice is greatly appreciated.
> >>
> >> Regards.
> >>
> >> Neil Wilson.
> >> _______________________________________________
> >> Users mailing list
> >> Users at ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >