Hi Antoni,
The problem is LRO (large receive offload) being enabled by the bnx2x driver.
After a reboot LRO comes up enabled; after reconfiguring the bond, LRO is
disabled again.
So I created /etc/modprobe.d/bnx2x.conf with:

options bnx2x disable_tpa=1

and everything is OK now after a reboot.
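For anyone hitting the same symptom, a minimal sketch of the fix above plus a runtime workaround that avoids a reboot (eth0/eth1 are assumed names for the bond's bnx2x slave ports, not taken from this thread; substitute your own):

```shell
# Persistent fix: disable TPA (the bnx2x implementation of LRO) at
# module load time, matching the modprobe.d workaround described above.
echo "options bnx2x disable_tpa=1" > /etc/modprobe.d/bnx2x.conf

# Runtime check and workaround, no reboot or driver reload needed
# (interface names are assumptions; list yours with 'ip link'):
ethtool -k eth0 | grep large-receive-offload   # show current LRO state
ethtool -K eth0 lro off                        # turn LRO off immediately
ethtool -K eth1 lro off
```

Note that ethtool changes are not persistent, so the modprobe.d option is still needed for the setting to survive a reboot.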
There was a previous bug open for RHEL5 about this problem. Should I reopen
it for RHEL6?
This problem only happens when bnx2x is used with bonds... should this be
considered a bug?
Best regards,
Ricardo Esteves.
-----Original Message-----
From: Antoni Segura Puimedon <asegurap(a)redhat.com>
To: Users(a)ovirt.org
Subject: Re: [Users] Bonding - VMs Network performance problem
Date: Tue, 30 Jul 2013 12:23:36 -0400 (EDT)
Hi Ricardo,
Thanks a lot for the extra and very relevant info. Could you please create a
bug in bugzilla so we can better track it?
Best,
Toni
----- Original Message -----
From: "Ricardo Esteves" <maverick.pt(a)gmail.com>
To: Users(a)ovirt.org
Sent: Tuesday, July 30, 2013 5:58:44 PM
Subject: Re: [Users] Bonding - VMs Network performance problem
Hi,
I also noticed this in /var/log/messages while the problem was happening (I
was downloading a file in the VM from the web):
Jul 30 16:55:55 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:56 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:56 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:56 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:56 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:56 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:57 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:57 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:57 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Jul 30 16:55:57 blade5 kernel: bond0.1: received packets cannot be forwarded while LRO is enabled
Best regards,
Ricardo Esteves.
-----Original Message-----
From: Ricardo Esteves <maverick.pt(a)gmail.com>
To: Users(a)ovirt.org
Cc: Itamar Heim <iheim(a)redhat.com>, Mike Kolesnik <mkolesni(a)redhat.com>
Subject: Re: [Users] Bonding - VMs Network performance problem
Date: Tue, 30 Jul 2013 15:10:24 +0100
Good afternoon,
Attached is the result of 'vdsClient -s 0 getVdsCaps' during the problem and
after the problem was resolved.
Best regards,
Ricardo Esteves.
-----Original Message-----
From: Mike Kolesnik <mkolesni(a)redhat.com>
To: Ricardo Esteves <maverick.pt(a)gmail.com>
Cc: Users(a)ovirt.org, Itamar Heim <iheim(a)redhat.com>
Subject: Re: [Users] Bonding - VMs Network performance problem
Date: Tue, 16 Jul 2013 10:01:31 -0400 (EDT)
----- Original Message -----
On 07/16/2013 04:09 PM, Ricardo Esteves wrote:
> Hi,
>
> Not really.
>
> I can resolve it temporarily by unconfiguring the bond and then
> configuring it again.
>
> But when I reboot the server the problem comes back.

Can you compare the network configuration before and after you change it
with setup networks? Perhaps you can send a pastebin of 'vdsClient 0
getVdsCaps' (or 'vdsClient -s 0 getVdsCaps' if the first doesn't work)
from the host when the bond is slow, and one after you break and create it
again?

If you don't have the vdsClient command you can 'yum install vdsm-cli' and
it should be available.

-----Original Message-----
From: Itamar Heim <iheim(a)redhat.com>
To: Ricardo Esteves <maverick.pt(a)gmail.com>
Subject: Re: [Users] Bonding - VMs Network performance problem
Date: Tue, 16 Jul 2013 15:38:10 +0300

On 07/01/2013 03:12 PM, Ricardo Esteves wrote:
> Hi,
>
> Yes, i'm still experiencing this problem, in fact just happened a few
> minutes ago. :)
>
> All MTUs are 1500.

was this resolved?

-----Original Message-----
From: Livnat Peer <lpeer(a)redhat.com>
To: Ricardo Esteves <maverick.pt(a)gmail.com>
Subject: Re: [Users] Bonding - VMs Network performance problem
Date: Sun, 23 Jun 2013 11:33:58 +0300

Hi Ricardo,
Are you still experiencing the problem described below?
Are you configuring MTU (to something other than default or 1500) for
one of the networks on the bond?

Thanks, Livnat

On 06/18/2013 05:36 PM, Ricardo Esteves wrote:
> Good afternoon,
>
> Yes, the "Save network configuration" is checked, configurations are
> persistent across boots.
>
> The problem is not the persistence of the configurations, the problem is
> that after a reboot the network performance on the VMs is very bad, and
> to fix it I need to remove the bonding and add it again.
>
> In attachment, the screenshots of my network configuration.
>
> Best regards,
> Ricardo Esteves.

-----Original Message-----
From: Mike Kolesnik <mkolesni(a)redhat.com>
To: Ricardo Esteves <maverick.pt(a)gmail.com>
Cc: Users(a)ovirt.org
Subject: Re: [Users] Bonding - VMs Network performance problem
Date: Sun, 26 May 2013 04:57:43 -0400 (EDT)

> Hi,
>
> I've got ovirt installed on 2 HP BL460c G6 blades, and my VMs have
> very poor network performance (around 7,01K/s).
>
> On the servers themselves there is no problem, i can download a file
> with wget at around 99 M/s.
>
> Then i go to ovirt network configuration, remove the bonding and then
> make the bonding again and the problem gets fixed (i have to do this
> every time i reboot my blades).

Have you tried to check the "Save network configuration" check box, or
clicking the button from the host's NICs sub-tab?

This should persist the configuration that you set on the host across
reboots..

> SERVER's Software:
> CentOS 6.4 (64 bits) - 2.6.32-358.6.2.el6.x86_64
> Ovirt EL6 official rpms.
>
> Anyone experienced this kind of problems?
>
> Best regards,
> Ricardo Esteves.
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users