Hi,
"Plain KVM", that is: Fedora 18 + KVM with VirtManager is suffering the
same:
On my KVM host:
# cat /sys/class/net/virbr0/bridge/multicast_snooping
1
Now, running omping between 2 virtual (CentOS 6.4) nodes gives huge packet loss:
192.168.122.151 : unicast, xmt/rcv/%loss = 300/300/0%,
min/avg/max/std-dev = 0.258/0.706/1.170/0.097
192.168.122.151 : multicast, xmt/rcv/%loss = 300/261/13% (seq>=2 12%),
min/avg/max/std-dev = 0.357/0.861/1.944/0.198
Increasing omping to -c 500, packet loss rises to about 47%.
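(For reference, the test behind these numbers is the usual omping pattern: the
same command on each of the two guests, listing both guest addresses;
192.168.122.150 is only a stand-in for the other guest's address here:)

# omping -c 300 192.168.122.150 192.168.122.151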
Now, on the KVM host:
# echo 0 > /sys/class/net/virbr0/bridge/multicast_snooping
# cat /sys/class/net/virbr0/bridge/multicast_snooping
0
After several tries, packet loss stays at 0%!
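(If this holds up, the setting will need to survive a reboot; a minimal sketch,
assuming the bridge keeps the name virbr0 and that /etc/rc.d/rc.local exists
and is executable on the host:)

# echo 'echo 0 > /sys/class/net/virbr0/bridge/multicast_snooping' >> /etc/rc.d/rc.local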
I'll give it a try on oVirt tonight.
Winfried
On 21-03-13 12:17, Antoni Segura Puimedon wrote:
Michael Tsirkin (Thanks!) proposes to try the following:
try disabling multicast snooping in the bridge
Could you give it a shot?
----- Original Message -----
> From: "Winfried de Heiden" <wdh(a)dds.nl>
> To: users(a)ovirt.org
> Sent: Thursday, March 21, 2013 10:14:04 AM
> Subject: Re: [Users] ovirt 3.2 - high multicast packet loss
>
> So far no response about the multicast packet loss...
>
> I bumped into this bug:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=880035
>
> This looks the same as the problem I am seeing with oVirt 3.2:
> heavy multicast packet loss after some time.
>
> Does this bug affect oVirt 3.2 ovirt-node (2.6.1-20120228.fc18)?
> Can anyone reproduce the problem (omping between 3 virtual nodes)?
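>
> (To reproduce, I'm assuming the usual omping pattern: run the same command on
> each of the three nodes, listing all three addresses; the addresses below are
> only placeholders:)
>
> # omping -c 500 192.168.122.151 192.168.122.152 192.168.122.153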
>
> Winfried
>
>
> On 18-03-13 16:58, Winfried de Heiden wrote:
>> Same for Debian 6 (x86_64); 47% packet loss:
>>
>> ssmping -c 500 192.168.1.234
>>
>> --- 192.168.1.234 statistics ---
>> 500 packets transmitted, time 500001 ms
>> unicast:
>> 500 packets received, 0% packet loss
>> rtt min/avg/max/std-dev = 0.352/0.675/0.863/0.072 ms
>> multicast:
>> 265 packets received, 47% packet loss since first mc packet (seq 1) recvd
>> rtt min/avg/max/std-dev = 0.414/0.703/0.885/0.086 ms
>>
>>
>> Winfried
>>
>>> Hi all,
>>> Playing around with Red Hat Clustering, it turns out I have huge
>>> multicast packet loss (CentOS 6.4 - x86_64 with all updates):
>>>
>>> omping 192.168.1.211 192.168.1.212 -c500 (node1)
>>> omping 192.168.1.212 192.168.1.211 -c500 (node2)
>>>
>>> will give almost 50% loss!
>>>
>>> 192.168.1.211 : unicast, xmt/rcv/%loss = 500/500/0%,
>>> min/avg/max/std-dev = 0.330/0.610/0.789/0.064
>>> 192.168.1.211 : multicast, xmt/rcv/%loss = 500/268/46%,
>>> min/avg/max/std-dev = 0.416/0.635/0.921/0.066
>>>
>>> 192.168.1.212 : unicast, xmt/rcv/%loss = 500/500/0%,
>>> min/avg/max/std-dev = 0.388/0.653/0.863/0.069
>>> 192.168.1.212 : multicast, xmt/rcv/%loss = 500/263/47%,
>>> min/avg/max/std-dev = 0.396/0.670/1.080/0.074
>>>
>>> OK, I am using simple hardware, but this hardware is doing virtually
>>> nothing...
>>>
>>> As mentioned in
>>> https://access.redhat.com/knowledge/sites/default/files/attachments/rhel_...,
>>> I set the txqueuelen to 500; same result.
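>>>
>>> (The txqueuelen change was along these lines; vnet0 is only a placeholder
>>> for the guest's tap interface on the host:)
>>>
>>> # ifconfig vnet0 txqueuelen 500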
>>>
>>> I'm still wondering whether this is an oVirt, virtio or Red
>>> Hat/CentOS issue. The problems only happen after some time; the first
>>> 200 pings show everything is fine.
>>>
>>> Anyone?
>>>
>>> Winfried
>>>
>>>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users