[Users] oVirt 3.2.1 Update

New RPMs are now uploaded for oVirt 3.2.1. The updates include:

ovirt-engine-cli 3.2.0.11-1
ovirt-engine-sdk 3.2.0.10-1
ovirt-engine 3.2.1-1

oVirt Engine changes:
1. Fixed a bug blocking firewalld configuration when SELinux was in enforcing mode
2. Small change in All-In-One
3. Minor patches to the REST API

Also included in the oVirt Engine update are patches to enable EL6 builds. These builds will be posted shortly.

To upgrade, please run engine-upgrade.

Thanks,
Mike
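For anyone following along, a minimal sketch of the upgrade flow on the engine host (assuming the oVirt repository is already configured; the exact package name here is our assumption, not part of the announcement):

# Pull the updated 3.2.1 packages, then run the upgrade tool named above
yum update ovirt-engine-setup
engine-upgrade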

On Thu, Mar 14, 2013 at 10:49:28AM -0400, Mike Burns wrote:
[...]
A few days ago (March 14) Federico rebuilt vdsm for ovirt-3.2 (vdsm-4.10.3-10). It fixes a serious storage bug and makes GlusterFS integration work on EL6.
I think it would be good to respin this vdsm build into ovirt-3.2.1, or into a very quick ovirt-3.2.2.

Regards,
Dan.

On 03/17/2013 06:53 AM, Dan Kenigsberg wrote:
[...]
A few days ago (March 14) Federico rebuilt vdsm for ovirt-3.2 (vdsm-4.10.3-10). It fixes a serious storage bug and makes GlusterFS integration work on EL6.
I think it would be good to respin this vdsm build into ovirt-3.2.1, or into a very quick ovirt-3.2.2.
Ahh, Federico told me that and I overlooked it. vdsm has been updated in the stable F18 repo.

Mike
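For the hosts themselves, picking up the fixed build should then be a plain package update (a sketch, assuming the stable Fedora 18 repository is enabled on each host):

# Update vdsm to the rebuilt 4.10.3-10 from the stable F18 repository
yum update vdsm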

[Users] ovirt 3.2 - high multicast packet loss

Hi all,

Playing around with Red Hat Clustering, it turns out I have huge multicast packet loss (CentOS 6.4, x86_64, with all updates):

omping 192.168.1.211 192.168.1.212 -c500 (node1)
omping 192.168.1.212 192.168.1.211 -c500 (node2)

will give almost 50% loss!

192.168.1.211 : unicast, xmt/rcv/%loss = 500/500/0%, min/avg/max/std-dev = 0.330/0.610/0.789/0.064
192.168.1.211 : multicast, xmt/rcv/%loss = 500/268/46%, min/avg/max/std-dev = 0.416/0.635/0.921/0.066
192.168.1.212 : unicast, xmt/rcv/%loss = 500/500/0%, min/avg/max/std-dev = 0.388/0.653/0.863/0.069
192.168.1.212 : multicast, xmt/rcv/%loss = 500/263/47%, min/avg/max/std-dev = 0.396/0.670/1.080/0.074

OK, I am using simple hardware, but this hardware is doing virtually nothing... As mentioned at https://access.redhat.com/knowledge/sites/default/files/attachments/rhel_clu..., I set the txqueuelen to 500; same result. I'm still guessing whether this is an oVirt, virtio, or Red Hat/CentOS issue. Problems only happen after some time; that is, the first ~200 ompings show everything is fine.

Anyone?

Winfried
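For anyone wanting to reproduce this, a minimal sketch of the test (addresses are the ones from this thread; omping must be started on all participating nodes at roughly the same time so the peers can answer each other):

# On node1 and node2, list every participant; each instance measures
# unicast and multicast loss against all listed peers
omping -c 500 192.168.1.211 192.168.1.212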

Same for Debian 6 (x86_64): 47% packet loss.

ssmping -c 500 192.168.1.234

--- 192.168.1.234 statistics ---
500 packets transmitted, time 500001 ms
unicast:
500 packets received, 0% packet loss
rtt min/avg/max/std-dev = 0.352/0.675/0.863/0.072 ms
multicast:
265 packets received, 47% packet loss since first mc packet (seq 1) recvd
rtt min/avg/max/std-dev = 0.414/0.703/0.885/0.086 ms

Winfried
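A note on the tool for anyone repeating this: ssmping measures against an ssmpingd responder from the same package, so a run like the above assumes ssmpingd is already running on the target (a sketch):

# On the target (192.168.1.234): start the responder
ssmpingd
# On the client: 500 unicast and multicast probes against the responder
ssmping -c 500 192.168.1.234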

So far no reaction about the multicast packet loss...

I bumped into this bug: https://bugzilla.redhat.com/show_bug.cgi?id=880035

This looks the same as the problem I am suffering from using oVirt 3.2: heavy multicast packet loss after some time. Does this bug affect oVirt 3.2 ovirt-node (2.6.1-20120228.fc18)? Can anyone reproduce the problem (omping between 3 virtual nodes)?

Winfried
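For the three-node reproduction, the same omping invocation extends to more peers; a sketch (the third address, 192.168.1.213, is made up for illustration; run the same command on all three VMs simultaneously):

# Every node lists all participants, itself included
omping -c 500 192.168.1.211 192.168.1.212 192.168.1.213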

Michael Tsirkin (thanks!) proposes to try the following: disable multicast snooping in the bridge. Could you give it a shot?
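For concreteness, the knob in question is a sysfs attribute of the bridge device on the host (a sketch only; the bridge name varies: virbr0 on a default libvirt host, ovirtmgmt on an oVirt node, as the tests below show):

# 1 = snooping enabled (the default), 0 = disabled
cat /sys/class/net/ovirtmgmt/bridge/multicast_snooping
echo 0 > /sys/class/net/ovirtmgmt/bridge/multicast_snooping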
From: "Winfried de Heiden" <wdh@dds.nl> To: users@ovirt.org Sent: Thursday, March 21, 2013 10:14:04 AM Subject: Re: [Users] ovirt 3.2 - high multicast packet loss
So far no reactie about the multicast packet loss......
I bumbed into this bug: https://bugzilla.redhat.com/show_bug.cgi?id=880035
This looks the same as the problems I am suffering using oVirt 3.2: heavy multicast packet loss after some time.
This the bug affect oVirt 3.2 ovirt-node (2.6.1-20120228.fc18)? Can anyone reproduce the problem (omping between 3 virtual nodes)?
Winfried
Op 18-03-13 16:58, Winfried de Heiden schreef:
Same for Debian 6 (x86_64); 47% packet loss:
ssmping -c 500 192.168.1.234
--- 192.168.1.234 statistics --- 500 packets transmitted, time 500001 ms unicast: 500 packets received, 0% packet loss rtt min/avg/max/std-dev = 0.352/0.675/0.863/0.072 ms multicast: 265 packets received, 47% packet loss since first mc packet (seq 1) recvd rtt min/avg/max/std-dev = 0.414/0.703/0.885/0.086 ms
Winfried
Hi all,
Playing around with Red Hat Clustering, it turns out I have a hughe multicast packet loss: (Centos 6.4 - x86_64 with all updates)
omping 192.168.1.211 192.168.1.212 -c500 (node1) omping 192.168.1.212 192.168.1.211 -c500(node2)
will give almost 50% loss!
192.168.1.211 : unicast, xmt/rcv/%loss = 500/500/0%, min/avg/max/std-dev = 0.330/0.610/0.789/0.064 192.168.1.211 : multicast, xmt/rcv/%loss = 500/268/46%, min/avg/max/std-dev = 0.416/0.635/0.921/0.066
192.168.1.212 : unicast, xmt/rcv/%loss = 500/500/0%, min/avg/max/std-dev = 0.388/0.653/0.863/0.069 192.168.1.212 : multicast, xmt/rcv/%loss = 500/263/47%, min/avg/max/std-dev = 0.396/0.670/1.080/0.074
OK, I am using simple hardware, but this hardware is virtually doing nothing...
As mentioned on https://access.redhat.com/knowledge/sites/default/files/attachments/rhel_clu..., I set the txqueelen to 500, same result?
I 'm still guessing whether this is an oVirt, virtio or Red Hat/Centos issue? Problems only happend after some time; that is 200 mo-pings shows everything is fine.
Anyone?
Winfried
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi, "Plain KVM", that is: Fedora 18 + KVM with VirtManager is suffering the same: On my KVM host: # cat /sys/class/net/virbr0/bridge/multicast_snooping 1 Now, on 2 virtual (Centos 6.4) nodes: will give huge packet loss: 92.168.122.151 : unicast, xmt/rcv/%loss = 300/300/0%, min/avg/max/std-dev = 0.258/0.706/1.170/0.097 192.168.122.151 : multicast, xmt/rcv/%loss = 300/261/13% (seq>=2 12%), min/avg/max/std-dev = 0.357/0.861/1.944/0.198 Increasing omping to -c 500; packet loss will be about 47%. Now, on the KVM host: # echo 0 > /sys/class/net/virbr0/bridge/multicast_snooping cat /sys/class/net/virbr0/bridge/multicast_snooping 0 Giving it several tries, packet loss is 0%! I'll give it a try on oVirt tonight. Winfried Op 21-03-13 12:17, Antoni Segura Puimedon schreef:

Hi all,

Tried it on oVirt 3.2 as well; on the oVirt node (ovirt-node-iso-2.6.1-20120228.fc18):

[root@bigvirt bridge]# cat /sys/class/net/ovirtmgmt/bridge/multicast_snooping
1

omping 192.168.1.212 192.168.1.211 -c500   ## will give heavy packet loss:

192.168.1.212 : unicast, xmt/rcv/%loss = 500/500/0%, min/avg/max/std-dev = 0.381/0.633/0.891/0.074
192.168.1.212 : multicast, xmt/rcv/%loss = 500/266/46%, min/avg/max/std-dev = 0.427/0.641/0.997/0.065

Disabling multicast snooping:

[root@bigvirt bridge]# echo 0 > /sys/class/net/ovirtmgmt/bridge/multicast_snooping
[root@bigvirt bridge]# cat /sys/class/net/ovirtmgmt/bridge/multicast_snooping
0

omping 192.168.1.212 192.168.1.211 -c500   ## packet loss 0%

192.168.1.212 : unicast, xmt/rcv/%loss = 500/500/0%, min/avg/max/std-dev = 0.353/0.623/0.944/0.075
192.168.1.212 : multicast, xmt/rcv/%loss = 500/500/0%, min/avg/max/std-dev = 0.356/0.642/0.964/0.078

I tried it a couple of times, switching multicast snooping on and off; same result. I would consider this to be a bug? Bugzilla report wanted?

Winfried
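One caveat worth flagging (our note, not something from the thread): the sysfs write is runtime-only and is lost on reboot, so on an ordinary host it would have to be reapplied at boot; whether it can be persisted at all on the mostly stateless ovirt-node image is an open question here. A hypothetical sketch for a regular EL6 host using rc.local:

# Reapply the toggle at every boot (rc.local must be executable)
echo 'echo 0 > /sys/class/net/ovirtmgmt/bridge/multicast_snooping' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local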

Bug report created: https://bugzilla.redhat.com/show_bug.cgi?id=926954

Winfried
participants (4)
- Antoni Segura Puimedon
- Dan Kenigsberg
- Mike Burns
- Winfried de Heiden