Then I'm also out of ideas.
You could take a look at the options in /etc/vdsm/vdsm.conf
( information at /usr/share/doc/vdsm-*/vdsm.conf.sample )
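Something along these lines should show whether the running config deviates from the shipped defaults (just a sketch; adjust the wildcard if more than one vdsm doc directory is installed):

# documented defaults shipped with the vdsm package
less /usr/share/doc/vdsm-*/vdsm.conf.sample

# options that differ from the sample defaults
diff -u /usr/share/doc/vdsm-*/vdsm.conf.sample /etc/vdsm/vdsm.conf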
On Wed, Dec 11, 2013 at 2:48 PM, Markus Stockhausen
<stockhausen(a)collogia.de> wrote:
> From: sander.grendelman(a)gmail.com [sander.grendelman(a)gmail.com]
> Sent: Wednesday, 11 December 2013 14:43
> To: Markus Stockhausen
> Cc: ovirt-users
> Subject: Re: [Users] Abysmal network performance inside hypervisor node
>
> I'm just wondering if the tested interfaces are bridged, because I've
> seen some issues with network throughput and bridged interfaces on my
> local system (F19).
>
> Basically, if an IP is configured on the bridge itself (in oVirt this
> is the case if the network is configured as a VM network), latency
> goes up and throughput goes down.
>
> Can you rule this one out by using an unbridged interface?
I see. But my storage InfiniBand network works without a bridge. That is
the network device the storage mount from my initial mail goes over. So
with a bridge (ovirtmgmt) and without one (InfiniBand) I have the same
problem.
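A check along these lines exercises both paths (only a sketch; the peer addresses are placeholders, and iperf3 running as a server on the other host is assumed):

# latency on the bridged management network vs. the unbridged InfiniBand network
ping -c 20 <ovirtmgmt peer IP>
ping -c 20 10.10.30.2                 # peer on the ib1 network (placeholder address)

# throughput: start "iperf3 -s" on the peer first
iperf3 -c <ovirtmgmt peer IP>
iperf3 -c 10.10.30.2 -B 10.10.30.1    # bind to the local InfiniBand address

For reference, the unbridged ib1 interface: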
[root@colovn01 ~]# ifconfig ib1
ib1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2044
        inet 10.10.30.1  netmask 255.255.255.0  broadcast 10.10.30.255
        inet6 fe80::223:7dff:ff94:d3fe  prefixlen 64  scopeid 0x20<link>
Infiniband hardware address can be incorrect! Please read BUGS section in ifconfig(8).
        infiniband 80:00:00:49:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00  txqueuelen 256  (InfiniBand)
        RX packets 3575120  bytes 7222538528 (6.7 GiB)
        RX errors 0  dropped 10  overruns 0  frame 0
        TX packets 416942  bytes 29156648 (27.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Markus