[Users] Abysmal network performance inside hypervisor node
Markus Stockhausen
stockhausen at collogia.de
Wed Dec 11 20:32:08 UTC 2013
> From: sander.grendelman at gmail.com [sander.grendelman at gmail.com] on behalf of
> Sent: Wednesday, 11 December 2013 14:51
> To: Markus Stockhausen
> Cc: ovirt-users
> Subject: Re: [Users] Abysmal network performance inside hypervisor node
>
> Then I'm also out of ideas.
>
> You could take a look at the options in /etc/vdsm/vdsm.conf
> ( information at /usr/share/doc/vdsm-*/vdsm.conf.sample )
Ok, now it is getting crazy. Something is going on inside debugfs.
I do not know what activated it, but I have two candidates:

1) DMA API debugging has been enabled. I deactivated it with
   echo Y > /sys/kernel/debug/dma-api/disabled
   and throughput instantly rose from 17MB/sec to 33MB/sec
   (a sketch for verifying this state follows the trace below).

2) Another location I cannot identify so far. A perf trace shows
   the following consumption path:
- 12,51% kworker/1:2 [kernel.kallsyms] [k] lock_is_held
   - lock_is_held
      - 82,25% __module_address
         - 99,97% __module_text_address
              is_module_text_address
              __kernel_text_address
              print_context_stack
              dump_trace
              save_stack_trace
            - set_track
               - 52,61% free_debug_processing
                  - __slab_free
                     - 93,67% kmem_cache_free
                        - 72,64% kfree_skbmem
                           - 91,34% __kfree_skb
                              - 92,58% napi_gro_receive
                                   ipoib_ib_handle_rx_wc
                                   ipoib_poll
                                   net_rx_action
                                   __do_softirq
                                   call_softirq
                                 - do_softirq
                                    - 75,12% irq_exit
                                       - 95,67% smp_apic_timer_interrupt
                                          - apic_timer_interrupt
                                             - 79,47% __slab_free
                                                - 99,51% kmem_cache_free
                                                   - 99,19% nfs_free_request
                                                        nfs_release_request
                                                        nfs_readpage_release
                                                        nfs_read_completion
                                                        nfs_readdata_release
                                                        nfs_readpage_release_common
                                                        rpc_free_task
                                                        rpc_async_release
                                                        process_one_work
                                                        worker_thread
                                                        kthread
                                                        ret_from_fork
                                                   + 0,81% nfs_readhdr_free
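
Back to candidate 1: a minimal sketch for checking the DMA API debug
state, assuming the standard debugfs layout and the documented
dma_debug= kernel parameter (untested on this particular box):

    # debugfs must be mounted for the dma-api directory to show up
    mount -t debugfs none /sys/kernel/debug 2>/dev/null
    # "Y" means the DMA API debugging code is switched off
    cat /sys/kernel/debug/dma-api/disabled
    # to keep it off across reboots, boot with dma_debug=off
    # appended to the kernel command line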
Any idea what activates tracing for nfs kmem allocations?
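
The set_track/free_debug_processing frames look like SLUB object
tracking (a stack trace is stored on every alloc/free), and the
lock_is_held hits point at lockdep. One way to check what is compiled
in and active; this assumes a SLUB kernel (the /sys/kernel/slab
attributes) and a distro config file at /boot/config-$(uname -r):

    # was the kernel booted with slub_debug=... ?
    grep -o 'slub_debug[^ ]*' /proc/cmdline
    # which debug options are compiled in?
    grep -E 'CONFIG_(SLUB_DEBUG_ON|PROVE_LOCKING|LOCKDEP|DMA_API_DEBUG)=' \
        /boot/config-$(uname -r)
    # per-cache flags: "1" in store_user means alloc/free stack
    # traces (set_track) are recorded for that cache
    grep -H . /sys/kernel/slab/*/store_user 2>/dev/null | grep ':1$'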
Markus