[ovirt-users] Strange network performance on VirtIO VM NIC

Yaniv Kaul ykaul at redhat.com
Tue Mar 21 21:00:30 UTC 2017


On Tue, Mar 21, 2017 at 8:14 PM, FERNANDO FREDIANI <
fernando.frediani at upx.com> wrote:

> Hello Yaniv.
>
> Here is some new information about this scenario: I have load-balanced the
> requests between both vNICs, so each one is receiving/sending half of the
> traffic on average, and the packet loss, although it still exists, has
> dropped to 1%-2% (which was expected, as the work of processing this
> traffic is now shared by more than one CPU at a time).
> However, the load on the VM is still high, probably due to the interrupts.
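>
> A quick way to check where those interrupts land inside the guest (assuming
> the NICs show up as virtio devices; adjust the grep otherwise) would be:
>
>     grep virtio /proc/interrupts     # how the virtio IRQs spread across vCPUs
>     watch -n1 'grep -E "NET_RX|NET_TX" /proc/softirqs'    # network softirq work per CPU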
>
> Find below in-line the answers to some of your points:
>
> On 21/03/2017 12:31, Yaniv Kaul wrote:
>
>
> So there are 2 NUMA nodes on the host? And where are the NICs located?
>
> I tried to search for how to check that but couldn't find out how. Could
> you give me a hint?
>

I believe 'lspci -vmm' should provide you with node information per PCI
device.
'numactl' can also provide interesting information.
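
For a quick check (assuming the NIC shows up as, say, enp3s0 - adjust to your
device name), something along these lines should work:

    cat /sys/class/net/enp3s0/device/numa_node   # NUMA node of the NIC's PCI device (-1 = none reported)
    numactl --hardware                           # nodes, CPUs and memory per node

That tells you which node the NIC sits on and which CPUs are local to it.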

>
>
> BTW, since those are virtual interfaces, why do you need two on the same
> VLAN?
>
> Very good question. It's because of a specific situation where I need
> two MAC addresses in order to balance the traffic across a LAG on a switch
> which does only layer 2 hashing.
>
>
> Are you using hyper-threading on the host? Otherwise, I'm not sure threads
> per core would help.
>
> Yes, I have hyper-threading enabled on the host. Is it worth enabling
> threads per core for this VM?
>

Depends on the workload. Some benefit from it, some don't. I wouldn't in
your case (it benefits mainly the case of many VMs with a small number of
vCPUs).
Y.


>
>
> Thanks
> Fernando
>
>
>> On 18/03/2017 12:53, Yaniv Kaul wrote:
>>
>>
>>
>> On Fri, Mar 17, 2017 at 6:11 PM, FERNANDO FREDIANI <
>> fernando.frediani at upx.com> wrote:
>>
>>> Hello all.
>>>
>>> I have a peculiar problem here which perhaps others have run into and can
>>> advise on.
>>>
>>> I have a Virtual Machine with 2 VirtIO NICs. This VM serves around 1Gbps
>>> of traffic, with thousands of clients connecting to it. When I run a
>>> packet loss test against the IP pinned to NIC1, the loss varies from 3%
>>> to 10%. When I run the same test on NIC2, the packet loss is consistently 0%.
>>>
>>> From what I gather, it may have something to do with a possible lack of
>>> multi-queue VirtIO, where NIC1 is managed by a single CPU which might be
>>> hitting 100% and causing this packet loss.
>>>
>>> Looking at this reference (https://fedoraproject.org/wiki/Features/MQ_virtio_net)
>>> I see one way to test it is to start the VM with 4 queues (for example),
>>> but checking the qemu-kvm process I don't see that option present. Is
>>> there any way I can force it from the Engine?
>>>
>>
>> I don't see a need for multi-queue for 1Gbps.
>> Can you share the host statistics, the network configuration, the
>> qemu-kvm command line, etc.?
>> What is the difference between NIC1 and NIC2, in the way they are
>> connected to the outside world?
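>>
>> For example (run on the host; the interface names are just whatever your
>> setup uses):
>>
>>     ps -ef | grep qemu-kvm    # full qemu-kvm command line of the VM
>>     ip -d link show           # bridges/VLANs the vNICs are attached to
>>     sar -n DEV 1 5            # per-NIC packet rates on the host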
>>
>>
>>>
>>> This other reference (https://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature)
>>> points in the same direction, about starting the VM with queues=N.
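>>>
>>> From that page, the relevant bits appear to be the following (untested on
>>> my side; eth0 is just an example):
>>>
>>>     # host side: start the guest with 4 queue pairs
>>>     # (vectors = 2 * queues + 2, hence 10 for 4 queues)
>>>     qemu-system-x86_64 ... -netdev tap,id=hn0,vhost=on,queues=4 \
>>>         -device virtio-net-pci,netdev=hn0,mq=on,vectors=10
>>>     # guest side: enable the extra queues
>>>     ethtool -L eth0 combined 4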
>>>
>>> Also, trying to increase the TX ring buffer within the guest with ethtool
>>> is not possible (ethtool -g eth0 does show the rings, but they cannot be changed).
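>>>
>>> Concretely, what I tried was along these lines (eth0 as an example):
>>>
>>>     ethtool -g eth0            # show current and maximum ring sizes
>>>     ethtool -G eth0 tx 1024    # attempt to grow the TX ring
>>>
>>> The second command is the one that fails; as far as I can tell, the
>>> virtio-net driver in this guest simply doesn't support resizing its rings.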
>>>
>>> Oh, by the way, the load on the VM is significantly high even though the
>>> CPU usage isn't above 50%-60% on average.
>>>
>>
>> Load = the latest 'top' results, vs. CPU usage? A high load can mean a lot
>> of processes waiting for CPU and doing very little - typical for web
>> servers, for example. What is occupying the CPU?
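>>
>> A few quick ways to see what that load is made of, from inside the guest
>> (mpstat comes from the sysstat package):
>>
>>     top                # press '1': per-CPU view; watch the %si (softirq) column
>>     mpstat -P ALL 1    # per-CPU breakdown including %irq and %soft
>>     vmstat 1           # 'r' column = processes waiting for a CPU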
>> Y.
>>
>>
>>>
>>> Thanks
>>> Fernando
>>>