Hello Yaniv.
I have new information about this scenario: I have load-balanced the
requests between both vNICs, so each one is receiving/sending half of
the traffic on average, and the packet loss, although it still exists,
has dropped to 1% - 2% (which was expected, as the work of processing
this traffic is now shared by more than one CPU at a time).
However, the load on the VM is still high, probably due to the interrupts.
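To check whether those interrupts are all landing on a single vCPU,
what I intend to look at is something like the following inside the
guest (the IRQ number below is just an example, not taken from this VM):

    cat /proc/interrupts | grep virtio    # per-CPU counts for the virtio queue IRQs
    # If everything sits on CPU0, an IRQ can be moved by hand,
    # e.g. IRQ 29 (example number) to CPU1:
    echo 2 > /proc/irq/29/smp_affinity    # bitmask 0x2 = CPU1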
Please find my answers to some of your points inline below:
On 21/03/2017 12:31, Yaniv Kaul wrote:
> So there are 2 NUMA nodes on the host? And where are the NICs located?
I tried to search for how to check that but couldn't find out how.
Could you give me a hint?
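The only thing I could think of trying on the host is something like
the following (the interface names are just placeholders for the
physical NICs), but I am not sure it is the right way:

    cat /sys/class/net/em1/device/numa_node   # NUMA node of the PCI device behind em1
    cat /sys/class/net/em2/device/numa_node   # -1 means locality is not reported
    numactl --hardware                        # overall NUMA layout of the host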
> BTW, since those are virtual interfaces, why do you need two on the
> same VLAN?
Very good question. It's because of a specific situation where I need
two MAC addresses in order to balance the traffic across a LAG on a
switch which does only layer-2 hashing.
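Assuming the switch hashes more or less the way the Linux bonding
"layer2" policy does, the outgoing link is chosen roughly as
(source MAC XOR destination MAC) mod number_of_links, so with a single
MAC on the VM side all of that traffic would hash to one member of the
LAG; the second MAC is what lets it spread across both.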
> Are you using hyper-threading on the host? Otherwise, I'm not sure
> threads per core would help.
Yes, I have hyper-threading enabled on the host. Is it worth enabling
threads per core for this VM then?
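In case it helps, this is how I intend to double-check the host
topology (plain util-linux lscpu, nothing oVirt-specific):

    lscpu | egrep 'Socket|Core|Thread|NUMA'   # sockets, cores/socket, threads/core, NUMA nodes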
Thanks
Fernando
>
> On 18/03/2017 12:53, Yaniv Kaul wrote:
>>
>>
>> On Fri, Mar 17, 2017 at 6:11 PM, FERNANDO FREDIANI
>> <fernando.frediani@upx.com> wrote:
>>
>> Hello all.
>>
>> I have a peculiar problem here which perhaps others may have
>> had or know about and can advise.
>>
>> I have a Virtual Machine with 2 VirtIO NICs. This VM serves
>> around 1Gbps of traffic with thousands of clients connecting
>> to it. When I do a packet loss test to the IP pinned to NIC1,
>> it varies from 3% to 10% packet loss. When I run the same
>> test on NIC2, the packet loss is consistently 0%.
>>
>> From what I gather, it may have something to do with a possible
>> lack of Multi-Queue VirtIO, where NIC1 is managed by a single
>> CPU which might be hitting 100% and causing this packet loss.
>>
>> Looking at this reference
>> (https://fedoraproject.org/wiki/Features/MQ_virtio_net) I see
>> one way to test it is to start the VM with 4 queues (for
>> example), but checking the qemu-kvm process I don't see the
>> option present. Is there any way I can force it from the Engine?
>>
>>
>> I don't see a need for multi-queue for 1Gbps.
>> Can you share the host statistics, the network configuration,
>> the qemu-kvm command line, etc.?
>> What is the difference between NIC1 and NIC2, in the way they
>> are connected to the outside world?
>>
>>
>> This other reference
>> (https://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature)
>> points in the same direction about starting the VM with queues=N
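From those two references, my understanding is that testing it
directly with qemu-kvm (outside the Engine) would look roughly like
the lines below, plus the matching change inside the guest; the values
are only an example, not taken from my actual command line:

    -netdev tap,id=hn0,vhost=on,queues=4 \
    -device virtio-net-pci,netdev=hn0,mq=on,vectors=10
    # inside the guest, activate the extra queues:
    ethtool -L eth0 combined 4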
>>
>> Also, trying to increase the TX ring buffer within the guest
>> with ethtool -g eth0 is not possible.
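A small note on that last point: ethtool -g only displays the current
ring sizes; actually changing them would be something like the second
command below (the values are just an example), and that is the part
the virtio driver in this guest does not seem to accept:

    ethtool -g eth0                   # show current RX/TX ring parameters
    ethtool -G eth0 rx 1024 tx 1024   # attempt to enlarge the rings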
>>
>> Oh, by the way, the load on the VM is significantly high even
>> though the CPU usage isn't above 50% - 60% on average.
>>
>>
>> Load = the latest 'top' results, vs. CPU usage? It can mean a lot
>> of processes waiting for CPU and doing very little - typical for
>> web servers, for example. What is occupying the CPU?
>> Y.
>>
>>
>> Thanks
>> Fernando
>>
>>
>>