<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 20, 2017 at 5:14 PM, FERNANDO FREDIANI <span dir="ltr">&lt;<a href="mailto:fernando.frediani@upx.com" target="_blank">fernando.frediani@upx.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    <p>Hello Yaniv.</p>
    <p>Initially it also looked to me that multi-queue would not be
      necessary for 1Gbps; however, the Virtual Machine is relatively
      busy, so the CPU needed to process network traffic may (or may
      not) be competing with the processes running in the guest.</p>
    <p>The network is as follows: 3 x 1Gb interfaces bonded together
      with the layer2+3 hash algorithm; this bond is where the VMs
      connect to the outside world.<br></p></div></blockquote><div><br></div><div>Does your host have NUMA support (multiple sockets)? Are all your interfaces connected to the same socket? Perhaps one is on the &#39;other&#39; socket (a different PCI bus, etc.)? That can introduce latency.</div><div>In general, you want to align everything, from the host (driver interrupts) all the way to the guest, so that processing happens on the same socket.</div><div><br></div><div>Layer 2+3 hashing may or may not give you good distribution across the physical links, depending on the traffic. Layer 3+4 hashing distributes better, but is not entirely compliant with all vendors/equipment.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><p>
      vNIC1 and vNIC2 in the VM are the same VirtIO NIC type. These
      vNICs are connected to the same VLAN, and in iperf tests they are
      both able to output 1Gbps each at the same time, as the bond
      below has 3Gb of capacity.</p></div></blockquote><div><br></div><div>Linux is not always happy with multiple interfaces on the same L2 network. I think some kernel parameters need to be set to make it happy?</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
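The parameters usually meant here are the ARP and reverse-path-filter sysctls, which stop two interfaces on the same subnet from answering ARP for each other's addresses ("ARP flux"). A hedged sketch of such a fragment; the values are illustrative, not a verified fix for this case:

```ini
# /etc/sysctl.d/90-multihome.conf -- illustrative values
net.ipv4.conf.all.arp_filter = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
# Loose reverse-path filtering so replies are not dropped on the "wrong" NIC
net.ipv4.conf.all.rp_filter = 2
```

Apply with `sysctl --system` after editing, and verify the per-interface values under /proc/sys/net/ipv4/conf/.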
    <p>Please note something interesting I mentioned previously: all
      traffic currently goes in and out via vNIC1, which is showing
      packet loss (3% to 10%) in the tests conducted. vNIC2 carries
      zero traffic, and when the same tests are run against it, it
      shows 0% packet loss.<br>
      At first impression, if this were related to the bond or even to
      the physical NICs on the Host, it should show packet loss on ANY
      of the vNICs, since the traffic flows through the same physical
      NIC and bond; but that is not the case.</p>
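One caveat to that reasoning: with layer2+3 hashing, every flow between a given pair of addresses rides the same physical slave, so trouble on one slave could in principle hit the flows targeting one vNIC's IP while leaving the other's untouched. A simplified sketch of how such a policy picks a slave (illustrative only, not the kernel's exact arithmetic):

```python
def mac_to_int(mac: str) -> int:
    """Convert aa:bb:cc:dd:ee:ff notation to an integer."""
    return int(mac.replace(":", ""), 16)

def ip_to_int(ip: str) -> int:
    """Convert dotted-quad IPv4 notation to an integer."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def layer2_3_slave(src_mac, dst_mac, src_ip, dst_ip, n_slaves=3):
    """Pick a bond slave from L2+L3 addresses (simplified illustration:
    XOR the addresses, then take the result modulo the slave count)."""
    h = (mac_to_int(src_mac) ^ mac_to_int(dst_mac)
         ^ ip_to_int(src_ip) ^ ip_to_int(dst_ip))
    return h % n_slaves

# The same address pair always maps to the same slave, so one tested IP
# can sit on a "bad" link while another tested IP does not.
print(layer2_3_slave("00:1a:4a:16:01:60", "52:54:00:aa:bb:cc",
                     "10.0.0.10", "198.51.100.7"))
```

The MAC shown is taken from the qemu command line in this thread; the peer MAC and IPs are made-up placeholders.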
    <p>This is the qemu-kvm command the Host is executing:<br>
      /usr/libexec/qemu-kvm -name
      guest=VM_NAME_REPLACED,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-VM_NAME_REPLACED/master-key.aes
      -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu SandyBridge -m
      4096 -realtime mlock=off -smp
      4,maxcpus=16,sockets=16,cores=1,threads=1 -numa
      node,nodeid=0,cpus=0-3,mem=4096 -uuid
      57ffc2ed-fec5-47d6-bfb1-60c728737bd2 -smbios
      type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0043-5610-804B-B1C04F4E3232,uuid=57ffc2ed-fec5-47d6-bfb1-60c728737bd2
      -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6-VM_NAME_REPLACED/monitor.sock,server,nowait
      -mon chardev=charmonitor,id=monitor,mode=control -rtc
      base=2017-03-17T01:12:39,driftfix=slew -global
      kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot
      strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
      -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device
      virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
      -drive if=none,id=drive-ide0-1-0,readonly=on -device
      ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/2325e1a4-c702-469c-82eb-ff43baa06d44/8dcd90f4-c0f0-47db-be39-5b49685acc04/images/ebe10e75-799a-439e-bc52-551b894c34fa/1a73cd53-0e51-4e49-8631-38cf571f6bb9,format=qcow2,if=none,id=drive-scsi0-0-0-0,serial=ebe10e75-799a-439e-bc52-551b894c34fa,cache=none,werror=stop,rerror=stop,aio=native
      -device
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1
      -drive
file=/rhev/data-center/2325e1a4-c702-469c-82eb-ff43baa06d44/8dcd90f4-c0f0-47db-be39-5b49685acc04/images/db401b27-006d-494c-a1ee-1d37810710c8/664cffe6-52f8-429d-8bb9-2f43fa7a468f,format=qcow2,if=none,id=drive-scsi0-0-0-1,serial=db401b27-006d-494c-a1ee-1d37810710c8,cache=none,werror=stop,rerror=stop,aio=native
      -device
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1
      -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=36 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:60,bus=pci.0,addr=0x3
      -netdev tap,fd=37,id=hostnet1,vhost=on,vhostfd=38 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=00:1a:4a:16:01:61,bus=pci.0,addr=0x4
      -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/57ffc2ed-fec5-47d6-bfb1-60c728737bd2.com.redhat.rhevm.vdsm,server,nowait
      -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
      -chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/57ffc2ed-fec5-47d6-bfb1-60c728737bd2.org.qemu.guest_agent.0,server,nowait
      -device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
      -chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
      -vnc <a href="http://192.168.100.19:2" target="_blank">192.168.100.19:2</a>,password -k pt-br -spice
tls-port=5903,addr=192.168.100.19,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
      -k pt-br -device
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
      -incoming defer -device
      virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object
      rng-random,id=objrng0,filename=/dev/urandom -device
      virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -msg
      timestamp=on<br>
    </p>
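Note that the two -netdev tap entries above carry no queues= parameter, which matches the earlier observation that multi-queue is not enabled. For comparison, a multi-queue variant of the first NIC would look roughly like this (a sketch only; the fd numbers are placeholders, and oVirt/vdsm would normally generate these flags itself rather than them being typed by hand):

```shell
# Illustrative only: with 4 queues, each queue gets its own tap/vhost fd pair,
# and the virtio-net device needs mq=on plus enough MSI-X vectors (2 per queue + 2):
-netdev tap,fds=33:34:35:36,id=hostnet0,vhost=on,vhostfds=40:41:42:43 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mq=on,vectors=10,mac=00:1a:4a:16:01:60,bus=pci.0,addr=0x3
```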
    <p>Load in the VM is relatively high (20 to 30), and CPU usage is
      between 50% and 60%, with occasional peaks of 100% on one of the
      vCPUs. There are a lot of processes running in the VM, similar
      to web servers, which account for this CPU usage.</p>
    <p>The only guess I have so far is that traffic on vNIC1 is
      handled by one of the vCPUs, which occasionally hits 100% due to
      some of the processes, while traffic on vNIC2 is handled by
      another vCPU which is not as busy, which would explain its 0%
      packet loss. BUT, should a VirtIO vNIC use CPU from within the
      guest?<br>
      Does that make any sense?</p>
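One way to check that guess from inside the guest is to look at /proc/interrupts and see which vCPU services each virtio input queue. A small sketch that parses such output; the sample lines below are made up for illustration (on a real guest you would read /proc/interrupts itself):

```python
# Made-up sample in /proc/interrupts format; IRQ numbers and counts are invented.
SAMPLE = """\
           CPU0       CPU1       CPU2       CPU3
 25:   91234567          0          0          0   PCI-MSI-edge  virtio0-input.0
 26:        123          0          0          0   PCI-MSI-edge  virtio0-output.0
 27:          0     456789          0          0   PCI-MSI-edge  virtio1-input.0
"""

def busiest_cpu_per_irq(text):
    """Map each IRQ's action name to the CPU that has handled most of its
    interrupts, by reading the per-CPU count columns."""
    lines = text.splitlines()
    n_cpus = len(lines[0].split())          # header row: CPU0 CPU1 ...
    result = {}
    for line in lines[1:]:
        parts = line.split()
        counts = [int(c) for c in parts[1:1 + n_cpus]]
        name = parts[-1]                    # e.g. "virtio0-input.0"
        result[name] = counts.index(max(counts))
    return result

print(busiest_cpu_per_irq(SAMPLE))
```

If the busy vCPU reported by top is the same one pinned under a virtio input queue here, that would support the theory; with vhost=on (as in the command line above) most packet processing happens in host-side vhost threads, but the guest still burns vCPU time on its own interrupt and softirq handling.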
    <p>Thanks</p></div></blockquote><div><br></div><div>That can explain it. Ideally, you also need to streamline the processing in the guest: the relevant application should run on the same NUMA node as the vCPU that processes the virtio-net interrupts.</div><div>In your case, the VM sees a single NUMA node - does that match the underlying host architecture as well?</div><div>Y.</div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><span class="HOEnZb"><font color="#888888">
    <p>Fernando<br>
    </p></font></span><div><div class="h5">
    <br>
    <div class="m_7680788519611111480moz-cite-prefix">On 18/03/2017 12:53, Yaniv Kaul wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr"><br>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Fri, Mar 17, 2017 at 6:11 PM,
            FERNANDO FREDIANI <span dir="ltr">&lt;<a href="mailto:fernando.frediani@upx.com" target="_blank">fernando.frediani@upx.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
              <div dir="ltr">
                <div>
                  <div>
                    <div>
                      <div>
                        <div>
                          <div>
                            <div>
                              <div>Hello all.<br>
                                <br>
                              </div>
                              I have a peculiar problem here which
                              perhaps others may have had or know about
                              and can advise.<br>
                              <br>
                            </div>
                            I have a Virtual Machine with 2 VirtIO
                            NICs. This VM serves around 1Gbps of
                            traffic, with thousands of clients
                            connecting to it. When I run a packet-loss
                            test against the IP pinned to NIC1, loss
                            varies from 3% to 10%. When I run the same
                            test on NIC2, the packet loss is
                            consistently 0%.<br>
                            <br>
                          </div>
                          From what I gather, it may have something to
                          do with a possible lack of multi-queue
                          VirtIO, where NIC1 is handled by a single
                          CPU which might be hitting 100% and causing
                          this packet loss.<br>
                          <br>
                        </div>
                        Looking at this reference (<a href="https://fedoraproject.org/wiki/Features/MQ_virtio_net" target="_blank">https://fedoraproject.org/wiki/Features/MQ_virtio_net</a>)
                        I see that one way to test it is to start the
                        VM with 4 queues (for example), but checking
                        the qemu-kvm process I don&#39;t see the option
                        present. Is there any way I can force it from
                        the Engine?<br>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </blockquote>
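For reference, when queues are enabled they come from the libvirt domain XML rather than being passed to qemu-kvm by hand. A sketch of what the interface element would look like with 4 queues (whether and how the Engine exposes this, e.g. via a hook or custom property, is a separate question):

```xml
<interface type='bridge'>
  <model type='virtio'/>
  <!-- Illustrative: 4 queues. The guest must also activate them,
       e.g. with "ethtool -L eth0 combined 4" (eth0 is an assumed name). -->
  <driver name='vhost' queues='4'/>
</interface>
```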
            <div><br>
            </div>
            <div>I don&#39;t see a need for multi-queue for 1Gbps.</div>
            <div>Can you share the host statistics, the network
              configuration, the qemu-kvm command line, etc.?</div>
            <div>What is the difference between NIC1 and NIC2, in the
              way they are connected to the outside world?</div>
            <div> </div>
            <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
              <div dir="ltr">
                <div>
                  <div>
                    <div>
                      <div><br>
                      </div>
                      This other reference (<a href="https://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature" target="_blank">https://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature</a>)
                      points in the same direction about starting the
                      VM with queues=N<br>
                      <br>
                    </div>
                    <div>Also, trying to increase the TX ring buffer
                      within the guest with ethtool -g eth0 is not
                      possible.<br>
                    </div>
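For completeness, the two forms of that command (shown for reference only; eth0 is an assumed name, and on virtio NICs of this vintage the ring size is fixed by the host device, so the set operation is expected to fail):

```shell
# Query current and maximum ring sizes (lowercase -g only shows)
ethtool -g eth0
# Attempt to raise the TX ring (uppercase -G actually sets)
ethtool -G eth0 tx 1024
```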
                    <div><br>
                    </div>
                    Oh, by the way, the load on the VM is
                    significantly high even though CPU usage isn&#39;t
                    above 50%-60% on average.<br>
                  </div>
                </div>
              </div>
            </blockquote>
            <div><br>
            </div>
            <div>Load = latest &#39;top&#39; results? Vs. CPU usage? Can mean a
              lot of processes waiting for CPU and doing very little -
              typical for web servers, for example. What is occupying
              the CPU?</div>
            <div>Y.</div>
            <div> </div>
            <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
              <div dir="ltr">
                <div>
                  <div><br>
                  </div>
                  Thanks<span class="m_7680788519611111480HOEnZb"><font color="#888888"><br>
                    </font></span></div>
                <span class="m_7680788519611111480HOEnZb"><font color="#888888">Fernando<br>
                    <div>
                      <div>
                        <div>
                          <div><br>
                            <br>
                          </div>
                        </div>
                      </div>
                    </div>
                  </font></span></div>
              <br>
              ______________________________<wbr>_________________<br>
              Users mailing list<br>
              <a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
              <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
              <br>
            </blockquote>
          </div>
          <br>
        </div>
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br></div></div>