----- Original Message -----
From: "Dan Kenigsberg" <danken(a)redhat.com>
To: "Itamar Heim" <iheim(a)redhat.com>, ydary(a)redhat.com,
masayag(a)redhat.com, nyechiel(a)redhat.com, msivak(a)redhat.com
Cc: users(a)ovirt.org
Sent: Wednesday, March 12, 2014 3:26:49 PM
Subject: Re: [Users] oVirt 3.5 planning - bandwidth/cpu/io accounting
On Thu, Feb 27, 2014 at 12:03:55PM +0000, Dan Kenigsberg wrote:
> There are users who would like to know how much traffic each vnic of
> each VM has consumed over a period of time. Currently, we report only
> bitrate as a percentage of an estimated vnic "speed". Integrating this
> value over time is inefficient and error-prone.
>
> I suggest having the whole stack (Vdsm, Engine, dwh) report the
> actually-transmitted (and actually-received) byte count on each vnic, as
> well as the time when the sample was taken.
>
> Currently, Vdsm reports
>
> 'eth0': {'rxDropped': '0',
>          'rxErrors': '0',
>          'rxRate': '8.0',
>          'speed': '1000',
>          'state': 'up',
>          'txDropped': '0',
>          'txErrors': '0',
>          'txRate': '10.0'},
>
> but it should add rxKiBytes, txKiBytes and the sample time as well.
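>
> For illustration, the extended sample could look something like this (the
> exact field names, e.g. 'sampleTime', are a sketch rather than a final
> API, and the values are made up):
>
> 'eth0': {'rxDropped': '0',
>          'rxErrors': '0',
>          'rxRate': '8.0',
>          'speed': '1000',
>          'state': 'up',
>          'txDropped': '0',
>          'txErrors': '0',
>          'txRate': '10.0',
>          'rxKiBytes': '1204217',          # total KiB received
>          'txKiBytes': '981343',           # total KiB transmitted
>          'sampleTime': '1394630809.21'},  # when the counters were read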
>
> The GUI could still calculate the rate for illustration, based on the raw
> transmission counts and the sample time.
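>
> A minimal sketch of such a calculation, given two consecutive samples
> 'old' and 'new' with the fields suggested above:
>
>     # average bytes/s over the sampling interval (field names illustrative)
>     interval = float(new['sampleTime']) - float(old['sampleTime'])
>     rx_bps = (int(new['rxKiBytes']) - int(old['rxKiBytes'])) * 1024 / interval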
>
> Until we break backward compatibility, we'd keep reporting the flaky
> rxRate/txRate, too.
>
> I can think of only two problems with this approach: Linux byte counters
> would eventually reset when they overflow. This is currently hidden by
> Vdsm, but with the suggested change, it would have to be handled by higher
> levels of the stack.
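>
> A rough sketch of how a higher level could absorb such a wrap-around,
> assuming a 32-bit counter (the actual width depends on kernel and driver):
>
>     COUNTER_WIDTH = 2 ** 32
>
>     def byte_delta(new_count, old_count):
>         # a negative raw difference means the counter wrapped since the
>         # previous sample; the modulo recovers the true delta
>         return (new_count - old_count) % COUNTER_WIDTH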
>
> A similar problem appears on migration: the counters would reset and Engine
> would need to know how to keep up the accounting properly.
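>
> One possible way for Engine to keep a running total across such a reset
> (purely illustrative, not an agreed design):
>
>     if new_count >= old_count:
>         total += new_count - old_count   # normal case
>     else:
>         total += new_count               # counter restarted from zero,
>                                          # e.g. after migration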
>
> I've opened
>
> Bug 1066570 - [RFE] Report actual rx_byte instead of a false rxRate
>
> to track this request of mine.
For the record, I'm told that there is a very similar need for
reporting accumulated guest CPU cycles and IO operations consumption.
Martin, do we already have BZs for the other two use cases?
No.
Please open an RFE for oVirt on these use cases.
Thanks,
Doron