SLA feature for storage I/O bandwidth

Mei Liu liumbj at linux.vnet.ibm.com
Wed Jun 5 08:11:34 UTC 2013


Hi Doron,
After the discussion, we found that the last version of the design was a
little complex, so I simplified it and posted it to

http://www.ovirt.org/Features/Design/SLA_for_storage_io_bandwidth

We want to let MOM on each host decide how to tune the bandwidth limits
of the VMs on that host, instead of letting the engine make the whole
decision based on statistics from VMs spread across different hosts.
Maybe we can consider starting from the basic design in the wiki.
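
To make this concrete, below is a rough, hypothetical sketch of a
host-local tuning step (not MOM or VDSM code; the VM name, disk alias
and limit are made up). It uses libvirt's blkdeviotune interface, which
is the mechanism such per-host throttling could go through:

    # illustration only: cap one vDisk's bandwidth from the host that
    # runs the VM, the way a MOM policy action could
    import libvirt

    def throttle_disk(conn, vm_name, disk, bytes_sec):
        """Limit total read+write bandwidth of a single vDisk."""
        dom = conn.lookupByName(vm_name)
        dom.setBlockIoTune(disk,
                           {'total_bytes_sec': bytes_sec},
                           libvirt.VIR_DOMAIN_AFFECT_LIVE)

    conn = libvirt.open('qemu:///system')
    throttle_disk(conn, 'vm01', 'vda', 50 * 1024 * 1024)  # 50 MB/s cap

Each host would make such calls based only on its own local statistics.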

Thanks & best regards,
Mei Liu(Rose)

-------- Original Message --------
Subject: 	Re: SLA feature for storage I/O bandwidth
Date: 	Mon, 03 Jun 2013 18:28:46 +0800
From: 	Mei Liu <liumbj at linux.vnet.ibm.com>
To: 	Doron Fediuck <dfediuck at redhat.com>
CC: 	arch at ovirt.org



On 05/29/2013 11:34 PM, Doron Fediuck wrote:
> ----- Original Message -----
>> From: "Mei Liu" <liumbj at linux.vnet.ibm.com>
>> To: "Dave Neary" <dneary at redhat.com>
>> Cc: arch at ovirt.org
>> Sent: Wednesday, May 29, 2013 11:35:12 AM
>> Subject: Re: SLA feature for storage I/O bandwidth
>>
>> On 05/29/2013 03:42 PM, Dave Neary wrote:
>>> Hi Mei Liu,
>>>
>>> On 05/28/2013 10:18 AM, Mei Liu wrote:
>>>> I created a draft wiki page on the design of the storage I/O bandwidth SLA
>>>> at the following link:
>>>>
>>>> http://www.ovirt.org/SLA_for_storage_resource .
>>>>
>>>> I would appreciate it if anyone who works on the oVirt engine, VDSM
>>>> or MOM could give some comments. TIA.
>>> Just out of interest - which version of oVirt are you targeting this
>>> for? Can I assume that it's for post-3.3? Today is officially the 3.3
>>> feature freeze (but we have a release meeting later to discuss that).
>>>
>>> Thanks,
>>> Dave.
>>>
>> Hi Dave,
>> The basic I/O tune functionality for VDSM is almost ready. However,
>> nothing has been written on the engine side and no policy for automatic
>> tuning has been applied yet.
>> I am not sure whether the basic functionality can target 3.3.
>>
>>
>> Best regards,
>> Mei Liu
>>
> Hi Mei Liu,
> I'm still going over the wiki, but there are a few things we need to
> consider. First of all, QoS for storage I/O bandwidth is part of a
> larger SLA policy, which may include network QoS, CPU and memory QoS,
> and the quota we implement today.
>
> So first of all, we need to make sure your design does not conflict with
> the other QoS parts, which is what I'm looking into now.
>
> Additionally, using the term "quota" is confusing, as oVirt already has
> quota today and the CPU API has its own quota definition. So please try
> to come up with different terminology.
>
> I like your idea of setting an initial value, but I need some more time
> to come up with my insights on it.
> Also, I completely agree with your concept of letting MOM handle
> it at the host level. We need to verify that it does not break anything
> related to SPM. This is something we need to synchronize with the
> storage guys.
>
> Looking at the engine area, we should start thinking about how this
> will be supported in the main storage entities and the VM / template /
> instance entities. So you may want to add a section on this as well.
> This leads me to think of importing and exporting a VM, which may want
> to maintain the defined I/O QoS. Any thoughts around it?
>
> Doron
>
Hi Doron,
Thanks for your questions and insightful thoughts. They are really
inspiring.

I updated the design at
http://www.ovirt.org/Features/Design/SLA_for_storage_io_bandwidth .
This time, I added storage I/O bandwidth control following the quota
design, and the SLA tries to ensure the reserved bandwidth for vDisks.
It requires the engine to make the tuning decision, since the VMs using
a storage domain's volumes may reside on different hosts and only the
engine can obtain the global information.
I think this design will not lead to problems when importing or
exporting a VM.
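
Purely as a hypothetical illustration of the engine-side decision (the
names, fields and the even split of spare bandwidth are my own
assumptions, not part of the wiki design), the planning step could look
roughly like this: every vDisk keeps its reserved bandwidth, the rest of
the storage domain's bandwidth is shared out, and each resulting limit
is then pushed to whichever host runs the owning VM:

    # hypothetical engine-side planning step
    def plan_limits(domain_bw, vdisks):
        """vdisks: dicts with 'id', 'host' and 'reserved_bw' in bytes/s."""
        reserved = sum(d['reserved_bw'] for d in vdisks)
        spare = max(domain_bw - reserved, 0)
        return {(d['host'], d['id']):
                    d['reserved_bw'] + spare // len(vdisks)
                for d in vdisks}

    # e.g. a 200 MB/s domain shared by two vDisks on different hosts
    limits = plan_limits(200 * 1024 * 1024, [
        {'id': 'disk1', 'host': 'host-a', 'reserved_bw': 60 * 1024 * 1024},
        {'id': 'disk2', 'host': 'host-b', 'reserved_bw': 40 * 1024 * 1024},
    ])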

I would appreciate it if you could take a look at the new design.

Best regards,
Mei Liu (Rose)
