SLA feature for storage I/O bandwidth

Doron Fediuck dfediuck at redhat.com
Wed Jun 5 14:26:55 UTC 2013


Hi Mei Liu,
sorry for the delay.

I agree that the advanced approach is getting complicated, and
that we can allow mom to control parts of it.
Having said that, I'd still like you to look at the bigger picture
of VM QoS.
Please refer to the discussion we had on network QoS, where we
managed to define it in a way that, going forward, will be more
user-friendly on the one hand, but can be translated into mom-policy
on the other.
See: http://lists.ovirt.org/pipermail/engine-devel/2013-June/004764.html

As you can see there, we defined a network profile, which includes a QoS
element. Going forward this element will be taken from a policy, which
will hold additional QoS aspects such as I/O, CPU and memory. In addition
to that, the policy may include a reference to a quota allocation as well
as other SLA related properties such as HA.

So in this light, I'd like to see a disk-profile, which holds the QoS element
and possibly other storage related properties, such as mom-auto-tune arguments.
So once an admin sets this profile, the VM creator (or disk creator) can
choose the relevant profile and assign it to the disk.
Once we have this in place, the engine will provide this policy to mom on
VM creation, and mom will enforce I/O (in this case, but also network and
others going forward) on the VM based on the policy it got. If the policy
allows I/O auto-tuning, then mom will manage it for that disk.
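To make that flow concrete, here is a minimal sketch of how such a disk-profile policy might be mapped to per-disk tuning values, with mom starting from the reserved floor when auto-tuning is allowed. The profile layout and function name are illustrative assumptions only, not an existing engine/VDSM/mom API:

```python
# Illustrative sketch only: the "disk profile" dict layout and the
# function name are assumptions, not an existing engine/VDSM/mom API.

def profile_to_iotune(profile):
    """Map a disk-profile dict to libvirt-style iotune parameters.

    If the profile allows auto-tuning, mom starts from the reserved
    floor and adjusts at runtime; otherwise the hard limit applies.
    """
    qos = profile["qos"]
    if profile.get("auto_tune"):
        # Start from the reserved floor; mom raises/lowers it at runtime.
        return {"total_bytes_sec": qos["reserved_bytes_sec"]}
    # No auto-tuning: enforce the profile's hard cap as-is.
    return {"total_bytes_sec": qos["limit_bytes_sec"]}


gold_profile = {
    "qos": {
        "reserved_bytes_sec": 10 * 1024 ** 2,  # 10 MiB/s floor
        "limit_bytes_sec": 50 * 1024 ** 2,     # 50 MiB/s cap
    },
    "auto_tune": True,
}

print(profile_to_iotune(gold_profile))  # {'total_bytes_sec': 10485760}
```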

The benefit of this approach is that it's more holistic from a system
perspective. It should allow easy management and simple defaults which
users should not need to deal with unless they have the relevant
permissions, such as disk creators.

So in this context, I'd love to see a definition of a disk or image profile,
and the storage QoS element it holds. The VDSM API should already support
policy updates for mom, so we only need to introduce the libvirt API elements.
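For reference, libvirt already exposes per-disk throttling in the domain XML via the <iotune> element inside <disk> (also reachable at runtime through virsh blkdeviotune); the numbers below are just example values:

```xml
<disk type='file' device='disk'>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <!-- example values: cap this disk at 10 MiB/s and 400 IOPS -->
    <total_bytes_sec>10485760</total_bytes_sec>
    <total_iops_sec>400</total_iops_sec>
  </iotune>
</disk>
```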

Feedback is more than welcome.
Doron

----- Original Message -----
> From: "Mei Liu" <liumbj at linux.vnet.ibm.com>
> To: "Allon Mureinik" <amureini at redhat.com>
> Cc: "Doron Fediuck" <dfediuck at redhat.com>, arch at ovirt.org
> Sent: Wednesday, June 5, 2013 1:14:31 PM
> Subject: Re: SLA feature for storage I/O bandwidth
> 
> On 06/05/2013 05:03 PM, Allon Mureinik wrote:
> > Hi Mei,
> >
> > How are you treating shared disks?
> > Is the limitation defined per disk (as a total), or per disk-vm relation?
> >
> The existing libvirt API limits I/O per vDisk of a VM, e.g. hda of VM1.
> 
> So the limitation is per VM vDisk, if we don't consider a quota for the
> SD in the design (the basic design). That limitation value is adjusted
> based on whether the backend storage (localfs, nfs, glusterfs...) is
> heavily used.
> 
> The SD's I/O bandwidth quota constrains the sum of the vDisk limitations'
> lower bounds (the min value of a limitation; this is similar to the
> concept of reserving). These vDisks use the volumes in the SD.
> 
> 
> > ------------------------------------------------------------------------
> >
> >     *From: *"Mei Liu" <liumbj at linux.vnet.ibm.com>
> >     *To: *"Doron Fediuck" <dfediuck at redhat.com>
> >     *Cc: *arch at ovirt.org
> >     *Sent: *Wednesday, June 5, 2013 11:11:34 AM
> >     *Subject: *Re: SLA feature for storage I/O bandwidth
> >
> >     Hi Doron,
> >     After the discussion, we found that the last version of the design
> >     was a little complex, so I simplified it and posted it to
> >
> >     http://www.ovirt.org/Features/Design/SLA_for_storage_io_bandwidth
> >
> >     We want to let MOM on each host decide how to tune the bandwidth
> >     limits of the VMs on that host, instead of letting the engine make
> >     the whole decision based on statistics from VMs on diverse hosts.
> >     Maybe we can consider starting from the basic design in the wiki.
> >
> >     Thanks & best regards,
> >     Mei Liu(Rose)
> >
> >     -------- Original Message --------
> >     Subject: 	Re: SLA feature for storage I/O bandwidth
> >     Date: 	Mon, 03 Jun 2013 18:28:46 +0800
> >     From: 	Mei Liu <liumbj at linux.vnet.ibm.com>
> >     To: 	Doron Fediuck <dfediuck at redhat.com>
> >     CC: 	arch at ovirt.org
> >
> >
> >
> >     On 05/29/2013 11:34 PM, Doron Fediuck wrote:
> >     > ----- Original Message -----
> >     >> From: "Mei Liu"<liumbj at linux.vnet.ibm.com>
> >     >> To: "Dave Neary"<dneary at redhat.com>
> >     >> Cc: arch at ovirt.org
> >     >> Sent: Wednesday, May 29, 2013 11:35:12 AM
> >     >> Subject: Re: SLA feature for storage I/O bandwidth
> >     >>
> >     >> On 05/29/2013 03:42 PM, Dave Neary wrote:
> >     >>> Hi Mei Liu,
> >     >>>
> >     >>> On 05/28/2013 10:18 AM, Mei Liu wrote:
> >     >>>> I created a draft wiki page on the design of the storage I/O
> >     >>>> bandwidth SLA at the following link:
> >     >>>>
> >     >>>> http://www.ovirt.org/SLA_for_storage_resource
> >     >>>>
> >     >>>> I would appreciate it if anyone who works on the oVirt engine,
> >     >>>> vdsm or mom could give some comments. TIA.
> >     >>> Just out of interest - which version of oVirt are you targeting
> >     >>> this
> >     >>> for? Can I assume that it's for post-3.3? Today is officially
> >     >>> the 3.3 feature freeze (but we have a release meeting later to
> >     >>> discuss that).
> >     >>>
> >     >>> Thanks,
> >     >>> Dave.
> >     >>>
> >     >> Hi Dave,
> >     >> The basic I/O tune functionality for vdsm is almost ready.
> >     >> However, nothing is written on the engine side yet, and no
> >     >> policy for automatic tuning is applied.
> >     >> I am not sure whether the basic functionality can target 3.3.
> >     >>
> >     >>
> >     >> Best regards,
> >     >> Mei Liu
> >     >>
> >     > Hi Mei Liu,
> >     > I'm still going over the wiki, but a few things we need to consider;
> >     > First of all, QoS for storage I/O bandwidth is part of a larger
> >     > SLA policy, which may include network QoS, CPU and memory QoS,
> >     > and the quota we implement today.
> >     >
> >     > So to begin with, we need to make sure your design does not
> >     > conflict with the other QoS parts, which is what I'm looking
> >     > into now.
> >     >
> >     > Additionally, using the term "quota" is confusing, as oVirt
> >     > already has quota today, and the CPU API has its own quota
> >     > definition. So please try to come up with different terminology.
> >     >
> >     > I like your idea of setting an initial value, but I need some
> >     > more time to come up with my insights on it.
> >     > Also, I completely agree with your concept of letting mom handle
> >     > it at the host level. We need to verify that it does not break
> >     > anything related to SPM. This is something we need to synchronize
> >     > with the storage guys.
> >     >
> >     > Looking into the engine area, we should start thinking about how
> >     > this will be supported in the main storage entities and the
> >     > VM / template / instance entities, so you may want to add a
> >     > section on this as well. This leads me to think of importing and
> >     > exporting a VM, which may need to maintain the defined I/O QoS.
> >     > Any thoughts around it?
> >     >
> >     > Doron
> >     >
> >     Hi Doron,
> >     Thanks for your questions and insightful thoughts. They are really
> >     inspiring.
> >
> >     I updated the design in
> >     http://www.ovirt.org/Features/Design/SLA_for_storage_io_bandwidth .
> >     This time, I added storage I/O bandwidth control according to the
> >     quota design, and the SLA tries to ensure the reserved bandwidth
> >     for vDisks. It requires the engine to make the tuning decision,
> >     since the VMs using the SD's volumes may reside on different hosts
> >     and only the engine can obtain the global information.
> >     I think this design will not lead to problems when importing or
> >     exporting a VM.
> >
> >     I would appreciate it if you could take a look at the new design.
> >
> >     Best regards,
> >     Mei Liu (Rose)
> >
> >     _______________________________________________
> >     Arch mailing list
> >     Arch at ovirt.org
> >     http://lists.ovirt.org/mailman/listinfo/arch
> >
> >
> >
> 
> 


