On 11/27/2011 02:17 PM, Miki Kenneth wrote:
I think that capping the number of snapshots is a very intuitive idea,
and it might be easier for the cloud provider use case. However, I don't think we can omit
the total space option; having a cap on the number of snapshots in addition sounds
fine.
Sounds good, but it raises a few issues:
First, if we are already thinking of limiting the number of business
entities, like snapshots, why not also limit the number of other entities,
like disks?
Because the limitation on the number of snapshots actually applies to all the
snapshots of all the VMs created under the Quota.
What happens if someone wants to use more snapshots on his VM than the
number configured?
The administrator will have to change the configuration of the Quota for
all the VMs, or reallocate the disk to another Quota that allows it (if
there is one).
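To make the concern concrete, here is a minimal sketch (hypothetical, not the engine's actual API) of how a snapshot cap defined on the Quota would be shared by every VM under it:

```python
# Hypothetical sketch: a snapshot cap defined on a Quota is shared by
# every VM under that Quota, so one VM can exhaust it for the others.
class Quota:
    def __init__(self, max_snapshots):
        self.max_snapshots = max_snapshots
        self.snapshots_used = 0

    def try_snapshot(self, count=1):
        # Deny the action once the shared cap would be exceeded.
        if self.snapshots_used + count > self.max_snapshots:
            return False
        self.snapshots_used += count
        return True

q = Quota(max_snapshots=3)
first_vm_ok = q.try_snapshot(3)   # one VM takes all three slots
second_vm_ok = q.try_snapshot()   # another VM under the same Quota is denied
```

At that point the administrator's only options are the two above: raise the cap for everyone, or move the disk to a different Quota.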
Also, I hope we don't give the user too many configuration parameters,
making the feature too complex to use.
Maybe it is a requirement we should wait for feedback on, to see
whether this use case is common enough...
On another note, looking at some other storage vendors, I can say that in most cases the
following parameters apply:
- Hard limit - when exceeding it, the action is denied.
- Soft limit - when exceeding it, the user gets a warning. (optional)
- Grace - a margin above the hard limit, which in some cases is restricted by time as well.
So I do think that for completeness we should define/design around these parameters.
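A minimal sketch of how these three parameters could interact (a hypothetical model, not an existing engine interface; all names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class StorageQuota:
    hard_limit_gb: float    # exceeding this (plus grace) denies the action
    soft_limit_gb: float    # exceeding this only warns the user
    grace_pct: float = 0.0  # headroom above the hard limit, in percent

    def check(self, used_gb, requested_gb):
        total = used_gb + requested_gb
        if total > self.hard_limit_gb * (1 + self.grace_pct / 100):
            return "deny"
        if total > self.soft_limit_gb:
            return "warn"
        return "grant"

quota = StorageQuota(hard_limit_gb=30, soft_limit_gb=25, grace_pct=10)
quota.check(used_gb=20, requested_gb=3)   # under the soft limit
quota.check(used_gb=20, requested_gb=8)   # over the soft limit only
quota.check(used_gb=30, requested_gb=5)   # over hard limit plus grace
```

A time-restricted grace would add a deadline after which the "warn" outcome turns into "deny", as some of those vendors do.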
Agreed.
Miki
----- Original Message -----
> From: "Maor" <mlipchuk(a)redhat.com>
> To: "Saggi Mizrahi" <smizrahi(a)redhat.com>
> Cc: "Oved Ourfalli" <ovedo(a)redhat.com>, engine-devel(a)ovirt.org
> Sent: Thursday, November 17, 2011 8:16:22 PM
> Subject: Re: [Engine-devel] Quota feature description
>
> I updated the DetailedQuota wiki page to clarify that the Quota is
> managed from the engine-core perspective.
>
> Your suggestion about the number of snapshots per image is quite
> interesting; I think a PM should give an opinion about it.
> Although I'm not sure the limitation should be in the Quota scope;
> maybe it is more accurate to set it as a limit in the VM scope, or
> the image scope, instead.
>
>
> On 11/17/2011 05:41 PM, Saggi Mizrahi wrote:
>> On Thu 17 Nov 2011 03:29:48 PM IST, Maor wrote:
>>> Hi Saggi, thanks for the comments, please see my comments in line
>>>
>>> On 11/17/2011 02:36 PM, Saggi Mizrahi wrote:
>>>> On Wed 16 Nov 2011 02:48:40 PM IST, Maor wrote:
>>>>> Hello all,
>>>>>
>>>>> The Quota feature description is published under the following
>>>>> links:
>>>>>
>>>>> http://www.ovirt.org/wiki/Features/DetailedQuota
>>>>>
>>>>> http://www.ovirt.org/wiki/Features/Quota
>>>>> Notice that screens of UI mockups should be updated.
>>>>>
>>>>> Please feel free to share your comments about it.
>>>>>
>>>>> Thank you,
>>>>> Maor
>>>>>
>>>>> _______________________________________________
>>>>> Engine-devel mailing list
>>>>> Engine-devel(a)ovirt.org
>>>>>
>>>>> http://lists.ovirt.org/mailman/listinfo/engine-devel
>>>>
>>>> I can't see how the host is supposed to enforce and handle it.
>>>> Pause the VM? Crash it? Raise ENOMEM\ENOSPC in the guest?
>>> The enforcement and handling should be done at the engine scope,
>>> not from the Host perspective.
>>> Actually, the Host should not be aware of Quota at all.
>>>> Also what about cases of KSM\QCow2, disk\memory overcommit.
>>> On the QCOW issue, the active disk should consume its full
>>> potential size from the Quota, since we are not sure how much
>>> space will actually be in use, although the snapshot disk will be
>>> updated to consume only its real size from the Quota.
>>>
>>> you can check the Enforcement section :
>>> "When dealing with QCOW disks (which is not pre-allocated, like
>>> templates or stateless VM) the Quota should consume the total
>>> maximum
>>> size of the disk, since it is the potential size that can be
>>> used."
>>>
>>> for overcommit issue, please see CRUD section in the WIKI:
>>> "...However, users will not be able to exceed the Quota
>>> limitations
>>> again after the resources are released."
>>>
>>>> Disk Templates.
>>>> Storage for hibernation disk.
>>>> Temporary and shared disks.
>>> same logic as above (Enforcement section)
>>>> Shared disks between VMs owned by different users.
>>> Please see Dependencies / Related Features and Projects:
>>> "When handling plug/unplug disks or attach/detach disks, the
>>> entity will
>>> still consume resources from its configured original Quota it was
>>> created on. "
>>>
>>> Which means the disk should consume from the same Quota all the
>>> time (regardless of which users use it).
>>>> Backup snapshots (should they count in the quota? They are
>>>> transient)
>>> Whenever a volume is created, whether it is a snapshot, a backup
>>> snapshot, a stateless disk, or any QCOW implementation, the
>>> enforcement should be the same as described above (see the
>>> Enforcement section).
>>>>
>>>> I also don't see how vcpu limiting is any good? I don't even
>>>> know what it means. What happens in migration between hosts with
>>>> different amounts of physical CPUs?
>>> The "atomic" unit that Quota handles in the run-time scope is
>>> the cluster.
>>> Actually, migration will be transparent for the user, since it is
>>> consumed from the same Quota; the only validation the VM should
>>> encounter will be the same as before, from the Host perspective.
>>>
>>>> I also don't think CPU limiting is even a thing to be managed by
>>>> a
>>>> quota. There is no reason not to use 100% of the CPU if you are
>>>> the only
>>>> VM active. CPU scheduling should use a priority model and not a
>>>> quota
>>>> IMHO.
>>> Again, the Quota should be managed at the engine level, and
>>> should not be reflected in the Host implementation.
>>> Try to look at it as an abstract management mechanism for the
>>> Administrator to track and manage resource consumption.
>>>
>>> A priority model is an interesting thought.
>>> For now it can be approximated by using a different grace
>>> percentage from one Quota to another, or maybe by creating a
>>> different Quota for different types of users.
>>
>> So I understand there is a pure disconnect between host-level
>> quotas and policies and this quota feature.
>> Speaking with Livnat and Maor, I see this is a good use case for an
>> oVirt user to allow simple quotas to clients.
>>
>> It just feels like a feature that should be implemented as part of
>> a greater policy structure validating user actions, and not as a
>> strict quota.
>>
>>
>> I think snapshots and images should have different quotas where
>> images
>> are capped by size and snapshots are capped by number.
>>
>> Think of it from a hosting provider perspective. I want each client
>> to get a maximum of 30GB of storage.
>> I can limit them to 30GB, and let them calculate that if they make
>> a 16GB VM they can never snapshot it. On the other hand, I can
>> choose to limit them to 10GB and allow up to 2 snapshots for each
>> client.
>> This way I know each client will take up to 30GB, and there will be
>> no complaints when a user creates a VM and then can't snapshot it.
>> Snapshotting can also be capped at the image level instead of the
>> user level (3 snapshots per image). I tend to go towards the latter
>> because it'll keep multiple-disk VM snapshots from failing due to a
>> depleted quota.
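Spelling out the arithmetic in that example (sizes taken from the text above):

```python
per_image_cap_gb = 10   # each client's image is capped at 10GB
snapshot_cap = 2        # and the client may take up to 2 snapshots
# Worst case: the base image plus each snapshot fully diverged.
worst_case_gb = per_image_cap_gb * (1 + snapshot_cap)  # 30GB budget
```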
>>
>> This will also make complaints about the pessimistic storage
>> estimation
>> invalid.
>