
On Thu 17 Nov 2011 03:29:48 PM IST, Maor wrote:
> Hi Saggi, thanks for the comments, please see my comments inline.
>
> On 11/17/2011 02:36 PM, Saggi Mizrahi wrote:
>> On Wed 16 Nov 2011 02:48:40 PM IST, Maor wrote:
>>> Hello all,
>>>
>>> The Quota feature description is published under the following links:
>>> http://www.ovirt.org/wiki/Features/DetailedQuota
>>> http://www.ovirt.org/wiki/Features/Quota
>>> Note that the UI mockup screens still need to be updated.
>>>
>>> Please feel free to share your comments about it.
>>>
>>> Thank you,
>>> Maor
>> I can't see how the host is supposed to enforce and handle it. Pause the VM? Crash it? Raise ENOMEM/ENOSPC in the guest?
>
> Enforcement and handling should happen at the engine scope, not from the host's perspective. Actually, the host should not be aware of Quota at all.
>
>> Also, what about cases of KSM/QCOW2 and disk/memory overcommit?
>
> On the QCOW issue, the active disk should consume its full potential size from the Quota, since we are not sure how much space will actually be used, although a snapshot disk will be updated to consume only its real size from the Quota.
> You can check the Enforcement section: "When dealing with QCOW disks (which is not pre-allocated, like templates or stateless VM) the Quota should consume the total maximum size of the disk, since it is the potential size that can be used."
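> To make that rule concrete, here is a minimal Python sketch (Volume and quota_charge_gb are invented names for illustration, not the engine's actual code):
>
>     # Hypothetical sketch of the quoted enforcement rule: a sparse
>     # QCOW active disk is charged its full potential size, while a
>     # snapshot volume is charged only its real (allocated) size.
>     from dataclasses import dataclass
>
>     @dataclass
>     class Volume:
>         virtual_size_gb: float    # provisioned (maximum) size
>         actual_size_gb: float     # space currently allocated
>         qcow: bool                # sparse QCOW vs. pre-allocated RAW
>         is_snapshot: bool = False
>
>     def quota_charge_gb(vol: Volume) -> float:
>         """Storage this volume should consume from its Quota."""
>         if vol.is_snapshot:
>             # snapshot disks are updated to consume only real size
>             return vol.actual_size_gb
>         if vol.qcow:
>             # active QCOW disk: charge the total maximum size, since
>             # that is the potential space it can grow to use
>             return vol.virtual_size_gb
>         return vol.virtual_size_gb   # pre-allocated: real == maximum
>
>     active = Volume(virtual_size_gb=16, actual_size_gb=1, qcow=True)
>     snap = Volume(16, 3, qcow=True, is_snapshot=True)
>     print(quota_charge_gb(active))   # 16 -- potential size
>     print(quota_charge_gb(snap))     # 3  -- real size only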
> For the overcommit issue, please see the CRUD section in the wiki: "...However, users will not be able to exceed the Quota limitations again after the resources are released."
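> Roughly, the quoted CRUD behaviour could look like this sketch (StorageQuota and its methods are hypothetical, just to illustrate the sentence above):
>
>     # Hypothetical sketch: consumption may already stand above a
>     # (lowered) Quota, but once resources are released the user
>     # cannot exceed the limitation again.
>     class StorageQuota:
>         def __init__(self, limit_gb, used_gb=0.0):
>             self.limit_gb = limit_gb
>             self.used_gb = used_gb
>
>         def allocate(self, size_gb):
>             if self.used_gb + size_gb > self.limit_gb:
>                 return False          # denied: would exceed the Quota
>             self.used_gb += size_gb
>             return True
>
>         def release(self, size_gb):
>             self.used_gb = max(0.0, self.used_gb - size_gb)
>
>     # Quota lowered to 20GB while the user already consumes 25GB:
>     q = StorageQuota(limit_gb=20, used_gb=25)
>     q.release(10)             # usage drops to 15GB
>     print(q.allocate(10))     # False -- cannot climb back over 20GB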
>> Disk Templates.
>> Storage for hibernation disk.
>> Temporary and shared disks.
>
> Same logic as above (see the Enforcement section).
>
>> Shared disks between VMs owned by different users.
>
> Please see Dependencies / Related Features and Projects: "When handling plug/unplug disks or attach/detach disks, the entity will still consume resources from its configured original Quota it was created on."
>> Backup snapshots (should they count in the quota? They are transient.)
>
> Whenever a volume is created, whether it is a snapshot, backup snapshot, stateless disk, or any QCOW implementation, the enforcement should be the same as described above (see the Enforcement section), which means the disk should consume from the same Quota all the time (not dependent on the users that use it).
>> I also don't see how vCPU limiting is any good? I don't even know what it means. What happens in migration between hosts with different amounts of physical CPUs?
> The "atomic" unit that Quota handles in the runtime scope is the cluster. Migration will actually be transparent to the user, since the VM keeps consuming from the same Quota; the only validation the VM will encounter is the same host-level validation as before.
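> A small sketch of what cluster-scoped runtime accounting means for migration (ClusterQuota is an invented name, not the engine's actual model):
>
>     # Hypothetical sketch: runtime Quota is accounted per cluster,
>     # so migrating a VM between hosts of the same cluster changes
>     # nothing in the Quota bookkeeping.
>     class ClusterQuota:
>         def __init__(self, vcpu_limit):
>             self.vcpu_limit = vcpu_limit
>             self.vcpu_used = 0
>
>         def run_vm(self, vcpus):
>             """Validated once, when the VM starts in the cluster."""
>             if self.vcpu_used + vcpus > self.vcpu_limit:
>                 return False
>             self.vcpu_used += vcpus
>             return True
>
>         def migrate_vm(self, vcpus, src_host, dst_host):
>             """No Quota change: the VM keeps consuming the same vCPUs
>             from the same cluster Quota; only the usual host-side
>             scheduling validation applies, exactly as before."""
>             return True
>
>     q = ClusterQuota(vcpu_limit=16)
>     q.run_vm(4)                          # charged to the cluster Quota
>     q.migrate_vm(4, "host-a", "host-b")  # usage stays 4/16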
>> I also don't think CPU limiting is even a thing to be managed by a quota. There is no reason not to use 100% of the CPU if you are the only VM active. CPU scheduling should use a priority model and not a quota, IMHO.
>
> Again, the Quota should be managed at the engine level and should not be reflected in the host implementation. Try to look at it as an abstract management mechanism that lets the administrator track and manage resource consumption.
> A priority model is an interesting thought. For now it can be supported by using a different grace percentage from one Quota to another, or maybe by creating a different Quota for different types of users.
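> For example, a rough sketch of how a grace percentage could express priorities (within_quota and the percentages are invented for illustration, not values from the wiki):
>
>     # Hypothetical sketch: two Quotas with the same hard limit behave
>     # like different "priorities" when given different grace values.
>     def within_quota(used_gb, requested_gb, limit_gb, grace_pct):
>         return used_gb + requested_gb <= limit_gb * (1 + grace_pct / 100.0)
>
>     # A "gold" Quota gets 20% grace, a "bronze" Quota only 5%:
>     print(within_quota(28, 4, 30, grace_pct=20))  # True  (32 <= 36.0)
>     print(within_quota(28, 4, 30, grace_pct=5))   # False (32 > 31.5)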
So I understand there is a pure disconnect between host-level quotas and policies and this quota feature. Speaking with Livnat and Maor, I see this is a good use case for an oVirt user who wants to offer simple quotas to clients. It just feels like a feature that should be implemented as part of a greater policy structure validating user actions, and not as a strict quota.

I think snapshots and images should have different quotas, where images are capped by size and snapshots are capped by number. Think of it from a hosting provider's perspective. I want each client to get a maximum of 30GB of storage. I can limit them to 30GB and let them calculate that if they made a VM worth 16GB they can never snapshot it. On the other hand, I can choose to limit them to 10GB and allow up to 2 snapshots per client. This way I know each client will take up to 30GB, and there will be no complaints when a user creates a VM and then can't snapshot it.

Snapshotting can also be capped at the image level instead of the user level (3 snapshots per image). I tend to go towards the latter, because it will keep snapshots of multi-disk VMs from failing due to a depleted quota. This will also make complaints about the pessimistic storage estimation invalid.
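To put numbers on the two policies, here is a back-of-the-envelope sketch (worst_case_gb is an invented helper; it assumes the pessimistic model where every snapshot may grow to the full disk size):

    # Hypothetical arithmetic behind the two policies above.
    def worst_case_gb(disk_gb, snapshots):
        # base disk plus one full-size copy per snapshot
        return disk_gb * (1 + snapshots)

    # Policy A: a single 30GB size cap. A 16GB disk can never be
    # snapshotted, since even one snapshot may need 32GB:
    print(worst_case_gb(16, 1))   # 32 > 30 -- snapshot denied
    # Policy B: 10GB size cap plus at most 2 snapshots per client.
    # Worst case is bounded at 30GB, so snapshots always fit:
    print(worst_case_gb(10, 2))   # 30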