[ovirt-devel] bug 1041569 question (was: [ovirt-devel] [QE][ACTION REQUIRED] oVirt 3.5.0 status (fwd))

Eric Blake eblake at redhat.com
Fri Sep 19 23:19:07 UTC 2014


On 09/19/2014 12:10 PM, R P Herrold wrote:
> 
> I was running down the open ovirt bugs, and noticed this one,
> which has an outlink to a post by you (Eric) on a mailing list
> last month
> 
>> And the following dependencies still open:
>> Bug 1041569 - [NFR] libvirt: Returning the watermark for all 
>> the images opened for writing
> 
> https://www.redhat.com/archives/libvir-list/2014-August/msg00207.html
> 
> wherein you refer to a 'watermark'.  To me as a non-lingo
> aware (as to libvirt / libguestfs) developer, a watermark is a
> pattern overlaid onto an image (a pattern placed on the glue
> side of a postage stamp, or a 'DRAFT' notice on a document)
> 
> I _think_ you are referring to a 'high water mark' limit
> which, if exceeded, would result in content being lost (there
> is mention of ENOSPC)
> 
> Is this accurate?  

Close.  The intended terminology is indeed 'high water mark', but more
as a measure of when the underlying storage must be enlarged to avoid
pausing the guest due to ENOSPC.  There won't be any actual content
loss, as long as the guest is configured to pause on write errors; the
guest can be resumed once storage is enlarged, whether or not the high
water mark was used to pre-emptively grow the storage before actually
running out of space.  On the libvirt list, we decided to call it
'allocation' reporting rather than 'watermark' reporting, because of
the confusion around the term watermark.
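
For concreteness, here's a minimal sketch of reading that allocation
figure through the libvirt Python bindings; the domain name 'guest1'
and disk target 'vda' are placeholders, not anything from the bug:

  import libvirt

  # A read-only connection is enough for polling statistics.
  conn = libvirt.openReadOnly('qemu:///system')
  dom = conn.lookupByName('guest1')

  # blockInfo() wraps virDomainGetBlockInfo and returns three values:
  #   capacity   - logical disk size as seen by the guest
  #   allocation - host bytes actually in use (the 'high water mark')
  #   physical   - size of the host container holding the image
  capacity, allocation, physical = dom.blockInfo('vda')
  print('allocation %d of %d physical bytes' % (allocation, physical))

  conn.close()

The same three numbers are visible from the shell via 'virsh
domblkinfo guest1 vda'.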

> 
> If so, isn't this information filesystem-specific, and so
> something which the outside 'container' cannot and should not
> be expected to reliably know?  (I am thinking here of inode
> exhaustion even when a Linux type FS seemingly has space per:
> du ; the problem is worse with a 'sparse' capable filesystem
> in play; and seems wholly unanswerable with a Windows FS or
> some other variant)
> 
> I don't see how the hypervisor could ever accurately know this
> without 'cracking open' and asking the inside tenant ... and
> for privacy and data integrity / security reasons, the
> hypervisor should not ever be doing this

Dynamic storage is common with VDSM, which tracks guest storage using
host LVM volumes formatted as qcow2.  Qcow2 files are sparse by
nature, so, for example, you can allocate a backing chain base <-
active, where base is 30G and active starts out on a 10G volume.  As
the guest causes more and more of the copy-on-write data to differ
from base, qemu reports higher and higher allocation values, and if
management is carefully tracking those values, it can resize the LVM
volume to 15G before qemu has to pause the guest because it can't
write more qcow2 metadata.
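
As a rough sketch of the kind of check management might run (the
threshold, grow step, domain name, and LV path below are invented for
the example; VDSM's real logic is more involved):

  import subprocess
  import libvirt

  GROW_STEP = '+5G'   # extend the LV in 5G increments (example value)
  THRESHOLD = 0.8     # act when allocation reaches 80% of the LV size

  def watch_and_extend(dom, disk, lv_path):
      """Grow the backing LV before qemu runs out of room for qcow2
      clusters; resume the guest if it already paused on ENOSPC."""
      capacity, allocation, physical = dom.blockInfo(disk)
      if allocation >= THRESHOLD * physical:
          # Enlarge the host logical volume holding the qcow2 image.
          subprocess.check_call(['lvextend', '-L', GROW_STEP, lv_path])
      # If we were too late and the guest paused on a write error,
      # resume it now that there is room again.
      state, reason = dom.state()
      if (state == libvirt.VIR_DOMAIN_PAUSED and
              reason == libvirt.VIR_DOMAIN_PAUSED_IOERROR):
          dom.resume()

  conn = libvirt.open('qemu:///system')
  dom = conn.lookupByName('guest1')
  watch_and_extend(dom, 'vda', '/dev/vg0/guest1-active')
  conn.close()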

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
