Thanks for the clarification on this. I've realised my mistake now - I need to
configure the storage array to report the LUNs as larger than they physically are (to
account for the expected de-dup ratio). I was expecting oVirt to magically know about
this, which, thinking it through, is not really technically possible.
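
For anyone doing the same sizing, a rough back-of-the-envelope sketch in Python (the numbers are illustrative; the 2.5:1 ratio is just the figure mentioned later in this thread):

    # Present the LUN as physical capacity multiplied by the deduplication
    # ratio you expect the array to sustain.
    physical_tb = 4             # hypothetical raw capacity behind the LUN
    expected_dedup_ratio = 2.5  # expected de-dup ratio on the array

    reported_tb = physical_tb * expected_dedup_ratio
    print(f"Present the LUN as ~{reported_tb:.0f}T instead of {physical_tb}T")

    # If the real ratio ends up lower than expected, the array fills up before
    # oVirt thinks the domain is full, so keep monitoring actual allocation on
    # the storage side.
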
---- On Mon, 24 Feb 2020 13:34:49 +0000 Nir Soffer <nsoffer@redhat.com> wrote ----
On Mon, Feb 24, 2020 at 3:03 PM Gorka Eguileor <geguileo@redhat.com> wrote:
On 22/02, Nir Soffer wrote:
> On Sat, Feb 22, 2020, 13:02 Alan G <alan+ovirt@griff.me.uk> wrote:
> >
> > I'm not really concerned about the reporting aspect; I can look in the
storage vendor's UI to see that. My concern is: will oVirt stop provisioning storage in the
domain because it *thinks* the domain is full? De-dup is currently running at about 2.5:1,
so I'm concerned that oVirt will think the domain is full way before it actually is.
> >
> > Not clear if this is handled natively in oVirt or by the underlying LVM?
>
> Because oVirt does not know about deduplication or actual allocation
> on the storage side,
> it will let you allocate up to the size of the LUNs that you added to the
> storage domain, minus
> the size oVirt uses for its own metadata.
>
> oVirt uses about 5G for its own metadata on the first LUN in a storage
> domain. The rest of
> the space can be used by user disks. Disks are LVM logical volumes
> created in the VG created
> from the LUN.
>
> If you create a storage domain with a 4T LUN, you will be able to
> allocate about 4091G on this storage domain. If you use preallocated
> disks, oVirt will stop when you have allocated all the space in the VG.
> Actually, it will stop earlier, based on the minimum amount of free
> space configured for the storage domain when it was created.
>
> If you use thin disks, oVirt will allocate only 1G per disk (by
> default), so you can allocate more storage than you actually have, but
> when VMs write to the disks, oVirt will extend them. Once you use all
> the available space in this VG, you will not be able to allocate more
> without extending the storage domain with a new LUN, or resizing the
> existing LUN on the storage side.
>
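
As a rough sketch of the arithmetic above (using Nir's numbers; the low-space threshold value is illustrative, it is configured per storage domain):

    GIB = 1024
    lun_size_gib = 4 * GIB        # 4T LUN backing the storage domain
    ovirt_metadata_gib = 5        # ~5G reserved by oVirt on the first LUN

    usable_gib = lun_size_gib - ovirt_metadata_gib
    print(usable_gib)             # ~4091G available for user disks

    # Preallocated disks stop at the VG limit minus the configured low-space
    # threshold; thin disks start at ~1G each and are extended on demand, so
    # their total virtual size can exceed the VG until writes catch up.
    low_space_threshold_gib = 10  # illustrative, set when creating the domain
    allocatable_prealloc_gib = usable_gib - low_space_threshold_gib
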
> If you use Managed Block Storage (cinderlib), every disk is a LUN with
> the exact size you request when you create the disk. The actual
> allocation of this LUN depends on your storage.
>
> Nir
>
Hi,
I don't know anything about oVirt's implementation, so I'm just
going to provide some information from cinderlib's point of view.
Cinderlib was developed as a dumb library to abstract access to storage
backends, so all the "smart" functionality is pushed to the user of the
library, in this case oVirt.
In practice this means that cinderlib will NOT limit the number of LUNs
or over-provisioning done in the backend.
Cinderlib doesn't care if we are over-provisioning because we have dedup
and compression, or because we are using thin volumes where we don't
consume all the allocated space; it doesn't even care if we cannot do
over-provisioning because we are using thick volumes. If it gets a
request to create a volume, it will try to do so.
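
For illustration, a minimal sketch of that behaviour with cinderlib and an LVM backend (the driver options below are examples only, not oVirt's actual configuration):

    import cinderlib

    cinderlib.setup()

    # Example backend configuration; your driver and options will differ.
    lvm = cinderlib.Backend(
        volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
        volume_backend_name='lvm_iscsi',
        volume_group='cinder-volumes',
        target_protocol='iscsi',
        target_helper='lioadm',
    )

    # cinderlib simply tries to create the volume; it does not check whether
    # the backend is already over-provisioned.
    vol = lvm.create_volume(size=100, name='example-disk')
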
From oVirt's perspective this is dangerous if not controlled, because we
could end up consuming all the free space in the backend, and then running
VMs would crash (I think) when they can no longer write to their disks.
oVirt can query the stats of the backend [1] to see how much free space
is available (free_capacity_gb) at any given time in order to provide
over-provisioning limits to its users. I don't know if oVirt is already
doing that or something similar.
It is important to know that stats gathering is an expensive operation
for most drivers, and that's why we can request cached stats (the cache
is lost when the process exits) to help users not overuse it. Stats
probably shouldn't be gathered more than once a minute.
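
A minimal sketch of such a query, continuing the backend sketch above and based on the stats API referenced in [1] (the per-pool layout is typical but driver dependent, so check your driver's output):

    # 'lvm' is the Backend object from the earlier sketch.
    # refresh=False returns cached stats; refresh=True forces a new,
    # potentially expensive, call to the driver.
    stats = lvm.stats(refresh=False)

    # Most drivers report capacity per pool; free_capacity_gb is the figure
    # oVirt could use to cap over-provisioning.
    for pool in stats.get('pools', [stats]):
        print(pool.get('free_capacity_gb'), pool.get('total_capacity_gb'))
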
I hope this helps. I'll be happy to answer any cinderlib questions. :-)
Thanks Gorka, good to know we already have an API to get backend
allocation info. Hopefully we will use this in a future version.
Nir
Cheers,
Gorka.
[1]:
https://docs.openstack.org/cinderlib/latest/topics/backends.html#stats
> > ---- On Fri, 21 Feb 2020 21:35:06 +0000 Nir Soffer
<nsoffer@redhat.com> wrote ----
> >
> >
> >
> > On Fri, Feb 21, 2020, 17:14 Alan G <alan+ovirt@griff.me.uk> wrote:
> >
> > Hi,
> >
> > I have an oVirt cluster with a storage domain hosted on an FC storage array that
utilises block de-duplication technology. oVirt reports the capacity of the domain as
though the de-duplication factor was 1:1, which of course is not the case. So what I would
like to understand is the likely behavior of oVirt when the used space approaches the
reported capacity. Particularly around the critical space action blocker.
> >
> >
> > oVirt does not know about the underlying block storage's thin provisioning
implementation, so it cannot help with this.
> >
> > You will have to use the underlying storage separately to learn about the
actual allocation.
> >
> > This is unlikely to change for legacy storage, but for Managed Block Storage
(cinderlib) we may have a way to access such info.
> >
> > Gorka, do we have any support in cinderlib for getting info about storage
allocation and deduplication?
> >
> > Nir