Dear All,
I'm following Tal's suggestion here and sharing this problem (for which
there's a workaround) with you, hoping someone can identify something
that explains it.
. Settings :
We work mostly on HP hardware (ProLiant DL380 Gen9, DL380e Gen8,
DL180 G6), with an oVirt cluster on DL180 G6.
On the storage side, we use the hosts' hard drives exported over NFS (I
know it's not the "by the book" optimal setup).
. Installation :
Each host is installed from the node ISO.
The hard drives are not SSDs and are in RAID5 (hardware or software).
Each host uses the standard oVirt Node automatic partitioning, except
for one thing: we shrink the / mount point to make a new XFS partition
for the data storage mount point (the partition is inside LVM in our
case).
. The problem :
On all of these hosts we saw a huge discrepancy between the lvs command
and df on the usage of the different partitions (sometimes up to a 90%
difference!).
Spoiler alert: df is right, LVM is wrong.
I think (but I might be wrong) that when we delete a lot of data on the
drive -in oVirt-, LVM never gets told that those blocks were freed.
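For anyone who wants to reproduce the comparison, this is roughly how we check. The mount point and VG/LV names below are examples, not our exact layout; adjust them to yours:

```shell
# Usage as seen by the filesystem -- this is the number df reports
df -h /data

# Usage as seen by LVM; for a thin pool/volume, Data% shows allocated
# blocks, which is what drifts away from what df reports.
# "onn" and "data" are example VG/LV names.
sudo lvs -o lv_name,lv_size,data_percent onn/data
```

On our hosts the Data% reported by lvs stays high long after df shows the space as free.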
. Workarounds :
After discussing it with the LVM devs, we found a workaround:
fstrim, which tells the LVM layer which blocks have actually been freed.
Another solution is to use a data partition that is not part of LVM;
that works well too :)
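The fstrim workaround, as a sketch (the mount point is an example; running it needs root and a discard-capable storage stack):

```shell
# One-off trim of the data mount point; -v prints how much was discarded
sudo fstrim -v /data

# Or trim every mounted filesystem that supports discard
sudo fstrim -av

# To make it periodic, util-linux ships a systemd timer
sudo systemctl enable --now fstrim.timer
```

The timer route avoids having to remember to run it by hand after large deletions.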
One last thing: we saw this on the data partition of our hosts, but it's
not impossible that it's the case on the oVirt partitions too (where the
current discrepancy might be normal).
If someone has an idea of what we did wrong, or if there's a good reason
for this to happen...
Thank you.
--
regards,
Alexis Grillon
Pôle Humanités Numériques, Outils, Méthodes et Analyse de Données
Maison européenne des sciences de l'homme et de la société
MESHS - Lille Nord de France / CNRS
tel. +33 (0)3 20 12 58 57 | alexis.grillon(a)meshs.fr
www.meshs.fr | 2, rue des Canonniers 59000 Lille
-----------------------------------------------------------------
GPG fingerprint AC37 4C4B 6308 975B 77D4 772F 214F 1E97 6C08 CD11