[Users] cluster.min-free-disk option on gluster 3.4 Was: Re: how to temporarily solve low disk space problem

Gianluca Cecchi gianluca.cecchi at gmail.com
Fri Jan 10 10:07:34 UTC 2014


On Tue, Jan 7, 2014 at 8:29 PM, Amedeo Salvati wrote:

>>
>> Could this parameter impact in general only start of new VMs or in any
>> way also already running VMs?
>> Gianluca
>
> added gluster-users as they can respond to our questions.
>
> Gianluca, as you are using glusterfs, and as I can see on your df output:
>
> /dev/mapper/fedora-DATA_GLUSTER                 30G   23G  7.8G  75%
> /gluster/DATA_GLUSTER
> node01.mydomain:gvdata                  30G   26G  4.6G  85%
> /rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata
>
>
> be careful with the gluster cluster.min-free-disk option: on gluster 3.1 and
> 3.2 its default is 0% (good for you!)
>
> http://gluster.org/community/documentation//index.php/Gluster_3.2:_Setting_Volume_Options#cluster.min-free-disk
>
> but I can't find the same documentation for gluster 3.4, which I suppose
> is the version you're using with oVirt; however, in the Red Hat Storage
> documentation the cluster.min-free-disk default is 10% (bad for you):
>
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Managing_Volumes.html
>
> so fill your gvdata volume up to 90% and let us know whether it stops all
> I/O (or only writes), or we can wait for a clarification from a gluster guy :-)
>
> best regards
> a
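For reference, the option Amedeo mentions can be checked and changed from the gluster CLI. This is a sketch against the gvdata volume from this thread; the commands need a live gluster node, and the set command is shown only as an example (I have not actually changed the option here):

```shell
# Show the volume's configured options (cluster.min-free-disk appears
# under "Options Reconfigured" only if it has been set explicitly):
gluster volume info gvdata

# Explicitly set the option; the value may be a percentage of the
# brick or an absolute size:
gluster volume set gvdata cluster.min-free-disk 10%
```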

My environment is based on oVirt 3.3.2 with the Fedora 19 oVirt stable repo,
plus GlusterFS upgraded to 3.4.2-1.fc19 from the updates-testing f19 repo.

At this moment I have 4 GB free on the XFS filesystem that is the base for
the gluster mount point.

# engine-config -g FreeSpaceCriticalLowInGB
FreeSpaceCriticalLowInGB: 2 version: general
--> all ok

Then
# engine-config -s FreeSpaceCriticalLowInGB=6
# systemctl restart ovirt-engine
--> all ok

Then
Tried a "yum update" inside an F20 VM, which completed without problems
(about 50 MB involved in the transaction)
Rebooted the VM (so no power off)
--> all ok

Then
Shut down the VM and attempted to power it on
--> fail

In webadmin gui I get:

"
Error while executing action:

f20:

    Cannot run VM. Low disk space on target Storage Domain gvdata.
"

In engine.log I get
2014-01-10 10:37:42,610 WARN  [org.ovirt.engine.core.bll.RunVmCommand]
(ajp--127.0.0.1-8702-1) [12c2f039] CanDoAction of action RunVm failed.
Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_DISK_SPACE_LOW_ON_TARGET_STORAGE_DOMAIN,$storageName
gvdata
, sharedLocks= ]

Strangely there is no corresponding entry in the webadmin GUI events pane
(see also the list of events below)

Then
# engine-config -s FreeSpaceCriticalLowInGB=1
# systemctl restart ovirt-engine
--> all ok, I can power on the f20 VM again
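Putting the three runs together, the behaviour is consistent with a simple absolute-GB comparison on the engine side. This is only a sketch of the apparent logic, not actual oVirt code; the 4 GB figure is the free space reported for the gvdata domain above:

```shell
#!/bin/sh
# Apparent check: refuse to start a VM when the target storage domain's
# free space is at or below FreeSpaceCriticalLowInGB.
free_gb=4            # free space reported for the gvdata domain
critical_gb=6        # FreeSpaceCriticalLowInGB after "engine-config -s"

if [ "$free_gb" -le "$critical_gb" ]; then
    echo "Cannot run VM. Low disk space on target Storage Domain gvdata."
else
    echo "VM start allowed"
fi
```

With critical_gb set back to 1 the same comparison passes, which matches the VM starting again after the last engine-config change.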

HTH,
Gianluca

Regarding GlusterFS, I went through several updates from 3.x to 4.x for
gluster and within 3.x for engine/hosts, so I don't know what value the
engine would have set for FreeSpaceCriticalLowInGB in a clean install
starting with oVirt 3.3.2 and GlusterFS 3.4.2: a fixed one (absolute or
percentage) or one that depends on the total initial size of the XFS
filesystem.
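For comparison, note that the two thresholds are expressed differently: gluster's cluster.min-free-disk default is a percentage of the brick, while FreeSpaceCriticalLowInGB is an absolute value. An arithmetic sketch on the 30G brick from the df output above, assuming the 10% default from the Red Hat Storage docs applies:

```shell
#!/bin/sh
# 10% of a 30 GB brick, versus the engine's absolute GB threshold.
brick_gb=30
min_free_pct=10
gluster_min_free_gb=$(( brick_gb * min_free_pct / 100 ))
echo "gluster would reserve ${gluster_min_free_gb} GB on this brick"
```

So on this brick the two limits would be of the same order (3 GB vs the 1-6 GB values tried above), but they are tracked by different layers.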

One more strange thing I notice in the engine events sequence is this
(please read from the last line going up): are the data center status
messages normal/expected when you restart the engine?

2014-Jan-10, 10:50 user admin at internal initiated console session for VM f20

2014-Jan-10, 10:50 VM f20 started on Host f18ovn03

2014-Jan-10, 10:50 user admin at internal initiated console session for VM f20
--> here VM starts ok now
2014-Jan-10, 10:49 VM f20 was started by admin at internal (Host: f18ovn03).

2014-Jan-10, 10:49 User admin at internal logged in.
---> here I logged in again to the webadmin GUI after the engine restart
2014-Jan-10, 10:49 User admin at internal logged in.

2014-Jan-10, 10:47 Storage Pool Manager runs on Host f18ovn03
(Address: 10.4.4.59).

2014-Jan-10, 10:47 Invalid status on Data Center Gluster. Setting
status to Non Responsive.

2014-Jan-10, 10:47 State was set to Up for host f18ovn01.

2014-Jan-10, 10:47 State was set to Up for host f18ovn03.
--> reset FreeSpaceCriticalLowInGB to 1 and restarted the engine
2014-Jan-10, 10:47 User admin at internal logged out.
---> shutdown and power off of VM
2014-Jan-10, 10:37 VM f20 is down. Exit message: User shut down

2014-Jan-10, 10:36 user admin at internal initiated console session for VM f20

2014-Jan-10, 10:35 user admin at internal initiated console session for VM f20

2014-Jan-10, 10:31 User admin at internal logged in.
---> here I logged in again to the webadmin GUI after the engine restart
2014-Jan-10, 10:31 User admin at internal logged in.

2014-Jan-10, 10:29 Warning, Low disk space.gvdata domain has 4 GB of free space

2014-Jan-10, 10:29 Storage Pool Manager runs on Host f18ovn03
(Address: 10.4.4.59).

2014-Jan-10, 10:29 Invalid status on Data Center Gluster. Setting
status to Non Responsive.

2014-Jan-10, 10:29 State was set to Up for host f18ovn03.

2014-Jan-10, 10:29 State was set to Up for host f18ovn01.
--> here I set the new, higher value for FreeSpaceCriticalLowInGB that
would cause problems, and restarted the engine
2014-Jan-10, 10:29 User admin at internal logged out.

2014-Jan-10, 10:23 User admin at internal logged in.


