[ovirt-users] Gluster storage question

Bartosiak-Jentys, Chris chris.bartosiak-jentys at certico.co.uk
Sat Feb 11 16:32:14 UTC 2017


Hello list,

Just wanted to get your opinion on my oVirt home lab setup. While this 
is not a production setup, I would like it to run relatively reliably, so 
please tell me if the following storage configuration is likely to 
result in corruption or is just bat s**t insane.

I have a 3 node hosted engine setup; the VM data store and engine data 
store are both replica 3 gluster volumes (one brick on each host).
I do not want to run all 3 hosts 24/7 due to electricity costs, so I only 
power up the larger hosts (2 Dell R710s) when I need additional 
resources for VMs.

I read about using CTDB and floating/virtual IPs to allow the storage 
mount point to transition between available hosts, but after some thought 
decided to go about this another, simpler, way:

I created a common hostname for the storage mount points: gfs-data and 
gfs-engine.

On each host I edited the /etc/hosts file so that these hostnames resolve 
to that host's own IP, i.e. on host1, gfs-data & gfs-engine --> host1's IP,
on host2, gfs-data & gfs-engine --> host2's IP,
etc.
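To make that concrete, here is roughly what each host's /etc/hosts ends up 
looking like (the addresses below are made-up examples, not my real ones):

```
# /etc/hosts on host1 -- both storage names resolve to host1 itself
192.168.1.11   gfs-data gfs-engine

# /etc/hosts on host2 -- the same names resolve to host2 instead
192.168.1.12   gfs-data gfs-engine

# /etc/hosts on host3 -- and likewise to host3
192.168.1.13   gfs-data gfs-engine
```

So the mount spec stays identical everywhere, but each host quietly talks 
to its own local brick.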

In oVirt engine each storage domain is mounted as gfs-data:/data and 
gfs-engine:/engine.
My thinking is that this way, no matter which host is up and acting as 
SPM, it will be able to mount the storage, since the mount only depends 
on that host itself being up.
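As a sanity check, the same mount can be tried by hand on any of the hosts 
(the /mnt/test target here is just an arbitrary example directory):

```
mkdir -p /mnt/test
mount -t glusterfs gfs-data:/data /mnt/test
```

Since gfs-data resolves locally on every host, this should succeed on 
whichever hosts happen to be powered on.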

I changed the gluster server-quorum-ratio option so that the volumes 
remain up even if quorum is not met. I know this is risky, but it's just 
a lab setup after all.
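For reference, the quorum changes I mean look something like the following 
(the exact values here are illustrative of what I did, not a recommendation; 
cluster.server-quorum-ratio is a cluster-wide option, hence "all"):

```
# Relax client-side quorum on the volume (risky: allows writes with
# fewer than a majority of bricks reachable)
gluster volume set data cluster.quorum-type none

# Lower the server-side quorum ratio cluster-wide so bricks stay up
# even when only one of the three peers is running
gluster volume set all cluster.server-quorum-ratio 30%
```

This is exactly the part that makes split-brain possible, which is why I'm 
asking the list.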

So, any thoughts on the /etc/hosts method to ensure the storage mount 
point is always available? Is data corruption more or less inevitable 
with this setup? Am I insane ;) ?

Thanks,

