[ovirt-users] oVirt self-hosted with NFS on top of gluster
Abi Askushi
rightkicktech at gmail.com
Wed Sep 6 15:08:49 UTC 2017
For a first idea I use:
dd if=/dev/zero of=testfile bs=1GB count=1
When testing on the gluster mount point with the above command I hardly get
10 MB/s (at the same time the network traffic barely reaches 100 Mbit).
When testing outside of gluster (for example at /root) I get 600 - 700 MB/s.
When I mount the gluster volume with NFS and test on it I get 90 - 100
MB/s (almost 10x the glusterfs result), which is the maximum I can expect
considering I have only a 1 Gbit network for the storage.
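As a side note on methodology: to take the page cache out of the picture, a
direct-I/O variant of the same test should work, roughly like the sketch
below (assuming GNU dd; conv=fdatasync would be an alternative that only
flushes at the end):

dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct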
Also, when using glusterfs the general VM performance is very poor, and
disk write benchmarks show it is at least 4 times slower than when the VM
is hosted on the same data store mounted over NFS.
I don't know why I am hitting such a significant performance penalty, and
every possible tweak I was able to find out there did not make any
difference to the performance.
The hardware I am using is pretty decent for the purposes intended:
3 nodes, each node with 32 GB of RAM, 16 physical CPU cores, and 2 TB of
storage in RAID5 (4 disks), of which 1.5 TB are sliced for the ovirt data
store where the VMs reside.
The gluster configuration is the following:
Volume Name: vms
Type: Replicate
Volume ID: 4513340d-7919-498b-bfe0-d836b5cea40b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster0:/gluster/vms/brick
Brick2: gluster1:/gluster/vms/brick
Brick3: gluster2:/gluster/vms/brick (arbiter)
Options Reconfigured:
nfs.export-volumes: on
nfs.disable: off
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: on
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
features.shard-block-size: 64MB
performance.client-io-threads: on
client.event-threads: 4
server.event-threads: 4
performance.write-behind-window-size: 4MB
performance.cache-size: 1GB
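For reference, the output above is from "gluster volume info vms"; the
individual options were applied one at a time with commands of the form
below (shown only as a sketch):

gluster volume set vms performance.cache-size 1GB
gluster volume set vms features.shard-block-size 64MB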
In case I can provide any other details let me know.
At the moment I have already switched to gluster-based NFS, but I have a
similar setup with 2 nodes where the data store is mounted through
glusterfs (again on relatively good hardware) where I can try any tweaks
or improvements suggested for that setup.
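For completeness, the gluster-based NFS mount I mean looks roughly like the
sketch below, assuming gluster's built-in NFS server (NFSv3 only) and a
hypothetical /mnt/vms mount point:

mount -t nfs -o vers=3 gluster0:/vms /mnt/vms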
Thanx
On Wed, Sep 6, 2017 at 5:32 PM, Yaniv Kaul <ykaul at redhat.com> wrote:
>
>
> On Wed, Sep 6, 2017 at 3:32 PM, Abi Askushi <rightkicktech at gmail.com>
> wrote:
>
>> Hi All,
>>
>> I've been playing with the oVirt self-hosted engine setup and I even use
>> it in production for several VMs. The setup I have is 3 servers with
>> gluster storage in replica 2+1 (1 arbiter).
>> The data storage domain where the VMs are stored is mounted with gluster
>> through oVirt. The performance I get for the VMs is very low and I was
>> thinking to switch and mount the same storage through NFS instead of
>> glusterfs.
>>
>
> I don't see how it'll improve performance.
> I suggest you share the gluster configuration (as well as the storage HW)
> so we can understand why the performance is low.
> Y.
>
>
>>
>> The only thing I am hesitant about is how I can ensure high availability
>> of the storage when I lose one server? I was thinking to have something
>> like the below in /etc/hosts:
>>
>> 10.100.100.1 nfsmount
>> 10.100.100.2 nfsmount
>> 10.100.100.3 nfsmount
>>
>> then use nfsmount as the server name when adding this domain through the
>> oVirt GUI.
>> Are there any other more elegant solutions? What do you do in such cases?
>> Note: gluster has the backup-volfile-servers option, which provides a lean
>> way to have redundancy on the mount point, and I am using this when
>> mounting with glusterfs.
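>> As a sketch, the glusterfs mount with that option looks roughly like the
>> following, assuming the fuse client's backup-volfile-servers option and a
>> hypothetical /mnt/vms mount point:
>>
>> mount -t glusterfs -o backup-volfile-servers=gluster1:gluster2 \
>>   gluster0:/vms /mnt/vms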
>>
>> Thanx
>>
>>
>>
>