
On 2018-12-20 07:14, Stefan Wolf wrote:
Yes, I think so too, but as you can see at the top:

[root@kvm380 ~]# gluster volume info
...
performance.strict-o-direct: on
...

it was already set.
I did a one-node cluster setup with oVirt, and this is the result:
Volume Name: engine
Type: Distribute
Volume ID: a40e848b-a8f1-4990-9d32-133b46db6f1d
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: kvm360.durchhalten.intern:/gluster_bricks/engine/engine
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on
Could there be another reason?
Are you mounting via the gluster GUI? I'm not sure how it handles mounting of manually created gluster volumes, but the direct-io-mode=enable mount option comes to mind. I assume direct-io is also enabled on the other volume? It needs to be set on all of them.
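For reference, checking the option on each volume and mounting with direct-io enabled would look roughly like the sketch below. This is a hedged example, not a definitive recipe: the volume name "data" and the mount point are assumptions (only "engine" appears in the output above), and the commands need a live gluster cluster, so they are shown as a CLI fragment rather than something to run as-is.

```shell
# Check the direct-I/O related options on every volume, not just engine
# ("data" is a hypothetical second volume name used for illustration):
gluster volume get engine performance.strict-o-direct
gluster volume get data performance.strict-o-direct

# If a volume is mounted manually (outside the GUI), pass the
# direct-io-mode option explicitly on the client side:
mount -t glusterfs -o direct-io-mode=enable \
    kvm360.durchhalten.intern:/engine /mnt/engine
```

The volume-level option (performance.strict-o-direct) and the client-side mount option (direct-io-mode) are separate knobs; a manual mount that omits the latter can behave differently from one set up by the GUI even when the volume options match.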