
Hi, thanks for helping. Here is the output of gluster vol info:

Volume Name: GlusterVol
Type: Distributed-Replicate
Volume ID: 22ea3d7d-4435-423a-a06c-504fa9b36ada
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 4 = 8
Transport-type: tcp
Bricks:
Brick1: 172.31.100.150:/home/admin/ovirt_3/data
Brick2: 172.31.100.151:/home/admin/ovirt_3/data
Brick3: 172.31.100.152:/home/admin/ovirt_3/data
Brick4: 172.31.100.153:/home/admin/ovirt_3/data
Brick5: 172.31.100.154:/home/admin/ovirt_3/data
Brick6: 172.31.100.155:/home/admin/ovirt_3/data
Brick7: 172.31.100.156:/home/admin/ovirt_3/data
Brick8: 172.31.100.157:/home/admin/ovirt_3/data
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.min-free-inodes: 6%
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
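If it helps, I can also run the following standard gluster commands for more detail. The volume name GlusterVol is taken from the output above; let me know which output you'd like to see:

# Per-brick process status and statistics (online state, disk usage, inode counts)
gluster volume status GlusterVol detail

# Files currently pending self-heal on each brick, if any
gluster volume heal GlusterVol info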