If you seek performance, set the tuned-adm profile in the VM to 'throughput-performance' and the I/O scheduler to either 'noop' or 'none' (depending on whether multi-queue is enabled).
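
For example, something along these lines inside the VM (sda is just an illustrative device name; virtio disks usually show up as vda):

    tuned-adm profile throughput-performance
    echo noop > /sys/block/sda/queue/scheduler    # single-queue block layer
    echo none > /sys/block/sda/queue/scheduler    # multi-queue (blk-mq)

Note the sysfs change does not survive a reboot; a udev rule (or the legacy 'elevator=' kernel parameter) is the usual way to make it persistent.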

Usually, if you create the gluster cluster via cockpit and then install the hosted engine via cockpit as well, all options on your gluster volumes are already the most optimal ones you need.

Best Regards,
Strahil Nikolov

On Oct 18, 2019 15:30, Jayme <jaymef@gmail.com> wrote:
My VMs are using the virtual-guest tuned profile and the oVirt node hosts are using the virtual-host profile.  Those seem to be good defaults from what I'm seeing.  I will test I/O schedulers to see if that makes any difference and also try out the high performance VM profile (I was staying away from that profile due to the loss of high availability).

On Fri, Oct 18, 2019 at 9:18 AM Jayme <jaymef@gmail.com> wrote:
The VMs are basically as stock CentOS 7.x as you can get.  There are so many layers to deal with in HCI that it's difficult to know where to begin with tuning.  I was focusing mainly on gluster.  Is it recommended to do tuning directly on the oVirt host nodes as well, such as the I/O scheduler, tuned-adm profiles, etc.?

On Fri, Oct 18, 2019 at 6:55 AM Strahil <hunter86_bg@yahoo.com> wrote:

What are your I/O scheduler and tuned-adm profile in the VM?
Red Hat based VMs use 'deadline', which prioritizes reads over writes -> you can use 'noop' or 'none' instead.
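
You can check which scheduler is currently active; the one in square brackets is in use (vda is just an illustrative device name):

    cat /sys/block/vda/queue/scheduler
    noop [deadline] cfq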

For the profile, you can use 'high-performance'.
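
You can check what is currently applied before switching (output is just an example):

    tuned-adm active
    Current active profile: virtual-guest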

Best Regards,
Strahil Nikolov

On Oct 18, 2019 06:45, Jayme <jaymef@gmail.com> wrote:
I'm wondering if anyone has any tips to improve file/directory operations in an HCI replica 3 (no arbiter) configuration with SSDs and a 10GbE storage network.

I am currently running the stock "optimize for virt store" volume settings and am wondering what, if any, improvements I can make for VM write speed, and more specifically anything I can tune to increase the performance of small-file operations such as copying, untarring, npm installs, etc.

For some context, I'm seeing ~50MB/s write speeds inside a VM with: dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct -- I am not sure how this compares to other HCI setups, but I feel it should be higher with SSD-backed storage.  The same command run from the gluster mount is over 400MB/s.
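
For reference, the comparison I'm making (the host-side mount path is illustrative; yours will match your storage domain):

    # inside the VM
    dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct
    # on an oVirt host, against the FUSE mount of the storage domain
    dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/<server>:_<volume>/test bs=512k count=2048 oflag=direct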

I've read some things about metadata caching, read-ahead and other options.  There are so many that I'm not sure where to start, and I'm also not sure which could potentially have a negative impact on VM stability/reliability.
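
For reference, the metadata-caching settings I've seen referenced in the gluster small-file tuning docs are along these lines (I haven't tried them, and I'm unsure which are safe for VM workloads, so this is only what I'd trial on a non-critical volume first):

    gluster volume set prod_b features.cache-invalidation on
    gluster volume set prod_b features.cache-invalidation-timeout 600
    gluster volume set prod_b performance.stat-prefetch on
    gluster volume set prod_b performance.cache-invalidation on
    gluster volume set prod_b performance.md-cache-timeout 600
    gluster volume set prod_b network.inode-lru-limit 200000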

Here are options for one of my volumes:

Volume Name: prod_b
Type: Replicate
Volume ID: c3e7447e-8514-4e4a-9ff5-a648fe6aa537
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster0.example.com:/gluster_bricks/prod_b/prod_b
Brick2: gluster1.example.com:/gluster_bricks/prod_b/prod_b
Brick3: gluster2.example.com:/gluster_bricks/prod_b/prod_b
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
server.allow-insecure: on
cluster.choose-local: off
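
If it's useful for comparison, I believe the "optimize for virt store" defaults come from the virt group file shipped with gluster on the hosts (path as in stock packaging), which can be diffed against the options above:

    cat /var/lib/glusterd/groups/virt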