On June 29, 2020, 4:14:33 GMT+03:00, jury cat <shadow.emy1(a)gmail.com> wrote:
>If I destroy the brick, I might upgrade to oVirt 4.4 and CentOS 8.2.
>Do you think upgrading to oVirt 4.4 with GlusterFS improves performance,
>or am I better off with NFS?
Actually, only you can find out, as we cannot know the workload of your VMs.
oVirt 4.4 uses Gluster v7, but I have to warn you that several people have reported
issues after upgrading from v6.5 to v6.6+ or from v7.0 to v7.1+. It's still under
investigation.
>If that partition alignment is so important, can I have an example
>command showing how to set it up?
You are using a 64K stripe size, but Red Hat usually recommends either 128K for RAID 6 or
256K for RAID 10. In your case 256K sounds nice.
Your stripe width will be 64K x 2 data disks = 128K.
So you should use :
pvcreate --dataalignment 128k /dev/raid-device
For details, check the RHGS documentation:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5...
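Putting the numbers above together, a minimal sketch of the alignment setup (the `/dev/raid-device` path and the volume/filesystem names are placeholders for your environment):

```shell
# Stripe width = stripe size x number of data disks (64K x 2 = 128K here).
STRIPE_SIZE_K=64
DATA_DISKS=2
STRIPE_WIDTH_K=$((STRIPE_SIZE_K * DATA_DISKS))
echo "stripe width: ${STRIPE_WIDTH_K}k"

# Align the PV data area to the full stripe width:
pvcreate --dataalignment "${STRIPE_WIDTH_K}k" /dev/raid-device
vgcreate gluster_vg /dev/raid-device
lvcreate -l 100%FREE -n gluster_lv gluster_vg

# XFS can also be told the RAID geometry
# (su = stripe unit per disk, sw = number of data disks):
mkfs.xfs -d su=64k,sw=2 /dev/gluster_vg/gluster_lv
```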
>I have uploaded an image with my current RAID 0 size and stripe size.
>Btw, I managed to enable Jumbo Frames with 9K MTU on the Gluster storage
>network, and I can also try to enable the multiqueue scheduler.
Verify that the MTU is the same on every device.
As the IP + ICMP headers need 28 bytes, you can try:
ping -M do -c 10 -s 8972 remote_gluster_node
Also, you can test changing the I/O scheduler.
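As a sketch (sdX is a placeholder for your brick's disk), the current and available schedulers can be inspected and switched at runtime via sysfs:

```shell
# Show the available schedulers; the active one is in brackets,
# e.g.: [mq-deadline] kyber bfq none
cat /sys/block/sdX/queue/scheduler

# Switch schedulers and benchmark each under your actual VM workload:
echo none > /sys/block/sdX/queue/scheduler
```

Note that a sysfs change does not survive a reboot; making it permanent is usually done via a udev rule or a tuned profile.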
>Can I use the latest GlusterFS version 8 with oVirt 4.3.10 or 4.4,
>if of course it has performance benefits?
Gluster v8.0 is planned for community tests -
it's too early for it, so use the 4.4 default (v7.X).
>Also, can you share the rhgs-random-io.settings you use?
I can't claim those are universal, but here is mine:
[main]
summary=Optimize for running KVM guests on Gluster (Random IO)
include=throughput-performance
[cpu]
governor=ondemand|powersave
energy_perf_bias=powersave|power
[sysctl]
#vm.dirty_ratio = 5
#Random io -> 2 , vm host -> 5
#vm.dirty_background_ratio = 4
vm.dirty_background_bytes = 200000000
vm.dirty_bytes = 450000000
# The total time the scheduler will consider a migrated process
# "cache hot" and thus less likely to be re-migrated
# (system default is 500000, i.e. 0.5 ms)
kernel.sched_migration_cost_ns = 5000000
I'm using the powersave governor, as I'm chasing power efficiency rather than
performance. I would recommend taking a look at the source RPM from the previous
e-mail, which contains Red Hat's tuned profile.
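For reference, a custom profile like the one above is typically installed and activated like this (the profile name rhgs-random-io is my choice; tuned looks for the settings in /etc/tuned/<profile>/tuned.conf):

```shell
# Create the profile directory and drop the settings in as tuned.conf:
mkdir -p /etc/tuned/rhgs-random-io
cp rhgs-random-io.settings /etc/tuned/rhgs-random-io/tuned.conf

# Activate it and confirm:
tuned-adm profile rhgs-random-io
tuned-adm active
```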
>
>Thanks,
>Emy