Hi Jaroslaw,
That point was from someone else. I don't think Gluster has such a weak point.
The only weak points I have seen are the infrastructure it relies on top of and, of course,
its built-in limitations.
You need to verify the following:
- Mount options are important. Using 'nobarrier' without RAID-controller (battery/flash-backed
cache) protection is devastating. Also, I use the following option when using Gluster + SELinux
in enforcing mode:
context=system_u:object_r:glusterd_brick_t:s0 - it tells the kernel the SELinux context of
all files/dirs on the Gluster brick, which reduces I/O requests to the
bricks.
My mount options are:
noatime,nodiratime,inode64,nouuid,context="system_u:object_r:glusterd_brick_t:s0"
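For example, a brick entry in /etc/fstab could look like this (the LV name and the
mount point are placeholders - adjust them to your setup):

  /dev/gluster_vg/gluster_lv  /gluster_bricks/data  xfs  noatime,nodiratime,inode64,nouuid,context="system_u:object_r:glusterd_brick_t:s0"  0 0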
- Next is your FS - if you use a HW RAID controller, you need to specify sunit= and
swidth= for 'mkfs.xfs' (and don't forget the '-i size=512').
This tells XFS about the hardware beneath - see the example below.
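A rough sketch, assuming a RAID6 array with 10 data disks and a 256KiB stripe unit
(sunit/swidth are counted in 512-byte sectors - plug in your controller's real geometry):

  # 256KiB stripe unit (512 sectors) x 10 data disks
  mkfs.xfs -i size=512 -d sunit=512,swidth=5120 /dev/sdX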
- If you use thin LVM, make sure the '_tmeta' LV of the thinpool LV is not on top of a
VDO device, as metadata doesn't dedupe well.
I'm using VDO with 'emulate512' as my 'PHY-SEC' is 4096 and oVirt
doesn't like it :) . You can check yours via 'lsblk -t'.
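Quick checks (the VG name is just an example):

  lsblk -t                              # the PHY-SEC column shows the physical sector size
  lvs -a -o lv_name,devices gluster_vg  # shows which devices back the thinpool's _tmeta LV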
- Configure and tune your VDO. I stick to 1 VDO = 1 fast disk (NVMe/SSD), as I'm not
very good at tuning VDO. If you need dedupe - check Red Hat's documentation about the
indexing, as the defaults are not optimal.
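A minimal sketch with the 'vdo' manager tool - the device name and the index sizing here
are assumptions, so verify them against Red Hat's VDO documentation first:

  vdo create --name=vdo1 --device=/dev/sdX \
      --emulate512=enabled --indexMem=1 --sparseIndex=enabled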
- Next is the disk scheduler. For NVMe the Linux kernel takes care of it,
but for SSDs and large HW arrays you can enable multiqueue and switch to
'none' via udev rules. Of course, testing is needed for every prod environment :)
Also consider using the noop/none I/O scheduler inside the VMs, as you don't want to
reorder I/O requests at the VM level - only at the host level.
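For example, a udev rule like this (the match pattern is only an illustration - scope it
to the right disks on your host):

  # /etc/udev/rules.d/60-io-scheduler.rules
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"

On older EL7 kernels you may also need 'scsi_mod.use_blk_mq=1' on the kernel command line
before the blk-mq 'none' scheduler shows up for SATA/SAS disks.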
- You can set your CPUs to avoid switching to lower C-states -> waking from deep C-states
adds extra latency for the host/VM processes.
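On EL-based hosts an easy way is a tuned profile that caps the allowed C-state latency
(alternatively, 'processor.max_cstate=1' on the kernel command line):

  tuned-adm profile latency-performance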
- Transparent Huge Pages can be a real problem, especially with large VMs. oVirt 4.4.x
should now support native Huge and 'Gumbo' pages, which will reduce the stress on the OS.
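To check and disable THP at runtime (make it persistent via the kernel command line or a
tuned profile):

  cat /sys/kernel/mm/transparent_hugepage/enabled
  echo never > /sys/kernel/mm/transparent_hugepage/enabled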
- vm.swappiness and the vm.dirty_* settings (vm.dirty_background_ratio, vm.dirty_ratio,
etc.). You can check what Red Hat Gluster Storage is using - the values are in the
redhat-storage-server RPMs in ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/
They control when the system starts flushing dirty memory to disk and when it blocks
processes until all memory is flushed.
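A sketch of such a sysctl drop-in - the values below are illustrative, so compare them
with what the redhat-storage-server package actually ships:

  # /etc/sysctl.d/90-storage.conf
  vm.swappiness = 10
  vm.dirty_background_ratio = 10
  vm.dirty_ratio = 20

Apply with 'sysctl --system'.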
Best Regards,
Strahil Nikolov
On Saturday, 10 October 2020 at 18:18:55 GMT+3, Jarosław Prokopowski
<jprokopowski(a)gmail.com> wrote:
Thanks Strahil
The data center is remote, so I will definitely ask the lab guys to ensure the switch is
connected to a battery-backed power socket.
So Gluster's weak point is actually the network switch? Can it have
difficulty determining which version of the data is correct after the switch was off for some
time?
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFP2FX2YRAP...