On March 27, 2020 2:49:13 PM GMT+02:00, Jorick Astrego <jorick(a)netbulae.eu> wrote:
On 3/24/20 7:25 PM, Alex McWhirter wrote:
> Red Hat also recommends a shard size of 512MB, it's actually the only
> shard size they support. Also check the chunk size on the LVM thin
> pools running the bricks, it should be at least 2MB. Note that changing
> the shard size only applies to new VM disks after the change. Changing
> the chunk size requires making a new brick.
>
Regarding the chunk size, Red Hat tells me it depends on RAID or JBOD:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5...
chunksize
An important parameter to be specified while creating a thin
pool is the chunk size, which is the unit of allocation. For good
performance, the chunk size for the thin pool and the parameters
of the underlying hardware RAID storage should be chosen so that
they work well together.
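For illustration, the thin pool for a brick is created with an explicit
chunk size along these lines (the volume group name, sizes and the RAID
6 stripe geometry below are placeholders, not values from this thread):

    # RAID 6 with 10 data disks and a 128 KiB stripe unit -> full stripe = 1280 KiB
    lvcreate --thinpool vg_brick1/tp_brick1 --size 9T \
             --chunksize 1280k --poolmetadatasize 16G --zero n
    # a JBOD setup would typically use a smaller chunk size, e.g. 256k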
And regarding the shard size, you can fix that with storage live
migration, right? Use two volumes and storage domains and move the
disks so they adopt the new shard size...
Am I correct that when you change the sharding on a running volume, it
only applies to new disks? Or does it also apply to extensions of an
existing disk?
Met vriendelijke groet, With kind regards,

Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270
Fax: 053 20 30 271
info(a)netbulae.eu
www.netbulae.eu
Staalsteden 4-3A, 7547 TA Enschede
KvK 08198180
BTW NL821234584B01
----------------
The shard size change only applies to new images, but existing disks can
be fixed either via storage migration between volumes or by creating a
new disk and migrating the data within the guest OS (if possible).
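For reference, the shard size is changed per volume with something like
the following (the volume name is a placeholder), and as said it only
takes effect for images created afterwards:

    gluster volume set myvolume features.shard-block-size 512MB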
Still, MTU is important, and you can use
'ping -s <size_of_data> -c 1 -M do <destination>' to test it.
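For example, to verify that a full 9000-byte MTU is usable between two
nodes (the hostname is a placeholder); 8972 = 9000 minus 20 bytes of IP
header minus 8 bytes of ICMP header:

    # -M do forbids fragmentation, so the ping fails if any hop has a smaller MTU
    ping -M do -s 8972 -c 1 gluster2.example.com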
Keep in mind that VLAN tagging also takes some space in the frame (4
bytes per 802.1Q tag, 8 bytes with QinQ). Today I set MTU 9100 on some
servers in order to guarantee that the application can transfer 9000
bytes of data, but this depends on the switches between the nodes and
the NICs of the servers.
You can use tracepath to detect if there is a switch that doesn't support Jumbo
Frames.
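tracepath reports the path MTU per hop, e.g. (the hostname is a placeholder):

    tracepath -n gluster2.example.com
    # watch the reported 'pmtu' value; if it drops below 9000 at some hop,
    # a device along the path is not passing jumbo frames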
Actually, setting up CTDB with NFS-Ganesha is quite easy. You will still
get all the oVirt 'goodies' (snapshots, live migration, etc.) while
gaining higher performance via NFS-Ganesha, which acts like a gateway
for the clients (while itself accessing all Gluster servers
simultaneously), so it is better placed outside the Gluster servers.
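A minimal sketch of a ganesha.conf export block for a Gluster volume
through the GLUSTER FSAL (the export ID, hostname and volume name below
are placeholders, not values from this setup):

    EXPORT {
        Export_Id = 2;                          # unique ID per export
        Path = "/data";                         # Gluster volume, given as a path
        Pseudo = "/data";                       # NFSv4 pseudo path the clients mount
        Access_Type = RW;
        Squash = No_root_squash;
        SecType = "sys";
        FSAL {
            Name = GLUSTER;                     # libgfapi-based backend
            Hostname = "gluster1.example.com";  # any node of the trusted pool
            Volume = "data";                    # Gluster volume to export
        }
    }

CTDB then only has to float a virtual IP between the Ganesha nodes so
that clients fail over transparently.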
Best Regards,
Strahil Nikolov