On 2020-03-27 05:28, Christian Reiss wrote:
Hey Alex,
you too, thanks for writing.
I'm on 64 MB, the oVirt default. We tried no sharding, 128 MB
sharding, and 64 MB sharding (copying the disk each time). There was no
measurable increase or decrease in disk speed either way.
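If anyone wants to reproduce that comparison, a plain fio run inside the
guest along these lines should show the (lack of) difference. The file
path, size, and job counts below are just placeholders:

    # random-write benchmark inside the VM; adjust size/runtime to taste
    fio --name=shard-test --filename=/var/tmp/fio.dat \
        --rw=randwrite --bs=4k --size=4G --ioengine=libaio \
        --direct=1 --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting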
Besides losing HA capabilities, what other caveats are there?
-Chris.
On 24/03/2020 19:25, Alex McWhirter wrote:
> Red Hat also recommends a shard size of 512 MB; it's actually the only
> shard size they support. Also check the chunk size on the LVM thin
> pools running the bricks; it should be at least 2 MB. Note that changing
> the shard size only applies to VM disks created after the change, and
> changing the chunk size requires making a new brick.
>
> libgfapi brings a huge performance boost; in my opinion it's almost a
> necessity unless you have a ton of extra disk speed / network
> throughput. Just be aware of the caveats.
--
Christian Reiss - email(a)christian-reiss.de /"\ ASCII Ribbon
support(a)alpha-labs.net \ / Campaign
X against HTML
WEB
alpha-labs.net / \ in eMails
GPG Retrieval
https://gpg.christian-reiss.de
GPG ID ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
"It's better to reign in hell than to serve in heaven.",
John Milton, Paradise lost.
You don't lose HA, you just lose live migration between separate
data centers or between gluster volumes. Live migration between nodes in
the same DC / gluster volume still works fine. Some people have snapshot
issues; I don't, but plan for problems just in case.
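For anyone following along, libgfapi is toggled engine-wide with
engine-config and picked up by VMs started after the engine restart.
Roughly (run on the engine host; verify the key on your engine version):

    # enable gfapi access for newly started VMs
    engine-config -s LibgfApiSupported=true
    systemctl restart ovirt-engine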
A 512 MB shard size will only affect new VMs, or new VM disks to be exact.
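For reference, checking and changing it looks roughly like this (the
volume name is a placeholder; shards already on disk keep their old size):

    # show the current shard size for the volume
    gluster volume get myvol features.shard-block-size
    # bump it to 512MB; only disks created afterwards use the new size
    gluster volume set myvol features.shard-block-size 512MB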
LVM chunk size defaults to 2 MB on CentOS 7.6+, but it should be a
multiple of your RAID stripe size. Stripe size should be fairly large;
we use 512 KB stripe sizes on the bricks and 2 MB chunk sizes on LVM.
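In case it helps, that layout looks roughly like this at creation time.
Device names, the RAID level/disk count, and sizes are all placeholders
for illustration (hardware RAID users would set the stripe in the
controller instead of mdadm):

    # software-RAID example: 512K stripe (chunk) across the brick disks
    mdadm --create /dev/md0 --level=10 --raid-devices=8 --chunk=512 \
        /dev/sd[b-i]
    # thin pool with a 2m chunk size, a multiple of the 512K stripe
    pvcreate /dev/md0
    vgcreate vg_brick /dev/md0
    lvcreate --type thin-pool --chunksize 2m -L 5T -n pool0 vg_brick
    # verify the chunk size that actually got used
    lvs -o +chunksize vg_brick/pool0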
With that and about 90 disks we can saturate 10 GbE. We then added
some SSD cache drives to LVM on the bricks, which helped a lot with
random I/O.
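The cache setup is plain dm-cache via LVM. Sketched out with placeholder
device and LV names (continuing the vg_brick/pool0 example from above):

    # dedicate the SSD to the brick VG and carve out a cache pool
    pvcreate /dev/nvme0n1
    vgextend vg_brick /dev/nvme0n1
    lvcreate --type cache-pool -L 400G -n cache0 vg_brick /dev/nvme0n1
    # attach it to the thin pool; writethrough is the safer default
    lvconvert --type cache --cachepool vg_brick/cache0 \
        --cachemode writethrough vg_brick/pool0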