So in our internal tests (with NVMe SSD drives and a 10G network), we found read
performance to be better with choose-local disabled in a hyperconverged setup. See
for more information.
With choose-local off, the read replica is chosen randomly (based on a hash of the
gfid of that shard). When it is enabled, reads always go to the local replica.
We attributed the better performance with the option disabled to bottlenecks in
gluster's rpc/socket layer. Imagine all read requests lined up to be sent over the
same mount-to-brick connection, as opposed to being (nearly) randomly distributed
over three such connections (because replica count = 3).
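For reference, checking and flipping the option on a given volume looks roughly like this (the volume name 'data_fast' below is only an example, not a recommendation either way):
gluster volume get data_fast cluster.choose-local
# prints the current value of the option for that volume
gluster volume set data_fast cluster.choose-local off
# or 'on' to pin reads back to the local replica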
Did you run any tests that indicate "choose-local=on" gives better read
performance than when it is disabled?
-Krutika
On Sun, May 19, 2019 at 5:11 PM Strahil Nikolov <hunter86_bg(a)yahoo.com>
wrote:
Ok,
so it seems that Darrell's case and mine are different, as I use vdo.
Now I have destroyed the Storage Domains, gluster volumes and vdo, and
recreated them again (4 gluster volumes on a single vdo).
This time vdo has '--emulate512=true' and no issues have been observed.
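For completeness, the vdo device was recreated along these lines (the name, backing device and logical size below are placeholders rather than my exact values, and the emulate512 flag is written exactly as above - check the vdo man page for the accepted form on your version):
vdo create --name=vdo1 --device=/dev/sdb --vdoLogicalSize=1T --emulate512=true
# 512-byte emulation avoids O_DIRECT failures when the backing device is 4k-native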
Gluster volume options before 'Optimize for virt':
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
Gluster volume after 'Optimize for virt':
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
network.ping-timeout: 30
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.enable-shared-storage: enable
After that, adding the volumes as storage domains (via the UI) worked without
any issues.
Can someone clarify why we now have 'cluster.choose-local: off', when in
oVirt 4.2.7 (gluster v3.12.15) we didn't have that?
I'm using storage that is faster than the network, and reading from the local
brick gives very high read speed.
Best Regards,
Strahil Nikolov
On Sunday, May 19, 2019 at 9:47:27 AM GMT+3, Strahil <
hunter86_bg(a)yahoo.com> wrote:
On this one
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3...
We should have the following options:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
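Applying all of those in one go is also possible via the 'virt' group profile that ships with gluster, e.g. (the volume name is just an example, and the uid/gid lines simply mirror what already shows up in the volume info above):
gluster volume set data_fast group virt
# applies every option from the packaged virt profile at once
gluster volume set data_fast storage.owner-uid 36
gluster volume set data_fast storage.owner-gid 36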
By the way, the 'virt' gluster group disables 'cluster.choose-local', and I
think it wasn't like that before.
Any reasons behind that, as I use it to speed up my reads, since local
storage is faster than the network?
Best Regards,
Strahil Nikolov
On May 19, 2019 09:36, Strahil <hunter86_bg(a)yahoo.com> wrote:
OK,
Can we summarize it:
1. VDO must have 'emulate512=true'
2. 'network.remote-dio' should be off ?
As per this:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/h...
We should have these:
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on
quorum-type=auto
server-quorum-type=server
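Since the two documents disagree on remote-dio, it is probably worth checking what a volume actually ended up with, e.g. (volume name is just an example):
gluster volume get data_fast network.remote-dio
gluster volume get data_fast performance.strict-o-direct
# the hyperconverged setups seem to rely on strict-o-direct=on with remote-dio off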
I'm a little bit confused here.
Best Regards,
Strahil Nikolov
On May 19, 2019 07:44, Sahina Bose <sabose(a)redhat.com> wrote:
On Sun, 19 May 2019 at 12:21 AM, Nir Soffer <nsoffer(a)redhat.com> wrote:
On Fri, May 17, 2019 at 7:54 AM Gobinda Das <godas(a)redhat.com> wrote:
From the RHHI side, by default we are setting the volume options below:
{ group: 'virt',
storage.owner-uid: '36',
storage.owner-gid: '36',
network.ping-timeout: '30',
performance.strict-o-direct: 'on',
network.remote-dio: 'off'
According to the user reports, this configuration is not compatible with
oVirt.
Was this tested?
Yes, this is set by default in all test configurations. We're checking on
the bug, but the error is likely when the underlying device does not
support 512b writes.
With network.remote-dio off, gluster will ensure O_DIRECT writes.
}
On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov <hunter86_bg(a)yahoo.com>
wrote:
Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed
me to create the storage domain without any issues.
I set it on all 4 new gluster volumes and the storage domains were
successfully created.
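Roughly what I ran (data_fast and data_fast4 are real volume names, the middle two are placeholders since I don't list them above):
for v in data_fast data_fast2 data_fast3 data_fast4; do
    gluster volume set $v network.remote-dio on
done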
I have created a bug for that:
https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else has already opened one, please ping me to mark this one as
a duplicate.
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019 at 10:27:01 PM GMT+3, Darrell Budic <
budic(a)onholyground.com> wrote:
On May 16, 2019, at 1:41 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic(a)onholyground.com>
wrote:
I tried adding a new storage domain on my hyperconverged test cluster
running oVirt 4.3.3.7 and gluster 6.1. I was able to create the new
gluster volume fine, but it's not able to add the gluster storage domain
(either as a managed gluster volume or by directly entering values). The
created gluster volume mounts and looks fine from the CLI. Errors in the
VDSM log:
...
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't support direct IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
The direct I/O check has failed,
so something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fail to connect to this domain, file a bug and
we will investigate.
Nir
Yep, it fails as expected. Just to check: it is working on pre-existing
volumes, so I poked around at the gluster settings for the new volume. It has
network.remote-dio=off set on the new volume, but enabled on the old volumes.
After enabling it, I'm able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so
apparently it's not getting set by oVirt during the volume creation/optimize
for storage?
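For reference, the full set of options the virt group applies can be read straight from the group definition file on the gluster servers (path assumes a standard packaged install):
cat /var/lib/glusterd/groups/virt
# one option=value per line; this is what 'gluster volume set <vol> group virt' would apply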
--
Thanks,
Gobinda