I'm in the middle of a priority issue right now, so I can't take time out to rerun the
benchmark, but...
Usually in that kind of situation, if you don't turn on sync-to-disk on every write, you
get benchmark numbers that are artificially HIGH.
Forcing O_DIRECT slows throughput down.
Don't you think the results are bad enough already? :-}
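(Just to illustrate the point, assuming fio is available and /path/to/device stands in
for whatever device is under test, running the same random-write job buffered and then
with O_DIRECT usually shows the gap; these are standard fio command-line options:

fio --name=buffered --filename=/path/to/device --rw=randwrite --bs=4k --runtime=60 --time_based --direct=0  # page cache absorbs writes, numbers come out high
fio --name=direct --filename=/path/to/device --rw=randwrite --bs=4k --runtime=60 --time_based --direct=1    # bypasses the page cache, closer to what the disk really does
)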
----- Original Message -----
From: "Stefan Hajnoczi" <stefanha(a)redhat.com>
To: "Philip Brown" <pbrown(a)medata.com>
Cc: "Nir Soffer" <nsoffer(a)redhat.com>, "users"
<users(a)ovirt.org>, "qemu-block" <qemu-block(a)nongnu.org>, "Paolo
Bonzini" <pbonzini(a)redhat.com>, "Sergio Lopez Pascual"
<slp(a)redhat.com>, "Mordechai Lehrer" <mlehrer(a)redhat.com>,
"Kevin Wolf" <kwolf(a)redhat.com>
Sent: Thursday, July 23, 2020 6:09:39 AM
Subject: Re: [BULK] Re: [ovirt-users] very very bad iscsi performance
Hi,
At first glance it appears that the filebench OLTP workload does not use
O_DIRECT, so this isn't a measurement of pure disk I/O performance:
https://github.com/filebench/filebench/blob/master/workloads/oltp.f
If you suspect that disk performance is the issue, please run a benchmark
that bypasses the page cache using O_DIRECT.
The fio setting is direct=1.
Here is an example fio job for 70% read/30% write 4KB random I/O:
[global]
filename=/path/to/device   # device or file under test
runtime=120                # run each job for 120 seconds
ioengine=libaio            # Linux native asynchronous I/O
direct=1                   # O_DIRECT, bypass the page cache
ramp_time=10               # start measuring after warm-up time

[read]
readwrite=randrw           # mixed random reads and writes
rwmixread=70               # 70% reads
rwmixwrite=30              # 30% writes
iodepth=64                 # keep 64 requests in flight
blocksize=4k               # 4KB I/O size
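Assuming the job file above is saved as, say, randrw.fio (the name is arbitrary), it can
be run with:

fio randrw.fio

Replace /path/to/device with the actual device or LUN you want to measure, and keep in
mind the write portion is destructive, so point it at a scratch device.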
(Based on
https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html)
Stefan