On Tue, Jul 21, 2020 at 07:14:53AM -0700, Philip Brown wrote:
> Thank you for the analysis. I have some further comments:
>
> First off, filebench pre-writes the files before doing OLTP benchmarks, so I don't
> think thin provisioning is at play here.
> I will double-check this, but if you don't hear otherwise, please presume that is
> the case. :)
>
> Secondly, I am surprised at your recommendation to use virtio instead of
> virtio-scsi, since the writeup for virtio-scsi claims it has equivalent
> performance in general and adds better scaling:
> https://www.ovirt.org/develop/release-management/features/storage/virtio-...
>
> As far as your suggestion for using multiple disks for scaling higher:
> We are using an SSD. Isn't the whole advantage of using SSD drives that you can
> get the IOPS performance of 10 drives out of a single drive?
> We certainly get that using it natively, outside of a VM.
> So it would be nice to see performance approaching that within an oVirt VM.
Hi,
At first glance it appears that the filebench OLTP workload does not use
O_DIRECT, so this isn't a measurement of pure disk I/O performance:
https://github.com/filebench/filebench/blob/master/workloads/oltp.f
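A quick way to confirm this (just a sketch; the oltp.f path below is an assumption,
adjust it to wherever the workload file lives on your system) is to trace the open
calls while the workload runs and check for the O_DIRECT flag:

  # Count opens that request O_DIRECT; 0 means the files go through the page cache
  strace -f -e trace=open,openat filebench -f oltp.f 2>&1 | grep -c O_DIRECT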
If you suspect that disk performance is the issue, please run a benchmark
that bypasses the page cache using O_DIRECT.
The fio setting is direct=1.
Here is an example fio job for 70% read/30% write 4KB random I/O:
[global]
filename=/path/to/device
runtime=120
ioengine=libaio
direct=1
ramp_time=10 # start measuring after warm-up time

[read]
readwrite=randrw
rwmixread=70
rwmixwrite=30
iodepth=64
blocksize=4k
(Based on
https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html)
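If it helps, a minimal way to run it (assuming the job above is saved as oltp.fio;
the filename is just an example):

  fio oltp.fio

Then compare the read/write IOPS reported in the output against what you measure
on the host outside the VM.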
Stefan