On Thu, Sep 9, 2021 at 4:12 PM Mathieu Valois <mvalois(a)teicee.com> wrote:
You can find attached the benchmarks on the host and guest. I find the
differences not that big, though...
On the host, is fio using the gluster mount
(/rhev/data-center/mnt/glusterSD/server:_path/...)
or writing directly into the same filesystem used by gluster
(/bricks/brick1/...)?
It would help if you shared the output of lsblk and the command line used to
run fio on the host.
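For example, something like this on the host would be enough (the output file
names here are only placeholders):

lsblk > host-lsblk.out
fio --filename=/path/to/fio.data --output=host.out bench.fio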
Looking first at the host results:
seq-write: (groupid=0, jobs=4): err= 0: pid=294433: Thu Sep 9 14:30:14 2021
write: IOPS=151, BW=153MiB/s (160MB/s)(4628MiB/30280msec); 0 zone resets
I guess the underlying storage is a hard disk - 150 MiB/s is not bad, but very
low compared with a fast SSD.
seq-read: (groupid=1, jobs=4): err= 0: pid=294778: Thu Sep 9 14:30:14 2021
read: IOPS=7084, BW=7086MiB/s (7430MB/s)(208GiB/30016msec)
You have crazy caching here (is direct I/O being ignored?) - 7 GiB/s reads?
rand-write-qd32: (groupid=2, jobs=4): err= 0: pid=295141: Thu Sep 9 14:30:14 2021
write: IOPS=228, BW=928KiB/s (951kB/s)(28.1MiB/30971msec); 0 zone resets
Very low, probably limited by the hard disks?
rand-read-qd32: (groupid=3, jobs=4): err= 0: pid=296094: Thu Sep 9 14:30:14 2021
read: IOPS=552k, BW=2157MiB/s (2262MB/s)(63.2GiB/30001msec)
Very high - this is what you get from a fast consumer SSD.
rand-write-qd1: (groupid=4, jobs=1): err= 0: pid=296386: Thu Sep 9 14:30:14 2021
write: IOPS=55, BW=223KiB/s (229kB/s)(6696KiB/30002msec); 0 zone resets
Very low.
rand-read-qd1: (groupid=5, jobs=1): err= 0: pid=296633: Thu Sep 9 14:30:14 2021
read: IOPS=39.4k, BW=154MiB/s (161MB/s)(4617MiB/30001msec)
Same caching.
If we compare host and guest:
$ grep -B1 IOPS= *.out
guest.out-seq-write: (groupid=0, jobs=4): err= 0: pid=46235: Thu Sep 9 14:18:05 2021
guest.out: write: IOPS=57, BW=58.8MiB/s (61.6MB/s)(1792MiB/30492msec); 0 zone resets
~38% of the host throughput (58.8 vs. 153 MiB/s)
guest.out-rand-write-qd32: (groupid=2, jobs=4): err= 0: pid=46330: Thu Sep 9 14:18:05 2021
guest.out: write: IOPS=299, BW=1215KiB/s (1244kB/s)(35.8MiB/30212msec); 0 zone resets
Better than the host (299 vs. 228 IOPS)
guest.out-rand-write-qd1: (groupid=4, jobs=1): err= 0: pid=46552: Thu Sep 9 14:18:05 2021
guest.out: write: IOPS=213, BW=854KiB/s (875kB/s)(25.0MiB/30003msec); 0 zone resets
Better than the host (213 vs. 55 IOPS)
So you have very fast reads (sequential and random), but very slow sequential
and random writes.
It would also be interesting to test fsync - this benchmark does not do any
fsync, but your slow yum/rpm upgrades likely do one or more fsyncs per package
upgrade.
There is an example sync test script here:
https://www.ibm.com/cloud/blog/using-fio-to-tell-whether-your-storage-is-...
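A minimal fio job along those lines could look like this (the bs/size/runtime
values are my own guesses, not taken from the linked script):

fio --name=fsync-test --filename=/path/to/fio.data --rw=write --bs=4k \
    --size=256m --fdatasync=1 --runtime=30 --time_based --output=fsync.out

With --fdatasync=1 fio syncs after every write, so the fsync/fdatasync latency
percentiles in the output are the interesting numbers for rpm-style workloads.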
On 09/09/2021 at 13:40, Nir Soffer wrote:
There are a few issues with this test:
- you don't use oflag=direct or conv=fsync, so this may be testing copying data
to the host page cache instead of writing data to storage (see the example dd
invocation after this list)
- This tests only sequential write, which is the best case for any kind of
storage
- Using synchronous I/O - every write waits for the previous write to
complete
- Using a single process
- 2g is too small, so you may be testing your cache performance
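For comparison, a dd invocation that at least bypasses the page cache could
look like this (the path and size are only examples):

dd if=/dev/zero of=/path/to/test.img bs=1M count=8192 oflag=direct conv=fsync status=progress

Even then dd is still a single sequential writer, so fio below is the better
tool for this.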
Try to test using fio - attached is an fio script (bench.fio) that tests
sequential and random I/O with various queue depths.
You can use it like this:
fio --filename=/path/to/fio.data --output=test.out bench.fio
Test both on the host and in the VM. This will give you more detailed
results that may help evaluate the issue, and it may help the Gluster
folks advise on tuning your storage.
Nir
--
Mathieu Valois
Caen office: Quartier Kœnig - 153, rue Géraldine MOCK - 14760 Bretteville-sur-Odon
Vitré office: Zone de la baratière - 12, route de Domalain - 35500 Vitré
02 72 34 13 20 | www.teicee.com