For that workload (using that particular test with dsync), that is what I saw on mounted Gluster, given the 7200 RPM drives and a simple 1G network.
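For reference, I take "that particular test with dsync" to be a single large
sequential write, roughly along these lines (a sketch only; the block size,
count, and mount path here are assumptions on my part, not necessarily what
you ran):

dd if=/dev/zero of=/mnt/glustervol/junk bs=1G count=1 oflag=dsync
rm /mnt/glustervol/junk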
Next week I'll make a point of running your test with bonded ethernet to
see if that improves things.
Note: our testing uses the following:
for size in 50M 10M 1M
do
    echo 'starting'
    pwd
    echo "$size"
    dd if=/dev/zero of=./junk bs=$size count=100 oflag=direct
    rm ./junk
done
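Note that oflag=direct bypasses the page cache, so these numbers reflect the
storage and network path rather than cached writes; dropping the flag, e.g.

dd if=/dev/zero of=./junk bs=50M count=100

would typically report a much higher figure dominated by the page cache, which
is not useful for comparing Gluster back ends.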
So we are doing multiple copies of much smaller files, and this is what I see on that kit:
SIZE = 50M
1.01 0.84 0.77 2/388 28977
100+0 records in
100+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 70.262 s, 74.6 MB/s
SIZE = 10M
3.88 1.79 1.11 2/400 29336
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 15.8082 s, 66.3 MB/s
SIZE = 1M
3.93 1.95 1.18 1/394 29616
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.67975 s, 62.4 MB/s
With teamd (bonding) I would expect an approximately 40-50% speed increase
(which is why I didn't catch my error earlier, as I am used to seeing values
in the 80s).
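For anyone following along, a teamd-based bond can be set up roughly like this
with NetworkManager (a sketch only; the interface names and the LACP runner
are assumptions, and the switch ports would need a matching aggregation
configuration):

nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "lacp"}}'
nmcli con add type team-slave con-name team0-port1 ifname eth0 master team0
nmcli con add type team-slave con-name team0-port2 ifname eth1 master team0
nmcli con up team0

Back-of-the-envelope, 62-75 MB/s plus 40-50% works out to roughly 87-112 MB/s.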
On 11/26/2020 11:11 PM, Harry O wrote:
> So my gluster performance results is expected?