On Tue, 2023-04-18 at 19:48 +0000, Jacob M. Nielsen wrote:
Hi List
I'm having trouble finding information on how to performance-tune the IO of my VM.
We see poor performance, at most 30 MB/s, from the VM to the storage (POSIX-mounted
file system).
We have tested the filesystems locally from the oVirt host, and we can easily get
500+ MB/s.
I found a parameter:
<io>
  <threads>2</threads>
</io>
My understanding is that there is generally not much value in setting it above 1 unless
you have multiple virtio-scsi controllers, in which case you should add 1 IO thread per
controller and affinitize each SCSI controller to one of them:
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver iothread='4'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</controller>
https://libvirt.org/formatdomain.html#controllers
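Note that for iothread='4' above to be valid, the domain also has to allocate at least
that many IO threads at the domain level; a minimal sketch (the count of 4 is just an
illustration):

<domain type='kvm'>
  ...
  <iothreads>4</iothreads>
  ...
</domain>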
If you have enabled multi-queue for the block devices, I don't know whether that enables
them to use more than 1 IO thread, i.e. perhaps 1 per queue.
The docs for the iothread sub-element state:
"""
iothread
Supported for controller type scsi using model virtio-scsi for address types pci and ccw
since 1.3.5 (QEMU 2.4). The optional iothread attribute assigns the controller to an
IOThread as defined by the range for the domain iothreads (See IOThreads Allocation).
Each SCSI disk assigned to use the specified controller will utilize the same IOThread.
If a specific IOThread is desired for a specific SCSI disk, then multiple controllers
must be defined, each having a specific iothread value. The iothread value must be
within the range 1 to the domain iothreads value.
"""
So I would assume multi-queue will not help.
We have had enabling iothreads in OpenStack on our backlog for some time and
unfortunately we have not done it yet, so I can't really provide any feedback on our
experience with it, but ^ is what I figured out was the intended usage when I briefly
looked into this a year or two ago.
Adding an iothread alone won't help, but adding iothreads, mapping controllers to IO
threads, and spreading your disks across those controllers enables scaling, as in the
sketch below.
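A minimal sketch of the layout I mean (the thread counts, controller indexes and file
paths are just placeholders, not a recommendation): the domain allocates 2 IO threads,
each virtio-scsi controller is pinned to one of them, and the disks are split across the
controllers via their <address> element.

<iothreads>2</iothreads>
...
<!-- one virtio-scsi controller per IO thread -->
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver iothread='1'/>
</controller>
<controller type='scsi' index='1' model='virtio-scsi'>
  <driver iothread='2'/>
</controller>
<!-- spread the disks across the controllers -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/disk1.qcow2'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/disk2.qcow2'/>
  <target dev='sdb' bus='scsi'/>
  <address type='drive' controller='1' bus='0' target='0' unit='0'/>
</disk>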
If you are using virtio-blk rather than virtio-scsi, I think you get most of the benefit
from just adding a single IO thread to move the IO off the main emulator thread.
When adding a virtio block device:
https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' queues='4' queue_size='256'/>
  <source file='/var/lib/libvirt/images/domain.qcow'/>
  <backingStore type='file'>
    <format type='qcow2'/>
    <source file='/var/lib/libvirt/images/snapshot.qcow'/>
    <backingStore type='block'>
      <format type='raw'/>
      <source dev='/dev/mapper/base'/>
      <backingStore/>
    </backingStore>
  </backingStore>
  <target dev='vdd' bus='virtio'/>
</disk>
You can improve the performance by tuning the queues and the queue size, or by changing
the cache modes:
"""
The optional cache attribute controls the cache mechanism; possible values are "default",
"none", "writethrough", "writeback", "directsync" (like "writethrough", but it bypasses
the host page cache) and "unsafe" (host may cache all disk io, and sync requests from
guest are ignored). Since 0.6.0, "directsync" since 0.9.5, "unsafe" since 0.9.7
"""
And you can also do per-disk iothread affinity:
"""
The optional iothread attribute assigns the disk to an IOThread as defined by the range
for the domain iothreads value. (See IOThreads Allocation). Multiple disks may be
assigned to the same IOThread and are numbered from 1 to the domain iothreads value.
Available for a disk device target configured to use "virtio" bus and "pci" or "ccw"
address types. Since 1.2.8 (QEMU 2.1)
"""
But I don't know whether QEMU or libvirt will round-robin the disks or otherwise try to
balance the disks across the IO threads automatically.
I think the same restriction of 1 thread per virtio-blk device applies, as with the
controller, so having more IO threads than disks will not help.
I can't find a recommended configuration regarding this parameter.
My question is which value I should use for this when I have 60 high-performance disks
mapped to my VM.
Does anybody have answers?
I think if you have a VM with 60 disks, using iothreads is likely very important, but I
suspect 60 IO threads would be overkill even if that gave the best throughput. So there
is probably a sweet spot between 1 and 60, but I don't know where that lies.
I normally work on OpenStack, so oVirt is outside my normal wheelhouse; as such, an oVirt
expert might be able to give better guidance on what the appropriate value would be in an
oVirt context.
I would start with one, then go to a nice power of 2 like 8, and measure.
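i.e., assuming the <io><threads> element you found is what controls the domain iothreads
allocation in oVirt (I have not verified that mapping myself), something like:

<io>
  <threads>8</threads>
</io>

and then compare the measured throughput against 1, 2, 4, and so on.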
Ultimately the optimal value will be a trade-off between the CPU overhead, the disk
configuration (virtio-blk vs virtio-scsi, local vs remote storage, cache modes) and the
hardware you have, combined with the workload, so without measuring there is no way to
really know what the optimal amount is.
tnx