On Wed, Aug 26, 2020 at 8:19 PM info--- via Users <users(a)ovirt.org> wrote:
I enabled libgfapi and powered off / on the VM.
- engine-config --all
- LibgfApiSupported: true version: 4.3
How can I see that this is active on the VM? The disk looks the same as
before.
- virsh dumpxml 15
<disk type='file' device='disk' snapshot='no'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <source file='/rhev/data-center/mnt/glusterSD/10.9.9.101:_vmstore/f2c621de-42bf-4dbf-920c-adf4506b786d/images/1e231e3e-d98c-491a-9236-907814d4837/c755aaa3-7d3d-4c0d-8184-c6aae37229ba'>
    <seclabel model='dac' relabel='no'/>
  </source>
  <backingStore/>
  <target dev='sdc' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
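For comparison: if libgfapi were actually in use, that disk would appear in the
domain XML as a network disk with a gluster protocol source instead of a file
path under the FUSE mount. A rough sketch of what to look for (host name, port
and volume/image path below are illustrative placeholders, not your values):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <source protocol='gluster' name='vmstore/path/to/image'>
    <host name='10.9.9.101' port='24007'/>
  </source>
  <target dev='sdc' bus='scsi'/>
</disk>

So as long as the source is a file under /rhev/data-center/mnt/glusterSD/...,
as in your dump above, the VM is still going through the FUSE mount. You can
also check the running qemu command line: with libgfapi the drive is passed as
a gluster:// URL rather than a /rhev/... path.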
The latest status I remember is the one I reported around 4.3 RC2 time last
year:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHWQA72JZVG...
I think nothing has changed in the meantime, neither for 4.3 nor for 4.4, but
I haven't tested it directly.
Someone reported 4x-5x improvements using libgfapi, but initially there were
some blockers related to snapshots and HA, if I remember correctly, and so it
was disabled.
My impression is that the earlier problems in the upstream qemu and libvirt
pieces have been solved for many months now, but the developers haven't taken
up enabling it again, either because the improvements seen in their tests
weren't that big, or for lack of time to dedicate to the fix, and/or other
priorities.
Some Bugzilla entries referenced on the topic, to dig into if interested:
https://bugzilla.redhat.com/show_bug.cgi?id=1484227
https://bugzilla.redhat.com/show_bug.cgi?id=1465810
https://bugzilla.redhat.com/show_bug.cgi?id=1633642
HIH in revamping this interesting topic.
Gianluca