Looks like you’ve got a POSIX or NFS mount there? Is your Gluster storage domain of type
GlusterFS? And make sure you restarted ovirt-engine after enabling LibgfApiSupported,
before powering the VM off and on again.
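For reference, the enable-and-restart sequence on the engine host is something like this
(adjust --cver to your cluster compatibility level):
  engine-config -s LibgfApiSupported=true --cver=4.3
  systemctl restart ovirt-engine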
An active libgfapi mount looks like:
<disk type='network' device='disk' snapshot='no'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
  <source protocol='gluster' name='gv1/a10b0cc4-b6ec-4b26-bef0-316e0e7bdcac/images/7e8b2be0-8206-4090-8fa4-c87d81fb5ea0/6b8c2ff9-15f3-47a9-af5d-bb7dea31b59b'>
    <host name='gstor0' port='24007'/>
  </source>
</disk>
On Aug 26, 2020, at 1:12 PM, info--- via Users
<users(a)ovirt.org> wrote:
I enabled libgfapi and powered the VM off and on.
- engine-config --all
- LibgfApiSupported: true version: 4.3
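(A narrower check than --all: engine-config -g LibgfApiSupported prints just that key for
each cluster version.)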
How can I see that this is active on the VM? The disk looks the same as before.
- virsh dumpxml 15
<disk type='file' device='disk' snapshot='no'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <source file='/rhev/data-center/mnt/glusterSD/10.9.9.101:_vmstore/f2c621de-42bf-4dbf-920c-adf4506b786d/images/1e231e3e-d98c-491a-9236-907814d4837/c755aaa3-7d3d-4c0d-8184-c6aae37229ba'>
    <seclabel model='dac' relabel='no'/>
  </source>
  <backingStore/>
  <target dev='sdc' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
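A quick way to tell the two modes apart from the host (using a read-only libvirt
connection, so no credentials are needed; VM id 15 as above) is something like:
  virsh -r dumpxml 15 | grep -E "disk type|protocol"
A libgfapi-attached disk shows type='network' with protocol='gluster', while a
FUSE-mounted disk shows type='file' with a path under /rhev/data-center/mnt/glusterSD,
as in the output above.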
Here is the volume setup (gluster volume info output):
Volume Name: vmstore
Type: Distributed-Replicate
Volume ID: 195e2a05-9667-4b8b-b0b7-82294631de50
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.9.9.101:/gluster_bricks/vmstore/vmstore
Brick2: 10.9.9.102:/gluster_bricks/vmstore/vmstore
Brick3: 10.9.9.103:/gluster_bricks/vmstore/vmstore
Brick4: 10.9.9.101:/gluster_bricks/S4CYNF0M219849L/S4CYNF0M219849L
Brick5: 10.9.9.102:/gluster_bricks/S4CYNF0M219836L/S4CYNF0M219836L
Brick6: 10.9.9.103:/gluster_bricks/S4CYNF0M219801Y/S4CYNF0M219801Y
Options Reconfigured:
performance.write-behind-window-size: 64MB
performance.flush-behind: on
performance.stat-prefetch: on
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.strict-o-direct: on
performance.quick-read: off
performance.read-ahead: on
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
network.ping-timeout: 30
storage.owner-uid: 36
storage.owner-gid: 36
cluster.granular-entry-heal: enable
Thank you for your support.