On Fri, Jan 18, 2019 at 11:16 AM Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
On Wed, Jan 16, 2019 at 11:27 PM Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
> I just installed a single-host HCI setup with Gluster, with only the engine VM running.
> Is the situation below expected?
>
> # virsh -r list
>  Id    Name                           State
> ----------------------------------------------------
>  2     HostedEngine                   running
>
> and
> # virsh -r dumpxml 2
> . . .
>     <disk type='file' device='disk' snapshot='no'>
>       <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
>       <source file='/var/run/vdsm/storage/e4eb6832-e0f6-40ee-902f-f301e5a3a643/fc34d770-9318-4539-9233-bfb1c5d68d14/b151557e-f1a2-45cb-b5c9-12c1f470467e'>
>         <seclabel model='dac' relabel='no'/>
>       </source>
>       <backingStore/>
>       <target dev='vda' bus='virtio'/>
>       <serial>fc34d770-9318-4539-9233-bfb1c5d68d14</serial>
>       <alias name='ua-fc34d770-9318-4539-9233-bfb1c5d68d14'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
>     </disk>
> . . .
>
> where
> # ll /var/run/vdsm/storage/e4eb6832-e0f6-40ee-902f-f301e5a3a643/
> total 24
> lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44 39df7b45-4932-4bfe-b69e-4fb2f8872f4f -> /rhev/data-center/mnt/glusterSD/10.10.10.216:_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/39df7b45-4932-4bfe-b69e-4fb2f8872f4f
> lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44 5ba6cd9e-b78d-4de4-9b7f-9688365128bf -> /rhev/data-center/mnt/glusterSD/10.10.10.216:_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/5ba6cd9e-b78d-4de4-9b7f-9688365128bf
> lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 15:56 8b8e41e0-a875-4204-8ab1-c10214a49f5c -> /rhev/data-center/mnt/glusterSD/10.10.10.216:_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/8b8e41e0-a875-4204-8ab1-c10214a49f5c
> lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 15:56 c21a62ba-73d2-4914-940f-cee6a67a1b08 -> /rhev/data-center/mnt/glusterSD/10.10.10.216:_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/c21a62ba-73d2-4914-940f-cee6a67a1b08
> lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44 fc34d770-9318-4539-9233-bfb1c5d68d14 -> /rhev/data-center/mnt/glusterSD/10.10.10.216:_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/fc34d770-9318-4539-9233-bfb1c5d68d14
> lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44 fd73354d-699b-478e-893c-e2a0bd1e6cbb -> /rhev/data-center/mnt/glusterSD/10.10.10.216:_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/fd73354d-699b-478e-893c-e2a0bd1e6cbb
>
> So the hosted engine is not using libgfapi?
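> (Just for comparison, and only as an illustrative sketch of mine that reuses the
> volume and image IDs above rather than output from this host: with libgfapi in
> use I would expect the disk to appear as a network disk, roughly like this:)
>
>     <!-- illustrative sketch, not actual dumpxml output -->
>     <disk type='network' device='disk' snapshot='no'>
>       <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
>       <source protocol='gluster' name='engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/fc34d770-9318-4539-9233-bfb1c5d68d14/b151557e-f1a2-45cb-b5c9-12c1f470467e'>
>         <host name='10.10.10.216' port='24007'/>
>       </source>
>       <target dev='vda' bus='virtio'/>
>     </disk>
>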
> Also, on the hosted engine:
>
> [root@hciengine ~]# engine-config -g LibgfApiSupported
> LibgfApiSupported: false version: 4.1
> LibgfApiSupported: false version: 4.2
> LibgfApiSupported: false version: 4.3
> [root@hciengine ~]#
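>
> (If I wanted to try enabling it, I believe the usual way would be something like
> the following on the engine VM, though I have not run it here and, given the HA
> bug mentioned below, I am not sure it is advisable for hosted engine:)
>
> # engine-config -s LibgfApiSupported=true --cver=4.2   # untested sketch; adjust --cver to the cluster level
> # systemctl restart ovirt-engine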
>
> So if I import a CentOS 7 Atomic Host image from the Glance repository as a
> template and create a new VM from it, when running this VM I get:
>
> # virsh -r dumpxml 3
> . . .
>     <disk type='file' device='disk' snapshot='no'>
>       <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native' iothread='1'/>
>       <source file='/rhev/data-center/mnt/glusterSD/10.10.10.216:_data/601d725a-1622-4dc8-a24d-2dba72ddf6ae/images/e4f92226-0f56-4822-a622-d1ebff41df9f/c6b2e076-1519-433e-9b37-2005c9ce6d2e'>
>         <seclabel model='dac' relabel='no'/>
>       </source>
>       <backingStore/>
>       <target dev='vda' bus='virtio'/>
>       <serial>e4f92226-0f56-4822-a622-d1ebff41df9f</serial>
>       <boot order='1'/>
>       <alias name='ua-e4f92226-0f56-4822-a622-d1ebff41df9f'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
>     </disk>
> . . .
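>
> (A quicker check, assuming the domain id 3 from virsh -r list, would be
> something like the command below; a Source path under
> /rhev/data-center/mnt/glusterSD/ means the disk goes through the FUSE mount
> rather than libgfapi:)
>
> # virsh -r domblklist 3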
>
> I remember there was an "old" bug that caused this default of not enabling libgfapi.
> Does this mean it has not been solved yet?
> If I remember correctly, the Bugzilla entry was this one, related to HA:
> https://bugzilla.redhat.com/show_bug.cgi?id=1484227
> which is still in NEW status... after almost 2 years.
>
> Is this the only one open?
>
Thanks Gianluca for the deep testing and analysis! Sahina, Simone, can you please check this?

Yes, AFAIK we are still not ready to support libgfapi.
>
>
> Thanks,
> Gianluca
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHWQA72JZVG...
>
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>