On Fri, Feb 10, 2017 at 11:04 PM, Doug Ingham <dougti@gmail.com> wrote:
Hey Guys,
I currently use dedicated interfaces & hostnames to separate gluster traffic on my "hyperconverged" hosts.

For example, the first node uses "v0" for its management interface & "s0" for its gluster interface.
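To illustrate the layout I mean, something along these lines (the addresses below are made-up examples, not my real ones):

```
# /etc/hosts - example only
192.168.1.10   v0    # management/ovirtmgmt interface
10.0.0.10      s0    # dedicated gluster interface
```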

With this setup, I notice that everything under the "Volumes" tab works; however, I'm unable to import storage domains with "Use managed gluster", and the hosts' bricks aren't listed under the "Hosts" tab.

My engine log is also full of entries such as this...

2017-02-10 03:25:07,155-03 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [78aef637] Could not add brick 's0:/gluster/data-novo/brick' to volume 'bded65c7-e79e-4bc9-9630-36a69ad2e684' - server uuid 'a9d062c6-7d01-404f-ab0c-3ed468e60c91' not found in cluster '00000002-0002-0002-0002-00000000017a'
2017-02-10 03:25:09,157-03 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesHealInfoReturn] (DefaultQuartzScheduler10) [6828f9a7] Could not fetch heal info for brick 's0:/gluster/data-novo/brick' - server uuid 'a9d062c6-7d01-404f-ab0c-3ed468e60c91' not found
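For reference, I've been cross-checking the UUID from those warnings against what gluster itself reports. A minimal sketch of that check (the sample XML and any file naming are made-up stand-ins; the real input would be the output of "gluster peer status --xml"):

```python
# Sketch: check whether the server UUID from the engine warning appears
# among the peer UUIDs that gluster reports.
# SAMPLE is a made-up stand-in for real "gluster peer status --xml" output.
import xml.etree.ElementTree as ET

SAMPLE = """<cliOutput>
  <peerStatus>
    <peer>
      <uuid>a9d062c6-7d01-404f-ab0c-3ed468e60c91</uuid>
      <hostname>s0</hostname>
    </peer>
  </peerStatus>
</cliOutput>"""

def peer_uuids(xml_text):
    """Return the set of <uuid> values found in gluster's XML output."""
    root = ET.fromstring(xml_text)
    return {el.text.strip() for el in root.iter("uuid")}

# UUID taken from the engine warning above
missing = "a9d062c6-7d01-404f-ab0c-3ed468e60c91"
print(missing in peer_uuids(SAMPLE))
```

Note that "gluster peer status" run on a node lists only its peers, not the node's own UUID, so the full picture needs the output from more than one host.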

I'm wondering whether the different hostnames used to configure each of the interfaces are causing the confusion?
So...is there something wrong, or is this still an unsupported configuration?

This is a supported config. Have all your gluster hosts been imported to the cluster? Have you re-installed any of the hosts?

Could you share the output of "gluster volume info --xml" and "gluster peer status --xml"?

Cheers,
--
Doug

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users