Hi Jeremey,
I think the problem is that you have not created a "gluster logical network"
from the oVirt manager. Because the management network is the only one you
have, the bricks are being listed as mapped to that network.
Could you please confirm whether you have a Gluster logical network created
that maps to the 10G NIC? If not, please create one and check; that should
solve the issue.
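As a quick sanity check from one of the hosts (a rough sketch; the thorst/medusast
names and the 172.16.101.x subnet are taken from your output further down in this
thread):

# Confirm the storage hostnames resolve to addresses on the 10G subnet
getent hosts thorst.penguinpages.local medusast.penguinpages.local

# Confirm a 172.16.101.x address is actually configured on the hosts
ip -4 addr show | grep 172.16.101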
On Thu, Sep 24, 2020 at 3:24 PM Ritesh Chikatwar <rchikatw(a)redhat.com>
wrote:
Jeremey,
This looks like a bug.
Are you using an IPv4 or an IPv6 network?
Ritesh
On Thu, Sep 24, 2020 at 12:14 PM Gobinda Das <godas(a)redhat.com> wrote:
> But I think this only syncs the gluster brick status, not the entire object.
> This looks like a bug.
> @Ritesh Chikatwar <rchikatw(a)redhat.com> Could you please check what data
> we are getting from vdsm during the gluster sync job run? Are we saving the
> exact data or customizing anything?
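> As far as I know the sync job's data ultimately comes from the gluster CLI's
> XML output on the host, so comparing that with what the engine shows should
> narrow down where the hostname gets swapped. Roughly, on any host in the
> cluster (exact XML fields may vary by gluster version):
>
> # Brick list as gluster itself reports it (this is what vdsm parses)
> gluster volume info data --xml | grep -i brick
>
> # Per-brick status detail, including the advertised hostname
> gluster volume status data detail --xml | grep -i hostname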
>
> On Thu, Sep 24, 2020 at 11:01 AM Gobinda Das <godas(a)redhat.com> wrote:
>
>> We do have a gluster volume UI sync issue, and it is fixed in ovirt-4.4.2.
>> BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1860775
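>> To see whether your engine already carries that fix, a quick check of the
>> installed package version on the engine machine is enough:
>>
>> # On the oVirt engine host; 4.4.2 or later should include the fix from that BZ
>> rpm -q ovirt-engine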
>>
>>
>> On Wed, Sep 23, 2020 at 8:50 PM Jeremey Wise <jeremey.wise(a)gmail.com>
>> wrote:
>>
>>>
>>> I just noticed that when the HCI setup built the gluster engine / data /
>>> vmstore volumes... it correctly used the definition of the 10Gb "back end"
>>> interfaces / hosts.
>>>
>>> But oVirt Engine is NOT referencing this.
>>> It lists the bricks on the 1Gb "management / host" interfaces. Is this a
>>> GUI issue? I doubt it; how do I correct it?
>>> ### Data Volume Example
>>> Name: data
>>> Volume ID: 0ae7b487-8b87-4192-bd30-621d445902fe
>>> Volume Type: Replicate
>>> Replica Count: 3
>>> Number of Bricks: 3
>>> Transport Types: TCP
>>> Maximum no of snapshots: 256
>>> Capacity: 999.51 GiB total, 269.02 GiB used, 730.49 GiB free, 297.91 GiB
>>> Guaranteed free, 78 Deduplication/Compression savings (%)
>>>
>>>
>>> medusa.penguinpages.local   medusa.penguinpages.local:/gluster_bricks/data/data   25%   OK
>>> odin.penguinpages.local     odin.penguinpages.local:/gluster_bricks/data/data     25%   OK
>>> thor.penguinpages.local     thor.penguinpages.local:/gluster_bricks/data/data     25%   OK
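>>>
>>> # For comparison, the same bricks as the gluster CLI reports them on a host
>>> # (the hostnames shown are whatever gluster was given at volume-create time):
>>> [root@odin ~]# gluster volume status data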
>>>
>>>
>>> # I have a storage back end on 172.16.101.x, which is a dedicated 10Gb
>>> network for replication. The peers reflect this:
>>> [root@odin c4918f28-00ce-49f9-91c8-224796a158b9]# gluster peer status
>>> Number of Peers: 2
>>>
>>> Hostname: thorst.penguinpages.local
>>> Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: medusast.penguinpages.local
>>> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
>>> State: Peer in Cluster (Connected)
>>> [root@odin c4918f28-00ce-49f9-91c8-224796a158b9]#
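>>>
>>> # glusterd's on-disk record of the peers, which lists every hostname it
>>> # knows for each one (each file has uuid=, state=, and hostnameN= entries):
>>> [root@odin ~]# cat /var/lib/glusterd/peers/*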
>>>
>>>
>>>
>>> --
>>> penguinpages <jeremey.wise(a)gmail.com>
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ULE66KK5UEG...
>>>
>>
>>
>> --
>>
>>
>> Thanks,
>> Gobinda
>>
>
>
> --
>
>
> Thanks,
> Gobinda
>