If you want the Gluster traffic to go over 172.16.20.X/24, you will have to change the
bricks in the volume to the form 172.16.20.X:/gluster_bricks/vmstore/vmstore
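A quick way to confirm how the bricks are currently defined (assuming the volume is named vmstore, as in the engine log below):

gluster volume info vmstore | grep -i brick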
The simplest way is to:

gluster volume remove-brick VOLUMENAME replica 2 node3.mydomain.lab:/gluster_bricks/data/data force

# node3
umount /gluster_bricks/data
mkfs.xfs -f -i size=512 /dev/GLUSTER_VG/GLUSTER_LV
mount /gluster_bricks/data
mkdir /gluster_bricks/data/data
chown 36:36 -R /gluster_bricks/data/data
restorecon -RFvv /gluster_bricks/data

# If you have entries in /etc/hosts or in the DNS, you can swap the IP with it
gluster volume add-brick VOLUMENAME replica 3 172.16.20.X:/gluster_bricks/data/data
gluster volume heal VOLUMENAME full
# Wait until the volume heals, then repeat with the other 2 bricks.
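To check on the heal before moving to the next brick (same VOLUMENAME placeholder as above), this should work; when every brick reports "Number of entries: 0", the heal is complete:

gluster volume heal VOLUMENAME info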
Of course, if it's a brand new setup, it's easier to wipe the disks and then
reinstall the nodes to start fresh.
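For example, something along these lines on each node would clear the old Gluster storage before reinstalling (a sketch only - /dev/sdb is a hypothetical device name and GLUSTER_VG is the same placeholder as above; adjust to your layout):

# hypothetical: /dev/sdb holds GLUSTER_VG; run on each node before reinstall
umount /gluster_bricks/engine /gluster_bricks/data /gluster_bricks/vmstore
vgremove -y GLUSTER_VG
wipefs -a /dev/sdb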
Best Regards,
Strahil Nikolov
On Fri, Aug 5, 2022 at 18:56, r greg <itforums51(a)gmail.com> wrote:

hi all,
*** new to oVirt and still learning ***
Sorry for the long thread...
I have a 3x node hyperconverged setup on v4.5.1.
4x 1G NICs:
NIC0
  - ovirtmgmt (Hosted-Engine VM)
  - vmnetwork vlan102 (all VMs are placed on this network)
NIC1
  - migration
NIC2 - NIC3 > bond0
  - storage
Logical Networks:
ovirtmgmt - role: VM network | management | display | default route
vmnetwork - role: VM network
migrate - role: migration network
storage - role: gluster network
During deployment I overlooked a setting, and on node2 the host was deployed with Name:
node2.mydomain.lab and Hostname/IP: 172.16.20.X/24 (WebUI > Compute > Hosts).
I suspect that because of this I see the following entries in /var/log/ovirt-engine/engine.log
(only for node2):
2022-08-04 12:00:15,460Z WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-16) [] Could not
associate brick 'node2.mydomain.lab:/gluster_bricks/vmstore/vmstore' of volume
'1ca6a01a-9230-4bb1-844e-8064f3eadb53' with correct network as no gluster network
found in cluster '1770ade4-0f6f-11ed-b8f6-00163e6faae8'
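(To see which names the bricks are registered under: gluster volume info data | grep Brick - in the status output further down they all show up under the FQDNs rather than the 172.16.20.x addresses.)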
Is this something I need to be worried about or correct somehow?
From node1
gluster> peer status
Number of Peers: 2
Hostname: node2.mydomain.lab
Uuid: a4468bb0-a3b3-42bc-9070-769da5a13427
State: Peer in Cluster (Connected)
Other names:
172.16.20.X
Hostname: node3.mydomain.lab
Uuid: 2b1273a4-667e-4925-af5e-00904988595a
State: Peer in Cluster (Connected)
Other names:
172.16.20.Z
gluster> volume status (same output, Online: Y, for the vmstore and engine volumes)
Status of volume: data
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1.mydomain.lab:/gluster_bricks/data/data    58734   0   Y   31586
Brick node2.mydomain.lab:/gluster_bricks/data/data    55148   0   Y   4317
Brick node3.mydomain.lab:/gluster_bricks/data/data    57021   0   Y   5242
Self-heal Daemon on localhost N/A N/A Y 63170
Self-heal Daemon on node2.mydomain.lab N/A N/A Y 4365
Self-heal Daemon on node3.mydomain.lab N/A N/A Y 5385