Sahina,

Thank you very much for the explanation.  I definitely do want Gluster traffic on my 10gig network but am being extra cautious because there are live VMs on the volumes.  

This is my current configuration:

host0.blah.example.com: 10.11.0.220 (gluster interface: 10.12.0.220)
host1.blah.example.com: 10.11.0.221 (gluster interface: 10.12.0.221)
host2.blah.example.com: 10.11.0.222 (gluster interface: 10.12.0.222)

The 10.11.0.0 subnet is the management network and the hostnames have matching DNS records. There are currently no DNS entries pointing to the gluster IPs. My gluster configs all reference the above hostnames, as do the oVirt storage domains (for the backupvol flag etc).

I do have a gluster network set up on the 10.12.0.0 subnet, marked for gluster and migration traffic and attached to each host (but obviously it's not being fully utilized, since gluster is configured with the management-network hostnames/IPs).

So are you saying that I should leave the server hostnames/IPs as they are, but also create DNS records (for example gluster0.blah.example.com, with the appropriate gluster-subnet IPs), and then reset the bricks one by one to that hostname instead? E.g. gluster volume reset-brick VOLNAME host0.blah.example.com:BRICKPATH gluster0.blah.example.com:BRICKPATH commit force. Is it actually necessary to use hostnames here, or can I just use the IPs?
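For concreteness, this is the kind of sequence I'm imagining (the glusterN hostnames are hypothetical, and BRICKPATH is a placeholder for the actual brick path):

```shell
# Hypothetical DNS records on the gluster subnet:
#   gluster0.blah.example.com -> 10.12.0.220
#   gluster1.blah.example.com -> 10.12.0.221
#   gluster2.blah.example.com -> 10.12.0.222

# Then, one brick at a time: a reset-brick start, followed by a
# commit force naming the same brick under the new hostname.
gluster volume reset-brick VOLNAME host0.blah.example.com:BRICKPATH start
gluster volume reset-brick VOLNAME host0.blah.example.com:BRICKPATH \
  gluster0.blah.example.com:BRICKPATH commit force
```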

This *mostly* makes sense to me, but what then happens to the references to the old hostname in the oVirt storage domain configurations?  Do they get updated somehow, or do they have to be modified/recreated (or does that even matter in this case)?

Thanks again for the info.

On Thu, Oct 17, 2019 at 1:14 PM Sahina Bose <sabose@redhat.com> wrote:


On Thu, Oct 17, 2019 at 7:22 PM Jayme <jaymef@gmail.com> wrote:
Thanks for the info, but where does it get the new hostname from?  Do I need to change the actual server hostnames of my nodes?  If I were to do that, the hosts would not be accessible, because the gluster storage subnet is isolated.

I guess I'm confused about what gdeploy does during a new HCI deployment.  I understand that you are now supposed to use hostnames that resolve to the storage network subnet in the first step, and then specify FQDNs for management in the next step.  Where do the FQDNs actually get used?

In the cockpit-based deployment, the hostnames in the first step are used to "peer probe" and create the gluster cluster. The bricks use this interface when the volume is created, ensuring the data traffic is on this interface.

From the ovirt-engine side, when a network is tagged as a gluster network, the IP associated with this interface is also added as an additional hostname to the gluster cluster. Any brick/volume created after this will then use the gluster network's IP to create the brick.
If the volume was created using the management network, you can change the hostname that the brick is using with this command [1]:
gluster volume reset-brick VOLNAME MGMT-HOSTNAME:BRICKPATH GLUSTERNW-HOSTNAME:BRICKPATH commit force

This assumes both of these paths (MGMT-HOSTNAME:BRICKPATH and GLUSTERNW-HOSTNAME:BRICKPATH) refer to the same brick, and that gluster has been peer probed with both MGMT-HOSTNAME and GLUSTERNW-HOSTNAME. Since I/O is affected during the operation, this has to be performed one brick at a time so that your VMs stay online.
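For what it's worth, a sketch of the full sequence under those assumptions (all hostnames, VOLNAME and BRICKPATH below are placeholders for your actual values):

```shell
# Make sure the cluster has been peer probed with the gluster-network hostname:
gluster peer probe GLUSTERNW-HOSTNAME

# One brick at a time: stop the brick, then commit it under the new hostname.
gluster volume reset-brick VOLNAME MGMT-HOSTNAME:BRICKPATH start
gluster volume reset-brick VOLNAME MGMT-HOSTNAME:BRICKPATH \
  GLUSTERNW-HOSTNAME:BRICKPATH commit force

# Wait for self-heal to finish before moving on to the next brick:
gluster volume heal VOLNAME info
```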

[1] https://docs.gluster.org/en/latest/release-notes/3.9.0/#introducing-reset-brick-command


Can someone confirm whether the hostnames of oVirt host nodes, as shown by the "hostname" command, should resolve to IPs on the gluster storage network?

On Thu, Oct 17, 2019 at 10:40 AM Strahil <hunter86_bg@yahoo.com> wrote:

The reset-brick and replace-brick commands affect only one brick and notify the gluster cluster that a new hostname:/brick_path is being used.

Of course, you need a hostname that resolves to the IP that is on the storage network.

WARNING: Ensure that no heals are pending, as these commands wipe the brick and the data there is lost.
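For example, to check for pending heals before touching a brick (VOLNAME is a placeholder for your volume):

```shell
# Lists entries still needing heal on each brick; all should show 0 entries:
gluster volume heal VOLNAME info
# A per-brick summary of pending heal counts:
gluster volume heal VOLNAME statistics heal-count
```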

Best Regards,
Strahil Nikolov

On Oct 17, 2019 15:28, Jayme <jaymef@gmail.com> wrote:
What does the reset-brick option do, and is it safe to do this on a live system, or do all VMs need to be brought down first?  How does resetting the brick fix the issue of gluster peers using the server hostnames that are attached to IPs on the ovirt management network?

On Thu, Oct 17, 2019 at 4:16 AM Sahina Bose <sabose@redhat.com> wrote:


On Wed, Oct 16, 2019 at 8:38 PM Jayme <jaymef@gmail.com> wrote:
Is there a way to fix this on an HCI deployment which is already in operation?  I do have a separate gluster network which is selected for migration and gluster traffic, but when I originally deployed I used just one set of hostnames, which resolve to the management network subnet.

You will need to change the interface that's used by the bricks. You can do this with the "Reset brick" option, once the gluster network is set correctly on the storage interface from ovirt-engine.
 

I appear to have a situation where gluster traffic may be going through both networks; I'm seeing what looks like gluster traffic on both the gluster interface and the ovirt management interface.

On Wed, Oct 16, 2019 at 11:34 AM Stefano Stagnaro <stefanos@prismatelecomtesting.com> wrote:
Thank you Simone for the clarifications.

I've redeployed with both management and storage FQDNs; now everything seems to be in its place.

I only have a couple of questions:

1) In the Gluster deployment wizard, sections 1 (Hosts) and 2 (Additional Hosts) are misleading; they should be renamed to something like "Host Configuration: Storage side" / "Host Configuration: Management side".

2) What is the real function of the "Gluster Network" cluster traffic type? What does it actually do?

Thanks,
Stefano.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NTNXPJMOZEYVHIZV2SJXXOVXMXCXS2XP/