
I posted that I had wiped out the oVirt engine, ran ovirt-hosted-engine-cleanup on all three nodes, and did a re-deployment. Then I went to add the nodes back. Though all have entries for each other in /etc/hosts, and ssh works fine via both short and long names, I had to add the nodes back into the cluster via IP to get past an error.

Now, if I go to create a volume via the gluster GUI, I get:

Error while executing action Create Gluster Volume: Volume create failed: rc=30800 out=() err=["Host 172.16.100.102 is not in 'Peer in Cluster' state"]

Which seems to be related to using IP vs DNS to add gluster volumes: https://bugzilla.redhat.com/show_bug.cgi?id=1055928

Question: how do I fix the hosts in the cluster being defined by IP instead of the desired hostname?

-- jeremey.wise@gmail.com
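
The peer state the error complains about can be checked directly on any node. A minimal sketch with standard gluster CLI commands (the probe-by-name step is a common way to attach a hostname to a peer originally probed by IP, offered here only as an assumption about what may help):

  gluster peer status      # healthy peers show: State: Peer in Cluster (Connected)
  gluster pool list        # UUID / hostname / connection state for every peer
  # re-probe an existing peer by the desired name so gluster learns it:
  gluster peer probe odinst.penguinpages.local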

Another note of color to this: I can't repair a brick, because gluster calls bricks by hostname while the oVirt engine now thinks of them by IP.

Error while executing action Start Gluster Volume Reset Brick: Volume reset brick start failed: rc=-1 out=() err=['Pre Validation failed on thorst.penguinpages.local. brick: 172.16.100.103:/gluster_bricks/vmstore/vmstore does not exist in volume: vmstore\nPre Validation failed on odinst.penguinpages.local. brick: 172.16.100.103:/gluster_bricks/vmstore/vmstore does not exist in volume: vmstore']
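
For reference, the engine's Reset Brick action wraps the gluster CLI. Run by hand with the names gluster actually knows, it would look roughly like this — a sketch only, assuming the volume name (vmstore) and brick path taken from the error above:

  gluster volume reset-brick vmstore thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore start
  # ...repair or replace the brick, then commit, reusing the same path:
  gluster volume reset-brick vmstore thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore \
      thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore commit force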

Is there any way I can access the oVirt engine database and change the field that defines the connection to a host from IP to FQDN? I tried putting a server in maintenance mode, and even removing one (telling it to ignore gluster), but when I try to remove a node and then re-add it to the cluster via FQDN (so gluster commands stop failing), it keeps saying it can't do that to a gluster cluster node.

Hi Jeremey,

I am not sure that I completely understand the problem. Can you provide the Host details page from the UI and the output of 'gluster pool list' & 'gluster peer status' from all nodes?

Best Regards,
Strahil Nikolov

When I redeployed the oVirt engine after running ovirt-hosted-engine-cleanup on all nodes, it deployed oVirt on the first node fine, but when I tried to add the other two nodes it kept failing. I got it to succeed ONLY if I used IP instead of DNS (post about the error here: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/A6Z3MRFGFSEA7IO... ). But this seems to have been a bad idea, and I now need to correct it.

I can't send emails with images, so I will post a scrape:

######
Name    Comment                               Hostname/IP              Cluster          Data Center         Status  Virtual Machines  Memory  CPU  Network  SPM
medusa  medusa host in three node HA cluster  172.16.100.103           Default_Cluster  Default_Datacenter  Up      0                 6%      9%   0%       SPM
odin    odin host in three node HA cluster    172.16.100.102           Default_Cluster  Default_Datacenter  Up      1                 8%      0%   0%       Normal
thor    thor host in three node HA cluster    thor.penguinpages.local  Default_Cluster  Default_Datacenter  Up      4                 9%      2%   0%       Normal
######

[root@thor ~]# gluster pool list
UUID                                  Hostname                     State
83c772aa-33cd-430f-9614-30a99534d10e  odinst.penguinpages.local    Connected
977b2c1d-36a8-4852-b953-f75850ac5031  medusast.penguinpages.local  Connected
7726b514-e7c3-4705-bbc9-5a90c8a966c9  localhost                    Connected

[root@thor ~]# gluster peer status
Number of Peers: 2

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)

[root@odin ~]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)

[root@medusa ~]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)

[root@thor ~]# cat /etc/hosts
# Version: 20190730a
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# Cluster node thor
172.16.100.91  thorm.penguinpages.local thorm
172.16.100.101 thor.penguinpages.local thor
172.16.101.101 thorst.penguinpages.local thorst

# Cluster node odin
172.16.100.92  odinm.penguinpages.local odinm
172.16.100.102 odin.penguinpages.local odin
172.16.101.102 odinst.penguinpages.local odinst

# Cluster node medusa
# 172.16.100.93 medusam.penguinpages.local medusam
172.16.100.103 medusa.penguinpages.local medusa
172.16.101.103 medusast.penguinpages.local medusast

172.16.100.31 ovirte01.penguinpages.local ovirte01
172.16.100.32 ovirte02.penguinpages.local ovirte02
172.16.100.33 ovirte03.penguinpages.local ovirte03
[root@thor ~]#

You cannot have 2 IPs for 2 different FQDNs of the same host. You have to use something like:

172.16.100.101 thor.penguinpages.local thor thorst

Fix your /etc/hosts, or you should use DNS.

Best Regards,
Strahil Nikolov
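
Applied to all three nodes, that suggestion would give an /etc/hosts along these lines. A sketch only, using the management IPs from the host table above and folding the storage aliases onto the same address, as Strahil's one-line example does:

  # Cluster node thor
  172.16.100.101 thor.penguinpages.local thor thorst.penguinpages.local thorst
  # Cluster node odin
  172.16.100.102 odin.penguinpages.local odin odinst.penguinpages.local odinst
  # Cluster node medusa
  172.16.100.103 medusa.penguinpages.local medusa medusast.penguinpages.local medusast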

I saw the note about the holiday, and I wish all well. I'm just kind of stuck here, afraid to move forward with building the stack while the nodes are left in limbo between gluster and the cluster. I just need to repair the hosts so the cluster connects to them via the desired FQDN instead of IP.

Any ideas, or is this a wipe and rebuild of the engine again?

I used pgAdmin connected to the oVirt-engine VM:

username: engine
password: (see /etc/ovirt-engine/engine.conf.d/10-setup-database.conf)
database: engine

Schemas -> Tables -> 153 tables (which look like what we find in the oVirt UI)...

Searched around... no entry where it has 172.16.100.102 or .103 to reflect the hosts in the cluster, that I could change to FQDN and restart the engine to <hopefully> fix the issue :)

I will keep poking, but if someone has done this before it would help.
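
If you'd rather look from the engine VM's shell than pgAdmin, something like the following should surface the rows. A sketch only: the password comes from the setup file mentioned above, and the table/column names are the ones identified in the next message:

  grep ENGINE_DB_PASSWORD /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
  PGPASSWORD='<that password>' psql -h localhost -U engine -d engine \
      -c "SELECT vds_name, host_name FROM vds_static;"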

I think I found the field to change the host connection from IP to FQDN:

  select * from vds_static;

Then change "host_name" from the IP to the FQDN.

To test, I rebooted the cluster. Getting a few other errors, but I think those are unrelated... will post if/as I learn more, but so far this seems to be working.
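
Spelled out, the edit described above would look roughly like this. A sketch only, using the IPs and FQDNs from earlier in the thread, and assuming the engine is stopped while its database is edited:

  -- on the engine VM, first: systemctl stop ovirt-engine
  UPDATE vds_static SET host_name = 'odin.penguinpages.local'
      WHERE host_name = '172.16.100.102';
  UPDATE vds_static SET host_name = 'medusa.penguinpages.local'
      WHERE host_name = '172.16.100.103';
  -- afterwards: systemctl start ovirt-engine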
participants (3):
- Jeremey Wise
- penguin pages
- Strahil Nikolov