
I have set up a 3 node system.
Gluster has its own backend network and I have tried entering the FQDN hosts via ssh as follows...

gfs1.gluster.private 10.10.45.11
gfs2.gluster.private 10.10.45.12
gfs3.gluster.private 10.10.45.13

I entered at /etc/hosts:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
gfs1.gluster.private 10.10.45.11
gfs2.gluster.private 10.10.45.12
gfs3.gluster.private 10.10.45.13

but on the CLI, host gfs1.gluster.private returns:

[root@ovirt1 etc]# host gfs1.gluster.private
Host gfs1.gluster.private not found: 3(NXDOMAIN)
[root@ovirt1 etc]#

I guess this is the wrong hosts file, resolver.conf lists files first for lookup...

[root@ovirt1 etc]# host gfs1.gluster.private
Host gfs1.gluster.private not found: 3(NXDOMAIN)
[root@ovirt1 etc]#
I guess this is the wrong hosts file, resolver.conf lists files first for lookup...
What is resolver.conf? Do you mean /etc/nsswitch.conf? In any case, "host" is a tool for querying DNS; it does not care about /etc/hosts.
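The distinction above can be checked directly: `host` (and `dig`) query DNS servers only, while `getent hosts` resolves through the glibc NSS order configured in /etc/nsswitch.conf, which is what actually consults /etc/hosts. A minimal sketch, using `localhost` since it is present in /etc/hosts on any stock install:

```shell
# getent resolves via nsswitch.conf, so /etc/hosts entries show up here;
# `host` on the same name can still return NXDOMAIN because it only asks DNS.
getent hosts localhost && echo "resolved via nsswitch (files, then dns)"

# once the /etc/hosts entries are fixed, this is the check to repeat
# (hostname taken from the thread):
# getent hosts gfs1.gluster.private
```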

It looks like you may have those host entries backward. Put the IP first, then the hostname, i.e. 1.1.1.1 host.example.com

On Fri, Nov 15, 2019 at 8:17 AM <rob.downer@orbitalsystems.co.uk> wrote:
I have set up a 3 node system.
Gluster has its own backend network and I have tried entering the FQDN hosts via ssh as follows...

gfs1.gluster.private 10.10.45.11
gfs2.gluster.private 10.10.45.12
gfs3.gluster.private 10.10.45.13
I entered at /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
gfs1.gluster.private 10.10.45.11
gfs2.gluster.private 10.10.45.12
gfs3.gluster.private 10.10.45.13
but on the CLI
host gfs1.gluster.private
returns
[root@ovirt1 etc]# host gfs1.gluster.private
Host gfs1.gluster.private not found: 3(NXDOMAIN)
[root@ovirt1 etc]#
I guess this is the wrong hosts file, resolver.conf lists files first for lookup...
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILABGNZFOH5BP6...
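For reference, the working form of the entries from the original post, with the address first as suggested above (addresses and FQDNs taken from the thread; the short aliases are optional additions):

```
10.10.45.11 gfs1.gluster.private gfs1
10.10.45.12 gfs2.gluster.private gfs2
10.10.45.13 gfs3.gluster.private gfs3
```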

So using FQDN or IP I get this:

task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:4
fatal: [10.10.45.13]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [10.10.45.11]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [10.10.45.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}

but on the CLI it's reachable... looks like a password issue...?

[root@ovirt3 ~]# ping 10.10.45.12
PING 10.10.45.12 (10.10.45.12) 56(84) bytes of data.
64 bytes from 10.10.45.12: icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from 10.10.45.12: icmp_seq=2 ttl=64 time=1.42 ms
64 bytes from 10.10.45.12: icmp_seq=3 ttl=64 time=0.157 ms
64 bytes from 10.10.45.12: icmp_seq=4 ttl=64 time=0.141 ms
64 bytes from 10.10.45.12: icmp_seq=5 ttl=64 time=0.140 ms
64 bytes from 10.10.45.12: icmp_seq=6 ttl=64 time=0.172 ms
^C
--- 10.10.45.12 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5001ms
rtt min/avg/max/mdev = 0.140/0.366/1.429/0.475 ms
[root@ovirt3 ~]#
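A successful ping does not say anything about SSH key authentication, which is what the wizard's Ansible run uses. A hedged sketch of reproducing what Ansible actually does (IPs taken from the thread): BatchMode disables password prompts, so "Permission denied" here means the public key is not installed on that host.

```shell
# Reproduce Ansible's non-interactive connection attempt against each host.
# On a correctly prepared setup, every line should report "key auth OK".
fails=0
for h in 10.10.45.11 10.10.45.12 10.10.45.13; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$h" true 2>/dev/null; then
    echo "$h: key auth OK"
  else
    echo "$h: key auth FAILED"
    fails=$((fails+1))
  fi
done
echo "$fails host(s) failing key auth"
```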

Did you set up ssh keys between the hosts?

On Fri, Nov 15, 2019 at 9:16 AM <rob.downer@orbitalsystems.co.uk> wrote:
So using FQDN or IP I get this:

task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:4
fatal: [10.10.45.13]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [10.10.45.11]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [10.10.45.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
but on the CLI it's reachable... looks like a password issue...?

[root@ovirt3 ~]# ping 10.10.45.12
PING 10.10.45.12 (10.10.45.12) 56(84) bytes of data.
64 bytes from 10.10.45.12: icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from 10.10.45.12: icmp_seq=2 ttl=64 time=1.42 ms
64 bytes from 10.10.45.12: icmp_seq=3 ttl=64 time=0.157 ms
64 bytes from 10.10.45.12: icmp_seq=4 ttl=64 time=0.141 ms
64 bytes from 10.10.45.12: icmp_seq=5 ttl=64 time=0.140 ms
64 bytes from 10.10.45.12: icmp_seq=6 ttl=64 time=0.172 ms
^C
--- 10.10.45.12 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5001ms
rtt min/avg/max/mdev = 0.140/0.366/1.429/0.475 ms
[root@ovirt3 ~]#

Yes, see below... still getting "FQDN is not added in known_hosts" on the Additional hosts screen...

Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@gfs3.gluster.private's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'gfs3.gluster.private'"
and check to make sure that only the key(s) you wanted were added.
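The "FQDN is not added in known_hosts" message is about the remote *host key*, not the authorized key that ssh-copy-id installs, so a successful ssh-copy-id does not clear it by itself. A hedged sketch of pre-seeding known_hosts with ssh-keyscan (hostnames taken from the thread; run as the user the wizard connects as, typically root, and on every node):

```shell
# Collect each host's public host keys so ssh never raises the interactive
# "Are you sure you want to continue connecting" prompt during deployment.
mkdir -p "$HOME/.ssh" && touch "$HOME/.ssh/known_hosts"
for h in gfs1.gluster.private gfs2.gluster.private gfs3.gluster.private; do
  ssh-keyscan -t rsa,ecdsa,ed25519 "$h" >> "$HOME/.ssh/known_hosts" 2>/dev/null
done
# de-duplicate in case some entries were already present
sort -u "$HOME/.ssh/known_hosts" -o "$HOME/.ssh/known_hosts"
echo "known_hosts now has $(wc -l < "$HOME/.ssh/known_hosts") entries"
```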

What is the output of "nmcli general hostname"?

On 11/15/19 7:15 AM, rob.downer@orbitalsystems.co.uk wrote:
I have set up a 3 node system.
Gluster has its own backend network and I have tried entering the FQDN hosts via ssh as follows...

gfs1.gluster.private 10.10.45.11
gfs2.gluster.private 10.10.45.12
gfs3.gluster.private 10.10.45.13
I entered at /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
gfs1.gluster.private 10.10.45.11
gfs2.gluster.private 10.10.45.12
gfs3.gluster.private 10.10.45.13
but on the CLI
host gfs1.gluster.private
returns
[root@ovirt1 etc]# host gfs1.gluster.private
Host gfs1.gluster.private not found: 3(NXDOMAIN)
[root@ovirt1 etc]#
I guess this is the wrong hosts file, resolver.conf lists files first for lookup...

OK so I found that even though DNS was set correctly, having put IP addresses in Additional hosts and adding them to /etc/hosts, deployment does not immediately fail... however it does fail...

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:17
skipping: [gfs2.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [gfs1.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [gfs3.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:24
ok: [gfs2.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}

TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:34
ok: [gfs2.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:46
failed: [gfs1.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
failed: [gfs3.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
failed: [gfs2.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}

NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
gfs1.gluster.private : ok=10 changed=0 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
gfs2.gluster.private : ok=11 changed=1 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
gfs3.gluster.private : ok=10 changed=0 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
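"Device /dev/sdb not found." means LVM cannot see the disk name given to the wizard on any of the three hosts. Before re-running, it is worth listing what block devices each node actually has; a sketch (hostnames from the thread, and the ssh loop assumes key auth is already working):

```shell
# List every disk on each host; whatever should back the gluster bricks
# must appear here under the exact name given to the wizard (e.g. sdb).
for h in gfs1.gluster.private gfs2.gluster.private gfs3.gluster.private; do
  echo "== $h =="
  ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$h" \
    lsblk -dn -o NAME,SIZE,TYPE 2>/dev/null || echo "(could not query $h)"
done
```

On modern hardware the data disk is often not /dev/sdb at all (NVMe disks appear as /dev/nvme0n1, multipath devices under /dev/mapper), so the name reported by lsblk is the one to enter in the wizard.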
participants (4)
-
Jayme
-
Jingjie Jiang
-
markus.falb@mafalb.at
-
rob.downer@orbitalsystems.co.uk