OK, so I created the Gluster volumes manually. It turned out that virtualization was disabled in the BIOS on the new servers. After enabling virtualization I ran the standard hosted-engine deployment wizard and chose the Gluster volume as its storage location. Everything looks fine now. Thanks.
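In case it helps anyone else hitting the same thing, the checks and commands involved were roughly these (the volume name and brick paths below are only illustrative, adjust to your own layout):

    # confirm VT-x/AMD-V is exposed after enabling it in the BIOS
    grep -cE 'vmx|svm' /proc/cpuinfo    # should print a number greater than 0

    # create and start a replica-3 volume for the hosted engine (example name/paths)
    gluster volume create engine replica 3 \
        bq817storage.example.com:/gluster_bricks/engine/engine \
        bq735storage.example.com:/gluster_bricks/engine/engine \
        bq813storage.example.com:/gluster_bricks/engine/engine
    gluster volume start engine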

On Thu, Oct 25, 2018 at 3:37 PM Jarosław Prokopowski <jprokopowski@gmail.com> wrote:
Yes, bq817 is the one running Cockpit, and yes, I can also ssh locally using the full host name.
I checked with tcpdump and there is no SSH connection attempt on any of the network interfaces during the deployment. Strange...
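The tcpdump check was something along these lines, watching all interfaces for SSH traffic while the deployment ran:

    tcpdump -i any -nn 'tcp port 22'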

On Thu, Oct 25, 2018 at 3:15 PM Jayme <jaymef@gmail.com> wrote:
Is the host in the error the same host you are running Cockpit from? Make sure you can ssh not just to localhost but to the hostname from the host itself, i.e. on bq817storage.example.com try ssh'ing to root@bq817storage.example.com.
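Something like this, run on the host itself, should come back without any password prompt:

    # key-only login to the host's own FQDN (BatchMode prevents falling back to a password)
    ssh -o BatchMode=yes root@bq817storage.example.com hostname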

On Thu, Oct 25, 2018 at 9:06 AM Jarosław Prokopowski <jprokopowski@gmail.com> wrote:
And because I sometimes ssh through the main (non-storage) network interface, I have a local .ssh/config file on the root account with:
Host *
    StrictHostKeyChecking no
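And just as a sanity check that this file is actually being picked up for the storage host names:

    # show the options ssh resolves for that host name
    ssh -G bq817storage.example.com | grep -i stricthostkeychecking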


On Thu, Oct 25, 2018 at 2:03 PM Jarosław Prokopowski <jprokopowski@gmail.com> wrote:
Hi,

Yes, SSH keys have been distributed and remote root login works in every direction.
After I got the error I tested all connections manually and they work.
On every host I can ssh to root@localhost and to the other hosts without any problem.
That's why the error is so strange to me. I even tested Ansible from the oVirt host to the others and it works fine using the SSH keys.
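The Ansible test was just an ad-hoc ping over the same keys, roughly:

    # inline inventory (note the trailing comma), run as root from the oVirt host
    ansible all -i 'bq817storage.example.com,bq735storage.example.com,bq813storage.example.com,' \
        -u root -m ping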


On Thu, Oct 25, 2018 at 1:43 PM Jayme <jaymef@gmail.com> wrote:
You should also make sure the host can ssh to itself and has accepted its own host key.
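E.g. something along these lines on each host (assuming root's default key is the one in use):

    # copy the key to the host's own FQDN and accept its host key once
    ssh-copy-id root@$(hostname -f)
    ssh root@$(hostname -f) true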

On Thu, Oct 25, 2018, 8:42 AM Jayme, <jaymef@gmail.com> wrote:
Darn autocorrect, sshd config rather 
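I.e. the usual suspects in /etc/ssh/sshd_config on each host, something like:

    PermitRootLogin yes
    PubkeyAuthentication yes
    AuthorizedKeysFile .ssh/authorized_keys

and a 'systemctl reload sshd' after any change.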

On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <jprokopowski@gmail.com> wrote:
Hi,

Please help! :-) I couldn't find any solution via Google.

I followed this document to set up an oVirt hyperconverged cluster on 3 hosts using the Cockpit wizard:


System: CentOS Linux release 7.5.1804

All hosts can resolve each other's names via DNS, and SSH keys are exchanged and working.
I added firewall rules based on the oVirt installation guide; SSH works between all hosts using keys.
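For reference, name resolution was verified and the rules were added roughly like this (only a couple of entries shown; the resulting lists are pasted below):

    # confirm every host resolves the storage names
    getent hosts bq735storage.example.com

    # open the services/ports permanently and reload (repeated for each entry)
    firewall-cmd --permanent --add-service=glusterfs
    firewall-cmd --permanent --add-service=cockpit
    firewall-cmd --permanent --add-port=54321/tcp
    firewall-cmd --reload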

I cannot complete the configuration, and the error I get in the last step is:

------------------------------------------------------------------------------------------------------
PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
failed: [bq817storage.example.com] (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h bq817storage.example.com, bq735storage.example.com, bq813storage.example.com) => {"item": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h bq817storage.example.com, bq735storage.example.com, bq813storage.example.com", "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}
fatal: [bq817storage.example.com]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"_ansible_ignore_errors": null, "_ansible_item_label": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h bq817storage.example.com, bq735storage.example.com, bq813storage.example.com", "_ansible_item_result": true, "item": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h bq817storage.example.com, bq735storage.example.com, bq813storage.example.com", "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}]}
to retry, use: --limit @/tmp/tmpYLHDCP/run-script.retry

PLAY RECAP *********************************************************************
bq817storage.example.com : ok=0    changed=0    unreachable=1    failed=0


Firewall rules:

oVirt engine host:

#firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp134s0f0 enp134s0f1
  sources:
  services: ssh dhcpv6-client cockpit glusterfs http https dns
  ports: 2222/tcp 6100/tcp 7410/udp 54323/tcp 2223/tcp 161/udp 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 54321/tcp 54322/tcp 6081/udp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
  
oVirt nodes:

#firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp134s0f0 enp134s0f1
  sources:
  services: ssh dhcpv6-client cockpit glusterfs dns
  ports: 2223/tcp 161/udp 111/tcp 5900-6923/tcp 5989/tcp 9090/tcp 16514/tcp 49152-49216/tcp 54321/tcp 54322/tcp 6081/udp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
---------------------------------------------------------------------------------
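In case it is useful, the item it fails on boils down to this script invocation, which can also be tried by hand on the host:

    /usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h bq817storage.example.com, bq735storage.example.com, bq813storage.example.com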

Thanks in advance
Jarson