On Thu, Oct 13, 2016 at 2:45 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:


On Thu, Oct 13, 2016 at 11:23 AM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
Gianluca,

The port needs to be open on machines where vdsm is installed.
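For reference, opening it manually would look roughly like this (a sketch, assuming an iptables-based host with the iptables-services package, or firewalld where that is what runs):

# iptables-based host
iptables -I INPUT -p tcp --dport 54321 -j ACCEPT
service iptables save

# firewalld-based host
firewall-cmd --permanent --add-port=54321/tcp
firewall-cmd --reload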

@Simone, can you take a look at why we are no longer able to talk to vdsm
after host deploy ran at 2016-10-03 23:28:47,891?

OK, I'm on it.
 

Thanks,
Piotr 

On Thu, Oct 13, 2016 at 11:15 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:


On Thu, Oct 13, 2016 at 11:13 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:

On 13 Oct 2016 at 11:00, "Piotr Kliczewski" <pkliczew@redhat.com> wrote:
>
> Gianluca,
>
> Checking the log, it seems that we do not configure the firewall:
>
> NETWORK/firewalldEnable=bool:'False'
> NETWORK/iptablesEnable=bool:'False'
>
> Please make sure that you reconfigure your firewall to open port 54321, or let host deploy do it for you.
>
> Thanks,
> Piotr

Hi,
at this moment I have:
on the hypervisor, the iptables service configured and active;
on the engine, the firewalld service configured and active.
Do I have to open port 54321 on the host?

Actually, it is already...

[root@ovirt01 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination        
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:53
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:53
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:67
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:67
ACCEPT     all  --  192.168.1.212        0.0.0.0/0          
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0          
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:54321
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:111
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:111
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:161
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:16514
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 2223
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 5900:6923
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 49152:49216
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        
ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0          
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:68
[root@ovirt01 ~]#

In the meantime I confirmed that even without IPv6 the situation doesn't change:

set global maintenance
stop the ovirt-engine service
create no-ipv6.conf under /etc/sysctl.d on the engine (see the sketch below)
systemctl restart network
no more ipv6
shut down the engine
exit from maintenance, and after a while the engine is powered on
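The no-ipv6.conf referenced above would contain something like this (a sketch using the standard sysctl keys for disabling IPv6; the file name itself is arbitrary):

# /etc/sysctl.d/no-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1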

on host
vdsm    6767 vdsm   24u     IPv4           15528247      0t0       TCP *:54321 (LISTEN)
vdsm    6767 vdsm   82u     IPv4           15528876      0t0       TCP ovirt01.mydomain:54321->ovirt.mydomain:52980 (ESTABLISHED)
vdsm    6767 vdsm  110u     IPv4           15534849      0t0       TCP ovirt01.mydomain:54321->ovirt.mydomain:52984 (ESTABLISHED)
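(The listing above is lsof output; assuming lsof is installed, something like "lsof -i :54321" should reproduce it.)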

on engine now
[root@ovirt host-deploy]# netstat -an|grep 54321
tcp        0      0 192.168.1.212:52984     192.168.1.211:54321     ESTABLISHED
tcp        0      0 192.168.1.212:52980     192.168.1.211:54321     ESTABLISHED
[root@ovirt host-deploy]#

but vdsmd has the same errors, even after restarting vdsmd:

Oct 13 14:49:20 ovirt01.mydomain vdsm[6767]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
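
A quick way to check whether the TLS handshake itself completes (a sketch, assuming openssl is available on the engine; vdsm normally requires a client certificate, so a full session will not be established, but a failing handshake would still show up):

openssl s_client -connect ovirt01.mydomain:54321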

How can I force the creation of the ovirt-host-mgmt file?
I see that only this one file has been generated:
ovirt-host-mgmt-20161013124548-ovirt01.mydomain-null.log
here:
https://drive.google.com/file/d/0BwoPbcrMv8mvbXI3cndGcEtXbWs/view?usp=sharing