On Sun, Sep 17, 2017 at 11:14 AM, Eyal Edri <eedri@redhat.com> wrote:


On Sun, Sep 17, 2017 at 11:50 AM, Yaniv Kaul <ykaul@redhat.com> wrote:


On Sun, Sep 17, 2017 at 11:47 AM, Eyal Edri <eedri@redhat.com> wrote:
Hi,

It looks like the HE suite (both 'master' and '4.1') is failing constantly, most likely due to the 7.4 updates.

I'm investigating the issue on master.
In my case I chose to configure the engine VM with a static IP address, and engine-setup failed on the engine VM since it wasn't able to check the available OVN-related packages.

So we have two distinct issues here:
1. we are executing engine-setup with the --offline CLI option, but the OVN plugins are ignoring it.

2. the engine VM has no connectivity.
I dug into it a bit and found that the default gateway wasn't configured on the engine VM, although it's correctly set in the cloud-init meta-data file.
So it seems that on 7.4 cloud-init is failing to set the default gateway:

[root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
connection.gateway-ping-timeout:        0
ipv4.gateway:                           --
ipv6.gateway:                           --
IP4.GATEWAY:                            --
IP6.GATEWAY:                            fe80::c4ee:3eff:fed5:fad9
[root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway
Error: value for 'ipv4.gateway' is missing.
[root@enginevm ~]# 
[root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
connection.gateway-ping-timeout:        0
ipv4.gateway:                           --
ipv6.gateway:                           --
IP4.GATEWAY:                            --
IP6.GATEWAY:                            fe80::c4ee:3eff:fed5:fad9
[root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway 192.168.1.1
[root@enginevm ~]# nmcli con reload "System eth0"
[root@enginevm ~]# nmcli con up "System eth0"
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
connection.gateway-ping-timeout:        0
ipv4.gateway:                           192.168.1.1
ipv6.gateway:                           --
IP4.GATEWAY:                            192.168.1.1
IP6.GATEWAY:                            fe80::c4ee:3eff:fed5:fad9
[root@enginevm ~]# mount /dev/sr0 /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
[root@enginevm ~]# cat /mnt/meta-data 
instance-id: d8b22f43-1565-44e2-916f-f211c7e07f13
local-hostname: enginevm.localdomain
network-interfaces: |
  auto eth0
  iface eth0 inet static
    address 192.168.1.204
    network 192.168.1.0
    netmask 255.255.255.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
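One thing worth trying (an assumption on my side, based on the cloud-init docs rather than anything verified in this thread): newer cloud-init releases prefer a separate NoCloud `network-config` file over the legacy `network-interfaces:` key in meta-data, so the 7.4 cloud-init may simply be ignoring the old format. A minimal sketch carrying the same static configuration as the meta-data above, gateway included (file name and seed layout assumed from the NoCloud docs):

```shell
# Sketch: write a NoCloud network-config (version 1 format) equivalent to the
# network-interfaces block in the thread's meta-data, including the gateway.
# This file would go on the same seed ISO next to meta-data and user-data.
cat > network-config <<'EOF'
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 192.168.1.204/24
        gateway: 192.168.1.1
EOF

# Quick sanity check that the gateway made it into the file:
grep 'gateway' network-config
```

If cloud-init on 7.4 picks this file up, the default route should be in place without the manual nmcli workaround shown above.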


 
So there is no suspected patch from the oVirt side that might have caused it.

It's the firewall. I've fixed it[1] and specifically[2] but probably not completely.

Great! I wasn't aware your patch addresses that. I've replied on the patch itself, but I think we need to split the fix into two separate patches.
 

Perhaps we should try to take[2] separately.
Y.





It is probably also the reason why the HC suites are failing, since they also use HE for deployments.

I think this issue should BLOCK the Alpha release tomorrow, or at a minimum we need to verify it's an OST issue and not a real regression.

Links to relevant failures:

Error snippet:

03:01:38          
03:01:38           --== STORAGE CONFIGURATION ==--
03:01:38          
03:02:47 [ ERROR ] Error while mounting specified storage path: mount.nfs: No route to host
03:02:58 [WARNING] Cannot unmount /tmp/tmp2gkFwJ
03:02:58 [ ERROR ] Failed to execute stage 'Environment customization': mount.nfs: No route to host
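For what it's worth, "No route to host" from mount.nfs is usually not a routing problem at all: it typically means the packets were actively rejected with an ICMP host-unreachable, which is exactly what a host firewall rule produces. A small sketch of that triage logic (the helper function and the error wordings other than the one in the log above are my own, for illustration; the next step on a live host would be something like `firewall-cmd --list-services`):

```shell
# Hedged sketch: classify a mount.nfs failure message to suggest a likely cause.
# "No route to host" -> active reject (firewall rule or host truly down),
# "Connection refused" -> nothing listening on the NFS port,
# a timeout -> packets silently dropped (e.g. a DROP rule).
nfs_mount_hint() {
  case "$1" in
    *"No route to host"*)   echo "likely firewall reject or host down" ;;
    *"Connection refused"*) echo "NFS service not listening" ;;
    *"timed out"*)          echo "packets silently dropped (DROP rule?)" ;;
    *)                      echo "unclassified" ;;
  esac
}

# The exact error from the CI log above:
nfs_mount_hint "mount.nfs: No route to host"
```

Which lines up with the firewall diagnosis below.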


--

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA

TRIED. TESTED. TRUSTED.
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

_______________________________________________
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel



