Re: [Users] simple networking? [SOLVED] mostly

Itamar,

I have the same problem and have already mentioned it in previous posts. In my case, VLAN 128 is the tagged VLAN assigned to ovirtmgmt, so I set the host IP manually on top of a VLAN interface. Then, after oVirt installs the node, it fails to finish because it cannot properly create the bridge and move the IP over to the newly created bridge. The ovirtmgmt interface I create in the engine is a bond (originally I configured just a single interface with the IP to give the engine access), just in case that changes the scenario. Regards,
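(For reference, the end state that a successful host install should produce — the ovirtmgmt bridge holding the IP, on top of the tagged device — would look roughly like the two ifcfg files below. This is only a sketch: device names and addresses are illustrative, mirroring the sample host that appears later in this thread.)

# /etc/sysconfig/network-scripts/ifcfg-bond0.128  (tagged device, enslaved to the bridge)
DEVICE=bond0.128
VLAN=yes
BRIDGE=ovirtmgmt
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt  (the bridge, carrying the host IP)
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.128.82
NETMASK=255.255.255.0
GATEWAY=192.168.128.49
DELAY=0
NM_CONTROLLED=no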

On 12/16/2013 12:54 PM, Juan Pablo Lorier wrote:
I have the same problem and have already mentioned it in previous posts. In my case, VLAN 128 is the tagged VLAN assigned to ovirtmgmt, so I set the host IP manually on top of a VLAN interface [...]
dan/toni - thoughts?

Hi Juan Pablo,

Could you please share /var/log/vdsm/supervdsm.log so that I can see why the installation fails? I'd also like to know how you set up the pre-existing VLAN on the host.

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com>
To: "Juan Pablo Lorier" <jplorier@gmail.com>, users@ovirt.org, "Dan Kenigsberg" <danken@redhat.com>, "Antoni Segura Puimedon" <asegurap@redhat.com>
Sent: Monday, December 16, 2013 7:13:04 PM
Subject: Re: [Users] simple networking? [SOLVED] mostly
dan/toni - thoughts?

Hi Antoni,

I've tried two scenarios: creating an ifcfg-eth0.128 file with the parameters so that it survives restarts, and creating the VLAN dynamically with vconfig and ifconfig so that it does not survive a restart, to see whether oVirt gets the management network working. The only thing that worked for me was creating the whole ovirtmgmt bridge myself on top of bondX.128, and of course that does not fail.

This log is from one of the last two hosts I set up (3.3.1), where I didn't create the bond and just created a VLAN on top of eth0: https://drive.google.com/file/d/0B9OrU8RK9m26TjdGcTBLbkJrYVU/edit?usp=sharin...

Regards,
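(For concreteness, the two attempted setups would look roughly like the following. This is a sketch only; eth0 and VLAN 128 are as described above, and the addresses are illustrative.)

# Scenario 1: persistent VLAN interface carrying the host IP
# /etc/sysconfig/network-scripts/ifcfg-eth0.128
DEVICE=eth0.128
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.128.82
NETMASK=255.255.255.0
GATEWAY=192.168.128.49
NM_CONTROLLED=no

# Scenario 2: transient VLAN interface (does not survive a reboot)
vconfig add eth0 128
ifconfig eth0.128 192.168.128.82 netmask 255.255.255.0 up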

Juan - Are you using an all-in-one?

Assaf Muller, Cloud Networking Engineer
Red Hat

----- Original Message -----
From: "Juan Pablo Lorier" <jplorier@gmail.com>
To: "Antoni Segura Puimedon" <asegurap@redhat.com>, "Itamar Heim" <iheim@redhat.com>
Cc: users@ovirt.org
Sent: Monday, December 16, 2013 9:16:13 PM
Subject: Re: [Users] simple networking? [SOLVED] mostly

Hi,

No, I'm not. At the moment the engine is on a VirtualBox VM. Everything runs on top of CentOS 6.5.

Regards

On Dec 19, 2013 6:39 AM, "Assaf Muller" <amuller@redhat.com> wrote:
Juan - Are you using an all-in-one?
Assaf Muller, Cloud Networking Engineer Red Hat

----- Original Message -----
From: "Juan Pablo Lorier" <jplorier@gmail.com>
To: users@ovirt.org
Sent: Monday, December 16, 2013 7:54:34 PM
Subject: Re: [Users] simple networking? [SOLVED] mostly
I have the same problem and have already mentioned it in previous posts. In my case, VLAN 128 is the tagged VLAN assigned to ovirtmgmt, so I set the host IP manually on top of a VLAN interface [...]
Just to make sure I get it right:

You're attempting to install a host which is configured as eth0 --- eth0.128 (the latter with a static IP configured). When you install the host, you provide as its address either the IP of eth0.128, as you configured it manually, or an FQDN which resolves to that same IP. The 'ovirtmgmt' logical network is defined as a VM network tagged with 128.

Can you provide, from the output of 'getCapabilities' in vdsm.log immediately after vdsm starts during the installation, the value of 'lastClientIface'? Or, even better, the entire output of 'getCapabilities' (which will show whether vdsm properly reports the required information about the NIC it should configure)?

Could you confirm the above and provide the missing pieces?
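(As an illustration, something along these lines on the host should surface the requested value from the first 'getCapabilities' reply after vdsm starts; the log path is the default one and the relevant file may already have been rotated.)

# value of 'lastClientIface' in the first getCapabilities reply
grep -m1 "return getCapabilities" /var/log/vdsm/vdsm.log | grep -o "'lastClientIface': '[^']*'"
# or the entire reply, for context
grep -m1 "return getCapabilities" /var/log/vdsm/vdsm.log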

Hi Moti,

Here's the vdsm.log that should match the one I just sent to Antoni: https://drive.google.com/file/d/0B9OrU8RK9m26MGFTTGctUllxSHc/edit?usp=sharin...

I can't be of much help these days (I'm out of the office), but if it's just a matter of sending logs, I'll be back at the office next Friday and can give you whatever info you request. Regards,

By looking at the output of 'getCapabilities' i noticed vdsm didn't report any value for 'lastClientIface': 'lastClientIface': '' It seems like the first 'getCapabilities' which the engine relies on to report the nic for configuring the management network on top of is missing. Toni, any idea in which case it might not be reported ? Thread-20::DEBUG::2013-12-05 14:01:43,531::BindingXMLRPC::981::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:1a6a2adc920'}], 'FC': []}, 'packages2': {'kernel': {'release': '358.23.2.el6.x86_64', 'buildtime': 1381955832.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '8.el6', 'buildtime': 1375787859L, 'version': '3.4.0'}, 'glusterfs-fuse': {'release': '8.el6', 'buildtime': 1375787859L, 'version': '3.4.0'}, 'spice-server': {'release': '6.el6', 'buildtime': 1385213397L, 'version': '0.12.4'}, 'vdsm': {'release': '11.el6', 'buildtime': 1384277438L, 'version': '4.13.0'}, 'qemu-kvm': {'release': '2.355.0.1.el6_4.9', 'buildtime': 1380718456L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.355.0.1.el6_4.9', 'buildtime': 1380718456L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6', 'buildtime': 1385212305L, 'version': '0.10.2'}, 'glusterfs': {'release': '8.el6', 'buildtime': 1375787859L, 'version': '3.4.0'}, 'mom': {'release': '3.el6', 'buildtime': 1375215703L, 'version': '0.3.2'}, 'glusterfs-server': {'release': '8.el6', 'buildtime': 1375787859L, 'version': '3.4.0'}}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5450 @ 3.00GHz', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.128.82', 'cfg': {'DEFROUTE': 'yes', 'IPADDR': '192.168.128.82', 'GATEWAY': '192.168.128.49', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::21e:c9ff:fe2b:7a5c/64'], 'gateway': '192.168.128.49', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'qosInbound': '', 'qosOutbound': '', 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.128']}, 'iscsi131': {'iface': 'iscsi131', 'addr': '192.168.131.82', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '192.168.131.82', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'iscsi131', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::215:17ff:fe9b:eb04/64'], 'gateway': '0.0.0.0', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'qosInbound': '', 'qosOutbound': '', 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond1.131']}, 'iscsi130': {'iface': 'iscsi130', 'addr': '192.168.130.82', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '192.168.130.82', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'iscsi130', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::215:17ff:fe9b:eb04/64'], 'gateway': '0.0.0.0', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'qosInbound': '', 'qosOutbound': '', 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond1.130']}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.128.82', 'cfg': {'DEFROUTE': 'yes', 'IPADDR': '192.168.128.82', 'GATEWAY': '192.168.128.49', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': 
['fe80::21e:c9ff:fe2b:7a5c/64'], 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ipv6gateway': '::', 'gateway': '192.168.128.49', 'ports': ['bond0.128']}, 'iscsi131': {'addr': '192.168.131.82', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '192.168.131.82', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'iscsi131', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::215:17ff:fe9b:eb04/64'], 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ipv6gateway': '::', 'gateway': '0.0.0.0', 'ports': ['bond1.131']}, 'iscsi130': {'addr': '192.168.130.82', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '192.168.130.82', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'iscsi130', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::215:17ff:fe9b:eb04/64'], 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ipv6gateway': '::', 'gateway': '0.0.0.0', 'ports': ['bond1.130']}}, 'uuid': '44454C4C-5300-1038-8034-CAC04F4C4631', 'lastClientIface': '', 'nics': {'p3p1': {'netmask': '', 'addr': '', 'hwaddr': '00:15:17:9b:eb:04', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '00:15:17:9b:eb:04', 'MASTER': 'bond1', 'DEVICE': 'p3p1', 'STP': 'no', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'permhwaddr': '00:15:17:9B:EB:04', 'speed': 1000, 'mtu': '1500'}, 'p3p2': {'netmask': '', 'addr': '', 'hwaddr': '00:15:17:9b:eb:04', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '00:15:17:9b:eb:05', 'MASTER': 'bond1', 'DEVICE': 'p3p2', 'STP': 'no', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'permhwaddr': '00:15:17:9B:EB:05', 'speed': 1000, 'mtu': '1500'}, 'eth1': {'netmask': '', 'addr': '', 'hwaddr': '00:1e:c9:2b:7a:5c', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '00:1e:c9:2b:7a:5e', 'STP': 'no', 'DEVICE': 'eth1', 'MASTER': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'permhwaddr': '00:1E:C9:2B:7A:5E', 'speed': 1000, 'mtu': '1500'}, 'eth0': {'netmask': '', 'addr': '', 'hwaddr': '00:1e:c9:2b:7a:5c', 'cfg': {'SLAVE': 'yes', 'MTU': '1500', 'HWADDR': '00:1E:C9:2B:7A:5C', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'permhwaddr': '00:1E:C9:2B:7A:5C', 'speed': 1000, 'mtu': '1500'}}, 'software_revision': '11', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,lahf_lm,dts,tpr_shadow,vnmi,flexpriority,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:1a6a2adc920', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3'], 'reservedMem': '321', 'bondings': {'bond4': {'netmask': '', 'addr': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'ipv6addrs': [], 'mtu': '1500'}, 'bond0': {'netmask': '', 'addr': '', 'slaves': ['eth0', 'eth1'], 'hwaddr': '00:1e:c9:2b:7a:5c', 'cfg': {'DEVICE': 'bond0', 'NM_CONTROLLED': 'no', 'STP': 'no', 'ONBOOT': 'yes', 'BONDING_OPTS': 'mode=802.3ad miimon=150'}, 'ipv6addrs': ['fe80::21e:c9ff:fe2b:7a5c/64'], 'mtu': '1500'}, 'bond1': {'netmask': '', 'addr': '', 'slaves': ['p3p1', 'p3p2'], 'hwaddr': '00:15:17:9b:eb:04', 'cfg': {'DEVICE': 'bond1', 'NM_CONTROLLED': 'no', 'STP': 'no', 'ONBOOT': 'yes', 'BONDING_OPTS': 'mode=802.3ad miimon=150'}, 
'ipv6addrs': ['fe80::215:17ff:fe9b:eb04/64'], 'mtu': '1500'}, 'bond2': {'netmask': '', 'addr': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'ipv6addrs': [], 'mtu': '1500'}, 'bond3': {'netmask': '', 'addr': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'ipv6addrs': [], 'mtu': '1500'}}, 'software_version': '4.13', 'memSize': '15946', 'cpuSpeed': '2992.433', 'version_name': 'Snow Man', 'vlans': {'bond0.128': {'netmask': '', 'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'VLAN': 'yes', 'MTU': '1500', 'STP': 'no', 'DEVICE': 'bond0.128', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::21e:c9ff:fe2b:7a5c/64'], 'vlanid': 128, 'mtu': '1500'}, 'bond1.131': {'netmask': '', 'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'iscsi131', 'VLAN': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'DEVICE': 'bond1.131', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::215:17ff:fe9b:eb04/64'], 'vlanid': 131, 'mtu': '1500'}, 'bond1.130': {'netmask': '', 'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'iscsi130', 'VLAN': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'DEVICE': 'bond1.130', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::215:17ff:fe9b:eb04/64'], 'vlanid': 130, 'mtu': '1500'}}, 'cpuCores': '8', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '0.0.0.0', 'cpuThreads': '8', 'emulatedMachines': [u'rhel6.4.0', u'pc', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4. 4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.2', 'version': '6', 'name': 'RHEL'}, 'lastClient': '0.0.0.0'}} Thread-21::DEBUG::2013-12-05 14:01:43,561::BindingXMLRPC::974::vds::(wrapper) client [192.168.128.79]::call getHardwareInfo with () {} ----- Original Message -----
From: "Juan Pablo Lorier" <jplorier@gmail.com>
To: "Moti Asayag" <masayag@redhat.com>
Cc: users@ovirt.org
Sent: Monday, December 16, 2013 9:22:32 PM
Subject: Re: [Users] simple networking? [SOLVED] mostly

----- Original Message -----
From: "Moti Asayag" <masayag@redhat.com>
To: "Antoni Segura Puimedon" <asegurap@redhat.com>
Cc: users@ovirt.org, "Juan Pablo Lorier" <jplorier@gmail.com>
Sent: Monday, December 16, 2013 8:43:24 PM
Subject: Re: [Users] simple networking? [SOLVED] mostly
Looking at the output of 'getCapabilities', I noticed that vdsm didn't report any value for 'lastClientIface': 'lastClientIface': ''

It seems that the value from the first 'getCapabilities' call, which the engine relies on to determine the NIC on top of which to configure the management network, is missing.

Toni, any idea in which case it might not be reported?
Sure, this is fixed now (or at least the behavior was changed). The thing is that this Caps reports the management_ip as 0.0.0.0, which leads me to believe that this is probably an all in one setup. The code for getting lastClientIface used to check for which device had assigned the management_ip, which doesn't exist in this case. If we were to use the current code, that tries to route a packet, it would behave differently. However, it would still leave us out of luck as the device that would be reported to the engine would be, if this is indeed an allinone, the loopback device.
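(Roughly speaking, the two approaches described above correspond to the following checks on the host; the commands are only an illustration of the idea, not the actual vdsm code. 192.168.128.82 and 192.168.128.79 are the host and engine addresses taken from the log above.)

# old approach: which local device, if any, carries the management IP
ip -o addr show to 192.168.128.82
# newer approach: which device a packet towards the engine would leave through
ip route get 192.168.128.79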

On Mon, Dec 16, 2013 at 06:01:51PM -0500, Antoni Segura Puimedon wrote:
Sure, this is fixed now (or at least the behavior was changed). The thing is that this Caps reports the management_ip as 0.0.0.0, which leads me to believe that this is probably an all in one setup. The code for getting lastClientIface used to check for which device had assigned the management_ip, which doesn't exist in this case.
management_ip 0.0.0.0 means very little: only that Vdsm has kept its default of listening on all interfaces. I do not see how it is related.
If we were to use the current code, that tries to route a packet, it would behave differently. However, it would still leave us out of luck as the device that would be reported to the engine would be, if this is indeed an allinone, the loopback device.
I am confused by this reasoning. vdsm.log.26.xz shows 10 calls to getCapabilities, all from 192.168.128.79. Two of them (the first included) report that odd lastClient = '0.0.0.0', and both happen to be the first call after Vdsm has started up. It smells like a race (or a more consistent fault) in how we set self.server.lastClient = self.client_address[0]. I'd appreciate a bug opened on that, for closer scrutiny.

Dan.
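(A quick way to eyeball this across a rotated log, assuming the default log directory, would be something like the following.)

# count the distinct lastClient / lastClientIface values reported in that log
xzgrep -o "'lastClient': '[^']*'" /var/log/vdsm/vdsm.log.26.xz | sort | uniq -c
xzgrep -o "'lastClientIface': '[^']*'" /var/log/vdsm/vdsm.log.26.xz | sort | uniq -c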
participants (7)
- Antoni Segura Puimedon
- Assaf Muller
- Dan Kenigsberg
- Itamar Heim
- jplorier@gmail.com
- Juan Pablo Lorier
- Moti Asayag