
On 10/02/2015 14:53, Lior Vernia wrote:
On 10/02/15 15:48, Sandro Bonazzola wrote:
On 10/02/2015 13:38, Roy Golan wrote:
On 02/10/2015 02:07 PM, Sandro Bonazzola wrote:
Hi, recently I rebased oVirt Live for 3.6 on EL7.
When I try to run a VM on it, I get:
2015-02-10 12:10:00,523 INFO [org.ovirt.engine.core.bll.RunVmCommand] (ajp--127.0.0.1-8702-7) [376139c7] Lock Acquired to object 'EngineLock [exclusiveLocks= key: ad476920-aafe-4af7-9f13-6ccbcd31f442 value: VM , sharedLocks= ]'
2015-02-10 12:10:00,537 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-7) [376139c7] START, IsVmDuringInitiatingVDSCommand( vmId = ad476920-aafe-4af7-9f13-6ccbcd31f442), log id: 6443d18a
2015-02-10 12:10:00,538 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-7) [376139c7] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 6443d18a
2015-02-10 12:10:00,546 WARN [org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit] (ajp--127.0.0.1-8702-7) [376139c7] host local_host is missing networks required by VM nics ovirtmgmt
2015-02-10 12:10:00,547 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (ajp--127.0.0.1-8702-7) [376139c7] Candidate host 'local_host' ('3260585c-e7aa-4b1f-bc91-3dc72d15cbf8') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Network' (correlation id: null)
2015-02-10 12:10:00,547 WARN [org.ovirt.engine.core.bll.RunVmCommand] (ajp--127.0.0.1-8702-7) [376139c7] CanDoAction of action 'RunVm' failed for user admin@internal. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName local_host,$filterName Network,$networkNames ovirtmgmt,VAR__DETAIL__NETWORK_MISSING,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL
2015-02-10 12:10:00,548 INFO [org.ovirt.engine.core.bll.RunVmCommand] (ajp--127.0.0.1-8702-7) [376139c7] Lock freed to object 'EngineLock [exclusiveLocks= key: ad476920-aafe-4af7-9f13-6ccbcd31f442 value: VM , sharedLocks= ]'
The oVirt management network is there, as it was on the EL6-based ISO, using a dummy device and a VDSM hook to bring it up.
what's the output of http://localhost:8080/api/hosts/3260585c-e7aa-4b1f-bc91-3dc72d15cbf8/nics
Sorry, I had to reboot, and since oVirt Live has no persistence I lost the setup. I've fully reproduced it; the host id changed:
<host_nics>
    <actions>
        <link href="/api/hosts/07915908-72ac-4fd9-8aef-e9dd3194981d/nics/setupnetworks" rel="setupnetworks"/>
    </actions>
    <host_nic href="/api/hosts/07915908-72ac-4fd9-8aef-e9dd3194981d/nics/10070ac3-174a-4cf4-a774-8aed432a909a" id="10070ac3-174a-4cf4-a774-8aed432a909a">
        <actions>
            <link href="/api/hosts/07915908-72ac-4fd9-8aef-e9dd3194981d/nics/10070ac3-174a-4cf4-a774-8aed432a909a/attach" rel="attach"/>
            <link href="/api/hosts/07915908-72ac-4fd9-8aef-e9dd3194981d/nics/10070ac3-174a-4cf4-a774-8aed432a909a/detach" rel="detach"/>
        </actions>
        <name>em1</name>
        <link href="/api/hosts/07915908-72ac-4fd9-8aef-e9dd3194981d/nics/10070ac3-174a-4cf4-a774-8aed432a909a/statistics" rel="statistics"/>
        <link href="/api/hosts/07915908-72ac-4fd9-8aef-e9dd3194981d/nics/10070ac3-174a-4cf4-a774-8aed432a909a/labels" rel="labels"/>
        <host href="/api/hosts/07915908-72ac-4fd9-8aef-e9dd3194981d" id="07915908-72ac-4fd9-8aef-e9dd3194981d"/>
        <mac address="b8:ca:3a:76:9a:43"/>
        <ip address="" netmask=""/>
        <boot_protocol>none</boot_protocol>
        <status>
            <state>up</state>
        </status>
        <mtu>1500</mtu>
        <bridged>false</bridged>
    </host_nic>
</host_nics>
[root@livecd ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether b8:ca:3a:76:9a:43 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.105/24 brd 192.168.1.255 scope global dynamic em1
       valid_lft 85809sec preferred_lft 85809sec
    inet6 fe80::baca:3aff:fe76:9a43/64 scope link
       valid_lft forever preferred_lft forever
3: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether ba:29:75:10:37:7e brd ff:ff:ff:ff:ff:ff
4: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether da:5a:78:93:ee:95 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::d85a:78ff:fe93:ee95/64 scope link
       valid_lft forever preferred_lft forever
5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 2e:56:e6:64:38:3d brd ff:ff:ff:ff:ff:ff
# cat /etc/sysconfig/network-scripts/ifcfg-dummy0
DEVICE=dummy0
BRIDGE=ovirtmgmt
ONBOOT=yes
MTU=1500
NM_CONTROLLED=no
HOTPLUG=no
PROMISC=yes
# cat /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.0.0.1
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
HOTPLUG=no
I think I've found the issue:

# ifup dummy0
ERROR : [/etc/sysconfig/network-scripts/ifup-eth] the device dummy0 is not present, delaying the initialization
[root@livecd ~]# modprobe dummy
[root@livecd ~]# ifup dummy0
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether b8:ca:3a:76:9a:43 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.105/24 brd 192.168.1.255 scope global dynamic em1
       valid_lft 85580sec preferred_lft 85580sec
    inet6 fe80::baca:3aff:fe76:9a43/64 scope link
       valid_lft forever preferred_lft forever
3: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether ba:29:75:10:37:7e brd ff:ff:ff:ff:ff:ff
4: ovirtmgmt: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::d85a:78ff:fe93:ee95/64 scope link
       valid_lft forever preferred_lft forever
5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 2e:56:e6:64:38:3d brd ff:ff:ff:ff:ff:ff
6: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noqueue state DOWN
    link/ether 62:c7:5d:59:ca:7b brd ff:ff:ff:ff:ff:ff
That would indeed be the issue. What has changed in oVirt Live that causes the dummy device to suddenly not be created?...
Looks like the dummy kernel module is no longer loaded by default on EL7, while it was on EL6.
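If that's the cause, a persistent fix for the EL7-based image might be to have systemd load the module at boot via modules-load.d, so it is present before the network scripts run. A sketch of the idea; the file name is an arbitrary choice and I haven't verified this on the live image:

```
# /etc/modules-load.d/dummy.conf
# Load the dummy interface module at boot so ifcfg-dummy0 can come up.
dummy
```

With that file in place, systemd-modules-load.service should load the module early during boot, and the earlier `modprobe dummy && ifup dummy0` workaround would no longer be needed after a restart.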
# vdsClient -s 0 getVdsCaps
HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:86a8d0a8890'}]}
ISCSIInitiatorName = 'iqn.1994-05.com.redhat:86a8d0a8890'
autoNumaBalancing = 0
bondings = {'bond0': {'active_slave': '', 'addr': '', 'cfg': {'BOOTPROTO': 'none'}, 'dhcpv4': False, 'dhcpv6': False, 'gateway': '', 'hwaddr': '52:87:0c:7c:59:e5', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'opts': {}, 'slaves': []}}
bridges = {'ovirtmgmt': {'addr': '10.0.0.1', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '10.0.0.1', 'HOTPLUG': 'no', 'IPADDR': '10.0.0.1', 'MTU': '1500', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'off', 'TYPE': 'Bridge'}, 'dhcpv4': False, 'dhcpv6': False, 'gateway': '', 'ipv4addrs': ['10.0.0.1/24'], 'ipv6addrs': ['fe80::5811:41ff:fefd:f5a9/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'opts': {'ageing_time': '30000', 'bridge_id': '8000.000000000000', 'forward_delay': '0', 'gc_timer': '10583', 'group_addr': '1:80:c2:0:0:0', 'group_fwd_mask': '0x0', 'hash_elasticity': '4', 'hash_max': '512', 'hello_time': '200', 'hello_timer': '162', 'max_age': '2000', 'multicast_last_member_count': '2', 'multicast_last_member_interval': '100', 'multicast_membership_interval': '26000', 'multicast_querier': '0', 'multicast_querier_interval': '25500', 'multicast_query_interval': '12500', 'multicast_query_response_interval': '1000', 'multicast_router': '1', 'multicast_snooping': '1', 'multicast_startup_query_count': '2', 'multicast_startup_query_interval': '3125', 'nf_call_arptables': '0', 'nf_call_ip6tables': '0', 'nf_call_iptables': '0', 'priority': '32768', 'root_id': '8000.000000000000', 'root_path_cost': '0', 'root_port': '0', 'stp_state': '0', 'tcn_timer': '0', 'topology_change': '0', 'topology_change_detected': '0', 'topology_change_timer': '0', 'vlan_filtering': '0'}, 'ports': [], 'stp': 'off'}}
clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5']
cpuCores = '4'
cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,fsgsbase,smep,erms,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270,model_SandyBridge'
cpuModel = 'Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz'
cpuSockets = '1'
cpuSpeed = '1600.125'
cpuThreads = '8'
emulatedMachines = ['pc', 'q35', 'rhel6.0.0', 'rhel6.1.0', 'rhel6.2.0', 'rhel6.3.0', 'rhel6.4.0', 'rhel6.5.0', 'none']
guestOverhead = '65'
hooks = {}
kdumpStatus = 0
kvmEnabled = 'true'
lastClient = '127.0.0.1'
lastClientIface = 'lo'
liveMerge = 'false'
memSize = '15936'
netConfigDirty = 'True'
networks = {'ovirtmgmt': {'addr': '10.0.0.1', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '10.0.0.1', 'HOTPLUG': 'no', 'IPADDR': '10.0.0.1', 'MTU': '1500', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'off', 'TYPE': 'Bridge'}, 'dhcpv4': False, 'dhcpv6': False, 'gateway': '', 'iface': 'ovirtmgmt', 'ipv4addrs': ['10.0.0.1/24'], 'ipv6addrs': ['fe80::5811:41ff:fefd:f5a9/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': [], 'stp': 'off'}}
nics = {'em1': {'addr': '192.168.1.105', 'cfg': {'BOOTPROTO': 'dhcp'}, 'dhcpv4': True, 'dhcpv6': False, 'gateway': '192.168.1.1', 'hwaddr': 'b8:ca:3a:76:9a:43', 'ipv4addrs': ['192.168.1.105/24'], 'ipv6addrs': ['fe80::baca:3aff:fe76:9a43/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'speed': 1000}}
numaNodeDistance = {'0': [10]}
numaNodes = {'0': {'cpus': [0, 1, 2, 3, 4, 5, 6, 7], 'totalMemory': '15936'}}
onlineCpus = '0,1,2,3,4,5,6,7'
operatingSystem = {'name': 'RHEL', 'release': '0.1406.el7.centos.2.5', 'version': '7'}
packages2 = {'kernel': {'buildtime': 1422551133.0, 'release': '123.20.1.el7.x86_64', 'version': '3.10.0'}, 'libvirt': {'buildtime': 1423142593, 'release': '29.el7_0.7', 'version': '1.1.1'}, 'mom': {'buildtime': 1417014144, 'release': '1.el7', 'version': '0.4.3'}, 'qemu-img': {'buildtime': 1423440359, 'release': '60.el7_0.2', 'version': '1.5.3'}, 'qemu-kvm': {'buildtime': 1423440359, 'release': '60.el7_0.2', 'version': '1.5.3'}, 'spice-server': {'buildtime': 1418043907, 'release': '5.el7_0.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1423399999, 'release': '392.git8bcc985.el7', 'version': '4.17.0'}}
reservedMem = '321'
rngSources = ['random']
selinux = {'mode': '0'}
software_revision = '392'
software_version = '4.17'
supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5']
supportedProtocols = ['2.2', '2.3']
uuid = '4C4C4544-0059-4310-8035-C4C04F595831'
version_name = 'Snow Man'
vlans = {}
vmTypes = ['kvm']
If you want, you can try it yourself by taking the latest ISO from http://jenkins.ovirt.org/job/ovirt_live_create_iso/
Can anybody advise on what's wrong here?
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com