[Users] Problem with libvirt
Moti Asayag
masayag at redhat.com
Thu Jan 31 09:04:42 EST 2013
On 01/31/2013 03:37 PM, Juan Jose wrote:
> Hello Moti,
>
> The execution of this command in the host is:
This indicates VDSM is up and running correctly, but the ovirt-engine
can't reach it.
Can you check the connectivity from the ovirt-engine to the host (use
the same address that was used to add it to the data center)?
There may be iptables rules preventing the engine from establishing a
connection to the host.
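For example (a rough sketch; adjust the address to the one the host was
added with, and note that VDSM listens on TCP port 54321 by default):

    # from the ovirt-engine server
    ping <host-address>
    nc -zv <host-address> 54321

    # on the host, check that iptables accepts the VDSM port
    iptables -L -n | grep 54321

If the port is filtered, adding an ACCEPT rule for TCP 54321 on the host
and restarting iptables should let the engine reach VDSM again.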
>
> [root at ovirt-host ~]# vdsClient -s 0 getVdsCaps
> HBAInventory = {'iSCSI': [{'InitiatorName':
> 'iqn.1994-05.com.redhat:69e9aaf7e4c'}], 'FC': []}
> ISCSIInitiatorName = iqn.1994-05.com.redhat:69e9aaf7e4c
> bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask':
> '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '',
> 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
> '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500',
> 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond2':
> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu':
> '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
> clusterLevels = ['3.0', '3.1']
> cpuCores = 4
> cpuFlags =
> fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_coreduo,model_Conroe
> cpuModel = Intel(R) Core(TM)2 Quad CPU Q9300 @ 2.50GHz
> cpuSockets = 1
> cpuSpeed = 1999.000
> emulatedMachines = ['pc-0.15', 'pc-1.0', 'pc', 'pc-0.14', 'pc-0.13',
> 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc', 'pc-0.15', 'pc-1.0', 'pc',
> 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
> guestOverhead = 65
> hooks = {}
> kvmEnabled = true
> lastClient = xxx.xxx.xxx.91
> lastClientIface = ovirtmgmt
> management_ip =
> memSize = 7701
> netConfigDirty = False
> networks = {'ovirtmgmt': {'addr': '158.109.202.67', 'cfg': {'DELAY':
> '0', 'IPV6INIT': 'no', 'UUID': '3cbac056-822a-43e9-a4ec-df5324becd79',
> 'DEFROUTE': 'yes', 'DNS1': 'xxx.xxx.xxx.1', 'IPADDR': 'xxx.xxx.xxx.67',
> 'ONBOOT': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'BROADCAST':
> 'xxx.xxx.xxx.255', 'NM_CONTROLLED': 'no', 'NETMASK': 'xxx.xxx.xxx.0',
> 'BOOTPROTO': 'none', 'DNS2': 'xxx.xxx.xxx.9', 'DEVICE': 'ovirtmgmt',
> 'TYPE': 'Bridge', 'GATEWAY': 'xxx.xxx.xxx.1', 'NETWORK':
> 'xxx.xxx.xxx.0'}, 'mtu': '1500', 'netmask': '255.255.254.0', 'stp':
> 'off', 'bridged': True, 'gateway': 'xxx.xxx.xxx.1', 'ports': ['em1']}}
> nics = {'em1': {'hwaddr': '00:19:99:35:cc:54', 'netmask': '', 'speed':
> 1000, 'addr': '', 'mtu': '1500'}}
> operatingSystem = {'release': '1', 'version': '17', 'name': 'Fedora'}
> packages2 = {'kernel': {'release': '1.fc17.x86_64', 'buildtime':
> 1350912755.0, 'version': '3.6.3'}, 'spice-server': {'release': '5.fc17',
> 'buildtime': '1336983054', 'version': '0.10.1'}, 'vdsm': {'release':
> '10.fc17', 'buildtime': '1349383616', 'version': '4.10.0'}, 'qemu-kvm':
> {'release': '2.fc17', 'buildtime': '1349642820', 'version': '1.0.1'},
> 'libvirt': {'release': '2.fc17', 'buildtime': '1355687905', 'version':
> '0.9.11.8'}, 'qemu-img': {'release': '2.fc17', 'buildtime':
> '1349642820', 'version': '1.0.1'}}
> reservedMem = 321
> software_revision = 10
> software_version = 4.10
> supportedProtocols = ['2.2', '2.3']
> supportedRHEVMs = ['3.0', '3.1']
> uuid = 36303030-3139-3236-3800-00199935CC54_00:19:99:35:cc:54
> version_name = Snow Man
> vlans = {}
> vmTypes = ['kvm']
> [root at ovirt-host ~]#
>
> I can't execute this command on my engine because the "vdsClient" command
> is not installed.
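(vdsClient comes from the vdsm-cli package, so it is normally present only
on hosts running VDSM. From the engine side a plain reachability test of the
VDSM port is enough for this check, for example:

    nc -zv <host-address> 54321

with the same address the host was registered with.)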
>
> Many thanks in advance,
>
> Juanjo.
>
> On Thu, Jan 31, 2013 at 2:06 PM, Moti Asayag <masayag at redhat.com> wrote:
>
> On 01/31/2013 02:26 PM, Dafna Ron wrote:
>     > A non-responsive state for a host means that you either have a network
>     > issue (iptables, perhaps) or vdsmd is down.
>     > Please attach the vdsm and libvirtd logs.
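(For reference: those logs normally live on the host at
/var/log/vdsm/vdsm.log and, on a VDSM-configured host, /var/log/libvirtd.log.
Assuming the default locations, something like this captures the recent
window:

    tail -n 1000 /var/log/vdsm/vdsm.log  > vdsm-tail.log
    tail -n 1000 /var/log/libvirtd.log   > libvirtd-tail.log

Paths can differ depending on how libvirtd logging is configured.)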
> >
> >
>
>     Looking at the engine.log below, there are network issues:
>
> 2013-01-31 11:49:54,121 ERROR
> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-73) [73213e4f] vds::refreshVdsStats Failed
> getVdsStats, vds = 7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1, error
> = VDSNetworkException: VDSNetworkException:
>
>     Can you reach the host from the ovirt-engine server? Or can you check
>     that the vdsm service is running on the host by invoking:
>
> vdsClient -s 0 getVdsCaps
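(Alternatively, check the services directly on the host. On Fedora 17,
assuming the standard unit names, something like:

    systemctl status vdsmd.service
    systemctl status libvirtd.service

would confirm whether vdsmd and libvirtd are running.)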
>
> >
> > On 01/31/2013 01:21 PM, Juan Jose wrote:
> >> Hello everybody,
> >>
>     >> I have installed and configured an oVirt 3.1 engine on Fedora 17 with a
>     >> Fedora 17 node connected. I have defined an NFS domain for my VMs and
>     >> another for ISOs. I start a Fedora 17 Server VM with Run Once and the
>     >> machine starts without problems. After that I proceed with the
>     >> installation on its virtual disk, but when I get to defining partitions
>     >> on the virtual disk, the machine freezes, I start to receive engine
>     >> errors, and the default data center goes into a non-responsive status.
> >>
>     >> I can see these messages in /var/log/ovirt-engine/engine.log, which I
> >> attach to this message:
> >>
> >> ....
> >> 2013-01-31 11:43:23,957 INFO
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-47) [75664f2b] Recieved a Device without an
> >> address when processing VM da09284e-3189-428b-a879-6201f7a5ca87
> >> devices, skipping device: {shared=false,
> >> volumeID=1d0e9fdf-c4bc-4894-8ff1-7a5e185d57a4, index=0,
> >> propagateErrors=off, format=raw, type=disk, truesize=8589938688,
> >> reqsize=0, bootOrder=2, iface=virtio,
> >> volumeChain=[Ljava.lang.Object;@1ea2bdf9,
> >> imageID=49e21bfc-384b-4bea-8013-f02b1be137c7,
> >> domainID=57d184a0-908b-49b5-926f-cd413b9e6526, specParams={},
> >> optional=false, needExtend=false,
> >>
>     >> path=/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/49e21bfc-384b-4bea-8013-f02b1be137c7/1d0e9fdf-c4bc-4894-8ff1-7a5e185d57a4,
> >> device=disk, poolID=d6e7e8b8-49c7-11e2-a261-000a5e429f63,
> >> readonly=false, deviceId=49e21bfc-384b-4bea-8013-f02b1be137c7,
> >> apparentsize=8589934592}.
> >> 2013-01-31 11:43:23,960 ERROR
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-47) [75664f2b] VM
> >> da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
> >> removed unexpetedly from libvirt:
> >>
>     >> VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=4dca1c64-dbf8-4e31-b359-82cf0e259f65,Device=qxl,Type=video,BootOrder=0,SpecParams={vram=65536},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
> >> 2013-01-31 11:43:23,961 ERROR
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-47) [75664f2b] VM
> >> da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
> >> removed unexpetedly from libvirt:
> >>
>     >> VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=aba73f2f-e951-4eba-9da4-8fb58315df2c,Device=memballoon,Type=balloon,BootOrder=0,SpecParams={model=virtio},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
> >> 2013-01-31 11:43:23,962 ERROR
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-47) [75664f2b] VM
> >> da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
> >> removed unexpetedly from libvirt:
> >>
>     >> VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=9bfb770c-13fa-4bf6-9f1f-414927bc31b0,Device=cdrom,Type=disk,BootOrder=0,SpecParams={path=},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
> >> 2013-01-31 11:43:23,963 ERROR
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-47) [75664f2b] VM
> >> da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
> >> removed unexpetedly from libvirt:
> >>
>     >> VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=614bc0b4-64d8-4058-8bf8-83db62617e00,Device=bridge,Type=interface,BootOrder=0,SpecParams={},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
> >> 2013-01-31 11:43:23,964 ERROR
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-47) [75664f2b] VM
> >> da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
> >> removed unexpetedly from libvirt:
> >>
>     >> VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=49e21bfc-384b-4bea-8013-f02b1be137c7,Device=disk,Type=disk,BootOrder=0,SpecParams={},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
> >> 2013-01-31 11:43:26,063 INFO
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-24) [7d021319] VM Fedora17
> >> da09284e-3189-428b-a879-6201f7a5ca87 moved from WaitForLaunch -->
> >> PoweringUp
> >> 2013-01-31 11:43:26,064 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
> >> (QuartzScheduler_Worker-24) [7d021319] START,
>     >> FullListVdsCommand(vdsId
> >> = 7d3491e8-49ce-11e2-8b2e-000a5e429f63, vds=null,
> >> vmIds=[da09284e-3189-428b-a879-6201f7a5ca87]), log id: f68f564
> >> 2013-01-31 11:43:26,086 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
> >> (QuartzScheduler_Worker-24) [7d021319] FINISH, FullListVdsCommand,
> >> return:
> >> [Lorg.ovirt.engine.core.vdsbroker.xmlrpc.XmlRpcStruct;@33c68023, log
> >> id: f68f564
> >> 2013-01-31 11:43:26,091 ERROR
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-24) [7d021319] VM
> >> da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
> >> removed unexpetedly from libvirt:
> >>
>     >> VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=aba73f2f-e951-4eba-9da4-8fb58315df2c,Device=memballoon,Type=balloon,BootOrder=0,SpecParams={model=virtio},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
> >> 2013-01-31 11:43:26,092 ERROR
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-24) [7d021319] VM
> >> da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
> >> removed unexpetedly from libvirt:
> >>
>     >> VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=9bfb770c-13fa-4bf6-9f1f-414927bc31b0,Device=cdrom,Type=disk,BootOrder=0,SpecParams={path=},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
> >> 2013-01-31 11:43:31,721 INFO
> >> [org.ovirt.engine.core.bll.SetVmTicketCommand]
>     >> (ajp--0.0.0.0-8009-11)
> >> [28d7a789] Running command: SetVmTicketCommand internal: false.
> >> Entities affected : ID: da09284e-3189-428b-a879-6201f7a5ca87
>     >> Type: VM
> >> 2013-01-31 11:43:31,724 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
> >> (ajp--0.0.0.0-8009-11) [28d7a789] START,
>     >> SetVmTicketVDSCommand(vdsId =
> >> 7d3491e8-49ce-11e2-8b2e-000a5e429f63,
> >> vmId=da09284e-3189-428b-a879-6201f7a5ca87, ticket=qmcnuOICblb3,
> >> validTime=120,m userName=admin at internal,
> >> userId=fdfc627c-d875-11e0-90f0-83df133b58cc), log id: 6eaacb95
> >> 2013-01-31 11:43:31,758 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
> >> (ajp--0.0.0.0-8009-11) [28d7a789] FINISH, SetVmTicketVDSCommand, log
> >> id: 6eaacb95
> >>
> >> ...
> >>
> >> 2013-01-31 11:49:13,392 WARN
> >> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> >> (QuartzScheduler_Worker-81) [164eaa47] domain
> >> 57d184a0-908b-49b5-926f-cd413b9e6526 in problem. vds: host1
> >> 2013-01-31 11:49:54,121 ERROR
> >> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> >> (QuartzScheduler_Worker-73) [73213e4f] vds::refreshVdsStats Failed
> >> getVdsStats, vds = 7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1,
> >> error = VDSNetworkException: VDSNetworkException:
> >> 2013-01-31 11:49:54,172 WARN
> >> [org.ovirt.engine.core.vdsbroker.VdsManager]
> >> (QuartzScheduler_Worker-73) [73213e4f]
> >> ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
> >> 7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1, VDS Network Error,
> >> continuing.
> >> VDSNetworkException:
> >>
> >> ....
> >>
> >>
> >> In the events windows after VM freezing, I have below events:
> >>
> >> 2013-Jan-31, 11:50:52 Failed to elect Host as Storage Pool Manager
> >> for Data Center Default. Setting status to Non-Operational.
> >> 2013-Jan-31, 11:50:52 VM Fedora17 was set to the Unknown status.
> >> 2013-Jan-31, 11:50:52 Host host1 is non-responsive.
> >> 2013-Jan-31, 11:49:55 Invalid status on Data Center Default. Setting
> >> Data Center status to Non-Responsive (On host host1, Error: Network
> >> error during communication with the Host.).
> >> 2013-Jan-31, 11:44:25 VM Fedora17 started on Host host1
> >>
>     >> Any suggestions about the problem? It seems like a libvirt problem; I
>     >> will continue investigating.
> >>
>     >> Many thanks in advance,
> >>
> >> Juanjo.
> >>
> >>
> >> _______________________________________________
> >> Users mailing list
>     >> Users at ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >
> >
>
>