[Users] cannot add new logical network to host
by Alan Johnson
I am trying to add a network for a sandbox environment that only certain
VMs will have access to; those VMs will not have access to the rest of
our network. This is to allow new systems to be tested in a safe
environment where they can't possibly muck with our live systems. I'm
trying to follow the instructions in section 5.4, Logical Network
Tasks <https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtual...>,
of the Admin Guide, but I keep getting an error when adding the network to the host.
Here is what I'm doing:
Under Data Centers > Default > Logical Networks > New, I create a new
logical network called sandbox; VM network is checked, VLAN tagging is
checked, and the VLAN ID is 2.
Under Clusters > Default > Logical Networks > Assign/Unassign Networks, I
check Assign but not Required and sandbox appears in the list.
Under Hosts > cloudhost01 > Network Interfaces > Setup Host Networks, I
drag sandbox to the em1 interface, which is also where ovirtmgmt is
assigned. There is an em2 interface, but it is dedicated to iSCSI
storage and has no logical networks assigned to it.
I check "Save network configuration", leave "Verify connectivity between
Host and ovirt-engine" checked, and click OK, and I get "Error: cloudhost01: -
General command validation failure."
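For comparison, when attaching a tagged VM network to em1 succeeds on a RHEL/CentOS 6 host, vdsm typically writes network scripts along these lines (a hedged sketch only; exact contents vary by vdsm version, and the file names here simply mirror the logical network name):

```
# /etc/sysconfig/network-scripts/ifcfg-em1.2  (hypothetical example)
DEVICE=em1.2
VLAN=yes
BRIDGE=sandbox
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-sandbox  (hypothetical example)
DEVICE=sandbox
TYPE=Bridge
ONBOOT=yes
```

If files like these never appear on cloudhost01, the setup is failing in the engine before anything reaches the host, which matches the validation error above.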
This may not be relevant, but in my event log I get "cloudhost01 is missing
vlan id: 2 that is expected by the cluster" warnings when I activate
cloudhost01 while the sandbox network exists. I have tried different
orderings and tweaks, all resulting in the same error. Here are
my versions:
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-config-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-engine-sdk-3.1.0.5-1.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
The rest of this message is the accompanying stack trace that shows up in
engine.log. To my lay eye it looks like an expected database column or
field is missing. Any ideas?
2012-11-14 15:34:17,332 ERROR
[org.ovirt.engine.core.bll.SetupNetworksCommand] (ajp--0.0.0.0-8009-10)
[78b1227b] Error during CanDoActionFailure.:
javax.validation.ValidationException: Call to
TraversableResolver.isReachable() threw an exception
at
org.hibernate.validator.engine.ValidatorImpl.isValidationRequired(ValidatorImpl.java:773)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateConstraint(ValidatorImpl.java:331)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateConstraintsForRedefinedDefaultGroup(ValidatorImpl.java:278)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateConstraintsForCurrentGroup(ValidatorImpl.java:260)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateInContext(ValidatorImpl.java:213)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateCascadedConstraint(ValidatorImpl.java:466)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateCascadedConstraints(ValidatorImpl.java:372)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateInContext(ValidatorImpl.java:219)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validate(ValidatorImpl.java:119)
[hibernate-validator.jar:4.0.2.GA]
at
org.ovirt.engine.core.common.utils.ValidationUtils.validateInputs(ValidationUtils.java:77)
[engine-common.jar:]
at
org.ovirt.engine.core.bll.CommandBase.validateInputs(CommandBase.java:518)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.InternalCanDoAction(CommandBase.java:486)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.ExecuteAction(CommandBase.java:261)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:336)
[engine-bll.jar:]
at org.ovirt.engine.core.bll.Backend.RunAction(Backend.java:294)
[engine-bll.jar:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.7.0_09-icedtea]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[rt.jar:1.7.0_09-icedtea]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_09-icedtea]
at java.lang.reflect.Method.invoke(Method.java:601)
[rt.jar:1.7.0_09-icedtea]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptorFactory$ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptorFactory.java:72)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:374)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.ovirt.engine.core.utils.ThreadLocalSessionCleanerInterceptor.injectWebContextToThreadLocal(ThreadLocalSessionCleanerInterceptor.java:11)
[engine-utils.jar:]
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
[:1.7.0_09-icedtea]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_09-icedtea]
at java.lang.reflect.Method.invoke(Method.java:601)
[rt.jar:1.7.0_09-icedtea]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptorFactory$ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptorFactory.java:123)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:36)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:211)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:363)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:194)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:165)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:173)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:72)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view8.RunAction(Unknown
Source) [engine-common.jar:]
at
org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.RunAction(GenericApiGWTServiceImpl.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.7.0_09-icedtea]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[rt.jar:1.7.0_09-icedtea]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_09-icedtea]
at java.lang.reflect.Method.invoke(Method.java:601)
[rt.jar:1.7.0_09-icedtea]
at
com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
at
com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:161)
at
com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:222)
at
com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:754)
[jboss-servlet-api_3.0_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
[jboss-servlet-api_3.0_spec-1.0.0.Final.jar:1.0.0.Final]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at
org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:153)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.rewrite.RewriteValve.invoke(RewriteValve.java:466)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:368)
at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:505)
at
org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(AjpProtocol.java:445)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:930)
at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_09-icedtea]
Caused by: javax.persistence.PersistenceException: Unable to find field or
method: class
org.ovirt.engine.core.common.businessentities.VdsNetworkInterface#interfaces
at
org.hibernate.ejb.util.PersistenceUtilHelper$MetadataCache.findMember(PersistenceUtilHelper.java:201)
at
org.hibernate.ejb.util.PersistenceUtilHelper$MetadataCache.getMember(PersistenceUtilHelper.java:176)
at
org.hibernate.ejb.util.PersistenceUtilHelper.get(PersistenceUtilHelper.java:89)
at
org.hibernate.ejb.util.PersistenceUtilHelper.isLoadedWithReference(PersistenceUtilHelper.java:81)
at
org.hibernate.ejb.HibernatePersistence$1.isLoadedWithReference(HibernatePersistence.java:93)
at javax.persistence.Persistence$1.isLoaded(Persistence.java:98)
[hibernate-jpa-2.0-api-1.0.1.Final.jar:1.0.1.Final]
at
org.hibernate.validator.engine.resolver.JPATraversableResolver.isReachable(JPATraversableResolver.java:33)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.resolver.DefaultTraversableResolver.isReachable(DefaultTraversableResolver.java:112)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.resolver.SingleThreadCachedTraversableResolver.isReachable(SingleThreadCachedTraversableResolver.java:47)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.isValidationRequired(ValidatorImpl.java:764)
[hibernate-validator.jar:4.0.2.GA]
... 81 more
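For what it's worth, in a nested Java trace like the one above the last "Caused by:" line is the actual failure; here it says the validation layer cannot find a field or method named `interfaces` on `VdsNetworkInterface`, which points at the engine's bean-validation metadata rather than a missing database column. A small generic helper (my own sketch, not part of oVirt) that pulls the root cause out of a pasted trace:

```python
def root_cause(trace: str) -> str:
    """Return the last 'Caused by:' line of a Java stack trace,
    or the first line if the trace has no nested cause."""
    causes = [line.strip() for line in trace.splitlines()
              if line.strip().startswith("Caused by:")]
    return causes[-1] if causes else trace.splitlines()[0].strip()

# Abbreviated version of the trace from this message.
trace = """javax.validation.ValidationException: Call to TraversableResolver.isReachable() threw an exception
    at org.hibernate.validator.engine.ValidatorImpl.isValidationRequired(ValidatorImpl.java:773)
Caused by: javax.persistence.PersistenceException: Unable to find field or method: class org.ovirt.engine.core.common.businessentities.VdsNetworkInterface#interfaces
    at org.hibernate.ejb.util.PersistenceUtilHelper$MetadataCache.findMember(PersistenceUtilHelper.java:201)"""

print(root_cause(trace))
```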
[Users] native USB redirection
by Fasil
Hi,
I am having trouble getting native USB redirection to work.
I have selected native USB support in my console options, but when I
start the VM, it returns to the Down state. The engine.log excerpt below
shows an error.
2012-11-15 21:45:42,924 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(ajp--0.0.0.0-8009-5) START, IsValidVDSCommand(storagePoolId =
9febe320-e6d5-4b91-a1c5-614c3a24ebe4, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 6dc08bad
2012-11-15 21:45:42,925 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(ajp--0.0.0.0-8009-5) FINISH, IsValidVDSCommand, return: true, log id:
6dc08bad
2012-11-15 21:45:43,003 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--0.0.0.0-8009-5) START, IsVmDuringInitiatingVDSCommand(vmId =
2aedea82-0dcf-4f93-994d-425ed01c1479), log id: 4a2ace14
2012-11-15 21:45:43,003 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--0.0.0.0-8009-5) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 4a2ace14
2012-11-15 21:45:43,051 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [2fccb65f] Lock Acquired to object EngineLock
[exclusiveLocks= key: 2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM
, sharedLocks= ]
2012-11-15 21:45:43,068 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [2fccb65f] Running command: RunVmCommand internal:
false. Entities affected : ID: 2aedea82-0dcf-4f93-994d-425ed01c1479
Type: VM
2012-11-15 21:45:43,102 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-49)
[2fccb65f] START, CreateVmVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,
vm=org.ovirt.engine.core.common.businessentities.VM@600411f2), log id:
377110c5
2012-11-15 21:45:43,104 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f] START, CreateVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,
vm=org.ovirt.engine.core.common.businessentities.VM@600411f2), log id:
4847feea
2012-11-15 21:45:43,126 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f]
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
spiceSslCipherSuite=DEFAULT,memSize=2048,kvmEnable=true,smp=1,emulatedMachine=pc,vmType=kvm,keyboardLayout=en-us,nice=0,display=qxl,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,timeOffset=0,transparentHugePages=true,vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,devices=[Ljava.util.Map;@18cc1b1b,acpiEnable=true,vmName=win73,cpuType=Westmere,custom={device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611c=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=66fa5d8f-6544-4e84-a1de-87c673c9611c,Device=ide,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x01,
function=0x1},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=ide0, device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778device_48d5a447-92e1-4cb3-81cc-104ab634ffa2=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=48d5a447-92e1-4cb3-81cc-104ab634ffa2,Device=unix,Type=channel,BootOrder=0,SpecParams={},Address={port=1,
bus=0, controller=0,
type=virtio-serial},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=channel0,
device_88276577-7921-4e39-82a2-267c8bcd3744=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=88276577-7921-4e39-82a2-267c8bcd3744,Device=usb,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x01,
function=0x2},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=usb0, device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778device_48d5a447-92e1-4cb3-81cc-104ab634ffa2device_85265b4d-e652-434e-9247-40ff1ad07e99=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=85265b4d-e652-434e-9247-40ff1ad07e99,Device=spicevmc,Type=channel,BootOrder=0,SpecParams={},Address={port=2,
bus=0, controller=0,
type=virtio-serial},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=channel1,
device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=5db33cc1-ed1d-4c21-b1bf-de0cbe76b778,Device=virtio-serial,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x05,
function=0x0},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=virtio-serial0}
2012-11-15 21:45:43,132 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f] FINISH, CreateVDSCommand, log id: 4847feea
2012-11-15 21:45:43,134 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-49)
[2fccb65f] IncreasePendingVms::CreateVmIncreasing vds local_host pending
vcpu count, now 1. Vm: win73
2012-11-15 21:45:43,184 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-49)
[2fccb65f] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id:
377110c5
2012-11-15 21:45:43,188 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [2fccb65f] Lock freed to object EngineLock
[exclusiveLocks= key: 2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM
, sharedLocks= ]
2012-11-15 21:45:44,340 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(QuartzScheduler_Worker-4) START, DestroyVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479, force=false, secondsToWait=0,
gracefully=false), log id: 39e6d7f3
2012-11-15 21:45:44,378 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(QuartzScheduler_Worker-4) FINISH, DestroyVDSCommand, log id: 39e6d7f3
2012-11-15 21:45:44,391 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) Running on vds during rerun failed vm: null
2012-11-15 21:45:44,394 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) vm win73 running in db and not running in vds
- add to rerun treatment. vds local_host
2012-11-15 21:45:44,408 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) Rerun vm
2aedea82-0dcf-4f93-994d-425ed01c1479. Called from vds local_host
2012-11-15 21:45:44,410 INFO
[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
(pool-3-thread-49) START, UpdateVdsDynamicDataVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@8fa7e67e),
log id: 4f197787
2012-11-15 21:45:44,416 INFO
[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
(pool-3-thread-49) FINISH, UpdateVdsDynamicDataVDSCommand, log id: 4f197787
2012-11-15 21:45:44,433 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) Lock Acquired to object EngineLock [exclusiveLocks=
key: 2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM
, sharedLocks= ]
2012-11-15 21:45:44,439 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(pool-3-thread-49) START, IsValidVDSCommand(storagePoolId =
9febe320-e6d5-4b91-a1c5-614c3a24ebe4, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 5daaa874
2012-11-15 21:45:44,440 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(pool-3-thread-49) FINISH, IsValidVDSCommand, return: true, log id: 5daaa874
2012-11-15 21:45:44,467 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-3-thread-49) START, IsVmDuringInitiatingVDSCommand(vmId =
2aedea82-0dcf-4f93-994d-425ed01c1479), log id: 4125b681
2012-11-15 21:45:44,467 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-3-thread-49) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 4125b681
2012-11-15 21:45:44,469 WARN [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) CanDoAction of action RunVm failed.
Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER
2012-11-15 21:45:44,470 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) Lock freed to object EngineLock [exclusiveLocks= key:
2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM
, sharedLocks= ]
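Reading the WARN line above, the repeated VAR__* entries are just message-template variables; the meaningful code is the last one, ACTION_TYPE_FAILED_VDS_VM_CLUSTER, which appears to indicate a mismatch between the VM and the cluster/host it is being started on. A tiny sketch (my own, not oVirt code) that deduplicates the Reasons list while keeping order, so the real failure code stands out:

```python
# The Reasons string exactly as it appears in the WARN line above.
reasons = ("VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,"
           "VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER")

# dict.fromkeys preserves insertion order and drops duplicates (Python 3.7+).
unique = list(dict.fromkeys(reasons.split(",")))
print(unique)
```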
But when I select legacy USB support instead, the VM boots, although I
cannot see my attached USB devices in the guest.
I have already followed the thread
http://www.mail-archive.com/users@ovirt.org/msg03822.html without any
success. There is a workaround mentioned in the last post of the thread,
by Itamar. Where can I find that workaround?
My Installation:
CentOS 6.3
oVirt 3.1
Windows 7 client
spice-gtk-0.11-11.el6_3.1.x86_64
spice-server-0.10.1-10.el6.x86_64
spice-protocol-0.10.1-5.el6.noarch
spice-xpi-2.7-20.el6.x86_64
spice-client-0.8.2-15.el6.x86_64
spice-glib-0.11-11.el6_3.1.x86_64
spice-gtk-python-0.11-11.el6_3.1.x86_64
spice-vdagent-0.8.1-3.el6.x86_64
Regards,
Fasil.
--------------030504050408020107030605
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Hi,<br>
<br>
I am having issues in getting the native USB redirection.<br>
I have selected the native USB support for my console options. When
I start the VM, it is returning to the down state. The below
engine.log shows some error.<br>
<br>
2012-11-15 21:45:42,924 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(ajp--0.0.0.0-8009-5) START, IsValidVDSCommand(storagePoolId =
9febe320-e6d5-4b91-a1c5-614c3a24ebe4, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 6dc08bad<br>
2012-11-15 21:45:42,925 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(ajp--0.0.0.0-8009-5) FINISH, IsValidVDSCommand, return: true, log
id: 6dc08bad<br>
2012-11-15 21:45:43,003 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--0.0.0.0-8009-5) START, IsVmDuringInitiatingVDSCommand(vmId =
2aedea82-0dcf-4f93-994d-425ed01c1479), log id: 4a2ace14<br>
2012-11-15 21:45:43,003 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--0.0.0.0-8009-5) FINISH, IsVmDuringInitiatingVDSCommand,
return: false, log id: 4a2ace14<br>
2012-11-15 21:45:43,051 INFO
[org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-49)
[2fccb65f] Lock Acquired to object EngineLock [exclusiveLocks= key:
2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM<br>
, sharedLocks= ]<br>
2012-11-15 21:45:43,068 INFO
[org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-49)
[2fccb65f] Running command: RunVmCommand internal: false. Entities
affected : ID: 2aedea82-0dcf-4f93-994d-425ed01c1479 Type: VM<br>
2012-11-15 21:45:43,102 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
(pool-3-thread-49) [2fccb65f] START, CreateVmVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,
vm=org.ovirt.engine.core.common.businessentities.VM@600411f2), log
id: 377110c5<br>
2012-11-15 21:45:43,104 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f] START, CreateVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,
vm=org.ovirt.engine.core.common.businessentities.VM@600411f2), log
id: 4847feea<br>
2012-11-15 21:45:43,126 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f]
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
spiceSslCipherSuite=DEFAULT,memSize=2048,kvmEnable=true,smp=1,emulatedMachine=pc,vmType=kvm,keyboardLayout=en-us,nice=0,display=qxl,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,timeOffset=0,transparentHugePages=true,vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,devices=[Ljava.util.Map;@18cc1b1b,acpiEnable=true,vmName=win73,cpuType=Westmere,custom={device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611c=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=66fa5d8f-6544-4e84-a1de-87c673c9611c,Device=ide,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x01,
function=0x1},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=ide0,
device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778device_48d5a447-92e1-4cb3-81cc-104ab634ffa2=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=48d5a447-92e1-4cb3-81cc-104ab634ffa2,Device=unix,Type=channel,BootOrder=0,SpecParams={},Address={port=1,
bus=0, controller=0,
type=virtio-serial},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=channel0,
device_88276577-7921-4e39-82a2-267c8bcd3744=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=88276577-7921-4e39-82a2-267c8bcd3744,Device=usb,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x01,
function=0x2},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=usb0,
device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778device_48d5a447-92e1-4cb3-81cc-104ab634ffa2device_85265b4d-e652-434e-9247-40ff1ad07e99=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=85265b4d-e652-434e-9247-40ff1ad07e99,Device=spicevmc,Type=channel,BootOrder=0,SpecParams={},Address={port=2,
bus=0, controller=0,
type=virtio-serial},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=channel1,
device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=5db33cc1-ed1d-4c21-b1bf-de0cbe76b778,Device=virtio-serial,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x05,
function=0x0},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=virtio-serial0}<br>
2012-11-15 21:45:43,132 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f] FINISH, CreateVDSCommand, log id:
4847feea<br>
2012-11-15 21:45:43,134 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
(pool-3-thread-49) [2fccb65f] IncreasePendingVms::CreateVmIncreasing
vds local_host pending vcpu count, now 1. Vm: win73<br>
2012-11-15 21:45:43,184 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
(pool-3-thread-49) [2fccb65f] FINISH, CreateVmVDSCommand, return:
WaitForLaunch, log id: 377110c5<br>
2012-11-15 21:45:43,188 INFO
[org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-49)
[2fccb65f] Lock freed to object EngineLock [exclusiveLocks= key:
2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM<br>
, sharedLocks= ]<br>
2012-11-15 21:45:44,340 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(QuartzScheduler_Worker-4) START, DestroyVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479, force=false,
secondsToWait=0, gracefully=false), log id: 39e6d7f3<br>
2012-11-15 21:45:44,378 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(QuartzScheduler_Worker-4) FINISH, DestroyVDSCommand, log id:
39e6d7f3<br>
2012-11-15 21:45:44,391 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) Running on vds during rerun failed vm:
null<br>
2012-11-15 21:45:44,394 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) vm win73 running in db and not running in
vds - add to rerun treatment. vds local_host<br>
2012-11-15 21:45:44,408 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) Rerun vm
2aedea82-0dcf-4f93-994d-425ed01c1479. Called from vds local_host<br>
2012-11-15 21:45:44,410 INFO
[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
(pool-3-thread-49) START, UpdateVdsDynamicDataVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@8fa7e67e),
log id: 4f197787<br>
2012-11-15 21:45:44,416 INFO
[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
(pool-3-thread-49) FINISH, UpdateVdsDynamicDataVDSCommand, log id:
4f197787<br>
2012-11-15 21:45:44,433 INFO
[org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-49) Lock
Acquired to object EngineLock [exclusiveLocks= key:
2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM<br>
, sharedLocks= ]<br>
2012-11-15 21:45:44,439 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(pool-3-thread-49) START, IsValidVDSCommand(storagePoolId =
9febe320-e6d5-4b91-a1c5-614c3a24ebe4, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 5daaa874<br>
2012-11-15 21:45:44,440 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(pool-3-thread-49) FINISH, IsValidVDSCommand, return: true, log id:
5daaa874<br>
2012-11-15 21:45:44,467 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-3-thread-49) START, IsVmDuringInitiatingVDSCommand(vmId =
2aedea82-0dcf-4f93-994d-425ed01c1479), log id: 4125b681<br>
2012-11-15 21:45:44,467 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-3-thread-49) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 4125b681<br>
2012-11-15 21:45:44,469 WARN
[org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-49)
CanDoAction of action RunVm failed.
Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER<br>
2012-11-15 21:45:44,470 INFO
[org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-49) Lock
freed to object EngineLock [exclusiveLocks= key:
2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM<br>
, sharedLocks= ]<br>
But when I select legacy USB support, the system boots, but I cannot
see my attached USB devices in my guests.
I have already followed the thread
http://www.mail-archive.com/users@ovirt.org/msg03822.html
without any success. There is a workaround mentioned in the last
post of the thread by Itamar. Where can I find that workaround?

My Installation:
CentOS 6.3
oVirt 3.1
Windows 7 client
spice-gtk-0.11-11.el6_3.1.x86_64
spice-server-0.10.1-10.el6.x86_64
spice-protocol-0.10.1-5.el6.noarch
spice-xpi-2.7-20.el6.x86_64
spice-client-0.8.2-15.el6.x86_64
spice-glib-0.11-11.el6_3.1.x86_64
spice-gtk-python-0.11-11.el6_3.1.x86_64
spice-vdagent-0.8.1-3.el6.x86_64

Regards,
Fasil.
Re: [Users] could not add local storage domain
by Jorick Astrego
>
> 2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: storage-path: /data/ovirt/vdsm
> 2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: superuser-pass: ********
> 2012-11-03 19:19:22::ERROR::engine-setup::2376::root:: Traceback (most recent call last):
> File "/bin/engine-setup", line 2370, in <module>
> main(confFile)
> File "/bin/engine-setup", line 2159, in main
> runSequences()
> File "/bin/engine-setup", line 2105, in runSequences
> controller.runAllSequences()
> File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in runAllSequences
> sequence.run()
> File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in run
> step.run()
> File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
> function()
> File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 290, in addStorageDomain
> raise Exception(ERROR_ADD_LOCAL_DOMAIN)
> Exception: Error: could not add local storage domain
>
> XMLSyntaxError: Space required after the Public Identifier, line 1, column 47 looks somewhat strange to me.
>
> Any hint what causes this error?
>
> Thanks,
>
> Christian
>
> P.S.: The installation failed several times before that, until I figured out that engine-setup needs to log in via ssh; we had configured sshd to allow only public key auth, and this raised an error.
>
> did this get resolved?
>
>
> -
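Regarding the quoted XMLSyntaxError: "Space required after the Public Identifier" is the message libxml2 emits when it trips over a malformed DOCTYPE, which in practice usually means an HTML page (often a server error page) was handed to an XML parser. This is only a guess at what engine-setup actually hit, but the general failure mode is easy to reproduce with the standard-library parser (the exact message differs, since expat rather than libxml2 is used here):

```python
import xml.etree.ElementTree as ET

# Hypothetical reproduction: the client expects an XML response, but the
# server answers with an HTML error page. HTML is not well-formed XML
# (note the unclosed <br> below), so the XML parser fails immediately;
# libxml2 would report something like "Space required after the Public
# Identifier" when the HTML DOCTYPE is malformed.
html_error_page = '<html><body>Service Unavailable<br>try again later</body></html>'

try:
    ET.fromstring(html_error_page)
    parsed = True
except ET.ParseError as err:
    parsed = False
    print('XML parse failed:', err)
```

If that guess is right, the error in engine-setup would point at the REST API returning an HTML error page rather than at the XML content itself.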
I'm not the original submitter of this issue, but I have exactly the
same problem with the latest nightly all-in-one installation.
We don't use public key auth for sshd on this machine, so that's not the
problem. This is what I see in vdsm.log:
MainThread::INFO::2012-11-14 12:45:51,444::vdsm::88::vds::(run) I am the
actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)
MainThread::DEBUG::2012-11-14
12:45:51,812::resourceManager::379::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-11-14
12:45:51,813::threadPool::33::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-11-14
12:45:51,856::multipath::115::Storage.Multipath::(isEnabled) multipath
Defaulting to False
MainThread::DEBUG::2012-11-14
12:45:51,857::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/cp /tmp/tmpVVMg7O /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-11-14
12:45:51,942::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::DEBUG::2012-11-14
12:45:51,944::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/multipath -F' (cwd None)
MainThread::DEBUG::2012-11-14
12:45:51,975::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
''; <rc> = 1
MainThread::DEBUG::2012-11-14
12:45:51,976::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service multipathd restart' (cwd None)
MainThread::DEBUG::2012-11-14
12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl restart multipathd.service\n'; <rc> = 0
MainThread::DEBUG::2012-11-14
12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-11-14
12:45:52,241::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::DEBUG::2012-11-14
12:45:52,242::hsm::407::Storage.HSM::(__cleanStorageRepository) Started
cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2012-11-14
12:45:52,253::hsm::439::Storage.HSM::(__cleanStorageRepository) White
list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*',
'/rhev/data-center/mnt']
MainThread::DEBUG::2012-11-14
12:45:52,253::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount
list: []
MainThread::DEBUG::2012-11-14
12:45:52,254::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning
leftovers
MainThread::DEBUG::2012-11-14
12:45:52,258::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished
cleaning storage repository at '/rhev/data-center'
Thread-12::DEBUG::2012-11-14
12:45:52,259::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' got the operation mutex
MainThread::INFO::2012-11-14
12:45:52,260::dispatcher::95::Storage.Dispatcher::(__init__) Starting
StorageDispatcher...
Thread-12::DEBUG::2012-11-14
12:45:52,266::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)
MainThread::WARNING::2012-11-14
12:45:52,300::clientIF::197::vds::(_prepareMOM) MOM initialization
failed and fall back to KsmMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
    self.mom = MomThread(momconf)
  File "/usr/share/vdsm/momIF.py", line 34, in __init__
    raise Exception("MOM is not available")
Exception: MOM is not available
MainThread::DEBUG::2012-11-14
12:45:52,304::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/pgrep -xf
ksmd' (cwd None)
Thread-12::DEBUG::2012-11-14
12:45:52,340::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:45:52,341::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:45:52,342::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:45:52,343::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)
MainThread::DEBUG::2012-11-14
12:45:52,353::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::INFO::2012-11-14 12:45:52,354::ksm::43::vds::(__init__)
starting ksm monitor thread, ksm pid is 40
KsmMonitor::DEBUG::2012-11-14
12:45:52,355::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service ksmtuned start' (cwd None)
MainThread::INFO::2012-11-14
12:45:52,367::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.
VM Channels Listener::INFO::2012-11-14
12:45:52,368::vmChannels::127::vds::(run) Starting VM channels listener
thread.
MainThread::WARNING::2012-11-14
12:45:52,375::clientIF::182::vds::(_prepareBindings) Unable to load the
rest server module. Please make sure it is installed.
MainThread::WARNING::2012-11-14
12:45:52,376::clientIF::188::vds::(_prepareBindings) Unable to load the
json rpc server module. Please make sure it is installed.
Thread-12::DEBUG::2012-11-14
12:45:52,398::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:45:52,399::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:45:52,401::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags'
(cwd None)
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service ksm start' (cwd None)
Thread-12::DEBUG::2012-11-14
12:45:52,457::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/iscsiadm -m session -R' (cwd None)
Thread-12::DEBUG::2012-11-14
12:45:52,477::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
'iscsiadm: No session found.\n'; <rc> = 21
Thread-12::DEBUG::2012-11-14
12:45:52,477::misc::1036::SamplingMethod::(__call__) Returning last result
Thread-12::DEBUG::2012-11-14
12:45:52,478::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm
Thread-12::DEBUG::2012-11-14
12:45:52,478::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/python /usr/share/vdsm/supervdsmServer.py
c9c732a0-065b-4634-8bb4-fbcd2081de16 11360' (cwd None)
KsmMonitor::DEBUG::2012-11-14
12:45:52,486::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl start ksm.service\n'; <rc> = 0
MainThread::DEBUG::2012-11-14
12:45:52,669::supervdsmServer::324::SuperVdsm.Server::(main) Making sure
I'm root
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd
args
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::331::SuperVdsm.Server::(main) Creating
PID file
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning
old socket
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::342::SuperVdsm.Server::(main) Setting up
keep alive thread
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::348::SuperVdsm.Server::(main) Creating
remote object manager
MainThread::DEBUG::2012-11-14
12:45:52,671::supervdsmServer::360::SuperVdsm.Server::(main) Started
serving super vdsm object
Thread-14::DEBUG::2012-11-14
12:45:53,732::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}
Thread-14::DEBUG::2012-11-14
12:45:53,902::BindingXMLRPC::910::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}],
'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64',
'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server':
{'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'},
'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L,
'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release':
'19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel':
'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'],
'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt':
{'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED':
'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp':
'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']},
'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu':
'1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True,
'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr':
'192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0':
{'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg':
{'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e',
'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR':
'00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '',
'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision':
'0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}},
'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '3200.000',
'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true',
'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man',
'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0',
u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'],
'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'},
'lastClient': '0.0.0.0'}}
Thread-15::DEBUG::2012-11-14
12:45:54,148::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}
Thread-15::DEBUG::2012-11-14
12:45:54,173::BindingXMLRPC::910::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}],
'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64',
'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server':
{'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'},
'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L,
'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release':
'19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel':
'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'],
'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt':
{'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED':
'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp':
'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']},
'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu':
'1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True,
'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr':
'192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0':
{'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg':
{'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e',
'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR':
'00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '',
'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision':
'0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}},
'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '800.000',
'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true',
'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man',
'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0',
u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'],
'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'},
'lastClient': '192.168.122.1'}}
MainThread::INFO::2012-11-14 12:45:55,916::vdsm::88::vds::(run) I am the
actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)
MainThread::DEBUG::2012-11-14
12:46:08,422::resourceManager::379::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-11-14
12:46:08,423::threadPool::33::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::WARNING::2012-11-14
12:46:08,431::fileUtils::184::fileUtils::(createdir) Dir
/rhev/data-center/mnt already exists
MainThread::DEBUG::2012-11-14
12:46:08,467::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm
MainThread::DEBUG::2012-11-14
12:46:08,467::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/python /usr/share/vdsm/supervdsmServer.py
d5652547-2838-4900-8e62-5191bf37c460 11918' (cwd None)
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::324::SuperVdsm.Server::(main) Making sure
I'm root
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd
args
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::331::SuperVdsm.Server::(main) Creating
PID file
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning
old socket
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::342::SuperVdsm.Server::(main) Setting up
keep alive thread
MainThread::DEBUG::2012-11-14
12:46:08,635::supervdsmServer::348::SuperVdsm.Server::(main) Creating
remote object manager
MainThread::DEBUG::2012-11-14
12:46:08,636::supervdsmServer::360::SuperVdsm.Server::(main) Started
serving super vdsm object
MainThread::DEBUG::2012-11-14
12:46:10,475::supervdsm::161::SuperVdsmProxy::(_connect) Trying to
connect to Super Vdsm
MainThread::DEBUG::2012-11-14
12:46:10,549::multipath::106::Storage.Multipath::(isEnabled) Current
revision of multipath.conf detected, preserving
MainThread::DEBUG::2012-11-14
12:46:10,549::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-11-14
12:46:10,621::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::DEBUG::2012-11-14
12:46:10,623::hsm::407::Storage.HSM::(__cleanStorageRepository) Started
cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::439::Storage.HSM::(__cleanStorageRepository) White
list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*',
'/rhev/data-center/mnt']
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount
list: []
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning
leftovers
MainThread::DEBUG::2012-11-14
12:46:10,636::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished
cleaning storage repository at '/rhev/data-center'
MainThread::INFO::2012-11-14
12:46:10,638::dispatcher::95::Storage.Dispatcher::(__init__) Starting
StorageDispatcher...
Thread-12::DEBUG::2012-11-14
12:46:10,638::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,643::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)
MainThread::WARNING::2012-11-14
12:46:10,688::clientIF::197::vds::(_prepareMOM) MOM initialization
failed and fall back to KsmMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
    self.mom = MomThread(momconf)
  File "/usr/share/vdsm/momIF.py", line 34, in __init__
    raise Exception("MOM is not available")
Exception: MOM is not available
MainThread::DEBUG::2012-11-14
12:46:10,690::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/pgrep -xf
ksmd' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:10,710::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,710::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,711::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,711::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)
MainThread::DEBUG::2012-11-14
12:46:10,712::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::INFO::2012-11-14 12:46:10,721::ksm::43::vds::(__init__)
starting ksm monitor thread, ksm pid is 40
KsmMonitor::DEBUG::2012-11-14
12:46:10,722::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service ksmtuned start' (cwd None)
MainThread::INFO::2012-11-14
12:46:10,724::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.
VM Channels Listener::INFO::2012-11-14
12:46:10,738::vmChannels::127::vds::(run) Starting VM channels listener
thread.
MainThread::WARNING::2012-11-14
12:46:10,747::clientIF::182::vds::(_prepareBindings) Unable to load the
rest server module. Please make sure it is installed.
MainThread::WARNING::2012-11-14
12:46:10,747::clientIF::188::vds::(_prepareBindings) Unable to load the
json rpc server module. Please make sure it is installed.
Thread-12::DEBUG::2012-11-14
12:46:10,767::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,768::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,770::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags'
(cwd None)
KsmMonitor::DEBUG::2012-11-14
12:46:10,782::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0
KsmMonitor::DEBUG::2012-11-14
12:46:10,783::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service ksm start' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:10,823::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,824::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-12::DEBUG::2012-11-14
12:46:10,824::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-12::DEBUG::2012-11-14
12:46:10,825::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-12::DEBUG::2012-11-14
12:46:10,825::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-12::DEBUG::2012-11-14
12:46:10,826::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/iscsiadm -m session -R' (cwd None)
KsmMonitor::DEBUG::2012-11-14
12:46:10,840::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl start ksm.service\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,852::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
'iscsiadm: No session found.\n'; <rc> = 21
Thread-12::DEBUG::2012-11-14
12:46:10,852::misc::1036::SamplingMethod::(__call__) Returning last result
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,858::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host0/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,882::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host1/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,891::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host2/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,898::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host3/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,905::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host4/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,913::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host5/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,922::iscsi::388::Storage.ISCSI::(forceIScsiScan) Performing
SCSI scan, this will take up to 30 seconds
Thread-14::DEBUG::2012-11-14
12:46:12,615::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}
Thread-14::DEBUG::2012-11-14
12:46:12,777::BindingXMLRPC::910::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}],
'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64',
'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server':
{'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'},
'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L,
'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release':
'19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel':
'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'],
'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt':
{'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED':
'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp':
'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']},
'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu':
'1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True,
'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr':
'192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0':
{'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg':
{'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e',
'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR':
'00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '',
'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision':
'0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1':
{'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize':
'7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {},
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines':
[u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc',
u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14',
u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'],
'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'},
'lastClient': '0.0.0.0'}}
Thread-12::DEBUG::2012-11-14
12:46:12,926::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/multipath' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:12,990::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:12,990::lvm::477::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::479::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::488::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::490::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::508::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::510::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::misc::1036::SamplingMethod::(__call__) Returning last result
Thread-16::DEBUG::2012-11-14
12:46:14,043::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-16::DEBUG::2012-11-14
12:46:14,044::task::568::TaskManager.Task::(_updateState)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state init ->
state preparing
Thread-16::INFO::2012-11-14
12:46:14,045::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=4,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
'/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******',
'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-16::INFO::2012-11-14
12:46:14,045::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::1151::TaskManager.Task::(prepare)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::finished: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::568::TaskManager.Task::(_updateState)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state preparing
-> state finished
Thread-16::DEBUG::2012-11-14
12:46:14,045::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-16::DEBUG::2012-11-14
12:46:14,045::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::957::TaskManager.Task::(_decref)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::ref 0 aborting False
Thread-17::DEBUG::2012-11-14
12:46:14,128::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-17::DEBUG::2012-11-14
12:46:14,129::task::568::TaskManager.Task::(_updateState)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state init ->
state preparing
Thread-17::INFO::2012-11-14
12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=4,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
'/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******',
'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14
12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout
Thread-17::INFO::2012-11-14
12:46:14,231::logUtils::39::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 100,
'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14
12:46:14,231::task::1151::TaskManager.Task::(prepare)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::finished: {'statuslist':
[{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14
12:46:14,232::task::568::TaskManager.Task::(_updateState)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state preparing
-> state finished
Thread-17::DEBUG::2012-11-14
12:46:14,232::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-17::DEBUG::2012-11-14
12:46:14,233::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
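The Timeout in the traceback above is raised while vdsm's remote file handler waits for the helper process that validates access to the '/data' storage path. For anyone who wants to sanity-check the path by hand, here is a minimal sketch of that kind of directory-access check; the function name and exact semantics are my assumption for illustration, not vdsm's actual code:

```python
import os
import tempfile

def validate_dir_access(path):
    # Sketch of the kind of check fileSD.validateDirAccess performs:
    # the storage path must be a directory that the calling process can
    # read, write, and traverse. Semantics assumed, not vdsm's code.
    return os.path.isdir(path) and os.access(path, os.R_OK | os.W_OK | os.X_OK)

# A freshly created temporary directory should pass the check.
with tempfile.TemporaryDirectory() as d:
    print(validate_dir_access(d))  # prints True for an accessible directory
```

Note that vdsm runs the real check as the vdsm user, so a path that passes for root can still fail (or hang on an unresponsive mount) for the daemon.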
Kind regards,
Jorick Astrego
Netbulae B.V.
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<blockquote type="cite"><br>
<pre wrap="">2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: storage-path: /data/ovirt/vdsm
2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: superuser-pass: ********
2012-11-03 19:19:22::ERROR::engine-setup::2376::root:: Traceback (most recent call last):
  File "/bin/engine-setup", line 2370, in <module>
    main(confFile)
  File "/bin/engine-setup", line 2159, in main
    runSequences()
  File "/bin/engine-setup", line 2105, in runSequences
    controller.runAllSequences()
  File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in runAllSequences
    sequence.run()
  File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in run
    step.run()
  File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
    function()
  File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 290, in addStorageDomain
    raise Exception(ERROR_ADD_LOCAL_DOMAIN)
Exception: Error: could not add local storage domain
The message "XMLSyntaxError: Space required after the Public Identifier, line 1, column 47" looks somewhat strange to me.
Any hint as to what causes this error?
Thanks,
Christian
P.S.: The installation failed several times before that, until I figured out that engine-setup needs to log in via ssh; we had configured sshd to allow only public key auth, and this raised an error.
</pre>
</blockquote>
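On the ssh point Christian raised: the all-in-one engine-setup apparently logs in to the local host over ssh, so sshd must accept an authentication method the installer can use. If no key is deployed for the setup user, temporarily allowing password authentication in /etc/ssh/sshd_config avoids the failure (these are standard OpenSSH directives; whether engine-setup can be pointed at a key instead is not something this thread confirms):

```
# /etc/ssh/sshd_config -- temporarily allow password auth for the
# engine-setup ssh login; revert after installation if your policy
# requires key-only access.
PasswordAuthentication yes
```

Reload sshd afterwards for the change to take effect.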
<blockquote cite="mid:mailman.6600.1352797990.6397.users@ovirt.org"
type="cite">
<blockquote type="cite">
<pre wrap="">
</pre>
</blockquote>
<pre wrap="">
did this get resolved?
-</pre>
</blockquote>
I'm not the original submitter of this issue, but I have exactly the
same problem with the latest nightly all-in-one installation. <br>
<br>
We don't use public key auth for sshd on this machine, so that's not
the problem. This is what I see in the vdsm.log:<br>
<br>
MainThread::INFO::2012-11-14 12:45:51,444::vdsm::88::vds::(run) I am
the actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)<br>
MainThread::DEBUG::2012-11-14
12:45:51,812::resourceManager::379::ResourceManager::(registerNamespace)
Registering namespace 'Storage'<br>
MainThread::DEBUG::2012-11-14
12:45:51,813::threadPool::33::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0<br>
MainThread::DEBUG::2012-11-14
12:45:51,856::multipath::115::Storage.Multipath::(isEnabled)
multipath Defaulting to False<br>
MainThread::DEBUG::2012-11-14
12:45:51,857::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /bin/cp /tmp/tmpVVMg7O /etc/multipath.conf' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:51,942::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:45:51,944::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/multipath -F' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:51,975::misc::84::Storage.Misc.excCmd::(<lambda>)
FAILED: <err> = ''; <rc> = 1<br>
MainThread::DEBUG::2012-11-14
12:45:51,976::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service multipathd restart' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl restart
multipathd.service\n'; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:52,241::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:45:52,242::hsm::407::Storage.HSM::(__cleanStorageRepository)
Started cleaning storage repository at '/rhev/data-center'<br>
MainThread::DEBUG::2012-11-14
12:45:52,253::hsm::439::Storage.HSM::(__cleanStorageRepository)
White list: ['/rhev/data-center/hsm-tasks',
'/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']<br>
MainThread::DEBUG::2012-11-14
12:45:52,253::hsm::440::Storage.HSM::(__cleanStorageRepository)
Mount list: []<br>
MainThread::DEBUG::2012-11-14
12:45:52,254::hsm::442::Storage.HSM::(__cleanStorageRepository)
Cleaning leftovers<br>
MainThread::DEBUG::2012-11-14
12:45:52,258::hsm::485::Storage.HSM::(__cleanStorageRepository)
Finished cleaning storage repository at '/rhev/data-center'<br>
Thread-12::DEBUG::2012-11-14
12:45:52,259::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' got the operation mutex<br>
MainThread::INFO::2012-11-14
12:45:52,260::dispatcher::95::Storage.Dispatcher::(__init__)
Starting StorageDispatcher...<br>
Thread-12::DEBUG::2012-11-14
12:45:52,266::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o <b>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)</b><b><br>
</b><b>MainThread::WARNING::2012-11-14
12:45:52,300::clientIF::197::vds::(_prepareMOM) MOM initialization
failed and fall back to KsmMonitor</b><b><br>
</b><b>Traceback (most recent call last):</b><b><br>
</b><b> File "/usr/share/vdsm/clientIF.py", line 194, in
_prepareMOM</b><b><br>
</b><b> self.mom = MomThread(momconf)</b><b><br>
</b><b> File "/usr/share/vdsm/momIF.py", line 34, in __init__</b><b><br>
</b><b> raise Exception("MOM is not available")</b><b><br>
</b><b>Exception: MOM is not available</b><br>
MainThread::DEBUG::2012-11-14
12:45:52,304::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/pgrep -xf ksmd' (cwd None)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,340::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:45:52,341::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' released the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:45:52,342::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:45:52,343::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:52,353::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
MainThread::INFO::2012-11-14 12:45:52,354::ksm::43::vds::(__init__)
starting ksm monitor thread, ksm pid is 40<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,355::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service ksmtuned start' (cwd None)<br>
MainThread::INFO::2012-11-14
12:45:52,367::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.<br>
VM Channels Listener::INFO::2012-11-14
12:45:52,368::vmChannels::127::vds::(run) Starting VM channels
listener thread.<br>
<b>MainThread::WARNING::2012-11-14
12:45:52,375::clientIF::182::vds::(_prepareBindings) Unable to
load the rest server module. Please make sure it is installed.</b><b><br>
</b><b>MainThread::WARNING::2012-11-14
12:45:52,376::clientIF::188::vds::(_prepareBindings) Unable to
load the json rpc server module. Please make sure it is installed.</b><br>
Thread-12::DEBUG::2012-11-14
12:45:52,398::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:45:52,399::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:45:52,401::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksmtuned.service\n'; <rc> = 0<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service ksm start' (cwd None)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,457::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to
sampling method<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to
sampling method<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,477::misc::84::Storage.Misc.excCmd::(<lambda>)
FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> =
21<br>
Thread-12::DEBUG::2012-11-14
12:45:52,477::misc::1036::SamplingMethod::(__call__) Returning last
result<br>
Thread-12::DEBUG::2012-11-14
12:45:52,478::supervdsm::107::SuperVdsmProxy::(_start) Launching
Super Vdsm<br>
Thread-12::DEBUG::2012-11-14
12:45:52,478::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py
c9c732a0-065b-4634-8bb4-fbcd2081de16 11360' (cwd None)<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,486::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksm.service\n'; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:45:52,669::supervdsmServer::324::SuperVdsm.Server::(main) Making
sure I'm root<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::328::SuperVdsm.Server::(main) Parsing
cmd args<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::331::SuperVdsm.Server::(main)
Creating PID file<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::338::SuperVdsm.Server::(main)
Cleaning old socket<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::342::SuperVdsm.Server::(main) Setting
up keep alive thread<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::348::SuperVdsm.Server::(main)
Creating remote object manager<br>
MainThread::DEBUG::2012-11-14
12:45:52,671::supervdsmServer::360::SuperVdsm.Server::(main) Started
serving super vdsm object<br>
Thread-14::DEBUG::2012-11-14
12:45:53,732::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}<br>
Thread-14::DEBUG::2012-11-14
12:45:53,902::BindingXMLRPC::910::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2':
{'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0,
'version': '3.6.6'}, 'spice-server': {'release': '1.fc17',
'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release':
'0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version':
'4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img':
{'release': '19.fc17', 'buildtime': 1351915579L, 'version':
'1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor',
'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2',
'2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt',
'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway':
'192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0',
'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0',
'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21',
'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']},
'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '',
'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no',
'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0',
'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1',
'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED':
'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT':
'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5',
'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels':
['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu':
'1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize':
'7734', 'cpuSpeed': '3200.000', 'cpuSockets': '1', 'vlans': {},
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines':
[u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0',
u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11',
u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1',
'version': '17', 'name': 'Fedora'}, 'lastClient': '0.0.0.0'}}<br>
Thread-15::DEBUG::2012-11-14
12:45:54,148::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}<br>
Thread-15::DEBUG::2012-11-14
12:45:54,173::BindingXMLRPC::910::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2':
{'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0,
'version': '3.6.6'}, 'spice-server': {'release': '1.fc17',
'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release':
'0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version':
'4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img':
{'release': '19.fc17', 'buildtime': 1351915579L, 'version':
'1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor',
'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2',
'2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt',
'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway':
'192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0',
'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0',
'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21',
'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']},
'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '',
'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no',
'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0',
'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1',
'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED':
'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT':
'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5',
'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels':
['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu':
'1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize':
'7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {},
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines':
[u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0',
u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11',
u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1',
'version': '17', 'name': 'Fedora'}, 'lastClient': '192.168.122.1'}}<br>
MainThread::INFO::2012-11-14 12:45:55,916::vdsm::88::vds::(run) I am
the actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)<br>
MainThread::DEBUG::2012-11-14
12:46:08,422::resourceManager::379::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-11-14
12:46:08,423::threadPool::33::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::WARNING::2012-11-14
12:46:08,431::fileUtils::184::fileUtils::(createdir) Dir
/rhev/data-center/mnt already exists
MainThread::DEBUG::2012-11-14
12:46:08,467::supervdsm::107::SuperVdsmProxy::(_start) Launching
Super Vdsm
MainThread::DEBUG::2012-11-14
12:46:08,467::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py
d5652547-2838-4900-8e62-5191bf37c460 11918' (cwd None)
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::324::SuperVdsm.Server::(main) Making
sure I'm root
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::328::SuperVdsm.Server::(main) Parsing
cmd args
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::331::SuperVdsm.Server::(main)
Creating PID file
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::338::SuperVdsm.Server::(main)
Cleaning old socket
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::342::SuperVdsm.Server::(main) Setting
up keep alive thread
MainThread::DEBUG::2012-11-14
12:46:08,635::supervdsmServer::348::SuperVdsm.Server::(main)
Creating remote object manager
MainThread::DEBUG::2012-11-14
12:46:08,636::supervdsmServer::360::SuperVdsm.Server::(main) Started
serving super vdsm object
MainThread::DEBUG::2012-11-14
12:46:10,475::supervdsm::161::SuperVdsmProxy::(_connect) Trying to
connect to Super Vdsm
MainThread::DEBUG::2012-11-14
12:46:10,549::multipath::106::Storage.Multipath::(isEnabled) Current
revision of multipath.conf detected, preserving
MainThread::DEBUG::2012-11-14
12:46:10,549::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-11-14
12:46:10,621::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-11-14
12:46:10,623::hsm::407::Storage.HSM::(__cleanStorageRepository)
Started cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::439::Storage.HSM::(__cleanStorageRepository)
White list: ['/rhev/data-center/hsm-tasks',
'/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::440::Storage.HSM::(__cleanStorageRepository)
Mount list: []
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::442::Storage.HSM::(__cleanStorageRepository)
Cleaning leftovers
MainThread::DEBUG::2012-11-14
12:46:10,636::hsm::485::Storage.HSM::(__cleanStorageRepository)
Finished cleaning storage repository at '/rhev/data-center'
MainThread::INFO::2012-11-14
12:46:10,638::dispatcher::95::Storage.Dispatcher::(__init__)
Starting StorageDispatcher...
Thread-12::DEBUG::2012-11-14
12:46:10,638::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,643::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)
MainThread::WARNING::2012-11-14
12:46:10,688::clientIF::197::vds::(_prepareMOM) MOM initialization
failed and fall back to KsmMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
    self.mom = MomThread(momconf)
  File "/usr/share/vdsm/momIF.py", line 34, in __init__
    raise Exception("MOM is not available")
Exception: MOM is not available
MainThread::DEBUG::2012-11-14
12:46:10,690::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/pgrep -xf ksmd' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:10,710::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,710::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,711::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,711::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)
MainThread::DEBUG::2012-11-14
12:46:10,712::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0
MainThread::INFO::2012-11-14 12:46:10,721::ksm::43::vds::(__init__)
starting ksm monitor thread, ksm pid is 40
KsmMonitor::DEBUG::2012-11-14
12:46:10,722::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service ksmtuned start' (cwd None)
MainThread::INFO::2012-11-14
12:46:10,724::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.
VM Channels Listener::INFO::2012-11-14
12:46:10,738::vmChannels::127::vds::(run) Starting VM channels
listener thread.
MainThread::WARNING::2012-11-14
12:46:10,747::clientIF::182::vds::(_prepareBindings) Unable to load
the rest server module. Please make sure it is installed.
MainThread::WARNING::2012-11-14
12:46:10,747::clientIF::188::vds::(_prepareBindings) Unable to load
the json rpc server module. Please make sure it is installed.
Thread-12::DEBUG::2012-11-14
12:46:10,767::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,768::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,770::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
KsmMonitor::DEBUG::2012-11-14
12:46:10,782::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksmtuned.service\n'; <rc> = 0
KsmMonitor::DEBUG::2012-11-14
12:46:10,783::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service ksm start' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:10,823::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,824::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-12::DEBUG::2012-11-14
12:46:10,824::misc::1028::SamplingMethod::(__call__) Got in to
sampling method
Thread-12::DEBUG::2012-11-14
12:46:10,825::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-12::DEBUG::2012-11-14
12:46:10,825::misc::1028::SamplingMethod::(__call__) Got in to
sampling method
Thread-12::DEBUG::2012-11-14
12:46:10,826::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
KsmMonitor::DEBUG::2012-11-14
12:46:10,840::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksm.service\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,852::misc::84::Storage.Misc.excCmd::(<lambda>)
FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
Thread-12::DEBUG::2012-11-14
12:46:10,852::misc::1036::SamplingMethod::(__call__) Returning last
result
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,858::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host0/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,882::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host1/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,891::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host2/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,898::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host3/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,905::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host4/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,913::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host5/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,922::iscsi::388::Storage.ISCSI::(forceIScsiScan) Performing
SCSI scan, this will take up to 30 seconds
Thread-14::DEBUG::2012-11-14
12:46:12,615::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}
Thread-14::DEBUG::2012-11-14
12:46:12,777::BindingXMLRPC::910::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2':
{'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0,
'version': '3.6.6'}, 'spice-server': {'release': '1.fc17',
'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release':
'0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version':
'4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img':
{'release': '19.fc17', 'buildtime': 1351915579L, 'version':
'1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor',
'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2',
'2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt',
'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway':
'192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0',
'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0',
'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21',
'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']},
'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '',
'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no',
'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0',
'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1',
'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED':
'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT':
'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5',
'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels':
['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu':
'1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'},
'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '',
'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version':
'4.10', 'memSize': '7734', 'cpuSpeed': '800.000', 'cpuSockets': '1',
'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead':
'65', 'management_ip': '', 'version_name': 'Snow Man',
'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1',
u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12',
u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc',
u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13',
u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem':
{'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient':
'0.0.0.0'}}
Thread-12::DEBUG::2012-11-14
12:46:12,926::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/multipath' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:12,990::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:12,990::lvm::477::OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::479::OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::488::OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::490::OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::508::OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::510::OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::misc::1036::SamplingMethod::(__call__) Returning last
result
Thread-16::DEBUG::2012-11-14
12:46:14,043::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-16::DEBUG::2012-11-14
12:46:14,044::task::568::TaskManager.Task::(_updateState)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state init
-> state preparing
Thread-16::INFO::2012-11-14
12:46:14,045::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=4,
spUUID='00000000-0000-0000-0000-000000000000',
conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user':
'', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-16::INFO::2012-11-14
12:46:14,045::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::1151::TaskManager.Task::(prepare)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::finished:
{'statuslist': [{'status': 0, 'id':
'00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::568::TaskManager.Task::(_updateState)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state
preparing -> state finished
Thread-16::DEBUG::2012-11-14
12:46:14,045::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-16::DEBUG::2012-11-14
12:46:14,045::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::957::TaskManager.Task::(_decref)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::ref 0 aborting False
Thread-17::DEBUG::2012-11-14
12:46:14,128::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-17::DEBUG::2012-11-14
12:46:14,129::task::568::TaskManager.Task::(_updateState)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state init
-> state preparing
Thread-17::INFO::2012-11-14
12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=4,
spUUID='00000000-0000-0000-0000-000000000000',
conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user':
'', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14
12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could
not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout
Thread-17::INFO::2012-11-14
12:46:14,231::logUtils::39::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status':
100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14
12:46:14,231::task::1151::TaskManager.Task::(prepare)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::finished:
{'statuslist': [{'status': 100, 'id':
'00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14
12:46:14,232::task::568::TaskManager.Task::(_updateState)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state
preparing -> state finished
Thread-17::DEBUG::2012-11-14
12:46:14,232::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-17::DEBUG::2012-11-14
12:46:14,233::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Kind regards,

Jorick Astrego
Netbulae B.V.
[Users] can't add export domain
by Cristian Falcas
Hi all,
When trying to add an NFS export domain (nightly builds) I get this error
in the logs:
Thread-564::DEBUG::2012-11-18
00:17:47,923::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3
10.20.20.20:/media/ceva2/Ovirt/Export
/rhev/data-center/mnt/10.20.20.20:_media_ceva2_Ovirt_Export'
(cwd None)
Thread-564::DEBUG::2012-11-18
00:17:47,995::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/umount -f -l /rhev/data-center/mnt/10.20.20.20:_media_ceva2_Ovirt_Export'
(cwd None)
Thread-564::ERROR::2012-11-18
00:17:48,036::hsm::2207::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2203, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 302, in connect
    return self._mountCon.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 208, in connect
    fileSD.validateDirAccess(self.getMountObj().getRecord().fs_file)
  File "/usr/share/vdsm/storage/fileSD.py", line 58, in validateDirAccess
    raise se.StorageServerAccessPermissionError(dirPath)
StorageServerAccessPermissionError: Permission settings on the specified
path do not allow access to the storage. Verify permission settings on the
specified storage path.: 'path = /rhev/data-center/mnt/10.20.20.20:_media_ceva2_Ovirt_Export'
The directory is either removed or not created:
[root@localhost vdsm]# ls -la /rhev/data-center/mnt/
total 12
drwxr-xr-x. 3 vdsm kvm 4096 Nov 18 00:19 .
drwxr-xr-x. 7 vdsm kvm 4096 Nov 18 00:15 ..
drwxr-xr-x. 3 vdsm kvm 4096 Nov 18 00:14 10.20.20.20:_media_ceva2_Ovirt_Iso
lrwxrwxrwx. 1 vdsm kvm 26 Nov 17 17:20 _media_ceva2_Ovirt_Storage ->
/media/ceva2/Ovirt/Storage
If I create the /rhev/data-center/mnt/10.20.20.20:_media_ceva2_Ovirt_Export
directory, it will be deleted when I try to add the storage again.
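For what it's worth, vdsm's validateDirAccess checks that the mounted export is accessible to the vdsm user (uid 36, group kvm, gid 36 on a stock install), so the usual culprit is ownership or mode on the export directory on the NFS server itself. Below is a minimal sketch of that check, assuming the default 36:36 ids; `export_path_ok` is a hypothetical helper, not vdsm code, and you would run it against /media/ceva2/Ovirt/Export on the NFS server:

```python
import os
import stat

# Assumed defaults on an oVirt host: vdsm is uid 36, kvm is gid 36.
VDSM_UID, KVM_GID = 36, 36

def export_path_ok(path):
    """Return a list of problems that would likely trip vdsm's
    validateDirAccess on this export directory (empty list = looks fine)."""
    problems = []
    st = os.stat(path)
    if st.st_uid != VDSM_UID or st.st_gid != KVM_GID:
        problems.append("not owned by vdsm:kvm (36:36)")
    mode = stat.S_IMODE(st.st_mode)
    if mode & 0o700 != 0o700:
        problems.append("owner lacks rwx (expected mode 0755)")
    return problems
```

If the ownership check fails, `chown -R 36:36` on the export directory (server side) is the usual fix before retrying the attach.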
Best regards,
Cristian Falcas
Re: [Users] Fwd: Re: Fwd: oVirt Weekly Meeting Minutes -- 2012-11-14
by Michael Pasternak
>
> -------- Original Message --------
> Subject: Re: [Users] Fwd: oVirt Weekly Meeting Minutes -- 2012-11-14
> Date: Fri, 16 Nov 2012 09:44:24 +0100
> From: Jiri Belka <jbelka(a)redhat.com>
> To: users(a)ovirt.org
>
> On 11/15/2012 01:55 PM, Michael Pasternak wrote:
>>
>> just to update, - i did sanity testing of sdk & cli for build and
>> functionality on f18 and didn't see any issue.
>>
>
> Could ovirt-engine-cli be updated to try calling spice-xpi-client (which
> is in fact remote-viewer from virt-viewer) before spicec, as is done in
> spice-xpi?
>
Actually, this is a planned feature [1].
[1] https://bugzilla.redhat.com/show_bug.cgi?id=807696
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
[Users] allinone setup can't add storage
by Cristian Falcas
Hi all,
Can someone help me with this error:
AIO: Adding Local Datacenter and cluster... [ ERROR ]
Error: could not create ovirtsdk API object
Trace from the log file:
2012-11-07 13:34:44::DEBUG::all_in_one_100::220::root:: Initiating the API
object
2012-11-07 13:34:44::ERROR::all_in_one_100::231::root:: Traceback (most
recent call last):
  File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 228, in initAPI
    ca_file=basedefs.FILE_CA_CRT_SRC,
TypeError: __init__() got an unexpected keyword argument 'ca_file'
2012-11-07 13:34:44::DEBUG::setup_sequences::62::root:: Traceback (most
recent call last):
  File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
    function()
  File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 232, in initAPI
    raise Exception(ERROR_CREATE_API_OBJECT)
Exception: Error: could not create ovirtsdk API object
Versions installed:
ovirt-engine-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-backend-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-cli-3.1.0.6-1.fc17.noarch
ovirt-engine-config-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-dbscripts-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-genericapi-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-notification-service-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-restapi-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-sdk-3.1.0.4-1.fc17.noarch
ovirt-engine-setup-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-setup-plugin-allinone-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-tools-common-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-userportal-3.1.0-3.20121106.git6891171.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-3.20121106.git6891171.fc17.noarch
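The TypeError suggests the installed ovirtsdk's API constructor simply does not accept a `ca_file` keyword, i.e. the setup plugin and the SDK in the version list above come from mismatched builds. One quick way to confirm that kind of mismatch is to inspect the constructor's signature before calling it. The sketch below uses hypothetical `OldAPI`/`NewAPI` stand-ins rather than the real ovirtsdk class, purely to illustrate the check:

```python
import inspect

def accepts_kwarg(func, name):
    """Return True if `func` accepts keyword argument `name`
    (either explicitly or via a **kwargs catch-all)."""
    spec = inspect.getfullargspec(func)
    return name in spec.args or name in spec.kwonlyargs or spec.varkw is not None

# Hypothetical stand-ins for two SDK generations (not real ovirtsdk code):
class OldAPI(object):
    def __init__(self, url, username, password):
        pass

class NewAPI(object):
    def __init__(self, url, username, password, ca_file=None):
        pass
```

Against the real SDK, `accepts_kwarg(ovirtsdk.api.API.__init__, "ca_file")` returning False would confirm the installed SDK predates the keyword the plugin passes; upgrading ovirt-engine-sdk to match the engine build is then the fix to try.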
[Users] XP Guest Sound issue
by Baldwin, John
Having an issue with a 32-bit XP guest: the sound device driver is not
configured. The sound device shows as "Audio Device on High Definition
Audio Bus" with driver ID 1AF4 Red Hat Virtio, and no drivers I can find
will load for it. I thought the ID would be listed as an AC97 or Ensoniq.
Running oVirt Engine Version: 3.1.0-3.19.el6 on CentOS 6.3 64-bit. The KVM
server is also CentOS 6.3 64-bit.
John Baldwin - Sr. UNIX Systems Administrator
Confidential: This electronic message and all contents contain information
from BayCare Health System which may be privileged, confidential or otherwise
protected from disclosure. The information is intended to be for the addressee
only. If you are not the addressee, any disclosure, copy, distribution or use
of the contents of this message is prohibited. If you have received this
electronic message in error, please notify the sender and destroy the original
message and all copies.
[Users] Virtualization DevRoom @ FOSDEM
by Dave Neary
Hi all,
The call for participation for the Virtualization DevRoom (co-organised
by Itamar and myself) is now open:
http://osvc.v2.cs.unibo.it/index.php/Main_Page
We are looking for content submissions related to machine
virtualization, network virtualization, process-level virt and virt
management (like oVirt). The target audience for the conference is free
software enthusiasts who are also virt experts - very much the target
audience of oVirt.
Please send your proposals before December 16th to
virt-devroom(a)lists.fosdem.org (as described in the call for proposals).
Thanks,
Dave.
--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13
Re: [Users] Fwd: oVirt Weekly Meeting Minutes -- 2012-11-14
by Michael Pasternak
Just to update: I did sanity testing of the SDK & CLI build and
functionality on F18 and didn't see any issues.
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
>
> -------- Original Message --------
> Subject: oVirt Weekly Meeting Minutes -- 2012-11-14
> Date: Thu, 15 Nov 2012 07:17:33 -0500
> From: Mike Burns <mburns(a)redhat.com>
> To: board <board(a)ovirt.org>, users <users(a)ovirt.org>
>
> Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-11-14-15.00.html
> Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-11-14-15.00.txt
> Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-11-14-15.00.log.html
>
>
> ============================
> #ovirt: oVirt Weekly Meeting
> ============================
>
>
> Meeting started by mburns at 15:00:23 UTC. The full logs are available
> at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-11-14-15.00.log.html
> .
>
>
>
> Meeting summary
> ---------------
> * agenda and roll call (mburns, 15:00:39)
>
> * Release Status (mburns, 15:03:34)
> * LINK: http://wiki.ovirt.org/wiki/OVirt_3.2_release-management
> (mburns, 15:03:46)
> * beta/feature freeze delayed until 28-Nov (mburns, 15:16:22)
> * GA delayed until 09-Jan (mburns, 15:16:30)
> * engine almost ready for F18 (mburns, 15:17:19)
> * vdsm needs some work, hope to hit feature freeze (mburns, 15:17:30)
> * node needs some work, should hit feature freeze (mburns, 15:17:39)
> * history/reports will likely be async (mburns, 15:17:53)
> * sdk/cli update coming later (mburns, 15:18:03)
> * guest-agent needs testing (mburns, 15:18:18)
>
> * subproject report -- infra (mburns, 15:25:33)
> * other sub-projects covered in release status, so skipping them
> (mburns, 15:26:11)
> * infra team still recovering from wiki/mailing list outage (mburns,
> 15:27:45)
> * mburns, quaid, oschreib and eedri have access to server during
> future emergencies (dneary, 15:28:58)
> * infra team working out contact methods to handle outages quicker in
> the future (mburns, 15:29:47)
> * MediaWiki instance is en route to being migrated; naked instance at
> (quaid, 15:30:08)
> * LINK: http://wiki-ovirt.rhcloud.com (quaid, 15:30:14)
> * expect to have mirroring working by the end of the week, meaning we
> can schedule DNS cutover for whenever is best (quaid, 15:30:39)
> * still working details for other hosting situations, hoping they
> resolve within the next few weeks (quaid, 15:31:17)
> * after wiki is off server that only saves ~200MB running, so not much
> compared to what else fills the disk, maybe a week or two of time
> that buys us (quaid, 15:31:54)
> * LINK:
> http://lists.ovirt.org/pipermail/infra/2012-November/001353.html
> (quaid, 15:33:23)
> * active infra@ discussion needs to happen about "to have /wiki or
> not" (quaid, 15:34:04)
>
> * Workshops - Barcelona (recap) (mburns, 15:36:06)
> * eedri has been added to sudoers on linode01 (quaid, 15:36:53)
> * Barcelona Workshops had people - room was over 50% full (except for
> 9am slots of Wednesday and Thursday - on Wednesday we clashed with
> keynotes, on Thursday I think people started arriving around 10)
> (dneary, 15:37:44)
> * oVirt booth generated a lot of foot traffic and questions (mburns,
> 15:37:49)
> * On Wednesday, we had a full house (~80 people if I counted right)
> for several sessions after the keynotes (dneary, 15:38:20)
> * USB keys and mugs were well received as gifts (dneary, 15:39:14)
> * dneary to gather presentations together in wiki for posterity
> (dneary, 15:39:26)
> * We met with several interesting oVirt users or potential users -
> dneary will follow up to see if we can use them as case studies when
> they've successfully deployed (dneary, 15:40:23)
> * ACTION: lh to talk to suehle about handout on what is ovirt and see
> what can be done (lh, 15:40:56)
> * looking to expand to conferences beyond linuxcon (mburns, 15:44:45)
> * definitely want booths for as many as we can (mburns, 15:45:36)
> * please submit conference suggestions to workshop-pc(a)ovirt.org
> (mburns, 15:47:13)
>
> * Workshop -- NetApp (Sunnyvale, CA, US) (mburns, 15:49:03)
> * January 22-24 (mburns, 15:49:18)
> * dates are confirmed (mburns, 15:50:19)
> * need to coordinate with some facilities people to ensure nothing
> slips through the cracks (mburns, 15:50:37)
> * CFP to open shortly after new web site is live (mburns, 15:51:02)
> * LINK:
>
> https://docs.google.com/document/d/15UBzC_5moynUSjzZWl8e_pxorc0-gip03JADd...
> (lh, 15:51:10)
> * ACTION: dneary to follow up on organizing a board meeting (mburns,
> 15:58:42)
> * seating for ~150 people (mburns, 15:58:49)
>
> * web site (mburns, 16:00:00)
> * ACTION: quaid to start the new wiki splash page & publicize its
> existence so others can help get it fixed up right (quaid,
> 16:11:03)
> * garrett_ and quaid to work on getting new site live asap (mburns,
> 16:11:33)
> * IDEA: the point is, when poeople write "the wiki" they need to link
> to this splash page directly vs. the ovirt.org/ (quaid, 16:11:34)
> * IDEA: it will be helpful to have something in the front page that
> says "the wiki" in an obvious location (quaid, 16:11:51)
> * ACTION: quaid to load new theme on current wiki to start local
> testing using user preferences (quaid, 16:19:52)
>
> Meeting ended at 16:28:55 UTC.
>
>
>
>
> Action Items
> ------------
> * lh to talk to suehle about handout on what is ovirt and see what can
> be done
> * dneary to follow up on organizing a board meeting
> * quaid to start the new wiki splash page & publicize its existence so
> others can help get it fixed up right
> * quaid to load new theme on current wiki to start local testing using
> user preferences
>
>
>
>
> Action Items, by person
> -----------------------
> * dneary
> * dneary to follow up on organizing a board meeting
> * lh
> * lh to talk to suehle about handout on what is ovirt and see what can
> be done
> * quaid
> * quaid to start the new wiki splash page & publicize its existence so
> others can help get it fixed up right
> * quaid to load new theme on current wiki to start local testing using
> user preferences
> * **UNASSIGNED**
> * (none)
>
>
>
>
> People Present (lines said)
> ---------------------------
> * mburns (108)
> * dneary (61)
> * lh (60)
> * quaid (54)
> * garrett_ (42)
> * itamar (19)
> * ovirtbot (12)
> * mgoldboi (11)
> * YamakasY (7)
> * sgordon (3)
> * val0x00ff (2)
> * mskrivanek (2)
> * oschreib (2)
> * ovedo (1)
> * ewoud (1)
> * eedri (1)
> * rgolan (1)
> * fsimonce (1)
>
>
>
>
> Generated by `MeetBot`_ 0.1.4
>
> .. _`MeetBot`: http://wiki.debian.org/MeetBot
>
> _______________________________________________
> Board mailing list
> Board(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/board
>
>
[Users] Reports Portal - Not able to login
by Fasil
Hi all,
Newbie entering with the VERY first post :)
Installation of Ovirt was successful with the Wiki and the Community.
Thanks to all
My installation:
CentOS 6.3 64bit
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-sdk-3.1.0.5-1.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-engine-config-3.1.0-3.19.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-dwh-3.1.0-1.1.el6.centos.alt.noarch
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-engine-setup-plugin-allinone-3.1.0-3.19.el6.noarch
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
I am not able to open the 'Reports Portal'. When I click on the link, it
redirects me to a new page with the alert 'Reports not installed, please
contact Administrator'. Am I missing something in my configuration, or
could anyone point me in the right direction?
Fasil.