I continue my quest to get a working version of the latest oVirt and
started with a clean setup.
What I would like is to get a working config consisting of 2 storage
servers, 2 hosts and a management server. Storage connected to a pair of
10G switches and the public side of the servers and VMs connected to a
pair of access switches. For that I need:
- bonding
- separate networks for storage and ovirtmgmt
- storage is using gluster
Ideally I would do all the configuration from the webui.
The first two items need a DC/Cluster version of 3.2, but then I'm stopped
from going any further because the version of vdsmd on the storage server
isn't compatible with the DC/Cluster version.
How can I proceed with my 3.2 testing, or does someone have a better plan
for getting this setup working?
Thanks in advance,
Joop
On Mon, Nov 19, 2012 at 12:51 PM, Alexandre Santos <santosam72(a)gmail.com> wrote:
> 2012/11/18 Cristian Falcas <cristi.falcas(a)gmail.com>
>
>> Hi all,
>>
>> I see that exporting a VM with ThinProvisioning will make an image with
>> the full disk size, instead of the currently used size:
>> - VM has a 20GB disk
>> - installed OS is taking 1.3GB
>> - exported disk is taking 20GB
>>
>> Is this mandatory? Couldn't the export make a file with the same size,
>> also sparse? It seems it only does a copy of the folder and the normal
>> linux cp can make a sparse copy.
>>
>> thank you,
>> Cristian Falcas
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> It is exporting a raw image, right?
>
> Alex
>
Hi Alex,
I don't understand what you mean by raw.
I was saying that the same file could be copied as a sparse file instead.
Cristian
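For reference, this is roughly what GNU cp's --sparse=always option does. A
minimal Python sketch of the same idea (the paths and block size below are
placeholders, not anything the export code actually uses) would be:

import sys

BLOCK = 1024 * 1024  # copy in 1 MiB chunks

def sparse_copy(src_path, dst_path):
    # Copy src to dst, but seek over all-zero blocks instead of writing them,
    # so the destination only allocates space for data that is really there.
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            if block.count(b'\0') == len(block):
                dst.seek(len(block), 1)   # leave a hole
            else:
                dst.write(block)
        dst.truncate(src.tell())          # keep the full logical size

if __name__ == '__main__':
    sparse_copy(sys.argv[1], sys.argv[2])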
Hi All,
I've been trying to install latest ovirt node on IBM system X blade server
(UEFI based server) but not even able to start the installation.
I'm using latest image ovirt-node-iso-2.5.5-0.1.fc17.iso from
http://ovirt.org/releases/3.1/tools/
The server boots up and detects the ISO, and gives me the initial screen of
Install or Upgrade / Reinstall etc., but the moment the default install is
selected (or it falls through to the default if the initial screen is not
interrupted for 30 seconds), it just waits 3-4 seconds and the server is
rebooted. This reboot problem persists with every install option available
on the screen.
Note: I was able to successfully install RHEV-H and it worked fine. Not sure
why the oVirt node installation does not even start and instead goes into an
infinite reboot loop without doing anything.
Out of curiosity - I tried this version of node as well
http://jenkins.ovirt.org/job/ovirt-node-iso/lastSuccessfulBuild/artifact/ov…
that too resulted in the same behavior.
I would ask the oVirt experts to assist and help resolve this issue.
Thanks & Regards,
Keyur
Hi all,
Is connectivity between engine and the ovirtmgmt interface required?
I currently have a strange setup where the engine is in a different network
than the host. Connectivity is from engine to host only (plus a few ssh
tunnels).
With this setup I can't see any disks (though I can create new ones), and I
also can't see any hosts from the networks tab.
Everything else seems to work: create/start VM, start spice agent, etc.
There is nothing in the logs to alert me of any errors.
Best regards,
Cristian Falcas
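For anyone debugging a similar split setup, a quick way to see what is
actually reachable in each direction is a plain TCP probe like the sketch
below (the host names are placeholders; vdsm normally listens on TCP 54321,
and the engine's web/API port is typically 443 or 8443 depending on how it
was set up). Run it once from the engine towards the host and once from the
host towards the engine:

import socket

def can_connect(host, port, timeout=5):
    # Return True if a TCP connection to host:port succeeds within timeout.
    try:
        socket.create_connection((host, port), timeout).close()
        return True
    except (socket.error, socket.timeout):
        return False

# From the engine machine:
print('engine -> host vdsm (54321): %s' % can_connect('host.example.com', 54321))
# From the host machine:
print('host -> engine api (443): %s' % can_connect('engine.example.com', 443))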
Hi all,
I see that exporting a VM with ThinProvisioning will make an image with the
full disk size, instead of the currently used size:
- VM has a 20GB disk
- installed OS is taking 1.3GB
- exported disk is taking 20GB
Is this mandatory? Couldn't the export make a file with the same size, also
sparse? It seems it only does a copy of the folder and the normal linux cp
can make a sparse copy.
thank you,
Cristian Falcas
For more details see [1].
[1] http://wiki.ovirt.org/wiki/CLI#Change_Log
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
For more details see [1].
[1] http://wiki.ovirt.org/wiki/SDK#Change_Log
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
I am trying to add a network for a sandbox environment that only certain
VMs will have access to and those VMs will not have access to the rest of
our network. This is to allow new systems to be tested in a safe
environment where they can't possibly muck with our live systems. I'm
trying to follow the instructions of section 5.4 Logical Network Tasks
<https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualiz…>
in the Admin guide, but I keep getting an error when adding the network to the host.
Here is what I'm doing:
Under Data Centers > Default > Logical Networks > New, I create a new
logical network called sandbox, VM network is checked, VLAN network is
checked and VLAN ID is 2.
Under Clusters > Default > Logical Networks > Assign/Unassign Networks, I
check Assign but not Required and sandbox appears in the list.
Under Hosts > cloudhost01 > Network Interfaces > Setup Host Networks, I
drag sandbox to the em1 interface which is also where ovirtmgmt is
assigned. There is an em2 interface, but that is dedicated to iSCSI
storage and has no Logical Networks assigned to it.
I check "Save network configuration", leave "Verify connectivity between
Host and ovirt-engine" checked, click OK and I get "Error: cloudhost01: -
General command validation failure."
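(The first two steps can also be scripted against the engine's REST API
through the Python SDK; the snippet below is only an illustrative sketch,
assuming the ovirtsdk bindings shipped as ovirt-engine-sdk, and the class and
method names are approximations rather than a verified 3.1 recipe. The
failing Setup Host Networks step itself is not covered here.)

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Engine URL and credentials are placeholders.
api = API(url='https://engine.example.com:8443/api',
          username='admin@internal', password='secret', insecure=True)

# Step 1: create the 'sandbox' VM network with VLAN ID 2 in the Default DC.
dc = api.datacenters.get(name='Default')
api.networks.add(params.Network(name='sandbox',
                                data_center=dc,
                                vlan=params.VLAN(id='2')))

# Step 2: attach the network to the Default cluster (untick Required in the
# UI if your SDK version does not expose that flag).
cluster = api.clusters.get(name='Default')
cluster.networks.add(api.networks.get(name='sandbox'))

api.disconnect()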
This may not be relevant, but in my event log I get "cloudhost01 is missing
vlan id: 2 that is expected by the cluster" warnings when I activate
cloudhost01 while the sandbox network exists. I have tried doing things
in different orders and with tweaks, all resulting in the same error. Here are
my versions:
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-config-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-engine-sdk-3.1.0.5-1.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
The rest of this message is the accompanying stack trace that shows up in
engine.log. Looks to my lay eye like an expected database column is
missing or something. Any ideas?
2012-11-14 15:34:17,332 ERROR
[org.ovirt.engine.core.bll.SetupNetworksCommand] (ajp--0.0.0.0-8009-10)
[78b1227b] Error during CanDoActionFailure.:
javax.validation.ValidationException: Call to
TraversableResolver.isReachable() threw an exception
at
org.hibernate.validator.engine.ValidatorImpl.isValidationRequired(ValidatorImpl.java:773)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateConstraint(ValidatorImpl.java:331)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateConstraintsForRedefinedDefaultGroup(ValidatorImpl.java:278)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateConstraintsForCurrentGroup(ValidatorImpl.java:260)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateInContext(ValidatorImpl.java:213)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateCascadedConstraint(ValidatorImpl.java:466)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateCascadedConstraints(ValidatorImpl.java:372)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validateInContext(ValidatorImpl.java:219)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.validate(ValidatorImpl.java:119)
[hibernate-validator.jar:4.0.2.GA]
at
org.ovirt.engine.core.common.utils.ValidationUtils.validateInputs(ValidationUtils.java:77)
[engine-common.jar:]
at
org.ovirt.engine.core.bll.CommandBase.validateInputs(CommandBase.java:518)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.InternalCanDoAction(CommandBase.java:486)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.ExecuteAction(CommandBase.java:261)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:336)
[engine-bll.jar:]
at org.ovirt.engine.core.bll.Backend.RunAction(Backend.java:294)
[engine-bll.jar:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.7.0_09-icedtea]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[rt.jar:1.7.0_09-icedtea]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_09-icedtea]
at java.lang.reflect.Method.invoke(Method.java:601)
[rt.jar:1.7.0_09-icedtea]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptorFactory$ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptorFactory.java:72)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:374)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.ovirt.engine.core.utils.ThreadLocalSessionCleanerInterceptor.injectWebContextToThreadLocal(ThreadLocalSessionCleanerInterceptor.java:11)
[engine-utils.jar:]
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
[:1.7.0_09-icedtea]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_09-icedtea]
at java.lang.reflect.Method.invoke(Method.java:601)
[rt.jar:1.7.0_09-icedtea]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptorFactory$ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptorFactory.java:123)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:36)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:211)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:363)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:194)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:165)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:173)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:72)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view8.RunAction(Unknown
Source) [engine-common.jar:]
at
org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.RunAction(GenericApiGWTServiceImpl.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.7.0_09-icedtea]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[rt.jar:1.7.0_09-icedtea]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_09-icedtea]
at java.lang.reflect.Method.invoke(Method.java:601)
[rt.jar:1.7.0_09-icedtea]
at
com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
at
com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:161)
at
com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:222)
at
com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:754)
[jboss-servlet-api_3.0_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
[jboss-servlet-api_3.0_spec-1.0.0.Final.jar:1.0.0.Final]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at
org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:153)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.rewrite.RewriteValve.invoke(RewriteValve.java:466)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:368)
at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:505)
at
org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(AjpProtocol.java:445)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:930)
at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_09-icedtea]
Caused by: javax.persistence.PersistenceException: Unable to find field or
method: class
org.ovirt.engine.core.common.businessentities.VdsNetworkInterface#interfaces
at
org.hibernate.ejb.util.PersistenceUtilHelper$MetadataCache.findMember(PersistenceUtilHelper.java:201)
at
org.hibernate.ejb.util.PersistenceUtilHelper$MetadataCache.getMember(PersistenceUtilHelper.java:176)
at
org.hibernate.ejb.util.PersistenceUtilHelper.get(PersistenceUtilHelper.java:89)
at
org.hibernate.ejb.util.PersistenceUtilHelper.isLoadedWithReference(PersistenceUtilHelper.java:81)
at
org.hibernate.ejb.HibernatePersistence$1.isLoadedWithReference(HibernatePersistence.java:93)
at javax.persistence.Persistence$1.isLoaded(Persistence.java:98)
[hibernate-jpa-2.0-api-1.0.1.Final.jar:1.0.1.Final]
at
org.hibernate.validator.engine.resolver.JPATraversableResolver.isReachable(JPATraversableResolver.java:33)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.resolver.DefaultTraversableResolver.isReachable(DefaultTraversableResolver.java:112)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.resolver.SingleThreadCachedTraversableResolver.isReachable(SingleThreadCachedTraversableResolver.java:47)
[hibernate-validator.jar:4.0.2.GA]
at
org.hibernate.validator.engine.ValidatorImpl.isValidationRequired(ValidatorImpl.java:764)
[hibernate-validator.jar:4.0.2.GA]
... 81 more
Hi,
I am having issues getting native USB redirection to work.
I have selected native USB support in my console options. When I
start the VM, it returns to the Down state. The engine.log excerpt below
shows the error.
2012-11-15 21:45:42,924 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(ajp--0.0.0.0-8009-5) START, IsValidVDSCommand(storagePoolId =
9febe320-e6d5-4b91-a1c5-614c3a24ebe4, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 6dc08bad
2012-11-15 21:45:42,925 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(ajp--0.0.0.0-8009-5) FINISH, IsValidVDSCommand, return: true, log id:
6dc08bad
2012-11-15 21:45:43,003 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--0.0.0.0-8009-5) START, IsVmDuringInitiatingVDSCommand(vmId =
2aedea82-0dcf-4f93-994d-425ed01c1479), log id: 4a2ace14
2012-11-15 21:45:43,003 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--0.0.0.0-8009-5) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 4a2ace14
2012-11-15 21:45:43,051 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [2fccb65f] Lock Acquired to object EngineLock
[exclusiveLocks= key: 2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM
, sharedLocks= ]
2012-11-15 21:45:43,068 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [2fccb65f] Running command: RunVmCommand internal:
false. Entities affected : ID: 2aedea82-0dcf-4f93-994d-425ed01c1479
Type: VM
2012-11-15 21:45:43,102 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-49)
[2fccb65f] START, CreateVmVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,
vm=org.ovirt.engine.core.common.businessentities.VM@600411f2), log id:
377110c5
2012-11-15 21:45:43,104 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f] START, CreateVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,
vm=org.ovirt.engine.core.common.businessentities.VM@600411f2), log id:
4847feea
2012-11-15 21:45:43,126 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f]
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
spiceSslCipherSuite=DEFAULT,memSize=2048,kvmEnable=true,smp=1,emulatedMachine=pc,vmType=kvm,keyboardLayout=en-us,nice=0,display=qxl,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,timeOffset=0,transparentHugePages=true,vmId=2aedea82-0dcf-4f93-994d-425ed01c1479,devices=[Ljava.util.Map;@18cc1b1b,acpiEnable=true,vmName=win73,cpuType=Westmere,custom={device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611c=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=66fa5d8f-6544-4e84-a1de-87c673c9611c,Device=ide,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x01,
function=0x1},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=ide0, device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778device_48d5a447-92e1-4cb3-81cc-104ab634ffa2=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=48d5a447-92e1-4cb3-81cc-104ab634ffa2,Device=unix,Type=channel,BootOrder=0,SpecParams={},Address={port=1,
bus=0, controller=0,
type=virtio-serial},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=channel0,
device_88276577-7921-4e39-82a2-267c8bcd3744=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=88276577-7921-4e39-82a2-267c8bcd3744,Device=usb,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x01,
function=0x2},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=usb0, device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778device_48d5a447-92e1-4cb3-81cc-104ab634ffa2device_85265b4d-e652-434e-9247-40ff1ad07e99=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=85265b4d-e652-434e-9247-40ff1ad07e99,Device=spicevmc,Type=channel,BootOrder=0,SpecParams={},Address={port=2,
bus=0, controller=0,
type=virtio-serial},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=channel1,
device_88276577-7921-4e39-82a2-267c8bcd3744device_66fa5d8f-6544-4e84-a1de-87c673c9611cdevice_5db33cc1-ed1d-4c21-b1bf-de0cbe76b778=VmId=2aedea82-0dcf-4f93-994d-425ed01c1479,DeviceId=5db33cc1-ed1d-4c21-b1bf-de0cbe76b778,Device=virtio-serial,Type=controller,BootOrder=0,SpecParams={},Address={bus=0x00,
domain=0x0000, type=pci, slot=0x05,
function=0x0},IsManaged=false,IsPlugged=true,IsReadOnly=false,alias=virtio-serial0}
2012-11-15 21:45:43,132 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-3-thread-49) [2fccb65f] FINISH, CreateVDSCommand, log id: 4847feea
2012-11-15 21:45:43,134 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-49)
[2fccb65f] IncreasePendingVms::CreateVmIncreasing vds local_host pending
vcpu count, now 1. Vm: win73
2012-11-15 21:45:43,184 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-49)
[2fccb65f] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id:
377110c5
2012-11-15 21:45:43,188 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [2fccb65f] Lock freed to object EngineLock
[exclusiveLocks= key: 2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM
, sharedLocks= ]
2012-11-15 21:45:44,340 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(QuartzScheduler_Worker-4) START, DestroyVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vmId=2aedea82-0dcf-4f93-994d-425ed01c1479, force=false, secondsToWait=0,
gracefully=false), log id: 39e6d7f3
2012-11-15 21:45:44,378 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(QuartzScheduler_Worker-4) FINISH, DestroyVDSCommand, log id: 39e6d7f3
2012-11-15 21:45:44,391 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) Running on vds during rerun failed vm: null
2012-11-15 21:45:44,394 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) vm win73 running in db and not running in vds
- add to rerun treatment. vds local_host
2012-11-15 21:45:44,408 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-4) Rerun vm
2aedea82-0dcf-4f93-994d-425ed01c1479. Called from vds local_host
2012-11-15 21:45:44,410 INFO
[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
(pool-3-thread-49) START, UpdateVdsDynamicDataVDSCommand(vdsId =
a2f20736-2da8-11e2-a9ac-bb1cd2496234,
vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@8fa7e67e),
log id: 4f197787
2012-11-15 21:45:44,416 INFO
[org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand]
(pool-3-thread-49) FINISH, UpdateVdsDynamicDataVDSCommand, log id: 4f197787
2012-11-15 21:45:44,433 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) Lock Acquired to object EngineLock [exclusiveLocks=
key: 2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM
, sharedLocks= ]
2012-11-15 21:45:44,439 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(pool-3-thread-49) START, IsValidVDSCommand(storagePoolId =
9febe320-e6d5-4b91-a1c5-614c3a24ebe4, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 5daaa874
2012-11-15 21:45:44,440 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(pool-3-thread-49) FINISH, IsValidVDSCommand, return: true, log id: 5daaa874
2012-11-15 21:45:44,467 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-3-thread-49) START, IsVmDuringInitiatingVDSCommand(vmId =
2aedea82-0dcf-4f93-994d-425ed01c1479), log id: 4125b681
2012-11-15 21:45:44,467 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-3-thread-49) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 4125b681
2012-11-15 21:45:44,469 WARN [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) CanDoAction of action RunVm failed.
Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER
2012-11-15 21:45:44,470 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) Lock freed to object EngineLock [exclusiveLocks= key:
2aedea82-0dcf-4f93-994d-425ed01c1479 value: VM
, sharedLocks= ]
But when I select legacy USB support, the system does boot, but I cannot
see my attached USB devices in my guest.
I have already followed the thread
http://www.mail-archive.com/users@ovirt.org/msg03822.html without any
success. There is a workaround mentioned in the last post of that thread
by Itamar. Where can I find that workaround?
My Installation:
CentOS 6.3
oVirt 3.1
Windows 7 client
spice-gtk-0.11-11.el6_3.1.x86_64
spice-server-0.10.1-10.el6.x86_64
spice-protocol-0.10.1-5.el6.noarch
spice-xpi-2.7-20.el6.x86_64
spice-client-0.8.2-15.el6.x86_64
spice-glib-0.11-11.el6_3.1.x86_64
spice-gtk-python-0.11-11.el6_3.1.x86_64
spice-vdagent-0.8.1-3.el6.x86_64
Regards,
Fasil.
>
> 2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: storage-path: /data/ovirt/vdsm
> 2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: superuser-pass: ********
> 2012-11-03 19:19:22::ERROR::engine-setup::2376::root:: Traceback (most recent call last):
> File "/bin/engine-setup", line 2370, in <module>
> main(confFile)
> File "/bin/engine-setup", line 2159, in main
> runSequences()
> File "/bin/engine-setup", line 2105, in runSequences
> controller.runAllSequences()
> File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in runAllSequences
> sequence.run()
> File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in run
> step.run()
> File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
> function()
> File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 290, in addStorageDomain
> raise Exception(ERROR_ADD_LOCAL_DOMAIN)
> Exception: Error: could not add local storage domain
>
> The error "XMLSyntaxError: Space required after the Public Identifier, line 1, column 47" looks somewhat strange to me.
>
> Any hint what causes this error?
>
> Thanks,
>
> Christian
>
> P.S.: The installation failed several times before that, until I figured out that engine-setup needs to log in via ssh; we had configured sshd to allow only public key auth, and this raised an error.
>
> did this get resolved?
>
>
> -
I'm not the original submitter of this issue, but I have exactly the
same problem with the latest nightly all-in-one installation.
We don't use public key auth for sshd on this machine so that's not the
problem. This is what I see in the vdsm.log:
MainThread::INFO::2012-11-14 12:45:51,444::vdsm::88::vds::(run) I am the
actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)
MainThread::DEBUG::2012-11-14
12:45:51,812::resourceManager::379::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-11-14
12:45:51,813::threadPool::33::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-11-14
12:45:51,856::multipath::115::Storage.Multipath::(isEnabled) multipath
Defaulting to False
MainThread::DEBUG::2012-11-14
12:45:51,857::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/cp /tmp/tmpVVMg7O /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-11-14
12:45:51,942::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::DEBUG::2012-11-14
12:45:51,944::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/multipath -F' (cwd None)
MainThread::DEBUG::2012-11-14
12:45:51,975::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
''; <rc> = 1
MainThread::DEBUG::2012-11-14
12:45:51,976::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service multipathd restart' (cwd None)
MainThread::DEBUG::2012-11-14
12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl restart multipathd.service\n'; <rc> = 0
MainThread::DEBUG::2012-11-14
12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-11-14
12:45:52,241::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::DEBUG::2012-11-14
12:45:52,242::hsm::407::Storage.HSM::(__cleanStorageRepository) Started
cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2012-11-14
12:45:52,253::hsm::439::Storage.HSM::(__cleanStorageRepository) White
list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*',
'/rhev/data-center/mnt']
MainThread::DEBUG::2012-11-14
12:45:52,253::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount
list: []
MainThread::DEBUG::2012-11-14
12:45:52,254::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning
leftovers
MainThread::DEBUG::2012-11-14
12:45:52,258::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished
cleaning storage repository at '/rhev/data-center'
Thread-12::DEBUG::2012-11-14
12:45:52,259::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' got the operation mutex
MainThread::INFO::2012-11-14
12:45:52,260::dispatcher::95::Storage.Dispatcher::(__init__) Starting
StorageDispatcher...
Thread-12::DEBUG::2012-11-14
12:45:52,266::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)
MainThread::WARNING::2012-11-14
12:45:52,300::clientIF::197::vds::(_prepareMOM) MOM initialization
failed and fall back to KsmMonitor
Traceback (most recent call last):
 File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
 self.mom = MomThread(momconf)
 File "/usr/share/vdsm/momIF.py", line 34, in __init__
 raise Exception("MOM is not available")
Exception: MOM is not available
MainThread::DEBUG::2012-11-14
12:45:52,304::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/pgrep -xf
ksmd' (cwd None)
Thread-12::DEBUG::2012-11-14
12:45:52,340::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:45:52,341::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:45:52,342::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:45:52,343::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)
MainThread::DEBUG::2012-11-14
12:45:52,353::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::INFO::2012-11-14 12:45:52,354::ksm::43::vds::(__init__)
starting ksm monitor thread, ksm pid is 40
KsmMonitor::DEBUG::2012-11-14
12:45:52,355::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service ksmtuned start' (cwd None)
MainThread::INFO::2012-11-14
12:45:52,367::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.
VM Channels Listener::INFO::2012-11-14
12:45:52,368::vmChannels::127::vds::(run) Starting VM channels listener
thread.
MainThread::WARNING::2012-11-14
12:45:52,375::clientIF::182::vds::(_prepareBindings) Unable to load the
rest server module. Please make sure it is installed.
MainThread::WARNING::2012-11-14
12:45:52,376::clientIF::188::vds::(_prepareBindings) Unable to load the
json rpc server module. Please make sure it is installed.
Thread-12::DEBUG::2012-11-14
12:45:52,398::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:45:52,399::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:45:52,401::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags'
(cwd None)
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service ksm start' (cwd None)
Thread-12::DEBUG::2012-11-14
12:45:52,457::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/iscsiadm -m session -R' (cwd None)
Thread-12::DEBUG::2012-11-14
12:45:52,477::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
'iscsiadm: No session found.\n'; <rc> = 21
Thread-12::DEBUG::2012-11-14
12:45:52,477::misc::1036::SamplingMethod::(__call__) Returning last result
Thread-12::DEBUG::2012-11-14
12:45:52,478::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm
Thread-12::DEBUG::2012-11-14
12:45:52,478::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/python /usr/share/vdsm/supervdsmServer.py
c9c732a0-065b-4634-8bb4-fbcd2081de16 11360' (cwd None)
KsmMonitor::DEBUG::2012-11-14
12:45:52,486::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl start ksm.service\n'; <rc> = 0
MainThread::DEBUG::2012-11-14
12:45:52,669::supervdsmServer::324::SuperVdsm.Server::(main) Making sure
I'm root
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd
args
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::331::SuperVdsm.Server::(main) Creating
PID file
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning
old socket
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::342::SuperVdsm.Server::(main) Setting up
keep alive thread
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::348::SuperVdsm.Server::(main) Creating
remote object manager
MainThread::DEBUG::2012-11-14
12:45:52,671::supervdsmServer::360::SuperVdsm.Server::(main) Started
serving super vdsm object
Thread-14::DEBUG::2012-11-14
12:45:53,732::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}
Thread-14::DEBUG::2012-11-14
12:45:53,902::BindingXMLRPC::910::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}],
'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64',
'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server':
{'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'},
'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L,
'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release':
'19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel':
'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'],
'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt':
{'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED':
'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp':
'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']},
'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu':
'1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True,
'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr':
'192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0':
{'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg':
{'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e',
'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR':
'00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '',
'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision':
'0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}},
'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '3200.000',
'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true',
'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man',
'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0',
u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'],
'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'},
'lastClient': '0.0.0.0'}}
Thread-15::DEBUG::2012-11-14
12:45:54,148::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}
Thread-15::DEBUG::2012-11-14
12:45:54,173::BindingXMLRPC::910::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}],
'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64',
'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server':
{'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'},
'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L,
'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release':
'19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel':
'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'],
'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt':
{'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED':
'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp':
'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']},
'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu':
'1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True,
'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr':
'192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0':
{'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg':
{'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e',
'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR':
'00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '',
'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision':
'0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}},
'software_version': '4.10', 'memSize': '7734', 'cpuSpeed': '800.000',
'cpuSockets': '1', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true',
'guestOverhead': '65', 'management_ip': '', 'version_name': 'Snow Man',
'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0',
u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'],
'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'},
'lastClient': '192.168.122.1'}}
MainThread::INFO::2012-11-14 12:45:55,916::vdsm::88::vds::(run) I am the
actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)
MainThread::DEBUG::2012-11-14
12:46:08,422::resourceManager::379::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-11-14
12:46:08,423::threadPool::33::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::WARNING::2012-11-14
12:46:08,431::fileUtils::184::fileUtils::(createdir) Dir
/rhev/data-center/mnt already exists
MainThread::DEBUG::2012-11-14
12:46:08,467::supervdsm::107::SuperVdsmProxy::(_start) Launching Super Vdsm
MainThread::DEBUG::2012-11-14
12:46:08,467::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/python /usr/share/vdsm/supervdsmServer.py
d5652547-2838-4900-8e62-5191bf37c460 11918' (cwd None)
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::324::SuperVdsm.Server::(main) Making sure
I'm root
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::328::SuperVdsm.Server::(main) Parsing cmd
args
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::331::SuperVdsm.Server::(main) Creating
PID file
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::338::SuperVdsm.Server::(main) Cleaning
old socket
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::342::SuperVdsm.Server::(main) Setting up
keep alive thread
MainThread::DEBUG::2012-11-14
12:46:08,635::supervdsmServer::348::SuperVdsm.Server::(main) Creating
remote object manager
MainThread::DEBUG::2012-11-14
12:46:08,636::supervdsmServer::360::SuperVdsm.Server::(main) Started
serving super vdsm object
MainThread::DEBUG::2012-11-14
12:46:10,475::supervdsm::161::SuperVdsmProxy::(_connect) Trying to
connect to Super Vdsm
MainThread::DEBUG::2012-11-14
12:46:10,549::multipath::106::Storage.Multipath::(isEnabled) Current
revision of multipath.conf detected, preserving
MainThread::DEBUG::2012-11-14
12:46:10,549::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-11-14
12:46:10,621::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::DEBUG::2012-11-14
12:46:10,623::hsm::407::Storage.HSM::(__cleanStorageRepository) Started
cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::439::Storage.HSM::(__cleanStorageRepository) White
list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*',
'/rhev/data-center/mnt']
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::440::Storage.HSM::(__cleanStorageRepository) Mount
list: []
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::442::Storage.HSM::(__cleanStorageRepository) Cleaning
leftovers
MainThread::DEBUG::2012-11-14
12:46:10,636::hsm::485::Storage.HSM::(__cleanStorageRepository) Finished
cleaning storage repository at '/rhev/data-center'
MainThread::INFO::2012-11-14
12:46:10,638::dispatcher::95::Storage.Dispatcher::(__init__) Starting
StorageDispatcher...
Thread-12::DEBUG::2012-11-14
12:46:10,638::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,643::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)
MainThread::WARNING::2012-11-14
12:46:10,688::clientIF::197::vds::(_prepareMOM) MOM initialization
failed and fall back to KsmMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 194, in _prepareMOM
    self.mom = MomThread(momconf)
  File "/usr/share/vdsm/momIF.py", line 34, in __init__
    raise Exception("MOM is not available")
Exception: MOM is not available
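As an aside on the MOM warning above: vdsm only falls back to KsmMonitor because it cannot build its MomThread wrapper, which in practice usually means the MOM Python package cannot be imported on the host. A minimal, hypothetical check (not vdsm code, and assuming the package really is named 'mom') would be:

    # Hypothetical sanity check, not part of vdsm: see whether the MOM
    # package that momIF.py wraps can be imported at all on this host.
    try:
        import mom
        print("mom is importable from:", getattr(mom, "__file__", "unknown location"))
    except ImportError as err:
        # This matches the symptom above: MomThread() cannot be built, so
        # vdsm logs "MOM is not available" and falls back to KsmMonitor.
        print("mom is not importable:", err)

If the import fails, installing MOM (or whichever vdsm dependency is supposed to pull it in) should make the warning go away.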
MainThread::DEBUG::2012-11-14
12:46:10,690::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/pgrep -xf
ksmd' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:10,710::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,710::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,711::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,711::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)
MainThread::DEBUG::2012-11-14
12:46:10,712::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainThread::INFO::2012-11-14 12:46:10,721::ksm::43::vds::(__init__)
starting ksm monitor thread, ksm pid is 40
KsmMonitor::DEBUG::2012-11-14
12:46:10,722::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service ksmtuned start' (cwd None)
MainThread::INFO::2012-11-14
12:46:10,724::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.
VM Channels Listener::INFO::2012-11-14
12:46:10,738::vmChannels::127::vds::(run) Starting VM channels listener
thread.
MainThread::WARNING::2012-11-14
12:46:10,747::clientIF::182::vds::(_prepareBindings) Unable to load the
rest server module. Please make sure it is installed.
MainThread::WARNING::2012-11-14
12:46:10,747::clientIF::188::vds::(_prepareBindings) Unable to load the
json rpc server module. Please make sure it is installed.
Thread-12::DEBUG::2012-11-14
12:46:10,767::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,768::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:10,770::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
--separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags'
(cwd None)
KsmMonitor::DEBUG::2012-11-14
12:46:10,782::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl start ksmtuned.service\n'; <rc> = 0
KsmMonitor::DEBUG::2012-11-14
12:46:10,783::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/service ksm start' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:10,823::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
' No volume groups found\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,824::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-12::DEBUG::2012-11-14
12:46:10,824::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-12::DEBUG::2012-11-14
12:46:10,825::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-12::DEBUG::2012-11-14
12:46:10,825::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-12::DEBUG::2012-11-14
12:46:10,826::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/iscsiadm -m session -R' (cwd None)
KsmMonitor::DEBUG::2012-11-14
12:46:10,840::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'Redirecting to /bin/systemctl start ksm.service\n'; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:10,852::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
'iscsiadm: No session found.\n'; <rc> = 21
Thread-12::DEBUG::2012-11-14
12:46:10,852::misc::1036::SamplingMethod::(__call__) Returning last result
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,858::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host0/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,882::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host1/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,891::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host2/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,898::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host3/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,905::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host4/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,913::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host5/scan' (cwd None)
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,922::iscsi::388::Storage.ISCSI::(forceIScsiScan) Performing
SCSI scan, this will take up to 30 seconds
Thread-14::DEBUG::2012-11-14
12:46:12,615::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}
Thread-14::DEBUG::2012-11-14
12:46:12,777::BindingXMLRPC::910::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba'}],
'FC': []}, 'packages2': {'kernel': {'release': '1.fc17.x86_64',
'buildtime': 1352149175.0, 'version': '3.6.6'}, 'spice-server':
{'release': '1.fc17', 'buildtime': 1348891802L, 'version': '0.12.0'},
'vdsm': {'release': '0.129.git2c2c228.fc17', 'buildtime': 1352759542L,
'version': '4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img': {'release':
'19.fc17', 'buildtime': 1351915579L, 'version': '1.2.0'}}, 'cpuModel':
'AMD Phenom(tm) II X4 955 Processor', 'hooks': {}, 'vmTypes': ['kvm'],
'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt':
{'iface': 'ovirtmgmt', 'addr': '192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0', 'NM_CONTROLLED':
'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp':
'off', 'bridged': True, 'gateway': '192.168.2.1', 'ports': ['p15p1']},
'virbr0': {'iface': 'virbr0', 'addr': '192.168.122.1', 'cfg': {}, 'mtu':
'1500', 'netmask': '255.255.255.0', 'stp': 'on', 'bridged': True,
'gateway': '0.0.0.0', 'ports': []}}, 'bridges': {'ovirtmgmt': {'addr':
'192.168.2.21', 'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']}, 'virbr0':
{'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9',
'NM_CONTROLLED': 'no', 'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:19:cb:d6:6a:e0', 'speed': 1000}, 'p6p1': {'addr': '', 'cfg':
{'DEVICE': 'p6p1', 'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e',
'NM_CONTROLLED': 'yes', 'BOOTPROTO': 'dhcp', 'HWADDR':
'00:24:8C:9E:AF:D5', 'ONBOOT': 'no'}, 'mtu': '1500', 'netmask': '',
'hwaddr': '00:24:8c:9e:af:d5', 'speed': 1000}}, 'software_revision':
'0.129', 'clusterLevels': ['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1':
{'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize':
'7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {},
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines':
[u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc',
u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14',
u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'],
'operatingSystem': {'release': '1', 'version': '17', 'name': 'Fedora'},
'lastClient': '0.0.0.0'}}
Thread-12::DEBUG::2012-11-14
12:46:12,926::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/multipath' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:12,990::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:12,990::lvm::477::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::479::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::488::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::490::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::508::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::510::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::misc::1036::SamplingMethod::(__call__) Returning last result
Thread-16::DEBUG::2012-11-14
12:46:14,043::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-16::DEBUG::2012-11-14
12:46:14,044::task::568::TaskManager.Task::(_updateState)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state init ->
state preparing
Thread-16::INFO::2012-11-14
12:46:14,045::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=4,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
'/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******',
'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-16::INFO::2012-11-14
12:46:14,045::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::1151::TaskManager.Task::(prepare)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::finished: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::568::TaskManager.Task::(_updateState)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state preparing
-> state finished
Thread-16::DEBUG::2012-11-14
12:46:14,045::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-16::DEBUG::2012-11-14
12:46:14,045::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::957::TaskManager.Task::(_decref)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::ref 0 aborting False
Thread-17::DEBUG::2012-11-14
12:46:14,128::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-17::DEBUG::2012-11-14
12:46:14,129::task::568::TaskManager.Task::(_updateState)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state init ->
state preparing
Thread-17::INFO::2012-11-14
12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=4,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
'/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******',
'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14
12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout
Thread-17::INFO::2012-11-14
12:46:14,231::logUtils::39::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 100,
'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14
12:46:14,231::task::1151::TaskManager.Task::(prepare)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::finished: {'statuslist':
[{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14
12:46:14,232::task::568::TaskManager.Task::(_updateState)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state preparing
-> state finished
Thread-17::DEBUG::2012-11-14
12:46:14,232::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-17::DEBUG::2012-11-14
12:46:14,233::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
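The actual failure is the connectStorageServer timeout above: vdsm's out-of-process file handler never answers while checking access to the '/data' path from the conList. A rough local stand-in for that check (a sketch only, not vdsm's real code path, which runs validateDirAccess through remoteFileHandler in a separate helper process) looks like this:

    # Hypothetical local equivalent of fileSD.validateDirAccess('/data'):
    # confirm the path exists and is readable/writable/executable for the
    # user running it. Purely a diagnostic, not part of vdsm.
    import os

    path = "/data"  # the 'connection' value from the conList in the log
    print("exists as directory:", os.path.isdir(path))
    for name, flag in (("read", os.R_OK), ("write", os.W_OK), ("exec", os.X_OK)):
        print(name, "OK" if os.access(path, flag) else "DENIED")

If this hangs when run as the vdsm user (a stale or unresponsive mount under /data, for instance), that matches the Timeout above; a plain permission problem would normally surface as an error rather than a timeout.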
Kind regards,
Jorick Astrego
Netbulae B.V.
<blockquote type="cite"><br>
<pre wrap="">2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: storage-path: /data/ovirt/vdsm
2012-11-03 19:19:22::DEBUG::engine-setup::1747::root:: superuser-pass: ********
2012-11-03 19:19:22::ERROR::engine-setup::2376::root:: Traceback (most recent call last):
File "/bin/engine-setup", line 2370, in <module>
main(confFile)
File "/bin/engine-setup", line 2159, in main
runSequences()
File "/bin/engine-setup", line 2105, in runSequences
controller.runAllSequences()
File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in runAllSequences
sequence.run()
File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in run
step.run()
File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
function()
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 290, in addStorageDomain
raise Exception(ERROR_ADD_LOCAL_DOMAIN)
Exception: Error: could not add local storage domain
XMLSyntaxError: Space required after the Public Identifier, line 1, column 47 looks somewhat strange to me.
Any hint what causes this error?
Thanks,
Christian
P.S.: The installation failed several times before that, until i figured out that the engine-setup needs to login in via ssh; we had configured sshd to allow only public key auth, and this raised an error.
</pre>
</blockquote>
<blockquote cite="mid:mailman.6600.1352797990.6397.users@ovirt.org"
type="cite">
<blockquote type="cite">
<pre wrap="">
</pre>
</blockquote>
<pre wrap="">
did this get resolved?
-</pre>
</blockquote>
I'm not the original submitter of this issue, but I have exactly the
same problem with the latest nightly all-in-one installation. <br>
<br>
We don't use public key auth for sshd on this machine so that's not
the problem. This is what I see in the vdsm.log:<br>
<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:45:51,444::vdsm::88::vds::(run) I am
the actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)<br>
MainThread::DEBUG::2012-11-14
12:45:51,812::resourceManager::379::ResourceManager::(registerNamespace)
Registering namespace 'Storage'<br>
MainThread::DEBUG::2012-11-14
12:45:51,813::threadPool::33::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0<br>
MainThread::DEBUG::2012-11-14
12:45:51,856::multipath::115::Storage.Multipath::(isEnabled)
multipath Defaulting to False<br>
MainThread::DEBUG::2012-11-14
12:45:51,857::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /bin/cp /tmp/tmpVVMg7O /etc/multipath.conf' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:51,942::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:45:51,944::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/multipath -F' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:51,975::misc::84::Storage.Misc.excCmd::(<lambda>)
FAILED: <err> = ''; <rc> = 1<br>
MainThread::DEBUG::2012-11-14
12:45:51,976::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service multipathd restart' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl restart
multipathd.service\n'; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:45:52,179::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:52,241::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:45:52,242::hsm::407::Storage.HSM::(__cleanStorageRepository)
Started cleaning storage repository at '/rhev/data-center'<br>
MainThread::DEBUG::2012-11-14
12:45:52,253::hsm::439::Storage.HSM::(__cleanStorageRepository)
White list: ['/rhev/data-center/hsm-tasks',
'/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']<br>
MainThread::DEBUG::2012-11-14
12:45:52,253::hsm::440::Storage.HSM::(__cleanStorageRepository)
Mount list: []<br>
MainThread::DEBUG::2012-11-14
12:45:52,254::hsm::442::Storage.HSM::(__cleanStorageRepository)
Cleaning leftovers<br>
MainThread::DEBUG::2012-11-14
12:45:52,258::hsm::485::Storage.HSM::(__cleanStorageRepository)
Finished cleaning storage repository at '/rhev/data-center'<br>
Thread-12::DEBUG::2012-11-14
12:45:52,259::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' got the operation mutex<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a>
12:45:52,260::dispatcher::95::Storage.Dispatcher::(__init__)
Starting StorageDispatcher...<br>
Thread-12::DEBUG::2012-11-14
12:45:52,266::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o <b>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)</b><b><br>
</b><b>MainThread::WARNING::2012-11-14
12:45:52,300::clientIF::197::vds::(_prepareMOM) MOM initialization
failed and fall back to KsmMonitor</b><b><br>
</b><b>Traceback (most recent call last):</b><b><br>
</b><b> File "/usr/share/vdsm/clientIF.py", line 194, in
_prepareMOM</b><b><br>
</b><b> self.mom = MomThread(momconf)</b><b><br>
</b><b> File "/usr/share/vdsm/momIF.py", line 34, in __init__</b><b><br>
</b><b> raise Exception("MOM is not available")</b><b><br>
</b><b>Exception: MOM is not available</b><br>
MainThread::DEBUG::2012-11-14
12:45:52,304::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/pgrep -xf ksmd' (cwd None)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,340::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:45:52,341::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' released the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:45:52,342::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:45:52,343::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)<br>
MainThread::DEBUG::2012-11-14
12:45:52,353::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:45:52,354::ksm::43::vds::(__init__)
starting ksm monitor thread, ksm pid is 40<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,355::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service ksmtuned start' (cwd None)<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a>
12:45:52,367::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.<br>
VM Channels Listener::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a>
12:45:52,368::vmChannels::127::vds::(run) Starting VM channels
listener thread.<br>
<b>MainThread::WARNING::2012-11-14
12:45:52,375::clientIF::182::vds::(_prepareBindings) Unable to
load the rest server module. Please make sure it is installed.</b><b><br>
</b><b>MainThread::WARNING::2012-11-14
12:45:52,376::clientIF::188::vds::(_prepareBindings) Unable to
load the json rpc server module. Please make sure it is installed.</b><br>
Thread-12::DEBUG::2012-11-14
12:45:52,398::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:45:52,399::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:45:52,401::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksmtuned.service\n'; <rc> = 0<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a>
12:45:52,367::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.<br>
VM Channels Listener::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a>
12:45:52,368::vmChannels::127::vds::(run) Starting VM channels
listener thread.<br>
MainThread::WARNING::2012-11-14
12:45:52,375::clientIF::182::vds::(_prepareBindings) Unable to load
the rest server module. Please make sure it is installed.<br>
MainThread::WARNING::2012-11-14
12:45:52,376::clientIF::188::vds::(_prepareBindings) Unable to load
the json rpc server module. Please make sure it is installed.<br>
Thread-12::DEBUG::2012-11-14
12:45:52,398::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:45:52,399::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:45:52,401::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksmtuned.service\n'; <rc> = 0<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,440::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service ksm start' (cwd None)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,457::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to
sampling method<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::1028::SamplingMethod::(__call__) Got in to
sampling method<br>
Thread-12::DEBUG::2012-11-14
12:45:52,458::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)<br>
Thread-12::DEBUG::2012-11-14
12:45:52,477::misc::84::Storage.Misc.excCmd::(<lambda>)
FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> =
21<br>
Thread-12::DEBUG::2012-11-14
12:45:52,477::misc::1036::SamplingMethod::(__call__) Returning last
result<br>
Thread-12::DEBUG::2012-11-14
12:45:52,478::supervdsm::107::SuperVdsmProxy::(_start) Launching
Super Vdsm<br>
Thread-12::DEBUG::2012-11-14
12:45:52,478::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py
c9c732a0-065b-4634-8bb4-fbcd2081de16 11360' (cwd None)<br>
KsmMonitor::DEBUG::2012-11-14
12:45:52,486::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksm.service\n'; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:45:52,669::supervdsmServer::324::SuperVdsm.Server::(main) Making
sure I'm root<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::328::SuperVdsm.Server::(main) Parsing
cmd args<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::331::SuperVdsm.Server::(main)
Creating PID file<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::338::SuperVdsm.Server::(main)
Cleaning old socket<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::342::SuperVdsm.Server::(main) Setting
up keep alive thread<br>
MainThread::DEBUG::2012-11-14
12:45:52,670::supervdsmServer::348::SuperVdsm.Server::(main)
Creating remote object manager<br>
MainThread::DEBUG::2012-11-14
12:45:52,671::supervdsmServer::360::SuperVdsm.Server::(main) Started
serving super vdsm object<br>
Thread-14::DEBUG::2012-11-14
12:45:53,732::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}<br>
Thread-14::DEBUG::2012-11-14
12:45:53,902::BindingXMLRPC::910::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2':
{'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0,
'version': '3.6.6'}, 'spice-server': {'release': '1.fc17',
'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release':
'0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version':
'4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img':
{'release': '19.fc17', 'buildtime': 1351915579L, 'version':
'1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor',
'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2',
'2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt',
'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway':
'192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0',
'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0',
'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21',
'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']},
'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '',
'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no',
'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0',
'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1',
'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED':
'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT':
'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5',
'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels':
['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu':
'1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize':
'7734', 'cpuSpeed': '3200.000', 'cpuSockets': '1', 'vlans': {},
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines':
[u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0',
u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11',
u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1',
'version': '17', 'name': 'Fedora'}, 'lastClient': '0.0.0.0'}}<br>
Thread-15::DEBUG::2012-11-14
12:45:54,148::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}<br>
Thread-15::DEBUG::2012-11-14
12:45:54,173::BindingXMLRPC::910::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2':
{'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0,
'version': '3.6.6'}, 'spice-server': {'release': '1.fc17',
'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release':
'0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version':
'4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img':
{'release': '19.fc17', 'buildtime': 1351915579L, 'version':
'1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor',
'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2',
'2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt',
'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway':
'192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0',
'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0',
'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21',
'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']},
'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '',
'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no',
'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0',
'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1',
'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED':
'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT':
'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5',
'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels':
['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu':
'1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}}, 'software_version': '4.10', 'memSize':
'7734', 'cpuSpeed': '800.000', 'cpuSockets': '1', 'vlans': {},
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'management_ip': '', 'version_name': 'Snow Man', 'emulatedMachines':
[u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0', u'pc-0.15',
u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11', u'pc-0.10',
u'isapc', u'pc-1.2', u'none', u'pc', u'pc-1.1', u'pc-1.0',
u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12', u'pc-0.11',
u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1',
'version': '17', 'name': 'Fedora'}, 'lastClient': '192.168.122.1'}}<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:45:55,916::vdsm::88::vds::(run) I am
the actual vdsm 4.10-0.129 demo.netbulae.eu (3.6.6-1.fc17.x86_64)<br>
MainThread::DEBUG::2012-11-14
12:46:08,422::resourceManager::379::ResourceManager::(registerNamespace)
Registering namespace 'Storage'<br>
MainThread::DEBUG::2012-11-14
12:46:08,423::threadPool::33::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0<br>
MainThread::WARNING::2012-11-14
12:46:08,431::fileUtils::184::fileUtils::(createdir) Dir
/rhev/data-center/mnt already exists<br>
MainThread::DEBUG::2012-11-14
12:46:08,467::supervdsm::107::SuperVdsmProxy::(_start) Launching
Super Vdsm<br>
MainThread::DEBUG::2012-11-14
12:46:08,467::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /bin/python /usr/share/vdsm/supervdsmServer.py
d5652547-2838-4900-8e62-5191bf37c460 11918' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::324::SuperVdsm.Server::(main) Making
sure I'm root<br>
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::328::SuperVdsm.Server::(main) Parsing
cmd args<br>
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::331::SuperVdsm.Server::(main)
Creating PID file<br>
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::338::SuperVdsm.Server::(main)
Cleaning old socket<br>
MainThread::DEBUG::2012-11-14
12:46:08,634::supervdsmServer::342::SuperVdsm.Server::(main) Setting
up keep alive thread<br>
MainThread::DEBUG::2012-11-14
12:46:08,635::supervdsmServer::348::SuperVdsm.Server::(main)
Creating remote object manager<br>
MainThread::DEBUG::2012-11-14
12:46:08,636::supervdsmServer::360::SuperVdsm.Server::(main) Started
serving super vdsm object<br>
MainThread::DEBUG::2012-11-14
12:46:10,475::supervdsm::161::SuperVdsmProxy::(_connect) Trying to
connect to Super Vdsm<br>
MainThread::DEBUG::2012-11-14
12:46:10,549::multipath::106::Storage.Multipath::(isEnabled) Current
revision of multipath.conf detected, preserving<br>
MainThread::DEBUG::2012-11-14
12:46:10,549::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)<br>
MainThread::DEBUG::2012-11-14
12:46:10,621::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
MainThread::DEBUG::2012-11-14
12:46:10,623::hsm::407::Storage.HSM::(__cleanStorageRepository)
Started cleaning storage repository at '/rhev/data-center'<br>
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::439::Storage.HSM::(__cleanStorageRepository)
White list: ['/rhev/data-center/hsm-tasks',
'/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']<br>
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::440::Storage.HSM::(__cleanStorageRepository)
Mount list: []<br>
MainThread::DEBUG::2012-11-14
12:46:10,634::hsm::442::Storage.HSM::(__cleanStorageRepository)
Cleaning leftovers<br>
MainThread::DEBUG::2012-11-14
12:46:10,636::hsm::485::Storage.HSM::(__cleanStorageRepository)
Finished cleaning storage repository at '/rhev/data-center'<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a>
12:46:10,638::dispatcher::95::Storage.Dispatcher::(__init__)
Starting StorageDispatcher...<br>
Thread-12::DEBUG::2012-11-14
12:46:10,638::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' got the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:46:10,643::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size'
(cwd None)<br>
<b>MainThread::WARNING::2012-11-14
12:46:10,688::clientIF::197::vds::(_prepareMOM) MOM initialization
failed and fall back to KsmMonitor</b><b><br>
</b><b>Traceback (most recent call last):</b><b><br>
</b><b> File "/usr/share/vdsm/clientIF.py", line 194, in
_prepareMOM</b><b><br>
</b><b> self.mom = MomThread(momconf)</b><b><br>
</b><b> File "/usr/share/vdsm/momIF.py", line 34, in __init__</b><b><br>
</b><b> raise Exception("MOM is not available")</b><b><br>
</b><b>Exception: MOM is not available</b><br>
MainThread::DEBUG::2012-11-14
12:46:10,690::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/pgrep -xf ksmd' (cwd None)<br>
Thread-12::DEBUG::2012-11-14
12:46:10,710::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:46:10,710::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
reload operation' released the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:46:10,711::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:46:10,711::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
(cwd None)<br>
MainThread::DEBUG::2012-11-14
12:46:10,712::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a> 12:46:10,721::ksm::43::vds::(__init__)
starting ksm monitor thread, ksm pid is 40<br>
KsmMonitor::DEBUG::2012-11-14
12:46:10,722::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service ksmtuned start' (cwd None)<br>
MainThread::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a>
12:46:10,724::vmChannels::139::vds::(settimeout) Setting channels'
timeout to 30 seconds.<br>
VM Channels Listener::<a class="moz-txt-link-freetext" href="INFO::2012-11-14">INFO::2012-11-14</a>
12:46:10,738::vmChannels::127::vds::(run) Starting VM channels
listener thread.<br>
MainThread::WARNING::2012-11-14
12:46:10,747::clientIF::182::vds::(_prepareBindings) Unable to load
the rest server module. Please make sure it is installed.<br>
MainThread::WARNING::2012-11-14
12:46:10,747::clientIF::188::vds::(_prepareBindings) Unable to load
the json rpc server module. Please make sure it is installed.<br>
Thread-12::DEBUG::2012-11-14
12:46:10,767::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:46:10,768::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex<br>
Thread-12::DEBUG::2012-11-14
12:46:10,770::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global {
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup
{ retain_min = 50 retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)<br>
KsmMonitor::DEBUG::2012-11-14
12:46:10,782::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksmtuned.service\n'; <rc> = 0<br>
KsmMonitor::DEBUG::2012-11-14
12:46:10,783::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/service ksm start' (cwd None)<br>
Thread-12::DEBUG::2012-11-14
12:46:10,823::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:46:10,824::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)<br>
Thread-12::DEBUG::2012-11-14
12:46:10,824::misc::1028::SamplingMethod::(__call__) Got in to
sampling method<br>
Thread-12::DEBUG::2012-11-14
12:46:10,825::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)<br>
Thread-12::DEBUG::2012-11-14
12:46:10,825::misc::1028::SamplingMethod::(__call__) Got in to
sampling method<br>
Thread-12::DEBUG::2012-11-14
12:46:10,826::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)<br>
KsmMonitor::DEBUG::2012-11-14
12:46:10,840::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = 'Redirecting to /bin/systemctl start
ksm.service\n'; <rc> = 0<br>
Thread-12::DEBUG::2012-11-14
12:46:10,852::misc::84::Storage.Misc.excCmd::(<lambda>)
FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> =
21<br>
Thread-12::DEBUG::2012-11-14
12:46:10,852::misc::1036::SamplingMethod::(__call__) Returning last
result<br>
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,858::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host0/scan' (cwd None)<br>
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,882::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host1/scan' (cwd None)<br>
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,891::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host2/scan' (cwd None)<br>
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,898::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host3/scan' (cwd None)<br>
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,905::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host4/scan' (cwd None)<br>
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,913::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/dd of=/sys/class/scsi_host/host5/scan' (cwd None)<br>
MainProcess|Thread-12::DEBUG::2012-11-14
12:46:10,922::iscsi::388::Storage.ISCSI::(forceIScsiScan) Performing
SCSI scan, this will take up to 30 seconds<br>
Thread-14::DEBUG::2012-11-14
12:46:12,615::BindingXMLRPC::903::vds::(wrapper) client
[192.168.122.1]::call getCapabilities with () {}<br>
Thread-14::DEBUG::2012-11-14
12:46:12,777::BindingXMLRPC::910::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6de64a4dfdba'}], 'FC': []}, 'packages2':
{'kernel': {'release': '1.fc17.x86_64', 'buildtime': 1352149175.0,
'version': '3.6.6'}, 'spice-server': {'release': '1.fc17',
'buildtime': 1348891802L, 'version': '0.12.0'}, 'vdsm': {'release':
'0.129.git2c2c228.fc17', 'buildtime': 1352759542L, 'version':
'4.10.1'}, 'qemu-kvm': {'release': '19.fc17', 'buildtime':
1351915579L, 'version': '1.2.0'}, 'libvirt': {'release': '1.fc17',
'buildtime': 1352437629L, 'version': '1.0.0'}, 'qemu-img':
{'release': '19.fc17', 'buildtime': 1351915579L, 'version':
'1.2.0'}}, 'cpuModel': 'AMD Phenom(tm) II X4 955 Processor',
'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2',
'2.3'], 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.2.21', 'cfg': {'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt',
'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway':
'192.168.2.1', 'ports': ['p15p1']}, 'virbr0': {'iface': 'virbr0',
'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'on', 'bridged': True, 'gateway': '0.0.0.0',
'ports': []}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.2.21',
'cfg': {'UUID': '524c7b17-8771-4426-82d7-0dbeca898ad9', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE':
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p15p1']},
'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'mtu': '1500',
'netmask': '255.255.255.0', 'stp': 'on', 'ports': []}}, 'uuid':
'4046266B-FA2B-DE11-AA3D-00248C9EAFD5_00:19:cb:d6:6a:e0',
'lastClientIface': 'ovirtmgmt', 'nics': {'p15p1': {'addr': '',
'cfg': {'BRIDGE': 'ovirtmgmt', 'UUID':
'524c7b17-8771-4426-82d7-0dbeca898ad9', 'NM_CONTROLLED': 'no',
'HWADDR': '00:19:cb:d6:6a:e0', 'DEVICE': 'p15p1', 'ONBOOT': 'yes'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:19:cb:d6:6a:e0',
'speed': 1000}, 'p6p1': {'addr': '', 'cfg': {'DEVICE': 'p6p1',
'UUID': '9d1e9605-931d-4e51-9c79-d5f0f204d46e', 'NM_CONTROLLED':
'yes', 'BOOTPROTO': 'dhcp', 'HWADDR': '00:24:8C:9E:AF:D5', 'ONBOOT':
'no'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:24:8c:9e:af:d5',
'speed': 1000}}, 'software_revision': '0.129', 'clusterLevels':
['3.0', '3.1', '3.2'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:6de64a4dfdba',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu':
'1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'},
'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '',
'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version':
'4.10', 'memSize': '7734', 'cpuSpeed': '800.000', 'cpuSockets': '1',
'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead':
'65', 'management_ip': '', 'version_name': 'Snow Man',
'emulatedMachines': [u'pc-1.2', u'none', u'pc', u'pc-1.1',
u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12',
u'pc-0.11', u'pc-0.10', u'isapc', u'pc-1.2', u'none', u'pc',
u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13',
u'pc-0.12', u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem':
{'release': '1', 'version': '17', 'name': 'Fedora'}, 'lastClient':
'0.0.0.0'}}
Thread-12::DEBUG::2012-11-14
12:46:12,926::misc::84::Storage.Misc.excCmd::(<lambda>)
'/bin/sudo -n /sbin/multipath' (cwd None)
Thread-12::DEBUG::2012-11-14
12:46:12,990::misc::84::Storage.Misc.excCmd::(<lambda>)
SUCCESS: <err> = ''; <rc> = 0
Thread-12::DEBUG::2012-11-14
12:46:12,990::lvm::477::OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::479::OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::488::OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::490::OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::508::OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::lvm::510::OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-12::DEBUG::2012-11-14
12:46:12,991::misc::1036::SamplingMethod::(__call__) Returning last
result
Thread-16::DEBUG::2012-11-14
12:46:14,043::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-16::DEBUG::2012-11-14
12:46:14,044::task::568::TaskManager.Task::(_updateState)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state init
-> state preparing
Thread-16::INFO::2012-11-14
12:46:14,045::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=4,
spUUID='00000000-0000-0000-0000-000000000000',
conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user':
'', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-16::INFO::2012-11-14
12:46:14,045::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::1151::TaskManager.Task::(prepare)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::finished:
{'statuslist': [{'status': 0, 'id':
'00000000-0000-0000-0000-000000000000'}]}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::568::TaskManager.Task::(_updateState)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::moving from state
preparing -> state finished
Thread-16::DEBUG::2012-11-14
12:46:14,045::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-16::DEBUG::2012-11-14
12:46:14,045::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-16::DEBUG::2012-11-14
12:46:14,045::task::957::TaskManager.Task::(_decref)
Task=`8cf5bfe0-3851-4058-92b9-7a23f095ec30`::ref 0 aborting False
Thread-17::DEBUG::2012-11-14
12:46:14,128::BindingXMLRPC::161::vds::(wrapper) [192.168.122.1]
Thread-17::DEBUG::2012-11-14
12:46:14,129::task::568::TaskManager.Task::(_updateState)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state init
-> state preparing
Thread-17::INFO::2012-11-14
12:46:14,129::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=4,
spUUID='00000000-0000-0000-0000-000000000000',
conList=[{'connection': '/data', 'iqn': '', 'portal': '', 'user':
'', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-17::ERROR::2012-11-14
12:46:14,212::hsm::2057::Storage.HSM::(connectStorageServer) Could
not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
    if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
    fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 274, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
    raise Timeout()
Timeout
Thread-17::INFO::2012-11-14
12:46:14,231::logUtils::39::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status':
100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14
12:46:14,231::task::1151::TaskManager.Task::(prepare)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::finished:
{'statuslist': [{'status': 100, 'id':
'00000000-0000-0000-0000-000000000000'}]}
Thread-17::DEBUG::2012-11-14
12:46:14,232::task::568::TaskManager.Task::(_updateState)
Task=`0eb0651c-bb23-4b49-a07a-a27a9bbc4129`::moving from state
preparing -> state finished
Thread-17::DEBUG::2012-11-14
12:46:14,232::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-17::DEBUG::2012-11-14
12:46:14,233::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
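
The traceback ends in a Timeout raised by remoteFileHandler while vdsm is validating access to the '/data' path, so a quick way to narrow this down is to repeat that directory check outside vdsm. The snippet below is only a rough sketch and not vdsm code: it assumes the path '/data', a 10 second timeout and the 'vdsm' user, and it has to be run as root so it can switch to that user.

#!/usr/bin/env python
# Standalone check (hypothetical, not part of vdsm): mimic the R/W/X
# test behind fileSD.validateDirAccess, but time out instead of
# hanging forever on a dead or unresponsive mount.
import multiprocessing
import os
import pwd
import sys

PATH = '/data'       # storage domain path taken from the log above
TIMEOUT = 10         # seconds to wait before declaring the path hung
VDSM_USER = 'vdsm'   # permissions matter for this user, not for root

def _check(path):
    # Switch to the vdsm user so permissions are tested the way vdsm
    # itself would see them.
    pw = pwd.getpwnam(VDSM_USER)
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)
    ok = os.access(path, os.R_OK | os.W_OK | os.X_OK)
    sys.exit(0 if ok else 1)

if __name__ == '__main__':
    proc = multiprocessing.Process(target=_check, args=(PATH,))
    proc.start()
    proc.join(TIMEOUT)
    if proc.is_alive():
        # Same symptom as the Timeout in the log: the check never returns.
        proc.terminate()
        print('timed out: %s looks like a hung or unreachable mount' % PATH)
    elif proc.exitcode == 0:
        print('%s is readable, writable and searchable for %s' % (PATH, VDSM_USER))
    else:
        print('%s is not accessible for %s; check ownership/permissions' % (PATH, VDSM_USER))

If this script also hangs, whatever is behind /data is probably unreachable; if it reports a permission problem instead, check the ownership of the export (typically vdsm:kvm, uid/gid 36) on the storage side.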

Kind regards,

Jorick Astrego
Netbulae B.V.
3
7