Re: [ovirt-devel] "Host.queryVms": A proposal for a new VDSM API verb
by Vinzenz Feenstra
On 07/22/2014 02:54 PM, Piotr Kliczewski wrote:
> On Tue, Jul 22, 2014 at 2:04 PM, Vinzenz Feenstra <vfeenstr(a)redhat.com> wrote:
>> On 07/22/2014 02:00 PM, Francesco Romani wrote:
>>> ----- Original Message -----
>>>> From: "Vinzenz Feenstra" <vfeenstr(a)redhat.com>
>>>> To: devel(a)ovirt.org
>>>> Sent: Tuesday, July 22, 2014 11:29:40 AM
>>>> Subject: Re: [ovirt-devel] "Host.queryVms": A proposal for a new VDSM API
>>>> verb
>>>>
>>>> On 07/14/2014 04:05 PM, Vinzenz Feenstra wrote:
>>>>> Hi,
>>>> Since this mail did not receive enough attention I am bumping it again.
>>>>
>>>> There is currently a draft patch for this proposal:
>>>> http://gerrit.ovirt.org/#/c/28819/
>>>> The patch is not final, since the query function should not need to
>>>> update all the data on every call. (This should be done directly by
>>>> the data-modifying code; however, that will be implemented by
>>>> follow-up patches.)
>>>>
>>>> The trackable.py implementation can also easily be extended in the
>>>> future to enable push notifications to the engine once we switch to
>>>> the new communication channel. This could be done by subscribing to
>>>> certain keys in the TrackableMapping instance.
>>>> (Not implemented yet)
>>> Nice! Both nice to have now and on top of JSON/push notifications
>>> tomorrow.
>>> I just had a quick look at the changes to vm.py and to the interface,
>>> and they look nice as well.
>>>
>>> [...]
>>>>> I have executed some tests, and in those scenarios the new verb
>>>>> can reduce the data transferred and the average response body size
>>>>> by 75%-90%, depending on the scenario and usage.
>>>>>
>>>>> The test results can be found here:
>>>>> http://www.ovirt.org/Feature/VDSM_VM_Query_API/Measurements#Results
>>>>> (An explanation of the tested methods is on the top of the page and a
>>>>> description of the scenario in each section)
>>> Nice graphs :)
>>> Silly comment: having units in 'bytes' on the X-axis makes the numbers
>>> somewhat hard to parse (for me). I suggest converting them to KiB for
>>> better readability.
>>> The savings look really nice.
>> The table below shows it in MiB in a separate column
>>>
>>> Bests,
>>>
> The API looks good, but I have a small comment on the JSON schema. VDSM
> is currently versioned 4.16, not 4.15, so please update the 'Since'
> field for each schema change.
I'll do that with the next rebase of the patch, thanks for pointing it out. :-)
>
>> --
>> Regards,
>>
>> Vinzenz Feenstra | Senior Software Engineer
>> RedHat Engineering Virtualization R & D
>> Phone: +420 532 294 625
>> IRC: vfeenstr or evilissimo
>>
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>> _______________________________________________
>> Devel mailing list
>> Devel(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
--
Regards,
Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Regression in java-1.7.0-openjdk-1.7.0.65
by Martin Perina
Hi,
there's a regression in java-1.7.0-openjdk-1.7.0.65 (probably this one [1])
that causes failures in our tests:
Tests in error:
initializationError(org.ovirt.engine.api.restapi.resource.BackendResourceInfoDetailTest): Bad <init> method call from inside of a branch
Exception Details:
Location:
org/ovirt/engine/api/restapi/resource/AbstractBackendResourceLoggingTest.<init>(Lorg/powermock/core/IndicateReloadClass;)V @40: invokespecial
Reason:
Error exists in the bytecode
Bytecode:
0000000: 2a2b 4e4d 1210 b800 1604 bd00 0d59 032d
0000010: 5312 a6b8 001b b800 213a 0519 05b2 0025
0000020: a500 0e2a 01c0 0027 b700 2aa7 000a 2c2d
0000030: b700 2a01 57b1
Stackmap Table:
full_frame(@46,{UninitializedThis,Object[#39],UninitializedThis,Object[#39],Top,Object[#13]},{})
full_frame(@53,{Object[#2],Object[#39],Object[#2],Object[#39],Top,Object[#13]},{})
initializationError(org.ovirt.engine.api.restapi.resource.BackendResourceDebugDetailTest): Bad <init> method call from inside of a branch
Exception Details:
Location:
org/ovirt/engine/api/restapi/resource/AbstractBackendResourceLoggingTest.<init>(Lorg/powermock/core/IndicateReloadClass;)V @40: invokespecial
Reason:
Error exists in the bytecode
Bytecode:
0000000: 2a2b 4e4d 1210 b800 1604 bd00 0d59 032d
0000010: 5312 a6b8 001b b800 213a 0519 05b2 0025
0000020: a500 0e2a 01c0 0027 b700 2aa7 000a 2c2d
0000030: b700 2a01 57b1
Stackmap Table:
full_frame(@46,{UninitializedThis,Object[#39],UninitializedThis,Object[#39],Top,Object[#13]},{})
full_frame(@53,{Object[#2],Object[#39],Object[#2],Object[#39],Top,Object[#13]},{})
If you spot this error, just downgrade to java-1.7.0-openjdk-1.7.0.60, which works fine.
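A possible way to roll back on RHEL/CentOS, assuming the older build is still
available in your repositories (exact package names may vary per system):

yum downgrade java-1.7.0-openjdk-1.7.0.60 java-1.7.0-openjdk-devel-1.7.0.60

If it has already been dropped from the repositories, the old RPMs can be
fetched from koji and installed explicitly.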
Martin
[1] https://bugs.openjdk.java.net/browse/JDK-8051012
Webadmin build failure on ovirt-engine master
by Moti Asayag
Hi,
Building the ovirt-engine master branch fails with the following, while building the ovirt-engine-3.5
branch succeeds.
Any idea what might cause the failure? It seems like a local issue, since the Jenkins build passes.
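One data point that may help, judging from the AetherClassNotFound help link at
the bottom of the log: the properties-maven-plugin classpath still references
org.sonatype.aether, which Maven 3.1+ replaced with Eclipse Aether, so a locally
newer Maven than the one Jenkins uses could explain it. A quick check (the 3.0.x
path below is only an example):

mvn -version | head -1
# if it reports 3.1.x or newer, retry the build with a Maven 3.0.x, e.g.:
~/apache-maven-3.0.5/bin/mvn clean install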
[INFO] ------------------------------------------------------------------------
[INFO] Building WebAdmin 3.6.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- properties-maven-plugin:0.2.5:set-properties (set-properties) @ webadmin ---
[WARNING] Error injecting: com.github.goldin.plugins.properties.PropertiesMojo
java.lang.NoClassDefFoundError: Lorg/sonatype/aether/RepositorySystem;
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2436)
at java.lang.Class.getDeclaredFields(Class.java:1806)
at com.google.inject.spi.InjectionPoint.getInjectionPoints(InjectionPoint.java:661)
at com.google.inject.spi.InjectionPoint.forInstanceMethodsAndFields(InjectionPoint.java:366)
at com.google.inject.internal.ConstructorBindingImpl.getInternalDependencies(ConstructorBindingImpl.java:165)
at com.google.inject.internal.InjectorImpl.getInternalDependencies(InjectorImpl.java:609)
at com.google.inject.internal.InjectorImpl.cleanup(InjectorImpl.java:565)
at com.google.inject.internal.InjectorImpl.initializeJitBinding(InjectorImpl.java:551)
at com.google.inject.internal.InjectorImpl.createJustInTimeBinding(InjectorImpl.java:865)
at com.google.inject.internal.InjectorImpl.createJustInTimeBindingRecursive(InjectorImpl.java:790)
at com.google.inject.internal.InjectorImpl.getJustInTimeBinding(InjectorImpl.java:278)
at com.google.inject.internal.InjectorImpl.getBindingOrThrow(InjectorImpl.java:210)
at com.google.inject.internal.InjectorImpl.getProviderOrThrow(InjectorImpl.java:986)
at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:1019)
at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:982)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1032)
at org.eclipse.sisu.space.AbstractDeferredClass.get(AbstractDeferredClass.java:48)
at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86)
at com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:55)
at com.google.inject.internal.ProviderInternalFactory$1.call(ProviderInternalFactory.java:70)
at com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:100)
at org.eclipse.sisu.plexus.PlexusLifecycleManager.onProvision(PlexusLifecycleManager.java:133)
at com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:109)
at com.google.inject.internal.ProvisionListenerStackCallback.provision(ProvisionListenerStackCallback.java:55)
at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:68)
at com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:47)
at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:997)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1047)
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:993)
at com.google.inject.Scopes$1$1.get(Scopes.java:59)
at org.eclipse.sisu.inject.LazyBeanEntry.getValue(LazyBeanEntry.java:82)
at org.eclipse.sisu.plexus.LazyPlexusBean.getValue(LazyPlexusBean.java:51)
at org.codehaus.plexus.DefaultPlexusContainer.lookup(DefaultPlexusContainer.java:260)
at org.codehaus.plexus.DefaultPlexusContainer.lookup(DefaultPlexusContainer.java:252)
at org.apache.maven.plugin.internal.DefaultMavenPluginManager.getConfiguredMojo(DefaultMavenPluginManager.java:459)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:97)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: java.lang.ClassNotFoundException: org.sonatype.aether.RepositorySystem
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
at org.codehaus.plexus.classworlds.realm.ClassRealm.unsynchronizedLoadClass(ClassRealm.java:259)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:235)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:227)
... 57 more
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] ovirt-root ........................................ SUCCESS [0.215s]
[INFO] oVirt Build Tools root ............................ SUCCESS [0.009s]
[INFO] oVirt checkstyle .................................. SUCCESS [0.672s]
[INFO] oVirt JBoss Modules Maven Plugin .................. SUCCESS [2.945s]
[INFO] oVirt Checkstyle Checks ........................... SUCCESS [0.650s]
[INFO] Extensions API root ............................... SUCCESS [0.115s]
[INFO] ovirt-engine-extensions-api ....................... SUCCESS [3.919s]
[INFO] oVirt Modules - backend ........................... SUCCESS [0.004s]
[INFO] oVirt Manager ..................................... SUCCESS [0.003s]
[INFO] oVirt Engine dependencies ......................... SUCCESS [1.514s]
[INFO] oVirt Modules - manager ........................... SUCCESS [0.519s]
[INFO] Universal utilities ............................... SUCCESS [2.256s]
[INFO] Extensions manager ................................ SUCCESS [1.267s]
[INFO] CSharp Compatibility .............................. SUCCESS [1.524s]
[INFO] Common Code ....................................... SUCCESS [10.831s]
[INFO] Common utilities .................................. SUCCESS [5.684s]
[INFO] Data Access Layer ................................. SUCCESS [7.845s]
[INFO] engine scheduler bean ............................. SUCCESS [1.283s]
[INFO] Vds broker ........................................ SUCCESS [6.821s]
[INFO] Backend Authentication, Authorization and Accounting SUCCESS [1.139s]
[INFO] builtin-extensions ................................ SUCCESS [2.053s]
[INFO] Search Backend .................................... SUCCESS [2.212s]
[INFO] Backend Logic @Service bean ....................... SUCCESS [16.767s]
[INFO] oVirt RESTful API Backend Integration ............. SUCCESS [0.117s]
[INFO] oVirt RESTful API interface ....................... SUCCESS [0.098s]
[INFO] oVirt Engine API Definition ....................... SUCCESS [9.196s]
[INFO] oVirt Engine API Commom Parent POM ................ SUCCESS [0.058s]
[INFO] oVirt Engine API Common JAX-RS .................... SUCCESS [1.772s]
[INFO] oVirt RESTful API Backend Integration Type Mappers SUCCESS [6.886s]
[INFO] Branding package .................................. SUCCESS [1.054s]
[INFO] oVirt RESTful API Backend Integration JAX-RS Resources SUCCESS [9.550s]
[INFO] oVirt RESTful API Backend Integration Webapp ...... SUCCESS [0.424s]
[INFO] Custom Logger Using Extensions .................... SUCCESS [0.657s]
[INFO] oVirt Engine Web Root ............................. SUCCESS [0.135s]
[INFO] ovirt-engine services ............................. SUCCESS [0.629s]
[INFO] oVirt Engine Web Docs ............................. SUCCESS [0.496s]
[INFO] ovirt-engine welcome .............................. SUCCESS [0.952s]
[INFO] oVirt Engine Tools ................................ SUCCESS [1.874s]
[INFO] oVirt Modules :: Frontend ......................... SUCCESS [0.002s]
[INFO] oVirt Modules :: Webadmin ......................... SUCCESS [0.002s]
[INFO] oVirt Modules - ui ................................ SUCCESS [0.002s]
[INFO] Extensions for GWT ................................ SUCCESS [0.935s]
[INFO] UI Utils Compatibility (for UICommon) ............. SUCCESS [1.707s]
[INFO] Frontend for GWT UI Projects ...................... SUCCESS [5.142s]
[INFO] UICommonWeb ....................................... SUCCESS [12.164s]
[INFO] oVirt GWT UI common infrastructure ................ SUCCESS [9.654s]
[INFO] WebAdmin .......................................... FAILURE [0.215s]
[INFO] UserPortal ........................................ SKIPPED
[INFO] oVirt Server EAR .................................. SKIPPED
[INFO] ovirt-engine maven make ........................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:14.910s
[INFO] Finished at: Sun Jul 20 21:48:52 IDT 2014
[INFO] Final Memory: 222M/1363M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.github.goldin:properties-maven-plugin:0.2.5:set-properties (set-properties) on project webadmin: Execution set-properties of goal com.github.goldin:properties-maven-plugin:0.2.5:set-properties failed: A required class was missing while executing com.github.goldin:properties-maven-plugin:0.2.5:set-properties: Lorg/sonatype/aether/RepositorySystem;
[ERROR] -----------------------------------------------------
[ERROR] realm = plugin>com.github.goldin:properties-maven-plugin:0.2.5
[ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
[ERROR] urls[0] = file:/home/masayag/.m2/repository/com/github/goldin/properties-maven-plugin/0.2.5/properties-maven-plugin-0.2.5.jar
[ERROR] urls[1] = file:/home/masayag/.m2/repository/com/github/goldin/maven-common/0.2.5/maven-common-0.2.5.jar
[ERROR] urls[2] = file:/home/masayag/.m2/repository/org/codehaus/plexus/plexus-component-annotations/1.5.5/plexus-component-annotations-1.5.5.jar
[ERROR] urls[3] = file:/home/masayag/.m2/repository/org/sonatype/plexus/plexus-sec-dispatcher/1.3/plexus-sec-dispatcher-1.3.jar
[ERROR] urls[4] = file:/home/masayag/.m2/repository/org/sonatype/plexus/plexus-cipher/1.4/plexus-cipher-1.4.jar
[ERROR] urls[5] = file:/home/masayag/.m2/repository/org/sonatype/aether/aether-util/1.13.1/aether-util-1.13.1.jar
[ERROR] urls[6] = file:/home/masayag/.m2/repository/org/apache/maven/shared/file-management/1.2.1/file-management-1.2.1.jar
[ERROR] urls[7] = file:/home/masayag/.m2/repository/org/apache/maven/shared/maven-shared-io/1.1/maven-shared-io-1.1.jar
[ERROR] urls[8] = file:/home/masayag/.m2/repository/org/apache/maven/shared/maven-filtering/1.0/maven-filtering-1.0.jar
[ERROR] urls[9] = file:/home/masayag/.m2/repository/org/sonatype/plexus/plexus-build-api/0.0.4/plexus-build-api-0.0.4.jar
[ERROR] urls[10] = file:/home/masayag/.m2/repository/org/apache/maven/shared/maven-common-artifact-filters/1.4/maven-common-artifact-filters-1.4.jar
[ERROR] urls[11] = file:/home/masayag/.m2/repository/org/codehaus/plexus/plexus/3.1/plexus-3.1.pom
[ERROR] urls[12] = file:/home/masayag/.m2/repository/org/codehaus/plexus/plexus-containers/1.5.5/plexus-containers-1.5.5.pom
[ERROR] urls[13] = file:/home/masayag/.m2/repository/org/codehaus/plexus/plexus-interpolation/1.15/plexus-interpolation-1.15.jar
[ERROR] urls[14] = file:/home/masayag/.m2/repository/org/codehaus/plexus/plexus-utils/3.0/plexus-utils-3.0.jar
[ERROR] urls[15] = file:/home/masayag/.m2/repository/org/codehaus/plexus/plexus-digest/1.1/plexus-digest-1.1.jar
[ERROR] urls[16] = file:/home/masayag/.m2/repository/org/codehaus/groovy/groovy-all/1.8.6/groovy-all-1.8.6.jar
[ERROR] urls[17] = file:/home/masayag/.m2/repository/org/codehaus/gmaven/gmaven-mojo/1.4/gmaven-mojo-1.4.jar
[ERROR] urls[18] = file:/home/masayag/.m2/repository/org/codehaus/gmaven/runtime/gmaven-runtime-api/1.4/gmaven-runtime-api-1.4.jar
[ERROR] urls[19] = file:/home/masayag/.m2/repository/org/codehaus/gmaven/feature/gmaven-feature-api/1.4/gmaven-feature-api-1.4.jar
[ERROR] urls[20] = file:/home/masayag/.m2/repository/org/apache/ant/ant/1.8.3/ant-1.8.3.jar
[ERROR] urls[21] = file:/home/masayag/.m2/repository/org/apache/ant/ant-launcher/1.8.3/ant-launcher-1.8.3.jar
[ERROR] urls[22] = file:/home/masayag/.m2/repository/org/apache/ant/ant-commons-net/1.8.3/ant-commons-net-1.8.3.jar
[ERROR] urls[23] = file:/home/masayag/.m2/repository/commons-net/commons-net/1.4.0/commons-net-1.4.0.jar
[ERROR] urls[24] = file:/home/masayag/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar
[ERROR] urls[25] = file:/home/masayag/.m2/repository/org/apache/ant/ant-jsch/1.8.3/ant-jsch-1.8.3.jar
[ERROR] urls[26] = file:/home/masayag/.m2/repository/org/codehaus/mojo/versions-maven-plugin/1.3.1/versions-maven-plugin-1.3.1.jar
[ERROR] urls[27] = file:/home/masayag/.m2/repository/org/apache/maven/reporting/maven-reporting-api/2.0.6/maven-reporting-api-2.0.6.jar
[ERROR] urls[28] = file:/home/masayag/.m2/repository/org/apache/maven/reporting/maven-reporting-impl/2.0.4.1/maven-reporting-impl-2.0.4.1.jar
[ERROR] urls[29] = file:/home/masayag/.m2/repository/commons-validator/commons-validator/1.2.0/commons-validator-1.2.0.jar
[ERROR] urls[30] = file:/home/masayag/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar
[ERROR] urls[31] = file:/home/masayag/.m2/repository/commons-digester/commons-digester/1.6/commons-digester-1.6.jar
[ERROR] urls[32] = file:/home/masayag/.m2/repository/xml-apis/xml-apis/1.0.b2/xml-apis-1.0.b2.jar
[ERROR] urls[33] = file:/home/masayag/.m2/repository/org/apache/maven/doxia/doxia-core/1.0-alpha-10/doxia-core-1.0-alpha-10.jar
[ERROR] urls[34] = file:/home/masayag/.m2/repository/org/apache/maven/wagon/wagon-file/1.0-beta-2/wagon-file-1.0-beta-2.jar
[ERROR] urls[35] = file:/home/masayag/.m2/repository/org/apache/maven/doxia/doxia-sink-api/1.0/doxia-sink-api-1.0.jar
[ERROR] urls[36] = file:/home/masayag/.m2/repository/org/apache/maven/doxia/doxia-site-renderer/1.0/doxia-site-renderer-1.0.jar
[ERROR] urls[37] = file:/home/masayag/.m2/repository/org/codehaus/plexus/plexus-velocity/1.1.7/plexus-velocity-1.1.7.jar
[ERROR] urls[38] = file:/home/masayag/.m2/repository/org/apache/velocity/velocity/1.5/velocity-1.5.jar
[ERROR] urls[39] = file:/home/masayag/.m2/repository/org/apache/maven/doxia/doxia-decoration-model/1.0/doxia-decoration-model-1.0.jar
[ERROR] urls[40] = file:/home/masayag/.m2/repository/org/apache/maven/doxia/doxia-module-apt/1.0/doxia-module-apt-1.0.jar
[ERROR] urls[41] = file:/home/masayag/.m2/repository/org/apache/maven/doxia/doxia-module-fml/1.0/doxia-module-fml-1.0.jar
[ERROR] urls[42] = file:/home/masayag/.m2/repository/org/apache/maven/doxia/doxia-module-xdoc/1.0/doxia-module-xdoc-1.0.jar
[ERROR] urls[43] = file:/home/masayag/.m2/repository/org/apache/maven/doxia/doxia-module-xhtml/1.0/doxia-module-xhtml-1.0.jar
[ERROR] urls[44] = file:/home/masayag/.m2/repository/org/codehaus/plexus/plexus-i18n/1.0-beta-7/plexus-i18n-1.0-beta-7.jar
[ERROR] urls[45] = file:/home/masayag/.m2/repository/org/codehaus/woodstox/wstx-asl/3.2.7/wstx-asl-3.2.7.jar
[ERROR] urls[46] = file:/home/masayag/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar
[ERROR] urls[47] = file:/home/masayag/.m2/repository/commons-lang/commons-lang/2.4/commons-lang-2.4.jar
[ERROR] urls[48] = file:/home/masayag/.m2/repository/org/gcontracts/gcontracts-core/1.2.5/gcontracts-core-1.2.5.jar
[ERROR] urls[49] = file:/home/masayag/.m2/repository/asm/asm/3.2/asm-3.2.jar
[ERROR] urls[50] = file:/home/masayag/.m2/repository/log4j/log4j/1.2.16/log4j-1.2.16.jar
[ERROR] urls[51] = file:/home/masayag/.m2/repository/com/github/goldin/gcommons/0.5.4/gcommons-0.5.4.jar
[ERROR] urls[52] = file:/home/masayag/.m2/repository/org/slf4j/slf4j-log4j12/1.6.4/slf4j-log4j12-1.6.4.jar
[ERROR] urls[53] = file:/home/masayag/.m2/repository/org/apache/commons/commons-exec/1.1/commons-exec-1.1.jar
[ERROR] urls[54] = file:/home/masayag/.m2/repository/de/schlichtherle/truezip/6.8.2/truezip-6.8.2.jar
[ERROR] urls[55] = file:/home/masayag/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar
[ERROR] urls[56] = file:/home/masayag/.m2/repository/org/springframework/spring-asm/3.1.1.RELEASE/spring-asm-3.1.1.RELEASE.jar
[ERROR] urls[57] = file:/home/masayag/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar
[ERROR] urls[58] = file:/home/masayag/.m2/repository/org/slf4j/slf4j-api/1.6.4/slf4j-api-1.6.4.jar
[ERROR] urls[59] = file:/home/masayag/.m2/repository/org/sonatype/sisu/sisu-guice/3.1.1/sisu-guice-3.1.1.jar
[ERROR] urls[60] = file:/home/masayag/.m2/repository/org/sonatype/sisu/sisu-guava/0.11.1/sisu-guava-0.11.1.jar
[ERROR] urls[61] = file:/home/masayag/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar
[ERROR] urls[62] = file:/home/masayag/.m2/repository/org/sonatype/sisu/sisu-inject-bean/2.3.0/sisu-inject-bean-2.3.0.jar
[ERROR] urls[63] = file:/home/masayag/.m2/repository/br/com/ingenieux/maven/annomojo/org.jfrog.maven.maven-plugin-anno/1.4.1/org.jfrog.maven.maven-plugin-anno-1.4.1.jar
[ERROR] urls[64] = file:/home/masayag/.m2/repository/com/jcraft/jsch/0.1.48/jsch-0.1.48.jar
[ERROR] urls[65] = file:/home/masayag/.m2/repository/junit/junit/4.10/junit-4.10.jar
[ERROR] Number of foreign imports: 1
[ERROR] import: Entry[import from realm ClassRealm[maven.api, parent: null]]
[ERROR]
[ERROR] -----------------------------------------------------: org.sonatype.aether.RepositorySystem
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :webadmin
make[2]: *** [maven] Error 1
make[2]: Leaving directory `/home/masayag/work/ovirt-engine'
make[1]: *** [tmp.built] Error 2
make[1]: Leaving directory `/home/masayag/work/ovirt-engine'
make: *** [all-dev] Error 2
Thanks,
Moti
postponing oVirt 3.5.0 second beta
by Sandro Bonazzola
Hi,
we're going to postpone the oVirt 3.5.0 second beta, since ovirt-engine currently doesn't build [1].
We have also identified a set of bugs [2] causing automated tests to fail, so we're going to block the release
until the engine builds cleanly and at least the most critical issues found have been fixed.
Please note that more than 80 patches are now in master and not backported to the 3.5 branch.
Maintainers, please ensure all patches targeted to 3.5 are properly backported.
We'll probably postpone the second test day too, depending on when we're able to compose the second beta build.
[1] http://jenkins.ovirt.org/view/Stable%20branches%20per%20project/view/ovir...
[2] http://goo.gl/pFngWU
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Cannot run oVirt on latest master
by Eli Mesika
Hi Guys
I am getting this in server.log:
2014-07-20 13:02:25,453 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-1) MSC000001: Failed to start service jboss.deployment.unit."legacy_restapi.war".POST_MODULE: org.jboss.msc.service.StartException in service jboss.deployment.unit."legacy_restapi.war".POST_MODULE: Failed to process phase POST_MODULE of deployment "legacy_restapi.war"
at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc.jar:1.0.2.GA]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc.jar:1.0.2.GA]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_60]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_60]
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: JBAS011232: Only one JAX-RS Application Class allowed. org.ovirt.engine.api.restapi.BackendApplication org.ovirt.engine.api.restapi.BackendApplication
at org.jboss.as.jaxrs.deployment.JaxrsScanningProcessor.scan(JaxrsScanningProcessor.java:209)
at org.jboss.as.jaxrs.deployment.JaxrsScanningProcessor.deploy(JaxrsScanningProcessor.java:105)
at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
... 5 more
I saw that Juan H did some changes last Thursday (http://gerrit.ovirt.org/#/c/30222), but even when I rebased onto a commit before this patch was merged, I got the same error.
Any ideas ?
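One thing worth ruling out is stale artifacts from a previous deployment: the
duplicated BackendApplication class above suggests an old jar left in the
deployed WAR. A clean rebuild, sketched here with the usual development-setup
paths as an assumption:

make clean
rm -rf "$HOME/ovirt-engine"   # or whatever PREFIX the engine was deployed to
make install-dev PREFIX="$HOME/ovirt-engine"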
Thanks
Eli Mesika
remote rejected - not Signed-off-by
by Kobi Ianko
Hi,
Got this message when trying to push several changes:
To gerrit.ovirt.org:ovirt-engine
! [remote rejected] HEAD -> refs/for/master (not Signed-off-by author/committer/uploader in commit message footer)
error: failed to push some refs to 'gerrit.ovirt.org:ovirt-engine'
All of the patches have "Signed-off-by" in the footer, and there are no conflicts in the message.
Do you have any idea what might be the issue here when trying to push?
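For reference, the check wants the Signed-off-by identity to match the author,
committer, or uploader, per the error text. A quick way to compare them, with
plain git commands:

git log -1 --pretty=full   # shows the Author: and Commit: identities next to the footer
git commit --amend -s      # re-adds a Signed-off-by matching your current user.name/user.email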
10x, Kobi
[ANN] oVirt 3.4.3 Release is now available
by Sandro Bonazzola
The oVirt development team is pleased to announce the general
availability of oVirt 3.4.3 as of Jul 18th 2014. This release
solidifies oVirt as a leading KVM management application and open
source alternative to VMware vSphere.
oVirt is available now for Fedora 19 and Red Hat Enterprise Linux 6.5
(or similar).
This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.
The existing ovirt-3.4 repository has been updated to deliver this
release without the need to enable any other repository; however, since we
introduced package signing, you need an additional step in order to get
the public keys installed on your system if you're upgrading from an older release.
Please refer to release notes [1] for Installation / Upgrade instructions.
Please note that mirrors will need a couple of days before being synchronized.
If you want to be sure to use the latest RPMs and don't want to wait for the mirrors,
you can edit /etc/yum.repos.d/ovirt-3.4.repo, commenting out the mirror line and
uncommenting the baseurl line.
A new oVirt Live ISO will be available too [2].
[1] http://www.ovirt.org/OVirt_3.4.3_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.4/iso/ovirt-live-el6-3.4.3.iso
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
oVirt 3.4.3 GA postponed due to blocker
by Sandro Bonazzola
Hi,
a recent Python upgrade in Fedora 19 broke the vdsmd service.
While we wait for an updated python-cpopen package to be built, we're postponing the oVirt 3.4.3 GA.
The package should be built by tomorrow and will be hosted in the oVirt repo until it becomes available in the Fedora repositories.
We'll release 3.4.3 after basic sanity testing with the new package.
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
python-ioprocess for el7?
by Adam Litke
Hi,
I am looking for python-ioprocess RPMs (new enough for latest vdsm
requirements). Can anyone point me in the right direction? Thanks!
--
Adam Litke
[QE] Hardening Guide
by Sandro Bonazzola
Hi,
while I was working on Bug 1097022 - ovirt-engine-setup: weak default passwords for PostgreSQL database users,
I was wondering where to write the hardening tips described in comment #18.
It looks like we don't have any page on the oVirt wiki about hardening.
Is anyone interested in contributing to such a page?
I guess it could be created as http://www.ovirt.org/OVirt_Hardening_Guide
Thoughts?
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
problem with UI css file
by Kobi Ianko
Hi,
Got the following error in the engine log:
Can't read file "/home/kianku/ovirt-engine/etc/ovirt-engine/branding/00-ovirt.brand/patternfly/css/styles.min.css" for request "/ovirt-engine/webadmin/theme/00-ovirt.brand/patternfly/css/styles.min.css", will send a 404 error response.
For some reason I'm missing the "patternfly" directory...
I've fetched and rebased with the latest.
10x
oVirt Node Weekly Meeting Minutes - July 16 2014
by Fabian Deutsch
Minutes: http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-15-13.17.html
Minutes (text): http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-15-13.17.txt
Log: http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-15-13.17.log.html
=================================
#ovirt: oVirt Node Weekly Meeting
=================================
Meeting started by fabiand at 13:17:08 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-15-13.17.log.html
.
Meeting summary
---------------
* Agenda (fabiand, 13:19:01)
* Action Item Review (fabiand, 13:19:18)
* Next Release (3.1) (fabiand, 13:19:29)
* 3.5 Feature Status (fabiand, 13:20:51)
* Other Items (fabiand, 13:20:59)
* Action Item Review (fabiand, 13:21:17)
* LINK:
http://resources.ovirt.org/meetings/ovirt/2014/ovirt.2014-07-08-13.03.txt
(fabiand, 13:21:55)
* fabiand and rbarry to test the ovirt-node iso (fabiand, 13:22:12)
* QE team discovered some issues (fabiand, 13:23:15)
* LINK: http://lists.ovirt.org/pipermail/devel/2014-July/008142.html
(fabiand, 13:24:30)
* Next Release (3.1) (fabiand, 13:26:40)
* 3.5 Feature Status (fabiand, 13:29:55)
* generic-registration -- Needs some clarifying (fabiand, 13:30:48)
* hosted-engine-plugin -- Needs a maintainer (fabiand, 13:31:03)
* virtual-appliance -- Has a working jenkins build (fabiand,
13:31:22)
* Other Items (fabiand, 13:35:45)
* LINK: http://bpaste.net/show/HrZnry3kru8D1naejzt7/ (peetaur2,
15:39:00)
Meeting ended at 14:04:49 UTC.
Action Items
------------
Action Items, by person
-----------------------
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* fabiand (44)
* YamakasY_ (40)
* peetaur2 (32)
* clarkee (22)
* ojorge (21)
* thomas (20)
* jhernand (13)
* bkp (12)
* msivak (12)
* sbonazzo (10)
* jvandewege (8)
* urthmover (8)
* rbarry (8)
* dougsland (5)
* YamakasY (5)
* leaboy (3)
* Dick-Tracy (3)
* ovirtbot (3)
* oved_ (2)
* SvenKieske (2)
* lvernia (2)
* kobi (1)
* Moe__ (1)
* derez (1)
* dcaro (1)
* eedri (1)
* yzaslavs|mtg (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[QE][ACTION NEEDED] oVirt 3.5.0 Second Beta status
by Sandro Bonazzola
Hi,
We're going to compose oVirt 3.5.0 Second Beta on Mon *2014-07-21 08:00 UTC*.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs before *2014-07-20 15:00 UTC*
The bug tracker [1] shows the following proposed blockers to be reviewed:
Bug ID Whiteboard Status Summary
1115044 infra POST Host stuck in "Unassigned" state when using jsonrpc and disconnection from pool failed
1115152 infra POST Cannot edit or create block storage domain when using jsonrpc
1113974 integration POST Hostname validation during all-in-one setup
1115001 network ASSIGNED Error code 23 when invoking Setup Networks
1119019 network POST Remove network with network custom properties from Host fails
1110305 virt POST BSOD - CLOCK_WATCHDOG_TIMEOUT_2 - Win 7SP1 guest, need to set hv_relaxed
Feature freeze is now effective, and the branch has been created.
All new patches must be backported to the 3.5 branch too.
Completed features are marked in green on the Features Status Table [2]
There are still 412 bugs [3] targeted to 3.5.0.
Excluding node and documentation bugs we still have 364 bugs [4] targeted to 3.5.0.
Maintainers / Assignee:
- Please ensure that completed features are marked in green on the Features Status Table [2]
- Please remember to rebuild your packages before *2014-07-20 15:00* if needed, otherwise the nightly snapshot will be taken.
- Please be sure that the 3.5 snapshot allows creating VMs before *2014-07-20 15:00 UTC*
- If you find a blocker bug please remember to add it to the tracker [1]
- Please start filling release notes, the page has been created here [5]
- Please review and add test cases to oVirt 3.5 Second Test Day [6]
Community:
- save the date for the second test day, scheduled on 2014-07-24!
- You're welcome to join us in testing the next beta release and getting involved in oVirt Quality Assurance [7]!
[1] http://bugzilla.redhat.com/1073943
[2] http://bit.ly/17qBn6F
[3] http://red.ht/1pVEk7H
[4] http://red.ht/1rLCJwF
[5] http://www.ovirt.org/OVirt_3.5_Release_Notes
[6] http://www.ovirt.org/OVirt_3.5_TestDay
[7] http://www.ovirt.org/OVirt_Quality_Assurance
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[QE][ACTION NEEDED] oVirt 3.4.3 GA status
by Sandro Bonazzola
Hi,
We're going to start composing oVirt 3.4.3 GA tomorrow *2014-07-17 08:00 UTC* from the 3.4.3 branch.
The bug tracker [1] shows no open blocking bugs for the release
There are still 10 bugs [2] targeted to 3.4.3.
Excluding node and documentation bugs we still have 3 bugs [3] targeted to 3.4.3.
Bug ID Status Whiteboard Severity Summary
1111655 NEW storage urgent Disks imported from Export Domain to Data Domain are converted to Preallocated after upgrade...
1059309 NEW sla high [events] 'Available memory of host $host (...) under defined threshold...' is logged only once
1048880 NEW network unspecified [vdsm][openstacknet] Migration fails for vNIC using OVS + security groups
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.4.3 should not be released without them fixed.
- Please update the target to any next release for bugs that won't be in 3.4.3:
it will ease gathering the blocking bugs for next releases.
- Please fill release notes, the page has been created here [4]
- Please build packages today, before *2014-07-16 15:00 UTC*.
Community:
- If you're testing oVirt 3.4 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1107968
[2] http://red.ht/1lBAw2R
[3] http://red.ht/1ly9hfA
[4] http://www.ovirt.org/OVirt_3.4.3_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.4.3_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[VDSM][sampling] thread pool status and handling of stuck calls
by Francesco Romani
Hi,
Nir has begun reviewing my draft patches about the thread pool and sampling refactoring (thanks!),
and has already suggested quite a few improvements, which I'd like to summarize.
Quick links to the ongoing discussion:
http://gerrit.ovirt.org/#/c/29191/8/lib/threadpool/worker.py,cm
http://gerrit.ovirt.org/#/c/29190/4/lib/threadpool/README.rst,cm
Quick summary of the discussion on gerrit so far:
1. Extract the scheduling logic from the thread pool: either add a separate scheduler class
or let the sampling tasks reschedule themselves after a successful completion.
Either way, the concept of a 'periodic task', and the added complexity, isn't needed.
2. Drop all the *queue classes I've added, making the package simpler.
They are no longer needed once we remove the concept of a periodic task.
3. Have a per-task timeout and move the stuck-task detection elsewhere, like the worker thread, or
maybe better the aforementioned scheduler.
If the scheduler finds that any task started in the previous pass (or even before!)
has not yet completed, there is no point in keeping this task alive and it should be cancelled.
4. The sampling task (or maybe the scheduler) can be smarter, halting the sampling in the presence of
non-responding calls for a given VM, provided the VM reports its 'health'/responsiveness.
(Hopefully I haven't forgotten anything big.)
In the draft currently published, I reluctantly added the *queue classes and I agree the periodic
task implementation is messy, so I'll be very happy to drop them.
However, a core question still holds: what to do in the presence of a stuck task?
I think it is worth discussing this topic on a medium friendlier than gerrit, as it is the single
most important decision to make in the sampling refactoring.
It all boils down to:
Should we just keep stuck threads around somewhere and wait? Or should we cancel stuck tasks?
A. Let's cancel the stuck tasks.
If we move toward a libvirt connection pool, and we give each worker thread in the sampling pool
a separate libvirt connection, hopefully read-only, then we should be able to cancel a stuck task by
killing the worker's libvirt connection. We'll still need a (probably much simpler) watchman/supervisor,
but no big deal here.
Libvirt allows closing a connection from a different thread.
I haven't actually tried to unblock a stuck thread this way, but I have no reason to believe it
will not work.
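To make #A concrete, here is a minimal (and, as said, untested) Python sketch of
the idea: a private read-only connection per worker, plus a watchdog that closes
it from another thread to abort a stuck call. Names and the URI are just
placeholders, not the actual patch:

import threading
import libvirt

def call_with_watchdog(func, timeout=30.0, uri='qemu:///system'):
    # Each worker owns a private read-only connection; the watchdog
    # closes it from a different thread to unblock a stuck libvirt call.
    conn = libvirt.openReadOnly(uri)
    watchdog = threading.Timer(timeout, conn.close)
    watchdog.daemon = True
    watchdog.start()
    try:
        return func(conn)  # e.g. lambda c: c.lookupByName('a-vm').memoryStats()
    finally:
        watchdog.cancel()
        try:
            conn.close()
        except libvirt.libvirtError:
            pass  # already closed by the watchdog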
B. Let's keep blocked threads around
The code as it is just leaves a blocked libvirt call, and the worker thread that carried it, frozen.
A stuck worker thread can be replaced, up to a cap of frozen threads.
In this worst-case scenario, we end up with one (blocked!) thread per VM, as it is today, and with
no sampling data.
I believe that #A has some drawbacks which we risk overlooking, while at the same time #B has some merits.
Let me explain:
The hardest case is a call blocked in the kernel in D state. Libvirt has no more room than VDSM
to unblock it, and libvirt itself *has* a pool of resources (threads, in this case) which can be depleted
by stuck calls. Actually, retrying a failed task may deplete its pool even faster [1].
I'm not happy to just push this problem down the stack, as it looks to me that we gain
very little by doing so. VDSM itself surely stays cleaner, but the VDS/hypervisor host as a whole
improves just a bit: libvirt scales better, and that gives us some more room.
On the other hand, by avoiding reissuing dangerous calls, I believe we make better use of
the host resources in general. Actually, keeping blocked threads around is a side effect
of not reattempting blocked calls. Moreover, keeping the blocked thread around has a significant
benefit: we can discover at the earliest moment when it is safe to do the blocked call again,
because the blocked call itself returns and we can track this event! (And of course drop the
now-stale result.) Otherwise, if we drop the connection, we lose this event and have no
option other than trying again and hoping for the best [2].
I know the #B approach is not the cleanest, but I think it has slightly more appeal, especially
on the libvirt depletion front.
Thoughts and comments very welcome!
+++
[1] There are extensions to the management API to dynamically adjust the thread pool and/or to cancel
tasks, but they are in the RHEL 7.2 timeframe.
[2] A crazy idea would be to do something like http://en.wikipedia.org/wiki/Exponential_backoff,
which I'm not sure would be beneficial.
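For completeness, the backoff idea from [2] is tiny anyway; roughly:

import random

def backoff_delays(base=1.0, cap=300.0):
    # Yield exponentially growing, jittered delays between retries.
    delay = base
    while True:
        yield delay * random.uniform(0.5, 1.5)
        delay = min(delay * 2, cap)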
Bests and thanks,
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
Custom fencing with fence_virsh
by Adam Litke
Hi all,
I am trying to configure custom fencing using fence_virsh in order to
test out fencing flows with my virtualized oVirt hosts. I'm getting a
failure when clicking the "Test" button. Can someone help me to
diagnose the problem? I have applied the following settings using
engine-config:
~/ovirt-engine/bin/engine-config -s CustomVdsFenceType="xxxvirt"
~/ovirt-engine/bin/engine-config -s CustomFenceAgentMapping="xxxvirt=virsh"
~/ovirt-engine/bin/engine-config -s CustomVdsFenceOptionMapping="xxxvirt:address=ip,username=username,password=password"
(Note that engine-config seems to arbitrarily limit the number of
mapped options to three. Seems like a bug to me.)
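To take the engine out of the equation, the agent can also be exercised by hand
from the proxy host; a sketch with the values from the log below (the -n argument
is the libvirt domain name of the host VM being fenced, which is a guess here):

fence_virsh -a 192.168.2.101 -l root -p <password> -n cascade -o status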
Here is the log output in engine.log:
2014-07-15 11:43:34,813 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(http--0.0.0.0-8080-1) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Host centennial from cluster block was chosen
as a proxy to execute Status command on Host cascade.
2014-07-15 11:43:34,813 INFO
[org.ovirt.engine.core.bll.FenceExecutor] (http--0.0.0.0-8080-1) Using
Host centennial from cluster block as proxy to execute Status command
on Host
2014-07-15 11:43:34,815 INFO
[org.ovirt.engine.core.bll.FenceExecutor] (http--0.0.0.0-8080-1)
Executing <Status> Power Management command, Proxy Host:centennial,
Agent:virsh, Target Host:, Management IP:192.168.2.101, User:root,
Options:
2014-07-15 11:43:34,816 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(http--0.0.0.0-8080-1) START, FenceVdsVDSCommand(HostName =
centennial, HostId = a34f7dbc-dd99-4831-a1a9-54c411080ec1, targetVdsId
= b6b9d480-e20f-411a-9b9c-883fac32a4e5, action = Status, ip =
192.168.2.101, port = , type = virsh, user = root, password = ******,
options = ''), log id: 24f33bda
2014-07-15 11:43:34,875 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(http--0.0.0.0-8080-1) Failed in FenceVdsVDS method, for vds:
centennial; host: 192.168.2.103
2014-07-15 11:43:34,876 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(http--0.0.0.0-8080-1) Command FenceVdsVDSCommand(HostName =
centennial, HostId = a34f7dbc-dd99-4831-a1a9-54c411080ec1, targetVdsId
= b6b9d480-e20f-411a-9b9c-883fac32a4e5, action = Status, ip =
192.168.2.101, port = , type = virsh, user = root, password = ******,
options = '') execution failed. Exception: ClassCastException:
[Ljava.lang.Object; cannot be cast to java.lang.String
2014-07-15 11:43:34,877 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
(http--0.0.0.0-8080-1) FINISH, FenceVdsVDSCommand, log id: 24f33bda
--
Adam Litke
Fwd: Test run for ovirt-node-iso-3.5.0.ovirt35.20140707.el6.iso - Pass
by Fabian Deutsch
Hey,
forwarding these results.
- fabian
----- Forwarded Message -----
From: "Haiyang Dong" <hadong(a)redhat.com>
To: "Fabian Deutsch" <fabiand(a)redhat.com>, "Lei Wang" <leiwang(a)redhat.com>, "ycui" <ycui(a)redhat.com>
Cc: ovirt-devel(a)ovirt.org, "node-devel" <node-devel(a)ovirt.org>
Sent: Tuesday, 15 July, 2014 11:15:03 AM
Subject: Test run for ovirt-node-iso-3.5.0.ovirt35.20140707.el6.iso - Pass
SUMMARY:
The test run for ovirt-node-iso-3.5.0.ovirt35.20140707.el6.iso passed: most functions of this ISO work.
ISO Link:
http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/ovirt-node-iso-3.5.0.ovi...
New Feature Sanity Test:
1. Configure kdump via ssh + sshkey -PASS
2. Set logrotate log with interval Daily/Weekly/Monthly - PASS
3. Set ssh daemon port -PASS
4. Set console path -PASS
5. ovirt-node-config tool parameters:
(a) h - FAILED
(b) --module - PASS
(c) --dry - PASS
(d) --config - PASS
6. ovirt-node-features tool parameters:
(a) -d - FAILED
(b) dumpxml - FAILED
Auto test runs:
http://10.66.8.158:3000/profile/by/ovirt-node-iso-3.5.0.ovirt35.20140707....
Automated - auto-install covered parameters:

No.  Auto-install parameter  Status  Cases / Comments
1    gateway                 PASS    1 case
2    hostname                PASS    1 case
3    ntp                     FAIL    2 cases (bz#1119665)
4    bond_setup              PASS    mode related (14 cases); bond + vlan (1 case)
5    network_layout          PASS    2 cases
6    adminpw                 PASS    1 case
7    kdump_nfs               PASS    1 case
8    netconsole              FAILED  2 cases: default port; custom port (bz#1119566)
9    mem_overcommit          PASS    1 case
10   reinstall               PASS    1 case
11   logrotate_max_size      PASS    1 case
12   ssh_pwauth              PASS    1 case
13   dns                     PASS    1 case
14   nocheck                 PASS    1 case
15   tuned                   PASS    13 cases
16   keyboard                PASS    1 case
17   nfsv4_domain            PASS    1 case
18   syslog                  PASS    1 case
19   iscsi_install           PASS    2 cases: soft iscsi; hard lun iscsi
20   Host VG in two disks    PASS    1 case
21   Host VG and APP VG      PASS    1 case
22   disable_aes_ni          PASS    1 case
23   use_strong_rng          PASS    1 case
24   storage_init            PASS    5 cases: /dev/mapper, ata, usb, scsi:, /dev/sda
25   swap_encrypt            PASS    1 case
26   storage_vol             PASS    2 cases: default, customized
Manual test runs:
https://tcms.engineering.redhat.com/run/159715/?from_plan=13675
https://tcms.engineering.redhat.com/run/159716/?from_plan=13675
https://tcms.engineering.redhat.com/run/159714/?from_plan=13675
https://tcms.engineering.redhat.com/run/159713/?from_plan=13675
Manual acceptance testing matrix:

Main Function   Basic Function      TUI Status   Auto Status
Boot            PXE                 -------      -------
                USB                 PASS         PASS
                CD-ROM              PASS         PASS
                Virtual-Media       PASS         PASS
Partition       Partition storage   PASS         PASS
Installation    FC                  PASS         FAIL
                iSCSI               -------      PASS
                Local Disk          PASS         PASS
                CCISS               PASS         PASS
                USB                 -------      PASS
Upgrade         PXE                 -------      FAIL
                CD-ROM              PASS         FAIL
                USB                 PASS         -------
                Virtual-Media       PASS         FAIL
Uninstall       Uninstall           -------      -------
Network         IPv4                PASS         -------
                IPv6                PASS         -------
                vlan                -------      -------

Main Function   Basic Function      TUI Status                          Auto Status
Logging         Rsyslog             PASS                                -------
                Netconsole          FAIL                                -------
                Kdump NFS           PASS                                -------
                SSH                 PASS                                -------
                Local               PASS                                -------
RHN             RHN                 no RHN plugins in upstream          no RHN plugins in upstream
                Satellite           no RHN plugins in upstream          no RHN plugins in upstream
                SAM                 no RHN plugins in upstream          no RHN plugins in upstream
SNMP            SNMP                no SNMP plugin in this base image   no SNMP plugin in this base image
CIM             CIM                 no CIM plugin in this base image    no CIM plugin in this base image
Keyboard        US / German         PASS                                PASS
Diagnostics     Diagnostics         PASS                                -------
Performance     Performance         PASS                                PASS
Plugins         Plugins             PASS                                -------
Other Menu      Support Menu        PASS                                -------
                iSCSI initiator     PASS                                PASS
                Hostname            PASS                                PASS
                Authentication      PASS                                PASS
For detailed test cases, you can check the following test runs.
======ovirt-node-iso-3.5.0.ovirt35.20140707.el6.iso========
Test Result: PASS
Packages Tested: (1)
ovirt-node-3.1.0-0.0.master.20140707.git2f40d75.el6.noarch
New bugs: (10)
Bug ID Summary Status Reporter Component
1118729 Configure kdump via ssh failed after configuring kdump via ssh + sshkey. New hadong ovirt-node
1118758 Add validator for ssh port to only accpet "22" or "1024-65535" New hadong ovirt-node
1118952 Thrown nameerror:global name 'Feature' is not defined when using "ovirt-node-features dumpxml" New hadong ovirt-node
1118962 Thrown IndexError: list index out of range when using "ovirt-node-config h" New hadong ovirt-node
1118965 [RFE]Move "ovirt-config-password" from "/usr/libexec/" into "/usr/bin/" or "/usr/sbin/" New hadong ovirt-node
1119566 Configuring the Netconsole with ipv4 address failed New guasun ovirt-node
1119571 confirmation_page:Boot device still shown the first disk even if selected the second disk as the boot device New hadong ovirt-node
1119606 The name of "ovirt-node-plugin-vdsm" is incorrect New cshao ovirt-node
1119620 Upgrade failed via kernel cmdline. New cshao ovirt-node
1119665 Auto install with parameter "ntp" to set ntp server failed New hadong ovirt-node
Existing bug list: (0)
Verified: (0)
Info related to display port allocation
by Vinzenz Feenstra
Hi,
I just want to give all of you an update on a recently created bug in VDSM.
Until recently we had two ports allocated for SPICE: one for TLS and one
for plain text.
We have now observed on RHEL 7, and by now also on RHEL 6, that when all
SPICE channels are marked as secure, only the tlsPort value is set in the
domain XML. VDSM will report the non-TLS port as 'port=-1'.
For us this is a good thing, since we're saving one port in these cases,
and we're encouraging users to use the SSL configuration anyway.
I just wanted to give you a heads-up on this, in case anyone is assuming
that the 'port' value reported by the getAllVmStats or list full=True API
calls would return any value but -1.
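A quick way to see what a host actually reports, assuming vdsm-cli is installed
(drop -s if ssl is disabled in vdsm.conf):

vdsClient -s 0 getAllVmStats | grep -i port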
--
Regards,
Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Fwd: Re: [ovirt-users] VDSM respawning too quickly
by Sven Kieske
FYI, please share some karma for this package!
It seems that all host installations for oVirt
fail until this is resolved; maybe there should
be additional tests before releasing new software?
I need to install a new host today and I hope it will
work. Currently EPEL still offers 0.1.3-2 :(
Thanks
-------- Original-Nachricht --------
Betreff: Re: [ovirt-users] VDSM respawning too quickly
Datum: Mon, 14 Jul 2014 15:53:17 -0500
Von: Chris Adams <cma(a)cmadams.net>
An: <users(a)ovirt.org>
Once upon a time, Kyle Gordon <kyle(a)lodge.glasgownet.com> said:
> Following an upgrade from 3.3 to 3.4, I've been greeted with this
> message in /var/log/messages, on my CentOS 6.5 server.
I'm hitting the same thing with an up-to-date CentOS 6.5 trying to
install hosted-engine. It appears the problem is an updated
pythong-pthreading package in EPEL, version 0.1.3-2. There's already a
0.1.3-3 in koji that rolls back the patch in 0.1.3-2.
http://koji.fedoraproject.org/koji/buildinfo?buildID=543650
--
Chris Adams <cma(a)cmadams.net>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Removing boilerplate code in engine
by Mike Kolesnik
Hi,
I recently introduced 2 changes for removing boilerplate code:
1. http://gerrit.ovirt.org/29414 - Fluent syntax for writing validations
2. http://gerrit.ovirt.org/29617 - Wrapper for locks to use with try-with-resources
By removing boilerplate code we're making the code less error-prone and easier to read (and maintain).
I've already sent some simple refactors to use these new styles of writing,
but more work is needed to apply them to the whole project.
I urge all engine developers who need to write such code to use the new styles of writing.
Below are examples for each change.
1. When expecting a negative outcome, instead of using:
return getVds() == null
? new ValidationResult(VdcBllMessages.ACTION_TYPE_FAILED_HOST_NOT_EXIST)
: ValidationResult.VALID;
use:
return ValidationResult.failWith(VdcBllMessages.ACTION_TYPE_FAILED_HOST_NOT_EXIST)
.when(getVds() == null);
When expecting a positive outcome, instead of using:
return FeatureSupported.nonVmNetwork(getDataCenter().getcompatibility_version())
? ValidationResult.VALID
: new ValidationResult(VdcBllMessages.NON_VM_NETWORK_NOT_SUPPORTED_FOR_POOL_LEVEL);
use:
return ValidationResult.failWith(VdcBllMessages.NON_VM_NETWORK_NOT_SUPPORTED_FOR_POOL_LEVEL)
.unless(FeatureSupported.nonVmNetwork(getDataCenter().getcompatibility_version()));
2. To lock a block of code, instead of using [1]:
lock.lock();
try {
// Thread safe code
} finally {
lock.unlock();
}
use:
try (AutoCloseableLock l = new AutoCloseableLock(lock)) {
// Thread safe code
}
[1] This is best used with locks from java.util.concurrent.locks package.
For regular thread safe blocks it's best to use the standard synchronized block.
Regards,
Mike
Re: [ovirt-devel] [vdsm] compile vdsm and attach it to an engine.
by ybronhei
On 07/03/2014 05:11 AM, aaron Beein wrote:
> Hi,
>
> Thank you for your great job on oVirt and VDSM. I have now set out to
> compile vdsm on a CentOS 6.3 host and attach it to an oVirt engine. But when I
> attach the host which contains the compiled vdsm to an oVirt engine, the
> status of the host is always 'Non Responsive' (step 11 below). I referenced
> the links below:
>
> http://www.ovirt.org/Vdsm_Developers
>
> http://www.ovirt.org/Installing_VDSM_from_rpm
>
>
>
> The steps (1-9) are executed on the CentOS 6.3 host, and steps (10-11) are
> executed on the oVirt engine. So I would be very grateful if you could give me
> some clues as to whether I've missed anything or done something wrong.
>
> The attachment is the same as the below, which makes it easier for you
> to read.
> 1 Deployment platform
>
> Centos6.3
>
> Linux bogon 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014
> x86_64 x86_64 x86_64 GNU/Linux
>
> Ip : 10.1.8.252
>
> CPU supports hardware virtualization extensions:
>
> # cat /proc/cpuinfo | egrep 'svm|vmx'| grep nx
>
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc
> aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr
> pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer xsave lahf_lm arat epb
> xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
>
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc
> aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr
> pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer xsave lahf_lm arat epb
> xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
> 2 Apply all updates
>
> # yum -y update
>
> 3 Installing required packages
>
> RHEL 6 users must add EPEL yum repository for installing python-ordereddict
> and pyton-pthreading. The rpm bellow will install the epel yum repo and
> required gpg keys.
>
> # yum install http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch...
>
> RHEL 6 users must install a newer pep8 version than the one shipped in
> EPEL6. Older pep8 versions have a bug that's tickled by vdsm. You can use
> `pip`, or
>
> yum install http://danken.fedorapeople.org/python-pep8-1.4.5-2.el6.noarch.rpm
>
> oVirt repo:
>
> yum install http://resources.ovirt.org/releases/ovirt-release.noarch.rpm
>
> RHEL 6 users must add the glusterfs repository, providing newer glusterfs
> not available on RHEL 6. Optionally install 'wget' if not present
>
> rpm -q wget 2> /dev/null || yum install wget
>
> wget -O /etc/yum.repos.d/glusterfs-epel.repo
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
>
> Fedora and RHEL 6 users must verify the following packages are installed
> before attempting to build:
>
> yum install make autoconf automake pyflakes logrotate gcc python-pep8
> libvirt-python python-devel \
>
> python-nose rpm-build sanlock-python genisoimage python-ordereddict
> python-pthreading libselinux-python\
>
> python-ethtool m2crypto python-dmidecode python-netaddr
> python-inotify python-argparse git \
>
> python-cpopen bridge-utils libguestfs-tools-c pyparted openssl libnl
> libtool gettext-devel python-ioprocess libvirt libvirt-client
> libvirt-lock-sanlock
>
> 4 Getting the source
>
> cd /root
>
> git clone http://gerrit.ovirt.org/p/vdsm.git
>
> cd vdsm
>
> 5 Building a Vdsm RPM
>
> ./autogen.sh --system
>
>
>
> ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
> --libdir=/usr/lib --enable-hooks
>
> make rpm NOSE_EXCLUDE=.*
> 6 Basic installation and start
>
> When building from source, you should enable the ovirt-beta repository, to
> satisfy dependencies that are not available yet in the release repository.
>
> # cd ~/rpmbuild/RPMS
>
> # yum install --skip-broken
> --enablerepo=ovirt-master-snapshot-static x86_64/* noarch/vdsm-xml*
> noarch/vdsm-cli* noarch/vdsm-python-zombiereaper*
> noarch/vdsm-*jsonrpc*
>
>
>
> Before starting the vdsmd service for the first time, vdsm requires some
> configuration procedures for the external services used by vdsmd. To
> ease this process vdsm provides a utility (vdsm-tool). To perform a full
> reconfiguration of external services run:
>
> # vdsm-tool configure --force
>
> (for more information read "vdsm-tool --help")
>
>
>
>
> 7 Finally start the vdsmd service
>
> # service vdsmd start
>
>
> 8 Configuring the bridge interface
>
> yum install -y bridge-utils
>
> Configure the bridge interface as below.
>
> Disable the network manager service by executing as root:
>
> service NetworkManager stop
>
> chkconfig NetworkManager off
>
>
>
> service network start
>
> chkconfig network on
>
> Add the following content into a new file named:
> */etc/sysconfig/network-scripts/ifcfg-ovirtmgmt*:
>
> DEVICE=ovirtmgmt
>
> TYPE=Bridge
>
> ONBOOT=yes
>
> DELAY=0
>
> BOOTPROTO=static
>
> IPADDR=10.1.8.252
>
> NETMASK=255.255.255.0
>
> GATEWAY=10.1.8.254
>
> Add the following line to the configuration file of your outgoing
> interface (usually em1/eth0); the file is located at:
> */etc/sysconfig/network-scripts/ifcfg-em1* (assuming the device is em1)
>
> BRIDGE=ovirtmgmt
>
> and remove the IPADDR, NETMASK and BOOTPROTO keys, since the interface
> should not have an IP address of its own. Full example:
>
> DEVICE=em1
>
> ONBOOT=yes
>
> BRIDGE=ovirtmgmt
>
> Restart the network service by executing:
>
> service network restart
>
> *Note that if any bridge other than ovirtmgmt is present at the time of
> host installation, the bridge creation operation is skipped and you have to
> change the bridge settings manually to match the configuration
> shown above.*
>
>
>
> *9 Configuring VDSM*
>
> Add the following content into the file: */etc/vdsm/vdsm.conf* (you may
> need to create that file):
>
> [vars]
>
> ssl = false
>
> Restart the vdsmd service by executing:
>
> service vdsmd restart
>
> If Vdsm was started earlier with ssl=true, it would refuse to start and you
> may need to use the undocumented verb
>
> service vdsmd reconfigure
You should perform "vdsm-tool configure --force" without touching
other conf files. You already did it, so nothing else is required.
Although, if you set InstallVds to true (a few lines below), it means
that when you add the host to the engine it will perform all those vdsm
deployment steps for you. So you don't have to do all those vdsm configuration
and installation steps at all.
>
> service vdsmd start
>
> which edits */etc/libvirt/qemu.conf* and changes *spice_tls=1* to
> *spice_tls=0*.
>
>
> 10 Connect to ovirt-engine
>
> ref: http://www.ovirt.org/OVirt_-_connecting_development_vdsm_to_ovirt_engine
>
>
>
> su - postgres -c "psql engine -c \"UPDATE vdc_options set option_value =
> 'true' where option_name = 'InstallVds'\""
>
>
>
> service ovirt-engine restart
> 11 Attach the host to the engine
>
> I logged in to the engine Administration Portal and attached the CentOS
> host to a cluster. But it failed.
>
Can you please attach logs? (host-deploy logs under
/var/log/ovirt-engine/host-deploy, the vdsm log under /var/log/vdsm (on the
host side) and the engine log under /var/log/ovirt-engine/engine.log)
I am forwarding the mail to devel(a)ovirt.org; vdsm-devel was merged into
this list a few months ago.
>
>
> _______________________________________________
> vdsm-devel mailing list
> vdsm-devel(a)lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
>
Regards.
--
Yaniv Bronhaim.
10 years, 9 months
[IMPORTANT] DB script numbering reminder
by Oved Ourfali
Hi
Now that a branch has been created for ovirt-engine-3.5, master is practically version 3.6.
So, when adding a DB script in a patch that is pushed to master, the numbering should start with 03_06_ABCD, where ABCD starts at 0010, the second script should be 0020, and so on...
When such a patch needs to be ported to 3.5 as well, the numbering there should be 03_05_EFGH, where EFGH is the next number in the stable branch.
So, for example, if the last upgrade script in the stable branch starts with 03_05_0750, then the next should start with 03_05_0760, even if it starts with 03_06_0040 on master.
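For illustration (hypothetical script names; the usual packaging/dbscripts/upgrade layout is assumed):
    master branch: packaging/dbscripts/upgrade/03_06_0040_add_foo_column.sql
    3.5 branch:    packaging/dbscripts/upgrade/03_05_0760_add_foo_column.sql
Same change, two different numbers, each following its own branch's sequence.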
Confused? We're here to help!
Got it? go ahead and push some stuff!
Have a great day,
Oved
10 years, 9 months
[vdsm] VM recovery now depends on HSM
by Adam Litke
Hi all,
As part of the new live merge feature, when vdsm starts and has to
recover existing VMs, it calls VM._syncVolumeChain to ensure that
vdsm's view of the volume chain matches libvirt's. This involves two
kinds of operations: 1) sync VM object, 2) sync underlying storage
metadata via HSM.
This means that HSM must be up (and the storage domain(s) that the VM
is using must be accessible). When testing some rather eccentric error
flows, I am finding that this is not always the case.
Is there a way to have VM recovery wait on HSM to come up? How should
we respond if a required storage domain cannot be accessed? Is there
a mechanism in vdsm to schedule an operation to be retried at a later
time? Perhaps I could just schedule the sync and it could be retried
until the required resources are available.
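To make that last idea concrete, here is a minimal retry sketch; hsm.isReady
and hsm.domainIsAccessible are hypothetical names, not existing vdsm API:
import threading
RETRY_INTERVAL = 10  # seconds; arbitrary for the sketch
def scheduleVolumeChainSync(vm, drive):
    def attempt():
        # run the sync only once HSM and the domain are available,
        # otherwise re-arm the timer and try again later
        if hsm.isReady() and hsm.domainIsAccessible(drive.domainID):
            vm._syncVolumeChain(drive)
        else:
            threading.Timer(RETRY_INTERVAL, attempt).start()
    attempt()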
Thanks for your insights.
--
Adam Litke
10 years, 9 months
[vdsm] logging levels and noise in the log
by Martin Sivak
Hi,
we discussed the right amount of logging with Nir and Francesco while reviewing my patches. Francesco was against one DEBUG log message that could potentially flood the logs. But the message in question was important for me and SLA, because it was logging the VDSM changes made in response to MoM commands.
Since DEBUG really is meant to be used for debug messages, I have one proposal:
1) Change the default log level of vdsm.log to INFO
2) Log (only) DEBUG messages to a separate file vdsm-debug.log
3) Make the vdsm-debug.log rotate faster (every hour, keep only last couple of hours?) so it does not grow too much
This way the customer would be able to monitor INFO logs (much smaller) without all the noise and we would be able to collect the DEBUG part in case something happens.
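To make the proposal concrete, a minimal sketch using only the stdlib logging module (paths, rotation interval and backupCount here are illustrative):
import logging
import logging.handlers
class DebugOnlyFilter(logging.Filter):
    # keep vdsm-debug.log limited to DEBUG records (point 2)
    def filter(self, record):
        return record.levelno == logging.DEBUG
def setupLogging():
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)  # the handlers do the per-file filtering
    # vdsm.log: INFO and above only (point 1)
    info = logging.handlers.WatchedFileHandler('/var/log/vdsm/vdsm.log')
    info.setLevel(logging.INFO)
    root.addHandler(info)
    # vdsm-debug.log: DEBUG only, rotated hourly, last 6 hours kept (point 3)
    debug = logging.handlers.TimedRotatingFileHandler(
        '/var/log/vdsm/vdsm-debug.log', when='H', interval=1, backupCount=6)
    debug.addFilter(DebugOnlyFilter())
    root.addHandler(debug)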
What do you think?
--
Martin Sivák
msivak(a)redhat.com
Red Hat Czech
RHEV-M SLA / Brno, CZ
10 years, 9 months
Call for Papers Deadline In Two Days: Linux.conf.au
by Brian Proffitt
Conference: Linux.conf.au
Information: Each year open source geeks from across the globe gather in Australia or New Zealand to meet their fellow technologists, share the latest ideas and innovations, and spend a week discussing and collaborating on open source projects. The conference is well known for the depth of talent among its speakers and delegates, and for its focus on technical Linux content.
Possible topics: Virtualization, oVirt, KVM, libvirt, RDO, OpenStack, Foreman
Date: January 12-15, 2015
Location: Auckland, New Zealand
Website: http://lca2015.linux.org.au/
Call for Papers Deadline: July 13, 2014
Call for Papers URL: http://lca2015.linux.org.au/cfp
Contact me for more information and assistance with presentations.
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
10 years, 9 months
[Documentation] Assistance Needed with User-Facing Documentation
by Brian Proffitt
All:
The Red Hat ECS team has made a great effort to convert some of the more important downstream documentation to a MediaWiki format that we can post on oVirt.org as an official set of user- and admin-facing documentation. This is being done as a bootstrapping effort to get our upstream documentation up to date and take a big step towards making the upstream documentation the canonical source for documentation in the near future.
Before that can happen, we need to get this RHEV-oriented information ported over to oVirt nomenclature and screenshots taken for oVirt and added to the documents as well.
I have placed the three guides
* Administration Guide[1]
* User's Guide[2]
* Installation Guide[3]
on the oVirt site as unlinked pages. General wording has been changed from RHEV to oVirt, but not everywhere. Each of these documents must be reviewed and completely adapted to oVirt 3.4 before they can be posted as official documentation. Specifically:
* Review all text to ensure proper steps and descriptions for oVirt features and procedures
* Review all text to remove downstream-specific text
* Review all code for changes in package names and in on-screen displays
* Replace all downstream RHEV screenshots with upstream oVirt 3.4 screenshots (Max width: 1024px)
I will be stepping through these documents to edit them in more detail under these guidelines, but help is most assuredly needed, in order to get this done in a timely manner. oVirt.org wiki users can visit these pages, review them, and add their changes. MediaWiki has limited version control, so it would be best to edit sections instead of entire documents, to minimize stepping on others' changes. Editing by section will also help us track the sections that have been edited, using the pages' histories.
Thank you in advance for all of your help on this project... when finished, this will represent a significant improvement to oVirt's documentation, and make oVirt that much easier to use.
Peace,
Brian
[1] http://www.ovirt.org/DraftAdministrationGuide
[2] http://www.ovirt.org/DraftUserGuide
[3] http://www.ovirt.org/DraftInstallationGuide
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
10 years, 9 months
Exception during VM recovery causes VMs not being properly recovered
by Vinzenz Feenstra
Hi,
With the current master of VDSM, after restarting VDSM (e.g. after
upgrading) I noticed that the VMs were not properly initialized and were in
the PAUSED state. When checking the logs I found that the cause was here:
Thread-13::INFO::2014-07-10
12:11:56,400::vm::2244::vm.Vm::(_startUnderlyingVm)
vmId=`db614831-3b4b-4010-a989-f7a5ae6fa5d0`::Skipping errors on recovery
Traceback (most recent call last):
File "/usr/share/vdsm/virt/vm.py", line 2228, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/virt/vm.py", line 3312, in _run
self._domDependentInit()
File "/usr/share/vdsm/virt/vm.py", line 3204, in _domDependentInit
self._syncVolumeChain(drive)
File "/usr/share/vdsm/virt/vm.py", line 5686, in _syncVolumeChain
volumes = self._driveGetActualVolumeChain(drive)
File "/usr/share/vdsm/virt/vm.py", line 5665, in
_driveGetActualVolumeChain
sourceAttr = ('file', 'dev')[drive.blockDev]
TypeError: tuple indices must be integers, not NoneType
The reason here seems to be this:
Thread-13::DEBUG::2014-07-10 12:11:56,393::vm::1349::vm.Vm::(blockDev)
vmId=`db614831-3b4b-4010-a989-f7a5ae6fa5d0`::Unable to determine if the
path
'/rhev/data-center/00000002-0002-0002-0002-000000000002/41b6de4e-23da-481d-904d-9af24fc5f3ab/images/17206f99-38ab-45bc-ae9b-d36a66b00e4c/7b05de43-9d85-435f-8ae9-6ccde21548e4'
is a block device
Traceback (most recent call last):
File "/usr/share/vdsm/virt/vm.py", line 1346, in blockDev
self._blockDev = utils.isBlockDevice(self.path)
File "/usr/lib64/python2.6/site-packages/vdsm/utils.py", line 99, in
isBlockDevice
return stat.S_ISBLK(os.stat(path).st_mode)
OSError: [Errno 2] No such file or directory:
'/rhev/data-center/00000002-0002-0002-0002-000000000002/41b6de4e-23da-481d-904d-9af24fc5f3ab/images/17206f99-38ab-45bc-ae9b-d36a66b00e4c/7b05de43-9d85-435f-8ae9-6ccde21548e4'
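For what it's worth, the tuple indexing could at least fail with a clearer
error while blockDev is undetermined; a sketch only (hypothetical helper,
not a proposed fix):
def driveSourceAttr(drive):
    # drive.blockDev stays None when the stat() in isBlockDevice fails,
    # so refuse to guess instead of indexing a tuple with None
    if drive.blockDev is None:
        raise RuntimeError('could not determine disk type for %s' % drive.path)
    return ('file', 'dev')[bool(drive.blockDev)]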
I am running the host on RHEL6.5
Note: I just rebooted the host and started a few more VMs, and when
I restart VDSM I get the same errors again.
--
Regards,
Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 9 months
Re: [ovirt-devel] [VDSM][sampling] thread pool status and handling of stuck calls
by Saggi Mizrahi
The more I think about it, the more it looks like it's
purely a libvirt issue. As long as we can make calls
that get stuck in D state we can't scale under stress.
In any case, it seems like having an interface like
libvirtConnectionPool.request(args, callback)
would be a better solution than a thread pool.
It would queue up the request and call the callback
once it's done.
pseudo example:
def collectStats():
    def callback(resp):
        doStuff(resp)
        # re-arm: Timer objects must be start()ed to fire
        threading.Timer(4, collectStats).start()
    lcp.listAllDevices(callback)
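Fleshing that out a little (all names are hypothetical; Python 2 stdlib
only), the pool could simply be a set of worker threads, each owning one
libvirt connection and draining a shared request queue:
import Queue
import threading
class LibvirtConnectionPool(object):
    def __init__(self, connections):
        self._requests = Queue.Queue()
        for conn in connections:
            t = threading.Thread(target=self._serve, args=(conn,))
            t.daemon = True
            t.start()
    def request(self, func, args, callback):
        # func is the libvirt call to run against a pooled connection
        self._requests.put((func, args, callback))
    def _serve(self, conn):
        while True:
            func, args, callback = self._requests.get()
            try:
                resp = func(conn, *args)
            except Exception as e:
                resp = e  # a stuck call only freezes this one worker
            callback(resp)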
We could have the timer queue in a threadpool since it normally
runs in the main thread but that is orthogonal to the libvirt
connection issue.
As for the thread pool itself: as long as it's more than one class,
it's too complicated.
Things like:
* Task Cancelling
* Re-queuing (periodic operations)
Shouldn't be part of the thread pool.
Tasks should just be functions. If we need something
with state we could use the __call__() method to make
an object look like a function. I also don't mind doing
if hasattr(task, "run") or callable(task) to handle it
using an "interface".
Re-queuing could be done with the built-in threading.Timer
as in
threading.Timer(4.5, threadpool.queue, args=(self,))
That way each operation is responsible for handling
the details of rescheduling:
should we always wait X, or should we wait X - timeItTookToCalculate?
You could also do:
threading.Timer(2, threadpool.queue, args=(self,))
threading.Timer(4, self.handleBeingLate)
Which would handle not getting queued for a certain amount of time.
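Putting the two timers together, a task could own both its rescheduling and
its lateness detection; a sketch with hypothetical names:
import threading
class SamplingTask(object):
    def __init__(self, pool, period=2, deadline=4):
        self._pool = pool
        self._period = period
        self._deadline = deadline
        self._late = None
    def __call__(self):
        if self._late is not None:
            self._late.cancel()  # we ran again before the deadline
        # queue the next round and arm its lateness watchdog
        threading.Timer(self._period, self._pool.queue, args=(self,)).start()
        self._late = threading.Timer(self._deadline, self.handleBeingLate)
        self._late.start()
        self.sample()
    def sample(self):
        pass  # the actual collection work goes here
    def handleBeingLate(self):
        pass  # e.g. log a warning or mark the VM unresponsive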
Task cancelling can't be done in a generic manner.
The most I think we could do is have threadpool.stop()
check hasattr(task, "stop") and call it.
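i.e. roughly the following, sketched against a hypothetical minimal pool:
class ThreadPool(object):
    def __init__(self):
        self._tasks = []
    def queue(self, task):
        self._tasks.append(task)
    def stop(self):
        for task in self._tasks:
            if hasattr(task, "stop"):
                task.stop()  # best effort; the details stay task-specific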
----- Original Message -----
> From: "Francesco Romani" <fromani(a)redhat.com>
> To: devel(a)ovirt.org
> Sent: Friday, July 4, 2014 5:48:59 PM
> Subject: [ovirt-devel] [VDSM][sampling] thread pool status and handling of stuck calls
>
> Hi,
>
> Nir has begun reviewing my draft patches about the thread pool and sampling
> refactoring (thanks!),
> and has already suggested quite a few improvements, which I'd like to summarize
>
> Quick links to the ongoing discussion:
> http://gerrit.ovirt.org/#/c/29191/8/lib/threadpool/worker.py,cm
> http://gerrit.ovirt.org/#/c/29190/4/lib/threadpool/README.rst,cm
>
> Quick summary of the discussion on gerrit so far:
> 1. extract the scheduling logic from the thread pool. Either add a separate
> scheduler class
> or let the sampling tasks reschedule themselves after a successful
> completion.
> Either way the concept of 'periodic task', and the added complexity,
> isn't needed.
>
> 2. drop all the *queue classes I've added, thus making the package simpler.
> They are no longer needed since we remove the concept of periodic task.
>
> 3. have per-task timeouts, move the stuck task detection elsewhere, like in
> the worker thread, or
> maybe better in the aforementioned scheduler.
> If the scheduler finds that any task started in the former pass (or even
> before!)
> has not yet completed, there is no point in keeping this task alive and it
> should be cancelled.
>
> 4. the sampling task (or maybe the scheduler) can be smarter and halt the
> sampling in the presence of
> non-responding calls for a given VM, provided the VM reports its
> 'health'/responsiveness.
>
> (Hopefully I haven't forgotten anything big)
>
> In the draft currently published, I reluctantly added the *queue classes and
> I agree the periodic
> task implementation is messy, so I'll be very happy to drop them.
>
> However, a core question still holds: what to do in presence of the stuck
> task?
>
> I think it is worth to discuss this topic on a medium friendlier than gerrit,
> as it is the single
> most important decision to make in the sampling refactoring.
>
> It all boils down to:
> Should we just keep stuck threads around somewhere and wait? Should we cancel stuck
> tasks?
>
> A. Let's cancel the stuck tasks.
> If we move toward a libvirt connection pool, and we give each worker thread
> in the sampling pool
> a separate libvirt connection, hopefully read-only, then we should be able to
> cancel stuck tasks by
> killing the worker's libvirt connection. We'll still need a (probably much
> simpler) watchman/supervisor,
> but no big deal here.
> Libvirt allows closing a connection from a different thread.
> I haven't actually tried to unstick a blocked thread this way, but I have no
> reason to believe it
> will not work.
>
> B. Let's keep around blocked threads
> The code as it is just leaves a blocked libvirt call and the worker thread
> that carried it frozen.
> The stuck worker thread can be replaced up to a cap of frozen threads.
> In this worst case scenario, we end up with one (blocked!) thread per VM, as
> it is today, and with
> no sampling data.
>
> I believe that #A has some drawbacks which we risk overlooking, and at the
> same time #B has some merits.
>
> Let me explain:
> The hardest case is a call blocked in the kernel in D state. Libvirt has no
> more room than VDSM
> to unblock it; and libvirt itself *has* a pool of resources (threads in this
> case) which can be depleted
> by stuck calls. Actually, retrying a failed task may deplete their pool
> even faster[1].
>
> I'm not happy to just push this problem down the stack, as it looks to me
> that we gain
> very little by doing so. VDSM itself surely stays cleaner, but the
> VDS/hypervisor host on the whole
> improves just a bit: libvirt scales better, and that gives us some more room.
>
> On the other hand, by avoiding reissuing dangerous calls, I believe we make
> better use of
> the host resources in general. Actually, the point of keeping blocked threads
> around is a side effect
> of not reattempting blocked calls. Moreover, to keep the blocked thread
> around has a significant
> benefit: we can discover at the earliest moment when it is safe again to do
> the blocked call,
> because the blocked call itself returns and we can track this event! (and of
> course drop the
> now stale result). Otherwise, if we drop the connection, we'll lose this
> event and we have no
> more option than trying again and hoping for the best[2]
>
> I know the #B approach is not the cleanest, but I think it has slightly more
> appeal, especially
> on the libvirt depletion front.
>
> Thoughts and comments very welcome!
>
> +++
>
> [1] They have extensions to the management API to dynamically adjust their thread
> pool and/or to cancel
> tasks, but it is in the RHEL7.2 timeframe.
> [2] A crazy idea would be to do something like
> http://en.wikipedia.org/wiki/Exponential_backoff
> which I'm not sure would be beneficial
>
> Bests and thanks,
>
> --
> Francesco Romani
> RedHat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
10 years, 9 months
3.6 release in bugzilla
by Michal Skrivanek
Can someone please add 3.6 to the oVirt release field in Bugzilla?
Thanks,
michal
10 years, 9 months
[QE][ACTION NEEDED] oVirt 3.4.3 RC status
by Sandro Bonazzola
Hi,
We're going to start composing the oVirt 3.4.3 RC tomorrow, *2014-07-10 08:00 UTC*, from the 3.4 branch.
A 3.4.3 branch will be created immediately afterwards, using the same hash as the build.
The bug tracker [1] shows no blocking bugs for the release
Bug 1096312 - log spam in vdsm: guest agents not heartbeating
has been proposed for backport from 3.6 target on the bug tracker [1]
There are still 28 bugs [2] targeted to 3.4.3.
Excluding node and documentation bugs we still have 9 bugs [3] targeted to 3.4.3.
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.4.3 should not be released without them fixed.
- Please update the target to any next release for bugs that won't be in 3.4.3:
it will ease gathering the blocking bugs for next releases.
- Please fill in the release notes; the page has been created here [4]
- Please build packages before today *2014-07-09 15:00 UTC*.
Community:
- If you're testing oVirt 3.4 nightly snapshot, please add yourself to the test page [5]
[1] bugzilla.redhat.com/1107968
[2] http://red.ht/1lBAw2R
[3] http://red.ht/1ly9hfA
[4] http://www.ovirt.org/OVirt_3.4.3_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.4.3_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 9 months
Update on ovirt-appliance and ovirt-node-iso for oVirt 3.5
by Fabian Deutsch
Hello,
I just wanted to give a short update on the state of the ovirt-appliance and ovirt-node-iso for 3.5.
ovirt-node is actually in quite good shape. An ISO is available with the ovirt-node-plugin-vdsm and -hosted-engine plugins, both based on ovirt-node from master, which will be branched off.
The pain point is the automation.
Ryan has actually gotten the Jenkins job back into place - good job! - so that at least the rpm packages are built and could actually be included in the nightlies, Sandro.
The automatic iso generation is still not working, due to several reasons (same as the last N months).
The appliance generation is in a slightly better state than the ovirt-node-iso generation, but is still failing when it comes to automation.
The automation is currently blocked by several issues which I tried to tackle together with David during the last week. The build automation of the appliance is actually getting most of my attention these days, as it is more likely to get the automation working.
So we are not stable yet, but we are moving in the right direction.
Greetings
fabian
10 years, 9 months
[QE][ACTION NEEDED] oVirt 3.5.0 Second Beta status
by Sandro Bonazzola
Hi,
We're going to compose oVirt 3.5.0 Second Beta on Mon *2014-07-21 08:00 UTC*.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs before *2014-07-20 15:00 UTC*
The bug tracker [1] shows the following proposed blockers to be reviewed:
Bug ID Whiteboard Status Summary
1114994 infra NEW Cannot edit cluster after upgrade from version 3.4 to 3.5 because cpu type (Intel Haswell) does not match
1115044 infra POST Host stuck in "Unassinged" state when using jsonrpc and disconnection from pool failed
1115152 infra POST Cannot edit or create block storage doamin when using jsonrpc
1116009 infra POST sdk always raises a DisconnectedError trying to instantiate again after a previous failure
1060198 integration NEW [RFE] add support for Fedora 20
1073944 integration ASSIGNED Add log gathering for a new ovirt module (External scheduler)
1113974 integration POST Hostname validation during all-in-one setup
1113091 network POST VDSM trying to restore saved network rollback
1114057 storage NEW uploaded iso is not visible in the engine
1110305 virt POST BSOD - CLOCK_WATCHDOG_TIMEOUT_2 - Win 7SP1 guest, need to set hv_relaxed
Regarding "Add log gathering for a new ovirt module (External scheduler)": the patch has been merged on the upstream sos master branch.
For Fedora >= 19 sos-3.1 package includes the fix.
For EL 6 patches need to be backported to ovirt-log-collector.
Feature freeze is now effective, and the 3.5 branch has been created.
All new patches must be backported to the 3.5 branch too.
Features completed are marked in green on Features Status Table [2]
There are still 422 bugs [3] targeted to 3.5.0.
Excluding node and documentation bugs we still have 358 bugs [4] targeted to 3.5.0.
Maintainers / Assignee:
- Please remember to rebuild your packages before *2014-07-20 15:00* if needed, otherwise the nightly snapshot will be taken.
- Please be sure that the 3.5 snapshot allows creating VMs before *2014-07-20 15:00 UTC*
- If you find a blocker bug please remember to add it to the tracker [1]
- Please start filling in the release notes; the page has been created here [5]
- Please review and add test cases to oVirt 3.5 Second Test Day [6]
Community:
- Thanks for your participation in the first test day!
- You're welcome to join us in testing the next beta release and getting involved in oVirt Quality Assurance [7]!
[1] http://bugzilla.redhat.com/1073943
[2] http://bit.ly/17qBn6F
[3] http://red.ht/1pVEk7H
[4] http://red.ht/1rLCJwF
[5] http://www.ovirt.org/OVirt_3.5_Release_Notes
[6] http://www.ovirt.org/OVirt_3.5_TestDay
[7] http://www.ovirt.org/OVirt_Quality_Assurance
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 9 months
[ACTION NEEDED] ovirt-engine-3.5 branch update
by Sandro Bonazzola
Hi,
We're going to refresh the ovirt-engine-3.5 branch tomorrow, 2014-07-08, at 08:00 UTC.
If you have any commits that are relevant only for 3.6 and not for 3.5, please don't merge them
until we update the 3.5 stable branch.
An email with the exact cutoff commit sha will be sent once the branch is updated.
After the branch update, patches targeted to 3.5 will need to be cherry-picked to the 3.5 branch too.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 9 months
VDSM sync meeting minutes July 8th, 2014
by Dan Kenigsberg
We had relatively low attendance today and we did not discuss pending 3.5 issues
deeply enough. If you have an urgent 3.5 issue that needs a fix/review, please chime in!
live-merge:
- we have 2 complex patches by Adam
http://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:...
jsonrpc
- we have 3 pending patches by Piotr
http://gerrit.ovirt.org/#/q/owner:%22Piotr+Kliczewski%22+status:open+proj...
The most flaky of them adds the lastClientIface element to getVdsCaps over
jsonrpc.
That element is used by Engine to deduce which network interface is used to
carry management traffic, and to build the ovirtmgmt network on top of it.
I suggest in
Bug 1117303 - add a new getRoute() verb
to introduce a new verb for this purpose, but until this happens in 3.6, we'd
need a hack to expose it here.
pthreading's superfluous wakeups
- Nir's reported bug is still in effect, and his suggested patches await
review.
qos patches
- Martin has plenty of patches. Most urgent is the log spam avoidance of
http://gerrit.ovirt.org/#/c/29504/9/vdsm/virt/vm.py
All should be reviewed by virt folks, due to the natural dependency.
copious rebases
- It is sometimes tempting to use a single topic branch for multiple patches
that do not really depend on each other. The downside of this is that on
every rebase, reviewers are swamped with emails; it is also less apparent
that a certain patch can be merged without waiting for previous patches.
So if you can break your serial branch into multiple branches with the same
topic, you should probably do it.
Hear you in a fortnight,
Dan.
10 years, 9 months
oVirt Node ISO for 3.5-pre
by Fabian Deutsch
Hey,
here is another draft ISO which is intended to be used for testing with oVirt 3.5.
This ISO also provides for the first time the oVirt Node Hosted Engine plugin.
http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/ovirt-node-iso-3.5.0.ovi...
We've learned from the past: To circumvent some SELinux issues, please append enforcing=0 to the kernel commandline on boot.
And please provide us with your feedback, so we can take a look and address any problems.
Greetings
fabian
10 years, 9 months