Our oVirt environment was originally set up by someone else. The hosted
engine VM has a custom name, but it seems to me like some of the
hosted-engine tools, such as hosted-engine --console for example, expect the
domain to be "HostedEngine". I tried renaming it in the admin interface,
but changes are locked for the hosted-engine VM. Is there a way that I can
change the domain of the VM back to HostedEngine instead of the custom
name?
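For anyone debugging the same mismatch, a rough way to see which name the tools actually expect is to inspect the hosted-engine configuration on one of the hosts. This is only a diagnostic sketch; the paths and key names are assumptions and may differ between oVirt versions:

# Hypothetical diagnostic, run on a hosted-engine host: look for the
# configured VM name in the hosted-engine configuration and runtime files.
grep -ri "HostedEngine\|vmName\|vm_name" \
    /etc/ovirt-hosted-engine/ /var/run/ovirt-hosted-engine-ha/ 2>/dev/null

# Compare with what the HA agent reports:
hosted-engine --vm-status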
The oVirt Project is pleased to announce the availability of the oVirt
4.1.9 RC2 release, as of January 17th, 2018.
This update is the ninth in a series of stabilization updates to the 4.1
series.
This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.1
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
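For engine installations, the minor-version update usually boils down to a couple of commands; the sketch below reflects the generic upgrade procedure and is not specific to this RC (package globs may vary between setups):

# Rough upgrade sketch on the engine machine (take an engine-backup first):
yum update "ovirt-*-setup*"
engine-setup
# After engine-setup completes successfully, update the remaining packages:
yum update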
Notes:
- oVirt Appliance is already available
- oVirt Live is already available [2]
- oVirt Node will be available soon [2]
Additional Resources:
* Read more about the oVirt 4.1.9 release highlights:
http://www.ovirt.org/release/4.1.9/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.1.9/
[2] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
--
Lev Veyde
Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

17 Jan '18
Hello,
I just completed the upgrade of my engine appliance from 4.1.8 to 4.2.0.1.
I would like to share the only error I encountered while upgrading
and running engine-setup:
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
After some searching I found out that this error is linked to a bad
configuration of one (or more) VMs having the "Memory Size" value set
higher than "Maximum Memory".
In my case the VM with that error was HostedEngine.
Editing the VM configuration and setting an appropriate value allowed
me to run engine-setup again successfully.
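For anyone hitting the same failure, a quick way to spot the offending VMs before re-running engine-setup could be a query like the one below. This is only a sketch: the table and column names are assumptions about the engine database schema and may differ between versions.

# Hypothetical check on the engine host: list VMs whose configured memory
# exceeds their maximum memory (schema names are assumed, verify before use).
sudo -u postgres psql engine -c \
  "SELECT vm_name, mem_size_mb, max_memory_size_mb
     FROM vm_static
    WHERE mem_size_mb > max_memory_size_mb;"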
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
Hello,
yesterday I upgraded to 4.2 from 4.1.8.
Now I notice I cannot assign host device passthrough any more; in the GUI
the 'Pinned to host' list is empty.
The host devices from before the upgrade are still present. I tried to
remove them and got an NPE (see below).
Does anyone have an idea?
-----------------------
> 2018-01-17 11:37:51,035+01 INFO
[org.ovirt.engine.core.bll.hostdev.RemoveVmHostDevicesCommand] (default
task-48) [57a8c14d-fc2c-4846-bc3d-cc4f3e8393f8] Running command:
RemoveVmHostDevicesCommand internal: false. Entities affected : ID:
6132322b-e187-4a83-b8c1-0477bde10497 Type: VMAction group
EDIT_ADMIN_VM_PROPERTIES with role type ADMIN
> 2018-01-17 11:37:51,037+01 ERROR [org.ovirt.engine.core.bll.hostdev.RemoveVmHostDevicesCommand] (default task-48) [57a8c14d-fc2c-4846-bc3d-cc4f3e8393f8] Command 'org.ovirt.engine.core.bll.hostdev.RemoveVmHostDevicesCommand' failed: null
> 2018-01-17 11:37:51,037+01 ERROR [org.ovirt.engine.core.bll.hostdev.RemoveVmHostDevicesCommand] (default task-48) [57a8c14d-fc2c-4846-bc3d-cc4f3e8393f8] Exception: java.lang.NullPointerException
> at org.ovirt.engine.core.bll.hostdev.RemoveVmHostDevicesCommand.executeCommand(RemoveVmHostDevicesCommand.java:64) [bll.jar:]
> at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1205) [bll.jar:]
> at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1345) [bll.jar:]
> at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1987) [bll.jar:]
> at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:]
> at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:]
> at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:]
> at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1405) [bll.jar:]
> at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:412) [bll.jar:]
> at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:509) [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:491) [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:444) [bll.jar:]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_151]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_151]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
> at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at org.jboss.as.weld.ejb.DelegatingInterceptorInvocationContext.proceed(DelegatingInterceptorInvocationContext.java:92) [wildfly-weld-ejb-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.weld.interceptor.proxy.WeldInvocationContext.interceptorChainCompleted(WeldInvocationContext.java:98) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
> at org.jboss.weld.interceptor.proxy.WeldInvocationContext.proceed(WeldInvocationContext.java:117) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
> at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
> at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source) [:1.8.0_151]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
> at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
> at org.jboss.weld.interceptor.proxy.WeldInvocationContext.invokeNext(WeldInvocationContext.java:83) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
> at org.jboss.weld.interceptor.proxy.WeldInvocationContext.proceed(WeldInvocationContext.java:115) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
> at org.jboss.weld.bean.InterceptorImpl.intercept(InterceptorImpl.java:108) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
> at org.jboss.as.weld.ejb.DelegatingInterceptorInvocationContext.proceed(DelegatingInterceptorInvocationContext.java:82) [wildfly-weld-ejb-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.as.weld.interceptors.EjbComponentInterceptorSupport.delegateInterception(EjbComponentInterceptorSupport.java:60)
> at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:76)
> at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:88)
> at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:101)
> at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13) [bll.jar:]
> at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source) [:1.8.0_151]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
> at org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
> at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:264) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:379) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
> at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
> at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:609)
> at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
> at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
> at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
> at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81)
> at org.ovirt.engine.core.common.interfaces.BackendLocal$$$view4.runAction(Unknown Source) [common.jar:]
> at org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.runAction(GenericApiGWTServiceImpl.java:176)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_151]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_151]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
> at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:587)
> at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:333)
> at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:303)
> at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:373)
> at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
> at org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.service(GenericApiGWTServiceImpl.java:78)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
> at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
> at org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94) [utils.jar:]
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at org.ovirt.engine.core.utils.servlet.CachingFilter.doFilter(CachingFilter.java:133) [utils.jar:]
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73) [branding.jar:]
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:65) [utils.jar:]
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
> at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
> at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
> at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
> at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:53)
> at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
> at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
> at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:59)
> at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
> at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
> at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
> at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)
> at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)
> at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
> at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
> at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
> at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
> at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
> at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:326)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:812)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_151]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_151]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151]
>
> 2018-01-17 11:37:51,049+01 ERROR [org.ovirt.engine.core.bll.hostdev.RemoveVmHostDevicesCommand] (default task-48) [57a8c14d-fc2c-4846-bc3d-cc4f3e8393f8] Transaction rolled-back for command 'org.ovirt.engine.core.bll.hostdev.RemoveVmHostDevicesCommand'.
> 2018-01-17 11:37:51,059+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-48) [57a8c14d-fc2c-4846-bc3d-cc4f3e8393f8] EVENT_ID: VM_REMOVE_HOST_DEVICES(10,801), Host devices [usb_4_10] were detached from Vm license2.int.lugundtrug.net by User admin@internal-authz.
As we continue to develop oVirt 4.2 and future releases, the Development
and Integration teams at Red Hat would value
insights on how you are deploying the oVirt environment. Please help us
hit the mark by completing this short survey. The survey will close on
February 1st.
Here's the link to the survey: https://goo.gl/forms/cAKWAR8RD7rGrVhE2
Thanks,
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Hello,
I see this in the release notes:
BZ 1530730 [downstream clone - 4.2.1] [RFE] Allow uploading ISO images to
data domains and using them in VMs
It is now possible to upload an ISO file to a data domain and attach it to
a VM as a CDROM device.
In order to do so the user has to upload an ISO file via the UI (which will
recognize the ISO by its header and will upload it as ISO) or via the APIs,
in which case the request should define the disk container "content_type"
property as "iso" before the upload.
Once the ISO exists on an active storage domain in the data center it will
be possible to attach it to a VM as a CDROM device either through the "Edit
VM" dialog or through the APIs (see example in comment #27).
So I'm trying it on an HCI Gluster environment of mine for testing.
I get this in image-proxy.log
(Thread-39 ) INFO 2018-01-14 18:35:38,066 web:95:web:(log_start) START
[192.168.150.101] PUT /images/0d852f7a-b19e-447d-82ad-966755070701
(Thread-39 ) WARNING 2018-01-14 18:35:38,067 web:112:web:(log_error) ERROR
[192.168.150.101] PUT /images/0d852f7a-b19e-447d-82ad-966755070701: [401]
Not authorized (0.00s)
(Thread-40 ) INFO 2018-01-14 18:35:38,106 web:95:web:(log_start) START
[192.168.150.101] PUT /images/0d852f7a-b19e-447d-82ad-966755070701
(Thread-40 ) WARNING 2018-01-14 18:35:38,106 web:112:web:(log_error) ERROR
[192.168.150.101] PUT /images/0d852f7a-b19e-447d-82ad-966755070701: [401]
Not authorized (0.00s)
Does this mean the functionality is not completely ready yet?
Has anyone already tried it on iSCSI and/or FC?
Thanks,
Gianluca
Hi, we are getting some errors with some of our VMs in a 3-node server
setup.
2018-01-14 15:01:44,015+0100 INFO (libvirt/events) [virt.vm]
(vmId='2c34f52d-140b-4dbe-a4bd-d2cb467b0b7c') abnormal vm stop device
virtio-disk0 error eother (vm:4880)
We are running glusterfs for shared storage.
I have tried setting global maintenance on the first server and then
issuing 'hosted-engine --vm-start', but that gets me nowhere.
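Since the storage is Gluster, a first thing worth checking in this situation is the health of the volume backing the storage domain; a generic diagnostic sketch (the volume name "data" is a placeholder):

# Hypothetical checks on one of the Gluster nodes; replace "data" with the
# actual volume name behind the storage domain.
gluster volume status data
gluster volume heal data info
# Look for I/O errors in the Gluster logs around the time of the VM stop:
grep -i error /var/log/glusterfs/*.log | tail -n 50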
Hi All,
I have created a bash script that uses curl and the REST API based on the
docs that I could find.
I have the script working all the way up until the VM-ToBeBackedUp-Disk is
attached to the VM-RunningTheBackupScript with the snapshot referenced.
Example:
<disk_attachment>
  <disk id="33b533f8-13b1-4cc1-8091-31b07913b32a">
    <snapshot id="34709147-4b0c-4684-b3d9-f4892873f36f"/>
  </disk>
  <bootable>false</bootable>
  <interface>virtio</interface>
</disk_attachment>
Posted to:
https://-myFQDN-/ovirt-engine/api/vms/-UUID-OF-VM-RunningTheBackupScript/diskattachments/
The above results in an attached disk to my VM-RunningTheBackupScript.
The question I have is: how do I image that disk from the
VM-RunningTheBackupScript?
The VM-RunningTheBackupScript is running Debian 8, and that VM cannot see
the attached disk as it is Inactive.
I have reviewed many of the older scripts out there and it looks trivial,
however everything I have tried has no joy ;/
If anyone on the list knows of the required magic, please advise ;)
* If I have to switch to using Python etc., I will, but I am trying to avoid
it if possible. Either way, I need some examples of how to proceed.
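One thing that may explain the disk showing up as Inactive: the v4 API exposes an "active" flag on disk attachments, so the attachment can be activated either by including it in the original POST or by updating the attachment afterwards. A rough sketch only; the engine FQDN, credentials and the VM UUID are placeholders, and the attachment ID should be verified against your own setup:

# Hypothetical sketch: activate the attached disk so the guest OS can see it.
curl -k -u admin@internal:password \
  -H "Content-Type: application/xml" \
  -X PUT \
  "https://-myFQDN-/ovirt-engine/api/vms/-UUID-OF-VM-RunningTheBackupScript/diskattachments/33b533f8-13b1-4cc1-8091-31b07913b32a" \
  -d '<disk_attachment><active>true</active></disk_attachment>'

After that the disk should appear inside the guest (e.g. as /dev/vdb) and can be imaged with whatever tool you prefer.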
Some links I have reviewed:
https://www.ovirt.org/develop/api/design/backup-api/
http://200.1.19.60/ovirt-engine/docs/manual/en-US/html/Administration_Guide/sect-Backing_Up_and_Restoring_Virtual_Machines_Using_the_Backup_and_Restore_API.html
https://markmc.fedorapeople.org/rhevm-api/en-US/html-single/index.html
https://github.com/voidloop/ovirt-bash-backup/blob/master/backupvm.sh
https://github.com/laravot/backuprestoreapi/blob/master/example.py
Thanks
Zip

Are oVirt updates necessary after CVE-2017-5754, CVE-2017-5753 and CVE-2017-5715?
by Marcel Hanke 16 Jan '18
Hi,
besides the kernel and microcode updates, are updates of ovirt-engine and
vdsm also necessary, and if so, is there a timeline for when the patches can
be expected?
If patches are necessary, will there also be updates for oVirt 4.1, or only
for 4.2?
Thanks
Marcel
Is there an easy way to do this?
I have a UPS connected to my FreeNAS box, and using NUT and upsc I can
monitor the battery level on my CentOS 7 host.
In case of an emergency, if my UPS goes below, let's say, 40% battery, I
would like to have a script that shuts down all running VMs and then reboots
the host.
Otherwise I am afraid of sending a straight reboot command to the host
without taking care of the virtual machines first.
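Something along these lines might serve as a starting point, using the oVirt REST API from the host's NUT shutdown hook. It is only a sketch: the engine URL and credentials are placeholders, and the wait time should be tuned to how long your guests need to power off.

# Hypothetical sketch: gracefully shut down all running VMs, then reboot.
ENGINE="https://engine.example.com/ovirt-engine/api"
AUTH="admin@internal:password"

# Ask every VM that is currently up to shut down.
for vm_id in $(curl -ks -u "$AUTH" "$ENGINE/vms?search=status%3Dup" \
               | grep -oP '<vm href="[^"]*" id="\K[^"]+'); do
    curl -ks -u "$AUTH" -H "Content-Type: application/xml" \
         -X POST -d '<action/>' "$ENGINE/vms/$vm_id/shutdown"
done

# Give the guests time to power off, then reboot the host.
sleep 300
systemctl reboot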
Thanks!
Hi all, I didn't find any working ovirt-guest-agent for Atomic Host 7, and it
is a requirement for launching Atomic from the vagrant-ovirt4 plugin.
I tried installing ovirt-guest-agent inside the guest, and I tried the
ovirtguestagent/centos7-atomic image, but I had no success.
Does any official project exist for it?
--
Nathanaël Blanchet
Network supervision
IT Infrastructure Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr

Re: [ovirt-users] Configuration of FCoE in oVirt 4.2 on HP BladeSystem c7000
by Gunder Johansen 15 Jan '18
Thanks, Fred.
I have been looking at the FCoE VDSM hook, to no avail. Looking at it again,
I am still not able to see any FCoE.
[root@ovirtengine ~]# engine-config -g UserDefinedNetworkCustomProperties
UserDefinedNetworkCustomProperties:  version: 3.6
UserDefinedNetworkCustomProperties:  version: 4.0
UserDefinedNetworkCustomProperties: fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$ version: 4.1
UserDefinedNetworkCustomProperties: fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$ version: 4.2
I finally managed to find the "custom property" where I could set
"enable=yes,dcb=no" for fcoe, but when applying the change I get an
unexpected error. Yes, the host was in local maintenance mode when I applied
the change.
I am afraid I am not understanding all the steps needed from the Virtual
Connect configuration in the blade rack to the network interfaces inside
oVirt. Should I add a new FCoE-only network interface, and should this have
a VLAN/IP address in a special range compared to the internal network in the
rack?
Thanks again.
From: Fred Rolland [mailto:frolland@redhat.com]
Sent: 13 January 2018 14:21
To: Luca 'remix_tj' Lorenzetto
Cc: Gunder Johansen; users
Subject: Re: [ovirt-users] Configuration of FCoE in oVirt 4.2 on HP BladeSystem c7000
Take a look also at the FCoE Vdsm hook:
https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/fcoe
(oVirt/vdsm - this is a mirror of http://gerrit.ovirt.org; for issues use http://bugzilla.redhat.com)
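For completeness, the usual way to get FCoE going on oVirt hosts combines the VDSM hook with the per-network custom property. Roughly, and only as a sketch (the package name matches the hook linked above; the exact UI path may differ slightly between versions):

# Hypothetical setup on each FCoE host: install the VDSM FCoE hook.
yum install -y vdsm-hook-fcoe
systemctl restart vdsmd
# Then, in the Administration Portal, open Hosts -> <host> -> Network
# Interfaces -> Setup Host Networks, edit the FCoE network and set the
# custom property:
#   fcoe = enable=yes,dcb=no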
X+7SSUsqSX0xJGUG3rdSkFw2lGuBoeUsI5T5jY6twgSnm6SSxmCmURfSJcSsyRg6awhmhnbdsowz
k1OQkZltrXmnW8OLcO+pz+CHL9fipfaf471OmXju9DmY13MD3urwDdJSjfchwqRRFUKWYNAiIvKI
HrIUFbbUCZ9hKjzhvl96kC0NWVlYtwoY0Vy/ZJuEvkNSMMNMWr77rYb4klNQ5r1cGWORnJCABGNK
HpuBjFlzjb2UlTQmlEuIaRienImJK8eUjLjFqvmbfC/kyWteNOe5R7dg4MIUFBUXoecbbVE7oQ6a
pYT2cE+dhKz7n34u7kOWYNAiorhz4YUXRjTZOZVFi1PIUvSRLQpNampTayk8EqQkYKnApQe1abv9
/wRRUt8hSJHLfnK/VYoaoQpGQlkXfFhcDPlDK74pE5NcboqXQBf0EqLs3/jfACu4JSQkY2xGBsYm
JwT95mNF27+hAJn53yC1UR/0eLk1WtZpg9qNaqJmQk0knuwLQscLI/8DNBdfd39chyzBoEVE1VJl
hSwlWD35S0/fay2FToUoPWD5ypLQqYvcN2ULLUljMHHYDEyYsMrv/q2AzJvgr7VdYpTRMbnlyyVK
qUuIsh+ryE//6Vpos4JbSgomZRZjZYiXErMzM5CSfKa15p3GneuZ3zYc+WNv1D+ttvlNw6P7ijB8
Q3fAyFe3ZPfEgTVHrNahW5jbFX//693mFM8hSzBoEVG143nIam4sfBd4Wpy+GD3r8VuB0aCPSun8
w1UpKe9/7TD8o8feMpf3pDwjw/myYcbYZGuEyZqGT8bkCTMcv5mYNGYihs2Y4Po8K/MSIjKM/3kh
De/PSAn50md57N95ALUSaiN7YzYadUjEhnXZqNOoJrauykVii1rIWb0de7cetFqHbsGiBbht6hoM
ffxb85uF8RqyBP+oNBHFDXXzeqRBSbaPdsiK9I9K65ewyFsStOT99g9cWUbZXixpmooee8v3GI2Y
Iw84nZCMTA/u8ZI/0qz/937v5oHIyv4BNeokoLiwGAk1EnyXCo3/U5cMzz3r15jU7h1zWdj7cCJP
e5cHkcpcwpU8wiGSkMU/Kk1EVIG8HMmKhip1so8hKtT6h9skpO6ehrHJw821KhN85UGmA1Zh0syK
uZF+SrsP8OnvsvHxhRvwySXG/OIN5vqnl2Xj8ys3mpMeskIloaphw4bmKJbM43EkS2HQIiKqYG4n
9Spzso9RZS4hNh+B4uLp5nKVCbnyZPjiQM/YoorGoEVEVMHUSV0/uTuVUeic3stAJNTyPaeKwHu0
iChulPceLS9Eco8WR64qjoQoPVSpZX0uGLZCI/c8RUO4PzORioV7tBi0iChuVJWgJewnemFfp9AE
et+c6lSosr/3VPXwZngiompITvAqAOgneLewQGWF+r6p91hNqkyn90UUbQxaREQVTJ3YZc5wFRm3
901/b9Wy3laVqzK1zH8H8gqDFhFRJYjFk7sKJvFKD1Dq/dWDlf39Vq9XtSHyAoMWEVElUCd3/WRf
2exBJN64BSkVuHSx8H5T9cCgRURUiVQIiPeQE02RhiD7dk7Bi+8zVTQGLSKiSuR28o80bFQFkYQh
9X45vW96mdv7TeQVPt6BiOJGrD7eoTyOHz+OGjVqmHPFvh5vQj3+UNrpbWRZCfR+qXZO2wkpt5dR
1cXnaBERhSgWg1a0fffdd+a8W7duJcuVQd9/OMcS6XE7bWc/Bp29XNZVe71M2LclcsLnaBERVXGB
gkFFhgW1L3tgUQIdiwo6arJz21bfh2rj9H5Imb6PQEJpQ+Q1Bi0iohgRLBhEMzTo+1LLal2FGT3o
6JMedGSytxH27RW1rdDLnTjVS5lbuT4nihUMWkREMUIFEycquJQnSOhhJNC+pF5NQrVVc1Uu3I7J
3ofbXFFt9fpAx0gULxi0iIhihFv40JfDDR/69k7s5U7rgfpwKnPjtr1bH4HqiOIFgxYRUYzTw4Ye
PtSyU71ebq/X50TkLQYtIqI4ZA9Kss7wRBR7GLSIiIiIPMKgRUREROQRBi0iIiIij/DJ8EQUN6ri
n+AhIm/xT/AQEYUoVoNWjx49rDUiiiVLliyp9KDFS4dEREREHmHQIiIiIvIIgxYRERGRRxi0iIiI
iDwSczfDP/biW9YSEcWLh/78e2vJW7wZnojCEQs3w8dk0Bp9y/XWGhHFuqdee5dBi0GLKCbxW4dE
REREVRiDFhEREZFHeOmQiMqFlw556bAqkUtN5L2K+pnhPVoMWkRxj0GLQasqkRNzef89VViL1X7+
97+lWL++ARITD6BNm0Rccsl5Zrnuu+8y8e23x802J59cB1de+Zugx7PvwEH8PPQmFB0pwGnvvIXG
zZpZNf6i8R6HSvbFe7SIiIioQqxevRG7dyeiV6/W6NatAwoKGmDp0jVWrc+mTduxbRtK2hQWNizT
xq7QmAqWLMOR7B9RuH0HDn33PaI6iqP5V83a5pSXl2ccm+w5tjFoEVG1oUbEqp0v70erxES0ur+a
vn4y7d69Dxs27EXLlieiU6emaN/+JJxySgssX77RalHaplmzJiVtWrU60a+Nk6MHDuLQF/NRfOwo
ahr/rR3+Mh2HDx60aqNHAtbNN8OcZjduin379sV82GLQIqJqpVqGrfpN0LplS5zVqrlV4G75qyMw
4tXl1loM2ToXj4x4BHO3WusUtvXrt+HIkRNwwQWtUFRUjNq1i3HOOSdi+/a9RsDaj82bd2DFirVG
eKnt2MbN8eJiHP5xE/IXfYn6v0pGwx7n4+jir3AwZwuieXfS6L61rSVgX65vvmvXLhQUFPhWYhSD
FhFVO9UubJ13P5Zu3ozP70yyClwc+RyvjXkFr3y53SqIHT/OeRITXvkPsg5YBXHos0Vf4ZXFyzHz
+1V4Z2Um3l6xCjO+WYFXl3yD5xcvw1tpH1kt3T2+7XbM2zsT/8wdjyk77sNj227D/Tk34q5NV+OO
H6/AzdkXYET2pVZrf+vWJaBPn9NRwzjz79tXYISu42Z5kyb18Nxzs9Cr12148cXPXNu4KSosxP7/
/Q9F27fjtNH34sShg3Fk3Toc/vpbFBYVWa3KR0LWgKs6o8nDLfHGG8CHnwCdFi2wamMbgxYRVUsV
HbaSPkuwliL0czomD+yIJokJSEhIRJOOAzE5/Wej4gg+v6OZWXbNO2rUwVaWNtxYNrYbnmbWZk3u
aqxfgpdXfY77zmuCxISumLzsf7j/1wPxonQxY4DWPguTuxrLl7yAz18eiDMayv4b4oyBL2PFipcx
8IyGZtuGJ12Gx5ep/f+Mz+/rjpPMtvo0HGkR9Jf12lW48L6vjKUMjE2W9sbxZplV3vvuPBw9etRc
XPt+sjnHpod8c4OUmfVGu2B+KjyGZceOYUfN2ihu1BDNW56Elq1PQ7u2bXD6Gadj84Hgl9pW5y9H
wfF85BUdwI6jOcg9uhW5x7Zi57Ht2JGfY9YVLKljtfYnh1m3bi1zuUGD2qhduwbk6l5BQSHeffcL
TJgwAkuXfov33vvMsY2bg9t3oGD+AjTrdTE21khA5sEDaNzjfBxO+y/ycq2hp3JQISt31z5zfW1v
X8hq0qQJWrRogcTERLM8VjFoEVHckZAUyWTnVOYFCVmrLy9P2PoRLw/+HcZ+uAUNfzMYtw7+DRpu
+RBjf3cVns2qi8uu+z2aGuFq3r8/hRlNjnyCGa8ZS01vwZ8HNjV7KGsJ7utxNaasrYOWbS9B104t
0OvKrr6qC8YYJ9538e7tXXzrYsGduHx8Ls4fMhjnn3gImz68Hd263YeNvxqCweefiEO/fIGHRr2B
dUbTra8Nx9VTluHUO77AmjX/xR3tpYNL8H/ZL6K/LIow+qvf+QpceJJs1BqD/884rncnofepsl4x
6mx/xJzXb+EcplR9MIeOFeKXI4fx7LpsjPphFYYuX4Gh36zA8CVLcfUXi5GfEPyUnFijATYfWWtO
eUX7jZC1BXWNQL3/54NoWLcRDvych9o1nIOW2LLlIHJyDpiBa+nSDLz55n+xfPkPWLcuBzfffDV6
9z7PKPsIn3/+ndkmP/8ofvnlKIqKfCNbdkeLinDom+U4vmUral18Id6Z+xE+mD8fxT26o3CNcYyL
FuNYOa4e2kPWsmW56P+AL2S1bNnSnNeq5QuPsSoug1bdunXNaf/+/fjpp5/w0PhxeP75qXj5pZex
c+dOs46IKBaokCUiDlvr0vDqgiNGzrgP7305C9NmGcHxmQuMQLUcz85aZmSYm/AXI8wcmfdvfGrk
qyOfvI9/S/M/D8dlrh+HR1B41iPI2L0TmzdPweVNu+CKHmf4qs7ogeuuuw7X9Tjdt25qij/PXIhZ
02bhjQd8gaPusJn4Nm0aZi18EYOkYPn3WGvMNmV9b/QOdLngUnTufBUuN7/J/wM2b28gC5bQ+zu9
hxEEzScFNEPXS4zjuu4KdHHLj17IWwX8/BbqtegG7P0COJ5vpMlnzNGszucOAPZ8YjUM7NCxY8Yr
qIOfjhRgd2ECdh0vxi/HC/FTYRGw4xDyQjgjd2t4Ee499Rn88OVavNT+c7zXKRPPnT4H83puwFsd
vkFaalbAM/v69fvw9dc/4ejRQjz77Nt45plpyMpahXbtTjHrX399gvHfw3Y89th0s83OnYeM8+wh
1KlT06y3yz94EAXzPkHdZs1wsHNHPD91Kt7597+xt+OZqNm4EQ6982/k5x2K6BuIVSFkibgd0ZJA
JTfBjbj1Vtz6h9twyy1/wrDhw/Dev9/D1q28W5KoKpPnaEUy2TmVRZMespSIwlbOephfrr/4Nzjf
LDCyUCtfKNq6VS4fdsPweyR4fYC35/6MT97/txF02uOWa1RrZ52v74suIf9e2gbtTvU17nTGWea8
c/KZMEvq1jLiQ6lf/TrVLF/y2UdYu/YjfCaPX6p7DpL03BZGf5WqWW/fPG8FTjz1LPyyw/eYg42Z
n/mHLNUugG25uXjzP0YwXpQDfLIOmGv0Nc+Y/2+zEaZ3YcdBI8AFMX+T7/Lvk9e8aM5zj27BwIUp
KCouQs832qJ2Qh00S3G/n6pTpybYv/+IEab2Y/78pViw4OWSSZHlb7/9zmwjN8Tn5x8z53YyxpWf
tQaHPv8cjXtfjiIjbOXn55vP0zze6jQ0vux3KMhcjcPpi1F03HlEzI2ErPuffi7uQ5aIy6ClQtad
I0fi+f/7B6a99k+kpqZi0uTHccufbsF7771ntSQiclYZIUsJO2y17Qgziiz8BsZp2vTjth/NeevW
J5vzM64cDCNqYc5Ho/G+DGddcA+GdzOrKlzTG6fg5b4nYOM/r8ZZZ12HmTUGYNIX03FLa6tBPDn9
MaDJRb7lwxtwYlPfyb19KyMS6iNZ0i6Iqbf+AcWv/LVkQtIZpetv3If3bvuD1dLd/g0FyMz/BqmN
+qDHy63Rsk4b1G5UEzUTaiLxZN+xHS90Hz86+eT6OO+8k9GwYV107twBEye+UjIpsnzeeUlmm8OH
i7BnzxEcP172v9fDhw4hf9Y7qNm4MQ51+RV+yMoyAlkRjh09iswffkDR73qhZr16OPjRPBwJ4f4z
Jxdfd39chywRl0FLErNcMhQJxv9andrOSN6b8cnHXxj/8IdRs4bzECcRlfV+8xPN6ZDxoRkPD/+L
hsoMWUpYYavTENzVzzixb30G1104BCOGXOi7ObzuebhniDVqdcYQ3DnQaDN7BmYYOavXsH6wLgSG
7vTOML+X+J/x6D9iBIY8H+E9bF8+j9FzDyLljtfx7rtv4dUnhuKMo79gr1xPjMiZaN9Z5mvxf382
Xv+IsXhPbt6qSIOeNYLVUaDfHcBPBcAf/gtszANu+y6k0axoady5nvltw5E/9kb902qb3zQ8uq8I
wzd0h1yfuyW7Jw6scX+j5eb2E06oY4SVRFxxRS+sWrUTS5euxxtvGK/HIPOvvsrA5ZdfbLbJzy/E
7t0HUVBwzKxXZHwqzwhThxcsxAkX9kTuCQ2w5Cv5woLcOF+Apenp2NOiORpdfBEOL1mGQ+vWG2Et
9FGthbld8fe/3m1O8RyyRFwGrWPHjplhS50Ucn/Kxf33329+UyXvUB5+3uULYaXWYmrPnpgqF/sr
yrzb0fT2edYKUWySgKUe/vdJm3bmfY9VPWxVSMiSx1UZ599gk7QLLWydjN/PWonZtycj75u38crb
3yCvjYwSfYR7Sp7Y0BQDb7rBuvQ2ECNuiGD4qMsf8fjtp6PBoR8w55VZ+GJTZM8nOnRSF1x0MpDx
zz/g+uuv902X/gqnDH7Hd7N+2OriqnFTcFFjYPvitzH93yuwy6qpEK3vA+bOQF69Osib/U/sbZiI
X567CrlNGwDvPhjSaJZdwnP/8ZuHav/OA6iVUBvZG7PRqEMiNqzLRp1GNbF1VS4SW9RCzurt2LvV
efRIfrbz8gqxffshNGhQC7169cSAATegX7/eRviqD3mO1m23PYmrr74EPY1zprQ5cuSYEbR+Mbb2
HyWTUau8eZ+gcO9+NL/hOuwzzss//vgjatb0DXRs3rQJ+w4ewIl/uhnHd+3GwYWLcOyI79uboViw
aAFum7oGQx//1vxmYbyGLBG392gpxcb/Mlf/gPy8/LDScoVbOxU9mzZFU3Oq4NBH5ODxoSdaS6UP
/9uzZ4/xwRrxsEPM8zpkiazLjU+lbqFP0j4kDTrjhheXY1+BsV1xAfat/wBjUn2XDZW6/aejoNio
L/gPbtRvFjfK5cGRxdN93/lLGrPSXF85xv5crZPR/8UfkSdti/Owc4oMyyVhzEpZX4mS5lZ/pdv3
x3Rzm+nov/cdDOt6M+ZfPBtbzWM1poIsTD5PbiGbhy/D7c8qqdvlHizcV2C2K9j3OW7vZFVUlJN/
j4bt/2xOTbtOwYmp/0XLy4y0HEHIEsV3X1MyhaNLcgpyNm1BjboJyN9+FLXq1cDRA0Vm2CrYXYga
tRPQ/vrSn23dzp07jKBVA506+eoLC4vwyy+b0bfv2ejf/yLzOVqdOnXCsGE3OrZR5L/YQxs3omDx
UjT4TTc06P5bXNS9O0aNGoXGjRvjV7/6FW7/85/x227nIdHor9ElF+PIx5/hwM6dvg5CII9skHDV
rl07tGrVKm5DlojroHW8qAhLli3BunXr8Yc//AFXXPk7c7Qr9szD7d0fRtLMvdi715iWXof3ut9u
lBJVDglZ9of/nTbXd+mAqFzy9mOfkdXzV36Gz1esxZbv/4c3pk7Ev36QgbY+8D7qVm1T2n2AT3+X
jY8v3IBPLjHmF28w1z+9LBufX7nRnCa1e8dq7a9fv2SsWrUBmZnbjcAqfxB9DXr0aIPk5PYYOfIG
TJhwG554YgS2bdvp2EYpNM6zeUu/xrHsbDS//joUFxaiXtFxnNq4CZ58aDzGjByJM4yQVNcol7pG
vS7CsS1bULBkKY6GOGIuoaphw4ZmwJJ5vIYsEZdBq3bt2qhfvz5uv+MOvPn66+ZvOC1OaoHTWrbB
+PEP4dY/3ma1tPn0dmtESSZ70DHCUEmdMfWcan6t2MeoM9bnTe3pvr19xGqDVS7mzcHs5Idxdx9r
vfNdeGDQbMxh0qJKoEKW/vA/CVnym2izZs34eBQqn9Y34+X3bsc5e97BLT3OQttzL8WdT2fhtJHv
4Ye3b0RFPpWB/J17bmsjtNRAfv4h876sxMQinH9+BzRv3sSY/wo333wVrrqqu2sbJf+XX5D/xf9w
/OBBFO3Zg33/ScPeD9JQa/FX6F23Hs4/fBT1ln2DvUb5vg/TcGzHTzh+6BAO//cj5P8c+qhWVRGX
QUtC1imnnGJet/3TiBHmtxx25u7Elm2b8Mijj5r1ZWXi4TX9fCNKMs0EhpaEKbmHayiyHl5aUj8z
6WF01++xynwYT+KV0nojKA0tqbeNWO19BXhvtlVn9L4hC0jqAPNeTssZZyVjNpMWVTB7yJJv81w6
yheyTjrpJHMez785Uiyoiw7XvoilO/PMX4JlytuZifnPXIsOzPCV7uKLO+OMMxrirbe+xIUXtke7
dq2smlLB2sjfMJS/ZZhgfFbs/OdL2PHE37H9sSdx/F9vIO/5F7B3ylQUvfK6Wb7jiUnY9cZ0yPDY
obmf4NCmHKuX6iPB+CEI8QaB0MjzM0IxY8YMDBs2zFor9diLb2H0Lddba87kN255xIN883Djxo34
y1/+gs8//RzTXnsJd97xF7Rv3952n4kEqe5Y88BevKRGlcwRrCdx1tLFuAtT0bP7Gjyw9yWUVJv1
c9DPLNPaqrQkN7s/eRaWLr4LnfVlq3rt1J7ovuYB7DV2KMu3GiFtccnG/vVEFcGrkPXUa+/ioT//
3lrzlnqSe0XcaxWq77//Hj16mE/kpCpgyRJ56Bd5raJ+ZuTf85xzzrHWAnPLJU7q1XN/Vpnu+3Wb
4z9o3TZiBLZu24a3ZryF1998DU2aNMbov441TxylJGjdCryiBSU9fKFsUPILYp310GXRwpXRUdnQ
JPVz+pUELXs9gxZVJAlZ8vC/he/93VyP5kgWgxaDFlGsioWgFZeXDtUDS2+5+WaMGTvWvHQoz9GS
G+H/Nv5hzHxrptVSl4k1vuf7WX7EmsxknCUPmjnjLCRnrjFK7JLQoTR5uercIQnI2mBdhizLqf7H
NZlINndOVHHUw/94uZCIqGLEZdBSDyw9XlyMRic0QutWrbB9x3YsXLAIu3fudn3Mg35P1NqpT2J2
8nW4QoJU5ytwXfJsPKk9c2He7UMxe1A/7VJiAH36YVDmw3iupPu1mPpk6T1aZerXTsWTs5Nxnblz
Iu/pD/9jyCIiqjhxGbTUA0vFKSedgrN/1RlPPPFEScD6MSfbnPsbhJn95pR8a7D7w0mYWXKpsDPu
WrwU173XvaR+aNbDWBryZb0+eGnpw8gaqr51eCvwwMNItmrL1Js3zuuXMYm89eGcDzH40a/R7/7/
oXnz5gxZREQVJC6DlqJuL5PnafXt29e8hKjW/UmQegl9+rxU8q3BvX43vgtpo+qMye9+LSMo2dtL
X3qbzndhcUnfRojqY6zrQc2vXr8pn8h7cl/jySefjDZt2uDUU09lyCIiqiBxH7QWLl5o/lXwzh07
48o+l2H9hvUx+tBSosojoapBgwZmwJI5QxYRUcWIy6ClHlg6eswYTJv2Mm4aegt6XtgTw39/C/42
YTwefeQJqyURERFR5YnLxzvIH76Vbx1u2bLFnI8YMcIslz8q/e23vj9AKb+5E5H3+HiH760lIopF
fI6WTTjP0ZKQJd8+lBvj9afFy42+VfkP4xLFkuoetIioauBztGwkTEmokqfAd+7c2ZyrkEVEREQU
C+IyaMlolUxyeVBGseSbVDKXdVVHREREVNni+luHRERERLGMQYuIiIjIIwxaRERERB5h0CIiIiLy
CIMWERERkUcYtIiIiIg8wqBFRERE5JG4eTK8PH2aiCqX288m/wQPEcUq/gkem0BB67q+l1lrRFTR
3pv7OYOWAwlaPXr0sNaIKJYsWbKk0oMWLx0SEREReYRBi4iIiMgjDFpEREREHmHQIiIiIvIIgxYR
ERGRRxi0iIjiUHp6urUUGmkfyjaqnWobaDu9jZtI64iqimoZtOrVrYOWzZvgtBOb4tQoTNKP9Fff
6JeIKNokkOiTzqlOb6OWU1NTzbnQ29knRW8vy25t1VzY6xS3MqdjUuzrRPGqWj5HS0LRQ/ePxsED
B5GYWBspZ5+KjZt34/Q2TbFpy97SeWtjvnWPsd6spFzandG2ubG+B+2N+bqNO7H3QD5OPfUUPDB+
InJ377P2QlS18DlaziryOVqhhg8VjNSyEmh7+zZOy8EEaxtoH8K+rVMbonDwOVqVpEFiHXy9bClW
ZqzExnXfo3OT77FnxwqcecJK/3kjmX9vrPvqOxjlv2zzre/abqw3+h7bNn2Pb775Dquzssx+iWLd
irM7mFN+fj6KioqsUoplEjZkkrChAkeg4KEHFrWsBxanbZ22EfpyMPa2+n5kWepVmX0fep2i2qht
9W2ockhwicZUnVTLoHW8uBiJiYkoKixC3cQG6HnlEDRv2QY9rrjBZT6oZL3pSaehx+VG+Umt0aP3
IDRqeiKOFx1DnTp1zH6JYpkErJtvhjmtPS8FBw8cYNiKQU6BQg8bajlUahu1HA32UORE31cox60f
p96/Kg+2PVUMGcEtz1TdVOub4WvVqoG9+/Iw7pGZyMzKwcQnZ5eZ/6CtP2zM167fjomT/o2sdVvx
8BPvIGfLL6hdu5bVY+XZ+GofnNv5Xiy01sXCsacbZdY04GVstMqpepp1bwdrCdiXa8337sGRI0d8
KxQz9EChhw/FKWzY2wQSzvZu5aEGHn17t22kjWqn2qi5Xkck/lWztjnl5eWhsLDQKo1dVeYeraaN
GqBhvURrzefwkaPYte+gtVaqVa2PrKXo21Z4lbVUgbJfxo2vrEOHNODStVNwsVEkwev6j/rj3Q9v
Q3u1vuEerJjEvxdZHUnIGnBVZ+Tu2od9D/tSVo0Zb6NJ06Y4qeUpqF+/vlkWSHnv0ZL7q8p7b1V5
79GKxjHYRfMerUBBxF7nVKYLVh+KaPQRqlD3pdrJnCqeXPYr73/v5elDApaMyIs33gD6//wTmjRp
glq1nAc8ZF+8RysK6tapbYas4uLjyD20Hj/u/xYHj/5ifrvw5OaNrValTjpzVMnktG7n1E4vU/Tl
Up9jvBpVMqYbX11vlfvCz/j57vXCb1Sqcx+8mW1VlDC2v2odbp90pbUu1iP9ozXoO8oXskT7P92D
vmnPOmxPVZ0essTa3r6Q1bBRIyNoNUPdunXNci+pgBQLYulYdCo86AEiUJgIFkqiEZCi0UeoQt2X
tOs5bbc5V5Mq19nXHaUNR0LCcKTJvOtkZFnFJbImo2tCV0w2KrImd0VXWQhA2jj1Y5bLfqz1ErLf
4WVKDWkYnpBgbGOffMdi9IjJXW11ar/aMVc1o/vWtpZKR+V37dqFgoIC30qMqhJBq1EDX7L8ePMU
TF5+GSZ9cyn+7/trzcBVx0i5J9T3H+kSO7OfNydFrTuHpVL27QJbjzcHjMCGv36KFWs3GdM0dHj6
br+wM3fkx7jUrDOmj+4HtHoJYveuux/vqvoXOmPqVfbLgyOAF3yjWP7OwpltrUVTW5zZ2VqkasMe
spYty8XZQ30hq0WLE3GCMa9Zs6ZZ55VYDDaxdkwSCtRIjR4Q1LKahzqKE2q7eLV4RHPH12h//+z8
308jrExYhUmZ09G//7UYljELc23hJGvuLGQMm4gxSVZBQGmYNBZIQdl+RErKKgxwDFVO+mN6cTHk
glPJlDnJ6GMI+pYcS4px7Fr9yjEwq5LGYOWHXTB2qENw9MDj227HvL0z8c/c8Ziy4z48tu023J9z
I+7adDXu+PEK3Jx9AUZkX2q1jpyELPksa/JwS3Mk68NPgE6LFli1sa1qjGhZ90h9mvMMzuvfEudd
fzJqnLsS83OfM8tlxMtOH5XyTPYCfLx2IG7/U0er4DLc/FcjEC7URq36X1kaks7shSs7r0F2jqyU
HZXCpVMwpf8HmD/fWp9/L+7FNDxa5r/hjmjbaQ2mvvK5tS6h7W5MXWutULXAkBVYRR9b0mcJ1lJZ
KmTJ3D7pAoUIXajt4pn+Gu2vV72Xbsy6tEkYCxVc+mPsJONnxi8hZWHurAwMu7a/tR5E2vuYYYSy
lRONkDOpbKDqMnEmJq0agJCzlk2akeK6TLTCFLKRmdEFndwCoEtw9MLq/OUoOJ6PvKID2HE0B7lH
tyL32FbsPLYdO/JzzLqCJeX7Rr4KWfqovIQsuWTYokUL88ttsaxKBC0ZNhVHiw/hIHbiCPai0Dh/
7CzwBZpatpOJ06iUCl6qPCpBLGedEZc+wL3apcHrn16D9RvMJGXq2MFv2EmTg+y19lEpnw2bjNcl
92U93wnvutxzdfGkT3HXuhEl+30Q9+AujmhVGxKy7n/6uZgLWVJWnklxqgtlsnMq84KErNWXBw9b
QkKCCg72AOGk6ZI7raWqJXX3NGspMD1QOQVT/T3U69Pen4GUIX2t4GL82/QdAklbJTkoay5mZQxD
aDnLGh0bazSWkDPj/TKXCVetA8bMnIRVAxwuIQaTNRkTVhnB0O9YZmCAdunQP8D1x7XDMmzB0RuJ
NRpg85G15pRXtN8IWVtQNyER+38+iIZ1G+HAz3moXSPyoGUPWfJZ1v8BX8hq2bJlwPuzYkWVCFpF
RcfNebN6rbA3bzf27z6KYweB0xt0N8uPHvP/VoJTiFLhS5U7hTFh31ZtI1OZ9m07oWNn7dKfmkK6
IV0u9anRLX8dTu+IjQvTsH7t33F9SYgbgblWqBtvjnh1xE0flu7znYt/xMfGD1/qmWYXVE1cfN39
lRKyqJQKWcItbNkDgfAPCLutpbL29viHteStabv9Q4zX0puPsJZCp94zCVSyLHMVrvT3WObNRyxG
F31IKKkvhqTMwPtWYJHLhnItMLScZYQyv9GxVZjgdJNU0hjMNOpCv4To4z+aJWyXFjMlwPnfl3Vm
cgoyMr2/Kbdbw4tw76nP4Icv1+Kl9p/jvU6ZeO70OZjXcwPe6vAN0lKNg4owaVSFkCWqRNA6cuyY
Ob++/VNIzPwNjq4+DWf+Mhi/O/Vus/zIUV+9okKUCkZ6QCoTlgx6O31S7Osl5FIg/o4HbTe4h6Yj
Uq86C3Of1x7LIJcK0wbi0kvl5vZ5/uFt7TT0xUBMMZbLXkqUG+b/jg76ZUiq0hbmdsXf/3q3OVVW
yHL6dp+UlWdSnOpCmeycyqJJD1mKU9jSw4CzEC9feWhEc7djqxwqRDm9Z3qgUpO9/ZI7e5pzVW/8
q6DvkBTMMJOW736rIaU3RAVk3suVMRbJ1uhS8tgMZMya63iPVNKYMC8hymjWjCAja0aAm1hBI1h2
8zf5XsiT17xoznOPbsHAhSkoKi5CzzfaonZCHTRLCe0bejoJWfZR+XgMWaJKBK29Bw+h6PhxdD2x
L+7u/AUm/moNhp/5Mk6qfwYKjJB1qMD/OUF1T74FF/d7ERdd/U9ceNUL5pTa5/mwJ7Vtj95TMfi2
D81+/cmo0qe48qMrSi7hyeQbcQpOwtS7V6WVjlqNXIu7PnK68d2BXFos2eezOPMjpwBGVdVzr76J
nnd/gW63ppmPcKiskSyvg0x5VEbIUuxhS538VeCyU/XRMq1naJflYoXT+6ICktt7JlSdCliqfWpq
U/T4x+KSOtUuqe8QpMhlP7nfyu/G80AklHXBh/oIU3EmJrncFG/sJaxLiDKaFfLIWiXYv6EAmfnf
ILVRH/R4uTVa1mmD2o1qomZCTSSe7AtCxwsjf4qUGpWP15AlqkTQOn682PwbgwcPHTYvE0royjfC
1Z4Dh/DL3gNWq1IH8wswcOA1uKL3lehtTFf26YNTTzvNqg1Nq1atzO1k+75XXYUeF6Qi77DTgx/9
L+HJpAKPBKl3Sm6UF762eiDyH7mah5tcL/1dhketZ2iZzrwN74S0HVVF8siG5ka4OvW0VuZzsirz
cmEshq3KDFmKPWypMKCok79Qy3qZEsk9WiMWh39ZrjLp74tdZHW+BLVqXWkSMtuaI0MzMGHCKr/7
twIyb4K/1haEZHRMbvlyiVLqEqLsxypyZI1mTbR/7TEry3+0zGyX4jcCl52ZgZRk7z/4G3euZ37b
cOSPvVH/tNrmNw2P7ivC8A3dASNf3ZLdEwfWhP9QZH1UPp5DlqgSQUtI2NqXl4+f9+zHjl/2Yvf+
PBw67Pxsjf1GIPvzX+7BmHHjMebBv2HsuL+Z/4ChOm4EuTPOaI8HHprg2/7BCbj9zruw39g/USyQ
UCUPIZWAJfPKvicrlsJWhYSs5sbCd8EnaSftnQKUnPhLyo0TqXAKDhV1j1ZlmNYzOjf66++leg/l
Zvt/9Nhbum7V9792GDIynC8bZoxNLrn53JyGT8bkCTMcv5mYNGYihs2Y4Po8K/MSIjKM/7lzH82a
i6H6cSTPwpDMldpjKNLwvi14eWX/zgOolVAb2Ruz0ahDIjasy0adRjWxdVUuElvUQs7q7di7teyD
w4NZsGgBbpu6BkMf/xbyzcJ4DVmiygStcMg9Wzv3HjBHwWQqNv4nJ6PGjRubUyANGjQw/7Fr1qpp
DhGrPiTgyWVKInIWC2GrIo4h63LjE6Vb6JO0l5O9U4hS0vdWv78PJ0YsLl+IVOFJ5vr7ay73H4sl
d/qCnN9733+68dmuhxafpDErtUuD1jR9DMasLMZ0x+t6csO6rx/ZdmWZh3ElmdsWO29s6j+92GE7
gzwry+9YbMcb1qXP8umSnIKcTVtQo24C8rcfRa16NXD0QJEZtgp2F6JG7QS0v/5Eq3Xo5JENEq7a
tWtnXkGK15AlqmXQsjuQdxgPjHsIr/zrdQweMtQqLUtGBQbdOBivvvY67r7nPnNkjIjIK/bwVTLC
FSXR7i9S4RxHOG3V+ydz2c5/WyPoTOyCnj2nYXeMvA9RkTUZXQeswqSZ+rcUvTOl3Qf49HfZ+PjC
DfjkEmN+8QZz/dPLsvH5lRvNaVK7d6zWoZNQ1bBhQzNgyTxeQ5Zg0DLIzfKHj9fECc1ORKNGjVCj
Rg0zVNkn+YeW+sYtTkZhzbrIy4/tx/4TUXyzhwp78CqvaPcXqXCOI9Jjdtyu/3QsXjwiZm80j4g5
2lV2RI4qD4OWpbCoCAfy8nFF3/74ZvkKfP3td2Wmr5Z+jasHXm+OgB0rLLK2JCIqH320RQ9XsRKE
Ypk9jAq3Mnk/+f5SRWPQ0sgXUH/ZdwDZ2352naRevtVIRBRN6qTPk7+PU1hy4vR+BSqTud43g1f4
lixZUq6pumHQIiKKEeqkbw8ZoYaOqiQaocftfZO+nfqvju9zuHr06BGVqTph0CIiigFOIUstx0Mo
qKjjCWU/bu+bvq0sy+QWuoiihUGLiCiGqJO+CgX2uRJOOLBv64WKCCv21+H2utyORS+XZbUu/VTE
8VP1lFAsD+GIosOHQ3vkwYwZMzBs2DBrrdRjL76F0bdcb62Veuq1d60lIqosbj+bD/3599aat778
8ktzHksPQP3+++89uRTiduLXQ4E9ILgFhlCCRHm2DVe0+4ykP9mGqj65J+ycc86x1gJzyyVO6tUL
7e83fr9uc/wELSKKTQxaFR+0hF5f3uBi3748wSWU7cp7vE6c+tTLAi0LWVfLVHXEQtDipUMiohgk
J337JOwBwY2+TTAqZATbR6D+ZJtQw4rqXxfKdoHYj1nvT5b1+lCPkygaGLSIiGKYhAI1CXtocKLK
VTs7+3Z6nzr7fuzr9rlw6kevd6NvZ2+vrzv1FWz/UqbaqHq9TDj1SxQNvHRIROXCS4ffW0sVS/64
vfwVC31u51aus/ehbyPLToL1qYSy/0jZ+w62L1Wvz6l64D1aNgxaRPGlugetivTdd9+hW7du1ppv
3Y20s9erMnsf+rri1rdqq7ZzmodK307Rtw+lv2Bt9L6FfX9ESrUPWvzWIVHl47cOY48eHOyBI1h5
NLgFF32f9uNw27/el1p22sbet31dOG2vl6l2RAqDlvFhfl3fy6w1Iqpo7839nEErRjgFBRUoFHvI
EPZ1xa08EvqxqeVAZU51TqRO2OvD2Ua1DbQNVV9eBS1epCYiijNOIUHK9EkvU+zrilt5JFQ/9jCj
Qo/QA4/OqUyROr0/odrqc31S26h6oS8TVQQGLSIiijp7KNLXVfixtxFOZW6kbbD2+n5U+2DbEEUT
gxYREXlKDzZ66HELW+Fy61+fcySLKguDFhERVYpohCydPVzpor0volAxaBERERF5hEGLiIiIyCPV
MmjVq1sHLZs3wWknNsWpUZikH+mvvtEvERERkVItn6Mloeih+0fj4IGDSEysjZSzT8XGzbtxepum
2LRlb+m8tTHfusdYb1ZSLu3OaNvcWN+D9sZ83cad2HsgH6eeegoeGD8Rubv3WXshqlr4HC1nlfUn
eIgoNPwTPDYVEbTan3YSepz/Gxw9VogTm9TCuD+cihnzdmHQZc0x+/PdDvNmxnwPbjTWZ3y8C0N7
N8fbn+3G73u3wEv/2YmsjXno2OF0/OfDOdi4fae1F6LYtOLsDua88/IM1K1bFzVr1jTXg2HQciZB
q0ePHtZafLP/oWX1B5ilTC0Ltz/AbN+eqLItWbKk0oNWtbx0eNzIlomJiSgqLELdxAboeeUQNG/Z
Bj2uuMFlPqhkvelJp6HH5Ub5Sa3Ro/cgNGp6Io4XHUOdOnXMfolimYSsm2+GOa09LwUHDxxAUVGR
VUvVjQpPeqBS9GAlVJ1bG4YsImfV+mb4WrVqYO++PIx7ZCYys3Iw8cnZZeY/aOsPG/O167dj4qR/
I2vdVjz8xDvI2fILateuZfVYsTa+2gfndj5dm/rgzWyr0uBXP+BlbLTKqXqada9vJEvsy7Xme/fg
yJEjvhWq8sIJRvZAFU57xb4/ouqoylw6TEhIQNMTGqBunVqoVbMmjh8vxpFjx3Dg0GHzEqGuVa2P
rKXo21Z4lbXkPQlSD+I5vPOnjlaJZv69OHckMGXtFFxsrErb6zfcgxWT+PciqyMJWQOu6ozcXfuw
72Ffyqox4200adoUJ7U8BfXr1zfLAuGlQ2exfunQLeCoYKTXuwUpex/STsr0OVEs4qXDKJGQJTe4
N6hX1wxZokaNBPPbhSc3a4w6thGnk84cVTI5rds5tdPLFH251OcYr4063fjqeqvcF37Gz3evFwvH
ltbZR6y2bliDDqc7hCzDws8+QMe/3m6GLNH+T/egb9rHWGitU/WhhyyxtrcvZDVs1MgIWs3M+7Qo
/rkFKicqICkqKEmZKrfPnUid3pdb20B9BBLpdkSxpEoErUYN6hkBqwbqzP8CTc//NVqcchKade6I
ev98oaTebmf28+akqHXnsFTKvl1g6/HmgBHY8NdPsWLtJmOahg5P3+0XluaO/BiXmnXG9NH9gFYv
QezedffjXVX/QmdMvepev7A0d6QWxMZ+bpWuR8462EJYW5zZ+QPMn2+tUrVgD1nLluXi7KG+kNWi
xYk4wZiHejM8xTYVeOyTqlPswcheJ/R6IeWqTp/LZO9L5qpMqHK1bK8X9vZqXe9Pb0MUT6pE0Kpb
p7Y5r/fUZNS8/nr5o1aoMfhG1P/iM7O8bm1fvU4flfJM9gJ8vHYgbi+5tHcZbv4r8PFCbdSq/5Ul
o044sxeu7LwG2Tmysh7pH61B31G3ob1Zabh0Cqb0Lw1LF0+yApg5fYq71o3QRsTOwpltrUWqlhiy
qjY9fLgFET3gCLWus/fjxN5Gn3T6ul6vltX+nQKUmgv9OPVlvT1RvKgSQUtGs0TNTZuAPXuA5cuB
oiIk5OWZ5XIZUec0KqWClyqPShDLWWfEpQ9wb8mlv9Nx/dNrsH6DmaRMHTu4paEcZK91DksbNvlf
XvTpiJtGDcT6jxZYN72rwEbVkYSs+59+jiGrinELI27soSSSoCL7UftSy6HsW9G3VfuWub0vNVfU
serHq9rby4liWZUIWkeO+m52P9arF/DTT+aIFvbuRVGDBmZ5YdFxc644hSgVvlS5UxgT9m3VNjKV
ad+2Ezp21i79qSmkG9LlUp9zWHK7L6tUR9m1LZC5Bzequi6+7n6GrDjmFib0oKHm9qASLfZ96euK
0771MhWOFFWnl9n7tFN9OLULti1RZaoiQeuYOc+/fxwKjBPJ8aVLcfTwYRwa+RezXL59qFMhSgUj
PSCVCUsGvZ0+Kfb1EnIpEH/Hg7Yb3EPTEalXnYW5z2uPZZh/L+5NG4hLL5WV9dio3etl3g/2/Afo
eFUv81LjxZcPxPqnXyq5n2vjq89ibuf+SD3TKqAqbWFuV/z9r3ebE0NWfFPhQiYVNvQQo9jLndqU
l1ufsm87VSZze72+7rRsb69TxyBzaReoLUWffIsvGlN1UmUe73Bi00ZItO7V0slo1s+79/k9TFSe
DN/9t7+GvHT18o8f9x/1CkWNGr6cWlhYiLZt27o8GV5uiL8CU9daq4a+L2zCo0ZYKvt4Bl/b7FG+
emE+luHpNb4VnIW7PpqHm8ywVLbfjn/91O9RD/7bDix51ANVffn5+di96xfkHzoE+VahfLuwvCGL
j3dwFu3HO3gdHFRAqUgqHDnttzzHY99W7Ye8IyGpvP+9R6OPUMm++HiHKPll7wHsOZCH/IIjKDLC
VcHRY9ifl18mZImD+QUYOPAaXNH7SvQ2piv79MGpp51m1YamVatW5nayfd+rrkKPC1KRd9jpwY8d
cdOH/pcOVYhq/6d5tmdg+dqqeiFtSrdVIUuU7df+PC3/bRmyqhMJV81bnGj8d93KfE4WR7JimwoL
MreHBVlXZU51FSnS/cnrcgtT9nL99QZj39ZtH1S1/KtmbXPKy8szBzpiXZUJWuKQEXR278/Djl17
zeAlDyt1+rM4+43yP//lHowZNx5jHvwbxo77G1q2bGnVBiejX2ec0R4PPDTBt/2DE3D7nXeZwY4o
FkiokoeQSsCSOUOWjxoRiyUSDiRY6HNFBQ5VZg8S9vVQRLKN4rZtqMEoFLIPtR+3fqXcqS6ax0Gx
SQKW+jNisxs3xb59+2I+bFWpoBUquadrpxHEcnfvM6di439yMmrcuLE5BdKgQQM0adIENWvVNC87
qj5+3rPfHEUjotgWK2FLDy1uAcYvdOyeZs5jkdvxK5EGoJLXrm0vy/r+ZF1NQn/PgkobjoSE4UiT
edfJyLKKS2RNRteErphsVGRN7oqushCAtHHqxyyX/VjrJWS/w8uU+nHqM214QtDtqqLRfUtvD1J/
RmzXrl0oKCjwrcSoahm07A7kHcYD4x7CK/96HYOHDLVKy5JRgUE3Dsarr72Ou++5zxwZI6L4Ewth
SwUDt7ldevMR1lL8CTn4uJDt9SBln9vLhL7sLAuTJ6zCpMzp6N//WgzLmIW5toSUNXcWMoZNxJgk
qyCgNEwaC6SgbD8iJWUVBoQZjiRQJc+SPv31n56JSasGIBay1uPbbse8vTPxz9zxmLLjPjy27Tbc
n3Mj7tp0Ne748QrcnH0BRmRr98NESEKWPBewycMt8cYbwIefAJ0WLbBqYxuDluFQwREcPl4TJzQ7
EY0aNTJvcpdQZZ9q1apl1jducTIKa9ZFXn5sp2giclfRYSvpM9/z/IKFASlzC1vVmf29kvfI/j6p
dae6MtImYSyGoK8Zovpj7CRgll9CysLcWRkYdm1/az2ItPcxwwhlKyd2Mfoqm4C6TJwZXjhKG44B
+BDFM4dYBbok9B2SghnvV37SWp2/HAXH85FXdAA7juYg9+hW5B7bip3HtmNHfo5ZV7CkjtU6Mipk
6X9GTEKWXF1q0aIFEhMTzfJYxaBlKSwqwoG8fFzRtz++Wb4CX3/7XZnpq6Vf4+qB15sjYMcKi6wt
iaiiSUiKZLJzKvOChKzVl/vmegDQl4MFMLvUphX3FfmgoaWCyfHIe+T0nqm6YNLen4GUIX2NyOKT
1NcINEbaKokuWXMxK2MYQstZ1ujYWKOxjI7NeL/MZcJV64AxMydh1QCHS4hO+k9H8XT3ncvxpjjs
p6Il1miAzUfWmlNe0X4jZG1B3YRE7P/5IBrWbYQDP+ehdo3Ig5Y9ZMlzAfs/4AtZcm+1zGUQJJYx
aGnktvlf9h1A9rafXSepL4rgURBEVD2pkCVU2HILCGoeLNiYYWJvxXw9XoQSXCJR+pqbmvNgpu32
tbcfj3rP7O9lMF06adcEk/piSMoMqEEiuWwo1wJDy1lGKPMbHVuFCU73cyWNwUyjLtxLiI6SOqEL
VmGdw24qUreGF+HeU5/BD1+uxUvtP8d7nTLx3OlzMK/nBrzV4RukpRoHGGHSqAohSzBoEVHckedo
RTLZOZVFkx6yFFm/Lb9nmVAg6/oUKCyobeOdeh3p6XvNeTAjmju/bvWeqeXgsrBulbVYQr8c57vf
aogvOQVl3suVMRbJCQlIMKbksRnImDXX7wZ2JWlMmJcQY9z8Tb4X8uQ1L5rz3KNbMHBhCoqKi9Dz
jbaonVAHzVJCe+aUTkKW/c+IxWPIEgxaRFQtVUbIUuxhS1EjMxQZ9f6FFrbKSlKX4+R+qxQ1QhWM
hLIu+NB6ALZvysQkl5viJdCFdQkxxu3fUIDM/G+Q2qgPerzcGi3rtEHtRjVRM6EmEk/2BaHjhWUf
sxQq9WfE4jVkCQYtIqp2KjNkLU5fbE5OYUuW1XqkYaG6098/9/cwCZ26yH1TtiSUNAYTh83AhAmr
/O7fCsi8Cf5a2yVGGR2TW75copS6hCj7sYrClrUOq9AF+tXPytC4cz3z24Yjf+yN+qfVNr9peHRf
EYZv6G7ej3NLdk8cWOP0MO/A9D8jFs8hS8TVn+AhosoV73+CR7aPdsiy/wkeM2Q1t1ZcLD68GD3r
9SxZdhqF0cvKM0pTVdnfE7Vuf99cyTOsJiQjc+UY/0Al5QPksQ8r/R7rIM+zkkuCfoZNwqRVY5E5
sRhl71tPw/CECUg2+uk7tyuGYiZW+j0nIguTuyZjbJcPA970buwYXYcCM23HaR5P5sTA23rA/udz
fvfRGTjjrHbI+XELGrSpg4M/HkFi81rI23oUDVvXQX7uMTRddzreGz3f2iK0P8EjT33Pzc015/Kt
Qvl2YSQhKxb+BE/cBC0iik38W4eh/61DexAQKgw41VFgTu+leh+FvlyWBB0zwYT4nKxY4gtpzgHP
W/aQdO/mgcjK/gE16iSguLAYCTUSfJcKjf9TlwzPPevXmNTuHXNZhBK05Gnv8iBSmUu4krAVyUhW
LAQtXjokIqog+klfllUwoMg4hSwlcMgSSRgjz7xKjr97pdKG+0bCKjpkOZnS7gN8+rtsfHzhBnxy
iTG/eIO5/ull2fj8yo3mpIesUEmoatiwoTmKJfN4u1yoY9AiIqpAKgDYQ4E+F4FDAgUT0vsnz6oq
nh7aIxxiSP/pxRV+yZAix6BFRFSB7CFLDwTxEq5COc7Kei3295eosjFoERFVIAkBKoTIslso8Dos
lCcIhXJsXh6/07HL/ior3FU3ct9TeabqJiZvhiei+MKb4UO7GV6oAOIWFoTUOS07CVZfUbw6Drd+
A+1P6oiEBDt+65CIKERVIWgpEhJUIFDLKjioZb3eTm8vArXV2bcLV6Dty9t3KJxet1qXZSJdLAQt
XjokIqoEeihQYcGpzB4i1FwPG0Jv60b16ZVw+7a/Xn3diTr+YO2IYgmDFhFRDLIHChViyhOUyrOt
LpRQFC45tmDHZ38PVPAiimUMWkRElcwtLOiBwh5s9PVohR6nfpzK5Lj0Y9OFcyzhhKRA+5HlcPZL
VJEYtIiIKpkKChI89LmdKtPb6O3s60Ktu5UHo4cht74UKQ8nPIlAxxFoPyLcfRFVBgYtIqIYoQKE
HjACBRGhhw1ZlknfRtWruR5S7Puxt9HnalnaqHbCvo3Q2zux92en6p36FuoY9H6IYhW/dUhEcSNW
v3UYS44fP44aNXy/Q8uynV4ny/pcp9op9vaqTNjbOrH3L/T96stO7PWh7JNI8PEOREQhisWgFSu+
++47a8mnW7du1pKPvV5IGylXcyXYtvp2wt6H0/Zube2c6uxl9v6JooGPdyAiIlcSPtRkp8KNqtPb
yLK93k6vc2qj17nVq6CklvV2+rIeqITeXk32NkSxjCNaRBQ3OKIV+/QQZQ9UTiFKqHb29iJQHVE0
cUSLiIhiXrAwJPVqUpy2sYcyonjFoEVERJ6wByinQKVzqldlwbYlilUMWkREFHMYrKiqYNAiIiIi
8giDFhEREZFHGLSIiIiIPMKgRUREROQRPkeLiOIG/wQPEYWLf4KHiChEsRq0evToYa0RUSxZsmRJ
pQctXjokIiIi8giDFhEREZFHGLSIiChupKenW0tE8YFBi4iIYpY9WKWmpvqVMXhRrGPQIiIiz0Qa
hNR2erCSuUxSRhQvGLSIiKogezjRuZUpatk+V2RdnwJRoShYOzs9YKllmatlvU6E2z9RRWHQIiKK
Y4EChl6nlmWuwomd1OnbqFAjVJ1at/eht3Oau7UXTts41Qv7sr0tUaxh0CIiilMqYOiBQ5VJsAkW
hoTbNk5lbnO9X1Um9HW9jZByWbaXC6lTk6Iv2wWqI6psfGApEcUNPrC0bFCxz53q3LjV69uLQH0o
9r70ddWPsJc5bROsL6c+1DJ5Sx4AGg0V9TMTCw8sZdAiorhRnYOWHkDs84pgDzVu+3WrC3TMet9u
9O1Ue6GX2Zf1dhQdElzK+997NPoIFZ8MT0RUgVRQiwcSFFRwCDavCLIvfb/2EKPW3Y7JaRtF79uN
Xq/ay+TUp2qrb0NVx79q1janvLw8FBYWWqWxi0GLiKqVWA5bejCQAKFChFtAqUz2EOMWavRjlzbR
Dj/Sn+xD9RvL7xmVnwSsm2+GOc1u3BT79u2L+bDFoEVE1U4shi17AFHrMrfXeS2ckBKsbUUcu74P
WVbBS81dpQ1HQsJwpMm862RkWcUlsiaja0JXTDYqsiZ3RVdZCEDaOPVjlst+rPUSst/hZUr9uPXp
O/YEa/Ido0k75qpmdN/a1hKwL9c337VrFwoKCnwrMYpBi4iqpVgJWyoISChQoUWVBQwJHgpnv+U5
xmAhzc1u23b2fvRjct9HFiZPWIVJmdPRv/+1GJYxC3Nt4SRr7ixkDJuIMUlWQUBpmDQWSEHZfkRK
yioMCBKq7NKGJyB5lvRpI2FqgBx7MeQ26+LMIZiVbAW5pDFY+WEXjB3qEM488Pi22zFv70z8M3c8
puy4D49tuw3359yIuzZdjTt+vAI3Z1+AEdmXWq0jJyFrwFWd0eThlnjjDeDDT4BOixZYtbGNQYuI
4o6EpEgmO6cyryR9lmAt+ZMgIMFAn6qDntN2R/xam9u2c+sn4PuZNgljMQR9zRDVH2MnAbP8ElIW
5s7KwLBr+1vrQaS9jxlGKFs50Qg5k8oGqi4TZ2LSqgEIOWulDccAfIjimUOsglJlAqARriYOm4EJ
ahjLJTh6YXX+chQcz0de0QHsOJqD3KNbkXtsK3Ye244d+TlmXcGSOlbryKiQlbtrn7m+trcvZDVp
0gQtWrRAYmKiWR6rGLSIiDwmIWv15YHDlj6vDhaPaG4tec8pbKW9PwMpQ/qiJKv0NQKNkbZKclDW
XMzKGIbQcpY1OjbWaCwhZ8b7ZS4TrloHjJk5CasGOFxCdNJ/Ooqnu+88JflMa8nnzGR93Ks/rh2W
YQuO3kis0QCbj6w1p7yi/UbI2oK6CYnY//NBNKzbCAd+zkPtGpEHLXvIWrYsF/0f8IWsli1bmvNa
tWqZdbGKQYuI4oY81qE8k51TWbSpkCXsYUsFAPu8OgWuytSlk3ZNMKkvhqTMwPtWCpJRI7kWGFrO
MkKZ3+jYqtLRJV3SGMw06sK9hGiX1KkLMvxC4WQMHZthrfhI8MrIzLbWvNOt4UW499Rn8MOXa/FS
+8/xXqdMPHf6HMzruQFvdfgGaanG+xBh0qgKIUswaBFRtVTRIUtRYUtClbpsyGDlPRVifbKwbpW1
WCIJfYekYIaZtHz3Ww3xJaegzEt5GWORbN2cnmyEnoxZcx3vkUoaE+YlRCf9pyNTApu6GX4oMHFS
mTu5KsT8Tb4X8uQ1L5rz3KNbMHBhCoqKi9DzjbaonVAHzVJCe+aUTkLW/U8/F/chSzBoEVG1U1kh
S1HlesjSw5Z/KKBI2MOrWg/03ib1HYIUuewn91ulqBGqYCSUdcGHclN6yZSJSS43xUugC+sSoouk
MStL97eyL9bNCj0YRtP+DQXIzP8GqY36oMfLrdGyThvUblQTNRNqIvFkXxA6Xhj5c9Evvu7+uA5Z
gkGLiKqVyg5ZdnLiVxNFj/5+lh0xTEKnLnLflC0JqZvKJ6zyu38rIPMm+GttlxhldExu+XKJUuoS
ouzHKiqPtOHJGNvF/9uR2ZkZZe7j8kLjzvXMbxuO/LE36p9W2/ym4dF9RRi+oTtg5KtbsnviwJoj
VuvQLcztir//9W5ziueQJRi0iKjaqLCQJfd5f+c8LU5f7Ft2UTYUVK6ePadZS2XF2rHq1LHJ3Cl0
9b92mOPlPbM8w3l0KGNssvXcKmsaPhmTJ8xw/GZi0piJGDZjguvzrMxLiMgw/heJLEzuWnocE5Iz
bTfOp+H9GSkVMsK1f+cB1EqojeyN2WjUIREb1mWjTqOa2LoqF4ktaiFn9Xbs3XrQah26BYsW4Lap
azD08W/NbxbGa8gS/FuHRETlUJ6/dWgPABzVih71furvq/9yUyOsDAVmrgzxOVlxRB5mOiEZmSvH
hDYqFwb73ym8d/NAZGX/gBp1ElBcWIyEGgm+S4XG/6lLhuee9WtMaveOuSxC+VuH8rR3eRCpzCVc
ySMcIglZsfC3Dhm0iIjKIZKgxYBV+eR9NwPJAODD4umhfbswHsjDTJNnYUimNwEyGn8QOhp9hCoW
ghYvHRIRVQLzRG/QR1sUfZmiR72vJe+vPKuqKoUsIU+GL66Co3RxjEGLiKgS2QOXsI9wMXiVj7x/
Msn7yveSKhovHRIRlUO4lw71EKWf/J3mTgLVxYuKfA36vmSZykcuxUVDdbp0yKBFRFQOkQQtpxN+
LISneAtx6n10O2a93mmZqj7eo0VEVA2pYCBzNYlwAoAXYUEdRzyQ16+/d4r9fVH1qq3ajqiiMGgR
EVUgPQjIsr6uAoA9LDip7LAQyjEq4bQNldvrV2FKJn1Z58XxELlh0CIiqmDBTvwVHaIiCR6hHqPT
a41kf8HofcqxqZBlp467ot9jqr4YtIiIKoF+oref9KMdRIL1Fyh0hHosbu2kb/trDbS/SOl9qmNR
ZTKXMjUJt+MlijYGLSKiSqCf6O0nfRUMIhXOtqqtfa6Eeiwq1ARr67af8nDry4t9EYWLQYuIKAbY
w0CowcWJvq2+vVNfqq0KVGpdUWVO2yp6nd7WaRt9fyJQv6Gy96Ufg31/RBWNj3cgIioHebxDNB0/
fhw1atQw59Fg7yvQuiwL+771ctXe3tZpW7f+hOpHZy8LtH0o1PY66cupnKouPkeLiIjK+O6776yl
srp162Yt+ZNtAtXp3NoJ1Y8+F6Fso5aFvq2dU1vh1I/OrU+1jbD3KfR+iZwwaBERVTORhINYDBSR
vg5FtlV96OWKvW+n/UVyDFS9eBW0OH5KRBSjIgkGsRgmIn0darKHK1Wukzb2dvq6vT1RRWHQIiKi
mKZCkj1ICVWnBymnMqLKwqBFREQxL1CgUuzrTsGMqKIxaBERUVwId4Qq3PZEXmDQIiKiuKLux+KI
FcUDBi0iIoorMlKlJqJYx6BFRERE5BEGLSIiIiKPVOoDS4mIiIhiRZV6MjwRERFRPOKT4YmIiIhi
AIMWERERkUcYtIiIiIg8wqBFRERE5BEGLSIiIiKPMGgREREReYRBi4iIiMgjDFpEREREHmHQIiIi
IvJI1J8MT0RERER8MjwRERGRpxi0iIiIiDzCoEVERETkEQYtIiIiIo8waBERERF5hEGLiIiIyCMM
WkREREQeYdAiIiIi8giDFhEREZFHGLSIiIiIPMKgRUREROQJ4P8Bg6qTdFLdMD8AAAAASUVORK5C
YII=
------=_Part_6439429_1730970573.1516054958003--
1
0
I recently upgraded to 4.2 and had some problems with the engine VM running;
I got that cleared up, but now my only remaining issue is that ovirt-ha-broker
and ovirt-ha-agent are continually crashing on all three of my hosts.
Everything else is up and working fine: all VMs are running, and the hosted
engine VM is running along with its interface, etc.
Jan 12 16:52:34 cultivar0 journal: vdsm storage.Dispatcher ERROR FINISH
prepareImage error=Volume does not exist:
(u'8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8',)
Jan 12 16:52:34 cultivar0 python: detected unhandled Python exception in
'/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 12 16:52:34 cultivar0 abrt-server: Not saving repeating crash in
'/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 12 16:52:34 cultivar0 systemd: ovirt-ha-broker.service: main process
exited, code=exited, status=1/FAILURE
Jan 12 16:52:34 cultivar0 systemd: Unit ovirt-ha-broker.service entered
failed state.
Jan 12 16:52:34 cultivar0 systemd: ovirt-ha-broker.service failed.
Jan 12 16:52:34 cultivar0 systemd: ovirt-ha-broker.service holdoff time
over, scheduling restart.
Jan 12 16:52:34 cultivar0 systemd: Cannot add dependency job for unit
lvm2-lvmetad.socket, ignoring: Unit is masked.
Jan 12 16:52:34 cultivar0 systemd: Started oVirt Hosted Engine High
Availability Communications Broker.
Jan 12 16:52:34 cultivar0 systemd: Starting oVirt Hosted Engine High
Availability Communications Broker...
Jan 12 16:52:36 cultivar0 journal: vdsm storage.TaskManager.Task ERROR
(Task='73141dec-9d8f-4164-9c4e-67c43a102eff') Unexpected error#012Traceback
(most recent call last):#012 File
"/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in
_run#012 return fn(*args, **kargs)#012 File "<string>", line 2, in
prepareImage#012 File
"/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
method#012 ret = func(*args, **kwargs)#012 File
"/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3162, in
prepareImage#012 raise
se.VolumeDoesNotExist(leafUUID)#012VolumeDoesNotExist: Volume does not
exist: (u'8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8',)
Jan 12 16:52:36 cultivar0 journal: vdsm storage.Dispatcher ERROR FINISH
prepareImage error=Volume does not exist:
(u'8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8',)
Jan 12 16:52:36 cultivar0 python: detected unhandled Python exception in
'/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 12 16:52:36 cultivar0 abrt-server: Not saving repeating crash in
'/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 12 16:52:36 cultivar0 systemd: ovirt-ha-broker.service: main process
exited, code=exited, status=1/FAILURE
Jan 12 16:52:36 cultivar0 systemd: Unit ovirt-ha-broker.service entered
failed state.
Jan 12 16:52:36 cultivar0 systemd: ovirt-ha-broker.service failed.
Jan 12 16:52:36 cultivar0 systemd: ovirt-ha-broker.service holdoff time
over, scheduling restart.
Jan 12 16:52:36 cultivar0 systemd: Cannot add dependency job for unit
lvm2-lvmetad.socket, ignoring: Unit is masked.
Jan 12 16:52:36 cultivar0 systemd: Started oVirt Hosted Engine High
Availability Communications Broker.
Jan 12 16:52:36 cultivar0 systemd: Starting oVirt Hosted Engine High
Availability Communications Broker...
Jan 12 16:52:37 cultivar0 journal: vdsm storage.TaskManager.Task ERROR
(Task='bc7af1e2-0ab2-4164-ae88-d2bee03500f9') Unexpected error#012Traceback
(most recent call last):#012 File
"/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in
_run#012 return fn(*args, **kargs)#012 File "<string>", line 2, in
prepareImage#012 File
"/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
method#012 ret = func(*args, **kwargs)#012 File
"/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3162, in
prepareImage#012 raise
se.VolumeDoesNotExist(leafUUID)#012VolumeDoesNotExist: Volume does not
exist: (u'8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8',)
Jan 12 16:52:37 cultivar0 journal: vdsm storage.Dispatcher ERROR FINISH
prepareImage error=Volume does not exist:
(u'8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8',)
Jan 12 16:52:37 cultivar0 python: detected unhandled Python exception in
'/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 12 16:52:38 cultivar0 abrt-server: Not saving repeating crash in
'/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Jan 12 16:52:38 cultivar0 systemd: ovirt-ha-broker.service: main process
exited, code=exited, status=1/FAILURE
Jan 12 16:52:38 cultivar0 systemd: Unit ovirt-ha-broker.service entered
failed state.
Jan 12 16:52:38 cultivar0 systemd: ovirt-ha-broker.service failed.
Jan 12 16:52:38 cultivar0 systemd: ovirt-ha-broker.service holdoff time
over, scheduling restart.
Jan 12 16:52:38 cultivar0 systemd: Cannot add dependency job for unit
lvm2-lvmetad.socket, ignoring: Unit is masked.
Jan 12 16:52:38 cultivar0 systemd: start request repeated too quickly for
ovirt-ha-broker.service
Jan 12 16:52:38 cultivar0 systemd: Failed to start oVirt Hosted Engine High
Availability Communications Broker.
Jan 12 16:52:38 cultivar0 systemd: Unit ovirt-ha-broker.service entered
failed state.
Jan 12 16:52:38 cultivar0 systemd: ovirt-ha-broker.service failed.
Jan 12 16:52:40 cultivar0 systemd: ovirt-ha-agent.service holdoff time
over, scheduling restart.
Jan 12 16:52:40 cultivar0 systemd: Cannot add dependency job for unit
lvm2-lvmetad.socket, ignoring: Unit is masked.
Jan 12 16:52:40 cultivar0 systemd: Started oVirt Hosted Engine High
Availability Communications Broker.
Jan 12 16:52:40 cultivar0 systemd: Starting oVirt Hosted Engine High
Availability Communications Broker...
Jan 12 16:52:40 cultivar0 systemd: Started oVirt Hosted Engine High
Availability Monitoring Agent.
Jan 12 16:52:40 cultivar0 systemd: Starting oVirt Hosted Engine High
Availability Monitoring Agent...
Jan 12 16:52:41 cultivar0 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to
start necessary monitors
Jan 12 16:52:41 cultivar0 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call
last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 131, in _run_agent#012 return action(he)#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 55, in action_proper#012 return he.start_monitoring()#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 416, in start_monitoring#012 self._initialize_broker()#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 535, in _initialize_broker#012 m.get('options', {}))#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 83, in start_monitor#012 .format(type, options,
e))#012RequestError: Failed to start monitor ping, options {'addr':
'192.168.0.1'}: [Errno 2] No such file or directory
Jan 12 16:52:41 cultivar0 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Jan 12 16:52:42 cultivar0 systemd: ovirt-ha-agent.service: main process
exited, code=exited, status=157/n/a
Jan 12 16:52:42 cultivar0 systemd: Unit ovirt-ha-agent.service entered
failed state.
Jan 12 16:52:42 cultivar0 systemd: ovirt-ha-agent.service failed.
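For anyone hitting the same crash loop: the traceback above points at a
volume path the broker cannot open, so a reasonable first step is to check
whether that path actually exists on the host and whether the HA services
recover once the hosted-engine storage is reconnected. This is only a sketch
(the UUID below is the one from the log; substitute your own, and check
hosted-engine --help in case your version lacks an option):

# Current state of the HA services and their most recent errors
systemctl status ovirt-ha-broker ovirt-ha-agent
journalctl -u ovirt-ha-broker -n 50 --no-pager

# Is the volume the broker complains about actually linked on this host?
ls -l /var/run/vdsm/storage/
ls -l /var/run/vdsm/storage/*/ | grep 8582bdfc || echo "volume link missing"

# Recreate the hosted-engine storage run-time links, then restart the services
hosted-engine --connect-storage
systemctl restart ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status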
3
2
Trying to fix one thing, I broke another :(
I fixed mnt_options for the hosted engine storage domain and installed the
latest security patches on my hosts and the hosted engine. All VMs are up and
running, but hosted-engine --vm-status reports issues:
[root@ovirt1 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt2
Host ID : 1
Engine status : unknown stale-data
Score : 0
stopped : False
Local maintenance : False
crc32 : 193164b8
local_conf_timestamp : 8350
Host timestamp : 8350
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=8350 (Fri Jan 12 19:03:54 2018)
host-id=1
score=0
vm_conf_refresh_time=8350 (Fri Jan 12 19:03:54 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 1 05:24:43 1970
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt1.telia.ru
Host ID : 2
Engine status : unknown stale-data
Score : 0
stopped : True
Local maintenance : False
crc32 : c7037c03
local_conf_timestamp : 7530
Host timestamp : 7530
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7530 (Fri Jan 12 16:10:12 2018)
host-id=2
score=0
vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
[root@ovirt1 ~]#
From the second host the situation looks a bit different:
[root@ovirt2 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt2
Host ID : 1
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : False
crc32 : 78eabdb6
local_conf_timestamp : 8403
Host timestamp : 8402
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=8402 (Fri Jan 12 19:04:47 2018)
host-id=1
score=0
vm_conf_refresh_time=8403 (Fri Jan 12 19:04:47 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 1 05:24:43 1970
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt1.telia.ru
Host ID : 2
Engine status : unknown stale-data
Score : 0
stopped : True
Local maintenance : False
crc32 : c7037c03
local_conf_timestamp : 7530
Host timestamp : 7530
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7530 (Fri Jan 12 16:10:12 2018)
host-id=2
score=0
vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
The web GUI shows that the engine is running on host ovirt1.
Gluster looks fine:
[root@ovirt1 ~]# gluster volume status engine
Status of volume: engine
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.telia.ru:/oVirt/engine          49169     0          Y       3244
Brick ovirt2.telia.ru:/oVirt/engine          49179     0          Y       20372
Brick ovirt3.telia.ru:/oVirt/engine          49206     0          Y       16609
Self-heal Daemon on localhost                N/A       N/A        Y       117868
Self-heal Daemon on ovirt2.telia.ru          N/A       N/A        Y       20521
Self-heal Daemon on ovirt3                   N/A       N/A        Y       25093

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
How to resolve this issue?
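"unknown stale-data" just means the host in question has not written fresh
metadata to the shared storage recently. A rough sketch of the usual cleanup,
assuming the HA daemons are systemd-managed and your hosted-engine CLI
provides these options (check hosted-engine --help first):

# On the host(s) reporting score 0 / stale-data:
systemctl restart ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status

# If the metadata stays stale, enter global maintenance, reinitialize the
# sanlock lockspace, then leave maintenance again:
hosted-engine --set-maintenance --mode=global
hosted-engine --reinitialize-lockspace
hosted-engine --set-maintenance --mode=none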
2
1
2
4
First, apologies for all the posts to this list lately; I've been having a
heck of a time after the 4.2 upgrade, and you've been helpful, which I
appreciate.
Since the 4.2 upgrade I'm experiencing a few problems that I'm trying to
debug. Current status: the engine and all hosts are upgraded to 4.2, and the
cluster and domain are set to 4.2 compatibility. The Hosted Engine VM is
running and its UI is accessible, and all VMs on the hosts are running, but
there is no HA service. The web UI gives a few errors when checking the
network and snapshots on the hosted engine VM only; it doesn't give errors on
any of the other VMs that I spot checked.
1. HA-agent and HA-broker are continually crashing on all three hosts, over
and over every few seconds. I sent an email to the users list with more
details on this problem but unfortunately haven't heard anything back yet.
The general error in the logs seems to be:
VolumeDoesNotExist(leafUUID)#012VolumeDoesNotExist:
Volume does not exist: (u'8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8',) --
What? The volume doesn't exist -- why not?
2. An error is given when clicking "network interfaces" in the web GUI for
the hosted engine VM.
3. Similar to #2 above, an error is given when clicking "snapshots" in the
web GUI for the hosted engine VM.
The errors for #2 and #3 are a generic "cannot read property 'a' of null".
I've read previous postings on the oVirt mailing list suggesting you can
install the debuginfo package to get a human-readable error, but this package
does not seem to be compatible with 4.2; it expects 4.1: Requires:
"ovirt-engine-webadmin-portal = 4.1.2.2-1.el7.centos" -- Perhaps this
package is no longer required? I do see some additional details in the
ui.log that I can post if helpful.
There is obviously something odd going on here with the hosted engine VM.
All three errors appear to be related to a problem with it, although it is
indeed up and running. I'd really like to get the HA broker and agent back up
and running, and fix these GUI errors related to the hosted engine VM. Could
all three problems be connected to one common issue?
Thanks in advance!
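Since ui.log is mentioned above: the obfuscated "cannot read property ... of
null" errors from webadmin usually land there with a longer trace, so watching
the log while reproducing the click is a reasonable substitute for the
incompatible debuginfo package. A small sketch, assuming the default log
locations on the engine VM:

# Watch the UI log while clicking "network interfaces" or "snapshots"
# on the hosted engine VM in webadmin:
tail -f /var/log/ovirt-engine/ui.log

# The matching server-side error, if any, usually shows up here:
grep -i error /var/log/ovirt-engine/engine.log | tail -n 20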
3
9
Re: [ovirt-users] ovirt-engine-webadmin-portal-debuginfo package for 4.2?
by Karli Sjöberg 14 Jan '18
1
0
ovirt-provider-ovn installed using engine-setup might
have its password stored unencrypted in the engine database.
The problem occurs in some pre-4.2.1 engine-setup versions.
To fix the problem in existing ovirt-provider-ovn instances,
it is enough to open the network provider dialog
(Administration -> Providers), choose the appropriate network
provider, open it by clicking "Edit", and confirm the changes
by clicking the "OK" button.
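To double-check that the stored secret was actually re-encrypted after
clicking "OK", you can look at the provider row in the engine database. This
is only a sketch: the providers table, the auth_password column and the
default provider name are assumptions and may differ between versions, so
inspect the schema first.

# On the engine machine, as root (names below are assumptions, verify first):
sudo -u postgres psql engine -c '\d providers'
sudo -u postgres psql engine -c \
  "SELECT name, auth_password FROM providers WHERE name = 'ovirt-provider-ovn';"
# Before the fix the password may be readable plaintext; after re-saving the
# provider in the UI it should be stored as an encrypted blob.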
1
0
12 Jan '18
No luck, I'm afraid. It's very odd that I can't get a console to it if the
status is up and the VM is seen by virsh. Any clue?
Engine status : {"reason": "failed liveliness check",
"health": "bad", "vm": "up", "detail": "Up"}
# virsh -r list
 Id    Name                           State
----------------------------------------------------
 118   Cultivar                       running
# hosted-engine --console
The engine VM is running on this host
error: failed to get domain 'HostedEngine'
error: Domain not found: no domain with matching name 'HostedEngine'
# hosted-engine --console 118
The engine VM is running on this host
error: failed to get domain 'HostedEngine'
error: Domain not found: no domain with matching name 'HostedEngine'
# hosted-engine --console Cultivar
The engine VM is running on this host
error: failed to get domain 'HostedEngine'
error: Domain not found: no domain with matching name 'HostedEngine'
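Worth noting from the output above: hosted-engine --console looks up a libvirt
domain literally named 'HostedEngine', while virsh only shows a domain called
'Cultivar', which is why every variant of the command fails with the same
"Domain not found" error. A quick read-only check of what libvirt actually has
defined (a sketch; the read-only connection needs no credentials):

# List every domain libvirt knows about, including inactive ones:
virsh -r list --all

# Confirm the name and UUID of the running engine VM:
virsh -r dumpxml Cultivar | grep -E '<name>|<uuid>'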
On Fri, Jan 12, 2018 at 2:05 PM, Martin Sivak <msivak(a)redhat.com> wrote:
> Try listing the domains with
>
> virsh -r list
>
> maybe it just has some weird name...
>
> Martin
>
> On Fri, Jan 12, 2018 at 6:56 PM, Jayme <jaymef(a)gmail.com> wrote:
> > I thought that it might be a good sign but unfortunately I cannot access
> it
> > with console :( if I could get console access to it I might be able to
> fix
> > the problem. But seeing as how the console is also not working leads me
> to
> > believe there is a bigger issue at hand here.
> >
> > hosted-engine --console
> > The engine VM is running on this host
> > error: failed to get domain 'HostedEngine'
> > error: Domain not found: no domain with matching name 'HostedEngine'
> >
> > I really wonder if this is all a symlinking problem in some way. Is it
> > possible for me to upgrade host to 4.2 RC2 without being able to upgrade
> the
> > engine first or should I keep everything on 4.2 as it is?
> >
> > On Fri, Jan 12, 2018 at 1:49 PM, Martin Sivak <msivak(a)redhat.com> wrote:
> >>
> >> Hi,
> >>
> >> the VM is up according to the status (at least for a while). You
> >> should be able to use console and diagnose anything that happened
> >> inside (line the need for fsck and such) now.
> >>
> >> Check the presence of those links again now, the metadata file content
> >> is not important, but the file has to exist (agents will populate it
> >> with status data). I have no new idea about what is wrong with that
> >> though.
> >>
> >> Best regards
> >>
> >> Martin
> >>
> >>
> >>
> >> On Fri, Jan 12, 2018 at 5:47 PM, Jayme <jaymef(a)gmail.com> wrote:
> >> > The lock space issue was an issue I needed to clear but I don't
> believe
> >> > it
> >> > has resolved the problem. I shutdown agent and broker on all hosts
> and
> >> > disconnected hosted-storage then enabled broker/agent on just one host
> >> > and
> >> > connected storage. I started the VM and actually didn't get any
> errors
> >> > in
> >> > the logs barely at all which was good to see, however the VM is still
> >> > not
> >> > running:
> >> >
> >> > HOST3:
> >> >
> >> > Engine status : {"reason": "failed liveliness
> >> > check",
> >> > "health": "bad", "vm": "up", "detail": "Up"}
> >> >
> >> > ==> /var/log/messages <==
> >> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered
> >> > disabled
> >> > state
> >> > Jan 12 12:42:57 cultivar3 kernel: device vnet0 entered promiscuous
> mode
> >> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered
> >> > blocking
> >> > state
> >> > Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered
> >> > forwarding state
> >> > Jan 12 12:42:57 cultivar3 lldpad: recvfrom(Event interface): No buffer
> >> > space
> >> > available
> >> > Jan 12 12:42:57 cultivar3 systemd-machined: New machine
> >> > qemu-111-Cultivar.
> >> > Jan 12 12:42:57 cultivar3 systemd: Started Virtual Machine
> >> > qemu-111-Cultivar.
> >> > Jan 12 12:42:57 cultivar3 systemd: Starting Virtual Machine
> >> > qemu-111-Cultivar.
> >> > Jan 12 12:42:57 cultivar3 kvm: 3 guests now active
> >> > Jan 12 12:44:38 cultivar3 libvirtd: 2018-01-12 16:44:38.737+0000:
> 1535:
> >> > error : qemuDomainAgentAvailable:6010 : Guest agent is not responding:
> >> > QEMU
> >> > guest agent is not connected
> >> >
> >> > Interestingly though, now I'm seeing this in the logs which may be a
> new
> >> > clue:
> >> >
> >> >
> >> > ==> /var/log/vdsm/vdsm.log <==
> >> > File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line
> >> > 126,
> >> > in findDomain
> >> > return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
> >> > File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line
> >> > 116,
> >> > in findDomainPath
> >> > raise se.StorageDomainDoesNotExist(sdUUID)
> >> > StorageDomainDoesNotExist: Storage domain does not exist:
> >> > (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >> > jsonrpc/4::ERROR::2018-01-12
> >> > 12:40:30,380::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
> >> > getStorageDomainInfo error=Storage domain does not exist:
> >> > (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> >> > periodic/42::ERROR::2018-01-12
> >> > 12:40:35,430::api::196::root::(_getHaInfo)
> >> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
> >> > directory'Is the Hosted Engine setup finished?
> >> > periodic/43::ERROR::2018-01-12
> >> > 12:40:50,473::api::196::root::(_getHaInfo)
> >> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
> >> > directory'Is the Hosted Engine setup finished?
> >> > periodic/40::ERROR::2018-01-12
> >> > 12:41:05,519::api::196::root::(_getHaInfo)
> >> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
> >> > directory'Is the Hosted Engine setup finished?
> >> > periodic/43::ERROR::2018-01-12
> >> > 12:41:20,566::api::196::root::(_getHaInfo)
> >> > failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
> >> > directory'Is the Hosted Engine setup finished?
> >> >
> >> > ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
> >> > File
> >> >
> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/storage_broker.py",
> >> > line 151, in get_raw_stats
> >> > f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
> >> > OSError: [Errno 2] No such file or directory:
> >> >
> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >> > StatusStorageThread::ERROR::2018-01-12
> >> >
> >> > 12:32:06,049::status_broker::92::ovirt_hosted_engine_ha.
> broker.status_broker.StatusBroker.Update::(run)
> >> > Failed to read state.
> >> > Traceback (most recent call last):
> >> > File
> >> >
> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/status_broker.py",
> >> > line 88, in run
> >> > self._storage_broker.get_raw_stats()
> >> > File
> >> >
> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/storage_broker.py",
> >> > line 162, in get_raw_stats
> >> > .format(str(e)))
> >> > RequestError: failed to read metadata: [Errno 2] No such file or
> >> > directory:
> >> >
> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >> >
> >> > On Fri, Jan 12, 2018 at 12:02 PM, Martin Sivak <msivak(a)redhat.com>
> >> > wrote:
> >> >>
> >> >> The lock is the issue.
> >> >>
> >> >> - try running sanlock client status on all hosts
> >> >> - also make sure you do not have some forgotten host still connected
> >> >> to the lockspace, but without ha daemons running (and with the VM)
> >> >>
> >> >> I need to go to our president election now, I might check the email
> >> >> later tonight.
> >> >>
> >> >> Martin
> >> >>
> >> >> On Fri, Jan 12, 2018 at 4:59 PM, Jayme <jaymef(a)gmail.com> wrote:
> >> >> > Here are the newest logs from me trying to start hosted vm:
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > blocking
> >> >> > state
> >> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > disabled
> >> >> > state
> >> >> > Jan 12 11:58:14 cultivar0 kernel: device vnet4 entered promiscuous
> >> >> > mode
> >> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > blocking
> >> >> > state
> >> >> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > forwarding state
> >> >> > Jan 12 11:58:14 cultivar0 lldpad: recvfrom(Event interface): No
> >> >> > buffer
> >> >> > space
> >> >> > available
> >> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772694.8715]
> >> >> > manager: (vnet4): new Tun device
> >> >> > (/org/freedesktop/NetworkManager/Devices/140)
> >> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772694.8795]
> >> >> > device (vnet4): state change: unmanaged -> unavailable (reason
> >> >> > 'connection-assumed') [10 20 41]
> >> >> >
> >> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> > 2018-01-12 15:58:14.879+0000: starting up libvirt version: 3.2.0,
> >> >> > package:
> >> >> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >> >> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
> >> >> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >> >> > cultivar0.grove.silverorange.com
> >> >> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >> >> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >> >> > guest=Cultivar,debug-threads=on -S -object
> >> >> >
> >> >> >
> >> >> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-119-Cultivar/master-key.aes
> >> >> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >> >> > -cpu
> >> >> > Conroe -m 8192 -realtime mlock=off -smp
> >> >> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >> >> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >> >> > 'type=1,manufacturer=oVirt,product=oVirt
> >> >> >
> >> >> >
> >> >> > Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> > -no-user-config -nodefaults -chardev
> >> >> >
> >> >> >
> >> >> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 119-Cultivar/monitor.sock,server,nowait
> >> >> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >> >> > base=2018-01-12T15:58:14,driftfix=slew -global
> >> >> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >> >> > -device
> >> >> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >> >> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >> >> >
> >> >> >
> >> >> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >> >> > -device
> >> >> >
> >> >> >
> >> >> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >> >> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> >> >> > tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >> >> >
> >> >> >
> >> >> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >> >> > -chardev
> >> >> >
> >> >> >
> >> >> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> > -device
> >> >> >
> >> >> >
> >> >> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> > -chardev
> >> >> >
> >> >> >
> >> >> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> > -device
> >> >> >
> >> >> >
> >> >> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> >> >> >
> >> >> >
> >> >> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> > -chardev
> >> >> >
> >> >> >
> >> >> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >> >> > -device
> >> >> >
> >> >> >
> >> >> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> > -chardev pty,id=charconsole0 -device
> >> >> > virtconsole,chardev=charconsole0,id=console0 -spice
> >> >> >
> >> >> >
> >> >> > tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >> >> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >> >> > rng-random,id=objrng0,filename=/dev/urandom -device
> >> >> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg
> >> >> > timestamp=on
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772694.8807]
> >> >> > device (vnet4): state change: unavailable -> disconnected (reason
> >> >> > 'none')
> >> >> > [20 30 0]
> >> >> > Jan 12 11:58:14 cultivar0 systemd-machined: New machine
> >> >> > qemu-119-Cultivar.
> >> >> > Jan 12 11:58:14 cultivar0 systemd: Started Virtual Machine
> >> >> > qemu-119-Cultivar.
> >> >> > Jan 12 11:58:14 cultivar0 systemd: Starting Virtual Machine
> >> >> > qemu-119-Cultivar.
> >> >> >
> >> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> > 2018-01-12T15:58:15.094002Z qemu-kvm: -chardev pty,id=charconsole0:
> >> >> > char
> >> >> > device redirected to /dev/pts/1 (label charconsole0)
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:15 cultivar0 kvm: 5 guests now active
> >> >> >
> >> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> > 2018-01-12 15:58:15.217+0000: shutting down, reason=failed
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:15 cultivar0 libvirtd: 2018-01-12 15:58:15.217+0000:
> >> >> > 1908:
> >> >> > error : virLockManagerSanlockAcquire:1041 : resource busy: Failed
> to
> >> >> > acquire
> >> >> > lock: Lease is held by another host
> >> >> >
> >> >> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> > 2018-01-12T15:58:15.219934Z qemu-kvm: terminating on signal 15 from
> >> >> > pid
> >> >> > 1773
> >> >> > (/usr/sbin/libvirtd)
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > disabled
> >> >> > state
> >> >> > Jan 12 11:58:15 cultivar0 kernel: device vnet4 left promiscuous
> mode
> >> >> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> > disabled
> >> >> > state
> >> >> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772695.2348]
> >> >> > device (vnet4): state change: disconnected -> unmanaged (reason
> >> >> > 'unmanaged')
> >> >> > [30 10 3]
> >> >> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info>
> >> >> > [1515772695.2349]
> >> >> > device (vnet4): released from master device ovirtmgmt
> >> >> > Jan 12 11:58:15 cultivar0 kvm: 4 guests now active
> >> >> > Jan 12 11:58:15 cultivar0 systemd-machined: Machine
> qemu-119-Cultivar
> >> >> > terminated.
> >> >> >
> >> >> > ==> /var/log/vdsm/vdsm.log <==
> >> >> > vm/4013c829::ERROR::2018-01-12
> >> >> > 11:58:15,444::vm::914::virt.vm::(_startUnderlyingVm)
> >> >> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') The vm start process
> >> >> > failed
> >> >> > Traceback (most recent call last):
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 843,
> >> >> > in
> >> >> > _startUnderlyingVm
> >> >> > self._run()
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 2721,
> >> >> > in
> >> >> > _run
> >> >> > dom.createWithFlags(flags)
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/
> libvirtconnection.py",
> >> >> > line
> >> >> > 126, in wrapper
> >> >> > ret = f(*args, **kwargs)
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line
> 512, in
> >> >> > wrapper
> >> >> > return func(inst, *args, **kwargs)
> >> >> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line
> 1069, in
> >> >> > createWithFlags
> >> >> > if ret == -1: raise libvirtError ('virDomainCreateWithFlags()
> >> >> > failed',
> >> >> > dom=self)
> >> >> > libvirtError: resource busy: Failed to acquire lock: Lease is held
> by
> >> >> > another host
> >> >> > jsonrpc/6::ERROR::2018-01-12
> >> >> > 11:58:16,421::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >> >> > Internal server error
> >> >> > Traceback (most recent call last):
> >> >> > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line
> >> >> > 606,
> >> >> > in _handle_request
> >> >> > res = method(**params)
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
> >> >> > 201,
> >> >> > in
> >> >> > _dynamicMethod
> >> >> > result = fn(*methodArgs)
> >> >> > File "<string>", line 2, in getAllVmIoTunePolicies
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> >> >> > 48,
> >> >> > in
> >> >> > method
> >> >> > ret = func(*args, **kwargs)
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354,
> in
> >> >> > getAllVmIoTunePolicies
> >> >> > io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line
> 524,
> >> >> > in
> >> >> > getAllVmIoTunePolicies
> >> >> > 'current_values': v.getIoTune()}
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3481,
> >> >> > in
> >> >> > getIoTune
> >> >> > result = self.getIoTuneResponse()
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3500,
> >> >> > in
> >> >> > getIoTuneResponse
> >> >> > res = self._dom.blockIoTune(
> >> >> > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> >> >> > line
> >> >> > 47,
> >> >> > in __getattr__
> >> >> > % self.vmid)
> >> >> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c' was
> not
> >> >> > defined
> >> >> > yet or was undefined
> >> >> >
> >> >> > ==> /var/log/messages <==
> >> >> > Jan 12 11:58:16 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR
> >> >> > Internal
> >> >> > server error#012Traceback (most recent call last):#012 File
> >> >> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 606,
> >> >> > in
> >> >> > _handle_request#012 res = method(**params)#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201,
> in
> >> >> > _dynamicMethod#012 result = fn(*methodArgs)#012 File
> "<string>",
> >> >> > line 2,
> >> >> > in getAllVmIoTunePolicies#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> >> >> > method#012 ret = func(*args, **kwargs)#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> >> >> > getAllVmIoTunePolicies#012 io_tune_policies_dict =
> >> >> > self._cif.getAllVmIoTunePolicies()#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in
> >> >> > getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012
> >> >> > File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in
> >> >> > getIoTune#012 result = self.getIoTuneResponse()#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in
> >> >> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File
> >> >> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 47,
> >> >> > in
> >> >> > __getattr__#012 % self.vmid)#012NotConnectedError: VM
> >> >> > '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or was
> >> >> > undefined
> >> >> >
> >> >> > On Fri, Jan 12, 2018 at 11:55 AM, Jayme <jaymef(a)gmail.com> wrote:
> >> >> >>
> >> >> >> One other tidbit I noticed is that there seem to be fewer errors if
> >> >> >> I start it in paused mode:
> >> >> >>
> >> >> >> but still shows: Engine status: {"reason": "bad vm status",
> >> >> >> "health": "bad", "vm": "up", "detail": "Paused"}
> >> >> >>
> >> >> >> ==> /var/log/messages <==
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> >> blocking state
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> >> disabled state
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: device vnet4 entered promiscuous
> >> >> >> mode
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> >> blocking state
> >> >> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> >> >> forwarding state
> >> >> >> Jan 12 11:55:05 cultivar0 lldpad: recvfrom(Event interface): No
> >> >> >> buffer
> >> >> >> space available
> >> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> >> >> >> [1515772505.3625]
> >> >> >> manager: (vnet4): new Tun device
> >> >> >> (/org/freedesktop/NetworkManager/Devices/139)
> >> >> >>
> >> >> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >> 2018-01-12 15:55:05.370+0000: starting up libvirt version: 3.2.0,
> >> >> >> package:
> >> >> >> 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >> >> >> 2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
> >> >> >> 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >> >> >> cultivar0.grove.silverorange.com
> >> >> >> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >> >> >> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >> >> >> guest=Cultivar,debug-threads=on -S -object
> >> >> >>
> >> >> >>
> >> >> >> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-118-Cultivar/master-key.aes
> >> >> >> -machine pc-i440fx-rhel7.3.0,accel=kvm,
> usb=off,dump-guest-core=off
> >> >> >> -cpu
> >> >> >> Conroe -m 8192 -realtime mlock=off -smp
> >> >> >> 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >> >> >> 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >> >> >> 'type=1,manufacturer=oVirt,product=oVirt
> >> >> >>
> >> >> >>
> >> >> >> Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >> -no-user-config -nodefaults -chardev
> >> >> >>
> >> >> >>
> >> >> >> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 118-Cultivar/monitor.sock,server,nowait
> >> >> >> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >> >> >> base=2018-01-12T15:55:05,driftfix=slew -global
> >> >> >> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >> >> >> -device
> >> >> >> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >> >> >> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >> >> >>
> >> >> >>
> >> >> >> file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >> >> >> -device
> >> >> >>
> >> >> >>
> >> >> >> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> >> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >> >> >> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> >> >> >> tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >> >> >>
> >> >> >>
> >> >> >> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >> >> >> -chardev
> >> >> >>
> >> >> >>
> >> >> >> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> >> -device
> >> >> >>
> >> >> >>
> >> >> >> virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> >> -chardev
> >> >> >>
> >> >> >>
> >> >> >> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> >> -device
> >> >> >>
> >> >> >>
> >> >> >> virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> >> -chardev spicevmc,id=charchannel2,name=vdagent -device
> >> >> >>
> >> >> >>
> >> >> >> virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> >> -chardev
> >> >> >>
> >> >> >>
> >> >> >> socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >> >> >> -device
> >> >> >>
> >> >> >>
> >> >> >> virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> >> -chardev pty,id=charconsole0 -device
> >> >> >> virtconsole,chardev=charconsole0,id=console0 -spice
> >> >> >>
> >> >> >>
> >> >> >> tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >> >> >> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >> >> >> rng-random,id=objrng0,filename=/dev/urandom -device
> >> >> >> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg
> >> >> >> timestamp=on
> >> >> >>
> >> >> >> ==> /var/log/messages <==
> >> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> >> >> >> [1515772505.3689]
> >> >> >> device (vnet4): state change: unmanaged -> unavailable (reason
> >> >> >> 'connection-assumed') [10 20 41]
> >> >> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> >> >> >> [1515772505.3702]
> >> >> >> device (vnet4): state change: unavailable -> disconnected (reason
> >> >> >> 'none')
> >> >> >> [20 30 0]
> >> >> >> Jan 12 11:55:05 cultivar0 systemd-machined: New machine
> >> >> >> qemu-118-Cultivar.
> >> >> >> Jan 12 11:55:05 cultivar0 systemd: Started Virtual Machine
> >> >> >> qemu-118-Cultivar.
> >> >> >> Jan 12 11:55:05 cultivar0 systemd: Starting Virtual Machine
> >> >> >> qemu-118-Cultivar.
> >> >> >>
> >> >> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >> 2018-01-12T15:55:05.586827Z qemu-kvm: -chardev
> pty,id=charconsole0:
> >> >> >> char
> >> >> >> device redirected to /dev/pts/1 (label charconsole0)
> >> >> >>
> >> >> >> ==> /var/log/messages <==
> >> >> >> Jan 12 11:55:05 cultivar0 kvm: 5 guests now active
> >> >> >>
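When the engine VM comes up but stays in Paused like this, libvirt can usually say why it paused the domain. A read-only diagnostic sketch, assuming virsh read-only access works on the host and using the domain name Cultivar from the logs above:

# list the domain and ask libvirt why it is paused
virsh -r list --all
virsh -r domstate Cultivar --reason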
> >> >> >> On Fri, Jan 12, 2018 at 11:36 AM, Jayme <jaymef(a)gmail.com> wrote:
> >> >> >>>
> >> >> >>> Yeah I am in global maintenance:
> >> >> >>>
> >> >> >>> state=GlobalMaintenance
> >> >> >>>
> >> >> >>> host0: {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> >> >> >>> host2: {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> >> >> >>> host3: {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> >> >> >>>
> >> >> >>> I understand the lock is an issue. I'll try to make sure it is fully
> >> >> >>> stopped on all three before starting, but I don't think that is the
> >> >> >>> issue at hand either. What concerns me most is that it seems to be
> >> >> >>> unable to read the metadata; I think that might be the heart of the
> >> >> >>> problem, but I'm not sure what is causing it.
> >> >> >>>
> >> >> >>> On Fri, Jan 12, 2018 at 11:33 AM, Martin Sivak <
> msivak(a)redhat.com>
> >> >> >>> wrote:
> >> >> >>>>
> >> >> >>>> > On all three hosts I ran hosted-engine --vm-shutdown;
> >> >> >>>> > hosted-engine --vm-poweroff
> >> >> >>>>
> >> >> >>>> Are you in global maintenance? I think you were in one of the
> >> >> >>>> previous emails, but worth checking.
> >> >> >>>>
> >> >> >>>> > I started ovirt-ha-broker with systemctl as root user but it does
> >> >> >>>> > appear to be running under vdsm:
> >> >> >>>>
> >> >> >>>> That is the correct behavior.
> >> >> >>>>
> >> >> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is
> >> >> >>>> > held by another host
> >> >> >>>>
> >> >> >>>> sanlock seems to think the VM runs somewhere, and it is possible
> >> >> >>>> that some other host tried to start the VM as well unless you are
> >> >> >>>> in global maintenance (that is why I asked the first question here).
> >> >> >>>>
> >> >> >>>> Martin
> >> >> >>>>
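Both points are quick to verify from any of the hosts; a minimal sketch, assuming the standard hosted-engine and sanlock command-line tools are installed:

# every host should report GlobalMaintenance before another manual start attempt
hosted-engine --vm-status

# ask sanlock which leases it currently sees as held, and by whom
sanlock client status
sanlock client host_status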
> >> >> >>>> On Fri, Jan 12, 2018 at 4:28 PM, Jayme <jaymef(a)gmail.com>
> wrote:
> >> >> >>>> > Martin,
> >> >> >>>> >
> >> >> >>>> > Thanks so much for keeping with me, this is driving me crazy! I
> >> >> >>>> > really do appreciate it, thanks again.
> >> >> >>>> >
> >> >> >>>> > Let's go through this:
> >> >> >>>> >
> >> >> >>>> > HE VM is down - YES
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > HE agent fails when opening metadata using the symlink - YES
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > the symlink is there and readable by vdsm:kvm - it appears to be:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > lrwxrwxrwx. 1 vdsm kvm 159 Jan 10 21:20 14a20941-1b84-4b82-be8f-ace38d7c037a ->
> >> >> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > And the files in the linked directory exist and have vdsm:kvm
> >> >> >>>> > perms as well:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > # cd /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >> >>>> >
> >> >> >>>> > [root@cultivar0 14a20941-1b84-4b82-be8f-ace38d7c037a]# ls -al
> >> >> >>>> >
> >> >> >>>> > total 2040
> >> >> >>>> >
> >> >> >>>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .
> >> >> >>>> >
> >> >> >>>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..
> >> >> >>>> >
> >> >> >>>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 11:19
> >> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >
> >> >> >>>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
> >> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
> >> >> >>>> >
> >> >> >>>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
> >> >> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > I started ovirt-ha-broker with systemctl as root user but it does
> >> >> >>>> > appear to be running under vdsm:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > vdsm 16928 0.6 0.0 1618244 43328 ? Ssl 10:33
> 0:18
> >> >> >>>> > /usr/bin/python
> >> >> >>>> > /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > Here is something I tried:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > - On all three hosts I ran hosted-engine --vm-shutdown;
> >> >> >>>> >   hosted-engine --vm-poweroff
> >> >> >>>> >
> >> >> >>>> > - On HOST0 (cultivar0) I disconnected and reconnected storage
> >> >> >>>> >   using hosted-engine
> >> >> >>>> >
> >> >> >>>> > - Tried starting up the hosted VM on cultivar0 while tailing the
> >> >> >>>> >   logs:
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > # hosted-engine --vm-start
> >> >> >>>> >
> >> >> >>>> > VM exists and is down, cleaning up and restarting
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >> >> >>>> >
> >> >> >>>> > jsonrpc/2::ERROR::2018-01-12
> >> >> >>>> > 11:27:27,194::vm::1766::virt.vm::(_getRunningVmStats)
> >> >> >>>> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') Error fetching
> vm
> >> >> >>>> > stats
> >> >> >>>> >
> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 1762,
> >> >> >>>> > in
> >> >> >>>> > _getRunningVmStats
> >> >> >>>> >
> >> >> >>>> > vm_sample.interval)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> > line
> >> >> >>>> > 45, in
> >> >> >>>> > produce
> >> >> >>>> >
> >> >> >>>> > networks(vm, stats, first_sample, last_sample, interval)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> > line
> >> >> >>>> > 322, in
> >> >> >>>> > networks
> >> >> >>>> >
> >> >> >>>> > if nic.name.startswith('hostdev'):
> >> >> >>>> >
> >> >> >>>> > AttributeError: name
> >> >> >>>> >
> >> >> >>>> > jsonrpc/3::ERROR::2018-01-12
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > 11:27:27,221::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >> >> >>>> > Internal server error
> >> >> >>>> >
> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.
> py",
> >> >> >>>> > line
> >> >> >>>> > 606,
> >> >> >>>> > in _handle_request
> >> >> >>>> >
> >> >> >>>> > res = method(**params)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py",
> >> >> >>>> > line
> >> >> >>>> > 201, in
> >> >> >>>> > _dynamicMethod
> >> >> >>>> >
> >> >> >>>> > result = fn(*methodArgs)
> >> >> >>>> >
> >> >> >>>> > File "<string>", line 2, in getAllVmIoTunePolicies
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py",
> >> >> >>>> > line
> >> >> >>>> > 48,
> >> >> >>>> > in
> >> >> >>>> > method
> >> >> >>>> >
> >> >> >>>> > ret = func(*args, **kwargs)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line
> >> >> >>>> > 1354,
> >> >> >>>> > in
> >> >> >>>> > getAllVmIoTunePolicies
> >> >> >>>> >
> >> >> >>>> > io_tune_policies_dict = self._cif.
> getAllVmIoTunePolicies()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py",
> line
> >> >> >>>> > 524,
> >> >> >>>> > in
> >> >> >>>> > getAllVmIoTunePolicies
> >> >> >>>> >
> >> >> >>>> > 'current_values': v.getIoTune()}
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 3481,
> >> >> >>>> > in
> >> >> >>>> > getIoTune
> >> >> >>>> >
> >> >> >>>> > result = self.getIoTuneResponse()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 3500,
> >> >> >>>> > in
> >> >> >>>> > getIoTuneResponse
> >> >> >>>> >
> >> >> >>>> > res = self._dom.blockIoTune(
> >> >> >>>> >
> >> >> >>>> > File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> >> >> >>>> > line
> >> >> >>>> > 47,
> >> >> >>>> > in __getattr__
> >> >> >>>> >
> >> >> >>>> > % self.vmid)
> >> >> >>>> >
> >> >> >>>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c'
> was
> >> >> >>>> > not
> >> >> >>>> > defined
> >> >> >>>> > yet or was undefined
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 journal: vdsm jsonrpc.JsonRpcServer
> >> >> >>>> > ERROR
> >> >> >>>> > Internal
> >> >> >>>> > server error#012Traceback (most recent call last):#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line
> >> >> >>>> > 606,
> >> >> >>>> > in
> >> >> >>>> > _handle_request#012 res = method(**params)#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
> 201,
> >> >> >>>> > in
> >> >> >>>> > _dynamicMethod#012 result = fn(*methodArgs)#012 File
> >> >> >>>> > "<string>",
> >> >> >>>> > line 2,
> >> >> >>>> > in getAllVmIoTunePolicies#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> 48,
> >> >> >>>> > in
> >> >> >>>> > method#012 ret = func(*args, **kwargs)#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> >> >> >>>> > getAllVmIoTunePolicies#012 io_tune_policies_dict =
> >> >> >>>> > self._cif.getAllVmIoTunePolicies()#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line
> 524,
> >> >> >>>> > in
> >> >> >>>> > getAllVmIoTunePolicies#012 'current_values':
> >> >> >>>> > v.getIoTune()}#012
> >> >> >>>> > File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3481,
> >> >> >>>> > in
> >> >> >>>> > getIoTune#012 result = self.getIoTuneResponse()#012 File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3500,
> >> >> >>>> > in
> >> >> >>>> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012
> File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> line
> >> >> >>>> > 47,
> >> >> >>>> > in
> >> >> >>>> > __getattr__#012 % self.vmid)#012NotConnectedError: VM
> >> >> >>>> > '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or
> >> >> >>>> > was
> >> >> >>>> > undefined
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > blocking
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > disabled
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 entered
> >> >> >>>> > promiscuous
> >> >> >>>> > mode
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > blocking
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > forwarding state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 lldpad: recvfrom(Event interface):
> No
> >> >> >>>> > buffer
> >> >> >>>> > space
> >> >> >>>> > available
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.4264]
> >> >> >>>> > manager: (vnet4): new Tun device
> >> >> >>>> > (/org/freedesktop/NetworkManager/Devices/135)
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.4342]
> >> >> >>>> > device (vnet4): state change: unmanaged -> unavailable (reason
> >> >> >>>> > 'connection-assumed') [10 20 41]
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.4353]
> >> >> >>>> > device (vnet4): state change: unavailable -> disconnected
> >> >> >>>> > (reason
> >> >> >>>> > 'none')
> >> >> >>>> > [20 30 0]
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >
> >> >> >>>> > 2018-01-12 15:27:27.435+0000: starting up libvirt version:
> >> >> >>>> > 3.2.0,
> >> >> >>>> > package:
> >> >> >>>> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >> >> >>>> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
> >> >> >>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >> >> >>>> > cultivar0.grove.silverorange.com
> >> >> >>>> >
> >> >> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/
> local/bin:/usr/sbin:/usr/bin
> >> >> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >> >> >>>> > guest=Cultivar,debug-threads=on -S -object
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-114-Cultivar/master-key.aes
> >> >> >>>> > -machine
> >> >> >>>> > pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >> >> >>>> > -cpu
> >> >> >>>> > Conroe -m 8192 -realtime mlock=off -smp
> >> >> >>>> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >> >> >>>> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >> >> >>>> > 'type=1,manufacturer=oVirt,product=oVirt
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >>>> > -no-user-config -nodefaults -chardev
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 114-Cultivar/monitor.sock,server,nowait
> >> >> >>>> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >> >> >>>> > base=2018-01-12T15:27:27,driftfix=slew -global
> >> >> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot
> >> >> >>>> > strict=on
> >> >> >>>> > -device
> >> >> >>>> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >> >> >>>> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >> >> >>>> > -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >> >> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >> >> >>>> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -netdev
> >> >> >>>> > tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >> >> >>>> > -chardev
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> >> >>>> > -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> >> >>>> > -chardev
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> >> >>>> > -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> >> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >> >> >>>> > -chardev
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >> >> >>>> > -device
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> >> >>>> > -chardev pty,id=charconsole0 -device
> >> >> >>>> > virtconsole,chardev=charconsole0,id=console0 -spice
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >> >> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >> >> >>>> > rng-random,id=objrng0,filename=/dev/urandom -device
> >> >> >>>> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg
> >> >> >>>> > timestamp=on
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: New machine
> >> >> >>>> > qemu-114-Cultivar.
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 systemd: Started Virtual Machine
> >> >> >>>> > qemu-114-Cultivar.
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 systemd: Starting Virtual Machine
> >> >> >>>> > qemu-114-Cultivar.
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >
> >> >> >>>> > 2018-01-12T15:27:27.651669Z qemu-kvm: -chardev
> >> >> >>>> > pty,id=charconsole0:
> >> >> >>>> > char
> >> >> >>>> > device redirected to /dev/pts/2 (label charconsole0)
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kvm: 5 guests now active
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >
> >> >> >>>> > 2018-01-12 15:27:27.773+0000: shutting down, reason=failed
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 libvirtd: 2018-01-12
> >> >> >>>> > 15:27:27.773+0000:
> >> >> >>>> > 1910:
> >> >> >>>> > error : virLockManagerSanlockAcquire:1041 : resource busy:
> >> >> >>>> > Failed
> >> >> >>>> > to
> >> >> >>>> > acquire
> >> >> >>>> > lock: Lease is held by another host
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> >> >>>> >
> >> >> >>>> > 2018-01-12T15:27:27.776135Z qemu-kvm: terminating on signal 15
> >> >> >>>> > from
> >> >> >>>> > pid 1773
> >> >> >>>> > (/usr/sbin/libvirtd)
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/messages <==
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > disabled
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 left
> promiscuous
> >> >> >>>> > mode
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4)
> >> >> >>>> > entered
> >> >> >>>> > disabled
> >> >> >>>> > state
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.7989]
> >> >> >>>> > device (vnet4): state change: disconnected -> unmanaged
> (reason
> >> >> >>>> > 'unmanaged')
> >> >> >>>> > [30 10 3]
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >> >> >>>> > [1515770847.7989]
> >> >> >>>> > device (vnet4): released from master device ovirtmgmt
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 kvm: 4 guests now active
> >> >> >>>> >
> >> >> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: Machine
> >> >> >>>> > qemu-114-Cultivar
> >> >> >>>> > terminated.
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >> >> >>>> >
> >> >> >>>> > vm/4013c829::ERROR::2018-01-12
> >> >> >>>> > 11:27:28,001::vm::914::virt.vm::(_startUnderlyingVm)
> >> >> >>>> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') The vm start
> >> >> >>>> > process
> >> >> >>>> > failed
> >> >> >>>> >
> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 843,
> >> >> >>>> > in
> >> >> >>>> > _startUnderlyingVm
> >> >> >>>> >
> >> >> >>>> > self._run()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >> >> >>>> > 2721,
> >> >> >>>> > in
> >> >> >>>> > _run
> >> >> >>>> >
> >> >> >>>> > dom.createWithFlags(flags)
> >> >> >>>> >
> >> >> >>>> > File
> >> >> >>>> > "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> >> >> >>>> > line
> >> >> >>>> > 126, in wrapper
> >> >> >>>> >
> >> >> >>>> > ret = f(*args, **kwargs)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line
> >> >> >>>> > 512,
> >> >> >>>> > in
> >> >> >>>> > wrapper
> >> >> >>>> >
> >> >> >>>> > return func(inst, *args, **kwargs)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line
> >> >> >>>> > 1069,
> >> >> >>>> > in
> >> >> >>>> > createWithFlags
> >> >> >>>> >
> >> >> >>>> > if ret == -1: raise libvirtError
> >> >> >>>> > ('virDomainCreateWithFlags()
> >> >> >>>> > failed',
> >> >> >>>> > dom=self)
> >> >> >>>> >
> >> >> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is
> >> >> >>>> > held
> >> >> >>>> > by
> >> >> >>>> > another host
> >> >> >>>> >
> >> >> >>>> > periodic/47::ERROR::2018-01-12
> >> >> >>>> > 11:27:32,858::periodic::215::virt.periodic.Operation::(__
> call__)
> >> >> >>>> > <vdsm.virt.sampling.VMBulkstatsMonitor object at 0x3692590>
> >> >> >>>> > operation
> >> >> >>>> > failed
> >> >> >>>> >
> >> >> >>>> > Traceback (most recent call last):
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.
> py",
> >> >> >>>> > line
> >> >> >>>> > 213,
> >> >> >>>> > in __call__
> >> >> >>>> >
> >> >> >>>> > self._func()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.
> py",
> >> >> >>>> > line
> >> >> >>>> > 522,
> >> >> >>>> > in __call__
> >> >> >>>> >
> >> >> >>>> > self._send_metrics()
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.
> py",
> >> >> >>>> > line
> >> >> >>>> > 538,
> >> >> >>>> > in _send_metrics
> >> >> >>>> >
> >> >> >>>> > vm_sample.interval)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> > line
> >> >> >>>> > 45, in
> >> >> >>>> > produce
> >> >> >>>> >
> >> >> >>>> > networks(vm, stats, first_sample, last_sample, interval)
> >> >> >>>> >
> >> >> >>>> > File "/usr/lib/python2.7/site-
> packages/vdsm/virt/vmstats.py",
> >> >> >>>> > line
> >> >> >>>> > 322, in
> >> >> >>>> > networks
> >> >> >>>> >
> >> >> >>>> > if nic.name.startswith('hostdev'):
> >> >> >>>> >
> >> >> >>>> > AttributeError: name
> >> >> >>>> >
> >> >> >>>> >
> >> >> >>>> > On Fri, Jan 12, 2018 at 11:14 AM, Martin Sivak
> >> >> >>>> > <msivak(a)redhat.com>
> >> >> >>>> > wrote:
> >> >> >>>> >>
> >> >>>> >> Hmm, that rules out most of the NFS-related permission issues.
> >> >>>> >>
> >> >>>> >> So the current status is (I need to sum it up to get the full
> >> >>>> >> picture):
> >> >>>> >>
> >> >>>> >> - HE VM is down
> >> >>>> >> - HE agent fails when opening metadata using the symlink
> >> >>>> >> - the symlink is there
> >> >>>> >> - the symlink is readable by vdsm:kvm
> >> >>>> >>
> >> >>>> >> Hmm, can you check which user ovirt-ha-broker is started under?
> >> >> >>>> >>
> >> >> >>>> >> Martin
> >> >> >>>> >>
> >> >> >>>> >>
> >> >> >>>> >> On Fri, Jan 12, 2018 at 4:10 PM, Jayme <jaymef(a)gmail.com>
> >> >> >>>> >> wrote:
> >> >>>> >> > Same thing happens with data images of other VMs as well though,
> >> >>>> >> > and those seem to be running ok so I'm not sure if it's the problem.
> >> >> >>>> >> >
> >> >> >>>> >> > On Fri, Jan 12, 2018 at 11:08 AM, Jayme <jaymef(a)gmail.com>
> >> >> >>>> >> > wrote:
> >> >> >>>> >> >>
> >> >> >>>> >> >> Martin,
> >> >> >>>> >> >>
> >> >>>> >> >> I can as the vdsm user but not as root. I get permission denied
> >> >>>> >> >> trying to touch one of the files as root; is that normal?
> >> >> >>>> >> >>
> >> >> >>>> >> >> On Fri, Jan 12, 2018 at 11:03 AM, Martin Sivak
> >> >> >>>> >> >> <msivak(a)redhat.com>
> >> >> >>>> >> >> wrote:
> >> >> >>>> >> >>>
> >> >>>> >> >>> Hmm, then it might be a permission issue indeed. Can you touch
> >> >>>> >> >>> the file? Open it? (try hexdump) Just to make sure NFS does not
> >> >>>> >> >>> prevent you from doing that.
> >> >> >>>> >> >>>
> >> >> >>>> >> >>> Martin
> >> >> >>>> >> >>>
> >> >> >>>> >> >>> On Fri, Jan 12, 2018 at 3:57 PM, Jayme <jaymef(a)gmail.com
> >
> >> >> >>>> >> >>> wrote:
> >> >>>> >> >>> > Sorry, I think we got confused about the symlink; there are
> >> >>>> >> >>> > symlinks in /var/run that point to /rhev. When I was doing an
> >> >>>> >> >>> > ls it was listing the files in /rhev:
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >>>> >> >>> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286
> >> >>>> >> >>> >
> >> >>>> >> >>> > 14a20941-1b84-4b82-be8f-ace38d7c037a ->
> >> >>>> >> >>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >>>> >> >>> >
> >> >>>> >> >>> > ls -al
> >> >>>> >> >>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >> >> >>>> >> >>> > total 2040
> >> >> >>>> >> >>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .
> >> >> >>>> >> >>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..
> >> >> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 10:56
> >> >> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
> >> >> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
> >> >> >>>> >> >>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
> >> >> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
> >> >> >>>> >> >>> >
> >> >>>> >> >>> > Is it possible that this is the wrong image for hosted engine?
> >> >>>> >> >>> >
> >> >>>> >> >>> > This is all I get in the vdsm log when running hosted-engine
> >> >>>> >> >>> > --connect-storage:
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > jsonrpc/4::ERROR::2018-01-12
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > 10:52:53,019::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >> >> >>>> >> >>> > Internal server error
> >> >> >>>> >> >>> > Traceback (most recent call last):
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.
> py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 606,
> >> >> >>>> >> >>> > in _handle_request
> >> >> >>>> >> >>> > res = method(**params)
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 201,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > _dynamicMethod
> >> >> >>>> >> >>> > result = fn(*methodArgs)
> >> >> >>>> >> >>> > File "<string>", line 2, in getAllVmIoTunePolicies
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 48,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > method
> >> >> >>>> >> >>> > ret = func(*args, **kwargs)
> >> >> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/API.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 1354, in
> >> >> >>>> >> >>> > getAllVmIoTunePolicies
> >> >> >>>> >> >>> > io_tune_policies_dict =
> >> >> >>>> >> >>> > self._cif.getAllVmIoTunePolicies()
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 524,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > getAllVmIoTunePolicies
> >> >> >>>> >> >>> > 'current_values': v.getIoTune()}
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 3481,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > getIoTune
> >> >> >>>> >> >>> > result = self.getIoTuneResponse()
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 3500,
> >> >> >>>> >> >>> > in
> >> >> >>>> >> >>> > getIoTuneResponse
> >> >> >>>> >> >>> > res = self._dom.blockIoTune(
> >> >> >>>> >> >>> > File
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.
> py",
> >> >> >>>> >> >>> > line
> >> >> >>>> >> >>> > 47,
> >> >> >>>> >> >>> > in __getattr__
> >> >> >>>> >> >>> > % self.vmid)
> >> >> >>>> >> >>> > NotConnectedError: VM
> >> >> >>>> >> >>> > '4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> >> >>>> >> >>> > was not
> >> >> >>>> >> >>> > defined
> >> >> >>>> >> >>> > yet or was undefined
> >> >> >>>> >> >>> >
> >> >> >>>> >> >>> > On Fri, Jan 12, 2018 at 10:48 AM, Martin Sivak
> >> >> >>>> >> >>> > <msivak(a)redhat.com>
> >> >> >>>> >> >>> > wrote:
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> Hi,
> >> >> >>>> >> >>> >>
> >> >>>> >> >>> >> what happens when you try hosted-engine --connect-storage?
> >> >>>> >> >>> >> Do you see any errors in the vdsm log?
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> Best regards
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> Martin Sivak
> >> >> >>>> >> >>> >>
> >> >> >>>> >> >>> >> On Fri, Jan 12, 2018 at 3:41 PM, Jayme
> >> >> >>>> >> >>> >> <jaymef(a)gmail.com>
> >> >> >>>> >> >>> >> wrote:
> >> >> >>>> >> >>> >> > Ok this is what I've done:
> >> >> >>>> >> >>> >> >
> >> >>>> >> >>> >> > - All three hosts in global maintenance mode
> >> >>>> >> >>> >> > - Ran: systemctl stop ovirt-ha-broker; systemctl stop
> >> >>>> >> >>> >> >   ovirt-ha-broker -- on all three hosts
> >> >>>> >> >>> >> > - Moved ALL files in
> >> >>>> >> >>> >> >   /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> >> >>>> >> >>> >> >   to
> >> >>>> >> >>> >> >   /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/backup
> >> >>>> >> >>> >> > - Ran: systemctl start ovirt-ha-broker; systemctl start
> >> >>>> >> >>> >> >   ovirt-ha-broker -- on all three hosts
> >> >>>> >> >>> >> >
> >> >>>> >> >>> >> > - attempt start of engine vm from HOST0 (cultivar0):
> >> >>>> >> >>> >> >   hosted-engine --vm-start
> >> >>>> >> >>> >> >
> >> >>>> >> >>> >> > Lots of errors in the logs still, it appears to be having
> >> >>>> >> >>> >> > problems with that directory still:
> >> >> >>>> >> >>> >> >
> >> >>>> >> >>> >> > Jan 12 10:40:13 cultivar0 journal: ovirt-ha-broker
> >> >>>> >> >>> >> > ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR
> >> >>>> >> >>> >> > Failed to write metadata for host 1 to
> >> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8#012Traceback
> >> >>>> >> >>> >> > (most recent call last):#012  File
> >> >>>> >> >>> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> >> >>>> >> >>> >> > line 202, in put_stats#012    f = os.open(path, direct_flag |
> >> >>>> >> >>> >> > os.O_WRONLY | os.O_SYNC)#012OSError: [Errno 2] No such file or
> >> >>>> >> >>> >> > directory:
> >> >>>> >> >>> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >> >> >>>> >> >>> >> >
> >> >>>> >> >>> >> > There are no new files or symlinks in
> >> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > - Jayme
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> >
> >> >> >>>> >> >>> >> > On Fri, Jan 12, 2018 at 10:23 AM, Martin Sivak
> >> >> >>>> >> >>> >> > <msivak(a)redhat.com>
> >> >> >>>> >> >>> >> > wrote:
> >> >> >>>> >> >>> >> >>
> >> >> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling (
> >> >> >>>> >> >>> >> >>
> >> >> >>>> >> >>> >> >> On all hosts I should have added.
> >> >> >>>> >> >>> >> >>
> >> >> >>>> >> >>> >> >> Martin
> >> >> >>>> >> >>> >> >>
> >> >> >>>> >> >>> >> >> On Fri, Jan 12, 2018 at 3:22 PM, Martin Sivak
> >> >> >>>> >> >>> >> >> <msivak(a)redhat.com>
> >> >> >>>> >> >>> >> >> wrote:
> >> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No such file or directory:
> >> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >> >> >>>> >> >>> >> >> >>
> >> >>>> >> >>> >> >> >> ls -al /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59 /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >> >>> >> >> >>
> >> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are referring
> >> >>>> >> >>> >> >> >> to that was addressed in RC1, or something else?
> >> >> >>>> >> >>> >> >> >
> >> >>>> >> >>> >> >> > No, this file is the symlink. It should point to somewhere
> >> >>>> >> >>> >> >> > inside /rhev/. I see it is a 1G file in your case. That is
> >> >>>> >> >>> >> >> > really interesting.
> >> >>>> >> >>> >> >> >
> >> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling (ovirt-ha-agent,
> >> >>>> >> >>> >> >> > ovirt-ha-broker), move the file (metadata file is not
> >> >>>> >> >>> >> >> > important when services are stopped, but better safe than
> >> >>>> >> >>> >> >> > sorry) and restart all services again?
> >> >>>> >> >>> >> >> >
> >> >>>> >> >>> >> >> >> Could there possibly be a permissions
> >> >>>> >> >>> >> >> >> problem somewhere?
> >> >>>> >> >>> >> >> >
> >> >>>> >> >>> >> >> > Maybe, but the file itself looks out of the ordinary. I wonder
> >> >>>> >> >>> >> >> > how it got there.
> >> >> >>>> >> >>> >> >> >
> >> >> >>>> >> >>> >> >> > Best regards
> >> >> >>>> >> >>> >> >> >
> >> >> >>>> >> >>> >> >> > Martin Sivak
> >> >> >>>> >> >>> >> >> >
> >> >> >>>> >> >>> >> >> > On Fri, Jan 12, 2018 at 3:09 PM, Jayme
> >> >> >>>> >> >>> >> >> > <jaymef(a)gmail.com>
> >> >> >>>> >> >>> >> >> > wrote:
> >> >>>> >> >>> >> >> >> Thanks for the help thus far. Storage could be related,
> >> >>>> >> >>> >> >> >> but all other VMs on the same storage are running ok. The
> >> >>>> >> >>> >> >> >> storage is mounted via NFS from within one of the three
> >> >>>> >> >>> >> >> >> hosts; I realize this is not ideal. This was set up by a
> >> >>>> >> >>> >> >> >> previous admin more as a proof of concept, and VMs were put
> >> >>>> >> >>> >> >> >> on there that should not have been placed in a proof of
> >> >>>> >> >>> >> >> >> concept environment; it was intended to be rebuilt with
> >> >>>> >> >>> >> >> >> proper storage down the road.
> >> >>>> >> >>> >> >> >>
> >> >>>> >> >>> >> >> >> So the storage is on HOST0 and the other hosts mount NFS:
> >> >> >>>> >> >>> >> >> >>
> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/data           4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_data
> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/iso            4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_iso
> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/import_export  4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_import__export
> >> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/hosted_engine  4861742080 1039352832 3822389248  22% /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine
> >> >> >>>> >> >>> >> >> >>
> >> >>>> >> >>> >> >> >> Like I said, the VM data storage itself seems to be working
> >> >>>> >> >>> >> >> >> ok, as all other VMs appear to be running.
> >> >>>> >> >>> >> >> >>
> >> >>>> >> >>> >> >> >> I'm curious why the broker log says this file is not found
> >> >>>> >> >>> >> >> >> when it is correct and I can see the file at that path:
> >> >> >>>> >> >>> >> >> >>
> >> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No such file or directory:
> >> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> >> >> >>>> >> >>> >> >> >>
> >> >>>> >> >>> >> >> >> ls -al /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59 /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >> >> >>>> >> >>> >> >> >>
> Is this due to the symlink problem you guys are referring to that was
> addressed in RC1, or something else? Could there possibly be a permissions
> problem somewhere?
>
> Assuming that all three hosts have 4.2 rpms installed and the hosted
> engine will not start, is it safe for me to update the hosts to 4.2 RC1
> rpms? Or perhaps install that repo and *only* update the ovirt HA
> packages? Assuming that I cannot yet apply the same updates to the
> inaccessible hosted engine VM.
>
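For reference, one quick way to rule out a permissions problem on that metadata
file is to try reading it as the vdsm user; the commands below are only
illustrative, and the iflag=direct read roughly mimics the broker's O_DIRECT
open:

sudo -u vdsm dd if=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8 of=/dev/null bs=4k count=1 iflag=direct
ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/   # is the image directory a symlink, and where does it point?
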
> I should also mention one more thing. I originally upgraded the engine VM
> first, using the new RPMS and then engine-setup. It failed due to not being
> in global maintenance, so I set global maintenance and ran it again, which
> appeared to complete as intended, but the VM never came back up afterwards.
> I mention it just in case it has anything to do with what happened.
>
> Thanks very much again, I very much appreciate the help!
>
> - Jayme
>
> On Fri, Jan 12, 2018 at 8:44 AM, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>
> > On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak <msivak(a)redhat.com> wrote:
> >
> > > Hi,
> > >
> > > the hosted engine agent issue might be fixed by restarting
> > > ovirt-ha-broker or updating to newest ovirt-hosted-engine-ha and
> > > -setup. We improved handling of the missing symlink.
> >
> > Available just in oVirt 4.2.1 RC1
> >
> > > All the other issues seem to point to some storage problem I am afraid.
> > >
> > > You said you started the VM, do you see it in virsh -r list?
> > >
> > > Best regards
> > >
> > > Martin Sivak
>
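For reference, the suggested restart and the read-only libvirt check boil down
to something like the following; service names are as shipped by
ovirt-hosted-engine-ha and the commands are a sketch, not a verified transcript:

systemctl restart ovirt-ha-broker ovirt-ha-agent
virsh -r list --all      # -r opens a read-only connection, so no SASL credentials are needed
hosted-engine --vm-status
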
> > > On Thu, Jan 11, 2018 at 10:00 PM, Jayme <jaymef(a)gmail.com> wrote:
> > >
> > > > Please help, I'm really not sure what else to try at this point.
> > > > Thank you for reading!
> > > >
> > > > I'm still working on trying to get my hosted engine running after a
> > > > botched upgrade to 4.2. Storage is NFS, mounted from within one of
> > > > the hosts. Right now I have 3 CentOS 7 hosts that are fully updated
> > > > with yum packages from oVirt 4.2; the engine was fully updated with
> > > > yum packages and failed to come up after reboot. As of right now,
> > > > everything should have full yum updates and all hosts have 4.2 rpms.
> > > > I have global maintenance mode on right now and started hosted-engine
> > > > on one of the three hosts, and the status is currently:
> > > >
> > > > Engine status : {"reason": "failed liveliness check", "health": "bad",
> > > > "vm": "up", "detail": "Up"}
> > > >
> > > > This is what I get when trying to enter hosted-vm --console:
> > > >
> > > > The engine VM is running on this host
> > > > error: failed to get domain 'HostedEngine'
> > > > error: Domain not found: no domain with matching name 'HostedEngine'
> > > >
> > > > Here are logs from various sources when I start the VM on HOST3:
> > > >
> > > > hosted-engine --vm-start
> > > > Command VM.getStats with args {'vmID': '4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
> > > > (code=1, message=Virtual machine does not exist: {'vmId': u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
> > > >
> > > > Jan 11 16:55:57 cultivar3 systemd-machined: New machine qemu-110-Cultivar.
> > > > Jan 11 16:55:57 cultivar3 systemd: Started Virtual Machine qemu-110-Cultivar.
> > > > Jan 11 16:55:57 cultivar3 systemd: Starting Virtual Machine qemu-110-Cultivar.
> > > > Jan 11 16:55:57 cultivar3 kvm: 3 guests now active
> > > >
> > > > ==> /var/log/vdsm/vdsm.log <==
> > > >
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
> > > >     ret = func(*args, **kwargs)
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2718, in getStorageDomainInfo
> > > >     dom = self.validateSdUUID(sdUUID)
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 304, in validateSdUUID
> > > >     sdDom.validate()
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 515, in validate
> > > >     raise se.StorageDomainAccessError(self.sdUUID)
> > > > StorageDomainAccessError: Domain is either partially accessible or
> > > > entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> > > >
> > > > jsonrpc/2::ERROR::2018-01-11 16:55:16,144::dispatcher::82::storage.Dispatcher::(wrapper)
> > > > FINISH getStorageDomainInfo error=Domain is either partially
> > > > accessible or entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
> > > >
> > > > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > > >
> > > > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> > > > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=Cultivar,debug-threads=on -S
> > > > -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-108-Cultivar/master-key.aes
> > > > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Conroe
> > > > -m 8192 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1
> > > > -uuid 4013c829-c9d7-4b72-90d5-6fe58137504c
> > > > -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> > > > -no-user-config -nodefaults
> > > > -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-108-Cultivar/monitor.sock,server,nowait
> > > > -mon chardev=charmonitor,id=monitor,mode=control
> > > > -rtc base=2018-01-11T20:33:19,driftfix=slew -global kvm-pit.lost_tick_policy=delay
> > > > -no-hpet -no-reboot -boot strict=on
> > > > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> > > > -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4
> > > > -drive file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
> > > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> > > > -drive if=none,id=drive-ide0-1-0,readonly=on
> > > > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> > > > -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=32
> > > > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
> > > > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> > > > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> > > > -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> > > > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> > > > -chardev spicevmc,id=charchannel2,name=vdagent
> > > > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> > > > -chardev socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
> > > > -device virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> > > > -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0
> > > > -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
> > > > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> > > > -object rng-random,id=objrng0,filename=/dev/urandom
> > > > -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5
> > > > -msg timestamp=on
> > > >
> > > > 2018-01-11T20:33:19.699999Z qemu-kvm: -chardev pty,id=charconsole0:
> > > > char device redirected to /dev/pts/2 (label charconsole0)
> > > >
> > > > 2018-01-11 20:38:11.640+0000: shutting down, reason=shutdown
> > > >
> > > > 2018-01-11 20:39:02.122+0000: starting up libvirt version: 3.2.0,
> > > > package: 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> > > > 2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
> > > > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: cultivar3
> > > >
> > > > [second start: same LC_ALL/PATH/QEMU_AUDIO_DRV environment and the
> > > > same qemu-kvm command line as above, except using domain-109-Cultivar
> > > > paths and -rtc base=2018-01-11T20:39:02]
> > > >
> > > > 2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0:
> > > > char device redirected to /dev/pts/2 (label charconsole0)
> > > >
> > > > 2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown
> > > >
> > > > 2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0,
> > > > package: 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> > > > 2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
> > > > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> > > > cultivar3.grove.silverorange.com
> > > >
> > > > [third start: same qemu-kvm command line again, except using
> > > > domain-110-Cultivar paths and -rtc base=2018-01-11T20:55:57]
> > > >
> > > > 2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0:
> > > > char device redirected to /dev/pts/2 (label charconsole0)
> > > >
> > > > ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
> > > >
> > > >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > > >     line 151, in get_raw_stats
> > > >     f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
> > > > OSError: [Errno 2] No such file or directory:
> > > > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> > > >
> > > > StatusStorageThread::ERROR::2018-01-11
> > > > 16:55:15,761::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
> > > > Failed to read state.
> > > > Traceback (most recent call last):
> > > >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py",
> > > >     line 88, in run
> > > >     self._storage_broker.get_raw_stats()
> > > >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> > > >     line 162, in get_raw_stats
> > > >     .format(str(e)))
> > > > RequestError: failed to read metadata: [Errno 2] No such file or directory:
> > > > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
> > > >
> > > > ==> /var/log/ovirt-hosted-engine-ha/agent.log <==
> > > >
> > > >     result = refresh_method()
> > > >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > > >     line 519, in refresh_vm_conf
> > > >     content = self._get_file_content_from_shared_storage(VM)
> > > >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > > >     line 484, in _get_file_content_from_shared_storage
> > > >     config_volume_path = self._get_config_volume_path()
> > > >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > > >     line 188, in _get_config_volume_path
> > > >     conf_vol_uuid
> > > >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/heconflib.py",
> > > >     line 358, in get_volume_path
> > > >     root=envconst.SD_RUN_DIR,
> > > > RuntimeError: Path to volume 4838749f-216d-406b-b245-98d0343fcf7f not
> > > > found in /run/vdsm/storag
> > > >
> > > > ==> /var/log/vdsm/vdsm.log <==
> > > >
> > > > periodic/42::ERROR::2018-01-11
> > > > 16:56:11,446::vmstats::260::virt.vmstats::(send_metrics) VM metrics
> > > > collection failed
> > > > Traceback (most recent call last):
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 197, in send_metrics
> > > >     data[prefix + '.cpu.usage'] = stat['cpuUsage']
> > > > KeyError: 'cpuUsage'
1
2

12 Jan '18
The lock space issue was something I needed to clear, but I don't believe it
has resolved the problem. I shut down the agent and broker on all hosts and
disconnected hosted-storage, then enabled the broker/agent on just one host and
connected storage. I started the VM and saw barely any errors in the logs at
all, which was good to see; however, the VM is still not running:
HOST3:
Engine status : {"reason": "failed liveliness check",
"health": "bad", "vm": "up", "detail": "Up"}
==> /var/log/messages <==
Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered disabled
state
Jan 12 12:42:57 cultivar3 kernel: device vnet0 entered promiscuous mode
Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered blocking
state
Jan 12 12:42:57 cultivar3 kernel: ovirtmgmt: port 2(vnet0) entered
forwarding state
Jan 12 12:42:57 cultivar3 lldpad: recvfrom(Event interface): No buffer
space available
Jan 12 12:42:57 cultivar3 systemd-machined: New machine qemu-111-Cultivar.
Jan 12 12:42:57 cultivar3 systemd: Started Virtual Machine
qemu-111-Cultivar.
Jan 12 12:42:57 cultivar3 systemd: Starting Virtual Machine
qemu-111-Cultivar.
Jan 12 12:42:57 cultivar3 kvm: 3 guests now active
Jan 12 12:44:38 cultivar3 libvirtd: 2018-01-12 16:44:38.737+0000: 1535:
error : qemuDomainAgentAvailable:6010 : Guest agent is not responding: QEMU
guest agent is not connected
Interestingly though, I'm now seeing this in the logs, which may be a new
clue:
==> /var/log/vdsm/vdsm.log <==
File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 126,
in findDomain
return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 116,
in findDomainPath
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'248f46f0-d793-4581-9810-c9d965e2f286',)
jsonrpc/4::ERROR::2018-01-12
12:40:30,380::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
getStorageDomainInfo error=Storage domain does not exist:
(u'248f46f0-d793-4581-9810-c9d965e2f286',)
periodic/42::ERROR::2018-01-12 12:40:35,430::api::196::root::(_getHaInfo)
failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
directory'Is the Hosted Engine setup finished?
periodic/43::ERROR::2018-01-12 12:40:50,473::api::196::root::(_getHaInfo)
failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
directory'Is the Hosted Engine setup finished?
periodic/40::ERROR::2018-01-12 12:41:05,519::api::196::root::(_getHaInfo)
failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
directory'Is the Hosted Engine setup finished?
periodic/43::ERROR::2018-01-12 12:41:20,566::api::196::root::(_getHaInfo)
failed to retrieve Hosted Engine HA score '[Errno 2] No such file or
directory'Is the Hosted Engine setup finished?
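(The StorageDomainDoesNotExist error above suggests vdsm on this host does not see the hosted_engine domain at that moment. A quick sanity check, only a sketch using the NFS export path that appears later in this thread, would be:

  # is the hosted-engine export mounted on this host at all?
  mount | grep hosted__engine
  # does the domain directory exist under the mount?
  ls /rhev/data-center/mnt/cultivar0.grove.silverorange.com:_exports_hosted__engine/248f46f0-d793-4581-9810-c9d965e2f286/

If the mount is missing, hosted-engine --connect-storage should bring it back.)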
==> /var/log/ovirt-hosted-engine-ha/broker.log <==
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 151, in get_raw_stats
    f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
StatusStorageThread::ERROR::2018-01-12 12:32:06,049::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to read state.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run
    self._storage_broker.get_raw_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats
    .format(str(e)))
RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
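(Before the next start attempt it might be worth ruling out both a stale sanlock lease and a missing run-dir link in one pass. A minimal sketch, reusing the UUIDs from the broker error above and assuming the hosted-engine CLI still offers --disconnect-storage / --connect-storage, which is what I used earlier:

  # on every host: is anything still holding the hosted-engine lockspace or the VM lease?
  sanlock client status
  # on the host that should run the VM: rebuild the /var/run/vdsm/storage links
  hosted-engine --disconnect-storage
  hosted-engine --connect-storage
  # the metadata file from the broker error should now resolve through the symlink
  ls -lL /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/

If sanlock still shows the lease held on a host where nothing should be running, that host is the one to clean up first.)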
On Fri, Jan 12, 2018 at 12:02 PM, Martin Sivak <msivak(a)redhat.com> wrote:
> The lock is the issue.
>
> - try running sanlock client status on all hosts
> - also make sure you do not have some forgotten host still connected
> to the lockspace, but without ha daemons running (and with the VM)
>
> I need to go to our president election now, I might check the email
> later tonight.
>
> Martin
>
> On Fri, Jan 12, 2018 at 4:59 PM, Jayme <jaymef(a)gmail.com> wrote:
> > Here are the newest logs from me trying to start hosted vm:
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> blocking
> > state
> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> disabled
> > state
> > Jan 12 11:58:14 cultivar0 kernel: device vnet4 entered promiscuous mode
> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> blocking
> > state
> > Jan 12 11:58:14 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> > forwarding state
> > Jan 12 11:58:14 cultivar0 lldpad: recvfrom(Event interface): No buffer
> space
> > available
> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8715]
> > manager: (vnet4): new Tun device
> > (/org/freedesktop/NetworkManager/Devices/140)
> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8795]
> > device (vnet4): state change: unmanaged -> unavailable (reason
> > 'connection-assumed') [10 20 41]
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > 2018-01-12 15:58:14.879+0000: starting up libvirt version: 3.2.0,
> package:
> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> > cultivar0.grove.silverorange.com
> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> > guest=Cultivar,debug-threads=on -S -object
> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-119-Cultivar/master-key.aes
> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> > Conroe -m 8192 -realtime mlock=off -smp
> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> > 'type=1,manufacturer=oVirt,product=oVirt
> > Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> > -no-user-config -nodefaults -chardev
> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 119-Cultivar/monitor.sock,server,nowait
> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> > base=2018-01-12T15:58:14,driftfix=slew -global
> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> -device
> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> > -device
> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> > tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> > -chardev
> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> > -chardev
> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> > -chardev
> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> > -device
> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> > -chardev pty,id=charconsole0 -device
> > virtconsole,chardev=charconsole0,id=console0 -spice
> > tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> > rng-random,id=objrng0,filename=/dev/urandom -device
> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:14 cultivar0 NetworkManager[1092]: <info> [1515772694.8807]
> > device (vnet4): state change: unavailable -> disconnected (reason 'none')
> > [20 30 0]
> > Jan 12 11:58:14 cultivar0 systemd-machined: New machine
> qemu-119-Cultivar.
> > Jan 12 11:58:14 cultivar0 systemd: Started Virtual Machine
> > qemu-119-Cultivar.
> > Jan 12 11:58:14 cultivar0 systemd: Starting Virtual Machine
> > qemu-119-Cultivar.
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > 2018-01-12T15:58:15.094002Z qemu-kvm: -chardev pty,id=charconsole0: char
> > device redirected to /dev/pts/1 (label charconsole0)
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:15 cultivar0 kvm: 5 guests now active
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > 2018-01-12 15:58:15.217+0000: shutting down, reason=failed
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:15 cultivar0 libvirtd: 2018-01-12 15:58:15.217+0000: 1908:
> > error : virLockManagerSanlockAcquire:1041 : resource busy: Failed to
> acquire
> > lock: Lease is held by another host
> >
> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> > 2018-01-12T15:58:15.219934Z qemu-kvm: terminating on signal 15 from pid
> 1773
> > (/usr/sbin/libvirtd)
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> disabled
> > state
> > Jan 12 11:58:15 cultivar0 kernel: device vnet4 left promiscuous mode
> > Jan 12 11:58:15 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> disabled
> > state
> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info> [1515772695.2348]
> > device (vnet4): state change: disconnected -> unmanaged (reason
> 'unmanaged')
> > [30 10 3]
> > Jan 12 11:58:15 cultivar0 NetworkManager[1092]: <info> [1515772695.2349]
> > device (vnet4): released from master device ovirtmgmt
> > Jan 12 11:58:15 cultivar0 kvm: 4 guests now active
> > Jan 12 11:58:15 cultivar0 systemd-machined: Machine qemu-119-Cultivar
> > terminated.
> >
> > ==> /var/log/vdsm/vdsm.log <==
> > vm/4013c829::ERROR::2018-01-12
> > 11:58:15,444::vm::914::virt.vm::(_startUnderlyingVm)
> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') The vm start process
> failed
> > Traceback (most recent call last):
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 843, in
> > _startUnderlyingVm
> > self._run()
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2721, in
> > _run
> > dom.createWithFlags(flags)
> > File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line
> > 126, in wrapper
> > ret = f(*args, **kwargs)
> > File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 512, in
> > wrapper
> > return func(inst, *args, **kwargs)
> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in
> > createWithFlags
> > if ret == -1: raise libvirtError ('virDomainCreateWithFlags()
> failed',
> > dom=self)
> > libvirtError: resource busy: Failed to acquire lock: Lease is held by
> > another host
> > jsonrpc/6::ERROR::2018-01-12
> > 11:58:16,421::__init__::611::jsonrpc.JsonRpcServer::(_handle_request)
> > Internal server error
> > Traceback (most recent call last):
> > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 606,
> > in _handle_request
> > res = method(**params)
> > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201,
> in
> > _dynamicMethod
> > result = fn(*methodArgs)
> > File "<string>", line 2, in getAllVmIoTunePolicies
> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48,
> in
> > method
> > ret = func(*args, **kwargs)
> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> > getAllVmIoTunePolicies
> > io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in
> > getAllVmIoTunePolicies
> > 'current_values': v.getIoTune()}
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in
> > getIoTune
> > result = self.getIoTuneResponse()
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in
> > getIoTuneResponse
> > res = self._dom.blockIoTune(
> > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 47,
> > in __getattr__
> > % self.vmid)
> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c' was not
> defined
> > yet or was undefined
> >
> > ==> /var/log/messages <==
> > Jan 12 11:58:16 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR
> Internal
> > server error#012Traceback (most recent call last):#012 File
> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in
> > _handle_request#012 res = method(**params)#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in
> > _dynamicMethod#012 result = fn(*methodArgs)#012 File "<string>",
> line 2,
> > in getAllVmIoTunePolicies#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> > method#012 ret = func(*args, **kwargs)#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> > getAllVmIoTunePolicies#012 io_tune_policies_dict =
> > self._cif.getAllVmIoTunePolicies()#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in
> > getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in
> > getIoTune#012 result = self.getIoTuneResponse()#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in
> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File
> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in
> > __getattr__#012 % self.vmid)#012NotConnectedError: VM
> > '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or was
> undefined
> >
> > On Fri, Jan 12, 2018 at 11:55 AM, Jayme <jaymef(a)gmail.com> wrote:
> >>
> >> One other tidbit I noticed is that it seems like there are less errors
> if
> >> I started in paused mode:
> >>
> >> but still shows: Engine status : {"reason": "bad vm
> >> status", "health": "bad", "vm": "up", "detail": "Paused"}
> >>
> >> ==> /var/log/messages <==
> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> blocking state
> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> disabled state
> >> Jan 12 11:55:05 cultivar0 kernel: device vnet4 entered promiscuous mode
> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> blocking state
> >> Jan 12 11:55:05 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >> forwarding state
> >> Jan 12 11:55:05 cultivar0 lldpad: recvfrom(Event interface): No buffer
> >> space available
> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> [1515772505.3625]
> >> manager: (vnet4): new Tun device
> >> (/org/freedesktop/NetworkManager/Devices/139)
> >>
> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> 2018-01-12 15:55:05.370+0000: starting up libvirt version: 3.2.0,
> package:
> >> 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >> 2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
> >> 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >> cultivar0.grove.silverorange.com
> >> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >> guest=Cultivar,debug-threads=on -S -object
> >> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-118-Cultivar/master-key.aes
> >> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> >> Conroe -m 8192 -realtime mlock=off -smp
> >> 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >> 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >> 'type=1,manufacturer=oVirt,product=oVirt
> >> Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >> -no-user-config -nodefaults -chardev
> >> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 118-Cultivar/monitor.sock,server,nowait
> >> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >> base=2018-01-12T15:55:05,driftfix=slew -global
> >> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> -device
> >> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >> file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >> -device
> >> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> >> tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >> -chardev
> >> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >> -device
> >> virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >> -chardev
> >> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >> -device
> >> virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >> -chardev spicevmc,id=charchannel2,name=vdagent -device
> >> virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >> -chardev
> >> socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >> -device
> >> virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >> -chardev pty,id=charconsole0 -device
> >> virtconsole,chardev=charconsole0,id=console0 -spice
> >> tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >> rng-random,id=objrng0,filename=/dev/urandom -device
> >> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
> >>
> >> ==> /var/log/messages <==
> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> [1515772505.3689]
> >> device (vnet4): state change: unmanaged -> unavailable (reason
> >> 'connection-assumed') [10 20 41]
> >> Jan 12 11:55:05 cultivar0 NetworkManager[1092]: <info>
> [1515772505.3702]
> >> device (vnet4): state change: unavailable -> disconnected (reason
> 'none')
> >> [20 30 0]
> >> Jan 12 11:55:05 cultivar0 systemd-machined: New machine
> qemu-118-Cultivar.
> >> Jan 12 11:55:05 cultivar0 systemd: Started Virtual Machine
> >> qemu-118-Cultivar.
> >> Jan 12 11:55:05 cultivar0 systemd: Starting Virtual Machine
> >> qemu-118-Cultivar.
> >>
> >> ==> /var/log/libvirt/qemu/Cultivar.log <==
> >> 2018-01-12T15:55:05.586827Z qemu-kvm: -chardev pty,id=charconsole0: char
> >> device redirected to /dev/pts/1 (label charconsole0)
> >>
> >> ==> /var/log/messages <==
> >> Jan 12 11:55:05 cultivar0 kvm: 5 guests now active
> >>
> >> On Fri, Jan 12, 2018 at 11:36 AM, Jayme <jaymef(a)gmail.com> wrote:
> >>>
> >>> Yeah I am in global maintenance:
> >>>
> >>> state=GlobalMaintenance
> >>>
> >>> host0: {"reason": "vm not running on this host", "health": "bad",
> "vm":
> >>> "down", "detail": "unknown"}
> >>> host2: {"reason": "vm not running on this host", "health": "bad", "vm":
> >>> "down", "detail": "unknown"}
> >>> host3: {"reason": "vm not running on this host", "health": "bad", "vm":
> >>> "down", "detail": "unknown"}
> >>>
> >>> I understand the lock is an issue, I'll try to make sure it is fully
> >>> stopped on all three before starting but I don't think that is the
> issue at
> >>> hand either. What concerns me is mostly that it seems to be unable
> to read
> >>> the meta data, I think that might be the heart of the problem but I'm
> not
> >>> sure what is causing it.
> >>>
> >>> On Fri, Jan 12, 2018 at 11:33 AM, Martin Sivak <msivak(a)redhat.com>
> wrote:
> >>>>
> >>>> > On all three hosts I ran hosted-engine --vm-shutdown; hosted-engine
> >>>> > --vm-poweroff
> >>>>
> >>>> Are you in global maintenance? I think you were in one of the previous
> >>>> emails, but worth checking.
> >>>>
> >>>> > I started ovirt-ha-broker with systemctl as root user but it does
> >>>> > appear to be running under vdsm:
> >>>>
> >>>> That is the correct behavior.
> >>>>
> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is held
> by
> >>>> > another host
> >>>>
> >>>> sanlock seems to think the VM runs somewhere and it is possible that
> >>>> some other host tried to start the VM as well unless you are in global
> >>>> maintenance (that is why I asked the first question here).
> >>>>
> >>>> Martin
> >>>>
> >>>> On Fri, Jan 12, 2018 at 4:28 PM, Jayme <jaymef(a)gmail.com> wrote:
> >>>> > Martin,
> >>>> >
> >>>> > Thanks so much for keeping with me, this is driving me crazy! I
> >>>> > really do
> >>>> > appreciate it, thanks again
> >>>> >
> >>>> > Let's go through this:
> >>>> >
> >>>> > HE VM is down - YES
> >>>> >
> >>>> >
> >>>> > HE agent fails when opening metadata using the symlink - YES
> >>>> >
> >>>> >
> >>>> > the symlink is there and readable by vdsm:kvm - it appears to be:
> >>>> >
> >>>> >
> >>>> > lrwxrwxrwx. 1 vdsm kvm 159 Jan 10 21:20
> >>>> > 14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> > ->
> >>>> >
> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> >
> >>>> >
> >>>> > And the files in the linked directory exist and have vdsm:kvm perms
> as
> >>>> > well:
> >>>> >
> >>>> >
> >>>> > # cd
> >>>> >
> >>>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> >
> >>>> > [root@cultivar0 14a20941-1b84-4b82-be8f-ace38d7c037a]# ls -al
> >>>> >
> >>>> > total 2040
> >>>> >
> >>>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .
> >>>> >
> >>>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..
> >>>> >
> >>>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 11:19
> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >>>> >
> >>>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
> >>>> >
> >>>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
> >>>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
> >>>> >
> >>>> >
> >>>> > I started ovirt-ha-broker with systemctl as root user but it does
> >>>> > appear to
> >>>> > be running under vdsm:
> >>>> >
> >>>> >
> >>>> > vdsm 16928 0.6 0.0 1618244 43328 ? Ssl 10:33 0:18
> >>>> > /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker
> >>>> >
> >>>> >
> >>>> >
> >>>> > Here is something I tried:
> >>>> >
> >>>> >
> >>>> > - On all three hosts I ran hosted-engine --vm-shutdown;
> hosted-engine
> >>>> > --vm-poweroff
> >>>> >
> >>>> > - On HOST0 (cultivar0) I disconnected and reconnected storage using
> >>>> > hosted-engine
> >>>> >
> >>>> > - Tried starting up the hosted VM on cultivar0 while tailing the
> logs:
> >>>> >
> >>>> >
> >>>> > # hosted-engine --vm-start
> >>>> >
> >>>> > VM exists and is down, cleaning up and restarting
> >>>> >
> >>>> >
> >>>> >
> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >>>> >
> >>>> > jsonrpc/2::ERROR::2018-01-12
> >>>> > 11:27:27,194::vm::1766::virt.vm::(_getRunningVmStats)
> >>>> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') Error fetching vm
> stats
> >>>> >
> >>>> > Traceback (most recent call last):
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 1762,
> >>>> > in
> >>>> > _getRunningVmStats
> >>>> >
> >>>> > vm_sample.interval)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py",
> line
> >>>> > 45, in
> >>>> > produce
> >>>> >
> >>>> > networks(vm, stats, first_sample, last_sample, interval)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py",
> line
> >>>> > 322, in
> >>>> > networks
> >>>> >
> >>>> > if nic.name.startswith('hostdev'):
> >>>> >
> >>>> > AttributeError: name
> >>>> >
> >>>> > jsonrpc/3::ERROR::2018-01-12
> >>>> > 11:27:27,221::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >>>> > Internal server error
> >>>> >
> >>>> > Traceback (most recent call last):
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> line
> >>>> > 606,
> >>>> > in _handle_request
> >>>> >
> >>>> > res = method(**params)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
> >>>> > 201, in
> >>>> > _dynamicMethod
> >>>> >
> >>>> > result = fn(*methodArgs)
> >>>> >
> >>>> > File "<string>", line 2, in getAllVmIoTunePolicies
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> 48,
> >>>> > in
> >>>> > method
> >>>> >
> >>>> > ret = func(*args, **kwargs)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354,
> in
> >>>> > getAllVmIoTunePolicies
> >>>> >
> >>>> > io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line
> 524,
> >>>> > in
> >>>> > getAllVmIoTunePolicies
> >>>> >
> >>>> > 'current_values': v.getIoTune()}
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3481,
> >>>> > in
> >>>> > getIoTune
> >>>> >
> >>>> > result = self.getIoTuneResponse()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 3500,
> >>>> > in
> >>>> > getIoTuneResponse
> >>>> >
> >>>> > res = self._dom.blockIoTune(
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> line
> >>>> > 47,
> >>>> > in __getattr__
> >>>> >
> >>>> > % self.vmid)
> >>>> >
> >>>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c' was
> not
> >>>> > defined
> >>>> > yet or was undefined
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 journal: vdsm jsonrpc.JsonRpcServer ERROR
> >>>> > Internal
> >>>> > server error#012Traceback (most recent call last):#012 File
> >>>> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 606, in
> >>>> > _handle_request#012 res = method(**params)#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in
> >>>> > _dynamicMethod#012 result = fn(*methodArgs)#012 File "<string>",
> >>>> > line 2,
> >>>> > in getAllVmIoTunePolicies#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> >>>> > method#012 ret = func(*args, **kwargs)#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1354, in
> >>>> > getAllVmIoTunePolicies#012 io_tune_policies_dict =
> >>>> > self._cif.getAllVmIoTunePolicies()#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 524, in
> >>>> > getAllVmIoTunePolicies#012 'current_values': v.getIoTune()}#012
> >>>> > File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3481, in
> >>>> > getIoTune#012 result = self.getIoTuneResponse()#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3500, in
> >>>> > getIoTuneResponse#012 res = self._dom.blockIoTune(#012 File
> >>>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 47, in
> >>>> > __getattr__#012 % self.vmid)#012NotConnectedError: VM
> >>>> > '4013c829-c9d7-4b72-90d5-6fe58137504c' was not defined yet or was
> >>>> > undefined
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > blocking
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > disabled
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 entered promiscuous
> >>>> > mode
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > blocking
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > forwarding state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 lldpad: recvfrom(Event interface): No
> buffer
> >>>> > space
> >>>> > available
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.4264]
> >>>> > manager: (vnet4): new Tun device
> >>>> > (/org/freedesktop/NetworkManager/Devices/135)
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.4342]
> >>>> > device (vnet4): state change: unmanaged -> unavailable (reason
> >>>> > 'connection-assumed') [10 20 41]
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.4353]
> >>>> > device (vnet4): state change: unavailable -> disconnected (reason
> >>>> > 'none')
> >>>> > [20 30 0]
> >>>> >
> >>>> >
> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >
> >>>> > 2018-01-12 15:27:27.435+0000: starting up libvirt version: 3.2.0,
> >>>> > package:
> >>>> > 14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
> >>>> > 2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
> >>>> > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
> >>>> > cultivar0.grove.silverorange.com
> >>>> >
> >>>> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >>>> > QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> >>>> > guest=Cultivar,debug-threads=on -S -object
> >>>> >
> >>>> > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-114-Cultivar/master-key.aes
> >>>> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> >>>> > -cpu
> >>>> > Conroe -m 8192 -realtime mlock=off -smp
> >>>> > 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> >>>> > 4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
> >>>> > 'type=1,manufacturer=oVirt,product=oVirt
> >>>> >
> >>>> > Node,version=7-4.1708.el7.centos,serial=44454C4C-3300-
> 1042-8031-B4C04F4B4831,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
> >>>> > -no-user-config -nodefaults -chardev
> >>>> >
> >>>> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
> 114-Cultivar/monitor.sock,server,nowait
> >>>> > -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >>>> > base=2018-01-12T15:27:27,driftfix=slew -global
> >>>> > kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on
> >>>> > -device
> >>>> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >>>> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> >>>> >
> >>>> > file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-
> fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,
> serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,
> werror=stop,rerror=stop,aio=threads
> >>>> > -device
> >>>> >
> >>>> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1
> >>>> > -drive if=none,id=drive-ide0-1-0,readonly=on -device
> >>>> > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> >>>> > tap,fd=35,id=hostnet0,vhost=on,vhostfd=38 -device
> >>>> >
> >>>> > virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:
> 7f:d6:83,bus=pci.0,addr=0x3
> >>>> > -chardev
> >>>> >
> >>>> > socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
> >>>> > -device
> >>>> >
> >>>> > virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> >>>> > -chardev
> >>>> >
> >>>> > socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
> >>>> > -device
> >>>> >
> >>>> > virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0
> >>>> > -chardev spicevmc,id=charchannel2,name=vdagent -device
> >>>> >
> >>>> > virtserialport,bus=virtio-serial0.0,nr=3,chardev=
> charchannel2,id=channel2,name=com.redhat.spice.0
> >>>> > -chardev
> >>>> >
> >>>> > socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/
> 4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-
> engine-setup.0,server,nowait
> >>>> > -device
> >>>> >
> >>>> > virtserialport,bus=virtio-serial0.0,nr=4,chardev=
> charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
> >>>> > -chardev pty,id=charconsole0 -device
> >>>> > virtconsole,chardev=charconsole0,id=console0 -spice
> >>>> >
> >>>> > tls-port=5904,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,
> tls-channel=default,seamless-migration=on
> >>>> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> >>>> > rng-random,id=objrng0,filename=/dev/urandom -device
> >>>> > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg
> >>>> > timestamp=on
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: New machine
> >>>> > qemu-114-Cultivar.
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 systemd: Started Virtual Machine
> >>>> > qemu-114-Cultivar.
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 systemd: Starting Virtual Machine
> >>>> > qemu-114-Cultivar.
> >>>> >
> >>>> >
> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >
> >>>> > 2018-01-12T15:27:27.651669Z qemu-kvm: -chardev pty,id=charconsole0:
> >>>> > char
> >>>> > device redirected to /dev/pts/2 (label charconsole0)
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kvm: 5 guests now active
> >>>> >
> >>>> >
> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >
> >>>> > 2018-01-12 15:27:27.773+0000: shutting down, reason=failed
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 libvirtd: 2018-01-12 15:27:27.773+0000:
> >>>> > 1910:
> >>>> > error : virLockManagerSanlockAcquire:1041 : resource busy: Failed
> to
> >>>> > acquire
> >>>> > lock: Lease is held by another host
> >>>> >
> >>>> >
> >>>> > ==> /var/log/libvirt/qemu/Cultivar.log <==
> >>>> >
> >>>> > 2018-01-12T15:27:27.776135Z qemu-kvm: terminating on signal 15 from
> >>>> > pid 1773
> >>>> > (/usr/sbin/libvirtd)
> >>>> >
> >>>> >
> >>>> > ==> /var/log/messages <==
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > disabled
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: device vnet4 left promiscuous mode
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kernel: ovirtmgmt: port 6(vnet4) entered
> >>>> > disabled
> >>>> > state
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.7989]
> >>>> > device (vnet4): state change: disconnected -> unmanaged (reason
> >>>> > 'unmanaged')
> >>>> > [30 10 3]
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 NetworkManager[1092]: <info>
> >>>> > [1515770847.7989]
> >>>> > device (vnet4): released from master device ovirtmgmt
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 kvm: 4 guests now active
> >>>> >
> >>>> > Jan 12 11:27:27 cultivar0 systemd-machined: Machine
> qemu-114-Cultivar
> >>>> > terminated.
> >>>> >
> >>>> >
> >>>> > ==> /var/log/vdsm/vdsm.log <==
> >>>> >
> >>>> > vm/4013c829::ERROR::2018-01-12
> >>>> > 11:27:28,001::vm::914::virt.vm::(_startUnderlyingVm)
> >>>> > (vmId='4013c829-c9d7-4b72-90d5-6fe58137504c') The vm start process
> >>>> > failed
> >>>> >
> >>>> > Traceback (most recent call last):
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 843,
> >>>> > in
> >>>> > _startUnderlyingVm
> >>>> >
> >>>> > self._run()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
> 2721,
> >>>> > in
> >>>> > _run
> >>>> >
> >>>> > dom.createWithFlags(flags)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/
> libvirtconnection.py",
> >>>> > line
> >>>> > 126, in wrapper
> >>>> >
> >>>> > ret = f(*args, **kwargs)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 512,
> in
> >>>> > wrapper
> >>>> >
> >>>> > return func(inst, *args, **kwargs)
> >>>> >
> >>>> > File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069,
> in
> >>>> > createWithFlags
> >>>> >
> >>>> > if ret == -1: raise libvirtError ('virDomainCreateWithFlags()
> >>>> > failed',
> >>>> > dom=self)
> >>>> >
> >>>> > libvirtError: resource busy: Failed to acquire lock: Lease is held
> by
> >>>> > another host
> >>>> >
> >>>> > periodic/47::ERROR::2018-01-12
> >>>> > 11:27:32,858::periodic::215::virt.periodic.Operation::(__call__)
> >>>> > <vdsm.virt.sampling.VMBulkstatsMonitor object at 0x3692590>
> operation
> >>>> > failed
> >>>> >
> >>>> > Traceback (most recent call last):
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.py",
> line
> >>>> > 213,
> >>>> > in __call__
> >>>> >
> >>>> > self._func()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py",
> line
> >>>> > 522,
> >>>> > in __call__
> >>>> >
> >>>> > self._send_metrics()
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py",
> line
> >>>> > 538,
> >>>> > in _send_metrics
> >>>> >
> >>>> > vm_sample.interval)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py",
> line
> >>>> > 45, in
> >>>> > produce
> >>>> >
> >>>> > networks(vm, stats, first_sample, last_sample, interval)
> >>>> >
> >>>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py",
> line
> >>>> > 322, in
> >>>> > networks
> >>>> >
> >>>> > if nic.name.startswith('hostdev'):
> >>>> >
> >>>> > AttributeError: name
> >>>> >
> >>>> >
> >>>> > On Fri, Jan 12, 2018 at 11:14 AM, Martin Sivak <msivak(a)redhat.com>
> >>>> > wrote:
> >>>> >>
> >>>> >> Hmm that rules out most of NFS related permission issues.
> >>>> >>
> >>>> >> So the current status is (I need to sum it up to get the full
> >>>> >> picture):
> >>>> >>
> >>>> >> - HE VM is down
> >>>> >> - HE agent fails when opening metadata using the symlink
> >>>> >> - the symlink is there
> >>>> >> - the symlink is readable by vdsm:kvm
> >>>> >>
> >>>> >> Hmm can you check under which user is ovirt-ha-broker started?
> >>>> >>
> >>>> >> Martin
> >>>> >>
> >>>> >>
> >>>> >> On Fri, Jan 12, 2018 at 4:10 PM, Jayme <jaymef(a)gmail.com> wrote:
> >>>> >> > Same thing happens with data images of other VMs as well though,
> >>>> >> > and
> >>>> >> > those
> >>>> >> > seem to be running ok so I'm not sure if it's the problem.
> >>>> >> >
> >>>> >> > On Fri, Jan 12, 2018 at 11:08 AM, Jayme <jaymef(a)gmail.com>
> wrote:
> >>>> >> >>
> >>>> >> >> Martin,
> >>>> >> >>
> >>>> >> >> I can as VDSM user but not as root . I get permission denied
> >>>> >> >> trying to
> >>>> >> >> touch one of the files as root, is that normal?
> >>>> >> >>
> >>>> >> >> On Fri, Jan 12, 2018 at 11:03 AM, Martin Sivak <
> msivak(a)redhat.com>
> >>>> >> >> wrote:
> >>>> >> >>>
> >>>> >> >>> Hmm, then it might be a permission issue indeed. Can you touch
> >>>> >> >>> the
> >>>> >> >>> file? Open it? (try hexdump) Just to make sure NFS does not
> >>>> >> >>> prevent
> >>>> >> >>> you from doing that.
> >>>> >> >>>
> >>>> >> >>> Martin
> >>>> >> >>>
> >>>> >> >>> On Fri, Jan 12, 2018 at 3:57 PM, Jayme <jaymef(a)gmail.com>
> wrote:
> >>>> >> >>> > Sorry, I think we got confused about the symlink, there are
> >>>> >> >>> > symlinks
> >>>> >> >>> > in
> >>>> >> >>> > /var/run that point the /rhev when I was doing an LS it was
> >>>> >> >>> > listing
> >>>> >> >>> > the
> >>>> >> >>> > files in /rhev
> >>>> >> >>> >
> >>>> >> >>> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286
> >>>> >> >>> >
> >>>> >> >>> > 14a20941-1b84-4b82-be8f-ace38d7c037a ->
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> >> >>> >
> >>>> >> >>> > ls -al
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> > /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine/248f46f0-d793-4581-9810-
> c9d965e2f286/images/14a20941-1b84-4b82-be8f-ace38d7c037a
> >>>> >> >>> > total 2040
> >>>> >> >>> > drwxr-xr-x. 2 vdsm kvm 4096 Jan 12 10:51 .
> >>>> >> >>> > drwxr-xr-x. 8 vdsm kvm 4096 Feb 3 2016 ..
> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1028096 Jan 12 10:56
> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8
> >>>> >> >>> > -rw-rw----. 1 vdsm kvm 1048576 Feb 3 2016
> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.lease
> >>>> >> >>> > -rw-r--r--. 1 vdsm kvm 283 Feb 3 2016
> >>>> >> >>> > 8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8.meta
> >>>> >> >>> >
> >>>> >> >>> > Is it possible that this is the wrong image for hosted
> engine?
> >>>> >> >>> >
> >>>> >> >>> > this is all I get in vdsm log when running hosted-engine
> >>>> >> >>> > --connect-storage
> >>>> >> >>> >
> >>>> >> >>> > jsonrpc/4::ERROR::2018-01-12
> >>>> >> >>> >
> >>>> >> >>> >
> >>>> >> >>> > 10:52:53,019::__init__::611::jsonrpc.JsonRpcServer::(_
> handle_request)
> >>>> >> >>> > Internal server error
> >>>> >> >>> > Traceback (most recent call last):
> >>>> >> >>> > File
> >>>> >> >>> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
> >>>> >> >>> > line
> >>>> >> >>> > 606,
> >>>> >> >>> > in _handle_request
> >>>> >> >>> > res = method(**params)
> >>>> >> >>> > File "/usr/lib/python2.7/site-
> packages/vdsm/rpc/Bridge.py",
> >>>> >> >>> > line
> >>>> >> >>> > 201,
> >>>> >> >>> > in
> >>>> >> >>> > _dynamicMethod
> >>>> >> >>> > result = fn(*methodArgs)
> >>>> >> >>> > File "<string>", line 2, in getAllVmIoTunePolicies
> >>>> >> >>> > File "/usr/lib/python2.7/site-
> packages/vdsm/common/api.py",
> >>>> >> >>> > line
> >>>> >> >>> > 48,
> >>>> >> >>> > in
> >>>> >> >>> > method
> >>>> >> >>> > ret = func(*args, **kwargs)
> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line
> >>>> >> >>> > 1354, in
> >>>> >> >>> > getAllVmIoTunePolicies
> >>>> >> >>> > io_tune_policies_dict = self._cif.
> getAllVmIoTunePolicies()
> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py",
> >>>> >> >>> > line
> >>>> >> >>> > 524,
> >>>> >> >>> > in
> >>>> >> >>> > getAllVmIoTunePolicies
> >>>> >> >>> > 'current_values': v.getIoTune()}
> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >>>> >> >>> > 3481,
> >>>> >> >>> > in
> >>>> >> >>> > getIoTune
> >>>> >> >>> > result = self.getIoTuneResponse()
> >>>> >> >>> > File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py",
> line
> >>>> >> >>> > 3500,
> >>>> >> >>> > in
> >>>> >> >>> > getIoTuneResponse
> >>>> >> >>> > res = self._dom.blockIoTune(
> >>>> >> >>> > File
> >>>> >> >>> > "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
> >>>> >> >>> > line
> >>>> >> >>> > 47,
> >>>> >> >>> > in __getattr__
> >>>> >> >>> > % self.vmid)
> >>>> >> >>> > NotConnectedError: VM '4013c829-c9d7-4b72-90d5-6fe58137504c'
> >>>> >> >>> > was not
> >>>> >> >>> > defined
> >>>> >> >>> > yet or was undefined
> >>>> >> >>> >
> >>>> >> >>> > On Fri, Jan 12, 2018 at 10:48 AM, Martin Sivak
> >>>> >> >>> > <msivak(a)redhat.com>
> >>>> >> >>> > wrote:
> >>>> >> >>> >>
> >>>> >> >>> >> Hi,
> >>>> >> >>> >>
> >>>> >> >>> >> what happens when you try hosted-engine --connect-storage?
> Do
> >>>> >> >>> >> you
> >>>> >> >>> >> see
> >>>> >> >>> >> any errors in the vdsm log?
> >>>> >> >>> >>
> >>>> >> >>> >> Best regards
> >>>> >> >>> >>
> >>>> >> >>> >> Martin Sivak
> >>>> >> >>> >>
> >>>> >> >>> >> On Fri, Jan 12, 2018 at 3:41 PM, Jayme <jaymef(a)gmail.com>
> >>>> >> >>> >> wrote:
> >>>> >> >>> >> > Ok this is what I've done:
> >>>> >> >>> >> >
> >>>> >> >>> >> > - All three hosts in global maintenance mode
> >>>> >> >>> >> > - Ran: systemctl stop ovirt-ha-broker; systemctl stop
> >>>> >> >>> >> > ovirt-ha-broker --
> >>>> >> >>> >> > on
> >>>> >> >>> >> > all three hosts
> >>>> >> >>> >> > - Moved ALL files in
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> >>>> >> >>> >> > to
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/backup
> >>>> >> >>> >> > - Ran: systemctl start ovirt-ha-broker; systemctl start
> >>>> >> >>> >> > ovirt-ha-broker
> >>>> >> >>> >> > --
> >>>> >> >>> >> > on all three hosts
> >>>> >> >>> >> >
> >>>> >> >>> >> > - attempt start of engine vm from HOST0 (cultivar0):
> >>>> >> >>> >> > hosted-engine
> >>>> >> >>> >> > --vm-start
> >>>> >> >>> >> >
> >>>> >> >>> >> > Lots of errors in the logs still, it appears to be having
> >>>> >> >>> >> > problems
> >>>> >> >>> >> > with
> >>>> >> >>> >> > that
> >>>> >> >>> >> > directory still:
> >>>> >> >>> >> >
> >>>> >> >>> >> > Jan 12 10:40:13 cultivar0 journal: ovirt-ha-broker
> >>>> >> >>> >> > ovirt_hosted_engine_ha.broker.
> storage_broker.StorageBroker
> >>>> >> >>> >> > ERROR
> >>>> >> >>> >> > Failed
> >>>> >> >>> >> > to
> >>>> >> >>> >> > write metadata for host 1 to
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8#012Traceback
> >>>> >> >>> >> > (most recent call last):#012 File
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/broker/storage_broker.py",
> >>>> >> >>> >> > line 202, in put_stats#012 f = os.open(path,
> direct_flag
> >>>> >> >>> >> > |
> >>>> >> >>> >> > os.O_WRONLY |
> >>>> >> >>> >> > os.O_SYNC)#012OSError: [Errno 2] No such file or
> directory:
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >>>> >> >>> >> >
> >>>> >> >>> >> > There are no new files or symlinks in
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/
> >>>> >> >>> >> >
> >>>> >> >>> >> > - Jayme
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> >
> >>>> >> >>> >> > On Fri, Jan 12, 2018 at 10:23 AM, Martin Sivak
> >>>> >> >>> >> > <msivak(a)redhat.com>
> >>>> >> >>> >> > wrote:
> >>>> >> >>> >> >>
> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling (
> >>>> >> >>> >> >>
> >>>> >> >>> >> >> On all hosts I should have added.
> >>>> >> >>> >> >>
> >>>> >> >>> >> >> Martin
> >>>> >> >>> >> >>
> >>>> >> >>> >> >> On Fri, Jan 12, 2018 at 3:22 PM, Martin Sivak
> >>>> >> >>> >> >> <msivak(a)redhat.com>
> >>>> >> >>> >> >> wrote:
> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No
> such
> >>>> >> >>> >> >> >> file
> >>>> >> >>> >> >> >> or
> >>>> >> >>> >> >> >> directory:
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> ls -al
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8
> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are
> >>>> >> >>> >> >> >> referring to
> >>>> >> >>> >> >> >> that
> >>>> >> >>> >> >> >> was
> >>>> >> >>> >> >> >> addressed in RC1 or something else?
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > No, this file is the symlink. It should point to
> >>>> >> >>> >> >> > somewhere
> >>>> >> >>> >> >> > inside
> >>>> >> >>> >> >> > /rhev/. I see it is a 1G file in your case. That is
> >>>> >> >>> >> >> > really
> >>>> >> >>> >> >> > interesting.
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > Can you please stop all hosted engine tooling
> >>>> >> >>> >> >> > (ovirt-ha-agent,
> >>>> >> >>> >> >> > ovirt-ha-broker), move the file (metadata file is not
> >>>> >> >>> >> >> > important
> >>>> >> >>> >> >> > when
> >>>> >> >>> >> >> > services are stopped, but better safe than sorry) and
> >>>> >> >>> >> >> > restart
> >>>> >> >>> >> >> > all
> >>>> >> >>> >> >> > services again?
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> >> Could there possibly be a permissions
> >>>> >> >>> >> >> >> problem somewhere?
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > Maybe, but the file itself looks out of the ordinary. I
> >>>> >> >>> >> >> > wonder
> >>>> >> >>> >> >> > how it
> >>>> >> >>> >> >> > got there.
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > Best regards
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > Martin Sivak
> >>>> >> >>> >> >> >
> >>>> >> >>> >> >> > On Fri, Jan 12, 2018 at 3:09 PM, Jayme <
> jaymef(a)gmail.com>
> >>>> >> >>> >> >> > wrote:
> >>>> >> >>> >> >> >> Thanks for the help thus far. Storage could be
> related
> >>>> >> >>> >> >> >> but
> >>>> >> >>> >> >> >> all
> >>>> >> >>> >> >> >> other
> >>>> >> >>> >> >> >> VMs on
> >>>> >> >>> >> >> >> same storage are running ok. The storage is mounted
> via
> >>>> >> >>> >> >> >> NFS
> >>>> >> >>> >> >> >> from
> >>>> >> >>> >> >> >> within one
> >>>> >> >>> >> >> >> of the three hosts, I realize this is not ideal. This
> >>>> >> >>> >> >> >> was
> >>>> >> >>> >> >> >> setup
> >>>> >> >>> >> >> >> by
> >>>> >> >>> >> >> >> a
> >>>> >> >>> >> >> >> previous admin more as a proof of concept and VMs were
> >>>> >> >>> >> >> >> put on
> >>>> >> >>> >> >> >> there
> >>>> >> >>> >> >> >> that
> >>>> >> >>> >> >> >> should not have been placed in a proof of concept
> >>>> >> >>> >> >> >> environment..
> >>>> >> >>> >> >> >> it
> >>>> >> >>> >> >> >> was
> >>>> >> >>> >> >> >> intended to be rebuilt with proper storage down the
> >>>> >> >>> >> >> >> road.
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> So the storage is on HOST0 and the other hosts mount
> NFS
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/data
> >>>> >> >>> >> >> >> 4861742080
> >>>> >> >>> >> >> >> 1039352832 3822389248 22%
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_data
> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.com:/exports/iso
> >>>> >> >>> >> >> >> 4861742080
> >>>> >> >>> >> >> >> 1039352832 3822389248 22%
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_iso
> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.
> com:/exports/import_export
> >>>> >> >>> >> >> >> 4861742080
> >>>> >> >>> >> >> >> 1039352832 3822389248 22%
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_import__export
> >>>> >> >>> >> >> >> cultivar0.grove.silverorange.
> com:/exports/hosted_engine
> >>>> >> >>> >> >> >> 4861742080
> >>>> >> >>> >> >> >> 1039352832 3822389248 22%
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /rhev/data-center/mnt/cultivar0.grove.silverorange.
> com:_exports_hosted__engine
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Like I said, the VM data storage itself seems to be
> >>>> >> >>> >> >> >> working
> >>>> >> >>> >> >> >> ok,
> >>>> >> >>> >> >> >> as
> >>>> >> >>> >> >> >> all
> >>>> >> >>> >> >> >> other
> >>>> >> >>> >> >> >> VMs appear to be running.
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> I'm curious why the broker log says this file is not
> >>>> >> >>> >> >> >> found
> >>>> >> >>> >> >> >> when
> >>>> >> >>> >> >> >> it
> >>>> >> >>> >> >> >> is
> >>>> >> >>> >> >> >> correct and I can see the file at that path:
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> RequestError: failed to read metadata: [Errno 2] No
> such
> >>>> >> >>> >> >> >> file
> >>>> >> >>> >> >> >> or
> >>>> >> >>> >> >> >> directory:
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8'
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> ls -al
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8
> >>>> >> >>> >> >> >> -rw-rw----. 1 vdsm kvm 1028096 Jan 12 09:59
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> /var/run/vdsm/storage/248f46f0-d793-4581-9810-
> c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-
> f5b7ec1f1cf8
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Is this due to the symlink problem you guys are
> >>>> >> >>> >> >> >> referring to
> >>>> >> >>> >> >> >> that
> >>>> >> >>> >> >> >> was
> >>>> >> >>> >> >> >> addressed in RC1 or something else? Could there
> >>>> >> >>> >> >> >> possibly be
> >>>> >> >>> >> >> >> a
> >>>> >> >>> >> >> >> permissions
> >>>> >> >>> >> >> >> problem somewhere?
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Assuming that all three hosts have 4.2 rpms installed
> >>>> >> >>> >> >> >> and the
> >>>> >> >>> >> >> >> host
> >>>> >> >>> >> >> >> engine
> >>>> >> >>> >> >> >> will not start is it safe for me to update hosts to
> 4.2
> >>>> >> >>> >> >> >> RC1
> >>>> >> >>> >> >> >> rpms?
> >>>> >> >>> >> >> >> Or
> >>>> >> >>> >> >> >> perhaps install that repo and *only* update the ovirt
> HA
> >>>> >> >>> >> >> >> packages?
> >>>> >> >>> >> >> >> Assuming that I cannot yet apply the same updates to
> the
> >>>> >> >>> >> >> >> inaccessible
> >>>> >> >>> >> >> >> hosted
> >>>> >> >>> >> >> >> engine VM.
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> I should also mention one more thing. I originally
> >>>> >> >>> >> >> >> upgraded
> >>>> >> >>> >> >> >> the
> >>>> >> >>> >> >> >> engine
> >>>> >> >>> >> >> >> VM
> >>>> >> >>> >> >> >> first using new RPMS then engine-setup. It failed due
> >>>> >> >>> >> >> >> to not
> >>>> >> >>> >> >> >> being
> >>>> >> >>> >> >> >> in
> >>>> >> >>> >> >> >> global maintenance, so I set global maintenance and
> ran
> >>>> >> >>> >> >> >> it
> >>>> >> >>> >> >> >> again,
> >>>> >> >>> >> >> >> which
> >>>> >> >>> >> >> >> appeared to complete as intended but never came back
> up
> >>>> >> >>> >> >> >> after.
> >>>> >> >>> >> >> >> Just
> >>>> >> >>> >> >> >> in
> >>>> >> >>> >> >> >> case
> >>>> >> >>> >> >> >> this might have anything at all to do with what could
> >>>> >> >>> >> >> >> have
> >>>> >> >>> >> >> >> happened.
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> Thanks very much again, I very much appreciate the
> help!
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> - Jayme
> >>>> >> >>> >> >> >>
> >>>> >> >>> >> >> >> On Fri, Jan 12, 2018 at 8:44 AM, Simone Tiraboschi
> >>>> >> >>> >> >> >> <stirabos(a)redhat.com>
> >>>> >> >>> >> >> >> wrote:
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>> On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak
> >>>> >> >>> >> >> >>> <msivak(a)redhat.com>
> >>>> >> >>> >> >> >>> wrote:
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> Hi,
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> the hosted engine agent issue might be fixed by
> >>>> >> >>> >> >> >>>> restarting
> >>>> >> >>> >> >> >>>> ovirt-ha-broker or updating to newest
> >>>> >> >>> >> >> >>>> ovirt-hosted-engine-ha
> >>>> >> >>> >> >> >>>> and
> >>>> >> >>> >> >> >>>> -setup. We improved handling of the missing symlink.
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>> Available just in oVirt 4.2.1 RC1
> >>>> >> >>> >> >> >>>
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> All the other issues seem to point to some storage
> >>>> >> >>> >> >> >>>> problem
> >>>> >> >>> >> >> >>>> I
> >>>> >> >>> >> >> >>>> am
> >>>> >> >>> >> >> >>>> afraid.
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> You said you started the VM, do you see it in virsh
> -r
> >>>> >> >>> >> >> >>>> list?
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> Best regards
> >>>> >> >>> >> >> >>>>
> >>>> >> >>> >> >> >>>> Martin Sivak
> >>>> >> >>> >> >> >>>>
2
1
Hi all,
I'm trying to modify the oVirt NGN image (to add RPMs, since imgbased rpmpersistence currently seems to have a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1528468 ) but I'm unfortunately stuck at the very beginning: it seems that I'm unable to recreate even the standard 4.1 squashfs image.
I'm following the instructions at https://gerrit.ovirt.org/gitweb?p=ovirt-node-ng.git;a=blob;f=README
I'm working inside a CentOS7 fully-updated vm (hosted inside VMware, with nested virtualization enabled).
I'm trying to work on the 4.1 branch, so I issued a:
./autogen.sh --with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
And after that I'm stuck at the "make squashfs" step: it never ends (it keeps printing dots forever, with no errors/warnings in the log messages and no apparent activity on the virtual disk image).
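For completeness, the full sequence I'm running is roughly the following (the clone URL is my assumption based on the gitweb link above; the rest is straight from the README):
  git clone https://gerrit.ovirt.org/ovirt-node-ng
  cd ovirt-node-ng
  ./autogen.sh --with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
  make squashfs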
Invoking it in debug mode and connecting to the VNC console shows the detailed Plymouth startup listing stuck (latest messages displayed: "Starting udev Wait for Complete Device Initialization..." and "Starting Device-Mapper Multipath Device Controller...")
I wonder if it's actually supposed to be run only from a recent Fedora (the "dnf" reference seems a good indicator): if so, which version?
I kindly ask for advice: has anyone succeeded in modifying/reproducing NGN squash images recently? If so, how? :-)
Many thanks in advance,
Giuseppe
3
2
Hi,
I deployed a small cluster with 2 oVirt hosts and a GlusterFS cluster some
time ago, and recently, during a software upgrade, I noticed that I made some
mistakes during the installation:
if the host that was deployed first is taken down for an upgrade
(powered off or rebooted), the engine becomes unavailable (even though all
VMs and the hosted engine were migrated to the second host in advance).
I suspect this is due to the missing
mnt_options=backup-volfile-servers=host1.domain.com;host2.domain.com
option for the hosted engine storage domain.
Is there a good way to fix this? I have tried editing
/etc/ovirt-hosted-engine/hosted-engine.conf manually to add the missing
mnt_options, but after a while I noticed that those changes were gone.
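What I added was roughly the following (the hostnames are placeholders, and I am assuming the usual storage= and mnt_options= keys in hosted-engine.conf):
  storage=host1.domain.com:/engine
  mnt_options=backup-volfile-servers=host2.domain.com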
Any suggestions?
Thanks in advance!
Artem
2
2
Please help, I'm really not sure what else to try at this point. Thank you
for reading!
I'm still working on trying to get my hosted engine running after a botched
upgrade to 4.2. Storage is NFS, mounted from within one of the hosts. Right
now I have three CentOS 7 hosts that are fully updated with yum packages from
oVirt 4.2; the engine was also fully updated with yum packages and failed to
come up after reboot. As of right now, everything should have full yum
updates and all hosts should have 4.2 rpms. I have global maintenance mode on
right now and started the hosted engine on one of the three hosts; the status
is currently:
Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
this is what I get when trying to run hosted-engine --console
The engine VM is running on this host
error: failed to get domain 'HostedEngine'
error: Domain not found: no domain with matching name 'HostedEngine'
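As a read-only sanity check on the name mismatch, the standard libvirt listing below (nothing oVirt-specific assumed) shows whatever domain names are actually registered:
  virsh -r list --all
If the engine VM appears there under a different name, that would explain why nothing matches 'HostedEngine'.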
Here are logs from various sources when I start the VM on HOST3:
hosted-engine --vm-start
Command VM.getStats with args {'vmID':
'4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
(code=1, message=Virtual machine does not exist: {'vmId':
u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
Jan 11 16:55:57 cultivar3 systemd-machined: New machine qemu-110-Cultivar.
Jan 11 16:55:57 cultivar3 systemd: Started Virtual Machine
qemu-110-Cultivar.
Jan 11 16:55:57 cultivar3 systemd: Starting Virtual Machine
qemu-110-Cultivar.
Jan 11 16:55:57 cultivar3 kvm: 3 guests now active
==> /var/log/vdsm/vdsm.log <==
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2718,
in getStorageDomainInfo
dom = self.validateSdUUID(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 304, in
validateSdUUID
sdDom.validate()
File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 515,
in validate
raise se.StorageDomainAccessError(self.sdUUID)
StorageDomainAccessError: Domain is either partially accessible or entirely
inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
jsonrpc/2::ERROR::2018-01-11
16:55:16,144::dispatcher::82::storage.Dispatcher::(wrapper) FINISH
getStorageDomainInfo error=Domain is either partially accessible or
entirely inaccessible: (u'248f46f0-d793-4581-9810-c9d965e2f286',)
==> /var/log/libvirt/qemu/Cultivar.log <==
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=Cultivar,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-108-Cultivar/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Conroe -m 8192 -realtime mlock=off -smp
2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-108-Cultivar/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2018-01-11T20:33:19,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-chardev
socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
2018-01-11T20:33:19.699999Z qemu-kvm: -chardev pty,id=charconsole0: char
device redirected to /dev/pts/2 (label charconsole0)
2018-01-11 20:38:11.640+0000: shutting down, reason=shutdown
2018-01-11 20:39:02.122+0000: starting up libvirt version: 3.2.0, package:
14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: cultivar3
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=Cultivar,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-109-Cultivar/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Conroe -m 8192 -realtime mlock=off -smp
2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-109-Cultivar/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2018-01-11T20:39:02,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-chardev
socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
2018-01-11T20:39:02.380773Z qemu-kvm: -chardev pty,id=charconsole0: char
device redirected to /dev/pts/2 (label charconsole0)
2018-01-11 20:53:11.407+0000: shutting down, reason=shutdown
2018-01-11 20:55:57.210+0000: starting up libvirt version: 3.2.0, package:
14.el7_4.7 (CentOS BuildSystem <http://bugs.centos.org>,
2018-01-04-19:31:34, c1bm.rdu2.centos.org) qemu version:
2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname:
cultivar3.grove.silverorange.com
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=Cultivar,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-110-Cultivar/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Conroe -m 8192 -realtime mlock=off -smp
2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
4013c829-c9d7-4b72-90d5-6fe58137504c -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-4.1708.el7.centos,serial=44454C4C-4300-1034-8035-CAC04F424331,uuid=4013c829-c9d7-4b72-90d5-6fe58137504c'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-110-Cultivar/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2018-01-11T20:55:57,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705,format=raw,if=none,id=drive-virtio-disk0,serial=c2dde892-f978-4dfc-a421-c8e04cf387f9,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7f:d6:83,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-chardev
socket,id=charchannel3,path=/var/lib/libvirt/qemu/channels/4013c829-c9d7-4b72-90d5-6fe58137504c.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=4,chardev=charchannel3,id=channel3,name=org.ovirt.hosted-engine-setup.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,seamless-migration=on
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x5 -msg timestamp=on
2018-01-11T20:55:57.468037Z qemu-kvm: -chardev pty,id=charconsole0: char
device redirected to /dev/pts/2 (label charconsole0)
==> /var/log/ovirt-hosted-engine-ha/broker.log <==
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 151, in get_raw_stats
f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
OSError: [Errno 2] No such file or directory:
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
StatusStorageThread::ERROR::2018-01-11
16:55:15,761::status_broker::92::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
Failed to read state.
Traceback (most recent call last):
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py",
line 88, in run
self._storage_broker.get_raw_stats()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 162, in get_raw_stats
.format(str(e)))
RequestError: failed to read metadata: [Errno 2] No such file or directory:
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
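Both tracebacks point at the same missing path under /var/run/vdsm/storage. A minimal check of the run-time links for that storage domain, plus a restart of the HA services, would look roughly like this (the UUID is copied from the error above; the service names are the standard ovirt-hosted-engine-ha ones):
  ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/
  systemctl restart ovirt-ha-broker ovirt-ha-agent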
==> /var/log/ovirt-hosted-engine-ha/agent.log <==
result = refresh_method()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 519, in refresh_vm_conf
content = self._get_file_content_from_shared_storage(VM)
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 484, in _get_file_content_from_shared_storage
config_volume_path = self._get_config_volume_path()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 188, in _get_config_volume_path
conf_vol_uuid
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/heconflib.py",
line 358, in get_volume_path
root=envconst.SD_RUN_DIR,
RuntimeError: Path to volume 4838749f-216d-406b-b245-98d0343fcf7f not found
in /run/vdsm/storag
==> /var/log/vdsm/vdsm.log <==
periodic/42::ERROR::2018-01-11
16:56:11,446::vmstats::260::virt.vmstats::(send_metrics) VM metrics
collection failed
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 197,
in send_metrics
data[prefix + '.cpu.usage'] = stat['cpuUsage']
KeyError: 'cpuUsage'
4
8

12 Jan '18
I performed an oVirt 4.2 upgrade on a 3-host cluster with NFS shared storage.
The shared storage is mounted from one of the hosts.
I upgraded the hosted engine first: downloading the 4.2 rpm, doing a yum
update, then running engine-setup, which seemed to complete successfully. At
the end it powered down the hosted engine VM, but it never came back up and I
was unable to start it.
I then proceeded to upgrade the three hosts (oVirt 4.2 rpm and a full yum
update) and rebooted each of them.
After some time the hosts did come back, and almost all of the VMs are
running again and seem to be working OK, with the exception of two:
1. The hosted engine VM still will not start; I've tried everything I can think of.
2. A VM that I know existed is not running and does not appear to exist; I
have no idea where it is or how to start it.
1. Hosted engine
From one of the hosts I get a weird error trying to start it:
# hosted-engine --vm-start
Command VM.getStats with args {'vmID':
'4013c829-c9d7-4b72-90d5-6fe58137504c'} failed:
(code=1, message=Virtual machine does not exist: {'vmId':
u'4013c829-c9d7-4b72-90d5-6fe58137504c'})
From the two other hosts I do not get the same error as above; sometimes it
appears to start, but --vm-status shows errors such as:
Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
Seeing these errors in syslog:
Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+0000: 1910: error :
qemuOpenFileAs:3183 : Failed to open file
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
No such file or directory
Jan 11 01:06:30 host0 libvirtd: 2018-01-11 05:06:30.473+0000: 1910: error :
qemuDomainStorageOpenStat:11492 : cannot stat file
'/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/23aa0a66-fa6c-4967-a1e5-fbe47c0cd705':
Bad file descriptor
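Purely as a sketch, using the UUIDs from the syslog lines above, checking whether the run-time link for that image actually exists on this host would be something like:
  ls -l /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/c2dde892-f978-4dfc-a421-c8e04cf387f9/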
2. Missing VM: virsh -r list on each host does not show the VM at all. I
know it existed and it is important. The log on one of the hosts even shows
that it was started recently and then stopped 10 or so minutes later:
Jan 10 18:47:17 host3 systemd-machined: New machine qemu-9-Berna.
Jan 10 18:47:17 host3 systemd: Started Virtual Machine qemu-9-Berna.
Jan 10 18:47:17 host3 systemd: Starting Virtual Machine qemu-9-Berna.
Jan 10 18:54:45 host3 systemd-machined: Machine qemu-9-Berna terminated.
How can I find out the status of the "Berna" VM and get it running again?
Thanks so much!
3
2
Hello Everyone,
Is it possible in 4.2 to migrate the hosted_engine to another storage domain
of the same type? Right now I am trying to migrate from an old to a new iSCSI
storage.
volga629
2
1