Default Quota could not be updated nor removed in ovirt hosted engine 4.1
by Manuel Luis Aznar
Hello there,
It seems that there is a bug with the Default Quota. As stated in the
subject, I am using oVirt Hosted Engine 4.1.
When I try to edit the Default Quota with the Data Center quota mode set
to Audit, the system replies the following:
Operation Canceled: Error while executing action: Cannot edit Quota. Quota
is not valid.
If I try to remove the Default Quota, the system replies:
Operation Canceled: Error while executing action: Cannot remove Quota.
Default quota cannot be changed.
On the other hand, if I do the same with the Data Center quota mode set to
Disabled, it says:
Trying to edit the Default Quota I get:
Operation canceled: Error while executing action: Cannot edit Quota. Quota
is not valid.
Trying to remove it I get:
Operation canceled: Error while executing action: Cannot remove Quota.
Default quota cannot be changed.
Of course I can create other quotas, and edit and remove them. The problem
is only with the Default Quota.
Any help in unblocking the system so that the Default Quota can be
edited/removed would be appreciated.
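In case it is useful, a minimal sketch of how the data center's quotas can be
listed over the REST API, just to see what the engine currently holds for the
Default Quota (the endpoint path is from the oVirt 4 API as far as I recall
it; <engine-host>, <dc-id> and the credentials are placeholders):

$ curl -k -u 'admin@internal:password' \
    "https://<engine-host>/ovirt-engine/api/datacenters/<dc-id>/quotas"
# each <quota> element carries its id; the Default Quota should appear here,
# which at least confirms what the engine currently stores for it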
Thanks in advance,
Manuel
oVirt 3.6 on CentOS 6.7 based HyperConverged DC : Upgradable?
by Nicolas Ecarnot
[Please ignore the previous msg]
Hello,
One of our DCs is a very small one, though quite critical.
It's almost hyper-converged: hosts are compute+storage, but the engine
is standalone.
It's made of :
Hardware :
- one physical engine : CentOS 6.7
- 3 physical hosts : CentOS 7.2
Software :
- oVirt 3.6.5
- glusterFS 3.7.16 in replica-3, sharded.
The goal is to upgrade all this to oVirt 4.1.1, and also upgrade the
OSes (oVirt 4.x is only available on CentOS 7.x).
At present, only 3 VMs here are critical, and I have backups for them.
Still, I'm quite nervous about the path I have to follow and its
hazards, especially the gluster parts.
At first glance, I would go this way (feel free to comment):
- upgrade the OS of the engine : 6.7 -> 7.3
- upgrade the OS of the hosts : 7.2 -> 7.3
- upgrade and manage the upgrade of gluster, check the volumes...
- upgrade oVirt (engine then hosts)
But when upgrading the OSes, I guess it will also upgrade the gluster layer.
During this upgrade, I have no constraint to keep everything running,
total shutdown is acceptable.
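For the engine part, the rough sequence I have in mind is something like the
following (only a sketch from memory, assuming the engine is backed up and
restored onto a freshly installed CentOS 7.3 machine, since there is no
in-place upgrade from EL6 to EL7, and then stepped through 4.0 before 4.1;
repo and package names should be checked against the release notes):

# on the old CentOS 6.7 engine
$ engine-backup --mode=backup --file=engine-36.backup --log=backup.log

# on the freshly installed CentOS 7.3 machine, same oVirt version first
$ yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
$ yum install ovirt-engine
$ engine-backup --mode=restore --file=engine-36.backup --log=restore.log \
    --provision-db --restore-permissions
$ engine-setup

# then the in-place upgrades on the new OS, one version at a time
$ yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
$ yum update "ovirt-*-setup*"
$ engine-setup
# ...and the same again with ovirt-release41.rpm to reach 4.1.1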
Does the above procedure seem OK, or am I missing some essential points?
Thank you.
--
Nicolas ECARNOT
When creating a gluster brick in oVirt, what is the reason for having to fill in the RAID-parameters?
by Goorkate, B.J.
Hi all,
When creating a gluster brick in oVirt, I have to fill in the parameters of the
RAID volume the brick is on (that's how I understand it anyway):
RAID type, number of disks and stripe size.
What is the reason for that? Is the gluster brick optimised for these parameters?
I tried to find information about this, but no luck yet...
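My current guess (which I would love to have confirmed) is that the values are
used to align the LVM physical volume and the XFS filesystem under the brick
to the RAID geometry, roughly along these lines (a sketch only; the device
name and numbers are made-up examples):

# example: RAID6 of 12 disks (10 data disks) with a 128 KiB stripe unit;
# full stripe = 10 * 128 KiB = 1280 KiB, used as the LVM data alignment
$ pvcreate --dataalignment 1280K /dev/sdb
# XFS stripe unit/width derived from the same numbers
$ mkfs.xfs -f -d su=128k,sw=10 /dev/vg_brick/lv_brick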
Thanks in advance!
Regards,
Bertjan Goorkate
------------------------------------------------------------------------------
This message may contain confidential information and is intended exclusively
for the addressee. If you receive this message unintentionally, please do not
use the contents but notify the sender immediately by return e-mail. University
Medical Center Utrecht is a legal person by public law and is registered at
the Chamber of Commerce for Midden-Nederland under no. 30244197.
Please consider the environment before printing this e-mail.
ovirtvm-console : Failed to execute login on behalf - for user
by Eduardo Mayoral
Hi,
I am getting exactly the same issue here with 4.1, when trying to
log in to the serial console over SSH.
The user with domain is "emayoral_adm(a)arsyslan.es" (please note mailman
may translate the "at" character to a textual "_at_"). The first name
and last name as read from Active Directory are "Eduardo Mayoral" (with
no quotes).
The password is: 08.HJYqoce,nrW (OK, this is not the real password, but
it has the same special characters and approximate structure and length)
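For context, the failure shows up when I connect to the serial console proxy
on the engine over SSH, which is what triggers the LoginOnBehalf call.
Roughly like this (2222 and ovirt-vmconsole should be the defaults, adjust if
your setup differs):

$ ssh -t -p 2222 ovirt-vmconsole@<engine-host>
# the proxy should print the list of available consoles; here it fails
# because its LoginOnBehalf request is rejected by the engine, as shown
# in the log below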
This is the engine.log output.
2017-03-02 11:13:31,917Z INFO
[org.ovirt.engine.core.bll.aaa.LoginOnBehalfCommand] (default task-25)
[5d9b7d18] Running command: LoginOnBehalfCommand internal: true.
2017-03-02 11:13:31,938Z ERROR
[org.ovirt.engine.core.sso.utils.SsoUtils] (default task-33) []
OAuthException server_error: java.text.ParseException: Invalid character
' ' encountered.
2017-03-02 11:13:31,939Z ERROR
[org.ovirt.engine.core.bll.aaa.LoginOnBehalfCommand] (default task-25)
[5d9b7d18] Unable to create engine session: EngineException: user
emayoral_adm(a)arsyslan.es in domain 'arsyslan.es-authz (Failed with error
PRINCIPAL_NOT_FOUND and code 5200)
2017-03-02 11:13:31,945Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-25) [5d9b7d18] EVENT_ID:
USER_LOGIN_ON_BEHALF_FAILED(1,402), Correlation ID: 5d9b7d18, Call
Stack: null, Custom Event ID: -1, Message: Failed to execute login on
behalf - for user emayoral_adm(a)arsyslan.es.
2017-03-02 11:13:31,945Z ERROR
[org.ovirt.engine.core.services.VMConsoleProxyServlet] (default task-25)
[5d9b7d18] Error processing request: : java.lang.RuntimeException:
Unable to create session using LoginOnBehalf
at
org.ovirt.engine.core.services.VMConsoleProxyServlet.availableConsoles(VMConsoleProxyServlet.java:102)
[services.jar:]
at
org.ovirt.engine.core.services.VMConsoleProxyServlet.produceContentFromParameters(VMConsoleProxyServlet.java:177)
[services.jar:]
at
org.ovirt.engine.core.services.VMConsoleProxyServlet.doPost(VMConsoleProxyServlet.java:213)
[services.jar:]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:805)
[undertow-core-1.4.0.Final.jar:1.4.0.Final]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_121]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_121]
Did you find the cause for this and possible fixes or workarounds?
--
Eduardo Mayoral Jimeno (emayoral(a)arsys.es)
Administrador de sistemas. Departamento de Plataformas. Arsys internet.
+34 941 620 145 ext. 5153
vms stucked after migrate from 4.1.0 to 4.1.1 host
by Jiří Sléžka
Hi all,
I have just upgraded the ovirt manager from 4.1.0 to 4.1.1, then I
upgraded one host. After reboot and activation of this host some vms
started to migrate to it.
Some time after migration the CPU of these vms goes to 100% and the vms
become unreachable.
Here is /var/log/libvirt/qemu/hypnos.log (hypnos is one of these vms):
https://pastebin.com/U30xfBMJ
Powering off and starting the vm again solves this, but I would like to
have a clue about what is happening.
I suspect this line
2017-03-27T22:09:23.746710Z qemu-kvm: warning: TSC frequency mismatch
between VM (2600031 kHz) and host (2100024 kHz), and TSC scaling unavailable
Original host uses "AMD Opteron(TM) Processor 6238" with "cpu MHz:
2600.113" and upgraded host uses "AMD Opteron(tm) Processor 6172" with
"cpu MHz: 2100.097".
Cheers,
Jiri
Details about why a live migration failed
by Davide Ferrari
Hello,
I have a VM on a host that apparently cannot be live-migrated away. I'm
trying to migrate it to another (superior) cluster, but please consider:
- the VM isn't receiving much traffic and is not doing much at all
- I've already successfully live migrated other VMs from this host to the
same other cluster
Looking through engine.log I cannot see anything interesting apart from
the generic messages (I grepped for the job ID):
2017-03-24 10:30:52,186 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [20518310] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: druid-co01., Source:
vmhost04, Destination: vmhost06, User: admin@internal-authz).
2017-03-24 10:34:28,072 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-2) [6d6a3a53] Correlation ID: 20518310, Job
ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom Event
ID: -1, Message: Failed to migrate VM druid-co01. to Host vmhost06 . Trying
to migrate to another Host.
2017-03-24 10:34:28,676 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-2) [6d6a3a53] Correlation ID: 20518310, Job
ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom Event
ID: -1, Message: Migration started (VM: druid-co01., Source: vmhost04,
Destination: vmhost11, User: admin@internal-authz).
2017-03-24 10:37:59,097 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-40) [779e9773] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Failed to migrate VM druid-co01. to Host vmhost11 .
Trying to migrate to another Host.
2017-03-24 10:38:00,626 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-40) [779e9773] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: druid-co01, Source: vmhost04,
Destination: vmhost08, User: admin@internal-authz).
2017-03-24 10:41:29,441 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [6f032492] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Failed to migrate VM druid-co01 to Host vmhost08 .
Trying to migrate to another Host.
2017-03-24 10:41:29,488 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [6f032492] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration failed (VM: druid-co01, Source: vmhost04).
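For completeness, the engine log above only records the scheduler retrying on
other hosts; the underlying libvirt error should be visible in vdsm.log and in
the per-VM qemu log on the source host (default paths assumed, VM name taken
from the messages above):

# on the source host vmhost04
$ grep -iE 'migration|migrateToURI' /var/log/vdsm/vdsm.log | tail -50
$ tail -50 /var/log/libvirt/qemu/druid-co01.log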
--
Davide Ferrari
Senior Systems Engineer
Re: [ovirt-users] Import KVM/libvirt VMs to oVirt 4.1 with External Provider
by Pradeep Antil
Hi Team,
While importing VMs with the external provider of type KVM on my oVirt setup,
I am getting the below error:
"All chosen VMs couldn't be retrieved by the external system and therefore
have been filtered. Please see log for details."
As per Shahar's suggestion yesterday I was able to view the virtual machines
of my KVM hypervisor in the oVirt Portal. I have also created the export
domain in my DC.
Any idea how to resolve this issue so that I can start importing VMs?
[image: Inline image 1]
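In case it helps the debugging, one cross-check based on Shahar's earlier hint
(quoted below) would be to confirm that the vdsm user itself can enumerate the
guests over qemu+ssh, since that is the account the proxy host uses. A rough
sketch, with the host name as a placeholder:

$ sudo -u vdsm virsh -r -c 'qemu+ssh://root@<KVM-HYP-NAME>/system' list --all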
Thanks in Advance
On Wed, Mar 29, 2017 at 6:17 PM, Pradeep Antil <pradeepantil(a)gmail.com>
wrote:
> Hi Shahar,
>
> Thanks for the quick response. Issue is resolved now with your suggested
> steps.
>
> Thank you very much !!!!
>
>
>
> On Wed, Mar 29, 2017 at 5:38 PM, Shahar Havivi <shavivi(a)redhat.com> wrote:
>
>> when you run virsh you where logged in as root user,
>> the user that oVirt is running is vdsm
>>
>> you need to run:
>> $ sudo -u vdsm ssh-keygen
>> $ sudo -u vdsm ssh-copy-id user@kvmhost
>>
>> you can look at wiki we have for Xen to generate the ssh keys but its the
>> same for kvm.
>> https://www.ovirt.org/develop/release-management/features/vi
>> rt/XenToOvirt/
>>
>> Shahar.
>>
>> On Wed, Mar 29, 2017 at 12:34 PM, Pradeep Antil <pradeepantil(a)gmail.com>
>> wrote:
>>
>>>
>>> Hello Folks,
>>>
>>> I am trying to import KVM guest VMs to my oVirt Server using external
>>> Provider. But when i add external provider with type KVM and do the testing
>>> that time i am getting below error
>>>
>>> Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>>> VDSGenericException: VDSErrorException: Failed to
>>> GetVmsNamesFromExternalProviderVDS, error = Cannot recv data: Host key
>>> verification failed.: Connection reset by peer, code = 65
>>> at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.c
>>> reateDefaultConcreteException(VdsBrokerCommand.java:76) [vdsbroker.jar:]
>>> at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.
>>> createException(BrokerCommandBase.java:222) [vdsbroker.jar:]
>>>
>>>
>>> [image: Inline image 2]
>>>
>>> I have already configured passwordless authentication with SSH keys
>>> between the proxy host and my KVM hypervisor, though I can reach the
>>> KVM hypervisor with virsh.
>>>
>>> [root@lplinnd1hypov13 ~]# virsh -c qemu+ssh://root@<KVM-HYP-NAME>/system
>>> Welcome to virsh, the virtualization interactive terminal.
>>>
>>> Type: 'help' for help with commands
>>> 'quit' to quit
>>>
>>> virsh #
>>>
>>> Any idea how to resolve this issue, so that i can add KVM as external
>>> provider and start importing my KVM guest VMs in ovirt setup
>>>
>>>
>>> Thanks in Advance
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Best Regards
>>> Pradeep Kumar
>>>
>>>
>>>
>>
>
>
> --
> Best Regards
> Pradeep Kumar
>
--
Best Regards
Pradeep Kumar
Re: [ovirt-users] mOvirt 1.7 - Engine 4.0.6 - No disks per VM
by Markus Stockhausen
Thanks a lot,

I will test and give feedback.

Markus

On 29.03.2017 at 5:13 PM, Filip Krepinsky <fkrepins(a)redhat.com> wrote:

On Wed, Mar 29, 2017 at 2:29 PM, Filip Krepinsky <fkrepins(a)redhat.com> wrote:

On Mon, Mar 27, 2017 at 1:15 PM, Markus Stockhausen
<stockhausen(a)collogia.de> wrote:

Hi there,

my smartphone updated mOvirt these days to 1.7. Since then I always
get errors when trying to access the disks dialogue of a VM in mOVirt.
It boils down to the URL https://<engine-host>/ovirt-engine/api/vms/<vm-id>/disks
Result is always 404.

Hi,

thanks a lot for reporting this. Yes, these are wrong calls from moVirt and
the affected oVirt versions are 4.0.1 - 4.0.9.
We will release a fix soon.

Fixed version should be available on Google Play in several hours.
Can you check if it helps?

Best regards

Filip

A simple cross check in the webbrowser returns the same result.
At least we can get to the predecessor item without trouble.
https://<engine-host>/ovirt-engine/api/vms/<vm-id>
Looking at that output I'd say that the url should be diskattachment
instead of disks:

<actions>
  ...
</actions>
...
<link href="/ovirt-engine/api/vms/542affb0-7a2f-4297-84f4-728a315a6abd/diskattachments" rel="diskattachments"/>
...

Our cluster is still 4.0.6 - maybe some incompatibility?

Best regards.

Markus
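To double-check this from a browser or the command line, the two endpoints
discussed above can be compared directly. A small sketch (credentials and ids
are placeholders; -k only skips certificate verification):

# returns 404 on the affected 4.0.x engines
$ curl -k -u 'admin@internal:password' \
    "https://<engine-host>/ovirt-engine/api/vms/<vm-id>/disks"
# the collection referenced by the <link rel="diskattachments"> element
$ curl -k -u 'admin@internal:password' \
    "https://<engine-host>/ovirt-engine/api/vms/<vm-id>/diskattachments"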
****************************************************************************
This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.
e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.
Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln
executive board:
Kadir Akin
Dr. Michael Höhnerbach
President of the supervisory board:
Hans Kristian Langva
Registry office: district court Cologne
Register number: HRB 52 497
****************************************************************************
migration failed - "Cannot get interface MTU on 'vdsmbr_...'
by Devin A. Bougie
Hi, All. We have a new oVirt 4.1.1 cluster up with the OVS switch type. Everything seems to be working great, except for live migration.
I believe the red flag in vdsm.log on the source is:
Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
This results from vdsm assigning an arbitrary bridge name to each OVS bridge.
Please see below for more details on the bridges and excerpts from the logs. Any help would be greatly appreciated.
Many thanks,
Devin
SOURCE OVS BRIDGES:
# ovs-vsctl show
6d96d9a5-e30d-455b-90c7-9e9632574695
Bridge "vdsmbr_QwORbsw2"
Port "vdsmbr_QwORbsw2"
Interface "vdsmbr_QwORbsw2"
type: internal
Port "vnet0"
Interface "vnet0"
Port classepublic
Interface classepublic
type: internal
Port "ens1f0"
Interface "ens1f0"
Bridge "vdsmbr_9P7ZYKWn"
Port ovirtmgmt
Interface ovirtmgmt
type: internal
Port "ens1f1"
Interface "ens1f1"
Port "vdsmbr_9P7ZYKWn"
Interface "vdsmbr_9P7ZYKWn"
type: internal
ovs_version: "2.7.0"
DESTINATION OVS BRIDGES:
# ovs-vsctl show
f66d765d-712a-4c81-b18e-da1acc9cfdde
Bridge "vdsmbr_vdpp0dOd"
Port "vdsmbr_vdpp0dOd"
Interface "vdsmbr_vdpp0dOd"
type: internal
Port "ens1f0"
Interface "ens1f0"
Port classepublic
Interface classepublic
type: internal
Bridge "vdsmbr_3sEwEKd1"
Port "vdsmbr_3sEwEKd1"
Interface "vdsmbr_3sEwEKd1"
type: internal
Port "ens1f1"
Interface "ens1f1"
Port ovirtmgmt
Interface ovirtmgmt
type: internal
ovs_version: "2.7.0"
SOURCE VDSM LOG:
...
2017-03-27 10:57:02,567-0400 INFO (jsonrpc/1) [vdsm.api] START migrate args=(<virt.vm.Vm object at 0x3410810>, {u'incomingLimit': 2, u'src': u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', u'tunneled': u'false', u'enableGuestEvents': False, u'dst': u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37)
2017-03-27 10:57:02,570-0400 INFO (jsonrpc/1) [vdsm.api] FINISH migrate return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 0} (api:43)
2017-03-27 10:57:02,570-0400 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call VM.migrate succeeded in 0.01 seconds (__init__:515)
2017-03-27 10:57:03,028-0400 INFO (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM took: 0 seconds (migration:455)
2017-03-27 10:57:03,028-0400 INFO (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri tcp://192.168.55.81 (migration:480)
2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate (migration:429)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in run
self._startUnderlyingMigration(time.time())
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in _startUnderlyingMigration
self._perform_with_downtime_thread(duri, muri)
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in _perform_with_downtime_thread
self._perform_migration(duri, muri)
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in _perform_migration
self._vm._dom.migrateToURI3(duri, params, flags)
File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device
2017-03-27 10:57:03,435-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:33716 (protocoldetector:72)
2017-03-27 10:57:03,452-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:33716 (protocoldetector:127)
2017-03-27 10:57:03,452-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-27 10:57:03,456-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:03,589-0400 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:515)
2017-03-27 10:57:03,592-0400 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00 seconds (__init__:515)
2017-03-27 10:57:04,494-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:33718 (protocoldetector:72)
2017-03-27 10:57:04,500-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:33718 (protocoldetector:127)
2017-03-27 10:57:04,500-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-27 10:57:04,501-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:04,559-0400 INFO (periodic/2) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-03-27 10:57:04,559-0400 INFO (periodic/2) [dispatcher] Run and protect: repoStats, Return response: {u'016ceee8-9117-4e8a-b611-f58f6763a098': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000262909', 'lastCheck': '5.3', 'valid': True}, u'2438f819-e7f5-4bb1-ad0d-5349fa371e6e': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000342019', 'lastCheck': '5.3', 'valid': True}, u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000239611', 'lastCheck': '0.6', 'valid': True}} (logUtils:54)
2017-03-27 10:57:04,669-0400 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:515)
2017-03-27 10:57:04,684-0400 INFO (jsonrpc/0) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-03-27 10:57:04,684-0400 INFO (jsonrpc/0) [dispatcher] Run and protect: repoStats, Return response: {u'016ceee8-9117-4e8a-b611-f58f6763a098': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000262909', 'lastCheck': '5.4', 'valid': True}, u'2438f819-e7f5-4bb1-ad0d-5349fa371e6e': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000342019', 'lastCheck': '5.4', 'valid': True}, u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000239611', 'lastCheck': '0.7', 'valid': True}} (logUtils:54)
2017-03-27 10:57:04,770-0400 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.09 seconds (__init__:515)
...
DESTINATION VDSM LOG:
...
2017-03-27 10:57:01,515-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:01,652-0400 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:02,572-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::ffff:192.168.55.84:44582 (protocoldetector:72)
2017-03-27 10:57:02,577-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::ffff:192.168.55.84:44582 (protocoldetector:127)
2017-03-27 10:57:02,578-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-27 10:57:02,579-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:02,748-0400 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.ping succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:02,774-0400 INFO (jsonrpc/4) [vdsm.api] START __init__ args=(<virt.vm.Vm object at 0x4132e50>, <clientIF.clientIF object at 0x7f453c01e650>, {u'guestFQDN': u'lnx16.classe.cornell.edu', u'acpiEnable': u'true', u'emulatedMachine': u'pc-i440fx-rhel7.3.0', u'afterMigrationStatus': u'', u'enableGuestEvents': False, u'vmId': u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'elapsedTimeOffset': 1321.2906959056854, u'guestDiskMapping': {u'0QEMU_QEMU_HARDDISK_21d18729-279f-435b-9': {u'name': u'/dev/sda'}, u'QEMU_DVD-ROM_QM00003': {u'name': u'/dev/sr0'}}, u'transparentHugePages': u'true', u'timeOffset': u'0', u'cpuType': u'Haswell-noTSX', u'custom': {u'device_9d15a0c8-a122-43b9-89c2-3ad530ebc985': u"VmDevice:{id='VmDeviceId:{deviceId='9d15a0c8-a122-43b9-89c2-3ad530ebc985', vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}", u'device_9d15a0c8-a122-43b9-89c2-3ad530ebc985device_0bbb6377-22db-4d04-a501-64de81bab622device_758565a5-4378-4412-8ec1-2df7bb231c14device_7ead63f7-b119-434d-a2f7-081fd06229e4': u"VmDevice:{id='VmDeviceId:{deviceId='7ead63f7-b119-434d-a2f7-081fd06229e4', vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682'}', device='spicevmc', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=3}', managed='false', plugged='true', readOnly='false', deviceAlias='channel2', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}", u'device_9d15a0c8-a122-43b9-89c2-3ad530ebc985device_0bbb6377-22db-4d04-a501-64de81bab622': u"VmDevice:{id='VmDeviceId:{deviceId='0bbb6377-22db-4d04-a501-64de81bab622', vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}", u'device_9d15a0c8-a122-43b9-89c2-3ad530ebc985device_0bbb6377-22db-4d04-a501-64de81bab622device_758565a5-4378-4412-8ec1-2df7bb231c14': u"VmDevice:{id='VmDeviceId:{deviceId='758565a5-4378-4412-8ec1-2df7bb231c14', vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}"}, u'pauseCode': u'NOERR', u'migrationDest': u'libvirt', u'guestNumaNodes': [{u'nodeIndex': 0, u'cpus': u'0', u'memory': u'1024'}], u'display': u'qxl', u'smp': u'1', u'vmType': u'kvm', u'_srcDomXML': u'<domain type=\'kvm\' id=\'4\'>\n <name>lnx16</name>\n <uuid>cf9c5dbf-3924-47c6-b323-22ac90a1f682</uuid>\n <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">\n <ovirt:qos/>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>4194304</maxMemory>\n <memory unit=\'KiB\'>1048576</memory>\n <currentMemory unit=\'KiB\'>1048576</currentMemory>\n <vcpu placement=\'static\' current=\'1\'>16</vcpu>\n <cputune>\n <shares>1020</shares>\n </cputune>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry 
name=\'product\'>oVirt Node</entry>\n <entry name=\'version\'>7.3-4.sl7</entry>\n <entry name=\'serial\'>38C15485-D270-11E6-8752-0090FAEDD7C8</entry>\n <entry name=\'uuid\'>cf9c5dbf-3924-47c6-b323-22ac90a1f682</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-i440fx-rhel7.3.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\'>\n <model fallback=\'allow\'>Haswell-noTSX</model>\n <topology sockets=\'16\' cores=\'1\' threads=\'1\'/>\n <numa>\n <cell id=\'0\' cpus=\'0\' memory=\'1048576\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' type=\'raw\'/>\n <source startupPolicy=\'optional\'/>\n <backingStore/>\n <target dev=\'hdc\' bus=\'ide\'/>\n <readonly/>\n <alias name=\'ide0-1-0\'/>\n <address type=\'drive\' controller=\'0\' bus=\'1\' target=\'0\' unit=\'0\'/>\n </disk>\n <disk type=\'block\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'raw\' cache=\'none\' error_policy=\'stop\' io=\'native\'/>\n <source dev=\'/rhev/data-center/00000001-0001-0001-0001-000000000311/016ceee8-9117-4e8a-b611-f58f6763a098/images/21d18729-279f-435b-90ab-a2995f041bc6/67b12700-5faf-49b2-af40-6178023b3794\'/>\n <backingStore/>\n <target dev=\'sda\' bus=\'scsi\'/>\n <serial>21d18729-279f-435b-90ab-a2995f041bc6</serial>\n <boot order=\'1\'/>\n <alias name=\'scsi0-0-0-0\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'0\'/>\n </disk>\n <controller type=\'usb\' index=\'0\' model=\'piix3-uhci\'>\n <alias name=\'usb\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x2\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <alias name=\'scsi0\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x04\' function=\'0x0\'/>\n </controller>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'virtio-serial0\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x05\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pci-root\'>\n <alias name=\'pci.0\'/>\n </controller>\n <controller type=\'ide\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x1\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:51\'/>\n <source bridge=\'vdsmbr_QwORbsw2\'/>\n <virtualport type=\'openvswitch\'>\n <parameters interfaceid=\'3a24e764-32c8-4a2e-8dfd-02a776c3bef6\'/>\n </virtualport>\n <target dev=\'vnet0\'/>\n <model type=\'virtio\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <alias name=\'net0\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/cf9c5dbf-3924-47c6-b323-22ac90a1f682.com.redhat.rhevm.vdsm\'/>\n <target type=\'virtio\' name=\'com.redhat.rhevm.vdsm\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n 
</channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/cf9c5dbf-3924-47c6-b323-22ac90a1f682.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'connected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input0\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <graphics type=\'spice\' tlsPort=\'5900\' autoport=\'yes\' listen=\'128.84.46.164\' defaultMode=\'secure\' passwdValidTo=\'2017-03-27T14:37:09\'>\n <listen type=\'address\' address=\'128.84.46.164\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'8192\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'video0\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <alias name=\'balloon0\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x06\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'rng0\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x07\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'none\' model=\'none\'/>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+265:+107</label>\n <imagelabel>+265:+107</imagelabel>\n </seclabel>\n</domain>\n', u'memSize': 1024, u'smpCoresPerSocket': u'1', u'vmName': u'lnx16', u'nice': u'0', u'username': u'(unknown)', u'maxMemSize': 4096, u'bootMenuEnable': u'false', u'smpThreadsPerCore': u'1', u'smartcardEnable': u'false', u'clientIp': u'', u'guestAgentAPIVersion': 3, u'kvmEnable': u'true', u'pitReinjection': u'false', u'displayNetwork': u'classepublic', u'devices': [{u'target': 1048576, u'alias': u'balloon0', u'specParams': {u'model': u'virtio'}, u'deviceId': u'ee3ebcbf-08d9-481e-907c-efe32b8ee869', u'address': {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'slot': u'0x06'}, u'device': u'memballoon', u'type': u'balloon'}, {u'alias': u'rng0', u'specParams': {u'source': u'urandom'}, u'deviceId': u'0c522e52-a4c9-4bd9-a367-7b6fddedc931', u'address': {u'slot': u'0x07', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'virtio', u'model': u'virtio', u'type': u'rng'}, {u'index': u'0', u'alias': u'usb', u'specParams': {}, u'deviceId': u'3d4cbbda-4aa6-424e-95c6-bfae37e466b7', u'address': {u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x2'}, u'device': u'usb', u'model': u'piix3-uhci', u'type': u'controller'}, {u'index': u'0', u'alias': u'scsi0', u'specParams': {}, u'deviceId': u'0db369d1-81d8-4534-8437-105b851670c5', u'address': {u'slot': u'0x04', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', 
u'function': u'0x0'}, u'device': u'scsi', u'model': u'virtio-scsi', u'type': u'controller'}, {u'alias': u'virtio-serial0', u'specParams': {}, u'deviceId': u'553f9e1c-d811-4cb6-82bc-d567cd8e50bd', u'address': {u'slot': u'0x05', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'virtio-serial', u'type': u'controller'}, {u'alias': u'video0', u'specParams': {u'vram': u'8192', u'vgamem': u'16384', u'heads': u'1', u'ram': u'65536'}, u'deviceId': u'2ece7a34-22ec-49bc-b48c-df4660d13d71', u'address': {u'slot': u'0x02', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'qxl', u'type': u'video'}, {u'device': u'spice', u'specParams': {u'fileTransferEnable': u'true', u'spiceSecureChannels': u'smain,sinputs,scursor,splayback,srecord,sdisplay,ssmartcard,susbredir', u'displayNetwork': u'classepublic', u'displayIp': u'128.84.46.164', u'copyPasteEnable': u'true'}, u'type': u'graphics', u'deviceId': u'a7076b55-ddbb-4932-aca4-c567d9576c54', u'tlsPort': u'5900'}, {u'nicModel': u'pv', u'macAddr': u'00:1a:4a:16:01:51', u'linkActive': True, u'network': u'classepublic', u'alias': u'net0', u'filter': u'vdsm-no-mac-spoofing', u'specParams': {u'inbound': {}, u'outbound': {}}, u'deviceId': u'5110646a-b5d4-4b62-8188-b18cec8c15df', u'address': {u'slot': u'0x03', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'bridge', u'type': u'interface', u'name': u'vnet0'}, {u'index': u'2', u'iface': u'ide', u'name': u'hdc', u'alias': u'ide0-1-0', u'specParams': {u'path': u''}, u'readonly': u'True', u'deviceId': u'75deba88-b39d-478c-83d3-cc535b43bb28', u'address': {u'bus': u'1', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'}, u'device': u'cdrom', u'shared': u'false', u'path': u'', u'type': u'disk'}, {u'poolID': u'00000001-0001-0001-0001-000000000311', u'volumeInfo': {u'domainID': u'016ceee8-9117-4e8a-b611-f58f6763a098', u'volType': u'path', u'leaseOffset': 111149056, u'volumeID': u'67b12700-5faf-49b2-af40-6178023b3794', u'leasePath': u'/dev/016ceee8-9117-4e8a-b611-f58f6763a098/leases', u'imageID': u'21d18729-279f-435b-90ab-a2995f041bc6', u'path': u'/rhev/data-center/mnt/blockSD/016ceee8-9117-4e8a-b611-f58f6763a098/images/21d18729-279f-435b-90ab-a2995f041bc6/67b12700-5faf-49b2-af40-6178023b3794'}, u'index': 0, u'iface': u'scsi', u'apparentsize': u'107374182400', u'alias': u'scsi0-0-0-0', u'imageID': u'21d18729-279f-435b-90ab-a2995f041bc6', u'readonly': u'False', u'shared': u'false', u'truesize': u'107374182400', u'type': u'disk', u'domainID': u'016ceee8-9117-4e8a-b611-f58f6763a098', u'reqsize': u'0', u'format': u'raw', u'deviceId': u'21d18729-279f-435b-90ab-a2995f041bc6', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'}, u'device': u'disk', u'path': u'/rhev/data-center/00000001-0001-0001-0001-000000000311/016ceee8-9117-4e8a-b611-f58f6763a098/images/21d18729-279f-435b-90ab-a2995f041bc6/67b12700-5faf-49b2-af40-6178023b3794', u'propagateErrors': u'off', u'optional': u'false', u'name': u'sda', u'bootOrder': u'1', u'volumeID': u'67b12700-5faf-49b2-af40-6178023b3794', u'specParams': {}, u'discard': False, u'volumeChain': [{u'domainID': u'016ceee8-9117-4e8a-b611-f58f6763a098', u'volType': u'path', u'leaseOffset': 111149056, u'volumeID': u'67b12700-5faf-49b2-af40-6178023b3794', u'leasePath': u'/dev/016ceee8-9117-4e8a-b611-f58f6763a098/leases', u'imageID': u'21d18729-279f-435b-90ab-a2995f041bc6', u'path': 
u'/rhev/data-center/mnt/blockSD/016ceee8-9117-4e8a-b611-f58f6763a098/images/21d18729-279f-435b-90ab-a2995f041bc6/67b12700-5faf-49b2-af40-6178023b3794'}]}, {u'device': u'ide', u'alias': u'ide', u'type': u'controller', u'address': {u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x1'}}, {u'device': u'unix', u'alias': u'channel0', u'type': u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'1'}}, {u'device': u'unix', u'alias': u'channel1', u'type': u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'2'}}, {u'device': u'spicevmc', u'alias': u'channel2', u'type': u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'3'}}], u'memGuaranteedSize': 1024, u'status': u'Up', u'maxVCpus': u'16', u'guestIPs': u'128.84.46.16 192.168.122.1', u'statusTime': u'4528256170', u'maxMemSlots': 16}, False) kwargs={} (api:37)
2017-03-27 10:57:02,776-0400 INFO (jsonrpc/4) [vdsm.api] FINISH __init__ return=None (api:43)
2017-03-27 10:57:02,777-0400 INFO (vm/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') VM wrapper has started (vm:1915)
2017-03-27 10:57:02,778-0400 INFO (vm/cf9c5dbf) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'016ceee8-9117-4e8a-b611-f58f6763a098', spUUID=u'00000001-0001-0001-0001-000000000311', imgUUID=u'21d18729-279f-435b-90ab-a2995f041bc6', volUUID=u'67b12700-5faf-49b2-af40-6178023b3794', options=None) (logUtils:51)
2017-03-27 10:57:02,809-0400 INFO (vm/cf9c5dbf) [dispatcher] Run and protect: getVolumeSize, Return response: {'truesize': '107374182400', 'apparentsize': '107374182400'} (logUtils:54)
2017-03-27 10:57:02,809-0400 INFO (vm/cf9c5dbf) [vds] prepared volume path: (clientIF:374)
2017-03-27 10:57:02,810-0400 INFO (vm/cf9c5dbf) [dispatcher] Run and protect: prepareImage(sdUUID=u'016ceee8-9117-4e8a-b611-f58f6763a098', spUUID=u'00000001-0001-0001-0001-000000000311', imgUUID=u'21d18729-279f-435b-90ab-a2995f041bc6', leafUUID=u'67b12700-5faf-49b2-af40-6178023b3794', allowIllegal=False) (logUtils:51)
2017-03-27 10:57:02,859-0400 INFO (vm/cf9c5dbf) [storage.LVM] Activating lvs: vg=016ceee8-9117-4e8a-b611-f58f6763a098 lvs=['67b12700-5faf-49b2-af40-6178023b3794'] (lvm:1289)
2017-03-27 10:57:02,947-0400 INFO (vm/cf9c5dbf) [storage.LVM] Refreshing lvs: vg=016ceee8-9117-4e8a-b611-f58f6763a098 lvs=['leases'] (lvm:1285)
2017-03-27 10:57:02,998-0400 INFO (vm/cf9c5dbf) [dispatcher] Run and protect: prepareImage, Return response: {'info': {'domainID': u'016ceee8-9117-4e8a-b611-f58f6763a098', 'volType': 'path', 'leaseOffset': 111149056, 'path': u'/rhev/data-center/mnt/blockSD/016ceee8-9117-4e8a-b611-f58f6763a098/images/21d18729-279f-435b-90ab-a2995f041bc6/67b12700-5faf-49b2-af40-6178023b3794', 'volumeID': '67b12700-5faf-49b2-af40-6178023b3794', 'leasePath': '/dev/016ceee8-9117-4e8a-b611-f58f6763a098/leases', 'imageID': u'21d18729-279f-435b-90ab-a2995f041bc6'}, 'path': u'/rhev/data-center/00000001-0001-0001-0001-000000000311/016ceee8-9117-4e8a-b611-f58f6763a098/images/21d18729-279f-435b-90ab-a2995f041bc6/67b12700-5faf-49b2-af40-6178023b3794', 'imgVolumesInfo': [{'domainID': u'016ceee8-9117-4e8a-b611-f58f6763a098', 'volType': 'path', 'leaseOffset': 111149056, 'path': u'/rhev/data-center/mnt/blockSD/016ceee8-9117-4e8a-b611-f58f6763a098/images/21d18729-279f-435b-90ab-a2995f041bc6/67b12700-5faf-49b2-af40-6178023b3794', 'volumeID': '67b12700-5faf-49b2-af40-6178023b3794', 'leasePath': '/dev/016ceee8-9117-4e8a-b611-f58f6763a098/leases', 'imageID': u'21d18729-279f-435b-90ab-a2995f041bc6'}]} (logUtils:54)
2017-03-27 10:57:02,998-0400 INFO (vm/cf9c5dbf) [vds] prepared volume path: /rhev/data-center/00000001-0001-0001-0001-000000000311/016ceee8-9117-4e8a-b611-f58f6763a098/images/21d18729-279f-435b-90ab-a2995f041bc6/67b12700-5faf-49b2-af40-6178023b3794 (clientIF:374)
2017-03-27 10:57:03,026-0400 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call VM.migrationCreate succeeded in 0.27 seconds (__init__:515)
2017-03-27 10:57:03,225-0400 INFO (jsonrpc/7) [vdsm.api] START destroy args=(<virt.vm.Vm object at 0x4132e50>, 1) kwargs={} (api:37)
2017-03-27 10:57:03,225-0400 INFO (jsonrpc/7) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Release VM resources (vm:4199)
2017-03-27 10:57:03,226-0400 INFO (jsonrpc/7) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Stopping connection (guestagent:430)
2017-03-27 10:57:03,227-0400 INFO (jsonrpc/7) [dispatcher] Run and protect: teardownImage(sdUUID=u'016ceee8-9117-4e8a-b611-f58f6763a098', spUUID=u'00000001-0001-0001-0001-000000000311', imgUUID=u'21d18729-279f-435b-90ab-a2995f041bc6', volUUID=None) (logUtils:51)
2017-03-27 10:57:03,237-0400 INFO (vm/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Changed state to Down: VM destroyed during the startup (code=10) (vm:1207)
2017-03-27 10:57:03,238-0400 INFO (vm/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Stopping connection (guestagent:430)
2017-03-27 10:57:03,260-0400 INFO (jsonrpc/7) [storage.LVM] Deactivating lvs: vg=016ceee8-9117-4e8a-b611-f58f6763a098 lvs=['67b12700-5faf-49b2-af40-6178023b3794'] (lvm:1297)
2017-03-27 10:57:03,311-0400 INFO (jsonrpc/7) [dispatcher] Run and protect: teardownImage, Return response: None (logUtils:54)
2017-03-27 10:57:03,312-0400 INFO (jsonrpc/7) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Stopping connection (guestagent:430)
2017-03-27 10:57:03,312-0400 WARN (jsonrpc/7) [root] File: /var/lib/libvirt/qemu/channels/cf9c5dbf-3924-47c6-b323-22ac90a1f682.com.redhat.rhevm.vdsm already removed (utils:120)
2017-03-27 10:57:03,312-0400 WARN (jsonrpc/7) [root] File: /var/lib/libvirt/qemu/channels/cf9c5dbf-3924-47c6-b323-22ac90a1f682.org.qemu.guest_agent.0 already removed (utils:120)
2017-03-27 10:57:03,313-0400 WARN (jsonrpc/7) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') timestamp already removed from stats cache (vm:1729)
2017-03-27 10:57:03,313-0400 INFO (jsonrpc/7) [dispatcher] Run and protect: inappropriateDevices(thiefId=u'cf9c5dbf-3924-47c6-b323-22ac90a1f682') (logUtils:51)
2017-03-27 10:57:03,314-0400 INFO (jsonrpc/7) [dispatcher] Run and protect: inappropriateDevices, Return response: None (logUtils:54)
2017-03-27 10:57:03,314-0400 INFO (jsonrpc/7) [vdsm.api] FINISH destroy return={'status': {'message': 'Done', 'code': 0}} (api:43)
2017-03-27 10:57:03,315-0400 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.destroy succeeded in 0.09 seconds (__init__:515)
2017-03-27 10:57:03,937-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:53024 (protocoldetector:72)
2017-03-27 10:57:03,943-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:53024 (protocoldetector:127)
2017-03-27 10:57:03,944-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-27 10:57:03,945-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:04,084-0400 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:04,086-0400 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00 seconds (__init__:515)
2017-03-27 10:57:04,767-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:53026 (protocoldetector:72)
2017-03-27 10:57:04,772-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:53026 (protocoldetector:127)
2017-03-27 10:57:04,773-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-27 10:57:04,774-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:04,913-0400 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:05,126-0400 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 0.22 seconds (__init__:515)
2017-03-27 10:57:06,040-0400 INFO (periodic/0) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-03-27 10:57:06,040-0400 INFO (periodic/0) [dispatcher] Run and protect: repoStats, Return response: {u'016ceee8-9117-4e8a-b611-f58f6763a098': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000426872', 'lastCheck': '4.6', 'valid': True}, u'2438f819-e7f5-4bb1-ad0d-5349fa371e6e': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000387272', 'lastCheck': '4.5', 'valid': True}, u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000353399', 'lastCheck': '0.5', 'valid': True}} (logUtils:54)
2017-03-27 10:57:08,170-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:53028 (protocoldetector:72)
2017-03-27 10:57:08,176-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:53028 (protocoldetector:127)
2017-03-27 10:57:08,176-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-27 10:57:08,177-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:08,321-0400 INFO (jsonrpc/6) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-03-27 10:57:08,322-0400 INFO (jsonrpc/6) [dispatcher] Run and protect: repoStats, Return response: {u'016ceee8-9117-4e8a-b611-f58f6763a098': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000426872', 'lastCheck': '2.1', 'valid': True}, u'2438f819-e7f5-4bb1-ad0d-5349fa371e6e': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000387272', 'lastCheck': '1.2', 'valid': True}, u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000353399', 'lastCheck': '2.8', 'valid': True}} (logUtils:54)
2017-03-27 10:57:08,324-0400 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:08,327-0400 INFO (jsonrpc/4) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-03-27 10:57:08,328-0400 INFO (jsonrpc/4) [dispatcher] Run and protect: repoStats, Return response: {u'016ceee8-9117-4e8a-b611-f58f6763a098': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000426872', 'lastCheck': '2.1', 'valid': True}, u'2438f819-e7f5-4bb1-ad0d-5349fa371e6e': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000387272', 'lastCheck': '1.2', 'valid': True}, u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000353399', 'lastCheck': '2.8', 'valid': True}} (logUtils:54)
2017-03-27 10:57:08,619-0400 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.29 seconds (__init__:515)
2017-03-27 10:57:08,625-0400 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.30 seconds (__init__:515)
2017-03-27 10:57:08,848-0400 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:09,370-0400 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:10,927-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:53030 (protocoldetector:72)
2017-03-27 10:57:10,932-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:53030 (protocoldetector:127)
2017-03-27 10:57:10,932-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-27 10:57:10,933-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:11,077-0400 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:11,080-0400 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00 seconds (__init__:515)
2017-03-27 10:57:13,802-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:53034 (protocoldetector:72)
2017-03-27 10:57:13,808-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:53034 (protocoldetector:127)
2017-03-27 10:57:13,809-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-27 10:57:13,810-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-27 10:57:14,015-0400 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-27 10:57:14,019-0400 INFO (jsonrpc/5) [dispatcher] Run and protect: getImagesList(sdUUID=u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a', options=None) (logUtils:51)
2017-03-27 10:57:14,050-0400 INFO (jsonrpc/5) [dispatcher] Run and protect: getImagesList, Return response: {'imageslist': ['a1d7f855-40d4-4f07-bf25-625284682b40', 'c61297c5-367b-4c98-a44b-de0318848a24', 'c2fc0c7c-596e-4011-8471-7f67408ea633', 'b2bb3396-a8fb-4ed7-89c7-4cc7d22aa26a', '6d594f05-c2c2-4ef0-849f-28da46b8f87e', '5fc548e6-51a8-44ee-a14b-92e0e924d7b8']} (logUtils:54)
2017-03-27 10:57:14,051-0400 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getImages succeeded in 0.03 seconds (__init__:515)
2017-03-27 10:57:14,083-0400 INFO (jsonrpc/4) [dispatcher] Run and protect: getVolumesList(sdUUID=u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'a1d7f855-40d4-4f07-bf25-625284682b40', options=None) (logUtils:51)
...
7 years, 7 months
mOvirt 1.7 - Engine 4.0.6 - No disks per VM
by Markus Stockhausen
Hi there,

my smartphone recently updated mOVirt to 1.7. Since then I always get an
error when trying to open the disks dialogue of a VM in mOVirt. It boils
down to the URL https://<engine-host>/ovirt-engine/api/vms/<vm-id>/disks
always returning 404.

A simple cross-check in the web browser returns the same result.
At least the parent resource is reachable without trouble:
https://<engine-host>/ovirt-engine/api/vms/<vm-id>
Looking at that output, I'd say the URL should be diskattachments
instead of disks:

<actions>
...
</actions>
...
<link href="/ovirt-engine/api/vms/542affb0-7a2f-4297-84f4-728a315a6abd/diskattachments" rel="diskattachments"/>
...
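For completeness, a minimal sketch (untested here) of querying the disk
attachments collection instead of /disks, assuming the Python 'requests'
library; the engine URL, credentials and CA path are placeholders, only
the VM id is taken from the <link> above, and the JSON field names are an
assumption rather than something verified against 4.0.6:

    # Hedged sketch only: list a VM's disks through the v4 REST API,
    # where disks are reached via .../diskattachments rather than .../disks.
    import requests

    ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder
    VM_ID = "542affb0-7a2f-4297-84f4-728a315a6abd"          # from the <link> above
    AUTH = ("admin@internal", "password")                   # placeholder

    resp = requests.get(
        "%s/vms/%s/diskattachments" % (ENGINE, VM_ID),
        auth=AUTH,
        headers={"Accept": "application/json"},
        verify="/path/to/engine-ca.pem",  # placeholder CA certificate
    )
    resp.raise_for_status()

    # Assumed JSON layout: a "disk_attachment" list whose entries carry the
    # attachment id and a nested "disk" object with the disk id.
    for att in resp.json().get("disk_attachment", []):
        print(att.get("id"), att.get("disk", {}).get("id"))

Replace the placeholders before running it from any box that can reach
the engine.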
Our cluster is still on 4.0.6 - maybe some incompatibility?

Best regards,

Markus
7 years, 7 months