[Users] Installation problem
by Dave Neary
Hi all,
I was working through the installation of ovirt-engine today (after
spending more time than I care to admit struggling with networking & DNS
issues - VPNs, dnsmasq, "classic" network start-up and iptables/firewall
rules can interact with each other in strange and surprising ways).
Anyway - I went through the engine set-up successfully, and got the
expected message at the end: "**** Installation completed successfully
******" with a message to visit the engine web application to finish set-up.
Unfortunately, when I connected (after resolving networking issues) to
the server in question, I got a "Service temporarily unavailable" error
(503) from Apache.
In httpd's error log, I have:
> [Fri Sep 21 13:37:03 2012] [error] (111)Connection refused: proxy: AJP: attempt to connect to 127.0.0.1:8009 (localhost) failed
> [Fri Sep 21 13:37:03 2012] [error] ap_proxy_connect_backend disabling worker for (localhost)
> [Fri Sep 21 13:37:03 2012] [error] proxy: AJP: failed to make connection to backend: localhost
When I try to restart the ovirt-engine service, I get the following in
journalctl:
> Sep 21 13:34:44 clare.neary.home engine-service.py[5172]: The engine PID file "/var/run/ovirt-engine.pid" already exists.
> Sep 21 13:34:44 clare.neary.home systemd[1]: PID 1264 read from file /var/run/ovirt-engine.pid does not exist.
> Sep 21 13:34:44 clare.neary.home systemd[1]: Unit ovirt-engine.service entered failed state.
I tried to clean up and restart, but engine-cleanup failed:
> [root@clare ovirt-engine]# engine-cleanup -u
>
> Stopping JBoss service... [ DONE ]
>
> Error: Couldn't connect to the database server.Check that connection is working and rerun the cleanup utility
> Error: Cleanup failed.
> please check log at /var/log/ovirt-engine/engine-cleanup_2012_09_21_14_02_37.log
It turns out, in /var/log/messages, that I have these error messages:
> Sep 21 14:00:59 clare pg_ctl[5298]: FATAL: could not create shared memory segment: Invalid argument
> Sep 21 14:00:59 clare pg_ctl[5298]: DETAIL: Failed system call was shmget(key=5432001, size=36519936, 03600).
> Sep 21 14:00:59 clare pg_ctl[5298]: HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 36519936 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
> Sep 21 14:00:59 clare pg_ctl[5298]: If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
> Sep 21 14:00:59 clare pg_ctl[5298]: The PostgreSQL documentation contains more information about shared memory configuration.
> Sep 21 14:01:03 clare pg_ctl[5298]: pg_ctl: could not start server
> Sep 21 14:01:03 clare pg_ctl[5298]: Examine the log output.
> Sep 21 14:01:03 clare systemd[1]: postgresql.service: control process exited, code=exited status=1
> Sep 21 14:01:03 clare systemd[1]: Unit postgresql.service entered failed state.
I increased the kernel's SHMMAX, and engine-cleanup worked correctly.
Has anyone else experienced this issue?
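For reference, the change itself is a one-liner (a sketch; any value comfortably above the 36519936-byte request shown in the log works, 64 MB here):

    # check the current limit
    sysctl kernel.shmmax
    # raise it for the running kernel
    sysctl -w kernel.shmmax=67108864
    # persist it across reboots
    echo "kernel.shmmax = 67108864" >> /etc/sysctl.conf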
When I re-ran engine-setup, I also got stuck when reconfiguring NFS:
when engine-setup asked me if I wanted to configure the NFS domain, I
said "yes", but then it refused to accept my input of "/mnt/iso" since
it was already in /etc/exports. Perhaps engine-cleanup should also
remove ISO shares managed by ovirt-engine, or engine-setup should
handle it more gracefully when someone enters an existing export? The
only fix I found was to interrupt and restart the engine set-up.
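For anyone hitting the same loop, removing the stale entry by hand before re-running setup should also work (a sketch, assuming the export line starts with /mnt/iso as above):

    # drop the ISO domain export left behind by the previous engine-setup run
    sed -i '\|^/mnt/iso|d' /etc/exports
    # reload the export table
    exportfs -ra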
Also, I have no idea whether allowing oVirt to manage iptables will keep
the extra rules I have added to the iptables config (specifically for
DNS services on port 53 UDP). I didn't take the risk of allowing it to
reconfigure iptables the second time.
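One trivial precaution here (nothing oVirt-specific, just a sketch) is to snapshot the current rules first so they can be compared or merged back if anything rewrites them:

    # save the full rule set before letting anything touch it
    iptables-save > /root/iptables-pre-ovirt.rules
    # later: diff against the live rules, or restore with
    # iptables-restore < /root/iptables-pre-ovirt.rules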
After all that, I got an error when starting the JBoss service:
> Starting JBoss Service... [ ERROR ]
> Error: Can't start the ovirt-engine service
> Please check log file /var/log/ovirt-engine/engine-setup_2012_09_21_14_28_11.log for more information
And when I checked that log file:
> 2012-09-21 14:30:02::DEBUG::common_utils::790::root:: starting ovirt-engine
> 2012-09-21 14:30:02::DEBUG::common_utils::835::root:: executing action ovirt-engine on service start
> 2012-09-21 14:30:02::DEBUG::common_utils::309::root:: Executing command --> '/sbin/service ovirt-engine start'
> 2012-09-21 14:30:02::DEBUG::common_utils::335::root:: output =
> 2012-09-21 14:30:02::DEBUG::common_utils::336::root:: stderr = Redirecting to /bin/systemctl start ovirt-engine.service
> Job failed. See system journal and 'systemctl status' for details.
>
> 2012-09-21 14:30:02::DEBUG::common_utils::337::root:: retcode = 1
> 2012-09-21 14:30:02::DEBUG::setup_sequences::62::root:: Traceback (most recent call last):
> File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
> function()
> File "/bin/engine-setup", line 1535, in _startJboss
> srv.start(True)
> File "/usr/share/ovirt-engine/scripts/common_utils.py", line 795, in start
> raise Exception(output_messages.ERR_FAILED_START_SERVICE % self.name)
> Exception: Error: Can't start the ovirt-engine service
And when I check the system journal, we're back to the earlier symptom:
the service starts, but the PID mentioned in the PID file does not
exist.
Any pointers on how I might debug this issue? I haven't found anything
similar on a troubleshooting page, so perhaps it's not a common error?
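One thing that may be worth trying first (a guess from the journal output above, not a confirmed fix): the journal says the PID read from the file no longer exists, so the PID file is stale and can be cleared before retrying:

    # remove the stale PID file the engine refuses to overwrite
    rm -f /var/run/ovirt-engine.pid
    service ovirt-engine start
    systemctl status ovirt-engine.service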
Cheers,
Dave.
--
Dave Neary
Community Action and Impact
Open Source and Standards, Red Hat
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13
[Users] SPM not selected after host failed
by "Marc-Christian Schröer | ingenit GmbH & Co. KG"
Hello all,
we are currently in the process of evaluating oVirt as a basis for our
new virtualization environment. As far as our evaluation has progressed
it seems to be the way to go, but when testing the high availability
features I ran into a serious problem:
Our testing setup looks like this: 2 hosts on Dell R210 and R210II
machines, and a separate machine running the managing application in
JBoss and providing storage space through NFS. Under normal conditions
everything works fine: I can migrate machines between the two nodes, I
can add a third node, access everything by VNC, monitor the VMs really
nicely, and the power management feature of the R210s works just fine.
Then, when simulating the loss of a host by pulling the plug on the
machine (yes, that is kind of a crude check), some things seem to go
terribly wrong: the system detects the host being unresponsive and
assumes it is down. But the host happens to be the SPM, and the other
one does not take over this function. This leaves the whole cluster in
an unresponsive state and my datacenter is gone. I tracked down the
problem in the log files to the point where the engine tries to move
the SPM role to another node:
2012-09-20 07:54:40,836 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (QuartzScheduler_Worker-60) SPM selection - vds seems as spm node03
2012-09-20 07:54:40,837 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (QuartzScheduler_Worker-60) spm vds is non responsive, stopping spm selection.
2012-09-20 07:54:44,344 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-51) XML RPC error in command GetCapabilitiesVDS ( Vds: node03 ),
the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, NoRouteToHostException: Keine Route zum Zielrechner
2012-09-20 07:54:47,345 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-47) XML RPC error in command GetCapabilitiesVDS ( Vds: node03 ),
the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, NoRouteToHostException: Keine Route zum Zielrechner
2012-09-20 07:54:50,869 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (QuartzScheduler_Worker-69) hostFromVds::selectedVds - node04, spmStatus Free, storage
pool ingenit
2012-09-20 07:54:50,892 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (QuartzScheduler_Worker-69) SPM Init: could not find reported vds or not up -
pool:ingenit vds_spm_id: 2
2012-09-20 07:54:50,905 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (QuartzScheduler_Worker-69) SPM selection - vds seems as spm node03
As far as I understand these logs, the engine detects node03 not being
responsive, starts electing a new SPM but does not find node04. That is
strange as the host is online, pingable and worked just fine as part of
the cluster.
What I can do to remedy the situation is use the management interface
to set "Confirm Host has been rebooted" and switch the host into
maintenance mode after that. Then the responsive node takes over and
the VMs are migrated, too.
Has anyone experienced a similar problem? Is this by design, meaning
that killing off the SPM is just a bad coincidence that always requires
manual intervention? I would hope not :-)
I tried to google some answers, but aside from a thread in May that did
not help I came up empty.
Thanks in advance for all the help...
Kind regards from Germany,
Marc
--
________________________________________________________________________
Dipl.-Inform. Marc-Christian Schröer schroeer(a)ingenit.com
Geschäftsführer / CEO
----------------------------------------------------------------------
ingenit GmbH & Co. KG Tel. +49 (0)231 58 698-120
Emil-Figge-Strasse 76-80 Fax. +49 (0)231 58 698-121
D-44227 Dortmund www.ingenit.com
Registergericht: Amtsgericht Dortmund, HRA 13 914
Gesellschafter : Thomas Klute, Marc-Christian Schröer
________________________________________________________________________
[Users] family cpu compatibility item
by Nathanaël Blanchet
Hi,
Does anyone know how important the "CPU family compatibility" item in
the cluster tab is? Does it have any consequence for host performance?
--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
[Users] vdsmd doesn't restart after rebooting
by Nathanaël Blanchet
Hi all,
In the latest vdsm build from git
(vdsm-4.10.0-0.452.git87594e3.fc17.x86_64), vdsmd.service never starts
on its own after rebooting.
I had a look at journalctl and found this:
systemd-vdsmd[538]: vdsm: Failed to define network filters on
libvirt[FAILED]
[root@node ~]# service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: failed (Result: exit-code) since Fri, 21 Sep 2012
12:13:01 +0200; 4min 56s ago
Process: 543 ExecStart=/lib/systemd/systemd-vdsmd start
(code=exited, status=1/FAILURE)
CGroup: name=systemd:/system/vdsmd.service
Sep 21 12:12:55 node.abes.fr systemd-vdsmd[543]: Note: Forwarding
request to 'systemctl disable libvirt-guests.service'.
Sep 21 12:12:56 node.abes.fr systemd-vdsmd[543]: vdsm: libvirt already
configured for vdsm [ OK ]
Sep 21 12:12:56 node.abes.fr systemd-vdsmd[543]: Starting wdmd...
Sep 21 12:12:56 node.abes.fr systemd-vdsmd[543]: Starting sanlock...
Sep 21 12:12:56 node.abes.fr systemd-vdsmd[543]: Starting iscsid:
Sep 21 12:13:01 node.abes.fr systemd-vdsmd[543]: Starting libvirtd (via
systemctl): [ OK ]
Could this nwfilter failure be the cause? If so, do I need to open a BZ?
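In case it is useful, the nwfilter state can be checked by hand (a sketch; vdsm-no-mac-spoofing is assumed here as the filter vdsm normally defines):

    # see whether vdsm's network filter made it into libvirt
    virsh nwfilter-list | grep vdsm
    # if it is missing, restarting libvirtd and then vdsmd
    # re-runs the definition step
    systemctl restart libvirtd.service
    service vdsmd start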
Thank you for your answer
--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
[Users] ActiveDirectory problems
by Joop
Hi List,
I have been reading the list for quite sometime and I have a question
because I can't find the problem myself.
I have an oVirt-3.1 setup with 3 nodes (Fedora 17 install from LiveCD +
vdsm) and an engine install. So far this all works: I can create VMs
and migrate them, no problems (well, one, but that's for another post:
vdsmd doesn't start at system start).
Version of oVirt that's installed:
Installed Packages
ovirt-engine.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-backend.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-cli.noarch 3.1.0.6-1.fc17 @ovirt-beta
ovirt-engine-config.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-dbscripts.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-genericapi.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-notification-service.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-restapi.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-sdk.noarch 3.1.0.4-1.fc17 @ovirt-beta
ovirt-engine-setup.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-tools-common.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-userportal.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-engine-webadmin-portal.noarch 3.1.0-2.fc17 @ovirt-beta
ovirt-image-uploader.noarch 3.1.0-0.git9c42c8.fc17 @ovirt-beta
ovirt-iso-uploader.noarch 3.1.0-0.git1841d9.fc17 @ovirt-beta
ovirt-log-collector.noarch 3.1.0-0.git10d719.fc17 @ovirt-beta
The next step is integrating with our AD setup. I ran:
engine-manage-domains -action=add -provider=ActiveDirectory
-domain=nieuwland.local -user=admin -interactive
Message is:
WARNING: No permissions were added to the Engine. Login either with the
internal admin user or with another configured user
Successfully added domain nieuwland.local. oVirt Engine restart is
required in order for the changes to take place (service
Manage Domains completed successfully
The specified admin is a Domain Administrator.
The logfile in /var/log/engine/engine-manage-domains also says OK. The
resulting krb5.conf in /etc/ovirt-engine also looks OK. The AD servers
are resolvable forward and backward.
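To rule out the basics, the Kerberos and LDAP paths can also be verified by hand (a sketch; the realm and DC name are taken from the log below, and ldapsearch comes from openldap-clients):

    # Kerberos: can we get a ticket using the engine's krb5.conf?
    KRB5_CONFIG=/etc/ovirt-engine/krb5.conf kinit admin@NIEUWLAND.LOCAL
    # LDAP: does the DC answer a trivial rootDSE query?
    ldapsearch -x -H ldap://digit.nieuwland.local:389 -s base -b "" defaultNamingContext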
Then I'm lost, because when I log into the Admin portal with the
internal admin account, go to the Users tab and try to add a user
(myself, jvandewege) from the nieuwland.local realm, it won't work and
I get the following in engine.log:
2012-09-14 12:55:26,104 ERROR [org.ovirt.engine.core.bll.adbroker.DirectorySearcher] (ajp--0.0.0.0-8009-12) Failed ldap search server LDAP://digit.nieuwland.local:389 due to java.lang.NullPointerException. We should try the next server: java.lang.NullPointerException
    at org.ovirt.engine.core.bll.adbroker.ADRootDSE.<init>(ADRootDSE.java:26) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.adbroker.RootDSEFactory.get(RootDSEFactory.java:14) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.adbroker.GetRootDSETask.setRootDSE(GetRootDSETask.java:97) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.adbroker.GetRootDSETask.call(GetRootDSETask.java:68) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.adbroker.DirectorySearcher.find(DirectorySearcher.java:91) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.adbroker.DirectorySearcher.FindOne(DirectorySearcher.java:39) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.adbroker.LdapAuthenticateUserCommand.executeQuery(LdapAuthenticateUserCommand.java:44) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.adbroker.LdapBrokerCommandBase.Execute(LdapBrokerCommandBase.java:68) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.adbroker.LdapBrokerBase.RunAdAction(LdapBrokerBase.java:18) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.LoginUserCommand.authenticateUser(LoginUserCommand.java:30) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.LoginBaseCommand.isUserCanBeAuthenticated(LoginBaseCommand.java:177) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.LoginAdminUserCommand.canDoAction(LoginAdminUserCommand.java:14) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.CommandBase.InternalCanDoAction(CommandBase.java:486) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.CommandBase.ExecuteAction(CommandBase.java:261) [engine-bll.jar:]
    at org.ovirt.engine.core.bll.Backend.Login(Backend.java:481) [engine-bll.jar:]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_05-icedtea]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_05-icedtea]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_05-icedtea]
    at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_05-icedtea]
    at org.jboss.as.ee.component.ManagedReferenceMethodInterceptorFactory$ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptorFactory.java:72) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:374) [jboss-invocation.jar:1.1.1.Final]
    at org.ovirt.engine.core.utils.ThreadLocalSessionCleanerInterceptor.injectWebContextToThreadLocal(ThreadLocalSessionCleanerInterceptor.java:11) [engine-utils.jar:]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_05-icedtea]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_05-icedtea]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_05-icedtea]
    at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_05-icedtea]
    at org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptorFactory$ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptorFactory.java:123) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:36) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:211) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:363) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:194) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:165) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:173) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:72) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.ovirt.engine.core.common.interfaces.BackendLocal$$$view9.Login(Unknown Source) [engine-common.jar:]
    at org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.Login(GenericApiGWTServiceImpl.java:157)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_05-icedtea]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_05-icedtea]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_05-icedtea]
    at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_05-icedtea]
    at com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
    at com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:161)
    at com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:222)
    at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:754) [jboss-servlet-3.0-api.jar:1.0.1.Final]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) [jboss-servlet-3.0-api.jar:1.0.1.Final]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
    at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:153)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.jboss.web.rewrite.RewriteValve.invoke(RewriteValve.java:466)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:368)
    at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:505)
    at org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(AjpProtocol.java:445)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:930)
    at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_05-icedtea]
2012-09-14 12:55:26,124 ERROR [org.ovirt.engine.core.bll.adbroker.LdapAuthenticateUserCommand] (ajp--0.0.0.0-8009-12) Failed authenticating user: admin to domain nieuwland.local. Ldap Query Type is getUserByName
2012-09-14 12:55:26,125 ERROR [org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--0.0.0.0-8009-12) USER_FAILED_TO_AUTHENTICATE : admin
2012-09-14 12:55:26,125 WARN [org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--0.0.0.0-8009-12) CanDoAction of action LoginAdminUser failed. Reasons:USER_FAILED_TO_AUTHENTICATE
2012-09-14 12:57:07,027 INFO [org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--0.0.0.0-8009-5) Checking if user admin@internal is an admin, result true
2012-09-14 12:57:07,029 INFO [org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--0.0.0.0-8009-5) Running command: LoginAdminUserCommand internal: false.
Using Wireshark I don't see what I expected, namely a well-formed LDAP
search and a result. I can provide the dump if needed.
Has anyone had any luck with this and is willing to help me out?
Thanks in advance,
Joop
[Users] non-operational state as host does not meet cluster's minimum CPU level.
by wujieke
Hi everyone, if this is not the right mailing list, please point me to the correct one. Thanks.
I am trying to install oVirt on a Dell server with a Xeon E5-2650
processor, running Fedora 17. I am creating a new host, which is
actually the same server that ovirt-engine is running on.
The host is created and starts "installing", but it ends up in the
"Non operational" state.
Error:
Host CPU type is not compatible with cluster properties, missing CPU
feature: model_sandybridge.
But in my cluster I selected the "SandyBridge" CPU, and my Xeon E5 is
also in the Sandy Bridge family. This error also caused my server to reboot.
Any help is appreciated.
Btw: I have enabled Intel VT in the BIOS and loaded the kvm and
kvm_intel modules. Attached is a screenshot of the error.
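One quick sanity check (a sketch; vdsClient ships with vdsm, and avx is used here as the tell-tale SandyBridge flag, which is an assumption):

    # does the CPU expose the SandyBridge-era flags to the kernel?
    grep -m1 flags /proc/cpuinfo | grep -o avx
    # what does vdsm actually report to the engine?
    vdsClient -s 0 getVdsCaps | grep -i flags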
Re: [Users] Problem with creating a glusterfs volume
by Dominic Kaiser
Here are the message and the logs again, zipped this time; the first
delivery failed:
OK, here are the logs: 4 node logs and 1 engine log. I tried making the
/data folder owned by root and then by 36:36; neither worked. The
volume is named "data" to match the folders on the nodes.
Let me know what you think,
Dominic
On Mon, Sep 10, 2012 at 11:24 AM, Dominic Kaiser <dominic(a)bostonvineyard.org
> wrote:
> Here are the other two logs forgot them.
>
> dk
>
>
> On Mon, Sep 10, 2012 at 11:19 AM, Dominic Kaiser <
> dominic(a)bostonvineyard.org> wrote:
>
>> Ok here are the logs 4 node and 1 engine log. Tried making /data folder
>> owned by root and then tried by 36:36 neither worked. Name of volume is
>> data to match folders on nodes also.
>>
>> Let me know what you think,
>>
>> Dominic
>>
>>
>> On Thu, Sep 6, 2012 at 8:33 AM, Maxim Burgerhout <maxim(a)wzzrd.com> wrote:
>>
>>> I just ran into this as well, and it seems that you have to either
>>> reformat previously used gluster bricks or manually tweak some extended
>>> attributes.
>>>
>>> Maybe this helps you in setting up your gluster volume, Dominic?
>>>
>>> More info here: https://bugzilla.redhat.com/show_bug.cgi?id=812214
>>>
>>>
>>> Maxim Burgerhout
>>> maxim(a)wzzrd.com
>>> ----------------
>>> EB11 5E56 E648 9D99 E8EF 05FB C513 6FD4 1302 B48A
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Sep 6, 2012 at 7:50 AM, Shireesh Anjal <sanjal(a)redhat.com> wrote:
>>>
>>>> Hi Dominic,
>>>>
>>>> Looking at the engine log immediately after trying to create the volume
>>>> should tell us on which node the gluster volume creation was attempted.
>>>> Then looking at the vdsm log on that node should help us identifying the
>>>> exact reason for failure.
>>>>
>>>> In case this doesn't help you, can you please simulate the issue again
>>>> and send back all the 5 log files? (engine.log from engine server and
>>>> vdsm.log from the 4 nodes)
>>>>
>>>> Regards,
>>>> Shireesh
>>>>
>>>>
>>>> On Wednesday 05 September 2012 11:50 PM, Dominic Kaiser wrote:
>>>>
>>>> So I have a problem creating glusterfs volumes. Here is the install:
>>>>
>>>>
>>>> 1. Ovirt 3.1
>>>> 2. 4 Nodes are Fedora 17 with kernel 3.3.4 - 5.fc17.x86_64
>>>> 3. 4 nodes peer joined and running
>>>> 4. 4 nodes added as hosts to ovirt
>>>> 5. created a directory on each node this path /data
>>>> 6. chown -R 36:36 /data on all nodes
>>>> 7. went back to ovirt and created a distributed/replicated volume
>>>> and added the 4 nodes with brick path of /data
>>>>
>>>> I received this error:
>>>>
>>>> Creation of Gluster Volume maingfs1 failed.
>>>>
>>>> I went and looked at the vdsm logs on the nodes and the ovirt server
>>>> which did not say much. Where else should I look? Also this error is
>>>> vague what does it mean?
>>>>
>>>>
>>>> --
>>>> Dominic Kaiser
>>>> Greater Boston Vineyard
>>>> Director of Operations
>>>>
>>>> cell: 617-230-1412
>>>> fax: 617-252-0238
>>>> email: dominic(a)bostonvineyard.org
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
>>
>> --
>> Dominic Kaiser
>> Greater Boston Vineyard
>> Director of Operations
>>
>> cell: 617-230-1412
>> fax: 617-252-0238
>> email: dominic(a)bostonvineyard.org
>>
>>
>>
>
>
> --
> Dominic Kaiser
> Greater Boston Vineyard
> Director of Operations
>
> cell: 617-230-1412
> fax: 617-252-0238
> email: dominic(a)bostonvineyard.org
>
>
>
--
Dominic Kaiser
Greater Boston Vineyard
Director of Operations
cell: 617-230-1412
fax: 617-252-0238
email: dominic(a)bostonvineyard.org
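For the archives: the extended-attribute tweak Maxim points to above (bug 812214) usually means clearing gluster's old volume markers from a previously used brick; a sketch, assuming the /data brick path from this thread:

    # on every node, for a brick left over from an old volume:
    setfattr -x trusted.glusterfs.volume-id /data
    setfattr -x trusted.gfid /data
    rm -rf /data/.glusterfs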
[Users] API Documentation
by ??????
Hi, where can I find documentation for the oVirt API?
Features of interest:
1. Suspending a virtual machine
2. Creating a snapshot
3. Importing a snapshot to export storage
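For what it's worth, the engine should publish its own REST API description at https://<engine>/api?rsdl (worth verifying against your build), and the first two items look roughly like this over plain REST (a sketch; the host name, credentials and VM id are placeholders):

    # suspend a VM
    curl -k -u "admin@internal:password" \
         -H "Content-Type: application/xml" -d "<action/>" \
         https://engine.example.com/api/vms/VMID/suspend
    # create a snapshot
    curl -k -u "admin@internal:password" \
         -H "Content-Type: application/xml" \
         -d "<snapshot><description>snap1</description></snapshot>" \
         https://engine.example.com/api/vms/VMID/snapshots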
[Users] Fatal error during migration
by Dmitriy A Pyryakov
Hello,
I have two oVirt nodes ovirt-node-iso-2.5.0-2.0.fc17.
When I try to migrate a VM from one host to another, I have an error: Migration failed due to Error: Fatal error during migration.
vdsm.log:
Thread-3797::DEBUG::2012-09-20 09:42:56,439::BindingXMLRPC::859::vds::(wrapper) client [192.168.10.10]::call vmMigrate with ({'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'},) {} flowID [180ad979]
Thread-3797::DEBUG::2012-09-20 09:42:56,439::API::441::vds::(migrate) {'src': '192.168.10.13', 'dst': '192.168.10.12:54321', 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'method': 'online'}
Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::122::vm.Vm::(_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Destination server is: 192.168.10.12:54321
Thread-3797::DEBUG::2012-09-20 09:42:56,441::BindingXMLRPC::865::vds::(wrapper) return vmMigrate with {'status': {'message': 'Migration process starting', 'code': 0}}
Thread-3798::DEBUG::2012-09-20 09:42:56,441::vm::124::vm.Vm::(_setupVdsConnection) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Initiating connection with destination
Thread-3798::DEBUG::2012-09-20 09:42:56,452::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
Thread-3798::DEBUG::2012-09-20 09:42:56,457::vm::170::vm.Vm::(_prepareGuest) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration Process begins
Thread-3798::DEBUG::2012-09-20 09:42:56,475::vm::217::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration semaphore acquired
Thread-3798::DEBUG::2012-09-20 09:42:56,888::libvirtvm::427::vm.Vm::(_startUnderlyingMigration) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration to qemu+tls://192.168.10.12/system
Thread-3799::DEBUG::2012-09-20 09:42:56,889::libvirtvm::325::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration downtime thread started
Thread-3800::DEBUG::2012-09-20 09:42:56,890::libvirtvm::353::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::starting migration monitor thread
Thread-3798::DEBUG::2012-09-20 09:42:56,903::libvirtvm::340::vm.Vm::(cancel) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::canceling migration downtime thread
Thread-3798::DEBUG::2012-09-20 09:42:56,904::libvirtvm::390::vm.Vm::(stop) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::stopping migration monitor thread
Thread-3799::DEBUG::2012-09-20 09:42:56,904::libvirtvm::337::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::migration downtime thread exiting
Thread-3798::ERROR::2012-09-20 09:42:56,905::vm::176::vm.Vm::(_recover) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::operation failed: Failed to connect to remote libvirt URI qemu+tls://192.168.10.12/system
Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 223, in run
  File "/usr/share/vdsm/libvirtvm.py", line 451, in _startUnderlyingMigration
  File "/usr/share/vdsm/libvirtvm.py", line 491, in f
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034, in migrateToURI2
libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://192.168.10.12/system
Thread-3802::DEBUG::2012-09-20 09:42:57,793::BindingXMLRPC::859::vds::(wrapper) client [192.168.10.10]::call vmGetStats with ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
Thread-3802::DEBUG::2012-09-20 09:42:57,793::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
Thread-3802::DEBUG::2012-09-20 09:42:57,794::BindingXMLRPC::865::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '22047', 'displayIp': '192.168.10.13', 'displayPort': u'5912', 'session': 'Unknown', 'displaySecurePort': u'5913', 'timeOffset': '0', 'hash': '3018874162324753083', 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet6': {'macAddr': '00:1a:4a:a8:0a:08', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet6'}}, 'vmId': '2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86', 'displayType': 'qxl', 'cpuUser': '13.27', 'disks': {u'hdc': {'flushLatency': '0', 'readLatency': '0', 'writeLatency': '0'}, u'hda': {'readLatency': '6183805', 'apparentsize': '11811160064', 'writeLatency': '0', 'imageID': 'd96d19f6-5a28-4fef-892f-4a04549d4e38', 'flushLatency': '0', 'readRate': '271.87', 'truesize': '11811160064', 'writeRate': '0.00'}}, 'monitorResponse': '0', 'statsAge': '0.77', 'cpuIdle': '86.73', 'elapsedTime': '3941', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [], 'guestIPs': '', 'nice': ''}]}
Thread-3803::DEBUG::2012-09-20 09:42:57,869::BindingXMLRPC::859::vds::(wrapper) client [192.168.10.10]::call vmGetMigrationStatus with ('2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86',) {}
Thread-3803::DEBUG::2012-09-20 09:42:57,870::BindingXMLRPC::865::vds::(wrapper) return vmGetMigrationStatus with {'status': {'message': 'Fatal error during migration', 'code': 12}}
Dummy-1264::DEBUG::2012-09-20 09:42:58,172::__init__::1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
Dummy-1264::DEBUG::2012-09-20 09:42:58,262::__init__::1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0515109 s, 19.9 MB/s\n'; <rc> = 0
Dummy-1264::DEBUG::2012-09-20 09:43:00,271::__init__::1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
Dummy-1264::DEBUG::2012-09-20 09:43:00,362::__init__::1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0530171 s, 19.3 MB/s\n'; <rc> = 0
Thread-21::DEBUG::2012-09-20 09:43:00,612::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/dd iflag=direct if=/dev/26187d25-bfcb-40c7-97d1-667705ad2223/metadata bs=4096 count=1' (cwd None)
Thread-21::DEBUG::2012-09-20 09:43:00,629::__init__::1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000937698 s, 4.4 MB/s\n'; <rc> = 0
Thread-3805::DEBUG::2012-09-20 09:43:01,901::task::588::TaskManager.Task::(_updateState) Task=`ff134ecc-5597-4a83-81d6-e4f9804871ff`::moving from state init -> state preparing
Thread-3805::INFO::2012-09-20 09:43:01,902::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-3805::INFO::2012-09-20 09:43:01,902::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'26187d25-bfcb-40c7-97d1-667705ad2223': {'delay': '0.0180931091309', 'lastCheck': 1348134180.825892, 'code': 0, 'valid': True}, '90104c3d-837b-47dd-8c82-dda92eec30d9': {'delay': '0.000955820083618', 'lastCheck': 1348134175.493277, 'code': 0, 'valid': True}}
Thread-3805::DEBUG::2012-09-20 09:43:01,902::task::1172::TaskManager.Task::(prepare) Task=`ff134ecc-5597-4a83-81d6-e4f9804871ff`::finished: {'26187d25-bfcb-40c7-97d1-667705ad2223': {'delay': '0.0180931091309', 'lastCheck': 1348134180.825892, 'code': 0, 'valid': True}, '90104c3d-837b-47dd-8c82-dda92eec30d9': {'delay': '0.000955820083618', 'lastCheck': 1348134175.493277, 'code': 0, 'valid': True}}
Thread-3805::DEBUG::2012-09-20 09:43:01,902::task::588::TaskManager.Task::(_updateState) Task=`ff134ecc-5597-4a83-81d6-e4f9804871ff`::moving from state preparing -> state finished
Thread-3805::DEBUG::2012-09-20 09:43:01,903::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-3805::DEBUG::2012-09-20 09:43:01,903::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-3805::DEBUG::2012-09-20 09:43:01,903::task::978::TaskManager.Task::(_decref) Task=`ff134ecc-5597-4a83-81d6-e4f9804871ff`::ref 0 aborting False
Thread-3806::DEBUG::2012-09-20 09:43:01,931::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`540335f0-2269-4bc4-aaf4-11bf5990013f`::Disk hdc stats not available
Thread-3806::DEBUG::2012-09-20 09:43:01,931::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`2c3af5f5-f877-4e6b-8a34-05bbe78b3c82`::Disk hdc stats not available
Thread-3806::DEBUG::2012-09-20 09:43:01,932::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`0ac0dd3a-ae2a-4963-adf1-918993031f6b`::Disk hdc stats not available
Thread-3806::DEBUG::2012-09-20 09:43:01,932::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`35a65bb8-cbca-4049-a428-28914bcb094a`::Disk hdc stats not available
Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`4ef3258c-0380-4919-991f-ee7be7e9f7fa`::Disk hdc stats not available
Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`252e6d46-f362-46aa-a7ed-dd00a86af6f0`::Disk hdc stats not available
Thread-3806::DEBUG::2012-09-20 09:43:01,933::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`509e608c-e657-473a-b031-f0811da96bde`::Disk hdc stats not available
Thread-3806::DEBUG::2012-09-20 09:43:01,934::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Disk hdc stats not available
Dummy-1264::DEBUG::2012-09-20 09:43:02,371::__init__::1249::Storage.Misc.excCmd::(_log) 'dd if=/rhev/data-center/332694bb-364a-434e-b23f-5fef985d3cbd/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
Dummy-1264::DEBUG::2012-09-20 09:43:02,462::__init__::1249::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0525183 s, 19.5 MB/s\n'; <rc> = 0
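The failing step above is the source host opening qemu+tls://192.168.10.12/system, so the TLS path between the nodes is the first suspect; a sketch of what to check (standard libvirt certificate locations assumed, not verified against ovirt-node):

    # from the source node, try the same connection vdsm makes
    virsh -c qemu+tls://192.168.10.12/system list
    # are the libvirt TLS certificates in place on both nodes?
    ls -l /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem \
          /etc/pki/libvirt/clientcert.pem
    # is the destination listening on the libvirt TLS port?
    netstat -tlnp | grep 16514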
[Users] Can't login with the user 'admin'
by Mark Wu
After upgrading ovirt-engine (new version:
ovirt-engine-3.1.0-3.1345126685.git7649eed.fc17), I can't log in with
the user 'admin'. Here's my upgrade process:
yum remove ovirt-engine
yum install ovirt-engine
engine-setup, typing the same password for 'admin' as before.
The setup script finished successfully, but I can't log in as the
'admin' user. I tried running engine-setup again, but it didn't help.
I also tried to change password with engine-config:
# engine-config -g AdminPassword
Failed to decrypt the current value
# engine-config -s AdminPassword=xxxxxxxx
'xxxxxxxx' is not a valid value for type Password.
It always complains it's not a valid value whatever I input.
Has anyone hit this problem before? Any idea how to resolve it?
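One thing that might be worth trying (a guess, to be verified against your build: 'interactive' was the documented way to set password-type options with engine-config):

    # prompt for the new password instead of passing it on the command line
    engine-config -s AdminPassword=interactive
    service ovirt-engine restart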
Thanks