[Users] Testing ovirt all in one on F18 gives error on DB creation

Alon Bar-Lev alonbl at redhat.com
Wed Dec 5 16:35:49 UTC 2012


Is there any special reason why you are using fedora 18 and not fedora 17?

----- Original Message -----
> From: "Gianluca Cecchi" <gianluca.cecchi at gmail.com>
> To: "Alon Bar-Lev" <alonbl at redhat.com>
> Cc: "Juan Antonio Hernandez Fernandez" <jhernand at redhat.com>, "users" <users at ovirt.org>, "Alex Lourie"
> <alourie at redhat.com>, "Ohad Basan" <obasan at redhat.com>
> Sent: Wednesday, December 5, 2012 4:56:44 PM
> Subject: Re: [Users] Testing ovirt all in one on F18 gives error on DB creation
> 
> On Tue, Dec 4, 2012 at 5:06 PM, Alon Bar-Lev  wrote:
> 
> Installed nightly build (rebased on 3.2)
> So now I managed to run:
> 
> - engine-setup
> 
> It detected a previous install, so I exited:
> Welcome to oVirt Engine setup utility
> 
> WARNING: oVirt Engine setup has already been run on this host.
> To remove all configuration and reset oVirt Engine please run
> engine-cleanup.
> Please be advised that executing engine-setup without cleanup is not
> supported.
> Would you like to proceed? (yes|no): no
> Installation stopped, Goodbye.
> 
> - engine-cleanup
> 
> It gave errors in relation to the db, where it didn't find some
> objects (due to the creation failure in 3.1, I presume...)
> 
> WARNING: Executing oVirt Engine cleanup utility.
> This utility will wipe all existing data including configuration
> settings, certificates and database.
> In addition, all existing DB connections will be closed.
> Would you like to proceed? (yes|no): yes
> 
> Stopping ovirt-engine service...                         [ DONE ]
> Removing Database...                                   [ ERROR ]
> Removing CA...                                           [ DONE ]
> Stopping engine-notifierd service...                     [ DONE ]
> 
> Cleanup finished with errors, please see log file
> Error: failed to clear active DB connections
> Cleanup log available at
> /var/log/ovirt-engine/engine-cleanup_2012_12_05_14_56_48.log
> 
> from this file:
> 2012-12-05 14:56:54::DEBUG::common_utils::390::root:: Executing command --> '/usr/bin/psql -U postgres -c SELECT pg_terminate_backend(procpid) FROM pg_stat_activity WHERE datname = 'engine''
> 2012-12-05 14:56:54::DEBUG::common_utils::428::root:: output =
> 2012-12-05 14:56:54::DEBUG::common_utils::429::root:: stderr = ERROR: column "procpid" does not exist
> LINE 1: SELECT pg_terminate_backend(procpid) FROM pg_stat_activity W...
>                                     ^
> 
> 2012-12-05 14:56:54::DEBUG::common_utils::430::root:: retcode = 1
> 2012-12-05 14:56:54::ERROR::engine-cleanup::408::root:: Traceback (most recent call last):
>   File "/bin/engine-cleanup", line 402, in runFunc
>     funcName()
>   File "/bin/engine-cleanup", line 308, in drop
>     utils.clearDbConnections(basedefs.DB_NAME)
>   File "/usr/share/ovirt-engine/scripts/common_utils.py", line 1004, in clearDbConnections
>     execCmd(cmdList=cmd, failOnError=True, msg=output_messages.ERR_DB_CONNECTIONS_CLEAR, envDict=getPgPassEnv())
>   File "/usr/share/ovirt-engine/scripts/common_utils.py", line 433, in execCmd
>     raise Exception(msg)
> Exception: Error: failed to clear active DB connections
> 
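> If I read this right, the failure is just the pg_stat_activity column rename in PostgreSQL 9.2 (which Fedora 18 ships): procpid became pid, so the query the cleanup script runs no longer matches. Running the same statement by hand with the new column name (a manual sketch on my side, not the script's own fix) would be something like:
> 
>   /usr/bin/psql -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'engine'"
> 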
> - Manually dropped "engine" db
> 
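> (In case it matters, by "manually dropped" I mean something along the lines of
> 
>   /usr/bin/dropdb -U postgres engine
> 
> or the equivalent DROP DATABASE from psql.)
> 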
> - engine-cleanup again
> 
> It completed successfully:
> WARNING: Executing oVirt Engine cleanup utility.
> This utility will wipe all existing data including configuration
> settings, certificates and database.
> In addition, all existing DB connections will be closed.
> Would you like to proceed? (yes|no): yes
> 
> Stopping ovirt-engine service...                         [ DONE ]
> Removing CA...                                           [ DONE ]
> Stopping engine-notifierd service...                     [ DONE ]
> 
> Cleanup finished successfully!
> Cleanup log available at
> /var/log/ovirt-engine/engine-cleanup_2012_12_05_14_58_07.log
> 
> - engine-setup
> 
> Now it gives an error at the NFS configuration step:
> Installing:
> AIO: Validating CPU Compatibility...                               [ DONE ]
> AIO: Adding firewall rules...                                      [ DONE ]
> Configuring oVirt-engine...                                        [ DONE ]
> Configuring JVM...                                                 [ DONE ]
> Creating CA...                                                     [ DONE ]
> Updating ovirt-engine service...                                   [ DONE ]
> Setting Database Configuration...                                  [ DONE ]
> Setting Database Security...                                       [ DONE ]
> Creating Database...                                               [ DONE ]
> Updating the Default Data Center Storage Type...                   [ DONE ]
> Editing oVirt Engine Configuration...                              [ DONE ]
> Editing Postgresql Configuration...                                [ DONE ]
> Configuring the Default ISO Domain...                              [ ERROR ]
> Failed to configure NFS share on this host
> Please check log file /var/log/ovirt-engine/engine-setup_2012_12_05_14_58_43.log for more information
> 
> In the log file:
> 2012-12-05 15:00:28::DEBUG::common_utils::429::root:: stderr = Redirecting to /bin/systemctl start  rpcbind.service
> 
> 2012-12-05 15:00:28::DEBUG::common_utils::430::root:: retcode = 0
> 2012-12-05 15:00:28::DEBUG::common_utils::390::root:: Executing command --> '/sbin/chkconfig nfs-server on'
> 2012-12-05 15:00:28::DEBUG::common_utils::428::root:: output =
> 2012-12-05 15:00:28::DEBUG::common_utils::429::root:: stderr = Note: Forwarding request to 'systemctl enable nfs-server.service'.
> ln -s '/usr/lib/systemd/system/nfs-server.service' '/etc/systemd/system/multi-user.target.wants/nfs-server.service'
> 
> 2012-12-05 15:00:28::DEBUG::common_utils::430::root:: retcode = 0
> 2012-12-05 15:00:28::DEBUG::common_utils::1163::root:: stopping nfs-server
> 2012-12-05 15:00:28::DEBUG::common_utils::1200::root:: executing action nfs-server on service stop
> 2012-12-05 15:00:28::DEBUG::common_utils::390::root:: Executing command --> '/sbin/service nfs-server stop'
> 2012-12-05 15:00:28::DEBUG::common_utils::428::root:: output =
> 2012-12-05 15:00:28::DEBUG::common_utils::429::root:: stderr = Redirecting to /bin/systemctl stop  nfs-server.service
> 
> 2012-12-05 15:00:28::DEBUG::common_utils::430::root:: retcode = 0
> 2012-12-05 15:00:28::DEBUG::common_utils::1153::root:: starting nfs-server
> 2012-12-05 15:00:28::DEBUG::common_utils::1200::root:: executing action nfs-server on service start
> 2012-12-05 15:00:28::DEBUG::common_utils::390::root:: Executing command --> '/sbin/service nfs-server start'
> 2012-12-05 15:00:28::DEBUG::common_utils::428::root:: output =
> 2012-12-05 15:00:28::DEBUG::common_utils::429::root:: stderr = Redirecting to /bin/systemctl start  nfs-server.service
> A dependency job for nfs-server.service failed. See 'journalctl -n' for details.
> 
> 2012-12-05 15:00:28::DEBUG::common_utils::430::root:: retcode = 1
> 2012-12-05 15:00:28::ERROR::engine-setup::1674::root:: Traceback (most recent call last):
>   File "/bin/engine-setup", line 1672, in _startNfsServices
>     srv.start(True)
>   File "/usr/share/ovirt-engine/scripts/common_utils.py", line 1158, in start
>     raise Exception(output_messages.ERR_FAILED_START_SERVICE % self.name)
> Exception: Error: Can't start the nfs-server service
> 
> 2012-12-05 15:00:28::ERROR::engine-setup::1612::root:: Traceback (most recent call last):
>   File "/bin/engine-setup", line 1598, in _configNfsShare
>     _startNfsServices()
>   File "/bin/engine-setup", line 1675, in _startNfsServices
>     raise Exception(output_messages.ERR_FAILED_TO_START_NFS_SERVICE)
> Exception: Failed to start the NFS services
> 
> 
> # journalctl -n
> -- Logs begin at Wed, 2012-12-05 12:12:16 CET, end at Wed, 2012-12-05 15:01:02 CET. --
> Dec 05 15:00:29 f18aio.localdomain.local kernel: SELinux: initialized (dev nfsd, type nfsd), uses genfs_contexts
> Dec 05 15:00:29 f18aio.localdomain.local kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
> Dec 05 15:00:29 f18aio.localdomain.local kernel: NFSD: starting 90-second grace period
> Dec 05 15:01:01 f18aio.localdomain.local CROND[14412]: (root) CMD (run-parts /etc/cron.hourly)
> Dec 05 15:01:01 f18aio.localdomain.local run-parts(/etc/cron.hourly)[14415]: starting 0anacron
> Dec 05 15:01:01 f18aio.localdomain.local run-parts(/etc/cron.hourly)[14421]: finished 0anacron
> Dec 05 15:01:01 f18aio.localdomain.local run-parts(/etc/cron.hourly)[14423]: starting mcelog.cron
> Dec 05 15:01:01 f18aio.localdomain.local run-parts(/etc/cron.hourly)[14427]: finished mcelog.cron
> Dec 05 15:01:02 f18aio.localdomain.local run-parts(/etc/cron.hourly)[14429]: starting vdsm-logrotate
> Dec 05 15:01:02 f18aio.localdomain.local run-parts(/etc/cron.hourly)[14435]: finished vdsm-logrotate
> 
> # systemctl status nfs-server.service
> nfs-server.service - NFS Server
>  Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled)
>  Active: active (exited) since Wed, 2012-12-05 15:00:29 CET; 54min ago
> Process: 14396 ExecStartPost=/usr/lib/nfs-utils/scripts/nfs-server.postconfig (code=exited, status=0/SUCCESS)
> Process: 14377 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT (code=exited, status=0/SUCCESS)
> Process: 14372 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
> Process: 14366 ExecStartPre=/usr/lib/nfs-utils/scripts/nfs-server.preconfig (code=exited, status=0/SUCCESS)
>  CGroup: name=systemd:/system/nfs-server.service
> 
> Dec 05 15:00:28 f18aio.localdomain.local systemd[1]: Starting NFS Server...
> Dec 05 15:00:28 f18aio.localdomain.local systemd[1]: Dependency failed for NFS Server.
> 
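> So the unit shows up as "active (exited)" now, but at setup time its dependency failed. If I follow the messages below, the unit that actually failed is the rpc_pipefs mount, so (just a guess on my side) its status should show it directly:
> 
>   systemctl status var-lib-nfs-rpc_pipefs.mount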
> 
> In /var/log/messages:
> Dec  5 15:00:28 f18aio systemd[1]: Starting NFS Server...
> Dec  5 15:00:28 f18aio mount[14364]: mount: unknown filesystem type 'rpc_pipefs'
> Dec  5 15:00:28 f18aio systemd[1]: var-lib-nfs-rpc_pipefs.mount mount process exited, code=exited status=32
> Dec  5 15:00:28 f18aio systemd[1]: Failed to mount RPC Pipe File System.
> Dec  5 15:00:28 f18aio systemd[1]: Dependency failed for NFS Server.
> Dec  5 15:00:28 f18aio systemd[1]: Dependency failed for NFS Remote Quota Server.
> Dec  5 15:00:28 f18aio systemd[1]: Job nfs-rquotad.service/start failed with result 'dependency'.
> Dec  5 15:00:28 f18aio systemd[1]: Dependency failed for NFS Mount Daemon.
> Dec  5 15:00:28 f18aio systemd[1]: Job nfs-mountd.service/start failed with result 'dependency'.
> Dec  5 15:00:28 f18aio systemd[1]: Dependency failed for NFSv4 ID-name mapping daemon.
> Dec  5 15:00:28 f18aio systemd[1]: Job nfs-idmap.service/start failed with result 'dependency'.
> Dec  5 15:00:28 f18aio systemd[1]: Job nfs-server.service/start failed with result 'dependency'.
> Dec  5 15:00:28 f18aio systemd[1]: Unit var-lib-nfs-rpc_pipefs.mount entered failed state
> Dec  5 15:00:28 f18aio kernel: RPC: Registered named UNIX socket transport module.
> Dec  5 15:00:28 f18aio kernel: RPC: Registered udp transport module.
> Dec  5 15:00:28 f18aio kernel: RPC: Registered tcp transport module.
> Dec  5 15:00:28 f18aio kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
> Dec  5 15:00:28 f18aio kernel: Installing knfsd (copyright (C) 1996 okir at monad.swb.de).
> Dec  5 15:00:29 f18aio systemd[1]: Mounted RPC Pipe File System.
> Dec  5 15:00:28 f18aio kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
> Dec  5 15:00:28 f18aio kernel: NFSD: starting 90-second grace period
> 
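> So the real failure seems to be the rpc_pipefs mount ("unknown filesystem type 'rpc_pipefs'"), which takes the whole nfs-server dependency chain down with it; the later "Mounted RPC Pipe File System" suggests it recovers a moment later, after engine-setup has already given up. My rough understanding (so take it as a guess) is that rpc_pipefs is provided by the sunrpc kernel module, so something along these lines should tell whether that is the problem and let me retry engine-setup:
> 
>   modprobe sunrpc
>   systemctl start var-lib-nfs-rpc_pipefs.mount
>   systemctl start nfs-server.service
> 
> But I'm not sure whether this is an nfs-utils/systemd ordering issue in F18 or something on my side.
> 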
> what to do?
> 
> Thanks,
> Gianluca
> 


