[ovirt-users] Unable to install ovirt engine using Centos 7.1 with 3.5.5
Jorgens, Bruce
bjorgens at ciena.com
Thu Oct 29 17:46:14 UTC 2015
Simone,
I can’t find a reference to the rpcbind issue in the release notes for ovirt-3.5.5 (or ovirt-3.5.6 RC1), but did find it in the ovirt-3.6 release notes.
http://www.ovirt.org/OVirt_3.6_Release_Notes
RHEL 7.1 - CentOS 7.1 and similar
* NFS startup on EL7.1 requires manual startup of rpcbind.service, BZ 1171603 (http://bugzilla.redhat.com/1171603), before running engine setup in order to avoid
[ INFO ] Restarting nfs services
[ ERROR ] Failed to execute stage 'Closing up': Command '/bin/systemctl' failed to execute
* v2v feature on EL 7.1 requires manual installation of virt-v2v packages. See BZ 1250376 (http://bugzilla.redhat.com/1250376) for more details. This workaround will not be needed once EL 7.2 is out
Based on your previous comments, I aborted the hosted engine setup, removed the VM and started over.
After successfully installing a CentOS 7.1 VM, I used the following steps to install and configure the engine.
yum -y update
yum -y localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
vi /etc/yum.repos.d/ovirt-3.5.repo (use main repo instead of mirror)
yum -y install ovirt-engine
systemctl enable rpcbind
systemctl start rpcbind
systemctl start nfs-server
engine-setup
This time engine-setup was successful.
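For anyone following the same recipe, a quick sanity check that the NFS prerequisites are in place before (re)running engine-setup might look like this (a sketch; both are standard EL7 commands):
systemctl is-active rpcbind nfs-server
showmount -e localhost
Both services should report 'active', and showmount confirms the NFS server is answering RPC calls.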
--== CONFIGURATION PREVIEW ==--
Application mode : both
Firewall manager : firewalld
Update Firewall : True
Host FQDN : ovirt.ca.stalab.ciena.com
Engine database name : engine
Engine database secured connection : False
Engine database host : localhost
Engine database user name : engine
Engine database host name validation : False
Engine database port : 5432
Engine installation : True
NFS setup : True
PKI organization : ca.stalab.ciena.com
NFS mount point : /var/lib/exports/iso
NFS export ACL : ovirt.ca.stalab.ciena.com(rw)
Configure local Engine database : True
Set application as default page : True
Configure Apache SSL : True
Configure WebSocket Proxy : True
Engine Host FQDN : ovirt.ca.stalab.ciena.com
Please confirm installation settings (OK, Cancel) [OK]:
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping ovirt-fence-kdump-listener service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Initializing PostgreSQL
[ INFO ] Creating PostgreSQL 'engine' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating/refreshing Engine database schema
[ INFO ] Upgrading CA
[ INFO ] Creating CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Restarting nfs services
--== SUMMARY ==--
SSH fingerprint: 43:B5:19:57:6B:21:EC:2B:48:AA:08:CE:CC:DD:F7:51
Internal CA 89:DC:8E:17:1E:E0:EF:F7:0C:A2:06:CD:DD:BF:89:A3:C7:9C:BB:CD
Web access is enabled at:
http://ovirt.ca.stalab.ciena.com:80/ovirt-engine
https://ovirt.ca.stalab.ciena.com:443/ovirt-engine
Please use the user "admin" and password specified in order to login
--== END OF SUMMARY ==--
[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20151029133108-uq5b7w.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20151029133532-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully
Thanks for your help.
Bruce.
Bruce T. Jorgens | Technical Lead, System Test Automation
bjorgens at ciena.com | 115 North Sullivan Road | Spokane Valley, WA 99037 USA
Direct +1.509.242.9292 | Mobile +1.509.499.0215 | Fax +1.509.242.9001
From: Simone Tiraboschi [mailto:stirabos at redhat.com]
Sent: Thursday, October 29, 2015 1:56 AM
To: Jorgens, Bruce
Cc: Users at ovirt.org
Subject: Re: [ovirt-users] Unable to install ovirt engine using Centos 7.1 with 3.5.5
On Wed, Oct 28, 2015 at 10:23 PM, Jorgens, Bruce <bjorgens at ciena.com> wrote:
Simone,
I repeated the installation process this morning to document the steps and have a clean timestamp for the logs.
This time I got further than before, but still don’t have an operational hosted engine.
--== CONFIGURATION PREVIEW ==--
Engine FQDN : ovirt.ca.stalab.ciena.com
Bridge name : ovirtmgmt
SSH daemon port : 22
Firewall manager : iptables
Gateway address : 10.33.80.1
Host name for web application : bml-21
Host ID : 1
Image alias : hosted_engine
Image size GB : 25
Storage connection : cantap01b.ciena.com:/vol/sta_fast/ovirt35/hosted_engine
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:65:0a:d7
Boot type : cdrom
Number of CPUs : 2
ISO image (for cdrom boot) : /tmp/CentOS-7-x86_64-Minimal-1503-01.iso
CPU Type : model_SandyBridge
Booting the CentOS .iso image and working through the install.
I used the defaults for disk partitioning.
[screenshot: image001.png]
I chose an alternate MAC address in the oVirt setup. The correct MAC address was defined for the VM, but the interface wasn’t enabled.
I enabled it, and the install was then able to gather the host info via DHCP.
[screenshot: image002.png]
I set up the root password and alternate account password and proceeded with the install.
No errors were detected.
[screenshot: image003.png]
I rebooted the VM to exit the install.
At this point, my VNC window terminated.
[screenshot: image004.png]
Previously at this point, I had rebooted the VM from the host with these commands:
• hosted-engine --vm-start
• hosted-engine --add-console-password
• hosted-engine --console
Hi Bruce,
the issue was here.
The first time, ovirt-hosted-engine-setup boots the engine VM from the installation media (disk, cdrom or pxe, as you selected).
At that stage you have to install the OS on the engine VM. When that is done, you just have to choose '(1) Continue setup - VM installation is complete'.
[ INFO ] Stage: Closing up
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/bin/remote-viewer vnc://localhost:5900
Use temporary password "3177MsQY" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications.
This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
Otherwise you can run the command from a terminal in your preferred desktop environment.
If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
The VM has been started. Install the OS and shut down or reboot it. To continue please make a selection:
(1) Continue setup - VM installation is complete
(2) Reboot the VM and restart installation
(3) Abort setup
(4) Destroy VM and abort setup
(1, 2, 3, 4)[1]: 1
Waiting for VM to shut down...
You don't have to manually reboot the VM: manually rebooting it with 'hosted-engine --vm-start' is just a recovery action if for any reason you still have to deal with the OS install.
If you installed the OS correctly, you just have to continue.
The next time, ovirt-hosted-engine-setup will boot the engine VM from the shared domain where you deployed it.
However, this time per the instructions here: http://www.ovirt.org/Hosted_Engine_Howto#Fresh_Install, I chose ‘continue setup’ on the host.
Upon reconnecting with VNC, the VM had booted and was sitting at a login prompt.
Given that the VM was able to boot using 'continue setup', it seems likely that the install script takes some action that preserves the disk image or changes VM settings for the second boot. For example, the second boot likely doesn't mount the CDROM image. When I was using PXE before, I noticed that the VM always used PXE when I rebooted it manually. I thought it odd at the time, but it makes sense now that the VM config needs to change between the first boot and the second boot.
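That reading matches how hosted-engine-setup behaves: it rewrites the VM definition for the second boot. One way to see this on the host is to inspect the generated VM configuration (a sketch; /etc/ovirt-hosted-engine/vm.conf is where the 3.5-era setup stores it, and the exact key name may differ):
grep -i boot /etc/ovirt-hosted-engine/vm.conf
# expect something like boot=c, i.e. boot from the installed disk
# rather than from cdrom/pxe as on the first boot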
I was then able to log in as root and perform the following actions:
• yum -y update
• yum -y localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
• vi /etc/yum.repos.d/ovirt-3.5.repo (change to use baseurl instead of mirrors; the mirrors aren't updated with the latest info)
• yum install ovirt-engine
• engine-setup
[screenshot: image005.png]
[screenshot: image006.png]
I tried to continue setup on the host, but it reports that the engine is unreachable.
engine-setup failed; you have to complete it before continuing.
In this case it failed because
systemctl start nfs-server
fails by default on CentOS 7.1: it's a known CentOS issue; you have to manually enable and start rpcbind first, because CentOS 7.1 is missing a systemd dependency there.
systemctl enable rpcbind
systemctl start rpcbind
It is also documented in the oVirt release notes.
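For reference, the missing dependency can also be made explicit with a systemd drop-in so that nfs-server pulls in rpcbind by itself (a sketch; not needed once rpcbind is enabled and started as above):
mkdir -p /etc/systemd/system/nfs-server.service.d
cat > /etc/systemd/system/nfs-server.service.d/rpcbind.conf <<'EOF'
[Unit]
Requires=rpcbind.service
After=rpcbind.service
EOF
systemctl daemon-reload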
[ ERROR ] Engine is still unreachable
Engine health status page is not yet reachable.
To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup
(4) Destroy VM and abort setup
(1, 2, 3, 4)[1]:
I rebooted the VM with these commands:
• hosted-engine --vm-start
• hosted-engine --add-console-password
• hosted-engine --console
That was not needed.
This did not resolve the problem.
Checking the log file shown above, I see the following detail on the error reported:
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd systemd.state:134 starting service nfs-server
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:824 execute: ('/bin/systemctl', 'start', 'nfs-server.service'), executable='None', cwd='None', env=None
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:874 execute-result: ('/bin/systemctl', 'start', 'nfs-server.service'), rc=1
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:932 execute-output: ('/bin/systemctl', 'start', 'nfs-server.service') stdout:
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:937 execute-output: ('/bin/systemctl', 'start', 'nfs-server.service') stderr:
Job for nfs-server.service failed. See 'systemctl status nfs-server.service' and 'journalctl -xn' for details.
2015-10-28 14:57:01 DEBUG otopi.context context._executeMethod:152 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 142, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/system/nfs.py", line 307, in _closeup
state=state,
File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 138, in state
'start' if state else 'stop'
File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 77, in _executeServiceCommand
raiseOnError=raiseOnError
File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 942, in execute
command=args[0],
RuntimeError: Command '/bin/systemctl' failed to execute
2015-10-28 14:57:01 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Closing up': Command '/bin/systemctl' failed to execute
2015-10-28 14:57:01 DEBUG otopi.context context.dumpEnvironment:490 ENVIRONMENT DUMP - BEGIN
2015-10-28 14:57:01 DEBUG otopi.context context.dumpEnvironment:500 ENV BASE/error=bool:'True'
2015-10-28 14:57:01 DEBUG otopi.context context.dumpEnvironment:500 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Command '/bin/systemctl' failed to execute",), <traceback object at 0x42b1dd0>)]'
2015-10-28 14:57:01 DEBUG otopi.context context.dumpEnvironment:504 ENVIRONMENT DUMP - END
2015-10-28 14:57:01 INFO otopi.context context.runSequence:417 Stage: Clean up
2015-10-28 14:57:01 DEBUG otopi.context context.runSequence:421 STAGE cleanup
I checked status of nfs-server.service and it is now running.
[root@ovirt ~]# systemctl status nfs-server.service
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled)
Active: active (exited) since Wed 2015-10-28 15:14:00 EDT; 48min ago
Process: 11983 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 11981 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 11983 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Oct 28 15:14:00 ovirt.ca.stalab.ciena.com systemd[1]: Started NFS server and services.
[root@ovirt ~]#
Here is the full status of services on the engine VM:
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active waiting Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:01.1-ata2-host1-target1:0:0-1:0:0:0-block-sr0.device loaded active plugged QEMU_DVD-ROM
sys-devices-pci0000:00-0000:00:03.0-virtio0-net-eth0.device loaded active plugged Virtio network device
sys-devices-pci0000:00-0000:00:05.0-virtio2-virtio\x2dports-vport2p0.device loaded active plugged /sys/devices/pci0000:00/0000:00:05.0/virtio2/virtio-ports/vport2p0
sys-devices-pci0000:00-0000:00:05.0-virtio2-virtio\x2dports-vport2p1.device loaded active plugged /sys/devices/pci0000:00/0000:00:05.0/virtio2/virtio-ports/vport2p1
sys-devices-pci0000:00-0000:00:05.0-virtio2-virtio\x2dports-vport2p2.device loaded active plugged /sys/devices/pci0000:00/0000:00:05.0/virtio2/virtio-ports/vport2p2
sys-devices-pci0000:00-0000:00:06.0-virtio3-block-vda-vda1.device loaded active plugged /sys/devices/pci0000:00/0000:00:06.0/virtio3/block/vda/vda1
sys-devices-pci0000:00-0000:00:06.0-virtio3-block-vda-vda2.device loaded active plugged LVM PV VNNNF1-CMck-2V9j-1mq9-x0wR-jiFX-5GCndA on /dev/vda2
sys-devices-pci0000:00-0000:00:06.0-virtio3-block-vda.device loaded active plugged /sys/devices/pci0000:00/0000:00:06.0/virtio3/block/vda
sys-devices-platform-serial8250-tty-ttyS0.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS0
sys-devices-platform-serial8250-tty-ttyS1.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS1
sys-devices-platform-serial8250-tty-ttyS2.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS2
sys-devices-platform-serial8250-tty-ttyS3.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS3
sys-devices-virtual-block-dm\x2d0.device loaded active plugged /sys/devices/virtual/block/dm-0
sys-devices-virtual-block-dm\x2d1.device loaded active plugged /sys/devices/virtual/block/dm-1
sys-devices-virtual-tty-hvc0.device loaded active plugged /sys/devices/virtual/tty/hvc0
sys-devices-virtual-tty-hvc1.device loaded active plugged /sys/devices/virtual/tty/hvc1
sys-devices-virtual-tty-hvc2.device loaded active plugged /sys/devices/virtual/tty/hvc2
sys-devices-virtual-tty-hvc3.device loaded active plugged /sys/devices/virtual/tty/hvc3
sys-devices-virtual-tty-hvc4.device loaded active plugged /sys/devices/virtual/tty/hvc4
sys-devices-virtual-tty-hvc5.device loaded active plugged /sys/devices/virtual/tty/hvc5
sys-devices-virtual-tty-hvc6.device loaded active plugged /sys/devices/virtual/tty/hvc6
sys-devices-virtual-tty-hvc7.device loaded active plugged /sys/devices/virtual/tty/hvc7
sys-module-configfs.device loaded active plugged /sys/module/configfs
sys-subsystem-net-devices-eth0.device loaded active plugged Virtio network device
-.mount loaded active mounted /
boot.mount loaded active mounted /boot
dev-hugepages.mount loaded active mounted Huge Pages File System
dev-mqueue.mount loaded active mounted POSIX Message Queue File System
proc-fs-nfsd.mount loaded active mounted NFSD configuration filesystem
sys-kernel-config.mount loaded active mounted Configuration File System
sys-kernel-debug.mount loaded active mounted Debug File System
var-lib-nfs-rpc_pipefs.mount loaded active mounted RPC Pipe File System
brandbot.path loaded active waiting Flexible branding
systemd-ask-password-plymouth.path loaded active waiting Forward Password Requests to Plymouth Directory Watch
systemd-ask-password-wall.path loaded active waiting Forward Password Requests to Wall Directory Watch
session-1.scope loaded active running Session 1 of user root
session-2.scope loaded active running Session 2 of user root
auditd.service loaded active running Security Auditing Service
crond.service loaded active running Command Scheduler
dbus.service loaded active running D-Bus System Message Bus
firewalld.service loaded active running firewalld - dynamic firewall daemon
getty@tty1.service loaded active running Getty on tty1
gssproxy.service loaded active running GSSAPI Proxy Daemon
irqbalance.service loaded active running irqbalance daemon
kdump.service loaded active exited Crash recovery kernel arming
kmod-static-nodes.service loaded active exited Create list of required static device nodes for the current kernel
lvm2-lvmetad.service loaded active running LVM2 metadata daemon
lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
lvm2-pvscan@252:2.service loaded active exited LVM2 PV scan on device 252:2
network.service loaded active exited LSB: Bring up/down networking
NetworkManager.service loaded active running Network Manager
nfs-config.service loaded active exited Preprocess NFS configuration
nfs-idmapd.service loaded active running NFSv4 ID-name mapping service
nfs-mountd.service loaded active running NFS Mount Daemon
nfs-server.service loaded active exited NFS server and services
polkit.service loaded active running Authorization Manager
postfix.service loaded active running Postfix Mail Transport Agent
postgresql.service loaded active running PostgreSQL database server
rhel-dmesg.service loaded active exited Dump dmesg to /var/log/dmesg
rhel-import-state.service loaded active exited Import network configuration from initramfs
rhel-readonly.service loaded active exited Configure read-only root support
rpc-statd.service loaded active running NFS status monitor for NFSv2/3 locking.
rpcbind.service loaded active running RPC bind service
rsyslog.service loaded active running System Logging Service
sshd.service loaded active running OpenSSH server daemon
systemd-journald.service loaded active running Journal Service
systemd-logind.service loaded active running Login Service
systemd-random-seed.service loaded active exited Load/Save Random Seed
systemd-remount-fs.service loaded active exited Remount Root and Kernel File Systems
systemd-sysctl.service loaded active exited Apply Kernel Variables
systemd-tmpfiles-setup-dev.service loaded active exited Create static device nodes in /dev
systemd-tmpfiles-setup.service loaded active exited Create Volatile Files and Directories
systemd-udev-trigger.service loaded active exited udev Coldplug all Devices
systemd-udevd.service loaded active running udev Kernel Device Manager
systemd-update-utmp.service loaded active exited Update UTMP about System Reboot/Shutdown
systemd-user-sessions.service loaded active exited Permit User Sessions
systemd-vconsole-setup.service loaded active exited Setup Virtual Console
tuned.service loaded active running Dynamic System Tuning Daemon
-.slice loaded active active Root Slice
system-getty.slice loaded active active system-getty.slice
system.slice loaded active active System Slice
user-0.slice loaded active active user-0.slice
user.slice loaded active active User and Session Slice
dbus.socket loaded active running D-Bus System Message Bus Socket
dm-event.socket loaded active listening Device-mapper event daemon FIFOs
lvm2-lvmetad.socket loaded active running LVM2 metadata daemon socket
rpcbind.socket loaded active running RPCbind Server Activation Socket
systemd-initctl.socket loaded active listening /dev/initctl Compatibility Named Pipe
systemd-journald.socket loaded active running Journal Socket
systemd-shutdownd.socket loaded active listening Delayed Shutdown Socket
systemd-udevd-control.socket loaded active running udev Control Socket
systemd-udevd-kernel.socket loaded active running udev Kernel Socket
dev-dm\x2d1.swap loaded active active /dev/dm-1
basic.target loaded active active Basic System
cryptsetup.target loaded active active Encrypted Volumes
getty.target loaded active active Login Prompts
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network-online.target loaded active active Network is Online
network.target loaded active active Network
nss-lookup.target loaded active active Host and Network Name Lookups
paths.target loaded active active Paths
remote-fs.target loaded active active Remote File Systems
rpcbind.target loaded active active RPC Port Mapper
slices.target loaded active active Slices
sockets.target loaded active active Sockets
swap.target loaded active active Swap
sysinit.target loaded active active System Initialization
timers.target loaded active active Timers
systemd-tmpfiles-clean.timer loaded active waiting Daily Cleanup of Temporary Directories
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
112 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
What services or other components are missing and preventing the ovirt engine from running?
It was nfs-server.
Simply execute engine-setup again on the engine VM to fix it and restart the engine service.
When engine-setup completes correctly, choose:
(1) Continue setup - engine installation is complete
to complete the hosted-engine setup.
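Put together, the recovery on the engine VM is roughly (a sketch of the sequence described above):
systemctl enable rpcbind
systemctl start rpcbind
systemctl start nfs-server
engine-setup   # re-run; on success it restarts the engine service
and then answer '1' at the hosted-engine-setup prompt on the host.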
Checking the logfile for hosted-engine-setup on the host, I see:
2015-10-28 12:15:46 DEBUG otopi.ovirt_hosted_engine_setup.check_liveliness check_liveliness.isEngineUp:46 Checking for Engine health status
2015-10-28 12:15:46 ERROR otopi.ovirt_hosted_engine_setup.check_liveliness check_liveliness.isEngineUp:68 Engine is still unreachable
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND Engine health status page is not yet reachable.
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human human.queryString:153 query OVEHOSTED_ENGINE_UP
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND To continue make a selection from the options below:
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (1) Continue setup - engine installation is complete
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (2) Power off and restart the VM
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (3) Abort setup
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (4) Destroy VM and abort setup
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (1, 2, 3, 4)[1]:
Manually checking with ‘hosted-engine --check-liveliness’, I get this output:
[root@bml-21 ~]# hosted-engine --check-liveliness
No handlers could be found for logger "otopi.__main__"
Hosted Engine is not up!
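The health status page that hosted-engine-setup polls can also be queried directly (a sketch; /ovirt-engine/services/health is the standard oVirt health servlet path):
curl http://ovirt.ca.stalab.ciena.com/ovirt-engine/services/health
# a healthy engine typically answers: DB Up!Welcome to Health Status!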
Thanks for the help,
Bruce.
Bruce T. Jorgens | Technical Lead, System Test Automation
bjorgens at ciena.com | 115 North Sullivan Road | Spokane Valley, WA 99037 USA
Direct +1.509.242.9292 | Mobile +1.509.499.0215 | Fax +1.509.242.9001
From: Simone Tiraboschi [mailto:stirabos at redhat.com]
Sent: Wednesday, October 28, 2015 10:06 AM
To: Jorgens, Bruce
Cc: Users at ovirt.org
Subject: Re: [ovirt-users] Unable to install ovirt engine using Centos 7.1 with 3.5.5
On Wed, Oct 28, 2015 at 5:53 PM, Jorgens, Bruce <bjorgens at ciena.com> wrote:
Simone,
I’m using an exported NFS file system for storage of the VM.
I’m not changing the filesystem type during the CentOS install.
Please let me know if there is any other information I can provide that might be helpful.
I tried it today and it worked as expected with oVirt 3.5.5 and exactly your CentOS ISO image.
Can you please share /var/log/messages and vdsm logs from your host to check if you had some storage issue?
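If anyone needs to gather the same files, something like this collects them in one archive (a sketch; /var/log/vdsm/ is the default VDSM log directory on EL7):
tar czf host-logs.tar.gz /var/log/messages /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log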
Bruce.
Bruce T. Jorgens | Technical Lead, System Test Automation
bjorgens at ciena.com | 115 North Sullivan Road | Spokane Valley, WA 99037 USA
Direct +1.509.242.9292 | Mobile +1.509.499.0215 | Fax +1.509.242.9001
From: Simone Tiraboschi [mailto:stirabos at redhat.com]
Sent: Wednesday, October 28, 2015 3:00 AM
To: Jorgens, Bruce
Cc: Users at ovirt.org
Subject: Re: [ovirt-users] Unable to install ovirt engine using Centos 7.1 with 3.5.5
On Tue, Oct 27, 2015 at 9:00 PM, Jorgens, Bruce <bjorgens at ciena.com> wrote:
I’m trying to install ovirt-3.5.5 in a hosted engine configuration on an HP DL360 system with CentOS 7.1.
All works well up until the hosted VM portion of the install.
After installing CentOS 7.1 on the engine, the VM is unable to boot.
I’ve tried installing via PXE as well as with an attached CDROM image of CentOS 7.1 and got the same results.
There is no indication of any issue during install.
I ran a rescue image against the VM and fdisk cannot find a valid partition table on /dev/vda.
Curiously, a reinstall using the VM disk finds a partition table and prompts if the space should be recovered.
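One way to confirm that symptom from the host side is to inspect the engine disk image directly on the storage (a sketch; the <...> path components are hypothetical placeholders for the UUID-named directories the setup creates under the hosted_engine export):
mount -t nfs <storage-server>:/<export> /mnt
qemu-img info /mnt/<sd_uuid>/images/<img_uuid>/<image-file>   # size and format
file /mnt/<sd_uuid>/images/<img_uuid>/<image-file>            # reports a DOS/MBR boot sector if a partition table was written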
Hi Bruce,
I deployed it several times and never noticed anything similar.
Which kind of storage are you using?
Are you changing the default file system type or the partition table type when you install CentOS?
Below is a sanitized version of the configuration preview:
--== CONFIGURATION PREVIEW ==--
Engine FQDN : XXXXXX
Bridge name : ovirtmgmt
SSH daemon port : 22
Firewall manager : iptables
Gateway address : 10.33.80.1
Host name for web application : bml-21
Host ID : 1
Image alias : hosted_engine
Image size GB : 25
Storage connection : XXXXXX
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:65:0a:d7
Boot type : cdrom
Number of CPUs : 2
ISO image (for cdrom boot) : /tmp/CentOS-7-x86_64-Minimal-1503-01.iso
CPU Type : model_SandyBridge
I found this old post from nearly a year ago. The issue sounds quite similar:
http://www.redgrid.net/kvm/no-bootable-device-found-ovirt-3-5-on-centos7/
I appreciate any help.
Bruce T. Jorgens | Technical Lead, System Test Automation
bjorgens at ciena.com | 115 North Sullivan Road | Spokane Valley, WA 99037 USA
Direct +1.509.242.9292 | Mobile +1.509.499.0215 | Fax +1.509.242.9001
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users