Simone,
I repeated the installation process this morning to document the steps and have a clean timestamp for the logs.
This time I got further than before, but still don’t have an operational hosted engine.
--== CONFIGURATION PREVIEW ==--
Engine FQDN : ovirt.ca.stalab.ciena.com
Bridge name : ovirtmgmt
SSH daemon port : 22
Firewall manager : iptables
Gateway address : 10.33.80.1
Host name for web application : bml-21
Host ID : 1
Image alias : hosted_engine
Image size GB : 25
Storage connection : cantap01b.ciena.com:/vol/sta_fast/ovirt35/hosted_engine
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:65:0a:d7
Boot type : cdrom
Number of CPUs : 2
ISO image (for cdrom boot) : /tmp/CentOS-7-x86_64-Minimal-1503-01.iso
CPU Type : model_SandyBridge
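In case it helps with reproducing this, I drive each attempt with a plain interactive deploy. My understanding (an assumption on my part, so please correct me) is that the answers are also saved so a rerun can replay them:

# Interactive deploy, which is what I ran:
hosted-engine --deploy
# Replaying saved answers on a rerun; the path is my assumption of
# where ovirt-hosted-engine-setup 3.5 writes the answers file:
hosted-engine --deploy --config-append=/etc/ovirt-hosted-engine/answers.conf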
I booted the CentOS ISO image and worked through the install.
I used the defaults for disk partitioning.
I had chosen an alternate MAC address in the oVirt setup. The correct MAC address was defined for the VM, but the interface wasn’t enabled.
I enabled it, and the installer was then able to gather the host info via DHCP.
I set up the root password and alternate account password and proceeded with the install.
No errors were detected.
I rebooted the VM to exit the install.
At this point, my VNC window terminated.
At this point previously, I had rebooted the VM from the host with these commands:
· hosted-engine --vm-start
· hosted-engine --add-console-password
· hosted-engine --console
However, this time per the instructions here: http://www.ovirt.org/Hosted_Engine_Howto#Fresh_Install, I chose ‘continue setup’ on the host.
Upon reconnecting with VNC, the VM had booted and was sitting at a login prompt.
Given that the VM was able to boot using ‘continue setup’, it seems likely that the setup script takes some action between the first and second boot, such as preserving the disk image or changing the VM configuration so that the second boot doesn’t mount the CD-ROM image. When I was using PXE before, I noticed that rebooting the VM manually always booted from PXE again. I thought it odd at the time, but it makes sense now that the VM config needs to change between the first boot and the second boot.
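To test that theory next time around, I plan to compare the libvirt domain definition between the first and second boot. A read-only query should work on the host (assuming the hosted-engine domain is named HostedEngine, as it is on my setup):

# List domains, then show the configured boot devices for the engine VM:
virsh -r list --all
virsh -r dumpxml HostedEngine | grep -A4 '<os>'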
I was then able to log in as root and perform the following actions:
· yum -y update
· yum -y localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
· vi /etc/yum.repos.d/ovirt-3.5.repo (change to use baseurl instead of mirrors, as the mirrors aren’t updated with the latest packages; see the snippet after this list)
· yum install ovirt-engine
· engine-setup
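For the repo edit above, this is roughly the change I made (the stanza below is illustrative; the exact section names and URLs in your copy of /etc/yum.repos.d/ovirt-3.5.repo may differ):

[ovirt-3.5]
name=Latest oVirt 3.5 Release
# Comment out the mirrorlist and point at the primary repo directly,
# since the mirrors lagged behind the latest packages:
#mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el$releasever
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el$releasever/
enabled=1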
I then chose ‘continue setup’ on the host, but it reported that the engine was unreachable:
[ ERROR ] Engine is still unreachable
Engine health status page is not yet reachable.
To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup
(4) Destroy VM and abort setup
(1, 2, 3, 4)[1]:
I rebooted the VM with these commands:
· hosted-engine --vm-start
· hosted-engine --add-console-password
· hosted-engine --console
This did not resolve the problem.
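For completeness, I also checked the HA agent’s view of the VM from the host, though I’m not certain the HA services are fully in place this early in the deploy:

hosted-engine --vm-status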
Checking the engine-setup log file, I see the following detail on the error reported:
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd systemd.state:134 starting service nfs-server
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:824 execute: ('/bin/systemctl', 'start', 'nfs-server.service'), executable='None', cwd='None', env=None
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:874 execute-result: ('/bin/systemctl', 'start', 'nfs-server.service'), rc=1
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:932 execute-output: ('/bin/systemctl', 'start', 'nfs-server.service') stdout:
2015-10-28 14:57:01 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:937 execute-output: ('/bin/systemctl', 'start', 'nfs-server.service') stderr:
Job for nfs-server.service failed. See 'systemctl status nfs-server.service' and 'journalctl -xn' for details.
2015-10-28 14:57:01 DEBUG otopi.context context._executeMethod:152 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 142, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/system/nfs.py", line 307, in _closeup
state=state,
File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 138, in state
'start' if state else 'stop'
File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 77, in _executeServiceCommand
raiseOnError=raiseOnError
File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 942, in execute
command=args[0],
RuntimeError: Command '/bin/systemctl' failed to execute
2015-10-28 14:57:01 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Closing up': Command '/bin/systemctl' failed to execute
2015-10-28 14:57:01 DEBUG otopi.context context.dumpEnvironment:490 ENVIRONMENT DUMP - BEGIN
2015-10-28 14:57:01 DEBUG otopi.context context.dumpEnvironment:500 ENV BASE/error=bool:'True'
2015-10-28 14:57:01 DEBUG otopi.context context.dumpEnvironment:500 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Command '/bin/systemctl' failed to execute",), <traceback object at 0x42b1dd0>)]'
2015-10-28 14:57:01 DEBUG otopi.context context.dumpEnvironment:504 ENVIRONMENT DUMP - END
2015-10-28 14:57:01 INFO otopi.context context.runSequence:417 Stage: Clean up
2015-10-28 14:57:01 DEBUG otopi.context context.runSequence:421 STAGE cleanup
I checked the status of nfs-server.service, and it is now running:
[root@ovirt ~]# systemctl status nfs-server.service
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled)
Active: active (exited) since Wed 2015-10-28 15:14:00 EDT; 48min ago
Process: 11983 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 11981 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 11983 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Oct 28 15:14:00 ovirt.ca.stalab.ciena.com systemd[1]: Started NFS server and services.
[root@ovirt ~]#
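The status above only covers the successful start at 15:14. To see why the start attempt at 14:57 failed, I’ll pull the journal for that window:

journalctl -u nfs-server.service --since "2015-10-28 14:50" --until "2015-10-28 15:00"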
Here is the full status of services on the engine VM:
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active waiting Arbitrary Executable File Formats File System Automount Point
sys-devices-pci0000:00-0000:00:01.1-ata2-host1-target1:0:0-1:0:0:0-block-sr0.device loaded active plugged QEMU_DVD-ROM
sys-devices-pci0000:00-0000:00:03.0-virtio0-net-eth0.device loaded active plugged Virtio network device
sys-devices-pci0000:00-0000:00:05.0-virtio2-virtio\x2dports-vport2p0.device loaded active plugged /sys/devices/pci0000:00/0000:00:05.0/virtio2/virtio-ports/vport2p0
sys-devices-pci0000:00-0000:00:05.0-virtio2-virtio\x2dports-vport2p1.device loaded active plugged /sys/devices/pci0000:00/0000:00:05.0/virtio2/virtio-ports/vport2p1
sys-devices-pci0000:00-0000:00:05.0-virtio2-virtio\x2dports-vport2p2.device loaded active plugged /sys/devices/pci0000:00/0000:00:05.0/virtio2/virtio-ports/vport2p2
sys-devices-pci0000:00-0000:00:06.0-virtio3-block-vda-vda1.device loaded active plugged /sys/devices/pci0000:00/0000:00:06.0/virtio3/block/vda/vda1
sys-devices-pci0000:00-0000:00:06.0-virtio3-block-vda-vda2.device loaded active plugged LVM PV VNNNF1-CMck-2V9j-1mq9-x0wR-jiFX-5GCndA on /dev/vda2
sys-devices-pci0000:00-0000:00:06.0-virtio3-block-vda.device loaded active plugged /sys/devices/pci0000:00/0000:00:06.0/virtio3/block/vda
sys-devices-platform-serial8250-tty-ttyS0.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS0
sys-devices-platform-serial8250-tty-ttyS1.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS1
sys-devices-platform-serial8250-tty-ttyS2.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS2
sys-devices-platform-serial8250-tty-ttyS3.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS3
sys-devices-virtual-block-dm\x2d0.device loaded active plugged /sys/devices/virtual/block/dm-0
sys-devices-virtual-block-dm\x2d1.device loaded active plugged /sys/devices/virtual/block/dm-1
sys-devices-virtual-tty-hvc0.device loaded active plugged /sys/devices/virtual/tty/hvc0
sys-devices-virtual-tty-hvc1.device loaded active plugged /sys/devices/virtual/tty/hvc1
sys-devices-virtual-tty-hvc2.device loaded active plugged /sys/devices/virtual/tty/hvc2
sys-devices-virtual-tty-hvc3.device loaded active plugged /sys/devices/virtual/tty/hvc3
sys-devices-virtual-tty-hvc4.device loaded active plugged /sys/devices/virtual/tty/hvc4
sys-devices-virtual-tty-hvc5.device loaded active plugged /sys/devices/virtual/tty/hvc5
sys-devices-virtual-tty-hvc6.device loaded active plugged /sys/devices/virtual/tty/hvc6
sys-devices-virtual-tty-hvc7.device loaded active plugged /sys/devices/virtual/tty/hvc7
sys-module-configfs.device loaded active plugged /sys/module/configfs
sys-subsystem-net-devices-eth0.device loaded active plugged Virtio network device
-.mount loaded active mounted /
boot.mount loaded active mounted /boot
dev-hugepages.mount loaded active mounted Huge Pages File System
dev-mqueue.mount loaded active mounted POSIX Message Queue File System
proc-fs-nfsd.mount loaded active mounted NFSD configuration filesystem
sys-kernel-config.mount loaded active mounted Configuration File System
sys-kernel-debug.mount loaded active mounted Debug File System
var-lib-nfs-rpc_pipefs.mount loaded active mounted RPC Pipe File System
brandbot.path loaded active waiting Flexible branding
systemd-ask-password-plymouth.path loaded active waiting Forward Password Requests to Plymouth Directory Watch
systemd-ask-password-wall.path loaded active waiting Forward Password Requests to Wall Directory Watch
session-1.scope loaded active running Session 1 of user root
session-2.scope loaded active running Session 2 of user root
auditd.service loaded active running Security Auditing Service
crond.service loaded active running Command Scheduler
dbus.service loaded active running D-Bus System Message Bus
firewalld.service loaded active running firewalld - dynamic firewall daemon
getty@tty1.service loaded active running Getty on tty1
gssproxy.service loaded active running GSSAPI Proxy Daemon
irqbalance.service loaded active running irqbalance daemon
kdump.service loaded active exited Crash recovery kernel arming
kmod-static-nodes.service loaded active exited Create list of required static device nodes for the current kernel
lvm2-lvmetad.service loaded active running LVM2 metadata daemon
lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
lvm2-pvscan@252:2.service loaded active exited LVM2 PV scan on device 252:2
network.service loaded active exited LSB: Bring up/down networking
NetworkManager.service loaded active running Network Manager
nfs-config.service loaded active exited Preprocess NFS configuration
nfs-idmapd.service loaded active running NFSv4 ID-name mapping service
nfs-mountd.service loaded active running NFS Mount Daemon
nfs-server.service loaded active exited NFS server and services
polkit.service loaded active running Authorization Manager
postfix.service loaded active running Postfix Mail Transport Agent
postgresql.service loaded active running PostgreSQL database server
rhel-dmesg.service loaded active exited Dump dmesg to /var/log/dmesg
rhel-import-state.service loaded active exited Import network configuration from initramfs
rhel-readonly.service loaded active exited Configure read-only root support
rpc-statd.service loaded active running NFS status monitor for NFSv2/3 locking.
rpcbind.service loaded active running RPC bind service
rsyslog.service loaded active running System Logging Service
sshd.service loaded active running OpenSSH server daemon
systemd-journald.service loaded active running Journal Service
systemd-logind.service loaded active running Login Service
systemd-random-seed.service loaded active exited Load/Save Random Seed
systemd-remount-fs.service loaded active exited Remount Root and Kernel File Systems
systemd-sysctl.service loaded active exited Apply Kernel Variables
systemd-tmpfiles-setup-dev.service loaded active exited Create static device nodes in /dev
systemd-tmpfiles-setup.service loaded active exited Create Volatile Files and Directories
systemd-udev-trigger.service loaded active exited udev Coldplug all Devices
systemd-udevd.service loaded active running udev Kernel Device Manager
systemd-update-utmp.service loaded active exited Update UTMP about System Reboot/Shutdown
systemd-user-sessions.service loaded active exited Permit User Sessions
systemd-vconsole-setup.service loaded active exited Setup Virtual Console
tuned.service loaded active running Dynamic System Tuning Daemon
-.slice loaded active active Root Slice
system-getty.slice loaded active active system-getty.slice
system.slice loaded active active System Slice
user-0.slice loaded active active user-0.slice
user.slice loaded active active User and Session Slice
dbus.socket loaded active running D-Bus System Message Bus Socket
dm-event.socket loaded active listening Device-mapper event daemon FIFOs
lvm2-lvmetad.socket loaded active running LVM2 metadata daemon socket
rpcbind.socket loaded active running RPCbind Server Activation Socket
systemd-initctl.socket loaded active listening /dev/initctl Compatibility Named Pipe
systemd-journald.socket loaded active running Journal Socket
systemd-shutdownd.socket loaded active listening Delayed Shutdown Socket
systemd-udevd-control.socket loaded active running udev Control Socket
systemd-udevd-kernel.socket loaded active running udev Kernel Socket
dev-dm\x2d1.swap loaded active active /dev/dm-1
basic.target loaded active active Basic System
cryptsetup.target loaded active active Encrypted Volumes
getty.target loaded active active Login Prompts
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network-online.target loaded active active Network is Online
network.target loaded active active Network
nss-lookup.target loaded active active Host and Network Name Lookups
paths.target loaded active active Paths
remote-fs.target loaded active active Remote File Systems
rpcbind.target loaded active active RPC Port Mapper
slices.target loaded active active Slices
sockets.target loaded active active Sockets
swap.target loaded active active Swap
sysinit.target loaded active active System Initialization
timers.target loaded active active Timers
systemd-tmpfiles-clean.timer loaded active waiting Daily Cleanup of Temporary Directories
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
112 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
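One thing that stands out to me: no ovirt-engine unit appears anywhere in that list, even though engine-setup ran. Assuming the engine is installed as ovirt-engine.service (my understanding of the unit name; please correct me if that’s wrong), these should show whether the unit exists and why it isn’t running:

systemctl status ovirt-engine.service -l
journalctl -u ovirt-engine.service -n 50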
What services or other components are missing and preventing the oVirt engine from running?
Checking the hosted-engine-setup log file on the host, I see:
2015-10-28 12:15:46 DEBUG otopi.ovirt_hosted_engine_setup.check_liveliness check_liveliness.isEngineUp:46 Checking for Engine health status
2015-10-28 12:15:46 ERROR otopi.ovirt_hosted_engine_setup.check_liveliness check_liveliness.isEngineUp:68 Engine is still unreachable
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND Engine health status page is not yet reachable.
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human human.queryString:153 query OVEHOSTED_ENGINE_UP
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND To continue make a selection from the options below:
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (1) Continue setup - engine installation is complete
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (2) Power off and restart the VM
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (3) Abort setup
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (4) Destroy VM and abort setup
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND
2015-10-28 12:15:46 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND (1, 2, 3, 4)[1]:
Manually checking with ‘hosted-engine --check-liveliness’, I get this output:
[root@bml-21 ~]# hosted-engine --check-liveliness
No handlers could be found for logger "otopi.__main__"
Hosted Engine is not up!
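As a lower-level check, I also tried fetching the health page that the liveliness test appears to poll (the URL path is my assumption, based on the check_liveliness entries in the log above):

curl -v http://ovirt.ca.stalab.ciena.com/ovirt-engine/services/health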
Thanks for the help,
Bruce.
Bruce T. Jorgens | Technical Lead, System Test Automation
bjorgens@ciena.com | 115 North Sullivan Road | Spokane Valley, WA 99037 USA
Direct +1.509.242.9292 | Mobile +1.509.499.0215 | Fax +1.509.242.9001
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Wednesday, October 28, 2015 10:06 AM
To: Jorgens, Bruce
Cc: Users@ovirt.org
Subject: Re: [ovirt-users] Unable to install ovirt engine using Centos 7.1 with 3.5.5
On Wed, Oct 28, 2015 at 5:53 PM, Jorgens, Bruce <bjorgens@ciena.com> wrote:
Simone,
I’m using an exported NFS file system for storage of the VM.
I’m not changing the filesystem type during the CentOS install.
Please let me know if there is any other information I can provide that might be helpful.
I tried it today and it worked as expected with oVirt 3.5.5 and exactly your CentOS ISO image.
Can you please share /var/log/messages and the vdsm logs from your host so we can check whether you had a storage issue?
Bruce.
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Wednesday, October 28, 2015 3:00 AM
To: Jorgens, Bruce
Cc: Users@ovirt.org
Subject: Re: [ovirt-users] Unable to install ovirt engine using Centos 7.1 with 3.5.5
On Tue, Oct 27, 2015 at 9:00 PM, Jorgens, Bruce <bjorgens@ciena.com> wrote:
I’m trying to install ovirt-3.5.5 in a hosted engine configuration on an HP DL360 system with CentOS 7.1.
All works well up until the hosted VM portion of the install.
After installing CentOS 7.1 on the engine, the VM is unable to boot.
I’ve tried installing via PXE as well as with an attached CD-ROM image of CentOS 7.1 and got the same results.
There was no indication of any issue during the install.
I ran a rescue image against the VM, and fdisk could not find a valid partition table on /dev/vda.
Curiously, a reinstall using the VM disk finds a partition table and prompts whether the space should be recovered.
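From the host side, I can also inspect the image file directly if that would help; something like the following is what I have in mind (the path is illustrative, since the image lives under the hosted-engine NFS storage domain mount with UUIDs specific to my deployment):

qemu-img info /rhev/data-center/mnt/<server:_export>/<sd-uuid>/images/<img-uuid>/<vol-uuid>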
Hi Bruce,
I have deployed it several times and never noticed anything similar.
Which kind of storage are you using?
Are you changing the default file system type or the partition table type when you install CentOS?
Below is a sanitized version of the configuration preview:
--== CONFIGURATION PREVIEW ==--
Engine FQDN : XXXXXX
Bridge name : ovirtmgmt
SSH daemon port : 22
Firewall manager : iptables
Gateway address : 10.33.80.1
Host name for web application : bml-21
Host ID : 1
Image alias : hosted_engine
Image size GB : 25
Storage connection : XXXXXX
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:65:0a:d7
Boot type : cdrom
Number of CPUs : 2
ISO image (for cdrom boot) : /tmp/CentOS-7-x86_64-Minimal-1503-01.iso
CPU Type : model_SandyBridge
I found this old post from nearly a year ago. The issue sounds quite similar:
http://www.redgrid.net/kvm/no-bootable-device-found-ovirt-3-5-on-centos7/
I appreciate any help.