<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Attached are the VDSM logs covering 'hosted-engine --deploy' through
'hosted-engine --console'.<br>
</p>
<br>
<div class="moz-cite-prefix">On 08.11.2016 at 10:43, Simone
Tiraboschi wrote:<br>
</div>
<blockquote
cite="mid:CAN8-ONrc154+zsMbFNoV5ZCC1zSxH=JWi+BPe88iypFyxfs9-g@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Nov 8, 2016 at 9:35 AM,
Steffen Nolden <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:steffen.nolden@alumni.fh-aachen.de"
target="_blank">steffen.nolden@alumni.fh-aachen.de</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p>Hello Florian,</p>
<p>there is no problem resolving the FQDN.</p>
</div>
</blockquote>
<div><br>
</div>
<div>Can you please also attach the VDSM logs for the relevant
time frame?</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p>[nolden@oVirtNode01 ~]$ nslookup oVirtEngine.com<br>
Server: 192.168.122.1<br>
Address: 192.168.122.1#53<br>
<br>
Name: oVirtEngine.com<br>
Address: 192.168.122.201<br>
<br>
The '/etc/hosts' file of each host is configured to
resolve the name.</p>
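<p>As a minimal sketch, the /etc/hosts entries on each node might look like the following (addresses taken from this thread; written to a scratch file here so the sketch runs without root):</p>

```shell
# Hypothetical /etc/hosts entries for the machines in this thread.
# Written to a scratch file so the sketch runs without root; on a real
# node these lines would go into /etc/hosts itself.
HOSTS_FILE="$(mktemp)"
cat > "$HOSTS_FILE" <<'EOF'
192.168.122.201  oVirtEngine.com
192.168.122.101  oVirtNode01.com
192.168.122.102  oVirtNode02.com
192.168.122.103  oVirtNode03.com
EOF
# Confirm the engine entry is present and well-formed.
grep -E '^192\.168\.122\.201[[:space:]]+oVirtEngine\.com$' "$HOSTS_FILE" \
  && echo "engine entry present"
```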
<p>I tried it with the firewall stopped,<br>
</p>
<p>[nolden@oVirtNode01 ~]$ sudo systemctl stop firewalld<br>
[nolden@oVirtNode01 ~]$ sudo systemctl status
firewalld<br>
● firewalld.service - firewalld - dynamic firewall
daemon<br>
Loaded: loaded (/usr/lib/systemd/system/<wbr>firewalld.service;
enabled; vendor preset: enabled)<br>
Active: inactive (dead) since Tue 2016-11-08
09:10:46 CET; 3s ago<br>
Process: 724 ExecStart=/usr/sbin/firewalld --nofork
--nopid $FIREWALLD_ARGS (code=exited,
status=0/SUCCESS)<br>
Main PID: 724 (code=exited, status=0/SUCCESS)<br>
</p>
<p>but I got the same error!</p>
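<p>Since ping also fails, a TCP-level probe (independent of ICMP) can show whether the engine's SSH port answers at all; the IP below is the engine address from this thread, so adjust as needed:</p>

```shell
# TCP-level probe of the engine VM's SSH port, independent of ICMP.
# ENGINE_IP is the engine address used in this thread; adjust as needed.
ENGINE_IP=192.168.122.201
if timeout 3 bash -c "exec 3<>/dev/tcp/$ENGINE_IP/22" 2>/dev/null; then
    RESULT="port 22 reachable"
else
    RESULT="port 22 not reachable"   # matches the 'No route to host' symptom
fi
echo "$RESULT"
```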
<div>
<div class="h5">
<p><br>
</p>
<div class="m_-5978212310940519061moz-cite-prefix">On
04.11.2016 at 15:50, Florian Nolden wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hello Steffen,
<div><br>
</div>
<div>can your nodes resolve the FQDN "<a
moz-do-not-send="true"
href="http://ovirtengine.com"
target="_blank">ovirtengine.com</a>" to the
hosted engine ip ( nslookup <a
moz-do-not-send="true"
href="http://ovirtengine.com"
target="_blank">ovirtengine.com</a>) ? </div>
<div>If that works have you tried to disable the
firewall temporarily? </div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">2016-11-04 14:11
GMT+01:00 Steffen Nolden <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:steffen.nolden@alumni.fh-aachen.de"
target="_blank">steffen.nolden@alumni.fh-<wbr>aachen.de</a>></span>:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">Hello,<br>
<br>
I tried to deploy hosted-engine in a testing
environment. First I tried to deploy
with the option<br>
<br>
"Automatically execute engine-setup on
the engine appliance on first boot (Yes,
No)[Yes]? Yes"<br>
<br>
but it got stuck.<br>
<br>
[ INFO ] Running engine-setup on the
appliance<br>
[ ERROR ] Engine setup got stuck on the
appliance<br>
[ ERROR ] Failed to execute stage 'Closing
up': Engine setup is stalled on the
appliance since 600 seconds ago. Please
check its log on the appliance.<br>
[ INFO ] Stage: Clean up<br>
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-<wbr>setup/answers/answers-20161104<wbr>112913.conf'<br>
[ INFO ] Stage: Pre-termination<br>
[ INFO ] Stage: Termination<br>
[ ERROR ] Hosted Engine deployment failed:
this system is not reliable, please check
the issue,fix and redeploy<br>
Log file is located at
/var/log/ovirt-hosted-engine-s<wbr>etup/ovirt-hosted-engine-setup<wbr>-20161104110104-kyhq1e.log<br>
<br>
On the next try I answered 'No' and tried to
execute it myself. But I can't access the
engine appliance.<br>
<br>
[nolden@oVirtNode01 ~]$ sudo
hosted-engine --console<br>
/usr/share/vdsm/vdsClient.py:3<wbr>3:
DeprecationWarning: vdscli uses xmlrpc.
since ovirt 3.6 xmlrpc is deprecated, please
use vdsm.jsonrpcvdscli<br>
from vdsm import utils, vdscli,
constants<br>
The engine VM is running on this host<br>
Connected to domain: HostedEngine<br>
Escape character is ^]<br>
error: internal error: character
device console0 is not using a
PTY<br>
<br>
Additionally, I can't ping the engine
appliance or access it via SSH.<br>
<br>
Did I forget to install a package or
configure something?<br>
<br>
My infrastructure and configs are below.
The log file is attached.<br>
<br>
<br>
Thanks for help.<br>
<br>
<br>
<br>
My infrastructure is a nested-VM system:<br>
All systems have nested virtualization
enabled.<br>
<br>
HW with LinuxMint x86_64;<br>
- VM with CentOS-7-x86_64; 24 GB RAM; 7
cores;<br>
- Nested VM with CentOS-7-x86_64;
12288 MB RAM; 4 cores; Hostname:
oVirtNode01.com (192.168.122.101): Here I
deploy hosted-engine, Hostname:
oVirtEngine.com (192.168.122.201)<br>
- Nested VM with CentOS-7-x86_64;
4096 MB RAM; 1 core; Hostname:
oVirtNode02.com (192.168.122.102)<br>
- Nested VM with CentOS-7-x86_64;
4096 MB RAM; 1 core; Hostname:
oVirtNode03.com (192.168.122.103)<br>
<br>
All three nested VMs are up to date and have
ovirt-release40.rpm installed. Additionally
installed:<br>
screen (4.1.0), ovirt-hosted-engine-setup
(2.0.2.2), vdsm-gluster (4.18.13),
bridge-utils (1.5),<br>
vdsm (4.18.13), vdsm-cli (4.18.13),
glusterfs-server (3.7.16), samba.<br>
<br>
The first nested VM additionally has
ovirt-engine-appliance
4.0-20160928.1.el7.centos installed.<br>
<br>
The three VMs form a GlusterFS volume
"engine" with replica 3 and the following options:<br>
<br>
Volume Name: engine<br>
Type: Replicate<br>
Volume ID: e92849b7-af3b-4ccd-bd0d-69a5ab<wbr>3b6214<br>
Status: Started<br>
Number of Bricks: 1 x 3 = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: oVirtNode01.com:/gluster/engin<wbr>e/brick1<br>
Brick2: oVirtNode02.com:/gluster/engin<wbr>e/brick1<br>
Brick3: oVirtNode03.com:/gluster/engin<wbr>e/brick1<br>
Options Reconfigured:<br>
auth.allow: 192.168.122.*<br>
storage.owner-gid: 36<br>
storage.owner-uid: 36<br>
performance.readdir-ahead: on<br>
<br>
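<p>For reference, a volume with the options shown above could have been created roughly like this (a sketch using the node and brick names from this thread; guarded so it does nothing on machines without the gluster CLI):</p>

```shell
# Sketch of creating the 'engine' volume shown above (replica 3; brick
# paths and options from this thread). Guarded so it is a no-op on
# machines without the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
    gluster volume create engine replica 3 \
        oVirtNode01.com:/gluster/engine/brick1 \
        oVirtNode02.com:/gluster/engine/brick1 \
        oVirtNode03.com:/gluster/engine/brick1
    gluster volume set engine storage.owner-uid 36
    gluster volume set engine storage.owner-gid 36
    gluster volume set engine auth.allow '192.168.122.*'
    gluster volume start engine
else
    echo "gluster CLI not installed; commands shown for reference only"
fi
GLUSTER_SKETCH_DONE=1
```

<p>The owner-uid/gid of 36 matches the vdsm user and kvm group, which oVirt needs to access the bricks.</p>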
## Session 1 (hosted-engine deploy without
automatically execute engine-setup)<br>
[nolden@oVirtNode01 ~]$ sudo hosted-engine
--deploy<br>
/usr/lib/python2.7/site-packag<wbr>es/ovirt_hosted_engine_ha/lib/<wbr>storage_backends.py:15:
DeprecationWarning: vdscli uses xmlrpc.
since ovirt 3.6 xmlrpc is deprecated, please
use vdsm.jsonrpcvdscli<br>
import vdsm.vdscli<br>
[ INFO ] Stage: Initializing<br>
[ INFO ] Generating a temporary VNC
password.<br>
[ INFO ] Stage: Environment setup<br>
During customization use CTRL-D to
abort.<br>
Continuing will configure this
host for serving as hypervisor and create a
VM where you have to install the engine
afterwards.<br>
Are you sure you want to continue?
(Yes, No)[Yes]:<br>
[ INFO ] Hardware supports virtualization<br>
Configuration files: []<br>
Log file:
/var/log/ovirt-hosted-engine-s<wbr>etup/ovirt-hosted-engine-setup<wbr>-20161104123834-chwikf.log<br>
Version: otopi-1.5.2
(otopi-1.5.2-1.el7.centos)<br>
[ INFO ] Stage: Environment packages setup<br>
[ INFO ] Stage: Programs detection<br>
[ INFO ] Stage: Environment setup<br>
[ INFO ] Generating libvirt-spice
certificates<br>
[ INFO ] Stage: Environment customization<br>
<br>
--== STORAGE CONFIGURATION ==--<br>
<br>
Please specify the storage you
would like to use (glusterfs, iscsi, fc,
nfs3, nfs4)[nfs3]: glusterfs<br>
[ INFO ] Please note that Replica 3 support
is required for the shared storage.<br>
Please specify the full shared
storage connection path to use (example:
host:/path): oVirtNode01.com:/engine<br>
[ INFO ] GlusterFS replica 3 Volume
detected<br>
[ INFO ] Installing on first host<br>
[ INFO ] GlusterFS replica 3 Volume
detected<br>
<br>
--== SYSTEM CONFIGURATION ==--<br>
<br>
<br>
--== NETWORK CONFIGURATION ==--<br>
<br>
iptables was detected on your
computer, do you wish setup to configure it?
(Yes, No)[Yes]: No<br>
Please indicate a pingable gateway
IP address [192.168.122.1]:<br>
Please indicate a nic to set
ovirtmgmt bridge on: (eth0) [eth0]:<br>
<br>
--== VM CONFIGURATION ==--<br>
<br>
Booting from cdrom on RHEL7 is ISO
image based only, as cdrom passthrough is
disabled (BZ760885)<br>
Please specify the device to boot
the VM from (choose disk for the oVirt
engine appliance)<br>
(cdrom, disk, pxe) [disk]:<br>
Please specify the console type
you would like to use to connect to the VM
(vnc, spice) [vnc]:<br>
[ INFO ] Detecting available oVirt engine
appliances<br>
The following appliance have been
found on your system:<br>
[1] - The oVirt Engine
Appliance image (OVA) -
4.0-20160928.1.el7.centos<br>
[2] - Directly select an OVA
file<br>
Please select an appliance (1, 2)
[1]:<br>
[ INFO ] Verifying its sha1sum<br>
[ INFO ] Checking OVF archive content
(could take a few minutes depending on
archive size)<br>
[ INFO ] Checking OVF XML content (could
take a few minutes depending on archive
size)<br>
[WARNING] OVF does not contain a valid image
description, using default.<br>
Would you like to use cloud-init
to customize the appliance on the first boot
(Yes, No)[Yes]?<br>
Would you like to generate on-fly
a cloud-init ISO image (of no-cloud type)<br>
or do you have an existing one
(Generate, Existing)[Generate]?<br>
Please provide the FQDN you would
like to use for the engine appliance.<br>
Note: This will be the FQDN of the
engine VM you are now going to launch,<br>
it should not point to the base
host or to any other existing machine.<br>
Engine VM FQDN: (leave it empty to
skip): []: oVirtEngine.com<br>
Automatically execute engine-setup
on the engine appliance on first boot (Yes,
No)[Yes]? No<br>
Please provide the domain name you
would like to use for the engine appliance.<br>
Engine VM domain: [com]<br>
Enter root password that will be
used for the engine appliance (leave it
empty to skip):<br>
Confirm appliance root password:<br>
The following CPU types are
supported by this host:<br>
- model_SandyBridge: Intel
SandyBridge Family<br>
- model_Westmere: Intel
Westmere Family<br>
- model_Nehalem: Intel
Nehalem Family<br>
- model_Penryn: Intel Penryn
Family<br>
- model_Conroe: Intel Conroe
Family<br>
Please specify the CPU type to be
used by the VM [model_SandyBridge]:<br>
Please specify the number of
virtual CPUs for the VM (Defaults to
appliance OVF value): [4]: 3<br>
[WARNING] Minimum requirements for disk size
not met<br>
You may specify a unicast MAC
address for the VM or accept a randomly
generated default [00:16:3e:2a:82:17]:<br>
Please specify the memory size of
the VM in MB (Defaults to maximum
available): [10880]: 8144<br>
How should the engine VM network
be configured (DHCP, Static)[DHCP]? Static<br>
Please enter the IP address to be
used for the engine VM [192.168.122.2]:
192.168.122.201<br>
[ INFO ] The engine VM will be configured
to use <a moz-do-not-send="true"
href="http://192.168.122.201/24"
rel="noreferrer" target="_blank">192.168.122.201/24</a><br>
Please provide a comma-separated
list (max 3) of IP addresses of domain name
servers for the engine VM<br>
Engine VM DNS (leave it empty to
skip) [192.168.122.1]:<br>
Add lines for the appliance itself
and for this host to /etc/hosts on the
engine VM?<br>
Note: ensuring that this host
could resolve the engine VM hostname is
still up to you<br>
(Yes, No)[No] Yes<br>
<br>
--== HOSTED ENGINE CONFIGURATION
==--<br>
<br>
Enter engine admin password:<br>
Confirm engine admin password:<br>
Enter the name which will be used
to identify this host inside the
Administrator Portal [hosted_engine_1]:<br>
Please provide the name of the
SMTP server through which we will send
notifications [localhost]:<br>
Please provide the TCP port number
of the SMTP server [25]:<br>
Please provide the email address
from which notifications will be sent
[root@localhost]:<br>
Please provide a comma-separated
list of email addresses which will get
notifications [root@localhost]:<br>
[ INFO ] Stage: Setup validation<br>
<br>
--== CONFIGURATION PREVIEW ==--<br>
<br>
Bridge interface
: eth0<br>
Engine FQDN
: oVirtEngine.com<br>
Bridge name
: ovirtmgmt<br>
Host address
: oVirtNode01<br>
SSH daemon port
: 22<br>
Gateway address
: 192.168.122.1<br>
Host name for web application
: hosted_engine_1<br>
Storage Domain type
: glusterfs<br>
Host ID
: 1<br>
Image size GB
: 10<br>
Storage connection
: oVirtNode01.com:/engine<br>
Console type
: vnc<br>
Memory size MB
: 8144<br>
MAC address
: 00:16:3e:2a:82:17<br>
Boot type
: disk<br>
Number of CPUs
: 3<br>
OVF archive (for disk boot)
: /usr/share/ovirt-engine-applia<wbr>nce/ovirt-engine-appliance-4.0<wbr>-20160928.1.el7.centos.ova<br>
Appliance version
: 4.0-20160928.1.el7.centos<br>
CPU Type
: model_SandyBridge<br>
<br>
Please confirm installation
settings (Yes, No)[Yes]:<br>
[ INFO ] Stage: Transaction setup<br>
...<br>
[ INFO ] Stage: Closing up<br>
The following network ports should
be opened:<br>
tcp:5900<br>
tcp:5901<br>
tcp:9090<br>
udp:5900<br>
udp:5901<br>
An example of the required
configuration for iptables can be found at:<br>
/etc/ovirt-hosted-engine/iptab<wbr>les.example<br>
In order to configure firewalld,
copy the files from<br>
/etc/ovirt-hosted-engine/firew<wbr>alld
to /etc/firewalld/services<br>
and execute the following
commands:<br>
firewall-cmd -service
hosted-cockpit<br>
firewall-cmd -service
hosted-console<br>
[ INFO ] Creating VM<br>
You can now connect to the VM with
the following command:<br>
hosted-engine --console<br>
You can also graphically connect
to the VM from your system with the
following command:<br>
remote-viewer
vnc://oVirtNode01:5900<br>
Use temporary password "2263CpET"
to connect to vnc console.<br>
Please ensure that your Guest OS
is properly configured to support serial
console according to your distro
documentation.<br>
Follow <a moz-do-not-send="true"
href="http://www.ovirt.org/Serial_Console_Setup#I_need_to_access_the_console_the_old_way"
rel="noreferrer" target="_blank">http://www.ovirt.org/Serial_Co<wbr>nsole_Setup#I_need_to_access_t<wbr>he_console_the_old_way</a>
for more info.<br>
If you need to reboot the VM you
will need to start it manually using the
command:<br>
hosted-engine --vm-start<br>
You can then set a temporary
password using the command:<br>
hosted-engine
--add-console-password<br>
Please install and setup the
engine in the VM.<br>
You may also be interested in
installing ovirt-guest-agent-common package
in the VM.<br>
<br>
<br>
The VM has been rebooted.<br>
To continue please install
oVirt-Engine in the VM<br>
(Follow <a moz-do-not-send="true"
href="http://www.ovirt.org/Quick_Start_Guide" rel="noreferrer"
target="_blank">http://www.ovirt.org/Quick_Sta<wbr>rt_Guide</a>
for more info).<br>
<br>
Make a selection from the options
below:<br>
(1) Continue setup - oVirt-Engine
installation is ready and ovirt-engine
service is up<br>
(2) Abort setup<br>
(3) Power off and restart the VM<br>
(4) Destroy VM and abort setup<br>
<br>
(1, 2, 3, 4)[1]:<br>
## Session 2<br>
[nolden@oVirtNode01 ~]$ sudo hosted-engine
--console<br>
/usr/share/vdsm/vdsClient.py:3<wbr>3:
DeprecationWarning: vdscli uses xmlrpc.
since ovirt 3.6 xmlrpc is deprecated, please
use vdsm.jsonrpcvdscli<br>
from vdsm import utils, vdscli, constants<br>
The engine VM is running on this host<br>
Connected to domain: HostedEngine<br>
Escape character is ^]<br>
error: internal error: character device
console0 is not using a PTY<br>
[nolden@oVirtNode01 ~]$ ssh <a
moz-do-not-send="true"
class="m_-5978212310940519061moz-txt-link-abbreviated"
href="mailto:root@oVirtEngine.com"
target="_blank">root@oVirtEngine.com</a><br>
ssh: connect to host <a
moz-do-not-send="true"
href="http://ovirtengine.com"
rel="noreferrer" target="_blank">ovirtengine.com</a>
port 22: No route to host<br>
<br>
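<p>The 'console0 is not using a PTY' message suggests the HostedEngine domain was defined without a pty-backed console device. One way to check (assuming libvirt is installed and the VM is defined on this host) is to inspect the domain XML:</p>

```shell
# Inspect the HostedEngine domain definition for its console device;
# 'hosted-engine --console' needs a pty-backed <console> element.
# Guarded so the sketch is a no-op where libvirt is not installed.
if command -v virsh >/dev/null 2>&1; then
    virsh dumpxml HostedEngine 2>/dev/null | grep -A2 '<console' \
        || echo "HostedEngine domain not found or has no console device"
else
    echo "virsh not installed; command shown for reference only"
fi
VIRSH_CHECK_DONE=1
```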
## Back to session 1<br>
...<br>
(1, 2, 3, 4)[1]: 1<br>
<br>
Checking for oVirt-Engine status
at oVirtEngine.com...<br>
[ INFO ] Engine is still unreachable<br>
oVirt-Engine health status page is
not yet reachable.<br>
<br>
<br>
The VM has been rebooted.<br>
To continue please install
oVirt-Engine in the VM<br>
(Follow <a moz-do-not-send="true"
href="http://www.ovirt.org/Quick_Start_Guide" rel="noreferrer"
target="_blank">http://www.ovirt.org/Quick_Sta<wbr>rt_Guide</a>
for more info).<br>
<br>
Make a selection from the options
below:<br>
(1) Continue setup - oVirt-Engine
installation is ready and ovirt-engine
service is up<br>
(2) Abort setup<br>
(3) Power off and restart the VM<br>
(4) Destroy VM and abort setup<br>
<br>
(1, 2, 3, 4)[1]: 2<br>
[ ERROR ] Failed to execute stage 'Closing
up': Setup aborted by user<br>
[ INFO ] Stage: Clean up<br>
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-<wbr>setup/answers/answers-20161104<wbr>130223.conf'<br>
[ INFO ] Stage: Pre-termination<br>
[ INFO ] Stage: Termination<br>
[ ERROR ] Hosted Engine deployment failed:
this system is not reliable, please check
the issue,fix and redeploy<br>
Log file is located at
/var/log/ovirt-hosted-engine-s<wbr>etup/ovirt-hosted-engine-setup<wbr>-20161104123834-chwikf.log<br>
<br>
<br>
______________________________<wbr>_________________<br>
Users mailing list<br>
<a moz-do-not-send="true"
href="mailto:Users@ovirt.org"
target="_blank">Users@ovirt.org</a><br>
<a moz-do-not-send="true"
href="http://lists.ovirt.org/mailman/listinfo/users"
rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
<br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</div>
</div>
</div>
<br>
______________________________<wbr>_________________<br>
Users mailing list<br>
<a moz-do-not-send="true" href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a moz-do-not-send="true"
href="http://lists.ovirt.org/mailman/listinfo/users"
rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><br>
<br>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</body>
</html>