On 08.11.2016 at 11:51, Simone Tiraboschi wrote:
On Tue, Nov 8, 2016 at 11:30 AM, Steffen Nolden
<steffen.nolden@alumni.fh-aachen.de> wrote:
Attached the VDSM logs from 'hosted-engine --deploy' to
'hosted-engine --console'
Everything seems fine here.
Now I read that you are using a nested env: did you remove the MAC spoof
filters on the physical host?
What is that and why is it necessary?
I want to connect from the nested VM to the local engine appliance, and
that didn't work.
Physical host --> VM --> NestedVM -x-> engine appliance
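For context, as far as I understand it (an assumption on my side, not verified against this setup): on the physical host, libvirt can attach network filters such as `clean-traffic` to a VM's interface, which drop frames whose source MAC differs from the one libvirt assigned to that vNIC. In a nested setup the engine appliance's own MAC is exactly such a "foreign" address, so its traffic never makes it past the physical host. A sketch of how one might check for and remove such a filter there (the VM name `centos-vm` is a placeholder):

```shell
# On the PHYSICAL host: does the outer VM's interface carry a network filter?
virsh dumpxml centos-vm | grep -B1 -A1 filterref

# If so, delete the <filterref filter='clean-traffic'/> element (or similar)
# from the <interface> section and restart the VM:
virsh edit centos-vm
```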
On 08.11.2016 at 10:43, Simone Tiraboschi wrote:
>
>
> On Tue, Nov 8, 2016 at 9:35 AM, Steffen Nolden
> <steffen.nolden@alumni.fh-aachen.de> wrote:
>
> Hello Florian,
>
> there is no problem resolving the FQDN
>
>
> Can you please also attach the VDSM logs for the relevant time frame?
>
> [nolden@oVirtNode01 ~]$ nslookup oVirtEngine.com
> Server: 192.168.122.1
> Address: 192.168.122.1#53
>
> Name: oVirtEngine.com
> Address: 192.168.122.201
>
> The '/etc/hosts' file of each host is configured to resolve
> the name.
>
> I tried it with the firewall stopped,
>
> [nolden@oVirtNode01 ~]$ sudo systemctl stop firewalld
> [nolden@oVirtNode01 ~]$ sudo systemctl status firewalld
> ● firewalld.service - firewalld - dynamic firewall daemon
> Loaded: loaded (/usr/lib/systemd/system/firewalld.service;
> enabled; vendor preset: enabled)
> Active: inactive (dead) since Tue 2016-11-08 09:10:46 CET;
> 3s ago
> Process: 724 ExecStart=/usr/sbin/firewalld --nofork --nopid
> $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
> Main PID: 724 (code=exited, status=0/SUCCESS)
>
> but I got the same error!
>
>
> On 04.11.2016 at 15:50, Florian Nolden wrote:
>> Hello Steffen,
>>
>> can your nodes resolve the FQDN "ovirtengine.com" to the
>> hosted engine IP (nslookup ovirtengine.com)?
>> If that works have you tried to disable the firewall
>> temporarily?
>>
>> 2016-11-04 14:11 GMT+01:00 Steffen Nolden
>> <steffen.nolden@alumni.fh-aachen.de>:
>>
>> Hello,
>>
>> I tried to deploy hosted-engine in a test
>> environment. First of all I tried to deploy with the option
>>
>> "Automatically execute engine-setup on the engine
>> appliance on first boot (Yes, No)[Yes]? Yes"
>>
>> but it got stuck.
>>
>> [ INFO ] Running engine-setup on the appliance
>> [ ERROR ] Engine setup got stuck on the appliance
>> [ ERROR ] Failed to execute stage 'Closing up': Engine
>> setup is stalled on the appliance since 600 seconds ago.
>> Please check its log on the appliance.
>> [ INFO ] Stage: Clean up
>> [ INFO ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20161104112913.conf'
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: this system
>> is not reliable, please check the issue,fix and redeploy
>> Log file is located at
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161104110104-kyhq1e.log
>>
>> On the next try I said 'No' and tried to execute it myself.
>> But I can't access the engine appliance.
>>
>> [nolden@oVirtNode01 ~]$ sudo hosted-engine --console
>> /usr/share/vdsm/vdsClient.py:33: DeprecationWarning:
>> vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
>> deprecated, please use vdsm.jsonrpcvdscli
>> from vdsm import utils, vdscli, constants
>> The engine VM is running on this host
>> Connected to domain: HostedEngine
>> Escape character is ^]
>> error: internal error: character device console0
>> is not using a PTY
>>
>> Additionally I can't ping or access the engine appliance
>> via ssh.
>>
>> Did I forget to install a package or configure something?
>>
>> Below my infrastructure and configs. Attached the log file.
>>
>>
>> Thanks for the help.
>>
>>
>>
>> My infrastructure is a nested-VM system:
>> All systems have nested virtualization enabled
>>
>> HW with LinuxMint x86_64;
>> - VM with CentOS-7-x86_64; 24 GB RAM; 7 cores;
>> - Nested VM with CentOS-7-x86_64; 12288 MB RAM; 4 cores;
>> Hostname: oVirtNode01.com (192.168.122.101): here I deploy
>> hosted-engine; engine hostname: oVirtEngine.com (192.168.122.201)
>> - Nested VM with CentOS-7-x86_64; 4096 MB RAM; 1 core;
>> Hostname: oVirtNode02.com (192.168.122.102)
>> - Nested VM with CentOS-7-x86_64; 4096 MB RAM; 1 core;
>> Hostname: oVirtNode03.com (192.168.122.103)
>>
>> All three nested VMs are up to date and have
>> ovirt-release40.rpm installed. Additionally installed:
>> screen (4.1.0), ovirt-hosted-engine-setup (2.0.2.2),
>> vdsm-gluster (4.18.13), bridge-utils (1.5),
>> vdsm (4.18.13), vdsm-cli (4.18.13), glusterfs-server
>> (3.7.16), samba.
>>
>> The first nested VM additionally has
>> ovirt-engine-appliance 4.0-20160928.1.el7.centos installed.
>>
>> The three VMs form a glusterfs volume "engine" with
>> replica 3, with the options...
>>
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: e92849b7-af3b-4ccd-bd0d-69a5ab3b6214
>> Status: Started
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: oVirtNode01.com:/gluster/engine/brick1
>> Brick2: oVirtNode02.com:/gluster/engine/brick1
>> Brick3: oVirtNode03.com:/gluster/engine/brick1
>> Options Reconfigured:
>> auth.allow: 192.168.122.*
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> performance.readdir-ahead: on
>>
>> ## Session 1 (hosted-engine deploy without automatically
>> execute engine-setup)
>> [nolden@oVirtNode01 ~]$ sudo hosted-engine --deploy
>> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
>> DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6
>> xmlrpc is deprecated, please use vdsm.jsonrpcvdscli
>> import vdsm.vdscli
>> [ INFO ] Stage: Initializing
>> [ INFO ] Generating a temporary VNC password.
>> [ INFO ] Stage: Environment setup
>> During customization use CTRL-D to abort.
>> Continuing will configure this host for
>> serving as hypervisor and create a VM where you have to
>> install the engine afterwards.
>> Are you sure you want to continue? (Yes, No)[Yes]:
>> [ INFO ] Hardware supports virtualization
>> Configuration files: []
>> Log file:
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161104123834-chwikf.log
>> Version: otopi-1.5.2 (otopi-1.5.2-1.el7.centos)
>> [ INFO ] Stage: Environment packages setup
>> [ INFO ] Stage: Programs detection
>> [ INFO ] Stage: Environment setup
>> [ INFO ] Generating libvirt-spice certificates
>> [ INFO ] Stage: Environment customization
>>
>> --== STORAGE CONFIGURATION ==--
>>
>> Please specify the storage you would like to
>> use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs
>> [ INFO ] Please note that Replica 3 support is required
>> for the shared storage.
>> Please specify the full shared storage
>> connection path to use (example: host:/path):
>> oVirtNode01.com:/engine
>> [ INFO ] GlusterFS replica 3 Volume detected
>> [ INFO ] Installing on first host
>> [ INFO ] GlusterFS replica 3 Volume detected
>>
>> --== SYSTEM CONFIGURATION ==--
>>
>>
>> --== NETWORK CONFIGURATION ==--
>>
>> iptables was detected on your computer, do you
>> wish setup to configure it? (Yes, No)[Yes]: No
>> Please indicate a pingable gateway IP address
>> [192.168.122.1]:
>> Please indicate a nic to set ovirtmgmt bridge
>> on: (eth0) [eth0]:
>>
>> --== VM CONFIGURATION ==--
>>
>> Booting from cdrom on RHEL7 is ISO image based
>> only, as cdrom passthrough is disabled (BZ760885)
>> Please specify the device to boot the VM from
>> (choose disk for the oVirt engine appliance)
>> (cdrom, disk, pxe) [disk]:
>> Please specify the console type you would like
>> to use to connect to the VM (vnc, spice) [vnc]:
>> [ INFO ] Detecting available oVirt engine appliances
>> The following appliance have been found on
>> your system:
>> [1] - The oVirt Engine Appliance image
>> (OVA) - 4.0-20160928.1.el7.centos
>> [2] - Directly select an OVA file
>> Please select an appliance (1, 2) [1]:
>> [ INFO ] Verifying its sha1sum
>> [ INFO ] Checking OVF archive content (could take a few
>> minutes depending on archive size)
>> [ INFO ] Checking OVF XML content (could take a few
>> minutes depending on archive size)
>> [WARNING] OVF does not contain a valid image
>> description, using default.
>> Would you like to use cloud-init to customize
>> the appliance on the first boot (Yes, No)[Yes]?
>> Would you like to generate on-fly a cloud-init
>> ISO image (of no-cloud type)
>> or do you have an existing one (Generate,
>> Existing)[Generate]?
>> Please provide the FQDN you would like to use
>> for the engine appliance.
>> Note: This will be the FQDN of the engine VM
>> you are now going to launch,
>> it should not point to the base host or to any
>> other existing machine.
>> Engine VM FQDN: (leave it empty to skip): []: oVirtEngine.com
>> Automatically execute engine-setup on the
>> engine appliance on first boot (Yes, No)[Yes]? No
>> Please provide the domain name you would like
>> to use for the engine appliance.
>> Engine VM domain: [com]
>> Enter root password that will be used for the
>> engine appliance (leave it empty to skip):
>> Confirm appliance root password:
>> The following CPU types are supported by this
>> host:
>> - model_SandyBridge: Intel SandyBridge Family
>> - model_Westmere: Intel Westmere Family
>> - model_Nehalem: Intel Nehalem Family
>> - model_Penryn: Intel Penryn Family
>> - model_Conroe: Intel Conroe Family
>> Please specify the CPU type to be used by the
>> VM [model_SandyBridge]:
>> Please specify the number of virtual CPUs for
>> the VM (Defaults to appliance OVF value): [4]: 3
>> [WARNING] Minimum requirements for disk size not met
>> You may specify a unicast MAC address for the
>> VM or accept a randomly generated default
>> [00:16:3e:2a:82:17]:
>> Please specify the memory size of the VM in MB
>> (Defaults to maximum available): [10880]: 8144
>> How should the engine VM network be configured
>> (DHCP, Static)[DHCP]? Static
>> Please enter the IP address to be used for the
>> engine VM [192.168.122.2]: 192.168.122.201
>> [ INFO ] The engine VM will be configured to use
>> 192.168.122.201/24
>> Please provide a comma-separated list (max 3)
>> of IP addresses of domain name servers for the engine VM
>> Engine VM DNS (leave it empty to skip)
>> [192.168.122.1]:
>> Add lines for the appliance itself and for
>> this host to /etc/hosts on the engine VM?
>> Note: ensuring that this host could resolve
>> the engine VM hostname is still up to you
>> (Yes, No)[No] Yes
>>
>> --== HOSTED ENGINE CONFIGURATION ==--
>>
>> Enter engine admin password:
>> Confirm engine admin password:
>> Enter the name which will be used to identify
>> this host inside the Administrator Portal [hosted_engine_1]:
>> Please provide the name of the SMTP server
>> through which we will send notifications [localhost]:
>> Please provide the TCP port number of the SMTP
>> server [25]:
>> Please provide the email address from which
>> notifications will be sent [root@localhost]:
>> Please provide a comma-separated list of email
>> addresses which will get notifications [root@localhost]:
>> [ INFO ] Stage: Setup validation
>>
>> --== CONFIGURATION PREVIEW ==--
>>
>> Bridge interface : eth0
>> Engine FQDN : oVirtEngine.com
>> Bridge name : ovirtmgmt
>> Host address : oVirtNode01
>> SSH daemon port : 22
>> Gateway address : 192.168.122.1
>> Host name for web application :
>> hosted_engine_1
>> Storage Domain type : glusterfs
>> Host ID : 1
>> Image size GB : 10
>> Storage connection :
>> oVirtNode01.com:/engine
>> Console type : vnc
>> Memory size MB : 8144
>> MAC address : 00:16:3e:2a:82:17
>> Boot type : disk
>> Number of CPUs : 3
>> OVF archive (for disk boot) :
>> /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.0-20160928.1.el7.centos.ova
>> Appliance version :
>> 4.0-20160928.1.el7.centos
>> CPU Type : model_SandyBridge
>>
>> Please confirm installation settings (Yes,
>> No)[Yes]:
>> [ INFO ] Stage: Transaction setup
>> ...
>> [ INFO ] Stage: Closing up
>> The following network ports should be opened:
>> tcp:5900
>> tcp:5901
>> tcp:9090
>> udp:5900
>> udp:5901
>> An example of the required configuration for
>> iptables can be found at:
>> /etc/ovirt-hosted-engine/iptables.example
>> In order to configure firewalld, copy the
>> files from
>> /etc/ovirt-hosted-engine/firewalld to
>> /etc/firewalld/services
>> and execute the following commands:
>> firewall-cmd -service hosted-cockpit
>> firewall-cmd -service hosted-console
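The `-service` flags in that quoted output look mangled in transit; if I read the firewalld docs right, the intended steps would be something like this (an assumption on my side, not verified):

```shell
# Copy the service definitions shipped by hosted-engine setup,
# then enable them permanently and reload firewalld:
sudo cp /etc/ovirt-hosted-engine/firewalld/*.xml /etc/firewalld/services/
sudo firewall-cmd --permanent --add-service=hosted-cockpit
sudo firewall-cmd --permanent --add-service=hosted-console
sudo firewall-cmd --reload
```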
>> [ INFO ] Creating VM
>> You can now connect to the VM with the
>> following command:
>> hosted-engine --console
>> You can also graphically connect to the VM
>> from your system with the following command:
>> remote-viewer vnc://oVirtNode01:5900
>> Use temporary password "2263CpET" to connect
>> to vnc console.
>> Please ensure that your Guest OS is properly
>> configured to support serial console according to your
>> distro documentation.
>> Follow
>> http://www.ovirt.org/Serial_Console_Setup#I_need_to_access_the_console_th...
>> for more info.
>> If you need to reboot the VM you will need to
>> start it manually using the command:
>> hosted-engine --vm-start
>> You can then set a temporary password using
>> the command:
>> hosted-engine --add-console-password
>> Please install and setup the engine in the VM.
>> You may also be interested in installing
>> ovirt-guest-agent-common package in the VM.
>>
>>
>> The VM has been rebooted.
>> To continue please install oVirt-Engine in the VM
>> (Follow http://www.ovirt.org/Quick_Start_Guide for more info).
>>
>> Make a selection from the options below:
>> (1) Continue setup - oVirt-Engine installation
>> is ready and ovirt-engine service is up
>> (2) Abort setup
>> (3) Power off and restart the VM
>> (4) Destroy VM and abort setup
>>
>> (1, 2, 3, 4)[1]:
>> ## Session 2
>> [nolden@oVirtNode01 ~]$ sudo hosted-engine --console
>> /usr/share/vdsm/vdsClient.py:33: DeprecationWarning:
>> vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
>> deprecated, please use vdsm.jsonrpcvdscli
>> from vdsm import utils, vdscli, constants
>> The engine VM is running on this host
>> Connected to domain: HostedEngine
>> Escape character is ^]
>> error: internal error: character device console0
>> is not using a PTY
>> [nolden@oVirtNode01 ~]$ ssh root@oVirtEngine.com
>> ssh: connect to host ovirtengine.com port 22: No route to host
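"No route to host" usually means the ARP request for 192.168.122.201 went unanswered. A few checks one could run on the node to narrow that down (a sketch, not from the original session; arping may need to be installed):

```shell
ip route get 192.168.122.201              # which interface/route would be used?
brctl show ovirtmgmt                      # is eth0 enslaved to the bridge?
arping -I ovirtmgmt -c 3 192.168.122.201  # does the engine VM answer ARP at all?
```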
>>
>> ## Back to session 1
>> ...
>> (1, 2, 3, 4)[1]: 1
>>
>> Checking for oVirt-Engine status at
>> oVirtEngine.com...
>> [ INFO ] Engine is still unreachable
>> oVirt-Engine health status page is not yet
>> reachable.
>>
>>
>> The VM has been rebooted.
>> To continue please install oVirt-Engine in the VM
>> (Follow http://www.ovirt.org/Quick_Start_Guide for more info).
>>
>> Make a selection from the options below:
>> (1) Continue setup - oVirt-Engine installation
>> is ready and ovirt-engine service is up
>> (2) Abort setup
>> (3) Power off and restart the VM
>> (4) Destroy VM and abort setup
>>
>> (1, 2, 3, 4)[1]: 2
>> [ ERROR ] Failed to execute stage 'Closing up': Setup
>> aborted by user
>> [ INFO ] Stage: Clean up
>> [ INFO ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20161104130223.conf'
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: this system
>> is not reliable, please check the issue,fix and redeploy
>> Log file is located at
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161104123834-chwikf.log
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org <mailto:Users@ovirt.org>
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
--------------3025B96D7224F98FC0855C71
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: 8bit
<html>
<head>
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p><br>
</p>
<br>
<div class="moz-cite-prefix">Am 08.11.2016 um 11:51 schrieb Simone
Tiraboschi:<br>
</div>
<blockquote
cite="mid:CAN8-ONre=w9mFhLXnz15RHs21cH7xCM7rsJG+_Tk0B+G8C_N2w@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=utf-8">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Nov 8, 2016 at 11:30 AM,
Steffen Nolden <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:steffen.nolden@alumni.fh-aachen.de"
target="_blank">steffen.nolden(a)alumni.fh-aachen.de</a>&gt;</span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p>Attached the VDSM logs from 'hosted-engine --deploy'
to 'hosted-engine --console' <br>
</p>
<div>
<div class="h5"> <br>
</div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>Everything seams fine here.</div>
<div>Now I read that you are using a nested env: do you
removed mac spoof filters on the physical host?</div>
</div>
</div>
</div>
</blockquote>
<br>
What is it and why is it necessary? <br>
I want to connect from the nested VM to the local engine appliance.
And that didnt work.<br>
Physical host --> VM --> NestedVM -x-> engine
appliance<br>
<blockquote
cite="mid:CAN8-ONre=w9mFhLXnz15RHs21cH7xCM7rsJG+_Tk0B+G8C_N2w@mail.gmail.com"
type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div>
<div class="h5">
<div class="m_-1477765077132206067moz-cite-prefix">Am
08.11.2016 um 10:43 schrieb Simone Tiraboschi:<br>
</div>
<blockquote type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Nov 8, 2016
at 9:35 AM, Steffen Nolden <span
dir="ltr"><<a
moz-do-not-send="true"
href="mailto:steffen.nolden@alumni.fh-aachen.de"
target="_blank">steffen.nolden(a)alumni.fh-<wbr>aachen.de</a>&gt;</span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF"
text="#000000">
<p>Hello Florian,</p>
<p>there is no problem to reslove the
FQDN</p>
</div>
</blockquote>
<div><br>
</div>
<div>Can you please attach also VDSM logs
fro the relevant time frame?</div>
<div> </div>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF"
text="#000000">
<p>[nolden@oVirtNode01 ~]$ nslookup
oVirtEngine.com<br>
Server: 192.168.122.1<br>
Address: 192.168.122.1#53<br>
<br>
Name: oVirtEngine.com<br>
Address: 192.168.122.201<br>
<br>
The '/etc/hosts' file of each host is
configured to resolve the name.</p>
<p>I tried it with stopped firewall,<br>
</p>
<p>[nolden@oVirtNode01 ~]$ sudo
systemctl stop firewalld<br>
[nolden@oVirtNode01 ~]$ sudo systemctl
status firewalld<br>
● firewalld.service - firewalld -
dynamic firewall daemon<br>
Loaded: loaded
(/usr/lib/systemd/system/firew<wbr>alld.service;
enabled; vendor preset: enabled)<br>
Active: inactive (dead) since Di
2016-11-08 09:10:46 CET; 3s ago<br>
Process: 724
ExecStart=/usr/sbin/firewalld --nofork
--nopid $FIREWALLD_ARGS (code=exited,
status=0/SUCCESS)<br>
Main PID: 724 (code=exited,
status=0/SUCCESS)<br>
</p>
<p>but i got the same error!</p>
<div>
<div class="m_-1477765077132206067h5">
<p><br>
</p>
<div
class="m_-1477765077132206067m_-5978212310940519061moz-cite-prefix">Am
04.11.2016 um 15:50 schrieb
Florian Nolden:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hello Steffen,
<div><br>
</div>
<div>can your nodes resolve the
FQDN "<a
moz-do-not-send="true"
href="http://ovirtengine.com"
target="_blank">ovirtengine.com</a>"
to the hosted engine ip (
nslookup <a
moz-do-not-send="true"
href="http://ovirtengine.com"
target="_blank">ovirtengine.com</a>)
? </div>
<div>If that works have you
tried to disable the firewall
temporarily? </div>
</div>
<div class="gmail_extra"><br>
<div
class="gmail_quote">2016-11-04
14:11 GMT+01:00 Steffen Nolden
<span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:steffen.nolden@alumni.fh-aachen.de"
target="_blank">steffen.nolden@alumni.fh-aach<wbr>en.de</a>></span>:<br>
<blockquote
class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">Hello,<br>
<br>
i tried to deploy
hosted-engine on a testing
environment. First of all i
tried to deploy with the
option<br>
<br>
"Automatically execute
engine-setup on the engine
appliance on first boot
(Yes, No)[Yes]? Yes"<br>
<br>
but it got stuck.<br>
<br>
[ INFO ] Running
engine-setup on the
appliance<br>
[ ERROR ] Engine setup got
stuck on the appliance<br>
[ ERROR ] Failed to execute
stage 'Closing up': Engine
setup is stalled on the
appliance since 600 seconds
ago. Please check its log on
the appliance.<br>
[ INFO ] Stage: Clean up<br>
[ INFO ] Generating answer
file
'/var/lib/ovirt-hosted-engine-<wbr>setup/answers/answers-20161104<wbr>112913.conf'<br>
[ INFO ] Stage:
Pre-termination<br>
[ INFO ] Stage: Termination<br>
[ ERROR ] Hosted Engine
deployment failed: this
system is not reliable,
please check the issue,fix
and redeploy<br>
Log file is
located at
/var/log/ovirt-hosted-engine-s<wbr>etup/ovirt-hosted-engine-setup<wbr>-20161104110104-kyhq1e.log<br>
<br>
Next try i said 'No' and i
tried to execute it myself.
But i can't access the
engine appliance.<br>
<br>
[nolden@oVirtNode01 ~]$
sudo hosted-engine --console<br>
/usr/share/vdsm/vdsClient.py:3<wbr>3:
DeprecationWarning: vdscli
uses xmlrpc. since ovirt 3.6
xmlrpc is deprecated, please
use vdsm.jsonrpcvdscli<br>
from vdsm import
utils, vdscli, constants<br>
The engine VM is running
on this host<br>
Verbunden mit der
Domain: HostedEngine<br>
Escape-Zeichen ist ^]<br>
Fehler: Interner Fehler:
Charakter-Einheit console0
verwendet nicht ein PTY<br>
<br>
Additionally i can't ping or
access the engine appliance
via ssh.<br>
<br>
Did i forget to install a
packege or configure
something?<br>
<br>
Below my infrastructure and
configs. Attached the log
file.<br>
<br>
<br>
Thanks for help.<br>
<br>
<br>
<br>
My infrastructure is a
nestedVM system:<br>
All systems have nested
visualization activated<br>
<br>
HW with LinuxMint x86_64;<br>
- VM with CentOS-7-x86_64;
24GB Ram; 7 cores;<br>
- Nested VM with
CentOS-7-x86_64; 12288GB
Ram; 4 cores; Hostname:
oVirtNode01.com
(192.168.122.101): Here i
deploy hosted-engine,
Hostname:
oVirtEngine.com
(192.168.122.201)<br>
- Nested VM with
CentOS-7-x86_64; 4096GB
Ram; 1 cores; Hostname:
oVirtNode02.com
(192.168.122.102):<br>
- Nested VM with
CentOS-7-x86_64; 4096GB
Ram; 1 cores; Hostname:
oVirtNode03.com
(192.168.122.103):<br>
<br>
All three NestedVMs are
updated, have installed the
ovirt-release40.rpm.
Additional installed:<br>
screen (4.1.0),
ovirt-hosted-engine-setup
(2.0.2.2), vdsm-gluster
(4.18.13), bridge-utils
(1.5),<br>
vdsm (4.18.13), vdsm-cli
(4.18.13), glusterfs-server
(3.7.16), samba.<br>
<br>
First NestedVM additional
installed
ovirt-engine-appliance
4.0-20160928.1.el7.centos<br>
<br>
The three VMs building a
glusterfs volume "engine" 3
replica, with the options...<br>
<br>
Volume Name: engine<br>
Type: Replicate<br>
Volume ID:
e92849b7-af3b-4ccd-bd0d-69a5ab<wbr>3b6214<br>
Status: Started<br>
Number of Bricks: 1 x 3 = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1:
oVirtNode01.com:/gluster/engin<wbr>e/brick1<br>
Brick2:
oVirtNode02.com:/gluster/engin<wbr>e/brick1<br>
Brick3:
oVirtNode03.com:/gluster/engin<wbr>e/brick1<br>
Options Reconfigured:<br>
auth.allow: 192.168.122.*<br>
storage.owner-gid: 36<br>
storage.owner-uid: 36<br>
performance.readdir-ahead:
on<br>
<br>
## Session 1 (hosted-engine
deploy without automatically
execute engine-setup)<br>
[nolden@oVirtNode01 ~]$ sudo
hosted-engine --deploy<br>
/usr/lib/python2.7/site-packag<wbr>es/ovirt_hosted_engine_ha/lib/<wbr>storage_backends.py:15:
DeprecationWarning: vdscli
uses xmlrpc. since ovirt 3.6
xmlrpc is deprecated, please
use vdsm.jsonrpcvdscli<br>
import vdsm.vdscli<br>
[ INFO ] Stage:
Initializing<br>
[ INFO ] Generating a
temporary VNC password.<br>
[ INFO ] Stage: Environment
setup<br>
During
customization use CTRL-D to
abort.<br>
Continuing will
configure this host for
serving as hypervisor and
create a VM where you have
to install the engine
afterwards.<br>
Are you sure you
want to continue? (Yes,
No)[Yes]:<br>
[ INFO ] Hardware supports
virtualization<br>
Configuration
files: []<br>
Log file:
/var/log/ovirt-hosted-engine-s<wbr>etup/ovirt-hosted-engine-setup<wbr>-20161104123834-chwikf.log<br>
Version:
otopi-1.5.2
(otopi-1.5.2-1.el7.centos)<br>
[ INFO ] Stage: Environment
packages setup<br>
[ INFO ] Stage: Programs
detection<br>
[ INFO ] Stage: Environment
setup<br>
[ INFO ] Generating
libvirt-spice certificates<br>
[ INFO ] Stage: Environment
customization<br>
<br>
--== STORAGE
CONFIGURATION ==--<br>
<br>
Please specify the
storage you would like to
use (glusterfs, iscsi, fc,
nfs3, nfs4)[nfs3]: glusterfs<br>
[ INFO ] Please note that
Replica 3 support is
required for the shared
storage.<br>
Please specify the
full shared storage
connection path to use
(example: host:/path):
oVirtNode01.com:/engine<br>
[ INFO ] GlusterFS replica
3 Volume detected<br>
[ INFO ] Installing on
first host<br>
[ INFO ] GlusterFS replica
3 Volume detected<br>
<br>
--== SYSTEM
CONFIGURATION ==--<br>
<br>
<br>
--== NETWORK
CONFIGURATION ==--<br>
<br>
iptables was
detected on your computer,
do you wish setup to
configure it? (Yes,
No)[Yes]: No<br>
Please indicate a
pingable gateway IP address
[192.168.122.1]:<br>
Please indicate a
nic to set ovirtmgmt bridge
on: (eth0) [eth0]:<br>
<br>
--== VM
CONFIGURATION ==--<br>
<br>
Booting from cdrom
on RHEL7 is ISO image based
only, as cdrom passthrough
is disabled (BZ760885)<br>
Please specify the
device to boot the VM from
(choose disk for the oVirt
engine appliance)<br>
(cdrom, disk, pxe)
[disk]:<br>
Please specify the
console type you would like
to use to connect to the VM
(vnc, spice) [vnc]:<br>
[ INFO ] Detecting
available oVirt engine
appliances<br>
The following
appliance have been found on
your system:<br>
[1] - The
oVirt Engine Appliance image
(OVA) -
4.0-20160928.1.el7.centos<br>
[2] - Directly
select an OVA file<br>
Please select an
appliance (1, 2) [1]:<br>
[ INFO ] Verifying its
sha1sum<br>
[ INFO ] Checking OVF
archive content (could take
a few minutes depending on
archive size)<br>
[ INFO ] Checking OVF XML
content (could take a few
minutes depending on archive
size)<br>
[WARNING] OVF does not
contain a valid image
description, using default.<br>
Would you like to
use cloud-init to customize
the appliance on the first
boot (Yes, No)[Yes]?<br>
Would you like to
generate on-fly a cloud-init
ISO image (of no-cloud type)<br>
or do you have an
existing one (Generate,
Existing)[Generate]?<br>
Please provide the
FQDN you would like to use
for the engine appliance.<br>
Note: This will be
the FQDN of the engine VM
you are now going to launch,<br>
it should not
point to the base host or to
any other existing machine.<br>
Engine VM FQDN:
(leave it empty to skip):
[]: oVirtEngine.com<br>
Automatically
execute engine-setup on the
engine appliance on first
boot (Yes, No)[Yes]? No<br>
Please provide the
domain name you would like
to use for the engine
appliance.<br>
Engine VM domain:
[com]<br>
Enter root
password that will be used
for the engine appliance
(leave it empty to skip):<br>
Confirm appliance
root password:<br>
The following CPU
types are supported by this
host:<br>
-
model_SandyBridge: Intel
SandyBridge Family<br>
-
model_Westmere: Intel
Westmere Family<br>
-
model_Nehalem: Intel Nehalem
Family<br>
-
model_Penryn: Intel Penryn
Family<br>
-
model_Conroe: Intel Conroe
Family<br>
Please specify the
CPU type to be used by the
VM [model_SandyBridge]:<br>
Please specify the
number of virtual CPUs for
the VM (Defaults to
appliance OVF value): [4]: 3<br>
[WARNING] Minimum
requirements for disk size
not met<br>
You may specify a
unicast MAC address for the
VM or accept a randomly
generated default
[00:16:3e:2a:82:17]:<br>
Please specify the
memory size of the VM in MB
(Defaults to maximum
available): [10880]: 8144<br>
How should the
engine VM network be
configured (DHCP,
Static)[DHCP]? Static<br>
Please enter the
IP address to be used for
the engine VM
[192.168.122.2]:
192.168.122.201<br>
[ INFO ] The engine VM will
be configured to use <a
moz-do-not-send="true"
href="http://192.168.122.201/24"
rel="noreferrer"
target="_blank">192.168.122.201/24</a><br>
Please provide a
comma-separated list (max 3)
of IP addresses of domain
name servers for the engine
VM<br>
Engine VM DNS
(leave it empty to skip)
[192.168.122.1]:<br>
Add lines for the
appliance itself and for
this host to /etc/hosts on
the engine VM?<br>
Note: ensuring
that this host could resolve
the engine VM hostname is
still up to you<br>
(Yes, No)[No] Yes<br>
<br>
--== HOSTED ENGINE
CONFIGURATION ==--<br>
<br>
Enter engine admin
password:<br>
Confirm engine
admin password:<br>
Enter the name
which will be used to
identify this host inside
the Administrator Portal
[hosted_engine_1]:<br>
Please provide the
name of the SMTP server
through which we will send
notifications [localhost]:<br>
Please provide the
TCP port number of the SMTP
server [25]:<br>
Please provide the
email address from which
notifications will be sent
[root@localhost]:<br>
Please provide a
comma-separated list of
email addresses which will
get notifications
[root@localhost]:<br>
[ INFO ] Stage: Setup
validation<br>
<br>
--== CONFIGURATION
PREVIEW ==--<br>
<br>
Bridge interface
: eth0<br>
Engine FQDN
:
oVirtEngine.com<br>
Bridge name
: ovirtmgmt<br>
Host address
:
oVirtNode01<br>
SSH daemon port
: 22<br>
Gateway address
:
192.168.122.1<br>
Host name for web
application :
hosted_engine_1<br>
Storage Domain
type :
glusterfs<br>
Host ID
: 1<br>
Image size GB
: 10<br>
Storage
connection :
oVirtNode01.com:/engine<br>
Console type
: vnc<br>
Memory size MB
: 8144<br>
MAC address
:
00:16:3e:2a:82:17<br>
Boot type
: disk<br>
Number of CPUs
: 3<br>
OVF archive (for
disk boot) :
/usr/share/ovirt-engine-applia<wbr>nce/ovirt-engine-appliance-4.0<wbr>-20160928.1.el7.centos.ova<br>
Appliance version
:
4.0-20160928.1.el7.centos<br>
CPU Type
:
model_SandyBridge<br>
<br>
Please confirm
installation settings (Yes,
No)[Yes]:<br>
[ INFO ] Stage: Transaction
setup<br>
...<br>
[ INFO ] Stage: Closing up<br>
The following
network ports should be
opened:<br>
tcp:5900<br>
tcp:5901<br>
tcp:9090<br>
udp:5900<br>
udp:5901<br>
An example of the
required configuration for
iptables can be found at:<br>
/etc/ovirt-hosted-engine/iptab<wbr>les.example<br>
In order to
configure firewalld, copy
the files from<br>
/etc/ovirt-hosted-engine/firew<wbr>alld
to /etc/firewalld/services<br>
and execute the
following commands:<br>
firewall-cmd
-service hosted-cockpit<br>
firewall-cmd
-service hosted-console<br>
[ INFO  ] Creating VM<br>
          You can now connect to the VM with the following command:<br>
          hosted-engine --console<br>
          You can also graphically connect to the VM from your system with the following command:<br>
          remote-viewer vnc://oVirtNode01:5900<br>
          Use temporary password "2263CpET" to connect to vnc console.<br>
          Please ensure that your Guest OS is properly configured to support serial console according to your distro documentation.<br>
          Follow <a href="http://www.ovirt.org/Serial_Console_Setup#I_need_to_access_the..." rel="noreferrer" target="_blank">http://www.ovirt.org/Serial_Conso...</a> for more info.<br>
          If you need to reboot the VM you will need to start it manually using the command:<br>
          hosted-engine --vm-start<br>
          You can then set a temporary password using the command:<br>
          hosted-engine --add-console-password<br>
          Please install and setup the engine in the VM.<br>
          You may also be interested in installing ovirt-guest-agent-common package in the VM.<br>
<br>
<br>
          The VM has been rebooted.<br>
          To continue please install oVirt-Engine in the VM<br>
          (Follow <a href="http://www.ovirt.org/Quick_Start_Guide" rel="noreferrer" target="_blank">http://www.ovirt.org/Quick_Start_Guide</a> for more info).<br>
<br>
          Make a selection from the options below:<br>
          (1) Continue setup - oVirt-Engine installation is ready and ovirt-engine service is up<br>
          (2) Abort setup<br>
          (3) Power off and restart the VM<br>
          (4) Destroy VM and abort setup<br>
<br>
          (1, 2, 3, 4)[1]:<br>
## Session 2<br>
[nolden@oVirtNode01 ~]$ sudo hosted-engine --console<br>
/usr/share/vdsm/vdsClient.py:33: DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli<br>
  from vdsm import utils, vdscli, constants<br>
The engine VM is running on this host<br>
Connected to domain: HostedEngine<br>
Escape character is ^]<br>
error: internal error: character device console0 is not using a PTY<br>
[nolden@oVirtNode01 ~]$ ssh <a href="mailto:root@oVirtEngine.com" target="_blank">root@oVirtEngine.com</a><br>
ssh: connect to host <a href="http://ovirtengine.com" rel="noreferrer" target="_blank">ovirtengine.com</a> port 22: No route to host<br>
<br>
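The "No route to host" above usually points at layer-2/layer-3 filtering between the node and the appliance (such as the MAC-spoofing filter discussed elsewhere in this thread) rather than at DNS. A few diagnostics that can narrow it down, run from the node; the hostname and IP are taken from earlier in this thread, so adjust them for your setup:

```shell
# Diagnostics for "No route to host" toward the engine appliance.
# Hostname/IP below come from this thread's configuration; substitute your own.
getent hosts oVirtEngine.com      # does name resolution still return 192.168.122.201?
ip route get 192.168.122.201      # which interface and gateway would be used?
ping -c 3 -W 2 192.168.122.201    # is the appliance reachable at layer 3?
ip neigh show 192.168.122.201     # did the node ever learn the appliance's MAC (layer 2)?
```

If the neighbor entry stays FAILED/incomplete while routing looks correct, traffic is being dropped below IP, which is consistent with a spoof filter in a nested environment.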
## Back to session 1<br>
...<br>
          (1, 2, 3, 4)[1]: 1<br>
<br>
          Checking for oVirt-Engine status at oVirtEngine.com...<br>
[ INFO  ] Engine is still unreachable<br>
          oVirt-Engine health status page is not yet reachable.<br>
<br>
<br>
          The VM has been rebooted.<br>
          To continue please install oVirt-Engine in the VM<br>
          (Follow <a href="http://www.ovirt.org/Quick_Start_Guide" rel="noreferrer" target="_blank">http://www.ovirt.org/Quick_Start_Guide</a> for more info).<br>
<br>
          Make a selection from the options below:<br>
          (1) Continue setup - oVirt-Engine installation is ready and ovirt-engine service is up<br>
          (2) Abort setup<br>
          (3) Power off and restart the VM<br>
          (4) Destroy VM and abort setup<br>
<br>
          (1, 2, 3, 4)[1]: 2<br>
[ ERROR ] Failed to execute stage 'Closing up': Setup aborted by user<br>
[ INFO  ] Stage: Clean up<br>
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20161104130223.conf'<br>
[ INFO  ] Stage: Pre-termination<br>
[ INFO  ] Stage: Termination<br>
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy<br>
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161104123834-chwikf.log<br>
<br>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
<br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</div>
</div>
</div>
<br>
<br>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</div>
</div>
</div>
<br>
<br>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</body>
</html>
--------------3025B96D7224F98FC0855C71--