For the record, once I added a new storage domain, the Data Center came up.
So in the end, this seems to have been due to known bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1160667
https://bugzilla.redhat.com/show_bug.cgi?id=1160423
Effectively, for hosts with static/manual IP addressing (i.e. not DHCP), the DNS and
default route information are not set up correctly by hosted-engine-setup. I'm not
sure why that's not considered a higher-priority bug (e.g. a blocker for 3.5.2?), since
I believe the most typical configuration for servers is static IP addressing.
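For anyone who hits this before those bugs are fixed: the quick check is "ip route" on
the host after deploy. As an interim repair, something like the following restores
connectivity (a sketch only - 172.16.0.1 is the gateway on *my* network, and the
nameserver value is a placeholder, so substitute your own):

  ip route add default via 172.16.0.1 dev ovirtmgmt   # restore the dropped default route
  echo "nameserver 172.16.0.1" >> /etc/resolv.conf    # restore a resolver (placeholder value)

plus making the same settings permanent in ifcfg-ovirtmgmt, as described further down
in the quoted thread.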
All seems to be working now. Many thanks to Simone for the invaluable assistance.
-Bob
On Mar 10, 2015 2:29 PM, "Bob Doolittle" <bob@doolittle.us.com> wrote:
On 03/10/2015 10:20 AM, Simone Tiraboschi wrote:
> ----- Original Message -----
>
>> From: "Bob Doolittle" <bob@doolittle.us.com>
>> To: "Simone Tiraboschi" <stirabos@redhat.com>
>> Cc: "users-ovirt" <users@ovirt.org>
>> Sent: Tuesday, March 10, 2015 2:40:13 PM
>> Subject: Re: [ovirt-users] Error during hosted-engine-setup for 3.5.1 on F20 (The
>> VDSM host was found in a failed state)
>
>
>> On 03/10/2015 04:58 AM, Simone Tiraboschi wrote:
>>
>>> ----- Original Message -----
>>>
>>>> From: "Bob Doolittle" <bob@doolittle.us.com>
>>>> To: "Simone Tiraboschi" <stirabos@redhat.com>
>>>> Cc: "users-ovirt" <users@ovirt.org>
>>>> Sent: Monday, March 9, 2015 11:48:03 PM
>>>> Subject: Re: [ovirt-users] Error during hosted-engine-setup for 3.5.1 on
>>>> F20 (The VDSM host was found in a failed state)
>>>
>>>
>>>> On 03/09/2015 02:47 PM, Bob Doolittle wrote:
>>>>
>>>>> Resending with CC to list (and an update).
>>>>
>>>>> On 03/09/2015 01:40 PM, Simone Tiraboschi wrote:
>>>>>
>>>>>> ----- Original Message -----
>>>>>>
>>>>>>> From: "Bob Doolittle" <bob@doolittle.us.com>
>>>>>>> To: "Simone Tiraboschi" <stirabos@redhat.com>
>>>>>>> Cc: "users-ovirt" <users@ovirt.org>
>>>>>>> Sent: Monday, March 9, 2015 6:26:30 PM
>>>>>>> Subject: Re: [ovirt-users] Error during hosted-engine-setup for 3.5.1 on
>>>>>>> F20 (Cannot add the host to cluster ... SSH has failed)
>>>>>>
>> ...
>>>>>>
>>>>>>> OK, I've started over. Simply removing the storage domain was
>>>>>>> insufficient; the hosted-engine deploy failed when it found the HA and
>>>>>>> Broker services already configured. I decided to start over fresh,
>>>>>>> beginning with re-installing the OS on my host.
>>>>>>
>>>>>>> I can't deploy DNS at the moment, so I have to simply replicate
>>>>>>> /etc/hosts files on my host/engine. I did that this time, but have run
>>>>>>> into a new problem:
>>>>>>
>>>>>>> [ INFO  ] Engine replied: DB Up!Welcome to Health Status!
>>>>>>>           Enter the name of the cluster to which you want to add the
>>>>>>>           host (Default) [Default]:
>>>>>>> [ INFO  ] Waiting for the host to become operational in the engine.
>>>>>>>           This may take several minutes...
>>>>>>> [ ERROR ] The VDSM host was found in a failed state. Please check
>>>>>>>           engine and bootstrap installation logs.
>>>>>>> [ ERROR ] Unable to add ovirt-vm to the manager
>>>>>>>           Please shutdown the VM allowing the system to launch it as a
>>>>>>>           monitored service.
>>>>>>>           The system will wait until the VM is down.
>>>>>>> [ ERROR ] Failed to execute stage 'Closing up': [Errno 111] Connection
>>>>>>>           refused
>>>>>>> [ INFO  ] Stage: Clean up
>>>>>>> [ ERROR ] Failed to execute stage 'Clean up': [Errno 111] Connection
>>>>>>>           refused
>>>>>>
>>>>>>
>>>>>>> I've attached my engine log and the ovirt-hosted-engine-setup log. I
>>>>>>> think I had an issue with resolving external hostnames, or else a
>>>>>>> connectivity issue during the install.
>>>>>
>>>>>> For some reason your engine wasn't able to deploy your hosts, but the
>>>>>> SSH session this time was established.
>>>>>> 2015-03-09 13:05:58,514 ERROR
>>>>>> [org.ovirt.engine.core.bll.InstallVdsInternalCommand]
>>>>>> (org.ovirt.thread.pool-8-thread-3) [3cf91626] Host installation failed
>>>>>> for host 217016bb-fdcd-4344-a0ca-4548262d10a8, ovirt-vm.:
>>>>>> java.io.IOException: Command returned failure code 1 during SSH session
>>>>>> 'root@xion2.smartcity.net'
>>>>>
>>>>>> Can you please attach host-deploy logs from the engine VM?
>>>>
>>>>> OK, attached.
>>>>
>>>>> Like I said, it looks to me like a name-resolution issue during the yum
>>>>> update on the engine. I think I've fixed that, but do you have a better
>>>>> suggestion for cleaning up and re-deploying other than installing the OS
>>>>> on my host and starting all over again?
>>>
>>>> I just finished starting over from scratch, beginning with OS installation
>>>> on my host/node, and wound up with a very similar problem - the engine
>>>> couldn't reach the hosts during the yum operation. But this time the error
>>>> was "Network is unreachable", which is weird, because I can ssh into the
>>>> engine and ping many of those hosts after the operation has failed.
>>>
>>>> Here's my latest host-deploy log from the engine. I'd appreciate any
>>>> clues.
>>
>>> It seems that now your host is able to resolve those addresses but it's not
>>> able to connect over http.
>>> On your host some of them resolve as IPv6 addresses; can you please try
>>> to use curl to get one of the files that it wasn't able to fetch?
>>> Can you please check your network configuration before and after
>>> host-deploy?
>
>> I can give you the network configuration after host-deploy, at least for the
>> host/Node. The engine won't start for me this morning, after I shut down the
>> host for the night.
>
>> In order to give you the config before host-deploy (or, apparently, for the
>> engine), I'll have to re-install the OS on the host and start again from
>> scratch. Obviously I'd rather not do that unless absolutely necessary.
>
>> Here's the host config after the failed host-deploy:
>
>> Host/Node:
>
>> # ip route
>> 169.254.0.0/16 dev ovirtmgmt  scope link  metric 1007
>> 172.16.0.0/16 dev ovirtmgmt  proto kernel  scope link  src 172.16.0.58
> You are missing a default gateway, hence the issue.
> Are you sure that it was properly configured before trying to deploy that host?
It should have been; it was a fresh OS install. So I'm starting again, and keeping
careful records of my network config.
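(For the record-keeping I'm just snapshotting the network state at each step with a
quick shell one-liner - the file naming is my own habit, nothing oVirt-specific:

  f=net-$(date +%s).txt; ip route > "$f"; ip addr >> "$f"; cat /etc/resolv.conf >> "$f"

nothing fancy, but it makes the before/after comparison exact.)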
Here is my initial network config of my host/node, immediately following a new OS
install:
% ip route
default via 172.16.0.1 dev p3p1 proto static metric 1024
172.16.0.0/16 dev p3p1  proto kernel  scope link  src 172.16.0.58
% ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: p3p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
default qlen 1000
    link/ether b8:ca:3a:79:22:12 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.58/16 brd 172.16.255.255 scope global p3p1
       valid_lft forever preferred_lft forever
    inet6 fe80::baca:3aff:fe79:2212/64 scope link
       valid_lft forever preferred_lft forever
3: wlp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group
default qlen 1000
    link/ether 1c:3e:84:50:8d:c3 brd ff:ff:ff:ff:ff:ff
After the VM is first created, the host/node config is:
# ip route
default via 172.16.0.1 dev ovirtmgmt
169.254.0.0/16 dev ovirtmgmt  scope link  metric 1006
172.16.0.0/16 dev ovirtmgmt  proto kernel  scope link  src 172.16.0.58

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: p3p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master
ovirtmgmt state UP group default qlen 1000
    link/ether b8:ca:3a:79:22:12 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::baca:3aff:fe79:2212/64 scope link
       valid_lft forever preferred_lft forever
3: wlp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group
default qlen 1000
    link/ether 1c:3e:84:50:8d:c3 brd ff:ff:ff:ff:ff:ff
4: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state
DOWN group default
    link/ether 92:cb:9d:97:18:36 brd ff:ff:ff:ff:ff:ff
5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
    link/ether 9a:bc:29:52:82:38 brd ff:ff:ff:ff:ff:ff
6: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default
    link/ether b8:ca:3a:79:22:12 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.58/16 brd 172.16.255.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::baca:3aff:fe79:2212/64 scope link
       valid_lft forever preferred_lft forever
7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master
ovirtmgmt state UNKNOWN group default qlen 500
    link/ether fe:16:3e:16:a4:37 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe16:a437/64 scope link
       valid_lft forever preferred_lft forever
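(Note that the default route *did* survive the bridge creation this time. What I didn't
think to check right then, and should have, was the resolver side:

  cat /etc/resolv.conf    # as it turned out, this had no nameserver entries

which is exactly where the yum failure below comes from.)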
At this point, I was already seeing a problem on the host/node. I remembered that a
newer version of the sos package is delivered from the oVirt repositories, so I tried a
"yum update" on my host, and got a similar problem:
% sudo yum update
[sudo] password for rad:
Loaded plugins: langpacks, refresh-packagekit
Resolving Dependencies
--> Running transaction check
---> Package sos.noarch 0:3.1-1.fc20 will be updated
---> Package sos.noarch 0:3.2-0.2.fc20.ovirt will be an update
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
 Package        Arch         Version                  Repository         Size
================================================================================
Updating:
 sos            noarch       3.2-0.2.fc20.ovirt       ovirt-3.5          292 k

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 292 k
Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for ovirt-3.5
sos-3.2-0.2.fc20.ovirt.noarch. FAILED
http://www.gtlib.gatech.edu/pub/oVirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3...:
[Errno 14] curl#6 - "Could not resolve host: www.gtlib.gatech.edu"
Trying other mirror.
sos-3.2-0.2.fc20.ovirt.noarch. FAILED
ftp://ftp.gtlib.gatech.edu/pub/oVirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3....:
[Errno 14] curl#6 - "Could not resolve host: ftp.gtlib.gatech.edu"
Trying other mirror.
sos-3.2-0.2.fc20.ovirt.noarch. FAILED
http://resources.ovirt.org/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20...:
[Errno 14] curl#6 - "Could not resolve host: resources.ovirt.org"
Trying other mirror.
sos-3.2-0.2.fc20.ovirt.noarch. FAILED
http://ftp.snt.utwente.nl/pub/software/ovirt/ovirt-3.5/rpm/fc20/noarch/so...:
[Errno 14] curl#6 - "Could not resolve host: ftp.snt.utwente.nl"
Trying other mirror.
sos-3.2-0.2.fc20.ovirt.noarch. FAILED
http://ftp.nluug.nl/os/Linux/virtual/ovirt/ovirt-3.5/rpm/fc20/noarch/sos-...:
[Errno 14] curl#6 - "Could not resolve host: ftp.nluug.nl"
Trying other mirror.
sos-3.2-0.2.fc20.ovirt.noarch. FAILED
http://mirror.linux.duke.edu/ovirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-...:
[Errno 14] curl#6 - "Could not resolve host: mirror.linux.duke.edu"
Trying other mirror.
Error downloading packages:
  sos-3.2-0.2.fc20.ovirt.noarch: [Errno 256] No more mirrors to try.
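In hindsight, Simone's suggestion to test with curl would have isolated this quickly.
Something along these lines separates name resolution from HTTP connectivity
(resources.ovirt.org is just one of the mirror hosts from the failures above):

  getent hosts resources.ovirt.org                  # pure resolver lookup, no HTTP
  curl -v -o /dev/null http://resources.ovirt.org/  # resolve + HTTP fetch

If the getent lookup fails while a connection to a literal IP address still works, the
problem is resolver configuration rather than routing.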
This was similar to my previous failures. I took a look, and the problem was that
/etc/resolv.conf had no nameservers, and the
/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt file contained no entries for DNS1 or
DOMAIN.

So it appears that when hosted-engine set up my bridged network, it neglected to carry
over the DNS configuration necessary for the bridge.
Note that I am using *static* network configuration, rather than DHCP. During
installation of the OS I am setting up the network configuration as Manual. Perhaps the
hosted-engine script is not properly prepared to deal with that?
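For reference, here is a minimal sketch of what ended up missing from the bridge config
(DNS1 and DOMAIN are the standard initscripts keys named above; the values are examples,
not copied from my actual file):

  # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt - entries the setup dropped
  DNS1=172.16.0.1        # nameserver (example value)
  DOMAIN=example.com     # search domain (example value)

In practice these can presumably be copied from whatever ifcfg file held the static
settings before the bridge was created (ifcfg-p3p1, in my case).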
I went ahead and modified the ifcfg-ovirtmgmt network script (for the next service
restart/boot) and resolv.conf (I was afraid to restart the network in the middle of
hosted-engine execution since I don't know what might already be connected to the
engine). This time it got further, but ultimately it still failed at the very end:
[ INFO  ] Waiting for the host to become operational in the engine. This may take
          several minutes...
[ INFO  ] Still waiting for VDSM host to become operational...
[ INFO  ] The VDSM Host is now operational
          Please shutdown the VM allowing the system to launch it as a monitored
          service.
          The system will wait until the VM is down.
[ ERROR ] Failed to execute stage 'Closing up': Error acquiring VM status
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
          '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150310140028.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
At that point, neither the ovirt-ha-broker nor the ovirt-ha-agent service was running.
Note there was no significant pause after it said "The system will wait until the VM
is down".
After the script completed, I shut down the VM and manually started the HA services,
and the VM came up. I could log in to the Administration Portal, and finally see my
HostedEngine VM. :-)
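("Manually started the HA services" was nothing more than the obvious systemctl calls,
assuming the unit names match the service names mentioned above:

  systemctl start ovirt-ha-broker
  systemctl start ovirt-ha-agent

followed by "systemctl status" on each to confirm they stayed up.)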
I seem to be in a bad state, however: the Data Center has no storage domains attached.
I'm not sure what else might need cleaning up. Any assistance appreciated.
> -Bob
>>> # ip addr
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
>>> default
>>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>     inet 127.0.0.1/8 scope host lo
>>>        valid_lft forever preferred_lft forever
>>>     inet6 ::1/128 scope host
>>>        valid_lft forever preferred_lft forever
>>> 2: p3p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master
>>> ovirtmgmt state UP group default qlen 1000
>>>     link/ether b8:ca:3a:79:22:12 brd ff:ff:ff:ff:ff:ff
>>>     inet6 fe80::baca:3aff:fe79:2212/64 scope link
>>>        valid_lft forever preferred_lft forever
>>> 3: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue
>>> state DOWN group default
>>>     link/ether 56:56:f7:cf:73:27 brd ff:ff:ff:ff:ff:ff
>>> 4: wlp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN
>>> group default qlen 1000
>>>     link/ether 1c:3e:84:50:8d:c3 brd ff:ff:ff:ff:ff:ff
>>> 6: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
>>> default
>>>     link/ether 22:a1:01:9e:30:71 brd ff:ff:ff:ff:ff:ff
>>> 7: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
>>> UP group default
>>>     link/ether b8:ca:3a:79:22:12 brd ff:ff:ff:ff:ff:ff
>>>     inet 172.16.0.58/16 brd 172.16.255.255 scope global ovirtmgmt
>>>        valid_lft forever preferred_lft forever
>>>     inet6 fe80::baca:3aff:fe79:2212/64 scope link
>>>        valid_lft forever preferred_lft forever
>>
>>
>>> The only unusual thing about my setup that I can think of, from the network
>>> perspective, is that my physical host has a wireless interface, which I've
>>> not configured. Could it be confusing hosted-engine --deploy?
>>
>>> -Bob
>>
>>