<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p dir="ltr">For the record, once I added a new storage domain the
Data center came up.</p>
So in the end, this seems to have been due to known bugs:<br>
<pre wrap=""><a class="moz-txt-link-freetext" href="https://bugzilla.redhat.com/show_bug.cgi?id=1160667">https://bugzilla.redhat.com/show_bug.cgi?id=1160667</a>
<a class="moz-txt-link-freetext" href="https://bugzilla.redhat.com/show_bug.cgi?id=1160423">https://bugzilla.redhat.com/show_bug.cgi?id=1160423</a></pre>
<br>
Effectively, for hosts with static/manual IP addressing (i.e. not
DHCP), the DNS and default-route information are not carried over
correctly by hosted-engine-setup when it creates the ovirtmgmt
bridge. I'm not sure why that's not considered a higher-priority bug
(e.g. a blocker for 3.5.2?), since static IP addressing is probably
the most typical configuration for servers.<br>
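For anyone hitting the same thing, here is a rough sketch of the
manual workaround I ended up with (the gateway is my own 172.16.0.1,
taken from the original p3p1 config; the nameserver and domain values
are placeholders for whatever your static setup actually uses):<br>
<pre wrap=""># /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt: entries that were
# not carried over from the original interface config
GATEWAY=172.16.0.1      # default gateway (same as the old ifcfg-p3p1)
DNS1=172.16.0.2         # placeholder: your actual DNS server
DOMAIN=example.com      # placeholder: your actual search domain

# /etc/resolv.conf: add a matching entry until the next network restart
nameserver 172.16.0.2</pre>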
<p dir="ltr">All seems to be working now. Many thanks to Simone for
the invaluable assistance.<br>
</p>
<p dir="ltr">-Bob</p>
<p dir="ltr">
On Mar 10, 2015 2:29 PM, "Bob Doolittle" <<a
href="mailto:bob@doolittle.us.com">bob@doolittle.us.com</a>>
wrote:<br>
><br>
><br>
> On 03/10/2015 10:20 AM, Simone Tiraboschi wrote:<br>
>><br>
>><br>
>> ----- Original Message -----<br>
>>><br>
>>> From: "Bob Doolittle" <<a
href="mailto:bob@doolittle.us.com">bob@doolittle.us.com</a>><br>
>>> To: "Simone Tiraboschi" <<a
href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>><br>
>>> Cc: "users-ovirt" <<a
href="mailto:users@ovirt.org">users@ovirt.org</a>><br>
>>> Sent: Tuesday, March 10, 2015 2:40:13 PM<br>
>>> Subject: Re: [ovirt-users] Error during
hosted-engine-setup for 3.5.1 on F20 (The VDSM host was found in a
failed<br>
>>> state)<br>
>>><br>
>>><br>
>>> On 03/10/2015 04:58 AM, Simone Tiraboschi wrote:<br>
>>>><br>
>>>> ----- Original Message -----<br>
>>>>><br>
>>>>> From: "Bob Doolittle" <<a
href="mailto:bob@doolittle.us.com">bob@doolittle.us.com</a>><br>
>>>>> To: "Simone Tiraboschi" <<a
href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>><br>
>>>>> Cc: "users-ovirt" <<a
href="mailto:users@ovirt.org">users@ovirt.org</a>><br>
>>>>> Sent: Monday, March 9, 2015 11:48:03 PM<br>
>>>>> Subject: Re: [ovirt-users] Error during
hosted-engine-setup for 3.5.1 on<br>
>>>>> F20 (The VDSM host was found in a failed<br>
>>>>> state)<br>
>>>>><br>
>>>>><br>
>>>>> On 03/09/2015 02:47 PM, Bob Doolittle wrote:<br>
>>>>>><br>
>>>>>> Resending with CC to list (and an
update).<br>
>>>>>><br>
>>>>>> On 03/09/2015 01:40 PM, Simone Tiraboschi
wrote:<br>
>>>>>>><br>
>>>>>>> ----- Original Message -----<br>
>>>>>>>><br>
>>>>>>>> From: "Bob Doolittle" <<a
href="mailto:bob@doolittle.us.com">bob@doolittle.us.com</a>><br>
>>>>>>>> To: "Simone Tiraboschi" <<a
href="mailto:stirabos@redhat.com">stirabos@redhat.com</a>><br>
>>>>>>>> Cc: "users-ovirt" <<a
href="mailto:users@ovirt.org">users@ovirt.org</a>><br>
>>>>>>>> Sent: Monday, March 9, 2015
6:26:30 PM<br>
>>>>>>>> Subject: Re: [ovirt-users] Error
during hosted-engine-setup for 3.5.1<br>
>>>>>>>> on<br>
>>>>>>>> F20 (Cannot add the host to
cluster ... SSH<br>
>>>>>>>> has failed)<br>
>>>>>>>><br>
>>> ...<br>
>>>>>>>><br>
>>>>>>>> OK, I've started over. Simply removing the storage domain<br>
>>>>>>>> was insufficient; the hosted-engine deploy failed when it<br>
>>>>>>>> found the HA and Broker services already configured. I<br>
>>>>>>>> decided to just start over fresh, beginning with<br>
>>>>>>>> re-installing the OS on my host.<br>
>>>>>>>><br>
>>>>>>>> I can't deploy DNS at the moment, so I have to simply<br>
>>>>>>>> replicate /etc/hosts files on my host and engine. I did that<br>
>>>>>>>> this time, but have run into a new problem:<br>
>>>>>>>><br>
>>>>>>>> [ INFO ] Engine replied: DB Up!Welcome to Health Status!<br>
>>>>>>>> Enter the name of the cluster to which you want to add the host (Default) [Default]:<br>
>>>>>>>> [ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...<br>
>>>>>>>> [ ERROR ] The VDSM host was found in a failed state. Please check engine and bootstrap installation logs.<br>
>>>>>>>> [ ERROR ] Unable to add ovirt-vm to the manager<br>
>>>>>>>> Please shutdown the VM allowing the system to launch it as a monitored service.<br>
>>>>>>>> The system will wait until the VM is down.<br>
>>>>>>>> [ ERROR ] Failed to execute stage 'Closing up': [Errno 111] Connection refused<br>
>>>>>>>> [ INFO ] Stage: Clean up<br>
>>>>>>>> [ ERROR ] Failed to execute stage 'Clean up': [Errno 111] Connection refused<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> I've attached my engine log and the ovirt-hosted-engine-setup<br>
>>>>>>>> log. I think I had an issue with resolving external hostnames,<br>
>>>>>>>> or else a connectivity issue during the install.<br>
>>>>>>><br>
>>>>>>> For some reason your engine wasn't able to deploy your host,<br>
>>>>>>> but this time the SSH session was established.<br>
>>>>>>> 2015-03-09 13:05:58,514 ERROR<br>
>>>>>>>
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]<br>
>>>>>>> (org.ovirt.thread.pool-8-thread-3)
[3cf91626] Host installation failed<br>
>>>>>>> for host
217016bb-fdcd-4344-a0ca-4548262d10a8, ovirt-vm.:<br>
>>>>>>> java.io.IOException: Command returned
failure code 1 during SSH session<br>
>>>>>>> '<a
href="mailto:root@xion2.smartcity.net">root@xion2.smartcity.net</a>'<br>
>>>>>>><br>
>>>>>>> Can you please attach host-deploy
logs from the engine VM?<br>
>>>>>><br>
>>>>>> OK, attached.<br>
>>>>>><br>
>>>>>> Like I said, it looks to me like a
name-resolution issue during the yum<br>
>>>>>> update on the engine. I think I've fixed
that, but do you have a better<br>
>>>>>> suggestion for cleaning up and
re-deploying other than installing the OS<br>
>>>>>> on my host and starting all over again?<br>
>>>>><br>
>>>>> I just finished starting over from scratch, beginning with OS<br>
>>>>> installation on my host/node, and wound up with a very similar<br>
>>>>> problem - the engine couldn't reach the mirror hosts during the<br>
>>>>> yum operation. But this time the error was "Network is<br>
>>>>> unreachable", which is weird, because I can ssh into the engine<br>
>>>>> and ping many of those hosts after the operation has failed.<br>
>>>>><br>
>>>>> Here's my latest host-deploy log from the
engine. I'd appreciate any<br>
>>>>> clues.<br>
>>>><br>
>>>> It seems that your host is now able to resolve those addresses<br>
>>>> but is not able to connect over HTTP.<br>
>>>> Some of them resolve as IPv6 addresses on your host; can you<br>
>>>> please try to use curl to get one of the files that it wasn't<br>
>>>> able to fetch?<br>
>>>> Can you please check your network configuration before and after<br>
>>>> host-deploy?<br>
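(For anyone following along: the check being asked for here is just a
direct fetch of one of the failing package URLs, for example the
resources.ovirt.org one from the yum output further down. A sketch:)<br>
<pre wrap="">curl -v -o /dev/null \
  http://resources.ovirt.org/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm</pre>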
>>><br>
>>> I can give you the network configuration after
host-deploy, at least for the<br>
>>> host/Node. The engine won't start for me this
morning, after I shut down the<br>
>>> host for the night.<br>
>>><br>
>>> In order to give you the config before host-deploy
(or, apparently for the<br>
>>> engine), I'll have to re-install the OS on the host
and start again from<br>
>>> scratch. Obviously I'd rather not do that unless
absolutely necessary.<br>
>>><br>
>>> Here's the host config after the failed host-deploy:<br>
>>><br>
>>> Host/Node:<br>
>>><br>
>>> # ip route<br>
>>> <a href="http://169.254.0.0/16">169.254.0.0/16</a>
dev ovirtmgmt scope link metric 1007<br>
>>> <a href="http://172.16.0.0/16">172.16.0.0/16</a> dev
ovirtmgmt proto kernel scope link src 172.16.0.58<br>
>><br>
>> You are missing a default gateway, hence the issue.<br>
>> Are you sure that it was properly configured before
trying to deploy that host?<br>
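(For the archives: with a static setup you can put the route back by
hand while debugging; this is non-persistent, and the persistent fix
is the GATEWAY entry in ifcfg-ovirtmgmt sketched above. 172.16.0.1 is
my gateway from the configs below:)<br>
<pre wrap="">ip route add default via 172.16.0.1 dev ovirtmgmt
ip route    # verify the default route is back</pre>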
><br>
><br>
> It should have been; it was a fresh OS install. So I'm
starting again and keeping careful records of my network config.<br>
><br>
> Here is my initial network config of my host/node,
immediately following a new OS install:<br>
><br>
> % ip route<br>
> default via 172.16.0.1 dev p3p1 proto static metric 1024 <br>
> <a href="http://172.16.0.0/16">172.16.0.0/16</a> dev p3p1
proto kernel scope link src 172.16.0.58 <br>
><br>
> % ip addr<br>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
state UNKNOWN group default <br>
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br>
> inet <a href="http://127.0.0.1/8">127.0.0.1/8</a> scope
host lo<br>
> valid_lft forever preferred_lft forever<br>
> inet6 ::1/128 scope host <br>
> valid_lft forever preferred_lft forever<br>
> 2: p3p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
qdisc pfifo_fast state UP group default qlen 1000<br>
> link/ether b8:ca:3a:79:22:12 brd ff:ff:ff:ff:ff:ff<br>
> inet <a href="http://172.16.0.58/16">172.16.0.58/16</a>
brd 172.16.255.255 scope global p3p1<br>
> valid_lft forever preferred_lft forever<br>
> inet6 fe80::baca:3aff:fe79:2212/64 scope link <br>
> valid_lft forever preferred_lft forever<br>
> 3: wlp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500
qdisc mq state DOWN group default qlen 1000<br>
> link/ether 1c:3e:84:50:8d:c3 brd ff:ff:ff:ff:ff:ff<br>
><br>
><br>
> After the VM is first created, the host/node config is:<br>
><br>
> # ip route<br>
> default via 172.16.0.1 dev ovirtmgmt <br>
> <a href="http://169.254.0.0/16">169.254.0.0/16</a> dev
ovirtmgmt scope link metric 1006 <br>
> <a href="http://172.16.0.0/16">172.16.0.0/16</a> dev
ovirtmgmt proto kernel scope link src 172.16.0.58 <br>
><br>
> # ip addr <br>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
state UNKNOWN group default <br>
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br>
> inet <a href="http://127.0.0.1/8">127.0.0.1/8</a> scope
host lo<br>
> valid_lft forever preferred_lft forever<br>
> inet6 ::1/128 scope host <br>
> valid_lft forever preferred_lft forever<br>
> 2: p3p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
qdisc pfifo_fast master ovirtmgmt state UP group default qlen 1000<br>
> link/ether b8:ca:3a:79:22:12 brd ff:ff:ff:ff:ff:ff<br>
> inet6 fe80::baca:3aff:fe79:2212/64 scope link <br>
> valid_lft forever preferred_lft forever<br>
> 3: wlp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500
qdisc mq state DOWN group default qlen 1000<br>
> link/ether 1c:3e:84:50:8d:c3 brd ff:ff:ff:ff:ff:ff<br>
> 4: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP>
mtu 1500 qdisc noqueue state DOWN group default <br>
> link/ether 92:cb:9d:97:18:36 brd ff:ff:ff:ff:ff:ff<br>
> 5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc
noop state DOWN group default <br>
> link/ether 9a:bc:29:52:82:38 brd ff:ff:ff:ff:ff:ff<br>
> 6: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
1500 qdisc noqueue state UP group default <br>
> link/ether b8:ca:3a:79:22:12 brd ff:ff:ff:ff:ff:ff<br>
> inet <a href="http://172.16.0.58/16">172.16.0.58/16</a>
brd 172.16.255.255 scope global ovirtmgmt<br>
> valid_lft forever preferred_lft forever<br>
> inet6 fe80::baca:3aff:fe79:2212/64 scope link <br>
> valid_lft forever preferred_lft forever<br>
> 7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
qdisc pfifo_fast master ovirtmgmt state UNKNOWN group default qlen
500<br>
> link/ether fe:16:3e:16:a4:37 brd ff:ff:ff:ff:ff:ff<br>
> inet6 fe80::fc16:3eff:fe16:a437/64 scope link <br>
> valid_lft forever preferred_lft forever<br>
><br>
><br>
> At this point, I was already seeing a problem on the
host/node. I remembered that a newer version of the sos package is
delivered from the oVirt repositories. So I tried to do a "yum
update" on my host, and got a similar problem:<br>
><br>
> % sudo yum update<br>
> [sudo] password for rad: <br>
> Loaded plugins: langpacks, refresh-packagekit<br>
> Resolving Dependencies<br>
> --> Running transaction check<br>
> ---> Package sos.noarch 0:3.1-1.fc20 will be updated<br>
> ---> Package sos.noarch 0:3.2-0.2.fc20.ovirt will be an
update<br>
> --> Finished Dependency Resolution<br>
><br>
> Dependencies Resolved<br>
><br>
>
================================================================================================================<br>
> Package Arch
Version Repository
Size<br>
>
================================================================================================================<br>
> Updating:<br>
> sos noarch
3.2-0.2.fc20.ovirt ovirt-3.5 292
k<br>
><br>
> Transaction Summary<br>
>
================================================================================================================<br>
> Upgrade 1 Package<br>
><br>
> Total download size: 292 k<br>
> Is this ok [y/d/N]: y<br>
> Downloading packages:<br>
> No Presto metadata available for ovirt-3.5<br>
> sos-3.2-0.2.fc20.ovirt.noarch.
FAILED <br>
> <a
href="http://www.gtlib.gatech.edu/pub/oVirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm">http://www.gtlib.gatech.edu/pub/oVirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm</a>:
[Errno 14] curl#6 - "Could not resolve host: <a
href="http://www.gtlib.gatech.edu">www.gtlib.gatech.edu</a>"<br>
> Trying other mirror.<br>
> sos-3.2-0.2.fc20.ovirt.noarch.
FAILED <br>
> <a
href="ftp://ftp.gtlib.gatech.edu/pub/oVirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm">ftp://ftp.gtlib.gatech.edu/pub/oVirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm</a>:
[Errno 14] curl#6 - "Could not resolve host: <a
href="http://ftp.gtlib.gatech.edu">ftp.gtlib.gatech.edu</a>"<br>
> Trying other mirror.<br>
> sos-3.2-0.2.fc20.ovirt.noarch.
FAILED <br>
> <a
href="http://resources.ovirt.org/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm">http://resources.ovirt.org/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm</a>:
[Errno 14] curl#6 - "Could not resolve host: <a
href="http://resources.ovirt.org">resources.ovirt.org</a>"<br>
> Trying other mirror.<br>
> sos-3.2-0.2.fc20.ovirt.noarch.
FAILED <br>
> <a
href="http://ftp.snt.utwente.nl/pub/software/ovirt/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm">http://ftp.snt.utwente.nl/pub/software/ovirt/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm</a>:
[Errno 14] curl#6 - "Could not resolve host: <a
href="http://ftp.snt.utwente.nl">ftp.snt.utwente.nl</a>"<br>
> Trying other mirror.<br>
> sos-3.2-0.2.fc20.ovirt.noarch.
FAILED <br>
> <a
href="http://ftp.nluug.nl/os/Linux/virtual/ovirt/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm">http://ftp.nluug.nl/os/Linux/virtual/ovirt/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm</a>:
[Errno 14] curl#6 - "Could not resolve host: <a
href="http://ftp.nluug.nl">ftp.nluug.nl</a>"<br>
> Trying other mirror.<br>
> sos-3.2-0.2.fc20.ovirt.noarch.
FAILED <br>
> <a
href="http://mirror.linux.duke.edu/ovirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm">http://mirror.linux.duke.edu/ovirt/pub/ovirt-3.5/rpm/fc20/noarch/sos-3.2-0.2.fc20.ovirt.noarch.rpm</a>:
[Errno 14] curl#6 - "Could not resolve host: <a
href="http://mirror.linux.duke.edu">mirror.linux.duke.edu</a>"<br>
> Trying other mirror.<br>
><br>
><br>
> Error downloading packages:<br>
> sos-3.2-0.2.fc20.ovirt.noarch: [Errno 256] No more mirrors
to try.<br>
><br>
><br>
> This was similar to my previous failures. I took a look, and
the problem was that /etc/resolv.conf had no nameservers, and the
/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt file contained no
entries for DNS1 or DOMAIN.<br>
><br>
> So, it appears that when hosted-engine set up my bridged
network, it neglected to carry the necessary DNS configuration over
to the bridge.<br>
><br>
> Note that I am using *static* network configuration, rather
than DHCP. During installation of the OS I am setting up the
network configuration as Manual. Perhaps the hosted-engine script
is not properly prepared to deal with that?<br>
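(The quick checks that showed this, roughly; neither turned up any
DNS information:)<br>
<pre wrap="">cat /etc/resolv.conf                     # no nameserver lines at all
grep -E 'DNS1|DOMAIN' /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt</pre>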
><br>
> I went ahead and modified the ifcfg-ovirtmgmt network script
(for the next service restart/boot) and resolv.conf (I was afraid
to restart the network in the middle of hosted-engine execution
since I don't know what might already be connected to the engine).
This time it got further, but ultimately it still failed at the
very end:<br>
><br>
> [ INFO ] Waiting for the host to become operational in the
engine. This may take several minutes...<br>
> [ INFO ] Still waiting for VDSM host to become
operational...<br>
> [ INFO ] The VDSM Host is now operational<br>
> Please shutdown the VM allowing the system to
launch it as a monitored service.<br>
> The system will wait until the VM is down.<br>
> [ ERROR ] Failed to execute stage 'Closing up': Error
acquiring VM status<br>
> [ INFO ] Stage: Clean up<br>
> [ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150310140028.conf'<br>
> [ INFO ] Stage: Pre-termination<br>
> [ INFO ] Stage: Termination<br>
><br>
><br>
> At that point, neither the ovirt-ha-broker nor the ovirt-ha-agent
service was running.<br>
><br>
> Note there was no significant pause after it said "The system
will wait until the VM is down".<br>
><br>
> After the script completed, I shut down the VM and manually
started the HA services, and the VM came up. I could log in to the
Administration Portal and finally see my HostedEngine VM. :-)<br>
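(Concretely, "manually started the HA services" here just means
starting the stock systemd units, something like:)<br>
<pre wrap="">systemctl start ovirt-ha-broker ovirt-ha-agent
systemctl status ovirt-ha-broker ovirt-ha-agent    # the agent then brings the engine VM back up</pre>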
><br>
> I seem to be in a bad state, however: the Data Center has no
storage domains attached. I'm not sure what else might need
cleaning up. Any assistance appreciated.</p>
><br>
<p dir="ltr">
> -Bob<br>
><br>
><br>
><br>
>>> # ip addr<br>
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc
noqueue state UNKNOWN group<br>
>>> default<br>
>>> link/loopback 00:00:00:00:00:00 brd
00:00:00:00:00:00<br>
>>> inet <a href="http://127.0.0.1/8">127.0.0.1/8</a>
scope host lo<br>
>>> valid_lft forever preferred_lft forever<br>
>>> inet6 ::1/128 scope host<br>
>>> valid_lft forever preferred_lft forever<br>
>>> 2: p3p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
1500 qdisc pfifo_fast master<br>
>>> ovirtmgmt state UP group default qlen 1000<br>
>>> link/ether b8:ca:3a:79:22:12 brd
ff:ff:ff:ff:ff:ff<br>
>>> inet6 fe80::baca:3aff:fe79:2212/64 scope link<br>
>>> valid_lft forever preferred_lft forever<br>
>>> 3: bond0:
<NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc
noqueue<br>
>>> state DOWN group default<br>
>>> link/ether 56:56:f7:cf:73:27 brd
ff:ff:ff:ff:ff:ff<br>
>>> 4: wlp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP>
mtu 1500 qdisc mq state DOWN<br>
>>> group default qlen 1000<br>
>>> link/ether 1c:3e:84:50:8d:c3 brd
ff:ff:ff:ff:ff:ff<br>
>>> 6: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500
qdisc noop state DOWN group<br>
>>> default<br>
>>> link/ether 22:a1:01:9e:30:71 brd
ff:ff:ff:ff:ff:ff<br>
>>> 7: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP>
mtu 1500 qdisc noqueue state<br>
>>> UP group default<br>
>>> link/ether b8:ca:3a:79:22:12 brd
ff:ff:ff:ff:ff:ff<br>
>>> inet <a href="http://172.16.0.58/16">172.16.0.58/16</a>
brd 172.16.255.255 scope global ovirtmgmt<br>
>>> valid_lft forever preferred_lft forever<br>
>>> inet6 fe80::baca:3aff:fe79:2212/64 scope link<br>
>>> valid_lft forever preferred_lft forever<br>
>>><br>
>>><br>
>>> The only unusual thing about my setup that I can
think of, from the network<br>
>>> perspective, is that my physical host has a wireless
interface, which I've<br>
>>> not configured. Could it be confusing hosted-engine
--deploy?<br>
>>><br>
>>> -Bob<br>
>>><br>
>>><br>
><br>
</p>
</body>
</html>