On Fri, Oct 30, 2015 at 8:13 PM, Jorgens, Bruce <bjorgens@ciena.com> wrote:

Ovirt users,

 

I have ovirt 3.5.5 with a hosted engine setup; the host node and the engine node run CentOS 7.1, with a shared NFS storage domain.

I initially attempted to add 2 additional hosted engine nodes running CentOS 7.1 with the ‘hosted-engine --deploy’ command.

Both nodes failed to add because the answer file could not be retrieved from the first host.

[root@bml-24 ~]# hosted-engine --deploy

[ INFO  ] Stage: Initializing

[ INFO  ] Generating a temporary VNC password.

[ INFO  ] Stage: Environment setup

          Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.

          Are you sure you want to continue? (Yes, No)[Yes]:

          Configuration files: []

          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151030113943-c18yxd.log

          Version: otopi-1.3.2 (otopi-1.3.2-1.el7.centos)

[ INFO  ] Hardware supports virtualization

[ INFO  ] Bridge ovirtmgmt already created

[ INFO  ] Stage: Environment packages setup

[ INFO  ] Stage: Programs detection

[ INFO  ] Stage: Environment setup

[ INFO  ] Stage: Environment customization

 

          --== STORAGE CONFIGURATION ==--

 

          During customization use CTRL-D to abort.

          Please specify the storage you would like to use (iscsi, nfs3, nfs4)[nfs3]:

          Please specify the full shared storage connection path to use (example: host:/path): cantap01b.ciena.com:/vol/sta_fast/ovirt35/hosted_engine

          The specified storage location already contains a data domain. Is this an additional host setup (Yes, No)[Yes]?

[ INFO  ] Installing on additional host

          Please specify the Host ID [Must be integer, default: 2]: 4

          Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.

          Please enter local datacenter name [hosted_datacenter]:

 

          --== SYSTEM CONFIGURATION ==--

 

[WARNING] A configuration file must be supplied to deploy Hosted Engine on an additional host.

          The answer file may be fetched from the first host using scp.

          If you do not want to download it automatically you can abort the setup answering no to the following question.

          Do you want to scp the answer file from the first host? (Yes, No)[Yes]:

          Please provide the FQDN or IP of the first host: ovirt.ca.stalab.ciena.com

          Enter 'root' user password for host ovirt.ca.stalab.ciena.com:

[ ERROR ] Failed to execute stage 'Environment customization': [Errno 2] No such file

[ INFO  ] Stage: Clean up

[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151030114103.conf'

[ INFO  ] Stage: Pre-termination

[ INFO  ] Stage: Termination

 

I was able to add the additional hosts using the ovirt GUI and am able to use them for running VMs.

 

I tried running ‘hosted-engine --deploy’ on each host again and got the same error as before.

 


Could you check whether the file /etc/ovirt-hosted-engine/answers.conf exists on the host you want to copy the answer file from?
If not, most likely something went wrong when you set up the first node. In that case you will find your answers file in the /var/lib/ovirt-hosted-engine-setup/answers/ folder.

You might find some errors in /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine*.log on the first node. Could you provide the log from this node?

After copying the answers file from /var/lib/ovirt-hosted-engine-setup/answers/ over to /etc/ovirt-hosted-engine/answers.conf, you should no longer see this error on the other hosts.
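
For example, roughly like this on the first host (the generated answers-*.conf name carries a timestamp, so substitute whatever name ls actually shows on your system):

    # check whether the file the additional-host setup tries to fetch exists
    ls -l /etc/ovirt-hosted-engine/answers.conf

    # if it is missing, take the newest generated answer file and copy it into place
    ls -lt /var/lib/ovirt-hosted-engine-setup/answers/
    cp /var/lib/ovirt-hosted-engine-setup/answers/answers-<timestamp>.conf /etc/ovirt-hosted-engine/answers.conf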

 

Checking the logfile referenced above, I don't see the download filename listed.

I do see a couple of stack traces that might be useful.

 

I also found this defect, which is reported as resolved, in the release notes for ovirt-3.5.5.

Bug 1271272 - [HE] Failed to deploy additional host using NFS


You should only be affected by that bug if you see something like
    No such file or directory: '/rhev/data-center/mnt/[...]/ha_agent/hosted-engine.metadata'.
This is related to accessing metadata of other nodes through the storage mounted by vdsm.
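
If you want to rule that out, a rough check on one of the additional hosts would look something like this (the exact directories under /rhev/data-center/mnt/ depend on your NFS export and the storage domain UUID, hence the wildcards):

    # list the hosted-engine ha_agent metadata under the storage mounted by vdsm
    ls -l /rhev/data-center/mnt/*/*/ha_agent/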

 

I was careful when installing ovirt 3.5.5 to ensure that I used the baseurl instead of the mirrors, as it looked like the mirrors were behind.

 

[root@bml-24 ~]# cat /etc/yum.repos.d/ovirt-3.5.repo

[ovirt-3.5]

name=Latest oVirt 3.5 Release

baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el$releasever/

#mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el$releasever

enabled=1

skip_if_unavailable=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-3.5

 

I see that there is another repo file for ovirt as well that I didn’t modify.

Could my issue be caused by not picking up the latest versions from these mirrors?

 

[root@bml-24 ~]# cat /etc/yum.repos.d/ovirt-3.5-dependencies.repo

[ovirt-3.5-epel]

name=Extra Packages for Enterprise Linux 7 - $basearch

#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch

mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch

failovermethod=priority

enabled=1

includepkgs=epel-release,python-uinput,puppet,python-lockfile,python-cpopen,python-ordereddict,python-pthreading,python-inotify,python-argparse,novnc,python-ply,python-kitchen,python-daemon,python-websockify,livecd-tools,spice-html5,mom,python-IPy,python-ioprocess,ioprocess,python-paramiko,python-crypto,python-cheetah,python-ecdsa,python-markdown,rubygem-rgen,ovirt-guest-agent,userspace-rcu,protobuf-java

gpgcheck=1

gpgkey=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

 

[ovirt-3.5-glusterfs-epel]

name=GlusterFS is a clustered file-system capable of scaling to several petabytes.

baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/

enabled=1

skip_if_unavailable=1

gpgcheck=1

gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key

 

[ovirt-3.5-glusterfs-noarch-epel]

name=GlusterFS is a clustered file-system capable of scaling to several petabytes.

baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch

enabled=1

skip_if_unavailable=1

gpgcheck=1

gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key

 

[ovirt-3.5-patternfly1-noarch-epel]

name=Copr repo for patternfly1 owned by patternfly

baseurl=http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-$basearch/

enabled=1

skip_if_unavailable=1

gpgcheck=0
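
In case it helps, this is roughly how I can compare what is installed on the host against what the enabled ovirt-3.5 repo currently offers (the package names below are just my guess at the relevant ones):

    # versions currently installed on this host
    rpm -q ovirt-hosted-engine-setup ovirt-hosted-engine-ha vdsm

    # check whether the ovirt-3.5 repo has newer builds than what is installed
    yum --disablerepo='*' --enablerepo='ovirt-3.5' check-update ovirt-hosted-engine-setup ovirt-hosted-engine-ha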

 

 

2015-10-30 11:40:29 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND

2015-10-30 11:40:29 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND                 --== SYSTEM CONFIGURATION ==--

2015-10-30 11:40:29 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND

2015-10-30 11:40:29 DEBUG otopi.context context._executeMethod:138 Stage customization METHOD otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._customization

2015-10-30 11:40:29 WARNING otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile remote_answerfile._customization:244 A configuration file must be supplied to deploy Hosted Engine on an additional host.

2015-10-30 11:40:29 DEBUG otopi.plugins.otopi.dialog.human human.queryString:153 query OVEHOSTED_CORE_FETCH_ANSWER

2015-10-30 11:40:29 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND                 The answer file may be fetched from the first host using scp.

2015-10-30 11:40:29 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND                 If you do not want to download it automatically you can abort the setup answering no to the following question.

2015-10-30 11:40:29 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND                 Do you want to scp the answer file from the first host? (Yes, No)[Yes]:

2015-10-30 11:40:43 DEBUG otopi.plugins.otopi.dialog.human human.queryString:153 query OVEHOSTED_NET_FIRST_HOST_FQDN

2015-10-30 11:40:43 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND                 Please provide the FQDN or IP of the first host:

2015-10-30 11:40:55 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:RECEIVE    ovirt.ca.stalab.ciena.com

2015-10-30 11:40:55 DEBUG otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile remote_answerfile._fetch_answer_file:106 _fetch_answer_file

2015-10-30 11:40:55 DEBUG otopi.plugins.otopi.dialog.human human.queryString:153 query HOST_FIRST_HOST_ROOT_PASSWORD

2015-10-30 11:40:55 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND                 Enter 'root' user password for host ovirt.ca.stalab.ciena.com:

2015-10-30 11:40:59 DEBUG otopi.context context._executeMethod:152 method exception

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 142, in _executeMethod

    method['method']()

  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/core/remote_answerfile.py", line 304, in _customization

    self._fetch_answer_file()

  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/core/remote_answerfile.py", line 143, in _fetch_answer_file

    self._tmp_ans

  File "/usr/lib/python2.7/site-packages/paramiko/sftp_client.py", line 718, in get

    file_size = self.stat(remotepath).st_size

  File "/usr/lib/python2.7/site-packages/paramiko/sftp_client.py", line 413, in stat

    t, msg = self._request(CMD_STAT, path)

  File "/usr/lib/python2.7/site-packages/paramiko/sftp_client.py", line 729, in _request

    return self._read_response(num)

  File "/usr/lib/python2.7/site-packages/paramiko/sftp_client.py", line 776, in _read_response

    self._convert_status(msg)

  File "/usr/lib/python2.7/site-packages/paramiko/sftp_client.py", line 802, in _convert_status

    raise IOError(errno.ENOENT, text)

IOError: [Errno 2] No such file

2015-10-30 11:40:59 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Environment customization': [Errno 2] No such file

2015-10-30 11:40:59 DEBUG otopi.context context.dumpEnvironment:490 ENVIRONMENT DUMP - BEGIN

2015-10-30 11:40:59 DEBUG otopi.context context.dumpEnvironment:500 ENV BASE/error=bool:'True'

2015-10-30 11:40:59 DEBUG otopi.context context.dumpEnvironment:500 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.IOError'>, IOError(2, u'No such file'), <traceback object at 0x3daa320>)]'

2015-10-30 11:40:59 DEBUG otopi.context context.dumpEnvironment:500 ENV OVEHOSTED_FIRST_HOST/fetchAnswer=bool:'True'

2015-10-30 11:40:59 DEBUG otopi.context context.dumpEnvironment:500 ENV OVEHOSTED_FIRST_HOST/fqdn=str:'ovirt.ca.stalab.ciena.com'

2015-10-30 11:40:59 DEBUG otopi.context context.dumpEnvironment:500 ENV OVEHOSTED_FIRST_HOST/rootPassword=str:'**FILTERED**'

2015-10-30 11:40:59 DEBUG otopi.context context.dumpEnvironment:504 ENVIRONMENT DUMP - END

2015-10-30 11:40:59 INFO otopi.context context.runSequence:417 Stage: Clean up

2015-10-30 11:40:59 DEBUG otopi.context context.runSequence:421 STAGE cleanup

2015-10-30 11:40:59 DEBUG otopi.context context._executeMethod:138 Stage cleanup METHOD otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._cleanup

2015-10-30 11:40:59 DEBUG otopi.context context._executeMethod:138 Stage cleanup METHOD otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup

2015-10-30 11:40:59 DEBUG otopi.context context._executeMethod:138 Stage cleanup METHOD otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup

2015-10-30 11:40:59 DEBUG otopi.context context._executeMethod:138 Stage cleanup METHOD otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._cleanup

2015-10-30 11:40:59 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:692 spmStop

2015-10-30 11:40:59 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._cleanup:1055 Not SPM?

Traceback (most recent call last):

  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 1053, in _cleanup

    self._spmStop()

  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 698, in _spmStop

    raise RuntimeError(status['status']['message'])

RuntimeError: Not SPM

2015-10-30 11:40:59 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._storagePoolConnection:657 disconnectStoragePool

2015-10-30 11:41:03 DEBUG otopi.context context._executeMethod:138 Stage cleanup METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._cleanup

2015-10-30 11:41:03 DEBUG otopi.context context._executeMethod:138 Stage cleanup METHOD otopi.plugins.ovirt_hosted_engine_setup.core.answerfile.Plugin._save_answers_at_cleanup

2015-10-30 11:41:03 INFO otopi.plugins.ovirt_hosted_engine_setup.core.answerfile answerfile._save_answers:52 Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151030114103.conf'

2015-10-30 11:41:03 INFO otopi.context context.runSequence:417 Stage: Pre-termination

 

Thanks for your help,

Bruce.

 

Bruce T. Jorgens | Technical Lead, System Test Automation
bjorgens@ciena.com | 115 North Sullivan Road | Spokane Valley, WA 99037 USA
Direct +1.509.242.9292 | Mobile +1.509.499.0215 | Fax +1.509.242.9001

 




Best Regards,
Roman