Hi Didi,
Thank you for the response.
I have tried a fresh install (RHEL 6.6) and still ran into the same
problem.
[root@pws-hv15 rhiso]# hosted-engine --deploy
[ INFO ] Stage: Initializing
Continuing will configure this host for serving as hypervisor
and create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
It has been detected that this program is executed through an
SSH connection without using screen.
Continuing with the installation may lead to broken
installation if the network connection fails.
It is highly recommended to abort the installation and run it
inside a screen session using command "screen".
Do you want to continue anyway? (Yes, No)[No]: yes
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150102134318-7ougjc.log
Version: otopi-1.2.3 (otopi-1.2.3-1.el6ev)
[ INFO ] Hardware supports virtualization
[ INFO ] Bridge rhevm already created
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== STORAGE CONFIGURATION ==--
During customization use CTRL-D to abort.
Please specify the storage you would like to use (nfs3,
nfs4)[nfs3]:
Please specify the full shared storage connection path to use
(example: host:/path): 192.168.1.32:/Volumes/Raid1
[ INFO ] Installing on first host
Please provide storage domain name. [hosted_storage]:
Local storage datacenter name is an internal name and
currently will not be shown in engine's admin UI. Please enter local
datacenter name [hosted_datacenter]:
--== SYSTEM CONFIGURATION ==--
--== NETWORK CONFIGURATION ==--
iptables was detected on your computer, do you wish setup to
configure it? (Yes, No)[Yes]:
Please indicate a pingable gateway IP address [192.168.0.3]:
--== VM CONFIGURATION ==--
Please specify the device to boot the VM from (cdrom, disk,
pxe) [cdrom]:
The following CPU types are supported by this host:
- model_Westmere: Intel Westmere Family
- model_Nehalem: Intel Nehalem Family
- model_Penryn: Intel Penryn Family
- model_Conroe: Intel Conroe Family
Please specify the CPU type to be used by the VM
[model_Westmere]:
Please specify path to installation media you would like to
use [None]: /mnt/rhiso
Please specify the number of virtual CPUs for the VM
[Defaults to minimum requirement: 2]:
Please specify the disk size of the VM in GB [Defaults to
minimum requirement: 25]:
You may specify a MAC address for the VM or accept a randomly
generated default [00:16:3e:02:7f:c4]:
Please specify the memory size of the VM in MB [Defaults to
minimum requirement: 4096]:
Please specify the console type you would like to use to
connect to the VM (vnc, spice) [vnc]:
--== HOSTED ENGINE CONFIGURATION ==--
Enter the name which will be used to identify this host
inside the Administrator Portal [hosted_engine_1]:
Enter 'admin@internal' user password that will be used for
accessing the Administrator Portal:
Confirm 'admin@internal' user password:
Please provide the FQDN for the engine you would like to use.
This needs to match the FQDN that you will use for the engine
installation within the VM.
Note: This will be the FQDN of the VM you are now going to
create,
it should not point to the base host or to any other existing
machine.
Engine FQDN: powerhost1.power-soft.net
Please provide the name of the SMTP server through which we
will send notifications [localhost]:
Please provide the TCP port number of the SMTP server [25]:
Please provide the email address from which notifications
will be sent [root@localhost]:
Please provide a comma-separated list of email addresses
which will get notifications [root@localhost]:
[ INFO ] Stage: Setup validation
--== CONFIGURATION PREVIEW ==--
Engine FQDN : powerhost1.power-soft.net
Bridge name : rhevm
SSH daemon port : 22
Firewall manager : iptables
Gateway address : 192.168.0.3
Host name for web application : hosted_engine_1
Host ID : 1
Image size GB : 25
Storage connection : 192.168.1.32:/Volumes/Raid1
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:02:7f:c4
Boot type : cdrom
Number of CPUs : 2
ISO image (for cdrom boot) : /mnt/rhiso
CPU Type : model_Westmere
Please confirm installation settings (Yes, No)[No]: Yes
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Initializing sanlock lockspace
[ INFO ] Initializing sanlock metadata
[ INFO ] Creating VM Image
[ INFO ] Disconnecting Storage Pool
[ INFO ] Start monitoring domain
[ ERROR ] Failed to execute stage 'Misc configuration': The read
operation timed out
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
sanlock status
daemon 4a21a439-cc72-4542-8146-71d0cb754da4.pws-hv15.p
p -1 helper
p -1 listener
p -1 status
p 5365
s
e7175b6e-bf5a-4723-98f6-0f7972623f4c:1:/rhev/data-center/mnt/192.168.1.32\:_Volumes_Raid1/e7175b6e-bf5a-4723-98f6-0f7972623f4c/dom_md/ids:0
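For what it's worth, the read vdsm keeps timing on can be run by hand against the same storage domain path the sanlock output shows. A sketch (run it on the host itself, where the NFS mount exists; here it only prints the command):

```shell
# Probe NFS read latency the same way vdsm's getReadDelay does,
# using the dom_md/ids path from the sanlock status above.
IDS="/rhev/data-center/mnt/192.168.1.32:_Volumes_Raid1/e7175b6e-bf5a-4723-98f6-0f7972623f4c/dom_md/ids"
CMD="dd if=$IDS iflag=direct of=/dev/null bs=4096 count=1"
echo "$CMD"
# On the host: time eval "$CMD" -- anything beyond a few hundred ms
# would point at slow/unresponsive NFS rather than at the setup tool.
```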
I am not sure if it is relevant, but after subscribing to the relevant
channels listed in the Installation Guide
<
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtuali...
I started receiving an error when trying to update via yum; it only
appeared after I enabled rhel-6-server-rhev-mgmt-agent-rpms. I have no
idea why it would want to install the i686 augeas items and cannot find
any info on how to resolve this.
This usually means that you are either using a broken repo or you have
installed some 3rd-party packages which require a different version of
the libs than others do (btw, augeas-1.0.0-5.el6_5.1 doesn't look like a
clean 6.6 install; the package version suggests it's 6.5). You can try
updating the system before you add any 3rd-party repos.
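One quick way to check for the 6.5-era leftovers mentioned here is to filter the installed package list for el6_5 releases. Sketch below; `PKGS` stands in for `rpm -qa` output using package names taken from the yum transaction in this thread — on the host, pipe the real `rpm -qa` into the same grep:

```shell
# Filter a package list for el6_5 leftovers (sample data; on the host
# run: rpm -qa | grep 'el6_5').
PKGS="augeas-1.0.0-5.el6_5.1.x86_64
augeas-libs-1.0.0-5.el6_5.1.x86_64
glibc-2.12-1.149.el6.x86_64"
LEFTOVER=$(printf '%s\n' "$PKGS" | grep 'el6_5')
printf '%s\n' "$LEFTOVER"
```

Anything this prints on the real host predates the 6.6 update and is a candidate for `yum update` before retrying the deploy.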
Now for the read timeout problem:
The vdsm log is full of:
Thread-62::DEBUG::2015-01-02
14:07:44,218::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n480 bytes (480 B) copied,
0.000351466 s, 1.4 MB/s\n'; <rc> = 0
Thread-62::DEBUG::2015-01-02
14:07:54,228::fileSD::261::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
if=/rhev/data-center/mnt/192.168.1.32:_Volumes_Raid1/e7175b6e-bf5a-4723-98f6-0f7972623f4c/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1' (cwd None)
repeating over and over, which doesn't seem healthy to me, but I'm not a
vdsm expert. Can you please run 'vdsClient -s 0 getVdsStats' on the
host you're deploying?
Thanks,
Jirka
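The check Jirka asks for can be run as below. `vdsClient` ships with vdsm (`-s` uses SSL, `0` addresses the local host); the guard is only so the sketch degrades gracefully on a machine without vdsm:

```shell
# Run Jirka's suggested check; vdsClient comes with the vdsm package.
if command -v vdsClient >/dev/null 2>&1; then
    vdsClient -s 0 getVdsStats
    RESULT=$?
else
    echo "vdsClient not installed here; run this on the deploying host"
    RESULT=0
fi
```

If getVdsStats itself hangs or times out, that matches the SSLError in the setup traceback, which fails inside the same getVdsStats XML-RPC call.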
[root@pws-hv15 ~]# yum update
Loaded plugins: product-id, security, subscription-manager, versionlock
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package augeas-libs.x86_64 0:1.0.0-5.el6_5.1 will be updated
--> Processing Dependency: augeas-libs = 1.0.0-5.el6_5.1 for package:
augeas-1.0.0-5.el6_5.1.x86_64
---> Package augeas-libs.x86_64 0:1.0.0-7.el6 will be an update
--> Running transaction check
---> Package augeas-libs.i686 0:1.0.0-5.el6_5.1 will be installed
--> Processing Dependency: libxml2.so.2(LIBXML2_2.4.30) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libxml2.so.2 for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libselinux.so.1 for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.8) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.4) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.3.4) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.3) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.2) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.1.3) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.1.1) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.1) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6(GLIBC_2.0) for package:
augeas-libs-1.0.0-5.el6_5.1.i686
--> Processing Dependency: libc.so.6 for package:
augeas-libs-1.0.0-5.el6_5.1.i686
---> Package augeas-libs.x86_64 0:1.0.0-5.el6_5.1 will be updated
--> Running transaction check
---> Package glibc.i686 0:2.12-1.149.el6 will be installed
--> Processing Dependency: libfreebl3.so(NSSRAWHASH_3.12.3) for package:
glibc-2.12-1.149.el6.i686
--> Processing Dependency: libfreebl3.so for package:
glibc-2.12-1.149.el6.i686
---> Package libselinux.i686 0:2.0.94-5.8.el6 will be installed
---> Package libxml2.i686 0:2.7.6-17.el6_6.1 will be installed
--> Processing Dependency: libz.so.1 for package:
libxml2-2.7.6-17.el6_6.1.i686
--> Running transaction check
---> Package nss-softokn-freebl.i686 0:3.14.3-18.el6_6 will be installed
---> Package zlib.i686 0:1.2.3-29.el6 will be installed
--> Finished Dependency Resolution
Error: Multilib version problems found. This often means that the root
cause is something else and multilib version checking is just
pointing out that there is a problem. Eg.:
1. You have an upgrade for augeas-libs which is missing some
dependency that another package requires. Yum is trying to
solve this by installing an older version of augeas-libs of the
different architecture. If you exclude the bad architecture
yum will tell you what the root cause is (which package
requires what). You can try redoing the upgrade with
--exclude augeas-libs.otherarch ... this should give you an
error
message showing the root cause of the problem.
2. You have multiple architectures of augeas-libs installed, but
yum can only see an upgrade for one of those arcitectures.
If you don't want/need both architectures anymore then you
can remove the one with the missing update and everything
will work.
3. You have duplicate versions of augeas-libs installed already.
You can use "yum check" to get yum show these errors.
...you can also use --setopt=protected_multilib=false to remove
this checking, however this is almost never the correct thing to
do as something else is very likely to go wrong (often causing
much more problems).
Protected multilib versions: augeas-libs-1.0.0-5.el6_5.1.i686 !=
augeas-libs-1.0.0-7.el6.x86_64
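Following yum's own suggestion in the error above, the two diagnostics I would try are sketched here (the exact `--exclude` arch is inferred from the i686-vs-x86_64 conflict shown; the commands are printed, not executed, so nothing changes until you run them on the host):

```shell
# yum's suggested diagnostics for the multilib conflict, per its error text.
DIAG1="yum update --exclude=augeas-libs.i686"   # surface the real broken dep
DIAG2="yum check"                               # list duplicate/mismatched packages
printf '%s\n%s\n' "$DIAG1" "$DIAG2"
```

The first should replace the multilib message with an error naming whichever package is dragging in the i686 augeas-libs.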
> On Dec 31, 2014, at 1:28 AM, Yedidyah Bar David <didi@redhat.com> wrote:
>
> ----- Original Message -----
>> From: "Mikola Rose" <mrose@power-soft.com>
>> To: users@ovirt.org
>> Sent: Tuesday, December 30, 2014 2:12:52 AM
>> Subject: [ovirt-users] HostedEngine Deployment Woes
>>
>>
>> Hi List Members;
>>
>> I have been struggling with deploying oVirt hosted engine I keep
>> running into
>> a timeout during the "Misc Configuration" any suggestion on how I can
>> trouble shoot this?
>>
>> Redhat 2.6.32-504.3.3.el6.x86_64
>>
>> Installed Packages
>> ovirt-host-deploy.noarch 1.2.5-1.el6ev @rhel-6-server-rhevm-3.4-rpms
>> ovirt-host-deploy-java.noarch 1.2.5-1.el6ev @rhel-6-server-rhevm-3.4-rpms
>> ovirt-hosted-engine-ha.noarch 1.1.6-3.el6ev @rhel-6-server-rhevm-3.4-rpms
>> ovirt-hosted-engine-setup.noarch 1.1.5-1.el6ev
>> @rhel-6-server-rhevm-3.4-rpms
>> rhevm-setup-plugin-ovirt-engine.noarch 3.4.4-2.2.el6ev
>> @rhel-6-server-rhevm-3.4-rpms
>> rhevm-setup-plugin-ovirt-engine-common.noarch 3.4.4-2.2.el6ev
>> @rhel-6-server-rhevm-3.4-rpms
>
> So this is RHEV (3.4) or ovirt?
>
>>
>>
>> Please confirm installation settings (Yes, No)[No]: Yes
>> [ INFO ] Stage: Transaction setup
>> [ INFO ] Stage: Misc configuration
>> [ INFO ] Stage: Package installation
>> [ INFO ] Stage: Misc configuration
>> [ INFO ] Configuring libvirt
>> [ INFO ] Configuring VDSM
>> [ INFO ] Starting vdsmd
>> [ INFO ] Waiting for VDSM hardware info
>> [ INFO ] Waiting for VDSM hardware info
>> [ INFO ] Connecting Storage Domain
>> [ INFO ] Connecting Storage Pool
>> [ INFO ] Verifying sanlock lockspace initialization
>> [ INFO ] sanlock lockspace already initialized
>> [ INFO ] sanlock metadata already initialized
>> [ INFO ] Creating VM Image
>> [ INFO ] Disconnecting Storage Pool
>> [ INFO ] Start monitoring domain
>> [ ERROR ] Failed to execute stage 'Misc configuration': The read
>> operation
>> timed out
>> [ INFO ] Stage: Clean up
>> [ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>>
>>
>>
>> 2014-12-29 14:53:41 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace
>> lockspace._misc:133 Ensuring lease for lockspace hosted-engine, host
>> id 1 is
>> acquired (file:
>> /rhev/data-center/mnt/192.168.0.75:_Volumes_Raid1/8094d528-7aa2-4c28-839f-73d7c8bcfebb/ha_agent/hosted-engine.lockspace)
>> 2014-12-29 14:53:41 INFO
>> otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace
>> lockspace._misc:144 sanlock lockspace already initialized
>> 2014-12-29 14:53:41 INFO
>> otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace
>> lockspace._misc:157 sanlock metadata already initialized
>> 2014-12-29 14:53:41 DEBUG otopi.context context._executeMethod:138
>> Stage misc
>> METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._misc
>> 2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.vm.image
>> image._misc:162 Creating VM Image
>> 2014-12-29 14:53:41 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.vm.image
>> image._misc:163 createVolume
>> 2014-12-29 14:53:42 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.vm.image
>> image._misc:184 Created volume d8e7eed4-c763-4b3d-8a71-35f2d692a73d,
>> request
>> was:
>> - image: 9043e535-ea94-41f8-98df-6fdbfeb107c3
>> - volume: e6a9291d-ac21-4a95-b43c-0d6e552baaa2
>> 2014-12-29 14:53:42 DEBUG otopi.ovirt_hosted_engine_setup.tasks
>> tasks.wait:48
>> Waiting for existing tasks to complete
>> 2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks
>> tasks.wait:48
>> Waiting for existing tasks to complete
>> 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138
>> Stage misc
>> METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._misc
>> 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:144
>> condition
>> False
>> 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138
>> Stage misc
>> METHOD
>> otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._disconnect_pool
>> 2014-12-29 14:53:43 INFO
>> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
>> storage._disconnect_pool:971 Disconnecting Storage Pool
>> 2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks
>> tasks.wait:48
>> Waiting for existing tasks to complete
>> 2014-12-29 14:53:43 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
>> storage._spmStop:602
>> spmStop
>> 2014-12-29 14:53:43 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
>> storage._spmStop:611
>> 2014-12-29 14:53:43 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
>> storage._storagePoolConnection:573 disconnectStoragePool
>> 2014-12-29 14:53:45 INFO
>> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
>> storage._disconnect_pool:975 Start monitoring domain
>> 2014-12-29 14:53:45 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
>> storage._startMonitoringDomain:529 _startMonitoringDomain
>> 2014-12-29 14:53:46 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
>> storage._startMonitoringDomain:534 {'status': {'message': 'OK', 'code': 0}}
>> 2014-12-29 14:53:51 DEBUG otopi.ovirt_hosted_engine_setup.tasks
>> tasks.wait:127 Waiting for domain monitor
>> 2014-12-29 14:54:51 DEBUG otopi.context context._executeMethod:152 method
>> exception
>> Traceback (most recent call last):
>> File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in
>> _executeMethod
>> method['method']()
>> File
>> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py",
>> line 976, in _disconnect_pool
>> self._startMonitoringDomain()
>> File
>> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py",
>> line 539, in _startMonitoringDomain
>> waiter.wait(self.environment[ohostedcons.StorageEnv.SD_UUID])
>> File
>> "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_setup/tasks.py",
>> line 128, in wait
>> response = serv.s.getVdsStats()
>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
>> return self.__send(self.__name, args)
>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
>> verbose=self.__verbose
>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1237, in request
>> errcode, errmsg, headers = h.getreply()
>> File "/usr/lib64/python2.6/httplib.py", line 1064, in getreply
>> response = self._conn.getresponse()
>> File "/usr/lib64/python2.6/httplib.py", line 990, in getresponse
>> response.begin()
>> File "/usr/lib64/python2.6/httplib.py", line 391, in begin
>> version, status, reason = self._read_status()
>> File "/usr/lib64/python2.6/httplib.py", line 349, in _read_status
>> line = self.fp.readline()
>> File "/usr/lib64/python2.6/socket.py", line 433, in readline
>> data = recv(1)
>> File "/usr/lib64/python2.6/ssl.py", line 215, in recv
>> return self.read(buflen)
>> File "/usr/lib64/python2.6/ssl.py", line 136, in read
>> return self._sslobj.read(len)
>> SSLError: The read operation timed out
>
> We had several similar cases in the past [1] [2] but never managed
> to reproduce on clean systems. Might it be some "dirt"? E.g. repeated
> attempts using same storage, or storage used for something else,
> some other host accessing it, host left unclean after previous
> attempts to deploy (failed or successful), something like that?
>
> If you reproduce on completely clean systems, both host(s) and storage,
> please post more relevant logs, including:
>
> From host(s):
> /var/log/vdsm/*
> /var/log/ovirt-hosted-engine-setup/*
> /var/log/ovirt-hosted-engine-ha/*
> output of 'sanlock status'
> output of 'rpm -qa'
>
> From engine vm:
> /var/log/ovirt-engine/*
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1152564
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1099874
>
> Thanks,
> --
> Didi
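The logs Didi asks for above could be bundled in one shot with something like this sketch (the output paths under /tmp are my choice, not anything the tools require):

```shell
# Bundle the host-side logs and command output Didi's list asks for.
OUT=/tmp/hosted-engine-logs.tar.gz
sanlock status > /tmp/sanlock-status.txt 2>&1 || true
rpm -qa > /tmp/rpm-qa.txt 2>&1 || true
tar czf "$OUT" \
    /var/log/vdsm \
    /var/log/ovirt-hosted-engine-setup \
    /var/log/ovirt-hosted-engine-ha \
    /tmp/sanlock-status.txt /tmp/rpm-qa.txt 2>/dev/null || true
echo "bundle at $OUT"
```

The engine-vm side (/var/log/ovirt-engine/*) has to be collected separately on the VM, if it got far enough to exist.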
Mik Rose | Manager, IT - Support Services | PowerSoft Development Corp
1 (250) 642-0295 x23
http://www.power-soft.com
Live Support: https://secure.logmeinrescue.com/Customer/Download.aspx?EntryID=15095831
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users