[Users] Problem with Storage
by Juan Jose
Hello everybody,
I had my system working: oVirt 3.1, with my engine and one host, both on
Fedora 17. Suddenly the electrical power went off, and I don't have any UPS on my
experimental machines. I have now restarted the system and my engine
and host are up, but when I try to activate the master storage domain I receive this error:
Thread-8683::DEBUG::2013-03-20
16:17:14,530::resourceManager::565::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.d6e7e8b8-49c7-11e2-a261-000a5e429f63',
Clearing records.
Thread-8683::ERROR::2013-03-20
16:17:14,531::task::853::TaskManager.Task::(_setError)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 861, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 817, in connectStoragePool
return self._connectStoragePool(spUUID, hostID, scsiKey, msdUUID,
masterVersion, options)
File "/usr/share/vdsm/storage/hsm.py", line 859, in _connectStoragePool
res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 641, in connect
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 1109, in __rebuild
self.masterDomain = self.getMasterDomain(msdUUID=msdUUID,
masterVersion=masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 1448, in getMasterDomain
raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain:
'spUUID=d6e7e8b8-49c7-11e2-a261-000a5e429f63,
msdUUID=028167ee-5168-4742-96c3-020856f500ec'
Thread-8683::DEBUG::2013-03-20
16:17:14,531::task::872::TaskManager.Task::(_run)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::Task._run:
48341c5a-1bd1-42e1-bc60-1a307d4f3704
('d6e7e8b8-49c7-11e2-a261-000a5e429f63', 1,
'd6e7e8b8-49c7-11e2-a261-000a5e429f63',
'028167ee-5168-4742-96c3-020856f500ec', 2) {} failed - stopping task
Thread-8683::DEBUG::2013-03-20
16:17:14,531::task::1199::TaskManager.Task::(stop)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::stopping in state preparing
(force False)
Thread-8683::DEBUG::2013-03-20
16:17:14,532::task::978::TaskManager.Task::(_decref)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::ref 1 aborting True
Thread-8683::INFO::2013-03-20
16:17:14,532::task::1157::TaskManager.Task::(prepare)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::aborting: Task is aborted:
'Cannot find master domain' - code 304
Thread-8683::DEBUG::2013-03-20
16:17:14,532::task::1162::TaskManager.Task::(prepare)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::Prepare: aborted: Cannot find
master domain
Thread-8683::DEBUG::2013-03-20
16:17:14,532::task::978::TaskManager.Task::(_decref)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::ref 0 aborting True
Thread-8683::DEBUG::2013-03-20
16:17:14,532::task::913::TaskManager.Task::(_doAbort)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::Task._doAbort: force False
Thread-8683::DEBUG::2013-03-20
16:17:14,532::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-8683::DEBUG::2013-03-20
16:17:14,533::task::588::TaskManager.Task::(_updateState)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::moving from state preparing ->
state aborting
Thread-8683::DEBUG::2013-03-20
16:17:14,533::task::537::TaskManager.Task::(__state_aborting)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::_aborting: recover policy none
Thread-8683::DEBUG::2013-03-20
16:17:14,533::task::588::TaskManager.Task::(_updateState)
Task=`48341c5a-1bd1-42e1-bc60-1a307d4f3704`::moving from state aborting ->
state failed
Thread-8683::DEBUG::2013-03-20
16:17:14,533::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-8683::DEBUG::2013-03-20
16:17:14,533::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-8683::ERROR::2013-03-20
16:17:14,533::dispatcher::66::Storage.Dispatcher.Protect::(run) {'status':
{'message': "Cannot find master domain:
'spUUID=d6e7e8b8-49c7-11e2-a261-000a5e429f63,
msdUUID=028167ee-5168-4742-96c3-020856f500ec'", 'code': 304}}
I attach the vdsm.log file. It seems as if the system is looking for a UUID which
doesn't exist. How can I fix this issue?
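The error means VDSM cannot find a storage domain whose metadata marks it as the pool's master. As a first diagnostic step, the per-domain metadata on disk can be inspected directly. Below is a minimal sketch, assuming the typical VDSM file-domain layout (`<mount>/<domain-UUID>/dom_md/metadata` under `/rhev/data-center/mnt`) and the `ROLE=Master` metadata key; verify both on your own host before relying on it:

```python
import glob
import os

def find_master_domains(root="/rhev/data-center/mnt"):
    """Return UUIDs of storage domains whose on-disk metadata
    declares ROLE=Master (assumed VDSM file-domain layout)."""
    masters = []
    pattern = os.path.join(root, "*", "*", "dom_md", "metadata")
    for meta in glob.glob(pattern):
        entries = {}
        with open(meta) as f:
            for line in f:
                if "=" in line:
                    key, _, value = line.strip().partition("=")
                    entries[key] = value
        if entries.get("ROLE") == "Master":
            # the directory two levels above dom_md/metadata is the domain UUID
            masters.append(
                os.path.basename(os.path.dirname(os.path.dirname(meta))))
    return masters
```

Comparing the UUIDs this finds against the `msdUUID` in the error would show whether the master domain's metadata survived the power loss.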
Many thanks in advance,
Juanjo.
11 years, 9 months
[Users] l,
by Eduardo Ramos
Hi all.
I'd like to know if there is a way to resize a VM disk
based on an iSCSI domain.
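For background: on an iSCSI (block) storage domain a virtual disk is backed by an LVM logical volume, so growing it means growing that LV in whole extents. A sketch of the size arithmetic only, assuming the common 128 MiB extent size of VDSM volume groups (check yours with `vgs -o vg_extent_size`):

```python
def round_up_to_extent(size_bytes, extent_bytes=128 * 1024 * 1024):
    """Round a requested disk size up to a whole number of LVM extents;
    block domains allocate logical volumes in extent multiples.
    The 128 MiB default is an assumption, not a fixed oVirt constant."""
    extents = -(-size_bytes // extent_bytes)  # ceiling division
    return extents * extent_bytes
```

So a request for, say, 10 GiB + 1 byte would be served by an LV one extent larger than 10 GiB.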
[Users] vdsClient does not work, it waits all the time
by bigclouds
Hi, vdsClient waits all the time.
The client can send a command, and I can see the server side return a result,
but the client never returns.
[root@localhost mcvda]# python mcvdacli.py
connecting to 192.168.88.101:54321 ssl True ts /etc/pki/mcvda
<ServerProxy for 192.168.88.101:54321/RPC2>
^CTraceback (most recent call last):
File "mcvdacli.py", line 124, in <module>
print server.ping()
File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
return self.__send(self.__name, args)
File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
verbose=self.__verbose
File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request
return self._parse_response(h.getfile(), sock)
File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response
response = file.read(1024)
File "/usr/lib64/python2.6/socket.py", line 383, in read
data = self._sock.recv(left)
File "/usr/lib64/python2.6/ssl.py", line 215, in recv
return self.read(buflen)
File "/usr/lib64/python2.6/ssl.py", line 136, in read
return self._sslobj.read(len)
KeyboardInterrupt
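The traceback shows the client blocked in `ssl.read()` with no timeout, so it hangs until interrupted. One way to make such a call fail fast instead of blocking forever is to give new sockets a default timeout before building the proxy. A sketch using the Python 3 `xmlrpc.client` module (the log uses Python 2.6 `xmlrpclib`, where the same `socket.setdefaulttimeout` call applies); the host/port are taken from the log, and TLS certificate handling is omitted:

```python
import socket
import xmlrpc.client

# A global default timeout (in seconds) applies to every new socket,
# including the one xmlrpc.client opens, so a silent server makes the
# call raise a timeout error instead of blocking in read() forever.
socket.setdefaulttimeout(10)

def make_client(host="192.168.88.101", port=54321):
    """Build an XML-RPC proxy for the endpoint shown in the log.
    Certificate verification details are omitted in this sketch."""
    return xmlrpc.client.ServerProxy("https://%s:%d/RPC2" % (host, port))
```

With that in place, `make_client().ping()` would raise after ten seconds rather than requiring Ctrl-C; the underlying cause (the server not completing the SSL response) still needs to be diagnosed separately.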
[Users] oVirt Weekly Meeting Minutes -- 2013-03-13
by Mike Burns
Minutes:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-03-13-14.00.html
Minutes (text):
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-03-13-14.00.txt
Log:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-03-13-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by mburns at 14:00:21 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-03-13-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 14:00:26)
* Agenda (mburns, 14:00:38)
* Release Process and updates (mburns, 14:00:47)
* Workshops and Conferences (mburns, 14:00:57)
* Sub Project Reports (mburns, 14:01:04)
* Other Topics (mburns, 14:01:10)
* 3.3 Release Planning and Release Process (mburns, 14:03:27)
* discussions about the release process and revamping it are ongoing
on board@ and arch@ (mburns, 14:03:57)
* decisions hopefully will be made in the next week or so (mburns,
14:05:03)
* not sure how much more we can do w.r.t. the release planning without
the process defined (mburns, 14:05:33)
* if you have input on the process or schedule or anything else,
please either speak up now or reply to the thread on the mailing
list (mburns, 14:06:52)
* 3.2.1 Update release and EL6 builds (mburns, 14:07:27)
* 3.2.1 build is ready (mburns, 14:09:33)
* .el6 build is in the works (mburns, 14:09:41)
* plan to be posted later today after some sanity testing (mburns,
14:09:52)
* ACTION: oschreib to upload builds to ovirt.org (mburns, 14:11:33)
* ACTION: mburns to make sure they're in the right places and send
announcement (mburns, 14:11:57)
* ACTION: mburns to upload new cli and sdk packages that were sent
privately... (mburns, 14:14:19)
* Conferences and Workshops (mburns, 14:29:34)
* LINK: http://www.ovirt.org/Intel_Workshop_May_2013 (theron,
14:30:55)
* Rydekull gave a presentation on oVirt (mburns, 14:31:01)
* went well and drew new users into the project (mburns, 14:31:11)
* CFP for Shanghai announced, still looking for submissions (mburns,
14:31:27)
* registration is open (mburns, 14:31:58)
* location is set and hotels are available (mburns, 14:32:12)
* Dates: 8-9 May 2013 (mburns, 14:32:25)
* shuttle buses available between the hotel and the event in the morning
and evening (mburns, 14:33:23)
* work underway on marketing, intel working on it as well (mburns,
14:33:40)
* if you're planning on attending, you should start working on visa
issues NOW (mburns, 14:34:02)
* contact theron with questions (mburns, 14:34:14)
* if you need an invitation for your visa, please contact theron
(mburns, 14:36:02)
* watch http://www.ovirt.org/Intel_Workshop_May_2013 for more info
(mburns, 14:38:07)
* preliminary work started on open virt/open cloud event in SFO for
later this summer (mburns, 14:40:01)
* marketing working group will be forming (mburns, 14:40:14)
* Sub Project Report -- Infra (mburns, 14:42:20)
* nothing new from infra (mburns, 14:48:51)
* still waiting on el6 packages (mburns, 14:49:14)
* Other Topics (mburns, 14:49:47)
* just a reminder, please comment on changes to the release process on
the mailing list (mburns, 14:52:23)
* some updates to the release management page (mburns, 14:54:06)
* LINK: http://wiki.ovirt.org/OVirt_3.3_release-management (mburns,
14:54:11)
* new feature pages have started to appear (mburns, 14:54:24)
Meeting ended at 14:57:46 UTC.
Action Items
------------
* oschreib to upload builds to ovirt.org
* mburns to make sure they're in the right places and send announcement
* mburns to upload new cli and sdk packages that were sent privately...
Action Items, by person
-----------------------
* mburns
* mburns to make sure they're in the right places and send
announcement
* mburns to upload new cli and sdk packages that were sent
privately...
* oschreib
* oschreib to upload builds to ovirt.org
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (94)
* theron (22)
* oschreib (18)
* Rydekull (6)
* apuimedo (6)
* ovirtbot (5)
* eedri (4)
* fsimonce (4)
* doron_ (3)
* masayag (3)
* vincent_vdk (2)
* jb_netapp (1)
* msalem (1)
* dneary (0)
* quaid (0)
* ewoud (0)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[Users] Mint13 32bit VM crashing after Latest Centos6.3 64bit patching
by John Baldwin
A significant number of patches were installed recently on my lab CentOS
6.3 64-bit host, many of them "cr" patches which I believe contain what
will be part of the soon-to-be-released CentOS 6.4. Booting from virtual CD
images, the Mint 13 32-bit MATE install crashes with a qemu dump. The Mint 14
32-bit MATE image boots up with no issues. Running strings on the core dump
shows the following:
/lib64/ld-linux-x86-64.so.2
{)V]U| .gdbinit
Recursive internal problem.
Is this a known issue? I can provide more debug info if this would be
worth checking out.
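For reference, the `strings` pass over the core dump amounts to collecting runs of printable ASCII bytes. A minimal Python equivalent (a sketch, not a replacement for the real binutils tool, which also handles options like a configurable minimum length):

```python
def extract_strings(data, min_len=4):
    """Return printable-ASCII runs of at least min_len bytes from a
    binary blob, roughly what `strings` prints for a core dump."""
    runs, current = [], bytearray()
    for byte in data:
        if 32 <= byte <= 126:  # printable ASCII, including space
            current.append(byte)
        else:
            if len(current) >= min_len:
                runs.append(current.decode("ascii"))
            current = bytearray()
    if len(current) >= min_len:
        runs.append(current.decode("ascii"))
    return runs
```

Applied to the dump above it would surface exactly the kind of fragments quoted, e.g. `.gdbinit` and `Recursive internal problem.`.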
John Baldwin Sr. Unix Systems Administrator, Clearwater, FL
[Users] oVirt 3.2.1 and reports update error
by Gianluca Cecchi
Hello,
an all-in-one setup with oVirt 3.2 on Fedora 18.
# yum update ovirt-engine-setup
and then
# engine-upgrade
with this output
...
During the upgrade process, oVirt Engine will not be accessible.
All existing running virtual machines will continue but you will not be able to
start or stop any new virtual machines during the process.
Would you like to proceed? (yes|no): yes
Stopping ovirt-engine service... [ DONE ]
Stopping DB related services... [ DONE ]
Pre-upgrade validations... [ DONE ]
Backing Up Database... [ DONE ]
Rename Database... [ DONE ]
Updating rpms... [ DONE ]
Updating Database... [ DONE ]
Restore Database name... [ DONE ]
Preparing CA... [ DONE ]
Running post install configuration... [ DONE ]
Starting ovirt-engine service... [ DONE ]
oVirt Engine upgrade completed successfully!
* Error: Can't start the ovirt-engine-dwhd service
* Upgrade log available at
/var/log/ovirt-engine/ovirt-engine-upgrade_2013_03_15_23_16_02.log
* Perform the following steps to upgrade the history service or the
reporting package:
1. Execute: yum update ovirt-engine-reports*
2. Execute: ovirt-engine-dwh-setup
3. Execute: ovirt-engine-reports-setup
* DB Backup available at
/var/lib/ovirt-engine/backups/ovirt-engine_db_backup_2013_03_15_23_16_03.sql
[root@tekkaman ~]# yum update ovirt-engine-reports*
Loaded plugins: fastestmirror, langpacks, presto, refresh-packagekit,
versionlock
Loading mirror speeds from cached hostfile
* fedora: fedora.mirror.garr.it
* livna: ftp-stud.fht-esslingen.de
* rpmfusion-free: ftp.nluug.nl
* rpmfusion-free-updates: ftp.nluug.nl
* rpmfusion-nonfree: ftp.nluug.nl
* rpmfusion-nonfree-updates: ftp.nluug.nl
* updates: ftp.nluug.nl
No Packages marked for Update
Should I be concerned about the error regarding reports?
After a reboot the webadmin is OK, but the reports pages appear totally scrambled:
- login page
https://docs.google.com/file/d/0BwoPbcrMv8mvQ3piS2xCRnRZdzg/edit?usp=sharing
- after login
https://docs.google.com/file/d/0BwoPbcrMv8mvNm1VVnk4cDhDR0U/edit?usp=sharing
The screenshots above are from Fedora 18 with Firefox 19.0.2.
Gianluca
[Users] ldap simple
by Andrej Bagon
Hi,
is it possible to change the bind request that is sent to the LDAP
server? The default uid=user,cn=Users,cn=Accounts,cn=our,cn=domain is
not suitable.
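For context: a simple-bind DN is just a string assembled from the login name and a fixed suffix, so "changing the bind request" mostly means changing that template. A sketch of the substitution being asked about (the template strings are illustrative, not oVirt configuration keys):

```python
def build_bind_dn(username,
                  template="uid={user},cn=Users,cn=Accounts,cn=our,cn=domain"):
    """Fill a bind-DN template with the login name. Swapping the
    template (e.g. to 'cn={user},ou=people,dc=example,dc=com') is what
    changing the simple-bind request amounts to."""
    return template.format(user=username)
```

Whether oVirt's engine exposes a knob for this template is exactly the open question in this thread.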
Thank you.
[Users] Features requests for the setup/configuration utilities - feedback requested
by Alex Lourie
Hi All
As we are working on the configuration utilities (engine-setup,
engine-upgrade and engine-cleanup), we would like to get as much
community involvement as possible. As such, we'd like to hear the
wishes of the community regarding those tools.
I've created a wiki page [1] where we will keep the list of feature
requests. We would appreciate you adding features to the list by
replying to this thread directly.
Please do not add bugs to that list; bugs should be resolved in due
course according to their priorities and should not affect the features
that we would like to implement.
Thank you.
[1] http://www.ovirt.org/Features/Engine-Config-Utilities
--
Alex Lourie
Software Engineer in RHEVM Integration
Red Hat
[Users] Problem install
by Marcelo Barbosa
Hi guys,
I tried to install oVirt 3.2 all-in-one on my Fedora 18, but the
installation does not finish successfully. Here is information about my environment:
[root@firelap ovirt]# cat /etc/redhat-release
Fedora release 18 (Spherical Cow)
[root@firelap ovirt]# uname -a
Linux firelap 3.7.9-205.fc18.x86_64 #1 SMP Sun Feb 24 20:10:02 UTC 2013
x86_64 x86_64 x86_64 GNU/Linux
[root@firelap ovirt]# ifconfig
bond0: flags=5123<UP,BROADCAST,MASTER,MULTICAST> mtu 1500
ether 00:00:00:00:00:00 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
bond1: flags=5123<UP,BROADCAST,MASTER,MULTICAST> mtu 1500
ether 00:00:00:00:00:00 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
bond2: flags=5123<UP,BROADCAST,MASTER,MULTICAST> mtu 1500
ether 00:00:00:00:00:00 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
bond3: flags=5123<UP,BROADCAST,MASTER,MULTICAST> mtu 1500
ether 00:00:00:00:00:00 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
bond4: flags=5123<UP,BROADCAST,MASTER,MULTICAST> mtu 1500
ether 00:00:00:00:00:00 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 136972 bytes 54834621 (52.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 136972 bytes 54834621 (52.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p5p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.255.43 netmask 255.255.255.192 broadcast 172.16.255.63
inet6 fe80::7a45:c4ff:feb0:8995 prefixlen 64 scopeid 0x20<link>
ether 78:45:c4:b0:89:95 txqueuelen 1000 (Ethernet)
RX packets 480681 bytes 614801393 (586.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 323373 bytes 35479386 (33.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 17
[root@firelap ovirt]# rpm -qa | grep ovirt
ovirt-engine-cli-3.2.0.10-1.fc18.noarch
ovirt-engine-backend-3.2.0-4.fc18.noarch
ovirt-engine-config-3.2.0-4.fc18.noarch
ovirt-host-deploy-1.0.0-1.fc18.noarch
ovirt-engine-setup-plugin-allinone-3.2.0-4.fc18.noarch
ovirt-engine-restapi-3.2.0-4.fc18.noarch
ovirt-engine-dbscripts-3.2.0-4.fc18.noarch
ovirt-image-uploader-3.2.0-1.fc18.noarch
ovirt-host-deploy-offline-1.0.0-1.fc18.noarch
ovirt-engine-setup-3.2.0-4.fc18.noarch
ovirt-release-fedora-5-3.noarch
ovirt-engine-tools-common-3.2.0-4.fc18.noarch
ovirt-engine-3.2.0-4.fc18.noarch
ovirt-log-collector-3.2.0-1.fc18.noarch
ovirt-host-deploy-java-1.0.0-1.fc18.noarch
ovirt-engine-userportal-3.2.0-4.fc18.noarch
ovirt-engine-notification-service-3.2.0-4.fc18.noarch
ovirt-iso-uploader-3.2.0-1.fc18.noarch
ovirt-engine-webadmin-portal-3.2.0-4.fc18.noarch
ovirt-engine-sdk-3.2.0.9-1.fc18.noarch
ovirt-engine-genericapi-3.2.0-4.fc18.noarch
[root@firelap ~]# host firelap.usc.unirede.net
firelap.usc.unirede.net has address 172.16.255.43
[root@firelap ~]# ping firelap.usc.unirede.net
PING firelap.usc.unirede.net (172.16.255.43) 56(84) bytes of data.
64 bytes from firelap (172.16.255.43): icmp_seq=1 ttl=64 time=0.055 ms
[root@firelap ~]# engine-setup
Welcome to oVirt Engine setup utility
oVirt Engine uses httpd to proxy requests to the application server.
It looks like the httpd installed locally is being actively used.
The installer can override current configuration .
Alternatively you can use JBoss directly (on ports higher than 1024)
Do you wish to override current httpd configuration and restart the
service? ['yes'| 'no'] [yes] :
HTTP Port [80] :
HTTPS Port [443] :
Host fully qualified domain name. Note: this name should be fully
resolvable [firelap.no-ip.org] : localhost.localdomain
The IP (127.0.0.1) which was resolved from the FQDN localhost.localdomain
is not configured on any interface on this host
User input failed validation, do you still wish to use it? (yes|no): no
Host fully qualified domain name. Note: this name should be fully
resolvable [firelap.no-ip.org] :
firelap.no-ip.org did not resolve into an IP address
User input failed validation, do you still wish to use it? (yes|no): no
Host fully qualified domain name. Note: this name should be fully
resolvable [firelap.no-ip.org] : firelap.usc.unirede.net
Enter a password for an internal oVirt Engine administrator user
(admin@internal) :
Warning: Weak Password.
Confirm password :
Organization Name for the Certificate [no-ip.org] : firelap.usc.unirede.net
The engine can be configured to present the UI in three different
application modes. virt [Manage virtualization only], gluster [Manage
gluster storage only], and both [Manage virtualization as well as gluster
storage] ['virt'| 'gluster'| 'both'] [both] :
The default storage type you will be using ['NFS'| 'FC'| 'ISCSI'|
'POSIXFS'] [NFS] :
Enter DB type for installation ['remote'| 'local'] [local] :
Enter a password for a local oVirt Engine DB admin user (engine) :
Warning: Weak Password.
Confirm password :
Local ISO domain path [/var/lib/exports/iso] : /firebackup/ovirt/iso
Error: directory /firebackup/ovirt/iso is not empty
Local ISO domain path [/var/lib/exports/iso] : /firebackup/ovirt/iso
Firewall ports need to be opened.
The installer can configure firewall automatically overriding the current
configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example file.
Which firewall do you wish to configure? ['None'| 'Firewalld'| 'IPTables']:
IPTables
Configure VDSM on this host? ['yes'| 'no'] [yes] :
Local storage domain path [/var/lib/images] : /firebackup/ovirt/vms
Confirm root password :
oVirt Engine will be installed using the following configuration:
=================================================================
override-httpd-config: yes
http-port: 80
https-port: 443
host-fqdn: firelap.usc.unirede.net
auth-pass: ********
org-name: firelap.usc.unirede.net
application-mode: both
default-dc-type: NFS
db-remote-install: local
db-local-pass: ********
nfs-mp: /firebackup/ovirt/iso
override-firewall: IPTables
config-allinone: yes
storage-path: /firebackup/ovirt/vms
superuser-pass: ********
Proceed with the configuration listed above? (yes|no): yes
Installing:
AIO: Validating CPU Compatibility... [ DONE ]
AIO: Adding firewall rules... [ DONE ]
Configuring oVirt Engine... [ DONE ]
Configuring JVM... [ DONE ]
Creating CA... [ DONE ]
Updating ovirt-engine service... [ DONE ]
Setting Database Configuration... [ DONE ]
Setting Database Security... [ DONE ]
Creating Database... [ DONE ]
Updating the Default Data Center Storage Type... [ DONE ]
Editing oVirt Engine Configuration... [ DONE ]
Editing Postgresql Configuration... [ DONE ]
Configuring the Default ISO Domain... [ DONE ]
Configuring Firewall... [ DONE ]
Starting ovirt-engine Service... [ DONE ]
Configuring HTTPD... [ DONE ]
AIO: Creating storage directory... [ DONE ]
AIO: Adding Local Datacenter and cluster... [ DONE ]
AIO: Adding Local host (This may take several minutes)... [ ERROR ]
Error: Host was found in a 'Failed' state. Please check engine and
bootstrap installation logs.
Please check log file
/var/log/ovirt-engine/engine-setup_2013_03_06_14_30_51.log for more
information
2013-03-06 14:35:34::DEBUG::common_utils::474::root:: retcode = 0
2013-03-06 14:35:34::DEBUG::common_utils::1238::root:: stopping httpd
2013-03-06 14:35:34::DEBUG::common_utils::1275::root:: executing action
httpd on service stop
2013-03-06 14:35:34::DEBUG::common_utils::434::root:: Executing command -->
'/sbin/service httpd stop'
2013-03-06 14:35:34::DEBUG::common_utils::472::root:: output =
2013-03-06 14:35:34::DEBUG::common_utils::473::root:: stderr = Redirecting
to /bin/systemctl stop httpd.service
2013-03-06 14:35:34::DEBUG::common_utils::474::root:: retcode = 0
2013-03-06 14:35:34::DEBUG::common_utils::1228::root:: starting httpd
2013-03-06 14:35:34::DEBUG::common_utils::1275::root:: executing action
httpd on service start
2013-03-06 14:35:34::DEBUG::common_utils::434::root:: Executing command -->
'/sbin/service httpd start'
2013-03-06 14:35:36::DEBUG::common_utils::472::root:: output =
2013-03-06 14:35:36::DEBUG::common_utils::473::root:: stderr = Redirecting
to /bin/systemctl start httpd.service
2013-03-06 14:35:36::DEBUG::common_utils::474::root:: retcode = 0
2013-03-06 14:35:36::DEBUG::setup_sequences::59::root:: running
makeStorageDir
2013-03-06 14:35:36::DEBUG::all_in_one_100::368::root:: Creating/Verifying
local domain path
2013-03-06 14:35:36::DEBUG::all_in_one_100::374::root:: Setting selinux
context
2013-03-06 14:35:36::DEBUG::nfsutils::36::root:: setting selinux context
for /firebackup/ovirt/vms
2013-03-06 14:35:36::DEBUG::common_utils::434::root:: Executing command -->
'/usr/sbin/semanage fcontext -a -t public_content_rw_t
/firebackup/ovirt/vms(/.*)?'
2013-03-06 14:35:39::DEBUG::common_utils::472::root:: output =
2013-03-06 14:35:39::DEBUG::common_utils::473::root:: stderr =
2013-03-06 14:35:39::DEBUG::common_utils::474::root:: retcode = 0
2013-03-06 14:35:39::DEBUG::common_utils::434::root:: Executing command -->
'/sbin/restorecon -r /firebackup/ovirt/vms'
2013-03-06 14:35:39::DEBUG::common_utils::472::root:: output =
2013-03-06 14:35:39::DEBUG::common_utils::473::root:: stderr =
2013-03-06 14:35:39::DEBUG::common_utils::474::root:: retcode = 0
2013-03-06 14:35:39::DEBUG::setup_sequences::59::root:: running
waitForJbossUp
2013-03-06 14:35:39::DEBUG::all_in_one_100::445::root:: Checking JBoss
status.
2013-03-06 14:35:39::INFO::all_in_one_100::448::root:: JBoss is up and
running.
2013-03-06 14:35:39::DEBUG::setup_sequences::59::root:: running initAPI
2013-03-06 14:35:39::DEBUG::all_in_one_100::240::root:: Initiating the API
object
2013-03-06 14:35:41::DEBUG::setup_sequences::59::root:: running createDC
2013-03-06 14:35:41::DEBUG::all_in_one_100::256::root:: Creating the local
datacenter
2013-03-06 14:35:42::DEBUG::setup_sequences::59::root:: running
createCluster
2013-03-06 14:35:42::DEBUG::all_in_one_100::267::root:: Creating the local
cluster
2013-03-06 14:35:43::DEBUG::setup_sequences::59::root:: running createHost
2013-03-06 14:35:43::DEBUG::all_in_one_100::280::root:: Adding the local
host
2013-03-06 14:35:44::DEBUG::setup_sequences::59::root:: running
waitForHostUp
2013-03-06 14:35:44::DEBUG::all_in_one_100::297::root:: Waiting for host to
become operational
2013-03-06 14:35:45::DEBUG::all_in_one_100::300::root:: current host status
is: installing
2013-03-06 14:35:45::DEBUG::all_in_one_100::311::root:: Traceback (most
recent call last):
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line
308, in isHostUp
raise Exception(INFO_CREATE_HOST_WAITING_UP)
Exception: Waiting for the host to start
2013-03-06 14:35:50::DEBUG::all_in_one_100::297::root:: Waiting for host to
become operational
2013-03-06 14:35:50::DEBUG::all_in_one_100::300::root:: current host status
is: installing
2013-03-06 14:35:50::DEBUG::all_in_one_100::311::root:: Traceback (most
recent call last):
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line
308, in isHostUp
raise Exception(INFO_CREATE_HOST_WAITING_UP)
Exception: Waiting for the host to start
2013-03-06 14:35:55::DEBUG::all_in_one_100::297::root:: Waiting for host to
become operational
2013-03-06 14:35:56::DEBUG::all_in_one_100::300::root:: current host status
is: install_failed
2013-03-06 14:35:56::DEBUG::all_in_one_100::311::root:: Traceback (most
recent call last):
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line
306, in isHostUp
raise utils.RetryFailException(ERROR_CREATE_HOST_FAILED)
RetryFailException: Error: Host was found in a 'Failed' state. Please check
engine and bootstrap installation logs.
2013-03-06 14:35:56::DEBUG::setup_sequences::62::root:: Traceback (most
recent call last):
File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
function()
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line
294, in waitForHostUp
utils.retry(isHostUp, tries=120, timeout=600, sleep=5)
File "/usr/share/ovirt-engine/scripts/common_utils.py", line 1009, in
retry
raise e
RetryFailException: Error: Host was found in a 'Failed' state. Please check
engine and bootstrap installation logs.
2013-03-06 14:35:56::DEBUG::engine-setup::1948::root:: *** The following
params were used as user input:
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root::
override-httpd-config: yes
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: http-port: 80
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: https-port: 443
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: random-passwords: no
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: mac-range:
00:1A:4A:10:FF:00-00:1A:4A:10:FF:FF
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: host-fqdn:
firelap.usc.unirede.net
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: auth-pass: ********
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: org-name:
firelap.usc.unirede.net
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: application-mode:
both
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: default-dc-type: NFS
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: db-remote-install:
local
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: db-host: localhost
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: db-local-pass:
********
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: nfs-mp:
/firebackup/********/iso
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: iso-domain-name:
ISO_DOMAIN
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: config-nfs: yes
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: override-firewall:
IPTables
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: config-allinone: yes
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: storage-path:
/firebackup/********/vms
2013-03-06 14:35:56::DEBUG::engine-setup::1953::root:: superuser-pass:
********
2013-03-06 14:35:56::ERROR::engine-setup::2369::root:: Traceback (most
recent call last):
File "/bin/engine-setup", line 2363, in <module>
main(confFile)
File "/bin/engine-setup", line 2146, in main
runSequences()
File "/bin/engine-setup", line 2068, in runSequences
controller.runAllSequences()
File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in
runAllSequences
sequence.run()
File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in
run
step.run()
File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
function()
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line
294, in waitForHostUp
utils.retry(isHostUp, tries=120, timeout=600, sleep=5)
File "/usr/share/ovirt-engine/scripts/common_utils.py", line 1009, in
retry
raise e
RetryFailException: Error: Host was found in a 'Failed' state. Please check
engine and bootstrap installation logs.
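The failing loop visible in the traceback, `utils.retry(isHostUp, tries=120, timeout=600, sleep=5)`, is a generic retry helper: it polls the host status until it comes up or the budget is spent. A simplified sketch of how such a helper behaves (not the actual ovirt-engine code):

```python
import time

def retry(func, tries=120, timeout=600, sleep=5):
    """Call func until it returns without raising, the try budget is
    spent, or the overall timeout would be exceeded; re-raise the
    last exception on failure."""
    deadline = time.time() + timeout
    last_exc = None
    for _ in range(tries):
        try:
            return func()
        except Exception as exc:
            last_exc = exc
            if time.time() + sleep > deadline:
                break
            time.sleep(sleep)
    raise last_exc
```

So the setup polled the host roughly every 5 seconds; once VDSM reported `install_failed`, the inner check raised a non-retryable error and the whole installation aborted, which is why the host-deploy/bootstrap logs on the host are the place to look next.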
Thanks for your attention.
Marcelo Barbosa
*mr.marcelo.barbosa(a)gmail.com*