[ANN] oVirt 3.4.3 Release is now available
by Sandro Bonazzola
The oVirt development team is pleased to announce the general
availability of oVirt 3.4.3 as of Jul 18th 2014. This release
solidifies oVirt as a leading KVM management application and open
source alternative to VMware vSphere.
oVirt is available now for Fedora 19 and Red Hat Enterprise Linux 6.5
(or similar).
This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.
The existing ovirt-3.4 repository has been updated to deliver this release, so there is
no need to enable any other repository. However, since we introduced package signing,
an additional step is needed to install the public keys on your system if you're
upgrading from an older release.
Please refer to release notes [1] for Installation / Upgrade instructions.
Please note that mirrors will need a couple of days to synchronize.
If you want to be sure to use the latest RPMs and don't want to wait for the mirrors,
you can edit /etc/yum.repos.d/ovirt-3.4.repo, commenting out the mirror line and
uncommenting the baseurl line.
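For illustration, the edit amounts to something like this in /etc/yum.repos.d/ovirt-3.4.repo (a sketch only; keep the names and URLs that are already in your file):

  [ovirt-3.4]
  name=...
  #mirrorlist=...   <-- add a leading '#' to skip the mirrors
  baseurl=...       <-- remove the leading '#' to fetch directly from resources.ovirt.org
  enabled=1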
A new oVirt Live ISO will be available too [2].
[1] http://www.ovirt.org/OVirt_3.4.3_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.4/iso/ovirt-live-el6-3.4.3.iso
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
oVirt 3.4.3 GA postponed due to blocker
by Sandro Bonazzola
Hi,
A recent python upgrade in Fedora 19 broke the vdsmd service.
While we wait for an updated python-cpopen package to be built, we're postponing the oVirt 3.4.3 GA.
The package should be built by tomorrow and will be hosted in the oVirt repository until it becomes available in the Fedora repositories.
We'll release 3.4.3 after basic sanity testing with the new package.
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
small cluster
by urthmover
I have 10 Mac minis at my disposal. (Currently, I'm booting each device from a CentOS 6.5 USB stick, leaving the two internal disks free for use.)
GOAL:
To build a cluster of 10 servers running postgres-xc
EQUIPMENT:
10x Mac mini: i7-3720QM @ 2.60GHz / 16GB RAM / 1x 1Gbit NIC / 2x 1TB SSD (ZFS mirrored)
REQUEST:
Run the software application postgres-xc (a multi-master version of Postgres). I'm told by the DBA that disk IO is the most important factor for the tasks he'll be running. The DBA wants 10 servers, each with a 50GB OS partition and an 800GB /data.
THOUGHTS:
I have a few ideas for how to accomplish this, but I'm unsure which strikes the best balance between disk IO and overall IT management of the environment.
QUESTIONS FOR THE LIST:
Should I present each of the 10 Mac minis' mirrored disks to GlusterFS, creating one large ~10TB storage area, connect that to ovirt-engine as a single 10TB storage domain, and use it to back 10 large 800GB disks (a /data for each guest)? (See the sketch after these questions.)
Should I present each Mac mini's mirrored pair as an NFS share to ovirt-engine? This would create 10 separate 1TB storage domains, on which I would then create 10 large 800GB /data partitions (a /data for each guest).
Should I NOT use oVirt and just run each Mac mini as a standalone server with its mirrored pair of disks?
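For the first option, the gluster side would look roughly like this (a sketch with hypothetical hostnames and brick paths; a plain distributed volume is assumed, since each mini already mirrors its disks locally):

  gluster peer probe mini02        # repeat for mini03 ... mini10, run from mini01
  gluster volume create ovirt-data \
      mini01:/tank/brick mini02:/tank/brick ... mini10:/tank/brick
  gluster volume start ovirt-data
  # then add it in the engine as a GlusterFS storage domain with path mini01:/ovirt-data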
LASTLY:
I’m open to any other thoughts or ideas for how to best accomplish this task.
Thanks in advance,
--
urthmover
[QE] Hardening Guide
by Sandro Bonazzola
Hi,
while I was working on Bug 1097022 (ovirt-engine-setup: weak default passwords for PostgreSQL database users),
I was wondering where to write the hardening tips described in comment #18.
It looks like we don't have any page on the oVirt wiki about hardening.
Is anyone interested in contributing to such a page?
I guess it can be created as http://www.ovirt.org/OVirt_Hardening_Guide
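As an example of the kind of tip it could start with (a sketch only; the 'engine' database user name is an assumption about what engine-setup creates, and the engine's own database configuration would need to be updated to match):

  su - postgres -c "psql -c \"ALTER USER engine WITH PASSWORD 'a-long-random-password';\""
  # and ensure pg_hba.conf requires md5 authentication for that user rather than trust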
Thoughts?
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Missing Storage domain
by Maurice James
What do I do when a host in a cluster can't find a storage domain that "it thinks" doesn't exist? The storage domain is in the db and is online, because one of the other hosts is working just fine. I pulled this out of the vdsm.log. I even tried rebooting.
Thread-30::ERROR::2014-07-16 14:19:10,522::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain b7663d70-e658-41fa-b9f0-8da83c9eddce monitoring information
Traceback (most recent call last):
File "/usr/share/vdsm/storage/domainMonitor.py", line 204, in _monitorDomain
self.domain = sdCache.produce(self.sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
domain.getRealDomain()
File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: ('b7663d70-e658-41fa-b9f0-8da83c9eddce',)
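A quick way to see whether vdsm on the affected host can see that domain at all (a hedged diagnostic, using the same vdsClient tool and the UUID from the traceback above):

  vdsClient -s 0 getStorageDomainsList
  vdsClient -s 0 getStorageDomainInfo b7663d70-e658-41fa-b9f0-8da83c9eddce

If the UUID is missing from the first command's output, that host is not seeing the underlying storage (a connection or mount problem) rather than the engine being wrong about the domain.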
Re: [ovirt-users] fileserver as a guest oVirt
by Karli Sjöberg
On 12 Jul 2014 22:49, Niklas Fondberg <niklas@vireone.com> wrote:
>
> On 12 jul 2014, at 16:57, "Karli Sjöberg" <Karli.Sjoberg@slu.se> wrote:
>
>> On 12 Jul 2014 15:45, Niklas Fondberg <niklas@vireone.com> wrote:
>> >
>> > Hi,
>> >
>> > I'm new to oVirt but I must say I am impressed!
>> > I am running it on a HP DL380 with an external SAS chassis.
>> > Linux dist is CentOS 6.5 and oVirt is 3.4 running all-in-one (for now, until we need to have a second host).
>> >
>> > Our company (www.vireone.com) deals with system architecture for many telco and media operators and is now setting up a small datacenter of its own for our internal tests as well as our IT infrastructure.
>> > We are in the process of installing Zentyal for SMB purposes on a guest, and it would be great to have that guest also serve a filesystem path with NFS + SMB (which is semi-crippled on the host after oVirt installation with version 3, etc.).
>> >
>> > Does anyone have an idea of how, through oVirt (I have seen several solutions using virsh and kvm), I can let my Zentyal Ubuntu guest have access to a host mount point or, if necessary (second best), a separate partition?
>> >
>> > Best regards
>> > Niklas
>>
>> Why not just give the guest a thin provisioned virtual hard drive and expand it on demand?
>>
>> /K
>
> Thanks for the advice, but this would not suit us I'm afraid. It would be difficult wrt incremental backups as well as host machine file-routines.

Well, going by Occam's razor, the simplest answer is usually correct. I can't really tell what you mean by file-routines, but backups would be well served by snapshots (can't get more incremental than that), and disaster recovery could be as easy as an rsync from inside the guest to a remote machine.

The biggest pro here is the ease of being able to set up an export domain, attach it, export the VM, detach the domain, and then attach and import to a "real" setup when the AIO starts feeling crowded later on. Thinking ahead is never a bad thing, no?

/K
Can't create disks on gluster storage
by andreas.ewert@cbc.de
Hi,
While creating a new disk on gluster storage I got some errors, and in the end the disk is not created.
engine.log says that there is no such file or directory, but the gluster domain (124e6273-c6f5-471f-88e7-e5d9d37d7385) is active:
[root@ipet etc]# vdsClient -s 0 getStoragePoolInfo 5849b030-626e-47cb-ad90-3ce782d831b3
name = Test
isoprefix = /rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f/images/11111111-1111-1111-1111-111111111111
pool_status = connected
lver = 0
spm_id = 1
master_uuid = 54f86ad7-2c12-4322-b2d1-f129f3d20e57
version = 3
domains = 9bdf01bd-78d6-4408-b3a9-e05469004d78:Active,e4a3928d-0475-4b99-bfb8-86606931296a:Active,c1582b82-9bfc-4e2b-ab2a-83551dcfba8f:Active,124e6273-c6f5-471f-88e7-e5d9d37d7385:Active,54f86ad7-2c12-4322-b2d1-f129f3d20e57:Active
type = FCP
master_ver = 8
9bdf01bd-78d6-4408-b3a9-e05469004d78 = {'status': 'Active', 'diskfree': '1043408617472', 'isoprefix': '', 'alerts': [], 'disktotal': '1099108974592', 'version': 3}
e4a3928d-0475-4b99-bfb8-86606931296a = {'status': 'Active', 'diskfree': '27244910346240', 'isoprefix': '', 'alerts': [], 'disktotal': '34573945667584', 'version': 3}
124e6273-c6f5-471f-88e7-e5d9d37d7385 = {'status': 'Active', 'diskfree': '12224677937152', 'isoprefix': '', 'alerts': [], 'disktotal': '16709735940096', 'version': 3}
c1582b82-9bfc-4e2b-ab2a-83551dcfba8f = {'status': 'Active', 'diskfree': '115282018304', 'isoprefix': '/rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '539016298496', 'version': 0}
54f86ad7-2c12-4322-b2d1-f129f3d20e57 = {'status': 'Active', 'diskfree': '148579024896', 'isoprefix': '', 'alerts': [], 'disktotal': '1197490569216', 'version': 3}
[root@ipet etc]# ll /rhev/data-center/mnt/glusterSD/moly\:_repo1/
total 0
drwxr-xr-x 4 vdsm kvm 96 17. Apr 10:02 124e6273-c6f5-471f-88e7-e5d9d37d7385
-rwxr-xr-x 1 vdsm kvm 0 11. Feb 12:52 __DIRECT_IO_TEST__
[root@ipet etc]# ll /rhev/data-center/
total 12
drwxr-xr-x 2 vdsm kvm 4096 14. Jul 10:42 5849b030-626e-47cb-ad90-3ce782d831b3
drwxr-xr-x 2 vdsm kvm 4096 16. Dec 2013 hsm-tasks
drwxr-xr-x 7 vdsm kvm 4096 14. Jul 10:24 mnt
[root@ipet etc]# ll /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/
total 16
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 54f86ad7-2c12-4322-b2d1-f129f3d20e57 -> /rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 9bdf01bd-78d6-4408-b3a9-e05469004d78 -> /rhev/data-center/mnt/blockSD/9bdf01bd-78d6-4408-b3a9-e05469004d78
lrwxrwxrwx 1 vdsm kvm 84 14. Jul 10:42 c1582b82-9bfc-4e2b-ab2a-83551dcfba8f -> /rhev/data-center/mnt/mixcoatl:_srv_mirror_ISOs/c1582b82-9bfc-4e2b-ab2a-83551dcfba8f
lrwxrwxrwx 1 vdsm kvm 66 14. Jul 10:42 mastersd -> /rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57
Here the directory for the gluster domain 124e6273-c6f5-471f-88e7-e5d9d37d7385 and the symbolic link belonging to it are missing. The directory is missing on every host.
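For comparison, a healthy pool directory would also contain a link like the one below for the gluster domain (a sketch based on the glusterSD mount path shown above; vdsm normally re-creates these links when it connects the storage pool):

  124e6273-c6f5-471f-88e7-e5d9d37d7385 -> /rhev/data-center/mnt/glusterSD/moly:_repo1/124e6273-c6f5-471f-88e7-e5d9d37d7385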
What can I do to fix this issue?
With best regards
Andreas
engine.log:
2014-07-17 05:30:33,347 INFO [org.ovirt.engine.core.bll.AddDiskCommand] (ajp--127.0.0.1-8702-7) [19404214] Running command: AddDiskCommand internal: false. Entities affected : ID: 124e6273-c6f5-471f-88e7-e5d9d37d7385 Type: Storage
2014-07-17 05:30:33,358 INFO [org.ovirt.engine.core.bll.AddImageFromScratchCommand] (ajp--127.0.0.1-8702-7) [3e7d9b07] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 124e6273-c6f5-471f-88e7-e5d9d37d7385 Type: Storage
2014-07-17 05:30:33,364 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp--127.0.0.1-8702-7) [3e7d9b07] START, CreateImageVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 124e6273-c6f5-471f-88e7-e5d9d37d7385, imageGroupId = 7594000c-23a9-4941-8218-2c0654518a3d, imageSizeInBytes = 68719476736, volumeFormat = RAW, newImageId = c48e46cd-9dd5-4c52-94a4-db0378aecc3c, newImageDescription = ), log id: 642dadf4
2014-07-17 05:30:33,366 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp--127.0.0.1-8702-7) [3e7d9b07] -- executeIrsBrokerCommand: calling 'createVolume' with two new parameters: description and UUID
2014-07-17 05:30:33,392 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp--127.0.0.1-8702-7) [3e7d9b07] FINISH, CreateImageVDSCommand, return: c48e46cd-9dd5-4c52-94a4-db0378aecc3c, log id: 642dadf4
2014-07-17 05:30:33,403 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-7) [3e7d9b07] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 7429f506-e57d-45e7-bc66-42bbdc90174c
2014-07-17 05:30:33,404 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-7) [3e7d9b07] CommandMultiAsyncTasks::AttachTask: Attaching task d957c5a9-518e-4668-b8cd-27a106952e7c to command 7429f506-e57d-45e7-bc66-42bbdc90174c.
2014-07-17 05:30:33,412 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-7) [3e7d9b07] Adding task d957c5a9-518e-4668-b8cd-27a106952e7c (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2014-07-17 05:30:33,419 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) [3e7d9b07] Correlation ID: 19404214, Job ID: 18601b0b-5bbe-4fc3-91f8-8a8dff0974fc, Call Stack: null, Custom Event ID: -1, Message: Add-Disk operation of 're_Disk2' was initiated by EwertA.
2014-07-17 05:30:33,421 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-7) [3e7d9b07] BaseAsyncTask::startPollingTask: Starting to poll task d957c5a9-518e-4668-b8cd-27a106952e7c.
2014-07-17 05:30:38,432 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-70) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2014-07-17 05:30:38,445 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-70) Failed in HSMGetAllTasksStatusesVDS method
2014-07-17 05:30:38,450 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-70) SPMAsyncTask::PollTask: Polling task d957c5a9-518e-4668-b8cd-27a106952e7c (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2014-07-17 05:30:38,467 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-70) BaseAsyncTask::LogEndTaskFailure: Task d957c5a9-518e-4668-b8cd-27a106952e7c (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory: '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/124e6273-c6f5-471f-88e7-e5d9d37d7385/images/7594000c-23a9-4941-8218-2c0654518a3d', code = 100,
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory: '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/124e6273-c6f5-471f-88e7-e5d9d37d7385/images/7594000c-23a9-4941-8218-2c0654518a3d', code = 100
2014-07-17 05:30:38,485 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-70) CommandAsyncTask::EndActionIfNecessary: All tasks of command 7429f506-e57d-45e7-bc66-42bbdc90174c has ended -> executing endAction
2014-07-17 05:30:38,487 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-70) CommandAsyncTask::endAction: Ending action for 1 tasks (command ID: 7429f506-e57d-45e7-bc66-42bbdc90174c): calling endAction .
2014-07-17 05:30:38,489 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-42) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to endAction AddDisk, executionIndex: 0
2014-07-17 05:30:38,517 ERROR [org.ovirt.engine.core.bll.AddDiskCommand] (org.ovirt.thread.pool-6-thread-42) [19404214] Ending command with failure: org.ovirt.engine.core.bll.AddDiskCommand
2014-07-17 05:30:38,519 ERROR [org.ovirt.engine.core.bll.AddImageFromScratchCommand] (org.ovirt.thread.pool-6-thread-42) [3e7d9b07] Ending command with failure: org.ovirt.engine.core.bll.AddImageFromScratchCommand
2014-07-17 05:30:38,616 WARN [org.ovirt.engine.core.bll.AddDiskCommand] (org.ovirt.thread.pool-6-thread-42) VmCommand::EndVmCommand: Vm is null - not performing endAction on Vm
2014-07-17 05:30:38,638 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-42) Correlation ID: 19404214, Job ID: 18601b0b-5bbe-4fc3-91f8-8a8dff0974fc, Call Stack: null, Custom Event ID: -1, Message: Operation Add-Disk failed to complete.
2014-07-17 05:30:38,645 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-42) CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type AddDisk completed, handling the result.
2014-07-17 05:30:38,652 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-42) CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type AddDisk succeeded, clearing tasks.
2014-07-17 05:30:38,663 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (org.ovirt.thread.pool-6-thread-42) SPMAsyncTask::ClearAsyncTask: Attempting to clear task d957c5a9-518e-4668-b8cd-27a106952e7c
2014-07-17 05:30:38,669 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-42) START, SPMClearTaskVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, taskId = d957c5a9-518e-4668-b8cd-27a106952e7c), log id: f40f5de
2014-07-17 05:30:38,711 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-42) START, HSMClearTaskVDSCommand(HostName = ipet, HostId = ec83e4e0-40cf-49e4-94e3-5efc5ef6f2e2, taskId=d957c5a9-518e-4668-b8cd-27a106952e7c), log id: 71cdc9a4
2014-07-17 05:30:38,728 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-42) FINISH, HSMClearTaskVDSCommand, log id: 71cdc9a4
2014-07-17 05:30:38,733 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-42) FINISH, SPMClearTaskVDSCommand, log id: f40f5de
2014-07-17 05:30:38,741 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (org.ovirt.thread.pool-6-thread-42) BaseAsyncTask::RemoveTaskFromDB: Removed task d957c5a9-518e-4668-b8cd-27a106952e7c from DataBase
2014-07-17 05:30:38,747 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-42) CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 7429f506-e57d-45e7-bc66-42bbdc90174c
oVirt Weekly Sync Meeting: July 16, 2014
by Brian Proffitt
Minutes: http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-16-14.05.html
Minutes (text): http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-16-14.05.txt
Log: http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-16-14.05.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by bkp at 14:05:27 UTC. The full logs are available at
http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-16-14.05.log.html .
Meeting summary
---------------
* Agenda and Roll Call (bkp, 14:05:44)
* infra update (bkp, 14:06:34)
* 3.4.z updates (bkp, 14:06:34)
* 3.5 status (bkp, 14:06:34)
* conferences and workshops (bkp, 14:06:34)
* other topics (bkp, 14:06:34)
* infra update (bkp, 14:07:26)
* infra phx lab still in the works (installed hosted engine though,
but now they are setting up cables for the network) (bkp, 14:09:27)
* infra Most of the 3.5 jobs have been added to jenkins, missing still
a few (bkp, 14:09:27)
* 3.4.z updates (bkp, 14:10:06)
* 3.4.z updates Detailed status report at
http://lists.ovirt.org/pipermail/users/2014-July/026028.html (bkp,
14:10:37)
* 3.4.z updates No active blockers for 3.4.3 GA, 3 bugs (excluding
node and documentation) still outstanding (bkp, 14:10:37)
* 3.4.z updates 3.4.3 will start composing on 07-17-14 0800 UTC from
3.4.3 branch (bkp, 14:10:37)
* 3.4.z updates Building engine in ~30 minutes so it will be ready for
tomorrow morning composition and testing (bkp, 14:14:05)
* 3.5 status (bkp, 14:14:41)
* 3.5 status integration Detailed status at
http://lists.ovirt.org/pipermail/users/2014-July/026029.html (bkp,
14:16:05)
* 3.5 status integration 3.5.0 Second Beta will be composed on
07-21-14 0800 UTC; Be sure 3.5 snapshots are enabled to create VMs
by 07-20-14 1500 UTC (bkp, 14:16:05)
* 3.5 status integration Feature freeze in place, 3.5.0 branch has
been created; All new patches must be backported to 3.5 branch too.
(bkp, 14:16:05)
* 3.5 status integration Six blockers are currently listed (2 infra, 1
integration, 2 network, and 1 virt) for 3.5.0 (bkp, 14:16:05)
* 3.5 status integration Blocker Bug 1113974: Trying to reproduce
issue, no ETA at this time. (bkp, 14:18:11)
* 3.5 status network Blocker Bug 1115001 status: pending review (POST)
(bkp, 14:25:30)
* 3.5 status network Blocker Bug 1119019 status: pending review. Both
blockers should be merged in next week's beta. (bkp, 14:26:48)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1119024 (danken,
14:27:35)
* 3.5 status network Not an ovirt blocker yet, but bug 1119024 is a
persistent vdsm regression affecting network. Patch made. (bkp,
14:29:08)
* 3.5 status virt Blocker Bug 1110305 status: Almost in. ETA today.
(bkp, 14:32:03)
* 3.5 status infra Blocker Bug 1115044 status: In POST status unknown.
(bkp, 14:36:40)
* 3.5 status infra Blocker Bug 1115152 status: In POST status unknown.
(bkp, 14:36:41)
* 3.5 status infra No report this week. (bkp, 14:36:41)
* LINK: http://gerrit.ovirt.org/#/c/28817/ is not yet ready (danken,
14:37:11)
* 3.5 status infra http://gerrit.ovirt.org/#/c/28817/ is not yet
ready, lastClientIface is not reported over jsonrpc. Should be set
as a blocker (bkp, 14:38:08)
* 3.5 status infra http://gerrit.ovirt.org/#/c/28817/ patch in
progress, awaiting agreement on minor aspect. (bkp, 14:40:12)
* 3.5 status storage Live Merge feature
(http://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:...)
not ready, still blocking for 3.5. No change in status, no ETA to
report. (bkp, 14:46:18)
* 3.5 status node Node is basically stabilizing. (bkp, 14:49:14)
* 3.5 status node In testing mode, working out some build/automation
problems. (Not 3.5-related) (bkp, 14:49:14)
* 3.5 status sla Everything sla had for vdsm is in master now (bkp,
15:09:41)
* 3.5 status sla iotune now needs to be merged to 3.5. Verified and
unittests written. There are still strong concerns from danken.
msivak to try to address and get this in the green. (bkp, 15:09:42)
* 3.5 status sla optimizer now knows how to compute a solution for
starting a vm, lacking ui, but that will take no time when vdsm is
sorted out (bkp, 15:12:29)
* 3.5 status sla hosted engine in good shape and with more stable
state machine as far as starting the engine vm is concerned (bkp,
15:12:29)
* 3.5 status vdsm dougsland to build vdsm 4.14.11 (bkp, 15:13:06)
* 3.4.z updates vdsm dougsland to build vdsm 4.14.11 (bkp, 15:14:47)
* 3.5 status vdsm Strong concerns about iotunes in SLA raised. (bkp,
15:17:26)
* 3.5 status vdsm Also concerns about lastclientIface (bkp, 15:17:55)
* ACTION: 3.5 status Appropriate owners, please address danken's
concerns about sla, lastclientIface (bkp, 15:19:09)
* 3.5 status ux Sorting is in 3.5 for all main tabs, except Volumes
(gluster) that is only in 'master' ATM. Sorting is in 3.5 for most
sub-tabs as well, except the ones in 'virt' main-tabs and Volumes.
(bkp, 15:20:58)
* 3.5 status ux full status is in:
https://docs.google.com/spreadsheets/d/1S3qDYqv9OK5ruI41RpFpRAsLaojpGL-W6...
(bkp, 15:20:58)
* 3.5 status ux Italian translation in progress (currently in ~30%
completion). (bkp, 15:20:58)
* 3.5 status ux No blockers (bkp, 15:20:58)
* Conferences and Workshops (bkp, 15:21:28)
* Conferences and Workshops oVirt Workshop around KVM Forum is a go.
Funding has been acquired for the one-day event around KVM Forum.
Date/deadlines to follow. (bkp, 15:21:36)
* Conferences and Workshops OSCON is next week, looking forward to
seeing oVirt community members' presentations and at the Red Hat
booth. (bkp, 15:21:37)
* Conferences and Workshops No new information on "Red Hat Day" at
Fossetcon in September. Will update. (bkp, 15:21:37)
* Other Topics (bkp, 15:21:49)
* Other Topics Documentation is ready for review/updating. See
http://lists.ovirt.org/pipermail/users/2014-July/025900.html for
details on how you can help! (bkp, 15:22:13)
Meeting ended at 15:23:29 UTC.
Action Items
------------
* 3.5 status Appropriate owners, please address danken's concerns about
sla, lastclientIface
Action Items, by person
-----------------------
* danken
* 3.5 status Appropriate owners, please address danken's concerns
about sla, lastclientIface
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* bkp (128)
* danken (53)
* msivak (23)
* urthmover (22)
* sbonazzo (17)
* lvernia (15)
* tal (8)
* awels (7)
* SvenKieske (6)
* fabiand (5)
* dcaro (3)
* dougsland (3)
* mskrivanek (3)
* itamar (3)
* ovirtbot (3)
* alonbl_ (3)
* nsoffer (2)
* pkliczew (1)
* peetaur2 (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
3.5 thinks HA is already running
by Nathan Stratton
I'm trying to add new hosts to my pool. Things work fine with 3.4, but 3.5
thinks that HA is already running (it is not).
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel rhel.status:138 check service ovirt-ha-agent status
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:785 execute: ('/sbin/initctl', 'status', 'ovirt-ha-agent'), executable='None', cwd='None', env=None
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:803 execute-result: ('/sbin/initctl', 'status', 'ovirt-ha-agent'), rc=1
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:861 execute-output: ('/sbin/initctl', 'status', 'ovirt-ha-agent') stdout:
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:866 execute-output: ('/sbin/initctl', 'status', 'ovirt-ha-agent') stderr:
initctl: Unknown job: ovirt-ha-agent
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:785 execute: ('/sbin/service', 'ovirt-ha-agent', 'status'), executable='None', cwd='None', env=None
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:803 execute-result: ('/sbin/service', 'ovirt-ha-agent', 'status'), rc=0
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:861 execute-output: ('/sbin/service', 'ovirt-ha-agent', 'status') stdout:
ovirt-ha-agent is stopped
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:866 execute-output: ('/sbin/service', 'ovirt-ha-agent', 'status') stderr:
2014-07-14 15:07:47 DEBUG otopi.plugins.otopi.services.rhel rhel.status:147 service ovirt-ha-agent status True
2014-07-14 15:07:47 DEBUG otopi.context context._executeMethod:152 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/ha/ha_services.py", line 64, in _programs
    'Hosted Engine HA services are already running on '
RuntimeError: Hosted Engine HA services are already running on this system. Hosted Engine cannot be deployed on a host already running those services.
2014-07-14 15:07:47 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Programs detection': Hosted Engine HA services are already running on this system. Hosted Engine cannot be deployed on a host already running those services.
2014-07-14 15:07:47 DEBUG otopi.context context.dumpEnvironment:468 ENVIRONMENT DUMP - BEGIN
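The check that trips this up appears to key on the exit status of the status command: in the log above, /sbin/service returned rc=0 even though the agent reports itself as stopped, so the setup concludes the service is running. A quick way to confirm on the affected host (a hedged check, not a fix):

  /sbin/service ovirt-ha-agent status; echo "exit code: $?"

An LSB-compliant init script should return 3 for a stopped service; the 0 seen here is what makes hosted-engine setup believe HA is already running.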
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
python-pthreading for epel6.5
by lucas castro
python-pthreading 0.1.3-3 is now available for EPEL.
It solves the network problem when installing a new host.
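If you were hit by this, pulling in the fixed package should be as simple as (assuming the EPEL repository is enabled on the host):

  yum install python-pthreading
  # or: yum update python-pthreading, if an older build is already installed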
--
contacts:
Mobile: ( 99 ) 9143-5954 - Vivo
skype: lucasd3castro
msn: lucascastroborges(a)hotmail.com