Re: [ovirt-users] host status "Non Operational" - how to diagnose & fix?
by Karli Sjöberg
On 3 Jan 2016, at 2:43 AM, Will Dennis <wdennis@nec-labs.com> wrote:
>
> The ‘ovirtmgmt’ network has been & is still placed on a working NIC (enp12s0f0)… It’s just that now, oVirt somehow doesn’t *think* it’s working…

Here's something I wrote a long time ago now, for those times when auto-gui-config fluff just won't do:

http://www.ovirt.org/Bonding_VLAN_Bridge

/K
>
> http://s1096.photobucket.com/user/willdennis/media/setup-networks.png.html
>
> However, as I showed you in the ‘ip link show up’ output, it is indeed up and working.
>
> On Jan 2, 2016, at 8:00 PM, Roy Golan <rgolan@redhat.com> wrote:
>
> On Sun, Jan 3, 2016 at 2:46 AM, Will Dennis <wdennis@nec-labs.com> wrote:
> I have had one of my hosts go into the state “Non Operational” after I rebooted it… I also noticed that in the oVirt webadmin UI, the NIC that’s used in the ‘ovirtmgmt’ network is showing “down”, but in Linux the NIC is operational and up, as is the ‘ovirtmgmt’ bridge…
>
> Hosts tab -> Network Interfaces subtab -> click "Setup networks" and make sure "ovirtmgmt" is placed on a working nic.
>
> make sure
> [root@ovirt-node-02 ~]# ip link sh up
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
>     link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 3: enp4s0f0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
>     link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 4: enp4s0f1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
>     link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
>     link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
> 7: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
>     link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
>
> What should I take a look at first?
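A hedged aside, not part of either reply above: when the kernel shows the NIC and the bridge up but the engine still reports them down, it can help to confirm the bridge membership on the host and to make VDSM (the agent that reports NIC state to the engine) send a fresh report before reworking the network by hand. A minimal sketch, using the interface names from the quoted output:

# brctl show
(the ovirtmgmt bridge should list enp12s0f0 as one of its ports)
# ip -o link show enp12s0f0
(expect state UP with LOWER_UP, matching the output above)
# systemctl restart vdsmd
(makes VDSM re-report host capabilities to the engine; expect a brief
interruption of engine-to-host communication while it restarts)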
Re: [ovirt-users] iLO2
by Eli Mesika
----- Original Message -----
> From: "Eriks Goodwin" <eriks(a)connectcombpo.com>
> To: "Eli Mesika" <emesika(a)redhat.com>
> Sent: Tuesday, December 29, 2015 4:13:11 AM
> Subject: Re: [ovirt-users] iLO2
>
> [root@n001 ~]# /usr/sbin/fence_ipmilan --ip=10.0.1.104 --username=ovirtmgmt
> --password=REDACTED -v --action=status --lanplus
Please try again with
# /usr/sbin/fence_ipmilan --ip=10.0.1.104 --username=ovirtmgmt --password=REDACTED -v --action=status --lanplus -T 4
>
> Executing: /usr/bin/ipmitool -I lanplus -H 10.0.1.104 -U ovirtmgmt -P [set]
> -p 623 -L ADMINISTRATOR chassis power status
>
> Connection timed out
>
> [root@n001 ~]#
>
>
> ----- On Dec 22, 2015, at 8:48 AM, Eli Mesika <emesika(a)redhat.com> wrote:
>
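A hedged addition, not something this thread prescribes: before tuning fence-agent options, it can be worth confirming that the iLO BMC at 10.0.1.104 is reachable at all, since the quoted run ends in "Connection timed out". A minimal sketch, reusing the address, username, and port from the quoted command:

# ping -c 3 10.0.1.104
(basic reachability of the iLO management interface)
# nmap -sU -p 623 10.0.1.104
(IPMI-over-LAN uses UDP port 623, the port shown in the ipmitool line above)
# ipmitool -I lanplus -H 10.0.1.104 -U ovirtmgmt -P 'REDACTED' chassis power status
(the same call that fence_ipmilan wraps, run directly to rule out the wrapper)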
Re: [ovirt-users] Not able to attach my storage
by Nir Soffer
On 3 Jan 2016 at 10:26 AM, "Michael Cooper" <mcooper(a)coopfire.com> wrote:
(Adding users list)
>
> This from my engine.log
(Snipped log)
Next time, please attach log files; dumping partial logs in the message body
is not very useful.
>
> I have attached 2 screenshots as well for your review. Let me know; I have
read through the log, but I am new to oVirt, so I am not sure what I am
looking for yet.
>
In the screenshot we can see that you already have an ISO domain attached
(CF_ISO). It seems that the system does not support more than one attached
ISO domain.
Why do you need multiple iso domains attached to the same dc?
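As a hedged sketch (the engine hostname and credentials below are placeholders, not a fix confirmed by this thread), the currently attached ISO domain can be checked in the Admin Portal's Storage tab, or listed via the REST API; it would need to be detached from the data center before a different ISO domain could be attached:

# curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
      'https://engine.example.com/ovirt-engine/api/storagedomains' \
      | grep -E '<name>|<type>'
(an ISO domain shows <type>iso</type>; per the observation above, only one
can be attached to a given data center at a time)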
>
>
> On Sat, Jan 2, 2016 at 11:30 AM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>> On Sat, Jan 2, 2016 at 12:46 PM, Michael Cooper <mcooper(a)coopfire.com>
wrote:
>> > Hello Everyone,
>> >
>> > I wondering why I cannot attach my iso_domain to my Data
Center.
>> > When I try to attach it says no Valid DataCenters. Why is this
happening?
>> > Where should I start looking for the resolution?
>>
>> Can you describe your data center? do you have active hosts? active
>> storage domain?
>>
>> Can you attach engine.log showing this error?
>>
>> Nir
>>
>> >
>> > Thanks,
>> >
>> > --
>> > Michael A Cooper
>> > Linux Certified
>> > Zerto Certified
>> > http://www.coopfire.com
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users(a)ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>
>
>
>
> --
> Michael A Cooper
> Linux Certified
> Zerto Certified
> http://www.coopfire.com
Re: [ovirt-users] Users Digest, Vol 52, Issue 1
by Michael Cooper
Hello Everyone,
I am wondering why I cannot attach my iso_domain to my Data Center.
When I try to attach it, it says "No Valid DataCenters". Why is this happening?
Where should I start looking for the resolution?
Thanks,
--
Michael A Cooper
Linux Certified
Zerto Certified
http://www.coopfire.com
how shutdown a host
by alireza sadeh seighalan
Hi everyone,
How can I shut down a host according to a standard procedure? I want to shut
down or reboot hosts for maintenance purposes (for example, hosts 2-6). Thanks
in advance.
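No answer is captured in this digest; as a hedged sketch of the usual flow (the host name, host UUID, engine hostname, and credentials below are placeholders), a host is first moved to maintenance mode, which live-migrates its VMs to other hosts where possible, and only then powered off:

# curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
      -X POST -d '<action/>' \
      'https://engine.example.com/ovirt-engine/api/hosts/HOST_UUID/deactivate'
(the same action is available in the Admin Portal as Hosts tab -> Maintenance)
# ssh root@ovirt-node-02 'shutdown -h now'
(only once the host shows "Maintenance" in the Admin Portal)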
host status "Non Operational" - how to diagnose & fix?
by Will Dennis
I have had one of my hosts go into the state “Non Operational” after I rebooted it… I also noticed that in the oVirt webadmin UI, the NIC that’s used in the ‘ovirtmgmt’ network is showing “down”, but in Linux the NIC is operational and up, as is the ‘ovirtmgmt’ bridge…
[root@ovirt-node-02 ~]# ip link sh up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
3: enp4s0f0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
4: enp4s0f1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
7: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
What should I take a look at first?
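The replies earlier in this digest cover the fix; as a supplementary hedged sketch (oVirt 3.6 default paths assumed), the usual first places to look are the host-side VDSM service and the engine's log for the reason the host was marked non-operational:

# systemctl status vdsmd
# tail -n 100 /var/log/vdsm/vdsm.log
(on the affected host: is VDSM running, and what is it reporting about the networks?)
# grep -i 'ovirt-node-02' /var/log/ovirt-engine/engine.log | tail -n 50
(on the engine: the event that moved the host to Non Operational is logged here)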
Configuring another interface for trunked (tagged) VM traffic
by Will Dennis
Hi all,
Taking the next step in configuring my newly established oVirt cluster: setting up a trunked (VLAN-tagged) connection to each of the three cluster hosts for VM traffic. What I’m looking at is akin to setting up vSwitches on VMware, except I have never done this on a VMware cluster, just on individual hosts…
Anyhow, I have the following NICs available on my three hosts (conveniently, they are the exact same hardware platform):
ovirt-node-01 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
ovirt-node-02 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
ovirt-node-03 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
As you can see, I am using the ‘enp12s0f0’ interface on each host for the ‘ovirtmgmt’ bridge. This network carries the admin traffic as well as the Gluster distributed-filesystem traffic, but I now want to establish a separate link to each host for VM traffic. The ‘ovirtmgmt’ bridge is NOT trunked/tagged; only a single VLAN is used. For the VM traffic, I’d like to use the ‘enp4s0f0’ interface on each host, tie those interfaces into a logical network named “vm-traffic” (or the like), and make that a trunked/tagged interface.
Are there any existing succinct instructions on how to do this? I have been reading through the oVirt Admin Manual’s “Logical Networks” section (http://www.ovirt.org/OVirt_Administration_Guide#Logical_Network_Tasks) but it hasn’t “clicked” in my mind yet...
Thanks,
Will
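No reply is captured in this digest; as a hedged sketch of the usual flow (the VLAN ID 100, data center name, engine hostname, and credentials below are assumptions), the logical network is created with a VLAN tag and the "VM network" usage, attached to the cluster, and then assigned to enp4s0f0 on each host via Setup Host Networks:

# curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
      -X POST -d '<network><name>vm-traffic</name>
                  <data_center><name>Default</name></data_center>
                  <vlan id="100"/>
                  <usages><usage>vm</usage></usages></network>' \
      'https://engine.example.com/ovirt-engine/api/networks'
(the same can be done in the Admin Portal under Networks -> New; afterwards,
attach the network to the cluster, then drag it onto enp4s0f0 in
Hosts tab -> Network Interfaces -> Setup Host Networks on each host)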
SPM
by Fernando Fuentes
Team,
I noticed that my SPM moved to another host, which was odd because I have
a set SPM.
Somehow, when that happened, two of my hosts went down and all my VMs went
into a paused state.
The oddity behind all this is that my primary storage, which has always
been my SPM, was online without any issues.
What could have caused that? And is there a way to prevent the SPM from
migrating unless there is an issue?
--
Fernando Fuentes
ffuentes(a)txweather.org
http://www.txweather.org
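No reply is captured here; a hedged aside: which host becomes SPM is influenced by each host's "SPM Priority" setting (Edit Host dialog), and the engine re-elects an SPM when the current one becomes unresponsive, so the election itself is usually a symptom rather than the cause. A sketch for checking the current SPM from a host, with a placeholder pool UUID and era-appropriate tooling that this thread does not itself confirm:

# vdsClient -s 0 getConnectedStoragePoolsList
# vdsClient -s 0 getSpmStatus <STORAGE_POOL_UUID>
(the Admin Portal's Hosts tab also marks which host currently holds the SPM role)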
oVirt hosted engine agent and broker duplicate logs to syslog
by Aleksey Chudov
Hi,
After upgrading from 3.6.0 to 3.6.1, the agent and broker duplicate their logs
to syslog: the same messages are logged twice, to the files in the
/var/log/ovirt-hosted-engine-ha/ directory and to the /var/log/messages file.
The agent and broker configuration files are the same for 3.5, 3.6.0 and
3.6.1, and there is no such log duplication in 3.5 or 3.6.0.
Is it a bug or expected behavior?
OS is CentOS 7.2
# rpm -qa 'ovirt*'
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.3.5-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch
ovirt-release36-002-2.noarch
ovirt-setup-lib-1.0.0-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.1.3-1.el7.centos.noarch
# cat /etc/ovirt-hosted-engine-ha/agent-log.conf
[loggers]
keys=root
[handlers]
keys=syslog,logfile
[formatters]
keys=long,sysform
[logger_root]
level=INFO
handlers=syslog,logfile
propagate=0
[handler_syslog]
level=ERROR
class=handlers.SysLogHandler
formatter=sysform
args=('/dev/log', handlers.SysLogHandler.LOG_USER)
[handler_logfile]
class=logging.handlers.TimedRotatingFileHandler
args=('/var/log/ovirt-hosted-engine-ha/agent.log', 'd', 1, 7)
level=DEBUG
formatter=long
[formatter_long]
format=%(threadName)s::%(levelname)s::%(asctime)s::%(module)s::%(lineno)d::%(name)s::(%(funcName)s) %(message)s
[formatter_sysform]
format=ovirt-ha-agent %(name)s %(levelname)s %(message)s
datefmt=
Aleksey
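A hedged observation, not an answer from the thread: the [handler_syslog] section above only forwards ERROR and higher, so INFO-level duplicates in /var/log/messages are more likely arriving by another path, for example the services' stdout/stderr being captured by journald and forwarded to rsyslog. A sketch for checking where the duplicates enter:

# journalctl -u ovirt-ha-agent --since "1 hour ago" | tail -n 20
# journalctl -u ovirt-ha-broker --since "1 hour ago" | tail -n 20
(if the duplicated INFO lines show up here, they are coming through the journal,
not through the SysLogHandler configured above)
# tail -n 20 /var/log/ovirt-hosted-engine-ha/agent.log
# tail -n 20 /var/log/ovirt-hosted-engine-ha/broker.log
(compare timestamps and messages with what lands in /var/log/messages)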