Windows Server 2008 & 2016 Drivers
by Bill Bill
Also, this happens when “installing” Windows: going through the setup, unless
you have an IDE disk it will not see any other disks such as virtio or
virtio-scsi – so you cannot install it to get to the guest tools unless you
set it up with IDE.

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Bill Bill<mailto:jax2568@outlook.com>
Sent: Thursday, December 1, 2016 2:40 PM
To: Yaniv Dary<mailto:ydary@redhat.com>
Subject: RE: [ovirt-users] Windows Server 2008 & 2016 Drivers

Trying to use the ISO from Fedora People but it never sees any drivers. I
did, however, just find this:

https://community.redhat.com/blog/2015/05/how-to-install-and-use-ovirts-windows-guest-tools/

Is that what you’re referring to? Or is there some updated guide I should
look at that you know of?

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Yaniv Dary<mailto:ydary@redhat.com>
Sent: Thursday, December 1, 2016 6:17 AM
To: Bill Bill<mailto:jax2568@outlook.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Windows Server 2008 & 2016 Drivers

Did you install the guest tools?

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
      8272306
Email: ydary@redhat.com
IRC : ydary

On Thu, Dec 1, 2016 at 5:43 AM, Bill Bill <jax2568@outlook.com> wrote:
Hello,

Is there a proper guide for getting virtio drivers to load – I’ve been
trying to get Windows Server 2008 & 2016 installed with a Virtio based disk
but no drivers appear to work.

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
remove gluster storage domain and resize gluster storage domain
by Bill James
I have a 3-node cluster with a replica 3 gluster volume.
But for some reason the volume is not using the full size available.
I thought maybe it was because I had created a second gluster volume on
the same partition, so I tried to remove it.
I was able to put it in maintenance mode and detach it, but in no window
could I get the "remove" option to become enabled.
Now if I select "attach data" I see oVirt thinks the volume is still
there, although it is not.
Two questions:
1. How do I clear out the old removed volume from oVirt?
2. How do I get gluster to use the full disk space available?
It's a 1T partition but gluster only created a 225G volume. Why? How
do I get the space back?
All three nodes look the same:
/dev/mapper/rootvg01-lv02 1.1T 135G 929G 13% /ovirt-store
ovirt1-gl.j2noc.com:/gv1 225G 135G 91G 60%
/rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1
[root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
Status of volume: gv1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1    49152    0    Y    5218
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1    49152    0    Y    5678
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1    49152    0    Y    61386
NFS Server on localhost 2049 0 Y 31312
Self-heal Daemon on localhost N/A N/A Y 31320
NFS Server on ovirt3-gl.j2noc.com 2049 0 Y 38109
Self-heal Daemon on ovirt3-gl.j2noc.com N/A N/A Y 38119
NFS Server on ovirt2-gl.j2noc.com 2049 0 Y 5387
Self-heal Daemon on ovirt2-gl.j2noc.com N/A N/A Y 5402
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
Thanks.
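To narrow down where the 225G figure comes from, it can help to compare what gluster reports for each brick's backing filesystem with what the OS reports for the brick path. A sketch, using the volume and brick paths shown in the status output above:

```shell
# What gluster believes each brick's backing filesystem provides; if a
# brick directory sits on a smaller filesystem than the 1T /ovirt-store
# mount (e.g. the root LV), the replica volume inherits that smaller size.
gluster volume status gv1 detail

# Compare with what the OS reports for the brick path, on each node:
df -h /ovirt-store/brick1/gv1
```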
Create cinder disk via python sdk
by Logan Kuhn
Hi
As part of migrating from KVM to Ceph, I'm trying to create a Cinder disk
via the python sdk. This is what I have so far and it fails:
disk_attachment = disk_attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(
            name='API_Disk_Cinder_test',
            format=types.DiskFormat.RAW,
            provisioned_size=12 * 2**30,
            storage_domains=[
                types.StorageDomain(
                    name='ovirt-cinder',
                ),
                types.OpenStackVolumeType('cinder')
            ],
        ),
        interface=types.DiskInterface.VIRTIO,
        bootable=False,
        active=True,
    ),
)
Doing it similarly with an NFS backend works so I know the connection
details are correct, so I didn't bother including them.
Regards,
Logan
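For comparison, a variant that has worked against the v4 API in similar setups passes the volume type on the Disk itself (via the `openstack_volume_type` attribute of `ovirtsdk4.types.Disk`) rather than inside `storage_domains`. This is a hedged sketch, untested against a live engine: `'ovirt-cinder'` and `'ceph'` are placeholders for the Cinder storage domain name and a volume type actually defined on the Cinder provider.

```python
# Hedged sketch against ovirtsdk4 (v4 API); names are placeholders.
import ovirtsdk4.types as types

disk_attachment = disk_attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(
            name='API_Disk_Cinder_test',
            format=types.DiskFormat.RAW,
            provisioned_size=12 * 2**30,  # 12 GiB
            storage_domains=[
                types.StorageDomain(name='ovirt-cinder'),
            ],
            # Volume type goes on the Disk, not inside storage_domains:
            openstack_volume_type=types.OpenStackVolumeType(name='ceph'),
        ),
        interface=types.DiskInterface.VIRTIO,
        bootable=False,
        active=True,
    ),
)
```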
[ANN] oVirt 4.1.0 First Beta Release is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Beta Release of oVirt 4.1.0 for testing, as of December 1st, 2016
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.2 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 24 (tech preview)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Live ISO will be available soon
- oVirt Node NG ISO will be available soon
- Hosted Engine appliance will be available soon
- The above delay is due to the need to build them after the initial
release from the live repositories.
An initial release management page including planned schedule is also
available[4]
Additional Resources:
* Read more about the oVirt 4.1.0 beta release highlights:
http://www.ovirt.org/release/4.1.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.0/
[4]
http://www.ovirt.org/develop/release-management/releases/4.1/release-mana...
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Re: [ovirt-users] Ovirt node host installation failing in engine
by Yedidyah Bar David
(Replying to the list and Fabian as well)
On Wed, Nov 30, 2016 at 12:45 PM, Shalabh Goel <shalabhgoel13(a)gmail.com> wrote:
> Thanks for the reply. I looked into the logs in ovirt-engine server. The log
> is as follows:
>
> File "/usr/lib/python2.7/site-packages/yum/repos.py", line 344, in
> populateSack
> self.doSetup()
> File "/usr/lib/python2.7/site-packages/yum/repos.py", line 158, in doSetup
> self.ayum.plugins.run('postreposetup')
> File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run
> func(conduitcls(self, self.base, conf, **kwargs))
> File "/usr/lib/yum-plugins/fastestmirror.py", line 197, in
> postreposetup_hook
> if downgrade_ftp and _len_non_ftp(repo.urls) == 1:
> File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 871, in
> <lambda>
> urls = property(fget=lambda self: self._geturls(),
> File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 868, in
> _geturls
> self._baseurlSetup()
> File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 834, in
> _baseurlSetup
> self.check()
> File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 554, in check
> 'Cannot find a valid baseurl for repo: %s' % self.ui_id
> RepoError: Cannot find a valid baseurl for repo: ovirt-4.0/7
> 2016-11-24 11:24:24 ERROR otopi.context context._executeMethod:151 Failed to
> execute stage 'Environment packages setup': Cannot find a valid baseurl for
> repo
> : ovirt-4.0/7
> 2016-11-24 11:24:24 DEBUG otopi.transaction transaction.abort:119 aborting
> 'Yum Transaction'
> 2016-11-24 11:24:24 INFO otopi.plugins.otopi.packagers.yumpackager
> yumpackager.info:80 Yum Performing yum transaction rollback
> Could not retrieve mirrorlist
> http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-4.0-el7 error was
> 14: curl#7 - "Failed to connect to 66.187.230.28: Network is unreachable"
> Loaded plugins: fastestmirror
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:760
> ENVIRONMENT DUMP - BEGIN
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/error=bool:'True'
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/exceptionInfo=list:'[(<class 'yum.Errors.RepoError'>, RepoError(),
> <traceback ob
> ject at 0x2e88c20>)]'
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:774
> ENVIRONMENT DUMP - END
> 2016-11-24 11:24:24 INFO otopi.context context.runSequence:687 Stage:
> Pre-termination
> 2016-11-24 11:24:24 DEBUG otopi.context context.runSequence:691 STAGE
> pre-terminate
> 2016-11-24 11:24:24 DEBUG otopi.context context._executeMethod:128 Stage
> pre-terminate METHOD otopi.plugins.otopi.core.misc.Plugin._preTerminate
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:760
> ENVIRONMENT DUMP - BEGIN
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/aborted=bool:'False'
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/debug=int:'0'
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/error=bool:'True'
> 2016-11-24 11:24:24 DEBUG otopi.context context.dumpEnvironment:770 ENV
> BASE/exceptionInfo=list:'[(<class 'yum.Errors.RepoError'>, RepoError(),
> <traceback ob
> :
>
> So my question is: is yum the reason the host install is failing? Do I
> absolutely need Internet connectivity for the node as well? Could the
> python file be edited to prevent this? I installed the node using the
> node ISO available on the oVirt website.
Not sure about node specifically.
You can try preventing host-deploy from trying to update packages using:
https://gerrit.ovirt.org/gitweb?p=ovirt-host-deploy.git;a=blob;f=README
Search there for 'OFFLINE'.
Best,
>
> Thank You
>
> Shalabh Goel
>
> On Wed, Nov 30, 2016 at 2:03 PM, Yedidyah Bar David <didi(a)redhat.com> wrote:
>>
>> On Wed, Nov 30, 2016 at 9:44 AM, Shalabh Goel <shalabhgoel13(a)gmail.com>
>> wrote:
>> > Hi
>> >
>> > I just want to know if there is any way I can disable the yum looking
>> > for
>> > updates on the internet on the node?
>>
>> Not sure, adding Fabian.
>>
>> I think this should work if you add a proxy= line to yum.conf.
>> Not sure how to persist this in node.
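The proxy= suggestion above can be sketched as a yum.conf fragment. This is a hedged example: the proxy host, port, and credentials are placeholders, and (as noted) persisting the change on oVirt Node may need extra steps.

```ini
# /etc/yum.conf -- placeholders, replace with your site's proxy
[main]
proxy=http://proxy.example.com:3128
# Only if the proxy requires authentication:
proxy_username=yumuser
proxy_password=secret
```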
>>
>> Best,
>>
>> >
>> > Please help me out here.
>> >
>> > Thanks
>> >
>> > Shalabh Goel
>> >
>> >
>> >
>> >
>> > On Tue, Nov 29, 2016 at 5:59 PM, <users-request(a)ovirt.org> wrote:
>> >>
>> >> Send Users mailing list submissions to
>> >> users(a)ovirt.org
>> >>
>> >> To subscribe or unsubscribe via the World Wide Web, visit
>> >> http://lists.ovirt.org/mailman/listinfo/users
>> >> or, via email, send a message with subject or body 'help' to
>> >> users-request(a)ovirt.org
>> >>
>> >> You can reach the person managing the list at
>> >> users-owner(a)ovirt.org
>> >>
>> >> When replying, please edit your Subject line so it is more specific
>> >> than "Re: Contents of Users digest..."
>> >>
>> >>
>> >> Today's Topics:
>> >>
>> >> 1. Re: I wrote an oVirt thing (Yaniv Kaul)
>> >> 2. Ovirt node host installation failing in engine (Shalabh Goel)
>> >> 3. Re: How to migrate Self Hosted Engine (Gianluca Cecchi)
>> >> 4. Re: Need help with dashboard (????? ???????)
>> >>
>> >>
>> >> ----------------------------------------------------------------------
>> >>
>> >> Message: 1
>> >> Date: Tue, 29 Nov 2016 14:06:22 +0200
>> >> From: Yaniv Kaul <ykaul(a)redhat.com>
>> >> To: Konstantin Shalygin <k0ste(a)k0ste.ru>
>> >> Cc: Ovirt Users <users(a)ovirt.org>
>> >> Subject: Re: [ovirt-users] I wrote an oVirt thing
>> >> Message-ID:
>> >>
>> >> <CAJgorsY_SksEnP2jQxBgdJyKW=4_qZZGZ=DwHQ13MteEN5jG5w(a)mail.gmail.com>
>> >> Content-Type: text/plain; charset="utf-8"
>> >>
>> >> On Tue, Nov 29, 2016 at 3:40 AM, Konstantin Shalygin <k0ste(a)k0ste.ru>
>> >> wrote:
>> >>
>> >> > Will ovirt-shell be deprecated and unsupported entirely, or only
>> >> > some functions of ovirt-shell (or the whole ovirt-engine-cli
>> >> > package)?
>> >> >
>> >> > We use ovirt-shell on client desktops that connect to SPICE consoles
>> >> > (users provided by LDAP on ovirt-engine), much like RDP. For this I
>> >> > wrote a very quick-hack patch for ovirt-shell and a GUI for entering
>> >> > the password
>> >> > (https://github.com/k0ste/ovirt-pygtk). Very simple, but it lets
>> >> > people over the Internet use SPICE instead of RDP, without the
>> >> > drawbacks of packet loss and disconnects.
>> >>
>> >>
>> >> Can you further explain the use case? I assume the user portal is not
>> >> good
>> >> enough for some reason?
>> >>
>> >>
>> >> >
>> >> >
>> >> > BTW, the ovirt-shell is something we deprecated. It is working on top
>> >> > of
>> >> >> the v3 api, which we plan to remove in 4.2.
>> >> >> So better not use it.
>> >> >>
>> >> >
>> >> >
>> >> > You can start maintaining it. For example, I maintain packages for
>> >> > Arch Linux: ovirt-engine-cli
>> >> > (https://aur.archlinux.org/packages/ovirt-engine-cli) and
>> >> > ovirt-engine-sdk-python
>> >> > (https://aur.archlinux.org/packages/ovirt-engine-sdk-python).
>> >>
>> >>
>> >> Hi,
>> >>
>> >> It somehow looks like a fork of the CLI (due to the added patch[1]).
>> >> I'm not sure how happy I am about it, considering the patch is adding a
>> >> feature with security issues (there is a reason we do not support
>> >> password
>> >> passed via the command line - it's somewhat less secure).
>> >> Since you are already checking for the CLI rc file[2], just add the
>> >> password to it and launch with it (in a temp file in the temp directory
>> >> with the right permissions, etc...)
>> >>
>> >> BTW, note that the attempt to delete the password from memory[3] may or
>> >> may
>> >> not work. After all, it's a copy of what you got from entry.get_text()
>> >> few
>> >> lines before.
>> >> And Python GC is not really to be relied upon to delete things ASAP
>> >> anyway.
>> >> There are some lovely discussions on the Internet about it. For
>> >> example[4].
>> >> Y.
>> >>
>> >> [1]
>> >>
>> >> https://github.com/k0ste/ovirt-pygtk/blob/master/add_password_option.patch
>> >> [2] https://github.com/k0ste/ovirt-pygtk/blob/master/ovirt-pygtk.py#L81
>> >> [3] https://github.com/k0ste/ovirt-pygtk/blob/master/ovirt-pygtk.py#L71
>> >> [4]
>> >>
>> >>
>> >> http://stackoverflow.com/questions/728164/securely-erasing-password-in-me...
>> >>
>> >> >
>> >> >
>> >> > My workstation at work is running Ubuntu, and I do not believe that
>> >> >> ovirt-shell is packaged for it.
>> >> >>
>> >> >
>> >> > --
>> >> > Best regards,
>> >> > Konstantin Shalygin
>> >> >
>> >> >
>> >> >
>> >> > _______________________________________________
>> >> > Users mailing list
>> >> > Users(a)ovirt.org
>> >> > http://lists.ovirt.org/mailman/listinfo/users
>> >> >
>> >>
[ANN] oVirt 4.0.6 Third Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of oVirt 4.0.6
third release candidate for testing, as of December 1st, 2016.
This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the third release candidate of the sixth in a series of
stabilization updates to the 4.0 series.
4.0.6 brings 2 enhancements and 51 bugfixes, including 20 high- or
urgent-severity fixes, on top of the oVirt 4.0 series.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
* A new oVirt Live ISO is available. [4]
* A new oVirt Next Generation Node will be available soon [4]
* A new oVirt Engine Appliance is available for Red Hat Enterprise Linux
and CentOS Linux (or similar)
* Mirrors[5] might need up to one day to synchronize.
Additional Resources:
* Read more about the oVirt 4.0.6 release highlights:
http://www.ovirt.org/release/4.0.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.6/
[4] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Help! My hosted engine lost his nic!
by Cristian Mammoli
Sorry for the repost, I don't know if the previous message got to the
list...
Hi, I upgraded an oVirt installation a month ago to the latest 3.6.7.
Before that it was 3.6.0, if I remember correctly.
Everything went fine for a month or so.
A couple of days ago the default gateway got rebooted and the
physical server hosting the HE decided to shut down the VM because it
could not ping the gateway.
The other host restarted the HE VM, but it now has *no NIC*.
As a workaround I attached a virtio NIC via virsh, but every time the VM
gets restarted the NIC gets lost.
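The virsh workaround mentioned above would look roughly like this. A sketch: the MAC address and bridge name are the ones that appear in the vm.conf extracted from the storage domain, and without --persistent the interface only lasts until the VM is restarted, which matches the behavior described.

```shell
# Hot-attach a virtio NIC on the ovirtmgmt bridge to the running HE VM.
virsh -c qemu:///system attach-interface HostedEngine bridge ovirtmgmt \
      --model virtio --mac 00:16:3e:7d:d8:27 --live
```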
After a bit of troubleshooting and digging this is what I found:
This is the /var/run/ovirt-hosted-engine-ha/vm.conf which, as far as I
understand, gets extracted from the HE storage domain
emulatedMachine=pc
vmId=497f5e4a-0c76-441a-b72e-724d7092d07e
smp=2
memSize=6144
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
vmName=HostedEngine
display=vnc
devices={index:0,iface:virtio,format:raw,bootOrder:1,address:{slot:0x06,bus:0x00,domain:0x0000,type:pci,function:0x0},volumeID:bb3218ba-cbe9-4cd0-b50b-931deae992f7,imageID:d65b82e2-2ad1-4f4f-bfad-0277c37f2808,readonly:false,domainID:2c3585cc-b7bc-4881-85b3-aa6514991a26,deviceId:d65b82e2-2ad1-4f4f-bfad-0277c37f2808,poolID:00000000-0000-0000-0000-000000000000,device:disk,shared:exclusive,propagateErrors:off,type:disk}
devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk}
devices={device:cirrus,alias:video0,type:video,deviceId:a99468b6-02d4-4a77-8f94-e5df806030f6,address:{slot:0x02,bus:0x00,domain:0x0000,type:pci,function:0x0}}
devices={device:virtio-serial,type:controller,deviceId:b7580676-19fb-462f-a61e-677b65ad920a,address:{slot:0x03,bus:0x00,domain:0x0000,type:pci,function:0x0}}
devices={device:usb,type:controller,deviceId:c63092b3-7bd8-4b54-bcd3-51f34dce478a,address:{slot:0x01,bus:0x00,domain:0x0000,type:pci,function:0x2}}
devices={device:ide,type:controller,deviceId:c77c2c01-6ccc-404b-b8d6-5a7f0631a52f,address:{slot:0x01,bus:0x00,domain:0x0000,type:pci,function:0x1}}
As you can see there is no nic, and there is no nic in the qemu-kvm
command-line:
qemu 23290 1 14 00:23 ? 01:44:26 /usr/libexec/qemu-kvm
-name HostedEngine -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
-cpu qemu64,-svm -m 6144 -realtime mlock=off
-smp 2,sockets=2,cores=1,threads=1
-uuid 497f5e4a-0c76-441a-b72e-724d7092d07e
-smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-004B-5710-8044-B9C04F5A3732,uuid=497f5e4a-0c76-441a-b72e-724d7092d07e
-no-user-config -nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-HostedEngine/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control
-rtc base=2016-11-30T23:23:26,driftfix=slew
-global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x3
-drive file=/var/run/vdsm/storage/2c3585cc-b7bc-4881-85b3-aa6514991a26/d65b82e2-2ad1-4f4f-bfad-0277c37f2808/bb3218ba-cbe9-4cd0-b50b-931deae992f7,if=none,id=drive-virtio-disk0,format=raw,serial=d65b82e2-2ad1-4f4f-bfad-0277c37f2808,cache=none,werror=stop,rerror=stop,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/497f5e4a-0c76-441a-b72e-724d7092d07e.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/497f5e4a-0c76-441a-b72e-724d7092d07e.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/497f5e4a-0c76-441a-b72e-724d7092d07e.org.ovirt.hosted-engine-setup.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
-vnc 0:0,password
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2
-msg timestamp=on
I extracted the vm.conf from the storage domain and the nic is there:
vmId=497f5e4a-0c76-441a-b72e-724d7092d07e
memSize=6144
display=vnc
devices={index:2,iface:ide,address:{controller:0, target:0, unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:857b98b3-cf43-4c2d-8061-e7f105234a65,path:,device:cdrom,shared:false,type:disk}
devices={index:0,iface:virtio,format:raw,poolID:00000000-0000-0000-0000-000000000000,volumeID:bb3218ba-cbe9-4cd0-b50b-931deae992f7,imageID:d65b82e2-2ad1-4f4f-bfad-0277c37f2808,specParams:{},readonly:false,domainID:2c3585cc-b7bc-4881-85b3-aa6514991a26,optional:false,deviceId:d65b82e2-2ad1-4f4f-bfad-0277c37f2808,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk,bootOrder:1}
devices={device:scsi,model:virtio-scsi,type:controller}
devices={nicModel:pv,macAddr:00:16:3e:7d:d8:27,linkActive:true,network:ovirtmgmt,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:5be8a089-9f51-46dc-a8bd-28422985aa35,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}
devices={device:console,specParams:{},type:console,deviceId:1644f556-a4ff-4c93-8945-5aa165de2a85,alias:console0}
vmName=HostedEngine
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp=2
cpuType=SandyBridge
emulatedMachine=pc
The local vm.conf gets continuously overwritten, but for some reason the
nic line gets lost in the process.
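A quick way to check whether a given vm.conf copy still carries the nic is to scan its devices= lines for a type:interface entry. This is only a sketch: vm.conf is one key=value pair per line, the sample below is abridged from the conf quoted above, and has_nic() is a hypothetical helper, not a vdsm or oVirt tool.

```python
# Abridged vm.conf sample (shape taken from the dump above).
sample_vm_conf = """\
memSize=6144
devices={device:scsi,model:virtio-scsi,type:controller}
devices={nicModel:pv,macAddr:00:16:3e:7d:d8:27,network:ovirtmgmt,device:bridge,type:interface}
vmName=HostedEngine
"""

def has_nic(conf_text):
    # A nic shows up as a devices={...} entry whose type field is "interface".
    return any(
        line.startswith("devices=") and "type:interface" in line
        for line in conf_text.splitlines()
    )

assert has_nic(sample_vm_conf)
assert not has_nic("memSize=6144\nvmName=HostedEngine\n")
```

Running it against both the storage-domain copy and the local copy would show exactly which rewrite drops the interface device.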
Windows Server 2008 & 2016 Drivers
by Bill Bill
Hello,

Is there a proper guide for getting virtio drivers to load? I've been trying to get Windows Server
2008 & 2016 installed with a Virtio based disk but no drivers appear to work.
Network usage in ovirt dashboard?
by Derek Atkins
Is there a way to configure what data gets shown in the dashboard?
Specifically, I'd like to see network usage.
It would be nice to see total network usage on the dashboard, and have a
graph of per-VM network usage as well.
The closest I've found is the "Total" columns on the Network -> Virtual
Machines page. I'd like to see something like the CPU/RAM/Storage
(over)commit graphs.
Is there any way to do this?
-derek
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
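For reference, the raw per-NIC counters behind such a graph are exposed by the REST API's statistics sub-collection (e.g. /ovirt-engine/api/vms/{vm}/nics/{nic}/statistics). A minimal parsing sketch, assuming the usual oVirt 4 statistics XML shape; the sample response below is hand-made, not captured from a live engine:

```python
import xml.etree.ElementTree as ET

# Hand-made sample shaped like an oVirt 4 nic statistics response.
sample = """\
<statistics>
  <statistic id="a1">
    <name>data.current.rx</name>
    <values><value><datum>1048576</datum></value></values>
  </statistic>
  <statistic id="a2">
    <name>data.current.tx</name>
    <values><value><datum>524288</datum></value></values>
  </statistic>
</statistics>
"""

def nic_rates(xml_text):
    """Return {statistic-name: value} for each counter in the response."""
    rates = {}
    for stat in ET.fromstring(xml_text).findall("statistic"):
        rates[stat.findtext("name")] = float(stat.findtext("values/value/datum"))
    return rates

rates = nic_rates(sample)
assert rates["data.current.rx"] == 1048576.0
assert rates["data.current.tx"] == 524288.0
```

Polling that endpoint per VM and feeding the numbers to an external grapher would get the per-VM network chart even if the dashboard itself can't be configured to show it.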
Python API ovirtsdk4 cloud-init script file upload issue
by Allen Swackhamer
I am attempting to run a VM with a cloud-init script, but can't seem to get
the syntax right with the python ovirtsdk4 library. We are running oVirt 4.
My cloud-init script never appears to run, and the users option does not
work either. If I use root_password and user_name on types.Initialization
(as in the example below) it generates the correct users, but adding
the types.CloudInit piece breaks it.
types.Initialization(
    user_name=cloud_init_username,
    root_password=cloud_init_password,
    regenerate_ssh_keys=True,
    host_name="testingcloudinit",
    nic_configurations=[
        types.NicConfiguration(
            boot_protocol=types.BootProtocol.DHCP,
            name='ens3',
            on_boot=True
        )
    ],
    custom_script=cloud_init_script
)
What I currently have is below. cloud_init_script is the script I want to
run, as a string. The REST API documentation shows that the file should be
uploaded as a CDATA entry, but I am unsure how to ensure that from python.
Is there a way to get the full XML of the call in python, to verify API
compliance?
vm_service.start(
    use_cloud_init=True,
    vm=types.Vm(
        initialization=types.Initialization(
            nic_configurations=[types.NicConfiguration(
                boot_protocol=types.BootProtocol.DHCP,
                name="ens3",
                on_boot=True
            )],
            cloud_init=types.CloudInit(
                files=[types.File(
                    name="initialrunner",
                    content=cloud_init_script,
                    type="plaintext"
                )],
                users=[types.User(
                    user_name=cloud_init_username,
                    password=cloud_init_password
                )]
            )
        )
    )
)
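On the CDATA question: my understanding (not verified against the SDK source) is that the SDK serializes the types to XML itself and escapes text content on the way out, so a manual CDATA wrapper shouldn't be needed. A stdlib sketch of the underlying idea, using an illustrative element name rather than the SDK's actual wire format:

```python
import xml.etree.ElementTree as ET

# A script body containing characters that are special in XML.
cloud_init_script = "#!/bin/sh\necho 'hello & goodbye' > /tmp/ci.txt\n"

# Illustrative element names; the real SDK builds its own document.
root = ET.Element("initialization")
ET.SubElement(root, "custom_script").text = cloud_init_script

payload = ET.tostring(root, encoding="unicode")

# The '&' is escaped to '&amp;' in the serialized form, and the text
# round-trips back to the original script on parsing - no CDATA needed.
assert "&amp;" in payload
assert ET.fromstring(payload).find("custom_script").text == cloud_init_script
```

So passing the script as a plain python string (as in custom_script above) should be safe; the escaping happens at the XML layer, not in your code.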
Documentation that I have been using:
http://www.ovirt.org/develop/release-management/features/cloud/cloud-init-integration/ (seems old, and for an earlier ovirtsdk version)
http://cloudinit.readthedocs.io/en/latest/topics/examples.html#run-commands-on-first-boot (official cloud-init documentation)
https://www.mail-archive.com/users@ovirt.org/msg35549.html (message board conversation about this REST API)
http://lists.ovirt.org/pipermail/users/2014-September/027198.html (about the cloud-init script specifically)
http://lists.ovirt.org/pipermail/users/2014-November/028878.html (appears to be for an old version of ovirtsdk)
http://www.ovirt.org/develop/api/rest-api/rest-api/#How_can_I_run_a_custom_script_using_cloud-init.3F (appears to be up to date, but only documents the REST API)