Re: [ovirt-users] Is it possible to disable qxl video?


On 11/07/16 09:46 +0000, Karli Sjöberg wrote:
On 11 Jul 2016, 11:37 AM, Arman Khalatyan <arm2arm@gmail.com> wrote:
I am testing several GPUs with PCI passthrough: GTX 630, Tesla 2050, and FX 5800. I could not get any of them running with OpenGL/direct rendering inside the virtual machine. I suspected it had something to do with the qxl driver, but blacklisting it in the VM's kernel changed nothing. The VM's Xorg always claims no display is found, even if I configure it with the headless option: nvidia-xconfig -a --use-display-device=None --virtual=1280x1024. But with nvidia-smi I can see the GPU status, and I can run all CUDA tests where OpenGL/X11 is not involved. My test system is CentOS 7 with oVirt 4. BTW, if I clone a VM that was not running to a real bare-metal host, everything runs as expected.
I don't remember if this was mentioned, but have you tried doing the same thing with virt-manager on a standalone host, just to rule oVirt out of the equation?

/K
Also, Xorg must be set up to use the VFIO GPU. There is a BusID option in the Device section that must point to the guest address of the GPU. I'm not sure if you've done that (and I'm quite sure it's missing from the wiki, noted).

[in the guest]
$ lspci | grep VGA
03:00.0 VGA compatible controller: NVIDIA Corporation GM107GL [Quadro K2200] (rev a2)

Then, using that address, add a Device section to /etc/X11/xorg.conf:

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BusID "PCI:3:0:0"
EndSection

That should allow Xorg to use the assigned GPU.

mpolednik
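To double-check that Xorg actually bound the passthrough card, something like this should work (a rough sketch; /var/log/Xorg.0.log is the usual default location, adjust if your distribution logs elsewhere):

$ grep -iE 'nvidia|busid' /var/log/Xorg.0.log
$ nvidia-smi --query-gpu=name,pci.bus_id --format=csv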
On Mon, Jul 11, 2016 at 9:06 AM, Martin Polednik <mpolednik@redhat.com> wrote:
On 08/07/16 11:29 +0200, Michal Skrivanek wrote:
On 06 Jul 2016, at 14:06, Francesco Romani <fromani@redhat.com> wrote:
From: "Arman Khalatyan" <arm2arm@gmail.com> To: "users" <users@ovirt.org> Sent: Monday, July 4, 2016 11:52:24 AM Subject: [ovirt-users] Is it possible to disable qxl video?
Hi,
I am doing PCI passthrough for GPUs. Is it possible somehow to disable/remove the default qxl video device?
Thanks,
Arman

Hi,
The 4.0 Vdsm allows you to run a headless VM, meaning a VM without a graphics device/frontend. I'm not sure Engine allows you that, however.
No, it doesn't.
Even if engine allowed that, it may not be the correct solution.
Running headless with PCI passthrough is considered primary VGA passthrough[1]. Using primary VGA passthrough without OVMF could cause (will cause) quite a bit of headaches.
If you really want to try that, I assume using a before_vm_start hook to remove the video device should be enough.
[1] http://wiki.xenproject.org/wiki/Xen_VGA_Passthrough - it's possible to read about primary and secondary passthrough between the lines
However, if QXL is the problem (why is that so?), then you can switch to VGA (or CIRRUS if this is still 3.6).
I'm afraid you will need to write a Vdsm hook.
There could be simpler solutions depending on the Engine
Bests,
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani

Yes, I have a BusID inside xorg.conf. I just tested with virt-manager, and indeed it is not an oVirt failure; it is something to do with qemu-kvm + nvidia. I found here: https://wiki.debian.org/VGAPassthrough that nvidia disables the device when it detects it is running on qemu-kvm. Is that still the case? To overcome it, one needs to set:

-cpu host,kvm=off

on the qemu command line. I was trying to convert the virsh commands to native qemu, but it fails on CentOS 7 :(

virsh dumpxml > render.xml
virsh domxml-to-native qemu-argv render.xml
error: internal error: invalid PCI passthrough type 'default'

On 11/07/16 17:16 +0200, Arman Khalatyan wrote:
nvidia disables the device when it detects it is running on qemu-kvm. Is that still the case? To overcome it, one needs to set -cpu host,kvm=off on the qemu command line.
This applies to GPUs that NVIDIA does not support in virtualized environments (in your case, the GTX 630). Quadro K2200 and higher (incl. GRID, Tesla) should not require masking the hypervisor. It could be worth supplying dmesg output from the host and from the guest so we can see whether there is any apparent failure. That being said, I have had failures even with a supported card that just stopped working at one point. It's worth trying the kvm=off trick.
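Something along these lines on both the host and the guest should surface any driver or VFIO errors (the grep filter is just my guess at the relevant keywords):

$ dmesg | grep -iE 'nvrm|nvidia|vfio'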
I was trying to convert the virsh commands to native qemu, but it fails on CentOS 7 :(

virsh dumpxml > render.xml
virsh domxml-to-native qemu-argv render.xml
error: internal error: invalid PCI passthrough type 'default'
I usually take the qemu command line from /var/log/libvirt/qemu/${VM_NAME}, tweak it in vim and launch. Works even for oVirt VMs (if you can hack around the storage).
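For example, roughly (assuming the log file is named after the VM, as it is by default; the last VM start is logged together with the full qemu command line):

$ tail -n 40 /var/log/libvirt/qemu/render.log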

Thanks for the clarification. kvm=off solved the problem. NICE!!!

Inside the VM I now get:

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GT 630/PCIe/SSE2
OpenGL core profile version string: 4.4.0 NVIDIA 367.27
OpenGL core profile shading language version string: 4.40 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)

The steps to reproduce:

1) virsh edit render
2) change the schema of the XML:
   <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
3) add the following at the end, before </domain>:
   <qemu:commandline>
     <qemu:arg value='-cpu'/>
     <qemu:arg value='host,kvm=off'/>
   </qemu:commandline>
4) start the VM, install the nvidia driver, check the GPU: nvidia-smi
5) install VirtualGL, then init 3
6) nvidia-xconfig --enable-all-gpus --separate-x-screens --use-display-device=None --virtual=1280x1024, then init 5
7) vglconnect render01.vm

and finally: vglrun glxinfo or whatever!!!

Now back to the origin: how do we add this tweak into oVirt? :)
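A quick one-liner to spot-check the renderer, equivalent to the glxinfo output above:

$ vglrun glxinfo | grep -E 'OpenGL (vendor|renderer) string'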
On 11/07/16 18:03 +0200, Arman Khalatyan wrote:
Now back to the origin: how do we add this tweak into oVirt? :)
We have hooks just for that! You'll want the before_vm_start hook point. Some idea of how the hook could look:

$ pwd
/usr/libexec/vdsm/hooks/before_vm_start
$ cat 99_maskkvm
#!/usr/bin/python

import hooking

if __name__ == '__main__':
    # Read the domain XML that vdsm is about to hand to libvirt.
    domxml = hooking.read_domxml()
    domain = domxml.getElementsByTagName('domain')[0]

    # Build <qemu:commandline> carrying the extra qemu arguments.
    # (Note: libvirt also needs the qemu XML namespace on the <domain>
    # element -- see step 2 of Arman's recipe earlier in the thread.)
    cmdline = domxml.createElement('qemu:commandline')
    for arg in ('-cpu', 'host,kvm=off'):
        cmdline_arg = domxml.createElement('qemu:arg')
        cmdline_arg.setAttribute('value', arg)
        cmdline.appendChild(cmdline_arg)
    domain.appendChild(cmdline)

    # Hand the modified XML back to vdsm.
    hooking.write_domxml(domxml)

The hook must be installed on every host that you expect to run passthrough VMs.

mpolednik
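Deployment is roughly this (assuming the hook directory above; vdsm hooks need to be executable):

$ install -m 755 99_maskkvm /usr/libexec/vdsm/hooks/before_vm_start/99_maskkvm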
participants (3)
- Arman Khalatyan
- Karli Sjöberg
- Martin Polednik