Re: [Users] Extremely poor disk access speeds in Windows guest

Steve,

What is the CPU load of the GlusterFS host when comparing the raw brick test to the gluster mount point test? Give it 30 seconds and see what top reports. You'll probably have to significantly increase the count on the test so that it runs that long.

- Nick

From: Sanjay Rao
Sent: Friday, January 24, 2014 3:35 PM
To: Steve Dainard
Cc: Bob Sibley, oVirt Mailing List, Ronen Hod

Adding Bob Sibley to this thread.

From: "Steve Dainard" <sdainard@miovision.com>
To: "Andrew Cathrow" <acathrow@redhat.com>
Cc: "Ronen Hod" <rhod@redhat.com>, "users" <users@ovirt.org>, "Sanjay Rao" <srao@redhat.com>, "Itamar Heim" <iheim@redhat.com>
Sent: Friday, January 24, 2014 3:01:25 PM
Subject: Re: [Users] Extremely poor disk access speeds in Windows guest

Not sure what a good method to bench this would be, but:

An NFS mount point on virt host:
[root@ovirt001 iso-store]# dd if=/dev/zero of=test1 bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.95399 s, 104 MB/s

Raw brick performance on gluster server (yes, I know I shouldn't write directly to the brick):
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.06743 s, 134 MB/s

Gluster mount point on gluster server:
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 19.5766 s, 20.9 MB/s

The storage servers are a bit older, but are both dual socket quad core opterons with 4x 7200rpm drives.

I'm in the process of setting up a share from my desktop and I'll see if I can bench between the two systems. Not sure if my ssd will impact the tests, I've heard there isn't an advantage using ssd storage for glusterfs.

Does anyone have a hardware reference design for glusterfs as a backend for virt? Or is there a benchmark utility?

Steve Dainard

On Thu, Jan 23, 2014 at 7:18 PM, Andrew Cathrow <acathrow@redhat.com> wrote:

Are we sure that the issue is the guest I/O - what's the raw performance on the host accessing the gluster storage?

From: "Steve Dainard" <sdainard@miovision.com>
To: "Itamar Heim" <iheim@redhat.com>
Cc: "Ronen Hod" <rhod@redhat.com>, "users" <users@ovirt.org>, "Sanjay Rao" <srao@redhat.com>
Sent: Thursday, January 23, 2014 4:56:58 PM
Subject: Re: [Users] Extremely poor disk access speeds in Windows guest

I have two options, virtio and virtio-scsi.

I was using virtio, and have also attempted virtio-scsi on another Windows guest with the same results.

Using the newest drivers, virtio-win-0.1-74.iso.

Steve Dainard

On Thu, Jan 23, 2014 at 4:24 PM, Itamar Heim <iheim@redhat.com> wrote:

On 01/23/2014 07:46 PM, Steve Dainard wrote:

Backing Storage: Gluster Replica
Storage Domain: NFS
Ovirt Hosts: CentOS 6.5
Ovirt version: 3.3.2
Network: GigE
# of VM's: 3 - two Linux guests are idle, one Windows guest is installing updates.

I've installed a Windows 2008 R2 guest with virtio disk, and all the drivers from the latest virtio iso. I've also installed the spice agent drivers.

Guest disk access is horribly slow. Resource monitor during Windows updates shows Disk peaking at 1MB/sec (scale never increases) and Disk Queue Length peaking at 5 and looks to be sitting at that level 99% of the time. 113 updates in Windows has been running solidly for about 2.5 hours and is at 89/113 updates complete.

virtio-block or virtio-scsi?
which windows guest driver version for that?

I can't say my Linux guests are blisteringly fast, but updating a guest from RHEL 6.3 fresh install to 6.5 took about 25 minutes.

If anyone has any ideas, please let me know - I haven't found any tuning docs for Windows guests that could explain this issue.

Thanks,

Steve Dainard

--
Sanjay Rao
Principal Performance Engineer    Phone: 978-392-2479
Red Hat, Inc.                     FAX: 978-392-1001
314 Littleton Road                Email: srao@redhat.com
Westford, MA 01886


On 01/26/2014 02:37 AM, Steve Dainard wrote:
Thanks for the responses everyone, really appreciate it.
I've condensed the other questions into this reply.
Steve, What is the CPU load of the GlusterFS host when comparing the raw brick test to the gluster mount point test? Give it 30 seconds and see what top reports. You’ll probably have to significantly increase the count on the test so that it runs that long.
- Nick
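A minimal way to do that - the mount path and count below are illustrative, not taken from the setup above - is to push a longer dd into the background and sample top in batch mode while it runs:

    # adjust the mount path and count for the volume under test
    dd if=/dev/zero of=/mnt/rep2/ddtest bs=4k count=2000000 conv=fdatasync &
    top -b -d 5 -n 6 | grep -E 'glusterfs|dd'   # six 5-second samples (~30 s)
    wait; rm -f /mnt/rep2/ddtest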
Gluster mount point:
*4K* on GLUSTER host:
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s

Top reported this right away:
  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 1826 root  20   0  294m  33m 2540 S  27.2  0.4   0:04.31 glusterfs
 2126 root  20   0 1391m  31m 2336 S  22.6  0.4  11:25.48 glusterfsd

Then at about 20+ seconds top reports this:
  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 1826 root  20   0  294m  35m 2660 R 141.7  0.5   1:14.94 glusterfs
 2126 root  20   0 1392m  31m 2344 S  33.7  0.4  11:46.56 glusterfsd
*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s

 7750 root  20   0  102m  648  544 R  50.3  0.0   0:01.52 dd
 7719 root  20   0     0    0    0 D   1.0  0.0   0:01.50 flush-253:2
Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s

  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 2126 root  20   0 1396m  31m 2360 S  40.5  0.4  13:28.89 glusterfsd
Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s

  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 2141 root  20   0  550m 184m 2840 R  84.6  2.3  16:43.10 glusterfs
 2126 root  20   0 1407m  30m 2368 S  49.8  0.4  13:49.07 glusterfsd
Interesting - it looks like if I use an NFS mount point, I incur a CPU hit on two processes instead of just the daemon. I also get much better performance if I'm not running dd (fuse) on the GLUSTER host.
The storage servers are a bit older, but are both dual socket quad core
opterons with 4x 7200rpm drives.
A block size of 4k is quite small, so the context-switch overhead involved with fuse is more noticeable.
Would it be possible to increase the block size for dd and test?
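For example - block size and count here are only an illustration, and conv=fdatasync is added so the final flush is included in the timing instead of measuring mostly page cache:

    dd if=/dev/zero of=/mnt/rep2/test1 bs=1M count=2000 conv=fdatasync
    dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=1M count=2000 conv=fdatasync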
I'm in the process of setting up a share from my desktop and I'll see if
I can bench between the two systems. Not sure if my ssd will impact the
tests, I've heard there isn't an advantage using ssd storage for glusterfs.
Do you have any pointers to this source of information? Typically glusterfs performance for virtualization workloads is bound by the slowest element in the entire stack. Usually storage/disks happen to be the bottleneck, and SSD storage does benefit glusterfs.
-Vijay
I had a couple of technical calls with RH (re: RHSS), and when I asked if SSDs could add any benefit I was told no. The context may have been a product comparison to other storage vendors, where they use SSDs for read/write caching, versus having an all-SSD storage domain (which I'm not proposing, but which is effectively what my desktop would provide).
Increasing bs against NFS mount point (gluster backend):
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k count=16000
16000+0 records in
16000+0 records out
2097152000 bytes (2.1 GB) copied, 19.1089 s, 110 MB/s
GLUSTER host top reports:
  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 2141 root  20   0  550m 183m 2844 R  88.9  2.3  17:30.82 glusterfs
 2126 root  20   0 1414m  31m 2408 S  46.1  0.4  14:18.18 glusterfsd
So roughly the same performance as 4k writes remotely. I'm guessing if I could randomize these writes we'd see a large difference.
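Something like this fio job would exercise random writes (parameters are illustrative and untested here; the file path is the NFS mount used above):

    fio --name=randwrite --filename=/mnt/rep2-nfs/fio-test \
        --rw=randwrite --bs=4k --size=1g --ioengine=libaio --direct=1 \
        --iodepth=16 --runtime=60 --time_based --group_reporting

Direct, queued 4k random writes like this should separate seek-bound behaviour from the sequential dd numbers above.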
Check this thread out: http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/ - it's quite dated but I remember seeing similar figures.
In fact, when I used FIO on a libgfapi-mounted VM I got slightly faster read/write speeds than on the physical box itself (I assume because of some level of caching). On NFS it was close to half. You'll probably get a little more interesting results using FIO as opposed to dd.
( -Andrew)
Sorry Andrew, I meant to reply to your other message - it looks like CentOS 6.5 can't use libgfapi right now; I stumbled across this info in a couple of threads. Something about how the CentOS build has different flags set at build time for RHEV snapshot support than RHEL, so native gluster storage domains are disabled because snapshot support is assumed and would break otherwise. I'm assuming this is still valid, as I cannot get a storage lock when I attempt a gluster storage domain.
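For what it's worth, one way to check whether the local qemu build even has gluster (libgfapi) support compiled in - the host and volume names below are assumptions, not from this setup:

    qemu-img --help | grep -i gluster
    qemu-img create -f raw gluster://gluster1/rep2/libgfapi-probe.img 64M

If the second command fails with an unknown protocol error, the build likely lacks the gluster block driver regardless of how the storage domain is configured.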
----------------------------------------------------------------------
I've set up an NFS storage domain on my desktop's SSD. I've re-installed Windows 2008 R2 and initially it was running smoother.
Disk performance peaks at 100MB/s.
If I copy a 250MB file from a share into the Windows VM, it writes out quickly, less than 5 seconds.
If I copy 20 files, ranging in size from 4k to 200MB and totaling 650MB, from the share, Windows becomes unresponsive; in top the desktop's NFS daemon is barely being touched at all, and then eventually is not hit. I can still interact with the VM's windows through the spice console. Eventually the file transfer will start and rocket through the transfer.
I've opened a 271MB zip file with 4454 files and started the extract process, but the progress window will sit on 'calculating...'. After a significant period of time the decompression starts and runs at <200KB/second. Windows is guesstimating 1HR completion time. Eventually even this freezes up, and my spice console mouse won't grab. I can still see the resource monitor in the Windows VM doing its thing, but I have to power off the VM as it's no longer usable.
The Windows update process is the same. It seems like when the guest needs quick large writes it's fine, but lots of I/O causes serious hanging, unresponsiveness, spice mouse cursor freezes, and eventually poweroff/reboot is the only way to get it back.
Also, during the Windows 2008 R2 install the 'expanding windows files' task is quite slow, roughly 1% progress every 20 seconds (~30 mins to complete). The GLUSTER host shows these stats pretty consistently:
  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 8139 root  20   0 1380m  28m 2476 R  83.1  0.4   8:35.78 glusterfsd
 8295 root  20   0  550m 186m 2980 S   4.3  2.4   1:52.56 glusterfs
bwm-ng v0.6 (probing every 2.000s), press 'h' for help
input: /proc/net/dev type: rate
       iface           Rx             Tx           Total
==============================================================================
          lo:   3719.31 KB/s   3719.31 KB/s   7438.62 KB/s
        eth0:   3405.12 KB/s   3903.28 KB/s   7308.40 KB/s
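For anyone repeating this, a rough way to watch network and per-disk load side by side during the copy (flags are illustrative; bwm-ng and sysstat's iostat are assumed to be installed on the gluster host):

    bwm-ng -o plain -t 2000 -c 15      # network rate, 2 s samples
    iostat -dxm 2 15                   # per-disk utilisation and MB/s on the brick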
I've copied the same zip file to an nfs mount point on the OVIRT host (gluster backend) and get about 25 - 600 KB/s during unzip. The same test on NFS mount point (desktop SSD ext4 backend) averaged a network transfer speed of 5MB/s and completed in about 40 seconds.
I have a RHEL 6.5 guest running on the NFS/gluster backend storage domain, and just did the same test. Extracting the file took 22.3 seconds (faster than the fuse mount point on the host !?!?).
GLUSTER host top reported this while the RHEL guest was decompressing the zip file:
  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 2141 root  20   0  555m 187m 2844 S   4.0  2.4  18:17.00 glusterfs
 2122 root  20   0 1380m  31m 2396 S   2.3  0.4  83:19.40 glusterfsd
Steve Dainard
IT Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)
Blog <http://miovision.com/blog> | LinkedIn <https://www.linkedin.com/company/miovision-technologies> | Twitter <https://twitter.com/miovision> | Facebook <https://www.facebook.com/miovision>

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If you are not the intended recipient, please delete the e-mail and any attachments and notify us immediately.
please note currently (>3.3.1), we don't use libgfapi on fedora as well, as we found some gaps in functionality in the libvirt libgfapi support for snapshots. once these are resolved, we can re-enable libgfapi on a glusterfs storage domain.

I've had a bit of luck here.

Overall IO performance is very poor during Windows updates, but a contributing factor seems to be the "SCSI Controller" device in the guest. This last install I didn't install a driver for that device, and my performance is much better. Updates still chug along quite slowly, but I seem to have more than the < 100KB/s write speeds I was seeing previously.

Does anyone know what this device is for? I have the "Red Hat VirtIO SCSI Controller" listed under storage controllers.

Steve Dainard

On Sun, Jan 26, 2014 at 2:33 AM, Itamar Heim <iheim@redhat.com> wrote:
Please note that currently (>3.3.1) we don't use libgfapi on Fedora either, as we found some gaps in functionality in the libvirt libgfapi support for snapshots. Once these are resolved, we can re-enable libgfapi on a glusterfs storage domain.
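As an aside, a rough way to confirm from a hypervisor whether a running guest is actually using libgfapi or a file on the mounted storage domain is to look at the qemu-kvm command line. The pattern below is only a sketch and assumes default vdsm mount paths.

# With libgfapi the disk is opened as gluster://host/volume/..., otherwise it
# is a plain path under the /rhev/data-center mount served by fuse or NFS.
ps -ef | grep [q]emu-kvm | tr ' ' '\n' | grep -E '^file=(gluster://|/rhev/data-center/)'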

Adding the virtio-scsi developers.
Anyhow, virtio-scsi is newer and less established than viostor (the block device), so you might want to try it out.
A disclaimer: there are time and patch gaps between RHEL and other versions.

Ronen.

On 01/28/2014 10:39 PM, Steve Dainard wrote:
I've had a bit of luck here.
Overall IO performance is very poor during Windows updates, but a contributing factor seems to be the "SCSI Controller" device in the guest. For this last install I didn't install a driver for that device, and my performance is much better. Updates still chug along quite slowly, but I seem to be getting better than the <100KB/s write speeds I was seeing previously.
Does anyone know what this device is for? I have the "Red Hat VirtIO SCSI Controller" listed under storage controllers.

On Wed, 2014-01-29 at 11:30 +0200, Ronen Hod wrote:
Adding the virtio-scsi developers. Anyhow, virtio-scsi is newer and less established than viostor (the block device), so you might want to try it out.
[VR] Was it "SCSI Controller" or "SCSI pass-through controller"? If it's "SCSI Controller" then it will be the viostor (virtio-blk) device driver.
A disclaimer: There are time and patches gaps between RHEL and other versions.
Ronen.
On 01/28/2014 10:39 PM, Steve Dainard wrote:
I've had a bit of luck here.
Overall IO performance is very poor during Windows updates, but a contributing factor seems to be the "SCSI Controller" device in the guest. This last install I didn't install a driver for that device,
[VR] Does it mean that your system disk is IDE and the data disk (virtio-blk) is not accessible?
and my performance is much better. Updates still chug along quite slowly, but I seem to have more than the < 100KB/s write speeds I was seeing previously.
Does anyone know what this device is for? I have the "Red Hat VirtIO SCSI Controller" listed under storage controllers.
[VR] It's a virtio-blk device. The OS cannot see this volume unless you have the viostor.sys driver installed for it.
On Sun, Jan 26, 2014 at 2:33 AM, Itamar Heim <iheim@redhat.com> wrote: On 01/26/2014 02:37 AM, Steve Dainard wrote:
Thanks for the responses everyone, really appreciate it.
I've condensed the other questions into this reply.
Steve, What is the CPU load of the GlusterFS host when comparing the raw brick test to the gluster mount point test? Give it 30 seconds and see what top reports. You’ll probably have to significantly increase the count on the test so that it runs that long.
- Nick
Gluster mount point:
*4K* on GLUSTER host
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s
Top reported this right away:
PID  USER PR NI  VIRT RES  SHR S %CPU %MEM    TIME+ COMMAND
1826 root 20  0  294m 33m 2540 S 27.2  0.4  0:04.31 glusterfs
2126 root 20  0 1391m 31m 2336 S 22.6  0.4 11:25.48 glusterfsd
Then at about 20+ seconds top reports this:
PID  USER PR NI  VIRT RES  SHR S  %CPU %MEM    TIME+ COMMAND
1826 root 20  0  294m 35m 2660 R 141.7  0.5  1:14.94 glusterfs
2126 root 20  0 1392m 31m 2344 S  33.7  0.4 11:46.56 glusterfsd
*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s
7750 root 20  0 102m 648 544 R 50.3 0.0 0:01.52 dd
7719 root 20  0    0   0   0 D  1.0 0.0 0:01.50 flush-253:2
Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s
PID  USER PR NI  VIRT RES  SHR S %CPU %MEM    TIME+ COMMAND
2126 root 20  0 1396m 31m 2360 S 40.5  0.4 13:28.89 glusterfsd
Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s
PID  USER PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
2141 root 20  0  550m 184m 2840 R 84.6  2.3 16:43.10 glusterfs
2126 root 20  0 1407m  30m 2368 S 49.8  0.4 13:49.07 glusterfsd
Interesting - it looks like if I use an NFS mount point, I incur a CPU hit on two processes instead of just the daemon. I also get much better performance if I'm not running dd against the fuse mount on the GLUSTER host.
The storage servers are a bit older, but are both dual socket quad core Opterons with 4x 7200rpm drives.
A block size of 4k is quite small, so the context-switch overhead involved with fuse would be more perceivable.
Would it be possible to increase the block size for dd and test?
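Following that suggestion, a larger-block dd run could look like the sketch below; the path and count are illustrative, and oflag=direct / conv=fsync are added only so the client page cache doesn't flatter the numbers.

# 1M sequential writes, bypassing the page cache (drop oflag=direct if the
# fuse mount rejects O_DIRECT)...
dd if=/dev/zero of=/mnt/rep2/ddtest bs=1M count=2000 oflag=direct
# ...or flush everything to disk before dd reports its rate.
dd if=/dev/zero of=/mnt/rep2/ddtest bs=1M count=2000 conv=fsync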
I'm in the process of setting up a share from my desktop and I'll see if I can bench between the two systems. Not sure if my SSD will impact the tests; I've heard there isn't an advantage to using SSD storage for glusterfs.
Do you have any pointers to this source of information? Typically glusterfs performance for virtualization workloads is bound by the slowest element in the entire stack. Usually storage/disks happen to be the bottleneck, and SSD storage does benefit glusterfs.
-Vijay
I had a couple of technical calls with RH (re: RHSS), and when I asked if SSDs could add any benefit I was told no. The context may have been a product comparison to other storage vendors, where they use SSDs for read/write caching, versus having an all-SSD storage domain (which I'm not proposing, but which is effectively what my desktop would provide).
Increasing bs against NFS mount point (gluster backend):
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k count=16000
16000+0 records in
16000+0 records out
2097152000 bytes (2.1 GB) copied, 19.1089 s, 110 MB/s
GLUSTER host top reports:
PID  USER PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
2141 root 20  0  550m 183m 2844 R 88.9  2.3 17:30.82 glusterfs
2126 root 20  0 1414m  31m 2408 S 46.1  0.4 14:18.18 glusterfsd
So roughly the same performance as 4k writes remotely. I'm guessing if I could randomize these writes we'd see a large difference.
Check this thread out,
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration... it's quite dated but I remember seeing similar figures.
In fact, when I used FIO on a libgfapi-mounted VM I got slightly faster read/write speeds than on the physical box itself (I assume because of some level of caching). On NFS it was close to half. You'll probably get somewhat more interesting results using FIO as opposed to dd.
( -Andrew)
Sorry Andrew, I meant to reply to your other message - it looks like CentOS 6.5 can't use libgfapi right now; I stumbled across this in a couple of threads. Something about how the CentOS build has different flags set at build time for RHEV snapshot support than RHEL, so native gluster storage domains are disabled because snapshot support is assumed and would break otherwise. I'm assuming this is still valid, as I cannot get a storage lock when I attempt a gluster storage domain.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
I've set up an NFS storage domain on my desktop's SSD. I've re-installed Windows 2008 R2 and initially it was running smoother.
Disk performance peaks at 100MB/s.
If I copy a 250MB file from a share into the Windows VM, it writes out
[VR] Do you copy it with Explorer or any other copy program? Do you have HPET enabled? How does it work if you copy from/to local (non-NFS) storage? What is your virtio-win drivers package origin and version?
Thanks, Vadim.
quickly, less than 5 seconds.

On Wed, Jan 29, 2014 at 5:11 AM, Vadim Rozenfeld <vrozenfe@redhat.com> wrote:
On Wed, 2014-01-29 at 11:30 +0200, Ronen Hod wrote:
Adding the virtio-scsi developers. Anyhow, virtio-scsi is newer and less established than viostor (the block device), so you might want to try it out.
[VR] Was it "SCSI Controller" or "SCSI pass-through controller"? If it's "SCSI Controller" then it will be viostor (virtio-blk) device driver.
"SCSI Controller" is listed in device manager. Hardware ID's: PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00 PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4 PCI\VEN_1AF4&DEV_1004&CC_010000 PCI\VEN_1AF4&DEV_1004&CC_0100
A disclaimer: There are time and patches gaps between RHEL and other versions.
Ronen.
On 01/28/2014 10:39 PM, Steve Dainard wrote:
I've had a bit of luck here.
Overall IO performance is very poor during Windows updates, but a contributing factor seems to be the "SCSI Controller" device in the guest. This last install I didn't install a driver for that device,
[VR] Does it mean that your system disk is IDE and the data disk (virtio-blk) is not accessible?
In Ovirt 3.3.2-1.el6 I do not have an option to add a virtio-blk device. Screenshot here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%...

VM disk drive is "Red Hat VirtIO SCSI Disk Device", storage controller is listed as "Red Hat VirtIO SCSI Controller" as shown in device manager. Screenshot here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%...

In Ovirt manager the disk interface is listed as "VirtIO". Screenshot here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%...
and my performance is much better. Updates still chug along quite slowly, but I seem to have more than the < 100KB/s write speeds I was seeing previously.
Does anyone know what this device is for? I have the "Red Hat VirtIO SCSI Controller" listed under storage controllers.
[VR] It's a virtio-blk device. OS cannot see this volume unless you have viostor.sys driver installed on it.
Interesting that my VMs can see the controller, but I can't add a disk for that controller in Ovirt. Is there a package I have missed on install?

rpm -qa | grep ovirt
ovirt-host-deploy-java-1.1.3-1.el6.noarch
ovirt-engine-backend-3.3.2-1.el6.noarch
ovirt-engine-lib-3.3.2-1.el6.noarch
ovirt-engine-restapi-3.3.2-1.el6.noarch
ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
ovirt-log-collector-3.3.2-2.el6.noarch
ovirt-engine-dbscripts-3.3.2-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.2-1.el6.noarch
ovirt-host-deploy-1.1.3-1.el6.noarch
ovirt-image-uploader-3.3.2-1.el6.noarch
ovirt-engine-websocket-proxy-3.3.2-1.el6.noarch
ovirt-engine-userportal-3.3.2-1.el6.noarch
ovirt-engine-setup-3.3.2-1.el6.noarch
ovirt-iso-uploader-3.3.2-1.el6.noarch
ovirt-engine-cli-3.3.0.6-1.el6.noarch
ovirt-engine-3.3.2-1.el6.noarch
ovirt-engine-tools-3.3.2-1.el6.noarch
I've setup a NFS storage domain on my desktops SSD. I've re-installed win 2008 r2 and initially it was running smoother.
Disk performance peaks at 100MB/s.
If I copy a 250MB file from a share into the Windows VM, it writes out
[VR] Do you copy it with Explorer or any other copy program?
Windows Explorer only.
Do you have HPET enabled?
I can't find it in the guest 'system devices'. On the hosts the current clock source is 'tsc', although 'hpet' is an available option.
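For what it's worth, the host clock source can be checked (and temporarily switched) through sysfs; the hpet value below is only an example, and the change does not persist across reboots.

# Show the active and the available host clock sources.
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# Switch to hpet for testing (as root); reverts on reboot.
echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource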
How does it work with if you copy from/to local (non-NFS) storage?
Not sure; this is a royal pain to set up. Can I use my ISO domain in two different data centers at the same time? I don't have an option to create an ISO / NFS domain in the local storage DC. When I use the import option with the default DC's ISO domain, I get an error "There is no storage domain under the specified path. Check event log for more details." VDSM logs show "Resource namespace 0e90e574-b003-4a62-867d-cf274b17e6b1_imageNS already registered", so I'm guessing the answer is no. I tried to deploy with WDS, but the 64-bit drivers apparently aren't signed, and on x86 I get an error about the NIC not being supported even with the drivers added to WDS.
What is your virtio-win drivers package origin and version?
virtio-win-0.1-74.iso -> http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/
Thanks, Vadim.
Appreciate it, Steve

I noticed updates on a Win7 VM I created previously were *really* slow, but when I logged in to it remotely for daily use, it seemed pretty snappy. I did not do any significant data transfers, however. I had the same latest virtio-win drivers installed, and in oVirt the disk was of type VIRTIO (not VIRTIO-SCSI). For other reasons, I have rebuilt my test host and am going to be installing a new Windows 7 VM. Is there anything I can do in this process to provide more data and help with this troubleshooting? -Alan

On 01/29/2014 07:35 PM, Steve Dainard wrote: ...
In Ovirt 3.3.2-1.el6 I do not have an option to add a virtio-blk device: Screenshot here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%...
virtio is virtio-blk (in the beginning, there was only one virtio, virtio-blk)
... Do you have HPET enabled?
I can't find it in the guest 'system devices'. On the hosts the current clock source is 'tsc', although 'hpet' is an available option.
How does it work with if you copy from/to local (non-NFS) storage?
Not sure, this is a royal pain to setup. Can I use my ISO domain in two different data centers at the same time? I don't have an option to create an ISO / NFS domain in the local storage DC.
An ISO domain can be associated with two data centers (or more, or even different engines, etc.).
When I use the import option with the default DC's ISO domain, I get an error "There is no storage domain under the specified path. Check event log for more details." VDMS logs show "Resource namespace 0e90e574-b003-4a62-867d-cf274b17e6b1_imageNS already registered" so I'm guessing the answer is no.
The answer is yes; please open a separate thread on this issue to make it easier to troubleshoot. Thanks.
I tried to deploy with WDS, but the 64bit drivers apparently aren't signed, and on x86 I get an error about the NIC not being supported even with the drivers added to WDS.
What is your virtio-win drivers package origin and version?
virtio-win-0.1-74.iso -> http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/
Thanks, Vadim.
Appreciate it, Steve

On Wed, 2014-01-29 at 12:35 -0500, Steve Dainard wrote:
On Wed, Jan 29, 2014 at 5:11 AM, Vadim Rozenfeld <vrozenfe@redhat.com> wrote:
On Wed, 2014-01-29 at 11:30 +0200, Ronen Hod wrote:
> Adding the virtio-scsi developers.
> Anyhow, virtio-scsi is newer and less established than viostor (the block
> device), so you might want to try it out.
[VR] Was it "SCSI Controller" or "SCSI pass-through controller"? If it's "SCSI Controller" then it will be viostor (virtio-blk) device driver.
"SCSI Controller" is listed in device manager.
Hardware ID's: PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00 PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4
There is something strange here. Subsystem ID 0008 means it is a virtio-scsi pass-through controller, and you shouldn't be able to install the "SCSI Controller" device driver (viostor.sys) on top of a "SCSI pass-through Controller".
vioscsi.sys should be installed on top of VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00
viostor.sys should be installed on top of VEN_1AF4&DEV_1001&SUBSYS_00021AF4&REV_00
PCI\VEN_1AF4&DEV_1004&CC_010000 PCI\VEN_1AF4&DEV_1004&CC_0100
> A disclaimer: There are time and patches gaps between RHEL and other versions.
>
> Ronen.
>
> On 01/28/2014 10:39 PM, Steve Dainard wrote:
> > I've had a bit of luck here.
> >
> > Overall IO performance is very poor during Windows updates, but a contributing
> > factor seems to be the "SCSI Controller" device in the guest. This last install
> > I didn't install a driver for that device,
[VR] Does it mean that your system disk is IDE and the data disk (virtio-blk) is not accessible?
In Ovirt 3.3.2-1.el6 I do not have an option to add a virtio-blk device. Screenshot here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2010%3A04%3A57.png

My guess is that VirtIO means virtio-blk, and you should use viostor.sys for it. VirtIO-SCSI is for virtio-scsi, and you need to install vioscsi.sys to make it work in Windows.
VM disk drive is "Red Hat VirtIO SCSI Disk Device", storage controller is listed as "Red Hat VirtIO SCSI Controller" as shown in device manager. Screenshot here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2009%3A57%3A24.png
In Ovirt manager the disk interface is listed as "VirtIO". Screenshot here: https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2009%3A58%3A35.png
> > and my performance is much better. Updates still chug along quite slowly,
> > but I seem to have more than the < 100KB/s write speeds I was seeing previously.
> >
> > Does anyone know what this device is for? I have the "Red Hat VirtIO
> > SCSI Controller" listed under storage controllers.
[VR] It's a virtio-blk device. The OS cannot see this volume unless you have the viostor.sys driver installed for it.
Interesting that my VM's can see the controller, but I can't add a disk for that controller in Ovirt. Is there a package I have missed on install?
rpm -qa | grep ovirt ovirt-host-deploy-java-1.1.3-1.el6.noarch ovirt-engine-backend-3.3.2-1.el6.noarch ovirt-engine-lib-3.3.2-1.el6.noarch ovirt-engine-restapi-3.3.2-1.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-log-collector-3.3.2-2.el6.noarch ovirt-engine-dbscripts-3.3.2-1.el6.noarch ovirt-engine-webadmin-portal-3.3.2-1.el6.noarch ovirt-host-deploy-1.1.3-1.el6.noarch ovirt-image-uploader-3.3.2-1.el6.noarch ovirt-engine-websocket-proxy-3.3.2-1.el6.noarch ovirt-engine-userportal-3.3.2-1.el6.noarch ovirt-engine-setup-3.3.2-1.el6.noarch ovirt-iso-uploader-3.3.2-1.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-3.3.2-1.el6.noarch ovirt-engine-tools-3.3.2-1.el6.noarch
> > I've setup a NFS storage domain on my desktops SSD. I've re-installed
> > win 2008 r2 and initially it was running smoother.
> >
> > Disk performance peaks at 100MB/s.
> >
> > If I copy a 250MB file from a share into the Windows VM, it writes out
[VR] Do you copy it with Explorer or any other copy program?
Windows Explorer only.
Do you have HPET enabled?
I can't find it in the guest 'system devices'. On the hosts the current clock source is 'tsc', although 'hpet' is an available option.
How does it work with if you copy from/to local (non-NFS) storage?
Not sure, this is a royal pain to setup. Can I use my ISO domain in two different data centers at the same time? I don't have an option to create an ISO / NFS domain in the local storage DC.
I mean just for the sake of performance troubleshooting; it will be interesting to see whether local storage performs as badly as NFS.
When I use the import option with the default DC's ISO domain, I get an error "There is no storage domain under the specified path. Check event log for more details." VDMS logs show "Resource namespace 0e90e574-b003-4a62-867d-cf274b17e6b1_imageNS already registered" so I'm guessing the answer is no.
I tried to deploy with WDS, but the 64bit drivers apparently aren't signed, and on x86 I get an error about the NIC not being supported even with the drivers added to WDS.
That's also strange. All drivers available from the fedoraproject side should be signed with a Red Hat signature, which is cross-certified. You shouldn't have any problem installing 64-bit drivers, except for annoying pop-ups warning that the drivers are not MS-signed.
What is your virtio-win drivers package origin and version?
virtio-win-0.1-74.iso -> http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/
Good, it is our most recent build. Best regards, Vadim.
Thanks, Vadim.
Appreciate it, Steve

I've reconfigured my setup (good success below, but I need clarity on a gluster option):

Two nodes total, both running virt and glusterfs storage (2 node replica, quorum).

I've created an NFS storage domain, pointed at the first node's IP address. I've launched a 2008 R2 SP1 install with a virtio-scsi disk and the SCSI pass-through driver, on the same node the NFS domain is pointing at.

Windows guest install has been running for roughly 1.5 hours, still "Expanding Windows files (55%) ..."

top is showing:
PID   USER PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
3609  root 20  0 1380m  33m 2604 S 35.4  0.1 231:39.75 glusterfsd
21444 qemu 20  0 6362m 4.1g 6592 S 10.3  8.7  10:11.53 qemu-kvm

This is a 2 socket, 6 core xeon machine with 48GB of RAM, and 6x 7200rpm enterprise sata disks in RAID5, so I don't think we're hitting hardware limitations.

dd on xfs (no gluster):
time dd if=/dev/zero of=test bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 4.15787 s, 516 MB/s

real 0m4.351s
user 0m0.000s
sys 0m1.661s

time dd if=/dev/zero of=test bs=1k count=2000000
2000000+0 records in
2000000+0 records out
2048000000 bytes (2.0 GB) copied, 4.06949 s, 503 MB/s

real 0m4.260s
user 0m0.176s
sys 0m3.991s

I've enabled nfs.trusted-sync (http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Vo...) on the gluster volume, and the speed difference is immeasurable. Can anyone explain what this option does, and what the risks are with a 2 node gluster replica volume with quorum enabled?

Thanks,
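For context, that option can be toggled and inspected with the standard gluster CLI; the volume name below is an assumption, and, as far as I understand it, nfs.trusted-sync acknowledges NFS writes before they reach disk, so a server crash can lose writes the client already believes are safe.

# Enable the option and confirm it took (volume name rep2 is assumed).
gluster volume set rep2 nfs.trusted-sync on
gluster volume info rep2 | grep nfs.trusted
# Revert to the default behaviour.
gluster volume reset rep2 nfs.trusted-sync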

I've enabled nfs.trusted-sync (http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Vo...) on the gluster volume, and the speed difference is immeasurable. Can anyone explain what this option does, and what the risks are with a 2 node gluster replica volume with quorum enabled?
Sorry, I understand async; I meant the option nfs.trusted-write, and whether it would help this situation.

On Fri, 2014-01-31 at 11:37 -0500, Steve Dainard wrote:
I've reconfigured my setup (good success below, but need clarity on a gluster option):
Two nodes total, both running virt and glusterfs storage (2 node replica, quorum).
I've created an NFS storage domain, pointed at the first node's IP address. I've launched a 2008 R2 SP1 install with a virtio-scsi disk, and the SCSI pass-through driver on the same node as the NFS domain is pointing at.
Windows guest install has been running for roughly 1.5 hours, still "Expanding Windows files (55%) ..."
[VR] Does it work faster with IDE? Do you have kvm enabled? Thanks, Vadim.
top is showing:
PID   USER PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
3609  root 20  0 1380m  33m 2604 S 35.4  0.1 231:39.75 glusterfsd
21444 qemu 20  0 6362m 4.1g 6592 S 10.3  8.7  10:11.53 qemu-kvm
This is a 2 socket, 6 core xeon machine with 48GB of RAM, and 6x 7200rpm enterprise sata disks in RAID5 so I don't think we're hitting hardware limitations.
dd on xfs (no gluster)
time dd if=/dev/zero of=test bs=1M count=2048 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 4.15787 s, 516 MB/s
real 0m4.351s user 0m0.000s sys 0m1.661s
time dd if=/dev/zero of=test bs=1k count=2000000 2000000+0 records in 2000000+0 records out 2048000000 bytes (2.0 GB) copied, 4.06949 s, 503 MB/s
real 0m4.260s user 0m0.176s sys 0m3.991s
I've enabled nfs.trusted-sync (http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Vo...) on the gluster volume, and the speed difference is immeasurable . Can anyone explain what this option does, and what the risks are with a 2 node gluster replica volume with quorum enabled?
Thanks,

IDE is just as slow. Just over 2 hours for the 2008 R2 install.

Is this what you mean by kvm?
lsmod | grep kvm
kvm_intel             54285  3
kvm                  332980  1 kvm_intel

Steve Dainard
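As a quick sanity check beyond lsmod, the following would confirm on the host that the guest is a KVM domain rather than falling back to pure emulation; the VM name is a placeholder.

# /dev/kvm must exist for hardware virtualization to be usable at all.
ls -l /dev/kvm
# The first line of the domain XML shows type='kvm' (hardware virt) or
# type='qemu' (pure emulation). The read-only connection avoids auth prompts.
virsh -r dumpxml Win2008R2 | head -n 1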
participants (6)
- Alan Murrell
- Itamar Heim
- Nicholas Kesick
- Ronen Hod
- Steve Dainard
- Vadim Rozenfeld