Re: [ovirt-users] SLA : RAM scheduling


even distribution is for cpu only

On 23/05/2014 17:48, Karli Sjöberg wrote:
On 23 May 2014 17:13, Nathanaël Blanchet <blanchet@abes.fr> wrote:
On 23/05/2014 17:11, Nathanaël Blanchet wrote:
Hello,
On oVirt 3.4, is it possible to schedule VM distribution depending on host RAM availability? Concretely, I had to manually move all the VMs to the second host of the cluster, which led to 90% memory occupation on the destination host. When my first host came back after a reboot, none of the VMs on the second host automatically migrated back to the first one, which had all of its RAM free. How can I make this happen?
... so that RAM ends up evenly distributed across both hosts... I hope that's clear enough...
Sounds like you just want to apply the cluster policy for even distribution. Have you assigned any policy for that cluster?
/K
-- Nathanaël Blanchet
Supervision réseau Pôle exploitation et maintenance Département des systèmes d'information 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 Tél. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet@abes.fr
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr

Hi Nathanaël,

You have 2 ways to get what you're after (quick/slow):
1) install oVirt's external scheduling proxy, and write an extremely simple weight function that orders hosts by used memory, then add that to your cluster policy.
2) open an RFE for oVirt 3.4 to have that built in (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).

Let me know if you consider (1), and I'll assist. Either way, I suggest you open an RFE for 3.5.

Thanks,
Gilad.

----- Original Message -----
From: "Nathanaël Blanchet" <blanchet@abes.fr> To: "Karli Sjöberg" <Karli.Sjoberg@slu.se> Cc: "users" <users@ovirt.org> Sent: Friday, May 23, 2014 7:38:40 PM Subject: Re: [ovirt-users] SLA : RAM scheduling
even distribution is for cpu only
On 23/05/2014 17:48, Karli Sjöberg wrote:
On 23 May 2014 17:13, Nathanaël Blanchet <blanchet@abes.fr> wrote:
On 23/05/2014 17:11, Nathanaël Blanchet wrote:
Hello,
On oVirt 3.4, is it possible to schedule VM distribution depending on host RAM availability? Concretely, I had to manually move all the VMs to the second host of the cluster, which led to 90% memory occupation on the destination host. When my first host came back after a reboot, none of the VMs on the second host automatically migrated back to the first one, which had all of its RAM free. How can I make this happen?
... so that RAM ends up evenly distributed across both hosts... I hope that's clear enough...
Sounds like you just want to apply the cluster policy for even distribution. Have you assigned any policy for that cluster?
/K
-- Nathanaël Blanchet
Supervision réseau Pôle exploitation et maintenance Département des systèmes d'information 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 Tél. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet@abes.fr
-- Nathanaël Blanchet
Supervision réseau Pôle exploitation et maintenance Département des systèmes d'information 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 Tél. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet@abes.fr

----- Original Message -----
From: "Gilad Chaplik" <gchaplik@redhat.com> To: "Nathanaël Blanchet" <blanchet@abes.fr> Cc: "Karli Sjöberg" <Karli.Sjoberg@slu.se>, "users" <users@ovirt.org> Sent: Saturday, May 24, 2014 11:49:48 AM Subject: Re: [ovirt-users] SLA : RAM scheduling
Hi Nathanaël,
You have 2 ways to get what you're after (quick/slow):
1) install oVirt's external scheduling proxy, and write an extremely simple weight function that orders hosts by used memory, then add that to your cluster policy.
2) open an RFE for oVirt 3.4 to have that built in (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).
By 3.4 I mean 3.4.x (= anyway, for (2) you'll need to upgrade), but I'm not sure it will make it.
let me know if you consider (1), and I'll assist.
Anyway, I suggest you open an RFE for 3.5.
Thanks, Gilad.

----- Original Message -----
From: "Gilad Chaplik" <gchaplik@redhat.com> To: "Nathanaël Blanchet" <blanchet@abes.fr> Cc: "users" <users@ovirt.org> Sent: Saturday, May 24, 2014 11:52:10 AM Subject: Re: [ovirt-users] SLA : RAM scheduling
Sounds like this RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1093038
FWIW, you can implement your own logic in Python until we get to implementing the above RFE.
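As a rough illustration of such custom logic (a self-contained sketch with made-up host data and field names; a real external-scheduler plugin would fetch host stats from the engine's REST API instead), a weight function that orders hosts by free memory could look like:

```python
# Hypothetical sketch: rank hosts so that the host with the most free
# RAM gets the lowest (best) score. The host list here is made up; a
# real plugin would query the engine for each host's memory stats.

def score_hosts_by_free_memory(hosts):
    """Return (host_id, score) pairs; a lower score means more free RAM."""
    max_free = max(h["free_mb"] for h in hosts)
    return [(h["id"], max_free - h["free_mb"]) for h in hosts]

hosts = [
    {"id": "host1", "free_mb": 2048},   # nearly full
    {"id": "host2", "free_mb": 14336},  # mostly free
]

print(score_hosts_by_free_memory(hosts))  # -> [('host1', 12288), ('host2', 0)]
```

The engine would then prefer the host with the lowest score, i.e. the one with the most free RAM.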

Hi Gilad,

Yes, I'm interested in that feature. I've begun to read the following, per your advice: http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy. I don't think I will have enough time to get it working quickly, I'm not experienced enough with this kind of feature, unless you help me.

On 24/05/2014 10:49, Gilad Chaplik wrote:
Hi Nathanaël,
You have 2 ways to get what you're after (quick/slow):
1) install oVirt's external scheduling proxy, and write an extremely simple weight function that orders hosts by used memory, then add that to your cluster policy.
2) open an RFE for oVirt 3.4 to have that built in (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).
let me know if you consider (1), and I'll assist.
Anyway, I suggest you open an RFE for 3.5.
Thanks, Gilad.
--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr

Hi Nathanaël,

happy to assist :) hope it will work on the first run:

1) install the proxy and ovirtsdk.
2) put the attached file in the right place (according to docs: ".../plugins"), make sure to edit the file with your oVirt's IP, user@domain and password.
3) restart the proxy service.
4) use the config tool to configure ovirt-engine:
* "ExternalSchedulerServiceURL"="http://<ip>:18781/"
* "ExternalSchedulerEnabled"=true
5) restart the ovirt-engine service.
6) under configure->cluster_policy, see that the weight function memory_even_distribution was added (it should be in manage policy units or similar - you will see it in the main dialog as well).
7) clone/copy the cluster's current cluster policy (probably "none" - prefer it to have no balancing modules, to avoid conflicts), name it 'your_name' and attach the memory_even_distribution weight (you can leave it as the only weight module in the weight section, to avoid configuring factors).
8) replace the cluster's cluster policy with the newly created one.

try it out and let me know how it goes :-)

Thanks,
Gilad.

----- Original Message -----
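The engine-side configuration described above might be done like this on the engine host (a sketch; the IP is a placeholder for your scheduler-proxy host, and the service command assumes an el6-style init system):

```shell
# Point the engine at the external scheduler proxy (placeholder IP);
# engine-config is oVirt's engine configuration tool.
engine-config -s ExternalSchedulerServiceURL="http://192.0.2.10:18781/"
engine-config -s ExternalSchedulerEnabled=true

# Restart the engine so the new values take effect.
service ovirt-engine restart
```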
From: "Nathanaël Blanchet" <blanchet@abes.fr> To: "Gilad Chaplik" <gchaplik@redhat.com> Cc: "Karli Sjöberg" <Karli.Sjoberg@slu.se>, "users" <users@ovirt.org> Sent: Monday, May 26, 2014 12:12:41 PM Subject: Re: [ovirt-users] SLA : RAM scheduling

Hi Gilad,

Yes, I'm interested in that feature. I've begun to read the following, per your advice: http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy. I don't think I will have enough time to get it working quickly, I'm not experienced enough with this kind of feature, unless you help me.

On 24/05/2014 10:49, Gilad Chaplik wrote:
Hi Nathanaël,

You have 2 ways to get what you're after (quick/slow):
1) install oVirt's external scheduling proxy, and write an extremely simple weight function that orders hosts by used memory, then add that to your cluster policy.
2) open an RFE for oVirt 3.4 to have that built in (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).

Let me know if you consider (1), and I'll assist. Anyway, I suggest you open an RFE for 3.5.

Thanks,
Gilad.
[Attachment: memory_even_distribution.py, decoded from base64]

from ovirtsdk.xml import params
from ovirtsdk.api import API
import sys


class memory_even_distribution():
    '''rank hosts by even ram distribution'''

    properties_validation = ''

    def _get_connection(self):
        # open a connection to the rest api
        connection = None
        try:
            connection = API(url='http://host:port',
                             username='user@domain', password='')
        except BaseException as ex:
            # letting the external proxy know there was an error
            print >> sys.stderr, ex
            return None

        return connection

    def _get_hosts(self, host_ids, connection):
        # get all the hosts with the given ids
        engine_hosts = connection.hosts.list(
            query=" or ".join(["id=%s" % u for u in host_ids]))

        return engine_hosts

    def do_score(self, hosts_ids, vm_id, args_map):
        conn = self._get_connection()
        if conn is None:
            return

        engine_hosts = self._get_hosts(hosts_ids, conn)

        # iterate over them and weigh them based on available ram
        host_scores = []
        for engine_host in engine_hosts:
            if (engine_host and
                    engine_host.max_scheduling_memory):
                host_scores.append((engine_host.id,
                                    abs(0 - engine_host.max_scheduling_memory)))
        print host_scores

Hi Gilad,

I followed your instructions, but it didn't work for me, I think for many reasons.

On 26/05/2014 16:22, Gilad Chaplik wrote:
> Hi Nathanaël,
> happy to assist :) hope it will work in first run:
> 1) install the proxy and ovirtsdk.
yum install -y ovirt-scheduler-proxy
> 2) put attached file in the right place (according to docs: ".../plugins"),
According to the docs (http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy), the place should be "plugins_path=$PYTHONPATH/ovirt_scheduler/plugins", which should be mentioned in the scheduler conf file (/etc/ovirt/scheduler/scheduler.conf), but neither of them exists in the filesystem! I've only found /usr/share/ovirt-scheduler-proxy/plugins for placing the python file, and no scheduler.conf is present where I could change that path.
> make sure to edit the file with your ovirt's ip, user@domain and PW.
My engine API can't be reached over http, so there was some work to do this with https. Here is what I did: following /usr/lib/python2.6/site-packages/ovirtsdk/api.py, I added insecure=True to the call:
connection = API(url='https://host:port', username='user@domain', password='', insecure=True)
Maybe it is not enough and it would be useful to add validate_cert_chain=False...
> 3) restart proxy service.
> 4) use config tool to configure ovirt-engine:
>    * "ExternalSchedulerServiceURL"="http://<ip>:18781/"
The scheduler proxy listens on localhost:18781 only; none of the IPs that can be filled in here will be reachable on that port.
>    * "ExternalSchedulerEnabled"=true
> 5) restart ovirt-engine service.
> 6) under configure->cluster_policy see that weight function memory_even_distribution was added (should be in manage policy units or /sth - you will see it in the main dialog as well).
> 7) clone/copy the cluster's current cluster policy (probably none - prefer it to have no balancing modules to avoid conflicts), name it 'your_name' and attach the memory_even_distribution weight (you can leave it as the only weight module in the weight section to avoid configuring factors).
> 8) replace the cluster's cluster policy with the newly created one.
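For reference, the "config tool" settings can be applied with oVirt's engine-config utility, e.g. (a sketch; Joop later notes in this thread that the proxy only answers on localhost, hence the URL):

```shell
# Point the engine at the external scheduler proxy and enable it,
# then restart the engine so the settings take effect.
engine-config -s ExternalSchedulerServiceURL="http://localhost:18781/"
engine-config -s ExternalSchedulerEnabled=true
service ovirt-engine restart
```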
> try it out and let me know how it goes :-)
> Thanks, Gilad.

If all of the above is true, it would be a good idea to update the wiki...

----- Original Message -----
From: "Nathanaël Blanchet" <blanchet@abes.fr> To: "Gilad Chaplik" <gchaplik@redhat.com> Cc: "Karli Sjöberg" <Karli.Sjoberg@slu.se>, "users" <users@ovirt.org> Sent: Monday, May 26, 2014 12:12:41 PM Subject: Re: [ovirt-users] SLA : RAM scheduling
Hi Gilad,
Yes, I'm interested in that feature. I've begun to read about it following your advice (http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy), but I don't think I will have enough time to get it working quickly; I'm not experienced enough with this kind of feature, unless you help me.
On 24/05/2014 10:49, Gilad Chaplik wrote:
Hi Nathanaël,
You have 2 ways to get what you're after (quick/slow): 1) install 'oVirt's external scheduling proxy' and write an extremely simple weight function that orders hosts by used memory, then add that to your cluster policy. 2) open an RFE for oVirt 3.4 to have that built in (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).
let me know if you consider (1), and I'll assist.
Anyway, I suggest you open an RFE for 3.5.
Thanks, Gilad.
----- Original Message -----
From: "Nathanaël Blanchet" <blanchet@abes.fr> To: "Karli Sjöberg" <Karli.Sjoberg@slu.se> Cc: "users" <users@ovirt.org> Sent: Friday, May 23, 2014 7:38:40 PM Subject: Re: [ovirt-users] SLA : RAM scheduling
even distribution is for cpu only
On 23/05/2014 17:48, Karli Sjöberg wrote:
On 23 May 2014 17:13, Nathanaël Blanchet <blanchet@abes.fr> wrote:
On 23/05/2014 17:11, Nathanaël Blanchet wrote:
Hello, On ovirt 3.4, is it possible to schedule vm distribution depending on host RAM availability? Concretely, I had to manually move all the vms to the second host of the cluster, which led to 90% memory occupation on the destination host. When my first host came back up after a reboot, none of the vms on the second host automatically migrated to the first one, which had all its RAM free. How to make this happen?
... so as to have the RAM evenly distributed across both hosts... hope that is clear enough...

Sounds like you just want to apply the cluster policy for even distribution. Have you assigned any policy for that cluster?
/K
-- Nathanaël Blanchet
Supervision réseau Pôle exploitation et maintenance Département des systèmes d'information 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 Tél. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet@abes.fr
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

----- Original Message -----
From: "Nathanaël Blanchet" <blanchet@abes.fr> To: "Gilad Chaplik" <gchaplik@redhat.com> Cc: "Karli Sjöberg" <Karli.Sjoberg@slu.se>, "users" <users@ovirt.org> Sent: Wednesday, May 28, 2014 6:26:16 PM Subject: Re: [ovirt-users] SLA : RAM scheduling
> Hi Gilad,
>
> I followed your instructions, but it didn't work for me, I think for many reasons.
>
> On 26/05/2014 16:22, Gilad Chaplik wrote:
>> Hi Nathanaël,
>> happy to assist :) hope it will work in first run:
>> 1) install the proxy and ovirtsdk.
> yum install -y ovirt-scheduler-proxy
>> 2) put attached file in the right place (according to docs: ".../plugins"),
> According to the docs (http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy), the place should be "plugins_path=$PYTHONPATH/ovirt_scheduler/plugins", which should be mentioned in the scheduler conf file (/etc/ovirt/scheduler/scheduler.conf), but neither of them exists in the filesystem! I've only found /usr/share/ovirt-scheduler-proxy/plugins for placing the python file, and no scheduler.conf is present where I could change that path.
Could be a bug in the docs or installation - once we get it working for you I'll see to it. BTW, change 'abs(0 - engine_host.max_scheduling_memory)' to 'sys.maxsize - engine_host.max_scheduling_memory'.
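To see why this inversion matters, here is a small standalone sketch (host names and free-memory values are made up; the engine prefers hosts with lower weights):

```python
import sys

# Hypothetical max_scheduling_memory values for two hosts (MB).
free_mem = {"host-a": 20480, "host-b": 4096}

# Plugin as attached: abs(0 - mem) == mem, so the host with MORE free
# RAM gets the HIGHER weight, and the engine picks the fuller host.
bad_scores = {h: abs(0 - m) for h, m in free_mem.items()}

# Gilad's fix inverts the scale: more free RAM means a lower weight.
good_scores = {h: sys.maxsize - m for h, m in free_mem.items()}

print(min(bad_scores, key=bad_scores.get))    # prefers host-b (wrong)
print(min(good_scores, key=good_scores.get))  # prefers host-a (right)
```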
>> make sure to edit the file with your ovirt's ip, user@domain and PW.
> My engine API can't be reached over http, so there was some work to do this with https. Here is what I did: following /usr/lib/python2.6/site-packages/ovirtsdk/api.py, I added insecure=True to the call: connection = API(url='https://host:port', username='user@domain', password='', insecure=True). Maybe it is not enough and it would be useful to add validate_cert_chain=False...
Martin?
>> 3) restart proxy service.
>> 4) use config tool to configure ovirt-engine:
>>    * "ExternalSchedulerServiceURL"="http://<ip>:18781/"
> The scheduler proxy listens on localhost:18781 only; none of the IPs that can be filled in here will be reachable on that port.
You configure ovirt-engine with the IP of the proxy, or I'm missing something. oVirt communicates with the proxy, not the other way around.
>>    * "ExternalSchedulerEnabled"=true
>> 5) restart ovirt-engine service.
>> 6) under configure->cluster_policy see that weight function memory_even_distribution was added (should be in manage policy units or /sth - you will see it in the main dialog as well).
>> 7) clone/copy the cluster's current cluster policy (probably none - prefer it to have no balancing modules to avoid conflicts), name it 'your_name' and attach the memory_even_distribution weight (you can leave it as the only weight module in the weight section to avoid configuring factors).
>> 8) replace the cluster's cluster policy with the newly created one.
>> try it out and let me know how it goes :-)
>> Thanks, Gilad.
> If all of the above is true, it would be a good idea to update the wiki...

----- Original Message -----
From: "Nathanaël Blanchet" <blanchet@abes.fr> To: "Gilad Chaplik" <gchaplik@redhat.com> Cc: "Karli Sjöberg" <Karli.Sjoberg@slu.se>, "users" <users@ovirt.org> Sent: Monday, May 26, 2014 12:12:41 PM Subject: Re: [ovirt-users] SLA : RAM scheduling
Hi Gilad,
Yes, I'm interested in that feature. I've begun to read about it following your advice (http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy), but I don't think I will have enough time to get it working quickly; I'm not experienced enough with this kind of feature, unless you help me.
On 24/05/2014 10:49, Gilad Chaplik wrote:
Hi Nathanaël,
You have 2 ways to get what you're after (quick/slow): 1) install 'oVirt's external scheduling proxy' and write an extremely simple weight function that orders hosts by used memory, then add that to your cluster policy. 2) open an RFE for oVirt 3.4 to have that built in (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).
let me know if you consider (1), and I'll assist.
Anyway, I suggest you open an RFE for 3.5.
Thanks, Gilad.
----- Original Message -----
From: "Nathanaël Blanchet" <blanchet@abes.fr> To: "Karli Sjöberg" <Karli.Sjoberg@slu.se> Cc: "users" <users@ovirt.org> Sent: Friday, May 23, 2014 7:38:40 PM Subject: Re: [ovirt-users] SLA : RAM scheduling
even distribution is for cpu only
On 23/05/2014 17:48, Karli Sjöberg wrote:
On 23 May 2014 17:13, Nathanaël Blanchet <blanchet@abes.fr> wrote:
On 23/05/2014 17:11, Nathanaël Blanchet wrote:
> Hello,
> On ovirt 3.4, is it possible to schedule vm distribution depending on
> host RAM availability?
> Concretely, I had to manually move all the vms to the second host
> of the cluster, which led to 90% memory occupation on the
> destination host. When my first host came back up after a reboot, none of the vms on the
> second host automatically migrated to the first one, which had all its RAM free. How to make this happen?
... so as to have the RAM evenly distributed across both hosts... hope that is clear enough...

Sounds like you just want to apply the cluster policy for even distribution. Have you assigned any policy for that cluster?
/K

On 28-5-2014 17:37, Gilad Chaplik wrote:
make sure to edit the file with your ovirt's ip, user@domain and PW. My engine API can't be reached in http, so there is some work to do this with https. Here is what I did : according to /usr/lib/python2.6/site-packages/ovirtsdk/api.py, I add insecure=True to the chain connection = API(url='http*s*://host:port', username='user@domain', password=''*insecure='True'*) Maybe it is not enough and it would be useful to add validate_cert_chain=False... Martin?
3) restart proxy service. 3) use config tool to configure ovirt-engine: * "ExternalSchedulerServiceURL"="http://<ip>:18781/" The scheduler proxy listens to localhost:18781, none of ips that can be filled here will be reached on that port. you config ovirt engine with the ip of the proxy, or I'm missing sth. ovirt communicates with the proxy and not the other way around.
I'm following this and found out that ovirt-scheduler-proxy only listens on localhost, so you'll need to adjust accordingly.

Joop

On 26-5-2014 16:22, Gilad Chaplik wrote:
Hi Nathanaël,
happy to assist :) hope it will work in first run:
1) install the proxy and ovirtsdk.
2) put attached file in the right place (according to docs: ".../plugins"), make sure to edit the file with your ovirt's ip, user@domain and PW.
3) restart proxy service.
4) use config tool to configure ovirt-engine:
   * "ExternalSchedulerServiceURL"="http://<ip>:18781/"
   * "ExternalSchedulerEnabled"=true
5) restart ovirt-engine service.
6) under configure->cluster_policy see that weight function memory_even_distribution was added (should be in manage policy units or /sth - you will see it in the main dialog as well).
7) clone/copy the cluster's current cluster policy (probably none - prefer it to have no balancing modules to avoid conflicts), name it 'your_name' and attach the memory_even_distribution weight (you can leave it as the only weight module in the weight section to avoid configuring factors).
8) replace the cluster's cluster policy with the newly created one.
try it out and let me know how goes :-)
Ok, progress of some sort :-)

I added the weight function to the cluster, and when I replace my dns name with localhost in ExternalSchedulerServiceURL, engine.log shows that it can contact the scheduler. I expected a rebalance but nothing happened. Stopping and starting a VM does provoke a reaction: an error :-(

From the scheduler.log I see that the engine contacts it and pushes some information; the log also shows that some information is returned, and then there is a big error message in the engine log.

Joop

On 11 Jun 2014, at 14:31, noc wrote:
On 26-5-2014 16:22, Gilad Chaplik wrote:
Hi Nathanaël,
happy to assist :) hope it will work in first run:
1) install the proxy and ovirtsdk.
2) put attached file in the right place (according to docs: ".../plugins"), make sure to edit the file with your ovirt's ip, user@domain and PW.
3) restart proxy service.
4) use config tool to configure ovirt-engine:
   * "ExternalSchedulerServiceURL"="http://<ip>:18781/"
   * "ExternalSchedulerEnabled"=true
5) restart ovirt-engine service.
6) under configure->cluster_policy see that weight function memory_even_distribution was added (should be in manage policy units or /sth - you will see it in the main dialog as well).
7) clone/copy the cluster's current cluster policy (probably none - prefer it to have no balancing modules to avoid conflicts), name it 'your_name' and attach the memory_even_distribution weight (you can leave it as the only weight module in the weight section to avoid configuring factors).
8) replace the cluster's cluster policy with the newly created one.
try it out and let me know how goes :-)
Ok, progress of some sort :-)
I added the weight function to the cluster and when I replace my dns name with localhost in ExternalSchedulerServiceURL then engine.log shows that it can contact the scheduler. I expected a rebalance but nothing happened. Stopping and starting a VM does provoke a reaction, an error :-(
From the scheduler.log I see that engine contacts it and pushes some information, the log also shows that some information is returned and then there is a big error message in the log of engine.
xmlrpc is infamous for not being able to handle numbers like 9223372010239819775. Thanks, michal
Joop
<engine.log><ovirt-scheduler-proxy.log>_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
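Michal's point is easy to reproduce with Python's standard library: the XML-RPC <int> type is limited to signed 32-bit values, so a sys.maxsize-based weight cannot be marshalled at all (a sketch, using the exact number from the log):

```python
import xmlrpc.client

# The host weight seen in the logs -- far beyond XML-RPC's 32-bit <int>.
score = 9223372010239819775

try:
    xmlrpc.client.dumps((score,))
    overflowed = False
except OverflowError as ex:
    overflowed = True
    print("marshalling failed:", ex)
```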

Michal Skrivanek wrote:
On 11 Jun 2014, at 14:31, noc wrote:
On 26-5-2014 16:22, Gilad Chaplik wrote:
Hi Nathanaël,
happy to assist :) hope it will work in first run:
1) install the proxy and ovirtsdk.
2) put attached file in the right place (according to docs: ".../plugins"), make sure to edit the file with your ovirt's ip, user@domain and PW.
3) restart proxy service.
4) use config tool to configure ovirt-engine:
   * "ExternalSchedulerServiceURL"="http://<ip>:18781/"
   * "ExternalSchedulerEnabled"=true
5) restart ovirt-engine service.
6) under configure->cluster_policy see that weight function memory_even_distribution was added (should be in manage policy units or /sth - you will see it in the main dialog as well).
7) clone/copy the cluster's current cluster policy (probably none - prefer it to have no balancing modules to avoid conflicts), name it 'your_name' and attach the memory_even_distribution weight (you can leave it as the only weight module in the weight section to avoid configuring factors).
8) replace the cluster's cluster policy with the newly created one.
try it out and let me know how goes :-)
Ok, progress of some sort :-)
I added the weight function to the cluster and when I replace my dns name with localhost in ExternalSchedulerServiceURL then engine.log shows that it can contact the scheduler. I expected a rebalance but nothing happened. Stopping and starting a VM does provoke a reaction, an error :-(
From the scheduler.log I see that engine contacts it and pushes some information, the log also shows that some information is returned and then there is a big error message in the log of engine.
xmlrpc is infamous about not being able to handle numbers like 9223372010239819775
Then oVirt should either not use that kind of numbers or not use xmlrpc.

Sorry, but that's a non-answer and doesn't help anybody.

How to solve this problem? Do you need a BZ?

Joop

Well, I know this is not that helpful, but afaik there is work being done to replace the xml-rpc communication with json-based communication. On 13.06.2014 10:57, Joop wrote:
Then oVirt shouldn't either use that kind of numbers or should not use xmlrpc.
Sorry but thats a non-answer and doesn't help anybody.
Howto solve this problem. Do you need a BZ?
Joop
-- Mit freundlichen Grüßen / Regards Sven Kieske Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

On 13 Jun 2014, at 11:04, Sven Kieske wrote:
Well, I know this is not that helpful but afaik there is work done to replace xml-rpc communication with json based communication.
@Sven: yeah, and that's one of the (many) reasons why we want it. But that's vdsm; the scheduler is a different package/app. A typical workaround is to use a string type to pass integer numbers (or use smaller numbers :)
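A quick sketch of both workarounds Michal mentions (host names and memory values are made up):

```python
import xmlrpc.client

# Workaround 1: pass the big value as a string and convert on the peer;
# strings marshal fine where a 64-bit int would overflow.
score = 9223372010239819775
payload = xmlrpc.client.dumps((str(score),))

# Workaround 2: keep the numbers small, e.g. return rank positions
# instead of raw sys.maxsize-based weights.
free_mem = {"host-a": 20480, "host-b": 4096, "host-c": 8192}
ranked = sorted(free_mem, key=free_mem.get, reverse=True)
rank_scores = {host: rank for rank, host in enumerate(ranked)}
print(rank_scores)  # host-a (most free RAM) gets the lowest weight
```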
On 13.06.2014 10:57, Joop wrote:
Then oVirt shouldn't either use that kind of numbers or should not use xmlrpc.
Sorry but thats a non-answer and doesn't help anybody.
@Joop: what's with the attitude? I have nothing to do with the scheduler and I have no idea what it is trying or supposed to return... I'm just pointing out that the error is due to the large number being returned, in the hope that someone with knowledge of the scheduler will take it from there. I'd tend to agree XMLRPC is not the best thing to use, and I think it's important to keep reminding people of its common pitfalls.
Howto solve this problem. Do you need a BZ?
patches welcome, BZ always helps. However I'd wait for someone more familiar with the code to answer. Thanks, michal
Joop
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Michal Skrivanek wrote:
On 13 Jun 2014, at 11:04, Sven Kieske wrote:
Well, I know this is not that helpful but afaik there is work done to replace xml-rpc communication with json based communication.
@Sven: yeah, and one of the (many) reasons why we want it. But that's vdsm. Scheduler is a different package/app. Typical workaround is to use a string type to pass integer numbers (or use smaller numbers:)
On 13.06.2014 10:57, Joop wrote:
Then oVirt shouldn't either use that kind of numbers or should not use xmlrpc.
Sorry but thats a non-answer and doesn't help anybody.
@Joop: what's with the attitude? I have nothing to do with the scheduler and I have no idea what it is trying or supposed to return... I'm just pointing out that the error is due to the large number being returned, in the hope that someone with knowledge of the scheduler will take it from there. I'd tend to agree XMLRPC is not the best thing to use, and I think it's important to keep reminding people of its common pitfalls.
Sorry, I'm only trying to help: someone got the code, it doesn't seem to work, I figured out what is not working, and I replied back to the list for everyone to see and comment on. With the reason now supplied I understand why it doesn't work, but I don't understand why the code is supplied in this form. Looking at the output again, I don't get why the numbers are that big. The servers only have 32G, and no way of calculating it will come close to the numbers seen :-(

Joop
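Joop's puzzlement about the size of the numbers has a plausible explanation (an assumption, not confirmed in the thread): if the scheduler reports memory in bytes, even a 32 GiB host yields a value roughly 16× beyond XML-RPC's signed 32-bit maximum. A quick arithmetic check:

```python
# Hypothetical illustration: memory reported in bytes vs. the XML-RPC
# signed 32-bit integer ceiling.
gib = 1024**3
mem_bytes = 32 * gib        # 32 GiB host
max_i4 = 2**31 - 1          # largest value an XML-RPC <i4> can carry

print(mem_bytes)            # 34359738368
print(mem_bytes // max_i4)  # 16 -- about 16x over the i4 maximum
```

So the "huge" numbers need not be wrong at all; they may simply be byte counts that the transport cannot represent.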
participants (8)
- Doron Fediuck
- Gilad Chaplik
- Joop
- Karli Sjöberg
- Michal Skrivanek
- Nathanaël Blanchet
- noc
- Sven Kieske