Re: [ovirt-users] ksmd high cpu usage from almost a week with just one vm running


Thanks for the info Markus

free -g
             total       used       free     shared    buffers     cached
Mem:            47         32         14          0          0          0

Usage is at 68%. Why is ksmd not sleeping? Do you have any idea?

regards,

On Sat, Dec 6, 2014 at 8:45 PM, Markus Stockhausen <stockhausen@collogia.de> wrote:
Memory usage > 80%: ksm kicks in. There it will run at full speed until usage is below 80%. There is an open BZ from me. Bad behaviour is controlled by mom.
Markus
Am 06.12.2014 15:58 schrieb mad Engineer <themadengin33r@gmail.com>:
Hello All,
            I am using CentOS 6.5 x64 on a server with 48 GB RAM and 8 cores, managed by oVirt. There is only one running VM, with 34 GB RAM and 6 vCPUs (pinned to the proper NUMA nodes).
from top
top - 06:42:48 up 67 days, 20:05,  1 user,  load average: 0.26, 0.20, 0.17
Tasks: 285 total,   2 running, 282 sleeping,   0 stopped,   1 zombie
Cpu(s):  1.0%us,  1.4%sy,  0.0%ni, 97.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  49356468k total, 33977684k used, 15378784k free,   142812k buffers
Swap: 12337144k total,        0k used, 12337144k free,   343052k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  101 root      25   5     0    0    0 R 27.4  0.0   5650:04 [ksmd]
26004 vdsm       0 -20 3371m  64m 9400 S  9.8  0.1   1653:27 /usr/bin/python /usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid
20963 qemu      20   0 38.5g  33g 6792 S  3.9 71.6   5225:43 /usr/libexec/qemu-kvm -name Cinder -S -M rhel6.5.0 -cpu Nehalem -enable-kvm -m 34096 -realtime mlock=off -smp 6,maxcpus=160,sockets=80,c
from /sys/kernel/mm/ksm

pages_unshared  7602322
pages_shared     207023
pages_to_scan        64
pages_volatile    31678
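A rough sanity check on those counters is to see what fraction of the scanned pages ksmd found unshareable. This is only an illustration using the numbers posted above; note the kernel also exports pages_sharing (how many pages are deduplicated into each shared page), which was not included in the output here.

```shell
# Fraction of scanned pages that ksmd could not merge, from the values above.
unshared=7602322
shared=207023
volatile=31678
awk -v u="$unshared" -v s="$shared" -v v="$volatile" \
    'BEGIN { printf "%.0f%% of scanned pages are unshared\n", 100 * u / (u + s + v) }'
# -> 97% of scanned pages are unshared
```

With roughly 97% of pages unshared, ksmd is burning CPU scanning memory that yields almost no merging, which fits the symptom described below.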
Any idea why ksmd is not coming back down to normal CPU usage? On a different server ksmd was disabled, and when I enabled it for testing, CPU usage was initially high but later settled down to 3%; on that host I have 4 VMs running.

Before turning off ksmd, can anyone help me find out why ksmd is behaving like this? Initially this host had 2 virtual machines; because of the high CPU utilization of this guest, the other was migrated to another host.
Thanks
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi,
you can see all the relevant definitions in mom's ksm policy.
The trigger to run ksm is defined here:
https://github.com/oVirt/vdsm/blob/master/vdsm/mom.d/03-ksm.policy#L23

You can change this setting (on each host you have) to something that suits your load. We have the below bug opened and will hopefully handle it in one of the next versions:
https://bugzilla.redhat.com/show_bug.cgi?id=1026294

Doron

----- Original Message -----
From: "mad Engineer" <themadengin33r@gmail.com>
To: "Markus Stockhausen" <stockhausen@collogia.de>
Cc: "users" <users@ovirt.org>
Sent: Saturday, December 6, 2014 6:13:20 PM
Subject: Re: [ovirt-users] ksmd high cpu usage from almost a week with just one vm running
Thanks for the info Markus

free -g
             total       used       free     shared    buffers     cached
Mem:            47         32         14          0          0          0

Usage is at 68%. Why is ksmd not sleeping? Do you have any idea?

regards,
On Sat, Dec 6, 2014 at 8:45 PM, Markus Stockhausen <stockhausen@collogia.de> wrote:
Memory usage > 80%: ksm kicks in. There it will run at full speed until usage is below 80%. There is an open BZ from me. Bad behaviour is controlled by mom.
[...]

Thanks Doron,
Can you confirm this behaviour? I changed (defvar ksm_free_percent 0.10) and restarted vdsm, and now ksmd is behaving normally. But I don't understand why it was behaving so aggressively when the free RAM percentage was 22%, even though that is close to 20 :)

Regards

On Sun, Dec 7, 2014 at 6:13 PM, Doron Fediuck <dfediuck@redhat.com> wrote:
Hi,
you can see all the relevant definitions in mom's ksm policy.
The trigger to run ksm is defined here:
https://github.com/oVirt/vdsm/blob/master/vdsm/mom.d/03-ksm.policy#L23

You can change this setting (on each host you have) to something that suits your load. We have the below bug opened and will hopefully handle it in one of the next versions:
https://bugzilla.redhat.com/show_bug.cgi?id=1026294

Doron
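The change being discussed here is editing a single defvar in that policy file and restarting vdsm. A minimal sketch of the edit, shown on a scratch string because the installed path of 03-ksm.policy varies with the vdsm version, so you should locate the file on your own host first:

```shell
# Sketch: lower the KSM trigger from 20% to 10% free memory.
# The defvar line matches the linked policy; the transformation is shown on
# a plain string here rather than the live policy file.
line='(defvar ksm_free_percent 0.20)'
printf '%s\n' "$line" | sed 's/ksm_free_percent 0\.20/ksm_free_percent 0.10/'
# -> (defvar ksm_free_percent 0.10)
```

On a real host, the equivalent would be running the same sed with -i against the installed 03-ksm.policy and then restarting vdsmd so mom picks up the new threshold, which matches what the original poster reports doing.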
----- Original Message -----
From: "mad Engineer" <themadengin33r@gmail.com>
To: "Markus Stockhausen" <stockhausen@collogia.de>
Cc: "users" <users@ovirt.org>
Sent: Saturday, December 6, 2014 6:13:20 PM
Subject: Re: [ovirt-users] ksmd high cpu usage from almost a week with just one vm running
[...]

Hi,
First of all, you need to remember that this accounting covers all available memory, cache included, so the 22% you see may not be what mom is using. Additionally, some of ksm's calculations take time, so it takes a while for everything to stabilize once you hit a peak.

----- Original Message -----
From: "mad Engineer" <themadengin33r@gmail.com>
To: "Doron Fediuck" <dfediuck@redhat.com>
Cc: "Markus Stockhausen" <stockhausen@collogia.de>, "users" <users@ovirt.org>, "Martin Sivak" <msivak@redhat.com>, "Roy Golan" <rgolan@redhat.com>
Sent: Sunday, December 7, 2014 6:52:16 PM
Subject: Re: [ovirt-users] ksmd high cpu usage from almost a week with just one vm running
Thanks Doron,
Can you confirm this behaviour? I changed (defvar ksm_free_percent 0.10) and restarted vdsm, and now ksmd is behaving normally. But I don't understand why it was behaving so aggressively when the free RAM percentage was 22%, even though that is close to 20 :)
Regards
On Sun, Dec 7, 2014 at 6:13 PM, Doron Fediuck wrote:
[...]
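Doron's point that the accounting includes cache can be illustrated with the numbers from the top output earlier in the thread. This is only a sketch of how two different "free" readings diverge; it is an assumption for illustration, not mom's actual formula:

```shell
# Free-memory percentage computed two ways, using the kB figures from the
# top output posted in this thread. Depending on whether buffers/cache are
# counted as free, the host's "free percent" reads differently.
awk 'BEGIN {
    total = 49356468; free = 15378784; buffers = 142812; cached = 343052
    printf "strictly free:     %.1f%%\n", 100 * free / total
    printf "free + buff/cache: %.1f%%\n", 100 * (free + buffers + cached) / total
}'
```

The gap is small on this host because little memory is cached, but on a host with a large page cache the two readings can differ enough to put one side of the ksm_free_percent threshold and not the other.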
participants (3)
- Doron Fediuck
- mad Engineer
- Markus Stockhausen