From: Jonathan Horne <jhorne at skopos.us>
To: users at ovirt.org
Subject: [Users] management server very slow lately
Date: Wed, 13 Mar 2013 18:51:35 +0000

Hello, lately my management server's web interface is extremely sluggish. Perhaps the server is ready for a reboot?

My management server is also the host of my NFS export and ISO mounts. Is there a prescribed method for rebooting when I am also providing NFS services from the management server? My assumption is that, aside from NFS, I should be able to reboot the management server and the nodes and virtual machines will be fine in the meantime?

Thanks,
jonathan

________________________________
This is a PRIVATE message. If you are not the intended recipient, please delete without copying and kindly advise us by e-mail of the mistake in delivery. NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to any order or other contract unless pursuant to explicit written agreement or government initiative expressly permitting the use of e-mail for such purpose.

From: Noam Slomianko <nslomian at redhat.com>
To: users at ovirt.org
Subject: Re: [Users] management server very slow lately
Date: Thu, 14 Mar 2013 05:37:33 -0400

I don't know if a reboot of the management host will help in any way, but you can take the JBoss process that runs your engine down and back up for a "restart".

In any case, the VMs run on different servers under vdsm -> libvirt -> qemu, so they are not affected by the engine going down.
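A minimal sketch of what such an engine-only restart could look like on an EL6 install, assuming the stock ovirt-engine init script shipped with these packages (the service name may differ on other setups):

    # Restart only the engine application (the JBoss/Java process), not the host.
    # Running VMs are managed by vdsm/libvirt/qemu on the hypervisor nodes and
    # keep running while the engine is briefly down.
    service ovirt-engine restart

    # Check that the engine came back before logging in to the web admin portal.
    service ovirt-engine status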
From: Shu Ming <shuming at linux.vnet.ibm.com>
To: users at ovirt.org
Subject: Re: [Users] management server very slow lately
Date: Thu, 14 Mar 2013 23:40:50 +0800

Did you use IE or Firefox? I found Firefox was much faster than IE when connecting to the engine server.

Jonathan Horne wrote:
> [...]

--
---
舒明 Shu Ming
Open Virtualization Engineering; CSTL, IBM Corp.
Tel: 86-10-82451626  Tieline: 9051626  E-mail: shuming(a)cn.ibm.com or shuming(a)linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
From: Itamar Heim <iheim at redhat.com>
To: users at ovirt.org
Subject: Re: [Users] management server very slow lately
Date: Fri, 15 Mar 2013 13:31:49 +0200

On 03/13/2013 08:51 PM, Jonathan Horne wrote:
> [...]

What's the CPU consumption of your ovirt-engine service (the java process)? What is the CPU load on the engine, and the memory/swap state of the engine, etc.?
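A quick way to pull those numbers for just the engine process might look like this sketch, assuming the engine writes its PID to /var/run/ovirt-engine.pid as on a stock 3.1 install:

    # CPU and memory of the ovirt-engine Java process only.
    ps -o pid,user,%cpu,%mem,rss,vsz,etime,cmd -p "$(cat /var/run/ovirt-engine.pid)"

    # Overall load, memory and swap on the engine host.
    uptime
    free -m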
From: Jonathan Horne <jhorne at skopos.us>
To: users at ovirt.org
Subject: Re: [Users] management server very slow lately
Date: Fri, 22 Mar 2013 13:54:58 +0000

top - 08:53:38 up 70 days, 16:31,  1 user,  load average: 0.40, 0.34, 0.32
Tasks: 432 total,   1 running, 431 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.3%us,  0.1%sy,  0.0%ni, 98.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  32876240k total, 18653508k used, 14222732k free,   522432k buffers
Swap:  2097144k total,     4528k used,  2092616k free,  6270908k cached

  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2121 ovirt  20   0 12.9g 7.7g  18m S  9.0 24.6 16539:08  java

I don't have a lot of experience with JBoss, so I'm not sure if that's good or bad. I did the JBoss restart, and that helped a little, but it's a little sluggish again now, a few days later.

Thanks,

-----Original Message-----
From: Itamar Heim
Sent: Friday, March 15, 2013 6:32 AM
Subject: Re: [Users] management server very slow lately
> [...]

From: Juan Hernandez <jhernand at redhat.com>
To: users at ovirt.org
Subject: Re: [Users] management server very slow lately
Date: Fri, 22 Mar 2013 16:05:47 +0100

On 03/22/2013 02:54 PM, Jonathan Horne wrote:
>   PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  2121 ovirt  20   0 12.9g 7.7g  18m S  9.0 24.6 16539:08  java

This is not normal at all. The first thing that is strange is that your engine is taking 7.7 GiB of RAM, which it should never take, as it is by default limited to 1 GiB. Did you assign more memory to the engine on purpose? How much? If you assign a lot of memory it can start to consume a lot of CPU just for garbage collection. You may want to enable verbose garbage collection by adding this to /etc/sysconfig/ovirt-engine (or /etc/ovirt-engine/engine.conf if you are using the latest source code):

  ENGINE_VERBOSE_GC=true

Then restart the engine and it will start to dump garbage collection statistics to /var/log/ovirt-engine/console.log. The garbage collection should be quite silent in a low-activity system.

We used to have a bug that caused the max amount of memory not to be correctly limited, but it was fixed long ago:

  http://gerrit.ovirt.org/7952
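Concretely, enabling and watching the verbose GC output might look like the following sketch (paths as described above; the restart command assumes the stock EL6 init script):

    # Turn on verbose GC logging for the engine JVM.
    echo 'ENGINE_VERBOSE_GC=true' >> /etc/sysconfig/ovirt-engine

    # Restart the engine so the new setting takes effect.
    service ovirt-engine restart

    # Watch the GC statistics; a mostly idle engine should log very little here.
    tail -f /var/log/ovirt-engine/console.log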
The other thing that seems strange is the amount of CPU that it is consuming. Do you have many hosts managed by that engine? In an otherwise idle environment the CPU consumption is caused by the periodic polls of the hosts, one every two seconds by default. If you see the engine continually using a significant amount of CPU (in your top output above it is 9%), it could be useful to get a snapshot of the stacks of the threads, to see which threads in particular are consuming the CPU. Send the QUIT signal to the engine process and it will dump the stacks of the threads to /var/log/ovirt-engine/console.log:

  # kill -3 $(cat /var/run/ovirt-engine.pid)

Once you have that dump you can check which thread is consuming the CPU as follows:

1. Get the PIDs of the threads of the engine together with their use of CPU:

  # ps -L -u ovirt -o tid,pcpu

2. If you see one of them consuming a high amount of CPU time, try to find it in the stack dump generated in /var/log/ovirt-engine/console.log. Let's assume that the PID is 13397, for example; translate it to hex:

  # printf "%04x\n" 13397
  3455

3. Then look in /var/log/ovirt-engine/console.log for a line containing "nid=0x3455". There you will find the stack trace of that thread, something like this:

  "ajp-/127.0.0.1:8702-Acceptor-0" daemon prio=10 tid=0x00007f41e0220800 nid=0x3493 runnable [0x00007f41dbdf2000]
     java.lang.Thread.State: RUNNABLE
     ...

Most threads will be waiting, but if you find one thread that is consistently RUNNABLE then there is probably an issue. The dump of the stack of that thread can help to find out what it is doing and why it is consuming the CPU.

--
Commercial address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3ºD, 28016 Madrid, Spain
Registered in the Mercantile Registry of Madrid – C.I.F. B82657941 - Red Hat S.L.
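Putting those three steps together, a small sketch like this can shortcut the lookup (GNU procps ps options are assumed; 13397 and 0x3455 are only the example numbers from the message above):

    # 1. Dump all thread stacks to /var/log/ovirt-engine/console.log.
    kill -3 "$(cat /var/run/ovirt-engine.pid)"

    # 2. List the engine's threads, busiest first, and pick a suspicious TID.
    ps -L -u ovirt -o tid,pcpu --sort=-pcpu | head -n 10

    # 3. Convert the TID to the hex "nid" used in the dump and pull its stack.
    printf 'nid=0x%x\n' 13397
    grep -A 20 'nid=0x3455' /var/log/ovirt-engine/console.log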
From: Jonathan Horne <jhorne at skopos.us>
To: users at ovirt.org
Subject: Re: [Users] management server very slow lately
Date: Fri, 22 Mar 2013 20:58:34 +0000

Is it OK to restart the engine at any time, or should I be prepared for a maintenance window?

This manager has 12 hosts and about 75 VMs. We are running 3.1, dreyou's EL6 packages.

[jhorne(a)d0lppc021 ~]$ rpm -qa|grep ovirt
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-sdk-3.1.0.5-1.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-config-3.1.0-3.19.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
ovirt-engine-cli-3.1.0.7-1.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch

Thanks,
jonathan

On 3/22/13 10:05 AM, "Juan Hernandez" wrote:
> [...]
From: Juan Hernandez <jhernand at redhat.com>
To: users at ovirt.org
Subject: Re: [Users] management server very slow lately
Date: Mon, 25 Mar 2013 08:36:44 +0100

On 03/22/2013 09:58 PM, Jonathan Horne wrote:
> Is it OK to restart the engine at any time, or should I be prepared for a maintenance window?
The engine can be restarted at any time, assuming that your users don't need to use it (via the user portal) during the few seconds it will be down.

> This manager has 12 hosts and about 75 VMs. We are running 3.1, dreyou's EL6 packages.

Version 3.1 had the problem with the memory limit. To fix it, open the /usr/share/ovirt-engine/service/engine-service.py file, go to line 203 and replace -Xms with -Xmx; the resulting lines 202 and 203 should be the following:

  "-Xms%s" % engineHeapMin,
  "-Xmx%s" % engineHeapMax,

Then restart the engine and it should never consume more than 1 GiB of heap, which will mean a maximum of approximately 2 GiB of virtual address space and a much smaller resident set size.

Let us know if this makes it faster.
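The same one-character fix can also be applied non-interactively; a sketch (back the file up first, and note that the exact line content may differ between 3.1 builds, so verify the substitution before restarting):

    # Keep a copy of the original service script.
    cp /usr/share/ovirt-engine/service/engine-service.py /usr/share/ovirt-engine/service/engine-service.py.bak

    # The 3.1 bug passed the maximum heap size with -Xms instead of -Xmx,
    # so the 1 GiB limit was never actually applied. Rewrite that one argument.
    sed -i 's/"-Xms%s" % engineHeapMax/"-Xmx%s" % engineHeapMax/' \
        /usr/share/ovirt-engine/service/engine-service.py

    # Restart the engine so the corrected heap limit takes effect.
    service ovirt-engine restart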