supervdsmServer consumes 30% of host memory

Hi,

We have a production oVirt 3.5 cluster with 3 nodes.

The first node has 40 GB of memory and runs only one VM that uses 8 GB of RAM, yet supervdsmServer consumes 20% of host memory. The second node has 32 GB of memory with 5 VMs; on this node supervdsmServer is using 30%(!) of memory. Node 3 runs 6 VMs with 72 GB of memory, but supervdsmServer uses only 1.7% of memory.

Does anyone know why?

[root@node0 ~]# ps aux | grep supervdsmServer
root 19549 0.2 20.8 26248348 8594352 ? S<l 06:00 1:59 /usr/bin/python /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid

[root@node1 ~]# ps aux | grep supervdsmServer
root 32701 1.2 29.5 32463704 9717368 ? S<l 03:59 12:08 /usr/bin/python /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid

[root@node2 ~]# ps aux | grep supervdsmServer
root 4613 0.0 1.7 6340572 1291476 ? S<l 18:26 0:04 /usr/bin/python /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid

Thanks in advance

Tibor
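For anyone who wants to see whether the resident size keeps climbing between restarts, a minimal sketch along these lines should do (the log path and the 10-minute interval are arbitrary choices, not something from this thread; the pidfile path is the one visible in the ps output above):

    #!/bin/bash
    # Append the resident set size of supervdsmServer to a log every 10 minutes,
    # so any steady growth shows up clearly over time.
    PIDFILE=/var/run/vdsm/supervdsmd.pid
    LOG=/var/log/supervdsm-rss.log   # arbitrary log location
    while true; do
        pid=$(cat "$PIDFILE")
        rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
        echo "$(date '+%F %T') pid=$pid VmRSS=${rss_kb} kB" >> "$LOG"
        sleep 600
    done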

On Tue, Dec 02, 2014 at 08:28:55PM +0100, Demeter Tibor wrote:
Hi,
We have a production oVirt 3.5 cluster with 3 nodes.
The first node has 40 GB of memory and runs only one VM that uses 8 GB of RAM, yet supervdsmServer consumes 20% of host memory. The second node has 32 GB of memory with 5 VMs; on this node supervdsmServer is using 30%(!) of memory. Node 3 runs 6 VMs with 72 GB of memory, but supervdsmServer uses only 1.7% of memory.
Does anyone know why?
Most probably Bug 1142647 - supervdsm leaks memory when using glusterfs - which would be fixed in ovirt-3.5.1. Unfortunately, it won't be released today; the release date has been postponed to Dec 9th.

Since the two glusterfs issues (memleak and segfault) repeat so often, I've tagged vdsm-4.16.8 as a release candidate for ovirt-3.5.1. Sandro, could you help in building it and placing it somewhere for people to try it out? After all, it has 87 (!) patches since 3.5.0, so testing is due.

Regards, Dan.
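Not part of Dan's message, but as a hedged sketch of the obvious sanity check: the installed vdsm build tells you whether a host already carries the fix, and restarting supervdsmd is a possible stopgap to reclaim the leaked memory until the upgrade (the service name is inferred from the pidfile path in the ps output above):

    # Check which vdsm build a host is running; anything older than 4.16.8
    # would still carry the glusterfs-related leak discussed above.
    rpm -q vdsm

    # Possible stopgap until the upgrade (an assumption, not advice from this
    # thread): restart the daemon to release the leaked memory.
    service supervdsmd restart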

On 02/12/2014 22:15, Dan Kenigsberg wrote:
On Tue, Dec 02, 2014 at 08:28:55PM +0100, Demeter Tibor wrote:
Hi,
We have a production oVirt 3.5 cluster with 3 nodes.
The first node has 40 GB of memory and runs only one VM that uses 8 GB of RAM, yet supervdsmServer consumes 20% of host memory. The second node has 32 GB of memory with 5 VMs; on this node supervdsmServer is using 30%(!) of memory. Node 3 runs 6 VMs with 72 GB of memory, but supervdsmServer uses only 1.7% of memory.
Does anyone know why?
most probably
Bug 1142647 - supervdsm leaks memory when using glusterfs
which would be fixed in ovirt-3.5.1. Unfortunately, it won't be released today - the release date has been postponed to Dec 9th.
Since the two glusterfs issues (memleak and segfault) repeat so often, I've tagged vdsm-4.16.8 as a release candidate for ovirt-3.5.1.
Sandro, could you help in building it and placing it somewhere for people to try it out? After all, it has 87 (!) patches since 3.5.0, so testing is due.
Dan, 4.16.8-0 has already been built in jenkins [1][2][3][4] and has already been published in the 3.5 nightly snapshot [5]. Anyone who wants to test it is more than welcome. Instructions for using the nightly snapshot are on the wiki [6]. Please add yourself to the testing report page [7] if you're going to test it.

[1] http://jenkins.ovirt.org/job/vdsm_3.5_create-rpms-el6-x86_64_merged/133/
[2] http://jenkins.ovirt.org/job/vdsm_3.5_create-rpms-el7-x86_64_merged/130/
[3] http://jenkins.ovirt.org/job/vdsm_3.5_create-rpms-fc19-x86_64_merged/130/
[4] http://jenkins.ovirt.org/job/vdsm_3.5_create-rpms-fc20-x86_64_merged/130/
[5] http://jenkins.ovirt.org/view/Publishers/job/publish_ovirt_rpms_nightly_3.5/...
[6] http://www.ovirt.org/Install_nightly_snapshot
[7] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing

Thanks,
Regards, Dan.
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com
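Following up on Sandro's instructions, a rough sketch of what the upgrade on a single host might look like (this assumes the nightly repository has already been enabled per the wiki page [6]; the package glob and the manual service restarts are assumptions, and the package scripts may well handle the restarts on their own):

    # Assumes the 3.5 nightly snapshot repo has already been enabled on the
    # host, following the wiki instructions referenced as [6] above.
    yum clean metadata
    yum update "vdsm*"
    rpm -q vdsm                  # should now report a 4.16.8 build
    # If the package scripts do not restart the daemons, do it by hand:
    service vdsmd restart
    service supervdsmd restart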

On 02.12.2014 22:16, Dan Kenigsberg wrote:
On Tue, Dec 02, 2014 at 08:28:55PM +0100, Demeter Tibor wrote:
Hi,
We have a production oVirt 3.5 cluster with 3 nodes.
The first node has 40 GB of memory and runs only one VM that uses 8 GB of RAM, yet supervdsmServer consumes 20% of host memory. The second node has 32 GB of memory with 5 VMs; on this node supervdsmServer is using 30%(!) of memory. Node 3 runs 6 VMs with 72 GB of memory, but supervdsmServer uses only 1.7% of memory.
Does anyone know why?
most probably
Bug 1142647 - supervdsm leaks memory when using glusterfs
which would be fixed in ovirt-3.5.1. Unfortunately, it won't be released today - the release date has been postponed to Dec 9th.
Since the two glusterfs issues (memleak and segfault) repeat so often, I've tagged vdsm-4.16.8 as a release candidate for ovirt-3.5.1.
Sandro, could you help in building it and placing it somewhere for people to try it out? After all, it has 87 (!) patches since 3.5.0, so testing is due.

Good to know! I'm giving it a shot; upgrading went smoothly so far. I'll report back if something comes up.
Regards, Dan.
-- Daniel Helgenberger / m box bewegtbild GmbH
participants (4)
- Dan Kenigsberg
- Daniel Helgenberger
- Demeter Tibor
- Sandro Bonazzola