[ovirt-users] VDSM memory consumption
Darrell Budic
budic at onholyground.com
Mon Mar 9 11:40:51 EDT 2015
> On Mar 9, 2015, at 4:51 AM, Dan Kenigsberg <danken at redhat.com> wrote:
>
> On Fri, Mar 06, 2015 at 10:58:53AM -0600, Darrell Budic wrote:
>> I believe the supervdsm leak was fixed, but the 3.5.1 version of vdsmd still leaks slowly (~300 KB/hour), yes.
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1158108
>>
>>
>>> On Mar 6, 2015, at 10:23 AM, Chris Adams <cma at cmadams.net> wrote:
>>>
>>> Once upon a time, Federico Alberto Sayd <fsayd at uncu.edu.ar> said:
>>>> I am experiencing trouble with VDSM memory consumption.
>>>>
>>>> I am running
>>>>
>>>> Engine: oVirt 3.5.1
>>>>
>>>> Nodes:
>>>>
>>>> CentOS 6.6
>>>> VDSM 4.16.10-8
>>>> Libvirt: libvirt-0.10.2-46
>>>> Kernel: 2.6.32
>>>>
>>>> When the host boots, memory consumption is normal, but after 2 or 3
>>>> days of running, VDSM's memory consumption grows until it consumes more
>>>> memory than all the VMs running on the host. If I restart the vdsm
>>>> service, memory consumption normalizes, but then it starts growing
>>>> again.
>>>>
>>>> I have seen some BZs about memory leaks in vdsm and supervdsm, but
>>>> I don't know whether VDSM 4.16.10-8 is still affected by a related bug.
>>>
>>> Can't help, but I see the same thing with CentOS 7 nodes and the same
>>> version of vdsm.
>>> --
>>> Chris Adams <cma at cmadams.net>
>
> I'm afraid that we have yet to find a solution for this issue, which is
> completely different from the horrible leak in supervdsm < 4.16.7.
>
> Could you corroborate the claim of
> Bug 1147148 - "M2Crypto usage in vdsm leaks memory"?
> Does the leak disappear once you start using plaintext transport?
>
> Regards,
> Dan.
I don’t think this is crypto-related, but I could try that if you still need confirmation (can you point me at a quick doc on switching to plaintext?).
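For what it's worth, my rough understanding of the plaintext switch is below; treat it as a sketch to confirm against the docs (the option name and exact steps are from memory), not an official procedure:

  # On the engine (sketch; confirm the option name before running):
  engine-config -s EncryptHostCommunication=false
  service ovirt-engine restart

  # On each host, in /etc/vdsm/vdsm.conf:
  [vars]
  ssl = false

  # ...then restart vdsmd so it switches to plaintext:
  service vdsmd restart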
This is from #ovirt around November 18th, I think; Saggi thought he’d found something related:
9:58:43 AM saggi: YamakasY: Found the leak
9:58:48 AM saggi: YamakasY: Or at least the flow
9:58:57 AM saggi: YamakasY: The good news is that I can reproduce
9:59:20 AM YamakasY: saggi: that's kewl!
9:59:25 AM YamakasY: saggi: what happens ?
9:59:41 AM YamakasY: I know from Telsin (ping ping!) that he sees it going faster on gluster usage
10:01:54 AM saggi: YamakasY: it's in getCapabilities(). Here is the RSS graph. The flatlines are when I stopped calling it and called other verbs. http://i.imgur.com/CLm0Q75.png
10:02:46 AM saggi: YamakasY: horizontal is time since epoch and vertical is RSS in bytes
10:03:52 AM YamakasY: saggi: I have seen that line soooo much!
10:04:11 AM YamakasY: I think I even made a mailing about it
10:04:18 AM YamakasY: at least asked here
10:04:32 AM YamakasY: no-one knew, but those lines are almost blowing you away
10:04:35 AM YamakasY: can we patch it ?
10:04:59 AM YamakasY: wow, nice one to catch
10:05:28 AM saggi: YamakasY: I now have a smaller part of the code to scan through and a way to reproduce so hopefully I'll have a patch soon
Was that ever followed up on?
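In case it helps anyone else chase this, here is a rough reproducer along the lines of what Saggi describes: call the capabilities verb in a loop and watch vdsm's RSS. The pgrep pattern is a guess for a stock EL6/EL7 host, so check it against ps first:

  # Sketch only: hammer the capabilities verb and log vdsm's resident set size.
  PID=$(pgrep -f '/usr/share/vdsm/vdsm$' | head -n1)   # guess at the main vdsm process
  while true; do
      vdsClient -s 0 getVdsCaps > /dev/null            # the verb behind getCapabilities()
      echo "$(date +%s) $(grep VmRSS /proc/$PID/status)"
      sleep 10
  done

If VmRSS only climbs while the loop runs and flatlines when it stops, that would match the graph above; the same log, left running between normal engine polls, would also put a number on the ~300 KB/hour figure mentioned earlier.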