Finding that C# memory leak

Tim Deschryver
timdeschryver.dev

Last week it finally happened: I saw my first memory leak in production - that I know of - and over time it was eating up all the memory. Throughout my career I've been warned, and I've warned others, about these leaks and why it's so important to release unmanaged resources with the Dispose method. Yet, we still created a memory leak.
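
For context, the pattern in question is simple: a type that owns an unmanaged resource (a file handle, a connection, ...) implements IDisposable, and the caller wraps it in a using statement so the resource is released deterministically. A minimal sketch (generic, not the code from our API):

```csharp
using System;
using System.IO;

// The `using` statement guarantees Dispose is called, even if an exception is thrown.
using (var writer = new ReportWriter())
{
    writer.Write("hello");
}

// A small type that owns an unmanaged resource (a file handle) and releases it in Dispose.
public sealed class ReportWriter : IDisposable
{
    private readonly StreamWriter _writer = new("report.log", append: true);

    public void Write(string line) => _writer.WriteLine(line);

    // Without calling Dispose (directly or via `using`), the file handle stays open
    // and any buffered data may never be flushed.
    public void Dispose() => _writer.Dispose();
}
```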

Knowing that you have a memory leak

We noticed that the process of a new API was consuming more memory than the other processes. At first we didn't think much of it and assumed it was normal, because this API receives a lot of requests. By the end of the day the API had almost tripled its memory consumption, and at that point we started to suspect a memory leak.

Remember that high memory usage doesn't always mean that there's a memory leak; some processes just use a lot of memory. The problem here was that the memory kept increasing over time, without ever dropping back to its normal level.

The second day it happened again, and it was worse: the API with the memory leak was consuming almost 4GB, up to 5 times more resources than the other APIs.

Reproducing the memory leak

I think that to solve a problem you must first be able to reproduce it. This gives you insight into the problem, and you know exactly where and when it occurs.

Since this was a first for me, I followed the Microsoft Docs tutorial Debug a memory leak in .NET Core, which was well written and exactly what I was looking for.

In this tutorial you use the dotnet diagnostic tools dotnet-trace, dotnet-counters, and dotnet-dump to find and troubleshoot a leaking process.
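
For reference, the basic workflow with those tools looks roughly like this (12345 stands in for the process id of the API; the exact options are described in the tutorial):

```bash
# One-time install of the diagnostic global tools
dotnet tool install --global dotnet-counters
dotnet tool install --global dotnet-trace
dotnet tool install --global dotnet-dump

# Watch the GC heap size and other runtime counters of the running process
dotnet-counters monitor --process-id 12345

# Capture a trace of the process for further analysis
dotnet-trace collect --process-id 12345
```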

While I was impressed by these tools, I wasn't able to reproduce the memory leak locally, despite my efforts to mimic the traffic towards the API with Artillery.
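
For the load itself, an Artillery one-liner along these lines is enough to hammer an endpoint (the URL is a placeholder, and the flags can differ between Artillery versions):

```bash
# 20 virtual users, 100 requests each, against a placeholder endpoint
artillery quick --count 20 -n 100 https://localhost:5001/api/orders
```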

Analysing a dump file

So why try to reproduce the problem when it's already occurring in a production environment? Most bugs can't be inspected, or are hard to inspect, in a production environment. But this case is different.

By creating a dump file of the process, we get a way to look inside it. All the information we need is already there; it just needs to be collected and analyzed.

To create a dump file, use the dotnet dump collect command, or, if you can log in to the server, open the Task Manager, right-click the process, and select "Create dump file".
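
With the dotnet-dump global tool it's a single command (12345 is again a placeholder for the process id):

```bash
# Capture a full memory dump of the running process
dotnet-dump collect --process-id 12345
```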

This gives you a *.dmp file, which you can analyze with the dotnet dump analyze command. Personally, I prefer a more visual representation, so I imported the file into dotMemory to analyze it. There's also PerfView, which is free to use.
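
If you do stick to the command line, dotnet dump analyze drops you into an interactive session where you can run SOS commands; a few that are typically useful when hunting a leak (the file name and addresses are placeholders):

```bash
# Open the dump in an interactive SOS session
dotnet-dump analyze ./dump_20220101_120000.dmp

# Inside the session:
#   dumpheap -stat          groups live objects by type, sorted by total size
#   dumpheap -mt <mt>       lists the instances behind a suspicious method table
#   gcroot <address>        shows what keeps a specific instance alive
```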

When the dump file was imported, the first graph and data table made it very obvious that we had a memory leak: a file logger was using 90% of the memory, the equivalent of 3.5GB.

[Image: The starting page of dotMemory]
[Image: A pie chart with a more in-depth look into the dump]
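
I won't reproduce our production code here, but the general shape of this kind of leak is a disposable writer that is created over and over and kept reachable, so the garbage collector can never reclaim it. A purely hypothetical sketch of that pattern:

```csharp
using System.Collections.Concurrent;
using System.IO;

// Purely hypothetical: a logger that opens a new StreamWriter per call and caches
// it in a static collection. Every writer (plus its buffer and file handle) stays
// rooted by the static field, so the GC can never collect it and memory only grows.
public static class RequestFileLogger
{
    private static readonly ConcurrentBag<StreamWriter> Writers = new();

    public static void Log(string requestId, string message)
    {
        var writer = new StreamWriter(Path.Combine("logs", $"{requestId}.log"), append: true);
        Writers.Add(writer);          // leak: never removed, never disposed
        writer.WriteLine(message);
    }
}
```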

While this leak was obvious, I still felt a little overwhelmed by all the data. To get a better understanding of how I should interpret it, I read through .NET Memory Performance Analysis. I found it beginner-friendly while still going in-depth. It's focused on PerfView, but the knowledge you gain can also be applied while reading the data in other tools.

Key takeaways

More resources

A list of resources about this topic that I find useful and wish I had found earlier:
