
Sunday, May 31, 2009

How does the .NET Compact Framework memory allocator work


As I mentioned in one of my previous posts, the .NET Compact Framework uses 5 separate heaps, one of which is dedicated to managed data and is called the managed heap. The .NETCF allocator allocates pools of memory from this managed heap and then sub-allocates managed objects from these pools.

This is how the process works:

1. At the very beginning the allocator allocates a 64 KB pool (called an obj-pool) and ties it to the current app-domain state (AppState).
2. All object allocation requests are served from this pool (pool 0). After each allocation an internal pointer is bumped to point just past the end of the last allocated object.
3. Each subsequent allocation is just another increment of this pointer.
4. This goes on until there is no more space left in the current obj-pool.
5. Since there is no more space left in the current pool (pool 0), a new pool (pool 1) is allocated and all subsequent object allocation requests are serviced from the new pool.
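To make the steps above concrete, here is a minimal sketch of the scheme in C#. The type and member names (ObjPool, PoolAllocator, TryAllocate) and the byte-array backing are my own illustration, not the actual NETCF implementation; only the bump-pointer logic mirrors the description above.

using System;
using System.Collections.Generic;

// Illustrative only: the real allocator works on native memory inside the CLR,
// but the bump-pointer idea is the same.
class ObjPool
{
    public const int PoolSize = 64 * 1024;          // 64 KB per pool
    private readonly byte[] memory = new byte[PoolSize];  // backing storage
    private int next;                               // points just past the last allocated object

    // Try to carve 'size' bytes out of this pool by bumping the pointer.
    public bool TryAllocate(int size, out int offset)
    {
        if (next + size > PoolSize)
        {
            offset = -1;
            return false;                           // pool is full
        }
        offset = next;
        next += size;                               // allocation is just a pointer increment
        return true;
    }
}

class PoolAllocator
{
    private readonly List<ObjPool> pools = new List<ObjPool>();

    public PoolAllocator()
    {
        // Step 1: start with one 64 KB pool tied to the allocator state.
        pools.Add(new ObjPool());
    }

    public void Allocate(int size)
    {
        int offset;

        // Fast path: bump-allocate from the current pool.
        if (pools[pools.Count - 1].TryAllocate(size, out offset))
            return;

        // Slow path: the current pool is full, so allocate a fresh 64 KB pool
        // and serve the request from it (huge objects are handled separately, see below).
        ObjPool pool = new ObjPool();
        pool.TryAllocate(size, out offset);
        pools.Add(pool);
    }
}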

The first question I get asked at this point is what happens when an object is itself bigger than 64 KB :). The answer is that the CLR treats such objects as "huge objects" and places each of them in its own obj-pool. This essentially means that an obj-pool is either 64 KB in size and holds more than one object, or is bigger than 64 KB and holds exactly one huge object.
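In the sketch above, that rule would just change how a new pool is sized; something like this (again my own naming, not the CLR's):

static class PoolSizing
{
    public const int StandardPoolSize = 64 * 1024;

    // A pool is either the standard 64 KB (holding many small objects)
    // or it is sized to hold exactly one huge object.
    public static int PoolSizeFor(int objectSize)
    {
        return objectSize > StandardPoolSize ? objectSize : StandardPoolSize;
    }
}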

Object allocation speed varies a lot depending on whether the object is allocated from the current pool (where the cost is just a pointer increment) or a new pool has to be allocated (which adds the cost of a 64 KB allocation). On average, managed objects are about 35 bytes in size, so roughly 1,800 of them fit in one pool (64 KB / 35 bytes ≈ 1,870), and hence the overall allocation speed attained is pretty good.

Verifying what we discussed above

All of the things I said above can be easily verified using the Remote Performance Monitor. I wrote a small application that allocates objects in a loop, launched it under the Remote Performance Monitor, and published the data to perfmon so that I could observe how the managed heap grows. I used the process outlined in Steven Pratschner’s blog post at http://blogs.msdn.com/stevenpr/archive/2006/04/17/577636.aspx
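The test application was along these lines; this is a hypothetical reconstruction since the post doesn't include the actual code, but any loop that keeps small objects alive will show the same pattern:

using System.Collections.Generic;
using System.Threading;

class AllocLoop
{
    // Hold on to the objects so the GC cannot reclaim them and the
    // managed heap has to grow pool by pool.
    static List<byte[]> keep = new List<byte[]>();

    static void Main()
    {
        for (int i = 0; i < 100000; i++)
        {
            keep.Add(new byte[32]);   // small objects, close to the ~35 byte average
            Thread.Sleep(1);          // slow things down so the counters are easy to watch
        }
    }
}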

[Chart: the Managed Bytes Allocated counter (green line) and the GC Heap counter (red line) plotted in perfmon]

Using the mechanism mentioned above, I observed the managed bytes allocated (green line) and the GC heap (red line). You can see that the green line increases linearly, and after every 64 KB the red line (the GC heap) bumps up by 64 KB as a new object pool is allocated from the heap.

Monday, May 04, 2009

Memory leak via event handlers


One of our customers recently had some interesting out-of-memory issues which I thought I’d share.

The short version

The basic problem was that they had event sinks and event sources. The event sinks were disposed correctly; however, in the dispose they missed removing the event sink from the event source's invocation list. So, in effect, the disposed sinks were still reachable from the event source, which was still in use. Hence the GC did not reclaim the event sinks, leading to leaked objects.

Now the not-so-long version

Consider the following code where there are EventSource and EventSink objects. EventSource exposes the event SomeEvent which the sink listens to.

EventSink sink = new EventSink();
EventSource src = new EventSource();

src.SomeEvent += sink.EventHandler;   // sink is now on src's invocation list
src.DoWork();

sink = null;
// Force collection
GC.Collect();
GC.WaitForPendingFinalizers();

For the above code, if you add a finalizer to EventSink with a breakpoint/Console.WriteLine in it, you'd see that it is never hit even though sink is clearly set to null. The reason is the line where we added sink to the invocation list of the source via the += operator. So even after sink is set to null, the original object is still reachable from src (and hence is not garbage).
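For completeness, here is a minimal pair of classes that reproduces this; the class and member names match the snippet above, but the bodies are my own filler:

using System;

class EventSource
{
    public event EventHandler SomeEvent;

    public void DoWork()
    {
        // Raise the event so all subscribed sinks run.
        EventHandler handler = SomeEvent;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

class EventSink
{
    public void EventHandler(object sender, EventArgs e)
    {
        Console.WriteLine("EventSink handled the event");
    }

    ~EventSink()
    {
        // With the += subscription still in place this finalizer never runs,
        // even after 'sink = null' and a forced GC.Collect().
        Console.WriteLine("EventSink finalized");
    }
}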


+= is additive, so as the code executes and new sinks are added, the older objects still stick around, causing the working set to grow and finally leading to an out-of-memory crash at some point.


The fix is to ensure you remove the sink with -= as in:

EventSink sink = new EventSink();
EventSource src = new EventSource();
src.SomeEvent += sink.EventHandler;
src.DoWork();
src.SomeEvent -= sink.EventHandler;   // unsubscribe so sink is no longer reachable from src
sink = null;

The above code is just a sample, and obviously you shouldn't do event add/removal in a scattered way like this. Make EventSink implement IDisposable and encapsulate the += and -= in the constructor and Dispose.
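A sketch of that shape, using the EventSource from the snippet above (the wiring details are mine; the point is only where the += and -= live):

using System;

class EventSink : IDisposable
{
    private readonly EventSource source;

    public EventSink(EventSource source)
    {
        this.source = source;
        this.source.SomeEvent += OnSomeEvent;   // subscribe in the constructor
    }

    private void OnSomeEvent(object sender, EventArgs e)
    {
        // handle the event
    }

    public void Dispose()
    {
        // Unsubscribe so the source no longer keeps this sink reachable.
        source.SomeEvent -= OnSomeEvent;
    }
}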