Thursday, May 01, 2008

Forcing a Garbage Collection is not a good idea

Someone asked on a discussion list about when to force a GC using GC.Collect. This has been answered by many experts before, but I wanted to reiterate. The simple answer is

"extremely rarely from production code and if used ensure you have consulted the GC folks of your platform".

Let's dissect the response...

Production Code

The "production code" bit is key here. It is always fine to call GC.Collect from test/debug code when you want to ensure your application performs fine when a sudden GC comes up, or to verify that all your objects have been disposed properly or that the finalizers behave correctly. All discussion below is relevant only to shipping production code.
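To make the test/debug usage concrete, here is a minimal C# sketch (the `Resource` class and its flag are made up for illustration) of forcing a collection in test code to verify that a finalizer actually runs:

```csharp
using System;

class Resource
{
    public static bool Finalized;
    ~Resource() { Finalized = true; }
}

class Program
{
    public static void Main()
    {
        AllocateAndDrop();
        GC.Collect();                  // fine in test code: force a full collection
        GC.WaitForPendingFinalizers(); // block until the finalizer queue has drained
        Console.WriteLine(Resource.Finalized ? "finalizer ran" : "finalizer did not run");
    }

    // Allocate in a separate method so no live reference keeps the object alive.
    static void AllocateAndDrop() { new Resource(); }
}
```

The GC.Collect/WaitForPendingFinalizers pair is what makes the test deterministic; in production code you would never rely on this ordering.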

Rarely

A lot of folks jumped into the thread giving examples of where they have done/seen GC.Collect being used successfully. I tried understanding each of the scenarios and explaining why, in my opinion, it is not required and doesn't qualify as a rare scenario. I have pasted some of these scenarios with my responses below (with some modifications).

  1. For example, your process has a class which wraps a native handle and implements the dispose pattern, and the handle will be used in exclusive mode. The client of this class forgets to call Dispose/Close to release the native handle (relying on the finalizer instead); then another process (suppose the native handle is an inter-process resource) has to wait until the next GC, or even a full GC, for the finalizer to run. Since when the finalizer will run is not predictable, the other process will suffer waiting for such an exclusively shared resource…
    This is a program bug. You shouldn't be covering a dispose-pattern misuse with a GC call. You are essentially shipping buggy code, or, in case you provide the framework, allowing users to write buggy code. This should be fixed by ensuring that the clients call Dispose, not by forcing a GC. I would suggest adding an assert in the finalizer in your debug bits so that you fail in the test scenario. In the case of a framework, write the right code and let performance issues surface so that users also write the right code.
  2. Robotics might be another example: you might want time-certain sampling and processing of data.
    .NET is not a real-time system. If you assume or try to simulate real-time operations on it then I have only sympathy to offer :). Is the next suggestion to call all methods in advance so that they are already JITted?
  3. Another case I can think of is a program that is either ill-designed or deliberately designed to have a lot of finalizers (it wraps a lot of native resources?). Objects with finalizers cannot be collected in generation 0, survive to at least generation 1, and have a good chance of going to generation 2…
    This is not correct. The dispose pattern is there exactly for this reason. Any reason why you are not using the dispose pattern and calling GC.SuppressFinalize in the Dispose method?
  4. Well, one “real world” scenario that I know of is in a source-control file-diff creation utility. It loops through each file in the pack, loads the entire file into memory to process it, and calls GC.Collect when it's finished with each file, so that the GC can reclaim the large strings that were allocated.
    Why can't it simply do nothing, and is there a perf measurement to indicate otherwise? GC has a per-run overhead, so if nothing is done it may happen that for a short diff creation the GC never runs, or runs only once every 10 files, leading to fewer runs and hence better perf. For a batch system where no user interaction is happening in the middle, what is the issue if a system-initiated GC happens in the middle of the next file?
  5. A rare case in my mind is when you allocate a lot of large objects (over 85 KB), which are treated as generation 2 objects. You do not want to wait for the next full GC (normally the GC clears generation 0 or generation 1); you want to compact the managed heap as soon as possible.
    Is it paranoia or some real reason? If the objects hold native resources then you are covered by the dispose pattern, and if you are worried about memory pressure then isn't the GC there to figure out when to collect for you?
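The fix that the responses above keep pointing at is the standard dispose pattern: release the native resource deterministically in Dispose, call GC.SuppressFinalize so the object doesn't pay the finalization cost, and (in debug builds) assert in the finalizer to catch clients that forgot to dispose. A hedged sketch, with `NativeHandleWrapper` and `ReleaseHandle` as hypothetical names:

```csharp
using System;

class NativeHandleWrapper : IDisposable
{
    IntPtr handle;   // hypothetical exclusive native resource
    bool disposed;

    public NativeHandleWrapper(IntPtr h) { handle = h; }

    public void Dispose()
    {
        if (disposed) return;
        ReleaseHandle(handle);       // free the resource now, not at some future GC
        disposed = true;
        GC.SuppressFinalize(this);   // no finalization needed; object collects normally
    }

    ~NativeHandleWrapper()
    {
        // Reaching the finalizer means a client forgot Dispose; fail loudly in test runs.
        System.Diagnostics.Debug.Assert(false, "NativeHandleWrapper leaked: Dispose was not called");
        ReleaseHandle(handle);
    }

    static void ReleaseHandle(IntPtr h)
    {
        // In real code this would be a P/Invoke such as CloseHandle(h).
    }
}
```

With this in place the resource is released the moment the client is done with it, and no process ever has to wait on the whims of the GC.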

In effect, most usages are redundant.

The question then is what qualifies as a rare scenario where you would want to call GC.Collect. This has been explained by Rico Mariani (here) and Patrick Dussud (here).

“In a nutshell, don’t call it, unless your code is unloading large amounts of data at well-understood, non-repeating points (like at the end of a level in a game), where you need to discard large amounts of data that will no longer be used.”

It's almost always when you know for sure a GC run is coming (which you completely understand and have maybe confirmed with the GC folks of your framework) and you want to control the exact point at which it happens. E.g. at the end of a game level you have burned through all the data and you know you can discard it; if you don't, a GC will start after a few frames of rendering in your next level, and you are better off doing it now while the system is idle than dropping a frame or two when it happens in the middle of the next level.
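The level-end case can be sketched as below (`UnloadLevel`/`LoadLevel` are hypothetical game methods; whether a second collect after the finalizers is worth it is exactly the kind of thing to confirm with the GC folks of your platform):

```csharp
using System;

class Game
{
    public void OnLevelEnd()
    {
        UnloadLevel();                 // drop references to the level's large data
        GC.Collect();                  // collect now, while nothing is being rendered,
        GC.WaitForPendingFinalizers(); // rather than a few frames into the next level
        GC.Collect();                  // reclaim anything the finalizers released
        LoadLevel();
    }

    void UnloadLevel() { /* release level assets */ }
    void LoadLevel()   { /* load assets for the next level */ }
}
```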

And obviously you can call GC.Collect if you have run into an issue reported/discussed in the forums and have figured out a GC bug which you want to work around.

I would highly recommend watching this video where Patrick Dussud, the father of the .NET GC, explains why apparent GC issues may actually be side effects of other things (e.g. finalizers stuck trying to delete STA COM objects behind the scenes).

What is the problem with calling GC.Collect?

So why are folks against calling GC.Collect? There are multiple reasons:

  1. There's an inherent assumption that the user knows better when the GC should run. This cannot be true because, according to the CLR spec, there is no standard time (see here). Since the GC is plugged into the execution engine, it knows the system state best and knows when to fire. With Silverlight and other cross-platform technologies becoming mainstream, it will become harder and harder to predict where your app will run. There are already three separate GCs: desktop, server and Compact Framework. Silverlight will bring in more, and your assumptions can be totally wrong.
  2. GC has some cost (rather large):
    The GC runs by first marking all live objects and then cleaning up. So, garbage or not, the objects will be touched, and it takes an awful amount of time to do that. I've seen folks measure how long GC.Collect takes and treat that as the collection time. This is not correct, because GC.Collect triggers the collection and returns immediately; the GC then goes about freezing all the threads. So the GC time is way more than what Collect takes, and you need to monitor performance counters to figure out what is actually going on.
  3. GC can be self-tuning:
    The desktop GC, for example, tunes itself based on historical data. Let's assume a large collection just happened and cleaned up 100 MB of data. Immediately after that, a forced GC happens which finds no data to clean up. The GC learns that collection is not helping, and the next time a real collection is warranted (say under a low-memory condition) it simply backs off based on that historical data. However, if the forced GC hadn't occurred, the GC would have remembered that 100 MB got cleared and would have jumped in right away.

Both 2 and 3 are GC-implementation specific (they differ between the desktop and Compact GCs), which stresses the first point: most assumptions are really implementation details of the GC and may/will change, jeopardizing any attempt to out-guess the GC on when to run.
