Optimizing Java Code
By Adrian Sutton
Rick Jelliffe proffers his advice on how to write fast Java desktop applications. It’s poor advice. He calls his approach “defensive programming”, though I really don’t see much that’s defensive about it. Defensive programming is about adding code to handle unexpected cases and recover from errors. Rick’s advice is an attempt at optimizing code before you’ve even written it and determined that it isn’t fast enough.

It’s worth noting that I write Java desktop applications; I don’t do much server-side work and I certainly don’t write small applets, so Rick’s complaint that everyone assumes you’re a server-side developer doesn’t apply here. It doesn’t apply much in real life either: fast code is fast code. In desktop development you get more “time off”, more code that’s off the critical path, because you only need to go as fast as the user. On the server side you need to process pretty much everything as fast as you can, because there are always more requests waiting to be started whenever the current one is held up.
Talking about a large DOM-like structure, Rick writes:
“the defensive programmer may decide to have a cleanup method at the root object of the data structure to traverse the data structure to some depth, to reduce the chances that new generation objects have references from the old. To give the minor collector a hand.”

This sounds like a good way to ensure that your operation takes longer than it needs to, simply because you think it might cause memory to be released sooner. It doesn’t actually save memory and it’s guaranteed to slow down your program. The real memory saving comes when the older generation is garbage collected and the entire DOM tree can be reclaimed. If anything needs to be done here, you should probably look at increasing the size of the young generation so that the whole DOM structure fits in it.
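And that fix belongs on the command line, not in your code. As a sketch, on Sun’s HotSpot VM you can size the young generation directly; the class name and the 128m figure below are placeholders, and -verbose:gc is only there so you can see whether minor collections actually improve:

    java -Xmn128m -verbose:gc com.example.MyEditor

Tune the number against the -verbose:gc output rather than guessing.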
“The defensive programmer might also decide to explicitly null the references as soon as possible so that the object is not alive if there is a full GC before the end of the method or the object with the reference.”

This advice doesn’t hurt much, but it does point to sloppy design. Essentially, you’re using null to reduce the scope of the variable; it would be better to design your method to be smaller. What really gets me here is the abuse of the term defensive. Explicitly setting a variable to null isn’t defensive, it’s dangerous. If at some later point a colleague adds code to the end of the method and uses that variable, a NullPointerException will be thrown. A defensive programmer would *not* null the variable, precisely in case later changes cause it to be used.
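To make the hazard concrete, here’s a minimal sketch; the big array is just a stand-in for whatever you’d be nulling:

    class NullingHazard {
        public static void main(String[] args) {
            byte[] buffer = new byte[10 * 1024 * 1024]; // some large working set
            System.out.println("processed " + buffer.length + " bytes");
            buffer = null; // "defensive" nulling so the array can be reclaimed sooner
            // Months later a colleague appends a line that reuses the variable:
            System.out.println("first byte: " + buffer[0]); // throws NullPointerException
        }
    }

The nulling line is invisible unless you read the whole method, which is exactly why shrinking the method is the better fix.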
“We like minor collections and want them to be as effective as possible, and we want to system.gc() to encourage major collections to occur at times that the user is at rest: for example, when opening the first file and whenever a large file is closed.”

Sure, I’d prefer major collections to occur when the user isn’t doing anything, but when exactly is that? When I open the first file in an application, I want to use it immediately, not wait for garbage collection to finish before I can start using the app. Start-up time is the most annoying part of most Java apps, so you shouldn’t be doing anything to slow it down, particularly requesting an unnecessary full GC. After a large file is closed is also a bad time: the user is in the middle of using your application and is paying attention to how long things take, and delaying even the closing of a file by a couple of seconds means the user can’t move on to the file that was open behind it until it finishes. If you were going to request a full GC, you’d do it when there’s been no keyboard input for some period of time. A full GC when your application is in the background strips resources from other apps the user might be working in, and since it involves a lot of memory access it can slow down all programs, though at least they don’t freeze completely.
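For what it’s worth, here’s a sketch of what “GC when the user is idle” might look like in a Swing application. The 30-second threshold is an arbitrary choice of mine, and the class name is made up:

    import java.awt.AWTEvent;
    import java.awt.Toolkit;
    import java.awt.event.AWTEventListener;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.Timer;

    class IdleGc {
        static void install() {
            // One-shot timer: fires only after 30 seconds without user input.
            final Timer idle = new Timer(30000, new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    System.gc(); // a hint only; the VM is free to ignore it
                }
            });
            idle.setRepeats(false);
            // Restart the countdown on every key or mouse event, application-wide.
            Toolkit.getDefaultToolkit().addAWTEventListener(new AWTEventListener() {
                public void eventDispatched(AWTEvent event) {
                    idle.restart();
                }
            }, AWTEvent.KEY_EVENT_MASK | AWTEvent.MOUSE_EVENT_MASK);
            idle.restart();
        }
    }

Even this is of dubious value given the background-collection problem above, but at least it doesn’t freeze the app while the user is watching.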
“Actually, in many cases you may not be using a profiler at all, for whatever reason. If you are not using a profiler, then defensive programming may reduce the effect of some problems (such as unreleased listeners) that profiling makes obvious.”

I’d suggest using a profiler: it’s guaranteed to find the actual problems, unlike blind optimisations (which Rick calls defensive programming) that may or may not reduce their effects.
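The unreleased-listener problem Rick mentions is exactly the kind of thing a profiler or heap dump makes obvious. A sketch of the classic shape of the leak, with names of my own invention:

    import java.beans.PropertyChangeEvent;
    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    class ListenerLeak {
        // A long-lived application model.
        static final PropertyChangeSupport MODEL =
                new PropertyChangeSupport(ListenerLeak.class);

        static class EditorPanel implements PropertyChangeListener {
            EditorPanel() {
                MODEL.addPropertyChangeListener(this); // registered here...
            }

            public void propertyChange(PropertyChangeEvent e) {
                // react to model changes
            }

            void dispose() {
                // ...and if this call is forgotten, every "closed" panel stays
                // reachable from MODEL and can never be garbage collected.
                MODEL.removePropertyChangeListener(this);
            }
        }
    }

A profiler shows you the pile of supposedly dead EditorPanels still reachable from the model; no amount of defensive nulling elsewhere will fix that.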
“This comes from what looks like an excellent presentation at Java One by the wonderfully named Dr. Cliff Clack. Dr. Clack’s page 30 says pooling ‘loses for small objects’, ‘a wash for medium objects (Hashtables)’ and a ‘win for large objects’. But in Mr. Goetz’s article this becomes ‘object pooling is a performance loss for all but the most heavyweight objects’, which I think is a little too enthusiastic, certainly for single-CPU desktop systems.”

Rick’s right, Mr. Goetz has exaggerated the effects of object pooling, but not to the point where it changes what the advice should be: you should only use object pooling for large objects. I can’t see why you’d want to do the extra work of implementing object pooling, and risk errors in that code, unless you were going to get a noticeable gain from it (a sketch of what such a pool looks like appears at the end of this post). It’s particularly foolish to use object pooling if you then go and try to save RAM with explicit nulling and System.gc() calls.

In summary, Java isn’t some special language where new rules of optimisation apply. Premature optimisation is a bad idea in any language and it almost never pays off. Why anyone would think that writing desktop Java applications renders that rule invalid is beyond me.
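As promised, a rough sketch of pooling a genuinely heavyweight object. BufferPool is my own invention, not something from Rick or Mr. Goetz:

    import java.util.ArrayList;
    import java.util.List;

    class BufferPool {
        private final List<byte[]> free = new ArrayList<byte[]>();
        private final int bufferSize;

        BufferPool(int bufferSize) {
            this.bufferSize = bufferSize;
        }

        synchronized byte[] acquire() {
            if (free.isEmpty()) {
                return new byte[bufferSize]; // the expensive allocation we're avoiding
            }
            return free.remove(free.size() - 1);
        }

        synchronized void release(byte[] buffer) {
            free.add(buffer); // caller must not touch the buffer after releasing it
        }
    }

Even this trivial version adds bookkeeping and a whole new class of bug (using a buffer after it’s been released), which is exactly why it’s only worth doing when the allocation is genuinely expensive.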