The Two Rules of Program Optimization
I've seen some bad code lately that was designed in an effort to improve performance. For instance, an 80-line method was not split into several smaller methods for a single reason: to avoid the method call overhead (around 15 nanoseconds!). The result was code that was simply hard to read.
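Just to illustrate (the class and method names below are invented, not taken from the actual code), splitting such a method might look roughly like this. Modern JIT compilers typically inline small private methods anyway, so the feared call overhead rarely survives into the compiled code:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch only -- names and workload are made up.
public class ReportGenerator {

    // One readable top-level method instead of a single 80-line block.
    public String generate(List<String> rows) {
        List<String> cleaned = clean(rows);
        sort(cleaned);
        return render(cleaned);
    }

    // Small private helpers like these are prime candidates for JIT inlining,
    // so the method call overhead usually disappears at runtime.
    private List<String> clean(List<String> rows) {
        List<String> result = new ArrayList<String>();
        for (String row : rows) {
            result.add(row.trim());
        }
        return result;
    }

    private void sort(List<String> rows) {
        Collections.sort(rows);
    }

    private String render(List<String> rows) {
        StringBuilder sb = new StringBuilder();
        for (String row : rows) {
            sb.append(row).append('\n');
        }
        return sb.toString();
    }
}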
This reminded me of the rules of program optimization (coined by Michael A. Jackson, a British computer scientist) that we were taught back at university:
The First Rule of Program Optimization: Don't do it.
The Second Rule of Program Optimization (for experts only!): Don't do it yet.
Well, this is true mainly for two reasons:
- Optimization can reduce readability and add code that exists only to improve performance. This may complicate programs or systems, making them harder to maintain and debug.
- Optimizing usually means we believe we are smarter than the compiler, which is plain wrong more often than not.
Cleaner Code
Donald Knuth said, "Premature optimization is the root of all evil." Premature optimization here means that a programmer lets performance considerations drive the design of the code. This can result in a design that is not as clean as it could have been, because the code is complicated by the optimization and the programmer is distracted by optimizing.
Therefore, if performance tests reveal that optimization or performance tuning really has to be done, it should usually be done at the end of the development stage.
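As a rough illustration of measuring before tuning (the workload below is made up), even a simple timing harness built only on System.nanoTime gives you numbers to argue with, although a dedicated benchmarking framework such as JMH is preferable for anything serious, since naive timings are easily skewed by JIT warm-up:

// Rough, illustrative timing harness -- the workload is invented.
public class NaiveTiming {
    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 10000000; i++) {
            sum += i;
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("sum=" + sum + ", took " + elapsedMs + " ms");
    }
}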
Wrong Intuitions
This is what Sun Microsystems' Technology Evangelist Brian Goetz thinks: "Most performance problems these days are consequences of architecture, not coding – making too many database calls or serializing everything to XML back and forth a million times. These processes are usually going on outside the code you wrote and look at every day, but they are really the source of performance problems. So if you just go by what you're familiar with, you're on the wrong track. This is a mistake that developers have always been subject to, and the more complex the application, the more it depends on code you didn't write. Hence, the more likely it is that the problem is outside of your code." Right he is!
Smarter Compiler
Often, the best way to write fast code in Java applications is to write dumb code – code that is straightforward, clean, and follows the most obvious object-oriented principles in order to get the best compiler optimization. Compilers are big pattern-matching engines, written by humans who have schedules and time budgets, so they focus their efforts on the most common code patterns, in order to get the most leverage. Usually hacked-up, bit-banging code that looks really clever will get poorer results because the compiler can't optimize effectively.
A good example is string concatenation in Java (see this conversation with Java Champion Heinz Kabutz, where he gives some measurements)...
- Back in the early days, we all used String addition (the + operator) to concatenate Strings:
return s1 + s2 + s3;
However, since Strings are immutable, the compiled code will create many temporary String objects, which can strain the garbage collector.
- That's why we were told to use StringBuffer instead:
return new StringBuffer().append(s1).append(s2).append(s3).toString();
That was around three to five times faster in those days, but the code became less readable. Was it worth it? Does your code do enough String concatenation that you would really feel a difference after making that part execute three times faster?
- Is that still the recommended way? A main downside of StringBuffer is its thread safety, which is usually not required (the buffers are not shared between threads) but slows things down. Hence, the StringBuilder class was introduced in Java 5; it is almost identical to StringBuffer, except that it is not thread-safe. So using StringBuilder should be significantly faster, and you know what? When Strings are added with the + operator, the compiler in Java 5 and 6 automatically uses StringBuilder:
return s1 + s2 + s3;
Clean, easy to understand, and quick. Note that this optimization will not occur if StringBuffer is hard-coded!
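To put the three variants next to each other, here is a minimal, self-contained sketch (the class name and arguments are just placeholders):

// Minimal sketch comparing the three concatenation styles discussed above.
public class Concat {

    // Plain + operator: javac compiles this to StringBuilder appends
    // (in Java 5 and 6), so it is both the cleanest and a fast variant.
    static String plus(String s1, String s2, String s3) {
        return s1 + s2 + s3;
    }

    // Explicit StringBuffer: synchronized, so it carries locking overhead
    // even though no other thread ever sees the buffer.
    static String buffer(String s1, String s2, String s3) {
        return new StringBuffer().append(s1).append(s2).append(s3).toString();
    }

    // Explicit StringBuilder: the unsynchronized sibling introduced in Java 5.
    static String builder(String s1, String s2, String s3) {
        return new StringBuilder().append(s1).append(s2).append(s3).toString();
    }

    public static void main(String[] args) {
        System.out.println(plus("a", "b", "c"));
        System.out.println(buffer("a", "b", "c"));
        System.out.println(builder("a", "b", "c"));
    }
}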
That was just one example... All in all, it's quite simple: today's Java JIT compilers are highly sophisticated and clever at optimizing your code. Trust them. Don't try to be even more clever. You aren't!