Not All Optimization Is Premature

November 27, 2012

The other day the reddit community dismissed my advice to switch from a text-based to a binary serialization format, labeling it “premature optimization”. I’ll zoom out from that particular case and discuss why not all optimization is premature.

Everyone has heard Donald Knuth’s phrase “[..] premature optimization is the root of all evil”. And as with every well-known phrase, this one is usually misinterpreted, to such an extent that people think optimizing anything that is not a bottleneck is bad. As a result, many systems are unnecessarily heavy and consume a lot of resources…because there is no bottleneck.

What did Knuth mean? That it is wrong to optimize if that is done at the cost of other important variables: readability, maintainability, time. Optimizing an algorithm can make it harder to read. Optimizing a big system can make it harder to maintain. Optimizing anything can take time that should probably be spent implementing functionality or fixing bugs. In practice, this means that you should not add sneaky if-clauses and memory workarounds to your code, that you shouldn’t introduce new tools or layers in your system for the sake of some potential gains in processing speed, and that you shouldn’t spend a week on gaining 5% in performance. However, most interpretations say “you shouldn’t optimize for performance until it hits you”. And that’s where my opinion differs.

If you wait for something to “hit” you, then you are potentially in trouble. You must make your system optimal before it goes into production, otherwise it may be too late (meaning a lot of downtime, lost customers, huge bills for hardware/hosting). Furthermore, “bottlenecks” are not that obvious in big systems. If you have 20 servers, will you notice that one piece of code takes up 70% more resources than it should? What if there are 10 such pieces? There is no obvious bottleneck, but optimizing the code may save you 2-3 servers. That’s why writing optimal code is not optional, and is certainly not “premature optimization”. Let me give a few examples:

  • you notice that in some algorithms that are supposed to be invoked thousands of times, a linked list is used where random access is required. Is it premature optimization to change it to an array/ArrayList? No – it takes very little time and does not make the code harder to read. Yet, it may increase the speed of the application a lot (how much is ‘a lot’ doesn’t even matter in that case)
  • you realize that a piece of code (including db access) is executed many times, but the data doesn’t change. This rarely accounts for a big percentage of the time needed to process a request. Is it premature optimization to cache the results (provided you have a caching framework that can handle cache invalidation, cache lifetime, etc.)? No – caching them would save tons of database requests without making your code harder to read (with declarative caching it will be just an annotation).
  • you measure that if you switch from a text to a binary format for transmitting messages between internal components, you can do it 50%+ faster with half the memory. The system does not have huge memory needs, and the CPU is steady below 50%. Is replacing the text format with a binary one a premature optimization? No, because it costs 1 day, you don’t lose code readability (the change is only one line of configuration) and you don’t lose the means to debug your messages (you can dump them before/after serialization, or you can switch to the text-based format in development mode). (Yeah, that’s my case from the previous blogpost.)
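The first example can be sketched in a few lines. This is a minimal, self-contained illustration (the class and sizes are invented for the demo): `get(i)` is O(1) on an `ArrayList` but O(n) on a `LinkedList`, so an index-based loop degrades from O(n) to O(n²):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class RandomAccessDemo {
    // Sum every 10th element by index. Each get(i) is O(1) on an
    // ArrayList but O(n) on a LinkedList, so the whole loop is
    // O(n) vs O(n^2) depending on the list implementation.
    static long sumEveryTenth(List<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i += 10) {
            sum += list.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 100_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }

        long t0 = System.nanoTime();
        long a = sumEveryTenth(arrayList);
        long t1 = System.nanoTime();
        long b = sumEveryTenth(linkedList);
        long t2 = System.nanoTime();

        // Same result, very different cost.
        System.out.println("equal results: " + (a == b));
        System.out.println("ArrayList:  " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("LinkedList: " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```

The one-word change from `LinkedList` to `ArrayList` is exactly the kind of fix that costs nothing in readability.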
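The caching example can be sketched without any framework. In a real application the declarative version would be a single annotation on the method (e.g. Spring’s `@Cacheable`); the hand-rolled `ConcurrentHashMap` version below is just a stand-in to show the effect, with the database call simulated by a counter:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CachingDemo {
    static final AtomicInteger dbCalls = new AtomicInteger();
    static final Map<Long, String> cache = new ConcurrentHashMap<>();

    // Stand-in for an expensive, rarely-changing database lookup.
    static String loadFromDb(long id) {
        dbCalls.incrementAndGet();
        return "record-" + id;
    }

    // Cached variant: hits the "database" only on a cache miss.
    static String load(long id) {
        return cache.computeIfAbsent(id, CachingDemo::loadFromDb);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            load(42L); // 1000 requests for the same unchanging data
        }
        System.out.println("db calls: " + dbCalls.get()); // prints "db calls: 1"
    }
}
```

A production cache would also need invalidation and a lifetime policy, which is why the post assumes a caching framework that handles those.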

So, with these kinds of things, you save a lot of processing power and memory even though you didn’t have any problems with them. But you didn’t have problems only because you had enough hardware to mask them, or because you didn’t have enough traffic/utilization to actually see them. And performance tests/profiling didn’t show a clear bottleneck. Then you optimize “in advance”, but not prematurely.
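To make the serialization example concrete, here is a rough sketch of why a binary format is smaller and cheaper to parse than a text one. The fields and values are invented for the demo, and a real system would use a proper serialization library; plain `DataOutputStream` is enough to show the size difference:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class FormatDemo {
    public static void main(String[] args) throws IOException {
        long timestamp = 1354000000000L;
        double price = 123456.789;
        int quantity = 100000;

        // Text format: human-readable, but every field is spelled
        // out digit by digit and must be parsed back.
        String text = timestamp + "," + price + "," + quantity;
        byte[] textBytes = text.getBytes(StandardCharsets.UTF_8);

        // Binary format: fixed-width fields, no character parsing.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeLong(timestamp);   // 8 bytes
            out.writeDouble(price);     // 8 bytes
            out.writeInt(quantity);     // 4 bytes
        }
        byte[] binaryBytes = bos.toByteArray();

        System.out.println("text:   " + textBytes.length + " bytes");  // 31 bytes
        System.out.println("binary: " + binaryBytes.length + " bytes"); // 20 bytes
    }
}
```

When the format is chosen by one line of configuration, switching back to text in development mode keeps the messages debuggable, which is the trade-off the post describes.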

An important note here is that I mean mainly web applications. For desktop applications the deficiencies do not multiply. If you have a piece of desktop code that makes the system consume 20% more memory (ask Adobe), then whatever – people have a lot of memory nowadays. But if your web application consumes 20% more memory for each user on the system, and you have one million users, then the absolute value is huge (although it’s still “just” 20%).

The question is – is there a fine line between premature and proper optimization? Anything that makes the code “ugly” and does not solve a big problem is premature. Anything that takes two weeks to improve performance by 5% is premature. Anything that is explained with “but what if some day trillions of aliens use our software” is premature. But anything that improves performance without affecting readability is a must. And anything that improves performance by just a better configuration is a must. And anything that makes the system consume 30% less resources and takes a day to implement is a must. To summarize – if neither readability nor maintainability is damaged and the time taken is negligible – go for it.

If every optimization is labeled as “premature”, a system may fail without any visible performance bottleneck. So assess each optimization rather than automatically concluding it’s premature.


15 Responses to “Not All Optimization Is Premature”

  1. Yep. I have seen systems basically implode in production from this twisted logic. Something that runs in O(n!) time and can be written nearly as simply to run in O(n), or even O(n^2), for example, would definitely be worth taking the time to fix.

  2. If you want to optimise I don’t see a problem as long as you “measure, optimise, measure again”. That sort of encompasses not prematurely optimising, while implying that if you do at least you know you didn’t make things worse. Of course code clarity vs optimisation is up to your project’s needs.

    I agree with “a fine line between premature and proper optimization”. This is another case of avoiding “one rule to rule them all”.

  3. You should not optimize for performance before it hits you, and you should make sure ANYWAY that it can hit you before you go into production.
    So before production, test your code, check whether there are performance hits, then optimize based on what hit you (and only based on that).

    The opposite would be to try and be really clever and optimize code, but not test performance before going into production.

    There is no fine line between premature and not premature, it is pretty clear. If you make any change to your code ONLY because you think (without measuring) it will improve performance (as opposed to having other benefits), then it’s premature.

    If you have other good reasons to change the code, then it’s not premature optimization.

    In your examples, caching is always premature unless you benchmark. In the real world, people think like you, missing a tiny detail, and break the whole system. This can cost millions, and all that to save some process calls nobody cared about.

    Your third example is also premature optimization, a near perfect example. Because what happens in the real world is people do that, miss the fact that a third internal component relies on the text format, go into production and break the system. This can cost millions, and all that to save some CPU cycles nobody cared about.

    In both cases, this can mean millions gone. For what? As a price for your pride? “Never change a running system” is one of the key motivations for avoiding premature optimization. It’s the real world that counts, the humans that are not gods and inevitably make mistakes.

    Knuth’s statement is an acknowledgment of human nature, of us being flawed, screwing things up in reality. If humans were perfect, premature optimization as in your examples would be fine, as they would never ever break anything. Humans are not perfect, so for both your last examples, there are real world stories where people thought just like you, and broke systems, generating only costs and trouble.

  4. @Tibo Nope. As I noted explicitly in my examples, the performance improvement of the format switch is more than 50% in both CPU and memory. Because I’ve measured it. The caching thing – the code is invoked on every request, and that’s ascertained by automated tests (for example). So again, we have measured, that these pieces are indeed heavily used. So I don’t see your point.

  5. You will not get my point (or Knuth’s) before you start thinking about the costs of accidentally introducing bugs by changing code.

  6. You make it sound like you have written software for decades and it is my first project… ;) changing code always has the risk of introducing bugs, regardless of the purpose of the change. And that’s why there are automated regression tests.

  7. @Bozho Great tips. I have a question!
    What exactly do you mean by sending binary data instead of text to the client? Any hint or use case example will be helpful!

    If I get it correctly from you, I will try to optimize my system for it. Thanks.

  8. @Bozho BTW. I use Spring and Java extensively in my projects so an example through that will be very easily digestible to me.

  9. http://techblog.bozho.net/?p=1001

  10. The point of optimisation is that it optimises for a specific set of criteria; so while you can measure the difference between binary and string formats, say, you can not measure the effect that has on the consumer’s code. General purpose optimisation is HARD, and that’s why we have so many variants of algorithms, each one optimised for different criteria, e.g. space vs time.

    You can not optimise properly until you have an idea of which variables you wish to minimise, which you don’t care about, etc. Looking at your small section of the universe and saying I’ve optimised this is naive.
