AJ's blog

October 10, 2006

Optimize it!

Filed under: .NET, .NET Framework, Software Architecture, Software Development — ajdotnet @ 8:37 pm

Hi there,

back for some more talk about performance?

The last posts (“Performance is King…”) were primarily about preparing for performance. If you follow that advice you will hopefully know how to detect performance problems and how to react. But at some point you will actually have to do something about performance; in one word: optimize. That means getting your hands dirty: do some measurement, dig into the code, and eventually put code in place that is meant to speed up runtime performance. I may have some advice for this end of the performance topic as well…

First some links:

Next the obligatory advice, and it may not come as a big surprise: No premature optimization!

Now, this sentence has been used so often that it may have lost its meaning due to abrasion. I therefore strongly recommend reading it again, this time as if for the first time. Also read the essay “The Fallacy of Premature Optimization” to really understand what it means – and (perhaps more importantly) what it does not.

My own attitude towards premature optimization is: If you start with a sound design (one that does not pose performance risks in itself), the KISS principle in coding is the best preparation for upcoming demands (including optimizations). Optimization, on the other hand, usually complicates the code. I therefore refrain from optimizations as long as I don’t have a fully functional application that can be diagnosed as a whole.
Usually the first profiling run shows a mixture of things I expected to be slow, things I would not have expected, and the notable absence of things I would have expected to show up. And usually the things with the highest potential for optimization are related to how different parts of the application work together – things I could not even have optimized beforehand. (q.e.d.)

One obvious (but sometimes forgotten) hint: Performance optimization comes at a price!

Generally, optimized code tends to be more code: code that is more complex, code with bugs, code to maintain, code to document. It is usually also less reusable and less robust against context or use case changes. And it is code that takes time to execute; if the chosen optimization strategy doesn’t catch on, performance will be hurt rather than helped.

A typical optimization scenario is trading memory for processing time (i.e. any kind of caching or data redundancy). Memory consumption hurts scalability, and data redundancy poses the risk of data inconsistencies. Another scenario is the introduction of asynchronous/parallel processing, which may cause concurrency issues and race conditions. Every optimization strategy has its pitfalls.

As a consequence you should always measure performance and scalability before and after the optimization and decide carefully whether it’s worth the price. In my opinion a tiny improvement usually isn’t worth the increased complexity in all but the most performance-critical applications.
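To give this measuring advice a concrete shape, here is a minimal sketch of how I might compare two variants of an operation; DoWorkOld and DoWorkNew are of course just made-up placeholders for the unoptimized and optimized code:

    using System;
    using System.Diagnostics;

    static class PerfCheck
    {
        public delegate void WorkItem();

        // Runs the given code many times; a single call is far too noisy.
        public static TimeSpan Measure(string label, int iterations, WorkItem work)
        {
            work(); // warm-up: triggers JIT compilation and lazy initialization
            Stopwatch watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                work();
            watch.Stop();
            Console.WriteLine("{0}: {1} ms for {2} calls",
                label, watch.ElapsedMilliseconds, iterations);
            return watch.Elapsed;
        }
    }

    // Usage – compare both variants under identical conditions:
    // PerfCheck.Measure("before", 100000, delegate { DoWorkOld(); });
    // PerfCheck.Measure("after",  100000, delegate { DoWorkNew(); });

This is no substitute for a real profiler, but it is often enough to verify whether an optimization actually paid off.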

Choose the right optimization strategy:

If you run into a performance problem there is usually more than one option to solve it. The ability to choose one or the other (or a combination) is beneficial, as it allows you to react differently depending on the current situation (e.g. temporarily throw in hardware in production and deploy the optimized version with the next regular release – whether you give the hardware back is another question 😉 ).

I know you know your job, and I know that the list below is probably not exhaustive. Anyway, I’ll try to name some typical optimization strategies; it may help to have a list of possible options at hand.

  • Infrastructure
    • Scale up: just add more processors, memory, or (in the case of a certain developer) another monitor 😉 .
    • Scale out/load balancing: adding more machines improves performance as well as fault tolerance – but only if the server-side application architecture actually is scalable and can leverage the new machines.
    • Use dedicated/specialized hardware: this includes RAID systems for I/O-intensive applications (read: database servers), storage systems such as EMC Centera (for huge amounts of bulk data), hardware-based encryption, etc. In one project we even used hardware-based XSD validation and XSLT processing.
  • Design/Architecture
    • Streaming: a typical approach when processing large files (especially XML files). This improves not only performance but especially scalability.
    • Asynchronous workload distribution: instead of doing a lengthy operation while the user is waiting, just put it in a queue and tell the user you’re done. Do the real work later or on other machines (see the first sketch after this list).
    • Changes in user experience (visual feedback, making use of wait times that are already there): this is not actually optimization, yet it may solve the same problems. Just tell the user you are busy. And if the user has already accepted to wait for some time, why not do some additional stuff in the meantime?
  • Data Access
    • Caching: caching can be applied in all application layers, from client to backend. It is useful whenever acquiring the data takes notable time, whether that is caused by the providing piece of code (say, with databases or reflection) or by the way to get there (e.g. network bandwidth, marshaling costs, etc.); see the cache sketch after this list.
      Less obvious is the fact that certain things done in (to?) databases can be seen as caching: indexes, materialized views in Oracle, non-normalized tables (i.e. data redundancy). These things may also be done with in-memory data structures.
    • Reuse costly resources: the most common example is database connection sharing (pooling); web browsers do something similar with HTTP connections. Thread pools and garbage collection also fall into that category.
    • Batch/bulk processing, call aggregation: whenever setup and tear-down take a notable amount of time compared with the actual processing of one item, processing more than one item at a time immediately pays off. During a mass update, put chunks of updates in one transaction (rather than each row in its own); likewise, combining remote calls (e.g. with web services) will improve performance. (Actually, coarse-grained remote calls may be seen as batched-up fine-grained calls.) See the transaction sketch after this list.
  • Initialization
    • Lazy initialization: if initialization is costly, do it as late as possible – and perhaps not at all. Lazy initialization distributes the performance cost and improves startup time. The risks to take are domino effects (that eat up the aspired gain) and undetermined initialization sequences. A sketch follows after this list.
    • Proactive initialization: application startup takes time anyway. Why not take a little more time and have the application run smoothly afterwards? This is especially useful for server applications. It also makes for more stability, since deferred initialization also means deferred error detection.
  • Source Code Optimizations
    • Choose the right algorithms and data structures.
    • Know the costs of certain methods and keywords. “foreach” introduces more overhead than “for” does. “string.Format” is quite costly (interestingly, it is often used for tracing, which is turned off most of the time – see the last sketch after this list). Reflection is costly in itself. Other methods may affect performance in hidden ways if they cause assemblies to be loaded or code to be generated (e.g. regular expressions or XPath).
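To make some of these strategies a bit more tangible, here are a few sketches in C#. First, asynchronous workload distribution in its simplest form, using the ThreadPool; ReportService and GenerateReport are made-up placeholders for whatever lengthy operation you have:

    using System;
    using System.Threading;

    class ReportService
    {
        // Instead of generating the report while the user waits, accept the
        // request, queue the real work, and return to the user at once.
        public void RequestReport(int customerId)
        {
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                GenerateReport(customerId); // the lengthy operation, done later
            });
            // at this point we can already tell the user "you're done"
        }

        void GenerateReport(int customerId)
        {
            // placeholder for the actual work
        }
    }

In a real application you would rather use a durable queue (e.g. MSMQ), so the work survives a crash – but the principle is the same.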
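Next, caching: a minimal read-through cache, assuming a LoadCustomerName method that stands in for any costly acquisition (database round trip, reflection, remote call):

    using System.Collections.Generic;

    class CustomerNameCache
    {
        readonly Dictionary<int, string> cache = new Dictionary<int, string>();
        readonly object sync = new object();

        public string GetName(int customerId)
        {
            lock (sync) // the cache may be hit from several threads
            {
                string name;
                if (!cache.TryGetValue(customerId, out name))
                {
                    name = LoadCustomerName(customerId); // the expensive part
                    cache[customerId] = name;
                }
                return name;
            }
        }

        string LoadCustomerName(int customerId)
        {
            return "..."; // placeholder for the costly data access
        }
    }

Note the price tag mentioned above: memory consumption grows with the cache, and a stale entry is exactly the kind of data inconsistency to watch out for.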
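Then batching: a sketch of a mass update that puts chunks of rows in one transaction instead of one transaction per row. Connection string, table, and column names are made up for illustration:

    using System;
    using System.Data.SqlClient;

    static class BulkUpdate
    {
        public static void MarkProcessed(string connectionString, int[] ids, int chunkSize)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();
                for (int start = 0; start < ids.Length; start += chunkSize)
                {
                    // one transaction per chunk: setup/tear-down cost is paid
                    // once per chunk, not once per row
                    using (SqlTransaction transaction = connection.BeginTransaction())
                    {
                        int end = Math.Min(start + chunkSize, ids.Length);
                        for (int i = start; i < end; i++)
                        {
                            using (SqlCommand command = connection.CreateCommand())
                            {
                                command.Transaction = transaction;
                                command.CommandText =
                                    "UPDATE Orders SET Processed = 1 WHERE Id = @id";
                                command.Parameters.AddWithValue("@id", ids[i]);
                                command.ExecuteNonQuery();
                            }
                        }
                        transaction.Commit();
                    }
                }
            }
        }
    }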
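Lazy initialization, reduced to its core; ExpensiveIndex is a made-up stand-in for any structure that is costly to build:

    class ProductCatalog
    {
        ExpensiveIndex index; // built on first use, not at construction time

        ExpensiveIndex Index
        {
            get
            {
                // pay the cost only if (and when) the index is actually needed;
                // note: not thread safe as written – this is where the
                // "undetermined initialization sequence" risk comes in
                if (index == null)
                    index = new ExpensiveIndex();
                return index;
            }
        }
    }

    class ExpensiveIndex
    {
        // placeholder for a costly-to-build structure
    }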
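And finally the string.Format/tracing issue: guard the trace call so the formatting does not even happen when tracing is turned off (the common case). The switch name is arbitrary:

    using System.Diagnostics;

    static class Tracing
    {
        static readonly BooleanSwitch TraceSwitch =
            new BooleanSwitch("AppTrace", "application tracing");

        public static void TraceCall(string method, int id)
        {
            // string.Format is only evaluated when tracing is actually on
            if (TraceSwitch.Enabled)
                Trace.WriteLine(string.Format("{0} called with id {1}", method, id));
        }
    }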

As I said, this list is not exhaustive. However, please let me know if I missed something particularly important.

Now I’ve provided you with some additional advice and a bunch of options. If you are the type that can’t decide in a restaurant when presented with a particularly long menu, I may have done you a disservice. If you welcome the breadth of choice, hopefully you could pick up something new.

That’s all for now folks,
AJ.NET


2 Comments »

  1. Hi!

    I saw your posting on optimization.

    Would you like to work it into an article and post it on our wiki?

    Will Wagers
    Editor
    C# Online.NET
    http://en.csharp-online.net/
    editor at csharp-online.net

    Comment by Editor — February 9, 2007 @ 3:29 am

  2. Eric has a valuable lesson on performance. Read here: http://blogs.msdn.com/ericlippert/archive/2009/02/06/santalic-tailfans-part-two.aspx

    Comment by ajdotnet — February 7, 2009 @ 1:22 pm

