AJ's blog

May 20, 2007

About the Virtue of Not Improving Performance

Filed under: .NET, .NET Framework, ASP.NET, C#, Software Architecture, Software Development — ajdotnet @ 6:11 pm

I am very much in favour of performance awareness, as previous posts should have shown (optimize it, cache as cache can, performance is king, …); nobody questions that. But I repeatedly stumble over advice that I find … questionable.

So, with this post, I thought I might pick up some of the more common hints and tell you why you should not (have to) apply them for performance reasons. Yes, at the price of processor cycles. Here are some links that contain that advice (not the only ones, though):

Please note that my statements are meant for developers of ordinary line-of-business or web-applications. If you write real time software controlling atomic plants, this article is not for you. (Neither is it for the guys at Microsoft working on the .NET Framework or SQL Server.)

Please also note that I left out things related to garbage collection and finalization on purpose.

Optimization techniques that adversely affect class design

Some optimization techniques address the way the CLR works and aim to help the JIT compiler produce better (read “faster”) code.

Consider Using the sealed Keyword. The rationale behind this recommendation is “Sealing the virtual methods makes them candidates for inlining and other compiler optimizations.” (msdn)

The only candidates for this advice are derived classes of your application. Declaring them as sealed won’t hurt the class design. But won’t the virtual methods usually be called via the base class? Not very much to gain then anyway. So why bother?

Consider the Tradeoffs of Virtual Members? Consider the Benefits of Virtual Members, I’d rather say! “Use virtual members to provide extensibility.” (msdn). One should avoid making a method virtual if it’s not intended to be overridden. But that’s a question of class design and design for extensibility rather than a performance-related one. Avoiding a virtual declaration for performance reasons? No way.
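
To illustrate the sealed point, here is a minimal sketch (the class names are invented): sealing a leaf class of the application is harmless, because nobody was going to derive from it anyway, while removing a genuine extensibility point purely for speed would hurt the design.

```csharp
// Hypothetical example. DocumentProcessor exposes an extensibility
// point; PdfProcessor is a leaf class of the application that nobody
// derives from, so sealing it costs nothing in terms of design and
// makes the virtual call a candidate for JIT devirtualization/inlining.
public class DocumentProcessor
{
    public virtual void Process() { /* extensibility point */ }
}

public sealed class PdfProcessor : DocumentProcessor
{
    public override void Process() { /* PDF specific work */ }
}
```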

These are just examples; there is other advice, e.g. regarding volatile or properties. All in all, I personally have dismissed this category of performance-related advice. It’s either unnecessary (i.e. should be done for reasons far more important, like “know what you do and design things right”) or of adverse effect (like sacrificing good design for small performance gains).

Optimization techniques that adversely affect code

Some techniques aim at eliminating unnecessary repetitive calls:

  • Avoid Repetitive Field or Property Access
  • Inline procedures, i.e. copy frequently called code into the loop.
  • Use for loops rather than foreach

These … hints … are what I call developer abuse. Weren’t inlining, loop unrolling, and detection of loop invariants among the better-known optimization techniques of C++ compilers? And now I am supposed to do that manually? “Optimizing compiler” is not part of my job description, sorry.
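
For the record, here is what that advice amounts to in practice (a made-up sketch; Console.WriteLine stands in for whatever the loop body actually does):

```csharp
using System;
using System.Collections.Generic;

class LoopDemo
{
    static void Main()
    {
        List<string> items = new List<string> { "alpha", "beta", "gamma" };

        // The manual "optimization": hoist the Count property out of
        // the loop and index into the list instead of enumerating.
        int count = items.Count;          // read the property once
        for (int i = 0; i < count; i++)
        {
            Console.WriteLine(items[i]);
        }

        // The version I would actually write; the difference will not
        // show up in any profile of a line-of-business application:
        foreach (string item in items)
        {
            Console.WriteLine(item);
        }
    }
}
```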

String optimization

My special attention goes to … Mr. String. Mr. String, could you please come up to the stage… Applause for Mr. String!

StringBuilder abuse, i.e. forgetting about +/String.Concat/String.Format, is very common. Take the following code snippet as an example:

string t1 = a + b + c + d;
string t2 = e + f;
string t3 = t1 + t2;
return t3;

Quite complex, don’t you think? Wouldn’t you use a StringBuilder instead? NO! Don’t fall for that StringBuilder babble. I cannot say that all the given advice is wrong; it just forgets to mention String.Concat far too often and leaves a wrong impression.

How many temporary strings do you count? 3 (t1, t2, and t3)? Or 5 (a+b put into a temporary, which is in turn added to c, and so on)? Well, the answer is 3, as the C# compiler will translate all appearances of + within a single expression into one call to String.Concat. If you have a straightforward string concatenation, use + (but use it in one expression!). If it gets slightly more complex, String.Format (which uses StringBuilder.AppendFormat internally) might be another option.
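
A quick sketch of that point (the variable values are made up):

```csharp
using System;

class ConcatDemo
{
    static void Main()
    {
        string a = "He", b = "llo", c = ", ", d = "World";

        // One expression: the compiler emits a single call to
        // String.Concat(a, b, c, d), i.e. exactly one temporary string.
        string t1 = a + b + c + d;
        Console.WriteLine(t1);                        // Hello, World

        // Slightly more complex formatting: String.Format, which uses
        // StringBuilder.AppendFormat internally.
        Console.WriteLine(String.Format("{0}!", t1)); // Hello, World!
    }
}
```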

Use StringBuilder if you have to accumulate string contents over method calls, iterations, or recursions. Use it if you cannot avoid multiple expressions for your concatenation and to avoid memory shuffling. And please read “Treat StringBuilder as an Accumulator” (msdn) in order not to spoil the effect.
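
A minimal sketch of the legitimate case, accumulating over an iteration:

```csharp
using System;
using System.Text;

class AccumulatorDemo
{
    static void Main()
    {
        string[] lines = { "one", "two", "three" };

        // Accumulating across iterations: StringBuilder territory.
        StringBuilder sb = new StringBuilder();
        foreach (string line in lines)
        {
            sb.Append(line);   // append the pieces individually...
            sb.Append(';');    // ...never sb.Append(line + ";"), which
        }                      // concatenates first and spoils the effect
        Console.WriteLine(sb.ToString());   // one;two;three;
    }
}
```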

ASP.NET related things

My favourite category 🙂

Disable Session State If You Do Not Use It. That’s sound advice. You may do that for the whole application. But don’t do it just for one page. If you need session state, chances are you need it for all pages. That particular page is the exception? Well, then it can’t be doing very much, and disabling it will hardly improve the performance of your application. If you stumble over such a page, go ahead; but don’t waste your time hunting for these pages. Spend that time on the majority of your pages that actually need session state; spend it on managing session state efficiently. This way your whole application will profit.
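
For reference, disabling session state application-wide is a single web.config entry (page-level disabling would use the EnableSessionState attribute of the @ Page directive instead):

```xml
<!-- web.config: no session state for the whole application -->
<configuration>
  <system.web>
    <sessionState mode="Off" />
  </system.web>
</configuration>
```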

Disable View State If You Do Not Need It. Don’t. You don’t want to do it for the page, as view state is a feature of the controls. You don’t want to do it declaratively for the controls, for that is tedious and error-prone work. And you certainly don’t want to do it for every control, for some of them rely on view state to work correctly.
Managing view state is the sensible approach. Find ways to avoid sending large view states back and forth to the client. Check if there is unintended view state usage. The view state is small? Well, why bother? Empty view state is not that expensive. If you worry about that, use derived controls that switch view state off by default.
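
Such a derived control could look like the following sketch (StaticLabel is a made-up name):

```csharp
using System.Web.UI.WebControls;

// A label for purely static text: it switches view state off by
// default, so pages don't have to repeat EnableViewState="false"
// everywhere. Setting it back to true on an instance still works.
public class StaticLabel : Label
{
    public StaticLabel()
    {
        EnableViewState = false;
    }
}
```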

Using Server.Transfer to avoid the roundtrip caused by Response.Redirect is a bit penny-wise and pound-foolish. The user requests page A, but you decide he should see page B. Rather than letting him know, you just give him what he did not ask for. If he refreshes the page (not to mention setting a bookmark), you will always get a request for page A and always have to transfer to page B. But rest assured, that last transfer definitely is more efficient than redirecting. Oh, and by the way, you just lied to the user, telling him he is on the Get_This_Gift_For_Free.aspx page when he actually was on the Buy_Overpriced_Stuff_You_Dont_Need.aspx page. Interesting business model, though.

Use Response.Redirect(…, false) instead of Response.Redirect(… [, true]). Now that’s an example of a half-understood issue being made a common recommendation. (Am I becoming polemic? Sorry, could not resist 👿 .)
Redirect throws a ThreadAbortException. “An exception! Oh my, what can we do about that?” “Oh, don’t worry, we can go through all the data binding, page rendering, event handling, and whatever else is left of the page life cycle; we will fight any opponent, such as that pesky grid control that refuses to bind against null. And at the end of the day we will have slain the dragon and it won’t fly again.” OK, now I am being polemic. Anyway. The problem with not throwing the exception should have become clear: the whole remaining page life cycle runs for a page the user will never see. If you want to avoid the exception, try to do the redirect client side.
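
To spell out the two variants (a code-behind sketch with made-up page names; the CompleteRequest call is the usual companion to the endResponse == false overload, skipping the remaining pipeline steps without an exception):

```csharp
using System.Web.UI;

public class PageA : Page
{
    // Variant 1: the default overload throws ThreadAbortException,
    // which ends the page life cycle immediately, at the price of
    // an exception.
    void RedirectWithAbort()
    {
        Response.Redirect("~/PageB.aspx"); // endResponse == true
    }

    // Variant 2: no exception thrown, but data binding, rendering and
    // the rest of the current page's life cycle still execute.
    void RedirectWithoutAbort()
    {
        Response.Redirect("~/PageB.aspx", false);
        Context.ApplicationInstance.CompleteRequest(); // skip remaining pipeline steps
    }
}
```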

What to conclude?

Now I’m going to contradict myself: none of the above advice is actually wrong (well, …). Not if you look only at performance in terms of processor cycles. But it is far too expensive in terms of brain cycles and won’t get you the benefit you expect. It trades small gains in performance for adversely affected design, additional workload on the developer’s side, and additional chances for errors. And it is far too fine-grained and restricted to the code level to matter all that much. Usually the interaction patterns between different components or the chosen algorithms have more impact. Not the virtual call to that method, but the non-virtual, unsuspicious method that is called 500 times during one request. Not the fact that 5 strings are concatenated with +, but the fact that those strings result from 10 database calls.

I’m contradicting myself even further: I didn’t say don’t do it at all; I said don’t do it for performance reasons. There may be other reasons to follow the advice. (E.g. I would “Avoid Repetitive Field or Property Access” in order to have more comprehensible code, code that better supports debugging sessions. “Consider the Tradeoffs of Virtual Members” is no bad advice either.)

If you want to follow some rule upfront, I strongly recommend a sound design and understanding of what you do. And once you’ve got past the broad scenarios, let the profiler tell you what parts of your application to optimize. This way you will spend the work where it matters.

Oh, by the way: please note that I did not reject every piece of advice. There is a lot of good advice that tells you how to improve performance simply by writing good code (in terms of design, readability, etc.), and quite a few hints for doing things efficiently without adverse effects. Just don’t follow them blindly.

That’s all for now folks,




  1. My favorite example of StringBuilder abuse (seen dozens of times):
    string A = "….";
    string B = "….";
    StringBuilder sb = ….
    // :
    sb.Append(A + B);

    So, let’s see… We’ve used a StringBuilder to avoid a string concatenation, and then we do one anyway…

    Comment by James Curran — May 25, 2007 @ 5:03 pm

  2. Don’t forget String.Format() internally creates a string builder anyway. 😉

    Comment by Ray Booysen — May 26, 2007 @ 12:34 am

  3. Regarding the “sealing virtual methods” and “Consider the Tradeoffs of Virtual Members”: Did you ever take a look at why the hell the generic collections perform (in many cases) better (read “faster”) than the non-generic ones? The main reason is not the removed casts/(un)boxing (unboxing is very cheap) but that the methods are no longer virtual, hence reduced indirections, hence the performance gain. And based on a clean design these methods should be virtual, but they are not. Performance trade-off. I like it. It was the right way to go.

    Regarding “sealed” in general: I never understood why classes are not sealed by default. This is the design decision I will never ever understand. People create ridiculously complicated inheritance hierarchies for extensions that never happen, because everybody happens to be a great architect these days? Think again.

    And where the hell did you get the “Use for loops rather than foreach” suggestion from? That is definitely not correct. foreach can surely beat a for loop in many cases, and many for loops beat foreach :-)


    Comment by Ocho — June 9, 2007 @ 10:11 pm

  4. @Ocho
    Your first point: I never said that there is no performance gain in avoiding virtuals, and I didn’t even mention generics. Actually I like generics for their type safety, and clean class design does _not_ mandate virtual methods. Anyway, this is not the point. Someone (Microsoft, actually) says you should seal or avoid virtual methods, and they say you should do that for performance reasons. My stance is that performance may be something to consider, but I refuse to let it dictate (to the point of spoiling) my class design. If in certain cases generics are an alternative approach that serves clean design and performance at the same time, great.

    Point 2: So you are saying that having the option to do something will lead to the abuse of that feature, therefore only permit that option in certain cases? And you are the one to decide? Reminds me of communism, and the party deciding what people should be allowed to do. No thanks, I’ll stay in the free part of the world and live with the consequences.

    Last point: Why, from the Microsoft hell, that is. But there _is_ a chance that they don’t know what they are talking about. You still might want to have a look at the msdn reference I gave; they also provide ample reasons for their suggestion. Personally I haven’t tested .NET 2.0, but with 1.x the overhead definitely was noteworthy: calling the enumerator, casting with type checks, IDisposable awareness; these things simply don’t happen with for loops. Whether this will have a notable effect certainly depends on the collection.


    Comment by ajdotnet — June 23, 2007 @ 12:11 pm

  5. Oops, irony storm level 5? (stolen from another blog you might know talking about Paris H.)

    Well, I guess you are a smart guy who knows what he is doing (no irony). You might be using your tools in a more “advanced” way (hey, still no irony) than other users (who might prefer “programming with pictures” based on their skills). Maybe some guidelines and tools are simply targeted towards such an audience?

    Virtuals: The collections in .Net 2.0 are a framework widely used and something like that should be tuned for a good run-time rather than a better design-time performance. And that library looks quite sexy to me.

    Virtuals 2: Less advanced users are more easily helped with “automatic” optimizations (compiler, JIT), and those can yield terrific gains (I strongly believe the guys building the JIT know what they are doing; stupid thinking, I know). Virtuals hamper that, because such functions cannot be inlined. So you hurt yourself. Ouch. But at least the class diagram looks smooth. Congrats!

    Sealing your classes:
    Doing things implicitly sucks in my eyes; doing things explicitly is king. And based on some mechanisms which changed from Netfx2 to Netfx3, it looks like other people think the same. If you want an extension point, say that you want an extension point. Otherwise, no extension points by default. If you think that this means I like Honecker, go for it. I mean, most classes are never derived from.

    foreach: Using foreach on an array, for example, has almost no overhead. And Rico never said that you should never use foreach :-)

    And regarding Microsoft: when you’ve got some 70,000 emps, surely you’re going to end up with a pretty high number of “so-so”s, as I would call them. And the worst thing is, such people rule the world. And that really hurts. Ouch.

    How do you like Linq? (still no irony. Am I getting old?)

    Comment by Ocho Cinco — June 30, 2007 @ 8:15 pm

  6. Thanks for the info re string.Concat. Yes I agree that there is such a thing as over-optimization. You have to weigh your optimizations against time lost, harder to understand code, and higher long-term maintenance.

    Comment by Chinh Do — September 19, 2007 @ 6:34 pm

  7. […] has written a post detailing string.Concat here. Thanks, AJ, for pointing it […]

    Pingback by StringBuilder is not always faster - Part 2 » Chinh Do — September 29, 2007 @ 8:20 am
