AJ's blog

October 28, 2006

Frameworks – don’t let ’em frame you!

Why is it that so many people want to write frameworks (rather than use them)? Is it the “Hey, I’ve just written a method that makes sense in another application! Why not make it a framework? That’s so much more fun!” effect? Or the “We have a bunch of applications that we need to migrate. Let’s begin with a framework to ensure they are built consistently” demand? Does “not invented here” still play a role?

Well, any of these and other motivations probably plays its part and is more or less wrong or right, depending on the situation. I’m not going to judge that in general. But there is another question that should in all honesty be asked: Why are there so many bad frameworks out there? That is, frameworks that do not meet their requirements?

Note that “bad framework” does not imply “bad code”. The framework code may border on genius, yet if the requirement was reuse and no one actually did reuse it, then it missed the requirement. If it was meant to hide complexity and ease the usage of certain things but all it did was replace one complex task with another, then it failed. Even if it met all requirements yet caused overly complex and unmaintainable client code, I would regard it as a failure.

Sadly, many bad frameworks are written by good developers doing their best. I have written bad frameworks, no kidding! (Of course I don’t do that anymore… 😉 .)

The main reason for this phenomenon may come down to one single fact that is too often overlooked: Just as frameworks differ from applications, framework development differs from application development. And the single most differentiating point between applications and frameworks is the user base. The users of a framework are other developers. In the same way a typical application supports the end user (think of usability, consistency, appearance, etc.), a framework has to support the developer.

In his article “Framing the framework conversation” Jack Vaughan puts his finger right on the wound: “Application frameworks have been around for a while now, but accepted best practices in application framework building still seem to be something of a new thing.”

Jack also says, “One worst practice to avoid in framework creation is to look at the application framework in the same way as a typical IT application.” And later on: “An obvious but sometimes overlooked point: Developers are the people that will be “using” this framework software.”

Just my point, right?

Enough bashing, let’s get a bit more constructive. I have also written some very good frameworks (again, “good frameworks” simply stands for “frameworks that met the requirements”), and here are some golden rules I learned from experience:

1. Focus on the “user interface”, not on the functionality
2. Robustness is of utmost importance
3. Align the framework with existing environments and standards

Just as Asimov was not content with his “Three Laws of Robotics” and prepended law #0, I’d like to add the following:

0. Know when not to write framework code.

Of course I’m not going to raise the natural radiation level to render mother earth uninhabitable. Promise.

In more detail:

1. Focus on the “user interface”, not on the functionality

When you begin developing a framework, start with writing the client(!) code, including any configuration, as if this framework already existed.

This way you will not only specify the functions provided by the framework, you will specify the interface the framework shall provide. This covers not only the functional demands but the “ergonomics” of using it. In other words, you’ll have a fairly complete specification of the user interface of your framework, i.e. the API, the configuration file schema, and even the behaviour encountered (such as exceptions).

This approach makes sure the framework can be leveraged with as little effort as possible. It also implicitly defines what should not be necessary in order to use the framework. For example, some module concerned with exception handling might have to register the necessary event handler itself rather than expecting the user to do this, or worse yet, expecting him to litter his code with try/catch.

If you focus primarily on the functionality instead, the framework will probably fulfill the requirements as well. Yet in order to use it the user will have to write a lot of unnecessary and tedious code. Perhaps he will even have to deal with internal details, thus contributing to the breaking changes of your next version.
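To make this concrete, here is a minimal sketch of the “client code first” approach in C#. All names here (ExportService, ExportException, the “exportSettings” section) are invented purely for illustration; the stubs exist only so the sketch compiles. The point is the code in Main: you write that first, and it defines how the framework should feel to its users.

```csharp
using System;

// Hypothetical exception type: the framework surfaces ONE well-defined
// exception instead of leaking its internal errors.
public class ExportException : Exception
{
    public ExportException(string message) : base(message) { }
}

// Hypothetical framework class; only its *interface* matters at this stage.
public class ExportService
{
    private readonly string configSection;

    public ExportService(string configSection)
    {
        // The envisioned framework would read and validate the named
        // configuration section right here, so the user doesn't have to.
        this.configSection = configSection;
    }

    public void ExportCustomers(string path)
    {
        // Placeholder body, just to make the sketch runnable.
        Console.WriteLine("[" + configSection + "] exporting to " + path);
    }
}

public static class Client
{
    // The code to write FIRST: one line to set up, one line to use,
    // one exception type to handle. That is the target "ergonomics".
    public static void Main()
    {
        ExportService service = new ExportService("exportSettings");
        try
        {
            service.ExportCustomers("customers.xml");
        }
        catch (ExportException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
```

Writing this before any framework code exists pins down the API, the configuration, and the error behaviour from the user’s perspective.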

2. Robustness is of utmost importance

Framework users have many more options to stress framework code than end users have with applications. Thus framework code needs to be much more robust than common application code. You never know under what conditions and in what contexts your framework code will run. If some error occurs, don’t expect the “sympathy of a fellow developer”; people using a framework are as demanding and unforgiving as any other users.

Things you should think of include:

  • Extensive checks of all information coming into the framework: call parameters, configuration data, connection objects, various contexts (request, transactions, security, …), …
  • Possibly special configuration checks during initialization. For example, if the configuration points to a directory (or any other resource that is uncertain to exist or subject to security restrictions), try accessing that directory right away; if it contains a type for dynamic object creation, create it now.
  • Use reasonable default values to avoid effort on the user’s side. The less work the user has to do, the more pleased he will be – not to mention the fewer chances for mistakes.
  • Provide meaningful error messages. Don’t tell the user an error happened (he’ll know that anyway :-)), tell him how to solve it. A NullReferenceException doesn’t help at all to find the problem; some kind of configuration exception with the setting name and a hint why it caused an error does.
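The points above can be sketched in a few lines of C#. Everything here is invented for illustration (the FrameworkConfigException type, the ImageStore class, the “imageDirectory” setting name); the pattern is what counts: check incoming information at the framework boundary, probe uncertain resources during initialization rather than on first use, and report the setting name plus a hint instead of letting a NullReferenceException surface later.

```csharp
using System;

// Invented exception type that carries the setting name and a solution hint.
public class FrameworkConfigException : Exception
{
    public FrameworkConfigException(string setting, string hint)
        : base("Invalid value for setting '" + setting + "': " + hint) { }
}

// Invented framework class that depends on a configured directory.
public class ImageStore
{
    private readonly string directory;

    public ImageStore(string directory)
    {
        // Check information coming into the framework...
        if (directory == null || directory.Length == 0)
            throw new FrameworkConfigException("imageDirectory",
                "The setting is missing. Add it to the configuration file.");

        // ...and probe uncertain resources right away, during initialization.
        if (!System.IO.Directory.Exists(directory))
            throw new FrameworkConfigException("imageDirectory",
                "The directory '" + directory + "' does not exist or is not accessible.");

        this.directory = directory;
    }

    public string Directory { get { return directory; } }
}
```

A user who misconfigures this gets told which setting is wrong and what to do about it, at startup, not somewhere deep in a call stack at runtime.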

3. Align the framework with existing environments and standards

Work with the .NET Framework and Visual Studio. Leverage and enhance them. Use similar patterns and conventions. Behave like a good citizen who is just trying to help.

This way you will realize the greatest synergies for your users:

  • You build on already present concepts and knowledge (which also streamlines your documentation)
  • You achieve better reuse effects if you can plug in an already existing infrastructure
  • You will participate in future enhancements of the platform

If you started developing parallel or even contradictory concepts for things already there, you would always have to justify yourself, cope with improper usage, and generally suffer poor acceptance among your user base. (You would be seen as an anarchist rather than a good citizen.)

Personally I adhere to this rule as long as possible, even if my own solution appears to be better. I would even drop framework features if the new version of the environment suddenly provides something similar. Which is an ideal transition to the next point…

0. Know when not to write framework code.

There are different reasons for not writing framework code, yet the most important one is: The feature is already available somewhere else.

If Microsoft has done it, use it. If there is a serious community project, use it. If someone in another department did something similar, go drink a coffee with that guy.
Face it: Any big company that sells a product is far better at developing, documenting, and maintaining that product than you are. Any community will constantly contribute to and reassess framework features, as well as use and test them in very diverse contexts. Who really thinks he can keep up with that on his own, especially in the long run?

Of course they might fail. So what? They are less likely to fail than you, and they are a far better target to blame if they do.

At the end of the day…

… your framework will still have to deliver some functionality. But following these guidelines should make your users happy with the features, rather than frustrated by the stunts needed to access them.


Disclaimer: While Jack’s article inspired this post and some fellow colleagues unwillingly contributed to it 😉 , it is solely based on my own experience. Blame me for any mistakes and for not honoring some people’s efforts. 😀

That’s all for now folks,

October 22, 2006

WebService? What do you mean, "WebService"?

Filed under: .NET, .NET Framework, SOA, Software Architecture, Software Development — ajdotnet @ 3:27 pm

When people (make that “developers”) start talking about WebServices I immediately get cautious. Once they use phrases like “other clients”, “probably different platform”, and “serialization” in the same sentence I get that weary feeling, like I just ate something bad. Ironically, if I ask what they actually mean when they talk about “WebService”, I am the one being looked at as if I had been out of business for the past 5 years.

Well, I cannot blame them. Actually I blame Microsoft. And IBM. And BEA. They made it far too easy to create something they called a WebService directly from code (i.e. SOAP wrappers on objects). Then they invented SOA and talked about something entirely different which they also named WebService. It is not as if they don’t try; actually they try very hard to tell people the difference. In fact there are bright people within those companies who do nothing else. Yet this sometimes strikes me as damage control; damage caused by carelessly giving two very different things the same name – or rather by sticking with the name when WebServices evolved into something completely different.

Just as a side note: Did you notice I didn’t mention SUN? They are not to blame. They jumped on the train when it was already on its way and even then they did it reluctantly.

Here are the differences I am referring to:
On one hand there is a technology stack (namely HTTP, SOAP, and WSDL, possibly augmented with WS-* standards) commonly named “WebService”. On the other hand we have the architectural concept of services – nowadays always in the context of a SOA – which is usually also named “WebService”, because WebServices are the predominant/canonical implementation technology for this concept. And because WebServices are the predominant implementation technology, people often confuse the concept and the technology. Thus, when they use the technology they assume that the benefits of the service come for free. (That is “service” in the SOA sense – unfortunately “service” is also a term that has very diverse meanings.)

In order to clearly distinguish both kinds I usually use the terms “internal/private WebService” (the technology focused ones) and “external/public WebService” (following the service concept). The (somewhat idealized) definition of these two kinds of WebService goes like this:

Internal/private WebServices are used within a logical application as a means for distribution.
Both sides (client and server) are under control of the development team, both are deployed in unison, both have the same notion about the data model. Actually the fact that a WebService (or rather HTTP/SOAP) is used is just an implementation detail. One could also think of any other kind of remote method call technology (Remoting, DCOM, RMI, FTP, …) without affecting the application architecture.
These WebServices are never meant to be called from outside the application.

External/public WebServices are published by an application to be used by other independent consumers – consumers you may not even know. These WebServices constitute another UI layer (rather than the interface of a layer). They require contracting, security, versioning, logging, fault tolerance, etc. In particular this includes developing a separate data model for the interface. This model has to be versionable, stable, aligned with the service calls, and devoid of any implementation details. What you should not do is simply publish the internal data model. Apart from inflicting implementation details on the consumer, the least that will happen is that you can’t change anything afterwards without breaking your consumers. In other words: If you did that, you would have built the handcuffs, tied them to your own hands, thrown away the key, and jumped into the water. Even good swimmers will drown in this situation. Publishing your internal data structures will get you into trouble. Trust me on this, I can show the scars.

Needless to say, internal WebServices are far easier to implement than external ones. The pitfall created by the shared term is that, in my experience, most WebServices are implemented as internal WebServices, yet sooner or later someone will use them externally, expecting the quality and the features of an external WebService. Also, quite often certain people (deliberately?) ignore the difference – even if you tell them. Internal WebServices are so much cheaper, and a WebService is a WebService, no matter what this bit-counting bonehead (me!) says.

The way I try to avoid this situation consists of one single rule: I observe this clear distinction during design, development, and especially when talking about these things. I use the term WebService for the external kind and put emphasis on the service characteristics. Also I mark them accordingly with price tags – and sorry, nothing on sale today. For internal WebServices I try to avoid that term altogether, rather I use something like “SOAP interface”.

Of course there’s a gray area and sometimes 3 is an even number, yet following this rule will avoid a lot of problems afterwards.

That’s all for now folks,

October 18, 2006

Cache as cache can!

Filed under: .NET, .NET Framework, ASP.NET, C#, Software Architecture, Software Development — ajdotnet @ 7:51 pm

Hi again,

Caching. Probably the optimization strategy that is employed most often.
Caching is good. Caching immediately speeds up the code. This guy seems to take a long time? Go, cache it! Cache like crazy. Well, not quite.
Caching is bad. Caching introduces memory pressure. Caching forces you to maintain cache integrity, and if not done correctly and exhaustively it will cause awkward data inconsistencies. And it isn’t worth the effort anyway, since when the guy comes back and asks for the same data (if at all), the cached data will be long outdated. Well, not quite either.

So what’s the matter with caching?
Caching is a tool and like every tool it has to be handled correctly. Using a hammer to iron your shirt might look like it worked, yet it will surely smash the buttons. (Which you’ll notice when this shirt is the last clean one you have and you need it for the job interview because you got fired for some foul caching strategy you implemented in the flagship software of your previous employer… 🙂 ).

Here’s the problem:
Caching is easy. Real easy. Putting a hashtable in place or using the ASP.NET cache functionality is no big deal – and the code will be faster, if only at first glance. So it’s actually quite tempting to put caching in place.

But like a good drink, caching has its potential adverse effects, especially if applied improperly, such as:

  • The application is eating up more and more memory because your homegrown caching using a static hashtable will grow steadily.
  • The web application will support considerably fewer concurrent users due to memory pressure.
  • The web application will become slower because it invokes the garbage collector or recycles the app domain more often.
  • In some extreme cases you will even see OutOfMemoryExceptions. (Try running BizTalk under heavy load if you’ve never seen one of these guys.)
  • You’ll get inconsistent data because not all code lines manipulating the data have been updated to also remove/update the outdated cache entry.
  • Best case: Your application just isn’t faster because no one actually used the cached data. Well, startup probably takes even longer.
  • Oh boy, did you forget to synchronize access to the cache?

And here’s the deal:
Blindly caching is bad. Sensible caching is still a powerful tool. Caching is an optimization strategy and should be treated as such. Measure first, find the bottlenecks, decide upon the optimization strategy (yes, caching is not the only one), and then possibly apply caching.
Lemma: You can’t do this right from the beginning; you need a functionally complete version of your application. Thus caching should be taboo until you reach a first functional iteration in development. And since caching is that easy, be adamant about not caching until you reach that state!

Note that there may be caching already in place (e.g. within the database), so always measure subsequent data requests as well. Also, don’t measure the performance alone; measure potential cache hits and misses before implementing caching. After all, putting something in a cache only makes sense if it is requested again afterwards with reasonable frequency.

If you have identified a longer list of possible candidates, don’t go and cache them all. Look for the spots that provide the most revenue. Remember, caching introduces memory pressure.
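Measuring hits and misses need not be fancy. A tiny counter like the following (a sketch; the CacheStatistics name is invented) is often enough to judge whether a caching candidate actually pays off:

```csharp
using System;
using System.Threading;

// Minimal hit/miss bookkeeping for a caching candidate. Wire RecordHit/
// RecordMiss into the lookup path under test, run a realistic load, and
// look at HitRate before deciding to cache.
public class CacheStatistics
{
    private int hits;
    private int misses;

    public void RecordHit()  { Interlocked.Increment(ref hits); }
    public void RecordMiss() { Interlocked.Increment(ref misses); }

    public double HitRate
    {
        get
        {
            int total = hits + misses;
            return total == 0 ? 0.0 : (double)hits / total;
        }
    }
}
```

A hit rate near zero means the cache would mostly hold data nobody asks for again – memory pressure with no payback.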

Here are some additional general hints:

  • Begin with a well-defined caching strategy that clearly states what is to be cached, when, and where. Otherwise you will have a client that caches what came from an HTML fragment cache that contains business data cached at the UI layer, which in turn resulted from database data cached in the business layer that came from the database cache. Caching already cached data is particularly bad, so you should decide carefully at which layer caching should be applied.
  • Homegrown caching (e.g. a hashtable) should be used sparingly. It is OK for stable data (e.g. reflection data) that cannot change afterwards. But make sure the cache does not grow infinitely, and don’t forget to use a reader/writer lock.
  • Use a real cache implementation (e.g. System.Web.Caching.Cache) where possible, one that lets the cached data expire. Leverage expiration strategies (e.g. absolute or sliding expiration) depending on the measured hit and miss statistics.
  • Make sure your caching strategy isn’t invalidated by load balancing strategies. The cache hit rate can only degrade in a web farm.
  • As a corollary: Don’t expect certain data to be in the cache.
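The hints above boil down to a small amount of code. Here is a minimal sketch of a homegrown cache with absolute expiration and synchronized access (the SimpleCache name is invented; in ASP.NET code System.Web.Caching.Cache should be preferred over rolling your own):

```csharp
using System;
using System.Collections.Generic;

// A deliberately simple cache: every entry carries an absolute expiration
// time, and all access is synchronized. Expired entries are dropped on
// lookup, so the cache cannot serve stale data.
public class SimpleCache<TKey, TValue>
{
    private struct Entry { public TValue Value; public DateTime Expires; }

    private readonly Dictionary<TKey, Entry> entries = new Dictionary<TKey, Entry>();
    private readonly TimeSpan lifetime;
    private readonly object gate = new object();

    public SimpleCache(TimeSpan lifetime) { this.lifetime = lifetime; }

    public void Put(TKey key, TValue value)
    {
        Entry e;
        e.Value = value;
        e.Expires = DateTime.UtcNow + lifetime;
        lock (gate) { entries[key] = e; }   // don't forget synchronization!
    }

    public bool TryGet(TKey key, out TValue value)
    {
        lock (gate)
        {
            Entry e;
            if (entries.TryGetValue(key, out e) && e.Expires > DateTime.UtcNow)
            {
                value = e.Value;
                return true;
            }
            entries.Remove(key);        // expired or missing: drop it
            value = default(TValue);
            return false;               // corollary: never EXPECT a cache hit
        }
    }
}
```

Note the TryGet signature: the caller always has to handle a miss, which enforces the corollary above by design.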

Your caching strategy should answer a few questions and help other people in the project understand why caching is used in one place but not in another. It should include criteria for:

  • choosing caching candidates
  • selecting caching locations
  • deciding upon cache time and scope

Let’s see what the options for these topics are:

Good caching candidates:

Not all data is a good candidate for caching. Good candidates usually fulfill (most of) these criteria:

  • Acquiring the information takes a considerable amount of time.
  • Caching the information does not use precious resources (e.g. too much memory, open connections, additional processing time due to de/serialization, …)
  • The probability that this information is requested several times afterwards within a reasonable time span is high.
  • The probability that this information is changed – especially concurrently – (and thus the cache invalidated) is low.
  • Physical layer transitions: Whenever the transition has to cross process boundaries, networks with bandwidth constraints, data transformation, security checks, etc., the probability of gaining performance by avoiding the transition is high.

Good caching locations:

In order to avoid double caching you need to decide where to cache and where not. Good locations include:

  • Layer transitions: They are quite good candidates since layers (should) provide a clean and well-defined interface. You may be able to introduce a lightweight caching layer between two layers, thus removing all caching logic from the layers themselves. This works particularly well if you already use factory patterns to access the next layer.
  • Singleton classes: If you have channeled access to certain information through some kind of class (proxy, helper, singleton, …), this class provides the very spot to employ caching. Within this class you can control cache access, cache invalidation, etc. The user of this class doesn’t even need to know the data is cached.

Caching time and expiration:

Most data becomes outdated or unused after some time. It is sensible to decide when to throw the data away.

  • Infinitely stable data: Data that can’t change during runtime (e.g. reflection data, machine information, or configuration data within the web.config) could be cached infinitely (i.e. as long as the app domain lives). However, you should make sure that this data does not grow infinitely in size and remains a good caching candidate over time.
  • Data subject to occasional nonsignificant changes: Some data may change rarely, and if it does, the change does not have to be applied immediately. A typical example is changes to passwords or configurations that become effective within the next 15 minutes. This data is a good candidate for caching with absolute expiration and requires no effort to maintain cache consistency. The changing part, however (e.g. some administration screens), needs to be able to bypass the cache to get access to the current values.
  • Data subject to occasional significant changes: If changes happen only rarely but need to be applied immediately (e.g. when changing some key tables) there is the option to deliberately throw away the whole cache data rather than going at length to maintain cache integrity.
  • Data subject to frequent changes: If the data shall be cached but it is likely that it might be changed during cache time you’ve got the worst possible case. You need to maintain cache integrity, i.e. every operation affecting the data needs to update the cache accordingly (i.e. update the cached data or simply remove it).
  • Data subject to concurrent changes: If data may be changed in the database or backend system concurrently by other users, you cannot avoid going to the database with each data request. However, there are still some options:
    • If the data is needed more than once during one web request (e.g. several parts in a web page of a CRM system need the current customer), you may employ a page cache (thus one data request per web request is made rather than one for each part). This cache is quite simple, as it does not have to deal with expiration (it dies with the page) or concurrency, and only rarely with cache consistency.
    • It may be the case that the database supports some kind of “the data hasn’t changed” feedback which is considerably faster than asking for the data itself. In this case you have the means to cache the data and still maintain cache integrity.
    • It may also be the case that the database or backend system supports some kind of event that tells you some data has changed. The ASP.NET caching supports cache dependencies to do just that, supporting dependencies on files, other cache items, and notably SQL Server 2005.

Caching scope:

Depending on the scope of the data you have different options for the location of the cache store:

  • Caching on the client: Infinitely stable data is best cached at the client. This includes static files (images, script files, static HTML files, …) in web applications (which is only some configuration in IIS) but also stable key tables and business data for smart client applications.
  • Caching for the whole application: Stable data that cannot be sent to the client and data that is user independent can be cached in the global application cache without user reference. Key tables are a perfect example.
  • Caching for the single user: Data that is user dependent can still be cached. Examples would be user rights calculated based on his roles, personalization information, etc. One may use the global cache with a user-dependent cache key, or the user session, for this data.
  • Request cache: Data that is used several times during a request but otherwise needs to be up-to-date can very well be cached during the processing of a request. A simple hashtable within the page may be enough.

Special cases:

There are some special cases in ASP.NET applications that I mention for completeness:

  • Caching of view state: View state in ASP.NET pages may become quite big; grid controls are especially heavyweight in this respect. This is an issue for low-speed connections. You can mitigate the problem with HTTP compression, yet storing the view state on the server rather than sending it to the client may prove to be more efficient. ASP.NET 2.0 already supports that.
  • Caching of session state: Session state should be held out-of-proc to avoid problems with app domain restarts or web farms. However, accessing the state data requires inter-process communication, serialization, and optionally database access. As long as you only do reads (usually the majority of the calls) you could cache the session data (provided you don’t have load balancing in place). It’s a rare situation, but if you have heavyweight or complex session data you may benefit from this.

Finally done. I cannot believe it. This is probably the final post about performance for quite some time. What was intended as one little innocent postling became a grown-up, mature family. Perhaps they’ll reproduce in the future, but for now the offspring has to grow up and prosper. Hopefully in one of your projects; I would be glad.

That’s all for now folks,

October 10, 2006

Optimize it!

Filed under: .NET, .NET Framework, Software Architecture, Software Development — ajdotnet @ 8:37 pm

Hi there,

back for some more talk about performance?

The last posts (“Performance is King…”) were primarily about preparing for performance. If you follow this advice you will hopefully know how to detect performance problems and how to react. But at some point you will actually have to do something about performance, in one word: optimize. This means getting your hands dirty, doing some measurement, digging into the code, and eventually putting some code in place that is meant to speed up runtime performance. I may have some advice for this end of the performance topic as well…


Next the obligatory advice, which may not come as a big surprise: No premature optimization!

Now, this is a sentence that has been used so often it may have lost its meaning due to abrasion. I therefore strongly recommend reading that sentence again, this time as if for the first time. Also read the essay “The Fallacy of Premature Optimization” to really understand what it means – and (perhaps more importantly) what it does not.

My own attitude towards premature optimization is: If you start with a sound design (that does not pose performance risks in itself) the KISS principle in coding is the best preparation for upcoming demands (including optimizations). Optimization on the other hand usually complicates the code. I therefore refrain from optimizations as long as I don’t have a fully functional application that can be diagnosed as a whole.
Usually the first profiling shows a mixture of things I expected to be slow, things I would not have expected, as well as the notable absence of things I would have expected to show up. And usually the things with the highest potential for optimization are related to how different parts of the application work together, things I could not even have optimized beforehand. (q.e.d.)

One obvious (but sometimes forgotten) hint: Performance optimization comes at a price!

Generally, optimized code tends to be more code: code that is more complex, code with bugs, code to maintain, code to document. Code that is usually less reusable, less robust against context or use case changes, etc. It is also code that takes time to execute; if the chosen optimization strategy doesn’t catch on, performance will be hurt rather than helped.

A typical optimization scenario is trading memory for processing time (i.e. any kind of caching or data redundancy). Memory consumption hurts scalability. Data redundancy poses the risk of data inconsistencies. Another scenario is the introduction of asynchronous/parallel processing. This may cause concurrency issues and race conditions. Any optimization strategy has its pitfalls.

As a consequence you should always measure performance and scalability before and after the optimization and decide carefully whether it’s worth the price. In my opinion a tiny fraction of improvement usually isn’t worth the increased complexity in all but the most performance-critical applications.

Choose the right optimization strategy:

If you run into a performance problem there is usually more than one option to solve it. The ability to choose one or the other (or a combination) is beneficial, as it allows you to react differently depending on the current situation (i.e. temporarily throw in hardware in production and deploy the optimized version with the next regular release – whether you give the hardware back is another question 😉 ).

I know you know your job and I know that the list below is probably not exhaustive. Anyway, I’ll try to name some typical optimization strategies; it may help to have a list of possible options.

  • Infrastructure
    • Scale up: just add more processors, memory, or (in the case of a certain developer) another monitor 😉 .
    • Scale out/load balancing: adding more machines improves performance as well as fault tolerance – but only if the server side application architecture actually is scalable and can leverage this new machine.
    • Use dedicated/specialized hardware: This includes RAID systems for I/O-intensive applications (read: database servers), storage systems such as EMC Centera (for huge amounts of bulk data), hardware-based encryption, etc. In one project we even used hardware-based XSD validation and XSLT processing.
  • Design/Architecture:
    • Streaming: a typical approach when processing large files (especially XML files). This will not only improve performance but especially scalability.
    • Asynchronous workload distribution: instead of doing a lengthy operation when the user is waiting, just put it in a queue and tell the user you’re done. Do the real work later or on other machines.
    • Changes in user experience (visual feedback, exploiting wait times that are already there): this is not actually optimization, yet it may solve the same problems. Just tell the user you are busy. And if the user has already accepted waiting for some time, why not do some additional stuff?
  • Data Access
    • Caching: Caching can be applied in all application layers from client to backend. Caching is useful if acquiring the data takes notable time. This can be caused by the providing piece of code (say with databases or reflection) or by the way to get there (e.g. network bandwidth, marshaling costs, etc.).
      Less obvious is the fact that certain things done in (to?) databases can be seen as caching: indexes, materialized views in Oracle, non-normalized tables (i.e. data redundancy). These things may also be done with in-memory data structures.
    • Reuse costly resources: the most common example is database connection sharing; web browsers do something similar with HTTP connections. Thread pools and garbage collection also fall into that category.
    • Batch/bulk processing, call aggregation: whenever setup and tear-down take a notable amount of time compared with the actual processing of one item, processing more than one item at a time immediately pays off. During a mass update put chunks of updates in a transaction (rather than each row in its own); combining remote calls (e.g. with Web Services) will also improve performance (actually, coarse-grained remote calls may be seen as batched-up fine-grained calls).
  • Initialization
    • Lazy initialization: If initialization is costly, do it as late as possible – and perhaps not at all. Lazy initialization distributes the performance cost and improves startup time. The risks to take are domino effects (which cancel out the aspired gain) and undetermined initialization sequences.
    • Proactive initialization: Application startup takes time anyway. Why not take a little more and have the application run smoothly afterwards? This is especially useful for server applications. It also makes for more stability, since deferred initialization also means deferred error detection.
  • Source Code Optimizations
    • Choose the right algorithms and data structures.
    • Know the costs of certain methods and keywords. “foreach” introduces more overhead than “for” does. “string.Format” is quite costly (interestingly, it is often used for tracing, which is turned off most of the time). Reflection is costly in itself. Other methods may affect performance in hidden ways if they cause assemblies to be loaded or code to be generated (e.g. regular expressions or XPath).
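To make the lazy initialization point above concrete, here is a minimal sketch (class and method names are made up for illustration): an expensive lookup table is built only when it is first accessed, not at startup. Note that this simple form is not thread-safe; in multithreaded code you would guard the check with a lock.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical example: defer building an expensive lookup table
// until the first access, instead of paying the cost at startup.
class CountryLookup
{
    private Dictionary<string, string> _countries; // null until first use

    public string GetCountryName(string isoCode)
    {
        if (_countries == null)           // first access triggers the costly work
            _countries = LoadCountries(); // e.g. a database or web service call
        return _countries[isoCode];
    }

    private Dictionary<string, string> LoadCountries()
    {
        // stand-in for the real, costly initialization
        return new Dictionary<string, string> { { "DE", "Germany" }, { "US", "United States" } };
    }
}
```

The domino effect mentioned above would show up here if `LoadCountries` itself lazily triggered further initializations down the line.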

As I said, this list is not exhaustive. However, please let me know if I missed something particularly important.

Now I’ve provided you with some additional advice and a bunch of options. If you are the type that can’t decide in a restaurant when presented with a particularly long menu, I’ll have done you a disservice. If you welcome the breadth of choice, you will hopefully have picked up something new.

That’s all for now folks,

October 7, 2006

“Fatherly Advice To New Programmers” (Chuck Jazdzewski)

Filed under: Software Developers — ajdotnet @ 4:38 pm


I just stumbled upon the post Fatherly Advice To New Programmers (Chuck Jazdzewski). He summarizes quite comprehensively what I think should be the attitude of any developer towards our profession.

The advice to “grown up programmers” would be: Share your experience just like Chuck did.

That’s all for now folks,

October 3, 2006

Performance is King… (3)

Filed under: Software Development — ajdotnet @ 8:15 pm

… but sometimes the jester rules.

This is part three of our little series, see here for part one and part two.

OK, five done, three to go:

If someone complains about performance problems:

6. Don’t deny or finger-point. Don’t ignore these concerns, even if they are unsubstantiated or inappropriate.

There is a fact to realize and accept: If you are working on the UI layer of an application, you are likely to be the “face to the customer”. The UI surfaces all features and their characteristics to the user, so it is the customer who will tell you that loading that page takes way too long. Not the database guy. Not the infrastructure people. You!

If the problem is in the UI, there’s no point in denying it. If it’s in the adjacent layers, help the people responsible for those areas – but also try to compensate (in case the other guy can’t handle the problem).

The key takeaways of this point should be:
1. Work with the other guys to solve the problem, not against them.
2. At the same time, try to mitigate or compensate for any shortcomings of the called layers/systems.

What you should not do is ignore the customer’s concerns, even if they are not appropriate (e.g. because the application being tested was an early development build). At least take note of the pain and actively address it later. The customer is usually not interested in who actually caused the problem (even – or rather especially – if it was himself). But it will be you who solved it.

7. Understand the problem.

Is the problem really a performance problem? Or is the customer actually aware that the current action takes time, and is he just asking for some kind of feedback (e.g. a progress dialog)?

Is the customer acting within the specification?
The other day we had a specification for some email distribution function: about 20 emails on average. It was perfectly valid to send these emails synchronously and provide instant feedback on success and failure. Then came this power user out of nowhere, sending 5000 emails at once. And in his wake the other real-world users, sending 1000 on average. Another example would be using a grid component that does sorting and other gimmicks via scripting on the client side – and users that request a result set of 300,000 rows.
These are perfect examples of performance problems that are actually specification issues. They cannot be addressed with profilers; they need design changes.

Key takeaway: Unless you know exactly what the actual demand is, any action taken is futile. The range of possible actions might include classical optimization, design changes, stripping down the feature, or even teaching the customer.
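For the email example, such a design change might look like the following sketch (the class name, threshold, and queue are hypothetical): small batches are sent synchronously with instant feedback, anything beyond the threshold is handed to a background queue.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: small batches are sent synchronously (instant
// feedback), large batches are queued for asynchronous processing.
class MailDispatcher
{
    private const int SyncLimit = 50;                  // assumed threshold
    private readonly Queue<string> _backgroundQueue = new Queue<string>();

    public string Send(IList<string> recipients)
    {
        if (recipients.Count <= SyncLimit)
        {
            foreach (string r in recipients)
                SendOne(r);                            // synchronous, report result now
            return "sent";
        }
        foreach (string r in recipients)
            _backgroundQueue.Enqueue(r);               // a worker drains this later
        return "queued";
    }

    private void SendOne(string recipient) { /* real SMTP call omitted */ }
}
```

The point is not the particular threshold, but that the power user with 5000 emails no longer blocks the UI – a design decision, not a profiler finding.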

After initial deployment:

8. Harden your application.

Eventually your first version will be delivered and the first group of users will begin working, hopefully satisfied. Before starting to work on the next features, take the chance to harden your application against future demands. The number of users will increase, as will the amount of data in your system. Thus, the fact that your system can handle the current workload counts for nothing.

Do extensive load testing, especially under stress and abuse conditions (e.g. pull the network cable of the database server). Do this with complex data, real-life data, mass data, data outside the specification, and under load-test conditions. Verify that the system remains stable under recurring error conditions. Have some fun with abuse tests. (Did you know that Porsche tests its cars offroad?)

This way you will learn how much workload your application can handle and how robust it is against unexpected circumstances.
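A very basic load test of this kind can be sketched in a few lines (everything here – the thread count, iteration count, and the operation under test – is made up for illustration; real load testing tools offer much more):

```csharp
using System;
using System.Threading;

// Hypothetical load-test sketch: hammer one operation from several
// threads and count the failures, to see how it behaves under load.
class LoadTest
{
    static int _errors;

    public static int Run()
    {
        Thread[] workers = new Thread[10];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = new Thread(delegate()
            {
                for (int n = 0; n < 100; n++)
                {
                    try { CallSystemUnderTest(); }
                    catch { Interlocked.Increment(ref _errors); }
                }
            });
            workers[i].Start();
        }
        foreach (Thread t in workers)
            t.Join();
        return _errors;
    }

    static void CallSystemUnderTest() { /* e.g. an HTTP request or database query */ }

    static void Main()
    {
        Console.WriteLine("errors: " + Run());
    }
}
```

Even a crude harness like this will reveal whether error handling stays sane when many callers hit the system at once.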

Ouff, finally done. 8)

Now, just for the fun of it, reread this list (including the previous post) and think about what these things will accomplish that is not performance related. Right – nothing on this list is purely performance related, some things barely at all; you should be doing them anyway. In other words: Preparing for performance this way will have positive effects on the quality of your application in very different ways, far exceeding simple call-time improvements.

PS: This post concludes this little series about preparing for performance in a project. I may yet have something to say about actually optimizing (you know, the time when you get to use profilers…), but that was not my intention for now.

That’s all for now folks,

The bear has arrived…

Filed under: Miscellaneous — ajdotnet @ 7:59 pm


I just added the bear pictures to the Blackcomb and Whistler post.

Our neighbours had just prepared their breakfast (greetings to the folks from NZ, GB and Scotland) when a black bear came along and took the invitation… Most people from my group (including me) had just come back from the washrooms and had no chance to get to their cameras. And the two who had the opportunity to shoot pictures obviously were a little shaky. Quite understandable – on one occasion the bear was two or three meters away, coming in my direction 8-O.

That’s all for now folks,

October 1, 2006

Performance is King… (2)

Filed under: Software Development — ajdotnet @ 4:13 pm

… but Kings need advisers.

Welcome back. (This is part two of this little series.)
Now, let’s look more closely at some of the points mentioned in the previous post (i.e. the “During design and development” part):

During design and development:

1. Keep performance in mind. Check your design under performance considerations.

This should be an easy one; most experienced developers and architects do this without thinking about it. I’m referring to things like:

  • If the user has a search screen, make sure you think about large search results.
  • If the user can trigger some repeated activity (e.g. sending emails to a list of recipients), make sure the list is guaranteed to be small or the processing is done asynchronously.
  • Always be wary about the number of calls into outside systems (database, web services, etc.) and know about the response times and error conditions of those systems.
  • Use coarse-grained calls for out-of-proc calls.

That kind of stuff.
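The coarse-grained point can be illustrated with a sketch (the interfaces and the customer example are hypothetical): compare an interface that forces one remote round trip per property with one that returns everything in a single call.

```csharp
using System;

// Hypothetical illustration of call granularity for remote interfaces.

// Fine-grained: one remote round trip per property - expensive.
interface ICustomerServiceFineGrained
{
    string GetName(int customerId);
    string GetAddress(int customerId);
    string GetPhone(int customerId);
}

// Coarse-grained: one round trip returns everything at once.
[Serializable]
class CustomerData
{
    public string Name;
    public string Address;
    public string Phone;
}

interface ICustomerServiceCoarseGrained
{
    CustomerData GetCustomer(int customerId);  // one call, one data transfer object
}
```

With three properties the difference is three round trips versus one; with a list of customers it quickly becomes the difference between a usable and an unusable application.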

Well, as I said, this should be an easy one, but there’s a pitfall: You have to know the actual demands your application has to fulfill. Do you have a quantity structure for the expected data? Do you know how many rows to process (and whether you can do it asynchronously)? How stable is the web service you are about to call?
This kind of information is rarely readily available, and asking the business people usually doesn’t help either. You’ll have to develop a feeling for areas prone to such surprises. A little risk management doesn’t hurt either.

2. Put measurement points in your code to understand the performance distribution.

There should be measurement points across all relevant parts, in all layers of your application. This is as simple as having a begin trace and an end trace around some lengthy processing or a call to the next layer, with the time spent between the two.
Trace the time spent in rendering, data binding, calling into the database, calling into web services and other foreign code, special functions (e.g. heavy usage of reflection), etc. Following the control flow of an incoming request, you should know how much time is spent in which part of your application or during outside calls.
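Such a measurement point can be sketched as a small disposable helper (the class name is made up; `Stopwatch` and `Trace` are standard .NET types): wrap the section in a using block and the elapsed time is traced when the block is left.

```csharp
using System;
using System.Diagnostics;

// Minimal sketch of a measurement point: wrap a code section in a
// using block; the elapsed time is traced when the block is left.
class TraceScope : IDisposable
{
    private readonly string _name;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public TraceScope(string name)
    {
        _name = name;
        Trace.WriteLine("BEGIN " + name);
    }

    public void Dispose()
    {
        _watch.Stop();
        Trace.WriteLine("END   " + _name + " (" + _watch.ElapsedMilliseconds + " ms)");
    }
}

// usage:
// using (new TraceScope("LoadOrders"))
// {
//     // call into the database ...
// }
```

Since the output goes through `Trace`, it can be switched off in production and routed to a file or log during performance tests.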

During initial performance tests (at the latest) look at this measurement data. Is the distribution feasible? (Most of the time should usually be spent in the database.) Are the absolute numbers more or less acceptable? (If yes, don’t optimize!) Do this with real-life data (regarding amount and complexity).

This should have two effects:
1. You will know whether you have a performance problem before the customer knows. Congratulations.
2. If someone complains about performance you will be able to assess that statement and answer with confidence.

Note: This is not enough, but in my experience you are lucky to even have the time to do that. If, on the other hand, you are working in the developers’ Garden of Eden, you might also work on the things I listed under #8.

3. Encapsulate areas prone to performance issues

If your calls to a certain web service (or a database, or reflection, or session state, or whatever it is that potentially takes up more time than acceptable) are spread across your code, what are you going to do if this really becomes a performance bottleneck? Encapsulate those things in a helper or proxy class and you will be able to implement asynchronicity or caching if the need arises.

This is good coding style anyway, as you will be able to enforce usage patterns, track calling code, add performance counters, add type-safe wrappers, provide helpers, etc.
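As a sketch of this encapsulation idea (the class, the exchange rate example, and the rates themselves are hypothetical): all callers go through one wrapper class, so a cache could be retrofitted in exactly one place without touching any caller.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical wrapper around a costly call (web service, reflection,
// session state ...). All callers go through this class, so caching or
// asynchronicity can be added later in exactly one place.
class ExchangeRateProvider
{
    private readonly Dictionary<string, decimal> _cache = new Dictionary<string, decimal>();

    public decimal GetRate(string currency)
    {
        decimal rate;
        if (_cache.TryGetValue(currency, out rate))
            return rate;                 // cache hit: no remote call
        rate = CallRemoteService(currency);
        _cache[currency] = rate;         // retrofitted without touching callers
        return rate;
    }

    private decimal CallRemoteService(string currency)
    {
        // stand-in for the real web service proxy
        return currency == "EUR" ? 1.0m : 1.27m;
    }
}
```

Had the calls to `CallRemoteService` been scattered across the code base, adding this cache would have meant hunting down every call site.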

4. Make sure you have good test data

Too many developers make the mistake of testing their code only with the data used during development. Get real-life data and be prepared for some surprises. Get random and deliberately wrong data and see how your code fares under rough conditions. Ask someone else to prepare the test data to avoid blinders. Most important: Get mass data to see how your code scales with the amount of data.
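Mass data with some deliberate nastiness can be generated rather than typed in by hand. A sketch (the class, the name format, and the “nasty” characters are made up for illustration):

```csharp
using System;
using System.Text;

// Hypothetical sketch: generate mass test data with random values and
// some deliberately odd entries, instead of the handful of hand-made
// rows used during development.
class TestDataGenerator
{
    private static readonly Random _random = new Random(42); // fixed seed: reproducible runs

    public static string[] GenerateNames(int count)
    {
        string[] names = new string[count];
        for (int i = 0; i < count; i++)
        {
            StringBuilder sb = new StringBuilder();
            int length = 1 + _random.Next(50);        // include extreme lengths
            for (int c = 0; c < length; c++)
                sb.Append((char)('A' + _random.Next(26)));
            if (i % 100 == 0)
                sb.Append("'--;<>");                  // deliberately nasty characters
            names[i] = sb.ToString();
        }
        return names;
    }
}
```

Loading a few hundred thousand such rows into the development database is usually a one-hour job that pays for itself the first time a query suddenly takes minutes.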

I once worked in a project where they had decided to put all data-related business logic into the database, “where it belongs”. They implemented the logic with 3 test data rows (perfectly valid) and went into the test phase without doing more (perfectly futile). The testers had about 100 rows in the database (not very much at all, and still not nearly the amount expected in production). The initial query took around 14 minutes. One hundred rows is hardly “mass data”, right?

And don’t simply lean back if you have test or QA guys in your project whose job it is to do just that. Usually they know how to write test plans but very little about your code and the resulting test points. Help them help you.

5. Plan for initial performance tests

You may call it by its name in the project plan, or you may hide it as bug-fixing time, code review, or code documentation. You may do it as part of a final testing phase or as part of the developer’s testing for single development tasks. You may assign this task to a certain developer or have everyone do it for his own code. Like ordinary testing, this really does not matter as long as you actually do it. (I’m not saying this has no effect on the efficiency of the testing. I’m saying that in many real-world projects it is not a question of how effective your testing is, but whether you do organized testing at all.)
Just don’t make the mistake of using it as a time buffer if your project runs out of time (as quite often happens to testing).

Personally, I would rather cut part of the ordinary testing than the performance testing. In my experience, performance analysis leads to a very efficient form of code review (as it trails along the control flow), and you will probably find more slips and bugs this way than with any other kind of testing.

I have also had good experiences with doing these performance tests more than once within an iteration. Usually an initial version of new functionality, a rework of core code, or the realization that the last performance analysis was some time ago will be a good reason.

And I thought shorter posts would be easier… Anyway, the next post should conclude this little series. Hopefully.

That’s all for now folks,
