… but Kings need advisers.
Welcome back. (This is part two of this little series.)
Now, let's look more closely at some of the points from the previous post (i.e. the “During design and development” part):
During design and development:
1. Keep performance in mind. Check your design against performance considerations.
This should be an easy one; most experienced developers and architects do this without thinking about it. The things I'm referring to are along these lines:
- If the user has a search screen, make sure you think about large search results.
- If the user can trigger some repeated activity (e.g. sending emails to a list of recipients), make sure the list is guaranteed to be small or the processing is done asynchronously.
- Always be wary of the number of calls into outside systems (database, WebServices, etc.) and know the response times and error conditions of those systems.
- Use coarse-grained calls for out-of-process communication.
That kind of stuff.
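To make the asynchronous variant of the second point concrete, here is a minimal sketch in Python (the `send_email` function is a hypothetical stand-in for the real mail call): the user's request only enqueues the recipients and returns immediately, while a background worker does the slow sending.

```python
import queue
import threading

email_queue = queue.Queue()
sent = []  # demo only: records what "went out"

def send_email(recipient):
    # stand-in for the real, potentially slow mail call
    sent.append(recipient)

def email_worker():
    while True:
        recipient = email_queue.get()
        if recipient is None:       # sentinel to shut the worker down
            email_queue.task_done()
            break
        send_email(recipient)
        email_queue.task_done()

worker = threading.Thread(target=email_worker, daemon=True)
worker.start()

def notify_all(recipients):
    # returns immediately, no matter how long the list is;
    # the worker drains the queue in the background
    for r in recipients:
        email_queue.put(r)

notify_all(["alice@example.com", "bob@example.com"])
email_queue.join()                  # demo only: wait until the queue is drained
```

The same idea carries over to any platform: the point is that the user-facing call does nothing but hand the work off.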
Well, as I said, this should be an easy one, but there's a pitfall: you have to know the actual demands your application has to fulfill. Do you have a quantity structure for the expected data? Do you know how many rows you will have to process (and whether you can do it asynchronously)? How stable is the WebService you are about to call?
This kind of information is rarely readily available and asking the business people usually doesn’t help either. You’ll have to develop a feeling for areas prone to such surprises. A little risk management doesn’t hurt either.
2. Put measurement points in your code to understand the performance distribution.
There should be measurement points across all relevant parts, in all layers of your application. This is as simple as putting a begin trace and an end trace around some lengthy processing or a call into the next layer and recording the time spent between the two.
Trace the time spent in rendering, databinding, calling into the database, calling into web services and other foreign code, special functions (e.g. heavy usage of reflection), etc. Following the control flow of an incoming request, you should know how much time is spent in which part of your application or during outside calls.
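As a minimal sketch of such measurement points (Python; the section names and the global `timings` store are just illustration), a small context manager can play the role of the begin/end trace pair:

```python
import time
from contextlib import contextmanager

timings = {}   # section name -> accumulated seconds

@contextmanager
def trace(section):
    # the "begin trace" / "end trace" pair as a single construct
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[section] = timings.get(section, 0.0) + (time.perf_counter() - start)

# usage along the control flow of a request:
with trace("database"):
    time.sleep(0.02)     # stand-in for the real database call
with trace("rendering"):
    time.sleep(0.002)    # stand-in for rendering work
```

After a test run, `timings` gives you exactly the distribution discussed above: which layer eats how much of the request time.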
Look at this measurement data during the initial performance tests at the latest. Is the distribution plausible? (Most of the time should usually be spent in the database.) Are the absolute numbers more or less acceptable? (If yes, don't optimize!) Do this with real-life data (regarding amount and complexity).
This should have two effects:
1. You will know whether you have a performance problem before the customer knows. Congratulations.
2. If someone complains about performance you will be able to assess that statement and answer with confidence.
Note: This is not enough, but in my experience you are lucky to even have the time to do that. If, on the other hand, you are working in the developers' Garden of Eden, you might also work on the things I listed under #8.
3. Encapsulate areas prone to performance issues
If your calls to a certain WebService (or to a database, reflection, session state, or whatever else may take more time than acceptable) are spread across your code, what are you going to do if this really becomes a performance bottleneck? Encapsulate those calls in a helper or proxy class and you will be able to implement asynchronicity or caching if the need arises.
This is good coding style anyway, as you will be able to enforce usage patterns, track calling code, add performance counters, add type-safe wrappers, provide helpers, etc.
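A minimal sketch of such a proxy class (Python; `CustomerServiceProxy` and the fake backend are hypothetical, standing in for a real WebService client) shows how caching and a simple measurement point slot in behind the encapsulation without touching any callers:

```python
class CustomerServiceProxy:
    """Hypothetical wrapper around a slow external service.

    All calling code goes through this class, so caching,
    asynchronous calls, or performance counters can be added
    later without changing the callers."""

    def __init__(self, backend):
        self._backend = backend   # the real (slow) call, injected
        self._cache = {}
        self.call_count = 0       # simple measurement point

    def get_customer(self, customer_id):
        if customer_id not in self._cache:
            self.call_count += 1
            self._cache[customer_id] = self._backend(customer_id)
        return self._cache[customer_id]

# demo with a fake backend instead of the real service
proxy = CustomerServiceProxy(lambda cid: {"id": cid, "name": f"customer-{cid}"})
a = proxy.get_customer(42)
b = proxy.get_customer(42)   # second call is served from the cache
```

The design choice is simply dependency injection plus a single choke point: because every caller goes through `get_customer`, the caching (or asynchronicity, or a counter) is a local change.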
4. Make sure you have good test data
Too many developers make the mistake of testing their code only with the data used during development. Get real-life data and be prepared for some surprises. Get random and deliberately wrong data and see how your code fares under rough conditions. Ask someone else to prepare the test data to avoid the “blinders effect”. Most importantly: get mass data to see how your code scales with the amount of data.
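A small sketch of generating such test data (Python; the field names, the row count, and the 5% missing-value rate are just assumptions for illustration): mass data with some deliberately awkward values mixed in.

```python
import random
import string

def make_test_rows(count, seed=0):
    # mass test data with some deliberately wrong/awkward values mixed in
    rng = random.Random(seed)   # seeded, so test runs are reproducible
    rows = []
    for i in range(count):
        rows.append({
            "id": i,
            "name": "".join(rng.choice(string.ascii_letters) for _ in range(12)),
            "email": None if rng.random() < 0.05 else f"user{i}@example.com",  # ~5% missing
            "note": "x" * rng.randint(0, 1000),   # oversized free-text field
        })
    return rows

rows = make_test_rows(10_000)   # far more than a handful of hand-made rows
```

Ten thousand generated rows will not replace real production data, but they expose scaling problems that three hand-crafted rows never will.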
I once worked on a project where they had decided to put all data-related business logic into the database “where it belongs”. They implemented the logic with 3 test data rows (perfectly valid) and went into the test phase without doing more (perfectly futile). The testers had about 100 rows in the database (not very much at all, and still nowhere near the amount expected in production). The initial query took around 14 minutes. One hundred rows is hardly “mass data”, right?
And don't simply lean back if you have test or QA people on your project whose job it is to do just that. Usually they know how to write test plans but very little about your code and the resulting test points. Help them help you.
5. Plan for initial performance tests
You may call it by its name in the project plan or you may hide it as bug-fixing time, code review, or code documentation. You may do it as part of a final testing phase or as part of the developers' testing for single development tasks. You may assign this task to a certain developer or have everyone do it for his own code. Like ordinary testing, this really does not matter as long as you actually do it. (I'm not saying this has no effect on the efficiency of the testing. I'm saying that in many real-world projects it is not a question of how effective your testing is, but whether you do organized testing at all.)
Just don't make the mistake of using it as a time buffer if your project runs out of time (as quite often happens to testing).
Personally, I would rather cut part of the regular testing than the performance testing. In my experience, performance analysis leads to a very efficient form of code review (as it follows the control flow), and you will probably find more slips and bugs this way than with any other kind of testing.
I have also had good experiences with doing these performance tests more than once within an iteration. Usually an initial version of new functionality, a rework of core code, or the realization that the last performance analysis was some time ago is a good reason.
And I thought shorter posts would be easier… Anyway, the next post should conclude this little series. Hopefully.