AJ's blog

November 3, 2013

MVC is a UI pattern…

Filed under: ASP.NET, ASP.NET MVC, Software Architecture — ajdotnet @ 3:01 pm

Recently we had some discussions about how the MVC pattern of an ASP.NET MVC application fits into the application architecture.

Clarification: This post discusses the MVC pattern as utilized in ASP.NET MVC. Transferring these statements to other web frameworks (e.g. in the Java world) might work. But then it might not. However, transferring them to the client side architecture of JavaScript based applications (using knockout.js etc.), or even to rich client applications, is certainly ill-advised!

 

One trigger was Conrad’s post "MVC is dead, it’s time to MOVE on.", where he claims

"the problem with MVC as given is that you end up stuffing too much code into your controllers, because you don’t know where else to put it.”

http://cirw.in/blog/time-to-move-on.html (emphasis by me)

The other trigger was "Best Practices for ASP.NET MVC", actually from a Microsoft employee:

"Model Recommendations
The model is where the domain-specific objects are defined. These definitions should include business logic (how objects behave and relate), validation logic (what is a valid value for a given object), data logic (how data objects are persisted) and session logic (tracking user state for the application)."

"DO put all business logic in the model.
If you put all business logic in the model, you shield the view and controller from making business decisions concerning data."

http://blogs.msdn.com/b/aspnetue/archive/2010/09/17/second_2d00_post.aspx

Similar advice can be found abundantly.

 

I have a hard time accepting these statements and recommendations, because I think they are plainly wrong. (I mean no offense, really, it’s just an opinion.)

These statements seem to be driven by the idea that the MVC pattern drives the whole application architecture. Which is not the case!
The MVC pattern – especially in web applications, certainly in ASP.NET MVC – is a UI pattern, i.e. it belongs to the presentation layer.

Note: I’m talking about the classical, boilerplate 3-layer-architecture for web applications

In terms of a classical layer architecture:

This is how it works:

  • The model encapsulates the data for the presentation layer. This may include validation information (e.g. attributes) and logic, as far as it relates to validation on the UI. Also I generally prefer value objects (as opposed to business objects, see also here).
  • The view renders the model data for the user; it contains presentation logic only.
  • The controller receives the incoming request, delegates control to the respective business services, determines the next logical step, collects the necessary data (again from the business services), and hands it off to the view. In this process it also establishes the page flow.
    In other words: The controller orchestrates the control flow, but delegates the single steps.
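This division of labor can be sketched in a few lines of controller code (IOrderService, OrderModel and the action are hypothetical names, not taken from any concrete project):

```csharp
// Sketch only: the controller orchestrates, the business layer does the work.
public class OrderController : Controller
{
    private readonly IOrderService _orderService; // business layer service

    public OrderController(IOrderService orderService)
    {
        _orderService = orderService;
    }

    public ActionResult Details(int id)
    {
        // delegate the actual work to the business layer...
        OrderModel model = _orderService.GetOrder(id);
        if (model == null)
            return HttpNotFound();

        // ...and hand the collected data off to the view
        return View(model);
    }
}
```

Note that the controller makes no business decisions itself; it only decides which view comes next and with which data.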

 

Coming back to the original citations…

  • If "you end up stuffing too much code into your controllers", then the problem is not MVC, not the fact that controllers by design (or inherent deficiencies of the pattern) accumulate too much code. It’s far more likely that the controller does things it’s not supposed to do. E.g. do business validations or talk directly to the database. (Or, frankly, “you don’t know where else to put it” is the operative phrase.)
  • If your model "is where the domain-specific objects are defined", and you "put all business logic in the model", in order to "shield the view and controller from making business decisions concerning data", then these statements overlook the fact that domain-specific objects and business logic are artifacts of the business layer.

Fortunately you can find this advice elsewhere (I’m not alone after all!):

"In the MVC world the controller is simply a means of getting a model from the view to your business layer and vice-versa. The aim is to have as little code here as is possible."
http://stackoverflow.com/questions/2128116/asp-net-mvc-data-model-best-practices-for-a-newb

It’s just harder to come by.

That’s all for now folks,
AJ.NET

August 29, 2010

CommunicationException: NotFound

Filed under: .NET, .NET Framework, ASP.NET, C#, Silverlight, Software Development, WCF — ajdotnet @ 3:44 pm

CommunicationException: NotFound – that is the one exception that bugs every Silverlight developer sooner or later. Take the image from an earlier post:

app_error

This error essentially tells you that a server call somehow messed up – which is obvious – and nothing beyond, much less anything useful to diagnose the issue.

This is not exactly rocket science, but the question comes up regularly at the Silverlight forums, so I’m trying to convey the complete picture once and for all (and only point to it, once the question comes up again – and it will!).

I’m also violating my rule not to write about something that is readily available somewhere else. But I have the feeling that the available information is either limited to certain aspects, not conveying the complete picture, or hard to come by or to understand. Why else would the question come up that regularly?

 

The Root Cause

So, why the “NotFound”, anyway? Any HTTP response contains a numeric status code: 200 (OK) if everything went well, others for errors, redirections, caching, etc.; a list can be found on Wikipedia. Any error whatsoever results in a different status code, say 401 (Unauthorized), 404 (Not Found), or 503 (Service Unavailable).

Any plugin using the browser network stack (as Silverlight does by default) however is also subject to some restrictions the browser imposes in the name of security: The browser will only pass 200 in good cases, or 404 without any further information in any other case, to the plugin. And the plugin can do exactly NIL about it, as it never gets to see the original response.

Note: This is not Silverlight specific, but happens to every plugin that makes use of the browser network stack.

Generally speaking there are two different groups of issues that are reported as errors:

  1. Service errors: The service throws some kind of exception.
  2. Infrastructure issues: The service cannot be reached at all.

Since those two groups of issues have very different root causes, it makes sense to be able to at least tell them apart, if nothing else. This is already half of the diagnosis.

 

Handling Service Errors

Any exception thrown by a WCF service is by default returned as a service error (i.e. SOAP fault) with HTTP response code 500 (Internal Server Error). And as we have established above, the Silverlight plugin never gets to see that error.

The recommended way to handle this situation is to tweak the HTTP response code to 200 (OK) and expect the Silverlight client code to be able to distinguish errors from valid results. Actually this is already baked into WCF: A generated client proxy will deliver errors via the AsyncCompletedEventArgs.Error property – if we tweak the response code, that is. Fortunately the extensible nature of WCF allows us to do just that using a behavior, which you can find readily available here.
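The heart of such a behavior is a message inspector that rewrites the status code on outgoing faults. A minimal sketch, following the pattern MSDN describes for Silverlight faults (class name is my own):

```csharp
// Rewrites the HTTP status code of fault responses to 200 (OK),
// so the browser passes the response through to the Silverlight plugin.
public class SilverlightFaultMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        return null; // nothing to do on the way in
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        if (reply != null && reply.IsFault)
        {
            var property = new HttpResponseMessageProperty
            {
                StatusCode = HttpStatusCode.OK // 200 instead of 500
            };
            reply.Properties[HttpResponseMessageProperty.Name] = property;
        }
    }
}
```

The inspector still has to be attached to the endpoint via an IEndpointBehavior and a BehaviorExtensionElement, which is boilerplate I’ll leave out here.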

Once we get errors through to Silverlight we can go ahead and make actual use of the error to further distinguish server errors:

  1. Business errors (e.g. server side validations) with additional information (like the property that was invalid).
  2. Generic business errors with no additional information.
  3. Technical errors on the server (database not available, NullReferenceException, …).

It’s the technical errors that will reveal more diagnostic information about the issue at hand, but let’s go through them one by one…

Business errors with additional information are actually part of the service’s contract, more to the point, the additional information constitutes the fault contract:

code_declared_fault1

code_declared_fault2

These faults are also called declared faults for the very reason that they are part of the contract and declared in advance. Declared faults are thrown and handled as FaultException<T> (available as full blown .NET version on the server, and as respective counterpart in Silverlight), with the additional information as generic type parameter:

code_throw_declared

Note: There’s no need to construct the FaultException from another exception. And of course this StandardFault class is rather simplistic, not covering more fine-grained information, e.g. invalid properties – which you may need in order to plug into the client side validation infrastructure. But that’s another post.
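To make the preceding point concrete, here is a minimal sketch of what such a declared fault might look like – the members of StandardFault and the service contract are assumptions, kept as simplistic as noted above:

```csharp
// Hypothetical fault data contract carrying the error information
[DataContract]
public class StandardFault
{
    [DataMember]
    public string Message { get; set; }
}

[ServiceContract]
public interface IMyService
{
    // the fault is declared as part of the contract
    [OperationContract]
    [FaultContract(typeof(StandardFault))]
    string GetData(int value);
}

// throwing the declared fault in the service implementation:
// throw new FaultException<StandardFault>(
//     new StandardFault { Message = "Value must be positive." },
//     new FaultReason("Business validation failed."));
```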

On the client side this information is available in a similar way, and can be used to give the user feedback:

code_handle_declared
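In the completed handler of the generated proxy this might look as follows (event and namespace names are assumptions based on the usual generated code):

```csharp
// Completed handler of the generated async proxy
void GetDataCompleted(object sender, GetDataCompletedEventArgs e)
{
    var fault = e.Error as FaultException<ServiceReference.StandardFault>;
    if (fault != null)
    {
        // declared fault: the typed detail is available on the client
        MessageBox.Show(fault.Detail.Message);
        return;
    }
    // ... other error handling
}
```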

Generic business errors are not part of the service contract, hence they are called undeclared faults, and they cannot carry additional information beyond what they already contain (essentially the fault reason). From a coding perspective they are represented by FaultException (the non-generic version, .NET and Silverlight) and thrown and handled similarly to declared faults:

code_handle_undeclared

However, the documentation states…

“In a service, use the FaultException class to create an untyped fault to return to the client for debugging purposes. […]

In general, it is strongly recommended that you use the FaultContractAttribute to design your services to return strongly typed SOAP faults (and not managed exception objects) for all fault cases in which you decide the client requires fault information. […]”

MSDN

 

That leaves arbitrary exceptions thrown for whatever reason in your service. WCF also translates them to (undeclared) faults, yet it uses the generic version of FaultException, with the predefined type ExceptionDetail. This way, any exception in the service can (or rather could) be picked up on the client:

code_handle_exception

However, while ExceptionDetail contains information about the exception type, stack trace, and so on, that fault by default contains only a generic text, stating “The server was unable to process the request due to an internal error.”. This is exactly as it should be in production, where any further information might give the wrong person too much information. During development however it may make sense to get more information, to be able to diagnose these issues more quickly. To do that, the configuration has to be changed:

config_exception_debug
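The relevant switch is the serviceDebug behavior in web.config; a sketch (the behavior name is a placeholder):

```xml
<serviceBehaviors>
  <behavior name="DebugBehavior">
    <!-- development only: include the original exception details in faults -->
    <serviceDebug includeExceptionDetailInFaults="true" />
  </behavior>
</serviceBehaviors>
```

Needless to say, this setting should never make it into a production deployment.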

And now the returned information contains various details about the original exception:

app_exception_info

BTW: To complete the error handling on the client, you need to address the situation where the issue was on the client itself, in which case the exception would not be of some FaultException type:

code_handle_client_error

This covers any exception thrown from the WCF service, provided it could be reached at all.

Alternative routes…

As I said, tweaking the HTTP response code is the recommended way to handle these errors. It is still a compromise on the protocol level to work around the browser network stack limitation. However, there are other workarounds, listed here for the sake of completeness:

  1. Compromise on the service’s contract: Rather than using the fault contract, one could include the error information in the regular data contract. This is typical for REST style services, e.g. Amazon works that way. For my application services I am generally reluctant to make that compromise. The downside is that it doesn’t cover technical errors, but that can be remedied with a global try/catch in your service methods.
  2. Avoid the browser network stack. Silverlight offers its own network stack implementation (client HTTP handling), though it defaults to using the browser stack. With client HTTP handling, one can handle any HTTP response code, and one gains more freedom regarding HTTP headers and methods. The downside however is that we lose some of the features the browser adds to its network stack. Cookie handling and the browser cache come to mind.
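For the second workaround, opting in to the client HTTP stack is a one-liner, typically placed in Application_Startup:

```csharp
// Route all http:// requests through Silverlight's own network stack
// instead of the browser's
WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
```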

 

Handling Infrastructure Issues

If some issue prevented the service from being called at all, there is obviously no way for it to tweak the response. And unless we revert to client HTTP handling (which would be a rather drastic measure, given the implications), the Silverlight client gets no chance to look at it either. Hence, we cannot do anything about our CommunicationException: NotFound.

However, by tweaking the response code for service exceptions as proposed above, we at least make it immediately obvious (if only by indirect reasoning) that the remaining CommunicationException: NotFound is indeed an infrastructure issue.

The good news is that infrastructure issues usually carry enough information by themselves. Also they appear rarely, but if they do, they usually are quite obvious (including being obviously linked to some recent change), affect every call (not just some), and are easily reproducible. Hence using Fiddler one can get information about the issue very easily (even in the localhost scenario).

The fact that the issue is pretty obvious pretty fast, in turn makes it usually quite easy to attribute it to the actual cause – it must have been a change made very recently. Typical candidates are easy to track down:

  • Switching between Cassini and IIS. I have written about that here.
  • Changing some application URLs, e.g. giving the web project a root folder for Cassini, without updating the service references.
  • Generating or updating service proxies, but forgetting to change the generated URLs to relative addresses.
  • Visual Studio sometimes assigns a new port to Cassini if the project settings say “auto-assign port”, and the last used port is somehow blocked. This may happen if another Cassini instance still lingers around from the last debugging session.
  • Any change recently made to the protocol or IIS configuration.

This only gets dirty if the change was made by some team member and you have no way of knowing what he actually changed. But since this will likely affect the whole team, you will be in good company ;-)

 

Wrap up

There are two main issues with CommunicationException: NotFound:

  1. It doesn’t tell you anything and the slew of possible reasons makes it unnecessarily hard to diagnose the root cause.
  2. It prevents legitimate handling of business errors in a WCF/SOAP conformant way.

Both issues are addressed sufficiently by tweaking the HTTP response code of exceptions thrown within the service, which is simple enough. Hence the respective WCF endpoint behavior should be part of every Silverlight web project. And in case this is not possible for some reason, you can revert to client HTTP handling.

 

Much if not all of this information is available somewhere within the Silverlight documentation. However, each link I found only covered certain aspects or symptoms, and I hope I have provided a more complete picture on how to tackle (for the last time) CommunicationException: NotFound.

That’s all for now folks,
AJ.NET


August 8, 2010

Silverlight and Integrated Authentication

Filed under: .NET, .NET Framework, ASP.NET, C#, Silverlight, Software Development, WCF — ajdotnet @ 11:25 am

I’ve been meaning to write about this for a while, because it’s a recurring nuisance: Using integrated authentication with Silverlight. More to the point, the nuisance is the differences between Cassini (Visual Studio Web Development Server) and IIS, in combination with some WCF configuration pitfalls for Silverlight-enabled WCF services…

Note: Apart from driving me crazy, I’ve been stumbling over this issue quite a few times in the Silverlight forums. Thus I’m going through this in detail, explaining one or the other seemingly obvious point…

Many ASP.NET LOB applications run on the intranet with Windows integrated authentication (see also here). This way the user is instantly available from HttpContext.User, e.g. for display, and can be subjected to application security via a RoleProvider. Silverlight on the other hand runs on the client. I have written about making the user and his roles available on the client before. However, the more important part is to have this information available in the WCF services serving the data and initiating server side processing. And being WCF, they work a little differently from ASP.NET, or not, or only sometimes…

 

Starting with Cassini…

Let’s assume we are developing a Silverlight application, using the defaults, i.e. Cassini, and the templates Visual Studio offers for new items. When a “Silverlight-enabled WCF service” is created, it uses the following settings:

xml_serviceconfig

Now there’s (already) a choice to make: Use ASP.NET compatibility? Or stay WCF only? (That question may be worth a separate post…). With ASP.NET compatibility, HttpContext.* is available within the service, including HttpContext.User. The WCF counterpart of the user is OperationContext.Current.ServiceSecurityContext.PrimaryIdentity. Take the following sample implementation to see which information is available during a respective call:

code_service

The client code to test that service is simple as can be:

code_page

The XAML is boilerplate enough, but for the sake of completeness:

xaml_page

I chose compatibility mode, and as the client shows, HttpContext.User is available out of the box:

app_default

Great, just what an ASP.NET developer is used to. But compatibility or not, it also shows that the WCF user is not available. But! WCF is configurable, and all we have to do is set the correct configuration. In this case we have to choose Ntlm as authentication scheme:

xml_serviceconfig_ntlm
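The relevant part of the binding configuration might look like this sketch (binding name is a placeholder; the binary encoding matches what the Silverlight service template generates):

```xml
<customBinding>
  <binding name="WindowsAuthBinding">
    <binaryMessageEncoding />
    <!-- require Windows credentials on the transport -->
    <httpTransport authenticationScheme="Ntlm" />
  </binding>
</customBinding>
```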

And look what we’ve got:

app_with_ntlm

Great, no problem at all. Now we have the baseline and are ready to move on to IIS.

 

Now for IIS…

Moving to IIS on the developer machine is simple. Just go to the web project settings, tab “Web”, choose “Use Local IIS Web Server”, optionally creating the respective virtual directory from here. In order to work against IIS, Visual Studio needs to run with administrative permissions.

vs_projectconfig

 

Moving from Cassini to IIS however has a slew of additional pitfalls:

  • The service URL
  • The IIS configuration for authentication
  • WCF service activation issues
  • The WCF authentication scheme
  • localhost vs. machine name

Usually they show up in combination (which obviously doesn’t help), but let’s look at them one by one.

 

The service URL

There’s one difference between how Cassini and IIS are being addressed by the web project: Projects usually run in Cassini in the root (i.e. localhost:12345/default.aspx), while in IIS they run in a virtual directory (e.g. localhost/MyApplication/default.aspx). This may affect you whenever you are dealing with absolute and relative URLs. It will at least cause the generated service URLs to differ more than just by the port information. Of course you can recreate the service references at that point, but you don’t want to do that every time you switch between Cassini and IIS, do you?

BTW: There’s a similar issue if you are running against IIS, using localhost, and you create a service reference: This may write the machine name into the ServiceReferences.ClientConfig (depending on the proxy configuration), e.g. mymachine.networkname.com/application, rather than localhost. While these are semantically the same URLs, for Silverlight it qualifies as a cross domain call. Consequently it will look for a clientaccesspolicy.xml file, which is probably not there, and react with a respective security exception.

The solution with Silverlight 3 is to dynamically adjust the endpoint of the client proxy in your code to point to the service within the web, the Silverlight application was started from:

code_adjusturl
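A sketch of that adjustment, deriving the service URL from the URL the XAP was loaded from (service and proxy names are assumptions):

```csharp
// Point the generated proxy at the web the Silverlight application
// was started from, instead of the URL baked in at generation time.
var client = new ServiceReference.MyServiceClient();
client.Endpoint.Address = new EndpointAddress(
    new Uri(Application.Current.Host.Source, "../MyService.svc"));
```

Application.Current.Host.Source is the URL of the .xap file (usually in ClientBin), hence the relative path climbing one level up.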

Silverlight 4 supports relative URLs out of the box:

xml_clientconfig

Coming versions of the tooling will probably generate relative URLs in the first place; until then you’ll have to remember to adjust them every time you add or update a service reference.

 

The IIS configuration for authentication

This one may be obvious, but in combination with others, it may still bite you. Initially starting the application will result in the notorious NotFound exception:

app_error

Note: To be able to handle server exceptions in Silverlight you’ll have to overcome some IE plugin limitations, inhibiting access to http response 500. This can be achieved via a behavior, as described on MSDN. However this addresses exceptions thrown by the service implementation and won’t help in the infrastructure related errors I’m talking about here.

The eventlog actually contains the necessary information:

el_require_wa

No Windows Authentication? Well, while Cassini automatically runs in the context of the current user, IIS needs to be told explicitly that Windows Authentication is required. This is simple: just enable Windows Authentication and disable Anonymous Authentication in the IIS configuration for the respective virtual directory.

 

WCF service activation issues

Running again will apparently not have changed anything at all, displaying the same error. With just a seemingly minor difference in the eventlog entry:

el_require_aa

That’s right. The service demands to run anonymous after it just demanded to run authenticated. Call it schizophrenic.

To make a long story short, our service has two endpoints with conflicting demands: The regular service endpoint requires Windows Authentication, while the “mex” endpoint for service meta information requires Anonymous access. OK, we might re-enable anonymous access, but that wasn’t intended, so the way to work around this activation issue is to keep anonymous access disabled and to remove the “mex” endpoint from the web.config:

xml_serviceconfig_nomex
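Afterwards the service section contains only the application endpoint, roughly like this (names are placeholders):

```xml
<services>
  <service name="MyApp.Web.MyService">
    <!-- the "mex" endpoint, which required anonymous access,
         has been removed; only the application endpoint remains -->
    <endpoint address="" binding="customBinding"
              bindingConfiguration="WindowsAuthBinding"
              contract="MyApp.Web.MyService" />
  </service>
</services>
```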

Curiously generating the service reference still works in Visual Studio (perhaps with login dialogs, but still)…

 

The WCF authentication scheme

We’re still not there. The next issue when running the application might be a login credentials dialog when the service is called. And no matter what you type in, it still won’t work anyway, again with the NotFound exception. Unfortunately this time without an eventlog entry.

Again, to make it short: IIS doesn’t support Ntlm as the authentication scheme here; we need to switch to Negotiate…

xml_serviceconfig_negotiate

And now it works:

app_with_negotiate

Do I have to say that this configuration doesn’t run in Cassini? Right, every time you switch between IIS and Cassini you have to remember to adjust the configuration. There is another enumeration value for the authentication scheme, named IntegratedWindowsAuthentication, which would be nice – if it worked. Unfortunately those two values, Ntlm and Negotiate, are the only ones that work, under Cassini and IIS respectively.

 

localhost vs. machine name

Now it works, we get the user information as needed. For a complete picture however, we need to look at the difference between addressing the local web server via localhost or via the machine name: Calls against localhost are optimized by the operating system to bypass some of the network protocol stack and work directly against the kernel mode driver (HTTP.SYS). This affects caching as well as http sniffers like Fiddler, which both work only via the machine name.

Note: This may actually be the very reason to switch to IIS early during development, when you need Fiddler as debugger (to check the actually exchanged information). Otherwise it’s later on, when you need it as profiler (to measure network utilization). Of course you’ll want http caching enabled and working by that time.

Of course you can put the machine name in the project settings, yet this would affect all team members. Perhaps a better idea is to have the page redirect dynamically:

code_redirect
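Such a redirect might look like this sketch, placed in the hosting page’s code-behind:

```csharp
// Redirect from localhost to the machine name, so Fiddler
// and http caching see the traffic.
protected void Page_Load(object sender, EventArgs e)
{
    if (string.Equals(Request.Url.Host, "localhost",
        StringComparison.OrdinalIgnoreCase))
    {
        var builder = new UriBuilder(Request.Url)
        {
            Host = Environment.MachineName
        };
        Response.Redirect(builder.Uri.AbsoluteUri);
    }
}
```

Since the redirect only triggers for localhost, it doesn’t get in the way of team members or deployed environments.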

Another word on caching: In IIS6 caching has to be explicitly set for the .xap file, using 1 day as default cache time and allowing 1 minute at the minimum. During development this may be an issue. With IIS7 caching should be automatically set to CacheUntilChange and you may also set the cache time with a resolution in seconds.

 

Where are we?

OK, that was quite a list of pitfalls and differences between Cassini and IIS, even IIS6 and IIS7. Some of this may go away with the new IIS Express. Some will stay and remain a nuisance. Visual Studio initially guides you towards using Cassini. At one time however you’ll have to switch to IIS. And since you cannot have both at the same time, this may be an issue especially in development teams. My recommendation would be: Start with IIS right away or plan the switch as a concerted action within your team.

That’s all for now folks,
AJ.NET


July 26, 2009

Playing the game…

We’ve defined the problem, we’ve established the rules, now we are ready to actually play the game. In other words: We will employ consistent and sufficient exception management in an ASP.NET application. No less!

Note: What follows is merely a sample implementation, just to drive the ball home, but it should turn my earlier musings into a little more practical advice. And since this is only meant to highlight the principle, not to provide a production ready solution, I’ll keep it simple.

Exception classes

First let’s try to classify our exceptions, depending on the supposed effect. This classification is sort of the driving force behind the strategy: who is responsible, or more to the point how to tell the responsible people…

A quick reminder on what our goal is:

  1. The user has to solve everything that results from his user interaction in due process. This includes invalid input, insufficient permissions, or other business conditions that interrupt the current functionality.
  2. The administrator is responsible for infrastructure issues. This includes every call into the “outside world”, such as databases, services, the file system. Issues here are manifold, including:
    • unavailable database or service
    • errors during some operation, e.g. unexpected constraint violations
    • erroneous configuration
  3. Everything else: Bugs. Developer. You! ;-)

So consequently:

  • If the user is responsible, we just tell him on the page and let him choose what to do.
  • If the administrator is responsible, we show a general error page, telling the user that the application is not available, and of course we write some event log entry.
  • If the developer is responsible, we do the same, but we tell the administrator explicitly to forward the event log information to the development team.

I’ll spare you the boilerplate exception implementation details; suffice it to say that we have a UserIsResponsibleException for “user exceptions” and an AdminIsResponsibleException for “admin exceptions”, respectively. Names that wouldn’t make it into my production code, but that’s what their intention is, anyway. Oh, and add a ConcurrencyException for respective issues; they may or may not be user exceptions, depending on the context.

Now let’s put the handlers in place. These reside in the UI layer, and from there we can work through the architectural layers towards the databases.

User error handlers

User exceptions have to be handled on the page level. Whenever you call into some business layer method, say from a button click event handler, you wrap the call in a try/catch(respective exceptions). The exception is then presented to the user, and eaten, meaning it is not thrown along. Something like this:

protected void btnConcurrentUpdate_Click(object sender, EventArgs e)
{
    try
    {
        BusinessLogic bl = new BusinessLogic();
        bl.UpdateDatabaseWithConcurrencyIssue("some content");
        lblConcurrentUpdate.Text = "did it.";
    }
    catch (UserIsResponsibleException ex)
    {
        // just give feedback and we’re done
        PresentError(ex);
    }
    catch (ConcurrencyException ex)
    {
        // just give feedback and we’re done
        PresentError(ex);
    }
}

The way of presenting the error message is something that has to be decided per application. You could write the message to some status area on the page, you could send some JavaScript to pop up a message box. You could even provide a validator that checks for a recent user exception and works with the validation summary for the feedback.
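The simplest variant, writing to a status area, might look like this (lblStatus is an assumed control on the page):

```csharp
// Present the error in a status label on the page.
void PresentError(Exception ex)
{
    // encode, in case the message contains markup characters
    lblStatus.Text = Server.HtmlEncode(ex.Message);
    lblStatus.Visible = true;
}
```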

Statistics:

  • Exception classes: Write once.
  • Presentation: Write once.

The try/catch code is simple enough, yet you have to do it again and again, and should the need for another user exception arise (e.g. some user errors can be handled on the page, some others require leaving the page), the logic is spread across our pages. Thus it makes sense to factor the actual handling into a separate method, in a helper class, or a page base class. Now, catching specific exceptions is still part of our job and that cannot be factored out. To solve this we have three options:

  1. Build the exception hierarchy accordingly and catch the base class for user exceptions.
  2. Catch all exceptions and throw the unwanted exceptions on.
  3. Live with the problem.

I decided against option 1 because I didn’t want the concurrency exception to be derived from some user exception. Option 3 is what we wanted to avoid in the first place. So my solution is number 2, in the name of ease of use and maintenance, but at the expense of language supported exception handling features:

protected void btnConcurrentUpdate_Click(object sender, EventArgs e)
{
    try
    {
        BusinessLogic bl = new BusinessLogic();
        bl.UpdateDatabaseWithConcurrencyIssue("some content");
        lblConcurrentUpdate.Text = "did it.";
    }
    catch (Exception ex)
    {
        // since C# does not support exception filters, we need to catch
        // all exceptions for generic error handling; throw those on, that
        // have not been handled…
        if (!HandleUserExceptions(ex))
            throw;
    }
}

This is boilerplate and unspecific enough to copy&paste it around. The generic handler may look like this:

bool HandleUserExceptions(Exception ex)
{
    if (ex is ConcurrencyException)
    {
        // just give feedback and we’re done
        PresentError(ex);
        return true;
    }
    
    if (ex is UserIsResponsibleException)
    {
        // just give feedback and we’re done
        PresentError(ex);
        return true;
    }
    
    // other errors throw on…
    return false;
}

Statistics:

  • Handler method: Write once.
  • Catching exceptions: Every relevant location, boilerplate copy&paste

Please note that exception filters might make this a little less coarse, but this is one of the rare occasions, where a CLR feature is not available to C# programmers. Anyway, we just factored all logic into a centralized method which achieves exactly what we asked for.

But wait… . Wouldn’t the Page.Error event provide a better anchor? It’s already there and no need to add try/catch around every line of code. Look how nice:

protected void btnConcurrentUpdate_Click(object sender, EventArgs e)
{
    BusinessLogic bl = new BusinessLogic();
    bl.UpdateDatabaseWithConcurrencyIssue("some content");
    lblConcurrentUpdate.Text = "did it.";
}

protected override void OnError(EventArgs e)
{
    // raise events
    base.OnError(e);
    
    var ex = Server.GetLastError();
    
    // if we don’t clear the error, ASP.NET
    // will continue and call Application.Error
    if (HandleUserExceptions(ex))
        Server.ClearError();
}

Unfortunately at the time the error event is raised, the exception will already have disrupted the page life cycle and rendering has been broken. It may help in case you need a redirect, but not if the page should continue to work.

“Other” error handlers

So, the page handles user exceptions, what about admin and system exceptions? The page doesn’t care, so they wind up the chain and ASP.NET will eventually pass them to the application, via the Application.Error event, available in the global.asax. This is the “last chance exception handler” or LCEH for short.

One thing we did want to do was write the exception to the event log. I’ll leave the actual writing to you; just remember that creating an event log source requires the respective privileges, e.g. running as admin. Just a minor deployment hurdle.

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    
    // unpack the exception
    if (ex is HttpUnhandledException)
        ex = ex.InnerException;
    Session["exception"] = ex;
    
    // write detailed information to the event log…
    WriteInformationToEventLog(ex);
    
    // tell user something went wrong
    Server.Transfer("Error.aspx");
}

private void WriteInformationToEventLog(Exception ex)
{
    // an ID as reference for the user and the admin
    ex.Data["ErrorID"] = GenerateEasyToRememberErrorID();
    
    string whatToDoInformation;
    if (ex is AdminIsResponsibleException)
        whatToDoInformation = "Admin, you'd better take care of this!";
    else
        whatToDoInformation = "Admin, just send the information to the developer!";
    
    // put as much information as possible into the eventlog,
    // e.g. write all exception properties into one big exception report,
    // including the Exception.Data dictionary and inner exceptions.
    // also include information about the request, the user, etc.

    …
}

As you can see, there are some details to keep in mind. First, the exception has been wrapped by the ASP.NET runtime, and that wrapper class doesn’t tell you anything worthwhile. Second, passing information to the error page may be a little tricky in cases where the request has not yet (or not at all) established the session state. The easier approach is to let ASP.NET handle the redirection, yet at the loss of information. Third, be careful regarding the requests: depending on the IIS version you may get only .aspx requests, or every request including static resources. Thus, with this simplistic implementation, whether a missing image sends you to the error page depends on the IIS version and the registered handlers.

I would also like to encourage you to spend some time on collecting information for the event log message. A simple ex.ToString() doesn’t reveal that much information if you look closely. You should invest some time to add information about the current request (URL, etc.), the user (name, permissions), and server state (app domain name, to detect recycling issues). And use reflection to get all information from the exception, especially including the Exception.Data dictionary. Also go recursively through the inner exceptions. The more information you provide, the better.
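As a sketch, such a report builder could look like the following. It is simplified (it walks the inner exceptions and the Data dictionaries, without the full reflection over all exception properties the text suggests), and all names are my own:

```csharp
using System;
using System.Collections;
using System.Text;

static class ExceptionReport
{
    // walk the exception chain and collect everything into one report,
    // including the Exception.Data dictionary of every exception
    public static string Build(Exception ex)
    {
        StringBuilder sb = new StringBuilder();
        int level = 0;
        for (Exception current = ex; current != null; current = current.InnerException)
        {
            sb.AppendLine("[" + level++ + "] " + current.GetType().FullName);
            sb.AppendLine("Message: " + current.Message);
            foreach (DictionaryEntry entry in current.Data)
                sb.AppendLine("Data[" + entry.Key + "] = " + entry.Value);
            sb.AppendLine("StackTrace: " + current.StackTrace);
        }
        return sb.ToString();
    }
}
```

Extend this with request URL, user name, and app domain information as described above; the loop structure stays the same.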

I also hinted at an error ID. It is probably a good idea to give the user some ID to refer to when he talks to the admin. This way, the admin will know exactly what to look for in the event log (especially in multiple event logs of a web farm).
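GenerateEasyToRememberErrorID is not shown in this post; one possible sketch, assuming a short letters-plus-digits format that avoids ambiguous characters (no O/0, no I/1), so the user can read it to the admin over the phone:

```csharp
using System;

static class ErrorId
{
    static readonly Random random = new Random();

    // produce something short and readable, e.g. "KXTM-4711"
    public static string Generate()
    {
        const string letters = "ABCDEFGHJKLMNPQRSTUVWXYZ";  // no I, no O
        char[] prefix = new char[4];
        for (int i = 0; i < prefix.Length; i++)
            prefix[i] = letters[random.Next(letters.Length)];
        return new string(prefix) + "-" + random.Next(1000, 10000);
    }
}
```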

Finally, the exception class tells us whether we should include an “Admin, do your job!” or “Kindly forward this issue to your fellow developer, brownies welcome, too.” message.

And by the way: This is the one and only location in our application that writes to the event log!

Statistics:

  • Global error handler: Write once.

Done. We have the handlers in place. Handlers for user exceptions, admin issues, and any other exception that might be thrown. Now we need to take care that error causes are mapped to the respective exception class.

Business layer

The business logic is being called from the UI layer, and calls into the data access layer. Thus it has to check parameters coming in, as well as data handed back from the data source.

Data coming in has to comply with the contract of the business class, and I’d like to stress the fact that not all data is legal. The business logic is not expected to accept every piece of garbage the UI layer might want to pass. A search method that takes a filter class as parameter? No harm in making this parameter mandatory and requiring the UI to pass an instance, even if it is empty. And a violation is not the user’s problem, it’s the programmer’s fault! In these cases we throw an ArgumentException, and that’s it. The LCEH will take care of the rest, blaming the UI developer that is.

public IEnumerable<Customer> SearchCustomer(SearchCustomerFilter filter)
{
    // preconditions have to be met by the calling code
    Guard.AssertNotNull(filter, "filter");
    
    // input values have to be provided by the user
    if (string.IsNullOrEmpty(filter.Name))
        throw new UserIsResponsibleException("provide a filter value");
    
    DatabaseAccess db = new DatabaseAccess();
    return db.SearchCustomer(filter);
}
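The Guard class used in the snippet above is not shown in this post; a minimal sketch might look like this (ArgumentNullException derives from ArgumentException, so this matches the “we throw an ArgumentException” rule):

```csharp
using System;

static class Guard
{
    // enforce a precondition: a mandatory parameter must not be null;
    // a violation blames the calling developer, not the user
    public static void AssertNotNull(object value, string parameterName)
    {
        if (value == null)
            throw new ArgumentNullException(parameterName);
    }
}
```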

Any issue with the content of the data, i.e. the information the user provides, is a user issue. Usually this includes data validations and if you follow the rule book you’ll have to repeat everything the UI already achieved via validators. Whether you do that or just provide additional checks, e.g. dependencies between fields, is something I leave to you. Anyway, throw a user exception if the validation fails.

Calling the data access layer may also result in thrown exceptions. Usually there is no need to catch them at all. Period.

Finally the data you get from the database may ask for some interruption. A customer database may tell you that this particular customer doesn’t pay his dues, thus no business with him. In this case you throw a user exception in order to interrupt the current business operation.
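Such an interruption might look like this. Customer and HasOutstandingDues are made up for the example; UserIsResponsibleException is the user exception class used throughout this post, restated here to keep the sketch self-contained:

```csharp
using System;

// stand-in for the post's user exception class
class UserIsResponsibleException : Exception
{
    public UserIsResponsibleException(string message) : base(message) { }
}

// hypothetical domain class
class Customer
{
    public string Name;
    public bool HasOutstandingDues;
}

static class OrderLogic
{
    // the data itself asks for an interruption: no business with this customer
    public static void EnsureCustomerInGoodStanding(Customer customer)
    {
        if (customer.HasOutstandingDues)
            throw new UserIsResponsibleException(
                "Customer '" + customer.Name + "' has outstanding dues; operation aborted.");
    }
}
```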

And that’s it.

Statistics:

  • Business validations and interruptions: As the business logic demands…

“Uhh, that’s it? And I used to spend so much time on error handlers in my business logic…”. Nice, isn’t it? Just business logic, no unrelated technical demand.

Data access layer

The data access layer is where many exceptions come from. If you call a data source (web service, database, file system, whatever), the call may fail because the service is unavailable. Or it may fail because the called system reports an error (constraint violation, sharing violation, …). It may time out. Some exceptions won’t require any special handling and you can leave them alone. Some need to be promoted to admin or user exceptions, or they may require special handling for some reason. You should translate those exceptions to respective application exceptions. The reason is that you don’t want to bleed classes specific to a data source technology (e.g. LINQ 2 SQL exceptions) into the upper layers, introducing undue dependencies.

Luckily all this tends to be as boilerplate as can be. Call/catch/translate. With always the same catches. So, again, we can provide a “once and for all” (at least per data source technology) exception handler, like this one for the ADO.NET Entity Framework:

public IEnumerable<Customer> SearchCustomer(SearchCustomerFilter filter)
{
    try
    {
        using (DatabaseContainer db = new DatabaseContainer())
        {
            //…
        }
    }
    catch (DataException ex)
    {
        TranslateException(ex);
        // any exception not translated is thrown on…
        throw;
    }
}

And the respective translation method:

static void TranslateException(Exception ex)
{
    // database not available, command execution errors, mapping errors, …
    if (ex is EntityException)
        throw new AdminIsResponsibleException("…", ex);
    
    // translate EF concurrency exception to application concurrency exception
    if (ex is OptimisticConcurrencyException)
        throw new ConcurrencyException("…", ex);
    
    // constraint violations, etc.
    // this may be caused by inconsistent data
    if (ex is UpdateException)
        throw new AdminIsResponsibleException("…", ex);
}

Statistics:

  • Translation method: Write once.
  • Catching exceptions: Every relevant location, boilerplate copy&paste

Conclusion

Done. Let’s sum up the statistics:

  • Write once (at least partly reusable in other applications):
    • Application exception classes
    • Last chance exception handler (LCEH)
    • Handler method for user errors, including presenting the error
    • Exception translation method in DAL
  • Boilerplate copy&paste
    • Catching exceptions in the UI, call handler
    • Catching exceptions in the DAL, call translation
  • Application code that still needs a brain ;-)
    • Business validations and interruptions: As the business logic demands…

And this is all we have to do with regard to exception handling in our application. I’ll let that speak for itself… .

This post concludes this little series about the state of affairs regarding error handling in an application. I do not claim to have presented something particularly new or surprising. Rather the opposite, this or some similar strategy should be considered best practice. The reason to even start this series was the fact that I regularly encounter code that does exception handling … questionably. And I also regularly met people who look at exception handling only in the scope of the current code fragment, not in light of a broader approach that spans the application.

If you need further motivation to employ proper error handling in the first place, look here.
A little more on error management in long running, asynchronous scenarios can be found here.

That’s all for now folks,
AJ.NET

kick it on DotNetKicks.com

July 19, 2009

The Rules of the Game

With the last post I tried to establish the problem: The fact that exception handling needs more than try/catch. That it needs a proper exception management strategy, which should be part of the application architecture. And I paid special attention to the paradox demand that exception management should work when everything else fails.

Now, I cannot provide THE strategy, for there is no such thing. But I can present one feasible approach that has served me well quite a few times. I’ll simplify quite a bit, but it is nonetheless a complete approach that may serve as a starting point for more complex demands.

OK, let’s play…

To define the playing field: The following is about classical n-Tier ASP.NET web applications, and I have decided to represent user induced issues, like business validation errors, as exceptions.

Some Groundwork

Exceptions are used for different error conditions, but also for non-erroneous situations, which leads to a surprising variety:

  • Exceptions being thrown as result to infrastructure issues. E.g. IOException, SecurityException. They may be caused by invalid configurations, by unavailable infrastructure resources, or for other reasons.
  • Exceptions representing errors reported by external services, like database constraint violations or SOAP exceptions.
  • Exceptions being thrown because the caller did not meet the expectations of the called code, e.g. a violation of a contract. (Usually ArgumentExceptions).
  • Exceptions being thrown because of a developer’s lapse, e.g. a NullReferenceException.
  • Exceptions being thrown to sidestep the current flow of control, e.g. HttpResponse.Redirect() throws a ThreadAbortException to prevent an ASP.NET page from further unnecessary processing.
  • Business validation errors. The business logic decides that – despite proper input validation – something is not quite as it should be. Not exactly an error in a technical sense, but the processing simply cannot go on.
  • User concurrency issues. Like some user wants to update some data that has been changed by someone else in the meantime. And “last one wins” may not be good enough.

Please note that while Basic Rule Number 1 for exception handling (no abuse of exceptions for non-exceptional conditions) is as valid as ever, there are certainly exceptions (no pun intended) to that rule. For example, aborting the request upon redirection certainly makes sense; (ab)using an exception to do so may violate said rule, yet it is far more efficient and convenient than any other alternative. The same may be true for business validation errors. Not exceptional, yet exceptions may be the most convenient way to accommodate them.

This is a (perhaps surprising) variety and may even be incomplete – actually that variety is a major contributor to the complexity of exception handling. And yet, it is only half of the equation. Exceptions have cause and effect, and I found that tackling exception handling from the opposite angle, the effect that is, has certain advantages.

But before we come to that, there’s a question that has to be answered…

What is the purpose, the intention of throwing exceptions? (Seriously!)

This is a question one should ask at least once, even though it may seem a bit superfluous. For the answer does not include some of the usually mentioned suspects: Preventing data inconsistencies? Let’s simplify a bit and state that this is the job of transactions. Graceful shutdown? An error happened, what do I care how the application goes down.

Actually it’s far simpler: An error happened! Something the developer didn’t anticipate or couldn’t cope with. The primary purpose of exceptions is to communicate that situation. And exception handling in turn is about notifying everyone involved in the appropriate way, telling them what they are supposed to do now, either to compensate for the issue or to prevent it from happening again. In other words: It’s about the responsibilities emerging out of this error.

And this is what my strategy revolves around: Responsibilities.

An Exception Handling Strategy

Back to cause and effect, and let’s start with the latter. More to the point, the desired effect, driven by what is actually asked for in case of an error (rather than what could be done). This will lead to a strategy that is simpler, easier to understand, and easier to implement.

As I mentioned ‘responsibilities’ the first question is kind of obvious:

Question #1: Who should be responsible for what?

This is no esoteric question aiming at pieces of architecture or code. Rather it’s asking which person has to shoulder the work (or the blame, if you like):

  1. The Administrator: He has to solve problems caused by infrastructure issues (such as invalid configuration entries, unavailable databases, and so on).
  2. The User himself: Let him deal with errors caused by invalid input, other users working simultaneously with the same data, or in some other way inherent to the intended usage of the product. RTFM!
  3. The developer (YOU!): Anything else falls back on the developer’s desk.

Obviously the last point is the inconvenient part: Every exception defaults to that category, unless the developer explicitly “tells” someone else that he is responsible. Let’s stress that: An error caused by an invalid configuration entry is the developer’s problem – unless he explicitly tells the administrator that he should take care of this.

So, now we are back where we started right at the beginning, at the developer’s desk. The difference is that the developer now has some specific goal: Lay the blame on someone else’s doorstep!

This is actually a very important point. Making someone else responsible is in the developer’s own interest. It helps him avoid unnecessary and annoying future work. Quite motivating, if you ask me.

This makes the next question obvious:

Question #2: How does the developer tell the user or the administrator that he is to take responsibility?

The answer is simple: Provide the necessary feedback, e.g. a message box or an event log entry; in more detail:

  • In case of a user error you tell him with a message box, a popup, within some message area or on a special “you did wrong!” page.
    You don’t tell the admin, because he doesn’t care for typos. Neither do you as developer.
  • In case of an infrastructure issue you show a standard message to the user, telling him to call the admin and come back later. You don’t want to include details, because they might include security relevant information, such as the always cited database connection string.
    Additionally you have to give the admin all information he needs, usually within the event log.

In both cases it’s important that you provide not only the information that an error has occurred (that much is obvious, isn’t it?). You need to include information on how to solve it. Otherwise it will eventually be up to you, the developer, to solve the issue again. That’s why it makes sense not to stop at “couldn’t save data” or “data source not available”. Rather invest some work – in your own self-interest – in providing “Customer data could not be saved because it is locked by user ‘XYZ’” or “Database not available, connection string:=…”.

  • Remember the last case: In case of technical issues for which the developer is responsible, you still have to tell the user and the admin something. The user should again get some standard error page (not the ASP.NET error page!), the admin should get an event log entry that tells him to notify the developer. This entry should include as much information as possible for a post mortem diagnosis.

So, essentially this paragraph revolved around presenting an exception, at the UI level where the information leaves the area of your application code. It’s quite simple to implement that, because you can rely on centralized features: ASP.NET provides a global error handler that can be used to provide standard event log entries. Presenting user errors on a page should be boilerplate as well, the only distinction being whether the current page and data is still valid (just show a message box) or not (redirect to some business error page). Nice, easy, and no rocket science.

Of course in order to do this we need a means of distinguishing those three cases. For this, all you need is respective exception classes, two actually: one BlameTheUserException and one BlameTheAdminException. You don’t need a BlameTheDeveloperException, because he is already to blame for every other exception, no need to stress that ;-).
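The post doesn’t show these classes, but there is little to them; a minimal sketch, following the usual exception constructor pattern:

```csharp
using System;

// thrown when the user has to deal with the issue
// (validation errors, concurrency conflicts, …)
public class BlameTheUserException : Exception
{
    public BlameTheUserException(string message) : base(message) { }
    public BlameTheUserException(string message, Exception inner) : base(message, inner) { }
}

// thrown when the administrator has to deal with the issue
// (unavailable database, invalid configuration, …)
public class BlameTheAdminException : Exception
{
    public BlameTheAdminException(string message) : base(message) { }
    public BlameTheAdminException(string message, Exception inner) : base(message, inner) { }
}
```

The global error handler can then dispatch on these two types; everything else is, by definition, the developer’s problem.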

Question #3: How to put exceptions to work… ?

If the developer’s goal is to blame someone else, then this goal can guide him answering the more detailed questions, such as, where in his code should he be catching exceptions (where not)? When should he throw exceptions (when not)? What about logging? And so on. Everything else follows as consequence…

First, about throwing exceptions: Yes, not only the system throws exceptions; you may do the same, and there’s no need to be shy about it. Sprinkle your methods with guard calls throwing argument exceptions; they imply coding errors on the caller’s side. If the code allows certain conditions that are logically impossible, guard yourself against that impossibility. Throwing exceptions in exceptional cases is no problem at all. Actually, reacting this harshly is by far preferable to obscuring erroneous conditions out of falsely understood modesty. Fail early, fail often!

Handling exceptions, as in solving the problem once and for all, is of course possible. If it is possible, that is, which in my experience is rarely the case. (It had to be mentioned, though.)

Enriching exception information is a more frequent task. Whenever it makes sense for the post mortem diagnosis, you should add context information to the exception: the actual SQL statement and the connection string for a database error, the user name and the accessed resource for security exceptions. This is valuable, if not necessary, to diagnose the root cause of the problem. It’s not even necessary to wrap the exception to do that, since the exception class has a Data dictionary for this very purpose.
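Such enrichment could be factored into a small helper, sketched here; ExecuteWithContext and its parameters are my invention, not from the post:

```csharp
using System;

static class DataAccess
{
    // wrap a data source call; on failure, attach context information
    // to the exception before letting it travel on
    public static void ExecuteWithContext(string sql, string connectionString, Action execute)
    {
        try
        {
            execute();
        }
        catch (Exception ex)
        {
            // no wrapping needed: Exception.Data exists for this very purpose
            ex.Data["Sql"] = sql;
            ex.Data["ConnectionString"] = connectionString;
            throw;  // rethrow the same instance, stack trace preserved
        }
    }
}
```

Note the bare `throw;` rather than `throw ex;`, which would reset the stack trace and destroy exactly the diagnosis information we are trying to preserve.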

Promoting exceptions: Once your code is able to decide that a certain error is an administration or user problem, you should wrap it in the respective class, thus “promoting” it.
This sentence implies the fact that not every location is suited to make that decision. The database layer should probably never decide whether a particular error is the consequence of some erroneous user action; this usually depends on the business context in which it is called, and should be decided there. Infrastructure issues on the other hand happen very low in the call chain and couldn’t even be decided further up.
The real work here lies in actually putting in the necessary handlers. However they tend to gather in classes that deal with the outside world, the database, configuration, some service, or whatever. They also tend to be very boilerplate.

That’s it. … Wait! … That’s not possible!

Question #4: Can that really be all?

Actually this is it. No “every method should…”. No “layer specific exceptions…”. No “log it just in case…”. Actually whenever I tell people how to implement an exception handling strategy in an existing application, the majority of the work is removing exception handlers. Sometimes life can be so easy.

In case it escaped you: If the developer doesn’t have to do anything to employ proper exception management, he can’t do anything wrong. We don’t have to rely on him to properly handle his own bugs. We just solved a paradox!

Going further, I even consider it an issue by itself if you did too much exception handling!

For example “shouldn’t we at least log the error, just in case?” No, you shouldn’t. The global error handler is responsible for doing that, so there is no need to log it. Worse, if the calling code decided to handle the situation and carry on, there is by definition no application issue at all. And yet the event log would show one, waking the admins for no reason, or flooding the event log and obscuring the one important entry.

So what? That’s a strategy?

Yes it is. Check it against the demands of the initial questions if you like; they’re all accounted for. Take some file that doesn’t exist. Either you did nothing to anticipate that situation, then an IOException will ensue. Neither user nor admin is taking care of this, so it’s back on your desk. Or you did handle it and threw a BlameTheAdminException.

Of course I simplified quite a bit, but not at the expense of the concept, only in terms of features. For example you may need different reactions for the user, some errors calling for a message box, others calling for a redirect. Similarly some unavailable service may allow retrying rather than requiring aborting the whole application. Or you may need additional exception classes to handle them in code. Anyway, none of this affects the strategy as such.

Once you’ve established such a strategy, every decision you make on the code level becomes easier, and more confident. And what’s more, in my experience it streamlines error handling, makes it more robust, and frequently simplifies the code by removing unnecessary exception handling. Because quite often if someone asks for exception handling, the answer is “already taken care of, nothing to do”.

Getting back to baseball, every player on the field now knows exactly what he is supposed to do. Some do all the work, most do nothing most of the time… .

PS: The ending anecdote: http://leedumond.com/blog/the-greatest-exception-handling-wtf-of-all-time/

That’s all for now folks,
AJ.NET


July 12, 2009

Playing Ball With Exceptions

Ever come across an ASP.NET error page because of a missing configuration entry? Been faced with the question “shouldn’t we at least catch the exception and log it? Just to make sure?” or some statement like “you should catch every exception and throw a new one”? Stumbled over the notorious catch-all, or a class whose methods all do exception handling a bit differently? Well, those are symptoms of a missing error management strategy.

It’s actually quite frustrating how often that happens. What’s more, many developers don’t even notice that there is something amiss; they’re quite content dealing with exceptions on a method-by-method basis, without taking the bigger picture into account. Or they are very well aware that there is something amiss, but they don’t get it. They know they have to do something, so they set out to do a bit here, a bit there, but not with a clear goal in mind, nothing consistent, only aiming to treat the symptoms and quite regularly overdoing it.

I really don’t remember what first brought the analogy with baseball, of all things, to my mind. Perhaps it was the notion that ‘try’, ‘catch’ and ‘throw’ could be outcries during some ball game, combined with my lack of understanding of baseball (a bit stretching, but that’s how mind twists work). Perhaps it was the idea that many developers do exception handling like they would play baseball if they had never heard of the game.

Stretching or not, that analogy is something I can work with. (And my apologies to everyone who loves and understands that game for everything that follows! ;-))

A Quick Evaluation of … Baseball.

Baseball, like all games, has rules. Rules that put the various players into roles they have to fulfill. Most importantly, rules you have to learn if you want to play along. Some are fairly basic “mechanics” of the game: The pitcher throws in the general direction of the batter, and the catcher does what his title implies (or so Wikipedia tells me).
Some rules are obvious by looking at the game: If the batter hits that ball, he drops the bat and runs, while those guys on the field are trying to catch the ball.
Some rules are rather convoluted and you may have to study some booklet to grasp them; like what the hell is an inning and how do they determine who is winning?

That’s about as much as I understand about baseball. But then, if I wanted to play baseball, I’d know I had a lot to learn before even thinking of entering the field. What rules apply to pitching? What roles do those guys on the field have, what are their responsibilities? How to score points? When to move on to the next round? When does the game end? Under what conditions do the rules change? What is considered foul play, and when does it lead to aborting the game?

And this is a surprisingly similar situation to exception handling… . Except that many who play that game play it as if there weren’t even a need for rules, much less have asked for them (or have been told about them – which is telling, too).

Back to Exceptions

The mechanics of exceptions revolve around try/catch/throw and the implied syntax, exception hierarchies, and understanding the runtime behavior. Knowing that much can be expected from every developer and is usually not an issue at all. It is however only just the mechanics, and merely a starting point to proper exception handling.

In the same way the current situation in a game does play a role in baseball (e.g. the batter’s decisions on whether to play safely or to take some risk), the location and situation of a particular code fragment plays a role in the decision on what to do with regard to exceptions. Dealing with them only in the scope of the current code fragment is simply too shortsighted. One has to ask about the overall application strategy to exception handling and how that respective fragment fits into that strategy.

The caveat is: You have to have such a strategy. And defining that strategy should be part of coming up with an application architecture. Yes, exception handling is an architectural topic, not a coding issue.

Exceptions vs. Errors vs. Complexity

As I said, exception handling is an aspect of static and dynamic application architecture, just like security, transaction handling, state management, whatever. But there’s one thing that sets exception handling apart, and that’s dealing with the unexpected, the contradiction of having to deal with what you didn’t think of in the first place.

Exceptions come into play under three different conditions:

  • First, as a means to break the current control flow, say to abort a request upon redirection to another page. This is intended behavior in a regular situation, not even an error.
  • Secondly in cases that disrupt the logically correct operation, but you actually planned for that disruption. Say you rely on a file to exist but you know that someone might screw up, so you handled that case, despite the fact that during regular operations it shouldn’t happen at all. This is an error, but an intended behavior in an expected irregular situation.
  • The third condition is the worst of all: Something unintended happens, something you cannot do anything about. It may arise as an ArgumentException (telling you that the developer screwed up) or maybe as a TypeLoadException (telling you that the environment is on quicksand). In any case this is a purely unintended error, and the only thing you can assume about the state of your application afterwards is that it is not safe to assume anything.

An application architecture should provide rules on how to handle these conditions. Add questions like how to deal with the consequences, or deal breakers like parallelism, asynchronicity, or loose coupling, as well as reasonable demands regarding data consistency and error compensation… . And this becomes quite a complex task.

The paradoxical situation however is: Complexity is the very source of errors. Adding complexity is like pouring gasoline into the fire. As developers we already have to deal with the complexity of the problem domain. An exception management strategy should kick in when we fail in that regard. And if the developer is already doing something wrong, how can we rely on him to do something else (that isn’t even in his line of thinking) right? We can’t!

So, a good exception handling strategy needs to resolve that contradiction: on one side, the need to deal with different exception conditions; on the other, making things easy for the developers while not relying on them to do things correctly. Not a small task, indeed. And perhaps this is the very reason that exception handling is usually dealt with so poorly.

Stay Tuned…

OK, this was a lengthy introduction, and the actual story is even longer. However I thought it prudent to point out the bigger problem and generally raise awareness for the topic. Of course I also enjoyed playing with that baseball analogy… ;-). Anyway, in the interest of those readers not interested in such musings I split this one into two distinct posts: this one to explain the problem and make you curious, the next one to provide a solution. Stay tuned if I caught your attention.

PS: Special thanks to Karl for the permission to use his pictures…

PPS: There’s actually more to say about baseball than just baseball. Have a look here, German readers may find a more thorough explanation here. And no, it’s got nothing to do with software development ;-)

That’s all for now folks,
AJ.NET


October 6, 2007

Workflow Communication Clarification

Filed under: .NET, .NET Framework, ASP.NET, C#, Software Architecture, Software Development, WF — ajdotnet @ 2:47 pm

My last post may need a clarification…

Tim raised the question whether it is such a good idea to synchronize the communication between web application and workflow instance. To make it short: Generally it isn’t! Period. It hurts scalability and it may block the web request being processed. Therefore, from a technical perspective it is usually preferable to avoid it. Ways around this include:

  • Redirect the user to a different page telling him that his request has been placed and that it may take a while until it is processed.
  • Ignore the fact that the current page may show an outdated workflow state, hope for the best, and take measures so that another superfluous request won’t cause any problems.
  • ….

However sometimes you cannot avoid having to support a use case like the one I presented. But even then — if the circumstances demand it — it might be possible to avoid blocking on the workflow communication. E.g. the web application could write a “processing requested…” state into the workflow instance state table (contrary to what I wrote in an earlier post, but that was just one pattern, not a rule either). The actual call could be made asynchronously, even be delayed somehow.

If you still think about following the pattern I laid out, make sure the processing within the workflow instance is fast enough to feasibly be done synchronously. E.g.:

[Sequence diagram: the web request synchronously waits until the workflow instance has finished processing]

As you can see, synchronization encapsulates the whole processing.

What if the processing takes longer? Well, the pattern still holds if you introduce an “I’m currently processing…” state. The callback won’t tell you the work has been done any longer, but it will still tell you that the demand has been placed and accepted by the workflow instance.

[Sequence diagram: the web request synchronously waits only until the workflow instance has accepted the demand]

In this case synchronization encapsulates only the demand being accepted, not the processing itself. However it still serves the original purpose, which was telling the user that his request has been placed.

What if you need to synchronize with the end of a longer running task? In that case this pattern is not the way to go. The user clicking on a button and the http post not returning for a minute? This definitely calls for another pattern.

I hope this has made things clearer.

That’s all for now folks,
AJ.NET


September 22, 2007

Talking WF…

Filed under: .NET, .NET Framework, ASP.NET, C#, Software Architecture, Software Development, WF — ajdotnet @ 5:11 pm

WF (for the record: Windows Workflow Foundation) is great. If you need it, you really need it. If you have the engine and one workflow running, adding other workflows is no sweat at all. If your workflows are persisted, you won‘t have to care about shutdowns. If you carefully designed your workflow you can use the graphical view for discussions with the user. If you set it up correctly, even errors can be handled by workflows.

Well, WF is great. But there certainly are a lot of “if”s here…

I was at the PDC 05 where Microsoft introduced the WWF (later to be renamed to WF due to some concerned Pandas ;-) ) as THE big surprise. But I had to wait until earlier this year to get my “production hands” on it. Curiously that happened more or less simultaneously in more than one project. And curiously, despite the different nature of the tasks at hand, the projects turned out to be very similar. Similar in terms of the problems, similar in terms of the pitfalls, similar in terms of the solutions. And also very similar in the demand for knowledge of things that are not WF specific but are far from common knowledge for the average developer… (and with that I mean the average knowledge areas, not developers of average knowledge… did that make sense? I have no intention to disqualify someone for not knowing all the stuff that‘s following!)

Curiously^3 while there is so much information available about WF—hosting, data exchange, and what else—it still did not answer our questions. I rather got the impression that my problems were largely ignored. Or if they were mentioned it stopped right there. “Here is something you will have to think about!” Period. “You should use XY.” Period. “That will be a problem.” Period.

So I had to rely on my BizTalk background to fill the gaps (and my knowledge about multithreading even goes back to OS/2, sic! Am I in danger of becoming a dinosaur?). In the following posts I will dig into one or the other topic and some of the “best practices” (if they can be called that) we came up with, which might help the next developer.

But for now let‘s just lay out the example application:

At SDX people work for various customers, usually on-site. If I would like to go on vacation, I might have something to say about the matter myself. So does the customer. So does my account manager. And the back office. And… you get the idea. To manage that you need an informal workflow at least; once the head count of your company goes up, you need it formalized. And this is what our example application (let‘s call it Vacations@SDX) does:

  • Show me how many vacation days I have left.
  • Let me request a new vacation.
  • Let all people concerned with my vacation have their say.
  • If everyone involved acknowledged the vacation write the information to the payroll application.

Of course since most people are mostly out of office, the application has to be a web application, the workflow long running, and notifications done by email. And the payroll application could be offline. And most importantly: DON‘T MESS WITH MY VACATION! Losing my request somewhere along the way simply WON‘T DO!

Nothing special here at all. Some kind of document review, order processing, or whatever workflow would have similar demands. Actually this is a very simple real world example. And yet, if done correctly it is a far from simple undertaking. Certainly it is not as simple as drawing a workflow and hooking up the basics as described in the available documentation.

On the list of things to talk about is workflow instance state, workflow communication, hosting, and error handling. This will be fairly basic stuff from an “architectural” point of view, but I will assume that you know or read up about the WF basics. I‘m not going to repeat the dozen lines of code it takes to host WF in a console application.

If that sounds interesting for you, stay tuned.

That’s all for now folks,
AJ.NET

May 20, 2007

About the Virtue of Not Improving Performance

Filed under: .NET, .NET Framework, ASP.NET, C#, Software Architecture, Software Development — ajdotnet @ 6:11 pm

I am very much in favour of performance awareness, as previous posts should have shown (optimize it, cache as cache can, performance is king, …), nobody should question that. But I repeatedly stumble over advice that I find … questionable.

So, with this post, I thought I might pick up some of the more common hints and tell you why you should not (have to) apply them for performance reasons. Yes, at the price of processor cycles. Here are some links that contain that advice (not the only ones, though):

Please note that my statements are meant for developers of ordinary line-of-business or web-applications. If you write real time software controlling atomic plants, this article is not for you. (Neither is it for the guys at Microsoft working on the .NET Framework or SQL Server.)

Please also note that I left out things related to garbage collection and finalization on purpose.

Optimization techniques that adversely affect class design

Some optimization techniques address the way the CLR is working and are aiming to help the JIT compiler to produce better (read “faster”) code.

Consider Using the sealed Keyword. The rationale behind this recommendation is “Sealing the virtual methods makes them candidates for inlining and other compiler optimizations.” (msdn)

The only candidates for this advice are derived classes of your application. Declaring them as sealed won’t hurt the class design. But won’t the virtual methods usually be called via the base class? Not very much to gain then anyway. Then why bother?
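To illustrate with hypothetical types what the advice boils down to:

```csharp
public class ReportBase
{
    public virtual void Render() { /* ... */ }
}

// Sealing the leaf class makes Render a candidate for devirtualization
// and inlining – but if callers hold a ReportBase reference anyway,
// the call site stays virtual and there is little to gain.
public sealed class PdfReport : ReportBase
{
    public override void Render() { /* ... */ }
}
```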

Consider the Tradeoffs of Virtual Members? Consider the Benefits of Virtual Members, I’d rather say! “Use virtual members to provide extensibility.” (msdn). One should avoid making a method virtual if it’s not intended to be overridden. But that’s a question of class design and design for extensibility rather than a performance related one. Avoiding a virtual declaration for performance reasons? No way.

These are just examples, there is other advice, e.g. regarding volatile or properties. All in all, I personally have dismissed this category of performance related advice. It’s either unnecessary (i.e. should be done for reasons far more important, like “know what you do and design things right”) or of adverse effect (like sacrificing good design for small gains of performance).

Optimization techniques that adversely affect code

Some techniques aim at eliminating unnecessary repetitive calls:

  • Avoid Repetitive Field or Property Access
  • Inline procedures, i.e. copy frequently called code into the loop.
  • Use for loops rather than foreach

These … hints… are what I call developer abuse. Weren’t inlining, loop unrolling, and detection of loop invariants among the better known optimization techniques of C++ compilers? And now I shall do that manually? “Optimizing compiler” is not part of my job description, sorry.

String optimization

My special attention goes to … Mr. String. Mr. String, could you please come up to the stage… Applause for Mr. String!

StringBuilder abuse, i.e. forgetting about +/String.Concat/String.Format, is very common. Take the following code snippet as example:

string t1 = a + b + c + d;
string t2 = e + f;
string t3 = t1 + t2;
return t3;

Quite complex, don’t you think? Wouldn’t you be using a StringBuilder instead? NO! Don’t fall for that StringBuilder babble. I cannot say that all the given advice is wrong; it just plainly forgets to mention String.Concat too often and leads one to a wrong impression.

How many temporary strings do you count? 3 (t1, t2, and t3)? Or 5 (a+b put into a temporary, which in turn is added to c, and so on)? Well, the answer is 3, as the C# compiler will translate all appearances of + in one single expression into one call to String.Concat. If you have a straightforward string concatenation, use + (but use it in one expression!). If it gets slightly more complex, using String.Format (which uses StringBuilder.AppendFormat internally) might be another option.
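In other words (a sketch, assuming a through d are strings in scope):

```csharp
// One expression – the compiler emits a single String.Concat call,
// producing exactly one new string:
string t1 = a + b + c + d;        // String.Concat(a, b, c, d)

// Split across statements – every statement produces its own temporary:
string u = a + b;                 // temporary #1
string v = u + c;                 // temporary #2
string w = v + d;                 // temporary #3
```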

Use StringBuilder if you have to accumulate string contents over method calls, iterations, or recursions. Use it if you cannot avoid multiple expressions for your concatenation and to avoid memory shuffling. And please read “Treat StringBuilder as an Accumulator” (msdn) in order not to spoil the effect.
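A legitimate StringBuilder case, accumulating content over an iteration (sketch with an assumed helper method):

```csharp
using System.Collections.Generic;
using System.Text;

static string JoinLines(IEnumerable<string> lines)
{
    var sb = new StringBuilder();
    foreach (string line in lines)   // content accumulates across iterations –
        sb.AppendLine(line);         // this is StringBuilder territory
    return sb.ToString();
}
```

Here no single concatenation expression could do the job, so the accumulator pays off.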

ASP.NET related things

My favourite category :-)

Disable Session State If You Do Not Use It. That’s sound advice. You may do that for the application. But don’t do it just for one page. If you need session state, chances are you need it for all pages. That particular page is the exception? Well, it can’t be doing very much then and disabling it will hardly improve the performance of your application very much. If you stumble over it go ahead, but don’t waste your time looking for these pages. Spend that time on the majority of your pages that actually need session state; spend it on managing session state efficiently. This way your whole application will profit.
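Disabling it application-wide is a one-liner in web.config:

```xml
<!-- web.config: the whole application does without session state -->
<system.web>
  <sessionState mode="Off" />
</system.web>
```

The per-page equivalent (the `EnableSessionState` page directive attribute) is exactly the micro-optimization argued against above.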

Disable View State If You Do Not Need It. Don’t. You don’t want to do it for the page, as view state is a feature of the controls. You don’t want to do it declaratively for the controls, for that is tedious and error prone work. And you certainly don’t want to do it for every control, for some of them rely on view state to work correctly.
Managing view state is the sensible approach. Find ways to avoid sending large view states back and forth to the client. Check if there is unintended view state usage. The view state is small? Well, why bother? Empty view state is not that expensive. If you worry about that, use derived controls that switch view state off by default.

Using Server.Transfer to avoid the roundtrip caused by Response.Redirect is a bit like being penny-wise but pound-foolish. The user requests page A but you decide he should see page B. Rather than letting him know, you just gave him what he did not ask for. If he does a page refresh (not to mention setting a bookmark), you will always get a request for page A and always have to transfer to page B. But rest assured, each of those transfers definitely is more efficient than redirecting. Oh, and by the way, you just lied to the user. Telling him he is on the Get_This_Gift_For_Free.aspx page when he actually was on the Buy_Overpriced_Stuff_You_Dont_Need.aspx page. Interesting business model, though.

Use Response.Redirect(…, false) instead of Response.Redirect(… [, true]). Now that’s an example of a half understood issue being made a common recommendation. (Am I becoming polemic? Sorry, could not resist :evil: .)
Response.Redirect throws a ThreadAbortException. “An exception! — Oh my, what can we do about that?” “Oh, don’t worry, we can go through all the data binding, page rendering, event handling, and whatever else is left of the page life cycle, we will fight any opponent, such as this perky grid control that refuses to bind against null. And at the end of the day we will have slain the dragon and it won’t fly again.” OK, now I am being polemic. Anyway. The problem with not throwing the exception should have become clear: the remaining page life cycle runs in full, for a page the user will never see. If you want to avoid the redirect altogether, try to do it client side.
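For reference, the two variants side by side (the CompleteRequest call is the usual companion to the false overload):

```csharp
// Ends the response immediately by throwing a ThreadAbortException –
// the rest of the page life cycle is skipped:
Response.Redirect("Target.aspx");

// Does not throw – but data binding, rendering etc. still run, so every
// control must survive being processed after the redirect was issued:
Response.Redirect("Target.aspx", false);
Context.ApplicationInstance.CompleteRequest(); // at least skip the remaining pipeline steps
```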

What to conclude?

Now I’m going to contradict myself: None of the above advice is actually wrong (well, …). Not if you look only at performance in terms of processor cycles. But it is far too expensive in terms of brain cycles and won’t get you the benefit you expect. It trades small gains in performance for adversely affected design, workload on the developer’s side, and additional chances for errors. And it is far too fine grained and restricted to the code level to matter all that much. Usually the interaction patterns between different components or the chosen algorithms have more impact. Not the virtual call to that method, but this non-virtual, non-suspicious method that is called 500 times during one request. Not the fact that 5 strings are concatenated with +, but the fact that those strings result from 10 database calls.

I’m contradicting myself even further: I didn’t say, don’t do it at all. I said don’t do it for performance reasons. There may be other reasons to follow the advice. (E.g. I would “Avoid Repetitive Field or Property Access” in order to have more comprehensible code, code that better supports debugging sessions. “Consider the Tradeoffs of Virtual Members” is no bad advice either.)

If you want to follow some rule upfront, I strongly recommend a sound design and understanding of what you do. And once you’ve got past the broad scenarios, let the profiler tell you what parts of your application to optimize. This way you will spend the work where it matters.

Oh, by the way: Please note that I did not reject every piece of advice. There is a lot of good advice which tells you how to improve performance simply by writing good code (in terms of design, readability, etc.). And quite a few hints for doing things efficiently without adverse effects. Just don’t follow them blindly.

That’s all for now folks,
AJ.NET


May 12, 2007

Javascript and ASP.NET

Filed under: .NET, .NET Framework, ASP.NET, Software Development — ajdotnet @ 4:03 pm

ASP.NET was never a pure server side framework. Right from version 1.0 it employed scripts for client side validation. Back then it was meant to…, well, set IE users apart, because it worked only as an improvement for them. Complementing one’s own server controls with script was the champions league of ASP.NET development — and deployment of these controls was a pain in the… (children around? Sorry!)

Anyway, much is changing in this area. With ASP.NET 2.0 Microsoft has introduced the WebResourceAttribute to address the deployment problem. (See Creating Custom ASP.NET Server Controls with Embedded JavaScript for a better example.) Ajax is doing away with the browser dependency issues and Microsoft has changed its attitude accordingly, resulting in cross browser capabilities in ASP.NET AJAX.
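The pattern in a nutshell — assembly, control, and resource names below are assumptions for illustration:

```csharp
// AssemblyInfo.cs – publish the embedded .js file as a web resource,
// served by ASP.NET via WebResource.axd instead of a deployed file:
[assembly: System.Web.UI.WebResource("MyControls.MyControl.js", "text/javascript")]

// In the control – emit a <script> reference resolving to that resource:
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);
    Page.ClientScript.RegisterClientScriptResource(
        typeof(MyControl), "MyControls.MyControl.js");
}
```

The .js file has to be included in the project with build action “Embedded Resource” for this to work.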

There is still some work to do to better integrate server side and client side coding model. The ASP.NET AJAX Control Toolkit uses some patterns to better handle scripts (via the RequiredScriptAttribute, though not documented) and to address client side properties and events. Supporting postbacks and server side events is still a little awkward, though.

One thing is for sure: With ASP.NET AJAX Microsoft is taking ASP.NET control development as well as scripting to the next level (if only of complexity). If you’ve coded controls using script before, this is a good thing for you. Or it will render your patterns obsolete, who knows. If you’ve only scratched scripting yet, well, prepare for some work. Javascript is not as easy as it seems and ASP.NET AJAX definitely pushes the limits.

That’s all for now folks,
AJ.NET



The Shocking Blue Green Theme. Blog at WordPress.com.
