AJ's blog

August 14, 2011

RIA with Silverlight–The Business Perspective

If you read this, chances are that you are a developer and that you like Silverlight. And why not? Exciting platform, great features, outstanding tooling. But! If you’re a corporate developer, have you sold it to your management yet? If not, this post is for you.

Silverlight is for RIA, and the domain of RIA applications is largely intranet or closed/controlled extranet user groups. These are usually found in larger enterprise companies – companies that typically have a vested interest in controlling their environment. And in terms of bringing software into production, and of operations and maintenance afterwards, every new platform is one platform too many.

So, the odd developer comes along and talks about this great new technology. Does the management care? Probably not. What does it care about? Simple. Money! Money, as in costs for deployment and user support, hardware and licenses to get the stuff up and running, operations and developer training, maintenance. And money as in savings in the respective areas and – the cornerstone, as the business usually pays the bill – impact on the business. All usually subsumed under the term ROI.

About a year ago, I finished an analysis looking into RIA with Silverlight, conducted for a major customer. Not from the point of view of the developer, but that of business people, operations, and IT management:

So, let’s look briefly at each aspect…

User/Business perspective…

The business doesn’t exactly care for the platform Silverlight itself; it cares for its business benefits – benefits as in improved user experience, streamlined business workflows, Office integration, and so on. And since we had some lighthouse projects with Silverlight, we were able to collect some customers’ voices:

“This [streamlining with Silverlight] would reduce a [...] business process [...] from ~10 min to less than a minute.”

“Advanced user experience of Silverlight UI helps raising acceptance of new CRM system in business units”

“I was very impressed of the prototype implementation […] with Silverlight 3. Having analyzed the benefits of this technology I came to the conclusion that I want the […] development team to start using Silverlight as soon as possible. [...]”

This is also confirmed by the typical research companies, like Gartner or Forrester:

“Firms that measure the business impact of their RIAs say that rich applications meet or exceed their goals” (Forrester)

Operations perspective…

In production, the benefit of Silverlight applications (compared with respective conventional web based applications) is reduced server and network utilization.

For example, we had a (small but non-trivial) reference application at our disposal, which was implemented in ASP.NET as well as Silverlight (as part of an analysis to check the feasibility of Silverlight for LOB applications). We measured a particular use case with both implementations – starting the application and going through 10 steps, including navigation, searches, and selections. Both applications were measured after a warm-up phase, meaning that the .xap file, as well as images and other static files, had already been cached.

The particular numbers don’t matter; what matters is the difference in the amount of data exchanged for each step (in the case of navigation, none at all for Silverlight). For the single steps:

And accumulated over time:

A ratio of roughly one tenth of the network utilization is quite an achievement – and considering that the Silverlight application wasn’t even optimized to use local session state and caching, the savings could be even greater.

This should have a direct impact on the number of machines you need in your web farm. Add the fact that session state management on the client drastically reduces the demand for ASP.NET session state – usually realized with a SQL Server (cluster) – and there is yet another entry on the savings list.

On the downside, there is the deployment of the Silverlight plugin. For managed clients – especially if outsourcing the infrastructure comes into play – this may very well become a showstopper.

IT Management perspective…

With respect to development and maintenance, what IT Management should care about includes things like the ability to deliver on business demands, development productivity, bug rates in production, costs for developer training, and so on.

These are all areas in which Silverlight can shine, compared with other RIA technologies, and with the typical mix of web technologies as well:

  • Rich, consistent, homogenous platform
    • .NET Framework (client and server), Visual Studio, Debugger, C#
    • Reduced technology mix, fewer technology gaps, less broad skill demands
  • Improved code correctness and quality…
    • compiler checks, unit testing, code coverage, debugging, static code analysis, in-source-documentation, …
  • Improved architecture and code
    • Clean concepts, coding patterns, clear separation of client code, lead to better architectures
    • Powerful abstractions lead to less code (up to 50% in one project), less complexity, fewer errors

Customers’ voices in this area:

“between our desktop app and the website, we estimate 50% re-use of code”

“a .NET developer can pretty much be dropped into a SL project. […] This is a huge deal […]”

“As alternative for Silverlight we considered Flash. […] only Silverlight could provide a consistent development platform (.NET/C#). […]”

 

Conclusion…

Taking all this together, and considering that enterprise companies usually have the tooling and test environments (well…) readily available, this all adds up to something like the following bill:

RIA Return on Investment

Whether the bill looks the same for your company or for one particular project depends, of course, on many things – especially nowadays, with all the hubbub around HTML5 and mobile applications (without any relevant Silverlight support). But if RIA is what you need, then Silverlight will quite often yield far more benefits than any other option.

Still, you need to do your own evaluation. However, I hope I have given you some hints on what to focus on if you want to sell technology to the people who make platform decisions in your company.

The actual analysis was fairly detailed and customer specific. But we also prepared a neutralized/anonymized version, which we just made available for download (pdf). (Also directly at SDX.)

That’s all for now folks,
AJ.NET


December 23, 2010

Visual Studio Async CTP – An Analysis

Filed under: .NET, .NET Framework, C# — ajdotnet @ 12:21 pm

OK, buckle up. This is going to be a long one, even by my standards…

During the last PDC, Microsoft announced their next innovation: Async C#, currently available as a CTP. There is also a respective white paper; I recommend reading it if you need an introduction.

Just for the record, here’s what we are talking about:
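The original snippet was an image; as a minimal stand-in, here is the kind of method under discussion (the names are mine, and DownloadStringTaskAsync stands for the task-returning download method the CTP provides):

public async Task<int> GetPageSizeAsync(Uri uri)
{
    var client = new WebClient();
    // await returns control to the caller here; the rest of the
    // method resumes once the download has completed
    string page = await client.DownloadStringTaskAsync(uri);
    return page.Length;
}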

This roughly translates to:
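Again a stand-in, mirroring the method above as an explicit task with a continuation:

public Task<int> GetPageSizeAsync(Uri uri)
{
    var client = new WebClient();
    // the code after the former await becomes a continuation of the download task
    return client.DownloadStringTaskAsync(uri)
        .ContinueWith(t => t.Result.Length);
}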

As you can see, the compiler mainly translates an await into a new task and the remainder of the method into a continuation for that task.

To tell the truth, the actual generated code looks quite different; it is semantically equivalent, though.

Simple enough, but that’s only a simple case. Actually there’s more:

  • If we are talking about a UI thread, the runtime ensures that the continuation runs on the UI thread as well. The task version would need to marshal the call to the UI thread explicitly, e.g. by calling Dispatcher.BeginInvoke.
  • If the await is contained within a loop of some kind, the compiler also handles that, cutting the loop into asynchronous bits and pieces. Similarly for conditional calls.

That’s a nice set of features and there is no doubt that this approach works (there is a CTP after all). And reception within the community so far has been very positive.

However!

If this is really such a compelling feature, it should be quite easy to present a bunch of examples in which it pays off. Instead, what I found was a bunch of examples that made me wonder. Therefore I decided to put it to some…

Litmus tests…

To test whether async/await actually merits the praise, I set out to recheck and question the examples provided by various Microsoft people:

  • Somasegar’s introduction (actually taken from the white paper mentioned above)
  • Netflix (used by Anders Hejlsberg during his PDC talk and available as a sample within the CTP)
  • Eric Lippert’s discussion

Note: Examples have a tendency to oversimplify. I chose this set to overcome that effect: Soma provides a simple introduction, Netflix is more “real world”, and Eric looks at it from a language designer’s point of view. Three different samples, three different people, three different perspectives.

Note 2: The code I present is from my test solution (download link at the end), thus it is not exactly a citation; rather, it is complemented with my bookkeeping code. I also didn’t explicitly address every question that may arise (such as exception handling). This post got long enough as it is.

Somasegar’s blog

Let’s begin with Soma’s introduction, as it is the simplest one:

He provides a “conventional” asynchronous solution that looks… quite convoluted, actually. Note that I prepended the necessary code to synchronize the call.

code_soma_taskasproposed_a

code_soma_taskasproposed_b

And his respective solution using the async/await looks quite nice in comparison:

code_soma_withasync

Easier to write, and easier to understand, no question about that.

However: Looking at the awfully convoluted task based version, I asked myself: “Who would write such code in the first place?”

The use case is to load some web pages and sum up their sizes, and to do this asynchronously, probably to avoid freezing the main thread. And the solution – awful and async/await alike – insists on getting the value for one web page asynchronously, but coming back to the UI thread to update the sum; then starting the next fetch asynchronously, and again coming back to the UI thread to update the sum.

But then, why not put the whole loop in a separate thread? The only caveat would be the marshaling of UI calls (i.e. a call to Dispatcher.BeginInvoke, which I left out here in favor of simple tracing, but it’s addressed in the next example). Somewhat like this:

code_soma_mytask
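Since the listing is only available as an image here, a rough sketch of the pattern (names are mine, tracing instead of Dispatcher.BeginInvoke, as noted above):

public void SumPageSizesAsync(IEnumerable<Uri> uris)
{
    Task.Factory.StartNew(() =>
    {
        int total = 0;
        foreach (var uri in uris)
        {
            // synchronous download, running on the background thread
            total += new WebClient().DownloadData(uri).Length;
            Trace.WriteLine("running total: " + total);
        }
        Trace.WriteLine("result: " + total);
    });
}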

This is very close to the await/async version, nearly equally easy to write, and certainly as easy to understand. And it did not need a language addition, merely a little thinking about what it is that shall actually be accomplished.

OK, one example that didn’t really convince me.

Netflix (sample in the CTP)

The Netflix sample is part of the CTP and was also used by Anders Hejlsberg during his PDC talk. The sample solution comes with a synchronous version, a conventional asynchronous version, and the new async/await version.

The synchronous version looks straightforward, with 43 relatively harmless LOC for the two relevant methods (LoadMovies and QueryMovies):

code_netflix_sync_a

code_netflix_sync_b

The conventional asynchronous version – do I even have to mention it? – is in comparison an ugly little piece of code. For the sake of completeness:

code_netflix_taskasproposed_a

code_netflix_taskasproposed_b

code_netflix_taskasproposed_c

Twice as much code (85 LOC), a mixture of event handler registrations with nested lambdas and invokes. It takes considerable time to fully understand it. No one in his right mind would want to have to write this.

The async/await version – again – turns the synchronous version into a similarly straightforward looking asynchronous version:

code_netflix_async_a

code_netflix_async_b

Again the initial verdict: The async/await version looks nice and clean compared to the ugly task based version, and very similar to the synchronous one. Hail to async/await.

But again, looking at the ugly asynchronous version, I asked myself, “Who would actually write such code? And why? Masochistic tendencies?”

Let’s take a step back and look at the use case: Call a web service asynchronously to avoid freezing the UI (as the synchronous version does), update the UI with the result, and call the web service again if there is more information available.

The task based version tries to manually dismantle this loop into a sequence of asynchronous calls, each followed by a callback on the main thread to update the UI, followed again by the next asynchronous call. Add cancelation and exception handling, and it is surprising that it isn’t even more ugly than it already is.

But! Why even try to dismantle the loop? Why not put the whole loop in a separate thread? Yes, that would solve the use case, and yes, it would be just as clean as the synchronous version. The only caveat is that each interaction with the UI has to be marshaled into the main thread, but I can live with that. Here it is:

code_netflix_mytask_a

code_netflix_mytask_b

I left the QueryMovies method out for brevity, as it is similar to the synchronous version above.
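As the listings are images, here is a sketch of the pattern: the paging loop runs on a background task, and only the UI updates are marshaled to the main thread (QueryMovies and AppendMovies are illustrative stand-ins):

void LoadMoviesAsync(int year)
{
    Task.Factory.StartNew(() =>
    {
        const int pageSize = 10;
        for (int index = 0; ; index += pageSize)
        {
            // synchronous service call on the background thread
            var movies = QueryMovies(year, index, pageSize);
            if (movies.Length == 0)
                break;
            // marshal the UI update to the main thread
            Dispatcher.BeginInvoke(() => AppendMovies(movies));
        }
    });
}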

Again this is very close to the await/async version, again nearly equally easy to write, and – again – certainly as easy to understand. And – again (pardon the repetition) – it did not need a language addition. Merely – ag… ok, ok – a little thinking about what it is that shall actually be accomplished.

Another example that didn’t convince me. But two is still coincidence.

Eric Lippert’s discussion

Eric has a slightly different use case: for a list of web addresses, fetch the web page and archive it. There should be only one fetch at a time, and only one archive operation, but fetch and archive may overlap (described here). This is more about overlapping different operations than about avoiding freezing the UI, as in the previous examples.

The synchronous version is this simple:

code_lippert_synchronous

He constructed the asynchronous version (not even mentioning async/await) as a state machine, described in the post I just mentioned. I’ll spare you the reiteration this time; follow the link if you have doubts that it is far more convoluted than the simple loop above. But of course, async/await comes to the rescue:

code_lippert_withasync

Nice and simple. But can you deduce the logic behind this? I have to admit it required some mind twists on my part. Now, to make it short, here is the task version:

code_lippert_mytasks1

Not quite as simple as the async/await version, but if you look closely, it’s actually a very close translation into continuations. And… equally inefficient. You may have noticed the comments I put in there, stating that there is some unnecessary synchronization between a fetch and an earlier archive operation.

So, this time I actually went a little bit further: I added some test data, bookkeeping, and a nice little printout to visualize the sequence. Here’s what the async/await version (and similarly the task version) produces:

app_lippert_async

“f” stands for fetch, “a” for archive. The lower part shows the operations according to their start and end times. You can see nicely how tasks follow other tasks, and how fetch and archive run in parallel. No fetch overlaps another fetch, no archive overlaps another archive operation. Well. Have a look at line 3. See how the fetch starts only after the archive in line 1 has finished? It could have started earlier, couldn’t it? Same in line 6. That’s the unnecessary synchronization I mentioned above.

A little working on the task version solves that by chaining the fetches and archives onto their respective tasks:

code_lippert_mytasks2

However, now it becomes a little more complex and harder to understand.

Going back to the drawing board. What again is the use case? It’s actually a pipeline with multiple stages (two in this case), each stage throttled to process only a certain number of input values (one in this case). The cookbook solution is to put a queue in front of each stage, and have separate workers (again, in our case just one) take input from this queue, process it, and place the result in the next stage’s queue. Like this:

code_lippert_myqueues
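The listing is an image as well; as a rough sketch of the idea, assuming .NET 4’s BlockingCollection – note that the original queues lambdas, while this simplified version queues the data itself (Fetch and Archive are stand-ins):

var fetchQueue = new BlockingCollection<Uri>();
var archiveQueue = new BlockingCollection<byte[]>();

foreach (var uri in uris)
    fetchQueue.Add(uri);
fetchQueue.CompleteAdding();

// stage 1: a single worker fetching pages, feeding the next stage's queue
var fetchWorker = Task.Factory.StartNew(() =>
{
    foreach (var uri in fetchQueue.GetConsumingEnumerable())
        archiveQueue.Add(Fetch(uri));
    archiveQueue.CompleteAdding();
});

// stage 2: a single worker archiving whatever stage 1 has produced
var archiveWorker = Task.Factory.StartNew(() =>
{
    foreach (var data in archiveQueue.GetConsumingEnumerable())
        Archive(data);
});

Task.WaitAll(fetchWorker, archiveWorker);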

Two queues of lambdas, two tasks processing these queues, the first lambda placing new entries into the second queue upon completion. If you ask me, this is way closer to the proposed use case than either the task based versions or the async/await version. And the logic behind it – intention as well as control flow – which in this case is a little more complicated than in the previous examples, is also far easier to comprehend. And the best part: look at the output:

app_lippert_queues

Fetch and archive start independently of each other, making the solution more efficient than the async/await version. Why? Because I actually thought about the problem and the solution, rather than simply throwing asynchronicity at the code.

If anything, this example raises even more concerns than the other ones.

Verdict

Three different samples. Three times presenting some conventional task based asynchronous code – code that is rather convoluted, ugly, and hard to understand. Three times a nice async/await version that clearly makes the code nicer.

But also three times it “only” took a little thinking to come up with a far nicer task based version than the proposed convolution. Two times very close to the async/await version. The third time actually more efficient than that.

One could almost get the impression that the convoluted task based code has been deliberately made ugly in order to provide a better motivation for the new features. I wouldn’t go that far, but still, it raises some questions as to how these examples came into being. Perhaps a back-port of what the compiler produces for async/await?

Granted, each example I examined may have its own compromises which slightly invalidate it (as is often the case with examples); had I only found one, I wouldn’t have bothered writing this post. But: one is an incident, two is coincidence, three is a trend. And I did not stumble over a single example yielding a different result.

So, to summarize my concerns:

  1. I showed that the value of async/await is less compelling than presented. This does raise the question of whether it is still compelling enough to merit a language extension.

    I judge that value by its ability to make code shorter, easier to write, and easier to comprehend. Or to quote Eric Lippert:

    “Features have to be so compelling that they are worth the enormous dollar costs of designing, implementing, testing, documenting and shipping the feature. They have to be worth the cost of complicating the language and making it more difficult to design other features in the future.” http://blogs.msdn.com/b/ericlippert/archive/2008/10/08/the-future-of-c-part-one.aspx

  2. The way I verified my concerns was actually to take one step back and think about the demand. I hate to think that async/await actually encourages developers to think less, and in turn write less than optimal code – especially the less experienced developers, who’ll have a hard time understanding the implications of those keywords in the first place.

To make that clear: I did not show anything that suggests that the async/await language feature is per se wrong. Nor did I prove that there is no use case free of my concerns – only that Microsoft has failed to present one yet (or I have failed to find it).

Conclusion

Here’s my conclusion: I could live with concern #1, but #2 is something that gives me the creeps. There are two things I would like Microsoft to do to make the feature work (or drop it altogether):

  1. Make it more obvious what the code actually does (e.g. see the comments here for some naming suggestions – although my concern goes beyond simple naming issues).
  2. Rethink the task based samples you are basing your reasoning on. We don’t need a feature that makes convoluted code simpler; we need features that keep the code we would write in the first place simple.

BTW: Whether async/await is baked into the language or not, this endeavor showed one thing quite clearly: asynchronicity is important, and becoming even more so. And Microsoft certainly did accomplish something great with the TPL, on which all these discussions build without even mentioning it anymore. And as Reed pointed out, Tasks play a major role in the CTP as well.

Finally: Here is the download link for those who’d like to recheck my findings.

Merry Christmas!

That’s all for now folks,
AJ.NET


November 7, 2010

Silverlight. What if?

Filed under: .NET, .NET Framework, Silverlight, Software Development — ajdotnet @ 6:39 pm

PDC happened, and Microsoft fouled up the Silverlight message big time. The talk was that Silverlight is dead on the client, based on Steve Ballmer not mentioning Silverlight and an interview with Bob Muglia published on Mary-Jo’s blog. Actually, we even got irritated emails from our own customers, whom we had just convinced that Silverlight is the right choice for RIA applications.

It took some time for Microsoft to realize what a fatal message they had sent, but eventually Muglia backpedaled, and from there on it seems that every other Microsoftie and close associate came out to deny the imminent death of Silverlight. So far I’ve stumbled over:

Among the non-Microsofties were:

So, just for the record: I sincerely believe that Microsoft is still very much committed to Silverlight as the RIA technology for regular (read: non-WP7) clients.

 

Still, as relieved as I am that this mess unfolded the way it did, it kept me thinking. What if? I mean, what if Microsoft actually had dropped Silverlight on the client…?

What if?

Just imagine… What would happen if Microsoft actually had changed their strategy? What if Silverlight was really dead on the client, and “only” the development platform for WP7?

  • For the vast majority of regular web applications: Nothing much would have happened. Use cases here mostly include video and advertisement, and due to the availability of the plugin this is the domain of Adobe Flash. With the advent of HTML5 and its coverage of video and graphics (canvas) – and not least the backing it gets from Apple – it should have been clear to everybody with open eyes that HTML5 will be the future in that area. But that will happen at the expense of Flash, not Silverlight!

But from there it would go downhill, and Microsoft would start losing…

  • Microsoft would lose a platform for non-typical demands on the web. Demands that go far beyond what HTML5 can deliver: complex UIs such as car configurators (Mazda), HD video streaming (maxdome). And it is Silverlight that is gaining momentum in this market, not Flash or some other technology.
     
  • Microsoft would lose their platform for RIA applications. In this area HTML5 is of no further relevance at all; rather, Adobe AIR and JavaFX are the competition. And in this area Silverlight is way ahead of the competition, both technically in terms of business features and by adoption (usually intranet applications, but also SAP).
     
  • Microsoft would lose the developer base it relies on for Windows Phone 7, and with it WP7 itself. One of the big selling points for WP7 is the fact that it uses the very same platform as is used for RIA; thus every developer using Silverlight instantly becomes a WP7 developer. Ironically, focusing Silverlight on WP7 would take away that advantage. Silverlight would become a platform you have to learn before doing phone development. And since WP7 is just taking off, its future not yet certain, why take the risk? Why not learn Android instead? Who would then build the apps Microsoft needs?
     
  • It gets worse: Microsoft would lose credibility, and the trust of the developer community. This is not a technology at the end of its life cycle we’re talking about. It’s a technology just beginning to take off – and a technology that they told us was strategic! A technology more and more developers are just beginning to adopt and invest in. If Microsoft dropped Silverlight – without any warning, I might add – how would those developers react? How could they trust Microsoft to be true to what it calls “strategic” in the future? I know what I would think.
     
  • Needless to say, this would affect their partners and customers in the same way. Who would invest in any platform if he cannot be sure the platform will be maintained for a reasonable time (rather than being dropped on the spur of the moment)? And if the vendor cannot be trusted? The platform may be as good as it wants; the first thing to care about is protecting my investments.
     
  • In the end this would include every API, every platform, every offering they have. This especially includes Azure – the very platform Microsoft is betting the company on. Which developer would work against that API? Which ISV would build his software on Azure? Which partner would counsel his customers to use Azure? Which enterprise would entrust Azure with his applications and his data? It’s a strategic platform for Microsoft, sure. But by dropping Silverlight they would just have taught us what that means.

Ultimately, Microsoft could lose… Microsoft. Because at the end of the day, credibility is the most important thing. That’s what made this whole thing a marketing fiasco – not the fact that a bunch of developers and companies had jumped on the wrong bandwagon. Lose credibility and you lose the company.

Now, all this is hypothetical – I hope I made that clear with my statement above. And it is also just an opinion, and certainly exaggerated in some points. But it may very well be the kind of trash talk and FUD that Microsoft will be experiencing for some time. Which is why I believe that Microsoft will have to keep doing damage control for quite some time to come.

Yours sincerely,
AJ.NET


September 4, 2010

The Cost of String.Split

Filed under: .NET, .NET Framework, C# — ajdotnet @ 8:13 pm

Some code I wrote a while ago came back to haunt me. You may know the feeling.

The use case was simple: a tool chain, processing some files and spitting out others (host service descriptions as input, configuration and proxy code as output – it doesn’t really matter though). At one stage I needed to process a file line by line. And, as usual, my code was nice, clean, tested, maintainable, performing quite well, robust, and unbreakable. In one word: just perfect. Well, until someone broke it. All he did was feed it a “slightly” larger code fragment – about 5MB file size for the corresponding text file – and he encountered OutOfMemoryExceptions. Sick!

So, what on earth could have happened here? Well, one of the last things I would have expected: string.Split and StringBuilder – despite the fact that I had, of course, employed them correctly, consistent with the general recommendations!

The Cost of String.Split

It turned out that string.Split trades memory for performance… Take a string of a certain length, say 100, and split it into, say, 10 parts. The result you’ll get will consist of the array (4 bytes per entry, 10 entries in our example) and the content fragments. This is roughly the same memory consumption for the result as for the input string (see also MSDN).

Note: I’m leaving out the overhead for the internal management of strings and memory allocation in general.

That’s the memory consumption that is obvious and cannot be avoided (OK, I’ll get back to that one!). But what does String.Split add to that cost, if only temporarily?

String.Split internally creates an array of integers, with the length of the string as its size. In other words, for an input string of 100 characters it creates an int[100], using up 400 bytes! This array is used to remember the indexes of the found separators, writing each found index at the next free position. That implies that only the first slots are actually used – one per separator, i.e. roughly as many as the number of parts the string is split into. In our example, 90 of the 100 slots are simply wasted – a usage ratio of 10%! The array would only be used completely in the extreme case of a string consisting solely of separators.

In case of StringSplitOptions.RemoveEmptyEntries and at least one empty entry, another temporary array for the result is created – add another array with the size of the number of parts. Seems things can only get worse.

Once more: for any given string as input, String.Split produces a result that is somewhat bigger than the input – and on top of that it temporarily needs twice as much memory as the input!

OK, so 100 characters, 200 bytes, produce 240 bytes of result (40 bytes for the array) and 400 bytes temporarily on top. Who cares? But wait, remember the 5MB text file? The string read from it needed 10MB, and the split result ~11MB. Additionally ~20MB temporarily, of which only 4% – less than 1MB – were actually used. And all of this is subject to the Large Object Heap. Well? Who cares now?

Mending…

Granted, that’s an extreme case. But how often did I use String.Split in other cases? How often within loops? How big were the input strings? How much unnecessary overhead and memory pressure did that introduce?

What if I could get rid of the temporary overhead? Great. What if I could even get rid of the result? Wait… what?

As a matter of fact, quite often the result of String.Split is processed once, entry by entry, in a loop. And that’s what enumerators are for. So, if I had a respective implementation, instead of writing

string[] lines = text.Split(Environment.NewLine[0]);
foreach(string line in lines)
    …

I could call a method that returns an enumerator:

var lines = text.SplitString(Environment.NewLine[0]);
foreach(string line in lines)
    …

And with the lazy evaluation of enumerators I would get one part at a time. And once I’m done processing it, I drop it before going on to the next part. Granted, I still need to touch every part, but no longer all at once. When memory gets tight, chances are all those parts are in gen 0 when the GC hits.

Meaning I would no longer consume more than three times the memory of the input string at once. A string of 100 characters/200 bytes as input would not acquire an additional 600 bytes. A 10MB string would not need an additional 31MB (of which only 12MB would have been used anyway, subject to the LOH).

Well, I had a minute to spare… I wrote a helper class for this post, tested to behave exactly like String.Split, so it should be possible to just replace every call to Split with the SplitString extension method, as above. You can find the code for download here.
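To give an idea of the approach, here is a simplified sketch (not the tested helper from the download – it covers only the single separator character case and ignores StringSplitOptions):

public static class StringHelper
{
    // lazily yields the parts of the string one at a time,
    // instead of materializing the complete array up front
    public static IEnumerable<string> SplitString(this string text, char separator)
    {
        int start = 0;
        while (true)
        {
            int index = text.IndexOf(separator, start);
            if (index < 0)
            {
                yield return text.Substring(start);
                yield break;
            }
            yield return text.Substring(start, index - start);
            start = index + 1;
        }
    }
}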

That’s all for now folks,
AJ.NET


August 29, 2010

CommunicationException: NotFound

Filed under: .NET, .NET Framework, ASP.NET, C#, Silverlight, Software Development, WCF — ajdotnet @ 3:44 pm

CommunicationException: NotFound – that is the one exception that bugs every Silverlight developer sooner or later. Take the image from an earlier post:

app_error

This error essentially tells you that a server call somehow messed up – which is obvious – and nothing beyond that, much less anything useful to diagnose the issue.

This is not exactly rocket science, but the question comes up regularly in the Silverlight forums, so I’m trying to convey the complete picture once and for all (and only point to it once the question comes up again – and it will!).

I’m also violating my rule not to write about something that is readily available somewhere else. But I have the feeling that the available information either is limited to certain aspects, not conveying the complete picture, or is hard to come by or to understand. Why else would the question come up that regularly?

 

The Root Cause

So, why the “NotFound”, anyway? Any HTTP response contains a numeric status code: 200 (OK) if everything went well, others for errors, redirections, caching, etc.; a list can be found on Wikipedia. Any error whatsoever results in a different status code, say 401 (Unauthorized), 404 (Not Found), or 503 (Service Unavailable).

However, any plugin using the browser network stack (as Silverlight does by default) is also subject to some restrictions the browser imposes in the name of security: the browser passes only 200 through to the plugin in the good case, and a bare 404 without any further information in every other case. And the plugin can do exactly NIL about it, as it never gets to see the original response.

Note: This is not Silverlight specific, but happens to every plugin that makes use of the browser network stack.

Generally speaking there are two different groups of issues that are reported as errors:

  1. Service errors: The service throws some kind of exception.
  2. Infrastructure issues: The service cannot be reached at all.

Since those two groups of issues have very different root causes, it makes sense to be able to at least tell them apart, if nothing else. This is already half of the diagnosis.

 

Handling Service Errors

Any exception thrown by a WCF service is by default returned as a service error (i.e. SOAP fault) with HTTP response code 500 (Internal Server Error). And as we have established above, the Silverlight plugin never gets to see that error.

The recommended way to handle this situation is to tweak the HTTP response code to 200 (OK) and expect the Silverlight client code to be able to distinguish errors from valid results. Actually this is already baked into WCF: a generated client proxy will deliver errors via the AsyncCompletedEventArgs.Error property – if we tweak the response code, that is. Fortunately the extensible nature of WCF allows us to do just that using a behavior, which you can find readily available here.
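A minimal sketch of such an endpoint behavior – essentially what the linked download provides, reduced to its core (don’t mistake this for the full implementation):

using System.Net;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class SilverlightFaultBehavior : IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new FaultMessageInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }

    class FaultMessageInspector : IDispatchMessageInspector
    {
        public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
        {
            return null;
        }

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            if (reply.IsFault)
            {
                // rewrite 500 to 200, so the browser hands the SOAP fault
                // through to the Silverlight plugin instead of swallowing it
                var property = new HttpResponseMessageProperty { StatusCode = HttpStatusCode.OK };
                reply.Properties[HttpResponseMessageProperty.Name] = property;
            }
        }
    }
}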

Once we get errors through to Silverlight we can go ahead and make actual use of the error to further distinguish server errors:

  1. Business errors (e.g. server side validations) with additional information (like the property that was invalid).
  2. Generic business errors with no additional information.
  3. Technical errors on the server (database not available, NullReferenceException, …).

It’s the technical errors that will reveal more diagnostic information about the issue at hand, but let’s go through them one by one…

Business errors with additional information are actually part of the service’s contract; more to the point, the additional information constitutes the fault contract:

code_declared_fault1

code_declared_fault2
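(The two listings are images; roughly reconstructed, using the StandardFault name mentioned in the note below – the service contract details are illustrative, Customer being a stand-in data contract:)

[DataContract]
public class StandardFault
{
    [DataMember]
    public string Message { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // the FaultContract attribute declares StandardFault as part of the contract
    [OperationContract]
    [FaultContract(typeof(StandardFault))]
    Customer GetCustomer(int customerId);
}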

These faults are also called declared faults for the very reason that they are part of the contract and declared in advance. Declared faults are thrown and handled as FaultException<T> (available as a full blown .NET version on the server, and as a respective counterpart in Silverlight), with the additional information as the generic type parameter:

code_throw_declared
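(Again a stand-in for the image:)

throw new FaultException<StandardFault>(
    new StandardFault { Message = "No customer with this id." });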

Note: There’s no need to construct the FaultException from another exception. And of course this StandardFault class is rather simplistic, not covering more fine-grained information, e.g. invalid properties – which you may need in order to plug into the client side validation infrastructure. But that’s another post.

On the client side this information is available in a similar way, and can be used to give the user feedback:

code_handle_declared

Generic business errors are not part of the service contract, hence they are called undeclared faults, and they cannot carry additional information beyond the fault message itself. From a coding perspective they are represented by FaultException (the non-generic version, .NET and Silverlight) and thrown and handled similarly to declared faults:

code_handle_undeclared

However, the documentation states…

“In a service, use the FaultException class to create an untyped fault to return to the client for debugging purposes. […]

In general, it is strongly recommended that you use the FaultContractAttribute to design your services to return strongly typed SOAP faults (and not managed exception objects) for all fault cases in which you decide the client requires fault information. […]”

MSDN

 

That leaves arbitrary exceptions thrown for whatever reason in your service. WCF also translates them to (undeclared) faults, yet it uses the generic version of FaultException with the predefined type ExceptionDetail. This way, any exception in the service can (or rather could) be picked up on the client:

code_handle_exception

However, while ExceptionDetail contains information about the exception type, stack trace, and so on, that fault by default contains only a generic text stating “The server was unable to process the request due to an internal error.”. This is exactly as it should be in production, where any further information might give the wrong person too much information. During development, however, it may make sense to get more information, to be able to diagnose these issues more quickly. To do that, the configuration has to be changed:

config_exception_debug
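(The configuration is shown as an image; it boils down to one setting in web.config – a sketch:)

<serviceBehaviors>
  <behavior name="DebugBehavior">
    <!-- development only: include exception details in faults -->
    <serviceDebug includeExceptionDetailInFaults="true" />
  </behavior>
</serviceBehaviors>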

And now the returned information contains various details about the original exception:

app_exception_info

BTW: To complete the error handling on the client, you need to address the situation where the issue occurred on the client itself, in which case the exception would not be of some FaultException type:

code_handle_client_error

This covers any exception thrown from the WCF service, provided it could be reached at all.

Alternative routes…

As I said, tweaking the HTTP response code is the recommended way to handle these errors. It is still a compromise on the protocol level to work around the browser network stack limitation. However, there are other workarounds, and for the sake of completeness:

  1. Compromise on the service’s contract: Rather than using the fault contract, one could include the error information in the regular data contract. This is typical for REST style services; e.g. Amazon works that way. For my application services I am generally reluctant to make that compromise. The downside is that it doesn’t cover technical errors, but that can be remedied with a global try/catch in your service methods.
  2. Avoid the browser network stack: Silverlight offers its own network stack implementation (client HTTP handling), though it defaults to using the browser stack. Using client HTTP handling, one can handle any HTTP response code, and it offers more freedom regarding HTTP headers and methods. The downside, however, is that we lose some of the features the browser adds to its network stack – cookie handling and the browser cache come to mind.

 

Handling Infrastructure Issues

If some issue prevented the service from being called at all, there is obviously no way for it to tweak the response. And unless we revert to client HTTP handling (which would be a rather drastic measure, given the implications), the Silverlight client gets no chance to look at it either. Hence, we cannot do anything about our CommunicationException: NotFound.

However, by tweaking the response code for service exceptions as proposed above, we at least make it immediately obvious (if only by indirect reasoning) that any remaining CommunicationException: NotFound is indeed an infrastructure issue.

The good news is that infrastructure issues usually carry enough information by themselves. They also appear rarely, but if they do, they usually are quite obvious (including being obviously linked to some recent change), affect every call (not just some), and are easily reproducible. Hence, using Fiddler, one can get information about the issue very easily (even in the localhost scenario).

The fact that the issue becomes pretty obvious pretty fast in turn makes it usually quite easy to attribute it to the actual cause – it must have been a change made very recently. Typical candidates are easy to track down:

  • Switching between Cassini and IIS. I have written about that here.
  • Changing some application URLs, e.g. giving the web project a root folder for Cassini, without updating the service references.
  • Generating or updating service proxies, but forgetting to change the generated URLs to relative addresses.
  • Visual Studio sometimes assigns a new port to Cassini if the project settings say “auto-assign port”, and the last used port is somehow blocked. This may happen if another Cassini instance still lingers around from the last debugging session.
  • Any change recently made to the protocol or IIS configuration.

This only gets dirty if the change was made by some team member and you have no way of knowing what he actually changed. But since this will likely affect the whole team, you will be in good company ;-)

 

Wrap up

There are two main issues with CommunicationException: NotFound:

  1. It doesn’t tell you anything, and the slew of possible reasons makes it unnecessarily hard to diagnose the root cause.
  2. It prevents legitimate handling of business errors in a WCF/SOAP conformant way.

Both issues are sufficiently addressed by tweaking the HTTP response code of exceptions thrown within the service, which is simple enough. Hence the respective WCF endpoint behavior should be part of every Silverlight web project. And in case this is not possible for some reason, you can revert to client HTTP handling.

 

Much if not all of this information is available somewhere within the Silverlight documentation. However, each link I found covered only certain aspects or symptoms, and I hope I have provided a more complete picture of how to tackle (for the last time) CommunicationException: NotFound.

That’s all for now folks,
AJ.NET


August 8, 2010

Silverlight and Integrated Authentication

Filed under: .NET, .NET Framework, ASP.NET, C#, Silverlight, Software Development, WCF — ajdotnet @ 11:25 am

I’ve been meaning to write about this for a while, because it’s a recurring nuisance: using integrated authentication with Silverlight. More to the point, the nuisance is the differences between Cassini (Visual Studio Web Development Server) and IIS, in combination with some WCF configuration pitfalls for Silverlight-enabled WCF services…

Note: Apart from driving me crazy, I’ve been stumbling over this issue quite a few times in the Silverlight forums. Thus I’m going through this in detail, explaining one or the other seemingly obvious point…

Many ASP.NET LOB applications run on the intranet with Windows integrated authentication (see also here). This way the user is instantly available from HttpContext.User, e.g. for display, and can be subjected to application security via a RoleProvider. Silverlight, on the other hand, runs on the client. I have written about making the user and his roles available on the client before. However, the more important part is to have this information available in the WCF services serving the data and initiating server side processing. And being WCF, they work a little differently from ASP.NET – or not, or only sometimes…

 

Starting with Cassini…

Let’s assume we are developing a Silverlight application using the defaults, i.e. Cassini and the templates Visual Studio offers for new items. When a “Silverlight-enabled WCF service” is created, it uses the following settings:

xml_serviceconfig

Now there’s (already) a choice to make: use ASP.NET compatibility, or stay WCF only? (That question may be worth a separate post…) With ASP.NET compatibility, HttpContext.* is available within the service, including HttpContext.User. The WCF counterpart of the user is OperationContext.Current.ServiceSecurityContext.PrimaryIdentity. Take the following sample implementation to see which information is available during a respective call:

code_service

The client code to test that service is simple as can be:

code_page

The XAML is boilerplate enough, but for the sake of completeness:

xaml_page

I chose compatibility mode, and as the client shows, HttpContext.User is available out of the box:

app_default

Great, just what an ASP.NET developer is used to. But compatibility or not, it also shows that the WCF user is not available. But! WCF is configurable, and all we have to do is set the correct configuration. In this case we have to choose Ntlm as the authentication scheme:

xml_serviceconfig_ntlm
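(The configuration is an image; the relevant part is the authentication scheme on the transport – a sketch, assuming the customBinding/binaryMessageEncoding setup the template generates:)

<customBinding>
  <binding name="WinAuthBinding">
    <binaryMessageEncoding />
    <!-- demand Windows integrated authentication via NTLM -->
    <httpTransport authenticationScheme="Ntlm" />
  </binding>
</customBinding>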

And look what we’ve got:

app_with_ntlm

Great, no problem at all. Now we have the baseline and are ready to move on to IIS.

 

Now for IIS…

Moving to IIS on the developer machine is simple. Just go to the web project settings, tab “Web”, choose “Use Local IIS Web Server”, optionally creating the respective virtual directory from here. In order to work against IIS, Visual Studio needs to run with administrative permissions.

vs_projectconfig

 

Moving from Cassini to IIS however has a slew of additional pitfalls:

  • The service URL
  • The IIS configuration for authentication
  • WCF service activation issues
  • The WCF authentication scheme
  • localhost vs. machine name

Usually they show up as a team (which obviously doesn’t help), but let’s look at them one by one.

 

The service URL

There’s one difference in how Cassini and IIS are addressed by the web project: projects usually run in Cassini in the root (i.e. localhost:12345/default.aspx), while in IIS they run in a virtual directory (e.g. localhost/MyApplication/default.aspx). This may affect you whenever you are dealing with absolute and relative URLs. It will at least cause the generated service URLs to differ by more than just the port information. Of course you can recreate the service references at that point, but you don’t want to do that every time you switch between Cassini and IIS, do you?

BTW: There’s a similar issue if you are running against IIS using localhost and you create a service reference: this may write the machine name into the ServiceReferences.ClientConfig (depending on the proxy configuration), e.g. mymachine.networkname.com/application, rather than localhost. While these are semantically the same URLs, for Silverlight this qualifies as a cross domain call. Consequently it will look for a clientaccesspolicy.xml file, which is probably not there, and react with a respective security exception.

The solution with Silverlight 3 is to dynamically adjust the endpoint of the client proxy in your code, pointing it to the service within the web application the Silverlight application was started from:

code_adjusturl
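(The listing is an image; a sketch of the usual approach, assuming a generated proxy class MyServiceClient and a service at MyService.svc:)

var proxy = new MyServiceClient();
// rebase the endpoint onto the web application the .xap was loaded from
proxy.Endpoint.Address = new EndpointAddress(
    new Uri(Application.Current.Host.Source, "../MyService.svc"));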

Silverlight 4 supports relative URLs out of the box:

xml_clientconfig
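(Again an image in the original; with Silverlight 4 the ServiceReferences.ClientConfig can simply use a relative address – values are illustrative:)

<endpoint address="../MyService.svc"
          binding="customBinding"
          bindingConfiguration="WinAuthBinding"
          contract="ServiceProxy.IMyService" />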

Coming versions of the tooling will probably generate relative URLs in the first place; until then you’ll have to remember to adjust them every time you add or update a service reference.

 

The IIS configuration for authentication

This one may be obvious, but in combination with the others it may still bite you. Initially, starting the application will result in the notorious NotFound exception:

app_error

Note: To be able to handle server exceptions in Silverlight, you’ll have to overcome some IE plugin limitations inhibiting access to HTTP response 500. This can be achieved via a behavior, as described on MSDN. However, this addresses exceptions thrown by the service implementation and won’t help with the infrastructure related errors I’m talking about here.

The eventlog actually contains the necessary information:

el_require_wa

No Windows Authentication? Well, while Cassini automatically runs in the context of the current user, IIS needs to be told explicitly that Windows Authentication is required. This is simple: just enable Windows Authentication and disable Anonymous Authentication in the IIS configuration for the respective virtual directory.

 

WCF service activation issues

Running again apparently changes nothing at all; the same error is displayed, with just a seemingly minor difference in the eventlog entry:

el_require_aa

That’s right. The service demands to run anonymously, right after it just demanded to run authenticated. Call it schizophrenic.

To make a long story short, our service has two endpoints with conflicting demands: the regular service endpoint requires Windows Authentication, while the “mex” endpoint for service meta information requires Anonymous access. OK, we might re-enable anonymous access, but that wasn’t intended, so the way to work around this activation issue is to keep anonymous access disabled and remove the “mex” endpoint from the web.config:

xml_serviceconfig_nomex

Curiously generating the service reference still works in Visual Studio (perhaps with login dialogs, but still)…

 

The WCF authentication scheme

We’re still not there. The next issue when running the application might be a login credentials dialog when the service is called. And no matter what you type in, it still won’t work, again with the NotFound exception – unfortunately this time without an eventlog entry.

Again, to make it short: IIS doesn’t support Ntlm as the authentication scheme; we need to switch to Negotiate…

xml_serviceconfig_negotiate
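(Same configuration as shown before, with only the scheme changed:)

<httpTransport authenticationScheme="Negotiate" />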

And now it works:

app_with_negotiate

Do I have to say that this configuration doesn’t run in Cassini? Right, every time you switch between IIS and Cassini you have to remember to adjust the configuration. There is another enumeration value for the authentication scheme, named IntegratedWindowsAuthentication, which would be nice – if it worked. Unfortunately those two values, Ntlm and Negotiate, are the only ones that work, under Cassini and IIS respectively.

 

localhost vs. machine name

Now it works, and we get the user information as needed. For a complete picture, however, we need to look at the difference between addressing the local web server via localhost and via the machine name: calls against localhost are optimized by the operating system to bypass some of the network protocol stack and work directly against the kernel mode driver (HTTP.SYS). This affects caching as well as HTTP sniffers like Fiddler, both of which work only via the machine name.

Note: This may actually be the very reason to switch to IIS early during development, when you need Fiddler as a debugger (to check the actually exchanged information). Otherwise it’s later on, when you need it as a profiler (to measure network utilization). Of course you’ll want HTTP caching enabled and working by that time.

Of course you can put the machine name in the project settings, yet this would affect all team members. Perhaps a better idea is to have the page redirect dynamically:

code_redirect
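(The listing is an image; a sketch of such a redirect in the hosting page’s code-behind – hypothetical, and meant for development setups only:)

protected void Page_Load(object sender, EventArgs e)
{
    // leave localhost, so Fiddler and http caching see the traffic
    if (string.Equals(Request.Url.Host, "localhost", StringComparison.OrdinalIgnoreCase))
    {
        var builder = new UriBuilder(Request.Url) { Host = Environment.MachineName };
        Response.Redirect(builder.Uri.ToString());
    }
}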

Another word on caching: In IIS6, caching has to be set explicitly for the .xap file, using 1 day as the default cache time and allowing 1 minute at the minimum. During development this may be an issue. With IIS7, caching should automatically be set to CacheUntilChange, and you may also set the cache time with a resolution in seconds.

 

Where are we?

OK, that was quite a list of pitfalls and differences between Cassini and IIS, even between IIS6 and IIS7. Some of this may go away with the new IIS Express. Some will stay and remain a nuisance. Visual Studio initially guides you towards using Cassini; at some point, however, you’ll have to switch to IIS. And since you cannot have both at the same time, this may be an issue, especially in development teams. My recommendation would be: start with IIS right away, or plan the switch as a concerted action within your team.

That’s all for now folks,
AJ.NET


July 24, 2010

Replacing Events by Callbacks

Filed under: .NET, .NET Framework, C#, Silverlight, Software Development — ajdotnet @ 1:57 pm

My last post laid out how the employment of events has changed recently. Most importantly, the broadcasting scenario – which was the major pattern so far – is no longer the only relevant pattern. Rather, the “event-based asynchronous pattern” (see MSDN) has emerged. Reasons include the inherently asynchronous nature of Silverlight as well as parallel patterns.

Now for the practical implications of this new pattern. Let’s look at an example to get the idea, and a better understanding of the consequences in code…

Let’s assume a component that is instantiated, does some work (presumably asynchronously, notifying about the progress), and provides the result at the end via an event. This is akin to making a server call, showing a confirmation message box, or the way the BackgroundWorker component works.

Example 1: Using events

First, implementing the component the classical way would look somewhat like this:

The result event needs a respective EventArgs class, the declaration and the trigger method:

public class WorkResultEventArgs : EventArgs
{
    public object ResultData { get; set; }
}

public class MyWorkingComponent1
{
    public event EventHandler<WorkResultEventArgs> WorkResult;
    
    protected virtual void OnWorkResult(object resultData)
    {
        if (WorkResult != null)
            WorkResult(this, new WorkResultEventArgs() { ResultData = resultData });
    } 
    …
}

A work progress event should go a little further and provide cancelation support:

public class WorkProgressEventArgs : CancelEventArgs
{
    public int Progress { get; set; }
    public object SomeData { get; set; }
}

public class MyWorkingComponent1
{
    public event EventHandler<WorkProgressEventArgs> WorkProgress;
    
    protected virtual bool OnWorkProgress(int progress, object someData)
    {
        if (WorkProgress == null)
            return true;
        
        var ea = new WorkProgressEventArgs() { Progress = progress, SomeData = someData, Cancel = false };
        WorkProgress(this, ea);
        return !ea.Cancel;
    }
    …
}

Now we only need the actual worker method:

public class MyWorkingComponent1
{
    …
     
    public void StartWork()
    {
        int sum = 0;
        for (int i = 0; i < 10; ++i)
        {
            sum += i;
            if (!OnWorkProgress(i, sum))
                return;
        }
        OnWorkResult(sum);
    }
}

Again, we may assume that there is some asynchronicity involved, e.g. the loop could contain a web request or something. But this example should do for the sake of the argument.

The usage (as by support form Visual Studio to create the event handlers) would look like this:

public void Test1()
{
    var worker = new MyWorkingComponent1();
    worker.WorkProgress += new EventHandler<WorkProgressEventArgs>(Worker_WorkProgress);
    worker.WorkResult += new EventHandler<WorkResultEventArgs>(Worker_WorkResult);
    
    worker.StartWork();
}

void Worker_WorkProgress(object sender, WorkProgressEventArgs e)
{
    Console.WriteLine(e.Progress + ":" + e.SomeData);
}

void Worker_WorkResult(object sender, WorkResultEventArgs e)
{
    Console.WriteLine("Result:" + e.ResultData);
}

Creating the component, registering the event handlers, running the task, throwing the component away. The fact that events are multicast-capable is never used at all (and never will be, as the component is rather short-lived).

I guess we can agree that this is all very boilerplate. And all in all, that’s quite some overhead, from the component perspective as well as from the client code.

Example 2: Using callbacks

Now let’s try the new approach. Rather than defining events, I pass in two callbacks. The information that was carried in the EventArgs is moved to the parameter lists, so there is no need for those classes. The Cancel property is replaced by the return value of the callback. And since the client code always follows the same idiom, I expect the callbacks as constructor parameters, eliminating a source of errors along the way – something that is not possible with event handlers:

public class MyWorkingComponent2
{
    public Action<MyWorkingComponent2, object> WorkResult {get;set;} 
    public Func<MyWorkingComponent2, int, object, bool> WorkProgress { get; set; } 
         
    public MyWorkingComponent2( 
        Action<MyWorkingComponent2, object> workResult, 
        Func<MyWorkingComponent2, int, object, bool> workProgress) 
    { 
        WorkResult = workResult; 
        WorkProgress = workProgress; 
    } 
    …
}

The worker method changes only slightly:

public class MyWorkingComponent2
{
    …
         
    public void StartWork() 
    { 
        int sum = 0; 
        for (int i = 0; i < 10; ++i) 
        { 
            sum += i; 
            if (!WorkProgress(this, i, sum)) 
                return; 
        } 
        WorkResult(this, sum); 
    } 
}

That’s it. No EventArgs classes, no events, no respective OnEventHappened methods. Granted, the callback declarations are a little more complex, and their parameters also lack IntelliSense information about the semantics of each parameter. But otherwise? Way shorter, way more concise, way less overhead. The actual worker method hasn’t changed at all, but all the event related overhead is gone – which amounted to roughly 40% of the LOC.

Now the client code, first only slightly adapted:

public void Test1()
{
    var worker = new MyWorkingComponent2(
        (sender, resultData) => Worker_WorkResult(sender, resultData),
        (sender, progress, someData) => Worker_WorkProgress(sender, progress, someData)
    );
    
    worker.StartWork();
}

bool Worker_WorkProgress(object sender, int progress, object someData)
{
    Console.WriteLine(progress + ":" + someData);
    return true;
}

void Worker_WorkResult(object sender, object resultData)
{
    Console.WriteLine("Result:" + resultData);
}

As you can see, it didn’t change that much. But passing in the lambdas via the constructor fits the use case far better than events, and it is even more robust: I cannot forget to pass in a callback, the way I can forget to register an event handler.

Speaking of lambdas, and since the implementation is that simple, we can even simplify the client code further by omitting those two handler methods:

public void Test1()
{
    var worker = new MyWorkingComponent2(
        (sender, resultData) => { Console.WriteLine("Result:" + resultData); },
        (sender, progress, someData) => { Console.WriteLine(progress + ":" + someData); return true; }
    );
    
    worker.StartWork();
}

Alright, this would have been possible with events as well if you used anonymous methods. But Visual Studio guides you otherwise, and early examples of anonymous methods (before we had lambdas) were rather ugly, so I doubt that can be seen as a valid counterargument. Here, however, lambdas are the typical means of choice.
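Just for comparison, this is roughly what the event-based client code from example 1 looks like with lambdas registered directly as event handlers:

public void Test1()
{
    var worker = new MyWorkingComponent1();
    worker.WorkProgress += (sender, e) => Console.WriteLine(e.Progress + ":" + e.SomeData);
    worker.WorkResult += (sender, e) => Console.WriteLine("Result:" + e.ResultData);

    worker.StartWork();
}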

Verdict

Neat? Net result:

  • I’m writing less code on the event source side, including no longer declaring EventArgs classes.
  • I’m writing less code on the event sink side.
  • The handler methods can use clean parameter lists (rather than EventArgs).
  • I’m eliminating the risk of forgetting to register event handlers by making the callbacks explicit parameters.
  • I’m eliminating the danger of leaks due to failing to consistently deregister event handlers.
    • (That was not addressed in the example, but see the sketch after this list.)
  • When chaining together several of these steps I can make the logic – especially conditional processing – more explicit and concise.
    • Events would either require setting up beforehand (partly unnecessary overhead), or setup on demand, cluttering the handler with registration and deregistration code.
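Regarding the deregistration danger: with the event-based component, consistent cleanup would look something like this sketch:

void RunOnce(MyWorkingComponent1 worker)
{
    EventHandler<WorkProgressEventArgs> handler = Worker_WorkProgress;
    worker.WorkProgress += handler;
    try
    {
        worker.StartWork();
    }
    finally
    {
        // without this, a long-lived event source keeps the subscriber alive
        worker.WorkProgress -= handler;
    }
}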

All in all, this is way more readable, way more robust, and way more efficient than using events.

I for one have begun to adopt this scheme quite liberally. My Silverlight bookshelf application has wrappers for service calls that translate the event to callbacks (several actually, including error handling and other demands). My dialogs always take callbacks for OK and Cancel. I so far have two ICommand implementations, both take callbacks (one with parameters, the other without). I even have a PropertyObserver class that translates a PropertyChanged event into a callback.
Actual event handlers? Apart from the wrappers enabling what I just presented, only a few reacting to control events.

In other words: This is not just an interesting technical detail. It really changes the way I’m addressing certain demands.

That’s all for now folks,
AJ.NET


July 18, 2010

Employment of Events is changing…

Filed under: .NET, .NET Framework, C#, Silverlight, Software Development — ajdotnet @ 3:27 pm

The other day I had a little chat with a colleague. It was about one of his worker components and how it used events to communicate intermediate state and the final result. About event registration and deregistration, and the ungainly code resulting from it. When I suggested using callbacks instead of events, he quickly jumped on the bandwagon; a few days later I got a single note on Messenger: “Callbacks are WAY COOL!”.

That got me thinking. Why did I recommend callbacks? When and why did I abandon events? What’s wrong with events anyway?

Well, there’s nothing wrong with events at all. It’s just that Silverlight, asynchronous, and multithreaded processing have changed the picture (and I may have worked too much in those areas lately ;-) ). And this is the deal:

  1. Until recently I used to “see” (read: write code for/against) events mainly from the event consumer side. WinForms, WebForms; register handler, react to something. That kind of stuff.
    • Since “recently” I kind of had to do that less often. Why? Silverlight data binding solved many of the demands I previously used to address with event handlers. Making a control invisible for example. (Events still drive databinding, but at the same time databinding shields them away from me.)
  2. Also since “recently” I have had to implement the providing part quite a bit more often. Why? Silverlight databinding relies on certain interfaces that contain event declarations, namely INotifyPropertyChanged, INotifyCollectionChanged, and INotifyDataErrorInfo.
  3. And the “event-based asynchronous pattern”. Yep. We’ll get to that one.

OK, let’s try to classify these scenarios.

Broadcasts

The first two points are just two sides of the same coin: The radio broadcasting scenario.

  • Some component wants to advertise interactions or state changes; hence it broadcasts them by way of events.
  • Some client code needs to get notified about one or the other of these events; hence it subscribes to the event by way of a respective event handler, and consumes it from there on.

Same as radio, the broadcaster broadcasts and doesn’t care whether anyone listens. Same as radio, the receiver is turned on and listens as long as something comes in. Well, the analogy stops at the lifetime: event source and consumers tend to have similar lifetimes.
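To make the providing part concrete: this is a minimal broadcast-style event source, the kind of INotifyPropertyChanged implementation Silverlight databinding relies on (class and property names merely illustrative):

public class BookViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    string _title;
    public string Title
    {
        get { return _title; }
        set
        {
            if (_title == value)
                return;
            _title = value;
            // broadcast the change; whether anyone listens is not our concern
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Title"));
        }
    }
}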

Passing the Baton

The 3rd point is actually a quite different scenario: Start some work and have an event notify me about the result (and sometimes about intermediate state). Once I receive the result I let go of the participant and pass the baton on to the next piece of work.

Same as in a relay run, each participant does one job and once it’s done, he is out of business. Same as in a relay run, participation is obligatory – take someone out (or put something in his way) and the whole chain collapses.

Needless to say that this is nothing like the broadcasting scenario…

Usually the reason for the event approach (rather than simple return values) is asynchronous processing; and in fact this is not a particularly new pattern – BackgroundWorker works accordingly. On the other hand the pattern is still evolving, as the usual approach for asynchronous work has been no pattern at all (i.e. leave it to the developer, as Thread or ThreadPool do), or the IAsyncResult pattern (relying on a wait handle). New developments however employ events more often, and Microsoft has actually dubbed this the “event-based asynchronous pattern” (see MSDN).
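BackgroundWorker shows that pattern in its classic form; a quick sketch:

void RunWorker()
{
    var bw = new BackgroundWorker { WorkerReportsProgress = true };
    bw.DoWork += (sender, e) =>
    {
        // runs on a thread pool thread
        for (int i = 1; i <= 10; ++i)
            ((BackgroundWorker)sender).ReportProgress(i * 10);
        e.Result = 42;
    };
    bw.ProgressChanged += (sender, e) => Console.WriteLine(e.ProgressPercentage + "%");
    bw.RunWorkerCompleted += (sender, e) => Console.WriteLine("Result: " + e.Result);
    bw.RunWorkerAsync();
}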

One area which relies heavily on this pattern is server requests in Silverlight, via WebClient or generated proxies. But it doesn’t stop there, as Silverlight is asynchronous by nature, rather than by exception: Showing a modal dialog, navigating to another page, (down-)loading an assembly. And quite often these single incidences are chained together to form a bigger logical program flow, for example:

  • The user clicks the delete button –> the application shows the confirmation dialog –> a call to the delete operation is made –> IF it succeeds (the code navigates to a list page –> …) OTHERWISE (an error message box is shown –> …)

Every arrow represents an event-based “hop” to bridge some “asynchronicity gap” – essentially turning the logically sequential chain into a decoupled nightmare of temporary event registration and deregistration.

Coming back to the beginning of the post: this is the scenario I was discussing with my colleague. And doing this with a whole bunch of events and respective handler methods is simply awkward, especially if you even have to provide the event sources, usually with respective EventArgs classes. And the issue of having to consistently deregister the event handlers in order to avoid memory leaks becomes all the more prevalent.
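To illustrate just one such hop, assume a hypothetical ConfirmDialog exposing a Closed event; even the minimal version needs setup and teardown around the actual logic:

void DeleteBookWithEvents(Book book)
{
    var dialog = new ConfirmDialog("Delete book?");   // hypothetical dialog class
    EventHandler closed = null;
    closed = (sender, e) =>
    {
        dialog.Closed -= closed;                      // teardown, right in the handler
        if (dialog.Confirmed)
            BeginDeleteBook(book);
    };
    dialog.Closed += closed;                          // setup
    dialog.Show();
}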

Changing the Picture…

Inevitably I got annoyed with the setup/teardown orgies, and eventually I began to abandon events in this case and started passing simple callbacks along. Like this:

void DeleteBook(Book book)
{
    string msg = "Delete book #" + book.Inventory + ", '" + book.Title + "' ?";
    MessageBoxes.Instance.ShowConfirm(null, msg, 
        ok => { if (ok) BeginDeleteBook(book); }
    );
}

void BeginDeleteBook(Book book)
{
    this.DeleteBookCall.Invoke(book.BookID,
        ea => NavigateToBooks());
}

And actually I’m not the only one following this approach. The Task Parallel Library (TPL), for example, has already started to make heavy use of callbacks. So this is definitely not limited to Silverlight…
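A quick sketch with .NET 4 tasks – ContinueWith takes the continuation as a plain callback parameter, no events involved:

Task.Factory.StartNew(() =>
    {
        int sum = 0;
        for (int i = 0; i < 10; ++i)
            sum += i;
        return sum;
    })
    .ContinueWith(t => Console.WriteLine("Result: " + t.Result));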

Note: This also lays the groundwork for the next evolutionary step: coroutines.

Caliburn has a nice example of what this looks like; a little weird at first glance actually, but it collects all that logically sequential but technically asynchronous control flow in one method. Jeremy digs a little deeper into the topic in his post “Sequential Asynchronous Workflows in Silverlight using Coroutines”. 
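Just to hint at the idea (a simplified sketch, not Caliburn’s actual implementation): each step is a delegate that receives a “moveNext” callback, and a tiny driver advances the iterator whenever a step reports completion:

static void Run(IEnumerator<Action<Action>> steps)
{
    // advance to the next step; the step calls back when its async work is done
    if (steps.MoveNext())
        steps.Current(() => Run(steps));
}

IEnumerable<Action<Action>> DeleteBookWorkflow(Book book)
{
    yield return moveNext => MessageBoxes.Instance.ShowConfirm(null, "Delete?",
        ok => { if (ok) moveNext(); });   // the chain simply stops if not confirmed
    yield return moveNext => this.DeleteBookCall.Invoke(book.BookID, ea => moveNext());
    NavigateToBooks();
}

// kicked off with: Run(DeleteBookWorkflow(book).GetEnumerator());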

Anyway, even without going into coroutines, the callbacks-over-events approach has its merits in terms of coding efficiency. I’ll provide a motivating example in the next post.

That’s all for now folks,
AJ.NET


June 3, 2010

MCPD – Enterprise Application Developer 3.5

Filed under: .NET, .NET Framework, Software Developers, WCF — ajdotnet @ 4:43 pm

I finally managed to upgrade my MCPD/EA from .NET 2.0 to .NET 3.5. And since my last post on the topic is still requested relatively frequently, I thought an update might be due.

Again, I took the upgrade exams, but since they distinguish the topics pretty well (each topic is covered in a separate section during the exam and has to be completed before moving on to the next), the assessments should be valid for the single exams as well – even if the single exams may shift emphasis somewhat.

Logistics

The upgrade is available to those who have the MCPD/EA on .NET 2.0. (This implies that there is no direct upgrade from the older MCSD, sorry Daniel.)

There are two exams that cover the ground of 5 single exams:

  • Exam 70-568: UPGRADE: Transition your MCPD Enterprise Application Developer Skills to MCPD Enterprise Application Developer 3.5, Part 1
  • Exam 70-569: UPGRADE: Transition your MCPD Enterprise Application Developer Skills to MCPD Enterprise Application Developer 3.5, Part 2

You can learn more about these and other upgrades here.

Part 1 includes the following three parts and respective certifications:

  • Microsoft Certified Technology Specialist: .NET Framework 3.5, ADO.NET Applications
  • Microsoft Certified Technology Specialist: .NET Framework 3.5, ASP.NET Applications
  • Microsoft Certified Technology Specialist: .NET Framework 3.5, Windows Forms Applications

Part 2 adds another Technology Specialist part and the “Designing and Developing” part that concludes the MCPD:

  • Microsoft Certified Technology Specialist: .NET Framework 3.5, Windows Communication Foundation Applications
  • Designing and Developing Enterprise Applications Using the Microsoft .NET Framework 3.5

All of which concludes the “Microsoft Certified Professional Developer: Enterprise Application Developer 3.5”.

More information about single exams is available here.

The Exam Parts

While the last upgrade included a major remapping of the certification scheme, this one merely keeps up with technological changes. The single parts vary in the amount of change they have undergone and the degree to which they can be considered up-to-date.

  • The ASP.NET part was no big surprise, containing additional questions for ASP.NET AJAX. I’ll call that “reasonably up-to-date”.
     
  • The WinForms part hasn’t really changed its content at all. Hardly surprising, since WinForms itself hasn’t changed. What has changed with .NET 3.5, though, is the introduction of WPF, a whole new UI technology. WPF is completely missing from the 3.5 exams. Hence, seen as the Windows development part, this can hardly be called up-to-date.
     
  • The ADO.NET part. Well. I would call that one “unreasonably up-to-date”. It is still mainly focused on DataSets, in great detail. Now, I can understand that LINQ2SQL doesn’t qualify for an exam with ADO.NET Entity Framework around the corner. But still! DataSets had gone out of fashion even before LINQ2SQL, and I really had to scratch some layers of dust off that knowledge.
     
  • The WCF part is probably the most advanced. This used to be the “Building Services” exam, covering remoting and WSE, but those topics are completely gone. This exam is solely focused on WCF, and you have to have a solid understanding of hosting options, configuration, and security to pass it.
     
  • Designing and Developing hasn’t changed in style, but it incorporates current technologies. Unlike the respective other exams, it actually addresses WPF and ADO.NET Entity Framework as technology choices.

Generally speaking, each and every one of these exam parts can be solved by people with reasonable experience in the topic. That should cover ASP.NET, WinForms, and “Designing and Developing” quite nicely. Whether you put ADO.NET in that bag is up to you, but dusting off the DataSet knowledge shouldn’t be a showstopper anyway. The only part that is more demanding is the WCF part. Be aware that the questions cover WCF capabilities in general; having created one or the other WCF service or reference in Visual Studio and twisted the configuration somewhat is certainly not going to get you through this exam.

Looking Ahead

.NET 4.0 and Visual Studio 2010 were released a few weeks ago, and it usually takes Microsoft about a year to prepare the next increment of certifications. So the current 3.5 certification is going to stay valuable for quite some time. Still, a little crystal ball gazing might be fun:

In terms of technologies, .NET 4.0 does not only contain brand new stuff, but also incorporates some technologies that have been around for some time but were not formally part of the .NET Framework. Together with the gaps noted above, we have the following list of suspects:

  • In the ADO.NET area:
    • LINQ 2 SQL (leaving it out so far may have been lack of time rather than a deliberate choice after all)
    • ADO.NET Entity Framework
  • In the ASP.NET area:
    • ASP.NET MVC
    • ASP.NET Dynamic Data
    • perhaps jQuery
  • In the Windows Development area:
    • WPF
    • Parallel Extensions (PLINQ, TPL)
  • In the Designing and Developing area:
    • Visual Studio Architecture features
    • Testing features

I do hope that we will see a substitution of content in the ADO.NET area. Please Microsoft, no more DataSets! And it wouldn’t be a surprise at all if Microsoft dropped WinForms in favor of WPF, adding a little parallel stuff. ASP.NET on the other hand will certainly grow, as ASP.NET WebForms are not substituted but complemented by the new additions. The new Visual Studio features may result in just one or the other additional question. But then, they might not, as they are not universally available.

Anyway, this is reasonably manageable for those people trying to keep their certificates up-to-date. For people trying to get their first certification it certainly adds up; but then, they may start with Technology Specialist or the more specific MCPD certificates and build on that.

What parts didn’t I mention? Silverlight is obviously missing from the list. And don’t tell me it’s not relevant for enterprise development, for it certainly is! But I can still live with that, for it isn’t formally part of the .NET Framework and you have to draw the line somewhere. The same applies to Team Foundation Server.

On the other hand, how can you call yourself a certified “Enterprise Application Developer”, not knowing about TFS or Silverlight?

That’s all for now folks,
AJ.NET

May 15, 2010

Calling Amazon – Sample Code

Filed under: .NET, C#, Silverlight, Software Architecture — ajdotnet @ 4:59 pm

As an addition to my blog posts regarding Amazon, I have extracted the respective code into a separate sample:

The code can be downloaded here: AmazonSample.zip

After downloading the solution (VS2010!), you need to open the appSettings.config file in the web project and provide your personal Amazon credentials. Afterwards F5 should be all you need.

Calls via the server use a simple cache implementation that stores the XML returned from Amazon in a local directory. This way one can have a more detailed look at the information available from Amazon. This is intended for debugging purposes and to avoid flooding Amazon during development – it is not suitable for production code!
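The idea itself fits in a few lines; a simplified sketch (names made up, the actual implementation is in the download):

string GetXmlCached(string cacheDir, string key, Func<string> fetchFromAmazon)
{
    // serve the answer from the local cache directory if it is already there...
    string path = Path.Combine(cacheDir, key + ".xml");
    if (File.Exists(path))
        return File.ReadAllText(path);

    // ...otherwise call Amazon once and keep the XML for subsequent calls
    string xml = fetchFromAmazon();
    File.WriteAllText(path, xml);
    return xml;
}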

The related blog posts are available here:

That’s all for now folks,
AJ.NET
