AJ's blog

November 23, 2014

WCF Policy – Part 1: What is policy about?

Filed under: .NET, .NET Framework, SOA, Software Architecture, Software Development, WCF — ajdotnet @ 3:16 pm

Remember what I described in the introduction? Switch a feature on at the server, and WCF takes care that this information gets pushed through to the client; at runtime, client and server work together to fulfill the demand; and on top of that, the developer can intervene if necessary.

Add the tiny fact that the “feature” in question usually adheres to some standard, and you have a policy and its implementation – in our case a “WCF Policy”. Let’s look at this in a little more detail, as it defines our to-do list for the upcoming posts.

Note: This post is part of a series. The TOC can be found here.

Policy explained…

The world of services is based on various standards. These standards provide the set of features to choose from. Defining how a standard is employed by a particular service is actually defining the policy for said service.

Now, bringing this – so far abstract – concept of a policy to life in WCF involves several aspects:

  • Initially the developer needs some way to activate the feature on the service (or – rarely – the client), i.e. to switch it on and configure the details.
  • At design time the policy has to be advertised by the service. The most basic aspects like transport protocol and message format are already addressed in the WSDL and XSD. For everything that goes beyond that we have WS-Policy. On the server the information needs to be put into the WSDL, on the client it needs to be evaluated to generate a respective client proxy.
  • At runtime the policy has to be implemented by some runtime component; again, client and server.
  • Additionally, information implied by the policy should be accessible to the developer. Depending on the actual demand, this may be purely informational, he may be able to intervene, or he may actually be required to provide additional input. (Say, the username and password for basic authentication.)


OK, let’s make this a little more tangible and have a look at how it applies to employing WS-Addressing in WCF. This will also define the to-do-list for our own custom policy…

Initially the developer needs to declare that his service uses WS-Addressing. WCF covers WS-Addressing through different bindings. WsHttpBinding does it implicitly; a custom binding configures the SOAP and WS-Addressing versions as part of the message encoding:
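As a sketch, such a custom binding might look like this in configuration (the binding name is illustrative):

```xml
<bindings>
  <customBinding>
    <binding name="addressingBinding">
      <!-- Soap12WSAddressing10 selects SOAP 1.2 with WS-Addressing 1.0 -->
      <textMessageEncoding messageVersion="Soap12WSAddressing10" />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
```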

The binding also takes care of advertising the policy in the WSDL. Since WSDL itself has no knowledge of WS-Addressing, WS-Policy comes into play. WS-Policy is by itself a rather dull standard. It merely defines how to place additional information (particularly for the use of policies) in the WSDL. In our case it might look like this:
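For illustration, the policy section in the WSDL could look roughly like this (a sketch; the `wsu:Id` value depends on the actual binding and contract):

```xml
<wsp:Policy wsu:Id="WsHttpBinding_IService_policy">
  <wsp:ExactlyOne>
    <wsp:All>
      <!-- the WS-Addressing policy assertion -->
      <wsaw:UsingAddressing />
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
```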

The element wsaw:UsingAddressing is called a custom policy assertion, provided by WS-Addressing Metadata; in general, an assertion is an arbitrary XML fragment. WS-Policy also provides means to define combinations of and alternatives between policy assertions.

WCF also addresses the client side by generating a client proxy from a respective WSDL. This process includes handling policy assertions and generating the respective configuration (e.g. by choosing the appropriate binding) or code.

At runtime the actual behavior kicks in: The WCF client automatically generates a message ID for the SOAP request and includes it in the message as a SOAP header:
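For illustration, such a request header might look roughly like this (the message ID and action are placeholders):

```xml
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:a="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <!-- generated by the WCF client for each request -->
    <a:MessageID>urn:uuid:placeholder-message-id</a:MessageID>
    <a:Action s:mustUnderstand="1">http://tempuri.org/IService/SomeOperation</a:Action>
  </s:Header>
  <s:Body>
    <!-- request payload -->
  </s:Body>
</s:Envelope>
```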

The server takes the message ID and includes it in the reply message using the RelatesTo SOAP header.

Additionally the developer can access the message ID on the server and manipulate it, as he likes.
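As a sketch (inside a service operation, where OperationContext is available), reading the incoming message ID and relating the reply to it explicitly could look like this – normally WCF does the latter on its own:

```csharp
using System.ServiceModel;
using System.Xml;

// Read the MessageID header of the incoming request...
UniqueId messageId = OperationContext.Current.IncomingMessageHeaders.MessageId;

// ...and set the RelatesTo header on the reply accordingly.
OperationContext.Current.OutgoingMessageHeaders.RelatesTo = messageId;
```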


To summarize this, here’s what makes up a policy in WCF:

During design:

  • Definition of the underlying standard

During development:

  • A way to switch the policy on and to configure as necessary
  • Advertising the policy in the WSDL
  • Evaluating the policy assertion during client proxy generation

During runtime:

  • Establishing the default behavior on server and client
  • Providing an interface to the developer

And this defines what we’re going to have to do – in the following posts…


That’s all for now folks,


Building Your Own Policy with WCF

Filed under: .NET, .NET Framework, C#, SOA, Software Architecture, Software Development, WCF — ajdotnet @ 2:55 pm

The flexibility and extensibility of WCF are really great. You need a feature provided by the platform? Just switch it on, via code or configuration, and all the bits and pieces follow suit. The fact is advertised in the WSDL; the client proxy generation will pick that up and produce respective code and configuration; both client and server do their part to make it work at runtime; and last but not least, the developer usually gets the means to access or manipulate the respective information, should the need arise.

Wouldn’t it be nice to have the same apply to your custom demands, demands not covered by the platform? Demands like…

  • New versions of standards you need to support, but which are not supported by the platform yet.
  • Homegrown authentication schemes.
  • Tenants for multi-tenant solutions.
  • Billing information for services in B2C scenarios.
  • SLA and diagnostics information for operations.

Whatever the reason, it would be very nice if we could implement a custom policy in such a way that the application developer can use it much like the policies provided by the platform.

The good news is: It is actually possible to support custom demands in a way very similar to the built-in features of WCF.
The not-so-good news: It is not that simple or straightforward to do, as there are a lot of bits and pieces working together – and no single source of information putting them together. Well, that is what I’m trying to accomplish…

So, here’s the agenda for this little adventure:

Design time aspects:

Runtime aspects:

Looking back:

Quite a bit more than one post…

So stay tuned…

That’s all for now folks,

November 3, 2013

MVC is a UI pattern…

Filed under: ASP.NET, ASP.NET MVC, Software Architecture — ajdotnet @ 3:01 pm

Recently we had some discussions about how the MVC pattern of an ASP.NET MVC application fits into the application architecture.

Clarification: This post discusses the MVC pattern as utilized in ASP.NET MVC. Transferring these statements to other web frameworks (e.g. in the Java world) might work. But then it might not. However, transferring them to the client side architecture of JavaScript based applications (using knockout.js etc.), or even to rich client applications, is certainly ill-advised!


One trigger was Conrad’s post "MVC is dead, it’s time to MOVE on.", where he claims

“the problem with MVC as given is that you end up stuffing too much code into your controllers, because you don’t know where else to put it.”

http://cirw.in/blog/time-to-move-on.html (emphasis by me)

The other trigger was "Best Practices for ASP.NET MVC", actually from a Microsoft employee:

"Model Recommendations
The model is where the domain-specific objects are defined. These definitions should include business logic (how objects behave and relate), validation logic (what is a valid value for a given object), data logic (how data objects are persisted) and session logic (tracking user state for the application)."

"DO put all business logic in the model.
If you put all business logic in the model, you shield the view and controller from making business decisions concerning data."


Similar advice can be found abundantly.


I have a hard time accepting these statements and recommendations, because I think they are plainly wrong. (I mean no offense, really, it’s just an opinion.)

These statements seem to be driven by the idea that the MVC pattern drives the whole application architecture. Which is not the case!
The MVC pattern – especially in web applications, certainly in ASP.NET MVC – is a UI pattern, i.e. it belongs to the presentation layer.

Note: I’m talking about the classical, boilerplate 3-layer architecture for web applications.

In terms of a classical layer architecture:

This is how it works:

  • The model encapsulates the data for the presentation layer. This may include validation information (e.g. attributes) and logic, as far as it relates to validation in the UI. Also, I generally prefer value objects (as opposed to business objects, see also here).
  • The controller receives the incoming request, delegates control to the respective business services, determines the next logical step, collects the necessary data (again from the business services), and hands it off to the view. In this process it also establishes the page flow.
    In other words: The controller orchestrates the control flow, but delegates the single steps.
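To make this division of labor tangible, a “thin” controller along these lines might look as follows (a sketch; IOrderService and OrderViewModel are illustrative names, not from any concrete project):

```csharp
public class OrderController : Controller
{
    private readonly IOrderService orderService; // facade to the business layer

    public OrderController(IOrderService orderService)
    {
        this.orderService = orderService;
    }

    public ActionResult Details(int id)
    {
        // delegate the actual work to the business layer...
        Order order = orderService.GetOrder(id);
        // ...and merely hand the data off to the view
        return View(new OrderViewModel(order));
    }
}
```

No business validation, no data access – the controller only orchestrates and delegates.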


Coming back to the original citations…

  • If "you end up stuffing too much code into your controllers", then the problem is not MVC, not the fact that controllers by design (or inherent deficiencies of the pattern) accumulate too much code. It’s far more likely that the controller does things it’s not supposed to do. E.g. do business validations or talk directly to the database. (Or, frankly, “you don’t know where else to put it” is the operative phrase.)
  • If your model "is where the domain-specific objects are defined", and you "put all business logic in the model", in order to "shield the view and controller from making business decisions concerning data", then these statements overlook the fact that domain-specific objects and business logic are artifacts of the business layer.

Fortunately you can find this advice elsewhere (I’m not alone after all!):

"In the MVC world the controller is simply a means of getting a model from the view to your business layer and vice-versa. The aim is to have as little code here as is possible."

It’s just harder to come by.

That’s all for now folks,

August 14, 2011

RIA with Silverlight–The Business Perspective

If you read this, chances are that you are a developer and that you like Silverlight. And why not? Exciting platform, great features, outstanding tooling. But! If you’re a corporate developer, have you sold it to your management yet? If not, this post is for you.

Silverlight is for RIA, and the domain of RIA applications is largely intranet or closed/controlled extranet user groups. This again is what is usually found in larger enterprise companies. Companies that usually have a vested interest in controlling their environment. And in terms of bringing software into production and of operations and maintenance afterwards, every new platform is one platform too many.

So, the odd developer comes along and talks about this great new technology. Does the management care? Probably not. What does it care about? Simple. Money! Money, as in costs for deployment and user support, hardware and licenses to get the stuff up and running, operations and developer training, maintenance. And money as in savings in the respective areas and – the cornerstone, as the business usually pays the bill – impact on the business. All usually subsumed under the term ROI.

About a year ago, I finished an analysis looking into RIA with Silverlight, conducted for a major customer. Not from the point of view of the developer, but that of business people, operations, and IT management:

So, let’s look briefly at each aspect…

User/Business perspective…

The business doesn’t exactly care for the platform Silverlight itself; it cares for its business benefits. Benefits as in improved user experience, streamlined business workflows, office integration, and so on. And since we had some lighthouse projects with Silverlight, we were able to collect some customers’ voices:

“This [streamlining with Silverlight] would reduce a […] business process […] from ~10 min to less than a minute.”

“Advanced user experience of Silverlight UI helps raising acceptance of new CRM system in business units”

“I was very impressed of the prototype implementation […] with Silverlight 3. Having analyzed the benefits of this technology I came to the conclusion that I want the […] development team to start using Silverlight as soon as possible. […]”

This is also confirmed by the typical research companies, like Gartner or Forrester:

“Firms that measure the business impact of their RIAs say that rich applications meet or exceed their goals” (Forrester)

Operations perspective…

In production, the benefit of Silverlight applications (compared with respective conventional web based applications) is reduced server and network utilization.

For example, we had a (small but non-trivial) reference application at our disposal, which was implemented in ASP.NET as well as Silverlight (as part of an analysis to check the feasibility of Silverlight for LOB applications). We measured a particular use case with both implementations – starting the application and going through 10 steps, including navigation, searches, and selections. Both applications were used after a warm-up phase, meaning that the .xap file, as well as images and other static files, had already been cached.

The particular numbers don’t matter; what matters is the difference between the amounts of data exchanged for each step (in case of navigations, none at all for Silverlight). For the single steps:

And accumulated over time:

A ratio of roughly one tenth of the network utilization is quite an achievement – and considering that the Silverlight application wasn’t even optimized to use local session state and caching, the difference should be even bigger.

This should have a direct impact on the number of machines you need in your web farm. Add the fact that session state management on the client drastically reduces the demand for ASP.NET session state – usually realized with a SQL Server (cluster) – and there is yet another entry on the savings list.

On the down side is the deployment of the Silverlight plugin. For managed clients – especially if outsourcing the infrastructure comes into play – this may very well become a showstopper.

IT Management perspective…

With respect to development and maintenance, what IT Management should care about includes things like ability to deliver the business demands, development productivity, bug rates in production, costs for developer training, and so on.

Actually all areas in which Silverlight can shine, compared with other RIA technologies, and with the typical mix of web technologies as well:

  • Rich, consistent, homogenous platform
    • .NET Framework (client and server), Visual Studio, Debugger, C#
    • Reduced technology mix, less technology gaps, less broad skill demands
  • Improved code correctness and quality…
    • compiler checks, unit testing, code coverage, debugging, static code analysis, in-source-documentation, …
  • Improved architecture and code
    • Clean concepts, coding patterns, clear separation of client code, lead to better architectures
    • Powerful abstractions lead to less code (up to 50% in one project), less complexity, fewer errors

Customers’ voices in this area:

“between our desktop app and the website, we estimate 50% re-use of code”

“a .NET developer can pretty much be dropped into a SL project. […] This is a huge deal […]”

“As alternative for Silverlight we considered Flash. […] only Silverlight could provide a consistent development platform (.NET/C#). […]”



Taking all this together, and considering that enterprise companies usually have the tooling and test environments (well…) readily available, this all adds up to something like the following bill:

RIA Return on Investment

Whether the bill looks the same for your company or for one particular project, of course, depends on many things. Especially nowadays with all the hubbub around HTML5 and mobile applications (without any relevant Silverlight support). But if RIA is what you need, then Silverlight will quite often yield far more benefits than any other option.

Still, you need to do your own evaluation. However, I hope to have given you some hints on what you might focus on, if you want to sell technology to the people who make platform decisions in your company.

The actual analysis was fairly detailed and customer specific. But we also prepared a neutralized/anonymized version, which we just made available for download (pdf). (Also directly at SDX.)

That’s all for now folks,


March 12, 2011

HTML5 – Part III: The Limits

Filed under: HTML5, Silverlight, Software Architecture — ajdotnet @ 5:26 pm

Alternative title: HTML5 – Still HTML, not a RIA platform

The last post was about the chances and benefits of HTML5. But there are also some exaggerated expectations that HTML5 cannot fulfill. Mainly this concerns the term “RIA”, and the effect HTML5 will have in this area.

Or as someone wrote:

“Will HTML 5 one day make Flash, Silverlight and other plug-in technologies obsolete?” (link)

Actually this is a question I have to answer quite regularly. (Often enough it doesn’t come as a question, but as a statement. Or an accusation. Sic!)

I already detailed how HTML5’s video and canvas will take over tasks that have formerly been solved by other technologies, namely Flash. And some people seem to infer that if Flash is a RIA technology, and HTML5 obsoletes Flash, then HTML5 must obviously make RIA technologies obsolete in general.

Quite wrong. Actually, the fact that Flash is used for video and 2D graphics has nothing to do with Flash being a RIA technology. Flash simply happened to be able to deliver these features, both in terms of technical capability and broad availability. That it also happens to be a RIA technology is more or less happenstance.

But before moving on we need to clarify the terms “RIA” and “RIA technology”…

My personal definition of RIA technologies relates to the following attributes: Stateful programming model, with some kind of page model, for applications running in a browser sandbox. This includes Flash, JavaFx, Silverlight as a browser plugin (but not in its WP7-platform variation).

Wikipedia applies the term to Flash, Java (not JavaFx! Sic!), and Silverlight. Still, this is debatable, and a year ago even Wikipedia had a far broader definition, but in my experience this actually covers the common understanding. Curiously, Adobe claims that AIR is their RIA technology, not Flash. But me, Wikipedia, and general consensus agree that Flash indeed is a valid RIA technology.

By the way: Leaving HTML+AJAX out of this picture is by no means meant to be deprecatory, it just reflects common understanding. Wikipedia actually makes the distinction based on the (lack of) necessity to install an additional software framework.

And a final tidbit: Once upon a time Microsoft advertised Silverlight as a Flash replacement, addressing video and graphics, just like in the typical Flash use cases. However, even with growing adoption of the Silverlight plugin, Silverlight never became a serious competitor for Flash in that area. (This may actually have played a role in Microsoft’s commitment to HTML5…) Still, Silverlight has long outgrown this narrow role, and later versions have put more emphasis on business features.

So, let’s have a look at where HTML5 reaches its limits and where RIA technologies might kick in. I’ll look at regular web applications before moving on to RIA applications.

Web Applications

HTML5 is going to address standard demands of web applications, including those addressed today by Flash. This will have a crowding-out effect on Flash and RIA technologies in general. But once the demands go beyond being “standard”, RIA technologies will find their niche even in web applications.

One example could be premium video delivery: Some vendors will probably be eager to offer unique selling propositions in the emerging markets of WebTV and high-quality HD video content (probably involving DRM).

Since Flash can no longer play the trump of being the only or the broadest available platform in this area, this will also change the picture among the RIA technologies. Especially Silverlight has been very successful in this area recently. Take the Olympic Games or maxdome.

Other examples include complex user interactions that are not feasible with canvas and script, e.g. Mazda’s car configurator, and similarly dynamic data visualizations.

Finally there is the support of additional features. RIA technology vendors certainly have shorter innovation cycles than standards bodies. This especially includes hardware support (camera, game controller, hardware accelerated animations and 3D).

These scenarios all require the user to accept the plugin – an issue that might become more severe once plugins are less ubiquitous. Thus, for the web site provider this always raises the question of whether he can compel his users to use his offering despite that nuisance, or whether he may have to provide a (perhaps simplified and less capable) HTML-based version.

RIA Applications

HTML5 won’t turn HTML into a RIA technology. It doesn’t come with a new programming model, doesn’t change server-side processing, the page model, and postbacks, and doesn’t change the fact that the HTML ecosystem really is a conglomerate of diverse technologies.

Many applications – be it multimedia, some kind of hardware dependency, or line of business – simply require the potential of rich client applications. For these, HTML simply cannot deliver what is necessary. Typical demands include:

  • Data centric applications: Large amounts of data; data input with complex validations, lists with extended feature sets, …
  • Usability: Immediate feedback (not possible with postbacks), dynamic UIs with forms built dynamically or changing depending on user input, …
  • Business features: Printing, graphical presentation, Office integration, …
  • Offline capability…
  • Connectivity: Communication patterns such as peer-to-peer, server pushes, …
  • Multimedia and hardware support: Animations, camera, microphone, multitouch, …
  • Rich platform: Stateful programming model, component model, feature rich controls, rich databinding capabilities, …

These are certainly not the demands of typical web applications. But intranet applications, and applications addressing some distinct or closed user group on the web, fall very well within this category. A prominent example is SAP; one can also think of WebTV portals, home banking, or others.

In the past, Java applets were often used to cover these demands. Recently AJAX approaches have spread, and while this works to some degree, it often falls short of meeting the demands completely. From a technical perspective, RIA technologies are the adequate choice in these scenarios. And (in my opinion) Microsoft Silverlight is currently the best technology available in that area. Adobe AIR lacks availability and adoption, Flash alone is not sufficient, and JavaFx seems to be dying a slow death.


HTML5 will push RIA technologies out of their makeshift role (video and canvas). However, this doesn’t affect the feasibility of employing RIA technologies on their own turf, i.e. beyond-HTML-capability demands in web applications and fully fledged RIA applications.

However, since this “pushing out of RIA technologies” mainly affects Flash, HTML5 has an interesting effect on the RIA market: Broad availability is no longer a strong USP for Flash, which is to the benefit of Silverlight. Add the hazy prospect of JavaFx and the fact that Silverlight is not only a RIA platform, but also enters devices (WP7, WebTV), and HTML5 may actually further the adoption of Silverlight – not as cross-platform tool, as it was once intended, but in all areas not covered by HTML5.

The one argument in favor of HTML5 – which no RIA technology is likely to ever achieve – is its universal availability across all platforms, even if that comes at a cost.

Where are we?

The conclusion of this little series may be as follows: The conflict, or enmity, between HTML5 and RIA that some people see (or exaggerate) doesn’t really exist. There may be a competition between HTML5 and Flash, but even that may turn out differently from what people expect.

Actually, HTML5 and RIA complement each other. There are areas in which one technology certainly makes more sense than the other, other areas in which there is a choice, and again other areas in which a combination of both may work best; there are even areas in which neither is an ideal choice. E.g.…

  • A web application addressing the broadest available audience? HTML5.
  • An LOB application with high usability demands? Silverlight.
  • A mobile application addressing a broad audience? HTML5. As long as no tighter device integration is necessary – in which case one has to address several mobile OSes…

And between these black-and-white examples there’s a lot of gray area, to be decided on a case-by-case basis. And usually the important thing is not exactly which technology you favor. The important thing is to make an informed decision, aware of the pros and cons, and not based solely on the political opinions of certain apologists.

That’s all for now folks,


March 10, 2011

HTML5 – Part II: New Standard for Web Applications

Filed under: HTML5, Software Architecture — ajdotnet @ 6:35 pm

As I laid out in the first post, a lot of people talk about HTML5, but there are also a lot of misconceptions – and misguided expectations – about HTML5. So, what is HTML5, anyway? And for whom?

HTML5 is a lot of things for a lot of people. For some it is a vehicle to rid the web of patent laden video formats. For Apple it is a means to keep Flash off iPhone and iPad. Microsoft uses it to push IE9. Some people are using it to – at least try to – bury RIA. Anything else…? Oh yes, some people actually look into HTML5 technologies.

In an attempt to join the last group, here is my opinion on what HTML5 is and what it will mean.

The Standard

Formally, HTML5 is just the next version of HTML. (Period!) In terms of scope, however, it covers some more ground than “HTML” did before. Included is not only the markup language we know as HTML 4.01, but also the formerly distinct standards XHTML and DOM.

Given that HTML 4.0 became a recommendation in 1997, one can only say (shout!) “It’s about time!”. The fact that HTML 4.x has not been “state of the art” for quite some time now is exactly the reason why browser vendors chose to implement proprietary extensions and why technologies like Flash – even ActiveX – had to fill in the gaps.

HTML5 is available as a working draft (the latest version as of today is from January 2011) and all major browser vendors are more or less committed to implementing it:

Regarding general support for HTML5, the industry for once agrees on something. Surprising enough. Regarding the details, however, one has to be careful about which part of HTML5 is supported by which browser to what degree. It will likely take some time – and more than one version number – until we get broad HTML5 support on all browsers.

For some time the critics gloated over the fact that HTML5 had barely reached “working draft” status and that “recommendation” status was not expected before 2022. Even if that were the case, it would be largely irrelevant, as the industry is implementing – and thereby standardizing – parts of HTML5 right now. Additionally, the W3C working group is just now reconsidering its approach and looking for a more timely “delivery”.

So much for the term, now for the content…

The Details

In detail, HTML5 is a conglomerate of different – and not necessarily related – improvements and new features. Of course this includes the two features mentioned most often: video and 2D graphics (a.k.a. “canvas”).

Video is nothing spectacular: It simply displays a video, albeit without depending on some 3rd-party plugin (namely Flash). Benefit for web developers: a standardized API based on HTML and JavaScript (rather than having to learn another technology). Benefit for the user: he can watch the video, independent of device and plugin availability.
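For illustration, embedding a video then boils down to a single tag (the file name is a placeholder):

```html
<video src="intro.mp4" width="640" height="360" controls>
  Fallback text for browsers without video support.
</video>
```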

Canvas allows 2D graphics, even animated ones (with JavaScript). This again allows (in principle) addressing use cases formerly the domain of Flash: diagrams for usage statistics, stock exchange rates, and so on. Contrary to simple (server-rendered) images, this can include user interaction. It may even include more complex things, like web games; the technology is up to it, as this classic proves.
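A minimal sketch of script-driven drawing on a canvas:

```html
<canvas id="chart" width="300" height="150"></canvas>
<script>
  var ctx = document.getElementById("chart").getContext("2d");
  ctx.fillStyle = "steelblue";
  ctx.fillRect(10, 40, 60, 100); // a single bar of a hypothetical diagram
</script>
```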

Regarding multimedia, one probably has to mention audio support for a complete picture.

Video and canvas are mentioned quite often, probably because they constitute the use cases most often addressed with Flash today. Still, it would be unfair to reduce HTML5 to these two.

When it comes to plain old markup, HTML5 offers some improvements regarding JavaScript control, as well as semantic tags like “header” and “footer”.
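For illustration, a page skeleton using the new semantic tags might look like this:

```html
<header><h1>Page title</h1></header>
<nav><a href="/archive">Archive</a></nav>
<article>
  <p>Content goes here.</p>
  <footer>Filed under: HTML5</footer>
</article>
```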

Regarding user interaction, HTML5 offers new types of input fields (e.g. date picker), complete with validations (e.g. a textbox for email addresses). Add to that APIs for drag-and-drop, browser history management, and some others.
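A sketch of the new input types, with validation handled by the browser rather than by custom script:

```html
<form>
  <input type="email" required>          <!-- validated as an email address -->
  <input type="date">                    <!-- may render as a date picker -->
  <input type="number" min="1" max="10"> <!-- range-checked numeric input -->
  <button>Send</button>
</form>
```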

You may find this in more detail at the W3C or on Wikipedia. A nicely illustrated introduction can be found here (German, but the pictures shouldn’t need translation ;-)).

Yet this still doesn’t conclude the list. For a complete picture one has to name topics closely related to (even if not formally part of) HTML5: CSS3, Web Storage, and (especially for mobile applications) Geolocation.

It should be noted that the recent improvements in JavaScript execution in all major browsers (well… – IE starting with IE9) also contribute to HTML5: Many of the features unfold their full potential only with scripting. It’s a fair assumption that HTML5 will cause a further growth in scripting in general, and probably the adoption of respective AJAX frameworks.

The Impact

At a closer look, HTML5 is a mixture of rather different and unrelated things – none of them especially outstanding. Basically, HTML is just being upgraded to accommodate the current state of the art. Altogether a (long overdue and therefore somewhat extensive) evolution, but certainly no revolution or “groundbreaking upgrade”, as some like to think.

Therefore the relevance of HTML5 does not stem from the functionality itself. It stems from two facts:

  1. The broad support from all major browser vendors. No matter why they do it, the fact that they agree on HTML5 in such an unequivocal and joint fashion is without precedent. This will likely ensure that all browsers level out on comparable features and capabilities – which is important to overcome today’s disparities and incompatibilities. HTML5 shows every promise of becoming a platform for the web that is state of the art as well as broadly available. Something HTML has increasingly failed at in recent years.
  2. The timely appearance together with the emerging mobile ecosystem (smartphones, pads). In this area we have a far more diverse and inhomogeneous platform landscape than on desktops (iOS, Android, WP7, other mobile OSes, desktop OS adaptations). No platform has a dominance similar to Windows, thus vendors need to address more than one platform. And web applications built for mobile devices are the only feasible cross-platform approach available. Even if HTML5 lacks full device integration (e.g. phone integration or multitouch), it goes a long way with web storage, geolocation, and rudimentary offline capabilities.

To conclude: HTML5 is not going to be some academic standard; it will be a true and broadly supported industry standard. Together with browser improvements, especially regarding script engines, HTML will become an adequate development platform for state-of-the-art web applications and the emerging mobile area.

For web developers HTML5 is a good thing. At least as long as the agreement among the browser vendors holds – and as long as we don’t have to wait another 10 years for the next version.

That’s all for now folks,


March 7, 2011

HTML5 – Part I: The Hype

Filed under: HTML5, Silverlight, Software Architecture — ajdotnet @ 5:17 pm

META: This blog got a little sleepy recently. But actually I’ve been spending quite some time writing blog posts, albeit for our company blog. And I’m planning to reuse some of that content here (this post is actually the first one). Also I’ve been busy in several new areas, including WP7, and I may have something to say about those, too. So, this blog is still alive and kicking.

There has been a small hype around HTML5 for some time now. Ironically, the reasons were more political than technical, since browsers are only just beginning to support HTML5. Be it organizational hassles, discussions about a video format, or a certain company having issues with Flash on their iPlatform. And since Microsoft has announced its decision to make HTML5 their cross-platform strategy, the last big browser vendor has joined the camp.

And this is a good thing! Not only does HTML5 homogenize the web platform again, it is also the only feasible platform for cross-platform development in the mobile world, which is much more diverse than our desktop ecosystem.

On the other hand, I am regularly irritated by what people think HTML5 will be able to accomplish. Especially in relation to RIA technologies, which are all but dead, according to various sources:

“HTML5, a groundbreaking upgrade to the prominent Web presentation specification, could become a game-changer in Web application development, one that might even make obsolete such plug-in-based rich Internet application (RIA) technologies as Adobe Flash, Microsoft Silverlight, and Sun JavaFX.” (link)

“Will HTML 5 one day make Flash, Silverlight and other plug-in technologies obsolete? Most likely, but unfortunately that day is still quite a way off.” (link)

“Sure maybe today, we have to rely on these proprietary browser plugins to deliver content to users, but the real innovative developers and companies are going to standard on HTML 5 and in turn revolutionize how users interact with data.  We all want faster web applications and the only way to deliver this is to use HTML 5.” (link)

To put it bluntly: HTML is no RIA technology and HTML5 is not going to change that. Thus Silverlight is a valid choice for any RIA application today, and it will be one tomorrow.

On the other hand, HTML5 is certainly going to deliver features that today are the domain of RIA technologies, namely Flash. And this will affect RIA technologies in some way.

Then how will HTML5 affect RIA? Well, I’m afraid it’s not that simple, and there is no short and sufficient answer that I’m aware of. In order to decide – probably on a case-by-case basis – what HTML5 can do, in which use cases HTML5 is the right answer, and in which cases RIA technologies are still the better choice, we need to take a closer look at some details…

To keep it a little more concise, I’m going to break with my usual habit of very long posts and split the remainder into two additional not-quite-so-long posts.

Stay tuned,

June 12, 2010

The Future (of) UI

Filed under: Software Architecture, Software Development — ajdotnet @ 3:54 pm

The way we think about user interaction – actually the user interfaces themselves – is changing. The iPhone seems to be the protagonist teaching us new ways to interact with phones, and the iPad even coins a new form factor driving this trend further. Touch and multi touch are becoming mainstream because vendors have begun to create operating systems, UI metaphors, and backing services around these interaction principles – rather than slightly adjusting OSes/UIs built for conventional PCs with keyboard and mouse.

This is actually a defining feature of the next evolutionary step of UI, namely Natural User Interfaces (NUI). As Wikipedia states…

A NUI relies on a user being able to carry out relatively natural motions, movements or gestures that they quickly discover control the computer application or manipulate the on-screen content. The most descriptive identifier of a NUI is the lack of a physical keyboard and/or mouse. (wikipedia)

While Apple seems to take the lead in public perception, Microsoft has a rather mixed lineup: With smart phones, Windows Phone 7 seems a bit like “taking the last chance”, even if the move to Silverlight as a platform is a bold one and (IMO) a good one. On the other hand, they just managed to drop the very promising – by itself as well as positioned against the iPad – Courier project. As a colleague stated in our internal company blog: “I’m frustrated. Period.” And lastly, Microsoft has Surface, which has no competition I’m aware of at all (unless you want to build one yourself).

Surface is not only commercially available, it also adds the capability to detect objects placed on the table and thus goes beyond plain multi touch. And it is subject to further research, as this excerpt from PDC09 shows: (better quality here, at 83:00)


Looking Ahead

Well, this is kind of what we have today. If you would like to see where this might be heading, have a look at the Microsoft Gives Glimpse Into the Future talk Stephen Elop held in early ‘09. It’s a 36-minute video, but you may jump to 14:00 and watch the presentation of "Glimpse in the future". What’s presented there is impressive: Live translations enabling people to talk with each other in different languages. Surface-like tables interacting directly with iPad-like multi touch tablets placed on them. Minority Report-like control. Augmented reality. … It’s even more impressive since everything is backed afterwards by actually existing (if early-stage) technology. There’s a shortened and also an extended version available on YouTube:


Speaking of Minority Report: another great video comes from John Underkoffler; John was the science adviser for that movie, and he does the whole presentation with exactly that technology!

This talk is certainly worth watching, as he makes some very interesting observations (in fact, watching this video triggered this post; thanks Daniel). His final prediction is … ambitious: “I think in 5 years time, when you buy a computer, you’ll get this.”

Is that cool or what?


Second Thoughts 

Well, as they say:

“Prediction is very difficult, especially about the future.” (various)

There’s one thing I don’t like about those predictions. They are (deliberately?) incomplete. They certainly shine in new fields of applications for computers, new degrees of collaboration, new ways of interaction. Like home integration, meeting areas with huge collaboration screens, geo services and augmented reality, or simply navigating and reshaping existing data. But in their aim to show new ways of doing things, they neglect the “old”, conventional demands, demands that won’t go away.

The very fact that these NUI approaches – touch, gestures, even voice – are defined by “the lack of a physical keyboard and/or mouse” (and in case you didn’t notice – NONE of the above videos had a keyboard in it!) renders them inappropriate for a whole bunch of scenarios. Can you imagine a secretary typing on a virtual keyboard? A call center clerk waving at his screen while he talks to a customer? A banker shooing stock rates up and down? A programmer snipping his code into place? Cool as all that Minority Report and other stuff may seem, I have a hard time imagining anyone whose daily job today requires a keyboard to a substantial degree using some other “device” instead.

In the end we’ll probably see both. NUI approaches are going to spread, new devices targeted at different scenarios simply require different notions of user interaction. But they are not going to replace today’s conventional computers, they are going to be a complement, actually even a necessary one. Another necessary complement is the mutual integration with each other, the internet/cloud, and social platforms, but that’s a different story.

For us developers this will be the actual challenge: developing on conventional machines for devices and environments that have totally different ideas of what an application should look like and how it should interact with its surroundings. Testing is going to be a bitch.

That’s all for now folks,


May 15, 2010

Calling Amazon – Sample Code

Filed under: .NET, C#, Silverlight, Software Architecture — ajdotnet @ 4:59 pm

As an addition to my blog posts regarding Amazon, I have extracted the respective code into a separate sample:

The code can be downloaded here: AmazonSample.zip

After downloading the solution (VS2010!), you need to open the appSettings.config file in the web project and provide your personal Amazon credentials. Afterwards F5 should be all you need.

Calls via the server use a simple cache implementation that stores the XML returned from Amazon in a local directory. This way one can have a more detailed look at the information available from Amazon. This is intended for debugging purposes and to avoid flooding Amazon during development – it is not suitable for production code!
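A minimal sketch of such a development-time cache, with hypothetical names (the actual sample code may look different): the request URL is hashed into a file name, and the XML is only fetched on a cache miss.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

// Hypothetical sketch of a development-time file cache for Amazon
// responses. Not the sample's actual code; for debugging only.
public class XmlFileCache
{
    private readonly string _cacheDir;

    public XmlFileCache(string cacheDir)
    {
        _cacheDir = cacheDir;
        Directory.CreateDirectory(cacheDir);
    }

    // Return the cached XML for the URL, or invoke fetch and store the result.
    public string GetOrAdd(string requestUrl, Func<string, string> fetch)
    {
        string path = Path.Combine(_cacheDir, HashKey(requestUrl) + ".xml");
        if (File.Exists(path))
            return File.ReadAllText(path);

        string xml = fetch(requestUrl);
        File.WriteAllText(path, xml);
        return xml;
    }

    // Derive a file-system-safe name from the URL.
    private static string HashKey(string url)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(url));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}
```

During development this keeps repeated F5 runs from hammering Amazon; for production you would want proper expiration and error handling.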

The related blog posts are available here:

That’s all for now folks,

March 25, 2010

Calling Amazon – Part 2

Filed under: .NET, C#, Silverlight, Software Architecture — ajdotnet @ 9:27 pm

The last post provided some introduction to calling an external service, namely Amazon, and spent some thoughts on the infrastructure questions. It left off with some decisions:

  1. I’m going to make REST calls, as they are more flexible than SOAP calls.
  2. I’ll make the calls to Amazon from the client.

It’s important to note that these decisions depend to a very high degree on the particular service I’m calling: Amazon offering a policy file, the structure of the API allowing me to keep the secrets on my server, the fact that Amazon actually offers REST calls in the first place. Any other service might need a completely different approach. (That’s what the last post covered.)

As a reminder, here is the relevant image:

So how about actually implementing that?

The Server Part 

The server has to build and sign the URL for the Amazon call. Implementing that is straightforward. The AmazonApi class maintains the Amazon configuration in appSettings.config:

The BuildItemSearchRequestUrl method first calls a method to prepare the query parameters, then another method to build a respective URL:

The called methods are equally simple. BuildRequestParams translates the typed query parameters into a dictionary, adding some other necessary parameters along the way. The parameter names can be found in the developer guide:

In order to build the URL I need the SignedRequestHelper class, extracted from Amazon’s REST sample:
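For illustration, here is a rough sketch of the signing scheme that helper implements: a byte-ordered, percent-encoded query string, an HMAC-SHA256 over a canonical request, and the Base64 signature appended as a Signature parameter. This is a simplified stand-in, not the actual SignedRequestHelper code; class and parameter names are my own.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Simplified sketch of Amazon-style request signing; not the actual
// SignedRequestHelper from Amazon's REST sample.
public static class RequestSigner
{
    public static string Sign(string host, string path,
        IDictionary<string, string> parameters, string secretKey)
    {
        // Parameters must be byte-ordered and percent-encoded.
        string canonicalQuery = string.Join("&",
            parameters.OrderBy(p => p.Key, StringComparer.Ordinal)
                      .Select(p => Uri.EscapeDataString(p.Key) + "=" +
                                   Uri.EscapeDataString(p.Value)));

        // Canonical request: verb, host, path, and the sorted query string.
        string stringToSign = "GET\n" + host + "\n" + path + "\n" + canonicalQuery;

        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            return "http://" + host + path + "?" + canonicalQuery +
                   "&Signature=" + Uri.EscapeDataString(signature);
        }
    }
}
```

The important point is that the secret key never leaves the server – only the finished, signed URL is handed to the Silverlight client.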

This method is made available to the client via a WCF service, but I’ll leave that one out; it’s straightforward and boilerplate enough.

Calling Amazon from the Client

On the SL client we have a two-step process: First, call the server with the filter criteria and get the prepared URL. Second, make the call to Amazon using that URL. The first call is no different from any other call to my own server application, no need to elaborate on that. The second one uses the WebClient class to make the REST call:

The ParseItemSearchResponse method translates the XML into a respective object structure. Boilerplate, boring, and kind of longish if you do it manually.
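As a rough illustration of that translation step, LINQ to XML keeps it reasonably short. Note that this is a simplified sketch with illustrative element names; the real Amazon response uses a namespace and nests things like the title more deeply.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Simplified sketch of translating an ItemSearch response into objects.
// Element names and nesting are illustrative, not Amazon's actual schema.
public class BookItem
{
    public string Asin { get; set; }
    public string Title { get; set; }
}

public static class ResponseParser
{
    public static List<BookItem> ParseItems(string xml)
    {
        XDocument doc = XDocument.Parse(xml);
        return doc.Descendants("Item")
            .Select(i => new BookItem
            {
                Asin = (string)i.Element("ASIN"),
                Title = (string)i.Element("Title")
            })
            .ToList();
    }
}
```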

View Model Stuff

Now that the details are in place, I “only” need to wire them into the UI.

The calls from the SL client to its hosting server application are straightforward. First the bookkeeping:

The BuildItemSearchRequestUrlCall class encapsulates the calls to the BuildItemSearchRequestUrl service operation shown earlier, AmazonClientApi does the same for Amazon and is also shown above.

Now the actual implementation, kind of leapfrogging from one method to the next by way of asynchronous events and lambdas I pass in for that purpose:
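The leapfrog pattern can be sketched like this – each step takes a continuation (lambda) that fires the next step when the call completes. All names here are hypothetical, and the stubs complete synchronously for brevity, whereas the real calls are asynchronous:

```csharp
using System;

// Hypothetical sketch of the "leapfrog" call chain; not the sample's
// actual view model code.
public class SearchViewModel
{
    public string LastResult { get; private set; }

    public void BeginSearchAmazon(string keywords)
    {
        // Step 1: ask our own server for a signed request URL ...
        BuildItemSearchRequestUrl(keywords, url =>
            // Step 2: ... then call Amazon with that URL ...
            DownloadAmazonResponse(url, xml =>
                // Step 3: ... then parse and publish the result.
                EndSearchAmazon(xml)));
    }

    // Stub: the real call goes through the WCF service shown earlier.
    void BuildItemSearchRequestUrl(string keywords, Action<string> completed)
        => completed("http://ecs.amazonaws.com/onca/xml?Keywords=" + keywords);

    // Stub: the real call uses WebClient against the signed URL.
    void DownloadAmazonResponse(string url, Action<string> completed)
        => completed("<ItemSearchResponse/>");

    void EndSearchAmazon(string xml) => LastResult = xml;
}
```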

That should get the first 10 results from Amazon – and the proof that I can actually make the call:

ShowAmazonResponseErrors simply iterates over the returned error collection and shows a respective message box. Amazon will return an error if it couldn’t find anything:

I have now solved the basic technical demands, yet the user may be a little more demanding, since…

Employing Paging

… 10 items are usually not sufficient. Hence I need to make more calls – read: paging. Technically, paging only requires the ItemPage parameter to be set to a value bigger than 1 (the page index is 1-based). On the view model, however, some additional questions arise.

The first question is whether subsequent pages should be loaded right away, constantly filling the result grid in the background. This could be done by triggering the next call once the previous one has returned, until all available results have arrived – leapfrogging in a loop. Of course, if the user triggered a new search somewhere in between, I would have to cancel that chain of calls. Or I could let the user trigger the loading explicitly, e.g. with some “load more” button (which is what I’ll do).

In any case I have to deal with the user changing the filter criteria or interacting with the result, e.g. resorting it. This is obvious for the second case, but even automatically loading all data in chunks takes time, during which the user can still act.

Therefore I need to distinguish between the first call and subsequent calls. The first call initiates a new search, replacing any previous search result. Subsequent calls have to use the same filter criteria, just with another page, and the result is appended to the previous ones. Now, if the filter criteria are bound to the UI and used as parameters to the service call, the user might change the filter and then click the “load more” button (or the automatic loading might kick in at that time). To prevent that, I need a copy of my request property. Similarly, I need to maintain my result in a separate property, otherwise the call would overwrite any previous result data.
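The state handling just described can be sketched as follows – all names are hypothetical, and the real view model would bind these properties to the UI:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the paging state: the first call snapshots the
// UI-bound filter criteria; subsequent calls reuse the snapshot with an
// incremented 1-based ItemPage and append their results.
public class ItemSearchRequest
{
    public string Keywords { get; set; }
    public int ItemPage { get; set; }  // 1-based page index

    public ItemSearchRequest Clone()
        => new ItemSearchRequest { Keywords = Keywords, ItemPage = ItemPage };
}

public class PagedSearchState
{
    public ItemSearchRequest CurrentRequest { get; private set; }
    public List<string> Items { get; } = new List<string>();

    // First call: snapshot the criteria so later UI edits don't leak in,
    // and clear any previous result.
    public ItemSearchRequest BeginNewSearch(ItemSearchRequest uiCriteria)
    {
        CurrentRequest = uiCriteria.Clone();
        CurrentRequest.ItemPage = 1;
        Items.Clear();
        return CurrentRequest;
    }

    // Subsequent call: same snapshot, next page.
    public ItemSearchRequest LoadMore()
    {
        CurrentRequest.ItemPage++;
        return CurrentRequest;
    }

    public void Append(IEnumerable<string> pageItems) => Items.AddRange(pageItems);
}
```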

BeginSearchAmazon and EndSearchAmazon now only handle the first call, initiating a new search, and have to be changed accordingly:

The chain for subsequent calls looks similar in structure, but preserves the values in the separate copy properties:

The next image shows the dialog after having loaded 3 pages and in the process of loading the fourth: 

Great, isn’t it? By the way, the details link jumps straight to Amazon, showing the respective book.


Whether you are going to call Amazon or some other external service, these two posts should give you some hints on what to take into account from the infrastructure and architectural perspective. On the client you’ll have to look into Silverlight security and cross-domain calls; on the server you might run into firewall or proxy authentication issues.

Also, the Amazon API with its approach to paging may give you some hints on how to implement paging over larger result sets with Silverlight. While server calls are asynchronous, SL doesn’t provide the option of processing results while they arrive. For a large result set it might take some time to download the data, and the user might notice the time lag. It could be the better user experience to load the data in chunks, as shown here.

One hint at last: Jon Galloway has a good explanation on the rationale behind policy files on the called server, see here.

That’s all for now folks,


