AJ's blog

November 3, 2013

MVC is a UI pattern…

Filed under: ASP.NET, ASP.NET MVC, Software Architecture — ajdotnet @ 3:01 pm

Recently we had some discussions about how the MVC pattern of an ASP.NET MVC application fits into the application architecture.

Clarification: This post discusses the MVC pattern as utilized in ASP.NET MVC. Transferring these statements to other web frameworks (e.g. in the Java world) might work. But then it might not. However, transferring them to the client side architecture of JavaScript based applications (using knockout.js etc.), or even to rich client applications, is certainly ill-advised!

 

One trigger was Conrad’s post "MVC is dead, it’s time to MOVE on.", where he claims:

"the problem with MVC as given is that you end up stuffing too much code into your controllers, because you don’t know where else to put it.”

http://cirw.in/blog/time-to-move-on.html (emphasis by me)

The other trigger was "Best Practices for ASP.NET MVC", actually from a Microsoft employee:

"Model Recommendations
The model is where the domain-specific objects are defined. These definitions should include business logic (how objects behave and relate), validation logic (what is a valid value for a given object), data logic (how data objects are persisted) and session logic (tracking user state for the application)."

"DO put all business logic in the model.
If you put all business logic in the model, you shield the view and controller from making business decisions concerning data."

http://blogs.msdn.com/b/aspnetue/archive/2010/09/17/second_2d00_post.aspx

Similar advice can be found abundantly.

 

I have a hard time accepting these statements and recommendations, because I think they are plainly wrong. (I mean no offense, really, it’s just an opinion.)

These statements seem to be driven by the idea that the MVC pattern drives the whole application architecture. Which is not the case!
The MVC pattern – especially in web applications, certainly in ASP.NET MVC – is a UI pattern, i.e. it belongs to the presentation layer.

Note: I’m talking about the classical, boilerplate 3-layer architecture for web applications.

In terms of a classical layer architecture, this is how it works:

  • The model encapsulates the data for the presentation layer. This may include validation information (e.g. attributes) and logic, as far as it relates to validation in the UI. Also, I generally prefer value objects (as opposed to business objects, see also here).
  • The controller receives the incoming request, delegates control to the respective business services, determines the next logical step, collects the necessary data (again from the business services), and hands it off to the view. In this process it also establishes the page flow.
    In other words: The controller orchestrates the control flow, but delegates the single steps (see the sketch below).
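To make this tangible, here is a minimal sketch of what such a lean controller might look like. IOrderService, Order, and the action names are made up for illustration; the actual artifacts depend on your business layer.

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class Order { }  // placeholder for a value object from the model

// Hypothetical business layer contract, just for the sketch:
public interface IOrderService
{
    IList<Order> GetOpenOrders(int page);
    void CancelOrder(int orderId);
}

public class OrderController : Controller
{
    private readonly IOrderService _orderService;

    public OrderController(IOrderService orderService)
    {
        _orderService = orderService;
    }

    public ActionResult Index(int page = 1)
    {
        // delegate the actual work to the business layer...
        IList<Order> orders = _orderService.GetOpenOrders(page);
        // ...and only decide which view comes next, and with which data
        return View(orders);
    }

    [HttpPost]
    public ActionResult Cancel(int orderId)
    {
        _orderService.CancelOrder(orderId); // the business decision lives in the service
        return RedirectToAction("Index");   // the controller only establishes the page flow
    }
}
```

Nothing in there makes a business decision; the controller merely coordinates.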

 

Coming back to the original citations…

  • If "you end up stuffing too much code into your controllers", then the problem is not MVC, not the fact that controllers by design (or inherent deficiencies of the pattern) accumulate too much code. It’s far more likely that the controller does things it’s not supposed to do. E.g. do business validations or talk directly to the database. (Or, frankly, “you don’t know where else to put it” is the operative phrase.)
  • If your model "is where the domain-specific objects are defined", and you "put all business logic in the model", in order to "shield the view and controller from making business decisions concerning data", then these statements overlook the fact that domain-specific objects and business logic are artifacts of the business layer.

Fortunately you can find this advice elsewhere (I’m not alone after all!):

"In the MVC world the controller is simply a means of getting a model from the view to your business layer and vice-versa. The aim is to have as little code here as is possible."
http://stackoverflow.com/questions/2128116/asp-net-mvc-data-model-best-practices-for-a-newb

It’s just harder to come by.

That’s all for now folks,
AJ.NET

August 14, 2011

RIA with Silverlight–The Business Perspective

If you read this, chances are that you are a developer and that you like Silverlight. And why not? Exciting platform, great features, outstanding tooling. But! If you’re a corporate developer, have you sold it to your management yet? If not, this post is for you.

Silverlight is for RIA, and the domain of RIA applications is largely intranet or closed/controlled extranet user groups. This again is what is usually found in larger enterprise companies. Companies that usually have a vested interest in controlling their environment. And in terms of bringing software into production, and of operations and maintenance afterwards, every new platform is one platform too many.

So, the odd developer comes along and talks about this great new technology. Does the management care? Probably not. What does it care about? Simple. Money! Money, as in costs for deployment and user support, hardware and licenses to get the stuff up and running, operations and developer training, maintenance. And money as in savings in the respective areas and – the cornerstone, as the business usually pays the bill – impact on the business. All usually subsumed under the term ROI.

About a year ago, I finished an analysis looking into RIA with Silverlight, conducted for a major customer. Not from the point of view of the developer, but that of business people, operations, and IT management:

So, let’s look briefly at each aspect…

User/Business perspective…

The business doesn’t exactly care for the platform Silverlight itself; it cares for its business benefits. Benefits as in improved user experience, streamlined business workflows, office integration, and so on. And since we had some lighthouse projects with Silverlight we were able to collect some customers’ voices:

“This [streamlining with Silverlight] would reduce a [...] business process [...] from ~10 min to less than a minute.”

“Advanced user experience of Silverlight UI helps raising acceptance of new CRM system in business units”

“I was very impressed of the prototype implementation […] with Silverlight 3. Having analyzed the benefits of this technology I came to the conclusion that I want the […] development team to start using Silverlight as soon as possible. [...]”

This is also confirmed by the typical research companies, like Gartner or Forrester:

“Firms that measure the business impact of their RIAs say that rich applications meet or exceed their goals” (Forrester)

Operations perspective…

In production, the benefit of Silverlight applications (compared with respective conventional web based applications) is reduced server and network utilization.

For example, we had a (small but non-trivial) reference application at our disposal, which was implemented in ASP.NET as well as Silverlight (as part of an analysis to check the feasibility of Silverlight for LOB applications). We measured a particular use case with both implementations – starting the application and going through 10 steps, including navigation, searches, and selections. Both applications were used after a warm-up phase, meaning that the .xap file, as well as images and other static files, had already been cached.

The particular numbers don’t matter; what matters is the difference in the amount of data that has been exchanged for each step (in case of navigation, none at all for Silverlight). For the single steps:

And accumulated over time:

A ratio of roughly a tenth of the network utilization is quite an achievement – and considering that the Silverlight application wasn’t even optimized to use local session state and caching, the gap should be even wider.

This should have a direct impact on the number of machines you need in your web farm. Add the fact that session state management on the client drastically reduces the demand for ASP.NET session state – usually realized with a SQL Server (cluster) – and there is yet another entry on the savings list.

On the downside, there is the deployment of the Silverlight plugin. For managed clients – especially if outsourcing the infrastructure comes into play – this may very well become a showstopper.

IT Management perspective…

With respect to development and maintenance, what IT Management should care about includes things like ability to deliver the business demands, development productivity, bug rates in production, costs for developer training, and so on.

Actually all areas in which Silverlight can shine, compared with other RIA technologies, and with the typical mix of web technologies as well:

  • Rich, consistent, homogenous platform
    • .NET Framework (client and server), Visual Studio, Debugger, C#
    • Reduced technology mix, fewer technology gaps, less broad skill demands
  • Improved code correctness and quality…
    • compiler checks, unit testing, code coverage, debugging, static code analysis, in-source-documentation, …
  • Improved architecture and code
    • Clean concepts, coding patterns, clear separation of client code, lead to better architectures
    • Powerful abstractions lead to less code (up to 50% in one project), less complexity, fewer errors

Customers’ voices in this area:

“between our desktop app and the website, we estimate 50% re-use of code”

“a .NET developer can pretty much be dropped into a SL project. […] This is a huge deal […]”

“As alternative for Silverlight we considered Flash. […] only Silverlight could provide a consistent development platform (.NET/C#). […]”

 

Conclusion…

Taking all this together, and considering that enterprise companies usually have the tooling and test environments (well…) readily available, this all adds up to something like the following bill:

RIA Return on Investment

Whether the bill looks the same for your company or for one particular project, of course, depends on many things. Especially nowadays with all the hubbub around HTML5 and mobile applications (without any relevant Silverlight support). But if RIA is what you need, then Silverlight will quite often yield far more benefits than any other option.

Still, you need to do your own evaluation. However, I hope to have given you some hints on what you might focus on, if you want to sell technology to the people who make platform decisions in your company.

The actual analysis was fairly detailed and customer specific. But we also prepared a neutralized/anonymized version, which we just made available for download (pdf). (Also directly at SDX.)

That’s all for now folks,
AJ.NET


March 12, 2011

HTML5 – Part III: The Limits

Filed under: HTML5, Silverlight, Software Architecture — ajdotnet @ 5:26 pm

Alternative title: HTML5 – Still HTML, not a RIA platform

The last post was about the chances and benefits of HTML5. But there are also some exaggerated expectations that HTML5 cannot fulfill. Mainly this concerns the term “RIA”, and the effect HTML5 will have in this area.

Or as someone wrote:

“Will HTML 5 one day make Flash, Silverlight and other plug-in technologies obsolete?” (link)

Actually this is a very regular question I have to answer. (Often enough it doesn’t come as question, but as statement. Or accusation. Sic!)

I already detailed how HTML5’s video and canvas will take over tasks that have formerly been solved by other technologies, namely Flash. And some people seem to infer that if Flash is a RIA technology, and HTML5 obsoletes Flash, then HTML5 must obviously make RIA technologies obsolete in general.

Quite wrong. Actually the fact that Flash is used for video and 2D graphics has nothing to do with Flash being a RIA technology. Flash simply happened to be able to deliver these features, both in terms of technical capability as well as broad availability. That it also happens to be a RIA technology is more or less happenstance.

But before moving on we need to clarify the terms “RIA” and “RIA technology”…

My personal definition of RIA technologies relates to the following attributes: stateful programming model, with some kind of page model, for applications running in a browser sandbox. This includes Flash, JavaFX, and Silverlight as a browser plugin (but not in its WP7-platform variation).

Wikipedia applies the term to Flash, Java (not JavaFX! Sic!), and Silverlight. Still, this is debatable, and a year ago even Wikipedia had a far broader definition, but in my experience this actually covers the common understanding. Curiously, Adobe claims that AIR is their RIA technology, not Flash. But me, Wikipedia, and general consensus agree that Flash indeed is a valid RIA technology.

By the way: Leaving HTML+AJAX out of this picture is by no means meant to be deprecatory, it just reflects common understanding. Wikipedia actually makes the distinction based on the (lack of) necessity to install an additional software framework.

And a final tidbit: Once upon a time Microsoft advertised Silverlight as a Flash replacement, addressing video and graphics, just like the typical Flash use cases. However, even with growing adoption of the Silverlight plugin, Silverlight never became a serious competitor for Flash in that area. (This may actually have played a role in Microsoft’s commitment to HTML5…) Still, Silverlight has long outgrown this narrow definition, and later versions have put more emphasis on business features.

So, let’s have a look at where HTML5 reaches its limits and where RIA technologies might kick in. I’ll look at regular web applications before moving on to RIA applications.

Web Applications

HTML5 is going to address standard demands of web applications, including those addressed today by Flash. This will have a crowding-out effect on Flash and RIA technologies in general. But once the demands go beyond being “standard”, RIA technologies will find their niche even in web applications.

One example could be premium video delivery: some vendors will probably be eager to offer unique selling propositions in the emerging markets of WebTV and high quality HD video content (probably involving DRM).

Since Flash can no longer play the trump of being the only or the broadest available platform in this area, this will also change the picture among the RIA technologies. Especially Silverlight has been very successful in this area recently. Take the Olympic Games or maxdome.

Other examples include complex user interactions that are not feasible with canvas and script, e.g. Mazda’s car configurator, and similarly dynamic data visualizations.

Finally there is the support of additional features. RIA technology vendors certainly have shorter innovation cycles than standards bodies. This especially includes hardware support (camera, game controller, hardware accelerated animations and 3D).

These scenarios all require the user to accept the plugin – which might become a more severe issue once this necessity is less ubiquitous. Thus for the web site provider this always raises the question of whether he can compel his users to use his offering despite that nuisance, or whether he may have to provide a (perhaps simplified and less capable) HTML based version.

RIA Applications

HTML5 won’t turn HTML into a RIA technology. It doesn’t come with a new programming model, doesn’t change server side processing, the page model, and postbacks, doesn’t change the fact that the HTML ecosystem really is a conglomerate of diverse technologies.

Many applications – be it multimedia, some kind of hardware dependency, or line of business – simply require the potential of rich client applications. For these, HTML simply cannot deliver what is necessary. Typical demands include:

  • Data centric applications: Large amounts of data; data input with complex validations, lists with extended feature sets, …
  • Usability: Immediate feedback (not possible with postbacks), dynamic UIs with forms built dynamically or changing depending on user input, …
  • Business features: Printing, graphical presentation, Office integration, …
  • Offline capability…
  • Connectivity: Communication patterns such as peer-to-peer, server pushes, …
  • Multimedia and hardware support: Animations, camera, microphone, multitouch, …
  • Rich platform: Stateful programming model, component model, feature rich controls, rich databinding capabilities, …

These are certainly not the demands of typical web applications. But intranet applications and applications addressing some distinct or closed user group on the web are very much within this category. A prominent example is SAP; one can also think of WebTV portals, home banking, or others.

In the past, Java applets were often used to cover these demands. Recently AJAX approaches have spread, but while this works to some degree, it often falls short of meeting the demands completely. From a technical perspective, RIA technologies are the adequate choice in these scenarios. And (in my opinion) Microsoft Silverlight is currently the best technology available in that area: Adobe AIR lacks availability and adoption, Flash alone is not sufficient, and JavaFX seems to be dying a slow death.

Conclusion

HTML5 will push RIA technologies out of their makeshift role (video and canvas). However this doesn’t affect the feasibility of employing RIA technologies on their own turf, i.e. beyond-HTML-capability demands in web applications and fully fledged RIA applications.

However, since this “pushing out of RIA technologies” mainly affects Flash, HTML5 has an interesting effect on the RIA market: broad availability is no longer a strong USP for Flash, which is to the benefit of Silverlight. Add the hazy prospect of JavaFX and the fact that Silverlight is not only a RIA platform but also enters devices (WP7, WebTV), and HTML5 may actually further the adoption of Silverlight – not as the cross-platform tool it was once intended to be, but in all areas not covered by HTML5.

The one argument in favor of HTML5 – which no RIA technology is likely to ever achieve – is its universal availability across all platforms, even if that comes at a cost.

Where are we?

The conclusion of this little series may be as follows: The conflict, or enmity, between HTML5 and RIA that some people see (or exaggerate) doesn’t really exist. There may be a competition between HTML5 and Flash, but even that may turn out differently from what people expect.

Actually HTML5 and RIA complement each other. There are areas in which one technology certainly makes more sense than the other, other areas in which there is a choice, again other areas in which a combination of both may work best; even areas in which neither is an ideal choice. E.g.…

  • A web application addressing the broadest available audience? HTML5.
  • An LOB application with high usability demands? Silverlight.
  • A mobile application addressing a broad audience? HTML5. As long as no tighter device integration is necessary, in which case one has to address several mobile OSes…

And between these black-and-white examples there’s a lot of gray area, to be decided on a case-by-case basis. And usually the important thing is not exactly which technology you favor. The important thing is to make an informed decision, aware of the pros and cons, and not solely based on political opinions of certain apologists.

That’s all for now folks,
AJ.NET


March 10, 2011

HTML5 – Part II: New Standard for Web Applications

Filed under: HTML5, Software Architecture — ajdotnet @ 6:35 pm

As I laid out in the first post, a lot of people talk about HTML5, but there are also a lot of misconceptions – and misguided expectations – about HTML5. So, what is HTML5, anyway? And for whom?

HTML5 is a lot of things for a lot of people. For some it is a vehicle to rid the web of patent laden video formats. For Apple it is a means to keep Flash off iPhone and iPad. Microsoft uses it to push IE9. Some people are using it to – at least try to – bury RIA. Anything else…? Oh yes, some people actually look into HTML5 technologies.

In an attempt to join the last group, here is my opinion on what HTML5 is and what it will mean.

The Standard

Formally, HTML5 is just the next version of HTML. (Period!) In terms of scope however it covers some more ground than “HTML” did before. Included is not only the markup language we know as HTML 4.01, but also the formerly distinct standards XHTML and DOM.

Given that HTML 4.0 became a recommendation in 1997, one can only say (shout!) “It’s about time!”. The fact that HTML 4.x hasn’t been “state of the art” for quite some time now is exactly the reason why browser vendors chose to implement proprietary extensions and why technologies like Flash – even ActiveX – have to fill in the gaps.

HTML5 is available as a working draft (latest version today from January 2011) and all major browser vendors are more or less committed to implementing it:

Regarding general support for HTML5, the industry for once agrees on something. Surprisingly enough. Regarding the details however, one has to be careful about which part of HTML5 is supported by which browser to what degree. It may take some time – and more than one version number – until we get broad HTML5 support on all browsers.

For some time the critics gloated over the fact that HTML5 has barely reached “working draft” status and that “recommendation” status is not expected before 2022. Even if that were the case, it would be totally irrelevant, as the industry is implementing and thereby standardizing parts of HTML5 right now. Additionally the W3C group is just now reconsidering its approach and looking for a more timely “delivery”.

So much for the term, now for the content…

The Details

In the details, HTML5 is a conglomerate of different – and not necessarily related – improvements and new features. Of course this includes the two features mentioned most often: video and 2D graphics (a.k.a. “canvas”).

Video is nothing spectacular: It simply displays a video, albeit without depending on some 3rd party plugin (namely Flash). Benefit for web developers: a standardized API based on HTML and JavaScript (rather than having to learn another technology). Benefit for the user: He can watch the video, independently of device and plugin availability.

Canvas allows 2D graphics, even animated (with JavaScript). This again allows (in principle) addressing use cases that were formerly the domain of Flash. Diagrams for usage statistics, stock exchange rates, and so on. Contrary to simple (server rendered) images this could include user interaction. It may even include more complex things, like web games; the technology is up to it, as this classic proves.

Regarding multimedia, one probably has to mention audio support for a complete picture.

Video and canvas are mentioned quite often, probably because they constitute the use cases today most often addressed using Flash. Still, it would be unfair to reduce HTML5 to these two.

When it comes to plain old markup code, HTML5 offers some improvements regarding JavaScript control, as well as semantic tags like “header” and “footer”.

Regarding user interaction, HTML5 offers new types of input fields (e.g. date picker), complete with validations (e.g. textbox with email address). Also add APIs for drag-and-drop, browser history management, and some others.

You may find this in more detail at the W3C or on Wikipedia. A nicely illustrated introduction can be found here (German, but the pictures shouldn’t need translation ;-)).

Yet this still doesn’t conclude the list. For a complete picture one has to name topics closely related to (even if not formally part of) HTML5: CSS3, Web Storage, and (especially for mobile applications) Geolocation.

It should be noted that the recent improvements in JavaScript execution in all major browsers (well… – IE starting with IE9) also contribute to HTML5: Many of the features unfold their full potential only with scripting. It’s a fair assumption that HTML5 will cause a further growth in scripting in general, and probably the adoption of respective AJAX frameworks.

The Impact

At a closer look, HTML5 is a mixture of rather different and unrelated things – none of them especially outstanding. Basically HTML is only upgraded to accommodate current state-of-the-art technologies. Altogether a (long overdue and therefore a little extensive) evolution, but certainly no revolution or “groundbreaking upgrade”, as some like to think.

Therefore the relevance of HTML5 is not the functionality itself. It stems from two facts:

  1. The broad support from all major browser vendors. No matter why they do it, the fact that they agree on HTML5 in such an unequivocal and joint fashion is without precedent. This will likely ensure that all browsers level out on comparable features and capabilities. Which is important to break today’s disparities and incompatibilities. HTML5 shows every promise of becoming a platform for the web that is state-of-the-art as well as broadly available. Something HTML has increasingly failed at in recent years.
  2. The timely appearance together with the emerging mobile ecosystem (smart phones, pads). In this area we have a far more diverse and inhomogeneous platform landscape than on desktops (iOS, Android, WP7, other mobile OSes, desktop OS adaptations). No platform has a dominance similar to Windows, thus vendors need to address more than one platform. And web applications built for mobile devices are the only feasible cross-platform approach available. Even if HTML5 lacks full device integration (e.g. phone integration or multitouch), it goes a long way with web storage, geolocation, and rudimentary offline capabilities.

To conclude: HTML5 is not going to be some academic standard, it will be a true and broadly supported industry standard. Together with browser improvements, especially regarding script engines, HTML will become an adequate development platform for state-of-the-art web applications and the emerging mobile area.

For web developers HTML5 is a good thing. At least as long as the agreement among the browser vendors holds – and as long as we don’t have to wait another 10 years for the next version.

That’s all for now folks,
AJ.NET


March 7, 2011

HTML5 – Part I: The Hype

Filed under: HTML5, Silverlight, Software Architecture — ajdotnet @ 5:17 pm

META: This blog got a little sleepy recently. But actually I’ve been spending quite some time writing blog posts, albeit for our company blog. And I’m planning to reuse some of that content here (this post is actually the first one). Also I’ve been busy in several new areas, including WP7, and I may have something to say about those, too. So, this blog is still alive and kicking.

There’s been a small hype around HTML5 for some time now. Ironically the reasons were more political than technical, since browsers are beginning to support HTML5 only just now. Be it organizational hassles, discussions about a video format, or a certain company having issues with Flash on their iPlatform. And since Microsoft has announced its decision to make HTML5 their cross-platform strategy, the last big browser vendor has joined the camp.

And this is a good thing! Not only does HTML5 homogenize the web platform again, it is also the only feasible platform for cross-platform development in the mobile world, which is much more diverse than our desktop ecosystem.

On the other hand I am regularly irritated by what people think HTML5 will be able to accomplish. Especially in relation to RIA applications, which are all but dead, according to various sources:

“HTML5, a groundbreaking upgrade to the prominent Web presentation specification, could become a game-changer in Web application development, one that might even make obsolete such plug-in-based rich Internet application (RIA) technologies as Adobe Flash, Microsoft Silverlight, and Sun JavaFX.” (link)

“Will HTML 5 one day make Flash, Silverlight and other plug-in technologies obsolete? Most likely, but unfortunately that day is still quite a way off.” (link)

“Sure maybe today, we have to rely on these proprietary browser plugins to deliver content to users, but the real innovative developers and companies are going to standard on HTML 5 and in turn revolutionize how users interact with data.  We all want faster web applications and the only way to deliver this is to use HTML 5.” (link)

To put it bluntly: HTML is no RIA technology and HTML5 is not going to change that. Thus Silverlight is a valid choice for any RIA application today, and it will be one tomorrow.

On the other hand, HTML5 is certainly going to deliver features that today are the domain of RIA technologies, namely Flash. And this will affect RIA technologies in some way.

Then how will HTML5 affect RIA? Well, I’m afraid, it’s not that simple and there is no short and sufficient answer that I’m aware of. In order to decide – probably on a case by case basis – what HTML5 can do, in which use cases HTML5 is the right answer, and in which cases RIA technologies still are the better choices, we need to take a closer look at some details…

To keep it a little more concise, I’m going to break with my usual habit of very long posts. I’m splitting the remainder into two additional not-quite-so-long posts:

Stay tuned,
AJ.NET

June 12, 2010

The Future (of) UI

Filed under: Software Architecture, Software Development — ajdotnet @ 3:54 pm

The way we think about user interaction – actually the user interfaces themselves – is changing. The iPhone seems to be the protagonist teaching us new ways to interact with phones, and the iPad even coins a new form factor driving this trend further. Touch and multi-touch are becoming mainstream because vendors have begun to create operating systems, UI metaphors, and backing services around these interaction principles – rather than slightly adjusting OSes/UIs built for conventional PCs with keyboard and mouse.

This is actually a defining feature of the next evolutionary step of UI, namely Natural User Interfaces (NUI). As Wikipedia states…

A NUI relies on a user being able to carry out relatively natural motions, movements or gestures that they quickly discover control the computer application or manipulate the on-screen content. The most descriptive identifier of a NUI is the lack of a physical keyboard and/or mouse. (Wikipedia)

While Apple seems to take the lead in public perception, Microsoft has a rather mixed lineup: With smartphones, Windows Phone 7 seems a bit like “taking the last chance”, even if the move to Silverlight as a platform is a bold one and (IMO) a good one. On the other hand they just managed to drop the very promising – by itself as well as positioned against the iPad – Courier project. As a colleague stated in our internal company blog: “I’m frustrated. Period.” And lastly Microsoft has Surface, which has no competition I’m aware of at all (unless you want to build one yourself).

Surface is not only commercially available, it also adds the capability to detect objects placed on the table and thus goes beyond plain multi touch. And it is subject to further research, as this excerpt from PDC09 shows: (better quality here, at 83:00)

 

Looking Ahead

Well, this is kind of what we have today. If you would like to see where this might be heading, have a look at the Microsoft Gives Glimpse Into the Future talk Stephen Elop held in early ‘09. It’s a 36-minute video, but you may jump to 14:00 and watch the presentation of "Glimpse in the future". What’s presented there is impressive: live translations enabling people to talk with each other in different languages. Surface-like tables interacting directly with iPad-like multi-touch tablets placed on them. Minority Report-like control. Augmented reality. … It’s even more impressive since everything is backed afterwards by actually existing (if early-stage) technology. There’s a shortened and also an extended version available on YouTube:

 

Speaking of Minority Report: another great video comes from John Underkoffler; John was the science adviser for that movie, and he does the whole presentation with exactly that technology!

This talk is certainly worth watching, as he makes some very interesting observations (in fact, watching this video triggered this post; thanks Daniel). His final prediction is … ambitious: “I think in 5 years time, when you buy a computer, you’ll get this.”

Is that cool or what?

 

Second Thoughts 

Well, as they say:

“Prediction is very difficult, especially about the future.” (various)

There’s one thing I don’t like about those predictions. They are (deliberately?) incomplete. They certainly shine in new fields of applications for computers, new degrees of collaboration, new ways of interaction. Like home integration, meeting areas with huge collaboration screens, geo services and augmented reality, or simply navigating and reshaping existing data. But in their aim to show new ways of doing things, they neglect the “old”, conventional demands, demands that won’t go away.

The very fact that these NUI approaches – touch, gestures, even voice – are defined by “the lack of a physical keyboard and/or mouse” (and in case you didn’t notice – NONE of the above videos had a keyboard in it!) renders them inappropriate for a whole bunch of scenarios. Can you imagine a secretary typing on a virtual keyboard? A call center clerk waving at his screen while he talks to a customer? A banker shooing stock rates up and down? A programmer snipping his code into place? Cool as all that Minority Report and other stuff may seem, I have a hard time imagining anyone whose daily job today requires a keyboard to a substantial degree using some other “device” instead.

In the end we’ll probably see both. NUI approaches are going to spread, new devices targeted at different scenarios simply require different notions of user interaction. But they are not going to replace today’s conventional computers, they are going to be a complement, actually even a necessary one. Another necessary complement is the mutual integration with each other, the internet/cloud, and social platforms, but that’s a different story.

For us developers this will be the actual challenge: developing on conventional machines for devices and environments that have totally different ideas of what an application should look like and how it should interact with its surroundings. Testing is going to be a bitch.

That’s all for now folks,
AJ.NET


May 15, 2010

Calling Amazon – Sample Code

Filed under: .NET, C#, Silverlight, Software Architecture — ajdotnet @ 4:59 pm

As an addition to my blog posts regarding Amazon I have extracted the respective code into a separate sample:

The code can be downloaded here: AmazonSample.zip

After downloading the solution (VS2010!), you need to open the appSettings.config file in the web project and provide your personal Amazon credentials. Afterwards F5 should be all you need.

Calls via the server use a simple cache implementation that stores the XML returned from Amazon in a local directory. This way one can have a more detailed look at the information available from Amazon. This is intended for debugging purposes and to avoid flooding Amazon during development – it is not suitable for production code!
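To illustrate the idea (not the actual sample code; the cache directory and key scheme are made up), such a debug cache might look roughly like this:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

// Debugging aid only, not production code: cache Amazon's XML responses on disk.
static string GetItemSearchXml(string signedRequestUrl)
{
    string cacheDir = @"C:\Temp\AmazonCache";
    string fileName = Path.Combine(cacheDir,
        Convert.ToBase64String(Encoding.UTF8.GetBytes(signedRequestUrl))
               .Replace('/', '_') + ".xml");

    if (File.Exists(fileName))
        return File.ReadAllText(fileName);   // no call to Amazon at all

    using (var client = new WebClient())
    {
        string xml = client.DownloadString(signedRequestUrl);
        Directory.CreateDirectory(cacheDir);
        File.WriteAllText(fileName, xml);    // remember for the next debug run
        return xml;
    }
}
```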

The related blog posts are available here:

That’s all for now folks,
AJ.NET

March 25, 2010

Calling Amazon – Part 2

Filed under: .NET, C#, Silverlight, Software Architecture — ajdotnet @ 9:27 pm

The last post provided some introduction into calling an external service, namely Amazon, and spent some thoughts on the infrastructure questions. It left off with some decisions:

  1. I’m going to make REST calls, as they are more flexible than SOAP calls.
  2. I’ll make the calls to Amazon from the client.

It’s important to note that these decisions depend to a very high degree on the particular service I’m calling: Amazon offering a policy file, the structure of the API allowing me to keep the secrets on my server, the fact that Amazon actually offers REST calls in the first place. Any other service might need a completely different approach. (That’s what the last post covered.)

As a reminder, here is the relevant image:

So how about actually implementing that?

The Server Part 

The server has to build and sign the URL for the Amazon call. Implementing that is straightforward. The AmazonApi class maintains the Amazon configuration in the appSettings.config:
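The original snippet isn’t reproduced here; as a rough sketch (the appSettings key names are my assumption, not necessarily the sample’s):

```csharp
using System.Configuration;

public class AmazonApi
{
    public string AccessKeyId { get; private set; }
    public string SecretAccessKey { get; private set; }

    public AmazonApi()
    {
        // read the credentials provided in appSettings.config
        AccessKeyId = ConfigurationManager.AppSettings["AmazonAccessKeyId"];
        SecretAccessKey = ConfigurationManager.AppSettings["AmazonSecretAccessKey"];
    }
}
```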

The BuildItemSearchRequestUrl method first calls a method to prepare the query parameters, then another method to build a respective URL:
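In outline (the method name is from the text, the signature is a guess):

```csharp
// Continuing the AmazonApi sketch: two steps, prepare then build.
public string BuildItemSearchRequestUrl(string keywords, int itemPage)
{
    IDictionary<string, string> requestParams = BuildRequestParams(keywords, itemPage);
    return BuildRequestUrl(requestParams);
}
```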

The called methods are equally simple. BuildRequestParams translates the typed query parameters into a dictionary, adding some other necessary parameters along the way. The parameter names can be found in the developer guide:
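A sketch with typical ItemSearch parameters from the developer guide (the exact set used in the sample may differ):

```csharp
// Continuing the sketch: translate the typed query into the dictionary
// the REST call expects.
private IDictionary<string, string> BuildRequestParams(string keywords, int itemPage)
{
    return new Dictionary<string, string>
    {
        { "Service", "AWSECommerceService" },
        { "Operation", "ItemSearch" },
        { "SearchIndex", "Books" },
        { "Keywords", keywords },
        { "ItemPage", itemPage.ToString() },
        { "ResponseGroup", "Small,Images" }  // ask for the cover images right away
    };
}
```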

In order to build the URL I need the SignedRequestHelper class, extracted from Amazon’s REST sample:
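Its usage boils down to something like this, following Amazon’s REST sample (the endpoint depends on the Amazon site you target, and the exact overload may vary with the helper’s version):

```csharp
// Continuing the sketch: the helper appends AWSAccessKeyId, Timestamp,
// and the Signature, and returns the final request URL.
private string BuildRequestUrl(IDictionary<string, string> requestParams)
{
    var helper = new SignedRequestHelper(
        AccessKeyId, SecretAccessKey, "webservices.amazon.com");
    return helper.Sign(requestParams);
}
```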

This method is made available to the client via a WCF service, but I’ll leave that one out; it’s straightforward and boilerplate enough.

Calling Amazon from the Client

On the SL client we have a two step process: First, call the server with the filter criteria, and get the prepared URL. Second, make the call to Amazon, using the URL. The first call is no different than any other call to my own server application, no need to elaborate on that. The second one uses the WebClient class to make the REST call:
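In sketch form (ItemSearchResponse is a placeholder for the parsed result, see below; the callback signature is my assumption):

```csharp
using System;
using System.Net;

// Sketch: step two on the client. WebClient is asynchronous in Silverlight,
// hence the completion event; onCompleted is supplied by the caller.
public void SearchAmazon(string signedUrl, Action<ItemSearchResponse> onCompleted)
{
    var client = new WebClient();
    client.DownloadStringCompleted += (sender, e) =>
    {
        if (e.Error == null)
            onCompleted(ParseItemSearchResponse(e.Result));
    };
    client.DownloadStringAsync(new Uri(signedUrl));
}
```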

The ParseItemSearchResponse translates the XML into a respective object structure. Boilerplate, boring, and kind of longish if you do it manually.
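For illustration, a minimal LINQ to XML version (using System.Linq and System.Xml.Linq; the result types are made up, and matching on local names sidesteps the versioned Amazon namespace):

```csharp
// Made-up result types, just for the sketch:
public class AmazonItem { public string Asin { get; set; } public string Title { get; set; } }
public class ItemSearchResponse { public List<AmazonItem> Items { get; set; } }

private ItemSearchResponse ParseItemSearchResponse(string xml)
{
    XDocument doc = XDocument.Parse(xml);
    var items =
        from item in doc.Descendants()
        where item.Name.LocalName == "Item"
        select new AmazonItem
        {
            Asin = (string)item.Elements()
                       .FirstOrDefault(e => e.Name.LocalName == "ASIN"),
            Title = (string)item.Descendants()
                        .FirstOrDefault(e => e.Name.LocalName == "Title")
        };
    return new ItemSearchResponse { Items = items.ToList() };
}
```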

View Model Stuff

Now that the details are in place, I “only” need to wire them into the UI.

The calls from the SL client to its housing server application are straightforward. First the bookkeeping:
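Roughly like this (the type names follow the text, the member layout is my guess):

```csharp
using System.Collections.ObjectModel;

// One helper per service call, plus the properties bound to the UI:
private readonly BuildItemSearchRequestUrlCall _buildUrlCall =
    new BuildItemSearchRequestUrlCall();
private readonly AmazonClientApi _amazonApi = new AmazonClientApi();

public AmazonSearchRequest SearchRequest { get; set; }            // the filter UI
public ObservableCollection<AmazonItem> SearchResult { get; private set; }  // the grid
```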

The BuildItemSearchRequestUrlCall class encapsulates the calls to the BuildItemSearchRequestUrl service operation shown earlier, AmazonClientApi does the same for Amazon and is also shown above.

Now the actual implementation, kind of leapfrogging from one method to the next by way of asynchronous events and lambdas I pass in for that purpose:
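Roughly like this (Execute and the callback signatures are assumptions; error handling omitted):

```csharp
// Each asynchronous step gets a lambda that triggers the next step.
public void BeginSearchAmazon()
{
    // step 1: ask my own server for the signed URL...
    _buildUrlCall.Execute(SearchRequest, url =>
        // step 2: ...then call Amazon directly with that URL...
        _amazonApi.SearchAmazon(url, response =>
            // step 3: ...and finally publish the result for databinding
            EndSearchAmazon(response)));
}

private void EndSearchAmazon(ItemSearchResponse response)
{
    SearchResult = new ObservableCollection<AmazonItem>(response.Items);
    RaisePropertyChanged("SearchResult");  // assumed INotifyPropertyChanged helper
}
```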

That should get the first 10 results from Amazon – and the proof that I can actually make the call:

The ShowAmazonResponseErrors simply iterates over the returned error collection and shows a respective message box. Amazon will return an error if it couldn’t find anything:
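Something along these lines (assuming the parsed response also carries an Errors collection with a Message member):

```csharp
using System.Windows;

private bool ShowAmazonResponseErrors(ItemSearchResponse response)
{
    if (response.Errors == null || response.Errors.Count == 0)
        return false;

    foreach (var error in response.Errors)
        MessageBox.Show(error.Message, "Amazon", MessageBoxButton.OK);
    return true;
}
```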

I have now solved the basic technical demands, yet the user may be a little more demanding, since…

Employing Paging

… 10 items are usually not sufficient. Hence I need to make more calls, read: paging. Paging technically only requires the ItemPage parameter to be set to a value bigger than 1 (the page index is 1-based). However, on the view model some additional questions arise.

First question is whether the subsequent pages should be loaded right away, constantly filling the result grid in the background. This could be done by triggering the next call once the previous one has returned, until all available results have arrived. Leapfrogging in a loop. Of course, if the user triggered a new search somewhere in between, I would have to cancel that chain of calls. Or I could let the user trigger the loading explicitly, e.g. with some "load more" button (which is what I’ll do).

In any case I have to deal with the user changing the filter criteria or interacting with the result, e.g. resorting it. This is obvious for the second case, but even automatically loading all data in chunks takes time.

Therefore I need to distinguish between the first call and subsequent calls. The first call initiates a new search, replacing any previous search result. Subsequent calls have to use the same filter criteria, just with another page, and the result is appended to the previous ones. Now, if the filter criteria are bound to the UI and used as parameter to the service call, the user might change the filter and then click the "load more" button (or the automatic loading might kick off at that time). To prevent that I need a copy of my request property. Similarly I need to maintain my result in a separate property, otherwise the call would overwrite any previous result data.

BeginSearchAmazon and EndSearchAmazon now only handle the first call, initiating a new search, and have to be changed accordingly:
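Sketched (Clone and the fields are assumptions; the result is now appended into an existing collection instead of being replaced):

```csharp
private AmazonSearchRequest _currentRequest;  // copy, detached from the UI-bound filter
private int _currentPage;

public void BeginSearchAmazon()
{
    _currentRequest = SearchRequest.Clone();  // assumed copy helper
    _currentPage = 1;
    SearchResult.Clear();                     // a new search replaces the old result
    LoadPage(_currentPage);                   // shared with the subsequent calls below
}
```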

The chain for subsequent calls looks similar in structure, but preserves the values in the separate copy properties:
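Again sketched, reusing the pieces from above:

```csharp
public void LoadMore()   // e.g. wired to the "load more" button
{
    _currentPage++;
    LoadPage(_currentPage);
}

private void LoadPage(int page)
{
    _currentRequest.ItemPage = page;          // same criteria, next page
    _buildUrlCall.Execute(_currentRequest, url =>
        _amazonApi.SearchAmazon(url, response =>
        {
            foreach (var item in response.Items)
                SearchResult.Add(item);       // append rather than overwrite
        }));
}
```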

The next image shows the dialog after having loaded 3 pages and in the process of loading the fourth: 

Great? By the way, the details link jumps straight to Amazon, showing the respective book.

Roundup

Whether you are going to call Amazon or some other external service, these two posts should give you some hints on what to take into account from the infrastructure and architectural perspective. On the client you’ll have to look into Silverlight security and cross domain calls; on the server you might run into firewall or proxy authentication issues.

Also the Amazon API with its approach to paging may give you some hints on how to implement paging over larger result sets with Silverlight. While server calls are asynchronous, SL doesn’t provide the option of processing results while they arrive. For a large result set it might take some time to download the data, and the user might notice the time lag. It could be the better user experience to load the data in chunks, as shown here.

One hint at last: Jon Galloway has a good explanation on the rationale behind policy files on the called server, see here.

That’s all for now folks,
AJ.NET


March 21, 2010

Calling Amazon – Part 1

Filed under: .NET, .NET Framework, Silverlight, Software Architecture — ajdotnet @ 6:57 pm

Connectivity is one of the promises of Silverlight. And what better target for my bookshelf application than Amazon? So I decided the book lists could come with the book cover image, and creating new catalogue entries can be streamlined using Amazon search. And while this is about Amazon, the thoughts should give you some hints on what to consider for other calls to external services as well.

Note: This post will dig into the Amazon API and some general infrastructure questions. Actually implementing this will be the topic of the next post.

The Preconditions

Amazon isn’t exactly forthcoming with its product catalogue API. Starting at http://aws.amazon.com/ points to about any Amazon service offering there is – except the product catalogue API. Well, some bings later, by way of other articles and blog posts, and once you know that the correct name is "Product Advertising API", you’ll find the entry point. From there it is reasonably well documented.

First thing is to register oneself as a developer. This will result in various pieces of information, which one can pick up on one’s user profile page:

  • The AWS Account ID is the user ID
  • A variable number of pairs of an AWS Access Key ID and the respective AWS Secret Access Key. You need such a pair for the REST API.
  • A variable number of X.509 certificates, one of which you need to make secure SOAP requests.

Understanding the API

As a quick guide to the documentation: The entry to documentation is here. Under “Documentation Archive” you can pick the latest version of the Getting Started Guide and the Developer Guide.

The logical next step is to find some samples, get them working, and understand the details of calling Amazon. There is one sample using the REST API, and one for SOAP using… WSE? Well. WCF is actually quite new, and since almost everyone is still using WSE, why update… Anyway, the example is simple to migrate – and it doesn’t work at all, could never work actually, since it doesn’t address security.

You can find descriptions on how to get security for SOAP calls working here. I never checked that out, though (I ended up using the REST API, see below), and I couldn’t find any samples using WCF.

The REST API is used by putting parameters in a dictionary, supplementing the user information, and letting a helper class (SignedRequestHelper.cs) produce a URL. The respective request with that URL will return some XML one has to parse.
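Put together, the flow looks roughly like this (parameter names from the developer guide, helper usage from Amazon’s sample; a condensed sketch, not production code):

```csharp
using System.Collections.Generic;
using System.Net;

static string CallItemSearch(string accessKeyId, string secretAccessKey)
{
    var request = new Dictionary<string, string>
    {
        { "Service", "AWSECommerceService" },
        { "Operation", "ItemSearch" },
        { "SearchIndex", "Books" },
        { "Keywords", "silverlight" }
    };

    var helper = new SignedRequestHelper(accessKeyId, secretAccessKey,
        "webservices.amazon.com");
    string requestUrl = helper.Sign(request); // adds AWSAccessKeyId, Timestamp, Signature

    using (var client = new WebClient())
        return client.DownloadString(requestUrl); // the XML that has to be parsed
}
```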

You’ll need that helper class; unfortunately it is, again, not that easy to find, as most links will lead you to the online test page, not the download. You could even download that page here. But I never found a download for the class by itself and ended up extracting it from the example named above.

The WSDL is also available and can be used to create a WCF client. Even if you employ the REST API, the data classes from the WCF client may help you parse the returned XML.

The API itself is simple and straightforward. You send some query parameters and you get some result. The query parameters include the operation, and the operation determines the valid set of other parameters. The operation I’m going to be interested in is ItemSearch.

The query parameters also include the ResponseGroup parameter that describes what kind of output I would like. Amazon doesn’t return each and every detail about, say, any book in a book search result. It returns just some major fields like title, detail URL, etc. by default. One could use this information to populate a search result list and load the book details on demand, thus relieving Amazon from doing too much unnecessary work in the first place (and reducing network load). But in cases where more information is needed right away, one can tell Amazon to include other sets of information, like images, in the returned search result.

Another parameter is ItemPage for partitioning the results. Amazon returns 10 items with each search request, with no way to change that value. To get the next 10 values, one has to make separate calls for page 2, 3, and so on, 400 at most.

The Infrastructure Question

Now there are a few choices to make (or rather to rule out). We have the REST and the SOAP API at our disposal, and we can make the call from our server, or from our Silverlight client. Note also that the REST call can be split into two independent parts: building the URL (which includes signing), and making the actual call against Amazon. In theory this leads to the following options:

  1. SOAP call from the server
  2. SOAP call from the Silverlight client
  3. REST URL built at the server, call made from the server
  4. REST URL built at the server, call made from the SL client
  5. REST URL built at the Silverlight client, call made from the SL client

What are the forces restricting these options?

  • One restriction is that I’m not going to send my private Amazon secret key or certificate – my eyes only, signed with the blood of a black cat killed at full moon on the grave of a convicted murderer – to the Silverlight client. It’s not that I don’t trust you… Well, it is, and I don’t. That invalidates options 2 and 5.
  • Another possible restriction is the server infrastructure. Depending on proxy or firewall configuration, you cannot call outbound from your server to the internet. In case of a proxy it might be possible, but it’d take unreasonable effort. That puts at least a question mark behind options 1 and 3.

Regarding proxy authentication: ASP.NET applications (including .asmx services) have the CredentialCache.DefaultNetworkCredentials property to get the current user’s credentials to pass on. WCF services don’t have that option which makes it unreasonably hard to make the subsequent call using the current user’s security context. Tell me if I’m missing something! 

  • To make the picture complete: Our SL client is also subject to security restrictions. The called service has to explicitly allow the call from our client by offering a policy file. Fortunately Amazon does that, so this won’t be an issue for now. If you are planning on using other services, make sure to check this out, for this puts a block on the call from the client.

Note: I’d like to stress the fact that it is the providing service, Amazon in this case, that has to opt in for client calls. There is nothing that can be done on the client side about it. This is quite a common misconception…

The corrected list of options:

  1. (SOAP call from the server)
  2. SOAP call from the Silverlight client (ruled out)
  3. (REST URL built at the server, call made from the server)
  4. REST URL built at the server, call made from the SL client
  5. REST URL built at the Silverlight client, call made from the SL client (ruled out)

Since the REST API offers client side calls (option 4) and still leaves the option of making server calls (option 3), the SOAP option never came up again. I actually started with server calls.

Calling Amazon from the Server

In this scenario all security related issues are the server’s problem:

The client passes the filter criteria to the server, the server creates a signed URL, invokes the REST call, parses the XML, and returns the result. The call from the server to Amazon may have to go through proxies, firewalls, or other intermediaries.

Doing the work on the server has certain advantages: it was easy to implement, I could use unit tests, and I added some persistent caching in files (actually to avoid flooding Amazon with my debug calls, but caching would of course also benefit multiple clients). Also, in this scenario the server does the job of parsing the returned XML into decent entities, and only those are returned to the client, which may be a factor depending on the WAN structure.

Calling Amazon from the Client

In this scenario the client still calls the server, but rather than making the call to Amazon, the server just builds the signed URL and hands it back to the client. The client calls Amazon and it also has to parse the resulting XML.

Calling the service from the client is only possible if the called server permits it (as Amazon does). And if it does, it is more fragile, as one cannot always foresee under which circumstances the code will run. If something goes wrong, the chances of getting diagnostics information are bad.

On the other hand the proxy issue is nicely circumvented, and in case you have to support different authentication schemes, say OpenID, you may be better off telling the user that you cannot record his credential information on the server if the server never sees it.

Initially I implemented the client side call more out of curiosity, to evaluate the implications. But when I eventually did run into the proxy issue, I only had to switch my view model to get it working again.

Calling What from Where?

I’d like to stress that point: The decision whether to call the external service from my own server or from the client is extremely dependent on very different influences: infrastructure, security, the available API, whether the service opted in to client calls, etc. The decision may consequently be completely different in other cases. It may even be the case that there is no simple solution. For example, had Amazon neglected to opt in for client calls (the policy file), I would have been forced to make the call from the server. Had I then run into the proxy or some other firewall issue, I would have had some hard tasks to face.

There may be some workarounds, like falling back to .asmx for the credentials or some browser script workaround – but none is especially nice.

Anyway, since in my case I have a working approach, I can now set out to actually implementing it. Next post…

That’s all for now folks,
AJ.NET


February 28, 2010

Understanding Validation in Silverlight

Filed under: .NET, .NET Framework, Silverlight, Software Architecture — ajdotnet @ 3:51 pm

Input validation is necessary in every business application. Time to look into validation with Silverlight.

Starting a little internet research will reveal two variations:

  • Putting validation code into property setters, throwing exceptions (covered by Jesse Liberty).
  • Decorating properties with validation attributes, either checked by the data grid, or manually, again eventually based on exceptions (see MSDN).

OK, obviously you don’t need me to repeat what has been described by others in sufficient detail.

However, neither the property setter approach nor validation attributes work well with code-generated classes (which are employed quite regularly with SL). Would a property changed event handler solve that? How do value converters fit into the picture? What about mutually dependent properties? Null values? Actually, there’s no overall concept that would help answer these questions.

Note: WCF RIA Services (see also here; formerly known as RIA Services) addresses the „generated code“ issue by copying the validation from the server to the client. However I refrain from using WCF RIA Services in this learning experience, since the aim is to understand what’s going on under the hood. Additionally I can easily envision cases in which I may want different (i.e. more restrictive) validations on the frontend than on the server.

Defining “Validation”

Let’s clarify what we are talking about. Validation (more to the point, input validation) deals with checking some user input, however it is made, against, let’s call it, syntactic and semantic rules. Syntactic rules include all that is necessary to satisfy the type of the backing data, e.g. ensuring the string actually is a valid time (provided the property is of type DateTime). Semantic rules deal with additional restrictions, usually business related, say whether some value is in a certain range.

Note: Validation always happens after the user has made his input, making sure the input was correct. The alternative approach (though not always feasible) is to let him make only valid input in the first place, e.g. with date pickers, numeric up/downs, or masked edit boxes. This renders at least the need for syntactic validation obsolete; semantic validation may or may not be covered. Still, it may provide the better user experience.

Syntactic rules are usually enforced implicitly during the actual conversion. The need for a conversion depends on what data the control actually provides (e.g. a date picker control supplies a DateTime ready to use, no need for validation in this case). In other cases syntactic validation is covered by the databinding mechanism of Silverlight.

Semantic rules are up to the developer.

There’s another difference between syntactic and semantic validation: Syntactic validation has to occur before changing the data (since it’s a precondition to a successful type conversion), thus it is also tied to the UI and the databinding process. Semantic validation on the other hand can also happen after the data has already been changed. And it doesn’t have to happen in the UI either. As a matter of fact, some validations may not even be possible in the UI, but have to be enforced by the business logic or even the database. Heck, they may happen out-of-band at any time, say by asking some other system asynchronously.

Note: Within my Silverlight UI I don’t care who actually does the validation, but the fact that it may happen outside of the usual sequence (databinding, triggered by some user input) is important.

Typical use cases for validation include:

syntactic validation
A) ensuring a string represents a certain type (numeric, date,…)

semantic validation
B) ensuring some input, i.e. required fields
C) ensuring a certain text length
D) ensuring a value matches some criterion (range check, regular expression, date in the future, etc.)

special cases
E) mutually dependent fields (mutual exclusion, value dependencies, etc.)
F) distinguishing between input and no input (i.e. null values)

Validation “mechanics” in SL

Validation is tied to data binding, and to get it working the binding properties have to be set accordingly:
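In XAML this means setting ValidatesOnExceptions and NotifyOnValidationError to True on the binding; the code-behind equivalent, as a sketch (amountTextBox and the "Amount" property are made up):

```csharp
using System.Windows.Controls;
using System.Windows.Data;

var binding = new Binding("Amount")
{
    Mode = BindingMode.TwoWay,
    ValidatesOnExceptions = true,    // exceptions during set become validation errors
    NotifyOnValidationError = true   // raise BindingValidationError on the control
};
amountTextBox.SetBinding(TextBox.TextProperty, binding);
```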

Now let’s take a closer look at the data binding process. Here’s a (somewhat simplified) sequence diagram of the relevant code for validation, after a control changed its value. Basically it all depends on exceptions:

(Note: this is not entirely correct regarding the exception handling, but it conveys the actual meaning better.)

Let’s go over the locations I numbered:

1: The value converter you could declare in the binding expression is called – yet it does not take part in the validation handling. Meaning, any exception thrown here won’t appear as validation error, but as actual application error. This renders value converters useless in cases that need syntactic validation (like use case A).

2: Right after the value converter, a try block marks the area in which every exception will be treated as a validation error.

Personally I don’t understand why the value converters have been left out of this area (seems to be a conscious decision). They would have offered an easy-to-use validation mechanism…

3: The type converter of the property’s type is called, which translates the value from the type passed in by the control (a textbox passes a string, other controls may pass in other types) to the target type, say Int32. Exceptions thrown here will show up as something like „Input string was not in a correct format.“, and of course the property retains its original value. Hence the type converter does syntactic validation (use case A). There is, however, no feasible way to customize the conversion the way a value converter could.

Personally I don’t understand why the value converters… . Ah, said that already, didn’t I?

4: The property value is set, which in turn triggers any validation code put in the property setter for semantic validation. The samples on the internet usually place the validation code before actually changing the property value, thus the property still retains its original value in case of a validation error, read: exception. Anyway, use cases B (ignoring null values for the moment), C, and D are covered here.
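
A minimal sketch of such a setter (class and property names are made up), throwing before the backing field changes, so the property keeps its old value in case of an error:

    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;

    public class PersonModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private string name;
        public string Name
        {
            get { return name; }
            set
            {
                if (string.IsNullOrEmpty(value))   // use case B: required field
                    throw new ValidationException("Name is required.");
                if (value.Length > 50)             // use case C: text length
                    throw new ValidationException("Name must not exceed 50 characters.");
                name = value;                      // only now change the value
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("Name"));
            }
        }
    }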

5: The setter will also raise the PropertyChanged event, and in turn any exception thrown in a respective handler will also take part in the validation handling. However, if we get that far, the property already has the invalid data set. Still, this may be another location for use cases B, C, and D.

6: Any validation error is announced to the control via the BindingValidationError event. Many controls have an error state and will show a red border and an error tooltip. Alternatively there is a ValidationSummary control that takes care of presenting the feedback.

7: Finally, there are some parts missing from the sequence diagram:

  • The data binding mechanism doesn’t check the validation attributes, so who does? The DataGrid is the only control that actively validates the row in edit mode, but one can trigger the validation oneself (see the sketch after this list).
    If done before writing the data, this approach bypasses any value and type conversion (another reason why value converters have no part in validation). Theoretically one could validate a property this way without setting the values at all. However, due to the type conversion issue, the only sensible approach is to validate afterwards (which is what the DataGrid does) and let the data binding mechanism take care of the conversions. This way one could trigger that validation generically in the PropertyChanged event handler.
  • Nobody cares about errors (exceptions) in property setters and event handlers (points 4 and 5) outside of this sequence. Meaning any other code manipulating the properties and causing an exception will tear the whole application down.
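
As mentioned in the first point, one could trigger the attribute checks oneself; here’s a sketch (model and property names are made up), e.g. called from a PropertyChanged event handler after data binding has already set and converted the value:

    using System.ComponentModel.DataAnnotations;

    // Throws a ValidationException if a validation attribute on Name is violated.
    var context = new ValidationContext(model, null, null) { MemberName = "Name" };
    Validator.ValidateProperty(model.Name, context);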

As a corollary: Validation works one binding at a time. This implies that mutually dependent properties (use case E) are not part of the equation. If property X caused an error that may be fixed by changing property Y, SL doesn’t help. One can work around this (by a combination of UI design and simulated PropertyChanged events), but it’s ugly work.

I left out use case F (null values) so far: Data-intensive applications may have to distinguish between no entry (empty string, null value) and the 0-value of the data type (e.g. „0“, „00:00“). The data type of the property would be a nullable value type, e.g. Nullable<Int32> or Nullable<DateTime>. The sad part: Type converters don’t handle null values, and neither does any other part of the data binding mechanism. Nullable types are treated like their non-nullable counterparts, thus no empty strings are allowed; worse, an empty string usually even causes an exception, and thus a validation error. The best way to solve this is a separate property of type string that handles the null and 0 representations and does the type conversion itself. However, it takes some effort to keep those two properties in sync and to propagate the PropertyChanged events of the other property.
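
A sketch of such a shadow property (the names and the OnPropertyChanged helper are made up):

    // A string property complementing a Nullable<int> property: it maps empty
    // input to null and does the conversion itself, so the type converter
    // never gets to see an empty string.
    private int? amount;
    public int? Amount
    {
        get { return amount; }
        set { amount = value; OnPropertyChanged("Amount"); OnPropertyChanged("AmountText"); }
    }
    public string AmountText
    {
        get { return amount.HasValue ? amount.Value.ToString() : string.Empty; }
        set
        {
            if (string.IsNullOrEmpty(value))
                amount = null;              // no input => null, not 0 (use case F)
            else
                amount = int.Parse(value);  // throws => validation error (use case A)
            OnPropertyChanged("AmountText");
            OnPropertyChanged("Amount");
        }
    }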

Consequences & Conclusions

Now that the input validation mechanism is understood (I hope), I can draw some conclusions:

To set or not to set…

There is some inconsistency as to whether an invalid value will actually be set in the data property or not: Exceptions from the type converter (syntactic validation) and from the property setter (semantic validation) generally leave the value as it is. Validations in property changed notifications – including validation attributes, if implemented that way – (also semantic validations) will set the property to the invalid value and complain only afterwards. This in turn means that our view model logic has to take the validity of the data into account, i.e. a save button should check some kind of IsValid property on the model.
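
One crude way to maintain such a flag (the control names are made up) is to count the errors announced via BindingValidationError at page level; note that the event only fires for bindings with NotifyOnValidationError set to true:

    int errorCount = 0;
    this.BindingValidationError += (sender, e) =>
    {
        if (e.Action == ValidationErrorEventAction.Added)
            errorCount++;
        else
            errorCount--;
        SaveButton.IsEnabled = (errorCount == 0);  // only save valid data
    };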

To throw or not to throw…

In SL3 the validation mechanism relies on exceptions. An adverse effect is that the first validation error hides subsequent errors, which may obscure the feedback for the user. Secondly, these exceptions are also thrown when I set the properties from code, with no data binding infrastructure readily available to catch them. Hence I have to be very careful, or some innocent code may take my whole application down.
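
In other words, code like the following needs its own safety net (model and variable names are made up):

    using System.ComponentModel.DataAnnotations;

    // No data binding infrastructure around to catch the exception...
    try
    {
        model.Name = importedValue;
    }
    catch (ValidationException)
    {
        // ...so handle the invalid value here, instead of letting it
        // tear the whole application down.
    }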

To depend or not to depend…

Validation in SL3 doesn’t cover mutually dependent properties properly. Some nasty stunts and compromises may get you a working solution, but it hurts to write that code (been there, done that!).

Anyway, with SL4 around the corner (see below) I’d refrain from putting too much effort into this issue right now.

To convert or not to convert…

As said before, value converters do not exactly work with validation, at least if the conversion itself may fail. Thus I’d refrain from using them altogether in case validation is also needed. I’d rather put the conversion logic into „shadow properties“, i.e. properties that replace/complement the original property, changing only the property type (usually to string). Another reason for these shadow properties would be the null/0 issue mentioned above.

Keeping these two properties connected and synchronized takes some effort: if one property changes, the other should raise its PropertyChanged event as well (as in the shadow property sketch above).

Silverlight 4

With SL4 we’ll get IDataErrorInfo and INotifyDataErrorInfo for asynchronous validation, as announced by Tim Heuer, and described in more detail by Fredrik Normén.

Contrary to SL3, this is no longer tied to exceptions. This will change the picture completely (a sketch of the new interface follows the list):

  • The first validation error won’t necessarily prevent other validations
  • Other code setting the value (outside of databinding) won’t have to deal with exceptions (and still partake in validation).
  • Asynchronous validation is covered.
  • Mutual dependencies can be addressed.
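
To give an impression of the new interface, here’s a minimal sketch of a model implementing INotifyDataErrorInfo (class and helper names are made up); errors are collected in a dictionary and announced via an event rather than thrown:

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.ComponentModel;

    public class PersonModel : INotifyDataErrorInfo
    {
        private readonly Dictionary<string, List<string>> errors =
            new Dictionary<string, List<string>>();

        public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

        public bool HasErrors
        {
            get { return errors.Count > 0; }
        }

        public IEnumerable GetErrors(string propertyName)
        {
            List<string> list;
            return errors.TryGetValue(propertyName ?? string.Empty, out list) ? list : null;
        }

        // Called from property setters; no exception involved, several
        // properties can be in error at the same time, and the error may
        // as well arrive asynchronously, e.g. from a server call.
        protected void SetError(string propertyName, string error)
        {
            if (error == null)
                errors.Remove(propertyName);
            else
                errors[propertyName] = new List<string> { error };
            if (ErrorsChanged != null)
                ErrorsChanged(this, new DataErrorsChangedEventArgs(propertyName));
        }
    }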

The one notable gap is the still-missing MetadataTypeAttribute, which would enable the usage of validation attributes for code-generated classes. The only other aspect not covered is the null values and value converter issue. But strictly speaking this is more an issue of type conversion than of validation itself. (Still a pesky issue.)

If SL4 changes nearly everything, then why this post in the first place? Well, apart from the fact that existing code doesn’t migrate itself: while the contents presented here may become less relevant, they remain valid.

Final Verdict

With SL3 I’d say about 80% of my validation demands are covered. As always, the remaining 20% don’t appear all that often, but when they do, it hurts.

Looking ahead, SL4 will provide a better foundation for validation and solve some issues – yet it still won’t cover every aspect. Still, having looked at SL4 may give some hints on what to implement for SL3 today (and what not), with a clean migration path to SL4.

That’s all for now folks,
AJ.NET
