AJ's blog

August 14, 2011

RIA with Silverlight–The Business Perspective

If you read this, chances are that you are a developer and that you like Silverlight. And why not? Exciting platform, great features, outstanding tooling. But! If you’re a corporate developer, have you sold it to your management yet? If not, this post is for you.

Silverlight is for RIA, and the domain of RIA applications is largely intranet or closed/controlled extranet user groups. These are typically found in larger enterprise companies – companies that usually have a vested interest in controlling their environment. And in terms of bringing software into production, and of operations and maintenance afterwards, every new platform is one platform too many.

So, the odd developer comes along and talks about this great new technology. Does management care? Probably not. What does it care about? Simple. Money! Money as in costs for deployment and user support, hardware and licenses to get the stuff up and running, operations and developer training, and maintenance. And money as in savings in the respective areas and – the cornerstone, as the business usually pays the bill – impact on the business. All usually subsumed under the term ROI.

About a year ago, I finished an analysis looking into RIA with Silverlight, conducted for a major customer – not from the point of view of the developer, but from that of business people, operations, and IT management.

So, let’s look briefly at each aspect…

User/Business perspective…

The business doesn’t exactly care about the Silverlight platform itself; it cares about its business benefits. Benefits as in improved user experience, streamlined business workflows, office integration, and so on. And since we had some lighthouse projects with Silverlight, we were able to collect some customers’ voices:

“This [streamlining with Silverlight] would reduce a [...] business process [...] from ~10 min to less than a minute.”

“Advanced user experience of Silverlight UI helps raising acceptance of new CRM system in business units”

“I was very impressed of the prototype implementation […] with Silverlight 3. Having analyzed the benefits of this technology I came to the conclusion that I want the […] development team to start using Silverlight as soon as possible. [...]”

This is also confirmed by the typical research companies, like Gartner or Forrester:

“Firms that measure the business impact of their RIAs say that rich applications meet or exceed their goals” (Forrester)

Operations perspective…

In production, the benefit of Silverlight applications (compared with equivalent conventional web-based applications) is reduced server and network utilization.

For example, we had a (small but non-trivial) reference application at our disposal, which was implemented in ASP.NET as well as Silverlight (as part of an analysis to check the feasibility of Silverlight for LOB applications). We measured a particular use case with both implementations – starting the application and going through 10 steps, including navigation, searches, and selections. Both applications were measured after a warm-up phase, meaning that the .xap file, as well as images and other static files, had already been cached.

The particular numbers don’t matter; what matters is the difference in the amount of data exchanged for each step (in the case of navigation, none at all for Silverlight). For the single steps:

And accumulated over time:

A reduction to roughly a tenth of the network utilization is quite an achievement – and considering that the Silverlight application wasn’t even optimized to use local session state and caching, the savings could be even higher.

This should have a direct impact on the number of machines you need in your web farm. Add the fact that session state management on the client drastically reduces the demand for ASP.NET session state – usually realized with a SQL Server (cluster) – and there is yet another entry on the savings list.

On the downside, there is the deployment of the Silverlight plugin. For managed clients – especially if outsourcing of the infrastructure comes into play – this may very well become a showstopper.

IT Management perspective…

With respect to development and maintenance, what IT management should care about includes things like the ability to deliver on business demands, development productivity, bug rates in production, costs for developer training, and so on.

Actually, these are all areas in which Silverlight can shine, compared with other RIA technologies as well as with the typical mix of web technologies:

  • Rich, consistent, homogeneous platform
    • .NET Framework (client and server), Visual Studio, debugger, C#
    • Reduced technology mix, fewer technology gaps, less broad skill demands
  • Improved code correctness and quality
    • Compiler checks, unit testing, code coverage, debugging, static code analysis, in-source documentation, …
  • Improved architecture and code
    • Clean concepts, coding patterns, and a clear separation of client code lead to better architectures
    • Powerful abstractions lead to less code (up to 50% in one project), less complexity, and fewer errors

Customers’ voices in this area:

“between our desktop app and the website, we estimate 50% re-use of code”

“a .NET developer can pretty much be dropped into a SL project. […] This is a huge deal […]”

“As alternative for Silverlight we considered Flash. […] only Silverlight could provide a consistent development platform (.NET/C#). […]”

 

Conclusion…

Taking all this together, and considering that enterprise companies usually have the tooling and test environments (well…) readily available, this all adds up to something like the following bill:

RIA Return on Investment

Whether the bill looks the same for your company or for one particular project depends, of course, on many things. Especially nowadays, with all the hubbub around HTML5 and mobile applications (without any relevant Silverlight support). But if RIA is what you need, then Silverlight will quite often yield far more benefits than any other option.

Still, you need to do your own evaluation. However, I hope to have given you some hints on what you might focus on, if you want to sell technology to the people who make platform decisions in your company.

The actual analysis was fairly detailed and customer specific. But we also prepared a neutralized/anonymized version, which we just made available for download (pdf). (Also directly at SDX.)

That’s all for now folks,
AJ.NET


November 7, 2010

Silverlight. What if?

Filed under: .NET, .NET Framework, Silverlight, Software Development — ajdotnet @ 6:39 pm

PDC happened, and Microsoft fouled up the Silverlight message big time. The talk was that Silverlight is dead on the client, based on Steve Ballmer not mentioning Silverlight and an interview with Bob Muglia published on Mary-Jo’s blog. Actually, even we got irritated emails from our own customers, whom we had just convinced that Silverlight is the right choice for RIA applications.

It took some time for Microsoft to realize what a fatal message they had sent, but eventually Muglia backpedaled, and from there on it seems that every other Microsoftie and close associate came out to deny the imminent death of Silverlight. So far I’ve stumbled over:

Among the non-Microsofties were:

So, just for the record: I sincerely believe that Microsoft is still very much committed to Silverlight as RIA technology for regular (read non-WP7) clients.

 

Still, as relieved as I am that this mess unfolded the way it did, it kept me thinking. What if? I mean, what if Microsoft actually had dropped Silverlight on the client…?

What if?

Just imagine… What would happen if Microsoft actually had changed their strategy? What if Silverlight were really dead on the client, and “only” the development platform for WP7?

  • For the vast majority of regular web applications: Nothing much would have happened. Use cases here mostly include video and advertisement, and due to the availability of the plugin this is the domain of Adobe Flash. With the advent of HTML5 and its coverage of video and graphics (canvas) – and not least the backing it gets from Apple – it should have been clear to everybody with open eyes that HTML5 will be the future in that area. But that will happen at the expense of Flash, not Silverlight!

But from there it would go downhill, and Microsoft would start losing…

  • Microsoft would lose a platform for non-typical demands on the web – demands that go far beyond what HTML5 can deliver: complex UIs such as car configurators (Mazda), or HD video streaming (maxdome). And it is Silverlight that is gaining momentum in this market, not Flash or some other technology.
     
  • Microsoft would lose their platform for RIA applications. In this area HTML5 is of no relevance at all; rather, Adobe AIR and JavaFX are the competition. And here Silverlight is way ahead of the competition, both technically in terms of business features and in adoption (usually intranet applications, but also SAP).
     
  • Microsoft would lose the developer base it relies on for Windows Phone 7, and with it WP7 itself. One of the big selling points for WP7 is the fact that it uses the very same platform as is used for RIA; thus every developer using Silverlight instantly becomes a WP7 developer. Ironically, focusing Silverlight on WP7 would take away that advantage: Silverlight would become a platform you have to learn before doing phone development. And since WP7 is just taking off, its future not yet certain, why take the risk? Why not learn Android instead? Who would then build the apps Microsoft needs?
     
  • It gets worse: Microsoft would lose credibility, and the trust of the developer community. This is not a technology at the end of its life cycle we’re talking about. It’s a technology just beginning to take off, and a technology they told us was strategic! A technology more and more developers are just beginning to adopt and invest in. If Microsoft dropped Silverlight – without any warning, I might add – how would those developers react? How could they trust Microsoft to be true to what it calls “strategic” in the future? I know what I would think.
     
  • Needless to say, this would affect their partners and customers in the same way. Who would invest in any platform if he cannot be sure the platform will be maintained for a reasonable time (rather than being dropped at the spur of the moment), and if the vendor cannot be trusted? The platform may be as good as can be; the first thing to care about is protecting one’s investments.
     
  • In the end this would include every API, every platform, every offering they have. This especially includes Azure – the very platform Microsoft is betting the company on. Which developer would work against that API? Which ISV would build his software on Azure? Which partner would counsel his customers to use Azure? Which enterprise would entrust Azure with his applications and his data? It’s a strategic platform for Microsoft, sure. But by dropping Silverlight they would just have taught us what that means.

Ultimately, Microsoft could lose… Microsoft. Because at the end of the day, credibility is the most important thing. That’s what made this whole thing a marketing fiasco – not the fact that a bunch of developers and companies had jumped on the wrong bandwagon. Lose credibility, and you lose the company.

Now, all this is hypothetical – I hope I made that clear with my statement above. And it is also just an opinion, and certainly exaggerated in some points. But it may very well be the kind of trash talk and FUD that Microsoft will be confronted with for some time. Which is why I believe that Microsoft will have to keep doing damage control for quite some time to come.

Yours sincerely,
AJ.NET


August 29, 2010

CommunicationException: NotFound

Filed under: .NET, .NET Framework, ASP.NET, C#, Silverlight, Software Development, WCF — ajdotnet @ 3:44 pm

CommunicationException: NotFound – that is the one exception that bugs every Silverlight developer sooner or later. Take the image from an earlier post:

app_error

This error essentially tells you that a server call somehow went wrong – which is obvious – and nothing beyond that, much less anything useful to diagnose the issue.

This is not exactly rocket science, but the question comes up regularly in the Silverlight forums, so I’m trying to convey the complete picture once and for all (and to only point to it once the question comes up again – and it will!).

I’m also violating my rule not to write about something that is readily available somewhere else. But I have the feeling that the available information is either limited to certain aspects, not conveying the complete picture, or hard to come by or to understand. Why else would the question come up that regularly?

 

The Root Cause

So, why the “NotFound”, anyway? Any HTTP response contains a numeric status code: 200 (OK) if everything went well, others for errors, redirections, caching, etc.; a list can be found on Wikipedia. Any error whatsoever results in a different status code, say 401 (Unauthorized), 404 (Not Found), or 503 (Service Unavailable).

Any plugin using the browser network stack (as Silverlight does by default), however, is also subject to some restrictions the browser imposes in the name of security: the browser passes 200 to the plugin in the good case, and 404 – without any further information – in any other case. And the plugin can do exactly NIL about it, as it never gets to see the original response.

Note: This is not Silverlight specific, but happens to every plugin that makes use of the browser network stack.

Generally speaking there are two different groups of issues that are reported as errors:

  1. Service errors: The service throws some kind of exception.
  2. Infrastructure issues: The service cannot be reached at all.

Since those two groups of issues have very different root causes, it makes sense to be able to at least tell them apart, if nothing else. This is already half of the diagnosis.

 

Handling Service Errors

Any exception thrown by a WCF service is by default returned as a service error (i.e. SOAP fault) with HTTP response code 500 (Internal Server Error). And as we established above, the Silverlight plugin never gets to see that error.

The recommended way to handle this situation is to tweak the HTTP response code to 200 (OK) and expect the Silverlight client code to be able to distinguish errors from valid results. Actually, this is already baked into WCF: a generated client proxy will deliver errors via the AsyncCompletedEventArgs.Error property – if we tweak the response code, that is. Fortunately, the extensible nature of WCF allows us to do just that using a behavior, which you can find readily available here.
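
For illustration, here is a minimal sketch of such an endpoint behavior, along the lines of the SilverlightFaultBehavior sample on MSDN (class names are mine):

using System.Net;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// rewrites the HTTP status code of fault replies from 500 to 200,
// so the SOAP fault actually reaches the Silverlight plugin
public class SilverlightFaultMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        return null; // nothing to do on the way in
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        if (reply != null && reply.IsFault)
        {
            var httpResponse = new HttpResponseMessageProperty { StatusCode = HttpStatusCode.OK };
            reply.Properties[HttpResponseMessageProperty.Name] = httpResponse;
        }
    }
}

// endpoint behavior hooking the inspector into the dispatch runtime
public class SilverlightFaultBehavior : IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new SilverlightFaultMessageInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

To apply it via web.config, the behavior additionally needs a BehaviorExtensionElement registration.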

Once we get errors through to Silverlight we can go ahead and make actual use of the error to further distinguish server errors:

  1. Business errors (e.g. server side validations) with additional information (like the property that was invalid).
  2. Generic business errors with no additional information.
  3. Technical errors on the server (database not available, NullReferenceException, …).

It’s the technical errors that will reveal more diagnostic information about the issue at hand, but let’s go through them one by one…

Business errors with additional information are actually part of the service’s contract; more to the point, the additional information constitutes the fault contract:

code_declared_fault1

code_declared_fault2

These faults are also called declared faults, for the very reason that they are part of the contract and declared in advance. Declared faults are thrown and handled as FaultException&lt;T&gt; (available as a full-blown .NET version on the server, and as a respective counterpart in Silverlight), with the additional information as the generic type parameter:

code_throw_declared
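
In code, the declaration and the throw might look like this (a sketch; the service contract and the exact shape of StandardFault are assumptions):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class StandardFault
{
    [DataMember]
    public string Message { get; set; }
}

[ServiceContract]
public interface IBookService
{
    [OperationContract]
    [FaultContract(typeof(StandardFault))] // declares the fault in advance
    Book GetBook(int bookId);
}

// in the service implementation:
public Book GetBook(int bookId)
{
    Book book = FindBook(bookId); // hypothetical lookup
    if (book == null)
        throw new FaultException<StandardFault>(
            new StandardFault { Message = "Unknown book id: " + bookId },
            new FaultReason("Unknown book id."));
    return book;
}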

Note: There’s no need to construct the FaultException from another exception. And of course this StandardFault class is rather simplistic, not covering more fine-grained information, e.g. invalid properties – which you may need in order to plug into the client side validation infrastructure. But that’s another post.

On the client side this information is available in a similar way, and can be used to give the user feedback:

code_handle_declared
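
For example, in the completed handler of a generated proxy call (a sketch; names assumed, matching the hypothetical book service above):

using System.ServiceModel;
using System.Windows;

void GetBookCompleted(object sender, GetBookCompletedEventArgs e)
{
    var declaredFault = e.Error as FaultException<StandardFault>;
    if (declaredFault != null)
    {
        // declared fault: the typed detail is available for user feedback
        MessageBox.Show(declaredFault.Detail.Message);
        return;
    }
    // other error cases are covered below...
}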

Generic business errors are not part of the service contract, hence they are called undeclared faults, and they cannot contain additional information beyond what they already carry. From a coding perspective they are represented by FaultException (the non-generic version, .NET and Silverlight) and thrown and handled similarly to declared faults:

code_handle_undeclared

However, the documentation states…

“In a service, use the FaultException class to create an untyped fault to return to the client for debugging purposes. […]

In general, it is strongly recommended that you use the FaultContractAttribute to design your services to return strongly typed SOAP faults (and not managed exception objects) for all fault cases in which you decide the client requires fault information. […]”

MSDN

 

That leaves arbitrary exceptions thrown for whatever reason in your service. WCF also translates them to (undeclared) faults, yet it uses the generic version of FaultException, with the predefined type ExceptionDetail. This way, any exception in the service can (or rather could) be picked up on the client:

code_handle_exception

However, while ExceptionDetail contains information about the exception type, stack trace, and so on, that fault by default contains only a generic text stating “The server was unable to process the request due to an internal error.”. This is exactly as it should be in production, where any further information might give the wrong person too much information. During development, however, it may make sense to get more information, to be able to diagnose these issues more quickly. To do that, the configuration has to be changed:

config_exception_debug
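
In web.config this amounts to something like the following sketch (the behavior name is assumed):

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="DebugServiceBehavior">
        <!-- development only; never enable this in production -->
        <serviceDebug includeExceptionDetailInFaults="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>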

And now the returned information contains various details about the original exception:

app_exception_info

BTW: To complete the error handling on the client, you need to address the situation where the issue occurred on the client itself, in which case the exception would not be of some FaultException type:

code_handle_client_error
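
Putting the cases together, a completed handler might branch along these lines (a sketch; handler name and the StandardFault type from above are assumed; using System.ComponentModel, System.Diagnostics, and System.ServiceModel):

void SomeCallCompleted(object sender, AsyncCompletedEventArgs e)
{
    if (e.Error == null)
        return; // success

    if (e.Error is FaultException<StandardFault>)
    {
        // declared fault: business error with typed details
    }
    else if (e.Error is FaultException<ExceptionDetail>)
    {
        // technical error on the server; details are only filled in
        // with includeExceptionDetailInFaults="true"
        var detail = ((FaultException<ExceptionDetail>)e.Error).Detail;
        Debug.WriteLine(detail.Type + ": " + detail.Message);
    }
    else if (e.Error is FaultException)
    {
        // undeclared fault: generic message only
    }
    else
    {
        // no fault at all: the error happened on the client itself
        // (network stack, serialization, ...)
    }
}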

This covers any exception thrown from the WCF service, provided it could be reached at all.

Alternative routes…

As I said, tweaking the HTTP response code is the recommended way to handle these errors. It is still a compromise on the protocol level to work around the browser network stack limitation. However, there are other workarounds, and for the sake of completeness:

  1. Compromise on the service’s contract: Rather than using the fault contract, one could include the error information in the regular data contract. This is typical for REST-style services; Amazon, for example, works that way. For my application services I am generally reluctant to make that compromise. The downside is that it doesn’t cover technical errors, but that can be remedied with a global try/catch in your service method.
  2. Avoid the browser network stack. Silverlight offers its own network stack implementation (client HTTP handling), though it defaults to using the browser stack. Using client HTTP handling, one can handle any HTTP response code, and it also offers more freedom regarding HTTP headers and methods; see the sketch below. The downside, however, is that we lose some of the features the browser adds to its network stack – cookie handling and the browser cache come to mind.
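
Opting into client HTTP handling is a one-time registration, typically done at application startup; a sketch:

using System.Net;
using System.Net.Browser;

// route all http/https requests through Silverlight's own network stack;
// must happen before the first request is made, e.g. in App startup
WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
WebRequest.RegisterPrefix("https://", WebRequestCreator.ClientHttp);

The prefix can also be narrowed down, e.g. to a single service URL, so that only selected calls bypass the browser stack.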

 

Handling Infrastructure Issues

If some issue prevented the service from being called at all, there is obviously no way for it to tweak the response. And unless we revert to client HTTP handling (which would be a rather drastic measure, given the implications), the Silverlight client gets no chance to look at it either. Hence, we cannot do anything about our CommunicationException: NotFound.

However, by tweaking the response code for service exceptions as proposed above, we at least make it immediately obvious (if only by indirect reasoning) that the remaining CommunicationException: NotFound is indeed an infrastructure issue.

The good news is that infrastructure issues usually carry enough information by themselves. They also appear rarely, but if they do, they usually are quite obvious (including being obviously linked to some recent change), affect every call (not just some), and are easily reproducible. Hence, using Fiddler, one can get information about the issue very easily (even in the localhost scenario).

The fact that the issue becomes pretty obvious pretty fast in turn makes it usually quite easy to attribute it to the actual cause – it must have been a change made very recently. Typical candidates are easy to track down:

  • Switching between Cassini and IIS. I have written about that here.
  • Changing some application URLs, e.g. giving the web project a root folder for Cassini, without updating the service references.
  • Generating or updating service proxies, but forgetting to change the generated URLs to relative addresses.
  • Visual Studio sometimes assigns a new port to Cassini if the project settings say “auto-assign port”, and the last used port is somehow blocked. This may happen if another Cassini instance still lingers around from the last debugging session.
  • Any change recently made to the protocol or IIS configuration.

This only gets dirty if the change was made by some team member and you have no way of knowing what he actually changed. But since this will likely affect the whole team, you will be in good company ;-)

 

Wrap up

There are two main issues with CommunicationException: NotFound:

  1. It doesn’t tell you anything and the slew of possible reasons makes it unnecessarily hard to diagnose the root cause.
  2. It prevents legitimate handling of business errors in a WCF/SOAP conformant way.

Both issues are addressed sufficiently by tweaking the HTTP response code of exceptions thrown within the service, which is simple enough. Hence the respective WCF endpoint behavior should be part of every Silverlight web project. And in case this is not possible for some reason, you can revert to client HTTP handling.

 

Much if not all of this information is available somewhere within the Silverlight documentation. However, each link I found covered only certain aspects or symptoms, and I hope I have provided a more complete picture of how to tackle (for the last time) CommunicationException: NotFound.

That’s all for now folks,
AJ.NET


August 8, 2010

Silverlight and Integrated Authentication

Filed under: .NET, .NET Framework, ASP.NET, C#, Silverlight, Software Development, WCF — ajdotnet @ 11:25 am

I’ve been meaning to write about this for a while, because it’s a recurring nuisance: using integrated authentication with Silverlight. More to the point, the nuisance is the differences between Cassini (Visual Studio Web Development Server) and IIS, in combination with some WCF configuration pitfalls for Silverlight-enabled WCF services….

Note: Apart from driving me crazy, I’ve been stumbling over this issue quite a few times in the Silverlight forums. Thus I’m going through this in detail, explaining one or the other seemingly obvious point…

Many ASP.NET LOB applications run on the intranet with Windows integrated authentication (see also here). This way the user is instantly available from HttpContext.User, e.g. for display, and can be subjected to application security via a RoleProvider. Silverlight, on the other hand, runs on the client. I have written about making the user and his roles available on the client before. However, the more important part is to have this information available in the WCF services serving the data and initiating server-side processing. And being WCF, they work a little differently from ASP.NET. Or not. Or only sometimes….

 

Starting with Cassini…

Let’s assume we are developing a Silverlight application, using the defaults, i.e. Cassini, and the templates Visual Studio offers for new items. When a “Silverlight-enabled WCF service” is created, it uses the following settings:

xml_serviceconfig

Now there’s (already) a choice to make: use ASP.NET compatibility, or stay WCF-only? (That question may be worth a separate post…) With ASP.NET compatibility, HttpContext.* is available within the service, including HttpContext.User. The WCF counterpart of the user is OperationContext.Current.ServiceSecurityContext.PrimaryIdentity. Take the following sample implementation to see which information is available during a respective call:

code_service
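
The sample implementation might look roughly like this (a sketch; service and operation names are made up):

using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class UserService
{
    [OperationContract]
    public string GetUserInfo()
    {
        // the ASP.NET view of the user (compatibility mode only)
        string aspNetUser = (HttpContext.Current != null && HttpContext.Current.User != null)
            ? HttpContext.Current.User.Identity.Name
            : "(not available)";

        // the WCF view of the user
        ServiceSecurityContext context = ServiceSecurityContext.Current;
        string wcfUser = (context != null && !context.IsAnonymous)
            ? context.PrimaryIdentity.Name
            : "(not available)";

        return "ASP.NET user: " + aspNetUser + " / WCF user: " + wcfUser;
    }
}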

The client code to test that service is as simple as can be:

code_page

The XAML is boilerplate enough, but for the sake of completeness:

xaml_page

I chose compatibility mode, and as the client shows, HttpContext.User is available out of the box:

app_default

Great, just what an ASP.NET developer is used to. But compatibility or not, it also shows that the WCF user is not available. But! WCF is configurable, and all we have to do is set the correct configuration. In this case we have to choose Ntlm as authentication scheme:

xml_serviceconfig_ntlm
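
With the binary-over-HTTP custom binding the template typically generates, the relevant part of the web.config might look like this (a sketch; the binding name is assumed):

<bindings>
  <customBinding>
    <binding name="BinaryHttpNtlm">
      <binaryMessageEncoding />
      <httpTransport authenticationScheme="Ntlm" />
    </binding>
  </customBinding>
</bindings>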

And look what we’ve got:

app_with_ntlm

Great, no problem at all. Now we have the baseline and are ready to move on to IIS.

 

Now for IIS…

Moving to IIS on the developer machine is simple. Just go to the web project settings, tab “Web”, choose “Use Local IIS Web Server”, optionally creating the respective virtual directory from here. In order to work against IIS, Visual Studio needs to run with administrative permissions.

vs_projectconfig

 

Moving from Cassini to IIS however has a slew of additional pitfalls:

  • The service URL
  • The IIS configuration for authentication
  • WCF service activation issues
  • The WCF authentication scheme
  • localhost vs. machine name

Usually they show up as a team (which obviously doesn’t help), but let’s look at them one by one.

 

The service URL

There’s one difference in how Cassini and IIS are addressed by the web project: projects usually run in Cassini in the root (i.e. localhost:12345/default.aspx), while in IIS they run in a virtual directory (e.g. localhost/MyApplication/default.aspx). This may affect you whenever you are dealing with absolute and relative URLs. It will at least cause the generated service URLs to differ by more than just the port information. Of course you could recreate the service references at that point, but you don’t want to do that every time you switch between Cassini and IIS, do you?

BTW: There’s a similar issue if you are running against IIS using localhost and you create a service reference: this may write the machine name into ServiceReferences.ClientConfig (depending on the proxy configuration), e.g. mymachine.networkname.com/application, rather than localhost. While these are semantically the same URLs, for Silverlight it qualifies as a cross-domain call. Consequently it will look for a clientaccesspolicy.xml file, which is probably not there, and react with a respective security exception.

The solution with Silverlight 3 is to dynamically adjust the endpoint of the client proxy in your code, pointing it to the service within the web application the Silverlight application was started from:

code_adjusturl
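
In code this amounts to rebasing the endpoint address on Application.Current.Host.Source, the URI the .xap was loaded from (proxy and service names assumed):

using System;
using System.ServiceModel;
using System.Windows;

void AdjustServiceEndpoint(UserServiceClient proxy)
{
    // Host.Source points to the .xap (usually in ClientBin), so
    // "../UserService.svc" resolves to the root of the hosting web
    Uri serviceUri = new Uri(Application.Current.Host.Source, "../UserService.svc");
    proxy.Endpoint.Address = new EndpointAddress(serviceUri);
}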

Silverlight 4 supports relative URLs out of the box:

xml_clientconfig
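
The endpoint in ServiceReferences.ClientConfig can then simply use a relative address (a sketch; names assumed):

<client>
  <endpoint address="../UserService.svc"
            binding="customBinding" bindingConfiguration="CustomBinding_UserService"
            contract="ServiceReference.UserService" name="CustomBinding_UserService" />
</client>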

Coming versions of the tooling will probably generate relative URLs in the first place; until then you’ll have to remember to adjust them every time you add or update a service reference.

 

The IIS configuration for authentication

This one may be obvious, but in combination with the others it may still bite you. Initially, starting the application will result in the notorious NotFound exception:

app_error

Note: To be able to handle server exceptions in Silverlight, you’ll have to overcome some IE plugin limitations inhibiting access to HTTP response 500. This can be achieved via a behavior, as described on MSDN. However, this addresses exceptions thrown by the service implementation and won’t help with the infrastructure-related errors I’m talking about here.

The eventlog actually contains the necessary information:

el_require_wa

No Windows Authentication? Well, while Cassini automatically runs in the context of the current user, IIS needs to be told explicitly that Windows Authentication is required. This is simple to fix: just enable Windows Authentication and disable Anonymous Authentication in the IIS configuration for the respective virtual directory.

 

WCF service activation issues

Running again will apparently not have changed anything at all, displaying the same error. With just a seemingly minor difference in the eventlog entry:

el_require_aa

That’s right. The service demands to run anonymously right after it just demanded to run authenticated. Call it schizophrenic.

To make a long story short, our service has two endpoints with conflicting demands: the regular service endpoint requires Windows Authentication, while the “mex” endpoint for service metadata requires Anonymous access. OK, we might re-enable anonymous access, but that wasn’t intended, so the way to work around this activation issue is to keep anonymous access disabled and remove the “mex” endpoint from the web.config:

xml_serviceconfig_nomex
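
That is, the service entry in web.config keeps only the application endpoint (a sketch; names assumed):

<services>
  <service name="MyWeb.UserService" behaviorConfiguration="MyWeb.UserServiceBehavior">
    <endpoint address="" binding="customBinding" bindingConfiguration="BinaryHttpNtlm"
              contract="MyWeb.UserService" />
    <!-- removed, as it conflicts with disabled anonymous access:
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    -->
  </service>
</services>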

Curiously generating the service reference still works in Visual Studio (perhaps with login dialogs, but still)…

 

The WCF authentication scheme

We’re still not there. The next issue when running the application might be a login credentials dialog when the service is called. And no matter what you type in, it still won’t work anyway, again with the NotFound exception. Unfortunately this time without an eventlog entry.

Again, to make it short: IIS doesn’t support Ntlm as the authentication scheme; we need to switch to Negotiate…

xml_serviceconfig_negotiate
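
Compared to the Ntlm sketch above, this is a one-attribute change in the binding:

<httpTransport authenticationScheme="Negotiate" />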

And now it works:

app_with_negotiate

Do I have to say that this configuration doesn’t run in Cassini? Right, every time you switch between IIS and Cassini you have to remember to adjust the configuration. There is another enumeration value for the authentication scheme, named IntegratedWindowsAuthentication, which would be nice – if it worked. Unfortunately those two values, Ntlm and Negotiate, are the only ones that work, under Cassini and IIS respectively.

 

localhost vs. machine name

Now it works, and we get the user information as needed. For a complete picture, however, we need to look at the difference between addressing the local web server via localhost and via the machine name: calls against localhost are optimized by the operating system to bypass some of the network protocol stack and work directly against the kernel mode driver (HTTP.SYS). This affects caching as well as HTTP sniffers like Fiddler, both of which work only via the machine name.

Note: This may actually be the very reason to switch to IIS early during development, when you need Fiddler as debugger (to check the actually exchanged information). Otherwise it’s later on, when you need it as profiler (to measure network utilization). Of course you’ll want http caching enabled and working by that time.

Of course you can put the machine name in the project settings, yet this would affect all team members. Perhaps a better idea is to have the page redirect dynamically:

code_redirect
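
A sketch of such a redirect in the hosting page’s code-behind (restricting it to debug builds is my assumption):

using System;
using System.Web.UI;

public partial class DefaultPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
#if DEBUG
        // leave localhost and re-address the page via the machine name,
        // so Fiddler and the browser cache see the traffic
        if (string.Equals(Request.Url.Host, "localhost", StringComparison.OrdinalIgnoreCase))
        {
            var builder = new UriBuilder(Request.Url) { Host = Environment.MachineName };
            Response.Redirect(builder.Uri.ToString());
        }
#endif
    }
}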

Another word on caching: In IIS6 caching has to be explicitly set for the .xap file, using 1 day as default cache time and allowing 1 minute at the minimum. During development this may be an issue. With IIS7 caching should be automatically set to CacheUntilChange and you may also set the cache time with a resolution in seconds.

 

Where are we?

OK, that was quite a list of pitfalls and differences between Cassini and IIS, even between IIS6 and IIS7. Some of this may go away with the new IIS Express. Some will stay and remain a nuisance. Visual Studio initially guides you towards using Cassini. At some point, however, you’ll have to switch to IIS. And since you cannot have both at the same time, this may be an issue, especially in development teams. My recommendation: start with IIS right away, or plan the switch as a concerted action within your team.

That’s all for now folks,
AJ.NET


July 24, 2010

Replacing Events by Callbacks

Filed under: .NET, .NET Framework, C#, Silverlight, Software Development — ajdotnet @ 1:57 pm

My last post laid out how the employment of events has changed recently. Most importantly, the broadcasting scenario – which was the major pattern so far – is no longer the only relevant pattern. Rather, the “event-based asynchronous pattern” (see MSDN) has emerged. Reasons include the inherently asynchronous nature of Silverlight as well as parallel patterns.

Now for the practical implications of this new pattern. Let’s look at an example to get the idea, and a better understanding of the consequences in code…

Let’s assume a component that is instantiated, does some work (presumably asynchronously, notifying about the progress), and provides the result at the end via an event. This is akin to making a server call, showing a confirmation message box, or the way the BackgroundWorker component works.

Example 1: Using events

First, implementing the component the classical way would look somewhat like this:

The result event needs a respective EventArgs class, the declaration and the trigger method:

public class WorkResultEventArgs : EventArgs
{
    public object ResultData { get; set; }
}

public class MyWorkingComponent1
{
    public event EventHandler<WorkResultEventArgs> WorkResult;
    
    protected virtual void OnWorkResult(object resultData)
    {
        if (WorkResult != null)
            WorkResult(this, new WorkResultEventArgs() { ResultData = resultData });
    } 
    …
}

A work progress event should go a little further and provide cancellation support:

public class WorkProgressEventArgs : CancelEventArgs
{
    public int Progress { get; set; }
    public object SomeData { get; set; }
}

public class MyWorkingComponent1
{
    public event EventHandler<WorkProgressEventArgs> WorkProgress;
    
    protected virtual bool OnWorkProgress(int progress, object someData)
    {
        if (WorkProgress == null)
            return true;
        
        var ea = new WorkProgressEventArgs() { Progress = progress, SomeData = someData, Cancel = false };
        WorkProgress(this, ea);
        return !ea.Cancel;
    }
    …
}

Now we only need the actual worker method:

public class MyWorkingComponent1
{
    …
     
    public void StartWork()
    {
        int sum = 0;
        for (int i = 0; i < 10; ++i)
        {
            sum += i;
            if (!OnWorkProgress(i, sum))
                return;
        }
        OnWorkResult(sum);
    }
}

Again, we may assume that there is some asynchronicity involved, e.g. the loop could contain a web request or something. But this example should do for the sake of the argument.

The usage (with Visual Studio’s support for creating the event handlers) would look like this:

public void Test1()
{
    var worker = new MyWorkingComponent1();
    worker.WorkProgress += new EventHandler<WorkProgressEventArgs>(Worker_WorkProgress);
    worker.WorkResult += new EventHandler<WorkResultEventArgs>(Worker_WorkResult);
    
    worker.StartWork();
}

void Worker_WorkProgress(object sender, WorkProgressEventArgs e)
{
    Console.WriteLine(e.Progress + ":" + e.SomeData);
}

void Worker_WorkResult(object sender, WorkResultEventArgs e)
{
    Console.WriteLine("Result:" + e.ResultData);
}

Creating the component, registering the event handlers, running the task, throwing the component away. The fact that events are multicast-capable is never used at all (and never will be, as the component is rather short-lived).

I guess we can agree that this is all very boilerplate. And all in all, that’s quite some overhead, from the component perspective as well as from the client code.

Example 2: Using callbacks

Now let’s try the new approach. Rather than defining events, I pass in two callbacks. The information that was carried in the EventArgs is moved to the parameter lists, so there is no need for those classes. The Cancel property is replaced by the return value of the callback. And since the client code always follows the same idiom, I expect the callbacks as constructor parameters, eliminating a source of errors along the way – something that is not possible with event handlers:

public class MyWorkingComponent2
{
    public Action<MyWorkingComponent2, object> WorkResult {get;set;} 
    public Func<MyWorkingComponent2, int, object, bool> WorkProgress { get; set; } 
         
    public MyWorkingComponent2( 
        Action<MyWorkingComponent2, object> workResult, 
        Func<MyWorkingComponent2, int, object, bool> workProgress) 
    { 
        WorkResult = workResult; 
        WorkProgress= workProgress; 
    } 
    …
}

The worker method changes only slightly:

public class MyWorkingComponent2
{
    …
         
    public void StartWork() 
    { 
        int sum = 0; 
        for (int i = 0; i < 10; ++i) 
        { 
            sum += i; 
            if (!WorkProgress(this, i, sum)) 
                return;
        } 
        WorkResult(this, sum); 
    } 
}

That’s it. No EventArgs classes, no events, no respective OnEventHappened methods. Granted, the callback declarations are a little more complex, and their parameters lack the IntelliSense information about the semantics of each parameter. But otherwise? Way shorter, way more concise, way less overhead. The actual worker method hasn’t changed at all, but all the event-related overhead is gone – which amounted to roughly 40% of the lines of code.

Now the client code, first only slightly adapted:

public void Test1()
{
    var worker = new MyWorkingComponent2(
        (sender, resultData) => Worker_WorkResult(sender, resultData),
        (sender, progress, someData) => Worker_WorkProgress(sender, progress, someData)
    );
    
    worker.StartWork();
}

bool Worker_WorkProgress(object sender, int progress, object someData)
{
    Console.WriteLine(progress + ":" + someData);
    return true;
}

void Worker_WorkResult(object sender, object resultData)
{
    Console.WriteLine("Result:" + resultData);
}

As you can see, it didn’t change that much. But passing in the lambdas via the constructor fits the use case far better than events, and it is even more robust, as I cannot forget to pass in a callback via the constructor, the way I can forget to register an event handler.

Speaking of lambdas, and since the implementation is that simple, we can even simplify the client code further by omitting those two handler methods:

public void Test1()
{
    var worker = new MyWorkingComponent2(
        (sender, resultData) => { Console.WriteLine("Result:" + resultData); },
        (sender, progress, someData) => { Console.WriteLine(progress + ":" + someData); return true; }
    );
    
    worker.StartWork();
}

Alright, this would have been possible with events as well, if you used anonymous methods. But Visual Studio guides you otherwise, and early examples of anonymous methods (before we had lambdas) were rather ugly, so I doubt that can be seen as a valid counterargument. Here, however, lambdas can be seen as the typical means of choice.

Verdict

Neat? Net result:

  • I’m writing less code on the event source side, including no longer declaring EventArgs classes.
  • I’m writing less code on the event sink side.
  • The handler methods can use clean parameter lists (rather than EventArgs).
  • I’m eliminating the risk of forgetting to register event handlers by making the callbacks explicit parameters.
  • I’m eliminating the danger of leaks due to failing to consistently deregister event handlers.
    • (That was not addressed in the example, but still.)
  • When chaining together several of these steps I can make the logic – especially conditional processing – more explicit and concise.
    • Events would either require setting up beforehand (partly unnecessary overhead), or setup on demand, cluttering the handler with registration and deregistration code.

All in all, this is way more readable, way more robust, and way more efficient than using events.

I for one have begun to adopt this scheme quite liberally. My Silverlight bookshelf application has wrappers for service calls that translate the event to callbacks (several actually, including error handling and other demands). My dialogs always take callbacks for OK and Cancel. I so far have two ICommand implementations, both take callbacks (one with parameters, the other without). I even have a PropertyObserver class that translates a PropertyChanged event into a callback.
Actual event handlers? Apart from the wrappers enabling what I just presented, only a few reacting to control events.

In other words: This is not just an interesting technical detail. It really changes the way I’m addressing certain demands.

That’s all for now folks,
AJ.NET


July 18, 2010

Employment of Events is changing…

Filed under: .NET, .NET Framework, C#, Silverlight, Software Development — ajdotnet @ 3:27 pm

The other day I had a little chat with a colleague. It was about one of his worker components and how it used events to communicate intermediate state and the final result. About event registration and deregistration, and the ungainly code resulting from it. When I suggested using callbacks instead of events, he quickly jumped on the bandwagon; a few days later I got a single note on Messenger: „Callbacks are WAY COOL!“

That got me thinking. Why did I recommend callbacks? When and why did I abandon events? What’s wrong with events anyway?

Well, there’s nothing wrong with events at all. It’s just that Silverlight, asynchronous processing, and multithreaded processing have changed the picture (and I may have worked too much in those areas lately ;-) ). And this is the deal:

  1. Until recently I used to “see” (read: write code for/against) events mainly from the event consumer side. WinForms, WebForms; register handler, react to something. That kind of stuff.
    • Since “recently” I kind of had to do that less often. Why? Silverlight data binding solved many of the demands I previously used to address with event handlers. Making a control invisible for example. (Events still drive databinding, but at the same time databinding shields them away from me.)
  2. Also since “recently” I have had to implement the providing part quite a bit more often. Why? Silverlight databinding relies on certain interfaces that contain event declarations, namely INotifyPropertyChanged, INotifyCollectionChanged, and INotifyDataErrorInfo.
  3. And the “event-based asynchronous pattern”. Yep. We’ll get to that one.

OK, let’s try to classify these scenarios.

Broadcasts

The first two points are just two sides of the same coin: The radio broadcasting scenario.

  • Some component wants to advertise interactions or state changes; hence it broadcasts them by way of events.
  • Some client code needs to get notified about one or the other of these events; hence it subscribes to the event by way of a respective event handler, to consume it from there on.

As with radio, the broadcaster broadcasts and doesn’t care whether anyone listens. As with radio, the receiver is turned on and listens as long as something comes in. Well, the analogy stops at the lifetime: event source and consumers tend to have similar lifetimes.

Passing the Baton

The 3rd point is actually a quite different scenario: Start some work and have an event notify me about the result (and sometimes about intermediate state). Once I receive the result I let go of the participant and pass the baton on to the next piece of work.

Same as in a relay run, each participant does one job and once it’s done, he is out of business. Same as in a relay run, participation is obligatory – take someone out (or put something in his way) and the whole chain collapses.

Needless to say that this is nothing like the broadcasting scenario…

Usually the reason for the event approach (rather than simple return values) is asynchronous processing; and in fact this is not a particularly new pattern – BackgroundWorker works accordingly. On the other hand, the pattern is still evolving, as the usual pattern for asynchronous work has been no pattern at all (i.e. leaving it to the developer, as Thread or ThreadPool do), or the IAsyncResult pattern (relying on a wait handle). New developments, however, have started to employ events more often, and Microsoft has actually dubbed this the “event-based asynchronous pattern” (see MSDN).

One area which relies heavily on this pattern is server requests in Silverlight, via WebClient or generated proxies. But it doesn’t stop there, as Silverlight is asynchronous by nature, rather than by exception: showing a modal dialog, navigating to another page, (down)loading an assembly. And quite often these single incidents are chained together to form a bigger logical program flow, for example:

  • The user clicks the delete button –> the application shows the confirmation dialog –> a call to the delete operation is made –> IF it succeeds (the code navigates to a list page –> …) OTHERWISE (an error message box is shown –> …)

Each arrow represents an event-based “hop” to bridge some “asynchronicity gap” – essentially turning the logically sequential chain into a decoupled, temporary register-and-deregister event nightmare.

Coming back to the beginning of the post: this is the scenario I was discussing with my colleague. And doing this with a whole bunch of events and respective handler methods is simply awkward, especially if you also have to provide the event sources, usually with respective EventArgs classes. And the issue of having to consistently deregister the event handlers in order to avoid memory leaks becomes more prevalent.

Changing the Picture…

Inevitably I got annoyed with the setup/teardown orgies, and eventually I began to abandon events in this case and started passing simple callbacks along. Like this:

void DeleteBook(Book book)
{
    string msg = "Delete book #" + book.Inventory + ", '" + book.Title + "'?";
    MessageBoxes.Instance.ShowConfirm(null, msg, 
        ok => { if (ok) BeginDeleteBook(book); }
    );
}

void BeginDeleteBook(Book book)
{
    this.DeleteBookCall.Invoke(book.BookID,
        ea => NavigateToBooks());
}

And actually I’m not the only one following this approach. The Task Parallel Library (TPL), for example, has already started to make heavy use of callbacks. So this is definitely not limited to Silverlight…

Note: This also lays the ground for the next evolutionary step: coroutines.

Caliburn has a nice example of what this looks like; a little weird at first glance actually, but it collects all that logically sequential but technically asynchronous control flow in one method. Jeremy digs a little deeper into the topic in his post “Sequential Asynchronous Workflows in Silverlight using Coroutines”. 

Anyway, even without going into coroutines, the callbacks over events approach has its merits in terms of coding efficiency. I’ll provide a motivating example, next post.

That’s all for now folks,
AJ.NET


June 12, 2010

The Future (of) UI

Filed under: Software Architecture, Software Development — ajdotnet @ 3:54 pm

The way we think about user interaction – actually the user interfaces themselves – is changing. The iPhone seems to be the protagonist teaching us new ways to interact with phones, and the iPad even coins a new form factor driving this trend further. Touch and multi-touch are becoming mainstream because vendors have begun to create operating systems, UI metaphors, and backing services around these interaction principles – rather than slightly adjusting OSes/UIs built for conventional PCs with keyboard and mouse.

This is actually a defining feature of the next evolutionary step of UI, namely Natural User Interfaces (NUI). As wikipedia states…

A NUI relies on a user being able to carry out relatively natural motions, movements or gestures that they quickly discover control the computer application or manipulate the on-screen content. The most descriptive identifier of a NUI is the lack of a physical keyboard and/or mouse. (wikipedia)

While Apple seems to take the lead in public perception, Microsoft has a rather mixed lineup: with smartphones, Windows Phone 7 seems a bit like “taking the last chance”, even if the move to Silverlight as a platform is a bold one and (IMO) a good one. On the other hand, they just managed to drop the very promising – by itself as well as positioned against the iPad – Courier project. As a colleague stated in our internal company blog: “I’m frustrated. Period.” And lastly, Microsoft has Surface, which has no competition I’m aware of at all (unless you want to build one yourself).

Surface is not only commercially available, it also adds the capability to detect objects placed on the table and thus goes beyond plain multi touch. And it is subject to further research, as this excerpt from PDC09 shows: (better quality here, at 83:00)

 

Looking Ahead

Well, this is kind of what we have today. If you would like to see where this might be heading, have a look at the “Microsoft Gives Glimpse Into the Future” talk Stephen Elop held in early ’09. It’s a 36-minute video, but you may jump to 14:00 and watch the presentation of “Glimpse in the future”. What’s presented there is impressive: live translations enabling people to talk with each other in different languages. Surface-like tables interacting directly with iPad-like multi-touch tablets placed on them. Minority Report-like control. Augmented reality. … It’s even more impressive since everything is backed afterwards by actually existing (if early-stage) technology. There’s a shortened and also an extended version available on YouTube:

 

Speaking of Minority Report. Another great video comes from John Underkoffler; John has been the science adviser for that movie and he does the whole presentation with exactly that technology!

This talk is certainly worth watching, as he makes some very interesting observations (in fact, watching this video triggered this post; thanks Daniel). His final prediction is … ambitious: “I think in 5 years time, when you buy a computer, you’ll get this.”

Is that cool or what?

 

Second Thoughts 

Well, as they say:

“Prediction is very difficult, especially about the future.” (various)

There’s one thing I don’t like about those predictions. They are (deliberately?) incomplete. They certainly shine in new fields of applications for computers, new degrees of collaboration, new ways of interaction. Like home integration, meeting areas with huge collaboration screens, geo services and augmented reality, or simply navigating and reshaping existing data. But in their aim to show new ways of doing things, they neglect the “old”, conventional demands, demands that won’t go away.

The very fact that these NUI approaches – touch, gestures, even voice – are defined by “the lack of a physical keyboard and/or mouse” (and in case you didn’t notice: NONE of the above videos had a keyboard in it!) renders them inappropriate for a whole bunch of scenarios. Can you imagine a secretary typing on a virtual keyboard? A call center clerk waving at his screen while he talks to a customer? A banker shooing stock rates up and down? A programmer snipping his code into place? Cool as all that Minority Report and other stuff may seem, I have a hard time imagining anyone whose daily job today requires a keyboard to a substantial degree using some other “device” instead.

In the end we’ll probably see both. NUI approaches are going to spread, new devices targeted at different scenarios simply require different notions of user interaction. But they are not going to replace today’s conventional computers, they are going to be a complement, actually even a necessary one. Another necessary complement is the mutual integration with each other, the internet/cloud, and social platforms, but that’s a different story.

For us developers this will be the actual challenge: developing on conventional machines for devices and environments that have totally different ideas of how an application should look like and interact with its surroundings. Testing is going to be a bitch.

That’s all for now folks,
AJ.NET


January 24, 2010

Silverlight Bits&Pieces: The First Steps with Visual State Manager

Filed under: .NET, .NET Framework, C#, Design Time, Silverlight, Software Development — ajdotnet @ 5:41 pm

Visual State Manager. Easy to understand in principle. But it takes some getting used to before you can actually use it…

There is a lot of information about VSM available, e.g. a quick introduction at silverlight.net, and when I first started to tackle VSM I read it all and then some (felt that way, anyway). Still, my first experiments with VSM failed miserably – and they did so because of a lack of understanding. The one main issue for me was that all the articles and screencasts explained what the VSM does and what great effects one (well, someone else) could achieve with it, yet all with the emphasis on ‘what’ (and usually all at once), not on ‘how’ (in small digestible chunks).

So, if you have looked into VSM and didn’t quite get it (or only just), then this post may be for you. First I’m going to dive into some code; afterwards I’ll try to offer a few hints that should help you get started with VSM.

The Crash Course

Controls have states (like normal, pressed, or focused, for a button); states are represented in VSM as Visual States, organized in distinct State Groups. State Groups separate mutually independent Visual States (e.g. the pressed and mouse-over states are independent of the focus state). Silverlight allows defining these states in templates, along with State Transitions that define how the state change is to happen (e.g. an instant change or some animation).

Silverlight also provides an attribute, namely TemplateVisualStateAttribute, to declare the supported Visual States and Groups on a control. Keep in mind however that this is merely for tool support and perhaps documentation. At runtime, the presence or absence of these attributes is of no consequence at all.

The Sample

OK, let’s see some code. I’ll build on the image button from my last post. It should support three different images, as well as a focused rectangle. (I’ll leave out the text though. I don’t need it and it would complicate matters for this post without gain.) The button class already defines the Visual States and Groups, I’ll stick with that.

First I extended the image button control class to support three dependency properties, namely NormalImage, HoverImage, and DisabledImage. (I could have added a ClickedImage, but I’ll solve that otherwise.) To make a long story short, here is the custom class, defining the necessary dependency properties:

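
A sketch of what that class might look like (the ImageButton name and its Button base class are assumptions based on the previous post):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public class ImageButton : Button
{
    public static readonly DependencyProperty NormalImageProperty =
        DependencyProperty.Register("NormalImage", typeof(ImageSource), typeof(ImageButton), null);
    public static readonly DependencyProperty HoverImageProperty =
        DependencyProperty.Register("HoverImage", typeof(ImageSource), typeof(ImageButton), null);
    public static readonly DependencyProperty DisabledImageProperty =
        DependencyProperty.Register("DisabledImage", typeof(ImageSource), typeof(ImageButton), null);

    // CLR wrappers around the dependency properties
    public ImageSource NormalImage
    {
        get { return (ImageSource)GetValue(NormalImageProperty); }
        set { SetValue(NormalImageProperty, value); }
    }

    public ImageSource HoverImage
    {
        get { return (ImageSource)GetValue(HoverImageProperty); }
        set { SetValue(HoverImageProperty, value); }
    }

    public ImageSource DisabledImage
    {
        get { return (ImageSource)GetValue(DisabledImageProperty); }
        set { SetValue(DisabledImageProperty, value); }
    }
}
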
The button inherits the VSM attributes from its base class, thus I don’t have to reiterate them here.

Having done that, I can already set these properties in XAML:

Of course it still uses only the one image I gave it last time. So, the next step is to extend the template with images and other parts to accommodate the Visual States. If I can live with raw XAML, I can do this using Visual Studio 2008. To do it in a design view, I need Visual Studio 2010 or Blend.

In any case, this is conventional designing; it’s not yet time to look at the “States” pane in Blend! And when it comes to that, VS2010 is also out of the game (at least in beta 2).

The resulting XAML looks somewhat like this:

Note that I used a Grid to stack the images on top of each other. Note also that the default settings for all parts are compliant with my “normal” button state, i.e. the second and third image are invisible, and so is the focus rectangle.

Now that my control contains all the primitives I need, it’s time to enter VSM. To prepare for that, I manually provided the State Groups and Visual States. This ensures that I get all states (Blend would only add the ones it manipulates, and since the normal state is going to be empty, it would always be missing), and in the order I prefer.
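
For a Silverlight Button, the manually provided skeleton inside the template’s root element might look like this (the state names are the ones the Button base class declares; element naming is mine):

<Grid x:Name="RootElement">
  <VisualStateManager.VisualStateGroups>
    <VisualStateGroup x:Name="CommonStates">
      <VisualState x:Name="Normal" />
      <VisualState x:Name="MouseOver" />
      <VisualState x:Name="Pressed" />
      <VisualState x:Name="Disabled" />
    </VisualStateGroup>
    <VisualStateGroup x:Name="FocusStates">
      <VisualState x:Name="Focused" />
      <VisualState x:Name="Unfocused" />
    </VisualStateGroup>
  </VisualStateManager.VisualStateGroups>
  <!-- the stacked images and the focus rectangle go here -->
</Grid>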

Now is the time to enter Blend, select the button, then the current template, and have a look at the “States” pane.

Note that the “States” pane contains the State Groups with their respective Visual States. Blend gets this information from the TemplateVisualStateAttribute on the class, but it also includes additional states and groups it finds in the template XAML. Additionally there is a “pseudo-state” named “Base”, which is simply the “state” the control is in without putting it into a distinct state.

Now I went ahead, selected the state in question in Blend, and changed the controls to match my design. Since I had the desired design figured out before I started with VSM (down to which properties to change for a transition) this was as simple as can be. For the mouse over state:

Note how Blend shows the design area with a red border and a “recording mode” sign. Every change to the template is now recorded as a state change for the selected state, mouse over in this case. (You could switch recording off by clicking on the red dot in the upper left and manipulate the properties ordinarily; yet selecting another state will switch it back on, so this is OK for some quick fixes, but too error prone for general editing.)
Note also that the “Objects” pane shows not only the controls, but marks those affected by the currently manipulated state with a red dot and puts the manipulated properties beneath them. In case you accidentally manipulated the wrong property, you should remove this entry rather than simply change it back; otherwise the (trivial) transition will remain in the XAML.

Just setting the visibility of two images results in some verbose and (at first sight) rather confusing XAML:
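It boils down to two zero-length animations per state, roughly like this (reconstructed; this is the kind of XAML Blend records for a simple visibility change):

    <VisualState x:Name="MouseOver">
        <Storyboard>
            <ObjectAnimationUsingKeyFrames Duration="0"
                    Storyboard.TargetName="NormalImage"
                    Storyboard.TargetProperty="Visibility">
                <DiscreteObjectKeyFrame KeyTime="0">
                    <DiscreteObjectKeyFrame.Value>
                        <Visibility>Collapsed</Visibility>
                    </DiscreteObjectKeyFrame.Value>
                </DiscreteObjectKeyFrame>
            </ObjectAnimationUsingKeyFrames>
            <ObjectAnimationUsingKeyFrames Duration="0"
                    Storyboard.TargetName="HooverImage"
                    Storyboard.TargetProperty="Visibility">
                <DiscreteObjectKeyFrame KeyTime="0">
                    <DiscreteObjectKeyFrame.Value>
                        <Visibility>Visible</Visibility>
                    </DiscreteObjectKeyFrame.Value>
                </DiscreteObjectKeyFrame>
            </ObjectAnimationUsingKeyFrames>
        </Storyboard>
    </VisualState>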

The disabled state looks similar. The click state is represented by the hover image, which is moved slightly off center to achieve the click effect (“Properties” pane, “Transform”).

And here’s the resulting button in action, showing normal, hover, disabled, and clicked state:

Lessons Learned

What I just presented was a fast forward replay of employing the VSM. A minimalistic use of VSM, actually, since I left out quite a bit of its functionality, most notably transitions with animations. Still, I applied some guidelines that I have learned to value when using VSM, and that I’d like to point out.

So, here are some of the twists that made VSM useful to (or rather, usable by) me, some of them learned the hard way.

:idea: Hint 0 (The most general hint): States need careful planning.

If you don’t know yet what the control should look like in the various states, you should shy away from the “States” pane in Blend. Start with conventionally designing the control. It may even help to design a separate control template per state and merge them only after the design has reached a stable state.

:idea: Hint 1: Don’t look at existing controls.

It’s tempting to look at the existing templates; with Blend the XAML is only a mouse click away. Don’t. The button template has ~100 LOC, and I’ve seen others with more than 300 LOC. What’s more, they are fully styled, meaning they probably employ every feature there is, caring for sophisticated visual effects but not exactly for the poor developer trying to deduce the workings from looking at the XAML.

:idea: Hint 2: Start simple.

Many samples quickly jump to animations used for transitions, easing functions, and slicing French fries. For me, one key to understanding VSM was to stick to the minimum at first. States. Transitions only as simple as possible. Period.

:idea: Hint 3: VSM is not about creating states. It is about styling them.

My initial thinking was “I have a button with a normal image; in disabled mode I need to have a disabled image…”. This led to all kinds of mind twists, like “how do I create an image control during a state change?”, “should I rather replace the image URL of the existing control, and how?”, and others. It was a crucial part of understanding when I realized that I do not have a button with one image in one state and another image in another state. What I have is a button with three images in all states, as presented above. The difference between the states is merely which of these images is visible and which is not.

:idea: Hint 4: When designing a new control, avoid the VSM “States” pane for quite some time.

There is one pitfall I managed to hit several times in the beginning. I started Blend, selected the particular state I’m interested in, and tried to design my control for that state. This is futile, because Blend does not actually design the control (as in setting property values); rather, it designs the transitions to these values. (You could switch off recording mode, but Blend really insists on switching it on again and again and again.)

Therefore I generally design my control “conventionally”: I place the normal image in a grid and style it; then I make it invisible (kind of a manual state change) and do the same with the next image; and so forth for all states. Only when I’m done with this do I allow myself to even look at the VSM support in Blend.

:idea: Hint 5: The visual state for the normal state is always there. And always empty.

Worse, you’ll have to include it manually in XAML, since Blend doesn’t put it there… :-(

“Normal state” is the default state of a state group. Each state group has one; it doesn’t have to be named “Normal”, but it has to exist. This is the state the control is in by default, after initially displaying the control and before VSM has even touched it. The one that is denoted as “Base” in the “States” pane.

The “normal” state has to be declared, because otherwise the control will not be drawn correctly after it has been in a different state, say normal –> hover –> normal. And it has to be empty, because otherwise the control would show up in an undefined state (at least according to VSM), one that can never be reached again once the control has been in a different state. This would lead to all kinds of inconsistencies.

Lemma: All controls in the template initially have property values compliant with the normal state. In the image button example: the normal image is visible, the other images are invisible.

:idea: Hint 6: VSM is not about designing states. It’s about designing differences between the state in question and the normal state.

Suppose I have the control designed the “conventional” way, with the looks of the normal state; I also have the controls for the other states, still invisible. Now is the time to enter Blend and the “States” pane. Choose the state in question, e.g. mouse over, and manipulate exactly those properties that constitute the difference between the normal state and the mouse over state, i.e. set the normal image to invisible and the hover image to visible. Blend will record the respective transitions.

It’s always this difference, always the normal state vs. the state in question. Only once you have achieved the first belt in using VSM and signed the no-liability waiver should you go ahead and attack transitions between specific states, for complexity will explode.

:idea: Hint 7: State groups are mutually independent. And the same is mandatory for the state differences.

Never let different state groups manipulate the same properties. For example, the button addresses common states and focus states independently. It would not work to implement the focused state by setting the hover image visible, as this would collide with the mouse over state and eventually result in undefined behavior. The focused state could show a focus rectangle instead. Or it could actually even manipulate the hover image, as long as it is not the visibility property used by the mouse over state. (Whether that makes sense is a different question, though.)

:idea: Hint 8: Visual states are not put in stone.

Controls usually have visual states defined via attributes. However, this is just some information used by some tools (such as Blend), but of no consequence otherwise. VisualStateManager.GoToState is what triggers a transition, and it may or may not be called from the control itself. The visual states and groups defined in the template are merely backing information used at runtime. Should the need arise, I could define a new state group, say “Age”, with two visual states “YoungAge” and “OldAge” in XAML. Then I could go ahead and call the VSM from the code behind file of my page class to change the state. And after 5 minutes of inactivity my button could grow a beard.
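A sketch of that scenario (the state group, the state names, and the timer handling are of course hypothetical):

    // code behind of the page; assumes the template of myImageButton defines
    // a state group "Age" with visual states "YoungAge" and "OldAge"
    private void OnInactivityTimer(object sender, EventArgs e)
    {
        // third parameter: use the transition defined in the template, if any
        VisualStateManager.GoToState(myImageButton, "OldAge", true);
    }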

Wrap-Up

So far the hints. But what about more complex demands? I have barely touched the eye-catching features at all.

In my opinion, what I just presented covers the first steps and provides a sound understanding of the core VSM principles. Once this level of understanding is reached, one can go ahead and explore other areas.

And there certainly are “other areas”. I already mentioned state specific transitions; animated transitions are another topic. If you need an example of what’s possible, have a look at this online sample. This is VSM in action, admittedly complemented with some code, but surprisingly little. (You can dig into it by starting Blend and opening the sample project “ColorSwatchSL”.)


And from now on it’s no longer a lack of understanding that keeps me from doing things. It’s my incompetence as a designer… ;-)

That’s all for now folks,
AJ.NET


January 17, 2010

Silverlight Bits&Pieces: Derived Custom Controls

Filed under: .NET, C#, Design Time, Silverlight, Software Development — ajdotnet @ 3:06 pm

OK, let’s put the last findings to good use and create a derived control that carries its own default template. This post is again about some fairly basic stuff, but it is the logical next step.

My use case: I wanted/needed a simple image button, one that simply takes the image as a property, rather than having to manipulate the content for every button anew. So what do I need?

  1. A derived class.
  2. A dependency property for the image URL.
  3. A new default template.

Deriving a SL control is just a matter of clicking Add / New Item in the Solution Explorer and choosing “Silverlight Templated Control”. This will actually create two things (and address the default template requirement as well…):

  1. A class derived from Control, placed in a .cs file in the folder I used to create the new item.
  2. A XAML file named Themes/Generic.xaml is created (or extended if it already exists) and contains a style and template for the new control.

Now, the implied behavior is that a custom control sets its DefaultStyleKey property to a type (usually its own). At runtime, SL determines the default style of a control by using this type’s assembly to read the Themes/Generic.xaml content and picking the style that has the type as target type.
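In code this amounts to a single line in the constructor (shown here for the ImageButton of this post):

    public ImageButton()
    {
        // look up the default style in this assembly's Themes/Generic.xaml,
        // using the style whose TargetType is ImageButton
        this.DefaultStyleKey = typeof(ImageButton);
    }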

Note how the style in Themes/Generic.xaml uses an XML namespace to map the class name to a C# namespace:
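It looks roughly like this (the clr-namespace is a placeholder for the project’s namespace):

    <ResourceDictionary
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:MyControls">
        <Style TargetType="local:ImageButton">
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="local:ImageButton">
                        <!-- default template content -->
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>
    </ResourceDictionary>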

Note: One consequence of this is that all styles of all custom controls in an assembly will end up in the same Generic.xaml file. This is usually not an issue, even if implementation and style/template reside in relatively remote files. However, if the assembly grows to accommodate a bigger number of controls, it might make sense to put the control specific resources into separate .xaml files right beside the implementation. Loading the template from a .xaml resource is no big deal; all you need is GetManifestResourceStream and XamlReader.Load, described in more detail here.
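A sketch of such a loading helper (the resource name and location are assumptions; note that the Silverlight version of XamlReader.Load takes a string):

    static ControlTemplate LoadTemplate()
    {
        // assumes Templates/ImageButtonTemplate.xaml is compiled as an embedded resource
        var assembly = typeof(ImageButton).Assembly;
        using (var stream = assembly.GetManifestResourceStream(
            "MyControls.Templates.ImageButtonTemplate.xaml"))
        using (var reader = new System.IO.StreamReader(stream))
        {
            return (ControlTemplate)System.Windows.Markup.XamlReader.Load(reader.ReadToEnd());
        }
    }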

The next step is to change the base class (I want to extend Button, not write it completely anew) and to provide the dependency property for the image. Having a peek at the Source property of the Image control tells me that ImageSource is the adequate type.
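The result is short enough to show in full (a sketch; the getter/setter pair is the usual dependency property boilerplate):

    public class ImageButton : Button
    {
        public static readonly DependencyProperty NormalImageProperty =
            DependencyProperty.Register("NormalImage", typeof(ImageSource), typeof(ImageButton), null);

        public ImageSource NormalImage
        {
            get { return (ImageSource)GetValue(NormalImageProperty); }
            set { SetValue(NormalImageProperty, value); }
        }

        public ImageButton()
        {
            this.DefaultStyleKey = typeof(ImageButton);
        }
    }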

Now, let’s customize the appearance. Unfortunately Blend cannot deduce the dependency between the control and the style in Themes/Generic.xaml. Therefore it’s easier to create an instance of the ImageButton, assign a temporary style with template in Blend, and place it into the same page’s XAML:

and the respective button:

Of course I need to have the respective image…

I can now use Blend to work on the template (assigned within the style):

This will change the editing context to the template rather than the control:

I changed it to include the image, placed beside the text. (Actually, placed beside whatever I choose to have in the content property, using a content presenter.)

Now I want the image control to show what I have in the NormalImage property I just wrote. Blend is aware of the type of my class, so I can bind the Image.Source property using a template binding to the property of my class.

and clicking it:

The temporary style with template finally looks like this:
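Reconstructed, it was something along these lines (layout details are placeholders):

    <Style x:Key="TempImageButtonStyle" TargetType="local:ImageButton">
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="local:ImageButton">
                    <StackPanel Orientation="Horizontal">
                        <!-- the image, bound to my new property via template binding -->
                        <Image Source="{TemplateBinding NormalImage}" Stretch="None"/>
                        <!-- whatever the content property holds, usually the text -->
                        <ContentPresenter Content="{TemplateBinding Content}"
                                          VerticalAlignment="Center" Margin="4,0,0,0"/>
                    </StackPanel>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>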

Template bindings can be used to bind against existing properties (of appropriate type), as well as any new property I choose to provide. Actually, for a complete implementation I would probably have to map alignments and other properties to the respective parts of my template, to provide full customizability for my control.

Now that I’m done designing my button, I can save it, copy the resulting template into the default style for my control in Themes/Generic.xaml, recompile – and then just use it:
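For example (prefix, image path, and click handler being placeholders again):

    <controls:ImageButton NormalImage="/Images/save.png" Click="SaveButton_Click"/>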

Just an image, no text; and at runtime:

 

Alright, that’s the basics of a custom control. Essentially what I’ve done is

  • replacing the default template with one that includes an image
  • providing a dependency property for the image, actually nothing more than a mirror of the respective property of the image in my template.

This is all very boilerplate on one hand, yet extremely flexible at the same time.

Now, the image of the button does not “feel” very buttonish, i.e. it does not reflect mouse over, disabled, or clicked states. This is the domain of VSM. Next post…

That’s all for now folks,
AJ.NET


November 28, 2009

Silverlight Bits&Pieces – Part 9: A MessageBox replacement

OK, let’s put the brand new service provider model to some good use.

Whenever a service call reports an error, I want some message box telling me about it (rather than simply swallowing it, which is the default behavior). Whenever the user does something potentially devastating, I want some explicit confirmation (read: a message box) that he knows what he’s doing. MessageBox.Show does all I need (well, it is restricted to OK and OK/Cancel, but one can live with that). Only… these system message boxes are dull, boring, and not at all a shiny example for a Silverlight application. Enter the message box service provider…

Basic implementation of a message box service

The basic implementation will get the infrastructure up and running.

The first step is defining the service contract: show this and that, and a query method. The first (and naive) version looks like this:
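In other words, something like this (the interface and method names are my reconstruction):

    public interface IMessageBoxService
    {
        void ShowMessage(string message);
        void ShowError(string message);
        // the query method; note that it returns synchronously,
        // which will turn out to be a problem further down
        bool Confirm(string message);
    }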

The default implementation of our application extension service turned service provider (AES/SP) would use the dull system message boxes to implement that. The code is actually quite straightforward:
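A sketch of it (captions are placeholders):

    public class MessageBoxService : IMessageBoxService
    {
        public void ShowMessage(string message)
        {
            MessageBox.Show(message);
        }

        public void ShowError(string message)
        {
            MessageBox.Show(message, "Error", MessageBoxButton.OK);
        }

        public bool Confirm(string message)
        {
            return MessageBox.Show(message, "Confirmation", MessageBoxButton.OKCancel)
                == MessageBoxResult.OK;
        }
    }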

Now, I could demand that App.xaml has this one (or any other service implementing my interface) registered. However, I like to be correct by default, thus my accessor will fall back on this implementation if none is registered, and I can be sure that there will always be a respective service.
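A sketch of the accessor idea (the actual AES/SP plumbing was covered in the previous post; the lookup shown here is a simplification and requires using System.Linq):

    public static class ServiceProvider
    {
        static IMessageBoxService _defaultService;

        public static IMessageBoxService MessageBoxService
        {
            get
            {
                // use a service registered in App.xaml, if there is one
                var service = Application.Current.ApplicationLifetimeObjects
                    .OfType<IMessageBoxService>().FirstOrDefault();
                // correct by default: fall back on the system message box implementation
                return service ?? _defaultService ?? (_defaultService = new MessageBoxService());
            }
        }
    }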

All that is left is a search&replace for all calls to MessageBox.Show… E.g. to show an error:
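With the accessor from above (the error handler context is made up):

    // in the completed handler of a service call
    if (e.Error != null)
        ServiceProvider.MessageBoxService.ShowError("The call failed: " + e.Error.Message);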

… and to get confirmation, in this case to return a book:
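Along these lines (book and ReturnBook are stand-ins for the sample application’s code):

    // naive synchronous version; see the bug fixed below
    if (ServiceProvider.MessageBoxService.Confirm("Return the book '" + book.Title + "'?"))
        ReturnBook(book);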

And, of course, it works as expected:

Replacing the Dialog

Second act. Get rid of those dull things.

Create a new “Silverlight Child Window” and style it to look like a message box. I “borrowed” the images from the Visual Studio Image Library (on my machine under C:\Program Files\Microsoft Visual Studio 9.0\Common7\VS2008ImageLibrary\1033\VS2008ImageLibrary\Objects\png_format\WinVista\) and simply placed all possible images in the dialog. A textbox, two buttons, that’s it. Here is the styled XAML:
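Condensed to the essentials, it was something along these lines (names, images, and layout are placeholders; the styling is omitted):

    <controls:ChildWindow x:Class="MyApp.MessageBoxWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"
        Title="MyApp">
        <Grid>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="Auto"/>
                <ColumnDefinition/>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition/>
                <RowDefinition Height="Auto"/>
            </Grid.RowDefinitions>
            <!-- all possible images, stacked; the code behind makes the right one visible -->
            <Image x:Name="InfoImage" Source="/Images/info.png" Visibility="Collapsed"/>
            <Image x:Name="ErrorImage" Source="/Images/error.png" Visibility="Collapsed"/>
            <Image x:Name="QuestionImage" Source="/Images/question.png" Visibility="Collapsed"/>
            <TextBlock x:Name="MessageText" Grid.Column="1" Margin="8" TextWrapping="Wrap"/>
            <StackPanel Grid.Row="1" Grid.ColumnSpan="2"
                        Orientation="Horizontal" HorizontalAlignment="Right">
                <Button x:Name="OKButton" Content="OK" Width="75" Margin="4"
                        Click="OKButton_Click"/>
                <Button x:Name="CancelButton" Content="Cancel" Width="75" Margin="4"
                        Click="CancelButton_Click"/>
            </StackPanel>
        </Grid>
    </controls:ChildWindow>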

Some code is needed for the initialization: the message has to be set, the correct image made visible, etc. I could probably have done this with less coding, using some tricks and elaborate data binding. But who cares; it’s straightforward and comprehensible (unlike what I probably would have come up with).

Setting the DialogResult property also closes the dialog (sic!).

Finally I need a replacement AES/SP. The main method to show the dialog looks like this:
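A sketch (the image enum and the Initialize helper are my names):

    // common helper; ShowMessage, ShowError, and Confirm all funnel into it
    void ShowDialog(string message, DialogImage image)
    {
        var dialog = new MessageBoxWindow();
        dialog.Initialize(message, image); // sets the text, makes the right image visible
        dialog.Show(); // beware: returns immediately, see below
    }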

Great? Great! … GOT YOU! (Fell into the trap myself, actually… :-/ )

Fixing the Bug

Remember that in Silverlight everything is asynchronous? Well, everything except MessageBox.Show. And ‘everything’ includes ChildWindow.Show! Meaning my confirm method will not work this way. To overcome this, I decided to pass a delegate to the dialog constructor and made sure it’s called in the OK case:
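The relevant part of the dialog’s code behind, sketched:

    public partial class MessageBoxWindow : ChildWindow
    {
        readonly Action _okCallback;

        public MessageBoxWindow(Action okCallback)
        {
            InitializeComponent();
            _okCallback = okCallback;
        }

        private void OKButton_Click(object sender, RoutedEventArgs e)
        {
            this.DialogResult = true; // also closes the dialog
            if (_okCallback != null)
                _okCallback();
        }

        private void CancelButton_Click(object sender, RoutedEventArgs e)
        {
            this.DialogResult = false;
        }
    }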

And to be able to pass the delegate I changed the existing AES as well (and the interface respectively):
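The Confirm signature now takes the delegate instead of returning a result:

    public interface IMessageBoxService
    {
        void ShowMessage(string message);
        void ShowError(string message);
        // no return value anymore; the callback is invoked if (and when) the user confirms
        void Confirm(string message, Action onConfirmed);
    }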

Of course I had to adjust the default implementation using a message box:
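Which is trivial, since MessageBox.Show is synchronous anyway:

    public void Confirm(string message, Action onConfirmed)
    {
        if (MessageBox.Show(message, "Confirmation", MessageBoxButton.OKCancel)
            == MessageBoxResult.OK)
        {
            onConfirmed();
        }
    }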

The calling code changes respectively, passing a lambda:
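For the book example from above:

    ServiceProvider.MessageBoxService.Confirm(
        "Return the book '" + book.Title + "'?",
        () => ReturnBook(book));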

Done. Now my application looks nice, even if it has to show a message box:

ANFSCD

This endeavor served actually three purposes:

  • First, I wanted/needed the feature ;-)
  • Second, I wanted to see/demonstrate the service provider pattern from a user’s point of view.
  • And third – as you may have noticed from some screenshots – I used this implementation to check out VS2010 beta.

A quick verdict about VS 2010 beta (not really worth a separate post)…

The core system, i.e. the shell, the C# code editor, the build system, etc., feels very good. No apparent bugs, quite fast (including IntelliSense), and close enough to VS2008 to feel familiar. Considering that big parts of this are complete rewrites, this is quite an achievement.

The visual designer for (Silverlight) XAML works nicely for user controls. Designing grids, the property pane, and other tasks are at first glance on par with Blend, but come in a more familiar “Visual Studio flavor”; still, it feels richer and more mature than VS2008.
However, there are some notable gaps: editing of styles and templates, animations, and the visual state manager are not covered. Thus my guess is that Blend will remain a necessary complement to VS, even if one has to switch less often. BTW: Contrary to what Tim wrote, I could work with Blend on VS2010 solutions (it is only the web project that cannot be loaded); I just refrained from manipulating my project files with Blend.

Other areas I touched briefly have been less satisfying. IntelliTrace didn’t work, but I didn’t spend too much time on that. The architecture and modeling area, for example, has changed, but is by no means bug free (to the point of “not yet usable”). The profiler has evolved, but IMO still lacks what DevPartner offered nearly 10 years ago: a decent call graph.

Oh, one bright spot for any dev lead: code analysis (FxCop) rules are now maintained in separate files, projects reference these files by name.

Anyway, I have been using VS2010 beta since I installed it and was never compelled to switch back to VS2008. I’m going to have to reinstall my machine sometime soon, and I’m planning on going along with VS2010 beta, not installing VS2008 at all.

That’s all for now folks,
AJ.NET

