AJ's blog

January 24, 2010

Silverlight Bits&Pieces: The First Steps with Visual State Manager

Filed under: .NET, .NET Framework, C#, Design Time, Silverlight, Software Development — ajdotnet @ 5:41 pm

Visual State Manager. Easy to understand in principle. But it takes some getting used to before you can actually use it…

There is a lot of information about VSM available, e.g. a quick introduction at silverlight.net, and when I first started to tackle VSM I read it all and then some (felt that way, anyway). Still, my first experiments with VSM failed miserably, and they did so because of a lack of understanding. The one main issue for me was that all the articles and screencasts explained what VSM does and what great effects one (well, someone else) could achieve with it, yet always with the emphasis on "what" (and usually all at once), not on "how" (in small digestible chunks).

So, if you have looked into VSM and didn't quite get it, then this post may be for you. First I'm going to dive into some code; afterwards I'll try to offer a few hints that should help you get started with VSM.

The Crash Course

Controls have states (like normal, pressed, or focused for a button); VSM represents these states as Visual States, organized in distinct State Groups. State Groups separate mutually independent Visual States (e.g. the pressed or mouse over state is independent of the focus state). Silverlight allows you to define these states in templates, along with State Transitions that define how the state change is to happen (e.g. an instant change or some animation).

Silverlight also provides an attribute, namely TemplateVisualStateAttribute, to declare the supported Visual States and Groups on a control. Keep in mind however that this is merely for tool support and perhaps documentation. At runtime, the presence or absence of these attributes is of no consequence at all.
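To make that concrete, here is a minimal sketch (not the actual Button source, just the attribute pattern) of how a control class declares its Visual States and State Groups:

```csharp
using System.Windows;
using System.Windows.Controls;

// Roughly how a control class declares its supported states; Button already
// ships with declarations along these lines, so this is illustration only.
[TemplateVisualState(Name = "Normal", GroupName = "CommonStates")]
[TemplateVisualState(Name = "MouseOver", GroupName = "CommonStates")]
[TemplateVisualState(Name = "Pressed", GroupName = "CommonStates")]
[TemplateVisualState(Name = "Disabled", GroupName = "CommonStates")]
[TemplateVisualState(Name = "Unfocused", GroupName = "FocusStates")]
[TemplateVisualState(Name = "Focused", GroupName = "FocusStates")]
public class SampleButton : Button
{
    // The attributes are purely declarative; the control still has to call
    // VisualStateManager.GoToState(...) to actually change states at runtime.
}
```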

The Sample

OK, let's see some code. I'll build on the image button from my last post. It should support three different images, as well as a focus rectangle. (I'll leave out the text though. I don't need it and it would complicate matters for this post without gain.) The button base class already defines the Visual States and Groups, so I'll stick with those.

First I extended the image button control class to support three dependency properties, namely NormalImage, HooverImage, and DisabledImage. (I could have added a ClickedImage, but I'll solve that differently.) To make a long story short, here is the custom class, defining the necessary dependency properties:
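The original listing is not reproduced here, so the following is only a rough sketch of what such a class might look like (registration details are assumptions; the constructor with the DefaultStyleKey setup from the last post is omitted):

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public partial class ImageButton : Button
{
    // Dependency properties for the three images; the VSM attributes are
    // inherited from Button, so they are not repeated here.
    public static readonly DependencyProperty NormalImageProperty =
        DependencyProperty.Register("NormalImage", typeof(ImageSource), typeof(ImageButton), null);
    public static readonly DependencyProperty HooverImageProperty =
        DependencyProperty.Register("HooverImage", typeof(ImageSource), typeof(ImageButton), null);
    public static readonly DependencyProperty DisabledImageProperty =
        DependencyProperty.Register("DisabledImage", typeof(ImageSource), typeof(ImageButton), null);

    public ImageSource NormalImage
    {
        get { return (ImageSource)GetValue(NormalImageProperty); }
        set { SetValue(NormalImageProperty, value); }
    }

    public ImageSource HooverImage
    {
        get { return (ImageSource)GetValue(HooverImageProperty); }
        set { SetValue(HooverImageProperty, value); }
    }

    public ImageSource DisabledImage
    {
        get { return (ImageSource)GetValue(DisabledImageProperty); }
        set { SetValue(DisabledImageProperty, value); }
    }
}
```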

The button inherits the VSM attributes from its base class, thus I don’t have to reiterate them here.

Having done that I can already set these properties in XAML:

Of course it still uses only the one image I gave it last time. So, the next step is to extend the template with images and other parts to accommodate the Visual States. If I can live with raw XAML, I can do this using Visual Studio 2008. To do it in a design view I need Visual Studio 2010 or Blend.

In any case, this is conventional designing; it's not yet time to look at the "States" pane in Blend! And when it comes to that, VS2010 is also out of the game (at least in beta 2).

The resulting XAML looks somewhat like this:

Note that I used a Grid to stack the images on top of each other. Note also that the default settings for all parts are compliant with my "normal" button state, i.e. the second and third images are invisible, as is the focus rectangle.

Now that my control contains all the primitives I need, it’s time to enter VSM. To prepare for that, I manually provided the State Groups and Visual States. This ensures that I get all states (Blend would only add the ones it manipulates, and since the normal state is going to be empty, it would always be missing), and in the order I prefer.

Now is the time to enter Blend, select the button, then the current template, and have a look at the "States" pane.

Note that the “States” pane contains the State Groups with their respective Visual States. Blend gets this information from the TemplateVisualStateAttribute on the class, but also includes additional states and groups it finds in the template XAML. Additionally there is a “pseudo-state” named “Base”, which is simply the “state” in which the control is without putting it in a distinct state.

Now I went ahead, selected the state in question in Blend and changed the controls to match my design. Since I had the desired design figured out before I started with VSM – down to which properties to change for a transition – this was as simple as can be. For the mouse over state:

Note how Blend shows the design area with a red border and a “recording mode” sign. Every change to the template is now recorded as state change for the selected state, mouse over in this case. (You could switch recording off by clicking on the red dot in the upper left, and manipulate the properties ordinarily; yet selecting another state will switch it back on, so this is OK for some quick fixes, but too error prone for general editing.)
Note also that the "Objects" pane shows not only the controls, but marks those affected by the currently manipulated state with a red dot and puts the manipulated properties beneath them. In case you accidentally manipulated the wrong property, you should remove this entry rather than simply change it back; otherwise the (trivial) transition will remain in the XAML.

Just setting the visibility of two images results in some verbose and (at first sight) rather confusing XAML:

The disabled state looks similar. The click state is represented by the hover image which is moved slightly off center to achieve the click effect (“Properties” pane, “Transform”).

And here’s the resulting button in action, showing normal, hover, disabled, and clicked state:

Lessons Learned

What I just presented was a fast forward replay of employing the VSM. A minimalistic use of VSM actually, since I have left out quite a bit of its functionality, most notably transitions with animations. Still, I have applied some guidelines that I have learned to value when using VSM and that I'd like to point out.

So, here are some of the twists that made VSM work for me… (some learned the hard way).

:idea: Hint 0 (The most general hint): States need careful planning.

If you don't know yet what the control should look like in the various states, you should shy away from the "States" pane in Blend. Start with conventionally designing the control. It may even help to design a separate control template per state, and merge them only after the design has reached a stable state.

:idea: Hint 1: Don’t look at existing controls.

It's tempting to look at the existing templates; with Blend the XAML is only a mouse click away. Don't. The button template has ~100 LOC, and I've seen others with more than 300 LOC. And what's more, they are fully styled, meaning they probably employ all the features, caring for sophisticated visual effects but not exactly for the poor developer trying to deduce the workings from looking at the XAML.

:idea: Hint 2: Start simple.

Many samples quickly jump to animations used for transitions, easing functions, and slicing French fries. For me one key to understanding VSM was to stick to the minimum at first. States. Transitions only as simple as possible. Period.

:idea: Hint 3: VSM is not about creating states. It is about styling them.

My initial thinking was "I have a button with a normal image; in disabled mode I need to have a disabled image…". This led to all kinds of mind twists, like "how do I create an image control during a state change?", "Should I rather replace the image URL of the existing control, and how?", and others. A crucial part of understanding came when I realized that I do not have a button with one image in one state and another image in another state. What I have is a button with three images in all states, as presented above. The difference between the states is merely which of these images is visible and which is not.

:idea: Hint 4: When designing a new control, avoid the VSM “States” pane for quite some time.

There is one pitfall I managed to hit several times in the beginning. I started Blend, selected the particular state I was interested in, and tried to design my control for that state. This is futile, because Blend does not actually design the control (as in setting property values), rather it designs the transitions to those values. (You could switch off recording mode, but Blend really insists on switching it on again and again and again.)

Therefore I generally design my control "conventionally". That is, I place the normal image in a grid and style it; then I make it invisible (kind of a manual state change) and do the same with the next image; and so forth for all states. Only when I'm done with this do I allow myself to even look at the VSM support in Blend.

:idea: Hint 5: The visual state for the normal state is always there. And always empty.

Worse, you’ll have to include it manually in XAML, since Blend doesn’t put it there… :-(

"Normal state" is the default state of the state group. Each state group has one; it doesn't have to be named "normal", but it has to exist. This is the state the control is in by default, after initially displaying the control, and before VSM has even touched it. The one that is denoted as "Base" in the "States" pane.

The "normal" state has to be declared, because otherwise the control will not be drawn correctly after it has been in a different state, say normal –> hover –> normal. And it has to be empty, because otherwise the control would show up in a state that, at least according to VSM, is undefined and can never be reached again once the control has been in a different state. This would lead to all kinds of inconsistencies.

Lemma: All controls in the template initially have property values compliant with the normal state. In the image button example: The normal image is visible, the other images invisible.

:idea: Hint 6: VSM is not about designing states. It’s about designing differences between the state in question and the normal state.

Suppose I have the control designed the "conventional" way with the looks of the normal state; I also have the controls for the other states, still invisible. Now is the time to enter Blend and the "States" pane. Choose the state in question, e.g. mouse over, and manipulate exactly those properties that constitute the difference between the normal state and the mouse over state, i.e. set the normal image to invisible and the hover image to visible. Blend will record the respective transitions.

It's always this difference, always normal state vs. the state in question. Only if you have achieved the first belt in using VSM and signed the no-liability waiver should you go ahead and attack transitions between specific states, for complexity will explode.

:idea: Hint 7: State groups are mutually independent. And the same is mandatory for the state differences.

Never let different state groups manipulate the same properties. For example, the button addresses common states and focus states independently. It would not work to implement the focused state by setting the hover image visible, as this would collide with the mouse over state and eventually result in undefined behavior. The focused state could show a focus rectangle. Or it could actually even manipulate the hover image, as long as it is not the visibility property used by the mouse over state. (Whether that makes sense is a different question, though.)

:idea: Hint 8: Visual states are not set in stone.

Controls usually have visual states defined via attributes. However, this is just informational, used by some tools (such as Blend), but of no consequence otherwise. VisualStateManager.GoToState is what triggers a transition, and it may or may not be called from the control itself. The visual states and groups defined in the template are merely the backing information used at runtime. Should the need arise, I could define a new state group, say "Age", with two visual states "YoungAge" and "OldAge" in XAML. Then I could go ahead and call the VSM from the code behind file of my page class to change the state. And after 5 minutes of inactivity my button could grow a beard.
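A minimal sketch of what such a call from the page's code behind might look like; "MyImageButton" and the "Age" state names are made up for this example:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;

public partial class MainPage : UserControl
{
    // Hypothetical timer handler in the page's code behind. "MyImageButton" is
    // assumed to be a named element in the page's XAML, and "OldAge" one of the
    // custom visual states declared in its template.
    private void OnInactivityTimerTick(object sender, EventArgs e)
    {
        // The last parameter controls whether the declared transition is used.
        VisualStateManager.GoToState(MyImageButton, "OldAge", true);
    }
}
```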

Wrap-Up

So far the hints. But what about more complex demands? I have barely touched the eye-catching features at all.

In my opinion, what I just presented covers the first steps and provides a sound understanding of the core VSM principles. Once this level of understanding is mastered, one can go ahead and explore other areas.

And there certainly are "other areas". I already mentioned state specific transitions; animated transitions are another topic. If you need an example of what's possible, have a look at this online sample. This is VSM in action, admittedly complemented with some code, but surprisingly little. (You can dig into it by starting Blend and opening the sample project "ColorSwatchSL".)


And from now on it’s no longer a lack of understanding that keeps me from doing things. It’s my incompetence as designer… ;-)

That’s all for now folks,
AJ.NET


January 17, 2010

Silverlight Bits&Pieces: Derived Custom Controls

Filed under: .NET, C#, Design Time, Silverlight, Software Development — ajdotnet @ 3:06 pm

OK, let’s put the last findings to good use and create a derived control that carries its own default template. This post is again about some fairly basic stuff, but it is the logical next step.

My use case: I wanted/needed a simple image button, one that simply takes the image as a property, rather than having to manipulate the content for every button anew. So what do I need?

  1. A derived class.
  2. A dependency property for the image URL.
  3. A new default template.

Deriving a SL control is just a matter of clicking Add/New Item in the Solution Explorer and choosing "Silverlight Templated Control". This will actually create two things (and address the default template requirement as well…):

  1. A class derived from Control, placed in a .cs file in the folder I used to create the new item.
  2. A XAML file named Themes/Generic.xaml, created (or extended if it already exists) to contain a style and template for the new control.

Now, the implied behavior is that a custom control sets its DefaultStyleKey property to a type (usually its own). At runtime SL will determine the default style of a control by using this type’s assembly to read the Themes/Generic.xaml content and pick the style that has the type as target type.
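In code, that implied behavior boils down to a single line in the constructor. A sketch of roughly what the generated class looks like (class name assumed):

```csharp
using System.Windows.Controls;

// The control points DefaultStyleKey at its own type, so the runtime looks up
// the matching style in this assembly's Themes/Generic.xaml.
public class ImageButton : Control
{
    public ImageButton()
    {
        DefaultStyleKey = typeof(ImageButton);
    }
}
```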

Note how the style in Themes/Generic.xaml uses an XML namespace to map the class name to a C# namespace:

Note: One consequence of this is that the styles of all custom controls in an assembly will end up in the same Generic.xaml file. This is usually not an issue, even if implementation and style/template reside in relatively remote files. However, if the assembly grows to accommodate a bigger number of controls, it might make sense to put the control specific resources into separate .xaml files right beside the implementation. Loading the template from a .xaml resource is no big deal; all you need is GetManifestResourceStream and XamlReader.Load, described in more detail here.
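A rough sketch of that approach, assuming the .xaml file is compiled as an embedded resource (the resource name, the helper class, and the cast to ControlTemplate are assumptions for illustration):

```csharp
using System.IO;
using System.Reflection;
using System.Windows.Controls;
using System.Windows.Markup;

public static class TemplateLoader
{
    // Load a control template from an embedded .xaml resource instead of Generic.xaml.
    public static ControlTemplate LoadTemplate(string resourceName)
    {
        Assembly assembly = typeof(TemplateLoader).Assembly;
        using (Stream stream = assembly.GetManifestResourceStream(resourceName))
        using (StreamReader reader = new StreamReader(stream))
        {
            // Silverlight's XamlReader.Load takes the XAML as a string.
            string xaml = reader.ReadToEnd();
            return (ControlTemplate)XamlReader.Load(xaml);
        }
    }
}
```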

The next step is to change the base class — I want to extend the Button, not write it completely anew — and to provide the dependency property for the image. Having a peek at the Source property of the Image control tells me that ImageSource is the adequate type.

Now, let's customize the appearance. Unfortunately Blend cannot deduce the dependency between the control and the style in Themes/Generic.xaml. Therefore it's easier to create an instance of the ImageButton, assign a temporary style with a template in Blend, and place it into the same page's XAML:

and the respective button:

Of course I need to have the respective image…

I can now use Blend to work on the template (assigned within the style):

This will change the editing context to the template rather than the control:

I changed it to include the image, placed beside the text. (Actually, placed beside whatever I choose to have in the content property, using a content presenter.)

Now I want the image control to show what I have in the NormalImage property I just wrote. Blend is aware of the type of my class, so I can bind the Image.Source property using a template binding to the property of my class.

and clicking it:

The temporary style with template finally looks like this:

Template bindings can be used to bind against existing properties (of appropriate type), as well as any new property I choose to provide. Actually, for a complete implementation I would probably have to map alignments and other properties to the respective parts of my template, to provide full customizability for my control.

Now that I’m done designing my button, I can save it, copy the resulting template into the default style for my control in Themes/Generic.xaml, recompile – and then just use it:

Just an image, no text; and at runtime:

 

Alright, that’s the basics of a custom control. Essentially what I’ve done is

  • replacing the default template with one that includes an image
  • providing a dependency property for the image, actually nothing more than a mirror of the respective property of the image in my template.

This is all very boilerplate on one hand, yet extremely flexible at the same time.

Now, the image of the button does not "feel" very buttonish, i.e. it does not reflect the mouse over, disabled, or clicked state. This is the domain of VSM. Next post…

That’s all for now folks,
AJ.NET


August 27, 2009

Silverlight Bits&Pieces – Part 4: View Model Basics

Filed under: .NET, .NET Framework, C#, Design Time, Silverlight, Software Development — ajdotnet @ 8:48 pm

Note: This is part of a series; you can find the related posts here.

Displaying and manipulating data on the client – the one and only purpose of any LOB application – involves two things when it comes to Silverlight: a) the asynchronous server calls and b) the client architecture. I will go over them in a cursory fashion and come back later to address each one in more depth. This time the client architecture.

View Model basics

The client code is largely guided by the Model-View-ViewModel (M-V-VM, or MVVM) approach, which is the predominant architectural approach used with SL and WPF. The reason probably being that it is a natural counterpart to the WPF and Silverlight data binding features; the two work extremely well together.

Cook book style:

  • For every page (the view) there is a respective class (the view model) that manages the data (the model).
  • For each control on the view that shall be populated dynamically with data, the view model has a corresponding property providing the model.
  • For each control state, such as enabled or visible, that shall be controlled by the logic, the view model has a corresponding property, probably boolean.
  • For each action triggered by the UI the view model has a respective method.

The view model is very closely associated with its view, so there is not much reuse here, but then, it largely consists of properties and forwards to service calls. Or so the theory says. (Subsequent posts will deal with the shortcomings…)

Please note that it is debatable whether the data provided by the view model actually is the model. Another approach would be to expose a view related data model, namely view entities. The view model would have to map them to the data model (the model) the lower layer exposes, probably some service proxy.

Both approaches have pros and cons, however you’ll mostly see the former approach, since it’s well supported by the tools (service proxy generation, etc.). Still, it has some cons…

Anyway, the only technical demand for view models in SL is due to the intended use for data binding: classes subject to fully fledged data binding need to support the INotifyPropertyChanged interface for simple properties, while collections have to support INotifyCollectionChanged. The latter comes for free if you use ObservableCollection<T> consistently. (Please note that data binding works with conventional properties, but only to a limited degree.)

For INotifyPropertyChanged a little base class comes in handy:

Note the generic overload. That’s a little trick to avoid typos in the property name argument.

With this class as base a property implementation usually follows this idiom:

(Without that trick one would have to pass the property name as string. A source of errors due to typos, and a pitfall during refactoring.) 
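The original listings are not reproduced here; the following is only a sketch of what such a base class and the property idiom might look like, with the generic overload implemented via an expression tree (one common way to get rid of the string literal; class and member names are assumptions):

```csharp
using System;
using System.ComponentModel;
using System.Linq.Expressions;

public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Classic overload: takes the property name as a string.
    protected void RaisePropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }

    // Generic overload: the property name is extracted from an expression,
    // so typos are caught by the compiler and refactoring stays safe.
    protected void RaisePropertyChanged<T>(Expression<Func<T>> propertyExpression)
    {
        var memberExpression = (MemberExpression)propertyExpression.Body;
        RaisePropertyChanged(memberExpression.Member.Name);
    }
}

public class BookFilter : ViewModelBase
{
    // Filter fields (author, title, ...) would follow the same property idiom.
}

public class BookListViewModel : ViewModelBase
{
    private BookFilter _bookFilter;

    // The usual property idiom: set the field, then raise the change notification.
    public BookFilter BookFilter
    {
        get { return _bookFilter; }
        set
        {
            _bookFilter = value;
            RaisePropertyChanged(() => BookFilter);
        }
    }
}
```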

BookFilter is a data class that, again, follows the same pattern, i.e. it supports INotifyPropertyChanged.

Hint: This begs for a code snippet! :-)

The View Model

Any book shelf has a collection of books; so does my application. The book list page should provide a means to filter the book list (two text boxes), a button to trigger the search, and a way to show the result (a data grid):

Not especially nice, and the grid is a little degenerate for now, but that will change. In XAML:

Thus the first view model implementation may look like this (including some simple test data, the next post will deal with the server call):

Databinding

For the actual binding I need to wire that up with the page. The manual way usually presented looks like this:
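The original listing is missing here, so this is only a sketch of that manual wiring (page, view model, and handler names are assumptions, and SearchBooks is a hypothetical view model method):

```csharp
using System.Windows;
using System.Windows.Controls;

public partial class BookListPage : Page
{
    public BookListPage()
    {
        InitializeComponent();
        // Create the view model and make it the DataContext of the page.
        DataContext = new BookListViewModel();
    }

    // Convenient, typed access to the view model.
    protected BookListViewModel ViewModel
    {
        get { return (BookListViewModel)DataContext; }
    }

    private void SearchButton_Click(object sender, RoutedEventArgs e)
    {
        // The event handler only forwards to the view model.
        ViewModel.SearchBooks();
    }
}
```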

The view model class is created in the c'tor and assigned to the DataContext. A property provides more convenient access to it. This is already used in the button event handler that triggers the book search.
However, I recommend against doing it this way. Instead I did the data binding in Blend…

Opening the page, selecting the first textbox, finding the Text property in the "Properties" pane, and clicking the text box (or that tiny little dot to the right) brings up the context menu.

Then I choose data binding, the Data Field tab, and the +CLR Object button:

That left finding the view model class and selecting it:

This way I could add various "data sources", yet I only want one for now, and according to M-V-VM, forever. Afterwards the dialog lets me browse the class structure and select the property to bind against:

On second thought, I selected the StackPanel which contains the filter textboxes and bound it against the BookFilter property, in order to narrow the available context. Afterwards the textboxes could be bound via the Explicit Data Context tab:

I had to expand the lower area to set the binding to TwoWay.

But I didn't have to go through the dialog armada for every field. Blend also provides the data tab that lets me browse through the available data sources. Drag'n'drop of a field simply works and generally uses the last settings from the dialog, i.e. it includes the TwoWay setting. Also it binds against the default property, but if I hold the shift key down it lets me choose the property. And some other stuff I leave to you to explore. Really nice.

Anyway, this is what Blend just created for us in markup speak (just the relevant part):

It registered the namespace to locate the view model class, created the view model as resource, set the DataContext of the LayoutRoot element to this resource, and it added the usual binding to the subsequent controls.

Changing the generated prefix to viewModel was all I did. Otherwise I could live very well with that, given that it achieves all I need and allows me to do my data binding much more efficiently and less error prone in Blend. The manual way was opaque to Blend, so it couldn't assist me in any way.

The only thing left was removing the manual instantiation from the c'tor and changing the ViewModel property to use the LayoutRoot control. I could have moved the binding to the page instead, but I prefer to work with the tools, not against them.

After running the application and clicking on the button, it shows the respective test data:

The next post will deal with the actual server call.

That’s all for now folks,
AJ.NET


August 21, 2009

Silverlight Bits&Pieces – Part 3: First Layout

Filed under: .NET, .NET Framework, Design Time, Silverlight, Software Development — ajdotnet @ 9:48 pm

Note: This is part of a series; you can find the related posts here.

One thing is as true with SL as it was with HTML and CSS: Starting with a basic layout of the pages and a “site map” really helps a lot. (Trying to work with a style system that you don’t know doesn’t.)

The application created by the template looks like this:

It's a navigation application with a head area (including the menu) and the remaining work area. It uses a Grid control, yet only as a kind of canvas; all controls are placed via margins and alignments. I find this, well, curious, since there is a Canvas control after all. But the layout doesn't suit me anyway.

So the first step is to get rid of all styles, i.e. I cleaned Assets/Styles.xaml. That included removing all references to these styles, as in this fragment:

Searching with a little regular expression solves that problem. Now for the intended layout:

I want to have a head area with application title and other information. A left area containing the menu. A bottom line with legal information. And finally the remaining area for our pages. And some space between for some visual separation.

In SL this is done with a Grid. Curiously enough, this is akin to table layout in HTML. I wonder when the CSS gang will show up and cry wolf ;-)

Most samples show and explain how to write the markup to define rows and columns. But then, most samples are pre-created and reiterated, and I have problems “mind rendering” the markup. With the Silverlight 2 SDK there was at least a kind of preview in VS, but with SL3 that preview does not work (the document outline may help to navigate larger XAML files, though):

So I chose a different way and put Blend to its first use…

Grid layout

Defining the desired Grid layout can be done easily in Blend. Placing rows and columns is just a mouse click away: clicking on the areas to the left or the top adds new rows and columns. Clicking on the symbols changes the type. If you don't see them, change the layout mode by clicking the upper left icon.

 

Once the rough layout is done, it can be saved and one can switch back to VS. The resulting markup is very clean; Blend generally does a good job producing clean markup while at the same time leaving the rest of your markup as it was.

Placing the contents

The next step was creating controls for the three areas (head, menu, footer) and moving the respective content from the page there (copy & paste in VS). The page markup now only contains the navigation frame, which is the working area. Drag & drop and some more mouse jostling in Blend positions that control in the correct cell:

And sizing it correctly:

Also – after recompiling the solution – Blend picked up the user controls and offered them as assets if you select Project. Again, some mouse jostling later (and with different background colors for each user control to distinguish them) the result looks like this:

Blend automatically generated a namespace definition as prefix for the controls, using a veeerrrryyyy long prefix name. But that's easy to solve by replacing it with a shorter one. The final markup is, again, very clean:

After some work on the user controls and some styling… alright, it may take some time, but I had it ready from some internal application, and besides, it was done by a colleague, for if I had done it myself it would probably cause eye cataracts… the end result looks like this:

This may all seem trivial if you've been through this experience once. The bottom line, I guess, is the way I worked through this (which may or may not work for you): the combination of VS and Blend, working with both tools at the same time. Even if you are a markup guy rather than a design time worker in ASP.NET, my recommendation is that you give Blend a chance. It's far more stable than the ASP.NET designer in VS ever was. I'm not saying you should switch to Blend, just get the best of both, VS and Blend.

The next post will contain some real code, promise ;-)

That’s all for now folks,
AJ.NET


March 29, 2009

Visual Studio 2010 Architecture Edition

Today I'd like to share another leftover from the SDX Talk I mentioned earlier: basically some screenshots from Visual Studio 2010 Architecture Edition (VSArch from now on). Don't expect something fancy if you already know VSArch; I just couldn't find all that much information on the Web beyond the two screenshots on the Microsoft site.

The main new things within VSArch include the Architecture Explorer, UML Support, and the Layer Diagram.

Architecture Explorer

Note: To make the following more tangible I loaded a sample project I use regularly as test harness and took respective screenshots while I analyzed it. Click the images for a larger view…

The Architecture Explorer is about getting a better view into existing code. Whether you join a project that is under way, whether you have lost control over your code, or whether you just need to spice up your documentation. Architecture Explorer helps you by visualizing your solution artifacts and dependencies. Artifacts include the classical code artifacts (classes, interfaces, etc.), as well as whole assemblies, files, and namespaces.

Architecture Explorer lets you select those artifacts, display graphs with dependencies, and even navigate along those dependencies and in and out of detail levels.

The following screenshot shows VSArch. The docked pane on the bottom contains the Architecture Explorer that acts as “navigation and control center”. This is where you select your artifacts and visualizations. It could certainly use some improvement from a usability perspective, but it does the job anyway.

vsarch_assemblies.jpg

The screenshot shows two different visualizations of the assembly dependencies in my solution, a matrix view and a directed graph. Just to stress the fact: This was generated out of the solution, by analyzing the project dependencies.

The next screenshot shows a mixture of various artifacts, including classes, interfaces, even files, across parts of or the whole solution.

vsarch_artifacts.jpg

Depending on what filters you set, this graph could give you a high level overview of certain artifacts and their dependencies. For example you could easily spot hot spots, like the one class your whole system depends upon. Or make sure the dependencies are nicely managed via interfaces and find undue relationships. Even spot unreferenced and therefore dead graphs.

Once you go one level deeper, you may want to cluster the artifacts by some category.

vsarch_relationship.jpg

The image shows again artifacts and their dependencies, but this time grouped by the project to which they belong. It also shows what kind of relationship a line represents and lets you navigate along that dependency.

The Architecture Explorer should help getting a better understanding of your code. It helps you to detect code smells or may guide your refactoring.

UML Support

Yes, UML as in, well, UML. Not extensively, but it includes the activity diagram, component diagram, (logical) class diagram, sequence diagram, and use case diagram. I didn't spend much time investigating them, just drew some diagrams in order to take the screenshots. Generally I can say that Microsoft can draw boxes and lines (big surprise here), but there is a lingering feeling that those diagram editors may not be finished yet (again, hardly surprising in a CTP).

Creating a new diagram is easy enough. Just create a new project of type “Modeling Project” and add an item:

vsarch_dialog.jpg

Everything starts with a use case, so here is our use case diagram:

vsarch_usecase.jpg

One can draw the diagram as one likes. As you can see from the context menu, there is something being worked on: the "Link to Artifacts" entry shows the Architecture Explorer, yet I couldn't quite figure out what's behind this. Also note the validate entries, which didn't do very much, but we'll see them again in the Layer Diagram.

Next on the list are activity diagrams:

vsarch_activity.jpg

Works as expected, no surprises, no hidden gems that I’ve found.

The same is true for the component diagram:

vsarch_component.jpg

Just a diagram, no surprises.

The logical class diagram gets more interesting:

vsarch_logicalclass.jpg

As you can see, it contains very .NETy stuff like enumerations. It also has these menu entries that hint at more to come in the future; right now the selected menu entry brings up an error message asking for a stereotype, yet I didn't even find a way to set those. Also the editor may still need some work, e.g. one cannot drag classes in and out of packages.

As a side note: The relation between this logical class diagram and the already existing class diagram escapes me. At least they are a little redundant.

Next on the list is the sequence diagram. Rather than drawing one myself I reverse engineered the existing code:

vsarch_sequence.jpg

Quite nice, and again, used this way it can help you document or just plain understand existing code.

Note: If you want to try that yourself, the CTP has a bug: You need to have a modeling project and at least one diagram before the menu entry “Generate Sequence Diagram” appears. And while you will be presented with a dialog asking what call depth to analyze, it usually works only for one level.

Layer Diagram

Now for the most dreadful looking diagram (though Microsoft has a more colorful one on its site…): some boring connected blocks, meant to represent the layers of your architecture.

vsarch_layer.jpg

Actually this is one of the most interesting features for any architect and dev lead: It’s a living beast! :evil: 

You can add assemblies as well as other artifacts to the bleak boxes. Afterwards you can actually validate whether the dependencies between those artifacts match or violate the dependencies implied by the diagram. In the screenshot you can see that I deliberately misplaced an assembly and consequently got a respective error. Using this feature an architect can ensure that all layer related architectural decisions are honored during development.

To conclude…

The Architecture Explorer is certainly a worthwhile feature and I also like the validation feature of the Layer Diagram. That’s certainly something new and not to be found in other products.

Generating sequence diagrams is nice, but it remains to be seen whether this will allow roundtrip engineering. The logical class diagram doesn't yet meet my expectations and it's not quite clear to me how it will evolve. The other diagrams? Well, they just work. However, in this group there is nothing exciting for you if you already have another modeling tool like Enterprise Architect (no advertising intended, it just happens to be the one I've used recently…). And a dedicated tool will probably provide more complete UML coverage. UML 2.0 has 13 types of diagrams, including state diagrams, whose absence is in my opinion the biggest gap in VSArch UML support.

Anyway, if that caught your attention and you're interested in more details, there are two options: one, download the CTP and try it yourself; two, if you want it more condensed and want to avoid the hassle with a VPC, watch a video of VSArch at work. For that there are two links I can provide:

  1. Peter Provost’s talk at the PDC. Go to the timeline on the PDC site, search for TL15 and you should find “TL15 Architecture without Big Design Up Front”, which is about VSArch, despite the title. His talk was the role model for my analysis of VSArch, yet seeing it live could still give better insights.
  2. Visual Studio Team System 2010 Week on Channel 9 has a bunch of videos, especially the “Architecture Day” ones. “top down” and “bottom up” show VSArch at work.

The final question however will be whether all those features are compelling enough to actually buy the … Visual Studio Team Suite (i.e. the "you get everything" package). Why not the Architecture Edition? Well, if you are a developer as well as an architect, the Architecture Edition lacks too much in the other areas. Given that there is usually quite a monetary gap between the dev edition and the team suite, that gap might very well be used to buy a 3rd party UML tool instead…

That’s all for now folks,
AJ.NET


September 4, 2007

partial methods for partial developers

Filed under: .NET, .NET Framework, C#, Design Time, LINQ, Software Developers — ajdotnet @ 8:19 pm

The evolution of C# has anything but stopped. One addition with .NET 2.0 was partial classes. Probably meant to ease the use of code generators (keeping generated code separate from hand-written code), they are not limited to being "consumed" by the common developer; he may also actively employ them, e.g. to separate production code from debug code. With .NET 3.5 there already was much talk about the language additions around LINQ, such as … well, just have a look at The LINQ Project or ScottGu's Blog. And of course, these features are meant to be actively employed by the common developer.

Now we have a relatively new addition: partial methods. (In case you haven't heard, see "In Case You Haven't Heard", and see "Partial Methods" for the VB.NET camp.) With this addition I'm not so sure it's meant for me…

For the record: a partial method is a method declared in a partial class (hence the name). It may or may not be implemented in the same class (usually in another file). If it is implemented, the compiler will emit the usual code. If it is not implemented, the compiler will emit nothing, not even metadata about that method. And it will also wipe out any calls to the method.

Ergo: Partial methods are purely a compiler feature. You need the code of the declaring class and it has to be prepared with declarations to support partial method implementation.

The usual (probably the only relevant) use case for partial methods is light weight event handling in combination with code generation. Say you parse a database or an XML file and generate data classes. Tens of classes, hundreds of properties, all fairly generic, boilerplate stuff. Also fairly common is the need to hook into this method for validation or into that property setter to update another one. So instead of manipulating the generated code or having hundreds of events and thousands of method calls at runtime (of which in most cases only a tiny fraction will actually do something), partial methods kick in. They are easier to declare and use than events (pay for what you need) and if not used they simply vanish (pay as you go).
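A minimal sketch of that pattern (the names are made up, not taken from any actual code generator): the generated half declares and calls the partial methods, the hand-written half implements only the hooks it needs.

```csharp
using System;

// Generated file, e.g. Customer.generated.cs
public partial class Customer
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            OnNameChanging(value);
            _name = value;
            OnNameChanged();
        }
    }

    partial void OnNameChanging(string value);
    partial void OnNameChanged();
}

// Hand-written file, e.g. Customer.cs
public partial class Customer
{
    // Only the validation hook is implemented. OnNameChanged stays unimplemented,
    // so the compiler removes both its declaration and the call to it completely.
    partial void OnNameChanging(string value)
    {
        if (string.IsNullOrEmpty(value))
            throw new ArgumentException("Name must not be empty.");
    }
}
```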

Any other relevant use case? None that I am aware of. In other words we have a language feature that is very efficient for a very limited number of use cases.

For whom are partial methods made?

Implementors of code generators are the ones to employ partial methods. Today this means Microsoft and LINQ related stuff; tomorrow Microsoft may decide to use partial methods in other event-laden environments, such as ASP.NET and WinForms. In these cases the common developer only consumes partial methods. Let me re-phrase this: the common developer has to consume partial methods. Why? With partial methods the code generator will certainly not generate any other means of extension, such as events or virtual methods. The very purpose of partial methods is to do away with these heavy weight means, right?

The pro: If you follow the designer driven, RAD style, code based development that ASP.NET or WinForms have used for quite some time, the designer will eventually handle partial methods transparently. The only difference for the common developer is that he knows the stuff he is doing will be more efficient at runtime.

The con: If you like metadata driven applications (and already struggled with ASP.NET databinding because there is no way to attach metadata)… well, prepare for some further loss of information. If you need some event for any property setter (for tracing or tracking), if you need any kind of dynamic behaviour (e.g. to attach some kind of generic rules engine), … let's just hope the developer of the code generator anticipated that use case (or prepare for some work). You have a clean layer separation and the entities should know nothing about the UI? Well, putting your code right into the data class will spoil that. But hey, you might use partial methods to implement real events. Manually.

So you and I, the common developer that is, will not gain very much from this use case. But we will become more dependent on the ability, the mercy, and the foresight of the developers of the code generators and designers.

Will we be able to employ partial methods (rather than only consuming them)? Let's see… when did I last write a class that had to support a vast number of events? (That actually happens occasionally when I lay some infrastructure for the next project or work on framework code.) A class that was at the same time not intended to act as a base class? (OK, forget about the infrastructure!) Surely I have written code by hand (because there was no code generator) that looks exactly like a candidate? But then I simply wrote the methods I needed, as I needed them; no need for a partial declaration.

So, unless I enter the camp of the not-so-common code-generator-writing developers (it happens, but only rarely), I can see no relevant use case that allows me to employ partial methods. (I really don't count the example of increasing the readability of conditional compilation, as presented in the VB post above, as relevant for a new language feature.)

Again, for whom are partial methods made? In my opinion they are made for Microsoft. To help them write code generators that generate more efficient code. Code that conforms with the constant shift from object oriented (inheritance and virtual methods), component oriented (interfaces, events), and metadata driven (attributes) development to a more and more exclusively used code generation approach. Highly efficient, but really not meant for me.

Do I have a better solution for the problem partial methods solve? I don’t. Therefore I can’t blame Microsoft for putting them in. Do I have concerns about how that will affect my work? I certainly do. Therefore I do hope the developers employing them do it with utmost caution. And with the awareness that not everyone uses their tools in a point-and-click kind of fashion.

There already has been some concern about partial methods in the .NET community — and for other reasons than the ones I mentioned: Language complexity, naming issues, other features higher on the wish list, and so on. I recommend reading the comments of the post above if you want to keep up with that. Whether partial methods are a good idea or not, they are easily the single most controversial language feature in C# so far.

That’s all for now folks,
AJ.NET


March 24, 2007

List the List

Filed under: .NET, .NET Framework, C#, Design Time — ajdotnet @ 5:51 pm

This post is again going deep down to the bits (writing on high-level topics takes so much more time…).

Suppose (again) you were writing some kind of generic serializer or databinding code. Sooner or later you would have to deal with lists. Collections. Arrays. In other words, you would have to deal with a situation like this: 

public class MyObject
{
    // …
}

public class MyCollection : CollectionBase
{
    // …
}

public class Data
{
    public MyObject[] MyObjectArray { /* … */ }
    public MyCollection CollectionOfMyObject { /* … */ }
    public IList<MyObject> GenericListOfMyObject { /* … */ }

    public ArrayList ListOfMyObjects { /* … */ }
    public object ThisCouldBeAListOfMyObjects { /* … */ }
}

In order to analyze some arbitrary object (say an instance of Data), you would use either type.GetProperties() (more suited for serializers) or TypeDescriptor.GetProperties(type) (the better choice for databinding and design time related stuff). You would then look at each property’s type, recognize it is a collection type, and somehow deduce the type of the collection elements (to create them dynamically or to read their properties to create list columns during databinding).

Let's have a look at what our code could be presented with:

  1. Arrays. They are the simplest collection type, embedded in the language, and often used by code generation tools. Supporting them is a must.
  2. Collection classes derived from CollectionBase. MSDN states that
        “This base class [CollectionBase] is provided to make it easier for implementers to create a strongly typed custom collection. Implementers should extend this base class instead of creating their own.”
    Therefore CollectionBase was the means of choice before we had generics. Please note that this class comes with a pattern that implies type safe methods in the derived class.
  3. Collections implementing ICollection or IList. This is a more generic approach than using CollectionBase. We will have to look closer at this, but if it worked, it would automatically cover the CollectionBase approach.
  4. Generic collections, implementing ICollection<T> or IList<T>. This is probably the way new code will present collections to our code. Please note that a bunch of methods (like Add, Remove, etc.) that are part of IList in the non-generic version have been pushed down to ICollection<T> in the generic version.
  5. The predefined collection classes in the System.Collections namespace, notably ArrayList, will also have been used quite often.
  6. There is a special interface ITypedList, meant to support databinding. This may help (or it may not).
  7. Finally we may have to deal with collections that may be present in some untyped property.

Now let’s see which of these cases we can support to what degree:

Arrays: You can check if it’s an array using Type.IsArray and use Type.GetElementType() to get the type of the elements.
:arrow: Supporting arrays is mandatory and no sweat at all. 100% done.

CollectionBase, ICollection/IList: Neither CollectionBase nor one of the interfaces (also implemented by CollectionBase) tells you anything about the element type. The usage of CollectionBase however implies a pattern that has the implementor provide type safe overloads of the usual collection methods. What we can do is get hold of one of those members (e.g. the Add method or the indexer) and analyze its type.
:arrow: Supporting arbitrary ICollection classes can be done if they adhere to some pattern (implied by but not restricted to CollectionBase). Let's call that 90% covered.

ICollection<T>/IList<T>: This case is as easy as arrays are; well, apart from figuring out the interface. Once you have got hold of the interface, it's just a matter of calling type.GetGenericArguments(). But let's ignore the exotic cases and settle with, say, 99% coverage.
:arrow: Supporting generic collections is mandatory and no sweat at all. 99% done.
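Putting the cases so far together, a rough sketch of the deduction might look like this (a sketch only, ignoring multi-dimensional arrays and other edge cases):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;

public static class CollectionTypeHelper
{
    // Returns the element type, or null when it cannot be determined
    // (ArrayList, untyped properties, ...).
    public static Type GetElementType(Type collectionType)
    {
        // 1. Arrays: the element type is directly available.
        if (collectionType.IsArray)
            return collectionType.GetElementType();

        // 2. Generic collections: check the type itself and its interfaces
        //    for ICollection<T> and read its type argument.
        List<Type> candidates = new List<Type>(collectionType.GetInterfaces());
        candidates.Add(collectionType);
        foreach (Type candidate in candidates)
        {
            if (candidate.IsGenericType &&
                candidate.GetGenericTypeDefinition() == typeof(ICollection<>))
            {
                return candidate.GetGenericArguments()[0];
            }
        }

        // 3. CollectionBase-style classes: rely on the pattern of a type safe
        //    Add overload in the derived class (a convention, not a guarantee).
        if (typeof(ICollection).IsAssignableFrom(collectionType))
        {
            foreach (MethodInfo method in collectionType.GetMethods())
            {
                if (method.Name == "Add" &&
                    method.GetParameters().Length == 1 &&
                    method.GetParameters()[0].ParameterType != typeof(object))
                {
                    return method.GetParameters()[0].ParameterType;
                }
            }
        }

        // 4. ArrayList or untyped properties: give up.
        return null;
    }
}
```

Applied to the Data class above, this would yield MyObject for the array and the generic list, the parameter type of a typed Add overload for MyCollection (assuming it follows the CollectionBase pattern), and null for the ArrayList and the untyped property.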

ArrayList: Here we will raise the white flag. The type of ArrayList does not tell us anything about the element type, and there is no way to get at it. Can we live with that? ArrayList is "not the best choice" as property type, so this restriction might be the encouragement the developer needed to improve his data structures… (always point out the positive aspects ;-) )
:arrow: Supporting ArrayList stays at 0%.

ITypedList: ITypedList will give you direct access to the element's properties (similar to TypeDescriptor.GetProperties(type)). This may be useful for databinding and design time features; in fact I would regard that as a must, since it is part of the databinding infrastructure of .NET.
For serializers it might be used to get the properties and guess the component type ("von hinten durch die Brust ins Auge", a German proverb, literally "from the back through the chest into the eye", used for awkward, indirect ways to achieve something). I would consider that only if I absolutely had to.
:arrow: Supporting ITypedList depends on the purpose of our code. For databinding it should be considered (100% coverage), for serializers it may be a fallback chance, though unreliable. No more than 50% coverage.

Untyped property: No type, no chance to even know it’s a collection.
:arrow: 0%.

Further complications…

So far we've looked at collection types, not at elements. If the collection type does not tell us enough, we may look at the first element in the collection. Assuming that there is one. If not, a serializer might have no problem, yet a databinding scenario might, which is the very reason Microsoft came up with ITypedList.

Another aspect has so far been ignored: we … (OK, I) assumed homogeneous collections, i.e. collections of elements of the same type. Collections containing elements of different types (they may have a common base class, or be completely arbitrary) will pose a whole new bunch of problems. This is probably beyond what databinding can support; serializers would have to make sure that each list entry is stored along with type information.

Where are we?

If you take a look at what can be supported and what can't, you'll notice that it is simply not possible to cover 100% of the theoretically possible cases. Even some feasible cases will only be covered to 80%. However, if you look closer, those 80% may very well be all you'll ever need. And if you really stumble over one of the 20% cases (ArrayList might be one of those), don't try to guess out of the blue; think of some way to feed additional meta information into your serializer.

That’s all for now folks,
AJ.NET


March 10, 2007

New version of my AddIn…

Filed under: .NET, .NET Framework, C#, Design Time, Software Developers — ajdotnet @ 9:25 pm

I just put a new version of my addin on my web site (for a first introduction see this post). Here are the major changes (apart from bug fixing):

  • Browse current file: Methods now show signatures
  • Browse current file: Generics shown correctly
  • Browse (all): Support of progress bar
  • Browse (all): Persistent window size
  • AddIn: first tests under Vista

There is currently an issue under Vista with language packs: menu icons are not shown and shortcut keys are not assigned.

Obviously MS has changed the loading scheme of the resource DLL (I need to fix that, once I get an idea how to do it). They also made the same mistake they once made with localized VBA languages (remember your VBA keywords being translated from IF-THEN-ELSE to WENN-DANN-SONST?). Now they translate the key codes ("Ctrl-Up Arrow" to "Strg-NACH-OBEN-TASTE") as well as the commands. Can you believe that?

Anyway, I took care that the AddIn works, but nothing more. The missing icons are not a vital part. Regarding shortcuts, you need to assign them manually until I have fixed that issue.

I think I have addressed some of the feedback I got (if only in the FAQ help page). Other feedback has been placed on my todo list (including flattened presentation, i.e. a list control rather than a tree control for browse).

I hope you enjoy it.

That’s all for now folks,
AJ.NET


December 16, 2006

It's Christmas time…

Filed under: .NET, .NET Framework, C#, Design Time, Software Developers — ajdotnet @ 10:52 pm

… and Christmas means presents. Well, here’s my present for you:

I just finished a first version of my AddIn v2.0 for Visual Studio 2005. Its focus is on code navigation:

  1. navigate C# code files with cursor keys, e.g. Ctrl-Down to go to the next type, method or whatever (something I got used to under Eclipse).
  2. browse solution files or types and quickly jump to the one you need using filter criteria (similar to the respective dialogs in ReSharper).

There are more details in the readme and help file.

The addin has been tested by me and some friends and should be reasonably stable. It is however a first version and may have some bugs; in this case please send me the stack trace from the output window.

Also, I decided to get it out (in order to get feedback) as early as feasible rather than trying to do the 110% implementation. Therefore there is "room for improvement" in several areas (e.g. showing method signatures, browsing the inheritance hierarchy, and other stuff). Any comments and wishes regarding future development are welcome as well.

Christmas also means even less time than usual; Christmas dinners, visits to relatives, etc. take their toll.

Since this is probably my last post for 2006, I wish you all a peaceful Christmas and a happy new year.

That’s all for now folks,
AJ.NET

December 2, 2006

Got GAT?

Filed under: .NET, .NET Framework, C#, Design Time, Software Developers, Software Development — ajdotnet @ 2:38 pm

I have been working with the Guidance Automation Toolkit (GAT) for some time now and thought I could give you a little motivation to look into it yourself.

What is GAT anyway?

GAT is a framework for building Visual Studio addins of a certain kind. The emphasis here is on the 'G' in GAT, G as in "Guidance". GAT makes it very easy to provide the user of the GAT package you developed (i.e. another developer) with templates, snippets, and most importantly with wizards and the ability to fulfill complex tasks. Typical usage scenarios may include:

  • Create a new class (boilerplate code), say an exception or form, based on user input (wizard), register it with some kind of configuration (complex task), e.g. Enterprise Library exception handling or the UIP application block, and update the project structure accordingly, i.e. create the project entry, deal with SCC, etc. (again a complex task)
  • Create code based on some configuration or other information. E.g. generate standard web pages supporting display and CRUD operations on data, based on an existing dataset or XML schema.
  • Create your own wrapper class for some service description (like WSDL or other) that addresses special needs such as error handling or logging.

In other words, use GAT …

  • whenever you have to create a new code file that is boilerplate but requires a few parameters (a Wizard) and is a little too complex for code snippets.
  • whenever you have to create a family of related code files (as group, all or none), say a form and accompanying resource and configuration file.
  • whenever creating a new file requires additional work, like registering it in a central configuration file
  • whenever work shall be triggered via context menu entries on project items
  • whenever these things span multiple projects or depend on project types
  • whenever these things need to be done “transactional”, i.e. support an undo mechanism

How does it work?

GAT is a bit like Lego. It's a set of small building blocks of different types and for different purposes. There are references, type converters (rings a bell, doesn't it? ;-) ), value providers, actions, all playing nicely together. Just like with Lego you have to know how the pieces work together, not only where to fit them in but how to shape them into a working system. On the other hand, many of the components you will undoubtedly have to write tend to be very reusable (if you do it carefully) and quite often are unrelated to the specific task at hand. A recipe to update a config file for web applications? A type converter that provides a list of the web projects within the current solution (or better yet, one that can be configured to show this or that kind of project) has nothing to do with the specific config file.

The place for assembling the pieces into a larger system is a central XML file in each GAT package. Here you describe logical units of work, called recipes. A recipe usually contains four major parts:

  • infrastructure information, like in which menu the recipe will be available, which text and icon it will show.
  • arguments or rather data declaration. This is where “variables” are declared and associated with type converters and value providers
  • a wizard to get information from the user
  • a sequence of actions to do the actual work

This setup reminds me a bit of a COBOL file structure ;-)

The XML file may contain multiple recipes and it also has some global infrastructure information, like name or help file URL. There is also a special recipe called during the registration of the GAT package, used to register the other recipes. This registration employs a reference object that further decides whether the respective recipe will be available or not (e.g. one may check the current project type and only allow the menu entry in web projects).

Recipes may be the major concept in GAT but it certainly does not stop there. GAT also includes template engines (solution, project, and file templates), packages may install code snippets, and it comes with a management component for the end user (the "Guidance Package Manager").

What can’t be done?

Whatever you do plainly with GAT is available via GAT only and surfaced to the user mainly via menu entries on project items (with the exception of solution and project templates). The consequences:

  • No way to react to anything but menu entries, especially not to other Visual Studio events, say starting a recipe automatically when saving a file. To accomplish that you'll have to build a regular addin the hardcore way.
  • No way to use the actions somewhere else, in particular no way to leverage them within the build process. MSBuild support or some command line tool would be better suited to that need.
  • GAT is also not exactly suited to work "within" a file, i.e. to provide recipes that modify a part of an existing code file and are available depending on the current cursor location. Something like "Implement Interface" or "Encapsulate Field" in the context menu depending on whether you are at the location of an interface or a field. Again, this would be better done within a regular addin. (Fortunately someone within Microsoft already thought of the examples I just mentioned…)

It is not as if GAT is out of the question in these cases. It just won't solve all requirements, and you need to carefully plan a layered implementation approach. Put core functionality in a Core.DLL and call it from a GAT action as well as from your favourite command line tool.

Famous last words…

Well, it has to be said: GAT is only a Technology Preview right now. It's quite stable and fairly complete, but some things may need improvement (user feedback in error cases, SCC awareness, shortcomings of existing components). Another issue is the documentation, which is better than one would expect but still needs a good deal of improvement. And of course no one knows whether the next release will break existing code, and no final release date has been announced yet.

Anyway, if the above description of GAT sounds like something you have been looking for, I recommend giving it a try. For those interested: there is a forum in which you will even get feedback from the authors, and a dedicated GAT web site which also has sample code.

That’s all for now folks,
AJ.NET


