AJ's blog

August 31, 2006

Long Live Console Applications!

Filed under: .NET, .NET Framework, C#, Software Development — ajdotnet @ 6:07 pm

There are some things every decent developer has to have done (well, at least with some C++ background):

  1. write his own collection/tree classes
  2. write his own string class
  3. write his own console class

With the advent of standard class libraries (e.g. STL, MFC, and others in the C++ world; the Java library and the .NET base class library in the post-C++ world), writing one's own hashtable or string class does not really make sense anymore (apart, perhaps, from academic interest or educational purposes). With console applications the situation is somewhat different, because as of yet there is no standard or widespread library. Yet the demands of any non-trivial console application are recurring, boilerplate, and tedious to meet again and again and again.

What are console applications useful for in the first place?

  • Console applications are a necessity if they are to be used in batch files. And batch files are a necessity if one wants to automate tasks. Automating tasks, in turn, is the daily business of administrators, and with continuous integration it has become the daily business of developers as well.
  • Console applications are the better services. If you thought of writing a service or some other kind of background application just to trigger some action at certain times or intervals, think again. The Windows scheduler (or cron or whatever) already handles scheduling quite flexibly. A console application registered with such a system will do the trick at lower cost and less effort, and provide more freedom in usage.
  • For experienced users the console usually offers more flexibility and is more efficient to use than a GUI.

Additionally, virtually any “enterprise application” is actually a family of applications, not just the main application. Usually there are installation tasks, cleanup jobs, data import and export, nightly processing, diagnostics and maintenance tasks, and whatever else needs doing. Some of these tasks may be better done within a special environment (such as SQL jobs), but on the other hand, how do you have a SQL procedure participate in your application's error tracing?

There may be a time when build tasks (MSBuild, (N)Ant, etc.) or libraries tailored for Monad (newly christened PowerShell) become a standard. But that time is not due tomorrow, and probably not next year. Until then, I have to admit I'm a fan of console applications. (Call it DOS nostalgia if you like. ;-))

Speaking of PowerShell, the new command line shell Microsoft proposes, it may be just around the corner. Betas are available (even a first RC) and are very promising. There is also some work being done to maximize reuse of functionality between PowerShell, the new management console (MMC 3.0), and WMI. And all Microsoft server products will eventually provide PowerShell support. As Bob Muglia (Microsoft's Senior Vice President, Windows Server) said at the last PDC: “We are going to undergo a project over the next few years to get a full set of Monad commands across all of Windows Server, and across all of our server applications. […]”. PowerShell will probably also define the way to go for other companies developing for the Microsoft platform.

And the good news: a properly designed command line application is ideally prepared for PowerShell. The various combinations of command line arguments will simply become different overloads/functions of the cmdlet. No service application or job can do that.


As has become my custom (share both bits and opinion), I have decided to share my own little console framework. (After this intro you wouldn't have suspected I have one, would you? ;-)) It's provided as is and it's free; just mention my name and perhaps tell me about your extensions.

The framework is tailored for ease of use with applications that have more or less complex argument lists. You wouldn't need it for trivial EXEs with a fixed argument set and no usability demands (although I would argue that such applications do not exist). It also does not aim to fulfill academic pretensions, i.e. I refrained from trying to develop the last-framework-you'll-ever-need, fully metadata driven, aiming to solve all problems with 72 UML-designed classes far too complex for the usual task at hand. I like the 80% approach, and usually it fulfills 100% of my demands.

In this case the demands were primarily:

  • Handle command line parsing
  • Handle user feedback (including logo and help messages)
  • Play a little with colored output (which is now readily supported with .NET 2.0)
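The colored-output point is indeed straightforward with the .NET 2.0 console API; a minimal sketch of the idea (not taken from the framework itself):

```csharp
using System;

static class ColorDemo
{
    // Write a line in the given foreground color and restore the default afterwards.
    public static void WriteLine(ConsoleColor color, string text)
    {
        Console.ForegroundColor = color;
        Console.WriteLine(text);
        Console.ResetColor();
    }

    static void Main()
    {
        WriteLine(ConsoleColor.White, "XCopy test application based on AJ.Console.ConsoleApp");
        WriteLine(ConsoleColor.Red, "Invalid number of arguments.");
    }
}
```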

Here’s a short sample that should give you an idea of how to use it:

  • Provide a resource file with messages:

<?xml version="1.0" encoding="utf-8" ?>
<root>
    <data name="Logo">
        <value>XCopy test application based on AJ.Console.ConsoleApp (c) by Alexander Jung – mailto:info@Alexander-Jung.NET</value>
    </data>
    <data name="Syntax">
        <value>SYNTAX: XCopy.exe Source [Target] [/A | /M] [/EXCLUDE file1 [file2] [file3]...]</value>
    </data>
    <data name="Help">
        <value>This is just a simulation, no harm is done ;-)</value>
    </data>
</root>

  • Derive from a given base class and override three methods (argument parsing, switch parsing, and processing):

class XCopyApp : ConsoleApp
{
    static int Main(string[] args)
    {
        XCopyApp app = new XCopyApp();
        return app.Run(args);
    }

    protected override void ApplyArguments(string[] values)
    {
        // one argument is mandatory, the second optional
        EnsureLength(values, 1, 2, null);
    }

    protected override void ApplySwitch(string name, string[] values)
    {
        switch (name.ToLower())
        {
            case "/a":
            case "/m":
                // no additional arguments
                EnsureLength(values, 0, 0, name);
                break;
            case "/exclude":
                // at least one additional argument
                EnsureLength(values, 1, int.MaxValue, name);
                break;
            default:
                // we don't like what we don't know
                throw new ArgumentException("Unknown switch: " + name);
        }
    }

    protected override void Process()
    {
        // just print out the intention of some of the arguments
        string[] args = GetArguments();
        WriteLine("about to copy the following files: " + args[0]);
        if (args.Length > 1)
            WriteLine("target is: " + args[1]);
        if (HasSwitch("/a") || HasSwitch("/A"))
            WriteLine("only if archive bit is set, leaves the bit as is.");
        if (HasSwitch("/m") || HasSwitch("/M"))
            WriteLine("only if archive bit is set, clears the bit afterwards.");
    }
}

  • And the output looks like this (though more colorful):
XCopy test application based on AJ.Console.ConsoleApp 
(c) by Alexander Jung - mailto:info@Alexander-Jung.NET 
Invalid number of argument: arguments needs at least 1 value(s). 
Use /? for further information.            

[D:\Projekte\Priv\AJ.Console\XCopy\bin\Debug]XCopy.exe /?
XCopy test application based on AJ.Console.ConsoleApp 
(c) by Alexander Jung - mailto:info@Alexander-Jung.NET 
SYNTAX: XCopy.exe Source [Target] [/A | /M]
                    [/EXCLUDE file1 [file2] [file3]...] 
System parameter: 
    @parameterfile      read arguments from file 
    !logfile            write log file 
    /?|/h[help]         show help 
    /v[erbose]          verbose output 
    /q[uiet]            no output 
    /nologo             no logo            

This is just a simulation, no harm is done ;-)            

[D:\Projekte\Priv\AJ.Console\XCopy\bin\Debug]xcopy *.* x: /M /nologo
about to copy the following files: *.* 
target is: x: 
only if archive bit is set, clears the bit afterwards.
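To give an idea of what such a framework does under the hood, the splitting of a command line into positional arguments and switches with values could be sketched like this (a simplified, hypothetical parser; the actual AJ.Console implementation may differ):

```csharp
using System;
using System.Collections.Generic;

class ArgumentParser
{
    // positional arguments (everything before the first switch)
    public List<string> Arguments = new List<string>();
    // switch name (lower case) -> values following that switch
    public Dictionary<string, List<string>> Switches = new Dictionary<string, List<string>>();

    public void Parse(string[] args)
    {
        List<string> current = Arguments;
        foreach (string arg in args)
        {
            if (arg.StartsWith("/"))
            {
                // a switch opens a new value list; subsequent tokens belong to it
                current = new List<string>();
                Switches[arg.ToLower()] = current;
            }
            else
            {
                current.Add(arg);
            }
        }
    }

    static void Main()
    {
        ArgumentParser parser = new ArgumentParser();
        parser.Parse(new string[] { "*.*", "x:", "/M", "/EXCLUDE", "a.txt", "b.txt" });
        Console.WriteLine(parser.Arguments.Count);            // 2
        Console.WriteLine(parser.Switches["/exclude"].Count); // 2
    }
}
```

On top of such a structure, methods like EnsureLength and HasSwitch from the sample above become one-liners.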

And finally the code (download and rename to *.zip).

That’s all for now folks,

August 28, 2006

Do You Value Design Time Support?

Filed under: .NET, .NET Framework, ASP.NET, C#, Design Time, Software Development — ajdotnet @ 7:26 am

Good design time support for your components may seem like a nice exercise but otherwise unnecessary, and generally not worth the money (especially if it's the customer's money). After all, design time support plays no role at runtime, when the real ROI is realized, right?

I think it comes as no big surprise if I tell you: I don't agree. Design time support elevates quality! It guides you through complex tasks with wizards (would you type in all the parameters for a complex data source control?), it may provide default settings (effectively speeding up input), it offers selections of valid values (effectively reducing typos), it validates parametrizations (before inconsistencies cause runtime errors), and last but not least it instantly visualizes the important information.

Thus, good design time support reduces complexity, speeds up the development process, and eliminates bugs. It even flattens the learning curve for new team members. In short: good design time support saves money!

Well, design time support still needs investment. Granted, Visual Studio already offers a great deal of design time support (the principle holds true for any other IDE as well): visual designers for Windows Forms, ASP.NET, XSD, etc., the property grid with its embedded comboboxes and dialogs, various wizards (e.g. the already mentioned wizards to set up data sources), and so on. But there is still room for improvement, most obviously for your own components but for existing ones as well.

The extension points ASP.NET and Visual Studio offer are manifold.

As you can see, there's a multitude of opportunities for improvement. A word of warning, though: the Visual Studio design time infrastructure is quite complex. Unfortunately it lacks documentation in certain areas (this has become better with Visual Studio 2005, but there is still a lot of room for improvement – or perhaps for a book of 1000 pages or more, no kidding).

Concrete example:

Since I always try to mix bits & bytes with higher level assessments and opinions, I guess I'll have to provide some code, right? OK, let's have a look at our old friend, the data source. Consider the following page within the Visual Studio designer:

naive placing of data source controls

A perfectly valid page – and yet totally irrelevant for real world pages. The data source controls are arranged directly beneath the controls bound to them, thus they spoil the HTML design. So once you start using the designer as an HTML designer, the first thing you will do is move them out of the way. (You could tell Visual Studio to hide them, yet who would do that?) I placed them at the bottom in a panel control (set to invisible and marked with a background color):

moving data source controls out of the way

Next: Real world pages occasionally use one or two data controls more than shown here. Say about 12 to 20. (That’s not exaggerated. Think of a company page in a CRM system with state, country, region, primary contact, assigned sales person, primary address, assigned industry type and subtype, … . And that’s only the data area.) By the way, that drop down list over there…, could you please tell me which data source control was again filtered by its selected value? Hey, just click on the data sources, one by one, and have a look at the properties…

property dialog for a data source control

Too bad. The property dialog doesn't tell you; you have to go to the respective dialog. About 12 to 20 times…

I guess you get the idea. Now close your eyes, lean back, and imagine a peaceful world without environmental pollution, and a design time appearance that instantly tells you what you want to know.

Well, mankind being what it is, I cannot help with peace and pollution, but here's my solution for the page: derive your own class from the data source control class of your choice and attach a designer class. This designer class will be asked by the Visual Studio designer for a design time HTML representation of the control:

[Designer(typeof(MySqlDataSourceDesigner))]
public class MySqlDataSource : SqlDataSource
{
    public MySqlDataSource()
    {
    }
}

public class MySqlDataSourceDesigner : SqlDataSourceDesigner
{
    public override string GetDesignTimeHtml()
    {
        return DesignerHelper.CreateDesignTimeHtml(Component);
    }
}

Given the right implementation the outcome will satisfy the initial demand:

enhanced design time rendering for data source controls

No magic, all well documented, you just have to do it.
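The DesignerHelper used above is part of the download; conceptually, such a helper just assembles an HTML fragment from the control's properties. A stripped-down, hypothetical sketch of what it might render (names and layout invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

static class DesignTimeHtmlDemo
{
    // Build a small HTML table showing the properties relevant at design time,
    // with the control ID highlighted in red (as in the screenshot above).
    public static string CreateDesignTimeHtml(string id, IDictionary<string, string> properties)
    {
        StringBuilder html = new StringBuilder();
        html.Append("<table style=\"border: 1px solid gray; font-size: x-small;\">");
        html.Append("<tr><td colspan=\"2\"><span style=\"color: red;\">");
        html.Append(id);
        html.Append("</span></td></tr>");
        foreach (KeyValuePair<string, string> prop in properties)
        {
            html.Append("<tr><td>").Append(prop.Key);
            html.Append("</td><td>").Append(prop.Value).Append("</td></tr>");
        }
        html.Append("</table>");
        return html.ToString();
    }

    static void Main()
    {
        Dictionary<string, string> props = new Dictionary<string, string>();
        props["SelectCommand"] = "SELECT * FROM [Authors]";
        props["FilteredBy"] = "DropDownList1.SelectedValue";
        Console.WriteLine(CreateDesignTimeHtml("SqlDataSource2", props));
    }
}
```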

Let's get that straight: nothing has been won by this code in terms of functionality. (Don't get carried away with this in front of your customer; he probably won't understand the value of something that doesn't add functionality.) What has been gained is instant information. I have all the relevant information right before my eyes and can even highlight special things (as demonstrated with the ID in red). Add warning or error feedback, and no parametrization error will stay unnoticed anymore.

OK, that’s all for… What? … I forgot something? … What source? … Ah, that source. But only on your word that you won’t hold me accountable for any bugs! Here it is: DesignTimeSupport1.zip (download and rename to *.zip).

That’s all for now folks,

August 19, 2006

Visual Studio Team Edition for Database professionals

Filed under: .NET, .NET Framework, Software Development — ajdotnet @ 5:00 pm

In a post earlier this year Karl complained that database IDEs have not seen the same evolution of new features (like build systems, refactoring, etc.) that developer IDEs (like Visual Studio or Eclipse) enjoyed.

This week I was at a Microsoft talk where I was reminded of this post. They showed the current CTP version of the newly announced “Visual Studio Team Edition for Database Professionals“. See http://msdn.microsoft.com/vstudio/teamsystem/dbpro/ for more information; follow the “Announcing!” link and you’ll get some more details and also some screenshots.

The outstanding features of VS/DP (my own little abbreviation, since the full name is a bit bulky) include:

  • tight integration with/in VS2005
  • integration with other Team System features such as bug tracking and SCM
  • reengineering of existing DBs or SQL scripts
  • refactoring for DB objects (e.g. renaming a table changes references within SPs, etc.)
  • comparing two DB instances (e.g. development vs. production), schema as well as data, and generating update scripts (full or incremental)
  • automatically generating unit tests for SPs
  • automatically generating test data (configurable, honoring DB constraints and relationships, even based on a quantitative analysis of a production database)

As I said, this is currently a CTP, and the final product shall be available “by the end of the year” (or so the MS consultant said).

The good part is that this will bring database developers “closer” to the rest of the team, i.e. integrate them more seamlessly. This is in accordance with the whole Team System approach, which already got project management aspects as well as architects and testing on board, with web and windows designers just around the corner by means of Expression.

Now comes the upsetting part (especially for you, Karl, sorry): VS/DP will support SQL Server 2000 and 2005. Period. No mention of other databases, either on the web site or from the MS consultant I talked to.

Now, I wouldn’t expect Microsoft to support Oracle or DB2 out of the box, why should they after all. But for enterprise development those two DBMSs are often the predominant database systems – no sense in arguing about that. Since Microsoft claims to have a very open system with Team System, perhaps the other big players (or some ISV) will come up with similar integration for non-MS databases. 

That’s all for now folks,

August 18, 2006

MCPD – Enterprise Application Developer

Filed under: Software Developers — ajdotnet @ 7:26 am

MCPD - Enterprise Application Developer (logo)

Note: Read also my update about MCPD/EA 3.5…

I just completed my MCPD – Enterprise Application Developer. … Oh, thanks, that's very kind of you… :-).

For those who don't know yet: Microsoft has reorganized their certification scheme. There are no more MCAD and MCSD. Those are still valid certifications, yet they only cover the world up to .NET 1.1. Starting with .NET 2.0 there are three new (higher level) certifications:

Successor to MCAD:

  • (Microsoft Certified) Professional Developer: Web Developer (a.k.a. MCPD – Web Developer)
  • (Microsoft Certified) Professional Developer: Windows Developer (a.k.a. MCPD – Windows Developer)

Successor to MCSD:

  • (Microsoft Certified) Professional Developer: Enterprise Applications Developer (a.k.a. MCPD – Enterprise Applications Developer)

There are also MCTS (Microsoft Certified Technology Specialist) certifications (below MCPD), but I'm not going into details about those or the specific exams for each certification. You can find that at http://www.microsoft.com/learning/mcp/certifications.asp. But some broader information may still be useful:

As you can guess, with the change from .NET 1.1 to .NET 2.0, the workload of the respective exams has increased considerably to cover the new features. But there is also a non-obvious fact that further adds to the workload: the security stuff that used to be a separate exam for the MCSD (and could even be substituted with a SQL Server or other exam) is now part of the common exams. Ever wondered about the security options for ClickOnce? Or the credentials settings of WSE 3.0 (yes, WSE is now part of the distributed development exam)? Well, you'd better start reading. On the other hand, MS has factored the common stuff (like language questions or how to query a database) into a separate exam called Application Development Foundation.

For those already holding the MCSD: you don't have to start over and take 5 new exams; rather, there are two upgrade exams available. But beware! The workload is notable:

What else has MS changed? Well, most obviously the naming of the certifications – which is something I'm not very satisfied with: MCAD and MCSD were clearly distinguishable, even for non-IT people (e.g. the manager deciding upon your salary). Now we have MCTS and MCPD. MCTS is less than MCAD, and MCPD is a mixture of two tin versions (the MCAD successors) and a gold plated version (the MCSD successor). Actually, given the increased ground to cover, I would even rate the MCPD/EAD higher than the MCSD. Consequently I would have preferred a title that clearly states that this is the top certification. Perhaps I'm just being vain, but the certification accounts for something – and it should clearly say so.

That's all for now folks,

August 17, 2006

What do you read?

Filed under: .NET, .NET Framework, Software Developers — ajdotnet @ 7:50 am

The other day I was surprised to learn that two fellows I work with didn't know the Microsoft MSDN Magazine. Experienced C#/.NET developers! I was shocked :shock:! One of the most valuable information sources for serious Windows/.NET/WinFX/(any other Microsoft technology) developers, and people I hold in high regard don't know about it. How could they survive? It turned out that at least one of them knew the predecessor (Microsoft Systems Journal) and had accidentally stumbled over the German MSDN Magazin – which I didn't know either. Bad enough, though. They also didn't know some of the valuable blogs that are available, so I thought perhaps I should share a list of my “most valuable information sources” 😀 … Please keep in mind that I keep this list purely .NET related.

English magazines:

  • Highly recommended: “msdn magazine” is available online at http://msdn.microsoft.com/msdnmag/ and offline (on paper). This is the medium Microsoft uses for in-depth articles on current and future technologies. The authors are from the “Who's Who” list of the Microsoft developer community (usually MS employees or people with close MS connections). The online archive goes back to 1996, so it may even suit IT archeologists ;-).


German magazines (call it “regional information”):

  • The dotnetpro (http://www.dotnetpro.de/) is probably the best German magazine (and not just because I published some articles there ;-), http://www.dotnetpro.de/articles/author1006.aspx), yet it is only available by subscription, not in stores.
  • There is a German translation of the msdn magazine (http://msdnmag.de/). For some time this was a section within the dot.net magazine, but it has just become a magazine of its own. I don't know yet whether this is a full translation or just a selection of articles. (“The subscription to MSDN Magazin – German Edition comprises 6 issues, compiled from 12 issues of the US version of MSDN Magazine.”)

There may be other blogs (OK, there are other blogs), but they haven't attracted me yet. Maybe because they didn't live up to my expectations, because they addressed the wrong topics, or simply because I overlooked them (shame on me). Anyway, this list is my personal reading list, and I hope I could point you to the information source you needed but had missed so far. Perhaps you can share your reading tips with me.

That’s all for now folks,

August 11, 2006

Why I like working for SDX

Filed under: SDX, Software Developers — ajdotnet @ 6:16 pm

Time and again I'm asked why I like working for SDX (www.sdx-ag.de). The short answer is: “Working with bright people!”

The longer answer:

  • Working with people that challenge you and that you can challenge – to accomplish something better.
  • Working with people with whom you can quarrel about one thing and drink a coffee afterwards, joking about something different.
  • It's the mixture of professionalism, determination, motivation, and fun. The fact that you can rely on your colleagues to be honest with you (even if it hurts).
  • Having the opportunity to work with new technologies, evaluating them, and finally using them. The opportunity to question old ideas and develop new ones. To accomplish something new, to drive progress: for the individual, for the current project, for the customer, and for SDX.
  • And the fact that all this does not stop with bits and bytes, rather it is a common attitude reaching from dev people to sales to marketing.

There is of course the fact that at SDX we work leading edge on SOA and .NET. This may be what makes SDX attractive (for customers as well as for new colleagues), yet in my opinion it is just a consequence of the things said before.

PS: I lied in the first sentence. Time and again I’m asked whether I would like to work for XY. Well, things may change, so keep asking ;-).

PPS: This post was not sponsored, nor did anyone ask for it :-).

PPPS: If you like what you just read, if you are from Germany (Rhein-Main-Area), and if you would like to give SDX a try, just visit http://www.sdx-ag.de/jobs/jobs.htm.

PPPPS: Too bad, I should have thought earlier of sponsorship… 😕

That’s all for now folks,

August 10, 2006

Generics. Boon and Bane…

Filed under: .NET, .NET Framework, C#, Software Development — ajdotnet @ 8:37 pm

As a developer I like certain things:

  1. I like type safety. Type safe access relieves me of “mind compiling” and puts the burden on the compiler (where it belongs). It also makes the code cleaner as it makes type casts unnecessary.
  2. I like shortcuts. Whenever I have to type the same 3 lines of code more than 2 times I’ll likely come up with a helper method. It’s easier to use, it documents the intention, it is less error prone, and it offers the opportunity to add additional checks. And after some time I will usually end up with several overloads.

Statement #1 led me to use generics. Rather than having code look like this:

Hashtable ht = new Hashtable();
ht["A"] = "first letter";
ht["Z"] = "last letter";

string infoA = (string)ht["A"];
string infoB = (string)ht["B"]; // returns null

I can now write this:

Dictionary<string, string> dict = new Dictionary<string, string>();
dict["A"] = "first letter";
dict["Z"] = "last letter";

string infoA = dict["A"];
string infoB = dict["B"]; // ???

Well. Then I discovered that the last line no longer returns null; rather, it throws a KeyNotFoundException… 😦

Hey, Microsoft isn't shy about this little change in semantics. They even advertise it: “The following code example uses the Item property (the indexer in C#) to retrieve values, demonstrating that a KeyNotFoundException is thrown when a requested key is not present, and showing that the value associated with a key can be replaced.” (http://msdn2.microsoft.com/en-us/library/9tee9ht2.aspx)

To overcome this little obstacle, I would have to write something like this:

string infoC = null;
try
{
    infoC = dict["C"];
}
catch (KeyNotFoundException)
{
}
… or better this:

string infoD = null;
dict.TryGetValue("D", out infoD); // return value ignored

… both of which violate my initial statement #2. And a helper method for just one generic instantiation simply wouldn't do.

But we have generics. And why should the old C++ template tricks not work with generics as well? So I came up with a little generic helper class:

public class DictAdapter<TKey, TValue>
    where TValue : class
{
    public static TValue GetValue(Dictionary<TKey, TValue> dict, TKey key)
    {
        TValue value;
        if (!dict.TryGetValue(key, out value))
            return null;
        return value;
    }

    Dictionary<TKey, TValue> _dict;

    public DictAdapter(Dictionary<TKey, TValue> dict)
    {
        _dict = dict;
    }

    public TValue GetValue(TKey key)
    {
        TValue value;
        if (!_dict.TryGetValue(key, out value))
            return null;
        return value;
    }

    public TValue this[TKey key]
    {
        get { return GetValue(key); }
    }
}

Using this class I can get the dictionary value as before with one static method call, providing the dictionary and the key. If I have more than one of these calls I can create a temporary variable that holds the dictionary reference and simplify subsequent calls. With an object instance I can even leverage an indexer:

// static call
string infoE = DictAdapter<string, string>.GetValue(dict, "E");

// temp. instance
DictAdapter<string, string> gv = new DictAdapter<string, string>(dict);
string infoF = gv.GetValue("F"); // returns null
string infoZ = gv.GetValue("Z");

// indexer call
string infoG = gv["G"];

Great, the lazy part of me is satisfied. But there is always the overly critical part in me, the part that cannot be convinced so easily…

Critical Point of View

Think of it: The strategy works just as it did in C++. And it is likely to produce the same problems.

Look at the last code snippet.

  • Do you understand immediately what every single line does?
  • If you know that the generic dictionary throws an exception if it does not contain the key, would you instantly guess that DictAdapter does not, thus effectively changing behaviour and semantics?
  • Would the call to TryGetValue not be more comprehensible, even at the cost of more and uglier code?

If you look closely at these questions and weigh them against the effects intended by statement #2 (especially being more comprehensible and less error prone), this little class may as well be considered a failure.

Even if these questions did not arise, this little helper may be a problem in and of itself. It has happened with C++ code: when people started to realize the benefit of these little helpers the result was often a bunch of helper classes, each one usually written by one developer and never properly advertised, causing another developer to write similar helpers with slightly different behaviour, resulting in code fragments that looked the same but had subtle differences, especially under error conditions, and no one was ever able to maintain this mess properly, until eventually a huge code cleanup effort had to be undertaken, usually resulting in the removal of all of these classes and sometimes – but only rarely – in a clean and useful set of new helpers, until someone realized the benefit of these little helpers…. (Take a breath, this was a very long sentence!)

Of course today we know better! Given our experience with C++, we will look for the helper classes that have broad usage scenarios (so we won't see a flood of classes). We will use generics as type adapters to provide type safe access but otherwise mimic the behaviour of existing classes (so we will have obvious semantics and clearly defined responsibilities). We will refrain from unexpectedly hiding or changing semantics (so the code using the helpers will be self-documenting and in accordance with our expectations). And we will collect all those classes in the proper namespaces (so they will become public knowledge and we will instantly recognize overlapping behaviours with other classes in the same namespace).
Additionally, Microsoft has wisely decided to omit some of the more suspect features of templates (e.g. constants as arguments or using the base class as a template argument), so we won't become overly “inventive”.
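Applied to the example above, those guidelines might yield a helper whose name announces the changed semantics instead of hiding them behind an indexer. A sketch (not part of the original code, names invented):

```csharp
using System;
using System.Collections.Generic;

static class DictionaryUtil
{
    // Unlike the Dictionary indexer, this never throws on a missing key;
    // the method name says so, keeping the changed semantics visible at the call site.
    public static TValue GetValueOrNull<TKey, TValue>(IDictionary<TKey, TValue> dict, TKey key)
        where TValue : class
    {
        TValue value;
        return dict.TryGetValue(key, out value) ? value : null;
    }

    static void Main()
    {
        Dictionary<string, string> dict = new Dictionary<string, string>();
        dict["A"] = "first letter";
        Console.WriteLine(GetValueOrNull(dict, "A"));         // first letter
        Console.WriteLine(GetValueOrNull(dict, "B") == null); // True
    }
}
```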

But beware! It takes some discipline to avoid the chaos. And since it requires the discipline of every developer involved, my experience tells me that this is a battle one is more likely to lose than win. (Hey, all it takes is one bright, new, right-out-of-university-and-I-know-everything jump start developer, the kind that has not yet validated theoretical academic knowledge against reality and common sense. Huh, he's here – and he also knows about LINQ… :shock:)

That’s all for now folks,

August 3, 2006

What are ObjectDataSources good for?

Filed under: .NET, .NET Framework, ASP.NET, C# — ajdotnet @ 12:53 pm

DataSources in general and ObjectDataSources in particular are a new feature within ASP.NET 2.0.
At first glance ObjectDataSources raise the expectation that this is a means to support layered architectures that feature a cleanly separated business layer (consisting of business services that exchange business data – either plain class structures or data sets). But this hope dies very fast.
At second glance they appear to support layered architectures that feature data objects (classes that represent the domain specific data model and implement a part of the business interfaces – all in one package). This hope lives longer before it, too, diminishes.
Finally one has to accept that they support about the same conceptual features that a SqlDataSource with DataSets supports: A view into the data that works best if it is closely tied to the current page.

Once you have accepted that you can start thinking about what ObjectDataSources actually can do for you – which is more than this introduction may suggest.

To understand what ObjectDataSources can and cannot do, let's start by understanding SqlDataSources. On a page you may have a SqlDataSource configured to provide data for your grid view. It would contain a SELECT statement asking for the columns you would like to show, and upon databinding of the grid it would issue that statement against the database. Another SqlDataSource on the same page may be configured to select a single row, e.g. the one selected within the grid view, to be used by a form view (i.e. providing a master/detail page). This data source would contain a SELECT statement asking for the columns you would like to show/edit (perhaps different columns than in the grid view) plus a WHERE clause for the id. It would also contain the respective UPDATE, INSERT, and DELETE statements. The word “contain” in this context means “buried in the page”, as in this excerpt:

<asp:SqlDataSource ID="SqlDataSource2" runat="server"
        ConnectionString="<%$ ConnectionStrings:ConnectionString %>"
        SelectCommand="SELECT * FROM [Authors] WHERE ([Id] = @Id)"
        UpdateCommand="UPDATE [Authors] SET [fname] = @fname, [lname] = @lname, [phone] = @phone WHERE [Id] = @Id">
    <SelectParameters>
        <asp:ControlParameter ControlID="GridView1" Name="Id" PropertyName="SelectedValue" Type="Object" />
    </SelectParameters>
    <UpdateParameters>
        <asp:Parameter Name="fname" Type="String" />
        <asp:Parameter Name="lname" Type="String" />
        <asp:Parameter Name="phone" Type="String" />
        <asp:Parameter Name="Id" Type="Object" />
    </UpdateParameters>
</asp:SqlDataSource>

Let’s make that clear: Every databound control (grid view and form view in our example) has its own private datasource control. This means in particular:

  1. The datasource controls are independent of each other, each going directly to the database. If one datasource control issues an UPDATE statement, the other one will reflect those changes only if its SELECT comes after the UPDATE.
  2. The datasource controls are tailored to the specific needs of the databound control, i.e. giving access to exactly the data that is shown/manipulated. E.g. if a datasource selects/updates a column “street”, yet this column is not bound within the form view, it will not be left as is on update; rather it will be overwritten with NULL (in the best case).

And here is the point: The only relevant difference between SqlDataSources and ObjectDataSources is that ObjectDataSources issue method calls rather than SQL statements.
Now, this is crucial for the understanding and the implications are both good and bad.
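For comparison, the ObjectDataSource counterpart of the SqlDataSource excerpt above might look roughly like this – note that the type and method names (MyApp.Business.AuthorProvider, GetAuthor, UpdateAuthor) are made up for illustration, not taken from a real project:

```aspx
<asp:ObjectDataSource ID="ObjectDataSource2" runat="server"
        TypeName="MyApp.Business.AuthorProvider"
        SelectMethod="GetAuthor"
        UpdateMethod="UpdateAuthor">
    <SelectParameters>
        <asp:ControlParameter ControlID="GridView1" Name="id"
            PropertyName="SelectedValue" Type="Object" />
    </SelectParameters>
</asp:ObjectDataSource>
```

The page no longer contains SQL; it merely names a class and the methods to call.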

  • For one thing, calling a method means abstraction. Having SQL statements within the pages ties the database structures directly into your UI code. A method can hide some of these details.
    ➡ ObjectDataSource is good!
  • C# classes are indivisible. While a SQL statement can explicitly state which columns to access, a class cannot be asked to please contain only a subset (i.e. a partition) of its properties. (Well, not unless we have LINQ support.) Thus we cannot use class instances for insert or update methods if we are only working on a partition of that object. In these cases we will have to provide methods like UpdateMainData(Customer), UpdateAddress(Customer), UpdateContactData(Customer), … with the name implying the partition of the Customer data type (or conceptually equal methods with single field parameters rather than customer objects). Can you spell “maintenance”?
    ➡ ObjectDataSource has its problems!
  • ObjectDataSources create the configured data object class on demand, each datasource its own data object. Again, those data objects know nothing of each other, no change yet.
    ➡ ObjectDataSource does not address all problems!
  • However, there is the possibility to subscribe to certain events and customize the behaviour. One could register for the ObjectCreating event and rather than creating a new data object return a reference to a previously created object. One could also register for the Updating event and implement logic that updates exactly the provided fields.
    ➡ ObjectDataSource provides better opportunities to address arising problems!
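To illustrate the last point, here is a minimal sketch of reusing a single data object across datasources via the ObjectCreating event. It assumes a hypothetical AuthorProvider business class and page code-behind in which every ObjectDataSource on the page has its ObjectCreating event wired to this handler:

```csharp
// Hypothetical page code-behind; AuthorProvider is a made-up business class.
// All ObjectDataSources wired to this handler share one provider instance
// instead of each creating its own.
private AuthorProvider _sharedProvider;

protected void ObjectDataSource_ObjectCreating(object sender,
    ObjectDataSourceEventArgs e)
{
    if (_sharedProvider == null)
        _sharedProvider = new AuthorProvider();
    e.ObjectInstance = _sharedProvider; // bypasses the default instantiation
}
```

Setting ObjectDataSourceEventArgs.ObjectInstance tells the datasource to use the supplied object rather than creating a new one.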

For small point&click applications, SqlDataSource may be the way to go. It’s fast, it is supported by designers and wizards (always good for quality and maintenance), it avoids unnecessary coding overhead, and it is pure ASP.NET, so the application code has a flat learning curve. No need to bother with the additional overhead an ObjectDataSource introduces.

For not so small, non-trivial, or enterprise applications, e.g.

  • consisting of many pages,
  • needing more sophisticated business logic,
  • having a business logic layer that is (in principle) independent of UI specifics,
  • having to comply with database access restrictions/policies,
  • requiring a more sophisticated UI, e.g. maintaining changes in state before they are explicitly saved,
  • being subject to versioning,

SqlDataSource is probably not the way to go. Rather it is prone to become a development problem and a maintenance nightmare.

ObjectDataSource in its plain form may not be the way to go either. Yet it offers the extensibility to roll your own additional logic. The farther you deviate from vanilla web applications, the more work you have to do – but you can do it.

Ergo: Use the right tool for the job at hand and know how to handle the tool. ObjectDataSource is a good tool if you know when to use it, how to use it, and what not to expect.

That’s all for now folks,

August 1, 2006

Is GetInterface() broken?

Filed under: .NET, .NET Framework, C# — ajdotnet @ 6:41 pm

A previous post raised an interesting follow-up question. Suppose you wrote some fairly generic piece of code, like a serialization engine or … hey, why not an objectmapper? (Back to Gerhard, www.objectmapper.net ;-)) In this case you would not only want to check whether a collection implements, say, IList<int> or IList<Customer>. You would want to support any kind of collection, therefore you would want to know whether it implements any IList<> derived interface. Right?

So, let’s do some type checking…

List<int> list = new List<int>(new int[] { 1, 4, 5 });                

bool isListOfInt = (list is List<int>); 
bool isListOfT = (list is List<>); // compiler error

OK, the classic C# is/as check does not work because the compiler refuses to accept the open generic type. So we have to roll our own type check using reflection.

List<int> list = new List<int>(new int[] { 1, 4, 5 }); 
Type ilistType = typeof(IList<>); // IList<> 
Type listType = list.GetType(); // List<int>                

Type listGenericType = listType.GetGenericTypeDefinition(); // List<> 
bool isIListOfT = (listGenericType == ilistType); 
    // false: List<> vs. IList<>

Note that typeof works fine with the open generic IList<> type. But this first approach was a little naive, as it compares the wrong generics: the interface generic IList<> and the list implementation generic List<>. Yet we are looking for the interface rather than a “concrete” list implementation. So we have to ask for an interface – yet, again, we do not know which interface to ask for, do we?
It turns out that there is a way to ask for an interface that is based on a generic without actually telling what the generic type parameters are: it’s documented in a note in Type.GetInterface(string) (see http://msdn2.microsoft.com/en-us/library/ayfa0fcd.aspx) and it works by passing the name of the generic and the number of type parameters:

List<int> list = new List<int>(new int[] { 1, 4, 5 }); 
Type ilistType = typeof(IList<>); // IList<> 
Type listType = list.GetType(); // List<int>                

Type ilistOfX = listType.GetInterface("IList`1"); // IList<int> 
Type ilistGenericType = ilistOfX.GetGenericTypeDefinition(); // IList<> 
bool isIListOfT2 = (ilistGenericType == ilistType); 
    // true: IList<> vs. IList<>

Finally we’re there. Wait… the generic name. And the number of type parameters… What if some genius wrote a class that implemented two incarnations of that generic interface? Let’s see.

Just for the curious, academic bean counters, we’ll start with simply deriving the list class and see what happens to the code above:

public class MyList : List<int> 
{ 
    public MyList(int[] args) 
        : base(args) 
    { } 
} 
MyList list = new MyList(new int[] { 1, 4, 5 }); 
Type ilistType = typeof(IList<>); // IList<>                

Type listType = list.GetType();     // MyList 
Type listGenericType = listType.GetGenericTypeDefinition(); 
    // throws System.InvalidOperationException: 
    // "Operation is not valid due to the current state of the object."

I know, this is based on the first snippet that didn’t work in the first place, so why should it now. But isn’t it peculiar how some innocent change causes quite a different error? Raising an exception rather than simply returning the wrong result?
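By the way, a simple guard avoids that exception: ask the type whether it is generic at all before requesting the generic type definition. A small sketch, continuing the snippet above (the variable names are taken from it):

```csharp
// MyList itself is not a generic type, so GetGenericTypeDefinition()
// would throw InvalidOperationException; ask IsGenericType first.
Type listGenericType = listType.IsGenericType 
    ? listType.GetGenericTypeDefinition() 
    : null; // null in our case: MyList is not generic
```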
Anyway, back to the working stuff: To make a long story short, the variation above still works (code omitted). Now let’s change the class definition again:

public class MyList : List<int>, IList<double> 
{ 
    public MyList(int[] args) 
        : base(args) 
    { } 

    [...] // implementations omitted 
} 

MyList list = new MyList(new int[] { 1, 4, 5 }); 
Type ilistType = typeof(IList<>); // IList<>                

Type listType = list.GetType(); // MyList 
Type ilistOfX = listType.GetInterface("IList`1"); 
    // System.Reflection.AmbiguousMatchException

And you thought we were done. Ts ts ts…

Since there is no GetInterfaces() overload that takes a string as a filter, we’ll have to do everything by hand. Get all interfaces, loop, get the generic type definition, check… :

MyList list = new MyList(new int[] { 1, 4, 5 }); 
Type ilistType = typeof(IList<>); // IList<> 

Type listType = list.GetType();     // MyList 
Type[] interfaces = listType.GetInterfaces(); 
bool isIListOfT = false; 
foreach (Type type in interfaces) 
{ 
    if (!type.IsGenericType) 
        continue; 
    Type genericType = type.GetGenericTypeDefinition(); 
    if (genericType == ilistType) 
    { 
        isIListOfT = true; 
        break; 
    } 
} 
Finally there. Welcome back, Mr. GetInterfaces, I saw you in a previous post, didn’t I? I know a relative of yours, one Mr. GetInterface. He’s a little dangerous, don’t you think? I thought we could get along quite easily – but all of a sudden he attacked me from behind and bit me in the back. What do you mean, you knew that would happen? What muzzle? … Yes, usually he behaved quite well… Bad childhood? … no, I didn’t feed him ….

That’s all for now folks,
