… is something I explicitly did not want to do!
Not that it isn’t necessary. It’s just that I expected a lot of bloggers, especially from Microsoft itself, to try to spread the news and foster understanding of what lies ahead of us. Well, the Microsoft folks became kind of hushed, as if ducking down and counting the shrapnel after having thrown the bomb at the PDC. So I changed my mind…
Given what Microsoft unveiled at the PDC — a new vision, a new strategy, a new technology stack, and a mishmash of existing, sometimes overlapping, not yet consistent, much less complete applications and services — it’s no wonder that it took me some time to grasp the idea. And when I thought I might have understood the gist, I was still, well, unsure whether I had gotten it all right.
The breakthrough came at the ask-the-experts. More to the point, I had the chance to talk to Ori Amiga (the guy who gave the “BB04 Live Services: A Lap around the Live Framework and Mesh Services” talk). Other than joking about the various ways to pronounce “Azure” and the fact that Americans always manage to get it wrong (sorry guys — and sorry Ori, hope I didn’t give you away too badly :o) ), this little chat really turned “suspected functionality” into “understood technology” (at least I do hope so…).
Explaining Azure did not work by presenting the ever-present Azure picture as is. It worked by developing the pieces of the picture, bit by bit, and relating them to other concepts. And since it worked for me, I thought I might share the gist of that conversation in much the same style, hoping I’m providing new insights rather than reiterating already available information. Actually, I’m going to use the very sheet of paper we (mostly Ori) drew on throughout said discussion.
Enough of the preliminaries, here we go…
First of all, keep in mind that there are two Azures: Windows Azure and the Azure Services Platform. They are not the same, and neither presents the full picture. I’ll try to dissect that picture layer by layer, like a cream gateau…
It starts with the cake bottom: Windows Azure
Windows Azure is the basic “infrastructure” (to avoid the term “Operating System” for now) to run applications on, highlighted in the following picture in red:
That includes computing capabilities, basic storage, management of applications (i.e. deployment, including upgrading), and operations (e.g. handling failures). These concepts are abstractions from the underlying OS (Windows Server 2008, actually), machines, and storage devices.
The terms to think of are service instances rather than processes or (virtual) machines. This is similar to the way virtual memory abstracts physical memory. While any memory access obviously has to happen on physical memory, the virtual memory manager is free to relocate it, even to swap it out to disk. This not only makes applications independent of the amount of physical memory, it also optimizes resource usage, allowing other applications to use the memory my application has claimed but does not access at the moment.
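To make the analogy concrete, here is a little toy model (purely hypothetical, all names made up — this is not how the real fabric works internally): a “fabric” that maps service instances onto machines the way a virtual memory manager maps virtual pages onto physical frames. The application addresses the instance, never the machine, so the fabric is free to relocate it, e.g. when hardware fails.

```python
# Toy sketch: service instances are addressed by name; which physical
# machine hosts an instance is the fabric's business, not the caller's.

class Fabric:
    def __init__(self, machines):
        self.machines = set(machines)
        self.placement = {}          # instance name -> machine

    def deploy(self, instance):
        # Naive placement: pick any available machine.
        machine = sorted(self.machines)[0]
        self.placement[instance] = machine

    def fail(self, machine):
        # On hardware failure, transparently relocate affected instances.
        self.machines.discard(machine)
        for instance, m in list(self.placement.items()):
            if m == machine:
                self.deploy(instance)

    def locate(self, instance):
        return self.placement[instance]

fabric = Fabric(["machine-a", "machine-b"])
fabric.deploy("web-role-0")
print(fabric.locate("web-role-0"))   # machine-a
fabric.fail("machine-a")
print(fabric.locate("web-role-0"))   # relocated to machine-b
```

Just as with virtual memory, the indirection is the point: callers keep a stable name while the resource behind it moves around.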
Yet, while Windows Azure is one level of abstraction above the machine’s OS, it has similar concepts (highlighted in the above picture in blue):
- computing ~ job scheduling, etc. (NT Kernel);
- storage ~ file system (NTFS);
- management ~ application installer;
- operations ~ task manager, event manager, etc.
Thus it is quite fitting to call Windows Azure an Operating System for the datacenter (or for the cloud, if you’re a marketing guy), even if that may not exactly match what you learned in university about operating systems.
The implications of deploying an application to Windows Azure (i.e. how the application has to be built and how the fabric manages it) are actually quite interesting … but that’s a whole new blog post, so I will skip it for now.
Let’s move on to the second layer of the cake: The Azure Services Platform
A bare-bones OS like Windows Azure would be of limited use if it were not complemented with other general-purpose services. Exactly which services that includes may be debatable; yet, again, the similarity with our local environment may help depict the features we as developers have come to expect from the platform we are developing on: database service, user accounts, IPC, etc.
Again, the following picture highlights the services in red and similarities in blue:
Microsoft decided that the following services may be good ones to start with:
- .NET Services: basic infrastructure services for application security (access control), application communication (Service Bus), and workflow (three guesses…?).
- SQL Services: database related stuff; not exactly a SQL Server, but aspiring to be…
- Live Services: All around social applications (community, devices, etc.)
- Core application services: This is a set of higher level application services, such as SharePoint Services and CRM Services (explicitly not including the UI!). In my opinion they are there because they were readily available, not because they are particularly necessary.
Oh, and not to forget the recurring three dots in the PDC slides; those dots tell us that these are not closed and sealed sets. Actually, Microsoft said that every major server application will eventually be made available on Azure.
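To give a feel for what a service like the Service Bus is about, here is a deliberately simplified sketch. The real .NET Services Service Bus is a hosted relay you talk to via WCF bindings; this in-memory toy (all names invented) only illustrates the core idea: senders and listeners rendezvous at a named endpoint instead of connecting to each other directly, which is what lets services behind firewalls be reachable.

```python
# Toy "service bus": a named rendezvous point between senders and listeners.
from collections import defaultdict

class ToyServiceBus:
    def __init__(self):
        self.listeners = defaultdict(list)   # endpoint name -> callbacks

    def listen(self, endpoint, callback):
        # A service (possibly behind NAT/firewall) registers at an endpoint.
        self.listeners[endpoint].append(callback)

    def send(self, endpoint, message):
        # A client sends to the endpoint; the relay forwards the message.
        return [cb(message) for cb in self.listeners[endpoint]]

bus = ToyServiceBus()
bus.listen("sb://myapp/orders", lambda msg: f"processed {msg}")
print(bus.send("sb://myapp/orders", "order-42"))  # ['processed order-42']
```

The point is the indirection through a well-known address in the cloud — neither party needs to know where the other actually runs.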
Now for the chocolate and the cream: the applications running on the Azure Services Platform
While Ori included the application layer in the platform, any PDC slide puts it on top:
Where you put the label is of no consequence anyway, because this is no more than a logical hierarchy. However, please don’t misinterpret this hierarchy by assuming that those applications have to run on Azure and that only applications running on Azure can leverage the Azure services! Au contraire!
If you have an application deployed on Azure, it is (technically speaking) no different from other services. The difference lies only in the purpose, or the consumer if you wish. And still, applications and services are free to call any services, not just those running on Azure. Likewise, if your application is running on your local machine or network, it can use services deployed on Azure to store data or integrate with whatever; that’s fine as well. Actually, the best example of this flexibility comes from Microsoft itself: Live Mesh.
The cherry on the cream: Live Mesh
Technically speaking, the Live Mesh Desktop and Live Services are just another set of applications and services running on the Azure Services Platform, complemented with applications running somewhere else and using those services. This limited view, however, would miss much of the capabilities of Live Mesh and the way it enhances the platform.
Live Mesh aims at no less than connecting people, devices, and applications. Live Services contain services for identity (LiveID), presence, etc., and Mesh Services to maintain users, devices, applications, and — a cornerstone — synchronization. The resource model organizes mesh objects (data, news, etc.) in feeds and entries, which in turn are subject to synchronization among the applications “deployed” to Live Mesh. “Deploying” an application means either actual deployment on Azure, or storing it for (seamless) installation on your local device via the Live Mesh client, offline-capable if built to be.
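The synchronization idea can be sketched in a few lines. This is a hypothetical toy model, not the real Mesh protocol (which is based on syncing Atom-style feeds): each device holds a replica of a feed as a dict of entry id to (version, value), and syncing merges replicas by letting the higher version of each entry win on both sides.

```python
# Toy feed synchronization: last-writer-wins per entry, by version number.

def sync(replica_a, replica_b):
    # Merge two replicas; the entry with the higher version wins everywhere.
    merged = dict(replica_a)
    for entry_id, (version, value) in replica_b.items():
        if entry_id not in merged or version > merged[entry_id][0]:
            merged[entry_id] = (version, value)
    replica_a.clear(); replica_a.update(merged)
    replica_b.clear(); replica_b.update(merged)

desktop = {"show-1": (1, "not scheduled")}
phone   = {"show-1": (2, "record!")}    # user scheduled a recording offline
sync(desktop, phone)
print(desktop["show-1"])                # (2, 'record!') on both devices now
```

Offline capability falls out naturally: changes made while disconnected are just entries with newer versions, applied at the next sync.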
In his demo, Ori had his media center PC connected to Live Mesh, advertising its metadata, like favorites, recordings, etc. That information was synchronized to the Live Desktop (running in the browser), so he could pick a TV show there and “start” recording. (Actually, that “start” was some little piece of data, synchronized back to the media center PC, which in turn did, surprise!, start recording.) He then started a locally installed application that showed the TV guide and had the typical red recording sign right at the respective TV show. That application was offline-capable, so he could have planned his TV recordings on the airplane and had them synchronized when he got back online. Finally, he also showed the same TV guide on his mobile phone simulation.
The only thing missing was integration with other people, but I think it was in a keynote that they showed an application that allowed sharing movie reviews with some friends.
All in all, that’s what I call ubiquitous computing!
Spicing the cake: Developer tools
This part is actually not in the picture, but it’s no less essential: Where does your application or 3rd party code fit in? How does it get there? Those parts in red may be your application or service:
It’s actually quite easy: You can write applications that run on Azure and provide services (just like the basic services Microsoft provides) or a UI (just like Microsoft’s applications). You can access any of those services from the cloud or from your local application, no matter whether Microsoft or someone else provided them. And you can Mesh-enable any of these applications and services as you like. This is an open platform!
Microsoft also provides a simulated local environment for Azure, called the development fabric, along with Visual Studio integration. Thus it is possible to develop your application locally, test it locally, and only afterwards deploy it to the cloud.
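The develop-locally-first workflow boils down to one design rule: the application reads its endpoints from configuration, so the very same code runs against the local simulated environment and, after deployment, against the cloud. A minimal sketch (the variable name and the local address are made-up examples, not official settings):

```python
# Sketch: resolve the storage endpoint from configuration, defaulting to a
# local development endpoint so the app is testable without the cloud.
import os

def storage_endpoint():
    # In the cloud, the real endpoint would be injected via configuration;
    # locally, we fall back to a simulated storage service.
    return os.environ.get("STORAGE_ENDPOINT",
                          "http://127.0.0.1:10000/devstore")

print(storage_endpoint())   # local endpoint unless configured otherwise
```

Deploying then means changing configuration, not code.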
Regarding Live Mesh, Matthias has more information on developing with the Live Framework.
That’s it. You can find the complete, unaltered drawing here. That scrawl at the bottom of the drawing is actually Ori’s signature, but you should attribute any error, misunderstanding, and adverse opinion in this post to me.
Finally, two links to some alternate explanations (already repeated a thousand times over, but what the heck, they’re good):
- “Manuvir Das: Introducing Windows Azure”: Manuvir explains a little deeper how Windows Azure works.
- “Windows Azure (aka “Red Dog”) explained in 145 seconds”: funny and enlightening (presented on a site you might find interesting as well)