AJ's blog

March 25, 2010

Calling Amazon – Part 2

Filed under: .NET, C#, Silverlight, Software Architecture — ajdotnet @ 9:27 pm

The last post introduced calling an external service, namely Amazon, and spent some thoughts on the infrastructure questions. It left off with two decisions:

  1. I’m going to make REST calls, as they are more flexible than SOAP calls.
  2. I’ll make the calls to Amazon from the client.

It’s important to note that these decisions depend to a very high degree on the particular service I’m calling: Amazon offers a policy file, the structure of the API allows keeping the secrets on my server, and Amazon actually offers REST calls in the first place. Any other service might need a completely different approach. (That’s what the last post covered.)

As a reminder, here is the relevant image:

So how about actually implementing that?

The Server Part 

The server has to build and sign the URL for the Amazon call. Implementing that is straightforward. The AmazonApi class maintains the Amazon configuration in the appSettings.config:
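The configuration part might look like the following sketch; the appSettings key names and the class layout are my assumptions for illustration, not necessarily the ones in the actual project:

```csharp
using System.Configuration;

public partial class AmazonApi
{
    // Values registered with Amazon; they stay on the server,
    // read from appSettings.config (key names are assumptions).
    private static readonly string AccessKeyId =
        ConfigurationManager.AppSettings["AmazonAccessKeyId"];
    private static readonly string SecretAccessKey =
        ConfigurationManager.AppSettings["AmazonSecretAccessKey"];
}
```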

The BuildItemSearchRequestUrl method first calls a method to prepare the query parameters, then another method to build a respective URL:
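A sketch of that orchestration; the signature is an assumption (the real code uses a typed query object):

```csharp
public partial class AmazonApi
{
    // Prepare the query parameters, then turn them into a signed URL.
    public string BuildItemSearchRequestUrl(string searchIndex, string keywords, int itemPage)
    {
        IDictionary<string, string> requestParams =
            BuildRequestParams(searchIndex, keywords, itemPage);
        return BuildRequestUrl(requestParams);
    }
}
```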

The called methods are equally simple. BuildRequestParams translates the typed query parameters into a dictionary, adding some other necessary parameters along the way. The parameter names can be found in the developer guide:
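A sketch of the dictionary building, with the parameter names taken from the developer guide (the method signature and the chosen response groups are assumptions):

```csharp
using System.Collections.Generic;

public partial class AmazonApi
{
    private static IDictionary<string, string> BuildRequestParams(
        string searchIndex, string keywords, int itemPage)
    {
        // Parameter names as listed in the Product Advertising API developer guide
        return new Dictionary<string, string>
        {
            { "Service", "AWSECommerceService" },
            { "Operation", "ItemSearch" },
            { "SearchIndex", searchIndex },      // e.g. "Books"
            { "Keywords", keywords },
            { "ResponseGroup", "Small,Images" }, // default fields plus cover images
            { "ItemPage", itemPage.ToString() }
        };
    }
}
```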

In order to build the URL I need the SignedRequestHelper class, extracted from Amazon’s REST sample:
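The essence of what that helper does can be condensed roughly as follows: sort the parameters, add access key and timestamp, build the canonical string, and sign it with HMAC-SHA256. This is a simplified sketch following the documented signing scheme, not the actual class; the real code additionally handles strict RFC 3986 encoding:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class RequestSigner
{
    public static string BuildSignedUrl(
        IDictionary<string, string> parameters, string accessKeyId, string secretKey)
    {
        // Parameters must be sorted by byte order before signing
        var sorted = new SortedDictionary<string, string>(parameters, StringComparer.Ordinal)
        {
            { "AWSAccessKeyId", accessKeyId },
            { "Timestamp", DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ") }
        };
        string query = string.Join("&", sorted
            .Select(kv => Uri.EscapeDataString(kv.Key) + "=" + Uri.EscapeDataString(kv.Value))
            .ToArray());

        // Canonical string: HTTP verb, host, path, and the sorted query
        string stringToSign = "GET\necs.amazonaws.com\n/onca/xml\n" + query;
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            return "http://ecs.amazonaws.com/onca/xml?" + query +
                   "&Signature=" + Uri.EscapeDataString(signature);
        }
    }
}
```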

This method is made available to the client via a WCF service, but I’ll leave that one out; it’s straightforward and boilerplate enough.

Calling Amazon from the Client

On the SL client we have a two-step process: first, call the server with the filter criteria and get the prepared URL; second, make the call to Amazon using that URL. The first call is no different from any other call to my own server application, no need to elaborate on that. The second one uses the WebClient class to make the REST call:
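A sketch of that second step; the class name and the callback shape are assumptions, but the WebClient usage is the standard Silverlight pattern:

```csharp
using System;
using System.Net;

public class AmazonClientApi
{
    // Fire the signed URL against Amazon and hand the raw XML to the caller.
    public void BeginItemSearch(string signedUrl,
        Action<string> onXmlReceived, Action<Exception> onError)
    {
        var webClient = new WebClient();
        webClient.DownloadStringCompleted += (sender, e) =>
        {
            if (e.Error != null)
                onError(e.Error);
            else
                onXmlReceived(e.Result); // raw XML, parsed in the next step
        };
        webClient.DownloadStringAsync(new Uri(signedUrl));
    }
}
```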

The ParseItemSearchResponse method translates the XML into a respective object structure. Boilerplate, boring, and kind of longish if you do it manually.
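With LINQ to XML the core of it could look roughly like this; the entity class and the namespace version are assumptions (take the namespace from the actual response):

```csharp
using System.Collections.Generic;
using System.Xml.Linq;

public class BookItem
{
    public string Asin { get; set; }
    public string Title { get; set; }
    public string DetailPageUrl { get; set; }
}

public static class ItemSearchParser
{
    public static IList<BookItem> ParseItemSearchResponse(string xml)
    {
        // API version in the namespace is an assumption
        XNamespace ns = "http://webservices.amazon.com/AWSECommerceService/2009-11-01";
        var items = new List<BookItem>();
        foreach (XElement item in XDocument.Parse(xml).Descendants(ns + "Item"))
        {
            XElement attributes = item.Element(ns + "ItemAttributes");
            items.Add(new BookItem
            {
                Asin = (string)item.Element(ns + "ASIN"),
                Title = attributes == null ? null : (string)attributes.Element(ns + "Title"),
                DetailPageUrl = (string)item.Element(ns + "DetailPageURL")
            });
        }
        return items;
    }
}
```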

View Model Stuff

Now that the details are in place, I “only” need to wire them into the UI.

The calls from the SL client to its housing server application are straightforward, following the two-step process described above. First the bookkeeping:

The BuildItemSearchRequestUrlCall class encapsulates the calls to the BuildItemSearchRequestUrl service operation shown earlier, AmazonClientApi does the same for Amazon and is also shown above.

Now the actual implementation, kind of leapfrogging from one method to the next by way of asynchronous events and lambdas I pass in for that purpose:

That should get the first 10 results from Amazon – and prove that I can actually make the call:

The ShowAmazonResponseErrors simply iterates over the returned error collection and shows a respective message box. Amazon will return an error if it couldn’t find anything:
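A minimal sketch of that; the error entity is an assumption:

```csharp
using System.Collections.Generic;
using System.Windows;

public class AmazonError
{
    public string Code { get; set; }
    public string Message { get; set; }
}

public partial class SearchViewModel
{
    private void ShowAmazonResponseErrors(IEnumerable<AmazonError> errors)
    {
        // e.g. a "no exact matches" error when the search came up empty
        foreach (AmazonError error in errors)
            MessageBox.Show(error.Message, "Amazon search", MessageBoxButton.OK);
    }
}
```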

I have now solved the basic technical demands, yet the user may be a little more demanding, since…

Employing Paging

… 10 items is usually not sufficient. Hence I need to make more calls – read: paging. Technically, paging only requires the ItemPage parameter to be set to a value bigger than 1 (the page index is 1-based). On the view model, however, some additional questions arise.

First question is whether the subsequent pages should be loaded right away, constantly filling the result grid in the background. This could be done by triggering the next call once the previous one has returned, until all available results have arrived. Leapfrogging in a loop. Of course, if the user triggered a new search at some point in between, I would have to cancel that chain of calls. Or I could let the user trigger the loading explicitly, e.g. with some "load more" button (which is what I’ll do).

In any case I have to deal with the user changing the filter criteria or interacting with the result, e.g. resorting it. This is obvious for the second case, but even automatically loading all data in chunks takes time.

Therefore I need to distinguish between the first call and subsequent calls. The first call initiates a new search, replacing any previous search result. Subsequent calls have to use the same filter criteria, just with another page, and the result is appended to the previous ones. Now, if the filter criteria are bound to the UI and used as parameters to the service call, the user might change the filter and then click the "load more" button (or the automatic loading might kick in at that time). To prevent that I need a copy of my request property. Similarly I need to maintain my result in a separate property, otherwise the call would overwrite any previous result data.

BeginSearchAmazon and EndSearchAmazon now only handle the first call, initiating a new search, and have to be changed accordingly:

The chain for subsequent calls looks similar in structure, but preserves the values in the separate copy properties:
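The pattern of such a chain might be sketched like this; all names here are assumptions, meant only to illustrate the leapfrogging and the use of the copy properties:

```csharp
public partial class SearchViewModel
{
    // Copies preserved from the first call, deliberately not bound to the UI
    private ItemSearchRequest _currentRequestCopy;
    private int _currentPage;

    // Subsequent call: reuse the preserved filter copy, bump the page,
    // and append the results rather than replacing them.
    public void BeginLoadNextPage()
    {
        _currentPage++;
        _buildUrlCall.Begin(_currentRequestCopy, _currentPage, signedUrl =>
            _amazonApi.BeginItemSearch(signedUrl, xml =>
            {
                foreach (BookItem item in ItemSearchParser.ParseItemSearchResponse(xml))
                    CurrentResult.Add(item); // ObservableCollection bound to the grid
            },
            error => ShowError(error)));
    }
}
```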

The next image shows the dialog after having loaded 3 pages and in the process of loading the fourth: 

Great, isn’t it? By the way, the details link jumps straight to Amazon, showing the respective book.


Whether you are going to call Amazon or some other external service, these two posts should give you some hints on what to take into account from the infrastructure and architectural perspective. On the client you’ll have to look into Silverlight security and cross-domain calls; on the server you might run into firewall or proxy authentication issues.

Also the Amazon API with its approach to paging may give you some hints on how to implement paging over larger result sets with Silverlight. While server calls are asynchronous, SL doesn’t provide the option of processing results while they arrive. For a large result set it might take some time to download the data, and the user might notice the time lag. It could be the better user experience to load the data in chunks, as shown here.

One hint at last: Jon Galloway has a good explanation on the rationale behind policy files on the called server, see here.

That’s all for now folks,


March 21, 2010

Calling Amazon – Part 1

Filed under: .NET, .NET Framework, Silverlight, Software Architecture — ajdotnet @ 6:57 pm

Connectivity is one of the promises of Silverlight. And what better target for my bookshelf application than Amazon? So I decided the book lists could come with the book cover image, and creating new catalogue entries can be streamlined using Amazon search. And while this is about Amazon, the thoughts should give you some hints on what to consider for other calls to external services as well.

Note: This post will dig into the Amazon API and some general infrastructure questions. Actually implementing this will be the topic of the next post.

The Preconditions

Amazon isn’t exactly forthcoming with its product catalogue API. Starting at http://aws.amazon.com/ points to just about any Amazon service offering there is — except the product catalogue API. Well, some bings later, by way of other articles and blog posts, and once you know that the correct name is "Product Advertising API", you’ll find the entry point. From there it is reasonably well documented.

First thing is to register as a developer. This will result in various pieces of information, which one can pick up on the user profile page:

  • The AWS Account ID is the user ID
  • A variable number of pairs of an AWS Access Key ID and the respective AWS Secret Access Key. You need one such pair for the REST API.
  • A variable number of X.509 certificates, one of which you need to make secure SOAP requests.

Understanding the API

As a quick guide to the documentation: The entry to documentation is here. Under “Documentation Archive” you can pick the latest version of the Getting Started Guide and the Developer Guide.

The logical next step is to find some samples, get them working, and understand the details of calling Amazon. There is one sample using the REST API, and one for SOAP using… WSE? Well. WCF is actually quite new, and since almost everyone is still using WSE, why update… Anyway, the example is simple to migrate – and it doesn’t work at all, could never work actually, since it doesn’t address security.

You can find descriptions on how to get security for SOAP calls working here. I never checked that out, though (I ended up using the REST API, see below), and I couldn’t find any samples using WCF.

The REST API is used by putting parameters in a dictionary, supplementing the user information, and letting a helper class (SignedRequestHelper.cs) produce a URL. The request with that URL will return some XML that one has to parse.

You’ll need that helper class; unfortunately it is, again, not that easy to find, as most links will lead you to the online test page, not the download. You could even download that page here. But I never found a download for the class by itself and ended up extracting it from the example named above.

The WSDL is also available and can be used to create a WCF client. Even if you employ the REST API, the data classes from the WCF client may help you parse the returned XML.

The API itself is simple and straightforward. You send some query parameters and you get some result. The query parameters include the operation, and the operation determines the valid set of other parameters. The operation I’m interested in is ItemSearch.

The query parameters also include the ResponseGroup parameter that describes what kind of output I would like. Amazon doesn’t return each and every detail about, say, any book in a book search result. It returns just some major fields like title, detail URL, etc. by default. One could use this information to populate a search result list and load the book details on demand, thus relieving Amazon from doing too much unnecessary work in the first place (and reducing network load). But in cases where more information is needed right away, one can tell Amazon to include other sets of information, like images, in the returned search result.

Another parameter is ItemPage for partitioning the results. Amazon returns 10 items with each search request, and there is no way to change that value. To get the next 10 items, one has to make separate calls for page 2, 3, and so on, up to a maximum of 400 pages.
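For illustration, the query parameters for fetching the third page of a book search might look like this (a hypothetical sketch; the authentication parameters and signature are added later):

```csharp
using System.Collections.Generic;

// Core ItemSearch parameters, per the developer guide
var parameters = new Dictionary<string, string>
{
    { "Operation", "ItemSearch" },
    { "SearchIndex", "Books" },
    { "Keywords", "silverlight" },
    { "ResponseGroup", "Small,Images" }, // default fields plus cover images
    { "ItemPage", "3" }                  // 1-based, at most 400
};
```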

The Infrastructure Question

Now there are a few choices to make (or rather to rule out). We have the REST and the SOAP API at our disposal, and we can make the call from our server, or from our Silverlight client. Note also that the REST call can be split into two independent parts: building the URL (which includes signing), and making the actual call against Amazon. In theory this leads to the following options:

  1. SOAP call from the server
  2. SOAP call from the Silverlight client
  3. REST URL built at the server, call made from the server
  4. REST URL built at the server, call made from the SL client
  5. REST URL built at the Silverlight client, call made from the SL client

What are the forces restricting these options?

  • One restriction is that I’m not going to send my private Amazon secret key or certificate – my eyes only, signed with the blood of a black cat killed at full moon on the grave of a convicted murderer – to the Silverlight client. It’s not that I don’t trust you… Well, it is, and I don’t. That invalidates options 2 and 5.
  • Another possible restriction is the server infrastructure. Depending on the proxy or firewall configuration, you may not be able to make outbound calls from your server to the internet. In case of a proxy it might be possible, but it’d take unreasonable effort. That puts at least a question mark behind options 1 and 3.

Regarding proxy authentication: ASP.NET applications (including .asmx services) have the CredentialCache.DefaultNetworkCredentials property to get the current user’s credentials to pass on. WCF services don’t have that option which makes it unreasonably hard to make the subsequent call using the current user’s security context. Tell me if I’m missing something! 

  • To make the picture complete: our SL client is also subject to security restrictions. The called service has to explicitly allow the call from our client by offering a policy file. Fortunately Amazon does that, so this won’t be an issue for now. If you are planning on using other services, make sure to check this out, for this puts a block on the call from the client.

Note: I’d like to stress the fact that it is the providing service, Amazon in this case, that has to opt in for client calls. There is nothing that can be done on the client side about it. This is quite a common misconception…

The corrected list of options:

  1. (SOAP call from the server)
  2. SOAP call from the Silverlight client – ruled out
  3. (REST URL built at the server, call made from the server)
  4. REST URL built at the server, call made from the SL client
  5. REST URL built at the Silverlight client, call made from the SL client – ruled out

Since the REST API offers client side calls (option 4) and still leaves the option of making server calls (option 3), the SOAP option never came up again. I actually started with server calls.

Calling Amazon from the Server

In this scenario all security related issues are the server’s problem:

The client passes the filter criteria to the server, the server creates a signed URL, invokes the REST call, parses the XML, and returns the result. The call from the server to Amazon may have to go through proxies, firewalls, or other intermediaries.

Doing the work on the server had certain advantages: it was easy to implement, I could use unit tests, and I added some persistent caching in files (actually to avoid flooding Amazon with my debug calls, but caching would of course benefit multiple clients). Also, in this scenario the server does the job of parsing the returned XML into decent entities, and only those are returned to the client, which may be a factor depending on the WAN structure.

Calling Amazon from the Client

In this scenario the client still calls the server, but rather than making the call to Amazon, the server just builds the signed URL and hands it back to the client. The client calls Amazon and it also has to parse the resulting XML.

Calling the service from the client is only possible if the called server permits it (as Amazon does). And if it does, it is more fragile, as one cannot always foresee under which circumstances the code will run. If something goes wrong, the chances of getting diagnostic information are slim.

On the other hand, the proxy issue is nicely circumvented, and in case you have to support different authentication schemes, say OpenID, you may be better off: you can tell the user that you cannot record his credential information on the server, since the server never sees it.

Initially I implemented the client side call more out of curiosity, to evaluate the implications. But when I eventually did run into the proxy issue, I only had to switch my view model to get it working again.

Calling What from Where?

I’d like to stress that point: the decision whether to call the external service from my own server or from the client is extremely dependent on very different influences: infrastructure, security, the available API, whether the service opted in to client calls, etc. The decision may consequently be completely different in other cases. It may even be the case that there is no simple solution. For example, had Amazon neglected to opt in for client calls (the policy file), I would have been forced to make the call from the server. Had I then run into the proxy or some other firewall issue, I would have had some hard tasks to face.

There may be some workarounds, like falling back to .asmx for the credentials or some browser script workaround – but none is especially nice.

Anyway, since in my case I have a working approach, I can now set out to actually implementing it. Next post…

That’s all for now folks,

