Wednesday, November 16, 2011

Installing Ruby & Rails & RVM on Windows 7

I am writing this post just in case someone runs into the same issues I had when installing Ruby & Rails on Windows - maybe it will save someone a few hours.

One limitation I had: I needed to use several versions of Ruby, so I needed RVM (Ruby Version Manager), which is not available for Windows.

My config: Win 7 64 bit

In this case, you have two options:
  • Use cygwin
  • Install on Virtual Machine

Use Cygwin - did not work for me!

  • Install RVM.
    I started with this option. To install RVM, there is some great help here
  • Install RubyGems. First download RubyGems, then run ruby setup.rb
  • Install Rails by typing gem install rails. And I just got the following errors:
Building native extensions.  This could take a while...
      0 [main] ruby 1192 C:\cygwin\bin\ruby.exe: *** fatal error - unable to rem
ap \\?\C:\cygwin\lib\ruby\1.8\i386-cygwin\etc.so to same address as parent: 0x1B
0000 != 0x210000
      Stack trace:
Frame     Function  Args
023F9BB8  6102796B  (023F9BB8, 00000000, 00000000, 00000000)
023F9EA8  6102796B  (6117EC60, 00008000, 00000000, 61180977)
023FAED8  61004F1B  (611A7FAC, 61243684, 001A0000, 00210000)
End of stack trace
      1 [main] ruby 3856 fork: child 1188 - died waiting for dll loading, errno
11
      0 [main] collect2 3220 fork: child -1 - died waiting for longjmp before in
itialization, retry 10, exit code 0xC0000135, errno 11
ERROR:  Error installing rails:
        ERROR: Failed to build gem native extension.

        /usr/bin/ruby.exe extconf.rb
checking for re.h... yes
checking for ruby/st.h... no
creating Makefile

make
gcc -I. -I/usr/lib/ruby/1.8/i386-cygwin -I/usr/lib/ruby/1.8/i386-cygwin -I. -DHA
VE_RE_H    -g -O3   -Wall  -c parser.c
gcc -shared -s -o parser.so parser.o -L. -L/usr/lib -L.  -Wl,--enable-auto-image
-base,--enable-auto-import,--export-all   -lruby  -ldl -lcrypt
collect2: fork: Resource temporarily unavailable
      0 [main] collect2 3220 fork: child -1 - died waiting for longjmp before in
itialization, retry 10, exit code 0xC0000135, errno 11
make: *** [parser.so] Error 1
I never got past this issue, and I tried for quite a long time. So if you are running the same configuration as I was, be aware that you might end up like this...

Installing on Ubuntu 11 in VMWare

This should be just a piece of cake, I thought.
  • Ubuntu comes with Ruby already installed.
  • Install RVM: sudo apt-get install ruby-rvm
  • Install RubyGems: sudo apt-get install rubygems
  • Install Rails: sudo gem install rails
  • And Bundle: sudo gem install bundle
I needed some special libraries for our application, and they had some prerequisites which I did not have installed:
To install "nokogiri" I had to do:
sudo apt-get install libxslt-dev
sudo gem install nokogiri

To install "rmagick":
sudo apt-get install libmagickwand-dev
sudo gem install rmagick

Now I ran "bundle" on the application, which actually finished, but with some warnings, and the application did not run. So I started cleaning up the warnings:

First warning:

Invalid gemspec in [/var/lib/gems/1.8/specifications/capybara-1.1.1.gemspec]: invalid date format in specification: "2011-09-04 00:00:00.000000000Z"
Invalid gemspec in [/var/lib/gems/1.8/specifications/polyamorous-0.5.0.gemspec]: invalid date format in specification: "2011-09-03 00:00:00.000000000Z"
Apparently this is quite common, and apparently a different solution works for everyone. What worked for me was:

sudo gem install rubygems-update
sudo update_rubygems 


Another strange warning which I was getting:
ERROR:  While executing gem ... (Gem::DocumentError)
    ERROR: RDoc documentation generator not installed: no such file to load -- json

Solved by:

gem install rdoc-data
rdoc-data --install

After that I had to reinstall "bundle", and gem actually reinstalled all the dependencies. But after that, it ran!

UPDATE:
I also needed to access my application from our network. VMWare has two options for setting up the network: NAT and Bridged. The bridged interface did not work with Ubuntu 11 (I don't know why).
So when you are in NAT mode and you need to access your VM, you will need to configure port forwarding.
Here is a good way to set it up.

I get that Ruby is not Windows friendly, but on Ubuntu it was not much better: until I solved those strange warnings, nothing worked correctly. And that was a clean install - I mean a clean machine, with Ruby pre-installed; I was just adding RubyGems and Rails... it took me half a day...

Saturday, November 12, 2011

Universal Naive Bayes Classifier for C#

This post describes the internal structure and the possible uses of a Naive Bayes classifier implemented in C#.

I was searching for a machine learning library for C#, something that would be to C# what WEKA is to Java. I found machine.codeplex.com, but it did not include Bayesian classification (the one I was interested in). So I decided to implement it into the library.

How to use it

One of the aims of machine.codeplex.com is to allow users to use simple POCOs for the classification. This can be achieved using C# attributes. Take a look at the following example, which handles the categorization of payments based on two features: Amount and Description.
First this is the Payment POCO object with added attributes:
public class Payment
{
    [StringFeature(SplitType = StringType.Word)]
    public String Description { get; set; }

    [Feature]
    public Decimal Amount { get; set; }

    [Label]
    public String Category { get; set; }
}
And here is how to train the Naive Bayes classifier using a set of payments and then classify a new payment.
var data = Payment.GetData();            
NaiveBayesModel<Payment> model = new NaiveBayesModel<Payment>();
var predictor = model.Generate(data);
var item = predictor.Predict(new Payment { Amount = 110, Description = "SPORT SF - PARIS 18 Rue Fleurus" });

After the execution the item.Category property should be set to a value based on the analysis of the previously supplied payments.

About Naive Bayes classifier

This is just a small and simplified introduction; refer to the Wikipedia article for more details about Bayesian classification.

Naive Bayes is a very simple classifier which is based on the premise that all the features (or characteristics) of the classified items are independent of each other. This is not really true in real life, which is why the model is called naive.
The probability of an item with features F1, F2, F3 being of category C1 is proportional to:

p(C1|F1,F2,F3) ∝ P(C1)*P(F1|C1)*P(F2|C1)*P(F3|C1)

Where P(C1) is the a priori probability of an item being of category C1 and P(F1|C1) is the posteriori probability of observing feature F1 on an item of category C1.
That is simple for binary features (like "Tall", "Rich"...). For example, P(Tall|UngulateAnimal) = 0.8 says that the probability of an animal being tall, given that it is an ungulate, is 0.8.

If we have continuous features (just like the "Amount" in the payment example), the posteriori probability is expressed slightly differently. For example, P(Amount=123|Household) = 0.4 can be read as: given that a payment belongs to my household payments, the probability density at an amount of 123$ is 0.4.

When we classify, we compute the total probability for each category (or class if you want) and select the category with the maximal probability. We thus have to iterate over all the categories, and over all the features of each item, multiplying the probabilities to obtain the probability of the item belonging to each class.
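In other words, with the notation above, the predicted category is the one that maximizes this product:

Category(item) = argmax over C of P(C)*P(F1|C)*P(F2|C)*...*P(Fn|C)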

How it works inside

After calling the Generate method on the model a NaiveBayesPredictor class is created. This class contains the Predict method to classify new objects.
My model can work with three types of features (or characteristics, or properties):
  • String properties. These have to be converted to binary vectors based on the words they contain. The classifier builds a list of all words existing in the training set, and then a String feature can be represented as a set of binary features (a sketch of this conversion follows these lists). For example, if the bag of all words contains four words (Hello, World, Is, Cool), then the vector [0,1,0,1] represents the text "World Cool".
  • Binary properties. Simple true or false properties
  • Continuous properties. By default these are Double or Decimal values, but the list could be extended to other types.
After converting the String features to binary features, we have two types of features:
  • Binary features
  • Continuous features
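Here is a minimal sketch of the string-to-binary-vector conversion mentioned above (the class and method names are mine, not the library's):

using System;
using System.Collections.Generic;
using System.Linq;

public static class TextFeatures
{
    // the bag of all words collected from the training set
    public static readonly List<string> BagOfWords =
        new List<string> { "Hello", "World", "Is", "Cool" };

    // 1 when the i-th word of the bag occurs in the text, 0 otherwise
    public static int[] Vectorize(string text)
    {
        var words = new HashSet<string>(
            text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries));
        return BagOfWords.Select(w => words.Contains(w) ? 1 : 0).ToArray();
    }
}

// TextFeatures.Vectorize("World Cool") returns [0, 1, 0, 1]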
As mentioned in the introduction, for each feature of the item we have to compute the a priori and posteriori probabilities. The following pseudocode shows how to estimate these values. I use array-like notation simply because I have also used arrays in the implementation.

Apriori probability

The computation of the a priori probability is the same for both types of features.

Apriori[i] = #ItemsOfCategory[i] / #Items

Posteriori probability

The posteriori probability for binary features is estimated as:

Posteriori[i][j] = #ItemsHavingFeature[j]AndCategory[i] / #ItemsOfCategory[i]
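A rough sketch (all names here are assumed, not taken from the library) of filling the Apriori array and the binary part of the Posteriori array from the training items:

// items: the training collection (e.g. a List<Item>); item.Category is the
// category index, item.Features is the binary feature vector of the item
for (int i = 0; i < categoryCount; i++)
{
    var itemsOfCategory = items.Where(it => it.Category == i).ToList();
    Apriori[i] = (double)itemsOfCategory.Count / items.Count;

    for (int j = 0; j < binaryFeatureCount; j++)
    {
        int withFeature = itemsOfCategory.Count(it => it.Features[j] == 1);
        Posteriori[i][j] = (double)withFeature / itemsOfCategory.Count;
    }
}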

And the posteriori probability for continuous features:

Posteriori[i][j] = Normal(Avg[i][j],Variance[i][j],value)

Where Normal references the normal probability distribution. Avg[i][j] is the average value of feature "j" for items of category "i". Variance[i][j] is the variance of feature "j" for items of category "i".
If we want to know the probability of a payment with Amount=123 being of category "Food", and the average of all payments of that category is, let's say, Avg[Food][Amount] = 80, with Variance[Food][Amount] = 24, then the posteriori probability equals Normal(80, 24, 123).
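The Gauss helper used in the Predict method below could look roughly like this (a minimal sketch; the actual implementation in the library may differ):

using System;

public static class Helper
{
    // density of the normal distribution N(avg, variance) evaluated at value
    public static double Gauss(double value, double avg, double variance)
    {
        double exponent = -Math.Pow(value - avg, 2) / (2 * variance);
        return Math.Exp(exponent) / Math.Sqrt(2 * Math.PI * variance);
    }
}

// Helper.Gauss(123, 80, 24) is the Normal(80, 24, 123) from the example above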

What does the classifier need?

The response to this question is quite simple: we need at least 4 structures. Their meaning should be clear from the previous explanation.

public double[][] Posteriori { get; set; }
public double[] Apriori { get; set; }
public double[][] CategoryFeatureAvg { get; set; }
public double[][] CategoryFeatureVariance { get; set; }

And how does it classify?

As said before, the classification is a loop over all the categories in the set. For each category we compute the probability by multiplying the a priori probability by the posteriori probability of each feature. As we have two types of features, the computation differs for each of them. Take a look at this quite simplified code:

public T Predict(T item)
{
  Vector values = ...; // the item converted to a feature vector (conversion elided)
  var maxProbability = 0.0;
  var maxCategory = -1;

  foreach (var category in Categories)
  {
      var probability = Apriori[category]; // start from the a priori probability
      var j = 0; // index of the feature in the Posteriori/Avg/Variance arrays

      foreach (var feature in Features)
      {
          if (NaiveBayesModel<T>.ContinuesTypes.Contains(feature.Type))
          {
              var value = values[feature];
              var normalProbability = Helper.Gauss(value, CategoryFeatureAvg[category][j], CategoryFeatureVariance[category][j]);
              probability = probability * normalProbability;
          }

          if (feature.Type == typeof(bool)) // String properties are also converted to binary
          {
              probability = probability * Posteriori[category][j];
          }

          j++;
      }

      if (probability > maxProbability)
      {
          maxProbability = probability;
          maxCategory = category;
      }
  }

  item.SetValue(maxCategory);
  return item;
}


That's all there is to it. Once you understand that we need just 4 arrays, it is just a question of how to fill them. That is not hard (it should be clear from the previous explanation), but it takes some plumbing and looping over all the items in the learning collection.
If you would like to see the source code, check my fork of machine.codeplex.com.

Friday, October 28, 2011

Using Xaml serialization to generate Design Time Data

Recently I have been working on an almost finished Silverlight project which needed a big change in the graphical interface; in other words, I needed a designer to be able to change all the Pages and components in Blend.

The issue - well, not a real issue - is that all the data was bound to properties in the code-behind, and without the data the designer was not able to change the UI.

Blend can help you in this case, thanks to its ability to generate XAML data from an existing class. So you can just generate data from an existing ViewModel. That is great, but the problem is that the data generated by Blend is not always usable. When you have an account and Blend generates "Rhoncus vulputate" as the account name and "Ipsum hac phasellus" as the currency string, you start thinking that maybe you will have to make some manual changes. With one account that is OK, but if you have a list of accounts (say 10) and a list of operations (another 10), editing everything manually might not be the right approach.

Well, it happens that in our project we are using AutoPoco for the data generation. If you do not know it, it is a great library for generating simple POCOs - it seems that development stopped last year, but hopefully the project will live on.

Basically this means that I already have a lot of meaningful data generated in the form of POCOs, later stored in the DB. I also have a working application which connects to Web Services and generates the ViewModels on the client side, so I asked myself:
Why shouldn't I just take these ViewModels, serialize them to XAML and give them to Blend as Design Time Data Sources?

So my goal was to have the same data I was using at execution time available at design time. And as you can see in the two following pictures, I got it to work.
Run-time

Design-time

So I set out to try it:

First idea - use the XamlSerializer. The problem: this class is only present in WPF, not in Silverlight. OK, I thought, I will just use my ViewModels in a WPF application.

First problem: the ViewModels were not ready to be used in WPF. Concretely, I had two issues:
  • The INotifyDataErrorInfo interface does not exist (http://connect.microsoft.com/VisualStudio/feedback/details/568212/inotifydataerrorinfo-for-wpf)
  • The Web Service proxies generated using Silverlight are not the same as the proxies generated using WPF
Possible solutions to these issues:
  • Try to make your ViewModel completely platform independent. Reuse the DTOs (Data Transfer Objects) on the server and client side (do not let VS generate proxies for your DTO projects).
  • Use the #if SILVERLIGHT directive to specify which parts should be built for WPF and which for Silverlight.
Well, this turned out to be too complicated. Maybe it is possible on a project which you start from the very beginning, but I already had several ViewModels and changing all of them would take too much time.

Second idea - stay in Silverlight and see if I can get the XamlSerializer working there.

The built-in XamlSerializer is available only for WPF, but there is an open-source implementation by David Poll, which is part of his Silverlight and Beyond library.
The use of the serializer is described in the following blog post.

This saved my life, though I had to make some changes.
  • To determine which properties to serialize, the serializer uses the description provided on this blog, so it leaves out all generic properties which are not read-only. In my case I had a lot of ObservableCollection properties which I wanted to serialize. Actually this was the reason to try this approach, because I did not want to edit these collections for Blend by hand. To solve this issue I had to modify the VisitProperty method of the serializer in order to force it to serialize even the writable generic properties.
  • Disable serialization of certain properties. The Blend designer has problems with some special cases of XAML. Concretely, I had a problem with the serialized WCF proxies. When you generate a proxy, at least two classes are generated: the service interface (ends with "Service") and the service implementation (ends with "Client"). Blend was unable to process the XAML when it was expecting a property of type "Service" and I had given it "Client"; it seems to have problems regarding interfaces and implementations. To overcome this issue I introduced a XamlSerializationVisibility attribute which can have two values (Visible and Hidden) and lets me tell the XamlSerializer whether or not to serialize the property (a sketch follows this list). Again a slight edit of the VisitProperty method was needed to check this attribute before serialization.
  • And lastly, the original WPF XamlSerializer does not serialize all basic types. In my case I had several Decimal values which I wanted to serialize, and they were skipped. This was solved quite easily: XamlSerializer contains a class BuiltInTypeConverter, which holds a list of types that can be serialized by simple conversion to String (SupportedTypes). I added Decimal and DateTime to this collection and it just worked.
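Here is a hypothetical sketch of that attribute; the name and the two values come from the description above, the exact shape is my assumption:

using System;

public enum XamlSerializationVisibility
{
    Visible,
    Hidden
}

[AttributeUsage(AttributeTargets.Property)]
public class XamlSerializationVisibilityAttribute : Attribute
{
    public XamlSerializationVisibility Visibility { get; private set; }

    public XamlSerializationVisibilityAttribute(XamlSerializationVisibility visibility)
    {
        Visibility = visibility;
    }
}

// usage on a ViewModel property that Blend cannot handle (hypothetical property):
// [XamlSerializationVisibility(XamlSerializationVisibility.Hidden)]
// public AccountServiceClient Service { get; set; }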
So that's it - grab the source code and try to serialize your ViewModels or whatever you need. Really, great thanks to David for the implementation of this class.

Sunday, October 16, 2011

Eclipse Indigo + Maven 3 + Tomcat debugging

I had a hard time figuring out how to debug a Web project with a Maven structure using Eclipse and the built-in Tomcat debugging.

The problem is that a Maven project has a different structure than Eclipse's Dynamic Web Project, so in the end Eclipse does not know how to package a WAR file and deploy it to the server.

Finally I found this blog, which explains the issue. It describes the situation when using Maven 2 and Eclipse Helios.

I use Maven 3 and Eclipse Indigo. When using Indigo, not all the steps described in the blog are needed (no change of the .project file is needed). Basically you have to perform two steps.

1) Facets - change the project nature. Go to the properties of the project, select Project Facets and choose Dynamic Web Module. This step changes the structure of your project - it adds the WebContent folder.

2) Change the org.eclipse.wst.common.component file. This file is added once you have changed the project structure.

<?xml version="1.0" encoding="UTF-8"?>
<project-modules id="moduleCoreId" project-version="1.5.0">
  <wb-module deploy-name="ProjectName">
    <wb-resource deploy-path="/" source-path="/WebContent" tag="defaultRootSource"/>
    <wb-resource deploy-path="/WEB-INF/classes" source-path="/src/main/java"/>
    <wb-resource deploy-path="/" source-path="/src/main/webapp"/>
    <wb-resource deploy-path="/WEB-INF/lib" source-path="/target/ProjectName/WEB-INF/lib"/>
    <property name="context-root" value="ProjectName"/>
    <property name="java-output-path" value="/ProjectName/target/classes"/>
  </wb-module>
</project-modules>

What we need is to copy everything required into the WebContent directory. There are parts which you can copy directly from your project (JSP files, web.xml) and parts which you have to copy from the Maven build output (libs, classes).

If you are lost, take a look at the mentioned blog.

Wednesday, October 12, 2011

Catalina - Life cycle exception

Last project: Tomcat and a Java Web application.
Suddenly, just after adding some dependencies using Maven, I was not able to start the server. The exception did not give me many details:
Grave: ContainerBase.addChild: start: 
org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/Bank]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)

Caused by: org.apache.tomcat.util.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 60
at org.apache.tomcat.util.bcel.classfile.Constant.readConstant(Constant.java:131)
at org.apache.tomcat.util.bcel.classfile.ConstantPool.(ConstantPool.java:60)

oct. 07, 2011 2:26:19 PM org.apache.catalina.startup.HostConfig deployDirectory

Grave: Erreur lors du déploiement du répertoire Bank de l'application web
java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/Bank]]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:816)

So after checking the state of web.xml (no changes there), I started to look at the dependencies which I had added. Actually, I took a look at the JARs which were being downloaded automatically by Maven.
I found out that when I added a dependency on the Jaxen library, several JARs were downloaded.

I started deleting them one by one and redeploying, just to find out that it was icu4j.jar which was causing the problem. I was sure that I did not need it, so I solved the problem by declaring a Maven exclusion.

<dependency>
  <groupId>jaxen</groupId>
  <artifactId>jaxen</artifactId>
  <version>1.1.1</version>
  <exclusions>
    <exclusion>
      <groupId>com.ibm.icu</groupId>
      <artifactId>icu4j</artifactId>
    </exclusion>
  </exclusions>
  <scope>runtime</scope>
</dependency>

Curiously enough, I needed Jaxen in order to be able to use the Azure4Java tools and access Azure table storage.


I still do not understand the real cause of this problem, so if anyone runs into a similar issue, I hope this helps.

Wednesday, August 17, 2011

DotNetOpenAuth and Url rewriting (in Azure)

I created a simple OAuth provider for my web application using DotNetOpenAuth, just the way I described in my last post. Locally everything worked just fine. However, when I published the solution to Azure, I got errors while processing authorized requests.

The error happened because I used the idea of specifying an exact scope within each token, which says which URL the consumer application has the right to access. So a String value representing the scope - the URL the application may access - is added to each Access Token. When the consumer application requests data from a certain web service, the Authorization Manager checks the scope of the access token which was added to the request.
The problem comes when the server hosting the OAuth data provider performs URL rewriting. In that case the URL being accessed has changed in the HTTP pipeline and the provider has to take care of that. The URL of each request coming to Azure changes internally.

If you take a look at the Authorization Manager code from DotNetOpenAuth, you will see that it checks the scope of the incoming message.
public class OAuthAuthorizationManager : ServiceAuthorizationManager
{
  protected override bool CheckAccessCore(OperationContext operationContext)
  {
    //check the access token etc...
    //scopes contains the scopes added to the access token
    if (scopes.Contains(operationContext.IncomingMessageHeaders.Action)) {
      return true;
    }
  }
}

And that is actually the problem: operationContext.IncomingMessageHeaders.Action contains the URL after the rewrite, while the consumer application usually specifies the URL it wants to access in its pre-rewrite form.

What I found as a solution to this issue was to use this piece of code instead (the Via property still holds the address the client originally used):
Uri requestUri = operationContext.RequestContext.RequestMessage.Properties.Via;
...
...
// strip the query string, if any (IndexOf returns -1 when there is none)
string absoluteUri = requestUri.AbsoluteUri;
int queryStart = absoluteUri.IndexOf('?');
var action = queryStart >= 0 ? absoluteUri.Substring(0, queryStart) : absoluteUri;
if (scopes.Contains(action))
{
    return true;
}

Using DotNetOpenAuth to create OAuth Provider

Download the source code from GitHub.
DotNetOpenAuth is an open source library created and managed by Andrew Arnott which gives you the possibility to use the OAuth protocol, OpenID and InfoCard. It is powerful and comes with a nice Samples package. Recently I needed to implement an OAuth provider; in other words, I wanted to allow third party applications to obtain data from my application after the user authorizes them to do so.

Before implementing it in my application, I created a simple Proof of Concept (POC), which I will share with you. Basically it is just a simplified version of the OAuthProvider project in the Samples package. That is a great example; however, one fact that might be a little confusing is that it uses Linq to SQL to store the authentication tokens, and if you do not want to go into that you might get lost. My target application uses NHibernate for ORM, but I decided to make the POC store its data only in memory, to keep it as simple as I could.

At the end I will lay out how I incorporated DotNetOpenAuth into my application, where I already had a Data Access layer established using NHibernate.

To understand the rest of this post, you need to understand the basics of the OAuth protocol.

Here is the standard way of communication between a Consumer and a Provider using the OAuth protocol.



DotNetOpenAuth provides classes and structures which enable you to easily create an OAuth Consumer or Provider and manipulate tokens. However, both Consumer and Provider have to decide how to handle and store the tokens.

The basic scenario is this:
The Provider exposes a WCF service which is secured using the OAuth protocol. The Consumer can access this service only when it obtains the authorization of the actual user, the owner of the resources.
Here is a diagram which shows the structure of an OAuth Provider implemented using DotNetOpenAuth.


There are two entities which perform the communication. The first is a simple HTTP handler which takes care of the OAuth "handshake". The second is the actual WCF service, which uses a custom Authorization Manager to perform the authentication. Both of these make use of the ServiceProvider (coming from DotNetOpenAuth). The ServiceProvider in turn uses implementations of IServiceProviderTokenManager and INonceStore, also defined in DotNetOpenAuth, which take care of the persistence of nonces and tokens. It is up to the programmer to decide how to implement these interfaces.

An OAuth provider needs to store three types of objects: Consumers, Tokens and Nonces. To keep it simple, I decided to store all of them in memory, in lists inside the application's Global file.

public class Global : System.Web.HttpApplication
{
  public static List<OAuthConsumer> Consumers { get; set; }
  public static List<OAuthToken> AuthTokens { get; set; }
  public static List<Nonce> Nonces { get; set; }  
}

These lists are used by the IServiceProviderTokenManager and INonceStore implementations, which in turn are used by the ServiceProvider. Let's first take a look at the IServiceProviderTokenManager interface (definition is here). For example, the GetRequestToken method could be implemented like this:
public IServiceProviderRequestToken GetRequestToken(string token)
{

    var foundToken = Global.AuthTokens.FirstOrDefault(t => t.Token == token && t.State != TokenAuthorizationState.AccessToken);
    
    if(foundToken==null)
    {
        throw new KeyNotFoundException("Unrecognized token");
    }
    return foundToken;
}

So it is quite easy. The method actually returns IServiceProviderRequestToken; methods which work with Nonces or Consumers also return interfaces defined by DotNetOpenAuth. In other words, all of your business entities which encapsulate Consumers, Tokens or Nonces have to implement these interfaces defined by DotNetOpenAuth.

There are two types of tokens: the Request Token (IServiceProviderRequestToken) and the Access Token (IServiceProviderAccessToken). During the OAuth handshake, the request token is exchanged for the access token, so you can actually create one class which implements both of these interfaces. In that case, implement the interfaces explicitly, because there are properties which have to be implemented with the same name. There are two String properties called Token (one coming from the Access Token interface and the other from the Request Token interface); here is the way they are implemented:
private String _token;
String IServiceProviderRequestToken.Token
{
    get { return _token; }
}

String IServiceProviderAccessToken.Token 
{
    get { return _token; }
}

public String Token { 
    set { 
        _token = value;  
    }
}

When the token changes from a Request token to an Access token, the actual String value stays the same. So I have backed both of these properties with the same private field and added a property which allows me to set this field.

Basically that's it. There is much more code around, but actually I just took most of it from the official set of examples.

Using NHibernate to persists Tokens, Consumer and Nonces

In the project where I needed to implement the OAuth provider, I was using NHibernate as my ORM together with Fluent NHibernate (a nice framework which allows you to write the NHibernate configuration in C#). What I like about this combination is that there is no XML file and no generated properties (such as with Linq2SQL).
I always try to keep my database entities as clean as possible, which is why I did not want my entities to implement the interfaces forced by DotNetOpenAuth. Instead, I wrapped my entities in classes which implement these interfaces and use the database-persisted entities as a backing store. Just to explain what I mean, here is the persistent class:
public class AuthToken
{
    public virtual int Id { get; set; }
    public virtual AuthConsumer Consumer { get; set; }
    public virtual AuthTokenState State { get; set; }
    public virtual DateTime IssueDate { get; set; }
    public virtual UserIdentity User { get; set; }
    public virtual String TokenSecret { get; set; }
    public virtual String Scope { get; set; }
    public virtual String Token { get; set; }
    public virtual String Version { get; set; }
    public virtual String VerificationCode { get; set; }
    public virtual DateTime? ExpirationDate { get; set; }
    public virtual String[] Roles { get; set; }
    public virtual String Callback { get; set; }
}
And here is the DotNetOpenAuth-compatible wrapper:
public class OAuthToken : IServiceProviderRequestToken, IServiceProviderAccessToken
{
    public OAuthToken(AuthToken token)
    {
        if (token == null)
        {
            throw new ArgumentNullException("Token passed to constructor of OAuthToken cannot be null");
        }
        Token = token;
    }

    public OAuthToken()
    {
        Token = new AuthToken();
    }

    public AuthToken Token {get;set;}

    #region IServiceProviderRequestToken

    Uri IServiceProviderRequestToken.Callback
    {
        get
        {
            return new Uri(Token.Callback);
        }
        set
        {
            if (value != null)
            {
                Token.Callback = value.AbsoluteUri;
            }
        }
    }

    string IServiceProviderRequestToken.ConsumerKey
    {
        get { return Token.Consumer.ConsumerKey; }
    }

    Version IServiceProviderRequestToken.ConsumerVersion
    {
        get
        {
            if (Token == null || Token.Version == null)
            {
                throw new ArgumentNullException("The Token or the Version are null");
            }
            return new Version(Token.Version);
        }
        set
        {
            Token.Version = value.ToString();
        }
    }

    DateTime IServiceProviderRequestToken.CreatedOn
    {
        get { return Token.IssueDate.ToLocalTime(); }
    }

    string IServiceProviderRequestToken.Token
    {
        get { return Token.Token; }
    }

    string IServiceProviderRequestToken.VerificationCode
    {
        get
        {
            return Token.VerificationCode;
        }
        set
        {
            Token.VerificationCode = value;
        }
    }

    #endregion

    #region IServiceProviderAccessToken

    DateTime? IServiceProviderAccessToken.ExpirationDate
    {
        get { return Token.ExpirationDate; }
    }

    string[] IServiceProviderAccessToken.Roles
    {
        get { return Token.Roles; }
    }

    string IServiceProviderAccessToken.Token
    {
        get { return Token.Token; }
    }

    string IServiceProviderAccessToken.Username
    {
        get {
            if (Token.User == null)
            {
                throw new ArgumentNullException("Token does not have assigned user");
            }
            return Token.User.Identification; 
        }
    }
    #endregion
}
In that case the Token Manager has to take care of the conversation with the database as well as of wrapping the received entities.
public class DatabaseTokenManager : IServiceProviderTokenManager
{
  private IOAuthServices _oAuthServices;
  
  public IOAuthServices OAuthServices
  {
      get {
          if (_oAuthServices == null)
          {
              _oAuthServices = ...; // obtain your service class which talks to the database
          }
          return _oAuthServices;
      }
  }
  
  public IServiceProviderRequestToken GetRequestToken(string token)
  {
      var authToken = OAuthServices.GetRequestToken(token);
      if (authToken == null)
      {
          throw new SecurityException("No token found: " + token);
      }
      return new OAuthToken(authToken);
  }
}
The rest stays the same and it works fine.
It took me some time to understand how DotNetOpenAuth works on the provider side. I hope this post helps someone jump in fast.

Get the code from GitHub

Friday, July 22, 2011

Emulation of Azure needs registered ASP.NET 4

While debugging an Azure Web Role, I got an error while the Visual Studio debugger was attaching to the emulator.

Here is a good blog which describes how to diagnose this error:

http://dunnry.com/blog/2011/07/14/HowToDiagnoseWindowsAzureErrorAttachingDebuggerErrors.aspx

At the core the issue was:

Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list

I quickly found out what the issue was. Earlier, to perform some testing, I had uninstalled the ASP.NET 4 extensions from my IIS and kept just the 3.5 versions (actually 2.0, because 3.5 was just framework extensions). So a quick aspnet_regiis.exe -i, run from the version 4 framework folder, fixed it.
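For reference, on a 64-bit machine the command would typically be run like this (the exact framework folder may differ on your machine):

%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i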

Tuesday, July 19, 2011

Provide JSONP with your WCF services (using .NET 3.5)

I wrote this post mainly to correct one "bug", or let's say to complete the MS example which shows how to configure your WCF services to provide data in the JSONP format.

This example works except in the case when you are returning raw JSON; that is, when you are not returning an object which is serialized into JSON, but rather a Stream which represents this JSON.

The exception you might get is:

Encountered invalid root element name 'Binary'. 'root' is the only allowed root element name.

About JSONP


JSON with Padding is a transport format which uses the ability of the SCRIPT tag to execute scripts from different domains to overcome the cross-domain access issue. Generally the returned JSON is wrapped in a JavaScript function call which can be executed cross-domain.

So before we start - JSONP support is already included in .NET 4, so there services can be configured to use JSONP just by adding the crossDomainScriptAccessEnabled attribute to the binding.

When the problem occurs

However, I am stuck with .NET 3.5 - so I needed to provide JSONP manually. Actually that is not that hard, because MS provides this functionality in the WCF-WF example package (downloadable here).

The problem is that this example is not complete. To be more specific: it works only when the service returns .NET objects which are serialized to JSON by WCF. However, in some cases you might be serving JSON which is already prepared. In this case your service returns a Stream, and then the example provided by MS will not work.

To understand the problem, we have to take a look at what exactly the MS example code does. To start, you can simply look at this blog.

So basically, to enable JSONP you just need to add the JSONPBehavior attribute to your service. In fact this behavior uses the JSONPEncoderFactory class, which defines an encoder (JSONPEncoder) converting the messages to JSONP. The encoding takes place in the overridden WriteMessage method. Let's take a look at the method provided in the MS example.
public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize, BufferManager bufferManager, int messageOffset)
{
    MemoryStream stream = new MemoryStream();
    StreamWriter sw = new StreamWriter(stream);

    string methodName = null;
    if (message.Properties.ContainsKey(JSONPMessageProperty.Name))
        methodName = ((JSONPMessageProperty)(message.Properties[JSONPMessageProperty.Name])).MethodName;

    if (methodName != null)
    {
        sw.Write(methodName + "( ");
        sw.Flush();
    }
    XmlWriter writer = JsonReaderWriterFactory.CreateJsonWriter(stream);
    message.WriteMessage(writer);
    writer.Flush();
    if (methodName != null)
    {
        sw.Write(" );");
        sw.Flush();
    }

    byte[] messageBytes = stream.GetBuffer();
    int messageLength = (int)stream.Position;
    int totalLength = messageLength + messageOffset;
    byte[] totalBytes = bufferManager.TakeBuffer(totalLength);
    Array.Copy(messageBytes, 0, totalBytes, messageOffset, messageLength);

    ArraySegment<byte> byteArray = new ArraySegment<byte>(totalBytes, messageOffset, messageLength);
    writer.Close();
    return byteArray;
}

So what is happening here: the Message object contains the object which your method returns. The WriteMessage method takes this object and writes it to a writer created over the memory stream - here a JsonWriter created by the JsonReaderWriterFactory. The problem is that the JsonWriter expects the structure of the message to be an object represented as XML, which it will convert to JSON.

Now you can see that before actually writing the content of the message, we write "methodName(" and afterwards ");". This is the wrapping in a JavaScript function call. The result will be something like "methodName({JSONObject});".

The resulting Stream is then just converted to a byte array.

This works, but the problem is that when you are returning raw JSON - in other words, when your method returns a Stream - you cannot use the JsonWriter, because Message.WriteMessage will push XML of a different structure than the writer expects.

To be specific, the XML will have the form <Binary>asdqwetasfd</Binary>, and the JsonWriter will not be able to create a reasonable JSON object from it.

Solution

The solution to the problem is the following:
  • Check the format of the message (whether it is Json or a Raw stream)
  • If it is a Raw stream, then just convert the stream to an array of bytes
  • If it is Json, then use the same procedure as before
public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize, BufferManager bufferManager, int messageOffset)
{
    WebContentFormat messageFormat = this.GetMessageContentFormat(message);

    MemoryStream stream = new MemoryStream();
    StreamWriter sw = new StreamWriter(stream);

    string methodName = null;
    if (message.Properties.ContainsKey(JSONPMessageProperty.Name))
        methodName = ((JSONPMessageProperty)(message.Properties[JSONPMessageProperty.Name])).MethodName;

    if (methodName != null)
    {
        sw.Write(methodName + "( ");
        sw.Flush();
    }

    XmlWriter writer = null;
    if (messageFormat == WebContentFormat.Json)
    {
        writer = JsonReaderWriterFactory.CreateJsonWriter(stream);
        message.WriteMessage(writer);
        writer.Flush();
        //writer.Close();
    }
    else if (messageFormat == WebContentFormat.Raw)
    {
        String messageBody = ReadRawBody(ref message);
        sw.Write(messageBody);
        sw.Flush();
    }

    if (methodName != null)
    {
        sw.Write(" );");
        sw.Flush();
    }

    byte[] messageBytes = stream.GetBuffer();
    int messageLength = (int)stream.Position;
    int totalLength = messageLength + messageOffset;
    byte[] totalBytes = bufferManager.TakeBuffer(totalLength);
    Array.Copy(messageBytes, 0, totalBytes, messageOffset, messageLength);

    ArraySegment<byte> byteArray = new ArraySegment<byte>(totalBytes, messageOffset, messageLength);
    stream.Close();
    
    return byteArray;
} 

You can see that I am using two additional methods: GetMessageContentFormat and ReadRawBody. I did not come up with these methods; I borrowed them from the blog of Carlos Figueira.
In his blog he describes how to use these methods when inspecting messages. That is not the same scenario, but inspecting outgoing messages and creating your own MessageEncoder are just two ways to achieve the same thing.
I will add the definitions of the methods here, but the above mentioned blog post is a great source of information regarding the customization of WCF services, definitely worth checking.

private WebContentFormat GetMessageContentFormat(Message message)
{
    WebContentFormat format = WebContentFormat.Default;
    if (message.Properties.ContainsKey(WebBodyFormatMessageProperty.Name))
    {
        WebBodyFormatMessageProperty bodyFormat;
        bodyFormat = (WebBodyFormatMessageProperty)message.Properties[WebBodyFormatMessageProperty.Name];
        format = bodyFormat.Format;
    }

    return format;
}

private String ReadRawBody(ref Message message)
{
    XmlDictionaryReader bodyReader = message.GetReaderAtBodyContents();

    bodyReader.ReadStartElement("Binary");
    byte[] bodyBytes = bodyReader.ReadContentAsBase64();

    string messageBody = Encoding.UTF8.GetString(bodyBytes);

    // Now recreate the message, because the body reader can only be consumed once
    MemoryStream ms = new MemoryStream();
    XmlDictionaryWriter writer = XmlDictionaryWriter.CreateBinaryWriter(ms);
    writer.WriteStartElement("Binary");
    writer.WriteBase64(bodyBytes, 0, bodyBytes.Length);
    writer.WriteEndElement();
    writer.Flush();
    ms.Position = 0;
    XmlDictionaryReader reader = XmlDictionaryReader.CreateBinaryReader(ms, XmlDictionaryReaderQuotas.Max);
    Message newMessage = Message.CreateMessage(reader, int.MaxValue, message.Version);
    newMessage.Properties.CopyProperties(message.Properties);
    message = newMessage;
    return messageBody;
}

Friday, July 8, 2011

Consuming WCF Services with Java Client

Here is the state of my latest project: I have a Silverlight application which talks to traditional WCF services on the backend. The services have so far been configured automatically - let's say Visual Studio took care of the web.config. The newest requirement for my application was to allow Java clients to consume these services.

The prerequisites for this post are some basic knowledge of WCF (bindings, services, endpoints) and some knowledge of Java (I am using Axis to generate the clients... for the first time).

To make it a bit more complicated: I was using FormsAuthentication on the backend side, since these services are hosted by IIS 7.

Here I want to describe how to configure WCF services to be consumed by Java clients.
The second part, which describes how to keep using Forms Authentication, is covered in my other post.

To expose the services to Java clients, we have two options:
  • Expose the services using the SOAP protocol
  • Expose the services using the REST approach
Both of these are possible with WCF. This ability to take existing services and expose them using different protocols and transfer formats is what makes WCF so powerful and useful.

Here I will describe in detail how to expose the services using the SOAP protocol, and at the end I will give a brief description of what to do to expose them using the REST approach.

Changing WCF configuration

The first step is to change the WCF configuration, which lives in the "web.config" file (at least in the case of a service hosted in IIS).

If you let Visual Studio configure your service, you will see that it creates a separate binding for each service - even though the services could share a binding configuration.

Also, if the service is consumed by a Silverlight client, VS chooses binaryMessageEncoding as the transport format. Because both the backend and the client are .NET applications, WCF can be configured to transfer the objects over the wire in a binary format (both the client and the server know how to serialize/deserialize the data in this format). To consume the service from a Java application you will need to use the traditional basicHttpBinding - a simple binding which uses the standard WSDL specification in hand with XML serialization.

So the first step is to locate your binding and service definition and change the binding to basicHttpBinding.

<binding name="BinaryOverHTTPBinding">
    <binaryMessageEncoding />
    <httpTransport />
</binding>
    
<service name="Octo.Bank.Web.WCFServices.WCFUserService" behaviorConfiguration="NeutralBehavior">
    <endpoint address="" binding="BinaryOverHTTPBinding" contract="MyProject.WCFUserService"/>
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
</service>
Replace the binding configuration in the endpoint definition.
<endpoint address="" binding="basicHttpBinding" contract="MyProject.WCFUserService"/>

Just to complete the picture here, the service is configured to use "NeutralBehavior".

<behavior name="NeutralBehavior">
  <serviceMetadata httpGetEnabled="true"/>
  <serviceDebug includeExceptionDetailInFaults="false"/>
</behavior>

What is important is that httpGetEnabled set to true, in combination with the mex endpoint, ensures that the WSDL definition of this service is exposed (the URL of the WSDL definition is simply http://server/myService?wsdl).

Now that is the bare minimum needed to connect to this WCFUserService with a Java client.

Defining the namespaces and ports

While a WCF client or a Silverlight client has no problem generating a stub for the defined service, when you try to generate the client in Java you will get an exception saying that one of the port bindings was not properly defined. The cause is that you need to define a different namespace and name in your ServiceContract and ServiceBehavior. These are two attributes which can be placed on top of your service class.

[ServiceContract(Namespace = "octo.users.service",Name="UserService")]
[ServiceBehavior(Namespace = "octo.users.port", Name = "UserPort")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WCFUserService { }
This completely changes the resulting WSDL file which describes the service. This is really important: if you do not make these changes, you will not be able to generate the client with the Axis framework.

Creating the Java client

I am using Eclipse in combination with the Axis framework to talk to my services. But first, let's lay out the two options we have to access the web services:

  • Creating the client dynamically - this option is more complicated, because we have to know a bit about how the service is defined in the WSDL file, but it allows us to perform more changes on the SOAP messages we want to send (for example changing the SOAP headers).
  • Using Axis to generate the client for us - this is much easier, however it gives us only a limited ability to process the received SOAP messages.

Accessing Web Service using Axis created client

Before we start, we need to generate the client. You can either use the built-in tool in Eclipse ("New -> Other -> Web Service Client") or the command-line WSDL2Java utility. In both cases you just have to enter the URL of the WSDL.

When the client is ready, you can see that there is quite a lot of code (about 10k lines) generated for you.
MyServiceLocator locator = new MyServiceLocator();
AuthService client = locator.getBasicHttpBinding_AuthService();
String cookie = client.LoginCookie("login","password");

I am calling the method defined before which gives me the authentication cookie. Remember that this "Authentication Service" stays open, so anybody can call its methods. Now that we have the cookie, we can use it to make calls to the other, protected services.

MyServiceLocator locator = new MyServiceLocator();
WCFUserService client = locator.getBasicHttpBinding_WCFUserService();
((Stub)client)._setProperty(Call.SESSION_MAINTAIN_PROPERTY,new Boolean(true));
((Stub)client)._setProperty(HTTPConstants.HEADER_COOKIE, ".ASPXAUTH=" + 
cookie);
Object data = client.GetSecuredData(myParam);

The generated client does not allow you to add cookies directly, but you can cast the client to org.apache.axis.client.Stub, which exposes the _setProperty method; the static HTTPConstants class provides the names of the headers which you can set.

Now notice the "ASPXAUTH=" that is the prefix(or in other words the name) of the cookie and it has to be there. It took me a while to find out in what exact form should I send the cookie, finally Fiddler came as help - I used the Silverlight client to see what exactly he is sending and I just did the same.

Creating the client dynamically

The javax.xml.rpc namespace provides classes allowing the creation of a web service client on the fly (without code generation). This has some advantages, especially that you can create a javax.xml.rpc.Service instance permitting you to assign special handlers, which are executed during the "reception" and "sending" of SOAP messages. These handlers allow you to alter the content of the message and thus provide the possibility of some additional tuning or security checks.

If you have worked with the WCF or CXF frameworks, you have probably heard of Interceptors, which are the equivalent of "Handlers".

Personally, I thought I would be able to create my own handler to recover the authentication cookie sent the standard way, but I did not manage to get the cookie from the SOAP message. I will present here the skeleton of my solution - maybe someone will be able to finish it and obtain the cookie from the response of the authentication service.
try {
  QName serviceName = new QName("http://mynamespace", "AuthService");
  URL wsdlLocation = new URL("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl");

  // Service
  ServiceFactory factory = ServiceFactory.newInstance();
  Service service = factory.createService(wsdlLocation, serviceName);

  // Add the handler to the handler chain of the port
  QName portName = new QName("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl", "BasicHttpBinding_AuthService");
  HandlerRegistry hr = service.getHandlerRegistry();
  List handlerChain = hr.getHandlerChain(portName);
  HandlerInfo hi = new HandlerInfo();
  hi.setHandlerClass(SimpleHandler.class);
  handlerChain.add(hi);

  QName operationName = new QName("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl", "Login");
  Call call = service.createCall(portName, operationName);

  // call the operation
  Object resp = call.invoke(new java.lang.Object[] {"login", "pass"});
} catch (Exception e) {
  e.printStackTrace();
}
To be able to call the web service dynamically, you need to specify the name of the service, the port and the operations. You can find these easily in the WSDL definition file.
Here follows the definition of the SimpleHandler class which is added to the handler chain:
public class SimpleHandler extends GenericHandler {
 
  HandlerInfo hi;
 
  public void init(HandlerInfo info) {
    hi = info;
  }

  public QName[] getHeaders() {
    return hi.getHeaders();
  }

  public boolean handleResponse(MessageContext context) {
    try {
     
     //Iterate over all properties - did not find the cookie there :(
     Iterator properties = context.getPropertyNames();
        while(properties.hasNext()){
         Object property = properties.next();
         System.out.println(property.toString());
        }
        
      //examine the response header - did not find the cookie there either :( 
      if(context.containsProperty("response")){
       Object response = context.getProperty("response");
       HttpResponse httpResponse = (HttpResponse)response;
       
       Header[] headers = httpResponse.getAllHeaders();
       for(Header header:headers){
        System.out.println(header.toString());
       }
      }
     
     //here is how to get the SOAP headers - they do not serve - we need pure HTTP response
      // get the soap header
      SOAPMessageContext smc = (SOAPMessageContext) context;
      SOAPMessage message = smc.getMessage();
      
    } catch (Exception e) {
      throw new JAXRPCException(e);
    }
    return true;
  }
  public boolean handleRequest(MessageContext context) { 
    return true;
  }
}

Securing services by SSL

In my other post, where I describe how to secure the web services with SSL, you can also find information on how to configure the Java client to connect to these secured services.

REST approach

To expose the service as RESTful, we have to define another endpoint for it.

<service behaviorConfiguration="NeutralBehavior" name="MyServic">
  <endpoint address="json" binding="webHttpBinding"  behaviorConfiguration="jsonBehavior" contract="Octo.Bank.Web.WCFServices.WCFAccountService" name="JsonEndpoint"/>
  <endpoint address="soap" binding="basicHttpBinding" .../>
  <endpoint address="mex" .../>
</service>
Notice that this endpoint uses webHttpBinding and a special behavior called jsonBehavior. This behavior, as its name says, just defines JSON as the transport format.
<endpointBehaviors>
  <behavior name="jsonBehavior">
    <webHttp defaultOutgoingResponseFormat="Json"/>
  </behavior>
</endpointBehaviors>
That is enough for the configuration. Now just some minor changes to the service itself.
public class MyService {  
  [OperationContract]
  [WebGet(UriTemplate="/accounts?id={id}", BodyStyle=WebMessageBodyStyle.Wrapped)]
  public IList<AccountDto> GetAccountsByCustomer(int id)
  {
    return AccountService.GetCustomerAccounts(id);
  }
}
It is the WebGet attribute which exposes the method to HTTP GET requests. The UriTemplate defines which URL will invoke the method; notice that the parameter of the method is extracted from the URL itself.
A method which posts data would be decorated with the [WebInvoke] attribute instead.
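For example, with the "json" endpoint address configured above, the GetAccountsByCustomer method would be reachable with a plain GET request at a URL of roughly this shape (host and service file name are hypothetical):

http://localhost/WCFAccountService.svc/json/accounts?id=5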
This is just a brief intro; you can find more information on the internet. Here I just wanted to provide enough basic information to make this post complete.

Summary

I have shown how to change the configuration to publish a WCF service using the SOAP protocol and consume this service with a Java client. At the end I briefly showed how to expose the service using the REST approach.

ASP.NET Forms Authentication and Java client

This post describes my recent situation: I have a Silverlight application which talks to traditional WCF services on the backend. The services have so far been configured automatically - let's say Visual Studio took care of the web.config. The newest requirement for my application was to allow Java clients to consume these services.

The prerequisites for this post are some basic knowledge of WCF (bindings, services, endpoints) and some knowledge of Java (I am using Axis to generate the clients... for the first time).

To make it a bit more complicated: I was using FormsAuthentication on the backend side, since these services are hosted by IIS 7.

Here I want to show what to do to use Forms Authentication from a Java application, a mobile client, or any other non-browser client.

The second part, which describes how to enable WCF services to be consumed by a Java client, is covered in my other post.

IIS 7 built-in Authentication Service

I was using the built-in authentication service in order to authenticate the client. It is just a basic service which offers methods such as Login, Logout, etc.
This service can be enabled on the IIS server using the following configuration:
<system.web.extensions>
  <scripting>
    <webServices>
      <authenticationService enabled="true" requireSSL="false"/>
    </webServices>
  </scripting>
</system.web.extensions>
And we also need to expose this service:
<service behaviorConfiguration="NeutralBehavior" name="System.Web.ApplicationServices.AuthenticationService">
    <endpoint address="" binding="basicHttpBinding" contract="System.Web.ApplicationServices.AuthenticationService" />
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>
Now, that service works great from a Silverlight client, but I was not able to generate a Java client for this service - I tried different versions of Axis and different settings, but it did not work for me.

So for the non-Silverlight client I needed to write my own authentication service. That is actually pretty easy using the FormsAuthentication static class.
[OperationContract]
public void Login(String login, String password)
{
    //your way to authenticate the user against a DB or whatever
    var user = UserService.AuthenticateUser(login, password);

    if (user != null)
    {
        FormsAuthentication.SetAuthCookie(login, true);
    }
}
After you verify the user's credentials, you can just call the SetAuthCookie method. This method adds the authentication token to the response which goes back to the client. The browser then adds this token to any request it sends to the server.
And here comes the problem: how to use this with a non-browser-based application?
Let me continue.

Services secured using PrincipalPermission

I use FormsAuthentication because it allows me to secure all services just by adding the PrincipalPermission attribute over each service method. So my WCFTagService can look like this:
[ServiceContract]
public class WCFTagService
{
  public WCFTagService()
  {
      //take over the identity established by Forms Authentication
      Thread.CurrentPrincipal = HttpContext.Current.User;
  }

  [OperationContract]
  [PrincipalPermission(SecurityAction.Demand, Authenticated = true)]
  public Object GetSecuredData(int param)
  {
      return MyDB.GetData();
  }
}
In the constructor, the CurrentPrincipal is set to the current user of the ASP.NET application (again, we are hosting this service in IIS); then the [PrincipalPermission] attribute will check, even before the method is executed, whether the user is logged in.
And how is the HttpContext.Current.User determined?
Well, simply by checking the authentication token which the browser adds to the request. IIS will automatically check this token and populate HttpContext.Current.User with the correct identity.
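One detail worth noting: for HttpContext.Current to be available inside a WCF service, ASP.NET compatibility mode has to be enabled. A minimal configuration sketch (standard WCF configuration, shown here for completeness):
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
The service class is then typically decorated with [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)].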

Adding one more authentication service for Java Clients

This is definitely not the cleanest solution, but it is the only way I was able to get it to work. Basically, when I call
FormsAuthentication.SetAuthCookie(login, true);
the cookie is added to the response and I have to get it on the client (Java) side. Actually, I was not able to achieve that - I will describe the approach I took below, but I just did not get the cookie from the response. So I decided to build one more service which just returns the authentication token (or cookie, if you will).
[OperationContract]
public String LoginCookie(String login, String password)
{
  var user = UserService.AuthenticateUser(login, password);
  if (user != null)
  {
      var cookie = FormsAuthentication.GetAuthCookie(login, true);
      return cookie.Value;
  }
  return null;
}
Ok that's it, we are done. We can almost switch to JAVA.

Accessing Authentication Service using the Axis generated client

Before we start, we need to generate the client. Either you can use the built-in tool in Eclipse ("New -> Other -> Web Service Client") or you can use the command-line "WSDL2Java" utility. In both cases you just have to enter the URL of the WSDL.
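For reference, the command-line variant looks roughly like this (assuming the Axis 1.x jars are on the classpath):
java org.apache.axis.wsdl.WSDL2Java http://localhost:49830/WCFServices/WCFUserService.svc?wsdl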
When the client is ready, you can see that there is quite a lot of code (about 10k lines) generated for you.
MyServiceLocator locator = new MyServiceLocator();
AuthService client = locator.getBasicHttpBinding_AuthService();
String cookie = client.LoginCookie("login","password");
That is quite simple: I am calling the method which I defined before, which gives me the authentication cookie. Remember that this "Authentication Service" stays open, so anybody can call its methods. Now that we have the cookie, we can use it to make calls to the other, already protected services.
MyServiceLocator locator = new MyServiceLocator();
WCFUserService client = locator.getBasicHttpBinding_WCFUserService();
((Stub)client)._setProperty(Call.SESSION_MAINTAIN_PROPERTY,new Boolean(true));
((Stub)client)._setProperty(HTTPConstants.HEADER_COOKIE, ".ASPXAUTH=" + cookie);
Object data = client.GetSecuredData(myParam);
The generated client does not allow you to add cookies, but you can cast the client to org.apache.axis.client.Stub, which allows you to call the _setProperty method; the static HTTPConstants class provides the names of the headers which you can set.
Now notice the "ASPXAUTH=" that is the prefix(or in other words the name) of the cookie and it has to be there. It took me a while to find out in what exact form should I send the cookie, finally Fiddler came as help - I used the Silverlight client to see what exactly he is sending and I just did the same.
What is a little bit sad is the fact that we have to create a special method to be called by the Java client which returns the authentication token directly and not as a cookie.
I was thinking: it could not be that hard - generate a client and get the cookie. That way I could have only one authentication method used by both browser-based clients and Java clients. But I just did not manage to do that.

I will show the attempt I made - which did not succeed.

Creating the client dynamically

The javax.xml.rpc package (JAX-RPC) provides classes which allow the creation of a web service client on the fly (without generation). This has some advantages, especially that you can create a javax.xml.rpc.Service instance which allows the assignment of special handlers. These handlers are executed during the "reception" and "sending" of SOAP messages; they can alter the content of the message and thus provide the possibility to do some additional tuning.

Personally, I thought that I would be able to create my own handler to recover the authentication cookie sent the standard way. But I did not manage to get the cookie from the SOAP message. Well, that is actually normal, because the cookie is not part of the SOAP message but part of the HTTP message (which wraps the SOAP message). And that is the problem: I was not able to locate the cookie in the HTTP response message - does anyone know how to do that?

I will provide here a sketch of my solution - maybe someone will be able to finalize it and obtain the cookie from the response of the authentication service.
try {
  QName serviceName = new QName("http://mynamespace","AuthService");
  URL wsdlLocation = new URL("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl");

  // Service
  ServiceFactory factory = ServiceFactory.newInstance();
  Service service = factory.createService(wsdlLocation, serviceName);

  // Add the handler to the handler chain (the chain has to be
  // retrieved for the port before the handler can be added)
  HandlerRegistry hr = service.getHandlerRegistry();
  HandlerInfo hi = new HandlerInfo();
  hi.setHandlerClass(SimpleHandler.class);

  QName portName = new QName("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl", "BasicHttpBinding_AuthService");
  List handlerChain = hr.getHandlerChain(portName);
  handlerChain.add(hi);

  QName operationName = new QName("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl", "Login");
  Call call = service.createCall(portName, operationName);

  // call the operation
  Object resp = call.invoke(new java.lang.Object[] {"login","pass"});
} catch (Exception e) {
  e.printStackTrace();
}
To be able to call the web service dynamically, you need to specify the names of the service, the port and the operations; you can find these easily in the WSDL definition.
Here follows the definition of the SimpleHandler which is added to the handler chain:
public class SimpleHandler extends GenericHandler {

  HandlerInfo hi;

  public void init(HandlerInfo info) {
    hi = info;
  }

  public QName[] getHeaders() {
    return hi.getHeaders();
  }

  public boolean handleResponse(MessageContext context) {
    try {
      //Iterate over all properties - did not find the cookie there :(
      Iterator properties = context.getPropertyNames();
      while (properties.hasNext()) {
        Object property = properties.next();
        System.out.println(property.toString());
      }

      //examine the response header - did not find the cookie there either :(
      if (context.containsProperty("response")) {
        Object response = context.getProperty("response");
        HttpResponse httpResponse = (HttpResponse) response;

        Header[] headers = httpResponse.getAllHeaders();
        for (Header header : headers) {
          System.out.println(header.toString());
        }
      }

      //here is how to get the SOAP headers - they do not serve,
      //we need the pure HTTP response
      SOAPMessageContext smc = (SOAPMessageContext) context;
      SOAPMessage message = smc.getMessage();

    } catch (Exception e) {
      throw new JAXRPCException(e);
    }
    return true;
  }

  public boolean handleRequest(MessageContext context) {
    return true;
  }
}


Alternative approach using WCF Inspectors

When looking into this problem, I found one alternative approach that you can use when dealing with security and WCF services.
The solution is basic:
  • Give up on FormsAuthentication
  • Define your own authentication tickets, or just pass the login/password combination on each request in the HTTP header
  • Define a message inspector on the server which reads the message upon its reception and checks the presence of the authentication token or the credentials in the message header
When following this approach, what might come in handy is an easy way to generate and later validate the authentication ticket. FormsAuthentication can actually help you with this. Here is what happens when you call FormsAuthentication.GetAuthCookie:
FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, login, DateTime.Now, DateTime.Now.AddMinutes(30), false, login);
string encryptedTicket = FormsAuthentication.Encrypt(ticket);
HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket);
So you can create an Inspector class, which will do the reverse of this process:
public class TestInspector : IDispatchMessageInspector
{
    public TestInspector()  { }

    public object AfterReceiveRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
    {
        var httpRequest = (HttpRequestMessageProperty)request.Properties[HttpRequestMessageProperty.Name];
        var cookie = httpRequest.Headers[HttpRequestHeader.Authorization];
        if(cookie == null)
        {
          throw new SecurityException("Not authenticated!");
        }
        var ticket = FormsAuthentication.Decrypt(cookie);
        if(ticket.IsExpired)
        {
          throw new SecurityException("Ticket expired");
        }
        //no correlation state is needed
        return null;
    }

    public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {

    }
}
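The inspector still has to be plugged into the dispatch pipeline. A minimal sketch of an endpoint behavior doing that (TestInspectorBehavior is my own name for it; registering the behavior itself - in code or through a behavior extension element in web.config - is left out):
public class TestInspectorBehavior : IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        //attach our inspector to every incoming message of this endpoint
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new TestInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }
}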

Securing the services using SSL

When we pass the authentication token over the wire, we want to be sure that no-one can intercept this token and act in the name of the user against the services. To prevent this situation we can use SSL to secure the whole communication between client and server.

The WCF configuration which is needed is quite simple; we just have to alter the standard basicHttpBinding by adding the security mode.
<basicHttpBinding>
  <binding name="SecurityByTransport">
    <security mode="Transport">
      <transport clientCredentialType="None"/>
    </security>
  </binding>
</basicHttpBinding>

Then comes the infrastructure work:

  • Be sure to publish the service on your local IIS server (you cannot use the built-in Visual Studio web server)
  • On the IIS server create a new certificate - for test purposes a self-signed one
  • Configure a new HTTPS binding for the application you have deployed, using the certificate you have created
This should be enough. Now we need to go back to the Java client and regenerate the client using Axis. When you run the client for the first time, you will get the following exception:
java client unable to find valid certification path to requested target
That is because the JVM maintains its own list of trusted servers. If it sees that the certificate is signed by a Certification Authority, it will accept it. Because for testing you usually use a self-signed certificate, the JVM will not add it to the keystore; it has to be done manually.

So: go back to the IIS 7 configuration and in the list of the certificates, select the certificate and on the "Details" tab page choose: "Copy to File".
You can leave the predefined option and just save the ".CER" wherever you want to.

Now to finish, you have to run the following command in the JAVA_HOME\bin directory:
keytool.exe -import -alias localhost -file C:\myCert.cer -keystore "c:\Program Files\Java\jre6\lib\security\cacerts"
  • localhost - stands for the web server which holds the certificate (your local IIS).
  • cacerts - the file which serves as the store of trusted certificates.
  • The default password of the keystore is "changeit".

Summary

I tried to connect to secured WCF services hosted on an IIS server with a Java client. During the process I ran into some issues, but in the end I was able to connect securely to the services. The main steps are:
  • Don't use the IIS built-in Authentication Service
  • Provide a service which will return the Authentication Cookie to the Java client
  • Pass this cookie along with any request which is sent to secured services
In the end, I showed how to enable SSL on the WCF service and how to consume the service with a Java client.
And at last I presented an approach which could be taken to replace FormsAuthentication with your own authentication scheme using WCF Message Inspectors.

Sunday, June 26, 2011

Silverlight Event Calendar

For one of my latest projects I needed a quite simple Event Calendar component for Silverlight. I did not want to use any third-party libraries and I wanted this component to stay simple.

I had the following constraints on the component:
  • It has to be bindable
  • It should accept any IEnumerable collection
  • I should be able to just specify which property of the objects in the collection holds the DateTime value, which will be used to place the objects in the calendar
  • It should expose a template to be able to change the view of the event
  • It should expose events such as "Calendar Event Clicked"
  • It should expose a SelectedItem property

Here is the resulting component - it does not look great, but you can easily style it as you want.

You can get the code here from GitHub


The component is based on the Calendar component. Calendar is not a really flexible component, but there are some workarounds to make it behave the way you like. First, the calendar is placed inside a UserControl.

<UserControl x:Class="EventCalendarLibrary.EventCalendar">
    <Grid x:Name="LayoutRoot" Background="White">
        <controls:Calendar x:Name="InnerCalendar" />
    </Grid>
</UserControl>


The Calendar component is composed of CalendarDayButtons. CalendarDayButton resides in the System.Windows.Controls.Primitives namespace.

The problem is that the Calendar does not hold a collection of these buttons, so we are not able to dynamically add components to them.

However, the style of each button in the calendar can be set by setting the CalendarDayButtonStyle property.

We can use this style to override the control template and this way set our own handlers for the Loaded and Click events. The handler for the Loaded event will simply allow us to add the loaded button to a collection which we maintain inside our component and which later allows us to add the "events" to the calendar.

<Grid.Resources>
    <Style x:Key="CalendarDayButtonStyle" TargetType="controlsPrimitives:CalendarDayButton">
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="controlsPrimitives:CalendarDayButton">
                    <Border BorderBrush="#FF598788" BorderThickness="1,1,1,1" CornerRadius="2,2,2,2">
                        <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch" MinHeight="30" MinWidth="10">
                            <controlsPrimitives:CalendarDayButton
                                x:Name="CalendarDayButton"
                                Loaded="CalendarDayButton_Loaded"
                                Click="CalendarDayButton_Click"
                                Background="{TemplateBinding Background}"
                                BorderBrush="{TemplateBinding BorderBrush}"
                                BorderThickness="{TemplateBinding BorderThickness}"
                                Content="{TemplateBinding Content}" />
                        </StackPanel>
                    </Border>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>
</Grid.Resources>
<controls:Calendar x:Name="InnerCalendar" Background="White"
                   CalendarDayButtonStyle="{StaticResource CalendarDayButtonStyle}" />

So what is going on here is:
We are changing the ControlTemplate of CalendarDayButton for a new one, which consists of a Border and a StackPanel containing a new CalendarDayButton. This is important, because now we know that each "day" in the Calendar will be represented by this StackPanel, to which we can add additional components.
As promised, we handle the Loaded event. Let's see the code-behind:

private void CalendarDayButton_Loaded(object sender, RoutedEventArgs e)
{
 var button = sender as CalendarDayButton;
 calendarButtons.Add(button);

 //Resizing the buttons is the only way to change the dimensions of the calendar
 button.Width = this.ActualWidth / 9;
 button.Height = this.ActualHeight / 8;

 //the Calendar always renders 42 day buttons (6 weeks x 7 days)
 if (calendarButtons.Count == 42)
 {
  FillCalendar();
 }
}

We simply take the button, store it in our inner collection (called calendarButtons) for further manipulation and then perform some resizing. The only way to force the Calendar to resize itself to the values you specify in the "Width" and "Height" properties is actually to change the dimensions of the inner buttons.

And last, we check whether all 42 buttons have been loaded and, if so, call the "FillCalendar" method - yes, this is the method which fills the events into the calendar.

Before we go there, we need to define the Dependency Properties which will allow us to bind the desired values (the collection of items, the DateTime property name, the event style and the SelectedEvent property).


public static readonly DependencyProperty SelectedEventProperty = DependencyProperty.Register("SelectedEvent", typeof(Object), typeof(EventCalendar), null);
public Object SelectedEvent
{
 get { return (Object)GetValue(SelectedEventProperty); }
 set { SetValue(SelectedEventProperty, value); }
}

public static readonly DependencyProperty CalendarEventButtonStyleProperty = DependencyProperty.Register("CalendarEventButtonStyle", typeof(Style), typeof(EventCalendar), null);
public Style CalendarEventButtonStyle
{
 get { return (Style)GetValue(CalendarEventButtonStyleProperty); }
 set { SetValue(CalendarEventButtonStyleProperty, value); }
}

public static readonly DependencyProperty DatePropertyNameProperty = DependencyProperty.Register("DatePropertyName", typeof(String), typeof(EventCalendar), null);
public String DatePropertyName
{
 get { return (String)GetValue(DatePropertyNameProperty); }
 set { SetValue(DatePropertyNameProperty, value); }
}

public static readonly DependencyProperty ItemsSourceProperty = DependencyProperty.Register("ItemsSource", typeof(IEnumerable), typeof(EventCalendar),
 new PropertyMetadata(ItemsSourcePropertyChanged));

public IEnumerable ItemsSource
{
 get { return (IEnumerable)GetValue(ItemsSourceProperty); }
 set { SetValue(ItemsSourceProperty, value); }
}

You can see that there is a handler attached to the change of the ItemsSourceProperty. This handler is called whenever the property changes. This is an important part: we take the items, determine which property contains the DateTime value, group the items by this property and store them in an internal dictionary of type Dictionary<DateTime, List<Object>>.
public static void ItemsSourcePropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    var owner = d as EventCalendar;
    //if the property was set to null we have to clear all the events from calendar
    if (e.NewValue == null)
    {
        owner.ClearCallendar();
        return;
    }
    
    IEnumerable rawItems = (IEnumerable)e.NewValue;
    PropertyInfo property = null;

    //check whether the collection contains at least one item
    var enumerator = rawItems.GetEnumerator();
    if(!enumerator.MoveNext()){
        owner.ClearCallendar();
        return;
    }

    Object o = enumerator.Current;
    Type type = o.GetType();

    //get the type of the properties inside of the IEnumerable
    property = type.GetProperty(owner.DatePropertyName);
    
    if (property != null)
    {
        IEnumerable<Object> items = Enumerable.Cast<Object>((IEnumerable)e.NewValue);
        //group the items and store in a dictionary
        if (items != null)
        {
            var parDate = items
                        .GroupBy(x => GetDateValue(x, property))
                        .ToDictionary(x => x.Key, x => x.ToList());
            owner.ItemsSourceDictionary = parDate;
            owner.FillCalendar();
        }
    }
}

//Returns the DateTime value of a property specified by its information
public static DateTime GetDateValue (Object x, PropertyInfo property)
{
    return ((DateTime)property.GetValue(x,null)).Date;
}

It is a bit complicated - and that comes probably from my poor knowledge and experience of working with a raw IEnumerable. Basically I need to get the type of the items inside the IEnumerable; then, using this Type, I can obtain the value of the DateTime property, group the values by it and store them in the inner dictionary.


You can see that there is a simple helper function which takes a PropertyInfo and an Object and returns the Date value of that property. By using the "Date" property I am sure that I will get the exact day, without hours and minutes, and can then group the data by this day.

Now that we have the grouped events, we have to place them in the calendar. To create this function I used the example shown on this blog.

private void FillCalendar(DateTime firstDate)
{
    if (ItemsSourceDictionary!=null && ItemsSourceDictionary.Count >0)
    {                
        DateTime currentDay;

        int weekDay = (int)firstDate.DayOfWeek;
        if (weekDay == 0) weekDay = 7;
        if (weekDay == 1) weekDay = 8;

        for (int counter = 0; counter < calendarButtons.Count;counter++)
        {
            var button = calendarButtons[counter];
            var panel = button.Parent as StackPanel;

            //remove previously added event buttons (keep the day button itself)
            int nbControls = panel.Children.Count;
            for (int i = nbControls - 1; i > 0; i--)
            {
                panel.Children.RemoveAt(i);
            }

            currentDay = firstDate.AddDays(counter).AddDays(-weekDay);

            if (ItemsSourceDictionary.ContainsKey(currentDay))
            {
                var events = ItemsSourceDictionary[currentDay];
                foreach (Object calendarEvent in events)
                {
                    Button btn = new Button();
                    btn.DataContext = calendarEvent;
                    btn.Style = CalendarEventButtonStyle;
                    panel.Children.Add(btn);
                    btn.Click += new RoutedEventHandler(EventButton_Click);
                }
            }
        }
    }
}

This function accepts a DateTime parameter which is the first date of the month being shown in the Calendar. When the first of the month is a Monday, it will be shown as the first day in the second row. When it is a Tuesday, it will be shown as the second in the second row. In other cases it will be shown in the first row.
Thus we can easily subtract the integer value specifying the day of the week (e.g. 4 for Thursday) and obtain the date which is shown in the first cell.

The date which is being shown in the calendar is exposed by the Calendar.DisplayDate property, and we can easily access it to obtain the month which is being shown (and thus the first day of the month).

So we just iterate over all the buttons, determine the date for each and, knowing that the buttons are wrapped by a StackPanel, add the events to this panel.
Each event is represented by a Button, and the style which is exposed as a DependencyProperty is applied.
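For completeness, a hypothetical usage of the component might look like this (the Appointments collection, the StartDate property and the EventButtonStyle resource are my assumptions, not part of the component itself):
<local:EventCalendar ItemsSource="{Binding Appointments}"
                     DatePropertyName="StartDate"
                     CalendarEventButtonStyle="{StaticResource EventButtonStyle}" />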

Exposed events

This component exposes two events: one for the moment when the user clicks on an existing "Event" in the calendar, and a second one for the click on the button of the day.
public event EventHandler<CalendarEventArgs> EventClick;
public event EventHandler DayClick;
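The CalendarEventArgs class itself is not shown in this post (it is in the GitHub source); a minimal sketch of what it might look like:
public class CalendarEventArgs : EventArgs
{
    //the clicked event object, or the Date of the clicked day
    public object Item { get; private set; }

    public CalendarEventArgs(object item)
    {
        Item = item;
    }
}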
When the user clicks on an existing event in the calendar, we pass the clicked "Event" wrapped by the "CalendarEventArgs" class.
void EventButton_Click(object sender, RoutedEventArgs e)
{
    object eventClicked = (sender as Button).DataContext;
    
    //set the selected event
    SelectedEvent = eventClicked;

    //just pass the click event to the hosting environment of the component
    if (EventClick != null)
    {
        EventClick(sender, new CalendarEventArgs(eventClicked));
    }
}
When the user clicks on the button of the day, we pass the Date of this day wrapped up by CalendarEventArgs.
private void CalendarDayButton_Click(object sender, RoutedEventArgs e)
{
    CalendarDayButton button = sender as CalendarDayButton;
    DateTime date = GetDate(GetFirstCalendarDate(),button);

    if(date!=DateTime.MinValue && DayClick!=null)
    {
        DayClick(sender,new CalendarEventArgs(date));
    }
}
We can obtain the Date for the button by a method similar to the one described above; a sketch follows.
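The GetFirstCalendarDate and GetDate methods are not listed in this post; here is a sketch of what they might look like, based on the logic described above (the exact implementation is in the GitHub source):
private DateTime GetFirstCalendarDate()
{
    //first day of the displayed month, shifted back by its day-of-week
    //offset (Sunday -> 7, Monday -> 8) - the date shown in the first cell
    DateTime firstOfMonth = new DateTime(InnerCalendar.DisplayDate.Year, InnerCalendar.DisplayDate.Month, 1);

    int weekDay = (int)firstOfMonth.DayOfWeek;
    if (weekDay == 0) weekDay = 7;
    if (weekDay == 1) weekDay = 8;

    return firstOfMonth.AddDays(-weekDay);
}

private DateTime GetDate(DateTime firstCellDate, CalendarDayButton button)
{
    //the buttons were stored in visual order, so the index is the day offset
    int index = calendarButtons.IndexOf(button);
    return index < 0 ? DateTime.MinValue : firstCellDate.AddDays(index);
}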

Summary

There is not much more to it; as I said, the component stays super simple - just one class. You can style the Events which are placed in the Calendar, and you have to handle the other actions (like e.g. adding an "Event" when clicking on the "day" button) yourself.

Download the source from GitHub

Tuesday, June 7, 2011

Map Creator - convert raster images to map data (in Silverlight)

This post talks about a tool which can help you convert lines in raster maps to a set of locations which can later be visualized using Silverlight Bing Maps (or some other mapping framework).

I call the application Silverlight Map Creator :) and it is available at:
http://mapcreator.codeplex.com/.


This application can help you a bit when you have an existing map in a bitmap picture format (png, jpeg) and you would like to use the route lines from this map in your application.

Then you have two options:

  • Use MapCruncher. MapCruncher is a tool from MS which allows you to create your own tile source from an existing map. OK, if you have never heard of tiles:

    When using a map component (Google Maps, Bing Maps or any other), the entire map is composed of several tiles; each tile, when you zoom in, is again composed of "smaller tiles" (they are of the same size, but have higher precision).
    So MapCruncher lets you create your own map, composed of your own tiles based on the raster image. Later you can use this new map and "put it over" the standard Bing/Google/Open map - thus showing the additional information.

    This however has one disadvantage - the map is quite static - it is just a bunch of pixels on top of the classic map - you are, for example, not able to get the total length of a route on the map.
  • You need to obtain the geo-data which specify the route lines - in other words, the coordinates of the route points. To obtain this data from a predefined image you would need to first set the correspondence between the image and the map and then analyze the image to get all the points of your map.
    I decided to create a tool which would help me with this task - this post is a brief description of the tool.

Here is a screen shot of the Map Creator tool which will help you accomplish it:



If you are wondering: in the screenshot I am converting a map of a ski race (www.jiz50.cz) into a set of points. In the left part you can see the map (a jpg image) and in the right part the resulting route.

Converting raster image to map

The task of converting a raster image to map data is composed of the following parts:
  • Load the image (by clicking the browse button...)
  • Set correspondence points between the image and the map
  • Pick up the color which defines the route or path in the raster image
  • Set some parameters for the analysis of the map
  • Press Start and hope to get some results
  • Perform some changes to the route
    -> change positions of the points
    -> remove points from the route
  • Add the route which you have obtained to the "result" set
    -> the result set defines the data which is used to generate the XML.
    -> this is also the data which is saved any time you press Save.
  • Generate XML data for your maps
To accomplish all of that the application has a simple menu:



Here are some details about the parts which are not straightforward:

Setting correspondences


Technical background
Generally, to set correspondences between two coordinate systems you need to determine whether there is a transformation which could transform the coordinates of a point from the source coordinate system to the resulting one.

Map Creator is not a really sophisticated tool, so it supports only the case when there is an Affine Transformation between the two coordinate systems.

An affine transformation preserves collinearity: points which lie on a line in one coordinate system will also lie on a line in the second one. Basically it means that an affine transformation can be composed of any linear transformation (scaling, rotation, shearing) and a translation; a perspective transformation, for example, is not allowed.

The relation between the two coordinate systems can be specified using the following equations:

sx = c00*rx + c01*ry + c02
sy = c10*rx + c11*ry + c12

(rx, ry) - coordinates in the source coordinate system (so let's say pixels in the image)
(sx, sy) - coordinates in the resulting system (so let's say longitude and latitude)

So we need 6 parameters. For each point we have 2 equations, so we need 3 points to have 6 equations for 6 parameters. In matrix notation we can write it like this.

[c00 c01 c02]   [x1 x2 x3]   [u1 u2 u3]
[c10 c11 c12] * [y1 y2 y3] = [v1 v2 v3]
                [ 1  1  1]

(where (xi, yi) are the source points - rx, ry above - and (ui, vi) are the corresponding resulting points - sx, sy above)
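With three correspondences the system can be solved directly, for example by Cramer's rule. A minimal sketch in C# (my own illustration, not code from the tool; the arrays hold the three source points (rx, ry) and the resulting points (sx, sy)):
//returns the six coefficients c00, c01, c02, c10, c11, c12
public static double[] ComputeAffineCoefficients(double[] rx, double[] ry, double[] sx, double[] sy)
{
    //determinant of the system matrix with rows (rx[i], ry[i], 1)
    double det = rx[0] * (ry[1] - ry[2]) - ry[0] * (rx[1] - rx[2]) + (rx[1] * ry[2] - rx[2] * ry[1]);
    if (Math.Abs(det) < 1e-12)
        throw new ArgumentException("The correspondence points must not be collinear.");

    double[] c = new double[6];
    //first row: sx = c00*rx + c01*ry + c02
    c[0] = (sx[0] * (ry[1] - ry[2]) - ry[0] * (sx[1] - sx[2]) + (sx[1] * ry[2] - sx[2] * ry[1])) / det;
    c[1] = (rx[0] * (sx[1] - sx[2]) - sx[0] * (rx[1] - rx[2]) + (rx[1] * sx[2] - rx[2] * sx[1])) / det;
    c[2] = (rx[0] * (ry[1] * sx[2] - ry[2] * sx[1]) - ry[0] * (rx[1] * sx[2] - rx[2] * sx[1]) + sx[0] * (rx[1] * ry[2] - rx[2] * ry[1])) / det;
    //second row: sy = c10*rx + c11*ry + c12 (same formulas with sy)
    c[3] = (sy[0] * (ry[1] - ry[2]) - ry[0] * (sy[1] - sy[2]) + (sy[1] * ry[2] - sy[2] * ry[1])) / det;
    c[4] = (rx[0] * (sy[1] - sy[2]) - sy[0] * (rx[1] - rx[2]) + (rx[1] * sy[2] - rx[2] * sy[1])) / det;
    c[5] = (rx[0] * (ry[1] * sy[2] - ry[2] * sy[1]) - ry[0] * (rx[1] * sy[2] - rx[2] * sy[1]) + sy[0] * (rx[1] * ry[2] - rx[2] * ry[1])) / det;
    return c;
}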

In Map Creator
Just select the "Correspondences" radio button. Then every time you click "Right" on the image, a new point is added to a list, when you select the point and click right on the map a correspondence will be set.

Select colors of the route

Just select the "Color selection" radio button. Than when you click right button the mouse in the picture the color will be selected (and added to the list).

Set the parameters

There are 4 parameters which change the way the resulting route is going to look:
1) Search Range - basically it says what the minimal distance (in pixels) between points of the same route is. Setting this parameter to a bigger value will cause connections between routes which are normally not connected. Too low a value will increase the density of points in the route (which is not desirable either).
2) Color tolerance - determines which pixel colors will still be marked as belonging to the route. Each color is composed of 3 parts (RGB) with values in the range 0-255; this parameter sets the tolerance for each part (a sketch of this test follows the list).
3) Min Points Per Route - each bike route is composed of several routes (or let's say lines). This is caused by side routes, which have to be represented separately. This parameter sets the minimal number of points for each route. If a route has fewer points than this parameter sets, it will not be added to the result.
4) Distance to connect - after the analysis, some routes which should be connected will not be. A typical example are the side routes: there will always be a little space between the main route and a side route. This parameter sets the maximal distance between two routes which should be connected.
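To illustrate the color tolerance test, here is a sketch of the pixel check it implies (my own illustration - the tool's actual implementation may differ):
//a pixel belongs to the route if each RGB channel differs from one of
//the selected route colors by at most the tolerance
private static bool MatchesRoute(Color pixel, IEnumerable<Color> routeColors, int tolerance)
{
    foreach (Color c in routeColors)
    {
        if (Math.Abs(pixel.R - c.R) <= tolerance &&
            Math.Abs(pixel.G - c.G) <= tolerance &&
            Math.Abs(pixel.B - c.B) <= tolerance)
        {
            return true;
        }
    }
    return false;
}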

Performing changes to the resulting route

There are two possible changes that you can do:
1) Remove a point by clicking the right mouse button
2) Change the position of the point by dragging it

Adding the route to the results

Generally a map is composed of several routes. The basic idea behind this tool is that once you have finished working on a route, you can add it to the result (by pressing the button on the list of colors). When the route is in the results, it will not be affected by running the analysis again.

Saving your work

By pressing "Save" button you can save your work. Saved will be the list of correspondences and the routes in the "result" set. This why the next time you can continue working on existing map.

Generating XML

The main idea is to use the data which you have generated in your application. The "Generate XML" button simply serializes the "result" set to XML.
As said before: a map is a collection of routes, a route is a collection of lines, and a line is a collection of locations.
OK, in C# or Java or whichever language it is something like this:

List<List<List<Location>>> result
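A minimal sketch of such serialization with the standard XmlSerializer (Location stands for the Bing Maps location class; System.IO and System.Xml.Serialization are needed):
var serializer = new XmlSerializer(typeof(List<List<List<Location>>>));
using (var writer = new StringWriter())
{
    //StringWriter produces UTF-16 output, which matches the declaration below
    serializer.Serialize(writer, result);
    string xml = writer.ToString();
}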

When you serialize this object to XML (here just using the standard C# serializer) you will obtain XML with the following structure:
<?xml version="1.0" encoding="utf-16"?>
<arrayofarrayofarrayoflocation xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <arrayofarrayoflocation>
    <arrayoflocation>
      <location>
        <latitude>50.834165728659457</Latitude>
        <longitude>15.292032040283674</Longitude>
        <altitude>0</Altitude>
        <altitudereference>Ground</AltitudeReference>
      </Location>
      <location>
        <latitude>50.83278001263735</Latitude>
        <longitude>15.293082116316537</Longitude>
        <altitude>0</Altitude>
        <altitudereference>Ground</AltitudeReference>
      </Location>
</ArrayOfLocation>
</ArrayOfArrayOfLocation>
</ArrayOfArrayOfArrayOfLocation>

OK, I agree - it is too verbose, not optimized and most of all ugly, but I did not have time to implement my own format.

Future

So that's it. I am not sure whether this tool will be useful to anyone; if you think that you might use it, or if you have a suggestion or find a bug, just leave me a note here... (OK, not for the bugs - there are too many of them anyway).