Radenko Zec: How to fake session object / HttpContext for integration tests

Sometimes when we write integration tests we need to fake HttpContext to test some functionality properly.

In one of my projects I needed the ability to fake some session variables such as userState and maxId.

The project tested in this example is an ASP.NET Web API with added session support, but you can use the same approach in any ASP.NET / MVC project.

The implementation is very simple. First, we need to create a FakeHttpContext helper method.

 

public HttpContext FakeHttpContext(Dictionary<string, object> sessionVariables, string path)
{
    var httpRequest = new HttpRequest(string.Empty, path, string.Empty);
    var stringWriter = new StringWriter();
    var httpResponse = new HttpResponse(stringWriter);
    var httpContext = new HttpContext(httpRequest, httpResponse);

    // Fake an authenticated user on both the context and the current thread.
    httpContext.User = new GenericPrincipal(new GenericIdentity("username"), new string[0]);
    Thread.CurrentPrincipal = new GenericPrincipal(new GenericIdentity("username"), new string[0]);

    var sessionContainer = new HttpSessionStateContainer(
      "id",
      new SessionStateItemCollection(),
      new HttpStaticObjectsCollection(),
      10,
      true,
      HttpCookieMode.AutoDetect,
      SessionStateMode.InProc,
      false);

    // Copy the fake session variables into the session container.
    foreach (var sessionVariable in sessionVariables)
    {
       sessionContainer.Add(sessionVariable.Key, sessionVariable.Value);
    }

    SessionStateUtility.AddHttpSessionStateToContext(httpContext, sessionContainer);
    return httpContext;
}


 

Once we have FakeHttpContext, we can easily use it in any integration test like this.

 

 [Test]
public void DeleteState_AcceptsCorrectExpandedState_DeletesState()
{
   HttpContext.Current = this.FakeHttpContext(
      new Dictionary<string, object> { { "UserExpandedState", 5 }, { "MaxId", 1000 } },
      "http://localhost:55024/api/v1/");

   string uri = "DeleteExpandedState";
   var result = Client.DeleteAsync(uri).Result;
   result.StatusCode.Should().Be(HttpStatusCode.OK);

   var state = statesRepository.GetState(5);
   state.Should().BeNull();
}

 

This is an example of an integration test that exercises the Delete method of a Web API.

The Web API in this example is hosted in memory for the purpose of integration testing, as sketched below.
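
A minimal sketch of that setup (the WebApiConfig registration call is an assumption about the project, and Client is the field used in the test above):

    // In the test fixture setup: host the Web API in memory and point HttpClient at it.
    var config = new HttpConfiguration();
    WebApiConfig.Register(config);
    var server = new HttpServer(config);

    // HttpServer is an HttpMessageHandler, so these requests never leave the process.
    Client = new HttpClient(server)
    {
        BaseAddress = new Uri("http://localhost:55024/api/v1/")
    };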

If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.

 
 

The post How to fake session object / HttpContext for integration tests appeared first on RadenkoZec blog.


Radenko Zec: Replace JSON.NET with Jil JSON serializer in ASP.NET Web API

I have recently come across a comparison of fast JSON serializers in .NET, which shows that Jil JSON serializer is one of the fastest.

Jil is created by Kevin Montrose, a developer at Stack Overflow, and it is apparently heavily used by Stack Overflow.

This is only one of many benchmarks you can find on the GitHub project website.

[Image: Jil serialization speed benchmark]

You can find more benchmarks and the source code at https://github.com/kevin-montrose/Jil.

In this short article I will cover how to replace the default JSON serializer in Web API with Jil.

Create Jil MediaTypeFormatter

First, you need to grab Jil from NuGet

PM> Install-Package Jil

 

 

After that, create a JilFormatter using the code below.

    public class JilFormatter : MediaTypeFormatter
    {
        private readonly Options _jilOptions;
        public JilFormatter()
        {
            _jilOptions=new Options(dateFormat:DateTimeFormat.ISO8601);
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));

            SupportedEncodings.Add(new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true));
            SupportedEncodings.Add(new UnicodeEncoding(bigEndian: false, byteOrderMark: true, throwOnInvalidBytes: true));
        }
        public override bool CanReadType(Type type)
        {
            if (type == null)
            {
                throw new ArgumentNullException("type");
            }
            return true;
        }

        public override bool CanWriteType(Type type)
        {
            if (type == null)
            {
                throw new ArgumentNullException("type");
            }
            return true;
        }

        public override Task<object> ReadFromStreamAsync(Type type, Stream readStream, System.Net.Http.HttpContent content, IFormatterLogger formatterLogger)
        {
            // Deserialization itself is synchronous; wrap it in a task to satisfy the async contract.
            var task = Task<object>.Factory.StartNew(() => this.DeserializeFromStream(type, readStream));
            return task;
        }


        private object DeserializeFromStream(Type type, Stream readStream)
        {
            try
            {
                using (var reader = new StreamReader(readStream))
                {
                    // Jil exposes a generic Deserialize<T>(TextReader, Options); since T is only
                    // known at runtime, close over the requested type via reflection.
                    MethodInfo method = typeof(JSON).GetMethod("Deserialize", new Type[] { typeof(TextReader), typeof(Options) });
                    MethodInfo generic = method.MakeGenericMethod(type);
                    return generic.Invoke(this, new object[] { reader, _jilOptions });
                }
            }
            catch
            {
                // Invalid payloads deserialize to null rather than throwing.
                return null;
            }
        }


        public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, System.Net.Http.HttpContent content, TransportContext transportContext)
        {
            // Do not dispose the writer: that would close the response stream, which Web API owns.
            var streamWriter = new StreamWriter(writeStream);
            JSON.Serialize(value, streamWriter, _jilOptions);
            streamWriter.Flush();
            return Task.FromResult(true);
        }
    }

 

This code uses reflection to invoke Jil's generic Deserialize method for the type being requested.

Replace default JSON serializer

In the end, we need to remove the default JSON serializer.

Place this code at the beginning of the Register method in WebApiConfig:

config.Formatters.RemoveAt(0);
config.Formatters.Insert(0, new JilFormatter());

 

Feel free to test Jil with Web API, and don’t forget to subscribe here so you don’t miss a new blog post.

 
 

The post Replace JSON.NET with Jil JSON serializer in ASP.NET Web API appeared first on RadenkoZec blog.


Darrel Miller: Hypermedia as the engine of application state, the client-server dance

We are currently seeing a significant amount of discussion about building hypermedia APIs.  However, the server side only plays part of the role in a hypermedia driven system.  To take full advantage of the benefits of hypermedia, the client must allow the server to take the lead and drive the state of the client.  As I like to say, it takes two to Tango.

So you think you can dance?

[Image: Tango]

Soon after I was married, my wife convinced me to take dance lessons with her.  Over the couple of years we spent taking lessons, I learned there were three types of people who join a dance studio.  There are people who want to get better at dancing, there are couples who are getting married who don't want to look like idiots during their 'first dance' and there are divorcees.  I'll leave it to you to figure out why the divorcees are there as I'll just be focusing on the other two groups.

The couples who are preparing for their weddings are usually under time and budgetary constraints, so they opt to learn a choreographed sequence of steps to a particular song.  They both learn the sequence of steps and, to an extent, dance their own independent steps whilst hanging on to each other.  It is a process that serves a purpose. It meets the goals of the couple, but it is a vastly inferior result compared to the approach taken when people's goal is simply to learn how to dance.

[Image: First Dance]

In order to learn to dance there are a number of basic fundamentals that are required. It is essential to be able to follow the beat of whatever music you are trying to dance to.  There is a set of basic dance primitives that can be combined to make up dance sequences.  It is also important to understand the role of the man vs that of the woman when dancing (Note: these are traditional role names, and no longer necessarily correlate with gender). The man leads the dance, the woman follows. 

[Image: Rumba Steps]

As the dance progresses, the man chooses the sequences of primitives to perform and uses hand signals, body position and weight changes to communicate to the woman what steps are coming next.  There is no predefined, choreographed set of sequences. The man basically does whatever he wants within the constraints of the dance style. The woman follows. 

When done right, it looks like magic

Watching a talented couple dance freestyle like this is often indistinguishable from watching a choreographed dance.  When people learn to dance this way, they can dance to any piece of music as long as the beat matches a style of dance they know, and they can dance with any partner.   Couples who learn their wedding dance, by contrast, know one sequence to one piece of music and can only dance it with one partner.

The Client-Server dance

Building a client that can consume an HTTP API can be done in different ways.  You can build your application to be like a choreographed dance, where both the client and server know in advance what is going to happen.  When the client makes an HTTP request to a particular resource it knows in advance how the server will respond.  The challenge with this approach is that both parties need to have knowledge of the sequences, and more importantly, where they are up to in the sequence. If someone decides to make any changes, the other party is likely to get confused by the unplanned change.

A choreographed client

The last twenty years of building clients for distributed applications has taught us how to build highly choreographed clients.  We first learn the API that the server exposes and then teach our clients an intricate pattern of interactions in order to achieve our desired goals.  Our application is the dance we perform.

Frequently when building clients like this we will create a facade over the remote service and a view model to manage the state of the sequence of interactions. Consider the following example of a distributed application that is designed to perform the dance of turning on and off a light switch.

The service facade:

    public class SwitchService
    {
        private const string SwitchStateResource = "switch/state";
        private const string SwitchOnResource = "switch/on";
        private const string SwitchOffResource = "switch/off";

        private readonly HttpClient _client;

        public SwitchService(HttpClient client)
        {
            _client = client;
        }

        public async Task<bool> GetSwitchStateAsync()
        {
            var result = await _client.GetStringAsync(SwitchStateResource).ConfigureAwait(false);

            return bool.Parse(result);
        }

        public Task SetSwitchStateAsync(bool newstate)
        {
            if (newstate)
            {
                return _client.PostAsync(SwitchOnResource,null);
            }
            else
            {
                return _client.PostAsync(SwitchOffResource, null);
            }
        }
    }

The view model:

    public class SwitchViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        
        private readonly SwitchService _service;
        private bool _switchState;

        public SwitchViewModel(SwitchService service)
        {
            _service = service;
            _switchState = service.GetSwitchStateAsync().Result;
        }

        
        private bool SwitchState {
            get
            {
                return _switchState;
            }
            set
            {
                _service.SetSwitchStateAsync(value).Wait();
                _switchState = value;
                OnPropertyChanged("On");
                OnPropertyChanged("CanTurnOn");
                OnPropertyChanged("CanTurnOff");
            }
        }

        public bool On
        {
            get { return SwitchState; }
        }

        public void TurnOff()
        {
            SwitchState = false;
        }

        public void TurnOn()
        {
            SwitchState = true;
        }


        public bool CanTurnOn
        {
            get { return SwitchState == false; }
        }

        public bool CanTurnOff
        {
            get { return SwitchState; }
        }
        

        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

In our client view model we maintain the current SwitchState.  The client needs to know at any point in time, whether the switch is on or off.  This information will be provided to the View to present a visual representation to the user and it is also used to drive the application logic that determines if we are allowed to turn the switch on or off again.  Our application wishes to prevent someone from trying to turn on the switch if it is already on and turn off the switch if it is already off.  This is an extremely simple example but will be sufficient to illustrate differences between the two approaches.

The important point to note is that, just like our engaged couple doing their dance, both the client and server must keep track of the current application state in order to know what they can and must do next.

[Image: Skipping Rope]

Sometimes you just have to let go

In this next example, we take away the responsibility from the client of keeping track of state that is already being tracked by the server.  The client simply follows the lead of the server and trusts the server to provide it the necessary guidance.

[Image: Base jump]

We no longer need to provide a facade over the server API; instead we focus on understanding the messages communicated by the server.  For that we have created a class called SwitchDocument that allows the client to parse and interpret the message. 

    public class SwitchDocument
    {
        public static SwitchDocument Load(Stream stream)
        {
            var switchStateDocument = new SwitchDocument();
            var jObject = JObject.Load(new JsonTextReader(new StreamReader(stream)));
            foreach (var jProp in jObject.Properties())
            {
                switch (jProp.Name)
                {
                    case "On":
                        switchStateDocument.On = (bool)jProp.Value;
                        break;
                    case "TurnOnLink":
                        switchStateDocument.TurnOnLink = new Uri((string)jProp.Value, UriKind.RelativeOrAbsolute);
                        break;
                    case "TurnOffLink":
                        switchStateDocument.TurnOffLink = new Uri((string)jProp.Value, UriKind.RelativeOrAbsolute);
                        break;
                }
            }
            return switchStateDocument;
        }

        public bool On { get; private set; }
        public Uri TurnOnLink { get; set; }
        public Uri TurnOffLink { get; set; }

        public static Uri SelfLink
        {
            get { return new Uri("switch/state", UriKind.Relative); }
        }
    }
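
For illustration, a representation in this application/switchstate+json media type might look like the following when the switch is currently off (the exact payload is an assumption based on the property names the parser looks for):

    {
        "On": false,
        "TurnOnLink": "switch/on"
    }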

Our view model now has the reduced role of simply presenting the information contained in the SwitchDocument to the view and providing a way to interact with the affordances described in the SwitchDocument.

    public class SwitchHyperViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        private readonly HttpClient _client;
        private SwitchDocument _switchStateDocument = new SwitchDocument();

        public SwitchHyperViewModel(HttpClient client)
        {
            _client = client;
            _client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/switchstate+json"));
            _client.GetAsync(SwitchDocument.SelfLink).ContinueWith(t => UpdateState(t.Result)).Wait();
        }

        public bool On
        {
            get { return _switchStateDocument.On; }
        }

        public bool CanTurnOn
        {
            get { return _switchStateDocument.TurnOnLink != null; }
        }

        public bool CanTurnOff
        {
            get { return _switchStateDocument.TurnOffLink != null; }
        }

        public void TurnOff()
        {
            _client.PostAsync(_switchStateDocument.TurnOffLink, null).ContinueWith(t => UpdateState(t.Result));
        }

        public void TurnOn()
        {
            _client.PostAsync(_switchStateDocument.TurnOnLink, null).ContinueWith(t => UpdateState(t.Result));
        }

        private void UpdateState(HttpResponseMessage httpResponseMessage)
        {
            if (httpResponseMessage.StatusCode == HttpStatusCode.OK)
            {
                _switchStateDocument = SwitchDocument.Load(httpResponseMessage.Content.ReadAsStreamAsync().Result);

                OnPropertyChanged("On");
                OnPropertyChanged("CanTurnOn");
                OnPropertyChanged("CanTurnOff");
            }
        }

        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }

    }

This new hypermedia-driven view model has the same interface as the choreographed one and can easily be connected to the same simple user interface to display the state of the light switch and provide controls that can change it.   The difference is in the way the application state is managed.  In this case, the view model determines whether the switch can be turned on or off based on the presence of links that will turn the switch on and off.  Attempting to turn the switch on or off involves making an HTTP request to the server and using the response as a completely new state for the view model. 

You can find a complete WPF example in the Github repository.

The similarity is only skin deep

On the surface it appears that the two different approaches produce pretty much the same results.  It is almost the same amount of code, with a similar level of complexity.  The question has to be asked, what are the benefits?

[Image: Fake Gucci purse]

If you watch a couple who have learned a choreographed dance, you may think they are very capable dancers.  You may not even be able to tell the difference between them and others doing the same dance who have a much more fundamental understanding of how to dance.  The differences only begin to appear when you introduce change.  Changing dance partners, changing music or adding new steps will quickly reveal the differences.

The impact of change

The same is true with our sample application.  Consider a scenario where new requirements are introduced: a switch can only be turned on during a certain time period, or only users with certain permissions can turn on the switch. 

In the choreographed application, we would need to add a number of other server resources that would allow a client to inquire if a user has permission, or if the time of day permits turning on or off the switch.  The client must call those resources to retrieve the information, which in itself is not terribly complex.  However, deciding when to make those requests can be tricky.  Calling them frequently adds a significant performance hit, but caching the values locally can introduce problems with keeping the local state consistent with the server based resource state.

[Image: Windmills]

In the hypermedia-driven client, neither of our new business requirements requires additional resources to be created or extra server round-trips.  In fact the client code does not need to change.  All the logic used to determine whether a client can turn the switch on or off can be embedded into the server logic that decides whether to include a "TurnOn" link or a "TurnOff" link. 

The links are always refreshed along with the state of the switch, so the client state is always consistent.  The state may be stale, but that is fine because HTTP has all kinds of mechanisms for refreshing stale state.  The key thing is the client does not need to deal with the complexity of the permissions being an hour old, the timing schedule being ten minutes old and the state of the switch being ten seconds old.

Some constraints can lead to unimagined possibilities

The fact that our client application does not need to change to accommodate these new requirements is far more significant than our analogy might lead us to believe.  In ballroom dancing there is usually just one man and one woman, so the implications of making changes to the dance are limited.  In distributed systems, it is not uncommon for a single API to have thousands of client instances and perhaps multiple different types of clients.  The client applications are often created by different teams, in different countries, with completely different time constraints. 

Being able to make logic changes on the server that would normally be embedded into the client can potentially have huge benefits.  The example I have shown only scratches the surface of the techniques that can be applied using hypermedia, but hopefully it hints at the possibilities.

[Image: Hang Glider]

Image Credit: Tango https://flic.kr/p/7BY638
Image Credit: First Dance https://flic.kr/p/d3vMxG
Image Credit: Rumba Steps https://flic.kr/p/dMrKk
Image Credit: Skipping Rope https://flic.kr/p/6pZSGX
Image Credit: Base jump https://flic.kr/p/df5Nwd
Image Credit: Fake Gucci purse: https://flic.kr/p/9w5Qji
Image Credit: Windmills https://flic.kr/p/96iTMv
Image Credit: Hang Glider https://flic.kr/p/6jfG9m


Radenko Zec: 8 ways to improve ASP.NET Web API performance

ASP.NET Web API is a great piece of technology. Writing a Web API is so easy that many developers don’t take the time to structure their applications for great performance.

In this article, I am going to cover 8 techniques for improving ASP.NET Web API performance.

1) Use the fastest JSON serializer

JSON serialization can affect the overall performance of ASP.NET Web API significantly. A year and a half ago I switched from the JSON.NET serializer to ServiceStack.Text on one of my projects.

I have measured around 20% performance improvement on my Web API responses. I highly recommend that you try out this serializer. Here is a recent performance comparison of popular serializers.

[Image: Serializer performance comparison]

Source: theburningmonk

UPDATE: It seems that Stack Overflow uses what they claim is an even faster JSON serializer called Jil. You can view some benchmarks on the Jil GitHub page.

2) Manual JSON serialize from DataReader

I have used this method on my production project and gained performance benefits.

Instead of reading values from the DataReader, populating objects, and then reading values from those objects again to produce JSON with some JSON serializer, you can manually create the JSON string from the DataReader and avoid unnecessary creation of objects.

You produce the JSON using a StringBuilder, and in the end you return StringContent as the content of your response in Web API:

var response = Request.CreateResponse(HttpStatusCode.OK);
response.Content = new StringContent(jsonResult, Encoding.UTF8, "application/json");
return response;
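
A rough sketch of the idea, with made-up column names and assuming reader is an open IDataReader (using System.Text for StringBuilder):

    // Build the JSON string directly from the DataReader, skipping intermediate objects.
    var sb = new StringBuilder("[");
    bool first = true;
    while (reader.Read())
    {
        if (!first) sb.Append(",");
        first = false;
        sb.AppendFormat("{{\"Id\":{0},\"Name\":\"{1}\"}}",
            reader.GetInt32(0),
            reader.GetString(1).Replace("\"", "\\\""));   // naive escaping, for illustration only
    }
    sb.Append("]");
    string jsonResult = sb.ToString();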

 

You can read more about this method on Rick Strahl’s blog

3) Use other formats if possible (Protocol Buffers, MessagePack)

If you can use other formats like Protocol Buffers or MessagePack instead of JSON in your project, do it.

You will get huge performance benefits, not only because the Protocol Buffers serializer is faster, but also because the format is smaller than JSON, which results in smaller and faster responses.

4) Implement compression

Use GZIP or Deflate compression on your ASP.NET Web API.

Compression is an easy and effective way to reduce the size of responses and increase the speed.

This is a must-have feature. You can read more about this in my blog post ASP.NET Web API GZip compression ActionFilter with 8 lines of code.
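
The linked post has its own compact implementation; purely as a hedged sketch of the general idea (assuming System.IO, System.IO.Compression, System.Net.Http and System.Web.Http.Filters), a compression filter might look roughly like this:

    public class GzipCompressionAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuted(HttpActionExecutedContext context)
        {
            if (context.Response == null || context.Response.Content == null) return;

            // Read the already-serialized content and compress it with GZip.
            var originalContent = context.Response.Content;
            var bytes = originalContent.ReadAsByteArrayAsync().Result;

            byte[] compressed;
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
                {
                    gzip.Write(bytes, 0, bytes.Length);
                }
                compressed = output.ToArray();
            }

            // Replace the content and advertise the encoding (a real filter would also
            // check the request's Accept-Encoding header first).
            context.Response.Content = new ByteArrayContent(compressed);
            context.Response.Content.Headers.ContentType = originalContent.Headers.ContentType;
            context.Response.Content.Headers.ContentEncoding.Add("gzip");
        }
    }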

5) Use caching

If it makes sense, use output caching on your Web API methods. For example, when a lot of users access the same response that changes maybe once a day.

If you want to implement manual caching, such as caching user tokens in memory, please refer to my blog post Simple way to implement caching in ASP.NET Web API.
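
As a hedged illustration of that kind of manual caching (the method and key names here are made up, and System.Runtime.Caching is assumed):

    private static readonly MemoryCache Cache = MemoryCache.Default;

    public string GetUserToken(string userName)
    {
        // Return the cached token if present; otherwise load it and cache it for 30 minutes.
        var cached = Cache.Get(userName) as string;
        if (cached != null)
        {
            return cached;
        }

        var token = LoadTokenFromDatabase(userName);   // hypothetical data access call
        Cache.Add(userName, token, DateTimeOffset.UtcNow.AddMinutes(30));
        return token;
    }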

6) Use classic ADO.NET if possible

Hand-coded ADO.NET is still the fastest way to get data from the database. If the performance of your Web API is really important to you, don’t use ORMs.

You can see one of the latest performance comparisons of popular ORMs below.

[Image: ORM performance comparison]

Dapper and the hand-written fetch code are very fast; as expected, all full ORMs are slower.

LLBLGen with resultset caching is very fast, but it fetches the resultset once and then re-materializes the objects from memory.
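
For reference, a hand-coded fetch is just a connection, a command and a reader; the table, columns, connection string and Supplier type below are placeholders:

    // Plain ADO.NET: no change tracking, no materialization overhead beyond your own mapping.
    var suppliers = new List<Supplier>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT Id, Name FROM Supplier", connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                suppliers.Add(new Supplier
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                });
            }
        }
    }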

7) Implement async on methods of Web API

Using asynchronous Web API services can increase the number of concurrent HTTP requests Web API can handle.

Implementation is simple. The operation is simply marked with the async keyword and the return type is changed to Task.

[HttpGet]  
public async Task OperationAsync()  
{   
    await Task.Delay(2000);  
}

8) Return Multiple Resultsets and combined results

Reduce the number of round-trips, not only to the database but to the Web API as well. You should use multiple-resultset functionality whenever possible.

This means you can extract multiple resultsets from the DataReader as in the example below:

// read the first resultset 
var reader = command.ExecuteReader(); 

// read the data from that resultset 
while (reader.Read()) 
{ 
	suppliers.Add(PopulateSupplierFromIDataReader( reader )); 
} 

// read the next resultset 
reader.NextResult(); 

// read the data from that second resultset 
while (reader.Read()) 
{ 
	products.Add(PopulateProductFromIDataReader( reader )); 
}

 

Return as many objects as you can in one Web API response. Try combining objects into one aggregate object like this:

public class AggregateResult
{
     public long MaxId { get; set; }
     public List<Folder> Folders{ get; set; }
     public List<User>  Users{ get; set; }
}

 

This way you will reduce the number of HTTP requests to your Web API.
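
A hypothetical Web API action returning such an aggregate in a single response might look like this (the repository calls are assumptions):

    [HttpGet]
    public AggregateResult GetAggregate()
    {
        // One round-trip returns everything the client needs for this screen.
        return new AggregateResult
        {
            MaxId = repository.GetMaxId(),
            Folders = repository.GetFolders(),
            Users = repository.GetUsers()
        };
    }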

Thank you for reading this article.

Leave a comment below and let me know what other methods you have found to improve Web API performance.

 
 

The post 8 ways to improve ASP.NET Web API performance appeared first on RadenkoZec blog.


Dominick Baier: NDC London: Identity and Access Control for modern Web Applications and APIs

I am happy to announce that NDC will host our new workshop in London in December!

Join us to learn everything that is important to secure modern web applications and APIs using Microsoft’s current and future web stack! Looking forward to it!

course description / ndc london / tickets


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Taiseer Joudeh: Enable OAuth Refresh Tokens in AngularJS App using ASP .NET Web API 2, and Owin

After my previous Token Based Authentication post I’ve received many requests to add OAuth Refresh Tokens to the OAuth Resource Owner Password Credentials flow which I’m currently using in the previous tutorial. To be honest, adding support for refresh tokens adds a noticeable level of complexity to your Authorization Server. As well, most of the available resources on the net don’t provide the full picture of how to implement this by introducing clients, nor how to persist the refresh tokens into a database.

So I’ve decided to write a detailed post with a live demo application which builds on what we’ve built in the previous posts, so I recommend you read at least part 1 to follow along with this post. This detailed post will cover adding clients, persisting refresh tokens, dynamically configuring refresh token expiry dates, and revoking refresh tokens.

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

AngularJS OAuth Refresh Tokens


Before starting the implementation I would like to discuss when and how refresh tokens should be used, and what database structure is needed to implement a complete solution.

Using Refresh Tokens

The idea of using refresh tokens is to issue a short-lived access token in the first place and then use the refresh token to obtain new access tokens. The user needs to authenticate himself by providing username and password along with client info (we’ll talk about clients later in this post), and if the information provided is valid, a response containing a short-lived access token is obtained along with a long-lived refresh token (this is not an access token, it is just an identifier for the refresh token). Now once the access token expires we can use the refresh token identifier to try to obtain another short-lived access token, and so on.

But why are we adding this complexity? Why not issue long-lived access tokens in the first place?

In my own opinion there are three main benefits to using refresh tokens:

  1. Updating access token content: as you know, access tokens are self-contained tokens; they contain all the claims (information) about the authenticated user once they are generated. Now if we issue a long-lived token (1 month for example) for a user named “Alex” and enroll him in the “Users” role, this information gets contained in the token the Authorization Server generated. If you decide later on (2 days after he obtained the token) to add him to the “Admin” role, there is no way to update this information in the already generated token; you need to ask him to re-authenticate himself so the Authorization Server adds this information to a newly generated access token, and this is not feasible in most cases. You might not be able to reach users who obtained long-lived access tokens. So to overcome this issue we need to issue short-lived access tokens (30 minutes for example) and use the refresh token to obtain a new access token; once you obtain the new access token, the Authorization Server will be able to add the new claim for user “Alex” which assigns him to the “Admin” role once the new access token is generated.
  2. Revoking access from authenticated users: once the user obtains a long-lived access token he’ll be able to access the server resources as long as his access token is not expired; there is no standard way to revoke access tokens unless the Authorization Server implements custom logic which forces you to store generated access tokens in a database and do database checks with each request. But with refresh tokens, a system admin can revoke access by simply deleting the refresh token identifier from the database, so once the system requests a new access token using the deleted refresh token, the Authorization Server will reject the request because the refresh token is no longer available (we’ll come to this in more detail).
  3. No need to store or ask for username and password: using refresh tokens allows you to ask the user for his username and password only once, when he authenticates for the first time. The Authorization Server can then issue a very long-lived refresh token (1 year for example) and the user will stay logged in for all this period unless a system admin revokes the refresh token. You can think of this as a way to do offline access to server resources; this can be useful if you are building an API which will be consumed by a front-end application where it is not feasible to keep asking for username/password frequently.

Refresh Tokens and Clients

In order to use refresh tokens we need to bind the refresh token to a Client. A Client means the application that is attempting to communicate with the back-end API, so you can think of it as the software which is used to obtain the token. Each Client should have a Client Id and Secret; usually we can obtain the Client Id/Secret once we register the application with the back-end API.

The Client Id is unique public information which identifies your application among other apps using the same back-end API. The client id can be included in the source code of your application, but the client secret must stay confidential; in case we are building JavaScript apps there is no need to include the secret in the source code, because there is no straightforward way to keep this secret confidential in a JavaScript application. In this case we’ll be using the client Id only for identifying which client is requesting the refresh token so it can be bound to this client.

In our case I’ve split clients into two types, (JavaScript – Non-confidential) and (Native – Confidential), which means that for confidential clients we can store the client secret in a confidential way (valid for desktop apps, mobile apps, server-side web apps), so any request coming from such a client asking for an access token should include the client id and secret.

Binding the refresh token to a client is very important; we do not want any refresh token generated by our Authorization Server to be used by another client to obtain an access token. Later we’ll see how we make sure that a refresh token is bound to the same client once it is used to generate a new access token.

Database structure needed to support OAuth Refresh Tokens

It is obvious that we need to store the clients which will communicate with our back-end API in a persistent medium; the schema for the Clients table will be as in the image below:

OAuth 2.0 Clients

The Secret column is hashed, so anyone who has access to the database will not be able to see the secrets. The Application Type column with value (1) means it is a Native – Confidential client which should send the secret once the access token is requested.

The Active column is very useful: if the system admin decides to deactivate a client, any new requests asking for an access token from this deactivated client will be rejected. The Refresh Token Life Time column is used to set when the refresh token (not the access token) will expire, in minutes, so for the first client it will expire in 10 days. It is a nice feature because now you can control the refresh token expiry for each client.

Lastly the Allowed Origin column is used to configure CORS and to set “Access-Control-Allow-Origin” on the back-end API. It is only useful for JavaScript applications using XHR requests. In my case I’m setting the allowed origin for client id “ngAuthApp” to the origin “http://ngauthenticationweb.azurewebsites.net/”, and this turned out to be very useful: if any malicious user obtains my client id from my JavaScript app, which is very trivial to do, he will not be able to use this client id to build another JavaScript application, because all preflighted requests coming from his app will fail and return a 405 HTTP status (Method Not Allowed), since all XHR requests from his JavaScript app will be from a different domain. This is valid for JavaScript application types only; for other application types you can set this to “*”.

Note: For testing the API the secret for client id “consoleApp” is “123@abc”.

Now we need to store the refresh tokens; this is important to facilitate their management. The schema for the Refresh Tokens table will be as in the image below:

Refresh Tokens

The Id column contains the hashed value of the refresh token id; the API consumer will receive and send the plain refresh token id. The Subject column indicates which user this refresh token belongs to, and the same applies to the Client Id column. By having these columns we can revoke the refresh token for a certain user on a certain client and keep the other refresh tokens that the same user obtained with different clients available.

The Issued UTC and Expires UTC columns are for display purposes only; I’m not building my refresh token expiration logic based on these values.

Lastly the Protected Ticket column contains a magical signed string which holds a serialized representation of the ticket for a specific user; in other words it contains all the claims and ticket properties for this user. The Owin middleware will use this string to build the new access token auto-magically (we’ll see how this takes place later in this post).

Now I’ll walk you through implementing the refresh tokens; as I stated before, you can read the previous post to be able to follow along with me:

Step 1: Add the new Database Entities

Add a new folder named “Entities”; inside the folder you need to define 2 classes named “Client” and “RefreshToken”. The definitions of the classes are as below:

public class Client
    {
        [Key] 
        public string Id { get; set; }
        [Required]
        public string Secret { get; set; }
        [Required]
        [MaxLength(100)]
        public string Name { get; set; }
        public ApplicationTypes ApplicationType { get; set; }
        public bool Active { get; set; }
        public int RefreshTokenLifeTime { get; set; }
        [MaxLength(100)]
        public string AllowedOrigin { get; set; }
    }

public class RefreshToken
    {
        [Key]
        public string Id { get; set; }
        [Required]
        [MaxLength(50)]
        public string Subject { get; set; }
        [Required]
        [MaxLength(50)]
        public string ClientId { get; set; }
        public DateTime IssuedUtc { get; set; }
        public DateTime ExpiresUtc { get; set; }
        [Required]
        public string ProtectedTicket { get; set; }
    }

Then we need to add a simple enum which defines the Application Type, so add a class named “Enum” inside the “Models” folder as in the code below:

public enum ApplicationTypes
    {
        JavaScript = 0,
        NativeConfidential = 1
    };

To make these entities available on the DbContext, open the file “AuthContext” and paste the code below:

public class AuthContext : IdentityDbContext<IdentityUser>
    {
        public AuthContext()
            : base("AuthContext")
        {
     
        }

        public DbSet<Client> Clients { get; set; }
        public DbSet<RefreshToken> RefreshTokens { get; set; }
    }

Step 2: Add new methods to repository class

The methods I’ll add now will add support for manipulating the tables we’ve added; they are self-explanatory methods and there is nothing special about them, so open the file “AuthRepository” and paste the code below:

public Client FindClient(string clientId)
        {
            var client = _ctx.Clients.Find(clientId);

            return client;
        }

        public async Task<bool> AddRefreshToken(RefreshToken token)
        {

           var existingToken = _ctx.RefreshTokens.Where(r => r.Subject == token.Subject && r.ClientId == token.ClientId).SingleOrDefault();

           if (existingToken != null)
           {
             var result = await RemoveRefreshToken(existingToken);
           }
          
            _ctx.RefreshTokens.Add(token);

            return await _ctx.SaveChangesAsync() > 0;
        }

        public async Task<bool> RemoveRefreshToken(string refreshTokenId)
        {
           var refreshToken = await _ctx.RefreshTokens.FindAsync(refreshTokenId);

           if (refreshToken != null) {
               _ctx.RefreshTokens.Remove(refreshToken);
               return await _ctx.SaveChangesAsync() > 0;
           }

           return false;
        }

        public async Task<bool> RemoveRefreshToken(RefreshToken refreshToken)
        {
            _ctx.RefreshTokens.Remove(refreshToken);
             return await _ctx.SaveChangesAsync() > 0;
        }

        public async Task<RefreshToken> FindRefreshToken(string refreshTokenId)
        {
            var refreshToken = await _ctx.RefreshTokens.FindAsync(refreshTokenId);

            return refreshToken;
        }

        public List<RefreshToken> GetAllRefreshTokens()
        {
             return  _ctx.RefreshTokens.ToList();
        }

Step 3: Validating the Client Information

Now we need to implement the logic responsible for validating the client information sent once the application requests an access token or uses a refresh token to obtain a new access token, so open the file “SimpleAuthorizationServerProvider” and paste the code below:

public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {

            string clientId = string.Empty;
            string clientSecret = string.Empty;
            Client client = null;

            if (!context.TryGetBasicCredentials(out clientId, out clientSecret))
            {
                context.TryGetFormCredentials(out clientId, out clientSecret);
            }

            if (context.ClientId == null)
            {
                //Remove the comments from the context.SetError line below, and invalidate the context,
                //if you want to force sending clientId/secrets when obtaining access tokens. 
                context.Validated();
                //context.SetError("invalid_clientId", "ClientId should be sent.");
                return Task.FromResult<object>(null);
            }

            using (AuthRepository _repo = new AuthRepository())
            {
                client = _repo.FindClient(context.ClientId);
            }

            if (client == null)
            {
                context.SetError("invalid_clientId", string.Format("Client '{0}' is not registered in the system.", context.ClientId));
                return Task.FromResult<object>(null);
            }

            if (client.ApplicationType == Models.ApplicationTypes.NativeConfidential)
            {
                if (string.IsNullOrWhiteSpace(clientSecret))
                {
                    context.SetError("invalid_clientId", "Client secret should be sent.");
                    return Task.FromResult<object>(null);
                }
                else
                {
                    if (client.Secret != Helper.GetHash(clientSecret))
                    {
                        context.SetError("invalid_clientId", "Client secret is invalid.");
                        return Task.FromResult<object>(null);
                    }
                }
            }

            if (!client.Active)
            {
                context.SetError("invalid_clientId", "Client is inactive.");
                return Task.FromResult<object>(null);
            }

            context.OwinContext.Set<string>("as:clientAllowedOrigin", client.AllowedOrigin);
            context.OwinContext.Set<string>("as:clientRefreshTokenLifeTime", client.RefreshTokenLifeTime.ToString());

            context.Validated();
            return Task.FromResult<object>(null);
        }

By looking at the code above you will notice that we are doing the following validation steps:

  1. We are trying to get the Client id and secret from the Authorization header using a basic scheme, so one way to send the client_id/client_secret is to base64-encode “client_id:client_secret” and send it in the Authorization header. The other way is to send the client_id/client_secret as “x-www-form-urlencoded” fields. In my case I’m supporting both approaches so the client can set those values using either of the two available options (see the sketch after this list).
  2. We are checking if the consumer didn’t set client information at all, so if you want to enforce always setting the client id then you need to invalidate the context here. In my case I’m allowing requests without a client id for the sake of keeping the old post and demo working correctly.
  3. After we receive the client id we need to check our database to see if the client is already registered with our back-end API; if it is not registered we’ll invalidate the context and reject the request.
  4. If the client is registered we need to check its application type: if it is a “JavaScript – Non Confidential” client we’ll not check or ask for the secret. If it is a Native – Confidential app then the client secret is mandatory and it will be validated against the secret stored in the database.
  5. Then we’ll check if the client is active; if it is not, we’ll invalidate the request.
  6. Lastly we need to store the client allowed origin and refresh token lifetime value on the Owin context so they will be available once we generate the refresh token and set its expiry.
  7. If all is valid we mark the context as a valid context, which means that the client check has passed and the code flow can proceed to the next step.
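
For illustration, this is roughly how a confidential client could request a token against the demo endpoint (a hedged sketch using HttpClient; the password value is a placeholder, and the client id/secret are the demo values mentioned in this post):

    // Requires System.Net.Http, System.Net.Http.Headers and System.Text.
    using (var client = new HttpClient())
    {
        // Option 1: send client_id:client_secret base64-encoded in a Basic Authorization header.
        var rawCredentials = Encoding.UTF8.GetBytes("consoleApp:123@abc");
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", Convert.ToBase64String(rawCredentials));

        // Option 2 would instead add "client_id" and "client_secret" keys to the form below.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "password" },
            { "username", "Razan" },          // demo user referenced later in the post
            { "password", "yourPassword" }    // placeholder
        });

        var response = client.PostAsync("http://ngauthenticationapi.azurewebsites.net/token", form).Result;
    }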

Step 4: Validating the Resource Owner Credentials

Now we need to modify the method “GrantResourceOwnerCredentials” to validate that the resource owner username/password is correct and to bind the client id to the generated access token, so open the file “SimpleAuthorizationServerProvider” and paste the code below:

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            var allowedOrigin = context.OwinContext.Get<string>("as:clientAllowedOrigin");

            if (allowedOrigin == null) allowedOrigin = "*";

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            using (AuthRepository _repo = new AuthRepository())
            {
                IdentityUser user = await _repo.FindUser(context.UserName, context.Password);

                if (user == null)
                {
                    context.SetError("invalid_grant", "The user name or password is incorrect.");
                    return;
                }
            }

            var identity = new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));
            identity.AddClaim(new Claim("role", "user"));

            var props = new AuthenticationProperties(new Dictionary<string, string>
                {
                    { 
                        "as:client_id", (context.ClientId == null) ? string.Empty : context.ClientId
                    },
                    { 
                        "userName", context.UserName
                    }
                });

            var ticket = new AuthenticationTicket(identity, props);
            context.Validated(ticket);

        }

 public override Task TokenEndpoint(OAuthTokenEndpointContext context)
        {
            foreach (KeyValuePair<string, string> property in context.Properties.Dictionary)
            {
                context.AdditionalResponseParameters.Add(property.Key, property.Value);
            }

            return Task.FromResult<object>(null);
        }

By looking at the code above you will notice that we are doing the following:

  1. We read the allowed origin value for this client from the Owin context, then use this value to add the “Access-Control-Allow-Origin” header to the Owin context response. By doing this, for any JavaScript application we prevent the same client id from being used to build another JavaScript application hosted on another domain, because the origin for all requests coming from that app will be a different domain and the back-end API will return a 405 status.
  2. We check whether the resource owner username/password is valid, and if this is the case we generate a set of claims for this user along with authentication properties which contain the client id and userName; those properties are needed for the next steps.
  3. Now the access token will be generated behind the scenes when we call “context.Validated(ticket)”.

Step 5: Generating the Refresh Token and Persisting it

Now we need to generate the refresh token and store it in our database inside the table “RefreshTokens”. To do this we need to add a new class named “SimpleRefreshTokenProvider” under the folder “Providers” which implements the interface “IAuthenticationTokenProvider”, so add the class and paste the code below:

public class SimpleRefreshTokenProvider : IAuthenticationTokenProvider
    {

        public async Task CreateAsync(AuthenticationTokenCreateContext context)
        {
            var clientid = context.Ticket.Properties.Dictionary["as:client_id"];

            if (string.IsNullOrEmpty(clientid))
            {
                return;
            }

            var refreshTokenId = Guid.NewGuid().ToString("n");

            using (AuthRepository _repo = new AuthRepository())
            {
                var refreshTokenLifeTime = context.OwinContext.Get<string>("as:clientRefreshTokenLifeTime"); 
               
                var token = new RefreshToken() 
                { 
                    Id = Helper.GetHash(refreshTokenId),
                    ClientId = clientid, 
                    Subject = context.Ticket.Identity.Name,
                    IssuedUtc = DateTime.UtcNow,
                    ExpiresUtc = DateTime.UtcNow.AddMinutes(Convert.ToDouble(refreshTokenLifeTime)) 
                };

                context.Ticket.Properties.IssuedUtc = token.IssuedUtc;
                context.Ticket.Properties.ExpiresUtc = token.ExpiresUtc;
                
                token.ProtectedTicket = context.SerializeTicket();

                var result = await _repo.AddRefreshToken(token);

                if (result)
                {
                    context.SetToken(refreshTokenId);
                }
             
            }
        }
 }

As you can see, this class implements the interface “IAuthenticationTokenProvider”, so we need to add our refresh token generation logic inside the method “CreateAsync”. By looking at the code above we can notice the following:

  1. We are generating a unique identifier for the refresh token; I’m using a Guid here, which is enough for this, or you can use your own unique string generation algorithm.
  2. Then we are reading the refresh token lifetime value from the Owin context, where we set this value once we validated the client; this value will be used to determine how long the refresh token is valid for, and it should be in minutes.
  3. Then we are setting the IssuedUtc and ExpiresUtc values for the ticket; setting those properties will determine how long the refresh token will be valid for.
  4. After setting all context properties we are calling the method “context.SerializeTicket();” which is responsible for serializing the ticket content so we’ll be able to store this magical serialized string in the database.
  5. After this we are building a token record which will be saved in the RefreshTokens table. Note that I’m checking that the token which will be saved in the database is unique for this Subject (user) and Client; if it is not unique I’ll delete the existing one and store the new refresh token. It is better to hash the refresh token identifier before storing it, so anyone who has access to the database will not see the real refresh tokens.
  6. Lastly we will send back the refresh token id (without hashing it) in the response body.

The “SimpleRefreshTokenProvider” class should be set along with the “OAuthAuthorizationServerOptions”, so open the class “Startup” and replace the code used to set “OAuthAuthorizationServerOptions” in the method “ConfigureOAuth” with the code below. Notice that we are now setting the access token lifetime to a short period (30 minutes) instead of 24 hours.

OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions() {
            
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
                Provider = new SimpleAuthorizationServerProvider(),
                RefreshTokenProvider = new SimpleRefreshTokenProvider()
            };
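
For context, the surrounding ConfigureOAuth method would then register these options with the Owin pipeline roughly like this (a sketch based on the previous post in this series, not necessarily the exact code):

    public void ConfigureOAuth(IAppBuilder app)
    {
        // OAuthServerOptions defined as shown above...

        // Token-issuing endpoint plus bearer-token authentication for protected resources.
        app.UseOAuthAuthorizationServer(OAuthServerOptions);
        app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
    }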

Once this is done we can now test obtaining a refresh token and storing it in the database. To do so, open your favorite REST client; I’ll use PostMan to compose the POST request against the endpoint http://ngauthenticationapi.azurewebsites.net/token. The request will be as in the image below:

Generate Refresh Token Request

What is worth mentioning here is that we are setting the client_id parameter in the request body, and once the “/token” endpoint receives this request it will go through all the validation we’ve implemented in the method “ValidateClientAuthentication”. You can check how the validation takes place if we set the client_id to “consoleApp”, and how the client_secret is mandatory and should be provided with this request because the application type for this client is “Native-Confidential”.

As well, by looking at the response body you will notice that we’ve obtained a “refresh_token” which should be used to obtain a new access token (we’ll see this later in this post); this token is bound to the user “Razan” and to the client “ngAuthApp”. Note that the “expires_in” value is related to the access token, not the refresh token; this access token will expire in 30 mins.

Step 6: Generating an Access Token using the Refresh Token

Now we need to implement the logic needed once we receive the refresh token so we can generate a new access token. To do so, open the class “SimpleRefreshTokenProvider” and implement the code below in the method “ReceiveAsync”:

public async Task ReceiveAsync(AuthenticationTokenReceiveContext context)
        {

            var allowedOrigin = context.OwinContext.Get<string>("as:clientAllowedOrigin");
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            string hashedTokenId = Helper.GetHash(context.Token);

            using (AuthRepository _repo = new AuthRepository())
            {
                var refreshToken = await _repo.FindRefreshToken(hashedTokenId);

                if (refreshToken != null )
                {
                    //Get protectedTicket from refreshToken class
                    context.DeserializeTicket(refreshToken.ProtectedTicket);
                    var result = await _repo.RemoveRefreshToken(hashedTokenId);
                }
            }
        }

What we’ve implemented in this method is the following:

  1. We need to set the “Access-Control-Allow-Origin” header by getting the value from the Owin context. I spent more than an hour figuring out why my requests to issue an access token using a refresh token returned a 405 status code; it turned out that we need to set this header in this method, because the method “GrantResourceOwnerCredentials”, where we set this header, never gets executed once we request an access token using refresh tokens (grant_type=refresh_token).
  2. We get the refresh token id from the request, then hash this id and look for the token using the hashed refresh token id in the table “RefreshTokens”. If the refresh token is found, we use the magical signed string which contains a serialized representation of the ticket to build the ticket and identities for the user mapped to this refresh token.
  3. We’ll remove the existing refresh token from the “RefreshTokens” table, because in our logic we are allowing only one refresh token per user and client.

Now the request context contains all the claims stored previously for this user. We need to add logic which allows us to issue new claims or update existing claims and include them in the new access token generated before sending it to the user. To do so, open the class “SimpleAuthorizationServerProvider” and implement the method “GrantRefreshToken” using the code below:

public override Task GrantRefreshToken(OAuthGrantRefreshTokenContext context)
        {
            var originalClient = context.Ticket.Properties.Dictionary["as:client_id"];
            var currentClient = context.ClientId;

            if (originalClient != currentClient)
            {
                context.SetError("invalid_clientId", "Refresh token is issued to a different clientId.");
                return Task.FromResult<object>(null);
            }

            // Change auth ticket for refresh token requests
            var newIdentity = new ClaimsIdentity(context.Ticket.Identity);
            newIdentity.AddClaim(new Claim("newClaim", "newValue"));

            var newTicket = new AuthenticationTicket(newIdentity, context.Ticket.Properties);
            context.Validated(newTicket);

            return Task.FromResult<object>(null);
        }

What we’ve implemented above is simple and can be explained in the points below:

  1. We are reading the client id value from the original ticket; this is the client id which got stored in the magical signed string. Then we compare this client id against the client id sent with the request, and if they are different we’ll reject the request, because we need to make sure that the refresh token used here is bound to the same client it was generated for.
  2. We now have the chance to add new claims or remove existing claims; this was not achievable without refresh tokens. Then we call “context.Validated(newTicket)” which will generate a new access token and return it in the response body.
  3. Lastly, after this method executes successfully, the code flow will hit the method “CreateAsync” in the class “SimpleRefreshTokenProvider” and a new refresh token is generated and returned in the response along with the new access token.

To test this out we need to issue an HTTP POST request to the endpoint http://ngauthenticationapi.azurewebsites.net/token; the request will look like the image below:

Generate Access Token Request

Notice how we set the “grant_type” to “refresh_token” and passed the “refresh_token” value and the client id with the request. If all went well, we’ll receive a new access token and refresh token.
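If you prefer to script this step instead of using a REST client, a minimal sketch using HttpClient would look roughly like the snippet below; the refresh token value and client id are placeholders you need to replace with your own.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class RefreshTokenRequest
{
    // Exchanges a previously issued refresh token for a new access token + refresh token.
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "refresh_token" },
                { "refresh_token", "PASTE_REFRESH_TOKEN_HERE" }, // placeholder
                { "client_id", "YOUR_CLIENT_ID" }                // placeholder
            });

            var response = await client.PostAsync(
                "http://ngauthenticationapi.azurewebsites.net/token", form);

            // On success the body contains a new access_token and refresh_token.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}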

Step 7: Revoking Refresh Tokens

The idea here is simple: all you need to do is delete the refresh token record from the “RefreshTokens” table. Once the user tries to request a new access token using the deleted refresh token, the request will fail and he needs to authenticate again using his username/password in order to obtain a new access token and refresh token. By having this feature your system admin can control how to revoke access from logged in users.

To do this we need to add a new controller named “RefreshTokensController” under the folder “Controllers” and paste the code below:

[RoutePrefix("api/RefreshTokens")]
    public class RefreshTokensController : ApiController
    {

        private AuthRepository _repo = null;

        public RefreshTokensController()
        {
            _repo = new AuthRepository();
        }

        [Authorize(Users="Admin")]
        [Route("")]
        public IHttpActionResult Get()
        {
            return Ok(_repo.GetAllRefreshTokens());
        }

        //[Authorize(Users = "Admin")]
        [AllowAnonymous]
        [Route("")]
        public async Task<IHttpActionResult> Delete(string tokenId)
        {
            var result = await _repo.RemoveRefreshToken(tokenId);
            if (result)
            {
                return Ok();
            }
            return BadRequest("Token Id does not exist");
            
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                _repo.Dispose();
            }

            base.Dispose(disposing);
        }
    }

Nothing special is implemented here; we only have two actions. One lists all the refresh tokens stored in the database, and the other accepts a query string named “tokenId”, which should contain the hashed value of the refresh token; this method will be used to delete/revoke refresh tokens.

For the sake of the demo, I’ve authorized only the user named “Admin” to execute the “Get” method and obtain all refresh tokens, and I’ve allowed anonymous access to the “Delete” method so you can try to revoke your own refresh tokens; in production you need to secure this end point.

To hash your refresh tokens you have to use the helper class below, so add a new class named “Helper” and paste the code below:

using System;
using System.Security.Cryptography;
using System.Text;

public static class Helper
{
    public static string GetHash(string input)
    {
        // SHA256 hash of the input, returned as a Base64 string so it can be stored/compared easily.
        using (HashAlgorithm hashAlgorithm = new SHA256CryptoServiceProvider())
        {
            byte[] byteValue = Encoding.UTF8.GetBytes(input);
            byte[] byteHash = hashAlgorithm.ComputeHash(byteValue);
            return Convert.ToBase64String(byteHash);
        }
    }
}

So to test this out and revoke a refresh token, we need to issue a DELETE request to the end point “http://ngauthenticationapi.azurewebsites.net/api/refreshtokens”; the request will look like the image below:

Revoke Refresh Token

Once the refresh token has been removed, if the user tries to use it he will fail to obtain a new access token until he authenticates again using his username/password.
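As a quick sketch of doing the same thing programmatically, the snippet below hashes the raw refresh token id with the Helper.GetHash method shown above and sends it as the tokenId query string to the Delete action; the raw token value is a placeholder.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class RevokeRefreshToken
{
    static async Task Main()
    {
        // The raw refresh token id that was returned to the client (placeholder value).
        var rawRefreshTokenId = "PASTE_RAW_REFRESH_TOKEN_HERE";

        // The RefreshTokens table stores hashed ids, so hash before sending.
        var hashedId = Helper.GetHash(rawRefreshTokenId);

        using (var client = new HttpClient())
        {
            var uri = "http://ngauthenticationapi.azurewebsites.net/api/refreshtokens?tokenId="
                      + Uri.EscapeDataString(hashedId);

            var response = await client.DeleteAsync(uri);
            Console.WriteLine(response.StatusCode); // 200 OK if the token existed and was removed
        }
    }
}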

Step 8: Updating the front-end AngularJS Application

I’ve updated the live front-end application to support using refresh tokens along with the Resource Owner Password Credentials flow. Once you log in, if you want to use refresh tokens you need to check the check box “Use Refresh Tokens” as in the image below:

Login with Refresh Tokens

Once you log in successfully, you will find a new tab named “Refresh Tokens” in the top right corner. This tab will provide you with a view which allows you to obtain a new access token using the refresh token you obtained on log in; the view will look like the image below:

Refresh Token AngularJS

Lastly, I’ve added a new view named “/tokens” which is accessible only by the username “Admin”. This view will list all available refresh tokens along with each refresh token’s issue and expiry date, and it will allow only the admin to revoke refresh tokens; the view will look like the image below:

List Refresh Tokens

I will write another post which shows how I’ve implemented this in the AngularJS application; for now you can check the code on my GitHub repo.

Conclusion

Security is really hard! You need to think about all the ins and outs. This post turned out to be quite long and took way longer to compile and write than I expected, but hopefully it will be useful for anyone looking to implement refresh tokens. I would like to hear your feedback and comments if there is something missing or something we can enhance in this post.

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

Follow me on Twitter @tjoudeh

References

  1. Special thanks goes to Dominick Baier for his detailed posts about OAuth; this post in particular was very useful. I also highly recommend checking out Thinktecture.IdentityServer.
  2. Great post by Andrew Timney on persisting refresh tokens.

The post Enable OAuth Refresh Tokens in AngularJS App using ASP .NET Web API 2, and Owin appeared first on Bit of Technology.


Dominick Baier: Updated IdentityServer v3 Roadmap (and Refresh Tokens)

Brock and I have been pretty busy the last months and we did not find as much time to work on IdentityServer as we wanted.

So we have updated our milestones on github and are currently planning a Beta 1 for beginning of August.

You can check the github issue tracker (or open new issues when you find bugs or have suggestions) or you can have an alternative view on our current work using Huboard.

I just checked in initial support for refresh tokens, and it would be great if you could give that a try and let us know if it works for you – see here.

That’s it – back to work.


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Filip Woj: Building a strongly typed route provider for ASP.NET Web API

ASP.NET Web API 2.2 was released last week, and one of the key new features is the ability to extend and plug in your own custom logic into the attribute routing engine. Commonly known as “attribute routing”, it’s actually officially … Continue reading

The post Building a strongly typed route provider for ASP.NET Web API appeared first on StrathWeb.


Darrel Miller: The Insanity of the Vary Header

In my first deep dive into an HTTP header, on the user-agent header, I said that I would try and produce a series of posts going under the covers on certain HTTP headers.  This post is about the Vary header.  The Vary header is both wonderful and sad at the same time.  I'll discuss how to make it work for you and where it fails miserably.

The Vary header is used for HTTP caching.  If you want the really gory details of HTTP caching, you can find them in Caching is hard, draw me a picture.  The short and pertinent part of that story is: when you make an HTTP request, it is possible that the response will come from a cache, rather than being generated by the origin server.  For the cache to know whether it can satisfy a request, it needs a cache key.

Anatomy of a cache key

Key

Cache entries have a primary cache key and potentially a secondary cache key.  The primary cache key is made up of a HTTP method and a URL.  For the vast majority of cases the HTTP method is a GET.  So, for the purposes of our discussion about the vary header, we can assume that the primary cache key is the URL of the HTTP resource.

Assuming a resource identifies itself as cacheable, or at least, does not explicitly prevent it, a cache that sits somewhere between the client and the origin server, could hold on to a copy of the representation returned from the origin server and store it for satisfying future requests to the same URL. 

But we have variants

The challenge that we have is that other HTTP headers can be used to request variations of the representation.  If we were to send Accept-Encoding: gzip in our request, we are telling the server that we can handle the response being compressed.  What should the cache do?  Should it ignore the request and pass it along to the server?  Should it return the uncompressed version?  For compressed content, it might not be a big deal, because if the client can handle compressed responses, it can also handle uncompressed responses, so whatever happens the client will be happy.  But what should the cache do with a compressed response that comes back from the origin server? Should it update the representation stored in the cache with the new compressed one?  That would be a problem for future requests from clients that do not have the ability to decompress responses.

The example of Accept-Encoding has lots of possible solutions.  However, a header like Accept-Language is more challenging.  If one user asks for a French version of a resource and another asks for an English version of a resource, only one can be stored in the cache if we limit ourselves to just the primary cache key.

We have the same problem if we just use the Accept header to do transparent negotiation between media types.  If one user asks for application/calendar+json and another asks for application/calendar+xml then we can only cache one of these at once.

Vary to the rescue

So far we have mentioned three different HTTP headers that could cause different variations of the resource to be returned.  We can use the Vary header in a response from a server to indicate which HTTP headers were used to produce the variation.

When a HTTP cache goes to store the representation it needs to look at the Vary header and, for each header listed, look at the corresponding headers of the request that generated the response.  The values of those request headers are used as the secondary cache key.  The cache then uses this secondary cache key to store multiple variants for the same primary cache key.

When trying to satisfy a request from those stored, the cache will use the headers named in the Vary header of the stored variant to generate a new secondary cache key from the request.  If the secondary cache key generated from the request matches that of the stored representation, then the stored representation can be served to the client.

Bingo! We can now cache all kinds of variants of our resource and the cache will know which one to serve up based on what the request asks for.
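To make that concrete, here is a rough sketch (not taken from any real cache implementation; the class and method names are invented for illustration) of how a cache might derive the secondary cache key from a stored response's Vary header and an incoming request.

using System;
using System.Linq;
using System.Net.Http;

static class SecondaryCacheKey
{
    // Builds a secondary cache key by concatenating, for each header named in the
    // stored response's Vary header, the corresponding value(s) from the request.
    public static string Create(HttpResponseMessage storedResponse, HttpRequestMessage request)
    {
        var varyHeaders = storedResponse.Headers.Vary;          // e.g. "Accept-Encoding"

        var parts = varyHeaders.Select(headerName =>
            request.Headers.TryGetValues(headerName, out var values)
                ? string.Join(",", values)
                : string.Empty);                                // header absent on the request

        return string.Join(":", parts);                         // e.g. "gzip,deflate" or "gzip,deflate:fr"
    }
}

As the example exchange in the next section shows, "gzip,deflate" and "gzip" produce different keys under this scheme, even though either client could happily use the stored gzip representation.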

Sounds like a great idea, but...

Consider this request

> GET /test HTTP/1.1
> Host: example.com
> Accept-Encoding: gzip,deflate
>
< HTTP/1.1 200 OK
< Vary: Accept-Encoding
< Content-Encoding: gzip
< Content-Type: application/json
< Content-Length: 230
< Cache-Control: max-age=10000000

followed by

> GET /test HTTP/1.1 
> Host: example.com 
> Accept-Encoding: gzip 
> 
< HTTP/1.1 200 OK 
< Vary: Accept-Encoding
< Content-Encoding: gzip 
< Content-Type: application/json 
< Content-Length: 230 
< Cache-Control: max-age=10000000 

The first request would generate a secondary cache key of "gzip,deflate" because the Vary header declared by the server says that the representation was affected by the value of the Accept-Encoding header. 

In the second request, the Accept-Encoding header is different, because this client does not support the "deflate" method of compression.  Even though the cache is holding onto a perfectly good copy of the representation that is gzip compressed and the second client can process gzipped representations, the second client will not get that stored response served because the Accept-Encoding header of the request does not match the value in the secondary cache key.

Translated into English: if you don't ask for exactly the same thing, you won't get the cached copy, even if it is what you want.

Wait, it gets worse

accident

Time passes, representations are cached, the origin server code is updated to be multilingual, and now the vary header that is returned includes both Accept-Encoding and Accept-Language.

A client makes the following request,

> GET /test HTTP/1.1
> Host: example.com
> Accept-Encoding: gzip,deflate
> Accept-Language: fr
>
< HTTP/1.1 200 OK
< Vary: Accept-Encoding, Accept-Language
< Content-Encoding: gzip
< Content-Language: fr
< Content-Type: application/json
< Content-Length: 230
< Cache-Control: max-age=10000000

The cache stores the representation using a secondary cache key of "gzip,deflate:fr".  The same client then makes exactly the same request. Can you see a problem?

If we assume that the representation we stored, back when the vary header only contained Accept-Encoding, is still fresh then we now have two stored representations that match.  This is because when we compare this new request with the old stored representation, the vary header of the old representation only tells us to look at the Accept-Encoding header.

The guidance provided by the HTTP Caching specification tells us that we  MUST use the most recent matching response to satisfy these ambiguous requests.   This isn't really a major problem for developers writing clients and servers, but it's a pain for people trying to write caches.  In fact, I haven't found a private cache implementation that actually does this yet.

It's not as simple as I make it out to be

I glossed over a number of additional issues mentioned in the spec.  When the vary header contains an asterisk, no variants are allowed to match.  I'm still trying to figure out why you would want to store a variant that will never match a request.

Also, I talked about generating the secondary cache key from the values in the request header.  Technically, before creating the secondary cache key, those header values should be normalized, which is a fancy term for stripping unnecessary whitespace, removing differences of letter casing when a header value is deemed case insensitive, and other more insane requirements like re-ordering field values where the order is not significant.  You can imagine doing a vary on an accept header that lists a bunch of different media types and having to parse them and sort them before being able to do a comparison!
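As a rough illustration of that normalization step, and assuming a header whose comma-separated field order is not significant, the comparison value could be produced along these lines (a sketch, not what any particular cache actually does):

using System;
using System.Linq;

static class HeaderNormalizer
{
    // Normalizes a comma-separated header value whose field order is not significant,
    // e.g. "deflate , GZIP" and "gzip,deflate" both become "deflate,gzip".
    public static string Normalize(string headerValue)
    {
        var fields = headerValue
            .Split(',')
            .Select(f => f.Trim().ToLowerInvariant())
            .Where(f => f.Length > 0)
            .OrderBy(f => f, StringComparer.Ordinal);

        return string.Join(",", fields);
    }
}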

If you think the specification is bad, you should see the implementations

I can't speak for implementations on all platforms, but the support for the vary header on the Windows platform is less than ideal.  Eric Lawrence covers the details of Vary in IE in a blog post.  It would not surprise me in the slightest if other platforms are similarly limited in their support for Vary.

Is there a point to this post?

I believe there are three points to this post: 

  • Vary is a widely used HTTP header, so ideally developers should understand how it is supposed to work.
  • Lots of people are gung-ho on transparent content negotiation.  Without a good working vary implementation, caching is going to be difficult.  That's not good for performance.
  • I'd like to point to a proposed alternative that solves many of the problems of the vary header, the Key Response Http Header.  I'll have to save discussion of this solution for a future post.
Road

Image Credit: Key https://flic.kr/p/56DLot
Image Credit: Rescue https://flic.kr/p/9w9doc
Image Credit: Accident https://flic.kr/p/5XfRKk
Image Credit: Road https://flic.kr/p/8GokGE


Radenko Zec: Simple way to share Dependency Resolvers between MVC and Web API

I have had several projects in the past using ASP.NET MVC 4 and 5 and ASP.NET Web API that reside in the same project. When you want to share a DI container between MVC and Web API, things can become complicated.

The reason for this is that ASP.NET MVC 5 uses the interface System.Web.Mvc.IDependencyResolver for implementing a dependency resolver, while ASP.NET Web API uses the System.Web.Http.Dependencies.IDependencyResolver interface, which has the same name but resides in a different namespace.

These two interfaces are different, even though they have the same name.

Why should you use MVC and Web API in the same project

There could be many reasons for this. For me the main reason was simplifying the way of securing Web API access by implementing session support in ASP.NET Web API, even though it is not recommended.

This way we can detect requests coming from our MVC application without the need to protect the API Key and API Secret which we use for mobile access to the Web API. Protecting the API Key and API Secret in a web application can be very complicated.
 
If you want to learn more about ASP.NET Web API I recommend a great book that I love written by real experts:
BookWebAPIEvolvable
Designing Evolvable Web APIs with ASP.NET

Simple implementation

I usually use Ninject as the DI container of my choice. First, we need to reference Ninject in our project.

After that we create a class called NinjectRegistrations, which inherits from NinjectModule and will be used for registering types into the container.

public class NinjectRegistrations : NinjectModule
    {
        public override void Load()
        {
            Bind<IApiHelper>().To<ApiHelper>();
        }
    }

 

After that we will create a class named NinjectDependencyResolver that implements both required interfaces mentioned above:

 public class NinjectDependencyResolver : NinjectDependencyScope, IDependencyResolver, System.Web.Mvc.IDependencyResolver
    {
        private readonly IKernel kernel;

        public NinjectDependencyResolver(IKernel kernel)
            : base(kernel)
        {
            this.kernel = kernel;
        }

        public IDependencyScope BeginScope()
        {
            return new NinjectDependencyScope(this.kernel.BeginBlock());
        }
    }

 

 
We also need to implement the class NinjectDependencyScope, from which NinjectDependencyResolver inherits:

    public class NinjectDependencyScope : IDependencyScope
    {
        private IResolutionRoot resolver;

        internal NinjectDependencyScope(IResolutionRoot resolver)
        {
            Contract.Assert(resolver != null);

            this.resolver = resolver;
        }

        public void Dispose()
        {
            var disposable = this.resolver as IDisposable;
            if (disposable != null)
            {
                disposable.Dispose();
            }

            this.resolver = null;
        }

        public object GetService(Type serviceType)
        {
            if (this.resolver == null)
            {
                throw new ObjectDisposedException("this", "This scope has already been disposed");
            }

            return this.resolver.TryGet(serviceType);
        }

        public IEnumerable<object> GetServices(Type serviceType)
        {
            if (this.resolver == null)
            {
                throw new ObjectDisposedException("this", "This scope has already been disposed");
            }

            return this.resolver.GetAll(serviceType);
        }
    }

 

That is the entire implementation. Usage is very simple. To use the same container in the MVC and Web API project, we just create the Ninject container in Global.asax.cs and pass the same Ninject resolver to MVC and Web API.

 
NinjectModule registrations = new NinjectRegistrations();
var kernel = new StandardKernel(registrations);
var ninjectResolver = new NinjectDependencyResolver(kernel);

DependencyResolver.SetResolver(ninjectResolver); // MVC
GlobalConfiguration.Configuration.DependencyResolver = ninjectResolver; // Web API
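With that wiring in place, both frameworks resolve constructor dependencies from the same kernel. As a minimal sketch (the controller names here are hypothetical; IApiHelper is the interface bound in NinjectRegistrations above):

// ASP.NET MVC controller, resolved via System.Web.Mvc.IDependencyResolver
public class HomeController : System.Web.Mvc.Controller
{
    private readonly IApiHelper _apiHelper;

    public HomeController(IApiHelper apiHelper)
    {
        _apiHelper = apiHelper;
    }
}

// ASP.NET Web API controller, resolved via System.Web.Http.Dependencies.IDependencyResolver
public class ValuesController : System.Web.Http.ApiController
{
    private readonly IApiHelper _apiHelper;

    public ValuesController(IApiHelper apiHelper)
    {
        _apiHelper = apiHelper;
    }
}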
 

You can use the same approach with any kind of DI container, not only Ninject.

And if you liked this post, go ahead and subscribe here to my RSS Feed to make sure you don’t miss other content like this.

 
 

The post Simple way to share Dependency Resolvers between MVC and Web API appeared first on RadenkoZec blog.


Darrel Miller: REST Agent - Lessons learned in building generic hypermedia clients

Several years ago when I was beginning to work on a large hyper media driven client, I recognized frequently recurring patterns and decided that I could raise the level of abstraction from an HTTP client to a hypermedia client.  It took about a year to realize that it was not as good an idea as I had first thought.

Mountaintop

Introducing RESTAgent

Four years ago I developed the RESTAgent library, created some samples, and wrote some blog posts (an intro, basic usage, and more advanced) to try and make it easier to deal with hypermedia on the client by providing generic support for any hypermedia type.  It was designed to provide two methods for navigating links: Navigate(Link link) and Embed(Link link).  The returned representation is stored as state in the RESTAgent class.

If the media type was recognized then RESTAgent would parse the content looking for additional links contained in the response.  In order for RESTAgent to know how to construct an HTTP request when following a link, a Link class was developed that would encapsulate all the link attributes defined in RFC5988.  This Link class could then be used to construct a complete HTTP request.

The combination of the Link class, the RESTAgent request generator and pluggable media type parsers allowed RESTAgent to navigate all over a hypermedia API using just a few simple methods.

Abstractions need to make things simpler

The challenge, as I discovered, is that HTTP is an extremely flexible protocol.  There are many different valid ways of dealing with the wide variety of HTTP status codes.  Dealing with client errors, redirects and server errors are all situations where the right solution is "it depends".  Trying to encapsulate that behaviour inside RESTAgent left me with two choices: be opinionated about the behaviour, or build a sophisticated interface to allow users of RESTAgent to configure or replace the default RESTAgent behaviour. 

lightswitch

Being opinionated about the way HTTP is used is fine for server frameworks, but for a client library you need to be able to accommodate the chosen approaches of all of the APIs that you want to interact with.  Flexibility is a critical feature for clients.

Trying to build a lot of flexibility into an abstraction ends up with a solution just as complex as the thing that you are trying to abstract.  At this point the value that you are adding is negated by requiring people to learn the new abstraction.

Generic content is generic

package

Another significant limitation of a generic hypermedia client is that the only semantics it understands are links.  Links can add a huge amount of value to distributed systems, but it is quite difficult to do useful tasks with only links.  Client applications need some domain semantics to work with.  Having RESTAgent only able to partially handle content ended up with content being managed in multiple places, which is less than ideal.

The Link is where the magic is

Further experimentation showed me that the real value of the RESTAgent library was in the Link abstraction.  Its ability to provide a place to encapsulate the interaction semantics related to a particular link relation type was critical to making dealing with hypermedia easier.

ChainLink

By creating various media type parsers that expose their links using this standardized link class, we can obtain a large amount of the value that was provided by RESTAgent without needing to abstract away the HTTP interface.  My recent efforts in this area have been on building out a more useful Link class.
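To give a rough idea of the shape of such a Link class, here is a stripped-down sketch that captures only a few RFC 5988 attributes and encapsulates turning a link into a request; the actual class described here is considerably richer than this.

using System;
using System.Net.Http;
using System.Net.Http.Headers;

public class Link
{
    public Link()
    {
        Method = HttpMethod.Get;
    }

    public Uri Target { get; set; }       // the href of the link
    public string Relation { get; set; }  // link relation type from RFC 5988, e.g. "next"
    public string Type { get; set; }      // media type hint for the target representation
    public HttpMethod Method { get; set; }

    // Encapsulates how a link is turned into a complete HTTP request.
    public virtual HttpRequestMessage CreateRequest()
    {
        var request = new HttpRequestMessage(Method, Target);
        if (!string.IsNullOrEmpty(Type))
        {
            request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue(Type));
        }
        return request;
    }
}

A link relation specific subclass can then override CreateRequest to bake in whatever interaction semantics that relation requires.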

Managing Client State

The other work that RESTAgent was trying to help with was managing client state.  However, I realized that within a full blown client application there tend to be lots of "mini" client state machines that have different lifetimes and work together to form the complete client application.  I initially described these mini state machines as missions.  They used a single RESTAgent to achieve a goal through a set of client-server interactions.  Once I actually formalized the notion of this encapsulating mission into an actual implementation, I decided that the mission itself was a better place to manage state than within the RESTAgent, where all state had to be handled generically.  I will write more on the concept of missions in a future post.

My advice

I've seen a few people recently talking about building generic hypermedia clients.  Hopefully, if they read about my experiences, they may avoid making the same mistakes I did, and get a better result.  I would definitely be interested to hear if anyone else is experimenting with concepts similar to Links and Missions.

Photo credit: Chain https://flic.kr/p/3fdHLE
Photo credit: Package https://flic.kr/p/4A15jm
Photo credit: Mountain top https://flic.kr/p/883WpP
Photo credit: Light switch https://flic.kr/p/6mZCc4


Taiseer Joudeh: AngularJS Authentication with Auth0 & ASP .Net OWIN

This is guest post written originally to Auth0.

Recently I’ve blogged about using tokens to authenticate users in single page applications. I used ASP.NET Web API, Owin middleware and ASP.NET Identity to store local accounts in a database, but I didn’t tap into social identity logins such as Google, Microsoft Accounts, Facebook, etc., because each provider supplies different information (profile schema) about the logged in user; properties might be missing or have different names, so handling this manually and storing those different schemes would not be a straightforward process.

I was introduced to Auth0 by Matias Woloski. Basically, Auth0 is a feature-rich identity management system which supports local database accounts, integration with more than 30 social identity providers, and enterprise identity providers such as AD, Office 365, and Google Apps. You can check the full list here.

In this post I’ll implement the same set of features I’ve implemented previously, this time using the Auth0 management system, and I’ll integrate authentication with multiple social identity providers using less code in the back-end API and in the front-end application, which will be built using AngularJS. So let’s jump to the implementation.

I’ll split this post into two sections: the first section covers creating the back-end API and configuring the Auth0 application, and the second section covers creating the SPA and the Auth0 widget.

The demo application we’ll build in this post can be accessed from (http://auth0web.azurewebsites.net). The source code is available on my GitHub Repo.

Section 1: Building the Back-end API

Step 1.1: Create new Application in Auth0

After you register with Auth0 you need to create an application. Auth0 comes with a set of applications with easy-to-integrate SDKs; in our case we’ll select the “ASP.NET (OWIN)” application as in the image below:

OWIN Application

After you give the application a friendly name (in my case I named it “ASP.NET (OWIN)”), a popup window will show up asking which connections you want to enable for this application. A “Connection” in Auth0 means an identity provider you want to allow in the application; in my case I’ll allow database local accounts, Facebook, Google, GitHub, and Microsoft Accounts as in the image below. Usually the social accounts will be disabled for all applications; to enable them navigate to the “Connections” tab, choose “Social”, then enable the social providers you’d like for your application.

Social Connections

Once the application is created, you can navigate to the application “settings” link, where you will find all the information needed (Domain, Client Id, Client Secret, Callback URLs, etc.) to configure the Web API we’ll build in the next step.

App Settings

Step 1.2: Create the Web API

In this tutorial I’m using Visual Studio 2013 and .NET framework 4.5. You can follow along using Visual Studio 2012, but you need to install Web Tools 2013.1 for VS 2012 by visiting this link.

Now create an empty solution and name it “AuthZero”, then add a new ASP.NET Web application named “AuthZero.API”. The selected template for the project will be the “Empty” template with no core dependencies; check the image below:

WebAPIProject

Step 1.3: Install the needed NuGet Packages

This project is empty, so we need to install the NuGet packages needed to set up our Owin server and configure ASP.NET Web API to be hosted within an Owin server. Open the NuGet Package Manager Console and install the packages below:

Install-Package Microsoft.AspNet.WebApi -Version 5.1.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.1.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 2.1.0
Install-Package Microsoft.Owin.Security.Jwt -Version 2.1.0
Install-Package Microsoft.Owin.Cors -Version 2.1.0
Update-Package System.IdentityModel.Tokens.Jwt

The first package “Microsoft.AspNet.WebApi” contains everything we need to host our API on IIS; the second package “Microsoft.AspNet.WebApi.Owin” will allow us to host Web API within an Owin server.

The third package “Microsoft.Owin.Host.SystemWeb” will be used to enable our Owin server to run our API on IIS using the ASP.NET request pipeline, as eventually we’ll host this API on Microsoft Azure Websites which uses IIS.

The fourth package “Microsoft.Owin.Security.Jwt” will enable the Owin server middleware to protect and validate JSON Web Tokens (JWT).

The last package “Microsoft.Owin.Cors” will be responsible for allowing CORS for our Web API, so it will accept requests coming from any origin.

Note: The package “System.IdentityModel.Tokens.Jwt” that gets installed by default is old (version 1.0.0), so we need to update it to the latest (version 3.0.2).

Step 1.4: Add Auth0 settings to Web.config

We need to read the Auth0 settings for the application we created earlier to configure our API, so open the Web.config file and add the keys below; do not forget to replace the values of those keys with the correct values you obtained when you created the application on Auth0.

<appSettings>
     <add key="auth0:ClientId" value="YOUR_CLIENT_ID" />
     <add key="auth0:ClientSecret" value="YOUR_CLIENT_SECRET" />
     <add key="auth0:Domain" value="YOUR_DOMAIN" />
  </appSettings>

Step 1.5: Add Owin “Startup” Class

Now we want to add a new class named “Startup”. It will contain the code below:

[assembly: OwinStartup(typeof(AuthZero.API.Startup))]
namespace AuthZero.API
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            ConfigureAuthZero(app);

            WebApiConfig.Register(config);
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            app.UseWebApi(config);

        }
    }
}

What we’ve implemented above is simple: this class will be fired once our server starts. Notice the “assembly” attribute, which states which class to fire on start-up. The “Configuration” method accepts a parameter of type “IAppBuilder”; this parameter will be supplied by the host at run-time. This “app” parameter is an interface which will be used to compose the application for our Owin server.

The implementation of the method “ConfigureAuthZero” will be covered in the next step; this method will be responsible for configuring our Web API to generate JWTs using the Auth0 application we created earlier.

The “HttpConfiguration” object is used to configure API routes, so we’ll pass this object to the “Register” method in the “WebApiConfig” class.

Lastly, we’ll pass the “config” object to the extension method “UseWebApi”, which will be responsible for wiring up ASP.NET Web API to our Owin server pipeline.

The implementation of “WebApiConfig” is simple; all you need to do is add this class under the folder “App_Start” and paste the code below:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API routes
            config.MapHttpAttributeRoutes();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

Step 1.6: Configure generating JWT using Auth0

Now we want to configure our Web API to use the Auth0 application created earlier to generate JSON Web Tokens which will allow authenticated users to access the secured methods in our Web API. To implement this, open the class “Startup” and add the code below:

//Rest of Startup class implementation is here

private void ConfigureAuthZero(IAppBuilder app)
        {
            var issuer = "https://" + ConfigurationManager.AppSettings["auth0:Domain"] + "/";
            var audience = ConfigurationManager.AppSettings["auth0:ClientId"];
            var secret = TextEncodings.Base64.Encode(TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["auth0:ClientSecret"]));

            // Api controllers with an [Authorize] attribute will be validated with JWT
            app.UseJwtBearerAuthentication(
                new JwtBearerAuthenticationOptions
                {
                    AuthenticationMode = Microsoft.Owin.Security.AuthenticationMode.Active,
                    AllowedAudiences = new[] { audience },
                    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
                    {
                        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
                    }
                });
        }

What we’ve implemented in this method is the following:

  • Read the Auth0 application settings stored in the web.config file.
  • We’ve added the JSON Web Token bearer middleware to our Owin server. This class accepts a set of options; as you can see we’ve set the authentication mode to “Active”, which configures the middleware to check every incoming request and attempt to authenticate the call, and if it is successful, it will create a principal that represents the current user and assign that principal to the hosting environment.
  • We’ve set the issuer of our JSON Web Token (Domain Name) along with the base64 encoded symmetric key (Client Secret) which will be used to sign the generated JSON Web Token.

Now if we want to secure any end point in our Web API, all we need to do is attribute the Web API controller with the “[Authorize]” attribute, so that only requests containing a valid bearer token can access it.

Note: The JWT token expiration time can be set from the Auth0 application settings as in the image below; the default value is 36000 seconds (10 hours).

JWT Expiration

Step 1.7: Add a Secure Shipments Controller

Now we want to add a secure controller to serve our Shipments. We’ll assume that this controller returns shipments only for authenticated users; to keep things simple we’ll return static data. So add a new controller named “ShipmentsController” and paste the code below:

[RoutePrefix("api/shipments")]
    public class ShipmentsController : ApiController
    {
        [Authorize]
        [Route("")]
        public IHttpActionResult Get()
        {
            return Ok(Shipment.CreateShipments());
        }
    }

    #region Helpers

    public class Shipment
    {
        public int ShipmentId { get; set; }
        public string Origin { get; set; }
        public string Destination { get; set; }

        public static List<Shipment> CreateShipments()
        {
            List<Shipment> ShipmentList = new List<Shipment> 
            {
                new Shipment {ShipmentId = 10248, Origin = "Amman", Destination = "Dubai" },
                new Shipment {ShipmentId = 10249, Origin = "Dubai", Destination = "Abu Dhabi"},
                new Shipment {ShipmentId = 10250,Origin = "Dubai", Destination = "New York"},
                new Shipment {ShipmentId = 10251,Origin = "Boston", Destination = "New Jersey"},
                new Shipment {ShipmentId = 10252,Origin = "Cairo", Destination = "Jeddah"}
            };

            return ShipmentList;
        }
    }

    #endregion

Notice how we added the “Authorize” attribute on the “Get” method, so if you try to issue an HTTP GET request to the end point “http://localhost:port/api/shipments” you will receive HTTP status code 401 unauthorized, because the request sent so far doesn’t contain a JWT in the authorization header. You can check it using this end point: http://auth0api.azurewebsites.net/api/shipments

Step 1.8: Add new user and Test the API

The Auth0 dashboard allows you to manage users registered in the applications you created under Auth0, so to test the API we’ve created we first need to create a user. I’ll jump back to the Auth0 dashboard, navigate to the “Users” tab and click “New”; a popup window will appear as in the image below. This window will allow us to create a local database user, so fill up the form to create a new user. I’ll use the email “taiseer.joudeh@hotmail.com” and password “87654321”.

Auth0 New User

Once the user is created we need to generate a JWT token so we can access the secured end point. To generate a JWT we can send an HTTP POST request to the end point https://tjoudeh.auth0.com/oauth/ro; this end point will work only for local database connections and AD/LDAP. You can check the Auth0 API playground here. To get a JWT token, open your favorite REST client and issue a POST request as in the image below:

JWT Request

Notice that the content-type and payload type is “x-www-form-urlencoded”, so the payload body will be URL encoded. As well, notice that we’ve specified the “Connection” for this token and the “grant_type”. If all is correct we’ll receive a JWT token (id_token) in the response.

Note: The “grant_type” indicates the type of grant being presented in exchange for an access token, in our case it is password.
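If you’d rather script the token request than use a REST client, a minimal sketch with HttpClient is shown below. The field names follow the /oauth/ro request described above (client_id, username, password, connection, grant_type); the connection name and scope value are assumptions based on Auth0’s defaults, so adjust them to your own application settings.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class Auth0TokenRequest
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Values below are placeholders / assumed defaults; use your own Auth0 application settings.
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "client_id",  "YOUR_CLIENT_ID" },
                { "username",   "taiseer.joudeh@hotmail.com" },
                { "password",   "87654321" },
                { "connection", "Username-Password-Authentication" }, // assumed default database connection name
                { "grant_type", "password" },
                { "scope",      "openid" }                            // assumed; needed to get an id_token back
            });

            var response = await client.PostAsync("https://tjoudeh.auth0.com/oauth/ro", form);

            // On success the body contains the JWT (id_token) along with an access_token.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}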

Now we want to use this token to request the secure data from the end point http://auth0api.azurewebsites.net/api/shipments, so we’ll issue a GET request to this end point and pass the bearer JWT token in the Authorization header. For any secure end point we have to pass this bearer token along with each request to authenticate the user.

The GET request will be as the image below:

Get Secure

If all is correct we’ll receive HTTP status 200 OK along with the secured data in the response body; if you try to change any character in the signed token you will directly receive HTTP status code 401 unauthorized.
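For completeness, here is a minimal sketch of calling the secured endpoint from C# with the bearer token attached; the token value is a placeholder for the id_token obtained above.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class GetShipments
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Attach the JWT (id_token) received from Auth0 as a bearer token.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "PASTE_ID_TOKEN_HERE");

            var response = await client.GetAsync(
                "http://auth0api.azurewebsites.net/api/shipments");

            Console.WriteLine(response.StatusCode); // 200 OK with a valid token, 401 otherwise
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}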

By now our back-end API is ready to be consumed by the front-end app we’ll build in the next section.

Section 2: Building the Front-end SPA

Now we want to build a SPA using AngularJS which will communicate with the back-end API created earlier. Auth0 provides a very neat and feature-rich JavaScript plugin named “Widget”. This widget is very easy to implement in any web app. When you use it you will get features out of the box such as integration with social providers, integration with AD, and sign up/forgot password features. The widget plugin looks like the image below:

Auth0Widget

The enabled social login providers can be controlled from the Auth0 dashboard using the “Connections” tab, so for example if you want to enable LinkedIn, you just need to activate it on your Auth0 application and it will show up directly on the Widget.

Step 2.1: Add the Shell Page (Index.html)

First of all, we need to add the “Single Page” which is a container for our application. This page will contain a link to sign in, a section that will show up for authenticated users only, and an “ng-view” directive which will be used to load partial views. The HTML for this page is as below:

<!DOCTYPE html>
<html>
    <head>
        <link href="content/app.css" rel="stylesheet" />
        <link href="https://netdna.bootstrapcdn.com/bootstrap/3.0.3/css/bootstrap.min.css" rel="stylesheet">
    </head>
    <body ng-app="auth0-sample" class="home" ng-controller="mainController">
        <div class="login-page clearfix">
          <div class="login-box auth0-box" ng-hide="loggedIn">
            <img src="https://i.cloudup.com/StzWWrY34s.png" />
            <h3>Auth0 Example</h3>
            <p>Zero friction identity infrastructure, built for developers</p>
             <a ng-click="login()" class="btn btn-primary btn-lg btn-block">SignIn</a>
          </div>
             <!-- This log in page would normally be done using routes
          but for the simplicity of the exercise, we're using ng-show -->
          <div ng-show="loggedIn" class="logged-in-box auth0-box">
            <img ng-src="{{auth.profile.picture}}" class="avatar"/>
            <h3>Welcome {{auth.profile.name}}</h3>
            <div class="profile-info row">
                <div class="profile-info-label col-xs-6">Nickname</div>
                <div class="profile-info-content col-xs-6">{{auth.profile.nickname}}</div>
            </div>
            <div class="profile-info row">
                <div class="profile-info-label col-xs-6">Your JSON Web Token</div>
                <div class="profile-info-content col-xs-6">{{auth.idToken | limitTo:12}}...</div>
                
            </div>
                 <div class="profile-info row">
                    <a ng-href ="#/shipments" class="btn btn-success btn-sm btn-block">View my shipments</a>
               </div>
               <div class="profile-info row">
                    <a ng-click="logout()" class="btn btn-danger btn-sm btn-block">Log out</a>
               </div>
          </div>
          <div data-ng-view="">
        </div>
        </div>
        <script src="https://code.angularjs.org/1.2.16/angular.min.js" type="text/javascript"></script>
        <script src="https://code.angularjs.org/1.2.16/angular-cookies.min.js" type="text/javascript"></script>
        <script src="https://code.angularjs.org/1.2.16/angular-route.min.js" type="text/javascript"></script>
        <script src="app/app.js" type="text/javascript"></script>
        <script src="app/controllers.js" type="text/javascript"></script>
    </body>
</html>

Step 2.2: Add reference for Auth0-angular.js and Widget libraries

Now we need to add a reference to the file Auth0-angular.js. This file is an AngularJS module which allows us to trigger the authentication process and parse the JSON Web Token with the “ClientID” we obtained once we created the Auth0 application.

As well, we need to add a reference for the “Widget” plugin, which will be responsible for showing the nice login popup we’ve shown earlier. To implement this, at the bottom of the “index.html” page add a reference to those new JS files (lines 5 and 6) as in the code snippet below:

<!--rest of HTML is here--> 
<script src="https://code.angularjs.org/1.2.16/angular.min.js" type="text/javascript"></script>
<script src="https://code.angularjs.org/1.2.16/angular-cookies.min.js" type="text/javascript"></script>
<script src="https://code.angularjs.org/1.2.16/angular-route.min.js" type="text/javascript"></script>
<script src="https://cdn.auth0.com/w2/auth0-widget-4.js"></script>
<script src="https://cdn.auth0.com/w2/auth0-angular-0.4.js"> </script>
<script src="app/app.js" type="text/javascript"></script>
<script src="app/controllers.js" type="text/javascript"></script>

Step 2.3: Booting up our AngularJS app

We’ll add a file named “app.js”; this file will be responsible for configuring our AngularJS app, so add this file and paste the code below:

var app = angular.module('auth0-sample', ['auth0-redirect', 'ngRoute']);

app.config(function (authProvider, $httpProvider, $routeProvider) {
    authProvider.init({
        domain: 'tjoudeh.auth0.com',
        clientID: '80YvW9Bsa5P67RnMZRJfZv8jEsDSerDW',
        callbackURL: location.href
    });
});

By looking at the code above you will notice that we’ve injected the dependency “auth0-redirect” into our module. Once it is injected we can use the “authProvider” service, which we’ll use to configure the widget, so we need to set the values for the domain, clientID, and callbackURL. Those values are obtained from the Auth0 application we’ve created earlier. Once you set the callbackURL you need to visit the Auth0 application settings and set the same callbackURL as in the image below:

Auth0 CallBack URL

The callbackURL is used once the user is successfully authenticated; Auth0 will redirect the user to the callbackURL with a hash containing an access_token and the JSON Web Token (id_token).

Step 2.4: Showing up the Widget Plugin

Now it is time to show the Widget once the user clicks on the SignIn link, so we need to add a file named “controllers.js”. Inside this file we’ll define a controller named “mainController”; the implementation for this controller is as the code below:

app.controller('mainController', ['$scope', '$location', 'auth', 'AUTH_EVENTS',
  function ($scope, $location, auth, AUTH_EVENTS) {

    $scope.auth = auth;
    $scope.loggedIn = auth.isAuthenticated;

    $scope.$on(AUTH_EVENTS.loginSuccess, function () {
      $scope.loggedIn = true;
      $location.path('/shipments');
    });
    $scope.$on(AUTH_EVENTS.loginFailure, function () {
      console.log("There was an error");
    });

    $scope.login = function () {
        auth.signin({popup: false});
    }

    $scope.logout = function () {
        auth.signout();
        $scope.loggedIn = false;
        $location.path('/');
    };

}]);

You can notice that we are injecting the “auth” service and “AUTH_EVENTS”. Inside the “login” function we are just calling “auth.signin” and passing “popup:false” as an option. The nice thing here is that the Auth0-angularjs module broadcasts events related to successful/unsuccessful logins, so all child scopes which are interested in this event can handle the login response. So in our case when the login is successful (the user is authenticated), we’ll set a flag named “loggedIn” to true and redirect the user to a secure partial view named “shipments”, which we’ll add in the following steps.

Once the user is authenticated, the JSON Web Token and the access token are stored automatically using the AngularJS cookie store, so you won’t need to worry about refreshing the single page application and losing the authenticated user context; all this is done by the Widget and the Auth0-angularjs module.

Note: To learn how to customize the Widget plugin, check the link here.

As well, we’ve added support for signing out users. All we need to do is call “auth.signout” and redirect the user to the application root; the widget will take care of clearing the stored token from the cookie store.

Step 2.5: Add Shipments View and Controller

Now we want to add a view which should be accessed by authenticated users only, so we need to add a new partial view named “shipments.html”. It will only render the static data from the end point http://auth0api.azurewebsites.net/api/shipments when issuing a GET request. The HTML for the partial view is as below:

<div class="row">
    <div class="col-md-4">
        &nbsp;
    </div>
    <div class="col-md-4">
        <h5><strong>My Secured Shipments</strong> </h5>
        <table class="table table-striped table-bordered table-hover">
            <thead>
                <tr>
                    <th>Shipment ID</th>
                    <th>Origin</th>
                    <th>Destination</th>
                </tr>
            </thead>
            <tbody>
                <tr data-ng-repeat="shipment in shipments">
                    <td>
                        {{ shipment.shipmentId }}
                    </td>
                    <td>
                        {{ shipment.origin }}
                    </td>
                    <td>
                        {{ shipment.destination }}
                    </td>
                </tr>
            </tbody>
        </table>
    </div>
    <div class="col-md-4">
        &nbsp;
    </div>
</div>

Now open the file “controllers.js” and add the implementation for “shipmentsController” as the code below:

app.controller('shipmentsController', ['$scope', '$http', '$location', function ($scope, $http, $location) {
    var serviceBase = "http://auth0api.azurewebsites.net/";
    $scope.shipments = [];
    init();

    function init() {
        getShipments();
    }
    function getShipments() {
        var shipmentsSuccess = function (response) {
            $scope.shipments = response.data;
        }
        var shipmentsFail = function (error) {
            if (error.status === 401) {
                $location.path('/');
            }
        }
        $http.get(serviceBase + 'api/shipments').then(shipmentsSuccess, shipmentsFail);
    }
}]);

The implementation for this controller is pretty straightforward. We are just sending an HTTP GET request to the secured endpoint http://auth0api.azurewebsites.net/api/shipments; if the call succeeds we set the returned shipments in a $scope object named “shipments”, and if it fails because the user is unauthorized (HTTP status code 401) then we redirect the user to the application root.

Now, to be able to access the secured end point we have to send the JSON Web Token in the authorization header for this request. As you notice, we are not setting the token value inside this controller. The right way to do this is to use an AngularJS interceptor. Thanks to the Auth0-angularjs module, implementing this is very simple. This interceptor will allow us to capture every XHR request and manipulate it before sending it to the back-end API, so we’ll be able to set the bearer token if the token exists in the cookie store (the user is authenticated).

Step 2.6: Add the Interceptor and Configure Routes

All you need to do to add the interceptor is push it to the $httpProvider service’s interceptors array. Setting the token with each request will be done by the Auth0-angularjs module.

As well, to configure the shipments route we need to map the “shipmentsController” to the “shipments” partial view using the $routeProvider service, so open the “app.js” file again and replace all the code in it with the code snippet below:

var app = angular.module('auth0-sample', ['auth0-redirect', 'ngRoute', 'authInterceptor']);

app.config(function (authProvider, $httpProvider, $routeProvider) {
    authProvider.init({
        domain: 'tjoudeh.auth0.com',
        clientID: '80YvW9Bsa5P67RnMZRJfZv8jEsDSerDW',
        callbackURL: location.href
    });

    $httpProvider.interceptors.push('authInterceptor');

    $routeProvider.when("/shipments", {
        controller: "shipmentsController",
        templateUrl: "/app/views/shipments.html"
    });

    $routeProvider.otherwise({ redirectTo: "/" });
});

By completing this step we are ready to run the application and see how Auth0 simplified and enriched the user authentication experience.

If all is implemented correctly, after you are authenticated using a social login or a database account, your profile information and the secured shipments view will look like the image below:

LogedIn

That’s it for now folks!

I hope this step-by-step post will help you integrate Auth0 with your applications. If you have any questions, please drop me a comment.

The post AngularJS Authentication with Auth0 & ASP .Net OWIN appeared first on Bit of Technology.


Darrel Miller: Composing API responses for maximum reuse with ASP.NET Web API

In Web API 2.1 a new mechanism was introduced for returning HTTP messages that appeared to be a cross between HttpResponseMessage and the ActionResult mechanism from ASP.NET MVC.  At first I wasn't a fan of it at all.  It appeared to add little new value and just provide yet another alternative that would be a source of confusion.  It wasn't until I saw an example produced by Brad Wilson that I was convinced that it had value.

The technique introduced by Brad was to enable a chain of action results to be created in the controller action method that would then be processed when the upstream framework decided it needed to have an instance of a HttpResponseMessage.

A composable factory

BuildingBlocks

The IHttpActionResult interface is a single asynchronous method that returns a HttpResponseMessage.  It is effectively a HttpResponseMessage factory.  Being able to chain together different implementations of IHttpActionResult allows you to compose the exact behaviour you want in your controller method with re-usable classes.  With a little bit of extension method syntax sugar you can create controller actions that look like this,

 public IHttpActionResult Get()
        {
            var home = new HomeDocument();

            home.AddResource(TopicsLinkHelper.CreateLink(Request).WithHints());
            home.AddResource(DaysLinkHelper.CreateLink(Request).WithHints());
            home.AddResource(SessionsLinkHelper.CreateLink(Request).WithHints());
            home.AddResource(SpeakersLinkHelper.CreateLink(Request).WithHints());


            return new OkResult(Request)
                .WithCaching(new CacheControlHeaderValue() { MaxAge = new TimeSpan(0, 0, 60) })
                .WithContent(new HomeContent(home));
        }

The OkResult class implements IHttpActionResult, and the WithCaching and WithContent methods build a chain with other implementations.  This is similar to the way HttpMessageHandler and DelegatingMessageHandler work together.

Apply your policies

In the example above, the CachingResult didn’t add a whole lot of value over just setting the CacheControl header directly. However, I think it could be a good place to start defining consistent policies across your API.  Consider writing a bit of infrastructure code to allow you to do,

 public IHttpActionResult Get()
 {
          var image = /* ... Get some image */

            return new OkResult(Request)
                .WithCaching(CachingPolicy.Image)
                .WithImageContent(image);
 }

This example is a bit of a thought experiment; however, the important point is that you can create standardized, re-usable classes that apply a certain type of processing to a generated HttpResponseMessage.

Dream a little

If I were to let my imagination run wild, I could see all kinds of uses for these methods,

new CreatedResult(Request)
.WithLocation(Uri uri)
.WithContent(HttpContent content)
.WithEtag()

new ServerNotAvailableResult(Request)
.WithRetry(Timespan timespan)
.WithErrorDetail(exception)

new AcceptResult(Request)
.WithStatusMonitor(Uri monitorUrl)

new OkResult(Request)
 .WithNegotiatedContent(object body)
 .WithLastModified(DateTime lastModified)
 .WithCompression()

Some of these methods might be able to be created as completely general purpose “framework level” methods, but I see them more as being able to apply your API standards to controller responses. 

More testable

One significant advantage of using this approach over using ActionFilter attributes is that when unit testing your controllers, you will be able to test the effect of the ActionResult chain.  With ActionFilter attributes, you need to create an in-memory host to get the full framework infrastructure support in order for the ActionFilter objects to execute.

Another advantage of this approach over action filters for defining action-level behaviour is that by constructing the chain, you have explicit control over the order in which the changes are applied to your response. 

More reusable

It may also be possible to take the abstraction one step further and create a factory for pre defined chains.  If the same sets of behaviour are applicable to many different actions then you can create those chains in one place and re-use a factory method.  This can be useful in controllers where there are a variety of different overloads of a method that take different parameters, but the response is always constructed in a similar way.   The method for creating the action result chain can be in the controller itself.
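For example, a controller (or a shared helper) could expose a small factory method that builds a commonly used chain in one place; this is a hypothetical sketch built on the OkResult, WithCaching and WithContent pieces shown earlier, and it assumes WithContent accepts an HttpContent instance.

public static class ResponseFactories
{
    // A pre-defined, re-usable chain: 200 OK with the supplied content and a 60 second cache lifetime.
    public static IHttpActionResult CachedOk(HttpRequestMessage request, HttpContent content)
    {
        return new OkResult(request)
            .WithCaching(new CacheControlHeaderValue { MaxAge = TimeSpan.FromSeconds(60) })
            .WithContent(content);
    }
}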

The implementation

The CachingResult class looks like this,

public class CachingResult : IHttpActionResult
    {
        private readonly IHttpActionResult _actionResult;
        private readonly CacheControlHeaderValue _cachingHeaderValue;


        public CachingResult(IHttpActionResult actionResult, CacheControlHeaderValue cachingHeaderValue)
        {
            _actionResult = actionResult;
            _cachingHeaderValue = cachingHeaderValue;
        }

        public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
        {

            return _actionResult.ExecuteAsync(cancellationToken)
                .ContinueWith(t =>
                {
                    t.Result.Headers.CacheControl = _cachingHeaderValue;
                    return t.Result;
                }, cancellationToken);
        }
    }

and the extension method is simply,

  public static IHttpActionResult WithCaching(this IHttpActionResult actionResult, CacheControlHeaderValue cacheControl)
  {
            return new CachingResult(actionResult, cacheControl);
  }
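The WithContent extension used in the first example is not shown in the post; following the same pattern, a minimal sketch might look like this (ContentResult is an assumed name):

public static IHttpActionResult WithContent(this IHttpActionResult actionResult, HttpContent content)
{
    return new ContentResult(actionResult, content);
}

public class ContentResult : IHttpActionResult
{
    private readonly IHttpActionResult _actionResult;
    private readonly HttpContent _content;

    public ContentResult(IHttpActionResult actionResult, HttpContent content)
    {
        _actionResult = actionResult;
        _content = content;
    }

    public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        // Execute the inner result and then attach the payload body.
        return _actionResult.ExecuteAsync(cancellationToken)
            .ContinueWith(t =>
            {
                t.Result.Content = _content;
                return t.Result;
            }, cancellationToken);
    }
}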

A more refined implementation might be to create an abstract base class that takes care of the common housekeeping.  It can also take care of actually instantiating a default HttpResponseMessage if the ActionResult is at the end of the chain.  Something like this,

public abstract class BaseChainedResult : IHttpActionResult
    {
        private readonly IHttpActionResult _NextActionResult;

        protected BaseChainedResult(IHttpActionResult actionResult)
        {
            _NextActionResult = actionResult;
        }

        public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
        {
            if (_NextActionResult == null)
            {
                var response = new HttpResponseMessage();
                ApplyAction(response);
                return Task.FromResult(response);
            }
            else
            {

                return _NextActionResult.ExecuteAsync(cancellationToken)
                    .ContinueWith(t =>
                    {
                        var response = t.Result;
                        ApplyAction(response);
                        return response;
                    }, cancellationToken);
            }
        }

        public abstract void ApplyAction(HttpResponseMessage response);
    
    }
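With that base class in place, a chained result only needs to override ApplyAction.  For example, the CachingResult above could be reduced to something like this (a sketch):

public class CachingResult : BaseChainedResult
{
    private readonly CacheControlHeaderValue _cachingHeaderValue;

    public CachingResult(IHttpActionResult actionResult, CacheControlHeaderValue cachingHeaderValue)
        : base(actionResult)
    {
        _cachingHeaderValue = cachingHeaderValue;
    }

    public override void ApplyAction(HttpResponseMessage response)
    {
        // The only responsibility left: set the caching header.
        response.Headers.CacheControl = _cachingHeaderValue;
    }
}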

A more complete usage example

If you are interested in seeing these classes in use in a complete project you can take a look at the Conference API project on Github.

Image credit: Building blocks https://flic.kr/p/4r9zSK


Darrel Miller: An HTTP Resource is a lot simpler than you might think

Unfortunately, I still regularly run into articles on the web that misunderstand the concept of an HTTP resource.  Considering it is a core piece of web architecture, having a clear understanding of what it means can make many other pieces of web architectural guidance considerably easier to understand.

To try and keep this post as practical as possible, let’s first consider some URLs.

http://example.org/customers/10 
http://example.org/customers?id=10 
http://example.org/customers 
http://example.org/customers?active=true 
http://example.org/customers/10/edit

How many resources are being identified by these five URLs? 

DogTags

If all of those requests return some kind of 2XX response, then there are five distinct information resources.  The presence of query string parameters, path parameters, verbs or nouns, or plural or singular terms, has no bearing on the fact that these are distinct web identifiers that identify distinct resources. 

Time to set the record straight

Let me give a few examples that seem to confuse people:

Resources > Entities

Another common misconception is that resources are limited to exposing entities from your application domain.  Here are some examples of resources that are perfectly valid despite what you might read on the internet:

http://example.org/dashboard
http://example.org/printer 
http://example.org/barcodeprocessor
http://example.org/invoice/32/status 
http://example.org/searchform
http://example.org/calculator

It’s very simple

A resource is some piece of information that is exposed on the web using a URL as an identifier.  Try not to read anything more into the definition than that. 


Radenko Zec: Simple way to implement caching in ASP.NET Web API

This article is not about caching the output of APIControllers.
Often in ASP.NET Web API you will have a need to cache something temporarily in memory probably to improve performance.

There are several nice libraries that allow you to do just that.

One very popular library is CacheCow, and here you will find a very nice article on how to use it.

But what if you want to do caching without a third party library, in a very simple way?
 
If you want to learn more about ASP.NET Web API I recommend a great book that I love written by real experts:
 
BookWebAPIEvolvable
Designing Evolvable Web APIs with ASP.NET 

Caching without third party library in a very simple way

I had a need for this. I implemented some basic authentication for the ASP.NET Web API with Tokens and wanted to cache Tokens temporarily in memory to avoid going into the database for every HTTP request.

The solution is very simple.
You just need to use the Microsoft class for this called MemoryCache, which resides in the System.Runtime.Caching dll.

 

I created a simple helper class for caching with few lines of code:

 

public class MemoryCacher
{
  public object GetValue(string key)
  {
    MemoryCache memoryCache = MemoryCache.Default;
    return memoryCache.Get(key);
  }

  public bool Add(string key, object value, DateTimeOffset absExpiration)
  {
    MemoryCache memoryCache = MemoryCache.Default;
    return memoryCache.Add(key, value, absExpiration);
  }

  public void Delete(string key)
  {
    MemoryCache memoryCache = MemoryCache.Default;
    if (memoryCache.Contains(key))
    {
       memoryCache.Remove(key);
    }
  }
}

When you want to cache something:

memCacher.Add(token, userId, DateTimeOffset.UtcNow.AddHours(1));

 

And to get something from the cache:

var result = memCacher.GetValue(token);

 

 
NOTE: You should be aware that MemoryCache will be disposed of every time IIS recycles the app pool.

Your app pool will be recycled if your Web API:

  • Does not receive any request for more than 20 minutes
  • Hits the default IIS app pool recycle interval of 1740 minutes
  • Gets a new version of your ASP.NET Web API project copied into its IIS folder (this will auto-trigger an app pool recycle)

So you should have a workaround for this. If you don’t get a value from the cache you should grab it, for example from a database, and then store it in the cache again:

 

 var result = memCacher.GetValue(token);
 if (result == null)
 {
     // for example get token from database and put grabbed token
     // again in memCacher Cache
 }
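As a concrete sketch of that fallback pattern (tokenRepository here is a hypothetical data access component, not part of the original post):

public object GetUserIdForToken(string token)
{
    var memCacher = new MemoryCacher();

    var result = memCacher.GetValue(token);
    if (result == null)
    {
        // Cache miss (or the app pool was recycled) - reload from the database
        // and put the value back into the cache. tokenRepository is hypothetical.
        result = tokenRepository.GetUserIdByToken(token);
        if (result != null)
        {
            memCacher.Add(token, result, DateTimeOffset.UtcNow.AddHours(1));
        }
    }

    return result;
}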

 

And if you liked this post, go ahead and subscribe here to my RSS Feed to make sure you don’t miss other content like this.

 
 

The post Simple way to implement caching in ASP.NET Web API appeared first on RadenkoZec blog.


Radenko Zec: ASP.NET Web API GZip compression ActionFilter with 8 lines of code

If you are building high performance applications that consume a Web API, you often need to enable GZip / Deflate compression on the Web API.

Compression is an easy way to reduce the size of packages and at the same time increase the speed of communication between client and server.

Two common algorithms used to perform this on the Web are GZip and Deflate. These two algorithms are recognized by all web browsers and all GZip and Deflate HTTP responses are automatically decompressed.

In this picture you can see the benefits of GZip compression.

 

compression

Source : Effects of GZip compression

How to implement compression on ASP.NET Web API

There are several ways to do this. One is to do it at the IIS level. This way all responses from ASP.NET Web API will be compressed.

Another way is to implement a custom delegating handler. I have found several examples of how to do this on the internet, but it often requires writing a lot of code.

The third way is to implement your own ActionFilter which can be used at the method level, at the controller level or for the entire Web API.
 

 

Implement ASP.NET Web API GZip compression ActionFilter

For this example of around 8 lines of code I will use a very popular compression / decompression library called DotNetZip. This library can easily be downloaded from NuGet.

Now we implement the Deflate compression ActionFilter.

 public class DeflateCompressionAttribute : ActionFilterAttribute
 {

    public override void OnActionExecuted(HttpActionExecutedContext actContext)
    {
        var content = actContext.Response.Content;
        var bytes = content == null ? null : content.ReadAsByteArrayAsync().Result;
        var zlibbedContent = bytes == null ? new byte[0] : 
        CompressionHelper.DeflateByte(bytes);
        actContext.Response.Content = new ByteArrayContent(zlibbedContent);
        actContext.Response.Content.Headers.Remove("Content-Type");
        actContext.Response.Content.Headers.Add("Content-encoding", "deflate");
        actContext.Response.Content.Headers.Add("Content-Type","application/json");
        base.OnActionExecuted(actContext);
      }
  }

 

We also need a helper class to perform compression.

public class CompressionHelper
{ 
        public static byte[] DeflateByte(byte[] str)
        {
            if (str == null)
            {
                return null;
            }

            using (var output = new MemoryStream())
            {
                using (
                    var compressor = new Ionic.Zlib.DeflateStream(
                    output, Ionic.Zlib.CompressionMode.Compress, 
                    Ionic.Zlib.CompressionLevel.BestSpeed))
                {
                    compressor.Write(str, 0, str.Length);
                }

                return output.ToArray();
            }
        }
}

 

The GZipCompressionAttribute implementation is exactly the same; you only need to call GZipStream instead of DeflateStream in the helper method implementation.
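For completeness, here is a sketch of that GZip helper, assuming the same DotNetZip (Ionic.Zlib) namespace; the GZip attribute would also send "Content-encoding: gzip" instead of "deflate":

public static byte[] GZipByte(byte[] str)
{
    if (str == null)
    {
        return null;
    }

    using (var output = new MemoryStream())
    {
        using (
            var compressor = new Ionic.Zlib.GZipStream(
            output, Ionic.Zlib.CompressionMode.Compress,
            Ionic.Zlib.CompressionLevel.BestSpeed))
        {
            // Compress the raw response bytes into the output stream.
            compressor.Write(str, 0, str.Length);
        }

        return output.ToArray();
    }
}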

If we want to mark a method in a controller as Deflate compressed, we just put the ActionFilter attribute above the method like this:

public class V1Controller : ApiController
{
  
    [DeflateCompression]
    public HttpResponseMessage GetCustomers()
    {

    }

}

 

If you find some better way to perform this please let me know.

 

The post ASP.NET Web API GZip compression ActionFilter with 8 lines of code appeared first on RadenkoZec blog.


Darrel Miller: Single purpose media types and reusability

The one great thing about twitter is that you quickly find out what you failed to explain clearly :-)  My efforts in advocating for single purpose media types failed to clarify that by single purpose, I am not suggesting that these media types should not be re-usable.  Let me try and explain.

My realization was triggered by @inadarei's tweet to me,

I'm going to take the liberty of assuming that this tweet was slightly tongue-in-cheek.  However, it did highlight to me the part of my explanation that I was missing.

It is a whacky world out there

For those who have better things to do than keep up with daily craziness in Silicon Valley, the significance of this image may not be immediately apparent.  A new start-up called Yo is making the news with an application that simply sends the message "Yo" to someone in your social network.  The joke of this tweeted image is that many people are likely to copy this idea and we will have many apps on our phone where each one will send just a single one word message.

My response to @inadarei included these two tweets,

Although still not a particularly serious suggestion, the example is extremely illustrative.  By single purpose media types, I am not suggesting that each and every API should invent media types that are designed to solve just their specific problems.  It is critical to realize that media types should be targeted to solve a single generic problem that is faced by many APIs.  It is only by doing this that we can achieve the level of re-usability and interoperability that makes the web successful.

Generalize the problem

By generalizing our problem space to the notion of a "salutation" we can define a single hypothetical media type and name it application/salutation+json.

To satisfy the functional requirements of an application that simply displays a single word message to an end user, all we need is a payload that looks something like:

{
   "message" : "Yo"
}

This media type could then be delivered by any API that wishes to transmit a single word message.  Now that the message has been generalized, there is no need to have a different client for each API.  In the same way that RSS feeds allowed us to use one client application to consume blog posts from many different blogs, so our salutation media type enables just a single client application for all APIs that support the format.

But that was a dumb example

Let me provide a more realistic scenario.   For many years I've been building ERP systems, and for the last 7 years I've been working on building an ERP API that uses media types and link relations to communicate information to a desktop client application.  For a period of time, I considered inventing my own media types for Invoices, Sales Orders, Customers, Purchase Orders, etc.  However, I quickly realized that the overhead of creating written specifications for the 500 or so entities that were in our ERP product would be a mammoth and very painful task.  I would also likely never see any re-use of those media types.  It is highly unlikely that I could convince any other ERP vendor to re-use our Invoice format, as it would likely contain many capabilities that were specific to our API, and I'm sure they would believe that their format is better!

Don't those standards already exist?

As an industry we have tried on numerous occasions to create universal standards for interchange of business documents.  (e.g Edifact, ANSI X12, UBL).  None of these efforts have come close to having the adoption like HTML has, despite having massive amounts of money poured into convincing people to adopt them.

The problem with these previous efforts is they tried to bite off a bigger problem than they could chew.  In order to manage the size of problem they were trying to address, they first invented a meta-format and then applied schemas to that format to produce consistent documents for each business scenario.  The result was horribly complicated, with a huge barrier to entry.  If you don't believe me, here is the list of fields used by a UBL invoice.

Be less ambitious

Now imagine a much simpler scenario for just invoicing information.  Imagine if we could agree on a simple read-only format containing invoice information.  If services like Amazon, NewEgg, Staples, Zappos, etc, could expose their invoice information in this shared format, as well as the normal HTML and PDF, then this new format could then be consumed easily by services like Expensify, Mint, Quicken and what ever other services might help you manage your finances.   This scenario would be an achievable goal in my opinion and one that could quickly deliver real value.

Profiles can do that

There is no theoretical reason why this same goal could not be achieved by using generic hypermedia types and profiles.  However, you could now have the situation where Amazon likes using Hal, Staples decides on Siren for their company standard and NewEgg chooses JSON-LD.  The theory of profiles, as I understand it, is that everyone should be able to consistently apply the same "invoice" profile across multiple base generic media types.  However, the consumers like Expensify and Mint are now faced with dealing with all these different base formats combined with profiles.  This seems like a significant amount of complexity for minimal gain.

But what about exploding media types

A common objection to creating media types that have a limited scope is the fear of "an explosion of media types".  The main concern here is multiple people creating media types that address the same purpose but have trivial differences.  Having one invoice format that has a property called "InvoiceDate" and another that has "dateInvoiced" is really not helping anyone. 

bigbang

Ironically, the competing generic hypermedia types that were partly a result of people trying to avoid creating an explosion of media types are now suffering from the "trivially different" problem.  Also, it is quite likely that the wide usage of schemas and profiles will suffer from APIs defining their own variants of semantically similar documents.

The important goal

Regardless of whether developers choose to use single purpose media types or generic hypermedia types with profiles, it is essential that we focus on creating artifacts that can be re-used across multiple APIs to enable sharing and integration that is "web scale".

Image Credit: Big bang https://flic.kr/p/cS8Hed 


Dominick Baier: Resource/Action based Authorization for OWIN (and MVC and Web API)

Authorization is hard – much harder than authentication because it is so application specific. Microsoft went through several iterations of authorization plumbing in .NET, e.g. PrincipalPermission, IsInRole, Authorization configuration element and AuthorizeAttribute. All of the above are horrible approaches and bad style since they encourage you to mix business and authorization logic (aka role names inside your business code).

WIF’s ClaimsPrincipalPermission and ClaimsAuthorizationManager tried to provide better separation of concerns – while this was a step in the right direction, the implementation was “sub-optimal” – based on a CLR permission attribute, exception based, no async, bad for unit testing etc…

In the past Brock and I worked on more modern versions that integrate more nicely with frameworks like Web API and MVC, but with the advent of OWIN/Katana there was a chance to start over…

Resource Authorization Manager & Context
We are mimicking the WIF resource/action based authorization approach – which proved to be general enough to build your own logic on top of. We removed the dependency on System.IdentityModel and made the interface async (since you will probably need to do I/O at some point). This is the place where you will centralize your authorization policy:

public interface IResourceAuthorizationManager
{
    Task<bool> CheckAccessAsync(ResourceAuthorizationContext context);
}

 

(there is also a ResourceAuthorizationManager base class with some easy to use helpers for returning true/false and evaluations)

The context allows you to describe the actions and resources as lists of claims:

public class ResourceAuthorizationContext
{
    public IEnumerable<Claim> Action { get; set; }
    public IEnumerable<Claim> Resource { get; set; }
    public ClaimsPrincipal Principal { get; set; }
}
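The post does not show the ChinookAuthorization class used below, but a minimal policy implementation against the interface above might look like this (the resource/action names and the role check are illustrative assumptions):

public class ChinookAuthorization : IResourceAuthorizationManager
{
    public Task<bool> CheckAccessAsync(ResourceAuthorizationContext context)
    {
        var action = context.Action.First().Value;
        var resource = context.Resource.First().Value;

        // Only admins may edit albums; any authenticated user may do everything else.
        if (resource == "Album" && action == "Edit")
        {
            return Task.FromResult(context.Principal.IsInRole("Admin"));
        }

        return Task.FromResult(context.Principal.Identity.IsAuthenticated);
    }
}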

 

Middleware
The corresponding middleware makes the authorization manager available in the OWIN environment:

public void Configuration(IAppBuilder app)
{
    var cookie = new CookieAuthenticationOptions
    {
        AuthenticationType = "Cookie",
        ExpireTimeSpan = TimeSpan.FromMinutes(20),
        LoginPath = new PathString("/Login"),
    };
    app.UseCookieAuthentication(cookie);
 
    app.UseResourceAuthorization(new ChinookAuthorization());
}

 

Usage
Since the authorization manager is now available from the environment (key: idm:resourceAuthorizationManager) you can get ahold of it from anywhere in the pipeline, construct the context and call the CheckAccessAsync method.

The Web API and MVC integration packages provide a ResourceAuthorize attribute for declarative checks:

[ResourceAuthorize(ChinookResources.AlbumActions.View, ChinookResources.Album)]

 

And several extension methods for HttpContextBase and HttpRequestMessage, e.g.:

if (!HttpContext.CheckAccess(
    ChinookResources.AlbumActions.Edit,
    ChinookResources.Album,
    id.ToString()))
{
    return new HttpUnauthorizedResult();
}

 

or..

var result = Request.CheckAccess(
    ChinookResources.AlbumActions.Edit,
    ChinookResources.Album,
    id.ToString());

 

Testing authorization policy
Separating authorization policy from controllers and business logic is a good thing; centralizing the policy into a single place also has the nice benefit that you can now write unit tests against your authorization rules, e.g.:

[TestMethod]
public void Authenticated_Admin_Can_Edit_Album()
{
    var ctx = new ResourceAuthorizationContext(User("test", "Admin"),
        ChinookResources.AlbumActions.Edit,
        ChinookResources.Album);
    Assert.IsTrue(subject.CheckAccessAsync(ctx).Result);
}

 

or…

[TestMethod]
public void Authenticated_Manager_Cannot_Edit_Track()
{
    var ctx = new ResourceAuthorizationContext(User("test", "Manager"),
        ChinookResources.TrackActions.Edit,
        ChinookResources.Track);
    Assert.IsFalse(subject.CheckAccessAsync(ctx).Result);
}

 

Code, Samples, Nuget
The authorization manager, context, middleware and integration packages are part of Thinktecture.IdentityModel – see here.

The corresponding Nuget packages are:

..and here’s a sample using MVC (if anyone wants to add a Web API to it – send me a PR).


Filed under: ASP.NET, IdentityModel, Katana, OWIN, WebAPI


Darrel Miller: Single purpose media types and caching

My recent post asking people to refrain from creating more generic hypermedia types sparked some good conversation on twitter between @mamund, @cometaj2, @mogsie, @inadarei and others.  Whilst thinking some more on the potential benefits of single purpose media types versus generic hypermedia types, I realized there is a correlation between single purpose media types and representation lifetimes.  I thought it might be worth adding to the conversation, but there is no way I could fit it in 140 chars, so I’m posting it here.

Age matters

When I say representation lifetime, I am referring to how long a representation can be considered to be still fresh, from the perspective of caching.   The lifetime of a representation can be explicitly defined by either setting the Expires header or by setting the max-age parameter of the cache-control header.  If neither of these values are set then a client can choose to use heuristics to determine how long the response could still be fresh for.
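For example, in ASP.NET Web API an explicit lifetime can be set on a response like this (a quick sketch, not from the original post; dashboardData is a placeholder):

public HttpResponseMessage Get()
{
    var response = Request.CreateResponse(HttpStatusCode.OK, dashboardData);

    // Tell caches this representation stays fresh for one hour.
    response.Headers.CacheControl = new CacheControlHeaderValue
    {
        Public = true,
        MaxAge = TimeSpan.FromHours(1)
    };

    return response;
}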

BestBeforeDate

When using single purpose media types, those types often hint at representations that should have different lifetimes.  If you are building an HTML dashboard that contains images of charts showing KPIs (key performance indicators), then the HTML will likely have a much longer lifetime than that of the images.

However, if you are building a dynamic HTML application, then the HTML is likely to have a shorter lifespan than the images displayed on it.  Media types that provide metadata like CSS, or API discovery documents are likely to have long lifetimes, whereas media types that display status information are likely to be very short lived.

Hints for client heuristics

When I was reading the specification for apisjson I noticed that they had explicitly added a clause to say that if no lifetime information was provided then a lifetime of 7 days would be an appropriate value.  I think the idea of a media type specification explicitly stating suggested lifetime values is very interesting.  When chatting with Steven Willmott at the API Craft Meetup SF he mentioned that they had got the idea from the standard convention of caching robots.txt for 24 hours. 

Just a thought

As I mentioned, this is simply an overly long tweet, rather than a blog post.  Nothing ground breaking, but I find that developers often forget about the value of setting caching headers.  If media type designers can provide guidance on what appropriate values might be then developers might pay a little more attention and the Internet might just get a little faster.

Image Credit: Best before date https://flic.kr/p/6MAMum


Darrel Miller: XSLT is easy, even for transforming JSON!

Most developers I talk to will cringe if they hear the acronym XSLT.  I suspect that reaction is derived from some past experience where they have seen some horrendously complex XML/XSLT combination.  There is certainly lots of that around. However, for certain types of document transformations, XSLT can be a very handy tool and with the right approach, and as long as you avoid edge cases, it can be fairly easy.

When I start building an XSLT transform, I always start with the “identity” transform,

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
<xsl:output method="xml" indent="yes" />
<xsl:template match="/ | @* | node()"> <xsl:copy> <xsl:apply-templates select="* | @* | node()" /> </xsl:copy> </xsl:template>
</xsl:stylesheet>

OptimusPrimeThe identity transform simply traverses the nodes in the input document and copies them into the output document.

Make some changes

In order to make changes to the output document you need to add templates that will do something other than simply copy the existing node.

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
   
<xsl:output method="xml" indent="yes" />
<xsl:template match="/ | @* | node()"> <xsl:copy> <xsl:apply-templates select="* | @* | node()" /> </xsl:copy> </xsl:template>
<!-- Change something --> <xsl:template match="foo"> <xsl:element name="bar"> <xsl:apply-templates select="* | @* | node()" /> </xsl:element> </xsl:template>
</xsl:stylesheet>

This template matches any element named foo and in its place creates an element named bar that contains a copy of everything that foo contained.  XSLT will always use the template that matches most specifically.

Given the above XSLT, an input XML document like this,

<baz>  
    <foo value="10" text="Hello World" />
</baz>

would be transformed into

<baz>
     <bar value="10" text="Hello World" />
</baz>

And add more changes…

The advantage of XSLT over doing transformations in imperative code is that adding more transformations to the document doesn’t make the XSLT more complex, just longer. For example, I could move the foo element inside another element by adding a new matching template.  The original parts of the transform stay unchanged. 

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
        <xsl:output method="xml" indent="yes" />

        <xsl:template match="/ | @* | node()">
            <xsl:copy>
                <xsl:apply-templates select="* | @* | node()" />
           </xsl:copy>
        </xsl:template>

        <xsl:template match="foo">
           <xsl:element name="bar">
                <xsl:apply-templates select="* | @* | node()" />
           </xsl:element>
        </xsl:template>

        <!-- Additional template that does not change previous -->
        <xsl:template match="baz">
            <xsl:element name="splitz">
               <xsl:copy>
                 <xsl:apply-templates select="* | @* | node()" />
              </xsl:copy>
            </xsl:element>
        </xsl:template>

</xsl:stylesheet>

With imperative code, adding additional transformations often requires doing refactoring of existing code.  Due to XSLT being from a more functional/declarative heritage, it tends to stay cleaner when you add more to it.

Just because I can…

And for those of you who love JSON too much to ever go near XSLT, below is some sample code that takes a JSON version of my sample document and applies the XSLT transform to it using the XML support in JSON.NET.

Starting with this JSON

{
    "baz": {
        "foo": {
            "value": "10",
            "text": "HelloWorld"
        }
    }
}

we end up with

{
    "splitz": {
        "baz": {
            "bar": {
                "value": "10",
                "text": "Hello World"
            }
        }
    }
}

Here’s the code I used to do this.  Please don’t take this suggestion too seriously.  I suspect the performance of this approach would be pretty horrific.  However, if you have a big chunk of JSON that you need to do a complex, offline transformation on, it might just prove useful.

var jdoc = JObject.Parse("{ 'baz' : { 'foo' : { 'value' : '10', 'text' : 'Hello World' } }}");
            
// Convert Json to XML
var doc = (XmlDocument)JsonConvert.DeserializeXmlNode(jdoc.ToString());

// Create XML document containing Xslt Transform
var transform = new XmlDocument();
transform.LoadXml(@"<?xml version='1.0' encoding='utf-8'?>
                    <xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform' >
                        <xsl:output method='xml' indent='yes' omit-xml-declaration='yes' />

                        <xsl:template match='/ | @* | node()'>
                            <xsl:copy>
                                <xsl:apply-templates select='* | @* | node()' />
                            </xsl:copy>
                        </xsl:template>

                        <xsl:template match='foo'>
                            <xsl:element name='bar'>
                                <xsl:apply-templates select='* | @* | node()' />
                            </xsl:element>
                        </xsl:template>
                        <xsl:template match='baz'>  
                            <xsl:element name='splitz'>
                                <xsl:copy>
                                    <xsl:apply-templates select='* | @* | node()' />
                                </xsl:copy>
                        </xsl:element></xsl:template>
                    </xsl:stylesheet>");

//Create compiled transform object that will actually do the transform.
var xslt = new XslCompiledTransform();
xslt.Load(transform.CreateNavigator());

// Transform our Xml-ified JSON
var outputDocument = new XmlDocument();
var stream = new MemoryStream();
xslt.Transform(doc, null, stream);
stream.Position = 0;
outputDocument.Load(stream);

// Convert back to JSON :-)
string jsonText = JsonConvert.SerializeXmlNode(outputDocument);

Duck-billed Platypus

Image credit: Transformer https://flic.kr/p/2LK2ph
Image credit: Platypus https://flic.kr/p/8tYptD


Darrel Miller: Please, no more generic hypermedia types

This opinion has been stewing for a couple of years now, but following an excellent conversation I had with Ted Young the other evening at the API Craft San Francisco event, I think it is time to have more discussion around this subject.

A little bit of history

I have been a big supporter of the HAL media type ever since Mike Kelly showed me his first sample on the freenode #REST IRC channel. I actively pushed for the use of HAL within several teams at Microsoft and a number of other large corporations. I saw HAL as an opportunity to introduce people to hypermedia using a simple, flexible, standardized format.  HAL has its flaws, but it served my purpose.

Since HAL was introduced, a number of other hypermedia types have been created, including Collection+JSON, Siren, JSON-LD, JsonAPI, Mason, UBER, OData and probably a few others that I have forgotten.

I think it is excellent that there is experimentation going on in this area. Much of this work can be used as the basis for future media type designs. The more people who have experience creating media types the better. However, there is a trend emerging where we are repeatedly creating general purpose hypermedia types that attempt to provide a single solution for all the resources in a API.

VanillaIceCream

What's the problem?

Developers who wish to build a hypermedia API are now required to make a decision up front on whether they want to use HAL, or Siren, or Mason, or JSON-LD, or OData, or UBER. All these designs are only incrementally different than each other and simply reflect the preferences and priorities of their authors.

The fragmentation of our already fairly small community of developers who are using hypermedia formats is really not helping us advance the adoption of hypermedia. Additionally, by trying to build hypermedia types that are sufficiently generic to support a wide range of use-cases, we have been forced to introduce secondary mechanisms to convey application semantics. Once a developer has chosen a hypermedia format, they must now choose a profile format, or a schema language, or create extension link relations in order to communicate the remainder of the semantics that the general purpose hypermedia formats do not support.

IceCreamChoices

I intentionally left Collection+JSON out of the list of competing media types because I see C+J as different, as it was designed to represent a simple list of things. This is a very focused goal and yet one that almost every application has a need for.

So what does work?

I realize that this example is going to turn off many people, but there is an elephant in the room that we just can't continue to ignore. The human web is built around a fairly small set of media types that have narrowly focused goals.

  • text/html provides read-only textual content to be presented to an end user.
  • application/x-www-form-urlencoded provides a way of transmitting user input to an origin server.
  • text/css provides hints to the user-agent on how to render the content of the HTML document.
  • image/(png|jpeg|gif) provides bitmap based images
  • image/svg provides vector based images
  • application/javascript provides source code for client side scripts

It is the combination of these narrowly focused media types that enable web applications to produce a huge variety of user experiences.

The UNIX philosophy

In my opinion the best media types are the ones that are designed to solve a single problem, solve it well and then allow the combination of those media types. This allows the media type to remain fairly simple. Simple to create, simple to parse and simple to replace if a better format comes along. It is also easier to support multiple similar formats. The web only supports one text/html format, but it supports several image formats.

IceCreamToppings

Examples of potential media types

During my years of building hypermedia based systems, I have run into many situations that would have benefited from having specialized media types. Here are some examples:

  • Discovery document : An entry point document that allows a client to discover other resources that exist within a system. e.g json-home
  • List of things : A tabular list of pieces of data. e.g. Collection+Json, text/csv
  • Data entry form : A representation of a set of data elements that have types, constraints, validation rules, intended for capturing information from the user. e.g. HTML forms are a very primitive example
  • Captured event stream : A sequence of events captured by a client application potentially used for replaying on some remote system. e.g. ActivityStreams, Json-patch
  • Error document : document used for reporting errors resulting from a HTTP Request. e.g. http-problem, vnd.error+(xml|json)
  • Operation Status document : Document which describes the current status of a long running operation. application/status+(xml|json)
  • Banded Report document : Representation of the set of data used by report writing tools to generate printed reports
  • Query description : Query language that selects and filters a subset of data e.g. application/sql, application/opensearchdescription+xml
  • Datapoints : for driving dashboard widgets and graphs
  • Security permissions management : tasks, claims, users
  • Patch format : Apply a delta set of updates to a target resource e.g. Json-patch

Sundae

Call to action

I'd like to see those developers who do have experience building hypermedia types working on these kinds of focused media types instead of creating competing, one-size-fits-all formats.
I'd like to see widget and component developers building native support for consuming these specialized media types.
I'd like to see more application developers identifying opportunities for creating these specialized media types.
I'd like to see API Commons filled with a wide variety of media type specifications that I can pick and choose as building blocks for building my API.

I want to have a future where I can expose my data in a standardized format and then simply connect hypermedia enabled client components that will consume those formats. Now that would improve productivity.

That's my opinion, I'd love to hear yours.

Image Credit: Vanilla Ice cream https://flic.kr/p/9JyfVG
Image Credit: Sundae https://flic.kr/p/3aK9TU
Image Credit: Toppings https://flic.kr/p/fuY7GZ


Darrel Miller: Self-descriptive, isn't. Don't assume anything.

With the recent surge of interest in hypermedia APIs I am beginning to see the term “self-descriptive” thrown around quite frequently.  Unfortunately, the meaning of self-descriptive is not exactly self-descriptive, leading to misuse of the term. 

Consider the following HTTP requests,

Example 1:

GET /address/99
=> 200 OK
Content-Type: application/json
Content-Length: 508

{
"_meta" : {
	"street" : { 	"type" : "string", 
		     	"length" : 80, 
		     	"description" : "Street address"},
	"city" :   { 	"type" : "string", 
		     	"length" : 20, 
		     	"description" : "City name"},
	"postcode" : { 	"type" : "string", 
		       	"length" : 11, 
		       	"description" : "Postal Code / Zip Code"},
	"country" : {	"type" : "string", 
			"length" : 15, 
			"description" : "Country name"},
	},
"address" : {
	"street" : "1 youville way",
	"city" : "Mysteryville",
	"postcode" : "H3P 2Z9",
	"country" : "Canada"
	}
}


Example 2:

GET /address/99
=> 200 OK
Content-Type: application/vnd.gabba.berg
Content-Length: 90

<berg>
   <blurp filk="iggy">ababa</blurp>
   <bop>
      <bip>yerk</bip>
   </bop>
</berg>


I suspect a fair number of people will be surprised when I make the claim that from the perspective of self-descriptive HTTP messages, the first message is not self-descriptive and the second one is. 

The first may contain more descriptive content, but it doesn't use the standardized methods provided to us by HTTP to identify the semantics of the content.  The client is forced to make assumptions.  The second one is explicit about identifying the meaning of the payload.

Nametag

Identify yourself

Self-descriptive in HTTP does not mean the message describes itself.  It means that the message depends on semantic identifiers, using mechanisms defined by HTTP (e.g. media-types and link relations) to convey the complete meaning of the message. This allows client application to know whether it can understand the incoming message. 

The first example contains all kinds of metadata which attempts to describe the actual data in the message.  However, how can the client know if it is able to interpret the metadata?   It reminds me of the first French language course I took in Quebec, where the teacher started providing instruction in French!  Fortunately, humans are pretty intelligent creatures, software applications, not so much. 

Declare your semantics

In example 1, the media type in the content type header is declared as "application/json".  Unfortunately that tells me nothing about the meaning of the information in the message body.  The client can process the content as JSON, but the message is telling it nothing else about the meaning of the message.  Allowing a client to assume that, because it has retrieved the representation from /address/99, the response will contain information about an address is a violation of the self-descriptive constraint.

Why yes, I do speak Klingon

Example 2, which at first glance appears completely unintelligible to a developer, provides a media type which, in theory, should be registered with IANA, and therefore I should be able to find a written specification that explains what all those weird attributes and elements mean.  Once I have read the specification, I can write code in my client application to process content that is identified as "application/vnd.gabba.berg". 

There is no magic

I get the impression that some developers perceive hypermedia and self-descriptiveness as some magical property that will allow clients to perform tasks that they previously had no idea how to do. 

A client can only process media types it understands.  A web browser knows how to render HTML, follow links, fill forms and run script.  The browser is completely ignorant of the fact that one HTML page might be doing banking transactions and another submitting an order for a year's supply of Shamwow products. 

The effect might be magical, but the reality is that the hypermedia driven clients can only do exactly what they have been coded to do. 

 

Image credit: Name tag https://flic.kr/p/27Y1J9


Darrel Miller: Distributed Web API discovery

The site apisjson.org defines a specification for creating a document that declares the existence of an API on the web.   This document provides some identification information about the API and links to documentation and to the actual API root URL.  It also supports pointing to other resources like API design metadata documents and contact information for maintainers of the API.

earth

Here is a fairly minimal version of what the declaration document might look like for the Runscope API.

{
  "Name": "Runscope APIs",
  "Description": "APIs for interacting with Runscope Tools",
  "Tags": ["testing", "http", "web api", "debugging","monitoring"],                            
  "Created": "06/12/2014",                                                                  
  "Modified": "06/12/2014",
  "Url": "http://api.runscope.com/apis.json",
  "SpecificationVersion" : "0.14",
  "Apis": [
    {
      "Name": "Runscope API",
      "Description": "An API for interacting with the Runscope HTTP debugging, testing and monitoring tools",
      "humanUrl": "https://www.runscope.com/docs/api/overview",
      "baseUrl": "https://api.runscope.com/",
      }]
    }
  ]
}

There is an experimental search engine available here http://apis.io/ that maintains a list of links to these API declaration documents and makes the contents searchable.  If I were to take the JSON above and expose it at http://api.runscope.com/apis.json and then submit that URL, the search engine would index my declaration file and allow users to find it.

The important bit

One of the best parts of this specification, regardless of the fact that it is still immature and I'm sure has many more revisions in front of it, is that the authors are really trying to embrace the way the web is supposed to work. 

  • They have identified the focused scenario of "api discovery" that can take advantage of a standardized solution. 
  • They are taking no dependencies on tooling or frameworks. 
  • They are considering registering this as a standardized IANA media type
  • They are recommending the use of a standardized link relation types described-by to allow APIs to point to their own declaration document to help make APIs more self-describing. 
  • They are considering making use of well-known URIs as a way to allow crawlers to find these API declaration documents without needing them to be submitted manually.
  • The specification is being developed in the open. They have a github repo and a Google group where they are accepting and acting on feedback they receive and they are iterating quickly.   

This is how web work should be done.  By building on the conventions and standards that others have already produced instead of re-inventing and fragmenting.  Kudos to Kin Lane and Steven Willmott for pioneering this effort.  I look forward to many more people getting involved to make this successful.

Now if only we could find a better name for the format :-)

Image credits: Earth https://flic.kr/p/8FnV8


Darrel Miller: Returning raw JSON content from ASP.NET Web API

In a previous post I talked about how to send raw JSON to a web API and consume it easily.  This is a non-obvious process because ASP.NET Web API is optimized for sending and receiving arbitrary CLR objects that then get serialized by the formatters in the request/response pipeline.  However, sometimes you just want to have more direct control over the format that is returned in your response.  This post talks about some ways you can regain that control.

Serialize your JSON document

jason2

If the only thing you want to do is take some plain old JSON and return it, then you can rely on the fact that the default JsonMediaTypeFormatter knows how to serialize JToken objects.

public class JsonController : ApiController
{
    public JToken Get()
    {
        JToken json = JObject.Parse("{ 'firstname' : 'Jason', 'lastname' : 'Voorhees' }");
        return json;
    }

}

If you want a bit more control over the returned message then you can peel off a layer of convenience and return an HttpResponseMessage with an HttpContent object.

Derive from HttpContent for greater control

The HTTP object model that is used by ASP.NET Web API is quite different than many other web frameworks because it makes an explicit differentiation between the response message and the payload body that is contained in the response message.  The HttpContent class is designed as an abstract base class for the purpose of providing a standard interface to any kind of payload body.

Knobsandswitches2

Out of the box there are a number of specialized HttpContent classes: StringContent, ByteArrayContent, FormUrlEncodedContent, ObjectContent and a number of others.  However, it is fairly straightforward to create our own derived classes to deliver different kinds of payloads.

 

To return JSON content you can create a JsonContent class that looks something like this,

public class JsonContent : HttpContent
{
    private readonly JToken _value;

    public JsonContent(JToken value)
    {
        _value = value;
        Headers.ContentType = new MediaTypeHeaderValue("application/json");
    }

    protected override Task SerializeToStreamAsync(Stream stream,
        TransportContext context)
    {
        var jw = new JsonTextWriter(new StreamWriter(stream))
        {
            Formatting = Formatting.Indented
        };
        _value.WriteTo(jw);
        jw.Flush();
        return Task.FromResult<object>(null);
    }

    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return false;
    }
}


This JsonContent class can then be used like this,

public class JsonContentController : ApiController
{
    public HttpResponseMessage Get()
    {
        JToken json = JObject.Parse("{ 'firstname' : 'Jason', 'lastname' : 'Voorhees' }");
        return new HttpResponseMessage()
        {
            Content = new JsonContent(json)
        };
    }
}

This approach provides the benefit of allowing you to either manipulate the HttpResponseMessage in the controller action, or, if there are other HttpContent.Headers that you wish to set, to do that in the JsonContent class.

Making a request for this resource would look like this,

> GET /JsonContent HTTP/1.1
> User-Agent: curl/7.28.1
> Host: 127.0.0.1:1001
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 43
< Content-Type: application/json; charset=utf-8
< Server: Microsoft-HTTPAPI/2.0
< Date: Wed, 11 Jun 2014 21:27:50 GMT
<
{"firstname":"Jason","lastname":"Voorhees"}

Wrapping HttpContent for Additional Transformations

Another nice side-effect of using HttpContent classes is that they can be used as wrappers to perform transformations on content.  These wrappers can be layered and will just automatically work whether the content is buffered or streamed directly over the network.  For example, the following is possible,

public HttpResponseMessage Get()
{
    JToken json = JObject.Parse("{ 'property' : 'value' }");
    return new HttpResponseMessage()
    {
        Content = new EncryptedContent(new CompressedContent(new JsonContent(json)))
    };
}

This would create the stream of JSON content, compress it, encrypt it and set all the appropriate headers.
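The CompressedContent and EncryptedContent classes above are hypothetical, but a gzip-based CompressedContent wrapper could be sketched like this (using System.IO.Compression rather than any particular library):

public class CompressedContent : HttpContent
{
    private readonly HttpContent _innerContent;

    public CompressedContent(HttpContent innerContent)
    {
        _innerContent = innerContent;

        // Copy the inner content's headers and then mark the payload as gzip encoded.
        foreach (var header in innerContent.Headers)
        {
            Headers.TryAddWithoutValidation(header.Key, header.Value);
        }
        Headers.ContentEncoding.Add("gzip");
    }

    protected override async Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        // Stream the inner content through a compressing wrapper onto the network stream.
        using (var gzip = new GZipStream(stream, CompressionMode.Compress, leaveOpen: true))
        {
            await _innerContent.CopyToAsync(gzip);
        }
    }

    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return false;
    }
}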

Separation of concerns promotes reuse

By focusing on the payload as an element independent of the response message it becomes easier to re-use the specialized content classes for different resources.  In the APIs I have built I have had success creating many other content classes like:

Image credit : Jason https://flic.kr/p/gQEp5X
Image credit : Knobs and switches https://flic.kr/p/2YJC3Y


Ali Kheyrollahi: BeeHive Series - Part 3: BeeHive 0.5, RabbitMQ and more

Level [T4]

BeeHive is a friction-free library for doing Reactor Cloud Actors - effortlessly. It defines abstractions for the message, queue and the actors, and all you have to do is define your actors and connect their dots using subscriptions. If this is the first time you are reading about BeeHive, you could have a look at the previous posts, but basically a BeeHive Actor (technically a Processor Actor) is very simple:

[ActorDescription("TopicName-SubscriptionName")]
public class MyActor : IProcessorActor
{
public Task<IEnumerable<Event>> ProcessAsync(Event evnt)
{
// impl
}
}
All you do is to consume a message, do some work and then return typically one, sometimes zero and rarely many events back.
A few key things to note here.

Event

First of all, an Event is an immutable, unique and timestamped message which documents a significant business event. It has a string body which normally is a JSON serialisation of the actual message object - but it does not have to be.

So usually messages are arbitrary bytes; why is it a string here? While it was possible to use byte[], if you need to send binary blobs or you need custom serialisation, you are probably doing something wrong. Bear in mind, BeeHive is targeted at solutions that require scale, High Availability and linearisation. If you need to attach a big binary blob, just drop it in a key value store using IKeyValueStore and put the link in your message. If it is small, use Base64 encoding. Also your messages need to be very simple DTOs (and by simple I do not mean small, but a class with getters and setters); if you are having problems serialising them then again, you are doing something wrong.

Queue naming

BeeHive uses a naming convention for queues, topics and subscriptions. Basically it is in the format of TopicName-SubscriptionName. So there are a few rules with this:
  • Understandably, TopicName or SubscriptionName should not contain hyphens
  • If the value of TopicName and SubscriptionName is the same, it is a simple queue and not a publish-subscribe queue. For example, "OrderArrived-OrderArrived"
  • If you leave off the SubscriptionName then you are referring to the topic. For example "OrderArrived-".
The queue name is represented by the QueueName class. You can construct queue names using its static methods:

var n1 = QueueName.FromSimpleQueueName("SimpleQ"); // "SimpleQ-SimpleQ"
var n2 = QueueName.FromTopicName("Topic"); // "Topic-"
var n3 = QueueName.FromTopicAndSubscriptionName("topic", "Sub"); // "Topic-Sub"

There is a QueueName property on the Event class. This property defines where to send the event message. The queue name must be the name of the topic or simple queue name.
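Putting the pieces above together, a consuming actor might look roughly like this. This is only a sketch: OrderDto and the use of JSON.NET for the body are my assumptions, the Body property name is inferred from "it has a string body", and constructing follow-up events is omitted because the Event API beyond that is not shown in this post:

[ActorDescription("OrderArrived-Fulfilment")]
public class FulfilmentActor : IProcessorActor
{
    public Task<IEnumerable<Event>> ProcessAsync(Event evnt)
    {
        // The body is normally a JSON serialisation of a simple DTO (property name assumed).
        var order = JsonConvert.DeserializeObject<OrderDto>(evnt.Body);

        // ... do some work with the order ...

        // Return no follow-up events here; a real actor would typically return one.
        return Task.FromResult(Enumerable.Empty<Event>());
    }
}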

IEventQueueOperator

This interface got some makeover in this release. I have not been happy with the interface as it had some inconsistencies - especially in terms of creating queues. Thanks to Adam Hathcock who reminded me, now this is done.

With QueueName's ability to differentiate topics and simple queues, this value needs to be either the name of the simple queue (in the example above "SimpleQ") or the conventional topic name (in the example above "Topic-").

So here is the interface(s) as it stands now:

public interface ISubscriptionOperator<T>
{
    Task<PollerResult<T>> NextAsync(QueueName name);
    Task AbandonAsync(T message);
    Task CommitAsync(T message);
    Task DeferAsync(T message, TimeSpan howLong);
}

public interface ITopicOperator<T>
{
    Task PushAsync(T message);
    Task PushBatchAsync(IEnumerable<T> messages);
}

public interface IQueueOperator<T> : ITopicOperator<T>, ISubscriptionOperator<T>
{
    Task CreateQueueAsync(QueueName name);
    Task DeleteQueueAsync(QueueName name);
    Task<bool> QueueExists(QueueName name);
}

public interface IEventQueueOperator : IQueueOperator<Event>
{
}
The main changes were made to IQueueOperator<T>, passing the QueueName, which made it simpler.

RabbitMQ Roadmap

BeeHive targets cloud frameworks. IEventQueueOperator and main data structures have been implemented for Azure. Next is AWS.

Amazon Web Services (AWS) provides Simple Queue Service (SQS) which only supports simple send-receive scenarios and not Publish-Subscribe cases. With this in mind, it is most likely that other message brokers will be used although a custom implementation of pub-sub based on Simple Notification Service (SNS) has been reported. Considering RabbitMQ is by far the most popular message broker out there (is it not?) it is sensible to pick this implementation first.

The RabbitMQ client for .NET has a very simple API and working with it is very easy. However, the connection implementation leaves a lot to be desired. EasyNetQ has a sophisticated connection implementation that covers dead connection refreshes and catering for round-robin in a High-Availability scenario. Using a full framework just for the connection is not really an option, hence I need to implement something similar.

So for now, I am releasing an alpha version without the HA and connection refresh to get community feedback. So please do ping me with what you think.

Since this is a pre-release, you need to use -Pre to get it installed:

PM> Install-Package BeeHive.RabbitMQ -Pre


Darrel Miller: There is Unicode in your URL!

In our Runscope HipChat room a few weeks ago, I was asked about Unicode encoding in URLs.  After a quick sob about why I never get asked the easy questions, I decided it was time to do some investigating. 

I had explored this subject in the past whilst trying to get Unicode support working in my URI Templates library.  At that time I had got lost in the mysteries of Unicode normalization and never actually got to the bottom of the problem.  This time I was determined.

Get to the point, the Code Point

To cut a long story short, the solution for what I believe to be the common scenario is fairly straightforward.  To support Unicode in a URI you simply need to convert the Unicode "code point" into UTF-8 bytes and then percent-encode those bytes.  The percent-encoded bytes can then be embedded directly in the URL.

As an example, consider we want to embed the character that has the code point \u263A into our URI.  We can create a string that has that code point in C# like this,

var s = "Hello World \u263A";

Show me the bytes

Now that string can be converted to UTF-8 bytes like this,

var bytes = Encoding.UTF8.GetBytes(s);

and finally they can be percent encoded like this,

var encodedstring = string.Join("",bytes.Select(b => b > 127 ? 
Uri.HexEscape((char)b) : ((char)b).ToString()));

The trick here is that we only want to do the HexEscape for bytes that are part of a multi-byte UTF-8 encoding of a code point.  UTF-8 guarantees that all bytes that are part of a multi-byte character encoding will have the high bit set and therefore will be greater than 127. 

One caveat to be aware of is that because you are going to be including this string in a URI, you should either call Uri.EscapeUriString() or Uri.EscapeDataString() before doing the Unicode escaping or you could end up double escaping the Unicode escaping.

A complete example

Here is a small ScriptCS example that shows how this could be used,

#r "system.net.http.dll"
using System.Net.Http;

var httpClient = new HttpClient();

var url = EncodeUnicode("http://stackoverflow.com/search?q=hello+world\u263A");
var response = httpClient.GetAsync(url).Result;

Console.WriteLine(response.StatusCode);

public string EncodeUnicode(string s) {
  var bytes = Encoding.UTF8.GetBytes(s);
  var encodedstring = string.Join("",bytes.Select(b => b > 127 ? 
           Uri.HexEscape((char)b) : ((char)b).ToString()));
  return encodedstring;
}

This produces the following request,

GET http://stackoverflow.com/search?q=hello+world%E2%98%BA HTTP/1.1
Host: stackoverflow.com

The long story

One of the reasons I was originally confused when first looking into this is that Unicode supports the ability to generate the same character in multiple different ways.  This happens because some characters can be combined into composite characters.  Technically, before percent-encoding the bytes, a normalization process should occur to ensure that sorting and comparison of encoded Unicode characters work as expected.  I suspect a large number of use cases don't need this process, but it is worth being aware of it.
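As a quick illustration of why normalization matters, here is a small ScriptCS-style sketch using the built-in String.Normalize method:

using System.Text;

var composed = "\u00E9";      // é as a single precomposed code point
var decomposed = "e\u0301";   // 'e' followed by a combining acute accent

Console.WriteLine(composed == decomposed);                                    // False
Console.WriteLine(composed == decomposed.Normalize(NormalizationForm.FormC)); // True

Both strings render identically, but without normalization they would percent-encode to different byte sequences.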


Dominick Baier: DotNetRocks on OpenID Connect with Brock and Me

Recorded at NDC Oslo:

http://www.dotnetrocks.com/default.aspx?ShowNum=993


Filed under: Conferences & Training, IdentityServer, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: NDC Oslo 2014 Slides, Samples and Videos

As always – NDC was a great conference! Here’s the list of resources relevant to my talks:

IdentityServer v3 preview: github

Web API Access Control & Authorization: slides / video

OpenID Connect: slides / video

 


Filed under: ASP.NET, Conferences & Training, IdentityServer, OAuth, OpenID Connect, WebAPI


Taiseer Joudeh: AngularJS Token Authentication using ASP.NET Web API 2, Owin, and Identity

This is the second part of AngularJS Token Authentication using ASP.NET Web API 2 and Owin middleware; you can find the first part using the link below:

You can check the demo application on (http://ngAuthenticationWeb.azurewebsites.net), play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

AngularJS Authentication

In this post we’ll build sample SPA using AngularJS, this application will allow the users to do the following:

  • Register in our system by providing username and password.
  • Secure certain views from viewing by authenticated users (Anonymous users).
  • Allow registered users to log in and keep them logged in for 24 hours (or 30 minutes when refresh tokens are used) or until they log out of the system; this should be done using tokens.

If you are new to AngularJS, you can check my other tutorial which provides step by step instructions on how to build a SPA using AngularJS. It is important to understand the fundamental aspects of AngularJS before starting to work with it; in this tutorial I’ll assume the reader has a basic understanding of how AngularJS works.

Step 1: Download Third Party Libraries

To get started we need to download all libraries needed in our application:

  • AngularJS: We’ll serve AngularJS from CDN, the version is 1.2.16
  • Loading Bar: We’ll use the loading bar as a UI indication for every XHR request the application makes; to get this plugin we need to download it from here.
  • UI Bootstrap theme: to style our application, we need to download a free ready-made Bootstrap theme from http://bootswatch.com/ I’ve used a theme named “Yeti”.

Step 2: Organize Project Structure

You can use your favorite IDE to build the web application, the app is completely decoupled from the back-end API, there is no dependency on any server side technology here, in my case I’m using Visual Studio 2013 so add new project named “AngularJSAuthentication.Web” to the solution we created in the previous post, the template for this project is “Empty” without any core dependencies checked.

After you add the project you can organize your project structure as in the image below. I prefer to keep all the AngularJS application and resource files we’ll create in a folder named “app”.

AngularJS Project Structure

Step 3: Add the Shell Page (index.html)

Now we’ll add the “Single Page” which is a container for our application; it will contain the navigation menu and the AngularJS directive for rendering the different application views “pages”. After you add the “index.html” page to the project root we need to reference the 3rd party JavaScript and CSS files needed as the below:

<!DOCTYPE html>
<html data-ng-app="AngularAuthApp">
<head>
    <meta content="IE=edge, chrome=1" http-equiv="X-UA-Compatible" />
    <title>AngularJS Authentication</title>
    <link href="content/css/bootstrap.min.css" rel="stylesheet" />
    <link href="content/css/site.css" rel="stylesheet" />
<link href="content/css/loading-bar.css" rel="stylesheet" />
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
</head>
<body>
    <div class="navbar navbar-inverse navbar-fixed-top" role="navigation" data-ng-controller="indexController">
        <div class="container">
            <div class="navbar-header">
                  <button class="btn btn-success navbar-toggle" data-ng-click="navbarExpanded = !navbarExpanded">
                        <span class="glyphicon glyphicon-chevron-down"></span>
                    </button>
                <a class="navbar-brand" href="#/">Home</a>
            </div>
            <div class="collapse navbar-collapse" data-collapse="!navbarExpanded">
                <ul class="nav navbar-nav navbar-right">
                    <li data-ng-hide="!authentication.isAuth"><a href="#">Welcome {{authentication.userName}}</a></li>
                    <li data-ng-hide="!authentication.isAuth"><a href="#/orders">My Orders</a></li>
                    <li data-ng-hide="!authentication.isAuth"><a href="" data-ng-click="logOut()">Logout</a></li>
                    <li data-ng-hide="authentication.isAuth"> <a href="#/login">Login</a></li>
                    <li data-ng-hide="authentication.isAuth"> <a href="#/signup">Sign Up</a></li>
                </ul>
            </div>
        </div>
    </div>
    <div class="jumbotron">
        <div class="container">
            <div class="page-header text-center">
                <h1>AngularJS Authentication</h1>
            </div>
            <p>This single page application is built using AngularJS, it is using OAuth bearer token authentication, ASP.NET Web API 2, OWIN middleware, and ASP.NET Identity to generate tokens and register users.</p>
        </div>
    </div>
    <div class="container">
        <div data-ng-view="">
        </div>
    </div>
    <hr />
    <div id="footer">
        <div class="container">
            <div class="row">
                <div class="col-md-6">
                    <p class="text-muted">Created by Taiseer Joudeh. Twitter: <a target="_blank" href="http://twitter.com/tjoudeh">@tjoudeh</a></p>
                </div>
                <div class="col-md-6">
                    <p class="text-muted">Taiseer Joudeh Blog: <a target="_blank" href="http://bitoftech.net">bitoftech.net</a></p>
                </div>
            </div>
        </div>
    </div>
    <!-- 3rd party libraries -->
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js"></script>
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular-route.min.js"></script>
    <script src="scripts/angular-local-storage.min.js"></script>
    <script src="scripts/loading-bar.min.js"></script>
    <!-- Load app main script -->
    <script src="app/app.js"></script>
    <!-- Load services -->
    <script src="app/services/authInterceptorService.js"></script>
    <script src="app/services/authService.js"></script>
    <script src="app/services/ordersService.js"></script>
    <!-- Load controllers -->
    <script src="app/controllers/indexController.js"></script>
    <script src="app/controllers/homeController.js"></script>
    <script src="app/controllers/loginController.js"></script>
    <script src="app/controllers/signupController.js"></script>
    <script src="app/controllers/ordersController.js"></script>
</body>
</html>

Step 4: “Booting up” our Application and Configure Routes

We’ll add a file named “app.js” in the root of the “app” folder. This file is responsible for creating the modules in our application; in our case we’ll have a single module called “AngularAuthApp”. We can consider the module as a collection of services, directives, and filters which are used in the application. Each module has a configuration block which gets applied to the application during the bootstrap process.

As well we need to define and map the views with the controllers so open “app.js” file and paste the code below:

var app = angular.module('AngularAuthApp', ['ngRoute', 'LocalStorageModule', 'angular-loading-bar']);

app.config(function ($routeProvider) {

    $routeProvider.when("/home", {
        controller: "homeController",
        templateUrl: "/app/views/home.html"
    });

    $routeProvider.when("/login", {
        controller: "loginController",
        templateUrl: "/app/views/login.html"
    });

    $routeProvider.when("/signup", {
        controller: "signupController",
        templateUrl: "/app/views/signup.html"
    });

    $routeProvider.when("/orders", {
        controller: "ordersController",
        templateUrl: "/app/views/orders.html"
    });

    $routeProvider.otherwise({ redirectTo: "/home" });
});

app.run(['authService', function (authService) {
    authService.fillAuthData();
}]);

So far we’ve defined and mapped 4 views to their corresponding controllers as the below:

Orders View

Step 5: Add AngularJS Authentication Service (Factory)

This AngularJS service will be responsible for signing up new users, logging registered users in and out, and storing the generated token in the client’s local storage so this token can be sent with each request to access secure resources on the back-end API. The code for the AuthService will be as the below:

'use strict';
app.factory('authService', ['$http', '$q', 'localStorageService', function ($http, $q, localStorageService) {

    var serviceBase = 'http://ngauthenticationapi.azurewebsites.net/';
    var authServiceFactory = {};

    var _authentication = {
        isAuth: false,
        userName : ""
    };

    var _saveRegistration = function (registration) {

        _logOut();

        return $http.post(serviceBase + 'api/account/register', registration).then(function (response) {
            return response;
        });

    };

    var _login = function (loginData) {

        var data = "grant_type=password&username=" + loginData.userName + "&password=" + loginData.password;

        var deferred = $q.defer();

        $http.post(serviceBase + 'token', data, { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } }).success(function (response) {

            localStorageService.set('authorizationData', { token: response.access_token, userName: loginData.userName });

            _authentication.isAuth = true;
            _authentication.userName = loginData.userName;

            deferred.resolve(response);

        }).error(function (err, status) {
            _logOut();
            deferred.reject(err);
        });

        return deferred.promise;

    };

    var _logOut = function () {

        localStorageService.remove('authorizationData');

        _authentication.isAuth = false;
        _authentication.userName = "";

    };

    var _fillAuthData = function () {

        var authData = localStorageService.get('authorizationData');
        if (authData)
        {
            _authentication.isAuth = true;
            _authentication.userName = authData.userName;
        }

    }

    authServiceFactory.saveRegistration = _saveRegistration;
    authServiceFactory.login = _login;
    authServiceFactory.logOut = _logOut;
    authServiceFactory.fillAuthData = _fillAuthData;
    authServiceFactory.authentication = _authentication;

    return authServiceFactory;
}]);

Now by looking at the method “_saveRegistration” you will notice that we are issuing an HTTP POST to the endpoint “http://ngauthenticationapi.azurewebsites.net/api/account/register” defined in the previous post; this method returns a promise which will be resolved in the controller.

The function “_login” is responsible for sending an HTTP POST request to the endpoint “http://ngauthenticationapi.azurewebsites.net/token”; this endpoint will validate the credentials passed and, if they are valid, it will return an “access_token”. We have to store this token in a persistence medium on the client, so for any subsequent requests for secured resources we have to read this token value and send it in the “Authorization” header with the HTTP request.

Notice that we have configured the POST request for this endpoint to use “application/x-www-form-urlencoded” as its Content-Type and sent the data as string not JSON object.

The best way to store this token is to use the AngularJS module named “angular-local-storage” which gives access to the browser’s local storage with a cookie fallback if you are using an old browser, so I will depend on this module to store the token and the logged-in username under a key named “authorizationData”. We will use this key in different places in our app to read the token value from it.

As well we’ll add object named “authentication” which will store two values (isAuth, and username). This object will be used to change the layout for our index page.

Step 6: Add the Signup Controller and its View

The view for the signup is simple, so add a new file named “signup.html” under the “views” folder, open the file, and paste the HTML below:

<form class="form-login" role="form">
    <h2 class="form-login-heading">Sign up</h2>
    <input type="text" class="form-control" placeholder="Username" data-ng-model="registration.userName" required autofocus>
    <input type="password" class="form-control" placeholder="Password" data-ng-model="registration.password" required>
    <input type="password" class="form-control" placeholder="Confirm Password" data-ng-model="registration.confirmPassword" required>
    <button class="btn btn-lg btn-info btn-block" type="submit" data-ng-click="signUp()">Submit</button>
    <div data-ng-hide="message == ''" data-ng-class="(savedSuccessfully) ? 'alert alert-success' : 'alert alert-danger'">
        {{message}}
    </div>
</form>

Now we need to add controller named “signupController.js” under folder “controllers”, this controller is simple and will contain the business logic needed to register new users and call the “saveRegistration” method we’ve created in “authService” service, so open the file and paste the code below:

'use strict';
app.controller('signupController', ['$scope', '$location', '$timeout', 'authService', function ($scope, $location, $timeout, authService) {

    $scope.savedSuccessfully = false;
    $scope.message = "";

    $scope.registration = {
        userName: "",
        password: "",
        confirmPassword: ""
    };

    $scope.signUp = function () {

        authService.saveRegistration($scope.registration).then(function (response) {

            $scope.savedSuccessfully = true;
            $scope.message = "User has been registered successfully, you will be redicted to login page in 2 seconds.";
            startTimer();

        },
         function (response) {
             var errors = [];
             for (var key in response.data.modelState) {
                 for (var i = 0; i < response.data.modelState[key].length; i++) {
                     errors.push(response.data.modelState[key][i]);
                 }
             }
             $scope.message = "Failed to register user due to:" + errors.join(' ');
         });
    };

    var startTimer = function () {
        var timer = $timeout(function () {
            $timeout.cancel(timer);
            $location.path('/login');
        }, 2000);
    }

}]);

Step 7: Add the log-in Controller and its View

The view for the log-in is simple, so add a new file named “login.html” under the “views” folder, open the file, and paste the HTML below:

<form class="form-login" role="form">
    <h2 class="form-login-heading">Login</h2>
    <input type="text" class="form-control" placeholder="Username" data-ng-model="loginData.userName" required autofocus>
    <input type="password" class="form-control" placeholder="Password" data-ng-model="loginData.password" required>
    <button class="btn btn-lg btn-info btn-block" type="submit" data-ng-click="login()">Login</button>
     <div data-ng-hide="message == ''" class="alert alert-danger">
        {{message}}
    </div>
</form>

Now we need to add a controller named “loginController.js” under the “controllers” folder; this controller will be responsible for redirecting authenticated users to the orders view. If you try to request the orders view as an anonymous user, you will be redirected to the log-in view. We’ll see in the next steps how we’ll implement the redirection of anonymous users to the log-in view once they request a secure view.

Now open the “loginController.js” file and paste the code below:

'use strict';
app.controller('loginController', ['$scope', '$location', 'authService', function ($scope, $location, authService) {

    $scope.loginData = {
        userName: "",
        password: ""
    };

    $scope.message = "";

    $scope.login = function () {

        authService.login($scope.loginData).then(function (response) {

            $location.path('/orders');

        },
         function (err) {
             $scope.message = err.error_description;
         });
    };

}]);

Step 8: Add AngularJS Orders Service (Factory)

This service will be responsible for issuing an HTTP GET request to the endpoint “http://ngauthenticationapi.azurewebsites.net/api/orders” we’ve defined in the previous post. If you recall, we added the “Authorize” attribute to indicate that this method is secured and should be called only by authenticated users; if you try to call the endpoint directly you will receive HTTP status code 401 Unauthorized.

So add new file named “ordersService.js” under folder “services” and paste the code below:

'use strict';
app.factory('ordersService', ['$http', function ($http) {

    var serviceBase = 'http://ngauthenticationapi.azurewebsites.net/';
    var ordersServiceFactory = {};

    var _getOrders = function () {

        return $http.get(serviceBase + 'api/orders').then(function (results) {
            return results;
        });
    };

    ordersServiceFactory.getOrders = _getOrders;

    return ordersServiceFactory;

}]);
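
The shell page above references “ordersController.js” and an “orders.html” view, but this excerpt does not show them. A minimal sketch of what such a controller could look like (an assumption based on the script references above, not the author’s actual code) is:

'use strict';
app.controller('ordersController', ['$scope', 'ordersService', function ($scope, ordersService) {

    $scope.orders = [];
    $scope.message = "";

    // Ask the secured endpoint for the data; the auth interceptor added in the
    // next step takes care of attaching the bearer token to the request.
    ordersService.getOrders().then(function (results) {

        $scope.orders = results.data;

    }, function (error) {
        $scope.message = "Failed to load orders.";
    });

}]);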

By looking at the ordersService code above you’ll notice that we are not setting the “Authorization” header and passing the bearer token we stored in local storage earlier, so we’ll always receive a 401 response! We are also not checking whether the response is rejected with status code 401 so that we can redirect the user to the log-in page.

There is nothing preventing us from reading the stored token from local storage and checking if the response is rejected inside this service, but what if we have other services that need to pass the bearer token along with each request? We’ll end up replicating this code for each service.

To solve this issue we need to find a centralized place to add this code once, so all other services interested in sending the bearer token can benefit from it; to do so we need to use an “AngularJS Interceptor”.

Step 9: Add AngularJS Interceptor (Factory)

An interceptor is a regular service (factory) which allows us to capture every XHR request and manipulate it before sending it to the back-end API or after receiving the response from the API. In our case we are interested in capturing each request before it is sent so we can set the bearer token; we are also interested in checking whether the response from the back-end API contains errors, which means we need to check the error code returned, and if it is 401 we redirect the user to the log-in page.

To do so add new file named “authInterceptorService.js” under “services” folder and paste the code below:

'use strict';
app.factory('authInterceptorService', ['$q', '$location', 'localStorageService', function ($q, $location, localStorageService) {

    var authInterceptorServiceFactory = {};

    var _request = function (config) {

        config.headers = config.headers || {};

        var authData = localStorageService.get('authorizationData');
        if (authData) {
            config.headers.Authorization = 'Bearer ' + authData.token;
        }

        return config;
    }

    var _responseError = function (rejection) {
        if (rejection.status === 401) {
            $location.path('/login');
        }
        return $q.reject(rejection);
    }

    authInterceptorServiceFactory.request = _request;
    authInterceptorServiceFactory.responseError = _responseError;

    return authInterceptorServiceFactory;
}]);

By looking at the code above, the method “_request” will be fired before $http sends the request to the back-end API, so this is the right place to read the token from local storage and set it in the “Authorization” header of each request. Note that I’m checking whether the local storage object exists; if it doesn’t, the user is anonymous and there is no need to set the token on the XHR request.

Now the method “_responseError” will be hit after we receive a response from the back-end API, and only if a failure status is returned. So we need to check the status code; in case it is 401 we’ll redirect the user to the log-in page where he’ll be able to authenticate again.

Now we need to push this interceptor to the interceptors array, so open file “app.js” and add the below code snippet:

app.config(function ($httpProvider) {
    $httpProvider.interceptors.push('authInterceptorService');
});

By doing this there is no need to set up extra code for setting tokens or checking the status code; any AngularJS service that executes XHR requests will use this interceptor. Note: this will work if you are using the AngularJS service $http or $resource.

Step 10: Add the Index Controller

Now we’ll add the Index controller which will be responsible for changing the layout of the home page, i.e. (Display Welcome {Logged In Username}, Show My Orders Tab); as well we’ll add the log-out functionality to it, as in the image below.

Index Bar

Taking into consideration that there is no straightforward way to log the user out when we use the token based approach, the workaround we can do here is to remove the local storage key “authorizationData” and set some variables to their initial state.

So add a file named “indexController.js”  under folder “controllers” and paste the code below:

'use strict';
app.controller('indexController', ['$scope', '$location', 'authService', function ($scope, $location, authService) {

    $scope.logOut = function () {
        authService.logOut();
        $location.path('/home');
    }

    $scope.authentication = authService.authentication;

}]);

Step 11: Add the Home Controller and its View

This is the last controller and view we’ll add to complete the app; it is a simple view and an empty controller which is used to display two boxes for log-in and signup as in the image below:

Home View

So add new file named “homeController.js” under the “controllers” folder and paste the code below:

'use strict';
app.controller('homeController', ['$scope', function ($scope) {
   
}]);

As well add new file named “home.html” under “views” folder and paste the code below:

<div class="row">
        <div class="col-md-2">
            &nbsp;
        </div>
        <div class="col-md-4">
            <h2>Login</h2>
            <p class="text-primary">If you have Username and Password, you can use the button below to access the secured content using a token.</p>
            <p><a class="btn btn-info" href="#/login" role="button">Login &raquo;</a></p>
        </div>
        <div class="col-md-4">
            <h2>Sign Up</h2>
            <p class="text-primary">Use the button below to create Username and Password to access the secured content using a token.</p>
            <p><a class="btn btn-info" href="#/signup" role="button">Sign Up &raquo;</a></p>
        </div>
        <div class="col-md-2">
            &nbsp;
        </div>
    </div>

By now we should have SPA which uses the token based approach to authenticate users.

One side note before closing: the redirection of anonymous users to the log-in page is done in client side code, so any malicious user can tamper with this. It is very important to secure all back-end APIs as we implemented in this tutorial and not to depend on client side code only.

That’s it for now! Hopefully this two posts will be beneficial for folks looking to use token based authentication along with ASP.NET Web API 2 and Owin middleware.

I would like to hear your feedback and comments if there is a better way to implement this, especially redirecting users to the log-in page when they are anonymous.

You can check the demo application on (http://ngAuthenticationWeb.azurewebsites.net), play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

Follow me on Twitter @tjoudeh

The post AngularJS Token Authentication using ASP.NET Web API 2, Owin, and Identity appeared first on Bit of Technology.


Henrik F. Nielsen: Fresh Updates to Azure Mobile Services .NET (Link)

Posted the blog Fresh Updates to Azure Mobile Services .NET on the Azure Mobile Services Team Blog describing new features released today for Azure Mobile Services .NET. If you are building cloud connected mobiles apps then check out Microsoft Azure Mobile Services and let us know what you think!

Have fun!

Henrik


Darrel Miller: Sharing Fiddler requests using Runscope

Fiddler is an excellent tool that I have been using for many years to debug HTTP requests in local applications.  However, one of the things that Fiddler can't do easily, out of the box, is allow you to share requests with other team members.  Sometimes it is nice to be able to show someone an HTTP request/response for debugging or design related issues. I recently built an extension to Fiddler that takes advantage of Runscope's request sharing mechanism, to make sharing a request captured by Fiddler a one click affair.

If you want to try out the tool, you can install the latest version using Chocolatey,

cinst runscope.fiddlerextension 

hamstersharing

If you don't have Chocolatey installed and you don't feel shaving that yak today, you can download a .zip file that has the DLL here.  Just copy the DLL into your Fiddler scripts folder.  You should find the folder here:  c:\program files (x86)\Fiddler2\Scripts.  Despite the name of this folder, this plug-in is currently only compatible with Fiddler4, the version for .net 4.0. 

How do I use it

Once the extension is successfully installed, you should see a new context menu item,

FiddlerContextMenu

Connect to your Runscope Account

The first time you attempt to share a request, you will be prompted with a configuration screen to connect to your Runscope account.  If you don't have one then you can sign up for a free Runscope account, no credit card required.

ConfigureForm

Clicking the "Get Key" button will launch a web browser and take you to the Runscope authentication page.  Once successfully authenticated with the Runscope website, you will be asked to authorize the FiddlerExtension application.  This allows the extension to write requests into your Runscope buckets.  Once completed, an API key will be displayed along with the list of available buckets.  Select the bucket where you wish the request to be created.  By default, Fiddler will not display the requests being made to the Runscope API.  Selecting the "use proxy" checkbox will allow you to see the API requests in Fiddler.

If at a later point in time you wish to change these options, you can always get back to this screen using the Tools -> Configure Runscope menu option.

ConfigureMenu

Once the configuration is complete, the Fiddler session is passed to the Runscope API, and added to the Shared message collection in the specified bucket.  The publicly shareable URL is returned and the default web browser is launched to display the shared message.

The following URL was created from a Fiddler request created on my PC,

https://www.runscope.com/public/196bfc03-e52c-4bb2-932b-b466da7b363d/7dd3f872-644a-4f0a-ba6b-613ad7472ec5

Show me how it works

The source for this tool is available and is licensed with the Apache 2 OSS license.  I will do a follow-up post with more details of the implementation.  There are a few pieces of the project that may be interesting.  There is an Owin based self-hosted server that handles the OAuth2 redirect request and extracts the API key.  I created a WebPack project that encapsulates the semantics of the Runscope API and it also demonstrates how to create a Fiddler plugin.
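
As a rough illustration of that first piece (not the extension's actual code), a self-hosted OWIN endpoint that catches the browser redirect and pulls a key out of the query string can be very small. This sketch assumes the Microsoft.Owin.Hosting and Microsoft.Owin.Host.HttpListener packages; the port and the "key" query parameter name are made up for the example:

using System;
using Microsoft.Owin.Hosting;
using Owin;

class RedirectListener
{
    static void Main()
    {
        // Listen on a local port for the redirect coming back from the
        // authorization page, then read the key out of the query string.
        using (WebApp.Start("http://localhost:9999", app =>
            app.Run(async context =>
            {
                var apiKey = context.Request.Query["key"];   // hypothetical parameter name
                Console.WriteLine("Received key: " + apiKey);
                await context.Response.WriteAsync("You can close this window now.");
            })))
        {
            Console.WriteLine("Waiting for redirect...");
            Console.ReadLine();
        }
    }
}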

Image Credit: Hamsters https://flic.kr/p/dA7Vg


Ali Kheyrollahi: Cancelling an async HTTP request Task sends TCP RESET packet

Level [T4]

This blog post did not just happen. In fact, never, if ever, something just happens. There is a story behind everything and this one is no different. Looking back, it feels like a nice find but as the story was unfolding, I was running around like a headless chicken. Here we have the luxury of the hindsight so let's take advantage of it.

TLDR; If you are a sensible HTTP client and make your HTTP requests using cancellable async Tasks by passing a CancellationToken, you could find your IP blocked by legacy bridge devices blacklisting clients sending TCP RESET packets.

So here is how it started ...

So we were supposed to go live on Monday - some Monday. Talking of live, it was not really live - it was only to internal users, but considering the high profile of the project, it felt like the D-Day. All VPs knew of the release and were waiting to see a glimpse of the project. Despite the high profile, it was not properly resourced; I, despite being the so-called architect, pretty much single-handedly did all the API and the middleware connecting the Big Data outputs with the Single Page Application.

We could not finish going live on Monday so it moved to Tuesday. On Tuesday morning we were all ready, and I set up my machine's screens like a trader, with all the performance monitors up, watching the users. Since we were using the cloud (Azure), elasticity was an option, although the number of internal users could hardly make a dent in the 3 worker roles. So we did go live, and, I could see traffic building up and all looked fine. Until ... it did not.

I saw requests queuing up and loading the page taking longer and longer. Until it was completely frozen. And we had to take the site down. And that was not good.

Server Analysis

I brought up DebugView and was lucky to see this (actual IP and site names anonymised):

[1240] w3wp.exe Error: 0 :
[1240] <html>
[1240] <h1>Access Administratively Blocked</h1>
[1240] <br>URL : 'http://www.xyz.com/services/blahblah'
[1240] <br>Client IP address : 'xyz.xx.yy.zzz'
[1240] </html>

So we are being blocked! Something is blocking us, and this could be because we used a UI data endpoint as a Data API. Well, I knew it was not good, but as I said we had limited time and in reality that data endpoint was meant to support our live traffic.

So after a lot of to and fro with our service delivery and some third party support, we were told that our software was recognised as malicious since it was sending way too many TCP RESET packets. Whaa?? No one ain't sending no TCP whatever packets, we are using a high level language (C#) and it is the latest HttpClient implementation. We are actually using many optimising techniques such as async calls, parallelisation, etc. to make the code as efficient as possible. We also used short timeout + retry, which is Netflix's approach to improving performance.

But what is a TCP RESET packet? Basically a RESET packet is one that has the RESET flag set (which is otherwise unset) and tells the server to drop the TCP connection immediately and reclaim all the resources associated with it. There is an RFC from back in 2002 that considers RESET harmful. Wikipedia's article argues that when used as designed, it is useful, but a forged RESET can disrupt the communication between the client and server. And Microsoft's TechNet blog on the topic says "RESET is actually a good thing".

And in essence, I would agree with Microsoft's (and Wikipedia's) account that sending a RESET packet is what a responsible client would do. Let's imagine you are browsing a site using a really bad wifi connection. The loading of the page takes too long and you, frustrated by the slow connection, cancel browsing by pressing the X button. At this point, a responsible browser should tell the server it has changed its mind and is not interested in the response. This will let the server use its resources for a client that is actually waiting for the server's response.

Now going back to the problem at hand, I am not a TCP expert by any stretch - I have always used higher level constructs and never had to go down so deep in the OSI model. But my surprise was: what is different about my code that, with a handful of calls, I was getting blocked, while the live clients work well with no problem with a significantly larger number of calls?

I had a hunch that it probably had to do with some of the patterns I have been using on the server. And to shorten the suspense, the answer came from the analysis of TCP packets when cancelling an async HTTP Task. The live code uses the traditional synchronous calls - none of the fancy patterns I used. So let's look at some sample code that cancels the task if it takes too long:

var client = new HttpClient();
var buffer = new byte[5 * 1000 * 1000];
// you might have to use a different timeout or address
var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(300));
try
{
    var result = client.GetAsync("http://www.google.com",
        cts.Token).Result;
    var s = result.Content.ReadAsStreamAsync().Result;

    var result1 = s.ReadAsync(buffer, 0, buffer.Length, cts.Token).Result;
    // ConsoleWriteLine is a small helper that writes coloured output to the console
    ConsoleWriteLine(ConsoleColor.Green, "Got it");
}
catch (Exception e)
{
    ConsoleWriteLine(ConsoleColor.Red, "error! " + e);
}

In this snippet, we are calling the Google server and setting a 300ms timeout (you might have to modify the timeout or the address based on your connection speed in order to see the cancellation). Here is a Wireshark proof:



As you can see above, a TCP RESET packet has been sent - provided you have set the parameters in a way that the request does not complete before its timeout and gets cancelled. You can try this with a longer timeout, or use a WebClient which is synchronous, and you will never ever see this RST packet.

Now the question is, should a network appliance pick up on this responsible cancellation and treat it as an attack? By no means. But in my case, it did, and it is very likely that it could do that with yours.

My solution came by whitelisting my IP against "TCP RESET attacks". After all, I was only trying to help the server.

Conclusion

Cancelling an HTTP async Task in the HttpClient results in sending a TCP RESET packet, which is considered malicious by some network appliances, resulting in the blacklisting of your IP.

PS. The network appliance belonged to our infrastructure 3rd party provider whose security was managed by another third party - it was not in Azure. The real solution would have been to remove such a crazy rule, but anyhow, we developers don't always get what we want.



Henrik F. Nielsen: Real-time with ASP.NET SignalR and Azure Mobile .NET Backend (Link)

Posted the blog Real-time with ASP.NET SignalR and Azure Mobile .NET Backend on the Azure Mobile Services Team Blog explaining how to enable real-time communications between your mobile apps and Azure Mobile Services .NET backend.

Have fun!

Henrik


Taiseer Joudeh: Token Based Authentication using ASP.NET Web API 2, Owin, and Identity

Last week I was looking at the top viewed posts on my blog and I noticed that visitors are interested in the authentication part of ASP.NET Web API, CORS Support, and how to authenticate users in single page applications built with AngularJS using token based approach.

So I decided to compile mini tutorial of three posts which covers and connects those topics. In this tutorial we’ll build SPA using AngularJS for the front-end, and ASP.NET Web API 2, Owin middleware, and ASP.NET Identity for the back-end.

The demo application can be accessed on (http://ngAuthenticationWeb.azurewebsites.net). The back-end API can be accessed on (http://ngAuthenticationAPI.azurewebsites.net/) and both are hosted on Microsoft Azure, for learning purposes feel free to integrate and play with the back-end API with your front-end application. The API supports CORS and accepts HTTP calls from any origin. You can check the source code for this tutorial on Github.

AngularJS Authentication

 

Token Based Authentication

As I stated before, we’ll use the token based approach to implement authentication between the front-end application and the back-end API. As we all know, the common and old way to implement authentication is the cookie-based approach where the cookie is sent with each request from the client to the server, and on the server it is used to identify the authenticated user.

With the evolution of front-end frameworks and the huge change in how we build web applications nowadays, the preferred approach to authenticate users is to use a signed token which is sent to the server with each request. Some of the benefits of using this approach are:

  • Scalability of Servers: The token sent to the server is self contained and holds all the user information needed for authentication, so adding more servers to your web farm is an easy task; there is no dependency on shared session stores.
  • Loose Coupling: Your front-end application is not coupled to a specific authentication mechanism; the token is generated by the server and your API is built in a way to understand this token and do the authentication.
  • Mobile Friendly: Cookies and browsers like each other, but storing cookies on native platforms (Android, iOS, Windows Phone) is not a trivial task; having a standard way to authenticate users will simplify our life if we decide to consume the back-end API from native applications.

What we’ll build in this tutorial?

The front-end SPA will be built using HTML5, AngularJS, and Twitter Bootstrap. The back-end server will be built using ASP.NET Web API 2 on top of Owin middleware, not directly on top of ASP.NET; the reason for doing so is that we’ll configure the server to issue OAuth bearer token authentication using Owin middleware too, so setting up everything on the same pipeline is the better approach. In addition to this we’ll use the ASP.NET Identity system, which is built on top of Owin middleware, and we’ll use it to register new users and validate their credentials before generating the tokens.

As I mentioned before, our back-end API should accept requests coming from any origin, not only our front-end, so we’ll be enabling CORS (Cross Origin Resource Sharing) in Web API as well as for the OAuth bearer token provider.

Use cases which will be covered in this application:

  • Allow users to signup (register) by providing username and password then store credentials in secure medium.
  • Prevent anonymous users from viewing secured data or secured pages (views).
  • Once the user is logged in successfully, the system should not ask for credentials or re-authentication for the next 24 hours (or 30 minutes when refresh tokens are used).

So in this post we’ll cover step by step how to build the back-end API, and on the next post we’ll cover how we’ll build and integrate the SPA with the API.

Enough theories let’s get our hands dirty and start implementing the API!

Building the Back-End API

Step 1: Creating the Web API Project

In this tutorial I’m using Visual Studio 2013 and .Net framework 4.5, you can follow along using Visual Studio 2012 but you need to install Web Tools 2013.1 for VS 2012 by visiting this link.

Now create an empty solution and name it “AngularJSAuthentication” then add new ASP.NET Web application named “AngularJSAuthentication.API”, the selected template for project will be as the image below. Notice that the authentication is set to “No Authentication” taking into consideration that we’ll add this manually.

Web API Project Template

Step 2: Installing the needed NuGet Packages:

Now we need to install the NuGet packages which are needed to set up our Owin server and configure ASP.NET Web API to be hosted within an Owin server, so open the NuGet Package Manager Console and type the below:

Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.1.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 2.1.0

The package “Microsoft.Owin.Host.SystemWeb” is used to enable our Owin server to run our API on IIS using the ASP.NET request pipeline, as eventually we’ll host this API on Microsoft Azure Websites which uses IIS.

Step 3: Add Owin “Startup” Class

Right click on your project then add new class named “Startup”. We’ll visit this class many times and modify it, for now it will contain the code below:

using Microsoft.Owin;
using Owin;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Http;

[assembly: OwinStartup(typeof(AngularJSAuthentication.API.Startup))]
namespace AngularJSAuthentication.API
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();
            WebApiConfig.Register(config);
            app.UseWebApi(config);
        }

    }
}

What we’ve implemented above is simple, this class will be fired once our server starts, notice the “assembly” attribute which states which class to fire on start-up. The “Configuration” method accepts parameter of type “IAppBuilder” this parameter will be supplied by the host at run-time. This “app” parameter is an interface which will be used to compose the application for our Owin server.

The “HttpConfiguration” object is used to configure API routes, so we’ll pass this object to method “Register” in “WebApiConfig” class.

Lastly, we’ll pass the “config” object to the extension method “UseWebApi” which will be responsible to wire up ASP.NET Web API to our Owin server pipeline.

Usually the class “WebApiConfig” exists with the templates we’ve selected, if it doesn’t exist then add it under the folder “App_Start”. Below is the code inside it:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {

            // Web API routes
            config.MapHttpAttributeRoutes();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

Step 4: Delete Global.asax Class

No need to use this class and fire up the Application_Start event after we’ve configured our “Startup” class so feel free to delete it.

Step 5: Add the ASP.NET Identity System

After we’ve configured the Web API, it is time to add the needed NuGet packages to add support for registering and validating user credentials, so open package manager console and add the below NuGet packages:

Install-Package Microsoft.AspNet.Identity.Owin -Version 2.0.1
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.0.1

The first package will add support for ASP.NET Identity Owin, and the second package will add support for using ASP.NET Identity with Entity Framework so we can save users to SQL Server database.

Now we need to add Database context class which will be responsible to communicate with our database, so add new class and name it “AuthContext” then paste the code snippet below:

public class AuthContext : IdentityDbContext<IdentityUser>
    {
        public AuthContext()
            : base("AuthContext")
        {

        }
    }

As you can see this class inherits from the “IdentityDbContext” class; you can think of this class as a special version of the traditional “DbContext” class. It will provide all of the Entity Framework code-first mapping and DbSet properties needed to manage the identity tables in SQL Server. You can read more about this class on Scott Allen’s blog.

Now we want to add a “UserModel” which contains the properties needed to be sent once we register a user. This model is a POCO class with some data annotation attributes used for the sake of validating the registration payload request. So under the “Models” folder add a new class named “UserModel” and paste the code below:

public class UserModel
    {
        [Required]
        [Display(Name = "User name")]
        public string UserName { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [DataType(DataType.Password)]
        [Display(Name = "Confirm password")]
        [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    }

Now we need to add a new connection string named “AuthContext” to our Web.config file, so open your web.config and add the below section:

<connectionStrings>
    <add name="AuthContext" connectionString="Data Source=.\sqlexpress;Initial Catalog=AngularJSAuth;Integrated Security=SSPI;" providerName="System.Data.SqlClient" />
  </connectionStrings>

Step 6: Add Repository class to support ASP.NET Identity System

Now we want to implement the two methods needed in our application, which are “RegisterUser” and “FindUser”, so add a new class named “AuthRepository” and paste the code snippet below:

public class AuthRepository : IDisposable
    {
        private AuthContext _ctx;

        private UserManager<IdentityUser> _userManager;

        public AuthRepository()
        {
            _ctx = new AuthContext();
            _userManager = new UserManager<IdentityUser>(new UserStore<IdentityUser>(_ctx));
        }

        public async Task<IdentityResult> RegisterUser(UserModel userModel)
        {
            IdentityUser user = new IdentityUser
            {
                UserName = userModel.UserName
            };

            var result = await _userManager.CreateAsync(user, userModel.Password);

            return result;
        }

        public async Task<IdentityUser> FindUser(string userName, string password)
        {
            IdentityUser user = await _userManager.FindAsync(userName, password);

            return user;
        }

        public void Dispose()
        {
            _ctx.Dispose();
            _userManager.Dispose();

        }
    }

What we’ve implemented above is the following: we are depending on the “UserManager” that provides the domain logic for working with user information. The “UserManager” knows when to hash a password, how and when to validate a user, and how to manage claims. You can read more about ASP.NET Identity System.

Step 7: Add our “Account” Controller

Now it is time to add our first Web API controller which will be used to register new users, so under the “Controllers” folder add an Empty Web API 2 Controller named “AccountController” and paste the code below:

[RoutePrefix("api/Account")]
    public class AccountController : ApiController
    {
        private AuthRepository _repo = null;

        public AccountController()
        {
            _repo = new AuthRepository();
        }

        // POST api/Account/Register
        [AllowAnonymous]
        [Route("Register")]
        public async Task<IHttpActionResult> Register(UserModel userModel)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            IdentityResult result = await _repo.RegisterUser(userModel);

            IHttpActionResult errorResult = GetErrorResult(result);

            if (errorResult != null)
            {
                return errorResult;
            }

            return Ok();
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                _repo.Dispose();
            }

            base.Dispose(disposing);
        }

        private IHttpActionResult GetErrorResult(IdentityResult result)
        {
            if (result == null)
            {
                return InternalServerError();
            }

            if (!result.Succeeded)
            {
                if (result.Errors != null)
                {
                    foreach (string error in result.Errors)
                    {
                        ModelState.AddModelError("", error);
                    }
                }

                if (ModelState.IsValid)
                {
                    // No ModelState errors are available to send, so just return an empty BadRequest.
                    return BadRequest();
                }

                return BadRequest(ModelState);
            }

            return null;
        }
    }

By looking at the “Register” method you will notice that we’ve configured the endpoint for this method to be “/api/account/register”, so any user who wants to register in our system must issue an HTTP POST request to this URI, and the payload for this request will contain the JSON object below:

{
  "userName": "Taiseer",
  "password": "SuperPass",
  "confirmPassword": "SuperPass"
}

Now you can run your application and issue an HTTP POST request to your local URI “http://localhost:port/api/account/register”, or you can try the published API using this endpoint: http://ngauthenticationapi.azurewebsites.net/api/account/register. If all went fine you will receive HTTP status code 200, the database specified in the connection string will be created automatically, and the user will be inserted into the table “dbo.AspNetUsers”.

Note: It is very important to send this POST request over HTTPS so the sensitive information get encrypted between the client and the server.

The “GetErrorResult” method is just a helper method which is used to validate the “UserModel” and return the correct HTTP status code if the input data is invalid.
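
If you would rather exercise the registration endpoint from code than from a REST client, a minimal hedged sketch using HttpClient could look like the below; the base address port is a placeholder for whatever IIS Express assigns to your project:

using System;
using System.Net.Http;
using System.Text;

class RegisterSample
{
    static void Main()
    {
        // The port is a placeholder; use the one assigned to your local project.
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:1234/") })
        {
            // Same JSON payload as shown above, serialized by hand to keep the sketch dependency-free.
            var payload = "{\"userName\":\"Taiseer\",\"password\":\"SuperPass\",\"confirmPassword\":\"SuperPass\"}";
            var content = new StringContent(payload, Encoding.UTF8, "application/json");

            var response = client.PostAsync("api/account/register", content).Result;
            Console.WriteLine(response.StatusCode);   // expect OK (200) on success
        }
    }
}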

Step 8: Add Secured Orders Controller

Now we want to add another controller to serve our Orders, we’ll assume that this controller will return orders only for Authenticated users, to keep things simple we’ll return static data. So add new controller named “OrdersController” under “Controllers” folder and paste the code below:

[RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
        [Authorize]
        [Route("")]
        public IHttpActionResult Get()
        {
            return Ok(Order.CreateOrders());
        }

    }

    #region Helpers

    public class Order
    {
        public int OrderID { get; set; }
        public string CustomerName { get; set; }
        public string ShipperCity { get; set; }
        public Boolean IsShipped { get; set; }

        public static List<Order> CreateOrders()
        {
            List<Order> OrderList = new List<Order> 
            {
                new Order {OrderID = 10248, CustomerName = "Taiseer Joudeh", ShipperCity = "Amman", IsShipped = true },
                new Order {OrderID = 10249, CustomerName = "Ahmad Hasan", ShipperCity = "Dubai", IsShipped = false},
                new Order {OrderID = 10250,CustomerName = "Tamer Yaser", ShipperCity = "Jeddah", IsShipped = false },
                new Order {OrderID = 10251,CustomerName = "Lina Majed", ShipperCity = "Abu Dhabi", IsShipped = false},
                new Order {OrderID = 10252,CustomerName = "Yasmeen Rami", ShipperCity = "Kuwait", IsShipped = true}
            };

            return OrderList;
        }
    }

    #endregion

Notice how we added the “Authorize” attribute on the method “Get”, so if you try to issue an HTTP GET request to the endpoint “http://localhost:port/api/orders” you will receive HTTP status code 401 Unauthorized, because the request you send at this point doesn’t contain a valid authorization header. You can check this using this endpoint: http://ngauthenticationapi.azurewebsites.net/api/orders

Step 9: Add support for OAuth Bearer Tokens Generation

Till this moment we didn’t configure our API to use OAuth authentication workflow, to do so open package manager console and install the following NuGet package:

Install-Package Microsoft.Owin.Security.OAuth -Version 2.1.0

After you install this package open the file “Startup” again and call the new method named “ConfigureOAuth” as the first line inside the method “Configuration”; the implementation for this method is as below:

public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureOAuth(app);
	    //Rest of code is here;
        }

        public void ConfigureOAuth(IAppBuilder app)
        {
            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
                Provider = new SimpleAuthorizationServerProvider()
            };

            // Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);
            app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());

        }
    }

Here we’ve created new instance from class “OAuthAuthorizationServerOptions” and set its option as the below:

  • The path for generating tokens will be: “http://localhost:port/token”. We’ll see how we will issue an HTTP POST request to generate a token in the next steps.
  • We’ve specified the expiry for token to be 24 hours, so if the user tried to use the same token for authentication after 24 hours from the issue time, his request will be rejected and HTTP status code 401 is returned.
  • We’ve specified the implementation on how to validate the credentials for users asking for tokens in custom class named “SimpleAuthorizationServerProvider”.

Now we pass these options to the extension method “UseOAuthAuthorizationServer”, so we add the authentication middleware to the pipeline.

Step 10: Implement the “SimpleAuthorizationServerProvider” class

Add new folder named “Providers” then add new class named “SimpleAuthorizationServerProvider”, paste the code snippet below:

public class SimpleAuthorizationServerProvider : OAuthAuthorizationServerProvider
    {
        public override async Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
        }

        public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

            using (AuthRepository _repo = new AuthRepository())
            {
                IdentityUser user = await _repo.FindUser(context.UserName, context.Password);

                if (user == null)
                {
                    context.SetError("invalid_grant", "The user name or password is incorrect.");
                    return;
                }
            }

            var identity = new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim("sub", context.UserName));
            identity.AddClaim(new Claim("role", "user"));

            context.Validated(identity);

        }
    }

As you notice this class inherits from the class “OAuthAuthorizationServerProvider”, and we’ve overridden two methods: “ValidateClientAuthentication” and “GrantResourceOwnerCredentials”. The first method is responsible for validating the “Client”; in our case we have only one client so we’ll always return that it is validated successfully.

The second method “GrantResourceOwnerCredentials” is responsible to validate the username and password sent to the authorization server’s token endpoint, so we’ll use the “AuthRepository” class we created earlier and call the method “FindUser” to check if the username and password are valid.

If the credentials are valid we’ll create “ClaimsIdentity” class and pass the authentication type to it, in our case “bearer token”, then we’ll add two claims (“sub”,”role”) and those will be included in the signed token. You can add different claims here but the token size will increase for sure.

Now generating the token happens behind the scenes when we call “context.Validated(identity)”.

To allow CORS on the token middleware provider we need to add the header “Access-Control-Allow-Origin” to the Owin context; if you forget this, generating the token will fail when you try to call it from your browser. Note that this allows CORS for the token middleware provider, not for ASP.NET Web API, which we’ll add in the next step.

Step 11: Allow CORS for ASP.NET Web API

First of all we need to install the following NuGet package manger, so open package manager console and type:

Install-Package Microsoft.Owin.Cors -Version 2.1.0

Now open class “Startup” again and add the highlighted line of code (line 8) to the method “Configuration” as the below:

public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            ConfigureOAuth(app);

            WebApiConfig.Register(config);
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            app.UseWebApi(config);

        }

Step 12: Testing the Back-end API

Assuming that you registered the username “Taiseer” with the password “SuperPass” in an earlier step, we’ll use the same credentials to generate a token. To test this, open your favorite REST client and issue an HTTP request to generate a token for user “Taiseer”. I’ll be using Postman.

Now we’ll issue a POST request to the endpoint http://ngauthenticationapi.azurewebsites.net/token; the request will look like the image below:

OAuth Token Request

Notice that the content type is “application/x-www-form-urlencoded”, so the payload body has the form grant_type=password&username=Taiseer&password=SuperPass. If everything is correct you’ll receive a signed token in the response.

The “grant_type” parameter indicates the type of grant being presented in exchange for an access token; in our case it is “password”.
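
If you prefer to script this instead of using Postman, the same token request can be issued from code. A rough sketch using HttpClient follows; the URL and credentials are the ones from the article, and the response JSON contains access_token, token_type and expires_in:

private static async Task<string> RequestTokenAsync()
{
    using (var client = new HttpClient())
    {
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "password" },
            { "username", "Taiseer" },
            { "password", "SuperPass" }
        });

        var response = await client.PostAsync(
            "http://ngauthenticationapi.azurewebsites.net/token", body);

        // The response body is JSON containing access_token, token_type and expires_in.
        return await response.Content.ReadAsStringAsync();
    }
}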

Now we want to use this token to request the secured data from the endpoint http://ngauthenticationapi.azurewebsites.net/api/orders, so we’ll issue a GET request and pass the bearer token in the Authorization header. For any secured endpoint we have to send this bearer token along with the request to authenticate the user.

Note that we are not transferring the username/password with each request, as is the case with Basic authentication.

The GET request will look like the image below:

Token Get Secure Resource
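
The same call from code would look something like this sketch, assuming accessToken holds the access_token value returned from the token endpoint:

private static async Task<HttpStatusCode> CallOrdersAsync(string accessToken)
{
    using (var client = new HttpClient())
    {
        // Attach the bearer token to the Authorization header.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var response = await client.GetAsync(
            "http://ngauthenticationapi.azurewebsites.net/api/orders");

        // 200 OK with the orders if the token is valid, 401 Unauthorized otherwise.
        return response.StatusCode;
    }
}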

If everything is correct we’ll receive HTTP status 200 along with the secured data in the response body; if you change even a single character of the signed token, you will immediately receive HTTP status code 401 Unauthorized.

Now our back-end API is ready to be consumed from any front end application or native mobile app.

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

Follow me on Twitter @tjoudeh


The post Token Based Authentication using ASP.NET Web API 2, Owin, and Identity appeared first on Bit of Technology.


Darrel Miller: Making your ASP.NET Web API funcky with an OWIN appFunc

The OWIN specification defines a delegate called appFunc that allows any OWIN compatible host to work with any OWIN compatible application.  This post shows you how to turn an ASP.NET Web API into an AppFunc.

FunkyDancers

AppFunc is defined as:

using AppFunc = Func<IDictionary<string, object>,Task>; 

In other words, a function that accepts an object that implements IDictionary<string,object> and returns a Task.  The dictionary contains all the OWIN environment values and the Task is used to signal when the HTTP application has finished processing the request.

Microsoft has built OWIN infrastructure code, aka Katana, to support OWIN applications and hosts.  Unfortunately, when creating the glue code that allows Web API to run on OWIN hosts, they hid the appFunc behind "helper" code.  Using Katana does add a whole lot of value, but I feel that being required to use a Katana-enabled host to host a Katana-enabled application kind of defeats the point of OWIN, even though OWIN is being used under the covers.

Fortunately, it is not a big task to use the Katana Web API infrastructure and expose it as a real OWIN appFunc.  I do use the Katana Web API adapter code to avoid having to re-write the code that maps the Owin environment into HttpRequestMessage and HttpResponseMessage.  You can install this by using the Microsoft.AspNet.WebApi.Owin nuget package.

 public static class WebApiAdapter
    {
        public static Func<IDictionary<string, object>, Task> CreateWebApiAppFunc(
            HttpConfiguration config,
            HttpMessageHandlerOptions options = null)
        {
            var app = new HttpServer(config);

            if (options == null)
            {
                options = new HttpMessageHandlerOptions()
                {
                    MessageHandler = app,
                    BufferPolicySelector = new OwinBufferPolicySelector(),
                    ExceptionLogger = new WebApiExceptionLogger(),
                    ExceptionHandler = new WebApiExceptionHandler()
                };
            }

            var handler = new HttpMessageHandlerAdapter(null, options);
            return (env) => handler.Invoke(new OwinContext(env));
        }
    }

    public class WebApiExceptionLogger : IExceptionLogger
    {
        public Task LogAsync(ExceptionLoggerContext context, CancellationToken cancellationToken)
        {
            return Task.FromResult<object>(null);
        }
    }

    public class WebApiExceptionHandler : IExceptionHandler
    {
        public Task HandleAsync(ExceptionHandlerContext context, CancellationToken cancellationToken)
        {
            return Task.FromResult<object>(null);
        }
    }
The ExceptionLogger and ExceptionHandler above are "do nothing" implementations, but they can be replaced with application-specific code.  Passing an HttpConfiguration instance to the CreateWebApiAppFunc method returns an OWIN appFunc that should work with any OWIN host.
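
Usage might look roughly like the following sketch, assuming you already have a route-configured HttpConfiguration and an OWIN host that accepts a raw appFunc:

var config = new HttpConfiguration();
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional });

// appFunc can now be handed to any OWIN compatible host.
Func<IDictionary<string, object>, Task> appFunc = WebApiAdapter.CreateWebApiAppFunc(config);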

There is an example of how you can host an AppFunc in my earlier blog post.


Filip Woj: Announcing ASP.NET Web API Recipes

It is my pleasure to announce that this summer my ASP.NET Web API book will be released. It’s entitled "ASP.NET Web API Recipes", and will be published by Apress. While the publication date is not set in stone yet (probably … Continue reading

The post Announcing ASP.NET Web API Recipes appeared first on StrathWeb.


Darrel Miller: Streamlining self-hosting with Chocolatey, Nuget, Owin and Topshelf – part 2

In the previous post in this series, we talked about how to create a Windows Service that would use an OWIN compatible host, to host an OWIN HTTP application and package that up into an easy to manage executable. This post describes an approach to deploying that executable using  simple command line tooling.

Push vs Pull Deployments

I often hear people talk about xcopy deployment as if it were the epitome of deployment simplicity.  My experience is that very often I want to deploy to a machine that is in a completely different security realm, and the idea of being able to simply copy files from the source machine to the target machine is not practical.  I also find that I much prefer to pull stuff onto a machine rather than push onto it.  If you are running a big web farm, then maybe pushing does make more sense, but if you are running a big web farm, I highly doubt you are considering self-hosting with Windows Services!

Chocolatey

One of the most promising solutions for doing pull deployments is Chocolatey.   Chocolatey downloads specially configured Nuget packages from a standard Nuget feed, to install applications onto machines.  In my opinion the future of Chocolatey was recently validated when Microsoft announced that Powershell V5 will contain tooling called OneGet that will provide “in-the-box” support for Chocolatey packages.

I thought Nugets were just for libraries

The majority of Nuget usage at the moment is for consuming reusable .Net libraries.  However, the internal structure of a Nuget package is quite flexible and is more than capable of delivering a payload of binaries and Powershell scripts for installing those binaries.

The oversimplified explanation of a Chocolatey Nuget is that it is a regular nuget with a powershell script called ChocolateyInstall.ps1.  After you have installed Chocolatey, there are a number of command line tools that are available for managing packages.  For example,

cinst MyApplication –Source https://myfeeds.company.com/feed

This command instructs Chocolatey to download the MyApplication nuget package from the nuget feed specified in the “source” parameter and extract the contents of the nuget into a sub-folder of [SystemDrive]:\Chocolatey\lib.  Once that is done, Chocolatey will attempt to run the ChocolateyInstall.ps1 file if one was in the nuget.  That’s the essence of the process.  For our purposes we are going to use the install script to run our application with the Topshelf “install” command to install the Windows Service.  We will also include a ChocolateyUninstall.ps1 to handle the removal of the Windows Service when the package is uninstalled.

Putting your HTTP Application in a Nuget

I tried using the shortcut approach of building a nuget package by generating the .nuspec file directly from the application project, however, when you do this, the Nuget only contains the binary of the project and assemblies that were referenced directly.  Assemblies that are referenced via Nuget references are left simply as references and not embedded in the nupkg.  This has the benefit of making the Nuget really lightweight, but it would require some fancy powershell footwork after deploying our service to copy all the binaries from the dependent nuget packages into a single folder, ensuring that we get the correct framework version of all the binaries.

KinderSurprise

The simpler solution is just to package up the binaries that are in the debug/release folder.  A sample nuspec would look something like this:

<?xml version="1.0"?>
<package >
  <metadata>
    <id>SampleService</id>
    <version>1.0.2</version>
    <title>Sample Service</title>
    <authors>Darrel Miller</authors>
    <owners>Darrel Miller</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Sample Owin Http host</description>
    <copyright>Copyright 2014</copyright>
  </metadata>
  <files>
      <file src="..\..\SampleService\bin\release\*" target="tools"/>
      <file src="ChocolateyInstall.ps1" target="tools"/>
      <file src="ChocolateyUninstall.ps1" target="tools"/>
  </files>
</package>

Instead of using nuget.exe to pack this file into a nuget, you use the Chocolatey cpack command to create the nupkg file.

The install script looks like this,

$packageName = 'SampleService' 
$serviceFileName = "SampleService.exe"

try { 
  
  $installDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)" 
  $fileToInstall = Join-Path $installDir $serviceFileName
  . $fileToInstall install
  . $fileToInstall start
  Write-ChocolateySuccess "$packageName"
} catch {
  Write-ChocolateyFailure "$packageName" "$($_.Exception.Message)"
  throw
}

And the uninstall script is almost identical:

$packageName = 'SampleService' 
$serviceFileName = "SampleService.exe"

try { 
  
  $installDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)" 
  $fileToInstall = Join-Path $installDir $serviceFileName
  . $fileToInstall uninstall

  Write-ChocolateySuccess "$packageName"
} catch {
  Write-ChocolateyFailure "$packageName" "$($_.Exception.Message)"
  throw 
}

I must confess ignorance to the incantations required to get the install folder.  I simply found other scripts that did something similar and “re-purposed”.  The source for this sample can be found here.

Now where should I put my nupkg

Once you have built your nuget package you can host it either on Chocolatey.org, on Nuget itself, or do what I did and create a feed on myget.org and upload your package there.  If you create an account on MyGet.org, all the instructions on how to upload a package are provided on your Feed page.

Installing the service is as simple as using the Chocolatey "cinst" command:

cinst SampleService -source https://myget.org/f/darrel

If you are brave enough to trust me, you can try installing it on your own machine.  If you currently don't have Chocolatey installed, there is a single command line that you can copy and paste from the Chocolatey home page that will set it up for you.  Once you have Chocolatey and the SampleService installed you can prove it is working by doing the following:

start http://localhost:1002/

If everything works as intended you should see the following,

Install

and once it is installed, and you launch the service you will see this magnificent output.

HelloWorld

Removing the service is as easy as,

cuninst SampleService

What doesn't work?

What we have seen so far gives us a solution that makes it very simple to deploy to a new machine.  My initial attempts at doing an in-place update have not worked as I had hoped, which leaves us with an uninstall-old-version / install-new-version process.  That's fine as long as there isn't configuration data that you want to carry over from the old version to the new version.  I'm sure there is a solution to this, I just haven't dug deep enough yet.

Using a self-hosted Windows Service is not likely to fit the bill for every Web API you deploy; however, it can sometimes be the ideal solution. Hopefully you will find that having this set of tricks up your sleeve comes in handy one day.

Image Credit: Kinder Surprise https://flic.kr/p/fw9eVB
Image Credit:  Tug of War https://flic.kr/p/nD2nj


Darrel Miller: Streamlining self-hosting with Chocolatey, Nuget, Owin and Topshelf – part 1

ASP.Net Web API made it fairly easy to create a self-hosted HTTP server.  However, the process of taking that self-hosted server and deploying it was an exercise that was left to the reader.  This series of blog posts documents the refinements that I have made to my process of creating and deploying self-hosted web servers.  This process is not limited to ASP.Net Web API.  Any Owin compatible HTTP application will work with this method.

The elevator pitch

The high level overview of this process is that I am using the Topshelf OSS project to build applications that can either run as console applications or as a Windows Service.  I have extended the Topshelf service to integrate an OWIN compatible web host that can then host an OWIN compatible web application.  I then package this up as a Chocolatey nuget with some scripts to initiate the service installation, and deploy it to a public nuget feed.  The end result is that I can go to a clean server and use a single command to install a new service instance or update an existing one.

Why would I want to self-host?

Before we dig into the details of the process, let us first review why someone might want to self-host a Web API or web site.  If you are already convinced of the value, then feel free to skip ahead.

The vast majority of web sites and web APIs that run on the Windows platform are hosted by IIS.  Over the years IIS has become somewhat of a kitchen sink product that tries to solve every problem for every scenario.  Whether you are running a web farm in a giant enterprise company, or trying to deploy a one-person blog, IIS tries to meet every need.  The end result is that it is massively complex.  In order to try and address that complexity there are lots of default behaviours.  Some features are opt-in, some features are opt-out.  In the end you either get lucky and your site just works, or you have to start learning about the intricacies of how to configure IIS to behave exactly as you want.  This is further complicated by the fact that IIS has gone through a fair number of revisions and stuff keeps changing.  It also can be tricky when you are trying to host multiple sites, because you need to ensure that one site's config doesn't conflict with another site's config.  This is all possible.  It just takes knowledge.

Self-host is a very different approach.  Self-host allows you to run an HTTP server within your application's own process.  You could host a server within your own desktop application, within a console application, or you can create a Windows Service to host the HTTP server.  A self-hosted HTTP server is usually a very bare bones HTTP server.  Not at all like IIS.  If you want logging, you are going to have to set it up yourself.  If you want output caching, same thing.  Obviously this has its pros and cons.  For some scenarios it makes way more sense to use IIS; however, if you need a lightweight, highly isolated HTTP server that needs almost no config to get up and running, then self-host might just be what you need.

One challenge with self-host is that there is no “out-of the box” infrastructure to get up and running.  This post will demonstrate a way to build and deploy HTTP servers that are hosted in a Windows Service.

Why a Windows Service?

A Windows Service is the most natural way to run an HTTP server.  Usually an HTTP server is running all of the time.  With a Windows Service you can configure it to start up when the system starts, define which credentials the service runs under, and define specific actions to take if the process fails.  An individual service runs in its own process, so you get a process isolation mechanism similar to what IIS application pools provide.

Topshelf is the easiest way to create a Windows Service

You can create a Windows Service by using the "Windows Service" Visual Studio template, which effectively creates a console application and a service class that derives from ServiceBase.  This isn't a terrible way to do it, but you will find that if you try running that project interactively from Visual Studio it will complain about not being able to start a Windows Service this way.  This is a pain when you want to debug your API.  Having to install the service and then start and stop it on every build is annoying.  There are a number of workarounds for this; however, by far the best solution I have found is an open source library called Topshelf.

Topshelf provides a variety of helper functionality around managing your Windows Service.  Not only does it provide a programmatic interface, but it also gives you an out-of-the-box command line interface for managing services.  If you have ever spent time hunting for the InstallUtil.exe program then you will appreciate being able to do things like this:

myservice.exe install

One way to create a TopShelf based service is to create a class that implements the ServiceControl interface.  This is very simple interface:

 public interface ServiceControl
 {
        bool Start(HostControl hostControl);
        bool Stop(HostControl hostControl);
 }

In order to get the service running, you need to pass a setup lambda to a static method. Like this,

HostFactory.Run(x => { Do some setup stuff } );

The setup stuff is really not that hard, but I’m not going to go into the details of this because the service class I created hides the setup goo.  I will discuss this service class once we have introduced one more piece of the technology puzzle.
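
For the curious, the setup lambda for a ServiceControl implementation typically looks something like this hedged sketch of a standard Topshelf registration (MyHttpService is a placeholder implementing ServiceControl, not the author's hidden service class):

HostFactory.Run(x =>
{
    x.Service<MyHttpService>(s =>
    {
        s.ConstructUsing(name => new MyHttpService());
        s.WhenStarted((svc, hostControl) => svc.Start(hostControl));
        s.WhenStopped((svc, hostControl) => svc.Stop(hostControl));
    });

    x.RunAsLocalSystem();
    x.SetServiceName("MyHttpService");
    x.SetDisplayName("My HTTP Service");
    x.SetDescription("Self-hosted OWIN HTTP server");
});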

Who is this Owin chap?

For an HTTP server to be useful, it needs to actually host some kind of HTTP application.  OWIN is a specification that defines an interface between HTTP servers and HTTP applications.  This means that if our HTTP server host supports the OWIN interface then all different kinds of web applications can be hosted.  For example, our host can support ASP.NET Web API, ASP.NET MVC, Nancy, Simple.Web, FubuMVC, SignalR, etc.

OwinServiceHost

The OwinServiceHost class is the class I created that uses Topshelf to create a Windows Service and uses OwinHttpListener to host any OWIN application.  OwinHttpListener is a Microsoft implementation of an OWIN compatible host based on the .NET Framework class HttpListener, which itself is a thin wrapper around the HTTP.SYS kernel-mode driver – the same driver used by IIS.  So, at the core, we are using the same code that sends and receives HTTP requests in IIS; we have just dumped all the extra baggage.  In theory I could have used any other OWIN compatible host, but I know this one works and has good performance.  Changing this class to support other hosts would be fairly straightforward.

The OwinServiceHost class does perform a few important additional functions.  When the service is first installed, we ask the user what URL the HTTP application will live at.  The current implementation only accepts a single URL and it has to be an exact match to the URL that we want to serve requests on.  That means if you use http://127.0.0.1/ don't expect to get a response at http://localhost/.  There are pros and cons to using HTTP.SYS.  A benefit is that you get a very sophisticated URL sharing mechanism that allows you to host multiple HTTP servers on the same port, even alongside existing IIS installations.  The downside is that you need administrator rights to reserve the URL for your application.  Also, the syntax of the URL reservation mechanism is particularly picky.

The OwinServiceHost class takes care of doing the URL reservation when you install the service and removing the reservation when you uninstall it. Once the user has entered the URL, the OwinServiceHost class stores this in the service’s config file.  If you edit this config file to change the URL after the service has been installed, then you need to make sure that you restart the service for the changes to take effect.

To make it easy to re-use the OwinServiceHost class in multiple projects, I have packaged it up as a nuget Tavis.OwinService.  To get a simple example working, just create a new Console application and install the package,

Install-package Tavis.OwinService

and then a very basic example would look like this,

    class Program
    {
        static void Main(string[] args)
        {
            var service = new OwinServiceHost(new Uri("http://localhost:1002/"), simpleApp)
            {
                ServiceName = "SampleService",
                ServiceDisplayName = "Sample Service",
                ServiceDescription = "A sample service"
            };                                                                              
            service.Initialize();
        }

        // appFunc is the OWIN AppFunc signature: Func<IDictionary<string, object>, Task>
        private static Func<IDictionary<string, object>, Task> simpleApp = (env) =>
        {
            var sw = new StreamWriter((Stream) env["owin.ResponseBody"]);
            var headers = (IDictionary<string, string[]>) env["owin.ResponseHeaders"];
            var content = "Hello World";
            headers["Content-Length"] = new string[] {content.Length.ToString()};
            headers["Content-Type"] = new string[] {"text/plain"};
            var task = sw.WriteAsync(content);
            sw.Flush();

            return task;
        };
    }

The OwinServiceHost class takes two arguments, a default URL and an “appFunc”.  The appFunc is the OWIN interface to any HTTP application.  For more details on the “simpleApp” that I am using here, see this post.  The URL that is passed to the constructor of the class is a default hosting URL.  During the process of installing the service, the user will be prompted to enter an alternative URL.

Once this console application is built, it can be run just as a regular console application, or it can be installed as a service using the Topshelf command line commands.

SampleService.exe install
SampleService.exe start
SampleService.exe stop
SampleService.exe uninstall

If you would like to see the sample application and the source for the OwinServiceHost, it is all available on GitHub here.  If you want to try to build your own service, the NuGet package is here.

In the next installment of this blog series, we will talk about how to deploy this service using Chocolatey Nugets!

chocolate

Image credit: Grain elevator https://flic.kr/p/hED9mA
Image credit: Window Service https://flic.kr/p/RyjX
Image credit: Owin https://flic.kr/p/5z17uA
Image credit: Chocolate https://flic.kr/p/4pq4rk


Darrel Miller: The simplest Owin AppFunc that works

When learning new frameworks and libraries I always like to find the simplest thing that works.  It helps me to separate in my mind what is core and what is helper stuff.  When it comes to debugging it is always nice to be able to strip away the helper stuff.

No frameworks allowed!

While working on my OwinServiceHost (blog post in progress!) I needed a sample OWIN application that I could test with.  I could have pulled in one of the compatible frameworks like WebAPI, Nancy, Simple.Web, or Katana to create a minimal web application, but I was curious whether I could create an OWIN application with just .NET Framework classes.  I know I should be able to, because one of the mantras of the OWIN specification is to avoid taking dependencies on anything that is not part of the .NET Framework.

What does an OWIN application look like

In order to create an OWIN application you need to create a function that is compatible with this,

Func<IDictionary<string, object>, Task>

So, that’s a function that accepts a dictionary of objects, keyed by a string, and it returns a task.  This signature is what the OWIN people call the AppFunc, and the dictionary is called the OWIN environment.  The dictionary contains information passed from the HTTP server down to the HTTP application and provides everything that is necessary to process the request.  The dictionary keys and what their values should contain are defined by the OWIN specification.

The naked truth

And now for the simplest (I didn’t say easiest) OWIN compatible application that actually works.

(env) => {
    var sw = new StreamWriter((Stream) env["owin.ResponseBody"]);
    var headers = (IDictionary<string, string[]>)env["owin.ResponseHeaders"];
    var content = "Hello World";
    headers["Content-Length"] = new string[] {content.Length.ToString()};
    headers["Content-Type"] = new string[] {"text/plain"};
    var task =  sw.WriteAsync(content);
    sw.Flush();
               
    return task;

}

In order to return a response, we need access to the response body stream, which is stored in the dictionary under the owin.ResponseBody key. At a minimum we need to set the Content-Length header, and I also set the Content-Type header so that a browser will happily render the result.

So what do I do with that

Having an AppFunc without an OWIN compatible host is fairly useless.  Bear with me, I am currently in the middle of writing a post on how to host and deploy these kinds of HTTP applications in as easy a way as possible.  Watch this space!

Image credit:  Hammer https://flic.kr/p/da8dLo


Darrel Miller: The much maligned User Agent header

This post is the first in a series of posts that will explore some piece of the HTTP specification with the objective of providing practical insights into the uses and abuses of the feature. Consider these posts my attempt to provide HTTP guidance in language that is slightly more digestible than the official IETF specifications.

The user agent header is one of those headers that most developers already know about, but pay little attention to.  It gets very little love. I'm sure most developers have wondered at some point whether we really need to send all the bytes in those giant browser user agent strings.  And why does Internet Explorer claim that it is a Mozilla browser?

We know that our analytics tools use this header to give us statistics on what operating system our users are using and it lets us watch the ebb and flow of browser market share.  But we’ve heard mixed messages about user-agent sniffing, and we’ve heard concerns about advertisers using the uniqueness of user-agent strings for fingerprinting users.  It is interesting that many of the things user agents are used for today were not part of the original goals for this HTTP header. 

Lost its meaning

Many developers I talk to are not aware of the intent of the user agent header and even fewer know how to properly format one. It's not particularly surprising considering how many poorly formed instances exist in the wild.  In the world of browser development we don't really have any control over the user-agent header; the browser defines it. However, for mobile applications and native client applications we have the opportunity to assign our own values and start getting the value from it that was originally intended.

How can it help me?

The HTTPbis specification says that the user agent header can be used to "identify the scope of reported interoperability problems". This is the eloquent way of saying that we can find out which versions of client applications are broken. Sometimes clients get released out into the wild with bugs. Sometimes those bugs don't become a problem until changes are made to the server. Asking all your users to update to a new client version is a good start, but it's going to take time to get the clients updated. What should we do in the meantime? We could hold off on the server deployment, but that would suck. We could instead add a small piece of middleware that sniffs the incoming requests, looks for the broken clients and fixes the problem for them. It might be something in the request headers that needs to be fixed, or something in the response headers. Hopefully it is not in the body, and if it is, let's hope the body is small. When the broken clients are gone, we throw away the middleware.
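
As an illustration of that kind of temporary middleware, here is a sketch of a Web API message handler that detects a hypothetical broken client build by its user agent and patches the request before it reaches the rest of the pipeline. The product name, version and the fix-up itself are all made up:

using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;

public class BrokenClientFixupHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Look for the specific client build that shipped with the bug.
        var brokenClient = request.Headers.UserAgent.FirstOrDefault(p =>
            p.Product != null &&
            p.Product.Name == "MyMobileApp" &&
            p.Product.Version == "1.2");

        if (brokenClient != null)
        {
            // Hypothetical fix-up: that build forgot to ask for JSON.
            request.Headers.Accept.Clear();
            request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        }

        return base.SendAsync(request, cancellationToken);
    }
}

Once the broken client builds have died off, the handler can simply be removed from config.MessageHandlers.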

User agent headers should not be used as a primary mechanism for returning different content to different devices. Clients should be able to choose their content based on accept headers, media attributes, client hints and prefer headers. Using the user agent header as a feature or content selection mechanism just encourages client developers to lie to you, which defeats the point. This is why IE claims it is a Mozilla browser, and why Opera no longer identifies itself primarily as the Opera browser.  These browsers were forced to lie in order to provide a first class experience to their users, because web site developers were only delivering features to browsers that they knew could support them.  This is not the same as providing temporary workarounds for clients until new versions can be deployed.

So what is it supposed to look like

The user agent header is made up of a list of whitespace delimited product names and versions. The HTTPbis specification defines the header to look like this,

     user-agent     = product * ( RWS ( product / comment ) ) 

This notation is what is called ABNF and you will find it throughout the IETF specs. It's not the most wonderful of syntax descriptions, but once you get over a few initial hurdles it is fairly easy to understand. In this example the first product token is indicating that the user-agent header must consist of at least one product token. The *(…) syntax that follows says that there may be zero or more of the stuff that is inside the parentheses after the product. The RWS is a special predefined token that is defined in another spec that means 'Required White Space'. The effect of putting the RWS token is that multiple products will be whitespace delimited. However, the user agent header can also contain comments. The comment syntax is not specific to the user agent header. There are a number of other http headers that allow it. A comment is differentiated from product token by being surrounded by parentheses.

The spec hints that the comments following a product are related to the preceding product, although it doesn't come right out and say it.

Why more than one product?

Most applications are built on top of libraries and frameworks. Identifying just the main product may not be helpful when trying to understand why a particular set of clients are sending an invalid request or failing on a particular response. By listing the core client application, followed by the components upon which the client is built, we can get a better picture of the client environment. Obviously you can take this idea too far. Enumerating every dll that your app has a dependency on is going to be a waste of bytes and effort processing the header. The idea is to capture the main components that impact the creation and sending of requests and the consumption of responses.
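
For example, a native client built on a reusable HTTP library might send something like the following made-up header, listing the application first and the components it is built on after it:

user-agent: MyMobileApp/2.1 MyHttpLib/0.9 (.NET 4.5; Windows Phone 8.1)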

Syntax of a product token

The product identifier is described in the spec as

     product         = token ["/" product-version]
     product-version = token
 

This is fairly self explanatory except for the fact that we have no idea what the syntax of a token is! If you dig around you will find that token is defined in httpbis part1 as,

        token     = 1*tchar 

Having fun yet?  So a token is one or more tchars.  A tchar is defined as,

     tchar     = "!" / "#" / "$" / "%" / "&" / "'" / "*"
                    / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~"
                    / DIGIT / ALPHA
                    ; any VCHAR, except delimiters

The terms DIGIT, ALPHA and VCHAR are defined in yet another IETF spec here, but what are these delimiters that we are not allowed to use?  The delimiters that are not allowed to appear in a token are defined as:

 (DQUOTE and "(),/:;<=>?@[\]{}")

For those of you still paying attention, you may be saying: but I see semi-colons in user agent strings all the time.  That is because they are in the comment, not in the product token.  The “comment” has a different set of syntax rules.  The following is a valid example of a user-agent header that is full of special characters,

user-agent: foo&bar-product!/1.0a$*+ (a;comment,full=of/delimiters@{fun})

It’s not very surprising that there are many examples of invalid user-agent headers considering the torturous set of syntax rules that need to be observed.  It is made worse by the fact that the vast majority of HTTP client libraries simply allow a developer to assign an arbitrary string to the user-agent header.  It is exactly this type of situation that makes me appreciate Microsoft’s System.Net.Http HTTP library.  This library has created strong types for many of the HTTP headers to ensure that you don’t break any of these rules.  You get to offload the burden of knowing these rules and be confident that you are compliant.

[Fact]
public void TestValidCharactersInUserAgent()
{
    var request = new HttpRequestMessage();
    request.Headers.UserAgent.Add(new ProductInfoHeaderValue("foo&bar-product!","1.0a$*+"));
    request.Headers.UserAgent.Add(new ProductInfoHeaderValue("(a;comment,full=of/delimiters@{fun})"));

    Assert.Equal("foo&bar-product!/1.0a$*+ (a;comment,full=of/delimiters@{fun})", request.Headers.UserAgent.ToString());
}

The ProductInfoHeaderValue class allows you to create a product token by providing a product and a version. It also has a constructor overload that allows you to create a comment.  All the syntax rules are handled for you.

So what’s the problem?

As a real-world example of how poorly the user-agent is treated, this is the user-agent of the IE11 web browser,

User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko

As I mentioned earlier, IE declares itself as Mozilla for compatibility with sites that do feature detection based on the browser.  The comment that follows provides additional information about the environment.  I’m not sure why Trident/7.0 is in the comment and not listed as another product.  The addition of the Touch comment leads me to suspect that additional feature detection is being done on that value.  The final “like Gecko” tokens should be interpreted as two distinct products, “like” and “Gecko”, according to the syntax.  However, if you look at the Chrome user agent header you can see the origin of this weird string.

User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36

I’m assuming that some sites must do feature detection on the string “like Gecko”, which is a comment on the AppleWebKit product.  However, in copying this string the IE team have completely ignored the semantics of the elements of the user-agent header and assumed that anyone processing the header will be doing substring searches, on the assumption that it is a simple string.

Maybe there is real-world internet nastiness that has forced IE to format their header in the way they have, but I would think something like this,

User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64;) Trident/7.0 (Touch; like Gecko) rv/11.0

would make more sense.

Now before you dismiss this as “it’s Microsoft who doesn’t follow standards”, you can be sure they are not the only ones who are missing the intent of user-agent semantics.  The Go language’s built-in HTTP client library uses the following user-agent header.

Go 1.1 package http

I suspect the following user-agent header would be more semantically correct,

Go/1.1 package-http

I’m quite sure if I spent any amount of time watching HTTP requests, I could find violations from all different products.

Why you should care

There is an argument that could be made to say: well, if the big browser vendors don’t care about respecting the semantics, then why should I concern myself with it?

The user-agent was added to HTTP to help web application developers deliver a better user experience.  By respecting the syntax and semantics of the header we make it easier and faster for header parsers to extract useful information from the headers that we can then act on.

Browser vendors are motivated to make web sites work no matter what specification violations are made.  When the developers building web applications don’t care about following the rules, the browser vendors work to accommodate that.  It is only by us application developers developing a healthy respect for the rules of the web that the browser vendors will be able to start tightening up their codebases, knowing that they don’t need to account for non-conformances.

For client libraries that do not enforce the syntax rules, you run the risk of using invalid characters that many server side frameworks will not detect.  It is possible that only certain users, in certain environments would detect the syntax violation.  This can lead to difficult to track down bugs.

Hopefully you gained some additional insight from this rather wordy exploration of the user-agent header.  I welcome suggestions regarding other dusty areas of the HTTP protocol that might be worth shining a light on.

 

Image Credit: Agent Perry https://flic.kr/p/dsSxFt
Image Credit: Gecko   https://flic.kr/p/bdf8Te


Dominick Baier: IdentityServer v3 Nuget and Self-Hosting

Thanks to Damian and Maurice we now have a build script for IdSrv3 that creates a Nuget package *and* internalizes all dependencies. So in other words you only need to reference a single package (well strictly speaking two) to self host the STS (including Autofac, Web API, various Katana components etc). Pretty cool. Thanks guys!

A picture says more than 1000 words (taken from the new self host sample in the repo).

image


Filed under: IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: Covert Redirect – really?

In the era where security vulnerabilities have logos, stickers and mainstream media coverage – it seems to be really easy to attract attention with simple input validation flaws. Quoting:

“Covert Redirect is an application that takes a parameter and redirects a user to the parameter value WITHOUT SUFFICIENT validation. This is often the result of a website’s overconfidence in its partners.”

Oh yes – and amongst a myriad of other scenarios this also applies to URLs of redirect based authentication/token protocols like OAuth2, OpenID Connect, WS-Federation or SAML2p. And guys like Egor Homakov have already shown a number of times that you are bound to be doomed if you give external parties too much control over your redirect URLs.

The good thing is, that every serious implementer of the above protocols also reads the specs and accompanying threat model – e.g. quoting the OpenID Connect spec (section 3.1.2.1) :

redirect_uri
REQUIRED. Redirection URI to which the response will be sent. This URI MUST exactly match one of the Redirection URI values for the Client pre-registered at the OpenID Provider, with the matching performed as described in Section 6.2.1 of [RFC3986] (Simple String Comparison)”

Or my blog post from over a year ago:

“If you don’t properly validate the redirect URI you might be sending the token or code to an unintended location. We decided to a) always require SSL if it is a URL and b) do an exact match against the registered redirect URI in our database. No sub URLs, no query strings. Nothing.”

So this type of attack is really a thing of the past – right?


Filed under: .NET Security, AuthorizationServer, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Darrel Miller: We all need an identity

A brewing debate in the .Net community recently came to a boil when Brendan Forster attempted to address an open issue for the Octokit library, Signed Octokit Releases, Yay or Nay? It is an interesting debate with many strong opinions.  Unfortunately I think the debate is down in the weeds and I’d like to add a bit of a longer term perspective to the discussion.

SINGAPORE-2010 YOUTH OLYMPIC GAMES-BOXING

I will be the first to admit that I have a fairly limited understanding of the implementation details of this subject, so I’ll only attempt to give a high level synopsis of the problem before throwing my opinions around. 

The debate centers around whether libraries should assign “Strong names” to their assemblies.  Microsoft’s definition of a strong name is,

A strong name consists of the assembly's identity—its simple text name, version number, and culture information (if provided)—plus a public key and a digital signature.

I’m quite sure someone is going to do a great job of summarizing all the pros and cons that have been debated in the comments to that Github issue.  I’m not going to attempt to do that.  What I want to focus on is the first part of that Microsoft definition, the part that says a strong name consists of an assembly’s identity.

Why do I care about identity?

Having an identity for a chunk of code should be enable me to do several things:

  1. guarantee that the bytes in the assembly are exactly the bytes that the publisher of the assembly intended to be in the assembly.
  2. know who the publisher is to help me decide if I trust their code.

The problem with chunks of code is that over time, they change.  We fix bugs, we add features, we refactor.  Generally we use a version number to distinguish between these evolutions.  The challenge with strong naming is that the version number is embedded into the identity of the assembly.  When you change the version number you have created a chunk of code with a completely new identity.  Any existing applications that have a dependency on the old version are unable to use the new version, at least without doing some extra magic.

Sometimes this is a good thing, sometimes its a bad thing.  If the new version fixes a serious bug then using the new version would be awesome.  If the new version completely refactors the public interfaces then it would likely be very bad for existing code to automatically take a dependency on the new version.

Considering that we have stated that we want to be able to guarantee the bytes in the assembly, it seems to make sense that we include the version in the identity.  The bytes change between versions, so the identity should too.  But if the identity changes, how can we take advantage of new versions without having to manually change dependency references?

Binding redirects and the love/hate relationship

The .net framework has a mechanism called binding redirects.  It lets an application declare that it is happy to reference a range of versions of an assembly.  This allows you to create new versions of assemblies without having to re-specify the dependency in the consuming application.

The problem, it seems, is that getting these binding redirects right can be painful.  In fact the whole strong naming process can be fairly painful.  It starts with the fact that you need to generate public/private key pairs.  Whenever you ask developers to deal with crypto-based security it gets messy.  It’s one of those situations where, once you understand it, it’s not so bad.  However, getting to that point is hard.  Developers need to understand that strong naming an assembly is called signing an assembly, but that’s not the same as signing a manifest, or code signing an exe, and that one only requires a .snk file, but the other requires a pfx file, or is it a p12? They both have public and private keys, which one do I have to do delay signing for? Can I create my own .snk, or do I need to go to a certificate authority?  I’m quite sure I said several technically incorrect things in the last three sentences.

Sure, I’m mixing issues here, but these are all mechanism that .net developers use for assigning identity to their chunks of code.

So lets throw away strong naming!

A growing number of people, especially in the OSS community are calling for developers to just stop using strong naming.  .Net does not require assemblies to be strong named.  For most scenarios, it just isn’t needed.

The problem is, once an organization decides that its application will be strong named, for whatever its reasons, there is a requirement that all dependencies are also strong named.  This means that if an OSS library doesn’t strong name its assemblies, it cannot be used by anyone who wants to strong name their stuff.  This requirement makes sense considering our original requirement of ensuring we are running exactly the bytes we think  we should be running.  Having a dependency on a set of mystery bytes defeats that purpose.

The other major problem with throwing away strong naming is when you have applications that have extensibility models.  Consider Visual Studio.  It supports all kinds of extensions.  If ExtensionA and ExtensionB both use LibraryX, and the publisher of LibraryX releases a new version, ExtensionA may decide to update to that new version.  Without strong naming, the .NET framework cannot load both the old and the new version of LibraryX in the same process at the same time. This means that if the new version of LibraryX is incompatible with ExtensionB, then ExtensionB is going to break.  This is DLL hell all over again.

Stuck between a rock and a hard place

Hence we have the proponents and the opponents.  The problem is that the debate seems to be focused on whether we should have “Strong Naming” or not.  I think that is the wrong question.  I think it is obvious that there are flaws in the way that strong naming and assembly binding go about identifying chunks of code.  I don’t think that with the current implementation constraints we are going to find a solution that makes everyone happy.

I think a better question to ask is whether people see value in having a provable way to identify chunks of code.  Considering the current state of security concerns on the internet, I think everyone will agree this has value.

If we can agree on the value of having code identity, then maybe we can debate how it could be done better in future versions of the .net framework and we can live with our current pain in the short term.

 

 

What do other platforms do?

I’ve heard it mentioned a few times that other platforms don't have this pain.  I’d love to see discussions on how other platforms address this code identity issue.  I’m not nearly the polyglot that I would like to be so I can’t provide a whole lot of insight here, but I can cite one other example and do some extrapolation from that.

In the Go programming language, third party dependencies are specified by pointing to a source code repository.  The interesting thing about this approach is that the code dependency is identified using a URL.  URLs tend to be fairly good at identifying things.   They are certainly a whole lot prettier than the fully qualified assembly name syntax we have in .net.  Go is very different than .net in that it builds statically linked executables from the source, so the comparison can only go so far.  However, the use of a URL to reference code dependencies has some other interesting properties. 

URI as a code identifier

By using an https URL we get a certificate-based guarantee of the origin of our code.  We could enable de-referencing the URL to get all kinds of metadata like the publisher, the hash for the assembly, links to newer available versions, and who knows what else.  This might eliminate the need to store this metadata in the assembly itself.

Curiously, we already have URIs that identify many pieces of .net code.  Consider the following URL,

https://www.nuget.org/packages/Microsoft.Net.Http/2.2.20

If I were able to build an application that took a dependency on the assemblies identified by this URL, I can imagine there being infrastructure in place to guarantee that the bytes I am running are the ones that have actually been published by Microsoft.  This is probably not something you would want to do at load time for performance reasons, but why not build something like this into Windows Defender, or some other background process.  A big complaint about the current strong naming implementation is that, by default, an assembly is not hashed to ensure its integrity.  So although the security mechanism is in place, it is turned off by default for performance reasons.

One problem with the URL that I showed above is that it is for a Nuget package, not an assembly.  .Net specifies dependencies at the assembly level.  However, it is not hard to imagine a URL such as the following:

https://www.nuget.org/packages/Microsoft.Net.Http/2.2.20#System.Net.Http

This allows me to reference a specific assembly.  You will also notice that it references an assembly that belongs to a specific version of the package.  However, it is also possible that I could choose to reference the latest version of the package:

https://www.nuget.org/packages/Microsoft.Net.Http#System.Net.Http

This would allow me to make the statement that my application is prepared to bind to the System.Net.Http assembly in any version of the Nuget package.  That’s probably a fairly bold thing to do.  We could take a more conservative stance by adopting the conventions of semantic versioning to declare the following:

https://www.nuget.org/packages/Microsoft.Net.Http/2.2#System.Net.Http

Now with this, I’m constraining the versions to those with the same major and minor values.  New patches will be happily accepted.

How does this help?

The result of using this mechanism to specify dependencies is that I get a globally unique identifier for my assembly.  Tooling can dereference the URL to get publisher information, available versions and assembly hashes.  As a developer, I no longer need to deal with signing/delay signing assemblies, I get an easy to understand identifier for my references, and my metadata is in one place, on a NuGet server, not embedded into every single deployed assembly.

I also don’t see any reason why this same mechanism can’t be used for code signing executables.  To an extent, this method is similar to the way ClickOnce provides deployment manifests and application manifests.  ClickOnce sucks because you have to manually generate those manifests and sign them in your development/build environment.  That is completely unnecessary as a nuget server could easily be enhanced to deliver this information dynamically.  With mechanisms like Chocolatey and the new MS efforts in this space, nuget feeds are becoming a viable option for application delivery also.  Does everyone really need to have their own code signing certificate when we can deliver our applications from our nuget account using HTTPS?

URIs are proven to be effective at identifying things.  HTTPS provides us with crypto based guarantee of provenance. Most of our code is distributed via the web in the first place.  Why not use the tools it provides us to identify the code we are running?

Maybe ignorance is bliss

I’m no security expert.  Maybe I’m completely missing some obvious reason why this wouldn’t work.  However, at the moment, I don’t code sign my stuff.  I avoid signing executables unless it is absolutely necessary, and when it is, I fight my way through it by piecing together bits of blog posts that guide me through the process.  Who knows how many security related blunders I am making in the process?

One thing I do know: if I find this hard, I can assure you that there are lots of other developers who find it hard.  We need to make this easier.  We are not going to get there by having a binary debate on strong naming, yay or nay.

Sunrise

Maybe my idea is completely impractical.  That’s ok.  I’d rather have that discussion and hear from other people about how they would solve this problem given the chance to start fresh.  Let’s start there and then worry about how we are going to get from where we are now to where we want to be.

 

Credit: Boxing photo https://flic.kr/p/8uQwPu
Credit: Zen photo https://flic.kr/p/59f68S
Credit: Sunrise photo https://flic.kr/p/8A4W9f


Filip Woj: Ignoring routes in ASP.NET Web API

If you use centralized routing, it is occasionally needed to ignore a greedy Web API route so that the request can be processed by some other component or handler. One of the tiny overlooked features of Web API 2.1 was … Continue reading

The post Ignoring routes in ASP.NET Web API appeared first on StrathWeb.


Dominick Baier: IdentityServer v3 and Azure WebSites (and other Deployment Simplifications)

(applies to preview 1)

A common request for IdentityServer was being able to run on Azure WebSites (or other constrained deployment environments where you don’t have machine level access). This was never easy because our default implementations in v2 had a dependency on the Windows certificate store which is typically not available in those situations.

Note: A security token service is typically not a good candidate for shared / high density hosting – especially on a public cloud. There are certainly exceptions to that rule, e.g. for testing scenarios as well as private clouds – also – e.g. Azure WebSites do support modes where the machine is not shared between tenants.

1 Loading the signing key
IdSrv3 supports two modes for signing the identity token – either asymmetric using the default x.509 signing certificate or symmetric using the secret of the requesting client. In the latter case no certificates are needed at all for identity tokens.
Similar for access tokens – either you use the X.509 certificate and self contained tokens (JWTs) or you use reference tokens.

So IOW – if you want to get around X.509 certificates – you can. But quite frankly, for most situations asymmetric signatures are really what you want – so we made it much easier to deal with certificates in v3.

The heart of IdSrv3 is the ICoreSettings interface – on there you can find the GetSigningCertificate method from where you return an X509Certificate2. How you load that certificate is now totally up to you – from the certificate store, from a file, blob storage – or like in our test setup – from an embedded resource (we will actually add some helpers to simplify that in a later release). This gives you a fair bit of flexibility.
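
For example, loading the signing certificate from an embedded resource could look roughly like this sketch. The resource name, password handling and surrounding settings class are placeholders, and the exact ICoreSettings member signatures may differ between previews:

public X509Certificate2 GetSigningCertificate()
{
    // MyCoreSettings is a placeholder for your ICoreSettings implementation.
    var assembly = typeof(MyCoreSettings).Assembly;

    using (var stream = assembly.GetManifestResourceStream("MyHost.Certificates.signing.pfx"))
    using (var memory = new MemoryStream())
    {
        stream.CopyTo(memory);

        // In a real deployment the password would come from configuration, not a literal.
        return new X509Certificate2(memory.ToArray(), "password");
    }
}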

2 Public vs internal host name
In many (shared) hosting scenarios, your local machine name and the machine / DNS name that external parties will use is different. That could be due to CNAME settings, reverse proxies, load balancers and SSL termination. We added support for that in IdSrv v2 – and it was a bit painful since it was an afterthought. In v3 you can now specify the public host name on ICoreSettings (see above) – see here for an example. If you don’t specify anything here, we assume the value of the host header and SSL.

3 Managing deployment configuration
The above settings are all in code, so you might wonder how you maintain different “configurations” for different deployment scenarios.

One approach would be surely to load the variables from web.config and use config transforms during publishing. This will get very complicated very quickly. The current approach we use is to maintain several Katana startup files, one for each deployment environment. A lesser known feature of Katana is, that you can specify the startup class in web.config, e.g.:

<appSettings>

      <add key="owin:AppStartup" value="LocalTest" />

</appSettings>

..and then map the value LocalTest to a startup class in code:

[assembly: OwinStartup("LocalTest", typeof(…))]

You can then use config transforms to use the right startup class for your target environment, e.g.:

<appSettings>

      <add key="owin:AppStartup"
                  value="AzureWebSites"
                  xdt:Transform="SetAttributes"
                  xdt:Locator="Match(key)" />

</appSettings>

I hope the above changes make hosting and deployment easier in IdSrv3 – if you have any feedback please come to the GitHub issue tracker and join the conversation!


Filed under: ASP.NET, Azure, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Henrik F. Nielsen: Autofac and Azure Mobile Services .NET Backend (Link)

Posted the blog Autofac and Azure Mobile Services .NET Backend on the Azure Mobile Services Team Blog, explaining how the .NET backend uses Autofac and how you can use it for all things dependency injection.

Have fun!

Henrik


Ali Kheyrollahi: CacheCow 0.5 Released

[Level T1]


OK, finally CacheCow 0.5 (actually 0.5.1, as 0.5 was released by mistake and pulled quickly) is out. So here is the list of changes, some of which are breaking. Breaking changes, however, are very easy to fix, and if you do not need the new features you can happily stay on the stable 0.4 versions. If you have moved to 0.5 and you see issues, please let me know ASAP. For more information, see my other post on the Alpha release.

So here are the features:

Azure Caching for CacheCow Server and Client

Thanks to Ugo Lattanzi, we now have CacheCow storage in Azure Caching – for both client and server. Considering that calls to Azure Cache take around 1ms (based on some ad hoc tests I have done; do not quote this as a proper benchmark), this makes it a really good option if you have deployed your application in Azure. You just need to specify the cacheName, otherwise the "default" cache will be used.

Complete re-design of cache invalidation

I have now put some sense into cache invalidation. The point is that a strong ETag is generated for a particular representation of the resource, while cache invalidation happens on the resource including all of its representations. For example, if you send the application/xml representation of the resource, an ETag is generated for that particular representation; the application/json representation will get its own ETag. However, on PUT both need to be invalidated. On the other hand, in case of a PUT or DELETE on a resource (let's say /api/customer/123) the collection resource (/api/customer) needs to be invalidated as well, since the collection will be different.

But how would we find out whether a resource is a collection or a single item? I have implemented some logic that infers whether the item is a collection or not – this is based on common and conventional route design in ASP.NET Web API. If your routing is very custom, this will not work.

Another aspect is hierarchical routes. When a resource is invalidated, its parent resource will be invalidated as well. For example, in case of PUT /api/customer/123/order/456, the customer resource will change too – if orders are returned as part of the customer. So in this case, not only the order collection resource but also the customer needs to be invalidated. This is currently done in CacheCow.Server and will work for conventional routes.

Using MemoryCache for InMemory stores (both server and client)

Previously I had been using dictionaries for the InMemory stores. The problem with a Dictionary is that it just grows until the system runs out of memory, while MemoryCache limits its memory use when the system is under memory pressure.
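As a rough illustration (not CacheCow's actual store code), a MemoryCache can be capped and will evict entries under memory pressure, whereas a plain Dictionary never evicts – the cache name and the 50 MB limit below are illustrative values:

// Hedged sketch: a capped System.Runtime.Caching.MemoryCache vs. an ever-growing Dictionary.
var limits = new System.Collections.Specialized.NameValueCollection
{
    { "CacheMemoryLimitMegabytes", "50" } // illustrative cap
};

var cache = new System.Runtime.Caching.MemoryCache("CacheCowDemo", limits);
cache.Set("key", "cached value", new System.Runtime.Caching.CacheItemPolicy());
// Entries may be evicted once the limit is reached or the system is under memory pressure.

var dictionary = new System.Collections.Generic.Dictionary<string, string>();
dictionary["key"] = "cached value";
// Entries stay until removed explicitly – memory keeps growing.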

Dependency of CachingHandler on HttpConfiguration

As I announced before, you need to pass the configuration to the CachingHandler on the server. This should be an easy change but a breaking one.
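A minimal sketch of the new registration – this assumes the CachingHandler constructor overload that takes the HttpConfiguration:

// Hedged sketch: wiring up CacheCow.Server's CachingHandler, passing in the
// HttpConfiguration it now depends on.
public static void Register(HttpConfiguration config)
{
    var cacheHandler = new CacheCow.Server.CachingHandler(config);
    config.MessageHandlers.Add(cacheHandler);
}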


Future roadmap and 0.6

I think we are now at a point where CacheCow requires a re-write for the features I want to introduce. One such feature is enabling content-based ETag generation and cache invalidation. Content-based ETag generation is already there and can be used now, but without content-based invalidation it is not of much use – especially since it currently generates a weak ETag. Considering the changes required to achieve this, almost a re-write is needed.

Please keep the feedback coming. Thanks for using and supporting CacheCow!



Taiseer Joudeh: Building OData Service using ASP.Net Web API Tutorial – Part 3

This is the third part of Building OData Service using Asp.Net Web API. The topics we’ll cover are:

CRUD Operations on OData endpoint using Asp.Net Web API

In this post we’ll add another OData controller which will support all the CRUD operations. As we discussed before, OData follows the conventions of HTTP and REST; so if we want to create a resource we’ll issue an HTTP POST, if we want to delete a resource we’ll issue an HTTP DELETE, and so on. One more thing we want to add support for is partial updates (HTTP PATCH), which are very efficient when we update only certain properties on an entity; the request payload for PATCH comes as key/value pairs and contains only the changed properties.

Step 1: Add OData Controller (Support for CRUD operations)

Let’s add a new Web API Controller which will handle all HTTP requests issued against the OData URI “/odata/Tutors”. To do this, right-click on the Controllers folder -> Select Add -> name the controller “TutorsController” and choose the “Empty API Controller” template.

As we did before, we have to derive our “TutorsController” from the “EntitySetController“ class, then we’ll add support for the Get() and GetEntityByKey(int key) methods and implement creating a new Tutor; so when the client issues an HTTP POST request to the URI “/odata/Tutors”, the “CreateEntity(Tutor entity)” method will be responsible for handling the request. The code will look as below:

public class TutorsController : EntitySetController<Tutor, int>
    {
        LearningContext ctx = new LearningContext();

        [Queryable()]
        public override IQueryable<Tutor> Get()
        {
            return ctx.Tutors.AsQueryable();
        }

        protected override Tutor GetEntityByKey(int key)
        {
            return ctx.Tutors.Find(key);
        }

        protected override int GetKey(Tutor entity)
        {
            return entity.Id;
        }

        protected override Tutor CreateEntity(Tutor entity)
        {
            Tutor insertedTutor = entity;
            insertedTutor.UserName = string.Format("{0}.{1}",entity.FirstName, entity.LastName);
            insertedTutor.Password = Helpers.RandomString(8);
            ctx.Tutors.Add(insertedTutor);
            ctx.SaveChanges();
            return entity;
        }
    }

By looking at the code above you’ll notice that we overrode the CreateEntity() method, which accepts a strongly typed Tutor entity. The nice thing here is that once you override the CreateEntity() method, the base class “EntitySetController” is responsible for returning the right HTTP response (201 Created) and adding the correct Location header for the created resource. To test this out let’s issue an HTTP POST request to the URI “/odata/Tutors”; the POST request will look as below:

[Image: web api odata crud]

Step 2: Testing OData Prefer Header

By looking at the request above you will notice that a 201 Created status code is returned and the response body contains the newly created Tutor. But what if the client does not want the server to return a duplicate of the newly created Tutor? The nice thing here is that OData allows us to send a Prefer header with the request to indicate whether the created entity should be returned to the client. The default value for the header is “return-content”, so if we issue another POST request and pass the Prefer header with the value “return-no-content”, the server will create the resource and return a 204 No Content response.
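A hedged HttpClient sketch of the same POST with the Prefer header set to “return-no-content” – the base address and Tutor values below are placeholders, and the original requests in this post are composed in a tool such as Fiddler or PostMan:

// Hedged sketch: POSTing a new Tutor and asking the server not to echo it back.
// Base address and field values are placeholders; PostAsJsonAsync comes from the
// Web API client libraries (System.Net.Http.Formatting).
var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") };
client.DefaultRequestHeaders.Add("Prefer", "return-no-content");

var newTutor = new Tutor { FirstName = "John", LastName = "Smith", Email = "john.smith@example.com" };
var response = client.PostAsJsonAsync("odata/Tutors", newTutor).Result;

Console.WriteLine(response.StatusCode); // expect 204 No Content instead of 201 Created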

Step 3: Add Support for Resource Deletion

Adding support for delete is simple: we need to override the Delete(int key) method so that any HTTP DELETE request sent to the URI “/odata/Tutors(key)” will be handled by it. Let’s implement this in the code below:

public override void Delete(int key)
	{
		var tutor = ctx.Tutors.Find(key);
		if (tutor == null) 
		{
			throw new HttpResponseException(HttpStatusCode.NotFound);
		}

		ctx.Tutors.Remove(tutor);
		ctx.SaveChanges();
	}

To test this out we need to issue an HTTP DELETE request to the URI “OData/Tutors(12)”; if the tutor exists, the HTTP response will be 204 No Content. What’s worth mentioning here is that if we try to delete a tutor which doesn’t exist, the throw statement in the code above returns an HttpResponseException with status code 404 and an empty response body. There is nothing wrong with this, but if you want to build an OData-compliant service then the errors should be handled differently; to fix this we need to return an HttpResponseException with an object of type “Microsoft.Data.OData.ODataError” in the response body.

Step 4: Return Compliant OData Errors

To fix the issue above I’ll create a helper class which returns an ODataError; we’ll use this class in different methods and for other controllers. So add a new class named “Helpers” and paste the code below:

public static class Helpers
    {
        public static HttpResponseException ResourceNotFoundError(HttpRequestMessage request)
        {
            HttpResponseException httpException;
            HttpResponseMessage response;
            ODataError error;

            error = new ODataError
            {
                Message = "Resource Not Found - 404",
                ErrorCode = "NotFound"
            };

            response = request.CreateResponse(HttpStatusCode.NotFound, error);

            httpException = new HttpResponseException(response);

            return httpException;
        }
    }

By looking at the code above you will notice that there is nothing special here; we are only returning an ODataError object in the response body, and you can set the Message and ErrorCode properties to something meaningful for your OData service clients. Now, instead of throwing the exception directly in the controller, we’ll call this helper method as in the code below:

if (tutor == null) 
	{
		throw Helpers.ResourceNotFoundError(Request);
	}

Step 5: Add Support for Partial Updates (PATCH)

In cases where we want to update 1 or 2 properties of a resource which has 30 properties, it is much more efficient to send over the wire only the properties which have changed; likewise, it is more efficient at the database level to generate an update statement which contains only the updated fields. To achieve this we need to override the PatchEntity() method as in the code below:

protected override Tutor PatchEntity(int key, Delta<Tutor> patch)
        {
            var tutor = ctx.Tutors.Find(key);
            if (tutor == null)
            {
                throw Helpers.ResourceNotFoundError(Request);
            }   
            patch.Patch(tutor);
            ctx.SaveChanges();
            return tutor;
        }

As I mentioned before, the changed properties are sent in key/value form in the request body, and the OData Delta<T> class makes it easy to apply these partial updates to any entity. To test this out we’ll issue an HTTP PATCH request to the URI “/odata/Tutors(10)” where we modify only the LastName property of this Tutor; the request will be as below:

[Image: web api odata crud ODataPatchRequest]
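For reference, a hedged HttpClient sketch of how the same PATCH call could be composed – the base address and the new LastName value are placeholders:

// Hedged sketch: a PATCH request that sends only the changed LastName property.
// Base address and the new value are placeholders.
var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") };

var request = new HttpRequestMessage(new HttpMethod("PATCH"), "odata/Tutors(10)")
{
    Content = new StringContent("{ \"LastName\": \"NewLastName\" }",
                                System.Text.Encoding.UTF8,
                                "application/json")
};

var response = client.SendAsync(request).Result;
Console.WriteLine(response.StatusCode); // expect a success status code if the Tutor exists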

Before we send the request, let’s open SQL Server Profiler to monitor the database query that is generated to update this Tutor. You will notice from the image below how efficient the PATCH update is when we want to modify only certain properties on an entity – the generated query updates only the changed property.

[Image: ODataSqlProfiler]

In the next post we’ll see how we can consume this OData service from a Single Page Application built using AngularJS and Breeze.js.

Please drop your questions and comments on the comments section below.

Source code is available on GitHub.

The post Building OData Service using ASP.Net Web API Tutorial – Part 3 appeared first on Bit of Technology.


Taiseer Joudeh: Building OData Service using ASP.Net Web API Tutorial – Part 2

This is the second part of Building OData Service using Asp.Net Web API. The topics we’ll cover are:

Create read-only OData endpoint using Asp.Net Web API

In this tutorial I’ll be using the same solution I used in the previous tutorial, but I will start a new Web API 2 project in which we’ll implement the OData service. The source code for the new OData Service project can be found on my GitHub repository.

Practical example covering how to build OData Service

To keep things simple we’ll depend on the eLearning API concepts from my previous tutorial to demonstrate how to build an OData service. All we want to use from the eLearning API is its data access layer and database model – in other words, we’ll use only the Learning.Data project. I recommend you check how we built the Learning.Data project here, as the navigation properties between database entities are important for understanding the relations that will be generated between OData entities.

Once you have your database access layer project ready, you can follow along with me to create the new Learning.ODataService project.

I will assume that you have Web Tools 2013.1 for VS 2012 installed on your machine or that you have VS 2013, so you can use the ASP.NET Web API 2 template directly, which will add the needed assemblies for ASP.NET Web API 2.

Step 1: Create new empty ASP.Net Web API 2 project

We’ll start by creating a new empty ASP.NET Web API 2 project as in the image below; you can name your project “Learning.ODataService”, and do not forget to choose the .NET Framework 4.5 version. Once the project is created we need to install Entity Framework version 6 using the NuGet package manager or the NuGet package manager console; the package we’ll install is named “EntityFramework“. When the package is installed, we need to add a reference to the class library “Learning.Data”, which will act as our database access layer.

[Image: ODataServiceProject]
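To install Entity Framework from the Package Manager Console, the command is:

PM> Install-Package EntityFramework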

Step 2: Add support for OData to Web API

By default ASP.NET Web API doesn’t come with support for OData; to add this we need to install the NuGet package named “Microsoft.ASP.NET Web API 2.1 OData” into our Web API project. To do this, open the package manager console and type “Install-Package Microsoft.AspNet.WebApi.OData”.

Step 3: Configure OData Routes

The ASP.NET Web API 2 template we used contains by default a class named “WebApiConfig” inside the “App_Start” folder. When you open this class you will notice that the “Register” method has a “config.MapHttpAttributeRoutes()” line and a “DefaultApi” route, which are used to configure traditional Web API routes. The nice thing here is that we can define OData endpoints along with traditional Web API endpoints in the same application.

In this tutorial we want to define only OData endpoints, so feel free to delete the default route named “DefaultApi” as well as the “config.MapHttpAttributeRoutes()” line.

Now you need to replace the code in the “WebApiConfig” class with the code below:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapODataRoute("elearningOData", "OData", GenerateEdmModel());
        }

        private static IEdmModel GenerateEdmModel()
        {
            var builder = new ODataConventionModelBuilder();
            builder.EntitySet<Course>("Courses");
            builder.EntitySet<Enrollment>("Enrollments");
            builder.EntitySet<Subject>("Subjects");
            builder.EntitySet<Tutor>("Tutors");

            return builder.GetEdmModel();
        }
    }

The code above is responsible for defining the routes of our OData service and for generating its Entity Data Model (EDM).

The EDM is responsible for defining the type system, relationships, and actions that can be expressed in the OData formats. There are two approaches to defining an EDM: the first one, which depends on conventions, uses the ODataConventionModelBuilder class, and the second one uses the ODataModelBuilder class.

We’ll use the ODataConventionModelBuilder as it will depend on the navigation properties defined between your entities to generate the association sets and relationship links. This method requires less code to write. If you want more flexibility and control over the association sets then you have to use the ODataModelBuilder approach.

So we will add four different entities to the model builder. Note that the string parameter “Courses” defines the entity set name and should match the controller name, so our controller must be named “CoursesController”.

MapODataRoute is an extension method which becomes available after we install the OData package; it is responsible for defining the routes of our OData service. The first parameter of this method is a friendly name that is not visible to service clients, and the second parameter is the URI prefix for the OData endpoint, so in our case the URI for the Courses resource will be as follows: http://hostname/odata/Courses. As we mentioned before, you can have multiple OData endpoints in the same application; all you need to do is call MapODataRoute with a different URI prefix, as sketched below.
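A hedged sketch of registering a second endpoint – the “adminOData” name, “admin” prefix and GenerateAdminEdmModel() method are hypothetical, not part of this tutorial:

// Hedged sketch: two OData endpoints in the same application, each with its own
// route name, URI prefix and EDM. GenerateAdminEdmModel() is a hypothetical helper.
config.Routes.MapODataRoute("elearningOData", "OData", GenerateEdmModel());
config.Routes.MapODataRoute("adminOData", "admin", GenerateAdminEdmModel());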

Step 4: Add first OData Controller (Read only controller)

Now we want to add a Web API Controller which will handle HTTP requests issued against the OData URI “/odata/Courses”. To do this, right-click on the Controllers folder -> Select Add -> name the controller “CoursesController” and choose the “Empty API Controller” template.

The first thing to do here is to derive our “CoursesController” from “System.Web.Http.OData.EntitySetController“. This class takes two generic type parameters: the first one is the entity type mapped to this controller, and the second one is the data type of the primary key for this entity. The code for this OData controller will look as below:

public class CoursesController : EntitySetController<Course, int>
    {
        LearningContext ctx = new LearningContext();

        [Queryable(PageSize=10)]
        public override IQueryable<Course> Get()
        {
            return ctx.Courses.AsQueryable();
        }

        protected override Course GetEntityByKey(int key)
        {
            return ctx.Courses.Find(key);
        }
    }

The EntitySetController class has a number of overridable methods for updating or querying an entity, so you will notice that there are many methods you can override, such as Get(), GetEntityByKey(), CreateEntity(), PatchEntity(), UpdateEntity(), etc.

As I mentioned before, this controller will be a read-only controller, which means that I’ll implement only read support for the URI “/odata/Courses”. By looking at the code above, we’ve implemented the following:

  • Overriding the Get() method and decorating it with the [Queryable] attribute, which allows clients to issue HTTP GET requests to this endpoint with filter, order by, and pagination parameters encoded in the URI. The Queryable attribute is an action filter which is responsible for parsing and validating the query sent in the URI. This attribute is useful for protecting your service endpoint from clients who might issue a query which takes a long time to execute or asks for large sets of data; for example, I’ve set the PageSize of the response to return only 10 records at a time. You can read more about the set of available parameters here.
  • Overriding the GetEntityByKey(int key) method, which allows clients to issue an HTTP GET request for a single Course in the form “/odata/Courses(5)”; note that the key data type is integer as it represents the primary key of the Course entity.

Step 5: Testing the Courses Controller

Now we need to test the controller. We’ll use Fiddler or PostMan to compose the HTTP GET requests; the Accept header for all requests will be application/json so we’ll get the JSON Light response (you can check how the results are formatted if you pass application/json;odata=verbose or application/atom+xml instead). The scenarios we want to cover are listed below, followed by a small HttpClient sketch:

  • use $filter: We need to filter all courses where the duration of the course is greater than 4 hours
    • Get Request: http://hostname/OData/Courses?$filter=Duration%20gt%204
  • use $orderby, $top: We need to order by course name and take the top 5 records
    • Get Request: http://hostname/OData/Courses?$orderby=Name&$top=5
  • use $select: We need to get only the Name and Duration fields for all courses
    • Get Request: http://hostname/OData/Courses?$select=Name,Duration
  • $expand: We need to get related Course Tutor and Subject for each Course and order the Courses by Name descending
    • Get Request: http://hostname/OData/Courses?$expand=CourseTutor,CourseSubject&$orderby=Name desc
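As referenced above, a hedged HttpClient sketch of issuing the first query with the application/json Accept header – the host name is a placeholder:

// Hedged sketch: issuing the $filter query and asking for a JSON Light response.
// The host name is a placeholder.
var client = new HttpClient { BaseAddress = new Uri("http://hostname/") };
client.DefaultRequestHeaders.Accept.Add(
    new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));

var response = client.GetAsync("OData/Courses?$filter=Duration%20gt%204").Result;
Console.WriteLine(response.Content.ReadAsStringAsync().Result);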

Notes about the $expand query:

  • The $expand option allows clients to ask for related entities inline, based on the navigation properties defined between the entities. As you notice, $expand accepts comma-separated values so you can ask for different entities at the same time; for more information about the $expand query you can visit this link.
  • By looking at the JSON response below for the $expand query we requested, you will notice that the UserName and Password fields for each Tutor are returned in the response, which doesn’t make sense.

{
  "CourseTutor": {
    "Id": 5,
    "Email": "Iyad.Radi@gmail.com",
    "UserName": "IyadRadi",
    "Password": "MXELYDAC",
    "FirstName": "Iyad",
    "LastName": "Radi",
    "Gender": "Male"
  },
  "CourseSubject": {
    "Id": 5,
    "Name": "Commerce"
  },
  "Id": 15,
  "Name": "Commerce, Business Studies and Economics Teaching Methods 1",
  "Duration": 3,
  "Description": "The course will talk in depth about: Commerce, Business Studies and Economics Teaching Methods 1"
}

The nice thing here is that we can fix this issue by ignoring those two properties in the EDM model only, without changing anything on the data model. To do this, open the “WebApiConfig” class and replace the code in the GenerateEdmModel() method with the code below; notice how we specify the ignored properties on the Tutors entity set:

private static IEdmModel GenerateEdmModel()
        {
            var builder = new ODataConventionModelBuilder();
            builder.EntitySet<Course>("Courses");
            builder.EntitySet<Enrollment>("Enrollments");
            builder.EntitySet<Subject>("Subjects");

            var tutorsEntitySet = builder.EntitySet<Tutor>("Tutors");

            tutorsEntitySet.EntityType.Ignore(s => s.UserName);
            tutorsEntitySet.EntityType.Ignore(s => s.Password);

            return builder.GetEdmModel();
        }

In the next post we’ll see how we can implement a controller which will support full CRUD operations on the Tutors resource.

Please drop your questions and comments on the comments section below.

Source code is available on GitHub.

The post Building OData Service using ASP.Net Web API Tutorial – Part 2 appeared first on Bit of Technology.


Dominick Baier: New Pluralsight Course: “Web API v2 Security”

It is finally online! Hope you like it.

http://pluralsight.com/training/Courses/TableOfContents/webapi-v2-security


Filed under: ASP.NET, AuthorizationServer, Katana, OAuth, OWIN, WebAPI

