Darrel Miller: Dot Net Fringe

I spent the last few days at the DotNetFringe conference in Portland.  Considering this was the first time the conference had been run, it was executed spectacularly well.


Off To A Great Start

The opening keynote was delivered by the one and only Jimmy Bogard, who gave a candid history of his experience working on OSS projects, including both the successes and the failures.  Jimmy delivered a wide range of sage advice on how, why and when you should open source your projects.

Photo Credit: Bar Arnon

It Just Got Better

The next talk I watched was from Amy Palamountain, who delivered the trifecta: great technical content, entertaining delivery and wonderful polish.  The consensus among the speakers I spoke with right after her presentation was that we all needed to go and work on our talks.

Spreading The HTTP Love

My talk was related to the open source projects I've been building over the past few years for creating and consuming HTTP APIs.  It is quite a challenge to deliver any amount of depth on a technical subject in 30 minutes, but I do appreciate the format, as there were far fewer schedule conflicts than at conferences with many tracks.

Photo Credit: Immo Landwerth

Sunshine In The Morning

Day two started on another bright note for me, with Gemma Cameron talking about the changes DevOps has made and the potential future of NetOps.  It was both informative and entertaining, a perfect combination.  And I'm looking forward to a chance to use Bitstrips in my own presentations soon!

Photo Credit: Gemma Cameron

A Chance of Showers

Next up was the DotNetRocks Open Source panel.  I think my tweet from the beginning of the session is the most succinct summary I can come up with.

The challenge with diverse opinions is that there can be disagreement.  And there was.  The panel discussion was an accurate reflection of the .net OSS community as a whole.  Some people spoke too much, some spoke too little.  People showed their biases.  Yup, people were human.

Photo Credit: Immo Landwerth

The .net community has come a very long way, and it still has a long way to go.  I do think in general we are finally all moving in the right direction.   But there is a history of pain and anguish behind us.  We need to focus on the future without forgetting the past.  There is nothing worse than making the same mistakes twice.

Rainbows Follow Rain

DotNetFringe was a strange name for a conference, with an even stranger logo.  It came together in a short amount of time and made an unusual venue work.

The conference was full of amazing people, great talks and overflowing enthusiasm for the future of .net OSS.  I look forward to the next one.

It was a bold play, but it paid off.  Look out for the session recordings becoming available soon, they will be full of great content.


Pedro Félix: Some thoughts on the recent JWT library vulnerabilities

Recently, a great post by Tim McLean about some “Critical vulnerabilities in JSON Web Token libraries” made the headlines, bringing the focus to the JWT spec, its usages and apparent security issues.

In this post, I want to share some of my assorted ideas on these subjects.

On the usefulness of the “none” algorithm

One of the problems identified in the aforementioned post is the “none” algorithm.

It may seem strange for a secure packaging format to support “none” as a valid protection; however, this algorithm is useful in situations where the token’s integrity is verified by other means, namely the transport protocol.
One such example happens in the authorization code flow of OpenID Connect, where the ID token is retrieved via a direct TLS-protected communication between the Client and the Authorization Server.

In the words of the specification: “If the ID Token is received via direct communication between the Client and the Token Endpoint (which it is in this flow), the TLS server validation MAY be used to validate the issuer in place of checking the token signature”.
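
As a minimal sketch of the library-side defence (a hypothetical helper, not any particular library's API), “none” should require an explicit opt-in from the caller:

using System;
using System.Collections.Generic;

static class JwtAlgorithmGuard
{
    // "none" is only acceptable when the caller explicitly allows it,
    // e.g. because the token arrived over a direct TLS-protected channel
    // as in the OpenID Connect flow described above.
    public static void EnsureAllowed(string alg, ISet<string> allowedAlgorithms)
    {
        if (!allowedAlgorithms.Contains(alg))
            throw new InvalidOperationException(
                "Token algorithm '" + alg + "' is not in the allowed set.");
    }
}

A consumer relying on the transport would pass a set that contains “none”; everyone else would never include it, so a forged unsigned token is rejected before any of its claims are read.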

Supporting multiple algorithms and the “alg” field

Another problem identified by Tim’s post was the usage of the “alg” field and the way some libraries handle it, namely using keys in an incorrect way.

In my opinion, supporting algorithm agility (i.e. the ability to support more than one algorithm in a specification) is essential for having evolvable systems.
Also, being explicit about what was used to protect the token is typically a good security decision.

In this case, the problem lies on the library side. Namely, having a verify(string token, string verificationKey) function signature seems really awkward for several reasons:

  • First, representing a key as a string is a typical case of primitive obsession. A key is not a string. A key is a potentially composite object (e.g. two integers in the case of a public key for RSA-based schemes) with associated metadata, namely the algorithms and usages for which it applies. Encoding that as a string is opening the door to ambiguity and incorrect usages.
    A key representation should always contain not only the algorithm to which it applies but also the usage conditions (e.g. encryption vs. signature for an RSA key).

  • Second, it makes phased key rotation really difficult. What happens when the token signer wants to change the signing key or the algorithm? Must all the consumers synchronously change the verification key at the same moment in time? Preferably, consumers should be able to simultaneously support two or more keys, identified by the “kid” parameter.
    The same applies to algorithm changes and the use of the “alg” parameter.
    So, I don’t think that removing the “alg” header is a good idea.

A verification function should allow a set of possible keys (bound to explicit algorithms) or receive a callback to fetch the key given both the algorithm and the key id.
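
A sketch of what such an API could look like, using hypothetical types (a design illustration, not an existing library):

using System.Security.Claims;

// A key carries its own metadata: the single algorithm it may be used with,
// the "kid" that identifies it, and structured key material (not a string).
public sealed class VerificationKey
{
    public string Algorithm { get; set; }
    public string KeyId { get; set; }
    public object KeyMaterial { get; set; } // e.g. RSAParameters for RSA schemes
}

// The consumer supplies a resolver; the library calls it with the token's
// "alg" and "kid" headers and fails if no trusted key matches both.
public delegate VerificationKey KeyResolver(string algorithm, string keyId);

public interface IJwtVerifier
{
    ClaimsPrincipal Verify(string token, KeyResolver resolveKey);
}

With this shape, phased key rotation is just a matter of the resolver knowing about both the old and the new key until the rollover completes.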

Don’t assume, always verify

Verifying a JWT before using the claims that it asserts is always more than just checking a signature. Who was the issuer? Is the token valid at the time of use? Was the token explicitly revoked? Who is the intended audience? Is the protection algorithm compatible with the usage scenario? These are all questions that must be explicitly verified by a JWT consumer application or component.

For instance, OpenID Connect lists the verification steps that must be done by a client application (the relying party) before using the claims in a received ID token.
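
A minimal sketch of those checks (the claim accessor and expected values are placeholders):

using System;
using System.Collections.Generic;

static class JwtClaimChecks
{
    public static void Validate(IDictionary<string, string> claims)
    {
        long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();

        // Who was the issuer, and is this token meant for us?
        if (claims["iss"] != "https://issuer.example.com")
            throw new InvalidOperationException("Unexpected issuer.");
        if (claims["aud"] != "my-client-id")
            throw new InvalidOperationException("Unexpected audience.");

        // Is the token valid at the time of use?
        if (long.Parse(claims["exp"]) <= now)
            throw new InvalidOperationException("Token has expired.");
        string nbf;
        if (claims.TryGetValue("nbf", out nbf) && long.Parse(nbf) > now)
            throw new InvalidOperationException("Token not yet valid.");

        // Revocation must be checked against application state; omitted here.
    }
}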

And so it begins …

If the recent history of SSL/TLS related problems has taught us anything, it is that security protocol design and implementation are far from easy, and that “obvious” vulnerabilities can remain undetected for long periods of time.
If these problems happen in well known and commonly used designs and libraries such as SSL and OpenSSL, we must be prepared for similar occurrences in JWT based protocols and implementations.
In this context, security analyses such as the one described in Tim’s post are of the utmost importance, even if I don’t agree with some of the proposed measures.



Darrel Miller: Solving Dropbox's URL Problems

A recent post on the Dropbox developer blog talked about the challenges of constructing URLs due to the difficulty of encoding parameters.  They proposed the idea of using encoded JSON to embed parameters in URLs.  I believe URI Templates offer a much easier and cleaner way to address this issue.  This blog post shows how.


I've talked about using a URI Template library to construct URLs before, but in this post I'm going to consider the specific examples highlighted in the Dropbox post.

Picky About Punctuation

The first example introduced the problem of spaces in URLs,

/1/search/auto/My+Documents?query=draft+2013

It is true that spaces are not allowed in URLs.  Interestingly, however, the plus sign used in this example is not the correct way to deal with spaces according to RFC 3986, the URI specification.  Web browsers allow you to use the plus sign as a replacement for space in the address bar, but technically spaces should be encoded as %20.
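
You can see .NET agreeing with RFC 3986 here: the framework's own data-escaping method produces %20 rather than a plus sign.

Console.WriteLine(Uri.EscapeDataString("draft 2013")); // prints "draft%202013"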

Using a URI Template library you are able to clearly distinguish which parts of the final URL are literals and syntax and which parts are parameters.  The parameters are considered "data" and therefore any characters that have special meaning in a URI will automatically be escaped.

[Fact]
public void EncodingTest1()
{

    var url = new UriTemplate("/1/search/auto/{folder}{?query}")
        .AddParameter("folder","My Documents")
        .AddParameter("query", "draft 2013")
        .Resolve();

    Assert.Equal("/1/search/auto/My%20Documents?query=draft%202013", url);
}

In the above example, the spaces in the folder and query parameters are automatically escaped.

Dastardly Delimiters

The next example given in the Dropbox post highlights the challenges of using parameter data that contains characters that are URL delimiters and are therefore reserved.

“/hello-world” is equivalent to “/hello%2Dworld”
“/hello/world” is not equivalent to “/hello%2Fworld“

The example is a bit misleading because the hyphen character is not a reserved character and therefore doesn't need to be escaped.  However, the point remains that a forward slash in a parameter value should be escaped to prevent it from being treated as a delimiter.

This ambiguity is easily avoided in URI Templates because parameter values are specified explicitly.

[Fact]
public void EncodingTest2()
{

    // Parameter values get encoded but hyphen doesn't need to be encoded because it
    // is an "unreserved" character according to RFC 3986
    var url = new UriTemplate("{/greeting}")
        .AddParameter("greeting", "hello-world")
        .Resolve();

    Assert.Equal("/hello-world", url);

    // A slash does need to be encoded
    var url2 = new UriTemplate("{/greeting}")
        .AddParameter("greeting", "hello/world")
        .Resolve();

    Assert.Equal("/hello%2Fworld", url2);

    // If you truly want to make multiple path segments then do this
    var url3 = new UriTemplate("{/greeting*}")
        .AddParameter("greeting", new List {"hello","world"})
        .Resolve();

    Assert.Equal("/hello/world", url3);

}

Parameter Preferences

The next Dropbox example demonstrates that there is some flexibility in the way you can represent lists of values in URLs.

/docs/salary.csv?columns=1,2
/docs/salary.csv?column=1&column=2

The problem with the flexibility given to API producers is that API consumers have to deal with the additional complexity.  Fortunately, URI Templates allow these different approaches to be handled by adding one additional character to the URI Template.

[Fact]
public void EncodingTest3()
{

    // There are different ways that lists can be included in query params
    // Just as a comma delimited list
    var url = new UriTemplate("/docs/salary.csv{?columns}")
        .AddParameter("columns", new List {1,2})
        .Resolve();

    Assert.Equal("/docs/salary.csv?columns=1,2", url);

    // or as a multiple parameter instances
    var url2 = new UriTemplate("/docs/salary.csv{?columns*}")
        .AddParameter("columns", new List { 1, 2 })
        .Resolve();

    Assert.Equal("/docs/salary.csv?columns=1&columns=2", url2);
}

The only difference between the two templates is the asterisk at the end of the parameter token.  This is called the "explode modifier".  The additional bonus for hypermedia driven APIs that provide the templates to the client is that the client code can be completely ignorant of which approach is being used; the server can change its mind at some point in the future and nothing breaks.

Nested Nomenclature

The next example shows a technique developers use for including nested data as part of query string parameters.

/emails?from[name]=Don&from[date]=1998-03-24&to[name]=Norm

Because of the clear separation between parameters and URI Templates, this scenario is fairly trivial.  Also, considering the potentially dynamic nature of this type of query string parameter, another feature of URI Templates can be used to make this type of URL even easier to construct.

[Fact]
public void EncodingTest4()
{
    var url = new UriTemplate("/emails{?params*}")
        .AddParameter("params", new Dictionary<string,string>
        {
            {"from[name]","Don"},
            {"from[date]","1998-03-24"},
            {"to[name]","Norm"}
        })
        .Resolve();

    Assert.Equal("/emails?from[name]=Don&from[date]=1998-03-24&to[name]=Norm", url);
}

Query string parameter names are passed through the URI Template to the URL untouched, which is why the square brackets are not escaped.  According to RFC 3986 they should be escaped.  However, it is fairly common to see them in URLs unescaped.  Although it is a violation of the rules, the impact is minimal because square brackets are currently only used in the host name for specifying IPv6 addresses.

Separating Syntax

The key to URI Templates being able to help with URL encoding is that it is obvious which pieces are data and which pieces are URL syntax.  This allows the encoding to be performed only where it is needed: on the data.

Personally, I do not think we need to resort to such extreme measures as JSON-encoding parameters to make it easy for developers to safely construct URLs.  Hopefully, this post will convince a few other people.

Image Credit: Three body problem https://flic.kr/p/pNHgi5


Dominick Baier: Implicit vs Explicit Authentication in Browser-based Applications

I got the idea for this post from my good friend Pedro Felix – I hope I don’t steal his thunder (I am sure I won’t – since he is much more elaborate than I am) – but when I saw his tweet this morning, I had to write this post.

When teaching web API security, Brock and I often use the term implicit vs explicit authentication. I don’t think these are standard terms – so here’s the explanation.

What’s implicit authentication?
Browser built-in mechanisms like Basic, Windows, Digest authentication, client certificates and cookies. Once established for a certain domain, the browser implicitly sends the credential along automatically.

Advantage: It just works. Once the browser has authenticated the user (or the cookie is set), no special code is necessary.

Disadvantage:

  • No control. The browser will send the credential regardless of which application makes the request to the “authenticated domain”. CSRF is the result of that.
  • Domain bound – only “clients” from the same domain as the “server” will be able to communicate.

What’s explicit authentication?
This is whenever the application code (JavaScript in this case) has to send the credential explicitly – typically on the Authorization header (and sometimes also as a query string). Using the OAuth 2.0 implicit flow and access tokens in JS apps is a common example. Strictly speaking, the browser does not know anything about the credential and thus will not send it automatically.

Advantage:

  • Full control over when the credential is sent.
  • No CSRF.
  • Not bound to a domain.

Disadvantages:

  • Custom code necessary.
  • Access tokens need to be managed by the JS app (and don’t have built-in protection features like httpOnly cookies), which makes them interesting targets for other types of attacks (CSP can help here).
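
To make the distinction concrete, here is the explicit pattern sketched with HttpClient rather than browser JavaScript (the token value is a placeholder):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task CallApiAsync()
{
    var client = new HttpClient();
    // The application code attaches the credential itself; nothing is sent
    // implicitly, so no other site can ride along on the credential.
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", "access_token_value");
    var response = await client.GetAsync("https://api.example.com/resource");
    Console.WriteLine(response.StatusCode);
}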

Summary: Implicit authentication works great for server-side web applications that live on a single domain. CSRF is well understood and frameworks typically have built-in countermeasures. Explicit authentication is recommended for web APIs. Anti-CSRF is harder here, and clients and APIs are often spread across domains, which makes cookies a no-go.


Filed under: WebAPI


Taiseer Joudeh: ASP.NET Web API Claims Authorization with ASP.NET Identity 2.1 – Part 5

This is the fifth part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

ASP.NET Web API Claims Authorization with ASP.NET Identity 2.1

In the previous post we implemented a finer grained way to control authorization based on the Roles assigned to the authenticated user. This was done by assigning users to predefined Roles in our system and then attributing the protected controllers or actions with the [Authorize(Roles = “Role(s) Name”)] attribute.


Using Roles Based Authorization to control user access is efficient in scenarios where your Roles do not change much and the users' permissions do not change frequently.

In some applications, controlling user access to system resources is more complicated, and having users assigned to certain Roles is not enough to manage user access efficiently. You need a more dynamic way to control access based on certain information related to the authenticated user; this leads us to control user access using Claims, in other words Claims Based Authorization.

But before we dig into the implementation of Claims Based Authorization we need to understand what Claims are!

Note: It is not mandatory to use Claims for controlling user access; if you are happy with Roles Based Authorization and you have a limited number of Roles, then you can stick with it.

What is a Claim?

A Claim is a statement that the user makes about itself; it can be the user name, first name, last name, gender, phone, the roles the user is assigned to, etc… Yes, the Roles we have been looking at are transformed into Claims in the end, and as we saw in the previous post, in ASP.NET Identity those Roles have their own manager (ApplicationRoleManager) and set of APIs to manage them, yet you can consider them a Claim of type Role.

As we saw before, any authenticated user will receive a JSON Web Token (JWT) which contains a set of claims inside it. What we’ll do now is create a helper endpoint which returns the claims encoded in the JWT for an authenticated user.

To do this we will create a new controller named “ClaimsController” which will contain a single method responsible for unpacking the claims in the JWT and returning them. Add the new controller under the folder “Controllers” and paste the code below:

[RoutePrefix("api/claims")]
    public class ClaimsController : BaseApiController
    {
        [Authorize]
        [Route("")]
        public IHttpActionResult GetClaims()
        {
            var identity = User.Identity as ClaimsIdentity;
            
            var claims = from c in identity.Claims
                         select new
                         {
                             subject = c.Subject.Name,
                             type = c.Type,
                             value = c.Value
                         };

            return Ok(claims);
        }

    }

The code we have implemented above is straightforward: we get the Identity of the authenticated user by calling “User.Identity”, which returns a “ClaimsIdentity” object, then we iterate over its IEnumerable Claims property and return three properties (Subject, Type, and Value).
To execute this endpoint we need to issue an HTTP GET request to “http://localhost/api/claims”, not forgetting to pass a valid JWT in the Authorization header. The response for this endpoint will contain the JSON object below:

[
  {
    "subject": "Hamza",
    "type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
    "value": "cd93945e-fe2c-49c1-b2bb-138a2dd52928"
  },
  {
    "subject": "Hamza",
    "type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
    "value": "Hamza"
  },
  {
    "subject": "Hamza",
    "type": "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider",
    "value": "ASP.NET Identity"
  },
  {
    "subject": "Hamza",
    "type": "AspNet.Identity.SecurityStamp",
    "value": "a77594e2-ffa0-41bd-a048-7398c01c8948"
  },
  {
    "subject": "Hamza",
    "type": "iss",
    "value": "http://localhost:59822"
  },
  {
    "subject": "Hamza",
    "type": "aud",
    "value": "414e1927a3884f68abc79f7283837fd1"
  },
  {
    "subject": "Hamza",
    "type": "exp",
    "value": "1427744352"
  },
  {
    "subject": "Hamza",
    "type": "nbf",
    "value": "1427657952"
  }
]

As you noticed from the response above, all the claims contain three properties, and those properties represent the following:

  • Subject: Represents the identity those claims belong to; usually the subject value contains the unique identifier for the user in the system (username or email).
  • Type: Represents the type of the information contained in the claim.
  • Value: Represents the claim's value (the information it carries).

Now, to have a better understanding of what those claim types mean, let’s take a look at the table below:

Subject | Type | Value | Notes
Hamza | nameidentifier | cd93945e-fe2c-49c1-b2bb-138a2dd52928 | Unique user id generated by the Identity system
Hamza | name | Hamza | Unique username
Hamza | identityprovider | ASP.NET Identity | How the user has been authenticated (ASP.NET Identity)
Hamza | SecurityStamp | a77594e2-ffa0-41bd-a048-7398c01c8948 | Unique id which stays the same until any security related attribute changes, e.g. a password change
Hamza | iss | http://localhost:59822 | Issuer of the access token (the authorization server)
Hamza | aud | 414e1927a3884f68abc79f7283837fd1 | The system this token was generated for
Hamza | exp | 1427744352 | Expiry time of this access token (epoch)
Hamza | nbf | 1427657952 | The token is not valid before this time (epoch)

After briefly describing what claims are, we want to see how we can use them to manage user access. In this post I will demonstrate three ways of using claims:

  1. Assigning claims to the user on the fly based on user information.
  2. Creating custom Claims Authorization attribute.
  3. Managing user claims by using the “ApplicationUserManager” APIs.

Method 1: Assigning claims to the user on the fly

Let’s assume a fictional use case where our API will be used in an eCommerce website, where certain users have the ability to issue refunds for orders if an incident happens and the customer is not happy.

So certain criteria should be met in order to grant our users the privilege to issue refunds: the user should have been working for the company for more than 90 days, and the user should be in the “Admin” Role.

To implement this we need to create a new class which will be responsible for reading the authenticated user's information and, based on the information read, creating a single claim or a set of claims and assigning them to the user identity.
If you recall from the first post of this series, we extended the “ApplicationUser” entity and added a property named “JoinDate” which represents the hiring date of the employee. Based on the hiring date, we need to assign a new claim named “FTE” (Full Time Employee) to any user who has worked for more than 90 days. To start implementing this, let’s add a new class named “ExtendedClaimsProvider” under the folder “Infrastructure” and paste the code below:

public static class ExtendedClaimsProvider
    {
        public static IEnumerable<Claim> GetClaims(ApplicationUser user)
        {
          
            List<Claim> claims = new List<Claim>();

            var daysInWork =  (DateTime.Now.Date - user.JoinDate).TotalDays;

            if (daysInWork > 90)
            {
                claims.Add(CreateClaim("FTE", "1"));
               
            }
            else {
                claims.Add(CreateClaim("FTE", "0"));
            }

            return claims;
        }

        public static Claim CreateClaim(string type, string value)
        {
            return new Claim(type, value, ClaimValueTypes.String);
        }

    }

The implementation is simple: the “GetClaims” method takes an ApplicationUser object and returns a list of claims. Based on the “JoinDate” field it adds a new claim named “FTE” with a value of “1” if the user has been working for more than 90 days, and a value of “0” if the user has worked for less than this period. Notice how I’m using the method “CreateClaim” which returns a new instance of a claim.

This class can be used to enforce creating custom claims for the user based on the information related to her; you can add as many claims as you want here, but in our case we will add only a single claim.

Now we need to call the method “GetClaims” so the “FTE” claim will be associated with the authenticated user identity. To do this, open the class “CustomOAuthProvider” and in the method “GrantResourceOwnerCredentials” add the highlighted line (line 7) as in the code snippet below:

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
	//Code removed for brevity

	ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
	
	oAuthIdentity.AddClaims(ExtendedClaimsProvider.GetClaims(user));
	
	var ticket = new AuthenticationTicket(oAuthIdentity, null);
	
	context.Validated(ticket);
   
}

Notice how the established claims identity object “oAuthIdentity” has a method named “AddClaims” which accepts an IEnumerable of claims. The new “FTE” claim is now assigned to the authenticated user, but this is not enough to satisfy the criteria needed to issue the fictitious refund on orders; we need to make sure that the user is in the “Admin” Role too.

To implement this we’ll create a new Role on the fly based on the claims assigned to the user; in other words, we’ll create Roles from the Claims the user is assigned. This Role will be named “IncidentResolvers”. And as we stated at the beginning of this post, Roles are eventually considered a Claim of type Role.

To do this, add a new class named “RolesFromClaims” under the folder “Infrastructure” and paste the code below:

public class RolesFromClaims
    {
        public static IEnumerable<Claim> CreateRolesBasedOnClaims(ClaimsIdentity identity)
        {
            List<Claim> claims = new List<Claim>();

            if (identity.HasClaim(c => c.Type == "FTE" && c.Value == "1") &&
                identity.HasClaim(ClaimTypes.Role, "Admin"))
            {
                claims.Add(new Claim(ClaimTypes.Role, "IncidentResolvers"));
            }

            return claims;
        }
    }

The implementation is self-explanatory: we created a method named “CreateRolesBasedOnClaims” which accepts the established identity object and returns a list of claims.

Inside this method we check that the established identity for the authenticated user has a claim of type “FTE” with value “1”, as well as a claim of type “Role” with value “Admin”. If those two conditions are met, we create a new claim of type “Role” and give it a value of “IncidentResolvers”.
The last thing we need to do here is assign this new set of claims to the established identity, so open the class “CustomOAuthProvider” again and in the method “GrantResourceOwnerCredentials” add the highlighted line (line 9) as in the code snippet below:

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
	//Code removed for brevity

	ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
	
	oAuthIdentity.AddClaims(ExtendedClaimsProvider.GetClaims(user));
	
	oAuthIdentity.AddClaims(RolesFromClaims.CreateRolesBasedOnClaims(oAuthIdentity));
	
	var ticket = new AuthenticationTicket(oAuthIdentity, null);
	
	context.Validated(ticket);
   
}

Now all the new claims created on the fly are assigned to the established identity, and once we call the method “context.Validated(ticket)”, all claims will get encoded in the JWT token. To test this out, let’s add a fictitious controller named “OrdersController” under the folder “Controllers” with the code below:

[RoutePrefix("api/orders")]
public class OrdersController : ApiController
{
	[Authorize(Roles = "IncidentResolvers")]
	[HttpPut]
	[Route("refund/{orderId}")]
	public IHttpActionResult RefundOrder([FromUri]string orderId)
	{
		return Ok();
	}
}

Notice how we attribute the action “RefundOrder” with [Authorize(Roles = “IncidentResolvers”)] so only authenticated users with a claim of type “Role” and the value “IncidentResolvers” can access this end point. To test this out you can issue an HTTP PUT request to the URI “http://localhost/api/orders/refund/cxy-4456393” with an empty body.

As you noticed in the first method, we depended on user information to create claims, keeping the authorization dynamic and flexible.
Keep in mind that you can add your own access control business logic and have finer grained control over authorization by implementing this logic in the classes “ExtendedClaimsProvider” and “RolesFromClaims”.

Method 2: Creating custom Claims Authorization attribute

Another way to implement Claims Based Authorization is to create a custom authorization attribute which inherits from “AuthorizationFilterAttribute”; this authorize attribute will directly check the claim type and value for the established identity.

To do this, let’s add a new class named “ClaimsAuthorizationAttribute” under the folder “Infrastructure” and paste the code below:

public class ClaimsAuthorizationAttribute : AuthorizationFilterAttribute
    {
        public string ClaimType { get; set; }
        public string ClaimValue { get; set; }

        public override Task OnAuthorizationAsync(HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
        {

            var principal = actionContext.RequestContext.Principal as ClaimsPrincipal;

            if (!principal.Identity.IsAuthenticated)
            {
                actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
                return Task.FromResult<object>(null);
            }

            if (!(principal.HasClaim(x => x.Type == ClaimType && x.Value == ClaimValue)))
            {
                actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
                return Task.FromResult<object>(null);
            }

            //User is Authorized, complete execution
            return Task.FromResult<object>(null);

        }
    }

What we’ve implemented here is the following:

  • Created a new class named “ClaimsAuthorizationAttribute” which inherits from “AuthorizationFilterAttribute”, then overrode the method “OnAuthorizationAsync”.
  • Defined 2 properties, “ClaimType” & “ClaimValue”, which will be set when we apply this custom authorize attribute.
  • Inside the method “OnAuthorizationAsync” we cast the object “actionContext.RequestContext.Principal” to a “ClaimsPrincipal” object and check whether the user is authenticated.
  • If the user is authenticated, we look into the claims established for this identity to see whether the specified claim type and claim value are present.
  • If the identity contains the matching claim type and value, we consider the request authorized and complete the execution; otherwise we return a 401 Unauthorized status.

To test the new custom authorization attribute, we’ll add a new method to the “OrdersController”, as in the code below:

[ClaimsAuthorization(ClaimType="FTE", ClaimValue="1")]
[Route("")]
public IHttpActionResult Get()
{
	return Ok();
}

Notice how we decorated the “Get()” method with the [ClaimsAuthorization(ClaimType="FTE", ClaimValue="1")] attribute, so any user who has the claim “FTE” with value “1” can access this protected end point.

Method 3: Managing user claims by using the “ApplicationUserManager” APIs

The last method we want to explore here is using the “ApplicationUserManager” claims-related APIs to manage user claims and store them in the ASP.NET Identity table “AspNetUserClaims”.

In the previous two methods we’ve created claims for the user on the fly, but in method 3 we will see how we can add/remove claims for a certain user.

The “ApplicationUserManager” class comes with a set of predefined APIs which make dealing with and managing claims simple; the APIs we’ll use in this post are listed in the table below:

Method Name | Usage
AddClaimAsync(id, claim) | Creates a new claim for the specified user id
RemoveClaimAsync(id, claim) | Removes a claim from the specified user if the claim type and value match
GetClaimsAsync(id) | Returns an IEnumerable of claims for the specified user id

To use those APIs, let’s add 2 new methods to the “AccountsController”: the first method, “AssignClaimsToUser”, will be responsible for adding new claims to a specified user, and the second method, “RemoveClaimsFromUser”, will remove claims from a specified user, as in the code below:

[Authorize(Roles = "Admin")]
[Route("user/{id:guid}/assignclaims")]
[HttpPut]
public async Task<IHttpActionResult> AssignClaimsToUser([FromUri] string id, [FromBody] List<ClaimBindingModel> claimsToAssign) {

	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	 var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}

	foreach (ClaimBindingModel claimModel in claimsToAssign)
	{
		if (appUser.Claims.Any(c => c.ClaimType == claimModel.Type)) {
		   
			await this.AppUserManager.RemoveClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
		}

		await this.AppUserManager.AddClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
	}
	
	return Ok();
}

[Authorize(Roles = "Admin")]
[Route("user/{id:guid}/removeclaims")]
[HttpPut]
public async Task<IHttpActionResult> RemoveClaimsFromUser([FromUri] string id, [FromBody] List<ClaimBindingModel> claimsToRemove)
{

	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}

	foreach (ClaimBindingModel claimModel in claimsToRemove)
	{
		if (appUser.Claims.Any(c => c.ClaimType == claimModel.Type))
		{
			await this.AppUserManager.RemoveClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
		}
	}

	return Ok();
}

The implementation of both methods is nearly identical. As you noticed, we only allow users in the “Admin” role to access those endpoints; then we specify the UserId and a list of the claims to be added or removed for this user.

Then we make sure the specified user exists in our system before trying to do any operation on the user.

When adding a new claim to the user, we check whether the user already has a claim of the same type before trying to add it; if one exists, we remove the existing claim first and then add it again with the new claim value.

The same applies when we try to remove a claim from the user. Notice that the methods “AddClaimAsync” and “RemoveClaimAsync” will save the claims permanently in our SQL data-store, in the table “AspNetUserClaims”.

Do not forget to add the “ClaimBindingModel” under the folder “Models”, which acts as our POCO class when we are sending the claims from our front-end application; the class contains the code below:

public class ClaimBindingModel
    {
        [Required]
        [Display(Name = "Claim Type")]
        public string Type { get; set; }

        [Required]
        [Display(Name = "Claim Value")]
        public string Value { get; set; }
    }

There are no extra steps needed to pull those claims from the SQL data-store when establishing the user identity, thanks to the method “CreateIdentityAsync” which is responsible for pulling all the claims for the user. We have already implemented this, and it can be checked by visiting the highlighted LOC.

To test those methods, all you need to do is issue an HTTP PUT request to the URIs “http://localhost:59822/api/accounts/user/{UserId}/assignclaims” and “http://localhost:59822/api/accounts/user/{UserId}/removeclaims”, as in the request images below:

Assign Claims

Remove Claims

That’s it for now, folks, on implementing Authorization using Claims.

In the next post we’ll build a simple AngularJS application which connects all those posts together; it should be interesting :)

The source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh



Darrel Miller: API Design Notes: Smart Paging

If you spend any time reading about API design or working with APIs you will likely have come across the notion of paging response data.  Paging has been used in the HTML web for many years as a method to provide users with a fast response to their searches.  I normally spend my time advocating that  Web APIs should emulate the HTML web more, but in this case I believe there are better ways than slicing results into arbitrary pages of data.


Is it necessary?

To provide some context, it is worth asking a few questions about why we do paging.  On the HTML web, paging was critical because results needed to be rendered in HTML, and too many results create a large HTML page.  Web browsers are often slow at rendering large HTML pages, and that makes users wait.  Research has shown that users don't wait.

With Web APIs, there does not need to be a direct correlation between data retrieved and data rendered to a user.  What gets sent over the wire is just data and can use a more efficient format than HTML.  So when do we need to start getting the server to page data that is returned?  How much is too much?

Unfortunately, "how much" is one of those it depends questions.  However, consider the fact that Google's guidelines for banner ads is that they should be less than 150K.  You can fit a whole lot of content in a 150K JSON payload.


What's wrong with paging?

There are a few things that I don't like about paging.  From a UX perspective, if the paging mechanism does end up getting reflected in the UI, it's just not a pleasant experience.  Why can't I just scroll?  If I'm looking for some specific items, it is hard to guess which page the items I want might be on.  This forces me to walk through the pages one at a time.  And if the data is changing while I'm paging through it, some items may be skipped and others will be duplicated.

Whenever I see those drop-downs that ask whether you want 5, 10, or 50 items per page, I always cringe a little.  How do you determine the ideal page size?  Based on what fits on the screen, or the time to transfer the data?  None of those factors are fixed, so there is no good answer.

It is also important to realize that as your user pages through the data one page at a time, the server has to re-execute the entire query to determine the complete set of results so that it can return just one page's worth of data.  In theory, complete results can be cached, but then you risk losing the scalability benefits of a stateless server.  Making the server do significantly more work to improve client responsiveness may become a self-defeating goal.


How can it be done better?

Paging exists as a way to force the user to request a smaller subset of data.  Encouraging users to ask for less data is a win for everyone: less data for the server to process, less bandwidth, less information for the client to process, and less information for the user to hunt through to find what they want.

However, slicing the items of data up into arbitrary sized chunks based on some ordering algorithm is often not the most effective way of allowing users to refine their inquiry.

I find that it is always worth reviewing the characteristics of the data you are returning and asking the question, is there some natural property of the data that would be more effective at sub-dividing the data into smaller chunks?

Know Your A,B,Cs

The most obvious example is with a list of names.  Many contact manager type applications will group contacts by the first letter of either the first name or the last name.  Using alphabetic ordering creates 26 "pages" of data.  This makes it much easier for a user to jump to the page that contains the person they are looking for.
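
Using the same UriTemplate library from the earlier posts, a letter-based "page" link is trivial to construct (the template itself is just an illustration):

var url = new UriTemplate("/contacts{/letter}")
    .AddParameter("letter", "M")
    .Resolve();

// url == "/contacts/M", one of 26 natural "pages"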


It is true that using the alphabet to page through names limits you to only 26 pages (assuming you don't use two letters, which would be a bit weird).  However, even with just those pages, my estimate is that you could return a list of 30,000 names in a JSON document and still be smaller than the 150K banner ad.  With compression, you could return far more.

It's About Time

Alphabet-based paging is only one of many ways that data can be segregated.  Time-based data is ripe for smart paging: data can be paged by day, by week, by month.  You can often see this mechanism on blogs, where it's easy to jump to all the posts from a previous month.
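
The same approach works for time-based chunks; as a sketch, with an illustrative month-based template:

var url = new UriTemplate("/posts{/year,month}")
    .AddParameter("year", 2015)
    .AddParameter("month", 4)
    .Resolve();

// url == "/posts/2015/4", everything published in April 2015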

Sometimes data has other segments like classifications, categories or geographies.  The groups may not have a natural sequence, so you may have to invent one.  

The important thing is that you are providing the API consumer with a way of dealing with chunks of data in more manageable sizes.  Those chunks will be more meaningful in terms of the application domain and there is a reasonable chance they will be quicker to retrieve because the underlying data store may have indexes on those attributes.

What's Next?

From the API consumer's perspective, one advantage of dumb paging is that it is easy to determine which page is next: a client can simply increment a numeric page value.  It's not so easy with smart paging.  If your client needs to construct the link to the next page, then it is going to need some smarts about how to generate the next page URL.  You may need to send the client a sequence of categories, or provide a period for time-based paging.  However, if you are using a framework that generates next/previous links in the responses (as OData does), then it's easy, because the server can create the appropriate links and the client can blindly follow them to the next page.

It May Not be Possible

Sometimes data just doesn't have a natural grouping, or the groups that do exist are too large to be useful.  Arbitrary pages may be the correct approach for your scenario.  My recommendation is simply to consider the more natural possibilities first before falling back on "dumb" paging.


Let Your Framework Know Who's the Boss

All too often I see developers making design choices based on capabilities provided by their chosen framework.  What many developers don't realize is that those facilities are often provided by the framework, not because they are the best design choice, but because they were easy for the framework developers to provide.  Obviously a framework cannot know the semantics of the data that you will be paging through, so it is difficult to provide a smart paging capability out of the box.  However, dumb paging is easy to provide.

Make your own design choices and use framework capabilities where appropriate; don't trust framework designers to do that work for you.

 

Image Credits:

Drag Racer https://flic.kr/p/aig7EJ
Pimped car https://flic.kr/p/56GBm
Tesla S Dash https://flic.kr/p/c5WgBC
Boss Hogg Car https://flic.kr/p/foFX8p
Alphabet Car https://flic.kr/p/aHjgDk
Delorean https://flic.kr/p/29WWWZ


Dominick Baier: IdentityServer3 vNext

Just a quick update about some upcoming changes in IdentityServer3.

In the weeks since the 1.0.0 release in January we did mostly bug fixing, fine tuning and listening to feedback. Inevitably we found things we want to change and improve – and some of them are breaking changes.

Right now we are in the process of compiling these small and big changes to bundle them up in a 2.0.0 release, so hopefully after that we can go back into fine tuning mode without breaking anybody’s code.

Here’s a brief list of things that have changed or will change in 2.0.0:

  • Consolidation of some validation infrastructure
    • ICustomRequestValidator signature has slightly changed
  • Support for X.509 client certificates for client authentication at the token endpoint. This resulted in a number of changes to make client validation more flexible in general
    • ClientSecret has been renamed to Secret (we will probably use the concept of secrets in more places than just the client in the future)
    • IClientSecretValidator is gone in favour of a more high level IClientValidator
  • The event service is now async (we simply missed that in 1.0)
  • The CorsPolicy has been replaced by a CORS policy service – along with configurable CORS origins per client
  • By default clients have no access to any scopes. You need to configure the allowed scopes (or override by setting the new AllowAccessToAllScopes client flag)

Probably the biggest change is the fact that we renamed the nuget package to simply IdentityServer3. We decided to remove the thinktecture registered trademark from the OSS project altogether (including the namespaces – so that’s another breaking change).

So in the future all you need to do is:

install-package IdentityServer3 (-pre for the time being)

The dev branch on github is now on 2.0.0 and we published a beta package to nuget so you can have a look (in addition to our myget dev feed):

https://www.nuget.org/packages/IdentityServer3/2.0.0-beta1

Feedback is welcome!


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, OWIN, WebAPI


Darrel Miller: Are You Or Your Customers Leaking Your API Keys?

Several months ago I wrote a post called Where, oh where, does the API key go?  I encouraged API providers to allow consumers to put the API Key in the Authorization header to help avoid accidental disclosure of keys via things like web server logs.  I recently bumped into a way that anyone can harvest hundreds of API keys from many different web sites, including ones that charge significant amounts of money for access.


The API Keys I discovered are in HttpArchive.  HttpArchive is a project started by Steve Souders as a tool to help make the web faster.  All the data collected by HttpArchive is made available via Google's BigQuery project.  There is a discussion site where there are all kinds of conversations about queries that are being run on HttpArchive data and their performance impacts.


When I first heard about the HttpArchive I naively assumed that the data was being collected from the logs of some big piece of internet infrastructure.  I suppose if I had looked more closely at the data being collected I would have realized that the data had to be collected via another method.


The answer to how HttpArchive collects its data lies in another incredible tool, WebPageTest.  HttpArchive pulls down a list of URLs from the Alexa Top 1,000,000 web sites and then kicks off a bunch of WebPageTest machines to navigate to those URLs and record all of the requests made when loading the sites.

Many of the sites being tested and recorded download JavaScript and then make calls to 3rd party APIs.  The API keys used to call those APIs are therefore recorded in HttpArchive.

This query against the HttpArchive is all it takes to pull back more than 800 unique API keys from the most recent dump of data.

 SELECT   method,
          REGEXP_EXTRACT(url, r'([^:]*)') AS scheme,
          REGEXP_EXTRACT(url, r'://([^/]*)') AS host,
          REGEXP_EXTRACT(url, r'apikey=([^&?]*)') AS ApiKey
 FROM httparchive:runs.latest_requests
 WHERE url LIKE '%apikey=%'
 GROUP BY 1,2,3,4
 ORDER BY 1,2,3,4

Hundreds more can be found with different variations such as api_key, api-key and ApiKey.  Pulling the key from the URL is definitely the easiest.  However, HttpArchive also records request header values.  With a little more RegEx foo, you can start pulling API keys out of headers like X-ApiKey and X-Authorization.


Unfortunately, you can also access credentials included in the Authorization header.  This was the one header that I was really hoping would have been filtered out of the test results.  I have posted to the HttpArchive mailing list with the hope that future dumps of data can get the Authorization header value stripped out.  This is the advantage of using a standard header.  We know what it is called, we know that the information contained in it should not be shared and we can get no useful performance information from it, so we will not lose anything by removing it from the archives.

The biggest surprise to me was the fact that we also get API keys from HTTPS requests.  WebPageTest runs on the client machine and can see the request in the browser as it is being made, before SSL encryption.  All the query parameters and HTTP headers are completely accessible to store.

Takeaway

If you can't afford to have someone misusing your API Key, then don't send it down to the client.  HTTPS is not going to save you.  And don't rely on security by obscurity.  The world of big data is making it easier to expose and query massive amounts of data every day.

And finally, use the Authorization header for what it was intended and don't ever log it!


Image Credits:
Combine https://flic.kr/p/8kPPaw
Sad Clown https://flic.kr/p/d7EN1
Takeout https://flic.kr/p/4d5oBZ


Darrel Miller: Share Your Code, Not Your API Keys

Part of my role at Runscope involves me writing OSS libraries or sample projects to share with other developers.  I also regularly use 3rd party APIs in the process.  This requires the use of API keys and other private data that I'd rather not share.  Unfortunately it is all too easy to leave a key in a source code file and accidentally commit it to a public source control repository.


The Stack Overflow Solution

The standard guidance on Stack Overflow is to commit your configuration file with dummy information in it and then tell Git to ignore any future changes to the file.  This seems like a reasonable approach as long as you keep your private data out of the standard app or web.config.  Once you have committed to using a separate file for private configuration data, the new developer has to be made aware of this settings file.

It isn't a terrible solution, but it felt like there was room for improvement.  When someone makes the decision to try  your sample application or library, you want the experience to be as painless as possible.

As an aside, I wish API providers would make publicly available API keys that pointed to sample data.  Even if the key only allowed read-only access, it would make the education process a whole lot easier.

Automatically Initialize

I decided that I wanted to create a simple HTML form based user interface for supplying private data elements, and to automatically take care of storing that data in a file somewhere that wouldn't get committed.

My solution to this problem went through various iterations, trying to get the right balance of simplicity and security.  I wanted to make sure there was no way that the form could expose previously stored credentials and Jeremy Miller also pointed out that you don't want to allow an external party to inadvertently or maliciously cause the existing credentials to be lost.

In order to avoid having to build a fully authenticated administrative interface, the HTML form is only displayed when there is no configuration data file.  Once the file has been created, the developer must edit it manually to make changes, or delete it to trigger the appearance of the HTML form again.  This is a simplistic solution, ideal for sample applications, but it could also be the starting point for something more sophisticated.

The Man in the Middle

The functionality is provided primarily by a piece of middleware.  In ASP.NET Web API this is implemented as a class that derives from DelegatingHandler and is added to the MessageHandler pipeline.  The same architectural pattern exists in other web frameworks, so I'm sure the code I am using could be re-implemented on other platforms.


To initialize the piece of middleware using ASP.NET Web API, we create an instance of a new class I created called  PrivateDataMessageHandler and pass it a path to a configuration file that will hold my private API keys.

[screenshot: initializing the PrivateDataMessageHandler]
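
The original post showed this step as a screenshot; as a rough sketch, with the constructor signature assumed from the description above, the registration could look like this:

// In WebApiConfig.Register - a sketch; PrivateDataMessageHandler's
// constructor parameter is assumed to be the configuration file path.
var privateDataPath = System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data/PrivateData.json");
config.MessageHandlers.Add(new PrivateDataMessageHandler(privateDataPath));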

I am using the standard ASP.NET App_Data folder to store the configuration information, and I made sure that my source control ignore file prevents tracking any files in that folder.  You can choose any location that your web server can write to.

First Run

Initially that configuration file will not exist and that will cause the MessageHandler to enable itself.

[screenshot: the handler checking for the configuration file]

Every request to the system will be routed through this message handler, but unless it is enabled, messages will just be passed right through.  This is handy because when you run a sample application for the first time through Visual Studio, a web browser opens and hits the root of the API.  The message handler will respond with the configuration form.

[screenshot: the configuration form]

The path to the configuration file is added to the request properties collection so that controllers can locate the file to read data from it.

First Request

When a sample application is first used, we can assume that the API keys needed to access third party services are missing.  As we don't know which resource will be requested first, we intercept all inbound requests, except for one special one, and return an HTML form that presents input controls for each of the missing API keys.

[screenshot: returning the HTML form from the handler]

The _magicPath variable contains the path used when the following form is submitted,

[screenshot: the form submission markup]

The HTML form is customized to collect whatever private data you need to store.

[screenshot: the customized HTML form fields]

The names of the input fields will be used as the property names in the configuration file.

Submitting The Private Data Form

The form is submitted back to a unique endpoint, /privatedataform, that is monitored by the middleware; the information contained in the body is processed and the middleware is disabled.

[screenshot: processing the submitted form]

One annoying issue here is that if you are hosting on IIS and using Attribute Routing, it is likely that there will be no matching route for your magic path, so routing will fail and a 404 will be returned before the MessageHandlers are fired.  This doesn't happen on self-hosted setups, and it generally doesn't happen with regular routing because the default route will match.  No controller will be found, but that's fine because the MessageHandler short-circuits the request before controller selection happens.


The submitted form is stored to a JSON configuration file and a simple message is returned to the user.

[screenshot: the confirmation message]

Accessing the Private Data

In order to get at the data, we need to access the path of the configuration file which the middleware hid away in the Request.Properties dictionary and we need to load the data into an object.


I created an extension method to hide the details.

[screenshot: the extension method]

Now when I am in a controller and I need to access the private data I can just do,

[screenshot: reading the private data in a controller]
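
The code screenshots are gone from this archive, but a sketch of the idea (the property key, settings type and JSON library are assumptions) might look like:

using System.IO;
using System.Net.Http;
using Newtonsoft.Json;

public static class PrivateDataExtensions
{
    public static T GetPrivateData<T>(this HttpRequestMessage request)
    {
        // The middleware stored the configuration file path in
        // Request.Properties; the key name here is assumed for illustration.
        var path = (string)request.Properties["PrivateDataPath"];
        return JsonConvert.DeserializeObject<T>(File.ReadAllText(path));
    }
}

// Inside a controller action:
// var settings = Request.GetPrivateData<PrivateSettings>();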

How does this help me?

I've used this on a couple of projects so far.  One is my RunscopeMessageHandler, which allows you to log the requests made to a Web API up to Runscope's API.  I wanted to include a sample application in the project that I could also use for some interactive testing, but didn't want to accidentally publish my API key.  The other is a SlackBot that I have been playing with that allows you to trigger API tests via Slack commands and the Runscope API.

Due to the fact that the private data you choose to store, and the corresponding HTML form, are custom for each installation, I decided there would be little value in making a library out of these classes.  If you think this approach might work for you, feel free to grab the source from either of the two projects I just linked to.

So far it seems to be working well for me.  I suspect the whole process could be refined further so I look forward to comments and suggestions from developers who are looking to solve a similar problem.

And with that done, on to the next Yak!


Image Credit:
Keys in lock https://flic.kr/p/4Metz2
Magic Wand https://flic.kr/p/7V1y7c
Men on Bench https://flic.kr/p/oTzP55
Safe https://flic.kr/p/ruAw2
Top Secret https://flic.kr/p/4SCuPK


Taiseer Joudeh: ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API – Part 4

This is the fourth part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API

In the previous post we saw how we can authenticate individual users using the [Authorize] attribute in a very basic form, but there are some limitations with that approach: any authenticated user can perform sensitive actions such as deleting any user in the system, getting a list of all users in the system, etc… where those actions should be executed only by a subset of users with higher privileges (Admins only).

Roles Auth

In this post we’ll see how we can enhance the authorization mechanism to give finer grained control over how users can execute actions based on role membership, and how those roles will help us in differentiate between authenticated users.

The nice thing here is that ASP.NET Identity 2.1 provides support for managing Roles (create, delete, update, assign users to a role, remove users from a role, etc…) by using the RoleManager<T> class, so let’s get started by adding support for roles management in our Web API.

Step 1: Add the Role Manager Class

The Role Manager class will be responsible for managing instances of the Roles class. The class will derive from “RoleManager<T>”, where T represents our “IdentityRole” class; once it derives from “RoleManager<T>”, a set of methods becomes available which facilitate managing roles in our Identity system. Some of the exposed methods we’ll use from the “RoleManager” during this tutorial are:

  • FindByIdAsync(id): finds a role based on its unique identifier
  • Roles: returns an enumeration of the roles in the system
  • FindByNameAsync(roleName): finds a role based on its name
  • CreateAsync(IdentityRole): creates a new role
  • DeleteAsync(IdentityRole): deletes a role
  • RoleExistsAsync(roleName): returns true if the role already exists

Now to implement the “RoleManager” class, add a new file named “ApplicationRoleManager” under the “Infrastructure” folder and paste the code below:

public class ApplicationRoleManager : RoleManager<IdentityRole>
    {
        public ApplicationRoleManager(IRoleStore<IdentityRole, string> roleStore)
            : base(roleStore)
        {
        }

        public static ApplicationRoleManager Create(IdentityFactoryOptions<ApplicationRoleManager> options, IOwinContext context)
        {
            var appRoleManager = new ApplicationRoleManager(new RoleStore<IdentityRole>(context.Get<ApplicationDbContext>()));

            return appRoleManager;
        }
    }

Notice how the “Create” method will be used by the Owin middleware to create an instance for each request where Identity data is accessed; this helps us hide the details of how role data is stored throughout the application.

Step 2: Assign the Role Manager Class to Owin Context

Now we want to add a single instance of the Role Manager class to each request using the Owin context. To do so, open the “Startup” file and paste the code below inside the method “ConfigureOAuthTokenGeneration”:

private void ConfigureOAuthTokenGeneration(IAppBuilder app)
{
	// Configure the db context and user manager to use a single instance per request
	//Rest of code is removed for brevity
	
	app.CreatePerOwinContext<ApplicationRoleManager>(ApplicationRoleManager.Create);
	
	//Rest of code is removed for brevity

}

Now a single instance of the class “ApplicationRoleManager” will be available for each request. We’ll use this instance in different controllers, so it is better to create a helper property in the “BaseApiController” class, which all other controllers inherit from. Open file “BaseApiController” and add the following code:

public class BaseApiController : ApiController
{
	//Code removed for brevity
	private ApplicationRoleManager _AppRoleManager = null;

	protected ApplicationRoleManager AppRoleManager
	{
		get
		{
			return _AppRoleManager ?? Request.GetOwinContext().GetUserManager<ApplicationRoleManager>();
		}
	}
}

Step 3: Add Roles Controller

Now we’ll add the controller which will be responsible to manage roles in the system (add new roles, delete existing ones, getting single role by id, etc…), but this controller should only be accessed by users in “Admin” role because it doesn’t make sense to allow any authenticated user to delete or create roles in the system, so we will see how we will use the [Authorize] attribute along with the Roles to control this.

Now add a new file named “RolesController” under the “Controllers” folder and paste the code below:

[Authorize(Roles="Admin")]
    [RoutePrefix("api/roles")]
    public class RolesController : BaseApiController
    {

        [Route("{id:guid}", Name = "GetRoleById")]
        public async Task<IHttpActionResult> GetRole(string Id)
        {
            var role = await this.AppRoleManager.FindByIdAsync(Id);

            if (role != null)
            {
                return Ok(TheModelFactory.Create(role));
            }

            return NotFound();

        }

        [Route("", Name = "GetAllRoles")]
        public IHttpActionResult GetAllRoles()
        {
            var roles = this.AppRoleManager.Roles;

            return Ok(roles);
        }

        [Route("create")]
        public async Task<IHttpActionResult> Create(CreateRoleBindingModel model)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            var role = new IdentityRole { Name = model.Name };

            var result = await this.AppRoleManager.CreateAsync(role);

            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            Uri locationHeader = new Uri(Url.Link("GetRoleById", new { id = role.Id }));

            return Created(locationHeader, TheModelFactory.Create(role));

        }

        [Route("{id:guid}")]
        public async Task<IHttpActionResult> DeleteRole(string Id)
        {

            var role = await this.AppRoleManager.FindByIdAsync(Id);

            if (role != null)
            {
                IdentityResult result = await this.AppRoleManager.DeleteAsync(role);

                if (!result.Succeeded)
                {
                    return GetErrorResult(result);
                }

                return Ok();
            }

            return NotFound();

        }

        [Route("ManageUsersInRole")]
        public async Task<IHttpActionResult> ManageUsersInRole(UsersInRoleModel model)
        {
            var role = await this.AppRoleManager.FindByIdAsync(model.Id);
            
            if (role == null)
            {
                ModelState.AddModelError("", "Role does not exist");
                return BadRequest(ModelState);
            }

            foreach (string user in model.EnrolledUsers)
            {
                var appUser = await this.AppUserManager.FindByIdAsync(user);

                if (appUser == null)
                {
                    ModelState.AddModelError("", String.Format("User: {0} does not exists", user));
                    continue;
                }

                if (!this.AppUserManager.IsInRole(user, role.Name))
                {
                    IdentityResult result = await this.AppUserManager.AddToRoleAsync(user, role.Name);

                    if (!result.Succeeded)
                    {
                        ModelState.AddModelError("", String.Format("User: {0} could not be added to role", user));
                    }

                }
            }

            foreach (string user in model.RemovedUsers)
            {
                var appUser = await this.AppUserManager.FindByIdAsync(user);

                if (appUser == null)
                {
                    ModelState.AddModelError("", String.Format("User: {0} does not exists", user));
                    continue;
                }

                IdentityResult result = await this.AppUserManager.RemoveFromRoleAsync(user, role.Name);

                if (!result.Succeeded)
                {
                    ModelState.AddModelError("", String.Format("User: {0} could not be removed from role", user));
                }
            }

            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            return Ok();
        }
    }

What we have implemented in this lengthy controller code is the following:

  • We have attributed the controller with [Authorize(Roles=”Admin”)], which allows only authenticated users who belong to the “Admin” role to execute actions in this controller. The “Roles” property accepts comma-separated values, so you can add multiple roles if needed. In other words, a user who wants access to this controller should have a valid JSON Web Token which contains a claim of type “Role” with the value “Admin”.
  • The method “GetRole(Id)” returns a single role based on its identifier; this happens when we call the method “FindByIdAsync”. It returns an object of type “RoleReturnModel”, which we’ll create in the next step.
  • The method “GetAllRoles()” returns all the roles defined in the system.
  • The method “Create(CreateRoleBindingModel model)” is responsible for creating new roles in the system. It accepts a model of type “CreateRoleBindingModel”, which we’ll create in the next step. This method calls “CreateAsync” and returns a response of type “RoleReturnModel”.
  • The method “DeleteRole(string Id)” deletes an existing role by passing the unique id of the role and then calling the method “DeleteAsync”.
  • Lastly, the method “ManageUsersInRole” is tailored for the AngularJS app which we’ll build in the coming posts. It accepts a request body containing an object of type “UsersInRoleModel”, with which the application will add or remove users from a specified role.

Step 4: Add Role Binding Models

Now we’ll add the models used in the previous step, the first class to add will be named “RoleBindingModels” under folder “Models”, so add this file and paste the code below:

public class CreateRoleBindingModel
    {
        [Required]
        [StringLength(256, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 2)]
        [Display(Name = "Role Name")]
        public string Name { get; set; }

    }

    public class UsersInRoleModel {

        public string Id { get; set; }
        public List<string> EnrolledUsers { get; set; }
        public List<string> RemovedUsers { get; set; }
    }

Now we’ll adjust the “ModelFactory” class to include the method which returns the response of type “RoleReturnModel”, so open file “ModelFactory” and paste the code below:

public class ModelFactory
{
	//Code removed for brevity
	
	public RoleReturnModel Create(IdentityRole appRole) {

		return new RoleReturnModel
	   {
		   Url = _UrlHelper.Link("GetRoleById", new { id = appRole.Id }),
		   Id = appRole.Id,
		   Name = appRole.Name
	   };
	}
}

public class RoleReturnModel
{
	public string Url { get; set; }
	public string Id { get; set; }
	public string Name { get; set; }
}

Step 5: Allow Admin to Manage Single User Roles

Until now the system doesn’t have an endpoint which allow users in Admin role to manage the roles for a selected user, this endpoint will be needed in the AngularJS app,  in order to add it open “AccountsController” class and paste the code below:

[Authorize(Roles="Admin")]
[Route("user/{id:guid}/roles")]
[HttpPut]
public async Task<IHttpActionResult> AssignRolesToUser([FromUri] string id, [FromBody] string[] rolesToAssign)
{

	var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}
	
	var currentRoles = await this.AppUserManager.GetRolesAsync(appUser.Id);

	var rolesNotExists = rolesToAssign.Except(this.AppRoleManager.Roles.Select(x => x.Name)).ToArray();

	if (rolesNotExists.Count() > 0) {

		ModelState.AddModelError("", string.Format("Roles '{0}' do not exist in the system", string.Join(",", rolesNotExists)));
		return BadRequest(ModelState);
	}

	IdentityResult removeResult = await this.AppUserManager.RemoveFromRolesAsync(appUser.Id, currentRoles.ToArray());

	if (!removeResult.Succeeded)
	{
		ModelState.AddModelError("", "Failed to remove user roles");
		return BadRequest(ModelState);
	}

	IdentityResult addResult = await this.AppUserManager.AddToRolesAsync(appUser.Id, rolesToAssign);

	if (!addResult.Succeeded)
	{
		ModelState.AddModelError("", "Failed to add user roles");
		return BadRequest(ModelState);
	}

	return Ok();
}

What we have implemented in this method is the following:

  • This method can be accessed only by authenticated users who belong to the “Admin” role; that’s why we have added the attribute [Authorize(Roles=”Admin”)].
  • The method accepts the UserId in its URI and an array of the roles this user should be enrolled in (see the example request after this list).
  • The method validates that this array of roles exists in the system; if not, an HTTP Bad Request response is sent indicating which roles don’t exist.
  • The system deletes all the roles assigned to the user, then assigns only the roles sent in the request.
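As a concrete illustration, a request to this endpoint might look like the below; the host, the GUID and the token are placeholders, the path assumes the “AccountsController” is prefixed with “api/accounts”, and the body is a plain JSON array bound to the rolesToAssign parameter:

PUT /api/accounts/user/29e21f3d-08e0-49b5-b523-3d68cf623fd5/roles HTTP/1.1
Host: localhost:59822
Authorization: Bearer <JWT for a user in the Admin role>
Content-Type: application/json

["Admin", "User"]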

Step 6: Protect the existing end points with [Authorize(Roles=”Admin”)] Attribute

Now we’ll visit all the end points we have created earlier in the previous posts and mainly in “AccountsController” class. We’ll add add “Roles=Admin” to the Authorize attribute for all the end points the should be accessed only by users in Admin role.

The end points are:

 – GetUsers, GetUser, GetUserByName, and DeleteUser should be accessed by users enrolled in the “Admin” role. The code change is as simple as the below:

[Authorize(Roles="Admin")]
[Route("users")]
public IHttpActionResult GetUsers()
{}

[Authorize(Roles="Admin")]
[Route("user/{id:guid}", Name = "GetUserById")]
public async Task<IHttpActionResult> GetUser(string Id)
{}


[Authorize(Roles="Admin")]
[Route("user/{username}")]
public async Task<IHttpActionResult> GetUserByName(string username)
{}

[Authorize(Roles="Admin")]
[Route("user/{id:guid}")]
public async Task<IHttpActionResult> DeleteUser(string id)
{}

Step 7: Update the DB Migration File

The last change we need to make before testing is to create a default user and assign it to the Admin role when the application runs for the first time. To implement this we need to introduce a change to the file “Configuration” under the “Infrastructure” folder, so open the file and paste the code below:

internal sealed class Configuration : DbMigrationsConfiguration<AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext context)
        {
            //  This method will be called after migrating to the latest version.

            var manager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()));
            
            var roleManager = new RoleManager<IdentityRole>(new RoleStore<IdentityRole>(new ApplicationDbContext()));

            var user = new ApplicationUser()
            {
                UserName = "SuperPowerUser",
                Email = "taiseer.joudeh@gmail.com",
                EmailConfirmed = true,
                FirstName = "Taiseer",
                LastName = "Joudeh",
                Level = 1,
                JoinDate = DateTime.Now.AddYears(-3)
            };

            manager.Create(user, "MySuperP@ss!");

            if (roleManager.Roles.Count() == 0)
            {
                roleManager.Create(new IdentityRole { Name = "SuperAdmin" });
                roleManager.Create(new IdentityRole { Name = "Admin"});
                roleManager.Create(new IdentityRole { Name = "User"});
            }

            var adminUser = manager.FindByName("SuperPowerUser");

            manager.AddToRoles(adminUser.Id, new string[] { "SuperAdmin", "Admin" });
        }
    }

What we have implemented here is simple: we created a default user named “SuperPowerUser”, then created three roles in the system (SuperAdmin, Admin, and User), then assigned this user to two of those roles (SuperAdmin, Admin).

In order to fire the “Seed()” method, we have to drop the existing database, then from the Package Manager Console type:

update-database

This will create the database on our SQL server based on the connection string we specified earlier, run the code inside the Seed method, and create the user and roles in the system.

Step 8: Test the Role Authorization

Now the code is ready to be tested. First, obtain a JWT token for the user “SuperPowerUser”; if you then decode this JWT using JWT.io you will notice that it contains a claim of type “Role”, as below:

{
  "nameid": "29e21f3d-08e0-49b5-b523-3d68cf623fd5",
  "unique_name": "SuperPowerUser",
  "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider": "ASP.NET Identity",
  "AspNet.Identity.SecurityStamp": "832d5f6b-e71c-4c31-9fde-07fe92f5ddfd",
  "role": [
    "Admin",
    "SuperAdmin"
  ],
  "Phone": "123456782",
  "Gender": "Male",
  "iss": "http://localhost:59822",
  "aud": "414e1927a3884f68abc79f7283837fd1",
  "exp": 1426115380,
  "nbf": 1426028980
}

Those claims will allow this user to access any endpoint attributed with the [Authorize] attribute and locked down to users in the Admin or SuperAdmin roles.

To test this out we’ll create new role named “Supervisor” by issuing HTTP Post to the endpoint (/api/roles/create), and as we stated before this endpoint should be accessed by users in “Admin” role, so we will pass the JWT token in the Authorization header using Bearer scheme as usual, the request will be as the image below:

Create Role
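In raw HTTP, the request would look roughly like the below; the host matches the one used in this series and the token value is a placeholder:

POST /api/roles/create HTTP/1.1
Host: localhost:59822
Authorization: Bearer <JWT for a user in the Admin role>
Content-Type: application/json

{"Name": "Supervisor"}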

If all is valid we’ll revive HTTP status 201 Created.

In the next post we’ll see how we’ll implement Authorization access using Claims.

The source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh

The post ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API – Part 4 appeared first on Bit of Technology.


Christian Weyer: Session Materials from BASTA! Spring 2015


Dominick Baier: .NET Foundation Advisory Council

I have been invited to join the .NET Foundation advisory council – looking forward to it!

http://www.dotnetfoundation.org/blog/welcoming-the-newly-minted-advisory-net-foundation-advisory-council-members


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, WebAPI


Darrel Miller: Don't Design A Query String You Will One Day Regret

When writing the Web API book, we decided that there was no way we would ever finish if we tried to address every conceivable issue.  So we decided to set up a Google Group where readers of the book could ask for clarifications and ask related questions.  One question I received a while ago has been sitting on my to-do list for way too long.  The question, from Reid Peryam, is about query resources.  This is my answer.

Denial2

The Claim

Reid quotes this paragraph from the book:

To work around the inability to easily expose new resources to clients, people often attempt to build sophisticated query capabilities into their API. The problem with this approach, beyond the coupling on the query syntax, is that just a few query parameters can open up a huge number of potential resources, some of which may be expensive to generate. Numerous API providers are starting to discover the challenging economics of exposing arbitrary query capabilities to third parties. A much more manageable approach is to enable a few highly optimized resources that address the majority of use cases. It is critical, however, that new resources can be added quickly to the API to address new requirements.

and rightfully calls me out on failing to provide examples of,

a few highly optimized resources that address the majority of use cases

The Problem

Before I try and describe the solution, let me first clarify exactly what pattern I am claiming is the source of concern.  Here is an example URI template,

http://api.example.org/orders{?fields,sort,filter,limit}

and a resolved URL might look like

http://api.example.org/orders?fields=OrderNo,Customer,OrderDate&sort=OrderDate&filter=OrderDate.gt.2012-01-01&limit=50

Segway

On the surface this looks like an amazing idea.  A client developer can choose exactly what fields they want to have returned to minimize the bytes on the wire. They can use arbitrary filter criteria to limit the results returned.  This single generic query string can allow a client to generate a representation which contains pretty much any subset of orders data that they want.

This is a very quick way of exposing data without having to think very hard about how the data might be used.  In fact this type of functionality can be built by framework developers and delivered for free to application developers.

Why Is It A Problem?

I believe there are some problems with this approach.  The first is that, by requiring clients to provide the field list, sort order and filter criteria, you are requiring a client to have a significant amount of knowledge about the data model of the server.  Now, your client may already have this knowledge for other reasons, and therefore it may not place any additional burden on the client.  However, if you ever choose to remove that client/server coupling you will find it much harder.

Unpredictable Workload

The next problems are performance related.  The first relates to how the data for the query is actually going to be retrieved.  Most likely the data will be in some kind of database.  If the sort order that is chosen matches that of a database index, the results will probably come back pretty quickly.  If it doesn't, then it could be painfully slow.  A user of the API might be understanding if they are returning a large result set with thousands of rows of data, but what if they are querying a massive dataset yet limiting the query to return only 10 rows?  The server still has to sort the entire set of data.  The API user is going to wonder why the request is so slow for such a small result set.

Wand

Indexes are the magical things that make databases actually perform well. They also often have the ability to include extra columns of data in them to prevent queries from needing to go and read the actual data pages.  If all of the fields in the query field list are included in an index, it is going to be really quick.  If one field is not in the index, then performance will degrade significantly.  These are performance details that are critical once a system begins to be loaded with a large volume of data and have a significant number of users.  It is not a problem that is easily seen during the sprint to go live whilst burning through the seed round of funding.

Diluting The Cache

The other performance challenge introduced by the "uber" query string is the fact that now, instead of there just being a few pre-chosen, performance optimized, use-case verified representations that can be cached, we have to deal with potentially thousands of variants.  The combination of fields, sort orders and filter criteria make for a huge number of potential data subsets.  Caching those would not only bloat the cache but make the cache hit ratio very low.

Some Things You Can't Take Back

REST and hypermedia APIs are great in that they enable you to make many changes that don't break clients.  You can do stuff that turns out to be wrong and then fix it later, when you have the wisdom of hindsight.  However, to make the "uber" query string work, the client needs to take on a fair amount of responsibility and is given a huge amount of flexibility.  You can't just take that away without breaking things.  You end up being stuck with it.

The end result is you have clients who are getting inconsistent performance behaviour, they are executing requests that are difficult to performance optimize on the server, and you can't fix the problem without a major breaking change to the interface.

Meerkat

Give Them Only What They Need

One approach for avoiding this outcome, is to raise the level of abstraction for your API to that of your application domain.  Instead of giving your client developers the ability to effectively write queries against your data store, write the queries for them and give them a name,

http://api.example.org/orders/open{?since,customer,region}
http://api.example.org/orders/late{?dayslate,customer,highvalue}
http://api.example.org/orders/closed{?customer,closeddaterange,closedtodate}
http://api.example.org/orders/byproduct{?productid,customer,orderdaterange}
http://api.example.org/orders/bypo{?purchaseorder}
http://api.example.org/orders/recent{?customer}

Without any specific knowledge of the types of "orders" that this API is dealing with, but with a fair amount of experience working with order management type of systems, I am going to be bold and say that this set of resources addresses 90% of types of queries that are needed on an orders API.  I can optimize my database to return these specific queries efficiently and hopefully with this reduced number of query variants I can get better cache utilization.
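To make this concrete, here is a rough sketch of how a couple of these named-query resources might be exposed using ASP.NET Web API attribute routing.  The controller, the Order model and the IOrderRepository abstraction are invented for illustration, and a dependency resolver is assumed to supply the repository:

public class Order { /* order fields elided for brevity */ }

// Invented abstraction: each method wraps a database query tuned for one use case.
public interface IOrderRepository
{
    IEnumerable<Order> GetOpenOrders(DateTime? since, string customer, string region);
    IEnumerable<Order> GetRecentOrders(string customer);
}

[RoutePrefix("orders")]
public class OrdersController : ApiController
{
    private readonly IOrderRepository _repository;

    public OrdersController(IOrderRepository repository)
    {
        _repository = repository;
    }

    // Matches http://api.example.org/orders/open{?since,customer,region}
    [Route("open")]
    public IHttpActionResult GetOpenOrders(DateTime? since = null, string customer = null, string region = null)
    {
        return Ok(_repository.GetOpenOrders(since, customer, region));
    }

    // Matches http://api.example.org/orders/recent{?customer}
    [Route("recent")]
    public IHttpActionResult GetRecentOrders(string customer = null)
    {
        return Ok(_repository.GetRecentOrders(customer));
    }
}

Each route maps one named query onto one database query that can be individually indexed and cached.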

Responding To Feedback

It is highly likely that soon enough a customer is going to want to do something that the API doesn't support.  That's OK, because we can always add new resources to our API.  If we believe it is a valid use-case and we can deliver the results without degrading system performance, then adding the new capability should be a no-brainer.  The ability to quickly add new resources to an API is a critical requirement in enabling this approach of starting with a limited API and adding new features only when required.

YesWeCan

The Original Question

In Reid's question he lists a set of URLs pulled from the documentation of his API and I have taken the liberty to relist them here as URL Templates.  Hopefully not too much was lost in the translation.

/api/shipments/{id}
/api/shipments{?ShipDate}
/api/shipments{?ShipDateStart,ShipDateFinish}
/api/shipments{?EnteredDate}
/api/shipments{?EnteredDateStart,EnteredDateFinish}
/api/shipments{?Failed}
/api/shipments{?WasFailed}
/api/shipments{?WasBlind}
/api/shipments{?Phase}
/api/shipments{?CustomerId}
/api/shipments{?CustomerIds}

As you can see, this set of URLs has not attempted to provide unbounded query capabilities.  Reid's team has used their knowledge of the domain to identify which queries are likely to be required by a consumer of the API.  The team should be able to optimize the database to be able to provide good performance characteristics for these specific queries.

One interesting difference in this set of URLs, as compared to my example, is the fact that the different subsets of query parameters are all pointing to the same path.  I have a tendency to add an extra path segment as a descriptor that makes each path only have one set of query parameters.  This is pure preference from a URL design perspective.  However, it may have an impact on the way routing to controllers works in your web api framework.

The Good/Bad News

Reid describes his set of URLs as enabling "a ton of filtering" and questions how he can "enable a few highly optimized resources that address the majority of use cases".  The good and bad news is that's what has already been done.  It may look like a lot of filtering options, but compared to what would have been enabled in the API with an unconstrained filter query string, the result is just a few resources.

Reid - Sorry it took so long to get you an answer, I hope it was worth the wait.

Tortoise

Image Credits:
Denial
https://flic.kr/p/74PwUj
Segway https://flic.kr/p/hupTPq
Wand https://flic.kr/p/9uVH7P
Meerkat https://flic.kr/p/2wwVCY
YesWeCan https://flic.kr/p/5zzqtM
Tortoise https://flic.kr/p/nE44yh


Taiseer Joudeh: Implement OAuth JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1 – Part 3

This is the third part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

Implement JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1

Featured Image

Currently our API doesn’t support authentication and authorization, all the requests we receive to any end point are done anonymously, In this post we’ll configure our API which will act as our Authorization Server and Resource Server on the same time to issue JSON Web Tokens for authenticated users and those users will present this JWT to the protected end points in order to access it and process the request.

I will use a step-by-step approach as usual to implement this, but I highly recommend you read the post JSON Web Token in ASP.NET Web API 2 before completing this one, where I cover in depth what JSON Web Tokens are, the benefits of using JWT over default access tokens, and how they can be used to decouple the Authorization server from the Resource server. In this tutorial, and for the sake of keeping it simple, both OAuth 2.0 roles (Authorization Server and Resource Server) will live in the same API.

Step 1: Implement OAuth 2.0 Resource Owner Password Credential Flow

We are going to build an API which will be consumed by a trusted client (AngularJS front-end), so we are only interested in implementing a single OAuth 2.0 flow, where the registered user presents a username and password to a specific end point and the API validates those credentials. If all is valid, the API returns a JWT for the user, which the client application should store securely and locally in order to present with each request to any protected end point.

The nice thing about this JWT is that it is a self-contained token which carries all the user claims and roles inside it, so there is no need to do any extra DB queries to fetch those values for the authenticated user. This JWT will be configured to expire 1 day after its issue date, so the user is requested to provide credentials again in order to obtain a new JWT.

If you are interested to know how to implement sliding expiration tokens and how you can keep the user logged in, I recommend you read my other post Enable OAuth Refresh Tokens in AngularJS App, which covers this deeply but adds more complexity to the solution. To keep this tutorial simple we’ll not add refresh tokens here, but you can refer to that post and implement it.

To implement the Resource Owner Password Credentials flow we need to add a new folder named “Providers”, then add a new class named “CustomOAuthProvider”; after you add it, paste the code below:

public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            var allowedOrigin = "*";

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            var userManager = context.OwinContext.GetUserManager<ApplicationUserManager>();

            ApplicationUser user = await userManager.FindAsync(context.UserName, context.Password);

            if (user == null)
            {
                context.SetError("invalid_grant", "The user name or password is incorrect.");
                return;
            }

            if (!user.EmailConfirmed)
            {
                context.SetError("invalid_grant", "User did not confirm email.");
                return;
            }

            ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
        
            var ticket = new AuthenticationTicket(oAuthIdentity, null);
            
            context.Validated(ticket);
           
        }
    }

This class inherits from class “OAuthAuthorizationServerProvider” and overrides the below two methods:

  • As you notice, “ValidateClientAuthentication” is empty; we always consider the request valid, because in our implementation our client (AngularJS front-end) is a trusted client and we do not need to validate it.
  • The method “GrantResourceOwnerCredentials” is responsible for receiving the username and password from the request and validating them against our ASP.NET Identity 2.1 system. If the credentials are valid and the email is confirmed, we build an identity for the logged-in user which will contain all the roles and claims for the authenticated user. We haven’t covered the roles and claims part of the tutorial yet, so for the meantime you can consider all users registered in our system to have no roles or claims mapped to them.
  • The method “GenerateUserIdentityAsync” is not implemented yet; we’ll add this helper method in the next step. It is responsible for fetching the authenticated user’s identity from the database and returning an object of type “ClaimsIdentity”.
  • Lastly we create an authentication ticket containing the identity of the authenticated user; when we call “context.Validated(ticket)”, this identity is transferred into an OAuth 2.0 bearer access token.

Step 2: Add method “GenerateUserIdentityAsync” to “ApplicationUser” class

Now we’ll add the helper method which will be responsible to get the authenticated user identity (all roles and claims mapped to the user). The “UserManager” class contains a method named “CreateIdentityAsync” to do this task, it will basically query the DB and get all the roles and claims for this user, to implement this open class “ApplicationUser” and paste the code below:

//Rest of code is removed for brevity
public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager, string authenticationType)
{
	var userIdentity = await manager.CreateIdentityAsync(this, authenticationType);
	// Add custom user claims here
	return userIdentity;
}

Step 3: Issue JSON Web Tokens instead of Default Access Tokens

Now we want to configure our API to issue JWT tokens instead of default access tokens; to understand what JWT is and why it is better to use it, you can refer back to this post.

First we need to install the 2 NuGet packages below:

Install-package System.IdentityModel.Tokens.Jwt -Version 4.0.1
Install-package Thinktecture.IdentityModel.Core -Version 1.3.0

There is no direct support for issuing JWTs in ASP.NET Web API, so in order to start issuing them we need to do this manually by implementing the interface “ISecureDataFormat<AuthenticationTicket>”, in particular its “Protect” method.

To implement this, add a new file named “CustomJwtFormat” under the “Providers” folder and paste the code below:

public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
    
        private readonly string _issuer = string.Empty;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException("data");
            }

            string audienceId = ConfigurationManager.AppSettings["as:AudienceId"];

            string symmetricKeyAsBase64 = ConfigurationManager.AppSettings["as:AudienceSecret"];

            var keyByteArray = TextEncodings.Base64Url.Decode(symmetricKeyAsBase64);

            var signingKey = new HmacSigningCredentials(keyByteArray);

            var issued = data.Properties.IssuedUtc;
            
            var expires = data.Properties.ExpiresUtc;

            var token = new JwtSecurityToken(_issuer, audienceId, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey);

            var handler = new JwtSecurityTokenHandler();

            var jwt = handler.WriteToken(token);

            return jwt;
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }

What we’ve implemented in this class is the following:

  • The class “CustomJwtFormat” implements the interface “ISecureDataFormat<AuthenticationTicket>”; the JWT generation takes place inside the method “Protect”.
  • The constructor of this class accepts the “Issuer” of this JWT, which will be our API. This API acts as Authorization and Resource Server at the same time; the issuer can be a string or a URI, and in our case we’ll fix it to a URI.
  • Inside the “Protect” method we are doing the following:
    • As we stated before, this API serves as Resource and Authorization Server at the same time, so we are fixing the Audience Id and Audience Secret (Resource Server) in the web.config file. This Audience Id and Secret will be used to HMAC-SHA256 sign the JWT payload; I’ve used this implementation to generate the Audience Id and Secret.
    • Do not forget to add 2 new keys, “as:AudienceId” and “as:AudienceSecret”, to the web.config AppSettings section (see the sketch after this list).
    • Then we prepare the raw data for the JSON Web Token which will be issued to the requester by providing the issuer, audience, user claims, issue date, expiry date, and the signing key which will sign (hash) the JWT payload.
    • Lastly we serialize the JSON Web Token to a string and return it to the requester.
  • By doing this, the requester for an OAuth 2.0 access token from our API will receive a signed token which contains claims for an authenticated Resource Owner (User), and this access token is intended for a certain audience as well.
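For reference, the two AppSettings entries would look something like the below; the values are placeholders (the audience id format matches the one seen in the decoded token later in this series), so generate your own random id and base64url-encoded secret:

<appSettings>
  <!-- Placeholder values: generate your own id and base64url-encoded secret -->
  <add key="as:AudienceId" value="414e1927a3884f68abc79f7283837fd1" />
  <add key="as:AudienceSecret" value="YourBase64UrlEncodedSecretGoesHere" />
</appSettings>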

Step 4: Add Support for OAuth 2.0 JWT Generation

Till this moment we didn’t configure our API to use OAuth 2.0 Authentication workflow, to do so open class “Startup” and add new method named “ConfigureOAuthTokenGeneration” as the below:

private void ConfigureOAuthTokenGeneration(IAppBuilder app)
        {
            // Configure the db context and user manager to use a single instance per request
            app.CreatePerOwinContext(ApplicationDbContext.Create);
            app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);

            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                //For Dev enviroment only (on production should be AllowInsecureHttp = false)
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/oauth/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
                Provider = new CustomOAuthProvider(),
                AccessTokenFormat = new CustomJwtFormat("http://localhost:59822")
            };

            // OAuth 2.0 Bearer Access Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);
        }

What we’ve implemented here is the following:

  • The path for generating a JWT will be “http://localhost:59822/oauth/token”.
  • We’ve specified the expiry for the token to be 1 day.
  • We’ve specified the implementation of how to validate the resource owner’s credentials in a custom class named “CustomOAuthProvider”.
  • We’ve specified the implementation of how to generate the access token using the JWT format; this custom class named “CustomJwtFormat” will be responsible for generating JWTs instead of the default access tokens which use DPAPI. Note that both formats use the Bearer scheme.

Do not forget to call the new method “ConfigureOAuthTokenGeneration” in the Startup “Configuration” method, as below:

public void Configuration(IAppBuilder app)
{
	HttpConfiguration httpConfig = new HttpConfiguration();

	ConfigureOAuthTokenGeneration(app);

	//Rest of code is removed for brevity

}

Our API is currently ready to start issuing JWT access tokens. To test this out we can issue an HTTP POST request as the image below, and we should receive a valid JWT token, good for the next 24 hours and accepted only by our API.

JSON Web Token
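For reference, the raw request is a standard resource owner password credentials grant. Assuming the user seeded elsewhere in this series, it would look like the below:

POST /oauth/token HTTP/1.1
Host: localhost:59822
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=SuperPowerUser&password=MySuperP@ss!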

Step 5: Protect the existing end points with [Authorize] Attribute

Now we’ll visit all the end points we have created earlier in previous posts in the “AccountsController” class, and attribute the end points which need to be protected (only authenticated user with valid JWT access token can access it) with the [Authorize] attribute as the below:

 – GetUsers, GetUser, GetUserByName, and DeleteUser end points should be accessed by users enrolled in the “Admin” role. Roles authorization is not implemented yet, so for now we will allow any authenticated user to access them; the code change is as simple as the below:

[Authorize]
[Route("users")]
public IHttpActionResult GetUsers()
{}

[Authorize]
[Route("user/{id:guid}", Name = "GetUserById")]
public async Task<IHttpActionResult> GetUser(string Id)
{}


[Authorize]
[Route("user/{username}")]
public async Task<IHttpActionResult> GetUserByName(string username)
{
}

[Authorize]
[Route("user/{id:guid}")]
public async Task<IHttpActionResult> DeleteUser(string id)
{
}

- The CreateUser and ConfirmEmail endpoints should always be accessible anonymously, so we need to attribute them with [AllowAnonymous] as the below:

[AllowAnonymous]
[Route("create")]
public async Task<IHttpActionResult> CreateUser(CreateUserBindingModel createUserModel)
{
}

[AllowAnonymous]
[HttpGet]
[Route("ConfirmEmail", Name = "ConfirmEmailRoute")]
public async Task<IHttpActionResult> ConfirmEmail(string userId = "", string code = "")
{
}

- The ChangePassword endpoint should be accessed by authenticated users only, so we’ll attribute it with the [Authorize] attribute as the below:

[Authorize]
[Route("ChangePassword")]
public async Task<IHttpActionResult> ChangePassword(ChangePasswordBindingModel model)
{
}

Step 6: Consume JSON Web Tokens

Now if we obtain an access token by sending a request to the end point “oauth/token” and then try to access one of the protected end points, we’ll receive a 401 Unauthorized status. The reason for this is that our API doesn’t understand those JWT tokens issued by our API yet. To fix this we need to do the following:

Install the below NuGet package:

Install-Package Microsoft.Owin.Security.Jwt -Version 3.0.0

The package “Microsoft.Owin.Security.Jwt” is responsible for protecting the Resource server’s resources using JWT; it only validates and de-serializes JWT tokens.

Now back in our “Startup” class, we need to add the method “ConfigureOAuthTokenConsumption” as the below:

private void ConfigureOAuthTokenConsumption(IAppBuilder app) {

            var issuer = "http://localhost:59822";
            string audienceId = ConfigurationManager.AppSettings["as:AudienceId"];
            byte[] audienceSecret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["as:AudienceSecret"]);

            // Api controllers with an [Authorize] attribute will be validated with JWT
            app.UseJwtBearerAuthentication(
                new JwtBearerAuthenticationOptions
                {
                    AuthenticationMode = AuthenticationMode.Active,
                    AllowedAudiences = new[] { audienceId },
                    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
                    {
                        new SymmetricKeyIssuerSecurityTokenProvider(issuer, audienceSecret)
                    }
                });
        }

This step configures our API to trust tokens issued by our Authorization server only; in our case the Authorization and Resource Server are the same server (http://localhost:59822). Notice how we are providing the values for the audience and the audience secret we used to generate and issue the JSON Web Token in step 3.

By providing those values to the “JwtBearerAuthentication” middleware, our API will be able to consume only JWT tokens issued by our trusted Authorization server, any other JWT tokens from any other Authorization server will be rejected.

Lastly we need to call the method “ConfigureOAuthTokenConsumption” in the “Configuration” method as the below:

public void Configuration(IAppBuilder app)
	{
		HttpConfiguration httpConfig = new HttpConfiguration();

		ConfigureOAuthTokenGeneration(app);

		ConfigureOAuthTokenConsumption(app);
		
		//Rest of code is here

	}

Step 7: Final Testing

All the pieces should be in place now. To test this we will obtain a JWT access token for the user “SuperPowerUser” by issuing a POST request to the end point “oauth/token”:

Request JWT Token

Then we will use the JWT received to access a protected end point such as “ChangePassword”. If you remember, when we added this end point we were not able to test it directly, because it was anonymous and inside its implementation we were calling the method “User.Identity.GetUserId()”. This method returns nothing for an anonymous user, but after we’ve added the [Authorize] attribute, any user who needs to access this end point must be authenticated and have a valid JWT.

To test this out we will issue a POST request to the end point “/accounts/ChangePassword” as the image below; notice how we are setting the Authorization header using the Bearer scheme, setting its value to the JWT we received for the user “SuperPowerUser”. If all is valid we will receive a 200 OK status and the user’s password will be updated.

Change Password Web API
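In raw form the request would be roughly as below; the body property names are assumed from the binding model introduced in an earlier part of this series, so check your own “ChangePasswordBindingModel”, and the token value and passwords are placeholders:

POST /api/accounts/ChangePassword HTTP/1.1
Host: localhost:59822
Authorization: Bearer <JWT for SuperPowerUser>
Content-Type: application/json

{"OldPassword": "MySuperP@ss!", "NewPassword": "MyNewP@ss!", "ConfirmPassword": "MyNewP@ss!"}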

The source code for this tutorial is available on GitHub.

In the next post we’ll see how we’ll implement Roles Based Authorization in our Identity service.

Follow me on Twitter @tjoudeh


The post Implement OAuth JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1 – Part 3 appeared first on Bit of Technology.


Taiseer Joudeh: ASP.NET Identity 2.1 Accounts Confirmation, and Password Policy Configuration – Part 2

This is the second part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

ASP.NET Identity 2.1 Accounts Confirmation, and Password/User Policy Configuration

In this post we’ll complete on top of what we’ve already built, and we’ll cover the below topics:

  • Send Confirmation Emails after Account Creation.
  • Configure User (Username, Email) and Password policy.
  • Enable Changing Password and Deleting Account.

1. Send Confirmation Emails after Account Creation

FeaturedImage

The ASP.NET Identity 2.1 users table (AspNetUsers) comes by default with a Boolean column named “EmailConfirmed”. This column is used to flag whether the email provided by the registered user is valid and belongs to that user; in other words, that the user can access the email provided and is not impersonating another identity. So our membership system should not allow users without a valid email address to log into the system.

The scenario we want to implement is that a user will register in the system, then a confirmation email will be sent to the email provided upon registration. This email will include an activation link and a token (code) which is tied to this user only and valid for a certain period.

Once the user opens this email and clicks on the activation link, and if the token (code) is valid, the field “EmailConfirmed” will be set to “true”, which proves that the email belongs to the registered user.

To do so we need to add a service responsible for sending emails to users. In my case I’ll use SendGrid, which is a service provider for sending emails, but you can use any other service provider or your own Exchange server to do this. If you want to follow along with this tutorial you can create a free account with SendGrid which provides you with 400 emails per day, pretty good!

1.1 Install Send Grid

Now open the Package Manager Console and type the below to install the SendGrid package. This is not a required step if you want to use another email service provider. This package contains the SendGrid APIs, which make sending emails very easy:

install-package Sendgrid

1.2 Add Email Service

Now add a new folder named “Services”, then add a new class named “EmailService” and paste the code below:

public class EmailService : IIdentityMessageService
    {
        public async Task SendAsync(IdentityMessage message)
        {
            await configSendGridasync(message);
        }

        // Use NuGet to install SendGrid (Basic C# client lib) 
        private async Task configSendGridasync(IdentityMessage message)
        {
            var myMessage = new SendGridMessage();

            myMessage.AddTo(message.Destination);
            myMessage.From = new System.Net.Mail.MailAddress("taiseer@bitoftech.net", "Taiseer Joudeh");
            myMessage.Subject = message.Subject;
            myMessage.Text = message.Body;
            myMessage.Html = message.Body;

            var credentials = new NetworkCredential(ConfigurationManager.AppSettings["emailService:Account"], 
                                                    ConfigurationManager.AppSettings["emailService:Password"]);

            // Create a Web transport for sending email.
            var transportWeb = new Web(credentials);

            // Send the email.
            if (transportWeb != null)
            {
                await transportWeb.DeliverAsync(myMessage);
            }
            else
            {
                //Trace.TraceError("Failed to create Web transport.");
                await Task.FromResult(0);
            }
        }
    }

What is worth noting here is that the class “EmailService” implements the interface “IIdentityMessageService”. This interface can be used to configure your service to send emails or SMS messages; all you need to do is implement your email or SMS service in the method “SendAsync” and you are good to go.

In our case we want to send emails, so I’ve implemented the sending process using SendGrid in the method “configSendGridasync”. All you need to do is replace the sender name and address with yours, and do not forget to add 2 new AppSettings keys named “emailService:Account” and “emailService:Password” to store the SendGrid credentials.
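The two entries would look something like the below; the values are placeholders for your own SendGrid credentials:

<appSettings>
  <!-- Placeholder values: use your own SendGrid credentials -->
  <add key="emailService:Account" value="YourSendGridAccountName" />
  <add key="emailService:Password" value="YourSendGridPassword" />
</appSettings>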

After we’ve configured the “EmailService”, we need to hook it into our Identity system, and this is a very simple step: open the file “ApplicationUserManager” and inside the method “Create” paste the code below:

public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
{
	//Rest of code is removed for clarity
	appUserManager.EmailService = new AspNetIdentity.WebApi.Services.EmailService();

	var dataProtectionProvider = options.DataProtectionProvider;
	if (dataProtectionProvider != null)
	{
		appUserManager.UserTokenProvider = new DataProtectorTokenProvider<ApplicationUser>(dataProtectionProvider.Create("ASP.NET Identity"))
		{
			//Code for email confirmation and reset password life time
			TokenLifespan = TimeSpan.FromHours(6)
		};
	}
   
	return appUserManager;
}

As you can see from the code above, the “appUserManager” instance contains a property named “EmailService”, which we set to the class we’ve just created, “EmailService”.

Note: There is another property named “SmsService” if you would like to use it for sending SMS messages instead of emails.

Notice how we are setting the expiration time for the code (token) sent in the email to 6 hours, so if the user tries to open the confirmation email more than 6 hours after receiving it, the code will be invalid.

1.3 Send the Email after Account Creation

Now the email service is ready and we can start sending emails after successful account creation. To do so we need to modify the existing code in the method “CreateUser” in the controller “AccountsController”, so open the file “AccountsController” and paste the code below at the end of the method:

//Rest of code is removed for brevity

string code = await this.AppUserManager.GenerateEmailConfirmationTokenAsync(user.Id);

var callbackUrl = new Uri(Url.Link("ConfirmEmailRoute", new { userId = user.Id, code = code }));

await this.AppUserManager.SendEmailAsync(user.Id,"Confirm your account", "Please confirm your account by clicking <a href=\"" + callbackUrl + "\">here</a>");

Uri locationHeader = new Uri(Url.Link("GetUserById", new { id = user.Id }));

return Created(locationHeader, TheModelFactory.Create(user));

The implementation is straightforward. What we’ve done here is create a unique code (token) which is valid for the next 6 hours and tied to this user’s Id only; this happens when calling the “GenerateEmailConfirmationTokenAsync” method. Then we build an activation link to send in the email body; this link will contain the user Id and the code created.

Eventually this link will be sent to the registered user at the email he used in registration, and the user needs to click on it to activate the account. The route “ConfirmEmailRoute”, which maps to this activation link, is not implemented yet; we’ll implement it in the next step.

Lastly we need to send the email including the link we’ve built, by calling the method “SendEmailAsync”, which accepts the user Id, email subject, and email body.

1.4 Add the Confirm Email URL

The activation link which the user will receive will look as the below:

http://localhost/api/account/ConfirmEmail?userid=xxxx&code=xxxx

So we need to build a route in our API which receives this request when the user clicks on the activation link, issuing an HTTP GET request. To do so, add the method below to the class “AccountsController”:

[HttpGet]
        [Route("ConfirmEmail", Name = "ConfirmEmailRoute")]
        public async Task<IHttpActionResult> ConfirmEmail(string userId = "", string code = "")
        {
            if (string.IsNullOrWhiteSpace(userId) || string.IsNullOrWhiteSpace(code))
            {
                ModelState.AddModelError("", "User Id and Code are required");
                return BadRequest(ModelState);
            }

            IdentityResult result = await this.AppUserManager.ConfirmEmailAsync(userId, code);

            if (result.Succeeded)
            {
                return Ok();
            }
            else
            {
                return GetErrorResult(result);
            }
        }

The implementation is simple: we only validate that the user Id and code are not empty, then we depend on the method “ConfirmEmailAsync” to do the validation of the user Id and the code. If the user Id is not tied to this code it will fail; if the code is expired it will fail too. If all is good, this method updates the database field “EmailConfirmed” in the table “AspNetUsers”, setting it to “True”, and you are done: you have implemented email account activation!

Important Note: It is recommended to validate the password before confirming the email account. In some cases the user might mistype the email during registration, and you do not want to end up sending the confirmation email to someone else who then receives it and activates the account on your behalf. A better way is to ask for the account password before activating the account. If you want to do this, you need to change the “ConfirmEmail” method to POST and send the password along with the user Id and code in the request body; you have the idea, so you can implement it by yourself :)

2. Configure User (Username, Email) and Password policy

2.1 Change User Policy

In some cases you want to enforce certain rules on the username and password when users register in your system, and the ASP.NET Identity 2.1 system offers this feature. For example, if we want to enforce that our usernames only allow alphanumeric characters and that the email associated with a user is unique, then all we need to do is set those properties in the class “ApplicationUserManager”. To do so, open the file “ApplicationUserManager” and paste the code below inside the method “Create”:

//Rest of code is removed for brevity
//Configure validation logic for usernames
appUserManager.UserValidator = new UserValidator<ApplicationUser>(appUserManager)
{
	AllowOnlyAlphanumericUserNames = true,
	RequireUniqueEmail = true
};

2.2 Change Password Policy

The same applies to the password policy. For example, you can enforce that passwords must match a certain policy (minimum 6 characters, requires a special character, requires at least one lower case and at least one upper case character); to implement this policy all we need to do is set those properties in the same class “ApplicationUserManager” inside the method “Create”, as the code below:

//Rest of code is removed for brevity
//Configure validation logic for passwords
appUserManager.PasswordValidator = new PasswordValidator
{
	RequiredLength = 6,
	RequireNonLetterOrDigit = true,
	RequireDigit = false,
	RequireLowercase = true,
	RequireUppercase = true,
};

2.3 Implement Custom Policy for User Email and Password

In some scenarios you want to apply your own custom policy for validating the email or password. This can be done easily by creating your own validation classes and hooking them to the “UserValidator” and “PasswordValidator” properties in the “ApplicationUserManager” class.

For example, if we want to allow only the following domains (“outlook.com”, “hotmail.com”, “gmail.com”, “yahoo.com”) when a user self-registers, we need to create a class that derives from the “UserValidator<ApplicationUser>” class. To do so, add a new folder named “Validators”, then add a new class named “MyCustomUserValidator” and paste the code below:

public class MyCustomUserValidator : UserValidator<ApplicationUser>
    {

        List<string> _allowedEmailDomains = new List<string> { "outlook.com", "hotmail.com", "gmail.com", "yahoo.com" };

        public MyCustomUserValidator(ApplicationUserManager appUserManager)
            : base(appUserManager)
        {
        }

        public override async Task<IdentityResult> ValidateAsync(ApplicationUser user)
        {
            IdentityResult result = await base.ValidateAsync(user);

            var emailDomain = user.Email.Split('@')[1];

            if (!_allowedEmailDomains.Contains(emailDomain.ToLower()))
            {
                var errors = result.Errors.ToList();

                errors.Add(String.Format("Email domain '{0}' is not allowed", emailDomain));

                result = new IdentityResult(errors);
            }

            return result;
        }
    }

What we have implemented above is that the default validation takes place first, and then the custom validation in the method “ValidateAsync” is applied. If there are validation errors they are added to the existing “Errors” list and returned in the response.

In order to fire this custom validation, we need to open the “ApplicationUserManager” class again and hook this custom class to the property “UserValidator”, as in the code below:

//Rest of code is removed for brevity
//Configure validation logic for usernames
appUserManager.UserValidator = new MyCustomUserValidator(appUserManager)
{
	AllowOnlyAlphanumericUserNames = true,
	RequireUniqueEmail = true
};

Note: The tutorial code is not using the custom “MyCustomUserValidator” class, it exists in the source code for your reference.

Now the same applies to adding a custom password policy. All you need to do is create a class named “MyCustomPasswordValidator”, derive it from the “PasswordValidator” class, and override the method “ValidateAsync” as below. So add a new file named “MyCustomPasswordValidator” in the folder “Validators” and use the code below:

public class MyCustomPasswordValidator : PasswordValidator
    {
        public override async Task<IdentityResult> ValidateAsync(string password)
        {
            IdentityResult result = await base.ValidateAsync(password);

            if (password.Contains("abcdef") || password.Contains("123456"))
            {
                var errors = result.Errors.ToList();
                errors.Add("Password can not contain sequence of chars");
                result = new IdentityResult(errors);
            }
            return result;
        }
    }

In this implementation we added a basic rule which checks if the password contains a sequence of characters and rejects this type of password by adding a validation message to the Errors list; it follows exactly the same pattern as the custom user policy.

Now to attach this class as the default password validator, all you need to do is to open class “ApplicationUserManager” and use the code below:

//Rest of code is removed for brevity
// Configure validation logic for passwords
appUserManager.PasswordValidator = new MyCustomPasswordValidator
{
	RequiredLength = 6,
	RequireNonLetterOrDigit = true,
	RequireDigit = false,
	RequireLowercase = true,
	RequireUppercase = true,
};

All other validation rules take place first (i.e. checking the minimum password length and checking for special characters), and then the implementation in our “MyCustomPasswordValidator” is applied.

3. Enable Changing Password and Deleting Account

Now we need to add other endpoints which allow the user to change the password, and allow a user in the “Admin” role to delete other user accounts. Those endpoints should only be accessible if the user is authenticated; we need to know the identity of the user performing the action and which role(s) the user belongs to. Until now all our endpoints have been called anonymously, so let’s add those endpoints now and cover the authentication and authorization part in the next post.

3.1 Add Change Password Endpoint

This is easy to implement: all you need to do is open the controller “AccountsController” and paste the code below:

[Route("ChangePassword")]
        public async Task<IHttpActionResult> ChangePassword(ChangePasswordBindingModel model)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            IdentityResult result = await this.AppUserManager.ChangePasswordAsync(User.Identity.GetUserId(), model.OldPassword, model.NewPassword);

            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            return Ok();
        }

Notice how we are calling the method “ChangePasswordAsync” and passing the authenticated user Id, the old password, and the new password. If you try to call this endpoint now, the extension method “GetUserId” will not work because you are calling it as an anonymous user and the system doesn’t know your identity, so hold off on testing until we implement the authentication part.

The method “ChangePasswordAsync” will take care of validating your current password, as well as validating the new password against the password policy, and then it will replace the old password with the new one.

Do not forget to add the “ChangePasswordBindingModel” class to the file “AccountBindingModels”, as in the code below:

public class ChangePasswordBindingModel
    {
        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Current password")]
        public string OldPassword { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "New password")]
        public string NewPassword { get; set; }

        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Confirm new password")]
        [Compare("NewPassword", ErrorMessage = "The new password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    
    }

3.2 Delete User Account

We want to add a feature which allows a user in the “Admin” role to delete user accounts. Until now we haven’t introduced roles management or authorization, so we’ll add this endpoint now and modify it slightly later; for the moment any anonymous user can invoke it and delete any user by passing the user Id.

To implement this we need to add a new method named “DeleteUser” to the “AccountsController”, as in the code below:

[Route("user/{id:guid}")]
        public async Task<IHttpActionResult> DeleteUser(string id)
        {

            //Only SuperAdmin or Admin can delete users (Later when implement roles)

            var appUser = await this.AppUserManager.FindByIdAsync(id);

            if (appUser != null)
            {
                IdentityResult result = await this.AppUserManager.DeleteAsync(appUser);

                if (!result.Succeeded)
                {
                    return GetErrorResult(result);
                }

                return Ok();

            }

            return NotFound();
          
        }

This method checks the existence of the user Id and, based on that, deletes the user. To test this method we need to issue an HTTP DELETE request to the endpoint “api/accounts/user/{id}”.
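
For example, the raw request would look like the line below (the id is a placeholder, following the same convention as the activation link earlier):

DELETE http://localhost/api/accounts/user/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx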

The source code for this tutorial is available on GitHub.

In the next post we’ll see how to implement JSON Web Token (JWT) authentication and manage access to all the methods we have added so far.

Follow me on Twitter @tjoudeh

The post ASP.NET Identity 2.1 Accounts Confirmation, and Password Policy Configuration – Part 2 appeared first on Bit of Technology.


Dominick Baier: IdentityServer3 1.0.0

Today is a big day for us! Brock and I started working on the next generation of IdentityServer over 14 months ago. In fact – I remember exactly how I created the very first file (constants.cs) somewhere in the Swiss Alps and was hunting for internet connection to do a check-in (much to the dislike of my family).

1690 commits later it is time to recap what we did, why we did it – and where we are now.

Having spent a considerable amount of time in the WS*/SAML world, it became more and more apparent that these technologies are not a good match for the modern types of applications that we (and our customers) like to build. These types of applications are pretty much a combination of web and native UIs combined with web APIs. Security protocols need to be API, HTTP and mobile friendly, and we need authentication, API access and identity delegation as first class citizens.

We had two options – either try to retrofit the new protocols into the old WS* architecture (like so many commercial products do) or start from scratch. Since we also had a number of other high priority design goals for the new version we decided to start from scratch.

Some of the highlights of IdentityServer3 (at least in our opinion) are:

Support for the modern security stack
OpenID Connect and OAuth2 that is. These two protocols in combination are the perfect match to build the modern applications we had in mind. OAuth2 is used to manage access (and access control) from clients to APIs for both trusted subsystem and identity delegation systems. OpenID Connect is the extension to OAuth2 for implementing rich authentication and single sign-on scenarios for any application type.

Hosting
We wanted to be much more flexible in our hosting scenarios – IIS vs self-hosting, Windows vs Linux, ASP.NET vCurrent vs vNext, Embedded into the application vs separate standalone vs separate web farm vs cloud – you name it. Regardless which hosting environment you choose – IdentityServer is always the same.

Flexibility and Extensibility
IdentityServer2 always had a dependency on a database. The past years taught us that there are many situations where this is not appropriate. In the new version everything is code first and abstracted behind interfaces. Everything can be done in memory and no persistence store is required. We have an optional extension that uses Entity Framework for persistence – but this is up to you.

Another issue we had in the past was that there were too many situations where one had to change the core source code to implement some custom workflow. In IdentityServer3 we think we did a good job in anticipating the typical (and not so typical) modifications and baked them right into the core runtime as extensibility points. So far this has worked out really well.

Framework vs Server
As mentioned above – IdentityServer3 is all about customization and extensibility. The developer is at the centre and we give them lots of freedom in changing almost any aspect of the workflow. This is the big difference from many commercial off-the-shelf products.

Right from the start we used the term “STS Framework” rather than a “Server” and up to today we don’t even have an admin UI for managing the server configuration. We (and most people we spoke to) were absolutely fine doing all of that in code and in their custom configuration system. That said – we have an admin service and UI in the works that will be released soon – but again this is totally optional.

Brock and I just recently spoke to Carl and Richard about these design goals on .NET Rocks.

Where to go?
To accommodate the new versioning scheme (we switched to semver) and the componentized architecture we changed both the GitHub organization and repo names as well as the Nuget package names. The new organization can be found here and the main repo is here along with instructions on how to contribute and an issue tracker for filing bugs or giving feedback.

The new docs site gives quite a bit of background and can be found here – or you can jump directly to our samples.

If you need consulting about modern (or not so modern) security architectures in general and IdentityServer in particular – you can contact us via email at identity@leastprivilege.com or via twitter: @leastprivilege & @brocklallen.

What’s next?
We have a couple of “side projects” that complement the core IdentityServer3 – there’s IdentityManager, which we neglected a bit for the last months, and there’s the admin service and UI (good people are working on that right now)…And there are of course new features to implement for IdentityServer – check this label and take part in the discussion.

Last but not least
The last 14 months were astounding – we got more feedback, questions, bug reports, PRs and help on IdentityServer3 than all other OSS projects we did before combined. You guys were fantastic! Thanks for your help – we hope you enjoy the result (..and keep it coming)!

Thanks!
Dominick & Brock


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Radenko Zec: ASP.NET Identity 2.1 implementation for MySQL

In this blog post I will try to cover how to use a custom ASP.NET identity provider for MySQL I have created.

The default ASP.NET Identity provider uses Entity Framework and SQL Server to store information about users.

If you are trying to implement ASP.NET Identity 2.1 for MySQL database, then follow this guide.

This implementation uses Oracle’s fully-managed ADO.NET driver for MySQL.

This means that you have a connection string in your web.config similar to this:

<add name="DefaultConnection" connectionString="Server=localhost;
Database=aspnetidentity;Uid=radenko;Pwd=somepass;" providerName="MySql.Data.MySqlClient" />

This implementation of ASP.NET Identity 2.1 for MySQL has all the major interfaces implemented in a custom UserStore class:

ASPIdentityUserStoreInterfaces

Source code of my implementation is available at GitHub – MySqlIdentity

First, you will need to execute a create script on your MySQL database which will create the tables required by the ASP.NET Identity provider.

MySqlAspIdentityDatabase

  • Create a new ASP.NET MVC 5 project, choosing the Individual User Accounts authentication type.
  • Uninstall all EntityFramework NuGet packages starting with Microsoft.AspNet.Identity.EntityFramework
  • Install NuGet Package called MySql.AspNet.Identity
  • In ~/Models/IdentityModels.cs:
    • Remove the namespaces:
      • Microsoft.AspNet.Identity.EntityFramework
      • System.Data.Entity
    • Add the namespace: MySql.AspNet.Identity.
      Class ApplicationUser will now inherit from the IdentityUser class in the MySql.AspNet.Identity namespace
    • Remove the entire ApplicationDbContext class. This class is not needed anymore.
  • In ~/App_Start/Startup.Auth.cs
    • Delete this line of code
app.CreatePerOwinContext(ApplicationDbContext.Create);
  • In ~/App_Start/IdentityConfig.cs
    Remove the namespaces:

    • Microsoft.AspNet.Identity.EntityFramework
    • System.Data.Entity
  • In the method Create inside the ApplicationUserManager class, replace the default UserStore with MySqlUserStore:
 public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context) 
 {
     //var manager = new ApplicationUserManager(new UserStore<ApplicationUser>(context.Get<ApplicationDbContext>()));
     var manager = new ApplicationUserManager(new MySqlUserStore<ApplicationUser>());

     // ...the rest of the Create method (validators, token providers, etc.) stays unchanged
     return manager;
 }

MySqlUserStore accepts an optional constructor parameter – the connection string – so if you are not using DefaultConnection as your connection string you can pass another one.
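
For example, assuming your web.config also defines a connection string named "MySqlConnection", passing it would look roughly like this (the connection string name is a placeholder; check the repository’s README for the exact constructor semantics):

var manager = new ApplicationUserManager(
    new MySqlUserStore<ApplicationUser>("MySqlConnection"));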

After this you should be able to build your ASP.NET MVC project and run it successfully, using MySQL as the store for your users, roles, claims and other information.

If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.

The post ASP.NET Identity 2.1 implementation for MySQL appeared first on RadenkoZec blog.


Darrel Miller: Hypermedia, past, present and future

Hypermedia is not a new concept, it has been around in various forms since the 1960s.  However, in the past seven years there has been a significant resurgence of interest in the concept.  This blog post contains my reflections on the past few years, where we currently are and where we might be headed in the use of hypermedia for building distributed applications.

The HTML years

The majority of developers have only been exposed to hypermedia via HTML.  HTML is an "in-your-face" example of the success of hypermedia, and yet in most Web API related discussions it is often dismissed with "well that's different".  A distinction is made by many between human-2-machine interactions and machine-2-machine interactions, and this distinction is used to explain why API interactions are different.  To be honest, I've yet to see my parents open a web browser and construct an HTTP request by hand.  The Web Browser is itself a client application, running code that makes HTTP requests.  It is not completely autonomous, but no human invokes the calls to load a stylesheet, javascript snippet, or embedded image.

On the other hand, the web crawlers used by search engines are autonomous and they also consume HTML hypermedia.

There were numerous efforts over the years to present HTML, or more specifically XHTML, as a viable media type for Web APIs.  Jon Moore from Comcast made the biggest splash, but Microsoft also made efforts in this area, and I participated in workshops at RESTFest where we used XHTML as the response media type.

Despite this, the use of HTML in Web APIs has never really gained significant traction.

RDF's lofty goals

RDF has been around for almost as long as HTML, but it has never really made significant progress outside of academia.  I've talked to a significant number of very smart people who believe RDF is the answer.  However, I suspect it is the Xanadu, or the Betamax, of media types.

The challenge with RDF is that it can be quite difficult to grok.  First of all there are a variety of different serialization formats: Turtle, N3, RDFa and now JSON-LD.  This can make learning RDF tricky because you really need to learn the conceptual model and then learn the mapping used for each serialization.

The model of RDF is based around triples, i.e. subject, predicate and object, where these elements are usually identified using URIs; for example, the triple <http://example.org/book/1> <http://purl.org/dc/terms/title> "Moby Dick" states that the book resource has that title.  This can produce quite cumbersome looking documents.

My experience has been that most developers don't have the patience for the sophistication of RDF and quickly want to hide the complexity with tooling.  Tooling can hurt or help; it just depends on who is writing the tooling and what their goals are.

It is possible that I could be proved wrong by JSON-LD.  A number of organizations are starting to adopt JSON-LD.  Only time will tell if it will stick.

Feeds

The RSS format was originally based on an early working draft of RDF.  In 2005 the Atom Syndication Format was released as a replacement for RSS.  Both of these formats had the specific goal of allowing content creators and distributors to advertise regularly produced content along with metadata about that content.

These formats spawned the creation of a wide range of new client applications that could consume this format. 

Feeds As Containers

The success of the Atom format spurred API providers to consider using Atom as a container for data other than blog posts and news items.  Big players like Google and Microsoft created GData and OData, similar ideas in which Atom was used as an envelope for API data.

In order for feed readers to be useful, they needed to understand the contents of the Atom Entry element.  Most often this content was HTML and could be rendered with an HTML rendering library.  However, when Atom feeds were used in APIs for carrying non-HTML data there was no standardized way for clients to understand the contents of the Atom Entry.  The custom data content was identified using XML namespaces that a client was expected to recognize.  Unfortunately, XML namespaces introduce URIs, prefixes and the ability to mix multiple namespaces in a single document.  This starts to bring back the complexity of RDF.

GData didn't really last very long, while OData has had some successes and some failures and is currently on version 4.

Overall, OData was more ambitious than GData in that it defined a standardized querying mechanism and built on top of Microsoft's CSDL, the metadata language used by its ORM, Entity Framework.  This allowed lots of tooling to be built around OData.

Another more recent effort in this area is Activity Streams.  Initially I understood it to be a generalized way of advertising lists of events that occur; it seems to have evolved into not only an activity stream container, but also a mechanism for describing activities using vocabularies.

Linking and Embedding

Late 2010 saw the beginning of a flurry of activity in the hypermedia space.  Mike Kelly created HAL, which defined a JSON-based format that supported linking to other resources and embedding portions of other resources into representations.

In 2011 Collection+JSON was created by Mike Amundsen, which provides a way of representing a list of things as well as methods to search the list and add to it.

Siren, Mason, Uber and JsonApi all followed in subsequent years, each attempting to address perceived shortcomings of HAL and C+J.

The Great Form Debate

One of the features that has been continually debated is the need for forms support in hypermedia types.  HAL does not support forms; Siren, Mason, C+J and Uber do.  HAL relies heavily on link relation types to convey both read and write semantics, while the authors of the other formats feel it is more valuable to have explicit syntax for describing write operations.

DebatingMonks

Spoilt For Choice

The growing REST community has learned a great deal in the process of developing these specifications.  However, they are left with a whole new problem.  Developers who want to start down the path of using hypermedia in their APIs need to make a choice between a range of formats that are only subtly different in their capabilities.  How are developers supposed to choose?

BeerTaps

In the past, choosing a media type was never a particularly difficult proposition because they tend to be built for a specific purpose.  HTML was originally designed for describing a textual document,  image/jpeg is ideal for photographs, text/css for stylesheets,  etc.

However, this most recent set of media types has focused on message format semantics, i.e. the ability to link to other content, embed content and describe affordances, all without saying anything about the application domain.  This has the advantage that they can be used within any application domain, for almost any purpose; however, it also leaves them with no intrinsic meaning, no purpose.

Meanwhile On Another Planet

In contrast to these generic hypermedia types, in the world of telecommunications, a media type called VoiceXML was developed.

 Planet

VoiceXML was developed to drive audio and voice response applications.  Related to that effort was CCXML (Call Control eXtensible Markup Language), MSML (Media Server Markup Language), MSCML (Media Server Control Markup Language) and most recently SCXML (State Chart XML).

All of these media types are used as part of a larger system, but each media type is designed to solve a specific problem domain.  

The Great Divide

Earlier I accused some media types of having no meaning, no purpose.  This is actually a design objective of some media types.  The idea is that media types can limit their complexity by focusing on just the structural and protocol semantics of the message and leave application and domain semantics to a separate mechanism.  Formats like RDF use a concept called ontologies to describe application semantics.  JSON-LD can also reference vocabularies such as those defined at schema.org.  HAL recently added support for a notion called Profiles that existed in early versions of HTML and has been resurrected in recent years.  The idea of Profiles is that you can independently apply a set of domain semantics to a hypermedia message.

The advantage of using Profiles is that your API only needs to support a small set of media types, possibly only one, and application semantics can be layered on top.  This limits the time and risk involved in designing and documenting new media types.  Mike Amundsen has been spearheading an effort to define a description language called ALPS that makes defining profile description documents possible.
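
As a quick illustration of the mechanics (my example, not from the ALPS work itself), a response can point at its profile using the "profile" link relation from RFC 6906; the profile URI below is a placeholder:

HTTP/1.1 200 OK
Content-Type: application/hal+json
Link: <http://example.org/profiles/invoice>; rel="profile"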

PaintExplosion

Media Type Explosion

One of the pieces of lore in the hypermedia community is that we must avoid a phenomenon called "media type explosion".  The fear is that if every API provider begins creating media types for all of their application semantics we will massively dilute the re-usability of media types and potentially introduce security vulnerabilities.  Also, the process of registration and expert review is not designed to be efficient for large numbers of media types that may never be suitable for public consumption.

Ironically, the fear of media type explosion has discouraged people from creating media types, and they have simply resorted to tunneling application semantics over generic media types like application/json.  The result is an explosion of implicit JSON structures.  Not exactly the desired effect.

API Media Types

As a popular alternative to reusing existing media types or creating media types for each application concept, some APIs have decided to create a single media type definition for the entire API.  This approach provides an API with a central place to document message format conventions and structure.  APIs like GitHub, Heroku and Sun's Cloud API take this approach.  It is an interesting compromise.  However, it limits the potential for re-use because the definition of the media type is scoped to the particular API.

Horizontal Media Types

Ideally, at least in my opinion, media types would be built that solve a specific problem but could be re-used across many APIs.  The end goal would be to build APIs by composing a set of pre-defined media types, minimizing or even eliminating the need to define new ones to support the API.

Getting widespread agreement on application domain types, like customer, invoice, work task, employee, etc., is especially difficult.  This can be seen if you dig into the history of standards like ANSI X12, EDIFACT or UBL.

However, there are many aspects of applications that are very similar between applications: users, accounts, permissions, roles, errors, lists, reports, long running operations, filters, tables, graphs, dashboards.

I believe these are the low hanging fruit for building re-usable media types.

Link Relation Types Add Context

Media types are not the only way to convey semantic information to a client.  Link relations are an extremely valuable way to add semantic context to more generic media types.  Consider a media type that describes a street address.  An address is a fairly well defined concept that could be sufficiently specified for use in a wide range of scenarios.

Creating link relations like "ShipTo", "InvoiceTo", "Home", "Work" and "Destination" can provide enough additional semantics on top of a fairly generic media type to allow a client to make intelligent choices.
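
As a hypothetical example, a HAL representation of an order could point at two resources of the same generic address media type and let the link relations carry the meaning (all URIs are made up):

{
  "_links": {
    "self":      { "href": "/orders/17" },
    "shipTo":    { "href": "/addresses/42" },
    "invoiceTo": { "href": "/addresses/7" }
  },
  "status": "open"
}

The client never needs to inspect the two address documents to know which one to ship to; the relation alone carries that context.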

Even link relations have different styles in the way they are defined.  Some are very generic, like "next" and "previous".  Others have precisely defined behaviour like "hub" and "oauth2-token".

So, What Should I Use?

If only there were an easy answer to that question.  The hard answer is: learn about the options you have, understand the pros and cons to each approach and then consider the context of the application you are trying to build.  At that point you may have enough information to choose the right solution for your problem.  Good luck!  And make sure you tell everyone about your experiences.  We are all learning together.

Image Credits:
Janus http://davy.potdevin.free.fr/Site/links.html
Beer Taps https://flic.kr/p/3qbi
Planet : https://flic.kr/p/5WBkp9
Paint Explosion : https://flic.kr/p/7ZNa5C


Taiseer Joudeh: ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management) – Part 1

Asp Net Identity

ASP.NET Identity 2.1 is the latest membership and identity management framework provided by Microsoft; this membership system can be plugged into any ASP.NET framework such as Web API, MVC, Web Forms, etc…

In this tutorial we’ll cover how to integrate the ASP.NET Identity system with ASP.NET Web API, so we can build a secure HTTP service which acts as a back-end for a SPA front-end built using AngularJS. I’ll try to cover, in a simple way, different ASP.NET Identity 2.1 features such as: accounts management, roles management, email confirmations, change password, roles based authorization, claims based authorization, brute force protection, etc…

The AngularJS front-end application will use bearer token based authentication in the JSON Web Token (JWT) format, and it should support roles based authorization and contain the basic features of any membership system. The SPA is not ready yet, but hopefully it will sit on top of our HTTP service without the need to come back and modify the ASP.NET Web API logic.

I will follow a step-by-step approach and start from scratch without using any VS 2013 templates, so we’ll have a better understanding of how the ASP.NET Identity 2.1 framework talks to the ASP.NET Web API framework.

The source code for this tutorial is available on GitHub.

I broke down this series into multiple posts which I’ll be posting gradually, posts are:

Configure ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management)

Setting up the ASP.NET Identity 2.1

Step 1: Create the Web API Project

In this tutorial I’m using Visual Studio 2013 and .NET Framework 4.5. Create an empty solution and name it “AspNetIdentity”, then add a new ASP.NET Web application named “AspNetIdentity.WebApi”; we will select an empty template with no core dependencies at all, as in the image below:

WebApiNewProject

Step 2: Install the needed NuGet Packages:

We’ll install the NuGet packages below to set up our Owin server and configure ASP.NET Web API to be hosted within it, as well as the packages needed for ASP.NET Identity 2.1. If you would like to know more about the use of each package and what the Owin server is, please check this post.

Install-Package Microsoft.AspNet.Identity.Owin -Version 2.1.0
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.1.0
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Security.OAuth -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0

Step 3: Add the Application User Class and Application Database Context:

Now we want to define our first custom entity framework class, the “ApplicationUser” class. This class represents a user who wants to register in our membership system, and we want to extend the default class in order to add application-specific data properties for the user, such as: First Name, Last Name, Level, JoinDate. Those properties will be converted to columns in the table “AspNetUsers”, as we’ll see in the next steps.

So to do this we need to create a new class named “ApplicationUser” that derives from the “Microsoft.AspNet.Identity.EntityFramework.IdentityUser” class.

Note: If you do not want to add any extra properties to this class, then there is no need to extend the default implementation and derive from “IdentityUser” class.

To do so add new folder named “Infrastructure” to our project then add new class named “ApplicationUser” and paste the code below:

public class ApplicationUser : IdentityUser
    {
        [Required]
        [MaxLength(100)]
        public string FirstName { get; set; }

        [Required]
        [MaxLength(100)]
        public string LastName { get; set; }

        [Required]
        public byte Level { get; set; }

        [Required]
        public DateTime JoinDate { get; set; }

    }

Now we need to add the database context class which will be responsible for communicating with our database, so add a new class named “ApplicationDbContext” under the folder “Infrastructure”, then paste the code snippet below:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
    {
        public ApplicationDbContext()
            : base("DefaultConnection", throwIfV1Schema: false)
        {
            Configuration.ProxyCreationEnabled = false;
            Configuration.LazyLoadingEnabled = false;
        }

        public static ApplicationDbContext Create()
        {
            return new ApplicationDbContext();
        }

    }

As you can see, this class inherits from the “IdentityDbContext” class; you can think of it as a special version of the traditional “DbContext” class. It provides all of the entity framework code-first mappings and DbSet properties needed to manage the Identity tables in SQL Server. The default constructor takes the connection string name “DefaultConnection” as an argument; this connection string will be used to point to the right server and database to connect to.

The static method “Create” will be called from our Owin Startup class, more about this later.

Lastly we need to add a connection string which points to the database that will be created using code first approach, so open “Web.config” file and paste the connection string below:

<connectionStrings>
    <add name="DefaultConnection" connectionString="Data Source=.\sqlexpress;Initial Catalog=AspNetIdentity;Integrated Security=SSPI;" providerName="System.Data.SqlClient" />
  </connectionStrings>

Step 4: Create the Database and Enable DB migrations:

Now we want to enable the EF code-first migrations feature, which configures code first to update the database schema instead of dropping and re-creating the database with each change to the EF entities. To do so, open the NuGet Package Manager Console and type the following commands:

enable-migrations
add-migration InitialCreate

The “enable-migrations” command creates a “Migrations” folder in the “AspNetIdentity.WebApi” project and creates a file named “Configuration”. This file contains a method named “Seed” which allows us to insert or update test/initial data after code first creates or updates the database. The method is called when the database is created for the first time and every time the database schema is updated after a data model change.

Migrations

As well, the “add-migration InitialCreate” command generates the code that creates the database from scratch. This code is also in the “Migrations” folder, in the file named “<timestamp>_InitialCreate.cs“. The “Up” method of the “InitialCreate” class creates the database tables that correspond to the data model entity sets, and the “Down” method deletes them. In our case, if you open the class “201501171041277_InitialCreate” you will see the extended data properties we added in the “ApplicationUser” class inside the “Up” method.
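
For reference, the scaffolded “Up” method looks roughly like the abbreviated sketch below; the standard Identity columns and the other Identity tables are omitted here for brevity:

public partial class InitialCreate : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.AspNetUsers",
            c => new
                {
                    Id = c.String(nullable: false, maxLength: 128),
                    FirstName = c.String(nullable: false, maxLength: 100),
                    LastName = c.String(nullable: false, maxLength: 100),
                    Level = c.Byte(nullable: false),
                    JoinDate = c.DateTime(nullable: false),
                    // ...the standard Identity columns (Email, PasswordHash, ...) follow here
                })
            .PrimaryKey(t => t.Id);

        // ...creation of AspNetRoles, AspNetUserRoles, AspNetUserClaims and AspNetUserLogins omitted
    }

    public override void Down()
    {
        // ...drops the tables created above
        DropTable("dbo.AspNetUsers");
    }
}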

Now back to the “Seed” method in class “Configuration”, open the class and replace the Seed method code with the code below:

protected override void Seed(AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext context)
        {
            //  This method will be called after migrating to the latest version.

            var manager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()));

            var user = new ApplicationUser()
            {
                UserName = "SuperPowerUser",
                Email = "taiseer.joudeh@mymail.com",
                EmailConfirmed = true,
                FirstName = "Taiseer",
                LastName = "Joudeh",
                Level = 1,
                JoinDate = DateTime.Now.AddYears(-3)
            };

            manager.Create(user, "MySuperP@ssword!");
        }

This code basically creates a user once the database is created.

Now we are ready to trigger the event which will create the database on our SQL server based on the connection string we specified earlier, so open NuGet Package Manager Console and type the command:

update-database

The “update-database” command runs the “Up” method of the “InitialCreate” migration to create the database, and then runs the “Seed” method in the “Configuration” class to populate it and insert a user.

If all is fine, navigate to your SQL server instance and the database along with the additional fields in table “AspNetUsers” should be created as the image below:

AspNetIdentityDB

Step 5: Add the User Manager Class:

The user manager class will be responsible for managing instances of the user class. The class derives from “UserManager<T>” where T represents our “ApplicationUser” class. Once it derives from “UserManager<ApplicationUser>”, a set of methods becomes available which facilitate managing users in our Identity system. Some of the methods we’ll use from the “UserManager” during this tutorial are:

  • FindByIdAsync(id): finds a user object based on its unique identifier
  • Users: returns an enumeration of the users
  • FindByNameAsync(Username): finds a user based on its username
  • CreateAsync(User, Password): creates a new user with a password
  • GenerateEmailConfirmationTokenAsync(Id): generates an email confirmation token which is used in email confirmation
  • SendEmailAsync(Id, Subject, Body): sends a confirmation email to the newly registered user
  • ConfirmEmailAsync(Id, token): confirms the user email based on the received token
  • ChangePasswordAsync(Id, OldPassword, NewPassword): changes the user password
  • DeleteAsync(User): deletes a user
  • IsInRole(Username, Rolename): checks if a user belongs to a certain role
  • AddToRoleAsync(Username, RoleName): assigns a user to a specific role
  • RemoveFromRoleAsync(Username, RoleName): removes a user from a specific role

Now to implement the “UserManager” class, add new file named “ApplicationUserManager” under folder “Infrastructure” and paste the code below:

public class ApplicationUserManager : UserManager<ApplicationUser>
    {
        public ApplicationUserManager(IUserStore<ApplicationUser> store)
            : base(store)
        {
        }

        public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
        {
            var appDbContext = context.Get<ApplicationDbContext>();
            var appUserManager = new ApplicationUserManager(new UserStore<ApplicationUser>(appDbContext));

            return appUserManager;
        }
    }

As you notice from the code above, the static method “Create” is responsible for returning an instance of the “ApplicationUserManager” class named “appUserManager”. The constructor of the “ApplicationUserManager” expects to receive an instance of a “UserStore”, and the UserStore constructor in turn expects to receive an instance of our “ApplicationDbContext” defined earlier. Currently we are reading this instance from the Owin context, but we didn’t add it there yet, so let’s jump to the next step and add it.

Note: In the coming post we’ll apply different changes to the “ApplicationUserManager” class such as configuring email service, setting user and password polices.

Step 6: Add Owin “Startup” Class

Now we’ll add the Owin “Startup” class, which will be fired once our server starts. The “Configuration” method accepts a parameter of type “IAppBuilder” which will be supplied by the host at run-time. This “app” parameter is an interface which will be used to compose the application for our Owin server, so add a new file named “Startup” to the root of the project and paste the code below:

public class Startup
    {

        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration httpConfig = new HttpConfiguration();

            ConfigureOAuthTokenGeneration(app);

            ConfigureWebApi(httpConfig);

            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);

            app.UseWebApi(httpConfig);

        }

        private void ConfigureOAuthTokenGeneration(IAppBuilder app)
        {
            // Configure the db context and user manager to use a single instance per request
            app.CreatePerOwinContext(ApplicationDbContext.Create);
            app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);

	    // Plugin the OAuth bearer JSON Web Token tokens generation and Consumption will be here

        }

        private void ConfigureWebApi(HttpConfiguration config)
        {
            config.MapHttpAttributeRoutes();

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

What is worth noting here is how we create a fresh instance of the “ApplicationDbContext” and “ApplicationUserManager” for each request and set them in the Owin context using the extension method “CreatePerOwinContext”. Both objects (ApplicationDbContext and ApplicationUserManager) will be available during the entire lifetime of the request.

Note: I didn’t plug any kind of authentication here, we’ll visit this class again and add JWT Authentication in the next post, for now we’ll be fine accepting any request from any anonymous users.

Define Web API Controllers and Methods

Step 7: Create the “Accounts” Controller:

Now we’ll add our first controller named “AccountsController” which will be responsible to manage user accounts in our Identity system, to do so add new folder named “Controllers” then add new class named “AccountsController” and paste the code below:

[RoutePrefix("api/accounts")]
public class AccountsController : BaseApiController
{

	[Route("users")]
	public IHttpActionResult GetUsers()
	{
		return Ok(this.AppUserManager.Users.ToList().Select(u => this.TheModelFactory.Create(u)));
	}

	[Route("user/{id:guid}", Name = "GetUserById")]
	public async Task<IHttpActionResult> GetUser(string Id)
	{
		var user = await this.AppUserManager.FindByIdAsync(Id);

		if (user != null)
		{
			return Ok(this.TheModelFactory.Create(user));
		}

		return NotFound();

	}

	[Route("user/{username}")]
	public async Task<IHttpActionResult> GetUserByName(string username)
	{
		var user = await this.AppUserManager.FindByNameAsync(username);

		if (user != null)
		{
			return Ok(this.TheModelFactory.Create(user));
		}

		return NotFound();

	}
}

What we have implemented above is the following:

  • Our “AccountsController” inherits from a base controller named “BaseApiController”. This base controller is not created yet, but it contains members that will be reused among the different controllers we’ll add during this tutorial; the members which come from “BaseApiController” are: “AppUserManager”, “TheModelFactory”, and “GetErrorResult”. We’ll see the implementation of this class in the next step.
  • We have added 3 methods/actions so far in the “AccountsController”:
    • Method “GetUsers” will be responsible for returning all the registered users in our system by calling the enumeration “Users” coming from the “ApplicationUserManager” class.
    • Method “GetUser” will be responsible for returning a single user by providing its unique identifier and calling the method “FindByIdAsync” coming from the “ApplicationUserManager” class.
    • Method “GetUserByName” will be responsible for returning a single user by providing its username and calling the method “FindByNameAsync” coming from the “ApplicationUserManager” class.
    • All three methods pass the user object to a class named “ModelFactory”; we’ll see in the next step the benefit of using this pattern to shape the returned object graph and how it protects us from leaking any sensitive information about the user identity.
  • Note: All methods can be accessed by any anonymous user, for now we are fine with this, but we’ll manage the access control for each method and who are the authorized identities that can perform those actions in the coming posts.

Step 8: Create the “BaseApiController” Controller:

As we stated before, this “BaseApiController” will act as a base class which other Web API controllers will inherit from. For now it contains three basic members, so add a new class named “BaseApiController” under the folder “Controllers” and paste the code below:

public class BaseApiController : ApiController
    {

        private ModelFactory _modelFactory;
        private ApplicationUserManager _AppUserManager = null;

        protected ApplicationUserManager AppUserManager
        {
            get
            {
                return _AppUserManager ?? Request.GetOwinContext().GetUserManager<ApplicationUserManager>();
            }
        }

        public BaseApiController()
        {
        }

        protected ModelFactory TheModelFactory
        {
            get
            {
                if (_modelFactory == null)
                {
                    _modelFactory = new ModelFactory(this.Request, this.AppUserManager);
                }
                return _modelFactory;
            }
        }

        protected IHttpActionResult GetErrorResult(IdentityResult result)
        {
            if (result == null)
            {
                return InternalServerError();
            }

            if (!result.Succeeded)
            {
                if (result.Errors != null)
                {
                    foreach (string error in result.Errors)
                    {
                        ModelState.AddModelError("", error);
                    }
                }

                if (ModelState.IsValid)
                {
                    // No ModelState errors are available to send, so just return an empty BadRequest.
                    return BadRequest();
                }

                return BadRequest(ModelState);
            }

            return null;
        }
    }

What we have implemented above is the following:

  • We have added a read-only property named “AppUserManager” which gets the instance of the “ApplicationUserManager” we already set in the “Startup” class; this instance will be initialized and ready to be invoked.
  • We have added another read-only property named “TheModelFactory” which returns an instance of the “ModelFactory” class. This factory pattern will help us shape and control the response returned to the client, so we will create a simplified model for some of the domain objects (users, roles, claims, etc.) we have in the database. Shaping the response and building a customized object graph is very important here because we do not want to leak sensitive data such as the “PasswordHash” to the client.
  • We have added a function named “GetErrorResult” which takes an “IdentityResult” as a parameter and formats the error messages returned to the client.

Step 9: Create the “ModelFactory” Class:

Now add new folder named “Models” and inside this folder create new class named “ModelFactory”, this class will contain all the functions needed to shape the response object and control the object graph returned to the client, so open the file and paste the code below:

public class ModelFactory
    {
        private UrlHelper _UrlHelper;
        private ApplicationUserManager _AppUserManager;

        public ModelFactory(HttpRequestMessage request, ApplicationUserManager appUserManager)
        {
            _UrlHelper = new UrlHelper(request);
            _AppUserManager = appUserManager;
        }

        public UserReturnModel Create(ApplicationUser appUser)
        {
            return new UserReturnModel
            {
                Url = _UrlHelper.Link("GetUserById", new { id = appUser.Id }),
                Id = appUser.Id,
                UserName = appUser.UserName,
                FullName = string.Format("{0} {1}", appUser.FirstName, appUser.LastName),
                Email = appUser.Email,
                EmailConfirmed = appUser.EmailConfirmed,
                Level = appUser.Level,
                JoinDate = appUser.JoinDate,
                Roles = _AppUserManager.GetRolesAsync(appUser.Id).Result,
                Claims = _AppUserManager.GetClaimsAsync(appUser.Id).Result
            };
        }
    }

    public class UserReturnModel
    {
        public string Url { get; set; }
        public string Id { get; set; }
        public string UserName { get; set; }
        public string FullName { get; set; }
        public string Email { get; set; }
        public bool EmailConfirmed { get; set; }
        public int Level { get; set; }
        public DateTime JoinDate { get; set; }
        public IList<string> Roles { get; set; }
        public IList<System.Security.Claims.Claim> Claims { get; set; }
    }

Notice how we included only the properties needed in the returned user object graph; for example, there is no need to return the “PasswordHash” property, so we didn’t include it.

Step 10: Add a Method to Create Users in “AccountsController”:

It is time to add the method which allows us to register/create users in our Identity system, but before adding it we need to add the request model object which contains the user data that will be sent from the client. So add a new file named “AccountBindingModels” under the folder “Models” and paste the code below:

public class CreateUserBindingModel
    {
        [Required]
        [EmailAddress]
        [Display(Name = "Email")]
        public string Email { get; set; }

        [Required]
        [Display(Name = "Username")]
        public string Username { get; set; }

        [Required]
        [Display(Name = "First Name")]
        public string FirstName { get; set; }

        [Required]
        [Display(Name = "Last Name")]
        public string LastName { get; set; }

        [Display(Name = "Role Name")]
        public string RoleName { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Confirm password")]
        [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    }

The class is very simple; it contains properties for the fields we want to send from the client to our API, with some data annotation attributes which help us validate the model before submitting it to the database. Notice how we added a property named “RoleName” which will not be used now, but will be useful in the coming posts.

Now it is time to add the method which registers/creates a user. Open the controller named “AccountsController”, add a new method named “CreateUser”, and paste the code below:

[Route("create")]
public async Task<IHttpActionResult> CreateUser(CreateUserBindingModel createUserModel)
{
	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	var user = new ApplicationUser()
	{
		UserName = createUserModel.Username,
		Email = createUserModel.Email,
		FirstName = createUserModel.FirstName,
		LastName = createUserModel.LastName,
		Level = 3,
		JoinDate = DateTime.Now.Date,
	};

	IdentityResult addUserResult = await this.AppUserManager.CreateAsync(user, createUserModel.Password);

	if (!addUserResult.Succeeded)
	{
		return GetErrorResult(addUserResult);
	}

	Uri locationHeader = new Uri(Url.Link("GetUserById", new { id = user.Id }));

	return Created(locationHeader, TheModelFactory.Create(user));
}

What we have implemented here is the following:

  • We validated the request model based on the data annotations we introduced in the class “AccountBindingModels”; if a field is missing, the response will return HTTP 400 with a proper error message.
  • If the model is valid, we use it to create a new instance of the class “ApplicationUser”; by default we put all new users in level 3.
  • Then we call the method “CreateAsync” in the “AppUserManager”, which does the heavy lifting for us. Inside this method it validates whether the username or email has been used before and whether the password matches our policy, etc. If the request is valid, it creates a new user, adds it to the “AspNetUsers” table, and returns a success result. From this result, and as good practice, we return the created resource in the location header along with a 201 Created status.

Notes:

  • Sending a confirmation email for the user, and configuring user and password policy will be covered in the next post.
  • As stated earlier, there is no authentication or authorization applied yet, any anonymous user can invoke any available method, but we will cover this authentication and authorization part in the coming posts.

Step 11: Test the Methods in “AccountsController”:

Lastly, it is time to test the methods added to the API, so fire up your favorite REST client, Fiddler or Postman; in my case I prefer Postman. Let’s start by testing the “Create” user method: issue an HTTP POST to the URI “http://localhost:59822/api/accounts/create” with the request below. If creating the user went well you will receive a 201 response:

Create User
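
If you cannot see the screenshot, the raw JSON body posted to the “create” endpoint will look something like the snippet below (all values are placeholders, and “roleName” is optional so it is omitted):

{
  "email": "john.doe@example.com",
  "username": "JohnDoe",
  "firstName": "John",
  "lastName": "Doe",
  "password": "MyP@ssword!",
  "confirmPassword": "MyP@ssword!"
}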

Now to test the method “GetUsers”, all you need to do is issue an HTTP GET to the URI “http://localhost:59822/api/accounts/users”, and the response graph will be as below:

[
  {
    "url": "http://localhost:59822/api/accounts/user/29e21f3d-08e0-49b5-b523-3d68cf623fd5",
    "id": "29e21f3d-08e0-49b5-b523-3d68cf623fd5",
    "userName": "SuperPowerUser",
    "fullName": "Taiseer Joudeh",
    "email": "taiseer.joudeh@gmail.com",
    "emailConfirmed": true,
    "level": 1,
    "joinDate": "2012-01-17T12:41:40.457",
    "roles": [
      "Admin",
      "Users",
      "SuperAdmin"
    ],
    "claims": [
      {
        "issuer": "LOCAL AUTHORITY",
        "originalIssuer": "LOCAL AUTHORITY",
        "properties": {},
        "subject": null,
        "type": "Phone",
        "value": "123456782",
        "valueType": "http://www.w3.org/2001/XMLSchema#string"
      },
      {
        "issuer": "LOCAL AUTHORITY",
        "originalIssuer": "LOCAL AUTHORITY",
        "properties": {},
        "subject": null,
        "type": "Gender",
        "value": "Male",
        "valueType": "http://www.w3.org/2001/XMLSchema#string"
      }
    ]
  },
  {
    "url": "http://localhost:59822/api/accounts/user/f0f8d481-e24c-413a-bf84-a202780f8e50",
    "id": "f0f8d481-e24c-413a-bf84-a202780f8e50",
    "userName": "tayseer.Joudeh",
    "fullName": "Tayseer Joudeh",
    "email": "tayseer_joudeh@hotmail.com",
    "emailConfirmed": true,
    "level": 3,
    "joinDate": "2015-01-17T00:00:00",
    "roles": [],
    "claims": []
  }
]

The source code for this tutorial is available on GitHub.

In the next post we’ll see how to configure our Identity service to start sending email confirmations, customize the username and password policies, implement JSON Web Token (JWT) authentication, and manage access to the methods.

Follow me on Twitter @tjoudeh

The post ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management) – Part 1 appeared first on Bit of Technology.


Filip Woj: Migrating from ASP.NET Web API to MVC 6 – exploring Web API Compatibility Shim

Migrating an MVC 5 project to ASP.NET 5 and MVC 6 is a big challenge given that both of the latter are complete rewrites of their predecessors. As a result, even if on the surface things seem similar (we have controllers, filters, actions etc), as you go deeper under the hood you realize that most, […]

The post Migrating from ASP.NET Web API to MVC 6 – exploring Web API Compatibility Shim appeared first on StrathWeb.


Ugo Lattanzi: Speed up WebAPI on Microsoft Azure

One of my favorite features of ASP.NET Web API is the opportunity to run your code outside of Internet Information Services (IIS). I don’t have anything against IIS; in fact my thought matches this tweet:

But System.Web is really a problem and, in some cases, IIS pipeline is too complicated for a simple REST call.

we fix one bug and open seven new one (unnamed Microsoft employee on System.Web)

Another important thing I like is cloud computing, Microsoft Azure in this case. In fact, if you want to run your APIs outside IIS and you have to scale on Microsoft Azure, this article could be helpful.

Azure offers different ways to host your APIs and scale them. The most common solutions are WebSites or Cloud Services.

Unfortunately we can’t use Azure WebSites because everything there runs on IIS (more info here), so we have to use Cloud Services. But the question here is: Web Role or Worker Role?

The main difference between a Web Role and a Worker Role is that the first one runs on IIS, the domain is configured on the web server and port 80 is open by default; the second one is a process (an .exe file, to be clear) that runs in a “closed” environment.

To remain consistent with what is written above, we have to use the Worker Role instead of the Web Role, so let’s create it by following the steps below:

Now that the Azure project and the Worker Role project are ready, it’s important to open port 80 on the worker role (remember that by default the worker role is a closed environment).

Finally we have the environment ready; it’s time to install a few WebAPI packages and write some code.

PM> Install-Package Microsoft.AspNet.WebApi.OwinSelfHost

Now add an OWIN startup class

and finally configure the WebAPI routing and its OWIN middleware:

using System.Web.Http;
using DemoWorkerRole;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (Startup))]

namespace DemoWorkerRole
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();

            // Routing
            config.Routes.MapHttpRoute(
                "Default",
                "api/{controller}/{id}",
                new {id = RouteParameter.Optional});

            //Configure WebAPI
            app.UseWebApi(config);
        }
    }
}

and create a demo controller

using System.Web.Http;

namespace DemoWorkerRole.APIs
{
    public class DemoController : ApiController
    {
        public string Get(string id)
        {
            return string.Format("The parameter value is {0}", id);
        }
    }
}

Nothing special so far; the app is ready and we just have to configure the worker role, which is the WorkerRole.cs file created by Visual Studio.

What we have to do here is read the configuration from Azure (we may have to map a custom domain, for example) and start the web server.

To do that, first add the domain on the cloud service configuration following the steps below:
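
For reference, a minimal sketch of the matching ServiceConfiguration.cscfg values (the domain value here is hypothetical):

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="imperugo.demo.azure.webapi" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="DemoWorkerRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="protocol" value="http" />
      <Setting name="domain" value="api.example.com" />
      <Setting name="port" value="80" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>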

finally the worker role:

using System;
using System.Diagnostics;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Owin.Hosting;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace DemoWorkerRole
{
    public class WorkerRole : RoleEntryPoint
    {
        private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
        private readonly ManualResetEvent runCompleteEvent = new ManualResetEvent(false);

        private IDisposable app;

        public override void Run()
        {
            Trace.TraceInformation("WorkerRole is running");

            try
            {
                RunAsync(cancellationTokenSource.Token).Wait();
            }
            finally
            {
                runCompleteEvent.Set();
            }
        }

        public override bool OnStart()
        {
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            string baseUri = String.Format("{0}://{1}:{2}", RoleEnvironment.GetConfigurationSettingValue("protocol"),
                RoleEnvironment.GetConfigurationSettingValue("domain"),
                RoleEnvironment.GetConfigurationSettingValue("port"));

            Trace.TraceInformation(String.Format("Starting OWIN at {0}", baseUri), "Information");

            try
            {
                app = WebApp.Start<Startup>(new StartOptions(url: baseUri));
            }
            catch (Exception e)
            {
                Trace.TraceError(e.ToString());
                throw;
            }

            bool result = base.OnStart();

            Trace.TraceInformation("WorkerRole has been started");

            return result;
        }

        public override void OnStop()
        {
            Trace.TraceInformation("WorkerRole is stopping");

            cancellationTokenSource.Cancel();
            runCompleteEvent.WaitOne();

            if (app != null)
            {
                app.Dispose();
            }

            base.OnStop();

            Trace.TraceInformation("WorkerRole has stopped");
        }

        private async Task RunAsync(CancellationToken cancellationToken)
        {
            // TODO: Replace the following with your own logic.
            while (!cancellationToken.IsCancellationRequested)
            {
                //Trace.TraceInformation("Working");
                await Task.Delay(1000);
            }
        }
    }
}

We are almost done; the last step is to configure the right execution context in the ServiceDefinition.csdef:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="imperugo.demo.azure.webapi" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-06.2.4">
    <WorkerRole name="DemoWorkerRole" vmsize="Small">
        <Runtime executionContext="elevated" />
        <Imports>
            <Import moduleName="Diagnostics" />
        </Imports>
        <Endpoints>
            <InputEndpoint name="Http" protocol="http" port="80" localPort="80" />
        </Endpoints>
        <ConfigurationSettings>
            <Setting name="protocol" />
            <Setting name="domain" />
            <Setting name="port" />
        </ConfigurationSettings>
    </WorkerRole>
</ServiceDefinition>

The important part here is the Runtime node: we are using the HttpListener to read incoming messages from the web, and that requires elevated privileges.

Now we are up & running with WebAPI hosted in a Cloud Service without using IIS.

The demo code is available here.

Have fun.


Pedro Félix: JWT and JOSE specifications approved for publication as RFCs

It seems the JSON Web Token (JWT) specs are finally ready to become RFCs. I’ve written about security tokens before: it was 2008, XML, SAML and WS-Security were still hot subjects and JWT didn’t exist yet. The more recent “Designing Evolvable Web APIs with ASP.NET” book already includes a discussion of JWT in its security chapter. However, I think this announcement deserves a few more words and a colorful diagram.

A security token is a data structure that holds security related information, during the communication between two parties. For instance, on a distributed authentication scenario a security token may be used to transport the identity claims, asserted by the identity provider, to the consuming relying party.

As a transport container, the security token structure must provide important security properties:

  • Integrity – the consuming party should be able to detect any modifications to the token while in transit between the two parties. This property is usually mandatory, because the token information would be of little use without it
  • Confidentiality – only the authorized receiver should be able to access the contained information. This property isn’t required in all scenarios.

Kerberos tickets, SAML assertions and JSON Web Tokens are all examples of security tokens. Given the available prior art, namely SAML assertions, one may ask what’s the motivation for yet another security token format. JWT tokens were specifically designed to be more compact than the alternatives and also to be URL-safe by default. These two properties are very important for modern usage scenarios (e.g. the OpenID Connect protocol), where tokens are transported in URI query strings and HTTP headers. Also, JWT tokens use the JavaScript Object Notation (JSON) standard, which seems to be the data interchange format du jour for the Web.

The following diagram presents an example of an encoded token, the contained information and how it relates to the token issuer, the token recipient and the token subject.

jwt

A JWT is composed of multiple base64url encoded parts, separated by the ‘.’ character. The first part is the header and is composed of a single JSON object. In the example, the object’s properties, also called claims, are:

  • "typ":"JWT" – the token type.
  • "alg":"HS256" – the token protection algorithm, which in this case is only symmetric signature (i.e. message authentication code) using HMAC-SHA-256.

The second part is the payload and is composed of the claim set asserted by the issuer. In the example they are:

  • "iss":"https://issuer.webapibook.net" (issuer) – the issuer identifier.
  • "aud":"https://example.net" (audience) – the intended recipient.
  • "nbf":1376571701 (not before).
  • "exp":1376572001 (expires).
  • "sub":"alice@webapibook.net" (subject) – the claims subject (e.g. the authenticated user).
  • "email":"alice@webapibook.net" (email) – the subject’s email.
  • "name":"Alice" (name) – the subject’s name.

The first five claims (iss to sub) have their syntax and semantics defined by the JWT spec. The remaining two (email and name) are defined by other specs such as OpenID Connect, which rely on the JWT spec.

Finally, the last part is the token signature produced using the HMAC-SHA-256 algorithm. In this example, the token protection only includes integrity verification. However, it is possible to also have confidentiality by using encryption techniques.
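
To make these mechanics concrete, here is a minimal C# sketch that produces a token with this structure. The header, payload and key values are illustrative, and a real application should use a vetted JWT library rather than hand-rolling this:

using System;
using System.Security.Cryptography;
using System.Text;

class JwtSketch
{
    // base64url: standard Base64 with '+' -> '-', '/' -> '_' and the '=' padding removed
    static string ToBase64Url(byte[] bytes)
    {
        return Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');
    }

    static void Main()
    {
        string header = "{\"typ\":\"JWT\",\"alg\":\"HS256\"}";
        string payload = "{\"iss\":\"https://issuer.webapibook.net\",\"aud\":\"https://example.net\",\"sub\":\"alice@webapibook.net\"}";
        byte[] key = Encoding.UTF8.GetBytes("a-shared-secret-known-to-both-parties"); // illustrative key

        // First two parts: base64url-encoded header and payload, separated by '.'
        string signingInput = ToBase64Url(Encoding.UTF8.GetBytes(header))
                              + "." + ToBase64Url(Encoding.UTF8.GetBytes(payload));

        // Third part: HMAC-SHA-256 over the first two parts, keyed with the shared secret
        using (var hmac = new HMACSHA256(key))
        {
            string signature = ToBase64Url(hmac.ComputeHash(Encoding.UTF8.GetBytes(signingInput)));
            Console.WriteLine(signingInput + "." + signature);
        }
    }
}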

The signature and encryption procedures, as well as the available algorithms and the ways to represent key information (e.g. public keys and key metadata) are defined on a set of auxiliary specs produced by the Javascript Object Signing and Encryption (JOSE) IETF working group.

Finally, a reference to the excellent JWT debugger and library list, made available by Auth0.



Pedro Félix: Recollections on 2014 – the soul of a new book

Last March 11, while waiting for the subway to head home, I received an email from our O’Reilly editor telling us that “Designing Evolvable Web APIs with ASP.NET” had finally gone to print. More than 2 years had passed on a journey that started with an email from Pablo, asking me if I was interested in co-authoring a book on ASP.NET Web API.

designevolvecover

“Designing Evolvable Web APIs with ASP.NET” is the result of the combined knowledge, experience and passion of five authors (Darrel, Glenn, Howard, Pablo and me), with different backgrounds but a common interest for the Web, its architecture and possibilities.

Writing a book with five authors, living in three continents and four time zones is a challenging endeavor. However, it is also an example of what can be accomplished with the cooperation technologies that we currently have available. The book was mostly written using Asciidoc, a textual format similar to Markdown but with added features. A private Git repo associated with a build pipeline was used to share the book source among the authors and create the derived artifacts, such as the PDF and the HTML versions. A GitHub organization was also used to share all the book’s code, which is publicly available at https://github.com/webapibook. For the many conversations and meetings, we used mostly Skype and Google Hangout.

One of my recollections of reading the “C++ Programming Language” book, by B. Stroustrup, almost 20 years ago, is the following quote attributed to Kristen Nygaard: “Programming is understanding”. For me, writing is also understanding. Many afternoons and evenings were spent trying to better grasp sparse and incomplete ideas by turning them into meaningful sequences of sentences and paragraphs. The rewarding feeling of finally being able to write an understandable paragraph made all those struggling hours worthwhile. I really hope the readers will enjoy reading them as much as I did writing them. There were some defeats also. For them, I apologize.

“Designing Evolvable Web APIs with ASP.NET” aims to provide the reader with the knowledge and skills required to build Web APIs that can adapt to change over time. It is divided into three parts.
The first one is composed of four chapters and contains an introduction to the Web architecture, Web APIs and related specs, such as HTTP. It also contains an introduction to the ASP.NET Web API programming model and runtime architecture.

The second and core part of the book addresses the design, implementation and use of an evolvable Web API, based on a concrete example: issue tracking. It contains chapters on problem domain analysis, on media type selection and design, on building and evolving the server and on creating clients.

The third and last part is a detailed description of the ASP.NET Web API technology, addressing subjects such as the HTTP programming model, hosting and OWIN, controllers and routing, client-side programming, model binding and media type formatting, and also testing. It also includes two chapters about Web API security, with an emphasis on the authentication and authorization aspects, namely the OAuth 2.0 Authorization Framework.

“Designing Evolvable Web APIs with ASP.NET” is available for purchase at the O’Reilly shop. A late draft is also freely available at O’Reilly Atlas. Also, feel free to drop by our discussion group.

(the title for this post was inspired by the “The Soul of a New Machine” book, authored by Tracy Kidder)



Pete Smith: Functional web synergy with F# and OWIN

Before we get started I’d just like to mention that this post is part of the truly excellent F# Advent Calendar 2014 which is a fantastic initiative organised by Sergey Tihon, so big thanks to Sergey and the rest of the F# community as well as wishing you all a merry christmas!

Introduction

Using F# to build web applications is nothing new, we have purpose built F# frameworks like Freya popping up and excellent posts like this one by Mark Seemann. It’s also fairly easy to pick up other .NET frameworks that weren’t designed specifically for F# and build very solid applications.

With that in mind, I’m not just going to write another post about how to build web applications with F#.

Instead, I’d like to introduce the F# community to a whole new way of thinking about web applications, one that draws inspiration from a number of functional programming concepts – primarily pipelining and function composition – to provide a solid base on to which we can build our web applications in F#. This approach is currently known as Graph Based Routing.

Some background

So first off – I should point out that I’m not actually an F# guy; in fact I’m pretty new to the language in general so this post is also somewhat of a learning exercise for me. I often find the best way to get acquainted with things is to dive right in, so please feel free to give me pointers in the comments.

Graph based routing itself has been around for a while, in the form of a library called Superscribe (written in C#). I’m not going to go into detail about its features; these are language agnostic, and covered by the website and some previous posts.

What I will say is that Superscribe is not a full blown web framework but actually a routing library. In fact, that’s somewhat of an oversimplification… in reality this library takes care of everything between URL and handler. It turns out that routing, content negotiation and some way of invoking a handler is actually all you need to get started building web applications.

Simplicity rules

This simplicity is a key tenet of graph based routing – keeping things minimal helps us build web applications that respond very quickly indeed as there is simply no extra processing going on. If you’re building a very content-heavy application then it’s probably not the right choice, but for APIs it’s incredibly performant.

Let’s have a look at an example application using Superscribe in F#:

Superscribe defaults to a text/html response and will try its best to deal with whatever object you return from your handler. You can also do all the usual things like specify custom media type serialisers, return status codes etc.

The key part to focus on here is the define.Route statement, which allows us to directly assign a handler to a particular route – in this case /hello/world and /hello/fsharp. This is kinda cool, but there’s a lot more going on here than meets the eye.

Functions and graph based routing

Graph based routing is so named because it stores route definitions in – you guessed it – a graph structure. Traditional route matching tends to focus on tables of strings and pattern matching based on the entire URL, but Superscribe is different.

In the example above the URL /hello/world gets broken down into its respective segments. Each segment is represented by a node in the graph, with the next possible matches as its children. Subsequent definitions are also broken down and intelligently added into the graph, so in this instance we end up with something like this:

hello world graph

Route matching is performed by walking the graph and checking for matches – it’s essentially a state machine. This is great because we only need to check for the segments that we expect; we don’t waste time churning through a large route table.

But here’s where it gets interesting. Nodes in graph based routing are composed of three functions:

  • Activation function – returns a boolean indicating if the node is a match for the current segment
  • Action function – executed when a match has been found, so we can do things like parameter capture
  • Final function – executed when matching finishes on a particular node, i.e. the handler

All of these functions can execute absolutely any arbitrary code that we like. With this model we can do some really interesting things such as conditional route matching based on the time of day, a debug flag or even based on live information from a load balancer. Can your pattern matcher do that!?
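
As an illustration – this is not Superscribe’s actual API, just a simplified model of the idea, sketched in C# since that’s the language the library is written in:

using System;
using System.Collections.Generic;

public class RouteNode
{
    public RouteNode()
    {
        Children = new List<RouteNode>();
    }

    // Activation function: does this node match the current URL segment?
    public Func<string, bool> Activation { get; set; }

    // Action function: runs when a match is found, e.g. to capture a parameter
    public Action<string, IDictionary<string, object>> Action { get; set; }

    // Final function: runs when matching finishes at this node, i.e. the handler
    public Func<IDictionary<string, object>, object> Final { get; set; }

    public List<RouteNode> Children { get; private set; }
}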

Efficiency, composability and extensibility

Graph based routing allows us to build complex web applications that are composed of very simple units. A good approach is to use action functions to compose a pipeline of functions which get executed synchronously once route matching is complete (is this beginning to sound familiar?), but it can also be used for processing segments on the fly, for example when capturing parameters.

Here’s another example that shows this compositional nature in action. We’re going to define and use a new type of node that will match and capture certain strings. Because Superscribe relies on the C# dynamic keyword, I’ve used the ? operator provided by FSharp.Dynamic.

In the previous example we relied on the library to build a graph for us given a string – here we’re being explicit and constructing our own using the / operator (neat eh?). Our custom node will only activate when the segment starts with the letter “p”, and if it does then it will store that parameter away in a dynamic dictionary so we can use it later.

If the engine doesn’t match on a node, it’ll continue through its siblings looking for a match there instead. In our case, anything that doesn’t start with “p” will get picked up by the second route – the String parameter node acts as a catch-all:

hello fsharp
hello pete

Pipelines and OWIN

This gets even more exciting when we bring OWIN into the mix. OWIN allows us to build web applications out of multiple pieces of middleware, distinct orthogonal units that run together in a pipeline.

Usually these are quite linear, but with graph based routing and its ability to execute arbitrary code, we can build our pipeline on the fly. In this final example, we’re using two pieces of sample middleware to control access to parts of our web application:

Superscribe has support for this kind of middleware pipelining built in via the Pipeline method. In the code above we’ve specified that anything under the admin/ route will invoke the RequireHttps middleware, and if we’re doing anything other than requesting a token then we’ll need to provide the correct auth header. Behind the syntactic sugar, Superscribe is simply doing everything using the three types of function that we looked at earlier.

This example is not going to win any awards for security practices, but it’s a pretty powerful demonstration of how these functional-inspired practices of composition and pipelining can help us build some really flexible and maintainable web applications. It turns out that there really is a lot more synergy between F# and the web than most people realise!

Summary

Some aspects still leave a little to be desired from the functional perspective – our functions aren’t exactly pure for example. But this is just the beginning of the relationship between F# and Superscribe. Most of the examples in the post have been ported straight from C# and so don’t really make any use of F# language features.

I’m really excited about what can be achieved when we start bringing things like monads and discriminated unions into the mix; it should make for some super-terse syntax. I’d love to hear some thoughts on this from the community… I’m sure we can do better than previous attempts at monadic url routing at any rate!

I hope you enjoyed today’s advent calendar… special thanks go to Scott Wlaschin for all his technical feedback. I deliberately kept the specifics light here so as not to detract from the message of the post, but you can read more about Superscribe and graph based routing on the Superscribe website.

Merry christmas to you all!
Pete

References

http://owin.org/
http://sergeytihon.wordpress.com/2014/11/24/f-advent-calendar-in-english-2014/
http://about.me/sergey.tihon
http://superscribe.org/
http://superscribe.org/graphbasedrouting.html
https://github.com/fsprojects/FSharp.Dynamic
https://gist.github.com/unknownexception/6035260
https://github.com/koistya/fsharp-owin-sample
https://github.com/freya-fs/freya
http://blog.ploeh.dk/2013/08/23/how-to-create-a-pure-f-aspnet-web-api-project/
http://wizardsofsmart.net/samples/working-with-non-compliant-owin-middleware/
http://happstack.com/page/view-page-slug/16/comparison-of-4-approaches-to-implementing-url-routing-combinators-including-the-free-and-operational-monads
https://twitter.com/scottwlaschin



Darrel Miller: Where, oh where, does the API key go?

Yesterday on twitter I made a comment criticizing the practice of putting an API key in a query string parameter.  I was surprised by the amount of attention it got and there were a number of responses questioning the significance of my objection.  Rather than try and reply in 140 character chunks, I decided a blog post was in order.

Security

Most of the comments were security related,

It is true that whether the API key is put in the URL or in the Authorization header, it is going to be sent over the wire in clear text. If security is critical then HTTPS is going to be necessary, and both approaches would be equivalent… over the wire.

The security problem is not really while the message is going over the wire; it is what happens to it on the client and server. We developers like writing things out to log files, and URLs are full of useful information for debugging.

My friend Pedro pointed me to an article that demonstrates how API keys in URLs can become a major problem.

I'm not suggesting that by putting the API key into an Authorization header all the problems go away. It is just a matter of reducing the chances of sensitive information being stored in unsecured places and then being misused.

It reminds me of the choice we make to lock our car doors.  Anyone who has locked their keys in the car knows how easy it is for someone with the right tools to get into a locked car. However, locking your doors does significantly reduce the chance of theft.

Andrew makes the suggestion to use the username:password convention that was introduced in RFC 1738 back in 1994.

When RFC 1738 was revised in 1998 and became RFC 2396, the following text was added:

Some URL schemes use the format "user:password" in the userinfo field. This practice is NOT RECOMMENDED

In the latest revision of the URI specification, RFC3986, they went further,

Use of the format "user:password" in the userinfo field is deprecated.

The reasons for deprecating this feature are very much applicable to the use of API keys in the query string. It is unfortunate that we aren’t quicker at learning from the mistakes of those who came before us.

Forget the security issue

When I wrote the tweet, I really wasn't complaining about having an API key in the URL for security reasons. For me, there are a number of benefits to using the Authorization header.

Consistency

One of the constraints of REST is called the "Uniform Interface". A benefit of this constraint is that when you start working with a new API, there should be consistency in the way it works. This helps to reduce the learning curve and makes it easier to build re-usable code that depends on this consistency.

Many HTTP client libraries have the ability to set default headers that will automatically be sent with every request.  It's one line of code and you get to forget about API keys and focus on actually using the API.
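
For example, with the .NET HTTPClient it looks something like this (the “apikey” scheme name is illustrative – use whatever scheme the API defines):

using System.Net.Http;
using System.Net.Http.Headers;

var client = new HttpClient();

// Set once; the header is then sent automatically with every request
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("apikey", "YOUR-API-KEY");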

When assigning an API key in a URL, you first need to know if the parameter is key, apikey, api-key or api_key. Then you need to modify the URL that you want to call to add the API key. Futzing around with strings to add a query parameter to an existing URL is full of annoying little gotchas. It is not so hard to do on a case by case basis, but trying to write generic code that will work for any URI is just painful.

I'm quite sure that string manipulation of URLs is one of the primary reasons API providers create API specific client libraries to insulate client developers from these irritants.

Hypermedia

Another REST constraint is the hypermedia constraint. I realize that hypermedia usage in the API world is still very exceptional, but its popularity is growing. Having to define a URI template for every embedded link, just to add an API key, would be really annoying.

Caching

Believe it or not, caching is another REST constraint. HTTP caches use the URL as part of the primary cache key, including the query string parameters. If you add an API key into the URL you make it difficult to take advantage of HTTP caches for resources that are common to all users. A cache would end up keeping a duplicate copy of the resource representation for every user. Not only is this a waste of cache space, but it also massively reduces the cache hit ratio.

Self-Descriptive

It is interesting to note that if you use an Authorization header, HTTP caches have special logic that will prevent caching by public caches unless you specifically allow it using a cache-control directive.  When the API key is buried in a query string parameter, intermediaries have no idea that the representation has come from a protected resource and therefore don't realize that caching it might be a bad idea.  Using standard HTTP features, the way they were defined, allows intermediary components to perform useful functions because they can have a limited understanding of the message.

A Thousand paper cuts

I believe that the usability of an API is hugely impacted by many small factors that, in isolation, seem fairly inconsequential. It is the combined effect of these small issues that is significant. There is also the impact of change: what doesn’t matter today might be very significant sometime in the future.

The HTTP specifications define a set of guidelines for building distributed applications that have been proven to work, in real running applications.  Disregarding the advice they contain is throwing money down the drain.

A final comment that I would like to address came from Bret,

My original objection was that using the Authorization header was not an option in the API I was trying to use. I understand why some users prefer to use query string parameters. Providing an easy path to get users working with your product is critical, and if providing them with a query string parameter to send the auth key helps that process then do it. However, I also believe part of the role of an API provider is to help educate API consumers on the best way to work with an API to get the best results over the long run. Give them an easy way, and when they are ready, educate them on the better way.

Hopefully, this blog post has provided some concrete reasons why using an Authorization header is a better solution.


Radenko Zec: Fixing issue with HTTPClient and ProtocolViolationException: The value of the date string in the header is invalid

When you parse a large number of sites / RSS feeds using the Microsoft HTTPClient or WebClient you can get an exception:
“ProtocolViolationException: The value of the date string in the header is invalid”

This error usually occurs when you try to access LastModified date in a response.

var response = (HttpWebResponse)httpRequest.GetResponse();
var lastModified = response.LastModified;

The problem is usually that the Last-Modified date is sent in an incorrect format that ignores the HTTP specs.

The Microsoft code inside WebClient then throws this exception.

This can be very painful but there is a nice workaround for this.

Instead of accessing the strongly typed LastModified property on the response, get the value by reading it from the response headers:

 var resultedLastModified = response.Headers["Last-Modified"];

After this you’ll need to check for nulls and try to parse the value as a date/time, but you will not get the ugly “The value of the date string in the header is invalid” exception.
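
A minimal sketch of that null-check-and-parse step:

var rawLastModified = response.Headers["Last-Modified"];

DateTime lastModified;
if (!string.IsNullOrEmpty(rawLastModified) &&
    DateTime.TryParse(rawLastModified, out lastModified))
{
    // use lastModified here
}
else
{
    // header missing or malformed - fall back to a sensible default
}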

If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.


The post Fixing issue with HTTPClient and ProtocolViolationException: The value of the date string in the header is invalid appeared first on RadenkoZec blog.


Taiseer Joudeh: Secure ASP.NET Web API using API Key Authentication – HMAC Authentication

Web API Security

Recently I was working on securing an ASP.NET Web API HTTP service that will be consumed by a large number of terminal devices installed securely in different physical locations. The main requirement was to authenticate calls originating from those terminal devices to the HTTP service, without worrying about the users who are using them. The first thing that came to my mind was to use one of the OAuth 2.0 flows, the Resource Owner Password Credentials flow, but this flow doesn’t fit nicely in my case because the bearer access tokens issued should have an expiry time and they are non-revocable by default, so issuing an access token with a very long expiry time (i.e. one year) is not the right way to do it.

After searching for a couple of hours I found out that the right, and maybe slightly more complex, way to implement this is to use HMAC Authentication (Hash-based Message Authentication Code).

The source code for this tutorial is available on GitHub.

What is HMAC Authentication?

It is a mechanism for calculating a message authentication code using a hash function in combination with a shared secret key between the two parties involved in sending and receiving the data (the front-end client and the back-end HTTP service). The main use for HMAC is to verify the integrity and authenticity of the message, and the identity of the message sender.

So in simpler words: the server provides the client with a public APP Id and a shared secret key (the API Key – shared only between server and client); this process happens only once, when the client registers with the server.

After the client and server agree on the API Key, the client creates a unique HMAC (hash) representing the request it originates to the server. It does this by combining the request data – usually the public APP Id, request URI, request content, HTTP method, time stamp, and nonce – to produce a unique hash using the API Key. Then the client sends that hash to the server, along with all the information it was going to send already in the request.

Once the server receives the request along with the hash from the client, it tries to reconstruct the hash using the received request data and the API Key. Once the hash is generated on the server, the server compares the hash sent by the client with the regenerated one; if they match then the server considers this request authentic and processes it.

Flow of using API Key – HMAC Authentication:

Note: First of all the server should provide the client with a public identifier (APP Id) and a shared private secret (API Key); the client’s responsibility is to store the API Key securely and never share it with other parties.

Flow on the client side:

  1. Client should build a string by combining all the data that will be sent; this string contains the following parameters (APP Id, HTTP method, request URI, request time stamp, nonce, and a Base64 string representation of the request payload).
  2. Note: The request time stamp is calculated using UNIX time (number of seconds since Jan. 1st 1970) to overcome any issues related to a different timezone between client and server. A nonce is an arbitrary number/string used only once. More about this later.
  3. Client will hash this large string built in the first step using a hash algorithm such as SHA256 and the API Key assigned to it; the result of this hash is a unique signature for this request.
  4. The signature will be sent in the Authorization header using a custom scheme such as “amx”. The data in the Authorization header will contain the APP Id, request time stamp, and nonce separated by a colon ‘:’. The format of the Authorization header will be: [Authorization: amx APPId:Signature:Nonce:Timestamp].
  5. Client sends the request as usual along with the data generated in step 3 in the Authorization header.

Flow on the server side:

  1. Server receives all the data included in the request along with the Authorization header.
  2. Server extracts the values (APP Id, Signature, Nonce and Request Time stamp) from the Authorization header.
  3. Server looks up the APP Id in a certain secure repository (DB, configuration file, etc…) to get the API Key for this client.
  4. Assuming the server was able to look up this APP Id in the repository, it then validates whether this request is a replay request and rejects it if so, protecting the API from replay attacks. This is why we’ve used a request time stamp along with a nonce generated at the client, and both values have been included in the HMAC signature generation. The server will depend on the nonce to check if it was used before, within certain acceptable bounds, i.e. 5 minutes. More about this later.
  5. Server will rebuild a string containing the same data received in the request, adhering to the same parameter order and encoding followed in the client application; usually this agreement is done up front between the client application and the back-end service and shared using proper documentation.
  6. Server will hash the string generated in the previous step using the same hashing algorithm used by the client (SHA256) and the same API Key obtained from the secure repository for this client.
  7. The result of this hash function (signature) generated at the server will be compared to the signature sent by the client; if they are equal then the server will consider this call authentic and process the request, otherwise it will reject the request and return HTTP status code 401 Unauthorized.

Important note:

  • Client and server should generate the hash (signature) using the same hashing algorithm and adhere to the same parameter order; any slight change, including case sensitivity, when implementing the hashing will result in a totally different signature and all requests from the client to the server will get rejected. So be consistent and agree on how to generate the signature up front and in a clear way.
  • This mechanism of authentication can work without TLS (HTTPS), as long as the client is not transferring any confidential data or transmitting the API Key. It is recommended to consume it over TLS, but if you can’t use TLS for any other reason you will be fine transmitting data over HTTP.

Sounds complicated? Right? Let’s jump to the implementation to make this clear.

I’ll start by showing how to generate the APP Id and a strong 256-bit key which will act as our API Key. This usually will be done on the server and provided to the client using a secure mechanism (a secure admin portal). There is a nice post here that explains why generating APP Ids and API Keys is more secure than issuing usernames and passwords.

Then I’ll build a simple console application which will act as the client application; lastly I’ll build an HTTP service using ASP.NET Web API protected using HMAC Authentication with the right filter, “IAuthenticationFilter”. So let’s get started!

The source code for this tutorial is available on GitHub.

Section 1: Generating the Shared Private Key (API Key) and APP Id

As I stated before, this should be done on the server and provided to the client prior to actual use. We’ll use a symmetric key cryptographic algorithm to issue a 256-bit key; the code will be as below:

using (var cryptoProvider = new RNGCryptoServiceProvider())
{
    byte[] secretKeyByteArray = new byte[32]; //256 bit
    cryptoProvider.GetBytes(secretKeyByteArray);
    var APIKey = Convert.ToBase64String(secretKeyByteArray);
}

And for the APP Id you can generate a GUID. For this tutorial let’s assume that our APPId is: 4d53bce03ec34c0a911182d4c228ee6c, our generated APIKey is: A93reRTUJHsCuQSHR+L3GxqOJyDmQpCgps102ciuabc=, and that our client application has received those 2 pieces of information using a secure channel.
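
For the APP Id itself, a single line is enough:

var APPId = Guid.NewGuid().ToString("N"); // 32 hex characters, e.g. 4d53bce03ec34c0a911182d4c228ee6c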

Section 2: Building the Client Application

Step 1: Install Nuget Package
Add a new empty solution named “WebApiHMACAuthentication”, then add a new console application named “HMACAuthentication.Client”, then install the NuGet package below, which helps us issue HTTP requests.

Install-Package Microsoft.AspNet.WebApi.Client -Version 5.2.2

Step 2: Add POCO Model
We’ll issue an HTTP POST request in order to demonstrate how we can include the request body in the signature, so we’ll add a simple model named “Order”. Add a new class named “Order” and paste the code below:

public class Order
    {
        public int OrderID { get; set; }
        public string CustomerName { get; set; }
        public string ShipperCity { get; set; }
        public Boolean IsShipped { get; set; }
    }

Step 3: Call the back-end API using HTTPClient
Now we’ll use the HTTPClient library installed earlier to issue an HTTP POST request to the API we’ll build in the next section, so open the file “Program.cs” and paste the code below:

static void Main(string[] args)
        {
            RunAsync().Wait();
        }

        static async Task RunAsync()
        {

            Console.WriteLine("Calling the back-end API");

            string apiBaseAddress = "http://localhost:43326/";

            CustomDelegatingHandler customDelegatingHandler = new CustomDelegatingHandler();

            HttpClient client = HttpClientFactory.Create(customDelegatingHandler);

            var order = new Order { OrderID = 10248, CustomerName = "Taiseer Joudeh", ShipperCity = "Amman", IsShipped = true };

            HttpResponseMessage response = await client.PostAsJsonAsync(apiBaseAddress + "api/orders", order);

            if (response.IsSuccessStatusCode)
            {
                string responseString = await response.Content.ReadAsStringAsync();
                Console.WriteLine(responseString);
                Console.WriteLine("HTTP Status: {0}, Reason {1}. Press ENTER to exit", response.StatusCode, response.ReasonPhrase);
            }
            else
            {
                Console.WriteLine("Failed to call the API. HTTP Status: {0}, Reason {1}", response.StatusCode, response.ReasonPhrase);
            }

            Console.ReadLine();
        }

What we implemented here is basic: we are just issuing an HTTP POST to the endpoint “/api/orders” including a serialized order object. This endpoint is protected using HMAC Authentication (more about this later in the post), and if the response status returned is 200 OK then we print the response.

What’s worth noting here is that I’m using a custom delegating handler named “CustomDelegatingHandler”. This handler will help us intercept the request before it is sent so we can do the signing process and create the signature there.

Step 4: Implement the HTTPClient Custom Handler
HTTPClient allows us to create a custom message handler which gets created and added to the request message handlers chain. The nice thing here is that this handler allows us to write our custom logic (the logic needed to build the hash and set it in the Authorization header before firing the request to the back-end API), so in the same file “Program.cs” add a new class named “CustomDelegatingHandler” and paste the code below:

public class CustomDelegatingHandler : DelegatingHandler
        {
            //Obtained from the server earlier, APIKey MUST be stored securely and in App.Config
            private string APPId = "4d53bce03ec34c0a911182d4c228ee6c";
            private string APIKey = "A93reRTUJHsCuQSHR+L3GxqOJyDmQpCgps102ciuabc=";

            protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
            {

                HttpResponseMessage response = null;
                string requestContentBase64String = string.Empty;

                string requestUri = System.Web.HttpUtility.UrlEncode(request.RequestUri.AbsoluteUri.ToLower());

                string requestHttpMethod = request.Method.Method;

                //Calculate UNIX time
                DateTime epochStart = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);
                TimeSpan timeSpan = DateTime.UtcNow - epochStart;
                string requestTimeStamp = Convert.ToUInt64(timeSpan.TotalSeconds).ToString();

                //create random nonce for each request
                string nonce = Guid.NewGuid().ToString("N");

                //Checking if the request contains a body, it will usually be null with HTTP GET and DELETE
                if (request.Content != null)
                {
                    byte[] content = await request.Content.ReadAsByteArrayAsync();
                    MD5 md5 = MD5.Create();
                    //Hashing the request body, any change in the request body will result in a different hash, ensuring message integrity
                    byte[] requestContentHash = md5.ComputeHash(content);
                    requestContentBase64String = Convert.ToBase64String(requestContentHash);
                }

                //Creating the raw signature string
                string signatureRawData = String.Format("{0}{1}{2}{3}{4}{5}", APPId, requestHttpMethod, requestUri, requestTimeStamp, nonce, requestContentBase64String);

                var secretKeyByteArray = Convert.FromBase64String(APIKey);

                byte[] signature = Encoding.UTF8.GetBytes(signatureRawData);

                using (HMACSHA256 hmac = new HMACSHA256(secretKeyByteArray))
                {
                    byte[] signatureBytes = hmac.ComputeHash(signature);
                    string requestSignatureBase64String = Convert.ToBase64String(signatureBytes);
                    //Setting the values in the Authorization header using custom scheme (amx)
                    request.Headers.Authorization = new AuthenticationHeaderValue("amx", string.Format("{0}:{1}:{2}:{3}", APPId, requestSignatureBase64String, nonce, requestTimeStamp));
                }

                response = await base.SendAsync(request, cancellationToken);

                return response;
            }
        }

What we’ve implemented above is the following:

  • We’ve hard coded the APP Id and API Key values obtained earlier from the server, usually you need to store those values securely in app.config.
  • We’ve got the full request URI and safely URL encoded it, so in case there are query strings sent with the request they will be safely encoded; we’ve also read the HTTP method used, in our case POST.
  • We’ve calculated the time stamp for the request using UNIX time (number of seconds since Jan. 1st 1970). This will help us avoid any issues that might happen if the client and the server reside in two different time zones.
  • We’ve generated a random nonce for this request, the client should adhere to this and should send a random string per method call.
  • We’ve checked if the request contains a body (it will contain one if the request is of type HTTP POST or PUT); if it does, we MD5 hash the body content then Base64 encode the resulting array. We are doing this to ensure the integrity of the request and to make sure no one tampered with it during transmission (in case of transmitting it over HTTP).
  • We’ve built the signature raw data by concatenating the parameters (APPId, requestHttpMethod, requestUri, requestTimeStamp, nonce, requestContentBase64String) without any delimiters, this data will get hashed using HMACSHA256 algorithm.
  • Lastly we’ve applied the hashing algorithm using the API Key then base64 the result and combined the (APPId:requestSignatureBase64String:nonce:requestTimeStamp) using ‘:’ colon delimiter and set this combined string in the Authorization header for the request using a custom scheme named “amx”. Notice that the nonce and time stamp are included in creating the request signature as well they are sent as plain text values so they can be validated on the server to protect our API from replay attacks.

We are done with the client part; now let’s move to building the Web API which will be protected using HMAC Authentication.

Section 3: Building the back-end API

Step 1: Add the Web API Project
Add a new web application project named “HMACAuthentication.WebApi” to our existing solution “WebApiHMACAuthentication”; the template for the API will be as in the image below (Web API core dependency checked), or you can use OWIN as we did in previous tutorials:

WebApiProject

Step 2: Add Orders Controller
We’ll add a simple controller named “Orders” with 2 simple HTTP methods, and we’ll also add the same model “Order” we already added in the client application. Add a new class named “OrdersController” and paste the code below; nothing special here, just a basic Web API controller which is not protected and allows anonymous calls (we’ll protect it later in the post).

[RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
        [Route("")]
        public IHttpActionResult Get()
        {
            ClaimsPrincipal principal = Request.GetRequestContext().Principal as ClaimsPrincipal;

            var Name = ClaimsPrincipal.Current.Identity.Name;

            return Ok(Order.CreateOrders());
        }

        [Route("")]
        public IHttpActionResult Post(Order order)
        {
            return Ok(order);
        }

    }

    #region Helpers

    public class Order
    {
        public int OrderID { get; set; }
        public string CustomerName { get; set; }
        public string ShipperCity { get; set; }
        public Boolean IsShipped { get; set; }


        public static List<Order> CreateOrders()
        {
            List<Order> OrderList = new List<Order> 
            {
                new Order {OrderID = 10248, CustomerName = "Taiseer Joudeh", ShipperCity = "Amman", IsShipped = true },
                new Order {OrderID = 10249, CustomerName = "Ahmad Hasan", ShipperCity = "Dubai", IsShipped = false},
                new Order {OrderID = 10250,CustomerName = "Tamer Yaser", ShipperCity = "Jeddah", IsShipped = false },
                new Order {OrderID = 10251,CustomerName = "Lina Majed", ShipperCity = "Abu Dhabi", IsShipped = false},
                new Order {OrderID = 10252,CustomerName = "Yasmeen Rami", ShipperCity = "Kuwait", IsShipped = true}
            };

            return OrderList;
        }
    }

    #endregion

Step 3: Build the HMAC Authentication Filter
We’ll add all the logic responsible for re-generating the signature on the Web API and comparing it with the signature received from the client in an authentication filter. The authentication filter is available in Web API 2 and it should be used for any authentication purposes; in our case we will use this filter to write our custom logic which validates the authenticity of the signature received from the client. The nice thing about this filter is that it runs before any other filters, especially the authorization filter. I’ll borrow the image below from a great article about ASP.NET Web API Security Filters by Badrinarayanan Lakshmiraghavan to give you a better understanding of where the authentication filter resides.

ASP.NET Web API Security Filters
Now add a new folder named “Filters”, then add a new class named “HMACAuthenticationAttribute” which inherits from “Attribute” and implements the interface “IAuthenticationFilter”, then paste the code below:

public class HMACAuthenticationAttribute : Attribute, IAuthenticationFilter
    {
        private static Dictionary<string, string> allowedApps = new Dictionary<string, string>();
        private readonly UInt64 requestMaxAgeInSeconds = 300;  //5 mins
        private readonly string authenticationScheme = "amx";

        public HMACAuthenticationAttribute()
        {
            if (allowedApps.Count == 0)
            {
                allowedApps.Add("4d53bce03ec34c0a911182d4c228ee6c", "A93reRTUJHsCuQSHR+L3GxqOJyDmQpCgps102ciuabc=");
            }
        }

        public Task AuthenticateAsync(HttpAuthenticationContext context, CancellationToken cancellationToken)
        {
            var req = context.Request;

            if (req.Headers.Authorization != null && authenticationScheme.Equals(req.Headers.Authorization.Scheme, StringComparison.OrdinalIgnoreCase))
            {
                var rawAuthzHeader = req.Headers.Authorization.Parameter;

                var autherizationHeaderArray = GetAutherizationHeaderValues(rawAuthzHeader);

                if (autherizationHeaderArray != null)
                {
                    var APPId = autherizationHeaderArray[0];
                    var incomingBase64Signature = autherizationHeaderArray[1];
                    var nonce = autherizationHeaderArray[2];
                    var requestTimeStamp = autherizationHeaderArray[3];

                    var isValid = isValidRequest(req, APPId, incomingBase64Signature, nonce, requestTimeStamp);

                    if (isValid.Result)
                    {
                        var currentPrincipal = new GenericPrincipal(new GenericIdentity(APPId), null);
                        context.Principal = currentPrincipal;
                    }
                    else
                    {
                        context.ErrorResult = new UnauthorizedResult(new AuthenticationHeaderValue[0], context.Request);
                    }
                }
                else
                {
                    context.ErrorResult = new UnauthorizedResult(new AuthenticationHeaderValue[0], context.Request);
                }
            }
            else
            {
                context.ErrorResult = new UnauthorizedResult(new AuthenticationHeaderValue[0], context.Request);
            }

            return Task.FromResult(0);
        }

        public Task ChallengeAsync(HttpAuthenticationChallengeContext context, CancellationToken cancellationToken)
        {
            context.Result = new ResultWithChallenge(context.Result);
            return Task.FromResult(0);
        }

        public bool AllowMultiple
        {
            get { return false; }
        }

        private string[] GetAutherizationHeaderValues(string rawAuthzHeader)
        {

            var credArray = rawAuthzHeader.Split(':');

            if (credArray.Length == 4)
            {
                return credArray;
            }
            else
            {
                return null;
            }

        }
}

Basically what we’ve implemented is the following:

  • The class “HMACAuthenticationAttribute” derives from “Attribute” class so we can use it as filter attribute over our controllers or HTTP action methods.
  • The constructor for the class currently fills a dictionary named “allowedApps”; this is for the demo only, usually you would store the APP Id and API Key in a database along with other information about this client.
  • The method “AuthenticateAsync” is used to implement the core authentication logic of validating the incoming signature in the request.
  • We make sure that the Authorization header is not empty and that it contains the scheme “amx”, then we read the Authorization header value and split its content based on the delimiter we specified earlier in the client, ‘:’.
  • Lastly we call the method “isValidRequest” where all the magic of reconstructing the signature and comparing it with the incoming signature happens. More about implementing this in step 5.
  • In case the Authorization header is incorrect or the result of executing the method “isValidRequest” is false, we’ll consider the incoming request unauthorized and we should return an authentication challenge in the response; this is implemented in the method “ChallengeAsync”. To do so, let’s implement the next step.

Step 4: Add authentication challenge to the response
To add an authentication challenge to the unauthorized response, copy and paste the code below into the same file “HMACAuthenticationAttribute.cs”; basically we’ll add a “WWW-Authenticate” header to the response using our “amx” custom scheme. You can read more about the details of this implementation here.

public class ResultWithChallenge : IHttpActionResult
    {
        private readonly string authenticationScheme = "amx";
        private readonly IHttpActionResult next;

        public ResultWithChallenge(IHttpActionResult next)
        {
            this.next = next;
        }

        public async Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
        {
            var response = await next.ExecuteAsync(cancellationToken);

            if (response.StatusCode == HttpStatusCode.Unauthorized)
            {
                response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue(authenticationScheme));
            }

            return response;
        }
    }

Step 5: Implement the method “isValidRequest”.
The core implementation of reconstructing the request parameters and generating the signature on the server happens here, so let’s add the code and then I’ll describe what this method is responsible for. Open the file “HMACAuthenticationAttribute.cs” again and paste the code below into the class “HMACAuthenticationAttribute”:

private async Task<bool> isValidRequest(HttpRequestMessage req, string APPId, string incomingBase64Signature, string nonce, string requestTimeStamp)
        {
            string requestContentBase64String = "";
            string requestUri = HttpUtility.UrlEncode(req.RequestUri.AbsoluteUri.ToLower());
            string requestHttpMethod = req.Method.Method;

            if (!allowedApps.ContainsKey(APPId))
            {
                return false;
            }

            var sharedKey = allowedApps[APPId];

            if (isReplayRequest(nonce, requestTimeStamp))
            {
                return false;
            }

            byte[] hash = await ComputeHash(req.Content);

            if (hash != null)
            {
                requestContentBase64String = Convert.ToBase64String(hash);
            }

            string data = String.Format("{0}{1}{2}{3}{4}{5}", APPId, requestHttpMethod, requestUri, requestTimeStamp, nonce, requestContentBase64String);

            var secretKeyBytes = Convert.FromBase64String(sharedKey);

            byte[] signature = Encoding.UTF8.GetBytes(data);

            using (HMACSHA256 hmac = new HMACSHA256(secretKeyBytes))
            {
                byte[] signatureBytes = hmac.ComputeHash(signature);

                return (incomingBase64Signature.Equals(Convert.ToBase64String(signatureBytes), StringComparison.Ordinal));
            }

        }

        private bool isReplayRequest(string nonce, string requestTimeStamp)
        {
            if (System.Runtime.Caching.MemoryCache.Default.Contains(nonce))
            {
                return true;
            }

            DateTime epochStart = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);
            TimeSpan currentTs = DateTime.UtcNow - epochStart;

            var serverTotalSeconds = Convert.ToUInt64(currentTs.TotalSeconds);
            var requestTotalSeconds = Convert.ToUInt64(requestTimeStamp);

            if ((serverTotalSeconds - requestTotalSeconds) > requestMaxAgeInSeconds)
            {
                return true;
            }

            System.Runtime.Caching.MemoryCache.Default.Add(nonce, requestTimeStamp, DateTimeOffset.UtcNow.AddSeconds(requestMaxAgeInSeconds));

            return false;
        }

        private static async Task<byte[]> ComputeHash(HttpContent httpContent)
        {
            using (MD5 md5 = MD5.Create())
            {
                byte[] hash = null;
                var content = await httpContent.ReadAsByteArrayAsync();
                if (content.Length != 0)
                {
                    hash = md5.ComputeHash(content);
                }
                return hash;
            }
        }

What we’ve implemented here is the below:

  • We’ve validated that public APPId received is registered in our system, if it is not we’ll return false and will return unauthorized response.
  • We’ve checked if the request received is a replay request, this means that checking if the nonce received by the client is used before, currently I’m storing all the nonce received by the client in Cache Memory for 5 minutes only, so for example if the client generated a nonce “abc1234″ and send it with a request, the server will check if this nonce is used before, if not it will store the nonce for 5 minutes, so any request coming with same nonce during the 5 minutes window will consider a replay attack, if the same nonce “abc1234″ is used after 5 minutes then this is fine and the request is not considered a replay attack.
  • But there might be an evil person that might try to re-post the same request using the same nonce after the 5 minutes window, so the request time stamp becomes handy here, the implementation is comparing the current server UNIX time with the request UNIX time from the client, if the request age is older than 5 minutes too then it is rejected and the the evil person has no possibility to fake the request time stamp and send fresher one because we’ve already included the request time stamp in the signature raw data, so any change on it will result into new signature and it will not match the client incoming signature.
  • Note: If your API is published on different nodes on web farm, then you can store those nonce using Microsoft Azure Cache or Redis server, do not store them in DB because you need fast rad access.
  • Last step is we’ve implemented is to md5 hash the request body content if it is available (POST, PUT methods), then we’ve built the signature raw data by concatenating the parameters (APPId, requestHttpMethod, requestUri, requestTimeStamp, nonce, requestContentBase64String) without any delimiters. It is a MUST that both parties use the same data format to produce the same signature, the data eventually will get hashed using the same hashing algorithm and API Key used by the client. If the incoming client signature equals the signature generated on the server then we’ll consider this request authentic and will process it.

Step 6: Secure the API End Points:
The final thing to do here is to decorate the protected end points or controllers with this new authentication filter attribute, so open the "Orders" controller and add the "HMACAuthentication" attribute as in the code below:

[HMACAuthentication]
    [RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
     //Controller implementation goes here
    }

Conclusion:

  • In my opinion HMAC authentication is more complicated than OAuth 2.0, but in some situations you need to use it, especially if you can't use TLS, or when you are building an HTTP service that will be consumed by terminals or devices where storing the API Key is acceptable.
  • I've read that OAuth 1.0a is very similar to this approach. I'm not an expert on that protocol and I'm not trying to reinvent the wheel; I wanted to build this without the use of any external library. So for anyone reading this who has experience with OAuth 1.0a, please drop me a comment describing the differences/similarities between this approach and OAuth 1.0a.

That’s all for now folks! Please drop me a comment if you have a better way of implementing this, or if you spotted something that could be done in a better way.

The source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh

The post Secure ASP.NET Web API using API Key Authentication – HMAC Authentication appeared first on Bit of Technology.


Dominick Baier: The Future of AuthorizationServer

Now that IdentityServer v3 is almost done, it makes sense to “deprecate” some of the older projects. Especially all of the functionality of AuthorizationServer is completely replaced by the IdSrv3 feature set.

AuthorizationServer is actually a pretty small and compact code base, and a relatively complete implementation of OAuth2 including a simple authorization model based on clients, applications and scopes. Also there are no major bugs (that we know about) or feature gaps.

IOW – if you want to use AS, simply make it part of your own code base and feel free to change it at will. Check the wiki for documentation.

If somebody wants to take over the project, contact me.


Filed under: ASP.NET, AuthorizationServer, OAuth, WebAPI


Darrel Miller: Constructing URLs the easy way

When building client applications that need to connect to an HTTP API, sooner or later you are going to get involved in constructing a URL based on an API root and some parameters.  Often enough when looking at client libraries I see lots of ugly string concatenation and conditional logic to account for empty parameter values and trailing slashes.  And then there is the issue of encoding.  Several years ago an IETF specification (RFC 6570) was released that describes a templating system for URLs, and I created a library that implements the specification.  Here is how you can use it to make constructing even the craziest URLs as easy as pie.

templates

Path Parameters

The simplest example is where you have a base URI and you need to update a parameter in the URL path segment,

[Fact]
public void UpdatePathParameter()
{
    var url = new UriTemplate("http://example.org/{tenant}/customers")
        .AddParameter("tenant", "acmé")
        .Resolve();

    Assert.Equal("http://example.org/acm%C3%A9/customers", url);
}

This is a really trivial case that could mostly be handled with a string replace.  However, a string replace wouldn’t take care of percent-encoding delimiters and unicode characters in the parameter value.

Under the covers there is a UriTemplate class that can have parameters added to it, and a Resolve method.  I have created a simple fluent interface using extension methods to make it convenient to quickly create and resolve a template.

Query Parameters

A slightly more complex example would be adding a query string parameter.

[Fact]
public void QueryParametersTheOldWay()
{
    var url = new UriTemplate("http://example.org/customers?active={activeflag}")
        .AddParameter("activeflag", "true")
        .Resolve();

    Assert.Equal("http://example.org/customers?active=true",url); 
}

This style of template can be problematic when there are optional query parameters and a parameter does not have a value.  A better way of defining query parameters is like this,

[Fact]
public void QueryParametersTheNewWay()
{
    var url = new UriTemplate("http://example.org/customers{?active}")
        .AddParameter("active", "true")
        .Resolve();

    Assert.Equal("http://example.org/customers?active=true", url);
}

When you don't want to provide any value at all, the template parameter will be removed.

[Fact]
public void QueryParametersTheNewWayWithoutValue()
{

    var url = new UriTemplate("http://example.org/customers{?active}")
        .AddParameters(null)
        .Resolve();

    Assert.Equal("http://example.org/customers", url);
}

In this last example I used a slightly different extension method that takes a single object and uses its properties as key-value pairs.  This makes it easy to set multiple parameters.

[Fact]
public void ParametersFromAnObject()
{
    var url = new UriTemplate("http://example.org/{environment}/{version}/customers{?active,country}")
        .AddParameters(new
        {
            environment = "dev",
            version = "v2",
            active = "true",
            country = "CA"
        })
        .Resolve();

    Assert.Equal("http://example.org/dev/v2/customers?active=true&country=CA", url);
}

Lists and Dictionaries

Where URI Templates really start to shine, as compared to simple string replaces and concatenation, is when you start to use lists and dictionaries as parameter values.

In the next example we use a list of id values, stored in an array, to specify a query parameter value.

[Fact]
public void ApplyParametersObjectWithAListofInts()
{
    var url = new UriTemplate("http://example.org/customers{?ids,order}")
        .AddParameters(new
        {
            order = "up",
            ids = new[] {21, 75, 21}
        })
        .Resolve();

    Assert.Equal("http://example.org/customers?ids=21,75,21&order=up", url);
}

We can use dictionaries to define both the query parameter name and value,

[Fact]
public void ApplyDictionaryToQueryParameters()
{
    var url = new UriTemplate("http://example.org/foo{?coords*}")
        .AddParameter("coords", new Dictionary<string, string>
        {
            {"x", "1"},
            {"y", "2"},
        })
        .Resolve();

    Assert.Equal("http://example.org/foo?x=1&y=2", url);
}

We can also use lists to define a set of path segments.

[Fact]
public void ApplyFoldersToPathFromStringNotUrl()
{

    var url = new UriTemplate("http://example.org{/folders*}{?filename}")
        .AddParameters(new
        {
            folders = new[] { "files", "customer", "project" },
            filename = "proposal.pdf"
        })
        .Resolve();

    Assert.Equal("http://example.org/files/customer/project?filename=proposal.pdf", url);
}

Parameters can be anywhere

Parameters are not limited to path segments and query parameters.  You can also put parameters in the host name.

[Fact]
public void ParametersFromAnObjectFromInvalidUrl()
{

    var url = new UriTemplate("http://{environment}.example.org/{version}/customers{?active,country}")
    .AddParameters(new
    {
        environment = "dev",
        version = "v2",
        active = "true",
        country = "CA"
    })
    .Resolve();

    Assert.Equal("http://dev.example.org/v2/customers?active=true&country=CA", url);
}

You can even replace the entire base URL.

[Fact]
public void ReplaceBaseAddress()
{

    var url = new UriTemplate("{+baseUrl}api/customer/{id}")
        .AddParameters(new
        {
            baseUrl = "http://example.org/",
            id = "22"
        })
        .Resolve();

    Assert.Equal("http://example.org/api/customer/22", url);
}

However, by default the URI template resolver will escape all delimiter characters in parameter values, so the slashes in the base address would come out percent-encoded.  By adding the + operator to the front of the baseUrl parameter we instruct the resolution algorithm not to escape characters in the parameter value.

Partial Resolution

A recently added feature of the library is the ability to resolve only the parameters that have been supplied and leave the other parameters untouched.  This is useful when you want to resolve base address and version parameters on application startup, but want to add other parameters later.

[Fact]
public void PartiallyParametersFromAnObjectFromInvalidUrl()
{

    var url = new UriTemplate("http://{environment}.example.org/{version}/customers{?active,country}",resolvePartially:true)
    .AddParameters(new
    {
        environment = "dev",
        version = "v2"
    })
    .Resolve();

    Assert.Equal("http://dev.example.org/v2/customers{?active,country}", url);
}

And there is so much more…

The URI Template specification contains many more syntax options that I have not covered.  Many of them you may never use.  However, it is nice to know that if you ever run into some API that uses some strange formatting, there is a reasonable chance that URI templates can support it.

Although the templating language is fairly sophisticated, it was specifically designed to be fast to process.  The resolution algorithm can be performed by walking the template characters just once and performing substitutions along the way.

Where to find it

All the source code for the project can be found on GitHub and there is a NuGet package available.  The library is built to support .NET 3.5 and .NET 4.5, and there is a portable version that supports Windows Phone 8 and 8.1, WinRT, and Mono on Android and iOS.

Image Credit: Templates https://flic.kr/p/5Xc9sq


Taiseer Joudeh: AngularJS Authentication Using Azure Active Directory Authentication Library (ADAL)

In my previous post Secure ASP.NET Web API 2 using Azure Active Directory I’ve covered how to protect Web API end points using bearer tokens issued by Azure Active Directory, and how to build a desktop application which acts as a Client. This Client gets the access token from the Authorization Server (Azure Active Directory), then uses this bearer access token to call a protected resource that exists in our Resource Server (ASP.NET Web API).

Azure Active Directory Web Api

Initially I was looking to build the client application using AngularJS (SPA), but I failed to do so because, at the time of writing the previous post, the Azure Active Directory Authentication Library (ADAL) didn’t support the OAuth 2.0 Implicit Grant, which is the right OAuth grant to use when building applications that run in browsers.

The live AngularJS demo application is hosted on Azure (User: ADAL@taiseerjoudeharamex.onmicrosoft.com / Pass: AngularJS!!), and the source code for this tutorial is on GitHub.

So I had a discussion with Vittorio Bertocci and Attila Hajdrik on Twitter about this limitation in ADAL, and Vittorio promised that this feature was coming soon. And yes, ADAL now supports the OAuth 2.0 Implicit Grant, and integrating it with your AngularJS app is very simple. I recommend you watch Vittorio’s video introduction on Channel 9 before digging into this tutorial.

AngularJS Authentication Using Azure Active Directory Authentication Library (ADAL)

What is OAuth 2.0 Implicit Grant?

In simple words, the implicit grant is optimized for public clients (which cannot store secrets); those clients are built using JavaScript and run in browsers. There is no client authentication happening here, and the only factors that need to be presented to obtain an access token are the resource owner’s credentials and a pre-registered redirection URI with the Authorization Server. This redirect URI will be used to receive the access token issued by the Authorization Server in the form of a URI fragment.

What we’ll build in this tutorial?

In this tutorial we’ll build a simple SPA application using AngularJS along with ADAL for JS, which provides a very comprehensive abstraction of the Implicit Grant we described earlier. This SPA will communicate with a protected Resource Server (Web API) to get a list of orders, and will request the access token from our Authorization Server (Azure Active Directory).

In order to follow along with this tutorial I’ve created a sample skeleton project which contains the basic code needed to run our AngularJS application and the Web API, without any of the security features added. I recommend downloading it first so you can follow along with the steps below.

The NuGet packages I used in this project are:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Security.ActiveDirectory -Version 3.0.0

Step 1: Register the Web API into Azure Active Directory

Open the Azure Management Portal to register our Web API as an application in our Azure Active Directory. After you successfully log in to the portal, click on “Active Directory” in the left hand navigation menu, choose the Active Directory tenant you want to register your Web API with, select the “Applications” tab, then click on the add icon at the bottom of the page. Once the modal window shows as in the image below, select “Add an application my organization is developing”.
Azure New App
Then a two-step wizard will show up asking you to select the type of app you want to add. In our case we are adding a Web API, so select “Web Application and/or Web API”, then provide a name for the application (in my case I’ll call it “AngularJSAuthADAL”) and click next.
Azure App Name
In the second step, as in the image below, we need to fill in two things: the Sign-On URL, which will usually be the base URL for your Web API, in my case "http://localhost:10966"; and the APP ID URI, which will usually be a URI that Azure AD can use for this app. It usually takes the form of "http://<your_AD_tenant_name>/<your_app_friendly_name>", so we will replace this with the correct values for my app and fill it in as "http://taiseerjoudeharamex.onmicrosoft.com/AngularJSAuthADAL", then click OK.

Azure App Properties

Step 2: Enable Implicit Grant for the Application

After our Web API has been added to the Azure Active Directory apps, we need to enable the implicit grant. To do so we need to change our Web API configuration using the application manifest. Basically the application manifest is a JSON file that represents our application identity configuration.

As in the image below, after you navigate to the app we’ve added, click on the “Manage Manifest” icon at the bottom of the page, then click on “Download Manifest”.

Download Manifest

Open the downloaded JSON application manifest file and change the value of the “oauth2AllowImplicitFlow” node to “true”. Notice also how the “replyUrls” array contains the URL we want the token response returned to. You can read more about Web API configuration here.

After we apply this change, save the application manifest file locally, then upload it again to your app using the “Upload Manifest” feature.

Step 3: Configure Web API to Accept Bearer Tokens Issued by Azure AD

Now, if you have downloaded the skeleton project, right click on it and click on build to download the needed NuGet packages. If you want, you can run the application now: a nice SPA will show up and the orders will be displayed as in the image below, because we didn’t configure the security part yet.

AngularJS Adal

Now open file “Startup.cs” and paste the code below:

public void ConfigureOAuth(IAppBuilder app)
        {
            app.UseWindowsAzureActiveDirectoryBearerAuthentication(
               new WindowsAzureActiveDirectoryBearerAuthenticationOptions
               {
                   Audience = ConfigurationManager.AppSettings["ida:ClientID"],
                   Tenant = ConfigurationManager.AppSettings["ida:Tenant"]
               });
        }

Basically what we’ve implemented here is simple: we’ve configured the Web API authentication middleware to use “Windows Azure Active Directory Bearer Tokens” for the specified Active Directory “Tenant” and “Audience” (Client Id). Now any API controller that lives in this API and is attributed with the [Authorize] attribute will only accept bearer tokens issued by this specified Active Directory tenant; any other form of token will be rejected.
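For completeness, here is a sketch of the OWIN startup wiring that invokes this method. This is an assumption about the skeleton project's standard template code, not something introduced in this step:

// Assumed standard OWIN startup; the skeleton most likely already contains the equivalent.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        HttpConfiguration config = new HttpConfiguration();

        // Register the Azure AD bearer token middleware before the Web API pipeline.
        ConfigureOAuth(app);

        WebApiConfig.Register(config);
        app.UseWebApi(config);
    }

    // ConfigureOAuth(IAppBuilder app) as shown above.
}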

It is a good practice to store the values for your Audience, Tenant, Secrets, etc. in a configuration file and not to hard-code them, so open the web.config file and add the 2 new “appSettings” in the snippet below. The value for the Client Id can be read from your Azure app settings.

<appSettings>
    <add key="ida:Tenant" value="taiseerjoudeharamex.onmicrosoft.com" />
    <add key="ida:ClientID" value="1725911b-ad8f-4295-8258-cf95ba9f7ea6" />
  </appSettings>

Step 4: Protect Orders Controller

Now open the file “OrdersController” and decorate it with the [Authorize] attribute. By doing this, any GET request to the path “http://localhost:port/api/orders” will return status code 401 if no token is provided in the Authorization header. Once you apply this, the orders view won’t return any data until we obtain a valid token. The Orders controller code will look like the snippet below:

[Authorize]
[RoutePrefix("api/orders")]
public class OrdersController : ApiController
{
	//Rest of code is here
}

Step 5: Download ADAL JavaScript Library

Now it is time to download and use the ADAL JS library which facilitates AngularJS authentication using the Implicit grant. After you download the file, open the page “Index.html” and add a reference to it at the end of the file as in the code below:

<!-- 3rd party libraries -->
 <!-- Other JS references here -->
 <script src="scripts/adal.js"></script>

Step 6: Configure our AngularJS bootstrap file (app.js)

Now open the file “app.js” where we’ll inject the ADAL dependencies into our AngularJS module “AngularAuthApp”. After you open the file, change the code in it as below:

var app = angular.module('AngularAuthApp', ['ngRoute', 'AdalAngular']);

app.config(['$routeProvider', '$httpProvider', 'adalAuthenticationServiceProvider', function ($routeProvider, $httpProvider, adalAuthenticationServiceProvider) {

    $routeProvider.when("/home", {
        controller: "homeController",
        templateUrl: "/app/views/home.html"
    });

    $routeProvider.when("/orders", {
        controller: "ordersController",
        templateUrl: "/app/views/orders.html",
        requireADLogin: true
    });

    $routeProvider.when("/userclaims", {
        templateUrl: "/app/views/userclaims.html",
        requireADLogin: true
    });

    $routeProvider.otherwise({ redirectTo: "/home" });

    adalAuthenticationServiceProvider.init(
      {
          tenant: 'taiseerjoudeharamex.onmicrosoft.com',
          clientId: '1725911b-ad8f-4295-8258-cf95ba9f7ea6'
      }, $httpProvider);

}]);
var serviceBase = 'http://localhost:10966/';
app.constant('ngAuthSettings', {
    apiServiceBaseUri: serviceBase
});

What we’ve implemented is the below:

  • We’ve injected the “AdalAngular” to our module “AngularAuthApp”.
  • The service “adalAuthenticationServiceProvider” is now available and injected into our App configuration.
  • We’ve set the property “requireADLogin” to “true” for any partial view that requires authentication. By doing this, if the user requests a protected view, a redirection to the Azure AD tenant will take place and the user (resource owner) will be able to enter his AD credentials to authenticate.
  • Lastly, we’ve set the “tenant” and “clientId” values related to our AD Application.

Step 7: Add explicit Login and Logout feature to the SPA

The ADAL JS library provides us with an explicit way to log the user in or out by calling the “login” function. To add this, open the file named “indexController.js” and replace the existing code with the code below:

'use strict';
app.controller('indexController', ['$scope', 'adalAuthenticationService', '$location', function ($scope, adalAuthenticationService, $location) {

    $scope.logOut = function () {
      
        adalAuthenticationService.logOut();
    }

    $scope.LogIn = function () {

        adalAuthenticationService.login();
    }

}]);

Step 8: Hide/Show links based on Authentication Status

For a better user experience it will be nice if we hide the login link from the top menu when the user is already logged in, and hide the logout link when the user is not logged in yet. To do so, open the file “index.html” and replace the menu items with the code snippet below:

<div class="collapse navbar-collapse" data-collapse="!navbarExpanded">
                <ul class="nav navbar-nav navbar-right">
                    <li data-ng-hide="!userInfo.isAuthenticated"><a href="#">Welcome {{userInfo.profile.given_name}}</a></li>
                    <li><a href="#/orders">View Orders</a></li>
                    <li data-ng-hide="!userInfo.isAuthenticated"><a href="" data-ng-click="logOut()">Logout</a></li>
                    <li data-ng-hide="userInfo.isAuthenticated"><a href="" data-ng-click="LogIn()">Login</a></li>
                </ul>
            </div>

Notice that we are depending on an object named “userInfo” to read the property “isAuthenticated”. This object is set on the “$rootScope” by the ADAL JS library, so it is available globally in our AngularJS app.

By completing this step, if we run the application and request the “orders” view or click on the “login” link, we will be redirected to our Azure AD tenant to enter user credentials and get access to the protected view. There is a lot of abstraction happening here that needs to be clarified in the section below.

ADAL JavaScript Flow

  • Once the anonymous user requests a protected view (a view marked with requireADLogin = true) or hits the login link explicitly, ADAL will trigger the login directly as in the code highlighted here.
  • Login in the ADAL library means building a redirection URI to the configured Azure AD tenant we defined in the “init” method in the “app.js” file. The URI will contain a random “state” and “nonce” to uniquely identify each request and prevent replay requests; the value of “redirect_uri” is set to the current location of the page, which should match the value we defined when we registered the application; lastly, the “response_type” is set to “id_token” so the authorization server sends back a JWT token containing claims about the user. The redirection URI will look like the below, and the code for building it is highlighted here.

https://login.windows.net/taiseerjoudeharamex.onmicrosoft.com/oauth2/authorize?response_type=id_token&client_id=1725911b-ad8f-4295-8258-cf95ba9f7ea6&redirect_uri=http%3A%2F%2Fngadal.azurewebsites.net%2F&state=9cce69d3-af36-4eb4-a882-4e9b7080e90d&x-client-SKU=Js&x-client-Ver=0.0.3&nonce=d4313f89-dc4f-4590-b7fd-9bbc2b340759

  • Now the Azure AD login page will show up and the user will be prompted to enter his AD credentials as in the image below. Assuming he entered his credentials correctly, a redirection will take place to the following URI containing the token as part of the URI fragment, not the query string, along with the same state and nonce specified by ADAL earlier.

https://ngadal.azurewebsites.net/#/id_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImtyaU1QZG1Cdng2OHNrVDgtbVBBQjNCc2VlQSJ9.eyJhdWQiOiIxNzI1OTExYi1hZDhmLTQyOTUtODI1OC1jZjk1YmE5ZjdlYTYiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC8wODExZmIzMS05M2VkLTRmZWItYTAwOS1kNmUyN2RmYWMxN2IvIiwiaWF0IjoxNDE3NDgwMjc5LCJuYmYiOjE0MTc0ODAyNzksImV4cCI6MTQxNzQ4NDE3OSwidmVyIjoiMS4wIiwidGlkIjoiMDgxMWZiMzEtOTNlZC00ZmViLWEwMDktZDZlMjdkZmFjMTdiIiwiYW1yIjpbInB3ZCJdLCJvaWQiOiI2OTIyYTVlZi01YmMyLTRiYmEtYjI5Yy1kODc0YzQyYjg1OWQiLCJ1cG4iOiJIYW16YUB0YWlzZWVyam91ZGVoYXJhbWV4Lm9ubWljcm9zb2Z0LmNvbSIsInVuaXF1ZV9uYW1lIjoiSGFtemFAdGFpc2VlcmpvdWRlaGFyYW1leC5vbm1pY3Jvc29mdC5jb20iLCJzdWIiOiJxVWU4UUQ4SzdwOF9zTlI5WlhQYWcyOVFCeGZURlhKbTBaVzR0UDdYSEJNIiwiZmFtaWx5X25hbWUiOiJKb3VkZWgiLCJnaXZlbl9uYW1lIjoiSGFtemEiLCJub25jZSI6ImQ1NmEwNDJhLWQzM2YtNGYxYS05ZmYwLTFjMWE1ZDk3YzVmMSIsInB3ZF9leHAiOiI3NDMzNDg5IiwicHdkX3VybCI6Imh0dHBzOi8vcG9ydGFsLm1pY3Jvc29mdG9ubGluZS5jb20vQ2hhbmdlUGFzc3dvcmQuYXNweCJ9.efAp-95yMvhLu--8TfXYwozJsc09OTsB5bneH9bvGzko6uLZj0YloDTIrVtu_SU95hOBpFvma0FOeGmsqre6DBwaLTSJDD9wTYtqmoCGwpTy_cewpS78MJ9aR-IjWx5O6K8Nt90d4ujaco5T-o2EQ4ygPx5Z6vH-sLy8t9NDVER7HtlClhRwj2uDUF-kdihh7lv5w0U7TqHZUtLkBNL2l69yY5F0Jdj0q7m81gNps6nfqfa8aypgmztpPWDJAChvwsD5r58CyGVXPKSp_2CfK0kkWasP6fmLKKi5tGPvjg-wEb2j47UVIgO8v9xIkqg8RGqnZ1lboZKa2FlCs-Jnrw&state=6ac680d1-8b3b-452a-97d2-3deddb4016fc&session_state=2d48f965-e026-4c08-9d38-2cce49ee72cb

Azure AD Login

Note: You can extract the JSON Web Token manually and debug it using a JWT debugging tool to explore the claims inside it.

  • Now once the ADAL JS library receives this callback, it will extract the token from the hash fragment, try to decode the token to get the user profile and other claims (i.e. token expiry), then store the token along with the decoded claims in HTML5 local storage, so the token will not vanish if you close the browser or do a full refresh (F5) of your AngularJS application. The implementation is very similar to what we covered earlier in the AngularJS authentication post.
  • To be able to access the protected API end points, we need to send the token in the “Authorization” header using the bearer scheme. ADAL JS provides an AngularJS interceptor which is responsible for adding the Authorization header to each XHR request if there is a token stored in localStorage. This is very similar to the interceptor we added before in our AngularJS Authentication post.
  • If the user decides to “logout” from the system, ADAL JS clears all the stored data in HTML5 local storage, so no token is stored locally, then issues a redirect to the URI https://login.windows.net/{tenantid}/oauth2/logout which is responsible for clearing any session cookies created by the Azure AD tenant. But remember that logging out of the system does not invalidate your token; if you extracted the token manually it will remain valid until it expires, usually one hour after the issuing time.
  • ADAL JS provides a nice way to renew the token without using grant_type = refresh_token: it tries to renew the token silently using a hidden iframe which communicates with the Azure AD tenant, asking for a new token if there is a valid session established by the AD tenant.

The live AngularJS demo application is hosted on Azure (User: ADAL@taiseerjoudeharamex.onmicrosoft.com / Pass: AngularJS!!), and the source code for this tutorial is on GitHub.

Conclusion

The ADAL JS library really simplifies adding the OAuth 2.0 Implicit grant to your SPA. It is still a developer preview release, so testing it out and providing feedback will help enhance it.

If you have any comments or enhancements on this tutorial, please drop me a comment. Thanks for taking the time to read the post!

Follow me on Twitter @tjoudeh

The post AngularJS Authentication Using Azure Active Directory Authentication Library (ADAL) appeared first on Bit of Technology.


Ali Kheyrollahi: Health Endpoint in API Design: slippery slope that it is

Level [C3]

A Health Endpoint is a common practice in building APIs. Such an endpoint, unlike the other resources of a REST API, instead of achieving a business activity returns the status of the service, and while it can gather and return some data, it is the HTTP status that defines whether the service is "Up or Down". These endpoints commonly check a bunch of configurations and connectivity with the dependent services, and even make a few calls for a "Test Customer" to make sure business activity can be achieved.

There is something about the above that just doesn't feel right to me - and this post is an exercise in defining what I mean by that. I will explain what the problems with the Health API are, and I am going to suggest how to "fix" it.

What is the health of an API anyway? The server up and running and capable of returning the status 200? Server and all its dependencies running and returning 200? Server and all its dependencies running capable of returning 200 in a reasonable amount of time? API able to accomplish some business activity? Or API able to accomplish a certain activity for a test user? API able to accomplish all activities within reasonable time? API able to accomplish all activities with its 95% percentile falling within an agreed SLA?

A Service is a complex beast. While its complexity is nowhere near that of a living organism, it is useful to draw a parallel with one. I remember from my previous medical life that the definition of health - provided by none other than the WHO - goes like this:

"Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity."
In other words, defining the health of an organism is a complex and involved process requiring a deep understanding of the organism and how it functions. [Well, we are lucky that we are only dealing with distributed systems and their services (or MicroServices if you like) and not living organisms.] For services, instead of health, we define the Quality of Service as a quantitative measure of a service's health.

Quality of Service is normally a bunch of orthogonal SLAs, each defining a measurement for one aspect of the service. In terms of monitoring, Availability is the most important aspect of a service to gauge and closely follow. Availability cannot simply be measured by the amount of time the servers dedicated to a service have been up. Apart from being reachable, the service needs to respond within an acceptable time (Low Latency) and has to be able to achieve its business activity (Functional) - there is no point in the server being reachable if it returns a 503 error within milliseconds. So the number of error responses (as a deviation from the baseline, which can include normal validation and business rule errors) also comes into play.

So the question is how we can expose an endpoint inside a service that aggregates all the above facets and reports the health of the service. The simple answer is that we cannot, and should not commit ourselves to doing it. Why? Let's take some simple help from algebra.
An API/Service maps an input domain to an output domain (codomain). Availability is a function of the output domain.

A service (f) is basically a function that maps the input domain (I) to an output domain (O). So:

O = f(I)

The output domain is the set of all possible responses with their status codes and latencies. Availability (A) is a function (a) of the output domain, since it has to aggregate errors, latencies, etc.:

A = a(O)

Putting the two together:

A = a(f(I))

In other words, A cannot be measured without I - which for a real service is a very large set. And it also needs all of f - not your subset bypass-authentication-use-test-customer method.

So one approach is to sit outside the service and only deal with the output domain, in a sort of proxy, or by monitoring server logs. Netflix have done a ton of work on this and have open sourced it as Hystrix - and no wonder I have not heard anything about the magical Health Endpoint in there (there is now an alternative endpoint which I will explain later). But if you want to do it within the service, you need all of the input domain, and not just your "Test Customer", to make assertions about the health of your service. And this kind of assertion is not just wrong, it is dangerous, as I am going to explain.

First of all, gradually - especially as far as the ops are concerned - that green line on the dashboard that checks your endpoint becomes your availability. People get used to trusting it, and when things go wrong out there and customers jump and shout, you will not believe it for quite a while because your eye sees that green line and trusts it.

And guess what happens when you have such an incident? There will be a post-mortem meeting, all the tie-and-suits will be there, they will identify the root cause as the faulty health check and you will be asked to go back and fix your Health Check endpoint. And then you start building more and more complexity into your endpoint. Your endpoint gets to know about each and every dependency and all their intricacies. Before you know it, you could have built a complete application beside your main service. And you know what, you have to do it for each and every service, as they are all different.

So don't do it. Don't commit yourself to what you cannot achieve.

So is there no point in having a simplistic endpoint which tells us basic information about the status of the service? Of course there is. Such information is useful, and many load balancers or web proxies require such an endpoint.

But first we need to make absolutely clear what the responsibility of such an endpoint is.

Canary Endpoint

A canary endpoint (the name is courtesy of Jamie Beaumont) is a simplistic endpoint which gathers the connectivity status and latency of all dependencies of a service. It absolutely does not trigger any business activity, there is no "Test Customer" of any kind, and it is not a "Health Endpoint". If it is green, it does not mean your service is available. But if it is red (your canary is dead) then you definitely have a problem.



So how does a canary endpoint work? It basically checks connectivity with its immediate dependencies - including but not limited to:
  • External services
  • SQL Databases
  • NoSQL Stores
  • External distributed caches
  • Service brokers (e.g. RabbitMQ, Azure Service Bus)
A canary result contains the name of the dependency, the latency and the status code. If any of the results has a non-success code, the endpoint returns a non-success code. The status code returned is used by simple callers such as load balancers. Also, in all cases we return a payload which is the aggregated canary result. Such results can be used to feed various charts and draw heuristics into the significance of the variability of the latencies.

You probably noticed that External Services appears in italic, i.e. it is a bit different. The reason is that if an external service has a canary endpoint itself, instead of just a connectivity check we call its canary endpoint and add its aggregated result to the result we are returning. So usually the entry point API will generate a cascade of canary chirps that will tell us how things are. A sketch of such an aggregated result follows below.
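As a sketch only (the property names here are my assumptions, not prescribed by the post), an aggregated canary result could be shaped like this, with nested results carrying the cascaded chirps from external canaries:

// Requires: System.Collections.Generic. Illustrative shape; names are assumed.
public class CanaryResult
{
    public string DependencyName { get; set; }            // e.g. "OrdersDb"
    public int StatusCode { get; set; }                   // HTTP-style status of the check
    public long LatencyMilliseconds { get; set; }         // how long the check took
    public List<CanaryResult> InnerResults { get; set; }  // cascaded results from external canary endpoints
}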

Implementation of the connectivity check is generally dependent on the underlying technology. For a cache service, it suffices to Set a constant value and see it succeed. For a SQL database, a SELECT 1; query is all that is needed. For an Azure Storage account, it is enough to connect and get the list of tables. The point here is that none of these are anywhere near a business activity, so you could not - in the remotest sense - think that their success means your business is up and running.

So there you have it. Don't do health endpoints, do canary instead.

Canary Endpoint implementation

A canary endpoint normally gets implemented as an HTTP GET call which returns a collection of connectivity check metrics. You can abstract the logic of checking the various dependencies into a library and allow API developers to implement the endpoint by just declaring the dependencies. A minimal sketch follows below.
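Here is a minimal ASP.NET Web API sketch of such an endpoint, reusing the CanaryResult shape above. The IDependencyChecker abstraction is an assumption for illustration, not an actual implementation:

// Requires: System.Collections.Generic, System.Linq, System.Net, System.Threading.Tasks, System.Web.Http
public interface IDependencyChecker
{
    // One implementation per dependency, e.g. a SELECT 1; for SQL or a Set for a cache.
    Task<CanaryResult> CheckAsync();
}

public class CanaryController : ApiController
{
    private readonly IEnumerable<IDependencyChecker> checkers;

    public CanaryController(IEnumerable<IDependencyChecker> checkers)
    {
        this.checkers = checkers;
    }

    [HttpGet]
    public async Task<IHttpActionResult> Get()
    {
        // Run all connectivity checks concurrently; none of them triggers any business activity.
        CanaryResult[] results = await Task.WhenAll(checkers.Select(c => c.CheckAsync()));

        // Return a non-success status if any dependency check failed, for simple callers
        // such as load balancers; the payload carries the aggregated detail for charts.
        bool anyFailure = results.Any(r => r.StatusCode >= 400);

        return Content(anyFailure ? HttpStatusCode.ServiceUnavailable : HttpStatusCode.OK, results);
    }
}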

We are currently working on an implementation at ASOS (C# and ASP.NET Web API) and there is a possibility of open sourcing it.

Security of the Canary Endpoint

I am in favour of securing the canary endpoint with a constant API key - normally over SSL. This does not provide the highest level of security, but it is enough to make it much more difficult to break into. At the end of the day, a canary endpoint lists all the internal dependencies, components and potentially technologies of a system, which can be used by hackers to target those components.

Performance impact of Canary Endpoint

Since the canary endpoint does not trigger any business activity, its performance footprint should be minimal. However, since calling the canary endpoint generates a cascade of calls, it might not be wise to iterate through all canary endpoints and call them every few seconds, since the deeper canary endpoints in a highly layered architecture get called multiple times in each round.



Darrel Miller: Runscope: Notifications from the Traffic Inspector

Runscope provides a way to log HTTP traffic that passes between client and server, and it can also continuously monitor Web APIs to ensure they are functioning correctly.  When something goes wrong with the Web API you can be notified immediately.  However, out of the box, there isn’t a way to be notified if a failure appears in the traffic log.  It can be done, though; it just requires a little creativity.  This blog post shows how.

Canary

The API to the rescue

The Runscope Traffic Inspector is where you can see all the requests that have been relayed by Runscope.  Requests that have failed, based on their status code or connectivity issue, are included in the Errors stream. 

image

We can create a Radar Test that looks in this Errors stream by making a request to the Runscope API.

image

Unfortunately, it’s not quite that simple.  Once an error is in the Errors stream, the Radar test would end up notifying us of an error every time it checked.  We need a way of asking if any requests have been added to the Errors stream since the last time we ran the test.

image

We can also use the API to look at the results of a test and determine when it was last run. 

image

We can use the started_at timestamp from the response body of the last test run to then query the errors stream for requests since that timestamp. 

image

There is the possibility that, if requests happen between the time the test starts and the time the errors collection is queried, an error may be reported twice.  This seems like a better alternative than using the finished_at timestamp and risking missing an error that comes in whilst the test is running.

Accessing the API

In order to use the Runscope API we need an API key.  We can get one by defining an Application within the Runscope account configuration.

image

The Website URL and Callback URL are dummy values, because your account is the only one that will access the API and therefore you don’t need to use the OAuth2 web flow authentication.  Once you create the application you will find an Access Token at the bottom of the page.

image

We will now be able to use this access token in our tests.

Creating the Tests

Before creating the tests we should create a new bucket to hold them.  I called this bucket “Traffic Tester”. By using a different bucket we avoid getting the traffic related to monitoring intermixed with our actual relayed traffic.  Make a note of the “Traffic Tester” bucket key, which is displayed in the bottom left corner of the Traffic Inspector screen.

The next step is to go to the Radar view and create a test.  I created a test called “Monitor Slackbot Traffic” as I want to use it to notify me if there are any errors that occur in my Slackbot integration.

Identifying the Test itself

The first request that we add to this test is going to determine the last time that the test was run.  Before we can do this, we need to know the UUID of the test.  We can use the API to tell us this.  So initially, we are going to set up the first request to just determine the UUID.

Set the request URI to be

https://api.runscope.com/buckets/{TrafficTesterBucketKey}/radar

In my case the URI looked like this, your bucket key will be different,

image

You will also need to add an Authorization header with a value in the following format,

bearer {APIKey}

My header looked something like this. 

image

Run this test and when it succeeds, check the Last Response value.  This JSON object is a description of our “Monitor Slackbot Traffic” test and it contains a UUID property that identifies the test.  Copy that UUID value.

image

Getting the “Since” value

Change the URL of the test to be,

https://api.runscope.com/buckets/{TrafficTesterBucketKey}/radar/{TestUUID}/results

replacing both the bucket key {TrafficTesterBucketKey} and {TestUUID} values.  Next, add a Variable named since like this,

image

We get the started_at value of data[1] instead of data[0] because data[0] is the information about the currently running test.  We want to know when the last test started.

At this point you can try running the test and see if it populates the since variable with a timestamp value.

image

Time to Check the Errors Stream

Create the next request and set up the same Authorization header as the previous request.

Set the new request URL to be

https://api.runscope.com/buckets/{bucketToMonitor}/errors?since={{since}}

Replace {bucketToMonitor} with the bucket key of the bucket you wish to monitor!

When the request runs it will return a data array that either contains errors or does not.  I tried to use the standard assertions for this but could not find a way to handle the empty array, so instead I used a bit of script code to do the check.

Add the following script code to the request to complete the request,

image

Tell somebody about it

Add some notifications to the test,

image

This will send you an email if any request gets captured in the Errors stream.  If you prefer to use some other notification method then select the Integrations page and select one of our many integration partners.

Is this useful to you?

Ideally, it shouldn’t require quite this much creativity to be able to do this.  Let me know if this is useful to you and we’ll find a way of making it much simpler.

Image Credit: Canary https://flic.kr/p/c2fdZG


Darrel Miller: The Web API business layer anti-pattern

What follows is a description of an architectural pattern that I see many developers discussing and that I believe is an anti-pattern.  My belief is based on architectural theory, and I have no empirical evidence to back it up, so feel free to come to your own conclusions.

The proposed architecture looks like this,

image

I’ve never been a big fan of architecture diagrams like this, because they are a purely logical representation of the architecture and forget about physical realities.

Consider this next diagram which is an example of how this architecture might actually be deployed.

image

In this diagram I changed the colours of the arrows to indicate the protocol used to interact between the components. The blue arrows use HTTP, the purple one is an in-process call and the red one is cross process but most likely not using HTTP. 

HTTP is smart but not very quick

HTTP is designed to enable communication over high latency distributed networks.  It is a text based protocol that enables the transmission of a large amount of semantically rich information in a single request/response pair.  It was designed to scale massively and allow application components to evolve independently.  It was not designed to be particularly efficient over high speed connections within a data center.

High speed in the data center

HTTP is convenient, but definitely not the best choice when communicating with a database server within a data center. The interaction between the Web Site and the Web API is HTTP, because that’s the protocol of choice for Web APIs.  However,  I think it is important to question the wisdom of this interaction.

The right protocol for the right job

It is highly likely that the Web Site and the Web API are living in the same data center. It is quite possible that they are running on the same physical machine.  This means that interactions between the two do not even need a network round trip.  There are much faster ways for the web site to get access to the data it needs than using HTTP to talk to a Web API.

DRY Layers

However, from what I have heard, performance is not the motivating factor for funneling all interactions through the Web API.  The intent is to provide a single interface that all “client” applications can consume.  The goal is re-use.  The theory being that we can write a single Web API and all the different client applications can consume that single API. 

Good APIs for clients communicating across the Internet need to satisfy a different set of requirements than an API for a client sitting across the room.  Internet APIs can’t afford to be chatty.  They tend to be more coarsely grained and contain more metadata in order to reduce chattiness.  They also need to be much more resilient to change.  It’s not hard to push an update to the Web Site when the Web API changes, but it is a lot more challenging to update mobile devices, or some third party integration.

It isn’t impossible, it’s just sub-optimal

You can share a Web API between both local and remote clients.  The problems you will encounter will depend on who is the driving force behind API changes.  If the Web Site requirements push API changes then you are likely to end up with something that works OK for the web site and sucks horribly for the remote clients.  If you are lucky, it will be the remote clients that drive the API and hopefully the performance advantages of being local will make up for the inefficient interface that the Web Site needs to deal with.

A better way

image

In my opinion, a better unit of re-use would be the business logic of the application packaged up with a package manager and then deployed into either the Web Site or Web API projects.  With this approach, the Web Site gets high speed access to the underlying business logic and data and the Web API gets to focus on optimizing for remote clients.

Feedback

As I started out saying, this opinion is based on theory.  I’d be really interested in hearing about practical experiences that developers have had with these types of scenarios.   Some readers might find this a stretch, but I see a correlation between what I am describing here and the changes that Netflix implemented in its internal architecture.

It is worth noting also that many of the negative impacts that I am envisioning are not necessarily going to surface in the first six months of the project.  I tend to focus on the long-term evolution of an application, so if you happen to be building a tool for your internal HR department that is going to be scrapped next year, feel free to ignore everything I just said.


Darrel Miller: Continuous Integration, Deployment and Testing of your Web APIs with AppVeyor and Runscope

Fast iterations can be very valuable to the software development process, but make for more time spent doing deployments and testing.  Also, more deployments means more opportunities to accidentally introduce breaking changes into your API. 

RubeGoldbergMachine

AppVeyor is a cloud based continuous integration server and Runscope enables you to do integration testing of your Web APIs, when these services are combined they can save you time doing the mechanical stuff and ensure your API consumers stay happy.

The following steps show the key points to achieve this integration using AppVeyor, Runscope and Azure websites.

1) Create your Web API solution.

image

2) Commit your source code to a publicly accessible source code repository in Github, BitBucket, Visual Studio Online or Kiln.

3) Setup a default AppVeyor project.

image

3b) If  you did not include your Nuget packages in source control then you will need to go to the “Settings –> Build” page and add the nuget restore command.

image

4) Create an Azure Web Site from the Azure management portal using the Quick Create option.   After the site has been created you should setup deployment credentials.

image

5) Return back to the AppVeyor project configuration and setup the deployment of the project using Web Deploy.

image

The Web Deploy Server field should be set to

https://{sitename}.scm.azurewebsites.net/msdeploy.axd?site={sitename}.azurewebsites.net

Where {sitename} must be replaced by whatever you chose to name your Azure Website.  The Username and Password should be set to the credentials you provided in the Azure portal.

6)  Create your Runscope API tests.

image

7) Determine the trigger URL to initiate the tests.  An individual test can be triggered using the trigger URL on the test Settings page.

image

If you want to run all the tests defined in a bucket, there is also a trigger URL in the bucket settings page.

8) Configure AppVeyor to call Runscope Trigger URL.

image

Once this is all setup, as soon as you commit a change to the source code repo, AppVeyor will be notified of the commit and it will initiate a build.  Once the build is completed successfully, the Runscope trigger URL will be called and the newly deployed API will be tested.  Runscope notifications can be used to send emails, SMS messages or IMs if desired.

Image Credit: Lego Rube Goldberg machine https://flic.kr/p/8tA1H7


Darrel Miller: REST–The Chocolate Chip Cookie Analogy

At a recent conference, I found myself once again in a conversation about the meaning of the term REST.  I’ve had this conversation so many times, that I tend to forget that not everyone has heard my take on the subject.  The conversation ended with a “you should blog that…”. 

ChocolateChipCookies

Most developers are aware that REST is one of those terms that means different things to different people.  Lots of key presses have been wasted arguing about what is and isn’t RESTful.  I’m going to try and avoid that trap by making a particularly silly comparison to demonstrate why we should care about accurate use of terminology.

Constraints

The term REST has some similarities to the term “Chocolate Chip Cookie”.   A chocolate chip cookie is defined by two primary constraints: it must be a cookie and it must contain chocolate chips.  There is no single official chocolate chip cookie recipe.   There are hundreds of different recipes, but the end result is always a cookie with chocolate chips in it.  More importantly, kids love them.

The problem developers have with REST is that there is no single recipe for “how to do REST”.  Developers like very prescriptive guidance on how to achieve a goal.  The definition of REST simply provides a set of constraints and leaves out the details.

Desired Effect

When you are planning a birthday party and decide to buy some Chocolate Chip Cookies, you are not making that decision because you know Chocolate Chip Cookies contain flour and butter, but because you know kids love them, and you want the kids to be happy.

BirthdayParty

The REST constraints were chosen because they evoke certain characteristics in the systems that follow those constraints.  You choose to follow REST constraints because you want those desired effects.

Sometimes Oatmeal cookies are better

When taking kids on a long car ride, especially in warmer climates, chocolate chip cookies are not always the best choice for snacks.  Lots of kids like Oatmeal cookies too and they don’t make a mess of your back seat.  It is the effect that is important, not the ingredients.  You select the type of cookie that has the characteristics you desire.

MessyFace

Complying with the REST constraints requires a certain amount of work.  Maybe you don’t need all the characteristics that a REST system exhibits, or maybe you need additional ones.

False Expectations

However, if you tell your birthday party guests that they are going to get chocolate chip cookies, and you switch the chocolate chips for raisins, you are going to have some confused and upset kids.

UnhappyKid

This is the key point of the REST naming debacle.   Calling something REST that doesn’t conform to the constraints defined by REST is just a source of pain and confusion.  Some people argue that the popular use of the term REST is simply a subset of the constraints.  People have even tried to name these different sets of constraints: Hi-REST & Lo-REST, Fielding’s REST and Pop-REST.   None of these distinctions have really taken hold.

Rogue Constraints

Imagine how someone would be ridiculed if they came along and said,

our cookie making machine produces square cookies therefore we declare that chocolate chip cookies must be square. 

Or,

our chocolate chip cookie recipe contains pecans and they taste great, so we believe it is a best practice for all chocolate chip cookies to contain pecans.

Unfortunately, this has happened to the term REST.  For a variety of different reasons, new constraints have been invented and attributed to REST.  Sometimes, it is because someone believes the new constraint is a “best practice”, or often it is due to some framework limitation.

Landofconfusion

Land of confusion

The end result is many different definitions of REST.  There are thousands of recipes for making chocolate chip cookies, but the basic definition of what a chocolate chip cookie is remains the same: cookie + chocolate chips.

The term REST should tell me that a system uses a layered architecture, uses caching, has a client-server architecture, interactions between the client and server are stateless and each layer applies the rules of the uniform interface constraint. 

Today, the term REST doesn’t guarantee that any of these constraints are being respected, which makes the term fairly worthless in technical conversations.  The term HTTP API is much more accurate when describing most of today’s Web APIs.  Many in the core REST community have resorted to using the term Hypermedia API to describe an API that actually conforms to all of the REST constraints.

There is a point where all metaphors break down, and the difference between chocolate chip cookies and REST is that you can do a web search and easily find a recipe to make a great tasting chocolate chip cookie.  Unfortunately, you can’t do the same with REST due to the watering down of the term.

Let’s move on

DriveIntoSunset

It is an unfortunate state of affairs, and one that is likely never to be resolved.  However, as long as we understand where we are and how we got here, we can move forward, get some real work done, stop debating what it means to be RESTful, and try not to make the same mistake with the next technical noun that comes along.

 

Image Credit: Cookies https://flic.kr/p/83Wef5
Image Credit: Kid with cookie https://flic.kr/p/dMNwWR
Image Credit: Birthday Party https://flic.kr/p/6D1ag1
Image Credit: Messy Face https://flic.kr/p/PVrds
Image Credit: Unhappy kid https://flic.kr/p/i8SrkX
Image Credit: Land of confusion https://flic.kr/p/5eLaX7


Dominick Baier: IdentityServer & IdentityManager, Updates and the .NET Foundation

It’s busy times right now but we are still on track with our release plans for IdentityServer (and IdentityManager, which will get more love once IdentityServer is done). In fact we just pushed beta 3-4 to github and nuget, which mostly contains bug fixes and merged pull requests.

The other big news is that both projects joined the .NET Foundation as part of the announcements around open sourcing .NET. Joining the Foundation provides us with a strong organizational backbone to increase the visibility and attractiveness of IdentityServer and IdentityManager to both new users and new committers. If you are a current user of one of these projects, this provides even stronger long-term safety for your investment in these frameworks.

If you want to contribute to any of the projects – you are more than welcome! Please have a look at our contribution guidelines and don’t hesitate to get in touch with us!

Also big thanks to our contributors – and especially Damian Hickey and Hadi Hariri who proved this week that this whole community thing is actually working!


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Chad England: Order the controllers in ASP.NET WebAPI Help

Recently I started a new API project for my company.  With this project I decided to give the Help files generator that Microsoft created a try.  Overall it looks like it will meet the needs of the project.  Today however I hit my first snag with it.  After I began to turn on routes for consumption, I noticed that the routes in the help page were not sorted in any fashion.  To me, this is an annoyance for the folks needing to consume the API.

Solving this was fairly trivial:

  • Go to \Areas\HelpPage\Views\Help\
  • Open Index.cshtml
  • Locate the following line

ILookup<HttpControllerDescriptor, ApiDescription> apiGroups = Model
    .ToLookup(api => api.ActionDescriptor.ControllerDescriptor);

  • Replace that line with this one

ILookup<HttpControllerDescriptor, ApiDescription> apiGroups = Model
    .OrderBy(d => d.ActionDescriptor.ControllerDescriptor.ControllerName)
    .ToLookup(api => api.ActionDescriptor.ControllerDescriptor);

What we are doing here is sorting the API descriptions by controller name before the lookup is created.
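If you also want the actions within each controller listed in a predictable order, the same idea extends with a secondary sort.  This is an optional extra beyond the original fix, but it uses only properties that already exist on the help page model:

ILookup<HttpControllerDescriptor, ApiDescription> apiGroups = Model
    .OrderBy(d => d.ActionDescriptor.ControllerDescriptor.ControllerName)
    // Secondary sort: order each controller's actions by action name
    .ThenBy(d => d.ActionDescriptor.ActionName)
    .ToLookup(api => api.ActionDescriptor.ControllerDescriptor);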



Taiseer Joudeh: Getting started with ASP.NET 5 MVC 6 Web API & Entity Framework 7

One of the main new features of ASP.NET 5 is unifying the programming model by combining MVC, Web API, and Web Pages in a single framework called MVC 6. In previous versions of ASP.NET (MVC 4 and MVC 5) there was overlap in features between the MVC and Web API frameworks, but the concrete implementations of the two were totally different. With ASP.NET 5, merging those frameworks will make it easier to develop modern web applications/HTTP services and increase code reusability.

The source code for this tutorial is available on GitHub.

Getting started with ASP.NET 5 MVC 6 Web API & Entity Framework 7

In this post I’ve decided to give ASP.NET 5 – MVC 6 Web API a test drive. I’ll be building a very simple RESTful API from scratch using MVC 6 Web API and the new Entity Framework 7, so we will learn the following:

  • Using the ASP.NET 5 empty template to build the Web API from scratch.
  • Overview of the new project structure in VS 2015 and how to use the new dependency management tool.
  • Configuring ASP.NET 5 pipeline to add only the components needed for our Web API.
  • Using EF 7 commands and the K Version Manager (KVM) to initialize and apply DB migrations.

To follow along with this post you need to install the VS 2015 preview edition, or you can provision a virtual machine using Azure images, as they have an image with the VS 2015 preview installed.

Step 1: Creating an empty web project

Open VS 2015 and select New Web Project (ASP.NET Web Application) as in the image below; do not forget to set the .NET Framework to 4.5.1. You can name the project “Registration_MVC6WebApi”.

ASPNET5 New project

Now we’ll select the template named “ASP.NET 5 Empty” as in the image below; this is an empty template with no core dependencies on any framework.

MVC 6 Web API Project Template

Step 2: Adding the needed dependencies

Once the project is created you will notice that there is a file named “project.json”; this file contains all your project settings, along with a section for managing the project’s dependencies on other frameworks/components.

We used to manage packages/dependencies with the NuGet package manager, and you can still do this with the new enhanced NuGet package manager tool which ships with VS 2015, but in our case we’ll add all the dependencies using the “project.json” file and benefit from the IntelliSense provided, as in the image below:

NuGet IntelliSense

Now we will add the dependencies needed to configure our Web API, so open the file “project.json” and replace the “dependencies” section with the one below:

"dependencies": {
        "Microsoft.AspNet.Server.IIS": "1.0.0-beta1",
        "EntityFramework": "7.0.0-beta1",
        "EntityFramework.SqlServer": "7.0.0-beta1",
        "EntityFramework.Commands": "7.0.0-beta1",
        "Microsoft.AspNet.Mvc": "6.0.0-beta1",
        "Microsoft.AspNet.Diagnostics": "1.0.0-beta1",
        "Microsoft.Framework.ConfigurationModel.Json": "1.0.0-beta1"
    }

The purpose of each dependency we’ve added is as follows:

  • Microsoft.AspNet.Server.IIS: We want to host our Web API using IIS, so this package is needed. If you are planning to self-host your Web API then there is no need to add this package.
  • EntityFramework & EntityFramework.SqlServer: Our data provider for the Web API will be SQL Server. Entity Framework 7 can be configured to work with different data providers, and not only relational databases; the data providers supported by EF 7 are: SqlServer, SQLite, AzureTableStorage, and InMemory. More about EF 7 data providers here.
  • EntityFramework.Commands: This package will be used to make the DB migrations commands available in our Web API project by using KVM; more about this later in the post.
  • Microsoft.AspNet.Mvc: This is the core package which adds all the components needed to run Web API and MVC.
  • Microsoft.AspNet.Diagnostics: Basically this package will be used to display a nice welcome page when you request the base URI of the API in a browser. You can ignore this if you want, but it is nice to display a welcome page instead of the 403 page displayed by the older Web API 2.
  • Microsoft.Framework.ConfigurationModel.Json: This package is responsible for loading and reading the configuration file named “config.json”. We’ll add this file in a later step. This file is used to set up the “IConfiguration” object. I recommend reading this nice post about the new ASP.NET 5 config files.

The last thing we need to add to the file “project.json” is a section named “commands”, as in the snippet below:

"commands": {
        "ef": "EntityFramework.Commands"
}

We’ve added the short prefix “ef” for EntityFramework.Commands, which will allow us to write EF commands such as initializing and applying DB migrations using KVM.

Step 3: Adding config.json configuration file

Now right click on your project and add a new item of type “ASP.NET Configuration File” and name it “config.json”. You can think of this file as a replacement for the legacy Web.config file; for now it will contain only the connection string to our SQL DB. I’m using SQL Express here and you can change this to your preferred SQL server.

{
    "Data": {
        "DefaultConnection": {
            "Connectionstring": "Data Source=.\\sqlexpress;Initial Catalog=RegistrationDB;Integrated Security=True;"
        }
    }
}

Note: This is a JSON file, which is why we are using escape characters in the connection string.

Step 4: Configuring the ASP.NET 5 pipeline for our Web API

The Startup class is responsible for adding the components needed in our pipeline. Currently, with the ASP.NET 5 empty template, the class is empty and our web project literally does nothing. I’ll add all the code to our Startup class at once, then describe what each part is responsible for, so open the file Startup.cs and paste the code below:

using System;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;
using Microsoft.AspNet.Hosting;
using Microsoft.Framework.ConfigurationModel;
using Microsoft.Framework.DependencyInjection;
using Registration_MVC6WebApi.Models;

namespace Registration_MVC6WebApi
{
    public class Startup
    {
        public static IConfiguration Configuration { get; set; }

        public Startup(IHostingEnvironment env)
        {
            // Setup configuration sources.
            Configuration = new Configuration().AddJsonFile("config.json").AddEnvironmentVariables();
        }
        public void ConfigureServices(IServiceCollection services)
        {
            // Add EF services to the services container.
            services.AddEntityFramework().AddSqlServer().AddDbContext<RegistrationDbContext>();

            services.AddMvc();

            //Resolve dependency injection
            services.AddScoped<IRegistrationRepo, RegistrationRepo>();
            services.AddScoped<RegistrationDbContext, RegistrationDbContext>();
        }
        public void Configure(IApplicationBuilder app)
        {
            // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=398940
            app.UseMvc();

            app.UseWelcomePage();

        }
    }
}

What we’ve implemented in this class is the following:

  • The constructor for this class is responsible for reading the settings in the configuration file “config.json” that we’ve defined earlier; currently we have only the connection string. The static object “Configuration” holds this setting, which we’ll use in a coming step.
  • The method “ConfigureServices” accepts a parameter of type “IServiceCollection”. This method is called automatically when starting up the project, and it is the core method responsible for registering components in our pipeline. The components we’ve registered are:
    • We’ve added “EntityFramework” using SQL Server as our data provider for the database context named “RegistrationDbContext”. We’ll add this DB context in the next steps.
    • Added the MVC component to our pipeline so we can use MVC and Web API.
    • Lastly, one of the nice out-of-the-box features added to ASP.NET 5 is Dependency Injection without using any external IoC containers. Notice how we are creating a scoped instance of our “IRegistrationRepo” by calling services.AddScoped<IRegistrationRepo, RegistrationRepo>(). This instance will be available for the lifetime of the request; we’ll implement the classes “IRegistrationRepo” and “RegistrationRepo” in the next steps of this post. There is a nice post about ASP.NET 5 dependency injection that can be read here. (Update by Nick Nelson: use Scoped injection instead of a Singleton instance because DbContext is not thread safe.) See the short sketch of the available lifetimes after this list.
  • Lastly, the method “Configure” accepts a parameter of type “IApplicationBuilder”; this method configures the pipeline to use MVC and show the welcome page. Do not ask me why we have to call “AddMvc” and “UseMvc” and what the difference between them is :) I would like to hear an answer if someone knows the difference, or maybe this will change in the coming releases of ASP.NET 5. (Update: Explanation of this pattern in the comments section.)
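For context, the service collection offers three lifetimes; a quick sketch of the options (only AddScoped is actually used in this tutorial):

// Transient: a new instance every time the service is requested
services.AddTransient<IRegistrationRepo, RegistrationRepo>();

// Scoped: one instance per HTTP request (what we use here, since DbContext is not thread safe)
services.AddScoped<IRegistrationRepo, RegistrationRepo>();

// Singleton: one instance for the whole application lifetime
services.AddSingleton<IRegistrationRepo, RegistrationRepo>();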

Step 5: Adding Models, Database Context, and Repository

Now we’ll add a file named “Course” which contains two classes: “Course” and “CourseStatusModel”. Those classes will represent our domain data model. For better code organization, add a new folder named “Models”, then add the new file containing the code below:

using System;
using System.ComponentModel.DataAnnotations;

namespace Registration_MVC6WebApi.Models
{
    public class Course
    {
        public int Id { get; set; }
        [Required]
        [StringLength(100, MinimumLength = 5)]
        public string Name { get; set; }
        public int Credits { get; set; }
    }

    public class CourseStatusModel
    {
        public int Id { get; set; }
        public string Description { get; set; }
    }
}

Now we need to add the database context class which will be responsible for communicating with our database, so add a new class, name it “RegistrationDbContext”, and paste the code snippet below:

using Microsoft.Data.Entity;
using System;
using Microsoft.Data.Entity.Metadata;

namespace Registration_MVC6WebApi.Models
{
    public class RegistrationDbContext :DbContext
    {
        public DbSet<Course> Courses { get; set; }

        protected override void OnConfiguring(DbContextOptions options)
        {
            options.UseSqlServer(Startup.Configuration.Get("Data:DefaultConnection:ConnectionString"));
        }
    }
}

Basically what we’ve implemented here is adding our Course data model as a DbSet so it will represent a database table once we run the migrations. Note that there is a new method named “OnConfiguring” which we can override to specify the data provider that works with our DB context.

In our case we’ll use SQL Server. The “UseSqlServer” extension method accepts the connection string as a parameter, so we’ll read it from our “config.json” file by specifying the key “Data:DefaultConnection:ConnectionString” on the “Configuration” object we’ve created earlier in the Startup class.

Note: This is not the optimal way to set the connection string; there are MVC 6 examples out there doing it differently, but for some reason those did not work for me, so I went with this approach.

Lastly we need to add the interface “IRegistrationRepo” and the implementation of this interface, “RegistrationRepo”, so add two new files under the “Models” folder named “IRegistrationRepo” and “RegistrationRepo” and paste the two code snippets below:

using System;
using System.Collections;
using System.Collections.Generic;

namespace Registration_MVC6WebApi.Models
{
    public interface IRegistrationRepo
    {
        IEnumerable<Course> GetCourses();
        Course GetCourse(int courseId);
        Course AddCourse(Course course);
        bool DeleteCourse(int courseId);
    }
}

using System;
using System.Collections.Generic;
using System.Linq;


namespace Registration_MVC6WebApi.Models
{
    public class RegistrationRepo : IRegistrationRepo
    {
        private readonly RegistrationDbContext _db;

        public RegistrationRepo(RegistrationDbContext db)
        {
            _db = db;
        }
        public Course AddCourse(Course course)
        {
            _db.Courses.Add(course);

            if (_db.SaveChanges() > 0)
            {
                return course;
            }
            return null;

        }

        public bool DeleteCourse(int courseId)
        {
            var course = _db.Courses.FirstOrDefault(c => c.Id == courseId);
            if (course != null)
            {
                _db.Courses.Remove(course);
                return _db.SaveChanges() > 0;
            }
            return false;
        }

        public Course GetCourse(int courseId)
        {
            return _db.Courses.FirstOrDefault(c => c.Id == courseId);
        }

        public IEnumerable<Course> GetCourses()
        {
            return _db.Courses.AsEnumerable();
        }
    }
}

The implementation here is fairly simple. What is worth noting is how we’ve passed “RegistrationDbContext” as a parameter to our “RegistrationRepo” constructor, implementing Constructor Injection. This would not work if we hadn’t configured it earlier in our “Startup” class.

Step 6: Installing KVM (K Version Manager)

After we’ve added our database context and our domain data models, we can use migrations to create the database. With previous versions of ASP.NET we used the NuGet package manager console for these types of tasks, but with ASP.NET 5 we can use the command prompt with various K* commands.

What is KVM (K Version Manager)?  KVM is a PowerShell script used to get the runtime and manage multiple versions of it on the machine at the same time; you can read more about it here.

Now to install KVM for the first time you have to do the following steps:

1. Open a command prompt with Run as administrator.
2. Run the following command:

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/master/kvminstall.ps1'))"

3. The script installs KVM for the current user.
4. Exit the command prompt window and start another as an administrator (you need to start a new command prompt to get the updated path environment).
5. Upgrade KVM with the following command:

KVM upgrade

We are ready now to run EF migrations as the step below:

Step 7: Initializing and applying migrations for our database

Now our command prompt is ready to understand K commands and Entity Framework commands. The first step is to change the directory to the project directory. The project directory contains the “project.json” file, as in the image below:

EF7 Migrations
So at the command prompt we need to run the following two commands:

k ef migration add initial
k ef migration apply

Basically the first command will add a migration file with the name format (<date>_<migration name>), so we’ll end up with a file named “201411172303154_initial.cs” under a folder named “Migrations” in our project. This auto-generated file contains the code needed to add our Courses table to our database; the newly generated files will show up under your project as in the image below:

EF7 Migrations 2
The second command will apply those migrations and create the database for us based on the connection string we’ve specified earlier in the file “config.json”.

Note: the “ef” command comes from the settings that we’ve specified earlier in file “project.json” under section “commands”.

Step 8: Adding GET methods for Courses Controller

The controller is a class which is responsible for handling HTTP requests. With ASP.NET 5, our Web API controller inherits from the “Controller” class, no longer from “ApiController”, so add a new folder named “Controllers”, then add a new controller named “CoursesController” and paste the code below:

using Microsoft.AspNet.Mvc;
using Registration_MVC6WebApi.Models;
using System;
using System.Collections.Generic;

namespace Registration_MVC6WebApi.Controllers
{
    [Route("api/[controller]")]
    public class CoursesController : Controller
    {
        private IRegistrationRepo _registrationRepo;

        public CoursesController(IRegistrationRepo registrationRepo)
        {
            _registrationRepo = registrationRepo;
        }

        [HttpGet]
        public IEnumerable<Course> GetAllCourses()
        {
            return _registrationRepo.GetCourses();
        }

        [HttpGet("{courseId:int}", Name = "GetCourseById")]
        public IActionResult GetCourseById(int courseId)
        {
            var course = _registrationRepo.GetCourse(courseId);
            if (course == null)
            {
                return HttpNotFound();
            }

            return new ObjectResult(course);
        }

    }
}

What we’ve implemented in the controller class is the following:

  • The controller is attributed with the Route attribute
    [Route("api/[controller]")]
    so any HTTP requests that match the template are routed to the controller. The “[controller]” part in the template URL means to substitute the controller class name, minus the “Controller” suffix. In our case, for the “CoursesController” class, the route template is “api/courses”.
  • We’ve defined two HTTP GET methods. The first one, “GetAllCourses”, is attributed with “[HttpGet]” and returns a .NET object which is serialized in the body of the response using the default JSON format.
  • The second HTTP GET method, “GetCourseById”, is attributed with “[HttpGet]”. For this method we’ve specified a constraint on the parameter “courseId”: the parameter should be of integer data type. As well, we’ve specified a name for this method, “GetCourseById”, which we’ll use in the next step. Lastly, this method returns an object of type IActionResult, which gives us the flexibility to return different action results based on our logic; in our case we will return HttpNotFound if the course does not exist, or a serialized JSON object of the course when it is found.
  • Lastly, notice how we are passing the “IRegistrationRepo” to the constructor of our CoursesController; by doing this we are implementing Constructor Injection.
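To sanity-check the two GET endpoints once the API is running, a quick throwaway console client can be used. This is just a sketch; the port number is an assumption, so replace it with whatever IIS Express assigned to your project:

using System;
using System.Net.Http;

class ApiSmokeTest
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Assumed local address; adjust to your IIS Express port
            client.BaseAddress = new Uri("http://localhost:49800/");

            // GET api/courses returns all courses serialized as JSON
            var all = client.GetAsync("api/courses").Result;
            Console.WriteLine("{0}: {1}", (int)all.StatusCode, all.Content.ReadAsStringAsync().Result);

            // GET api/courses/1 returns a single course, or 404 if it doesn't exist
            var one = client.GetAsync("api/courses/1").Result;
            Console.WriteLine("{0}: {1}", (int)one.StatusCode, one.Content.ReadAsStringAsync().Result);
        }
    }
}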

Step 9: Adding POST and DELETE methods for Courses Controller

Now we want to implement two more HTTP methods which allow us to add a new Course or delete an existing one, so open the file “CoursesController” and paste the code below:

[HttpPost]
public IActionResult AddCourse([FromBody] Course course)
{
	if (!ModelState.IsValid)
	{
		Context.Response.StatusCode = 400;
		return new ObjectResult(new CourseStatusModel { Id = 1 , Description= "Course model is invalid" });
	}
	else
	{
	   var addedCourse =  _registrationRepo.AddCourse(course);

		if (addedCourse != null)
		{
			string url = Url.RouteUrl("GetCourseById", new { courseId = course.Id }, Request.Scheme, Request.Host.ToUriComponent());

			Context.Response.StatusCode = 201;
			Context.Response.Headers["Location"] = url;
			return new ObjectResult(addedCourse);
		}
		else
		{
			Context.Response.StatusCode = 400;
			return new ObjectResult(new CourseStatusModel { Id = 2, Description = "Failed to save course" });
		}
	   
	}
}

What we’ve implemented here is the following:

  • For method “AddCourse”:
    • Added a new HTTP POST method which is responsible for creating a new Course. This method accepts a Course object coming from the request body, which the Web API framework deserializes into a CLR Course object.
    • If the course object is not valid (i.e. course name not set) then we’ll return an HTTP 400 status code, and in the response body we’ll return an instance of a POCO class (CourseStatusModel) containing a fictional Id and a description of the validation error. Thanks to Yishai Galatzer for spotting that I was originally returning a response of type “text/plain” regardless of the “Accept” header value set by the client.
    • If the course is created successfully then we’ll build a location URI which points to our newly created course, i.e. (/api/courses/4), and set this URI in the “Location” header of the response.
    • Lastly we are returning the created course object in the response body.
  • For method “DeleteCourse” (a sketch of this method follows below):
    • Added a new HTTP DELETE method which is responsible for deleting an existing course; this method accepts an integer courseId.
    • If the course has been deleted successfully we’ll return HTTP status 204 (No Content).
    • If the passed courseId doesn’t exist we will return HTTP status 404.
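The snippet above only shows “AddCourse”; the delete method isn’t included in the code listing. A minimal sketch matching the description above, reusing the same _registrationRepo field and assuming the beta-era HttpStatusCodeResult type, could look like this:

[HttpDelete("{courseId:int}")]
public IActionResult DeleteCourse(int courseId)
{
    if (_registrationRepo.DeleteCourse(courseId))
    {
        // 204 No Content: the course existed and has been deleted
        return new HttpStatusCodeResult(204);
    }

    // 404: no course exists with the supplied id
    return HttpNotFound();
}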

Note: I believe that the IHttpActionResult response which was introduced in Web API 2 is way better than IActionResult; please drop me a comment if someone knows how to use IHttpActionResult with MVC 6.

The source code for this tutorial is available on GitHub.

That’s all for now folks! I’m still learning the new features in ASP.NET 5, so please drop me a comment if you have a better way of implementing this tutorial or you spotted something that could be done in a better way.

Follow me on Twitter @tjoudeh

References

The post Getting started with ASP.NET 5 MVC 6 Web API & Entity Framework 7 appeared first on Bit of Technology.


Darrel Miller: Microsoft: Open source and cross-platform all the things

It was announced today that Microsoft will be delivering a cross-platform and open source, cloud-optimized version of the .Net framework.

AllTheThings

Today’s announcement is the culmination of a series of changes that have been happening over the past few years in certain parts of Microsoft.  Through the persistence of numerous Microsoft employees and the encouragement of the .net developer community, OSS is finally becoming accepted as an integral part of Microsoft’s business model.

Wait, there’s more…

Surprisingly, releasing the source code to this new .net Framework is probably the least significant part of the announcement.  For as long as I can remember, Microsoft have had a culture of creating strong coupling between Microsoft’s own products.   This is epitomized in the developer world by the fact that it is very difficult to get build machines to function without having Visual Studio on them.  This is just one example, but this has been a pervasive attitude throughout the company.

Time to leave the nest

In my opinion, today’s announcement is a declaration that Microsoft have grown up.  They have accepted the fact that their products must succeed on their own merit, and not because a customer chose one product and became locked into an entire eco-system.

.net on every server

The new cloud-optimized .net runtime will run on Macs and on Linux.  You can use your favorite code editor to write and compile .net applications.  There is no more coupling with Visual Studio.  In fact, in the last few days, I have experienced first hand that MS employees are working hard to get features like intellisense and syntax/error highlighting into editors like Sublime, Atom, Emacs and vim. With the announcement of Visual Studio Community edition, which is almost at feature parity with the Pro edition, the old Microsoft would just have said, “hey, we’re giving you a free edition, why would you want to use anything else”.

Haven’t we heard this all before?

Microsoft have made “open” announcements before and have had less than stellar results.  I think one of the reasons has been that it has taken a very long time to chip away at the internal cultural armor within Microsoft.  At the MVP Summit we had the opportunity to watch a panel discussion with some senior Microsoft people who were clearly 100% behind this open approach.

Best of breed should always win

One indication that this shift in culture is more than skin deep is the fact that Microsoft teams, more and more, are choosing to use external tools instead of being required to use their own.  The new Cloud-Optimized CLR and ASP.NET vNext are all being developed in Github repositories.  To file an issue, you can create a Github issue and if you think you can fix a bug, then send a pull request.  Github is the king of “social-coding” which suits an OSS model well.  Microsoft’s own TFS source code control has always been aimed at the enterprise space which often has different needs.  Microsoft’s teams get to use the best product for their own particular requirements and Microsoft products need to compete purely on merit.

Oh, no, not another .net framework!

I must admit, when I first heard about the cloud-optimized runtime, I was underwhelmed.  Why do we need yet another variant of the .net framework?  Why is it server-optimized?  What about mobile and desktop?  Well, technically, the CLR for this stack is not new.  It’s the Silverlight CoreCLR.  I’ll wait for you to stop snickering…. It is also the CLR used to power .net applications on Windows Phone 8 and Windows 8 apps. The CoreCLR is a proven, lightweight version of the CLR that happens to be ideal for creating lightweight web applications. When this lightweight CLR is combined with the new K Runtime Environment, the result is compelling.  Different versions of the CLR can sit side by side and switching between them is trivial.  The .Net framework libraries have been sliced up into more than 70 nuget packages so that only the functionality that is needed is pulled in.

A victim of success

One of the major challenges with the .net framework in the past is that it had become very large to deploy, it was very slow for the team to get new features into the framework, and with every new major version the team has attempted a new, hopefully less painful, way to deliver those changes.  The result has been inconsistent and confusing.  The KRE does enough things differently that we may finally have something that can allow the .net framework to evolve quickly and painlessly.

K for Kool

The KRE finally breaks the dependency on the Global Assembly Cache that has been a critical part of .Net since its inception.  However, as the .net framework team like to say, they are “in the app compat business”.  There will still be full CLR framework support, which still uses the GAC, for applications that run under the K runtime.  The new CoreCLR also changes the rules with regards to strong naming.  Microsoft are now saying quite openly that they do not recommend strong naming for Open Source projects, which will come as a great relief for many developers.

This is just the beginning

It was not until I realized the impact that open-sourcing big chunks of the .Net CLR would have on the Mono framework that I started to appreciate the bigger picture.  The plan for the future, as I understand it, is that Mono will start to replace some of its less optimized parts with MS source code.  Future versions of Mono will take advantage of more pieces of the CLR infrastructure.  I can envision a future where some combination of Mono / CoreCLR would become a “client optimized” CLR stack.  This would bring a fully supported and optimized .net to every platform that matters.

How is this going to help us developers?

Now that pieces of the .net frameworks and Core CLR are easily accessible on Github, it should really help developers get a better understanding of how the .Net framework behaves, and by being able to see the commit history and related issues we should start to get a picture of why it behaves that way.

The .Net and Core CLR source code is MIT licensed, which now allows us to copy and modify the code.  This is huge.  There is all sorts of useful code in the framework that is marked as internal because MS didn’t want to support it in the public interface.  We can now re-use this code.  Microsoft have also attached their Patent Promise to the repositories.

Cross platform is becoming a critical part of the Microsoft developer’s workload.  With easy access to Linux VMs on the server side on Azure and tools like Xamarin that allow C# across iOS and Android, Microsoft developers can reach every platform.  Having Microsoft recognize cross platform as a 1st class concern, rather than just a feature checkbox, is going to make building products with wide reach much easier.

This is a rebirth of Microsoft and I look forward to being part of it.


Dominick Baier: MVP Summit Hackathon: IdentityServer v3 on ASP.NET vNext

Today we had a chance to sit together with the ASP.NET team and try moving IdentityServer to vNext.

There are two fundamental approaches for doing that – migrate the code and middleware to the new APIs or host IdentityServer as-is as an OWIN component.

We went for the latter – and lo and behold – after two hours we got everything up and running. Big thanks to Chris, Lou and Dan from the ASP.NET team!

This allows us (at least for the time being) to run IdentityServer on both ASP.NET vCurrent as well as vNext. This will not give us support for the new CoreCLR – but we also have a plan for how to tackle that.

If you want to try it out yourself – the code can be found here.

2014-11-06 12.04.53

Update: two hours later, Christian got everything also running on Ubuntu!

leastprivilege_2014-Nov.-06


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: IdentityServer v3 Beta 3

Some of our users already found out and broke the news – so here’s my official post ;)

Beta 3 has been released to github and nuget – 107 commits since Beta 2-1…new features include:

  • Anti-forgery token support
  • Permission self-service page for users
  • Added support to add all claims of a user to a token (and support for implementation specific claims rules)
  • Added more documentation and comments
  • Added token handle and authorization code hashing
  • New view system and support for file system based assets
  • Support for WS-Federation, OpenID Connect and social external IdPs
  • Support for upstream federated sign-out
  • Added flag to hide scopes from discovery document
  • Re-worked claims filtering and normalization
  • Added support for more authentication scenarios, e.g. client certificates

Documentation will be updated, and new samples will be added ASAP – bear with us.

Again a massive thanks to all contributors and the people giving feedback and filing issues – you make IdentityServer better every day!


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Taiseer Joudeh: JSON Web Token in ASP.NET Web API 2 using Owin

In the previous post, Decouple OWIN Authorization Server from Resource Server, we saw how we can separate the Authorization Server and the Resource Server by unifying the “decryptionKey” and “validationKey” values in the machineKey node of the web.config file for both the Authorization and the Resource server. Once the user requests an access token from the Authorization server, the Authorization server uses this unified key to encrypt the access token, and at the other end, when the token is sent to the Resource server, it uses the same key to decrypt the access token and extract the authentication ticket from it.

The source code for this tutorial is available on GitHub.

This works well if you have control over your Resource servers (Audience) which rely on your Authorization server (Token Issuer) to obtain access tokens; in other words, you fully trust those Resource servers, so you share the same “decryptionKey” and “validationKey” values with them.

But in some situations you might have a big number of Resource servers relying on your Authorization server, so sharing the same “decryptionKey” and “validationKey” keys with all those parties becomes an inefficient as well as insecure process: you are using the same keys for multiple Resource servers, so if a key is compromised all the other Resource servers will be affected.

To overcome this issue we need to configure the Authorization server to issue access tokens using the JSON Web Token (JWT) format instead of the default access token format, and on the Resource server side we need to configure it to consume these new JWT access tokens. As well, you will see throughout this post that there is no need to unify the “decryptionKey” and “validationKey” values anymore when we use JWT.

Featured Image

What is JSON Web Token (JWT)?

JSON Web Token is a security token which acts as a container for claims about the user; it can be transmitted easily between the Authorization server (Token Issuer) and the Resource server (Audience). The claims in a JWT are encoded using JSON, which makes it easier to use, especially in applications built using JavaScript.

JSON Web Tokens can be signed following the JSON Web Signature (JWS) specification, and can also be encrypted following the JSON Web Encryption (JWE) specification. In our case we will not transmit any sensitive data in the JWT payload, so we’ll only sign the JWT to protect it from tampering during transmission between parties.

JSON Web Token (JWT) Format

Basically the JWT is a string which consists of three parts separated by a dot (.). The JWT parts are: <header>.<payload>.<signature>.

The header part is a JSON object which always contains two nodes and looks like the following:

{ "typ": "JWT", "alg": "HS256" }
The “typ” node always has the value “JWT”, and the “alg” node contains the algorithm used to sign the token; in our case we’ll use “HMAC-SHA256” for signing.

The payload part is a JSON object as well, which contains all the claims inside this token; check the example shown in the snippet below:

{
  "unique_name": "SysAdmin",
  "sub": "SysAdmin",
  "role": [
    "Manager",
    "Supervisor"
  ],
  "iss": "http://myAuthZServer",
  "aud": "379ee8430c2d421380a713458c23ef74",
  "exp": 1414283602,
  "nbf": 1414281802
}

Not all of those claims are mandatory in order to build a JWT; you can read more about JWT claims here. In our case we’ll always use this set of claims in the JWTs we are going to issue; those claims represent the following:

  • The “sub” (subject) and “unique_name” claims represent the user name this token was issued for.
  • The “role” claim represents the roles for the user.
  • The “iss” (issuer) claim represents the Authorization server (Token Issuer) party.
  • The “aud” (audience) claim represents the recipients that the JWT is intended for (Relying Party – Resource Server). More on this unique string later in this post.
  • The “exp” (expiration time) claim represents the expiration time of the JWT; this claim contains a UNIX time value (see the snippet after this list).
  • The “nbf” (not before) claim represents the time before which the JWT must not be used; this claim also contains a UNIX time value.
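As a quick throwaway illustration of those UNIX time values (nothing from the tutorial itself, just arithmetic on the payload shown above):

using System;

class UnixTimeDemo
{
    static void Main()
    {
        var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

        // "exp": 1414283602 and "nbf": 1414281802 from the payload above;
        // the difference is 1800 seconds, i.e. the 30 minute token lifetime
        Console.WriteLine(epoch.AddSeconds(1414283602)); // expiry moment (UTC, late October 2014)
        Console.WriteLine(epoch.AddSeconds(1414281802)); // not-before moment (UTC)
    }
}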

Lastly, the signature part of the JWT is created by taking the header and payload parts, base64 URL encoding them, concatenating them with “.”, then using the “alg” defined in the <header> part to generate the signature; in our case “HMAC-SHA256”. The result of this signing process is a byte array, which is base64 URL encoded and then concatenated with the <header>.<payload> to produce a complete JWT.
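To make that signing step concrete, here is a rough sketch of the computation (for illustration only; later in this post the actual signing is delegated to the JwtSecurityTokenHandler and HmacSigningCredentials classes):

using System;
using System.Security.Cryptography;
using System.Text;

static class JwtSignatureSketch
{
    // Builds <header>.<payload>.<signature> from already base64 URL encoded
    // header and payload parts, signing with HMAC-SHA256 using the shared key.
    public static string Sign(string encodedHeader, string encodedPayload, byte[] key)
    {
        var signingInput = encodedHeader + "." + encodedPayload;
        using (var hmac = new HMACSHA256(key))
        {
            var signature = hmac.ComputeHash(Encoding.UTF8.GetBytes(signingInput));
            return signingInput + "." + Base64UrlEncode(signature);
        }
    }

    // Base64 URL encoding as used by JWS: URL-safe alphabet, padding stripped
    private static string Base64UrlEncode(byte[] input)
    {
        return Convert.ToBase64String(input).TrimEnd('=').Replace('+', '-').Replace('/', '_');
    }
}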

JSON Web Tokens support in ASP.NET Web API and Owin middleware.

There is no direct support for issuing JWTs in ASP.NET Web API, nor a ready-made Owin middleware responsible for doing this, so in order to start issuing JWTs we need to implement it manually by implementing the interface “ISecureDataFormat” and its method “Protect” (more on this later). For consuming JWTs in a Resource server, however, there is a ready middleware named “Microsoft.Owin.Security.Jwt” which understands, validates, and de-serializes JWT tokens with a minimal number of lines of code.

So most of the heavy lifting we’ll do now will be in implementing the Authorization Server.

What we’ll build in this tutorial?

We’ll build a single Authorization server which issues JWTs using ASP.NET Web API 2 on top of Owin middleware. The Authorization server is hosted on Azure (http://JwtAuthZSrv.azurewebsites.net) so you can test it out by adding new Resource servers. Then we’ll build a single Resource server (audience) which will process only JWTs issued by our Authorization server.

I’ll split this post into two sections, the first section will be for creating the Authorization server, and the second section will cover creating Resource server.

The source code for this tutorial is available on GitHub.

Section 1: Building the Authorization Server (Token Issuer)

Step 1.1: Create the Authorization Server Web API Project

Create an empty solution and name it “JsonWebTokensWebApi”, then add a new ASP.NET Web application named “AuthorizationServer.Api”. The selected template for the project will be the “Empty” template with no core dependencies. Notice that the authentication is set to “No Authentication”.

Step 1.2: Install the needed NuGet Packages:

Open the package manager console and install the NuGet packages below:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0
Install-Package Microsoft.Owin.Security.OAuth -Version 3.0.0
Install-Package System.IdentityModel.Tokens.Jwt -Version 4.0.0
Install-Package Thinktecture.IdentityModel.Core -Version 1.2.0

You can refer to the previous post if you want to know what each package is responsible for. The package named “System.IdentityModel.Tokens.Jwt” is responsible for validating, parsing and generating JWT tokens. As well, we’ve used the package “Thinktecture.IdentityModel.Core”, which contains a class named “HmacSigningCredentials” that will be used to facilitate creating signing keys.

Step 1.3: Add Owin “Startup” Class:

Right click on your project then add a new class named “Startup”. It will contain the code below:

public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            // Web API routes
            config.MapHttpAttributeRoutes();
            
            ConfigureOAuth(app);
            
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            
            app.UseWebApi(config);

        }

        public void ConfigureOAuth(IAppBuilder app)
        {

            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                //For Dev enviroment only (on production should be AllowInsecureHttp = false)
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/oauth2/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
                Provider = new CustomOAuthProvider(),
                AccessTokenFormat = new CustomJwtFormat("http://jwtauthzsrv.azurewebsites.net")
            };

            // OAuth 2.0 Bearer Access Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);

        }
    }

Here we’ve created a new instance of the class “OAuthAuthorizationServerOptions” and set its options as below:

  • The path for generating JWTs will be: “http://jwtauthzsrv.azurewebsites.net/oauth2/token”.
  • We’ve specified the expiry for the token to be 30 minutes.
  • We’ve specified how to validate the client and Resource owner user credentials in a custom class named “CustomOAuthProvider”.
  • We’ve specified how to generate the access token using the JWT format in a custom class named “CustomJwtFormat”, which will be responsible for generating JWTs instead of the default access token format (protected using DPAPI); note that both use the bearer scheme.

We’ll come to the implementation of both classes in the next steps.

Step 1.4: Resource Server (Audience) Registration:

Now we need to configure our Authorization server to allow registering Resource server(s) (Audience). This step is very important because we need a way to identify which Resource server (Audience) the JWT is being requested for, so the Authorization server will be able to issue the JWT for that audience only.

The minimal information needed to register a Resource server with an Authorization server is a unique Client Id and a shared symmetric key. For the sake of keeping this tutorial simple I’m persisting this information into a volatile dictionary, so the values for those Audiences will be removed from memory if an IIS reset takes place; for a production scenario you need to store those values permanently in a database.

Now add a new folder named “Entities”, then add a new class named “Audience”; inside this class paste the code below:

public class Audience
    {
        [Key]
        [MaxLength(32)]
        public string ClientId { get; set; }
        
        [MaxLength(80)]
        [Required]
        public string Base64Secret { get; set; }
        
        [MaxLength(100)]
        [Required]
        public string Name { get; set; }
    }

Then add a new folder named “Models” and a new class named “AudienceModel”; inside this class paste the code below:

public class AudienceModel
    {
        [MaxLength(100)]
        [Required]
        public string Name { get; set; }
    }

Now add a new class named “AudiencesStore” and paste the code below:

public static class AudiencesStore
    {
        public static ConcurrentDictionary<string, Audience> AudiencesList = new ConcurrentDictionary<string, Audience>();
        
        static AudiencesStore()
        {
            AudiencesList.TryAdd("099153c2625149bc8ecb3e85e03f0022",
                                new Audience { ClientId = "099153c2625149bc8ecb3e85e03f0022", 
                                                Base64Secret = "IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw", 
                                                Name = "ResourceServer.Api 1" });
        }

        public static Audience AddAudience(string name)
        {
            var clientId = Guid.NewGuid().ToString("N");

            var key = new byte[32];
            RNGCryptoServiceProvider.Create().GetBytes(key);
            var base64Secret = TextEncodings.Base64Url.Encode(key);

            Audience newAudience = new Audience { ClientId = clientId, Base64Secret = base64Secret, Name = name };
            AudiencesList.TryAdd(clientId, newAudience);
            return newAudience;
        }

        public static Audience FindAudience(string clientId)
        {
            Audience audience = null;
            if (AudiencesList.TryGetValue(clientId, out audience))
            {
                return audience;
            }
            return null;
        }
    }

Basically this class acts like a repository for the Resource servers (Audiences); it is responsible for two things: adding a new audience and finding an existing one.

Now if you take a look at the method “AddAudience” you will notice that we’ve implemented the following:

  • Generating a random string of 32 characters as an identifier for the audience (client id).
  • Generating a 256-bit random key using the “RNGCryptoServiceProvider” class, then base64 URL encoding it; this key will be shared between the Authorization server and the Resource server only.
  • Adding the newly generated audience to the in-memory “AudiencesList”.
  • The “FindAudience” method is responsible for finding an audience based on the client id.
  • The constructor of the class contains a fixed audience for demo purposes.

Lastly we need to add an endpoint to our Authorization server which allows registering new Resource servers (Audiences), so add a new folder named “Controllers”, then add a new class named “AudienceController” and paste the code below:

[RoutePrefix("api/audience")]
    public class AudienceController : ApiController
    {
        [Route("")]
        public IHttpActionResult Post(AudienceModel audienceModel)
        {
            if (!ModelState.IsValid) {
                return BadRequest(ModelState);
            }

            Audience newAudience = AudiencesStore.AddAudience(audienceModel.Name);

            return Ok<Audience>(newAudience);

        }
    }

This endpoint can be accessed by issuing an HTTP POST request to the URI http://jwtauthzsrv.azurewebsites.net/api/audience as in the image below. Notice that the Authorization server is responsible for generating the client id and the shared symmetric key. This symmetric key should not be shared with any party except the Resource server (Audience) that requested it.

Note: In a real world scenario the Resource server (Audience) registration process won’t be this trivial; you might go through a workflow approval. Sharing the key should take place using a secure admin portal, and you might need to provide the audience with the ability to regenerate the key in case it gets compromised.

Register Audience
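For completeness, the registration request can also be issued from code rather than a REST client; a small sketch (the audience name is arbitrary):

using System;
using System.Net.Http;
using System.Text;

class RegisterAudience
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // The endpoint expects an AudienceModel, i.e. just a Name property
            var body = new StringContent("{ \"Name\": \"ResourceServer.Api 2\" }",
                                         Encoding.UTF8, "application/json");

            var response = client.PostAsync(
                "http://jwtauthzsrv.azurewebsites.net/api/audience", body).Result;

            // On success the body contains the generated ClientId and Base64Secret
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}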

Step 1.5: Implement the “CustomOAuthProvider” Class

Now we need to implement the code responsible for issuing the JSON Web Token when the requester issues an HTTP POST request to the URI: http://jwtauthzsrv.azurewebsites.net/oauth2/token. The request will look as in the image below; notice how we are setting the client id for the registered Resource server (audience) from the previous step using the key (client_id).

Issue JWT

To implement this we need to add a new folder named “Providers”, then add a new class named “CustomOAuthProvider” and paste the code snippet below:

public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            string clientId = string.Empty;
            string clientSecret = string.Empty;
            string symmetricKeyAsBase64 = string.Empty;

            if (!context.TryGetBasicCredentials(out clientId, out clientSecret))
            {
                context.TryGetFormCredentials(out clientId, out clientSecret);
            }

            if (context.ClientId == null)
            {
                context.SetError("invalid_clientId", "client_Id is not set");
                return Task.FromResult<object>(null);
            }

            var audience = AudiencesStore.FindAudience(context.ClientId);

            if (audience == null)
            {
                context.SetError("invalid_clientId", string.Format("Invalid client_id '{0}'", context.ClientId));
                return Task.FromResult<object>(null);
            }
            
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

            //Dummy check here, you need to do your DB checks against memebrship system http://bit.ly/SPAAuthCode
            if (context.UserName != context.Password)
            {
                context.SetError("invalid_grant", "The user name or password is incorrect");
                //return;
                return Task.FromResult<object>(null);
            }

            var identity = new ClaimsIdentity("JWT");

            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));
            identity.AddClaim(new Claim(ClaimTypes.Role, "Manager"));
            identity.AddClaim(new Claim(ClaimTypes.Role, "Supervisor"));

            var props = new AuthenticationProperties(new Dictionary<string, string>
                {
                    {
                         "audience", (context.ClientId == null) ? string.Empty : context.ClientId
                    }
                });

            var ticket = new AuthenticationTicket(identity, props);
            context.Validated(ticket);
            return Task.FromResult<object>(null);
        }
    }

As you notice, this class inherits from the class “OAuthAuthorizationServerProvider”; we’ve overridden two methods: “ValidateClientAuthentication” and “GrantResourceOwnerCredentials”.

  • The first method, “ValidateClientAuthentication”, is responsible for validating whether the Resource server (audience) is already registered in our Authorization server by reading the client_id value from the request; notice that the request will contain only the client_id, without the shared symmetric key. If we take the happy scenario and the audience is registered, we’ll mark the context as valid, which means the audience check has passed and the code flow can proceed to the next step: validating the resource owner credentials (the user who is requesting the token).
  • The second method, “GrantResourceOwnerCredentials”, is responsible for validating the resource owner (user) credentials. For the sake of keeping this tutorial simple I’m considering any identical username and password combination to be valid; in a real world scenario you would do your database checks against a membership system such as ASP.NET Identity, as you can see in the previous post.
  • Notice that we are setting the authentication type for those claims to “JWT”, and we are passing the audience client id as a property of the “AuthenticationProperties”; we’ll use the audience client id in the next step.
  • Now the JWT access token will be generated when we call “context.Validated(ticket)”, but we still need to implement the class “CustomJwtFormat” we talked about in step 1.3.

Step 1.6: Implement the “CustomJwtFormat” Class

This class will be responsible for generating the JWT access token, so what we need to do is add a new folder named “Formats”, then add a new class named “CustomJwtFormat” and paste the code below:

public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
        private const string AudiencePropertyKey = "audience";

        private readonly string _issuer = string.Empty;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException("data");
            }

            string audienceId = data.Properties.Dictionary.ContainsKey(AudiencePropertyKey) ? data.Properties.Dictionary[AudiencePropertyKey] : null;

            if (string.IsNullOrWhiteSpace(audienceId)) throw new InvalidOperationException("AuthenticationTicket.Properties does not include audience");

            Audience audience = AudiencesStore.FindAudience(audienceId);

            string symmetricKeyAsBase64 = audience.Base64Secret;
  
            var keyByteArray = TextEncodings.Base64Url.Decode(symmetricKeyAsBase64);

            var signingKey = new HmacSigningCredentials(keyByteArray);

            var issued = data.Properties.IssuedUtc;
            var expires = data.Properties.ExpiresUtc;

            var token = new JwtSecurityToken(_issuer, audienceId, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey);

            var handler = new JwtSecurityTokenHandler();

            var jwt = handler.WriteToken(token);

            return jwt;
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }

What we’ve implemented in this class is the following:

  • The class “CustomJwtFormat” implements the interface “ISecureDataFormat<AuthenticationTicket>”; the JWT generation takes place inside the method “Protect”.
  • The constructor of this class accepts the “Issuer” of the JWT, which will be our Authorization server. This can be a string or a URI; in our case we’ll fix it to the URI “http://jwtauthzsrv.azurewebsites.net”.
  • Inside the “Protect” method we are doing the following:
    1. Reading the audience (client id) from the Authentication Ticket properties, then fetching this audience from the in-memory store.
    2. Reading the symmetric key for this audience and base64 decoding it to a byte array, which will be used to create an HMAC-SHA256 signing key.
    3. Preparing the raw data for the JSON Web Token which will be issued to the requester by providing the issuer, audience, user claims, issue date, expiry date, and the signing key which will sign the JWT payload.
    4. Lastly we serialize the JSON Web Token to a string and return it to the requester.
  • By doing this, the requester of an access token from our Authorization server will receive a signed token which contains claims for a certain resource owner (user) and is intended for a certain Resource server (audience) as well.

So if we need to request a JWT from our Authorization server for a user named “SuperUser” that needs to access the Resource server (audience) “099153c2625149bc8ecb3e85e03f0022”, all we need to do is issue an HTTP POST to the token endpoint (http://jwtauthzsrv.azurewebsites.net/oauth2/token) as in the image below:

Issue JWT2
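
In case the image does not render here, a plain-text sketch of that request follows. The token endpoint built earlier follows the standard OAuth 2.0 resource owner password credentials flow, so the form fields are assumed to be grant_type, username, password, and client_id (the password value below is a placeholder):

POST http://jwtauthzsrv.azurewebsites.net/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=SuperUser&password=<password>&client_id=099153c2625149bc8ecb3e85e03f0022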

The result for this will be the below JSON Web Token:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6IlN1cGVyVXNlciIsInN1YiI6IlN1cGVyVXNlciIsInJvbGUiOlsiTWFuYWdlciIsIlN1cGVydmlzb3IiXSwiaXNzIjoiaHR0cDovL2p3dGF1dGh6c3J2LmF6dXJld2Vic2l0ZXMubmV0IiwiYXVkIjoiMDk5MTUzYzI2MjUxNDliYzhlY2IzZTg1ZTAzZjAwMjIiLCJleHAiOjE0MTQzODEyODgsIm5iZiI6MTQxNDM3OTQ4OH0.pZffs_TSXjgxRGAPQ6iJql7NKfRjLs1WWSliX5njSYU

There is an online JWT debugging tool named jwt.io that allows you to paste the encoded JWT and decode it so you can interpret the claims inside it. Open the tool and paste the JWT above and you should see a response like the image below; notice that all the claims are set properly, including the iss, aud, sub, role, etc…

One thing to notice here is that there is a red label which states that the signature is invalid. This is true because this tool doesn’t know about the shared symmetric key issued for the audience (099153c2625149bc8ecb3e85e03f0022).

So if we decide to share this symmetric key with the tool and paste the key in the secret text box, we should receive a green label stating that the signature is valid; this is identical to the implementation we’ll see in the Resource server when it receives a request containing a JWT.

jwtio

Now the Authorization server (Token issuer) is able to register audiences and issue JWT tokens, so let’s move to adding a Resource server which will consume the JWT tokens.

Section 2: Building the Resource Server (Audience)

Step 2.1: Creating the Resource Server Web API Project

Add a new ASP.NET Web application named “ResourceServer.Api”. The selected template for the project will be the “Empty” template with no core dependencies. Notice that the authentication is set to “No Authentication”.

Step 2.2: Installing the needed NuGet Packages:

Open the Package Manager Console and install the NuGet packages below:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0
Install-Package Microsoft.Owin.Security.Jwt -Version 3.0.0

The package “Microsoft.Owin.Security.Jwt” is responsible for protecting the Resource server resources using JWT; it only validates and deserializes JWT tokens.

Step 2.3: Add Owin “Startup” Class:

Right click on your project then add a new class named “Startup”. It will contain the code below:

public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            config.MapHttpAttributeRoutes();

            ConfigureOAuth(app);

            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
           
            app.UseWebApi(config);

        }

        public void ConfigureOAuth(IAppBuilder app)
        {
            var issuer = "http://jwtauthzsrv.azurewebsites.net";
            var audience = "099153c2625149bc8ecb3e85e03f0022";
            var secret = TextEncodings.Base64Url.Decode("IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw");

            // Api controllers with an [Authorize] attribute will be validated with JWT
            app.UseJwtBearerAuthentication(
                new JwtBearerAuthenticationOptions
                {
                    AuthenticationMode = AuthenticationMode.Active,
                    AllowedAudiences = new[] { audience },
                    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
                    {
                        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
                    }
                });

        }
    }

This is the most important step in configuring the Resource server to trust tokens issued by our Authorization server (http://jwtauthzsrv.azurewebsites.net). Notice how we are providing the values for the issuer, the audience (client id), and the shared symmetric secret we obtained when we registered the resource with the authorization server.

By providing those values to JwtBearerAuthentication middleware, this Resource server will be able to consume only JWT tokens issued by the trusted Authorization server and issued for this audience only.

Note: Always store keys in config files, not directly in source code.
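
As a minimal sketch of that advice (the appSettings key name “as:AudienceSecret” is hypothetical), the secret could live in Web.config:

<appSettings>
    <add key="as:AudienceSecret" value="IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw" />
</appSettings>

and be read inside “ConfigureOAuth” (add a using for System.Configuration):

// Reads the hypothetical appSettings entry instead of hard-coding the secret
var secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["as:AudienceSecret"]);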

Step 2.4: Add new protected controller

Now we want to add a controller which will serve as our protected resource. This controller will return a list of claims for the authorized user; those claims are encoded within the JWT we’ve obtained from the Authorization Server. So add a new controller named “ProtectedController” under the “Controllers” folder and paste the code below:

[Authorize]
    [RoutePrefix("api/protected")]
    public class ProtectedController : ApiController
    {
        [Route("")]
        public IEnumerable<object> Get()
        {
            var identity = User.Identity as ClaimsIdentity;

            return identity.Claims.Select(c => new
            {
                Type = c.Type,
                Value = c.Value
            });
        }
    }

Notice how we decorate the controller with the [Authorize] attribute, which will protect this resource and only authenticate HTTP GET requests containing a valid JWT access token. By valid I mean (a sample request sketch follows the list):

  • Not expired JWT.
  • JWT issued by our Authorization server (http://jwtauthzsrv.azurewebsites.net).
  • JWT issued for the audience (099153c2625149bc8ecb3e85e03f0022) only.
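
To exercise the endpoint, the consumer sends the JWT obtained from the Authorization server in the standard Authorization header. A sketch of such a request (the host name is hypothetical; the token is truncated for brevity):

GET /api/protected HTTP/1.1
Host: resourceserver.example.com
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6...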

Conclusion

Using signed JSON Web Tokens facilitates and standardizes the way of separating the Authorization server and the Resource server; there is no more need to unify machineKey values, nor the risk of sharing the “decryptionKey” and “validationKey” values among different Resource servers.

Keep in mind that the JSON Web Token we’ve created in this tutorial is signed only, so do not put any sensitive information in it :)

The source code for this tutorial is available on GitHub.

That’s it for now folks, hopefully this short walk through helped in understanding how we can configure the Authorization Server to issue JWT and how we can consume them in a Resource Server.

If you have any comment, question or enhancement please drop me a comment, I do not mind if you star the GitHub Repo too :)

Follow me on Twitter @tjoudeh

The post JSON Web Token in ASP.NET Web API 2 using Owin appeared first on Bit of Technology.


Darrel Miller: Add Runscope logging to your ASP.NET Web API in minutes

If you are building an ASP.NET Web API and want a view into the HTTP traffic that is hitting your API then this is a really quick solution that might prove useful.

Monitoring

Runscope is a cloud based service that allows you to monitor, measure and test Web APIs.  By using their API I was able to build an HttpMessageHandler that logs all requests and responses to the service.  Runscope has a free tier that allows you to log 10K requests/month.

Add a message handler

In your Web API project, simply add the Nuget package Runscope.HttpMessageHandler and the following line of code in your Web API configuration code:

config.MessageHandlers.Add(new RunscopeApiMessageHandler(<ApiKey>, <BucketKey>));

You can get an API key by creating an Application within Runscope; the bucket key is displayed at the bottom right hand corner of the page.
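
Putting it together, a minimal sketch of the Web API configuration with the handler registered (the appSettings key names here are hypothetical; pulling the keys from config keeps them out of source code):

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        // Hypothetical appSettings entries holding the Runscope keys
        var apiKey = ConfigurationManager.AppSettings["runscope:ApiKey"];
        var bucketKey = ConfigurationManager.AppSettings["runscope:BucketKey"];

        config.MessageHandlers.Add(new RunscopeApiMessageHandler(apiKey, bucketKey));
    }
}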

And you’re done.

Get results immediately

Once you start making requests to the API you will be able to see the details of the request in the Runscope Traffic Inspector,

RunscopeRequest

and the response…

RunscopeResponse

Let me show you how

The following demonstrates how to create a Web API, set up a Runscope account and add Runscope logging to it in less than 5 minutes.

Image credits : Heart rate monitor https://flic.kr/p/8S3ofm 
Image credits : Stopwatch https://flic.kr/p/6xSoka


Ali Kheyrollahi: Performance Series - How poor performance of HttpContent.ReadAsAsync can affect your API/site

Level [T2]

This has been a revelation - what I am about to reveal here deeply surprised me - it might surprise you too. This post is mainly about consuming RESTful APIs using HttpClient when the payload is JSON.

UPDATE: I got in touch with the ASP.NET team and they confirmed this as a performance bug which has now been fixed, but the fix is not yet available.

As you probably know, performance and benchmarking are very close to my heart, and I have recently been focusing on benchmarking a few APIs at work. One of my observations was that Web APIs/web sites which have historically been IO-bound now show signs of CPU strain and have become CPU-bound.

When you think logically about it, there is no magic here: by using async/await, you end up putting your CPU to some use, unlike the old times when threads were blocked waiting for the IO to return and the CPU would be twiddling its thumbs. However, I found the CPU overhead of the operations excessive, so I set out to benchmark a few different scenarios.

Test Setup

Two APIs were created where one was using the other. These two APIs were part of the same cloud service, which was deployed to two separate Medium (A2) web roles. I used 2 different deployments of the same code, one dependent upon version 4.0.30506.0 of the Web API and the other one with the latest version, which was 5.2.2. The difference between the two versions of the Web API is the topic of another post, but the differences were not huge, although newer versions showed improved performance.

The API being called returns a customer with its orders. Every customer has between 1 and 3 orders and each order between 1 and 3 items. In the long run, this randomisation evens out. Each document returned is between 1-2 KB. So the more superficial API makes one call to get the customer, and then separately calls the deeper API once for each of that customer's orders. Then it combines the result and sends back the response. Both APIs are deployed in the same Azure Data Centre.

You can find the whole code at GitHub. The code takes 4 different approaches as below:

public class CustomerController : ApiController
{
    public FullCustomer GetSync(int id)
    {
        var webClient = new WebClient();
        var customerString = webClient.DownloadString(BuildUrl(id));
        var customer = JsonConvert.DeserializeObject<Customer>(customerString);
        var fullCustomer = new FullCustomer(customer);
        var orders = new List<Order>();
        foreach (var orderId in customer.OrderIds)
        {
            var orderString = webClient.DownloadString(BuildUrl(id, orderId));
            var order = JsonConvert.DeserializeObject<Order>(orderString);
            orders.Add(order);
        }
        fullCustomer.Orders = orders;
        return fullCustomer;
    }

    public async Task<FullCustomer> GetASync(int id)
    {
        var webClient = new WebClient();
        var customerString = await webClient.DownloadStringTaskAsync(BuildUrl(id));
        var customer = JsonConvert.DeserializeObject<Customer>(customerString);
        var fullCustomer = new FullCustomer(customer);
        var orders = new List<Order>();
        foreach (var orderId in customer.OrderIds)
        {
            var orderString = await webClient.DownloadStringTaskAsync(BuildUrl(id, orderId));
            var order = JsonConvert.DeserializeObject<Order>(orderString);
            orders.Add(order);
        }
        fullCustomer.Orders = orders;
        return fullCustomer;
    }

    public async Task<FullCustomer> GetASyncWebApi(int id)
    {
        var httpClient = new HttpClient();
        httpClient.DefaultRequestHeaders.Add("Accept", "application/json");
        var responseMessage = await httpClient.GetAsync(BuildUrl(id));
        var customer = await responseMessage.Content.ReadAsAsync<Customer>();
        var fullCustomer = new FullCustomer(customer);
        var orders = new List<Order>();
        foreach (var orderId in customer.OrderIds)
        {
            responseMessage = await httpClient.GetAsync(BuildUrl(id, orderId));
            var order = await responseMessage.Content.ReadAsAsync<Order>();
            orders.Add(order);
        }
        fullCustomer.Orders = orders;
        return fullCustomer;
    }

    public async Task<FullCustomer> GetASyncWebApiString(int id)
    {
        var httpClient = new HttpClient();
        httpClient.DefaultRequestHeaders.Add("Accept", "application/json");
        var responseMessage = await httpClient.GetAsync(BuildUrl(id));
        var customerString = await responseMessage.Content.ReadAsStringAsync();
        var customer = JsonConvert.DeserializeObject<Customer>(customerString);
        var fullCustomer = new FullCustomer(customer);
        var orders = new List<Order>();
        foreach (var orderId in customer.OrderIds)
        {
            responseMessage = await httpClient.GetAsync(BuildUrl(id, orderId));
            var orderString = await responseMessage.Content.ReadAsStringAsync();
            var order = JsonConvert.DeserializeObject<Order>(orderString);
            orders.Add(order);
        }
        fullCustomer.Orders = orders;
        return fullCustomer;
    }

    private string BuildUrl(int customerId, int? orderId = null)
    {
        string baseUrl = string.Format("http://{0}:8080/api/customer/{1}", Request.RequestUri.Host, customerId);
        return orderId.HasValue
            ? string.Format("{0}/order/{1}", baseUrl, orderId.Value)
            : baseUrl;
    }
}
So as you can see, we use 4 different methods:

1) Using WebClient in the sync fashion
2) Using WebClient in the async fashion
3) Using HttpClient in the async fashion with ReadAsAsync on HttpContent
4) Using HttpClient in the async fashion with reading content as string and then using JsonConvert to deserialise

I used SuperBenchmarker to invoke the main API, which gathers the data from the other API. I ran the tool from another machine within the same Azure Data Centre (not one of the API machines) to make the tests more realistic yet eliminate network idiosyncrasies.

I used 5000 requests with a concurrency of 10 - although I tried other numbers as well, which did not make any material difference in the results.

Results

Here is the result for scenario 1 (sync using WebClient):

TPS:    394 (requests/second)
Max: 199ms
Min: 8ms
Avg: 25ms

50% below 24ms
60% below 25ms
70% below 27ms
80% below 28ms
90% below 30ms
95% below 32ms
98% below 36ms
99% below 55ms
99.9% below 185ms


The result for scenario 2 (async using WebClient) usually shows better throughput but higher CPU:

TPS:    485 (requests/second)
Max: 291ms
Min: 5ms
Avg: 20ms

50% below 19ms
60% below 21ms
70% below 23ms
80% below 25ms
90% below 27ms
95% below 29ms
98% below 32ms
99% below 36ms
99.9% below 284ms

The CPU difference is not huge and can be explained by the increased throughput:

CPU usage during Scenario 1 and 2

Now what surprised me greatly was the result of the third scenario (using HttpContent.ReadAsAsync<T>). Apart from the CPU hitting 100% and signs of queueing, here is the result:

TPS:    41 (requests/second)
Max: 12656ms
Min: 26ms
Avg: 240ms

50% below 170ms
60% below 178ms
70% below 187ms
80% below 205ms
90% below 256ms
95% below 296ms
98% below 370ms
99% below 3181ms
99.9% below 12573ms

Yeah, shocking. The diagram below compares CPU usage between scenario 1 and 3:

CPU usage in scenario 1 (arrow) and 3 (box)

Scenario 4 is definitely better and is not too far from scenarios 1 and 2:

TPS:    230 (requests/second)
Max: 7068ms
Min: 7ms
Avg: 43ms

50% below 20ms
60% below 22ms
70% below 24ms
80% below 26ms
90% below 29ms
95% below 34ms
98% below 110ms
99% below 144ms
99.9% below 7036ms

The CPU usage is around 80% and definitely worse than scenarios 1 and 2 (which requires further analysis).

Analysis

Where is the problem? It appears that JSON deserialization when reading from a stream is not efficient. It is possible that the JSON deserializer has to optimise for memory efficiency rather than CPU efficiency, since when the whole string is passed it is surely much faster.

Profiling proves that the problem is indeed JSON Deserialization:

Profiling scenario 3 shows that most of the CPU time is spent in JSON deserialisation

So in order to prove that, we do not have to invoke an API; the whole operation can be done inside a console application. So I used the same code that was generating customers and orders. Here I am comparing the two deserialization approaches:

private static void Main(string[] args)
{
    const int TotalRun = 10 * 1000;

    var customerController = new CustomerController();
    var orderController = new OrderController();
    var customer = customerController.Get(1);

    var orders = new List<Order>();
    foreach (var orderId in customer.OrderIds)
    {
        orders.Add(orderController.Get(1, orderId));
    }

    var fullCustomer = new FullCustomer(customer)
    {
        Orders = orders
    };

    var s = JsonConvert.SerializeObject(fullCustomer);
    var bytes = Encoding.UTF8.GetBytes(s);
    var stream = new MemoryStream(bytes);
    var content = new StreamContent(stream);

    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");

    var stopwatch = Stopwatch.StartNew();
    for (int i = 1; i < TotalRun + 1; i++)
    {
        var a = content.ReadAsAsync<FullCustomer>().Result;
        if (i % 100 == 0)
            Console.Write("\r" + i);
    }
    Console.WriteLine();
    Console.WriteLine(stopwatch.Elapsed);

    stopwatch.Restart();
    for (int i = 1; i < TotalRun + 1; i++)
    {
        var sa = content.ReadAsStringAsync().Result;
        var a = JsonConvert.DeserializeObject<FullCustomer>(sa);
        if (i % 100 == 0)
            Console.Write("\r" + i);
    }
    Console.WriteLine();
    Console.WriteLine(stopwatch.Elapsed);

    Console.Read();
}

As expected, the result shows an enormous difference, of the order of ~120x:

10000
00:00:06.2345493
10000
00:00:00.0509763

So this result basically confirms what we have seen. I will get in touch with James Newton King and try to shed more light on the subject.

Conclusion

HttpContent.ReadAsAsync on JSON payloads is really slow - in the order of 120x compared to JsonConvert. I guess it might be to do with the memory efficiency of reading from streams (keeping the memory footprint low), but that is a guess, and I have been in touch with James Newton King (creator of Json.NET) to get to the bottom of it.

In the meantime, if you know your content is not going to be huge and is always JSON, you might as well forget about content negotiation, read it as a string, and then use JsonConvert to deserialize.
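
If you take that route in several places, a small extension method keeps the call sites tidy. A minimal sketch (the class and method names here are mine, not part of any library):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class HttpContentJsonExtensions
{
    // Buffers the payload as a string and hands it to Json.NET directly,
    // sidestepping the slow stream-based ReadAsAsync<T> path measured above.
    public static async Task<T> ReadAsJsonStringAsync<T>(this HttpContent content)
    {
        var payload = await content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<T>(payload);
    }
}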


Radenko Zec: Manual JSON serialization from DataReader in ASP.NET Web API

In one of my previous blog posts, 8 ways to improve ASP.NET Web API performance, I talked about how we used manual JSON serialization from a DataReader to gain some performance benefits.

I haven’t provided any code example, only a link to this excellent blog post from Rick Strahl: JSON serialization of a DataReader.

In our production project we’ve used code from Rick Strahl’s blog post to do this.

Instead of reading values from the DataReader, populating objects, and then reading the values back from those objects to produce JSON using some JSON serializer, we can create a JSON string directly from the DataReader and avoid the unnecessary creation of objects.

I’ve received lots of requests to explain this method on my blog.

JsonSerialization

Set up JSON serialization project

As I mentioned above, we’ve used WestWind’s code to do this.

However, the library which is used to perform JSON serialization is quite large and contains a lot of code that we didn’t need.

So I’ve just extracted the code related to JSON serialization and modified it a little bit to support an additional data type and to serialize in camelCase.

We have also removed some data types we didn’t need.

You can download this modified project library here.

How to use this library

The code for serialization from a DataReader to a JSON string is very small.

We just need to call the Serialize method and pass the instance of DataReader we want to serialize.

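// "cmd" is assumed to be a SqlCommand created against an open connection earlier (not shown in the original snippet)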
string jsonResult;
var serializer = new WestwindJsonSerializer
{
     DateSerializationMode = JsonDateEncodingModes.Iso
};

using (SqlDataReader reader = cmd.ExecuteReader())
{ 
     jsonResult = serializer.Serialize(reader);
}


How to implement this in your ASP.NET Web API controller method

We have implemented the code above in our repository method GetItemsAsJson, which is used for retrieving items from the database.

The code looks similar to this.

public HttpResponseMessage GetItems([FromUri]ItemQueryParams itemQueryParams)
{
    var jsonResult = this.itemRepository.GetItemsAsJson(itemQueryParams);
    if (!string.IsNullOrEmpty(jsonResult))
    {
        var response = Request.CreateResponse(HttpStatusCode.OK);
        response.Content = new StringContent(jsonResult, Encoding.UTF8, "application/json");
        return response;
    }
    ...
    ...
}


If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.


The post Manual JSON serialization from DataReader in ASP.NET Web API appeared first on RadenkoZec blog.


Ali Kheyrollahi: SuperBenchmarker v0.4 released

Level [T2]

This is a quick shoutout on the release of version 0.4 of SuperBenchmarker, a Web and/or Web API performance benchmarking command line tool for Windows.

You might have heard about and used Apache Benchmark (ab.exe) in the past, which is a very useful tool, but on Windows it is very limited (e.g. it cannot make POST, PUT, etc. requests and only supports GET). SuperBenchmarker (sb.exe) supports PUT, DELETE, POST or any arbitrary method, and allows you to parameterise the URL and headers using a data file or a .NET DLL plugin; the new feature is randomisation, which removes the need for any setup when all that's needed is random data.


Getting started

The best way to get SuperBenchmarker is to use the awesome Chocolatey, which is Windows' equivalent of the apt-get tool on Linux.

To get Chocolatey, just run this command in your Powershell console (in Administrative mode):
iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
And then install SuperBenchmarker in the command line shell:
c:\> cinst SuperBenchmarker
And now you are ready to load test:
c:\> sb -u http://google.com
Note: if you are using Visual Studio's command line shell, you cannot use the ampersand character (&) and you have to escape it using a hat (^).

Using SuperBenchmarker

Normally you would define total number of requests and concurrency:
c:\> sb -u http://google.com -c 10 -n 2000
The statement above runs 2000 requests with a concurrency of 10. At the end, you are shown the important metrics of the test:
Status 503:    1768
Status 200: 232

TPS: 98 (requests/second)
Max: 11271.1890515802ms
Min: 3.15724613377097ms
Avg: 497.181240820346ms

50% below 34.0499543844287ms
60% below 41.8178295863705ms
70% below 48.7612961478952ms
80% below 87.4385213898198ms
90% below 490.947293319644ms
So you get the breakdown of the statuses returned, TPS (transactions per second), and the minimum, maximum and average of the time taken. But more importantly, you get the percentiles that really should be driving your performance SLAs (90% or 99%). [Never use the average for anything].

In case you need to dig deeper, a log file gets created in the current directory with the name run.log which you can change using -l parameter:
c:\> sb -u http://google.com -c 10 -n 2000 -l c:\temp\mylog.txt
The log file is a tab separated file which contains these columns: order number (based on the time started, not the time ended), status code, time taken in ms, and then any custom parameters that you might have had - see below.

Sometimes when running a test for the first time, something might not be quite right, in which case you can make a dry run/debug using the -d parameter, which makes a single request and shows the body of the response at the end. If you want to see the headers as well, use the -h parameter.
c:\> sb -u http://google.com -c 10 -n 2000 -d -h

Supplying request headers or a payload for POST, PUT and DELETE

In order to pass your tailored request headers, a template file needs to be defined which is basically the HTTP request template (minus the first line defining verb and URL and version):
c:\> sb -u http://google.com -t template.txt
And the template.txt contains our custom headers (from the second line of the HTTP request):
User-Agent: SuperBenchmarker
MyCustomHeader: foo-bar;baz=biz
Please note that you don't have to provide headers such as Host and Content-Length - in fact doing so will raise errors. These headers will be automatically added by the underlying framework.

For using POST, PUT and DELETE we need to supply the verb parameter:
c:\> sb -u http://google.com -v POST
But this request would require a payload as well, which we need to supply. We use the template file to supply the HTTP payload as well as any headers. As in an HTTP request, there must be an empty line between the headers and the body:
User-Agent: WhateverValueIWant
Content-Type: application/x-www-form-urlencoded

name=value&age=25

Parameterising your requests

Basically you can parameterise your requests using a CSV file containing values, your plugin DLL or by specifying randomisation.

You would define parameters in URL and headers (payload not yet supported but coming soon in 0.5) using SuperBenchmarker's syntax:
{{{MyParameter}}}
As you can see, we use three curly brackets to denote a parameter. For example the statement below defines a customerId parameter:
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId}}}^&ignore=false"
Please note the quoting of the URL and the use of ^ to escape the & character - if you are using the Visual Studio command prompt. To run the test successfully, you need to provide a CSV file containing customerId:
customerId
123,
245,
and use -f option to run the test:
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId}}}&ignore=false" -f c:\mypath\values.csv
Alternatively, you can use a plugin DLL to provide values:
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId}}}&ignore=false" -p c:\mypath\myplugin.dll
This DLL must have a single public class implementing the IValueProvider interface, which has a single method:
public interface IValueProvider
{
    IDictionary<string, object> GetValues(int index);
}
For every request, the implementation of the interface is called with the index of the request, and in return a dictionary of field names with their respective values is passed back. A sample implementation follows below.
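
As a sketch, a plugin that feeds the customerId parameter used above might look like the following (the class name is hypothetical; the only requirement is a single public class implementing IValueProvider):

using System;
using System.Collections.Generic;

public class CustomerIdProvider : IValueProvider
{
    private static readonly Random Rng = new Random();

    public IDictionary<string, object> GetValues(int index)
    {
        // Keys must match the parameter names used in the URL/headers, e.g. {{{customerId}}}
        return new Dictionary<string, object>
        {
            { "customerId", Rng.Next(1000, 2000) }
        };
    }
}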

Now we have a new feature that in most cases alleviates the need for a CSV file or plugin: the ability to set up a random value provider in the definition of the parameter itself:
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId:RAND_INTEGER:[1000:2000]}}}&ignore=false"
The parameter above is set up to be filled by a random integer between 1000 and 2000.
Possible value types are:
  • String: using RAND_STRING. Will output random words
  • Date: using RAND_DATE (accepts range)
  • DateTime: using RAND_DATETIME (accepts range)
  • DateTimeOffset: using RAND_DATETIMEOFFSET which outputs ISO dates (accepts range)
  • Double: using RAND_DOUBLE (accepts range)
  • Name: using RAND_NAME. Will output random names
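
Putting a couple of these together, a hypothetical run combining two randomised parameters might look like this (the parameter names are arbitrary):
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId:RAND_INTEGER:[1000:2000]}}}&name={{{custName:RAND_NAME}}}" -c 10 -n 1000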

Feedback

Don't forget to give feedback with issues and feature requests on the GitHub page. Happy load testing!


Darrel Miller: Xamarin Evolve

This past week I spent in Atlanta, Georgia, attending Xamarin Evolve and Atlanta Code Camp.  This was the second annual Evolve conference and attendance went from 600 the first year to 1200 this year.  This year’s event was an impressive affair.

WP_20141010_001

Cultivating a Niche

Not only did the number of attendees grow significantly from last year, 700 of those people attended the pre-conference training sessions.  This is a significant amount of demand for what is perceived by some to be a fairly expensive set of tools for building cross-platform native mobile applications.

As Mike Beverly notes, Xamarin are making the transition from a place where C# .Net developers go to create iOS and Android versions of their applications, to a cross platform solution for developers of all platforms who are prepared to learn C# and .net.

The idea that you can build mobile applications that have the look and feel of the native platform, but share a significant chunk of client code between platforms is a lofty goal.  One that Xamarin seem to be making significant progress towards.

Building a Community

Numerous people have made comparisons between the atmosphere at the Evolve event and Microsoft PDCs of the past.  There is no doubt there was a palpable excitement in the air.

There was a sensation that Xamarin was going all out to make sure that attendees got everything they possibly could out of the event.  Xamarin employees were everywhere and highly visible due to them all wearing the same shirts.  Attendees could schedule one on one meetings to discuss projects and ask questions.  The Darwin Lounge was full of toys and challenges to give developers the chance to explore all kinds of cool things.

The technical production of the event, the food and drinks, the social events, the venue, were all quite spectacular.  This holistic focus on quality is what makes developers want to be part of the community.  It feels safe.  It feels like it will have a future.

WP_20141009_001

We as developers are tired of half finished products being thrown over the wall at us by vendors;  those products being milked dry once they become successful and then left to rot when the next cool thing comes along.

When you eat the locally made fruit popsicle and find some Xamarin inspired pun printed on the popsicle stick, you want to believe that the same attention to detail is going into the products.

Heading in the right direction

While preparing for my talk at Evolve I took the opportunity to write some sample applications and test some of my OSS PCL libraries on various platforms.  I was impressed that my libraries worked unchanged on Android.  Creating simple applications that worked on multiple platforms was quite straightforward once I had figured out how to setup the Android Emulator.

Xamarin have recognized the pain of using the Android SDK for development and announced the release of their own Android player that not only makes setup and configuration way easier, but it is also significantly faster than Google’s own emulator.

In the opening keynote, there were a number of other significant product announcements.  The Sketches tool provides an instant feedback mechanism to write code and see it immediately appear on apps running on iOS and Android emulators.  This high speed feedback loop is the kind of feature that Web Browser and HTML/CSS advocates have been rubbing in the face of native developers for years.

The Insights real-time monitoring product and new features in Test Cloud demonstrate Xamarin’s desire to get developers to produce high quality apps with their toolset.  This is a far cry from Microsoft’s efforts to evangelize the Windows Phone where the marketing department seem to be focused on the quantity of apps in the app store with little care for quality.

Architecture is important

Regardless of all the delightful trimmings that VC funded companies are able to display, when it comes down to it, architecture is a key component of longevity.  A native user interface will always have a performance edge over a generalized user interface solution.  In the past, cross-platform issues and development speed made native development a barrier to many.  Xamarin is lowering that barrier.

As someone who has always been a believer in native user experiences, I really hope to see many more editions of Evolve in the future.


Taiseer Joudeh: Two Factor Authentication in ASP.NET Web API & AngularJS using Google Authenticator

Last week I was looking at how to enable Two Factor Authentication in a RESTful ASP.NET Web API service using soft tokens, not SMS. Most of the examples out there show how to implement this in an MVC application where there will be some cookies transmitted between requests; this approach defeats the stateless nature of RESTful APIs. As well, most of the examples ask for the passcode on the login form only, so there was no complete example of how to implement TFA in a RESTful API.

The live AngularJS demo application is hosted on Azure, and the source code for this tutorial is on GitHub.

Basically the requirements I needed to fulfill in the solution I’m working on are the following:

  • The Web API should stay stateless, no cookies should be transmitted between the consumer and the API.
  • Each call to the Web API should be authenticated using OAuth 2.0 Bearer tokens as we implemented before.
  • Two Factor Authentication is required only for some sensitive API requests; in other words, selective sensitive endpoints (i.e. transfer money) should ask for a Two Factor Authentication time sensitive passcode along with the valid bearer access token, while the other endpoints will use only one factor for authentication, which is the OAuth 2.0 bearer access token.
  • We need to build a cost effective solution, with no added cost from sending SMSs which contain a passcode; we will depend on our users’ smartphones as a soft token. The user needs to install the Google Authenticator application on his/her smartphone, which will generate a time sensitive passcode valid for only 30 seconds.
  • Lastly the front-end application which will consume this API will be built using AngularJS and should support displaying QR codes for the Preshared key.

TFA Featured Image

Before jumping into the implementation I’d like to emphasize some concepts to make sure that we understand the core components of Two Factor Authentication and how Google Authenticator works as a soft token.

What is Two Factor Authentication?

Any user trying to access a system can be authenticated by one of the below ways:

  1. Something the user knows (Password, Secret).
  2. Something the user owns (Mobile Phone, Device).
  3. Something the user is (Bio-metric, fingerprint).

Two Factor authentication is a combination of any two of the above three ways. When we want to apply this to real business world applications, we usually use the first and the second ways. This is because the third way (biometrics) is very expensive and complicated to roll out, and the end user experience can be problematic.

So Two Factor authentication is made up of something the user knows and another thing the user owns. The smartphone is something the user owns, so receiving a passcode on it (via SMS or call), or generating a passcode using the smartphone (as in our case), will allow us to add Two Factor authentication.

In our back-end API we’ll ask the user to provide the below two separate factors when he/she wants to access a sensitive endpoint; the factors the user should provide are:

  1. The OAuth 2.0 bearer access token which is granted to the user when he provides his/her username and password at an earlier stage.
  2. The time sensitive passcode generated on the smartphone which the user owns, using Google Authenticator.

What is Google Authenticator?

Google Authenticator is a mobile application which is available on different platforms; the application works as a soft token responsible for generating time sensitive passcodes, which are used to implement Two Factor authentication for Google services. Basically, the time sensitive passcode generated contains six digits and is valid for 30 seconds only; then a new six digits will be generated immediately.

Once you install the Google Authenticator application on your smartphone, it will ask you to enter a Preshared Secret/Key shared between you and the application. This Preshared Key will be generated by our back-end API for each user registered in our system and will be displayed on the UI as a QR code or as plain text, so the user can add it to Google Authenticator. More about generating this Preshared Key later in this post.

The nice thing about Google Authenticator is that it generates the time sensitive passcode on the smartphone without depending on anything; there is no need for an internet connection or access to Google services to generate the passcode. How does this happen? Well, Google Authenticator implements the Time-based One-time Password algorithm (TOTP), which computes a one-time password from a shared secret key and the current time. TOTP is based on the HMAC-based One Time Password algorithm (HOTP), with a time-stamp replacing the incrementing counter in HOTP. You can check the implementation for Google Authenticator here.

Google Authenticator

In our solution, the first step to authenticate the user is to provide his username and password in order to obtain an OAuth 2.0 bearer access token, which is considered knowledge-factor authentication. Then, as we agreed before, for certain selective sensitive API endpoints (which require second factor authentication), the user will open the Google Authenticator application installed on his/her smartphone and look up the time sensitive passcode generated from the Preshared Key; this passcode represents something the user owns (ownership-factor authentication).

Process flow for using Google Authenticator with our Back-end API

In this section I’ll show you the process flow the user will follow in our Two Factor authentication enabled solution in order to access the selective sensitive API endpoints.

  • The user will register in our back-end system by providing a username, password, and confirm password (normal registration process). Upon successful registration, in the back-end API a Preshared Key (PSK) is generated for this user and returned to the UI in the form of a QR code and plain text (5FDAEHNBNM6W3L2S), so the user can open the Google Authenticator application installed on his smartphone and scan this Preshared Key, or enter it manually and select a time based key. The UI will look like the image below:

TFA Signup Form

  • Now the user will login to our system by providing his username/password and obtaining an OAuth 2.0 access token, which will allow him to access our protected API resources. As long as he is trying to access a non-elevated endpoint (i.e. not the sensitive transfer money endpoint), our back-end API will be happy with only one-factor authentication (the knowledge-based factor), which is the OAuth 2.0 bearer access token alone. As in the images below, the user is allowed to access his transactions history (a non-sensitive endpoint, yet it is protected by one-factor authentication).

Login

Transactions History

  • Now the user wants to perform an elevated sensitive request (Transfer Money). This endpoint in our back-end API is configured to ask for a second ownership factor before completing the request, so the UI will ask the user to enter the time sensitive passcode provided by Google Authenticator. The UI for Transfer Money will look as below:

Transfer Money TFA Code

  • The user will fill the form with the details he wants. If he tries to issue a transfer request without providing the second factor authentication (passcode), the request will be rejected and HTTP 401 Unauthorized will be returned, because this endpoint is sensitive and needs a second ownership factor to authenticate successfully. So the user will grab his smartphone, open the Google Authenticator application, type the six digit passcode that appears in the application, and hit the transfer button. This six digit code will be sent in a custom HTTP header along with the OAuth 2.0 access token to this sensitive endpoint.

Google Authenticator Passcode

  • Now the API receives this passcode, for example (472307), and checks the code’s validity; if it is valid then the API will process the request and the user will complete transferring the money successfully (execution of the sensitive endpoint succeeds).

Money Transfer Successfully

The last step is the trickiest one; you are now asking yourself how this six digit code generated by Google Authenticator, which keeps changing every 30 seconds, will be understood and considered authentic by our back-end API. To answer this we need to take a brief look at how Google Authenticator produces those six digits, because we’ll simulate the same procedure in our back-end API.

How does Google Authenticator work?

As we have stated before, Google Authenticator supports the TOTP algorithm, built on top of the HOTP algorithm, for generating one-time or time sensitive passcodes.

The idea behind HOTP is that the server (our back-end API) and the client (the Google Authenticator app) share a Preshared Key value and a counter, which are used to compute a one-time passcode independently on both sides (notice it is a one-time passcode, not a time sensitive passcode). Whenever a passcode is generated and used, the counter is incremented on both sides, allowing the server and client to remain in sync.

TOTP is built on top of HOTP: it uses the same algorithm as HOTP with one clear difference; the counter used in TOTP is replaced by the current time. But the challenge here is that the six digit passcode would keep changing with every passing second, which would not be feasible because the end user would not be able to enter the number when asked for it.

To solve this issue we derive the counter from the current time divided into 30 second intervals (the number of 30 second intervals elapsed since the Unix epoch), so the six digit passcode becomes usable and stays constant for 30 seconds. Note that all time calculations are based on UTC, so there is no need to worry about the time zone of the server or the time zone of the user using the Google Authenticator application. A worked example of this counter arithmetic follows below.
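
As a quick worked example of that counter arithmetic (all times UTC; the real implementation appears in Step 7 below):

// Seconds since the Unix epoch at 2014-10-27 00:00:00 UTC = 1,414,368,000
var epochStart = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
var now = new DateTime(2014, 10, 27, 0, 0, 0, DateTimeKind.Utc);
long counter = (long)Math.Floor((now - epochStart).TotalSeconds / 30); // = 47,145,600

// Every timestamp from 1,414,368,000 up to 1,414,368,029 seconds maps to this same
// counter value, so the six digit passcode derived from it holds for 30 seconds.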

When the sensitive API endpoint receives this time sensitive passcode, it will repeat the process described above. Basically the API will learn the user id from the OAuth access token sent, fetch the Preshared Key saved in the database for this user, and from there generate the time sensitive passcode and compare it with the passcode sent from the client application (UI). If the passcode sent from the UI matches the one generated in the back-end API, then the API will consider this passcode authentic and process the request successfully.

Note: One of the greatest books describing how Google Authenticator works is Pro ASP.NET Web API Security (Chapter 14) by Badrinarayanan Lakshmiraghavan. Highly recommended if you are interested in ASP.NET Web API security in general.

Now it is time to start implementing this solution; this is the longest theory section I’ve written so far in all my blog posts, so we’d better move to the code :)

Building the Back-End API

I highly recommend reading my previous post Token Based Authentication before implementing the steps below and enabling Two Factor Authentication, because the steps mentioned below are nearly identical to the previous post. I’ll list the identical steps and be very brief in explaining what each step does, except for the new steps related to enabling Two Factor Authentication in our back-end API.

Step 1: Creating the Web API Project

Create an empty solution and name it “TFAWebApiAngularJS”, then add a new ASP.NET Web application named “TwoFactorAuthentication.API”. The selected template for the project will be the “Empty” template with no core dependencies. Notice that the authentication is set to “No Authentication”, taking into consideration that we’ll add this manually.

Step 2: Installing the needed NuGet Packages:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0
Install-Package Microsoft.Owin.Security.OAuth -Version 3.0.0
Install-Package Microsoft.AspNet.Identity.Owin -Version 2.0.1
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.0.1

Step 3: Add Owin “Startup” Class

Right click on your project then add a new class named “Startup”. It will contain the code below:

public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            ConfigureOAuth(app);

            WebApiConfig.Register(config);
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            app.UseWebApi(config);

        }

        public void ConfigureOAuth(IAppBuilder app)
        {
            OAuthBearerAuthenticationOptions OAuthBearerOptions = new OAuthBearerAuthenticationOptions();

            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                //For Dev enviroment only (on production should be AllowInsecureHttp = false)
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
                Provider = new SimpleAuthorizationServerProvider()
            };

            // Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);

            //Token Consumption
            app.UseOAuthBearerAuthentication(OAuthBearerOptions);

        }
    }

Basically this class will configure our back-end API to use OAuth 2.0 bearer tokens to secure the endpoints attributed with the [Authorize] attribute; it also sets the access token to expire after 24 hours. We’ll implement the class “SimpleAuthorizationServerProvider” in later steps.

Step 4: Add “WebApiConfig” class

Right click on your project, add a new folder named “App_Start”, inside this folder add a class named “WebApiConfig”, then paste the code below:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API routes
            config.MapHttpAttributeRoutes();

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

Step 5: Add the ASP.NET Identity System

Now we’ll configure our user store to use the ASP.NET Identity system to store the user profiles. To do this we need a database context class which will be responsible for communicating with our database, so add a new class, name it “AuthContext”, then paste the code snippet below:

public class AuthContext : IdentityDbContext<ApplicationUser>
    {
        public AuthContext()
            : base("AuthContext")
        {
        }
    }

    public class ApplicationUser : IdentityUser
    {
        [Required]
        [MaxLength(16)]
        public string PSK { get; set; }
    }

As you see in the code above, the “AuthContext” class inherits from “IdentityDbContext<ApplicationUser>”, where “ApplicationUser” inherits from “IdentityUser”. This is done because we need to extend the “AspNetUsers” table and add a new column named “PSK”, which will contain the Preshared Key generated for this user by our back-end API. More on generating this Preshared Key later in the post.

Now we want to add a “UserModel” which contains the properties needed when we register a user. This model is a POCO class with some data annotation attributes used for validating the registration payload. So add a new folder named “Models”, then add a new class named “UserModel” and paste the code below:

public class UserModel
    {
        [Required]
        [Display(Name = "User name")]
        public string UserName { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [DataType(DataType.Password)]
        [Display(Name = "Confirm password")]
        [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    }

Now we need to add a new connection string named “AuthContext” in our Web.config file, so open your web.config and add the section below:

<connectionStrings>
    <add name="AuthContext" connectionString="Data Source=.\sqlexpress;Initial Catalog=TFAAuth;Integrated Security=SSPI;" providerName="System.Data.SqlClient" />
  </connectionStrings>

Step 6: Add Repository class to support ASP.NET Identity System

Now we want to implement the two methods needed in our application, “RegisterUser” and “FindUser”, so add a new class named “AuthRepository” and paste the code snippet below:

public class AuthRepository : IDisposable
    {
        private AuthContext _ctx;

        private UserManager<ApplicationUser> _userManager;

        public AuthRepository()
        {
            _ctx = new AuthContext();
            _userManager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(_ctx));
        }

        public async Task<IdentityResult> RegisterUser(UserModel userModel)
        {
            ApplicationUser user = new ApplicationUser
            {
                UserName = userModel.UserName,
                TwoFactorEnabled = true,
                PSK = TimeSensitivePassCode.GeneratePresharedKey()
            };

            var result = await _userManager.CreateAsync(user, userModel.Password);

            return result;
        }

        public async Task<ApplicationUser> FindUser(string userName, string password)
        {
            ApplicationUser user = await _userManager.FindAsync(userName, password);
            
            return user;
        }

        public void Dispose()
        {
            _ctx.Dispose();
            _userManager.Dispose();

        }
    }

By looking at the code above you will notice that we are generating the Preshared Key for the registered user by calling the static method “TimeSensitivePassCode.GeneratePresharedKey” (implemented in the next step); this Preshared Key will be sent back to the end user so he can enter this 16 character key in his Google Authenticator application.

Step 7: Add support for generating Preshared keys and passcodes

Note: There are lots of implementations of the Google Authenticator algorithms (HOTP and TOTP) out there on different platforms, including .NET, but nothing beats Badrinarayanan Lakshmiraghavan’s implementation in simplicity and the minimal amount of code used. The implementation is available to the public in the companion source code which comes with his Pro ASP.NET Web API Security book, so all the credit for the implementation below goes to Badrinarayanan, and I’ll not re-explain how it is done. Badrinarayanan explained it in a very simple way, so my recommendation is to check his book.

So add a new folder named “Services” and inside it add a new file named “TimeSensitivePassCode.cs” then paste the code below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Web;
using System.Text;

namespace TwoFactorAuthentication.API.Services
{
    public static class TimeSensitivePassCode
    {
        public static string GeneratePresharedKey()
        {
            byte[] key = new byte[10]; // 80 bits
            using (var rngProvider = new RNGCryptoServiceProvider())
            {
                rngProvider.GetBytes(key);
            }

            return key.ToBase32String();
        }

        public static IList<string> GetListOfOTPs(string base32EncodedSecret)
        {
            DateTime epochStart = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);

            long counter = (long)Math.Floor((DateTime.UtcNow - epochStart).TotalSeconds / 30);
            var otps = new List<string>();

            otps.Add(GetHotp(base32EncodedSecret, counter - 1)); // previous OTP
            otps.Add(GetHotp(base32EncodedSecret, counter)); // current OTP
            otps.Add(GetHotp(base32EncodedSecret, counter + 1)); // next OTP

            return otps;
        }

        private static string GetHotp(string base32EncodedSecret, long counter)
        {
            byte[] message = BitConverter.GetBytes(counter).Reverse().ToArray(); //Intel machine (little endian) 
            byte[] secret = base32EncodedSecret.ToByteArray();

            HMACSHA1 hmac = new HMACSHA1(secret, true);

            byte[] hash = hmac.ComputeHash(message);
            int offset = hash[hash.Length - 1] & 0xf;
            int truncatedHash = ((hash[offset] & 0x7f) << 24) |
            ((hash[offset + 1] & 0xff) << 16) |
            ((hash[offset + 2] & 0xff) << 8) |
            (hash[offset + 3] & 0xff);

            int hotp = truncatedHash % 1000000; 
            return hotp.ToString().PadLeft(6, '0');
        }
    }

    public static class StringHelper
    {
        private static string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

        public static string ToBase32String(this byte[] secret)
        {
            var bits = secret.Select(b => Convert.ToString(b, 2).PadLeft(8, '0')).Aggregate((a, b) => a + b);

            return Enumerable.Range(0, bits.Length / 5).Select(i => alphabet.Substring(Convert.ToInt32(bits.Substring(i * 5, 5), 2), 1)).Aggregate((a, b) => a + b);
        }

        public static byte[] ToByteArray(this string secret)
        {
            var bits = secret.ToUpper().ToCharArray().Select(c => Convert.ToString(alphabet.IndexOf(c), 2).PadLeft(5, '0')).Aggregate((a, b) => a + b);

            return Enumerable.Range(0, bits.Length / 8).Select(i => Convert.ToByte(bits.Substring(i * 8, 8), 2)).ToArray();
        }

    }
}

Briefly, what we’ve implemented in this class is the following:

  • We’ve added a static method named “GeneratePresharedKey” which is responsible for generating the 16 characters of our Preshared Key; basically it is an array of 80 bits which is encoded using the base32 format. This base32 format uses the 26 letters A-Z and the six digits 2-7, producing a restricted set of characters that can be conveniently used by end users.
  • Why did we encode the key using the base32 format? Because Google Authenticator uses the same encoding to help end users who prefer to enter the Preshared Key manually; the digits which might be confused with letters are omitted (i.e. 0, 1, 8, 9). The implementation of base32 encoding can be found in the extension method named “ToBase32String” in the helper class “StringHelper”.
  • We’ve implemented the static method “GetHotp” which accepts the base32 encoded Preshared Key (16 characters) and a counter; this method is responsible for generating the one-time passcodes.
  • As we stated before, the implementation of TOTP is built on top of HOTP; the only major difference is that the counter is replaced by time. So in the method “GetListOfOTPs” we are getting three time sensitive passcodes: one in the past, one in the present, and one in the future. The reason for this is to accommodate clock drift between the server and the smartphone where the passcode is generated using Google Authenticator; we are basically making it easy for the end user when he enters the time sensitive passcodes. A short sketch of how these passcodes are checked follows below.

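To give a flavour of how these passcodes are checked when a request arrives at the API, the comparison can be as simple as the sketch below (the variable names are hypothetical: “user” is the ApplicationUser resolved from the bearer token, and “passCode” is the code sent by the client; add a using for System.Linq):

// Accepts the previous, current, or next 30 second window's passcode
bool isValid = TimeSensitivePassCode.GetListOfOTPs(user.PSK).Any(otp => otp.Equals(passCode));
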
Step 8: Add our “Account” Controller

Now we’ll add a Web API controller which will be used to register new users, so add a new folder named “Controllers”, then add an Empty Web API 2 Controller named “AccountController” and paste the code below:

[RoutePrefix("api/Account")]
    public class AccountController : ApiController
    {
        private AuthRepository _repo = null;

        public AccountController()
        {
            _repo = new AuthRepository();
        }

        // POST api/Account/Register
        [AllowAnonymous]
        [Route("Register")]
        public async Task<IHttpActionResult> Register(UserModel userModel)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            IdentityResult result = await _repo.RegisterUser(userModel);
            
            IHttpActionResult errorResult = GetErrorResult(result);

            if (errorResult != null)
            {
                return errorResult;
            }

            ApplicationUser user = await _repo.FindUser(userModel.UserName, userModel.Password);

            return Ok(new { PSK = user.PSK });
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                _repo.Dispose();
            }

            base.Dispose(disposing);
        }

        private IHttpActionResult GetErrorResult(IdentityResult result)
        {
            if (result == null)
            {
                return InternalServerError();
            }

            if (!result.Succeeded)
            {
                if (result.Errors != null)
                {
                    foreach (string error in result.Errors)
                    {
                        ModelState.AddModelError("", error);
                    }
                }

                if (ModelState.IsValid)
                {
                    // No ModelState errors are available to send, so just return an empty BadRequest.
                    return BadRequest();
                }

                return BadRequest(ModelState);
            }

            return null;
        }
    }

It is worth noting here that inside the method “Register” we’re returning the PSK after registering the user successfully. This PSK will be displayed on the UI in the form of a QR code as well as plain text, so the user has two ways to enter it into Google Authenticator.
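
As an aside, the QR code variant typically encodes a Key Uri in the well-known “otpauth” format that Google Authenticator understands. A hypothetical helper for building it is sketched below; it is not part of the post's code, and the label and issuer values are whatever the UI wants the app to display.

// Hypothetical helper: builds the otpauth URI that a QR code would encode.
// "base32Psk" is the key returned by the Register endpoint; "label" and
// "issuer" are illustrative display values.
public static string BuildOtpAuthUri(string label, string base32Psk, string issuer)
{
    return string.Format(
        "otpauth://totp/{0}?secret={1}&issuer={2}",
        Uri.EscapeDataString(label),
        base32Psk,
        Uri.EscapeDataString(issuer));
}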

Step 9: Add Protected Money Transactions Controller with Two Actions

Now we’ll add a protected controller which can be accessed only if the request contains a valid OAuth 2.0 bearer access token. Inside this controller we’ll add two action methods: “GetHistory” requires only one-factor authentication (a bearer token alone) to process the request, while the more sensitive action method named “PostTransfer” requires two-factor authentication.

So add a new Web API controller named “TransactionsController” under the folder “Controllers” and paste the code below:

[Authorize]
    [RoutePrefix("api/Transactions")]
    public class TransactionsController : ApiController
    {
        [Route("history")]
        public IHttpActionResult GetHistory()
        {
            return Ok(Transaction.CreateTransactions());
        }

        [Route("transfer")]
        [TwoFactorAuthorize]
        public IHttpActionResult PostTransfer(TransferMoneyModel transferMoneyModel)
        {
            return Ok();
        }
    }

    #region Helpers

    public class Transaction
    {
        public int ID { get; set; }
        public string CustomerName { get; set; }
        public string Amount { get; set; }
        public DateTime ActionDate { get; set; }


        public static List<Transaction> CreateTransactions()
        {
            List<Transaction> TransactionList = new List<Transaction> 
            {
                new Transaction {ID = 10248, CustomerName = "Taiseer Joudeh", Amount = "$1,545.00", ActionDate = DateTime.UtcNow.AddDays(-5) },
                new Transaction {ID = 10249, CustomerName = "Ahmad Hasan", Amount = "$2,200.00", ActionDate = DateTime.UtcNow.AddDays(-6)},
                new Transaction {ID = 10250,CustomerName = "Tamer Yaser", Amount = "$300.00", ActionDate = DateTime.UtcNow.AddDays(-7) },
                new Transaction {ID = 10251,CustomerName = "Lina Majed", Amount = "$3,100.00", ActionDate = DateTime.UtcNow.AddDays(-8)},
                new Transaction {ID = 10252,CustomerName = "Yasmeen Rami", Amount = "$1,100.00", ActionDate = DateTime.UtcNow.AddDays(-9)}
            };

            return TransactionList;
        }
    }

    public class TransferMoneyModel
    {
        public string FromEmail { get; set; }
        public string ToEmail { get; set; }
        public double Amount { get; set; }
    }

    #endregion
}

Notice that the “Authorize” attribute is set at the controller level, which means both actions need a bearer token to process the request. On the action method “PostTransfer”, however, there is a new custom authorization filter attribute named “TwoFactorAuthorizeAttribute”. This custom attribute is responsible for enabling two-factor authentication on any sensitive controller, action method, or HTTP verb we choose in the future: all we need to do is apply it to the endpoint whose security level we want to elevate by requiring a second authentication factor.
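
For example, because it is an ordinary authorization filter, nothing stops us from applying it at the class level instead; a hypothetical sketch (not part of this tutorial's code):

// Hypothetical example: requiring a second factor for every action
// by applying the attribute at the controller level.
[Authorize]
[TwoFactorAuthorize]
[RoutePrefix("api/Admin")]
public class AdminController : ApiController
{
    // Every action here now requires a bearer token AND a valid X-OTP header.
}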

Step 10: Implement the “TwoFactorAuthorizeAttribute”

Before starting to implement the “TwoFactorAuthorizeAttribute” we need to add a simple helper class which is responsible for inspecting the request headers, looking for the time-sensitive passcode sent by the client application in a custom header. To implement this, add a new folder named “Helpers”, and inside it add a new file named “OtpHelper” and paste the code below:

public static class OtpHelper
    {
        private const string OTP_HEADER = "X-OTP";

        public static bool HasValidTotp(this HttpRequestMessage request, string key)
        {
            if (request.Headers.Contains(OTP_HEADER))
            {
                string otp = request.Headers.GetValues(OTP_HEADER).First();
                
                // We need to check the passcode against the past, current, and future passcodes
               
                if (!string.IsNullOrWhiteSpace(otp))
                {
                    if (TimeSensitivePassCode.GetListOfOTPs(key).Any(t => t.Equals(otp)))
                    {
                        return true;
                    }
                }

            }
            return false;
        }
    }

So basically this class looks for a custom header named “X-OTP” in the HTTP request headers; if this header is found, we send its value (the time-sensitive passcode) along with the authenticated user's Preshared key to the method “GetListOfOTPs” which we defined in step 7.

If this time-sensitive passcode exists in the list of passcodes (past, current, and future), then the passcode is valid and authentic; otherwise the passcode is invalid or the user didn’t include it in the request.

Now to implement the custom “TwoFactorAuthorizeAttribute” we need to add a new folder named “Filters” and inside it add a new class named “TwoFactorAuthorizeAttribute” then paste the code below:

public class TwoFactorAuthorizeAttribute : AuthorizationFilterAttribute
    {
        public override Task OnAuthorizationAsync(HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
        {
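            // NOTE: the bearer-token middleware (via the "Authorize" attribute)
            // has already authenticated the request by the time this filter
            // runs, so the "PSK" claim added at token issuance (step 11) is
            // expected to be present on the principal.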
            var principal = actionContext.RequestContext.Principal as ClaimsPrincipal;

            var preSharedKey = principal.FindFirst("PSK").Value;
            bool hasValidTotp = OtpHelper.HasValidTotp(actionContext.Request, preSharedKey);

            if (hasValidTotp)
            {
                return Task.FromResult<object>(null);
            }
            else
            {
                actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized, new CustomError() { Code = 100, Message = "One Time Password is Invalid" });
                return Task.FromResult<object>(null);
            }
        }
    }

    public class CustomError
    {
        public int Code { get; set; }
        public string Message { get; set; }
    }

What we’ve implemented in this custom authorization filter is the following:

  • This custom Authorize attribute inherits from “AuthorizationFilterAttribute” and we are overriding the “OnAuthorizationAsync” method.
  • We can be 100% sure that the code execution flow will not reach this authorization filter if the user is not authenticated by the OAuth 2.0 bearer token sent; this custom authorize attribute runs later in the pipeline, after the “Authorize” attribute.
  • Inside the “OnAuthorizationAsync” method we look at the claims of the authenticated user for a custom claim of type “PSK”, which contains the value of the Preshared key for this authenticated user. (We haven’t set this claim yet; you will see how it is added in the next step.)
  • Then we call the helper method “OtpHelper.HasValidTotp”, passing the HTTP request (which contains the time-sensitive passcode in a custom header) along with the Preshared key. If this method returns true, we consider the request valid and the two-factor authentication requirements fulfilled.
  • If the request doesn’t contain a valid time-sensitive passcode, we return HTTP status code 401 along with a message and an arbitrary integer code used by the UI.

Step 11: Implement the “SimpleAuthorizationServerProvider” class

Add a new folder named “Providers”, then add a new class named “SimpleAuthorizationServerProvider” and paste the code snippet below:

public class SimpleAuthorizationServerProvider : OAuthAuthorizationServerProvider
    {
        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            var allowedOrigin = "*";
            ApplicationUser appUser = null;

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            using (AuthRepository _repo = new AuthRepository())
            {
                appUser = await _repo.FindUser(context.UserName, context.Password);

                if (appUser == null)
                {
                    context.SetError("invalid_grant", "The user name or password is incorrect.");
                    return;
                }
            }

            var identity = new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim(ClaimTypes.Role, "User"));
            identity.AddClaim(new Claim("PSK", appUser.PSK));

            var props = new AuthenticationProperties(new Dictionary<string, string>
                {
                    { 
                        "userName", context.UserName
                    }
                });

            var ticket = new AuthenticationTicket(identity, props);
            context.Validated(ticket);
        }

        public override Task TokenEndpoint(OAuthTokenEndpointContext context)
        {

            foreach (KeyValuePair<string, string> property in context.Properties.Dictionary)
            {
                context.AdditionalResponseParameters.Add(property.Key, property.Value);
            }

            return Task.FromResult<object>(null);
        }
    }

It is worth mentioning that the second method, “GrantResourceOwnerCredentials”, is responsible for validating the username and password sent to the authorization server's token endpoint. After fetching the user from the database, we add a custom claim named “PSK” whose value is the Preshared key for this authenticated user. This claim will be included in the signed OAuth 2.0 bearer token, which is why we can read the PSK value directly in step 10.

Step 12: Testing the Back-end API

The first step in testing the API is to register a new user, so open your favorite REST client application and issue an HTTP POST request to the endpoint https://ngtfaapi.azurewebsites.net/api/account/register as in the image below:

Register User

If the request is processed successfully you will receive a 200 status along with the Preshared key, so open the Google Authenticator app and enter this key manually.

Now we need to obtain an OAuth 2.0 bearer access token to allow us to request the protected endpoints. This represents the first authentication factor, because the user provides his username and password, so we issue an HTTP POST request to the endpoint http://ngtfaapi.azurewebsites.net/token as in the image below:

Request Access Token
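
For readers who prefer code over a REST client, a minimal HttpClient sketch of this token request follows. It assumes the resource owner password credentials grant wired up in step 11; the names are illustrative, and parsing the response is left as a comment.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenClientSketch
{
    // Hypothetical sketch: exchanges username/password for a bearer token
    // at the /token endpoint (resource owner password credentials grant).
    public static async Task<string> GetTokenResponseAsync(string userName, string password)
    {
        using (var client = new HttpClient())
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", userName },
                { "password", password }
            });

            HttpResponseMessage response =
                await client.PostAsync("https://ngtfaapi.azurewebsites.net/token", form);

            // The JSON body contains "access_token"; parse it with your
            // favorite JSON library (e.g. JSON.NET) before using it.
            return await response.Content.ReadAsStringAsync();
        }
    }
}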

Now that we have a valid OAuth 2.0 access token, we’ll try to send an HTTP POST request to the protected, elevated-security endpoint which requires the second authentication factor. We’ll issue the request to the endpoint https://ngtfaapi.azurewebsites.net/api/transactions/transfer as shown in the image below, but we’ll not set a value for the custom header (X-OTP), so the API will definitely respond with 401 unauthorized access.

Failed TFA Request

Now, in order to make this request authentic, we need to open Google Authenticator, get the passcode from there, and send it in the (X-OTP) custom header, so the authentic request will be as shown in the image below:

Authentic TFA Request
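
The same authentic request, sketched with HttpClient: the token comes from the previous step and the passcode is whatever Google Authenticator currently displays. Again, this is an illustrative sketch rather than the demo application's code.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class TransferClientSketch
{
    // Hypothetical sketch: calls the two-factor protected endpoint with both
    // the bearer token (first factor) and the X-OTP passcode (second factor).
    public static async Task<HttpResponseMessage> PostTransferAsync(
        string accessToken, string passcode, HttpContent body)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            client.DefaultRequestHeaders.Add("X-OTP", passcode);

            return await client.PostAsync(
                "https://ngtfaapi.azurewebsites.net/api/transactions/transfer", body);
        }
    }
}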

The live AngularJS demo application is hosted on Azure, and the source code for this tutorial is available on GitHub.

That’s it for now folks!

I hope this step-by-step post will help you enable two-factor authentication in ASP.NET Web API RESTful services for selected sensitive endpoints. If you have any questions, please drop me a comment.

Follow me on Twitter @tjoudeh


The post Two Factor Authentication in ASP.NET Web API & AngularJS using Google Authenticator appeared first on Bit of Technology.


Dominick Baier: IdentityServer v3 Beta 2-1

We just did a minor update to Beta 2.

Besides some smaller changes and bug fixes, we now support redirecting back to a client after logout (a much-requested feature). I will write a blog post soon describing how it works.


Filed under: IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Getting started with IdentityServer v3

Last night I started working on a getting-started tutorial for IdentityServer v3 – while writing it, it became clear that a single walkthrough will definitely not be enough to show the various options you have – anyway, I started with the canonical “authentication for MVC” scenario, and it is a work in progress.

Watch this space:

https://github.com/thinktecture/Thinktecture.IdentityServer.v3/wiki


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI

