Filip Woj: Disposing resources at the end of Web API request

Sometimes you have resources in your code that implement IDisposable, and you’d like them to be disposed only at the end of the HTTP request. I have seen a solution to this problem rolled out by hand in a few code bases in the past – but in fact this feature is already built into Web API, which I don’t think a lot of people are aware of.

Let’s have a quick look.

RegisterForDispose

ASP.NET Web API contains an extension method for HttpRequestMessage that’s called RegisterForDispose – which allows you to pass in a single instance or a collection of instances of IDisposable.

What happens under the hood is that Web API will simply store those object references in the properties dictionary of the request message (under the HttpPropertyKeys.DisposableRequestResourcesKey key), and at the end of the Web API request lifetime it will iterate through those objects and call Dispose on them. This is the responsibility of the Web API host, so each of the existing hosting layers – Web Host (“classic” ASP.NET), self host (WCF based) and OWIN host (the Katana adapter) – explicitly implements this feature. It is guaranteed to happen (for example, the Katana adapter uses a try/catch block around the entire Web API pipeline and performs the disposal in the finally block).

As a result you can do things like the code below. Imagine a message handler that opens a stream, writes to it, then saves the stream into request properties (so that other Web API components such as other message handlers, filters, controllers etc can use it too):

public class WriterHandler : DelegatingHandler
{
    protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var writer = new StreamWriter("c:\\file.txt", true);
        //save the writer for later use
        request.Properties["filewriter"] = writer;

        await writer.WriteLineAsync("some important information");

        request.RegisterForDispose(writer);
        return await base.SendAsync(request, cancellationToken);
    }
}

You don’t have to use a using statement here, as registering for disposal with the Web API resource tracking feature ensures that the stream writer is disposed of at the end of the request lifetime. This is very convenient, as other components can share the same resource for the duration of the HTTP request.
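
For example, a controller later in the pipeline could pick the same writer back up from the request properties (a small sketch – the controller is just illustrative, reusing the property key from the handler above):

public class NotesController : ApiController
{
    public async Task<IHttpActionResult> Post()
    {
        // grab the writer that the message handler stored earlier in the pipeline
        var writer = (StreamWriter)Request.Properties["filewriter"];
        await writer.WriteLineAsync("written from the controller");

        // no using/Dispose here – RegisterForDispose takes care of the cleanup
        return Ok();
    }
}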

It is also worth adding that resource tracking is used whenever you use the Web API GetDependencyScope feature – which allows you to resolve services from the registered dependency resolver:

var myService = request.GetDependencyScope().GetService(typeof(IMyService));

In the above case, the instance of IMyService will be registered for disposal at the end of the Web API request.

One final note – at any time you can also take a peek into the resources currently registered for disposal, using a GetResourcesForDisposal extension method on HttpRequestMessage.
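
For example (a quick sketch – in a real handler you would typically just log or inspect these):

//somewhere later in the pipeline, e.g. in another message handler or a filter
foreach (var disposable in request.GetResourcesForDisposal())
{
    Debug.WriteLine("registered for disposal: " + disposable.GetType().Name);
}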


Dominick Baier: IdentityModel 1.0.0 released

As part of the ongoing effort to modernize our libraries, I released IdentityModel today.

IdentityModel contains useful helpers, extension methods and constants when working with claims-based identity in general and OAuth 2.0 and OpenID Connect in particular.

See the overview here and nuget here.

Feedback and PRs welcome.


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Dominick Baier: The State of Security in ASP.NET 5 and MVC 6: OAuth 2.0, OpenID Connect and IdentityServer

ASP.NET 5 contains a middleware for consuming tokens – but no longer one for producing them. I personally have never been a big fan of the Katana authorization server middleware (see my thoughts here) – and according to this, it seems that the ASP.NET team sees IdentityServer as the replacement for it going forward.

So let’s have a look at the bits & pieces and how IdentityServer can help in implementing authentication for MVC web apps and APIs.

IdentityServer
IdentityServer is an extensible OAuth 2.0 authorization server and OpenID Connect provider framework. It is different from the old authorization server middleware as it operates on a higher level of abstraction. IdentityServer takes care of all the protocol and data management details while you only need to model your application architecture using clients, users and resources (see the big picture and terminology pages).

IdentityServer is currently available as OWIN middleware only (as opposed to native ASP.NET 5 middleware) – that means it can be used in Katana-based hosts and in ASP.NET 5 hosts targeting the “full .NET framework” aka DNX451. It does not currently run on the Core CLR – but we are working on it.

You wire up IdentityServer in ASP.NET 5 using the typical application builder extension method pattern:

public void Configure(IApplicationBuilder app)
{
    var idsrvOptions = new IdentityServerOptions
    {
        Factory = new IdentityServerServiceFactory()
                        .UseInMemoryUsers(Users.Get())
                        .UseInMemoryClients(Clients.Get())
                        .UseInMemoryScopes(Scopes.Get()),
    };

    app.UseIdentityServer(idsrvOptions);
}

For this sample we use in-memory data sources, but you can connect to any data source you like.

The Users class simply defines two test users (Alice and Bob of course) – I won’t bore you with the details. Scopes on the other hand define the resources in your application, e.g. identity resources like email addresses or roles, or resource scopes that represent your APIs:

public static IEnumerable<Scope> Get()
{
    return new[]
    {
        // standard OpenID Connect scopes
        StandardScopes.OpenId,
        StandardScopes.ProfileAlwaysInclude,
        StandardScopes.EmailAlwaysInclude,

        // API – access token will
        // contain roles of user
        new Scope
        {
            Name = "api1",
            DisplayName = "API 1",
            Type = ScopeType.Resource,

            Claims = new List<ScopeClaim>
            {
                new ScopeClaim("role")
            }
        }
    };
}

Clients on the other hand represent the applications that will request authentication tokens (also called identity tokens) and access tokens.

public static List<Client> Get()
{
    return new List<Client>
    {
        new Client
        {
            ClientName = "Test Client",
            ClientId = "test",
            ClientSecrets = new List<Secret>
            {
                new Secret("secret".Sha256())
            },

            // server to server communication
            Flow = Flows.ClientCredentials,

            // only allowed to access api1
            AllowedScopes = new List<string>
            {
                "api1"
            }
        },

        new Client
        {
            ClientName = "MVC6 Demo Client",
            ClientId = "mvc6",

            // human involved
            Flow = Flows.Implicit,

            RedirectUris = new List<string>
            {
                "http://localhost:2221/"
            },

            // access to identity data and api1
            AllowedScopes = new List<string>
            {
                "openid",
                "email",
                "profile",
                "api1"
            }
        }
    };
}

With these definitions in place we can now add an MVC application that uses IdentityServer for authentication, as well as an API that supports token-based access control.

MVC Web Application
ASP.NET 5 includes middleware for OpenID Connect authentication. With this middleware you can use any OpenID Connect compliant provider (see here) to outsource the authentication logic.

Simply add the cookie middleware (for local sign-in) and the OpenID Connect middleware, pointing at our IdentityServer, to the pipeline. You need to supply the base URL of IdentityServer (so the middleware can fetch the discovery document), the client ID, scopes, and redirect URI (compare with the client definition we created earlier). The following snippet requests the user’s ID, email and profile claims in addition to an access token that can be used for the “api1” resource.

app.UseCookieAuthentication(options =>
{
    options.AuthenticationScheme = "Cookies";
    options.AutomaticAuthentication = true;
});

app.UseOpenIdConnectAuthentication(options =>
{
    options.Authority = "https://localhost:44300";
    options.ClientId = "mvc6";
    options.ResponseType = "id_token token";
    options.Scope = "openid email profile api1";
    options.RedirectUri = "http://localhost:2221/";

    options.SignInScheme = "Cookies";
    options.AutomaticAuthentication = true;
});

MVC Web API
Building web APIs with MVC that are secured by IdentityServer is equally simple – the ASP.NET 5 OAuth 2.0 middleware supports JWTs out of the box and even understands discovery documents now.

app.UseOAuthBearerAuthentication(options =>
{
    options.Authority = "https://localhost:44300";
    options.Audience = "https://localhost:44300/resources";
    options.AutomaticAuthentication = true;
});

Again, Authority points to the base address of IdentityServer so the middleware can download the metadata and learn about the signing keys.

Since the built-in middleware does not understand the concept of scopes, I had to fix that with this:

app.UseMiddleware<RequiredScopesMiddleware>(
    new List<string> { "api1" });

Expect this to be improved in future versions.
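
For reference, here is a minimal sketch of what such a scope check can look like as ASP.NET 5 middleware. This is purely illustrative – it is not the actual RequiredScopesMiddleware from the sample, and the type names (RequestDelegate, HttpContext) come from the current beta surface and may still change:

// illustrative only – the real RequiredScopesMiddleware ships with the sample code
public class ScopeRequirementMiddleware
{
    private readonly RequestDelegate _next;
    private readonly List<string> _requiredScopes;

    public ScopeRequirementMiddleware(RequestDelegate next, List<string> requiredScopes)
    {
        _next = next;
        _requiredScopes = requiredScopes;
    }

    public async Task Invoke(HttpContext context)
    {
        // look for at least one matching "scope" claim on the authenticated user
        var hasScope = context.User.Claims
            .Any(c => c.Type == "scope" && _requiredScopes.Contains(c.Value));

        if (!hasScope)
        {
            context.Response.StatusCode = 403;
            return;
        }

        await _next(context);
    }
}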

API Client
Tokens can be also requested programmatically – e.g. for server-to-server communication scenarios. Our IdentityModel library has a TokenClient helper class which makes that very easy:

async Task<TokenResponse> GetTokenAsync()
{
    var client = new TokenClient(
        "https://localhost:44300/connect/token",
        "test",
        "secret");

    return await client.RequestClientCredentialsAsync("api1");
}

…and using the token:

var client = new HttpClient();
client.SetBearerToken(response.AccessToken);

var result = await client.GetAsync("http://localhost:2025/claims");

The full source code can be found here.

Summary
IdentityServer can be a replacement for the Katana authorization server middleware – but as I said it is a different mindset, since IdentityServer is not protocol oriented but rather focuses on your application architecture. I should mention that there is actually a middleware for ASP.NET 5 that mimics the Katana approach – you can find it here.

In the end it is all about personal preference, and most importantly how comfortable you are with the low level details of the protocols.

There are actually many more samples showing how to use IdentityServer in the samples repo. Give it a try.


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: The State of Security in ASP.NET 5 and MVC 6: Claims & Authentication

Disclaimer: Microsoft announced the roadmap for ASP.NET 5 yesterday – the current release date of the final version is Q1 2016. Some details of the features and APIs I mention will change between now and then. This post is about beta 5.

Claims
I started talking about claims-based identity back in 2005. That was the time when Microsoft introduced a new assembly to the .NET Framework called System.IdentityModel. This assembly contained the first attempt at introducing claims to .NET, but it was only used by WCF and was a bit over-engineered (go figure). The claims model was subsequently re-worked by the WIF guys a couple of years later (kudos!) and then re-integrated into .NET with version 4.5.

Starting with .NET 4.5, every built-in identity/principal implementation was based on claims, essentially replacing the antiquated, 12+ year old IIdentity/IPrincipal interfaces. Katana – and more importantly ASP.NET 5 – is the first framework that uses ClaimsPrincipal and ClaimsIdentity as first-class citizens: identities are now always based on claims – and finally – no more down-casting!

HttpContext.User and Controller.User are now ClaimsPrincipals – and writing the following code feels as natural as it should be:

var email = User.FindFirst("email");

This might not seem like a big deal – but the fact that it took almost ten years to get there shows just how slowly things move sometimes. I also had to take part in a number of discussions with people at Microsoft over the years to convince them that this is actually the right thing to do…

Authentication API
Another thing ASP.NET was missing was a uniform authentication API – this was fixed in Katana via IAuthenticationManager, and it has been brought over to ASP.NET 5 pretty much unchanged.

AuthenticationManager hangs off the HttpContext and is a uniform API over the various authentication middleware components that do the actual grunt work. The major APIs are:

  • SignIn/SignOut
    Instructs a middleware to do a signin/signout gesture
  • Challenge
    Instructs a middleware to trigger some external authentication handshake (this is further abstracted by the new ChallengeResult in MVC 6)
  • Authenticate
    Triggers validation of an incoming credential and conversion to claims
  • GetAuthenticationSchemes
    Enumerates the registered authentication middleware, e.g. for populating a login UI dynamically

Authentication Middleware
The actual authentication mechanisms and protocols are implemented as middleware. If you are coming from Katana then this is a no-brainer. If your background is ASP.NET.OLD, think of middleware as HTTP modules – just more flexible and lightweight.

For web UIs the following middleware is included:

  • Cookie-based authentication (as a replacement for good old forms authentication or the session authentication module from WIF times)
  • Google, Twitter, Facebook and Microsoft Account
  • OpenID Connect

WS-Federation is missing right now. It is also worth mentioning that there is now a generic middleware for OAuth2-style authentication (sigh). This will make it easier to write middleware for the various OAuth2 dialects without having to duplicate all the boilerplate code and will make the life of these guys much easier.

Wiring up the cookie middleware looks like this:

app.UseCookieAuthentication(options =>
{
    options.LoginPath = new PathString("/account/login");
    options.AutomaticAuthentication = true;
    options.AuthenticationScheme = "Cookies";
});

The coding style is a little different – instead of passing in an options instance, you now use an Action<TOptions> delegate. AuthenticationType has been renamed to AuthenticationScheme (and the weird re-purposing of IIdentity.AuthenticationType is gone for good). All authentication middleware is now passive – setting them to active means setting AutomaticAuthentication to true.

For signing in a user, you create the necessary claims and wrap them in a ClaimsPrincipal. Then you call SignIn to instruct the cookie middleware to set the cookie.

var claims = new List<Claim>
{
    new Claim("sub", model.UserName),
    new Claim("name", "bob"),
    new Claim("email", "bob@smith.com")
};

var id = new ClaimsIdentity(claims, "local", "name", "role");
Context.Authentication.SignIn("Cookies", new ClaimsPrincipal(id));

Google authentication as an example looks like this:

app.UseGoogleAuthentication(options =>
{
    options.ClientId = "xxx";
    options.ClientSecret = "yyy";

    options.AuthenticationScheme = "Google";
    options.SignInScheme = "Cookies";
});

The external authentication middleware implements the authentication protocol only – and when done – hands over to the middleware that does the local sign-in. That’s typically the cookie middleware. For this purpose you set the SignInScheme to the name of the middleware that should take over (this has been renamed from SignInAsAuthenticationType – again clearly an improvement).

The pattern of having more than one cookie middleware – so you can inspect claims from external authentication systems before turning them into a trusted cookie – also still exists. That’s probably worth a separate post.
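
In rough terms – and this is only a sketch using the options already shown above – the wiring looks something like this: a second cookie middleware acts as a temporary container for the external claims, and the external middleware signs into that one instead of the primary cookie:

// temporary cookie that only holds the claims coming back from the external provider
app.UseCookieAuthentication(options =>
{
    options.AuthenticationScheme = "External";
    options.AutomaticAuthentication = false;
});

app.UseGoogleAuthentication(options =>
{
    options.ClientId = "xxx";
    options.ClientSecret = "yyy";

    options.AuthenticationScheme = "Google";

    // sign in to the temporary cookie first...
    options.SignInScheme = "External";
});

// ...then, in your callback action, inspect/transform the external claims and
// issue the real "Cookies" principal via Context.Authentication.SignIn.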

For web APIs there is only one relevant middleware – consuming bearer tokens. This middleware has support for JWTs out of the box and is extensible to use different token types and different strategies to convert the tokens to claims. One notable new feature is support for OpenID Connect metadata. That means if your OAuth2 authorization server also happens to be an OpenID Connect provider with support for a discovery document (e.g. IdentityServer or Azure Active Directory) the middleware can auto-configure the issuer name and signing keys.

One thing that is “missing” when coming from Katana, is the OAuth2 authorization server middleware. There are currently no plans to bring that forward. IdentityServer can be a replacement for that. I will dedicate a separate blog post to that topic.

Summary
If you are coming from Katana, none of this will look terribly new to you. AuthenticationManager and the authentication middleware work almost identically. Learning that was not a waste of time.

If you are coming from plain ASP.NET (and maybe even WIF or DotNetOpenAuth) this all works radically differently under the covers and is really only “conceptually compatible”. In that case you have quite a lot of new tech to learn to make the jump to ASP.NET 5.

Unfortunately (as always) the ASP.NET templates are not very helpful in learning the new features. You either get an empty one, or the full-blown-all-bells-and-whistles-complexity-hidden-by-extensions-method-over-more-abstractions version of that. Therefore I created the (almost) simplest possible cookie-based starter template here. More to follow.


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: The State of Security in ASP.NET 5 and MVC 6

We’ve been closely following ASP.NET 5 and MVC 6 since the days it was presented behind closed doors, through the “vNext” and “Project K” phase up to recent beta builds.

I personally monitored all developments in the security space and was even involved in one or two decision-making processes – particularly around authorization, which also makes me a little bit proud.

In preparation for the planned ASP.NET and MVC 6 security course for PluralSight, I always maintained a (more or less structured) list of changes and new features.

Tomorrow will be the release of Visual Studio 2015, which does NOT include the final release of ASP.NET 5. Instead we are between Beta 5 and 6 right now and the rough feature set has more or less been decided on. That’s why I think that now is a good time to do a couple of overview posts on what’s new.

Many details are still very much in flux and the best way to keep up with that is to subscribe to the various security related repos on github. Many things are not set in stone yet, so this is also an opportunity to take part in the discussion which I would encourage you to do.

The planned feature posts are:

Since training is very much about the details, we are holding off right now with recording any content until the code has been locked down for “v1”. So stay tuned.

The first public appearance of our updated “identity & access control” training for ASP.NET will be at NDC in London in January 2016.

Update: The final release of ASP.NET 5 is currently scheduled for Q1 2016 (https://github.com/aspnet/Home/wiki/Roadmap)


Filed under: .NET Security, ASP.NET, Conferences & Training, IdentityServer, WebAPI


Dominick Baier: Security at NDC Oslo

For a developer conference, NDC Oslo had a really strong security track this year. The audience appreciated that too – of the five highest ranked talks, three were about security. Troy has the proof.

I even got to see Bruce Schneier for the first time. It is fair to say that his “Secrets & Lies” book changed my life and was one of the major reasons I got interested in security (besides Enno).

Brock and I did a two day workshop on securing modern web applications and APIs followed by a talk on Web API security patterns and how to implement authentication and authorization in JavaScript applications.

Other talks worth watching (I hope I haven’t missed anything):

Well done, NDC!


Filed under: .NET Security, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Give your WCF Security Architecture a Makeover with IdentityServer3

Not everybody has the luxury of being able to start over and build the new & modern version of their software from scratch. Many people I speak to have existing investments in WCF and their “old-school” desktop/intranet architecture.

Moving to an internet/mobile world while preserving the existing services is not easy, because the technologies (and in my case the security technologies) are fundamentally incompatible. Your new mobile/modern clients will not seamlessly be able to request tokens from your existing WS-Trust STS, and SOAP is not really compatible with OAuth2. So what to do?

You could try to teach your WS-Trust STS some basic HTTP based token service capabilities and continue using SAML tokens. You could provide some sort of SAML/JWT conversion mechanism and create Web APIs for your new clients that proxy / convert to the WCF world. Or you could provide two separate token services and establish trust between them. All approaches have their own advantages and disadvantages.

For a project I am currently working on I chose a different approach – get rid of the old WS-Trust STS altogether, replace it with an OAuth2 authorization server (IdentityServer3) and make your WCF services consume JWT tokens. This way both old and new clients can request tokens via OAuth2 and use them with both the existing WCF services and the new Web APIs (which ultimately will also be used in the desktop version of the product). How does that work?

Requesting the token
The OAuth2 resource owner flow is what comes closest to WS-Trust, and it is easy to replace the WCF WSTrustChannel code with it. Going forward, the web-view-based flows actually give you more features, like external IdPs, but need a bit more restructuring of the existing clients. New clients can use them straight away.

Sending the token
This is the tricky part. WCF cannot deal with JWTs directly since they are not XML based. You first need to wrap them in an XML data structure, and the typical approach for that is to use a so-called binary security token. This worked fine at some point, but the latest version of WCF and the JWT token handler don’t seem to work together anymore (here’s a nice write-up from Mickael describing the problem).

Since WCF is really done – I did not expect anyone to fix that bug anytime soon, so I needed a different solution.

Another XML container data structure that is well tested and does the job equally well is SAML – so I simply created a minimal SAML assertion to hold the JWT token.

static GenericXmlSecurityToken WrapJwt(string jwt)
{
    var subject = new ClaimsIdentity("saml");
    subject.AddClaim(new Claim("jwt", jwt));

    var descriptor = new SecurityTokenDescriptor
    {
        TokenType = TokenTypes.Saml2TokenProfile11,
        TokenIssuerName = "urn:wrappedjwt",
        Subject = subject
    };

    var handler = new Saml2SecurityTokenHandler();
    var token = handler.CreateToken(descriptor);

    var xmlToken = new GenericXmlSecurityToken(
        XElement.Parse(token.ToTokenXmlString()).ToXmlElement(),
        null,
        DateTime.Now,
        DateTime.Now.AddHours(1),
        null,
        null,
        null);

    return xmlToken;
}

Since we are using SAML solely as a container, there is no signature, no audience URI and just a single attribute statement containing the JWT.

After that you can use the wrapped JWT with the CreateChannelWithIssuedToken method over a federation binding:

var binding = new WS2007FederationHttpBinding(
    WSFederationHttpSecurityMode.TransportWithMessageCredential);

binding.Security.Message.EstablishSecurityContext = false;
binding.Security.Message.IssuedKeyType = SecurityKeyType.BearerKey;

var factory = new ChannelFactory<IService>(
    binding,
    new EndpointAddress("https://localhost:44335/token"));

var channel = factory.CreateChannelWithIssuedToken(xmlToken);

Validating the token
On the service side I sub-classed the SAML2 security token handler to get the SAML deserialization. In the ValidateToken method I retrieve the JWT token from the assertion and validate it.

Since I have to do the validation manually anyway, I wanted feature parity with our token validation middleware for Web API – which means the token handler can auto-configure itself using the OpenID Connect discovery document, as well as do scope validation.
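
To give an idea of the shape this takes, here is a rough sketch (this is not the actual IdentityServerWrappedJwtHandler from the POC – the validation parameters below are placeholders, and the discovery-based configuration and scope validation are omitted):

class WrappedJwtSamlHandler : Saml2SecurityTokenHandler
{
    public override ReadOnlyCollection<ClaimsIdentity> ValidateToken(SecurityToken token)
    {
        var samlToken = (Saml2SecurityToken)token;

        // pull the raw JWT back out of the single attribute statement
        var jwt = samlToken.Assertion.Statements
            .OfType<Saml2AttributeStatement>()
            .SelectMany(s => s.Attributes)
            .Where(a => a.Name == "jwt")
            .SelectMany(a => a.Values)
            .Single();

        // validate the inner JWT – issuer, audience and signing certificate are
        // hard-coded placeholders here; the POC resolves them from the discovery document
        var signingCert = new X509Certificate2("idsrv-signing.cer");

        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "https://localhost:44333/core",
            ValidAudience = "https://localhost:44333/core/resources",
            IssuerSigningToken = new X509SecurityToken(signingCert)
        };

        SecurityToken validatedToken;
        var principal = new JwtSecurityTokenHandler()
            .ValidateToken(jwt, parameters, out validatedToken);

        return new ReadOnlyCollection<ClaimsIdentity>(principal.Identities.ToList());
    }
}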

identityConfiguration.SecurityTokenHandlers.Add(
    new IdentityServerWrappedJwtHandler("https://localhost:44333/core", "write"));

The end result is that both WCF and Web API can now consume JWT tokens from IdentityServer, and the customer can smoothly migrate and extend their architecture.

The POC can be found here. It is “sample quality” right now – feel free to make it more robust and send me a PR.


Filed under: .NET Security, IdentityServer, OAuth, WCF, WebAPI


Darrel Miller: Everything is Going to be… 308 Permanent Redirect

The last year has been a very interesting one.  In April 2014, I announced that I was joining Runscope as a developer advocate.


This will be my 50th blog post since that one.  I've covered all kinds of topics from the intricacies of HTTP to API design guidelines, tricks for using ASP.NET Web API, reviews of APIs, and summaries of some of the conferences that I have attended.  I've had the opportunity to attend 17 different developer events and gave talks at all but 3.  I have worked on numerous .net based OSS projects. I have met many interesting people and learned so much about how people are building and consuming HTTP based APIs.

I have also had the pleasure of working with the Runscope team and learned the myriad ways that the Runscope tooling can help developers build better APIs.  I've seen the team triple in size and watched the product be continuously enhanced.

It has been a fabulous year.  But now it is time for a redirect of my own.  If 308 does not look like a familiar status code, it is because it is fairly new.  It was only introduced in 2014 in RFC 7238.  A 308 is like a 301 (Moved Permanently), but does not allow the HTTP method to be changed during the redirect.

As of this week, I will be moving on from Runscope and exploring other opportunities.  I won't go into reasons for the change,  but suffice it to say I would highly recommend Runscope as a place to work, and I would not hesitate using them as a reference for future work.  There is no bad blood here.

The first thing on my agenda is to finish the Advanced HttpClient Pluralsight course that has been on the back-burner for the last twelve months.

Beyond that, I do not know, and I'm open to suggestions.  I like the human side of developer advocacy, but I also need to continue writing code that will actually be deployed into production.  Without that feedback loop, I cannot, in good faith, continue offering guidance.

When I started this adventure with Runscope, my blog post was titled It’s time for a change, and more of the same.  Well once again, it is time for a change, and more of the same.


Image credit: Fork https://flic.kr/p/5wM3w2
Image credit: Start Line https://flic.kr/p/g1qvG


Dominick Baier: Three days of Identity & Access Control Workshop at SDD Deep Dive – November 2015, London

As part of the SDD Deep Dive event in London – Brock and I will deliver an updated version of our “Identity & Access Control for modern Web Applications and APIs” workshop.

For the first time, this will be a three day version covering everything you need to know to implement authentication & authorization in your ASP.NET web applications and APIs.

The additional third day will focus on IdentityServer3 internals and customization as well as an outlook on how to migrate your security architecture to ASP.NET 5 and MVC6.

Come by and say hello! (also get some of our rare IdentityServer stickers)

http://sddconf.com/identityandaccess/


Filed under: .NET Security, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Ali Kheyrollahi: PerfIt! decoupled from Web API: measure down to a closure in your .NET application

Level [T2]

Performance monitoring is an essential part of building any serious-scale software. Unfortunately, the .NET ecosystem – which historically looked to Microsoft first for direction and tooling – has suffered a real lack of good tooling here: for one reason or another, effective monitoring has not been a priority for Microsoft, although this could be changing now. The healthy growth of the .NET open source community in the last few years has brought a few innovations in this space (Glimpse being one), but they focused on solving development problems rather than application telemetry.

Two years ago, while trying to build and deploy large-scale APIs, I was unable to find anything suitable to save me from writing a lot of boilerplate code to add performance counters to my applications, so I coded a working prototype of performance counters for ASP.NET Web API and open sourced it on GitHub, calling it PerfIt! for lack of a better name. Since then PerfIt! has been deployed to production in a good number of companies running .NET. I also added client support to measure calls made by HttpClient, which was a handy addition.


This is all not bad, but in reality REST API calls do not cover all your outgoing or incoming server communications (which you naturally would like to measure): you need to communicate with databases (relational or NoSQL), caches (e.g. Redis), blob storage, and many others. On top of that, there could be other parts of your code that you would like to measure, such as CPU-intensive algorithms, reading or writing large local files, running machine learning classifiers, etc. Of course, PerfIt! in its current incarnation cannot help with any of those cases.

It turned out that, with a little change – separating performance monitoring from the Web API semantics (which are changing with vNext again) – this can be done. I can't take much credit for it: the ideas came mainly from two of my best colleagues, Andres Del Rio and JaiGanesh Sundaravel, and I am grateful for their contribution.

New PerfIt! features (and limitations)

So currently at version alpha2, you can get the new PerfIt! by using nuget (when it works):
PM> install-package PerfIt -pre
Here are the extra features that you get from the new PerfIt!.

Measure metrics for a closure


So at the lowest level of an aspect abstraction, you might be interested in measuring metrics for a closure, for example:
Action action = () => Thread.Sleep(1000);
action(); // measure
Or in case of an async operation:
object result = null;
Func<Task> asyncCall = async () => result = await _command.ExecuteScalarAsync();

// and then
await asyncCall();
This closure could be wrapped in a method of course, but there again, having a unified closure interface is essential in building a common tool: each method can have different inputs and outputs, while all of them can be presented as a closure with the same interface.

Thames Barriers Closure - Flickr. Sorry couldn't find a more related picture, but enjoy all the same
So in order to measure metrics for the action closure, all we need to do is:
var ins = new SimpleInstrumentor(new InstrumentationInfo()
{
    Counters = CounterTypes.StandardCounters,
    Description = "test",
    InstanceName = "Test instance"
},
TestCategory);

ins.Instrument(() => Thread.Sleep(100));

A few things here:
  • SimpleInstrumentor is responsible for providing a hook to instrument your closures. 
  • InstrumentationInfo contains the metadata for publishing the performance counters. You provide it with the names of the counters to raise (provided that, if they are not standard counters, you have already defined them).
  • You will most likely create a single instrumentor instance for each aspect of your code that you would like to instrument.
  • This example assumes the counters and their category are installed. The PerfitRuntime class provides a mechanism to register your counters on the box - which is covered in previous posts.
  • Instrument method has an option to pass the context as a string parameter. This context can be used to correlate metrics with application context in ETW events (see below).

Doing an async operation is not that different:
ins.InstrumentAsync(async () => await Task.Delay(100));

//or even simpler:
ins.InstrumentAsync(() => Task.Delay(100));
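
And if you want the correlation mentioned above, the closure can be accompanied by an instrumentation context string (a sketch, assuming the context overload described in the list above):

// pass an application-level context (e.g. a customer id) so the raised ETW event
// can later be correlated with what the application was doing – see the ETW section below
ins.Instrument(() => Thread.Sleep(100), "customerId=1234");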

SimpleInstrumentor is the building block for higher level abstractions of instrumentation. For example, PerfitClientDelegatingHandler now uses SimpleInstrumentor behind the scenes.

Raise ETW events, effortlessly


Event Tracing for Windows (ETW) is a low-overhead framework for logging, instrumentation, tracing and monitoring that has been in Windows since Windows 2000. Version 4.5 of the .NET Framework exposes this feature in the EventSource class. It probably suffices to say that if you are not using ETW, you are doing it wrong.

One problem with Performance Counters is that they use sampling, rather than events. This is all well and good but lacks the resolution you sometimes need to find problems. For example, if 1% of calls take > 2 seconds, you need on average 100 samples – and if you are unlucky, a lot more – to see the spike.

Another problem is the lack of context with the measurements. When you see such a high response time, there is really no way to find out the context (e.g. the customerId) for which it went wrong. This makes finding performance bottlenecks more difficult.

So SimpleInstrumentor, in addition to doing counters for you, raises InstrumentationEventSource ETW events. Of course, you can turn this off, or just leave it on as it has almost no impact. Better still, you can use a sink (Table Storage, ElasticSearch, etc.) to persist these events to a store and then analyse them using something like ElasticSearch and Kibana – as we do at ASOS. Here is a console log sink, subscribed to these events:
var listener = ConsoleLog.CreateListener();
listener.EnableEvents(InstrumentationEventSource.Instance, EventLevel.LogAlways,
Keywords.All);
And you would see:


Obviously this might not look very impressive, but when you take into account that you have the timeTakenMilli (here 102ms) and the option to pass an instrumentationContext string (here "test..."), you can correlate performance with the context in your application.

PerfIt for Web API is all there just in a different nuget package


If you have been using previous versions of PerfIt, do not panic! We are not going to move the cheese: the client and server delegating handlers are all still there, only in a different package, so you just need to install the PerfIt.WebApi package:
PM> install-package PerfIt.WebApi -pre
The rest is just the same.

Only .NET 4.5 or higher


After spending a lot of time writing async code in CacheCow, which targeted .NET 4.0, I do not think anyone should be subjected to such torture, so my apologies to those using .NET 4.0 but I had to move PerfIt! to .NET 4.5. Sorry .NET 4.0 users.

PerfIt for MVC, Windsor Castle interceptors and more

Yeah, there is more coming. PerfIt for MVC has long been asked for by the community, and Castle interceptors can simply move all cross-cutting-concern code out of your core business code. Stay tuned and please provide feedback before we go fully to v1!


Darrel Miller: The Simplest Possible ASP.NET Web API Template

I recently needed to create a new Web API project for some content I'm working on for a talk.  I decided I might as well use the ASP.NET Web API Template to get started.


The resulting project looked like this:


75 class files, 23 Razor templates, 34 unique Nuget packages, and 28 Javascript files before I wrote a single line of my own code.  I'm sure there is a ton of value in all of that code. However, I know that I don't currently need most of it and, far worse, I don't know what much of it does.  I really don't like having code in my projects that I don't understand, when I have the option of not having it there.

Bare Bones

The other option I had was to use the "Empty" template with Web API references.


This template looks a whole lot more manageable, but I prefer having my API related code in a separate project from my hosting code.

Separation Of Concerns

Like this,


The advantage of this approach is that each project gets considerably simpler in terms of dependencies.  It also allows me to add a console host project, which I find much easier for debugging purposes. 


In order to run an API in a console we need to pull in the "Self Host" support libraries.  The way self host works changed in Web API 2.  Earlier versions used to be based on a thin WCF wrapper around HttpListener.  Since 2.0 it uses a Katana flavoured Owin wrapper around HttpListener. 

Installing the following Nuget into the Console app will get you started,


Hello Magic Incantation

I've built self-host APIs using the old 1.0 Web API many times.  It's a fairly simple process: create a configuration object, pass it to an instance of an HTTP server and call Start.  In Web API 2.x it uses Katana bootstrapping code to get an HTTP server up and running.

There are two parts to bootstrapping in Katana.  One part is initializing the HTTP server and the other is initializing the Owin based application.  The challenge is finding the magic incantation code.  You need to call a static method on a class that is not the easiest to guess, and then you need to create a class that has to have a method with a signature that you need to guess.  Here is the article I found that reminded me how to do it.

The first is to call the static method Start on the WebApp class.

static void Main(string[] args)
{
    using (WebApp.Start<Startup>("http://127.0.0.1:12345"))
    {
        Console.ReadLine();
    }

}

This code is in the Console Host project.  The Startup class which is referenced here should be in the API project.

The API project will need to take a dependency on the following Nuget in order to create the Startup class:


public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var httpConfiguration = new HttpConfiguration();
        WebApiConfig.Register(httpConfiguration);
        app.UseWebApi(httpConfiguration);
    }
}

I kept the WebApiConfig.Register method around just to follow the convention.  It could just as easily be in-lined.
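
If you are starting from the Empty template and don't have a WebApiConfig class yet, a minimal version like the following is all that's needed (this is just the usual attribute routing plus the conventional default route):

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}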

There are a few gotchas to running self-host.  You either need to be running Visual Studio as an administrator, or you need to authorize the process to open an HTTP server on the base URL you provide.  If you use localhost it will just work; however, if you use any other URL you will need to set the URL ACL using netsh.
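
For the example above, that reservation would look something like this (run from an elevated command prompt; adjust the port and user to your own setup):

netsh http add urlacl url=http://+:12345/ user=DOMAIN\youruser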

That Doesn't Sound Too Painful

Seems pretty straightforward, right?  For hosting in IIS/IISExpress you just use the Web Host Nuget packages.  The dependencies look like this,


And for the Self Hosted Console we need the Owin Host stuff.


The actual API project that contains all the controllers is just a class library project and only really needs access to the Core Web API nugets and the Web API to Owin adapter.


Here Is Where It Gets Whacky

The confusion starts if you go and look back at the normal non-empty ASP.NET template project.  It's a web hosted project but there is a Startup class in there.  Actually, there are two Startup classes in there that are defined as partial classes for extra bonus confusion.


So, if we are using Microsoft.AspNet.WebApi.WebHost for hosting the API, why do we need a Startup class that is part of Katana flavoured Owin?  The answer, I believe, lies in some of those 34 nuget packages that the template pulled in.


In a noble effort to build infrastructure that could be used on any Owin (with some Katana flavouring) compatible application, the security modules are built as Owin middleware. 

The absolute irony of this is that the original Owin community went to great lengths to define the Owin interface in a way that required zero binary dependencies.  Unfortunately, Katana decided binary dependencies were necessary for "ease of use" and so a bunch of Katana/Owin related Nugets need to be pulled in to support these pieces of middleware, even though Owin is not actually being used to abstract away the host!

And It Gets Worse

Earlier I mentioned there were some magic incantations necessary to get the Katana host going, and then the Katana host would call into the Startup class.  The problem is, the web host doesn't need to use this magic.  It uses the different Global.asax WebApiApplication magic for bootstrapping.  So, how do these pieces of security middleware get fired up?

Enter a new guest to this party,


This is one of those packages that I consider EVIL (ok, so maybe I exaggerate).  This package causes a DLL to be referenced that makes stuff happen just because I referenced it.  That's just nasty.  Anyway, this package is what you use to host an Owin-ish application using IIS.  I believe it contains the equivalent of WebApp.Start to fire up the Startup class.  This allows the security middleware to do its thing. [update: thanks to James' comment, I found an article that talks more about the SystemWeb startup process]

So here is the question.  The default Web API template references both Microsoft.Owin.Host.SystemWeb and Microsoft.AspNet.WebApi.WebHost, it contains a Global.asax and Startup classes.  Which of these two mechanisms, both of which are capable of connecting a WebAPI application to IIS is actually being used?  I have no idea and considering everything is changing again in MVC6, I have little motivation to go spelunking.  However, if I can avoid it, I'm not going to have both in the same project.  Who knows what subtle interactions exist between the two mechanisms.

Simplicity Rules

In the end, I decided to remove the Microsoft.AspNet.WebApi.WebHost reference from my WebHost project and stick with just the Owin hosting layer.  This left my Web project with just these dependencies:


Note that my web host project now has no dependencies at all on Web API.  Even more freaky is that there isn't any code at all in the Web Host project.  The Microsoft.Owin.Host.SystemWeb package magically bootstraps the Startup class that is in the referenced API project.  The Console host project also has no dependencies on Web API, and my API project has no dependency on hosting-related code.  That's what Owin is supposed to do.  But when everything is mashed up into a single project, you don't see the isolation benefits.

I've posted a complete project structure up on Github.  It doesn't have built in security, or account management, or help pages, but I'm considering doing future posts on how to add those in incrementally.

I'm not completely sure whether I wrote this post simply to get this stuff straight in my head, or whether this might actually be useful for others who are struggling to see how all the pieces fit together.  So, if you find this kind of post useful, let me know.

Image Credit: Cookie Cutter https://flic.kr/p/95hFLk


Dominick Baier: OpenID Connect Certification for IdentityServer3

I am extremely happy to announce that IdentityServer3 is now officially certified by the OpenID Foundation.


http://openid.net/certification/

Version 1.6 onwards is now fully compatible with the basic, implicit, hybrid and configuration profiles of OpenID Connect.


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Ben Foster: Enabling CORS in ASP.NET Web API 2

We recently completed an upgrade of one of our APIs to Web API 2. Previously we were using a CORS implementation for Web API v1 by Brock Allen which later paved the way for the support in Web API v2.

CORS can be enabled using a Web API specific package (which uses message handlers) or OWIN Middleware. Which one to use will largely depend on your requirements:

When to use the Web API CORS package

  • You need fine grained control over CORS at the controller or action level (your API resources). For example, you may wish to allow different origins/HTTP methods per resource.

You can find a tutorial covering how to configure the Web API CORS package here.

When to use the CORS OWIN middleware

  • You're using the OAuth middleware and need to authenticate client-side requests from another domain (origin).
  • You want to enable CORS for all of your middleware components.

We decided to use the OWIN middleware. Detailed documentation is a little thin on the ground with most examples just allowing all origins and HTTP methods:

app.UseCors(CorsOptions.AllowAll)

For finer grained control you can provide your own CorsPolicy:

// e.g. in your OWIN Startup class
public void Configuration(IAppBuilder app)
{
    var corsPolicy = new CorsPolicy
    {
        AllowAnyMethod = true,
        AllowAnyHeader = true
    };

    // Try and load allowed origins from web.config
    // If none are specified we'll allow all origins

    var origins = ConfigurationManager.AppSettings[Constants.CorsOriginsSettingKey];

    if (origins != null)
    {
        foreach (var origin in origins.Split(';'))
        {
            corsPolicy.Origins.Add(origin);
        }
    }
    else
    {
        corsPolicy.AllowAnyOrigin = true;
    }

    var corsOptions = new CorsOptions
    {
        PolicyProvider = new CorsPolicyProvider
        {
            PolicyResolver = context => Task.FromResult(corsPolicy)
        }
    };

    app.UseCors(corsOptions);
}

The CorsOptions class has a PolicyProvider property which determines how the CorsPolicy for the request will be resolved. This is how you could provide resource specific CORS policies but it's not quite as nice to use as the attribute based Web API package.

Don't forget to allow OPTIONS

One thing that caught me out after enabling the middleware is that IIS was intercepting pre-flight requests. To ensure ASP.NET handles OPTIONS requests, add the following in web.config:

<system.webServer>
  <handlers>
    <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
    <remove name="OPTIONSVerbHandler" />
    <remove name="TRACEVerbHandler" />
    <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="*" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
  </handlers>
</system.webServer>


Chad England: How to Fix “No ‘Access-Control-Allow-Origin’ header” in ASP.NET WebAPI

I am not 100% sure what Google did to Chromium in Update 42.  However, after that update landed I started to get reports that our AngularJS application had started having problems communicating with our ASP.NET Web API.  We started getting errors similar to this:

No ‘Access-Control-Allow-Origin’ header is present on the requested resource

I tested the API and CORS with Fiddler and Postman and everything worked great, so the confusion started.  The web team created a simple test UI so I could debug the HTTP stack.  After looking deeper and doing some searches online, we found that the cause was Chrome's preflight OPTIONS call.

Out of the box, Web API does not support the OPTIONS verb; before update 42 Chrome would ignore the fact that the API lacked this ability and continue on, but that no longer appears to be the case.  With this issue, using [EnableCors("*", "*", "*")] does not appear to work anymore.

There are a few different ways to add the OPTIONS verb to Web API, and the right one appears to vary depending on your usage.

Option 1 – add it to the ConfigureAuth

app.Use(async (context, next) =>
{
    IOwinRequest req = context.Request;
    IOwinResponse res = context.Response;

    if (req.Path.StartsWithSegments(new PathString("/Token")))
    {
        var origin = req.Headers.Get("Origin");

        if (!string.IsNullOrEmpty(origin))
        {
            res.Headers.Set("Access-Control-Allow-Origin", origin);
        }

        if (req.Method == "OPTIONS")
        {
            res.StatusCode = 200;
            res.Headers.AppendCommaSeparatedValues("Access-Control-Allow-Methods", "GET", "POST");
            res.Headers.AppendCommaSeparatedValues("Access-Control-Allow-Headers", "authorization", "content-type");
            return;
        }
    }

    await next();
});

This adds the CORS headers to the response.  Adjust the methods and headers to those you want to support, or use wildcards.

Option 2 – Add headers in OWIN

If you are using OWIN for authentication, update the GrantResourceOwnerCredentials method and add the following as its first line.

context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

Option 3 – add Owin Cors to the startup.cs

To start with, add the NuGet package

Microsoft.Owin.Cors

Update Startup.cs and add the following to the Configuration method

app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);

After this, remove [EnableCors("*", "*", "*")] from your controllers or you will get an error for duplicate headers.



Darrel Miller: Dot Net Fringe

I spent the last few days at the DotNetFringe conference in Portland.  Considering this was the first time the conference had been run, it was executed spectacularly well.


Off To A Great Start

The opening keynote was done by the one and only, Jimmy Bogard who delivered a candid history of his experience working on OSS projects, including both the successes and failures.  Jimmy delivered a wide range of sage advice regarding how, why and when you should open source projects.

Photo Credit: Bar Arnon

It Just Got Better

The next talk I watched was from Amy Palamountain who delivered the trifecta of presenting great technical content that was entertaining and wonderfully polished.  The consensus among the speakers I spoke with right after her presentation was that we all needed to go and work on our talks.

Spreading The HTTP Love

My talk covered the open source projects I've been building over the past few years for creating and consuming HTTP APIs.  It is quite a challenge to deliver any amount of depth in a technical subject in 30 minutes, but I do appreciate the format as there were far fewer schedule conflicts than at conferences with many tracks.

Photo Credit: Immo Landwerth

Sunshine In The Morning

Day two started on another bright note for me, with Gemma Cameron talking about the changes DevOps has made and the potential future of NetOps.  It was both informative and entertaining – a perfect combination.  And I'm looking forward to a chance to use Bitstrips in my own presentations soon!

Photo Credit: Gemma Cameron

A Chance of Showers

Next up was the DotNetRocks Open Source panel.  I think my tweet from the beginning of the session is the most succinct summary I can come up with.

The challenge with diverse opinions is that there can be disagreement.  And there were.  The panel discussion was an accurate reflection of the .net OSS community as a whole.  Some people spoke too much, some spoke too little. People showed their biases.  Yup, people were human.

Photo Credit: Immo Landwerth

The .net community has come a very long way, and it still has a long way to go.  I do think in general we are finally all moving in the right direction.   But there is a history of pain and anguish behind us.  We need to focus on the future without forgetting the past.  There is nothing worse than making the same mistakes twice.

Rainbows Follow Rain

DotNetFringe was a strange name for a conference, with an even stranger logo.  It came together in a short amount of time and made an unusual venue work.

The conference was full of amazing people, great talks and overflowing enthusiasm for the future of .net OSS.  I look forward to the next one.

It was a bold play, but it paid off.  Look out for the session recordings becoming available soon, they will be full of great content.


Pedro Félix: Some thoughts on the recent JWT library vulnerabilities

Recently, a great post by Tim McLean about some “Critical vulnerabilities in JSON Web Token libraries” made the headlines, bringing the focus to the JWT spec, its usages and apparent security issues.

In this post, I want to share some of my assorted ideas on these subjects.

On the usefulness of the “none” algorithm

One of the problems identified in the aforementioned post is the “none” algorithm.

It may seem strange for a secure packaging format to support “none” as a valid protection; however, this algorithm is useful in situations where the token’s integrity is verified by other means, namely the transport protocol.
One such example is the authorization code flow of OpenID Connect, where the ID token is retrieved via direct TLS-protected communication between the Client and the Authorization Server.

In the words of the specification: “If the ID Token is received via direct communication between the Client and the Token Endpoint (which it is in this flow), the TLS server validation MAY be used to validate the issuer in place of checking the token signature”.

Supporting multiple algorithms and the “alg” field

Another problem identified by Tim’s post was the usage of the “alg” field and the way some libraries handle it, namely using keys in an incorrect way.

In my opinion, supporting algorithm agility (i.e. the ability to support more than one algorithm in a specification) is essential for having evolvable systems.
Also, being explicit about what was used to protect the token is typically a good security decision.

In this case, the problem lies on the library side. Namely, having a verify(string token, string verificationKey) function signature seems really awkward, for several reasons:

  • First, representing a key as a string is a typical case of primitive obsession. A key is not a string. A key is a potentially composed object (e.g. two integers in the case of a public key for RSA-based schemes) with associated metadata, namely the algorithms and usages for which it applies. Encoding that as a string is opening the door to ambiguity and incorrect usages.
    A key representation should always contain not only the algorithm to which it applies but also the usage conditions (e.g. encryption vs. signature for an RSA key).

  • Second, it makes phased key rotation really difficult. What happens when the token signer wants to change the signing key or the algorithm? Must all the consumers synchronously change the verification key at the same moment in time? Preferably, consumers should be able to simultaneously support two or more keys, identified by the “kid” parameter.
    The same applies to algorithm changes and the use of the “alg” parameter.
    So, I don’t think that removing the “alg” header is a good idea.

A verification function should accept a set of possible keys (each bound to explicit algorithms) or receive a callback to fetch the key given both the algorithm and the key id.
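
As a purely hypothetical illustration of that shape – none of these types come from an existing library – the signatures might look like this:

// hypothetical API: keys are typed objects bound to an algorithm and key id,
// and the consumer supplies either a fixed set of keys or a resolver
public sealed class VerificationKey
{
    public string Algorithm { get; set; }   // e.g. "RS256"
    public string KeyId { get; set; }       // the "kid"
    public object KeyMaterial { get; set; } // e.g. RSA parameters – not a raw string
}

public interface IJwtVerifier
{
    ClaimsPrincipal Verify(string token, IEnumerable<VerificationKey> acceptableKeys);

    ClaimsPrincipal Verify(
        string token,
        Func<string /* alg */, string /* kid */, VerificationKey> resolveKey);
}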

Don’t assume, always verify

Verifying a JWT before using the claims that it asserts is always more than just checking a signature. Who was the issuer? Is the token valid at the time of usage? Was the token explicitly revoked? Who is the intended audience? Is the protection algorithm compatible with the usage scenario? These are all questions that must be explicitly verified by a JWT consumer application or component.

For instance, OpenID Connect lists the verification steps that must done by a client application (the relying party) before using the claims in a received ID token.

And so it begins …

If the recent history of SSL/TLS related problems has taught us anything, it is that security protocol design and implementation is far from easy, and that “obvious” vulnerabilities can remain undetected for long periods of time.
If these problems happen on well known and commonly used designs and libraries such as SSL and OpenSSL, we must be prepared for similar occurrences on JWT based protocols and implementations.
In this context, security analyses such as the one described in Tim’s post are of utmost importance, even if I don’t agree with some of the proposed measures.



Darrel Miller: Solving Dropbox's URL Problems

A recent post on the Dropbox developer blog talked about the difficulty of constructing URLs due to the challenges of encoding parameters.  They proposed the idea of using encoded JSON to embed parameters in URLs. I believe URI Templates offer a much easier and cleaner way to address this issue.  This blog post shows how.


I've talked about using a URI Template library to construct URLs before, but in this post I'm going to consider the specific examples highlighted in the Dropbox post.

Picky About Punctuation

The first example introduced the problem of spaces in URLs,

/1/search/auto/My+Documents?query=draft+2013

It is true that spaces are not allowed in URLs; interestingly, however, the plus sign used in this example is not the correct way to deal with spaces according to RFC 3986, the URI specification.  Web browsers allow you to use the plus sign as a replacement for a space in the address bar, but technically spaces should be encoded as %20.

Using a URI Template library you are able to clearly distinguish which parts of the final URL are literals and syntax and which parts are parameters.  The parameters are considered "data" and therefore any characters that have special meaning in a URI will automatically be escaped.

[Fact]
public void EncodingTest1()
{

    var url = new UriTemplate("/1/search/auto/{folder}{?query}")
        .AddParameter("folder","My Documents")
        .AddParameter("query", "draft 2013")
        .Resolve();

    Assert.Equal("/1/search/auto/My%20Documents?query=draft%202013", url);
}

In the above example, the spaces in the folder and query parameters are automatically escaped.

Dastardly Delimiters

The next example given in the Dropbox post highlights the challenges of using parameter data that contains characters that are considered URL delimiters and are therefore considered reserved.

“/hello-world” is equivalent to “/hello%2Dworld”
“/hello/world” is not equivalent to “/hello%2Fworld“

The example is a bit misleading because the hyphen character is not a reserved character and therefore doesn't need to be escaped.   However, the point remains that a forward slash in a parameter value should be escaped to prevent it from being considered a delimiter.

This ambiguity is easily avoided in URI Templates because parameter values are specified explicitly.

[Fact]
public void EncodingTest2()
{

    // Parameter values get encoded but hyphen doesn't need to be encoded because it
    // is an "unreserved" character according to RFC 3986
    var url = new UriTemplate("{/greeting}")
        .AddParameter("greeting", "hello-world")
        .Resolve();

    Assert.Equal("/hello-world", url);

    // A slash does need to be encoded
    var url2 = new UriTemplate("{/greeting}")
        .AddParameter("greeting", "hello/world")
        .Resolve();

    Assert.Equal("/hello%2Fworld", url2);

    // If you truly want to make multiple path segments then do this
    var url3 = new UriTemplate("{/greeting*}")
        .AddParameter("greeting", new List {"hello","world"})
        .Resolve();

    Assert.Equal("/hello/world", url3);

}

Parameter Preferences

The next Dropbox example demonstrates that there is some flexibility in the way you can represent lists of values in URLs.

/docs/salary.csv?columns=1,2
/docs/salary.csv?column=1&column=2

The problem with the flexibility given to API producers is that API consumers have to deal with additional complexity.  Fortunately URI Templates allow these different approaches to be handled by adding just one additional character to the URI Template.

[Fact]
public void EncodingTest3()
{

    // There are different ways that lists can be included in query params
    // Just as a comma delimited list
    var url = new UriTemplate("/docs/salary.csv{?columns}")
        .AddParameter("columns", new List {1,2})
        .Resolve();

    Assert.Equal("/docs/salary.csv?columns=1,2", url);

    // or as a multiple parameter instances
    var url2 = new UriTemplate("/docs/salary.csv{?columns*}")
        .AddParameter("columns", new List { 1, 2 })
        .Resolve();

    Assert.Equal("/docs/salary.csv?columns=1&columns=2", url2);
}

The only difference between the two templates is the asterisk at the end of the parameter token.  This is called the "explode modifier".  The additional bonus for hypermedia driven APIs that provide the templates to the client is that the client code can be completely ignorant of which approach is being used, and the server can change its mind at some point in the future without anything breaking.

Nested Nomenclature

The next example shows a technique developers use for including nested data as part of query string parameters.

/emails?from[name]=Don&from[date]=1998-03-24&to[name]=Norm

Because of the clear separation between parameters and the URI Template, this scenario is fairly trivial to handle.  Also, considering the potentially dynamic nature of this type of query string parameter, another feature of URI Templates can be used to make this type of URL even easier to construct.

[Fact]
public void EncodingTest4()
{
    var url = new UriTemplate("/emails{?params*}")
        .AddParameter("params", new Dictionary<string,string>
        {
            {"from[name]","Don"},
            {"from[date]","1998-03-24"},
            {"to[name]","Norm"}
        })
        .Resolve();

    Assert.Equal("/emails?from[name]=Don&from[date]=1998-03-24&to[name]=Norm", url);
}

Query string parameter names are passed through the URI Template to the URL, untouched, which is why the square brackets are not escaped.  According to RFC 3986 they should be escaped.  However, it is fairly common to see them in URLs unescaped.  Although it is a violation of the rules, the impact is minimal because square brackets are currently only used in the host name for specifying IPv6 addresses.

Separating Syntax

The key to URI Templates being able to help with URL encoding is that it is obvious which pieces are data and which pieces are URL syntax.  This allows the encoding to be performed only where it is needed, on the data.

Personally, I  do not think we need to resort to such extreme measures as JSON encoding parameters to make it easy for developers to safely construct URLs.  Hopefully, this post will convince a few other people.

Image Credit: Three body problem https://flic.kr/p/pNHgi5


Dominick Baier: Implicit vs Explicit Authentication in Browser-based Applications

I got the idea for this post from my good friend Pedro Felix – I hope I don’t steal his thunder (I am sure I won’t – since he is much more elaborate than I am) – but when I saw his tweet this morning, I had to write this post.

When teaching web API security, Brock and I often use the term implicit vs explicit authentication. I don’t think these are standard terms – so here’s the explanation.

What’s implicit authentication?
Browser built-in mechanisms like Basic, Windows, Digest authentication, client certificates and cookies. Once established for a certain domain, the browser implicitly sends the credential along automatically.

Advantage: It just works. Once the browser has authenticated the user (or the cookie is set) – no special code is necessary

Disadvantage:

  • No control. The browser will send the credential regardless of which application makes the request to the “authenticated domain”. CSRF is the result of that.
  • Domain bound – only “clients” from the same domain as the “server” will be able to communicate.

What’s explicit authentication?
Whenever the application code (JavaScript in that case) has to send the credential explicitly – typically on the Authorization header (and sometimes also as a query string). Using OAuth 2.0 implicit flow and access tokens in JS apps is a common example. Strictly speaking the browser does not know anything about the credential and thus would not send it automatically.
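As a small illustration (shown here in C# with HttpClient rather than the JavaScript case described above, using System.Net.Http and System.Net.Http.Headers, and a placeholder accessToken variable), explicitly attaching the credential looks like this:

// Inside some async client method; accessToken was obtained e.g. via the OAuth 2.0 implicit flow.
var client = new HttpClient();
// The application code attaches the credential itself; nothing is sent along implicitly by the browser or the stack.
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
var response = await client.GetAsync("https://api.example.com/resource");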

Advantage:

  • Full control over when the credential is sent.
  • No CSRF.
  • Not bound to a domain.

Disadvantages:

  • Custom code necessary.
  • Access tokens need to be managed by the JS app (and don’t have built-in protection features like httpOnly cookies) which makes them interesting targets for other types of attacks (CSP can help here).

Summary: Implicit authentication works great for server-side web applications that live on a single domain. CSRF is well understood and frameworks typically have built-in countermeasures. Explicit authentication is recommended for web APIs. Anti CSRF is harder here, and clients and APIs are often spread across domains which makes cookies a no go.


Filed under: WebAPI


Taiseer Joudeh: ASP.NET Web API Claims Authorization with ASP.NET Identity 2.1 – Part 5

This is the fifth part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

ASP.NET Web API Claims Authorization with ASP.NET Identity 2.1

In the previous post we implemented a finer grained way to control authorization based on the Roles assigned to the authenticated user; this was done by assigning users to predefined Roles in our system and then attributing the protected controllers or actions with the [Authorize(Roles = “Role(s) Name”)] attribute.


Using Roles Based Authorization for controlling user access will be efficient in scenarios where your Roles do not change much and the users’ permissions do not change frequently.

In some applications controlling user access to system resources is more complicated, and having users assigned to certain Roles is not enough for managing user access efficiently; you need a more dynamic way to control access based on certain information related to the authenticated user. This will lead us to control user access using Claims, or in other words, using Claims Based Authorization.

But before we dig into the implementation of Claims Based Authorization we need to understand what Claims are!

Note: It is not mandatory to use Claims for controlling user access; if you are happy with Roles Based Authorization and you have a limited number of Roles, then you can stick with it.

What is a Claim?

A claim is a statement that the user makes about itself; it can be the user name, first name, last name, gender, phone, the roles the user is assigned to, etc… Yes, the Roles we have been looking at are transformed to Claims in the end, and as we saw in the previous post, in ASP.NET Identity those Roles have their own manager (ApplicationRoleManager) and set of APIs to manage them, yet you can consider them as a Claim of type Role.
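In code terms, a role really is just another claim on the identity; for example (using System.Security.Claims, and assuming identity is a ClaimsIdentity):

var roleClaim = new Claim(ClaimTypes.Role, "Admin");
identity.AddClaim(roleClaim); // [Authorize(Roles = "Admin")] checks for exactly this claim type/value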

As we saw before, any authenticated user will receive a JSON Web Token (JWT) which contains a set of claims inside it, what we’ll do now is to create a helper end point which returns the claims encoded in the JWT for an authenticated user.

To do this we will create a new controller named “ClaimsController” which will contain a single method responsible for unpacking the claims in the JWT and returning them. Add the new controller named “ClaimsController” under the “Controllers” folder and paste the code below:

[RoutePrefix("api/claims")]
    public class ClaimsController : BaseApiController
    {
        [Authorize]
        [Route("")]
        public IHttpActionResult GetClaims()
        {
            var identity = User.Identity as ClaimsIdentity;
            
            var claims = from c in identity.Claims
                         select new
                         {
                             subject = c.Subject.Name,
                             type = c.Type,
                             value = c.Value
                         };

            return Ok(claims);
        }

    }

The code we have implemented above is straightforward: we are getting the Identity of the authenticated user by calling “User.Identity”, which returns a “ClaimsIdentity” object, then we are iterating over the IEnumerable Claims property and returning three properties (Subject, Type, and Value).
To execute this endpoint we need to issue an HTTP GET request to “http://localhost/api/claims” and do not forget to pass a valid JWT in the Authorization header; the response for this end point will contain the JSON object below:

[
  {
    "subject": "Hamza",
    "type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
    "value": "cd93945e-fe2c-49c1-b2bb-138a2dd52928"
  },
  {
    "subject": "Hamza",
    "type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
    "value": "Hamza"
  },
  {
    "subject": "Hamza",
    "type": "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider",
    "value": "ASP.NET Identity"
  },
  {
    "subject": "Hamza",
    "type": "AspNet.Identity.SecurityStamp",
    "value": "a77594e2-ffa0-41bd-a048-7398c01c8948"
  },
  {
    "subject": "Hamza",
    "type": "iss",
    "value": "http://localhost:59822"
  },
  {
    "subject": "Hamza",
    "type": "aud",
    "value": "414e1927a3884f68abc79f7283837fd1"
  },
  {
    "subject": "Hamza",
    "type": "exp",
    "value": "1427744352"
  },
  {
    "subject": "Hamza",
    "type": "nbf",
    "value": "1427657952"
  }
]

As you noticed from the response above, all the claims contain three properties, and those properties represent the following:

  • Subject: Represents the identity that those claims belong to; usually the value for the subject will contain the unique identifier for the user in the system (Username or Email).
  • Type: Represents the type of the information contained in the claim.
  • Value: Represents the claim value (information) about this claim.

Now to get a better understanding of what those claim types mean, let’s take a look at the table below:

Subject | Type | Value | Notes
Hamza | nameidentifier | cd93945e-fe2c-49c1-b2bb-138a2dd52928 | Unique user id generated by the Identity system
Hamza | name | Hamza | Unique username
Hamza | identityprovider | ASP.NET Identity | How the user has been authenticated (ASP.NET Identity)
Hamza | SecurityStamp | a77594e2-ffa0-41bd-a048-7398c01c8948 | Unique id which stays the same until any security related attribute changes, e.g. changing the user password
Hamza | iss | http://localhost:59822 | Issuer of the access token (the authorization server)
Hamza | aud | 414e1927a3884f68abc79f7283837fd1 | The system this token was generated for (the audience)
Hamza | exp | 1427744352 | Expiry time of this access token (Unix epoch seconds)
Hamza | nbf | 1427657952 | Time before which this token must not be accepted (Unix epoch seconds)
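The “exp” and “nbf” values are expressed in Unix epoch seconds and can be turned into readable timestamps like this (DateTimeOffset.FromUnixTimeSeconds requires .NET Framework 4.6 or later):

var expires   = DateTimeOffset.FromUnixTimeSeconds(1427744352); // 2015-03-30T19:39:12Z
var notBefore = DateTimeOffset.FromUnixTimeSeconds(1427657952); // 2015-03-29T19:39:12Z, 24 hours earlier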

Now that we have briefly described what claims are, we want to see how we can use them to manage user access; in this post I will demonstrate three ways of using claims, as listed below:

  1. Assigning claims to the user on the fly based on user information.
  2. Creating custom Claims Authorization attribute.
  3. Managing user claims by using the “ApplicationUserManager” APIs.

Method 1: Assigning claims to the user on the fly

Let’s assume a fictional use case where our API will be used in an eCommerce website, where certain users have the ability to issue refunds for orders if an incident happens and the customer is not happy.

So certain criteria should be met in order to grant our users the privilege to issue refunds: the user should have been working for the company for more than 90 days, and the user should be in the “Admin” Role.

To implement this we need to create a new class which will be responsible for reading the authenticated user’s information; based on the information read, it will create a single claim or a set of claims and assign them to the user identity.
If you recall from the first post of this series, we extended the “ApplicationUser” entity and added a property named “JoinDate” which represents the hiring date of the employee. Based on the hiring date, we need to assign a new claim named “FTE” (Full Time Employee) to any user who has worked for more than 90 days. To start implementing this, let’s add a new class named “ExtendedClaimsProvider” under the “Infrastructure” folder and paste the code below:

public static class ExtendedClaimsProvider
    {
        public static IEnumerable<Claim> GetClaims(ApplicationUser user)
        {
          
            List<Claim> claims = new List<Claim>();

            var daysInWork =  (DateTime.Now.Date - user.JoinDate).TotalDays;

            if (daysInWork > 90)
            {
                claims.Add(CreateClaim("FTE", "1"));
               
            }
            else {
                claims.Add(CreateClaim("FTE", "0"));
            }

            return claims;
        }

        public static Claim CreateClaim(string type, string value)
        {
            return new Claim(type, value, ClaimValueTypes.String);
        }

    }

The implementation is simple: the “GetClaims” method takes an ApplicationUser object and returns a list of claims. Based on the “JoinDate” field it will add a new claim named “FTE” and assign it a value of “1” if the user has been working for more than 90 days, or a value of “0” if the user has worked for less than this period. Notice how I’m using the method “CreateClaim” which returns a new instance of the claim.

This class can be used to create custom claims for the user based on information related to her; you can add as many claims as you want here, but in our case we will add only a single claim.

Now we need to call the method “GetClaims” so the “FTE” claim will be associated with the authenticated user identity. To do this, open class “CustomOAuthProvider” and, in method “GrantResourceOwnerCredentials”, add the line that calls “ExtendedClaimsProvider.GetClaims” as in the code snippet below:

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
	//Code removed for brevity

	ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
	
	oAuthIdentity.AddClaims(ExtendedClaimsProvider.GetClaims(user));
	
	var ticket = new AuthenticationTicket(oAuthIdentity, null);
	
	context.Validated(ticket);
   
}

Notice how the established claims identity object “oAuthIdentity” has a method named “AddClaims” which accepts an IEnumerable of claims. Now the new “FTE” claim is assigned to the authenticated user, but this is not enough to satisfy the criteria needed to issue the fictitious refund on orders; we need to make sure that the user is in the “Admin” Role too.

To implement this we’ll create a new Role on the fly based on the claims assigned to the user; in other words, we’ll create Roles from the Claims the user has been assigned. This Role will be named “IncidentResolvers”. And as we stated at the beginning of this post, Roles are eventually considered a Claim of type Role.

To do this add new class named “RolesFromClaims” under folder “Infrastructure” and paste the code below:

public class RolesFromClaims
    {
        public static IEnumerable<Claim> CreateRolesBasedOnClaims(ClaimsIdentity identity)
        {
            List<Claim> claims = new List<Claim>();

            if (identity.HasClaim(c => c.Type == "FTE" && c.Value == "1") &&
                identity.HasClaim(ClaimTypes.Role, "Admin"))
            {
                claims.Add(new Claim(ClaimTypes.Role, "IncidentResolvers"));
            }

            return claims;
        }
    }

The implementation is self explanatory, we have created a method named “CreateRolesBasedOnClaims” which accepts the established identity object and returns a list of claims.

Inside this method we will check that the established identity for the authenticated user has a claim of type “FTE” with value “1”, as well as a claim of type “Role” with value “Admin”; if those 2 conditions are met, we will create a new claim of type “Role” and give it a value of “IncidentResolvers”.
The last thing we need to do here is to assign this new set of claims to the established identity. To do this, open class “CustomOAuthProvider” again and, in method “GrantResourceOwnerCredentials”, add the line that calls “RolesFromClaims.CreateRolesBasedOnClaims” as in the code snippet below:

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
	//Code removed for brevity

	ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
	
	oAuthIdentity.AddClaims(ExtendedClaimsProvider.GetClaims(user));
	
	oAuthIdentity.AddClaims(RolesFromClaims.CreateRolesBasedOnClaims(oAuthIdentity));
	
	var ticket = new AuthenticationTicket(oAuthIdentity, null);
	
	context.Validated(ticket);
   
}

Now all the new claims which were created on the fly are assigned to the established identity, and once we call the method “context.Validated(ticket)” all claims will get encoded in the JWT. To test this out, let’s add a fictitious controller named “OrdersController” under the “Controllers” folder as in the code below:

[RoutePrefix("api/orders")]
public class OrdersController : ApiController
{
	[Authorize(Roles = "IncidentResolvers")]
	[HttpPut]
	[Route("refund/{orderId}")]
	public IHttpActionResult RefundOrder([FromUri]string orderId)
	{
		return Ok();
	}
}

Notice how we attribute the action “RefundOrder” with [Authorize(Roles = "IncidentResolvers")] so only authenticated users with a claim of type “Role” and a value of “IncidentResolvers” can access this end point. To test this out you can issue an HTTP PUT request to the URI “http://localhost/api/orders/refund/cxy-4456393” with an empty body.

As you noticed from this first method, we have depended on user information to create claims and kept the authorization more dynamic and flexible.
Keep in mind that you can add your own access control business logic, and get finer grained control over authorization, by implementing this logic in the classes “ExtendedClaimsProvider” and “RolesFromClaims”.

Method 2: Creating custom Claims Authorization attribute

Another way to implement Claims Based Authorization is to create a custom authorization attribute which inherits from “AuthorizationFilterAttribute”; this authorization attribute will directly check the claim type and value on the established identity.

To do this let’s add new class named “ClaimsAuthorizationAttribute” under folder “Infrastructure” and paste the code below:

public class ClaimsAuthorizationAttribute : AuthorizationFilterAttribute
    {
        public string ClaimType { get; set; }
        public string ClaimValue { get; set; }

        public override Task OnAuthorizationAsync(HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
        {

            var principal = actionContext.RequestContext.Principal as ClaimsPrincipal;

            if (!principal.Identity.IsAuthenticated)
            {
                actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
                return Task.FromResult<object>(null);
            }

            if (!(principal.HasClaim(x => x.Type == ClaimType && x.Value == ClaimValue)))
            {
                actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
                return Task.FromResult<object>(null);
            }

            //User is Authorized, complete execution
            return Task.FromResult<object>(null);

        }
    }

What we’ve implemented here is the following:

  • Created a new class named “ClaimsAuthorizationAttribute” which inherits from “AuthorizationFilterAttribute” and then override method “OnAuthorizationAsync”.
  • Defined 2 properties “ClaimType” & “ClaimValue” which will be set when we use this custom authorization attribute.
  • Inside method “OnAuthorizationAsync” we are casting the object “actionContext.RequestContext.Principal” to “ClaimsPrincipal” object and check if the user is authenticated.
  • If the user is authenticated, we’ll look into the claims established for this identity to see if it contains the required claim type and claim value.
  • If the identity contains the matching claim type and value, we’ll consider the request authorized and complete the execution; otherwise we’ll return a 401 Unauthorized status.

To test the new custom authorization attribute, we’ll add new method to the “OrdersController” as the code below:

[ClaimsAuthorization(ClaimType="FTE", ClaimValue="1")]
[Route("")]
public IHttpActionResult Get()
{
	return Ok();
}

Notice how we decorated the “Get()” method with the [ClaimsAuthorization(ClaimType="FTE", ClaimValue="1")] attribute, so any user who has the claim “FTE” with a value of “1” can access this protected end point.

Method 3: Managing user claims by using the “ApplicationUserManager” APIs

The last method we want to explore here is using the “ApplicationUserManager” claims related APIs to manage user claims and store them in the ASP.NET Identity table “AspNetUserClaims”.

In the previous two methods we’ve created claims for the user on the fly, but in method 3 we will see how we can add/remove claims for a certain user.

The “ApplicationUserManager” class comes with a set of predefined APIs which make dealing with and managing claims simple; the APIs that we’ll use in this post are listed in the table below:

Method Name | Usage
AddClaimAsync(id, claim) | Create a new claim for the specified user id
RemoveClaimAsync(id, claim) | Remove a claim from the specified user if the claim type and value match
GetClaimsAsync(id) | Return an IEnumerable of claims for the specified user id

To use those APIs let’s add 2 new methods to the “AccountsController”: the first method “AssignClaimsToUser” will be responsible for adding new claims to a specified user, and the second method “RemoveClaimsFromUser” will remove claims from a specified user, as in the code below:

[Authorize(Roles = "Admin")]
[Route("user/{id:guid}/assignclaims")]
[HttpPut]
public async Task<IHttpActionResult> AssignClaimsToUser([FromUri] string id, [FromBody] List<ClaimBindingModel> claimsToAssign) {

	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	 var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}

	foreach (ClaimBindingModel claimModel in claimsToAssign)
	{
		if (appUser.Claims.Any(c => c.ClaimType == claimModel.Type)) {
		   
			await this.AppUserManager.RemoveClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
		}

		await this.AppUserManager.AddClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
	}
	
	return Ok();
}

[Authorize(Roles = "Admin")]
[Route("user/{id:guid}/removeclaims")]
[HttpPut]
public async Task<IHttpActionResult> RemoveClaimsFromUser([FromUri] string id, [FromBody] List<ClaimBindingModel> claimsToRemove)
{

	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}

	foreach (ClaimBindingModel claimModel in claimsToRemove)
	{
		if (appUser.Claims.Any(c => c.ClaimType == claimModel.Type))
		{
			await this.AppUserManager.RemoveClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
		}
	}

	return Ok();
}

The implementation for both methods is almost identical; as you noticed, we are only allowing users in the “Admin” role to access those endpoints, then we are specifying the user id and a list of the claims that will be added or removed for this user.

Then we are making sure that the specified user exists in our system before trying to do any operation on the user.

When adding a new claim for the user, we check whether the user already has a claim of the same type before trying to add it; if it exists, we remove the existing claim and add it again with the new claim value.

The same applies when we try to remove a claim from the user. Notice that the methods “AddClaimAsync” and “RemoveClaimAsync” save the claims permanently to our SQL data-store in the “AspNetUserClaims” table.

Do not forget to add the “ClaimBindingModel” class under the “Models” folder; it acts as our POCO class when we are sending the claims from our front-end application. The class will contain the code below:

public class ClaimBindingModel
    {
        [Required]
        [Display(Name = "Claim Type")]
        public string Type { get; set; }

        [Required]
        [Display(Name = "Claim Value")]
        public string Value { get; set; }
    }

There are no extra steps needed in order to pull those claims from the SQL data-store when establishing the user identity, thanks to the method “CreateIdentityAsync” which is responsible for pulling all the claims for the user. We have already implemented this; you can check it in the source code on GitHub.

To test those methods all you need to do is issue an HTTP PUT request to the URIs “http://localhost:59822/api/accounts/user/{UserId}/assignclaims” and “http://localhost:59822/api/accounts/user/{UserId}/removeclaims”, passing the claims in the request body.
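The request body is simply a JSON array of claim type/value pairs matching the “ClaimBindingModel” shown above; for example (the exact property casing depends on your serializer settings):

[
  { "Type": "FTE", "Value": "1" }
]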


That’s it for now folks about implementing Authorization using Claims.

In the next post we’ll build a simple AngularJS application which connects all those posts together, this post should be interesting :)

The source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh

References

The post ASP.NET Web API Claims Authorization with ASP.NET Identity 2.1 – Part 5 appeared first on Bit of Technology.


Darrel Miller: API Design Notes: Smart Paging

If you spend any time reading about API design or working with APIs you will likely have come across the notion of paging response data.  Paging has been used in the HTML web for many years as a method to provide users with a fast response to their searches.  I normally spend my time advocating that  Web APIs should emulate the HTML web more, but in this case I believe there are better ways than slicing results into arbitrary pages of data.


Is it necessary?

To provide some context, it is worth asking a few questions about why we do paging.  On the HTML web, paging was critical because results needed to be rendered in HTML and too many results creates a large HTML page.  Web browsers are often slow at rendering large HTML pages and that makes users wait.  Research has shown that users don't wait.

With Web APIs, there does not need to be a direct correlation between data retrieved and data rendered to a user.  What gets sent over the wire is just data and can use a more efficient format than HTML.  So when do we need to start getting the server to page data that is returned?  How much is too much?

Unfortunately, "how much" is one of those it depends questions.  However, consider the fact that Google's guidelines for banner ads is that they should be less than 150K.  You can fit a whole lot of content in a 150K JSON payload.


What's wrong with paging?

There are a few things that I don't like about paging. From a UX perspective, if the paging mechanism does end up getting reflected in the UI it's just not a pleasant experience. Why can't I just scroll?  If I'm looking for some specific items it is difficult because it is hard to guess which page the items I want might be on.  This forces me to walk through the pages one at a time.   If the data is changing while I'm paging through it, some items may be skipped, others will be duplicated.

Whenever I see those drop downs that ask whether you want 5, 10, or 50 items per page, I always cringe a little.  But how do you determine the ideal page size? Based on what fits on the screen, or the time to transfer the data?  None of those factors are fixed, so there is no good answer.

It is also important to realize that as your user is paging through the data, one page at a time, the server is having to re-execute the entire query to determine the complete set of results so that it can return just one page's worth of data.  In theory, complete results can be cached, but then you risk losing the scalability benefits of a stateless server.  Making the server do significantly more work to improve client responsiveness may become a self-defeating goal.


How can it be done better?

Paging exists as a way to force the user to request a smaller subset of data.  Encouraging users to return less data is a win for everyone.  It's less data for the server to process, less bandwidth, less information for the client to process and less information for the user to hunt through, to find what they want.

However, slicing the items of data up into arbitrary sized chunks based on some ordering algorithm is often not the most effective way of allowing users to refine their inquiry.

I find that it is always worth reviewing the characteristics of the data you are returning and asking the question, is there some natural property of the data that would be more effective at sub-dividing the data into smaller chunks?

Know Your A,B,Cs

The most obvious example is with a list of names.  Many contact manager type applications will group contacts by first letter of either the first name or last name.  Using alphabetic ordering creates 26 "pages" of data.  This makes it much easier for a user to jump to the page that contains the person they are looking for.


It is true that using the alphabet to page through names limits you to only 26 pages (assuming you don't use two letters, which would be a bit weird); however, even with just those pages, my estimate is you could still return a list of 30,000 names in a JSON document and still be smaller than the 150K banner ad.  With compression, you could return far more.
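As a rough sketch, a hypermedia API could hand the client a URI Template for those alphabetic "pages" and let a URI Template library resolve it; the template and resource names here are illustrative, using the same AddParameter/Resolve style shown in the Dropbox post above:

var url = new UriTemplate("/contacts{/letter}")
    .AddParameter("letter", "M")
    .Resolve();

// url == "/contacts/M"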

It's About Time

Alphabet based paging is only one of many ways that data can be segregated.  Time based data is ripe for doing smart paging.  Data can be paged by day, by week, by month.  You can often see this mechanism on blogs.  It's easy to jump to all the posts done in a previous month.

Sometimes data has other segments like classifications, categories or geographies.  The groups may not have a natural sequence, so you may have to invent one.  

The important thing is that you are providing the API consumer with a way of dealing with chunks of data in more manageable sizes.  Those chunks will be more meaningful in terms of the application domain and there is a reasonable chance they will be quicker to retrieve because the underlying data store may have indexes on those attributes.

What's Next?

From the API consumer's perspective, one advantage of  dumb paging is that it is easy to determine what page is next.  A client can easily increment a numeric page value.  It's not so easy with smart paging.   If your client needs to construct the link to the next page then it is going to need some smarts as to how to generate the next page URL.  You may need to send the client a sequence of categories, or provide a period for time-based paging. However, if you are using a framework that generates next/previous links in the responses (like OData does) then it's easy because the server can create the appropriate links and the client can blindly follow them to the next page.

It May Not be Possible

Sometimes data just doesn't have a natural grouping, or the size of the groups that do exist is just too large to be useful.  Arbitrary pages may be the correct approach for your scenario.  My recommendation is simply to consider the more natural possibilities first before falling back on "dumb" paging.


Let Your Framework Know Who's the Boss

All too often I see developers making design choices based on capabilities provided by their chosen framework. What many developers don't realize is that those facilities are often provided by the framework, not because they are the best design choice, but because they were easy for the framework developers to provide.  Obviously a framework cannot know the semantics of the data that you will be paging through, therefore it is difficult to provide a smart paging capability out of the box.  However, dumb paging is easy to provide.

Make your own design choices and use framework capabilities where appropriate, don't trust framework designers to do that work for you.

 

Image Credits:

Drag Racer https://flic.kr/p/aig7EJ
Pimped car https://flic.kr/p/56GBm
Tesla S Dash https://flic.kr/p/c5WgBC
Boss Hogg Car https://flic.kr/p/foFX8p
Alphabet Car https://flic.kr/p/aHjgDk
Delorean https://flic.kr/p/29WWWZ


Dominick Baier: IdentityServer3 vNext

Just a quick update about some upcoming changes in IdentityServer3.

In the weeks since the 1.0.0 release in January we did mostly bug fixing, fine tuning and listening to feedback. Inevitably we found things we want to change and improve – and some of them are breaking changes.

Right now we are in the process of compiling these small and big changes to bundle them up in a 2.0.0 release, so hopefully after that we can go back into fine tuning mode without breaking anybody’s code.

Here’s a brief list of things that have/will change in 2.0.0

  • Consolidation of some validation infrastructure
    • ICustomRequestValidator signature has slightly changed
  • Support for X.509 client certificates for client authentication at the token endpoint. This resulted in a number of changes to make client validation more flexible in general
    • ClientSecret has been renamed to Secret (we will probably use the concept of secrets in more places than just the client in the future)
    • IClientSecretValidator is gone in favour of a more high level IClientValidator
  • The event service is now async (we simply missed that in 1.0)
  • The CorsPolicy has been replaced by a CORS policy service – along with configurable CORS origins per client
  • By default clients have no access to any scopes. You need to configure the allowed scopes (or override by setting the new AllowAccessToAllScopes client flag)

Probably the biggest change is the fact that we renamed the nuget package to simply IdentityServer3. We decided to remove the thinktecture registered trademark from the OSS project altogether (including the namespaces – so that’s another breaking change).

So in the future all you need to do is:

install-package IdentityServer3 (-pre for the time being)

The dev branch on github is now on 2.0.0 and we published a beta package to nuget so you can have a look (in addition to our myget dev feed):

https://www.nuget.org/packages/IdentityServer3/2.0.0-beta1

Feedback is welcome!


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, OWIN, WebAPI


Darrel Miller: Are You Or Your Customers Leaking Your API Keys?

Several months ago I wrote a post called Where, oh where, does the API key go?  I encouraged API providers to allow consumers to put the API Key in the Authorization header to help avoid accidental disclosure of keys via things like web server logs.  I recently bumped into a way that anyone can harvest hundreds of API keys from many different web sites, including ones that charge significant amounts of money for access.


The API Keys I discovered are in HttpArchive.  HttpArchive is a project started by Steve Souders as a tool to help make the web faster.  All the data collected by HttpArchive is made available via Google's BigQuery project.  There is a discussion site where there are all kinds of conversations about queries that are being run on HttpArchive data and their performance impacts.


When I first heard about the HttpArchive I naively assumed that the data was being collected from the logs of some big piece of internet infrastructure.  I suppose if I had looked more closely at the data being collected I would have realized that the data had to be collected via another method.


The answer to how HttpArchive collects its data is in another incredible tool WebPageTest.  HttpArchive pulls down a list of URLs from the Alexa Top 1,000,000 web sites and then kicks off a bunch of WebPageTest machines to navigate to those URLs and record all of the requests made when loading the sites.

Many of the sites being tested and recorded, download Javascript and then make calls to 3rd party APIs.  The API keys used to call those APIs are therefore recorded in HttpArchive.

This query against the HttpArchive is all it takes to pull back more than 800 unique API keys from the most recent dump of data.

 SELECT   method,
          REGEXP_EXTRACT(url, r'([^:]*)') as scheme,
          REGEXP_EXTRACT(url, r'://([^/]*)') as host, 
          REGEXP_EXTRACT(url, r'apikey=([^&?]*)') as ApiKey
 FROM httparchive:runs.latest_requests
 WHERE url LIKE '%apikey=%'
 group by 1,2,3,4 
 ORDER BY 1,2,3,4

Hundreds more can be found with different variations of api_key, api-key and ApiKey.  Pulling the key from the URL is definitely the easiest.  However, HttpArchive also records request header values.  With a little more RegEx foo, you can start pulling API keys out of headers like X-ApiKey and X-Authorization.


Unfortunately, you can also access credentials included in the Authorization header.  This was the one header that I was really hoping would have been filtered out of the test results.  I have posted to the HttpArchive mailing list with the hope that future dumps of data can get the Authorization header value stripped out.  This is the advantage of using a standard header.  We know what it is called, we know that the information contained in it should not be shared and we can get no useful performance information from it, so we will not lose anything by removing it from the archives.

The biggest surprise to me was the fact that we also get API keys from HTTPS requests.  WebPageTest is running on the client machine and can see the request in the browser as it is being made, before SSL encryption. All the query parameters and HTTP headers are completely accessible to store.

Takeaway

If you can't afford to have someone misusing your API Key, then don't send it down to the client.  HTTPS is not going to save you.  And don't rely on security by obscurity.  The world of big data is making it easier to expose and query massive amounts of data every day.

And finally, use the Authorization header for what it was intended and don't ever log it!


Image Credits:
Combine
https://flic.kr/p/8kPPaw
Sad Clown https://flic.kr/p/d7EN1
Takeout https://flic.kr/p/4d5oBZ


Darrel Miller: Share Your Code, Not Your API Keys

Part of my role at Runscope involves me writing OSS libraries or sample projects to share with other developers.  I also regularly use 3rd party APIs in the process.  This requires the use of API keys and other private data that I'd rather not share.  Unfortunately it is all too easy to leave a key in a source code file and accidentally commit it to a public source control repository.


The Stack Overflow Solution

The standard guidance on Stack Overflow is to commit your configuration file with dummy information in it and then tell Git to ignore any future changes to the file.  This seems like a reasonable approach as long as you keep your private data out of the standard app or web.config.  Once you have committed to using a separate file for private configuration data, the new developer has to be made aware of this settings file.

It isn't a terrible solution, but it felt like there was room for improvement.  When someone makes the decision to try  your sample application or library, you want the experience to be as painless as possible.

As an aside, I wish API providers would make publicly available API keys that pointed to sample data.  Even if the key only allowed read-only access, it would make the education process a whole lot easier.

Automatically Initialize

I decided that I wanted to create a simple HTML form based user interface for supplying private data elements and automatically take care of storing that data in a file somewhere that wouldn't get committed.

My solution to this problem went through various iterations, trying to get the right balance of simplicity and security.  I wanted to make sure there was no way that the form could expose previously stored credentials and Jeremy Miller also pointed out that you don't want to allow an external party to inadvertently or maliciously cause the existing credentials to be lost.

In order to avoid having to build a fully authenticated administrative interface, the HTML form is only displayed once, when there is no configuration data file.  Once it has been created, the developer must edit the file manually to make changes, or delete the file to trigger the appearance of the HTML form again.  This is a simplistic solution, ideal for sample applications, but it could also be the starting point for something more sophisticated.

The Man in the Middle

The functionality is provided primarily by a piece of middleware.  In ASP.NET Web API this is implemented as a class that derives from DelegatingHandler and is added to the MessageHandler pipeline.  The same architectural pattern exists in other web frameworks, so I'm sure the code I am using could be re-implemented on other platforms.


To initialize the piece of middleware using ASP.NET Web API, we create an instance of a new class I created called  PrivateDataMessageHandler and pass it a path to a configuration file that will hold my private API keys.

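A minimal sketch of that registration (the constructor signature and configuration file name are assumed from the description, not copied from the project):

// In WebApiConfig.Register(HttpConfiguration config)
var privateDataPath = System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data/privatedata.json"); // file name assumed
config.MessageHandlers.Add(new PrivateDataMessageHandler(privateDataPath));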

I am using the standard ASP.NET App_Data folder to store the configuration information and I made sure that my source control ignore file is not going to track any files in that folder.  You can choose any location that your web server can write to.

First Run

Initially that configuration file will not exist and that will cause the MessageHandler to enable itself.


Every request to the system will be routed through this message handler, but unless it is enabled, messages will just be passed right through.  This is handy because when you run a sample application for the first time through Visual Studio a web browser opens and hits the root of the API.  The message handler will respond with the configuration form.


The path to the configuration file is added to the request properties collection so that controllers can locate the file to read data from it.

First Request

When a sample application is first used we can assume that there are API keys missing to be able to access third party services.  As we don't know which resource will necessarily be requested first, we intercept all inbound requests, except for one special one, and return an HTML form that presents input controls for each of the missing API keys.


The _magicPath variable contains the path used when the configuration form is submitted.


The HTML form is customized to collect whatever private data you need to store.


The names of the input fields will be used as the property names in the configuration file.

Submitting The Private Data Form

The form is submitted back to a unique endpoint /privatedataform that is monitored by the middleware and the information contained in the body is processed and the middleware is disabled.


One annoying issue here is that if you are hosted on IIS and you are using Attribute Routing, it is likely that there will be no matching route for your magicpath, so the routing will fail and a 404 will be returned before the MessageHandlers are fired.  This doesn't happen on self-hosted setups and it generally doesn't happen with regular routing because the default route will match.  No controller will be found, but that's fine because the MessageHandler short circuits the request before controller selection happens.


The submitted form is stored to a JSON configuration file and a simple message is returned to the user.


Accessing the Private Data

In order to get at the data, we need to access the path of the configuration file which the middleware hid away in the Request.Properties dictionary and we need to load the data into an object.


I created an extension method to hide the details.

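A sketch of such an extension method, assuming the middleware stored the path under a known key in Request.Properties and the file contains JSON (the key name is an assumption):

public static class PrivateDataExtensions
{
    private const string PrivateDataPathKey = "PrivateDataPath"; // key name assumed

    public static T GetPrivateData<T>(this System.Net.Http.HttpRequestMessage request)
    {
        // The middleware put the configuration file path into the request properties.
        var path = (string)request.Properties[PrivateDataPathKey];
        var json = System.IO.File.ReadAllText(path);
        return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(json);
    }
}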

Now, when I am in a controller and I need to access the private data, I can just do something like this:

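Here MyPrivateSettings is a hypothetical POCO whose properties match the form field names:

var settings = Request.GetPrivateData<MyPrivateSettings>();
var apiKey = settings.RunscopeApiKey; // property name assumed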

How does this help me?

I've used this on a couple of projects so far.  One is my RunscopeMessageHandler that allows you to log requests made to a Web API up to Runscope's API.  I wanted to include a sample application in the project that I could also use for some interactive testing, but didn't want to accidentally publish my API key.  The other is a SlackBot that I have been playing with that allows you to trigger API tests via Slack commands and the Runscope API.

Due to the fact that the private data you choose to store, and the corresponding HTML form, is custom for each installation, I decided there would be little value in making a library out of these classes.  If you think this approach might work for you then feel free to grab the source from either of the two projects I just linked to.

So far it seems to be working well for me.  I suspect the whole process could be refined further so I look forward to comments and suggestions from developers who are looking to solve a similar problem.

And with that done, on to the next Yak!


Image Credit:
Keys in lock
https://flic.kr/p/4Metz2
Magic Wand https://flic.kr/p/7V1y7c
Men on Bench https://flic.kr/p/oTzP55
Safe https://flic.kr/p/ruAw2
Top Secret https://flic.kr/p/4SCuPK


Taiseer Joudeh: ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API – Part 4

This is the fourth part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API

In the previous post we saw how we can authenticate individual users using the [Authorize] attribute in a very basic form, but there is a limitation with that approach: any authenticated user can perform sensitive actions such as deleting any user in the system, getting a list of all users in the system, etc…, when those actions should be executed only by a subset of users with higher privileges (Admins only).


In this post we’ll see how we can enhance the authorization mechanism to give finer grained control over how users can execute actions based on role membership, and how those roles help us differentiate between authenticated users.

The nice thing here is that ASP.NET Identity 2.1 provides support for managing Roles (create, delete, update, assign users to a role, remove users from a role, etc…) by using the RoleManager<T> class, so let’s get started by adding support for role management to our Web API.

Step 1: Add the Role Manager Class

The Role Manager class will be responsible for managing instances of the Roles class; the class will derive from “RoleManager<T>” where T will represent our “IdentityRole” class. Once it derives from “RoleManager<IdentityRole>”, a set of methods becomes available which facilitate managing roles in our Identity system. Some of the exposed methods we’ll use from the “RoleManager” during this tutorial are:

Method Name | Usage
FindByIdAsync(id) | Find a role based on its unique identifier
Roles | Returns an enumeration of the roles in the system
FindByNameAsync(roleName) | Find a role based on its name
CreateAsync(IdentityRole) | Creates a new role
DeleteAsync(IdentityRole) | Deletes a role
RoleExistsAsync(roleName) | Returns true if the role already exists

Now to implement the “RoleManager” class, add a new file named “ApplicationRoleManager” under the “Infrastructure” folder and paste the code below:

public class ApplicationRoleManager : RoleManager<IdentityRole>
    {
        public ApplicationRoleManager(IRoleStore<IdentityRole, string> roleStore)
            : base(roleStore)
        {
        }

        public static ApplicationRoleManager Create(IdentityFactoryOptions<ApplicationRoleManager> options, IOwinContext context)
        {
            var appRoleManager = new ApplicationRoleManager(new RoleStore<IdentityRole>(context.Get<ApplicationDbContext>()));

            return appRoleManager;
        }
    }

Notice how the “Create” method will use the Owin middleware to create an instance for each request where Identity data is accessed; this will help us hide the details of how role data is stored throughout the application.

Step 2: Assign the Role Manager Class to Owin Context

Now we want to add a single instance of the Role Manager class to each request using the Owin context, to do so open file “Startup” and paste the code below inside method “ConfigureOAuthTokenGeneration”:

private void ConfigureOAuthTokenGeneration(IAppBuilder app)
{
	// Configure the db context and user manager to use a single instance per request
	//Rest of code is removed for brevity
	
	app.CreatePerOwinContext<ApplicationRoleManager>(ApplicationRoleManager.Create);
	
	//Rest of code is removed for brevity

}

Now a single instance of the “ApplicationRoleManager” class will be available for each request. We’ll use this instance in different controllers, so it is better to create a helper property in the “BaseApiController” class which all other controllers inherit from; open the “BaseApiController” file and add the following code:

public class BaseApiController : ApiController
{
	//Code removed for brevity
	private ApplicationRoleManager _AppRoleManager = null;

	protected ApplicationRoleManager AppRoleManager
	{
		get
		{
			return _AppRoleManager ?? Request.GetOwinContext().GetUserManager<ApplicationRoleManager>();
		}
	}
}

Step 3: Add Roles Controller

Now we’ll add the controller which will be responsible for managing roles in the system (adding new roles, deleting existing ones, getting a single role by id, etc…), but this controller should only be accessed by users in the “Admin” role because it doesn’t make sense to allow any authenticated user to delete or create roles in the system, so we will see how we use the [Authorize] attribute along with Roles to control this.

Now add a new file named “RolesController” under the “Controllers” folder and paste the code below:

[Authorize(Roles="Admin")]
    [RoutePrefix("api/roles")]
    public class RolesController : BaseApiController
    {

        [Route("{id:guid}", Name = "GetRoleById")]
        public async Task<IHttpActionResult> GetRole(string Id)
        {
            var role = await this.AppRoleManager.FindByIdAsync(Id);

            if (role != null)
            {
                return Ok(TheModelFactory.Create(role));
            }

            return NotFound();

        }

        [Route("", Name = "GetAllRoles")]
        public IHttpActionResult GetAllRoles()
        {
            var roles = this.AppRoleManager.Roles;

            return Ok(roles);
        }

        [Route("create")]
        public async Task<IHttpActionResult> Create(CreateRoleBindingModel model)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            var role = new IdentityRole { Name = model.Name };

            var result = await this.AppRoleManager.CreateAsync(role);

            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            Uri locationHeader = new Uri(Url.Link("GetRoleById", new { id = role.Id }));

            return Created(locationHeader, TheModelFactory.Create(role));

        }

        [Route("{id:guid}")]
        public async Task<IHttpActionResult> DeleteRole(string Id)
        {

            var role = await this.AppRoleManager.FindByIdAsync(Id);

            if (role != null)
            {
                IdentityResult result = await this.AppRoleManager.DeleteAsync(role);

                if (!result.Succeeded)
                {
                    return GetErrorResult(result);
                }

                return Ok();
            }

            return NotFound();

        }

        [Route("ManageUsersInRole")]
        public async Task<IHttpActionResult> ManageUsersInRole(UsersInRoleModel model)
        {
            var role = await this.AppRoleManager.FindByIdAsync(model.Id);
            
            if (role == null)
            {
                ModelState.AddModelError("", "Role does not exist");
                return BadRequest(ModelState);
            }

            foreach (string user in model.EnrolledUsers)
            {
                var appUser = await this.AppUserManager.FindByIdAsync(user);

                if (appUser == null)
                {
                    ModelState.AddModelError("", String.Format("User: {0} does not exists", user));
                    continue;
                }

                if (!this.AppUserManager.IsInRole(user, role.Name))
                {
                    IdentityResult result = await this.AppUserManager.AddToRoleAsync(user, role.Name);

                    if (!result.Succeeded)
                    {
                        ModelState.AddModelError("", String.Format("User: {0} could not be added to role", user));
                    }

                }
            }

            foreach (string user in model.RemovedUsers)
            {
                var appUser = await this.AppUserManager.FindByIdAsync(user);

                if (appUser == null)
                {
                    ModelState.AddModelError("", String.Format("User: {0} does not exists", user));
                    continue;
                }

                IdentityResult result = await this.AppUserManager.RemoveFromRoleAsync(user, role.Name);

                if (!result.Succeeded)
                {
                    ModelState.AddModelError("", String.Format("User: {0} could not be removed from role", user));
                }
            }

            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            return Ok();
        }
    }

What we have implemented in this lengthy controller code is the following:

  • We have attributed the controller with [Authorize(Roles="Admin")], which allows only authenticated users who belong to the "Admin" role to execute actions in this controller. The "Roles" property accepts comma separated values, so you can add multiple roles if needed (see the short example after this list). In other words, a user who accesses this controller must present a valid JSON Web Token that contains a claim of type "Role" with the value "Admin".
  • The method "GetRole(Id)" returns a single role based on its identifier by calling "FindByIdAsync"; the action returns an object of type "RoleReturnModel", which we'll create in the next step.
  • The method "GetAllRoles()" returns all the roles defined in the system.
  • The method "Create(CreateRoleBindingModel model)" is responsible for creating new roles in the system. It accepts a model of type "CreateRoleBindingModel", which we'll create in the next step; it calls "CreateAsync" and returns a response of type "RoleReturnModel".
  • The method "DeleteRole(string Id)" deletes an existing role by taking the unique id of the role and calling "DeleteAsync".
  • Lastly, the method "ManageUsersInRole" is intended for the AngularJS app which we'll build in the coming posts; it accepts a request body containing an object of type "UsersInRoleModel", which lets the application add or remove users from a specified role.
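
For example, locking the controller down to either of two roles is a small change to the attribute; a quick, hypothetical variation is shown below (SuperAdmin is one of the roles we seed later in this post):

[Authorize(Roles = "Admin,SuperAdmin")]
[RoutePrefix("api/roles")]
public class RolesController : BaseApiController
{
    //...
}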

Step 4: Add Role Binding Models

Now we'll add the models used in the previous step. The first file to add is named "RoleBindingModels" under the "Models" folder, so add this file and paste the code below:

public class CreateRoleBindingModel
    {
        [Required]
        [StringLength(256, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 2)]
        [Display(Name = "Role Name")]
        public string Name { get; set; }

    }

    public class UsersInRoleModel {

        public string Id { get; set; }
        public List<string> EnrolledUsers { get; set; }
        public List<string> RemovedUsers { get; set; }
    }

Now we'll adjust the "ModelFactory" class to include a method that returns a response of type "RoleReturnModel", so open the "ModelFactory" file and paste the code below:

public class ModelFactory
{
	//Code removed for brevity
	
	public RoleReturnModel Create(IdentityRole appRole) {

		return new RoleReturnModel
	   {
		   Url = _UrlHelper.Link("GetRoleById", new { id = appRole.Id }),
		   Id = appRole.Id,
		   Name = appRole.Name
	   };
	}
}

public class RoleReturnModel
{
	public string Url { get; set; }
	public string Id { get; set; }
	public string Name { get; set; }
}

 Step 5: Allow Admin to Manage Single User Roles

Until now the system doesn't have an endpoint that allows users in the Admin role to manage the roles for a selected user; this endpoint will be needed in the AngularJS app. In order to add it, open the "AccountsController" class and paste the code below:

[Authorize(Roles="Admin")]
[Route("user/{id:guid}/roles")]
[HttpPut]
public async Task<IHttpActionResult> AssignRolesToUser([FromUri] string id, [FromBody] string[] rolesToAssign)
{

	var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}
	
	var currentRoles = await this.AppUserManager.GetRolesAsync(appUser.Id);

	var rolesNotExists = rolesToAssign.Except(this.AppRoleManager.Roles.Select(x => x.Name)).ToArray();

	if (rolesNotExists.Count() > 0) {

		ModelState.AddModelError("", string.Format("Roles '{0}' do not exist in the system", string.Join(",", rolesNotExists)));
		return BadRequest(ModelState);
	}

	IdentityResult removeResult = await this.AppUserManager.RemoveFromRolesAsync(appUser.Id, currentRoles.ToArray());

	if (!removeResult.Succeeded)
	{
		ModelState.AddModelError("", "Failed to remove user roles");
		return BadRequest(ModelState);
	}

	IdentityResult addResult = await this.AppUserManager.AddToRolesAsync(appUser.Id, rolesToAssign);

	if (!addResult.Succeeded)
	{
		ModelState.AddModelError("", "Failed to add user roles");
		return BadRequest(ModelState);
	}

	return Ok();
}

What we have implemented in this method is the following:

  • This method can be accessed only by authenticated users who belong to the "Admin" role; that's why we have added the attribute [Authorize(Roles="Admin")].
  • The method accepts the user id in its URI and, in the request body, an array of the role names (e.g. a JSON array such as ["Admin", "User"]) this user should be enrolled in.
  • The method validates that every role in this array exists in the system; if not, an HTTP 400 Bad Request response is returned indicating which roles do not exist.
  • The method then removes all the roles currently assigned to the user and assigns only the roles sent in the request.

Step 6: Protect the existing end points with [Authorize(Roles=”Admin”)] Attribute

Now we'll visit all the end points we have created earlier in the previous posts, mainly in the "AccountsController" class, and add "Roles=Admin" to the Authorize attribute for all the end points that should be accessed only by users in the Admin role.

The end points are:

 – GetUsers, GetUser, GetUserByName, and DeleteUser should be accessed only by users enrolled in the "Admin" role. The code change is as simple as the below:

[Authorize(Roles="Admin")]
[Route("users")]
public IHttpActionResult GetUsers()
{}

[Authorize(Roles="Admin")]
[Route("user/{id:guid}", Name = "GetUserById")]
public async Task<IHttpActionResult> GetUser(string Id)
{}


[Authorize(Roles="Admin")]
[Route("user/{username}")]
public async Task<IHttpActionResult> GetUserByName(string username)
{}

[Authorize(Roles="Admin")]
[Route("user/{id:guid}")]
public async Task<IHttpActionResult> DeleteUser(string id)
{}

Step 7: Update the DB Migration File

The last change we need to make before testing is to create a default user and assign it to the Admin role when the application runs for the first time. To implement this we need to introduce a change to the "Configuration" file under the "Infrastructure" folder, so open the file and paste the code below:

internal sealed class Configuration : DbMigrationsConfiguration<AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext context)
        {
            //  This method will be called after migrating to the latest version.

            var manager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()));
            
            var roleManager = new RoleManager<IdentityRole>(new RoleStore<IdentityRole>(new ApplicationDbContext()));

            var user = new ApplicationUser()
            {
                UserName = "SuperPowerUser",
                Email = "taiseer.joudeh@gmail.com",
                EmailConfirmed = true,
                FirstName = "Taiseer",
                LastName = "Joudeh",
                Level = 1,
                JoinDate = DateTime.Now.AddYears(-3)
            };

            manager.Create(user, "MySuperP@ss!");

            if (roleManager.Roles.Count() == 0)
            {
                roleManager.Create(new IdentityRole { Name = "SuperAdmin" });
                roleManager.Create(new IdentityRole { Name = "Admin"});
                roleManager.Create(new IdentityRole { Name = "User"});
            }

            var adminUser = manager.FindByName("SuperPowerUser");

            manager.AddToRoles(adminUser.Id, new string[] { "SuperAdmin", "Admin" });
        }
    }

What we have implemented here is simple: we created a default user named "SuperPowerUser", then created three roles in the system (SuperAdmin, Admin, and User), and finally assigned this user to two of those roles (SuperAdmin and Admin).

In order to fire the "Seed()" method, we have to drop the existing database, then from the Package Manager Console type:

update-database

This will create the database on our SQL Server based on the connection string we specified earlier, run the code inside the Seed method, and create the user and roles in the system.

Step 8: Test the Role Authorization

Now the code is ready to be tested. The first thing to do is to obtain a JWT for the user "SuperPowerUser"; if you decode this JWT using jwt.io you will notice that the token contains a claim of type "role" as below:

{
  "nameid": "29e21f3d-08e0-49b5-b523-3d68cf623fd5",
  "unique_name": "SuperPowerUser",
  "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider": "ASP.NET Identity",
  "AspNet.Identity.SecurityStamp": "832d5f6b-e71c-4c31-9fde-07fe92f5ddfd",
  "role": [
    "Admin",
    "SuperAdmin"
  ],
  "Phone": "123456782",
  "Gender": "Male",
  "iss": "http://localhost:59822",
  "aud": "414e1927a3884f68abc79f7283837fd1",
  "exp": 1426115380,
  "nbf": 1426028980
}

Those claims will allow this user to access any endpoint decorated with the [Authorize] attribute and locked down to users in the Admin or SuperAdmin roles.

To test this out we'll create a new role named "Supervisor" by issuing an HTTP POST to the endpoint (/api/roles/create). As we stated before, this endpoint can only be accessed by users in the "Admin" role, so we will pass the JWT in the Authorization header using the Bearer scheme as usual; the request will be as in the image below:

[Image: Create Role]
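
If you prefer code over a screenshot, a rough sketch of the same request using HttpClient might look like the following; the JWT value is a placeholder and the base address assumes the API runs on http://localhost:59822 as elsewhere in this series:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreateRoleSample
{
    static async Task Main()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:59822/") })
        {
            // Present the JWT obtained for "SuperPowerUser" using the Bearer scheme
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<JWT for SuperPowerUser>");

            // The body matches CreateRoleBindingModel (only the Name property)
            var body = new StringContent("{\"Name\":\"Supervisor\"}", Encoding.UTF8, "application/json");

            var response = await client.PostAsync("api/roles/create", body);
            Console.WriteLine(response.StatusCode); // 201 Created when the token carries the Admin role
        }
    }
}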

If all is valid we'll receive HTTP status 201 Created.

In the next post we'll see how to implement authorization using claims.

The source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh

The post ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API – Part 4 appeared first on Bit of Technology.


Christian Weyer: Session materials from BASTA! Spring 2015


Dominick Baier: .NET Foundation Advisory Council

I have been invited to join the .NET Foundation advisory council – looking forward to it!

http://www.dotnetfoundation.org/blog/welcoming-the-newly-minted-advisory-net-foundation-advisory-council-members


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, WebAPI


Darrel Miller: Don't Design A Query String You Will One Day Regret

When writing the Web API book, we decided that there was no way we would ever finish if we tried to address every conceivable issue.  So we decided to set up a Google Group where readers of the book could ask for clarifications and related questions.  One question I received a while ago has been sitting on my to-do list for way too long.  The question from Reid Peryam is about query resources.  This is my answer.

[Image: Denial]

The Claim

Reid quotes this paragraph from the book:

To work around the inability to easily expose new resources to clients, people often attempt to build sophisticated query capabilities into their API. The problem with this approach, beyond the coupling on the query syntax, is that just a few query parameters can open up a huge number of potential resources, some of which may be expensive to generate. Numerous API providers are starting to discover the challenging economics of exposing arbitrary query capabilities to third parties. A much more manageable approach is to enable a few highly optimized resources that address the majority of use cases. It is critical, however, that new resources can be added quickly to the API to address new requirements.

and rightfully calls me out on failing to provide examples of,

a few highly optimized resources that address the majority of use cases

The Problem

Before I try and describe the solution, let me first clarify exactly what pattern I am claiming is the source of concern.  Here is an example URI template,

http://api.example.org/orders{?fields,sort,filter,limit}

and a resolved URL might look like

http://api.example.org/orders?fields=OrderNo,Customer,OrderDate&sort=OrderDate&filter=OrderDate.gt.2012-01-01&limit=50

[Image: Segway]

On the surface this looks like an amazing idea.  A client developer can choose exactly what fields they want to have returned to minimize the bytes on the wire. They can use arbitrary filter criteria to limit the results returned.  This single generic query string can allow a client to generate a representation which contains pretty much any subset of orders data that they want.

This is a very quick way of exposing data without having to think very hard about how the data might be used.  In fact this type of functionality can be built by framework developers and delivered for free to application developers.

Why Is It A Problem?

I believe there are some problems with this approach.  The first problem is, by requiring clients to provide the field list, sort order and filter criteria you are requiring a client to have a significant amount of knowledge about the data model of the server.  Now, your client may already have this knowledge for other reasons and therefore it may not place any additional burden on the client.  However, if you ever choose to remove that client/server coupling you will find it much harder.

Unpredictable Workload

The next problems are performance related.  The first relates to how the data for the query is actually going to be retrieved.  Most likely the data will be in some kind of database.  If the sort order that is chosen matches that of a database index, the results will probably come back pretty quickly.  If it doesn't, the query could be painfully slow.  A user of the API might be understanding if they are returning a large result set with thousands of rows of data, but what if they are querying a massive dataset and limiting the query to return only 10 rows?  The server still has to sort the entire set of data, and the API user is going to wonder why the request is so slow for such a small result set.

[Image: Wand]

Indexes are the magical things that make databases perform well. They also often have the ability to include extra columns of data to prevent queries from needing to read the actual data pages.  If all of the fields in the query field list are included in an index, the query is going to be really quick.  If one field is not in the index, then performance will degrade significantly.  These are performance details that are critical once a system begins to be loaded with a large volume of data and a significant number of users.  It is not a problem that is easily seen during the sprint to go live whilst burning through the seed round of funding.

Diluting The Cache

The other performance challenge introduced by the "uber" query string is that now, instead of there just being a few pre-chosen, performance optimized, use-case verified representations that can be cached, we have to deal with potentially thousands of variants.  The combination of fields, sort orders and filter criteria makes for a huge number of potential data subsets.  Caching those would not only bloat the cache but make the cache hit ratio very low.

Some Things You Can't Take Back

REST and hypermedia APIs are great in that they enable you to make many changes that don't break clients.  You can do stuff that turns out to be wrong and then fix it later, when you have the wisdom of hindsight.  However, to make the "uber" query string work, the client needs to take on a fair amount of responsibility and is given a huge amount of flexibility.  You can't just take that away without breaking things.  You end up being stuck with it.

The end result is you have clients who are getting inconsistent performance behaviour, executing requests that are difficult to optimize on the server, and you can't fix the problem without a major breaking change to the interface.

[Image: Meerkat]

Give Them Only What They Need

One approach for avoiding this outcome, is to raise the level of abstraction for your API to that of your application domain.  Instead of giving your client developers the ability to effectively write queries against your data store, write the queries for them and give them a name,

http://api.example.org/orders/open{?since,customer,region}
http://api.example.org/orders/late{?dayslate,customer,highvalue}
http://api.example.org/orders/closed{?customer,closeddaterange,closedtodate}
http://api.example.org/orders/byproduct{?productid,customer,orderdaterange}
http://api.example.org/orders/bypo{?purchaseorder}
http://api.example.org/orders/recent{?customer}

Without any specific knowledge of the types of "orders" that this API is dealing with, but with a fair amount of experience working with order management systems, I am going to be bold and say that this set of resources addresses 90% of the query types that are needed on an orders API.  I can optimize my database to serve these specific queries efficiently, and hopefully with this reduced number of query variants I can get better cache utilization.
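
To make this slightly more concrete, here is a rough, hypothetical sketch (not taken from the book) of how a couple of these named resources could be exposed as attribute-routed ASP.NET Web API actions; the action names and parameters are illustrative only:

using System;
using System.Web.Http;

[RoutePrefix("orders")]
public class OrdersController : ApiController
{
    // http://api.example.org/orders/open{?since,customer,region}
    [Route("open")]
    public IHttpActionResult GetOpenOrders(DateTime? since = null, string customer = null, string region = null)
    {
        // Run the single, pre-optimized "open orders" query here
        return Ok();
    }

    // http://api.example.org/orders/late{?dayslate,customer,highvalue}
    [Route("late")]
    public IHttpActionResult GetLateOrders(int? dayslate = null, string customer = null, bool? highvalue = null)
    {
        // Another narrowly scoped, index-friendly query
        return Ok();
    }
}

Each action maps to exactly one named query, which is what makes it practical to index and cache for it.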

Responding To Feedback

It is highly likely that soon enough a customer is going to want to do something that the API doesn't support.  That's OK, because we can always add new resources to our API.  If we believe it is a valid use-case and we can deliver the results without degrading system performance, then adding the new capability should be a no-brainer.  The ability to quickly add new resources to an API is a critical requirement in enabling this approach of starting with a limited API and adding new features only when required.

[Image: YesWeCan]

The Original Question

In Reid's question he lists a set of URLs pulled from the documentation of his API and I have taken the liberty to relist them here as URL Templates.  Hopefully not too much was lost in the translation.

/api/shipments/{id}
/api/shipments{?ShipDate}
/api/shipments{?ShipDateStart,ShipDateFinish}
/api/shipments{?EnteredDate}
/api/shipments{?EnteredDateStart,EnteredDateFinish}
/api/shipments{?Failed}
/api/shipments{?WasFailed}
/api/shipments{?WasBlind}
/api/shipments{?Phase}
/api/shipments{?CustomerId}
/api/shipments{?CustomerIds}

As you can see, this set of URLs has not attempted to provide unbounded query capabilities.  Reid's team has used their knowledge of the domain to identify which queries are likely to be required by a consumer of the API.  The team should be able to optimize the database to be able to provide good performance characteristics for these specific queries.

One interesting difference in this set of URLs, as compared to my example, is the fact that the different subsets of query parameters are all pointing to the same path.  I have a tendency to add an extra path segment as a descriptor that makes each path only have one set of query parameters.  This is pure preference from a URL design perspective.  However, it may have an impact on the way routing to controllers works in your web api framework.
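
As a hedged illustration of that routing point (this is not Reid's actual code): when all of the query variants share the same path, one common way to avoid action-selection ambiguity in ASP.NET Web API is a single action with optional parameters that dispatches to the appropriate query internally, for example:

using System;
using System.Web.Http;

[RoutePrefix("api/shipments")]
public class ShipmentsController : ApiController
{
    [Route("{id}")]
    public IHttpActionResult GetShipment(string id)
    {
        return Ok();
    }

    [Route("")]
    public IHttpActionResult GetShipments(DateTime? shipDate = null,
                                          DateTime? shipDateStart = null,
                                          DateTime? shipDateFinish = null,
                                          string customerId = null)
    {
        // Inspect which parameters were supplied and run the matching pre-optimized query
        return Ok();
    }
}

Whether you need to collapse the variants into one action like this depends on how your framework's action selector treats query string parameters; adding a path segment per named query, as in my example above, sidesteps the question entirely.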

The Good/Bad News

Reid describes his set of URLs as enabling "a ton of filtering" and asks how he can "enable a few highly optimized resources that address the majority of use cases".  The good and bad news is that that's what has already been done.  It may look like a lot of filtering options, but compared to what an unconstrained filter query string would have enabled, the result is just a few resources.

Reid - Sorry it took so long to get you an answer, I hope it was worth the wait.

[Image: Tortoise]

Image Credits:
Denial https://flic.kr/p/74PwUj
Segway https://flic.kr/p/hupTPq
Wand https://flic.kr/p/9uVH7P
Meerkat https://flic.kr/p/2wwVCY
YesWeCan https://flic.kr/p/5zzqtM
Tortoise https://flic.kr/p/nE44yh


Taiseer Joudeh: Implement OAuth JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1 – Part 3

This is the third part of the Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS series.

The source code for this tutorial is available on GitHub.

Implement JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1


Currently our API doesn't support authentication and authorization; all the requests we receive at any end point are made anonymously. In this post we'll configure our API, which will act as our Authorization Server and Resource Server at the same time, to issue JSON Web Tokens for authenticated users, and those users will present this JWT to the protected end points in order to access them.

I will use a step by step approach as usual to implement this, but I highly recommend you read the post JSON Web Token in ASP.NET Web API 2 before completing this one; there I cover in depth what JSON Web Tokens are, the benefits of using JWT over default access tokens, and how they can be used to decouple the Authorization Server from the Resource Server. In this tutorial, and for the sake of keeping it simple, both OAuth 2.0 roles (Authorization Server and Resource Server) will live in the same API.

Step 1: Implement OAuth 2.0 Resource Owner Password Credential Flow

We are going to build an API which will be consumed by a trusted client (the AngularJS front-end), so we are only interested in implementing a single OAuth 2.0 flow, where the registered user presents a username and password to a specific end point, the API validates those credentials, and if all is valid it returns a JWT for the user. The client application should store this JWT securely and locally in order to present it with each request to any protected end point.

The nice thing about this JWT is that it is a self-contained token which carries all the user's claims and roles inside it, so there is no need for extra DB queries to fetch those values for the authenticated user. This JWT will be configured to expire one day after its issue date, so the user has to provide credentials again in order to obtain a new JWT.

If you are interested in implementing sliding expiration tokens and keeping the user logged in, I recommend you read my other post Enable OAuth Refresh Tokens in AngularJS App, which covers this in depth but adds more complexity to the solution. To keep this tutorial simple we'll not add refresh tokens here, but you can refer to that post and implement it.

To implement the Resource Owner Password Credentials flow, we need to add a new folder named "Providers", then add a new class named "CustomOAuthProvider" to it and paste the code below:

public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            var allowedOrigin = "*";

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            var userManager = context.OwinContext.GetUserManager<ApplicationUserManager>();

            ApplicationUser user = await userManager.FindAsync(context.UserName, context.Password);

            if (user == null)
            {
                context.SetError("invalid_grant", "The user name or password is incorrect.");
                return;
            }

            if (!user.EmailConfirmed)
            {
                context.SetError("invalid_grant", "User did not confirm email.");
                return;
            }

            ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
        
            var ticket = new AuthenticationTicket(oAuthIdentity, null);
            
            context.Validated(ticket);
           
        }
    }

This class inherits from class “OAuthAuthorizationServerProvider” and overrides the below two methods:

  • As you notice, "ValidateClientAuthentication" is effectively empty; we always consider the request valid, because in our implementation the client (the AngularJS front-end) is a trusted client and we do not need to validate it.
  • The method "GrantResourceOwnerCredentials" is responsible for receiving the username and password from the request and validating them against our ASP.NET Identity 2.1 system. If the credentials are valid and the email is confirmed, we build an identity for the logged-in user; this identity will contain all the roles and claims for the authenticated user. We haven't covered the roles and claims part of the tutorial yet, so for the time being you can consider all users registered in our system to have no roles or claims mapped to them.
  • The method "GenerateUserIdentityAsync" is not implemented yet; we'll add this helper method in the next step. It is responsible for fetching the authenticated user's identity from the database and returning an object of type "ClaimsIdentity".
  • Lastly, we create an authentication ticket which contains the identity of the authenticated user, and when we call "context.Validated(ticket)" this identity is transferred into an OAuth 2.0 bearer access token.

Step 2: Add method “GenerateUserIdentityAsync” to “ApplicationUser” class

Now we'll add the helper method which is responsible for getting the authenticated user's identity (all roles and claims mapped to the user). The "UserManager" class contains a method named "CreateIdentityAsync" for this task; it basically queries the DB and gets all the roles and claims for the user. To implement this, open the "ApplicationUser" class and paste the code below:

//Rest of code is removed for brevity
public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager, string authenticationType)
{
	var userIdentity = await manager.CreateIdentityAsync(this, authenticationType);
	// Add custom user claims here
	return userIdentity;
}

Step 3: Issue JSON Web Tokens instead of Default Access Tokens

Now we want to configure our API to issue JWT tokens instead of the default access tokens; to understand what a JWT is and why it is better to use one, you can refer back to this post.

First we need to install 2 NuGet packages as below:

Install-package System.IdentityModel.Tokens.Jwt -Version 4.0.1
Install-package Thinktecture.IdentityModel.Core -Version 1.3.0

There is no direct support for issuing JWTs in ASP.NET Web API, so in order to start issuing them we need to do this manually by implementing the interface "ISecureDataFormat" and its method "Protect".

To implement this, add a new file named "CustomJwtFormat" under the "Providers" folder and paste the code below:

public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
    
        private readonly string _issuer = string.Empty;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException("data");
            }

            string audienceId = ConfigurationManager.AppSettings["as:AudienceId"];

            string symmetricKeyAsBase64 = ConfigurationManager.AppSettings["as:AudienceSecret"];

            var keyByteArray = TextEncodings.Base64Url.Decode(symmetricKeyAsBase64);

            var signingKey = new HmacSigningCredentials(keyByteArray);

            var issued = data.Properties.IssuedUtc;
            
            var expires = data.Properties.ExpiresUtc;

            var token = new JwtSecurityToken(_issuer, audienceId, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey);

            var handler = new JwtSecurityTokenHandler();

            var jwt = handler.WriteToken(token);

            return jwt;
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }

What we’ve implemented in this class is the following:

  • The class "CustomJwtFormat" implements the interface "ISecureDataFormat<AuthenticationTicket>"; the JWT generation takes place inside the "Protect" method.
  • The constructor of this class accepts the "Issuer" of the JWT, which will be our API. This API acts as Authorization Server and Resource Server at the same time; the issuer can be a string or a URI, and in our case we'll fix it to the API's URI.
  • Inside the "Protect" method we are doing the following:
    • As we stated before, this API serves as Resource Server and Authorization Server at the same time, so we fix the Audience Id and Audience Secret (Resource Server) in the web.config file. This Audience Id and Secret will be used to sign (HMAC SHA-256) the JWT; I've used this implementation to generate the Audience Id and Secret.
    • Do not forget to add 2 new keys, "as:AudienceId" and "as:AudienceSecret", to the web.config AppSettings section (a sample entry is shown after this list).
    • Then we prepare the raw data for the JSON Web Token which will be issued to the requester by providing the issuer, audience, user claims, issue date, expiry date, and the signing key which will sign the JWT payload.
    • Lastly we serialize the JSON Web Token to a string and return it to the requester.
  • By doing this, the requester of an OAuth 2.0 access token from our API will receive a signed token which contains the claims of an authenticated Resource Owner (user), and this access token is intended for a specific audience as well.
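
As a reference, the appSettings entries might look like the snippet below; the audience id matches the "aud" value we'll see later when decoding the token, while the secret shown here is only a placeholder, so generate your own base64url-encoded secret:

<appSettings>
  <add key="as:AudienceId" value="414e1927a3884f68abc79f7283837fd1" />
  <add key="as:AudienceSecret" value="YOUR_BASE64URL_ENCODED_SECRET" />
</appSettings>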

Step 4: Add Support for OAuth 2.0 JWT Generation

Up to this point we haven't configured our API to use the OAuth 2.0 authentication workflow; to do so, open the "Startup" class and add a new method named "ConfigureOAuthTokenGeneration" as below:

private void ConfigureOAuthTokenGeneration(IAppBuilder app)
        {
            // Configure the db context and user manager to use a single instance per request
            app.CreatePerOwinContext(ApplicationDbContext.Create);
            app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);

            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                //For Dev environment only (in production this should be AllowInsecureHttp = false)
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/oauth/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
                Provider = new CustomOAuthProvider(),
                AccessTokenFormat = new CustomJwtFormat("http://localhost:59822")
            };

            // OAuth 2.0 Bearer Access Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);
        }

What we’ve implemented here is the following:

  • The path for generating the JWT will be "http://localhost:59822/oauth/token".
  • We've specified the token expiry to be 1 day.
  • We've specified how to validate the Resource Owner's credentials in a custom class named "CustomOAuthProvider".
  • We've specified how to generate the access token using the JWT format; the custom class "CustomJwtFormat" is responsible for generating JWTs instead of the default access tokens protected with DPAPI. Note that both formats use the Bearer scheme.

Do not forget to call the new method "ConfigureOAuthTokenGeneration" in the Startup "Configuration" method, as below:

public void Configuration(IAppBuilder app)
{
	HttpConfiguration httpConfig = new HttpConfiguration();

	ConfigureOAuthTokenGeneration(app);

	//Rest of code is removed for brevity

}

Our API is now ready to start issuing JWT access tokens. To test this out we can issue an HTTP POST request as in the image below, and we should receive a JWT valid for the next 24 hours and accepted only by our API.

[Image: JSON Web Token]
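
For reference, a rough sketch of the same token request using HttpClient is shown below; the credentials are the ones used for the "SuperPowerUser" account seeded in Part 4 of this series, so substitute a user you have actually registered:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class TokenRequestSample
{
    static async Task Main()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:59822/") })
        {
            // Resource Owner Password Credentials grant: send the user's credentials as form fields
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", "SuperPowerUser" },
                { "password", "MySuperP@ss!" }
            });

            var response = await client.PostAsync("oauth/token", form);
            var json = await response.Content.ReadAsStringAsync();
            Console.WriteLine(json); // contains access_token, token_type and expires_in on success
        }
    }
}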

Step 5: Protect the existing end points with [Authorize] Attribute

Now we'll visit all the end points we created in previous posts in the "AccountsController" class, and decorate the end points which need to be protected (only an authenticated user with a valid JWT access token can access them) with the [Authorize] attribute as below:

 – The GetUsers, GetUser, GetUserByName, and DeleteUser end points should be accessed by users enrolled in the "Admin" role. Roles authorization is not implemented yet, so for now we will allow any authenticated user to access them; the code change is as simple as the below:

[Authorize]
[Route("users")]
public IHttpActionResult GetUsers()
{}

[Authorize]
[Route("user/{id:guid}", Name = "GetUserById")]
public async Task<IHttpActionResult> GetUser(string Id)
{}


[Authorize]
[Route("user/{username}")]
public async Task<IHttpActionResult> GetUserByName(string username)
{
}

[Authorize]
[Route("user/{id:guid}")]
public async Task<IHttpActionResult> DeleteUser(string id)
{
}

- The CreateUser and ConfirmEmail endpoints should always be accessible anonymously, so we need to attribute them with [AllowAnonymous] as the below:

[AllowAnonymous]
[Route("create")]
public async Task<IHttpActionResult> CreateUser(CreateUserBindingModel createUserModel)
{
}

[AllowAnonymous]
[HttpGet]
[Route("ConfirmEmail", Name = "ConfirmEmailRoute")]
public async Task<IHttpActionResult> ConfirmEmail(string userId = "", string code = "")
{
}

- The ChangePassword endpoint should be accessed by the authenticated user only, so we'll attribute it with the [Authorize] attribute as the below:

[Authorize]
[Route("ChangePassword")]
public async Task<IHttpActionResult> ChangePassword(ChangePasswordBindingModel model)
{
}

Step 6: Consume JSON Web Tokens

Now if we obtain an access token by sending a request to the end point "oauth/token" and then try to access one of the protected end points, we'll receive a 401 Unauthorized status. The reason is that our API doesn't yet understand the JWT tokens it issued; to fix this we need to do the following:

Install the below NuGet package:

Install-Package Microsoft.Owin.Security.Jwt -Version 3.0.0

The package "Microsoft.Owin.Security.Jwt" is responsible for protecting the Resource Server's resources using JWT; it only validates and de-serializes JWT tokens.

Now back in our "Startup" class, we need to add the method "ConfigureOAuthTokenConsumption" as below:

private void ConfigureOAuthTokenConsumption(IAppBuilder app) {

            var issuer = "http://localhost:59822";
            string audienceId = ConfigurationManager.AppSettings["as:AudienceId"];
            byte[] audienceSecret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["as:AudienceSecret"]);

            // Api controllers with an [Authorize] attribute will be validated with JWT
            app.UseJwtBearerAuthentication(
                new JwtBearerAuthenticationOptions
                {
                    AuthenticationMode = AuthenticationMode.Active,
                    AllowedAudiences = new[] { audienceId },
                    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
                    {
                        new SymmetricKeyIssuerSecurityTokenProvider(issuer, audienceSecret)
                    }
                });
        }

This step configures our API to trust tokens issued by our Authorization Server only; in our case the Authorization Server and Resource Server are the same server (http://localhost:59822). Notice how we are providing the values for the audience, and the audience secret we used to generate and issue the JSON Web Token in step 3.

By providing those values to the "JwtBearerAuthentication" middleware, our API will be able to consume only JWT tokens issued by our trusted Authorization Server; any JWT tokens from any other Authorization Server will be rejected.

Lastly we need to call the method “ConfigureOAuthTokenConsumption” in the “Configuration” method as the below:

public void Configuration(IAppBuilder app)
	{
		HttpConfiguration httpConfig = new HttpConfiguration();

		ConfigureOAuthTokenGeneration(app);

		ConfigureOAuthTokenConsumption(app);
		
		//Rest of code is here

	}

Step 7: Final Testing

All the pieces should be in place now. To test this we will obtain a JWT access token for the user "SuperPowerUser" by issuing a POST request to the end point "oauth/token".

[Image: Request JWT Token]

Then we will use the JWT received to access a protected end point such as "ChangePassword". If you remember, when we added this end point we were not able to test it directly, because it was anonymous and inside its implementation we call the method "User.Identity.GetUserId()". This method returns nothing for an anonymous user, but after we've added the [Authorize] attribute, any user who needs to access this end point must be authenticated and have a valid JWT.

To test this out we will issue a POST request to the end point "/accounts/ChangePassword" as in the image below; notice how we are setting the Authorization header using the Bearer scheme, with its value set to the JWT we received for the user "SuperPowerUser". If all is valid we will receive a 200 OK status and the user's password should be updated.

[Image: Change Password Web API]
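
A hedged HttpClient sketch of the same call is below; the token variable is a placeholder for the JWT obtained above, the passwords are examples only, and the path may need to be adjusted to match your "AccountsController" route prefix:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ChangePasswordSample
{
    static async Task Main()
    {
        var accessToken = "<JWT for SuperPowerUser>";

        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:59822/") })
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Body matches ChangePasswordBindingModel from the previous part of this series
            var payload = new StringContent(
                "{\"OldPassword\":\"MySuperP@ss!\",\"NewPassword\":\"MyNewP@ss!\",\"ConfirmPassword\":\"MyNewP@ss!\"}",
                Encoding.UTF8, "application/json");

            var response = await client.PostAsync("api/accounts/ChangePassword", payload);
            Console.WriteLine(response.StatusCode); // 200 OK when the token and old password are valid
        }
    }
}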

The source code for this tutorial is available on GitHub.

In the next post we'll see how to implement Roles Based Authorization in our Identity service.

Follow me on Twitter @tjoudeh


The post Implement OAuth JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1 – Part 3 appeared first on Bit of Technology.


Taiseer Joudeh: ASP.NET Identity 2.1 Accounts Confirmation, and Password Policy Configuration – Part 2

This is the second part of the Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS series.

The source code for this tutorial is available on GitHub.

ASP.NET Identity 2.1 Accounts Confirmation, and Password/User Policy Configuration

In this post we'll build on top of what we've already created, and we'll cover the topics below:

  • Send Confirmation Emails after Account Creation.
  • Configure User (Username, Email) and Password policy.
  • Enable Changing Password and Deleting Account.

1. Send Confirmation Emails after Account Creation


The ASP.NET Identity 2.1 users table (AspNetUsers) comes by default with a Boolean column named "EmailConfirmed". This column is used to flag whether the email provided by the registered user is valid and belongs to that user; in other words, the user can access the email provided and is not impersonating another identity. So our membership system should not allow users without a valid email address to log into the system.

The scenario we want to implement is that a user registers in the system, then a confirmation email is sent to the address provided upon registration; this email includes an activation link and a token (code) which is tied to this user only and valid for a certain period.

Once the user opens this email and clicks on the activation link, and if the token (code) is valid, the field "EmailConfirmed" is set to "true", which proves that the email belongs to the registered user.

To do so we need to add a service which is responsible for sending emails to users. In my case I'll use SendGrid, which is a service provider for sending emails, but you can use any other service provider or your Exchange server to do this. If you want to follow along with this tutorial you can create a free account with SendGrid which provides you with 400 emails per day, pretty good!

1.1 Install SendGrid

Now open the Package Manager Console and type the below to install the SendGrid package (this step is not required if you want to use another email service provider). This package contains the SendGrid APIs which make sending emails very easy:

install-package Sendgrid

1.2 Add Email Service

Now add a new folder named "Services", then add a new class named "EmailService" and paste the code below:

public class EmailService : IIdentityMessageService
    {
        public async Task SendAsync(IdentityMessage message)
        {
            await configSendGridasync(message);
        }

        // Use NuGet to install SendGrid (Basic C# client lib) 
        private async Task configSendGridasync(IdentityMessage message)
        {
            var myMessage = new SendGridMessage();

            myMessage.AddTo(message.Destination);
            myMessage.From = new System.Net.Mail.MailAddress("taiseer@bitoftech.net", "Taiseer Joudeh");
            myMessage.Subject = message.Subject;
            myMessage.Text = message.Body;
            myMessage.Html = message.Body;

            var credentials = new NetworkCredential(ConfigurationManager.AppSettings["emailService:Account"], 
                                                    ConfigurationManager.AppSettings["emailService:Password"]);

            // Create a Web transport for sending email.
            var transportWeb = new Web(credentials);

            // Send the email.
            if (transportWeb != null)
            {
                await transportWeb.DeliverAsync(myMessage);
            }
            else
            {
                //Trace.TraceError("Failed to create Web transport.");
                await Task.FromResult(0);
            }
        }
    }

What is worth noting here is that the class "EmailService" implements the interface "IIdentityMessageService"; this interface can be used to configure your service to send emails or SMS messages. All you need to do is implement your email or SMS service in the method "SendAsync" and you are good to go.

In our case we want to send emails, so I've implemented the sending process using SendGrid in the method "configSendGridasync". All you need to do is replace the sender name and address with yours; as well, do not forget to add 2 new keys named "emailService:Account" and "emailService:Password" to AppSettings to store the SendGrid credentials.

After we have configured the "EmailService", we need to hook it into our Identity system, and this is a very simple step: open the "ApplicationUserManager" file and inside the "Create" method paste the code below:

public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
{
	//Rest of code is removed for clarity
	appUserManager.EmailService = new AspNetIdentity.WebApi.Services.EmailService();

	var dataProtectionProvider = options.DataProtectionProvider;
	if (dataProtectionProvider != null)
	{
		appUserManager.UserTokenProvider = new DataProtectorTokenProvider<ApplicationUser>(dataProtectionProvider.Create("ASP.NET Identity"))
		{
			//Code for email confirmation and reset password life time
			TokenLifespan = TimeSpan.FromHours(6)
		};
	}
   
	return appUserManager;
}

As you see from the code above, the "appUserManager" instance contains a property named "EmailService", which we set to an instance of the "EmailService" class we've just created.

Note: There is another property named “SmsService” if you would like to use it for sending SMS messages instead of emails.

Notice how we are setting the expiration time for the code (token) sent in the email to 6 hours, so if the user tries to open the confirmation email more than 6 hours after receiving it, the code will be invalid.

1.3 Send the Email after Account Creation

Now the email service is ready and we can start sending emails after successful account creation. To do so we need to modify the existing code in the method "CreateUser" in the "AccountsController", so open the "AccountsController" file and paste the code below at the end of the method:

//Rest of code is removed for brevity

string code = await this.AppUserManager.GenerateEmailConfirmationTokenAsync(user.Id);

var callbackUrl = new Uri(Url.Link("ConfirmEmailRoute", new { userId = user.Id, code = code }));

await this.AppUserManager.SendEmailAsync(user.Id,"Confirm your account", "Please confirm your account by clicking <a href=\"" + callbackUrl + "\">here</a>");

Uri locationHeader = new Uri(Url.Link("GetUserById", new { id = user.Id }));

return Created(locationHeader, TheModelFactory.Create(user));

The implementation is straightforward: we create a unique code (token) which is valid for the next 6 hours and tied to this user id only; this happens when calling the "GenerateEmailConfirmationTokenAsync" method. Then we build an activation link to send in the email body; this link contains the user id and the code created.

Eventually this link will be sent to the registered user at the email address used during registration, and the user needs to click on it to activate the account. The route "ConfirmEmailRoute", which maps to this activation link, is not implemented yet; we'll implement it in the next step.

Lastly we need to send the email including the link we've built by calling the method "SendEmailAsync", which accepts the user id, email subject, and email body.

1.4 Add the Confirm Email URL

The activation link which the user will receive will look as the below:

http://localhost/api/account/ConfirmEmail?userid=xxxx&code=xxxx

So we need to build a route in our API which receives this request when the user clicks on the activation link and issues the HTTP GET request. To do so we need to add the method below to the "AccountsController" class:

[HttpGet]
        [Route("ConfirmEmail", Name = "ConfirmEmailRoute")]
        public async Task<IHttpActionResult> ConfirmEmail(string userId = "", string code = "")
        {
            if (string.IsNullOrWhiteSpace(userId) || string.IsNullOrWhiteSpace(code))
            {
                ModelState.AddModelError("", "User Id and Code are required");
                return BadRequest(ModelState);
            }

            IdentityResult result = await this.AppUserManager.ConfirmEmailAsync(userId, code);

            if (result.Succeeded)
            {
                return Ok();
            }
            else
            {
                return GetErrorResult(result);
            }
        }

The implementation is simple: we only validate that the user id and code are not empty, then we rely on the method "ConfirmEmailAsync" to validate the user id and the code. If the user id is not tied to this code it will fail, and if the code has expired it will fail too; if all is good this method will update the database field "EmailConfirmed" in the table "AspNetUsers" and set it to "True", and you are done, you have implemented email account activation!

Important Note: It is recommended to validate the password before confirming the email account. In some cases the user might mistype the email during registration, and you do not want to end up sending the confirmation email to someone else who then receives it and activates the account on the real user's behalf. So the better way is to ask for the account password before activating it; if you want to do this you need to change the "ConfirmEmail" method to POST and send the password along with the user id and code in the request body. You have the idea, so you can implement it by yourself :) (a rough sketch of one possible shape for this follows below).
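
Purely as an illustration of that note, and not the implementation used in this series, a password-checked confirmation endpoint could be sketched roughly like this; the binding model is hypothetical:

public class ConfirmEmailBindingModel
{
    public string UserId { get; set; }
    public string Code { get; set; }
    public string Password { get; set; }
}

[AllowAnonymous]
[HttpPost]
[Route("ConfirmEmail")]
public async Task<IHttpActionResult> ConfirmEmail(ConfirmEmailBindingModel model)
{
    var user = await this.AppUserManager.FindByIdAsync(model.UserId);

    // Only confirm the email when the caller also knows the account password
    if (user == null || !await this.AppUserManager.CheckPasswordAsync(user, model.Password))
    {
        ModelState.AddModelError("", "Invalid user id or password");
        return BadRequest(ModelState);
    }

    IdentityResult result = await this.AppUserManager.ConfirmEmailAsync(model.UserId, model.Code);

    return result.Succeeded ? (IHttpActionResult)Ok() : GetErrorResult(result);
}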

2. Configure User (Username, Email) and Password policy

2.1 Change User Policy

In some cases you want to enforce certain rules on the username and password when users register in your system, and the ASP.NET Identity 2.1 system offers this feature. For example, if we want to enforce that our usernames only allow alphanumeric characters and that the email associated with a user is unique, all we need to do is set those properties in the "ApplicationUserManager" class; to do so, open the "ApplicationUserManager" file and paste the code below inside the "Create" method:

//Rest of code is removed for brevity
//Configure validation logic for usernames
appUserManager.UserValidator = new UserValidator<ApplicationUser>(appUserManager)
{
	AllowOnlyAlphanumericUserNames = true,
	RequireUniqueEmail = true
};

2.2 Change Password Policy

The same applies to the password policy. For example, you can enforce that passwords must be at least 6 characters long and contain a special character, at least one lower case and at least one upper case character; to implement this policy, all we need to do is set those properties in the same "ApplicationUserManager" class inside the "Create" method as in the code below:

//Rest of code is removed for brevity
//Configure validation logic for passwords
appUserManager.PasswordValidator = new PasswordValidator
{
	RequiredLength = 6,
	RequireNonLetterOrDigit = true,
	RequireDigit = false,
	RequireLowercase = true,
	RequireUppercase = true,
};

2.3 Implement Custom Policy for User Email and Password

In some scenarios you want to apply your own custom policy for validating emails or passwords. This can be done easily by creating your own validation classes and hooking them to the "UserValidator" and "PasswordValidator" properties in the "ApplicationUserManager" class.

For example, if we want to enforce using only the following domains ("outlook.com", "hotmail.com", "gmail.com", "yahoo.com") when a user self-registers, we need to create a class deriving from "UserValidator<ApplicationUser>". To do so, add a new folder named "Validators", then add a new class named "MyCustomUserValidator" and paste the code below:

public class MyCustomUserValidator : UserValidator<ApplicationUser>
    {

        List<string> _allowedEmailDomains = new List<string> { "outlook.com", "hotmail.com", "gmail.com", "yahoo.com" };

        public MyCustomUserValidator(ApplicationUserManager appUserManager)
            : base(appUserManager)
        {
        }

        public override async Task<IdentityResult> ValidateAsync(ApplicationUser user)
        {
            IdentityResult result = await base.ValidateAsync(user);

            var emailDomain = user.Email.Split('@')[1];

            if (!_allowedEmailDomains.Contains(emailDomain.ToLower()))
            {
                var errors = result.Errors.ToList();

                errors.Add(String.Format("Email domain '{0}' is not allowed", emailDomain));

                result = new IdentityResult(errors);
            }

            return result;
        }
    }

What we have implemented above is that the default validation takes place first, then the custom validation in the "ValidateAsync" method is applied; if there are validation errors they are added to the existing "Errors" list and returned in the response.

In order to fire this custom validation, we need to open the "ApplicationUserManager" class again and hook this custom class to the "UserValidator" property as in the code below:

//Rest of code is removed for brevity
//Configure validation logic for usernames
appUserManager.UserValidator = new MyCustomUserValidator(appUserManager)
{
	AllowOnlyAlphanumericUserNames = true,
	RequireUniqueEmail = true
};

Note: The tutorial code is not using the custom "MyCustomUserValidator" class; it exists in the source code for your reference.

The same applies for adding a custom password policy: all you need to do is create a class named "MyCustomPasswordValidator", derive it from the "PasswordValidator" class, and override the "ValidateAsync" method as below. So add a new file named "MyCustomPasswordValidator" in the "Validators" folder and use the code below:

public class MyCustomPasswordValidator : PasswordValidator
    {
        public override async Task<IdentityResult> ValidateAsync(string password)
        {
            IdentityResult result = await base.ValidateAsync(password);

            if (password.Contains("abcdef") || password.Contains("123456"))
            {
                var errors = result.Errors.ToList();
                errors.Add("Password can not contain sequence of chars");
                result = new IdentityResult(errors);
            }
            return result;
        }
    }

In this implementation we added a basic rule which checks if the password contains a sequence of characters and rejects this type of password by adding the validation result to the Errors list; it is exactly the same idea as the custom user policy.

Now, to attach this class as the default password validator, all you need to do is open the "ApplicationUserManager" class and use the code below:

//Rest of code is removed for brevity
// Configure validation logic for passwords
appUserManager.PasswordValidator = new MyCustomPasswordValidator
{
	RequiredLength = 6,
	RequireNonLetterOrDigit = true,
	RequireDigit = false,
	RequireLowercase = true,
	RequireUppercase = true,
};

All the other validation rules will still take place (i.e. checking the minimum password length and checking for special characters), and then the implementation in our "MyCustomPasswordValidator" is applied.

3. Enable Changing Password and Deleting Account

Now we need to add other endpoints which allow the user to change the password, and allow a user in the "Admin" role to delete other users' accounts. Those end points should be accessed only if the user is authenticated; we need to know the identity of the user performing the action and which role(s) the user belongs to. Until now all our endpoints have been called anonymously, so let's add those endpoints now and cover the authentication and authorization part next.

3.1 Add Change Password Endpoint

This is easy to implement; all you need to do is open the "AccountsController" and paste the code below:

[Route("ChangePassword")]
        public async Task<IHttpActionResult> ChangePassword(ChangePasswordBindingModel model)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            IdentityResult result = await this.AppUserManager.ChangePasswordAsync(User.Identity.GetUserId(), model.OldPassword, model.NewPassword);

            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            return Ok();
        }

Notice how we are calling the method "ChangePasswordAsync" and passing the authenticated user id, old password and new password. If you tried to call this endpoint now, the extension method "GetUserId" would not work, because you are calling it as an anonymous user and the system doesn't know your identity, so hold off on testing until we implement the authentication part.

The method "ChangePasswordAsync" will take care of validating your current password, as well as validating your new password against the password policy, and then replacing the old password with the new one.

Do not forget to add the "ChangePasswordBindingModel" to the "AccountBindingModels" file as in the code below:

public class ChangePasswordBindingModel
    {
        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Current password")]
        public string OldPassword { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "New password")]
        public string NewPassword { get; set; }

        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Confirm new password")]
        [Compare("NewPassword", ErrorMessage = "The new password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    
    }

3.2 Delete User Account

We want to add a feature which allows a user in the "Admin" role to delete a user account. We haven't introduced role management or authorization yet, so we'll add this end point now and make a slight modification to it later; for now any anonymous user can invoke it and delete any user by passing the user id.

To implement this we need to add a new method named "DeleteUser" to the "AccountsController" as in the code below:

[Route("user/{id:guid}")]
        public async Task<IHttpActionResult> DeleteUser(string id)
        {

            //Only SuperAdmin or Admin can delete users (Later when implement roles)

            var appUser = await this.AppUserManager.FindByIdAsync(id);

            if (appUser != null)
            {
                IdentityResult result = await this.AppUserManager.DeleteAsync(appUser);

                if (!result.Succeeded)
                {
                    return GetErrorResult(result);
                }

                return Ok();

            }

            return NotFound();
          
        }

This method will check whether the user Id exists and, based on that, delete the user. To test this method we need to issue an HTTP DELETE request to the endpoint “api/accounts/user/{id}”.
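
If it helps to see that call from code rather than a REST client, below is a minimal sketch using HttpClient; the base address, class name and user Id are placeholders, not values taken from this tutorial.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class AccountsApiClient
{
    // Hypothetical helper; adjust the base address to wherever the API is hosted locally.
    public static async Task DeleteUserAsync(string userId)
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:59822/") })
        {
            HttpResponseMessage response = await client.DeleteAsync("api/accounts/user/" + userId);

            // 200 OK when the user was found and deleted, 404 Not Found otherwise
            Console.WriteLine(response.StatusCode);
        }
    }
}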

The source code for this tutorial is available on GitHub.

In the next post we’ll see how to implement JSON Web Token (JWT) authentication and manage access to all the methods we have added so far.

Follow me on Twitter @tjoudeh

The post ASP.NET Identity 2.1 Accounts Confirmation, and Password Policy Configuration – Part 2 appeared first on Bit of Technology.


Dominick Baier: IdentityServer3 1.0.0

Today is a big day for us! Brock and I started working on the next generation of IdentityServer over 14 months ago. In fact – I remember exactly how I created the very first file (constants.cs) somewhere in the Swiss Alps and was hunting for internet connection to do a check-in (much to the dislike of my family).

1690 commits later it is time to recap what we did, why we did it – and where we are now.

Having spent a considerable amount of time in the WS*/SAML world, it became more and more apparent that these technologies are not a good match for the modern types of applications that we (and our customers) like to build. These types of applications are pretty much a combination of web and native UIs combined with web APIs. Security protocols need to be API, HTTP and mobile friendly, and we need authentication, API access and identity delegation as first class citizens.

We had two options – either try to retrofit the new protocols into the old WS* architecture (like so many commercial products do) or start from scratch. Since we also had a number of other high priority design goals for the new version we decided to start from scratch.

Some of the highlights of IdentityServer3 (at least in our opinion) are:

Support for the modern security stack
OpenID Connect and OAuth2 that is. These two protocols in combination are the perfect match to build the modern applications we had in mind. OAuth2 is used to manage access (and access control) from clients to APIs for both trusted subsystem and identity delegation systems. OpenID Connect is the extension to OAuth2 for implementing rich authentication and single sign-on scenarios for any application type.

Hosting
We wanted to be much more flexible in our hosting scenarios – IIS vs self-hosting, Windows vs Linux, ASP.NET vCurrent vs vNext, Embedded into the application vs separate standalone vs separate web farm vs cloud – you name it. Regardless which hosting environment you choose – IdentityServer is always the same.

Flexibility and Extensibility
IdentityServer2 always had a dependency on a database. The past years taught us that there are many situations where this is not appropriate. In the new version everything is code first and abstracted behind interfaces. Everything can be done in memory and no persistence store is required. We have an optional extension that uses Entity Framework for persistence – but this is up to you.

Another issue we had in the past was that there were too many situations where one had to change the core source code to implement some custom workflow. In IdentityServer3 we think we did a good job in anticipating the typical (and not so typical) modifications and baked them right into the core runtime as extensibility points. So far this has worked out really well.

Framework vs Server
As mentioned above – IdentityServer3 is all about customization and extensibility. The developer is in the centre and we give him lots of freedom in changing almost any aspect of the workflow. This is the big difference to many commercial off the shelf products.

Right from the start we used the term “STS Framework” rather than a “Server” and up to today we don’t even have an admin UI for managing the server configuration. We (and most people we spoke to) were absolutely fine doing all of that in code and in their custom configuration system. That said – we have an admin service and UI in the works that will be released soon – but again this is totally optional.

Brock and I just recently spoke to Carl and Richard about these design goals on .NET Rocks.

Where to go?
To accommodate the new versioning scheme (we switched to semver) and the componentized architecture we changed both the GitHub organization and repo names as well as the Nuget package names. The new organization can be found here and the main repo is here along with instructions on how to contribute and an issue tracker for filing bugs or giving feedback.

The new docs site gives quite a bit of background and can be found here – or you can jump directly to our samples.

If you need consulting about modern (or not so modern) security architectures in general and IdentityServer in particular – you can contact us via email at identity@leastprivilege.com or via twitter: @leastprivilege & @brocklallen.

What’s next?
We have a couple of “side projects” that complement the core IdentityServer3 – there’s IdentityManager, which we neglected a bit over the last few months, and there’s the admin service and UI (good people are working on that right now)… And there are of course new features to implement for IdentityServer – check this label and take part in the discussion.

Last but not least
The last 14 months were astounding – we got more feedback, questions, bug reports, PRs and help on IdentityServer3 than all other OSS projects we did before combined. You guys were fantastic! Thanks for your help – we hope you enjoy the result (..and keep it coming)!

Thanks!
Dominick & Brock


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Radenko Zec: ASP.NET Identity 2.1 implementation for MySQL

In this blog post I will try to cover how to use a custom ASP.NET identity provider for MySQL I have created.

The default ASP.NET Identity provider uses Entity Framework and SQL Server to store information about users.

If you are trying to implement ASP.NET Identity 2.1 for MySQL database, then follow this guide.

This implementation uses the Oracle fully-managed ADO.NET driver for MySQL.

This means that you have a connection string in your web.config similar to this:

<add name="DefaultConnection" connectionString="Server=localhost;
Database=aspnetidentity;Uid=radenko;Pwd=somepass;" providerName="MySql.Data.MySqlClient" />

 

This implementation of ASP.NET Identity 2.1 for MySQL has all the major interfaces implemented in a custom UserStore class:

[Image: ASPIdentityUserStoreInterfaces]

Source code of my implementation is available at GitHub – MySqlIdentity

First, you will need to execute this create script on your MySQL database, which will create the tables required by the ASP.NET Identity provider.

MySqlAspIdentityDatabase

  • Create a new ASP.NET MVC 5 project, choosing the Individual User Accounts authentication type.
  • Uninstall all EntityFramework NuGet packages starting with Microsoft.AspNet.Identity.EntityFramework
  • Install NuGet Package called MySql.AspNet.Identity
  • In ~/Models/IdentityModels.cs:
    • Remove the namespaces:
      • Microsoft.AspNet.Identity.EntityFramework
      • System.Data.Entity
    • Add the namespace: MySql.AspNet.Identity.
      The ApplicationUser class will now inherit from the IdentityUser class in the MySql.AspNet.Identity namespace
    • Remove the entire ApplicationDbContext class. This class is not needed anymore.
  • In ~/App_Start/Startup.Auth.cs
    • Delete this line of code
app.CreatePerOwinContext(ApplicationDbContext.Create);
  • In ~/App_Start/IdentityConfig.cs
    Remove the namespaces:

    • Microsoft.AspNet.Identity.EntityFramework
    • System.Data.Entity
  • In the Create method inside the ApplicationUserManager class, replace the ApplicationUserManager instantiation with one that accepts a MySqlUserStore:
 public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
 {
     //var manager = new ApplicationUserManager(new UserStore<ApplicationUser>(context.Get<ApplicationDbContext>()));
     var manager = new ApplicationUserManager(new MySqlUserStore<ApplicationUser>());

     // ... the rest of the Create method (validators, token providers, etc.) stays unchanged
     return manager;
 }

MySqlUserStore accepts an optional constructor parameter – a connection string name – so if you are not using DefaultConnection as your connection string you can pass in another one.
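
For example, here is a small sketch of overriding the connection string name; "MyMySqlConnection" is a placeholder for an entry you would add to web.config yourself:

// "MyMySqlConnection" is a hypothetical connection string name defined in web.config
var manager = new ApplicationUserManager(new MySqlUserStore<ApplicationUser>("MyMySqlConnection"));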

After this you should be able to build your ASP.NET MVC project and run it successfully using MySQL as the store for your users, roles, claims and other information.

If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.

 

The post ASP.NET Identity 2.1 implementation for MySQL appeared first on RadenkoZec blog.


Darrel Miller: Hypermedia, past, present and future

Hypermedia is not a new concept, it has been around in various forms since the 1960s.  However, in the past seven years there has been a significant resurgence of interest in the concept.  This blog post contains my reflections on the past few years, where we currently are and where we might be headed in the use of hypermedia for building distributed applications.

The HTML years

The majority of developers have only been exposed to hypermedia via HTML.  HTML is an "in-your-face" example of the success of hypermedia, and yet in most Web API related discussions it is often dismissed with "well that's different".  A distinction is made by many between human-2-machine interactions and machine-2-machine interactions.  This distinction is used to explain why API interactions are different.  To be honest, I've yet to see my parents open a web browser and construct a HTTP request by hand.  The Web Browser is itself a client application, running code that makes HTTP requests.  It is not completely autonomous, but no human invokes calls to load a stylesheet, javascript snippet, or embedded image.

On the other hand, the web crawlers used by search engines are autonomous and they also consume HTML hypermedia.

There were numerous efforts over the years to present HTML, or more specifically XHTML, as a viable media type for Web APIs.  Jon Moore from Comcast made the biggest splash, but Microsoft made efforts in this area too, and I participated in workshops at RESTFest where we used XHTML as the response media type.

Despite this, the use of HTML in Web APIs has never really gained significant traction.

RDF's lofty goals

RDF has been around for almost as long as HTML, but it has never really made significant progress outside of academia.  I've talked to a significant number of very smart people who believe RDF is the answer.  However, I suspect it is the Xanadu, or the Betamax, of media types.

The challenge with RDF is that it can be quite difficult to grok.  First of all there are a variety of different serialization formats: turtle, n3, RDFa and now JSON-LD.  This can make learning RDF tricky because you really need to learn the conceptual model and then learn the mapping used for serialization.

The model of RDF is based around triples, i.e. Subject, predicate and object, where these elements are often identified using URIs.  This can produce quite cumbersome looking documents.

My experience has been that most developers don't have the patience for the sophistication of RDF and quickly want to hide the complexity with tooling.  Tooling can hurt or help; it just depends on who is writing the tooling and what their goals are.

It is possible that I could be proved wrong by JSON-LD.  A number of organizations are starting to adopt JSON-LD.  Only time will tell if it will stick.

Feeds

The RSS format was originally based on an early working draft of RDF.  In 2005 the Atom Syndication Format was released as a replacement for RSS.  Both these formats had the specific goal of allowing content creators and distributors to advertise regularly produced content along with metadata about that content.

These formats spawned the creation of a wide range of new client applications that could consume this format. 

Feeds As Containers

The success of the Atom format spurred API providers to consider using Atom as a container of data other than blog posts and news items.  Big players like Google and Microsoft created GData and OData – similar ideas where Atom was used as an envelope for API data.

In order for feed readers to be useful, they needed to understand the contents of the Atom Entry element.  Most often this content was HTML and could be rendered with a HTML rendering library.  However, when Atom feeds were used in APIs for carrying non-HTML data there was no standardized way for clients to understand the contents of the Atom Entry.  The custom data content was identified using XML namespaces that a client was expected to recognize.  Unfortunately, XML namespaces introduce URIs and prefixes and the ability to mix multiple namespaces in a single document.  This starts to bring back the complexity of RDF.

GData didn't really last very long and OData has some successes and some failures and is currently on version 4.

Overall OData was more ambitious than Gdata in that it defined a standardized querying mechanism and built on top of Microsoft's CSDL which is a metadata language used by the ORM, Entity Framework.  This allowed lots of tooling to be built around OData.

Another more recent effort in this area is Activity Streams.  Initially, I understood this to be a generalized way of advertising lists of events that occur; it seems to have evolved into not only an activity stream container, but a mechanism for describing activities using vocabularies.

Linking and Embedding

Late 2010 saw the beginning of a flurry of activity in the hypermedia space.  Mike Kelly created HAL, which defined a JSON based format that supported linking to other resources and embedding portions of other resources into representations.

In 2011 Collection+Json was created by Mike Amundsen; it provides a way of representing a list of things as well as methods to search the list and add to the list.

Siren, Mason, Uber and JsonApi all followed in the following years, all attempting to address perceived shortcomings of HAL and C+J.

The Great Form Debate

One of the features that has been continually debated is the need for forms support in hypermedia types.  HAL did not support forms. Siren, Mason, C+J and Uber do.  HAL relies heavily on link relation types to convey both read and write semantics.  The authors of the other formats feel it is more valuable to have explicit syntax for describing write operations.

[Image: DebatingMonks]

Spoilt For Choice

The growing REST community has learned a great deal in the process of developing these specifications.  However, they are left with a whole new problem.  Developers who want to start down the path of using hypermedia in their APIs need to make a choice between a range of formats that are only subtly different in their capabilities.  How are developers supposed to choose?

[Image: BeerTaps]

In the past, choosing a media type was never a particularly difficult proposition because media types tended to be built for a specific purpose.  HTML was originally designed for describing textual documents, image/jpeg is ideal for photographs, text/css for stylesheets, etc.

However, this most recent set of media types has focused on message format semantics, i.e. the ability to link to other content, embed content and describe affordances, all without saying anything about the application domain.  This has the advantage that they can be used within any application domain, for almost any purpose; however, it also means they have no inherent meaning, no purpose.

Meanwhile On Another Planet

In contrast to these generic hypermedia types, in the world of telecommunications, a media type called VoiceXML was developed.

[Image: Planet]

VoiceXML was developed to drive audio and voice response applications.  Related to that effort was CCXML (Call Control eXtensible Markup Language), MSML (Media Server Markup Language), MSCML (Media Server Control Markup Language) and most recently SCXML (State Chart XML).

All of these media types are used as part of a larger system, but each media type is designed to solve a specific problem domain.  

The Great Divide

Earlier I accused some media types of having no meaning, no purpose.  This is actually a design objective of some media types.  The idea is that media types can limit their complexity by focusing on just the structural and protocol semantics of the message and leave application and domain semantics to a separate mechanism.  Formats like RDF use a concept called ontologies to describe application semantics.  JSON-LD can also use vocabularies defined at schema.org.  HAL recently added support for a notion called Profiles that existed in early versions of HTML and has been resurrected in recent years.  The idea of Profiles is that you can independently apply a set of domain semantics to a hypermedia message.

The advantage of using Profiles is that your API only needs to support a small set of media types, possibly only one, and application semantics can be layered on top.  This limits the time and risk involved in designing and documenting new media types.  Mike Amundsen has been spearheading an effort to define a description language called ALPS that makes defining profile description documents possible.

[Image: PaintExplosion]

Media Type Explosion

One of the pieces of lore in the hypermedia community is that we must avoid a phenomenon called "media type explosion".  The fear is that if every API provider begins creating media types for all of their application semantics we will massively dilute the re-usability of media types and potentially introduce security vulnerabilities.  Also, the process of registration and expert review is not designed to be efficient for large numbers of media types that may never be suitable for public consumption.

Ironically, the fear of media type explosion has discouraged people from creating media types, and they have therefore simply resorted to tunneling application semantics over generic media types like application/json.  The result is an explosion of implicit JSON structures.  Not exactly the desired effect.

API Media Types

As a popular alternative to reusing existing media types, or creating media types for each application concept, some APIs have decided to create a single media type definition for the entire API.  This approach provides an API with a central place to document message format conventions and structure.  APIs like GitHub, Heroku and Sun's Cloud API take this approach.  This is an interesting compromise.  However, it limits the potential for re-use because the definition of the media type is scoped to the particular API.

Horizontal Media Types

Ideally, at least in my opinion, media types would be built that solve a specific problem but could be re-used across many APIs.  The end goal would be to be able to build APIs by composing a set of pre-defined media types, minimizing or even eliminating the need for new media types to support the API.

Getting widespread agreement on application domain types, like customer, invoice, work task, employee, etc, is especially difficult.  This can be seen if you dig into the history of standards like ANSI X12,  Edifact or UBL. 

However, there are many aspects of applications that are very similar between applications: users, accounts, permissions, roles, errors, lists, reports, long running operations, filters, tables, graphs, dashboards.

I believe these are the low hanging fruit for building re-usable media types.

Link Relation Types Add Context

Media types are not the only way to convey semantic information to a client.  Link relations are an extremely valuable way to add semantic context to more generic media types.  Consider a media type that described a street address.  An address is a fairly well defined concept that could be sufficiently defined for use in a wide range of scenarios.

Creating link relations like "ShipTo", "InvoiceTo", "Home", "Work", "Destination", can provide sufficient additional semantics to  a fairly generic media type to allow a client to make intelligent choices.

Even link relations have different styles in the way they are defined.  Some are very generic, like "next" and "previous".  Others have precisely defined behaviour like "hub" and "oauth2-token".

So, What Should I Use?

If only there were an easy answer to that question.  The hard answer is: learn about the options you have, understand the pros and cons to each approach and then consider the context of the application you are trying to build.  At that point you may have enough information to choose the right solution for your problem.  Good luck!  And make sure you tell everyone about your experiences.  We are all learning together.

Image Credits:
Janus http://davy.potdevin.free.fr/Site/links.html
Beer Taps https://flic.kr/p/3qbi
Planet : https://flic.kr/p/5WBkp9
Paint Explosion : https://flic.kr/p/7ZNa5C


Taiseer Joudeh: ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management) – Part 1


ASP.NET Identity 2.1 is the latest membership and identity management framework provided by Microsoft; this membership system can be plugged into any ASP.NET framework such as Web API, MVC, Web Forms, etc…

In this tutorial we’ll cover how to integrate the ASP.NET Identity system with ASP.NET Web API, so we can build a secure HTTP service which acts as the back-end for an SPA front-end built using AngularJS. I’ll try to cover, in a simple way, different ASP.NET Identity 2.1 features such as: account management, roles management, email confirmations, changing passwords, roles based authorization, claims based authorization, brute force protection, etc…

The AngularJS front-end application will use bearer token based authentication using the JSON Web Token (JWT) format and should support roles based authorization and contain the basic features of any membership system. The SPA is not ready yet but hopefully it will sit on top of our HTTP service without the need to come back and modify the ASP.NET Web API logic.

I will follow a step by step approach and I’ll start from scratch without using any VS 2013 templates so we’ll have a better understanding of how the ASP.NET Identity 2.1 framework talks to the ASP.NET Web API framework.

The source code for this tutorial is available on GitHub.

I broke down this series into multiple posts which I’ll be posting gradually, posts are:

Configure ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management)

Setting up the ASP.NET Identity 2.1

Step 1: Create the Web API Project

In this tutorial I’m using Visual Studio 2013 and .NET framework 4.5. Now create an empty solution and name it “AspNetIdentity”, then add a new ASP.NET Web application named “AspNetIdentity.WebApi”; we will select an empty template with no core dependencies at all, as in the image below:

[Image: WebApiNewProject]

Step 2: Install the needed NuGet Packages:

We’ll install all the NuGet packages below to set up our Owin server and configure ASP.NET Web API to be hosted within an Owin server, as well as the packages needed for ASP.NET Identity 2.1. If you would like to know more about the use of each package and what the Owin server is, please check this post.

Install-Package Microsoft.AspNet.Identity.Owin -Version 2.1.0
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.1.0
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Security.OAuth -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0

 Step 3: Add Application user class and Application Database Context:

Now we want to define our first custom entity framework class, which is the “ApplicationUser” class. This class will represent a user who wants to register in our membership system, and we want to extend the default class in order to add application-specific data properties for the user, such as: First Name, Last Name, Level, JoinDate. Those properties will be converted to columns in the table “AspNetUsers” as we’ll see in the next steps.

So to do this we need to create new class named “ApplicationUser” and derive from “Microsoft.AspNet.Identity.EntityFramework.IdentityUser” class.

Note: If you do not want to add any extra properties to this class, then there is no need to extend the default implementation and derive from “IdentityUser” class.

To do so add new folder named “Infrastructure” to our project then add new class named “ApplicationUser” and paste the code below:

public class ApplicationUser : IdentityUser
    {
        [Required]
        [MaxLength(100)]
        public string FirstName { get; set; }

        [Required]
        [MaxLength(100)]
        public string LastName { get; set; }

        [Required]
        public byte Level { get; set; }

        [Required]
        public DateTime JoinDate { get; set; }

    }

Now we need to add a database context class which will be responsible for communicating with our database, so add a new class named “ApplicationDbContext” under the folder “Infrastructure” then paste the code snippet below:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
    {
        public ApplicationDbContext()
            : base("DefaultConnection", throwIfV1Schema: false)
        {
            Configuration.ProxyCreationEnabled = false;
            Configuration.LazyLoadingEnabled = false;
        }

        public static ApplicationDbContext Create()
        {
            return new ApplicationDbContext();
        }

    }

As you can see this class inherits from the “IdentityDbContext” class; you can think of it as a special version of the traditional “DbContext” class. It provides all of the Entity Framework code-first mapping and DbSet properties needed to manage the identity tables in SQL Server. The default constructor takes the connection string name “DefaultConnection” as an argument; this connection string will be used to point to the right server and database name to connect to.

The static method “Create” will be called from our Owin Startup class, more about this later.

Lastly we need to add a connection string which points to the database that will be created using code first approach, so open “Web.config” file and paste the connection string below:

<connectionStrings>
    <add name="DefaultConnection" connectionString="Data Source=.\sqlexpress;Initial Catalog=AspNetIdentity;Integrated Security=SSPI;" providerName="System.Data.SqlClient" />
  </connectionStrings>

Step 4: Create the Database and Enable DB migrations:

Now we want to enable the EF code-first migrations feature, which configures code first to update the database schema instead of dropping and re-creating the database with each change to the EF entities. To do so we need to open the NuGet Package Manager Console and type the following commands:

enable-migrations
add-migration InitialCreate

The “enable-migrations” command creates a “Migrations” folder in the “AspNetIdentity.WebApi” project, and it creates a file named “Configuration”; this file contains a method named “Seed” which allows us to insert or update test/initial data after code first creates or updates the database. This method is called when the database is created for the first time and every time the database schema is updated after a data model change.

[Image: Migrations]

As well the “add-migration InitialCreate” command generates the code that creates the database from scratch. This code is also in the “Migrations” folder, in the file named “<timestamp>_InitialCreate.cs“. The “Up” method of the “InitialCreate” class creates the database tables that correspond to the data model entity sets, and the “Down” method deletes them. So in our case if you opened this class “201501171041277_InitialCreate” you will see the extended data properties we added in the “ApplicationUser” class in method “Up”.
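
For reference, here is an abbreviated, hand-written sketch of what the generated “Up” method typically looks like (the file lives in the “Migrations” folder and uses System.Data.Entity.Migrations); your generated file will contain the full column list and the other Identity tables as well:

public partial class InitialCreate : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.AspNetUsers",
            c => new
                {
                    Id = c.String(nullable: false, maxLength: 128),
                    FirstName = c.String(nullable: false, maxLength: 100),
                    LastName = c.String(nullable: false, maxLength: 100),
                    Level = c.Byte(nullable: false),
                    JoinDate = c.DateTime(nullable: false),
                    // ... the remaining standard Identity columns (UserName, Email, PasswordHash, etc.)
                })
            .PrimaryKey(t => t.Id);

        // ... CreateTable calls for AspNetRoles, AspNetUserRoles, AspNetUserClaims, AspNetUserLogins
    }

    public override void Down()
    {
        // ... drops the tables created above
        DropTable("dbo.AspNetUsers");
    }
}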

Now back to the “Seed” method in class “Configuration”, open the class and replace the Seed method code with the code below:

protected override void Seed(AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext context)
        {
            //  This method will be called after migrating to the latest version.

            var manager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()));

            var user = new ApplicationUser()
            {
                UserName = "SuperPowerUser",
                Email = "taiseer.joudeh@mymail.com",
                EmailConfirmed = true,
                FirstName = "Taiseer",
                LastName = "Joudeh",
                Level = 1,
                JoinDate = DateTime.Now.AddYears(-3)
            };

            manager.Create(user, "MySuperP@ssword!");
        }

This code basically creates a user once the database is created.

Now we are ready to trigger the event which will create the database on our SQL server based on the connection string we specified earlier, so open NuGet Package Manager Console and type the command:

update-database

The “update-database” command runs the “Up” method in the “Configuration” file and creates the database and then it runs the “Seed” method to populate the database and insert a user.

If all went fine, navigate to your SQL Server instance; the database, along with the additional fields in the table “AspNetUsers”, should have been created as in the image below:

[Image: AspNetIdentityDB]

Step 5: Add the User Manager Class:

The User Manager class will be responsible for managing instances of the user class. The class will derive from “UserManager<T>” where T will represent our “ApplicationUser” class; once it derives from “UserManager<T>” a set of methods becomes available which facilitate managing users in our Identity system. Some of the exposed methods we’ll use from the “UserManager” during this tutorial are:

  • FindByIdAsync(Id) – Finds a user object based on its unique identifier
  • Users – Returns an enumeration of the users
  • FindByNameAsync(Username) – Finds a user based on its username
  • CreateAsync(User, Password) – Creates a new user with a password
  • GenerateEmailConfirmationTokenAsync(Id) – Generates an email confirmation token which is used in email confirmation
  • SendEmailAsync(Id, Subject, Body) – Sends a confirmation email to the newly registered user
  • ConfirmEmailAsync(Id, token) – Confirms the user email based on the received token
  • ChangePasswordAsync(Id, OldPassword, NewPassword) – Changes the user password
  • DeleteAsync(User) – Deletes the user
  • IsInRoleAsync(Id, RoleName) – Checks if a user belongs to a certain role
  • AddToRoleAsync(Id, RoleName) – Assigns a user to a specific role
  • RemoveFromRoleAsync(Id, RoleName) – Removes a user from a specific role
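
To give an idea of how these methods are typically called, here is a small sketch; it assumes the usual Microsoft.AspNet.Identity usings, an already-initialized user manager, and that the user and the “Admin” role already exist (roles management itself is covered in a later post). The method and user names are placeholders:

public static async Task EnsureAdminRoleAsync(UserManager<ApplicationUser> manager)
{
    // Find a user by name, then make sure it belongs to the "Admin" role
    ApplicationUser user = await manager.FindByNameAsync("SuperPowerUser");

    if (user != null && !await manager.IsInRoleAsync(user.Id, "Admin"))
    {
        IdentityResult result = await manager.AddToRoleAsync(user.Id, "Admin");
        // result.Succeeded tells us whether the role assignment worked
    }
}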

Now to implement the “UserManager” class, add new file named “ApplicationUserManager” under folder “Infrastructure” and paste the code below:

public class ApplicationUserManager : UserManager<ApplicationUser>
    {
        public ApplicationUserManager(IUserStore<ApplicationUser> store)
            : base(store)
        {
        }

        public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
        {
            var appDbContext = context.Get<ApplicationDbContext>();
            var appUserManager = new ApplicationUserManager(new UserStore<ApplicationUser>(appDbContext));

            return appUserManager;
        }
    }

As you can notice from the code above, the static method “Create” will be responsible for returning an instance of the “ApplicationUserManager” class named “appUserManager”. The constructor of the “ApplicationUserManager” expects to receive an instance of the “UserStore”, and the UserStore constructor in turn expects to receive an instance of our “ApplicationDbContext” defined earlier. Currently we are reading this instance from the Owin context, but we didn’t add it to the Owin context yet, so let’s jump to the next step to add it.

Note: In the coming post we’ll apply different changes to the “ApplicationUserManager” class, such as configuring the email service and setting user and password policies.

Step 6: Add Owin “Startup” Class

Now we’ll add the Owin “Startup” class which will be fired once our server starts. The “Configuration” method accepts a parameter of type “IAppBuilder”; this parameter will be supplied by the host at run-time. The “app” parameter is an interface which will be used to compose the application for our Owin server, so add a new file named “Startup” to the root of the project and paste the code below:

public class Startup
    {

        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration httpConfig = new HttpConfiguration();

            ConfigureOAuthTokenGeneration(app);

            ConfigureWebApi(httpConfig);

            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);

            app.UseWebApi(httpConfig);

        }

        private void ConfigureOAuthTokenGeneration(IAppBuilder app)
        {
            // Configure the db context and user manager to use a single instance per request
            app.CreatePerOwinContext(ApplicationDbContext.Create);
            app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);

	    // Plugin the OAuth bearer JSON Web Token tokens generation and Consumption will be here

        }

        private void ConfigureWebApi(HttpConfiguration config)
        {
            config.MapHttpAttributeRoutes();

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

What is worth noting here is how we are creating a fresh instance of the “ApplicationDbContext” and “ApplicationUserManager” for each request and setting them in the Owin context using the extension method “CreatePerOwinContext”. Both objects (ApplicationDbContext and ApplicationUserManager) will be available during the entire lifetime of the request.
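
As a rough sketch of what this buys us: from inside any Web API controller we can later pull those per-request instances back out of the Owin context (this is exactly what the “BaseApiController” will do in a later step, and it assumes the System.Net.Http and Microsoft.AspNet.Identity.Owin namespaces are imported):

// Inside a Web API controller action or property:
var owinContext = Request.GetOwinContext();

ApplicationDbContext dbContext = owinContext.Get<ApplicationDbContext>();
ApplicationUserManager userManager = owinContext.GetUserManager<ApplicationUserManager>();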

Note: I didn’t plug any kind of authentication here, we’ll visit this class again and add JWT Authentication in the next post, for now we’ll be fine accepting any request from any anonymous users.

Define Web API Controllers and Methods

Step 7: Create the “Accounts” Controller:

Now we’ll add our first controller, named “AccountsController”, which will be responsible for managing user accounts in our Identity system. To do so add a new folder named “Controllers”, then add a new class named “AccountsController” and paste the code below:

[RoutePrefix("api/accounts")]
public class AccountsController : BaseApiController
{

	[Route("users")]
	public IHttpActionResult GetUsers()
	{
		return Ok(this.AppUserManager.Users.ToList().Select(u => this.TheModelFactory.Create(u)));
	}

	[Route("user/{id:guid}", Name = "GetUserById")]
	public async Task<IHttpActionResult> GetUser(string Id)
	{
		var user = await this.AppUserManager.FindByIdAsync(Id);

		if (user != null)
		{
			return Ok(this.TheModelFactory.Create(user));
		}

		return NotFound();

	}

	[Route("user/{username}")]
	public async Task<IHttpActionResult> GetUserByName(string username)
	{
		var user = await this.AppUserManager.FindByNameAsync(username);

		if (user != null)
		{
			return Ok(this.TheModelFactory.Create(user));
		}

		return NotFound();

	}
}

What we have implemented above is the following:

  • Our “AccountsController” inherits from a base controller named “BaseApiController”. This base controller is not created yet, but it contains members that will be reused among the different controllers we’ll add during this tutorial; the members which come from “BaseApiController” are: “AppUserManager”, “TheModelFactory”, and “GetErrorResult”. We’ll see the implementation of this class in the next step.
  • We have added 3 methods/actions so far in the “AccountsController”:
    • Method “GetUsers” will be responsible for returning all the registered users in our system by calling the enumeration “Users” coming from the “ApplicationUserManager” class.
    • Method “GetUser” will be responsible for returning a single user by providing its unique identifier and calling the method “FindByIdAsync” coming from the “ApplicationUserManager” class.
    • Method “GetUserByName” will be responsible for returning a single user by providing its username and calling the method “FindByNameAsync” coming from the “ApplicationUserManager” class.
    • The three methods send the user object to a class named “ModelFactory”; we’ll see in the next step the benefit of using this pattern to shape the object graph returned and how it will protect us from leaking any sensitive information about the user identity.
  • Note: All methods can be accessed by any anonymous user; for now we are fine with this, but we’ll manage the access control for each method and who the authorized identities are that can perform those actions in the coming posts.

Step 8: Create the “BaseApiController” Controller:

As we stated before, this “BaseApiController” will act as a base class which other Web API controllers will inherit from, for now it will contain three basic methods, so add new class named “BaseApiController” under folder “Controllers” and paste the code below:

public class BaseApiController : ApiController
    {

        private ModelFactory _modelFactory;
        private ApplicationUserManager _AppUserManager = null;

        protected ApplicationUserManager AppUserManager
        {
            get
            {
                return _AppUserManager ?? Request.GetOwinContext().GetUserManager<ApplicationUserManager>();
            }
        }

        public BaseApiController()
        {
        }

        protected ModelFactory TheModelFactory
        {
            get
            {
                if (_modelFactory == null)
                {
                    _modelFactory = new ModelFactory(this.Request, this.AppUserManager);
                }
                return _modelFactory;
            }
        }

        protected IHttpActionResult GetErrorResult(IdentityResult result)
        {
            if (result == null)
            {
                return InternalServerError();
            }

            if (!result.Succeeded)
            {
                if (result.Errors != null)
                {
                    foreach (string error in result.Errors)
                    {
                        ModelState.AddModelError("", error);
                    }
                }

                if (ModelState.IsValid)
                {
                    // No ModelState errors are available to send, so just return an empty BadRequest.
                    return BadRequest();
                }

                return BadRequest(ModelState);
            }

            return null;
        }
    }

What we have implemented above is the following:

  • We have added a read-only property named “AppUserManager” which gets the instance of the “ApplicationUserManager” we already set in the “Startup” class; this instance will be initialized and ready to be invoked.
  • We have added another read-only property named “TheModelFactory” which returns an instance of the “ModelFactory” class. This factory pattern will help us in shaping and controlling the response returned to the client, so we will create a simplified model for some of the domain object models (Users, Roles, Claims, etc..) we have in the database. Shaping the response and building a customized object graph is very important here, because we do not want to leak sensitive data such as “PasswordHash” to the client.
  • We have added a function named “GetErrorResult” which takes an “IdentityResult” as a parameter and formats the error messages returned to the client.

Step 9: Create the “ModelFactory” Class:

Now add a new folder named “Models” and inside this folder create a new class named “ModelFactory”. This class will contain all the functions needed to shape the response object and control the object graph returned to the client, so open the file and paste the code below:

public class ModelFactory
    {
        private UrlHelper _UrlHelper;
        private ApplicationUserManager _AppUserManager;

        public ModelFactory(HttpRequestMessage request, ApplicationUserManager appUserManager)
        {
            _UrlHelper = new UrlHelper(request);
            _AppUserManager = appUserManager;
        }

        public UserReturnModel Create(ApplicationUser appUser)
        {
            return new UserReturnModel
            {
                Url = _UrlHelper.Link("GetUserById", new { id = appUser.Id }),
                Id = appUser.Id,
                UserName = appUser.UserName,
                FullName = string.Format("{0} {1}", appUser.FirstName, appUser.LastName),
                Email = appUser.Email,
                EmailConfirmed = appUser.EmailConfirmed,
                Level = appUser.Level,
                JoinDate = appUser.JoinDate,
                Roles = _AppUserManager.GetRolesAsync(appUser.Id).Result,
                Claims = _AppUserManager.GetClaimsAsync(appUser.Id).Result
            };
        }
    }

    public class UserReturnModel
    {
        public string Url { get; set; }
        public string Id { get; set; }
        public string UserName { get; set; }
        public string FullName { get; set; }
        public string Email { get; set; }
        public bool EmailConfirmed { get; set; }
        public int Level { get; set; }
        public DateTime JoinDate { get; set; }
        public IList<string> Roles { get; set; }
        public IList<System.Security.Claims.Claim> Claims { get; set; }
    }

Notice how we included only the properties needed in the user object graph returned to the client; for example, there is no need to return the “PasswordHash” property so we didn’t include it.

Step 10: Add Method to Create Users in “AccountsController”:

It is time to add the method which allows us to register/create users in our Identity system, but before adding it, we need to add the request model object which contains the user data that will be sent from the client. So add a new file named “AccountBindingModels” under the folder “Models” and paste the code below:

public class CreateUserBindingModel
    {
        [Required]
        [EmailAddress]
        [Display(Name = "Email")]
        public string Email { get; set; }

        [Required]
        [Display(Name = "Username")]
        public string Username { get; set; }

        [Required]
        [Display(Name = "First Name")]
        public string FirstName { get; set; }

        [Required]
        [Display(Name = "Last Name")]
        public string LastName { get; set; }

        [Display(Name = "Role Name")]
        public string RoleName { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Confirm password")]
        [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    }

The class is very simple; it contains properties for the fields we want to send from the client to our API, with some data annotation attributes which help us validate the model before submitting it to the database. Notice how we added a property named “RoleName” which will not be used now, but will be useful in the coming posts.

Now it is time to add the method which register/creates a user, open the controller named “AccountsController” and add new method named “CreateUser” and paste the code below:

[Route("create")]
public async Task<IHttpActionResult> CreateUser(CreateUserBindingModel createUserModel)
{
	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	var user = new ApplicationUser()
	{
		UserName = createUserModel.Username,
		Email = createUserModel.Email,
		FirstName = createUserModel.FirstName,
		LastName = createUserModel.LastName,
		Level = 3,
		JoinDate = DateTime.Now.Date,
	};

	IdentityResult addUserResult = await this.AppUserManager.CreateAsync(user, createUserModel.Password);

	if (!addUserResult.Succeeded)
	{
		return GetErrorResult(addUserResult);
	}

	Uri locationHeader = new Uri(Url.Link("GetUserById", new { id = user.Id }));

	return Created(locationHeader, TheModelFactory.Create(user));
}

What we have implemented here is the following:

  • We validated the request model based on the data annotations we introduced in the class “AccountBindingModels”; if a field is missing the response will return HTTP 400 with a proper error message.
  • If the model is valid, we use it to create a new instance of the class “ApplicationUser”; by default we’ll put all the users in level 3.
  • Then we call the method “CreateAsync” in the “AppUserManager” which will do the heavy lifting for us. Inside this method it will validate whether the username or email has been used before and whether the password matches our policy, etc.; if the request is valid it will create a new user, add it to the “AspNetUsers” table and return a success result. From this result, and as good practice, we should return the resource created in the location header and return a 201 Created status.

Notes:

  • Sending a confirmation email for the user, and configuring user and password policy will be covered in the next post.
  • As stated earlier, there is no authentication or authorization applied yet, any anonymous user can invoke any available method, but we will cover this authentication and authorization part in the coming posts.

Step 11: Test Methods in “AccountsController”:

Lastly it is time to test the methods added to the API, so fire up your favorite REST client – Fiddler or PostMan; in my case I prefer PostMan. Let’s start by testing the “Create” user method: we need to issue an HTTP POST to the URI “http://localhost:59822/api/accounts/create” as in the request below; if creating the user went well you will receive a 201 response:

[Image: Create User]
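
If you prefer to issue the same request from code instead of PostMan, a small sketch with HttpClient could look like the below; the property names mirror the “CreateUserBindingModel”, and the class name and all values are placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class CreateUserSmokeTest
{
    public static async Task RunAsync()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:59822/") })
        {
            var payload = new
            {
                Email = "new.user@example.com",
                Username = "NewUser",
                FirstName = "New",
                LastName = "User",
                Password = "MyS3cretP@ss!",
                ConfirmPassword = "MyS3cretP@ss!"
            };

            // PostAsJsonAsync comes from the Microsoft.AspNet.WebApi.Client package
            HttpResponseMessage response = await client.PostAsJsonAsync("api/accounts/create", payload);

            // Expect 201 Created with a Location header pointing at "GetUserById",
            // or 400 Bad Request containing the model state errors
            Console.WriteLine(response.StatusCode);
        }
    }
}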

Now to test the method “GetUsers” all you need to do is issue an HTTP GET to the URI “http://localhost:59822/api/accounts/users” and the response graph will be as below:

[
  {
    "url": "http://localhost:59822/api/accounts/user/29e21f3d-08e0-49b5-b523-3d68cf623fd5",
    "id": "29e21f3d-08e0-49b5-b523-3d68cf623fd5",
    "userName": "SuperPowerUser",
    "fullName": "Taiseer Joudeh",
    "email": "taiseer.joudeh@gmail.com",
    "emailConfirmed": true,
    "level": 1,
    "joinDate": "2012-01-17T12:41:40.457",
    "roles": [
      "Admin",
      "Users",
      "SuperAdmin"
    ],
    "claims": [
      {
        "issuer": "LOCAL AUTHORITY",
        "originalIssuer": "LOCAL AUTHORITY",
        "properties": {},
        "subject": null,
        "type": "Phone",
        "value": "123456782",
        "valueType": "http://www.w3.org/2001/XMLSchema#string"
      },
      {
        "issuer": "LOCAL AUTHORITY",
        "originalIssuer": "LOCAL AUTHORITY",
        "properties": {},
        "subject": null,
        "type": "Gender",
        "value": "Male",
        "valueType": "http://www.w3.org/2001/XMLSchema#string"
      }
    ]
  },
  {
    "url": "http://localhost:59822/api/accounts/user/f0f8d481-e24c-413a-bf84-a202780f8e50",
    "id": "f0f8d481-e24c-413a-bf84-a202780f8e50",
    "userName": "tayseer.Joudeh",
    "fullName": "Tayseer Joudeh",
    "email": "tayseer_joudeh@hotmail.com",
    "emailConfirmed": true,
    "level": 3,
    "joinDate": "2015-01-17T00:00:00",
    "roles": [],
    "claims": []
  }
]

The source code for this tutorial is available on GitHub.

In the next post we’ll see how to configure our Identity service to start sending email confirmations, customize the username and password policies, implement JSON Web Token (JWT) authentication, and manage access to the methods.

Follow me on Twitter @tjoudeh

The post ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management) – Part 1 appeared first on Bit of Technology.


Filip Woj: Migrating from ASP.NET Web API to MVC 6 – exploring Web API Compatibility Shim

Migrating an MVC 5 project to ASP.NET 5 and MVC 6 is a big challenge given that both of the latter are complete rewrites of their predecessors. As a result, even if on the surface things seem similar (we have controllers, filters, actions etc), as you go deeper under the hood you realize that most, if not all, of your pipeline customizations will be incompatible with the new framework.

This pain is even more amplified if you try to migrate Web API 2 project to MVC 6 – because Web API had a bunch of its own unique concepts and specialized classes, all of which only complicate the migration attempts.

The ASP.NET team provides an extra convention set on top of MVC 6, called “Web API Compatibility Shim”, which can be enabled to make the process of migrating from Web API 2 a bit easier. Let’s explore what’s in there.

Introduction

If you create a new MVC 6 project from the default starter template, it will contain the following code in the Startup class, under ConfigureServices method:

// Uncomment the following line to add Web API services which makes it easier to port Web API 2 controllers.
 // You need to add Microsoft.AspNet.Mvc.WebApiCompatShim package to project.json
 // services.AddWebApiConventions();

This pretty much explains it all – the Compatibility Shim is included in an external package, Microsoft.AspNet.Mvc.WebApiCompatShim and by default is switched off for new MVC projects. Once added and enabled, you can also have a look at the UseMvc method, under Configure. This is where central Web API routes can be defined:

app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller}/{action}/{id?}",
                defaults: new { controller = "Home", action = "Index" });

            // Uncomment the following line to add a route for porting Web API 2 controllers.
            // routes.MapWebApiRoute("DefaultApi", "api/{controller}/{id?}");
        });

Interestingly, Web API Compatibility Shim will store all Web API related routes under a dedicated, predefined area, called simply “api”.

This is all that’s needed to get you started.

Inheriting from ApiController

Since the base class for Web API controllers was not Controller but ApiController, the shim introduces a type of the same name into MVC 6.

While it is obviously not 100% identical to the ApiController from Web API, it contains the majority of the public properties and methods that you might have gotten used to – the Request property, the User property or a bunch of IHttpActionResult helpers.

You can explore it in detail here on GitHub.
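
As a rough sketch of what that enables (the EchoController below is hypothetical, it assumes the shim package is referenced with AddWebApiConventions() enabled and relies on the DefaultApi route shown earlier, and the using for the shim’s ApiController namespace is omitted):

using System.Net;
using System.Net.Http;

// Responds to GET api/echo under the "DefaultApi" route.
public class EchoController : ApiController
{
    public HttpResponseMessage Get()
    {
        // Request is the familiar HttpRequestMessage, courtesy of the shim's ApiController;
        // returning HttpResponseMessage works thanks to the formatter described in the next section.
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("You requested: " + Request.RequestUri)
        };
    }
}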

Returning HttpResponseMessage

The shim introduces the ability to work with HttpResponseMessage in MVC 6 projects. How is this achieved? First of all, the Microsoft.AspNet.WebApi.Client package is referenced, and that brings in the familiar types – HttpResponseMessage and HttpRequestMessage.

On top of that, an extra formatter (remember, we discussed MVC 6 formatters here before) is injected into your application – HttpResponseMessageOutputFormatter. This allows you to return HttpResponseMessage from your actions, just like you were used to doing in Web API projects!

How does it work under the hood? Remember, in Web API, returning an instance of HttpResponseMessage bypassed content negotiation and simply forwarded the instance all the way to the hosting layer, which was responsible to convert it to a response that was relevant for a given host (HttpResponse under System.Web, OWIN dictionary under OWIN and so on).

In the case of MVC 6, the new formatter will grab your HttpResponseMessage and copy its headers and contents onto the Microsoft.AspNet.Http.HttpResponse which is the new abstraction for HTTP response in ASP.NET 5.

As a result, an action such as the one shown below is possible in MVC 6, and as a consequence it should be much simpler to migrate your Web API 2 projects.

public HttpResponseMessage Post()
{
    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

Binding HttpRequestMessage

In Web API it was possible to bind HttpRequestMessage in your actions. For example this was easily doable:

[Route("test/{id:int}")]
    public string Get(int id, HttpRequestMessage req)
    {
        return id + " " + req.RequestUri;
    }

    [Route("testA")]
    public async Task<TestItem> Post(HttpRequestMessage req)
    {
        return await req.Content.ReadAsAsync<TestItem>();
    }

The shim introduces an HttpRequestMessageModelBinder which allows the same thing to be done under MVC 6. As a result, if you relied on HttpRequestMessage binding in Web API, your code will migrate to MVC 6 fine.

How does it work? The shim will use an intermediary type, HttpRequestMessageFeature, to create an instance of HttpRequestMessage from the ASP.NET 5 HttpContext.

HttpRequestMessage extensions

Since it was very common in the Web API world to use HttpResponseMessage as an action return type, there was a need for a mechanism that allowed easy creation of its instances. This was typically achieved by using the extension methods on the HttpRequestMessage, as they would perform content negotiation for you.

For example:

public HttpResponseMessage Post(Item item)
{
    return Request.CreateResponse(HttpStatusCode.NoContent, item);
}

The Web API Compatibility Shim introduces these extension methods into MVC 6 as well. Mind you, these extension methods are hanging off the HttpRequestMessage – so to use them you will have to inherit from the ApiController, in order to have the Request property available for you on the base class in the first place.

There are a number of overloaded versions of the CreateResponse method (just like there were in Web API), as well as the CreateErrorResponse method. You can explore them in detail here on GitHub.

HttpError

If you use/used the CreateErrorResponse method mentioned above, you will end up relying on the HttpError class which is another ghost of the Web API past rejuvenated by the compatibility shim.

HttpError was traditionally used by Web API to serve up error information to the client in a (kind of) standardized way. It contained properties such as ModelState, MessageDetail or StackTrace.

It was used not just by the CreateErrorResponse extension method but also by a bunch of IHttpActionResults – InvalidModelStateResult, ExceptionResult and BadRequestErrorMessageResult. As a result, HttpError is back to facilitate all of these types.

More Web API-style parameter binding

A while ago, Badrinarayanan had a nice post explaining the differences between parameter binding in MVC 6 and Web API 2. In short, the approach in MVC 6 is (and that’s understandable) much more like MVC 5, and those of us used to the Web API style of binding could easily end up with lots of problems (mainly hidden problems that wouldn’t show up at compile time, which is even worse). Even the official tutorials from Microsoft show that, for example, binding from the body of the request, which was one of the cornerstones of Web API parameter binding, needs to be explicit now (see the action CreateTodoItem in that example) – through the use of a dedicated attribute.

Web API Compatibility Shim attempts to close this wide gap between MVC 6 and Web API 2 style of parameter binding.

The class called WebApiParameterConventionsApplicationModelConvention introduces the model of binding familiar to the Web API developers:

  • simple types are bound from the URI
  • complex types are bound from the body

Additionally action overloading was different between MVC and Web API (for example, Web API parameters are all required by default, whereas MVC ones aren’t). This action resolving mechanism is also taken care of by the compatibility shim.

Finally, FromUriAttribute does not exist in MVC 6 (due to the nature of parameter binding in MVC). This type is, however, introduced again by the compatibility shim.
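
Put together, here is a small sketch of what the shim’s conventions give you (the Item type and ItemsController are hypothetical, and the DefaultApi route from earlier is assumed): the simple id parameter binds from the URI and the complex item parameter binds from the body, with no [FromBody] or [FromUri] attributes needed:

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ItemsController : ApiController
{
    // PUT api/items/5 - "id" is bound from the URI, "item" is deserialized from the request body
    public Item Put(int id, Item item)
    {
        item.Id = id;
        return item;
    }
}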

Support for HttpResponseException

In Web API, it was common to throw HttpResponseException to control the flow of your action and short circuit a relevant HTTP response to the client. For example:

public Item Get(int id)
{
    var item = _repo.FindById(id);
    if (item == null) throw new HttpResponseException(HttpStatusCode.NotFound);
    return item;
}

HttpResponseException is re-introduced into MVC 6 by the compatibility shim, and allows you to use the code shown above in the new world.

How does it work? Well, the shim introduces an extra filter, HttpResponseExceptionActionFilter, that catches all exceptions from your action. If the exception happens to be of type HttpResponseException, it will be marked as handled and the Result on the ActionExecutedContext will be set to an ObjectResult based on the contents of that exception. If the exception is of a different type, the filter just lets it pass through to be handled elsewhere.

Support for DefaultContentNegotiator

Web API content negotiation has been discussed quite widely on this blog before. The Compatibility Shim introduces the concept of the IContentNegotiator into MVC 6 to facilitate its own extension methods, such as the already discussed CreateResponse or CreateErrorResponse.

Additionally, the IContentNegotiator is registered as a service so you can obtain an instance of it using a simple call off the new ASP.NET 5 HttpContext:

var contentNegotiator = context.RequestServices.GetRequiredService<IContentNegotiator>();

This makes it relatively easy to port pieces of code where you’d deal with the negotiator manually (such as the example below), as you’d only need to change the way the negotiator instance is obtained.

[Route("items/{id:int}")]
    public HttpResponseMessage Get(int id)
    {
        var item = new Item
        {
            Id = id,
            Name = "I'm manually content negotiatied!"
        };
        var negotiator = Configuration.Services.GetContentNegotiator();
        var result = negotiator.Negotiate(typeof(Item), Request, Configuration.Formatters);

        var bestMatchFormatter = result.Formatter;
        var mediaType = result.MediaType.MediaType;

        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new ObjectContent<Item>(item, bestMatchFormatter, mediaType)
        };
    }

Extra action results

The Web API Compatibility Shim also introduces a set of action results that you might be used to from Web API as IHttpActionResults. Those are currently implemented as ObjectResults.

Here is the list:
BadRequestErrorMessageResult
ConflictResult
ExceptionResult
InternalServerErrorResult
InvalidModelStateResult
NegotiatedContentResult
OkNegotiatedContentResult
OkResult
ResponseMessageResult

Regardless of how you used them in your Web API project – newed up directly in the action, or returned from a method on the base ApiController – your code should now be much more straightforward to port to MVC 6 (or at least to get it to a stage where it actually compiles under MVC 6).

Summary

While the Web API Compatibility Shim will not solve all of the problems you might encounter when trying to port a Web API 2 project to MVC 6, it will at least make your life much easier. The majority of the high level concepts – the code used in your actions and controllers – should migrate almost 1:1, and the only things left to tackle would be the deep pipeline customizations – things such as custom routing conventions, customizations to action or controller selectors, customizations to tracers or error handlers and so on.

Either way, enabling the shim is a must to even get started with thinking about migrating a Web API project to MVC 6.


Ugo Lattanzi: Speed up WebAPI on Microsoft Azure

One of my favorite features of ASP.NET WebAPI is the opportunity to run your code outside Internet Information Services (IIS). I don’t have anything against IIS; in fact my thought matches this tweet:

we fix one bug and open seven new one (unnamed Microsoft employee on System.Web)

But System.Web is really a problem and, in some cases, the IIS pipeline is too complicated for a simple REST call.

Another important thing I like is cloud computing, and Microsoft Azure in this case. In fact, if you want to run your APIs outside IIS and you have to scale on Microsoft Azure, this article could be helpful.

Azure offers different ways to host your APIs and scale them. The most common solutions are WebSites or Cloud Services.

Unfortunately we can’t use Azure WebSites because everything there runs on IIS (more info here), so we have to use Cloud Services. The question here is: Web Role or Worker Role?

The main difference between a Web Role and a Worker Role is that the first one runs on IIS, the domain is configured on the web server and port 80 is opened by default; the second one is a process (an .exe file, to be clear) that runs in a “closed” environment.

To remain consistent with what is written above, we have to use the Worker Role instead of the Web Role, so let’s start creating it following the steps below:

Now that the Azure project and Worker Role project are ready, it’s important to open port 80 on the worker role (remember that by default the worker role is a closed environment).

Finally we have the environment ready; it’s time to install a few WebAPI packages and write some code.

PM> Install-Package Microsoft.AspNet.WebApi.OwinSelfHost

Now add an OWIN startup class

and finally configure WebAPI Routing and its OWIN Middleware

using System.Web.Http;
using DemoWorkerRole;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (Startup))]

namespace DemoWorkerRole
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();

            // Routing
            config.Routes.MapHttpRoute(
                "Default",
                "api/{controller}/{id}",
                new {id = RouteParameter.Optional});

            //Configure WebAPI
            app.UseWebApi(config);
        }
    }
}

and create a demo controller

using System.Web.Http;

namespace DemoWorkerRole.APIs
{
    public class DemoController : ApiController
    {
        public string Get(string id)
        {
            return string.Format("The parameter value is {0}", id);
        }
    }
}

Till now nothing special; the app is ready and we just have to configure the worker role, which is the WorkerRole.cs file created by Visual Studio.

What we have to do here is read the configuration from Azure (we may have to map a custom domain, for example) and start the web server.

To do that, first add the domain on the cloud service configuration following the steps below:

finally the worker role:

using System;
using System.Diagnostics;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Owin.Hosting;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace DemoWorkerRole
{
    public class WorkerRole : RoleEntryPoint
    {
        private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
        private readonly ManualResetEvent runCompleteEvent = new ManualResetEvent(false);

        private IDisposable app;

        public override void Run()
        {
            Trace.TraceInformation("WorkerRole is running");

            try
            {
                RunAsync(cancellationTokenSource.Token).Wait();
            }
            finally
            {
                runCompleteEvent.Set();
            }
        }

        public override bool OnStart()
        {
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            string baseUri = String.Format("{0}://{1}:{2}", RoleEnvironment.GetConfigurationSettingValue("protocol"),
                RoleEnvironment.GetConfigurationSettingValue("domain"),
                RoleEnvironment.GetConfigurationSettingValue("port"));

            Trace.TraceInformation(String.Format("Starting OWIN at {0}", baseUri), "Information");

            try
            {
                app = WebApp.Start<Startup>(new StartOptions(url: baseUri));
            }
            catch (Exception e)
            {
                Trace.TraceError(e.ToString());
                throw;
            }

            bool result = base.OnStart();

            Trace.TraceInformation("WorkerRole has been started");

            return result;
        }

        public override void OnStop()
        {
            Trace.TraceInformation("WorkerRole is stopping");

            cancellationTokenSource.Cancel();
            runCompleteEvent.WaitOne();

            if (app != null)
            {
                app.Dispose();
            }

            base.OnStop();

            Trace.TraceInformation("WorkerRole has stopped");
        }

        private async Task RunAsync(CancellationToken cancellationToken)
        {
            // TODO: Replace the following with your own logic.
            while (!cancellationToken.IsCancellationRequested)
            {
                //Trace.TraceInformation("Working");
                await Task.Delay(1000);
            }
        }
    }
}

We are almost done; the last step is to configure the right execution context in the ServiceDefinition.csdef:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="imperugo.demo.azure.webapi" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-06.2.4">
    <WorkerRole name="DemoWorkerRole" vmsize="Small">
        <Runtime executionContext="elevated" />
        <Imports>
            <Import moduleName="Diagnostics" />
        </Imports>
        <Endpoints>
            <InputEndpoint name="Http" protocol="http" port="80" localPort="80" />
        </Endpoints>
        <ConfigurationSettings>
            <Setting name="protocol" />
            <Setting name="domain" />
            <Setting name="port" />
        </ConfigurationSettings>
    </WorkerRole>
</ServiceDefinition>

The important part here is the Runtime node: we are using HttpListener to read incoming messages from the web, and that requires elevated privileges.

Now we are up & running using WebAPi hosted on a Cloud Service without using IIS.

The demo code is available here.

Have fun.


Pedro Félix: JWT and JOSE specifications approved for publication as RFCs

It seems the JSON Web Token (JWT) specs are finally ready to become RFCs. I wrote about security tokens in the past: it was 2008, XML, SAML and WS-Security were still hot subjects and JWT didn’t exist yet. The more recent “Designing Evolvable Web APIs with ASP.NET” book already includes a discussion of JWT in its security chapter. However, I think this announcement deserves a few more words and a colorful diagram.

A security token is a data structure that holds security related information during the communication between two parties. For instance, in a distributed authentication scenario a security token may be used to transport the identity claims, asserted by the identity provider, to the consuming relying party.

As a transport container, the security token structure must provide important security properties:

  • Integrity – the consuming party should be able to detect any modifications to the token while in transit between the two parties. This property is usually mandatory, because the token information would be of little use without it
  • Confidentiality – only the authorized receiver should be able to access the contained information. This property isn’t required in all scenarios.

Kerberos tickets, SAML assertions and JSON Web Tokens are all examples of security tokens. Given the available prior art, namely SAML assertions, one may ask what’s the motivation for yet another security token format. JWT tokens were specifically designed to be more compact than the alternatives and also to be URL-safe by default. These two properties are very important for modern usage scenarios (e.g. the OpenID Connect protocol), where tokens are transported in URI query strings and HTTP headers. Also, JWT tokens use the JavaScript Object Notation (JSON) standard, which seems to be the data interchange format du jour for the Web.

The following diagram presents an example of an encoded token, the contained information and how it relates to the token issuer, the token recipient and the token subject.

(diagram: an encoded JWT and its decoded header, payload and signature)

A JWT is composed of multiple base64url encoded parts, separated by the ‘.’ character. The first part is the header and is composed of a single JSON object. In the example, the object’s properties, also called claims, are:

  • "typ":"JWT" – the token type.
  • "alg":"HS256" – the token protection algorithm, which in this case is only symmetric signature (i.e. message authentication code) using HMAC-SHA-256.

The second part is the payload and is composed of the claim set asserted by the issuer. In the example they are:

  • "iss":"https://issuer.webapibook.net" (issuer) – the issuer identifier.
  • "aud":"https://example.net" (audience) – the intended recipient.
  • "nbf":1376571701 (not before).
  • "exp":1376572001 (expires).
  • "sub":"alice@webapibook.net" (subject) – the claims subject (e.g. the authenticated user).
  • "email":"alice@webapibook.net" (email) – the subject’s email.
  • "name":"Alice" (name) – the subject’s name.

The first five claims (iss to sub) have their syntax and semantics defined by the JWT spec. The remaining two (email and name) are defined by other specs such as OpenID Connect, which rely on the JWT spec.

Finally, the last part is the token signature produced using the HMAC-SHA-256 algorithm. In this example, the token protection only includes integrity verification. However, it is possible to also have confidentiality by using encryption techniques.
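
As a rough illustration of this structure – a minimal sketch using only BCL types rather than one of the JOSE libraries, with the token string and key as placeholder inputs – the three parts can be split apart and the HMAC-SHA-256 signature recomputed over the first two:

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class JwtSketch
{
    static byte[] Base64UrlDecode(string input)
    {
        //base64url uses '-' and '_' and drops the padding, so restore both before decoding
        var s = input.Replace('-', '+').Replace('_', '/');
        switch (s.Length % 4)
        {
            case 2: s += "=="; break;
            case 3: s += "="; break;
        }
        return Convert.FromBase64String(s);
    }

    public static bool VerifyHs256(string token, byte[] key)
    {
        var parts = token.Split('.');   //header.payload.signature
        if (parts.Length != 3) return false;

        var header = Encoding.UTF8.GetString(Base64UrlDecode(parts[0]));   //JSON object with "typ" and "alg"
        var payload = Encoding.UTF8.GetString(Base64UrlDecode(parts[1]));  //JSON claim set (iss, aud, nbf, exp, sub, ...)
        var signature = Base64UrlDecode(parts[2]);

        //the signing input is the first two encoded parts joined by '.'
        var signingInput = Encoding.ASCII.GetBytes(parts[0] + "." + parts[1]);
        using (var hmac = new HMACSHA256(key))
        {
            return hmac.ComputeHash(signingInput).SequenceEqual(signature);
        }
    }
}

In a real application you would of course also validate claims such as exp, nbf and aud after verifying the signature; the JOSE specs and existing libraries cover those details.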

The signature and encryption procedures, as well as the available algorithms and the ways to represent key information (e.g. public keys and key metadata), are defined in a set of auxiliary specs produced by the JavaScript Object Signing and Encryption (JOSE) IETF working group.

Finally, a reference to the excellent JWT debugger and library list, made available by Auth0.



Pedro Félix: Recollections on 2014 – the soul of a new book

Last March 11, while waiting for the subway to head home, I received an email from our O’Reilly editor telling us that “Designing Evolvable Web APIs with ASP.NET” had finally gone to print. More than 2 years had passed on a journey that started with an email from Pablo, asking me if I was interested in co-authoring a book on ASP.NET Web API.

(image: “Designing Evolvable Web APIs with ASP.NET” book cover)

“Designing Evolvable Web APIs with ASP.NET” is the result of the combined knowledge, experience and passion of five authors (Darrel, Glenn, Howard, Pablo and me), with different backgrounds but a common interest for the Web, its architecture and possibilities.

Writing a book with five authors, living on three continents and in four time zones, is a challenging endeavor. However, it is also an example of what can be accomplished with the cooperation technologies that we currently have available. The book was mostly written using Asciidoc, a textual format similar to Markdown but with added features. A private Git repo associated with a build pipeline was used to share the book source among the authors and create the derived artifacts, such as the PDF and the HTML versions. A GitHub organization was also used to share all the book’s code, which is publicly available at https://github.com/webapibook. For the many conversations and meetings, we used mostly Skype and Google Hangout.

One of my recollections of reading the “C++ Programming Language” book, by B. Stroustrup, almost 20 years ago, is the following quote attributed to Kristen Nygaard: “Programming is understanding”. For me, writing is also understanding. Many afternoons and evenings were spent trying to better grasp sparse and incomplete ideas by turning them into meaningful sequences of sentences and paragraphs. The rewarding feeling of finally being able to write an understandable paragraph made all those struggling hours worthwhile. I really hope the readers will enjoy reading them as much as I did writing them. There were some defeats also. For them, I apologize.

“Designing Evolvable Web APIs with ASP.NET” aims to provide the reader with the knowledge and skills required to build Web APIs that can adapt to change over time. It is divided into three parts.
The first one is composed of four chapters and contains an introduction to the Web architecture, Web APIs and related specs, such as HTTP. It also contains an introduction to the ASP.NET Web API programming model and runtime architecture.

The second and core part of the book addresses the design, implementation and use of an evolvable Web API, based on a concrete example: issue tracking. It contains chapters on problem domain analysis, on media type selection and design, on building and evolving the server and on creating clients.

The third and last part is a detailed description of the ASP.NET Web API technology, addressing subjects such as the HTTP programming model, hosting and OWIN, controllers and routing, client-side programming, model binding and media type formatting, and also testing. It also includes two chapters about Web API security, with an emphasis on the authentication and authorization aspects, namely the OAuth 2.0 Authorization framework.

“Designing Evolvable Web APIs with ASP.NET” is available for purchase at the O’Reilly shop. A late draft is also freely available at O’Reilly Atlas. Also, feel free to drop by our discussion group.

(the title for this post was inspired by the “The Soul of a New Machine” book, authored by Tracy Kidder)



Pete Smith: Functional web synergy with F# and OWIN

Before we get started I’d just like to mention that this post is part of the truly excellent F# Advent Calendar 2014 which is a fantastic initiative organised by Sergey Tihon, so big thanks to Sergey and the rest of the F# community as well as wishing you all a merry christmas!

Introduction

Using F# to build web applications is nothing new, we have purpose built F# frameworks like Freya popping up and excellent posts like this one by Mark Seemann. It’s also fairly easy to pick up other .NET frameworks that weren’t designed specifically for F# and build very solid applications.

With that in mind, I’m not just going to write another post about how to build web applications with F#.

Instead, I’d like to introduce the F# community to a whole new way of thinking about web applications, one that draws inspiration from a number of functional programming concepts – primarily pipelining and function composition – to provide a solid base onto which we can build our web applications in F#. This approach is currently known as Graph Based Routing.

Some background

So first off – I should point out that I’m not actually an F# guy; in fact I’m pretty new to the language in general so this post is also somewhat of a learning exercise for me. I often find the best way to get acquainted with things is to dive right in, so please feel free to give me pointers in the comments.

Graph based routing itself has been around for a while, in the form of a library called Superscribe (written in C#). I’m not going to go into detail about its features; these are language agnostic, and covered by the website and some previous posts.

What I will say is that Superscribe is not a full blown web framework but actually a routing library. In fact, that’s somewhat of an oversimplification… in reality this library takes care of everything between URL and handler. It turns out that routing, content negotiation and some way of invoking a handler is actually all you need to get started building web applications.

Simplicity rules

This simplicity is a key tenet of graph based routing – keeping things minimal helps us build web applications that respond very quickly indeed as there is simply no extra processing going on. If you’re building a very content-heavy application then it’s probably not the right choice, but for APIs it’s incredibly performant.

Lets have a look at an example application using Superscribe in F#:

Superscribe defaults to a text/html response and will try its best to deal with whatever object you return from your handler. You can also do all the usual things like specify custom media type serialisers, return status codes etc.

The key part to focus on here is the define.Route statement, which allows us to directly assign a handler to a particular route – in this case /hello/world and /hello/fsharp. This is kinda cool, but there’s a lot more going on here than meets the eye.

Functions and graph based routing

Graph based routing is so named because it stores route definitions in – you guessed it – a graph structure. Traditional route matching tends to focus on tables of strings and pattern matching based on the entire URL, but Superscribe is different.

In the example above the URL /hello/world gets broken down into its respective segments. Each segment is represented by a node in the graph, with the next possible matches as its children. Subsequent definitions are also broken down and intelligently added into the graph, so in this instance we end up with something like this:

(diagram: graph of nodes built from the /hello/world and /hello/fsharp routes)

Route matching is performed by walking the graph and checking for matches – it’s essentially a state machine. This is great because we only need to check for the segments that we expect; we don’t waste time churning through a large route table.

But here’s where it gets interesting. Nodes in graph based routing are comprised of three functions:

  • Activation function – returns a boolean indicating if the node is a match for the current segment
  • Action function – executed when a match has been found, so we can do things like parameter capture
  • Final function – executed when matching finishes on a particular node, i.e the handler

All of these functions can execute absolutely any arbitrary code that we like. With this model we can do some really interesting things such as conditional route matching based on the time of day, a debug flag or even based on live information from a load balancer. Can your pattern matcher do that!?
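
As a rough conceptual sketch (in C#, and deliberately not Superscribe’s actual types), a node carrying those three functions plus a naive matcher that walks the graph might look something like this:

using System;
using System.Collections.Generic;

//illustrative only – a node pairs the three functions described above
public class RouteNode
{
    public Func<string, bool> Activation;                        //does this node match the current segment?
    public Action<string, IDictionary<string, object>> Action;   //e.g. capture a parameter value
    public Func<object> Final;                                    //the handler, run when matching stops here
    public List<RouteNode> Children = new List<RouteNode>();
}

public static class Matcher
{
    public static object Match(RouteNode root, string[] segments, IDictionary<string, object> captured)
    {
        var current = root;
        foreach (var segment in segments)
        {
            //walk the children looking for the first node whose activation function accepts the segment
            var next = current.Children.Find(n => n.Activation(segment));
            if (next == null) return null;   //no sibling matched – the host turns this into a 404

            if (next.Action != null) next.Action(segment, captured);
            current = next;
        }
        return current.Final != null ? current.Final() : null;
    }
}

Because activation, action and final are just delegates, matching on a feature flag, the time of day or a load balancer signal is simply a matter of supplying a different function.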

Efficiency, composability and extensibility

Graph based routing allows us to build complex web applications that are composed of very simple units. A good approach is to use action functions to compose a pipeline of functions which get executed synchronously once route matching is complete (is this beginning to sound familiar?), but it can also be used for processing segments on the fly, for example for parameter capture.

Here’s another example that shows this compositional nature in action. We’re going to define and use a new type of node that will match and capture certain strings. Because Superscribe relies on the C# dynamic keyword, I’ve used the ? operator provided by FSharp.Dynamic.

In the previous example we relied on the library to build a graph for us given a string – here we’re being explicit and constructing our own using the / operator (neat eh?). Our custom node will only activate when the segment starts with the letter “p”, and if it does then it will store that parameter away in a dynamic dictionary so we can use it later.

If the engine doesn’t match on a node, it’ll continue through its siblings looking for a match there instead. In our case, anything that doesn’t start with “p” will get picked up by the second route – the String parameter node acts as a catch-all:

hello fsharp
hello pete

Pipelines and OWIN

This gets even more exciting when we bring OWIN into the mix. OWIN allows us to build web applications out of multiple pieces of middleware, distinct orthogonal units that run together in a pipeline.

Usually these are quite linear, but with graph based routing and its ability to execute arbitrary code, we can build our pipeline on the fly. In this final example, we’re using two pieces of sample middleware to control access to parts of our web application:

Superscribe has support for this kind of middleware pipelining built in via the Pipeline method. In the code above we’ve specified that anything under the admin/ route will invoke the RequireHttps middleware, and if we’re doing anything other than requesting a token then we’ll need to provide the correct auth header. Behind the syntactic sugar, Superscribe is simply doing everything using the three types of function that we looked at earlier.

This example is not going to win any awards for security practices but it’s a pretty powerful demonstration of how these functional-inspired practices of composition and pipelining can help us build some really flexible and maintainable web applications. It turns out that there really is a lot more synergy between F# and the web than most people realise!

Summary

Some aspects still leave a little to be desired from the functional perspective – our functions aren’t exactly pure for example. But this is just the beginning of the relationship between F# and Superscribe. Most of the examples in the post have been ported straight from C# and so don’t really make any use of F# language features.

I’m really excited about what can be achieved when we start bringing things like monads and discriminated unions into the mix, it should make for some super-terse syntax. I’d love to hear some thoughts on this from the community… I’m sure we can do better than previous attempts at monadic url routing at any rate!

I hope you enjoyed today’s advent calendar… special thanks go to Scott Wlaschin for all his technical feedback. I deliberately kept the specifics light here so as not to detract from the message of the post, but you can read more about Superscribe and graph based routing on the Superscribe website.

Merry christmas to you all!
Pete

References

http://owin.org/
http://sergeytihon.wordpress.com/2014/11/24/f-advent-calendar-in-english-2014/
http://about.me/sergey.tihon
http://superscribe.org/
http://superscribe.org/graphbasedrouting.html
https://github.com/fsprojects/FSharp.Dynamic
https://gist.github.com/unknownexception/6035260
https://github.com/koistya/fsharp-owin-sample
https://github.com/freya-fs/freya
http://blog.ploeh.dk/2013/08/23/how-to-create-a-pure-f-aspnet-web-api-project/
http://wizardsofsmart.net/samples/working-with-non-compliant-owin-middleware/
http://happstack.com/page/view-page-slug/16/comparison-of-4-approaches-to-implementing-url-routing-combinators-including-the-free-and-operational-monads
https://twitter.com/scottwlaschin



Darrel Miller: Where, oh where, does the API key go?

Yesterday on twitter I made a comment criticizing the practice of putting an API key in a query string parameter.  I was surprised by the amount of attention it got and there were a number of responses questioning the significance of my objection.  Rather than try and reply in 140 character chunks, I decided a blog post was in order.

Security

Most of the comments were security related,

It is true that whether the API key is put in the URL or in the Authorization header, it is going to be sent over the wire in clear text.  If security is critical then HTTPS is going to be necessary, and both approaches would be equivalent… over the wire. 

The security problem is not really when the message is going over the wire, it is what happens to it on the client and server.  We developers like writing things out to log files and URLs are full of useful information for debugging.

My friend Pedro pointed me to an article that demonstrates how API keys in URLs can become a major problem.

I'm not suggesting that by putting the API key into an authorization header, all the problems go away.  It is just a matter of reducing the chances of sensitive information being stored in unsecured places and then being misused.

It reminds me of the choice we make to lock our car doors.  Anyone who has locked their keys in the car knows how easy it is for someone with the right tools to get into a locked car. However, locking your doors does significantly reduce the chance of theft.

Andrew makes the suggestion to use the username:password convention that was introduced in RFC 1738 back in 1994.

When RFC 1738 was revised in 1998 and became RFC 2396, the following text was added:

Some URL schemes use the format "user:password" in the userinfo field. This practice is NOT RECOMMENDED

In the latest revision of the URI specification, RFC3986, they went further,

Use of the format "user:password" in the userinfo field is deprecated.

The reasons for deprecating this feature are very much applicable to the use of API keys in the query string.  It is unfortunate that we aren't quicker at learning from the mistakes of those who came before us.

Forget the security issue

When I wrote the tweet, I really wasn't complaining about having an API key in the URL for security reasons.  For me, there are a number of benefits of using the authorization header.

Consistency

One of the constraints of REST is called the "Uniform Interface".  A benefit of this constraint is that when you start working with a new API, there should be consistency in the way it works.  This helps to reduce the learning curve and it makes it easier to build re-usable code that depends on this consistency.

Many HTTP client libraries have the ability to set default headers that will automatically be sent with every request.  It's one line of code and you get to forget about API keys and focus on actually using the API.
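
For example, with System.Net.Http it can look something like this (the scheme name and key value are obviously placeholders for whatever the API expects):

using System.Net.Http;
using System.Net.Http.Headers;

var client = new HttpClient();
//set once – every request issued by this client instance now carries the API key
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("apikey", "0123456789abcdef");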

When assigning an API key in a URL, you first need to know if the parameter is key, apikey, api-key or api_key. Then you need to modify the URL that you want to call to add the API key. Futzing around with strings to add a query parameter to an existing URL is full of annoying little gotchas.  It is not so hard to do on a case by case basis, but trying to write generic code that will work for any URI is just painful.
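
Just as a sketch of that futzing (api_key is an arbitrary pick from the naming variations above), even the simplest generic helper already has to care about whether a query string exists and about encoding the value:

static string AppendApiKey(string url, string apiKey)
{
    //does the URL already have a query string?
    var separator = url.Contains("?") ? "&" : "?";
    return url + separator + "api_key=" + Uri.EscapeDataString(apiKey);
}

And that still ignores fragments, trailing separators and parameters that are already present.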

I'm quite sure that string manipulation of URLs is one of the primary reasons API providers create API specific client libraries to insulate client developers from these irritants.

Hypermedia

Another REST constraint is the hypermedia constraint.  I realize that hypermedia usage in the API world is still very exceptional, but its popularity is growing.  Having to define a URI template for every embedded link, just to add an API key, would be really annoying.

Caching

Believe it or not, caching is another REST constraint.  HTTP caches use the URL as part of the primary cache key, even the query string parameters.  If you add an API key into the URL you make it difficult to take advantage of HTTP caches for resources that are common to all users.  A cache would end up keeping a duplicate copy of the resource representation for every user.  Not only is this a waste of cache space, but it also reduces the cache hit ratio massively.

Self-Descriptive

It is interesting to note that if you use an Authorization header, HTTP caches have special logic that will prevent caching by public caches unless you specifically allow it using a cache-control directive.  When the API key is buried in a query string parameter, intermediaries have no idea that the representation has come from a protected resource and therefore don't realize that caching it might be a bad idea.  Using standard HTTP features, the way they were defined, allows intermediary components to perform useful functions because they can have a limited understanding of the message.

A Thousand paper cuts

I believe that the usability of an API is hugely impacted by many small factors that, in isolation, seem fairly inconsequential.  It is the combined effect of these small issues that is significant.  There is also the impact of change.  What doesn't matter today might be very significant sometime in the future. 

The HTTP specifications define a set of guidelines for building distributed applications that have been proven to work, in real running applications.  Disregarding the advice they contain is throwing money down the drain.

A final comment that I would like to address came from Bret,

My original objection was that using the Authorization header was not an option in the API I was trying to use.  I understand why some users prefer to use query string parameters.  Providing an easy path to get users working with your product is critical, and if providing them with a query string parameter to send the auth key helps that process then do it.  However, I also believe part of the role of an API provider is to help educate API consumers on the best way to work with an API to get the best results over the long run.  Give them an easy way, and when they are ready, educate them on the better way.

Hopefully, this blog post has provided some concrete reasons as to why using an Authorization header is a better solution. 


Radenko Zec: Fixing issue with HTTPClient and ProtocolViolationException: The value of the date string in the header is invalid

When you parse a large number of sites / RSS feeds using the Microsoft HttpClient or WebClient you can get an exception:
“ProtocolViolationException: The value of the date string in the header is invalid”

This error usually occurs when you try to access the LastModified date in a response.

var response = (HttpWebResponse)httpRequest.GetResponse();
var lastModified = response.LastModified;

The problem usually arises when the Last-Modified date is sent in an incorrect format that ignores the HTTP specs.

Then the Microsoft code inside WebClient throws this exception.

This can be very painful but there is a nice workaround for this.

Instead of accessing the strongly typed LastModified property on the response, get the value by reading it from the response headers:

 var resultedLastModified = response.Headers["Last-Modified"];

After this you’ll need to check for nulls and try to parse it as a DateTime, but you will not get the ugly “The value of the date string in the header is invalid” exception.
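
A minimal sketch of that defensive parsing might look like this (variable names are just placeholders):

using System.Globalization;

var rawLastModified = response.Headers["Last-Modified"];

DateTime lastModified;
if (!string.IsNullOrEmpty(rawLastModified) &&
    DateTime.TryParse(rawLastModified, CultureInfo.InvariantCulture,
                      DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal,
                      out lastModified))
{
    //use lastModified here – a malformed date simply falls through instead of throwing
}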

If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.

 

The post Fixing issue with HTTPClient and ProtocolViolationException: The value of the date string in the header is invalid appeared first on RadenkoZec blog.


Taiseer Joudeh: Secure ASP.NET Web API using API Key Authentication – HMAC Authentication

Web API Security

Recently I was working on securing an ASP.NET Web API HTTP service that will be consumed by a large number of terminal devices installed securely in different physical locations. The main requirement was to authenticate calls originating from those terminal devices to the HTTP service and not worry about the users who are using them. So the first thing that came to my mind was to use one of the OAuth 2.0 flows, the Resource Owner Password Credentials Flow, but this flow doesn’t fit nicely in my case because the bearer access tokens issued should have an expiry time and they are non-revocable by default, so issuing an access token with a very long expiry time (i.e. one year) is not the right way to do it.

After searching for a couple of hours I found out that the right, and maybe a little bit more complex, way to implement this is to use HMAC Authentication (Hash-based Message Authentication Code).

The source code for this tutorial is available on GitHub.

What is HMAC Authentication?

It is a mechanism for calculating a message authentication code using a hash function in combination with a shared secret key between the two parties involved in sending and receiving the data (front-end client and back-end HTTP service). The main use for HMAC is to verify the integrity, authenticity, and the identity of the message sender.

So in simpler words, the server provides the client with a public APP Id and a shared secret key (API Key – shared only between server and client); this process happens only the first time, when the client registers with the server.

After the client and server agree on the API Key, the client creates a unique HMAC (hash) representing the request it sends to the server. It does this by combining the request data – usually the public APP Id, request URI, request content, HTTP method, time stamp, and nonce – and producing a unique hash using the API Key. Then the client sends that hash to the server, along with all the information it was already going to send in the request.

Once the server receives the request along with the hash from the client, it tries to reconstruct the hash using the received request data and the API Key. Once the hash is generated on the server, the server is responsible for comparing the hash sent by the client with the regenerated one; if they match then the server considers this request authentic and processes it.

Flow of using API Key – HMAC Authentication:

Note: First of all the server should provide the client with a public (APP Id) and shared private secret (API Key), the client responsibility is to store the API Key securely and never share it with other parties.

Flow on the client side:

  1. Client should build a string by combining all the data that will be sent, this string contains the following parameters (APP Id, HTTP method, request URI, request time stamp, nonce, and Base 64 string representation of the request pay load).
  2. Note: Request time stamp is calculated using UNIX time (number of seconds since Jan. 1st 1970) to overcome any issues related to a different timezone between client and server. Nonce: is an arbitrary number/string used only once. More about this later.
  3. Client will hash this large string built in the first step using a hash algorithm such as (SHA256) and the API Key assigned to it, the result for this hash is a unique signature for this request.
  4. The signature will be sent in the Authorization header using a custom scheme such as “amx”. The data in the Authorization header will contain the APP Id, request time stamp, and nonce separated by a colon ‘:’. The format for the Authorization header will be like: [Authorization: amx APPId:Signature:Nonce:Timestamp].
  5. Client sends the request as usual along with the data generated in step 3 in the Authorization header.

Flow on the server side:

  1. Server receives all the data included in the request along with the Authorization header.
  2. Server extracts the values (APP Id, Signature, Nonce and Request Time stamp) from the Authorization header.
  3. Server looks up the APP Id in a certain secure repository (DB, configuration file, etc…) to get the API Key for this client.
  4. Assuming the server was able to look up this APP Id in the repository, it is responsible for validating whether this request is a replay request and rejecting it, so it will protect the API from replay attacks. This is why we’ve used a request time stamp along with a nonce generated at the client, and both values have been included in the HMAC signature generation. The server will depend on the nonce to check if it was used before, within certain acceptable bounds, i.e. 5 minutes. More about this later.
  5. Server will rebuild a string containing the same data received in the request by adhering to the same parameters orders and encoding followed in the client application, usually this agreement is done up front between the client application and the back-end service and shared using proper documentation.
  6. Server will hash the string generated in previous step using the same hashing algorithm used by the client (SHA256) and the same API Key obtained from the secure repository for this client.
  7. The result of this hash function (signature) generated at the server will be compared to the signature sent by the client; if they are equal then the server will consider this call authentic and process the request, otherwise it will reject the request and return HTTP status code 401 Unauthorized.

Important note:

  • Client and server should generate the hash (signature) using the same hashing algorithm as well as adhere to the same parameter order; any slight change, including case sensitivity, when implementing the hashing will result in a totally different signature and all requests from the client to the server will get rejected. So be consistent and agree on how to generate the signature up front and in a clear way.
  • This mechanism of authentication can work without TLS (HTTPS), as long as the client is not transferring any confidential data or transmitting the API Key. It is recommended to consume it over TLS, but if you can’t use TLS for any reason you will be fine transmitting the data over HTTP.

Sounds complicated? Right? Let’s jump to the implementation to make this clear.

I’ll start by showing how to generate an APP Id and a strong 256 bit key which will act as our API Key; this usually will be done on the server and provided to the client using a secure mechanism (a secure admin portal). There is a nice post here that explains why generating APP Ids and API Keys is more secure than issuing usernames and passwords.

Then I’ll build a simple console application which will act as the client application; lastly I’ll build an HTTP service using ASP.NET Web API protected with HMAC Authentication using the right filter, “IAuthenticationFilter”, so let’s get started!

The source code for this tutorial is available on GitHub.

Section 1: Generating the Shared Private Key (API Key) and APP Id

As I stated before, this should be done on the server and provided to the client prior to actual use. We’ll use a cryptographically secure random number generator to issue a 256 bit key; the code will be as below:

using (var cryptoProvider = new RNGCryptoServiceProvider())
{
    byte[] secretKeyByteArray = new byte[32]; //256 bit
    cryptoProvider.GetBytes(secretKeyByteArray);
    var APIKey = Convert.ToBase64String(secretKeyByteArray);
}

And for the APP Id you can generate a GUID, so for this tutorial let’s assume that our APPId is: 4d53bce03ec34c0a911182d4c228ee6c, our generated APIKey is: A93reRTUJHsCuQSHR+L3GxqOJyDmQpCgps102ciuabc=, and that our client application has received those 2 pieces of information using a secure channel.

Section 2: Building the Client Application

Step 1: Install NuGet Package
Add a new empty solution named “WebApiHMACAuthentication”, then add a new console application named “HMACAuthentication.Client”, then install the HTTPClient NuGet package below, which helps us to issue HTTP requests.

Install-Package Microsoft.AspNet.WebApi.Client -Version 5.2.2

Step 2: Add POCO Model
We’ll issue an HTTP POST request in order to demonstrate how we can include the request body in the signature, so we’ll add a simple model named “Order”. Add a new class named “Order” and paste the code below:

public class Order
    {
        public int OrderID { get; set; }
        public string CustomerName { get; set; }
        public string ShipperCity { get; set; }
        public Boolean IsShipped { get; set; }
    }

Step 3: Call the back-end API using HTTPClient
Now we’ll use the HTTPClient library installed earlier to issue an HTTP POST request to the API we’ll build in the next section, so open the file “Program.cs” and paste the code below:

static void Main(string[] args)
        {
            RunAsync().Wait();
        }

        static async Task RunAsync()
        {

            Console.WriteLine("Calling the back-end API");

            string apiBaseAddress = "http://localhost:43326/";

            CustomDelegatingHandler customDelegatingHandler = new CustomDelegatingHandler();

            HttpClient client = HttpClientFactory.Create(customDelegatingHandler);

            var order = new Order { OrderID = 10248, CustomerName = "Taiseer Joudeh", ShipperCity = "Amman", IsShipped = true };

            HttpResponseMessage response = await client.PostAsJsonAsync(apiBaseAddress + "api/orders", order);

            if (response.IsSuccessStatusCode)
            {
                string responseString = await response.Content.ReadAsStringAsync();
                Console.WriteLine(responseString);
                Console.WriteLine("HTTP Status: {0}, Reason {1}. Press ENTER to exit", response.StatusCode, response.ReasonPhrase);
            }
            else
            {
                Console.WriteLine("Failed to call the API. HTTP Status: {0}, Reason {1}", response.StatusCode, response.ReasonPhrase);
            }

            Console.ReadLine();
        }

What we implemented here is basic: we’re just issuing an HTTP POST to the end point “/api/orders” including a serialized order object. This end point is protected using HMAC Authentication (more about this later in the post), and if the response status returned is 200 OK, then we print the response returned.

What is worth noting here is that I’m using a custom delegating handler named “CustomDelegatingHandler”. This handler will help us intercept the request before sending it, so we can do the signing process and create the signature there.

Step 4: Implement the HTTPClient Custom Handler
HTTPClient allows us to create a custom message handler which gets created and added to the request message handlers chain. The nice thing here is that this handler will allow us to write our custom logic (the logic needed to build the hash and set it in the Authorization header before firing the request to the back-end API), so in the same file “Program.cs” add a new class named “CustomDelegatingHandler” and paste the code below:

public class CustomDelegatingHandler : DelegatingHandler
        {
            //Obtained from the server earlier, APIKey MUST be stored securely and in App.Config
            private string APPId = "4d53bce03ec34c0a911182d4c228ee6c";
            private string APIKey = "A93reRTUJHsCuQSHR+L3GxqOJyDmQpCgps102ciuabc=";

            protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
            {

                HttpResponseMessage response = null;
                string requestContentBase64String = string.Empty;

                string requestUri = System.Web.HttpUtility.UrlEncode(request.RequestUri.AbsoluteUri.ToLower());

                string requestHttpMethod = request.Method.Method;

                //Calculate UNIX time
                DateTime epochStart = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);
                TimeSpan timeSpan = DateTime.UtcNow - epochStart;
                string requestTimeStamp = Convert.ToUInt64(timeSpan.TotalSeconds).ToString();

                //create random nonce for each request
                string nonce = Guid.NewGuid().ToString("N");

                //Checking if the request contains a body, it will usually be null with HTTP GET and DELETE
                if (request.Content != null)
                {
                    byte[] content = await request.Content.ReadAsByteArrayAsync();
                    MD5 md5 = MD5.Create();
                    //Hashing the request body, any change in request body will result in a different hash, ensuring message integrity
                    byte[] requestContentHash = md5.ComputeHash(content);
                    requestContentBase64String = Convert.ToBase64String(requestContentHash);
                }

                //Creating the raw signature string
                string signatureRawData = String.Format("{0}{1}{2}{3}{4}{5}", APPId, requestHttpMethod, requestUri, requestTimeStamp, nonce, requestContentBase64String);

                var secretKeyByteArray = Convert.FromBase64String(APIKey);

                byte[] signature = Encoding.UTF8.GetBytes(signatureRawData);

                using (HMACSHA256 hmac = new HMACSHA256(secretKeyByteArray))
                {
                    byte[] signatureBytes = hmac.ComputeHash(signature);
                    string requestSignatureBase64String = Convert.ToBase64String(signatureBytes);
                    //Setting the values in the Authorization header using custom scheme (amx)
                    request.Headers.Authorization = new AuthenticationHeaderValue("amx", string.Format("{0}:{1}:{2}:{3}", APPId, requestSignatureBase64String, nonce, requestTimeStamp));
                }

                response = await base.SendAsync(request, cancellationToken);

                return response;
            }
        }

What we’ve implemented above is the following:

  • We’ve hard coded the APP Id and API Key values obtained earlier from the server, usually you need to store those values securely in app.config.
  • We’ve got the full request URI and safely URL encoded it, so in case there are query strings sent with the request they will be safely encoded; we’ve also read the HTTP method used, in our case POST.
  • We’ve calculated the time stamp for the request using UNIX timing (number of seconds since Jan. 1st 1970). This will help us avoid any issues that might happen if the client and the server reside in two different time zones.
  • We’ve generated a random nonce for this request, the client should adhere to this and should send a random string per method call.
  • We’ve checked if the request contains a body (it will contain a body if the request is of type HTTP POST or PUT); if it does, then we MD5 hash the body content and Base64 encode the resulting array. We are doing this to ensure the authenticity of the request and to make sure no one tampered with it during transmission (in case of transmitting it over HTTP).
  • We’ve built the signature raw data by concatenating the parameters (APPId, requestHttpMethod, requestUri, requestTimeStamp, nonce, requestContentBase64String) without any delimiters, this data will get hashed using HMACSHA256 algorithm.
  • Lastly we’ve applied the hashing algorithm using the API Key, base64 encoded the result, combined (APPId:requestSignatureBase64String:nonce:requestTimeStamp) using the ‘:’ colon delimiter, and set this combined string in the Authorization header for the request using a custom scheme named “amx”. Notice that the nonce and time stamp are included in creating the request signature and are also sent as plain text values, so they can be validated on the server to protect our API from replay attacks.

We are done with the client part; now let’s move to building the Web API which will be protected using HMAC Authentication.

Section 3: Building the back-end API

Step 1: Add the Web API Project
Add a new web application project named “HMACAuthentication.WebApi” to our existing solution “WebApiHMACAuthentication”; the template for the API will be as in the image below (Web API core dependency checked), or you can use OWIN as we did in previous tutorials:

(image: Web API project template selection)

Step 2: Add Orders Controller
We’ll add a simple controller named “Orders” with 2 simple HTTP methods, and we’ll also add the same “Order” model we already added in the client application. Add a new class named “OrdersController” and paste the code below; nothing special here, just a basic Web API controller which is not protected and allows anonymous calls (we’ll protect it later in the post).

[RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
        [Route("")]
        public IHttpActionResult Get()
        {
            ClaimsPrincipal principal = Request.GetRequestContext().Principal as ClaimsPrincipal;

            var Name = ClaimsPrincipal.Current.Identity.Name;

            return Ok(Order.CreateOrders());
        }

        [Route("")]
        public IHttpActionResult Post(Order order)
        {
            return Ok(order);
        }

    }

    #region Helpers

    public class Order
    {
        public int OrderID { get; set; }
        public string CustomerName { get; set; }
        public string ShipperCity { get; set; }
        public Boolean IsShipped { get; set; }


        public static List<Order> CreateOrders()
        {
            List<Order> OrderList = new List<Order> 
            {
                new Order {OrderID = 10248, CustomerName = "Taiseer Joudeh", ShipperCity = "Amman", IsShipped = true },
                new Order {OrderID = 10249, CustomerName = "Ahmad Hasan", ShipperCity = "Dubai", IsShipped = false},
                new Order {OrderID = 10250,CustomerName = "Tamer Yaser", ShipperCity = "Jeddah", IsShipped = false },
                new Order {OrderID = 10251,CustomerName = "Lina Majed", ShipperCity = "Abu Dhabi", IsShipped = false},
                new Order {OrderID = 10252,CustomerName = "Yasmeen Rami", ShipperCity = "Kuwait", IsShipped = true}
            };

            return OrderList;
        }
    }

    #endregion

Step 3: Build the HMAC Authentication Filter
We’ll put all the logic responsible for re-generating the signature on the Web API and comparing it with the signature received from the client in an authentication filter. The authentication filter is available in Web API 2 and should be used for any authentication purposes; in our case we will use this filter to write our custom logic which validates the authenticity of the signature received from the client. The nice thing about this filter is that it runs before any other filters, especially the authorization filter. I’ll borrow the image below from a great article about ASP.NET Web API Security Filters by Badrinarayanan Lakshmiraghavan to give you a better understanding of where the authentication filter resides.

(image: ASP.NET Web API security filters pipeline)
Now add a new folder named “Filters”, then add a new class named “HMACAuthenticationAttribute” which inherits from “Attribute” and implements the interface “IAuthenticationFilter”, then paste the code below:

public class HMACAuthenticationAttribute : Attribute, IAuthenticationFilter
    {
        private static Dictionary<string, string> allowedApps = new Dictionary<string, string>();
        private readonly UInt64 requestMaxAgeInSeconds = 300;  //5 mins
        private readonly string authenticationScheme = "amx";

        public HMACAuthenticationAttribute()
        {
            if (allowedApps.Count == 0)
            {
                allowedApps.Add("4d53bce03ec34c0a911182d4c228ee6c", "A93reRTUJHsCuQSHR+L3GxqOJyDmQpCgps102ciuabc=");
            }
        }

        public Task AuthenticateAsync(HttpAuthenticationContext context, CancellationToken cancellationToken)
        {
            var req = context.Request;

            if (req.Headers.Authorization != null && authenticationScheme.Equals(req.Headers.Authorization.Scheme, StringComparison.OrdinalIgnoreCase))
            {
                var rawAuthzHeader = req.Headers.Authorization.Parameter;

                var autherizationHeaderArray = GetAutherizationHeaderValues(rawAuthzHeader);

                if (autherizationHeaderArray != null)
                {
                    var APPId = autherizationHeaderArray[0];
                    var incomingBase64Signature = autherizationHeaderArray[1];
                    var nonce = autherizationHeaderArray[2];
                    var requestTimeStamp = autherizationHeaderArray[3];

                    var isValid = isValidRequest(req, APPId, incomingBase64Signature, nonce, requestTimeStamp);

                    if (isValid.Result)
                    {
                        var currentPrincipal = new GenericPrincipal(new GenericIdentity(APPId), null);
                        context.Principal = currentPrincipal;
                    }
                    else
                    {
                        context.ErrorResult = new UnauthorizedResult(new AuthenticationHeaderValue[0], context.Request);
                    }
                }
                else
                {
                    context.ErrorResult = new UnauthorizedResult(new AuthenticationHeaderValue[0], context.Request);
                }
            }
            else
            {
                context.ErrorResult = new UnauthorizedResult(new AuthenticationHeaderValue[0], context.Request);
            }

            return Task.FromResult(0);
        }

        public Task ChallengeAsync(HttpAuthenticationChallengeContext context, CancellationToken cancellationToken)
        {
            context.Result = new ResultWithChallenge(context.Result);
            return Task.FromResult(0);
        }

        public bool AllowMultiple
        {
            get { return false; }
        }

        private string[] GetAutherizationHeaderValues(string rawAuthzHeader)
        {

            var credArray = rawAuthzHeader.Split(':');

            if (credArray.Length == 4)
            {
                return credArray;
            }
            else
            {
                return null;
            }

        }
}

Basically what we’ve implemented is the following:

  • The class “HMACAuthenticationAttribute” derives from the “Attribute” class so we can use it as a filter attribute over our controllers or HTTP action methods.
  • The constructor for the class currently fills a dictionary named “allowedApps”; this is for the demo only. Usually you would store the APP Id and API Key in a database along with other information about this client.
  • The method “AuthenticateAsync” is used to implement the core authentication logic of validating the incoming signature in the request
  • We make sure that the Authorization header is not empty and that it contains a scheme of type “amx”, then we read the Authorization header value and split its content based on the ‘:’ delimiter we specified earlier in the client.
  • Lastly we are calling method “isValidRequest” where all the magic of reconstructing the signature and comparing it with the incoming signature happens. More about implementing this in step 5.
  • Incase the Authorization header is incorrect or the result of executing method “isValidRequest” returns false, we’ll consider the incoming request as unauthorized and we should return an authentication challenge to the response, this should be implemented in method “ChallengeAsync”, to do so lets implement the next step.

Step 4: Add authentication challenge to the response
To add an authentication challenge to the unauthorized response, copy and paste the code below into the same file "HMACAuthenticationAttribute.cs". Basically we'll add a "WWW-Authenticate" header to the response using our custom "amx" scheme. You can read more about the details of this implementation here.

public class ResultWithChallenge : IHttpActionResult
    {
        private readonly string authenticationScheme = "amx";
        private readonly IHttpActionResult next;

        public ResultWithChallenge(IHttpActionResult next)
        {
            this.next = next;
        }

        public async Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
        {
            var response = await next.ExecuteAsync(cancellationToken);

            if (response.StatusCode == HttpStatusCode.Unauthorized)
            {
                response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue(authenticationScheme));
            }

            return response;
        }
    }

Step 5: Implement the method “isValidRequest”.
The core work of reconstructing the request parameters and generating the signature on the server happens here, so let's add the code first and then I'll describe what this method is responsible for. Open the file "HMACAuthenticationAttribute.cs" again and paste the code below into the class "HMACAuthenticationAttribute":

private async Task<bool> isValidRequest(HttpRequestMessage req, string APPId, string incomingBase64Signature, string nonce, string requestTimeStamp)
        {
            string requestContentBase64String = "";
            string requestUri = HttpUtility.UrlEncode(req.RequestUri.AbsoluteUri.ToLower());
            string requestHttpMethod = req.Method.Method;

            if (!allowedApps.ContainsKey(APPId))
            {
                return false;
            }

            var sharedKey = allowedApps[APPId];

            if (isReplayRequest(nonce, requestTimeStamp))
            {
                return false;
            }

            byte[] hash = await ComputeHash(req.Content);

            if (hash != null)
            {
                requestContentBase64String = Convert.ToBase64String(hash);
            }

            string data = String.Format("{0}{1}{2}{3}{4}{5}", APPId, requestHttpMethod, requestUri, requestTimeStamp, nonce, requestContentBase64String);

            var secretKeyBytes = Convert.FromBase64String(sharedKey);

            byte[] signature = Encoding.UTF8.GetBytes(data);

            using (HMACSHA256 hmac = new HMACSHA256(secretKeyBytes))
            {
                byte[] signatureBytes = hmac.ComputeHash(signature);

                return (incomingBase64Signature.Equals(Convert.ToBase64String(signatureBytes), StringComparison.Ordinal));
            }

        }

        private bool isReplayRequest(string nonce, string requestTimeStamp)
        {
            if (System.Runtime.Caching.MemoryCache.Default.Contains(nonce))
            {
                return true;
            }

            DateTime epochStart = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);
            TimeSpan currentTs = DateTime.UtcNow - epochStart;

            var serverTotalSeconds = Convert.ToUInt64(currentTs.TotalSeconds);
            var requestTotalSeconds = Convert.ToUInt64(requestTimeStamp);

            if ((serverTotalSeconds - requestTotalSeconds) > requestMaxAgeInSeconds)
            {
                return true;
            }

            System.Runtime.Caching.MemoryCache.Default.Add(nonce, requestTimeStamp, DateTimeOffset.UtcNow.AddSeconds(requestMaxAgeInSeconds));

            return false;
        }

        private static async Task<byte[]> ComputeHash(HttpContent httpContent)
        {
            using (MD5 md5 = MD5.Create())
            {
                byte[] hash = null;
                var content = await httpContent.ReadAsByteArrayAsync();
                if (content.Length != 0)
                {
                    hash = md5.ComputeHash(content);
                }
                return hash;
            }
        }

What we've implemented here is the following:

  • We've validated that the public APPId received is registered in our system; if it is not, we return false and an unauthorized response will be returned.
  • We've checked whether the request received is a replay request, which means checking if the nonce sent by the client has been used before. Currently I'm storing all the nonces received from clients in memory cache for 5 minutes only, so for example if the client generated a nonce "abc1234" and sent it with a request, the server will check whether this nonce has been used before; if not, it stores the nonce for 5 minutes, so any request coming in with the same nonce during that 5 minute window is considered a replay attack. If the same nonce "abc1234" is used after 5 minutes then this is fine and the request is not considered a replay attack.
  • But there might be an evil person who tries to re-post the same request using the same nonce after the 5 minute window, so the request time stamp becomes handy here. The implementation compares the current server UNIX time with the request UNIX time from the client; if the request is older than 5 minutes it is rejected too, and the evil person has no way to fake the request time stamp and send a fresher one, because we've already included the request time stamp in the signature raw data, so any change to it results in a new signature which will not match the incoming client signature.
  • Note: If your API is published on different nodes in a web farm, then you can store those nonces using Microsoft Azure Cache or a Redis server (a rough sketch using Redis follows after this list); do not store them in a database because you need fast read access.
  • The last step we've implemented is to MD5-hash the request body content if it is available (POST, PUT methods), then build the signature raw data by concatenating the parameters (APPId, requestHttpMethod, requestUri, requestTimeStamp, nonce, requestContentBase64String) without any delimiters. It is a MUST that both parties use the same data format to produce the same signature; the data eventually gets hashed using the same hashing algorithm and API Key used by the client. If the incoming client signature equals the signature generated on the server then we consider this request authentic and process it.
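
For the web farm case mentioned above, a rough sketch of the same replay check backed by Redis could look like the code below. It assumes the StackExchange.Redis client and an illustrative connection string, and it reuses the same timestamp window check as the "isReplayRequest" method above:

private static readonly Lazy<StackExchange.Redis.ConnectionMultiplexer> redisConnection =
    new Lazy<StackExchange.Redis.ConnectionMultiplexer>(
        () => StackExchange.Redis.ConnectionMultiplexer.Connect("your-redis-connection-string"));

private bool isReplayRequestDistributed(string nonce, string requestTimeStamp)
{
    var cache = redisConnection.Value.GetDatabase();

    //nonce already seen by any node within the allowed window, so treat it as a replay
    if (cache.KeyExists(nonce))
    {
        return true;
    }

    //the same UNIX time stamp age check as in isReplayRequest above would go here

    //remember the nonce across all nodes for the duration of the allowed window
    cache.StringSet(nonce, requestTimeStamp, TimeSpan.FromSeconds(requestMaxAgeInSeconds));

    return false;
}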

Step 6: Secure the API End Points:
The final thing to do here is to decorate the protected end points or controllers with this new authentication filter attribute, so open the "Orders" controller and add the "HMACAuthentication" attribute as in the code below:

[HMACAuthentication]
    [RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
     //Controller implementation goes here
    }

 Conclusion:

  • In my opinion HMAC authentication is more complicated than OAuth 2.0, but in some situations you need to use it, especially if you can't use TLS, or when you are building an HTTP service that will be consumed by terminals or devices on which storing the API Key is acceptable.
  • I've read that OAuth 1.0a is very similar to this approach. I'm not an expert on that protocol and I'm not trying to reinvent the wheel; I wanted to build this without the use of any external library. So for anyone reading this who has experience with OAuth 1.0a, please drop me a comment describing the differences/similarities between this approach and OAuth 1.0a.

That's all for now folks! Please drop me a comment if you have a better way of implementing this or if you spotted something that could be done in a better way.

The source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh


The post Secure ASP.NET Web API using API Key Authentication – HMAC Authentication appeared first on Bit of Technology.


Dominick Baier: The Future of AuthorizationServer

Now that IdentityServer v3 is almost done, it makes sense to "deprecate" some of the older projects. In particular, all of the functionality of AuthorizationServer is completely replaced by the IdSrv3 feature set.

AuthorizationServer is actually a pretty small and compact code base, and a relatively complete implementation of OAuth2 including a simple authorization model based on clients, applications and scopes. Also there are no major bugs (that we know about) or feature gaps.

IOW – if you want to use AS, simply make it part of your own code base and feel free to change it at will. Check the wiki for documentation.

If somebody wants to take over the project, contact me.


Filed under: ASP.NET, AuthorizationServer, OAuth, WebAPI


Darrel Miller: Constructing URLs the easy way

When building client applications that need to connect to an HTTP API, sooner or later you are going to get involved in constructing a URL based on an API root and some parameters.  Often enough when looking at client libraries I see lots of ugly string concatenation and conditional logic to account for empty parameter values and trailing slashes.  And then there is the issue of encoding.  Several years ago an IETF specification (RFC 6570) was released that described a templating system for URLs, and I created a library that implements the specification.  Here is how you can use it to make constructing even the most crazy URLs as easy as pie.

templates

Path Parameters

The simplest example is where you have a base URI and you need to update a parameter in the URL path segment,

[Fact]
public void UpdatePathParameter()
{
    var url = new UriTemplate("http://example.org/{tenant}/customers")
        .AddParameter("tenant", "acmé")
        .Resolve();

    Assert.Equal("http://example.org/acm%C3%A9/customers", url);
}

This is a really trivial case that could mostly be handled with a string replace.  However, a string replace wouldn’t take care of percent-encoding delimiters and unicode characters in the parameter value.

Under the covers there is a UriTemplate class that can have parameters added to it and a Resolve method.  I have created a simple fluent interface using extension methods to make it convenient to quickly create and resolve a template.
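
To make the mechanics a little more explicit, the first example should be roughly equivalent to the non-chained version below, which uses the same class and extension method directly:

[Fact]
public void UpdatePathParameterWithoutChaining()
{
    var template = new UriTemplate("http://example.org/{tenant}/customers");
    template.AddParameter("tenant", "acmé");

    var url = template.Resolve();

    Assert.Equal("http://example.org/acm%C3%A9/customers", url);
}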

Query Parameters

A slightly more complex example would be adding a query string parameter.

[Fact]
public void QueryParametersTheOldWay()
{
    var url = new UriTemplate("http://example.org/customers?active={activeflag}")
        .AddParameter("activeflag", "true")
        .Resolve();

    Assert.Equal("http://example.org/customers?active=true",url); 
}

This style of template can be problematic when there are optional query parameters and a parameter does not have a value.  A better way of defining query parameters is like this,

[Fact]
public void QueryParametersTheNewWay()
{
    var url = new UriTemplate("http://example.org/customers{?active}")
        .AddParameter("active", "true")
        .Resolve();

    Assert.Equal("http://example.org/customers?active=true", url);
}

When you don't want to provide any value at all, the template parameter will be removed.

[Fact]
public void QueryParametersTheNewWayWithoutValue()
{

    var url = new UriTemplate("http://example.org/customers{?active}")
        .AddParameters(null)
        .Resolve();

    Assert.Equal("http://example.org/customers", url);
}

In this last example I used a slightly different extension method that takes a single object and uses its properties as key-value pairs.  This makes it easy to set multiple parameters.

[Fact]
public void ParametersFromAnObject()
{
    var url = new UriTemplate("http://example.org/{environment}/{version}/customers{?active,country}")
        .AddParameters(new
        {
            environment = "dev",
            version = "v2",
            active = "true",
            country = "CA"
        })
        .Resolve();

    Assert.Equal("http://example.org/dev/v2/customers?active=true&country=CA", url);
}

Lists and Dictionaries

Where URI Templates start to really shine as compared to simple string replaces and concatenation is when you start to use lists and dictionaries as parameter values. 

In the next example we use a list of id values that are stored in an array to specify a parameter value.

[Fact]
public void ApplyParametersObjectWithAListofInts()
{
    var url = new UriTemplate("http://example.org/customers{?ids,order}")
        .AddParameters(new
        {
            order = "up",
            ids = new[] {21, 75, 21}
        })
        .Resolve();

    Assert.Equal("http://example.org/customers?ids=21,75,21&order=up", url);
}

We can use dictionaries to define both the query parameter name and value,

[Fact]
public void ApplyDictionaryToQueryParameters()
{
    var url = new UriTemplate("http://example.org/foo{?coords*}")
        .AddParameter("coords", new Dictionary<string, string>
        {
            {"x", "1"},
            {"y", "2"},
        })
        .Resolve();

    Assert.Equal("http://example.org/foo?x=1&y=2", url);
}

We can also use lists to define a set of path segments.

[Fact]
public void ApplyFoldersToPathFromStringNotUrl()
{

    var url = new UriTemplate("http://example.org{/folders*}{?filename}")
        .AddParameters(new
        {
            folders = new[] { "files", "customer", "project" },
            filename = "proposal.pdf"
        })
        .Resolve();

    Assert.Equal("http://example.org/files/customer/project?filename=proposal.pdf", url);
}

Parameters can be anywhere

Parameters are not limited to path segments and query parameters.  You can also put parameters in the host name.

[Fact]
public void ParametersFromAnObjectFromInvalidUrl()
{

    var url = new UriTemplate("http://{environment}.example.org/{version}/customers{?active,country}")
    .AddParameters(new
    {
        environment = "dev",
        version = "v2",
        active = "true",
        country = "CA"
    })
    .Resolve();

    Assert.Equal("http://dev.example.org/v2/customers?active=true&country=CA", url);
}

You can even replace the entire base URL.

[Fact]
public void ReplaceBaseAddress()
{

    var url = new UriTemplate("{+baseUrl}api/customer/{id}")
        .AddParameters(new
        {
            baseUrl = "http://example.org/",
            id = "22"
        })
        .Resolve();

    Assert.Equal("http://example.org/api/customer/22", url);
}

However, by default URI template will escape all delimiter characters in parameters, so the slashes in the base address would come out percent-encoded.  By adding the + operator to the front of the baseUrl parameter we can instruct the resolution algorithm to not escape characters in the parameter value.

Partial Resolution

A recently added feature to the library is the ability to only resolve parameters that have been passed and leave the other parameters untouched.  This is useful sometimes when you want to resolve base address and version parameters on application startup, but then want to add other parameters later.

[Fact]
public void PartiallyParametersFromAnObjectFromInvalidUrl()
{

    var url = new UriTemplate("http://{environment}.example.org/{version}/customers{?active,country}",resolvePartially:true)
    .AddParameters(new
    {
        environment = "dev",
        version = "v2"
    })
    .Resolve();

    Assert.Equal("http://dev.example.org/v2/customers{?active,country}", url);
}

And there is so much more…

The URI Template specification contains many more syntax options that I have not covered.  Many of them you may never use.  However, it is nice to know that if you ever run into some API that uses some strange formatting, there is a reasonable chance that URI templates can support it.

Although the templating language is fairly sophisticated, it was specifically designed to be fast to process.  The resolution algorithm can be performed by walking the template characters just once and performing substitutions along the way.

Where to find it

All the source code for the project can be found on Github and there is a nuget package available.  The library is built to support .Net35, .Net45 and there is a portable version that supports Win Phone 8, 8.1, WinRT and mono on Android and iOS.

Image Credit: Templates https://flic.kr/p/5Xc9sq


Taiseer Joudeh: AngularJS Authentication Using Azure Active Directory Authentication Library (ADAL)

In my previous post Secure ASP.NET Web API 2 using Azure Active Directory I covered how to protect Web API end points using bearer tokens issued by Azure Active Directory, and how to build a desktop application which acts as a Client. This Client gets the access token from the Authorization Server (Azure Active Directory), then uses this bearer access token to call a protected resource that exists in our Resource Server (ASP.NET Web API).

Azure Active Directory Web Api

Initially I was looking to build the client application using AngularJS (SPA), but I failed to do so because at the time of writing the previous post the Azure Active Directory Authentication Library (ADAL) didn't support the OAuth 2.0 Implicit Grant, which is the right OAuth grant to use when building applications running in browsers.

The live AngularJS demo application is hosted on Azure (User: ADAL@taiseerjoudeharamex.onmicrosoft.com / Pass: AngularJS!!), and the source code for this tutorial is on GitHub.

So I had a discussion with Vittorio Bertocci and Attila Hajdrik on Twitter about this limitation in ADAL, and Vittorio promised that this feature was coming soon. And yes, ADAL now supports the OAuth 2.0 Implicit Grant, and integrating it with your AngularJS app is very simple. I recommend you watch Vittorio's video introduction on Channel 9 before digging into this tutorial.

AngularJS Authentication Using Azure Active Directory Authentication Library (ADAL)

What is OAuth 2.0 Implicit Grant?

In simple words, the implicit grant is optimized for public clients (which cannot store secrets); those clients are built using JavaScript and run in browsers. There is no client authentication happening here, and the only things that need to be presented to obtain an access token are the resource owner's credentials and pre-registration of the redirection URI with the Authorization Server. This redirect URI will be used to receive the access token issued by the Authorization Server in the form of a URI fragment.

What we’ll build in this tutorial?

In this tutorial we'll build a simple SPA application using AngularJS along with ADAL for JS, which provides a very comprehensive abstraction of the Implicit Grant we described earlier. This SPA will communicate with a protected Resource Server (Web API) to get a list of orders, and will request the access token from our Authorization Server (Azure Active Directory).

In order to follow along with this tutorial I've created a sample skeleton project which contains the basic code needed to run our AngularJS application and the Web API without any of the security features added. I recommend downloading it first so you can follow along with the steps below.

The NuGet packages I used in this project are:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Security.ActiveDirectory -Version 3.0.0

Step 1: Register the Web API into Azure Active Directory

Open the Azure Management Portal in order to register our Web API as an application in our Azure Active Directory. After you successfully log in to the Azure Management Portal, click on "Active Directory" in the left hand navigation menu, choose the Active Directory tenant you want to register your Web API with, select the "Applications" tab, then click on the add icon at the bottom of the page. Once the modal window shows, as in the image below, select "Add an application my organization is developing".
Azure New App
Then a 2-step wizard will show up asking you to select the type of app you want to add. In our case we are adding a Web API, so select "Web Application and/or Web API", then provide a name for the application (in my case I'll call it "AngularJSAuthADAL") and click next.
Azure App Name
In the second step, as in the image below, we need to fill in two things: the Sign-On URL, which will usually be the base URL for your Web API, so in my case it will be "http://localhost:10966"; and the APP ID URI, which will usually be filled with a URI that Azure AD can use for this app. It usually takes the form of "http://<your_AD_tenant_name>/<your_app_friendly_name>", so we will replace this with the correct values for my app and fill it in as "http://taiseerjoudeharamex.onmicrosoft.com/AngularJSAuthADAL", then click OK.

Azure App Properties

Step 2: Enable Implicit Grant for the Application

After our Web API has been added to the Azure Active Directory apps, we need to enable the implicit grant. To do so we need to change our Web API configuration using the application manifest. Basically the application manifest is a JSON file that represents our application's identity configuration.

So, as in the image below, after you navigate to the app we've added, click on the "Manage Manifest" icon at the bottom of the page, then click on "Download Manifest".

Download Manifest

Open the downloaded JSON application manifest file and change the value of the "oauth2AllowImplicitFlow" node to "true".  Also notice how the "replyUrls" array contains the URL to which we want the token response returned. You can read more about Web API configuration here.
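
For orientation, the relevant fragment of the manifest ends up looking roughly like the snippet below. The rest of the file is omitted, and the reply URL shown is only an illustrative value; yours should match the URL you registered:

{
  "oauth2AllowImplicitFlow": true,
  "replyUrls": [
    "http://localhost:10966/"
  ]
}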

After we apply this change, save the application manifest file locally then upload it again to your app using the “Upload Manifest” feature.

Step 3: Configure Web API to Accept Bearer Tokens Issued by Azure AD

Now, if you have downloaded the skeleton project, right click on it and build it so the needed NuGet packages are downloaded. If you want, you can run the application and a nice SPA will show up and the orders will be displayed, as in the image below, because we haven't configured the security part yet.

AngularJS Adal

Now open file “Startup.cs” and paste the code below:

public void ConfigureOAuth(IAppBuilder app)
        {
            app.UseWindowsAzureActiveDirectoryBearerAuthentication(
               new WindowsAzureActiveDirectoryBearerAuthenticationOptions
               {
                   Audience = ConfigurationManager.AppSettings["ida:ClientID"],
                   Tenant = ConfigurationManager.AppSettings["ida:Tenant"]
               });
        }

Basically what we've implemented here is simple: we've configured the Web API authentication middleware to use "Windows Azure Active Directory Bearer Tokens" for the specified Active Directory "Tenant" and "Audience" (Client Id). Now any API controller that lives in this API and is decorated with the [Authorize] attribute will only accept bearer tokens issued by this specific Active Directory Tenant; any other form of token will be rejected.

It is a good practice to store the values for your Audience, Tenant, Secrets, etc. in a configuration file and not to hard-code them, so open the web.config file and add 2 new "appSettings" as in the snippet below; the value for the Client Id can be read from your Azure app settings.

<appSettings>
    <add key="ida:Tenant" value="taiseerjoudeharamex.onmicrosoft.com" />
    <add key="ida:ClientID" value="1725911b-ad8f-4295-8258-cf95ba9f7ea6" />
  </appSettings>

Step 4: Protect Orders Controller

Now open the file "OrdersController" and decorate it with the [Authorize] attribute. By doing this, any GET request to the path "http://localhost:port/api/orders" will return status code 401 if no token is provided in the Authorization header. Once you apply this, the orders view won't return any data until we obtain a valid token. The Orders controller code will look like the snippet below:

[Authorize]
[RoutePrefix("api/orders")]
public class OrdersController : ApiController
{
	//Rest of code is here
}

Step 5: Download ADAL JavaScript Library

Now it is time to download and use the ADAL JS library, which facilitates the AngularJS authentication using the Implicit grant. After you download the file, open the page "Index.html" and add a reference to it at the end of the file as in the code below:

<!-- 3rd party libraries -->
 <!-- Other JS references here -->
 <script src="scripts/adal.js"></script>

Step 6: Configure our AngularJS bootstrap file (app.js)

Now open the file "app.js", where we'll inject the ADAL dependencies into our AngularJS module "AngularAuthApp". After you open the file, change the code in it as below:

var app = angular.module('AngularAuthApp', ['ngRoute', 'AdalAngular']);

app.config(['$routeProvider', '$httpProvider', 'adalAuthenticationServiceProvider', function ($routeProvider, $httpProvider, adalAuthenticationServiceProvider) {

    $routeProvider.when("/home", {
        controller: "homeController",
        templateUrl: "/app/views/home.html"
    });

    $routeProvider.when("/orders", {
        controller: "ordersController",
        templateUrl: "/app/views/orders.html",
        requireADLogin: true
    });

    $routeProvider.when("/userclaims", {
        templateUrl: "/app/views/userclaims.html",
        requireADLogin: true
    });

    $routeProvider.otherwise({ redirectTo: "/home" });

    adalAuthenticationServiceProvider.init(
      {
          tenant: 'taiseerjoudeharamex.onmicrosoft.com',
          clientId: '1725911b-ad8f-4295-8258-cf95ba9f7ea6'
      }, $httpProvider);

}]);
var serviceBase = 'http://localhost:10966/';
app.constant('ngAuthSettings', {
    apiServiceBaseUri: serviceBase
});

What we've implemented is the following:

  • We've injected "AdalAngular" into our module "AngularAuthApp".
  • The service "adalAuthenticationServiceProvider" is now available and injected into our App configuration.
  • We've set the property "requireADLogin" to "true" for any partial view that requires authentication. By doing this, if the user requests a protected view, a redirection to the Azure AD tenant will take place and the user (resource owner) will be able to enter his AD credentials to authenticate.
  • Lastly, we’ve set the “tenant” and “clientId” values related to our AD Application.

Step 7: Add explicit Login and Logout feature to the SPA

The ADAL JS library provides us with an explicit way to log the user in/out by calling the "login" function. To add this, open the file named "indexController.js" and replace the existing code with the code below:

'use strict';
app.controller('indexController', ['$scope', 'adalAuthenticationService', '$location', function ($scope, adalAuthenticationService, $location) {

    $scope.logOut = function () {
      
        adalAuthenticationService.logOut();
    }

    $scope.LogIn = function () {

        adalAuthenticationService.login();
    }

}]);

Step 8: Hide/Show links based on Authentication Status

For a better user experience it will be nice if we hide the login link in the top menu when the user is already logged in, and hide the logout link when the user is not logged in yet. To do so, open the file "index.html" and replace the menu items with the code snippet below:

<div class="collapse navbar-collapse" data-collapse="!navbarExpanded">
                <ul class="nav navbar-nav navbar-right">
                    <li data-ng-hide="!userInfo.isAuthenticated"><a href="#">Welcome {{userInfo.profile.given_name}}</a></li>
                    <li><a href="#/orders">View Orders</a></li>
                    <li data-ng-hide="!userInfo.isAuthenticated"><a href="" data-ng-click="logOut()">Logout</a></li>
                    <li data-ng-hide="userInfo.isAuthenticated"><a href="" data-ng-click="LogIn()">Login</a></li>
                </ul>
            </div>

Notice that we are depending on an object named "userInfo" to read a property named "isAuthenticated"; this object is set on the "$rootScope" by the ADAL JS library, so it is available globally in our AngularJS app.

With this step completed, if we run the application and request the "orders" view or click on the "login" link, we will be redirected to our Azure AD tenant to enter user credentials and get access to the protected view. There is a lot of abstraction happening here that needs to be clarified in the section below.

ADAL JavaScript Flow

  • Once the anonymous user requests a protected view (a view marked with requireADLogin = true) or hits the login link explicitly, ADAL will trigger the login directly, as in the code highlighted here.
  • Login in the ADAL library means building a redirection URI to the configured Azure AD tenant we've defined in the "init" method in the "app.js" file. The URI will contain a random "state" and "nonce" to uniquely identify each request and prevent any replay requests. As well, the value of the "redirect_uri" is set to the current location of the page, which should match the value we defined when we registered the application. Lastly, the "response_type" is set to "id_token" so the authorization server sends back a JWT token containing claims about the user. The redirection URI will look like the one below, and the code for building the redirection URI is highlighted here

https://login.windows.net/taiseerjoudeharamex.onmicrosoft.com/oauth2/authorize?response_type=id_token&client_id=1725911b-ad8f-4295-8258-cf95ba9f7ea6&redirect_uri=http%3A%2F%2Fngadal.azurewebsites.net%2F&state=9cce69d3-af36-4eb4-a882-4e9b7080e90d&x-client-SKU=Js&x-client-Ver=0.0.3&nonce=d4313f89-dc4f-4590-b7fd-9bbc2b340759

  • Now the Azure AD login page will show up and the user will be prompted to enter his AD credentials, as in the image below. Assuming he entered the credentials correctly, a redirection will take place to the following URI containing the token as part of the URI fragment (not the query string), along with the same state and nonce specified by ADAL earlier.

https://ngadal.azurewebsites.net/#/id_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImtyaU1QZG1Cdng2OHNrVDgtbVBBQjNCc2VlQSJ9.eyJhdWQiOiIxNzI1OTExYi1hZDhmLTQyOTUtODI1OC1jZjk1YmE5ZjdlYTYiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC8wODExZmIzMS05M2VkLTRmZWItYTAwOS1kNmUyN2RmYWMxN2IvIiwiaWF0IjoxNDE3NDgwMjc5LCJuYmYiOjE0MTc0ODAyNzksImV4cCI6MTQxNzQ4NDE3OSwidmVyIjoiMS4wIiwidGlkIjoiMDgxMWZiMzEtOTNlZC00ZmViLWEwMDktZDZlMjdkZmFjMTdiIiwiYW1yIjpbInB3ZCJdLCJvaWQiOiI2OTIyYTVlZi01YmMyLTRiYmEtYjI5Yy1kODc0YzQyYjg1OWQiLCJ1cG4iOiJIYW16YUB0YWlzZWVyam91ZGVoYXJhbWV4Lm9ubWljcm9zb2Z0LmNvbSIsInVuaXF1ZV9uYW1lIjoiSGFtemFAdGFpc2VlcmpvdWRlaGFyYW1leC5vbm1pY3Jvc29mdC5jb20iLCJzdWIiOiJxVWU4UUQ4SzdwOF9zTlI5WlhQYWcyOVFCeGZURlhKbTBaVzR0UDdYSEJNIiwiZmFtaWx5X25hbWUiOiJKb3VkZWgiLCJnaXZlbl9uYW1lIjoiSGFtemEiLCJub25jZSI6ImQ1NmEwNDJhLWQzM2YtNGYxYS05ZmYwLTFjMWE1ZDk3YzVmMSIsInB3ZF9leHAiOiI3NDMzNDg5IiwicHdkX3VybCI6Imh0dHBzOi8vcG9ydGFsLm1pY3Jvc29mdG9ubGluZS5jb20vQ2hhbmdlUGFzc3dvcmQuYXNweCJ9.efAp-95yMvhLu--8TfXYwozJsc09OTsB5bneH9bvGzko6uLZj0YloDTIrVtu_SU95hOBpFvma0FOeGmsqre6DBwaLTSJDD9wTYtqmoCGwpTy_cewpS78MJ9aR-IjWx5O6K8Nt90d4ujaco5T-o2EQ4ygPx5Z6vH-sLy8t9NDVER7HtlClhRwj2uDUF-kdihh7lv5w0U7TqHZUtLkBNL2l69yY5F0Jdj0q7m81gNps6nfqfa8aypgmztpPWDJAChvwsD5r58CyGVXPKSp_2CfK0kkWasP6fmLKKi5tGPvjg-wEb2j47UVIgO8v9xIkqg8RGqnZ1lboZKa2FlCs-Jnrw&state=6ac680d1-8b3b-452a-97d2-3deddb4016fc&session_state=2d48f965-e026-4c08-9d38-2cce49ee72cb

Azure AD Login

Note: You can extract the JSON Web Token manually and debug it using a JWT debugging tool to explore the claims inside it.

  • Now once the ADAL JS library receives this callback, it will extract the token from the hash fragment, try to decode this token to get the user profile and other claims from it (e.g. token expiry), then store the token along with the decoded claims in HTML5 local storage so the token will not vanish if you close the browser or do a full refresh (F5) of your AngularJS application. The implementation is very similar to what we've covered earlier in the AngularJS authentication post.
  • To be able to access the protected API end points we need to send the token in the "Authorization" header using the bearer scheme. ADAL JS provides an AngularJS interceptor that is responsible for adding the Authorization header to each XHR request if there is a token stored in localStorage; this is very similar to the interceptor we added before in our AngularJS Authentication post.
  • If the user decides to "logout" from the system, ADAL JS clears all the stored data in HTML5 local storage, so no token is stored locally, then it issues a redirect to the URI https://login.windows.net/{tenantid}/oauth2/logout which is responsible for clearing any session cookies created by the Azure AD tenant. But remember that logging out of the system will not invalidate your token; if you extracted the token manually it will remain valid until it expires, usually 1 hour after the issuing time.
  • ADAL JS provides a nice way to renew the token without using grant_type = refresh_token: it tries to renew the token silently by using a hidden iframe which communicates with the Azure AD tenant, asking for a new token if there is a valid session established with the AD tenant.

The live AngularJS demo application is hosted on Azure (ADAL@taiseerjoudeharamex.onmicrosoft.com / AngularJS!!), and the source code for this tutorial is on GitHub.

Conclusion

The ADAL JS library really simplifies adding the OAuth 2.0 Implicit grant to your SPA. It is still a developer preview release, so testing it out and providing feedback will be great for enhancing it.

If you have any comments or enhancements for this tutorial please drop me a comment. Thanks for taking the time to read the post!

Follow me on Twitter @tjoudeh


The post AngularJS Authentication Using Azure Active Directory Authentication Library (ADAL) appeared first on Bit of Technology.


Ali Kheyrollahi: Health Endpoint in API Design: slippery slope that it is

Level [C3]

A Health Endpoint is a common practice in building APIs. Such an endpoint, unlike other resources of a REST API, does not achieve a business activity; it returns the status of the service, and while it can gather and return some data, it is the HTTP status that defines whether the service is "Up or Down". These endpoints commonly go and check a bunch of configurations and connectivity with the dependent services, and even make a few calls for a "Test Customer" to make sure business activity can be achieved.

There is something about the above that just doesn't feel right to me - and this post is an exercise in defining what I mean by that. I will explain what the problems with the Health API are and I am going to suggest how to "fix" it.

What is the health of an API anyway? The server up and running and capable of returning the status 200? Server and all its dependencies running and returning 200? Server and all its dependencies running capable of returning 200 in a reasonable amount of time? API able to accomplish some business activity? Or API able to accomplish a certain activity for a test user? API able to accomplish all activities within reasonable time? API able to accomplish all activities with its 95% percentile falling within an agreed SLA?

A Service is a complex beast. While its complexity would be nowhere near a living organism, it is useful to draw a parallel with a living organism. I remember from my previous medical life that the definition of health - provided by none other than WHO - would go like this:

"Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity."
In other words, defining the health of an organism is a complex and involved process requiring deep understanding of the organism and how it functions. [Well, we are lucky that we are only dealing with distributed systems and their services (or MicroServices if you like) and not living organisms.] For services, instead of health, we define the Quality of Service as a quantitative measure of a service's health.

Quality of Service is normally a bunch of orthogonal SLAs, each defining a measurement for one aspect of the service. In terms of monitoring, Availability of a service is the most important aspect of the service to gauge and closely follow. Availability of the service cannot simply be measured by the amount of time the servers dedicated to a service have been up. Apart from being reachable, the service needs to respond within an acceptable time (Low Latency) and has to be able to achieve its business activity (Functional) - there is no point in the server being reachable and returning a 503 error within milliseconds. So the number of error responses (as a deviation from the baseline, which can be normal validation and business rule errors) also comes into play.

So the question is: how can we expose an endpoint inside a service that aggregates all the above facets and reports the health of the service? The simple answer is we cannot, and should not commit ourselves to doing it. Why? Let's take some simple help from algebra.
API/Service maps an input domain to an output domain (codomain). Also availability is a function of the output domain.

A service (f) is basically a function that maps the input domain (I) to an output domain (O). So:
O = f(I)
The output domain is a set of all possible responses with their status codes and latencies. Availability (A) is a function (a) of the output domain since it has to aggregate errors, latencies, etc:
A = a(O)
So in other words:
A = a(f(I))
So in other words, A cannot be measured without I - which for a real service is a very large set. And also it needs all of f - not your subset bypass-authentication-use-test-customer method.

So one approach is to sit outside the service and only deal with the output domain, in a sort of proxy or by monitoring server logs. Netflix have done a ton of work on this and have open sourced it as Hystrix, and no wonder I have not heard anything about the magical Health Endpoint in there (now there is an alternative endpoint which I will explain later). But if you want to do it within the service you need all of the input domain, and not just your "Test Customer", to make assertions about the health of your service. And this kind of assertion is not just wrong, it is dangerous, as I am going to explain.

First of all, gradually - especially as far as the ops are concerned - that green line on the dashboard that checks your endpoint becomes your availability. People get used to trusting it, and when things go wrong out there and customers jump and shout, you will not believe it for quite a while because your eye sees that green line and trusts it.

And guess what happens when you have such an incident? There will be a post-mortem meeting and all the tie-and-suits will be there, and they will identify the root cause as the faulty health check, and you will be asked to go back and fix your Health Check endpoint. And then you start building more and more complexity into your endpoint. Your endpoint gets to know about each and every dependency, all their intricacies. And before you know it, you could build a complete application beside your main service. And you know what, you have to do it for each and every service, as they are all different.

So don't do it. Don't commit yourself to what you cannot achieve.

So is there no point in having a simplistic endpoint which tells us basic information about the status of the service? Of course there is. Such information is useful, and many load balancers or web proxies require such an endpoint.

But first we need to make absolutely clear what the responsibility of such an endpoint is.

Canary Endpoint

A canary endpoint (the name is courtesy of Jamie Beaumont) is a simplistic endpoint which gathers connectivity status and latency of all dependencies of a service. It absolutely does not trigger any business activity, there is no "Test Customer" of any kind, and it is not a "Health Endpoint". If it is green, it does not mean your service is available. But if it is red (your canary is dead) then you definitely have a problem.



So how does a canary endpoint work? It basically checks connectivity with its immediate dependencies - including but not limited to:
  • External services
  • SQL Databases
  • NoSQL Stores
  • External distributed caches
  • Service brokers (RabbitMQ, Azure Service Bus)
A canary result contains the name of the dependency, the latency and the status code. If any of the results has a non-success code, the endpoint returns a non-success code. The status code returned is used by simple callers such as load balancers. Also, in all cases we return a payload which is the aggregated canary result. Such results can be used to feed various charts and draw heuristics about the significance of variability in the latencies.

You probably noticed that External Services appears in italics, i.e. it is a bit different. The reason is that if an external service has a canary endpoint itself, then instead of just a connectivity check we call its canary endpoint and add its aggregated result to the result we are returning. So usually the entry-point API will generate a cascade of canary chirps that will tell us how things are.

Implementation of the connectivity check is generally dependent on the underlying technology. For a Cache service, it suffices to Set a constant value and see it succeed. For a SQL Database a SELECT 1; query is all that is needed. For an Azure Storage account, it would be enough to connect and get the list of tables. The point here is that none of these are anywhere near a business activity, so you could not - even in the remotest sense - think that their success means your business is up and running.

So there you have it. Don't do health endpoints, do canary instead.

Canary Endpoint implementation

A canary endpoint normally gets implemented as an HTTP GET call which returns a collection of connectivity check metrics. You can abstract the logic of checking various dependencies in a library and allow API developers to implement the endpoint by just declaring the dependencies.
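
As a purely illustrative sketch, a minimal canary endpoint in ASP.NET Web API might look like the code below. The route, dependency name, connection string and response shape are all assumptions, attribute routing is assumed to be enabled, and a real implementation would come from the kind of shared library described above:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Net;
using System.Threading.Tasks;
using System.Web.Http;

public class CanaryCheckResult
{
    public string Dependency { get; set; }
    public long LatencyMilliseconds { get; set; }
    public bool Success { get; set; }
}

[RoutePrefix("canary")]
public class CanaryController : ApiController
{
    [Route("")]
    public async Task<IHttpActionResult> Get()
    {
        var results = new List<CanaryCheckResult>();

        //SELECT 1; is enough to prove connectivity to the SQL dependency
        results.Add(await CheckAsync("orders-db", async () =>
        {
            using (var connection = new SqlConnection("your connection string"))
            using (var command = new SqlCommand("SELECT 1;", connection))
            {
                await connection.OpenAsync();
                await command.ExecuteScalarAsync();
            }
        }));

        //further checks (cache set, storage account, downstream canary endpoints...) would be added here

        //any dead dependency turns the whole canary red for simple callers such as load balancers
        var allAlive = results.TrueForAll(r => r.Success);

        return allAlive
            ? (IHttpActionResult)Ok(results)
            : Content(HttpStatusCode.ServiceUnavailable, results);
    }

    private static async Task<CanaryCheckResult> CheckAsync(string dependency, Func<Task> probe)
    {
        var watch = Stopwatch.StartNew();
        try
        {
            await probe();
            return new CanaryCheckResult { Dependency = dependency, LatencyMilliseconds = watch.ElapsedMilliseconds, Success = true };
        }
        catch (Exception)
        {
            return new CanaryCheckResult { Dependency = dependency, LatencyMilliseconds = watch.ElapsedMilliseconds, Success = false };
        }
    }
}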

We are currently working on an implementation in ASOS (C# and ASP.NET Web API) and there is a possibility of open sourcing it.

Security of the Canary Endpoint

I am in favour of securing the Canary Endpoint with a constant API key - normally under SSL. This does not provide the highest level of security, but it is enough to make it much more difficult to break into. At the end of the day, a canary endpoint lists all internal dependencies, components and potentially technologies of a system, which can be used by hackers to target those components.
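
A minimal sketch of that kind of check, written as a Web API authorization filter, might look like the code below; the header name and the key value are hypothetical, and in real use the expected key would come from configuration:

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class CanaryApiKeyAttribute : AuthorizationFilterAttribute
{
    //illustrative only; store the real key in configuration, not in source code
    private const string ExpectedKey = "some-long-random-value";

    public override void OnAuthorization(HttpActionContext actionContext)
    {
        IEnumerable<string> presentedKeys;
        var hasKey = actionContext.Request.Headers.TryGetValues("X-Canary-Key", out presentedKeys);

        if (!hasKey || !presentedKeys.Contains(ExpectedKey))
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
        }
    }
}

The attribute would then be applied to the canary controller, with SSL taking care of keeping the key confidential on the wire.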

Performance impact of Canary Endpoint

Since the canary endpoint does not trigger any business activity, its performance footprint should be minimal. However, since calling the canary endpoint generates a cascade of calls, it might not be wise to iterate through all canary endpoints and call them every few seconds, since deeper canary endpoints in a highly layered architecture would get called multiple times in each round.



Darrel Miller: Runscope: Notifications from the Traffic Inspector

Runscope provides a way to log HTTP traffic that passes between client and server, and it can also continuously monitor Web APIs to ensure they are functioning correctly.  When something goes wrong with the Web API you can be notified immediately.  However, out of the box, there isn't a way to be notified if a failure appears in the traffic log.  It can be done, though; it just requires a little creativity.  This blog post shows how.

Canary

The API to the rescue

The Runscope Traffic Inspector is where you can see all the requests that have been relayed by Runscope.  Requests that have failed, based on their status code or connectivity issue, are included in the Errors stream. 

image

We can create a Radar Test that looks in this Errors stream by making a request to the Runscope API.

image

Unfortunately, it’s not quite that simple.  Once an error is in the Errors stream, the Radar test would end up notifying us of an error every time it checked.  We need a way of asking if any requests have been added to the Errors stream since the last time we ran the test.

image

We can also use the API to look at the results of a test and determine when it was last run. 

image

We can use the started_at timestamp from the response body of the last test run to then query the errors stream for requests since that timestamp. 

image

There is the possibility that if requests happen between the time that the test starts and the time the errors collection is queried, an error may be reported twice.  This seems like a better alternative than using finished_at and risking missing an error that comes in whilst the test is running.

Accessing the API

In order to use the Runscope API we need an API Key.  We can get one of those by defining an Application within Runscope account configuration.

image

The Website URL and Callback URL are dummy values because your account is the only one that will access the API and therefore you don't need to use OAuth2 web flow authentication.  Once you create the application you will find an Access Token at the bottom of the page.

image

We will now be able to use this access token in our tests.

Creating the Tests

Before creating the tests we should create a new Bucket to hold the tests.  I called this bucket “Traffic Tester”. By using a different bucket we can avoid getting the traffic relating to monitoring intermixed with our actual relayed traffic.  Make a note of the “Traffic Tester” bucket key which is displayed in the bottom left corner of the Traffic Inspector screen.

The next step is to go to the Radar view and create a test.  I created a test called “Monitor Slackbot Traffic” as I want to use it to notify me if there are any errors that occur in my Slackbot integration.

Identifying the Test itself

The first request that we add to this test is going to determine the last time that the test was run.  Before we can do this, we need to know the UUID of the test.  We can use the API to tell us this.  So initially, we are going to set up the first request to just determine the UUID.

Set the request URI to be

https://api.runscope.com/buckets/{TrafficTesterBucketKey}/radar

In my case the URI looked like this, your bucket key will be different,

image

You will also need to add an Authorization header whose value has this format,

bearer {APIKey}

My header looked something like this. 

image

Run this test and when it succeeds, check the Last Response value.  This JSON object is a description of our “Monitor Slackbot Traffic” test and it contains a UUID property to identify the test.  Copy that UUID value.

image

Getting the “Since” value

Change the URL of the test to be,

https://api.runscope.com/buckets/{TrafficTesterBucketKey}/radar/{TestUUID}/results

replacing both the bucket key {TrafficTesterBucketKey} and {TestUUID} values.  Next, add a Variable named since like this,

image

We get the started_at value of data[1] instead of data[0] because data[0] is the information about the currently running test.  We want to know when the last test started.

At this point you can try running the test and see if it populates the since variable with a timestamp value.

image

Time to Check the Errors Stream

Create the next request and set up the same Authorization header as the previous request.

Set the new request URL to be

https://api.runscope.com/buckets/{bucketToMonitor}/errors?since={{since}}

Replace {bucketToMonitor} with the bucket key of the bucket you wish to monitor!

When the request runs it will return a data array that either contains errors or does not.  I tried to use the standard assertions for this but could not find a way that could handle the empty array.  So instead, I used a bit of script code to do the check.

Add the following script code to the request to complete the request,

image

Tell somebody about it

Add some notifications to the test,

image

This will send you an email if any request gets captured in the Errors stream.  If you prefer to use some other notification method then select the Integrations page and select one of our many integration partners.

Is this useful to you?

Ideally, it shouldn’t require quite this much creativity to be able to do this.  Let me know if this is useful to you and we’ll find a  way of making it much simpler.

Image Credit: Canary https://flic.kr/p/c2fdZG


Darrel Miller: The Web API business layer anti-pattern

What follows is a description of an architectural pattern that I see many developers discussing that I believe is an anti-pattern.  My belief is based on architectural theory and I have no empirical evidence to back it up, so feel free to come to your own conclusions.

The proposed architecture looks like this,

image

I've never been a big fan of architecture diagrams like this, because they are a purely logical representation of the architecture and forget about physical realities.

Consider this next diagram which is an example of how this architecture might actually be deployed.

image

In this diagram I changed the colours of the arrows to indicate the protocol used to interact between the components. The blue arrows use HTTP, the purple one is an in-process call and the red one is cross process but most likely not using HTTP. 

HTTP is smart but not very quick

HTTP is designed to enable communication over high latency distributed networks.  It is a text based protocol that enables the transmission of a large amount of semantically rich information in a single request/response pair.  It was designed to scale massively and allow application components to evolve independently.  It was not designed to be particularly efficient over high speed connections within a data center.

High speed in the data center

HTTP is convenient, but definitely not the best choice when communicating with a database server within a data center. The interaction between the Web Site and the Web API is HTTP, because that’s the protocol of choice for Web APIs.  However,  I think it is important to question the wisdom of this interaction.

The right protocol for the right job

It is highly likely that the Web Site and the Web API are living in the same data center. It is quite possible that they are running on the same physical machine.  This means that interactions between the two do not even need a network round trip.  There are much faster ways for the web site to get access to the data it needs than using HTTP to talk to a Web API.

DRY Layers

However, from what I have heard, performance is not the motivating factor for funneling all interactions through the Web API.  The intent is to provide a single interface that all “client” applications can consume.  The goal is re-use.  The theory being that we can write a single Web API and all the different client applications can consume that single API. 

Good APIs for clients that are communicating across the Internet need to satisfy a different set of requirements compared to an API for a client sitting across the room.  Internet APIs can't afford to be chatty.  They tend to be more coarsely grained and contain more metadata in order to reduce chattiness.  They also need to be much more resilient to change.  It's not hard to push an update to the Web Site when the Web API changes, but it is a lot more challenging to update mobile devices, or some third party integration.

It isn’t impossible, it’s just sub-optimal

You can share a Web API between both local and remote clients.  The problems you will encounter will depend on who is the driving force behind API changes.  If the Web Site requirements push API changes then you are likely to end up with something that works OK for the web site and sucks horribly for the remote clients.  If you are lucky, it will be the remote clients that drive the API and hopefully the performance advantages of being local will make up for the inefficient interface that the Web Site needs to deal with.

A better way

image

In my opinion, a better unit of re-use would be the business logic of the application packaged up with a package manager and then deployed into either the Web Site or Web API projects.  With this approach, the Web Site gets high speed access to the underlying business logic and data and the Web API gets to focus on optimizing for remote clients.
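
As a rough sketch of what that might look like (the package, class and method names here are hypothetical), both front ends would reference the same business logic package and call it in-process:

using System.Collections.Generic;
using System.Web.Http;
using System.Web.Mvc;

//shared business logic, shipped as a package referenced by both projects
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderService
{
    public IEnumerable<Order> GetRecentOrders(int customerId)
    {
        //talk to the data store directly, no HTTP hop
        return new List<Order>();
    }
}

//in the Web Site project: an MVC controller making an in-process call
public class OrdersController : Controller
{
    private readonly OrderService orders = new OrderService();

    public ActionResult Index(int customerId)
    {
        return View(orders.GetRecentOrders(customerId));
    }
}

//in the Web API project: a resource shaped for remote clients, using the same package
public class OrdersApiController : ApiController
{
    private readonly OrderService orders = new OrderService();

    public IEnumerable<Order> Get(int customerId)
    {
        return orders.GetRecentOrders(customerId);
    }
}

The Web Site never crosses a process boundary to read its data, while the Web API remains free to shape its resources (and its versioning story) around the needs of remote consumers.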

Feedback

As I started out saying, this opinion is based on theory.  I'd be really interested in hearing about practical experiences that developers have had with these types of scenarios.  Some readers might find this a stretch, but I see a correlation between what I am describing here and the changes that Netflix implemented to its internal architecture.

It is worth noting also that many of the negative impacts that I am envisioning are not necessarily going to surface in the first six months of the project.  I tend to focus on the long term evolution of an application, so if you happen to be building a tool for your internal HR department that is going to be scrapped next year, feel free to ignore everything I just said Smile.


Darrel Miller: Continuous Integration, Deployment and Testing of your Web APIs with AppVeyor and Runscope

Fast iterations can be very valuable to the software development process, but make for more time spent doing deployments and testing.  Also, more deployments means more opportunities to accidentally introduce breaking changes into your API. 

RubeGoldbergMachine

AppVeyor is a cloud based continuous integration server and Runscope enables you to do integration testing of your Web APIs.  When these services are combined they can save you time doing the mechanical stuff and ensure your API consumers stay happy.

The following steps show the key points to achieve this integration using AppVeyor, Runscope and Azure websites.

1) Create your Web API solution.

image

2) Commit your source code to a publicly accessible source code repository in Github, BitBucket, Visual Studio Online or Kiln.

3) Setup a default AppVeyor project.

image

3b) If you did not include your NuGet packages in source control then you will need to go to the "Settings –> Build" page and add the nuget restore command.

image

4) Create an Azure Web Site from the Azure management portal using the Quick Create option.   After the site has been created you should setup deployment credentials.

image

5) Return back to the AppVeyor project configuration and setup the deployment of the project using Web Deploy.

image

The Web Deploy Server field should be set to

https://{sitename}.scm.azurewebsites.net/msdeploy.axd?site={sitename}.azurewebsites.net

Where {sitename} must be replaced by whatever you chose to name your Azure Website.  The Username and Password should be set to the credentials you provided in the Azure portal.

6)  Create your Runscope API tests.

image

7) Determine the trigger URL to initiate the tests.  An individual test can be triggered using the trigger URL on the test Settings page.

image

If you want to run all the tests defined in a bucket, there is also a trigger URL in the bucket settings page.

8) Configure AppVeyor to call Runscope Trigger URL.

image

Once this is all setup, as soon as you commit a change to the source code repo, AppVeyor will be notified of the commit and it will initiate a build.  Once the build is completed successfully, the Runscope trigger URL will be called and the newly deployed API will be tested.  Runscope notifications can be used to send emails, SMS messages or IMs if desired.

Image Credit: Lego Rube Goldberg machine https://flic.kr/p/8tA1H7


Darrel Miller: REST–The Chocolate Chip Cookie Analogy

At a recent conference, I found myself once again in a conversation about the meaning of the term REST.  I’ve had this conversation so many times, that I tend to forget that not everyone has heard my take on the subject.  The conversation ended with a “you should blog that…”. 

ChocolateChipCookies

Most developers are aware that REST is one of those terms that means different things to different people.  Lots of key presses have been wasted arguing about what is and isn’t RESTful.  I’m going to try and avoid that trap by making a particularly silly comparison to demonstrate why we should care about accurate use of terminology.

Constraints

KidCookie

The term REST has some similarities to the term “Chocolate Chip Cookie”.   A chocolate chip cookie is defined by two primary constraints.  It must be a cookie and it must contain chocolate chips.  There is no single official chocolate chip cookie recipe.   There are hundreds of different recipes, but the end result is always a cookie with chocolate chips in it.  More importantly, kids love them.

The problem developers have with REST is that there is no single recipe for “how to do REST”.  Developers like very prescriptive guidance on how to achieve a goal.  The definition of REST simply provides a set of constraints and leaves out the details.

Desired Effect

When you are planning a birthday party and decide to buy some Chocolate Chip Cookies, you are not making that decision because you know Chocolate Chip Cookies contain flour and butter, but because you know kids love them, and you want the kids to be happy.

[Image: birthday party]

The REST constraints were chosen because they evoke certain characteristics in the systems that follow those constraints.  You choose to follow REST constraints because you want those desired effects.

Sometimes Oatmeal cookies are better

When taking kids on a long car ride, especially in warmer climates, chocolate chip cookies are not always the best choice for snacks.  Lots of kids like Oatmeal cookies too and they don’t make a mess of your back seat.  It is the effect that is important, not the ingredients.  You select the type of cookie that has the characteristics you desire.

[Image: messy face]

Complying with the REST constraints requires a certain amount of work.  Maybe you don’t need all the characteristics that a REST system exhibits, or maybe you need additional ones.

False Expectations

However, if you tell your birthday party guests that they are going to get chocolate chip cookies, and you switch the chocolate chips for raisins, you are going to have some confused and upset kids.

[Image: unhappy kid]

This is the key point of the REST naming debacle.   Calling something REST that doesn’t conform to the constraints defined by REST is just a source of pain and confusion.  Some people argue that the popular use of the term REST is simply a subset of the constraints.  People have even tried to name these different sets of constraints: Hi-REST & Lo-REST, Fielding’s REST and Pop-REST.   None of these distinctions have really taken hold.

Rogue Constraints

Imagine how someone would be ridiculed if they came along and said,

our cookie making machine produces square cookies, therefore we declare that chocolate chip cookies must be square.

Or,

our chocolate chip cookie recipe contains pecans and they taste great, so we believe it is a best practice for all chocolate chip cookies to contain pecans.

Unfortunately, this has happened to the term REST.  For a variety of different reasons, new constraints have been invented and attributed to REST.  Sometimes, it is because someone believes the new constraint is a “best practice”, or often it is due to some framework limitation.

[Image: land of confusion]

Land of confusion

The end result is many different definitions of REST.  There are thousands of recipes for making chocolate chip cookies, but the basic definition of what is a chocolate chip cookie remains the same: cookie + chocolate chip. 

The term REST should tell me that a system uses a layered architecture, uses caching, has a client-server architecture, that interactions between the client and server are stateless, and that each layer applies the rules of the uniform interface constraint.

Today, the term REST doesn’t guarantee that any of these constraints are being respected, which makes the term fairly worthless in technical conversations. The term HTTP API is much more accurate for describing most of today’s Web APIs. Many in the core REST community have resorted to using the term Hypermedia API to describe an API that actually conforms to all of the REST constraints.

There is a point where all metaphors break down, and the difference between chocolate chip cookies and REST is that you can do a web search and easily find a recipe to make a great tasting chocolate chip cookie.  Unfortunately, you can’t do the same with REST due to the watering down of the term.

Let’s move on

[Image: driving into the sunset]

It is an unfortunate state of affairs, and one that is likely never to be resolved. However, as long as you understand where we are and how we got here, we can move forward, get some real work done, stop debating what it means to be RESTful, and try not to make the same mistake with the next technical noun that comes along.

 

Image Credit: Cookies https://flic.kr/p/83Wef5
Image Credit: Kid with cookie https://flic.kr/p/dMNwWR
Image Credit: Birthday party https://flic.kr/p/6D1ag1
Image Credit: Messy face https://flic.kr/p/PVrds
Image Credit: Unhappy kid https://flic.kr/p/i8SrkX
Image Credit: Land of confusion https://flic.kr/p/5eLaX7


Dominick Baier: IdentityServer & IdentityManager, Updates and the .NET Foundation

It’s a busy time right now, but we are still on track with our release plans for IdentityServer (and IdentityManager, which will get more love once IdentityServer is done). In fact, we just pushed beta 3-4 to GitHub and NuGet, which mostly contains bug fixes and merged pull requests.

The other big news is that both projects joined the .NET Foundation as part of the announcements around open sourcing .NET. Joining the Foundation provides us with a strong organizational backbone and increases the visibility and attractiveness of IdentityServer and IdentityManager to both new users and new committers. If you are a current user of either project, this also provides even stronger long-term safety for your investment in these frameworks.

If you want to contribute to either project, you are more than welcome! Please have a look at our contribution guidelines and don’t hesitate to get in touch with us!

Also, big thanks to our contributors, and especially to Damian Hickey and Hadi Hariri, who proved this week that this whole community thing is actually working!


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI

