Dominick Baier: IdentityServer & IdentityManager, Updates and the .NET Foundation

It’s busy times right now but we are still on track with our release plans for IdentityServer (and IdentityManager, which will get more love once IdentityServer is done). In fact we just pushed beta 3-4 to github and nuget, which mostly contains bug fixes and merged pull requests.

The other big news is that both projects joined the .NET Foundation as part of the announcements around open sourcing .NET. Joining the Foundation provides us with a strong organizational backbone to increase the visibility and attractiveness of IdentityServer and IdentityManager to both new users and new committers. If you are a current user of one of these projects, this provides even stronger long-term safety for your investment in these frameworks.

If you want to contribute to any of the projects – you are more than welcome! Please have a look at our contribution guidelines and don’t hesitate to get in touch with us!

Also big thanks to our contributors – and especially Damian Hickey and Hadi Hariri who proved this week that this whole community thing is actually working!


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Chad England: Order the controllers in ASP.NET WebAPI Help

Recently I started a new API project for my company. With this project I decided to give the Help files generator that Microsoft created a try. Overall it looks like it will meet the needs of the project. Today, however, I hit my first snag with it. After I began to turn on routes for consumption, I noticed on the help page that the routes were not sorted in any fashion. To me, this can cause a level of annoyance for the folks needing to consume the API.

Solving this was fairly trivial:

  • Go to \Areas\HelpPage\Views\Help\
  • Open Index.cshtml
  • Locate the following line

ILookup<HttpControllerDescriptor, ApiDescription> apiGroups = Model
    .ToLookup(api => api.ActionDescriptor.ControllerDescriptor);

  • Replace that line with this one

ILookup<HttpControllerDescriptor, ApiDescription> apiGroups = Model
    .OrderBy(d => d.ActionDescriptor.ControllerDescriptor.ControllerName)
    .ToLookup(api => api.ActionDescriptor.ControllerDescriptor);

What we are doing here is simply sorting the model by controller name before the lookup is created.
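
If you also want the actions within each controller group listed in a predictable order, one small variation (an untested sketch on my part, using the RelativePath property of ApiDescription) is to add a ThenBy clause:

ILookup<HttpControllerDescriptor, ApiDescription> apiGroups = Model
    .OrderBy(d => d.ActionDescriptor.ControllerDescriptor.ControllerName)
    .ThenBy(d => d.RelativePath)
    .ToLookup(api => api.ActionDescriptor.ControllerDescriptor);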



Taiseer Joudeh: Getting started with ASP.NET 5 MVC 6 Web API & Entity Framework 7

One of the main new features of ASP.NET 5 is unifying the programming model by combining MVC, Web API, and Web Pages in a single framework called MVC 6. In previous versions of ASP.NET (MVC 4 and MVC 5) there was overlap in features between the MVC and Web API frameworks, but the concrete implementation of the two frameworks was totally different. With ASP.NET 5, merging those different frameworks makes it easier to develop modern web applications/HTTP services and increases code reusability.

The source code for this tutorial is available on GitHub.

Getting started with ASP.NET 5 MVC 6 Web API & Entity Framework 7

In this post I’ve decided to give ASP.NET 5 – MVC 6 Web API a test drive. I’ll be building a very simple RESTful API from scratch using MVC 6 Web API and the new Entity Framework 7, so we will learn the following:

  • Using the ASP.NET 5 empty template to build the Web API from scratch.
  • Overview of the new project structure in VS 2015 and how to use the new dependency management tool.
  • Configuring ASP.NET 5 pipeline to add only the components needed for our Web API.
  • Using EF 7 commands and the K Version Manager (KVM) to initialize and apply DB migrations.

To follow along with this post you need to install the VS 2015 Preview edition, or you can provision a virtual machine using Azure Images, as they have an image with the VS 2015 Preview installed.

Step 1: Creating an empty web project

Open VS 2015 and select New Web Project (ASP.NET Web Application) as in the image below; do not forget to set the .NET Framework to 4.5.1. You can name the project “Registration_MVC6WebApi”.

ASPNET5 New project

Now we’ll select the template named “ASP.NET 5 Empty” as in the image below; this is an empty template with no core dependencies on any framework.

MVC 6 Web API Project Template

Step 2: Adding the needed dependencies

Once the project is created you will notice that there is a file named “project.json”; this file contains all your project settings along with a section for managing project dependencies on other frameworks/components.

We used to manage packages/dependencies using the NuGet package manager, and you can still do this with the enhanced NuGet package manager tool which ships with VS 2015, but in our case we’ll add all the dependencies using the “project.json” file and benefit from the IntelliSense provided, as in the image below:

NuGet IntelliSense

So to add the dependencies needed to configure our Web API, open the file “project.json” and replace the “dependencies” section with the one below:

"dependencies": {
        "Microsoft.AspNet.Server.IIS": "1.0.0-beta1",
        "EntityFramework": "7.0.0-beta1",
        "EntityFramework.SqlServer": "7.0.0-beta1",
        "EntityFramework.Commands": "7.0.0-beta1",
        "Microsoft.AspNet.Mvc": "6.0.0-beta1",
        "Microsoft.AspNet.Diagnostics": "1.0.0-beta1",
        "Microsoft.Framework.ConfigurationModel.Json": "1.0.0-beta1"
    }

The purpose of each dependency we’ve added is as follows:

  • Microsoft.AspNet.Server.IIS: We want to host our Web API using IIS, so this package is needed. If you are planning to self-host your Web API then no need to add this package.
  • EntityFramework & EntityFramework.SqlServer: Our data provider for the Web API will be SQL Server. Entity Framework 7 can be configured to work with different data providers, not only relational databases; the data providers supported by EF 7 are SqlServer, SQLite, AzureTableStorage, and InMemory. More about EF 7 data providers here.
  • EntityFramework.Commands: This package will be used to make the DB migrations command available in our Web API project by using KVM, more about this later in the post.
  • Microsoft.AspNet.Mvc: This is the core package which adds all the needed components to run Web API and MVC.
  • Microsoft.AspNet.Diagnostics: Basically this package will be used to display a nice welcome page when you request the base URI of the API in a browser. You can ignore this if you want, but it is nicer to display a welcome page instead of the 403 page returned by older Web API 2 projects.
  • Microsoft.Framework.ConfigurationModel.Json: This package is responsible for loading and reading the configuration file named “config.json”. We’ll add this file in a later step; it is used to set up the “IConfiguration” object. I recommend reading this nice post about the new ASP.NET 5 config files.

The last thing we need to add to the “project.json” file is a section named “commands”, as in the snippet below:

"commands": {
        "ef": "EntityFramework.Commands"
}

We’ve added the short prefix “ef” for EntityFramework.Commands, which will allow us to run EF commands such as initializing and applying DB migrations using KVM.

Step 3: Adding config.json configuration file

Now right click on your project, add a new item of type “ASP.NET Configuration File”, and name it “config.json”. You can think of this file as a replacement for the legacy Web.config file. For now it will contain only the connection string to our SQL DB; I’m using SQL Express here and you can change this to your preferred SQL server.

{
    "Data": {
        "DefaultConnection": {
            "Connectionstring": "Data Source=.\\sqlexpress;Initial Catalog=RegistrationDB;Integrated Security=True;"
        }
    }
}

Note: This is a JSON file, which is why we are using escape characters in the connection string.

Step 4: Configuring the ASP.NET 5 pipeline for our Web API

The “Startup” class is responsible for adding the components needed in our pipeline. Currently, with the ASP.NET 5 empty template, the class is empty and our web project literally does nothing. I’ll add all the code to our Startup class at once and then describe what each part is responsible for, so open the file Startup.cs and paste the code below:

using System;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;
using Microsoft.AspNet.Hosting;
using Microsoft.Framework.ConfigurationModel;
using Microsoft.Framework.DependencyInjection;
using Registration_MVC6WebApi.Models;

namespace Registration_MVC6WebApi
{
    public class Startup
    {
        public static IConfiguration Configuration { get; set; }

        public Startup(IHostingEnvironment env)
        {
            // Setup configuration sources.
            Configuration = new Configuration().AddJsonFile("config.json").AddEnvironmentVariables();
        }
        public void ConfigureServices(IServiceCollection services)
        {
            // Add EF services to the services container.
            services.AddEntityFramework().AddSqlServer().AddDbContext<RegistrationDbContext>();

            services.AddMvc();

            //Resolve dependency injection
            services.AddSingleton<IRegistrationRepo, RegistrationRepo>();
            services.AddSingleton<RegistrationDbContext, RegistrationDbContext>();
        }
        public void Configure(IApplicationBuilder app)
        {
            // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=398940
            app.UseMvc();

            app.UseWelcomePage();

        }
    }
}

What we’ve implemented in this class is the following:

  • The constructor for this class is responsible for reading the settings in the configuration file “config.json” that we defined earlier; currently we have only the connection string. The static object “Configuration” holds this setting, which we’ll use in a coming step.
  • The method “ConfigureServices” accepts a parameter of type “IServiceCollection”. This method is called automatically when starting up the project, and it is the core method responsible for registering components in our pipeline. The components we’ve registered are:
    • We’ve added “EntityFramework” using SQL Server as our data provider for the database context named “RegistrationDbContext”. We’ll add this DB context in the next steps.
    • Added the MVC component to our pipeline so we can use MVC and Web API.
    • Lastly, one of the nice out of the box features added to ASP.NET 5 is dependency injection without using any external IoC containers. Notice how we are creating a single instance of our “IRegistrationRepo” by calling 
      services.AddSingleton<IRegistrationRepo, RegistrationRepo>();
      . This instance will be available for the entire lifetime of our application. We’ll implement the “IRegistrationRepo” and “RegistrationRepo” classes in the next steps of this post. There is a nice post about ASP.NET 5 dependency injection that can be read here.
  • Lastly, the method “Configure” accepts a parameter of type “IApplicationBuilder”; this method configures the pipeline to use MVC and show the welcome page. Do not ask me why we have to call “AddMvc” and “UseMvc” and what the difference between them is :) I would like to hear an answer if someone knows the difference, or maybe this will change in a coming release of ASP.NET 5. (Update: an explanation of this pattern is in the comments section.)

Step 5: Adding Models, Database Context, and Repository

Now we’ll add a file named “Course” which contains two classes: “Course” and “CourseStatusModel”; those classes represent our domain data model. For better code organization, add a new folder named “Models”, then add the new file containing the code below:

using System;
using System.ComponentModel.DataAnnotations;

namespace Registration_MVC6WebApi.Models
{
    public class Course
    {
        public int Id { get; set; }
        [Required]
        [StringLength(100, MinimumLength = 5)]
        public string Name { get; set; }
        public int Credits { get; set; }
    }

    public class CourseStatusModel
    {
        public int Id { get; set; }
        public string Description { get; set; }
    }
}

Now we need to add the database context class which will be responsible for communicating with our database, so add a new class named “RegistrationDbContext” and paste the code snippet below:

using Microsoft.Data.Entity;
using System;
using Microsoft.Data.Entity.Metadata;

namespace Registration_MVC6WebApi.Models
{
    public class RegistrationDbContext :DbContext
    {
        public DbSet<Course> Courses { get; set; }

        protected override void OnConfiguring(DbContextOptions options)
        {
            options.UseSqlServer(Startup.Configuration.Get("Data:DefaultConnection:ConnectionString"));
        }
    }
}

Basically what we’ve implemented here is adding our Course data model as a DbSet so it will represent a database table once we run the migrations. Note that there is a new method named “OnConfiguring” which we can override to specify the data provider that should work with our DB context.

In our case we’ll use SQL Server; the “UseSqlServer” extension method accepts a connection string as a parameter, so we read it from our “config.json” file by specifying the key “Data:DefaultConnection:ConnectionString” on the “Configuration” object we created earlier in the Startup class.

Note: This is not the optimal way to set the connection string; there are MVC 6 examples out there that do it differently, but for some reason that did not work for me, so I followed this approach.

Lastly we need to add the interface “IRegistrationRepo” and its implementation “RegistrationRepo”, so add two new files under the “Models” folder named “IRegistrationRepo” and “RegistrationRepo” and paste the two code snippets below:

using System;
using System.Collections;
using System.Collections.Generic;

namespace Registration_MVC6WebApi.Models
{
    public interface IRegistrationRepo
    {
        IEnumerable<Course> GetCourses();
        Course GetCourse(int courseId);
        Course AddCourse(Course course);
        bool DeleteCourse(int courseId);
    }
}

using System;
using System.Collections.Generic;
using System.Linq;


namespace Registration_MVC6WebApi.Models
{
    public class RegistrationRepo : IRegistrationRepo
    {
        private readonly RegistrationDbContext _db;

        public RegistrationRepo(RegistrationDbContext db)
        {
            _db = db;
        }
        public Course AddCourse(Course course)
        {
            _db.Courses.Add(course);

            if (_db.SaveChanges() > 0)
            {
                return course;
            }
            return null;

        }

        public bool DeleteCourse(int courseId)
        {
            var course = _db.Courses.FirstOrDefault(c => c.Id == courseId);
            if (course != null)
            {
                _db.Courses.Remove(course);
                return _db.SaveChanges() > 0;
            }
            return false;
        }

        public Course GetCourse(int courseId)
        {
            return _db.Courses.FirstOrDefault(c => c.Id == courseId);
        }

        public IEnumerable<Course> GetCourses()
        {
            return _db.Courses.AsEnumerable();
        }
    }
}

The implementation here is fairly simple; what is worth noting is how we’ve passed “RegistrationDbContext” as a parameter to the “RegistrationRepo” constructor, so we’ve implemented constructor injection. This would not work if we hadn’t configured it earlier in our “Startup” class.

Step 6: Installing KVM (K Version Manager)

After we’ve added our database context and our domain data models, we can use migrations to create the database. With previous versions of ASP.NET we used the NuGet Package Manager console for these types of tasks, but with ASP.NET 5 we can use the command prompt with the various K* commands.

What is KVM (K Version Manager)? KVM is a PowerShell script used to get the runtime and manage multiple versions of it on the machine at the same time; you can read more about it here.

Now to install KVM for the first time you have to do the following steps:

1. Open a command prompt with Run as administrator.
2. Run the following command:

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/master/kvminstall.ps1'))"

3. The script installs KVM for the current user.
4. Exit the command prompt window and start another as an administrator (you need to start a new command prompt to get the updated path environment).
5. Upgrade KVM with the following command:

KVM upgrade

We are now ready to run the EF migrations, as shown in the step below:

Step 7: Initializing and applying migrations for our database

Now our command prompt is ready to understand K commands and Entity Framework commands. The first thing to do is change to the project directory; the project directory contains the “project.json” file, as in the image below:

EF7 Migrations
So in the command prompt we need to run the 2 following commands:

k ef migration add initial
k ef migration apply

Basically the first command will add a migration file with the name format (<date>_<migration name>), so we’ll end up with a file named “201411172303154_initial.cs” under a folder named “Migrations” in our project. This auto-generated file contains the code needed to add our Courses table to our database; the newly generated files will show up in your project as in the image below:

EF7 Migrations 2
The second command will apply those migrations and create the database for us based on the connection string we’ve specified earlier in file “config.json”.

Note: the “ef” command comes from the settings that we’ve specified earlier in file “project.json” under section “commands”.

Step 8: Adding GET methods for Courses Controller

The controller is the class responsible for handling HTTP requests. With ASP.NET 5 our Web API controller inherits from the “Controller” class, not from “ApiController” anymore. So add a new folder named “Controllers”, then add a new controller named “CoursesController” and paste the code below:

using Microsoft.AspNet.Mvc;
using Registration_MVC6WebApi.Models;
using System;
using System.Collections.Generic;

namespace Registration_MVC6WebApi.Controllers
{
    [Route("api/[controller]")]
    public class CoursesController : Controller
    {
        private IRegistrationRepo _registrationRepo;

        public CoursesController(IRegistrationRepo registrationRepo)
        {
            _registrationRepo = registrationRepo;
        }

        [HttpGet]
        public IEnumerable<Course> GetAllCourses()
        {
            return _registrationRepo.GetCourses();
        }

        [HttpGet("{courseId:int}", Name = "GetCourseById")]
        public IActionResult GetCourseById(int courseId)
        {
            var course = _registrationRepo.GetCourse(courseId);
            if (course == null)
            {
                return HttpNotFound();
            }

            return new ObjectResult(course);
        }

    }
}

What we’ve implemented in the controller class is the following:

  • The controller is attributed with the Route attribute as follows:
    [Route("api/[controller]")]
    so any HTTP request that matches the template is routed to the controller. The “[controller]” part in the template URL means to substitute the controller class name, minus the “Controller” suffix. In our case, for the “CoursesController” class, the route template is “api/courses”.
  • We’ve defined two HTTP GET methods, the first one “GetAllCourses” is attributed with “[HttpGet]” and it returns a .NET object which is serialized in the body of the response using the default JSON format.
  • The second HTTP GET method “GetCourseById” is attributed with “[HttpGet]”. For this method we’ve specified a constraint on the parameter “courseId”: the parameter should be of integer data type. We’ve also specified a name for this route, “GetCourseById”, which we’ll use in the next step. Lastly, this method returns an object of type IActionResult, which gives us the flexibility to return different action results based on our logic; in our case we return HttpNotFound if the course does not exist, or a serialized JSON object of the course when it is found.
  • Lastly, notice how we are passing “IRegistrationRepo” as a constructor parameter to our CoursesController; by doing this we are implementing constructor injection.

Step 9: Adding POST and DELETE methods for Courses Controller

Now we want to implement two more HTTP methods which allow us to add a new Course or delete an existing one, so open the file “CoursesController” and paste the code below:

[HttpPost]
public IActionResult AddCourse([FromBody] Course course)
{
	if (!ModelState.IsValid)
	{
		Context.Response.StatusCode = 400;
		return new ObjectResult(new CourseStatusModel { Id = 1 , Description= "Course model is invalid" });
	}
	else
	{
	   var addedCourse =  _registrationRepo.AddCourse(course);

		if (addedCourse != null)
		{
			string url = Url.RouteUrl("GetCourseById", new { courseId = course.Id }, Request.Scheme, Request.Host.ToUriComponent());

			Context.Response.StatusCode = 201;
			Context.Response.Headers["Location"] = url;
			return new ObjectResult(addedCourse);
		}
		else
		{
			Context.Response.StatusCode = 400;
			return new ObjectResult(new CourseStatusModel { Id = 2, Description = "Failed to save course" });
		}
	   
	}
}
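
The snippet above only shows the POST action; the DELETE method described in the list below is not part of the original snippet. A minimal sketch of what it could look like, assuming the repository methods we defined earlier (my illustration, so treat it as a starting point rather than the post’s code):

[HttpDelete("{courseId:int}")]
public IActionResult DeleteCourse(int courseId)
{
    if (_registrationRepo.DeleteCourse(courseId))
    {
        // 204 No Content: the course existed and has been deleted.
        return new HttpStatusCodeResult(204);
    }

    // 404 Not Found: no course with the supplied courseId exists.
    return HttpNotFound();
}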

What we’ve implemented here is the following:

  • For method “AddCourse”:
    • Add a new HTTP POST method which is responsible for creating a new Course; this method accepts a Course object coming from the request body, and the Web API framework will deserialize it into a CLR Course object.
    • If the course object is not valid (i.e. the course name is not set) then we’ll return an HTTP 400 status code and an object containing a description of the validation error. Thanks to Yishai Galatzer for spotting this, because I was originally returning a response of type “text/plain” regardless of the “Accept” header value set by the client. The point below contains the fix.
    • If the course object is not valid (i.e. the course name is not set) then we’ll return an HTTP 400 status code, and in the response body we’ll return an instance of a POCO class (CourseStatusModel) containing a fictional Id and a description of the validation error.
    • If the course is created successfully then we’ll build a location URI which points to the newly created course, i.e. (/api/courses/4), and set this URI in the “Location” header of the response.
    • Lastly we are returning the created course object in the response body.
  • For method “DeleteCourse”:
    • Add a new HTTP DELETE method which is responsible for deleting an existing course; this method accepts an integer courseId (a sketch of this method is shown right after the snippet above).
    • If the course has been deleted successfully we’ll return HTTP status 204 (No Content).
    • If the passed courseId doesn’t exist we will return HTTP status 404.

Note: I believe that the IHttpActionResult response introduced in Web API 2 is way better than IActionResult; please drop me a comment if you know how to use IHttpActionResult with MVC 6.

The source code for this tutorial is available on GitHub.

That’s all for now folks! I’m still learning the new features in ASP.NET 5, so please drop me a comment if you have a better way of implementing this tutorial or you spotted something that could be done in a better way.

Follow me on Twitter @tjoudeh


The post Getting started with ASP.NET 5 MVC 6 Web API & Entity Framework 7 appeared first on Bit of Technology.


Darrel Miller: Microsoft: Open source and cross-platform all the things

It was announced today that Microsoft will be delivering a cross-platform and open source, cloud-optimized version of the .Net framework.

AllTheThings

Today’s announcement is the culmination of a series of changes that have been happening over the past few years in certain parts of Microsoft. Through the persistence of numerous Microsoft employees and the encouragement of the .net developer community, OSS is finally becoming accepted as an integral part of Microsoft’s business model.

Wait, there’s more…

Surprisingly, releasing the source code of this new .net Framework is probably the least significant part of the announcement. For as long as I can remember, Microsoft have had a culture of creating strong coupling between Microsoft’s own products. This is epitomized in the developer world by the fact that it is very difficult to get build machines to function without having Visual Studio on them. This is just one example, but this attitude has been pervasive throughout the company.

Time to leave the nest

In my opinion, today’s announcement is a declaration that Microsoft have grown up.  They have accepted the fact that their products must succeed on their own merit, and not because a customer chose one product and became locked into an entire eco-system.

.net on every server

The new cloud-optimized .net runtime will run on Macs and on Linux. You can use your favorite code editor to write and compile .net applications; there is no more coupling with Visual Studio. In fact, in the last few days, I have experienced first hand that MS employees are working hard to get features like IntelliSense and syntax/error highlighting into editors like Sublime, Atom, Emacs and vim. With the announcement of the Visual Studio Community edition, which is almost at feature parity with the Pro edition, the old Microsoft would just have said, “hey, we’re giving you a free edition, why would you want to use anything else”.

Haven’t we heard this all before?

Microsoft have made “open” announcements before and have had less than stellar results.  I think one of the reasons has been that it has taken a very long time to chip away at the internal cultural armor within Microsoft.  At the MVP Summit we had the opportunity to watch a panel discussion with some senior Microsoft people who were clearly 100% behind this open approach.

Best of breed should always win

One indication that this shift in culture is more than skin deep is the fact that Microsoft teams, more and more, are choosing to use external tools instead of being required to use their own.  The new Cloud-Optimized CLR and ASP.NET vNext are all being developed in Github repositories.  To file an issue, you can create a Github issue and if you think you can fix a bug, then send a pull request.  Github is the king of “social-coding” which suits an OSS model well.  Microsoft’s own TFS source code control has always been aimed at the enterprise space which often has different needs.  Microsoft’s teams get to use the best product for their own particular requirements and Microsoft products need to compete purely on merit.

Oh, no, not another .net framework!

I must admit, when I first heard about the cloud-optimized runtime, I was underwhelmed. Why do we need yet another variant of the .net framework? Why is it server-optimized? What about mobile and desktop? Well, technically, the CLR for this stack is not new. It’s the Silverlight CoreCLR. I’ll wait for you to stop snickering… It is also the CLR used to power .net applications on Windows Phone 8 and Windows 8 apps. The CoreCLR is a proven, lightweight version of the CLR that happens to be ideal for creating lightweight web applications. When this lightweight CLR is combined with the new K Runtime Environment, the result is compelling. Different versions of the CLR can sit side by side and switching between them is trivial. The .Net framework libraries have been sliced up into more than 70 NuGet packages so that only the functionality that is needed is pulled in.

A victim of success

One of the major challenges with the .net framework in the past is that it had become very large to deploy, it was very slow for the team to get new features into the framework, and with every new major version the team has attempted a new, hopefully less painful, way to deliver those changes. The result has been inconsistent and confusing. The KRE does enough things differently that we may finally have something that allows the .net framework to evolve quickly and painlessly.

K for Kool

The KRE finally breaks the dependency on the Global Assembly Cache that has been a critical part of .Net since its inception. However, as the .net framework team like to say, they are “in the app compat business”: there will still be full CLR framework support, which still uses the GAC, for applications that run under the K runtime. The new CoreCLR also changes the rules with regard to strong naming. Microsoft are now saying quite openly that they do not recommend strong naming for open source projects, which will come as a great relief to many developers.

This is just the beginning

It was not until I realized the impact that open-sourcing big chunks of the .Net CLR would have on the Mono framework, that I started to appreciate the bigger picture.  The plan for the future, as I understand it, is that Mono will start to replace some of the less optimized parts of Mono with MS source code.  Future versions of Mono will take advantage of more pieces of the CLR infrastructure.  I can envision a future where some combination of Mono / CoreCLR would become a “client optimized” CLR stack.  This would bring a fully supported and optimized .net to every platform that matters.

How is this going to help us developers?

Now that pieces of the .net framework and CoreCLR are easily accessible on Github, it should really help developers get a better understanding of how the .Net framework behaves, and by being able to see the commit history and related issues we should start to get a picture of why it behaves that way.

The .Net and CoreCLR source code is MIT licensed, which now allows us to copy and modify the code. This is huge. There is all sorts of useful code in the framework that is marked as internal because MS didn’t want to support it in the public interface; we can now re-use this code. Microsoft have also attached their Patent Promise to the repositories.

Cross platform is becoming a critical part of the Microsoft developer’s workload.  With easy access to Linux VMs on the server side on Azure and tools like Xamarin that allow C# across iOS and Android, Microsoft developers can reach every platform.  Having Microsoft recognize cross platform as a 1st class concern, rather than just a feature checkbox, is going to make building products with wide reach much easier.

This is a rebirth of Microsoft and I look forward to being part of it.


Dominick Baier: MVP Summit Hackathon: IdentityServer v3 on ASP.NET vNext

Today we had a chance to sit together with the ASP.NET team and try moving IdentityServer to vNext.

There are two fundamental approaches for doing that – migrate the code and middleware to the new APIs or host IdentityServer as-is as an OWIN component.

We went for the latter – and lo and behold – after two hours we got everything up and running. Big thanks to Chris, Lou and Dan from the ASP.NET team!

This allows us (at least for the time being) to run IdentityServer on both ASP.NET vCurrent as well as vNext. This will not give us support for the new CoreCLR – but we also have a plan how to tackle that.

If you want to try it out yourself – the code can be found here.

2014-11-06 12.04.53

Update: two hours later, Christian got everything also running on Ubuntu!

leastprivilege_2014-Nov.-06


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: IdentityServer v3 Beta 3

Some of our users already found out and broke the news – so here’s my official post ;)

Beta 3 has been released to github and nuget – 107 commits since Beta 2-1…new features include:

  • Anti-forgery token support
  • Permission self-service page for users
  • Added support to add all claims of a user to a token (and support for implementation specific claims rules)
  • Added more documentation and comments
  • Added token handle and authorization code hashing
  • New view system and support for file system based assets
  • Support for WS-Federation, OpenID Connect and social external IdPs
  • Support for upstream federated sign-out
  • Added flag to hide scopes from discovery document
  • Re-worked claims filtering and normalization
  • Added support for more authentication scenarios, e.g. client certificates

Documentation will be updated, and new samples will be added ASAP – bear with us.

Again a massive thanks to all contributors and the people giving feedback and filing issues – you make IdentityServer better every day!


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Taiseer Joudeh: JSON Web Token in ASP.NET Web API 2 using Owin

In the previous post, Decouple OWIN Authorization Server from Resource Server, we saw how we can separate the Authorization Server and the Resource Server by unifying the “decryptionKey” and “validationKey” values in the machineKey node of the web.config file for both the Authorization and the Resource server. Once the user requests an access token from the Authorization server, the Authorization server uses this unified key to encrypt the access token, and at the other end, when the token is sent to the Resource server, it uses the same key to decrypt the access token and extract the authentication ticket from it.

The source code for this tutorial is available on GitHub.

This approach works well if you have control over your Resource servers (Audience) which rely on your Authorization server (Token Issuer) to obtain access tokens; in other words, you fully trust those Resource servers, so you share the same “decryptionKey” and “validationKey” values with them.

But in some situations you might have a large number of Resource servers relying on your Authorization server, so sharing the same “decryptionKey” and “validationKey” keys with all those parties becomes inefficient as well as insecure; you are using the same keys for multiple Resource servers, so if a key is compromised all the other Resource servers will be affected.

To overcome this issue we need to configure the Authorization server to issue access tokens using the JSON Web Token (JWT) format instead of the default access token format, and on the Resource server side we need to configure it to consume these new JWT access tokens. As you will see throughout this post, there is no need to unify the “decryptionKey” and “validationKey” values anymore once we use JWT.


What is JSON Web Token (JWT)?

JSON Web Token is a security token which acts as a container for claims about the user; it can be transmitted easily between the Authorization server (Token Issuer) and the Resource server (Audience). The claims in a JWT are encoded using JSON, which makes it easy to use, especially in applications built using JavaScript.

JSON Web Tokens can be signed following the JSON Web Signature (JWS) specification, and they can be encrypted following the JSON Web Encryption (JWE) specification. In our case we will not transmit any sensitive data in the JWT payload, so we’ll only sign the JWT to protect it from tampering during transmission between parties.

JSON Web Token (JWT) Format

Basically the JWT is a string which consists of three parts separated by a dot (.). The JWT parts are: <header>.<payload>.<signature>.

The header part is a JSON object which always contains 2 nodes and looks like the following:

{ "typ": "JWT", "alg": "HS256" }
The “typ” node always has the value “JWT”, and the “alg” node contains the algorithm used to sign the token; in our case we’ll use “HMAC-SHA256” for signing.

The payload part is also a JSON object, which contains all the claims inside this token; check the example shown in the snippet below:

{
  "unique_name": "SysAdmin",
  "sub": "SysAdmin",
  "role": [
    "Manager",
    "Supervisor"
  ],
  "iss": "http://myAuthZServer",
  "aud": "379ee8430c2d421380a713458c23ef74",
  "exp": 1414283602,
  "nbf": 1414281802
}

None of those claims are mandatory in order to build a JWT; you can read more about JWT claims here. In our case we’ll always use this set of claims in the JWTs we are going to issue; those claims represent the following:

  • The “sub” (subject) and “unique_name” claims represent the user name this token was issued for.
  • The “role” claim represents the roles for the user.
  • The “iss” (issuer) claim represents the Authorization server (Token Issuer) party.
  • The “aud” (audience) claim represents the recipients that the JWT is intended for (Relying Party – Resource Server). More on this unique string later in this post.
  • The “exp” (expiration time) claim represents the expiration time of the JWT; this claim contains a UNIX time value.
  • The “nbf” (not before) claim represents the time before which the JWT must not be used; this claim contains a UNIX time value.

Lastly, the signature part of the JWT is created by taking the header and payload parts, base64 URL encoding them, and concatenating them with “.”, then using the “alg” defined in the <header> part to generate the signature; in our case “HMAC-SHA256”. The result of this signing process is a byte array which should be base64 URL encoded and then concatenated with <header>.<payload> to produce a complete JWT.
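
To make the signing step concrete, here is a minimal C# sketch (my illustration; later in the post the actual JWT creation is done with JwtSecurityTokenHandler and HmacSigningCredentials) of how the <header>.<payload> string is signed with HMAC-SHA256:

using System;
using System.Security.Cryptography;
using System.Text;

public static class JwtSignatureSample
{
    // Base64Url-encodes a byte array: standard base64 with '+'/'/' replaced by '-'/'_' and padding removed.
    private static string Base64UrlEncode(byte[] input)
    {
        return Convert.ToBase64String(input).TrimEnd('=').Replace('+', '-').Replace('/', '_');
    }

    // Produces <header>.<payload>.<signature>, signing the first two parts with HMAC-SHA256.
    public static string Sign(string headerJson, string payloadJson, byte[] key)
    {
        string signingInput =
            Base64UrlEncode(Encoding.UTF8.GetBytes(headerJson)) + "." +
            Base64UrlEncode(Encoding.UTF8.GetBytes(payloadJson));

        using (var hmac = new HMACSHA256(key))
        {
            byte[] signature = hmac.ComputeHash(Encoding.UTF8.GetBytes(signingInput));
            return signingInput + "." + Base64UrlEncode(signature);
        }
    }
}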

JSON Web Tokens support in ASP.NET Web API and Owin middleware.

There is no direct support for issuing JWTs in ASP.NET Web API, nor a ready-made Owin middleware responsible for doing this, so in order to start issuing JWTs we need to implement this manually by implementing the interface “ISecureDataFormat” and its method “Protect”; more on this later. But for consuming JWTs in a Resource server there is a ready middleware named “Microsoft.Owin.Security.Jwt” which understands, validates, and de-serializes JWT tokens with a minimal amount of code.

So most of the heavy lifting we’ll do now will be in implementing the Authorization Server.

What we’ll build in this tutorial?

We’ll build a single Authorization server which issues JWTs using ASP.NET Web API 2 on top of Owin middleware. The Authorization server is hosted on Azure (http://JwtAuthZSrv.azurewebsites.net) so you can test it out by registering new Resource servers. Then we’ll build a single Resource server (audience) which will process only JWTs issued by our Authorization server.

I’ll split this post into two sections: the first section covers creating the Authorization server, and the second section covers creating the Resource server.

The source code for this tutorial is available on GitHub.

Section 1: Building the Authorization Server (Token Issuer)

Step 1.1: Create the Authorization Server Web API Project

Create an empty solution and name it “JsonWebTokensWebApi”, then add a new ASP.NET Web application named “AuthorizationServer.Api”. The selected template for the project will be the “Empty” template with no core dependencies. Notice that the authentication is set to “No Authentication”.

Step 1.2: Install the needed NuGet Packages:

Open the package manager console and install the NuGet packages below:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0
Install-Package Microsoft.Owin.Security.OAuth -Version 3.0.0
Install-Package System.IdentityModel.Tokens.Jwt -Version 4.0.0
Install-Package Thinktecture.IdentityModel.Core -Version 1.2.0

You can refer to the previous post if you want to know what each package is responsible for. The package named “System.IdentityModel.Tokens.Jwt” is responsible for validating, parsing and generating JWT tokens. We’ve also used the package “Thinktecture.IdentityModel.Core” which contains a class named “HmacSigningCredentials” that will be used to facilitate creating the signing keys.

Step 1.3: Add Owin “Startup” Class:

Right click on your project then add a new class named “Startup”. It will contain the code below:

public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            // Web API routes
            config.MapHttpAttributeRoutes();
            
            ConfigureOAuth(app);
            
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            
            app.UseWebApi(config);

        }

        public void ConfigureOAuth(IAppBuilder app)
        {

            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                //For Dev enviroment only (on production should be AllowInsecureHttp = false)
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/oauth2/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
                Provider = new CustomOAuthProvider(),
                AccessTokenFormat = new CustomJwtFormat("http://jwtauthzsrv.azurewebsites.net")
            };

            // OAuth 2.0 Bearer Access Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);

        }
    }

Here we’ve created a new instance of the class “OAuthAuthorizationServerOptions” and set its options as follows:

  • The path for generating JWTs will be “http://jwtauthzsrv.azurewebsites.net/oauth2/token”.
  • We’ve specified the token expiry to be 30 minutes.
  • We’ve specified the implementation of how to validate the client and the Resource owner user credentials in a custom class named “CustomOAuthProvider”.
  • We’ve specified the implementation of how to generate the access token using the JWT format; this custom class named “CustomJwtFormat” will be responsible for generating JWTs instead of the default access tokens protected using DPAPI. Note that both use the bearer scheme.

We’ll come to the implementation of both classes in the next steps.

Step 1.4: Resource Server (Audience) Registration:

Now we need to configure our Authorization server to allow registering Resource server(s) (Audience). This step is very important because we need a way to identify which Resource server (Audience) is requesting the JWT token, so the Authorization server will be able to issue a JWT for that audience only.

The minimal information needed to register a Resource server with an Authorization server is a unique Client Id and a shared symmetric key. For the sake of keeping this tutorial simple I’m persisting that information in a volatile dictionary, so the values for those Audiences will be removed from memory if an IIS reset takes place; for a production scenario you need to store those values permanently in a database.

Now add a new folder named “Entities”, then add a new class named “Audience” and paste the code below inside it:

public class Audience
    {
        [Key]
        [MaxLength(32)]
        public string ClientId { get; set; }
        
        [MaxLength(80)]
        [Required]
        public string Base64Secret { get; set; }
        
        [MaxLength(100)]
        [Required]
        public string Name { get; set; }
    }

Then add a new folder named “Models”, then add a new class named “AudienceModel” and paste the code below inside it:

public class AudienceModel
    {
        [MaxLength(100)]
        [Required]
        public string Name { get; set; }
    }

Now add new class named “AudiencesStore” and paste the code below:

public static class AudiencesStore
    {
        public static ConcurrentDictionary<string, Audience> AudiencesList = new ConcurrentDictionary<string, Audience>();
        
        static AudiencesStore()
        {
            AudiencesList.TryAdd("099153c2625149bc8ecb3e85e03f0022",
                                new Audience { ClientId = "099153c2625149bc8ecb3e85e03f0022", 
                                                Base64Secret = "IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw", 
                                                Name = "ResourceServer.Api 1" });
        }

        public static Audience AddAudience(string name)
        {
            var clientId = Guid.NewGuid().ToString("N");

            var key = new byte[32];
            RNGCryptoServiceProvider.Create().GetBytes(key);
            var base64Secret = TextEncodings.Base64Url.Encode(key);

            Audience newAudience = new Audience { ClientId = clientId, Base64Secret = base64Secret, Name = name };
            AudiencesList.TryAdd(clientId, newAudience);
            return newAudience;
        }

        public static Audience FindAudience(string clientId)
        {
            Audience audience = null;
            if (AudiencesList.TryGetValue(clientId, out audience))
            {
                return audience;
            }
            return null;
        }
    }

Basically this class acts like a repository for the Resource servers (Audiences); it is responsible for two things: adding a new audience and finding an existing one.

Now if you take a look at the method “AddAudience” you will notice that we’ve implemented the following:

  • Generating a random string of 32 characters as an identifier for the audience (client id).
  • Generating a 256-bit random key using the “RNGCryptoServiceProvider” class, then base64 URL encoding it; this key will be shared between the Authorization server and the Resource server only.
  • Adding the newly generated audience to the in-memory “AudiencesList”.
  • The “FindAudience” method is responsible for finding an audience based on the client id.
  • The constructor of the class contains a fixed audience for demo purposes.

Lastly we need to add an endpoint to our Authorization server which allows registering new Resource servers (Audiences), so add a new folder named “Controllers”, then add a new class named “AudienceController” and paste the code below:

[RoutePrefix("api/audience")]
    public class AudienceController : ApiController
    {
        [Route("")]
        public IHttpActionResult Post(AudienceModel audienceModel)
        {
            if (!ModelState.IsValid) {
                return BadRequest(ModelState);
            }

            Audience newAudience = AudiencesStore.AddAudience(audienceModel.Name);

            return Ok<Audience>(newAudience);

        }
    }

This endpoint can be accessed by issuing an HTTP POST request to the URI http://jwtauthzsrv.azurewebsites.net/api/audience as in the image below. Notice that the Authorization server is responsible for generating the client id and the shared symmetric key; this symmetric key should not be shared with any party except the Resource server (Audience) that requested it.

Note: In a real world scenario the Resource server (Audience) registration process won’t be this trivial; you might go through a workflow approval. Sharing the key will take place using a secure admin portal, and you might need to provide the audience with the ability to regenerate the key in case it gets compromised.

Register Audience

Step 1.5: Implement the “CustomOAuthProvider” Class

Now we need to implement the code responsible for issuing the JSON Web Token when the requester issues an HTTP POST request to the URI http://jwtauthzsrv.azurewebsites.net/oauth2/token. The request will look as in the image below; notice how we are setting the client id for the registered resource (audience) from the previous step using the key (client_id).

Issue JWT
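
In case the image is hard to read, here is a minimal client-side sketch (my illustration, not part of the original post) of the same token request using HttpClient; the username/password values are illustrative and only need to be identical to satisfy the demo provider shown next, while the client_id is the fixed demo audience added in “AudiencesStore”:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenClientSample
{
    // Requests a JWT from the token endpoint using the resource owner password credentials flow.
    public static async Task<string> RequestTokenAsync()
    {
        using (var client = new HttpClient())
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", "SuperUser" },
                { "password", "SuperUser" },
                { "client_id", "099153c2625149bc8ecb3e85e03f0022" }
            });

            var response = await client.PostAsync("http://jwtauthzsrv.azurewebsites.net/oauth2/token", form);

            // The response body is JSON containing access_token, token_type, and expires_in.
            return await response.Content.ReadAsStringAsync();
        }
    }
}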

To implement this we need to add a new folder named “Providers”, then add a new class named “CustomOAuthProvider” and paste the code snippet below:

public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            string clientId = string.Empty;
            string clientSecret = string.Empty;
            string symmetricKeyAsBase64 = string.Empty;

            if (!context.TryGetBasicCredentials(out clientId, out clientSecret))
            {
                context.TryGetFormCredentials(out clientId, out clientSecret);
            }

            if (context.ClientId == null)
            {
                context.SetError("invalid_clientId", "client_Id is not set");
                return Task.FromResult<object>(null);
            }

            var audience = AudiencesStore.FindAudience(context.ClientId);

            if (audience == null)
            {
                context.SetError("invalid_clientId", string.Format("Invalid client_id '{0}'", context.ClientId));
                return Task.FromResult<object>(null);
            }
            
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

            //Dummy check here, you need to do your DB checks against memebrship system http://bit.ly/SPAAuthCode
            if (context.UserName != context.Password)
            {
                context.SetError("invalid_grant", "The user name or password is incorrect");
                //return;
                return Task.FromResult<object>(null);
            }

            var identity = new ClaimsIdentity("JWT");

            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));
            identity.AddClaim(new Claim(ClaimTypes.Role, "Manager"));
            identity.AddClaim(new Claim(ClaimTypes.Role, "Supervisor"));

            var props = new AuthenticationProperties(new Dictionary<string, string>
                {
                    {
                         "audience", (context.ClientId == null) ? string.Empty : context.ClientId
                    }
                });

            var ticket = new AuthenticationTicket(identity, props);
            context.Validated(ticket);
            return Task.FromResult<object>(null);
        }
    }

As you can see, this class inherits from the class “OAuthAuthorizationServerProvider”, and we’ve overridden two methods: “ValidateClientAuthentication” and “GrantResourceOwnerCredentials”.

  • The first method, “ValidateClientAuthentication”, is responsible for validating whether the Resource server (audience) is already registered in our Authorization server by reading the client_id value from the request; notice that the request contains only the client_id, without the shared symmetric key. If we take the happy scenario and the audience is registered, we mark the context as valid, which means the audience check has passed and the code flow can proceed to the next step: validating the resource owner credentials (the user who is requesting the token).
  • The second method, “GrantResourceOwnerCredentials”, is responsible for validating the resource owner (user) credentials. For the sake of keeping this tutorial simple I’m considering every identical username and password combination as valid; in a real world scenario you would do your database checks against a membership system such as ASP.NET Identity, as you can see in the previous post.
  • Notice that we are setting the authentication type for those claims to “JWT”, and we are passing the audience client id as a property of the “AuthenticationProperties”; we’ll use the audience client id in the next step.
  • Now the JWT access token will be generated when we call “context.Validated(ticket)”, but we still need to implement the class “CustomJwtFormat” we talked about in step 1.3.

Step 1.6: Implement the “CustomJwtFormat” Class

This class will be responsible for generating the JWT access token, so what we need to do is add a new folder named “Formats”, then add a new class named “CustomJwtFormat” and paste the code below:

public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
        private const string AudiencePropertyKey = "audience";

        private readonly string _issuer = string.Empty;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException("data");
            }

            string audienceId = data.Properties.Dictionary.ContainsKey(AudiencePropertyKey) ? data.Properties.Dictionary[AudiencePropertyKey] : null;

            if (string.IsNullOrWhiteSpace(audienceId)) throw new InvalidOperationException("AuthenticationTicket.Properties does not include audience");

            Audience audience = AudiencesStore.FindAudience(audienceId);

            string symmetricKeyAsBase64 = audience.Base64Secret;
  
            var keyByteArray = TextEncodings.Base64Url.Decode(symmetricKeyAsBase64);

            var signingKey = new HmacSigningCredentials(keyByteArray);

            var issued = data.Properties.IssuedUtc;
            var expires = data.Properties.ExpiresUtc;

            var token = new JwtSecurityToken(_issuer, audienceId, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey);

            var handler = new JwtSecurityTokenHandler();

            var jwt = handler.WriteToken(token);

            return jwt;
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }

What we’ve implemented in this class is the following:

  • The class “CustomJwtFormat” implements the interface “ISecureDataFormat<AuthenticationTicket>”; the JWT generation takes place inside the “Protect” method.
  • The constructor of this class accepts the “Issuer” of the JWT, which will be our Authorization server. This can be a string or a URI; in our case we’ll fix it to the URI “http://jwtauthzsrv.azurewebsites.net”.
  • Inside the “Protect” method we are doing the following:
    1. Reading the audience (client id) from the Authentication Ticket properties, then getting this audience from the in-memory store.
    2. Reading the symmetric key for this audience and base64 decoding it to a byte array which will be used to create an HMAC-SHA256 signing key.
    3. Preparing the raw data for the JSON Web Token which will be issued to the requester by providing the issuer, audience, user claims, issue date, expiry date, and the signing key which will sign the JWT payload.
    4. Lastly we serialize the JSON Web Token to a string and return it to the requester.
  • By doing this, a requester for an access token from our Authorization server will receive a signed token which contains claims for a certain resource owner (user), and this token is intended for a certain Resource server (audience) as well.

So if we need to request a JWT from our Authorization server for a user named “SuperUser” that needs to access the Resource server (audience) “099153c2625149bc8ecb3e85e03f0022”, all we need to do is issue an HTTP POST to the token endpoint (http://jwtauthzsrv.azurewebsites.net/oauth2/token) as in the image below:

Issue JWT2

The result for this will be the below JSON Web Token:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6IlN1cGVyVXNlciIsInN1YiI6IlN1cGVyVXNlciIsInJvbGUiOlsiTWFuYWdlciIsIlN1cGVydmlzb3IiXSwiaXNzIjoiaHR0cDovL2p3dGF1dGh6c3J2LmF6dXJld2Vic2l0ZXMubmV0IiwiYXVkIjoiMDk5MTUzYzI2MjUxNDliYzhlY2IzZTg1ZTAzZjAwMjIiLCJleHAiOjE0MTQzODEyODgsIm5iZiI6MTQxNDM3OTQ4OH0.pZffs_TSXjgxRGAPQ6iJql7NKfRjLs1WWSliX5njSYU

There is an online JWT debugger tool named jwt.io that allows you to paste the encoded JWT and decode it so you can interpret the claims inside it. Open the tool and paste the JWT above and you should receive a response as in the image below; notice that all the claims are set properly, including the iss, aud, sub, role, etc.

One thing to notice here is that there is a red label which states that the signature is invalid; this is true because the tool doesn’t know about the shared symmetric key issued for the audience (099153c2625149bc8ecb3e85e03f0022).

So if we decide to share this symmetric key with the tool and paste the key into the secret text box, we should receive a green label stating that the signature is valid, and this is identical to the implementation we’ll see in the Resource server when it receives a request containing a JWT.

jwtio
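
If you prefer to verify the token in code rather than with jwt.io, here is a minimal sketch (my illustration; the Resource server in section 2 will let the middleware do this for us) using the “System.IdentityModel.Tokens.Jwt” package we installed earlier, with the demo audience’s shared symmetric key:

using System.IdentityModel.Tokens;
using System.Security.Claims;
using Microsoft.Owin.Security.DataHandler.Encoder;

public static class JwtValidationSample
{
    // Validates the signature, issuer, audience, and lifetime of an encoded JWT
    // using the shared symmetric key of the demo audience.
    public static ClaimsPrincipal Validate(string jwtString)
    {
        var secret = TextEncodings.Base64Url.Decode("IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw");

        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "http://jwtauthzsrv.azurewebsites.net",
            ValidAudience = "099153c2625149bc8ecb3e85e03f0022",
            IssuerSigningKey = new InMemorySymmetricSecurityKey(secret)
        };

        SecurityToken validatedToken;
        // Throws a security token exception if the signature, issuer, audience, or lifetime is invalid.
        return new JwtSecurityTokenHandler().ValidateToken(jwtString, parameters, out validatedToken);
    }
}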

Now the Authorization server (Token issuer) is able to register audiences and issue JWT tokens, so let’s move to adding a Resource server which will consume the JWT tokens.

Section 2: Building the Resource Server (Audience)

Step 2.1: Creating the Resource Server Web API Project

Add a new ASP.NET Web Application named "ResourceServer.Api"; the selected template for the project will be the "Empty" template with no core dependencies. Notice that the authentication is set to "No Authentication".

Step 2.2: Installing the needed NuGet Packages:

Open the Package Manager Console and install the NuGet packages below:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0
Install-Package Microsoft.Owin.Security.Jwt -Version 3.0.0

The package "Microsoft.Owin.Security.Jwt" is responsible for protecting the Resource server resources using JWT; it only validates and deserializes JWT tokens.

Step 2.3: Add Owin “Startup” Class:

Right click on your project then add a new class named “Startup”. It will contain the code below:

public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            config.MapHttpAttributeRoutes();

            ConfigureOAuth(app);

            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
           
            app.UseWebApi(config);

        }

        public void ConfigureOAuth(IAppBuilder app)
        {
            var issuer = "http://jwtauthzsrv.azurewebsites.net";
            var audience = "099153c2625149bc8ecb3e85e03f0022";
            var secret = TextEncodings.Base64Url.Decode("IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw");

            // Api controllers with an [Authorize] attribute will be validated with JWT
            app.UseJwtBearerAuthentication(
                new JwtBearerAuthenticationOptions
                {
                    AuthenticationMode = AuthenticationMode.Active,
                    AllowedAudiences = new[] { audience },
                    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
                    {
                        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
                    }
                });

        }
    }

This is the most important step in configuring the Resource server to trust tokens issued by our Authorization server (http://jwtauthzsrv.azurewebsites.net), notice how we are providing the values for the issuer, audience (client id), and the shared symmetric secret we obtained once we registered the resource with the authorization server.

By providing those values to JwtBearerAuthentication middleware, this Resource server will be able to consume only JWT tokens issued by the trusted Authorization server and issued for this audience only.

Note: Always store keys in config files not directly in source code.
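As a hedged illustration of that note, the hard-coded values above could be read from Web.config appSettings instead; the key names below are made up for this example and would need matching entries under the appSettings section.

// Illustrative only: the appSettings key names ("as:Issuer", "as:AudienceId",
// "as:AudienceSecret") are hypothetical; add matching <add key="..." value="..." />
// entries under <appSettings> in Web.config.
using System.Configuration;
using Microsoft.Owin.Security.DataHandler.Encoder;

public static class JwtSettings
{
    public static string Issuer
    {
        get { return ConfigurationManager.AppSettings["as:Issuer"]; }
    }

    public static string Audience
    {
        get { return ConfigurationManager.AppSettings["as:AudienceId"]; }
    }

    public static byte[] Secret
    {
        get { return TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["as:AudienceSecret"]); }
    }
}

ConfigureOAuth could then use JwtSettings.Issuer, JwtSettings.Audience and JwtSettings.Secret instead of the literals above.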

Step 2.4: Add new protected controller

Now we want to add a controller which will serve as our protected resource; this controller will return the list of claims for the authorized user, and those claims are encoded within the JWT we've obtained from the Authorization Server. So add a new controller named "ProtectedController" under the "Controllers" folder and paste the code below:

[Authorize]
    [RoutePrefix("api/protected")]
    public class ProtectedController : ApiController
    {
        [Route("")]
        public IEnumerable<object> Get()
        {
            var identity = User.Identity as ClaimsIdentity;

            return identity.Claims.Select(c => new
            {
                Type = c.Type,
                Value = c.Value
            });
        }
    }

Notice how we decorate the controller with the [Authorize] attribute, which will protect this resource and only allow HTTP GET requests containing a valid JWT access token; by valid I mean (a small client-side sketch follows the list below):

  • Not expired JWT.
  • JWT issued by our Authorization server (http://jwtauthzsrv.azurewebsites.net).
  • JWT issued for the audience (099153c2625149bc8ecb3e85e03f0022) only.
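As a quick illustration of my own (not from the post), a client could call this protected endpoint by sending the JWT in the Authorization header; the resource server base URL below is a placeholder.

// Hedged sketch: calling the protected endpoint with the JWT as a bearer token.
// The base URL is a placeholder for wherever ResourceServer.Api is hosted.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ProtectedApiClientSample
{
    static async Task Main()
    {
        var jwt = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..."; // the access token obtained earlier

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", jwt);

            var response = await client.GetAsync("http://localhost:55555/api/protected");

            // A valid token returns 200 OK with the user's claims; otherwise 401 Unauthorized.
            Console.WriteLine((int)response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}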

Conclusion

Using signed JSON Web Tokens facilitates and standardizes the way of separating the Authorization server and the Resource server; there is no more need to unify machineKey values, nor any risk from sharing the "decryptionKey" and "validationKey" values among different Resource servers.

Keep in mind that the JSON Web Token we’ve created in this tutorial is signed only, so do not put any sensitive information in it :)

The source code for this tutorial is available on GitHub.

That’s it for now folks, hopefully this short walk through helped in understanding how we can configure the Authorization Server to issue JWT and how we can consume them in a Resource Server.

If you have any comment, question or enhancement please drop me a comment, I do not mind if you star the GitHub Repo too :)

Follow me on Twitter @tjoudeh

References

The post JSON Web Token in ASP.NET Web API 2 using Owin appeared first on Bit of Technology.


Darrel Miller: Add Runscope logging to your ASP.NET Web API in minutes

If you are building an ASP.NET Web API and want a view into the HTTP traffic that is hitting your API then this is a really quick solution that might prove useful.

Monitoring

Runscope is a cloud based service that allows you to monitor, measure and test Web APIs.  By using their API I was able to build an HttpMessageHandler that logs all requests and responses to the service.  Runscope has a free tier that allows you to log 10K requests/month.

Add a message handler

In your Web API project, simply add the Nuget package Runscope.HttpMessageHandler and the following line of code in your Web API configuration code:

config.MessageHandlers.Add(new RunscopeApiMessageHandler(<ApiKey>, <BucketKey>));

You can get an API key by creating an Application within Runscope; the bucket key is displayed at the bottom right-hand corner of the page.
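For context, here is a minimal sketch of where that line might sit, assuming the standard WebApiConfig.Register method that the Web API project template generates; the key values are placeholders for the ones from your Runscope account.

// Minimal sketch: registering the Runscope message handler during Web API configuration.
// Replace the placeholder strings with your own Runscope API key and bucket key, and add
// the using directive for the handler's namespace from the Runscope.HttpMessageHandler package.
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        config.MessageHandlers.Add(
            new RunscopeApiMessageHandler("YOUR_RUNSCOPE_API_KEY", "YOUR_BUCKET_KEY"));
    }
}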

And you’re done.

Get results immediately

Once you start making requests to the API you will be able to see the details of the request in the Runscope Traffic Inspector,

RunscopeRequest

and the response…

RunscopeResponse

Let me show you how

The following demonstrates how to create a Web API, set up a Runscope account and add Runscope logging to it in less than 5 minutes.

Image credits : Heart rate monitor https://flic.kr/p/8S3ofm 
Image credits : Stopwatch https://flic.kr/p/6xSoka


Ali Kheyrollahi: Performance Series - How poor performance of HttpContent.ReadAsAsync can affect your API/site

Level [T2]

This has been a revelation - what I am about to reveal here, deeply surprised me - it might surprise you too. This post is mainly about consuming restful APIs using HttpClient and when the payload is JSON.

UPDATE: I got in touch with the ASP.NET team and they confirmed this as a performance bug which has now been fixed, but the fix is not yet available.

As you probably know, performance and benchmarking are very close to my heart and I have recently been focusing on benchmarking a few APIs at work. One of my observations was that Web APIs/web sites which have historically been IO-bound now show signs of CPU strain and have become CPU-bound.

When you think logically about it, there is no magic here: by using async/await, you end up putting your CPU to some use, unlike the old times when the threads were blocked waiting for the IO to return and the CPU would be twiddling its thumbs. However, I found the CPU overhead of the operations excessive, so I set out to benchmark a few different scenarios.

Test Setup

Two APIs were created where one was using the other. These two APIs were part of the same cloud service which was deployed to two separate Medium (A2) web roles. I used 2 different deployments of the same code, one dependent upon version 4.0.30506.0 of the API and the other one on the latest version, which was 5.2.2. The difference between the two versions of Web API is the topic of another post, but the differences were not huge, although newer versions showed improved performance.

The API being called returns a customer with its orders. Every customer has between 1 and 3 orders and each order between 1 and 3 items. In the long run, this randomisation gets evened out. Each document returned is between 1-2 KB. The more superficial API makes one call to get the customer and then separately calls the deeper API once for each of that customer's orders. It then combines the result and sends back the response. Both APIs are deployed in the same Azure Data Centre.

You can find the whole code at GitHub. The code takes 4 different approaches as below:

public class CustomerController : ApiController
{
    public FullCustomer GetSync(int id)
    {
        var webClient = new WebClient();
        var customerString = webClient.DownloadString(BuildUrl(id));
        var customer = JsonConvert.DeserializeObject<Customer>(customerString);
        var fullCustomer = new FullCustomer(customer);
        var orders = new List<Order>();
        foreach (var orderId in customer.OrderIds)
        {
            var orderString = webClient.DownloadString(BuildUrl(id, orderId));
            var order = JsonConvert.DeserializeObject<Order>(orderString);
            orders.Add(order);
        }
        fullCustomer.Orders = orders;
        return fullCustomer;
    }

    public async Task<FullCustomer> GetASync(int id)
    {
        var webClient = new WebClient();
        var customerString = await webClient.DownloadStringTaskAsync(BuildUrl(id));
        var customer = JsonConvert.DeserializeObject<Customer>(customerString);
        var fullCustomer = new FullCustomer(customer);
        var orders = new List<Order>();
        foreach (var orderId in customer.OrderIds)
        {
            var orderString = await webClient.DownloadStringTaskAsync(BuildUrl(id, orderId));
            var order = JsonConvert.DeserializeObject<Order>(orderString);
            orders.Add(order);
        }
        fullCustomer.Orders = orders;
        return fullCustomer;
    }

    public async Task<FullCustomer> GetASyncWebApi(int id)
    {
        var httpClient = new HttpClient();
        httpClient.DefaultRequestHeaders.Add("Accept", "application/json");
        var responseMessage = await httpClient.GetAsync(BuildUrl(id));
        var customer = await responseMessage.Content.ReadAsAsync<Customer>();
        var fullCustomer = new FullCustomer(customer);
        var orders = new List<Order>();
        foreach (var orderId in customer.OrderIds)
        {
            responseMessage = await httpClient.GetAsync(BuildUrl(id, orderId));
            var order = await responseMessage.Content.ReadAsAsync<Order>();
            orders.Add(order);
        }
        fullCustomer.Orders = orders;
        return fullCustomer;
    }

    public async Task<FullCustomer> GetASyncWebApiString(int id)
    {
        var httpClient = new HttpClient();
        httpClient.DefaultRequestHeaders.Add("Accept", "application/json");
        var responseMessage = await httpClient.GetAsync(BuildUrl(id));
        var customerString = await responseMessage.Content.ReadAsStringAsync();
        var customer = JsonConvert.DeserializeObject<Customer>(customerString);
        var fullCustomer = new FullCustomer(customer);
        var orders = new List<Order>();
        foreach (var orderId in customer.OrderIds)
        {
            responseMessage = await httpClient.GetAsync(BuildUrl(id, orderId));
            var orderString = await responseMessage.Content.ReadAsStringAsync();
            var order = JsonConvert.DeserializeObject<Order>(orderString);
            orders.Add(order);
        }
        fullCustomer.Orders = orders;
        return fullCustomer;
    }

    private string BuildUrl(int customerId, int? orderId = null)
    {
        string baseUrl = string.Format("http://{0}:8080/api/customer/{1}", Request.RequestUri.Host, customerId);
        return orderId.HasValue
            ? string.Format("{0}/order/{1}", baseUrl, orderId.Value)
            : baseUrl;
    }
}
So as you can see, we use 4 different methods:

1) Using WebClient in the sync fashion
2) Using WebClient in the async fashion
3) Using HttpClient in the async fashion with ReadAsAsync on HttpContent
4) Using HttpClient in the async fashion with reading content as string and then using JsonConvert to deserialise

I used SuperBenchmarker to invoke the main API which gathers the data from the other API. I used the tool within the same Azure Data Centre from another machine (none of the APIs) to make the tests more realistic yet eliminate network idiosyncrasies.

I used 5000 requests with concurrency of 10 - although I tried other numbers as well, which did not make any material difference in the results.

Results

Here is the result for scenario 1 (sync using WebClient):

TPS:    394 (requests/second)
Max: 199ms
Min: 8ms
Avg: 25ms

50% below 24ms
60% below 25ms
70% below 27ms
80% below 28ms
90% below 30ms
95% below 32ms
98% below 36ms
99% below 55ms
99.9% below 185ms


The result for scenario 2 (Async using WebClient) usually shows better throughput but higher CPU

TPS:    485 (requests/second)
Max: 291ms
Min: 5ms
Avg: 20ms

50% below 19ms
60% below 21ms
70% below 23ms
80% below 25ms
90% below 27ms
95% below 29ms
98% below 32ms
99% below 36ms
99.9% below 284ms

The CPU difference is not huge and can be explained by the increased throughput:

CPU usage during Scenario 1 and 2

Now what surprised me greatly was the result of the third scenario (using HttpContent.ReadAsAsync<T>). Apart from CPU of 100% and signs of queueing, here is the result:

TPS:    41 (requests/second)
Max: 12656ms
Min: 26ms
Avg: 240ms

50% below 170ms
60% below 178ms
70% below 187ms
80% below 205ms
90% below 256ms
95% below 296ms
98% below 370ms
99% below 3181ms
99.9% below 12573ms

Yeah, shocking. The diagram below compares CPU usage between scenario 1 and 3:

CPU usage in scenario 1 (arrow) and 3 (box)

Scenario 4 is definitely better and is not too far from scenarios 1 and 2:

TPS:    230 (requests/second)
Max: 7068ms
Min: 7ms
Avg: 43ms

50% below 20ms
60% below 22ms
70% below 24ms
80% below 26ms
90% below 29ms
95% below 34ms
98% below 110ms
99% below 144ms
99.9% below 7036ms

The CPU usage is around 80% and definitely worse than scenarios 1 and 2 (which requires further analysis).

Analysis

Where is the problem? It appears that JSON Deserialization when reading from a stream is not efficient. It is possible that the JSON Deserialization has to optimise for memory efficiency rather than CPU efficiency since when the whole string is passed, it is surely much faster. 

Profiling proves that the problem is indeed JSON Deserialization:

Profiling scenario 3 is showing that the most of the CPU time is spent in JSON Deserialisation

So in order to prove that, we do not have to invoke an API; the whole operation can be done inside a console application. So I used the same code that was generating customers and orders. Here I am comparing ReadAsAsync<FullCustomer> against reading the content as a string and deserialising with JsonConvert:

private static void Main(string[] args)
{
    const int TotalRun = 10 * 1000;

    var customerController = new CustomerController();
    var orderController = new OrderController();
    var customer = customerController.Get(1);

    var orders = new List<Order>();
    foreach (var orderId in customer.OrderIds)
    {
        orders.Add(orderController.Get(1, orderId));
    }

    var fullCustomer = new FullCustomer(customer)
    {
        Orders = orders
    };

    var s = JsonConvert.SerializeObject(fullCustomer);
    var bytes = Encoding.UTF8.GetBytes(s);
    var stream = new MemoryStream(bytes);
    var content = new StreamContent(stream);

    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");

    var stopwatch = Stopwatch.StartNew();
    for (int i = 1; i < TotalRun + 1; i++)
    {
        var a = content.ReadAsAsync<FullCustomer>().Result;
        if (i % 100 == 0)
            Console.Write("\r" + i);
    }
    Console.WriteLine();
    Console.WriteLine(stopwatch.Elapsed);

    stopwatch.Restart();
    for (int i = 1; i < TotalRun + 1; i++)
    {
        var sa = content.ReadAsStringAsync().Result;
        var a = JsonConvert.DeserializeObject<FullCustomer>(sa);
        if (i % 100 == 0)
            Console.Write("\r" + i);
    }
    Console.WriteLine();
    Console.WriteLine(stopwatch.Elapsed);

    Console.Read();
}

As expected, the result shows a huge difference, in the order of ~120x:

10000
00:00:06.2345493
10000
00:00:00.0509763

So this result basically confirms what we have seen. I will get in touch with James Newton King and try to shed more light on the subject.

Conclusion

HttpContent.ReadAsAsync on JSON payloads is really slow - in the order of 120x compared to JsonConvert. I guess it might have to do with the memory efficiency of reading from streams (keeping the memory footprint at zero), but that is a guess and I have been in touch with James Newton King (creator of Json.Net) to get to the bottom of it.

In the meantime, if you know your content is not going to be huge and is always JSON, you might as well forget about content negotiation, read it as a string and then use JsonConvert to deserialize.
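Until the fix ships, one way to package that workaround is a small extension method; this is my own sketch rather than anything from the post, and it deliberately gives up content negotiation by assuming the payload is JSON.

// Sketch of a workaround: read the content as a string and let Json.NET deserialise it,
// bypassing the slower formatter-based HttpContent.ReadAsAsync<T> path.
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class HttpContentJsonExtensions
{
    public static async Task<T> ReadAsJsonAsync<T>(this HttpContent content)
    {
        var json = await content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<T>(json);
    }
}

// Usage: var customer = await responseMessage.Content.ReadAsJsonAsync<Customer>();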


Radenko Zec: Manual JSON serialization from DataReader in ASP.NET Web API

In one of my previous blog posts, 8 ways to improve ASP.NET Web API performance, I talked about how we used manual JSON serialization from a DataReader to gain some performance benefits.

I haven’t provided any code example, but only link to this excellent blog post from Rick Strahl JSON serialization of a DataReader.

In our production project we’ve used code from Rick Strahl’s blog post to do this.

Instead of reading values from the DataReader, populating objects, and then reading the values from those objects again to produce JSON using some JSON serializer, we can directly create a JSON string from the DataReader and avoid the unnecessary creation of objects.

I’ve received lots of requests to explain this method on my blog.

JsonSerialization

Set up JSON serialization project

As I mentioned above, we've used WestWind's code to do this.

However, this library which is used to perform JSON Serialization is quite large and contains a lot of code that we didn’t need.

So I've just extracted the code related to JSON serialization and modified it a little bit to support an additional data type and to serialize in camelCase.

We have also removed some data types we didn’t need.

You can download this modified project library here.

How to use this library

Code for serialization from DataReader to JSON string is very small.

We just need to call the Serialize method and pass the instance of DataReader we want to serialize.

string jsonResult;
var serializer = new WestwindJsonSerializer
{
     DateSerializationMode = JsonDateEncodingModes.Iso
};

using (SqlDataReader reader = cmd.ExecuteReader())
{ 
     jsonResult = serializer.Serialize(reader);
}

 

How to implement this in your ASP.NET Web API controller method

We have implemented the code above in our repository method GetItemsAsJson, which is used for retrieving items from the database.

The code looks similar to this.

public HttpResponseMessage GetItems([FromUri]ItemQueryParams itemQueryParams)
{

var jsonResult = this.itemRepository.GetItemsAsJson(itemQueryParams);
if (!string.IsNullOrEmpty(jsonResult))
{
     var response = Request.CreateResponse(HttpStatusCode.OK);
     response.Content = new StringContent(jsonResult, Encoding.UTF8, "application/json");
     return response;
}
...
...
}
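The repository method itself isn't shown in the post; the following is only a sketch of what GetItemsAsJson might look like, reusing the WestwindJsonSerializer call from above. The connection string name, the SQL text and the CategoryId property on ItemQueryParams are illustrative assumptions.

// Hedged sketch of a repository method that streams a DataReader straight to JSON.
// Requires using System.Configuration; and using System.Data.SqlClient;
// The connection string name, SQL and parameter below are hypothetical.
public string GetItemsAsJson(ItemQueryParams itemQueryParams)
{
    var serializer = new WestwindJsonSerializer
    {
        DateSerializationMode = JsonDateEncodingModes.Iso
    };

    var connectionString =
        ConfigurationManager.ConnectionStrings["ItemsDb"].ConnectionString;

    using (var connection = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT Id, Name, Price FROM Items WHERE CategoryId = @categoryId", connection))
    {
        cmd.Parameters.AddWithValue("@categoryId", itemQueryParams.CategoryId);
        connection.Open();

        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            return serializer.Serialize(reader);
        }
    }
}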

 

If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.

 

The post Manual JSON serialization from DataReader in ASP.NET Web API appeared first on RadenkoZec blog.


Ali Kheyrollahi: SuperBenchmarker v0.4 released

Level [T2]

This is a quick shoutout on the release of version 0.4 of SuperBenchmarker, a Web and/or Web API performance benchmarking command line tool for Windows.

You might have heard about and used Apache Benchmark (ab.exe) in the past, which is a very useful tool, but on Windows it is very limited (e.g. it cannot make POST, PUT, etc. requests and only supports GET). SuperBenchmarker (sb.exe) supports PUT, DELETE, POST or any arbitrary method and allows you to parameterise the URL and headers using a data file or a .NET DLL plugin; the new feature in this release is randomisation, which removes the need for any setup when all that is needed is random data.


Getting started

The best way to get SuperBenchmarker is to use the awesome Chocolatey, which is Windows' equivalent of the apt-get tool on Linux.

To get Chocolatey, just run this command in your Powershell console (in Administrative mode):
iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
And then install SuperBenchmarker in the command line shell:
c:\> cinst SuperBenchmarker
And now you are ready to load test:
c:\> sb -u http://google.com
Note: if you are using Visual Studio's command line shell, you cannot use the ampersand character (&) and you have to escape it using the hat character (^).

Using SuperBenchmarker

Normally you would define total number of requests and concurrency:
c:\> sb -u http://google.com -c 10 -n 2000
The statement above runs 2000 requests with concurrency of 10. At the end, you are shown important metrics of the test:
Status 503:    1768
Status 200: 232

TPS: 98 (requests/second)
Max: 11271.1890515802ms
Min: 3.15724613377097ms
Avg: 497.181240820346ms

50% below 34.0499543844287ms
60% below 41.8178295863705ms
70% below 48.7612961478952ms
80% below 87.4385213898198ms
90% below 490.947293319644ms
So the breakdown of the statuses returned, TPS (transaction per second), minimum, maximum and average of the time taken. But more importantly, your percentiles that really should be driving your performance SLAs (90% or 99%). [Never use the average for anything].

In case you need to dig deeper, a log file gets created in the current directory with the name run.log, which you can change using the -l parameter:
c:\> sb -u http://google.com -c 10 -n 2000 -l c:\temp\mylog.txt
The log file is a tab-separated file which contains these columns: order number (based on the time started, not the time ended), status code, time taken in ms and then any custom parameters that you might have had - see below.

Sometimes when running a test for the first time, something might not be quite right, in which case you can make a dry run/debug using the -d parameter, which makes a single request and shows the body of the response at the end. If you want to see the headers as well, use the -h parameter.
c:\> sb -u http://google.com -c 10 -n 2000 -d -h

Supplying request headers or a payload for POST, PUT and DELETE

In order to pass your tailored request headers, a template file needs to be defined which is basically the HTTP request template (minus the first line defining verb and URL and version):
c:\> sb -u http://google.com -t template.txt
And the template.txt contains our custom headers (from the second line of the HTTP request):
User-Agent: SuperBenchmarker
MyCustomHeader: foo-bar;baz=biz
Please note that you don't have to provide headers such as Host and Content-Length - in fact it will raise errors. These headers will be automatically added by the underlying framework.

For using POST, PUT and DELETE we need to supply the verb parameter:
c:\> sb -u http://google.com -v POST
But this request would require a payload as well which we need to supply. We use the template file to supply HTTP payload as well as any headers. Similar to an HTTP request, there must be an empty line between headers and body:
User-Agent: WhateverValueIWant
Content-Type: application/x-www-form-urlencoded

name=value&age=25

Parameterising your requests

Basically you can parameterise your requests using a CSV file containing values, your plugin DLL or by specifying randomisation.

You would define parameters in URL and headers (payload not yet supported but coming soon in 0.5) using SuperBenchmarker's syntax:
{{{MyParameter}}}
As you can see, we use three curly brackets to denote a parameter. For example the statement below defines a customerId parameter:
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId}}}^&ignore=false"
Please note quoting the URL and use of ^ to escape & character - if you are using Visual Studio command prompt. To run the test successfully, you need to provide a CSV file containing customerId:
customerId
123,
245,
and use -f option to run the test:
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId}}}&ignore=false" -f c:\mypath\values.csv
Alternatively, you can use a plugin DLL to provide values:
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId}}}&ignore=false" -p c:\mypath\myplugin.dll
This DLL must have a single public class implementing IValueProvider interface which has a single method:
public interface IValueProvider
{
    IDictionary<string, object> GetValues(int index);
}
For every request, the implementation of the interface is called with the index of the request, and in return a dictionary of field names with their respective values is passed back (a small sample implementation follows below).
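As an illustration only (not from the post), a plugin that feeds the {{{customerId}}} parameter with random values could look like this; it references the IValueProvider interface from the SuperBenchmarker assembly and is compiled into a class library that you pass with -p.

// Illustrative plugin: supplies a random customerId for every request.
// Assumes a project reference to the SuperBenchmarker assembly that defines IValueProvider.
using System;
using System.Collections.Generic;

public class CustomerIdProvider : IValueProvider
{
    private static readonly Random Random = new Random();

    public IDictionary<string, object> GetValues(int index)
    {
        return new Dictionary<string, object>
        {
            { "customerId", Random.Next(1000, 2000) } // fills {{{customerId}}} in the URL
        };
    }
}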

Now we have a new feature that in most cases alleviates the need for a CSV file or plugin, and that is the ability to set up a random value provider in the definition of the parameter itself:
c:\> sb -u "http://myserver.com/api/customer?customerid={{{customerId:RAND_INTEGER:[1000:2000]}}}&ignore=false"
The parameter above is set up to be filled by a random integer between 1000 and 2000.
Possible value types are:
  • String: using RAND_STRING. Will output random words
  • Date: using RAND_DATE (accepts range)
  • DateTime: using RAND_DATETIME (accepts range)
  • DateTimeOffset: using RAND_DATETIMEOFFSET which outputs ISO dates (accepts range)
  • Double: using RAND_DOUBLE (accepts range)
  • Name: using RAND_NAME. Will output random names

Feedback

Don't forget to leave feedback with issues and feature requests on the GitHub page. Happy load testing!


Darrel Miller: Xamarin Evolve

I spent this past week in Atlanta, Georgia, attending Xamarin Evolve and Atlanta Code Camp.  This was the second annual Evolve conference and attendance went from 600 the first year to 1200 this year.  This year's event was an impressive affair.

WP_20141010_001

Cultivating a Niche

Not only did the number of attendees grow significantly from last year, there were 700 of those people who attended the pre-conference training sessions.  This is a significant amount of demand for what is perceived by some to be a fairly expensive set of tools to build cross-platform native mobile applications.

As Mike Beverly notes, Xamarin are making the transition from a place where C# .NET developers go to create iOS and Android versions of their applications, to a cross-platform solution for developers of all platforms who are prepared to learn C# and .NET.

The idea that you can build mobile applications that have the look and feel of the native platform, but share a significant chunk of client code between platforms is a lofty goal.  One that Xamarin seem to be making significant progress towards.

Building a Community

Numerous people have made comparisons between the atmosphere at the Evolve event and Microsoft PDCs of the past.  There is no doubt there was a palpable excitement in the air.

There was a sensation that Xamarin was going all out to make sure that attendees got everything they possibly could out of the event.  Xamarin employees were everywhere and highly visible due to them all wearing the same shirts.  Attendees could schedule one on one meetings to discuss projects and ask questions.  The Darwin Lounge was full of toys and challenges to give developers the chance to explore all kinds of cool things.

The technical production of the event, the food and drinks, the social events, the venue, were all quite spectacular.   This holistic focus on quality is what makes developers want to be  part of the community.  It feels safe.  It feels like it will have a future. 

WP_20141009_001

We as developers are tired of half-finished products being thrown over the wall at us by vendors; those products being milked dry once they become successful and then left to rot when the next cool thing comes along.

When you eat the locally made fruit popsicle and find some Xamarin inspired pun printed on the popsicle stick, you want to believe that the same attention to detail is going into the products.

Heading in the right direction

While preparing for my talk at Evolve I took the opportunity to write some sample applications and test some of my OSS PCL libraries on various platforms.  I was impressed that my libraries worked unchanged on Android.  Creating simple applications that worked on multiple platforms was quite straightforward once I had figured out how to setup the Android Emulator.

Xamarin have recognized the pain of using the Android SDK for development and announced the release of their own Android player that not only makes setup and configuration way easier, but it is also significantly faster than Google’s own emulator.

In the opening keynote, there were a number of other significant product announcements.  The Sketches tool provides an instant feedback mechanism to write code and see it immediately appear on apps running on iOS and Android emulators.  This high speed feedback loop is the kind of feature that Web Browser and HTML/CSS advocates have been rubbing in the face of native developers for years.

The Insights real-time monitoring product and new features in Test Cloud demonstrate Xamarin’s desire to get developers to produce high quality apps with their toolset.  This is a far cry from Microsoft’s efforts to evangelize the Windows Phone where the marketing department seem to be focused on the quantity of apps in the app store with little care for quality.

Architecture is important

Regardless of all the delightful trimmings that VC funded companies are able to display, when it comes down to it, architecture is a key component of longevity.  Native user interface will always have a performance edge over a generalized user interface solution.  In the past, cross-platform issues and development speed made native development a barrier to many.  Xamarin is lowering that barrier.

As someone who has always been a believer in native user experiences, I really hope to see many more editions of Evolve in the future.


Taiseer Joudeh: Two Factor Authentication in ASP.NET Web API & AngularJS using Google Authenticator

Last week I was looking at how to enable Two Factor Authentication in a RESTful ASP.NET Web API service using soft tokens, not SMS. Most of the examples out there show how to implement this in an MVC application where there will be some cookies transmitted between requests; this approach defeats the stateless nature of RESTful APIs. As well, most of the examples ask for the passcode on the login form only, so there was no complete example of how to implement TFA in a RESTful API.

The live AngularJS demo application is hosted on Azure, the source code for this tutorial on GitHub.

Basically the requirements I needed to fulfill in the solution I’m working on are the following:

  • The Web API should stay stateless, no cookies should be transmitted between the consumer and the API.
  • Each call to the Web API should be authenticated using OAuth 2.0 Bearer tokens as we implemented before.
  • Two Factor Authentication is required only for some sensitive API requests; in other words, only selected sensitive endpoints (i.e. transfer money) should ask for the Two Factor Authentication time sensitive passcode along with a valid bearer access token, while the other endpoints will use only one factor for authentication, which is the OAuth 2.0 bearer access token.
  • We need to build a cost effective solution; there is no need to add cost by sending SMSs which contain a passcode, so we will depend on our users' smartphones as a Soft Token. The user needs to install the Google Authenticator application on his/her smartphone, which will generate a time sensitive passcode valid for only 30 seconds.
  • Lastly the front-end application which will consume this API will be built using AngularJS and should support displaying QR codes for the Preshared key.

TFA Featured Image

Before jumping into the implementation I’d like to emphasize some concepts to make sure that we understand the core components of Two Factor Authentication and how Google Authenticator works as Soft Token.

What is Two Factor Authentication?

Any user trying to access a system can be authenticated by one of the below ways:

  1. Something the user knows (Password, Secret).
  2. Something the user owns (Mobile Phone, Device).
  3. Something the user is (Bio-metric, fingerprint).

Two Factor authentication is a combination of any two of the above three ways. When we want to apply this to real business world applications, we usually use the first and the second ways. This is because the third way (biometrics) is very expensive and complicated to roll out, and the end user experience can be problematic.

So Two Factor authentication is made up of something that the user knows and another thing the user owns. The smartphone is something the user owns so receiving a passcode (receiving SMS or call) on it, or generating a passcode using the smartphone (as in our case) will allow us to add Two Factor authentication.

In our Back-end API we’ll ask the user to provide the below two separate factors when he/she wants to access a sensitive endpoint, the factors the user should provide are:

  1. The OAuth 2.0 bearer access tokens which is granted to the user when he provides his/her username and password at an earlier stage.
  2. The time sensitive passcode generated on the user smartphone which he/she owns using Google Authenticator.

What is Google Authenticator?

Google Authenticator is a mobile based application which is available on different platforms; the application works as a soft token which is responsible for generating time sensitive passcodes used to implement Two Factor authentication for Google services. Basically, the time sensitive passcode generated contains six digits which are valid for 30 seconds only, then a new six-digit code is generated immediately.

Once you install the Google Authenticator application on your smartphone, it will basically ask you to enter a Preshared Secret/Key between you and the application. This Preshared Key will be generated by our back-end API for each user registered in our system and will be displayed on the UI as a QR code or as plain text, so the user can add this Preshared Key to Google Authenticator. More about generating this Preshared Key later in this post.

The nice thing about Google Authenticator is that it generates the time sensitive passcode on the smartphone without depending on anything; there is no need for an internet connection or access to Google services to generate this passcode. How does this happen? Well, Google Authenticator implements the Time-based One-time Password algorithm (TOTP), which computes a one-time password from a shared secret key and the current time; TOTP is based on the HMAC-based One Time Password algorithm (HOTP), with a timestamp replacing the incrementing counter in HOTP. You can check the implementation of Google Authenticator here.

Google Authenticator

In our solution the first step to authenticate the user is providing his username and password in order to obtain an OAuth 2.0 bearer access token, which is considered a type of knowledge-factor authentication. Then, as we agreed before, for certain selective sensitive API endpoints (those requiring a second factor), the user will open the Google Authenticator application installed on his/her smartphone and look up the time sensitive passcode generated from the Preshared Key; this passcode represents something the user owns (ownership-factor authentication).

Process flow for using Google Authenticator with our Back-end API

In this section I’ll show you the process flow which the user will follow in our Two Factor authentication enabled solution in order to allow the user to access the selective sensitive API end-points.

  • The user will register in our back-end system by providing username, password and confirm password (normal registration process). Upon successful registration, the back-end API generates a Preshared Key (PSK) for this user and returns it to the UI in the form of a QR code and plain text (5FDAEHNBNM6W3L2S), so the user will open the Google Authenticator application installed on his smartphone and scan this Preshared Key, or enter it manually and select "Time Based" key. The UI will look as the image below:

TFA Signup Form

  • Now the user will login to our system by providing his username/password and obtaining an OAuth 2.0 access token which will allow him to access our protected API resources. As long as he is not trying to access an elevated sensitive endpoint (i.e. transfer money), our back-end API will be happy with only one-factor authentication (the knowledge-based factor), which is the OAuth 2.0 bearer access token. As the images below show, the user is allowed to access his transactions history (a non-sensitive endpoint, yet still protected by one-factor authentication).

Login

Transactions History

  • Now the user wants to perform an elevated sensitive request (Transfer Money). This endpoint in our back-end API is configured to ask for a second ownership factor before completing the request, so the UI will ask the user to enter the time sensitive passcode provided by Google Authenticator. The UI for Transfer Money will look as below:

Transfer Money TFA Code

  • The user will fill the form with the details he wants; if he tries to issue a transfer request without providing the second factor (the passcode), the request will be rejected and HTTP 401 Unauthorized will be returned, because this endpoint is sensitive and needs a second ownership factor to authenticate successfully. So the user will grab his smartphone, open the Google Authenticator application, type the six-digit passcode that appears in the application and hit the transfer button. This six-digit code will be sent in a custom HTTP header along with the OAuth 2.0 access token to this sensitive endpoint.

Google Authenticator Passcode

  • Now the API receives this pass code for example (472307), and checks the code validity, if it is valid then the API will process the request and the user will complete transferring the money successfully (execution for the sensitive end-point is successful).

Money Transfer Successfully

The last step is the trickiest one. You might be asking yourself how this six-digit code generated by Google Authenticator, which keeps changing every 30 seconds, will be understood and considered authentic by our back-end API. To answer this we need to take a brief look at how Google Authenticator produces those six digits, because we'll simulate the same procedure in our back-end API.

How does Google Authenticator work?

As we have stated before Google Authenticator supports both TOTP algorithm on top of HOTP algorithm for generating onetime pass-codes or time sensitive pass-codes.

The idea behind HOTP is that the server (our back-end API) and the client (the Google Authenticator app) share a Preshared Key value and a counter. The Preshared Key and the counter are used to compute a one-time passcode independently on both sides (notice it is a one-time passcode, not a time sensitive passcode). Whenever a passcode is generated and used, the counter is incremented on both sides, allowing the server and client to remain in sync.

TOTP is built on top of HOTP; it uses the same algorithm as HOTP with one clear difference: the counter used in TOTP is replaced by the current time. But the challenge here is that a passcode derived directly from the current second would change too rapidly, which is not feasible because the end user would not be able to enter the number when asked for it.

To solve this issue we'll derive the counter from the current UTC time divided into 30-second intervals, so it remains constant within each 30-second window; by doing this the six-digit passcode becomes usable and stays constant for up to 30 seconds. Note that all time calculations are based on UTC, so there is no need to worry about the time zone of the server or the time zone of the user running the Google Authenticator application.

When the sensitive API endpoint receives this time sensitive passcode, it will repeat the process described above. Basically, the API will identify the user from the OAuth access token sent, fetch the Preshared Key saved in the database for this user, and from there generate the time sensitive passcode and compare it with the passcode sent from the client application (UI). If the passcode sent from the UI matches the one generated in the back-end API, then the API will consider this passcode authentic and process the request successfully.

Note: One of the greatest books which describes how Google Authenticator works is Pro ASP.NET Web API Security (Chapter 14) written by Badrinarayanan Lakshmiraghavan. Highly recommend if you are interested in ASP.NET Web API security in general.

Now it is time to start implementing this solution. This is the longest theory section I've written so far in all my blog posts, so we'd better move to the code :)

Building the Back-End API

I highly recommend reading my previous post Token Based Authentication before implementing the steps below and enabling Two Factor Authentication, because the steps mentioned below are nearly identical to that post. I'll list the identical steps and be very brief in explaining what each step is doing, except for the new steps which are related to enabling Two Factor Authentication in our back-end API.

Step 1: Creating the Web API Project

Create an empty solution and name it "TFAWebApiAngularJS", then add a new ASP.NET Web Application named "TwoFactorAuthentication.API"; the selected template for the project will be the "Empty" template with no core dependencies. Notice that the authentication is set to "No Authentication", taking into consideration that we'll add this manually.

Step 2: Installing the needed NuGet Packages:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0
Install-Package Microsoft.Owin.Security.OAuth -Version 3.0.0
Install-Package Microsoft.AspNet.Identity.Owin -Version 2.0.1
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.0.1

Step 3: Add Owin “Startup” Class

Right click on your project then add a new class named “Startup”. It will contain the code below:

public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            ConfigureOAuth(app);

            WebApiConfig.Register(config);
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            app.UseWebApi(config);

        }

        public void ConfigureOAuth(IAppBuilder app)
        {
            OAuthBearerAuthenticationOptions OAuthBearerOptions = new OAuthBearerAuthenticationOptions();

            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                //For Dev enviroment only (on production should be AllowInsecureHttp = false)
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
                Provider = new SimpleAuthorizationServerProvider()
            };

            // Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);

            //Token Consumption
            app.UseOAuthBearerAuthentication(OAuthBearerOptions);

        }
    }

Basically this class will configure our back-end API to use OAuth 2.0 bearer tokens to secure the endpoints attributed with the [Authorize] attribute; it also sets the access token to expire after 24 hours. We'll implement the class "SimpleAuthorizationServerProvider" in later steps.

Step 4: Add “WebApiConfig” class

Right click on your project, add a new folder named "App_Start", inside this folder add a class named "WebApiConfig", then paste the code below:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API routes
            config.MapHttpAttributeRoutes();

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

Step 5: Add the ASP.NET Identity System

Now we'll configure our user store to use the ASP.NET Identity system to store the user profiles. To do this we need a database context class which will be responsible for communicating with our database, so add a new class named "AuthContext" and paste the code snippet below:

public class AuthContext : IdentityDbContext<ApplicationUser>
    {
        public AuthContext()
            : base("AuthContext")
        {
        }
    }

    public class ApplicationUser : IdentityUser
    {
        [Required]
        [MaxLength(16)]
        public string PSK { get; set; }
    }

As you see in the code above, the “AuthContext” class is inheriting from “IdentityDbContext<ApplicationUser>”, where the “ApplicationUser” inherits from “IdentityUser”, this is done like this because we need to extend the “AspNetUsers” table and add a new column named “PSK” which will contain the Preshared key generated for this user in our back-end API. More on generating this Preshared key later in the post.

Now we want to add “UserModel” which contains the properties needed to be sent once we register a user, this model is a POCO class with some data annotations attributes used for the sake of validating the registration payload request. So add a new folder named “Models” then add a new class named “UserModel” and paste the code below:

public class UserModel
    {
        [Required]
        [Display(Name = "User name")]
        public string UserName { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [DataType(DataType.Password)]
        [Display(Name = "Confirm password")]
        [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    }

Now we need to add a new connection string named "AuthContext" in our Web.config file, so open your web.config and add the below section:

<connectionStrings>
    <add name="AuthContext" connectionString="Data Source=.\sqlexpress;Initial Catalog=TFAAuth;Integrated Security=SSPI;" providerName="System.Data.SqlClient" />
  </connectionStrings>

Step 6: Add Repository class to support ASP.NET Identity System

Now we want to implement two methods needed in our application: "RegisterUser" and "FindUser". So add a new class named "AuthRepository" and paste the code snippet below:

public class AuthRepository :IDisposable    
    {
        private AuthContext _ctx;

        private UserManager<ApplicationUser> _userManager;

        public AuthRepository()
        {
            _ctx = new AuthContext();
            _userManager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(_ctx));
        }

        public async Task<IdentityResult> RegisterUser(UserModel userModel)
        {
            ApplicationUser user = new ApplicationUser
            {
                UserName = userModel.UserName,
                TwoFactorEnabled = true,
                PSK = TimeSensitivePassCode.GeneratePresharedKey()
            };

            var result = await _userManager.CreateAsync(user, userModel.Password);

            return result;
        }

        public async Task<ApplicationUser> FindUser(string userName, string password)
        {
            ApplicationUser user = await _userManager.FindAsync(userName, password);
            
            return user;
        }

        public void Dispose()
        {
            _ctx.Dispose();
            _userManager.Dispose();

        }
    }

By looking at the code above you will notice that we are generating a Preshared Key for the registered user by calling the static method "TimeSensitivePassCode.GeneratePresharedKey" (implemented in the next step); this Preshared Key will be sent back to the end user so he can enter this 16-character key in his Google Authenticator application.

Step 7: Add support for generating Preshared keys and passcodes

Note: There are lots of implementations of the Google Authenticator algorithms (HOTP and TOTP) out there on different platforms, including .NET, but nothing beats Badrinarayanan Lakshmiraghavan's implementation in simplicity and the minimal amount of code used. The implementation is available to the public in the companion source code which comes with his Pro ASP.NET Web API Security book, so all the credit for the implementation below goes to Badrinarayanan, and I'll not re-explain how the implementation is done. Badrinarayanan explained it in a very simple way, so my recommendation is to check his book.

So add a new folder named “Services” and inside it add a new file named “TimeSensitivePassCode.cs” then paste the code below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Web;
using System.Text;

namespace TwoFactorAuthentication.API.Services
{
    public static class TimeSensitivePassCode
    {
        public static string GeneratePresharedKey()
        {
            byte[] key = new byte[10]; // 80 bits
            using (var rngProvider = new RNGCryptoServiceProvider())
            {
                rngProvider.GetBytes(key);
            }

            return key.ToBase32String();
        }

        public static IList<string> GetListOfOTPs(string base32EncodedSecret)
        {
            DateTime epochStart = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);

            long counter = (long)Math.Floor((DateTime.UtcNow - epochStart).TotalSeconds / 30);
            var otps = new List<string>();

            otps.Add(GetHotp(base32EncodedSecret, counter - 1)); // previous OTP
            otps.Add(GetHotp(base32EncodedSecret, counter)); // current OTP
            otps.Add(GetHotp(base32EncodedSecret, counter + 1)); // next OTP

            return otps;
        }

        private static string GetHotp(string base32EncodedSecret, long counter)
        {
            byte[] message = BitConverter.GetBytes(counter).Reverse().ToArray(); //Intel machine (little endian) 
            byte[] secret = base32EncodedSecret.ToByteArray();

            HMACSHA1 hmac = new HMACSHA1(secret, true);

            byte[] hash = hmac.ComputeHash(message);
            int offset = hash[hash.Length - 1] & 0xf;
            int truncatedHash = ((hash[offset] & 0x7f) << 24) |
            ((hash[offset + 1] & 0xff) << 16) |
            ((hash[offset + 2] & 0xff) << 8) |
            (hash[offset + 3] & 0xff);

            int hotp = truncatedHash % 1000000; 
            return hotp.ToString().PadLeft(6, '0');
        }
    }

    public static class StringHelper
    {
        private static string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

        public static string ToBase32String(this byte[] secret)
        {
            var bits = secret.Select(b => Convert.ToString(b, 2).PadLeft(8, '0')).Aggregate((a, b) => a + b);

            return Enumerable.Range(0, bits.Length / 5).Select(i => alphabet.Substring(Convert.ToInt32(bits.Substring(i * 5, 5), 2), 1)).Aggregate((a, b) => a + b);
        }

        public static byte[] ToByteArray(this string secret)
        {
            var bits = secret.ToUpper().ToCharArray().Select(c => Convert.ToString(alphabet.IndexOf(c), 2).PadLeft(5, '0')).Aggregate((a, b) => a + b);

            return Enumerable.Range(0, bits.Length / 8).Select(i => Convert.ToByte(bits.Substring(i * 8, 8), 2)).ToArray();
        }

    }
}

Briefly, what we've implemented in this class is the following:

  • We've added a static method named "GeneratePresharedKey" which is responsible for generating 16 characters as our Preshared Key; basically it is an array of 80 bits encoded using the base32 format. This base32 format uses the 26 letters A-Z and the six digits 2-7, which produces a restricted set of characters that can be conveniently used by end users.
  • Why did we encode the key using the base32 format? Because Google Authenticator uses the same encoding to help end users who prefer to enter the Preshared Key manually without any confusion; the digits which might be confused with letters are omitted (i.e. 0, 1, 8, 9). The implementation for base32 encoding can be found in the extension method named "ToBase32String" in the helper class "StringHelper".
  • We’ve implemented the static method “GetHotp” which accepts the base32 encoded Preshared key (16 characters) and a counter, this method will be responsible for generating the One time passcodes.
  • As we stated before, the implementation of TOTP is built on top of HOTP; the only major difference is that the counter is replaced by time. So in the method "GetListOfOTPs" we are getting three time sensitive passcodes: one in the past, one in the present, and one in the future. The reason for doing this is to accommodate clock drift between the server and the smartphone where the passcode is generated using Google Authenticator; we are basically making it easy for the end user when he enters the time sensitive passcodes (a small usage sketch follows this list).
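A minimal usage sketch of the class above, assuming it is referenced from the Services namespace as defined in this step; checking the entered code against all three generated codes is what gives us the clock-drift tolerance described in the last bullet.

// Usage sketch: generate a key at registration time and later verify a user-entered code
// against the previous, current and next 30-second passcodes.
using System.Linq;
using TwoFactorAuthentication.API.Services;

public static class PasscodeCheckSample
{
    public static string NewKey()
    {
        return TimeSensitivePassCode.GeneratePresharedKey(); // store this with the user record
    }

    public static bool IsCodeValid(string presharedKey, string userEnteredCode)
    {
        return TimeSensitivePassCode.GetListOfOTPs(presharedKey)
                                    .Any(code => code == userEnteredCode);
    }
}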

Step 8: Add our “Account” Controller

Now we'll add a Web API controller which will be used to register new users, so add a new folder named "Controllers", then add an Empty Web API 2 Controller named "AccountController" and paste the code below:

[RoutePrefix("api/Account")]
    public class AccountController : ApiController
    {
        private AuthRepository _repo = null;

        public AccountController()
        {
            _repo = new AuthRepository();
        }

        // POST api/Account/Register
        [AllowAnonymous]
        [Route("Register")]
        public async Task<IHttpActionResult> Register(UserModel userModel)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            IdentityResult result = await _repo.RegisterUser(userModel);
            
            IHttpActionResult errorResult = GetErrorResult(result);

            if (errorResult != null)
            {
                return errorResult;
            }

            ApplicationUser user = await _repo.FindUser(userModel.UserName, userModel.Password);

            return Ok(new { PSK = user.PSK });
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                _repo.Dispose();
            }

            base.Dispose(disposing);
        }

        private IHttpActionResult GetErrorResult(IdentityResult result)
        {
            if (result == null)
            {
                return InternalServerError();
            }

            if (!result.Succeeded)
            {
                if (result.Errors != null)
                {
                    foreach (string error in result.Errors)
                    {
                        ModelState.AddModelError("", error);
                    }
                }

                if (ModelState.IsValid)
                {
                    // No ModelState errors are available to send, so just return an empty BadRequest.
                    return BadRequest();
                }

                return BadRequest(ModelState);
            }

            return null;
        }
    }

It is worth noting here that inside the "Register" method we're returning the PSK after registering the user successfully; this PSK will be displayed on the UI in the form of a QR code and plain text, so we give the user 2 options to enter it in Google Authenticator.
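The QR code the UI displays typically encodes the standard otpauth URI format that Google Authenticator understands; the following is only a sketch of building that URI, and the account label and issuer values are illustrative, not from the post.

// Hypothetical sketch: building the otpauth:// URI that Google Authenticator can scan.
// The label ("SuperUser") and issuer ("TFADemoApp") are illustrative values.
using System;

public static class OtpAuthUriSample
{
    public static string Build(string userName, string presharedKey)
    {
        return string.Format(
            "otpauth://totp/{0}?secret={1}&issuer={2}",
            Uri.EscapeDataString(userName),     // e.g. "SuperUser"
            presharedKey,                       // e.g. "5FDAEHNBNM6W3L2S" returned by Register
            Uri.EscapeDataString("TFADemoApp"));
    }
}

Encoding this URI as a QR code on the client (with any QR library) lets the user scan it instead of typing the 16-character key manually.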

Step 9: Add Protected Money Transactions Controller with Two Actions

Now we'll add a protected controller which can be accessed only if the request contains a valid OAuth 2.0 bearer access token. Inside this controller we'll add 2 action methods: one of them, "GetHistory", requires only one-factor authentication (just the bearer token) to process the request; the other is a sensitive action method named "PostTransfer" which requires Two Factor authentication to process the request.

So add new Web API controller named “TransactionsController” under folder Controllers and paste the code below:

[Authorize]
    [RoutePrefix("api/Transactions")]
    public class TransactionsController : ApiController
    {
        [Route("history")]
        public IHttpActionResult GetHistory()
        {
            return Ok(Transaction.CreateTransactions());
        }

        [Route("transfer")]
        [TwoFactorAuthorize]
        public IHttpActionResult PostTransfer(TransferModeyModel transferModeyModel)
        {
            return Ok();
        }
    }

    #region Helpers

    public class Transaction
    {
        public int ID { get; set; }
        public string CustomerName { get; set; }
        public string Amount { get; set; }
        public DateTime ActionDate { get; set; }


        public static List<Transaction> CreateTransactions()
        {
            List<Transaction> TransactionList = new List<Transaction> 
            {
                new Transaction {ID = 10248, CustomerName = "Taiseer Joudeh", Amount = "$1,545.00", ActionDate = DateTime.UtcNow.AddDays(-5) },
                new Transaction {ID = 10249, CustomerName = "Ahmad Hasan", Amount = "$2,200.00", ActionDate = DateTime.UtcNow.AddDays(-6)},
                new Transaction {ID = 10250,CustomerName = "Tamer Yaser", Amount = "$300.00", ActionDate = DateTime.UtcNow.AddDays(-7) },
                new Transaction {ID = 10251,CustomerName = "Lina Majed", Amount = "$3,100.00", ActionDate = DateTime.UtcNow.AddDays(-8)},
                new Transaction {ID = 10252,CustomerName = "Yasmeen Rami", Amount = "$1,100.00", ActionDate = DateTime.UtcNow.AddDays(-9)}
            };

            return TransactionList;
        }
    }

    public class TransferModeyModel
    {
        public string FromEmail { get; set; }
        public string ToEmail { get; set; }
        public double Amount { get; set; }
    }

    #endregion
}

Notice that the “Authorize” attribute is set at the controller level for both actions, which means that both actions need the bearer token to process the request. On the action method “PostTransfer”, however, there is a new custom authorize filter attribute named “TwoFactorAuthorizeAttribute” set on top of the action method; this custom authorize attribute is responsible for enabling Two Factor authentication on any sensitive controller, action method, or HTTP verb in the future. All we need to do is apply this custom attribute to the endpoint we want to elevate the security level on and require a second authentication factor to process the request.

Step 10: Implement the “TwoFactorAuthorizeAttribute”

Before starting to implement the “TwoFactorAuthorizeAttribute” we need to add a simple helper class which will be responsible for inspecting the request headers, looking for the time sensitive passcode sent by the client application in a custom header. To implement this, add a new folder named “Helpers”, and inside it add a new file named “OtpHelper” and paste the code below:

public static class OtpHelper
    {
        private const string OTP_HEADER = "X-OTP";

        public static bool HasValidTotp(this HttpRequestMessage request, string key)
        {
            if (request.Headers.Contains(OTP_HEADER))
            {
                string otp = request.Headers.GetValues(OTP_HEADER).First();
                
                // We need to check the passcode against the past, current, and future passcodes
               
                if (!string.IsNullOrWhiteSpace(otp))
                {
                    if (TimeSensitivePassCode.GetListOfOTPs(key).Any(t => t.Equals(otp)))
                    {
                        return true;
                    }
                }

            }
            return false;
        }
    }

So basically this class is looking for a custom header named “X-OTP” in the HTTP request headers; if this header is found we’ll send its value (the time sensitive passcode) along with the Preshared Key for this authenticated user to the method “GetListOfOTPs” which we defined in step 7.

If this time sensitive passcode exists in the list of pass codes (past, current, and future passcodes) then this means that this passcode is valid and authentic, otherwise the passcode is invalid or the user didn’t include it in the request.

Now to implement the custom “TwoFactorAuthorizeAttribute” we need to add a new folder named “Filters” and inside it add a new class named “TwoFactorAuthorizeAttribute” then paste the code below:

public class TwoFactorAuthorizeAttribute : AuthorizationFilterAttribute
    {
        public override Task OnAuthorizationAsync(HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
        {
            var principal = actionContext.RequestContext.Principal as ClaimsPrincipal;

            var preSharedKey = principal.FindFirst("PSK").Value;
            bool hasValidTotp = OtpHelper.HasValidTotp(actionContext.Request, preSharedKey);

            if (hasValidTotp)
            {
                return Task.FromResult<object>(null);
            }
            else
            {
                actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized, new CustomError() { Code = 100, Message = "One Time Password is Invalid" });
                return Task.FromResult<object>(null);
            }
        }
    }

    public class CustomError
    {
        public int Code { get; set; }
        public string Message { get; set; }
    }

What we’ve implemented in this custom authorization filter is the following:

  • This custom Authorize attribute inherits from “AuthorizationFilterAttribute” and we are overriding the “OnAuthorizationAsync” method.
  • We can be 100% sure that the code execution flow will not reach this authorization filter attribute if the user is not authenticated by the OAuth 2.0 bearer token sent; this custom authorize attribute runs later in the pipeline, after the “Authorize” attribute.
  • Inside the “OnAuthorizationAsync” method we are looking at the claims for the authenticated user; these claims contain a custom claim of type “PSK” which holds the value of the Preshared Key for this authenticated user (we haven’t implemented this yet, you will see how we set the claim in the next step).
  • Then we call the helper method “OtpHelper.HasValidTotp”, passing the HTTP request (which contains the time sensitive passcode in a custom header) along with the Preshared Key. If this method returns true then we consider this request a valid one that has fulfilled the Two Factor authentication requirements.
  • If the request doesn’t contain the valid time sensitive passcode then we will return HTTP status code 401 along with a message and an arbitrary integer code used in the UI.

 Step 11: Implement the “SimpleAuthorizationServerProvider” class

Add a new folder named “Providers” then add new class named “SimpleAuthorizationServerProvider”, paste the code snippet below:

public class SimpleAuthorizationServerProvider : OAuthAuthorizationServerProvider
    {
        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            var  allowedOrigin = "*";
            ApplicationUser appUser = null;

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            using (AuthRepository _repo = new AuthRepository())
            {
                 appUser = await _repo.FindUser(context.UserName, context.Password);

                if (appUser == null)
                {
                    context.SetError("invalid_grant", "The user name or password is incorrect.");
                    return;
                }
            }

            var identity = new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim(ClaimTypes.Role, "User"));
            identity.AddClaim(new Claim("PSK", appUser.PSK));

            var props = new AuthenticationProperties(new Dictionary<string, string>
                {
                    { 
                        "userName", context.UserName
                    }
                });

            var ticket = new AuthenticationTicket(identity, props);
            context.Validated(ticket);
        }

        public override Task TokenEndpoint(OAuthTokenEndpointContext context)
        {

            foreach (KeyValuePair<string, string> property in context.Properties.Dictionary)
            {
                context.AdditionalResponseParameters.Add(property.Key, property.Value);
            }

            return Task.FromResult<object>(null);
        }
    }

It is worth mentioning here that in the second method, “GrantResourceOwnerCredentials”, which is responsible for validating the username and password sent to the authorization server token endpoint, after fetching the user from the database we are adding a custom claim named “PSK”; the value of this claim contains the Preshared Key for this authenticated user. This claim will be included in the signed OAuth 2.0 bearer token, which is why we can directly read the PSK value in step 10.

 Step 12: Testing the Back-end API

The first step to test the API is to register a new user, so open your favorite REST client and issue an HTTP POST request to the endpoint https://ngtfaapi.azurewebsites.net/api/account/register as in the image below:

Register User

If the request is processed successfully you will receive a 200 status along with the Preshared Key, so open the Google Authenticator app and enter this key manually.

Now we need to obtain an OAuth 2.0 bearer access token to allow us to request the protected endpoints; this represents the first authentication factor because the user provides his username and password. So we need to issue an HTTP POST request to the endpoint http://ngtfaapi.azurewebsites.net/token as in the image below:

Request Access Token

Now that we have a valid OAuth 2.0 access token, we’ll try to send an HTTP POST request to the protected, elevated-security endpoint which requires second factor authentication. We’ll issue the request to the endpoint https://ngtfaapi.azurewebsites.net/api/transactions/transfer as shown in the image below, but we’ll not set a value for the custom header (X-OTP), so the API will respond with 401 unauthorized access.

Failed TFA Request

Now in order to make this request authentic we need to open Google Authenticator and get the passcode from there and send it in the (X-OTP) custom header, so the authentic request will be as shown in the image below:

Authentic TFA Request
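If you prefer code over a REST client, the same call can be made with HttpClient; the snippet below is only a hedged sketch (the access token is assumed to have been obtained from the /token endpoint already, the passcode is whatever Google Authenticator currently displays, and the email addresses and amount are made-up values matching the “TransferModeyModel” shape from step 9):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class TransferClient
{
    public static async Task TransferAsync(string accessToken, string passcode)
    {
        using (var client = new HttpClient())
        {
            // First factor: the OAuth 2.0 bearer token
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Second factor: the time sensitive passcode in the custom header inspected by OtpHelper
            client.DefaultRequestHeaders.Add("X-OTP", passcode);

            var json = "{\"FromEmail\":\"sender@example.com\",\"ToEmail\":\"receiver@example.com\",\"Amount\":100}";

            var response = await client.PostAsync(
                "https://ngtfaapi.azurewebsites.net/api/transactions/transfer",
                new StringContent(json, Encoding.UTF8, "application/json"));

            // 200 OK when both factors are valid, 401 with the custom error body otherwise
            Console.WriteLine(response.StatusCode);
        }
    }
}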

The live AngularJS demo application is hosted on Azure, and the source code for this tutorial is available on GitHub.

That’s it for now folks!

I hope this step-by-step post helps you enable Two Factor Authentication in ASP.NET Web API RESTful services for selected sensitive endpoints. If you have any questions please drop me a comment.

Follow me on Twitter @tjoudeh

References

The post Two Factor Authentication in ASP.NET Web API & AngularJS using Google Authenticator appeared first on Bit of Technology.


Dominick Baier: IdentityServer v3 Beta 2-1

We just did a minor update to Beta 2.

Besides some smaller changes and bug fixes we now support redirecting back to a client after logout (a much requested feature). I will write a blog post soon describing how it works.


Filed under: IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Getting started with IdentityServer v3

Last night I started working on a getting started tutorial for IdentityServer v3 – while writing it, it became clear that a single walkthrough will definitely not be enough to show the various options you have – anyway, I started with the canonical “authentication for MVC” scenario, and it is a work in progress.

Watch this space:

https://github.com/thinktecture/Thinktecture.IdentityServer.v3/wiki


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: OpenID Connect Hybrid Flow and IdentityServer v3

One of the features we added in Beta 2 is support for hybrid flow (see spec).  What is hybrid flow – and why do I care?

Well – in a nutshell – OpenID Connect originally extended the two basic OAuth2 flows (or grants) called authorization code and implicit. Implicit allows requesting tokens without explicit client authentication (hence the name), but uses the redirect URI instead to verify client identity. Because of that, requesting long lived tokens like a refresh token is not allowed in that flow.

Authorization code flow on the other hand only returns a so called authorization code via the unauthenticated front channel, and requires client authentication using client id and secret (or some other mechanism) to retrieve the actual tokens (including a refresh token) via the back channel. This mechanism was originally designed for server-based applications only, since storing client secrets on a client device is questionable without the right security mechanisms in place.

Hybrid flow (as the name indicates) is a combination of the above two. It allows requesting a combination of identity token, access token and code via the front channel using either a fragment encoded redirect (native and JS based clients) or a form post (server-based web applications). This enables e.g. scenarios where your client app can make immediate use of an identity token to get access to the user’s identity but also retrieve an authorization code that can be used (e.g. by a back end service) to request a refresh token and thus gain long lived access to resources.

Lastly, hybrid flow is the only flow supported by the Microsoft OpenID Connect authentication middleware (in combination with a form post response mode), and before we added support for hybrid flow to IdentityServer, interop was a bit complicated (see here).

Our samples repo has two clients using hybrid flow – native and web. Let’s have a look at the middleware based one.

You are using hybrid flow whenever you have a response type of code combined with some other response type, e.g. id_token or token (or both).

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
    {
        ClientId = "katanaclient",
        Authority = Constants.BaseAddress,
        RedirectUri = Constants.RedirectUri,
        PostLogoutRedirectUri = Constants.PostLogoutUri,
        ResponseType = "code id_token token",
        Scope = "openid email profile read write offline_access",

        SignInAsAuthenticationType = "Cookies"
    });

The scopes are a combination of identity scopes allowing us to retrieve user identity (email and profile), resource scopes for api access (read and write) and a request for a refresh token (offline_access).

After authentication (and consent), the middleware will validate the response and notify you when it has retrieved the authorization code from the callback form post. You then typically have a number of responsibilities, e.g.

  • Contact the userinfo endpoint to retrieve the claims about the user
  • Transform the claims to whatever format your application expects
  • Store the access token for later use (along with information about its lifetime)
  • Store the refresh token so you can refresh expired access tokens (if long lived access is needed)
  • Store the id_token if you need features at the OpenID Connect provider that requires id token hints (e.g. redirects after logging out)

In our sample, I first strip all the protocol related claims from the identity token:

// filter "protocol" claims
var claims = new List<Claim>(from c in n.AuthenticationTicket.Identity.Claims
                                where c.Type != "iss" &&
                                      c.Type != "aud" &&
                                      c.Type != "nbf" &&
                                      c.Type != "exp" &&
                                      c.Type != "iat" &&
                                      c.Type != "nonce" &&
                                      c.Type != "c_hash" &&
                                      c.Type != "at_hash"
                                select c);

 

…then get the user claims from the user info endpoint

// get userinfo data
var userInfoClient = new UserInfoClient(
    new Uri(Constants.UserInfoEndpoint),
    n.ProtocolMessage.AccessToken);
 
var userInfo = await userInfoClient.GetAsync();
userInfo.Claims.ToList().ForEach(ui => claims.Add(new Claim(ui.Item1, ui.Item2)));

 

…and retrieve a refresh token (and a fresh access token)

// get access and refresh token
var tokenClient = new OAuth2Client(
    new Uri(Constants.TokenEndpoint),
    "katanaclient",
    "secret");
 
var response = await tokenClient.RequestAuthorizationCodeAsync(
    n.Code, n.RedirectUri);

 

Then I combine all the claims together and store them in the authentication cookie:

claims.Add(new Claim("access_token", response.AccessToken));
claims.Add(new Claim("expires_at", 
    DateTime.Now.AddSeconds(response.ExpiresIn).ToLocalTime().ToString()));
claims.Add(new Claim("refresh_token", response.RefreshToken));
claims.Add(new Claim("id_token", n.ProtocolMessage.IdToken));
 
n.AuthenticationTicket = new AuthenticationTicket(
    new ClaimsIdentity(claims.Distinct(new ClaimComparer()), 
        n.AuthenticationTicket.Identity.AuthenticationType), 
        n.AuthenticationTicket.Properties);

 

You might not need all of the above steps, but this is how it generally works. HTH.


Filed under: IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: Identity & Access Control at NDC London 2014

The NDC Agenda is out now – and Brock and I will do a number of identity & access control related sessions.

Brock will talk about identity management in ASP.NET – which is a huge topic – so he split up his talk into two sessions. I will cover OpenID Connect and OAuth2 – and especially the combination of the two protocols. Looking forward to that!

In addition we will do a 2-day workshop on Monday/Tuesday titled “Identity & Access Control for modern Web Applications & APIs” – this will be a deep dive into all the things you need to know to implement authentication and authorization in modern distributed applications. See the agenda here.

Early bird ends on 15th October – would be cool to meet you there and talk about security!


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: IdentityServer v3 – Beta 2

We just pushed IdentityServer v3 beta 2 to github and nuget.

This time it’s been 161 commits and we added a lot of small things – and a couple of bigger things, e.g.:

  • Update to Katana v3 and JWT handler v4
  • Configurable claims for both identity and resource scopes
  • Added support for acr_values and tenant login hints
  • Improved support for custom grant types
  • Added a RequireSsl switch (thus removing the need for the public host name setting)
  • Reworked some internals like the user service, token service, userinfo endpoint, sign in message and validation pipeline
  • Added support for hybrid flow and thus improved compatibility with the Microsoft Katana OpenID Connect middleware
  • Added identity token validation endpoint for clients that don’t have access to the necessary crypto libraries
  • More control over CSP and cookies

Docs will be updated in the next days.

As always – thank you very much for feedback, bug reports etc… we are getting closer to RTW!


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Filip Woj: Route matching and overriding 404 in ASP.NET Web API

In ASP.NET Web API, if the incoming request does not match any route, the framework is simply hard wired to return 404 to the client (or possibly pass through to the next configured middleware, in case of an OWIN hosted Web … Continue reading

The post Route matching and overriding 404 in ASP.NET Web API appeared first on StrathWeb.


Christian Weyer: Installing & Running ASP.NET vNext (Alpha 3) on Ubuntu Linux with Mono 3.8, for real

Yesterday I thought I would try and prove Microsoft’s promise that the new and overall cool ASP.NET vNext would run on many platforms, including Windows, MacOS X, and Linux.

My target platform for this experiment was Linux. The first thing to do in this case is to pick one of the myriad Linux distros. After a bit of investigating I chose the Ubuntu-based Xubuntu. Very slick!

After installing Xubuntu (it took only a few minutes, literally) in a Parallels VM on my MacBook Pro, I went ahead to get Mono running. Turns out the most reliable way to get recent versions of Mono running on stable Linux distros these days is to suck down the source code and compile it.
So here we go with some simple steps (may take a couple of minutes to get through, though):

sudo apt-get install build-essential
wget http://download.mono-project.com/sources/mono/mono-3.8.0.tar.bz2
tar -xvf mono-3.8.0.tar.bz2
cd mono-3.8.0/
./configure --prefix=/usr/local
make
sudo make install


This gives us the Mono 3.8.0 CLI:

mono_cli


According to the ASP.NET vNext Home repo on GitHub this should be all we need to get started with the samples. NOT. When we build the source of the samples we get a network-related exception showing up.

The issue is the certificates used for the package sources. The .NET Framework on Windows uses the Windows certificate store to check whether to accept an SSL certificate from a remote site. In Mono there is no Windows certificate store; it has its own store. By default, it is empty and we need to manage the entries ourselves.

CERTMGR=/usr/local/bin/certmgr
sudo $CERTMGR -ssl -m https://go.microsoft.com
sudo $CERTMGR -ssl -m https://nugetgallery.blob.core.windows.net
sudo $CERTMGR -ssl -m https://nuget.org
sudo $CERTMGR -ssl -m https://www.myget.org/F/aspnetvnext/

mozroots --import --sync

After these steps we should be able to build the samples from the ASP.NET vNext Home repo successfully, e.g. the HelloMvc sample.

Running kpm restore looks promising, at least:

kpm_restore

When we then try to run the sample with k kestrel (Kestrel is the current dev web server in vNext) we get a wonderfully familiar error:

Object reference not set to an instance of an object.

Hooray!!! Sad smile

After some Googling I found out that the underlying libuv (yes, that same libuv used in node.js) seems to be the problem here. A quick chat with the ASP.NET vNext team reveals that this will likely be fixed in the future, but there are some more important things to get done for now.

Anyway, I finally got it to work by replacing libuv as suggested here:

ln -sf /usr/lib/libuv.so.11 native/darwin/universal/libuv.dylib

 

When I now run k kestrel again everything is fine:

k_kestrel_running

By default, Kestrel runs on port 5004 – thus opening our browser of choice with http://localhost:5004 gives us the desired result:

running

Mission accomplished.
Thanks for listening.


Darrel Miller: RESTFest 2014

Last week was RESTfest week.  RESTfest is an unusual little conference that happens in Greenville, South Carolina every September.  This is the fifth year it has run and this is my fourth time attending, and I learn a ton every time.

RESTFest

What makes RESTFest unique is its “everyone speaks” policy.  Although there is a keynote, a whole hack day and a number of regular length talks, the bulk of the conference time is made up of 5 minute lightning talks presented by attendees.

This is not just a conference full of people who have spent years drinking the REST Kool-Aid.  There is always a wide range of experience levels and people with many different technical backgrounds.  This combination makes for some great conversations, and there is plenty of time for them with the social activities that are scheduled during the event.

WP_20140926_001

This year’s keynote was presented by Sean Cribbs.  RESTFest keynotes have a tendency to challenge the audience intellectually, and this year’s crash course in the functional programming theory that is the underpinning of WebMachine was no different.

The hack day was a challenge to create a hypermedia client and/or server based on a simple specification.  The interesting outcome was the wide variety of solutions presented.  We had people using Erlang, .NET, Java, and AngularJS, with the HAL, C+J, Siren, Atom and HTML hypermedia types being used.  It’s also important to realize that a large part of the value of hack day is not the code writing, but the opportunity to see how others are writing code and talking with them about it.  It’s all just conversation fodder.

Did I mention there was moonshine tasting going on?

WP_20140925_002

And the food is awesome,

WP_20140927_001

There is one more thing that makes RESTfest fairly unique amongst small “unconference” style events, and that is that we have had a tradition of recording the sessions.  The first couple of years had fairly poor quality videos because I used an iPad to stream the talks using uStream.  In the third year, Twilio stepped up to the plate and paid for professional videos to be done.  Since then, every year, a sponsor has allowed us to create a fabulous catalog of talks.  These are all available on the Vimeo RESTfest channel.

If you want to learn more about APIs, HTTP, Hypermedia, or REST in general, you really should make an effort to join us in 2015.


Dominick Baier: 401 vs 403

For years, there’s been an ongoing discussion about which HTTP status code to use for the “not authorized” scenario – and the original HTTP 1.1 specification wasn’t exactly crystal clear about the distinction between 401 (unauthorized) and 403 (forbidden).

But there is definitely the need to distinguish between the situation where no or invalid credentials were supplied with a request and the situation where a valid credential was supplied, but the “entity” belonging to that credential is not authorized for the operation it is trying to do.

Here are some examples:

  • In good old ASP.NET FormsAuth (well, this also applies to the brand new cookie middleware in Katana) – a 401 is turned into a 302 to the login page. That’s fine for anonymous requests – but when a user is already authenticated, a failed authorization (e.g. using [Authorize(Role=”foo”)]) will result in showing the login page again – not very intuitive. You would rather deal with that error or show an error page.
  • In Web APIs and token based authentication your client needs to be able to distinguish whether the token is e.g. expired or missing the necessary scopes. The “Bearer Token Usage” spec (http://tools.ietf.org/html/rfc6750) is pretty clear about this. Expired or malformed tokens should return a 401 – missing scopes should result in a 403.

You might have heard that the HTTP 1.1 spec has been re-written recently. It is now clearer on the status codes as well (you know it is getting serious when you see a Courier font, right?):

The 401 (Unauthorized) status code indicates that the request has not been applied because it lacks valid authentication credentials for the target resource. The server generating a 401 response MUST send a WWW-Authenticate header field (Section 4.1) containing at least one challenge applicable to the target resource.

A server that receives valid credentials that are not adequate to gain access ought to respond with the 403 (Forbidden) status code.

Unfortunately, the ASP.NET MVC/Web API [Authorize] attribute doesn’t behave that way – it always emits 401. The logic would be simple for failed authorization: “if user is anonymous – emit 401, if not – emit 403” (we now emit 403s for our scope authorization helpers – here and here).
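Until that happens, a common workaround in Web API 2 is a small derived attribute along the lines of the sketch below. This is an illustration only (not the built-in behaviour and not the scope helpers linked above): keep the default 401 for anonymous callers and emit a 403 when an authenticated caller fails authorization.

using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;

public class AuthorizeOrForbidAttribute : AuthorizeAttribute
{
    protected override void HandleUnauthorizedRequest(HttpActionContext actionContext)
    {
        var principal = actionContext.RequestContext.Principal;

        if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
        {
            // Valid credentials, but not adequate for this resource -> 403
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Forbidden);
        }
        else
        {
            // No or invalid credentials -> keep the default 401 behaviour
            base.HandleUnauthorizedRequest(actionContext);
        }
    }
}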

Maybe we can fix that in vNext.


Filed under: .NET Security, ASP.NET, Katana, OAuth, OWIN, WebAPI


Ali Kheyrollahi: Performance Counters for your HttpClient

Level [T2]

Pure HTTP APIs (aka REST APIs) are very popular at the moment. If you are building/maintaining one, you have probably learnt (perhaps the hard way) that having monitoring on your API is one of your top cross-cutting concerns. This monitoring involves different aspects of the API, one of which is performance.

There have been many approaches to solving cross-cutting concerns on APIs. Proxying has been a popular one, and there has been a proliferation of proxy-type services such as Mashery or Apigee which basically sit in front of your API and provide an abstraction which can handle security and access control, monetization or performance monitoring.

This is a popular approach but comes with its headaches. One of these problems is that if you already have a security mechanism in place, it can clash with the one provided by the service. Also the geographic distribution of these services, although it is getting better, is not as good as the one provided by many cloud vendors. This can mean that your traffic could be bouncing across the Atlantic Ocean a couple of times before getting to your end users - and this is bad, really really bad. On the other hand, these will not tell you what is happening inside your application, which you have to solve differently using classic monitoring approaches. So in a sense, I would say you might as well just byte! the bullet and implement it yourself.

PerfIt is a library I built a couple of years ago to provide performance counters for ASP.NET Web API. Creating and managing performance counters for Windows is not rocket science, but it is clumsy, and once you do it over and over for every service it is just a bit too much overhead. So this was designed to make it really simple for you... well, actually for myself :) I have been using PerfIt over the last two years - in production - and it serves the purpose.

Now, it is all well and good to know the performance characteristics of your API. But this gets more complicated when you have taken a dependency on other APIs and degradation of your API is the result of performance issues in your dependent APIs.




This is really a blame game: considering that each microservice is managed by a single team and, in an ideal DevOps world, the developers must support their services, you would love to blame another team rather than yourself, especially if this is truly the direct result of performance degradation in a dependent service.

One solution is to have access to the performance metrics of the dependent APIs, but really this might not be possible and kinda goes against the DevOps model of operations. On the other hand, what if this is due to an issue in an intermediary - such as a proxy?

The real solution is to benchmark and monitor the calls you are making out of your API. And I have implemented a new DelegatingHandler to do that measurement for you!


PerfIt! for HttpClient

So HttpClient is the de-facto class for accessing HTTP APIs. If you are using something else then either you have a really really good reason to do so or you are just doing it wrong.

PerfIt for client provides 4 standard counters out of the box:

  • Total # of operations
  • Average time taken (in seconds)
  • Time taken for the last operation (in ms)
  • # of operations per second

These are the 4 counters that you would normally need. If you need another, just get in touch with me but remember these counters must be business-independent.

First step is to install PerfIt using NuGet:
PM> Install-Package PerfIt
And then you just need to install the counters for your application. This can be done by running this simple code (categoryName is the performance counter grouping):
PerfItRuntime.InstallStandardCounters("<categoryName>");
Or by using an installer class as explained on the GitHub page and then running InstallUtil.exe.

Now, just add PerfitClientDelegatingHandler to your HttpClient and make some requests against a couple of websites:
using System;
using System.Net.Http;
using PerfIt;
using RandomGen;

namespace PerfitClientTest
{
    class Program
    {
        static void Main(string[] args)
        {
            var httpClient = new HttpClient(new PerfitClientDelegatingHandler("ClientTest")
            {
                InnerHandler = new HttpClientHandler()
            });

            var randomSites = Gen.Random.Items(new[]
            {
                "http://google.com",
                "http://yahoo.com",
                "http://github.com"
            });

            for (int i = 0; i < 100; i++)
            {
                var httpResponseMessage = httpClient.GetAsync(randomSites()).Result;
                Console.Write("\r" + i);
            }
        }
    }
}
And now you should be seeing this (we have chosen "ClientTest" for the category name):

So as you can see, instance names are the host names of the APIs, and this should provide you with enough information to monitor your dependencies. Any deeper information than this and you really need tracing rather than monitoring - which is a completely different thing...



So as you can see, it is extremely easy to set this up and run it. I might expose the part of the code that defines the instance name, which will probably come in the next versions.

Please use the GitHub page to ask questions or provide feedback.



Dominick Baier: IdentityServer Beta 1-2

Yesterday we pushed another interim release of IdentityServer to nuget.  You can see all commits here if you are interested.

Besides many smaller changes and bug fixes – the main new feature is that you can now configure which claims go into identity and access tokens. This allows sending e.g. a ‘role’ claim to a resource.

Before that change, this always required extending the claims provider. Check the wiki for updated docs.

Internally, everything is now based on Katana and the JWT handler v4 – which allowed us to finally fix the data types of claims like ‘iat’ or ‘auth_time’ to numbers (before that the JWT handler only allowed strings). This improves interop with other OIDC systems. Katana v3 is also a pre-req for supporting WS-Federation and OIDC based upstream IdPs, which is a feature that we will soon start working on.

We also moved all active development to the dev branch and periodically publish dev nugets to myget. See here.

As always, feedback is appreciated. Thanks for the support so far!


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Darrel Miller: Vermont Code Camp

This past weekend I had the opportunity to attend and speak at Vermont Code Camp.  Apart from being hosted at the beautiful University of Vermont, it was an event packed with excellent speakers.

I presented my talk on “Crafting Evolvable API Responses”, which itself managed to evolve considerably since I did it at New York Code Camp the weekend before.  The slides are available here on SlideShare:

It was also great to see such a strong showing of women speakers.

vtcodecamp

If you get the chance next year to attend, this is one event that I would definitely recommend.


Taiseer Joudeh: Decouple OWIN Authorization Server from Resource Server

Recently I’ve received a lot of comments and emails asking how we can decouple the OWIN Authorization Server we’ve built in the previous posts from the resources we are protecting. If you are following the posts mentioned below you will notice that we have only one software component (API) which plays both roles: Authorization Server and Resource Server.

There is nothing wrong with the current implementation, because the OAuth 2.0 specification never forces a separation between the Authorization Server and the Resource Server, but in many cases, especially if you have a large number of resources and endpoints you want to secure, it is better from an architecture standpoint to decouple the servers. So you will end up with a standalone Authorization Server and another standalone Resource Server which contains all the protected resources; this Resource Server will understand only the access tokens issued by the Authorization Server.

You can check the source code on Github.


OAuth 2.0 Roles: Authorization Server, Resource Server, Client, and Resource Owner

Before we jump into the implementation and how we can decouple the servers, I would like to emphasize the OAuth 2.0 roles defined in the specification so we have a better understanding of the responsibility of each role. Looking at the image below you will notice that there are 4 roles involved:

OAuth 2.0 Roles

Resource Owner:

The entity or person (user) that owns the protected Resource.

Resource Server:

The server which hosts the protected resources; this server should be able to accept the access tokens issued by the Authorization Server and respond with the protected resource if the access token is valid.

Client Applications:

The application (software) requesting access to the protected resources on the Resource Server; in some OAuth flows the client can request an access token on behalf of the Resource Owner.

Authorization Server:

The server which is responsible for managing authorizations and issuing access tokens to the clients after validating the Resource Owner identity.

Note: In our architecture we can have a single Authorization Server responsible for issuing access tokens which can be consumed by multiple Resource Servers.

Decoupling OAuth 2.0 OWIN Authorization Server from Resource Server

In the previous posts we’ve built the Authorization Server and the Resource Server using a single software component (API), which can be accessed on (http://ngauthenticationAPI.azurewebsites.net). In this demo I will guide you on how to create a new standalone Resource API and host it on a machine other than the Authorization Server machine, then configure the Resource API to accept only access tokens issued by our Authorization Server. This new Resource Server is hosted on Azure and can be accessed on (http://ngauthenticationResourcesAPI.azurewebsites.net).

Note: As you noticed from the previous posts we’ve built a protected resource (OrdersController) which can be accessed by issuing an HTTP GET to the endpoint (http://ngauthenticationAPI.azurewebsites.net/api/orders). This endpoint is considered a resource and should be moved to the Resource Server we’ll build now, but for the sake of keeping all the posts in this series functioning correctly I’ll keep this endpoint in our Authorization Server, so please consider the API we’ve built previously as our standalone Authorization Server even though it has a protected resource in it.

Steps to build the Resource Server

In the steps below we’ll build another Web API which will contain our protected resources. For the sake of keeping it simple, I’ll add only one secured endpoint named “ProtectedController”; any authorized request to this endpoint should contain a valid bearer token issued by our Authorization Server. So let’s start implementing this:

Step 1: Creating the Resource Web API Project

We need to add a new ASP.NET Web application named “AngularJSAuthentication.ResourceServer” to our existing solution named “AngularJSAuthentication”. The selected template for the new project will be the “Empty” template with no core dependencies; check the image below:

Resourse Server New Project

Step 2: Install the needed NuGet Packages

This project is empty, so we need to install the NuGet packages needed to set up our OWIN Resource Server, configure ASP.NET Web API to be hosted within an OWIN server, and configure it to use only OAuth 2.0 bearer tokens as the authorization middleware; so open the NuGet Package Manager Console and install the packages below:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0
Install-Package Microsoft.Owin.Security.OAuth -Version 2.1.0

The usage of each package has been covered in the previous posts; feel free to check this post to learn what each package is used for.

Important note: In the initial post I was using the package “Microsoft.Owin.Security.OAuth” version “3.0.0”, which differs from the version used in the Authorization Server (“2.1.0”). There was a bug in my solution when I was using 2 different versions: when we send an expired token to the Resource Server, the Resource Server accepts this token even though it is expired. I noticed that the properties of the authentication ticket are null and there is no expiry date. To fix this issue we need to unify the assembly version of “Microsoft.Owin.Security.OAuth” between the Authorization Server and the Resource Server; I believe there are breaking changes between those 2 versions, and that’s why ticket properties are not decrypted and deserialized correctly. Thanks to Ashish Verma for notifying me about this.

Step 3: Add OWIN “Startup” Class

Now we want to add a new class named “Startup”. It will contain the code below:

[assembly: OwinStartup(typeof(AngularJSAuthentication.ResourceServer.Startup))]
namespace AngularJSAuthentication.ResourceServer
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            ConfigureOAuth(app);

            WebApiConfig.Register(config);
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            app.UseWebApi(config);
            
        }

        private void ConfigureOAuth(IAppBuilder app)
        {
            //Token Consumption
            app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions
            {
            });
        }
    }
}

As we’ve covered in previous posts, this class will be fired once our Resource Server starts. As you noticed, I’ve enabled CORS so this Resource Server can accept XHR requests coming from any origin.

What is worth noting here is the implementation of the method “ConfigureOAuth”; inside this method we are configuring the Resource Server to accept (consume) only tokens with the bearer scheme. If you forget this one, your Resource Server won’t understand the bearer tokens sent to it.

Step 4: Add the “WebApiConfig” Class

As usual we need to add the WebApiConfig class under folder “App_Start” which contains the code below:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API routes
            config.MapHttpAttributeRoutes();

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

This class is responsible for enabling attribute routing on our controllers.

Step 5: Add a new protected (secured) controller

Now we want to add a controller which will serve as our protected resource. This controller will return a list of claims for the authorized user; those claims are encoded within the access token we’ve obtained from the Authorization Server. So add a new controller named “ProtectedController” under the “Controllers” folder and paste the code below:

[Authorize]
    [RoutePrefix("api/protected")]
    public class ProtectedController : ApiController
    {
        [Route("")]
        public IEnumerable<object> Get()
        {
            var identity = User.Identity as ClaimsIdentity;
           
            return identity.Claims.Select(c => new
            {
                Type = c.Type,
                Value = c.Value
            });
        }
    }

Notice how we decorate the controller with the [Authorize] attribute, which will protect this resource and only allow HTTP GET requests containing a valid bearer access token sent in the Authorization header to pass. In more detail, when the request is received the OAuth bearer authentication middleware will perform the following:

  1. Extracting the access token which is sent in the “Authorization header” with the “Bearer” scheme.
  2. Extracting the authentication ticket from the access token; this ticket will contain the claims identity and any additional authentication properties.
  3. Checking the validity period of the authentication ticket.

Step 6: Hosting the Authorization Server and Resource Server on different machines

Up to this step, if you are developing locally on your dev machine and you tried to obtain an access token from your Authorization Server, i.e. (http://AuthServer/token), and then send this access token to the secured endpoint in our Resource Server, i.e. (http://ResServer/api/protected), the request will pass and you will have access to the protected resource. How is this happening?

On your dev machine, once you request an access token from your Authorization Server, the OAuth middleware will use the default data protection provider in your Authorization Server, so it will use the “validationKey” value in the machineKey node stored in the machine.config file to issue the access token and protect it. The same applies when you send the access token to your Resource Server: it will use the same machineKey to decrypt the access token and extract the authentication ticket from it.

The same applies if, in your production environment, you are planning to host your Authorization Server and your Resource Server on the same machine.

But in our case I’m hosting my Authorization Server on an Azure website in the West US region, and my Resource Server is hosted on an Azure website in the East US region, so both are on totally different machines and have different machine.config files. For sure I can’t change anything in the machine.config file because it is used by different sites on the Azure VMs hosting my APIs.

So to solve this case we need to override the machineKey node for both APIs (Authorization Server and Resource Server) and share the same machineKey between both web.config files.

To do so we need to generate a new machineKey. Originally I used an online tool for this, but as advised by Barry Dorrans it is not recommended to use an online tool to generate your machineKey, because you do not know whether the tool stores the value of your machineKey in some secret database; it is better to generate the machineKey yourself.

To do so I’ll follow the steps mentioned in article KB 2915218, Appendix A, which uses PowerShell commands to generate the machineKey; after you open PowerShell and run the right command as in the image below, you will receive a new machineKey:
PowerShell MachineKey

<machineKey validationKey="A970D0E3C36AA17C43C5DB225C778B3392BAED4D7089C6AAF76E3D4243E64FD797BD17611868E85D2E4E1C8B6F1FB684B0C8DBA0C39E20284B7FCA73E0927B20" decryptionKey="88274072DD5AC1FB6CED8281B34CDC6E79DD7223243A527D46C09CF6CA58DB68" validation="SHA1" decryption="AES" />

Note: Do not use the values above in production; generate a new machineKey for your servers using PowerShell.
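If PowerShell is not an option, the same kind of random key material can be generated with a few lines of C#. This is only a sketch (not the KB 2915218 script itself), and the 64/32 byte key sizes are assumptions chosen to match the SHA1/AES example above:

using System;
using System.Security.Cryptography;

public static class MachineKeyGenerator
{
    public static string NewKey(int lengthInBytes)
    {
        var data = new byte[lengthInBytes];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(data); // cryptographically strong random bytes
        }
        return BitConverter.ToString(data).Replace("-", string.Empty); // hex encode
    }

    public static void Main()
    {
        Console.WriteLine("validationKey=\"{0}\"", NewKey(64)); // 64 bytes for SHA1 validation
        Console.WriteLine("decryptionKey=\"{0}\"", NewKey(32)); // 32 bytes for AES decryption
    }
}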

After you generate it open the web.config file for both the Authorization Server (AngularJSAuthentication.API project) and for the Resource Server (AngularJSAuthentication.ResourceServer) and paste the machineKey node inside the <system.web> node as the below:

<system.web>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
    <machineKey validationKey="VALUE GOES HERE" 
                decryptionKey="VALUE GOES HERE" 
                validation="SHA1" 
                decryption="AES"/>
  </system.web>

By doing this we’ve unified the machineKey between the Authorization Server and the Resource Server (only these APIs in case of Azure shared hosting or any other shared host) and we are now ready to test our implementation.

Step 7: Testing the Authorization Server and Resource Server

Now we need to obtain an access token from our Authorization Server as we did in the previous posts, so we issue an HTTP POST to the endpoint (http://ngauthenticationapi.azurewebsites.net/token) including the resource owner username and password, as in the image below:

Obtain Access Token

If the request succeeds we’ll receive an access token, then we’ll use this access token to send an HTTP GET request to our new Resource Server using the protected endpoint (http://ngauthenticationresourcesapi.azurewebsites.net/api/protected). We need to send the access token in the Authorization header using the bearer scheme; the request will be as in the image below:

Consume Access Token

As you can notice, we’re now able to access the protected resource in our new Resource Server, and the claims for this identity, which were obtained from the Authorization Server, are extracted.
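For completeness, the same two calls can be scripted with HttpClient; the snippet below is a hedged sketch only (the username and password are placeholders for an account registered against the Authorization Server, and Json.NET is assumed to be available for parsing the token response):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class DecoupledServersDemo
{
    public static async Task RunAsync()
    {
        using (var client = new HttpClient())
        {
            // 1. Obtain an access token from the Authorization Server
            var tokenResponse = await client.PostAsync(
                "http://ngauthenticationapi.azurewebsites.net/token",
                new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "grant_type", "password" },
                    { "username", "YourUserName" }, // placeholder
                    { "password", "YourPassword" }  // placeholder
                }));

            var accessToken = (string)JObject.Parse(
                await tokenResponse.Content.ReadAsStringAsync())["access_token"];

            // 2. Call the protected endpoint on the Resource Server using the bearer scheme
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var claims = await client.GetStringAsync(
                "http://ngauthenticationresourcesapi.azurewebsites.net/api/protected");

            Console.WriteLine(claims); // the claims extracted from the access token
        }
    }
}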

That’s it for now folks, hopefully this short walk through helped in understanding how we can decouple the Authorization Server from the Resource Server.

If you have any comments or questions please drop me a comment.

You can use the following link to check the demo application.

You can use the following link to test the Authorization Server (http://ngauthenticationAPI.azurewebsites.net),

You can use the following link to test the Resource Server (http://ngauthenticationResourcesAPI.azurewebsites.net) 

You can check the source code on Github.

Follow me on Twitter @tjoudeh

References

The post Decouple OWIN Authorization Server from Resource Server appeared first on Bit of Technology.


Glenn Block: Our new book and my personal journey toward REST and Hypermedia

It’s hard to believe, but it has been 6 months since our new book, “Designing Evolvable Web APIs with ASP.NET” shipped! So far the feedback that we’ve received has been really positive and we’re excited to see the momentum.

book

In the first part of the book we focus on the fundamentals of the web architecture, HTTP, and APIs. Next we focus on designing a hypermedia API and a TDD/BDD driven implementation of a CollectionJson API using ASP.NET Web API. We also give you several coding patterns that you can use to get started building hypermedia APIs.

In the second part of the book, we focus on ASP.NET Web API itself and the underlying architecture. We do this in the context of the first part, with the explicit goal of helping you further toward building out an evolvable API. We delve deeply into the nuts and bolts of Web API including hosting, routing and model binding. We cover comprehensively how to secure your API, and the security internals. And we cover other topics like TDD and using Inversion of Control.

You can buy it online now here. We also have a Github repo with all the code, which includes a working hypermedia API.

The people that made this book a reality.

This book has been a collaborative effort with four other amazing folks that I’ve had the privilege to work with. It combines the knowledge of the Web API team itself as well as key advisors who were with us every step of the way as we built it. It was a collaborative effort across 4 time zones and countries:

  • Howard Dierking, my co-conspirator on the Web API team. An amazing individual who is helping educate the world on REST and hypermedia through his Pluralsight work.
  • Darrel Miller, a long time proponent of REST and hypermedia in the .NET world before many of us at Microsoft had a clue what that meant. Darrel has been building real hypermedia systems for a long time, and he was one of our first handful of advisors on Web API.
  • Pedro Felix, an expert on Web API security and the ASP.NET Web API architecture. His comprehensive contribution in the book on the security aspects of Web API is unparalleled in the .NET world.
  • Pablo Cibraro, former CTO of Tellago and a consultant who has implemented many API solutions. Pablo is an expert in Agile and TDD, and he wrote some deep chapters on testability and IoC.

We also had a fantastic set of reviewers and advisors.

Why write this book?

This book is part of a very personal journey, a  journey that started several years ago when I joined the WCF team at Microsoft. At that time I had the pop culture understanding of REST, that is a lightweight API exposed over HTTP that doesn’t use SOAP. I joined the team to help build a better experience in the platform for these types of APIs. Little did I know the journey that awaited me, as I would delve deeper and deeper into the REST community.

It was a journey of learning.  I was fortunate to have wonderful teachers who had already been treading the path, chief of which (ordered last name first) were: Jan Algermissen, Mike Amundsen, Alan Dean, Mike Kelly, Sebastien Lambla, Darrel Miller, Henrik Nielsen, Ian Robinson, Aaron Skonnard, and Jim Webber. I am deeply indebted for their help. This book was really built on the shoulders of these giants!

During that period, I learned about HTTP and REST, that REST is much more than building an RPC JSON API, and about how to make APIs more robust. This included learning about fundamental aspects of the web architecture like caching, ETAGS and content negotiation. The learning also included the hidden jewel of REST APIs, hypermedia, and new media types like HAL, CollectionJson and Siren that were designed specifically for hypermedia-based systems.

It was clear looking at our existing Microsoft frameworks that they did not meet the requirements for building APIs that really leveraged HTTP and took advantage of the web architecture. Looking within the .NET OSS community, solutions like OpenRasta, however, were explicitly designed with this in mind. There were also a ton of other options outside the .NET world in Java, Ruby, Python and PHP, and more recently in node.js.

After soaking this all in, my team at Microsoft, and a team of fantastic advisors from the community, worked together to create a new framework that was designed from the get-go to fully embrace HTTP, to enable, but not force building a RESTful system.

As part of this we had an explicit goal from day one to ensure this framework would also enable building a fully hypermedia based system. Another explicit goal was to make the framework easier to test, and to allow it to work well with agile development. This framework ultimately became what you now know as ASP.NET Web API.

Although ASP.NET Web API had the foundations in place to enable such systems, you have work to do. I am not going to say that it naturally leads you to build a hypermedia system. This was not an accident. We deliberately did not want to force people down a specific path; we wanted to make sure, though, that if you wanted to build such a system, it wouldn’t fight you.

We saw a lot of folks jump on this, even internally, folks like the Office Lync team that built probably one of the largest pure hypermedia APIs in existence using Web API.  Many of these early adopters had the advantage of working directly with our team. They worked with me and Henrik Nielsen (my Web API architect, and one of the creators of HTTP) to help guide them down the path. We did a lot of educating on HTTP, API fundamentals, what hypermedia is, and how to build systems that support it.

On the outside we also saw a lot of energy. Folks like our advisory board jumped in and started doing the same in the wild. They built real systems, and they guided customers.

All of this work (including shipping Web API) definitely succeeded in helping to raise the bar around modern API development and hypermedia in the industry in general. More and more folks, including some of the largest companies, started coming out of the woodwork and saying, we want to build these types of systems.

However, many folks, in particular in the .NET community, have not crossed this chasm yet and do not understand how or why to build such systems. In particular, building a hypermedia-based system is a sticking point. Many are scared by the term and don’t understand what it is. Others dismiss it as purely academic. Even those that do grasp the concepts often do not know where to start implementing them. This last point makes total sense, taking into consideration the point I mentioned earlier that ASP.NET Web API does not lead you toward building such APIs; it enables them.

With this book we wanted to change that. We wanted to bring these API techniques to the masses. We wanted to show you why and how to actually build these APIs with ASP.NET Web API. We wanted to show you how to make the underlying architecture of the framework work with you in achieving that goal.

I believe we’ve done that. We look forward to your experiences with the book as you embark on API development with ASP.NET.


Darrel Miller: Implementing Conditional Request Handling for your API


In the previous post in this series on Conditional Requests I introduced the topic of validators, their purpose and how they can be constructed.  A large chunk of the work that needs to be done to support conditional requests is done by the origin server.  This blog post is about that role.

ForkInRoad

The first job of the origin server is to return a validator header along with responses for resources that want to support conditional requests.  The second job is, when it receives a conditional request, to decide whether it is going to fulfill the request or fail it.  A server can recognize a conditional request because there is a request header that starts with If-. Which If- header is used depends on the type of validator being used and the type of HTTP method being used.  The following If- headers are listed in the message headers repository:


  • If-Match
  • If-Modified-Since
  • If-None-Match
  • If-Range (rfc7233 – Range Requests)
  • If-Schedule-Tag-Match (rfc6638 - CALDAV)
  • If-Unmodified-Since

The If-Range and If-Schedule-Tag-Match are slightly different cases that we can save for discussion on a different day.

For reads…

For safe requests like GET the goal is to only return a response body if it has changed from what has previously been retrieved.  Therefore we use the If-None-Match header with an Etag, or the If-Modified-Since header with a Last-Modified validator.  If these pre-conditions fail, then a 304 – Not Modified status is returned.

[Image: a book]

The if-none-match is named the way it is because a client can actually send multiple Etag values.  This can be used when a cache is holding multiple different variant representations and it only wants to receive an update if all of its variants are out-of-date. 

When comparing validators for the purpose of deciding to read or not, a “weak” comparison is used.  This means that both strong and weak validators are compared and are considered equal if the validator values are the same.

For writes…

For unsafe requests like PUT, POST, PATCH, DELETE, the conditional headers if-match and if-unmodified-since are used to abort the request if the  validators do not match. In this case the status code 412 – Precondition Failed should be returned.

[Image: writing]

When comparing validators before writing,  a “strong” comparison is used.  This means that only strong validators can be used and must be equal before the request can be performed.
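To make the read and write rules concrete, here is a minimal sketch of how an origin server could evaluate these preconditions in ASP.NET Web API. The Document class and the in-memory "store" are hypothetical stand-ins for whatever persistence you use; the interesting parts are the weak comparison on GET (returning 304) and the strong comparison on PUT (returning 412).

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class Document
{
    public string Body { get; set; }
    public string Version { get; set; }   // used to build the ETag
}

public class DocumentsController : ApiController
{
    // Hypothetical in-memory "store"; a real API would load this from persistence.
    private static readonly Document _doc = new Document { Body = "hello", Version = "1" };

    public HttpResponseMessage Get()
    {
        var currentEtag = new EntityTagHeaderValue("\"" + _doc.Version + "\"");

        // Reads use a weak comparison: any matching If-None-Match tag means 304.
        // (The special "*" value is ignored here for brevity.)
        if (Request.Headers.IfNoneMatch.Any(t => t.Tag == currentEtag.Tag))
        {
            var notModified = Request.CreateResponse(HttpStatusCode.NotModified);
            notModified.Headers.ETag = currentEtag;
            return notModified;
        }

        var response = Request.CreateResponse(HttpStatusCode.OK, _doc);
        response.Headers.ETag = currentEtag;
        return response;
    }

    public HttpResponseMessage Put(Document updated)
    {
        var currentEtag = new EntityTagHeaderValue("\"" + _doc.Version + "\"");

        // Writes use a strong comparison: only an exact, non-weak match lets the PUT proceed.
        var ifMatch = Request.Headers.IfMatch;
        if (ifMatch.Count > 0 && !ifMatch.Any(t => !t.IsWeak && t.Tag == currentEtag.Tag))
        {
            return Request.CreateResponse(HttpStatusCode.PreconditionFailed);
        }

        _doc.Body = updated.Body;
        _doc.Version = (int.Parse(_doc.Version) + 1).ToString();
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}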

Bring a backup

Although a client only really needs one type of validator value to perform conditional requests, it is recommended that an origin server return both an Etag and a Last-modified header if possible.   I can only assume this is to provide support for web caches that only support HTTP/1.0.  ETags have been standardized since 1997.  It continues to amaze me how seriously the IETF folks take backward compatibility.

Clairvoyant?

The specification makes it clear that for each of the conditional request headers, the origin server must not perform the actual request behaviour if the pre-condition fails.  However, it goes on to say that

redirects and failures take precedence over the evaluation of preconditions in conditional requests

This implies that you need to do just enough of the request processing to know that it will produce a success status code before testing the pre-condition and performing the request.  This has some significant implications on how you might be able to implement conditional request handling.  The specification says,

a recipient cache or origin server MUST evaluate received request preconditions after it has successfully performed its normal request checks and just before it would perform the action associated with the request method.

The part I find interesting about this last quote is the distinction between “request checks” and the “action associated with the request method”.  It would be valuable to further explore this distinction and understand what checks need to be performed before pre-conditions can be evaluated.

Precedence

If you consider each of the individual elements of the entire conditional request handling process in isolation, each piece is fairly straightforward.  However, the HTTP specification puts no limitations on the ability to send multiple different conditional headers at the same time.  In order to produce sane results, the specification lays out a set of rules as to the order in which these conditional headers should be evaluated.  I have simplified the rules a little by ignoring the possibility of If-Range headers, and the end result is the diagram below.

[Flowchart: the order in which conditional request headers are evaluated]

If you follow the flow chart through and reach the GO symbol, that means that the actual request can be performed.

Don’t fail if you don’t have to

One interesting area that I was not previously aware of until I created this diagram was the step that takes place after an Etag fails to match or the If-Unmodified-Since fails.  If some other request has already made the same change that the current conditional request is trying to make, then a server should not return a 412 failure, but simply return a 2XX status.

One last piece of the puzzle

In the first article on this subject we talked about the validator values that are required to make conditional requests.  This article covers the role of the server in conditional request handling.  The next article will discuss how the client can make conditional requests.

Image credit: Fork https://flic.kr/p/kUFSi7
Image credit: Book https://flic.kr/p/8JBSSW
Image credit: Writing https://flic.kr/p/5WtwEy


Filip Woj: Over 100 ASP.NET Web API samples

As you might know by now, last month my ASP.NET Web API 2 Recipes book was released by Apress. The book contains over 100 recipes covering various Web API scenarios that aim to help you save some headaches when working … Continue reading

The post Over 100 ASP.NET Web API samples appeared first on StrathWeb.


Darrel Miller: Code Camp NYC

This weekend I had the opportunity to speak for the first time at the New York City Code Camp.  It was an excellent event with a huge turnout and an equally huge number of sessions.

[Image: New York]

With fourteen different sessions happening simultaneously, attendees were spoiled for choice and I am quite sure there were plenty of people having to make difficult choices.

Web Performance

I was happy to get the chance to see Nik Molnar’s talk on Full Web Stack Performance.  I got to learn about a number of handy HTTP related utilities that I had not seen before as well as get some great insight into how the web browsers are attempting to enable jank free user experiences.

There were loads of other sessions that I would have loved to have seen, but unfortunately I was still preparing my own talks.

Hypermedia’s Secret Sauce

My first talk was a demonstration of how to build native client applications that use hypermedia to drive the application workflow.  The first demo was a slightly enhanced version of the sample that I show in my Hypermedia Client/Server Dance post.

The second demo was a more involved client application that had multiple screens and has an explicit ClientState class and specialized link classes to encapsulate the interaction behaviour of the client.  This client is very much a work in progress and will continue to evolve as I try and demonstrate more hypermedia capabilities.  You can find the source code for both the demos in my Github repository.  Be warned, if you are looking for something polished, you will be disappointed!

You can find the slides for the talk here on OneDrive.

Crafting Evolvable API Responses

My second talk was on designing representations for APIs that can evolve.  I spent some time explaining the disadvantages of using object serializers to build your wire representations and then went through a variety of different representation examples that attempt to address many of the common issues when doing API design.  The slides are available here, however the commentary on the representations is currently missing from the slides and we also did a number of examples based on questions from the audience.  If you happen to live near Burlington VT, I’ll be doing this talk again next Saturday at the Vermont Code Camp.

[Image: Vermont]

Kudos

I’d like to thank Steve Bohlen and Erik Stepp and the huge team of volunteers for putting on this event.  It is great to see so much participation in the developer community.


Image credit : New York https://flic.kr/p/dauWoe
Image credit : Secret Sauce https://flic.kr/p/ag8CoD
Image credit: Vermont https://flic.kr/p/8BReje


Taiseer Joudeh: Secure ASP.NET Web API 2 using Azure Active Directory, Owin Middleware, and ADAL

Recently I’ve been asked by many blog readers how to secure ASP.NET Web API 2 using Azure Active Directory; in other words, we want to outsource the authentication part from the Web API to Microsoft Azure Active Directory (AD). We have already seen how authentication can be done with local database accounts and social identity providers, so in this tutorial we’ll try something different and obtain the bearer access tokens from an external authority, which is our Azure Active Directory (AD).

Microsoft Azure Active Directory (AD) is a PaaS service available to every Azure subscription; it is used to store information about users and organizational structure. We’ll use this service as our authority, which will be responsible for securing our resource (Web API) and issuing access tokens and refresh tokens using the OAuth 2 code flow grant.

The resource (Web API) will be consumed by a client; the client requests data from the resource (Web API), but in order for a request to be accepted, the client must send a valid access token obtained from the authority (Azure AD) with each request. Do not worry if this is not clear now; I’ll describe it thoroughly while we implement this tutorial.

Azure Active Directory Web Api

What will we build in this tutorial?

As you noticed in the previous posts, I’m not a big fan of the built-in templates in Visual Studio 2013; those templates mix MVC controllers with Web API controllers and confuse developers. So in this post I’ve decided to build a simple ASP.NET Web API secured by Azure AD using Owin middleware components which we’ll add manually using NuGet, then I’ll build a simple client (desktop forms application) which will consume this Web API.

The Source code for this tutorial is available on GitHub.

Building the Back-end Resource (Web API)

Step 1: Creating the Web API Project

In this tutorial I’m using Visual Studio 2013 and .NET Framework 4.5. To get started, create an empty solution and name it “WebApiAzureActiveDirectory”, then add a new empty ASP.NET Web application named “WebApiAzureAD.Api”; the selected template for the project will be the “Empty” template with no core dependencies, check the image below:

Web Api Azure AD Project

Step 2: Install the needed NuGet Packages

This project is empty so we need to install the NuGet packages needed to setup our Owin server and configure ASP.NET Web API 2 to be hosted within an Owin server, so open NuGet Package Manager Console and install the below packages:

Install-Package Microsoft.AspNet.WebApi
Install-Package Microsoft.AspNet.WebApi.Owin
Install-Package Microsoft.Owin.Host.SystemWeb
Install-Package Microsoft.Owin.Security.ActiveDirectory

The use of the first three packages has been discussed in this post; the package “Microsoft.Owin.Security.ActiveDirectory” is responsible for configuring our Owin middleware server to offload the authentication process to Microsoft Azure Active Directory. We’ll see how we do this in the coming steps.

Step 3: Register the Web API into Azure Active Directory

Now we need to jump to the Azure Management Portal in order to register our Web API (resource) as an application in our Azure Active Directory (authority) so this authority will accept issuing tokens for our Web API. To do so, after your successful login to the Azure Management Portal, click on “Active Directory” in the left hand navigation menu, choose the Active Directory tenant you want to register your Web API with, select the “Applications” tab, then click on the add icon at the bottom of the page. Once the modal window shows as in the image below, select “Add an application my organization is developing”.

Add App to Azure AD

Then a 2-step wizard will show up asking you to select the type of app you want to add. In our case we are adding a Web API, so select “Web Application and/or Web API”, then provide a name for the application, in my case I’ll call it “WebApiAzureAD”, then click next.

Add WebApi To Azure Active Directory

In the second step, as in the image below, we need to fill in two things: the Sign-On URL, which will usually be the base URL for your Web API, in my case “http://localhost:55577”, and the APP ID URI, which is a URI that Azure AD can use for this app. It usually takes the form “http://<your_AD_tenant_name>/<your_app_friendly_name>”, so I’ll fill it in as “http://taiseerjoudeharamex.onmicrosoft.com/WebApiAzureAD”, then click OK.

Important Note:

  • In a production environment, all communication should be done over HTTPS only; the access token we transmit from the client to the API should be transmitted over HTTPS only.
  • To get your AD tenant name, you can navigate to your Active Directory and click on the “Domains” tab.

Add Web Api To Azure Active Directory

Step 4: Expose our Web API to other applications

After our Web API has been added to Azure Active Directory apps, we need to do one more thing here: expose our permission scopes to the client app developers who will build clients to consume our Web API. To do so we need to change our Web API configuration using the application manifest. Basically, the application manifest is a JSON file that represents our application’s identity configuration.

So, as in the image below, after you navigate to the app we’ve just added, click on the “Manage Manifest” icon at the bottom of the page, then click on “Download Manifest”.

Download Manifest

Open the downloaded JSON application manifest file and replace the “appPermissions” node with the JSON snippet below. This snippet is just an example of how to expose a permission scope known as user impersonation; do not forget to generate a new GUID for the “permissionId” value. You can read more about Web API configuration here.

"appPermissions": [
    {
      "claimValue": "user_impersonation",
      "description": "Allow the application full access to the service on behalf of the signed-in user",
      "directAccessGrantTypes": [],
      "displayName": "Have full access to the service",
      "impersonationAccessGrantTypes": [
        {
          "impersonated": "User",
          "impersonator": "Application"
        }
      ],
      "isDisabled": false,
      "origin": "Application",
      "permissionId": "856aba87-e34d-4857-9cb1-cc4ac92a35f8",
      "resourceScopeType": "Personal",
      "userConsentDescription": "Allow the application full access to the service on your behalf",
      "userConsentDisplayName": "Have full access to the service"
    }
  ]

After you have replaced the “appPermissions” node, save the application manifest file locally then upload it again to your app using the “Upload Manifest” feature. Now we have added our Web API as an application to Azure Active Directory, so let’s go back to Visual Studio and do the concrete implementation for the Web API.

Step 5: Add Owin “Startup” Class

Now back in Visual Studio, we need to build the API components ourselves because we didn’t use a ready-made template; this way is cleaner and you understand the need and use for each component you install in your solution. So add a new class named “Startup” containing the code below:

using Microsoft.Owin;
using Microsoft.Owin.Security.ActiveDirectory;
using Owin;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Web;
using System.Web.Http;

[assembly: OwinStartup(typeof(WebApiAzureAD.Api.Startup))]
namespace WebApiAzureAD.Api
{

    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();
 
            ConfigureAuth(app);

            WebApiConfig.Register(config);

            app.UseWebApi(config);
        }

        private void ConfigureAuth(IAppBuilder app)
        {
            app.UseWindowsAzureActiveDirectoryBearerAuthentication(
                new WindowsAzureActiveDirectoryBearerAuthenticationOptions
                {
                    Audience = ConfigurationManager.AppSettings["Audience"],
                    Tenant = ConfigurationManager.AppSettings["Tenant"]
                });
        }
    }

}

What we’ve done here is simple; you can check my other posts to understand the need for the “Startup” class and how it works.

What is worth explaining here is what the private method “ConfigureAuth” is responsible for: the implementation inside this method tells our Web API that the authentication middleware to use is “Windows Azure Active Directory Bearer Tokens” for the specified Active Directory “Tenant” and “Audience” (APP ID URI). Now any API controller added to this Web API and decorated with the [Authorize] attribute will only accept bearer tokens issued by this specific Active Directory tenant and application; any other form of token will be rejected and HTTP status code 401 will be returned.

It is good practice to store the values for your Audience, Tenant, secrets, etc. in a configuration file and not hard-code them, so open the web.config file and add the 2 new “appSettings” shown in the snippet below:

<appSettings>
    <add key="Tenant" value="taiseerjoudeharamex.onmicrosoft.com" />
    <add key="Audience" value="http://taiseerjoudeharamex.onmicrosoft.com/WebApiAzureAD" />
  </appSettings>

Before moving to the next step, do not forget to add the class “WebApiConfig.cs” under folder “App_Start” which contains the code below:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API configuration and services

            // Web API routes
            config.MapHttpAttributeRoutes();

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

Step 6: Add a Secure OrdersController

Now we want to add a secure controller to serve our orders. What I mean by a “secure” controller is one decorated with the [Authorize] attribute that can be accessed only when the request contains a valid access token issued by our Azure AD tenant for this app only. To keep things simple we’ll return static data. So add a new controller named “OrdersController” and paste the code below:

[Authorize]
    [RoutePrefix("api/orders")]
    public class OrdersController : ApiController
    {
        [Route("")]
        public IHttpActionResult Get()
        {
           var isAuth = User.Identity.IsAuthenticated;
           var userName = User.Identity.Name;

           return Ok(Order.CreateOrders());

        }
    }

    #region Helpers

    public class Order
    {
        public int OrderID { get; set; }
        public string CustomerName { get; set; }
        public string ShipperCity { get; set; }
        public Boolean IsShipped { get; set; }


        public static List<Order> CreateOrders()
        {
            List<Order> OrderList = new List<Order> 
            {
                new Order {OrderID = 10248, CustomerName = "Taiseer Joudeh", ShipperCity = "Amman", IsShipped = true },
                new Order {OrderID = 10249, CustomerName = "Ahmad Hasan", ShipperCity = "Dubai", IsShipped = false},
                new Order {OrderID = 10250,CustomerName = "Tamer Yaser", ShipperCity = "Jeddah", IsShipped = false },
                new Order {OrderID = 10251,CustomerName = "Lina Majed", ShipperCity = "Abu Dhabi", IsShipped = false},
                new Order {OrderID = 10252,CustomerName = "Yasmeen Rami", ShipperCity = "Kuwait", IsShipped = true}
            };

            return OrderList;
        }
    }

    #endregion

Up to this step we’ve built our API and outsourced its authentication to Azure Active Directory (AD); now it is time to build a client which will consume this back-end API and talk to our Azure AD tenant to obtain access tokens.

Building the Client Application

As I stated before, we’ll build a very simple desktop application to consume the back-end API, but before digging into the code, let’s add this application to our Azure AD tenant and configure the permissions for the client to allow it to access the back-end API.

Step 7: Register the Client Application into Azure Active Directory

To do so, navigate to the Azure Management Portal again and add a new application as we did in step 3, but this time select “Native Client Application”, give the application a friendly name, in my case “ClientAppAzureAD”, and for the redirect URI enter “http://WebApiAzureADClient”. You can think of this URI as the endpoint which will be used to return the authorization code from Azure AD, which will later be exchanged for an access token. No need to worry yourself about this, because all of it will be handled for us auto-magically when we use and talk about the Azure Active Directory Authentication Library (ADAL) toolkit.

Add Client To Azure AD

Step 8: Configure the Client Application Permissions

This step is very important: we need to configure the client app and specify which registered applications it can access. In our case we need to give the client application permission to access our back-end API (WebApiAzureAD); the delegated permission selected will be “Have full access to the service”, and if you notice, this permission list comes from the modified “appPermissions” node in the JSON application manifest file we modified in step 4. After you do this, click Save.

Grant Permession

Step 9: Adding the client application project

Time to get back to Visual Studio to build the client application. You can use any type of client application you prefer (console app, WPF, Windows Forms app, etc.); in my case I’ll use a Windows Forms application, so I’ll add to our solution a new project of type “Windows Forms Application” named “WebApiAzureAD.Client”.

Once the project is added to the solution, we need to add some new keys to the “app.config” file which we’ll use to communicate with our Azure AD tenant, so open the app.config file and paste the snippet below:

<appSettings>
   
    <add key="ida:AADInstance" value="https://login.windows.net/{0}" />
    <add key="ida:Tenant" value="taiseerjoudeharamex.onmicrosoft.com" />
    <add key="ida:ClientId" value="1c92b0cc-6d13-497d-87da-bef413c9f26f" />
    <add key="ida:RedirectUri" value="http://WebApiAzureADClient" />
    
    <add key="ApiResourceId" value="http://taiseerjoudeharamex.onmicrosoft.com/WebApiAzureAD" />
    <add key="ApiBaseAddress" value="http://localhost:55577/" />
    
  </appSettings>

So the value for “ClientId” key is coming from the “Client Id” value for the client we defined in Azure AD, and the same applies for the key “RedirectUri”.

The values for “ApiResourceId” and “ApiBaseAddress” both come from the back-end API application we already registered with Azure AD.

Step 10: Install the needed Nuget Packages for the client application

We need to install the below Nuget packages in order to be able to call the back-end API and our Azure AD tenant to obtain tokens, so open package manager console and install the below:

PM> Install-Package Microsoft.Net.Http
PM> Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory

The first package provides HttpClient for sending requests over HTTP, as well as HttpRequestMessage and HttpResponseMessage for processing HTTP messages.

The second package is the Azure AD Authentication Library (ADAL), which is used to enable a .NET client application to authenticate users against Azure AD and obtain access tokens to call the back-end Web API.

Step 11: Obtain the token and call the back-end API

Now it is time to implement the logic in the client application responsible for obtaining the access token from our Azure AD tenant, then using this access token to access the secured API endpoint.

To do so, add a new button to the form, open “Form1.cs”, and paste the code below:

public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }


        private static string aadInstance = ConfigurationManager.AppSettings["ida:AADInstance"];
        private static string tenant = ConfigurationManager.AppSettings["ida:Tenant"];
        private static string clientId = ConfigurationManager.AppSettings["ida:ClientId"];
        Uri redirectUri = new Uri(ConfigurationManager.AppSettings["ida:RedirectUri"]);

        private static string authority = String.Format(aadInstance, tenant);

        private static string apiResourceId = ConfigurationManager.AppSettings["ApiResourceId"];
        private static string apiBaseAddress = ConfigurationManager.AppSettings["ApiBaseAddress"];

        private AuthenticationContext authContext = null;

        private async void button1_Click(object sender, EventArgs e)
        {

            authContext = new AuthenticationContext(authority);

            AuthenticationResult authResult = authContext.AcquireToken(apiResourceId, clientId, redirectUri);

            HttpClient client = new HttpClient();

            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authResult.AccessToken);

            HttpResponseMessage response = await client.GetAsync(apiBaseAddress + "api/orders");
          
            string responseString = await response.Content.ReadAsStringAsync();
           
            MessageBox.Show(responseString);

        }
    }

What we’ve implemented is the following:

  • We read a bunch of settings which tell the client application the URI/name of the Azure AD tenant it should call and the client id we obtained after registering the client application in Azure AD. As well, we read the App ID URI (ApiResourceId), which tells the client which Web API it should call to get the data.
  • We created an instance of the “AuthenticationContext” class; this instance represents the authority our client will work with. In our case the authority is our Azure AD tenant, represented by the URI https://login.windows.net/taiseerjoudeharamex.onmicrosoft.com.
  • Then we call the method “AcquireToken”, which internally does the heavy lifting for us and communicates with our authority to obtain an access token. As we are building a client application we need to pass three parameters: a. the resource which the token will be sent to, b. the client id, and c. the redirect URI for this client.
  • Now you might ask yourself where the end user will enter his AD credentials. The nice thing here is that the AuthenticationContext class, which is part of ADAL, takes care of showing the authentication dialog in a popup and doing the right communication with the correct endpoint where the end user provides his AD credentials; there is no need to write a single extra line of code to do this, thanks to the nice abstraction provided by ADAL. We’ll see this in action once we test the application.
  • After the user provides his valid AD credentials, a token is obtained and returned as a property of the “AuthenticationResult” response along with other properties.
  • Finally we do an ordinary HTTP GET to our secure endpoint (/api/orders) and pass the obtained access token in the Authorization header using the bearer scheme. If everything goes correctly we’ll receive HTTP status code 200 along with the secured orders data.

Step 12: Testing the solution

We are ready to test the application. In your solution, right click on the Web API project “WebApiAzureAD.Api”, select “Debug” then “Start New Instance”; then jump to your desktop application and do the same, start a new instance of the EXE, click on the button, and you will see the ADAL authentication dialog pop up as in the image below.

ADAL Auth Window

Fill in the credentials of an AD user registered in our tenant and click sign in. If the credentials provided are correct you will receive an access token as in the image below; this access token will be sent in the Authorization header for the GET request and you will receive your orders data.

Azure AD Access Token

Now if you click the button again without terminating the EXE (client app), you will notice that the ADAL authorization popup does not show up again and you get the token directly without providing any credentials, thanks to the built-in token cache which keeps track of the tokens.

If you close the application and reopen it, the ADAL authorization popup will show up again, which may not be so convenient for end users. To overcome this you can persist the obtained tokens and store them securely in a local protected file using DPAPI protection. This is a better approach because you will not hit the authority server if you closed your client application and you still have a valid access token when you open the client again. You can read more about this implementation here.
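Below is a minimal sketch of what such a persistent cache could look like, assuming the TokenCache extensibility points that ship with ADAL v2 (BeforeAccess/AfterAccess, Serialize/Deserialize, HasStateChanged) and DPAPI via the ProtectedData class; the file name is just an illustrative choice. You would pass an instance of it to the AuthenticationContext constructor instead of relying on the default in-memory cache.

using System.IO;
using System.Security.Cryptography;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// Sketch of a file-based token cache protected with DPAPI (assumes ADAL v2 TokenCache hooks).
public class FileTokenCache : TokenCache
{
    private static readonly object FileLock = new object();
    private readonly string _cacheFilePath;

    public FileTokenCache(string cacheFilePath)
    {
        _cacheFilePath = cacheFilePath;
        this.BeforeAccess = BeforeAccessNotification;
        this.AfterAccess = AfterAccessNotification;
    }

    // Load and decrypt the persisted cache before ADAL reads from it.
    private void BeforeAccessNotification(TokenCacheNotificationArgs args)
    {
        lock (FileLock)
        {
            this.Deserialize(File.Exists(_cacheFilePath)
                ? ProtectedData.Unprotect(File.ReadAllBytes(_cacheFilePath), null, DataProtectionScope.CurrentUser)
                : null);
        }
    }

    // Encrypt and persist the cache whenever ADAL has changed it.
    private void AfterAccessNotification(TokenCacheNotificationArgs args)
    {
        if (this.HasStateChanged)
        {
            lock (FileLock)
            {
                File.WriteAllBytes(_cacheFilePath,
                    ProtectedData.Protect(this.Serialize(), null, DataProtectionScope.CurrentUser));
                this.HasStateChanged = false;
            }
        }
    }
}

// Usage (hypothetical file name): the cache now survives application restarts.
// authContext = new AuthenticationContext(authority, new FileTokenCache("TokenCache.dat"));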

Conclusion

Securing your ASP.NET Web API 2 by depending on Azure AD is straightforward; if you are building an API for the enterprise which will be used by different applications, you can easily get up and running in no time.

As well, the ADAL toolkit provides us with a great level of abstraction over the OAuth 2.0 specification, which makes developers’ lives easier when building clients; they can focus on business logic while the authentication/authorization part is handled by ADAL.

That’s it for now. I hope this tutorial will help you secure your ASP.NET Web API 2 using Azure AD; if you have any questions or suggestions, please drop me a comment.

The Source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh

Resources

The post Secure ASP.NET Web API 2 using Azure Active Directory, Owin Middleware, and ADAL appeared first on Bit of Technology.


Filip Woj: Things you didn’t know about action return types in ASP.NET Web API

When you are developing an ASP.NET Web API application, you can choose whether you want to return a POCO from your action, which can be any type that will then be serialized, an instance of HttpResponseMessage, or, since Web API … Continue reading

The post Things you didn’t know about action return types in ASP.NET Web API appeared first on StrathWeb.


Darrel Miller: HTTP in depth

Over the past few months I have written a number of posts relating to HTTP that have attempted to clarify some of the lesser understood areas of the HTTP specification and provide some practical guidance.

[Image: digging a big hole]

Posts that are complete

Posts that are on my backlog

  • Conditional Request Handling
  • Making Conditional Requests
  • The connection header
  • Range based requests

Feedback

If there are posts in the backlog that you would like to see sooner rather than later, feel free to ping me on Twitter.  If there are other areas that you think would be worth exploring in more detail, let me know.

Image credit: Digging a big hole https://flic.kr/p/8c6zD5


Darrel Miller: Using Etags and Last-modified headers to improve performance with HTTP conditional requests


In the recent update of the HTTP specification, the details of conditional requests have been split out and given their whole own specification.  Most developers I talk to are familiar with the idea of the 304 Not Modified response code, but whenever we start to dig deeper everyone, myself included, is missing pieces of the puzzle. This article is one of a series of blog posts that attempt to dig into aspects of HTTP and provide practical guidance on their usage.

[Image: a wax seal]

Conditional HTTP requests are used to perform an action, unless the target representation has changed, in which case they do something else.  There are two common ways that this mechanism is used:

  • we can prevent a client from having to download bytes that it already has.
  • we can prevent a client from making changes to a resource that has been changed by someone else since they retrieved it

Validators tell you when you should care about change

In order to make a conditional HTTP request, a special type of HTTP header called a validator needs to be sent in the response from the server.  The two validators defined by the HTTP specification are Etag and Last-modified.   Validators are values that allow you to compare if two resource representations are the same. The definition of "same" depends on the flavour of validator and the opinion of the origin server.

What flavour is your validator?

Validator values are considered either strong or weak.  Strong validators tell you if the response body is identical. Weak validators are used to allow an origin server to group multiple slightly different representations together as equivalent.

[Image: flavours]

A strong validator can be used to identify as equivalent two different representations where the headers differ.  For example, supposing you requested the same resource as text/plain and then as text/html. If the server supported returning both of those media types, then both representations could have exactly the same body bytes and therefore returning a 304 Not modified would be feasible for the second request.  However, the origin server does have the option of changing the validator value if it believes the changed header is semantically significant.

Weak validators are used to deliver new content to a client when the representation has significantly changed, based on what the origin server deems significant.  This capability is interesting because it can enable a server to manage its load by controlling how often it signals that a resource representation has changed.  This might be useful if clients are aggressively polling for changes to a representation.  The server can effectively lie to clients for a short period of time and return 304 in order to reduce the amount of bytes being delivered by the server.

Weak validators can also be used to identify equivalence between two representations that are functionally equivalent.  For example, a cache might contain a compressed copy of a representation and when it receives a conditional request for an uncompressed copy, it can identify that the client already has an up to date version of the representation even though the body bytes do not match.  This saves a cache from having to hold a cached copy of both the compressed and uncompressed version. 

Last Modified validator

The Last-modified header is simply a date and time using the HTTP date format. The value is considered to be the time when the representation was last changed.   The Last-modified validator is normally considered a weak validator because the HTTP date format only has the resolution of one second and it is often possible for multiple different representations to be retrieved within one second.  However, there are scenarios where a Last-modified header can be considered strong.

Making an Etag

There are no defined rules of how to manufacture an Etag value.  The guideline would be just to do whatever is the easiest thing that can identify a “version” of a resource representation.  Sometimes people will create a hash of the response body.  A timestamp value is another way of generating an Etag value.  If your internal entity maintains some kind of revision number, then that can be a very useful component of an Etag.  Sometimes internal entities are naturally immutable and have an internal identifier, in which case those identifiers can be used in an Etag.  It is important to remember that if there are multiple representations of the resource then the internal identifier needs to be combined with some kind of representation identifier to ensure the Etag is unique for each representation.
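As a sketch of one possible scheme (the names here are purely illustrative), an internal revision number can be combined with something that identifies the representation, such as the negotiated media type, and then hashed to keep the value opaque:

// Sketch: build an Etag value from an internal revision number plus a
// representation identifier, so each media type variant gets a distinct value.
public static class EtagValueHelper
{
    public static string MakeEtagValue(int revision, string mediaType)
    {
        // e.g. revision 42 rendered as text/html => "42-text/html", hashed to keep it opaque
        var raw = revision + "-" + mediaType;
        using (var sha = System.Security.Cryptography.SHA1.Create())
        {
            var hash = sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(raw));
            return System.BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }
}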

Making an Etag Header

An Etag header requires putting quotes around the Etag value.  There are also constraints on what characters can appear in an Etag value.  If you feel a burning desire to put unusual characters in your Etag value, you might want to check the spec first.  The recent update of the HTTP specification now removes the possibility of using escaped values in Etag values.

Below is a request from the Uber API with an Etag header.

[Image: an Uber API response showing an Etag header]

If you want to identify a value as a Weak Etag then you simply put W/ in front of the quoted Etag value.
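In ASP.NET Web API the EntityTagHeaderValue class takes care of the quoting rules and the W/ prefix for you; a quick sketch, where the tag value is made up:

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;

public static class EtagHeaderExample
{
    public static HttpResponseMessage Build()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);

        // A strong Etag; EntityTagHeaderValue expects the tag to already be quoted.
        response.Headers.ETag = new EntityTagHeaderValue("\"abc123\"");      // serializes as: ETag: "abc123"

        // Swap in the weak form when the representation is only "equivalent enough":
        // response.Headers.ETag = new EntityTagHeaderValue("\"abc123\"", isWeak: true);  // ETag: W/"abc123"

        return response;
    }
}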

Putting your validators into use

When I started this post, I had intended to make this a single post.  The more I read RFC 7232, the more I realized there is way more that needs to be said about handling conditional requests and making conditional requests, so I’m going to leave those discussions for follow-up posts.  In the meanwhile, if you are hungry for more HTTP insight, I’ve made an index of the HTTP related posts that I have done to-date.

Image credit: Wax seal https://flic.kr/p/7piVnt
Image credit: Flavors https://flic.kr/p/nm6sps


Darrel Miller: Caching resources with query strings

This afternoon Scott Hanselman posted a fairly innocuous question on twitter.  However, the question involved versioning of a RESTful API, which is a subject that is sure to bring out lots of opinions.  This post is less about the versioning question and more about the commonly held belief that caches do things differently with URLs that have query strings.

Query strings deserve more love

Putting a version number into a query string parameter is actually quite an interesting idea.  For one, it can be made optional, so that a consumer can choose to always consume the latest version, or request a specific version.  Secondly, I like it because I can selectively version just the resources that I want to.  The part I dislike about putting a version as the first segment of the URL is that it blindly versions every resource in the API whether there has been breaking changes or not.   Putting a version at the end of the URL is much better, but putting it in the path can be tricky for both client and server implementations.  Particularly if you are not using hypermedia.
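As a rough sketch of what an optional version in the query string could look like on the server side (the route, parameter name, default and payload shapes below are all illustrative, and attribute routing is assumed to be enabled):

using System.Web.Http;

// Sketch: an optional "version" query string parameter, defaulting to the latest version.
public class ProductsController : ApiController
{
    [Route("api/products")]
    public IHttpActionResult Get(int version = 2)
    {
        if (version == 1)
        {
            return Ok(new { name = "Widget" });             // old representation shape
        }
        return Ok(new { name = "Widget", sku = "W-001" });  // current representation shape
    }
}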

Caching is king

The last reason I like the version number in the query string is that it makes the specific version a separately cacheable resource.

When I mentioned my opinion to Scott on twitter, @jeremymiller responded with,

This is not the first time that I’ve heard this guidance.  In the past when I investigated this, I discovered that a number of HTTP caches provide configuration options that allow you to avoid caching resources with query string parameters from particular domains.  I recall reading about reasons like corporate proxy caches getting filled up with tiles from Google Maps, pushing out content that gets high hit rates in favour of stuff with very low hit rates.

As far as I understood, it was a configuration option that someone explicitly had to set to filter out certain sites. This didn’t seem like sufficient justification for the pervasiveness of the belief that query strings break caching.  So I decided to go hunting again.

A search for the source

My search kept bringing me back to a blog post by Steve Souders on how to update files that have been released with extremely long expiry times.  In this article Steve demonstrates that when using a version number in the query string, the Squid proxy cache does not cache the static file at all.  He goes on to say that Squid can be configured to cache these files, but it is not the default behaviour.  Based on this information, he rightly suggested that people not use versions in their query string if they want to cache static resources.

[Image: a squid]

That was in August 2008.  Earlier that year, in May 2008, Squid released version 2.7, which changed the default behaviour to no longer refuse to cache URLs that contained a query string.  The Squid documentation explains the changes, and claims that one of the reasons for the change was that sites like YouTube were intentionally putting values in the query string to prevent caches from reducing traffic to their site.  I guess someone had investors they needed to impress.

We are six years down the road and the guidance based on a default setting in an HTTP cache is still with us, discouraging developers from using a significant architectural component of a resource identifier.

I don’t doubt that there are numerous other web frameworks and output caching mechanisms that apply similar constraints to resources with query strings.  However, these components should be under the developer’s control.  There is no need to artificially constrain ourselves to describing cacheable resources using only the path component of a URL.

Since RFC 3986 was released in 2005 the role of the query string has been well defined:

The query component contains non-hierarchical data that, along with data in the path component, serves to identify a resource within the scope of the URI's scheme and naming authority

Let us move forward and start treating query strings with the respect that they deserve.

Image Credit: Squid https://flic.kr/p/79fueL



Filip Woj: Strongly typed direct routing link generation in ASP.NET Web API with Drum

ASP.NET Web API provides an IUrlHelper interface and the corresponding UrlHelper class as a general, built-in mechanism you can use to generate links to your Web API routes. In fact, it’s similar in ASP.NET MVC, so this pattern has been … Continue reading

The post Strongly typed direct routing link generation in ASP.NET Web API with Drum appeared first on StrathWeb.


Taiseer Joudeh: ASP.NET Web API Documentation using Swagger

Recently I was working on designing and implementing a large scale RESTful API using ASP.NET Web API; this RESTful API contains a large number of endpoints with different data models used in the request/response payloads.

Proper documentation and a solid API explorer (playground) are crucial to your API’s success and likability by developers. There is a very informative slide deck by John Musser which highlights the top 10 reasons that make developers hate your API, and unsurprisingly the top one is the lack of documentation and the absence of interactive documentation; so to avoid this pitfall I decided to document my Web API using Swagger.

Swagger is basically a framework for describing, consuming, and visualizing RESTful APIs. The nice thing about Swagger is that it keeps the documentation system, the client, and the server code in sync at all times; in other words, the documentation of methods, parameters, and models is tightly integrated into the server code.

ASP.NET Web API Documentation using Swagger

So in this short post I decided to add documentation using Swagger for a simple ASP.NET Web API project which contains a single controller with different HTTP methods; the live demo API explorer can be accessed here, and the source code can be found on GitHub. The final result for the API explorer will look like the image below:

Asp.Net Web Api Swagger

Swagger is language-agnostic, so there are different implementations for different platforms. For the ASP.NET Web API world there is a solid open source implementation named “Swashbuckle”; this implementation simplifies adding Swagger to any Web API project by following the simple steps I’ll highlight in this post.

The nice thing about Swashbuckle is that it has no dependency on ASP.NET MVC, so there is no need to include any MVC NuGet packages in order to enable API documentation; as well, Swashbuckle contains an embedded version of swagger-ui which will automatically be served up once Swashbuckle is installed.

Swagger-ui is basically a dependency-free collection of HTML, JavaScript, and CSS files that dynamically generate documentation and a sandbox from a Swagger-compliant API; it is a Single Page Application which forms the playground shown in the image above.

Steps to Add Swashbuckle to ASP.NET Web API

The sample ASP.NET Web API project I want to document is built using Owin middleware and hosted on IIS. I won’t go into details on how I built the Web API; I’ll focus on how I added Swashbuckle to the API.

For brevity the API contains a single controller named “StudentsController”, the source code for which can be viewed here, as well as a simple “Student” model class which can be viewed here.

Step 1: Install the needed Nuget Package

Open NuGet Package Manager Console and install the below package:

Install-Package Swashbuckle

Once this package is installed it will install a bootstrapper (App_Start/SwaggerConfig.cs) file which initiates Swashbuckle on application start-up using WebActivatorEx.

Step 2: Call the bootstrapper in “Startup” class

Now we need to add the highlighted line below to the “Startup” class, so open the Startup class and replace it with the code below:

[assembly: OwinStartup(typeof(WebApiSwagger.Startup))]
namespace WebApiSwagger
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
           
            HttpConfiguration config = new HttpConfiguration();
            
            WebApiConfig.Register(config);

            Swashbuckle.Bootstrapper.Init(config);

            app.UseWebApi(config);

        }
    }
}

Step 3: Enable generating XML documentation

This is not a required step to use Swashbuckle, but I believe it is very useful for API consumers, especially if you have complex data models. To enable XML documentation, right click on your Web API project -> “Properties”, choose the “Build” tab, scroll down to the “Output” section, check the “XML documentation file” check box and set the file path to “bin\[YourProjectName].XML” as in the image below. This will add an XML file to the bin folder which contains all the XML comments you added as annotations to the controllers or data models.

Swagger XML Comments

Step 4: Configure Swashbuckle to use XML Comments

By default Swashbuckle doesn’t include the annotated XML comments on the API controllers and data models in the generated specification and the UI; to include them we need to configure it, so open the file “SwaggerConfig.cs” and paste the highlighted code below:

public class SwaggerConfig
    {
        public static void Register()
        {
            Swashbuckle.Bootstrapper.Init(GlobalConfiguration.Configuration);

            // NOTE: If you want to customize the generated swagger or UI, use SwaggerSpecConfig and/or SwaggerUiConfig here ...

            SwaggerSpecConfig.Customize(c =>
            {
                c.IncludeXmlComments(GetXmlCommentsPath());
            });
        }

        protected static string GetXmlCommentsPath()
        {
            return System.String.Format(@"{0}\bin\WebApiSwagger.XML", System.AppDomain.CurrentDomain.BaseDirectory);
        }
    }

Step 5: Start annotating your API methods and data models

Now we can start adding XML comments to the API methods. For example, take a look at the HTTP POST method in the Students controller shown in the code below:

/// <summary>
        /// Add new student
        /// </summary>
        /// <param name="student">Student Model</param>
        /// <remarks>Insert new student</remarks>
        /// <response code="400">Bad request</response>
        /// <response code="500">Internal Server Error</response>
        [Route("")]
        [ResponseType(typeof(Student))]
        public IHttpActionResult Post(Student student)
        {
            //Method implementation goes here....
        }

You will notice that each XML tag is reflected in the Swagger properties as in the image below:
Swagger XML Tags Mapping

Summary

You can add Swashbuckle seamlessly to any Web API project, and then you can easily customize the generated specification along with the UI (swagger-ui). You can check the documentation and troubleshooting section here.

That’s it for now; hopefully this post will be useful for anyone looking to document their ASP.NET Web API. If you have any questions or you have used another tool for documentation, please drop me a comment.

The live demo API explorer can be accessed here, and the source code can be found on GitHub.

Follow me on Twitter @tjoudeh

The post ASP.NET Web API Documentation using Swagger appeared first on Bit of Technology.


Darrel Miller: A drive by review of the Uber API

Uber recently announced the availability of a public API.  I decided to take it for a spin and provide some commentary.  The quick version is that it is a fairly standard HTTP API and that is both a good thing and a bad thing.

You can find the official documentation for the Uber API on their developer site.  In general the documentation is easy to read and it seems fairly comprehensive for what is, at the moment, a pretty simple API.  From what I can see, it is just a read-only API at the moment, but they do advertise that "more endpoints are coming".

[Image: Uber]

Versioning

The API reference document starts with this,

All API requests are made to https://api.uber.com/<version>, where version is currently v1

It is unfortunate that the Uber API developers have not spent more time at conferences like API Craft, RESTFest, APIDays, and EndpointCon, where it is becoming increasingly common knowledge that putting a v1 at the front of your URI is not a best practice.  There are many other, more finely grained ways of doing versioning when you really must introduce breaking changes.  Having a version number as the first path segment creates a huge cliff where potentially everything changes in v2, and it can become a nightmare for client and server developers.

HTTP Status Codes

The documentation then lists the HTTP status codes.  I'm really not sure why the writers of the documentation feel the need to rewrite what is supposed to be standardized.  However, it is interesting that they chose to use 422 to distinguish between "invalid content" and a "malformed request".   It will be interesting to see how they make that distinction.

It does amuse me that they only include the 500 error in the 5XX range.  I'm pretty sure most clients are going to need to account for 502, 503 and 504 as well.  Enumerating out the response messages that a server supports is very misleading to client developers.  Clients need to be built to account for every HTTP status code because you never know what other status code returning intermediary is going to be sitting between you and the origin server.  There is no harm in defining actions based on ranges of status code values.  It's not as if every status code needs explicit handling code.
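For what it's worth, acting on ranges takes very little client code; a rough sketch:

using System.Net.Http;

// Sketch: act on the class of the status code (2xx, 4xx, 5xx) instead of
// trying to enumerate every value the server might document.
public static class StatusCodeHandling
{
    public static string Classify(HttpResponseMessage response)
    {
        switch ((int)response.StatusCode / 100)
        {
            case 2: return "success";
            case 3: return "redirect";       // follow Location, re-issue the request, etc.
            case 4: return "client error";   // inspect the body, fix the request
            case 5: return "server error";   // back off and retry later
            default: return "unexpected";
        }
    }
}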

Error Messages

Once again, we have another API that has decided it needs to re-invent the "error message" wheel.  There are a number of efforts underway to create a standardized format, such as http-problem and vnd-error.  API providers should get behind one and not invent their own.

404

Rate Limiting

It is great that Uber has chosen to use the new 429 status code to indicate that rate limits have been exceeded, and it is commendable that they have re-used the headers from the Twitter API for indicating the rate limit status.  However, it would have been nice if they could have dropped the X- prefix, as recommended in RFC 6648.

I'm not really sure why there is a need to return the rate limiting information on every request.  Why not provide a resource I can query when I want to know what the rate limit is?  Why send those extra bytes over the wire? 

Authentication

Reading the authentication section is almost a delight, because I almost don't need to read it.  For resources that are specific to a particular user, it is OAuth 2.  This is the way it is supposed to be with HTTP APIs.  By taking advantage of the uniform interface, all the infrastructure issues are just supposed to work the same way.  I don't need to learn anything new.  At least I shouldn't.  Unfortunately when testing the API we identified a number of nits related to the OAuth2 process.  I suspect they will be ironed out soon enough though.

For resources that are not specific to a user, you use a server_token which is supplied when you register your application.  Interestingly, the documentation states that you MUST pass the token in an Authorization header using the Token authorization scheme.  This is awesome, just the way it should be done.  However, the reality is a bit different: if you look at the tutorial example, the server_token is passed as a query string parameter.  This is convenient, but you may find your server_token littered throughout other people's log files.  Both ways work, so just do the right thing!
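Passing the server_token the documented way takes only a couple of lines with HttpClient; a rough sketch, where the token value, path and coordinates are placeholders rather than anything taken from the Uber documentation:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class UberClientSketch
{
    // Sketch: send the server_token in the Authorization header with the "Token"
    // scheme, as the documentation asks, rather than in the query string.
    public static async Task<string> GetProductsAsync(string serverToken)
    {
        using (var client = new HttpClient { BaseAddress = new System.Uri("https://api.uber.com/") })
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Token", serverToken);

            // Illustrative resource path and coordinates.
            var response = await client.GetAsync("v1/products?latitude=45.5&longitude=-73.6");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}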

Resources

The API documentation chooses to call these "Endpoints" in the heading but then uses the term Resource within the actual documentation page.  It's one of those things that doesn't matter once you already know what they mean, but it can be confusing for people who are new and trying to understand which words are significant and which are not.

I am never very good at getting a grasp of APIs from looking through a list of URIs.  I much prefer a visual representation.  I'm used to building hypermedia APIs which are a bit easier to draw pictures of because they naturally have links between resources.  However, for non-hypermedia APIs like the Uber API I have found that a hierarchical diagram of the URI space by path segment can be a good visualization tool.

[Diagram: the Uber API URI space, organized by path segment]

From what I understand from the documentation, this is how the URI space is defined.  The documentation does a decent job at explaining what each of these resources represent.

Media Types

Unsurprisingly, the API simply returns application/json for all representations.  This means that it would be difficult for clients to do anything other than use the request URI to identify the structure and semantics of the response.  Even sniffing the content to determine semantics is tricky because resources like /v1/me don't have a root object name.  Coupling representation structure to the request URI is commonplace in the HTTP API world; I just wish developers had a better understanding of the price they are paying for taking this shortcut.

Caching

I am disappointed that we don't see any caching directives on the responses coming back from the API.  As all of the requests require an Authorization header it makes sense to choose not to make the responses publicly cacheable.  However, private caching would be perfectly viable for responses.  Especially for things like list of products.  Even more so considering the current performance of the API.  To retrieve the list of locations is currently taking around 350ms in my tests and once an HTTPS connection is established it drops down to around 150ms.  For a single request that isn't terrible, but I am doing this on a desktop with a decent broadband connection.

One mitigating factor is that there is an Etag on the products list, so at least it allows clients to perform conditional GET requests and spare the bytes over the wire.  However, the current list of products weighs in at around 1KB, so chances are it would all fit in a network packet anyway!
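Taking advantage of that Etag from the client side is cheap; a rough sketch with HttpClient, where the path, coordinates and cached values are placeholders:

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ConditionalGetSketch
{
    // Sketch: re-fetch the products list only if it has changed since last time.
    // cachedEtag comes from a previous response and must include the surrounding
    // quotes, e.g. "\"5a3f…\"" ; cachedBody is the representation saved alongside it.
    public static async Task<string> GetProductsIfChangedAsync(
        HttpClient client, string cachedEtag, string cachedBody)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, "v1/products?latitude=45.5&longitude=-73.6");
        request.Headers.IfNoneMatch.Add(new EntityTagHeaderValue(cachedEtag));

        var response = await client.SendAsync(request);
        if (response.StatusCode == HttpStatusCode.NotModified)
        {
            return cachedBody;   // nothing new came over the wire
        }
        return await response.Content.ReadAsStringAsync();
    }
}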

Summary

In general, it is a fairly simple HTTP API, that uses techniques that most API developers will be familiar with.  I'm happy that we are starting to see some stabilization around security mechanisms, but I do wish that there was a little more forward thinking in API design in general.

Hypermedia potential

Interestingly, there is quite a bit of opportunity here to take of advantage of hypermedia to define workflows in this kind of API.  There are likely a few fairly well defined paths through the process of selecting a product, verifying the prices, determining time to arrival and then selecting a car.  Along the way, the client application accumulates state: where the user is located, what type of car they want, which driver they want and finally accept a request for pickup.  This is an ideal scenario for hypermedia as the engine of application state to accumulate the information selected by the user whilst leading them on to the next step of the process.

 

Image credit: Uber https://flic.kr/p/evc5a9
Image credit: 404 sign https://flic.kr/p/ca95Ad
Image credit: receipts https://flic.kr/p/35oDiF


Filip Woj: ASP.NET Web API 2: Recipes is out!

My long promised book, ASP.NET Web API 2: Recipes, was published by Apress last week. I announced the book a while ago, when I also tried to explain the general idea behind the book. In short, I really wanted … Continue reading

The post ASP.NET Web API 2: Recipes is out! appeared first on StrathWeb.


Taiseer Joudeh: ASP.NET Web API 2 external logins with Facebook and Google in AngularJS app

OK, so it is time to enable ASP.NET Web API 2 external logins such as Facebook and Google, and then consume them in our AngularJS application. In this post we'll add support for logging in using the Facebook and Google+ external providers, then we'll associate those authenticated social accounts with local accounts.

Once we complete the implementation in this post we'll have an AngularJS application that uses OAuth bearer tokens to access any secured back-end API end point. This application will allow users to login using the resource owner password flow (username/password) along with refresh tokens, plus support for using external login providers. To follow along with this post I highly recommend reading the previous parts:

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

AngularJS External Logins

There is a great walkthrough by Rick Anderson which shows how you can enable end users to login using social providers. The walkthrough uses the VS 2013 MVC 5 template with individual accounts, and to be honest it is quite easy and straightforward to implement this if you want to use that template. But in my case, and if you're following along with the previous parts, you will notice that I've built the Web API from scratch and added the needed middlewares manually.

The MVC 5 template with individual accounts is quite complex; there are middlewares for cookies, external cookies, bearer tokens, and so on, so it takes time to digest how this template works, and you may not need all of these features in your API. There are two good posts describing how external logins are implemented in this template, by Brock Allen and Dominick Baier.

So what I tried to do at the beginning was to add the features and middlewares needed to support external logins to my back-end API (I wanted to use the minimum possible code to implement this). Everything went well except for the final step, where I should receive something called an external bearer token; that never happened, and I ended up with the same scenario as in this SO question. It would be great if someone could fork my repo and find a solution for this issue without adding the NuGet packages for ASP.NET MVC and ending up with all the binaries used in the VS 2013 MVC 5 template.

I didn't want to stop there, so I decided to do a custom implementation for external providers with a little help from the MVC 5 template. This implementation follows a different flow from the one in the MVC 5 template, so I'm open to suggestions, enhancements, and best practices on what we can add to the implementation I'll describe below.

Sequence of events which will take place during external login:

  1. The AngularJS application sends an HTTP GET request to an anonymous end point (/ExternalLogin) defined in our back-end API, specifying client_id, redirect_uri, and response_type. The GET request will look like this: http://ngauthenticationapi.azurewebsites.net/api/Account/ExternalLogin?provider=Google&response_type=token&client_id=ngAuthApp&redirect_uri=http://ngauthenticationweb.azurewebsites.net/authcomplete.html (more on the redirect_uri value later in the post)
  2. Once the end point receives the GET request, it will check if the user is authenticated; let us assume he is not, so it will notify the middleware responsible for the requested external provider (Google in our case) to take responsibility for this call.
  3. The consent screen for Google will be shown, and the user will provide his Google credentials to authenticate.
  4. Google will call back our back-end API, and an external cookie will be set containing the outcome of the authentication from Google (it contains all the claims from the external provider for the user).
  5. The Google middleware will be listening for an event named "Authenticated" where we'll have the chance to read all the external claims set by Google. In our case we'll be interested in reading the claim named "AccessToken", which represents a Google access token. The issuer of this claim is not LOCAL AUTHORITY, so we can't use this access token directly to authorize calls to our secure back-end API endpoints.
  6. Then we'll store the external provider's access token in a custom claim named "ExternalAccessToken", and the Google middleware will redirect back to the end point (/ExternalLogin).
  7. Now the user is authenticated using the external cookie so we need to check that the client_id and redirect_uri set in the initial request are valid and this client is configured to redirect for the specified URI (more on this later).
  8. Now the code checks whether the external user_id along with the provider is already registered as a local database account (with no password). In both cases the code will issue a 302 redirect to the URI specified in the redirect_uri parameter; this URI will contain the following values ("External Access Token", "Has Local Account", "Provider", "External User Name") as a URL hash fragment, not a query string.
  9. Once the AngularJS application receives the response, it will decide based on it whether the user has a local database account or not, and accordingly it will issue a request to one of the end points (/RegisterExternal or /ObtainLocalAccessToken). Both end points accept the external access token, which will be verified and then used to obtain a local access token issued by LOCAL AUTHORITY. This local access token can be used to access our back-end API secured end points (more on external token verification and the two end points later in the post).

Sounds complicated, right? Hopefully I'll be able to simplify describing this in the steps below, so let's jump to the implementation:

Step 1: Add new methods to repository class

The methods I'll add now will add support for creating a user without a password, as well as finding an external social user account and linking it with a local database account. They are self-explanatory methods and there is nothing special about them, so open the file "AuthRepository" and paste the code below:

public async Task<IdentityUser> FindAsync(UserLoginInfo loginInfo)
        {
            IdentityUser user = await _userManager.FindAsync(loginInfo);

            return user;
        }

        public async Task<IdentityResult> CreateAsync(IdentityUser user)
        {
            var result = await _userManager.CreateAsync(user);

            return result;
        }

        public async Task<IdentityResult> AddLoginAsync(string userId, UserLoginInfo login)
        {
            var result = await _userManager.AddLoginAsync(userId, login);

            return result;
        }

Step 2: Install the needed external social provider NuGet packages

In our case we need to add the Google and Facebook external providers, so open the NuGet Package Manager Console and install the 2 packages below:

Install-Package Microsoft.Owin.Security.Facebook -Version 2.1.0
Install-Package Microsoft.Owin.Security.Google -Version 2.1.0

Step 3: Create Google and Facebook Application

I won't go into details about this step; we need to create two applications, one for Google and another for Facebook. I followed exactly the steps mentioned here to create the apps, so you can do the same. We need to obtain the AppId and Secret for both social providers, as we will use them later. Below are the settings for my two apps:

AngularJS Facebook App AngularJS Google App

Step 4: Add the challenge result

Now add a new folder named "Results", then add a new class named "ChallengeResult" which implements "IHttpActionResult", and paste the code below:

public class ChallengeResult : IHttpActionResult
    {        
        public string LoginProvider { get; set; }
        public HttpRequestMessage Request { get; set; }

        public ChallengeResult(string loginProvider, ApiController controller)
        {
            LoginProvider = loginProvider;
            Request = controller.Request;
        }

        public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
        {
            Request.GetOwinContext().Authentication.Challenge(LoginProvider);

            HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.Unauthorized);
            response.RequestMessage = Request;
            return Task.FromResult(response);
        }
    }

The code we've just implemented above is responsible for calling the challenge, passing the name of the external provider we're using (Google, Facebook, etc.). This challenge won't fire unless our API sets the HTTP status code to 401 (Unauthorized); once that is done, the external provider middleware will communicate with the external provider system, which will display an authentication page for the end user (a UI page by Facebook or Google where you'll enter your username/password).

Step 5: Add Google and Facebook authentication providers

Now we want to add two new authentication provider classes where we'll be overriding the "Authenticated" method so we have the chance to read the external claims set by the external provider. This set of external claims will contain information about the authenticated user, and we're interested in the claim named "AccessToken".

As I mentioned earlier, this token is for the external provider and is issued by Google or Facebook. After we've received it we need to assign it to a custom claim named "ExternalAccessToken" so it will be available in the request context for later use.

So add two new classes named "GoogleAuthProvider" and "FacebookAuthProvider" to the folder "Providers" and paste the code below into the corresponding class:

public class GoogleAuthProvider : IGoogleOAuth2AuthenticationProvider
    {
        public void ApplyRedirect(GoogleOAuth2ApplyRedirectContext context)
        {
            context.Response.Redirect(context.RedirectUri);
        }

        public Task Authenticated(GoogleOAuth2AuthenticatedContext context)
        {
            context.Identity.AddClaim(new Claim("ExternalAccessToken", context.AccessToken));
            return Task.FromResult<object>(null);
        }

        public Task ReturnEndpoint(GoogleOAuth2ReturnEndpointContext context)
        {
            return Task.FromResult<object>(null);
        }
    }

public class FacebookAuthProvider : FacebookAuthenticationProvider
    {
        public override Task Authenticated(FacebookAuthenticatedContext context)
        {
            context.Identity.AddClaim(new Claim("ExternalAccessToken", context.AccessToken));
            return Task.FromResult<object>(null);
        }
    }

Step 6: Add social providers middleware to Web API pipeline

Now we need to introduce some changes to the "Startup" class; the changes are as below:

- Add three static properties as the code below:

public class Startup
    {
        public static OAuthBearerAuthenticationOptions OAuthBearerOptions { get; private set; }
        public static GoogleOAuth2AuthenticationOptions googleAuthOptions { get; private set; }
        public static FacebookAuthenticationOptions facebookAuthOptions { get; private set; }

//More code here...
}

- Add support for using external cookies; this is important so that external social providers will be able to set external claims. As well, initialize the "OAuthBearerOptions" property as in the code below:

public void ConfigureOAuth(IAppBuilder app)
        {
            //use a cookie to temporarily store information about a user logging in with a third party login provider
            app.UseExternalSignInCookie(Microsoft.AspNet.Identity.DefaultAuthenticationTypes.ExternalCookie);
            OAuthBearerOptions = new OAuthBearerAuthenticationOptions();

//More code here.....
}

- Lastly, we need to wire up the Google and Facebook social provider middleware to our OWIN server pipeline and set the AppId/Secret for each provider. The values are removed below, so replace them with your app values obtained in step 3:

//Configure Google External Login
            googleAuthOptions = new GoogleOAuth2AuthenticationOptions()
            {
                ClientId = "xxx",
                ClientSecret = "xxx",
                Provider = new GoogleAuthProvider()
            };
            app.UseGoogleAuthentication(googleAuthOptions);

            //Configure Facebook External Login
            facebookAuthOptions = new FacebookAuthenticationOptions()
            {
                AppId = "xxx",
                AppSecret = "xxx",
                Provider = new FacebookAuthProvider()
            };
            app.UseFacebookAuthentication(facebookAuthOptions);

Step 7: Add “ExternalLogin” endpoint

As we stated earlier, this end point will accept the GET requests originating from our AngularJS app, in the form: http://ngauthenticationapi.azurewebsites.net/api/Account/ExternalLogin?provider=Google&response_type=token&client_id=ngAuthApp&redirect_uri=http://ngauthenticationweb.azurewebsites.net/authcomplete.html

So open the class "AccountController" and paste the code below:

private IAuthenticationManager Authentication
        {
            get { return Request.GetOwinContext().Authentication; }
        }        

// GET api/Account/ExternalLogin
        [OverrideAuthentication]
        [HostAuthentication(DefaultAuthenticationTypes.ExternalCookie)]
        [AllowAnonymous]
        [Route("ExternalLogin", Name = "ExternalLogin")]
        public async Task<IHttpActionResult> GetExternalLogin(string provider, string error = null)
        {
            string redirectUri = string.Empty;

            if (error != null)
            {
                return BadRequest(Uri.EscapeDataString(error));
            }

            if (!User.Identity.IsAuthenticated)
            {
                return new ChallengeResult(provider, this);
            }

            var redirectUriValidationResult = ValidateClientAndRedirectUri(this.Request, ref redirectUri);

            if (!string.IsNullOrWhiteSpace(redirectUriValidationResult))
            {
                return BadRequest(redirectUriValidationResult);
            }

            ExternalLoginData externalLogin = ExternalLoginData.FromIdentity(User.Identity as ClaimsIdentity);

            if (externalLogin == null)
            {
                return InternalServerError();
            }

            if (externalLogin.LoginProvider != provider)
            {
                Authentication.SignOut(DefaultAuthenticationTypes.ExternalCookie);
                return new ChallengeResult(provider, this);
            }

            IdentityUser user = await _repo.FindAsync(new UserLoginInfo(externalLogin.LoginProvider, externalLogin.ProviderKey));

            bool hasRegistered = user != null;

            redirectUri = string.Format("{0}#external_access_token={1}&provider={2}&haslocalaccount={3}&external_user_name={4}",
                                            redirectUri,
                                            externalLogin.ExternalAccessToken,
                                            externalLogin.LoginProvider,
                                            hasRegistered.ToString(),
                                            externalLogin.UserName);

            return Redirect(redirectUri);

        }

By looking at the code above you can see there are a lot of things happening, so let's describe what this end point is doing:

a) By looking at this method's attributes you will notice that it is configured to ignore bearer tokens, and that it can be accessed either anonymously or when there is an external cookie set by an external authority (Facebook or Google). It is worth mentioning here that this end point will be called twice during authentication: the first call will be anonymous, and the second will happen once the external cookie has been set by the external provider.

b) Now the code flow will check whether the user has been authenticated (i.e. the external cookie has been set); if not, the ChallengeResult defined in step 4 will be returned.

c) Time to validate that the client and redirect URI set by the AngularJS application are valid and that this client is configured to allow redirects to this URI; we do not want to end up allowing redirection to any URI set by a caller application, which would open a security threat (visit this post to know more about clients and Allowed Origin). To do so we need to add a private helper function named "ValidateClientAndRedirectUri"; you can add it to the "AccountController" class as in the code below:

private string ValidateClientAndRedirectUri(HttpRequestMessage request, ref string redirectUriOutput)
        {

            Uri redirectUri;

            var redirectUriString = GetQueryString(Request, "redirect_uri");

            if (string.IsNullOrWhiteSpace(redirectUriString))
            {
                return "redirect_uri is required";
            }

            bool validUri = Uri.TryCreate(redirectUriString, UriKind.Absolute, out redirectUri);

            if (!validUri)
            {
                return "redirect_uri is invalid";
            }

            var clientId = GetQueryString(Request, "client_id");

            if (string.IsNullOrWhiteSpace(clientId))
            {
                return "client_Id is required";
            }

            var client = _repo.FindClient(clientId);

            if (client == null)
            {
                return string.Format("Client_id '{0}' is not registered in the system.", clientId);
            }

            if (!string.Equals(client.AllowedOrigin, redirectUri.GetLeftPart(UriPartial.Authority), StringComparison.OrdinalIgnoreCase))
            {
                return string.Format("The given URL is not allowed by Client_id '{0}' configuration.", clientId);
            }

            redirectUriOutput = redirectUri.AbsoluteUri;

            return string.Empty;

        }

        private string GetQueryString(HttpRequestMessage request, string key)
        {
            var queryStrings = request.GetQueryNameValuePairs();

            if (queryStrings == null) return null;

            var match = queryStrings.FirstOrDefault(keyValue => string.Compare(keyValue.Key, key, true) == 0);

            if (string.IsNullOrEmpty(match.Value)) return null;

            return match.Value;
        }

d) After we validate the client and redirect URI we need to get the external login data along with the "ExternalAccessToken" which has been set by the external provider, so we need to add a private class named "ExternalLoginData". To do so, add this class inside the same "AccountController" class as in the code below:

private class ExternalLoginData
        {
            public string LoginProvider { get; set; }
            public string ProviderKey { get; set; }
            public string UserName { get; set; }
            public string ExternalAccessToken { get; set; }

            public static ExternalLoginData FromIdentity(ClaimsIdentity identity)
            {
                if (identity == null)
                {
                    return null;
                }

                Claim providerKeyClaim = identity.FindFirst(ClaimTypes.NameIdentifier);

                if (providerKeyClaim == null || String.IsNullOrEmpty(providerKeyClaim.Issuer) || String.IsNullOrEmpty(providerKeyClaim.Value))
                {
                    return null;
                }

                if (providerKeyClaim.Issuer == ClaimsIdentity.DefaultIssuer)
                {
                    return null;
                }

                return new ExternalLoginData
                {
                    LoginProvider = providerKeyClaim.Issuer,
                    ProviderKey = providerKeyClaim.Value,
                    UserName = identity.FindFirstValue(ClaimTypes.Name),
                    ExternalAccessToken = identity.FindFirstValue("ExternalAccessToken"),
                };
            }
        }

e) Then we need to check whether this social login (external user id plus external provider) is already linked to a local database account or whether this is the first time the user authenticates; based on this we set the flag "hasRegistered", which will be returned to the AngularJS application.

f) Lastly we need to issue a 302 redirect to the "redirect_uri" set by the client application along with 4 values ("external_access_token", "provider", "hasLocalAccount", "external_user_name"). Those values will be added as a URL hash fragment, not as a query string, so they can only be accessed by JS code running on the return_uri page, for example as shown below.
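Based on the format string in the code above, the redirect sent back to the client application would look roughly like this (the fragment values shown here are placeholders):

http://ngauthenticationweb.azurewebsites.net/authcomplete.html#external_access_token=xxxx&provider=Google&haslocalaccount=False&external_user_name=SomeUser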

Now the AngularJS application has those values, including the external access token, which can't be used to access our secured back-end endpoints. To solve this we need to issue a local access token with the help of this external access token, so we'll add two new end points which accept the external access token, validate it, and then generate a local access token.

Why did we add two end points? If the external user is not linked to a local database account, we need to issue an HTTP POST request to the new endpoint "/RegisterExternal"; if the external user is already linked to a local database account, then we just need to obtain a local access token by issuing an HTTP GET to the endpoint "/ObtainLocalAccessToken".

Before adding the two end points we need to add two helper methods which are used by both of them.

Step 8: Verifying external access token

We've received the external access token from the external provider and returned it to the front-end AngularJS application. Now we need to make sure that this external access token, which will be sent back from the front-end app to our back-end API, is valid; valid means it is a correct access token, not expired, and issued for the same client id configured in our back-end API. It is very important to check that the external access token is issued to the same client id, because you do not want our back-end API to end up accepting valid external access tokens generated by different apps (client ids). You can read more about this here.

To do so we need to add a class named "ExternalLoginModels" under the folder "Models" and paste the code below:

namespace AngularJSAuthentication.API.Models
{
    public class ExternalLoginViewModel
    {
        public string Name { get; set; }

        public string Url { get; set; }

        public string State { get; set; }
    }

    public class RegisterExternalBindingModel
    {
        [Required]
        public string UserName { get; set; }

         [Required]
        public string Provider { get; set; }

         [Required]
         public string ExternalAccessToken { get; set; }

    }

    public class ParsedExternalAccessToken
    {
        public string user_id { get; set; }
        public string app_id { get; set; }
    }
}

Then we need to add a private helper function named "VerifyExternalAccessToken" to the class "AccountController"; the implementation of this function is as below:

private async Task<ParsedExternalAccessToken> VerifyExternalAccessToken(string provider, string accessToken)
        {
            ParsedExternalAccessToken parsedToken = null;

            var verifyTokenEndPoint = "";

            if (provider == "Facebook")
            {
                //You can get it from here: https://developers.facebook.com/tools/accesstoken/
                //More about debug_token here: http://stackoverflow.com/questions/16641083/how-does-one-get-the-app-access-token-for-debug-token-inspection-on-facebook

                var appToken = "xxxxx";
                verifyTokenEndPoint = string.Format("https://graph.facebook.com/debug_token?input_token={0}&access_token={1}", accessToken, appToken);
            }
            else if (provider == "Google")
            {
                verifyTokenEndPoint = string.Format("https://www.googleapis.com/oauth2/v1/tokeninfo?access_token={0}", accessToken);
            }
            else
            {
                return null;
            }

            var client = new HttpClient();
            var uri = new Uri(verifyTokenEndPoint);
            var response = await client.GetAsync(uri);

            if (response.IsSuccessStatusCode)
            {
                var content = await response.Content.ReadAsStringAsync();

                dynamic jObj = (JObject)Newtonsoft.Json.JsonConvert.DeserializeObject(content);

                parsedToken = new ParsedExternalAccessToken();

                if (provider == "Facebook")
                {
                    parsedToken.user_id = jObj["data"]["user_id"];
                    parsedToken.app_id = jObj["data"]["app_id"];

                    if (!string.Equals(Startup.facebookAuthOptions.AppId, parsedToken.app_id, StringComparison.OrdinalIgnoreCase))
                    {
                        return null;
                    }
                }
                else if (provider == "Google")
                {
                    parsedToken.user_id = jObj["user_id"];
                    parsedToken.app_id = jObj["audience"];

                    if (!string.Equals(Startup.googleAuthOptions.ClientId, parsedToken.app_id, StringComparison.OrdinalIgnoreCase))
                    {
                        return null;
                    }

                }

            }

            return parsedToken;
        }

The code above is not the prettiest code I've written and it could be written in a better way, but the essence of this helper method is to validate the external access token. For example, if we take a look at the Google case, you will notice that we are issuing an HTTP GET request to an endpoint defined by Google which is responsible for validating this access token. If the token is valid we'll read the app_id (client_id) and the user_id from the response, and then we need to make sure that the app_id returned from this request is exactly the same app_id used to configure the Google app in our back-end API. If there are any differences then we'll consider this external access token invalid.

Note about Facebook: to validate a Facebook external access token you need to obtain another token for your application, named appToken; you can get it from here.

Step 9: Generate local access token

Now we need to add another helper function which will be responsible for issuing the local access token that can be used to access our secured back-end API end points. The response for this token needs to match the response we obtain when we call the end point "/token". There is nothing special in this method; we are only setting the claims for the local identity and then calling "OAuthBearerOptions.AccessTokenFormat.Protect", which generates the local access token. So add a new private function named "GenerateLocalAccessTokenResponse" to the "AccountController" class as in the code below:

private JObject GenerateLocalAccessTokenResponse(string userName)
        {

            var tokenExpiration = TimeSpan.FromDays(1);

            ClaimsIdentity identity = new ClaimsIdentity(OAuthDefaults.AuthenticationType);

            identity.AddClaim(new Claim(ClaimTypes.Name, userName));
            identity.AddClaim(new Claim("role", "user"));

            var props = new AuthenticationProperties()
            {
                IssuedUtc = DateTime.UtcNow,
                ExpiresUtc = DateTime.UtcNow.Add(tokenExpiration),
            };

            var ticket = new AuthenticationTicket(identity, props);

            var accessToken = Startup.OAuthBearerOptions.AccessTokenFormat.Protect(ticket);

            JObject tokenResponse = new JObject(
                                        new JProperty("userName", userName),
                                        new JProperty("access_token", accessToken),
                                        new JProperty("token_type", "bearer"),
                                        new JProperty("expires_in", tokenExpiration.TotalSeconds.ToString()),
                                        new JProperty(".issued", ticket.Properties.IssuedUtc.ToString()),
                                        new JProperty(".expires", ticket.Properties.ExpiresUtc.ToString())
        );

            return tokenResponse;
        }

Step 10: Add “RegisterExternal” endpoint to link social user with local account

After we have added the helper methods needed, it is time to add a new API end point which will be used to register the external user as a local account and then link the external login with that newly created local account. This method should be accessible anonymously because we do not have a local access token yet, so add a new method named "RegisterExternal" to the class "AccountController" as in the code below:

// POST api/Account/RegisterExternal
        [AllowAnonymous]
        [Route("RegisterExternal")]
        public async Task<IHttpActionResult> RegisterExternal(RegisterExternalBindingModel model)
        {

            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            var verifiedAccessToken = await VerifyExternalAccessToken(model.Provider, model.ExternalAccessToken);
            if (verifiedAccessToken == null)
            {
                return BadRequest("Invalid Provider or External Access Token");
            }

            IdentityUser user = await _repo.FindAsync(new UserLoginInfo(model.Provider, verifiedAccessToken.user_id));

            bool hasRegistered = user != null;

            if (hasRegistered)
            {
                return BadRequest("External user is already registered");
            }

            user = new IdentityUser() { UserName = model.UserName };

            IdentityResult result = await _repo.CreateAsync(user);
            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            var info = new ExternalLoginInfo()
            {
                DefaultUserName = model.UserName,
                Login = new UserLoginInfo(model.Provider, verifiedAccessToken.user_id)
            };

            result = await _repo.AddLoginAsync(user.Id, info.Login);
            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            //generate access token response
            var accessTokenResponse = GenerateLocalAccessTokenResponse(model.UserName);

            return Ok(accessTokenResponse);
        }

By looking at the code above you will notice that we need to issue an HTTP POST request to the end point http://ngauthenticationapi.azurewebsites.net/api/account/RegisterExternal, where the request body will contain a JSON object with "userName", "provider", and "externalAccessToken" properties.

The code in this method is doing the following:

  • Once we receive the request we'll call the helper method "VerifyExternalAccessToken" described earlier to ensure that this external access token is valid and was generated using our Facebook or Google application defined in our back-end API.
  • We need to check whether the combination of the user_id obtained from the external access token and the provider has already been registered in our system; if not, we'll add a new user (using the username passed in the request body, with no password) to the table "AspNetUsers".
  • Then we need to link the local account created for this user to the provider and provider key (the external user_id) and save this to the table "AspNetUserLogins".
  • Lastly we'll call the helper method "GenerateLocalAccessTokenResponse" described earlier, which generates the local access token and returns it in the response body, so the front-end application can use this access token to access our secured back-end API endpoints.

The request body will look like the image below:

External Login Request
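For illustration, issuing that request from C# with HttpClient could look roughly like the sketch below; the values are placeholders, the URL is the demo API mentioned in the post, and this is not code taken from the AngularJS app itself:

using (var client = new HttpClient())
{
    var model = new
    {
        userName = "SomeUser",                          // preferred local username (placeholder)
        provider = "Google",
        externalAccessToken = "EXTERNAL_ACCESS_TOKEN"   // token received in the URL hash fragment (placeholder)
    };

    // PostAsJsonAsync comes from the Microsoft.AspNet.WebApi.Client package
    var response = client.PostAsJsonAsync(
        "http://ngauthenticationapi.azurewebsites.net/api/account/RegisterExternal", model).Result;

    // On success the body contains the local access token response
    // produced by GenerateLocalAccessTokenResponse
    var tokenJson = response.Content.ReadAsStringAsync().Result;
}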

Step 11: Add “ObtainLocalAccessToken” for linked external accounts

Now, this endpoint will be used to generate local access tokens for external users who have already registered and linked their external identities to a local account. This method is accessed anonymously and accepts 2 query strings (Provider, ExternalAccessToken). The end point can be accessed by issuing an HTTP GET request to a URI such as: http://ngauthenticationapi.azurewebsites.net/api/account/ObtainLocalAccessToken?provider=Facebook&externalaccesstoken=CAAKEF……….

So add a new method named "ObtainLocalAccessToken" to the controller "AccountController" and paste the code below:

[AllowAnonymous]
        [HttpGet]
        [Route("ObtainLocalAccessToken")]
        public async Task<IHttpActionResult> ObtainLocalAccessToken(string provider, string externalAccessToken)
        {

            if (string.IsNullOrWhiteSpace(provider) || string.IsNullOrWhiteSpace(externalAccessToken))
            {
                return BadRequest("Provider or external access token is not sent");
            }

            var verifiedAccessToken = await VerifyExternalAccessToken(provider, externalAccessToken);
            if (verifiedAccessToken == null)
            {
                return BadRequest("Invalid Provider or External Access Token");
            }

            IdentityUser user = await _repo.FindAsync(new UserLoginInfo(provider, verifiedAccessToken.user_id));

            bool hasRegistered = user != null;

            if (!hasRegistered)
            {
                return BadRequest("External user is not registered");
            }

            //generate access token response
            var accessTokenResponse = GenerateLocalAccessTokenResponse(user.UserName);

            return Ok(accessTokenResponse);

        }

By looking at the code above you will notice that we’re doing the following:

  • Make sure that Provider and ExternalAccessToken query strings are sent with the HTTP GET request.
  • Verifying the external access token as we did in the previous step.
  • Check if the user_id and provider combination is already linked in our system.
  • Generate a local access token and return this in the response body so front-end application can use this access token to access our secured back-end API endpoints.

Step 12: Updating the front-end AngularJS application

I've updated the live front-end application to support social logins, as in the image below:

AngularJS Social Login

Now once the user clicks on one of the social login buttons, a popup window will open targeting the URI: http://ngauthenticationapi.azurewebsites.net/api/Account/ExternalLogin?provider=Google&response_type=token&client_id=ngAuthApp &redirect_uri=http://ngauthenticationweb.azurewebsites.net/authcomplete.html and the steps described earlier in this post will take place.

What is worth mentioning here is that the "authcomplete.html" page is an empty page with a JS function responsible for reading the URL hash fragments and passing them to an AngularJS controller in a callback function. Based on the value of the fragment (hasLocalAccount), the AngularJS controller will decide either to call the end point "/ObtainLocalAccessToken" or to display a view named "associate", which gives the end user the chance to set a preferred username and then calls the endpoint "/RegisterExternal".

The "associate" view will look like the image below:

AngularJS Social Login Associate

That is it for now, folks! Hopefully this implementation will be useful for integrating social logins with your ASP.NET Web API while keeping the API isolated from any front-end application. I believe that by using this approach we can easily integrate native iOS or Android apps that use the Facebook SDK with our back-end API: usually the Facebook SDK will return a Facebook access token, and we can then obtain a local access token as described earlier.

If you have any feedback, comments, or a better way to implement this, then please drop me a comment. Thanks for reading!

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

Follow me on Twitter @tjoudeh

References

  1. Informative post by Brock Allen about External logins with OWIN middleware.
  2. Solid post by Dominick Baier on Web API Individual Accounts.
  3. Great post by Rick Anderson on VS 2013 MVC 5 template with individual accounts.
  4. Helpful post by Valerio Gheri on How to login with Facebook access token.

The post ASP.NET Web API 2 external logins with Facebook and Google in AngularJS app appeared first on Bit of Technology.


Darrel Miller: Centralized exception handling using a ASP.NET Web API MessageHandler

ASP.NET Web API 2.1 introduced some significant improvements to the mechanisms that support global error handling. Before this release there were a number of different types of errors that would be handled directly by the runtime and there was no easy way to intercept these errors and add your own custom behavior.  The standard guidance suggests you register these new handlers as services, but I prefer a different approach that seems more natural to me.

The pipe that all flows through

Every request that is processed by Web API goes through the HttpMessageHandler pipeline until it is dispatched to a controller.  The message handler pipeline works in a Russian doll approach where the request goes through each handler from first to last and then the response returns back through the same handlers in reverse. 

pipe

A trap at the end of the pipe

If a controller cannot process a request for some reason and decides that the appropriate response is to throw an exception, then my gut tells me that somewhere up the call stack should be the place where that exception is caught.  If I want to have a place where I can catch all exceptions generated by message handlers, the framework dispatching code, or the controllers themselves, then having an ErrorHandlingMessageHandler at the front of the message handler pipeline seems like the most natural place.

    public class ErrorHandlerMessageHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
            CancellationToken cancellationToken)
        {
            try
            {
                return await base.SendAsync(request, cancellationToken);
            }
            catch (Exception ex)
            {
                return GlobalErrorHandler.ConvertExceptionToHttpResponse(ex);
            }
        }
    }
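The post does not show the registration step, but wiring this handler in as the outermost one would look something like the line below (assuming the usual HttpConfiguration instance named config):

// Insert at position 0 so this handler wraps everything else in the pipeline
// and gets the last chance to catch exceptions on the way out
config.MessageHandlers.Insert(0, new ErrorHandlerMessageHandler());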

It will not work when…

With this approach there are two types of exceptions that my message handler will not catch: the first is connectivity errors detected by the HttpServer.  For example the client closed the TCP connection whilst waiting for the response.  The second is serialization errors generated when the HttpServer finally asks the HttpContent to serialize its bytes to a stream and for some reason it fails.  I discussed a workaround to this scenario in another post about debugging serialization errors.

Wipeout

Turn off the default behaviour

By default, if you have registered IExceptionHandler and IExceptionLogger services, those services will be called when exceptions are trapped in framework code.  The expected behavior is that the registered services will create an HttpResponseMessage and set the context object's Result property to direct the framework to return that response.  That response will be returned back up the pipeline and the message handlers in the pipeline will be somewhat ignorant of the failed request.

public class GlobalErrorHandlerService : IExceptionHandler
{
    public Task HandleAsync(ExceptionHandlerContext context, CancellationToken cancellationToken)
    {
        if (context.CatchBlock != ExceptionCatchBlocks.HttpServer)
        {
            context.Result = null;
        }
        return Task.FromResult(0);
    }
}

However, if your registered services explicitly set the Result property to null, then the framework will take that as an indication that your services have not handled the error and will re-throw the exception that was initially raised.  This allows each message handler in the pipeline the opportunity to catch the exception and do any processing before the error handling message handler converts the exception to an actual HTTP response message.  If the exception is caught up in the HttpServer framework code, then it is too late for the message handler to catch it.
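For completeness, a pass-through handler like the one above still needs to be registered so the framework will call it; a sketch of that registration, again assuming an HttpConfiguration named config:

// Replace the default exception handler service so unhandled exceptions
// bubble back up to the ErrorHandlerMessageHandler at the front of the pipeline
config.Services.Replace(typeof(IExceptionHandler), new GlobalErrorHandlerService());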

 SurfStyle

It's a style thing

I'm not going to claim any major benefits of this approach to centralizing error handling over the use of the IExceptionHandler service mechanism.  The advantage I see is that it is just one less interaction protocol to deal with.  I already deal with the message handler pipeline, I understand how it works, and I know how to intercept requests and responses using it, and there is nothing really about it that is Web API specific.  With the new mechanisms introduced by Web API, I feel like I need to learn a new set of interfaces, how the framework interacts with those interfaces, and what my responsibilities are as an interface implementer.  I get especially frustrated by the pattern of passing in context objects that require me to set properties as return values.  It just feels hokey.

One slightly less touchy-feely difference between the two approaches relates to how exception handling message handlers are able to intercept some of the error responses generated by the Web API framework. For example, when the routing mechanism fails to identify an appropriate action method, no exception is thrown; instead a response is created with a 404 status code and an HTTP body that contains certain property values.  An error handling message handler could intercept those responses and update the body of the response to be something more standardized.  Something like http-problem, for example.
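As a rough sketch of that idea (the body below is a made-up example loosely shaped like http-problem, not a finished design), the same SendAsync override could rewrite those responses before returning them:

// inside ErrorHandlerMessageHandler.SendAsync
var response = await base.SendAsync(request, cancellationToken);

// If the framework itself produced a 404, standardize the body
if (response.StatusCode == HttpStatusCode.NotFound && response.Content != null)
{
    response.Content = new StringContent(
        "{\"title\":\"Resource not found\",\"status\":404}",
        Encoding.UTF8,
        "application/problem+json");
}
return response;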

Web API your way

Although this approach is not going to appeal to everyone, it does demonstrate an important aspect of the Web API framework.  ASP.NET Web API was never intended to be a particularly opinionated framework.  It was designed to allow people to build HTTP based applications in many different ways.  There are many different extensibility points and components that can be completely swapped out and replaced.  This is a key strength of the framework.  You should take advantage of it.

Image Credit: Pipe https://flic.kr/p/9GuHGn
Image Credit: Wipeout https://flic.kr/p/8vfZYQ
Image Credit: Surf Style https://flic.kr/p/5GNhLA


Darrel Miller: These 8 lines of code can make debugging your ASP.Net Web API a little bit easier.

I think most of us ASP.NET Web API developers have, at some point, experienced the problem where our API is returning a 500 Internal Server Error, but tracing through with Visual Studio doesn't reveal any exceptions in our code.  This problem is often caused when a MediaTypeFormatter is unable to serialize an object.  This simple message handler can take away some of the pain of debugging these scenarios.

ButterflyNet

The code

I'll start with the solution, and try and explain why the problem occurs and show how this MessageHandler solves the problem.

public class BufferNonStreamedContentHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        var response = await base.SendAsync(request, cancellationToken);
        if (response.Content != null)
        {
            var services = request.GetConfiguration().Services;
            var bufferPolicy = (IHostBufferPolicySelector)services.GetService(typeof(IHostBufferPolicySelector));

            // If the host is going to buffer it anyway
            if (bufferPolicy.UseBufferedOutputStream(response))
            {
                // Buffer it now so we can catch the exception
                await response.Content.LoadIntoBufferAsync();
            }
        }
        return response;
    }
}

This message handler is installed by adding it to the collection of MessageHandlers on the Web API configuration object.

config.MessageHandlers.Add(new BufferNonStreamedContentHandler());

Just do it sooner

HighSpeedTrain

The message handler uses the IHostBufferPolicySelector to determine if the Host will buffer the response body before sending it over the wire.  For the vast majority of cases this will be true, and definitely will be true if an API returns a CLR object that will be serialized using the JsonMediaTypeFormatter or the XmlMediaTypeFormatter.

If we know the Host is going to buffer it, then we force the content to be buffered early so that we can trigger any exceptions that will occur during serialization while user code is still on the call stack.  This will allow Visual Studio to trap the exception.  Or, if you are trapping errors with a message handler, the exception can be caught and dealt with there.

Why does Web Api do that?

Normally, this check against IHostBufferPolicySelector is only done once all the message handlers have completed and control is returned to the HTTP Server Host.  At that point there is no longer any user code in the call stack, and therefore Visual Studio doesn't break unless you have configured it to break on framework code.

The buffering of content is deferred by default just in case the host decides it wants to stream the content directly to the network stream.  HTTP headers need to be written out before body bytes can be written out and message handlers might want to change HTTP headers, so Web API has to wait until all the message handlers have completed before it can start streaming content.

It's virtually free

If we have buffered the content early, then the Host is smart enough to know not to try and serialize the content again, so adding this handler will add a minimal amount of overhead.  We are paying the price of checking IHostBufferPolicySelector an extra time so that we can catch exceptions for content that will be buffered.

FreeHugs

This solution does not help if you really are serializing content directly to the network stream.  However, if you are already writing body bytes over the wire, then the status code has already been sent and there is little that can be done at that point to handle an exception cleanly. 

It can't hurt to try it

I suspect this little trick has saved me hours of head scratching, hopefully it will do the same for others.

Image Credit: Butterfly hunter https://flic.kr/p/nqGeY7
Image Credit: High speed train https://flic.kr/p/7suAc4
Image Credit: Free hugs https://flic.kr/p/4k3eAY


Dominick Baier: Announcing Thinktecture IdentityServer v3 – Beta 1

It’s done – and I am happy (and a bit exhausted) – a few minutes ago I closed the last open issue for Beta 1.

What’s new
It’s been 424 commits since we released Preview 1 – so there is quite a lot of new stuff, but the big features are:

  • A completely revamped configuration system that allows replacing bits and pieces of the core server itself – including a DI system
  • A plugin infrastructure that allows adding new endpoints (WS-Federation support is a plugin e.g.)
  • Refresh tokens
  • Infrastructure that allows customizing views and UI assets for re-branding as well as extensibility to insert your own workflows like registration or EULAs
  • Support for CORS and HTTP strict transport security
  • Content Security Policy (CSP) & stricter caching rules
  • Protection against click-jacking
  • Control over cookies (lifetime, naming etc)
  • more extensibility points

Repositories
We also split up the solution into separate repos and nugets for better composability:

Samples and documentation
We also have a separate repo for samples and added quite a bit of content to the wiki.

With that I am now leaving for holidays! ;) Give IdentityServer a try and give us feedback. A big thanks to all contributors and the people that engaged with us over the various channels!!!

Have a nice summer!


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, Uncategorized, WebAPI


Filip Woj: Dependency injection directly into actions in ASP.NET Web API

There is a ton of great material on the Internet about dependency injection in ASP.NET Web API. One thing that I have not seen anywhere though, is any information about how to inject dependencies into the action, instead of a … Continue reading

The post Dependency injection directly into actions in ASP.NET Web API appeared first on StrathWeb.


Darrel Miller: Everything you need to know about HTTP Header syntax but were afraid to ask

If you use HTTP then the chances are good that you have to deal with HTTP headers.  The syntax of HTTP headers has a long and tortured history, originating from the syntax of email headers.  All too often I see headers that don't conform to the specifications.  This makes everyone's job a little bit harder.  The recent releases of the HTTP specifications have done a fair amount of clarification and consolidation to make getting the syntax right easier.

Two stage parsing

HTTP Header parsing is broken down into two phases.  In phase one, the headers are extracted from the HTTP message into a set of name/value pairs.   The name is a case-insensitive token that is defined in the HTTP specification and registered in the Message Header registry, and the value is a string whose syntax is defined specifically for that header name in the HTTP specification.  The syntax of a HTTP header always looks like this:

header-field   = field-name ":" OWS field-value OWS

OWS means "optional whitespace" and header fields are delimited using CRLF.  Note that RFC7230 now recommends that there should be no whitespace between the header name and the colon.

It is a header, that's all I need to know

The ability to generically parse HTTP headers into these name value pairs without concern for the syntax of the field value is critical for performance and extensibility.  Not every component needs to look at every header, so the requirement for every intermediary in the path of the HTTP request to parse the header value contents would be wasteful.  Also, new headers are created regularly, so needing to update HTTP libraries whenever new header definitions are added would be problematic.
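To make phase one concrete, here is a minimal sketch of my own (not taken from any particular framework) that extracts the name/value pair without interpreting the value:

// header-field = field-name ":" OWS field-value OWS
static KeyValuePair<string, string> ParseHeaderLine(string line)
{
    int colon = line.IndexOf(':');
    if (colon <= 0)
    {
        throw new FormatException("Not a valid header field: " + line);
    }

    string name = line.Substring(0, colon);           // field-name (case-insensitive token)
    string value = line.Substring(colon + 1).Trim();  // trim the optional whitespace (OWS)
    return new KeyValuePair<string, string>(name, value);
}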

Wrapped lines are no more

There are a couple of additional issues relating to parsing the header lines.  In the past it was possible to allow header values to wrap onto the next line by prefixing the wrapped line with a whitespace character[1].  The most recent HTTP header specifications recommend no longer doing this[2].

Multiple header instances

The second major issue is related to headers that contain lists of values separated by a comma.  These headers can appear multiple times in a message.  A header parsing routine can aggregate these multiple headers into a single list of values.  The one exception to this rule is the Set-Cookie header.  Set-Cookie headers are allowed to appear multiple times despite not being a comma separated list.
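A sketch of what that aggregation step might look like, with Set-Cookie treated as the exception (again my own illustration rather than code from any particular library):

static IEnumerable<KeyValuePair<string, string>> AggregateHeaders(
    IEnumerable<KeyValuePair<string, string>> rawHeaders)
{
    // Combine repeated instances of a header into one comma separated value,
    // except Set-Cookie which has to stay as separate instances
    return rawHeaders
        .GroupBy(h => h.Key, StringComparer.OrdinalIgnoreCase)
        .SelectMany(g => g.Key.Equals("Set-Cookie", StringComparison.OrdinalIgnoreCase)
            ? g.AsEnumerable()
            : new[] { new KeyValuePair<string, string>(g.Key, string.Join(", ", g.Select(h => h.Value))) });
}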

The building blocks

Splitting the header field names and values is the easy part.  Unfortunately, that is where most HTTP frameworks that I've seen seem to give up, or only provide support for the most commonly used headers.  They lay the burden of parsing the semantics out of headers on the application developer.  This can be painful because every header field-value has its own syntax definition.  Fortunately, the HTTP specification provides some basic building blocks for defining the syntax of HTTP headers.

buildingblocks

Most headers, when simplified down to primitives, are a combination of token, quoted-string, comment, OWS (optional whitespace), RWS (required whitespace), and literals. There are also several rules for defining lists of expressions.

  • Tokens are strings of characters that avoid certain delimiter characters.
  • Quoted-strings are surrounded by double quotes and have a slightly different set of rules for what characters are allowed.
  • Comments are strings surrounded by parentheses with yet again another set of valid characters.

For a quick review of which characters are allowed and which are not, I created a chart below[3].

Syntax Rules

As I mentioned, each header has its own syntax rules.  For example, User-Agent is defined as,

User-Agent = product *( RWS ( product / comment ) )

Where,

product         = token ["/" product-version]
product-version = token

and *(x) means "zero or more x", (x/y) means "either x or y" and [x] means "x is optional".  This is a standardized syntax definition language called ABNF.  Many of the header rules are summarized in appendices of the specification in which they are defined.
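For example, the strongly typed header classes in .NET will only let you build values that match this grammar; the product name and comment below are invented:

var request = new HttpRequestMessage(HttpMethod.Get, "http://example.org/");

// product "/" product-version
request.Headers.UserAgent.Add(new ProductInfoHeaderValue("MyClient", "1.0"));
// comment, which must be wrapped in parentheses
request.Headers.UserAgent.Add(new ProductInfoHeaderValue("(Windows NT 6.3; x64)"));

// Serializes as:  User-Agent: MyClient/1.0 (Windows NT 6.3; x64)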

Lists of things

The most common way that HTTP headers allow you to specify lists of tokens is using the #(x) syntax[4], which means you can have zero or more x delimited with a comma.  As you can see from the example above, you can also have whitespace delimited lists. Parameters, which are often lists within a single element of a comma delimited list, will be delimited by semi-colons.

An implementation

The various HTTP frameworks that I have reviewed have varying support for parsing of header values.  Considering that it is just a bunch of grammar rules for parsing and production of strings, it would seem useful to me if there was a library that allowed developers to ensure that the headers they are sending conform to the specification without needing to constantly refer to the specification rules.

I believe the key requirements of a .Net framework library for HTTP header parsing and generating are:

  • support for all standard headers
  • support for creating new headers that use the standard header primitives
  • allow for parsing/generating individual headers
  • generate warnings for invalid headers when parsing and do best-guess parsing
  • have no external dependencies other than the .NET Framework
  • make efficient use of memory and be fast enough that its usage is negligible on the overall processing time of the message.

I'm currently working on a OSS library to do this, with the hope of being able to use it within OWIN middleware.  I'll blog about it when I have something to show.

[1] I suspect the reason header wrapping exists is based on the fact that headers were originally defined to be part of email messages that were limited to a 80 character line length.  It's really not relevant for headers in a HTTP message where there is no need to wrap.

[2] When specifications make a change to deprecate certain behaviour it is important to remember Postel's law.  When parsing headers, we have to assume that old clients/servers are going to continue doing the line folding, so it is essential that we write components that accept folded headers but we should never do it when generating headers.

[3]

Allowed Characters

| Hex   | Chars    | Token | Quoted String | Comment |
|-------|----------|-------|---------------|---------|
| 00-08 |          |       |               |         |
| 09    | <tab>    |       | y             | y       |
| 1A-1F |          |       |               |         |
| 20    | <Space>  |       |               |         |
| 21    | !        | y     | y             | y       |
| 22    | "        |       |               | y       |
| 23-27 | #$%&'    | y     | y             | y       |
| 28-29 | ()       |       | y             |         |
| 2A-2B | *+       | y     | y             | y       |
| 2C    | ,        |       | y             | y       |
| 2D-2E | -.       | y     | y             | y       |
| 2F    | /        | y     |               | y       |
| 30-39 | <digit>  |       | y             | y       |
| 3A-40 | :;<=>?@  |       | y             | y       |
| 41-5A | <ALPHA>  | y     | y             | y       |
| 5B    | [        |       | y             | y       |
| 5C    | \        |       |               |         |
| 5D    | ]        |       | y             | y       |
| 5E-60 | ^_`      | y     |               | y       |
| 61-7A | <alpha>  | y     | y             | y       |
| 7B    | {        |       | y             | y       |
| 7C    | \|       | y     | y             | y       |
| 7D    | }        |       | y             | y       |
| 7E    | ~        | y     | y             | y       |

 

[4] The #() list syntax is an extension to the standard ABNF rules that is defined with the HTTP specification.

Image credit: Raise hand https://flic.kr/p/2vgyWN
Image credit: Package https://flic.kr/p/8sgnwu


Henrik F. Nielsen: More Updates to Azure Mobile Services (Link)

Just posted the blog Azure Mobile Services .NET Updates describing new features now available for Azure Mobile Services including:

  • Support for CORS using ASP.NET Web API CORS enabling first class support for specifying CORS policy at a per-service, per-controller, or per-action level.
  • An extensible authentication model enabling you to control which authentication mechanisms are available for your mobile service clients. For example, you can add your own authentication mechanisms in addition to or in place of the default support for Azure Active Directory, Twitter, Facebook, Google, and Microsoft Account.
  • Support for Azure Active Directory Authentication using a server-side flow simplifying client authentication significantly.

If you are building cloud connected mobiles apps then check out Microsoft Azure Mobile Services and let us know what you think!

Have fun!

Henrik


Radenko Zec: How to fake session object / HttpContext for integration tests

Sometimes when we write integration tests we need to fake the HttpContext to test some functionality in a proper way.

In one of my projects I needed the ability to fake some session variables such as userState and maxId.

The project tested in this example is an ASP.NET Web API with added session support, but you can use the same approach in any ASP.NET / MVC project.

The implementation is very simple. First, we need to create the FakeHttpContext helper method.

 

 public HttpContext FakeHttpContext(Dictionary<string, object> sessionVariables,string path)
{
    var httpRequest = new HttpRequest(string.Empty, path, string.Empty);
    var stringWriter = new StringWriter();
    var httpResponse = new HttpResponse(stringWriter);
    var httpContext = new HttpContext(httpRequest, httpResponse);
    httpContext.User = new GenericPrincipal(new GenericIdentity("username"), new string[0]);
    Thread.CurrentPrincipal = new GenericPrincipal(new GenericIdentity("username"), new string[0]);
    var sessionContainer = new HttpSessionStateContainer(
      "id",
      new SessionStateItemCollection(),
      new HttpStaticObjectsCollection(),
      10,
      true,
      HttpCookieMode.AutoDetect,
      SessionStateMode.InProc,
      false);

   foreach (var var in sessionVariables)
   {
      sessionContainer.Add(var.Key, var.Value);
   }

   SessionStateUtility.AddHttpSessionStateToContext(httpContext, sessionContainer);
   return httpContext;
}


 

Once we have FakeHttpContext, we can easily use it in any integration test like this.

 

 [Test]
public void DeleteState_AcceptsCorrectExpandedState_DeletesState()
{
   HttpContext.Current = this.FakeHttpContext(
      new Dictionary<string, object> { { "UserExpandedState", 5 }, { "MaxId", 1000 } },
      "http://localhost:55024/api/v1/");

   string uri = "DeleteExpandedState";
   var result = Client.DeleteAsync(uri).Result;
   result.StatusCode.Should().Be(HttpStatusCode.OK);

   var state = statesRepository.GetState(5);
   state.Should().BeNull();
}

 

This is an example of an integration test that exercises the Delete method of the Web API.

The Web API in this example is hosted in memory for the purpose of integration testing.

If you like this article, don’t forget to subscribe to this blog so you don’t miss upcoming posts.

 
 

The post How to fake session object / HttpContext for integration tests appeared first on RadenkoZec blog.


Radenko Zec: Replace JSON.NET with Jil JSON serializer in ASP.NET Web API

I have recently come across a comparison of fast JSON serializers in .NET, which shows that Jil JSON serializer is one of the fastest.

Jil was created by Kevin Montrose, a developer at Stack Overflow, and it is apparently heavily used by Stack Overflow.

This is only one of many benchmarks you can find on the GitHub project website.

[Image: Jil serializer benchmark results]

You can find more benchmarks and the source code at https://github.com/kevin-montrose/Jil

In this short article I will cover how to replace the default JSON serializer in Web API with Jil.

Create Jil MediaTypeFormatter

First, you need to grab Jil from NuGet

PM> Install-Package Jil


After that, create a JilFormatter using the code below.

    public class JilFormatter : MediaTypeFormatter
    {
        private readonly Options _jilOptions;
        public JilFormatter()
        {
            _jilOptions=new Options(dateFormat:DateTimeFormat.ISO8601);
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));

            SupportedEncodings.Add(new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true));
            SupportedEncodings.Add(new UnicodeEncoding(bigEndian: false, byteOrderMark: true, throwOnInvalidBytes: true));
        }
        public override bool CanReadType(Type type)
        {
            if (type == null)
            {
                throw new ArgumentNullException("type");
            }
            return true;
        }

        public override bool CanWriteType(Type type)
        {
            if (type == null)
            {
                throw new ArgumentNullException("type");
            }
            return true;
        }

        public override Task<object> ReadFromStreamAsync(Type type, Stream readStream, System.Net.Http.HttpContent content, IFormatterLogger formatterLogger)
        {
	        return Task.FromResult(this.DeserializeFromStream(type, readStream));           
        }


        private object DeserializeFromStream(Type type,Stream readStream)
        {
            try
            {
                using (var reader = new StreamReader(readStream))
                {
                    MethodInfo method = typeof(JSON).GetMethod("Deserialize", new Type[] { typeof(TextReader),typeof(Options) });
                    MethodInfo generic = method.MakeGenericMethod(type);
                    return generic.Invoke(this, new object[]{reader, _jilOptions});
                }
            }
            catch
            {
                return null;
            }

        }


        public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, System.Net.Http.HttpContent content, TransportContext transportContext)
        {
            using (TextWriter streamWriter = new StreamWriter(writeStream))
            {
                JSON.Serialize(value, streamWriter, _jilOptions);
                return Task.FromResult(writeStream);
            }
        }
    }

 

This code uses reflection to invoke Jil's generic Deserialize method for whatever runtime type is being read.

Replace default JSON serializer

In the end, we need to remove the default JSON serializer.

Place this code at the beginning of WebApiConfig.Register:

config.Formatters.RemoveAt(0);
config.Formatters.Insert(0, new JilFormatter());
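
As a side note, RemoveAt(0) assumes the default JSON formatter sits at index 0. If you would rather not depend on its position, a small variation (my own, not from the original post) removes it by reference instead:

    config.Formatters.Remove(config.Formatters.JsonFormatter);
    config.Formatters.Insert(0, new JilFormatter());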

 

Feel free to test Jil with Web API, and don’t forget to subscribe here so you don’t miss a new blog post.

 

The post Replace JSON.NET with Jil JSON serializer in ASP.NET Web API appeared first on RadenkoZec blog.


Darrel Miller: Hypermedia as the engine of application state, the client-server dance

We are currently seeing a significant amount of discussion about building hypermedia APIs.  However, the server plays only part of the role in a hypermedia-driven system.  To take full advantage of the benefits of hypermedia, the client must allow the server to take the lead and drive the state of the client.  As I like to say, it takes two to Tango.

So you think you can dance?

Soon after I was married, my wife convinced me to take dance lessons with her.  Over the couple of years we spent taking lessons, I learned there were three types of people who join a dance studio.  There are people who want to get better at dancing, there are couples who are getting married who don't want to look like idiots during their 'first dance' and there are divorcees.  I'll leave it to you to figure out why the divorcees are there as I'll just be focusing on the other two groups.

The couples who are preparing for their weddings are usually under time and budgetary constraints, so they opt to learn a choreographed sequence of steps to a particular song.  They both learn the sequence of steps and, to an extent, dance their own independent steps whilst hanging on to each other.  It is a process that serves a purpose. It meets the goals of the couple, but it is a vastly inferior result to the approach taken when people's goal is simply to learn how to dance.

[Image: First dance]

In order to learn to dance there are a number of fundamentals that are required. It is essential to be able to follow the beat of whatever music you are trying to dance to.  There is a set of basic dance primitives that can be combined to make up dance sequences.  It is also important to understand the role of the man vs that of the woman when dancing (Note: these are traditional role names, and no longer necessarily correlate with gender). The man leads the dance, the woman follows.

As the dance progresses the man chooses the sequences of primitives to perform and uses hand signals, body position and weight change to communicate to the woman what steps are coming next.  There is no predefined, choreographed set of sequences. The man basically does whatever he wants within the constraints of the dance style. The woman follows.

When done right, it looks like magic

Watching a talented couple dance this freestyle way is often indistinguishable from watching a choreographed dance.  When people learn to dance this way, they can dance to any piece of music as long as the beat matches a style of dance they know, and they can dance with any partner.  Couples who learn a wedding dance, on the other hand, know one sequence to one piece of music and can only dance it with one partner.

The Client-Server dance

Building a client that can consume an HTTP API can be done in different ways.  You can build your application to be like a choreographed dance, where both the client and server know in advance what is going to happen.  When the client makes an HTTP request to a particular resource it knows in advance how the server will respond.  The challenge with this approach is that both parties need to have knowledge of the sequences and, more importantly, where they are up to in the sequence. If someone decides to make any changes, the other party is likely to get confused by the unplanned change.

A choreographed client

The last twenty years of building clients for distributed applications has taught us how to build highly choreographed clients.  We first learn the API that the server exposes and then teach our clients an intricate pattern of interactions in order to achieve our desired goals.  Our application is the dance we perform.

Frequently when building clients like this we will create a facade over the remote service and a view model to manage the state of the sequence of interactions. Consider the following example of a distributed application that is designed to perform the dance of turning on and off a light switch.

The service facade:

    public class SwitchService
    {
        private const string SwitchStateResource = "switch/state";
        private const string SwitchOnResource = "switch/on";
        private const string SwitchOffResource = "switch/off";

        private readonly HttpClient _client;

        public SwitchService(HttpClient client)
        {
            _client = client;
        }

        public async Task<bool> GetSwitchStateAsync()
        {
            var result = await _client.GetStringAsync(SwitchStateResource).ConfigureAwait(false);

            return bool.Parse(result);
        }

        public Task SetSwitchStateAsync(bool newstate)
        {
            if (newstate)
            {
                return _client.PostAsync(SwitchOnResource,null);
            }
            else
            {
                return _client.PostAsync(SwitchOffResource, null);
            }
        }
    }

The view model:

    public class SwitchViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        
        private readonly SwitchService _service;
        private bool _switchState;

        public SwitchViewModel(SwitchService service)
        {
            _service = service;
            _switchState = service.GetSwitchStateAsync().Result;
        }

        
        private bool SwitchState {
            get
            {
                return _switchState; 
            }
             set
            {
                _service.SetSwitchStateAsync(value).Wait();
                _switchState = value; 
                OnPropertyChanged("On");   // the view binds to On, so raise that name explicitly
                OnPropertyChanged("CanTurnOn");
                OnPropertyChanged("CanTurnOff");
            }
        }

        public bool On
        {
            get { return SwitchState; }
        }

        public void TurnOff()
        {
            SwitchState = false;
        }

        public void TurnOn()
        {
            SwitchState = true;
        }


        public bool CanTurnOn
        {
            get { return SwitchState == false; }
        }

        public bool CanTurnOff
        {
            get { return SwitchState; }
        }
        

        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

In our client view model we maintain the current SwitchState.  The client needs to know at any point in time whether the switch is on or off.  This information is provided to the view to present a visual representation to the user, and it is also used to drive the application logic that determines if we are allowed to turn the switch on or off again.  Our application wishes to prevent someone from trying to turn on the switch if it is already on, or turn off the switch if it is already off.  This is an extremely simple example but it is sufficient to illustrate the differences between the two approaches.

The important point to note is that, just like our engaged couple doing their dance, both the client and server must keep track of the current application state in order to know what they can and must do next.

Sometimes you just have to let go

In this next example, we take away the responsibility from the client of keeping track of state that is already being tracked by the server.  The client simply follows the lead of the server and trusts the server to provide it the necessary guidance.

[Image: Base jump]

We no longer need to provide a facade over the server API; instead we focus on understanding the messages communicated by the server.  For that we have created a class called SwitchDocument that allows the client to parse and interpret the message.

    public class SwitchDocument
    {
        public static SwitchDocument Load(Stream stream)
        {
            var switchStateDocument = new SwitchDocument();
            var jObject = JObject.Load(new JsonTextReader(new StreamReader(stream)));
            foreach (var jProp in jObject.Properties())
            {
                switch (jProp.Name)
                {
                    case "On":
                        switchStateDocument.On = (bool)jProp.Value;
                        break;
                    case "TurnOnLink":
                        switchStateDocument.TurnOnLink = new Uri((string)jProp.Value, UriKind.RelativeOrAbsolute);
                        break;
                    case "TurnOffLink":
                        switchStateDocument.TurnOffLink = new Uri((string)jProp.Value, UriKind.RelativeOrAbsolute);
                        break;
                }
            }
            return switchStateDocument;
        }

        public bool On { get; private set; }
        public Uri TurnOnLink { get; set; }
        public Uri TurnOffLink { get; set; }

        public static Uri SelfLink
        {
            get { return new Uri("switch/state", UriKind.Relative); }
        }
    }

Our view model now has the reduced role of simply presenting the information contained in the SwitchDocument to the view and providing a way to interact with the affordances it describes.

    public class SwitchHyperViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        private readonly HttpClient _client;
        private SwitchDocument _switchStateDocument = new SwitchDocument();

        public SwitchHyperViewModel(HttpClient client)
        {
            _client = client;
            _client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/switchstate+json"));
            _client.GetAsync(SwitchDocument.SelfLink).ContinueWith(t => UpdateState(t.Result)).Wait();
        }

        public bool On
        {
            get { return _switchStateDocument.On; }
        }

        public bool CanTurnOn
        {
            get { return _switchStateDocument.TurnOnLink != null; }
        }

        public bool CanTurnOff
        {
            get { return _switchStateDocument.TurnOffLink != null; }
        }

        public void TurnOff()
        {
            _client.PostAsync(_switchStateDocument.TurnOffLink, null).ContinueWith(t => UpdateState(t.Result));
        }

        public void TurnOn()
        {
            _client.PostAsync(_switchStateDocument.TurnOnLink, null).ContinueWith(t => UpdateState(t.Result));
        }

        private void UpdateState(HttpResponseMessage httpResponseMessage)
        {
            if (httpResponseMessage.StatusCode == HttpStatusCode.OK)
            {
                _switchStateDocument = SwitchDocument.Load(httpResponseMessage.Content.ReadAsStreamAsync().Result);

                OnPropertyChanged("On");   // refresh the On property the view binds to
                OnPropertyChanged("CanTurnOn");
                OnPropertyChanged("CanTurnOff");
            }
        }

        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }

    }

This new hypermedia-driven view model has the same interface as the choreographed one and can easily be connected to the same simple user interface to display the state of the light switch and provide controls that can change it.   The difference is in the way the application state is managed.  In this case, the view model determines whether the switch can be turned on or off based on the presence of links that will turn the switch on or off.  Attempting to turn the switch on or off involves making an HTTP request to the server and using the response as a completely new state for the view model.

You can find a complete WPF example in the GitHub repository.

The similarity is only skin deep

On the surface it appears that the two approaches produce pretty much the same results.  It is almost the same amount of code, with a similar level of complexity.  The question has to be asked: what are the benefits?

If you watch a couple dance who have learned a choreographed dance, you may think they are very capable dancers.  You may not even be able to tell the difference between them and others doing the same dance who have a much more fundamental understanding of how to dance.  The differences only begin to appear when you introduce change.  Changing dance partners, changing music or adding new steps will quickly reveal the differences.

The impact of change

The same is true of our sample application.  Consider a scenario where new requirements are introduced: a switch can only be turned on during a certain time period, or only users with certain permissions can turn it on.

In the choreographed application, we would need to add a number of other server resources that would allow a client to inquire if a user has permission, or if the time of day permits turning on or off the switch.  The client must call those resources to retrieve the information, which in itself is not terribly complex.  However, deciding when to make those requests can be tricky.  Calling them frequently adds a significant performance hit, but caching the values locally can introduce problems with keeping the local state consistent with the server based resource state.

In the hypermedia-driven client, neither of our new business requirements requires additional resources to be created or extra server round-trips.  In fact the client code does not need to change at all.  All the logic used to determine whether a client can turn the switch on or off can be embedded into the server logic that decides whether to include a "TurnOn" link or a "TurnOff" link.
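
The post does not show the server side, but to make the idea concrete, here is a minimal sketch of what such a resource could look like in ASP.NET Web API. The controller, the in-memory switch state and the permission/schedule checks are hypothetical placeholders of mine, not part of the original sample:

    // using System.Collections.Generic; using System.Web.Http;
    public class SwitchController : ApiController
    {
        private static bool _isOn;   // stand-in for the real switch state

        // GET switch/state (route configuration omitted)
        public IHttpActionResult GetState()
        {
            var document = new Dictionary<string, object> { { "On", _isOn } };

            // The server decides which affordances to offer; the client just follows links.
            if (!_isOn && UserMayOperateSwitch() && WithinAllowedSchedule())
            {
                document["TurnOnLink"] = "switch/on";
            }
            if (_isOn && UserMayOperateSwitch())
            {
                document["TurnOffLink"] = "switch/off";
            }

            return Json(document);   // e.g. {"On":false,"TurnOnLink":"switch/on"}
        }

        private bool UserMayOperateSwitch() { return true; }    // permission rules live here
        private bool WithinAllowedSchedule() { return true; }   // scheduling rules live here
    }

When the permission or scheduling rules change, only these two private methods change; the links in the response, and therefore the behaviour of every client, follow automatically.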

The links are always refreshed along with the state of the switch, so the client state is always consistent.  The state may be stale, but that is fine because HTTP has all kinds of mechanisms for refreshing stale state.  The key thing is that the client does not need to deal with the complexity of the permissions being an hour old, the timing schedule being ten minutes old and the state of the switch being ten seconds old.

Some constraints can lead to unimagined possibilities

The fact that our client application does not need to change to accommodate these new requirements is far more significant than our analogy might lead us to believe.  When ballroom dancing there is usually just one man and one woman, so the implications of making changes to the dance are limited.  In distributed systems, it is not uncommon for a single API to have thousands of client instances and perhaps multiple different types of clients.  The client applications are often created by different teams, in different countries, with completely different time constraints.

Being able to make logic changes on the server that would normally be embedded into the client can potentially have huge benefits.  The example I have shown only scratches the surface of the techniques that can be applied using hypermedia, but hopefully it hints at the possibilities.

[Image: Hang glider]

Image Credit: Tango https://flic.kr/p/7BY638
Image Credit: First Dance https://flic.kr/p/d3vMxG
Image Credit: Rumba Steps https://flic.kr/p/dMrKk
Image Credit: Skipping Rope https://flic.kr/p/6pZSGX
Image Credit: Base jump https://flic.kr/p/df5Nwd
Image Credit: Fake Gucci purse: https://flic.kr/p/9w5Qji
Image Credit: Windmills https://flic.kr/p/96iTMv
Image Credit: Hang Glider https://flic.kr/p/6jfG9m


Radenko Zec: 8 ways to improve ASP.NET Web API performance

ASP.NET Web API is a great piece of technology. Writing Web API is so easy that many developers don’t take the time to structure their applications for great performance.

In this article, I am going to cover 8 techniques for improving ASP.NET Web API performance.

1) Use the fastest JSON serializer

JSON serialization can affect the overall performance of ASP.NET Web API significantly. A year and a half ago I switched from the JSON.NET serializer to ServiceStack.Text on one of my projects.

I measured around a 20% performance improvement on my Web API responses. I highly recommend that you try out this serializer. Here is a recent performance comparison of popular serializers.

[Image: JSON serializer performance comparison]

Source: theburningmonk

UPDATE: It seems that Stack Overflow uses what they claim is an even faster JSON serializer called Jil. You can view some benchmarks on the Jil GitHub page.

2) Manual JSON serialization from a DataReader

I have used this method on one of my production projects and gained performance benefits.

Instead of reading values from the DataReader into objects and then reading those objects again to produce JSON with a JSON serializer, you can manually create the JSON string straight from the DataReader and avoid the unnecessary creation of intermediate objects.

You produce the JSON using a StringBuilder and at the end you return StringContent as the content of your Web API response:

var response = Request.CreateResponse(HttpStatusCode.OK);
response.Content = new StringContent(jsonResult, Encoding.UTF8, "application/json");
return response;
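
For illustration, the manual serialization loop itself could look something like the sketch below. The reader, column indexes and property names are hypothetical; the blog post linked below has the author's full version:

    // reader is an open SqlDataReader for some query with Id and Name columns.
    var json = new StringBuilder("[");
    bool first = true;
    while (reader.Read())
    {
        if (!first) { json.Append(","); }
        first = false;

        // Build each object by hand instead of materializing an entity and re-serializing it.
        json.Append("{\"Id\":").Append(reader.GetInt32(0))
            .Append(",\"Name\":\"").Append(reader.GetString(1).Replace("\"", "\\\"")).Append("\"}");
    }
    json.Append("]");
    string jsonResult = json.ToString();   // used in the code above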

 

You can read more about this method in this blog post Manual JSON serialization from DataReader in ASP.NET Web API

3) Use other formats if possible (Protocol Buffers, MessagePack)

If you can use other formats like Protocol Buffers or MessagePack in your project instead of JSON, do it.

You will get huge performance benefits, not only because the Protocol Buffers serializer is faster, but also because the format is smaller than JSON, which results in smaller and faster responses.

4) Implement compression

Use GZIP or Deflate compression on your ASP.NET Web API.

Compression is an easy and effective way to reduce the size of responses and increase speed.

This is a must-have feature. You can read more about this in my blog post ASP.NET Web API GZip compression ActionFilter with 8 lines of code.
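
The linked post has the author's own implementation; purely as a rough sketch of the idea (not his exact code, and without the Accept-Encoding negotiation a production filter should do), a compression action filter can look like this:

    using System.IO;
    using System.IO.Compression;
    using System.Net.Http;
    using System.Web.Http.Filters;

    public class GzipCompressionAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuted(HttpActionExecutedContext context)
        {
            var content = context.Response == null ? null : context.Response.Content;
            if (content == null) return;

            // Read the already-produced body, compress it, and swap it back in.
            var bytes = content.ReadAsByteArrayAsync().Result;
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
                {
                    gzip.Write(bytes, 0, bytes.Length);
                }
                context.Response.Content = new ByteArrayContent(output.ToArray());
            }

            context.Response.Content.Headers.ContentType = content.Headers.ContentType;
            context.Response.Content.Headers.ContentEncoding.Add("gzip");
        }
    }

You would then decorate an action or controller with [GzipCompression], or register the filter globally.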

5) Use caching

If it makes sense, use output caching on your Web API methods, for example when a lot of users are accessing the same response that changes maybe once a day.

If you want to implement manual caching, such as caching users' tokens in memory, please refer to my blog post Simple way to implement caching in ASP.NET Web API.
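
For the manual, in-memory kind of caching mentioned above, a minimal sketch using System.Runtime.Caching could look like this (the key format, the loader delegate and the five-minute expiration are illustrative choices of mine, not the author's):

    using System;
    using System.Runtime.Caching;

    public static class TokenCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static string GetToken(string userName, Func<string> loadToken)
        {
            var key = "token:" + userName;
            var cached = Cache.Get(key) as string;
            if (cached != null) return cached;

            // Cache miss: load once and keep the value for five minutes.
            var token = loadToken();
            Cache.Set(key, token, DateTimeOffset.UtcNow.AddMinutes(5));
            return token;
        }
    }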

6) Use classic ADO.NET if possible

Hand-coded ADO.NET is still the fastest way to get data from the database. If the performance of your Web API is really important to you, don’t use ORMs.

You can see one of the latest performance comparisons of popular ORMs.

[Image: ORM performance comparison]

Dapper and the hand-written fetch code are very fast; as expected, all full ORMs are slower than those two.

LLBLGen with resultset caching is very fast, but it fetches the resultset once and then re-materializes the objects from memory.

7) Implement async on methods of Web API

Using asynchronous Web API services can increase the number of concurrent HTTP requests that Web API can handle.

Implementation is simple. The operation is simply marked with the async keyword and the return type is changed to Task.

[HttpGet]  
public async Task OperationAsync()  
{   
    await Task.Delay(2000);  
}

8) Return Multiple Resultsets and combined results

Reduce the number of round-trips, not only to the database but to the Web API as well. You should use multiple-resultset functionality whenever possible.

This means you can extract multiple resultsets from a DataReader, as in the example below:

// read the first resultset 
var reader = command.ExecuteReader(); 

// read the data from that resultset 
while (reader.Read()) 
{ 
	suppliers.Add(PopulateSupplierFromIDataReader( reader )); 
} 

// read the next resultset 
reader.NextResult(); 

// read the data from that second resultset 
while (reader.Read()) 
{ 
	products.Add(PopulateProductFromIDataReader( reader )); 
}

 

Return as many objects as you can in one Web API response. Try combining objects into one aggregate object, like this:

public class AggregateResult
{
     public long MaxId { get; set; }
     public List<Folder> Folders{ get; set; }
     public List<User>  Users{ get; set; }
}

 

This way you will reduce the number of HTTP requests to your Web API.

Thank you for reading this article.

Leave a comment below and let me know what other methods you have found to improve Web API performance.

 
 

The post 8 ways to improve ASP.NET Web API performance appeared first on RadenkoZec blog.

