Henrik F. Nielsen: Announcing ASP.NET WebHooks Release Candidate 1 (Link)

We are very excited to announce the availability of ASP.NET WebHooks Release Candidate 1. ASP.NET WebHooks provides support for receiving WebHooks from other parties, as well as for sending WebHooks so that you can notify other parties about changes in your service.

Currently ASP.NET WebHooks targets ASP.NET Web API 2 and ASP.NET MVC 5. It is available as open source on GitHub, and you can get it as preview packages from NuGet. For feedback, fixes, and suggestions, you can use GitHub, Stack Overflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing ASP.NET WebHooks Release Candidate 1.

Have fun!

Henrik


Pedro Félix: Using Fiddler for an Android and Windows VM development environment

In this post I describe the development environment that I use when creating Android apps that rely on ASP.NET based Web applications and Web APIs.

  • The development machine is a MacBook Pro running OS X with Android Studio.
  • Android virtual devices are run on Genymotion, which uses VirtualBox underneath.
  • Web applications and Web APIs are hosted on a Windows VM running on Parallels over the OS X host.

I use the Fiddler proxy to enable connectivity between Android and the ASP.NET apps, as well as to give me full visibility into the HTTP messages. Fiddler also enables me to use HTTPS even in this development environment.

The main idea is to use Fiddler as the Android’s system HTTP proxy, in conjunction with a port forwarding rule that maps a port on the OS X host to the Windows VM. This is depicted in the following diagram.


The required configuration steps are:

  1. Start Fiddler on the Windows VM and allow remote computers to connect
    • Fiddler – Tools – Fiddler Options – Connections – check “Allow remote computers to connect”.
    • This will make Fiddler listen on 0.0.0.0:8888.
  2. Enable Fiddler to intercept HTTPS traffic
    • Fiddler – Tools – Fiddler Options –  HTTPS – check “Decrypt HTTPS traffic”.
    • This will add a new root certificate to the “Trusted Root Certification Authorities” Windows certificate store.
  3. Define a port forwarding rule mapping TCP port 8888 on the OS X host to TCP port 8888 on the Windows guest (where Fiddler is listening).
    • Parallels – Preferences – Network:change settings – Port forward rules  – add “TCP:8888 -> Windows:8888”.
  4. Check which “host-only network” the Android VM is using
    • VirtualBox Android VM – Settings – Network – Name (e.g. “vboxnet1”).
  5. Find the IP for the identified adapter
    • VirtualBox – Preferences – Network – Host-only Networks – “vboxnet1”.
    • In my case the IP is 192.168.57.1.
  6. On Android, configure the Wi-Fi connection HTTP proxy (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Settings – Wi-Fi – long tap on the chosen network – modify network – enable advanced options – manual proxy
      • Set “Proxy hostname” to the IP identified in the previous step (e.g. 192.168.57.1).
      • Set “Proxy port” to 8888.
    • With this step, all the HTTP traffic will be directed to the Fiddler HTTP proxy running on the Windows VM.
  7. The last step is to install the Fiddler root certificate, so that the Fiddler-generated certificates are accepted by the Android applications, such as the system browser (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Open the browser and navigate to http://ipv4.fiddler:8888
    • Select the link “FiddlerRoot certificate” and on the Android dialog select “Credential use: VPN and apps”.

And that’s it: all HTTP traffic that uses the Android system’s proxy settings will be directed to Fiddler, with the following advantages:

  • Visibility of the requests and responses on the Fiddler UI, namely the ones using HTTPS.
  • Access to Web applications running on the Windows VM, using either IIS hosting or self-hosting.
  • Access to external hosts on the Internet.
  • Use of the Windows “hosts” file host name overrides.
    • For development purposes I typically use host names other than “localhost”, such as “app1.example.com” or “id.example.com”.
    • Since the name resolution will be done by the Fiddler proxy, these host names can be used directly on Android (see the example below).
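
To make this concrete, here is a minimal sketch of what the hosts file on the Windows VM might contain; app1.example.com and id.example.com are the hypothetical development names mentioned above.

# C:\Windows\System32\drivers\etc\hosts (on the Windows VM)
# Hypothetical development host names mapped to the local machine.
127.0.0.1    app1.example.com
127.0.0.1    id.example.com

With these entries in place, a request from Android to http://app1.example.com is sent to the Fiddler proxy, which resolves the name on the Windows VM and forwards the request to the locally hosted application.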

Here is a screenshot of Chrome running on Android and presenting an ASP.NET MVC application running on the Windows VM. Notice the green “https” icon.


And here is a Chrome screenshot of an IdentityServer3 login screen, also running on the Windows VM.


Hope this helps!



Pedro Félix: OAuth 2.0 and PKCE

Introduction

Both Google and IdentityServer have recently announced support for the PKCE (Proof Key for Code Exchange by OAuth Public Clients) specification defined by RFC 7636.

This is an excellent opportunity to revisit the OAuth 2.0 authorization code flow and illustrate how PKCE addresses some of the security issues that exist when this flow is implemented on native applications.

tl;dr

In the authorization code flow, the redirect from the authorization server back to the client is one of the most security-sensitive parts of the OAuth 2.0 protocol. The main reason is that this redirect contains the code representing the authorization delegation performed by the User. On public clients, such as native applications, this code is enough to obtain the access tokens allowing access to the User’s resources.

The PKCE specification addresses an attack vector where an attacker creates a native application that registers the same URL scheme used by the Client application, thereby gaining access to the authorization code. Succinctly, the PKCE specification requires the exchange of the code for the access token to use an ephemeral secret that is not available on the redirect, making knowledge of the code insufficient to use it. This extra information (or a transformation of it) is sent on the initial authorization request.

A slightly longer version

The OAuth 2.0 cast of characters

  • The User is typically a human entity capable of granting access to resources.
  • The Resource Server (RS) is the entity exposing an HTTP API to access these resources.
  • The Client is an application (e.g. a server-based Web application or a native application) wanting to access these resources, via an authorization delegation performed by the User. Clients can be:
    • confidential – client applications that can hold a secret. The typical example is a Web application, where a client secret is stored and used only on the server side.
    • public – client applications that cannot hold a secret, such as native applications running on the User’s mobile device.
  • The Authorization Server (AS) is the entity that authenticates the user, captures her authorization consent and issues access tokens that the Client application can use to access the resources exposed on the RS.

Authorization code flow for Web Applications

The following diagram illustrates the authorization code flow for Web applications (the Client application is a Web server).


  1. The flow starts with the Client application server-side producing a redirect HTTP response (e.g. a response with 302 status) with the authorization request URL in the Location header. This URL contains the authorization request parameters, such as the state, scope and redirect_uri (an example is shown after this list).
  2. When receiving this response, the User’s browser automatically performs a GET HTTP request to the Authorization Server (AS) authorization endpoint, containing the OAuth 2.0 authorization request.
  3. The AS then starts an interaction sequence to authenticate the user (e.g. username and password, two-factor authentication, delegated authentication), and to obtain the user consent. This sequence is not defined by OAuth 2.0 and can take multiple steps.
  4. After having authenticated and obtained consent from the user, the AS returns an HTTP redirect response with the authorization response in the Location header. This URL points to the client application hostname and contains the authorization response parameters, such as the state and the (security-sensitive) code.
  5. When receiving this response, the user’s browser automatically performs a GET request to the Client redirect endpoint with the OAuth 2.0 authorization response. By using HTTPS on the request to the Client, the protocol minimises the chances of the code being leaked to an attacker.
  6. Having received that authorization code, the Client then uses it to obtain the access token from the AS token endpoint. Since the client is a confidential client, this request is authenticated with the client credentials (client ID and client secret), typically sent in the Authorization header using the basic scheme. The AS checks if this code is valid, namely if it was issued to the requesting authenticated client. If everything is verified, a 200 response with the access token is returned.
  7. Finally, the client can use the received access token to access the protected resources.
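
As a concrete illustration of step 1, the redirect response produced by the Client might look like the following; the hosts and parameter values are hypothetical, and the shape follows the examples in RFC 6749 (line break added for readability).

HTTP/1.1 302 Found
Location: https://as.example.com/authorize?response_type=code&client_id=s6BhdRkqt3
    &state=xyz&scope=profile&redirect_uri=https%3A%2F%2Fclient.example.com%2Fcb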

Authorization code flow for native Applications

For a native application, the flow is slightly different, namely on the first phase (the authorization request). Recall that in this case the Client application is running on the User’s device.


  1. The flow begins with the Client application starting the system’s browser (or a web view, more on this on another post) at a URL with the authorization request. For instance, on the Android platform this is achieved by sending an intent.
  2. The browser comes into the foreground and performs a GET request to the AS authorization endpoint containing the authorization request.
  3. The same authentication and consent dance occurs between the AS and the User’s browser.
  4. After having authenticated and obtained consent from the user, the AS returns an HTTP redirect response with the authorization response in the Location header. This URL contains the authorization response parameters. However, there is something special in the redirect URL: instead of using an http URL scheme, which would make the browser perform another HTTP request, the redirect URL contains a custom URI scheme.
  5. As a result, when the browser receives this response and processes the redirect, an inter-application message (e.g. an intent in Android) is sent to the application associated with this scheme, which should be the Client application. This brings the Client application to the foreground and provides it with the authorization response parameters, namely the authorization code.
  6. From now on, the flow is similar to the Web-based one. Namely, the Client application uses the code to obtain the access token from the AS token endpoint. Since the client is a public client, this request is not authenticated, that is, no client secret is used.
  7. Finally, having received the access token, the client application running on the device can access the User’s resources.

In both scenarios, the authorization code communication path, from the AS to the Client via the User’s browser, is very security sensitive. This is especially relevant in the native scenario, since the Client is public and knowledge of that authorization code is enough to obtain the access token.

Hijacking the redirect

In the Web application scenario, the GET request with the authorization response has an HTTPS URL, which means that the browser will only send the code if the server correctly authenticates itself. However, in the native scenario, the intent will be sent to any installed application that registered the custom scheme. Unfortunately, there isn’t a central entity controlling and validating these scheme registrations, so an application can hijack the message from the browser to the client application, as shown in the following diagram.


Having obtained the authorization code, the attacker’s application has all the information required to retrieve a token and access the User’s resources.

The PKCE protection

The PKCE specification mitigates this vulnerability by requiring an extra code_verifier parameter on the exchange of the authorization code for the access token.

  • On step 1, the Client application generates a random secret, stores it, and uses its hash value in the new code_challenge authorization request parameter.
  • On step 4, the AS somehow associates the returned code to the code_challenge.
  • On step 6, the Client includes a code_verifier parameter with the secret on the token request message. The AS computes the hash of the code_verifier value and compares it with the original code_challenge associated with the code. Only if they are equal is the code accepted and an access token returned.

This ensures that only the entity that started the flow (i.e. that sent the code_challenge on the authorization request) can end the flow and obtain the access token. By using a cryptographic hash function for the code_challenge, the protocol is protected from attackers that have read access to the original authorization request. However, the protocol also allows the secret to be used directly as the code_challenge.
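
To make this more concrete, here is a minimal C# sketch of how a client might generate the code_verifier and the corresponding S256 code_challenge, following the rules in RFC 7636 (the class and method names are just illustrative).

using System;
using System.Security.Cryptography;
using System.Text;

public static class PkceHelper
{
    // Generates a high-entropy code_verifier (RFC 7636 requires 43-128 characters).
    public static string CreateCodeVerifier()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return Base64UrlEncode(bytes); // 32 random bytes encode to 43 characters
    }

    // code_challenge = BASE64URL(SHA256(ASCII(code_verifier))) for the S256 method.
    public static string CreateCodeChallenge(string codeVerifier)
    {
        using (var sha256 = SHA256.Create())
        {
            var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier));
            return Base64UrlEncode(hash);
        }
    }

    // Base64url encoding without padding, as mandated by the spec.
    private static string Base64UrlEncode(byte[] bytes)
    {
        return Convert.ToBase64String(bytes)
            .TrimEnd('=')
            .Replace('+', '-')
            .Replace('/', '_');
    }
}

The client sends the code_challenge (plus code_challenge_method=S256) on the authorization request, and later sends the original code_verifier on the token request.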

Finally, PKCE support by an AS can be advertised on the OAuth 2.0 or OpenID Connect discovery document, using the code_challenge_methods_supported field. The following is Google’s OpenID Connect discovery document, located at https://accounts.google.com/.well-known/openid-configuration.

{
 "issuer": "https://accounts.google.com",
 "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
 "token_endpoint": "https://www.googleapis.com/oauth2/v4/token",
 "userinfo_endpoint": "https://www.googleapis.com/oauth2/v3/userinfo",
 "revocation_endpoint": "https://accounts.google.com/o/oauth2/revoke",
 "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
 "response_types_supported": [
  "code",
  "token",
  "id_token",
  "code token",
  "code id_token",
  "token id_token",
  "code token id_token",
  "none"
 ],
 "subject_types_supported": [
  "public"
 ],
 "id_token_signing_alg_values_supported": [
  "RS256"
 ],
 "scopes_supported": [
  "openid",
  "email",
  "profile"
 ],
 "token_endpoint_auth_methods_supported": [
  "client_secret_post",
  "client_secret_basic"
 ],
 "claims_supported": [
  "aud",
  "email",
  "email_verified",
  "exp",
  "family_name",
  "given_name",
  "iat",
  "iss",
  "locale",
  "name",
  "picture",
  "sub"
 ],
 "code_challenge_methods_supported": [
  "plain",
  "S256"
 ]
}




Henrik F. Nielsen: ASP.NET WebHooks and Slack Slash Commands (Link)

We just added a couple of new features to ASP.NET WebHooks that make it easier to build some nice integrations with Slack Slash Commands. Slash Commands make it possible to trigger any kind of processing from a Slack channel by generating an HTTP request containing details about the command. For example, these commands typed in a Slack channel can be configured to send an HTTP request to an ASP.NET WebHook endpoint where processing can happen:

/list add <user>
/list list
/list remove <user>
 
Responses from Slash Commands can be structured documents containing text, data, images, and more:


In the post ASP.NET WebHooks and Slack Slash Commands we describe how ASP.NET WebHooks helps you parse Slash Commands and generate structured responses, so that you can build cool integrations with Slack.
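
As a rough sketch of what the receiving side can look like, the ASP.NET WebHooks programming model is based on handler classes deriving from WebHookHandler; the exact data shape below (a name/value collection carrying the Slash Command fields) is my assumption based on the preview packages and may differ from the shipped bits.

using System.Collections.Specialized;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;

public class SlackSlashCommandHandler : WebHookHandler
{
    public SlackSlashCommandHandler()
    {
        // Only handle requests arriving through the Slack receiver.
        this.Receiver = "slack";
    }

    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // The Slack receiver exposes the posted form fields as a name/value collection.
        NameValueCollection command = context.GetDataOrDefault<NameValueCollection>();

        // For "/list add <user>", the command name and its arguments arrive
        // in the "command" and "text" fields respectively.
        string commandName = command["command"];
        string text = command["text"];

        // A plain 200 response is posted back to the Slack channel.
        context.Response = context.Request.CreateResponse();
        return Task.FromResult(true);
    }
}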

Have fun!

Henrik


Dominick Baier: NDC London 2016 Wrap-up

NDC has been fantastic again! Good fun, good talks and good company!

Brock and I did the usual 2-day version of our Identity & Access Control workshop at the pre-con. This was (probably) the last time we ran the 2-day version on Katana. At NDC in Oslo it will be all new material based on ASP.NET Core 1.0 (fingers crossed ;))

The main conference had dozens of interesting sessions and, as always, pretty strong security content. On Wednesday I did a talk on (mostly) the new identity & authentication features of ASP.NET Core 1.0 [1]. This was also the perfect occasion to world-premiere IdentityServer4 – the preview of the new version of IdentityServer for ASP.NET and .NET Core [2].

Right after my session, Barry focused on the new data protection and authorization APIs [3] and Brock did an introduction to IdentityServer (which is now finally on video [4]).

We also did a .NET Rocks [5] and Channel9 [6] interview – and our usual “user group meeting” at Brewdogs [7] ;)

All in all a really busy week – but well worth it!

[1] What’s new in Security in ASP.NET Core 1.0
[2] Announcing IdentityServer4
[3] A run around the new Data Protection and Authorization Stacks
[4] Introduction to IdentityServer
[5] .NET Rocks
[6] Channel9 Interview
[7] Brewdog Shepherd’s Bush

 


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Henrik F. Nielsen: Sending ASP.NET WebHooks from Azure WebJobs (Link)

Azure WebJobs is a great way of running any kind of script or executable as a background process in connection with an App Service Web App. You can upload an executable or script as a WebJob and run it either on a schedule or continuously. The WebJob can perform any function you can stick into a command line script or program, and using the Azure WebJobs SDK, you can trigger actions to happen as a result of inputs from Azure Queues, Blobs, Azure Service Bus, and much more.
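
For instance, a minimal WebJobs SDK function triggered by a queue message looks roughly like this; the "orders" queue name is just an illustration, and the sketch assumes the storage connection strings are configured for the host.

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Invoked by the WebJobs SDK whenever a new message arrives on the
    // (hypothetical) "orders" queue; the message body is bound to 'message'.
    public static void ProcessOrder([QueueTrigger("orders")] string message, TextWriter log)
    {
        log.WriteLine("Processing queue message: " + message);
    }
}

public class Program
{
    public static void Main()
    {
        // The JobHost discovers functions with trigger attributes and runs them.
        var host = new JobHost();
        host.RunAndBlock();
    }
}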

WebHooks provide a simple mechanism for sending event notifications across web applications and external services. For example, you can receive a WebHook when someone sends money to your PayPal account, when a message is posted to Slack, or when a picture is posted to Instagram.

The blog Sending ASP.NET WebHooks from Azure WebJobs describes how to send ASP.NET WebHooks from Azure WebJobs triggered by anything that can kick off a WebJob.

Have fun!

Henrik


Dominick Baier: Which OpenID Connect/OAuth 2.0 Flow is the right One?

That is probably the most common question we get – and the answer is of course: it depends!

Machine to Machine Communication
This one is easy – since there is no human directly involved, client credentials are used to request tokens.

Browser-based Applications
This might be a JavaScript-based application or a “traditional” server-rendered web application. For those scenarios, you typically want to use the implicit flow (OpenID Connect / OAuth 2.0).

A side effect of the implicit flow is that all tokens (identity and access tokens) are delivered through the browser front-channel. If you want to use the access token purely on the server side, this would result in an unnecessary exposure of the token to the client. In that case I would prefer the authorization code flow – or hybrid flow.

Native Applications
Strictly speaking, a native application has very similar security properties to a JavaScript application. Still, they are generally considered a bit easier to secure, because you often have stronger platform support for protecting data and isolation.

That’s the reason why the current consensus is that an authorization code-based flow gives you “a bit more” security than implicit. The much more important reason, IMO, is that there are a couple of (upcoming) protocols that are optimized for native clients, and they use code exchange and the token endpoint as a foundation – e.g. PKCE, Proof of Possession and AC/DC.

Remark 1: By native applications I mean applications that have access to platform-native APIs like data protection or maybe the system browser. Cordova applications, for example, are written in JavaScript, but I would not consider them to be “browser-based applications”.

Remark 2: For code-based flows, you need to embed the client secret in the client application. Of course you can’t treat that as a secret anymore – no matter how well you protect it, a motivated attacker will be able to reverse engineer it. It is still a bit better than no secret at all. Specs like PKCE make it a bit better as well.

Remark 3: I often hear the argument that the client application does not care who the user is, it just needs an access token – thus we should rather do OAuth 2.0 than OpenID Connect. While this might be, strictly speaking, true, OIDC is the superior protocol, as it includes a couple of extra security features like nonces for replay protection, or c_hash and at_hash to link the (verifiable) identity token to the (unverifiable) access token.

Remark 4: As an extension to remark 3 – always use OpenID Connect, not OAuth 2.0 on its own. There should be client libraries for every platform of interest by now. ASP.NET has middleware, and we have a library for JavaScript. Other platforms should be fine as well.

Remark 5: Whenever you think about using the authorization code flow, rather use the hybrid flow. This gives you a verifiable token first, before you make additional roundtrips (another extension of remarks 3 and 4).

HTH


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Announcing IdentityServer for ASP.NET 5 and .NET Core

Over the last couple of years, we’ve been working with the ASP.NET team on the authentication and authorization story for Web API, Katana and ASP.NET 5. This included the design around claims-based identity, authorization and token-based authentication.

In the Katana timeframe we also reviewed the OAuth 2.0 authorization server middleware (and the templates around it) and weren’t very happy with it. But as usual, there were deadlines and Web API needed a token-based security story, so it shipped the way it was.

One year ago the ASP.NET team decided to discontinue that middleware and rather focus on consuming tokens instead. They also asked us if IdentityServer could be the replacement going forward.

By that time there were many unknowns – ASP.NET was still in early betas and literally changing every day. Important features like x-plat crypto (and thus support for JWT) didn’t even exist. Nevertheless, we agreed that we would port IdentityServer to ASP.NET 5 and .NET Core once the builds were more stabilized.

With RC1 (and soon RC2), we decided that now would be the right moment in time to start porting IdentityServer – and here it is: IdentityServer4 (github / nuget / samples).

What’s new
When we designed IdentityServer3, one of our main goals was to be able to run self-hosted. At that time MVC was tied to IIS, so using it for our default views was not an option. We weren’t particularly keen on creating our own view engine/abstraction, but that’s what needed to be done. This is not an issue anymore in ASP.NET 5, and as a result we removed the view service from IdentityServer4.

In IdentityServer4 you have full control over all UI aspects – login, consent, logoff and any additional UI you want to show to your user. You also have full control over the technology you want to use to implement that UI – it will even be possible to implement the UI in a completely different web application. This would allow adding OAuth 2.0 / OpenID Connect capabilities to an existing or legacy login “application”.

There will also be a standard UI that you can simply add as a package, as well as templates to get you started.

Furthermore, IdentityServer4 is a “real” ASP.NET 5 application using all the standard platform facilities like DI, logging, configuration, data protection etc., which means you have to learn fewer IdentityServer specifics.

What’s not new
Everything else, really – IdentityServer4 has (or will have) all the features of IdentityServer3. You can still connect to arbitrary user management back-ends, and there will be out-of-the-box support for ASP.NET Identity 3.

We still provide the same architecture, focused on modelling users, clients and scopes, and we still shield you from the low-level details to make sure no security holes are introduced.

Database artifacts like reference or refresh tokens are compatible, which gives you a nice upgrade/migration story.

Next steps
We will not abandon IdentityServer3 – many people are successfully using it and are happy with it (so are we). We are also aware that not everybody wants to switch their identity platform to “the latest thing” but would rather wait a little longer.

But we should also not forget that IdentityServer3 is built on a platform (Katana) which Microsoft is not investing in anymore – and that also applies to the authentication middleware we use to connect to external providers. ASP.NET 5 is the way forward.

We just published beta1 to NuGet. There are still many things missing, and what’s there might change. We have also started publishing samples (link) to showcase the various features. Please try them out, give us feedback, open issues.

Around the RC2 timeframe there will also be more documentation showing up in our docs as well as on the ASP.NET documentation site. At some point, there will also be templates for Visual Studio which will provide a starting point for common security scenarios.

IdentityServer3 was such a great success because of all the good community feedback and contributions. Let’s take this to the next level!


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Pedro Félix: Using Vagrant to test ASP.NET 5 RC1

The recent Release Candidate 1 (RC1) for ASP.NET 5 includes support for Linux and OS X via .NET Core. After trying it out on OS X, I wanted to do some experiments on Linux as well. For that I used Vagrant to automate the creation and provision of the Linux development environments. In this post I describe the steps required for this task, using OS X as the host (the steps on a Windows host will be similar).

Short version

Start by ensuring Vagrant and VirtualBox are installed on your host machine.
Then open a shell and run the following commands.
The vagrant up command may take a while, since it will not only download and boot the base virtual machine image, but also provision ASP.NET 5 RC1 and all its dependencies.

git clone https://github.com/pmhsfelix/vagrant-aspnet-rc1.git (or your own fork URL instead)
cd vagrant-aspnet-rc1
vagrant up
vagrant ssh

After the last command completes you should have an SSH session into an Ubuntu Server with ASP.NET 5 RC1 installed, running on a virtual machine (VM). Port 5000 on the host is mapped to port 5000 on the guest.

The vagrant-aspnet-rc1 host folder is mounted into the /vagrant guest folder, so you can use this to share files between host and guest.
For instance, an ASP.NET project published to vagrant-aspnet-rc1/published on the host will be visible on the /vagrant/published guest path.

For any comment or issue that you have, please raise an issue at https://github.com/pmhsfelix/vagrant-aspnet-rc1.

Longer (and perhaps more instructive) version

First, start by installing Vagrant and also VirtualBox, which will be required to run the virtual machine with Linux.

Afterwards, create a new folder (e.g. vagrant-aspnet-rc1) to host the Vagrant configuration.

dotnet pedro$ mkdir vagrant-aspnet-rc1
dotnet pedro$ cd vagrant-aspnet-rc1
vagrant-aspnet-rc1 pedro$

Then, initialize the Vagrant configuration using the init command.

vagrant-aspnet-rc1 pedro$ vagrant init ubuntu/trusty64
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
vagrant-aspnet-rc1 pedro$ ls
Vagrantfile

The second parameter, ubuntu/trusty64, is the name of a box available on the Vagrant public catalog, which in this case contains an Ubuntu Server 14.04 LTS.
Notice also how a Vagrantfile file, containing the Vagrant configuration, was created in the current directory. We will be using this file later on.

The next step is to start the virtual machine.

vagrant-aspnet-rc1 pedro$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Setting the name of the VM: vagrant-aspnet_default_1451428161431_85889
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default:
    default: Guest Additions Version: 4.3.34
    default: VirtualBox Version: 5.0
==> default: Mounting shared folders...
    default: /vagrant => /Users/pedro/code/dotnet/vagrant-aspnet-rc1

As can be seen in the command output, a VM was booted and SSH was configured. So the next step is to open an SSH session into the machine to check if everything is working properly. This is accomplished using the ssh command.

vagrant-aspnet-rc1 pedro$ vagrant ssh
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Dec 29 22:29:41 UTC 2015

  System load:  0.35              Processes:           80
  Usage of /:   3.4% of 39.34GB   Users logged in:     0
  Memory usage: 25%               IP address for eth0: 10.0.2.15
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

vagrant@vagrant-ubuntu-trusty-64:~$ hostname
vagrant-ubuntu-trusty-64
vagrant@vagrant-ubuntu-trusty-64:~$ id
uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant)

Notice how we end up with a session into a vagrant-ubuntu-trusty-64 machine, running under the vagrant user.
In addition to setting up SSH, Vagrant also mounted the vagrant-aspnet-rc1 host folder (the one where the Vagrantfile was created) into the /vagrant folder on the guest.

vagrant@vagrant-ubuntu-trusty-64:~$ ls /vagrant
Vagrantfile

We could now start to install ASP.NET 5 following the procedure outlined at http://docs.asp.net/en/latest/getting-started/installing-on-linux.html. However, that would be the “old way of doing things” and would not provide us with a reproducible development environment.
A better solution is to create a provision script, called bootstrap.sh, and use it with Vagrant.

The provision script is simply a copy of the procedures at http://docs.asp.net/en/latest/getting-started/installing-on-linux.html, slightly changed to allow unsupervised installation.

#!/usr/bin/env bash

# install dnvm pre-requisites
sudo apt-get install -y unzip curl
# install dnvm
curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh

# install dnx pre-requisites
sudo apt-get install -y libunwind8 gettext libssl-dev libcurl4-openssl-dev zlib1g libicu-dev uuid-dev
# install dnx via dnvm
dnvm upgrade -r coreclr

# install libuv from source
sudo apt-get install -y make automake libtool curl
curl -sSL https://github.com/libuv/libuv/archive/v1.4.2.tar.gz | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.4.2
sudo sh autogen.sh
sudo ./configure
sudo make
sudo make install
sudo rm -rf /usr/local/src/libuv-1.4.2 && cd ~/
sudo ldconfig

The next step is to edit the Vagrantfile so this provision script is run automatically by Vagrant.
We also change the port forwarding rule so that it matches the default port 5000 used by ASP.NET.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, path: "bootstrap.sh", privileged: false
  config.vm.network "forwarded_port", guest: 5000, host: 5000
end

To check that everything is properly configured, we redo the whole process by destroying the VM and creating it again.

vagrant-aspnet-rc1 pedro$ vagrant destroy
    default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...
vagrant-aspnet-rc1 pedro$ vagrant up
( ... lots of things that take a while to happen ... )

Finally, do vagrant ssh and check that dnx is fully functional.

How about publish with runtime?

Instead of having to provision ASP.NET beforehand, wouldn’t it be nice to include all the dependencies in the published project, so that we could deploy it on a plain vanilla Ubuntu or Debian machine?
Well, on one hand it is possible to configure the publish process to also include the runtime, via the --runtime parameter.

dnu publish --out ~/code/dotnet/vagrant-ubuntu/published --no-source --runtime dnx-coreclr-linux-x64.1.0.0-rc1-update1

On the other hand, in order to have the Linux DNX runtime available on OS X, we just need to explicitly specify the OS on the dnvm command.

dnvm install latest -OS linux -r coreclr

Unfortunately, this approach does not work, because the published runtime is not self-sufficient.
For it to work properly, it still requires some dependencies to be provisioned beforehand on the deployment machine.
This can be seen if we try to run the ASP.NET project:

vagrant@vagrant-ubuntu-trusty-64:~$ /vagrant/published/approot/web
failed to locate libcoreclr with error libunwind-x86_64.so.8: cannot open shared object file: No such file or directory
vagrant@vagrant-ubuntu-trusty-64:~$ Connection to 127.0.0.1 closed by remote host.
Connection to 127.0.0.1 closed.

Notice how the libunwind-x86_64.so.8 library failed to be opened.
So, for the time being, we need to provision at least the runtime dependencies on the deployment machine.
The runtime itself can be contained in the published project.



Henrik F. Nielsen: New Year Updates to ASP.NET WebHooks Preview (Link)

Posted New Year Updates to ASP.NET WebHooks Preview about the latest update of ASP.NET WebHooks. It has a couple of interesting new features on the WebHooks sender side – that is, when you want to send WebHooks to others when some event happens – including:

  1. Sending events to all users registered for a particular event.
  2. Scaling out and load balancing events using persistent queues.

For the full story, please see New Year Updates to ASP.NET WebHooks Preview.

Happy New Year!

Henrik


Dominick Baier: Validating Scopes in ASP.NET 4 and 5

OAuth 2.0 scopes are a way to model (API) resources. This allows you to give logical “names” to APIs, which clients can then use to request tokens.

You might have very granular scopes, e.g. api1 & api2, or very coarse-grained ones, like application.backend. Some people use functional names, e.g. contacts.api and customers.api (which might or might not span multiple physical APIs) – some group by criteria like public or internal only. Some even sub-divide a single API – e.g. calendar.read and calendar.readwrite. It is totally up to you (this is how Google uses scopes).

At the end of the day, the access token (be it self-contained or a reference token) will be associated with the scopes the client was authorized for (and, optionally, that the user consented to).

IdentityServer does that by including claims of type scope in the access token – so really any technique that allows checking the claims of the current user will do.

As a side note – there is also a spec that deals with return codes for failed scope validation. In short – this should return a 403 instead of a 401.

We ourselves had some iterations in our thinking about how to deal with scopes – here’s a summary and some options.

ASP.NET 4.x

The most common way we do scope checking is via our token validation middleware (source/nuget), which combines token and scope validation into a single step:

app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
    {
        Authority = "https://localhost:44333/core",
        RequiredScopes = new[] { "calendar.read", "calendar.readwrite" },
    });

This would validate the token and require that either the calendar.read or the calendar.readwrite scope claim is present.

This middleware also emits the right response status code, Www-Authenticate header and respects CORS pre-flight requests.

For finer granularity we also once wrote a Web API authorization attribute – [ScopeAuthorize] that you can put on top of controllers and actions (source).

As mentioned before – you can always check inside your code for scope claims yourself using the claims collection.
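
For illustration, such a manual check could look like the following minimal sketch, using the standard ClaimsPrincipal API (the helper name is mine):

using System.Linq;
using System.Security.Claims;

public static class ScopeChecks
{
    // Returns true if the principal holds any of the allowed scope claims
    // (the same OR semantics as the middleware described above).
    public static bool HasAnyScope(ClaimsPrincipal user, params string[] allowedScopes)
    {
        return user.Claims
            .Where(c => c.Type == "scope")
            .Any(c => allowedScopes.Contains(c.Value));
    }
}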

ASP.NET 5

We will have the same “all in one” IdentityServer token validation middleware for ASP.NET 5 – but this time split up into separate middleware that can be also used stand-alone (I wrote about the introspection aspect of it in my last post).

The scope validation part (source/nuget) of it looks like this:

app.AllowScopes("calendar.read", "calendar.readwrite");

This has the same OR semantics as described above.

You can also use the new ASP.NET 5 authorization API to do scope checks – e.g. as a global policy:

public void ConfigureServices(IServiceCollection services)
{
    var scopePolicy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .RequireClaim("scope", "calendar.read", "calendar.readwrite")
        .Build();
 
    services.AddMvc(options =>
    {
        options.Filters.Add(new AuthorizeFilter(scopePolicy));
    });
}

..or as a named policy to decorate individual controllers and actions:

services.AddAuthorization(options =>
{
    options.AddPolicy("read",
        policy => policy.RequireClaim("scope", "calendar.read"));
    options.AddPolicy("readwrite",
        policy => policy.RequireClaim("scope", "calendar.readwrite"));
});

and use it e.g. like that:

public class CalendarController
{
    [Authorize("read")]
    public IActionResult Get() { ... }
 
    [Authorize("readwrite")]
    public IActionResult Put() { ... }
}

One last remark: we get this question a lot – scopes are not used for authorizing users. They are used for modeling resources (and, optionally, to compose the consent screen, as well as to specify which clients might have access to these resources).

HTH


Filed under: ASP.NET, IdentityModel, IdentityServer, Katana, OAuth, Uncategorized, WebAPI


Pedro Félix: A first look at .NET Core and the dotnet CLI tool

A recent post by Scott Hanselman triggered my curiosity about the new dotnet Command Line Interface (CLI) tool for .NET Core, which aims to be a “cross-platform general purpose managed framework”. In this post I present my first look at using .NET Core and the dotnet tool on OS X.

Installation

For OS X, the recommended installation procedure is to use the “official PKG”. Unfortunately, this PKG doesn’t seem to be signed, so trying to run it directly from the browser will result in an error. The workaround is to use Finder to locate the downloaded file and then select “Open” on it. Notice that this PKG requires administrative privileges to run, so proceed at your own risk (the .NET Core home page uses an https URI and the PKG is hosted on Azure Blob Storage, also using HTTPS – https://dotnetcli.blob.core.windows.net/dotnet/dev/Installers/Latest/dotnet-osx-x64.latest.pkg).

After installation, the dotnet tool will be available on your shell.

~ pedro$ which dotnet
/usr/local/bin/dotnet

I confess that I was expecting the recommended installation procedure to use Homebrew instead of a downloaded PKG.

Creating the application

To create an application we start by making an empty folder (e.g. HelloDotNet) and then run dotnet new on it.

dotnet pedro$ mkdir HelloDotNet
dotnet pedro$ cd HelloDotNet
HelloDotNet pedro$ dotnet new
Created new project in /Users/pedro/code/dotnet/HelloDotNet.

This new command creates three new files in the current folder.

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
└── project.json

0 directories, 3 files

The first one, NuGet.Config is an XML file containing the NuGet package sources, namely the http://www.myget.org feed containing .NET Core.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!--To inherit the global NuGet package sources remove the <clear/> line below -->
    <clear />
    <add key="dotnet-core" value="https://www.myget.org/F/dotnet-core/api/v3/index.json" />
    <add key="api.nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>

The second one is a C# source file containing the classical static void Main(string[] args) application entry point.

HelloDotNet pedro$ cat Program.cs
using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

Finally, the third file is the project.json containing the project definitions, such as compilation options and library dependencies.

HelloDotNet pedro$ cat project.json
{
  "version": "1.0.0-*",
  "compilationOptions": {
    "emitEntryPoint": true
  },

  "dependencies": {
    "NETStandard.Library": "1.0.0-rc2-23616"
  },

  "frameworks": {
    "dnxcore50": { }
  }
}

Resolving dependencies

The next step is to ensure all dependencies required by our project are available. For that we use the restore command.

HelloDotNet pedro$ dotnet restore
Microsoft .NET Development Utility CoreClr-x64-1.0.0-rc1-16231

  GET https://www.myget.org/F/dotnet-core/api/v3/index.json
  OK https://www.myget.org/F/dotnet-core/api/v3/index.json 778ms
  GET https://api.nuget.org/v3/index.json
  ...
Restore complete, 40937ms elapsed

NuGet Config files used:
    /Users/pedro/code/dotnet/HelloDotNet/nuget.config

Feeds used:
    https://www.myget.org/F/dotnet-core/api/v3/flatcontainer/
    https://api.nuget.org/v3-flatcontainer/

Installed:
    69 package(s) to /Users/pedro/.dnx/packages

After figuratively downloading almost half of the Internet, or 69 packages to be more precise, the restore process ends, stating that the required dependencies were installed at ~/.dnx/packages.
Notice the dnx in the path, which shows the DNX heritage of the dotnet tool. I presume these names will change before the RTM version. Notice also that the only thing added to the current folder is the project.lock.json file, containing the complete dependency graph created by the restore process based on the direct dependencies.

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
├── project.json
└── project.lock.json

0 directories, 4 files

Namely, no dependencies were copied to the local folder.
Instead, the global ~/.dnx/packages/ repository is used.

Running the application

After changing the greetings message to Hello dotnet we can run the application using the run command.

HelloDotNet pedro$ dotnet run
Hello dotnet!

Looking again into the current folder, we notice that no extra files were created when running the application.

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
├── project.json
└── project.lock.json

0 directories, 4 files

This happens because the compilation produces in-memory assemblies, which aren’t persisted in any file. The CoreCLR virtual machine uses these in-memory assemblies when running the application.

Well, it seems I was wrong: the dotnet run command does indeed produce persisted files. This is a change when compared with dnx, which did use in-memory assemblies.

We can see this behaviour by using the -v switch:

HelloDotNet pedro$ dotnet -v run
Running /usr/local/bin/dotnet-compile --output "/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba" --temp-output "/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba" --framework "DNXCore,Version=v5.0" --configuration "Debug" /Users/pedro/code/dotnet/HelloDotNet
Process ID: 20580
Compiling HelloDotNet for DNXCore,Version=v5.0
Running /usr/local/bin/dotnet-compile-csc @"/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/dotnet-compile.HelloDotNet.rsp"
Process ID: 20581
Running csc -noconfig @"/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/dotnet-compile-csc.rsp"
Process ID: 20582

Compilation succeeded.
0 Warning(s)
0 Error(s)

Time elapsed 00:00:01.4388306

Running /Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/HelloDotNet
Process ID: 20583
Hello dotnet!

Notice how it first calls the compile command (addressed in the next section) before running the application.

Compiling the application

The dotnet tool also allows the explicit compilation via its compile command.

HelloDotNet pedro$ dotnet compile
Compiling HelloDotNet for DNXCore,Version=v5.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:01.4249439

The resulting artifacts are stored in two new folders:

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
├── bin
│   └── Debug
│       └── dnxcore50
│           ├── HelloDotNet
│           ├── HelloDotNet.deps
│           ├── HelloDotNet.dll
│           ├── HelloDotNet.pdb
│           └── NuGet.Config
├── obj
│   └── Debug
│       └── dnxcore50
│           ├── dotnet-compile-csc.rsp
│           ├── dotnet-compile.HelloDotNet.rsp
│           └── dotnet-compile.assemblyinfo.cs
├── project.json
└── project.lock.json
6 directories, 12 files

The bin/Debug/dnxcore50 folder contains the most interesting outputs of the compilation process. HelloDotNet is a native executable, as evidenced by the _main symbol inside it, that loads the CoreCLR virtual machine and uses it to run the application.

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep _main
_main:

otool is the object file displaying tool for OS X.

We can also see that the libcoreclr dynamic library is used by this bootstrap executable.

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep libcoreclr.dylib
00000001000025b3    leaq    0x7351(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
00000001000074eb    leaq    0x2419(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
000000010000784b    leaq    0x20b9(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"

The HelloDotNet.dll file is a .NET assembly (it has the dll extension and starts with the 4d 5a magic number) containing the compiled application.

HelloDotNet pedro$ hexdump -n 32 bin/Debug/dnxcore50/HelloDotNet.dll
0000000 4d 5a 90 00 03 00 00 00 04 00 00 00 ff ff 00 00
0000010 b8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00
0000020

Directly executing the HelloDotNet file runs the application.

HelloDotNet pedro$ bin/Debug/dnxcore50/HelloDotNet
Hello dotnet!

We can also see that the CoreCLR is hosted in the executing process by examining the loaded libraries.

dotnet pedro$ ps | grep Hello
18981 ttys001    0:00.23 bin/Debug/dnxcore50/HelloDotNet
19311 ttys002    0:00.00 grep Hello
dotnet pedro$ sudo vmmap 18981 | grep libcoreclr
__TEXT                 0000000105225000-000000010557a000 [ 3412K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010557b000-000000010575a000 [ 1916K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010575b000-0000000105813000 [  736K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__LINKEDIT             0000000105859000-00000001059e1000 [ 1568K] r--/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010557a000-000000010557b000 [    4K] rwx/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010575a000-000000010575b000 [    4K] rwx/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__DATA                 0000000105813000-0000000105841000 [  184K] rw-/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__DATA                 0000000105841000-0000000105859000 [   96K] rw-/rwx SM=ZER  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib

Native compilation

One of the most interesting features on .NET Core and the dotnet tool is the ability to create a native executable containing the complete program and not just a boot strap into the virtual machine. For that, we use the --native option on the compile command.

HelloDotNet pedro$ ls
NuGet.Config        Program.cs      project.json        project.lock.json
HelloDotNet pedro$ dotnet compile --native
Compiling HelloDotNet for DNXCore,Version=v5.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:01.1267350

The output of this compilation is a new native folder containing another HelloDotNet executable.

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
├── bin
│   └── Debug
│       └── dnxcore50
│           ├── HelloDotNet
│           ├── HelloDotNet.deps
│           ├── HelloDotNet.dll
│           ├── HelloDotNet.pdb
│           ├── NuGet.Config
│           └── native
│               ├── HelloDotNet
│               └── HelloDotNet.dSYM
│                   └── Contents
│                       ├── Info.plist
│                       └── Resources
│                           └── DWARF
│                               └── HelloDotNet
├── obj
│   └── Debug
│       └── dnxcore50
│           └── HelloDotNet.obj
├── project.json
└── project.lock.json

11 directories, 13 files

Running the executable produces the expected result

HelloDotNet pedro$ bin/Debug/dnxcore50/native/HelloDotNet
Hello dotnet!

At first sight, this new executable is rather bigger than the first one, since it isn’t just a bootstrap into the virtual machine: it contains the complete application.

HelloDotNet pedro$ ls -la bin/Debug/dnxcore50/HelloDotNet
-rwxr-xr-x  1 pedro  staff  66368 Dec 28 10:12 bin/Debug/dnxcore50/HelloDotNet
HelloDotNet pedro$ ls -la bin/Debug/dnxcore50/native/HelloDotNet
-rwxr-xr-x  1 pedro  staff  987872 Dec 28 10:12 bin/Debug/dnxcore50/native/HelloDotNet

There are two more signs that this new executable is the complete application. First, unlike the bootstrap executable, it contains no references to the libcoreclr dynamic library.

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep libcoreclr.dylib
00000001000025b3    leaq    0x7351(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
00000001000074eb    leaq    0x2419(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
000000010000784b    leaq    0x20b9(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/native/HelloDotNet | grep libcoreclr.dylib
HelloDotNet pedro$

Second, it contains a ___managed__Main symbol with the native code for static void Main(string[] args).

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/native/HelloDotNet | grep -A 8 managed__Main:
___managed__Main:
0000000100001b20    pushq   %rax
0000000100001b21    movq    ___ThreadStaticRegionStart(%rip), %rdi
0000000100001b28    movq    (%rdi), %rdi
0000000100001b2b    callq   _System_Console_System_Console__WriteLine_13
0000000100001b30    nop
0000000100001b31    addq    $0x8, %rsp
0000000100001b35    retq
0000000100001b36    nop

In addition to the HelloDotNet executable, the compile --native command also creates a bin/Debug/dnxcore50/native/HelloDotNet.dSYM folder containing the native debug information.

Unfortunately, .NET Core native support seems to be in its very early stages, and I was unable to compile anything more complex than a simple “Hello World”. However, I’m looking forward to further developments in this area.



Dominick Baier: OAuth 2.0 Token Introspection Middleware for ASP.NET 5

In my last post I described the value of reference tokens and how the OAuth 2.0 token introspection spec (aka RFC 7662) gives us a standard way of using them.

Over the Christmas break I worked on ASP.NET 5-based middleware for token introspection – it is pretty simple to use:

app.UseOAuth2IntrospectionAuthentication(options =>
{
    options.AutomaticAuthenticate = true;
    options.ScopeName = "api1";
    options.ScopeSecret = "secret";
    options.Authority = "https://identityserver.io";
});

If your token issuer supports discovery, all you need to do is to pass in the base URL – the token introspection endpoint is then found via metadata (there is also a way to explicitly pass in the endpoint address).

ScopeName and ScopeSecret are used to authenticate against the introspection endpoint (you can also use an HTTP message handler if you need more control over the wire format).

The result will – as usual – get turned into a claims principal and your pipeline/business logic will have access to the claims.
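
For reference, a successful introspection response, as defined by RFC 7662, is a JSON document along the following lines (values are illustrative); the members besides the active flag are what end up as claims:

{
  "active": true,
  "scope": "api1",
  "client_id": "client",
  "sub": "123",
  "iss": "https://identityserver.io",
  "exp": 1419356238
}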

The token introspection spec is quite new – but needless to say, this works with IdentityServer.

source code / nuget package

PS: This is the first beta version – there is definitely room for improvement. One thing that is missing right now is caching – but I will get to that soon. Please use GitHub to give feedback. Thanks!


Filed under: ASP.NET, IdentityServer, OAuth, WebAPI


Henrik F. Nielsen: Sharing Windows 10 Store Apps with your Family

This is not a typical blog subject of mine, but after having gone through it myself, I thought it could be of interest to others as well. The challenge started when I, like many others, bought a nice new Windows 10 laptop for the family in time for the holidays. I wanted to set it up so that our two children could each use their own Microsoft Account, in order not to step on each other’s toes and to benefit from the age-appropriate content filters provided by the Windows Store.

The issue is that Windows Store apps are not shared between users, so unlike ‘classic’ Windows desktop apps, you can’t install them once and have them be available to all users on a particular device. That is, if I buy a Windows Store app and install it on the PC, it is not automatically made available to anybody else. The Windows Store licensing model allows you to install apps across a number of devices and users. However, before you can actually share your apps, there are a few hoops you need to jump through first. Hence this blog!

Creating a Family of Microsoft Accounts

The first step is to create Microsoft Accounts for your family members and configure them as being part of the same family. Assuming you already have a Microsoft Account, you can go to Your Family and add your children and other adults, each with their own Microsoft Account. It should look something like this:


In addition to setting up your family, you can also control a lot of settings including what your kids have access to, what time of day they have access, etc.

Enabling Family Members to Sign in to your Device

The second step is to allow all family members to sign in to your device. Sign in as yourself and then go to All Settings in the Action Center:


In Settings, select Accounts and then Family and other users, where you should see your family members. To enable them to sign in to the device, click on each one and select Allow. You can also change the account type if you want, but typically they should not be administrators.


In addition to enabling your family to sign in, I recommend that each member set up their own PIN code for signing in. In general, it is a lot easier to remember a PIN than a full password. Each member can set up a PIN by going to the same Accounts settings, but instead selecting Sign-in options and then setting the PIN:

AccountPIN

Sharing Windows Store Apps

Each member of your family can now sign in to your device using a short PIN, but each will see their own Windows Store tailored to them. To share apps with a family member, you first sign in as that member and then switch the account used to talk to the Windows Store, enabling you to install your apps into your family member’s account. This is a little tricky, so let’s do it step-by-step.

First sign in as the family member you want to share the app with, then open the Windows Store. You will see the family member’s account in the upper right corner:

SelectAccount

After selecting the account entry, you will see something like this:

PreSignOut

Click on the account and then select Sign out:

SignOut

Now select the account again and sign in to the store as yourself:

SignIn

As of Dec 25 2015, there seems to be a bug in the Windows Store sign-in process: it may ask for your PIN code, but it actually wants your family member’s PIN code. That is, at least at the time of this writing, use the PIN of the signed-in family member even though it asks for your PIN!

SignInPin

You should now be signed into the Windows Store while the overall account is still that of your family member. You can now either buy and share a new app, or share an already-bought app with that family member. In the first case you just go through the normal payment process and hit Install to share the app with your family member.

If you have already bought the app then you should see something like this: “You own this product and can install it on this device”. Hit Install to share the app with your family member.

Once done you can flip the Windows Store sign in back to the family member.

Hope this helps!

Henrik


Pedro Félix: How to fail in HTTP APIs

In the HTTP protocol, clients use request messages to perform operations, defined by request methods, on resources identified by request URIs.
However, servers aren’t always able or willing to completely and successfully perform these requested operations.
The subject of this post is to present proper ways for HTTP servers to express these non-success outcomes.

Status codes

The primary way to communicate the request completion result is via the response message’s status code.
The status code is a three-digit integer divided into five classes (list adapted from RFC 7231):

  • “1xx (Informational): The request was received, continuing process”
  • “2xx (Successful): The request was successfully received, understood, and accepted”
  • “3xx (Redirection): Further action needs to be taken in order to complete the request”
  • “4xx (Client Error): The request contains bad syntax or cannot be fulfilled”
  • “5xx (Server Error): The server failed to fulfill an apparently valid request”

The last two of these five classes, 4xx and 5xx, are used to represent non-success outcomes.
The 4xx class is used when the request is not completely understood by the server (e.g. incorrect HTTP syntax) or fails to satisfy the server requirements for successful handling (e.g. client must be authenticated).
These are commonly referred to as client errors.
On the other hand, 5xx codes should be strictly reserved for server errors, i.e., situations where the request is not successfully completed due to abnormal behavior on the server.

Here are some basic rules that I tend to use when choosing status codes:

  • Never use a 2xx to represent a non-success outcome.
    Namely, always use a 4xx or 5xx to represent those situations, except when the request can be completed by taking further actions, in which case a 3xx could be used.
  • Reserve the 5xx status code for errors where the fault is indeed on the server side.
    Examples are infrastructural problems, such as the inability to connect to an external system like a database or service, or programming errors such as an index out of bounds or a null dereference.
    Inability to successfully fulfill a request due to malformed or invalid information in the request must instead be signaled with 4xx status codes.
    Some examples are: the request URI does not match any known resource; the request body uses an unsupported format; the request body has invalid information.

As a rule of thumb, and perhaps a little hyperbolically, if an error does not require waking up someone in the middle of the night, then it probably shouldn’t be signaled using a 5xx class code, because it does not signal a server malfunction.

The HTTP specification also defines a set of 41 concrete status codes and associated semantics, of which 19 belong to the 4xx class and 6 belong to the 5xx class.
These standard codes are a valuable resource for the Web API designer, who should simultaneously respect and take advantage of this semantic richness when designing the API responses.
Here are some rules of thumb:

  • Use 500 for unexpected server errors, reserving 503 for planned service unavailability.
  • Reserve the 502 and 504 codes for reverse proxies.
    A failure when contacting an internal third-party system should still use a 500 when this internal system is not visible to the client.
  • Use 401 when the request has invalid or missing authentication/authorization information required to perform the operation.
    If this authentication/authorization information is valid but the operation is still not allowed, then use 403.
  • Use 404 when the resource identified by the request URI does not exist or the server does not want to reveal its existence.
  • Use 400 if parts of the request are not valid, such as fields in the request body.
    For invalid query string parameters I tend to use 404, since the query string is an integral part of the URI; however, using 400 is also acceptable.
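
To make these rules concrete, here is a minimal sketch of how they might map onto an ASP.NET Web API action (CustomerDto, _repository and DataStoreUnavailableException are illustrative names, not part of any framework):

public IHttpActionResult Post(CustomerDto customer)
{
    // 400: the request body carries malformed or invalid information
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    try
    {
        var created = _repository.Add(customer);
        return Created("customers/" + created.Id, created);
    }
    catch (DataStoreUnavailableException)
    {
        // 500: infrastructural failure - the fault is on the server side
        return InternalServerError();
    }
}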

HTTP status codes are extensible, meaning that other specifications, such as WebDAV, can define additional values.
The complete list of codes is maintained by IANA at the Hypertext Transfer Protocol (HTTP) Status Code Registry.
This extensibility means that HTTP clients and intermediaries are not obliged to understand all status codes.
However, they must understand each code class semantics.
For instance, if a client receives the (not yet defined) 499 status code, then it should treat it as a 400 and not as a 200 or a 500.
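
A client can implement this fallback by looking only at the class digit – a quick sketch (client is an HttpClient and HandleClientError is a hypothetical handler):

var response = await client.GetAsync(requestUri);
var statusClass = (int)response.StatusCode / 100;

// an unknown 499 falls into class 4 and is handled as a generic client error
if (statusClass == 4)
{
    HandleClientError(response);
}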

Despite its richness, there aren’t HTTP status codes for all possible failure scenarios.
Namely, by being uniform, these status codes don’t have any domain-specific semantics.
However, there are scenarios where the server needs to provide the client with a more detailed error cause, namely using domain-specific information.
Two common anti-patterns are:

  • Redefining the meaning of standard code for a particular set of resources.
    This solution breaks the uniform interface contract: the semantics of the status code should be the same independently of the request’s target resource.
  • Using an unassigned status code in the 4xx or 5xx classes.
    Unless this is done via a proper registration of the new status code in IANA, this decision will hinder evolution and most probably will collide with future extensions to the HTTP protocol.

Error representations

Instead of fiddling with the status codes, a better solution is to use the response payload to provide a complementary representation of the error cause.
And yes, a response message may (and probably should) contain a body even when it represents an error outcome – response bodies are not exclusive to successful responses.

“Problem Details for HTTP APIs” is an Internet Draft defining JSON and XML formats to represent such error information.
The following excerpt, taken from the draft specification, exemplifies how further information can be conveyed in a response with a 403 (Forbidden) status code, stating the domain-specific reason for the request prohibition.

HTTP/1.1 403 Forbidden
Content-Type: application/problem+json
Content-Language: en

{
    "type": "https://example.com/probs/out-of-credit",
    "title": "You do not have enough credit.",
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",
    "balance": 30,
    "accounts": ["/account/12345","/account/67890"]
}

The application/problem+json media type informs the receiver that the payload is using this format and should be processed according to its rules.
The payload is a JSON object containing both fields defined by the specification and fields that are kept domain-specific.
The type, title, detail and instance fields belong to the first group, with their semantics defined by the specification:

  • type – URI identifier defining the domain-specific error type. If it is a URL, dereferencing it can provide further information on the error type.
  • title – Human-readable description of the error type.
  • detail – Human-readable description of this specific error occurrence.
  • instance – URI identifier for this specific error occurrence.

On the other hand, the balance and accounts fields are domain-specific extensions, and their semantics are scoped to the type identifier.
This allows the same extensions to be used by different Web APIs with different semantics, as long as the type identifiers used are different.
I recommend that an HTTP API have a central place documenting all type values as well as the domain-specific fields associated with each of these values.
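
To illustrate, producing such a response from ASP.NET Web API could look like the following sketch (the ProblemDetails class is illustrative – it is not part of the framework):

public class ProblemDetails
{
    public string Type { get; set; }
    public string Title { get; set; }
    public string Detail { get; set; }
    public string Instance { get; set; }
}

// inside a controller action
var problem = new ProblemDetails
{
    Type = "https://example.com/probs/out-of-credit",
    Title = "You do not have enough credit.",
    Detail = "Your current balance is 30, but that costs 50.",
    Instance = "/account/12345/msgs/abc"
};

var response = Request.CreateResponse(HttpStatusCode.Forbidden, problem);
// override the negotiated media type with the problem details one
response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/problem+json");
return ResponseMessage(response);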

Using this format presents several advantages when compared with constantly “reinventing the wheel” with ad-hoc formats:

  • Taking advantage of rich and well-defined semantics for the specification-defined fields – type, title, detail and instance.
  • Making the non-success responses easier to understand and handle, namely for developers that are familiar with this common format.
  • Being able to use common libraries to produce and consume this format.

When using a response payload to represent the error details, one might wonder if there is still a need to use proper 4xx or 5xx class codes to represent errors.
Namely, can’t we just use 200 for every response, independently of the outcome, and have the client use the payload to distinguish them?
My answer is an emphatic no: using 2xx status codes to represent non-success breaks the HTTP contract, which can have consequences on the behavior of intermediary components.
For instance, a cache will happily cache a 200 response even if its payload is in the application/problem+json format.
Notice that the operation of most intermediaries is independent of the message’s payload.
And yes, HTTP intermediaries are still relevant in an HTTPS world: intermediaries can live before (e.g. client caching) and after (e.g. output caching) the TLS connection endpoints.

The HTTP protocol and its associated ecosystem provide rich ways to express non-success outcomes, via response status codes and error representations.
Taking advantage of them harnesses the power of the Web for HTTP APIs.


Dominick Baier: Reference Tokens and Introspection

Access tokens can come in two shapes: self-contained and reference.

Self-contained tokens are using a protected, time-limited data structure that contains metadata and claims to communicate the identity of the user or client over the wire. A popular format would be JSON Web Tokens (JWT). The recipient of a self-contained token can validate the token locally by checking the signature, expected issuer name and expected audience or scope.

Reference tokens (sometimes also called opaque tokens) on the other hand are just identifiers for a token stored on the token service. The token service stores the contents of the token in some data store, associates it with an infeasible-to-guess id and passes the id back to the client. The recipient then needs to open a back-channel to the token service, send the token to a validation endpoint, and, if the token is valid, retrieve the contents as the response.

A nice feature of reference tokens is that you have much more control over their lifetime. Where a self-contained token is hard to revoke before its expiration time, a reference token only lives as long as it exists in the STS data store. This allows for scenarios like

  • revoking the token in an “emergency” case (lost phone, phishing attack etc.)
  • invalidating tokens at user logout time or app uninstall

The downside of reference tokens is the needed back-channel communication from resource server to STS.

This might not be possible from a network point of view, and some people also have concerns about the extra round-trips and the load that gets put on the STS. The last two issues can be easily fixed using caching.

I have presented this concept to many of my customers over the last years, and the preferred architecture is becoming more and more like this:

If the token leaves the company infrastructure (e.g. to a browser or a mobile device), use reference tokens to be in complete control over lifetime. If the token is used internally only, self-contained tokens are fine.

I am also mentioning (and demo-ing) reference tokens here, starting at minute 36.

IdentityServer3 has supported the reference token concept since day one. You can set the access token type to either JWT or Reference per client, and the ITokenHandleStore interface takes care of persistence and revocation of reference tokens.

For validating reference tokens we provide a simple endpoint called the access token validation endpoint. This endpoint is e.g. used by our access token validation middleware, which is clever enough to distinguish between self-contained and reference tokens and does the validation either locally or using the endpoint. All of this is completely transparent to the API.

You simply specify the Authority (the base URL of IdentityServer) and the middleware will use that to pull the configuration (keys, issuer name etc) and construct the URL to the validation endpoint:

app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
    {
        Authority = "https://localhost:44333/core",
        RequiredScopes = new[] { "api1" }
    });

The middleware also supports caching and scope validation – check the docs here.
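
For example, switching on result caching is just a matter of options – a sketch, assuming the option names of the IdentityServer3.AccessTokenValidation package at the time of writing:

app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
    {
        Authority = "https://localhost:44333/core",
        RequiredScopes = new[] { "api1" },

        // cache successful validation results for five minutes
        EnableValidationResultCache = true,
        ValidationResultCacheDuration = TimeSpan.FromMinutes(5)
    });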

There are also multiple ways to revoke a token – e.g. through the application permission self-service page, the token revocation endpoint, by writing code against the ITokenHandle store interface (e.g. from your user service to clean up tokens during logout) or by simply deleting the token from your data store.

One thing our validation endpoint does not support is authentication – this is a non-issue as long as you don’t want to use the reference token mechanism for confidentiality.

Token Introspection
Many token services have a reference token feature, and all of them, like us, invented their own proprietary validation endpoint. A couple of weeks ago RFC 7662 – “OAuth 2.0 Token Introspection” – was published, defining a standard protocol for this.

IdentityServer3 v2.2 as well as the token validation middleware starting with v2.3 have support for it.

The most important difference is that authentication is now required to access the introspection endpoint. Since this endpoint is not accessed by clients, but by resource servers, we hang the credential (aka secret) off the scope definition, e.g.:

var api1Scope = new Scope
{
    Name = "api1",
    Type = ScopeType.Resource,
 
    ScopeSecrets = new List<Secret>
    {
        new Secret("secret".Sha256())
    }
};

For secret parsing and validation we use the same extensible mechanism that we use for client secrets. That means you can use shared secrets, client certificates or anything custom.

This also means that only scopes that are included in the access token can introspect the token. For any other scope, the token will simply look invalid.

IdentityModel has a client library for the token introspection endpoint which is pretty much self-explanatory:

var client = new IntrospectionClient(
    "https://localhost:44333/core/connect/introspect",
    "api1",
    "secret");
 
var request = new IntrospectionRequest
{
    Token = accessToken
};
 
var result = client.SendAsync(request).Result;
 
if (result.IsError)
{
    Console.WriteLine(result.Error);
}
else
{
    if (result.IsActive)
    {
        result.Claims.ToList().ForEach(c => Console.WriteLine("{0}: {1}",
            c.Item1, c.Item2));
    }
    else
    {
        Console.WriteLine("token is not active");
    }
}

This client is also used in the validation middleware. Once we see the additional secret configuration, the middleware switches from the old validation endpoint to the new introspection endpoint:

app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
    {
        Authority = "https://localhost:44333/core",
        RequiredScopes = new[] { "api1" },
 
        ClientId = "api1",
        ClientSecret = "secret"
    });

Once you have switched to introspection, you can disable the old validation endpoint on the IdentityServerOptions:

var idsrvOptions = new IdentityServerOptions
{
    Factory = factory,
    SigningCertificate = Cert.Load(),
 
    Endpoints = new EndpointOptions
    {
        EnableAccessTokenValidationEndpoint = false
    }
};

Reference tokens are a real problem solver for many situations and the inclusion of the introspection spec and authentication makes this mechanism even more robust and a good basis for future features around access token lifetime management (spoiler alert).

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OWIN, Uncategorized, WebAPI


Dominick Baier: IdentityServer3 v2.2

Yesterday we published v2.2 to NuGet and GitHub. You can see the release notes here.

Besides a couple of bug fixes and refinements – the big features are support for the introspection specification (RFC 7662) and the OpenID Connect HTTP-based logout specification.

The next post will give you some perspective on introspection, and Brock will write up the new features around logout. Stay tuned!

PS. Brock and I will be at NDC London in January. If you are interested in identity & access control with Katana and ASP.NET (4&5) – join us!


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, Uncategorized, WebAPI


Henrik F. Nielsen: Using ASP.NET WebHooks with IFTTT and Zapier to Monitor Twitter and Google Sheets (Link)

Posted the blog Using ASP.NET WebHooks with IFTTT and Zapier to Monitor Twitter and Google Sheets describing how to use ASP.NET WebHooks in connection with IFTTT and Zapier. Both IFTTT and Zapier provide a huge number of integrations in areas such as productivity, automotive, blogging, CRM, commerce, project management, IoT, social, mobile, collaboration, and much more. In this blog we’ll show how to leverage this to build two simple integrations – one with IFTTT and one with Zapier – and then use ASP.NET WebHooks to receive the result. For IFTTT we wire it up to Twitter monitoring when a tweet is sent, and for Zapier we wire it up to a Google Sheet monitoring when a row is added. These are of course just examples – the possibilities for integrations are endless!

Have fun!

Henrik


Henrik F. Nielsen: Updates to Microsoft ASP.NET WebHooks Preview (Link)

Posted the blog Updates to Microsoft ASP.NET WebHooks Preview describing Beta4 of ASP.NET WebHooks Preview with a nice set of new features based on feedback and help from the community! Always let us know what you think – either by raising an issue on GitHub or pinging me on Twitter. You can get the update from nuget.org by looking for the packages under Microsoft.AspNet.WebHooks.*. If you haven’t heard about ASP.NET WebHooks then you may want to look at Introducing Microsoft ASP.NET WebHooks Preview.

Have fun!

Henrik


Dominick Baier: IdentityServer3 Logging & Monitoring using Serilog and Seq

IdentityServer has two fundamental “monitoring” facilities: development-time logging and production-time eventing. The original docs are here.

Logging is for developers – in fact – when I start a new IdentityServer3 project, that’s the first thing I configure. For security reasons (and to be spec compliant) the error messages in IdentityServer are pretty vague – logging gives you a detailed inside view of what’s really going on.

We use the fabulous LibLog library for logging, which means we support logging frameworks like Serilog, NLog, log4net and others out of the box. If you want to directly connect to a custom logging framework – we have a sample for that as well.

Depending on how many logging sources you enable, the logs will contain sensitive data like passwords, claims and tokens. This is where eventing comes in.

Events are more suitable for production scenarios where you want more high-level – but also queryable – data. This includes typical events like login success/failure, errors etc. To connect IdentityServer to an event processing system (ELK being the most popular), you would implement the IEventService interface.

For this post I want to show you how to connect IdentityServer to Serilog for logging and a local Seq instance for querying and parsing events.

Logging
That’s super easy – first get Serilog from NuGet:

install-package Serilog

If you are using an IIS hosted IdentityServer, you probably want to log to a text file using System.Diagnostics (here’s a nice tail tool to view those logs in real time). In that case, add the following Serilog setup to your Startup:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Trace()
    .CreateLogger();

..and add the following snippet to your web.config:

<system.diagnostics>
  <trace autoflush="true" indentsize="4">
    <listeners>
      <add name="myListener" 
            type="System.Diagnostics.TextWriterTraceListener" 
            initializeData="Trace.log" />
    </listeners>
  </trace>
</system.diagnostics>  

That’s it. LibLog will detect that Serilog has been configured in the host and will start piping all IdentityServer logging to it.

If you prefer self-hosts for IdentityServer (I do for dev time) – there’s a nice console log formatter:

install-package Serilog.Sinks.Literate

In that case add the following Serilog setup to your host:

Log.Logger = new LoggerConfiguration()
    .WriteTo
    .LiterateConsole(outputTemplate: "{Timestamp:HH:mm} [{Level}] ({Name:l}){NewLine} {Message}{NewLine}{Exception}")
    .CreateLogger();

This gives you nicely formatted console output:

SerilogConsole

Eventing
Seq is free for a single user, easy to use and easy to set up. To connect IdentityServer to Seq, I wrote the following event service:

class SeqEventService : IEventService
{
    static readonly ILogger Log;

    static SeqEventService()
    {
        // send all events to the local Seq instance
        Log = new LoggerConfiguration()
            .WriteTo.Seq("http://localhost:5341")
            .CreateLogger();
    }

    public Task RaiseAsync<T>(Event<T> evt)
    {
        // the @ operator destructures context and details into queryable properties
        Log.Information("{Id}: {Name} / {Category} ({EventType}), Context: {@context}, Details: {@details}",
            evt.Id,
            evt.Name,
            evt.Category,
            evt.EventType,
            evt.Context,
            evt.Details);

        return Task.FromResult(0);
    }
}
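
Note that the Seq sink ships as a separate NuGet package – assuming the standard package name, you would install it first:

install-package Serilog.Sinks.Seq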

..and registered it like this:

factory.EventService = new Registration<IEventService, SeqEventService>();

Using the Seq console, you can now query the events – e.g. for failed logons, IP addresses etc.

Seq

Nice! HTH.

(full source code can be found here)


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: The State of Security in ASP.NET 5 and MVC 6: Authorization

The hardest part in designing an application is authorization. The requirements are always so app-specific that for 10 applications you often see 12 different implementations.

To make things worse, ASP.NET and MVC traditionally had not much more built-in to offer than boring role checks. This led to either unmaintainable code (hard-coded role names and Authorize attributes) or completely custom implementations – or both.

In ASP.NET 5, a brand new authorization API is supposed to improve that situation – and IMHO – oh yes it does. Let’s have a look.

Overview
ASP.NET 5 supports two styles of authorization out of the box – policy-based and resource-based.

Both styles are a substantial improvement over the current ASP.NET authorization features and reduce the need to write your own authorization attribute/filter/infrastructure – though this is still totally possible.

The new Authorize Attribute
My main gripe with the old attribute is that it pushes developers towards hard-coding roles (or even worse – names) into their controller code. It violates separation of concerns and leads to hard-to-maintain code with role names sprinkled all over your code base.

Also – let’s face it – declarative, role-based security might be nice for demos but is nowhere near flexible enough to write anything but trivial applications.

The new Authorize attribute can still do role checks like this:

[Authorize(Roles = "Sales")]
public IActionResult DoSalesyStuff()
{ /* .. */ }

But this is mainly for backwards compatibility (the ability to check for names is gone). The more recommended pattern is to use so-called authorization policies instead:

[Authorize("SalesOnly")]
public IActionResult DoSalesyStuff()
{ /* .. */ }

Let’s have a look at policies next.

Policies & Requirements
Policies are a way to create re-usable authorization logic. Policies consist of one or more so-called requirements. If all requirements of a policy are met, the authorization check is successful – otherwise it fails.

Policies are created using a policy builder, and the following snippet creates a very simple policy (aka “require authenticated users”) and sets that globally in MVC:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // only allow authenticated users
        var defaultPolicy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
 
        services.AddMvc(setup =>
        {
            setup.Filters.Add(new AuthorizeFilter(defaultPolicy));
        });
    }
}

There are more extension methods similar to RequireAuthenticatedUser – e.g. RequireClaim or RequireRole.

Another common use case is named policies, which can be referenced like the above SalesOnly example (again in Startup.ConfigureServices):

services.AddAuthorization(options =>
{
    // inline policies
    options.AddPolicy("SalesOnly", policy =>
    {
        policy.RequireClaim("department""sales");
    });
    options.AddPolicy("SalesSenior", policy =>
    {
        policy.RequireClaim("department""sales");
        policy.RequireClaim("status""senior");
    });
});

You can also encapsulate requirements in classes, e.g.:

public class StatusRequirement : AuthorizationHandler<StatusRequirement>, IAuthorizationRequirement
{
    private readonly string _status;
    private readonly string _department;
 
    public StatusRequirement(string department, string status, bool isSupervisorAllowed = true)
    {
        _department = department;
        _status = status;
    }
 
    protected override void Handle(AuthorizationContext context, StatusRequirement requirement)
    {
        if (context.User.IsInRole("supervisor"))
        {
            context.Succeed(requirement);
            return;
        }
 
        if (context.User.HasClaim("department", _department) &&
            context.User.HasClaim("status", _status))
        {
            context.Succeed(requirement);
        }
    }
}

Which can then be added like this:

options.AddPolicy("DevInterns", policy =>
{
    policy.AddRequirements(new StatusRequirement("development", "intern"));
 
    // ..or using an extension method
    //policy.RequireStatus("development", "intern");
});

Under the covers, the AddAuthorization extension method also puts an IAuthorizationService (or more specifically the DefaultAuthorizationService) into the DI container. This class can be used to programmatically evaluate policies (amongst other things – more on that later).

To make the authorization service available – simply add it to e.g. a controller constructor:

public class TestController : Controller
{
    private readonly IAuthorizationService _authz;
 
    public TestController(IAuthorizationService authz)
    {
        _authz = authz;
    }
}

..and then use it in an action:

public async Task<IActionResult> SalesSeniorImperative()
{
    if (await _authz.AuthorizeAsync(User, "SalesSenior"))
    {
        return View("success");
    }
 
    return new ChallengeResult();
}

Scott used a slightly different approach, but using the same underlying infrastructure.

Remark ChallengeResult can be used to trigger an “access denied” condition in MVC. The cookie middleware e.g. will translate that either into a redirect to a login page for anonymous users, or a redirect to an access denied page for authenticated users.

Remark 2 Since views in MVC 6 also support DI, you can inject the authorization service there as well. Some people like this approach to conditionally render UI elements.

@using Microsoft.AspNet.Authorization
@inject IAuthorizationService _authz

@{
    ViewData["Title"] = "Home Page";
}
 
@if (await _authz.AuthorizeAsync(User, "SalesOnly"))
{
    <div>
        <a href="test/salesOnly">Sales only</a>
    </div>
}

This is a nice way to centralize authorization policies and re-use them throughout the application.

The only thing I don’t like about this approach is that it pushes you towards using the claims collection as the sole data source for authorization decisions. As we all know, claims describe the identity of the user, and are not a general-purpose dumping ground for all sorts of data – e.g. permissions.

It would be nice if one could use the DI system of ASP.NET to make further data sources accessible in custom requirements. I’ve opened an issue for that – we’ll see what happens.

Resource-based Authorization
This is a new approach for ASP.NET and is inspired by the resource/action based approach that we had in WIF before (which was ultimately inspired by XACML). We also like that approach a lot, but the problem with the WIF implementation (and also ours) was always that due to the lack of strong typing, the implementation became messy quickly (or at least you needed a certain amount of discipline to keep it clean over multiple iterations).

The idea is simple – you identify resources that your application is dealing with – e.g. customers, orders, products (yawn). Then you write a so called handler for each of these resources where you express the authorization requirements in code.

You use requirements to express whatever action is supposed to be applied to the resource, and conclude with a success/failure, e.g.:

public class CustomerAuthorizationHandler : 
    AuthorizationHandler<OperationAuthorizationRequirement, Customer>
{
    protected override void Handle(AuthorizationContext context, 
        OperationAuthorizationRequirement requirement, 
        Customer resource)
    {
        // implement authorization policy for customer resource
    }
}

Operation requirements are built-in and can be used to model simple string-based actions – but you can also write your own, or derive from OperationAuthorizationRequirement.

public static class CustomerOperations
{
    public static OperationAuthorizationRequirement Manage = 
        new OperationAuthorizationRequirement { Name = "Manage" };
    public static OperationAuthorizationRequirement SendMail =
        new OperationAuthorizationRequirement { Name = "SendMail" };
 
    public static OperationAuthorizationRequirement GiveDiscount(int amount)
    {
        return new DiscountOperationAuthorizationRequirement
        {
            Name = "GiveDiscount",
            Amount = amount
        };
    }
}
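
The DiscountOperationAuthorizationRequirement returned by GiveDiscount is not shown in the post – a minimal sketch of such a derived requirement (an assumption; the actual sample may differ) could look like this:

public class DiscountOperationAuthorizationRequirement : OperationAuthorizationRequirement
{
    // the discount amount this operation is asking for
    public int Amount { get; set; }
}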

You then register the resource handler with the DI system and can access it imperatively from within your controllers (using the above mentioned authorization service):

public async Task<IActionResult> Discount(int amount)
{
    var customer = new Customer
    {
        Name = "Acme Corp",
        Region = "south",
        Fortune500 = true
    };
 
    if (await _authz.AuthorizeAsync(User, customer, 
        CustomerOperations.GiveDiscount(amount)))
    {
        return View("success");
    }
            
    return new ChallengeResult();
}

What I like about this approach is that the authorization policy has full strongly typed access to the domain object it implements authorization for, as well as to the principal. This is a huge improvement over the WIF API. It also makes it easy to unit test your controller without the authorization code – and even more importantly (at least for a security guy) – it allows unit testing the authorization policy itself.

protected override void Handle(AuthorizationContext context, 
    OperationAuthorizationRequirement requirement, 
    Customer resource)
{
    // user must be in sales
    if (!context.User.HasClaim("department""sales"))
    {
        context.Fail();
        return;
    }
 
    // ...and responsible for customer region
    if (!context.User.HasClaim("region", resource.Region))
    {
        context.Fail();
        return;
    }
 
    // if customer is fortune 500 - sales rep must be senior
    if (resource.Fortune500)
    {
        if (!context.User.HasClaim("status""senior"))
        {
            context.Fail();
            return;
        }
    }
 
    if (requirement.Name == "GiveDiscount")
    {
        HandleDiscountOperation(context, requirement, resource);
        return;
    }
 
    context.Succeed(requirement);
}

In addition, the resource handler can make full use of the DI system. That means we can inject access to arbitrary data stores. A very common use case is to use some sort of database to query permission tables or similar. This makes it very flexible.

Imagine a permission service that queries your authorization data using some backing store (the _permissions variable):

private void HandleDiscountOperation(
    AuthorizationContext context, 
    OperationAuthorizationRequirement requirement, 
    Customer resource)
{
    var discountOperation = requirement as DiscountOperationAuthorizationRequirement;
    var salesRep = context.User.FindFirst("sub").Value;
 
    var result = _permissions.IsDiscountAllowed(
        salesRep,
        resource.Id,
        discountOperation.Amount);
 
    if (result)
    {
        context.Succeed(requirement);
    }
    else
    {
        context.Fail();
    }
}

All in all – I am very pleased with this new API and it will be interesting to see how it performs in a real application.

The full source code of the sample can be found here and the repo for the authorization API is here (open an issue if you have feedback).

Have fun!


Filed under: .NET Security, ASP.NET, WebAPI


Filip Woj: Global route prefixes with attribute routing in ASP.NET Web API

As you may have learnt from some of the older posts, I am a big fan and a big proponent of attribute routing in ASP.NET Web API.

One of the things that is missing out of the box in Web API’s implementation of attribute routing is the ability to define global prefixes (i.e. a global “api” prefix that would be prepended to every route) – that would allow you to avoid repeating the same part of the URL over and over again on different resources. However, using the available extensibility points, you can provide that functionality yourself.

Let’s have a look.

RoutePrefixAttribute

ASP.NET Web API allows you to provide a common route prefix for all the routes within a controller via RoutePrefixAttribute. This is obviously very convenient, but unfortunately that attribute, out of the box, cannot be used globally.

Here’s a typical use case per controller:

[RoutePrefix("api/items")]
public class ItemsController : ApiController
{
    [Route("")]
    public IEnumerable<Item> Get() { ... }

    [Route("{id:int}")]
    public Item Get(int id) { ... }

    [Route("")]
    public HttpResponseMessage Post(Item item) { ... }
}

Of course, in use cases like the one above, it would be nice to avoid repeating that “api” on every single controller.

Applying a route prefix globally

In order to apply a route prefix globally, you will need to customize the way ASP.NET Web API creates routes from the route attributes. The way to do that is to implement a custom IDirectRouteProvider, which can be used to feed custom routes into the Web API pipeline at application startup.

public interface IDirectRouteProvider
{
    IReadOnlyList<RouteEntry> GetDirectRoutes(
        HttpControllerDescriptor controllerDescriptor, 
        IReadOnlyList<HttpActionDescriptor> actionDescriptors,
        IInlineConstraintResolver constraintResolver);
}

That would typically mean a lot of work though, since you would need to manually deal with everything from A to Z – like manually processing controller and action descriptors. A much easier approach is to subclass the existing default implementation, DefaultDirectRouteProvider, and override just the method that deals with route prefixes (GetRoutePrefix). Normally, that method will only recognize route prefixes from the controller level, but we want it to also allow global prefixes.

DefaultDirectRouteProvider exposes all of its members as virtual – allowing you to tap into the route creation process at different stages, i.e. when reading route prefixes, when reading controller route attributes or when reading action route attributes. This is shown in the next snippet:

public class CentralizedPrefixProvider : DefaultDirectRouteProvider
{
    private readonly string _centralizedPrefix;

    public CentralizedPrefixProvider(string centralizedPrefix)
    {
        _centralizedPrefix = centralizedPrefix;
    }

    protected override string GetRoutePrefix(HttpControllerDescriptor controllerDescriptor)
    {
        var existingPrefix = base.GetRoutePrefix(controllerDescriptor);
        if (existingPrefix == null) return _centralizedPrefix;

        return string.Format("{0}/{1}", _centralizedPrefix, existingPrefix);
    }
}

The CentralizedPrefixProvider shown above takes a prefix that is globally prepended to every route. If a particular controller has its own route prefix (obtained via the base.GetRoutePrefix method invocation), then the centralized prefix is simply prepended to that one too.

Finally, the usage of this provider is as follows – instead of invoking the “regular” version of the MapHttpAttributeRoutes method, you need to use the overload that takes in an instance of IDirectRouteProvider:

config.MapHttpAttributeRoutes(new CentralizedPrefixProvider("api"));

In this example, all attribute routes would get “api” prepended to the route template.
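
For illustration, the ItemsController from the beginning of the post could then drop the repeated segment while keeping the same effective templates:

[RoutePrefix("items")]
public class ItemsController : ApiController
{
    [Route("")]         // effective template: api/items
    public IEnumerable<Item> Get() { ... }

    [Route("{id:int}")] // effective template: api/items/{id:int}
    public Item Get(int id) { ... }
}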

In more interesting scenarios, you can also use parameters with CentralizedPrefixProvider – after all, we are just building up a regular route template here. For example, the following setup, defining a global version parameter as part of every route, is perfectly valid too:

config.MapHttpAttributeRoutes(new CentralizedPrefixProvider("api/v{version:int}"));

Now in any action method signature, you can add an additional version integer parameter, even though you don’t have it in the route template beside the action – because it will be propagated down from the CentralizedPrefixProvider. For example:

[RoutePrefix("items")]
public class ItemsController : ApiController
{
    [Route("{id:int}")]
    public Item Get(int id, int version) { ... }
}

And obviously – the “api” in the controller-level RoutePrefixAttribute is no longer needed either.


Henrik F. Nielsen: Receive WebHooks from Azure Alerts and Kudu (Azure Web App Deployment)

Posted a new blog on receiving WebHooks from Azure Alerts and Kudu (Azure Web App Deployment) and how to monitor your Azure Web App with Kudu and Azure Alert WebHooks using  Microsoft ASP.NET WebHooks.

Have fun!

Henrik


Dominick Baier: Upcoming Identity & Access Control Workshops in Europe

Brock and I will be in London in November and January to hold our identity & access control workshop.

In November we are at the SDD Deep Dive event and do a very special three day version which includes extra content around IdentityServer3 and migrating to ASP.NET 5 and MVC6.

In January we will be at NDC London for a two day workshop which will also include the ASP.NET 5 content.

I will be also in Oslo in January to do the same two day version of the workshop.

Hope to see you at one of those events!


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Henrik F. Nielsen: Integrating with Instagram Using ASP.NET WebHooks Preview (Link)

In the blog Integrating with Instagram using ASP.NET WebHooks Preview we describe how to integrate with Instagram using Microsoft ASP.NET WebHooks. This enables you to get notified when pictures are posted, for example within a 1000 meter radius or the Seattle Space Needle!

Have fun!

Henrik


Henrik F. Nielsen: Sending WebHooks with ASP.NET WebHooks Preview (Link)

In the blog Introducing Microsoft ASP.NET WebHooks Preview, we gave an overview of how to work with Microsoft ASP.NET WebHooks. We mentioned that it is not only possible to receive WebHooks from others but also to add support for sending WebHooks from your Web Application. The blog Sending WebHooks with ASP.NET WebHooks Preview goes into detail on how to do that.

Have fun!

Henrik


Filip Woj: Disposing resources at the end of Web API request

Sometimes you have resources in your code that implement IDisposable, and that you’d like to be disposed only at the end of the HTTP request. I have seen a solution to this problem rolled out by hand in a few code bases in the past – but in fact this feature is already built into Web API, which I don’t think a lot of people are aware of.

Let’s have a quick look.

RegisterForDispose

ASP.NET Web API contains an extension method for HttpRequestMessage that’s called RegisterForDispose – which allows you to pass in a single instance or a collection of instances of IDisposable.

What happens under the hood is that Web API will simply store those object references in the properties dictionary of the request message (under the HttpPropertyKeys.DisposableRequestResourcesKey key), and at the end of the Web API request lifetime it will iterate through those objects and call Dispose on them. This is the responsibility of the Web API host, so each of the existing hosting layers – Web Host (“classic” ASP.NET), self host (WCF-based) and OWIN host (the Katana adapter) – explicitly implements this feature. It is guaranteed to happen (for example, the Katana adapter wraps the entire Web API pipeline in a try/catch block and performs the disposal in finally).

As a result you can do things like the code below. Imagine a message handler that opens a stream, writes to it, then saves the stream into request properties (so that other Web API components such as other message handlers, filters, controllers etc can use it too):

public class WriterHandler : DelegatingHandler
{
    protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var writer = new StreamWriter("c:\\file.txt", true);
        //save the writer for later use
        request.Properties["filewriter"] = writer;

        await writer.WriteLineAsync("some important information");

        request.RegisterForDispose(writer);
        return await base.SendAsync(request, cancellationToken);
    }
}

You don’t have to use a using statement here, as registering for disposal via Web API’s resource tracking feature will ensure that the stream writer is disposed of at the end of the request lifetime. This is very convenient, as other components during the HTTP request can share the same resource, as shown below.
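
For illustration, a controller action later in the pipeline could pick up the same writer from the request properties (LogController is a hypothetical example, assuming the handler above has run):

public class LogController : ApiController
{
    public async Task<IHttpActionResult> Post()
    {
        // reuse the writer stored by WriterHandler - no using block needed,
        // disposal happens automatically at the end of the request
        var writer = (StreamWriter)Request.Properties["filewriter"];
        await writer.WriteLineAsync("more information from the controller");

        return Ok();
    }
}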

It is also worth adding that resource tracking is also used if you ever use Web API’s GetDependencyScope feature – which allows you to resolve services from the registered dependency resolver:

var myService = request.GetDependencyScope().GetService(typeof(IMyService));

In the above case, the instance of IMyService will be registered for disposal at the end of the Web API request.

One final note – at any time you can also take a peek into the resources currently registered for disposal, using the GetResourcesForDisposal extension method on HttpRequestMessage.
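
A quick sketch:

// inspect the resources that will be disposed at the end of the request
var tracked = request.GetResourcesForDisposal();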


Dominick Baier: IdentityModel 1.0.0 released

As part of the ongoing effort to modernize our libraries, I released IdentityModel today.

IdentityModel contains useful helpers, extension methods and constants when working with claims-based identity in general and OAuth 2.0 and OpenID Connect in particular.

See the overview here and the NuGet package here.

Feedback and PRs welcome.


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Dominick Baier: The State of Security in ASP.NET 5 and MVC 6: OAuth 2.0, OpenID Connect and IdentityServer

ASP.NET 5 contains a middleware for consuming tokens – but no longer one for producing them. I personally have never been a big fan of the Katana authorization server middleware (see my thoughts here) – and according to this, it seems that the ASP.NET team sees IdentityServer as the replacement for it going forward.

So let’s have a look at the bits & pieces and how IdentityServer can help in implementing authentication for MVC web apps and APIs.

IdentityServer
IdentityServer is an extensible OAuth 2.0 authorization server and OpenID Connect provider framework. It is different from the old authorization server middleware as it operates on a higher level of abstraction. IdentityServer takes care of all the protocol and data management details while you only need to model your application architecture using clients, users and resources (see the big picture and terminology pages).

IdentityServer is currently available as OWIN middleware only (as opposed to native ASP.NET 5 middleware) – that means it can be used in Katana-based hosts and ASP.NET 5 hosts targeting the “full .NET framework” aka DNX451. It currently does not run on the Core CLR – but we are working on it.

There is an integration package for bridging OWIN into ASP.NET 5 which allows wiring up IdentityServer using the typical application builder extension method pattern:

var certFile = env.ApplicationBasePath + "\\idsrv3test.pfx";
 
var idsrvOptions = new IdentityServerOptions
{
    Factory = new IdentityServerServiceFactory()
                    .UseInMemoryUsers(Users.Get())
                    .UseInMemoryClients(Clients.Get())
                    .UseInMemoryScopes(Scopes.Get()),
 
    SigningCertificate = new X509Certificate2(certFile, "idsrv3test"),
};
 
app.UseIdentityServer(idsrvOptions);

For this sample we use in-memory data sources, but you can connect to any data source you like.

The Users class simply defines two test users (Alice and Bob of course) – I won’t bore you with the details. Scopes on the other hand define the resources in your application, e.g. identity resources like email addresses or roles, or resource scopes that represent your APIs:

public static IEnumerable<Scope> Get()
{
    return new[]
    {
        // standard OpenID Connect scopes
        StandardScopes.OpenId,
        StandardScopes.ProfileAlwaysInclude,
        StandardScopes.EmailAlwaysInclude,
 
        // API - access token will 
        // contain roles of user
        new Scope
        {
            Name = "api1",
            DisplayName = "API 1",
            Type = ScopeType.Resource,
 
            Claims = new List<ScopeClaim>
            {
                new ScopeClaim("role")
            }
        }
    };
}

Clients on the other hand represent the applications that will request authentication tokens (also called identity tokens) and access tokens.

public static List<Client> Get()
{
    return new List<Client>
    {
        new Client
        {
            ClientName = "Test Client",
            ClientId = "test",
            ClientSecrets = new List<Secret>
            {
                new Secret("secret".Sha256())
            },
 
            // server to server communication
            Flow = Flows.ClientCredentials,
 
            // only allowed to access api1
            AllowedScopes = new List<string>
            {
                "api1"
            }
        },
 
        new Client
        {
            ClientName = "MVC6 Demo Client",
            ClientId = "mvc6",
 
            // human involved
            Flow = Flows.Implicit,
 
            RedirectUris = new List<string>
            {
                "http://localhost:19276/signin-oidc",
            },
            PostLogoutRedirectUris = new List<string>
            {
                "http://localhost:19276/",
            },
 
            // access to identity data and api1
            AllowedScopes = new List<string>
            {
                "openid",
                "email",
                "profile",
                "api1"
            }
        }
    };
}

With these definitions in place we can now add an MVC application that uses IdentityServer for authentication, as well as an API that supports token-based access control.

MVC Web Application
ASP.NET 5 includes middleware for OpenID Connect authentication. With this middleware you can use any OpenID Connect compliant provider (see here) to outsource the authentication logic.

Simply add the cookie middleware (for local sign-in) and the OpenID Connect middleware pointing to our IdentityServer to the pipeline. You need to supply the base URL to IdentityServer (so the middleware can fetch the discovery document), the client ID, scopes, and redirect URI (compare with the client definition we did earlier). The following snippet requests the user’s ID, email and profile claims in addition to an access token that can be used for the “api1” resource.

app.UseCookieAuthentication(options =>
{
    options.AuthenticationScheme = "Cookies";
    options.AutomaticAuthenticate = true;
});
 
app.UseOpenIdConnectAuthentication(options =>
{
    options.AutomaticChallenge = true;
    options.AuthenticationScheme = "Oidc";
    options.SignInScheme = "Cookies";
 
    options.Authority = "http://localhost:5000";
    options.RequireHttpsMetadata = false;
 
    options.ClientId = "mvc6";
    options.ResponseType = "id_token token";
 
    options.Scope.Add("openid");
    options.Scope.Add("email");
    options.Scope.Add("api1");
});

MVC Web API
Building web APIs with MVC that are secured by IdentityServer is equally simple – the ASP.NET 5 OAuth 2.0 middleware supports JWTs out of the box and even understands discovery documents now.

app.UseJwtBearerAuthentication(options =>
{
    options.Authority = "http://localhost:5000";
    options.RequireHttpsMetadata = false;
 
    options.Audience = "http://localhost:5000/resources";
    options.AutomaticAuthenticate = true;
});
 
app.UseMiddleware<RequiredScopesMiddleware>(new List<string> { "api1" });

Again, Authority points to the base address of IdentityServer so the middleware can download the metadata and learn about the signing keys.

Since the built-in middleware does not understand the concept of scopes, I had to fix that by adding the extra scope middleware.

Expect this to be improved in future versions.
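
The RequiredScopesMiddleware comes from the sample – a minimal sketch of what such a scope-checking middleware might look like (the actual sample implementation may differ):

public class RequiredScopesMiddleware
{
    private readonly RequestDelegate _next;
    private readonly List<string> _requiredScopes;

    public RequiredScopesMiddleware(RequestDelegate next, List<string> requiredScopes)
    {
        _next = next;
        _requiredScopes = requiredScopes;
    }

    public async Task Invoke(HttpContext context)
    {
        // succeed if the access token contains at least one of the required scopes
        if (context.User.HasClaim(c => c.Type == "scope" && _requiredScopes.Contains(c.Value)))
        {
            await _next(context);
            return;
        }

        context.Response.StatusCode = 403;
    }
}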

API Client
Tokens can be also requested programmatically – e.g. for server-to-server communication scenarios. Our IdentityModel library has a TokenClient helper class which makes that very easy:

async Task<TokenResponse> GetTokenAsync()
{
    var tokenClient = new TokenClient(
        "http://localhost:5000/connect/token",
        "test",
        "secret");
 
    return await tokenClient.RequestClientCredentialsAsync("api1");
}

…and using the token:

async Task<HttpResponseMessage> CallApi(string accessToken)
{
    var client = new HttpClient();
    client.SetBearerToken(accessToken);
 
    return await client.GetAsync("http://localhost:19806/identity");
}
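
Putting the two together (a minimal usage sketch):

var tokenResponse = await GetTokenAsync();
var response = await CallApi(tokenResponse.AccessToken);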

The full source code can be found here.

Summary
IdentityServer can be a replacement for the Katana authorization server middleware – but as I said, it is a different mindset, since IdentityServer is not protocol-oriented but rather focuses on your application architecture. I should mention that there is actually a middleware for ASP.NET 5 that mimics the Katana approach – you can find it here.

In the end it is all about personal preference, and most importantly how comfortable you are with the low level details of the protocols.

There are actually many more samples on how to use IdentityServer in the samples repo. Give it a try.


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: The State of Security in ASP.NET 5 and MVC 6: Claims & Authentication

Disclaimer: Microsoft announced the roadmap for ASP.NET 5 yesterday – the current release date of the final version is Q1 2016. Some details of the features and APIs I mention will change between now and then. This post is about beta 5.

Claims
I started talking about claims-based identity back in 2005. That was the time when Microsoft introduced a new assembly to the .NET Framework called System.IdentityModel. This assembly contained the first attempt at introducing claims to .NET, but it was only used by WCF and was a bit over-engineered (go figure). The claims model was subsequently re-worked by the WIF guys a couple of years later (kudos!) and then re-integrated into .NET with version 4.5.

Starting with .NET 4.5, every built-in identity/principal implementation was based on claims, essentially replacing the 12+ year old, antiquated IIdentity/IPrincipal interfaces. Katana – but more importantly ASP.NET 5 – is the first framework that now uses ClaimsPrincipal and ClaimsIdentity as first class citizens – identities are now always based on claims – and finally – no more down-casting!

HttpContext.User and Controller.User are now ClaimsPrincipals – and writing the following code feels as natural as it should be:

var email = User.FindFirst("email");

This might not seem like a big deal – but given that it took almost ten years to get there, it shows just how slowly things move sometimes. I also had to take part in a number of discussions with people at Microsoft over the years to convince them that this is actually the right thing to do…

Authentication API
Another thing that ASP.NET was missing is a uniform authentication API – this was fixed in Katana via the IAuthenticationManager and was pretty much identically brought over to ASP.NET 5.

AuthenticationManager hangs off the HttpContext and is a uniform API over the various authentication middleware that do the actual grunt work. The major APIs are:

  • SignIn/SignOut
    Instructs a middleware to do a signin/signout gesture
  • Challenge
    Instructs a middleware to trigger some external authentication handshake (this is further abstracted by the new ChallengeResult in MVC 6)
  • Authenticate
    Triggers validation of an incoming credential and conversion to claims
  • GetAuthenticationSchemes
    Enumerates the registered authentication middleware, e.g. for populating a login UI dynamically
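
For example, dynamically building a login page from the registered middleware could look like this (a minimal sketch against the beta-time API; the exact shape of the returned descriptions may differ between betas):

var schemes = Context.Authentication.GetAuthenticationSchemes();

foreach (var scheme in schemes)
{
    // e.g. render a login link per scheme
    Console.WriteLine("{0} ({1})", scheme.AuthenticationScheme, scheme.DisplayName);
}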

Authentication Middleware
The actual authentication mechanisms and protocols are implemented as middleware. If you are coming from Katana then this is a no-brainer. If your background is ASP.NET.OLD, think of middleware as HTTP modules – just more flexible and lightweight.

For web UIs the following middleware is included:

  • Cookie-based authentication (as a replacement for good old forms authentication or the session authentication module from WIF times)
  • Google, Twitter, Facebook and Microsoft Account
  • OpenID Connect

WS-Federation is missing right now. It is also worth mentioning that there is now a generic middleware for OAuth2-style authentication (sigh). This will make it easier to write middleware for the various OAuth2 dialects without having to duplicate all the boilerplate code and will make the life of these guys much easier.

Wiring up the cookie middleware looks like this:

app.UseCookieAuthentication(options =>
{
    options.LoginPath = new PathString("/account/login");
    options.AutomaticAuthentication = true;
    options.AuthenticationScheme = "Cookies";
});

The coding style is a little different – instead of passing in an options instance, you now use an Action<Options> callback. AuthenticationType has been renamed to AuthenticationScheme (and the weird re-purposing of IIdentity.AuthenticationType is gone for good). All authentication middleware is now passive – setting it to active means setting AutomaticAuthentication to true.

For signing in a user, you create the necessary claims and wrap them in a ClaimsPrincipal. Then you call SignIn to instruct the cookie middleware to set the cookie.

var claims = new List<Claim>
{
    new Claim("sub", model.UserName),
    new Claim("name", "bob"),
    new Claim("email", "bob@smith.com")
};

var id = new ClaimsIdentity(claims, "local", "name", "role");
Context.Authentication.SignIn("Cookies", new ClaimsPrincipal(id));

Google authentication as an example looks like this:

app.UseGoogleAuthentication(options =>
{
    options.ClientId = "xxx";
    options.ClientSecret = "yyy";

    options.AuthenticationScheme = "Google";
    options.SignInScheme = "Cookies";
});

The external authentication middleware implements the authentication protocol only – and when done – hands over to the middleware that does the local sign-in. That’s typically the cookie middleware. For this purpose you set the SignInScheme to the name of the middleware that should take over (this has been renamed from SignInAsAuthenticationType – again clearly an improvement).

Also the pattern of having more than one cookie middleware to be able to inspect claims from external authentication systems before turning them into a trusted cookie still exists. That’s probably a separate post.

For web APIs there is only one relevant middleware – consuming bearer tokens. This middleware has support for JWTs out of the box and is extensible to use different token types and different strategies to convert the tokens to claims. One notable new feature is support for OpenID Connect metadata. That means if your OAuth2 authorization server also happens to be an OpenID Connect provider with support for a discovery document (e.g. IdentityServer or Azure Active Directory) the middleware can auto-configure the issuer name and signing keys.
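
Wiring that up could look roughly like this (a hedged sketch: the middleware and option names kept changing between betas, and the URL is a placeholder):

// Hedged sketch, beta-era names: pointing the bearer middleware at an
// OpenID Connect provider lets it pull issuer name and signing keys
// from the discovery document.
app.UseOAuthBearerAuthentication(options =>
{
    options.Authority = "https://localhost:44333/core"; // e.g. IdentityServer (placeholder)
    options.AutomaticAuthentication = true;
});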

One thing that is “missing” when coming from Katana, is the OAuth2 authorization server middleware. There are currently no plans to bring that forward. IdentityServer can be a replacement for that. I will dedicate a separate blog post to that topic.

Summary
If you are coming from Katana, none of this looks terribly new. AuthenticationManager and the authentication middleware work almost identically. Learning that was not a waste of time.

If you are coming from plain ASP.NET (and maybe even WIF or DotNetOpenAuth) this all works radically different under the covers and is really only “conceptually compatible”. In that case you have quite a lot of new tech to learn to make the jump to ASP.NET 5.

Unfortunately (as always) the ASP.NET templates are not very helpful in learning the new features. You either get an empty one, or the full-blown-all-bells-and-whistles-complexity-hidden-by-extensions-method-over-more-abstractions version of that. Therefore I created the (almost) simplest possible cookie-based starter template here. More to follow.


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: The State of Security in ASP.NET 5 and MVC 6

We’ve been closely following ASP.NET 5 and MVC 6 since the days it was presented behind closed doors, through the “vNext” and “Project K” phase up to recent beta builds.

I personally monitored all developments in the security space in particular, and was even involved in one or two decision-making processes – particularly around authorization, which also makes me a little bit proud.

In preparation for the planned ASP.NET and MVC 6 security course for PluralSight, I always maintained a (more or less structured) list of changes and new features.

Tomorrow will be the release of Visual Studio 2015, which does NOT include the final release of ASP.NET 5. Instead we are between Beta 5 and 6 right now and the rough feature set has more or less been decided on. That’s why I think that now is a good time to do a couple of overview posts on what’s new.

Many details are still very much in flux, and the best way to keep up with them is to subscribe to the various security-related repos on GitHub. Many things are not set in stone yet, so this is also an opportunity to take part in the discussion – which I would encourage you to do.

The planned feature posts are:

Since training is very much about the details, we are holding off right now with recording any content until the code has been locked down for “v1”. So stay tuned.

The first public appearance of our updated “identity & access control” training for ASP.NET will be at NDC in London in January 2016.

Update: The final release of ASP.NET 5 is currently scheduled for Q1 2016 (https://github.com/aspnet/Home/wiki/Roadmap)


Filed under: .NET Security, ASP.NET, Conferences & Training, IdentityServer, WebAPI


Dominick Baier: Security at NDC Oslo

For a developer conference, NDC Oslo had a really strong security track this year. The audience appreciated that too – of the five highest-ranked talks, three were about security. Troy has the proof.

I even got to see Bruce Schneier for the first time. It is fair to say that his “Secrets & Lies” book changed my life and was one of the major reasons I got interested in security (besides Enno).

Brock and I did a two day workshop on securing modern web applications and APIs followed by a talk on Web API security patterns and how to implement authentication and authorization in JavaScript applications.

Other talks worth watching (I hope I haven’t missed anything):

Well done, NDC!


Filed under: .NET Security, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Give your WCF Security Architecture a Makeover with IdentityServer3

Not everybody has the luxury of being able to start over and build the new & modern version of their software from scratch. Many people I speak to have existing investments in WCF and their “old-school” desktop/intranet architecture.

Moving to an internet/mobile world while preserving the existing services is not easy, because the technologies (and in my case the security technologies) are fundamentally incompatible. Your new mobile/modern clients will not seamlessly be able to request tokens from your existing WS-Trust STS, and SOAP is not really compatible with OAuth2. So what to do?

You could try to teach your WS-Trust STS some basic HTTP-based token service capabilities and continue using SAML tokens. You could provide some sort of SAML/JWT conversion mechanism and create Web APIs for your new clients that proxy / convert to the WCF world. Or you could provide two separate token services and establish trust between them. All approaches have their own advantages and disadvantages.

For a project I am currently working on I chose a different approach – get rid of the old WS-Trust STS altogether, replace it with an OAuth2 authorization server (IdentityServer3), and make your WCF services consume JWT tokens. This way both old and new clients can request tokens via OAuth2 and use them with both the existing WCF services and the new Web APIs (which ultimately will also be used in the desktop version of the product). How does that work?

Requesting the token
The OAuth2 resource owner flow is what comes closest to WS-Trust and it is easy to replace the WCF WSTrustChannel code with that. Going forward the web view based flows actually give more features like external IdPs etc. but need a bit more restructuring of the existing clients. New clients can use them straight away.
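
For illustration, requesting a token via the resource owner flow can be done with a plain HttpClient (a hedged sketch: the token endpoint URL, client id/secret and scope below are placeholders, not values from the actual project):

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> RequestTokenAsync(string userName, string password)
{
    using (var client = new HttpClient())
    {
        // plain OAuth2 resource owner password credentials request
        var response = await client.PostAsync(
            "https://localhost:44333/core/connect/token",   // placeholder endpoint
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "client_id", "wcf.client" },      // placeholder client
                { "client_secret", "secret" },      // placeholder secret
                { "scope", "write" },
                { "username", userName },
                { "password", password }
            }));

        response.EnsureSuccessStatusCode();

        // JSON response containing access_token, token_type, expires_in
        return await response.Content.ReadAsStringAsync();
    }
}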

Sending the token
This is the tricky part. WCF cannot deal with JWTs directly since they are not XML-based. You first need to wrap them in an XML data structure, and the typical approach for that is to use a so-called binary security token. This worked fine at some point, but the latest version of WCF and the JWT token handler don't seem to work together anymore (here's a nice write-up from Mickael describing the problem).

Since WCF is really done – I did not expect anyone to fix that bug anytime soon, so I needed a different solution.

Another XML container data structure that is well tested and does the job equally well is SAML – so I simply created a minimal SAML assertion to hold the JWT token.

static GenericXmlSecurityToken WrapJwt(string jwt)
{
    var subject = new ClaimsIdentity("saml");
    subject.AddClaim(new Claim("jwt", jwt));

    var descriptor = new SecurityTokenDescriptor
    {
        TokenType = TokenTypes.Saml2TokenProfile11,
        TokenIssuerName = "urn:wrappedjwt",
        Subject = subject
    };

    var handler = new Saml2SecurityTokenHandler();
    var token = handler.CreateToken(descriptor);

    var xmlToken = new GenericXmlSecurityToken(
        XElement.Parse(token.ToTokenXmlString()).ToXmlElement(),
        null,
        DateTime.Now,
        DateTime.Now.AddHours(1),
        null,
        null,
        null);

    return xmlToken;
}

Since we are using SAML solely as a container, there is no signature, no audience URI and just a single attribute statement containing the JWT.

After that you can use the wrapped JWT with the CreateChannelWithIssuedToken method over a federation binding:

var binding = new WS2007FederationHttpBinding(
    WSFederationHttpSecurityMode.TransportWithMessageCredential);
binding.Security.Message.EstablishSecurityContext = false;
binding.Security.Message.IssuedKeyType = SecurityKeyType.BearerKey;

var factory = new ChannelFactory<IService>(
    binding,
    new EndpointAddress("https://localhost:44335/token"));

var channel = factory.CreateChannelWithIssuedToken(xmlToken);

Validating the token
On the service side I sub-classed the SAML2 security token handler to get the SAML deserialization. In the ValidateToken method I retrieve the JWT token from the assertion and validate it.

Since I have to do the validation manually anyways, I wanted feature parity with our token validation middleware for Web API which means that the token handler can auto-configure itself using the OpenID Connect discovery document as well as do the scope validation.

identityConfiguration.SecurityTokenHandlers.Add(
new IdentityServerWrappedJwtHandler("https://localhost:44333/core", "write"));
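
For the curious, a minimal sketch of the wrapping handler's shape could look like the following. This is an illustration only, not the actual IdentityServerWrappedJwtHandler from the POC (which additionally auto-configures itself from the discovery document and validates scopes):

using System;
using System.Collections.ObjectModel;
using System.IdentityModel.Tokens;
using System.Linq;
using System.Security.Claims;

// Illustrative sketch only - the real IdentityServerWrappedJwtHandler in the POC
// adds discovery-document auto-configuration and scope validation on top.
class WrappedJwtSamlHandler : Saml2SecurityTokenHandler
{
    public override ReadOnlyCollection<ClaimsIdentity> ValidateToken(SecurityToken token)
    {
        var samlToken = (Saml2SecurityToken)token;

        // the wrapper assertion carries a single attribute statement
        // with one "jwt" attribute (see WrapJwt above)
        var jwt = samlToken.Assertion.Statements
            .OfType<Saml2AttributeStatement>()
            .Single()
            .Attributes.Single(a => a.Name == "jwt")
            .Values.Single();

        // hand the inner JWT to regular JWT validation
        // (issuer, audience and signing key checks elided here)
        ClaimsIdentity identity = ValidateJwt(jwt);
        return new ReadOnlyCollection<ClaimsIdentity>(new[] { identity });
    }

    ClaimsIdentity ValidateJwt(string jwt)
    {
        // placeholder: this is where the JWT handler plus the OpenID Connect
        // metadata based configuration would run in the real implementation
        throw new NotImplementedException();
    }
}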

The end result is that both WCF and Web API can now consume JWT tokens from IdentityServer, and the customer can smoothly migrate and extend their architecture.

The POC can be found here. It is “sample quality” right now – feel free to make it more robust and send me a PR.


Filed under: .NET Security, IdentityServer, OAuth, WCF, WebAPI


Dominick Baier: Three days of Identity & Access Control Workshop at SDD Deep Dive – November 2015, London

As part of the SDD Deep Dive event in London – Brock and I will deliver an updated version of our “Identity & Access Control for modern Web Applications and APIs” workshop.

For the first time, this will be a three day version covering everything you need to know to implement authentication & authorization in your ASP.NET web applications and APIs.

The additional third day will focus on IdentityServer3 internals and customization as well as an outlook on how to migrate your security architecture to ASP.NET 5 and MVC6.

Come by and say hello! (also get some of our rare IdentityServer stickers)

http://sddconf.com/identityandaccess/


Filed under: .NET Security, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Ali Kheyrollahi: PerfIt! decoupled from Web API: measure down to a closure in your .NET application

Level [T2]

Performance monitoring is an essential part of doing any serious-scale software. Unfortunately, the .NET ecosystem – which historically looked to Microsoft for direction and tooling – has long lacked good tooling here; for one reason or another, effective monitoring has not been a priority for Microsoft, although this could be changing now. The healthy growth of the .NET Open Source community in the last few years brought a few innovations in this space (Glimpse being one), but they focused on solving development problems rather than application telemetry.

Two years ago, while trying to build and deploy large-scale APIs, I was unable to find anything suitable to save me from writing a lot of boilerplate code to add performance counters to my applications, so I coded a working prototype of performance counters for ASP.NET Web API and open sourced it on GitHub, calling it PerfIt! for lack of a better name. Over the last few years PerfIt! has been deployed to production in a good number of companies running .NET. I later added client support to measure calls made by HttpClient, which was a handy addition.


This is all not bad, but in reality REST API calls do not cover all your outgoing or incoming server communications (which you naturally would like to measure): you need to communicate with databases (relational or NoSQL), caches (e.g. Redis), blob storage, and many others. On top of that, there could be other parts of your code that you would like to measure, such as CPU-intensive algorithms, reading or writing large local files, running Machine Learning classifiers, etc. Of course, PerfIt! in its current incarnation cannot help with any of those cases.

It turned out that, with a little change – separating performance monitoring from the Web API semantics (which are changing again with vNext) – this could be done. To give credit where it is due, the main ideas came from two of my best colleagues, Andres Del Rio and JaiGanesh Sundaravel, and I am grateful for their contribution.

New PerfIt! features (and limitations)

So, currently at version alpha2, you can get the new PerfIt! from NuGet (when it works):
PM> install-package PerfIt -pre
Here are the extra features that you get from the new PerfIt!.

Measure metrics for a closure


So at the lowest level of an aspect abstraction, you might be interested in measuring metrics for a closure, for example:
Action action = () => Thread.Sleep(1000);
action(); // measure
Or in case of an async operation:
object result = null;
Func<Task> asyncCall = async () => result = await _command.ExecuteScalarAsync();

// and then
await asyncCall();
This closure could of course be wrapped in a method, but then again, having a unified closure interface is essential in building a common tool: each method can have different inputs and outputs, while all can be presented as a closure with the same interface.

Thames Barriers Closure - Flickr. Sorry couldn't find a more related picture, but enjoy all the same
So in order to measure metrics for the action closure, all we need to do is:
var ins = new SimpleInstrumentor(new InstrumentationInfo()
    {
        Counters = CounterTypes.StandardCounters,
        Description = "test",
        InstanceName = "Test instance"
    },
    TestCategory);

ins.Instrument(() => Thread.Sleep(100));

A few things here:
  • SimpleInstrumentor is responsible for providing a hook to instrument your closures.
  • InstrumentationInfo contains the metadata for publishing the performance counters. You provide it with the names of the counters to raise (and if they are not standard counters, you must have defined them already).
  • You will most likely create a single instrumentor instance for each aspect of your code that you would like to instrument.
  • This example assumes the counters and their category are installed. The PerfitRuntime class provides a mechanism to register your counters on the box – which was covered in previous posts.
  • The Instrument method has an option to pass the context as a string parameter. This context can be used to correlate metrics with application context in ETW events (see below, and the sketch after this list).
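
For example, passing a context string might look like this (a sketch: LoadBasket and customerId are made-up names, and the exact overload shape is an assumption based on the bullet above):

// The context string travels with the ETW event, so a slow sample can be
// traced back to, e.g., a customer id (overload shape assumed).
ins.Instrument(() => LoadBasket(customerId), "customerId:" + customerId);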

Doing an async operation is not that different:
ins.InstrumentAsync(async () => await Task.Delay(100));

//or even simpler:
ins.InstrumentAsync(() => Task.Delay(100));

SimpleInstrumentor is the building block for higher-level abstractions of instrumentation. For example, PerfitClientDelegatingHandler now uses SimpleInstrumentor behind the scenes.

Raise ETW events, effortlessly


Event Tracing for Windows (ETW) is a low-overhead framework for logging, instrumentation, tracing and monitoring that has been in Windows since Windows 2000. Version 4.5 of the .NET Framework exposes this feature through the EventSource class. It probably suffices to say that if you are not using ETW, you are doing it wrong.
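
As a refresher, a minimal custom EventSource looks something like this (illustrative only; this is not PerfIt's actual event source):

using System.Diagnostics.Tracing;

// Minimal custom ETW event source (illustrative, not PerfIt's)
[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void RequestCompleted(string instrumentationContext, long timeTakenMilli)
    {
        WriteEvent(1, instrumentationContext, timeTakenMilli);
    }
}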

One problem with performance counters is that they use sampling rather than events. This is all well and good, but it lacks the resolution you sometimes need to find problems. For example, if 1% of calls take > 2 seconds, you need on average 100 samples – and if you are unlucky, a lot more – to see the spike.

Another problem is the lack of context with the measurements. When you see such a high response time, there is really no way to find out the context (e.g. customerId) for which it went wrong. This makes finding performance bottlenecks more difficult.

So SimpleInstrumentor, in addition to doing counters for you, raises InstrumentationEventSource ETW events. Of course, you can turn this off – or just leave it on, as it has almost no impact. Better still, you can use a sink (Table Storage, ElasticSearch, etc.) to persist these events to a store and then analyse them using something like ElasticSearch and Kibana – as we do at ASOS. Here is a console log sink subscribed to these events:
var listener = ConsoleLog.CreateListener();
listener.EnableEvents(InstrumentationEventSource.Instance, EventLevel.LogAlways,
Keywords.All);
And you would see:


Obviously this might not look very impressive, but when you take into account that you have the timeTakenMilli (here 102ms) and the option to pass an instrumentationContext string (here "test..."), you can correlate performance with the context in your application.

PerfIt for Web API is all there, just in a different NuGet package


If you have been using previous versions of PerfIt, do not panic! We are not going to move the cheese: the client and server delegating handlers are all still there, just in a different package, so you only need to install the PerfIt.WebApi package:
PM> install-package PerfIt.WebApi -pre
The rest is just the same.

Only .NET 4.5 or higher


After spending a lot of time writing async code in CacheCow, which was .NET 4.0, I do not think anyone should be subjected to such torture, so my apologies to those using .NET 4.0, but I had to move PerfIt! to .NET 4.5.

PerfIt for MVC, Windsor Castle interceptors and more

Yeah, there is more coming. PerfIt for MVC has long been requested by the community, and Castle interceptors can simply move all cross-cutting-concern code out of your core business code. Stay tuned, and please provide feedback before we go fully to v1!


Dominick Baier: OpenID Connect Certification for IdentityServer3

I am extremely happy to announce that IdentityServer3 is now officially certified by the OpenID Foundation.

OpenID_Certified

http://openid.net/certification/

Version 1.6 and onwards is now fully compatible with the basic, implicit, hybrid and configuration profiles of OpenID Connect.


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Chad England: How to Fix “No ‘Access-Control-Allow-Origin’ header” in ASP.NET WebAPI

I am not 100% sure what Google did to Chromium in update 42. However, after that update landed I started to get reports of our AngularJS application having problems communicating with our ASP.NET Web API. We started getting errors similar to this:

No ‘Access-Control-Allow-Origin’ header is present on the requested resource

I tested the API and CORS with Fiddler and POSTMAN and everything worked great, so the confusion started. The web team created a simple test UI so I could debug the HTTP stack. After looking deeper and some searching online, we found that the cause was Chrome's preflight OPTIONS call.

Out of the box, Web API does not support the OPTIONS verb. Before update 42, Chrome would ignore it if the API did not support OPTIONS and continue on; however, this no longer appears to be the case. With this issue, using [EnableCors("*", "*", "*")] does not appear to work anymore.

There are a few different ways to add the OPTIONS verb to Web API, and the right one appears to vary depending on your usage.

Option 1 – add it to the ConfigureAuth

app.Use(async (context, next) =>
{
    IOwinRequest req = context.Request;
    IOwinResponse res = context.Response;

    if (req.Path.StartsWithSegments(new PathString("/Token")))
    {
        var origin = req.Headers.Get("Origin");

        if (!string.IsNullOrEmpty(origin))
        {
            res.Headers.Set("Access-Control-Allow-Origin", origin);
        }

        if (req.Method == "OPTIONS")
        {
            res.StatusCode = 200;
            res.Headers.AppendCommaSeparatedValues("Access-Control-Allow-Methods", "GET", "POST");
            res.Headers.AppendCommaSeparatedValues("Access-Control-Allow-Headers", "authorization", "content-type");
            return;
        }
    }

    await next();
});

This adds the CORS headers to the response for the token endpoint. Adjust the methods and headers to what you want to support, or use wildcards.

Option 2 – Add headers in OWIN

If you are using OWIN for authentication, update the method GrantResourceOwnerCredentials and add the following as its first line.

context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

Option 3 – add Owin Cors to the startup.cs

To start with add NuGet package

Microsoft.Owin.Cors

Update Startup.cs and add the following to the Configuration method

app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);

After this, remove [EnableCors("*", "*", "*")] from your controllers or you will get an error for duplicate headers.
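
AllowAll is convenient but wide open. If you need to restrict origins, the same package lets you supply a policy (a hedged sketch; the origin URL is a placeholder):

using System.Threading.Tasks;
using System.Web.Cors;
using Microsoft.Owin.Cors;

// Restrict CORS to a known origin instead of AllowAll
var policy = new CorsPolicy
{
    AllowAnyHeader = true,
    AllowAnyMethod = true
};
policy.Origins.Add("https://app.example.com"); // placeholder origin

app.UseCors(new CorsOptions
{
    PolicyProvider = new CorsPolicyProvider
    {
        PolicyResolver = request => Task.FromResult(policy)
    }
});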



Pedro Félix: Some thoughts on the recent JWT library vulnerabilities

Recently, a great post by Tim McLean about some “Critical vulnerabilities in JSON Web Token libraries” made the headlines, bringing the focus to the JWT spec, its usages and apparent security issues.

In this post, I want to share some of my assorted ideas on these subjects.

On the usefulness of the “none” algorithm

One of the problems identified in the aforementioned post is the “none” algorithm.

It may seem strange for a secure packaging format to support “none” as a valid protection; however, this algorithm is useful in situations where the token’s integrity is verified by other means, namely the transport protocol.
One such example happens in the authorization code flow of OpenID Connect, where the ID token is retrieved via a direct TLS-protected communication between the Client and the Authorization Server.

In the words of the specification: “If the ID Token is received via direct communication between the Client and the Token Endpoint (which it is in this flow), the TLS server validation MAY be used to validate the issuer in place of checking the token signature”.

Supporting multiple algorithms and the “alg” field

Another problem identified by Tim’s post was the usage of the “alg” field and the way some libraries handle it, namely using keys in an incorrect way.

In my opinion, supporting algorithm agility (i.e. the ability to support more than one algorithm in a specification) is essential for having evolvable systems.
Also, being explicit about what was used to protect the token is typically a good security decision.

In this case, the problem lies on the library side. Namely, having a verify(string token, string verificationKey) function signature seems really awkward for several reasons:

  • First, representing a key as a string is a typical case of primitive obsession. A key is not a string. A key is a potentially composite object (e.g. two integers in the case of a public key for RSA-based schemes) with associated metadata, namely the algorithms and usages for which it applies. Encoding that as a string opens the door to ambiguity and incorrect usages.
    A key representation should always contain not only the algorithm to which it applies but also the usage conditions (e.g. encryption vs. signature for an RSA key).

  • Second, it makes phased key rotation really difficult. What happens when the token signer wants to change the signing key or the algorithm? Must all the consumers synchronously change the verification key at the same moment in time? Preferably, consumers should be able to simultaneously support two or more keys, identified by the “kid” parameter.
    The same applies to algorithm changes and the use of the “alg” parameter.
    So, I don’t think that removing the “alg” header is a good idea.

A verification function should allow a set of possible keys (bound to explicit algorithms) or receive a callback to fetch the key given both the algorithm and the key id.
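
As an illustration, a library-agnostic C# shape for such a verification function might be (these names are mine, not from any concrete library):

using System.IdentityModel.Tokens;
using System.Security.Claims;

// Hypothetical API shape - not taken from a concrete library.
// The resolver receives the "alg" and "kid" asserted in the token header
// and returns the matching key, or null to reject the token.
public delegate SecurityKey KeyResolver(string algorithm, string keyId);

public interface IJwtVerifier
{
    // throws on failure; note the caller never passes a bare string key
    ClaimsPrincipal Verify(string token, KeyResolver resolveKey);
}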

Don’t assume, always verify

Verifying a JWT before using the claims that it asserts is always more than just checking a signature. Who was the issuer? Is the token valid at the time of usage? Was the token explicitly revoked? Who is the intended audience? Is the protection algorithm compatible with the usage scenario? These are all questions that must be explicitly verified by a JWT consumer application or component.

For instance, OpenID Connect lists the verification steps that must be done by a client application (the relying party) before using the claims in a received ID token.
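
To make that concrete, here is a hedged C# sketch using the System.IdentityModel.Tokens.Jwt package (property and namespace layout varies across package versions, and the issuer/audience values are placeholders):

using System.IdentityModel.Tokens;   // TokenValidationParameters, SecurityKey, SecurityToken
using System.Security.Claims;
// note: JwtSecurityTokenHandler lives in System.IdentityModel.Tokens or
// System.IdentityModel.Tokens.Jwt depending on the package version

static ClaimsPrincipal VerifyJwt(string jwt, SecurityKey signingKey)
{
    // every check is explicit - nothing is assumed from the token alone
    var parameters = new TokenValidationParameters
    {
        ValidIssuer = "https://issuer.example.com",   // who was the issuer? (placeholder)
        ValidAudience = "https://api.example.com",    // who is the intended audience? (placeholder)
        IssuerSigningKey = signingKey,                // key bound to the expected algorithm
        ValidateLifetime = true                       // is the token valid right now?
    };

    SecurityToken validated;
    return new JwtSecurityTokenHandler().ValidateToken(jwt, parameters, out validated);
}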

And so it begins …

If the recent history of SSL/TLS-related problems has taught us anything, it is that security protocol design and implementation is far from easy, and that “obvious” vulnerabilities can remain undetected for long periods of time.
If these problems happen in well-known and commonly used designs and libraries such as SSL and OpenSSL, we must be prepared for similar occurrences in JWT-based protocols and implementations.
In this context, security analyses such as the one described in Tim’s post are of utmost importance, even if I don’t agree with some of the proposed measures.



Dominick Baier: Implicit vs Explicit Authentication in Browser-based Applications

I got the idea for this post from my good friend Pedro Felix – I hope I don’t steal his thunder (I am sure I won’t – since he is much more elaborate than I am) – but when I saw his tweet this morning, I had to write this post.

When teaching web API security, Brock and I often use the term implicit vs explicit authentication. I don’t think these are standard terms – so here’s the explanation.

What’s implicit authentication?
Browser built-in mechanisms like Basic, Windows, Digest authentication, client certificates and cookies. Once established for a certain domain, the browser implicitly sends the credential along automatically.

Advantage: It just works. Once the browser has authenticated the user (or the cookie is set) – no special code is necessary

Disadvantage:

  • No control. The browser will send the credential regardless of which application makes the request to the “authenticated domain”. CSRF is the result of that.
  • Domain bound – only “clients” from the same domain as the “server” will be able to communicate.

What’s explicit authentication?
Whenever the application code (JavaScript in that case) has to send the credential explicitly – typically on the Authorization header (and sometimes also as a query string). Using the OAuth 2.0 implicit flow and access tokens in JS apps is a common example. Strictly speaking, the browser does not know anything about the credential and thus will not send it automatically.
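
The same idea is easy to see outside the browser too. Here is a minimal C# sketch of a client attaching the credential explicitly on every call (URL and token acquisition are placeholders):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task<string> CallApiAsync(string accessToken)
{
    using (var client = new HttpClient())
    {
        // the application code attaches the credential itself -
        // nothing is sent implicitly by the runtime or browser
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var response = await client.GetAsync("https://api.example.com/orders"); // placeholder
        return await response.Content.ReadAsStringAsync();
    }
}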

Advantage:

  • Full control over when the credential is sent.
  • No CSRF.
  • Not bound to a domain.

Disadvantages:

  • Custom code necessary.
  • Access tokens need to be managed by the JS app (and don’t have built-in protection features like httpOnly cookies), which makes them interesting targets for other types of attacks (CSP can help here).

Summary: Implicit authentication works great for server-side web applications that live on a single domain. CSRF is well understood, and frameworks typically have built-in countermeasures. Explicit authentication is recommended for web APIs. Anti-CSRF is harder here, and clients and APIs are often spread across domains, which makes cookies a no-go.


Filed under: WebAPI


Taiseer Joudeh: ASP.NET Web API Claims Authorization with ASP.NET Identity 2.1 – Part 5

This is the fifth part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

ASP.NET Web API Claims Authorization with ASP.NET Identity 2.1

In the previous post we implemented a finer-grained way to control authorization based on the Roles assigned to the authenticated user; this was done by assigning users to predefined Roles in our system and then attributing the protected controllers or actions with the [Authorize(Roles = "Role(s) Name")] attribute.


Using Roles-Based Authorization to control user access is efficient in scenarios where your Roles do not change much and users’ permissions do not change frequently.

In some applications, controlling user access to system resources is more complicated, and having users assigned to certain Roles is not enough to manage user access efficiently; you need a more dynamic way to control access based on certain information related to the authenticated user. This leads us to control user access using Claims – in other words, Claims-Based Authorization.

But before we dig into the implementation of Claims Based Authorization we need to understand what Claims are!

Note: It is not mandatory to use Claims for controlling user access; if you are happy with Roles-Based Authorization and you have a limited number of Roles, then you can stick with it.

What is a Claim?

A Claim is a statement that the user makes about itself; it can be the user name, first name, last name, gender, phone, the roles the user is assigned to, etc… Yes, the Roles we have been looking at are transformed into Claims in the end, and as we saw in the previous post, in ASP.NET Identity those Roles have their own manager (ApplicationRoleManager) and set of APIs to manage them – yet you can consider them a Claim of type Role.

As we saw before, any authenticated user will receive a JSON Web Token (JWT) which contains a set of claims inside it; what we’ll do now is create a helper endpoint which returns the claims encoded in the JWT for an authenticated user.

To do this we will create a new controller named “ClaimsController” containing a single method responsible for unpacking the claims in the JWT and returning them. Add a new controller named “ClaimsController” under the “Controllers” folder and paste the code below:

[RoutePrefix("api/claims")]
    public class ClaimsController : BaseApiController
    {
        [Authorize]
        [Route("")]
        public IHttpActionResult GetClaims()
        {
            var identity = User.Identity as ClaimsIdentity;
            
            var claims = from c in identity.Claims
                         select new
                         {
                             subject = c.Subject.Name,
                             type = c.Type,
                             value = c.Value
                         };

            return Ok(claims);
        }

    }

The code we have implemented above is straightforward: we get the Identity of the authenticated user by calling “User.Identity”, which returns a “ClaimsIdentity” object, then we iterate over its IEnumerable Claims property and project three properties (Subject, Type, and Value).
To execute this endpoint, issue an HTTP GET request to “http://localhost/api/claims” and do not forget to pass a valid JWT in the Authorization header; the response for this endpoint will contain the JSON object below:

[
  {
    "subject": "Hamza",
    "type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
    "value": "cd93945e-fe2c-49c1-b2bb-138a2dd52928"
  },
  {
    "subject": "Hamza",
    "type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
    "value": "Hamza"
  },
  {
    "subject": "Hamza",
    "type": "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider",
    "value": "ASP.NET Identity"
  },
  {
    "subject": "Hamza",
    "type": "AspNet.Identity.SecurityStamp",
    "value": "a77594e2-ffa0-41bd-a048-7398c01c8948"
  },
  {
    "subject": "Hamza",
    "type": "iss",
    "value": "http://localhost:59822"
  },
  {
    "subject": "Hamza",
    "type": "aud",
    "value": "414e1927a3884f68abc79f7283837fd1"
  },
  {
    "subject": "Hamza",
    "type": "exp",
    "value": "1427744352"
  },
  {
    "subject": "Hamza",
    "type": "nbf",
    "value": "1427657952"
  }
]

As you noticed from the response above, each claim contains three properties, and those properties represent the following:

  • Subject: Represents the identity those claims belong to; usually the subject value contains the unique identifier of the user in the system (username or email).
  • Type: Represents the type of the information contained in the claim.
  • Value: Represents the claim value (the information) carried by this claim.

Now, to better understand what those claim types mean, let’s take a look at the table below:

Subject | Type             | Value                                | Notes
Hamza   | nameidentifier   | cd93945e-fe2c-49c1-b2bb-138a2dd52928 | Unique user id generated by the Identity system
Hamza   | name             | Hamza                                | Unique username
Hamza   | identityprovider | ASP.NET Identity                     | How the user has been authenticated (ASP.NET Identity)
Hamza   | SecurityStamp    | a77594e2-ffa0-41bd-a048-7398c01c8948 | Unique id which stays the same until a security-related attribute changes, e.g. the user password
Hamza   | iss              | http://localhost:59822               | Issuer of the access token (the authorization server)
Hamza   | aud              | 414e1927a3884f68abc79f7283837fd1     | The system this token was generated for
Hamza   | exp              | 1427744352                           | Expiry time of this access token (Epoch)
Hamza   | nbf              | 1427657952                           | The token is not valid before this time (Epoch)

After briefly describing what claims are, we want to see how we can use them to manage user access. In this post I will demonstrate three ways of using claims:

  1. Assigning claims to the user on the fly based on user information.
  2. Creating custom Claims Authorization attribute.
  3. Managing user claims by using the “ApplicationUserManager” APIs.

Method 1: Assigning claims to the user on the fly

Let’s assume a fictional use case where our API is used in an eCommerce website, where certain users have the ability to issue refunds for orders if an incident happens and the customer is not happy.

So certain criteria should be met in order to grant our users the privilege to issue refunds: the user should have been working for the company for more than 90 days, and the user should be in the “Admin” Role.

To implement this we need to create a new class which is responsible for reading the authenticated user’s information and, based on the information read, creating a single claim or a set of claims and assigning them to the user’s identity.
If you recall from the first post of this series, we extended the “ApplicationUser” entity and added a property named “JoinDate” which represents the hiring date of the employee. Based on the hiring date, we need to assign a new claim named “FTE” (Full Time Employee) to any user who has worked for more than 90 days. To start implementing this, let’s add a new class named “ExtendedClaimsProvider” under the “Infrastructure” folder and paste the code below:

public static class ExtendedClaimsProvider
    {
        public static IEnumerable<Claim> GetClaims(ApplicationUser user)
        {
          
            List<Claim> claims = new List<Claim>();

            var daysInWork =  (DateTime.Now.Date - user.JoinDate).TotalDays;

            if (daysInWork > 90)
            {
                claims.Add(CreateClaim("FTE", "1"));
               
            }
            else {
                claims.Add(CreateClaim("FTE", "0"));
            }

            return claims;
        }

        public static Claim CreateClaim(string type, string value)
        {
            return new Claim(type, value, ClaimValueTypes.String);
        }

    }

The implementation is simple: the “GetClaims” method takes an ApplicationUser object and returns a list of claims. Based on the “JoinDate” field it adds a new claim named “FTE” with a value of “1” if the user has been working for more than 90 days, and a value of “0” otherwise. Notice how I’m using the method “CreateClaim”, which returns a new instance of a claim.

This class can be used to enforce creating custom claims for the user based on the information related to her, you can add as many claims as you want here, but in our case we will add only a single claim.

Now we need to call the method “GetClaims” so the “FTE” claim is associated with the authenticated user’s identity. To do this, open the class “CustomOAuthProvider” and, in the method “GrantResourceOwnerCredentials”, add the highlighted line (line 7) as in the code snippet below:

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
	//Code removed for brevity

	ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
	
	oAuthIdentity.AddClaims(ExtendedClaimsProvider.GetClaims(user));
	
	var ticket = new AuthenticationTicket(oAuthIdentity, null);
	
	context.Validated(ticket);
   
}

Notice how the established claims identity object “oAuthIdentity” has a method named “AddClaims” which accepts an IEnumerable of claims. The new “FTE” claim is now assigned to the authenticated user, but this is not enough to satisfy the criteria needed to issue the fictitious refund on orders; we need to make sure the user is in the “Admin” Role too.

To implement this we’ll create a new Role on the fly based on the claims assigned to the user – in other words, we’ll create Roles from the Claims the user was assigned. This Role will be named “IncidentResolvers”. And as we stated at the beginning of this post, Roles are ultimately just Claims of type Role.

To do this add new class named “RolesFromClaims” under folder “Infrastructure” and paste the code below:

public class RolesFromClaims
    {
        public static IEnumerable<Claim> CreateRolesBasedOnClaims(ClaimsIdentity identity)
        {
            List<Claim> claims = new List<Claim>();

            if (identity.HasClaim(c => c.Type == "FTE" && c.Value == "1") &&
                identity.HasClaim(ClaimTypes.Role, "Admin"))
            {
                claims.Add(new Claim(ClaimTypes.Role, "IncidentResolvers"));
            }

            return claims;
        }
    }

The implementation is self-explanatory: we created a method named “CreateRolesBasedOnClaims” which accepts the established identity object and returns a list of claims.

Inside this method we check that the established identity for the authenticated user has a claim of type “FTE” with value “1”, as well as a claim of type “Role” with value “Admin”; if those two conditions are met, we create a new claim of type “Role” and give it a value of “IncidentResolvers”.
The last thing we need to do here is assign this new set of claims to the established identity. Open the class “CustomOAuthProvider” again and, in the method “GrantResourceOwnerCredentials”, add the highlighted line (line 9) as in the code snippet below:

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
	//Code removed for brevity

	ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
	
	oAuthIdentity.AddClaims(ExtendedClaimsProvider.GetClaims(user));
	
	oAuthIdentity.AddClaims(RolesFromClaims.CreateRolesBasedOnClaims(oAuthIdentity));
	
	var ticket = new AuthenticationTicket(oAuthIdentity, null);
	
	context.Validated(ticket);
   
}

Now all the new claims created on the fly are assigned to the established identity, and once we call the method “context.Validated(ticket)”, all claims get encoded in the JWT token. To test this out, let’s add a fictitious controller named “OrdersController” under the “Controllers” folder as in the code below:

[RoutePrefix("api/orders")]
public class OrdersController : ApiController
{
	[Authorize(Roles = "IncidentResolvers")]
	[HttpPut]
	[Route("refund/{orderId}")]
	public IHttpActionResult RefundOrder([FromUri]string orderId)
	{
		return Ok();
	}
}

Notice how we attribute the action “RefundOrder” with [Authorize(Roles = "IncidentResolvers")], so only authenticated users with a claim of type “Role” and a value of “IncidentResolvers” can access this endpoint. To test this out you can issue an HTTP PUT request to the URI “http://localhost/api/orders/refund/cxy-4456393” with an empty body.

As you noticed in this first method, we depended on user information to create claims, keeping the authorization dynamic and flexible.
Keep in mind that you can add your own access control business logic and get finer-grained control over authorization by extending the classes “ExtendedClaimsProvider” and “RolesFromClaims”.

Method 2: Creating custom Claims Authorization attribute

Another way to implement Claims-Based Authorization is to create a custom authorization attribute which inherits from “AuthorizationFilterAttribute”; this authorize attribute will directly check the claim type and value for the established identity.

To do this let’s add new class named “ClaimsAuthorizationAttribute” under folder “Infrastructure” and paste the code below:

public class ClaimsAuthorizationAttribute : AuthorizationFilterAttribute
    {
        public string ClaimType { get; set; }
        public string ClaimValue { get; set; }

        public override Task OnAuthorizationAsync(HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
        {

            var principal = actionContext.RequestContext.Principal as ClaimsPrincipal;

            if (!principal.Identity.IsAuthenticated)
            {
                actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
                return Task.FromResult<object>(null);
            }

            if (!(principal.HasClaim(x => x.Type == ClaimType && x.Value == ClaimValue)))
            {
                actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
                return Task.FromResult<object>(null);
            }

            //User is Authorized, complete execution
            return Task.FromResult<object>(null);

        }
    }

What we’ve implemented here is the following:

  • Created a new class named “ClaimsAuthorizationAttribute” which inherits from “AuthorizationFilterAttribute” and overrides the method “OnAuthorizationAsync”.
  • Defined two properties, “ClaimType” & “ClaimValue”, which are used as setters when we apply this custom authorize attribute.
  • Inside the method “OnAuthorizationAsync” we cast the object “actionContext.RequestContext.Principal” to a “ClaimsPrincipal” object and check if the user is authenticated.
  • If the user is authenticated, we look into the claims established for this identity to see whether it has the configured claim type and claim value.
  • If the identity contains the same claim type and value, we consider the request authentic and complete the execution; otherwise we return a 401 Unauthorized status.

To test the new custom authorization attribute, we’ll add new method to the “OrdersController” as the code below:

[ClaimsAuthorization(ClaimType="FTE", ClaimValue="1")]
[Route("")]
public IHttpActionResult Get()
{
	return Ok();
}

Notice how we decorated the “Get()” method with the [ClaimsAuthorization(ClaimType="FTE", ClaimValue="1")] attribute, so any user who has the claim “FTE” with value “1” can access this protected endpoint.

Method 3: Managing user claims by using the “ApplicationUserManager” APIs

The last method we want to explore here is using the “ApplicationUserManager” claims-related APIs to manage user claims and store them in the ASP.NET Identity table “AspNetUserClaims”.

In the previous two methods we’ve created claims for the user on the fly, but in method 3 we will see how we can add/remove claims for a certain user.

The “ApplicationUserManager” class comes with a set of predefined APIs which make managing claims simple; the APIs we’ll use in this post are listed in the table below:

Method Name                  | Usage
AddClaimAsync(id, claim)     | Create a new claim for the specified user id
RemoveClaimAsync(id, claim)  | Remove a claim from the specified user if the claim type and value match
GetClaimsAsync(id)           | Return an IEnumerable of claims for the specified user id

To use those APIs, let’s add two new methods to the “AccountsController”: the first method, “AssignClaimsToUser”, is responsible for adding new claims to a specified user, and the second method, “RemoveClaimsFromUser”, removes claims from a specified user, as in the code below:

[Authorize(Roles = "Admin")]
[Route("user/{id:guid}/assignclaims")]
[HttpPut]
public async Task<IHttpActionResult> AssignClaimsToUser([FromUri] string id, [FromBody] List<ClaimBindingModel> claimsToAssign) {

	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	 var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}

	foreach (ClaimBindingModel claimModel in claimsToAssign)
	{
		if (appUser.Claims.Any(c => c.ClaimType == claimModel.Type)) {
		   
			await this.AppUserManager.RemoveClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
		}

		await this.AppUserManager.AddClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
	}
	
	return Ok();
}

[Authorize(Roles = "Admin")]
[Route("user/{id:guid}/removeclaims")]
[HttpPut]
public async Task<IHttpActionResult> RemoveClaimsFromUser([FromUri] string id, [FromBody] List<ClaimBindingModel> claimsToRemove)
{

	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}

	foreach (ClaimBindingModel claimModel in claimsToRemove)
	{
		if (appUser.Claims.Any(c => c.ClaimType == claimModel.Type))
		{
			await this.AppUserManager.RemoveClaimAsync(id, ExtendedClaimsProvider.CreateClaim(claimModel.Type, claimModel.Value));
		}
	}

	return Ok();
}

The implementation of both methods is nearly identical. As you noticed, we only allow users in the “Admin” role to access these endpoints, and we specify the UserId and a list of claims to be added to or removed from that user.

Then we make sure the specified user exists in our system before attempting any operation on the user.

When adding a new claim to the user, we check whether the user already has a claim of the same type before adding it; if one exists, we remove it first and then add the claim again with the new value.

The same applies when we try to remove a claim from the user. Notice that the methods “AddClaimAsync” and “RemoveClaimAsync” save the claims permanently in our SQL data store, in the table “AspNetUserClaims”.

Do not forget to add the “ClaimBindingModel” under the “Models” folder; it acts as our POCO class for receiving the claims from our front-end application. The class contains the code below:

public class ClaimBindingModel
    {
        [Required]
        [Display(Name = "Claim Type")]
        public string Type { get; set; }

        [Required]
        [Display(Name = "Claim Value")]
        public string Value { get; set; }
    }

There are no extra steps needed to pull those claims from the SQL data store when establishing the user identity, thanks to the method “CreateIdentityAsync” which is responsible for pulling all the claims for the user. We have already implemented this, and it can be checked by visiting the highlighted LOC.

To test those methods, all you need to do is issue HTTP PUT requests to the URIs “http://localhost:59822/api/accounts/user/{UserId}/assignclaims” and “http://localhost:59822/api/accounts/user/{UserId}/removeclaims”, as in the request images below:

Assign Claims

Remove Claims

That’s it for now folks about implementing Authorization using Claims.

In the next post we’ll build a simple AngularJS application which connects all those posts together, this post should be interesting :)

The source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh

References

The post ASP.NET Web API Claims Authorization with ASP.NET Identity 2.1 – Part 5 appeared first on Bit of Technology.


Dominick Baier: IdentityServer3 vNext

Just a quick update about some upcoming changes in IdentityServer3.

In the weeks since the 1.0.0 release in January we have mostly been bug fixing, fine tuning and listening to feedback. Inevitably we found things we want to change and improve – and some of them are breaking changes.

Right now we are in the process of compiling these small and big changes to bundle them up in a 2.0.0 release, so hopefully after that we can go back into fine tuning mode without breaking anybody’s code.

Here’s a brief list of things that have changed or will change in 2.0.0:

  • Consolidation of some validation infrastructure
    • ICustomRequestValidator signature has slightly changed
  • Support for X.509 client certificates for client authentication at the token endpoint. This resulted in a number of changes to make client validation more flexible in general
    • ClientSecret has been renamed to Secret (we will probably use the concept of secrets in more place than just the client in the future)
    • IClientSecretValidator is gone in favour of a more high level IClientValidator
  • The event service is now async (we simply missed that in 1.0)
  • The CorsPolicy has been replaced by a CORS policy service – along with configurable CORS origins per client
  • By default clients have no access to any scopes. You need to configure the allowed scopes (or override by setting the new AllowAccessToAllScopes client flag) – see the sketch below
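
For example, the new per-client scope configuration might look roughly like this (a hedged sketch against the 2.0.0-beta surface; names may still change before the final release):

using System.Collections.Generic;
using IdentityServer3.Core.Models;

// Hedged sketch: clients now get no scopes by default, so grant them explicitly
var client = new Client
{
    ClientId = "mvc.client",
    AllowedScopes = new List<string> { "openid", "profile", "api1" }
    // ...or opt out entirely with: AllowAccessToAllScopes = true
};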

Probably the biggest change is the fact that we renamed the NuGet package to simply IdentityServer3. We decided to remove the thinktecture registered trademark from the OSS project altogether (including the namespaces – so that’s another breaking change).

So in the future all you need to do is:

install-package IdentityServer3 (-pre for the time being)

The dev branch on GitHub is now on 2.0.0 and we published a beta package to NuGet so you can have a look (in addition to our MyGet dev feed):

https://www.nuget.org/packages/IdentityServer3/2.0.0-beta1

Feedback is welcome!


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, OWIN, WebAPI


Taiseer Joudeh: ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API – Part 4

This is the fourth part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API

In the previous post we saw how we can authenticate individual users using the [Authorize] attribute in a very basic form, but there is a limitation with that approach: any authenticated user can perform sensitive actions such as deleting any user in the system, getting a list of all users in the system, etc., where those actions should only be executed by a subset of users with higher privileges (Admins only).


In this post we’ll see how we can enhance the authorization mechanism to give finer-grained control over how users execute actions based on role membership, and how those roles help us differentiate between authenticated users.

The nice thing here is that ASP.NET Identity 2.1 provides support for managing Roles (create, delete, update, assign users to a role, remove users from a role, etc…) through the RoleManager<T> class, so let’s get started by adding support for role management to our Web API.

Step 1: Add the Role Manager Class

The Role Manager class will be responsible for managing instances of the Roles class. The class will derive from “RoleManager<T>”, where T represents our “IdentityRole” class; once it derives from “RoleManager<IdentityRole>”, a set of methods becomes available to facilitate managing roles in our Identity system. Some of the exposed methods we’ll use from the “RoleManager” during this tutorial are:

Method Name               | Usage
FindByIdAsync(id)         | Find a role object based on its unique identifier
Roles                     | Returns an enumeration of the roles in the system
FindByNameAsync(roleName) | Find a role based on its name
CreateAsync(IdentityRole) | Creates a new role
DeleteAsync(IdentityRole) | Deletes a role
RoleExistsAsync(roleName) | Returns true if the role already exists

Now to implement the “RoleManager” class, add new file named “ApplicationRoleManager” under folder “Infrastructure” and paste the code below:

public class ApplicationRoleManager : RoleManager<IdentityRole>
    {
        public ApplicationRoleManager(IRoleStore<IdentityRole, string> roleStore)
            : base(roleStore)
        {
        }

        public static ApplicationRoleManager Create(IdentityFactoryOptions<ApplicationRoleManager> options, IOwinContext context)
        {
            var appRoleManager = new ApplicationRoleManager(new RoleStore<IdentityRole>(context.Get<ApplicationDbContext>()));

            return appRoleManager;
        }
    }

Notice how the “Create” method will be used by the Owin middleware to create an instance per request where Identity data is accessed; this helps us hide the details of how role data is stored throughout the application.

Step 2: Assign the Role Manager Class to Owin Context

Now we want to add a single instance of the Role Manager class to each request using the Owin context, to do so open file “Startup” and paste the code below inside method “ConfigureOAuthTokenGeneration”:

private void ConfigureOAuthTokenGeneration(IAppBuilder app)
{
	// Configure the db context and user manager to use a single instance per request
	//Rest of code is removed for brevity
	
	app.CreatePerOwinContext<ApplicationRoleManager>(ApplicationRoleManager.Create);
	
	//Rest of code is removed for brevity

}

Now a single instance of the class “ApplicationRoleManager” is available per request. We’ll use this instance in different controllers, so it is better to create a helper property in the class “BaseApiController”, which all other controllers inherit from. Open the file “BaseApiController” and add the following code:

public class BaseApiController : ApiController
{
	//Code removed for brevity
	private ApplicationRoleManager _AppRoleManager = null;

	protected ApplicationRoleManager AppRoleManager
	{
		get
		{
			return _AppRoleManager ?? Request.GetOwinContext().GetUserManager<ApplicationRoleManager>();
		}
	}
}

Step 3: Add Roles Controller

Now we’ll add the controller responsible for managing roles in the system (adding new roles, deleting existing ones, getting a single role by id, etc…). This controller should only be accessed by users in the “Admin” role, because it doesn’t make sense to allow any authenticated user to delete or create roles in the system; we will see how to use the [Authorize] attribute along with Roles to control this.

Now add new file named “RolesController” under folder “Controllers” and paste the code below:

[Authorize(Roles="Admin")]
    [RoutePrefix("api/roles")]
    public class RolesController : BaseApiController
    {

        [Route("{id:guid}", Name = "GetRoleById")]
        public async Task<IHttpActionResult> GetRole(string Id)
        {
            var role = await this.AppRoleManager.FindByIdAsync(Id);

            if (role != null)
            {
                return Ok(TheModelFactory.Create(role));
            }

            return NotFound();

        }

        [Route("", Name = "GetAllRoles")]
        public IHttpActionResult GetAllRoles()
        {
            var roles = this.AppRoleManager.Roles;

            return Ok(roles);
        }

        [Route("create")]
        public async Task<IHttpActionResult> Create(CreateRoleBindingModel model)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            var role = new IdentityRole { Name = model.Name };

            var result = await this.AppRoleManager.CreateAsync(role);

            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            Uri locationHeader = new Uri(Url.Link("GetRoleById", new { id = role.Id }));

            return Created(locationHeader, TheModelFactory.Create(role));

        }

        [Route("{id:guid}")]
        public async Task<IHttpActionResult> DeleteRole(string Id)
        {

            var role = await this.AppRoleManager.FindByIdAsync(Id);

            if (role != null)
            {
                IdentityResult result = await this.AppRoleManager.DeleteAsync(role);

                if (!result.Succeeded)
                {
                    return GetErrorResult(result);
                }

                return Ok();
            }

            return NotFound();

        }

        [Route("ManageUsersInRole")]
        public async Task<IHttpActionResult> ManageUsersInRole(UsersInRoleModel model)
        {
            var role = await this.AppRoleManager.FindByIdAsync(model.Id);
            
            if (role == null)
            {
                ModelState.AddModelError("", "Role does not exist");
                return BadRequest(ModelState);
            }

            foreach (string user in model.EnrolledUsers)
            {
                var appUser = await this.AppUserManager.FindByIdAsync(user);

                if (appUser == null)
                {
                    ModelState.AddModelError("", String.Format("User: {0} does not exists", user));
                    continue;
                }

                if (!this.AppUserManager.IsInRole(user, role.Name))
                {
                    IdentityResult result = await this.AppUserManager.AddToRoleAsync(user, role.Name);

                    if (!result.Succeeded)
                    {
                        ModelState.AddModelError("", String.Format("User: {0} could not be added to role", user));
                    }

                }
            }

            foreach (string user in model.RemovedUsers)
            {
                var appUser = await this.AppUserManager.FindByIdAsync(user);

                if (appUser == null)
                {
                    ModelState.AddModelError("", String.Format("User: {0} does not exists", user));
                    continue;
                }

                IdentityResult result = await this.AppUserManager.RemoveFromRoleAsync(user, role.Name);

                if (!result.Succeeded)
                {
                    ModelState.AddModelError("", String.Format("User: {0} could not be removed from role", user));
                }
            }

            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            return Ok();
        }
    }

What we have implemented in this lengthy controller code is the following:

  • We have attributed the controller with [Authorize(Roles=”Admin”)], which allows only authenticated users who belong to the “Admin” role to execute actions in this controller; the “Roles” property accepts comma separated values so you can add multiple roles if needed. In other words, a user who has access to this controller must have a valid JSON Web Token which contains a claim of type “Role” with the value “Admin”.
  • The method “GetRole(Id)” returns a single role based on its identifier; this happens when we call the method “FindByIdAsync”. The action returns an object of type “RoleReturnModel”, which we’ll create in the next step.
  • The method “GetAllRoles()” returns all the roles defined in the system.
  • The method “Create(CreateRoleBindingModel model)” is responsible for creating new roles in the system; it accepts a model of type “CreateRoleBindingModel”, which we’ll create in the next step. This method calls “CreateAsync” and returns a response of type “RoleReturnModel”.
  • The method “DeleteRole(string Id)” deletes an existing role by receiving the unique id of the role, then calling the method “DeleteAsync”.
  • Lastly, the method “ManageUsersInRole” is tailored for the AngularJS app which we’ll build in the coming posts; it accepts a request body containing an object of type “UsersInRoleModel”, from which the application adds or removes users in a specified role.

Step 4: Add Role Binding Models

Now we’ll add the models used in the previous step. The first file to add is named “RoleBindingModels” under folder “Models”; add this file and paste the code below:

public class CreateRoleBindingModel
    {
        [Required]
        [StringLength(256, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 2)]
        [Display(Name = "Role Name")]
        public string Name { get; set; }

    }

    public class UsersInRoleModel {

        public string Id { get; set; }
        public List<string> EnrolledUsers { get; set; }
        public List<string> RemovedUsers { get; set; }
    }
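For illustration, a request body for the “ManageUsersInRole” endpoint based on the “UsersInRoleModel” above could look like the below; the id values are placeholders only:

{
  "Id": "ROLE-GUID-PLACEHOLDER",
  "EnrolledUsers": [ "USER-GUID-TO-ADD" ],
  "RemovedUsers": [ "USER-GUID-TO-REMOVE" ]
}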

Now we’ll adjust the “ModelFactory” class to include a method which returns the response of type “RoleReturnModel”; open file “ModelFactory” and paste the code below:

public class ModelFactory
{
	//Code removed for brevity
	
	public RoleReturnModel Create(IdentityRole appRole) {

		return new RoleReturnModel
	   {
		   Url = _UrlHelper.Link("GetRoleById", new { id = appRole.Id }),
		   Id = appRole.Id,
		   Name = appRole.Name
	   };
	}
}

public class RoleReturnModel
{
	public string Url { get; set; }
	public string Id { get; set; }
	public string Name { get; set; }
}

Step 5: Allow Admin to Manage Single User Roles

Until now the system didn’t have an endpoint which allows users in the “Admin” role to manage the roles for a selected user; this endpoint will be needed in the AngularJS app. In order to add it, open the “AccountsController” class and paste the code below:

[Authorize(Roles="Admin")]
[Route("user/{id:guid}/roles")]
[HttpPut]
public async Task<IHttpActionResult> AssignRolesToUser([FromUri] string id, [FromBody] string[] rolesToAssign)
{

	var appUser = await this.AppUserManager.FindByIdAsync(id);

	if (appUser == null)
	{
		return NotFound();
	}
	
	var currentRoles = await this.AppUserManager.GetRolesAsync(appUser.Id);

	var rolesNotExists = rolesToAssign.Except(this.AppRoleManager.Roles.Select(x => x.Name)).ToArray();

	if (rolesNotExists.Count() > 0) {

		ModelState.AddModelError("", string.Format("Roles '{0}' does not exixts in the system", string.Join(",", rolesNotExists)));
		return BadRequest(ModelState);
	}

	IdentityResult removeResult = await this.AppUserManager.RemoveFromRolesAsync(appUser.Id, currentRoles.ToArray());

	if (!removeResult.Succeeded)
	{
		ModelState.AddModelError("", "Failed to remove user roles");
		return BadRequest(ModelState);
	}

	IdentityResult addResult = await this.AppUserManager.AddToRolesAsync(appUser.Id, rolesToAssign);

	if (!addResult.Succeeded)
	{
		ModelState.AddModelError("", "Failed to add user roles");
		return BadRequest(ModelState);
	}

	return Ok();
}

What we have implemented in this method is the following:

  • This method can be accessed only by authenticated users who belong to the “Admin” role; that’s why we have added the attribute [Authorize(Roles=”Admin”)].
  • The method accepts the user Id in its URI and an array of the role names this user should be enrolled in.
  • The method validates that this array of roles exists in the system; if not, an HTTP 400 Bad Request response is returned indicating which roles do not exist.
  • The method removes all the roles currently assigned to the user, then assigns only the roles sent in the request (a quick client-side sketch of calling this endpoint follows this list).
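
A minimal client-side sketch of invoking this endpoint is shown below; the base address, the GUID, the role names, and the “adminJwt” variable are placeholders for illustration. The snippet assumes it runs inside an async method with the System.Net.Http and System.Net.Http.Headers namespaces, plus the Microsoft.AspNet.WebApi.Client package (for PutAsJsonAsync), available:

using (var client = new HttpClient())
{
	// Present the admin's JWT using the Bearer scheme
	client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", adminJwt);

	// PUT api/accounts/user/{id}/roles with a JSON array of role names
	var response = await client.PutAsJsonAsync(
		"http://localhost:59822/api/accounts/user/29e21f3d-08e0-49b5-b523-3d68cf623fd5/roles",
		new[] { "Admin", "User" });

	// 200 OK means the user's roles were replaced with the ones sent
}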

Step 6: Protect the existing end points with [Authorize(Roles=”Admin”)] Attribute

Now we’ll visit all the end points we created earlier in the previous posts, mainly in the “AccountsController” class. We’ll add “Roles=Admin” to the Authorize attribute for all the end points that should be accessed only by users in the “Admin” role.

The end points are:

 – GetUsers, GetUser, GetUserByName, and DeleteUser should be accessed only by users enrolled in the “Admin” role. The code change will be as simple as the below:

[Authorize(Roles="Admin")]
[Route("users")]
public IHttpActionResult GetUsers()
{}

[Authorize(Roles="Admin")]
[Route("user/{id:guid}", Name = "GetUserById")]
public async Task<IHttpActionResult> GetUser(string Id)
{}


[Authorize(Roles="Admin")]
[Route("user/{username}")]
public async Task<IHttpActionResult> GetUserByName(string username)
{}

[Authorize(Roles="Admin")]
[Route("user/{id:guid}")]
public async Task<IHttpActionResult> DeleteUser(string id)
{}

Step 7: Update the DB Migration File

The last change we need to make before testing is to create a default user and assign it to the “Admin” role when the application runs for the first time. To implement this we need to change the file “Configuration” under folder “Infrastructure”; open the file and paste the code below:

internal sealed class Configuration : DbMigrationsConfiguration<AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext context)
        {
            //  This method will be called after migrating to the latest version.

            var manager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()));
            
            var roleManager = new RoleManager<IdentityRole>(new RoleStore<IdentityRole>(new ApplicationDbContext()));

            var user = new ApplicationUser()
            {
                UserName = "SuperPowerUser",
                Email = "taiseer.joudeh@gmail.com",
                EmailConfirmed = true,
                FirstName = "Taiseer",
                LastName = "Joudeh",
                Level = 1,
                JoinDate = DateTime.Now.AddYears(-3)
            };

            manager.Create(user, "MySuperP@ss!");

            if (roleManager.Roles.Count() == 0)
            {
                roleManager.Create(new IdentityRole { Name = "SuperAdmin" });
                roleManager.Create(new IdentityRole { Name = "Admin"});
                roleManager.Create(new IdentityRole { Name = "User"});
            }

            var adminUser = manager.FindByName("SuperPowerUser");

            manager.AddToRoles(adminUser.Id, new string[] { "SuperAdmin", "Admin" });
        }
    }

What we have implemented here is simple: we created a default user named “SuperPowerUser”, then created three roles in the system (SuperAdmin, Admin, and User), and finally assigned this user to two of those roles (SuperAdmin and Admin).

In order to fire the “Seed()” method, we have to drop the existing database; then from the Package Manager Console type:

update-database

This will create the database on our SQL server based on the connection string we specified earlier, run the code inside the Seed method, and create the user and roles in the system.

Step 8: Test the Role Authorization

Now the code is ready to be tested. The first thing to do is obtain a JWT token for the user “SuperPowerUser”; after you obtain this JWT, if you decode it using JWT.io you will notice that the token contains a claim of type “Role” as below:

{
  "nameid": "29e21f3d-08e0-49b5-b523-3d68cf623fd5",
  "unique_name": "SuperPowerUser",
  "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider": "ASP.NET Identity",
  "AspNet.Identity.SecurityStamp": "832d5f6b-e71c-4c31-9fde-07fe92f5ddfd",
  "role": [
    "Admin",
    "SuperAdmin"
  ],
  "Phone": "123456782",
  "Gender": "Male",
  "iss": "http://localhost:59822",
  "aud": "414e1927a3884f68abc79f7283837fd1",
  "exp": 1426115380,
  "nbf": 1426028980
}

Those claims will allow this user to access any endpoint attributed with [Authorize] and locked down to the “Admin” or “SuperAdmin” roles.

To test this out we’ll create a new role named “Supervisor” by issuing an HTTP POST to the endpoint (/api/roles/create). As we stated before, this endpoint can only be accessed by users in the “Admin” role, so we will pass the JWT token in the Authorization header using the Bearer scheme as usual. The request will be as the image below:

[Image: Create Role]
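
Since the screenshot is not reproduced here, below is a rough sketch of issuing that request from code; the “jwtToken” variable and base address are placeholders, and the snippet assumes an async method with System.Net.Http, System.Net.Http.Headers, and the Microsoft.AspNet.WebApi.Client package (for PostAsJsonAsync) available:

using (var client = new HttpClient())
{
	// Present the admin's JWT using the Bearer scheme
	client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", jwtToken);

	// POST api/roles/create with the CreateRoleBindingModel payload
	var response = await client.PostAsJsonAsync(
		"http://localhost:59822/api/roles/create",
		new { Name = "Supervisor" });

	// Expecting 201 Created with a Location header pointing at "GetRoleById"
}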

If all is valid we’ll receive HTTP status 201 Created.

In the next post we’ll see how we’ll implement Authorization access using Claims.

The source code for this tutorial is available on GitHub.

Follow me on Twitter @tjoudeh

The post ASP.NET Identity 2.1 Roles Based Authorization with ASP.NET Web API – Part 4 appeared first on Bit of Technology.


Christian Weyer: Session Materials from BASTA! Spring 2015


Dominick Baier: .NET Foundation Advisory Council

I have been invited to join the .NET Foundation advisory council – looking forward to it!

http://www.dotnetfoundation.org/blog/welcoming-the-newly-minted-advisory-net-foundation-advisory-council-members


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, WebAPI


Taiseer Joudeh: Implement OAuth JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1 – Part 3

This is the third part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

Implement JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1


Currently our API doesn’t support authentication and authorization; all the requests we receive to any end point are done anonymously. In this post we’ll configure our API, which acts as both our Authorization Server and Resource Server, to issue JSON Web Tokens for authenticated users, and those users will present this JWT to the protected end points in order to access them.

I will use a step by step approach as usual to implement this, but I highly recommend you read the post JSON Web Token in ASP.NET Web API 2 before completing this one, where I cover in depth what JSON Web Tokens are, the benefits of using JWT over default access tokens, and how they can be used to decouple the Authorization server from the Resource server. In this tutorial, and for the sake of keeping it simple, both OAuth 2.0 roles (Authorization Server and Resource Server) will live in the same API.

Step 1: Implement OAuth 2.0 Resource Owner Password Credential Flow

We are going to build an API which will be consumed by a trusted client (AngularJS front-end), so we are only interested in implementing a single OAuth 2.0 flow, where the registered user presents a username and password to a specific end point and the API validates those credentials; if all is valid, it returns a JWT for the user, which the client application should store securely and locally in order to present with each request to any protected end point.

The nice thing about this JWT is that it is a self-contained token which carries all the user claims and roles inside it, so there is no need for any extra DB queries to fetch those values for the authenticated user. The JWT will be configured to expire 1 day after its issue date, so the user is asked to provide credentials again in order to obtain a new JWT.

If you are interested in how to implement sliding expiration tokens and how to keep the user logged in, I recommend reading my other post Enable OAuth Refresh Tokens in AngularJS App, which covers this in depth but adds more complexity to the solution. To keep this tutorial simple we’ll not add refresh tokens here, but you can refer to that post and implement it.

To implement the Resource Owner Password Credentials flow we need to add a new folder named “Providers”, then add a new class named “CustomOAuthProvider”; after you add it, paste the code below:

public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            var allowedOrigin = "*";

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            var userManager = context.OwinContext.GetUserManager<ApplicationUserManager>();

            ApplicationUser user = await userManager.FindAsync(context.UserName, context.Password);

            if (user == null)
            {
                context.SetError("invalid_grant", "The user name or password is incorrect.");
                return;
            }

            if (!user.EmailConfirmed)
            {
                context.SetError("invalid_grant", "User did not confirm email.");
                return;
            }

            ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager, "JWT");
        
            var ticket = new AuthenticationTicket(oAuthIdentity, null);
            
            context.Validated(ticket);
           
        }
    }

This class inherits from class “OAuthAuthorizationServerProvider” and overrides the below two methods:

  • As you notice, “ValidateClientAuthentication” is empty; we always consider the request valid, because in our implementation our client (AngularJS front-end) is a trusted client and we do not need to validate it.
  • The method “GrantResourceOwnerCredentials” is responsible for receiving the username and password from the request and validating them against our ASP.NET Identity 2.1 system. If the credentials are valid and the email is confirmed, we build an identity for the logged-in user; this identity will contain all the roles and claims for the authenticated user. We haven’t covered the roles and claims part of the tutorial yet, so for the time being you can consider all users registered in our system to have no roles or claims mapped to them.
  • The method “GenerateUserIdentityAsync” is not implemented yet; we’ll add this helper method in the next step. It is responsible for fetching the authenticated user’s identity from the database and returning an object of type “ClaimsIdentity”.
  • Lastly we create an authentication ticket which contains the identity of the authenticated user; when we call “context.Validated(ticket)”, this identity is transformed into an OAuth 2.0 bearer access token.

Step 2: Add method “GenerateUserIdentityAsync” to “ApplicationUser” class

Now we’ll add the helper method responsible for getting the authenticated user’s identity (all the roles and claims mapped to the user). The “UserManager” class contains a method named “CreateIdentityAsync” for this task; it basically queries the DB and gets all the roles and claims for this user. To implement this, open class “ApplicationUser” and paste the code below:

//Rest of code is removed for brevity
public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager, string authenticationType)
{
	var userIdentity = await manager.CreateIdentityAsync(this, authenticationType);
	// Add custom user claims here
	return userIdentity;
}

Step 3: Issue JSON Web Tokens instead of Default Access Tokens

Now we want to configure our API to issue JWT tokens instead of default access tokens; to understand what a JWT is and why it is better to use one, you can refer back to this post.

The first thing we need to do is install 2 NuGet packages as below:

Install-package System.IdentityModel.Tokens.Jwt -Version 4.0.1
Install-package Thinktecture.IdentityModel.Core -Version 1.3.0

There is no direct support for issuing JWTs in ASP.NET Web API, so in order to start issuing JWTs we need to do this manually, by implementing the interface “ISecureDataFormat” and its method “Protect”.

To implement this add new file named “CustomJwtFormat” under folder “Providers” and paste the code below:

public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
    
        private readonly string _issuer = string.Empty;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException("data");
            }

            string audienceId = ConfigurationManager.AppSettings["as:AudienceId"];

            string symmetricKeyAsBase64 = ConfigurationManager.AppSettings["as:AudienceSecret"];

            var keyByteArray = TextEncodings.Base64Url.Decode(symmetricKeyAsBase64);

            var signingKey = new HmacSigningCredentials(keyByteArray);

            var issued = data.Properties.IssuedUtc;
            
            var expires = data.Properties.ExpiresUtc;

            var token = new JwtSecurityToken(_issuer, audienceId, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey);

            var handler = new JwtSecurityTokenHandler();

            var jwt = handler.WriteToken(token);

            return jwt;
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }

What we’ve implemented in this class is the following:

  • The class “CustomJwtFormat” implements the interface “ISecureDataFormat<AuthenticationTicket>”; the JWT generation takes place inside the method “Protect”.
  • The constructor of this class accepts the “Issuer” of the JWT, which will be our API. This API acts as Authorization and Resource Server at the same time; the issuer can be a string or a URI, and in our case we’ll fix it to a URI.
  • Inside the “Protect” method we are doing the following:
    • As we stated before, this API serves as Resource and Authorization Server at the same time, so we are fixing the Audience Id and Audience Secret (Resource Server) in the web.config file; this Audience Secret will be used to sign (HMAC-SHA256) the JWT payload. I’ve used this implementation to generate the Audience Id and Secret.
    • Do not forget to add 2 new keys “as:AudienceId” and “as:AudienceSecret” to the web.config AppSettings section (a sample fragment follows this list).
    • Then we prepare the raw data for the JSON Web Token which will be issued to the requester by providing the issuer, audience, user claims, issue date, expiry date, and the signing key which will sign (hash) the JWT payload.
    • Lastly we serialize the JSON Web Token to a string and return it to the requester.
  • By doing this, the requester of an OAuth 2.0 access token from our API will receive a signed token which contains the claims of an authenticated Resource Owner (User), and this access token is intended for a certain audience as well.
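
For reference, a hedged sample of that AppSettings fragment is shown below; both values are placeholders, to be replaced with your own generated audience id and base64url-encoded secret:

<appSettings>
  <!-- Placeholder values: generate your own audience id and base64url-encoded secret -->
  <add key="as:AudienceId" value="REPLACE-WITH-YOUR-AUDIENCE-ID" />
  <add key="as:AudienceSecret" value="REPLACE-WITH-YOUR-BASE64URL-SECRET" />
</appSettings>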

Step 4: Add Support for OAuth 2.0 JWT Generation

Up to this moment we haven’t configured our API to use the OAuth 2.0 authentication workflow; to do so, open class “Startup” and add a new method named “ConfigureOAuthTokenGeneration” as below:

private void ConfigureOAuthTokenGeneration(IAppBuilder app)
        {
            // Configure the db context and user manager to use a single instance per request
            app.CreatePerOwinContext(ApplicationDbContext.Create);
            app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);

            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                //For Dev enviroment only (on production should be AllowInsecureHttp = false)
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/oauth/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
                Provider = new CustomOAuthProvider(),
                AccessTokenFormat = new CustomJwtFormat("http://localhost:59822")
            };

            // OAuth 2.0 Bearer Access Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);
        }

What we’ve implemented here is the following:

  • The path for generating JWTs will be “http://localhost:59822/oauth/token”.
  • We’ve set the token expiry to 1 day.
  • We’ve specified the implementation of how to validate the Resource Owner’s credentials in the custom class named “CustomOAuthProvider”.
  • We’ve specified the implementation of how to generate the access token using the JWT format; the custom class “CustomJwtFormat” will be responsible for generating JWTs instead of the default access tokens produced using DPAPI. Note that both formats use the Bearer scheme.

Do not forget to call the new method “ConfigureOAuthTokenGeneration” in the Startup “Configuration” method as below:

public void Configuration(IAppBuilder app)
{
	HttpConfiguration httpConfig = new HttpConfiguration();

	ConfigureOAuthTokenGeneration(app);

	//Rest of code is removed for brevity

}

Our API is now ready to start issuing JWT access tokens, so to test this out we can issue an HTTP POST request as in the image below, and we should receive a valid JWT token, good for the next 24 hours and accepted only by our API.

[Image: JSON Web Token request]
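
As a rough code sketch of that request (the endpoint and the seeded credentials come from this tutorial series; the snippet assumes an async method and the System.Net.Http and System.Collections.Generic namespaces):

using (var client = new HttpClient())
{
	// Resource Owner Password Credentials grant
	var form = new FormUrlEncodedContent(new Dictionary<string, string>
	{
		{ "grant_type", "password" },
		{ "username", "SuperPowerUser" },
		{ "password", "MySuperP@ss!" }
	});

	var response = await client.PostAsync("http://localhost:59822/oauth/token", form);

	// The JSON response contains access_token (the JWT), token_type ("bearer") and expires_in
	string payload = await response.Content.ReadAsStringAsync();
}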

Step 5: Protect the existing end points with [Authorize] Attribute

Now we’ll visit all the end points we created in previous posts in the “AccountsController” class, and attribute the end points which need to be protected (so that only an authenticated user with a valid JWT access token can access them) with the [Authorize] attribute as below:

 – GetUsers, GetUser, GetUserByName, and DeleteUser end points should eventually be accessed only by users enrolled in the “Admin” role. Roles authorization is not implemented yet, so for now we will allow any authenticated user to access them; the code change will be as simple as the below:

[Authorize]
[Route("users")]
public IHttpActionResult GetUsers()
{}

[Authorize]
[Route("user/{id:guid}", Name = "GetUserById")]
public async Task<IHttpActionResult> GetUser(string Id)
{}


[Authorize]
[Route("user/{username}")]
public async Task<IHttpActionResult> GetUserByName(string username)
{
}

[Authorize]
[Route("user/{id:guid}")]
public async Task<IHttpActionResult> DeleteUser(string id)
{
}

– The CreateUser and ConfirmEmail endpoints should always be accessible anonymously, so we need to attribute them with [AllowAnonymous] as the below:

[AllowAnonymous]
[Route("create")]
public async Task<IHttpActionResult> CreateUser(CreateUserBindingModel createUserModel)
{
}

[AllowAnonymous]
[HttpGet]
[Route("ConfirmEmail", Name = "ConfirmEmailRoute")]
public async Task<IHttpActionResult> ConfirmEmail(string userId = "", string code = "")
{
}

– The ChangePassword endpoint should be accessed by the authenticated user only, so we’ll attribute it with the [Authorize] attribute as the below:

[Authorize]
[Route("ChangePassword")]
public async Task<IHttpActionResult> ChangePassword(ChangePasswordBindingModel model)
{
}

Step 6: Consume JSON Web Tokens

Now if we obtain an access token by sending a request to the end point “oauth/token” and then try to access one of the protected end points, we’ll receive a 401 Unauthorized status. The reason is that our API doesn’t understand the JWT tokens it issues yet; to fix this we need to do the following:

Install the below NuGet package:

Install-Package Microsoft.Owin.Security.Jwt -Version 3.0.0

The package “Microsoft.Owin.Security.Jwt” is responsible for protecting the Resource Server resources using JWT; it only validates and de-serializes JWT tokens.

Now back in our “Startup” class, we need to add the method “ConfigureOAuthTokenConsumption” as below:

private void ConfigureOAuthTokenConsumption(IAppBuilder app) {

            var issuer = "http://localhost:59822";
            string audienceId = ConfigurationManager.AppSettings["as:AudienceId"];
            byte[] audienceSecret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["as:AudienceSecret"]);

            // Api controllers with an [Authorize] attribute will be validated with JWT
            app.UseJwtBearerAuthentication(
                new JwtBearerAuthenticationOptions
                {
                    AuthenticationMode = AuthenticationMode.Active,
                    AllowedAudiences = new[] { audienceId },
                    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
                    {
                        new SymmetricKeyIssuerSecurityTokenProvider(issuer, audienceSecret)
                    }
                });
        }

This step configures our API to trust tokens issued by our Authorization Server only; in our case the Authorization and Resource Server is the same server (http://localhost:59822). Notice how we are providing the values for the audience and the audience secret we used to generate and issue the JSON Web Token in step 3.

By providing those values to the “JwtBearerAuthentication” middleware, our API will be able to consume only JWT tokens issued by our trusted Authorization Server; JWT tokens from any other Authorization Server will be rejected.

Lastly we need to call the method “ConfigureOAuthTokenConsumption” in the “Configuration” method as below:

public void Configuration(IAppBuilder app)
	{
		HttpConfiguration httpConfig = new HttpConfiguration();

		ConfigureOAuthTokenGeneration(app);

		ConfigureOAuthTokenConsumption(app);
		
		//Rest of code is here

	}

Step 7: Final Testing

All the pieces should be in place now. To test this, we will obtain a JWT access token for the user “SuperPowerUser” by issuing a POST request to the end point “oauth/token”:

[Image: Request JWT Token]

Then we will use the received JWT to access a protected end point such as “ChangePassword”. If you remember, when we added this end point we were not able to test it directly, because it was anonymous and inside its implementation we call the method “User.Identity.GetUserId()”, which returns nothing for an anonymous user. But after we’ve added the [Authorize] attribute, any user who wants to access this end point must be authenticated and have a valid JWT.

To test this out we will issue a POST request to the end point “/accounts/ChangePassword” as in the image below; notice how we are setting the Authorization header using the Bearer scheme, with its value set to the JWT we received for the user “SuperPowerUser”. If all is valid we will receive a 200 OK status and the user password will be updated.

[Image: Change Password Web API]
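
A rough code sketch of that request is below; the “jwt” variable, the route prefix, and the password values are placeholders, and the snippet assumes an async method with System.Net.Http, System.Net.Http.Headers, and the Microsoft.AspNet.WebApi.Client package (for PostAsJsonAsync):

using (var client = new HttpClient())
{
	client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", jwt);

	var response = await client.PostAsJsonAsync(
		"http://localhost:59822/api/accounts/ChangePassword",
		new
		{
			OldPassword = "MySuperP@ss!",
			NewPassword = "MyNewP@ss!",
			ConfirmPassword = "MyNewP@ss!"
		});

	// 200 OK indicates the password was updated
}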

The source code for this tutorial is available on GitHub.

In the next post we’ll see how we’ll implement Roles Based Authorization in our Identity service.

Follow me on Twitter @tjoudeh


The post Implement OAuth JSON Web Tokens Authentication in ASP.NET Web API and Identity 2.1 – Part 3 appeared first on Bit of Technology.


Taiseer Joudeh: ASP.NET Identity 2.1 Accounts Confirmation, and Password Policy Configuration – Part 2

This is the second part of Building Simple Membership system using ASP.NET Identity 2.1, ASP.NET Web API 2.2 and AngularJS. The topics we’ll cover are:

The source code for this tutorial is available on GitHub.

ASP.NET Identity 2.1 Accounts Confirmation, and Password/User Policy Configuration

In this post we’ll complete on top of what we’ve already built, and we’ll cover the below topics:

  • Send Confirmation Emails after Account Creation.
  • Configure User (Username, Email) and Password policy.
  • Enable Changing Password and Deleting Account.

1 . Send Confirmation Emails after Account Creation


The ASP.NET Identity 2.1 users table (AspNetUsers) comes by default with a Boolean column named “EmailConfirmed”. This column is used to flag whether the email provided by the registered user is valid and belongs to that user; in other words, that the user can access the email provided and is not impersonating another identity. So our membership system should not allow users without a valid email address to log into the system.

The scenario we want to implement is that a user registers in the system, then a confirmation email is sent to the address provided upon registration; this email includes an activation link and a token (code) which is tied to this user only and valid for a certain period.

Once the user opens this email and clicks the activation link, and if the token (code) is valid, the field “EmailConfirmed” is set to “true”, which proves that the email belongs to the registered user.

To do so we need to add a service responsible for sending emails to users. In my case I’ll use SendGrid, which is a service provider for sending emails, but you can use any other service provider or your Exchange server to do this. If you want to follow along with this tutorial you can create a free account with SendGrid which provides you with 400 emails per day, pretty good!

1.1 Install Send Grid

Now open the Package Manager Console and type the below to install the SendGrid package; this is not a required step if you want to use another email service provider. This package contains the SendGrid APIs which make sending emails very easy:

install-package Sendgrid

1.2 Add Email Service

Now add new folder named “Services” then add new class named “EmailService” and paste the code below:

public class EmailService : IIdentityMessageService
    {
        public async Task SendAsync(IdentityMessage message)
        {
            await configSendGridasync(message);
        }

        // Use NuGet to install SendGrid (Basic C# client lib) 
        private async Task configSendGridasync(IdentityMessage message)
        {
            var myMessage = new SendGridMessage();

            myMessage.AddTo(message.Destination);
            myMessage.From = new System.Net.Mail.MailAddress("taiseer@bitoftech.net", "Taiseer Joudeh");
            myMessage.Subject = message.Subject;
            myMessage.Text = message.Body;
            myMessage.Html = message.Body;

            var credentials = new NetworkCredential(ConfigurationManager.AppSettings["emailService:Account"], 
                                                    ConfigurationManager.AppSettings["emailService:Password"]);

            // Create a Web transport for sending email.
            var transportWeb = new Web(credentials);

            // Send the email.
            if (transportWeb != null)
            {
                await transportWeb.DeliverAsync(myMessage);
            }
            else
            {
                //Trace.TraceError("Failed to create Web transport.");
                await Task.FromResult(0);
            }
        }
    }

What is worth noting here is that the class “EmailService” implements the interface “IIdentityMessageService”; this interface can be used to configure your service to send emails or SMS messages. All you need to do is implement your email or SMS service in the method “SendAsync” and you are good to go.

In our case we want to send emails, so I’ve implemented the sending process using SendGrid in the method “configSendGridasync”. All you need to do is replace the sender name and address with yours; as well, do not forget to add 2 new keys named “emailService:Account” and “emailService:Password” as AppSettings to store the SendGrid credentials (a sample fragment is shown below).
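
For reference, the AppSettings fragment might look like the below; the values are placeholders, to be replaced with your own SendGrid credentials:

<appSettings>
  <!-- Placeholder values: use your own SendGrid account credentials -->
  <add key="emailService:Account" value="YourSendGridAccountName" />
  <add key="emailService:Password" value="YourSendGridPassword" />
</appSettings>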

After we have configured the “EmailService”, we need to hook it into our Identity system, which is a very simple step: open file “ApplicationUserManager” and inside method “Create” paste the code below:

public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
{
	//Rest of code is removed for clarity
	appUserManager.EmailService = new AspNetIdentity.WebApi.Services.EmailService();

	var dataProtectionProvider = options.DataProtectionProvider;
	if (dataProtectionProvider != null)
	{
		appUserManager.UserTokenProvider = new DataProtectorTokenProvider<ApplicationUser>(dataProtectionProvider.Create("ASP.NET Identity"))
		{
			//Code for email confirmation and reset password life time
			TokenLifespan = TimeSpan.FromHours(6)
		};
	}
   
	return appUserManager;
}

As you can see from the code above, the “appUserManager” instance contains a property named “EmailService”, which we set to the “EmailService” class we’ve just created.

Note: There is another property named “SmsService” if you would like to use it for sending SMS messages instead of emails.

Notice how we are setting the expiration time for the code (token) sent in the email to 6 hours, so if the user tries to open the confirmation email more than 6 hours after receiving it, the code will be invalid.

1.3 Send the Email after Account Creation

Now the email service is ready and we can start sending emails after successful account creation; to do so we need to modify the existing code in the method “CreateUser” in controller “AccountsController”. Open file “AccountsController” and paste the code below at the end of the method:

//Rest of code is removed for brevity

string code = await this.AppUserManager.GenerateEmailConfirmationTokenAsync(user.Id);

var callbackUrl = new Uri(Url.Link("ConfirmEmailRoute", new { userId = user.Id, code = code }));

await this.AppUserManager.SendEmailAsync(user.Id,"Confirm your account", "Please confirm your account by clicking <a href=\"" + callbackUrl + "\">here</a>");

Uri locationHeader = new Uri(Url.Link("GetUserById", new { id = user.Id }));

return Created(locationHeader, TheModelFactory.Create(user));

The implementation is straightforward. What we’ve done here is create a unique code (token) which is valid for the next 6 hours and tied to this user Id only; this happens when calling the “GenerateEmailConfirmationTokenAsync” method. Then we build an activation link to send in the email body; this link will contain the user Id and the code created.

Eventually this link will be sent to the registered user at the email address used during registration, and the user needs to click it to activate the account. The route “ConfirmEmailRoute”, which maps to this activation link, is not implemented yet; we’ll implement it in the next step.

Lastly we send the email, including the link we’ve built, by calling the method “SendEmailAsync”, which accepts the user Id, email subject, and email body.

1.4 Add the Confirm Email URL

The activation link which the user will receive will look like the below:

http://localhost/api/account/ConfirmEmail?userid=xxxx&code=xxxx

So we need to build a route in our API which receives this request when the user clicks the activation link, issuing an HTTP GET request. To do so, add the method below to class “AccountsController”:

[HttpGet]
        [Route("ConfirmEmail", Name = "ConfirmEmailRoute")]
        public async Task<IHttpActionResult> ConfirmEmail(string userId = "", string code = "")
        {
            if (string.IsNullOrWhiteSpace(userId) || string.IsNullOrWhiteSpace(code))
            {
                ModelState.AddModelError("", "User Id and Code are required");
                return BadRequest(ModelState);
            }

            IdentityResult result = await this.AppUserManager.ConfirmEmailAsync(userId, code);

            if (result.Succeeded)
            {
                return Ok();
            }
            else
            {
                return GetErrorResult(result);
            }
        }

The implementation is simple: we only validate that the user Id and code are not empty, then we depend on the method “ConfirmEmailAsync” to do the validation of the user Id and the code. If the user Id is not tied to this code it will fail, and if the code is expired it will fail too. If all is good, this method will update the database field “EmailConfirmed” in table “AspNetUsers” and set it to “True”, and you are done: you have implemented email account activation!

Important Note: It is recommended to validate the password before confirming the email account. In some cases the user might mistype the email during registration, and you do not want to end up sending the confirmation email to someone else, who then receives it and activates the account on the user’s behalf. A better way is to ask for the account password before activating; if you want to do this, you need to change the “ConfirmEmail” method to POST and send the password along with the user Id and code in the request body. You get the idea, so you can implement it yourself :)
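
If you do want to implement it, a hedged sketch could look like the below; the binding model, the POST route, and the use of “CheckPasswordAsync” are illustrative assumptions, not part of this tutorial’s source code:

public class ConfirmEmailBindingModel
{
	public string UserId { get; set; }
	public string Code { get; set; }
	public string Password { get; set; }
}

[AllowAnonymous]
[HttpPost]
[Route("ConfirmEmail")]
public async Task<IHttpActionResult> ConfirmEmail(ConfirmEmailBindingModel model)
{
	var user = await this.AppUserManager.FindByIdAsync(model.UserId);

	// Verify the password first so a mistyped email can not be confirmed by a stranger
	if (user == null || !await this.AppUserManager.CheckPasswordAsync(user, model.Password))
	{
		ModelState.AddModelError("", "Invalid user Id or password");
		return BadRequest(ModelState);
	}

	IdentityResult result = await this.AppUserManager.ConfirmEmailAsync(model.UserId, model.Code);

	return result.Succeeded ? (IHttpActionResult)Ok() : GetErrorResult(result);
}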

2. Configure User (Username, Email) and Password policy

2.1 Change User Policy

In some cases you want to enforce certain rules on the username and password when users register in your system, and the ASP.NET Identity 2.1 system offers this feature. For example, if we want to enforce that usernames allow only alphanumeric characters and that the email associated with the user is unique, all we need to do is set those properties in class “ApplicationUserManager”. To do so, open file “ApplicationUserManager” and paste the code below inside method “Create”:

//Rest of code is removed for brevity
//Configure validation logic for usernames
appUserManager.UserValidator = new UserValidator<ApplicationUser>(appUserManager)
{
	AllowOnlyAlphanumericUserNames = true,
	RequireUniqueEmail = true
};

2.2 Change Password Policy

The same applies for the password policy. For example, you can enforce that passwords must match a policy of minimum 6 characters, at least one special character, at least one lower case and at least one upper case character. To implement this policy, all we need to do is set those properties in the same class “ApplicationUserManager” inside method “Create”, as the code below:

//Rest of code is removed for brevity
//Configure validation logic for passwords
appUserManager.PasswordValidator = new PasswordValidator
{
	RequiredLength = 6,
	RequireNonLetterOrDigit = true,
	RequireDigit = false,
	RequireLowercase = true,
	RequireUppercase = true,
};

2.3 Implement Custom Policy for User Email and Password

In some scenarios you want to apply your own custom policy for validating emails or passwords. This can be done easily by creating your own validation classes and hooking them into the “UserValidator” and “PasswordValidator” properties in class “ApplicationUserManager”.

For example, if we want to allow only the following domains (“outlook.com”, “hotmail.com”, “gmail.com”, “yahoo.com”) when a user self-registers, we need to create a class and derive it from the “UserValidator<ApplicationUser>” class. To do so, add a new folder named “Validators”, then add a new class named “MyCustomUserValidator” and paste the code below:

public class MyCustomUserValidator : UserValidator<ApplicationUser>
    {

        List<string> _allowedEmailDomains = new List<string> { "outlook.com", "hotmail.com", "gmail.com", "yahoo.com" };

        public MyCustomUserValidator(ApplicationUserManager appUserManager)
            : base(appUserManager)
        {
        }

        public override async Task<IdentityResult> ValidateAsync(ApplicationUser user)
        {
            IdentityResult result = await base.ValidateAsync(user);

            var emailDomain = user.Email.Split('@')[1];

            if (!_allowedEmailDomains.Contains(emailDomain.ToLower()))
            {
                var errors = result.Errors.ToList();

                errors.Add(String.Format("Email domain '{0}' is not allowed", emailDomain));

                result = new IdentityResult(errors);
            }

            return result;
        }
    }

What we have implemented above is that the default validation takes place first, then the custom validation in method “ValidateAsync” is applied; if there are validation errors they are added to the existing “Errors” list and returned in the response.

In order to fire this custom validation, we need to open class “ApplicationUserManager” again and hook this custom class into the property “UserValidator” as the code below:

//Rest of code is removed for brevity
//Configure validation logic for usernames
appUserManager.UserValidator = new MyCustomUserValidator(appUserManager)
{
	AllowOnlyAlphanumericUserNames = true,
	RequireUniqueEmail = true
};

Note: The tutorial code is not using the custom “MyCustomUserValidator” class, it exists in the source code for your reference.

The same applies for adding a custom password policy: all you need to do is create a class named “MyCustomPasswordValidator”, derive it from the “PasswordValidator” class, and override the “ValidateAsync” method as below. Add a new file named “MyCustomPasswordValidator” in folder “Validators” and use the code below:

public class MyCustomPasswordValidator : PasswordValidator
    {
        public override async Task<IdentityResult> ValidateAsync(string password)
        {
            IdentityResult result = await base.ValidateAsync(password);

            if (password.Contains("abcdef") || password.Contains("123456"))
            {
                var errors = result.Errors.ToList();
                errors.Add("Password can not contain sequence of chars");
                result = new IdentityResult(errors);
            }
            return result;
        }
    }

In this implementation we added a basic rule which checks if the password contains a sequence of characters, and rejects such passwords by adding the validation result to the Errors list; it is exactly the same pattern as the custom user policy.

Now to attach this class as the default password validator, all you need to do is to open class “ApplicationUserManager” and use the code below:

//Rest of code is removed for brevity
// Configure validation logic for passwords
appUserManager.PasswordValidator = new MyCustomPasswordValidator
{
	RequiredLength = 6,
	RequireNonLetterOrDigit = true,
	RequireDigit = false,
	RequireLowercase = true,
	RequireUppercase = true,
};

All the other validation rules will still take place (e.g. checking minimum password length, checking for special characters), and then the implementation in our “MyCustomPasswordValidator” will be applied.

3. Enable Changing Password and Deleting Account

Now we need to add two more endpoints: one which allows the user to change the password, and one which allows a user in the “Admin” role to delete other users’ accounts. Those end points should be accessible only if the user is authenticated; we need to know the identity of the user performing the action and the role(s) the user belongs to. Until now all our endpoints have been called anonymously, so let’s add those endpoints now and cover the authentication and authorization part next.

3.1 Add Change Password Endpoint

This is easy to implement; all you need to do is open controller “AccountsController” and paste the code below:

[Route("ChangePassword")]
        public async Task<IHttpActionResult> ChangePassword(ChangePasswordBindingModel model)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            IdentityResult result = await this.AppUserManager.ChangePasswordAsync(User.Identity.GetUserId(), model.OldPassword, model.NewPassword);

            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            return Ok();
        }

Notice how we are calling the method “ChangePasswordAsync”, passing the authenticated user Id, the old password, and the new password. If you try to call this endpoint now, the extension method “GetUserId” will not work, because you are calling it as an anonymous user and the system doesn’t know your identity; so hold off on testing until we implement the authentication part.

The method “ChangePasswordAsync” will take care of validating your current password, as well as validating the new password against the policy, and then updating the old password with the new one.

Do not forget to add the “ChangePasswordBindingModel” to the file “AccountBindingModels” as the code below:

public class ChangePasswordBindingModel
    {
        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Current password")]
        public string OldPassword { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "New password")]
        public string NewPassword { get; set; }

        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Confirm new password")]
        [Compare("NewPassword", ErrorMessage = "The new password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    
    }

3.2 Delete User Account

We want to add a feature which allows a user in the “Admin” role to delete a user account. We haven’t introduced roles management or authorization yet, so we’ll add this end point now and make a slight modification to it later; for now any anonymous user can invoke it and delete any user by passing the user Id.

To implement this we need to add a new method named “DeleteUser” to the “AccountsController” as the code below:

[Route("user/{id:guid}")]
        public async Task<IHttpActionResult> DeleteUser(string id)
        {

            //Only SuperAdmin or Admin can delete users (Later when implement roles)

            var appUser = await this.AppUserManager.FindByIdAsync(id);

            if (appUser != null)
            {
                IdentityResult result = await this.AppUserManager.DeleteAsync(appUser);

                if (!result.Succeeded)
                {
                    return GetErrorResult(result);
                }

                return Ok();

            }

            return NotFound();
          
        }

This method checks for the existence of the user Id and, based on that, deletes the user. To test this method we need to issue an HTTP DELETE request to the end point “api/accounts/user/{id}”.
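
A quick sketch of exercising this endpoint from code is below; the GUID and base address are placeholders, and the snippet assumes an async method and the System.Net.Http namespace:

using (var client = new HttpClient())
{
	var response = await client.DeleteAsync(
		"http://localhost:59822/api/accounts/user/29e21f3d-08e0-49b5-b523-3d68cf623fd5");

	// 200 OK when the user existed and was deleted, 404 Not Found otherwise
}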

The source code for this tutorial is available on GitHub.

In the next post we’ll see how we’ll implement Json Web Token (JWTs) Authentication and manage access for all the methods we added until now.

Follow me on Twitter @tjoudeh


The post ASP.NET Identity 2.1 Accounts Confirmation, and Password Policy Configuration – Part 2 appeared first on Bit of Technology.


Dominick Baier: IdentityServer3 1.0.0

Today is a big day for us! Brock and I started working on the next generation of IdentityServer over 14 months ago. In fact – I remember exactly how I created the very first file (constants.cs) somewhere in the Swiss Alps and was hunting for internet connection to do a check-in (much to the dislike of my family).

1690 commits later it is time to recap what we did, why we did it – and where we are now.

Having spent a considerable amount of time in the WS*/SAML world, it became more and more apparent that these technologies are not a good match for the modern types of applications that we (and our customers) like to build. These types of applications are pretty much a combination of web and native UIs combined with web APIs. Security protocols need to be API, HTTP and mobile friendly, and we need authentication, API access and identity delegation as first class citizens.

We had two options – either try to retrofit the new protocols into the old WS* architecture (like so many commercial products do) or start from scratch. Since we also had a number of other high priority design goals for the new version we decided to start from scratch.

Some of the highlights of IdentityServer3 (at least in our opinion) are:

Support for the modern security stack
OpenID Connect and OAuth2 that is. These two protocols in combination are the perfect match to build the modern applications we had in mind. OAuth2 is used to manage access (and access control) from clients to APIs for both trusted subsystem and identity delegation systems. OpenID Connect is the extension to OAuth2 for implementing rich authentication and single sign-on scenarios for any application type.

Hosting
We wanted to be much more flexible in our hosting scenarios – IIS vs self-hosting, Windows vs Linux, ASP.NET vCurrent vs vNext, Embedded into the application vs separate standalone vs separate web farm vs cloud – you name it. Regardless which hosting environment you choose – IdentityServer is always the same.

Flexibility and Extensibility
IdentityServer2 always had a dependency on a database. The past years taught us that there are many situations where this is not appropriate. In the new version everything is code first and abstracted behind interfaces. Everything can be done in memory and no persistence store is required. We have an optional extension that uses Entity Framework for persistence – but this is up to you.

Another issue we had in the past was that there were too many situations where one had to change the core source code to implement some custom workflow. In IdentityServer3 we think we did a good job in anticipating the typical (and not so typical) modifications and baked them right into the core runtime as extensibility points. So far this has worked out really well.

Framework vs Server
As mentioned above – IdentityServer3 is all about customization and extensibility. The developer is in the centre and we give him lots of freedom in changing almost any aspect of the workflow. This is the big difference to many commercial off the shelf products.

Right from the start we used the term “STS Framework” rather than a “Server” and up to today we don’t even have an admin UI for managing the server configuration. We (and most people we spoke to) were absolutely fine doing all of that in code and in their custom configuration system. That said – we have an admin service and UI in the works that will be released soon – but again this is totally optional.

Brock and I just recently spoke to Carl and Richard about these design goals on .NET Rocks.

Where to go?
To accommodate the new versioning scheme (we switched to semver) and the componentized architecture we changed both the GitHub organization and repo names as well as the Nuget package names. The new organization can be found here and the main repo is here along with instructions on how to contribute and an issue tracker for filing bugs or giving feedback.

The new docs site gives quite a bit of background and can be found here – or you can jump directly to our samples.

If you need consulting about modern (or not so modern) security architectures in general and IdentityServer in particular – you can contact us via email at identity@leastprivilege.com or via twitter: @leastprivilege & @brocklallen.

What’s next?
We have a couple of “side projects” that complement the core IdentityServer3 – there’s IdentityManager, which we neglected a bit for the last months, and there’s the admin service and UI (good people are working on that right now)…And there are of course new features to implement for IdentityServer – check this label and take part in the discussion.

Last but not least
The last 14 months were astounding – we got more feedback, questions, bug reports, PRs and help on IdentityServer3 than all other OSS projects we did before combined. You guys were fantastic! Thanks for your help – we hope you enjoy the result (..and keep it coming)!

Thanks!
Dominick & Brock


Filed under: ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Radenko Zec: ASP.NET Identity 2.1 implementation for MySQL

In this blog post I will try to cover how to use a custom ASP.NET identity provider for MySQL I have created.

The default ASP.NET Identity provider uses Entity Framework and SQL Server to store information about users.

If you are trying to implement ASP.NET Identity 2.1 for MySQL database, then follow this guide.

This implementation uses the Oracle fully-managed ADO.NET driver for MySQL.

This means that you have a connection string in your web.config similar to this:

<add name="DefaultConnection" connectionString="Server=localhost;
Database=aspnetidentity;Uid=radenko;Pwd=somepass;" providerName="MySql.Data.MySqlClient" />

 

This implementation of ASP.NET Identity 2.1 for MySQL has all the major interfaces implemented in a custom UserStore class:

[Image: ASP.NET Identity UserStore interfaces]

Source code of my implementation is available at GitHub – MySqlIdentity

First, you will need to execute a create script on your MySQL database which will create the tables required by the ASP.NET Identity provider.

[Image: MySQL ASP.NET Identity database tables]

  • Create a new ASP.NET MVC 5 project, choosing the Individual User Accounts authentication type.
  • Uninstall all EntityFramework NuGet packages starting with Microsoft.AspNet.Identity.EntityFramework
  • Install NuGet Package called MySql.AspNet.Identity
  • In ~/Models/IdentityModels.cs:
    • Remove the namespaces:
      • Microsoft.AspNet.Identity.EntityFramework
      • System.Data.Entity
    • Add the namespace: MySql.AspNet.Identity.
      Class ApplicationUser will inherit from the IdentityUser class in the MySql.AspNet.Identity namespace
    • Remove the entire ApplicationDbContext class. This class is not needed anymore.
  • In ~/App_Start/Startup.Auth.cs
    • Delete this line of code
app.CreatePerOwinContext(ApplicationDbContext.Create);
  • In ~/App_Start/IdentityConfig.cs
    Remove the namespaces:

    • Microsoft.AspNet.Identity.EntityFramework
    • System.Data.Entity
  • In the Create method inside the ApplicationUserManager class, replace the UserStore instantiation with one that uses MySqlUserStore:
 public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context) 
 {
     //var manager = new ApplicationUserManager(new UserStore<ApplicationUser>(context.Get<ApplicationDbContext>()));
     var manager = new ApplicationUserManager(new MySqlUserStore<ApplicationUser>());
     // ... the rest of the Create method stays unchanged
     return manager;
 }

MySqlUserStore accepts an optional constructor parameter – the connection string name – so if you are not using DefaultConnection as your connection string you can pass a different one.
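For example, a minimal sketch (the connection string name below is hypothetical):

var store = new MySqlUserStore<ApplicationUser>("MyMySqlConnection"); // hypothetical name from <connectionStrings>
var manager = new ApplicationUserManager(store);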

After this you should be able to build your ASP.NET MVC project and run it successfully, using MySQL as the store for your users, roles, claims and other information.

If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.


The post ASP.NET Identity 2.1 implementation for MySQL appeared first on RadenkoZec blog.


Taiseer Joudeh: ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management) – Part 1

Asp Net Identity

ASP.NET Identity 2.1 is the latest membership and identity management framework provided by Microsoft; this membership system can be plugged into any ASP.NET framework such as Web API, MVC, Web Forms, etc…

In this tutorial we’ll cover how to integrate the ASP.NET Identity system with ASP.NET Web API, so we can build a secure HTTP service which acts as the back-end for an SPA front-end built using AngularJS. I’ll try to cover, in a simple way, different ASP.NET Identity 2.1 features such as: accounts management, roles management, email confirmations, password changes, role-based authorization, claims-based authorization, brute force protection, etc…

The AngularJS front-end application will use bearer token based authentication in the JSON Web Token (JWT) format, and should support role-based authorization and contain the basic features of any membership system. The SPA is not ready yet, but hopefully it will sit on top of our HTTP service without the need to come back and modify the ASP.NET Web API logic.

I will follow a step-by-step approach and start from scratch without using any VS 2013 templates, so we’ll have a better understanding of how the ASP.NET Identity 2.1 framework works with the ASP.NET Web API framework.

The source code for this tutorial is available on GitHub.

I broke this series down into multiple posts which I’ll publish gradually; the posts are:

Configure ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management)

Setting up ASP.NET Identity 2.1

Step 1: Create the Web API Project

In this tutorial I’m using Visual Studio 2013 and .NET framework 4.5. Create an empty solution and name it “AspNetIdentity”, then add a new ASP.NET Web application named “AspNetIdentity.WebApi”; we will select an empty template with no core dependencies at all, as in the image below:

WebApiNewProject

Step 2: Install the needed NuGet Packages:

We’ll install the NuGet packages below to set up our OWIN server and configure ASP.NET Web API to be hosted within it, as well as the packages needed for ASP.NET Identity 2.1. If you would like to know more about the use of each package and what an OWIN server is, please check this post.

Install-Package Microsoft.AspNet.Identity.Owin -Version 2.1.0
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.1.0
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.0
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.2
Install-Package Microsoft.Owin.Security.OAuth -Version 3.0.0
Install-Package Microsoft.Owin.Cors -Version 3.0.0

Step 3: Add the Application User Class and Application Database Context:

Now we want to define our first custom Entity Framework class, the “ApplicationUser” class. This class represents a user who wants to register in our membership system; we also want to extend the default class in order to add application-specific data properties for the user, such as: FirstName, LastName, Level, JoinDate. Those properties will be converted to columns in the “AspNetUsers” table, as we’ll see in the next steps.

So to do this we need to create a new class named “ApplicationUser” deriving from the “Microsoft.AspNet.Identity.EntityFramework.IdentityUser” class.

Note: If you do not want to add any extra properties to this class, then there is no need to extend the default implementation and derive from “IdentityUser” class.

To do so, add a new folder named “Infrastructure” to our project, then add a new class named “ApplicationUser” and paste the code below:

using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.AspNet.Identity.EntityFramework;

public class ApplicationUser : IdentityUser
{
    [Required]
    [MaxLength(100)]
    public string FirstName { get; set; }

    [Required]
    [MaxLength(100)]
    public string LastName { get; set; }

    [Required]
    public byte Level { get; set; }

    [Required]
    public DateTime JoinDate { get; set; }
}

Now we need to add the database context class which will be responsible for communicating with our database, so add a new class named “ApplicationDbContext” under the “Infrastructure” folder and paste the code snippet below:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection", throwIfV1Schema: false)
    {
        Configuration.ProxyCreationEnabled = false;
        Configuration.LazyLoadingEnabled = false;
    }

    public static ApplicationDbContext Create()
    {
        return new ApplicationDbContext();
    }
}

As you can see, this class inherits from the “IdentityDbContext” class. You can think of it as a special version of the traditional “DbContext” class; it provides all of the Entity Framework code-first mapping and DbSet properties needed to manage the identity tables in SQL Server. The default constructor takes the connection string name “DefaultConnection” as an argument; this connection string is used to point to the right server and database to connect to.

The static method “Create” will be called from our Owin Startup class, more about this later.

Lastly, we need to add a connection string which points to the database that will be created using the code-first approach, so open the “Web.config” file and paste the connection string below:

<connectionStrings>
    <add name="DefaultConnection" connectionString="Data Source=.\sqlexpress;Initial Catalog=AspNetIdentity;Integrated Security=SSPI;" providerName="System.Data.SqlClient" />
  </connectionStrings>

Step 4: Create the Database and Enable DB migrations:

Now we want to enable the EF code-first migrations feature, which configures code first to update the database schema instead of dropping and re-creating the database with each change to the EF entities. To do so, open the NuGet Package Manager Console and type the following commands:

enable-migrations
add-migration InitialCreate

The “enable-migrations” command creates a “Migrations” folder in the “AspNetIdentity.WebApi” project along with a file named “Configuration”. This file contains a method named “Seed”, which allows us to insert or update test/initial data after code first creates or updates the database. The method is called when the database is created for the first time and every time the database schema is updated after a data model change.

Migrations

The “add-migration InitialCreate” command generates the code that creates the database from scratch. This code is also in the “Migrations” folder, in the file named “<timestamp>_InitialCreate.cs“. The “Up” method of the “InitialCreate” class creates the database tables that correspond to the data model entity sets, and the “Down” method deletes them. So in our case, if you open the class “201501171041277_InitialCreate”, you will see the extended data properties we added in the “ApplicationUser” class reflected in the “Up” method.
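For illustration, the generated “Up” method should look roughly like the sketch below; the actual file is produced by “add-migration” and will also contain all of the standard Identity columns and related tables:

public override void Up()
{
    CreateTable(
        "dbo.AspNetUsers",
        c => new
        {
            Id = c.String(nullable: false, maxLength: 128),
            FirstName = c.String(nullable: false, maxLength: 100),
            LastName = c.String(nullable: false, maxLength: 100),
            Level = c.Byte(nullable: false),
            JoinDate = c.DateTime(nullable: false),
            // ... the default Identity columns (UserName, PasswordHash, SecurityStamp, etc.)
        })
        .PrimaryKey(t => t.Id);

    // ... plus the AspNetRoles, AspNetUserRoles, AspNetUserClaims and AspNetUserLogins tables
}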

Now back to the “Seed” method in the “Configuration” class; open the class and replace the Seed method with the code below:

protected override void Seed(AspNetIdentity.WebApi.Infrastructure.ApplicationDbContext context)
{
    //  This method will be called after migrating to the latest version.

    var manager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()));

    var user = new ApplicationUser()
    {
        UserName = "SuperPowerUser",
        Email = "taiseer.joudeh@mymail.com",
        EmailConfirmed = true,
        FirstName = "Taiseer",
        LastName = "Joudeh",
        Level = 1,
        JoinDate = DateTime.Now.AddYears(-3)
    };

    manager.Create(user, "MySuperP@ssword!");
}

This code creates a default user when the Seed method runs after the database is created.

Now we are ready to trigger the event which will create the database on our SQL Server instance based on the connection string we specified earlier, so open the NuGet Package Manager Console and type the command:

update-database

The “update-database” command runs the “Up” method of the migration class and creates the database, and then it runs the “Seed” method in the “Configuration” file to populate the database and insert a user.

If all is fine, navigate to your SQL Server instance; the database, along with the additional fields in the “AspNetUsers” table, should have been created as in the image below:

AspNetIdentityDB

Step 5: Add the User Manager Class:

The user manager class will be responsible for managing instances of the user class. The class derives from “UserManager<T>”, where T represents our “ApplicationUser” class. Once it derives from “UserManager<ApplicationUser>”, a set of methods becomes available which facilitate managing users in our Identity system; some of the methods we’ll use from the “UserManager” during this tutorial are:

Method Name | Usage
FindByIdAsync(Id) | Finds a user object based on its unique identifier
Users | Returns an enumeration of the users
FindByNameAsync(Username) | Finds a user based on its username
CreateAsync(User, Password) | Creates a new user with a password
GenerateEmailConfirmationTokenAsync(Id) | Generates an email confirmation token which is used in email confirmation
SendEmailAsync(Id, Subject, Body) | Sends a confirmation email to the newly registered user
ConfirmEmailAsync(Id, token) | Confirms the user's email based on the received token
ChangePasswordAsync(Id, OldPassword, NewPassword) | Changes the user's password
DeleteAsync(User) | Deletes the user
IsInRole(Username, Rolename) | Checks if a user belongs to a certain role
AddToRoleAsync(Username, RoleName) | Assigns a user to a specific role
RemoveFromRoleAsync(Username, RoleName) | Removes a user from a specific role

Now, to implement the “UserManager” class, add a new file named “ApplicationUserManager” under the “Infrastructure” folder and paste the code below:

public class ApplicationUserManager : UserManager<ApplicationUser>
{
    public ApplicationUserManager(IUserStore<ApplicationUser> store)
        : base(store)
    {
    }

    public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
    {
        var appDbContext = context.Get<ApplicationDbContext>();
        var appUserManager = new ApplicationUserManager(new UserStore<ApplicationUser>(appDbContext));

        return appUserManager;
    }
}

As you can see from the code above, the static method “Create” is responsible for returning an instance of the “ApplicationUserManager” class named “appUserManager”. The constructor of “ApplicationUserManager” expects an instance of “UserStore”, and the “UserStore” constructor in turn expects an instance of our “ApplicationDbContext” defined earlier. Currently we are reading this instance from the OWIN context, but we haven’t added it there yet, so let’s jump to the next step and do that.

Note: In the coming post we’ll apply different changes to the “ApplicationUserManager” class such as configuring email service, setting user and password polices.

Step 6: Add Owin “Startup” Class

Now we’ll add the OWIN “Startup” class, which will be fired once our server starts. The “Configuration” method accepts a parameter of type “IAppBuilder” which will be supplied by the host at run-time. This “app” parameter is an interface which will be used to compose the application for our OWIN server, so add a new file named “Startup” to the root of the project and paste the code below:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        HttpConfiguration httpConfig = new HttpConfiguration();

        ConfigureOAuthTokenGeneration(app);

        ConfigureWebApi(httpConfig);

        app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);

        app.UseWebApi(httpConfig);
    }

    private void ConfigureOAuthTokenGeneration(IAppBuilder app)
    {
        // Configure the db context and user manager to use a single instance per request
        app.CreatePerOwinContext(ApplicationDbContext.Create);
        app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);

        // Plugging in OAuth bearer JSON Web Token generation and consumption will happen here
    }

    private void ConfigureWebApi(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
        jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
    }
}

What’s worth noting here is how we create a fresh instance of the “ApplicationDbContext” and “ApplicationUserManager” for each request and set them in the OWIN context using the extension method “CreatePerOwinContext”. Both objects (ApplicationDbContext and ApplicationUserManager) will be available during the entire lifetime of the request.

Note: I didn’t plug in any kind of authentication here; we’ll visit this class again and add JWT authentication in the next post. For now we’ll be fine accepting requests from anonymous users.

Define Web API Controllers and Methods

Step 7: Create the “Accounts” Controller:

Now we’ll add our first controller, named “AccountsController”, which will be responsible for managing user accounts in our Identity system. To do so, add a new folder named “Controllers”, then add a new class named “AccountsController” and paste the code below:

[RoutePrefix("api/accounts")]
public class AccountsController : BaseApiController
{

	[Route("users")]
	public IHttpActionResult GetUsers()
	{
		return Ok(this.AppUserManager.Users.ToList().Select(u => this.TheModelFactory.Create(u)));
	}

	[Route("user/{id:guid}", Name = "GetUserById")]
	public async Task<IHttpActionResult> GetUser(string Id)
	{
		var user = await this.AppUserManager.FindByIdAsync(Id);

		if (user != null)
		{
			return Ok(this.TheModelFactory.Create(user));
		}

		return NotFound();

	}

	[Route("user/{username}")]
	public async Task<IHttpActionResult> GetUserByName(string username)
	{
		var user = await this.AppUserManager.FindByNameAsync(username);

		if (user != null)
		{
			return Ok(this.TheModelFactory.Create(user));
		}

		return NotFound();

	}
}

What we have implemented above is the following:

  • Our “AccountsController” inherits from a base controller named “BaseApiController”. This base controller is not created yet, but it contains members that will be reused among the different controllers we’ll add during this tutorial; the members coming from “BaseApiController” are: “AppUserManager”, “TheModelFactory”, and “GetErrorResult”. We’ll see the implementation of this class in the next step.
  • We have added 3 methods/actions so far to the “AccountsController”:
    • Method “GetUsers” returns all the registered users in our system by calling the “Users” enumeration on the “ApplicationUserManager” class.
    • Method “GetUser” returns a single user given its unique identifier, by calling the “FindByIdAsync” method on the “ApplicationUserManager” class.
    • Method “GetUserByName” returns a single user given its username, by calling the “FindByNameAsync” method on the “ApplicationUserManager” class.
    • All three methods pass the user object to the “ModelFactory” class; we’ll see in the next step the benefit of using this pattern to shape the returned object graph and how it protects us from leaking sensitive information about the user identity.
  • Note: all methods can currently be accessed by any anonymous user; for now we are fine with this, but we’ll manage the access control for each method and define which identities are authorized to perform those actions in the coming posts.

Step 8: Create the “BaseApiController” Controller:

As stated before, “BaseApiController” will act as a base class which other Web API controllers inherit from. For now it contains three basic members, so add a new class named “BaseApiController” under the “Controllers” folder and paste the code below:

public class BaseApiController : ApiController
{
    private ModelFactory _modelFactory;
    private ApplicationUserManager _AppUserManager = null;

    protected ApplicationUserManager AppUserManager
    {
        get
        {
            return _AppUserManager ?? Request.GetOwinContext().GetUserManager<ApplicationUserManager>();
        }
    }

    public BaseApiController()
    {
    }

    protected ModelFactory TheModelFactory
    {
        get
        {
            if (_modelFactory == null)
            {
                _modelFactory = new ModelFactory(this.Request, this.AppUserManager);
            }
            return _modelFactory;
        }
    }

    protected IHttpActionResult GetErrorResult(IdentityResult result)
    {
        if (result == null)
        {
            return InternalServerError();
        }

        if (!result.Succeeded)
        {
            if (result.Errors != null)
            {
                foreach (string error in result.Errors)
                {
                    ModelState.AddModelError("", error);
                }
            }

            if (ModelState.IsValid)
            {
                // No ModelState errors are available to send, so just return an empty BadRequest.
                return BadRequest();
            }

            return BadRequest(ModelState);
        }

        return null;
    }
}

What we have implemented above is the following:

  • We have added a read-only property named “AppUserManager” which gets the instance of the “ApplicationUserManager” we already set in the “Startup” class; this instance will be initialized and ready to be invoked.
  • We have added another read-only property named “TheModelFactory” which returns an instance of the “ModelFactory” class. This factory pattern will help us shape and control the response returned to the client, so we will create a simplified model for some of the domain object models (Users, Roles, Claims, etc.) we have in the database. Shaping the response and building a customized object graph is very important here, because we do not want to leak sensitive data such as the “PasswordHash” to the client.
  • We have added a function named “GetErrorResult” which takes an “IdentityResult” as a parameter and formats the error messages returned to the client.

Step 9: Create the “ModelFactory” Class:

Now add a new folder named “Models”, and inside it create a new class named “ModelFactory”; this class will contain all the functions needed to shape the response object and control the object graph returned to the client. Open the file and paste the code below:

public class ModelFactory
{
    private UrlHelper _UrlHelper;
    private ApplicationUserManager _AppUserManager;

    public ModelFactory(HttpRequestMessage request, ApplicationUserManager appUserManager)
    {
        _UrlHelper = new UrlHelper(request);
        _AppUserManager = appUserManager;
    }

    public UserReturnModel Create(ApplicationUser appUser)
    {
        return new UserReturnModel
        {
            Url = _UrlHelper.Link("GetUserById", new { id = appUser.Id }),
            Id = appUser.Id,
            UserName = appUser.UserName,
            FullName = string.Format("{0} {1}", appUser.FirstName, appUser.LastName),
            Email = appUser.Email,
            EmailConfirmed = appUser.EmailConfirmed,
            Level = appUser.Level,
            JoinDate = appUser.JoinDate,
            Roles = _AppUserManager.GetRolesAsync(appUser.Id).Result,
            Claims = _AppUserManager.GetClaimsAsync(appUser.Id).Result
        };
    }
}

public class UserReturnModel
{
    public string Url { get; set; }
    public string Id { get; set; }
    public string UserName { get; set; }
    public string FullName { get; set; }
    public string Email { get; set; }
    public bool EmailConfirmed { get; set; }
    public int Level { get; set; }
    public DateTime JoinDate { get; set; }
    public IList<string> Roles { get; set; }
    public IList<System.Security.Claims.Claim> Claims { get; set; }
}

Notice how we included only the properties needed in the returned user object graph; for example, there is no need to return the “PasswordHash” property, so we didn’t include it.

Step 10: Add a Method to Create Users in “AccountsController”:

It is time to add the method which allows us to register/create users in our Identity system, but before adding it we need to add the request model object which contains the user data that will be sent from the client. Add a new file named “AccountBindingModels” under the “Models” folder and paste the code below:

public class CreateUserBindingModel
{
    [Required]
    [EmailAddress]
    [Display(Name = "Email")]
    public string Email { get; set; }

    [Required]
    [Display(Name = "Username")]
    public string Username { get; set; }

    [Required]
    [Display(Name = "First Name")]
    public string FirstName { get; set; }

    [Required]
    [Display(Name = "Last Name")]
    public string LastName { get; set; }

    [Display(Name = "Role Name")]
    public string RoleName { get; set; }

    [Required]
    [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
    [DataType(DataType.Password)]
    [Display(Name = "Password")]
    public string Password { get; set; }

    [Required]
    [DataType(DataType.Password)]
    [Display(Name = "Confirm password")]
    [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
    public string ConfirmPassword { get; set; }
}

The class is very simple; it contains properties for the fields we want to send from the client to our API, with some data annotation attributes which help us validate the model before submitting it to the database. Notice how we added a property named “RoleName” which will not be used now, but will be useful in the coming posts.

Now it is time to add the method which registers/creates a user. Open the “AccountsController” controller, add a new method named “CreateUser” and paste the code below:

[Route("create")]
public async Task<IHttpActionResult> CreateUser(CreateUserBindingModel createUserModel)
{
	if (!ModelState.IsValid)
	{
		return BadRequest(ModelState);
	}

	var user = new ApplicationUser()
	{
		UserName = createUserModel.Username,
		Email = createUserModel.Email,
		FirstName = createUserModel.FirstName,
		LastName = createUserModel.LastName,
		Level = 3,
		JoinDate = DateTime.Now.Date,
	};

	IdentityResult addUserResult = await this.AppUserManager.CreateAsync(user, createUserModel.Password);

	if (!addUserResult.Succeeded)
	{
		return GetErrorResult(addUserResult);
	}

	Uri locationHeader = new Uri(Url.Link("GetUserById", new { id = user.Id }));

	return Created(locationHeader, TheModelFactory.Create(user));
}

What we have implemented here is the following:

  • We validated the request model based on the data annotations we introduced in the “AccountBindingModels” class; if a field is missing, the response will return HTTP 400 with a proper error message.
  • If the model is valid, we use it to create a new instance of the “ApplicationUser” class; by default we put all new users at level 3.
  • Then we call the “CreateAsync” method on the “AppUserManager”, which will do the heavy lifting for us; inside this method it validates whether the username or email has been used before and whether the password matches our policy, etc. If the request is valid, it creates a new user, adds it to the “AspNetUsers” table and returns a success result. From this result, and as good practice, we return the created resource in the location header along with a 201 Created status.

Notes:

  • Sending a confirmation email for the user, and configuring user and password policy will be covered in the next post.
  • As stated earlier, there is no authentication or authorization applied yet, any anonymous user can invoke any available method, but we will cover this authentication and authorization part in the coming posts.

Step 11: Test the Methods in “AccountsController”:

Lastly, it is time to test the methods added to the API, so fire up your favorite REST client, Fiddler or PostMan (in my case I prefer PostMan). Let’s start by testing the “CreateUser” method: issue an HTTP POST to the URI “http://localhost:59822/api/accounts/create” as in the request below; if creating the user goes well you will receive a 201 response:

Create User
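
For reference, the raw request body posted to the create endpoint looks roughly like this (all values are illustrative):

{
  "email": "john.doe@example.com",
  "username": "john.doe",
  "firstName": "John",
  "lastName": "Doe",
  "roleName": "",
  "password": "MyP@ssword!",
  "confirmPassword": "MyP@ssword!"
}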

Now, to test the “GetUsers” method, all you need to do is issue an HTTP GET to the URI “http://localhost:59822/api/accounts/users”, and the response graph will be as below:

[
  {
    "url": "http://localhost:59822/api/accounts/user/29e21f3d-08e0-49b5-b523-3d68cf623fd5",
    "id": "29e21f3d-08e0-49b5-b523-3d68cf623fd5",
    "userName": "SuperPowerUser",
    "fullName": "Taiseer Joudeh",
    "email": "taiseer.joudeh@gmail.com",
    "emailConfirmed": true,
    "level": 1,
    "joinDate": "2012-01-17T12:41:40.457",
    "roles": [
      "Admin",
      "Users",
      "SuperAdmin"
    ],
    "claims": [
      {
        "issuer": "LOCAL AUTHORITY",
        "originalIssuer": "LOCAL AUTHORITY",
        "properties": {},
        "subject": null,
        "type": "Phone",
        "value": "123456782",
        "valueType": "http://www.w3.org/2001/XMLSchema#string"
      },
      {
        "issuer": "LOCAL AUTHORITY",
        "originalIssuer": "LOCAL AUTHORITY",
        "properties": {},
        "subject": null,
        "type": "Gender",
        "value": "Male",
        "valueType": "http://www.w3.org/2001/XMLSchema#string"
      }
    ]
  },
  {
    "url": "http://localhost:59822/api/accounts/user/f0f8d481-e24c-413a-bf84-a202780f8e50",
    "id": "f0f8d481-e24c-413a-bf84-a202780f8e50",
    "userName": "tayseer.Joudeh",
    "fullName": "Tayseer Joudeh",
    "email": "tayseer_joudeh@hotmail.com",
    "emailConfirmed": true,
    "level": 3,
    "joinDate": "2015-01-17T00:00:00",
    "roles": [],
    "claims": []
  }
]

The source code for this tutorial is available on GitHub.

In the next post we’ll see how to configure our Identity service to start sending email confirmations, customize the username and password policies, implement JSON Web Token (JWT) authentication, and manage access to the methods.

Follow me on Twitter @tjoudeh


The post ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management) – Part 1 appeared first on Bit of Technology.


Filip Woj: Migrating from ASP.NET Web API to MVC 6 – exploring Web API Compatibility Shim

Migrating an MVC 5 project to ASP.NET 5 and MVC 6 is a big challenge given that both of the latter are complete rewrites of their predecessors. As a result, even if on the surface things seem similar (we have controllers, filters, actions etc), as you go deeper under the hood you realize that most, if not all, of your pipeline customizations will be incompatible with the new framework.

This pain is amplified even more if you try to migrate a Web API 2 project to MVC 6, because Web API had a bunch of its own unique concepts and specialized classes, all of which only complicate migration attempts.

The ASP.NET team provides an extra convention set on top of MVC 6, called the “Web API Compatibility Shim”, which can be enabled to make the process of migrating from Web API 2 a bit easier. Let’s explore what’s in there.

Introduction

If you create a new MVC 6 project from the default starter template, it will contain the following code in the Startup class, under ConfigureServices method:

// Uncomment the following line to add Web API services which makes it easier to port Web API 2 controllers.
// You need to add Microsoft.AspNet.Mvc.WebApiCompatShim package to project.json
// services.AddWebApiConventions();

This pretty much explains it all – the Compatibility Shim is included in an external package, Microsoft.AspNet.Mvc.WebApiCompatShim, and is switched off by default for new MVC projects. Once added and enabled, you can also have a look at the UseMvc method, under Configure. This is where central Web API routes can be defined:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller}/{action}/{id?}",
        defaults: new { controller = "Home", action = "Index" });

    // Uncomment the following line to add a route for porting Web API 2 controllers.
    // routes.MapWebApiRoute("DefaultApi", "api/{controller}/{id?}");
});

Interestingly, Web API Compatibility Shim will store all Web API related routes under a dedicated, predefined area, called simply “api”.

This is all that’s needed to get you started.

Inheriting from ApiController

Since the base class for Web API controllers was not Controller but ApiController, the shim introduces a type of the same name into MVC 6.

While it is obviously not 100% identical to the ApiController from Web API, it contains the majority of the public properties and methods you might have gotten used to – the Request property, the User property and a bunch of IHttpActionResult helpers.

You can explore it in detail here on GitHub.
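To give a feel for it, here is a minimal, made-up sketch of a controller using the shim’s ApiController (assuming the WebApiCompatShim package is referenced, AddWebApiConventions is enabled, and that the shim keeps ApiController in the familiar System.Web.Http namespace):

using System.Web.Http; // assumption: the shim re-uses the familiar namespace

public class PingController : ApiController
{
    // Request and the IHttpActionResult helpers come from the shim's ApiController
    public IHttpActionResult Get()
    {
        return Ok(new { message = "pong", uri = Request.RequestUri });
    }
}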

Returning HttpResponseMessage

The shim introduces the ability to work with HttpResponseMessage in MVC 6 projects. How is this achieved? First of all, the Microsoft.AspNet.WebApi.Client package is referenced, and that brings in the familiar types – HttpResponseMessage and HttpRequestMessage.

On top of that, an extra formatter (remember, we discussed MVC 6 formatters here before) is injected into your application – HttpResponseMessageOutputFormatter. This allows you to return HttpResponseMessage from your actions, just like you were used to doing in Web API projects!

How does it work under the hood? Remember, in Web API, returning an instance of HttpResponseMessage bypassed content negotiation and simply forwarded the instance all the way to the hosting layer, which was responsible for converting it to a response relevant for the given host (HttpResponse under System.Web, an OWIN dictionary under OWIN and so on).

In the case of MVC 6, the new formatter will grab your HttpResponseMessage and copy its headers and contents onto the Microsoft.AspNet.Http.HttpResponse which is the new abstraction for HTTP response in ASP.NET 5.

As a result, an action such as the one shown below is possible in MVC 6, and consequently it should be much simpler to migrate your Web API 2 projects.

public HttpResponseMessage Post()
{
    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

Binding HttpRequestMessage

In Web API it was possible to bind HttpRequestMessage in your actions. For example this was easily doable:

[Route("test/{id:int}")]
    public string Get(int id, HttpRequestMessage req)
    {
        return id + " " + req.RequestUri;
    }

    [Route("testA")]
    public async Task<TestItem> Post(HttpRequestMessage req)
    {
        return await req.Content.ReadAsAsync<TestItem>();
    }

The shim introduces an HttpRequestMessageModelBinder which allows the same thing to be done under MVC 6. As a result, if you relied on HttpRequestMessage binding in Web API, your code will migrate to MVC 6 fine.

How does it work? The shim will use an intermediary type, HttpRequestMessageFeature, to create an instance of HttpRequestMessage from the ASP.NET 5 HttpContext.

HttpRequestMessage extensions

Since it was very common in the Web API world to use HttpResponseMessage as an action return type, there was a need for a mechanism that allowed easy creation of its instances. This was typically achieved by using the extension methods on the HttpRequestMessage, as they would perform content negotiation for you.

For example:

public HttpResponseMessage Post(Item item)
{
    return Request.CreateResponse(HttpStatusCode.NoContent, item);
}

The Web API Compatibility Shim introduces these extension methods into MVC 6 as well. Mind you, these extension methods hang off the HttpRequestMessage – so to use them you will have to inherit from ApiController, in order to have the Request property available on the base class in the first place.

There are a number of overloaded versions of the CreateResponse method (just like in Web API), as well as the CreateErrorResponse method. You can explore them in detail here on GitHub.

HttpError

If you use (or used) the CreateErrorResponse method mentioned above, you will end up relying on the HttpError class, which is another ghost of the Web API past rejuvenated by the compatibility shim.

HttpError was traditionally used by Web API to serve up error information to the client in a (kind of) standardized way. It contained properties such as ModelState, MessageDetail or StackTrace.

It was used not just by the CreateErrorResponse extension method but also by a bunch of IHttpActionResults – InvalidModelStateResult, ExceptionResult and BadRequestErrorMessageResult. As a result, HttpError is back to facilitate all of these types.
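For instance, here is a hedged sketch of the classic Web API pattern this makes portable again (the Item type and the action itself are made up for illustration):

public HttpResponseMessage Post(Item item)
{
    if (!ModelState.IsValid)
    {
        // Serializes an HttpError carrying the ModelState errors back to the client
        return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);
    }

    return Request.CreateResponse(HttpStatusCode.Created, item);
}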

More Web API-style parameter binding

A while ago, Badrinarayanan had a nice post explaining the differences between parameter binding in MVC 6 and Web API 2. In short, the approach in MVC 6 is (and that’s understandable) much more like MVC 5, and those of us used to the Web API style of binding could easily end up with lots of problems (mainly hidden problems that wouldn’t show up at compile time, which is even worse). Even the official tutorials from Microsoft show that, for example, binding from the body of the request, which was one of the cornerstones of Web API parameter binding, now needs to be explicit (see the CreateTodoItem action in that example) – through the use of a dedicated attribute.

Web API Compatibility Shim attempts to close this wide gap between MVC 6 and Web API 2 style of parameter binding.

The class called WebApiParameterConventionsApplicationModelConvention introduces the model of binding familiar to the Web API developers:

  • simple types are bound from the URI
  • complex types are bound from the body

Additionally, action overloading worked differently between MVC and Web API (for example, Web API parameters are all required by default, whereas MVC ones aren’t). This action resolution mechanism is also taken care of by the compatibility shim.

Finally, FromUriAttribute does not exist in MVC 6 (due to the nature of parameter binding in MVC). This type is, however, reintroduced by the compatibility shim.
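As a small, made-up sketch (the controller and parameters are illustrative), Web API-style binding should then port over as-is:

public class ItemsController : ApiController
{
    // Simple types bind from the URI again, and [FromUri] is available once more
    [HttpGet]
    public IHttpActionResult List([FromUri] int page = 1, [FromUri] int pageSize = 10)
    {
        return Ok(new { page, pageSize });
    }
}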

Support for HttpResponseException

In Web API, it was common to throw HttpResponseException to control the flow of your action and short circuit a relevant HTTP response to the client. For example:

public Item Get(int id)
{
    var item = _repo.FindById(id);
    if (item == null) throw new HttpResponseException(HttpStatusCode.NotFound);
    return item;
}

HttpResponseException is re-introduced into MVC 6 by the compatibility shim, and allows you to use the code shown above in the new world.

How does it work? Well, the shim introduces an extra filter, HttpResponseExceptionActionFilter, that catches all exceptions from your actions. If the exception happens to be of type HttpResponseException, it is marked as handled and the Result on the ActionExecutedContext is set to an ObjectResult based on the contents of that exception. If the exception is of a different type, the filter just lets it pass through to be handled elsewhere.

Support for DefaultContentNegotiator

Web API content negotiation has been discussed quite widely on this blog before. The Compatibility Shim introduces the concept of the IContentNegotiator into MVC 6 to facilitate its own extension methods, such as the already discussed CreateResponse or CreateErrorResponse.

Additionally, the IContentNegotiator is registered as a service so you can obtain an instance of it using a simple call off the new ASP.NET 5 HttpContext:

var contentNegotiator = context.RequestServices.GetRequiredService<IContentNegotiator>();

This makes it relatively easy to port pieces of code where you’d deal with the negotiator manually (such as the example below), as you’d only need to change the way the negotiator instance is obtained.

[Route("items/{id:int}")]
    public HttpResponseMessage Get(int id)
    {
        var item = new Item
        {
            Id = id,
            Name = "I'm manually content negotiatied!"
        };
        var negotiator = Configuration.Services.GetContentNegotiator();
        var result = negotiator.Negotiate(typeof(Item), Request, Configuration.Formatters);

        var bestMatchFormatter = result.Formatter;
        var mediaType = result.MediaType.MediaType;

        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new ObjectContent<Item>(item, bestMatchFormatter, mediaType)
        };
    }
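
A hedged sketch of what the ported action might look like under the shim – assuming the shim’s ApiController exposes the current HttpContext (shown here as Context) and that you construct the formatter collection yourself, since there is no global HttpConfiguration anymore:

[Route("items/{id:int}")]
public HttpResponseMessage Get(int id)
{
    var item = new Item
    {
        Id = id,
        Name = "I'm manually content negotiated!"
    };

    // Resolve the shim-registered negotiator from the request's service provider
    var negotiator = Context.RequestServices.GetRequiredService<IContentNegotiator>();
    var formatters = new MediaTypeFormatterCollection(); // assumption: the default Web API formatters

    var result = negotiator.Negotiate(typeof(Item), Request, formatters);

    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new ObjectContent<Item>(item, result.Formatter, result.MediaType.MediaType)
    };
}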

Extra action results

The Web API Compatibility Shim also introduces a set of action results that you might be used to from Web API, as IHttpActionResults. These are currently implemented as ObjectResults.

Here is the list:

  • BadRequestErrorMessageResult
  • ConflictResult
  • ExceptionResult
  • InternalServerErrorResult
  • InvalidModelStateResult
  • NegotiatedContentResult
  • OkNegotiatedContentResult
  • OkResult
  • ResponseMessageResult

Regardless of how you used them in your Web API project – newed up directly in the action, or via a call to a method on the base ApiController – your code should now be much more straightforward to port to MVC 6 (or at least to get to a stage where it actually compiles under MVC 6).

Summary

While the Web API Compatibility Shim will not solve all of the problems you might encounter when trying to port a Web API 2 project to MVC 6, it will at least make your life much easier. The majority of the high-level concepts – the code used in your actions and controllers – should migrate almost 1:1, and the only things you will be left to tackle are the deep pipeline customizations: things such as custom routing conventions, customizations to action or controller selectors, customizations to tracers or error handlers and so on.

Either way, enabling the shim is a must to even get started thinking about migrating a Web API project to MVC 6.

